Categories
AI Today in 5

AI Today in 5: March 20, 2026, The AI Changing Compliance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Has AI changed the rules of compliance? (Forbes)
  2. How AI and deep fakes are reshaping identity fraud. (FinTechGlobal)
  3. How AI is changing product compliance. (SupplySide SJ)
  4. World Bank to focus on AI-resilient job creation. (Bloomberg)
  5. How AI is changing fintech. (Intuit)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: March 20, 2026, The Flight Corridor Risk Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Why did the lead investigator not testify in the FirstEnergy trial? (Cleveland.com)
  • The Nigerian ABC commission pays money back to NNPC. (Business Insider Africa)
  • Flight corridors and risk management. (NYT)
  • COI, corruption, and more in the Paramount deal. (WSJ)
Categories
Blog

AI Governance and Fiduciary Duty: Board Oversight of AI As Core Governance

There was a time when boards could treat AI as a management-side innovation issue, something for the technology team, the innovation committee, or perhaps an occasional strategy offsite. That time is ending. For every compliance professional, AI stops being a technology story and becomes a governance story. And once it becomes a governance story, boards need to pay attention through the lens they know best: fiduciary duty.

The issue is not whether every director needs to become an engineer. They do not. The issue is whether the board is exercising appropriate oversight over a capability that can materially affect legal exposure, operational resilience, internal controls, reputation, and enterprise value. Under that lens, ignoring AI oversight begins to look less like prudence and more like a governance gap.

The Board Question Is No Longer “Do We Use AI?”

Too many board discussions still start in the wrong place. A director asks, “Are we using AI?” Management says yes, in a handful of pilots. Another director asks whether there is a policy. Legal says yes, one is being drafted. Everyone nods, reassured that the matter is under control. That is not oversight. That is atmospherics.

The real board questions are different. Where is AI being used? What decisions does it influence? What data does it rely on? Who owns it? How is risk assessed? What controls are in place? What gets reported upward when something changes or goes wrong?

COSO’s GenAI guidance is quite direct on this point. It states that the board of directors must have visibility into GenAI use and associated risks, including regular reporting on adoption, key risk indicators, incidents, and material changes to high-impact use cases. It also says oversight bodies should have the capacity to challenge assumptions, request independent validation, and direct corrective action.

Fiduciary Duty Means Oversight, Not Technical Mastery

The fiduciary-duty standard is more practical, and more familiar, than any demand for technical mastery. Directors are expected to exercise informed oversight over material risk. If AI is shaping material processes, material decisions, or material exposures, then the board should ask how management governs it and what evidence backs management’s confidence.

This is where compliance can be a true translator. We understand how to connect abstract governance expectations to operational proof. We know the difference between having a policy and having a control. We know that a dashboard without escalation is theater. We know that a pilot without documentation is an anecdote. And we know that “the business owns it” is not enough unless ownership is defined, trained, monitored, and accountable.

COSO again gives a helpful framework. It emphasizes clear ownership of each GenAI tool, platform, or capability, with defined authority, escalation paths, and documented scope of use. It further stresses that assigning ownership without the capability to deliver invites failure, and that accountability should be tied not only to adoption but also to accuracy, safety, compliance, and adherence to controls. Boards do not need to run AI. But they do need assurance that someone competent owns it and that the ownership model is real.

Why AI Oversight Is Different from Ordinary IT Oversight

Some directors may be tempted to ask whether this is simply another version of cybersecurity oversight or digital-transformation oversight. There is overlap, certainly, but AI presents a different governance profile. COSO notes several characteristics that distinguish GenAI. It is dynamic: models, prompts, and retrieval data can change frequently, requiring continuous risk assessment, change control, and monitoring. It is easily scalable, meaning it can amplify errors and bias as readily as it can amplify efficiency. It has a low barrier to entry, which increases the risk of shadow AI and ungoverned adoption. And, critically, it can be confidently wrong.

That last point is especially important for boards. A broken machine usually signals that it is broken. AI often does the opposite. It produces polished, persuasive, and highly plausible output even when it is materially mistaken. That means traditional management confidence can be a weak proxy for actual reliability. Boards, therefore, need a different kind of assurance model, one that asks not only whether the system is in place, but whether the organization can validate outputs, explain limitations, monitor drift, and intervene when use cases expand beyond what was originally approved.

The Governance Gap Boards Must Avoid

Here is where the fiduciary-duty lens becomes especially useful. The governance failure in the AI era is unlikely to be that a board has never heard the term “AI.” Every board in America has heard it. The failure is more likely to be subtler and therefore more dangerous: the board heard about AI in broad strategic terms but never built a repeatable oversight mechanism around it.

That is the governance gap.

It shows up when management reports adoption but not risk classification.

It shows up when directors hear about productivity gains but not control failures.

It shows up when there is an AI policy but no inventory of use cases.

It shows up when there is enthusiasm about innovation but no discussion of third-party dependencies, data quality, escalation paths, or human review.

It shows up when incidents are handled ad hoc rather than through a defined reporting structure.

COSO warns that rapid iteration can outpace existing processes, and that prompts, thresholds, and retrieval connectors are critical configuration elements that require the same rigor as other controlled system settings. It also highlights third-party and vendor risk, noting that outsourced GenAI capabilities can limit visibility into training data, model updates, data handling, and underlying controls.

In other words, the board should not assume AI risk is contained simply because a vendor is involved or because the tool sits inside a familiar enterprise platform. That should sharpen the oversight question.

What Good Board Oversight Looks Like

The good news is that effective AI oversight is not mystical. It looks a great deal like good oversight in other high-risk areas. It is structured, periodic, evidence-based, and tied to accountability. At a minimum, boards should expect management to provide five things.

  1. An inventory of material AI use cases, categorized by risk and business impact.
  2. A governance structure that identifies owners, review forums, escalation paths, and the role of compliance, legal, risk, audit, and technology.
  3. Clear policies and boundaries around acceptable use, prohibited data, high-impact decisions, and when human review is mandatory.
  4. Meaningful reporting. Not just adoption statistics, but risk indicators, incidents, model or vendor changes, validation results, and material control exceptions.
  5. A remediation and monitoring process that reflects the dynamic nature of AI.

That is consistent with COSO’s broader framework, which stresses alignment with organizational goals and risk appetite, the use of relevant information, internal communication, ongoing evaluations, and the communication of deficiencies. This is where I would encourage boards to think less in terms of “AI briefings” and more in terms of “AI oversight cadence.” A one-time presentation is not governance. A recurring structure is.
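
To make the first two expectations concrete, consider a minimal sketch, in Python, of a risk-classified AI use-case inventory and a board-level rollup. The field names, risk tiers, and sample entry are hypothetical illustrations, not a prescribed schema and not the COSO format.

    from dataclasses import dataclass, field
    from enum import Enum

    class RiskTier(Enum):
        # Hypothetical tiers; a real program would align these with its
        # own risk appetite and classification taxonomy.
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"

    @dataclass
    class AIUseCase:
        """One entry in a board-visible AI use-case inventory (illustrative)."""
        name: str
        business_process: str   # the material process the tool influences
        owner: str              # a named, accountable owner
        risk_tier: RiskTier
        human_review_required: bool
        data_sources: list[str] = field(default_factory=list)
        open_incidents: int = 0

    def board_rollup(inventory: list[AIUseCase]) -> dict:
        """Summarize the inventory the way a recurring board report might:
        counts by risk tier, plus high-risk use cases with open incidents."""
        by_tier = {tier.value: 0 for tier in RiskTier}
        escalations = []
        for uc in inventory:
            by_tier[uc.risk_tier.value] += 1
            if uc.risk_tier is RiskTier.HIGH and uc.open_incidents > 0:
                escalations.append(uc.name)
        return {"use_cases_by_tier": by_tier,
                "high_risk_open_incidents": escalations}

    # Illustrative entry only.
    inventory = [AIUseCase(
        name="GenAI contract summarizer",
        business_process="Contract review triage",
        owner="Legal Operations",
        risk_tier=RiskTier.HIGH,
        human_review_required=True,
        data_sources=["Executed contracts repository"],
        open_incidents=1,
    )]
    print(board_rollup(inventory))

The rollup function is the cadence point in miniature: the same structured record, refreshed for every meeting, is what turns a one-time briefing into a repeatable oversight mechanism.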

The Board Does Not Need More Hype. It Needs Evidence.

One risk in the current market is that AI discussions are still drenched in promotional language. Faster. Smarter. More innovative. Transformational. Useful words, but not enough for a board discharging fiduciary obligations.

Boards need evidence. This is where the compliance function can shine. Compliance professionals know how to convert aspiration into evidence. We know how to build a record showing that oversight is not merely claimed, but exercised.

And make no mistake, documentation matters. Structured communication and clear records are essential for reconstructing decisions, demonstrating accountability, and supporting regulatory or audit review. That principle runs through effective compliance practice generally and becomes even more important in AI governance, where organizations must often explain not only what decision was made, but how the process was overseen.

Five Questions Every Board Should Ask Now

If I were advising a board chair or audit committee chair, I would start with five questions.

  1. What are our highest-risk AI use cases, and who owns each one?
  2. What information does the board receive regularly about AI adoption, incidents, and material changes?
  3. How do we know that management is validating AI outputs rather than simply trusting them?
  4. Where are third-party AI tools embedded in our environment, and what visibility do we have into the risks they pose?
  5. What evidence would we produce tomorrow if a regulator, auditor, or shareholder asked how this board oversees AI?

Those questions do not require the board to become technical. They require the board to become disciplined.

The Bottom Line

AI governance is moving quickly from optional good practice to expected governance hygiene. That is the real message boards need to hear. Under a fiduciary-duty lens, the challenge is straightforward. Directors do not need to be AI developers. But they do need to ensure that management has built a credible system for identifying, governing, monitoring, and escalating AI risk. When AI touches material business processes, board silence is not neutrality. It is exposure.

The companies that get this right will not be the ones that talk most loudly about innovation. They will be the ones whose boards insist on visibility, accountability, evidence, and follow-through. That is not anti-innovation. That is governance doing its job.

Categories
Hill Country Authors

Hill Country Authors Podcast: Paul McGrath on “Left is Right”: Satire, Darker Threats, and Current-Events Inspiration

Welcome to a new season of the award-winning Hill Country Authors Podcast, sponsored by Stoney Creek Publishing. In this podcast, Hill Country resident Tom Fox visits with authors who live in and write about the Texas Hill Country. He opens the new season with returning guest Paul McGrath to discuss McGrath’s novel Left is Right, a sequel to the PenCraft Award-winning Left.

McGrath recounts a 37-year career at Texas newspapers, primarily the Houston Chronicle, plus teaching at Texas A&M and Clear Lake, and his A&M roots with The Battalion. He explains expanding Anton’s story into a multi-book series (with five planned), driven by attachment to the characters and news-inspired plots. McGrath describes the layered “Left” titles, his use of Ellie to voice progressive viewpoints, and empathy as a motivating force for Anton and Ellie, including Ezra’s lingering influence. He notes a darker tone shaped by right-wing militias, human trafficking, and a Texas motorcycle gang, balanced by humor, wordplay, and pop-culture references like a Jon Hamm dream sequence. He outlines the ongoing pursuit by the FBI and the alien authorities, followed by a return to alien supervision, credits Stoney Creek Publishing’s support, shares where to find him on social media, and previews future themes involving Russians and cryptocurrency.

Key highlights:

  • Why Continue with Anton
  • Series Titles and ‘Left’
  • Empathy Driving the Plot
  • Darker Satire and Villains
  • Humor, Wordplay, and Names
  • Pop Culture Cameos
  • Where the Series Goes

Resources:

Paul McGrath on Stoney Creek Publishing

Left is Right on Texas A&M University Press

Social Media 

Instagram

X

Threads

Podcast Cover Art

Nancy Huffman Fine Art

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Daily Compliance News

Daily Compliance News: March 19, 2026, The Corruption in Soccer Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • US relaxes sanctions on PDVSA. (FT)
  • Chin wants the Malaysian ABC agency investigated. (Bloomberg)
  • Hacker breaks into law enforcement tip database. (Reuters)
  • Senegal, stripped of the Africa Cup title, calls for a corruption investigation. (NYT)
Categories
AI Today in 5

AI Today in 5: March 19, 2026, The Elasticity Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Elasticity as a compliance standard in the age of AI. (UCToday)
  2. Context-first AI for Co-Pilot. (FinTechGlobal)
  3. AI agents to reduce discovery costs. (BusinessWire)
  4. GSA AI clause. (Holland & Knight)
  5. How the military is using AI. (CBS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
GSK in China: 13 Years Later

GSK in China: 13 Years Later – The Compliance Breakdown That Still Echoes

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. In this inaugural episode, we take a deep dive into the 2013 GSK China bribery scandal and examine why it remains one of the most important case studies in corporate compliance, governance, and culture. Our hosts are Timothy and Fiona.

We unpack how a global pharmaceutical giant was alleged to have used travel agencies, fake conferences, false VAT receipts, and targeted marketing programs to channel illicit payments to doctors, officials, and other intermediaries, all while an internal whistleblower warning and a four-month internal investigation failed to detect the misconduct. The episode also explores the tension between polished global compliance structures and compromised local execution, showing how incentives, third-party relationships, and regional sales pressure can overwhelm formal controls. Most importantly, it asks a question that remains urgent today: are corporate compliance systems truly designed to find the truth, or can they create a false sense of security that allows misconduct to flourish undetected?

Key highlights:

  • The scale of the alleged misconduct was enormous.
  • Third parties were central to the scheme.
  • Internal controls failed when they were needed most.
  • Corporate culture and incentives drove the risk.
  • Why the lessons are still highly relevant today.

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: The notes and the voices of the hosts, Timothy and Fiona, were created by NotebookLM, based upon text written by Tom Fox.

Categories
Blog

Vendor AI Risk Is the New Third-Party Risk Frontier: From Contracts to Compliance Evidence

For years, compliance professionals have understood a basic truth about third-party risk: your company can outsource a function, but it cannot outsource accountability. That principle has long applied to distributors, agents, resellers, consultants, customs brokers, and supply-chain partners. In the age of artificial intelligence, it now applies equally to AI vendors.

And here is the key issue. Most companies are not building AI entirely in-house. They are licensing models, embedding third-party copilots, procuring AI-enabled platforms, connecting external APIs, and relying on vendors for everything from data enrichment to automated decision support. In other words, the AI stack is increasingly a third-party stack.

That means AI governance is rapidly becoming a third-party risk management problem. For compliance officers, this is a critical shift. The question is no longer simply whether your organization is using AI. The question is whether you have sufficient contractual leverage, operational visibility, and documentary evidence to demonstrate that third-party AI risk is managed in a credible, defensible, and scalable manner. If the answer is no, then your AI program may be far less mature than it looks on the PowerPoint slide.

AI Is Rarely a Standalone Tool

One of the most dangerous myths in the current AI conversation is that “the AI” is a single product that can be evaluated once and approved once. That is not how most enterprise deployments work. A single AI-enabled workflow may involve a foundation model provider, a cloud host, a retrieval layer, one or more data processors, a business application vendor, and internal configuration choices that change over time. Add subcontractors, model updates, and cross-border data flows, and you begin to see the real picture. The risk does not sit neatly with any single vendor. It sits across an ecosystem.

That matters because when something goes wrong, regulators, plaintiffs, auditors, and boards will not care that the problem sat in a vendor dependency chain. They will ask what your company knew, what it required, what it monitored, and what evidence it retained. The bottom line is that vendor AI risk has to move out of the procurement annex and into the core compliance framework.

Start with a More Realistic Definition of Third-Party AI Risk

When many companies think about vendor AI risk, they default to privacy and cybersecurity. Those issues are absolutely important, but they are only the beginning.

Third-party AI risk can also include opaque training data, weak model governance, unexplained output variability, inaccurate summarization, hidden subcontractors, unauthorized data retention, insufficient segregation of customer data, model changes without notice, untested bias, poor incident response, weak record retention, and limited auditability. If the tool affects regulated processes, the stakes rise even higher.

Think about the real-world use cases now being deployed. AI tools support customer communications, onboarding, HR screening, contract review, due diligence triage, transaction monitoring, investigations, and report drafting. In each of those settings, the company may be relying on output it did not fully generate, cannot fully inspect, and may not be able to reproduce later without the right controls in place.

That is where compliance must lean in. The core question is not whether the vendor claims to use responsible AI. The core question is whether your company can obtain sufficient evidence that the system is well-controlled for its intended use.

Contracts Are the First Line of Governance

If AI risk is outsourced to vendors, contracts become the first line of governance. Yet too many AI agreements still read like standard software contracts with a few privacy words sprinkled on top. That is not good enough. A sound AI vendor agreement should, at a minimum, address permitted use, data rights, confidentiality, security, model-change notification, subcontractor transparency, performance expectations, audit rights, incident reporting, regulatory cooperation, and termination support.

Most importantly, the contract should define the use case. That sounds basic, but it is essential. A vendor tool approved for low-risk drafting support is not automatically appropriate for high-impact decision-making. If the intended use is not defined, the actual use will drift. And drift is where governance begins to fail. The agreement should also make clear what data the vendor can use, for what purpose, and for how long. Can the vendor use your inputs to train its models? Can it retain prompts or outputs? Can it use metadata to improve service? Can affiliates or subprocessors access the data? If those questions are not answered with precision, you do not have clarity. You have hope. And hope is not a control.

SLAs Need to Measure More Than Uptime

Service level agreements are another area where companies need to upgrade their thinking. Traditional SLAs focus on uptime, availability, and support response times. Those are still necessary, but with AI, they are not sufficient. For an AI-enabled service, the SLA discussion should expand to include quality, reliability, explainability support, incident escalation, and change transparency. A system can be available 99.9% of the time and still produce garbage. That is not a service success. That is a control failure delivered efficiently.

I am not suggesting that every company can negotiate custom model-accuracy guarantees from every AI vendor. In many cases, that will not be realistic. But companies can require practical commitments around things like response logging, traceability, notification of material model or system changes, error-handling workflows, and support for validation testing. They can define turnaround times for incidents involving hallucinations, security breaches, inappropriate outputs, or data leakage. They can require that the vendor cooperate with investigations and remediation.

That is where the compliance function should partner closely with legal, procurement, information security, and the business owner. The goal is not to demand impossible warranties. The goal is to create enough visibility so that the company is not flying blind.
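
To make those commitments tangible, here is a minimal sketch of AI-specific SLA terms captured as a machine-readable config in Python. Every term, field name, and turnaround time below is a hypothetical example for discussion, not standard vendor language.

    # Hypothetical AI-specific SLA terms, going beyond traditional uptime.
    ai_sla_terms = {
        "availability": "99.9%",          # the traditional term, still needed
        "response_logging": "prompts and outputs retained for 12 months",
        "model_change_notice_days": 30,   # advance notice of material changes
        "validation_support": "vendor provides test access for output checks",
        "incident_turnaround_hours": {    # defined per incident class
            "data_leakage": 4,
            "security_breach": 4,
            "inappropriate_output": 24,
            "systematic_hallucination": 48,
        },
        "investigation_cooperation": True,  # vendor assists with remediation
    }

    # Example check: flag incident classes slower than one business day.
    slow = {incident: hours
            for incident, hours in ai_sla_terms["incident_turnaround_hours"].items()
            if hours > 24}
    print(slow)  # {'systematic_hallucination': 48}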

Audit Rights Must Be Usable, Not Decorative

Many vendor contracts include broad-sounding audit clauses that are so restricted, delayed, or indirect that they provide little real assurance. In the AI context, that problem is magnified. If you cannot meaningfully assess controls over data handling, model governance, subprocessors, logging, incident response, and change management, then your audit right is little more than legal wallpaper.

A usable audit-right framework does not always mean sending a team on-site with clipboards. It can include layered assurance mechanisms: independent third-party assessments, SOC reports, model governance summaries, penetration-test results, bias testing documentation, incident logs, certifications, tabletop exercise results, and the right to ask targeted follow-up questions. In higher-risk arrangements, it may also include deeper review rights, validation support, or the ability to commission an independent assessment.

From Due Diligence to Ongoing Monitoring

Once a contract is signed, the real work begins. Models change. Vendors add subprocessors. Features evolve. Use cases expand. Business users discover new workflows that procurement never contemplated. A vendor that began as a low-risk drafting tool can quietly become embedded in a regulated process six months later. That is why monitoring matters.

Companies should inventory AI vendors and classify them by risk. They should map which business processes depend on them, what data they touch, what decisions they inform, and what regulatory exposure they create. They should require periodic attestations, monitor control changes, review incidents, reassess data use, and revisit whether the tool is being used in line with approved purposes.
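
One way to operationalize that classification is a simple scoring rule. The sketch below, in Python, is purely illustrative: the one-to-three scales, the multiplication, and the tier thresholds are placeholders that a real program would calibrate to its own risk model.

    def classify_ai_vendor(data_sensitivity: int, decision_impact: int,
                           regulated_process: bool) -> str:
        """Scores run 1 (low) to 3 (high). Returns a review tier and the
        minimum oversight that tier implies (illustrative only)."""
        score = data_sensitivity * decision_impact
        if regulated_process or score >= 6:
            return "high: enhanced diligence, usable audit rights, continuous monitoring"
        if score >= 3:
            return "medium: periodic attestations and change-notice review"
        return "low: standard procurement review"

    # Illustrative examples only.
    print(classify_ai_vendor(data_sensitivity=3, decision_impact=3,
                             regulated_process=True))
    print(classify_ai_vendor(data_sensitivity=1, decision_impact=2,
                             regulated_process=False))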

This is also where shadow AI becomes a third-party problem. Employees often access AI functionality through existing vendors before compliance even realizes it is enabled. Suddenly, a platform you bought for workflow management has rolled out AI summarization, drafting, or analytics features. If no one is watching vendor change notices and product updates, the company can slide into AI use without ever consciously approving it. That is a governance gap.

Build a Compliance Evidence File

If there is one practical takeaway, it is this: for significant AI vendors, build a compliance evidence file.

By that, I mean a documented record showing the rationale for approval, the use case, the risk classification, the key contractual controls, the diligence performed, the evidence reviewed, the approvals obtained, and the monitoring steps required going forward. If the vendor supports a high-risk process, the file should also include validation results, escalation pathways, and a record of any incidents or material changes.
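
For illustration, here is a minimal sketch of such a file as a structured record in Python. Every field name and the sample entry are hypothetical; the point is that each item listed above becomes a field someone must actually fill in and keep current.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIVendorEvidenceFile:
        """A compliance evidence file for one AI vendor (illustrative)."""
        vendor: str
        use_case: str            # the defined, approved use
        risk_classification: str
        approval_rationale: str
        approved_by: str
        approval_date: date
        contractual_controls: list[str] = field(default_factory=list)
        diligence_evidence: list[str] = field(default_factory=list)
        monitoring_steps: list[str] = field(default_factory=list)
        incidents_and_changes: list[str] = field(default_factory=list)  # running log

    # Hypothetical entry only.
    evidence = AIVendorEvidenceFile(
        vendor="Example GenAI vendor",
        use_case="Drafting support for internal policy summaries",
        risk_classification="medium",
        approval_rationale="Low-impact drafting; human review mandatory before use",
        approved_by="AI governance committee",
        approval_date=date(2026, 3, 1),
        contractual_controls=["No training on our inputs",
                              "30-day model-change notice"],
        diligence_evidence=["SOC 2 Type II reviewed",
                            "Subprocessor list obtained"],
        monitoring_steps=["Quarterly attestation",
                          "Review vendor change notices"],
    )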

Why does this matter? Because when the board asks why the company trusted a third-party AI tool, you need a better answer than “the business wanted it.” When internal audit asks how control assurance was established, you need something more concrete than “a legal review of the contract.” And when a regulator asks how the company oversees outsourced AI risk, you need documentation that demonstrates a repeatable, risk-based process.

Five Questions Every CCO Should Ask

Every Chief Compliance Officer should be asking five simple questions right now.

  1. Do we know which vendors in our ecosystem are using or enabling AI?
  2. Have we classified those vendors based on data sensitivity and the business impact of the use case?
  3. Do our contracts clearly address data rights, change notification, incident response, and usable audit rights?
  4. Do our SLAs measure what matters for AI-enabled services, not just uptime?
  5. Can we produce evidence showing why a vendor was approved, what controls we relied on, and how the relationship is being monitored?

If the answer to any of those questions is no, the work is not done.

The Bottom Line

Third-party risk has always been about visibility, leverage, and evidence. AI does not change that. It intensifies it. The organizations that manage vendor AI risk well will not be the ones with the flashiest AI procurement strategy. They will be the ones that define use cases carefully, contract for transparency, demand usable assurance, monitor continuously, and retain evidence that their oversight is real.

That is where compliance comes in. Not as the department that slows innovation down, but as the function that makes outsourced innovation governable. Because in the end, if AI is rarely built entirely in-house, then AI governance cannot live entirely in-house either.

Categories
All Things Investigations

ATI In-House Insights: Challenges and Tips for Navigating a Changing Risk Landscape with Sarah Iles

In this episode of the ATI: In-House Insights Podcast, Mike DeBernardis speaks with seasoned in-house compliance leader Sarah Iles about navigating an ever-changing risk landscape shaped by political, geopolitical, regulatory, and technological shifts.

Sarah shares her background across manufacturing sectors and discusses how multinational compliance risks evolve as jurisdictional priorities shift, including sanctions, export controls, tariffs, sustainability, labor rights, data protection, and AI. They identify internal challenges, including a lack of infrastructure to address new risks, siloed ownership, and weak change management, and they emphasize clear governance and accountability. Sarah advises going “back to basics”: using the DOJ’s Evaluation of Corporate Compliance Programs, focusing on real risk mitigation over form-heavy questionnaires, keeping communication channels open through formal committees and informal connections, scaling risk assessments appropriately, targeting communications to relevant audiences, escalating thoughtfully, and building resilient programs by expecting and embracing constant change.

Key highlights:

  • Geopolitics Drives Risk
  • Internal Adaptation Hurdles
  • Silos and Ownership
  • Culture and Change
  • Proactive Compliance Basics
  • Partnering With Business
  • Right-Sized Risk Assessments
  • Communicating Emerging Risks

Resources:

Sarah Iles LinkedIn

Mike DeBernardis LinkedIn

ATI: In-House Insights Podcast

Hughes Hubbard & Reed Website

Categories
Great Women in Compliance

Great Women in Compliance: SCCE ECEI 2026 – Berlin Highlights

Lisa Fine and Ellen Hunt were at the SCCE ECEI in Berlin earlier in March and asked some members of the #GWIC community to share their experiences and insights from the event. They asked attendees to reflect on something that stood out from the presentations, a takeaway to apply when they returned home, and a moment that was memorable to them. And, no surprise, the answers were insightful and thought-provoking. Their reflections highlighted themes including the growing importance of behavioral science, third-party risk, and AI, as well as creativity and audience engagement.

Everyone mentioned the sense of community – from conversations over lunch to reconnecting with global peers, both inside and outside the sessions. Whether it was hallway discussions, shared meals, or cultural experiences in Berlin, the conference underscored the energy and innovation that come from the global ethics and compliance community, as well as the meaningful relationships built in person at the event.