Categories
AI Today in 5

AI Today in 5: March 20, 2026, The AI Changing Compliance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. Has AI changed the rules of compliance? (Forbes)
  2. How AI and deep fakes are reshaping identity fraud. (FinTechGlobal)
  3. How AI is changing product compliance. (SupplySidesJ)
  4. World Bank to focus on AI-resilient job creation. (Bloomberg)
  5. How AI is changing fintech. (Intuit)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Categories
Blog

AI Governance and Fiduciary Duty: Board Oversight of AI As Core Governance

There was a time when boards could treat AI as a management-side innovation issue, something for the technology team, the innovation committee, or perhaps an occasional strategy offsite. That time is ending. For every compliance professional, AI stops being a technology story and becomes a governance story. And once it becomes a governance story, boards need to pay attention through the lens they know best: fiduciary duty.

The issue is not whether every director needs to become an engineer. They do not. The issue is whether the board is exercising appropriate oversight over a capability that can materially affect legal exposure, operational resilience, internal controls, reputation, and enterprise value. Under that lens, ignoring AI oversight begins to look less like prudence and more like a governance gap.

The Board Question Is No Longer “Do We Use AI?”

Too many board discussions still start in the wrong place. A director asks, “Are we using AI?” Management says yes, in a handful of pilots. Another director asks whether there is a policy. Legal says yes, one is being drafted. Everyone nods, reassured that the matter is under control. That is not oversight. That is atmospherics.

The real board questions are different. Where is AI being used? What decisions does it influence? What data does it rely on? Who owns it? How is risk assessed? What controls are in place? What gets reported upward when something changes or goes wrong?

COSO’s GenAI guidance is quite direct on this point. It states that the board of directors must have visibility into GenAI use and associated risks, including regular reporting on adoption, key risk indicators, incidents, and material changes to high-impact use cases. It also says oversight bodies should have the capacity to challenge assumptions, request independent validation, and direct corrective action.

Fiduciary Duty Means Oversight, Not Technical Mastery

The fiduciary duty standard is more practical and more familiar. Directors are expected to exercise informed oversight over material risk. If AI is shaping material processes, material decisions, or material exposures, then the board should ask how management governs it and what evidence supports that confidence.

This is where compliance can be a true translator. We understand how to connect abstract governance expectations to operational proof. We know the difference between having a policy and having a control. We know that a dashboard without escalation is theater. We know that a pilot without documentation is an anecdote. And we know that “the business owns it” is not enough unless ownership is defined, trained, monitored, and accountable.

COSO again gives a helpful framework. It emphasizes clear ownership of each GenAI tool, platform, or capability, with defined authority, escalation paths, and documented scope of use. It further stresses that assigning ownership without the capability to deliver invites failure, and that accountability should be tied not only to adoption but also to accuracy, safety, compliance, and adherence to controls. Boards do not need to run AI. But they do need assurance that someone competent owns it and that the ownership model is real.

Why AI Oversight Is Different from Ordinary IT Oversight

Some directors may be tempted to ask whether this is simply another version of cybersecurity or of oversight for digital transformation. There is overlap, certainly, but AI presents a different governance profile. COSO notes several characteristics that distinguish GenAI. It is dynamic: models, prompts, and retrieval data can change frequently, requiring continuous risk assessment, change control, and monitoring. It is easily scalable, meaning it can amplify errors and bias as readily as it can amplify efficiency. It has a low barrier to entry, which increases the risk of shadow AI and ungoverned adoption. And critically, it can be confidently wrong.

That last point is especially important for boards. A broken machine usually signals that it is broken. AI often does the opposite. It produces polished, persuasive, and highly plausible output even when it is materially mistaken. That means traditional management confidence can be a weak proxy for actual reliability. Boards, therefore, need a different kind of assurance model, one that asks not only whether the system is in place, but whether the organization can validate outputs, explain limitations, monitor drift, and intervene when use cases expand beyond what was originally approved.

The Governance Gap Boards Must Avoid

Here is where the fiduciary-duty lens becomes especially useful. The governance failure in the AI era is unlikely to be that a board has never heard the term “AI.” Every board in America has heard it. The failure is more likely to be subtler and therefore more dangerous: the board heard about AI in broad strategic terms but never built a repeatable oversight mechanism around it.

That is the governance gap.

It shows up when management reports adoption but not risk classification.

It shows up when directors hear about productivity gains but not control failures.

It shows up when there is an AI policy but no inventory of use cases.

It shows up when there is enthusiasm about innovation but no discussion of third-party dependencies, data quality, escalation paths, or human review.

It shows up when incidents are handled ad hoc rather than through a defined reporting structure.

COSO warns that rapid iteration can outpace existing processes, and that prompts, thresholds, and retrieval connectors are critical configuration elements that require the same rigor as other controlled system settings. It also highlights third-party and vendor risk, noting that outsourced GenAI capabilities can limit visibility into training data, model updates, data handling, and underlying controls.

In other words, the board should not assume AI risk is contained simply because a vendor is involved or because the tool sits inside a familiar enterprise platform. That should sharpen the oversight question.

What Good Board Oversight Looks Like

The good news is that effective AI oversight is not mystical. It looks a great deal like good oversight in other high-risk areas. It is structured, periodic, evidence-based, and tied to accountability. At a minimum, boards should expect management to provide five things.

  1. An inventory of material AI use cases, categorized by risk and business impact.
  2. A governance structure that identifies owners, review forums, escalation paths, and the role of compliance, legal, risk, audit, and technology.
  3. Clear policies and boundaries around acceptable use, prohibited data, high-impact decisions, and when human review is mandatory.
  4. Meaningful reporting. Not just adoption statistics, but risk indicators, incidents, model or vendor changes, validation results, and material control exceptions.
  5. A remediation and monitoring process that reflects the dynamic nature of AI.

That is consistent with COSO’s broader framework, which stresses alignment with organizational goals and risk appetite, the use of relevant information, internal communication, ongoing evaluations, and the communication of deficiencies. This is where I would encourage boards to think less in terms of “AI briefings” and more in terms of “AI oversight cadence.” A one-time presentation is not governance. A recurring structure is.

The Board Does Not Need More Hype. It Needs Evidence.

One risk in the current market is that AI discussions are still drenched in promotional language. Faster. Smarter. More innovative. Transformational. Useful words, but not enough for a board discharging fiduciary obligations.

Boards need evidence. This is where the compliance function can shine. Compliance professionals know how to convert aspiration into evidence. We know how to build a record showing that oversight is not merely claimed, but exercised.

And make no mistake, documentation matters. Structured communication and clear records are essential for reconstructing decisions, demonstrating accountability, and supporting regulatory or audit review. That principle runs through effective compliance practice generally and becomes even more important in AI governance, where organizations must often explain not only what decision was made, but how the process was overseen.

Five Questions Every Board Should Ask Now

If I were advising a board chair or audit committee chair, I would start with five questions.

  1. What are our highest-risk AI use cases, and who owns each one?
  2. What information does the board receive regularly about AI adoption, incidents, and material changes?
  3. How do we know that management is validating AI outputs rather than simply trusting them?
  4. Where are third-party AI tools embedded in our environment, and what visibility do we have into the risks they pose?
  5. What evidence would we produce tomorrow if a regulator, auditor, or shareholder asked how this board oversees AI?

Those questions do not require the board to become technical. They require the board to become disciplined.

The Bottom Line

AI governance is moving quickly from optional good practice to expected governance hygiene. That is the real message boards need to hear. Under a fiduciary-duty lens, the challenge is straightforward. Directors do not need to be AI developers. But they do need to ensure that management has built a credible system for identifying, governing, monitoring, and escalating AI risk. When AI touches material business processes, board silence is not neutrality. It is exposure.

The companies that get this right will not be the ones that talk most loudly about innovation. They will be the ones whose boards insist on visibility, accountability, evidence, and follow-through. That is not anti-innovation. That is governance doing its job.

Categories
AI Today in 5

AI Today in 5: March 19, 2026, The Elasticity Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. Elasticity as a compliance standard in the age of AI. (UCToday)
  2. Context-first AI for Co-Pilot. (FinTechGlobal)
  3. AI agents to reduce discovery costs. (BusinessWire)
  4. GSA AI clause. (Holland & Knight)
  5. How the military is using AI. (CBS)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Categories
AI Today in 5

AI Today in 5: March 18, 2026, The AI Compliance in GCC Companies Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. Data privacy and compliance in the age of AI. (BloombergLaw)
  2. Agentic AI in insurance claims. (FinTechGlobal)
  3. Leading through AI transformation in FinTech. (Forbes)
  4. AI compliance in GCC organizations. (BizToday)
  5. Does healthcare need specialized AI? (HBR)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: McKinsey’s Lilli AI Hack: What It Signals for AI Governance, Security and Disclosure

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at the recent hack of McKinsey’s AI tool Lilli.

Tom and Matt discuss a Financial Times report that a white-hat hacker, Paul Price of one-person firm Code Wall, exploited flaws in McKinsey’s internal AI tool “Lilli” to access millions of internal chat messages, view sensitive client-related file names, and see the model weights used to train the system; McKinsey patched the vulnerabilities after disclosure. They argue that the incident highlights emerging AI risks beyond traditional cybersecurity, including AI agents autonomously scouting for targets, the possibility of attackers tampering with models to skew outputs and create hard-to-detect “drift,” and confusion over who within organizations owns AI security and governance. The episode also explores the messy, inconsistent landscape of disclosure for AI-related incidents and urges compliance and GRC leaders to slow AI adoption, pressure-test systems, clarify accountability, ensure kill-switch and manual-fallback capabilities, and consider reputational fallout.

Key highlights:

  • McKinsey AI Hack Overview
  • Three Big Implications
  • Model Drift and Tampering
  • GRC Playbook for AI Risk
  • Accountability and Kill Switches

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
Blog

AI Is Only as Good as the Data: What Compliance Leaders Need to Know About Data Readiness

There is an old lesson in compliance that remains evergreen: bad facts produce bad decisions. The same is true for data science: Garbage In, Garbage Out (GIGO). In the GenAI era, that lesson has a new twist. Bad data produces bad outputs at machine speed.

That is why the report, Taming the Complexity of AI Data Readiness, deserves the attention of every Chief Compliance Officer, compliance technologist, and board member who asks management, “What is our AI strategy?” The better follow-up question is, “What is our data readiness strategy?” Because the report makes one point with unmistakable clarity: the model is not the mission; the data foundation is.

For compliance professionals, this is not a technical side issue. It is central to the enterprise risk conversation. If your organization is training, testing, or deploying AI on messy, siloed, biased, stale, or poorly governed data, you are not building a competitive advantage. You are industrializing risk.

The Dirty Little Secret of Enterprise AI

The report lays out a reality that will not surprise anyone who has lived through a data initiative. Most organizations are not ready. Only 7% of survey respondents said their company’s data was completely ready for AI adoption. By contrast, 51% said it was only somewhat ready, while 27% said it was not very or not at all ready. Only 42% said their organization had high trust in its AI data, and 73% agreed their company should prioritize AI data quality more than it currently does. That should give every compliance officer pause.

We are living through a corporate rush toward GenAI, yet most companies are still stuck at the same old starting line: fragmented, inconsistent, poorly governed data. Many AI conversations inside companies still begin with use cases, copilots, and vendor demos. Far fewer begin with data lineage, data permissions, data quality, or governance maturity. That is a mistake.

If the underlying data is unreliable, the downstream output will be unreliable as well. Worse, it may arrive dressed up in polished prose, persuasive charts, or tidy summaries that create a false sense of confidence. In compliance, that is especially dangerous. Whether the use case is sanctions screening, due diligence, internal investigations, policy management, financial controls, or regulatory reporting, a bad answer delivered quickly is still a bad answer.

Bad Data Is Not Just a Tech Problem

One of the most useful parts of the report is how it frames the core barriers. The top challenge cited by respondents was siloed data and difficulty integrating sources, at 56%. Next came the lack of a clear data strategy at 44% and data quality or bias issues at 41%. Other concerns included regulatory constraints on data use, unclear data lineage, inadequate security, and outdated data. Every one of those should sound familiar to compliance professionals.

Siloed data means incomplete visibility. Weak lineage means you may not be able to defend how an answer was generated. Bias in the data means distorted outputs. Outdated data means inaccurate decisions. Weak security exposes sensitive information. Regulatory constraints mean the company may not even have the right to use certain data the way its AI aspirations assume.

The report underscores this point. 52% of respondents identified inaccurate or biased AI results as a top concern, while 40% cited the loss of security or intellectual property. That is not abstract. That is the modern compliance risk register.

Can We Trust the Data?

A quote from Teresa Tung of Accenture in the report is worth lingering over. She said data readiness means “you can access data to see an accurate view of what is happening in your business and what you can do about it.” That is also a very good working definition of compliance intelligence.

A mature compliance program helps a company understand what is happening inside the business and what should be done in response. That means your hotline data, your gifts and entertainment data, your training metrics, your third-party files, your investigation records, and your control data all need to mean what you think they mean.

The report makes this point with a simple example. Price data is not useful unless you know whether it is in U.S. or Australian dollars, whether it is a unit or bulk price, and when it applies. The compliance equivalent is easy to imagine. A third-party risk flag is not useful unless you know what triggered it, what jurisdiction it covers, how recently it was refreshed, what source produced it, and whether anyone validated it. Context is a control. Without it, data can mislead just as easily as it can inform.

Why This Is Becoming a Board-Level Issue

Another important finding is that only 23% of organizations have created a data strategy for AI adoption, although 53% are currently developing one. In other words, companies know they have a problem, but most are still working through it. This is where compliance can truly function as a business enabler.

The best compliance leaders know that governance is not the enemy of innovation. Governance is what makes innovation scalable and sustainable. If the business wants to use AI at scale, compliance should request a documented AI data strategy that addresses security, privacy, data quality, governance, accessibility, bias management, and alignment with business objectives.

The report found that security and protection of sensitive data were the most critical elements of such plans, at 59%, followed by clean, usable data quality at 46% and data governance at 41%. That is not just an IT checklist. That is a board conversation.

Bring AI to the Data

The report also discusses a concept compliance professionals need to understand: data gravity. Large and sensitive data sets tend to stay where they are because moving them is costly, slow, and risky. Increasingly, organizations are turning to architectures that bring AI processing to the data rather than moving data to the model. The report highlights approaches, such as zero-copy access and containerized applications, that can reduce latency, control costs, and address security and sovereignty concerns. This matters greatly for compliance.

Many regulated environments cannot simply move sensitive data across systems or borders because a vendor wants a cleaner AI workflow. Privacy laws, localization rules, contracts, and plain good judgment all cut against that approach. If AI can be brought to the data rather than copying data into multiple new environments, the organization may reduce both operational and compliance risk.

Compliance officers do not need to become cloud architects. But they do need to ask the right questions. Are we duplicating sensitive data unnecessarily? Are we crossing jurisdictional lines? Can we explain lineage, access, and security? Are we creating an AI environment that looks controlled or improvised?

Agentic AI: Real Promise, Real Risk

The report is optimistic about the potential of agentic AI for data management. 47% of respondents said their organizations believe agentic AI can solve data quality issues, and 65% expect many business processes to be augmented or replaced by agentic AI over the next 2 years. Experts cited benefits such as mapping data, documenting it, performing quality checks, monitoring drift, and automating routine tasks that previously required significant manual effort.

There is real promise here. Compliance teams spend far too much time on manual work that adds little strategic value. Tools that can responsibly automate mapping, documentation, testing, triage, or drift monitoring deserve serious attention.

But this is no place for magical thinking. The report is equally clear that success requires the right team: data engineers, domain experts, prompt expertise, and a product owner aligned to a business objective. That is the lesson. Agentic AI does not eliminate the need for governance. It raises the stakes for governance. If you automate poor judgment on top of poor data, you do not get efficiency. You get scalable failure.

Five Questions for Every CCO

So what should compliance leaders do now? Start with five questions.

  1. Which AI use cases in our company depend on sensitive, regulated, or high-risk data?
  2. Can we explain the lineage, quality, freshness, permissions, and context of that data?
  3. Do we have a documented AI data strategy, or are we confusing pilots with governance?
  4. Are we moving data in ways that create avoidable privacy, security, or sovereignty risks?
  5. Who owns the meaning of the data?

That final question may be the most important. The report stresses that the business must own the data so it is described properly and used correctly. Data is not just a technical asset. It is a business asset with legal, ethical, and operational meaning. Compliance should insist that meaning be defined before AI starts drawing inferences from it.

The Bottom Line

The great temptation in the AI era is to focus on the model’s brilliance. The wiser course is to focus on the data’s readiness. That is where trust begins. That is where defensibility begins. And that is where sustainable value begins. For compliance professionals, the message is plain. AI governance that ignores data readiness is not governance at all. It is wishful thinking with a dashboard.

The organizations that win with AI will not simply have more tools. They will have better data, better lineage, better controls, better discipline, and better judgment about when and how to use AI. In compliance, that is not glamorous. But it is where real success usually lives.

Categories
AI Today in 5

AI Today in 5: March 17, 2026, The $1tn in Value Wipe-Out Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. Open Claw exposes Agentic AI risk. (GovInfoSecurity)
  2. Using AI for comms compliance. (FinTechGlobal)
  3. AI with integrity in FinCrime compliance. (Forbes)
  4. AI in life insurance. (InsuranceNewsNet)
  5. Amazon leads the $1tn wipeout in AI value. (CNBC)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Categories
AI Today in 5

AI Today in 5: March 16, 2026, The Who Owns the Decision Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. AI boosts brainstorming. (Earth.com)
  2. The AI Imperative. (Wolters Kluwer)
  3. Who owns compliance decisions? (FinTech Global)
  4. AI opens a new front in the hospitals v. insurers battle. (Reuters)
  5. Embodied AI for manufacturing. (FinanceMagnates)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Categories
AI Today in 5

AI Today in 5: March 13, 2026, The KYA Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. From KYC to Know Your Agent. (PYMNTS)
  2. Big Tech’s entire AI operations under EU scrutiny. (Bloomberg)
  3. Using Napier AI in transaction monitoring. (FinTechGlobal)
  4. Retail banks are putting AI to use. (BCG)
  5. Embodied AI for manufacturing. (Automate)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game, available for purchase on Amazon.com.

Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 72 – The Kristy in London Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

  • What did the FCPA pause do? (JustSecurity)
  • Wells Fargo is free from the Consent Order. (WSJ)
  • Senator flags White House corruption for betting markets. (Decrypt)
  • A DOJ lawyer quit before the hearing on the use of false AI-generated cases. (Bloomberg-Law)
  • DOJ wants authority over state bar discipline. (NYT)
  • Discussion: SCCE Europe Keynote
  • Target’s ICE Arrests Expose the Gap Between Legal Compliance & Duty of Care – Corporate Compliance Insights
  • Dems Propose ‘FCPA Reinforcement Act’ – Radical Compliance
  • International agents take down major site where criminals traded stolen corporate info – Compliance Week
  • Woman Dressed In Hot Dog Costume Busted For Toilet Paper Caper – The Smoking Gun

Resources:

Kristy Grant-Hart on LinkedIn

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn