Daily Compliance News

Daily Compliance News: April 21, 2026, The Scambodia Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Pope Leo calls on Angolans to fight corruption. (Africa News)
  • Should CEOs be the face of a company? (NYT)
  • Cambodia’s business model is scamming. (WSJ)
  • Supreme Court to review SEC disgorgement powers. (Reuters)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

AI Today in 5

AI Today in 5: April 21, 2026, The 7 Questions You Should Ask Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world: compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. 7 questions to ask about AI and compliance. (The News Tribune)
  2. Compliance can outsource tools to AI but not judgment. (FinTech Global)
  3. Data Authenticity and Accountability for AI. (CCI)
  4. Do AI chatbots make you stupider? (BBC)
  5. ICU nurses get AI help. (Healthcare IT News)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Innovation in Compliance

Innovation in Compliance: When a Senior Leader Faces Cancer: Disclosure, Continuity Planning, and Resilience with Deb Krier

Innovation comes in many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, Tom visits with Deb Krier to discuss her work coaching executives, primarily those navigating serious cancer diagnoses.

Deb explores the unique leadership challenges of privacy, disclosure, and maintaining credibility while undergoing treatment. A corporate communications professional and founder of Wise Women Communications, she explains what leaders should share with boards, HR, close colleagues, and clients, emphasizing the importance of controlling the narrative to prevent rumors and of coordinating with medical teams to plan around energy levels, treatment, and time away. She describes resilience as “grit,” encourages leaders to delegate and empower their teams, and urges organizations to strengthen business continuity and contingency planning so that no single person holds ultimate authority. Deb also highlights the importance of a support “tribe” and the benefits of humor, and advises compliance professionals to listen with empathy while addressing any legal disclosure obligations.

Key highlights:

  • Cancer Coaching for Executives
  • Work Impact and Treatment Planning
  • Resilient Leadership in Crisis
  • Support Tribe and Community
  • Humor as Medicine
  • Compliance, Empathy, and Culture

Resources:

Deb Krier on LinkedIn

Your Cancer Coach Website 

The Business Power Hour Podcast

Innovation in Compliance is a multi-award-winning podcast that was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

SBR - Authors' Podcast

SBR-Author’s Podcast: Invitational Selling: Building Trust, Engagement, and Human Connection in a Digital World with Dr. Dennis Cummins

Welcome to the SBR-Author’s Podcast! In this podcast series, Host Tom Fox visits with authors in the compliance arena and beyond. In this episode, Tom Fox welcomes Dr. Dennis Cummins to talk about his new book “Invitational Selling: The Human Connection Advantage.”

Dr. Cummins discusses his new book, which grew from his experience “selling from the stage” and his discovery that pressing less and connecting more led to better results. He argues that traditional high-pressure sales tactics are failing because buyers have more information, face constant messaging, and are increasingly skeptical, while AI-driven speed and automation can erode authenticity and trust. He defines invitational selling as a three-phase framework: connect, convey, and convert by inviting next steps. The framework applies not only to selling products but also to leaders seeking organizational buy-in, speak-up/listen-up cultures, and engagement that reduces resistance and turnover. He shares a story about his late daughter, Lauren, selling bracelets as a lesson in rapport, value, and meaning. The book launches April 28, with launch proceeds donated to the Make-A-Wish Foundation.

Key highlights:

  • Why Write This Book
  • Connect Convey Convert
  • Beyond Sales: Organizational Buy-In
  • Speak-Up/Listen-Up Culture
  • Inviting Beats Telling
  • Using AI Without Losing Trust

Resources:

Dr. Dennis Cummins on LinkedIn

Dr. Dennis Cummins Website

Invitational Selling click here

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

The PfBCon Podcast

The PfBCon Podcast: From Podcast to Book (and Back): Building a Content Engine with AI Support

Tom Fox explains how podcasts and books can fuel each other in a circular content-creation process and how AI can assist as a research and editorial assistant. Drawing on his experience founding the Compliance Podcast Network (which grew from five to 75 shows) and the Texas Hill Country Podcast Network (from three to 15 shows, with 3,000 subscribers), he emphasizes the value of podcasting, especially in rural areas. Examples include creating a podcast that led to a book about seven older female artists, producing a leader’s autobiography by recording outlined life stories into transcripts (earning a gold podcast award), and turning books into podcast series (e.g., risk management/AI, the FCPA Survival Guide, and Megan Dougherty’s Podcasting for Business to support her launch). He describes writing The Compliance Handbook via daily podcast recordings as an editing tool, and using AI prompts to generate topic ideas, guest outlines, blog drafts, white paper drafts, chapter drafts, and social post drafts, always with human fact-checking and editing.

Key highlights:

  • Podcast Book Loop
  • Building Podcast Networks
  • Magnificent Seven Story
  • Legacy Autobiography Podcast
  • Upping Your Game Brand
  • FCPA Book to Podcast
  • Podcasting for Business Launch
  • Compliance Handbook Method
  • Write What You Love
  • AI Research Editorial Tools

Resources:

Follow Tom Fox on:

Instagram

Facebook

YouTube

Twitter

LinkedIn

Compliance Podcast Network

Texas Hill Country Podcast Network

Blog

AI Disclosures, Controls, and D&O Coverage: Closing the Governance Gap Around Artificial Intelligence

A new governance gap is emerging around artificial intelligence, and it is one that Chief Compliance Officers, compliance professionals, and boards need to confront now. It sits at the intersection of three areas that too many companies still treat separately: public disclosures, internal controls, and insurance coverage. That siloed approach is no longer sustainable.

As companies speak more confidently about their AI strategies, insurers are becoming more cautious about the risks those strategies create. That tension matters. It signals that the market is beginning to see something many organizations have not yet fully addressed: when a company’s statements about AI outpace its actual governance, the exposure is not merely operational or reputational. It can become a disclosure issue, a board oversight issue, and ultimately a proof-of-governance issue under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP).

For the compliance professional, this is not simply an insurance story. It is a compliance integration story. The question is whether the company can align its public statements about AI, the controls it actually operates, and the protections it believes will respond if something goes wrong.

The New Governance Gap

Many organizations are eager to describe AI as a source of innovation, efficiency, better decision-making, or competitive advantage. Those messages increasingly appear in earnings calls, investor decks, public filings, marketing materials, and board presentations. Yet the underlying governance structures often remain immature. That disconnect is the governance gap.

It appears when management speaks broadly about responsible AI but has not built a complete inventory of AI use cases. It appears when companies discuss oversight but cannot show testing, documentation, or monitoring. And it appears when boards assume that insurance will respond to AI-related claims without understanding how new policy language may narrow coverage.

This is where D&O coverage becomes so important. It is not the center of the story, but it is a revealing signal. If insurers are revisiting policy language and introducing exclusions or limitations tied to AI-related conduct, it suggests the market sees governance risk. In other words, the insurance market is sending a message: AI-related claims are no longer hypothetical, and companies that cannot demonstrate disciplined oversight may find that risk transfer is less available than they assumed.

Why the ECCP Should Be the Primary Lens

The DOJ’s ECCP remains the most useful framework for analyzing this issue because it asks exactly the right questions.

Has the company conducted a risk assessment that accounts for emerging risks? Are policies and procedures aligned with actual business practice? Are controls working in practice? Is there proper oversight, accountability, and continuous improvement? Can the company demonstrate all of this with evidence? Those are compliance questions, but they are also the right AI governance questions.

If a company makes public statements about AI capability, oversight, or reliability, the ECCP lens requires more than aspiration. It requires substantiation. Can the company show who owns the AI risk? Can it demonstrate how models or systems are tested? Can it show escalation procedures when problems arise? Can it document how AI-related decisions are monitored, reviewed, and improved over time?

If the answer is no, then the issue is not simply that the company may have overpromised. The issue is that its compliance program may not be adequately addressing a material emerging risk. That is why CCOs should view AI as a cross-functional challenge requiring integration across legal, compliance, technology, risk, audit, investor relations, and the board.

AI Disclosure Must Be Evidence-Based

One of the most practical steps a compliance function can take is to push for an evidence-based disclosure process around AI. This means that public statements about AI should not be driven solely by enthusiasm, market pressure, or executive optimism. They should be grounded in underlying documentation. If the company says it uses AI responsibly, where is the governance framework? If it claims AI improves decision-making, what testing supports that assertion? If it says it has safeguards, where are the control descriptions, monitoring results, and escalation records?

This is not about suppressing innovation. It is about ensuring that disclosure discipline keeps pace with technological ambition. For boards, this means asking harder questions before approving or relying on public AI narratives. For compliance officers, it means helping management build the evidentiary record that turns broad statements into defensible representations.

Controls Must Catch Up to Strategy

This is where the “how-to” work begins. Compliance professionals should begin by creating a structured inventory of AI use cases across the enterprise. That inventory should identify where AI is being used, what decisions it informs, what data it relies on, who owns it, and what risks it entails.

Once that inventory exists, risk tiering should follow. Not every AI use case carries the same compliance significance. A low-risk productivity tool does not need the same oversight as a system that affects investigations, third-party due diligence, customer interactions, financial reporting, or core operational decisions.

From there, the company can design controls proportionate to risk. High-impact uses of AI should have documented governance, human review where appropriate, testing protocols, escalation triggers, and monitoring requirements. The compliance team should be able to answer a simple question: where are the controls, and how do we know they work? That is the heart of the ECCP inquiry.
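The inventory-then-tiering sequence above can be sketched in code. The following Python snippet is purely illustrative: the fields, tier labels, and control lists are assumptions chosen for the example, not a prescribed taxonomy or any framework’s official schema, but it shows how an AI use-case inventory might drive proportionate control requirements.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    owner: str
    decisions_informed: str
    data_sources: list[str]
    affects_regulated_process: bool  # e.g., financial reporting, due diligence
    customer_facing: bool

def risk_tier(uc: AIUseCase) -> str:
    """Assign a tier that determines the depth of required controls."""
    if uc.affects_regulated_process:
        return "high"
    if uc.customer_facing:
        return "medium"
    return "low"

# Controls scale with tier: high-impact uses get governance, human
# review, testing, escalation, and monitoring; low-risk tools do not.
CONTROLS = {
    "high": ["documented governance", "human review", "testing protocol",
             "escalation triggers", "continuous monitoring"],
    "medium": ["periodic review", "escalation triggers"],
    "low": ["acceptable-use policy"],
}

# Hypothetical inventory entries for illustration only.
inventory = [
    AIUseCase("Third-party screening model", "Compliance", "vendor onboarding",
              ["sanctions lists"], affects_regulated_process=True,
              customer_facing=False),
    AIUseCase("Meeting summarizer", "IT", "none", ["internal notes"],
              affects_regulated_process=False, customer_facing=False),
]

for uc in inventory:
    tier = risk_tier(uc)
    print(f"{uc.name}: {tier} -> {CONTROLS[tier]}")
```

The point of the structure is that the compliance team can answer “where are the controls, and how do we know they work?” from the inventory itself, rather than reconstructing the answer use case by use case.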

Where NIST AI RMF and ISO/IEC 42001 Fit

This is also where the NIST AI Risk Management Framework and ISO/IEC 42001 become highly practical tools. NIST AI RMF helps organizations govern, map, measure, and manage AI risks. For compliance professionals, this provides a disciplined structure for identifying AI use cases, understanding impacts, assessing reliability, and managing response. It is especially useful in linking abstract AI risk to operational decision-making.

ISO/IEC 42001 brings management system discipline to AI governance. It focuses on defined roles, documented processes, control implementation, monitoring, internal review, and continual improvement. That makes it an excellent bridge between policy and execution. Together, these frameworks help operationalize the ECCP. The ECCP tells you what an effective compliance program should be able to demonstrate. NIST AI RMF helps structure the risk analysis. ISO 42001 helps embed those requirements into a repeatable governance process.

For CCOs, the practical lesson is clear: use these frameworks not as academic overlays, but as working tools to build ownership, documentation, testing, and accountability.

Insurance Is a Governance Input

Companies also need to stop treating insurance as an afterthought. D&O coverage should be considered a governance input, not merely a downstream purchase. If policy language is narrowing around AI-related claims, boards and compliance leaders need to understand what that means. What scenarios might raise disclosure-related allegations? Where is ambiguity in coverage? What assumptions has management made about protection that may no longer hold?

Compliance does not need to become an insurance specialist. But it does need to ensure that disclosure, governance, and risk transfer are aligned. If the company is making strong public claims about AI while carrying unexamined governance weaknesses and uncertain coverage, that is precisely the kind of mismatch that can trigger a crisis.

Closing the Gap Before It Becomes a Failure

The larger lesson is straightforward. AI governance is not simply about technology controls. It is about integration. It is about ensuring that what the company says, what it does, and what it can prove all line up. That is why the governance gap matters so much. It is the space where strategy outruns structure, where disclosure outruns evidence, and where confidence outruns control. For boards and compliance professionals, the task is to close that gap before it becomes a failure.

The companies that do this well will not necessarily be the ones moving the fastest. They will be the ones building documented, tested, monitored, and governed AI programs that stand up to regulatory scrutiny, investor pressure, and real-world disruption. That is not bureaucracy. That is the price of sustainable innovation.