Categories
Daily Compliance News

Daily Compliance News: November 17, 2025, The Protests Against Corruption Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest that are relevant to the compliance professional.

Top stories include:

  • Hundreds of thousands protest in the Philippines against corruption. (NPR)
  • Protests rage in Mexico over corruption. (NYT)
  • They really don’t want to pay her attorney’s fees. (WSJ)
  • How much did this pardon cost? (BBC)

The Daily Compliance News has been honored as No. 2 in the Best Regulatory Compliance Podcasts category.

Categories
Blog

20 Questions Every Board Should Ask About AI

In boardrooms around the world, one theme now appears with more regularity than cyber risk, M&A uncertainty, or even financial performance. That topic is artificial intelligence. Not the lofty philosophical debate about whether machines will overtake human judgment, but the immediate, pragmatic question every director is trying to solve: How do we oversee AI in a way that protects the enterprise, unlocks value, and keeps regulators out of the boardroom?

For compliance professionals, this is a defining moment. AI risk has become the newest frontier where the board relies heavily on the compliance function to guide them. Sometimes with clarity, sometimes with guardrails, and occasionally with a well-timed reality check. This is the type of risk that exposes governance gaps quickly, and the questions the board asks, or fails to ask, will determine whether the company thrives in the age of AI or becomes the next cautionary tale.

Today, I outline 20 critical questions that every board should ask about AI. Think of them not simply as oversight prompts but as governance accelerators. Each one creates visibility, accountability, and structure. Those three elements are the foundation of every effective compliance program.

1. What are our highest-impact AI use cases, and who owns them?

Boards cannot oversee what they cannot see. The first and arguably most crucial step is obtaining a clear inventory of where AI is embedded in operations, not at a conceptual level, but with owners, systems, and risk ratings attached. When accountability is vague, risk grows quietly in the background.
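To make this concrete, such an inventory can be sketched as one structured record per use case, with the accountability gaps surfacing automatically. This is an illustrative sketch only; the field names and sample entries are assumptions, not drawn from any particular governance framework.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str          # e.g., "invoice fraud scoring"
    system: str        # where the model is embedded
    owner: str         # accountable individual or function
    risk_rating: str   # e.g., "low", "medium", "high"

def unowned(inventory):
    """Return the use cases with no named owner -- the accountability gaps."""
    return [u.name for u in inventory if not u.owner.strip()]

# Hypothetical inventory entries for illustration.
inventory = [
    AIUseCase("invoice fraud scoring", "ERP plugin", "Finance Ops", "high"),
    AIUseCase("resume screening", "HR SaaS", "", "high"),
]
print(unowned(inventory))  # -> ['resume screening']
```

Even a listing this simple forces the question the board needs answered: for every system, who is accountable?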

2. How does AI support our strategic objectives and create measurable value?

AI is not a magic wand. It must support strategy, not distract from it. Boards should ask whether AI materially improves revenue, reduces cost, enhances safety, increases accuracy, or strengthens customer outcomes. If the answer is ambiguous, the company may be deploying AI for the wrong reasons.

3. What data powers these systems, and do we have the legal and ethical rights to use it?

Data is the fuel for AI, but not all data is created or sourced equally. Boards should expect clarity on licensing rights, privacy implications, and any limitations on the use and reuse of data. If data lineage is unclear, the company’s regulatory exposure may be far greater than it realizes.

4. How are we assessing and mitigating bias in both data and outcomes?

Bias is not only a fairness issue. It poses operational, legal, and reputational risks. Boards should see a methodology, not simply an aspiration. That includes periodic testing, remediation procedures, and documentation that can withstand scrutiny from regulators, auditors, or litigators.

5. What guardrails prevent employees from entering sensitive information into generative AI tools?

Most AI failures begin with human error. Boards should understand which safeguards are currently in place, including policies, training programs, and technical restrictions, and how the company tests their effectiveness.
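One such technical restriction can be sketched as a pre-submission filter that screens prompts before they reach a generative AI tool. The patterns below are illustrative assumptions only; a production deployment would rely on a proper data-loss-prevention engine rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real deployments use dedicated DLP tooling.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str):
    """Return the labels of any sensitive patterns found in a prompt."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

hits = screen_prompt("Customer SSN is 123-45-6789, please summarize the account.")
print(hits)  # -> ['ssn']
```

The point for the board is not the mechanism but its testability: a control like this can be exercised, measured, and reported on, which is exactly what "how the company tests their effectiveness" should mean.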

6. What is our model validation process before deployment?

Deploying unvalidated models, or worse, models validated exclusively by developers, invites significant risk. Boards should confirm that model validation includes accuracy testing, robustness checks, and cross-functional review involving compliance, legal, risk, and data science.

7. How do we monitor for model drift or degraded performance over time?

AI is not static. Models evolve, environments shift, and accuracy degrades. Ongoing monitoring is essential. Boards should request a drift detection plan that includes clear thresholds, well-defined triggers, designated responsible owners, and documented response actions.
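A drift detection plan of the kind described above can be sketched as a table of rules, each pairing a metric threshold with a named owner and a documented response. The metrics, thresholds, and owners here are hypothetical, chosen only to show the shape such a plan takes.

```python
from dataclasses import dataclass

@dataclass
class DriftRule:
    metric: str       # e.g., "accuracy"
    threshold: float  # trigger level below which the rule fires
    owner: str        # designated responsible owner
    action: str       # documented response action

def evaluate(rules, observed):
    """Compare observed metrics to thresholds; return the triggered rules."""
    alerts = []
    for r in rules:
        # A metric missing from the report defaults to its threshold (no alert).
        if observed.get(r.metric, r.threshold) < r.threshold:
            alerts.append((r.metric, r.owner, r.action))
    return alerts

rules = [DriftRule("accuracy", 0.90, "Model Risk Team", "retrain and revalidate")]
print(evaluate(rules, {"accuracy": 0.84}))
# -> [('accuracy', 'Model Risk Team', 'retrain and revalidate')]
```

When every rule carries an owner and an action, a drift flag is an escalation with a name attached rather than a dashboard light nobody watches.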

8. What is our incident response plan for AI failures, hallucinations, or data leakage?

AI failures rarely resemble traditional IT outages. They can be subtle, gradual, or hidden until significant damage occurs. A strong incident response plan clarifies roles, timelines, escalation paths, and expectations for communication with customers and regulators. Boards should insist on a rehearsal, not merely a promise.

9. How are we documenting AI-related decisions?

When regulators come calling, documentation becomes destiny. Boards should ensure that decisions tied to high-impact AI models are recorded in a manner that demonstrates thoughtful oversight, rather than blind reliance on automation.

10. Which AI regulatory regimes apply to us across global markets?

The regulatory landscape is evolving rapidly. The EU AI Act, sector-specific guidance from the United States, China’s AI regulations, and new frameworks emerging in Australia, Brazil, Singapore, and the United Kingdom are just a few examples. Boards should expect a regulatory heat map that outlines exposure, obligations, and enforcement priorities.

11. How do we manage the risk associated with third-party AI vendors and model providers?

Vendors introduce significant risk, particularly when foundation models or APIs change without notice. Contracts must include audit rights, IP protections, confidentiality provisions, and mechanisms for monitoring downstream risk. Boards should look for a vendor governance framework, not a spreadsheet with logos.

12. What training have employees received on the responsible use of AI?

Employees cannot follow principles they do not understand. Boards should expect role-based training with regular refreshers, testing, and usage monitoring, rather than one-time videos or superficial check-the-box modules.

13. How do we ensure human oversight for high-impact or high-risk decisions?

This is where compliance delivers real value. “Human in the loop” cannot simply mean that a person glanced at a dashboard. It means the right individuals reviewed the right decisions with clarity on when they are obligated to intervene.

14. What KPIs tell us whether our AI systems are performing safely and as intended?

Boards should expect dashboards containing more than accuracy scores. KPIs should include incident counts, time-to-remediation, drift flags, bias findings, and operational impacts. What the company measures reveals what the company values.
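As an illustration of such a dashboard, the KPIs named above can be rolled up from an incident log. The field names and figures below are invented for the example, not taken from any real program.

```python
from statistics import mean

# Hypothetical incident log; categories and durations are illustrative.
incidents = [
    {"kind": "hallucination", "hours_to_fix": 6},
    {"kind": "drift", "hours_to_fix": 30},
    {"kind": "bias_finding", "hours_to_fix": 12},
]

kpis = {
    "incident_count": len(incidents),
    "mean_hours_to_remediate": mean(i["hours_to_fix"] for i in incidents),
    "drift_flags": sum(1 for i in incidents if i["kind"] == "drift"),
    "bias_findings": sum(1 for i in incidents if i["kind"] == "bias_finding"),
}
print(kpis)
```

A rollup like this makes the board's question answerable in numbers: not "is the model accurate?" but "how often does it fail, and how quickly do we fix it?"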

15. What controls protect AI models and proprietary data from cyber threats?

AI significantly expands the attack surface. Models can be stolen, manipulated, or poisoned. Boards should see evidence of hardened access controls, encryption, logging, and monitoring, along with procedures for handling prompt-injection attacks and adversarial inputs.

16. How do we ensure transparency with customers, employees, and regulators when AI is used?

Transparency is becoming a regulatory expectation in many jurisdictions. Boards should verify whether AI disclosures are clear, accurate, and accessible to users, rather than being hidden in dense terms of service.

17. Are we over-relying on AI in any mission-critical processes?

AI concentration risk is real. When too many decisions or functions depend on a single model or vendor, the entire enterprise becomes fragile. Boards should evaluate whether redundancies exist and whether a single point of AI failure could create systemic risk.

18. What ethical principles guide our AI development and deployment?

Ethical frameworks only matter when they are embedded in daily processes and decision-making. Boards should expect evidence that ethical considerations influenced model selection, data sourcing, vendor evaluation, and deployment controls.

19. How is Internal Audit providing independent assurance over AI?

Internal Audit must play a role. AI risk touches processes, data, controls, vendors, and governance. These are areas Internal Audit already understands well. Boards should expect AI to be included in the annual audit plan, supported by a structured methodology.

20. What investments are required to manage AI risk in the next 12 months?

Boards appreciate transparency, not surprises. AI governance necessitates ongoing investment in personnel, skills, monitoring tools, testing environments, and data management capabilities. If AI grows without proportional governance funding, the company creates risk rather than value.

Why These Questions Matter Now

We are entering an era in which regulators expect boards to demonstrate active oversight of AI, just as they do for cybersecurity, financial controls, and data privacy. Gone are the days when AI could be treated as an IT experiment or a futuristic curiosity. Today, it sits squarely in the center of corporate governance. This means compliance oversight is required. For compliance professionals, this is an opportunity to step forward and provide structure. We can shape the conversation, establish frameworks, and guide leadership toward responsible adoption and implementation. These 20 questions give boards the clarity they need and give compliance the influence it deserves.

AI presents extraordinary potential, but potential without oversight becomes risk. Compliance professionals can ensure that the board asks the right questions, receives the necessary information, and puts the appropriate controls in place for effective oversight. In the age of AI, strong governance is not simply a competitive advantage. It is a survival strategy.

If you would like the full 20-question list, please leave us a voicemail.

Categories
Sunday Book Review

Sunday Book Review: November 16, 2025, The Robert McKee on Storytelling Edition

In the Sunday Book Review, Tom Fox considers books that would interest compliance professionals, business executives, or anyone curious about the subject. It could be books about business, compliance, history, leadership, current events, or any other topic that might interest Tom. Today, we review five top business books on storytelling written by Robert McKee.

  • Story by Robert McKee
  • Character by Robert McKee
  • Action by Robert McKee
  • Dialogue by Robert McKee
  • Storynomics by Robert McKee and Thomas Gerace
Categories
10 For 10

10 For 10: Top Compliance Stories For the Week Ending November 15, 2025

Welcome to 10 For 10, the podcast that brings you the week’s top 10 compliance stories in one episode. Tom Fox, the Voice of Compliance, brings you, the compliance professional, the stories you need to know to end your busy week. Sit back, and in 10 minutes, hear about the stories every compliance professional should be aware of from the prior week. Every Saturday, 10 For 10 highlights the most important news, insights, and analysis for the compliance professional, all curated by the Voice of Compliance, Tom Fox. Get your weekly fill of compliance stories with 10 For 10, a podcast produced by the Compliance Podcast Network.

This week’s stories include:

  • Right-wing EU lawmakers want a corruption inquiry opened. (Politico)
  • FinTech fraud scandal in Germany. (Bloomberg)
  • The $1tn Man tells workers 2026 will be the ‘hardest year’. (BusinessInsider)
  • Top Ukrainian energy ministers resign. (AP)
  • How China Evaded Sanctions to Obtain Chips. (WSJ)
  • Stupid is as stupid does: all South Korean visas in GA reinstated. (NYT)
  • Ex-Glencore staff all plead not guilty. (Bloomberg)
  • Pitch rigging in baseball brings indictments. (ESPN)
  • Corruption saps growth in the Philippines. (Bloomberg)
  • The Trump Administration blocks Gunvor’s takeover of Russia’s energy assets. (WSJ)

You can check out the Daily Compliance News for four curated compliance and ethics-related stories each day, here.

Connect with Tom 

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
ACI FCPA Conference 2025

ACI-FCPA Conference Speaker Preview Series – Chuck Duross on Internal Investigation Strategies

In this episode of the ACI-FCPA and Global Anti-Corruption Conference Speaker Podcasts series, Chuck Duross discusses his panel at the event, “Rethinking Your Internal Investigations Playbook: Adapting Processes and Protocols to New Risk Realities from Anti-Corruption to Fraud, Trade, and Beyond.”

Some of the issues the panel will discuss are:

  • Reassessing investigation triggers;
  • Better identifying risk signals;
  • Coordinating investigations across multiple risk functions.

I hope you can join me at the ACI–FCPA Conference. This year’s event will take place on December 3-4 at the Gaylord National Resort & Convention Center in National Harbor, Maryland, near Washington, D.C. The lineup of this year’s event is simply first-rate, featuring some of the top FCPA professionals, white-collar attorneys, and compliance practitioners in the field.

The 2025 program is being completely redesigned to help your organization stay agile, responsive, and ahead of the curve. Expect a dynamic agenda shaped by real-world priorities, practical takeaways, and the most cutting-edge thinking in compliance—led by a faculty of global practitioners with boots on the ground, encountering the very risks that come across your desk.

Please join me at the event. For information on the event, click here. Listeners of this podcast will receive a discount by using the code D10-999-CPN26.

Categories
The Hill Country Podcast

Student Voices of the Hill Country: A Schreiner Student Pod Series – Episode 4: Skip It or Play It: Understanding the Power of Influencer Advertising

Welcome to a special production of the Hill Country Podcast, which is a 12-part series collaboration with the communicators of tomorrow from right here in the Texas Hill Country. The Hill Country Podcast and the Texas Hill Country Podcast Network have partnered with the talented students from Dr. Adolfo Mora’s Communications class at Schreiner University to turn the microphone over to them. Join us each episode as these fresh voices explore critical topics, challenge modern ideas, and provide their unique perspectives on the world of communication.

In this episode of ‘Would You Skip It,’ hosts Pilar and Brianna delve into the captivating world of social media and influencer advertising. They explore how parasocial relationships—one-sided bonds formed between audiences and influencers—impact purchasing decisions. The episode breaks down three key aspects: perceived intimacy, authenticity, and emotional investment, using examples like Emma Chamberlain, Mr. Beast, and Selena Gomez. The discussion aims to unravel why influencer ads often feel more genuine and effective compared to traditional advertisements. By understanding these dynamics, listeners can more effectively navigate the digital marketing landscape and make more informed decisions.

Key highlights:

  • Understanding Parasocial Relationships
  • Perceived Intimacy in Influencer Marketing
  • Authenticity and Trust in Influencer Ads
  • Emotional Investment and Influencer Loyalty
  • Future of Influencer Advertising

Other Hill Country Focused Podcasts

Hill Country Authors Podcast

Hill Country Artists Podcast

Texas Hill Country Podcast Network

Categories
Fox on Podcasting

Fox on Podcasting – Navigating Podcasting and Digital Marketing in Healthcare with Eva Sheie

Join Tom Fox as he explores the world of podcasting, and get ready to be inspired to start your own podcast. In this episode, Tom Fox sits down with Eva Sheie, founder of The Axis, to discuss her dynamic career journey from a public radio enthusiast and professional musician to a digital marketing expert specializing in healthcare.

Eva shares her experiences working with healthcare providers to build trust with patients through digital platforms, especially through the innovative use of podcasts. She also delves into the ethical and compliance challenges in healthcare marketing, her personal motivations behind founding her own company, and the importance of storytelling in building brand trust. Tune in for insights on integrating podcasts into marketing strategies and transforming patient engagement in the healthcare industry.

Key Highlights:

  • Eva’s Journey: From Public Radio to Digital Marketing
  • Discovering the Power of Podcasting
  • Founding The Axis: A Leap of Faith
  • Building Trust Through Storytelling in Healthcare
  • Ethical Boundaries and Compliance in Healthcare Marketing 

Resources:

Eva Sheie on LinkedIn

The Axis

Artwork

Elaine Capers

Art by Elaine

Categories
Daily Compliance News

Daily Compliance News: November 14, 2025, The Hardest Year Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest that are relevant to the compliance professional.

Top stories include:

  • The $1tn Man tells workers 2026 will be the ‘hardest year’. (BusinessInsider)
  • Top Ukrainian energy ministers resign. (AP)
  • How China Evaded Sanctions to Obtain Chips. (WSJ)
  • Stupid is as stupid does: all South Korean visas in GA reinstated. (NYT)

The Daily Compliance News has been honored as No. 2 in the Best Regulatory Compliance Podcasts category.

Categories
AI Today in 5

AI Today in 5: November 14, 2025, The AI for Cyber Attacks Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, I will bring you five AI stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five AI stories from the business world, compliance, ethics, risk management, leadership, or general interest.

  1. Pitching AI as Mission Critical in wealth management. (FinTechGlobal)
  2. Chinese hackers reportedly used Anthropic for cyberattacks. (WSJ)
  3. Are BODs ready for AI? (WSJ)
  4. Fraud in AI data center planning. (FT)
  5. BODs seek AI experts. (CCI)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Compliance Tip of the Day

Compliance Tip of the Day – Winnie the Pooh and Compliance Week – Winnie the Pooh as CECO (Think, Think, Think)

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our goal is to provide you with bite-sized, actionable tips to help you stay ahead in your compliance efforts. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

We conclude our week of fun in compliance by looking at how Winnie the Pooh and his friends inform your compliance program. Today, we reflect on lessons from Winnie the Pooh for the CECO.

For more information on this topic, refer to The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.