
2 Gurus Talk Compliance – Episode 73 – The Technology Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

  • Stoicism without self-examination is moral bankruptcy. (FT)
  • Is China more stable for companies than the US? (FT)
  • JPMorgan to monitor junior bankers’ hours. (FT)
  • EDNY says fighting the appeal of the FIFA corruption case is not worth the resources. (Reuters)
  • Judge questions DOJ’s decision to drop Halkbank AML case. (Bloomberg)
  • One CEP to Rule Them All (CCI)
  • Banning Sports Betting on Prediction Markets (WSJ)
  • US Regulatory Fines Plummet (CCI)
  • You Need an Automated Compliance Program (Volkov Blog)
  • Florida Man-Dress for Arrest (NBC Miami)

Resources:

Kristy Grant-Hart on LinkedIn

Prove Your Worth

Tom Fox on Instagram, Facebook, YouTube, Twitter, and LinkedIn


Fox on Podcasting: Rural Podcasts as Civic Institutions: Trust, Storytelling, and Sustainable Local Media

Join Tom Fox as he explores the world of podcasting and get ready to be inspired to start your own podcast. In this episode, Tom takes a solo turn behind the mic to advocate for rural podcasts and rural podcast networks, arguing that a rural podcast network can function as a civic institution in rural America. Drawing on research and his move to rural West Texas, he describes a widening gap in human-interest storytelling as NPR affiliates, public radio stations, and other local media face funding pressure, programming cuts, and retrenchment. Tom contends that rural podcasters have a competitive edge in proximity, context, and community trust, which gives them credibility that outside media cannot replicate. He frames the opportunity as both mission-driven and commercial, citing local sponsor ecosystems (banks, hospitals, colleges, chambers, foundations, tourism, regional firms, agricultural suppliers, and small businesses). He emphasizes consistency over scale to build loyalty and to create an archive of community memory that complements, rather than replaces, legacy institutions.

Key highlights:

  • A Storytelling Void Opens
  • Why Local Proximity Wins
  • Trust as Competitive Edge
  • Consistency Builds Institutions
  • Podcasts as Civic Infrastructure

Resources:

Artwork: Elaine Capers (Art by Elaine)

Tom Fox on Instagram, Facebook, YouTube, Twitter, and LinkedIn


AI Today in 5: March 27, 2026, The No to AI Data Centers Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. Customer service AI improving fintech. (Global Banking & Finance)
  2. GenAI for healthcare. (The Hastings Center)
  3. Local opposition is slowing data center construction. (NYT)
  4. Corporate AI adoption outpacing compliance. (The Global Legal Post)
  5. Agentic AI transforming compliance ROI. (FinTechGlobal)

For more information on the use of AI in compliance programs, my new book, Upping Your Game, is available for purchase on Amazon.com.


Daily Compliance News: March 27, 2026, The Meta Moment Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • The jury spanked Meta and YouTube. (WSJ)
  • Former Taipei Mayor sentenced to 17 years for corruption. (Reuters)
  • A corruption prosecution to benefit Rubio? (NYT)
  • EY sets aside a record £188 million for fines and penalties. (FT)

AI in Financial Services in 5 Stories – Week Ending March 27, 2026

Welcome to AI in Financial Services in 5 Stories, a practical weekly roundup of the five most important AI developments affecting banking, insurance, payments, asset management, and fintech. Each Friday, Tom Fox breaks down the top stories that matter most through the lenses of compliance, risk management, governance, and business strategy. Designed for compliance professionals, executives, legal teams, and financial services leaders, the show goes beyond headlines to explain why each development matters in a highly regulated industry. The result is a concise weekly briefing that helps listeners stay current on AI innovation while asking sharper questions about oversight, accountability, and trust.

This week’s stories include:

  1. Customer service AI improving fintech. (Global Banking & Finance)
  2. Solaris to become the first EU all-AI bank. (FinTech Futures)
  3. Moving from detection to prevention using AI in FinTech. (FinTechGlobal)
  4. FCA evolving on payment priorities. (FinTech Magazine)
  5. Future-proofing AI for the Agentic AI era. (FinTech Weekly)

For more information on the use of AI in compliance programs, my new book, Upping Your Game, is available for purchase on Amazon.com.


AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – March 27, 2026

Welcome to AI in Healthcare in 5 Stories, a weekly briefing on the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories in clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending March 27, 2026, include:

  1. GenAI for healthcare. (The Hastings Center)
  2. Responsible AI in healthcare. (Cisco)
  3. How Oracle is transforming healthcare. (CloudWars)
  4. 1 in 3 adults is using chatbots for healthcare. (Modern Healthcare)
  5. AI in healthcare administration. (The AI Journal)

For more information on the use of AI in compliance programs, Tom Fox’s new book, Upping Your Game, is available for purchase on Amazon.com.


AI Governance and Speak-Up Culture: The Earliest Warning System May Already Be in Your Workforce

There is a hard truth about AI governance that too many companies are still avoiding: the first people to spot an AI problem are usually not board members, not senior executives, and not even the governance committee. It is the employee using the tool, reviewing the output, dealing with the customer, watching the workflow break down, or seeing the machine produce something that feels off. That is why AI governance is not only about policies, models, controls, and oversight structures. It is also about culture. More specifically, it is about a culture of speaking up.

If employees see an AI tool making questionable recommendations, generating inaccurate summaries, mishandling sensitive information, producing biased outcomes, or being used beyond its approved purpose, do they know that this is a reportable issue? Do they know where to raise it? Do they believe someone will listen? Do they trust that raising a concern will help rather than harm their career? Those are not soft questions. They are governance questions.

In anti-corruption compliance, we have long since learned that hotlines, reporting channels, and anti-retaliation protections are not mere ethical ornaments. They are detection mechanisms. They are how organizations surface risks before they become scandals. AI governance now needs the same mindset. If your employees are your earliest warning system, then your speak-up culture may be one of your most important AI controls.

Why Employees See AI Failures First

AI rarely fails in the abstract. It fails in use. A board deck may describe a tool in elegant terms. A vendor demo may look polished. A pilot may be carefully supervised. But once a system enters daily operations, it interacts with real people, real data, real pressures, and real shortcuts. That is when the problems begin to show themselves.

An employee may notice that a tool is confidently wrong. A manager may realize that staff are over-relying on generated summaries without checking the source material. Someone in HR may see that a screening tool is producing odd results. A sales employee may notice that a customer-facing chatbot is inventing answers. A compliance analyst may find that an AI-assisted monitoring process is missing obvious red flags. A procurement professional may discover that a vendor quietly changed a feature set or data practice.

In each of those examples, the problem shows up at the point of use, not at the point of approval. That is why the old compliance lesson still applies: the people closest to the work are often closest to the risk. In AI governance, that means employees are often the first line of detection. But detection is useless if the culture tells them to keep their heads down.

The Governance Blind Spot

Many organizations are investing significant effort in AI principles, governance committees, acceptable-use policies, and risk classification. That is all important. But many of these programs have a blind spot. They are built as if AI risk will reveal itself only through formal testing, audit reviews, or leadership dashboards. It will not.

Some AI failures will surface through monitoring and controls. But many will first appear as employee discomfort, confusion, skepticism, or observation. Someone will notice that a tool is being used in a way that feels wrong. Someone will catch a factual error before it leaves the building. Someone will realize that human review is not actually happening. Someone will see mission creep. Someone will spot a gap between policy and practice.

If the governance model does not actively encourage employees to raise those concerns, the company has built an AI oversight program with one eye closed. That is a dangerous place to be because AI risk is often cumulative. A small issue ignored today becomes a larger issue tomorrow. An inaccurate output tolerated in a low-stakes setting becomes normalized in a higher-stakes one. A quietly expanded use case becomes a de facto business process. Silence is how minor flaws become systemic failures.

Speak-Up Culture as an AI Control

Let us be clear about terms. Speak-up culture is not simply a hotline number posted on the intranet. It is the set of signals an organization sends about whether employees are expected, supported, and protected when they raise concerns.

In the AI context, a healthy speak-up culture means employees understand that reporting concerns about AI outputs, use cases, data handling, or control failures is part of responsible business conduct. It means managers know that AI concerns are not “just tech issues” to be brushed aside. It means investigators and compliance teams are prepared to triage and assess AI-related reports intelligently. It means retaliation protections apply as much to someone challenging a machine-enabled workflow as they do to someone reporting bribery, harassment, or fraud.

This matters because AI can create a special kind of silence. Employees may hesitate to challenge a system that leadership has praised as innovative. They may worry that questioning the tool makes them sound resistant to change or insufficiently sophisticated. They may assume someone more senior has already validated the output. They may think, “Surely the machine knows better than I do.” That is exactly the kind of cultural dynamic compliance should distrust.

Machines do not deserve deference. Controls deserve scrutiny. A mature AI governance program, therefore, needs to treat employee reporting as a formal part of its control environment. Speak-up culture is not adjacent to AI governance. It is part of AI governance.

What CCOs Should Be Asking

If you are a Chief Compliance Officer, there are several questions you should be asking right now.

First, do employees understand that AI-related concerns are reportable? Many organizations have not made this explicit. Staff know they should report harassment, bribery, theft, and retaliation. They may not know whether to report unreliable AI output, a suspicious recommendation, a data input concern, or a business team using a tool outside its approved scope. If you have not told them, do not assume they know.

Second, are your reporting channels equipped to receive AI-related concerns? Hotline categories, case-intake forms, and triage protocols may need to be updated. If an employee reports that an AI tool is generating misleading outputs in a regulated workflow, who receives that report? Compliance? Legal? Security? IT? HR? Some combination? If ownership is unclear, reports will stall, and stalled reports teach employees not to bother.

Third, are managers trained to respond appropriately when AI concerns are raised informally? This is critical. Many concerns will not begin in a hotline. They will begin in a meeting, a hallway conversation, a team chat, or an email to a supervisor. If the manager shrugs, dismisses, or minimizes the issue, the detection system fails before it starts.

Fourth, are anti-retaliation protections being reinforced in the AI context? Employees who challenge AI use may be questioning a high-profile project, a popular vendor, or a senior executive’s initiative. That can create subtle pressure to stay quiet. Compliance should be ahead of that dynamic, not behind it.

Building an AI Speak-Up Framework

What does a practical approach look like?

The first step is to define what types of AI concerns employees should raise. Be concrete. Tell them to report suspected misuse of AI tools, outputs that appear inaccurate or biased, use of AI in sensitive decisions without proper review, input of restricted data into unapproved systems, unauthorized expansion of use cases, missing human oversight, and vendor or system changes that appear to alter risk.

The second step is to build AI examples into training and communication. Employees need realistic scenarios, not vague encouragement. Show them what an AI red flag looks like. Show them what “raising a hand” looks like. Show them where to go and what happens next.

The third step is to update the hotline and investigations protocols. Add intake categories if needed. Develop triage guidance. Decide when AI matters should be handled as compliance cases, operational incidents, model-risk issues, or cross-functional reviews. The goal is not bureaucracy. The goal is clarity.

The fourth step is to train managers as escalation points. In every effective compliance program, middle management is the translation layer between policy and daily operations. AI governance is no different. Managers need to know when a concern can be resolved locally, when it must be escalated, and when the pattern itself suggests a control problem.

The fifth step is to close the feedback loop. Employees are more likely to report concerns when they believe reporting leads to action. That does not mean revealing confidential case details. It means communicating that the company takes these issues seriously, investigates them, learns from them, and improves controls as needed. Silence from management breeds silence from employees.
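Pulling together the first and third steps above, here is a minimal sketch, in Python, of what AI-specific intake categories and first-line triage routing could look like. Everything in it is illustrative: the category names, the record fields, and the routing table are assumptions meant to make the idea concrete, not a reference design for any particular case-management system.

```python
from dataclasses import dataclass
from enum import Enum


class AIConcern(Enum):
    """Illustrative intake categories for AI-related reports."""
    INACCURATE_OUTPUT = "output appears inaccurate or misleading"
    BIASED_OUTCOME = "output appears biased or discriminatory"
    RESTRICTED_DATA = "restricted data entered into an unapproved system"
    SCOPE_CREEP = "tool used beyond its approved purpose"
    MISSING_REVIEW = "required human review is not happening"
    VENDOR_CHANGE = "vendor changed a feature set or data practice"


@dataclass
class Report:
    """A single employee-raised AI concern, however it arrived."""
    concern: AIConcern
    business_unit: str
    description: str


def triage_owner(report: Report) -> str:
    """Route a report to a named first-line owner so it never stalls.

    The routing table below is a hypothetical starting point; each
    company must decide ownership (compliance, legal, security, IT)
    for itself.
    """
    routing = {
        AIConcern.RESTRICTED_DATA: "Security",
        AIConcern.VENDOR_CHANGE: "Procurement and Compliance",
    }
    # Default owner: Compliance triages, then convenes others as needed.
    return routing.get(report.concern, "Compliance")
```

The point of even a toy routing table is the design choice behind it: every AI intake category has a named owner, so no report sits unassigned while functions debate whose problem it is.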

What to Monitor in an AI Speak-Up Program

Here is where compliance can bring its trademark discipline. Track the volume and type of AI-related concerns. Look for concentration by business unit, geography, or tool. Monitor whether concerns are coming in through formal hotlines or informal channels. Review time to triage and time to resolution. Look for patterns involving data handling, output reliability, human review failures, or scope creep. Compare the reported concerns with the company’s list of approved use cases. If you see repeated confusion or repeated exceptions, that tells you something important about your governance design.
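As a sketch of that discipline, assuming each case record carries received and triaged timestamps plus a business unit and category (the field names here are hypothetical and should be adapted to whatever your case-management system exports), the core metrics reduce to a few lines of Python:

```python
from collections import Counter


def speak_up_metrics(cases: list[dict]) -> dict:
    """Compute illustrative AI speak-up metrics from case records.

    Assumes each case dict has 'received' and 'triaged' datetimes,
    plus 'business_unit' and 'category' strings.
    """
    hours_to_triage = [
        (c["triaged"] - c["received"]).total_seconds() / 3600
        for c in cases
        if c.get("triaged")
    ]
    return {
        "total_reports": len(cases),
        "avg_hours_to_triage": (
            sum(hours_to_triage) / len(hours_to_triage)
            if hours_to_triage else None
        ),
        # Concentration by unit or category flags hot spots worth review.
        "by_business_unit": Counter(c["business_unit"] for c in cases),
        "by_category": Counter(c["category"] for c in cases),
    }
```

In a mature program, these numbers would feed the trend and concentration reviews described above, reported on the same cadence as any other compliance metric.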

Just as importantly, look for the absence of reporting. If your company has materially deployed AI tools and no employee has ever raised a concern, I would not automatically celebrate. I would ask whether employees know what to report, trust the channels, or believe leadership wants candor. In compliance, no reports can mean no problems. It can also mean no trust. Wise CCOs know the difference is everything.

Why This Is Good for Business

Some executives still hear “speak-up culture” and think of delay, friction, and complication. I hear something different. I hear early detection, faster correction, and better decision-making.

A workforce that feels empowered to raise AI-related concerns provides the company with a real-time sensing mechanism. It catches problems before they scale. It surfaces control failures before regulators, plaintiffs’ lawyers, journalists, or customers do. It gives management better information. It helps the board exercise real oversight. Most of all, it creates a culture where innovation is more sustainable because people are not afraid to challenge what does not look right. That is not anti-innovation. That is responsible innovation.

Compliance has always been at its best when it helps the business move fast without becoming reckless. Speak-up culture does exactly that. It does not tell employees to fear AI. It tells them to use judgment, raise concerns, and protect the enterprise when the technology does not behave as expected.

Final Thoughts

Every company deploying AI should ask itself a simple question: Who will notice first when something goes wrong? In many cases, the answer is your employees. The next question is even more important: have you built a culture where they will say something?

If the answer is uncertain, then your AI governance program has a serious weakness. You may have policies. You may have committees. You may have training modules and vendor reviews. But if employees do not feel empowered to raise a hand when they see a problem, then one of your most valuable detection controls is missing in action.