Categories
GSK in China: 13 Years Later

GSK in China: 13 Years Later – After the Humphrey Verdict: Managing Third-Party Risk When You Can’t Verify

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. In this episode, we take a deep dive into the 2013 GSK China bribery scandal and examine why it remains one of the most important case studies in corporate compliance, governance, and culture. Our hosts are Timothy and Fiona.

The episode examines how multinational companies should manage third-party relationships and compliance in opaque markets like China when traditional intelligence-gathering is curtailed by privacy laws. It draws on the case of corporate investigators Peter Humphrey and his wife, Yu Yingzeng, who were hired by GSK to investigate a sex-tape scandal but were convicted and imprisoned for purchasing Chinese citizens’ personal data. The discussion highlights how the verdict created operational uncertainty for due diligence, M&A, supplier vetting, and anti-bribery efforts, and notes Humphrey’s claim that GSK withheld the fact that it faced internal whistleblower allegations of corruption. Drawing on DOJ expectations and an SCCE framework, it argues for shifting from “vet and forget” to continuous third-party management across five steps, reinforcing business justification, questionnaires, contracts, and ongoing oversight with mitigations such as capped commissions, detailed invoice review, early audits, and the use of public records and in-person interviews.

Key highlights:

  • Why Verification Matters
  • Privacy Laws Change Everything
  • When Partners Refuse Disclosure
  • Build Your Own Intelligence
  • Contract Controls and Oversight

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: the voices of the hosts, Timothy and Fiona, were created by Notebook LM based upon text written by Tom Fox

Categories
AI Today in 5

AI Today in 5: April 23, 2026, The AI MAGA Influencer Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Agentic AI reshaping bank compliance. (FinTechGlobal)
  2. Compliance First AI for AML. (FinTechGlobal)
  3. Monetizing AI and compliance as a service. (CRN)
  4. Using AI to personalize health care. (Forbes)
  5. The top MAGA influencer is an AI created in India. (NYPost)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: April 23, 2026, The Who Gets Profit Disgorgement Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Spain seeks to close the investigation into the wife of the Spanish PM. (Reuters)
  • Anthropic is investigating unauthorized use of Mythos. (FT)
  • Crypto billionaire accuses the Trump family’s World Liberty of ‘criminal extortion’. (WSJ)
  • Matt Levine explores who should get disgorged profits. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Kerr250 Podcast

The Kerr250 Podcast: Kerr250 – Guadalupe Bank Launches ‘Patriot Cash Reserve’ Account to Commemorate America’s 250th Birthday

Kerr250 is a community-focused podcast dedicated to celebrating America’s 250th birthday through the people, businesses, traditions, and events of Kerr County. As our nation marks this historic anniversary on July 4, 2026, Kerr250 will highlight local celebrations and community efforts that bring this milestone to life. Each episode will feature conversations with local leaders, business owners, organizers, volunteers, and proud citizens who are helping make Kerr County a vibrant part of this national moment. The podcast will explore how history, patriotism, service, and community pride come together in one county that believes America’s strength has always come from its people. Kerr250 is where Kerr County honors the past, celebrates the present, and helps inspire the future. In this episode, Tom Fox visits with AJ Rodriguez, Chairman and CEO of Guadalupe Bank, as part of a series on how local businesses plan to mark the nation’s 250th anniversary on July 4, 2026.

Rodriguez describes the bank’s new “Patriot Cash Reserve” checking account, created after engagement with Patriot Academy’s constitutional citizenship course, designed to commemorate the 250th anniversary and offer customers an interest-bearing transaction option for new money only. Balances from $75,200 to $249,999 pay the federal funds rate minus 2% (about 1.6%), with a free first order of checks and certain fees waived based on balances; balances over $250,000 pay the federal funds minus 1% (about 2.6%). Rodriguez discusses optimism about increased national unity and invites other businesses to collaborate.

Highlights include:

  • Patriot Cash Reserve
  • Meaning of July Fourth
  • Bicentennial Memories

Resources:

Kerr250 website

Guadalupe Bank

Categories
Pod and Port

Pod and Port: Podcasting, Social Media and Yacht Rock – Instagram’s Long Game and the Yacht Rock Power of Kenny Loggins

In Episode 2 of Pod & Port: Podcasting, Social Media and Yacht Rock, Tom Fox and Jeff Dwoskin explore a major shift in how creators, marketers, podcasters, and business owners should think about Instagram: it is no longer just a closed social platform. With stronger Google indexing, Instagram content can now have a much longer life cycle, which means captions, keywords, file names, and value-driven content matter more than ever.

Jeff explains why this changes the game for creators. Instead of treating Instagram as a short-lived post-and-forget channel, listeners are encouraged to think of it as part of a broader search and content strategy. The discussion covers evergreen content, smarter SEO, descriptive file naming, calls to action, and repurposing content to keep valuable work visible without exhausting your audience.

Then the show turns to Yacht Rock, with Tom taking the lead on Kenny Loggins. From his transition out of Loggins and Messina to defining Yacht Rock tracks such as “This Is It,” “Whenever I Call You Friend,” and “Heart to Heart,” Tom makes the case that Loggins was a major force in the genre. The conversation also highlights how Loggins successfully leaped into the video era through blockbuster soundtrack hits like “Footloose,” “Danger Zone” from Top Gun, and “I’m Alright” from Caddyshack, showing how adaptability can extend both relevance and reach.

Key takeaways:

  • Instagram content now has a longer shelf life.
  • SEO is now part of social media strategy.
  • Descriptive file names matter.
  • Repurpose with intention.
  • Kenny Loggins shows the power of reinvention.

Resources

Jeff

Jeff Dwoskin on LinkedIn

Stampede Social Website

Kenny Loggins on Spotify

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

The 30-Day Shadow-AI Amnesty: Turning Hidden Risk into Governance

There is a hard truth that every Chief Compliance Officer and compliance professional needs to confront right now: artificial intelligence is already inside your organization, whether it arrived through formal approval channels or not.

Employees are testing tools independently. Business teams are adopting AI-enabled workflows without waiting for a governance committee to approve them. Vendors are embedding AI into products and services faster than many companies can update their policies. Somewhere inside that mix, decisions are being influenced by systems that may not be documented, reviewed, or governed in any meaningful way. That is the world of Shadow-AI.

It is not necessarily malicious. In many cases, it is simply the predictable result of innovation outpacing governance. But from a compliance perspective, that does not make it any less risky. Under the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP), the question is not whether management intended to allow uncontrolled use of AI. The question is whether the company can identify emerging risks, implement controls that address them, encourage internal reporting, and demonstrate that the program works in practice.

That is why the 30-day Shadow-AI Amnesty matters. Properly designed, it is not an admission of failure. It is proof of governance. It is a practical mechanism for surfacing hidden risk, reinforcing a speak-up culture, and creating the operational baseline needed to govern AI over the long term.

You Cannot Govern What You Cannot See

The first challenge with Shadow-AI is visibility. Too many organizations still assume that AI risk begins with approved enterprise systems. That assumption is already outdated. The real risk universe is broader. It includes employees using public generative AI tools for drafts or analysis. It includes business units creating internal automations that affect workflows. It includes third-party applications with embedded AI functionality that have not been separately assessed. It includes pilots that started small and quietly became part of day-to-day decision-making.

This is exactly the sort of problem the ECCP is built to address. The DOJ asks whether a company’s risk assessment is dynamic and updated in light of lessons learned and changing business realities. Shadow-AI embodies the changing business reality. If your risk assessment fails to account for hidden AI use, your compliance program is lagging behind the business.

A 30-day amnesty closes that gap by creating a controlled mechanism to identify what is already happening. It allows the company to convert unknown risk into known risk and known risk into governable risk. In other words, it turns hidden risk into a governance advantage.

Why Amnesty Works Better Than Enforcement at the Start

One of the smartest features of a Shadow-AI Amnesty is that it begins with disclosure rather than punishment. If you want employees to report unapproved AI use, you need to give them a credible reason to come forward. If the first signal from compliance is that disclosure will trigger blame, discipline, or reputational harm, employees will remain silent. The result will be exactly the opposite of what the compliance function needs. This is where the amnesty becomes a culture-and-speak-up control.

The ECCP places significant emphasis on culture, internal reporting, and non-retaliation. Prosecutors are instructed to evaluate whether employees feel comfortable raising concerns and whether the company responds appropriately when they do. A well-structured amnesty aligns directly with those expectations because it tells employees that transparency is valued, that reporting is encouraged, and that remediation matters more than finger-pointing.

That does not mean there are no consequences for reckless or prohibited conduct. It means the organization recognizes that the first step toward control is visibility. The safe-harbor period exists to gather information, assess risk, and bring informal AI activity into a formal governance structure. That is not a weakness. That is smart compliance design.

Designing the Amnesty for Participation

The success of a Shadow-AI Amnesty depends heavily on its design. If the process is burdensome, legalistic, or overly technical, participation will be limited. The design principle should be simple: lower the barrier to disclosure while collecting enough information to support triage.

A short intake process is essential. Employees should be able to disclose a tool, workflow, or use case quickly. The company needs basic information: what the tool is, who owns it, where it is used, what data it touches, what decisions it may influence, and whether any controls already exist. This is not the stage for a full investigation. It is the stage for building inventory and context.
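
The intake described above can be sketched as a simple structured record. This is a minimal illustration, not a prescribed schema; the field names and the example values are hypothetical and would need to be tailored to the organization’s own taxonomy.

```python
from dataclasses import dataclass, field

# Hypothetical intake record for a Shadow-AI amnesty disclosure.
# Fields mirror the basics named in the text: what the tool is, who owns it,
# where it is used, what data it touches, what decisions it may influence,
# and whether any controls already exist.
@dataclass
class AIDisclosure:
    tool_name: str
    owner: str
    business_unit: str
    data_touched: list[str] = field(default_factory=list)
    decisions_influenced: list[str] = field(default_factory=list)
    existing_controls: list[str] = field(default_factory=list)

# Example disclosure an employee might submit in minutes, not hours.
disclosure = AIDisclosure(
    tool_name="public LLM chatbot",
    owner="marketing analyst",
    business_unit="Marketing",
    data_touched=["draft press releases"],
    decisions_influenced=["external messaging"],
)
```

Keeping the record this lean is the point: it captures just enough context to support triage without turning disclosure into an investigation.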

That approach is fully consistent with good governance practice. The NIST AI Risk Management Framework emphasizes understanding context, mapping use cases, and establishing governance for the actual use of AI. ISO/IEC 42001 similarly reflects the principle that effective AI management begins with a defined scope, documented processes, and clear responsibility. You cannot apply either framework if you do not know what systems or uses exist in the first place. The amnesty, then, is not a side exercise. It is the front door to a credible AI governance program.

Triage Is Where Governance Becomes Real

Once disclosures start coming in, the company must shift from intake to triage. This is where design and control become critical. Not every disclosed use of AI presents the same level of risk. Some uses may be low-risk productivity aids. Others may influence hiring, investigations, financial reporting, customer-facing communications, or core operational decisions. The compliance function needs a disciplined way to distinguish between them.

A risk-based triage model should ask a few straightforward questions. Does the AI influence a decision that affects employees, customers, or regulated outcomes? Does it involve sensitive or confidential data? Is there human review, or is the output used automatically? Is the use visible externally? Is it part of a business-critical workflow? What controls exist today?
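
Those questions can be turned into a rough scoring rubric. The sketch below is illustrative only: the weights and thresholds are assumptions for demonstration, not a recommended policy, and any real model would be calibrated to the company’s own risk appetite.

```python
# Illustrative risk-based triage: each "yes" answer to one of the questions
# in the text adds weight. Weights and cutoffs are hypothetical.
def triage(answers: dict) -> str:
    weights = {
        "affects_regulated_outcomes": 3,  # employees, customers, regulated decisions
        "sensitive_data": 3,              # confidential or personal data involved
        "no_human_review": 2,             # output used automatically
        "externally_visible": 1,          # use visible outside the company
        "business_critical": 2,           # part of a business-critical workflow
        "no_existing_controls": 1,        # nothing in place today
    }
    score = sum(w for q, w in weights.items() if answers.get(q))
    if score >= 6:
        return "high"    # escalate for formal review, possibly pause the use
    if score >= 3:
        return "medium"  # require controls and documented human oversight
    return "low"         # log in the registry, revisit periodically

# A disclosed use touching sensitive data with no human review lands in the
# middle tier under these assumed weights.
risk = triage({"sensitive_data": True, "no_human_review": True})
```

The value of even a crude rubric is consistency: two similar disclosures get similar treatment, which is exactly the credibility the text says employees need to see.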

These are compliance questions. They are also ECCP questions because they go directly to risk assessment, resource allocation, and whether controls are tailored to the realities of the business. This is also where culture and control begin to work together. A company that invites disclosure but fails to triage intelligently will lose credibility. Employees need to see that reporting leads to measured, thoughtful governance, not chaos. The point is not to shut everything down. The point is to classify, prioritize, and respond appropriately.

Culture as a Control

One of the most important themes in the modern compliance conversation is that culture is not soft. Culture is a control. That is especially true with Shadow-AI. In many organizations, the first people to know that a workflow has drifted outside approved channels are the employees using it every day. The first people to spot unreviewed prompts, risky data inputs, or overreliance on AI-generated outputs are often not senior executives or formal governance committees. They are line employees, managers, analysts, and business operators.

If those people do not believe they can report what they see without retaliation or embarrassment, then the organization loses one of its most effective early warning systems. A Shadow-AI Amnesty sends a powerful signal. It says the company would rather know than remain in the dark. It says that governance begins with honesty. It says that disclosure is part of doing the right thing.

Under the ECCP, that matters. A culture that encourages internal reporting and constructive remediation is a hallmark of an effective compliance program. In the AI context, it may be one of the few ways to surface emerging risks before they become control failures, regulatory issues, or public problems.

From Amnesty to Operating Model

The amnesty itself is only the beginning. Its true value lies in what follows. Once the company has a baseline inventory of disclosed AI uses, it should not let that information sit in a spreadsheet and die. The next step is to convert the amnesty into a long-term governance operating model.

That means maintaining a living registry of AI use cases. It means embedding disclosure and review into normal business processes. It means defining approval pathways for higher-risk uses. It means establishing ongoing monitoring to detect performance changes, data drift, and control effectiveness. It means updating policies, training, and communications based on what the company has actually learned from the amnesty.

This is where the governance frameworks become especially useful. NIST AI RMF helps organizations move from mapping and understanding AI uses to governing, measuring, and managing them. ISO/IEC 42001 provides the management-system discipline needed to assign responsibility, document controls, review performance, and drive continual improvement.

In other words, the amnesty is not the solution by itself. It is the catalyst that allows a real operating model to emerge.

Proof of Governance Under the ECCP

Why does this matter so much from an enforcement perspective? Because the amnesty produces evidence. If regulators ask how the company identified AI uses, there is a process. If they ask how risks were assessed, there is a methodology for it. If they ask what was done with high-risk cases, there are records of triage and remediation. If they ask what role culture played, there is a concrete speak-up initiative tied to internal reporting and governance design.

This is exactly what the ECCP is looking for. Not slogans. Not a glossy AI principles deck. Evidence that the company identified a risk, created a mechanism to surface it, encouraged reporting, evaluated what it found, and built controls that match the risk. That is why the 30-day Shadow-AI Amnesty is so important. It transforms governance from assertion into proof.

The Practical Bottom Line

The compliance function does not need to wait for a perfect enterprise AI strategy before acting. In fact, waiting may be the biggest mistake. Shadow-AI is already there. The question is whether your organization is prepared to see it, hear about it, and govern it.

A 30-day amnesty is one of the most practical tools available because it combines two things strong compliance programs need: better visibility and a stronger culture. It surfaces risk while reinforcing speak-up. It creates documentation while improving control design. It gives the company a starting point for long-term governance without pretending the problem can be solved in one month.

In the end, that is what good compliance has always done. It does not deny business reality. It creates the structure that allows the business to move forward with integrity, accountability, and confidence.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Banking Regulators Cut Model Risk Guidance: Implications for Compliance, Audit, and AML Oversight

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully, and looking for some hard-hitting insights on compliance. Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss new Federal Reserve, FDIC, and OCC model risk management guidance issued late Friday, arguing it replaces detailed, bright-line expectations with thin, principles-based language.

They contrast the prior OCC guidance (109 pages) with the new 12-page document, saying it describes model risk governance abstractly but offers little direction on what banks should do, leaving decisions about materiality and oversight to management. They highlight practical consequences for bank compliance and internal audit, including reduced leverage to insist on prudent governance, potential weakening of AML model oversight under the strict-liability Bank Secrecy Act, and the risk of more arbitrary enforcement amid reduced regulatory staffing. They also note that the guidance excludes AI models, with future AI guidance promised only through a later comment process.

Key highlights:

  • From 109 pages to 12
  • Principles vs specifics debate
  • Internal audit sidelined
  • Regulators and capacity cuts
  • AI models left out 

Resources:

Matt on Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
Great Women in Compliance

Great Women in Compliance: Culture Check: Are Your Speak Up Channels Effective?

Ever wish you could benchmark your Speak Up channels against more than just volume, issue types, and time to close? 

The Speak Up Self-Assessment (SUSA) was designed to help you go deeper by assessing organizational infrastructure—including reporting channels, confidentiality safeguards, follow-up processes, and governance of whistleblowing systems.

In this roundtable episode, we speak with guests: 

  • Professor Jessica McManus Warnell
  • Dr. Mary Gentile 
  • Allison Narmi 

about the work they are doing to bring a free, anonymous diagnostic tool to self-assess speak-up channels. Building on the work done in the EU, our guests today are members of the project team that has developed an American version of the tool, with support from the Notre Dame Deloitte Center for Ethical Leadership. Link to the EU version here – https://edhec.az1.qualtrics.com/jfe/form/SV_eleMjkHraHzw6Hk

U.S. version coming soon.  

Categories
Daily Compliance News

Daily Compliance News: April 22, 2026, The AI Hallucinations from Sullivan & Cromwell Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Ex-Algerian Minister of Industry jailed for corruption. (Aljazeera)
  • A wish list for John Ternus. (NYT)
  • Best 5 books on the Fed. (WSJ)
  • AI hallucinations from Sullivan & Cromwell court filing. (FT)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
AI Today in 5

AI Today in 5: April 22, 2026, The AI Ready Lawyer Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. The AI-ready lawyer. (Wolters Kluwer)
  2. MetaComp launches AI agent governance framework. (PR Newswire)
  3. APAC CFOs embrace AI. (Wolters Kluwer)
  4. What the AI mirror reveals about us. (BankInfoSecurity)
  5. OpenAI is providing cyber protection for banks. (FinTechMagazine)

Interested in attending Compliance Week 2026? Click here for information and Registration. Listeners to this podcast receive a 20% discount on the event. Use the Registration Code TOMFOX20

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.