AI Today in 5

AI Today in 5: March 26, 2026, The 1 in 3 Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest.

Top AI stories include:

  1. Transforming compliance in finance. (FinTechGlobal)
  2. Embedding compliance in AI adoption. (SiliconRepublic)
  3. What the National AI Framework means for employers. (TheEmployerReport)
  4. 1 in 3 patients is using AI in their healthcare. (ModernHealthcare)
  5. Solving RegTech compliance. (VocalMedia)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

GSK in China: 13 Years Later

GSK in China: 13 Years Later – How “Good Fraud” Bypassed Audits, Compliance, and IT Controls

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. In this powerful second episode, we unpack one of the defining corporate scandals of the past two decades: the 2013 GSK China bribery case. More than a headline-making event, it’s a masterclass in how sophisticated “good fraud” can slip past audits, evade compliance safeguards, and outmaneuver IT controls.

The episode examines allegations that GSK faced a bribery and corruption scheme in China involving sums reported at up to $500 million, despite extensive compliance resources: more compliance officers in China than anywhere outside the US, up to 20 internal audits annually, and external auditing by PwC. Drawing on Reuters, the Financial Times, the Wall Street Journal, and analysis from The Texas Lawyer, it explains how bribery was described as rampant in China’s healthcare system and how payments were structured through direct cash and vouchers and, more commonly, indirect channels such as travel agencies, hospital “sponsorships,” and conference trips. It outlines how “good fraud” was enabled by collusion and flawless paperwork; how audit materiality thresholds miss fragmented FCPA-risk payments; and how China’s data-export restrictions limit oversight, including a WSJ-reported Botox example in which managers directed staff to use personal email to coordinate rewards for prescriptions. The episode concludes with compliance program directives emphasizing IT-compliance coordination, data mapping, enforceable policies, employee reporting, and stress testing.

Key highlights:

  • How Bribes Were Paid
  • Good Fraud and Audit Failure
  • Materiality Trap and Fragmentation
  • Data Blockade and External Audits
  • Five Compliance Fixes

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: Notebook LM created this episode, including the voices of the hosts, Timothy and Fiona, based upon text written by Tom Fox.

Daily Compliance News

Daily Compliance News: March 26, 2026, The Mind Blowing Corruption Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • The Trump Administration is allowing COI-related corruption to continue. (CNBC)
  • Another day, another $1.5bn in insider trading. (TheHill)
  • Marco Rubio testifies in corruption trial. (NYT)
  • The US ban on Anthropic ‘looks like punishment’. (WSJ)

Hill Country Hustlers

Hill Country Hustlers: Mason County Chamber Momentum: Events, Small Business Growth & Putting Mason on the Digital Map

On Hill Country Hustlers, host Zachary Green interviews Taylor Krull, Executive Director of the Mason County Chamber of Commerce since July 2025, about her background, her move from the Midwest to Texas, and why Mason’s people and tight-knit Hill Country culture drew her in.

Taylor shares how she brought stronger organization and processes to the Chamber with volunteer help, expanded member support through social media marketing and on-site video creation, and built regional collaboration with nearby chambers. She highlights upcoming events, including the Mason Arts & Wine Festival (first weekend of April), a Mason–Brady joint mixer at Peters Prairie (April 9), a May mixer (May 7), the quarterly Ladies Night Out (next on May 15), June’s first Hotdog & Hot Rod Night with a car show and street dance, and July’s Roundup Weekend (July 11) with added festivities, plus ways Chamber membership helps businesses with visibility and promotion.

Key highlights:

  • How Mason Became Home
  • Why Mason Stood Out & Career Before the Chamber
  • Launching New Chamber Events & Hitting the Ground Running
  • Social Media for Small Towns
  • Hill Country Unity and Flood Response
  • Spring & Summer Events Lineup
  • Hill Country Community Spirit & Why Mason Feels Unique

Resources:

Follow Mason County Chamber of Commerce on:

Website

Facebook

Instagram

Blog

The “Day Two” Problem of AI Governance: What CCOs Must Monitor After the Launch

A scene is playing out in companies across the globe right now. Innovation teams are moving fast. Procurement is signing contracts. Business units are experimenting with copilots, workflow agents, and internal knowledge tools. Marketing is testing generative content. HR is evaluating AI for talent processes. Finance wants forecasting help. Security is watching from the corner. Legal is asking pointed questions. Compliance is handed the bill for governance after the train has already left the station. But the reality is that AI governance is a board-level issue.

The problem is not that companies are moving too slowly on AI. In many organizations, the opposite is true. AI strategy is moving faster than the governance structure designed to oversee it. When that happens, the gap creates risk in ways boards understand very well: unmanaged decision-making, unclear accountability, inconsistent controls, fragmented reporting, and blind spots around operational resilience, ethics, and trust.

If you are a Chief Compliance Officer (CCO), this is your moment. Not to say no to AI. Not to become the Department of Technological Misery. But to help the board and senior leadership understand that AI governance is about capturing upside without swallowing avoidable downside. That is the central lesson. Strategy without governance is aspiration. Strategy with governance is a business discipline.

Why This Is a Board Issue

Boards are not expected to code models, evaluate vector databases, or decide which prompt library a business unit should use. They are expected to oversee risk, culture, controls, and management accountability. AI now sits squarely in that lane.

Once AI touches business processes, it can affect decision rights, data usage, customer interactions, employee treatment, financial reporting inputs, records management, and reputation. That means the board does not need to manage the machinery, but it must ensure a management system is in place for it.

This is where compliance can bring real value. Ethisphere’s latest work on the Ethics Premium makes a useful point for governance professionals: leading programs improve board reporting practices, including more frequent meetings with directors to ensure they receive the information needed for effective oversight, and they are also pushing documentation to be ready for AI-driven assistance so employees can find answers when they need them. In other words, mature governance is not static. It evolves as technology evolves.

That same report also reminds us that strong ethics and compliance systems are associated with higher returns, less downside, and faster recoveries, which is exactly the language boards understand when evaluating strategic risk and resilience.

So let us translate that lesson into the AI context. The board’s task is not to bless every shiny new tool. Its task is to ensure management has built an operating system for responsible AI use.

What a Board Should Do

The first thing a board should do is insist on a clear AI governance architecture. That means management should be able to answer basic questions cleanly and quickly. Who owns the enterprise AI strategy? Who approves high-risk use cases? Who validates controls before deployment? Who monitors incidents, exceptions, and drift? Who reports to the board? If five executives give five different answers, you do not have governance. You have theater.

Second, the board should require a risk-based inventory of AI use cases. I am continually amazed at how many organizations start with policy language before they know where AI is actually being used. That is backwards. Boards should ask for a current inventory of internal, customer-facing, employee-facing, and vendor-enabled AI use cases. The inventory should distinguish between low-risk productivity tools and higher-risk uses involving sensitive data, regulated processes, legal judgments, employment decisions, or customer outcomes. If management cannot map the use cases, it cannot credibly manage the risk.
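To make the tiering idea concrete, here is a minimal sketch of how such a risk-based inventory might be modeled in code. The field names, channels, and risk factors below are illustrative assumptions for this post, not drawn from any particular regulation or framework; real programs would tailor them to their own risk taxonomy.

```python
from dataclasses import dataclass, field

# Illustrative high-risk factors (assumptions, not a standard taxonomy)
HIGH_RISK_FACTORS = {
    "sensitive_data", "regulated_process", "legal_judgment",
    "employment_decision", "customer_outcome",
}

@dataclass
class AIUseCase:
    name: str
    owner: str          # accountable executive or function
    channel: str        # internal, customer-facing, employee-facing, vendor-enabled
    risk_factors: set = field(default_factory=set)

    @property
    def tier(self) -> str:
        # Any single high-risk factor escalates the whole use case
        return "high" if self.risk_factors & HIGH_RISK_FACTORS else "low"

inventory = [
    AIUseCase("meeting summarizer", "IT", "internal"),
    AIUseCase("resume screener", "HR", "employee-facing", {"employment_decision"}),
]

high_risk = [uc.name for uc in inventory if uc.tier == "high"]
print(high_risk)  # ['resume screener']
```

The point of the sketch is the discipline it forces: every use case must name an owner, a channel, and its risk factors before it can be tiered at all, which is exactly the mapping exercise the board should demand.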

Third, the board should demand decision-use discipline. Not every AI output deserves the same level of trust. Some uses are advisory. Some are operational. Some may influence consequential business judgments. Boards should ask management where AI outputs are being relied upon, who reviews them, and what level of human oversight is required before action is taken. The issue is not whether humans are “in the loop” as a slogan. The issue is whether human review is meaningful, documented, and tied to the use case’s risk.

Fourth, the board should require reporting that is intelligible, not merely technical. Board oversight fails when management delivers either fluff or jargon. Directors need reporting that answers practical questions: What are our top AI use cases? Which ones are classified as high risk? What incidents or near misses have occurred? What controls were tested? What third parties are material to our AI stack? What changed this quarter? What needs escalation? Good board reporting turns AI from mystique into management.

That point is entirely consistent with what Ethisphere identifies in leading ethics and compliance programs: improved board reporting practices that provide directors with the information they need for effective oversight.

Where Compliance Officers Can Help the Board Most

This is where the CCO earns their seat at the table.

First, the compliance function can help management create the classification framework. Compliance professionals know how to tier risk, define escalation paths, and build governance around business reality. You have been doing it for years with third parties, gifts and entertainment, investigations, and training. AI is a new technology, but the governance muscle memory is familiar.

Second, compliance can help build the policy-to-practice bridge. A glossy AI principles statement is not governance. Governance is what happens when procurement uses approved clauses, HR knows what tools it can use, managers understand escalation triggers, training is tailored to real workflows, and documentation supports decision-making. Ethisphere’s report notes that best-in-class programs are investing in clear, compelling documentation and training approaches designed for actual employee use, not simply for formal compliance completion. That is precisely the model AI governance needs.

Third, compliance can help the board by translating operational signals into governance signals. A rejected deployment, a data-permission problem, a hallucinated output in a sensitive workflow, a vendor change notice, a policy exception, or a spike in employee questions may each seem isolated. They are not. They are governance indicators. The CCO can aggregate them into trend lines that the board can actually use.

Fourth, compliance can help define the cadence and content of board reporting. Directors do not need every technical detail. They do need a disciplined dashboard and escalation protocol. Compliance is often the right function to help standardize that process, because it lives at the intersection of risk, policy, training, speak-up culture, investigations, and controls.

The Operational Reality Boards Must Understand

One reason AI governance lags strategy is that AI adoption is not happening in one place. It is happening everywhere. That decentralization is what makes governance hard. The legal team may be reviewing one contract while a business leader is piloting another tool within budget. An employee may paste sensitive information into a system that was never intended to accept it. A vendor may quietly add AI functionality to an existing platform. A manager may begin relying on generated summaries as if they are verified facts. None of this requires malicious intent. It only requires speed, convenience, and a little ambiguity. Corporate history teaches that those ingredients are often enough.

Boards, therefore, need to understand a simple truth: AI risk is not only model risk. It is workflow risk. It is data risk. It is governance risk. It is cultural risk. And culture matters here most of all. Ethisphere found that nearly every honoree equips managers with toolkits and talk tracks to discuss ethical dilemmas with their teams, and 51% require managers to do so. That should be a flashing neon sign for AI governance. If managers are not talking with employees about responsible use, escalation expectations, and when not to trust the machine, the company is relying on hope as a control. Hope is not a control. It is a prayer.

Final Thoughts

When AI strategy outruns governance, the problem is not innovation. The problem is unmanaged innovation. Boards should not respond by slamming on the brakes. They should respond by insisting on lanes, guardrails, dashboards, and accountability.

For compliance officers, the opportunity is enormous. You can help the board ask better questions. You can help management build a governance operating system. You can help the business adopt AI faster, smarter, and more defensibly.

That is the larger point. Compliance is not there to suffocate strategy. Compliance is there to make the strategy sustainable.

Here are the questions I would leave you with:

  • Does your board receive meaningful AI oversight reporting, or only periodic reassurance?
  • Can your company identify its highest-risk AI use cases today, not next quarter?
  • If a director asked tomorrow who owns AI governance end-to-end, would the answer be immediate and credible?

If not, your AI strategy may already be outrunning your governance.