Categories
Hill Country Authors

Hill Country Authors Podcast: Paul McGrath on “Left is Right”: Satire, Darker Threats, and Current-Events Inspiration

Welcome to a new season of the award-winning Hill Country Authors Podcast, sponsored by Stoney Creek Publishing, in which Hill Country resident Tom Fox visits with authors who live in and write about the Texas Hill Country. Host Tom Fox opens the new season with returning guest Paul McGrath to discuss McGrath’s novel Left is Right, a sequel to the PEN Craft award-winning Left.

McGrath recounts a 37-year career at Texas newspapers, primarily the Houston Chronicle, plus teaching at Texas A&M and Clear Lake, and his A&M roots with The Battalion. He explains expanding Anton’s story into a multi-book series (with five planned), driven by character attachment and news-inspired plots. McGrath describes the layered “Left” titles, using Ellie to express progressive viewpoints, and empathy as a motivating force for Anton and Ellie, including Ezra’s lingering influence. He notes a darker tone influenced by right-wing militias, human trafficking, and a Texas motorcycle gang, balanced by humor, wordplay, and pop-culture references such as a Jon Hamm dream sequence. He outlines the FBI’s and alien authorities’ ongoing pursuit and a return to alien supervision, credits Stoney Creek Publishing’s support, shares where to find him on social platforms, and previews future themes involving Russians and cryptocurrency.

Key highlights:

  • Why Continue with Anton
  • Series Titles and ‘Left’
  • Empathy Driving the Plot
  • Darker Satire and Villains
  • Humor, Wordplay, and Names
  • Pop Culture Cameos
  • Where the Series Goes

Resources:

Paul McGrath on Stoney Creek Publishing

Left is Right on Texas A&M University Press

Social Media 

Instagram

X

Threads

 Podcast Cover Art

Nancy Huffman Fine Art

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Daily Compliance News

Daily Compliance News: March 19, 2026, The Corruption in Soccer Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • US relaxes sanctions on PDVSA. (FT)
  • Chin wants the Malaysian ABC agency investigated. (Bloomberg)
  • Hacker breaks into law enforcement tip database. (Reuters)
  • Senegal, stripped of the Africa Cup title, calls for a corruption investigation. (NYT)
Categories
AI Today in 5

AI Today in 5: March 19, 2026, the Context-First AI Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, I will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

  1. Elasticity as a compliance standard in the age of AI. (UCToday)
  2. Context-first AI for Co-Pilot. (FinTechGlobal)
  3. AI agents to reduce discovery costs. (BusinessWire)
  4. GSA AI clause. (Holland & Knight)
  5. How the military is using AI. (CBS)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
GSK in China: 13 Years Later

GSK in China: The Compliance Breakdown That Still Echoes 13 Years Later

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join me as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. In this inaugural episode, we take a deep dive into the 2013 GSK China bribery scandal and examine why it still stands as one of the most important case studies in corporate compliance, governance, and culture. Our hosts are Timothy and Fiona.

We unpack how a global pharmaceutical giant was alleged to have used travel agencies, fake conferences, false VAT receipts, and targeted marketing programs to channel illicit payments to doctors, officials, and other intermediaries, all while an internal whistleblower warning and a four-month internal investigation failed to detect the misconduct. The episode also explores the tension between polished global compliance structures and compromised local execution, showing how incentives, third-party relationships, and regional sales pressure can overwhelm formal controls. Most importantly, it asks a question that remains urgent today: are corporate compliance systems truly designed to find the truth, or can they create a false sense of security that allows misconduct to flourish undetected?

Key Highlights

  • The scale of the alleged misconduct was enormous.
  • Third parties were central to the scheme.
  • Internal controls failed when they were needed most.
  • Corporate culture and incentives drove the risk.
  • Why the lessons are still highly relevant today.

Resources

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: The voices of the hosts, Timothy and Fiona, were created by Notebook LM based upon text written by Tom Fox.

Categories
Blog

Vendor AI Risk Is the New Third-Party Risk Frontier: From Contracts to Compliance Evidence

For years, compliance professionals have understood a basic truth about third-party risk: your company can outsource a function, but it cannot outsource accountability. That principle has long applied to distributors, agents, resellers, consultants, customs brokers, and supply-chain partners. In the age of artificial intelligence, it now applies equally to AI vendors.

And here is the key issue. Most companies are not building AI entirely in-house. They are licensing models, embedding third-party copilots, procuring AI-enabled platforms, connecting external APIs, and relying on vendors for everything from data enrichment to automated decision support. In other words, the AI stack is increasingly a third-party stack.

That means AI governance is rapidly becoming a third-party risk management problem. For compliance officers, this is a critical shift. The question is no longer simply whether your organization is using AI. The question is whether you have sufficient contractual leverage, operational visibility, and documentary evidence to demonstrate that third-party AI risk is managed in a credible, defensible, and scalable manner. If the answer is no, then your AI program may be far less mature than it looks on the PowerPoint slide.

AI Is Rarely a Standalone Tool

One of the most dangerous myths in the current AI conversation is that “the AI” is a single product that can be evaluated once and approved once. That is not how most enterprise deployments work. A single AI-enabled workflow may involve a foundation model provider, a cloud host, a retrieval layer, one or more data processors, a business application vendor, and internal configuration choices that change over time. Add subcontractors, model updates, and cross-border data flows, and you begin to see the real picture. The risk does not sit neatly with any single vendor. It sits across an ecosystem.

That matters because when something goes wrong, regulators, plaintiffs, auditors, and boards will not care that the problem sat in a vendor dependency chain. They will ask what your company knew, what it required, what it monitored, and what evidence it retained. The bottom line is that vendor AI risk has to move out of the procurement annex and into the core compliance framework.

Start with a More Realistic Definition of Third-Party AI Risk

When many companies think about vendor AI risk, they default to privacy and cybersecurity. Those issues are absolutely important, but they are only the beginning.

Third-party AI risk can also include opaque training data, weak model governance, unexplained output variability, inaccurate summarization, hidden subcontractors, unauthorized data retention, insufficient segregation of customer data, model changes without notice, untested bias, poor incident response, weak record retention, and limited auditability. If the tool affects regulated processes, the stakes rise even higher.

Think about the real-world use cases now being deployed. AI tools support customer communications, onboarding, HR screening, contract review, due diligence triage, transaction monitoring, investigations, and report drafting. In each of those settings, the company may be relying on output it did not fully generate, cannot fully inspect, and may not be able to reproduce later without the right controls in place.

That is where compliance must lean in. The core question is not whether the vendor claims to use responsible AI. The core question is whether your company can obtain sufficient evidence that the system is well-controlled for its intended use.

Contracts Are the First Line of Governance

If AI risk is outsourced to vendors, contracts become the first line of governance. Yet too many AI agreements still read like standard software contracts with a few privacy words sprinkled on top. That is not good enough. A sound AI vendor agreement should, at a minimum, address permitted use, data rights, confidentiality, security, model-change notification, subcontractor transparency, performance expectations, audit rights, incident reporting, regulatory cooperation, and termination support.

Most importantly, the contract should define the use case. That sounds basic, but it is essential. A vendor tool approved for low-risk drafting support is not automatically appropriate for high-impact decision-making. If the intended use is not defined, the actual use will drift. And drift is where governance begins to fail. The agreement should also make clear what data the vendor can use, for what purpose, and for how long. Can the vendor use your inputs to train its models? Can it retain prompts or outputs? Can it use metadata to improve service? Can affiliates or subprocessors access the data? If those questions are not answered with precision, you lack clarity. You have hope. Hope is not a control.

SLAs Need to Measure More Than Uptime

Service level agreements are another area where companies need to upgrade their thinking. Traditional SLAs focus on uptime, availability, and support response times. Those are still necessary, but with AI, they are not sufficient. For an AI-enabled service, the SLA discussion should expand to include quality, reliability, explainability support, incident escalation, and change transparency. A system can be available 99.9% of the time and still produce garbage. That is not a service success. That is a control failure delivered efficiently.

I am not suggesting that every company can negotiate custom model-accuracy guarantees from every AI vendor. In many cases, that will not be realistic. But companies can require practical commitments around things like response logging, traceability, notification of material model or system changes, error-handling workflows, and support for validation testing. They can define turnaround times for incidents involving hallucinations, security breaches, inappropriate outputs, or data leakage. They can require that the vendor cooperate with investigations and remediation.

That is where the compliance function should partner closely with legal, procurement, information security, and the business owner. The goal is not to demand impossible warranties. The goal is to create enough visibility so that the company is not flying blind.

Audit Rights Must Be Usable, Not Decorative

Many vendor contracts include broad-sounding audit clauses that are so restricted, delayed, or indirect that they provide little real assurance. In the AI context, that problem is magnified. If you cannot meaningfully assess controls over data handling, model governance, subprocessors, logging, incident response, and change management, then your audit right is little more than legal wallpaper.

A usable audit-right framework does not always mean sending a team on-site with clipboards. It can include layered assurance mechanisms: independent third-party assessments, SOC reports, model governance summaries, penetration-test results, bias testing documentation, incident logs, certifications, tabletop exercise results, and the right to ask targeted follow-up questions. In higher-risk arrangements, it may also include deeper review rights, validation support, or the ability to commission an independent assessment.

From Due Diligence to Ongoing Monitoring

Once a contract is signed, the real work begins. Models change. Vendors add subprocessors. Features evolve. Use cases expand. Business users discover new workflows that procurement never contemplated. A vendor that began as a low-risk drafting tool can quietly become embedded in a regulated process six months later. That is why monitoring matters.

Companies should inventory AI vendors and classify them by risk. They should map which business processes depend on them, what data they touch, what decisions they inform, and what regulatory exposure they create. They should require periodic attestations, monitor control changes, review incidents, reassess data use, and revisit whether the tool is being used in line with approved purposes.
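As an illustrative sketch of that inventory-and-classify step (the field names, scoring scale, and tier thresholds below are assumptions for demonstration, not a regulatory standard), a minimal AI vendor record might look like:

```python
from dataclasses import dataclass, field

# Illustrative only: fields and thresholds are assumptions,
# not a mandated framework.
@dataclass
class AIVendorRecord:
    name: str
    processes: list          # business processes that depend on the tool
    data_sensitivity: int    # 1 (public data) .. 3 (regulated/personal data)
    decision_impact: int     # 1 (drafting aid) .. 3 (informs regulated decisions)
    approved_uses: list = field(default_factory=list)

    @property
    def risk_tier(self) -> str:
        # Simple multiplicative score: sensitivity x impact.
        score = self.data_sensitivity * self.decision_impact
        if score >= 6:
            return "high"
        if score >= 3:
            return "medium"
        return "low"

vendor = AIVendorRecord(
    name="SummarizeCo",                 # hypothetical vendor
    processes=["due diligence triage"],
    data_sensitivity=3,
    decision_impact=2,
    approved_uses=["drafting support"],
)
print(vendor.risk_tier)  # high
```

Even a structure this simple forces the questions the paragraph above poses: what data does the tool touch, what decisions does it inform, and what uses were actually approved.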

This is also where shadow AI becomes a third-party problem. Employees often access AI functionality through existing vendors before compliance even realizes it is enabled. Suddenly, a platform you bought for workflow management has rolled out AI summarization, drafting, or analytics features. If no one is watching vendor change notices and product updates, the company can slide into AI use without ever consciously approving it. That is a governance gap.

Build a Compliance Evidence File

If there is one practical takeaway, it is this: for significant AI vendors, build a compliance evidence file.

By that, I mean a documented record showing the rationale for approval, the use case, the risk classification, the key contractual controls, the diligence performed, the evidence reviewed, the approvals obtained, and the monitoring steps required going forward. If the vendor supports a high-risk process, the file should also include validation results, escalation pathways, and a record of any incidents or material changes.
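The elements above lend themselves to a completeness check. As a hedged sketch (the section names are drawn from the list above, but the format itself is an assumption, not a prescribed template):

```python
# Illustrative sketch: section names mirror the evidence-file elements
# described above; the structure is an assumption, not a mandated format.
REQUIRED_SECTIONS = [
    "approval_rationale", "use_case", "risk_classification",
    "contractual_controls", "diligence_performed", "evidence_reviewed",
    "approvals", "monitoring_plan",
]

def missing_sections(evidence_file: dict) -> list:
    """Return the required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not evidence_file.get(s)]

# A partially completed file for a hypothetical vendor.
record = {
    "use_case": "contract review drafting support",
    "risk_classification": "medium",
    "approvals": ["CCO", "CISO"],
}
print(missing_sections(record))
```

A check like this turns "build an evidence file" from aspiration into a gap report someone can act on before the board or a regulator asks.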

Why does this matter? Because when the board asks why the company trusted a third-party AI tool, you need a better answer than “the business wanted it.” When internal audit asks how control assurance was established, you need something more concrete than “a legal review of the contract.” And when a regulator asks how the company oversees outsourced AI risk, you need documentation that demonstrates a repeatable, risk-based process.

Five Questions Every CCO Should Ask

Every Chief Compliance Officer should be asking five simple questions right now.

  1. Do we know which vendors in our ecosystem are using or enabling AI?
  2. Have we classified those vendors based on data sensitivity and the business impact of the use case?
  3. Do our contracts clearly address data rights, change notification, incident response, and usable audit rights?
  4. Do our SLAs measure what matters for AI-enabled services, not just uptime?
  5. Can we produce evidence showing why a vendor was approved, what controls we relied on, and how the relationship is being monitored?

If the answer to any of those questions is no, the work is not done.
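The five questions above can even be tracked as a simple readiness check. A minimal sketch (the keys and answer format are assumptions for illustration):

```python
# Illustrative mapping of the five CCO questions; keys are assumptions.
CCO_QUESTIONS = {
    "vendor_ai_inventory": "Do we know which vendors use or enable AI?",
    "risk_classification": "Have we classified vendors by data sensitivity and impact?",
    "contract_controls": "Do contracts cover data rights, change notice, incidents, audits?",
    "ai_slas": "Do SLAs measure what matters for AI beyond uptime?",
    "evidence_trail": "Can we produce approval and monitoring evidence?",
}

def open_items(answers: dict) -> list:
    """Questions answered 'no' (or not answered at all) still need work."""
    return [q for q in CCO_QUESTIONS if not answers.get(q, False)]

answers = {"vendor_ai_inventory": True, "ai_slas": False}
print(open_items(answers))
```

Any key returned by `open_items` is, in the language of the post, work that is not done.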

The Bottom Line

Third-party risk has always been about visibility, leverage, and evidence. AI does not change that. It intensifies it. The organizations that manage vendor AI risk well will not be the ones with the flashiest AI procurement strategy. They will be the ones that define use cases carefully, contract for transparency, demand usable assurance, monitor continuously, and retain evidence that their oversight is real.

That is where compliance comes in. Not as the department that slows innovation down, but as the function that makes outsourced innovation governable. Because in the end, if AI is rarely in-house, then AI governance cannot be either.