
Hill Country Hustlers: Episode 16 – Let’s Talk About It: Hill Country MHDD’s Family Partner Program and the YES Waiver

We take things in a different direction today as Zach steps back from behind the microphone to produce an episode featuring members of the Hill Country MHDD Center: Kelsi Wilmot (Director of Community Development), Tyler Townsend (Communication Specialist), and Wanda Ferguson (Lead Family Partner).

They introduce Hill Country MHDD’s new podcast, “Let’s Talk About It,” intended to help audiences learn about staff, lived experience, and agency programs. Ferguson explains the Family Partner role, emphasizing advocacy for caregivers, collaboration with schools and juvenile justice, and skills-based supports such as the nurturing program to help families accommodate a child’s needs while maintaining structure and boundaries. She shares personal motivation connected to her son Ryan’s mental health challenges and death in 2016, and provides examples of helping families avoid juvenile detention, address safety risks, and stabilize at home. The team describes the YES Waiver as a wraparound, grant-funded service designed to keep children in their homes and reduce hospitalizations or residential placements, and notes that services are optional and Medicaid-billable.

Key highlights:

  • Why This Podcast
  • What Family Partners Do
  • Parenting Tools and Real Stories
  • YES Waiver Explained
  • New Programs and Facilities
  • Getting Enrolled

Resources:

Hill Country MHDD


Pod and Port: Podcasting, Social Media and Yacht Rock – AI, Authenticity, Instagram, and Christopher Cross

In the debut episode of Pod & Port: Podcasting, Social Media and Yacht Rock, Tom Fox and Jeff Dwoskin dive into one of the biggest questions facing creators, marketers, podcasters, and business owners today: how do you use AI and social media tools effectively without losing authenticity?

Tom and Jeff discuss Instagram, creator monetization, transparency, algorithmic control, and the dangers of relying on AI for generic content. Their central message is clear: AI can be a powerful tool, but it should enhance your creativity, not replace it. The conversation offers practical insights into how creators can think about content, voice, originality, and audience trust.

Then the show shifts into Yacht Rock mode, with Jeff leading a spotlight on Christopher Cross, one of the genre’s defining voices. From “Ride Like the Wind” to “Sailing” and beyond, Tom and Jeff reflect on Cross’s impact, his remarkable success, and why his music still resonates. If you care about smarter content creation and smooth musical memories, Episode 1 has you covered.

Key takeaways:

  • AI works best when it enhances your ideas rather than replacing your creativity.
  • Authenticity still matters, and audiences can often sense when content feels overly automated.
  • Social media platforms may offer more tools, but creators still need to stay grounded in their own voice.
  • Transparency and trust remain critical for audience engagement.
  • Christopher Cross remains one of the essential artists in any Yacht Rock conversation.

Resources:

Jeff

Jeff Dwoskin on LinkedIn

Stampede Social Website

Christopher Cross on Spotify

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn


GSK in China: 13 Years Later – Whistleblower Emails, a Sex Tape, and the Compliance Failures That Triggered a Global Bribery Probe

Thirteen years after the GSK China scandal exploded onto the global stage, its lessons remain as urgent as ever for compliance professionals and business leaders. In this podcast series, we revisit the case not simply as corporate history, but as a living cautionary tale about culture, incentives, third parties, investigations, and governance. Each episode explores what went wrong, why it went wrong, and how those failures still echo in today’s compliance and ethics landscape. Join us as we unpack the scandal and draw practical lessons for building stronger, more resilient organizations. This episode dissects how an anonymous “GSK whistleblower” email campaign—culminating in a covertly filmed sex tape of China executive Mark Reilly—triggered a wider reckoning over alleged systemic bribery in GSK’s China business.

Drawing on reporting from MailOnline, The Wall Street Journal, The Sunday Times, and Time, it outlines claims of a £320m bribery budget routed through third-party travel agencies via fake or inflated medical conferences, with allegations extending to sexual favors, and how GSK initially treated the tape as a compartmentalized security/blackmail issue. GSK hired China-based investigators, Peter Humphrey and Yu Yingzeng, to identify the source; they failed and were arrested for privacy-law violations, as Chinese police opened a formal bribery probe that led to charges against Reilly and 45 others. The fallout expanded to the UK SFO and potential U.S. FCPA exposure via GSK’s NYSE listing, framed against pervasive surveillance risks in China and the dangers of “toothless” internal investigations.

Key highlights:

  • Stranger Than Fiction
  • The Sex Tape Email
  • Whistleblower Bribery Claims
  • Hiring ChinaWhys
  • Investigators Arrested

Resources:

GSK in China: A Game Changer for Compliance on Amazon.com

GSK in China: Anti-Bribery Enforcement Goes Global on Amazon.com

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Ed. Note: The voices of the hosts, Timothy and Fiona, were created by NotebookLM based upon text written by Tom Fox.


Daily Compliance News: April 9, 2026, The FCPA Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Federal judge to dismiss FCPA conviction. (National Today)
  • Smartmatic FCPA prosecution. (Lawfare)
  • Top 10 International ABC developments from March. (MOFO)
  • AI goes on charm offensive. (WSJ)

AI Today in 5: April 9, 2026, The Mythos Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Human in the loop as the ultimate moat. (FastCompany)
  2. AI washing in compliance. (FinTechGlobal)
  3. AI is accelerating cyber attacks. (BankInfoSecurity)
  4. AI and virtual care in eye healthcare. (UM)
  5. Is Anthropic’s Mythos dangerous? (The Economist)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics, and Investigations, on Amazon.com.


Data Governance, Privacy, and Model Integrity: The Control Foundation of AI Governance

Artificial intelligence may look like a technology story on the surface, but beneath that surface lies a governance reality every board and Chief Compliance Officer must confront. AI systems are only as sound as the data that feeds them, the controls that govern them, and the integrity of the outputs they generate. When data governance is weak, privacy obligations are poorly managed, or model integrity is assumed rather than tested, AI risk can move quickly from a technical flaw to enterprise exposure.

In the prior blog posts in this series, I examined the foundational questions of AI governance: board oversight and accountability, and the danger of strategy outrunning governance. Today, I want to turn to a third issue that sits at the core of every credible AI governance program: data governance, privacy, and model integrity.

This is where the AI conversation often moves from excitement to discipline. Companies may be eager to deploy tools, automate functions, and improve decision-making. But none of that matters if the underlying data is flawed, sensitive information is mishandled, or the model produces outputs that are unreliable, biased, or impossible to explain in context. The more powerful the technology, the more important the governance framework beneath it.

For boards and CCOs, this is not simply a technical control matter. It is a governance matter because failures in data integrity, privacy management, and model performance can have legal, regulatory, reputational, financial, and cultural consequences simultaneously.

AI Governance Begins with the Data

There is an old saying in technology: garbage in, garbage out. In the AI era, that phrase remains true, but it is no longer sufficient. In corporate governance terms, the problem is not merely bad data. It is data that is unknown, unauthorized, untraceable, biased, stale, overexposed, or used in ways the organization never properly approved. That is why data governance is the control foundation of AI governance.

Every AI use case depends on inputs. Those inputs may include structured internal data, public information, personal data, third-party data, proprietary records, historical documents, transactional records, prompts, or user interactions. If management does not understand where that data comes from, who has rights over it, whether it is accurate, how it is classified, and whether it is appropriate for the intended purpose, then the company is not governing AI. It is merely using it.

For compliance professionals, this point should feel familiar. Data governance is not new. What is new is the speed and scale at which AI can amplify data weaknesses. A spreadsheet error may affect one report. A flawed AI input may affect thousands of interactions, recommendations, or decisions before anyone notices.

Why Boards Should Care About Data Lineage

Boards do not need to become technical experts in model training or data architecture. But they do need to ask whether management understands the provenance and reliability of the information flowing into critical AI systems.

At a governance level, this is a question of data lineage. Can the company trace the source of the data, how it was curated, whether it was changed, and whether it was approved for the intended use? If a customer, regulator, employee, or auditor asks why the system reached a particular result, can management explain not only the output, but the data conditions that shaped it?

A board that does not ask these questions risks receiving polished dashboards and impressive demonstrations while missing the underlying weaknesses. AI systems can sound authoritative even when they are wrong. That is part of what makes governance here so essential. Confidence is not the same as integrity.

This is also where the Department of Justice’s Evaluation of Corporate Compliance Programs (ECCP) offers a helpful mindset. The ECCP pushes companies to think in terms of operational reality. Do policies work in practice? Are controls tested? Is the company learning from what goes wrong? The same discipline applies here. A company should not assume its data environment is fit for AI simply because it has data available. It should test, verify, document, and challenge that assumption.
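The lineage questions above can be made operational as a structured record that travels with each data set, so the answer to "was this data approved for this use?" is documented rather than assumed. A minimal sketch in Python, with every field name and value hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataLineageRecord:
    """One entry in a hypothetical data lineage register for an AI use case."""
    dataset_name: str
    source: str                  # where the data originated
    curated_by: str              # who prepared or transformed it
    last_modified: date          # when it was last changed
    approved_uses: list = field(default_factory=list)  # purposes management signed off on

    def approved_for(self, purpose: str) -> bool:
        """Answer the governance question: was this data approved for this use?"""
        return purpose in self.approved_uses

record = DataLineageRecord(
    dataset_name="customer_interactions_2025",
    source="internal CRM export",
    curated_by="data engineering",
    last_modified=date(2025, 11, 1),
    approved_uses=["support chatbot training"],
)

print(record.approved_for("support chatbot training"))  # True
print(record.approved_for("marketing scoring"))         # False
```

The point of the sketch is not the code but the discipline: an unapproved purpose is a visible "no" in a reviewable record, not a gap no one notices until a regulator asks.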

Privacy Is Not an Adjacent Issue

Too many organizations still treat privacy as adjacent to AI governance rather than central to it. That is a mistake. AI systems often rely on data sets that include personal information, employee information, customer records, usage patterns, communications, or behavior-based inputs. Even when a company believes it has de-identified or anonymized data, there may still be re-identification risks, overcollection concerns, retention issues, or use limitations tied to law, contract, or internal policy.

For the board and the CCO, privacy should not be discussed as a compliance side note. It should be part of the approval and governance architecture from the outset. Before an AI use case is deployed, management should understand what personal data is involved, whether its use is permitted, what notices or disclosures apply, what access restrictions are required, how the data will be retained, and whether any vendor relationships create additional privacy exposure.

This is particularly important in generative AI environments, where employees may paste confidential, proprietary, or personal information into tools without fully appreciating the consequences. A privacy incident in the AI context may not begin with malicious intent. It may begin with convenience. That is why governance must focus not only on policy, but on system design, training, and usage constraints.

The CCO has a critical role here because privacy governance often intersects with policy management, employee conduct, training, investigations, and disciplinary response. If privacy is left solely to specialists without integration into the broader governance process, the organization risks building fragmented controls that do not hold together under pressure.

Model Integrity Is a Governance Question

Model integrity sounds like a technical term, but it is really a governance concept. It asks whether the system is performing in a manner consistent with its intended purpose, risk classification, and control expectations.

That means asking hard questions. Is the model accurate enough for the use case? Has it been validated before deployment? Are there known limitations? Does it perform differently across populations or scenarios? Can outputs be reviewed in a meaningful way by human decision-makers? Are there conditions under which the model should not be used? These are not engineering questions alone. They are governance questions because they determine whether management is relying on the system responsibly.

This is where NIST’s AI Risk Management Framework is especially valuable. NIST emphasizes that organizations should map, measure, and manage AI risks, including those related to validity, reliability, safety, security, resilience, explainability, and fairness. It is not enough to say that a tool works most of the time. The organization must understand where it may fail, how failure will be detected, and what safeguards are in place when it does.

ISO/IEC 42001 reinforces the same discipline through the lens of management systems. It requires structured attention to risk identification, control design, monitoring, documentation, and continual improvement. In other words, it treats model integrity not as a technical aspiration, but as an organizational responsibility. For boards, the takeaway is direct: if management cannot explain how model integrity is validated and maintained, then the board does not yet have assurance that AI is being governed effectively.
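One way to make those validation expectations concrete is a pre-deployment gate that checks accuracy against a risk-based floor and compares performance across populations. A minimal sketch; the thresholds and metric names are hypothetical, not drawn from NIST or ISO:

```python
def validate_model(overall_accuracy, subgroup_accuracies,
                   min_accuracy=0.90, max_gap=0.05):
    """Hypothetical pre-deployment gate: pass only if the model meets a
    risk-based accuracy floor and performs consistently across subgroups."""
    findings = []
    if overall_accuracy < min_accuracy:
        findings.append(f"accuracy {overall_accuracy:.2f} below floor {min_accuracy:.2f}")
    gap = max(subgroup_accuracies.values()) - min(subgroup_accuracies.values())
    if gap > max_gap:
        findings.append(f"subgroup performance gap {gap:.2f} exceeds {max_gap:.2f}")
    return (len(findings) == 0, findings)

ok, findings = validate_model(
    overall_accuracy=0.93,
    subgroup_accuracies={"region_a": 0.94, "region_b": 0.85},
)
print(ok)        # False: the model clears the accuracy floor but not the subgroup gap
print(findings)
```

The governance value is in the failure path: a model that is accurate overall still fails the gate when it performs differently across populations, and the finding is recorded rather than waved through.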

Third Parties Increase the Stakes

One of the more dangerous assumptions in AI governance is that outsourcing technology also outsources risk. It does not. Many organizations will deploy AI through third-party vendors, embedded tools, software platforms, or external service providers. That may be practical, even necessary. But it also means the company may be relying on data practices, training methods, model assumptions, or privacy safeguards it did not design and cannot fully see.

That is why data governance, privacy, and model integrity must extend to third-party risk management. Procurement cannot focus solely on functionality and price. Legal cannot focus solely on contract form. Compliance, privacy, security, and risk all need to understand what the vendor is doing, what data is being used, what rights the company has to inspect or question performance, and what happens when the vendor changes the model or its underlying terms.

This is not simply good vendor management. It is a governance necessity. A company remains accountable for business decisions made using third-party AI tools, especially when those tools affect customers, employees, compliance obligations, or regulated activities.

Documentation Is What Makes Governance Real

As with every major governance issue, documentation is what turns theory into evidence. If a company is serious about data governance, privacy, and model integrity, it should have records that show it. Those records may include data inventories, data classification standards, model validation summaries, privacy assessments, vendor due diligence files, testing results, approved use cases, control requirements, escalation logs, and remediation actions. Without this documentation, governance becomes anecdotal. With it, governance becomes reviewable, auditable, and improvable.

This is another place where the ECCP mindset is so useful. Prosecutors and regulators tend to ask the same core question in different ways: how do you know your program works? In the AI context, the answer cannot be “our vendor told us so” or “the business says the tool is helpful.” It must be grounded in evidence, testing, and management discipline.

What Boards and CCOs Should Be Pressing For

Boards should expect management to present AI use cases with enough clarity to answer four questions. What data is being used? What privacy implications attach to that use? How has model integrity been tested? What controls will remain in place after deployment?

CCOs should press equally hard from the management side. Is there a documented data governance process for AI? Are privacy reviews built into the intake and approval process? Are models validated according to risk? Are third-party tools subject to diligence and contract controls? Are incidents and anomalies logged and investigated? Are employees trained not to expose confidential or personal information through improper use? These are not burdensome questions. They are the practical questions that separate governed AI from hopeful AI.

Governance Requires Trustworthy Inputs and Defensible Outputs

In the end, AI governance depends on a simple but demanding truth: the organization must be able to trust what goes into the system and defend what comes out of it.

If the data is poorly governed, privacy rights are handled casually, or model integrity is assumed rather than demonstrated, then no amount of strategic enthusiasm will make the program safe. Boards will not have real oversight. CCOs will not have a defensible control environment. The company will merely have a faster way to create risk.

That is why data governance, privacy, and model integrity are not support issues in AI governance. They are central issues. They determine whether the enterprise is using AI with discipline or simply hoping for the best.

In the next article in this series, I will turn to the fourth governance challenge: ongoing monitoring, where many organizations discover that approving an AI use case is far easier than governing it after it goes live.