Categories
Fox on Podcasting

Fox on Podcasting – Auditing Media Assets for Compliance with Dr. Yolanda Nollie

Join Tom Fox as he explores the world of podcasting and get ready to be inspired to start your own podcast. In this episode, Tom is joined by Dr. Yolanda Nollie, a US Navy veteran and media governance auditor, to discuss applying audit and compliance concepts to audio, visual, and IP assets in media and creative businesses.

Dr. Nollie describes auditing both creators and investors, producing detailed data-driven reports that are delivered confidentially in encrypted form, and using audits to help protect IP, support funding decisions, and prevent unfunded liabilities. She explains that audits can be light or in-depth and result in pass/fail findings without “closing down” a business. She outlines key concepts such as “shadow IT of media” (risk created by unmanaged asset creation and transfer), IP sovereignty and chain-of-title rigor, “copyright inoculation” at the point of creation, operational drift, decision rights mapping for fiduciary clarity and asset clearance authority, and a hybrid internal/external audit model.

Dr. Nollie addresses AI governance risks posed by employees using generative AI on personal devices, advocating embedded technical guardrails and customized “blueprinting” rather than a gatekeeper compliance model, and explains her assessment-to-report-to-remediation approach for identifying and addressing control gaps. They also cover bridging conversations between legal/compliance and creative teams to maintain speed while making outputs audit-ready, and Dr. Nollie shares her background in arts, journalism, media production, podcasting, and documentary work.

 

Resources:

Dr. Yolanda Nollie on LinkedIn

Clarity for Creatives Website 

Artwork

Elaine Capers

Art by Elaine

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI Today in 5

AI Today in 5: February 20, 2026, The Spinx Raises Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI compliance demands grow. (PlanAdviser)
  2. Compliance Monitoring: what works, what backfires. (UCToday)
  3. New AI governance tool. (PRNewsWire)
  4. The Spinx raises funds for new AI compliance agents. (FinTechGlobal)
  5. Boys will always be…just boys. (CNBC)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

When Your AI Chat Becomes Exhibit A: What United States v. Heppner Means for Compliance Professionals

There are court rulings that quietly shape doctrine, and others that detonate assumptions. The recent decision of Judge Jed Rakoff from the Southern District of New York in United States v. Heppner falls into the latter category. In a February 10, 2026, ruling, the Court made clear that neither the attorney-client privilege nor the work-product doctrine protected materials generated through a third-party generative AI platform. In plain English, what a defendant typed into a public AI system was discoverable.

For compliance professionals, this is not a narrow litigation footnote. It is a flashing red warning light. The era of casual AI experimentation inside corporations is over. Governance now must catch up with adoption. Today, we will consider the Court’s ruling and why it matters to a Chief Compliance Officer.

The Court’s Core Holding

The defendant in Heppner had used a third-party generative AI tool to draft and refine materials that were later shared with counsel. When prosecutors sought production, the defense argued that these materials were protected by privilege and work-product protections. The court disagreed.

The reasoning was straightforward and, frankly, predictable:

  • The AI tool was not an attorney.
  • The terms of service did not guarantee confidentiality and allowed retention or potential disclosure of inputs.
  • The materials were not prepared at the direction of counsel for the purpose of obtaining legal advice.
  • Simply sending AI-generated drafts to counsel after the fact did not, by itself, retroactively cloak them in privilege.

This is a fundamental point: privilege attaches to communications made in confidence for the purpose of seeking legal advice. When an employee enters sensitive facts into a third-party AI platform that disclaims confidentiality, that “confidence” is at best questionable. When those drafts are created independently of counsel’s direction, work-product arguments grow thin. The court did not create a new doctrine. It applied existing principles to new technology. That is precisely why this ruling is so important.

The Illusion of Confidentiality

Many business users treat AI platforms like a digital notebook. They assume that because the interaction occurs on a screen and feels private, it is private. That assumption is dangerous. Public and consumer AI platforms often reserve the right to store, analyze, or use inputs for service improvement. Even when vendors promise limited retention, those commitments may not meet the strict confidentiality standards necessary to preserve privilege. From a legal perspective, once you introduce a third party without adequate confidentiality protections, you risk waiving your rights.

The compliance lesson is blunt: generative AI is not your lawyer, and it is not your secure internal memo system. This is where governance intersects with culture. If employees are entering investigative summaries, draft responses to regulators, internal audit findings, or potential misconduct narratives into public AI tools, you are manufacturing discoverable evidence. That is not a hypothetical risk. That is now a litigated reality.

Why This Is a Board-Level Issue

The Department of Justice has made clear through the Evaluation of Corporate Compliance Programs (ECCP) that companies must identify and manage emerging risks. Artificial intelligence is no longer emerging. It is embedded in operations, marketing, finance, and legal workflows. The Heppner ruling converts AI usage from a technology convenience into a legal risk category. Boards of Directors should be asking:

  • Do we have an inventory of AI tools used across the enterprise?
  • Are employees permitted to input confidential, regulated, or legally sensitive information into third-party platforms?
  • Have we reviewed the vendor’s terms of service regarding confidentiality, retention, and data ownership?
  • Are legal and compliance functions involved in approving AI deployments?

If the answer to any of these questions is uncertain, there is a governance gap. AI governance is no longer solely about bias, explainability, or regulatory compliance. It is also about preserving privilege, managing litigation risk, and managing evidence.

Privilege Cannot Be Recreated After the Fact

One of the most significant aspects of the ruling is the rejection of “retroactive privilege.” Sending AI-generated content to counsel after it is created does not transform it into protected communication. This matters for compliance investigations. Consider the following scenario:

An internal report of potential misconduct surfaces. An employee uses a public AI tool to summarize the facts and generate possible legal arguments before reaching out to in-house counsel. That summary now exists outside any protected legal channel. The vendor may retain it. It may be discoverable.

By the time counsel becomes involved, the privilege damage may already be done. The message for compliance teams is clear: legal engagement must precede, or at least direct, sensitive analysis, not follow it.

Work Product Is Not a Safety Net

Some may argue that AI-assisted drafting in anticipation of litigation should fall under the work-product doctrine. The court in Heppner was not persuaded. Work-product protection generally applies to materials prepared by or for an attorney in anticipation of litigation. When individuals independently generate content using AI tools without counsel’s direction, that protection is far from guaranteed. Compliance professionals should not assume that labeling a document “prepared in anticipation of litigation” will insulate AI-generated material. Courts will look at substance over form.

Practical Steps for Compliance Leaders

This ruling demands an operational response from every CCO. Here are some steps every compliance program should consider.

1. Treat Third-Party AI as Non-Confidential by Default

Unless you have a contractual, enterprise-level arrangement with robust confidentiality provisions and clear data controls, assume that information entered into a third-party AI platform is not protected. This default posture should be reflected in policy language.

2. Update Acceptable Use Policies

Your code of conduct and IT policies should explicitly address the use of generative AI. Prohibit the entry of:

  • Privileged communications.
  • Investigation details.
  • Personally identifiable information.
  • Trade secrets.
  • Sensitive regulatory communications.

Policy must move from general warnings to specific examples.

3. Involve Legal in AI Governance

AI procurement should not be a purely IT function. Legal and compliance must review vendor terms, especially around:

  • Data retention.
  • Subprocessor use.
  • Confidentiality obligations.
  • Audit rights.
  • Breach notification.

If you cannot articulate how your AI vendor protects inputs, you cannot defend privilege claims.

4. Implement Training That Reflects Real Risk

Annual compliance training should now include explicit guidance on AI usage. Employees should understand that entering confidential information into public AI tools can waive privilege and render it discoverable. Training should include practical scenarios. The objective is behavioral change, not abstract awareness.

5. Establish Secure AI Environments for Legal Work

If your organization intends to use AI in legal or investigative contexts, consider enterprise solutions that:

  • Operate within your controlled environment.
  • Restrict data sharing.
  • Provide contractual confidentiality.
  • Maintain clear audit logs.

Even then, legal oversight is essential. Secure does not automatically mean privileged.

6. Align with Litigation Hold Procedures

AI interaction logs may constitute discoverable material. Ensure that your litigation hold processes account for AI-generated content. If your organization logs prompts and outputs, those logs may fall within the scope of preservation obligations. Ignoring this dimension creates spoliation risk.
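If your organization does log prompts and outputs, one way to connect those logs to preservation obligations is an append-only record that a litigation-hold process can flag by custodian. The sketch below is purely illustrative; the class names, fields, and hold logic are assumptions, not any particular e-discovery product's API.

```python
# Hypothetical sketch: an append-only log of AI prompts/outputs whose
# entries a litigation-hold process can flag for preservation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIInteraction:
    user: str
    tool: str
    prompt: str
    output: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    on_hold: bool = False  # set True when a litigation hold attaches

class InteractionLog:
    def __init__(self):
        self._entries: list[AIInteraction] = []

    def record(self, entry: AIInteraction) -> None:
        self._entries.append(entry)  # append-only: no update or delete path

    def apply_hold(self, user: str) -> int:
        """Flag all of a custodian's interactions for preservation."""
        held = 0
        for e in self._entries:
            if e.user == user and not e.on_hold:
                e.on_hold = True
                held += 1
        return held

log = InteractionLog()
log.record(AIInteraction("jdoe", "public-llm", "Summarize the incident...", "Summary: ..."))
print(log.apply_hold("jdoe"))  # prints 1: one entry now preserved
```

The design point is the one the text makes: preservation has to reach the AI interaction records themselves, not just the documents derived from them.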

The Cultural Dimension

Technology adoption inside companies often outruns governance. Employees experiment. Business units optimize. Productivity improves. Compliance arrives later. That sequencing is no longer sustainable. The Heppner ruling should catalyze a shift from reactive to proactive governance. AI usage must be mapped, risk-ranked, and monitored, just as third-party intermediaries, high-risk markets, and financial controls are. If your risk assessment does not explicitly include generative AI, it is incomplete.

Connecting to the DOJ’s Expectations

The DOJ has repeatedly emphasized dynamic risk assessment. Artificial intelligence now clearly falls within the scope of corporate compliance evaluation. Prosecutors will not be sympathetic to arguments that “everyone was using it” or that policies were silent. They will ask:

  • Did the company identify AI as a risk area?
  • Did it implement controls?
  • Did it train employees?
  • Did it monitor usage?
  • Did it respond to incidents?

The answers to those questions will influence charging decisions, resolutions, and penalty calculations.

A Final Word: Convenience Versus Control

Generative AI is transformative. It enhances drafting, analysis, and research. It can elevate compliance operations if deployed thoughtfully. However, convenience without control is exposure. The lesson of United States v. Heppner is not that AI should be avoided. It is that AI must be governed with the same rigor as any other high-impact enterprise tool.

Privilege is fragile. Once waived, it cannot be restored. In a world where a chat prompt can become an exhibit, compliance professionals must lead the charge in redefining responsible AI use. If you are a chief compliance officer, this is your moment. Update your policies. Engage your board. Coordinate with legal and IT. Embed AI governance into your compliance framework. Because the next time an AI conversation surfaces in discovery, you do not want to explain why your program treated it like a harmless experiment.

Categories
AI Today in 5

AI Today in 5: February 19, 2026, The End of Stores Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Will AI shopping agents mean the end of stores? (PYMNTS)
  2. How Internal Audit must respond to the EU AI Act. (TeamMate)
  3. Re-writing the regulatory risk playbook. (Forbes)
  4. Public-Private Initiative to strengthen risk management for AI. (DOT)
  5. Dr. Oz wants AI avatars for rural healthcare. (NPR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: February 19, 2026, The Gambler Takes the Stand Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Connected workflows for compliance. (Spark)
  • Commit $600MM in fraud, no worries, just donate to Trump. (NYT)
  • Write the facts, get fired by Trump. (FT)
  • Tom Goldstein takes the stand in his criminal tax trial. (Reuters)
Categories
Blog

Embedded Explainability: Turning Principles into Proof

Embedded explainability is the design choice to build “the why” directly into a system as it operates, rather than bolting on an explanation after the fact. In practical terms, it means the model or decision engine is instrumented to surface the key factors that drove a specific output as the output is delivered. In a compliance, risk, or fraud context, this can include reason codes tied to specific data features, a clear confidence score, the policy or control implicated, and a short narrative that translates technical drivers into business language. The point is not to turn every decision into a science project; the point is to make explanations an always-on product requirement, so investigators, managers, and auditors can quickly understand what the system saw, why it escalated, and what evidence supports the action.
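To make the "always-on product requirement" concrete, the structure described above can be sketched as a decision object that carries its own explanation. This is an illustrative sketch only; the class and field names (reason codes, confidence, implicated policy, narrative) mirror the paragraph, and none of it reflects a specific vendor's schema.

```python
# Illustrative sketch: a decision that surfaces "the why" as it is
# delivered, rather than as an after-the-fact reconstruction.
from dataclasses import dataclass

@dataclass
class Explanation:
    reason_codes: list[str]   # data features that drove the output
    confidence: float         # engine confidence, 0.0 to 1.0
    policy_implicated: str    # the policy or control touched
    narrative: str            # business-language summary of the drivers

@dataclass
class Decision:
    case_id: str
    action: str               # e.g. "escalate", "route_to_triage"
    explanation: Explanation  # embedded with the output, not bolted on

decision = Decision(
    case_id="CASE-0042",
    action="escalate",
    explanation=Explanation(
        reason_codes=["R01_unusual_payment_pattern", "R07_high_risk_geography"],
        confidence=0.87,
        policy_implicated="Third-Party Payments Policy",
        narrative="Escalated because payment timing and counterparty "
                  "location match two known fraud indicators.",
    ),
)
```

Because every decision ships with this record, an investigator or auditor can read what the system saw and why it escalated without re-running the model.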

Where this becomes powerful is in governance. Embedded explainability creates a durable audit trail and makes accountability real: you can test whether explanations are consistent over time, whether they drift, whether similarly situated cases are treated consistently, and whether the system is relying on inappropriate proxies. It also reduces the “black box” tax during exams and internal reviews because your documentation is generated continuously, decision by decision, rather than recreated under a deadline. Done well, embedded explainability supports model risk management, accelerates case resolution, and builds user trust because the system does not just tell you what to do. It shows its work in a way that is usable for first-line teams and defensible for second-line and regulators.

If you have sat in even a single AI governance meeting, you have heard the same reassuring words: transparency, fairness, accountability. They sound good. They also do not answer the one question your Audit Committee will ask you the minute something goes sideways: can you prove what happened, who approved it, and why the system did what it did?

That is the heart of embedded explainability for a GRC or compliance professional. It is not a debate about data science. It is about building a program that can withstand scrutiny. In a strong compliance program, “principles” are not controls. They are intentions. Regulators, prosecutors, and auditors do not award credit for intent. They want evidence of implementation and effectiveness. When you embed explainability, you are building evidence into the workflow itself, so the program produces audit-ready artifacts without heroics.

Think like an auditor, not like a vendor.

In many organizations, “explainability” is treated like a technical deliverable. Someone pulls a chart. Someone cites an algorithm. Everyone nods. Then internal audit asks a simple question: “Show me how this use case was approved, how risks were assessed, how testing was performed, and how you monitor it today.”

That is where compliance needs to reframe the conversation. For GRC, the most important explainability is process explainability:

  • Who approved the use case, and what decision impact does it have?
  • What risks were identified, and what mitigations were required?
  • What data and content sources were used, and how are they governed?
  • What testing was done, what thresholds were applied, and what failed?
  • Who monitors the system in production, and how are issues escalated?
  • How are changes controlled, logged, and reapproved?

If you can answer those questions with documentation you can pull on demand, you are not “talking about explainability.” You are demonstrating it.

The risk that hides in plain sight: language and cultural bias

Most compliance teams understand bias as a broad concept. The operational problem manifests in a narrower, more painful way: language and cultural bias within everyday compliance workflows. Consider the real-life places your organization may be using AI or analytics: hotline intake, investigations triage, monitoring and surveillance, third-party diligence, audit planning, policy interpretation, and case summarization. Now add the facts of corporate life: multilingual reporting, non-native English narratives, regional idioms, and different cultural communication styles.

Here is the compliance risk: the system may not be “biased” in a headline-grabbing way. It may be biased in a quiet, compounding way:

  • A hotline narrative written in non-native English is scored lower for credibility.
  • Regional phrasing triggers false positives in monitoring.
  • Direct communication styles are interpreted as “aggressive” or “retaliatory.”
  • Reports from certain geographies are deprioritized because of linguistic patterns.
  • Summaries strip context from culturally specific descriptions of harm.

This is why embedded explainability matters. If the system cannot tell you why it scored and routed a case the way it did, you will not find these problems until someone outside the company points them out to you.

A compliance-led lifecycle that makes explainability real

The practical move is to treat embedded explainability as a lifecycle requirement, not a go-live checkbox. You want stage gates with documented approvals and an evidence pack that travels with the use case from intake to monitoring. Think of it as the same discipline you already apply to third parties, controls testing, and investigations: define, document, test, approve, monitor, and improve.

A simple compliance-led lifecycle looks like this:

  1. Intake and approval: What is the use case, what is the decision impact, and who is accountable?
  2. Data and language risk assessment: What data is used, what languages and regions are in scope, and what bias risks exist?
  3. Build with traceability: Document the logic, rules, prompts, and human review points.
  4. Testing: Prove the system can be reconstructed and does not degrade across language groups.
  5. Deployment readiness: Confirm monitoring, access controls, logging, and escalation are active.
  6. Ongoing monitoring: Report drift, exceptions, overrides, and bias findings; reapprove material changes.
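The stage gates above can be sketched as a simple rule: no stage gets a documented approval until every earlier stage already carries one. The stage names below mirror the list; the `UseCase` class and its gating logic are hypothetical, intended only to show the discipline, not a real GRC platform.

```python
# Minimal sketch of stage-gated approvals for the lifecycle above.
STAGES = [
    "intake_and_approval",
    "data_and_language_risk_assessment",
    "build_with_traceability",
    "testing",
    "deployment_readiness",
    "ongoing_monitoring",
]

class UseCase:
    def __init__(self, name: str):
        self.name = name
        self.approvals: dict[str, str] = {}  # stage -> named approver

    def approve(self, stage: str, approver: str) -> None:
        idx = STAGES.index(stage)
        # Gate: every earlier stage must already carry a documented approval.
        missing = [s for s in STAGES[:idx] if s not in self.approvals]
        if missing:
            raise ValueError(f"Cannot approve {stage}; missing: {missing}")
        self.approvals[stage] = approver

uc = UseCase("hotline-triage-model")
uc.approve("intake_and_approval", "CCO")
uc.approve("data_and_language_risk_assessment", "Privacy Counsel")
# uc.approve("testing", "QA")  # would raise: build stage not yet approved
```

The value is the evidence trail: each approval is a dated, attributable record that travels with the use case from intake to monitoring.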

This is the compliance function earning its keep: not by arguing about definitions, but by building a governance machine that produces defensible evidence.

The minimum evidence pack: what you should be able to pull on demand

If you want to operationalize embedded explainability, standardize the artifacts. Do not let every team reinvent documentation. Your minimum evidence pack should be consistent across machine learning models, rules-based analytics, LLM workflows, and decision engines.

At a minimum, you should be able to produce:

  • Use case charter: purpose, scope, decision impact, owner, risk tier, approvals;
  • Data and language risk assessment: sources, language coverage, cultural risk factors, mitigations;
  • System specification: what it is, how it works, where humans intervene;
  • Testing artifacts: bias test plan, scenario tests, results, remediation notes;
  • Explainability checklist: proof you can reconstruct inputs, steps, outputs, and rationale;
  • Deployment approval record: stage-gate sign-offs and dates;
  • Monitoring and drift reports: trends, exceptions, and escalation notes;
  • Incident and escalation log: root cause, corrective actions, closure dates; and
  • Change management log: what changed, materiality, retesting, reapproval.
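“Pull on demand” can itself be made testable: before an exam or audit request, check the pack against the required artifact list. The sketch below is an assumption-laden illustration; the artifact names mirror the list above, and the paths are placeholders, not a real document-management layout.

```python
# Hypothetical completeness check for a use case's evidence pack.
REQUIRED_ARTIFACTS = {
    "use_case_charter",
    "data_and_language_risk_assessment",
    "system_specification",
    "testing_artifacts",
    "explainability_checklist",
    "deployment_approval_record",
    "monitoring_and_drift_reports",
    "incident_and_escalation_log",
    "change_management_log",
}

def missing_artifacts(evidence_pack: dict[str, str]) -> set[str]:
    """Return the artifact names the pack cannot produce on demand."""
    return REQUIRED_ARTIFACTS - evidence_pack.keys()

# Placeholder pack mapping each artifact to a (hypothetical) document path.
pack = {name: f"/grc/{name}.pdf" for name in REQUIRED_ARTIFACTS}
del pack["monitoring_and_drift_reports"]
print(missing_artifacts(pack))  # {'monitoring_and_drift_reports'}
```

Run on a schedule, a check like this turns the evidence pack from a one-time deliverable into a monitored control.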

If you have this, you have something most organizations still lack: a system of record for AI governance that internal and external auditors can actually test.

The Bottom Line

Embedded explainability is how you turn AI governance from a values statement into a control environment. It is how you protect innovation by making it defensible. If your program can reconstruct decisions, show approvals, demonstrate testing, and document monitoring, you are not hoping you are compliant. You are ready to prove it. 

Categories
Daily Compliance News

Daily Compliance News: February 18, 2026, The Stupid Is as Stupid Does Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Just how big is Ukraine’s corruption problem? (TheIndependent)
  • H-1B visas and GOP racial hatred. (NYT)
  • More energy investments in Venezuela. (WSJ)
  • The Trump Administration wants history and science removed from federal parks. (Reuters)
Categories
AI Today in 5

AI Today in 5: February 18, 2026, The AI for Rural Healthcare Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you 5 stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the AI Today In 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. AI to transform fraud investigations. (PRNewswire)
  2. Better defensible AI oversight. (PRNewswire)
  3. What’s in your compliance gap? (Forbes)
  4. Is the AI moment here? (FRSF)
  5. Oz wants AI avatars for rural healthcare. (NPR)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Truth Stranger Than Fiction: Binance, Iran, Crypto and Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly look at recent reporting on Binance that raises questions about the effectiveness of its compliance program, monitorships, and executive attitudes toward compliance.

They recap Binance’s 2023 resolution of U.S. criminal and civil matters involving money laundering and sanctions evasion. They discuss the Fortune article, which reported that Binance continued to route funds through its platform to the Iranian government in 2024 and into 2025. They highlight Mr. Zhao’s public response on X, in which he suggested that if investigators found misconduct, compliance had failed to prevent it. The hosts criticize this as a misunderstanding: business units own risk, and compliance’s role is to provide systems, channels, oversight, and escalation rather than to “prevent” all misconduct.

Key highlights:

  • Truth Stranger Than Fiction in Compliance
  • Binance’s 2023 Guilty Plea, $4.3B Penalty & Two Monitorships
  • Compliance Team Fallout: Investigators Fired & CCO on the Move
  • ‘If You Found It, You Failed’: Why CEOs Misunderstand Compliance
  • Iran as the Red Line: Plea Agreement Breach, Politics, and Corruption Risk
  • Will Anyone Enforce This? Rule of Law Questions and What Comes Next

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
The Hill Country Podcast

The Hill Country Podcast: Greg Faldyn: Leadership and Legacy in Rotary

Welcome to the award-winning The Hill Country Podcast. The Texas Hill Country is one of the most beautiful places on earth. In this podcast, Hill Country resident Tom Fox visits with the people and organizations that make this one of the most unique areas of Texas. In this episode, host Tom Fox speaks with Greg Faldyn, a seasoned insurance industry professional and a long-time Rotarian.

Greg, an insurance professional with over 40 years of experience and a dedicated Rotary Club member for nearly 25 years, views the 100th anniversary of Rotary in Kerrville as a landmark achievement in the organization’s enduring commitment to community service. Having played a pivotal role in organizing the celebration as the foundation chair, Greg has been instrumental in highlighting Rotary’s century-long partnerships with key local organizations, such as the Peterson Foundation and the Raphael Clinic. He proudly points to the Hill Country community’s collective resilience, particularly in the wake of events like the July 4th flood, as a testament to Rotary’s strength and impact. Passionate about engaging young professionals, Greg believes that the milestone anniversary serves not only as a celebration of past achievements but also as a call to future service and community enhancement.

Highlights include:

  • Rotary’s Centennial Celebration in Kerrville’s Community
  • Community Support through Rotary Foundation Grants
  • Rotary Club Weekly Engagement
  • Why Join Rotary?

Resources:

Rotary Club of Kerrville

Rotary District 5840

Rotary International

 Other Hill Country Focused Podcasts

Hill Country Authors Podcast

Hill Country Artists Podcast

Texas Hill Country Podcast Network

Cover Art

Nancy Huffman