Categories
Great Women in Compliance

Great Women in Compliance: Both Sides of the Desk: Managing Layoffs & Thriving Through Them

Layoffs, no matter which side of the desk you are on, are one of the most difficult realities of the workplace. For leaders, they demand empathy, clarity, and responsibility. For employees, they can bring shock, uncertainty, and the need to rebuild. In this episode, Lisa Fina and Ellen Hunt invited Gina Lakatos and Gwen Hassan to explore what it means to manage layoffs with integrity and how individuals can survive and even thrive in the aftermath.

Our conversation focused on the human experience of layoffs: the decisions, emotions, mistakes, and opportunities that shape what comes next.

🔍 What We Cover

  • Compassion and clarity matter on both sides of the desk
  • Why the corporate math of layoffs is not a judgment of value or performance
  • How leaders can communicate with clarity, empathy, and respect
  • Acknowledging the emotional impact of layoffs on those who remain
  • Practical strategies for thriving after job loss: mindset, skills, and next steps

Layoffs may close one chapter—but they don’t have to define your story. This episode offers insight, empathy, and actionable guidance for navigating one of work’s hardest realities with dignity and resilience.

Categories
31 Days to More Effective Compliance Programs

31 Days to a More Effective Compliance Program: Day 7 – Clawbacks and Holdbacks

Welcome to 31 Days to a More Effective Compliance Program. Over this 31-day series in January 2026, Tom Fox will post a key component of a best-practice compliance program each day. By the end of January, you will have enough information to create, design, or enhance a compliance program. Each podcast will be short, at 6-8 minutes, with three key takeaways that you can implement at little or no cost to help update your compliance program. I hope you will join me each day in January for this exploration of best practices in compliance. Today, on Day 7, we explore the critical insights from the DOJ Clawback and Holdback Program for compliance professionals.

Key highlights:

  • Integrating Compliance into Compensation
  • Financial Accountability Emphasis
  • DOJ’s Commitment to Individual Accountability
  • Continuous Evaluation and Improvement

Resources:

Listeners to this podcast can receive a 20% discount on The Compliance Handbook, 6th edition, by clicking here.

Categories
AI Today in 5

AI Today in 5: January 7, 2026, The AI Prescribing Meds in Utah Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. AI prescribing medicines in Utah. (ABC4 Utah)
  2. Compliance companies scaling AI. (CIO)
  3. The human factors reshaping AI-driven AML. (FinTech Global)
  4. Real-time AI for healthcare compliance. (HealthCare IT Today)
  5. AI reshaping VAT compliance. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Matt’s Key Compliance Issues and Trends to Watch in 2026

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss key issues Matt is following in 2026.

They look into anticipated FCPA enforcement actions against Chinese telecom giant ZTE and the controversial indictment of SmartMatic, raising concerns about possible politicization of compliance enforcement. The conversation also covers the potential impact on whistleblower cases if key Qui Tam lawsuits under the False Claims Act are invalidated, as well as the ongoing federal-state conflict over AI regulations. Additionally, they touch on the financial complexities and risks associated with AI funding deals, drawing parallels to past financial crises. Compliance officers are advised to prepare for an uncertain and challenging regulatory landscape in the year ahead.

Key highlights:

  • FCPA Enforcement in 2026
  • The Future of Qui Tam Lawsuits
  • Federal Preemption of State AI Laws
  • AI Accounting and Financial Risks

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
PodFest Expo 2026 Speaker Series Preview

Podfest Expo 2026 Speaker Preview Series: Jenn Trepeck on Moving up to Pro Status in Podcasting

In this episode of the Podfest Expo 2026 Speaker Preview podcast series, Tom Fox visits with Jenn Trepeck, host of the Salad with a Side of Fries podcast, to discuss her panels at Podfest Expo 2026 on AI in Podcasting, Ask the Pros, and Turning Your Podcast into a Book. Some of the highlights in this podcast are:

  • Jenn’s role in the world of podcasting.
  • Her panels at Podfest Expo 2026.
  • What she hopes to get out of Podfest Expo 2026 and why you should attend.

I hope you can join us at Podfest Expo 2026, hosted by Podfest Global. This year's event marks the 12th anniversary and will be held January 15-18 at the Renaissance Orlando at SeaWorld® in Orlando, Florida. The lineup of this year's event is simply first-rate, with some of the top names in podcasting.

Podfest Expo is a community of people interested in and passionate about sharing their voices and messages with the world through powerful audio and video media. We’re proud to unite as many people as possible to learn, get inspired, and grow better together.

Podfest Expo is so much more than just a conference. While we pride ourselves on featuring the most engaging speakers, exciting topics, and in-depth content, what sets the Podfest Expo event apart from all others is the tight-knit community we’ve been building since 2013. You don’t just attend a Podfest event—you become part of the Podfest family.

Whether you’re new to podcasting or a veteran podcaster looking to innovate and improve your podcast, our easy-to-understand Conference Topics allow you to customize a daily agenda based on what you’re most interested in learning. No matter your skill level or experience, Podfest Expo 2026 has plenty to offer!

Please join us at the event. For information on the event, click here. As an extra benefit for listeners of this podcast, Podfest Expo is offering 10% off any ticket level. Enter the discount code Fox2026 or visit this link.

Podfest Expo 2026 is a production of Podfest Global, which is the sponsor of this podcast series.

Categories
Daily Compliance News

Daily Compliance News: January 7, 2026, The 6 Years in Singapore Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories from the business world, compliance, ethics, risk management, leadership, or general interest to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • Energy companies are cautious about Venezuela. (NYT)
  • Malaysia to up the ABC ante. (DW)
  • British national sentenced to 6 years in jail over Wirecard fraud. (FT)
  • How corporate security has changed. (WSJ)
Categories
Blog

AI Regulation – The Federal Override Question

Yesterday, we considered the next Texas AI law. Today, we review the Trump Administration’s attempt to override Texas and other states’ AI regulations. Federal preemption is not a slogan; it is a legal mechanism. Whether federal rules override Texas depends on the shape of the federal action. Of course, following the law, or even legality itself, is not a nicety the Trump Administration concerns itself with, so for now we remain in the Wild West.

Scenario A: A Comprehensive Federal AI Statute With Express Preemption

If Congress passes a federal AI law that explicitly preempts state laws in a defined field, then state requirements in that field can be displaced. Companies typically win simplicity but may lose stronger consumer protections that some states impose. Even then, preemption is often partial. Many federal regimes preserve state authority in areas such as consumer protection, civil rights, and general tort liability.

Scenario B: Federal Agency Rules Without Clear Congressional Authority

If the “federal initiative” is primarily executive-branch policy, guidance, or agency rulemaking without a strong statutory anchor, preemption becomes harder and more litigated. States often retain room to regulate, especially where they claim traditional police powers, such as privacy, civil rights, consumer protection, and public safety. Companies cannot bet the farm on “the feds will wipe this away” unless there is real statutory force behind it.

Scenario C: Federal Procurement-Only Standards

Sometimes, federal initiatives focus on government acquisition and vendor requirements. That does not preempt state law for private-sector deployments. It does, however, become a de facto national standard if large vendors align their products to sell to the federal government.

Where Conflict Actually Occurs

Conflicts tend to arise in these friction points:

  • Different definitions of “AI system” or “high-risk.”
  • Different disclosure triggers (Texas requires disclosure in X context, federal requires disclosure in Y context).
  • Biometric rules where one regime is stricter on consent, retention, or use limitations.
  • Enforcement and private rights of action (state allows lawsuits, federal channels enforcement to agencies).

Most mature companies respond by building a control set that satisfies the strictest credible requirements, then tailoring notices and workflows by jurisdiction where needed.
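The "strictest credible requirement" approach described above amounts to a simple merge across jurisdictions: for each control dimension, adopt the toughest standard any regulator demands, then layer jurisdiction-specific workflow tweaks on top. A minimal, illustrative Python sketch; all jurisdiction names and disclosure levels here are invented for the example, not drawn from any statute:

```python
# Hypothetical illustration: pick the strictest credible requirement
# per control dimension and treat it as the enterprise-wide default.

# Disclosure levels ordered from least to most strict (invented labels).
STRICTNESS = ["none", "on-request", "contextual", "always"]

def strictest(requirements: dict) -> str:
    """Return the strictest level demanded by any jurisdiction."""
    return max(requirements.values(), key=STRICTNESS.index)

# Invented example: per-jurisdiction disclosure triggers.
disclosure = {"TX": "contextual", "CO": "always", "US-federal": "on-request"}

baseline = strictest(disclosure)
print(baseline)  # "always": one control set satisfies every jurisdiction
```

Where a jurisdiction's workflow genuinely differs, say in notice wording or timing, that tailoring sits on top of the strict baseline rather than replacing it.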

What Does It Mean for Compliance?

  1. Preemption Risk Is Not Binary

Preemption risk in artificial intelligence regulation does not operate as an on–off switch. It lives in the gray space between state authority and federal supremacy, and that gray space is where compliance programs either add value or fall apart. State AI laws are not disappearing simply because the federal government asserts leadership. Instead, they continue to operate until and unless a direct conflict arises, at which point federal standards typically become the ceiling rather than the floor.

For compliance leaders, this means that a checklist mentality is dangerous. It is not enough to ask whether a state law applies or whether a federal framework exists. The real question is how both interact in practice. A company may be fully compliant with a state statute and still find itself exposed if federal regulators view the same conduct through a national security, civil rights, or interstate commerce lens.

The operational takeaway is that AI governance must be designed with escalation in mind. Policies, controls, and documentation should assume federal review even when day-to-day compliance is driven by state requirements. Preemption uncertainty rewards organizations that think in systems and penalizes those that think in silos.

  2. Framework-Based Governance Is the Safest Harbor

In an unsettled regulatory environment, recognized AI governance frameworks are the closest thing compliance professionals have to solid ground. Aligning with established standards such as the NIST AI Risk Management Framework or ISO/IEC 42001 is not about regulatory box-checking. It is about demonstrating intent, structure, and accountability in a way regulators understand and respect.

At the state level, frameworks increasingly serve as explicit or implicit safe harbors. Legislatures recognize that they cannot outpace technology and therefore reward companies that adopt credible, risk-based governance models. At the federal level, the same frameworks provide evidence that AI risks are being identified, assessed, mitigated, and monitored systematically.

This dual function is critical. A framework-aligned program creates a common language across jurisdictions and regulators. It also gives compliance teams a defensible narrative when enforcement questions arise. Rather than arguing technical minutiae, organizations can point to governance architecture, risk assessments, and continuous improvement processes.

The compliance lesson is simple but powerful. Frameworks are no longer optional guidance documents. They are strategic assets that convert regulatory uncertainty into manageable risk.

  3. Design Once, Deploy Many

Fragmented compliance architectures are the fastest way to lose credibility under federal scrutiny. State-by-state AI controls may appear responsive in the short term, but they create operational inconsistency, documentation gaps, and governance confusion. Federal regulators do not evaluate compliance in isolation. They evaluate whether an organization understands and controls its enterprise-wide risk profile.

A design-once, deploy-many approach flips the traditional compliance model. Instead of tailoring governance from the ground up for each jurisdiction, companies should establish a core AI governance framework that applies globally, with localized adjustments layered on where legally required. This creates consistency in risk assessment, accountability, escalation, and remediation.

From a compliance operations perspective, this approach reduces friction between legal, IT, data science, and business teams. Everyone works from the same playbook. Training scales more effectively. Audits become easier. Most importantly, regulators see coherence rather than patchwork.

Federal preemption risk amplifies this need. If federal standards ultimately override conflicting state rules, organizations with unified governance will adapt far more quickly. Those relying on jurisdiction-specific controls will scramble. The strategic message is clear. Enterprise AI governance is not a luxury. It is a necessity.
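The design-once, deploy-many pattern can be pictured as a baseline control set with thin jurisdictional overlays merged on top. A minimal Python sketch; the control names, values, and jurisdictions are entirely hypothetical and stand in for whatever an organization's actual policy catalog contains:

```python
# Hypothetical illustration of "design once, deploy many": one enterprise
# baseline policy, with jurisdiction-specific overlays merged on top.
# All control names, values, and jurisdictions below are invented.

BASELINE = {
    "risk_assessment": "annual",
    "human_review": True,
    "disclosure": "contextual",
}

OVERLAYS = {
    "TX": {"disclosure": "explicit-trigger"},      # stricter local trigger
    "EU": {"risk_assessment": "per-deployment"},   # stricter local cadence
}

def policy_for(jurisdiction: str) -> dict:
    """Baseline controls, adjusted only where a jurisdiction requires it."""
    return {**BASELINE, **OVERLAYS.get(jurisdiction, {})}

print(policy_for("TX")["disclosure"])   # explicit-trigger
print(policy_for("NY") == BASELINE)     # True: no overlay, pure baseline
```

The design payoff is exactly the coherence regulators look for: auditors see one playbook with documented, minimal deviations rather than a patchwork of divergent jurisdiction-specific programs.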

  4. National Security Use Cases Demand Special Handling

Artificial intelligence that touches national security, export controls, critical infrastructure, or trade sanctions operates in a different regulatory universe. In these areas, federal authority is not merely dominant; it is exclusive. No state law meaningfully offsets federal jurisdiction, and no amount of state-level compliance provides a shield.

For compliance leaders, the challenge is identification and segmentation. Many organizations underestimate how broadly national security concepts are interpreted. AI models used in logistics optimization, cybersecurity, financial analytics, or advanced manufacturing may trigger federal scrutiny even if their primary purpose appears commercial.

The correct response is not fear but structure. AI systems with potential national security implications should be flagged early, governed separately, and subject to enhanced oversight. This includes stricter access controls, deeper documentation, export control reviews, and closer coordination with legal and government affairs functions.

State AI compliance remains relevant, but it becomes secondary. The risk of getting this wrong is not limited to fines. It includes injunctions, loss of government contracts, reputational damage, and, in extreme cases, criminal exposure. Compliance programs that fail to elevate these use cases are operating with blind spots that regulators will not forgive.

  5. Boards Must Own AI Oversight

Preemption uncertainty elevates AI governance from a legal or technical issue to a core enterprise risk issue. That shift places responsibility squarely at the board level. Regulators increasingly expect boards to understand how AI is used, what risks it creates, and how management is controlling those risks across jurisdictions.

This does not mean boards must become data scientists. It means they must exercise informed oversight. Boards should receive regular reporting on AI inventory, risk assessments, regulatory exposure, and incident response readiness. They should ask the same questions they ask about cybersecurity, financial controls, and ethics.

From a compliance perspective, board engagement is a force multiplier. It drives resource allocation, breaks down organizational resistance, and signals seriousness to regulators. It also creates a governance record that matters when enforcement decisions are made.

Preemption debates will continue. Laws will change. What will not change is the expectation that boards oversee material risks. AI now qualifies. Organizations that recognize this early will be better positioned to navigate both state innovation and federal authority with confidence.

State–Federal AI Preemption Risk Matrix

To help you think through some of these issues, I have created a state-federal AI preemption risk matrix for multi-jurisdictional operations.


| Risk Dimension | Federal Position (Emerging) | State Position (Example: Texas) | Preemption Risk Level | Compliance Implication | Recommended Action |
| --- | --- | --- | --- | --- | --- |
| Scope of Regulation | Federal framework signals broad national uniformity for AI governance tied to interstate commerce and national security | State laws focus on in-state deployment and consumer impact | Medium | Overlapping but not identical coverage | Map AI systems by deployment location and business use, not by development location |
| Enforcement Authority | Centralized federal enforcement likely through agencies (FTC, DOJ, sector regulators) | Centralized state enforcement (Attorney General only) | Low | Parallel enforcement is possible but manageable | Design escalation protocols for dual-regulator inquiries |
| Private Right of Action | Federal posture trending against expansive private litigation | Many states explicitly bar private rights of action | Low | Reduced litigation exposure | Maintain strong documentation to demonstrate good-faith compliance |
| Disclosure & Transparency | Federal guidance favors risk-based, context-specific disclosures | State laws may impose explicit disclosure triggers | Medium | Potential inconsistency in disclosure thresholds | Default to the higher transparency standard where commercially feasible |
| Biometric & Surveillance Controls | Federal focus on national security and civil liberties | States restrict unauthorized biometric surveillance | Low–Medium | Risk arises in public-facing or employee monitoring tools | Centralize biometric governance under a single enterprise policy |
| Governance Framework Recognition | Federal regulators endorse voluntary frameworks (e.g., NIST-aligned) | States provide safe harbors for recognized frameworks | Low | Strong alignment opportunity | Anchor AI governance to a recognized framework, enterprise-wide |
| Cure Periods & Remediation | Federal enforcement is historically discretionary, not guaranteed | States may codify explicit cure periods | Medium | Loss of cure rights if federal preemption applies | Treat cure periods as a bonus, not a compliance strategy |
| National Security & Export Controls | Federal law dominates | States largely defer | High (Federal) | State compliance does not shield federal exposure | Segment AI systems touching defense, trade, or sanctions |
| Cross-Border Data & AI Models | Federal primacy expected | States are silent or limited | High (Federal) | State compliance insufficient | Build AI governance with federal cross-border assumptions |
| Future Rulemaking Velocity | Rapid and evolving | Slower, statute-bound | Medium–High | State laws may lag or conflict | Establish continuous monitoring and board-level AI oversight |

Categories
PodFest Expo 2026 Speaker Series Preview

Podfest Expo 2026 Speaker Preview Series: David Dachinger on From Mic to Manuscript

In this episode of the Podfest Expo 2026 Speaker Preview podcast series, Tom Fox visits with David Dachinger to discuss his presentation at Podfest Expo 2026, From Mic to Manuscript: How Podcasters Turn Their Shows Into Books. Some of the highlights in this podcast are:

  • David’s role in the world of podcasting.
  • His presentation on using your podcast as the basis for a book.
  • What David hopes to get out of Podfest Expo 2026 and why you should attend.

I hope you can join us at Podfest Expo 2026, hosted by Podfest Global. This year's event marks the 12th anniversary and will be held January 15-18 at the Renaissance Orlando at SeaWorld® in Orlando, Florida. The lineup of this year's event is simply first-rate, with some of the top names in podcasting.

Please join us at the event. For information on the event, click here. As an extra benefit for listeners of this podcast, Podfest Expo is offering 10% off any ticket level. Enter the discount code Fox2026 or visit this link.

Podfest Expo 2026 is a production of Podfest Global, which is the sponsor of this podcast series.