Categories
Compliance Into the Weeds

Compliance into the Weeds: Matt’s Key Compliance Issues and Trends to Watch in 2026

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss key issues Matt is following in 2026.

They examine anticipated FCPA enforcement actions against Chinese telecom giant ZTE and the controversial indictment of Smartmatic, raising concerns about the possible politicization of compliance enforcement. The conversation also covers the potential impact on whistleblower cases if key qui tam lawsuits under the False Claims Act are invalidated, as well as the ongoing federal-state conflict over AI regulation. Additionally, they touch on the financial complexities and risks associated with AI funding deals, drawing parallels to past financial crises. Compliance officers are advised to prepare for an uncertain and challenging regulatory landscape in the year ahead.

Key highlights:

  • FCPA Enforcement in 2026
  • The Future of Qui Tam Lawsuits
  • Federal Preemption of State AI Laws
  • AI Accounting and Financial Risks

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

Categories
PodFest Expo 2026 Speaker Series Preview

Podfest Expo 2026 Speaker Preview Series: Jenn Trepeck on Moving up to Pro Status in Podcasting

In this episode of the PodfestExpo 2026 Speaker Preview Podcasts series, Tom Fox visits with Jenn Trepeck, host of the Salad with a Side of Fries podcast, and discusses her panels at PodfestExpo 2026 on AI in Podcasting, Ask the Pros, and Turning Your Podcast into a Book. Some of the highlights in this podcast are:

  • Jenn’s role in the world of podcasting.
  • Her presentations at PodFest Expo.
  • What she hopes to get out of PodFest Expo 2026 and why you should attend.

I hope you can join us at Podfest Expo 2026, hosted by Podfest Global. This year’s event marks the 12th anniversary and will be held January 15-18 at the RENAISSANCE ORLANDO AT SEAWORLD® in Orlando, Florida. The lineup for this year’s event is simply first-rate, with some of the top names in podcasting.

Podfest Expo is a community of people interested in and passionate about sharing their voices and messages with the world through powerful audio and video mediums. We’re proud to unite as many people as possible to learn, get inspired, and grow better together.

Podfest Expo is so much more than just a conference. While we pride ourselves on featuring the most engaging speakers, exciting topics, and in-depth content, what sets the Podfest Expo event apart from all others is the tight-knit community we’ve been building since 2013. You don’t just attend a Podfest event—you become part of the Podfest family.

Whether you’re new to podcasting or a veteran podcaster looking to innovate and improve your podcast, our easy-to-understand Conference Topics allow you to customize a daily agenda based on what you’re most interested in learning. No matter your skill level or experience, Podfest Expo 2026 has plenty to offer!

Please join us at the event. For information on the event, click here. As an extra benefit for listeners of this podcast, Podfest Expo is offering 10% off any ticket level. Enter the discount code Fox2026 or visit this link.

Podfest Expo 2026 is a production of Podfest Global, which is the sponsor of this podcast series.

Categories
Blog

AI Regulation – The Federal Override Question

Yesterday, we considered the new Texas AI law. Today, we review the Trump Administration’s attempt to override Texas and other states’ AI regulations. Federal preemption is not a slogan; it is a legal mechanism. Whether federal rules override Texas depends on the shape of the federal action. Of course, following the law, or even legality itself, is not a nicety the Trump Administration concerns itself with, so we remain in the Wild West.

Scenario A: A Comprehensive Federal AI Statute With Express Preemption

If Congress passes a federal AI law that explicitly preempts state laws in a defined field, then state requirements in that field can be displaced. Companies typically win simplicity but may lose stronger consumer protections that some states impose. Even then, preemption is often partial. Many federal regimes preserve state authority in areas such as consumer protection, civil rights, and general tort liability.

Scenario B: Federal Agency Rules Without Clear Congressional Authority

If the “federal initiative” is primarily executive-branch policy, guidance, or agency rulemaking without a strong statutory anchor, preemption becomes harder and more litigated. States often retain room to regulate, especially where they claim traditional police powers, such as privacy, civil rights, consumer protection, and public safety. Companies cannot bet the farm on “the feds will wipe this away” unless there is real statutory force behind it.

Scenario C: Federal Procurement-Only Standards

Sometimes, federal initiatives focus on government acquisition and vendor requirements. That does not preempt state law for private-sector deployments. It does, however, become a de facto national standard if large vendors align their products to sell to the federal government.

Where Conflict Actually Occurs

Conflicts tend to arise in these friction points:

  • Different definitions of “AI system” or “high-risk.”
  • Different disclosure triggers (Texas requires disclosure in X context, federal requires disclosure in Y context).
  • Biometric rules where one regime is stricter on consent, retention, or use limitations.
  • Enforcement and private rights of action (state allows lawsuits, federal channels enforcement to agencies).

Most mature companies respond by building a control set that satisfies the strictest credible requirements, then tailoring notices and workflows by jurisdiction where needed.
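The "strictest credible requirement" approach can be sketched as a simple merge over per-jurisdiction rules: take the most protective value on each dimension as the enterprise baseline, then tailor by jurisdiction where needed. This is a hypothetical illustration; the field names, dimensions, and values below are invented for the example, not drawn from any statute.

```python
# Hypothetical sketch: deriving a baseline control set from the strictest
# credible requirement across jurisdictions. All dimensions and values
# are illustrative assumptions, not taken from Texas or federal law.

from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    disclosure_required: bool      # must people be told AI is in use?
    max_retention_days: int        # cap on retaining interaction data
    human_appeal_required: bool    # must a human review path exist?

def strictest(reqs):
    """Baseline control set: the most protective value on each dimension."""
    reqs = list(reqs)
    return Requirement(
        disclosure_required=any(r.disclosure_required for r in reqs),
        max_retention_days=min(r.max_retention_days for r in reqs),
        human_appeal_required=any(r.human_appeal_required for r in reqs),
    )

# Illustrative per-jurisdiction postures (invented values).
rules = {
    "TX": Requirement(True, 365, False),
    "US-federal": Requirement(False, 730, True),
}

baseline = strictest(rules.values())
# Baseline: disclosure on, 365-day retention cap, human appeal path on.
```

The design choice mirrors the text: one enterprise-wide control set satisfies every regulator, and per-jurisdiction notices or workflows become deltas layered on top rather than parallel programs.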

What Does it Mean for Compliance?

  1. Preemption Risk Is Not Binary

Preemption risk in artificial intelligence regulation does not operate as an on–off switch. It lives in the gray space between state authority and federal supremacy, and that gray space is where compliance programs either add value or fall apart. State AI laws are not disappearing simply because the federal government asserts leadership. Instead, they continue to operate until and unless a direct conflict arises, at which point federal standards typically become the ceiling rather than the floor.

For compliance leaders, this means that a checklist mentality is dangerous. It is not enough to ask whether a state law applies or whether a federal framework exists. The real question is how both interact in practice. A company may be fully compliant with a state statute and still find itself exposed if federal regulators view the same conduct through a national security, civil rights, or interstate commerce lens.

The operational takeaway is that AI governance must be designed with escalation in mind. Policies, controls, and documentation should assume federal review even when day-to-day compliance is driven by state requirements. Preemption uncertainty rewards organizations that think in systems and penalizes those that think in silos.

  2. Framework-Based Governance Is the Safest Harbor

In an unsettled regulatory environment, recognized AI governance frameworks are the closest thing compliance professionals have to solid ground. Aligning with established standards such as the NIST AI Risk Management Framework or ISO/IEC 42001 is not about regulatory box-checking. It is about demonstrating intent, structure, and accountability in a way regulators understand and respect.

At the state level, frameworks increasingly serve as explicit or implicit safe harbors. Legislatures recognize that they cannot outpace technology and therefore reward companies that adopt credible, risk-based governance models. At the federal level, the same frameworks provide evidence that AI risks are being identified, assessed, mitigated, and monitored systematically.

This dual function is critical. A framework-aligned program creates a common language across jurisdictions and regulators. It also gives compliance teams a defensible narrative when enforcement questions arise. Rather than arguing technical minutiae, organizations can point to governance architecture, risk assessments, and continuous improvement processes.

The compliance lesson is simple but powerful. Frameworks are no longer optional guidance documents. They are strategic assets that convert regulatory uncertainty into manageable risk.

  3. Design Once, Deploy Many

Fragmented compliance architectures are the fastest way to lose credibility under federal scrutiny. State-by-state AI controls may appear responsive in the short term, but they create operational inconsistency, documentation gaps, and governance confusion. Federal regulators do not evaluate compliance in isolation. They evaluate whether an organization understands and controls its enterprise-wide risk profile.

A design-once, deploy-many approach flips the traditional compliance model. Instead of tailoring governance from the ground up for each jurisdiction, companies should establish a core AI governance framework that applies globally, with localized adjustments layered on where legally required. This creates consistency in risk assessment, accountability, escalation, and remediation.

From a compliance operations perspective, this approach reduces friction between legal, IT, data science, and business teams. Everyone works from the same playbook. Training scales more effectively. Audits become easier. Most importantly, regulators see coherence rather than patchwork.

Federal preemption risk amplifies this need. If federal standards ultimately override conflicting state rules, organizations with unified governance will adapt far more quickly. Those relying on jurisdiction-specific controls will scramble. The strategic message is clear. Enterprise AI governance is not a luxury. It is a necessity.

  4. National Security Use Cases Demand Special Handling

Artificial intelligence that touches national security, export controls, critical infrastructure, or trade sanctions operates in a different regulatory universe. In these areas, federal authority is not merely dominant; it is exclusive. No state law meaningfully offsets federal jurisdiction, and no amount of state-level compliance provides a shield.

For compliance leaders, the challenge is identification and segmentation. Many organizations underestimate how broadly national security concepts are interpreted. AI models used in logistics optimization, cybersecurity, financial analytics, or advanced manufacturing may trigger federal scrutiny even if their primary purpose appears commercial.

The correct response is not fear but structure. AI systems with potential national security implications should be flagged early, governed separately, and subject to enhanced oversight. This includes stricter access controls, deeper documentation, export control reviews, and closer coordination with legal and government affairs functions.

State AI compliance remains relevant, but it becomes secondary. The risk of getting this wrong is not limited to fines. It includes injunctions, loss of government contracts, reputational damage, and, in extreme cases, criminal exposure. Compliance programs that fail to elevate these use cases are operating with blind spots that regulators will not forgive.

  5. Boards Must Own AI Oversight

Preemption uncertainty elevates AI governance from a legal or technical issue to a core enterprise risk issue. That shift places responsibility squarely at the board level. Regulators increasingly expect boards to understand how AI is used, what risks it creates, and how management is controlling those risks across jurisdictions.

This does not mean boards must become data scientists. It means they must exercise informed oversight. Boards should receive regular reporting on AI inventory, risk assessments, regulatory exposure, and incident response readiness. They should ask the same questions they ask about cybersecurity, financial controls, and ethics.

From a compliance perspective, board engagement is a force multiplier. It drives resource allocation, breaks down organizational resistance, and signals seriousness to regulators. It also creates a governance record that matters when enforcement decisions are made.

Preemption debates will continue. Laws will change. What will not change is the expectation that boards oversee material risks. AI now qualifies. Organizations that recognize this early will be better positioned to navigate both state innovation and federal authority with confidence.

State–Federal AI Preemption Risk Matrix

To help you think through some of these issues, I have created a state–federal AI preemption matrix for multi-jurisdictional operations.

State–Federal AI Preemption Risk Matrix For Multi-Jurisdictional Operations

Risk Dimension | Federal Position (Emerging) | State Position (Example: Texas) | Preemption Risk Level | Compliance Implication | Recommended Action
Scope of Regulation | Federal framework signals broad national uniformity for AI governance tied to interstate commerce and national security | State laws focus on in-state deployment and consumer impact | Medium | Overlapping but not identical coverage | Map AI systems by deployment location and business use, not by development location
Enforcement Authority | Centralized federal enforcement likely through agencies (FTC, DOJ, sector regulators) | Centralized state enforcement (Attorney General only) | Low | Parallel enforcement is possible but manageable | Design escalation protocols for dual-regulator inquiries
Private Right of Action | Federal posture trending against expansive private litigation | Many states explicitly bar private rights of action | Low | Reduced litigation exposure | Maintain strong documentation to demonstrate good-faith compliance
Disclosure & Transparency | Federal guidance favors risk-based, context-specific disclosures | State laws may impose explicit disclosure triggers | Medium | Potential inconsistency in disclosure thresholds | Default to the higher transparency standard where commercially feasible
Biometric & Surveillance Controls | Federal focus on national security and civil liberties | States restrict unauthorized biometric surveillance | Low–Medium | Risk arises in public-facing or employee monitoring tools | Centralize biometric governance under a single enterprise policy
Governance Framework Recognition | Federal regulators endorse voluntary frameworks (e.g., NIST-aligned) | States provide safe harbors for recognized frameworks | Low | Strong alignment opportunity | Anchor AI governance to a recognized framework, enterprise-wide
Cure Periods & Remediation | Federal enforcement is historically discretionary, not guaranteed | States may codify explicit cure periods | Medium | Loss of cure rights if federal preemption applies | Treat cure periods as a bonus, not a compliance strategy
National Security & Export Controls | Federal law dominates | States largely defer | High (Federal) | State compliance does not shield federal exposure | Segment AI systems touching defense, trade, or sanctions
Cross-Border Data & AI Models | Federal primacy expected | States are silent or limited | High (Federal) | State compliance insufficient | Build AI governance with federal cross-border assumptions
Future Rulemaking Velocity | Rapid and evolving | Slower, statute-bound | Medium–High | State laws may lag or conflict | Establish continuous monitoring and board-level AI oversight


Categories
AI Today in 5

AI Today in 5: January 6, 2026, The TRAIGA Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. TRAIGA in Texas. (WRAL News)
  2. 2026 is a seminal year for AI and copyright. (Reuters)
  3. How AI will redefine GRC in 2026. (Governance-Intelligence)
  4. Health companies want clear, more consistent AI rules. (HealthCare IT News)
  5. What does the AI boom look like (to WSJ reporters)? (WSJ)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: January 6, 2026, The Corruption Costs Lives Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Dealing with past trauma is critical for CEOs. (FT)
  • Who will repay China? (NYT)
  • Pivotal year for AI copyright battles. (Reuters)
  • Corruption led to the Hong Kong fire disaster. (Bloomberg)
Categories
PodFest Expo 2026 Speaker Series Preview

Podfest Expo 2026 Speaker Preview Series: Anika Jackson on Future Proofing Your Podcast in the AI Era

In this episode of the Podfest Expo 2026 Speaker Preview Podcasts series, Tom Fox visits with Professor Anika Jackson, one of the few academics Tom has met who studies and teaches about podcasts at the college level. She discusses her panel at Podfest Expo 2026, titled “Future Proofing Your Podcast in the AI Era.” Some of the highlights in this podcast are:

  • Anika’s role in podcasting.
  • Her presentation on future-proofing your podcast.
  • What she hopes to get out of PodFest Expo 2026 and why you should attend.

I hope you can join us at Podfest Expo 2026, hosted by Podfest Global. This year’s event marks the 12th anniversary and will be held January 15-18 at the RENAISSANCE ORLANDO AT SEAWORLD® in Orlando, Florida. The lineup for this year’s event is simply first-rate, with some of the top names in podcasting.

Podfest Expo is a community of people interested in and passionate about sharing their voices and messages with the world through powerful audio and video mediums. We’re proud to unite as many people as possible to learn, get inspired, and grow better together.

Podfest Expo is so much more than just a conference. While we pride ourselves on featuring the most engaging speakers, exciting topics, and in-depth content, what sets the Podfest Expo event apart from all others is the tight-knit community we’ve been building since 2013. You don’t just attend a Podfest event—you become part of the Podfest family.

Whether you’re new to podcasting or a veteran podcaster looking to innovate and improve your podcast, our easy-to-understand Conference Topics allow you to customize a daily agenda based on what you’re most interested in learning. No matter your skill level or experience, Podfest Expo 2026 has plenty to offer!

Please join us at the event. For information on the event, click here. As an extra benefit for listeners of this podcast, Podfest Expo is offering 10% off any ticket level. Enter the discount code Fox2026 or visit this link.

Podfest Expo 2026 is a production of Podfest Global, which is the sponsor of this podcast series.

Categories
Innovation in Compliance

Innovation in Compliance: 10+1 Commandments: A Moral Code for AI Ethics in Business

Innovation comes in many forms, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom welcomes Cristina DiGiacomo, founder of 10P1 Inc.

Cristina has an extensive background in communications, business, and practical philosophy. Cristina introduces her ’10+1 Commandments,’ a set of ethical guidelines for human interaction with artificial intelligence. They discuss the compelling need to integrate these principles into business compliance and governance frameworks. The commandments aim to provide a high-level, universal, and perpetual moral code that addresses the risks and ethical considerations of AI in the corporate world. Cristina emphasizes the importance of maintaining ethical AI practices amidst the evolving regulatory landscape.

Key highlights:

  • Philosophy in Everyday Life
  • Ancient Wisdom and Modern Application
  • The 10+1 Commandments Explained
  • Applying the Commandments in Business
  • Governance and Ethical AI

Resources:

Cristina DiGiacomo on LinkedIn

Website-10+1 

Categories
PodFest Expo 2026 Speaker Series Preview

Podfest Expo 2026 Speaker Preview Series: Chad Parizman on AI Hacks for Solo and Small-Pod Teams

In this episode of the PodfestExpo 2026 Speaker Preview Podcasts series, Tom Fox visits with Chad Parizman, founder of Ader Communications, and discusses his presentation at PodfestExpo 2026 on AI Hacks for Solo and Small-Pod Teams. Some of the highlights in this podcast are:

  • Chad’s role in the world of podcasting.
  • His presentation at PodFest Expo.
  • What he hopes to get out of PodFest Expo 2026 and why you should attend.

I hope you can join us at Podfest Expo 2026, hosted by Podfest Global. This year’s event marks the 12th anniversary and will be held January 15-18 at the RENAISSANCE ORLANDO AT SEAWORLD® in Orlando, Florida. The lineup for this year’s event is simply first-rate, with some of the top names in podcasting.

Podfest Expo is a community of people interested in and passionate about sharing their voices and messages with the world through powerful audio and video mediums. We’re proud to unite as many people as possible to learn, get inspired, and grow better together.

Podfest Expo is so much more than just a conference. While we pride ourselves on featuring the most engaging speakers, exciting topics, and in-depth content, what sets the Podfest Expo event apart from all others is the tight-knit community we’ve been building since 2013. You don’t just attend a Podfest event—you become part of the Podfest family.

Whether you’re new to podcasting or a veteran podcaster looking to innovate and improve your podcast, our easy-to-understand Conference Topics allow you to customize a daily agenda based on what you’re most interested in learning. No matter your skill level or experience, Podfest Expo 2026 has plenty to offer!

Please join us at the event. For information on the event, click here. As an extra benefit for listeners of this podcast, Podfest Expo is offering 10% off any ticket level. Enter the discount code Fox2026 or visit this link.

Podfest Expo 2026 is a production of Podfest Global, which is the sponsor of this podcast series.

Categories
Blog

Texas Steps Into the AI Ring: What a “Responsible AI Governance Act” Means for Companies

Contrary to standard belief, and even Governor Abbott’s pronouncements, there is some regulation in the great state of Texas. With the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), Texas made a clear statement: artificial intelligence is no longer just a product feature or a data science experiment. It is a regulated business risk. If your organization builds, buys, deploys, or relies on AI to make decisions about people, Texas is signaling that you should be able to explain what the system does, prove you are not using it in harmful ways, and demonstrate governance over it.

The Texas Responsible Artificial Intelligence Governance Act creates a statewide framework with five big pillars: (1) prohibitions on certain harmful or discriminatory uses, (2) limits on biometric surveillance, (3) disclosure requirements in defined contexts, (4) oversight infrastructure, including a regulatory sandbox, and (5) enforcement with noted safe harbors. That is not “innovation-killing.” It is Texas doing what Texas does: setting boundaries on unacceptable conduct while leaving room for businesses to move fast within guardrails.

Today, we begin a two-part look at state regulation of AI. In Part 1, we consider the Texas approach. Tomorrow, in Part 2, we review the federal attempt to eviscerate all state AI regulation, claiming federal preemption through the Trump Administration’s sweeping Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence.”

1. Prohibited Uses: Drawing Hard Lines Around Harm and Discrimination

The most important practical takeaway for a corporate audience is this: Texas is moving toward outcome-focused restrictions, not just paperwork. When a law prohibits “harmful or discriminatory uses,” the question becomes: harmful to whom, and in what context? For most companies, the risk zones are predictable:

  • Employment: recruiting, resume screening, interview scoring, promotion, performance evaluation, and workforce reduction.
  • Credit and financial decisions: underwriting, pricing, and fraud flags that drive adverse decisions.
  • Housing and insurance: eligibility, pricing, and claims triage.
  • Customer access: KYC onboarding, account shutdowns, and refund decisions.
  • Public-facing services: education, health-related triage, and benefits navigation.

From a compliance program perspective, this pushes you toward two controls you should already want:

  • A documented AI use-case inventory, categorized by impact level.
  • A discrimination and fairness control, meaning pre-deployment testing plus monitoring, and a mechanism to remediate.

If you are thinking, “We do not use AI for those decisions,” the next question is whether the vendor tool uses AI under the hood. Texas-style statutes tend to treat “deployment” broadly, and regulators are rarely impressed by “the vendor did it” as a defense.
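The use-case inventory control can be sketched as a simple record that derives its impact level from whether the system makes consequential decisions about people, and that treats vendor AI "under the hood" as a deployment. All field names and tiers below are illustrative assumptions, not taken from TRAIGA or any framework.

```python
# Hypothetical sketch of an AI use-case inventory entry, categorized by
# impact level. Field names and tiers are invented for illustration.

from dataclasses import dataclass, field
from enum import Enum

class Impact(Enum):
    LOW = "low"     # no consequential decisions about people
    HIGH = "high"   # drives employment, credit, housing, or similar outcomes

@dataclass
class AIUseCase:
    name: str
    owner: str                       # accountable business owner
    vendor_model: bool               # AI "under the hood" of a vendor tool?
    decisions_about_people: bool
    jurisdictions: list = field(default_factory=list)

    @property
    def impact(self) -> Impact:
        # Outcome-focused categorization: impact follows the decision,
        # not who built the model.
        return Impact.HIGH if self.decisions_about_people else Impact.LOW

resume_screen = AIUseCase(
    name="resume screening",
    owner="HR",
    vendor_model=True,               # vendor AI still counts as deployment
    decisions_about_people=True,
    jurisdictions=["TX"],
)
```

Even this minimal shape forces the questions regulators ask: who owns the system, where it is deployed, and whether it touches decisions about people.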

2. Biometric Surveillance: The Texas Red Line

TRAIGA restricts “unauthorized biometric surveillance.” In plain English, that means the law is likely concerned with face recognition, voiceprints, gait recognition, and other identifiers used to track or identify people.

Corporate implications typically fall into three areas:

  • Physical security: access control systems, visitor management, and camera analytics.
  • Retail and venues: loss prevention, “known offender” lists, and customer behavior analytics.
  • Workplace monitoring: time clocks using facial recognition and productivity monitoring that drifts into biometrics.

If you use biometric tools, your governance should address:

  • Lawful basis and authorization—consent, notice, contractual, and policy controls.
  • Purpose limitation—what it is used for and what it is not used for.
  • Retention and deletion—biometric data cannot be a forever asset.
  • Vendor constraints—no secondary use, no model training on your biometric data unless explicitly approved.

Even if Texas is not your primary market, this is the type of requirement that quickly becomes “lowest common denominator” compliance across a multi-state footprint.

3. Disclosure: The Practical “Tell the Truth” Requirement

TRAIGA also requires “clear AI disclosures in some contexts.” For corporate teams, disclosure obligations usually arise when AI materially interacts with a person or influences a decision that affects them.

Think of disclosure as a three-part discipline:

  • When you disclose: at the point of interaction or decision.
  • What you disclose: that AI is used, what it is used for, and how a person can seek assistance or appeal.
  • How you disclose: clear, conspicuous, and not buried in terms and conditions.
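A minimal sketch of the when/what/how discipline, assuming a hypothetical notice rendered at the point of interaction (the wording, function name, and contact address are invented for illustration, not statutory language):

```python
# Hypothetical sketch: rendering the three-part disclosure discipline as a
# clear, conspicuous notice at the point of interaction. All wording and
# names are illustrative assumptions.

def disclosure_notice(purpose: str, appeal_channel: str) -> str:
    """Build a plain-language notice: that AI is used, what it is used
    for, and how a person can seek assistance or appeal."""
    return (
        f"This interaction uses an automated (AI) system for {purpose}. "
        f"To request help or appeal an outcome, contact {appeal_channel}."
    )

notice = disclosure_notice("initial resume screening", "hr-review@example.com")
```

If filling in the two arguments is hard for a given system, that is the operational-clarity test failing: you cannot yet describe the system in plain language.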

The compliance opportunity here is that disclosure forces operational clarity. If you cannot describe the system in plain language, you almost certainly do not have adequate control over it.

4. Oversight and a Regulatory Sandbox: “Governance With a Business On-Ramp”

A state oversight body, along with a “sandbox” approach, signals that Texas wants responsible experimentation. Done right, a sandbox creates a controlled pathway to test higher-risk systems with agreed guardrails, transparency, and reporting.

For companies, the sandbox concept maps to an internal capability you should build anyway:

  • Pilot governance: criteria for what can be tested, where, with whom, and with what monitoring.
  • Kill switches: the ability to stop or roll back quickly.
  • Post-pilot review: documented lessons learned before scaling.

This is compliance that enables innovation, not blocks it.

5. Enforcement: Centralized, Cure-Oriented, and Compliance-Friendly

Enforcement authority under the Texas Responsible Artificial Intelligence Governance Act is deliberately centralized in the Texas Attorney General’s office. That decision matters. By excluding a private right of action, the statute avoids the litigation-driven compliance chaos that has plagued other regulatory regimes. Instead of trial lawyers driving outcomes, Texas has opted for a single, accountable enforcement authority with discretion, consistency, and an institutional understanding of regulatory tradeoffs.

Equally important is the statute’s 60-day cure period. This provision reflects a mature regulatory philosophy: most compliance failures in emerging technologies are not rooted in bad intent but in complexity, novelty, and rapid innovation cycles. The law gives companies the opportunity to remediate, document corrective action, and improve governance before penalties attach. That is precisely how effective compliance programs are built.

The explicit safe harbor for organizations aligned with recognized frameworks such as the NIST AI Risk Management Framework or ISO/IEC 42001 further reinforces this approach. Texas is not inventing bespoke standards in isolation. It is rewarding companies that invest in globally recognized, risk-based governance systems.

This is not a punitive regulation designed to extract fines or score political points. It is a regulatory regime intended to incentivize foresight, structure, and accountability. For compliance professionals, that is the right signal at exactly the right moment.

Join us tomorrow as we consider what the attempted federal preemption via Executive Order might mean for Texas and other states.

Categories
AI Today in 5

AI Today in 5: January 5, 2026, The Does The World Have Time Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Does the world have time to prepare for AI? (The Guardian)
  2. Colombia adopts an international standard for AI. (Global Compliance News)
  3. Client enablement with AI. (FinTechWeekly)
  4. Agentic AI rewriting rules for compliance. (Dallas Business Journal)
  5. Why AI Compliance needs to build operating systems. (Forbes)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.