Categories
AI Today in 5

AI Today in 5: April 13, 2026, The AI Governance Framework Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Oracle brings storytelling to the heart of compliance with AI. (Yahoo! Finance)
  2. AI is bringing compliance to BioPharma. (PharmTech)
  3. Oracle brings AI agents to financial crime and compliance. (Financial IT)
  4. Building out your AI governance framework. (Bloomberg Law)
  5. AI developments finance pros should be tracking. (MIT)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: April 13, 2026, The Giant Leap for Jargon Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • A giant leap for jargon. (FT)
  • CFTC wants no state regulation of Kalshi. (Reuters)
  • The gasoline godfather. (Bloomberg)
  • China targets middlemen in corruption crackdown. (SCMP)

Categories
FCPA Compliance Report

FCPA Compliance Report: Judicial Discretion, Sentencing Advocacy, and a Proactive Compliance Model: Joseph De Gregorio – Part 2

In this episode, Tom Fox welcomes former Wall Street trader Joseph De Gregorio, who was federally convicted and now applies a “compliance rebuild” methodology to demonstrate genuine remediation under legal scrutiny. This is Part 2 of a two-part podcast series.

In Part 2, we cover how federal judges exercise broad discretion despite the sentencing guidelines, often forming their views before the hearing based on the pre-sentence report and sentencing memorandum. Probation officers' impressions are shaped by a detailed defendant letter and an authentic allocution, and judges emphasize post-offense conduct while often discounting lawyer advocacy. Joseph then summarizes patterns from 400+ white-collar cases, arguing that structural failures precede cultural and operational failures, and introduces the "access to scrutiny ratio" as the most predictive risk indicator. He lists five warning signals: unscrutinized top performers, known but unmapped monitoring gaps, unmanaged performance pressure, quietly resolved senior incidents, and compensation that rewards results without regard to method (noting the DOJ's September 2024 ECCP update). He closes by outlining a proactive Compliance Rebuild approach built on human failure audits, reverse access audits, directional speak-up analysis, and DOJ-aligned prosecution simulations.

Key highlights:

  • Pre-Sentence Reports Matter
  • Patterns Across 400 Cases
  • Five Compliance Warning Signals
  • Prosecution Simulation Stress Test
  • DOJ Evaluation Questions and Red Flags

Resources:

Joseph De Gregorio – Founder, JN Advisor™: Maximum Sentence Reduction – Minimum Time Served

📋 Initial Consultation: https://forms.gle/2fLczk7bbwM7KSaP6

Bloomberg Law Contributor: “How to Get a Judge to Reduce Your Client’s White-Collar Sentence”

Bloomberg Tax Contributor: Tax Fraud Sentencing Has a Gap Defense Attorneys Are Missing

Featured Expert: American Bar Association

Featured Sentencing Mitigation Expert: Law360

Featured Expert on Us Weekly with 5x Emmy Award-winning journalist Kristin Thorne for her “Uncovered” series (full video at the link below):

https://www.usmagazine.com/crime-news/news/federal-sentencing-strategist-reveals-why-some-real-housewives-stars-commit-fraud/

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Interested in the intersection of Sherlock Holmes and modern compliance? Check out my latest book, The Game is Afoot in Compliance.

Categories
Blog

Preventing Strategy Outrunning Governance in AI

One of the clearest AI governance challenges facing companies today is not a failure of ambition. It is a failure of pacing. Put simply, strategy is moving faster than governance. Business teams want results. Senior executives hear daily about efficiency gains, lower costs, faster decision-making, enhanced customer engagement, and competitive advantage. Vendors are more than happy to promise it all. Employees are already experimenting with AI tools on their own. In that environment, the pressure to move quickly is relentless.

That is where the compliance function must step forward. Not to say no. Not to slow innovation for the sake of slowing it. But to ensure that innovation moves with structure, discipline, and accountability. Governance is not the enemy of AI strategy. Governance is what allows an AI strategy to scale without becoming an enterprise risk event.

The Central Question for Boards and CCOs

For boards, Chief Compliance Officers, and business leaders, the central question is straightforward: has the company defined the rules of the road before putting AI into production? If the answer is no, the company is already behind.

This is not a theoretical problem. It is happening every day. A business unit buys an AI-enabled tool before legal, compliance, IT, privacy, and security have reviewed it. A vendor pitches a product as low-risk automation, even though it actually makes consequential recommendations. An employee uploads sensitive data into a generative AI platform for convenience. A use case that began as internal support quietly migrates into customer-facing decision-making. A pilot project becomes business as usual without anyone documenting who approved it, what risks were considered, or what human oversight is supposed to look like.

That is what it means when strategy outruns governance. The business has a faster process for adopting AI than it has for understanding, controlling, and monitoring AI risk.

What the DOJ Expects

The Department of Justice has been telling compliance professionals for years that an effective compliance program must be dynamic, risk-based, and integrated into the business. That lesson applies directly here. Under the DOJ’s Evaluation of Corporate Compliance Programs (ECCP), prosecutors ask whether a company has identified and assessed its risk profile, whether policies and procedures are practical and accessible, whether responsibilities are clearly assigned, whether decisions are documented, and whether the program evolves as risks change. AI governance sits squarely in that framework.

What “Rules of the Road” Means in Practice

What do the “rules of the road” look like in practice?

First, the company must define which AI use cases are permissible. These are lower-risk applications that can be used within established controls. Think internal drafting support, workflow automation for non-sensitive administrative tasks, or summarization tools used on approved data sets. Even here, there should be basic conditions: approved tools only, no confidential data unless authorized, user training, logging, and manager accountability.

Second, the company must identify restricted or high-risk use cases. These are situations where AI may be allowed, but only after enhanced review. This can include uses involving personal data, HR decisions, customer communications, pricing, fraud detection, credit or eligibility decisions, compliance surveillance, or any function where bias, opacity, or error could create legal, regulatory, or reputational harm. These use cases should trigger a more formal process that includes a documented risk assessment, legal and compliance review, data governance checks, testing, defined human oversight, and ongoing monitoring.

Third, the company must be clear about prohibited use cases. If an AI application cannot be used consistently with the company’s values, control environment, legal obligations, or risk appetite, it should be off-limits. That might include tools that process sensitive data in unapproved environments, systems that make fully automated consequential decisions without human review, or applications that cannot be explained, tested, validated, or monitored sufficiently for their intended use.

Fourth, the company must establish escalation thresholds. Not every AI decision belongs at the board level, but some certainly do. Use cases involving strategic transformation, material legal exposure, major customer impact, significant third-party dependency, or high-consequence decision-making may need escalation to senior management, a designated AI or risk committee, or the board itself. If management cannot explain when a matter gets elevated, governance is too vague to be trusted.
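The three tiers and the escalation thresholds above lend themselves to a simple decision routine. The Python sketch below is purely illustrative: the trigger attributes, tier names, and board-escalation criteria are assumptions standing in for whatever a company's own policy actually defines, not a standard taxonomy.

```python
from enum import Enum

class Tier(Enum):
    PERMISSIBLE = "permissible"   # lower-risk, usable within established controls
    RESTRICTED = "restricted"     # allowed only after enhanced review
    PROHIBITED = "prohibited"     # off-limits under the company's risk appetite

# Illustrative trigger attributes -- a real program defines its own.
RESTRICTED_TRIGGERS = {"personal_data", "hr_decision", "pricing",
                       "credit_decision", "customer_communication"}
PROHIBITED_TRIGGERS = {"unapproved_environment",
                       "fully_automated_consequential_decision",
                       "cannot_be_explained_or_tested"}
BOARD_TRIGGERS = {"strategic_transformation", "material_legal_exposure",
                  "major_customer_impact"}

def classify_use_case(attributes: set[str]) -> Tier:
    """Map a use case's risk attributes to a governance tier.

    Prohibited triggers dominate restricted ones; anything else
    falls into the permissible tier with standard controls.
    """
    if attributes & PROHIBITED_TRIGGERS:
        return Tier.PROHIBITED
    if attributes & RESTRICTED_TRIGGERS:
        return Tier.RESTRICTED
    return Tier.PERMISSIBLE

def escalates_to_board(attributes: set[str]) -> bool:
    """Escalation threshold check: does this use case need board attention?"""
    return bool(attributes & BOARD_TRIGGERS)
```

The point of the sketch is the ordering: prohibition checks come first, enhanced-review checks second, and escalation is a separate question layered on top of the tier, so a use case can be both restricted and board-level.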

Why the NIST AI RMF Matters

This is where the NIST AI Risk Management Framework (AI RMF) is so useful. NIST does not treat AI governance as a one-time signoff exercise. It organizes governance as an ongoing discipline through four connected functions: Govern, Map, Measure, and Manage. For compliance professionals, that is a practical operating model.

Govern means setting accountability, policies, oversight structures, and risk tolerances. It answers who is responsible, who decides, and what standards apply. Map means understanding the use case, context, stakeholders, data, and risks. It answers what the system is actually doing and where exposure lies. Measure means testing, validating, and assessing performance and controls. It answers whether the system works as intended and whether the company can prove it. Manage means acting on what is learned through oversight, remediation, change management, and continual improvement. It answers whether the company is prepared to respond when reality diverges from the plan.
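As a rough illustration of that operating model, the sketch below treats the four functions as the shape of a repeatable review pass rather than a one-time signoff: ownership is assigned up front (Govern), context is recorded (Map), tests produce findings (Measure), and failed tests produce remediation actions (Manage). Every name and field here is a hypothetical stand-in, not part of the NIST framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseReview:
    name: str
    owner: str                                      # Govern: accountability assigned
    context: dict = field(default_factory=dict)     # Map: purpose, data, stakeholders
    findings: list = field(default_factory=list)    # Measure: test and validation results
    actions: list = field(default_factory=list)     # Manage: remediation and changes

def review_cycle(review: UseCaseReview, test_results: dict[str, bool]) -> UseCaseReview:
    """One pass of Map -> Measure -> Manage under the governing policies.

    Records every test result as a finding, and turns each failed
    test into a documented remediation action.
    """
    for check, passed in test_results.items():
        review.findings.append((check, passed))     # Measure: keep evidence
        if not passed:
            review.actions.append(f"remediate: {check}")  # Manage: act on it
    return review
```

Because the cycle is a function rather than a signoff, rerunning it each review period with fresh test results is what makes the discipline "ongoing" in the sense the framework intends.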

How ISO 42001 Reinforces Governance Discipline

ISO 42001 reinforces the same message from a management systems perspective. It brings structure, accountability, controls, and continual improvement to AI governance. That matters because many organizations do not fail because of a lack of policy language. They fail because they do not operationalize accountability. ISO 42001 pushes companies to embed AI governance into defined processes, assign responsibilities, document controls, conduct internal reviews, and take corrective action. In other words, it turns aspiration into a management discipline.

What Happens When Strategy Outruns Governance

What happens when none of this is done well?

Shadow AI is usually the first warning sign. Employees use public or lightly reviewed tools because they are easy to use, fast, and readily available. Sensitive data may be entered without approval. Outputs may be used in business decisions without validation. The organization tells itself it is still in the experimentation phase, while the risk has already gone live.

Vendor-driven deployment is another danger. The company relies too heavily on what the vendor says the product can do and not enough on its own evaluation of what the product should do, how it works, what data it uses, and what controls are required. When something goes wrong, accountability becomes murky. Procurement says the business wanted speed. The business says IT approved the integration. IT says legal reviewed the contract. Legal says compliance owns the policy. Compliance says no one submitted the use case for formal review. That is not governance. That is institutional finger-pointing.

Undocumented approvals are equally dangerous. An AI tool is launched because everyone generally agrees it seems useful. But there is no record of the intended purpose, risk rating, required controls, human review standard, or approval rationale. Six months later, the company cannot explain why the system was deployed, what guardrails were put in place, or whether its use has drifted beyond its original scope.

The Compliance Mechanisms Companies Need Now

That is why companies need concrete compliance mechanisms now:

  • An intake process, so AI use cases enter a formal review channel before deployment.
  • Risk tiering, so not every use case gets the same treatment, but higher-risk applications receive enhanced scrutiny.
  • Approval workflows with defined roles for the business, legal, compliance, privacy, security, IT, and, where appropriate, model risk or internal audit.
  • Board reporting triggers to inform leadership when AI adoption crosses materiality or risk thresholds.
  • A current model and use-case inventory, so the company knows what is in operation.
  • Change management, so updates, retraining, vendor changes, and scope shifts are reviewed rather than assumed.
  • Periodic review, because AI risk does not stand still after launch.

The Special Role of Compliance

The compliance professional has a special role here. Compliance is often the function best positioned to connect governance, process, accountability, documentation, and escalation. That is precisely what the DOJ expects in an effective program. If the company can buy AI faster than it can classify risk, document controls, assign accountability, and test outcomes, the program is not keeping pace with the business. That gap will not stay theoretical for long. It will harden into enterprise risk.

Conclusion: Governance Must Keep Pace With Strategy

The lesson is direct. Strategy and governance must move together. AI governance is not a brake pedal. It is the steering system. A company that wants the benefits of AI must be disciplined enough to define where AI can go, where it cannot go, who decides, what gets documented, and when the business must stop and reassess. If the company can move faster on AI strategy than on AI governance, it is creating risk faster than it can manage it. That is not innovation. That is exposure.

Categories
Sunday Book Review

Sunday Book Review: April 12, 2026, The Library of America for the Revolution Edition

In the Sunday Book Review, Tom Fox considers books that would interest compliance professionals, business executives, or anyone curious. It could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. In honor of the upcoming 250th anniversary of the US, in this episode, we look at 4 top books from the Library of America on contemporaneous writings on the American Revolution.

  1. Thomas Jefferson: Writings
  2. George Washington: Writings
  3. The American Revolution: Writings from the Pamphlet Debate
  4. The American Revolution: Writings from the War of Independence

Categories
Daily Compliance News

Daily Compliance News: April 10, 2026, The AI & Trust Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Biggest defense against AI: trust. (FT)
  • No wonder he attacked Beirut. (Reuters)
  • Applying the law will get you fired in the Trump Administration. (NYT)
  • Rooney Rule, anyone? (WSJ)

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.

Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 74 – The GES Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

Resources:

Kristy Grant-Hart on LinkedIn

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI in Financial Services in 5 Stories

AI in Financial Services in 5 Stories – Week Ending April 10, 2026

Welcome to AI in Financial Services in 5 Stories, a practical weekly roundup of the five most important AI developments affecting banking, insurance, payments, asset management, and fintech. Each Friday, Tom Fox will break down the top stories that matter most through the lenses of compliance, risk management, governance, and business strategy. Designed for compliance professionals, executives, legal teams, and financial services leaders, it goes beyond headlines to explain why each development matters in a highly regulated industry. The result is a concise weekly briefing that helps listeners stay current on AI innovation while asking sharper questions about oversight, accountability, and trust.

This week’s stories include:

  1. AI is the top data security concern. (FintechNews)
  2. The perils of one-click ambition. (bobsguide)
  3. To fight financial crime, AI needs context. (FinTechMagazine)
  4. AI-driven pKYC. (FinTechGlobal)
  5. 6 AI truths from Amazon CEO. (Amazon News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI in Healthcare

AI in Healthcare: Five Healthcare AI Stories You Need to Know This Week – April 10, 2026

Welcome to AI in Healthcare in 5 Stories. This podcast is a weekly briefing of the five most important AI developments shaping healthcare, medicine, and life sciences. Each week, Tom Fox breaks down the latest stories on clinical innovation, regulation, privacy, compliance, patient safety, and operational transformation through a practical, business-focused lens. Designed for healthcare compliance professionals, executives, legal teams, clinicians, and industry leaders, the podcast moves beyond headlines to explain what each development means in the real world.

The top five stories for the week ending April 10, 2026, include:

  1. How much can AI streamline healthcare? (Fox17)
  2. AI as a personal healthcare concierge. (Healthcare Finance)
  3. Using AI to rewire healthcare at the Cleveland Clinic. (Forbes)
  4. Risks of Shadow AI in healthcare. (Fierce Healthcare)
  5. AI as a competition imperative. (Healthcare IT News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: April 10, 2026, The Missing Signals Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI stories include:

  1. Biggest defense against AI: trust. (FT)
  2. Missing signals in AI compliance. (FinTech Global)
  3. Why AI-first compliance programs fail. (Wolters Kluwer)
  4. The risks of AI-driven hiring. (Staffing Industry Analysts)
  5. AI as a competitive necessity. (Healthcare IT News)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

To learn about the intersection of Sherlock Holmes and the modern compliance professional, check out my latest book, The Game is Afoot: What Sherlock Holmes Teaches About Risk, Ethics and Investigations, on Amazon.com.