
AI Today in 5: March 12, 2026, The Attorneys and AI Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, and general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. How AI forensics is easing compliance gridlock. (PYMNTS)
  2. Creating responsible AI governance standards. (mycarrollcountynews)
  3. AI agents cannot open bank accounts. (FinTechWeekly)
  4. A court castigated an attorney for using AI to write briefs. (The News & Observer)
  5. 3 key principles for AI use in businesses. (BusinessInsider)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.


AI Today in 5: March 11, 2026, The AI Compliance is a People Risk Edition


Top AI stories include:

  1. How AI compliance is now a people risk. (The HR Director)
  2. CVS and Google launch an AI healthcare business. (Forbes)
  3. The disconnect between the C-Suite and the rank-and-file on AI. (HR Dive)
  4. Amazon – a self-inflicted wound? (CNBC)
  5. KYC is moving to continuous monitoring. (FinTechGlobal)



AI Today in 5: March 10, 2026, The Good, The Bad and The Ugly Edition


Top AI stories include:

  1. Texas goes TRAIGA. (JD Supra)
  2. AI to reshape compliance. (FinTech Global)
  3. The Good, Bad, and Ugly of AI in healthcare. (ZDNet)
  4. The AI Literacy gap is a compliance risk. (Complex Discovery)
  5. How to use AI without getting dumber. (Business Insider Africa)



FCPA Compliance Report: Highlights from SCCE Europe with Gerry Zack

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. This is our 800th edition. In this episode, Tom Fox welcomes back Gerry Zack, who recently attended the SCCE Europe conference in Berlin.

They begin by noting the differences from the U.S. national conference, including a stronger European focus on behavioral ethics, culture, and community networking. Zack highlights the extensive conference attention to AI: the shift toward agentic AI; practical compliance uses such as identifying policy gaps, enhancing third-party due diligence, and automating anomaly follow-up; and the investigative risks if AI-generated interview strategies are scrutinized in court. They discuss AI-driven fraud threats (deepfakes, fake invoices, and improved phishing) and the growing concerns about shadow AI and the improper use of confidential information. Zack also describes one company’s experience pursuing ISO 37301 and 37001 certifications, and notes the ongoing work, and limited U.S. awareness, around the UK Failure to Prevent Fraud Act. He was surprised by the profession’s continued lack of sophistication in risk assessments.

Key highlights:

  • US vs. Europe Conference Differences
  • AI Keynote and Practical Takeaways
  • ISO Compliance Certifications
  • UK Failure to Prevent Fraud Act
  • Surprise: The Risk Assessment Gap

Resources:

Gerry Zack on LinkedIn

RiskTrek

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Returning to Venezuela on Amazon.com


AI Today in 5: March 6, 2026, The Captain Nemo Edition


Top AI stories include:

  1. Financial crimes, compliance, and AI. (FundsEurope)
  2. AI is making a difference in finance. (FinTechWeekly)
  3. AI agents as financial intermediaries. (FinTechWeekly)
  4. How AI is changing pharma. (BioSpace)
  5. Floating wind turbines to power AI data centers located at sea. (Electrek)



Daily Compliance News: March 6, 2026, The Does ChatGPT Practice Law Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you four compliance-related stories to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network.

Top stories include:

  • Wells Fargo is free from the Consent Order. (WSJ)
  • Senator flags White House corruption for betting markets. (Decrypt)
  • OpenAI sued for practicing law. (Reuters)
  • The Trump Administration ordered a refund of illegal tariffs. (WSJ)

AI Compliance as a Competitive Advantage: Turning Governance Into ROI

In too many organizations, “AI compliance” is treated like a speed bump. Something to route around, manage after launch, or outsource to a vendor deck and a policy that nobody reads. That mindset is not only outdated but also expensive. In 2026, mature AI governance is becoming a commercial differentiator because customers, regulators, employees, and business partners increasingly ask the same question: Can you prove your system is trustworthy?

The most underappreciated truth is that AI risk is not “an AI team problem.” It is a business-process problem, expressed through data, decisions, third parties, and change control. The Department of Justice Evaluation of Corporate Compliance Programs (ECCP) has never been about perfect paperwork; it has been about whether a program is designed, implemented, resourced, tested, and improved. If you can translate that posture into AI, you can convert “compliance cost” into “credibility capital.”

A cautionary backdrop shows why. In the EEOC’s 2023 settlement with iTutorGroup, automated hiring screening that disadvantaged older workers led to legal exposure, remediation costs, and reputational damage. The details matter less than the pattern: when algorithmic decisions are not governed, the business eventually pays the bill. The compliance professional should see the pivot clearly: governance is the mechanism that lets you move fast without becoming reckless.

From a build-from-scratch, low-to-medium maturity posture, the win is not sophistication. The win is repeatability. If you build an AI governance framework aligned to NIST AI RMF (govern, map, measure, manage), structured through ISO/IEC 42001’s management-system discipline, and cognizant of EU AI Act risk tiering, you get something the business loves: a predictable path from idea to deployment. Today, I will explore five ways mature AI compliance can become a competitive advantage, each with a practical view of how a compliance-focused GenAI assistant can support business processes.

1) Sales and Customer Trust

Trust is a sales feature now, even when marketing refuses to call it that. Customers increasingly ask about data use, model behavior, security controls, and human oversight, and they are doing it in procurement questionnaires and contract negotiations. A mature governance framework lets you answer quickly, consistently, and with evidence, thereby shortening sales cycles and reducing late-stage deal friction. A compliance GenAI can support this by drafting standardized responses from approved trust artifacts such as policies, model cards, DPIAs, and audit summaries; flagging gaps; and routing exceptions to Legal and Compliance before the business overpromises.

For compliance professionals, this lesson is even more stark, as the ‘customers’ of a corporate compliance program are your employees. Some key KPIs you can track are average time to complete AI security and compliance questionnaires; percentage of deals requiring AI-related contractual concessions; number of customer-facing AI disclosures issued with approved templates; and percentage of AI systems with current model documentation and ownership attestations.

2) Regulatory Credibility

Regulators are not impressed by ambition; controls persuade them. NIST AI RMF provides a common language to demonstrate that you mapped use cases, measured risks, and managed them over time, while ISO/IEC 42001 imposes discipline on accountability, documentation, and continual improvement. The EU AI Act’s risk-based approach adds an organizing principle: classify systems, apply controls proportionate to risk, and prove that you did it. A compliance GenAI can help by maintaining a living inventory, prompting owners to complete quarterly attestations, drafting control narratives aligned with the frameworks, and assembling regulator-ready “evidence packs” that demonstrate governance in operation rather than on paper.

For compliance professionals, this lesson is about your gap analysis. If you have not yet mapped your current internal controls to GenAI and AI governance requirements, you should do so. Some key KPIs you can track are the percentage of AI systems risk-tiered and documented; the time to produce an evidence pack for a high-impact system; the number of material control exceptions and time-to-remediation; and the frequency of risk reviews for high-impact systems.

3) Faster Product Approvals and Safer Deployment

Speed comes from clarity, not from cutting corners. When decision rights, review thresholds, and required artifacts are defined up front, product teams stop guessing what Compliance will require at the end. That is the management-system advantage: ISO/IEC 42001 treats AI governance like a repeatable operational process with gates, owners, and records, rather than a series of one-off debates. A compliance GenAI can support the workflow by pre-screening new use-case intake forms, recommending the correct risk tier under EU AI Act concepts, suggesting required testing (bias, privacy, safety), and generating the first draft of a launch checklist that the product team can execute.

For compliance professionals, this lesson is that you must run compliance at the speed of your business operations. Some key KPIs you can track are: cycle time from AI intake to approval; percent of launches that pass on first review; number of post-launch “surprise” issues tied to missing pre-launch controls; and percentage of models with human-in-the-loop controls when required.
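The approval-cycle KPIs above can be computed from a simple intake log. The sketch below is illustrative only; the record layout and field names are assumptions, not a prescribed schema:

```python
from datetime import date

# Hypothetical intake records: each AI use case logs its intake date,
# approval date, and whether it passed Compliance review on first pass.
intake_log = [
    {"use_case": "invoice-triage-bot", "intake": date(2026, 1, 5),
     "approved": date(2026, 1, 19), "first_pass": True},
    {"use_case": "hiring-screen-model", "intake": date(2026, 1, 12),
     "approved": date(2026, 2, 9), "first_pass": False},
    {"use_case": "kyc-summarizer", "intake": date(2026, 2, 2),
     "approved": date(2026, 2, 16), "first_pass": True},
]

# KPI 1: average cycle time from AI intake to approval, in days.
cycle_days = [(r["approved"] - r["intake"]).days for r in intake_log]
avg_cycle = sum(cycle_days) / len(cycle_days)

# KPI 2: percentage of launches that pass on first review.
first_pass_rate = 100 * sum(r["first_pass"] for r in intake_log) / len(intake_log)

print(f"Average intake-to-approval cycle: {avg_cycle:.1f} days")
print(f"First-pass approval rate: {first_pass_rate:.0f}%")
```

Even a spreadsheet export feeding a loop like this is enough to put the two headline numbers in front of the business every quarter.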

4) Talent, Recruiting, and Internal Confidence

Top performers do not want to work in a company that treats AI like a toy and compliance like a nuisance. Mature governance creates psychological safety inside the organization: employees know what is permitted, what is prohibited, and how to raise concerns. It also improves recruiting because candidates, especially in technical roles, ask about responsible AI practices, data governance, and ethical guardrails. A compliance GenAI can support internal confidence by serving as the first-line “policy concierge,” answering questions with approved guidance, directing employees to the correct procedures, and logging common questions so Compliance can improve training and communications.

For compliance professionals, this fits squarely within the DOJ mandate for compliance to lead efforts in institutional justice and fairness. Some key KPIs you can track include training completion and comprehension metrics for AI use; the number of AI-related helpline inquiries and their resolution times; employee survey results on comfort raising AI concerns; and the percentage of AI use cases with documented business-owner accountability.

5) Lower Cost of Incidents and More Resilient Operations

AI incidents are rarely just “bad outputs.” They are process failures: poor data lineage, uncontrolled model changes, vendor opacity, missing logs, weak access controls, or no escalation path when harm appears. NIST AI RMF’s “measure” and “manage” functions emphasize monitoring, drift detection, incident response, and continuous improvement, which is precisely how you reduce the frequency and severity of failures. A compliance GenAI can support incident resilience by guiding teams through an AI incident response playbook, helping triage severity, ensuring evidence is preserved (audit logs, prompts, outputs, approvals), and generating lessons-learned reports that connect root cause to control enhancements.
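As a sketch of how such a playbook might preserve evidence and triage severity, consider the following. The tier names, fields, and triage rules are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIIncident:
    system: str
    description: str
    customer_impact: bool     # did the output reach or affect a customer?
    regulated_decision: bool  # was a regulated decision (credit, hiring) involved?
    # Evidence preserved for later review: audit logs, prompts, outputs, approvals.
    evidence: List[str] = field(default_factory=list)

def triage(incident: AIIncident) -> str:
    """Assign an illustrative severity tier from two simple harm indicators."""
    if incident.regulated_decision:
        return "sev1"  # highest tier: escalate to Legal/Compliance immediately
    if incident.customer_impact:
        return "sev2"
    return "sev3"

inc = AIIncident(
    system="resume-screener",
    description="model update changed screening outcomes without review",
    customer_impact=False,
    regulated_decision=True,
    evidence=["model-change log", "prompt/output samples", "approval records"],
)
print(triage(inc))  # a regulated-decision incident triages to the top tier
```

The point is not the code but the discipline: severity criteria decided in advance, and evidence captured as a required field rather than an afterthought.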

For compliance professionals, this lesson is about resilience: AI incidents are process failures, and process failures sit squarely within the compliance program’s remit. Some key KPIs you can track include the number of AI incidents by severity tier; mean time to detect and mean time to remediate; the percentage of high-impact models with drift-monitoring and alert thresholds; and the percentage of third-party AI providers subject to change-control notification requirements.

What “Mature Governance” Looks Like When You Are Building From Scratch

Do not start with a 60-page policy. Start with a few non-negotiables that scale:

  • Inventory and classification: Create a single inventory of GenAI assistants, ML models, and automated decision systems. Classify them by impact using EU AI Act concepts (e.g., high-risk versus limited- or minimal-risk) and your own business context.
  • Accountability and decision rights: Assign an owner for each system and require periodic attestations for the highest-risk categories.
  • Standard artifacts: Use lightweight model documentation, data lineage notes, and disclosure templates. If it is not documented, it does not exist for governance.
  • Human oversight and logging: Define when human-in-the-loop is mandatory and ensure logs capture who approved what, when, and why.
  • Third-party AI controls: Contract for transparency, audit support, change notification, and security requirements. Vendor opacity is not a strategy.

This is where ECCP thinking helps. The question is not whether you have a policy. The question is whether the policy is operationalized, tested, and improved. That is the bridge from compliance to competitive advantage.

If you want AI compliance to be a competitive advantage, treat it like a management system that produces evidence, not like a policy library that produces comfort. When governance becomes repeatable, the business can move faster, regulators become more confident, and customers see the difference. That is not a cost center. That is credibility you can take to the bank.


AI Today in 5: March 5, 2026, The AI’s Biggest Test Edition


Top AI stories include:

  1. Ending compliance bottlenecks with AI. (FinTechGlobal)
  2. AI surge will reshape compliance. (FinTechGlobal)
  3. Compliance-first AI. (Cyberscoop)
  4. Trump, AI Data Centers, and the midterms. (CNBC)
  5. Healthcare is AI’s biggest test. (Time)



Red Flags Rising: S01 E38: “Fallen Chips” – GIR’s Estelle Atkinson on her Three-Part Report

Mike Huneke and Brent Carlson welcome Estelle Atkinson, a reporter with Global Investigations Review (GIR), to speak about her recent three-part series, “Fallen Chips,” published on January 26, 27, and 28, 2026 (linked in the show notes). They discuss:

  • How Estelle learned of the U.S. government investigation of Zenith Semiconductor in Chandler, Arizona (01:14)
  • The company’s background (06:03)
  • When employees started to realize that things were not quite right at the company, and how that led to employees going to the FBI (08:19)
  • How Estelle got to know the employees and why they were willing to help her with her story (10:30)
  • How her experience illustrates more broadly the challenge companies have in responding to whistleblower reports or allegations (11:48)
  • How diversion starts close to home and is not always in some exotic “offshore” location (15:31)
  • How U.S. administration policies to promote the export of the U.S. AI “stack” are not without controls or national security considerations (15:58)
  • Why success under America’s AI Action Plan and the American AI Export initiative will depend on effective, risk-based export controls compliance programs (16:21)
  • The role of media in American life (19:14)
  • Why the standard PR or IR “playbook” of asserting “full compliance with the law” creates risks if companies aren’t expressly incorporating the full definition of “knowledge,” including “an awareness of a high probability,” into export controls compliance (20:14)
  • What GIR readers can expect to see (or read) next from Estelle (20:49)

Mike and Brent conclude with yet another installment of Brent Carlson’s “Managing Up” (22:39).

Resources:

GIR 

Fallen Chips Part I: Inside the FBI Raid that Rocked an Arizona Chip Start-Up (Jan. 26, 2026)

Fallen Chips Part II: Silicon Secrets and the Risks Hiding in Plain Sight (Jan. 27, 2026)

Fallen Chips Part III: The Fault Lines of the US-China Tech War (Jan. 28, 2026)

More about:

Estelle: https://globalinvestigationsreview.com/authors/estelle-atkinson

Contact Estelle: estelle.atkinson@globalinvestigationsreview.com

Contact Brent: brent@redflagsrising.com

Contact Mike: michael.huneke@morganlewis.com


AI Today in 5: March 4, 2026, The AI Content Explosion Edition


Top AI stories include:

  1. Symphony AI is helping Spanish banks with sanctions screening. (FinTechGlobal)
  2. Agentic AI for regulatory compliance. (Yahoo!Finance)
  3. Chatbots and Influence. (YaleNews)
  4. Managing your AI content explosion. (PlanAdviser)
  5. AI for data protection. (Bloomberg)
