AI Today in 5

AI Today in 5: February 9, 2026, The AI Agents Doing Compliance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5 from the Compliance Podcast Network.

Top AI stories include:

  1. What to do when AI is forced on compliance. (CW)
  2. Napier AI/AML report is out. (FinTechGlobal)
  3. AI and the accountability gap. (FinTechGlobal)
  4. Where AI is tearing through corporate America. (WSJ)
  5. Goldman is letting AI Agents do compliance. (PYMNTS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Blog

From Principle to Proof: Operationalizing AI Governance Through the ECCP and NIST

Artificial intelligence governance has officially crossed the threshold from theory to expectation. The Department of Justice has not issued a standalone “AI rulebook,” but it has provided a framework for compliance professionals to consider the issue: the 2024 Evaluation of Corporate Compliance Programs (ECCP). In this version of the ECCP, the DOJ laid out guidance that any technology capable of creating material business risk must be governed, monitored, and improved like any other compliance risk. That includes artificial intelligence.

Too many organizations still treat AI governance as an ethics exercise, a technical problem, or a future concern. That posture is not defensible. The DOJ does not ask whether your program is fashionable or aspirational. It asks three very old-fashioned questions: Is your compliance program well designed? Is it applied in good faith? Does it work in practice? Those questions apply with full force to AI.

In this post, I want to move the discussion from abstract frameworks to operational reality. I will show how compliance professionals can use the ECCP to structure AI governance, select board-grade KPIs, and demonstrate effectiveness in a way regulators understand. I will also show how the NIST AI Risk Management Framework (NIST Framework) fits neatly underneath this structure as an operating model, not a competing philosophy.

AI Governance Is Already an ECCP Issue

The DOJ has repeatedly emphasized that compliance programs must evolve as business risks evolve. Artificial intelligence is not a future risk. It is already embedded in pricing, hiring, credit decisions, customer interactions, fraud detection, and third-party screening. If an AI model can influence revenue, customer outcomes, or regulatory exposure, it is a compliance risk. Period.

The ECCP does not require companies to eliminate risk. It requires them to identify, assess, manage, and learn from it. AI governance, therefore, belongs squarely inside the compliance program, not off to the side in an innovation lab or technology committee.

The ECCP as an AI Governance Blueprint

The power of the ECCP is its simplicity. Every enforcement action ultimately traces back to the same three questions. Let us apply them directly to AI.

Is the Program Well Designed?

Design begins with risk assessment. If your organization cannot answer a basic question such as “What AI systems do we have, who owns them, and what decisions do they influence?”, you do not have a program. You have hope. A well-designed AI compliance program starts with an AI asset inventory that identifies models, tools, vendors, and use cases. Each asset must be risk-classified based on business impact, regulatory exposure, and potential harm.

Board-level KPIs here are coverage metrics. How many AI assets have been identified? What percentage has been risk-classified? How many high-impact models have completed an impact assessment before deployment? If your dashboard does not show near-full coverage, the design is incomplete.
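The coverage questions above are mechanical to answer once an inventory exists. Here is a minimal sketch in Python, assuming a simple in-memory inventory; the `AIAsset` fields and the `coverage_kpis` helper are illustrative, not a reference to any particular GRC platform:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIAsset:
    name: str
    owner: str
    risk_class: Optional[str]   # e.g. "high", "medium", "low"; None = not yet classified
    impact_assessed: bool       # pre-deployment impact assessment completed?

def coverage_kpis(inventory):
    """Compute board-level coverage metrics over the AI asset inventory."""
    total = len(inventory)
    classified = [a for a in inventory if a.risk_class is not None]
    high = [a for a in inventory if a.risk_class == "high"]
    assessed = [a for a in high if a.impact_assessed]
    return {
        # share of assets that have been risk-classified at all
        "pct_risk_classified": 100 * len(classified) / total if total else 0.0,
        # share of high-impact models with a completed impact assessment
        "pct_high_impact_assessed": 100 * len(assessed) / len(high) if high else 0.0,
    }
```

A dashboard where `pct_risk_classified` sits well below 100 is the clearest sign that the design phase is incomplete.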

Policies and procedures come next. The DOJ does not care how many policies you have. It cares whether they provide clear guidance for real decisions. AI policies should cover the full lifecycle, from design and data sourcing through deployment, monitoring, and retirement. A practical KPI is policy coverage. What percentage of AI assets operate under current, approved procedures? How often are those procedures refreshed? Annual updates are a reasonable baseline in a rapidly changing risk environment.

Is the Program Applied Earnestly and in Good Faith?

Good faith is demonstrated through action, not intent. Training is a central indicator. The DOJ expects role-based training tailored to actual risk. A generic AI awareness course does not meet this standard. Developers, model owners, compliance reviewers, and business leaders all require different training. Completion rates matter, but so does comprehension. Measuring post-training proficiency improvement is one of the clearest signals that training is more than a box-checking exercise.

Third-party risk management is another critical area. Many organizations rely on external models, data providers, or AI-enabled vendors. If you do not understand how those tools are built, governed, and updated, you are importing risk without controls. Strong programs use standardized AI diligence questionnaires, assign assurance scores, and require contractual safeguards for high-risk vendors. A board-ready KPI here is the percentage of high-risk AI vendors subject to enhanced diligence and contractual controls.

Mergers and acquisitions deserve special attention. AI risk does not wait for post-close integration. The DOJ has been explicit that pre-acquisition diligence matters. A defensible KPI is simple and unforgiving: 100 percent of acquisition targets with material AI usage must undergo AI due diligence before closing. Anything less invites inherited risk.

Does the Program Work in Practice?

This is where many programs fail. Paper controls do not impress regulators. Outcomes do. Incident reporting is a critical signal. A low number of reported AI issues may indicate fear, confusion, or a lack of awareness rather than genuine safety. What matters is whether issues are identified, investigated, and resolved promptly. Mean time to investigate is a powerful metric. If AI-related concerns take months to resolve, the program is not working. Clear escalation paths, defined investigation playbooks, and documented root cause analysis are essential.

Continuous monitoring is equally important. High-risk AI systems must be monitored for performance drift, data changes, and unintended outcomes. The DOJ expects companies to use data analytics to test whether controls are functioning. KPIs here include validation pass rates before deployment, drift-detection coverage for critical models, and corrective action closure rates. These are not technical vanity metrics. They are evidence of effectiveness.
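These outcome metrics are just as straightforward to track as the coverage metrics. A minimal sketch, assuming incidents and corrective actions are recorded as plain dictionaries; the field names (`reported`, `closed`, `status`) are illustrative:

```python
from datetime import date

def mean_time_to_investigate(incidents):
    """Average days from when an AI issue was reported to when its
    investigation closed. Still-open incidents (closed is None) are excluded."""
    durations = [(i["closed"] - i["reported"]).days
                 for i in incidents if i["closed"] is not None]
    return sum(durations) / len(durations) if durations else None

def closure_rate(actions):
    """Percentage of corrective actions whose status is 'closed'."""
    if not actions:
        return 0.0
    closed = sum(1 for a in actions if a["status"] == "closed")
    return 100 * closed / len(actions)
```

Trending these numbers quarter over quarter, rather than reporting them once, is what turns them into evidence of effectiveness.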

Where NIST Fits and Why It Matters

The NIST AI Risk Management Framework does not compete with the ECCP. It operationalizes it. The ECCP tells you what regulators expect. NIST helps you implement those expectations across governance, mapping, measurement, and management. For example, ECCP risk assessment aligns with NIST’s mapping function. ECCP’s continuous improvement aligns with NIST’s measurement and management functions. Using NIST terminology creates a shared language across compliance, legal, security, and data science teams. That shared language is governance in action.

Reporting AI Risk to the Board

Boards do not want technical detail. They want assurance. The most effective AI governance dashboards focus on a small set of indicators that answer the DOJ’s three questions: coverage, quality, responsiveness, and learning. Examples include the percentage of AI assets risk-classified, validation pass rates, investigation cycle times, and corrective action closure rates. When these metrics move in the right direction, they tell a credible story of control. More importantly, they show that compliance is not reacting to AI. It is governing it.

Five Key Takeaways for Compliance Professionals

  1. AI as Risk. Artificial intelligence is already within the scope of the ECCP. If AI can influence business outcomes, it must be governed like any other compliance risk.
  2. Risk Management Program. A well-designed AI compliance program begins with complete asset identification and risk classification. Coverage metrics are the first signal regulators will examine.
  3. Implementation. Good faith implementation is demonstrated through role-based training, disciplined third-party oversight, and pre-acquisition AI diligence. Intent without execution does not count.
  4. Outcomes, not Inputs. Effectiveness is proven through outcomes. Investigation speed, monitoring coverage, and corrective action closure rates matter more than policy volume.
  5. Complementary. The NIST Framework complements the ECCP by providing an operating model that compliance, legal, and technical teams can share. Together, they turn principles into proof.

Final Thoughts

AI governance is not about predicting the future. It is about demonstrating discipline in the present. The DOJ is not asking compliance professionals to become data scientists. It is asking us to do what we have always done well: identify risk, establish controls, test effectiveness, and improve continuously. The ECCP already gives you the framework. The only question is whether you are applying it.

From the Editor's Desk

From the Editor’s Desk – Aaron Nicodemus on the CW AI Conference Insights: Navigating the Practical Use of AI in Compliance

In this episode of ‘From the Editor’s Desk,’ Tom Fox visits with Aaron Nicodemus to discuss highlights from the recent Compliance Week AI Conference. Key takeaways include the importance of understanding the purpose and practical use of AI tools before implementation, the pressures from C-suite and boards to adopt AI, and the necessity of a human-in-the-loop approach. The conversation also touches on integrating trust and integrity into AI adoption, the evolving role of compliance as a trusted partner in AI initiatives, and the collective willingness to learn and apply AI across compliance operations.

Key highlights:

  • Importance of Understanding AI Implementation
  • Pressure from the Top: Compliance and AI
  • Human Oversight in AI Processes
  • Trust and Integrity in AI
  • Compliance as a Competitive Advantage
  • Real-World Examples: Robinhood and DocuSign
  • The Evolving Role of Compliance in AI
  • Conference Vibes and Final Thoughts

Resources:

Aaron Nicodemus on LinkedIn

Compliance Week

AI Today in 5

AI Today in 5: February 6, 2026, The Trillion $$ Wipeout Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5 from the Compliance Podcast Network.

Top AI stories include:

  1. EU AI group establishes task force to foster compliance. (Babl)
  2. AI diligence tool rollout. (InvestmentNews)
  3. AI in healthcare is driving greater accountability. (FastCompany)
  4. The compliance convergence challenge. (SecurityBlvd.)
  5. AI fears wipe out tech stock values. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

AI Today in 5

AI Today in 5: February 5, 2026, The Google Goes for the Jugular Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5 from the Compliance Podcast Network.

Top AI stories include:

  1. Google vows to outspend everyone. (BusinessInsider)
  2. Why AI communications governance is critical. (FinTechGlobal)
  3. Even with the Trump administration’s AI order, companies must remain vigilant. (CXDive)
  4. World’s first viral AI agent has arrived. (WSJ)
  5. China ramps up energy boom to fuel AI. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

AI Today in 5

AI Today in 5: February 4, 2026, The SaaSpocalypse Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5 from the Compliance Podcast Network.

Top AI stories include:

  1. AI is helping firms navigate regulatory volatility. (WSJ)
  2. AI is reshaping AML in banking. (FinTechGlobal)
  3. Wall Street is dumping software stocks. (Yahoo!Finance)
  4. What is your enterprise AI strategy? (FinTechGlobal)
  5. AI security reaches a turning point. (FinTechGlobal)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Compliance Into the Weeds

Compliance into the Weeds: The Reality of AI Adoption in Corporate Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore it more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode, Tom Fox and Matt Kelly examine three recent surveys on the real-world impact of AI adoption in corporate environments.

Recording from Alexandria, Virginia, where Matt is attending a conference on ethical governance of AI, Matt and Tom discuss the differing perceptions of AI’s benefits between senior executives and other employees. They explore findings from PwC, Section, and Workday surveys, uncovering a significant gap in AI’s perceived value. The discussion highlights the challenges of integrating AI, the significant rework required by employees, and the struggle to build trust in AI tools. They also debate whether enterprise-scale AI deployment or incremental, point-specific adoption is the best path forward.

Key highlights:

  • Conference on Ethical AI Governance
  • Reality Checks on AI Adoption
  • AI Rework and Employee Training Concerns
  • Trust Issues with AI

Resources:

Matt in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, a Communicator Award, and a W3 Award, all for podcast excellence.

AI Today in 5

AI Today in 5: February 3, 2026, The AI Undergrad Degree Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5 from the Compliance Podcast Network.

Top AI stories include:

  1. UW-Whitewater offers an undergraduate degree in AI. (Channel3000)
  2. The race to build an operating system for investment advisors. (InvestmentNews)
  3. Cramer says AI is changing companies’ fortunes. (YahooFinanceSingapore)
  4. Is your business’s speed a risk? (FinTechGlobal)
  5. Where is AI taking us? 8 thinkers report. (NYT)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

The PfBCon Podcast

The PfBCon Podcast: Unlocking the Secrets to Successful Podcasting with Insider Tips and Tools with Chris Krimitsos

In this inaugural episode of The PfBCon Podcast, Chris Krimitsos, the founder and driving force behind Podfest Multimedia Expo—one of the world’s most influential podcasting and creator community events—delves into valuable tips and tools for creating an exceptional podcast or video podcast.

Chris highlights the North American pod tour, thanks key sponsors and contributors, and discusses essential resources such as Google Trends, Answer the Public, VidIQ, and more to generate content ideas and increase audience engagement. Discover powerful AI tools such as Adobe Enhanced Speech, Cast Magic, Descript, and others to streamline your podcast production and explore creative ways to enhance your podcast’s reach and monetization strategies with PodMatch, Canva, and Headliner. Listen for insights on overcoming industry-specific challenges and leveraging AI to stay ahead in the podcasting world.

Key highlights:

  • The Pod Tour
  • Highlighting Key Figures in Podcasting
  • Tips for Creating an Amazing Podcast
  • Essential Tools for Podcasters
  • AI Tools and Their Benefits
  • Case Study: The Produce Industry Podcast

Resources:

Follow Chris on his:

Website

Facebook

Podfest Multimedia Expo

AI Today in 5

AI Today in 5: February 2, 2026, The On Thin Ice Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5 from the Compliance Podcast Network.

Top AI stories include:

  1. OpenAI/Nvidia deal ‘on thin ice’. (WSJ)
  2. Wearable AI helping stroke victims. (FoxNews)
  3. Financial firms are facing new compliance risks over AI. (CPI)
  4. A playbook for AI compliance and governance. (FinTechGlobal)
  5. Will AI automate compliance? (LawFare)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.