Categories
AI Today in 5

AI Today in 5: February 12, 2026, The AI to the Moon Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. Putting AI into your compliance workflow. (Valley Courier)
  2. GenAI and compliance. (FinTechGlobal)
  3. Musk wants to put an AI factory on the Moon. (NYT)
  4. OpenAI disbands safety teams. (TechCrunch)
  5. Is the US ready for what AI will do for jobs? (The Atlantic)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: February 11, 2026, The Hits and Misses Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. Hits and misses in compliance using AI. (CW)
  2. Turning AI into a competitive advantage. (IBS Intelligence)
  3. Preparing for AI-powered investigations. (CCI)
  4. How AI intensifies work. (HBR)
  5. Deploying AI against financial crime. (PYMNTS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

AI, Compliance, and the Missing “Why”: Highlights from the Compliance Week AI Conference

If there was one clear message coming out of Compliance Week’s January 2026 AI conference, The Leading Edge: Applying AI and Data Analytics in E&C, it was not about tools, vendors, or futuristic promises. It was about discipline. More specifically, it was about something compliance professionals have preached for decades and are now being pressured to skip: the “why.”

In a recent episode of the podcast From the Editor’s Desk, I sat down with Compliance Week Editor in Chief Aaron Nicodemus to gather his reflections on the conference and its implications for compliance leaders. What emerged was not a story about artificial intelligence replacing compliance, but about AI exposing weaknesses in how organizations make decisions, manage pressure from the top, and integrate ethics into innovation. For compliance professionals, the discussion was a reminder that AI is not a technology problem. It is a governance problem.

The Step Everyone Is Skipping: Why Before What

One of the most striking takeaways from the conference came from Jen Gennai, former AI Ethics and Compliance Advisor at Google. Her message was deceptively simple: companies are skipping the “why.” Organizations are rushing to implement AI tools without first articulating what problem they are trying to solve or why AI is the appropriate solution. Instead of defining the use case and then selecting the right tool, teams are buying technology first and hoping value emerges later.

For compliance professionals, this should sound uncomfortably familiar. Risk management, third-party due diligence, investigations: every mature compliance process begins with a defined purpose. There is a reason the first step in the third-party risk management process is the Business Rationale. This is the ‘why,’ requiring a business sponsor to explain why the organization needs a new or different business partner. Yet when AI enters the picture, that discipline often evaporates. The result is experimentation without accountability and pilots without strategy.

The irony is that compliance already knows how to do this. The failure is not a lack of knowledge; it is pressure.

Tone at the Top, Revisited: Pressure Without Direction

According to a recent Compliance Week and konaAI study released at the conference, more than 60 percent of compliance officers feel pressure from the board or C-suite to “use AI.” Not to use it in a specific way. Not to achieve a defined outcome. To use it. This top-down mandate creates a new kind of compliance risk. When leadership demands adoption without guidance, teams feel compelled to move quickly, sometimes cutting corners they would never cut in other risk domains.

This is not inherently nefarious. Boards are doing what they believe is necessary to keep their organizations competitive. But pressure without clarity creates the conditions for poor governance. Compliance leaders must recognize this moment not as a threat, but as an opening. Because when leadership says “use AI,” compliance has an opportunity to respond with structure: identify manual pain points, define defensible use cases, and align AI deployment with existing policies and ethical standards. The mandate may be broad, but the implementation can and should be deliberate.

Humans in the Loop: Why Oversight Is Not Optional

Another recurring theme from the conference was the danger of letting AI evaluate AI. Scaling tools without human oversight compounds error. One flawed assumption becomes many. Bias multiplies. Outputs drift. The lesson here is not anti-technology; it is pro-governance. AI works best when humans remain embedded throughout the lifecycle: selecting tools, defining scope, reviewing outputs, and deciding whether the system is working at all.

This aligns squarely with long-standing compliance principles. Judgment-heavy decisions, investigations, escalations, and remediations must remain human. Attempting to automate them introduces fairness and defensibility risks that no compliance program can explain away after the fact. AI should accelerate compliance work, not absolve responsibility for it.

Trust and Integrity: The Core Compliance Tension with AI

The most profound tension discussed at the conference was philosophical. Compliance programs are built on trust and integrity. AI, by contrast, is often perceived as opaque, untrustworthy, and occasionally wrong. This creates a credibility problem.

Why would a compliance function that spends years telling employees to act ethically, verify sources, and question assumptions deploy a tool that fabricates answers or cannot explain its reasoning? If compliance cannot articulate why an AI system aligns with the organization’s ethical standards, it should not be deployed, no matter how efficient it appears to be. Trust is not just about outputs. It extends to inputs, data quality, and understanding how systems interact with information. AI amplifies what it is given. Bad data does not improve through automation; it spreads faster.

Iteration Over Perfection: Learning Is Part of the Process

A healthy counterpoint emerged as well: AI is not a one-shot deployment. It requires iteration. Early failures are not proof that AI does not work; they are evidence that learning has begun. Several speakers emphasized that AI improves through feedback. Teams must be willing to correct it, teach it, and refine its outputs over time. Compliance professionals who abandon tools after one or two imperfect attempts misunderstand how the technology functions.

That said, iteration does not excuse carelessness. Learning must occur within guardrails: governance frameworks, usage boundaries, and documentation matter more, not less, when tools evolve.

Compliance as Value Creator, Not Speed Bump

One of the most encouraging insights from the conference was how AI is reshaping compliance’s role inside organizations. When compliance is involved early, before tools are rolled out, it becomes a partner in innovation rather than an obstacle.

Nicodemus pointed out that companies like Robinhood, and leaders like Hemma Lomax, Deputy General Counsel, Vice President, and Head of Business Integrity at DocuSign, illustrate this point clearly. Compliance teams that embed themselves in product development and operational change help shape tools that work within ethical and regulatory boundaries from the start. That credibility compounds.

Lomax noted that at DocuSign, she and her compliance teams have gone further, creating AI agents that perform defined tasks continuously, with built-in ethical guardrails. When these tools are handed to new users, the hard questions have already been answered. This is how compliance becomes a competitive advantage; not by saying no, but by helping the business say yes safely.

No Experts, Only Practitioners

Another refreshing theme from the conference was humility. No one claimed to be an AI expert. Especially not in compliance. That matters. When technologies move quickly, false certainty is dangerous. Compliance professionals should not be intimidated by those who claim mastery. Instead, they should lean into their strengths: skepticism, documentation, and principled decision-making. AI does not require omniscience. It requires informed judgment.

The Vibe Shift: From Fear to Engagement

Perhaps the most telling insight came not from the stage, but from the hallways. Compared to earlier events, the mood around AI has shifted. Compliance professionals are no longer crossing their arms in resistance. They recognize the benefits and risks and want to engage. No one believes AI will disappear. The debate is no longer whether to use it, but how. Some organizations will lean in aggressively. Others will move cautiously. All will need compliance to guide those choices. The most effective analogy offered was this: AI is like a very confident intern. Smart. Fast. Occasionally wrong. Useful, but never in charge.

Conclusion: AI Is a Compliance Opportunity, If Compliance Leads

The Compliance Week AI conference made one thing clear: AI is not undermining compliance. It is testing it. Programs that lack clarity, governance, or confidence will struggle. Programs that know who they are, what they stand for, and how they make decisions will thrive. For compliance professionals, the question is not whether AI belongs at the table. It already sits there. The real question is whether compliance will claim its seat, not as a roadblock, but as the function that ensures innovation aligns with integrity. That is not a burden. It is an opportunity.

Categories
Great Women in Compliance

Great Women in Compliance: Why Decision Rubrics Matter in the Age of AI with Hemma Lomax and Shalini Rajoo

In this conversation, GWIC host Dr. Hemma R. Lomax and Shalini Rajoo explore the critical role of decision rubrics in governance, accountability, and trust, especially in the context of AI. Shalini shares her journey from law to compliance, emphasizing the importance of understanding systems and the impact of leadership on decision-making processes. They discuss how transparency and clarity in decision-making can build trust within organizations and the necessity of responsible AI governance. Practical tips for improving decision quality are also provided, highlighting the importance of self-awareness and critical thinking in leadership.

Takeaways:

  • The biggest risk in governance is unclear decisions.
  • AI amplifies existing clarity or confusion in decision-making.
  • Systems and rules reflect the identities of their architects.
  • Everyone has an impact on those around them every day.
  • Leadership is about improving the people around you.
  • It’s not just about rules; it’s about how people behave.
  • Decision rubrics provide consistency and predictability in outcomes.
  • Transparency in decision-making processes builds trust.
  • Slowing down to ask questions can lead to better decision-making.
  • Writing down the reasons for decisions brings clarity and accountability.

Sound bites:

“Systems and rules are not inherently neutral.”

“Transparency in decision making builds trust.”

“Slow is smooth, and smooth is fast.”

Chapters:

00:00 Introduction to Decision Rubrics and Governance

02:55 Shalini’s Journey: From Law to Governance

06:09 The Impact of Systems on Leadership and Accountability

09:09 Transitioning to Compliance and Ethics

11:49 Understanding Decision Rubrics in Compliance

15:06 The Role of Leadership in Decision Making

18:03 Designing Conditions for Effective Decision Making

20:47 The Importance of Transparency in Decision Processes

24:09 Decision Rubrics: Building Trust in Organizations

26:49 AI and Governance: Leadership Infrastructure Failures

29:47 Responsible AI: The Role of Ethics and Compliance

32:55 Practical Tips for Improving Decision Quality

36:00 Conclusion: The Future of Decision Making in AI

Guest Biography:

Shalini Rajoo is the Founder and Principal Consultant of Shalini Rajoo Advisory, LLC, where she partners with organizations to design governance, compliance, and decision-making systems that are resilient, trustworthy, and aligned to real operational pressures. Across more than two decades in law, compliance, HR, and organizational leadership, Shalini has helped companies and leaders move beyond check-the-box frameworks to build structures that embed accountability, clarity, and performance into everyday decisions.

She began her career in South Africa, first as a public prosecutor and then leading regulatory work with the Department of Trade and Industry, collaborating with legislative and executive stakeholders on corporate, competition, and consumer law. After relocating to the U.S., Shalini practiced commercial litigation. She later served as Director of Global Business Conduct for a Fortune 500 company, where she redesigned ethics and compliance systems, led global risk assessments, and championed psychological safety and integrity-based practices.

Today, Shalini’s work centers on helping leaders clarify decision rights, governance architectures, and accountability pathways — especially as organizations adopt AI and automation. She recently spoke at the Opal Group’s Corporate Governance & Ethics in the Age of AI conference, where she reframed AI governance as a leadership-infrastructure challenge rather than a purely technical or compliance one.

Categories
AI Today in 5

AI Today in 5: February 10, 2026, The AI Redefining GRC Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. How AI is redefining GRC. (GulfNews)
  2. AI-assisted workforce leave compliance program. (USAToday)
  3. How to integrate AI into your compliance workflows. (AOL)
  4. How AI can speed compliance research. (FedScoop)
  5. Data sovereignty for AI compliance. (TechTarget)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance – Proactive Compliance Frameworks for Evolving AI Regulations with Yakir Golan

Innovation occurs across many areas, and compliance professionals need not only to be ready for it but also to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Yakir Golan, CEO & Co-founder at Kovrr, who shares his professional journey from the Israeli intelligence community to his current role at Kovrr.

With a rich background in Israel’s intelligence community and significant experience with cybersecurity vendors, Golan champions integrating frameworks with analytics to effectively assess and navigate risks, emphasizing governance as a vital component for sustained innovation. He advocates proactive measures to address AI-enabled insider threats, urging businesses not to wait for perfect regulatory clarity amid the fast-paced evolution of AI technologies. Golan’s holistic approach to compliance transcends mere regulatory adherence, focusing on business-driven proficiency in cybersecurity and AI to meet the dynamic demands of the business landscape.


Key highlights:

  • Financial Models for AI Risk Governance
  • Enhancing AI Governance with Adaptive Frameworks
  • Empowering Innovation Through Strategic Governance and Compliance
  • Unified Approach: AI-Cybersecurity in Enterprise Risk Management

Resources:

Yakir Golan on LinkedIn

Kovrr 

Innovation in Compliance was recently ranked Number 4 in Risk Management by 1,000,000 Podcasts.

Categories
AI Today in 5

AI Today in 5: February 9, 2026, The AI Agents Doing Compliance Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. What to do when AI is forced on compliance. (CW)
  2. Napier AI/AML report is out. (FinTechGlobal)
  3. AI and the accountability gap. (FinTechGlobal)
  4. Where AI is tearing through corporate America. (WSJ)
  5. Goldman is letting AI Agents do compliance. (PYMNTS)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

From Principle to Proof: Operationalizing AI Governance Through the ECCP and NIST

Artificial intelligence governance has officially crossed the threshold from theory to expectation. The Department of Justice has not issued a standalone “AI rulebook,” but it has provided a framework for compliance professionals to consider the issue: the 2024 Evaluation of Corporate Compliance Programs (ECCP). In this version of the ECCP, the DOJ laid out guidance that any technology capable of creating material business risk must be governed, monitored, and improved like any other compliance risk. That includes artificial intelligence.

Too many organizations still treat AI governance as an ethics exercise, a technical problem, or a future concern. That posture is not defensible. The DOJ does not ask whether your program is fashionable or aspirational. It asks three very old-fashioned questions: Is your compliance program well designed? Is it applied in good faith? Does it work in practice? Those questions apply with full force to AI.

In this post, I want to move the discussion from abstract frameworks to operational reality. I will show how compliance professionals can use the ECCP to structure AI governance, select board-grade KPIs, and demonstrate effectiveness in a way regulators understand. I will also show how the NIST AI Risk Management Framework (NIST Framework) fits neatly underneath this structure as an operating model, not a competing philosophy.

AI Governance Is Already an ECCP Issue

The DOJ has repeatedly emphasized that compliance programs must evolve as business risks evolve. Artificial intelligence is not a future risk. It is already embedded in pricing, hiring, credit decisions, customer interactions, fraud detection, and third-party screening. If an AI model can influence revenue, customer outcomes, or regulatory exposure, it is a compliance risk. Period.

The ECCP does not require companies to eliminate risk. It requires them to identify, assess, manage, and learn from it. AI governance, therefore, belongs squarely inside the compliance program, not off to the side in an innovation lab or technology committee.

The ECCP as an AI Governance Blueprint

The power of the ECCP is its simplicity. Every enforcement action ultimately traces back to the same three questions. Let us apply them directly to AI.

Is the Program Well Designed?

Design begins with risk assessment. If your organization cannot answer a basic question such as “What AI systems do we have, who owns them, and what decisions do they influence?”, you do not have a program. You have hope. A well-designed AI compliance program starts with an AI asset inventory that identifies models, tools, vendors, and use cases. Each asset must be risk-classified based on business impact, regulatory exposure, and potential harm.

Board-level KPIs here are coverage metrics. How many AI assets have been identified? What percentage has been risk-classified? How many high-impact models have completed an impact assessment before deployment? If your dashboard does not show near-full coverage, the design is incomplete.
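
The coverage metrics above reduce to simple arithmetic over an asset inventory. Here is a minimal sketch in Python; the inventory entries, field names, and structure are illustrative assumptions, not anything prescribed by the ECCP:

```python
# Hypothetical AI asset inventory; the entries and field names are
# illustrative assumptions for this sketch only.
inventory = [
    {"name": "third-party screening model", "risk_classified": True,
     "high_impact": True, "impact_assessed": True},
    {"name": "HR resume screener", "risk_classified": True,
     "high_impact": True, "impact_assessed": False},
    {"name": "marketing copy assistant", "risk_classified": False,
     "high_impact": False, "impact_assessed": False},
]

def coverage_kpis(assets):
    """Board-grade coverage metrics: what share of AI assets is governed?"""
    total = len(assets)
    classified = sum(a["risk_classified"] for a in assets)
    high_impact = [a for a in assets if a["high_impact"]]
    assessed = sum(a["impact_assessed"] for a in high_impact)
    return {
        # Share of all known AI assets that have been risk-classified.
        "pct_risk_classified": 100 * classified / total,
        # Share of high-impact models with a completed impact assessment.
        "pct_high_impact_assessed": (
            100 * assessed / len(high_impact) if high_impact else 100.0
        ),
    }

print(coverage_kpis(inventory))
```

In this toy inventory, only two of three assets are classified and one of two high-impact models is assessed, so the dashboard would immediately flag the design as incomplete.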

Policies and procedures come next. The DOJ does not care how many policies you have. It cares whether they provide clear guidance for real decisions. AI policies should cover the full lifecycle, from design and data sourcing through deployment, monitoring, and retirement. A practical KPI is policy coverage. What percentage of AI assets operate under current, approved procedures? How often are those procedures refreshed? Annual updates are a reasonable baseline in a rapidly changing risk environment.

Is the Program Applied Earnestly and in Good Faith?

Good faith is demonstrated through action, not intent. Training is a central indicator. The DOJ expects role-based training tailored to actual risk. A generic AI awareness course does not meet this standard. Developers, model owners, compliance reviewers, and business leaders all require different training. Completion rates matter, but so does comprehension. Measuring post-training proficiency improvement is one of the clearest signals that training is more than a box-checking exercise.

Third-party risk management is another critical area. Many organizations rely on external models, data providers, or AI-enabled vendors. If you do not understand how those tools are built, governed, and updated, you are importing risk without controls. Strong programs use standardized AI diligence questionnaires, assign assurance scores, and require contractual safeguards for high-risk vendors. A board-ready KPI here is the percentage of high-risk AI vendors subject to enhanced diligence and contractual controls.

Mergers and acquisitions deserve special attention. AI risk does not wait for post-close integration. The DOJ has been explicit that pre-acquisition diligence matters. A defensible KPI is simple and unforgiving: 100% of acquisition targets with material AI usage must undergo AI due diligence before closing. Anything less invites inherited risk.

Does the Program Work in Practice?

This is where many programs fail. Paper controls do not impress regulators. Outcomes do. Incident reporting is a critical signal. A low number of reported AI issues may indicate fear, confusion, or a lack of awareness rather than genuine safety. What matters is whether issues are identified, investigated, and resolved promptly. Mean time to investigate is a powerful metric. If AI-related concerns take months to resolve, the program is not working. Clear escalation paths, defined investigation playbooks, and documented root cause analysis are essential.

Continuous monitoring is equally important. High-risk AI systems must be monitored for performance drift, data changes, and unintended outcomes. The DOJ expects companies to use data analytics to test whether controls are functioning. KPIs here include validation pass rates before deployment, drift-detection coverage for critical models, and corrective action closure rates. These are not technical vanity metrics. They are evidence of effectiveness.
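
The effectiveness metrics in the two paragraphs above can be sketched the same way. This hypothetical Python fragment computes mean time to investigate and corrective action closure rate from an incident log; the log entries and field names are assumptions made for illustration:

```python
from datetime import date

# Hypothetical AI incident log; the entries and field names are
# illustrative assumptions for this sketch only.
incidents = [
    {"opened": date(2026, 1, 5), "investigated": date(2026, 1, 9),
     "corrective_action_closed": True},
    {"opened": date(2026, 1, 12), "investigated": date(2026, 1, 20),
     "corrective_action_closed": True},
    {"opened": date(2026, 2, 1), "investigated": date(2026, 2, 3),
     "corrective_action_closed": False},
]

def effectiveness_kpis(log):
    """Outcome metrics regulators look for: speed and closure, not policy volume."""
    # Average days from an issue being opened to its investigation.
    mtti = sum((i["investigated"] - i["opened"]).days for i in log) / len(log)
    # Share of incidents whose corrective actions have been closed out.
    closure = 100 * sum(i["corrective_action_closed"] for i in log) / len(log)
    return {
        "mean_days_to_investigate": mtti,
        "pct_corrective_actions_closed": closure,
    }

print(effectiveness_kpis(incidents))
```

Tracking these two numbers over successive board cycles is what turns "does the program work in practice?" from an assertion into evidence.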

Where NIST Fits and Why It Matters

The NIST AI Risk Management Framework does not compete with the ECCP. It operationalizes it. The ECCP tells you what regulators expect. NIST helps you implement those expectations across governance, mapping, measurement, and management. For example, ECCP risk assessment aligns with NIST’s mapping function. ECCP’s continuous improvement aligns with NIST’s measurement and management functions. Using NIST terminology creates a shared language across compliance, legal, security, and data science teams. That shared language is governance in action.

Reporting AI Risk to the Board

Boards do not want technical detail. They want assurance. The most effective AI governance dashboards focus on a small set of indicators that answer the DOJ’s three questions: coverage, quality, responsiveness, and learning. Examples include the percentage of AI assets risk-classified, validation pass rates, investigation cycle times, and corrective action closure rates. When these metrics move in the right direction, they tell a credible story of control. More importantly, they show that compliance is not reacting to AI. It is governing it.

Five Key Takeaways for Compliance Professionals

  1. AI as Risk. Artificial intelligence is already within the scope of the ECCP. If AI can influence business outcomes, it must be governed like any other compliance risk.
  2. Risk Management Program. A well-designed AI compliance program begins with complete asset identification and risk classification. Coverage metrics are the first signal regulators will examine.
  3. Implementation. Good faith implementation is demonstrated through role-based training, disciplined third-party oversight, and pre-acquisition AI diligence. Intent without execution does not count.
  4. Outcomes, not Inputs. Effectiveness is proven through outcomes. Investigation speed, monitoring coverage, and corrective action closure rates matter more than policy volume.
  5. Complementary. The NIST Framework complements the ECCP by providing an operating model that compliance, legal, and technical teams can share. Together, they turn principles into proof.

Final Thoughts

AI governance is not about predicting the future. It is about demonstrating discipline in the present. The DOJ is not asking compliance professionals to become data scientists. It is asking us to do what we have always done well: identify risk, establish controls, test effectiveness, and improve continuously. The ECCP already gives you the framework. The only question is whether you will apply it.

Categories
From the Editor's Desk

From the Editor’s Desk – Aaron Nicodemus on the CW AI Conference Insights: Navigating the Practical Use of AI in Compliance

In this episode of ‘From the Editor’s Desk,’ Tom Fox visits with Aaron Nicodemus to discuss highlights from the recent Compliance Week AI Conference. Key takeaways include the importance of understanding the purpose and practical use of AI tools before implementation, the pressures from C-suite and boards to adopt AI, and the necessity of a human-in-the-loop approach. The conversation also touches on integrating trust and integrity into AI adoption, the evolving role of compliance as a trusted partner in AI initiatives, and the collective willingness to learn and apply AI across compliance operations.

Key highlights:

  • Importance of Understanding AI Implementation
  • Pressure from the Top: Compliance and AI
  • Human Oversight in AI Processes
  • Trust and Integrity in AI
  • Compliance as a Competitive Advantage
  • Real-World Examples: Robinhood and DocuSign
  • The Evolving Role of Compliance in AI
  • Conference Vibes and Final Thoughts

Resources:

Aaron Nicodemus on LinkedIn

Compliance Week

Categories
AI Today in 5

AI Today in 5: February 6, 2026, The Trillion $$ Wipeout Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5.

Top AI stories include:

  1. EU AI group establishes task force to foster compliance. (Babl)
  2. AI diligence tool rollout. (InvestmentNews)
  3. AI in healthcare is driving greater accountability. (FastCompany)
  4. The compliance convergence challenge. (SecurityBlvd.)
  5. AI fears wipe out tech stock values. (Bloomberg)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.