Categories
AI Today in 5

AI Today in 5: February 26, 2026, The Use AI or Lose Your Job Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. Treasury issues AI risks and compliance tools for financial services. (WVNS)
  2. EU AI Act enforcement begins. (DigWatch)
  3. Human in the Loop is needed for AI in healthcare. (HealthcareITNews)
  4. What happens when companies demand that employees use AI? (WSJ)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
Blog

AI, Compliance, and the Missing “Why”: Highlights from the Compliance Week AI Conference

If there was one clear message coming out of Compliance Week’s January 2026 AI conference, The Leading Edge: Applying AI and Data Analytics in E&C, it was not about tools, vendors, or futuristic promises. It was about discipline. More specifically, it was about something compliance professionals have preached for decades and are now being pressured to skip: the “why.”

In a recent episode of the podcast From the Editor’s Desk, I sat down with Compliance Week Editor in Chief Aaron Nicodemus to gather his reflections on the conference and its implications for compliance leaders. What emerged was not a story about artificial intelligence replacing compliance, but about AI exposing weaknesses in how organizations make decisions, manage pressure from the top, and integrate ethics into innovation. For compliance professionals, the discussion was a reminder that AI is not a technology problem. It is a governance problem.

The Step Everyone Is Skipping: Why Before What

One of the most striking takeaways from the conference came from Jen Gennai, former AI Ethics and Compliance Advisor at Google. Her message was deceptively simple: companies are skipping the “why.” Organizations are rushing to implement AI tools without first articulating what problem they are trying to solve or why AI is the appropriate solution. Instead of defining the use case and then selecting the right tool, teams are buying technology first and hoping value emerges later.

For compliance professionals, this should sound uncomfortably familiar. Risk management, third-party due diligence, investigations: every mature compliance process begins with a defined purpose. There is a reason the first step in the third-party risk management process is the Business Rationale. This is the "why," requiring a business sponsor to explain why the organization needs a new or different business partner. Yet when AI enters the picture, that discipline often evaporates. The result is experimentation without accountability and pilots without strategy.

The irony is that compliance already knows how to do this. The failure is not a lack of knowledge; it is pressure.

Tone at the Top, Revisited: Pressure Without Direction

According to a recent Compliance Week and konaAI study released at the conference, more than 60 percent of compliance officers feel pressure from the board or C-suite to “use AI.” Not to use it in a specific way. Not to achieve a defined outcome. To use it. This top-down mandate creates a new kind of compliance risk. When leadership demands adoption without guidance, teams feel compelled to move quickly, sometimes cutting corners they would never cut in other risk domains.

This is not inherently nefarious. Boards are doing what they believe is necessary to keep their organizations competitive. But pressure without clarity creates the conditions for poor governance. Compliance leaders must recognize this moment not as a threat, but as an opening. Because when leadership says “use AI,” compliance has an opportunity to respond with structure: identify manual pain points, define defensible use cases, and align AI deployment with existing policies and ethical standards. The mandate may be broad, but the implementation can and should be deliberate.

Humans in the Loop: Why Oversight Is Not Optional

Another recurring theme from the conference was the danger of letting AI evaluate AI. Scaling tools without human oversight compounds error. One flawed assumption becomes many. Bias multiplies. Outputs drift. The lesson here is not anti-technology; it is pro-governance. AI works best when humans remain embedded throughout the lifecycle: selecting tools, defining scope, reviewing outputs, and deciding whether the system is working at all.

This aligns squarely with long-standing compliance principles. Judgment-heavy decisions, investigations, escalations, and remediations must remain human. Attempting to automate them introduces fairness and defensibility risks that no compliance program can explain away after the fact. AI should accelerate compliance work, not absolve responsibility for it.

Trust and Integrity: The Core Compliance Tension with AI

The most profound tension discussed at the conference was philosophical. Compliance programs are built on trust and integrity. AI, by contrast, is often perceived as opaque, untrustworthy, and occasionally wrong. This creates a credibility problem.

Why would a compliance function that spends years telling employees to act ethically, verify sources, and question assumptions deploy a tool that fabricates answers or cannot explain its reasoning? If compliance cannot articulate why an AI system aligns with the organization’s ethical standards, it should not be deployed, no matter how efficient it appears to be. Trust is not just about outputs. It extends to inputs, data quality, and understanding how systems interact with information. AI amplifies what it is given. Bad data does not improve through automation; it spreads faster.

Iteration Over Perfection: Learning Is Part of the Process

A healthy counterpoint emerged as well: AI is not a one-shot deployment. It requires iteration. Early failures are not proof that AI does not work; they are evidence that learning has begun. Several speakers emphasized that AI improves through feedback. Teams must be willing to correct it, teach it, and refine its outputs over time. Compliance professionals who abandon tools after one or two imperfect attempts misunderstand how the technology functions.

That said, iteration does not excuse carelessness. Learning must occur within guardrails: governance frameworks, usage boundaries, and documentation matter more, not less, when tools evolve.

Compliance as Value Creator, Not Speed Bump

One of the most encouraging insights from the conference was how AI is reshaping compliance’s role inside organizations. When compliance is involved early, before tools are rolled out, it becomes a partner in innovation rather than an obstacle.

Nicodemus pointed out that companies like Robinhood, and leaders such as Hemma Lomax, Deputy General Counsel, Vice President, and Head of Business Integrity at DocuSign, illustrate this point clearly. Compliance teams that embed themselves in product development and operational change help shape tools that work within ethical and regulatory boundaries from the start. That credibility compounds.

Lomax noted that at DocuSign, she and her compliance teams have gone further, creating AI agents that perform defined tasks continuously, with built-in ethical guardrails. When these tools are handed to new users, the hard questions have already been answered. This is how compliance becomes a competitive advantage; not by saying no, but by helping the business say yes safely.

No Experts, Only Practitioners

Another refreshing theme from the conference was humility. No one claimed to be an AI expert. Especially not in compliance. That matters. When technologies move quickly, false certainty is dangerous. Compliance professionals should not be intimidated by those who claim mastery. Instead, they should lean into their strengths: skepticism, documentation, and principled decision-making. AI does not require omniscience. It requires informed judgment.

The Vibe Shift: From Fear to Engagement

Perhaps the most telling insight came not from the stage, but from the hallways. Compared to earlier events, the mood around AI has shifted. Compliance professionals are no longer crossing their arms in resistance. They recognize the benefits and risks and want to engage. No one believes AI will disappear. The debate is no longer whether to use it, but how. Some organizations will lean in aggressively. Others will move cautiously. All will need compliance to guide those choices. The most effective analogy offered was this: AI is like a very confident intern. Smart. Fast. Occasionally wrong. Useful, but never in charge.

Conclusion: AI Is a Compliance Opportunity, If Compliance Leads

The Compliance Week AI conference made one thing clear: AI is not undermining compliance. It is testing it. Programs that lack clarity, governance, or confidence will struggle. Programs that know who they are, what they stand for, and how they make decisions will thrive. For compliance professionals, the question is not whether AI belongs at the table. It already sits there. The real question is whether compliance will claim its seat, not as a roadblock, but as the function that ensures innovation aligns with integrity. That is not a burden. It is an opportunity.

Categories
AI Today in 5

AI Today in 5: January 15, 2026, The AI for IA Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  1. AI for internal audit. (DataSnipper)
  2. The CISO’s guide to cyber AI. (Darktrace)
  3. Building the business case for legal-driven AI. (Harvey)
  4. The human-in-the-loop for financial crime risk assessments. (FinTechGlobal)
  5. Warren Buffett compares AI risk to the risk of nuclear war. (Yahoo!Finance)

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
FCPA Compliance Report

FCPA Compliance Report – Nicole Di Schino on Harnessing AI for Compliance: Governance, Risks, and Best Practices

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this episode, Tom welcomes Nicole Di Schino, Principal Compliance Services Consultant at Diligent’s Spark Compliance Group, to discuss how best to harness AI for your compliance regime through 2026 and beyond.

Nicole and Tom discuss the critical importance of AI governance, compliance, and modern GRC. They cover practical steps for developing comprehensive compliance programs, emphasizing the necessity for AI risk assessments, the establishment of AI governance committees, and the implementation of human oversight in AI processes. Nicole highlights the intrinsic risks of AI, including privacy concerns and AI bias, and shares her personal experiences with AI’s impact in educational settings. Tom underscores the role of compliance education, advocating for the broader view of compliance as an ambassadorial and academic function. This session also explores the integration of AI into compliance workflows and the essential role of board and committee oversight.


Resources:

Nicole Di Schino on LinkedIn

Diligent Website

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Innovation in Compliance

Innovation in Compliance: Dare to Dream: Leveraging AI and Innovation

Innovation is present in many areas, and compliance professionals must not only be prepared for it but also actively embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, host Tom Fox welcomes Dr. Hemma Lomax from DocuSign, Chris Crowder from Airbus, and Vince Walden from konaAI to explore the future of compliance with AI and AgenticAI. This podcast was edited from a konaAI-sponsored webinar. For a link to the full webinar replay, see below.

Our discussion centers around the integration of AI, innovation, and compliance within corporate environments. Chris and Hemma share insights about their current data analytics efforts and the transformative role of AI in enhancing compliance processes. They discuss the importance of human judgment, exploring new technologies, and creating a forward-thinking compliance culture. Audience members are encouraged to think creatively about leveraging technology to address compliance challenges and prepare for a rapidly evolving business landscape.

Key highlights:

  • Current State of AI and Data Analytics in Compliance
  • Challenges and Opportunities in AI Implementation
  • The Role of AI in Risk Management
  • Human Judgment and AI: A Balanced Approach
  • Future of AI in Compliance and Business
  • Future of AI Agents in Compliance

Resources:

For a full replay of the Webinar, click here.

For the konaAI white paper on AgenticAI, click here.

To listen to the award-winning podcast Upping Your Game on the use of AI in a compliance program, click here.

Check out my latest book, Upping Your Game: How Compliance and Risk Management Move to 2023 and Beyond, available from Amazon.com.

Innovation in Compliance was recently honored as the number 4 podcast in Risk Management by 1,000,000 Podcasts.

Categories
Compliance Into the Weeds

Compliance into the Weeds: Navigating Effective Human Oversight for ADS/ADMT in AI Compliance

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore a subject more fully. Are you looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode, Tom Fox and Matt Kelly discuss Matt’s recent experience at a compliance conference in Lithuania and engage in a thorough discussion of effective human oversight in AI systems.

They examine the recent guidance from the European Data Protection Supervisor (EDPS) on maintaining human oversight of automated decision-making processes, relating it to similar regulatory requirements in California. The conversation explores the implications for corporate compliance, IT, and audit professionals, highlighting the challenge of balancing AI efficiency with the need for effective human intervention to mitigate risks and ensure regulatory compliance.

Key highlights:

  • Matt’s Experience in Lithuania
  • AI Regulation in the EU and CCPA Amendments re: ADS and ADMT
  • Effective Human Oversight in AI Systems
  • Challenges in AI Control Design
  • The Role of Compliance and Audit in AI Oversight

Resources:

Matt on Radical Compliance

Tom with a 5-Part podcast series on the CCPA Amendments on ADS/ADMT with Alyssa DeSimone on Life with GDPR

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, Communicator, and W3 Awards for podcast excellence.

Categories
AI Today in 5

AI Today in 5: October 9, 2025, The Looming AI Compliance Crisis Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  • Is AI correction coming soon? (FT)
  • Salesforce AI agents to assist with compliance issues. (CSO)
  • AI compliance needs human oversight. (FinTech Global)
  • AI compliance crisis looming? (Technology.Org)
  • Anthropic and IBM are joining forces. (WSJ)

Categories
AI Today in 5

AI Today in 5: August 27, 2025, The AI Feelings Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories:

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: August 15, 2025, The AI as Boss Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

For more information on the use of AI in Compliance programs, my new book, Upping Your Game, is available. You can purchase a copy of the book on Amazon.com.

Categories
AI Today in 5

AI Today in 5: August 14, 2025, The Putting the Human in AI Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

  • Presight and Dow Jones Factiva Partner to Create AI-Native Risk and Compliance Solutions. (TechAfricaNews)
  • CITGO to enhance compliance through AI. (BusinessWire)
  • GenAI in government. (SAS)
  • EU general-purpose AI obligations. (Baker & McKenzie)
  • Grounding your AI in the human experience. (Nice)

For more information on the use of AI in Compliance programs, see Tom Fox’s new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.