Categories
Compliance Tip of the Day

Compliance Tip of the Day – Trust and Verify

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we continue our five-part series on using AI in a best practices compliance program by considering how to trust and verify your use of AI in your compliance program.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
Daily Compliance News

Daily Compliance News: August 20, 2025, The Boss is Back Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News. All, from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • UK drops request for Apple data. (WSJ)
  • Former Mali PM arrested for corruption. (ABCNews)
  • Advice of counsel without the advice. (Reuters)
  • No More Mr. Nice Guy: The boss is back. (FT)

You can donate to flood relief for victims of the Kerr County flooding by going to the Hill Country Flood Relief here.

Categories
Blog

Trust and Verify: How Compliance Can Harness AI Agents Safely

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, I have created a one-page checklist for each post in the series that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

When we think of “trust” in compliance, our minds usually go to whistleblowers, employees, or third parties. But increasingly, the question of trust must extend to a new category of actors: AI agents.

As Blair Levin and Larry Downes explain in their provocative Harvard Business Review piece, titled “Can AI Agents Be Trusted?”, AI agents are not just smarter chatbots. They are software systems that can collect data, make decisions, and even act autonomously based on rules and priorities. For compliance professionals, this changes the game. If AI agents can act on our behalf, can they also be trusted to uphold compliance principles?

The answer is yes, but only if we design and monitor them with the same rigor that we apply to employees, third parties, and business partners. Today, we look at five key takeaways from their article to guide compliance professionals in building AI agents into trustworthy components of their programs.

1. Trust Requires Oversight, Just as with Human Agents

The article makes a simple but powerful analogy: think of an AI agent the way you would think of an employee or contractor. Before delegating sensitive responsibilities, you conduct background checks, put controls in place, and possibly even require bonding. The same must hold for AI.

For compliance, this means creating oversight structures before deploying agents into live workflows. If your compliance AI assistant can monitor transactions for red flags, you must ensure that a human compliance officer reviews its outputs. If it can escalate potential whistleblower complaints, you must validate that escalation logic against regulatory requirements.
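
The oversight structure described above can be sketched in code. This is a minimal, purely illustrative example (the `Flag` and `ReviewQueue` names are hypothetical, not from any real product): the AI agent may only submit flags to a queue, and only a named human reviewer can escalate or dismiss them.

```python
# Hypothetical sketch of a human-in-the-loop review gate for an AI
# transaction-monitoring agent. All class and field names are
# illustrative assumptions, not a real compliance platform's API.
from dataclasses import dataclass


@dataclass
class Flag:
    transaction_id: str
    reason: str
    status: str = "pending_review"  # no flag ever acts on its own


class ReviewQueue:
    """AI output lands here; only a human reviewer can change its status."""

    def __init__(self):
        self.flags = []

    def submit(self, flag: Flag):
        # The agent may submit, but never escalate or close a flag itself.
        self.flags.append(flag)

    def review(self, transaction_id: str, reviewer: str, approve: bool):
        # A human reviewer's identity is recorded with every disposition.
        for flag in self.flags:
            if flag.transaction_id == transaction_id:
                flag.status = "escalated" if approve else "dismissed"
                flag.reviewer = reviewer
                return flag
        raise KeyError(transaction_id)
```

The design choice worth noting is that the agent has no code path to "escalated": that transition exists only in the human-facing `review` method, which mirrors the control described above.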

AI oversight also means testing for vulnerabilities. As Levin and Downes note, AI agents are susceptible to hacking, manipulation, and even misinformation. Compliance should require penetration testing of any agent integrated into company systems, just as IT would test network defenses.

Trust is never blind in compliance. It is built on verification, monitoring, and accountability. AI agents can and should be trusted, but only when they operate within a compliance framework that mirrors the controls we already use for human agents.

2. Recognize and Manage Bias and Conflicts of Interest

One of the major risks highlighted in the article is bias, whether introduced by marketers, advertisers, or flawed training data. Just as a conflicted employee can steer decisions for personal gain, an AI agent can be subtly manipulated to favor sponsors, advertisers, or even certain viewpoints.

For compliance professionals, this should raise alarms. Imagine an AI agent used for third-party due diligence. If biased data shapes its recommendations, you could end up onboarding a high-risk vendor while rejecting a low-risk one. Worse, if regulators discover that your system relied on biased algorithms, you’ll face serious questions about program effectiveness.

The solution is conflict-of-interest monitoring for AI. Just as employees must disclose outside interests, AI agents should be tested and audited for hidden preferences. Compliance should insist on transparency from vendors about training data sources and sponsorship arrangements. In some cases, contracts with AI providers may need explicit clauses guaranteeing independence from commercial influence.
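
One concrete form such testing could take is a simple disparity check on the agent's own output. The sketch below is an assumption about how a first-pass audit might look (the function names, the region grouping, and the 20% threshold are all illustrative): compare approval rates across vendor regions and flag any group whose rate trails the best-treated group by more than the threshold.

```python
# Illustrative first-pass bias audit for an AI due-diligence agent.
# Grouping by region and the max_gap threshold are hypothetical
# choices; a real audit would use legally reviewed criteria.
from collections import defaultdict


def approval_rates(decisions):
    """decisions: iterable of (region, approved: bool) pairs."""
    totals = defaultdict(lambda: [0, 0])  # region -> [approved, seen]
    for region, approved in decisions:
        totals[region][0] += int(approved)
        totals[region][1] += 1
    return {region: a / n for region, (a, n) in totals.items()}


def disparity_flags(decisions, max_gap=0.2):
    """Return regions whose approval rate trails the best-treated
    region by more than max_gap -- candidates for human review."""
    rates = approval_rates(decisions)
    baseline = max(rates.values())
    return [r for r, rate in rates.items() if baseline - rate > max_gap]
```

A flagged gap is not proof of bias, only a trigger for the kind of human investigation and vendor transparency discussion described above.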

Compliance has always been about spotting and mitigating conflicts. In the age of AI, that vigilance must extend to our digital agents. Only then can we claim that our programs are fair, impartial, and defensible.

3. Treat AI Agents as Fiduciaries of Compliance

Perhaps the most compelling insight from Levin and Downes is that AI agents should be treated as fiduciaries. Just as lawyers, trustees, and board members owe a heightened duty of care to their clients, AI agents entrusted with compliance responsibilities must be designed and governed under similar standards.

For compliance officers, this concept aligns directly with DOJ expectations. The Evaluation of Corporate Compliance Programs (2024 ECCP) emphasizes accountability, transparency, and independence. By treating AI agents as fiduciaries, compliance leaders can extend these principles to technology.

What does fiduciary duty look like in practice?

  • Obedience: AI must follow company policies and regulatory standards.
  • Loyalty: AI must prioritize the company’s compliance objectives over any hidden commercial interests.
  • Confidentiality: AI must protect sensitive compliance data from leaks or misuse.
  • Accountability: AI actions must be traceable, with clear logs and audit trails.

This fiduciary framing provides compliance professionals with a powerful tool. It not only reassures stakeholders that AI can be trusted, but it also sets a benchmark that regulators can understand and evaluate. In short, fiduciary AI is defensible AI.
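
The accountability duty, in particular, lends itself to a concrete sketch. The example below is an assumption about one way to implement it (the `AuditTrail` class is hypothetical): every agent action is appended to a hash-chained log, so each entry cryptographically commits to the one before it and tampering with history becomes detectable.

```python
# Illustrative sketch of an append-only, hash-chained audit trail for
# an AI agent's decisions. The class and field names are hypothetical,
# not a real product's API; a production system would also add
# timestamps and secure off-host storage.
import hashlib
import json


class AuditTrail:
    """Each entry's hash covers the previous entry's hash, so editing
    any past record invalidates the whole chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, actor: str, action: str, detail: dict) -> str:
        entry = {
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._last_hash = digest
        self.entries.append(entry)
        return digest

    def verify(self) -> bool:
        # Recompute every hash; any edit to a past entry breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "detail", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

The point of the chaining is regulatory defensibility: an auditor who holds only the final hash can later confirm that no decision in the trail was quietly altered.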

4. Build Market and Insurance-Based Safeguards

The article notes that beyond regulation, market mechanisms such as insurance and independent oversight will be critical to ensuring AI trustworthiness. For compliance leaders, this presents both a risk management strategy and an opportunity.

Just as identity theft insurance evolved alongside online banking, AI liability insurance will likely become a standard corporate requirement. Compliance officers should begin engaging with insurers to explore coverage for AI-related risks, such as data leaks, wrongful denials of due diligence clearance, or biased decision-making.

Equally important are third-party oversight tools. The article envisions AI “credit bureaus” that could audit agent behavior, set decision thresholds, or freeze activity when risks escalate. For compliance, such independent monitoring could provide an external layer of assurance that your AI systems are behaving as intended.

The takeaway is clear: do not rely solely on internal controls. Pair them with market-based safeguards and external verification. Doing so not only strengthens trust in AI agents but also demonstrates to regulators that your program embraces both proactive and independent oversight.

5. Design for Data Security and Local Control

Finally, Levin and Downes stress the importance of keeping decisions local; that is, ensuring sensitive data stays on company-controlled devices and servers, rather than in external clouds. For compliance professionals, this echoes a familiar principle: control the data, control the risk.

Agentic AI, by definition, processes vast amounts of sensitive information. If compliance agents are reviewing hotline reports, transaction monitoring data, or due diligence files, any data leakage could be catastrophic. That’s why strong encryption, local processing, and secure enclaves are essential.

Compliance officers should demand that AI vendors support:

  • On-device or private cloud processing for sensitive tasks.
  • Encryption of all data in transit and at rest.
  • Independent verification of security claims by external auditors.
  • Full disclosure of sponsorships, promotions, and paid influences.
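
The requirements above can also be turned into an automated procurement gate. The sketch below is illustrative only; the configuration keys are hypothetical, not any real vendor's schema, and a genuine review would go far deeper than a declared-config check.

```python
# Minimal sketch of checking a vendor's declared configuration against
# the compliance requirements listed above. All keys are hypothetical
# assumptions, not a standard schema.
REQUIRED_CONTROLS = {
    "processing_location": {"on_device", "private_cloud"},
    "encryption_in_transit": {True},
    "encryption_at_rest": {True},
    "independent_security_audit": {True},
    "sponsorship_disclosure": {True},
}


def vendor_gaps(config: dict) -> list:
    """Return the required controls a vendor's declared config fails,
    i.e. keys missing or outside the acceptable value set."""
    return [
        control
        for control, acceptable in REQUIRED_CONTROLS.items()
        if config.get(control) not in acceptable
    ]
```

An empty result means the vendor at least claims every control; anything returned becomes a question for the vendor and, as noted above, a candidate for independent verification.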

By designing AI agents with local control and transparency, compliance teams can build systems that are both effective and trustworthy. Data security is not just an IT concern; it is a compliance imperative.

Trust, But Never Blindly

AI agents hold immense potential for compliance programs. They can streamline monitoring, accelerate due diligence, and support real-time risk management. But as Levin and Downes remind us, they must also be carefully governed to prevent bias, manipulation, and misuse.

For compliance leaders, the path forward is to treat AI like any other agent (or, to channel your inner Ronald Reagan, to trust but verify). With oversight, fiduciary framing, market safeguards, and strong data controls, AI can become a trusted partner in compliance, one that strengthens, rather than weakens, the ethical fabric of the organization.

Categories
Daily Compliance News

Daily Compliance News: August 19, 2025, The AI Discontent Edition


Top stories include:

  • Panamanian Intermediary pleads guilty to bribery and corruption. (Enmayuscula)
  • The winter of our AI discontent. (Bloomberg)
  • Understanding corruption. (Investopedia)
  • When good enough is good enough. (FT)

You can donate to flood relief for victims of the Kerr County flooding by going to the Hill Country Flood Relief here.

Categories
Compliance Tip of the Day

Compliance Tip of the Day – AI Assistant for Compliance


Today, we continue our five-part series on using AI in a best practices compliance program by considering how a compliance professional can use AI as an assistant.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
AI Today in 5

AI Today in 5: August 19, 2025, The AI and Compliance Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5. All, from the Compliance Podcast Network. Each day, we consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

  • Texas AG goes after chatbots for kids’ mental health services. (KVUE)
  • China is turning to AI in information warfare. (NYT)
  • Does using AI put you on the wrong side of compliance? (UC Today)
  • Using AI for cross-border trade. (World Business Outlook)
  • Greenlight sues Compliance AI over trademark violation. (Bloomberg)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance – Gaurav Kapoor on Risk Management and the Role of AI in GRC

Innovation comes in many areas, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, Tom Fox interviews Gaurav Kapoor, Vice Chairman, Co-Founder, and Board Member of MetricStream, discussing his extensive professional background, from co-founding MetricStream to his current focus on customer intimacy amid AI market disruptions.

Kapoor delves into the evolving landscape of risk management, emphasizing the importance of midyear reviews and the integration of various risk themes, such as operational risk, audit compliance, and cybersecurity. He elaborates on the role of AI in GRC, explaining how generative and agentic AI can streamline compliance processes and enhance risk management strategies. The conversation also touches on the increasing significance of cybersecurity, geopolitical instability, and climate impact on risk assessment. Kapoor highlights the shift from a compliance-only mindset to a more resilient and risk-aware culture within organizations.

Key highlights:

  • The Importance of July in Risk Management
  • AI’s Role in GRC
  • Emerging Risks and AI Applications
  • Counseling Boards on Risk Management
  • Top Concerns for the Second Half of 2025
  • Evolving Role of Compliance and Risk Officers

Resources:

MetricStream Website and on LinkedIn

Gaurav Kapoor on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Compliance Tip of the Day

Compliance Tip of the Day – Costs and Benefits of AI


Today, we begin a five-part series on using AI in a best practices compliance program by considering the costs and benefits of using AI.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
10 For 10

10 For 10: Top Compliance Stories For the Week Ending August 16, 2025

Welcome to 10 For 10, the podcast that brings you the week’s Top 10 compliance stories in one podcast each week. Tom Fox, the Voice of Compliance, brings to you, the compliance professional, the compliance stories you need to be aware of to end your busy week. Sit back, and in 10 minutes, hear about the stories every compliance professional should be aware of from the prior week. Every Saturday, 10 For 10 highlights the most important news, insights, and analysis for the compliance professional, all curated by the Voice of Compliance, Tom Fox. Get your weekly fill of compliance stories with 10 For 10, a podcast produced by the Compliance Podcast Network.

  • Attorney-client privilege is protected in the FirstEnergy litigation. (Reuters)
  • BCG’s Gaza project is so offensive that 4 staffers quit the company. (FT)
  • Albania (of all countries) turns to AI to fight corruption. (Politico)
  • 5th ex-Peruvian President jailed for corruption. (Al Jazeera)
  • The human cost of corruption. (Just Security)
  • The bribe-based bill remains the law in Ohio. (Brennan Center for Justice)
  • Musk threatens to sue over bad Apple App Store rankings. (FT)
  • South Korea’s ex-First Lady arrested for corruption. (NYT)
  • CZ pushes for a pardon. (NYT)
  • The Pistons’ Malik Beasley is facing gambling allegations. (NYPost)

You can check out the Daily Compliance News for four curated compliance- and ethics-related stories each day, here.

Connect with Tom 

Instagram

Facebook

YouTube

Twitter

LinkedIn

You can purchase a copy of my new book, Upping Your Game, on Amazon.com

Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 57 – The Tom on His Highhorse Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

  • Thoughts on the Compliance Job Market (Radical Compliance)
  • A Shadow AI Crisis Is Brewing in the GC’s Office (Corporate Compliance Insights)
  • I built a company that broke people. Now I’m choosing capitalism with love (Fast Company)
  • European Union: Specific regulation of technological impact on the workforce
  • Trump is now the CEO of all US corps. (WSJ)
  • Trump tells Intel to fire CEO. Are you next? (WSJ)
  • Uber picked business over customer safety. (NYT)
  • 9th Circuit upholds SEC gag rule. (Reuters)
  • To Regulate or Not To Regulate. (Bloomberg)
  • Florida man posed as a flight attendant to score dozens of free flights. (Fox 35 Orlando)

Resources:

Kristy Grant-Hart on LinkedIn

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn