Categories
AI Today in 5

AI Today in 5: August 20, 2025, The In Pursuit of Super-Intelligence Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world: compliance, ethics, risk management, leadership, or general interest.

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Great Women in Compliance

Great Women in Compliance – Building Strategic and Effective Risk Assessments

In this episode of the Great Women in Compliance podcast, co-hosts Hemma Lomax and Lisa Fine discuss the breadth and depth of effective risk assessments with guests Jisha Dymond and Lisa Beth Lentini Walker. Jisha and Lisa Beth have both worked in highly regulated and high-profile industries. Jisha most recently served as Chief Ethics & Compliance Officer at OneTrust, and Lisa Beth is currently the Deputy General Counsel, Corporate Legal, and Assistant Secretary at Marqeta, as well as the CEO and Founder of Lumen Worldwide Endeavors.

They discuss various aspects of assessing risk and how best to align your compliance risk assessments with the needs of other functions to develop strategic and holistic approaches that influence organizational direction. The discussion touches on the importance of cross-functional collaboration, the effective use of data and AI, and practical steps for implementing comprehensive risk management processes.

Key highlights include:

  • Holistic vs. Compliance Risk Assessments
  • Engaging Key Stakeholders
  • Building Trust and Cross-functional Collaboration 
  • Data-Driven Risk Assessments
  • The Role of AI in Risk Management

Categories
The Hill Country Podcast

The Hill Country Podcast – Discovering Blue Roan Coffee with James Love

Welcome to the award-winning The Hill Country Podcast. The Texas Hill Country is one of the most beautiful places on earth. In this podcast, Hill Country resident Tom Fox visits with the people and organizations that make this one of the most unique areas of Texas. This week, Tom welcomes James Love, the General Manager of Blue Roan Coffee in Kerrville, TX.

They discuss James’s professional journey from Portland to becoming involved in the coffee industry, and the unique aspects that set Blue Roan Coffee apart, including their commitment to organic products and customer health. James shares insights on coffee bean quality, roasting processes, and the importance of sourcing beans from smallholder farms. They also explore the challenges and plans for Blue Roan Coffee, and the impact of creating a quality, health-focused product in the community.

Key highlights:

  • The Art and Science of Coffee Roasting
  • Challenges and Innovations in Coffee Production
  • Blue Roan Coffee’s Unique Offerings
  • The Kerrville Coffee Scene

Resources:

Blue Roan Coffee

Blue Roan on Instagram

Other Hill Country Network Podcasts

Hill Country Authors Podcast

Hill Country Artists Podcast

Texas Hill Country Podcast Network

Artwork

Nancy Huffman Fine Art

Categories
Compliance Into the Weeds

Compliance into the Weeds: The Dark Side of AI in Employee Training

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore a subject more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss emerging concerns surrounding AI, particularly ChatGPT, in the realm of employee training.

Their discussion centers on the use of AI, specifically ChatGPT’s newest ‘Agent Mode’, to complete compliance training courses on behalf of employees, which could enable them to cheat. They debate the implications of this capability, touching on the historical context of cheating, the effectiveness of current training methods, and the need for new internal controls and strategies to adapt to these technological advancements. They also contemplate the future of training, which may evolve into AI-driven bots that provide on-the-spot, micro-learning modules. The episode encourages compliance officers to thoroughly vet their training vendors to ensure measures are in place to prevent AI-enabled cheating.

Key highlights:

  • The Dark Side of AI in Compliance Training
  • AI’s Impact on Employee Training
  • AI’s Role in Training and Compliance
  • Future of AI in Corporate Training
  • Challenges and Considerations

Resources:

Matt Kelly in Radical Compliance

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred a Davey, Communicator, and W3 Awards for podcast excellence.

Categories
Compliance Tip of the Day

Compliance Tip of the Day – Trust and Verify

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we continue our five-part series on using AI in a best practices compliance program by considering how to trust and verify your use of AI in your compliance program.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
Daily Compliance News

Daily Compliance News: August 20, 2025, The Boss is Back Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • UK drops request for Apple data. (WSJ)
  • Former Mali PM arrested for corruption. (ABCNews)
  • Advice of counsel without the advice. (Reuters)
  • No More Mr. Nice Guy: The boss is back. (FT)

You can donate to flood relief for victims of the Kerr County flooding by going to the Hill Country Flood Relief here.

Categories
Blog

Trust and Verify: How Compliance Can Harness AI Agents Safely

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, I have created a one-page checklist for each blog post that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

When we think of “trust” in compliance, our minds usually go to whistleblowers, employees, or third parties. But increasingly, the question of trust must extend to a new category of actors: AI agents.

As Blair Levin and Larry Downes explain in their provocative Harvard Business Review piece, titled “Can AI Agents Be Trusted?”, AI agents are not just smarter chatbots. They are software systems that can collect data, make decisions, and even act autonomously based on rules and priorities. For compliance professionals, this changes the game. If AI agents can act on our behalf, can they also be trusted to uphold compliance principles?

The answer is yes, but only if we design and monitor them with the same rigor that we apply to employees, third parties, and business partners. Today, we look at five key takeaways from their article to guide compliance professionals in building AI agents into trustworthy components of their programs.

1. Trust Requires Oversight, Just as with Human Agents

The article makes a simple but powerful analogy: think of an AI agent the way you would think of an employee or contractor. Before delegating sensitive responsibilities, you conduct background checks, put controls in place, and possibly even require bonding. The same must hold for AI.

For compliance, this means creating oversight structures before deploying agents into live workflows. If your compliance AI assistant can monitor transactions for red flags, you must ensure that a human compliance officer reviews its outputs. If it can escalate potential whistleblower complaints, you must validate that escalation logic against regulatory requirements.

AI oversight also means testing for vulnerabilities. As Levin and Downes note, AI agents are susceptible to hacking, manipulation, and even misinformation. Compliance should require penetration testing of any agent integrated into company systems, just as IT would test network defenses.

Trust is never blind in compliance. It is built on verification, monitoring, and accountability. AI agents can and should be trusted, but only when they operate within a compliance framework that mirrors the controls we already use for human agents.

2. Recognize and Manage Bias and Conflicts of Interest

One of the major risks highlighted in the article is bias, whether introduced by marketers, advertisers, or flawed training data. Just as a conflicted employee can steer decisions for personal gain, an AI agent can be subtly manipulated to favor sponsors, advertisers, or even certain viewpoints.

For compliance professionals, this should raise alarms. Imagine an AI agent used for third-party due diligence. If biased data shapes its recommendations, you could end up onboarding a high-risk vendor while rejecting a low-risk one. Worse, if regulators discover that your system relied on biased algorithms, you’ll face serious questions about program effectiveness.

The solution is conflict-of-interest monitoring for AI. Just as employees must disclose outside interests, AI agents should be tested and audited for hidden preferences. Compliance should insist on transparency from vendors about training data sources and sponsorship arrangements. In some cases, contracts with AI providers may need explicit clauses guaranteeing independence from commercial influence.
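What might "testing and auditing for hidden preferences" look like in practice? One common starting point is a simple disparity check on the agent's own decision logs: compare outcome rates across comparable groups and flag the system for human review when they diverge. The sketch below is a minimal, hypothetical illustration; the log data, group labels, and the 10-point threshold are all invented for the example, not drawn from any real tool.

```python
from collections import Counter

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs pulled from agent logs.
    Returns the approval rate per group."""
    totals, approved = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical log of an AI due-diligence agent's vendor-clearance decisions
log = ([("region_a", True)] * 45 + [("region_a", False)] * 5
       + [("region_b", True)] * 30 + [("region_b", False)] * 20)

rates = approval_rates(log)
# Flag for human audit if approval rates diverge by more than 10 points
disparity = max(rates.values()) - min(rates.values())
if disparity > 0.10:
    print(f"Audit warranted: disparity of {disparity:.0%} across groups")
```

A check like this does not prove bias, it only surfaces a pattern worth investigating; the compliance follow-up is the conversation with the vendor about training data and sponsorship arrangements described above.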

Compliance has always been about spotting and mitigating conflicts. In the age of AI, that vigilance must extend to our digital agents. Only then can we claim that our programs are fair, impartial, and defensible.

3. Treat AI Agents as Fiduciaries of Compliance

Perhaps the most compelling insight from Levin and Downes is that AI agents should be treated as fiduciaries. Just as lawyers, trustees, and board members owe a heightened duty of care to their clients, AI agents entrusted with compliance responsibilities must be designed and governed under similar standards.

For compliance officers, this concept aligns directly with DOJ expectations. The Evaluation of Corporate Compliance Programs (2024 ECCP) emphasizes accountability, transparency, and independence. By treating AI agents as fiduciaries, compliance leaders can extend these principles to technology.

What does fiduciary duty look like in practice?

  • Obedience: AI must follow company policies and regulatory standards.
  • Loyalty: AI must prioritize the company’s compliance objectives over any hidden commercial interests.
  • Confidentiality: AI must protect sensitive compliance data from leaks or misuse.
  • Accountability: AI actions must be traceable, with clear logs and audit trails.
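
The accountability duty, in particular, lends itself to a concrete control. One way to make an agent's actions traceable is a hash-chained audit log, where each entry commits to the one before it, so any after-the-fact edit is detectable. The sketch below is a minimal illustration of that idea; the class name, fields, and sample entries are hypothetical, not part of any specific product.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of AI agent actions; each entry is hash-chained
    to the previous one so tampering is detectable on verification."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, agent, action, detail):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        # Hash the canonical JSON of the entry, which includes prev_hash,
        # chaining this record to everything before it.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain; any altered entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("dd-agent", "vendor_screened", {"vendor": "Acme Ltd", "risk": "low"})
trail.record("dd-agent", "escalation", {"reason": "sanctions hit"})
assert trail.verify()
# Quietly editing a past entry now breaks the chain:
trail.entries[0]["detail"]["risk"] = "high"
assert not trail.verify()
```

The point is not this particular implementation but the property it demonstrates: an audit trail a regulator can trust is one the agent's operators cannot silently rewrite.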

This fiduciary framing provides compliance professionals with a powerful tool. It not only reassures stakeholders that AI can be trusted, but it also sets a benchmark that regulators can understand and evaluate. In short, fiduciary AI is defensible AI.

4. Build Market and Insurance-Based Safeguards

The article notes that beyond regulation, market mechanisms such as insurance and independent oversight will be critical to ensuring AI trustworthiness. For compliance leaders, this presents both a risk management strategy and an opportunity.

Just as identity theft insurance evolved alongside online banking, AI liability insurance will likely become a standard corporate requirement. Compliance officers should begin engaging with insurers to explore coverage for AI-related risks, such as data leaks, wrongful denials of due diligence clearance, or biased decision-making.

Equally important are third-party oversight tools. The article envisions AI “credit bureaus” that could audit agent behavior, set decision thresholds, or freeze activity when risks escalate. For compliance, such independent monitoring could provide an external layer of assurance that your AI systems are behaving as intended.

The takeaway is clear: do not rely solely on internal controls. Pair them with market-based safeguards and external verification. Doing so not only strengthens trust in AI agents but also demonstrates to regulators that your program embraces both proactive and independent oversight.

5. Design for Data Security and Local Control

Finally, Levin and Downes stress the importance of keeping decisions local; that is, ensuring sensitive data stays on company-controlled devices and servers, rather than in external clouds. For compliance professionals, this echoes a familiar principle: control the data, control the risk.

Agentic AI, by definition, processes vast amounts of sensitive information. If compliance agents are reviewing hotline reports, transaction monitoring data, or due diligence files, any data leakage could be catastrophic. That’s why strong encryption, local processing, and secure enclaves are essential.

Compliance officers should demand that AI vendors support:

  • On-device or private cloud processing for sensitive tasks.
  • Encryption of all data in transit and at rest.
  • Independent verification of security claims by external auditors.
  • Full disclosure of sponsorships, promotions, and paid influences.

By designing AI agents with local control and transparency, compliance teams can build systems that are both effective and trustworthy. Data security is not just an IT concern; it is a compliance imperative.

Trust, But Never Blindly

AI agents hold immense potential for compliance programs. They can streamline monitoring, accelerate due diligence, and support real-time risk management. But as Levin and Downes remind us, they must also be carefully governed to prevent bias, manipulation, and misuse.

For compliance leaders, the path forward is to treat AI like any other agent (or, channeling your inner Ronald Reagan, trust but verify). With oversight, fiduciary framing, market safeguards, and strong data controls, AI can become a trusted partner in compliance—one that strengthens, rather than weakens, the ethical fabric of the organization.