AI Today in 5

AI Today in 5: September 9, 2025, The Investor Frenzy Continues Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. We consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest related to AI.

Top AI stories:

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Innovation in Compliance

Innovation in Compliance – Cybersecurity Challenges and Solutions: An In-Depth Interview with Robert Meyers

Innovation comes in many areas, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, Tom Fox interviews Robert Meyers, a cybersecurity and privacy expert with over 30 years of experience.

Meyers shares his journey from starting in IT to becoming a prominent figure in cybersecurity, privacy, and M&A security. He recounts the evolution of cybersecurity from the 1980s to the present day, highlighting key lessons learned along the way. He discusses the philosophical divide between U.S. and European attitudes toward data privacy, the importance of a cross-functional approach to cybersecurity and privacy within companies, and how emerging technologies like agentic AI are reshaping the industry. He also shares insights from his new book, ‘Privacy Snippets for the Cybersecurity Professional,’ aimed at helping professionals bridge the gap between cybersecurity and privacy. Additionally, Meyers’s passion for Comic-Con offers a unique perspective on how creativity and community engagement can inform and enrich professional practices.

Key highlights:

  • Robert Meyers’ Professional Background
  • Early Cybersecurity Challenges
  • Evolution of Privacy and Security
  • Roles and Responsibilities in Cybersecurity
  • Agentic AI and Future Challenges
  • Comic-Con and Personal Interests
  • Advice for Aspiring Professionals

Resources:

Privacy Snippets for the Cybersecurity Professional on Amazon

Robert Meyers’ Profile on Amazon

Robert Meyers on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

AI Today in 5

AI Today in 5: August 26, 2025, The Will AI Take Your Culture Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. We consider five stories from the business world, compliance, ethics, risk management, leadership, or general interest about AI.

Top AI compliance stories:

  • AI simulates 1000 years of weather. (Phys Org)
  • Agentic AI for workflow. (FT)
  • Navigating the challenges of responsible AI. (Forbes)
  • 20 ways to design AI GRC plans that work. (Forbes)
  • Is AI coming for culture? (New Yorker)

For more information on the use of AI in compliance programs, see my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Compliance Into the Weeds

Compliance into the Weeds: Agentic Misalignment and AI Ethics: Analyzing AI Behavior Under Pressure

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore a subject more fully. Seeking insightful perspectives on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss a recent Anthropic report that highlights “agentic misalignment in AI systems.”

The discussion addresses the unsettling, independent, and unethical behaviors exhibited by AI systems in extreme scenarios. The conversation explores the implications for corporate risk management, AI governance, and compliance, drawing parallels between AI behavior and human behavior using concepts such as the fraud triangle. The episode also explores how traditional anti-fraud mechanisms may be adapted for monitoring AI agents while reflecting on lessons from science fiction portrayals of AI ethics and risks.

Key highlights:

  • AI’s Unethical Behaviors
  • Comparing AI to Human Behavior
  • Fraud Triangle, the Anti-Fraud Triangle, and AI
  • Science Fiction Parallels

Resources:

Matt Kelly in Radical Compliance 

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred the Davey, Communicator, and W3 Awards for podcast excellence.

Everything Compliance

Everything Compliance: Episode 152, The Season Opener Edition

Welcome to this edition of the award-winning Everything Compliance. In this episode, Matt Kelly, Jonathan Armstrong, Karen Moore, and Tom Fox, the Compliance Evangelist, will appear as panelists.

1. Karen Moore examines EU changes in reporting on ESG and CCCD and rants about the anxiety caused by the Trump Administration.

2. Matt Kelly reviews the off-channel messaging scandal. He rants about the Trevor Milton pardon.

3. Jonathan Armstrong takes a deep dive into Agentic AI and its implications for compliance. He shouts out to Russell Brand for unmasking British actor Penelope Keith as JFK’s assassin.

4. Tom Fox examines the world’s response to Trump’s suspension of FCPA enforcement. He shouts out to Opening Day of the 2025 MLB season and highlights his hometown heroes, the Houston Astros.

The members of Everything Compliance are:

The host, producer, and ranter (and sometimes panelist) of Everything Compliance is Tom Fox, the Voice of Compliance. He can be reached at tfox@tfoxlaw.com. Everything Compliance is a part of the award-winning Compliance Podcast Network.

TechLaw10

TechLaw10: Agentic AI – What Is It & What Are The Risks?

In this film, Punter Southall Law’s Jonathan Armstrong discusses Agentic AI with Professor Eric Sinrod from his home in California. This is episode 291 in the popular TechLaw10 series.

The podcast includes top tips to help avoid issues when using Agentic AI. Jonathan & Eric discuss various aspects of the law’s impact on Agentic AI, including:

  • data location issues after regulatory activity against DeepSeek
  • transparency
  • due diligence
  • decision-making in light of a recent ECJ decision
  • the impact of the EU AI Act
  • patent risk & other disclosure risks
  • bias & discrimination
  • existing laws like sanctions, procurement & IP

Jonathan also looks at a 3-step plan to reduce risk:

  • understand the tech
  • look at rule setting for agents
  • consider a human in the loop, at least initially

Jonathan talked about the EU AI Act. There are FAQs on that here: The EU Artificial Intelligence Act. There is also a glossary of AI terms here: EU AI Act Glossary: Key terms & acronyms.

Jonathan discusses a recent ECJ judgment involving automated decision-making, and Eric discusses a case involving a hearing-impaired job applicant.

You can learn more about Eric at Duane Morris LLP: https://www.duanemorris.com/attorneys/ericjsinrod.html and about Jonathan at Punter Southall Law: https://puntersouthall.law/about-us/jonathan-armstrong/

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/

Facebook: https://www.facebook.com/compliancepodcastnetwork/

YouTube: https://www.youtube.com/@CompliancePodcastNetwork

Twitter: https://twitter.com/tfoxlaw

Instagram: https://www.instagram.com/voiceofcompliance/

Website: https://compliancepodcastnetwork.net/

Compliance Tip of the Day

Compliance Tip of the Day – How Compliance Can Leverage Agentic AI Systems

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we continue our exploration of Agentic AI by considering how compliance can leverage Agentic AI systems.

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.

Compliance Tip of the Day

Compliance Tip of the Day – Introduction to Agentic AI for Compliance

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we begin a look at Agentic AI and how it can be used in compliance.

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.

Blog

Compliance and Agentic AI – Building Trust, Part 3

The rise of agentic artificial intelligence (AI) is one of the most transformative developments in recent memory, particularly for legal and compliance professionals. No longer limited to passive interactions or answering questions, AI has evolved into a tool capable of reasoning, making decisions within pre-defined parameters, and taking actions autonomously. As businesses explore the potential of these technologies, compliance professionals find themselves at the forefront of ensuring that this innovation occurs within the guardrails of trust, privacy, and ethical accountability.

In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” author Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed the role of AI agents today and in the future. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance. Over this three-part blog series, I have explored what Agentic AI systems are and how the compliance profession can use them. Today, we conclude by looking at key issues compliance will face, including trust, privacy, and ethical accountability.

Trust is the bedrock upon which all successful technology implementations are built, and when it comes to agentic AI, trust is not just a nice-to-have; it is non-negotiable. For compliance professionals, fostering trust in AI systems is a dual challenge: balancing the excitement of innovation with the ethical and regulatory responsibilities that come with it. Without trust, even the most sophisticated AI systems can fail to deliver their promised value, exposing organizations to legal, reputational, and operational risks.

The cornerstone of this trust lies in three critical areas: data integrity, transparency and explainability, and regulatory alignment.

Data Integrity: Building AI on a Solid Foundation

AI agents are only as reliable as the data they process. If the inputs are flawed, whether through bias, inaccuracy, or incompleteness, the outputs will follow suit. Compliance professionals must ensure the organization's data ecosystem is robust, well curated, and reflective of organizational values. Steps a compliance professional can take to strengthen data integrity include the following:

  1. Centralize Data Management. Fragmented data sources increase the risk of inconsistencies. Establish unified systems that pool data into a single source of truth, ensuring consistency across all AI-driven processes.
  2. Validate Inputs and Outputs. Build systems that validate data inputs for accuracy and continuously monitor AI outputs. This safeguards against deviations or unintended consequences as the AI evolves.
  3. Eliminate Bias. Conduct bias audits on datasets to ensure fair and equitable outcomes. For example, compliance teams using AI to monitor transactions for fraud must ensure that the data does not unfairly target specific regions or demographics.
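
The validation and bias-audit steps above can be sketched in a few lines of code. This is a minimal, hypothetical illustration; the record fields (`amount`, `region`, `flagged`) and the idea of comparing flag rates across regions are assumptions for demonstration, not a prescribed schema.

```python
from collections import Counter

# Hypothetical sketch: check a transaction dataset for completeness and for
# uneven flag rates across regions. Field names are illustrative assumptions.
def audit_dataset(records, required_fields=("amount", "region", "flagged")):
    incomplete = sum(
        1 for r in records if any(r.get(f) is None for f in required_fields)
    )
    totals, flags = Counter(), Counter()
    for r in records:
        if r.get("region") is not None:
            totals[r["region"]] += 1
            if r.get("flagged"):
                flags[r["region"]] += 1
    # Flag rate per region; a large disparity warrants a closer bias review.
    rates = {region: flags[region] / totals[region] for region in totals}
    return {"incomplete_records": incomplete, "flag_rate_by_region": rates}

sample = [
    {"amount": 100, "region": "EU", "flagged": False},
    {"amount": 9900, "region": "APAC", "flagged": True},
    {"amount": None, "region": "EU", "flagged": False},
]
print(audit_dataset(sample))
```

A real program would run checks like these on every data refresh and route any disparities to a human reviewer rather than treating them as proof of bias.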

When compliance professionals champion high-quality, unbiased, and unified data, they provide a strong foundation for building trust in AI systems.

Transparency and Explainability: Demystifying the Black Box

One of the most common concerns about AI, particularly agentic AI, is its black-box quality. How did the system arrive at a specific decision? Was it a fair decision? Could it have been influenced by flawed data or programming? Transparency and explainability are key to addressing these questions. For compliance professionals, the goal is to ensure that AI decisions are understandable and defensible. Regulators, employees, and customers will demand to know how AI systems operate, especially when decisions impact them directly. A compliance function can prioritize transparency using the following strategies:

  1. Document Decision-Making Processes. AI systems must be designed to log their decision-making rationale. This documentation can be a critical audit trail during internal reviews or regulatory inquiries.
  2. Promote Explainable AI. Collaborate with IT and AI teams to prioritize explainability, even if it means sacrificing some degree of complexity. The ability to explain why an AI flagged a transaction or how it recommended a course of action builds confidence among stakeholders.
  3. Train Stakeholders. Ensure that key stakeholders understand the basics of how the AI system operates, its limitations, and when human oversight is required.
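
As a concrete illustration of the audit-trail idea in step 1, the sketch below logs each AI decision with its inputs and stated rationale as an append-only series of JSON records. The function and field names are hypothetical, not a reference to any particular AI platform.

```python
import datetime
import json

# Hypothetical sketch: record each AI decision with its inputs and stated
# rationale so reviewers or regulators can later reconstruct why it acted.
def log_decision(decision, rationale, inputs, log):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "rationale": rationale,
        "inputs": inputs,
    }
    log.append(json.dumps(entry))  # append-only JSON lines form the audit trail
    return entry

audit_log = []
entry = log_decision(
    decision="flag_transaction",
    rationale="payment roughly four times this vendor's 90-day average",
    inputs={"vendor_id": "V-1001", "amount": 48000},
    log=audit_log,
)
print(entry["decision"])
```

The design choice that matters here is that the rationale is captured at decision time, in the decision's own words, rather than reconstructed after the fact.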

Transparency and explainability are not just technical features; they are trust-building tools. Compliance professionals who advocate for these principles will strengthen stakeholder confidence in AI systems.

Regulatory Alignment: Staying Ahead of the Curve

As Agentic AI continues to evolve, so will the regulatory landscape. Policymakers worldwide are introducing AI-specific regulations, such as the EU Artificial Intelligence Act or Colorado’s state-level Consumer Protections for Artificial Intelligence. These frameworks ensure that AI systems operate ethically, securely, and transparently. For compliance professionals, this represents both a challenge and an opportunity. 

  1. Embed Privacy-by-Design Principles. Incorporate data privacy protections at every stage of AI development, ensuring compliance with laws like GDPR, CCPA, and beyond.
  2. Monitor Emerging Regulations. Monitor evolving AI regulations and assess how they impact your organization. Assign dedicated resources to regulatory monitoring to stay ahead of changes.
  3. Collaborate Across Functions. Work with legal, IT, and data governance teams to ensure AI systems meet or exceed regulatory standards from day one.

Compliance professionals have a unique role in translating complex regulatory requirements into actionable strategies. By embedding regulatory alignment into AI systems, they help their organizations avoid legal pitfalls and foster long-term trust.

Building Ethical Guardrails: The Compass for Responsible AI 

Trust in AI is not just about compliance; it is also about ethics. The responsible adoption of agentic AI hinges on establishing ethical guardrails that ensure innovation does not come at the expense of integrity. These guardrails serve as both a compass and a safety net, guiding the organization as it navigates the complexities of AI deployment. You should employ several key ethical guardrails.

  1. Transparency in Decision-Making. AI systems must document and communicate their decision-making processes. This ensures that humans can intervene when needed.
  2. Risk Mitigation. Conduct comprehensive risk assessments for all AI use cases, identifying vulnerabilities and implementing safeguards to address them.
  3. Human Escalation Pathways. Define clear parameters for when and how human oversight is required. Even the most advanced AI systems should not operate entirely without human involvement.
  4. Privacy Protections. Privacy-by-design principles should be central to every AI deployment, ensuring compliance with data protection laws and safeguarding customer trust.

By championing ethical AI practices, compliance professionals can help their organizations harness the power of agentic AI while mitigating its risks.

Balancing Innovation with Compliance: A Strategic Opportunity

The perception of compliance as a business blocker is outdated. Agentic AI allows compliance teams to position themselves as enablers of innovation. Compliance professionals can enhance business outcomes and stakeholder trust by guiding organizations to adopt AI responsibly and strategically. There are multiple steps a corporate compliance function can take and inculcate across the organization.

  1. Educate Your Team. Develop a plan to increase your team's understanding of agentic AI, and foster cross-functional collaboration between compliance, IT, and business units to ensure alignment.
  2. Shift the Mindset. Move beyond asking, "Is this legal?" to asking, "How can we do this responsibly?" This positions compliance as a driver of ethical innovation.
  3. Audit Your Data Ecosystem. Conduct a thorough review of your organization’s data sources, addressing inaccuracies and ensuring readiness for AI processing.
  4. Update Policies. Revise acceptable use policies to address the unique risks of agentic AI, ensuring alignment with organizational values and emerging regulations.
  5. Prioritize Trust. Without definitive laws, meeting or exceeding customer privacy and security expectations can be a competitive advantage.

The Path Forward: Trust as a Strategic Asset

Adopting Agentic AI systems marks a transformative moment for compliance professionals and the corporate compliance function. By embedding trust into every aspect of AI deployment through data integrity, transparency, regulatory alignment, and ethical guardrails, compliance teams can help their organizations navigate this new era and thrive in it. By championing trust, compliance professionals can become strategic partners in their organizations’ AI journeys, proving that ethics and innovation are not opposing forces; they are complementary pillars of success. As always, compliance begins with trust. In the Agentic AI era, trust is not just foundational but transformational.

The rise of AI is not just a technological shift; it’s a cultural and ethical one. It’s an opportunity for compliance professionals to redefine their roles, demonstrating that trust and innovation coexist. In this new frontier, the organizations that strike the right balance between trust, privacy, and compliance will succeed and set the standard for the entire industry.  As Niles aptly puts it, this is not just about adopting new tools but transforming organizations’ operations. And in that transformation lies the promise of a more efficient, resilient, and ethical future.

Blog

How Compliance Can Leverage Agentic AI Systems, Part 2

Agentic AI systems, with their unique ability to operate autonomously, present a game-changing opportunity for corporate compliance functions. In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed AI agents' roles. Today, therefore, we enter the world of agentic AI systems. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance.

Unlike traditional chatbots or large language models that are limited to providing static responses, Agentic AI systems can analyze complex data, adapt to new information, and take actions based on predefined parameters. This capability can revolutionize compliance operations by introducing efficiencies, enhancing decision-making, and improving the organization's ability to anticipate and respond to risks. However, leveraging these systems effectively requires compliance professionals to approach them thoughtfully and strategically. Over this three-part blog series, I will explore what Agentic AI systems are, how they can be used in compliance, and how to adopt them going forward. In Part 2, we look at how compliance can use Agentic AI systems.

Understanding the Potential of Agentic AI in Compliance

Agentic AI is distinguished by its autonomy. These systems do not simply respond to queries; they execute tasks, provide actionable insights, and adapt to changing circumstances with minimal human intervention. For compliance professionals, this shift represents an opportunity to go beyond mere monitoring and detection. Instead, compliance teams can integrate AI agents into their workflows to proactively manage risks, enhance internal processes, and improve the organization's overall compliance posture. Here are some specific ways agentic AI systems can be applied within the compliance function.

Automating Routine Tasks

Many compliance activities are repetitive and resource-intensive, leading to inefficiencies and bottlenecks. Agentic AI can streamline these processes by handling internal inquiries. AI agents can respond to frequently asked compliance questions from employees, such as clarifications on company policies, reporting obligations, or training requirements. This reduces the workload on compliance officers while ensuring consistent and accurate responses.

Agentic AI can assist in managing external counsel and external consultant relationships. For companies working with multiple external legal advisors, Agentic AI can automate the tracking of legal expenses, performance metrics, and case statuses, providing a centralized view of outside counsel activities. Finally, Agentic AI can be a game-changer in monitoring transactions on a real-time and ongoing basis. Agentic AI systems can autonomously review large volumes of financial transactions to identify red flags, such as unusual payment patterns or potential violations of anti-corruption laws.
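
A red-flag screen of the kind described above can be as simple as a handful of explicit rules layered under the AI. The sketch below is illustrative only; the thresholds and the high-risk jurisdiction list are placeholders that a real program would calibrate to its own risk assessment.

```python
# Hypothetical country codes standing in for a real high-risk jurisdiction list.
HIGH_RISK_COUNTRIES = {"XX", "YY"}

def screen_transaction(txn, history_avg):
    """Return a list of red flags for one transaction (illustrative rules only)."""
    flags = []
    # Round-dollar amounts at or above a reporting threshold are a classic flag.
    if txn["amount"] >= 10_000 and txn["amount"] % 1_000 == 0:
        flags.append("round-dollar amount at or above reporting threshold")
    # Payments far above a counterparty's historical average merit review.
    if history_avg and txn["amount"] > 4 * history_avg:
        flags.append("amount far above counterparty's historical average")
    if txn.get("country") in HIGH_RISK_COUNTRIES:
        flags.append("counterparty in high-risk jurisdiction")
    return flags

print(screen_transaction({"amount": 20_000, "country": "XX"}, history_avg=3_000))
```

An agentic system would apply rules like these continuously across the transaction stream and escalate flagged items to a human analyst rather than acting on them unilaterally.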

Enhancing Decision-Making

Compliance often involves making decisions based on a wide array of data, from regulatory updates to internal audit findings. Agentic AI can enhance this process by providing real-time insights. It can analyze data across the organization to identify emerging risks, such as changes in geopolitical conditions or new regulatory developments, and provide recommendations on how to address them.

Agentic AI can also help reduce human error. Agentic AI can help eliminate biases or oversight errors in compliance assessments, ensuring that decisions are more objective and accurate. It can also model the potential impact of regulatory changes or proposed business initiatives, allowing compliance teams to anticipate challenges and provide informed guidance to leadership.

Driving Resilience

The regulatory environment is constantly evolving under the second Trump Administration, and organizations must be able to adapt quickly. Agentic AI can help compliance teams stay ahead by monitoring regulatory changes. It can automatically track and analyze updates to laws and regulations worldwide, highlighting changes relevant to the organization and suggesting actions to ensure compliance.

One of the key areas the Department of Justice communicated back in 2020 and brought forward in the 2024 Update to the Evaluation of Corporate Compliance Programs (2024 Update) was the need for risk assessments as your risk changes. Agentic AI moves you to a level beyond this with proactive risk assessments. By analyzing internal and external data, AI systems can identify vulnerabilities and recommend preventive measures, reducing the likelihood of compliance failures. It can also assist in your incident and triage process by investigating the issue, gathering evidence, and suggesting corrective actions, enabling the organization to respond more effectively.

Managing the Risks of Autonomy

While the autonomy of agentic AI systems offers significant benefits, it also introduces new risks that compliance professionals must address. Poor-quality, incomplete, or biased data will still generate suboptimal results, producing incorrect or skewed outputs. Compliance teams must ensure that the data used by these systems is accurate, representative, and regularly updated.

The autonomous nature of Agentic AI means that organizations must establish clear guidelines for oversight and accountability. This includes defining when human intervention is required and ensuring that AI decisions align with organizational values and regulatory requirements. Finally, there are the dual areas of transparency and accountability. One of the most critical challenges with agentic AI is understanding how the system arrives at its decisions. Compliance teams must advocate for transparency in AI operations and develop mechanisms to explain decisions to regulators, stakeholders, and employees.

Steps for Compliance Teams to Adopt Agentic AI

To maximize the benefits of agentic AI while minimizing its risks, compliance teams should take the following steps:

  1. Assess Current Processes. Begin by identifying compliance activities that are repetitive, time-consuming, or prone to error. These are often the best candidates for automation through agentic AI.
  2. Pilot AI Applications. Before deploying AI across the entire compliance function, start with pilot projects in specific areas, such as policy monitoring or transaction reviews. Use pilots to test the system’s capabilities, identify potential risks, and gather feedback.
  3. Strengthen Data Governance. Agentic AI relies heavily on data, making strong data governance practices essential. This includes implementing controls to ensure data accuracy, managing access to sensitive information, and maintaining compliance with data privacy regulations.
  4. Develop Ethical Guidelines. Work with cross-functional teams to establish ethical guidelines for AI use. These guidelines should cover issues such as transparency, accountability, and acceptable use and should be reviewed regularly to reflect evolving best practices and regulatory standards.
  5. Provide Training and Support. Compliance teams must be equipped to work effectively with AI systems. Offer training to help team members understand how agentic AI works, how it can be used responsibly, and their role in overseeing its operations.
  6. Establish a Feedback Loop. Implement processes for continuously monitoring AI performance and gathering feedback from users. Use this information to refine the system and address any issues that arise.
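
The feedback loop in step 6 can start with a single metric: how often the AI's flags are confirmed by human analysts. A minimal, hypothetical sketch, assuming each flag receives a disposition label from a reviewer:

```python
# Hypothetical sketch: track what share of AI flags analysts confirm.
# A falling precision rate is a signal to retune the system.
def flag_precision(dispositions):
    if not dispositions:
        return None
    confirmed = sum(1 for d in dispositions if d == "confirmed")
    return confirmed / len(dispositions)

weekly = ["confirmed", "false_positive", "confirmed", "false_positive", "false_positive"]
print(f"flag precision this week: {flag_precision(weekly):.0%}")
```

Trended over time, even a simple ratio like this tells the compliance team whether the system is improving or drifting, and whether its thresholds need adjustment.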

Down the Road

Agentic AI systems represent a powerful tool for compliance functions, offering the potential to enhance efficiency, improve decision-making, and build resilience. However, these benefits can only be realized if the technology is implemented responsibly. Compliance professionals must balance leveraging AI’s capabilities and maintaining the trust, privacy, and ethical standards critical to the organization’s success.

By taking a proactive approach to understanding and adopting agentic AI, compliance teams can streamline their own operations and position themselves as strategic partners in driving the organization’s broader innovation and risk management efforts. The question is no longer whether compliance teams should embrace agentic AI but how they can do so responsibly and effectively.