
TechLaw10: Agentic AI – What Is It & What Are The Risks?

In this film, Punter Southall Law’s Jonathan Armstrong discusses Agentic AI with Professor Eric Sinrod from his home in California. This is episode 291 in the popular TechLaw10 series.

The podcast includes top tips to help avoid issues when using Agentic AI. Jonathan & Eric discuss various aspects of the law’s impact on Agentic AI, including:

  • data location issues after regulatory activity against DeepSeek
  • transparency
  • due diligence
  • decision-making in light of a recent ECJ decision
  • the impact of the EU AI Act
  • patent risk & other disclosure risks
  • bias & discrimination
  • existing laws like sanctions, procurement & IP

Jonathan also looks at a 3-step plan to reduce risk:

  • understand the tech
  • look at rule setting for agents
  • consider a human in the loop, at least initially

Jonathan talked about the EU AI Act. There are FAQs on that here: The EU Artificial Intelligence Act. There is also a glossary of AI terms here: EU AI Act Glossary: Key terms & acronyms.

Jonathan discusses a recent ECJ judgment involving automated decision-making, and Eric discusses a case involving a hearing-impaired job applicant.

You can learn more about Eric at Duane Morris LLP: https://www.duanemorris.com/attorneys/ericjsinrod.html and Jonathan here at Punter Southall Law: https://puntersouthall.law/about-us/jonathan-armstrong/

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/

Facebook: https://www.facebook.com/compliancepodcastnetwork/

YouTube: https://www.youtube.com/@CompliancePodcastNetwork

Twitter: https://twitter.com/tfoxlaw

Instagram: https://www.instagram.com/voiceofcompliance/

Website: https://compliancepodcastnetwork.net/


Compliance Tip of the Day – How Compliance Can Leverage Agentic AI Systems

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we continue our exploration of Agentic AI by considering how compliance can leverage Agentic AI systems.

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.


Compliance Tip of the Day – Introduction to Agentic AI for Compliance

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we begin a look at Agentic AI and how it can be used in compliance.

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.


Compliance and Agentic AI – Building Trust, Part 3

The rise of agentic artificial intelligence (AI) is one of the most transformative developments in recent memory, particularly for legal and compliance professionals. No longer limited to passive interactions or answering questions, AI has evolved into a tool capable of reasoning, making decisions within pre-defined parameters, and taking actions autonomously. As businesses explore the potential of these technologies, compliance professionals find themselves at the forefront of ensuring that this innovation occurs within the guardrails of trust, privacy, and ethical accountability.

In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” author Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed the role of AI agents today and in the future. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance. Over this three-part blog series, I have explored what Agentic AI systems are and how the compliance profession can use them. Today, we conclude by looking at key issues compliance will face, including trust, privacy, and ethical accountability.

Trust is the bedrock upon which all successful technology implementations are built, and when it comes to agentic AI, trust is not just a nice-to-have; it is non-negotiable. For compliance professionals, fostering trust in AI systems is a dual challenge: balancing the excitement of innovation with the ethical and regulatory responsibilities that come with it. Without trust, even the most sophisticated AI systems can fail to deliver their promised value, exposing organizations to legal, reputational, and operational risks.

The cornerstone of this trust lies in three critical areas: data integrity, transparency and explainability, and regulatory alignment.

Data Integrity: Building AI on a Solid Foundation

AI agents are only as reliable as the data they process. If the inputs are flawed, whether through bias, inaccuracy, or incompleteness, the outputs will follow suit. Compliance professionals must ensure the organization’s data ecosystem is robust, well curated, and reflective of organizational values. Steps a compliance professional can take to strengthen data integrity include the following:

  1. Centralize Data Management. Fragmented data sources increase the risk of inconsistencies. Establish unified systems that pool data into a single source of truth, ensuring consistency across all AI-driven processes.
  2. Validate Inputs and Outputs. Build systems that validate data inputs for accuracy and continuously monitor AI outputs. This safeguards against deviations or unintended consequences as the AI evolves.
  3. Eliminate Bias. Conduct bias audits on datasets to ensure fair and equitable outcomes. For example, compliance teams using AI to monitor transactions for fraud must ensure that the data does not unfairly target specific regions or demographics.

When compliance professionals champion high-quality, unbiased, and unified data, they provide a strong foundation for building trust in AI systems.
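
To make points 2 and 3 above concrete, here is a minimal sketch of the kind of lightweight checks a compliance team might run before and after data reaches an AI agent. It is hypothetical: the field names, required schema, and grouping key are assumptions for illustration, and a genuine bias audit would involve far more than a single disparity comparison.

```python
from collections import defaultdict

REQUIRED_FIELDS = {"transaction_id", "amount", "region", "date"}  # assumed schema

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality issues for one input record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "amount" in record and (record["amount"] is None or record["amount"] < 0):
        issues.append("amount is missing or negative")
    return issues

def flag_rate_by_group(records: list[dict], group_key: str = "region") -> dict:
    """Compare how often the AI flags records in each group.

    Large gaps between groups are a prompt for a human-led bias review,
    not proof of discrimination by themselves.
    """
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        flagged[r[group_key]] += 1 if r.get("ai_flagged") else 0
    return {group: flagged[group] / totals[group] for group in totals}

if __name__ == "__main__":
    sample = [
        {"transaction_id": 1, "amount": 120.0, "region": "EMEA", "date": "2025-01-03", "ai_flagged": True},
        {"transaction_id": 2, "amount": 80.0, "region": "APAC", "date": "2025-01-04", "ai_flagged": False},
        {"transaction_id": 3, "amount": -5.0, "region": "EMEA", "date": "2025-01-05", "ai_flagged": True},
    ]
    for rec in sample:
        print(rec["transaction_id"], validate_record(rec))
    print(flag_rate_by_group(sample))
```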

Transparency and Explainability: Demystifying the Black Box

One of the most common concerns about AI, particularly agentic AI, is its “black box” quality. How did the system arrive at a specific decision? Was it a fair decision? Could it have been influenced by flawed data or programming? Transparency and explainability are key to addressing these questions. For compliance professionals, the goal is to ensure that AI decisions are understandable and defensible. Regulators, employees, and customers will demand to know how AI systems operate, especially when decisions impact them directly. A compliance function can prioritize transparency using the following strategies:

  1. Document Decision-Making Processes. AI systems must be designed to log their decision-making rationale. This documentation can be a critical audit trail during internal reviews or regulatory inquiries.
  2. Promote Explainable AI. Collaborate with IT and AI teams to prioritize explainability, even if it means sacrificing some degree of complexity. The ability to explain why an AI flagged a transaction or how it recommended a course of action builds confidence among stakeholders.
  3. Train Stakeholders. Ensure that key stakeholders understand the basics of how the AI system operates, its limitations, and when human oversight is required.

Transparency and explainability are not just technical features; they are trust-building tools. Compliance professionals who advocate for these principles will strengthen stakeholder confidence in AI systems.
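
As one concrete illustration of point 1 above, the sketch below logs each agent decision, together with its rationale, to an append-only file so the reasoning can be produced later during an internal review or a regulatory inquiry. It is a hypothetical example rather than any platform’s API; the field names and the JSON-lines log format are assumptions.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_decision_log.jsonl")  # assumed location for the audit trail

def log_decision(agent: str, inputs: dict, decision: str, rationale: str,
                 requires_human_review: bool = False) -> dict:
    """Append one decision record, with its rationale, to an append-only log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "inputs": inputs,
        "decision": decision,
        "rationale": rationale,
        "requires_human_review": requires_human_review,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: an agent flags a payment and records why, so the reasoning can be
# retrieved later by auditors, regulators, or the compliance team itself.
log_decision(
    agent="transaction-review-agent",
    inputs={"transaction_id": "TX-1042", "amount": 49500, "country": "XY"},
    decision="flagged",
    rationale="Amount just below approval threshold and counterparty is a new vendor.",
    requires_human_review=True,
)
```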

Regulatory Alignment: Staying Ahead of the Curve

As Agentic AI continues to evolve, so will the regulatory landscape. Policymakers worldwide are introducing AI-specific regulations, such as the EU Artificial Intelligence Act or Colorado’s state-level Consumer Protections for Artificial Intelligence. These frameworks aim to ensure that AI systems operate ethically, securely, and transparently. For compliance professionals, this represents both a challenge and an opportunity. Steps to stay ahead include:

  1. Embed Privacy-by-Design Principles. Incorporate data privacy protections at every stage of AI development, ensuring compliance with laws like GDPR, CCPA, and beyond.
  2. Monitor Emerging Regulations. Track evolving AI regulations and assess how they impact your organization. Assign dedicated resources to regulatory monitoring to stay ahead of changes.
  3. Collaborate Across Functions. Work with legal, IT, and data governance teams to ensure AI systems meet or exceed regulatory standards from day one.

Compliance professionals have a unique role in translating complex regulatory requirements into actionable strategies. By embedding regulatory alignment into AI systems, they help their organizations avoid legal pitfalls and foster long-term trust.
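
One small, practical way to act on point 1 above is to minimize and mask personal data before it ever reaches an AI agent. The sketch below is illustrative only: the two regular expressions cover just emails and phone-like numbers, and a real privacy-by-design control would rest on a much fuller data-classification and redaction approach.

```python
import re

# Illustrative patterns only; real privacy-by-design controls go well beyond these.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Mask obvious personal identifiers before sending text to an AI agent."""
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    text = PHONE_RE.sub("[PHONE REDACTED]", text)
    return text

ticket = "Employee jane.doe@example.com (+1 415-555-0199) asked whether gifts over $100 must be pre-approved."
print(redact(ticket))
# -> "Employee [EMAIL REDACTED] ([PHONE REDACTED]) asked whether gifts over $100 must be pre-approved."
```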

Building Ethical Guardrails: The Compass for Responsible AI 

Trust in AI is not just about compliance; it is also about ethics. The responsible adoption of agentic AI hinges on establishing ethical guardrails that ensure innovation does not come at the expense of integrity. These guardrails serve as both a compass and a safety net, guiding the organization as it navigates the complexities of AI deployment. You should employ several key ethical guardrails.

  1. Transparency in Decision-Making. AI systems must document and communicate their decision-making processes. This ensures that humans can intervene when needed.
  2. Risk Mitigation. Conduct comprehensive risk assessments for all AI use cases, identifying vulnerabilities and implementing safeguards to address them.
  3. Human Escalation Pathways. Define clear parameters for when and how human oversight is required. Even the most advanced AI systems should not operate entirely without human involvement.
  4. Privacy Protections. Privacy-by-design principles should be central to every AI deployment, ensuring compliance with data protection laws and safeguarding customer trust.

By championing ethical AI practices, compliance professionals can help their organizations harness the power of agentic AI while mitigating its risks.
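
As a concrete illustration of guardrail 3, the sketch below expresses “when must a human step in” as a simple rule. The risk categories, monetary limit, and confidence floor are placeholder assumptions; in practice these parameters would come from your own policies and risk assessments.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    monetary_value: float    # value affected by the action, in USD (assumed unit)
    risk_category: str       # e.g. "sanctions", "privacy", "routine"
    model_confidence: float  # 0.0 - 1.0 self-reported confidence

# Assumed policy values; placeholders for illustration, not recommendations.
ALWAYS_ESCALATE = {"sanctions", "privacy"}
VALUE_LIMIT = 10_000
CONFIDENCE_FLOOR = 0.85

def needs_human_review(action: AgentAction) -> bool:
    """Return True if the agent must pause and escalate to a human reviewer."""
    if action.risk_category in ALWAYS_ESCALATE:
        return True
    if action.monetary_value > VALUE_LIMIT:
        return True
    if action.model_confidence < CONFIDENCE_FLOOR:
        return True
    return False

# Usage: the agent proposes an action; the guardrail decides whether it may proceed.
proposal = AgentAction("Auto-approve vendor payment", 12_000, "routine", 0.92)
print(needs_human_review(proposal))  # True -> route to a compliance officer
```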

Balancing Innovation with Compliance: A Strategic Opportunity

The perception of compliance as a business blocker is outdated. Agentic AI allows compliance teams to position themselves as enablers of innovation. Compliance professionals can enhance business outcomes and stakeholder trust by guiding organizations to adopt AI responsibly and strategically. There are multiple steps a corporate compliance function can take and embed across the organization:

  1. Educate Your Team. Develop a plan to increase your team’s understanding of agentic AI, and foster cross-functional collaboration between compliance, IT, and business units to ensure alignment.
  2. Shift the Mindset. Move beyond “Is this legal?” to ask, “How can we do this responsibly?” This positions compliance as a driver of ethical innovation.
  3. Audit Your Data Ecosystem. Conduct a thorough review of your organization’s data sources, addressing inaccuracies and ensuring readiness for AI processing.
  4. Update Policies. Revise acceptable use policies to address the unique risks of agentic AI, ensuring alignment with organizational values and emerging regulations.
  5. Prioritize Trust. In the absence of definitive laws, meeting or exceeding customer privacy and security expectations can be a competitive advantage.

The Path Forward: Trust as a Strategic Asset

Adopting Agentic AI systems marks a transformative moment for compliance professionals and the corporate compliance function. By embedding trust into every aspect of AI deployment through data integrity, transparency, regulatory alignment, and ethical guardrails, compliance teams can help their organizations navigate this new era and thrive in it. By championing trust, compliance professionals can become strategic partners in their organizations’ AI journeys, proving that ethics and innovation are not opposing forces; they are complementary pillars of success. As always, compliance begins with trust. In the Agentic AI era, trust is not just foundational but transformational.

The rise of AI is not just a technological shift; it’s a cultural and ethical one. It’s an opportunity for compliance professionals to redefine their roles, demonstrating that trust and innovation can coexist. In this new frontier, the organizations that strike the right balance between trust, privacy, and compliance will succeed and set the standard for the entire industry. As Niles aptly puts it, this is not just about adopting new tools but about transforming how organizations operate. And in that transformation lies the promise of a more efficient, resilient, and ethical future.


How Compliance Can Leverage Agentic AI Systems, Part 2

Agentic AI systems, with their unique ability to operate autonomously, present a game-changing opportunity for corporate compliance functions. In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed AI agents’ roles. Today, we therefore enter the world of agentic AI systems. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance.

Unlike traditional chatbots or large language models that are limited to providing static responses, Agentic AI systems can analyze complex data, adapt to new information, and take actions based on predefined parameters. This capability can revolutionize compliance operations by introducing efficiencies, enhancing decision-making, and improving the organization’s ability to anticipate and respond to risks. However, leveraging these systems effectively requires compliance professionals to approach them thoughtfully and strategically. Over this three-part blog series, I will explore what Agentic AI systems are, how they can be used in compliance, and the key issues compliance will face in adopting them. In Part 2, we look at how compliance can use Agentic AI systems.

Understanding the Potential of Agentic AI in Compliance

Agentic AI is distinguished by its autonomy. These systems do not simply respond to queries; they execute tasks, provide actionable insights, and adapt to changing circumstances with minimal human intervention. For compliance professionals, this shift represents an opportunity to go beyond mere monitoring and detection. Instead, compliance teams can integrate AI agents into their workflows to proactively manage risks, enhance internal processes, and improve the organization’s overall compliance posture. Here are some specific ways agentic AI systems can be applied within the compliance function.

Automating Routine Tasks. Many compliance activities are repetitive and resource-intensive, leading to inefficiencies and bottlenecks. Agentic AI can streamline these processes by handling internal inquiries. AI agents can respond to frequently asked compliance questions from employees, such as clarifications on company policies, reporting obligations, or training requirements. This reduces the workload on compliance officers while ensuring consistent and accurate responses.

Agentic AI can assist in managing external counsel and external consultant relationships. For companies working with multiple external legal advisors, Agentic AI can automate the tracking of legal expenses, performance metrics, and case statuses, providing a centralized view of outside counsel activities. Finally, Agentic AI can be a game-changer in monitoring transactions on a real-time and ongoing basis. Agentic AI systems can autonomously review large volumes of financial transactions to identify red flags, such as unusual payment patterns or potential violations of anti-corruption laws.
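
To make the transaction-monitoring use case slightly more concrete, here is a minimal, hypothetical sketch of the kind of rule an agent might apply when screening payments. The thresholds, red-flag terms, and jurisdiction codes are illustrative assumptions only; production monitoring would rely on far richer rules, models, and data.

```python
import statistics

# Illustrative red-flag terms; a real list would come from your AML/anti-corruption policy.
RED_FLAG_TERMS = {"consulting fee", "facilitation", "expedite", "gift"}

def screen_payment(payment: dict, history: list[float]) -> list[str]:
    """Return reasons (if any) why a payment deserves a closer look."""
    reasons = []
    amount = payment["amount"]
    memo = payment.get("memo", "").lower()

    # Unusual size relative to this vendor's payment history.
    if len(history) >= 5:
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        if stdev > 0 and abs(amount - mean) > 3 * stdev:
            reasons.append(f"amount deviates sharply from history (mean {mean:.0f})")

    if any(term in memo for term in RED_FLAG_TERMS):
        reasons.append("memo contains a red-flag term")

    if payment.get("country") in {"XY", "ZZ"}:  # placeholder high-risk jurisdictions
        reasons.append("high-risk jurisdiction")

    return reasons

history = [1000.0, 1100.0, 950.0, 1050.0, 990.0]
print(screen_payment({"amount": 9800.0, "memo": "Consulting fee - expedite", "country": "XY"}, history))
```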

Enhancing Decision-Making. Compliance often involves making decisions based on a wide array of data, from regulatory updates to internal audit findings. Agentic AI can enhance this process by providing real-time insights. It can analyze data across the organization to identify emerging risks, such as changes in geopolitical conditions or new regulatory developments, and provide recommendations on how to address them.

Agentic AI can also help reduce human error. Agentic AI can help eliminate biases or oversight errors in compliance assessments, ensuring that decisions are more objective and accurate. It can also model the potential impact of regulatory changes or proposed business initiatives, allowing compliance teams to anticipate challenges and provide informed guidance to leadership.

Driving Resilience. The regulatory environment is constantly evolving under the second Trump Administration, and organizations must be able to adapt quickly. Agentic AI can help compliance teams stay ahead by monitoring regulatory changes. It can automatically track and analyze updates to laws and regulations worldwide, highlighting changes relevant to the organization and suggesting actions to ensure compliance.

One of the key areas the Department of Justice communicated back in 2020 and carried forward in the 2024 Update to the Evaluation of Corporate Compliance Programs (2024 Update) was the need to refresh risk assessments as your risks change. Agentic AI moves you a level beyond this with proactive risk assessments. By analyzing internal and external data, AI systems can identify vulnerabilities and recommend preventive measures, reducing the likelihood of compliance failures. Agentic AI can also assist in your incident and triage process by investigating the issue, gathering evidence, and suggesting corrective actions, enabling the organization to respond more effectively.
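
As a simple illustration of the regulatory-monitoring idea above, the sketch below triages a hypothetical feed of regulatory updates down to the items relevant to an organization’s jurisdictions and topics. Everything here is an assumption for illustration, including the record format, jurisdictions, keywords, and impact scores; a real pipeline would draw on actual regulator feeds and legal databases.

```python
# Assumed organizational footprint and topics of interest (illustrative only).
OUR_JURISDICTIONS = {"EU", "US", "UK"}
OUR_TOPICS = {"ai", "data protection", "anti-corruption", "sanctions"}

def triage_updates(updates: list[dict]) -> list[dict]:
    """Keep only updates relevant to our jurisdictions and topics, highest impact first."""
    relevant = [
        u for u in updates
        if u["jurisdiction"] in OUR_JURISDICTIONS
        and any(topic in u["summary"].lower() for topic in OUR_TOPICS)
    ]
    return sorted(relevant, key=lambda u: u.get("impact", 0), reverse=True)

feed = [
    {"title": "AI Act guidance on general-purpose models", "jurisdiction": "EU",
     "summary": "New guidance on AI transparency obligations.", "impact": 3},
    {"title": "Local zoning amendment", "jurisdiction": "BR",
     "summary": "Municipal construction permits.", "impact": 1},
    {"title": "Updated sanctions list", "jurisdiction": "US",
     "summary": "Additional designations under sanctions programs.", "impact": 2},
]

for update in triage_updates(feed):
    print(update["title"])
```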

Managing the Risks of Autonomy

While the autonomy of agentic AI systems offers significant benefits, it also introduces new risks that compliance professionals must address. Poor data quality and bias will still generate suboptimal results: incomplete or low-quality data can lead to incorrect or biased outputs from AI systems. Compliance teams must ensure that the data used by these systems is accurate, representative, and regularly updated.

The autonomous nature of Agentic AI means that organizations must establish clear guidelines for oversight and accountability. This includes defining when human intervention is required and ensuring that AI decisions align with organizational values and regulatory requirements. Finally, there are the dual areas of transparency and accountability. One of the most critical challenges with agentic AI is understanding how the system arrives at its decisions. Compliance teams must advocate for transparency in AI operations and develop mechanisms to explain decisions to regulators, stakeholders, and employees.

Steps for Compliance Teams to Adopt Agentic AI

To maximize the benefits of agentic AI while minimizing its risks, compliance teams should take the following steps:

  1. Assess Current Processes. Begin by identifying compliance activities that are repetitive, time-consuming, or prone to error. These are often the best candidates for automation through agentic AI.
  2. Pilot AI Applications. Before deploying AI across the entire compliance function, start with pilot projects in specific areas, such as policy monitoring or transaction reviews. Use pilots to test the system’s capabilities, identify potential risks, and gather feedback.
  3. Strengthen Data Governance. Agentic AI relies heavily on data, making strong data governance practices essential. This includes implementing controls to ensure data accuracy, managing access to sensitive information, and maintaining compliance with data privacy regulations.
  4. Develop Ethical Guidelines. Work with cross-functional teams to establish ethical guidelines for AI use. These guidelines should cover issues such as transparency, accountability, and acceptable use and should be reviewed regularly to reflect evolving best practices and regulatory standards.
  5. Provide Training and Support. Compliance teams must be equipped to work effectively with AI systems. Offer training to help team members understand how agentic AI works, how it can be used responsibly, and their role in overseeing its operations.
  6. Establish a Feedback Loop. Implement processes for continuously monitoring AI performance and gathering feedback from users. Use this information to refine the system and address any issues that arise.
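
To make the feedback loop in step 6 more tangible, the sketch below computes one metric a compliance team might track during a pilot: how often human reviewers agree with, override, or escalate the agent’s decisions. The record format and outcome labels are assumptions for illustration, not any particular tool’s schema.

```python
from collections import Counter

def review_outcomes(feedback: list[dict]) -> dict:
    """Summarize reviewer outcomes for AI decisions as percentages."""
    counts = Counter(item["reviewer_outcome"] for item in feedback)
    total = sum(counts.values())
    return {outcome: round(100 * n / total, 1) for outcome, n in counts.items()}

# Hypothetical feedback records gathered from reviewers over a pilot period.
feedback = [
    {"case_id": "C-1", "ai_decision": "flag", "reviewer_outcome": "agreed"},
    {"case_id": "C-2", "ai_decision": "clear", "reviewer_outcome": "agreed"},
    {"case_id": "C-3", "ai_decision": "flag", "reviewer_outcome": "overridden"},
    {"case_id": "C-4", "ai_decision": "clear", "reviewer_outcome": "escalated"},
]

print(review_outcomes(feedback))
# e.g. {'agreed': 50.0, 'overridden': 25.0, 'escalated': 25.0}
```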

Down the Road

Agentic AI systems represent a powerful tool for compliance functions, offering the potential to enhance efficiency, improve decision-making, and build resilience. However, these benefits can only be realized if the technology is implemented responsibly. Compliance professionals must balance leveraging AI’s capabilities and maintaining the trust, privacy, and ethical standards critical to the organization’s success.

By taking a proactive approach to understanding and adopting agentic AI, compliance teams can streamline their own operations and position themselves as strategic partners in driving the organization’s broader innovation and risk management efforts. The question is no longer whether compliance teams should embrace agentic AI but how they can do so responsibly and effectively.


What Are Agentic AI Systems, Part 1

We live in an era where artificial intelligence (AI) is no longer just a tool for answering questions or providing recommendations; it has evolved into a partner capable of acting on our behalf. In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed the role of AI agents. Today, we therefore enter the world of agentic AI systems. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance. Over this three-part blog series, I will explore what Agentic AI systems are, how they can be used in compliance, and the key issues compliance will face in adopting them.

Defining Agentic AI Systems

In simple terms, Agentic AI does not simply inform; it acts. For compliance professionals, this opens up many possibilities for automating tasks, improving efficiency, and enhancing decision-making. However, with greater autonomy comes greater responsibility, particularly in ensuring these systems operate ethically and within regulatory boundaries.

Agentic AI systems differ significantly from traditional AI tools like chatbots or standalone large language models. While those tools are primarily reactive, responding to queries or prompts, Agentic AI systems operate with a higher degree of autonomy. These systems can analyze data, adapt to new information, and act within pre-defined parameters without requiring constant human oversight. Some of the key differences include the following:

  1. Autonomy. Unlike traditional AI, which often requires human input to execute tasks, agentic AI can take the initiative within established guidelines.
  2. Adaptability. Agentic AI learns and develops based on new data or changing conditions, making it highly dynamic.
  3. Action-Oriented. These systems can analyze data, make decisions, and execute tasks in real time.

For example, imagine a compliance chatbot that answers employees’ questions about corporate policies. While useful, this chatbot cannot take further steps, such as generating a personalized policy report or flagging potential compliance risks. On the other hand, an Agentic AI system could handle these additional tasks autonomously, freeing compliance teams to focus on more strategic priorities.
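
The contrast described above can be boiled down to a few lines of code. The sketch below is purely conceptual, and all of the names, task types, and limits are illustrative assumptions rather than any vendor’s API: a reactive tool answers and stops, while an agentic loop works through a queue of tasks, decides within pre-defined parameters, acts, and escalates anything outside its limits.

```python
def reactive_tool(question: str) -> str:
    """A traditional assistant: answers a single question and stops."""
    return f"Here is the relevant policy text for: {question}"

def agentic_loop(tasks: list[dict], spend_limit: float = 5_000) -> list[str]:
    """A simplified agent: decides and acts within pre-defined parameters."""
    actions_taken = []
    for task in tasks:
        if task["type"] == "policy_question":
            actions_taken.append(f"answered and logged: {task['text']}")
        elif task["type"] == "payment" and task["amount"] <= spend_limit:
            actions_taken.append(f"auto-approved payment of {task['amount']}")
        else:
            # Anything outside the agent's guardrails goes to a person.
            actions_taken.append(f"escalated to human: {task}")
    return actions_taken

print(reactive_tool("gifts and hospitality"))
print(agentic_loop([
    {"type": "policy_question", "text": "Can I accept a conference invitation?"},
    {"type": "payment", "amount": 1_200},
    {"type": "payment", "amount": 25_000},
]))
```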

Agentic AI in Action for Compliance

What does agentic AI mean for the compliance function? Essentially, it represents an opportunity to reimagine how compliance teams operate, enabling them to do more with less. Here are a few ways agentic AI systems can be used effectively in corporate compliance.

  1. Automating Repetitive Tasks. Compliance professionals often find themselves bogged down by routine, resource-intensive tasks. Agentic AI can take over many of these responsibilities, such as policy management automation, by reviewing and updating compliance policies based on regulatory changes. It can provide employee support by responding to frequently asked compliance questions and escalating complex issues to the appropriate team members. It can also extend beyond your organization by continuously assessing third-party risks and analyzing real-time data, such as media reports or transaction histories.
  2. Enhancing Risk Assessment. Agentic AI systems can analyze vast amounts of data quickly and accurately, making them invaluable for identifying and mitigating risks. They can assist in transaction monitoring by detecting anomalies in financial transactions that may indicate potential fraud or corruption. You can move to more proactive risk screening by monitoring news and regulatory updates to identify emerging risks that could impact the organization. Most excitingly, they can provide predictive analytics, allowing you to anticipate compliance challenges based on historical trends and current data.
  3. Supporting Decision-Making. With their ability to analyze complex data and generate actionable insights, agentic AI systems can help compliance teams make better-informed decisions. This can include scenario planning and forecasting by modeling the impact of potential regulatory changes on the organization. As the Department of Justice reminded us in the 2024 Update to the Evaluation of Corporate Compliance Programs (2024 Update), you can move to true data-driven recommendations to provide documented guidance on addressing identified risks or improving compliance processes. Finally, in the never-ending battle for resource allocation, Agentic AI can identify areas where compliance efforts should be prioritized for maximum impact.

The Risks and Responsibilities of Agentic AI

While the benefits of agentic AI are clear, compliance professionals must approach its adoption cautiously. The autonomy of these systems introduces new risks. First and foremost is data integrity: Garbage In, Garbage Out (GIGO) tells us that AI systems are only as good as the data they process. If the data is incomplete, biased, or outdated, the system’s outputs could be flawed. Accountability and transparency are critical, because the question will be asked, “When AI systems make decisions or take actions, who is ultimately responsible?” Compliance teams must establish clear guidelines to ensure accountability and transparency. Finally, there are ethical concerns. The ability of agentic AI to act autonomously raises questions about transparency, fairness, and privacy. These concerns must be addressed through robust governance and ethical guidelines.

Why Compliance Professionals Should Care

Agentic AI systems are not just another tech innovation—they are a significant change that will shape the future of compliance. By understanding these systems, compliance professionals can position themselves as strategic enablers, helping their organizations harness the power of AI responsibly. Compliance teams are uniquely positioned to ensure that AI systems operate transparently and ethically, fostering stakeholder trust.

As AI-specific regulations emerge, compliance professionals will play a critical role in ensuring adherence to new legal standards, as echoed in the 2024 Update.

By integrating agentic AI into their workflows, compliance teams can improve efficiency, reduce costs, and drive profitability in the company. It will certainly demonstrate an increased ROI for compliance.

The Path Forward

The rise of agentic AI systems represents a transformative opportunity for compliance professionals, but only if implemented thoughtfully and responsibly. By embracing this technology, compliance teams can move from being seen as cost centers to becoming innovation partners, driving compliance and business success.

The key is striking the right balance: leveraging the autonomy of agentic AI to achieve efficiencies while maintaining the trust, privacy, and ethical standards foundational to compliance. As compliance professionals, we can lead this transformation, ensuring that agentic AI serves as a tool for good, not a source of risk. The bottom line is that the future of compliance is not simply about saying no to innovation; it is about guiding it responsibly. Let Agentic AI be your ally in this journey.

Join us tomorrow for Part 2, where we discuss how compliance can use Agentic AI systems.