
Compliance and Agentic AI – Building Trust, Part 3

The rise of agentic artificial intelligence (AI) is one of the most transformative developments in recent memory, particularly for legal and compliance professionals. No longer limited to passive interactions or answering questions, AI has evolved into a tool capable of reasoning, making decisions within pre-defined parameters, and taking actions autonomously. As businesses explore the potential of these technologies, compliance professionals find themselves at the forefront of ensuring that this innovation occurs within the guardrails of trust, privacy, and ethical accountability.

In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” author Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed the role of AI agents today and in the future. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance. Over this three-part blog series, I have explored what Agentic AI systems are and how the compliance profession can use them. Today, we conclude by looking at key issues compliance will face, including trust, privacy, and ethical accountability.

Trust is the bedrock upon which all successful technology implementations are built, and when it comes to agentic AI, trust is not just a nice-to-have; it is non-negotiable. For compliance professionals, fostering trust in AI systems is a dual challenge: balancing the excitement of innovation with the ethical and regulatory responsibilities that come with it. Without trust, even the most sophisticated AI systems can fail to deliver their promised value, exposing organizations to legal, reputational, and operational risks.

The cornerstone of this trust lies in three critical areas: data integrity, transparency and explainability, and regulatory alignment.

Data Integrity: Building AI on a Solid Foundation

AI agents are only as reliable as the data they process. If the inputs are flawed, whether through bias, inaccuracy, or incompleteness, the outputs will follow suit. Compliance professionals must ensure the organization’s data ecosystem is robust, curated, and reflective of organizational values. Steps a compliance professional can take to strengthen data integrity include the following:

  1. Centralize Data Management. Fragmented data sources increase the risk of inconsistencies. Establish unified systems that pool data into a single source of truth, ensuring consistency across all AI-driven processes.
  2. Validate Inputs and Outputs. Build systems that validate data inputs for accuracy and continuously monitor AI outputs. This safeguards against deviations or unintended consequences as the AI evolves.
  3. Eliminate Bias. Conduct bias audits on datasets to ensure fair and equitable outcomes. For example, compliance teams using AI to monitor transactions for fraud must ensure that the data does not unfairly target specific regions or demographics.

When compliance professionals champion high-quality, unbiased, and unified data, they provide a strong foundation for building trust in AI systems.
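The validation and bias-audit steps above can be sketched in a few lines of Python. This is an illustrative sketch only; the field names, thresholds, and the idea of comparing AI flag rates by region are assumptions for the example, not a reference to any specific system.

```python
# Illustrative sketch: validate data inputs and audit AI flag rates by
# region. Field names and checks are hypothetical.

REQUIRED_FIELDS = {"id", "amount", "region"}

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("amount must be a non-negative number")
    return problems

def flag_rate_by_region(records: list[dict]) -> dict[str, float]:
    """Share of AI-flagged records per region; large gaps may signal bias."""
    totals, flagged = {}, {}
    for r in records:
        region = r.get("region", "unknown")
        totals[region] = totals.get(region, 0) + 1
        flagged[region] = flagged.get(region, 0) + (1 if r.get("flagged") else 0)
    return {reg: flagged[reg] / totals[reg] for reg in totals}

records = [
    {"id": 1, "amount": 120.0, "region": "EMEA", "flagged": True},
    {"id": 2, "amount": 80.0, "region": "EMEA", "flagged": False},
    {"id": 3, "amount": -5.0, "region": "APAC", "flagged": True},
]
print(validate_record(records[2]))   # the negative amount fails validation
print(flag_rate_by_region(records))  # {'EMEA': 0.5, 'APAC': 1.0}
```

A real pipeline would run checks like these continuously, with the flag-rate comparison feeding a periodic bias audit rather than a one-off report.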

Transparency and Explainability: Demystifying the Black Box

One of the most common concerns about AI, particularly agentic AI, is its black box quality. How did the system arrive at a specific decision? Was it a fair decision? Could it have been influenced by flawed data or programming? Transparency and explainability are key to addressing these questions. For compliance professionals, the goal is to ensure that AI decisions are understandable and defensible. Regulators, employees, and customers will demand to know how AI systems operate, especially when decisions impact them directly. A compliance function can prioritize transparency using the following strategies:

  1. Document Decision-Making Processes. AI systems must be designed to log their decision-making rationale. This documentation can be a critical audit trail during internal reviews or regulatory inquiries.
  2. Promote Explainable AI. Collaborate with IT and AI teams to prioritize explainability, even if it means sacrificing some degree of complexity. The ability to explain why an AI flagged a transaction or how it recommended a course of action builds confidence among stakeholders.
  3. Train Stakeholders. Ensure that key stakeholders understand the basics of how the AI system operates, its limitations, and when human oversight is required.

Transparency and explainability are not just technical features; they are trust-building tools. Compliance professionals who advocate for these principles will strengthen stakeholder confidence in AI systems.
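As a hypothetical illustration of the audit-trail idea, an AI decision log might record each automated decision together with its inputs and rationale, ready for export during a review. The class, fields, and example decision below are invented for the sketch.

```python
# Hypothetical audit trail: every automated decision is logged with its
# inputs, rationale, and a timestamp so it can be reviewed later.
import json
from datetime import datetime, timezone

class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, decision: str, inputs: dict, rationale: str) -> dict:
        """Append one decision with the context needed to explain it."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "decision": decision,
            "inputs": inputs,
            "rationale": rationale,
        }
        self.entries.append(entry)
        return entry

    def export(self) -> str:
        """Serialize the trail for an internal review or regulatory inquiry."""
        return json.dumps(self.entries, indent=2)

log = DecisionLog()
log.record(
    decision="flag_transaction",
    inputs={"transaction_id": "TX-1001", "amount": 250000},
    rationale="amount exceeds 3x the vendor's 12-month average",
)
print(log.export())
```

The point of the sketch is the shape of the record, not the storage: whatever system holds the log, each entry should pair the decision with a human-readable rationale.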

Regulatory Alignment: Staying Ahead of the Curve

As Agentic AI continues to evolve, so will the regulatory landscape. Policymakers worldwide are introducing AI-specific regulations, such as the EU Artificial Intelligence Act or Colorado’s state-level Consumer Protections for Artificial Intelligence. These frameworks aim to ensure that AI systems operate ethically, securely, and transparently. For compliance professionals, this represents both a challenge and an opportunity.

  1. Embed Privacy-by-Design Principles. Incorporate data privacy protections at every stage of AI development, ensuring compliance with laws like GDPR, CCPA, and beyond.
  2. Monitor Emerging Regulations. Track evolving AI regulations and assess how they impact your organization. Assign dedicated resources to regulatory monitoring to stay ahead of changes.
  3. Collaborate Across Functions. Work with legal, IT, and data governance teams to ensure AI systems meet or exceed regulatory standards from day one.

Compliance professionals have a unique role in translating complex regulatory requirements into actionable strategies. By embedding regulatory alignment into AI systems, they help their organizations avoid legal pitfalls and foster long-term trust.

Building Ethical Guardrails: The Compass for Responsible AI 

Trust in AI is not just about compliance; it is also about ethics. The responsible adoption of agentic AI hinges on establishing ethical guardrails that ensure innovation does not come at the expense of integrity. These guardrails serve as both a compass and a safety net, guiding the organization as it navigates the complexities of AI deployment. You should employ several key ethical guardrails.

  1. Transparency in Decision-Making. AI systems must document and communicate their decision-making processes. This ensures that humans can intervene when needed.
  2. Risk Mitigation. Conduct comprehensive risk assessments for all AI use cases, identifying vulnerabilities and implementing safeguards to address them.
  3. Human Escalation Pathways. Define clear parameters for when and how human oversight is required. Even the most advanced AI systems should not operate entirely without human involvement.
  4. Privacy Protections. Privacy-by-design principles should be central to every AI deployment, ensuring compliance with data protection laws and safeguarding customer trust.

By championing ethical AI practices, compliance professionals can help their organizations harness the power of agentic AI while mitigating its risks.

Balancing Innovation with Compliance: A Strategic Opportunity

The perception of compliance as a business blocker is outdated. Agentic AI allows compliance teams to position themselves as enablers of innovation. Compliance professionals can enhance business outcomes and stakeholder trust by guiding organizations to adopt AI responsibly and strategically. There are multiple steps a corporate compliance function can take and inculcate in the organization.

  1. Educate Your Team. Develop a plan to increase your team’s understanding of agentic AI, and foster cross-functional collaboration between compliance, IT, and business units to ensure alignment.
  2. Shift the Mindset. Move beyond “Is this legal?” to ask, “How can we do this responsibly?” This positions compliance as a driver of ethical innovation.
  3. Audit Your Data Ecosystem. Conduct a thorough review of your organization’s data sources, addressing inaccuracies and ensuring readiness for AI processing.
  4. Update Policies. Revise acceptable use policies to address the unique risks of agentic AI, ensuring alignment with organizational values and emerging regulations.
  5. Prioritize Trust. Without definitive laws, meeting or exceeding customer privacy and security expectations can be a competitive advantage.

The Path Forward: Trust as a Strategic Asset

Adopting Agentic AI systems marks a transformative moment for compliance professionals and the corporate compliance function. By embedding trust into every aspect of AI deployment through data integrity, transparency, regulatory alignment, and ethical guardrails, compliance teams can help their organizations navigate this new era and thrive in it. By championing trust, compliance professionals can become strategic partners in their organizations’ AI journeys, proving that ethics and innovation are not opposing forces; they are complementary pillars of success. As always, compliance begins with trust. In the Agentic AI era, trust is not just foundational but transformational.

The rise of AI is not just a technological shift; it’s a cultural and ethical one. It’s an opportunity for compliance professionals to redefine their roles, demonstrating that trust and innovation coexist. In this new frontier, the organizations that strike the right balance between trust, privacy, and compliance will succeed and set the standard for the entire industry.  As Niles aptly puts it, this is not just about adopting new tools but transforming organizations’ operations. And in that transformation lies the promise of a more efficient, resilient, and ethical future.


How Compliance Can Leverage Agentic AI Systems, Part 2

Agentic AI systems, with their unique ability to operate autonomously, present a game-changing opportunity for corporate compliance functions. In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed AI agents’ roles. Today, we therefore enter the world of agentic AI systems. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance.

Unlike traditional chatbots or large language models that are limited to providing static responses, Agentic AI systems can analyze complex data, adapt to new information, and take actions based on predefined parameters. This capability can revolutionize compliance operations by introducing efficiencies, enhancing decision-making, and improving the organization’s ability to anticipate and respond to risks. However, leveraging these systems effectively requires compliance professionals to approach them thoughtfully and strategically. Over this three-part blog series, I will explore what Agentic AI systems are, how they can be used in compliance, and how to use Agentic AI going forward. In Part 2, we look at how compliance can use Agentic AI systems.

Understanding the Potential of Agentic AI in Compliance

Agentic AI is distinguished by its autonomy. These systems do not simply respond to queries; they execute tasks, provide actionable insights, and adapt to changing circumstances with minimal human intervention. For compliance professionals, this shift represents an opportunity to move beyond mere monitoring and detection. Instead, compliance teams can integrate AI agents into their workflows to proactively manage risks, enhance internal processes, and improve the organization’s overall compliance posture. Here are some specific ways agentic AI systems can be applied within the compliance function.

Automating Routine Tasks. Many compliance activities are repetitive and resource-intensive, leading to inefficiencies and bottlenecks. Agentic AI can streamline these processes by handling internal inquiries. AI agents can respond to frequently asked compliance questions from employees, such as clarifications on company policies, reporting obligations, or training requirements. This reduces the workload on compliance officers while ensuring consistent and accurate responses.

Agentic AI can assist in managing external counsel and external consultant relationships. For companies working with multiple external legal advisors, Agentic AI can automate the tracking of legal expenses, performance metrics, and case statuses, providing a centralized view of outside counsel activities. Finally, Agentic AI can be a game-changer in monitoring transactions on a real-time and ongoing basis. Agentic AI systems can autonomously review large volumes of financial transactions to identify red flags, such as unusual payment patterns or potential violations of anti-corruption laws.
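To make the transaction-monitoring idea concrete, here is a minimal sketch that flags payments far above a vendor’s historical mean. This is a stand-in for illustration only; a production monitoring system would use far richer features and models than a simple three-standard-deviation rule.

```python
# Illustrative anomaly check for transaction monitoring: flag payments
# well above a vendor's historical mean. The 3-sigma rule is a stand-in
# for a real model.
from statistics import mean, stdev

def flag_unusual(history: list[float], new_amount: float, k: float = 3.0) -> bool:
    """Flag if new_amount is more than k standard deviations above the mean."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return new_amount > mu + k * sigma

history = [1000.0, 1100.0, 950.0, 1050.0, 990.0]
print(flag_unusual(history, 1080.0))   # routine payment, not flagged
print(flag_unusual(history, 25000.0))  # flagged for human review
```

Note the deliberate fallback when history is thin: an agentic system should defer rather than guess, escalating sparse-data cases to a human reviewer.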

Enhancing Decision-Making. Compliance often involves making decisions based on a wide array of data, from regulatory updates to internal audit findings. Agentic AI can enhance this process by providing real-time insights. It can analyze data across the organization to identify emerging risks, such as changes in geopolitical conditions or new regulatory developments, and provide recommendations on how to address them.

Agentic AI can also help reduce human error. Agentic AI can help eliminate biases or oversight errors in compliance assessments, ensuring that decisions are more objective and accurate. It can also model the potential impact of regulatory changes or proposed business initiatives, allowing compliance teams to anticipate challenges and provide informed guidance to leadership.

Driving Resilience. The regulatory environment is constantly evolving under the second Trump Administration, and organizations must be able to adapt quickly. Agentic AI can help compliance teams stay ahead by monitoring regulatory changes. It can automatically track and analyze updates to laws and regulations worldwide, highlighting changes relevant to the organization and suggesting actions to ensure compliance.

One of the key areas the Department of Justice communicated back in 2020 and brought forward in the 2024 Update to the Evaluation of Corporate Compliance Programs (2024 Update) was the need for risk assessments as your risk changes. Agentic AI moves you to a level beyond this with proactive risk assessments. By analyzing internal and external data, AI systems can identify vulnerabilities and recommend preventive measures, reducing the likelihood of compliance failures. It can also assist in your incident and triage process by investigating the issue, gathering evidence, and suggesting corrective actions, enabling the organization to respond more effectively.

Managing the Risks of Autonomy

While the autonomy of agentic AI systems offers significant benefits, it also introduces new risks that compliance professionals must address. Poor data quality and bias will still generate suboptimal results: incomplete or low-quality data can lead to incorrect or biased outputs from AI systems. Compliance teams must ensure that the data used by these systems is accurate, representative, and regularly updated.

The autonomous nature of Agentic AI means that organizations must establish clear guidelines for oversight and accountability. This includes defining when human intervention is required and ensuring that AI decisions align with organizational values and regulatory requirements. Finally, there are the dual areas of transparency and accountability. One of the most critical challenges with agentic AI is understanding how the system arrives at its decisions. Compliance teams must advocate for transparency in AI operations and develop mechanisms to explain decisions to regulators, stakeholders, and employees.

Steps for Compliance Teams to Adopt Agentic AI

To maximize the benefits of agentic AI while minimizing its risks, compliance teams should take the following steps:

  1. Assess Current Processes. Begin by identifying compliance activities that are repetitive, time-consuming, or prone to error. These are often the best candidates for automation through agentic AI.
  2. Pilot AI Applications. Before deploying AI across the entire compliance function, start with pilot projects in specific areas, such as policy monitoring or transaction reviews. Use pilots to test the system’s capabilities, identify potential risks, and gather feedback.
  3. Strengthen Data Governance. Agentic AI relies heavily on data, making strong data governance practices essential. This includes implementing controls to ensure data accuracy, managing access to sensitive information, and maintaining compliance with data privacy regulations.
  4. Develop Ethical Guidelines. Work with cross-functional teams to establish ethical guidelines for AI use. These guidelines should cover issues such as transparency, accountability, and acceptable use and should be reviewed regularly to reflect evolving best practices and regulatory standards.
  5. Provide Training and Support. Compliance teams must be equipped to work effectively with AI systems. Offer training to help team members understand how agentic AI works, how it can be used responsibly, and their role in overseeing its operations.
  6. Establish a Feedback Loop. Implement processes for continuously monitoring AI performance and gathering feedback from users. Use this information to refine the system and address any issues that arise.

Down the Road

Agentic AI systems represent a powerful tool for compliance functions, offering the potential to enhance efficiency, improve decision-making, and build resilience. However, these benefits can only be realized if the technology is implemented responsibly. Compliance professionals must strike a balance between leveraging AI’s capabilities and maintaining the trust, privacy, and ethical standards critical to the organization’s success.

By taking a proactive approach to understanding and adopting agentic AI, compliance teams can streamline their own operations and position themselves as strategic partners in driving the organization’s broader innovation and risk management efforts. The question is no longer whether compliance teams should embrace agentic AI but how they can do so responsibly and effectively.


Daily Compliance News: January 29, 2025, The End to Black History Month Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News—all from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • State Department prohibited from celebrating Black History Month. (WSJ)
  • Is DeepSeek real? (FT)
  • DOJ Public Corruption Unit Chief resigns. (Bloomberg)
  • Using AI agents requires trust and compliance. (Bloomberg)

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.

Check out The FCPA Survival Guide on Amazon.com.


What Are Agentic AI Systems, Part 1

We live in an era where artificial intelligence (AI) is no longer just a tool for answering questions or providing recommendations; it has evolved into a partner capable of acting on our behalf. In a recent article in Bloomberg entitled “Using AI Agents Requires a Balance of Trust, Privacy, Compliance,” Sabastian Niles, President and Chief Legal Officer of Salesforce, discussed the role of AI agents. Today, we therefore enter the world of agentic AI systems. Understanding this new breed of AI is essential for compliance professionals to harness its power responsibly while safeguarding trust, privacy, and compliance. Over this three-part blog series, I will explore what Agentic AI systems are, how they can be used in compliance, and how to use Agentic AI going forward.

Defining Agentic AI Systems

In simple terms, Agentic AI does not simply inform; it acts. For compliance professionals, this opens up many possibilities for automating tasks, improving efficiency, and enhancing decision-making. However, with greater autonomy comes greater responsibility, particularly in ensuring these systems operate ethically and within regulatory boundaries.

Agentic AI systems differ significantly from traditional AI tools like chatbots or standalone large language models. While the latter is primarily reactive, responding to queries or prompts, Agentic AI systems operate with a higher degree of autonomy. These systems can analyze data, adapt to new information, and act within pre-defined parameters without requiring constant human oversight. Some of the key differences include the following.

  1. Autonomy. Unlike traditional AI, which often requires human input to execute tasks, agentic AI can take the initiative within established guidelines.
  2. Adaptability. Agentic AI learns and develops based on new data or changing conditions, making it highly dynamic.
  3. Action-Oriented. These systems can analyze data, make decisions, and execute tasks in real time.

For example, imagine a compliance chatbot that answers employees’ questions about corporate policies. While useful, this chatbot cannot take further steps, such as generating a personalized policy report or flagging potential compliance risks. On the other hand, an Agentic AI system could handle these additional tasks autonomously, freeing compliance teams to focus on more strategic priorities.

Agentic AI in Action for Compliance

What does agentic AI mean for the compliance function? Essentially, it represents an opportunity to reimagine how compliance teams operate, enabling them to do more with less. Here are a few ways agentic AI systems can be used effectively in corporate compliance.

  1. Automating Repetitive Tasks. Compliance professionals often find themselves bogged down by routine, resource-intensive tasks. Agentic AI can take over many of these responsibilities, such as in policy management automation, by reviewing and updating compliance policies based on regulatory changes. It can provide employee support by responding to frequently asked compliance questions and escalating complex issues to the appropriate team members. It can also extend beyond your organization by continuously assessing third-party risks and analyzing real-time data, such as media reports or transaction histories.
  2. Enhancing Risk Assessment. Agentic AI systems can analyze vast amounts of data quickly and accurately, making them invaluable for identifying and mitigating risks. They can assist in transaction monitoring by detecting anomalies in financial transactions that may indicate potential fraud or corruption. You can move to more proactive risk screening by monitoring news and regulatory updates to identify emerging risks that could impact the organization. Most excitingly, they can provide predictive analytics, allowing you to anticipate compliance challenges based on historical trends and current data.
  3. Supporting Decision-Making. With their ability to analyze complex data and generate actionable insights, agentic AI systems can help compliance teams make better-informed decisions. This can include scenario planning and forecasting by modeling the impact of potential regulatory changes on the organization. As the Department of Justice reminded us in the 2024 Update to the Evaluation of Corporate Compliance Programs (2024 Update), you can move to true data-driven recommendations to provide documented guidance on addressing identified risks or improving compliance processes. Finally, in the never-ending battle for resource allocation, Agentic AI can identify areas where compliance efforts should be prioritized for maximum impact.

The Risks and Responsibilities of Agentic AI

While the benefits of agentic AI are clear, compliance professionals must approach its adoption cautiously. The autonomy of these systems introduces new risks. First and foremost is data integrity and Garbage In, Garbage Out (GIGO), which tells us that AI systems are only as good as the data they process. The system’s outputs could be flawed if the data is incomplete, biased, or outdated. Accountability and transparency are critical, as the question will be asked, “When AI systems make decisions or take actions, who is ultimately responsible?” Compliance teams must establish clear guidelines to ensure accountability and transparency. Finally, there are the ethical concerns involved. The ability of agentic AI to act autonomously raises questions about transparency, fairness, and privacy. These concerns must be addressed through robust governance and ethical guidelines.

Why Compliance Professionals Should Care

Agentic AI systems are not just another tech innovation—they are a significant change that will shape the future of compliance. By understanding these systems, compliance professionals can position themselves as strategic enablers, helping their organizations harness the power of AI responsibly. Compliance teams are uniquely positioned to ensure that AI systems operate transparently and ethically, fostering stakeholder trust.

As AI-specific regulations emerge, compliance professionals will play a critical role in ensuring adherence to new legal standards, as echoed in the 2024 Update.

By integrating agentic AI into their workflows, compliance teams can improve efficiency, reduce costs, and drive profitability in the company. It will certainly demonstrate an increased ROI for compliance.

The Path Forward

The rise of agentic AI systems represents a transformative opportunity for compliance professionals, but only if implemented thoughtfully and responsibly. By embracing this technology, compliance teams can move from being seen as cost centers to becoming innovation partners, driving compliance and business success.

The key is striking the right balance: leveraging the autonomy of agentic AI to achieve efficiencies while maintaining the trust, privacy, and ethical standards foundational to compliance. As compliance professionals, we can lead this transformation, ensuring that agentic AI serves as a tool for good, not a source of risk. The bottom line is that the future of compliance is not simply about saying no to innovation; it is about guiding it responsibly. Let Agentic AI be your ally in this journey.

Join us tomorrow for Part 2, where we discuss how to use Agentic AI systems.


Daily Compliance News: January 28, 2025, The TikTok Test Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News—all from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • TD Bank gets a new Global Head of Financial Crime Risk Management. (WSJ)
  • TikTok test for corporate America. (FT)
  • Would the SEC-CFTC merger be a win for DOGE? (Bloomberg)
  • AI’s role in compliance training. (TechRadar)

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.

Check out The FCPA Survival Guide on Amazon.com.


AI and Compliance Training

AI-driven training tools are transforming how organizations deliver compliance programs. By offering personalized, interactive, and role-specific training at scale, AI eliminates many cost and logistical barriers that have historically made tailored training challenging. This evolution improves engagement and reduces compliance risks by equipping employees with relevant, actionable knowledge. Today, I want to explore how AI reshapes compliance training, supplemented with real-world examples of companies leading the charge.

Personalization at Scale

AI analyzes vast amounts of data, including an employee’s role, learning history, and performance metrics, to create tailored training experiences. This ensures that the content is directly relevant to each employee’s responsibilities. For example, a sales team handling international transactions might focus on anti-bribery and corruption rules under the FCPA. A procurement team could receive training on vendor due diligence, export control and sanctions, and conflict-of-interest disclosures. Conversely, a finance staff member might dive into anti-money laundering (AML) and financial controls.

You can integrate AI into your global compliance training programs to tailor content to employees’ roles. Through machine learning, your system can deliver specific modules to individuals, ensuring that high-risk roles receive advanced training while others get streamlined, relevant content. The result will be better alignment between training content and operational realities, boosting engagement and effectiveness.
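A minimal sketch of role-based module routing might look like the following. The role names, module names, and the notion of a designated high-risk set are all illustrative assumptions, not a reference to any particular platform.

```python
# Hypothetical role-to-module routing for personalized compliance
# training. Role and module names are invented for the example.

ROLE_MODULES = {
    "sales": ["fcpa_anti_bribery", "gifts_and_hospitality"],
    "procurement": ["vendor_due_diligence", "sanctions_screening",
                    "conflict_of_interest"],
    "finance": ["aml_basics", "financial_controls"],
}
HIGH_RISK_ROLES = {"sales", "procurement"}

def assign_modules(role: str) -> list[str]:
    """Return the training plan for a role: high-risk roles get an
    advanced module appended, unknown roles get the streamlined core set."""
    modules = list(ROLE_MODULES.get(role, ["code_of_conduct_core"]))
    if role in HIGH_RISK_ROLES:
        modules.append("advanced_risk_scenarios")
    return modules

print(assign_modules("sales"))
print(assign_modules("engineering"))  # falls back to the core module
```

In a real deployment, a learned model would replace the static table, but the contract is the same: role and risk profile in, a tailored module list out.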

Just-in-Time Learning

AI enables “just-in-time” learning, delivering content at the precise moment it’s needed. For example, an employee preparing to interact with a foreign government official might receive a refresher module on anti-corruption policies before the meeting. Similarly, an employee about to onboard a vendor might receive training on due diligence best practices. This approach effectively ensures that employees apply their knowledge in real-world scenarios when it matters most. It also minimizes the “forgetting curve” by delivering training in digestible chunks that reinforce memory retention.

This means you can use AI to deliver microlearning modules through your internal compliance training platform. Employees receive targeted reminders about data privacy regulations when working on projects involving personal data, ensuring compliance is seamlessly integrated into daily workflows.
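One way to picture a just-in-time trigger is a simple mapping from upcoming work events to refresher modules, so that training is queued right before it is needed. The event and module names below are hypothetical.

```python
# Sketch of just-in-time training triggers: upcoming work events map to
# refresher modules delivered right before they are needed. Event and
# module names are hypothetical.

JIT_TRIGGERS = {
    "meeting_foreign_official": "anti_corruption_refresher",
    "vendor_onboarding": "due_diligence_best_practices",
    "handling_personal_data": "data_privacy_reminder",
}

def modules_for_events(events: list[str]) -> list[str]:
    """Return refresher modules for the events that have a mapping."""
    return [JIT_TRIGGERS[e] for e in events if e in JIT_TRIGGERS]

week = ["vendor_onboarding", "team_offsite", "handling_personal_data"]
print(modules_for_events(week))
# ['due_diligence_best_practices', 'data_privacy_reminder']
```

Events without a compliance mapping (like the offsite here) simply pass through untouched, which keeps the reminders targeted rather than noisy.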

Enhanced Engagement Through Gamification 

AI makes compliance training engaging by incorporating gamified elements like quizzes, leaderboards, and decision-making simulations. These interactive features transform mundane lessons into enjoyable experiences, boosting motivation and retention. Imagine employees participating in a simulated bribery scenario, navigating ethical dilemmas in real time. Such immersive experiences teach policies and foster critical thinking and decision-making skills.

For example, PwC’s Game of Threats™ is a digital game that simulates the speed and complexity of a real-world cyber breach. It is designed to help executives “understand the steps they can take to protect their companies. The game environment creates a realistic experience where both sides, the company and the attacker, are required to make quick, high-impact decisions with minimal information.” You can “coach players through realistic scenarios with different types of threat actors and their preferred methodologies and explain what they can do to better prevent, detect, and respond to an attack.”

Continuous Improvement

AI-powered platforms don’t just deliver training; they learn and adapt. These systems analyze performance metrics, such as quiz scores and engagement rates, to identify areas where employees struggle. Based on this data, the platform refines its content, ensuring that training evolves alongside organizational needs and regulatory changes.

One company implemented AI-driven tools for compliance training that adapt based on user feedback and performance data. If employees consistently fail a particular module, the AI identifies gaps and adjusts the content to address misunderstandings more effectively.
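That feedback loop can be sketched as a check over module-level pass/fail data: modules whose failure rate exceeds a threshold are queued for content revision. The 40% threshold and the data shape are arbitrary illustrations, not drawn from any actual platform.

```python
# Illustrative continuous-improvement loop: modules with high failure
# rates are queued for content revision. The threshold is an assumption.

def modules_needing_revision(results: dict[str, list[bool]],
                             fail_threshold: float = 0.4) -> list[str]:
    """Return modules whose share of failed attempts exceeds the
    threshold, i.e. candidates for rewritten content."""
    flagged = []
    for module, attempts in results.items():
        if not attempts:
            continue  # no data yet, nothing to conclude
        fail_rate = attempts.count(False) / len(attempts)
        if fail_rate > fail_threshold:
            flagged.append(module)
    return flagged

results = {
    "aml_basics": [True, True, False, True],             # 25% fail rate
    "sanctions_screening": [False, False, True, False],  # 75% fail rate
}
print(modules_needing_revision(results))  # ['sanctions_screening']
```

A real platform would close the loop automatically, routing flagged modules to content authors and re-measuring after the rewrite.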

Cost-Effective Solutions for Large Organizations

Scaling traditional training methods across a large global workforce is challenging and expensive. AI simplifies this by automating the customization process, ensuring consistent quality across teams and geographies. It also reduces costs associated with in-person training sessions and printed materials. Unilever, for example, leveraged AI to implement a scalable compliance training platform for its more than 150,000 employees. By automating the delivery of role-specific training modules and offering multi-language support, the company significantly reduced training costs while maintaining high levels of engagement and effectiveness.

Overcoming Barriers to AI Adoption in Compliance Training

Unfortunately, despite its obvious benefits, some organizations hesitate to adopt AI-driven compliance training because of perceived challenges. The first is cost: the initial investment in AI tools can seem too high, even though the long-term savings, through improved training efficiency and reduced compliance risks, far outweigh the upfront expense. The second is technological complexity; partnering with experienced vendors or consultants can simplify the implementation process and ensure seamless integration with existing systems. Finally, there is the ever-present cultural resistance. Employees may resist AI-driven training out of fear of surveillance or skepticism about its effectiveness. Clear communication about how AI enhances training rather than replacing human oversight can help alleviate these concerns.

The Future of Compliance Training: AI as a Strategic Advantage

AI-driven compliance training is more than a technological upgrade; it is a strategic advantage that organizations can use in several ways. By delivering tailored, engaging, and timely training, AI reduces the likelihood of compliance violations and the penalties that follow. It can build trust between compliance and its customer base: the company's own employees. Employees who feel supported with relevant, engaging training are more likely to embrace compliance as part of their workplace culture. Finally, it allows you to stay ahead of the curve in compliance training, and potentially ahead of the Department of Justice (DOJ), because AI ensures training evolves alongside regulatory changes, keeping organizations proactive rather than reactive.

The message is clear: Investing in AI-driven compliance training is not just about ticking boxes; it is about building a resilient, ethical organization that thrives in today's complex regulatory environment. If your company has not yet embraced the AI revolution in compliance training, now is the time to explore the possibilities. With the right tools and a commitment to meaningful employee engagement, you can transform compliance from a checkbox exercise into a powerful driver of business success.

Categories
Blog

AI, Process Management, and Compliance

Integrating artificial intelligence (AI) and advanced analytics with robust process management principles can unlock new levels of efficiency and innovation. Mars Wrigley, the global confectionery leader, offers an instructive case study. In an article in the Harvard Business Review entitled How to Marry Process Management and AI, Thomas H. Davenport and Thomas C. Redman wrote that through its strategic deployment of AI to digitize its supply chain and manage operations, Mars Wrigley demonstrates how a systematic approach to process management can achieve significant improvements in operational performance, customer satisfaction, and sustainability.

Mars Wrigley’s success story holds valuable lessons for compliance professionals about aligning technology, data, and governance to enhance compliance frameworks and drive value across organizations.

Digitization and AI: The New Frontier for Process Management

Mars Wrigley began its journey by building a digital twin of its production line and feeding real-time operational data into machine-learning models. The results were striking. The company received predictive insights that reduced overfilling, minimized waste, and optimized supply chain processes. They partnered with vendors like Aera Technology for data visualization and preventive maintenance and with Kinaxis to balance supply and demand, automate invoices, and increase truck utilization by 15%.

This underscores a critical point from a compliance standpoint: Technology can only enhance compliance when processes are well-defined, integrated, and aligned with organizational goals. Compliance officers must recognize the potential of AI to streamline compliance monitoring, enhance risk detection, and reduce manual inefficiencies.

For example, consider AI tools that monitor high-risk transactions or flag anomalies in employee expense reports. When implemented in a robust compliance framework, these tools improve detection rates and allow compliance teams to focus on strategic initiatives rather than routine checks.
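
Even a basic statistical screen illustrates the idea. The sketch below (a minimal example, assuming a flat list of expense amounts; a production tool would use a trained model and far richer features) flags amounts that sit far outside the normal pattern:

```python
import statistics

def flag_expense_anomalies(amounts, z_cutoff=3.0):
    """Return indices of expense amounts more than z_cutoff population
    standard deviations from the mean -- a simple anomaly screen."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # all amounts identical; nothing to flag
        return []
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > z_cutoff]

# Twenty routine expenses and one outlier: only the outlier is flagged.
expenses = [100.0] * 20 + [5000.0]
print(flag_expense_anomalies(expenses))
```

The point is not the statistics but the workflow: the machine surfaces the outliers, and the compliance team spends its time on the judgment calls.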

The Role of Process Management in Compliance

Process management is about understanding how tasks fit together to create a specific outcome and then optimizing those sequences. Put another way, it is about operationalizing compliance. Whether addressing department-level activities or end-to-end processes, process management principles can yield transformative results when applied to compliance. What are some of the ways process management can do so?

Start with something as basic as error reduction: well-managed processes minimize compliance failures by cutting error rates and increasing consistency. Another traditional area for the compliance department is cross-functional coordination. Effective compliance requires breaking down silos, whether between legal, finance, HR, or operations, and aligning departments toward common objectives.

This approach can also positively impact corporate culture by increasing stakeholder buy-in and employee engagement. Process management often conflicts with hierarchical management structures. In compliance, this tension may manifest when reconciling DOJ mandates with operational priorities in your organization. Persuading stakeholders to prioritize compliance demands strong leadership and effective change management.

AI and Process Management: A Compliance Blueprint

AI supports specific subprocesses within larger workflows, but true transformation occurs when organizations integrate these capabilities across end-to-end processes. For compliance professionals, this is a roadmap for embedding AI into compliance programs.

Step 1: Establish Ownership

Every effective compliance initiative begins with clear accountability. A defined ownership structure underpinned Mars Wrigley’s digital twin success. Compliance programs require similar clarity. Appointing a “compliance process owner” ensures cross-functional alignment, while department-level compliance champions can coordinate implementation.

Step 2: Map and Redesign Processes

Mapping current compliance processes is essential for identifying inefficiencies. Process mining tools, which analyze enterprise system logs to identify bottlenecks, can uncover hidden risks. For instance, tracking the due diligence lifecycle in third-party onboarding can reveal inefficiencies, such as delays in background checks or missed follow-ups.
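
A toy version of that process-mining idea can be sketched in a few lines. The example below (illustrative only; the log format, stage names, and dates are assumptions) computes the average time spent in each onboarding stage and surfaces the bottleneck:

```python
from datetime import date

def average_stage_days(log):
    """Average days each due diligence stage takes, measured as the gap
    between consecutive stage completions -- a toy process-mining metric."""
    totals, counts = {}, {}
    for stages in log.values():
        for (_prev, prev_day), (stage, day) in zip(stages, stages[1:]):
            totals[stage] = totals.get(stage, 0) + (day - prev_day).days
            counts[stage] = counts.get(stage, 0) + 1
    return {stage: totals[stage] / counts[stage] for stage in totals}

# Hypothetical onboarding log: case -> ordered (stage, completion_date) pairs
log = {
    "vendor-a": [("intake", date(2024, 1, 1)),
                 ("background-check", date(2024, 1, 15)),
                 ("approval", date(2024, 1, 17))],
    "vendor-b": [("intake", date(2024, 1, 5)),
                 ("background-check", date(2024, 1, 25)),
                 ("approval", date(2024, 1, 26))],
}
avgs = average_stage_days(log)
bottleneck = max(avgs, key=avgs.get)
print(bottleneck, avgs[bottleneck])  # background checks dominate cycle time
```

Real process-mining tools reconstruct this from enterprise system logs automatically, but the output is the same: a ranked view of where the process stalls.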

Redesign efforts should prioritize risk-prone areas, leveraging AI tools to streamline activities like transaction monitoring, policy distribution, and whistleblower case tracking.

Step 3: Define Metrics and Set Targets

Compliance performance must be measurable. Metrics such as incident resolution times, training completion rates, and risk assessment quality should guide process improvements. AI enables real-time metrics monitoring, providing insights that compliance officers can act on immediately. Mars Wrigley’s use of analytics to improve truck utilization offers a parallel for compliance: by tracking resource allocation, compliance teams can reduce unnecessary costs while ensuring optimal coverage of risk areas.
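The mechanics of metrics monitoring can be very lightweight. The sketch below (a minimal illustration; the KPI names and targets are invented for the example) compares measured compliance KPIs against targets and returns the ones that miss, so the team knows where to act first:

```python
def kpis_off_target(metrics, targets):
    """Return the names of KPIs missing their targets.

    metrics maps a KPI name to (value, higher_is_better);
    targets maps the same name to its target value.
    """
    misses = []
    for name, (value, higher_is_better) in metrics.items():
        target = targets[name]
        missed = value < target if higher_is_better else value > target
        if missed:
            misses.append(name)
    return sorted(misses)

# Hypothetical compliance KPIs for illustration only.
metrics = {
    "training_completion_pct": (88.0, True),    # higher is better
    "incident_resolution_days": (12.0, False),  # lower is better
    "audit_findings_closed_pct": (97.0, True),
}
targets = {
    "training_completion_pct": 95.0,
    "incident_resolution_days": 10.0,
    "audit_findings_closed_pct": 90.0,
}
print(kpis_off_target(metrics, targets))
```

With AI feeding these metrics in real time rather than quarterly, the review cycle shrinks from months to days.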

Step 4: Leverage Technology and Data

AI tools such as robotic process automation (RPA) and natural language processing (NLP) are increasingly used in compliance programs to automate routine tasks. RPA can streamline repetitive activities like generating regulatory reports. NLP can analyze large volumes of text, such as contracts or policies, to identify risks or inconsistencies.
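At its simplest, the text-analysis idea reduces to pattern matching. The sketch below (deliberately naive; the risk phrases are invented for illustration, and a real NLP screen would use a trained language model rather than regular expressions) scans contract text for red-flag language:

```python
import re

# Illustrative risk phrases only -- not a real compliance rule set.
RISK_PATTERNS = {
    "facilitation payment": r"facilitat\w* payment",
    "undisclosed agent": r"undisclosed (agent|intermediary)",
    "success fee": r"success fee",
}

def scan_contract(text):
    """Return the risk categories whose patterns appear in the text."""
    return sorted(name for name, pattern in RISK_PATTERNS.items()
                  if re.search(pattern, text, re.IGNORECASE))

clause = ("Consultant may engage an undisclosed intermediary and shall "
          "receive a success fee upon award of the concession.")
print(scan_contract(clause))
```

The value of NLP over this kind of keyword matching is that a model catches risky language phrased in ways no rule author anticipated; the workflow, though, is identical: surface the clause, and let a lawyer read it.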

Compliance professionals must also advocate for standardized data practices. As Mars Wrigley’s case illustrates, data silos impede process efficiency. In compliance, inconsistent data can obscure risks, making standardized data governance a cornerstone of effective compliance.

Step 5: Foster a Culture of Continuous Improvement

AI and process management are not "set-it-and-forget-it" solutions. As Mars Wrigley demonstrated, continuous monitoring and iterative improvements are critical for sustaining gains. For compliance professionals, this means regularly reviewing and updating AI tools to address emerging risks and regulatory changes.

Lessons for Compliance Professionals

Mars Wrigley’s journey highlights several key takeaways for compliance leaders:

  1. Invest in AI Thoughtfully. Technology is not a silver bullet. Its effectiveness depends on how well it integrates with and supports compliance processes.
  2. Adopt a Holistic View of Compliance. Compliance risks rarely confine themselves to one department. Breaking down silos through cross-functional process management improves visibility and reduces risk.
  3. Prioritize Data Governance. High-quality, standardized data is essential for both AI and compliance. Without it, even the best tools cannot deliver meaningful insights.
  4. Embrace Change Management. As with Mars Wrigley’s digital transformation, compliance process improvements require buy-in from leadership and employees.

The Compliance Call to Action

Compliance has been reactive for too long, focusing on addressing failures rather than preventing them. Integrating AI into process management offers an opportunity to shift that paradigm. By combining the best of technology and process management, compliance programs can reduce risk and enhance business value.

Mars Wrigley’s success story reminds us that the tools and strategies to transform compliance are available—but the onus is on compliance professionals to lead the charge. Whether through smarter risk management, better stakeholder engagement, or innovative technology adoption, the path forward is clear: process management and AI are not just operational tools; they are the future of compliance.

Now is the time to act. By adopting process management principles and leveraging AI, compliance leaders can build programs that are not only effective but also resilient, sustainable, and aligned with organizational goals. The question is no longer whether compliance should embrace these tools but how quickly they can integrate them into their processes.

By learning from companies like Mars Wrigley, compliance professionals can reimagine their programs, aligning them with the business’s needs while staying ahead of regulatory requirements.

Categories
SBR - Authors' Podcast

SBR – Author’s Podcast – Exploring the Future of Work, Ethics, and Compliance with Kelly Monahan, Part 2

Welcome to the Sunday Book Review, The Authors Podcast! Host Tom Fox visits with authors in the compliance arena and beyond in this Podcast Series. Today, Tom is joined by his good friend and colleague, Earnie Broughton (Earnie from Boerne), to visit with Dr. Kelly Monahan, co-author (with Dr. Christie Smith) of the soon-to-be-released book Essential: How Distributed Teams, Generative AI, and Global Shifts are Creating a New Human-Powered Leader. We three had such good fun that we went on for nearly an hour, so we have broken up the interview into two podcasts. If you have not checked out our first episode, you can do so by clicking here.

In Part 2, we dive deeply into effective communication tools for conveying corporate values to diverse workplace groups, emphasizing tailored training and gamification. Kelly highlights the importance of engaging, behavior-reinforcing communications through storytelling and public recognition systems. Emphasizing intrinsic motivation over financial incentives, Kelly draws on behavioral economics and the importance of fostering an environment of curiosity and context awareness for leadership roles. The discussion also addresses the nuances of generational differences in the workforce and the importance of diversity, equity, inclusion (DEI), and ESG initiatives for long-term organizational sustainability. Compliance professionals are encouraged to stay ahead of AI developments and promote positive behaviors to align with evolving business and ethical standards.

Key highlights:

  • Effective Communication Tools for Corporate Values
  • Future of Leadership in the Age of AI
  • Suspending Self-Interest and Cultivating Curiosity
  • Importance of Context in Ethical Decision-Making
  • Generational Differences in the Workforce
  • Role of Ethics and Compliance Professionals

Resources:

The Essential Website

Pre-Order Essential: How Distributed Teams, Generative AI, and Global Shifts are Creating a New Human-Powered Leader on Amazon.com

Kelly Monahan on LinkedIn

Earnie Broughton on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Kaizen 2.0: Leveraging AI for Continuous Improvement in Compliance

In the late 1940s, engineer Taiichi Ohno introduced the world to the Toyota Production System, an operational approach rooted in the Japanese principle of Kaizen or, as we call it today, continuous improvement. By prioritizing incremental enhancements and engaging employees at all levels, Toyota transformed manufacturing with concepts like worker empowerment, just-in-time manufacturing, root-cause analysis, and total quality management. The result? Toyota became the largest automaker in the world and a gold standard for process excellence. All this and much more was found in a recent Harvard Business Review article, The Secret to Successful AI-Driven Process Redesign, by H. James Wilson and Paul R. Daugherty.

I use their article as a starting point to explore where Kaizen meets the transformative power of artificial intelligence (AI) in the compliance realm. Kaizen 2.0 empowers employees with AI tools to make data-driven decisions, streamline processes, and elevate organizational performance in this new era. For compliance professionals, the principles behind this transformation offer a powerful roadmap for managing risk, embedding compliance into your business processes, and creating resilient risk management structures.

From Kaizen to Kaizen 2.0: The Role of AI in Compliance 

At its core, Kaizen is about empowering employees to improve processes continuously. Kaizen 2.0 amplifies this with AI, making advanced tools accessible to non-technical employees and enabling them to synthesize complex data for actionable insights. For compliance teams, this means using AI not to replace human judgment but to enhance it, whether by automating routine tasks, detecting risks, or uncovering inefficiencies.

Mercedes-Benz provides an interesting example. The company’s MO360 Data Platform democratizes data access across its global production network, enabling employees at every level to make data-driven decisions. A frontline worker can query AI about assembly-line bottlenecks or supply chain delays and receive actionable real-time recommendations. Imagine a compliance professional leveraging similar tools to identify patterns in third-party transactions or track policy adherence across business units.

This democratization of information underscores a key lesson for compliance professionals. AI tools are most effective when they empower teams rather than replace them. By augmenting human expertise, compliance programs can scale their impact while fostering a culture of accountability and engagement.

AI-Driven Tools: Unlocking New Compliance Opportunities 

Incorporating AI into compliance frameworks opens the door to new possibilities. Consider the following applications for the compliance function.

  • Root-Cause Analysis

Root-cause analysis can become more powerful with AI. Generative AI tools can analyze vast amounts of data to pinpoint the underlying root causes of compliance failures. For example, training AI on high-quality data can reduce false positives in transaction monitoring, allowing teams to focus on genuine risks. Using AI in the root-cause process could allow a compliance professional to determine the root cause of every compliance failure, whether simply a hiccup or a major system failure.

  • Just-in-Time Compliance

Borrowing from Toyota’s just-in-time manufacturing, compliance teams can use AI to implement “just-in-time compliance.” AI tools can monitor real-time transactions, communications, or activities, flagging issues as they occur rather than after the fact. This proactive approach aligns with regulators’ increasing focus on continuous monitoring. Also, consider how you could send a personalized compliance message to an employee who is about to travel to a high-risk country or engage in a high-risk activity.
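
That travel example is easy to picture in code. The sketch below (a bare-bones illustration; the country names and message wording are hypothetical) returns a targeted reminder the moment a trip to a high-risk jurisdiction is booked:

```python
# Illustrative high-risk jurisdiction list -- not real guidance.
HIGH_RISK_COUNTRIES = {"Ruritania", "Freedonia"}

def travel_reminder(employee, destination):
    """Return a just-in-time compliance reminder for trips to high-risk
    jurisdictions, or None when no intervention is needed."""
    if destination in HIGH_RISK_COUNTRIES:
        return (f"{employee}: before travelling to {destination}, please "
                f"review the anti-bribery policy and gift and hospitality "
                f"limits, and contact compliance with any questions.")
    return None

print(travel_reminder("Ana", "Ruritania"))
```

Wired into the travel-booking system, this delivers the right training at the precise moment of risk, which is the whole point of just-in-time compliance.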

  • Employee Empowerment

AI-enabled compliance platforms can empower employees across the organization to identify and address risks. This offers a great opportunity to move a compliance tool directly to the first line of defense. A generative AI tool could help employees draft accurate disclosures, navigate complex policies, or report concerns anonymously and securely. By embedding compliance tools into day-to-day workflows, organizations can create a proactive compliance culture and make the process more efficient.

Reshaping Risk Management: Lessons from Kaizen 2.0 

One of the most transformative aspects of Kaizen 2.0 is how it redefines risk management. Merck uses generative AI to improve quality control in drug inspection processes in the pharmaceutical industry. By creating synthetic defect-image data, AI reduces false rejects by over 50%, cutting waste and enhancing efficiency.

Compliance professionals can take inspiration from this approach by leveraging AI to address data quality issues. For instance, AI-powered tools can identify inconsistencies in due diligence data, streamline third-party risk assessments, and ensure consistent policy application across global operations.

Similarly, companies like Colgate-Palmolive and Nestlé are using AI to drive innovation in product development. For compliance teams, these advancements signal the potential for AI to transform regulatory reporting, training, and monitoring by making these processes more adaptive and aligned with business goals.

Overcoming Challenges: Ensuring Human-Centric AI Adoption 

While AI offers immense potential, successful adoption requires careful planning and execution. Compliance professionals must address the following challenges:

  1. Employee Training and Engagement. Like Mercedes-Benz’s Turn2Learn initiative, compliance teams should invest in training employees to use AI within compliance programs. Educating staff on using AI tools effectively ensures they can take part in compliance initiatives and take ownership of risk management.
  2. Data Quality and Integration. High-quality data is the foundation of effective AI tools. Compliance leaders must champion data governance initiatives to eliminate silos, standardize data formats, and ensure accuracy. This has been on the Department of Justice’s (DOJ) mind since 2020 and was reiterated in the 2024 Evaluation of Corporate Compliance Programs.
  3. Ethical AI Usage. Compliance teams must lead efforts to ensure AI tools are used ethically and transparently. This includes validating AI outputs, addressing biases, and maintaining accountability for decisions informed by AI.

The Future of Kaizen 2.0 in Compliance

The convergence of AI, digital twins, and autonomous agents will redefine process management in compliance. Autonomous agents powered by generative AI can independently execute tasks, adapt strategies, and continuously improve their performance. This means a shift from routine oversight to strategic leadership for compliance professionals.

Walmart uses autonomous agents for inventory management. Compliance teams could deploy similar agents to monitor real-time regulatory changes, update policies, and notify stakeholders of critical updates.

Looking ahead, digital twins, which are virtual models of real-world systems, could revolutionize compliance training and testing. A digital twin of an organization’s compliance framework could simulate the impact of regulatory changes, test the effectiveness of controls, and identify vulnerabilities before they become liabilities.

A Call to Action for Compliance Professionals

The principles of Kaizen 2.0 offer a roadmap for transforming compliance programs. By embracing AI and empowering employees, compliance leaders can foster a culture of continuous improvement that meets DOJ requirements and drives business success. Three key steps help the compliance professional begin.

The first is to identify opportunities for AI integration in both your compliance program and the overall compliance function. Begin by mapping compliance processes and identifying areas where AI can add value, such as risk monitoring, policy management, or training. The second is to engage employees, fostering a culture of collaboration by involving them in AI-driven compliance initiatives and providing the training and resources to help them contribute to continuous improvement. The final step is to monitor and continuously improve: establish clear metrics for compliance performance, use AI to track progress, and review and refine processes so they remain effective and aligned with business goals, updating and improving as new data becomes available to you.

Compliance professionals have a unique opportunity to lead our organizations into the future. By leveraging Kaizen 2.0 principles and AI tools, we can create compliance programs that are effective, resilient, adaptive, and aligned with organizational values. Let’s make continuous improvement the cornerstone of a fully operationalized compliance program and demonstrate to our organizations that effective compliance leads to more efficient processes, which in turn drive greater ROI and profitability.

Categories
Blog

Overcoming AI Resistance for Corporate Compliance Professionals

Artificial intelligence (AI) presents a paradox for corporate leaders. On one hand, its potential is undeniable: in a 2023 Gartner survey, 79% of corporate strategists deemed AI, automation, and analytics critical to their success. Yet, only 20% actively use AI in their daily activities. The gap between intention and action speaks volumes, especially in compliance, where AI offers unprecedented opportunities to manage risk, enhance efficiency, and ensure adherence to regulations. In a recent Harvard Business Review article entitled Why People Resist Embracing AI, Julian De Freitas reviewed this issue and provided some ways to think through how to respond.

Despite its promise, AI adoption is hindered by human skepticism. Concerns range from fears of job loss to distrust in AI’s capacity for ethical decision-making. For compliance professionals, understanding and addressing these barriers is vital for leveraging AI to strengthen compliance programs and drive corporate integrity. In this blog post, I want to explore these challenges and how compliance leaders can overcome them. I have adapted De Freitas’s article for the compliance professional.

The Five Barriers to AI Adoption in Compliance

  • AI’s Opacity: The “Black Box” Problem

Many employees resist AI because it operates as an inscrutable “black box,” offering conclusions without clear explanations. This lack of transparency can be a deal-breaker for compliance teams, as accountability is paramount in regulatory environments. How can an algorithm flag a suspicious transaction or identify potential bribery risks without explaining its rationale?

Compliance leaders should prioritize AI tools that offer clear, comparative explanations to overcome this barrier. For instance, instead of stating that a third-party transaction was flagged as high risk, the system should explain why, perhaps because of discrepancies in invoice patterns or connections to sanctioned entities. Such insights enhance trust and empower teams to make informed decisions.
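
The difference between a black box and an explainable flag can be made concrete. The sketch below (a simplified rule-based illustration; the factor names and thresholds are invented, and a real system would combine rules with model scores) returns not just a risk score but the reasons behind it:

```python
def score_third_party(tx):
    """Score a third-party transaction and return (score, reasons),
    so the flag is explainable rather than a black-box verdict."""
    reasons = []
    if tx.get("invoice_variance_pct", 0) > 20:
        reasons.append("invoice amount deviates >20% from historical pattern")
    if tx.get("counterparty_sanctioned"):
        reasons.append("counterparty linked to a sanctioned entity")
    if tx.get("country_risk", 0) >= 7:
        reasons.append("destination country rated high risk")
    return len(reasons), reasons

score, why = score_third_party({
    "invoice_variance_pct": 35,
    "counterparty_sanctioned": False,
    "country_risk": 8,
})
print(score, why)  # two reasons: invoice deviation and country risk
```

Because every flag carries its rationale, a compliance officer can validate, escalate, or dismiss it with confidence, and can explain the decision to a regulator later.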

Start small. Introducing simpler AI models before scaling to more complex ones can build confidence. Much like Miroglio Fashion’s approach to demand forecasting, a pilot program allows teams to familiarize themselves with AI and see its benefits before adopting more advanced systems.

  • AI Is Perceived as Emotionless

Compliance often involves navigating complex, human-centric issues, such as whistleblower report triage, institutional justice and fairness, or ethical dilemmas. Many employees doubt AI’s ability to handle such subjective tasks, viewing it as emotionless and rigid. While AI can process vast amounts of data, can it understand the nuances of a whistleblower’s complaint or the subtleties of cultural differences in compliance?

Here, framing matters. Compliance leaders should emphasize AI’s ability to provide objective insights while leaving subjective decision-making to human professionals. For instance, AI can flag patterns in expense reports suggesting potential fraud, but the decision to investigate remains with compliance officers.

Anthropomorphizing AI tools can also make them more relatable. Tools like Amazon Alexa, with humanlike names and voices, have shown that users are more willing to interact with AI when it feels approachable. However, tread carefully in sensitive contexts, such as investigations, where a less personalized AI may feel less intrusive. Always remember the Human-in-the-Loop.

  • AI’s Perceived Rigidity

A common misconception about AI is that it cannot adapt or evolve. For compliance professionals, this rigidity could mean AI systems are seen as inflexible, unable to account for unique organizational contexts or evolving regulatory landscapes.

To address this, emphasize AI’s learning capabilities. Tools that improve over time, such as those that adapt to new fraud schemes or regulatory updates, particularly those built on large language models, can demonstrate AI’s ability to evolve alongside the business. Netflix’s content recommendations, for example, continuously improve based on user behavior. Compliance systems should follow suit, showcasing how AI refines its processes to better meet organizational needs.

At the same time, compliance leaders must balance flexibility with predictability. Highly adaptable AI systems can introduce risks if they deviate too far from expected outcomes. Regular monitoring and safeguards are critical to ensure the system operates within defined ethical and regulatory boundaries.

  • Fear of Loss of Control

AI’s autonomy often feels threatening, particularly in compliance, where human judgment is paramount. Employees may worry that AI will override their expertise or act independently in ways that could jeopardize compliance efforts. For example, an AI tool autonomously approving transactions without human review might lead to unchecked risks.

The solution? Implement human-in-the-loop systems, where AI supports decision-making rather than replaces it. Nest’s smart thermostat, which allows users to switch between manual control and automation, is an excellent analogy. In compliance, this could mean using AI to flag risks while leaving final decisions to compliance officers. Such hybrid models restore employees’ sense of agency while ensuring AI enhances rather than undermines human oversight.
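
A human-in-the-loop design can be as simple as a routing rule. The sketch below (a minimal illustration; the alert format and threshold are assumptions) auto-dismisses only low-scoring noise and queues everything else for a compliance officer, so the AI never acts alone:

```python
# Hypothetical AI-generated alerts for illustration.
alerts = [
    {"id": "A1", "risk_score": 0.92},
    {"id": "A2", "risk_score": 0.10},
    {"id": "A3", "risk_score": 0.55},
]

def route_alert(alert, threshold=0.5):
    """AI flags; humans decide. Alerts at or above the threshold are
    queued for a compliance officer instead of being auto-actioned."""
    return "human_review" if alert["risk_score"] >= threshold else "auto_dismiss"

review_queue = [a["id"] for a in alerts if route_alert(a) == "human_review"]
print(review_queue)  # ['A1', 'A3']
```

Tuning the threshold is itself a governance decision: set it too high and the human safety valve disappears; too low and officers drown in noise.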

  • Preference for Human Interaction

Compliance is inherently relational. Building trust, navigating cultural differences, and addressing employee concerns require human empathy—qualities many believe AI lacks. Resistance to AI often stems from the belief that humans are better equipped to handle nuanced interpersonal issues.

While AI cannot replicate human empathy, it can support human efforts. For example, generative AI can analyze patterns in hotline reports to identify systemic issues, allowing compliance officers to focus on building relationships and fostering a speak-up culture. Framing AI as a tool that amplifies human capabilities rather than replacing them can help reduce resistance.

Strategies for Driving AI Adoption in Compliance

  1. Start with Transparency. Be upfront about what AI can and cannot do. Educate employees on how AI systems work, their limitations, and the safeguards to prevent misuse. Transparency builds trust and encourages collaboration.
  2. Focus on Small Wins. Demonstrating tangible benefits through pilot programs can win over skeptics. For instance, AI can automate low-risk tasks like policy distribution or routine transaction monitoring. Success in these areas can pave the way for broader adoption.
  3. Prioritize Training and Support. AI adoption requires investment in employee training. Equip teams with the skills to use AI tools effectively and provide ongoing support to address questions or concerns. Mercedes-Benz’s Turn2Learn initiative offers extensive AI training and is a model worth emulating.
  4. Align AI with Ethical Standards. Compliance professionals must ensure AI systems align with the organization’s values and ethical standards. Regular audits, bias checks, and transparent reporting can reassure stakeholders that AI is being used responsibly.
  5. Measure and Iterate. Establish clear metrics to evaluate AI’s impact on compliance processes. Use these insights to refine the system, addressing pain points and enhancing effectiveness.

AI in Compliance: A Strategic Imperative 

AI’s potential to revolutionize compliance is immense. From automating routine tasks to identifying emerging risks, it can make programs more efficient, proactive, and resilient. However, realizing this potential requires more than technology; it demands a cultural shift.

Compliance leaders must champion AI adoption by addressing psychological barriers and demonstrating its value. Organizations can harness AI to strengthen compliance and drive business success by prioritizing transparency, fostering trust, and empowering employees. As the Gartner survey reminds us, AI is not just a tool for the future—it’s a strategic imperative for today. The question isn’t whether to adopt AI but how to do so in a way that aligns with organizational goals and values. For compliance professionals, the path forward is clear: embrace AI, empower your teams, and lead the charge toward a more efficient, ethical, and innovative compliance landscape.