Categories
Compliance Tip of the Day

Compliance Tip of the Day: Leveraging Compensation to Drive Compliance

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, your compliance program must fully incentivize compliance and impose consequences for misconduct, including by senior management.

Categories
Blog

2024 ECCP – Embracing Continuous Improvement

In her recent speech at the Society of Corporate Compliance and Ethics' 23rd Annual Compliance & Ethics Institute, Principal Deputy Assistant Attorney General Nicole M. Argentieri discussed the Evaluation of Corporate Compliance Programs (2024 ECCP). (A copy of her remarks can be found here.) Today, I want to consider her remarks and what the 2024 ECCP says about continuous improvement.

Continuous Improvement: A Foundational Pillar

The ability to adapt and evolve is at the heart of any successful compliance program. Assistant Attorney General Lanny Breuer said as much in 2009, and it remains true today. Continuous improvement ensures compliance programs remain agile and responsive to internal and external pressures. The DOJ's 2024 ECCP clarified that there is no one-size-fits-all approach to compliance. Instead, companies must tailor their programs to reflect their specific risk profiles, industries, and operational footprints. The three key questions the DOJ asks when evaluating a company's compliance program are pivotal:

  1. Is the program well-designed?
  2. Is it applied in good faith and adequately resourced?
  3. Does it work in practice?

The answers to these questions must evolve as the company grows, its risk environment changes and new technologies or regulatory frameworks emerge. In other words, continuous improvement should be ingrained in the DNA of the compliance function.

Focus on Emerging Risks and Technology

A critical aspect of the 2024 ECCP update is its emphasis on emerging risks, particularly those related to artificial intelligence (AI) and other disruptive technologies. The DOJ has clarified that prosecutors will closely examine how companies assess and mitigate risks associated with AI and technology-enabled schemes. In an age where AI is increasingly used in business operations, compliance professionals must ensure that their companies are leveraging these technologies ethically and implementing robust controls to monitor for potential misuse.

For instance, as AI systems are deployed in decision-making processes—such as approving financial transactions or conducting due diligence—companies must have mechanisms to validate AI-generated data’s accuracy and reliability. This includes periodic testing, ongoing monitoring, and ensuring that human oversight remains an integral part of the compliance process.

Moreover, continuous improvement in this area involves staying ahead of technological trends. Compliance professionals must regularly update risk assessments for new technological developments, ensuring their controls and policies remain relevant. The ability to proactively manage these emerging risks is a hallmark of a forward-thinking compliance program.

Encouraging a Speak-Up Culture

Another critical update to the ECCP addresses the importance of fostering a “speak-up” culture within organizations. The DOJ’s increased scrutiny of whistleblower protections underscores the need for companies to encourage internal reporting of misconduct without fear of retaliation. Compliance programs must be designed to detect wrongdoing and provide employees with the tools and confidence to report issues when they arise.

Continuous improvement in this area means regularly testing and refining internal reporting mechanisms. Companies should ask themselves: Are our employees aware of how to report misconduct? Do they trust the process? Are we doing enough to protect whistleblowers? The ECCP now explicitly evaluates whether companies have anti-retaliation policies and whether they promote a culture encouraging employees to come forward.

It is also worth noting that companies can earn significant benefits by prioritizing internal reporting. Under the DOJ’s whistleblower pilot program, companies that receive an internal report and then self-disclose misconduct to the DOJ within 120 days can qualify for a presumption of a declination of prosecution. This sends a powerful message that promoting a speak-up culture is the right thing to do and strategically advantageous.

Leveraging Data for Compliance Effectiveness

The 2024 ECCP also strongly emphasizes the role of data in compliance programs. Companies are expected to use data to identify misconduct and assess the effectiveness of their compliance programs. Compliance professionals must ensure adequate access to relevant data sources and the resources to analyze that data effectively.

Continuous improvement in data management involves regularly auditing the sources and quality of data used in the compliance program. Are compliance personnel receiving timely and relevant data? Are there gaps in data collection that could hinder the detection of misconduct? By addressing these questions and implementing the necessary improvements, companies can ensure that their compliance programs function efficiently.
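
Parts of this kind of data audit can be automated. The following is a minimal sketch, assuming purely illustrative feed names and freshness thresholds (nothing here reflects a specific compliance platform), of how a compliance team might flag stale or missing data feeds:

```python
from datetime import datetime, timedelta, timezone

# Illustrative freshness thresholds (hours) for compliance-relevant data feeds.
FEED_FRESHNESS_HOURS = {
    "hotline_reports": 24,
    "third_party_screening": 72,
    "expense_transactions": 24,
}

def audit_feed_freshness(last_received: dict) -> list:
    """Return findings for feeds that are stale or missing entirely."""
    findings = []
    now = datetime.now(timezone.utc)
    for feed, max_hours in FEED_FRESHNESS_HOURS.items():
        received = last_received.get(feed)
        if received is None:
            findings.append(f"{feed}: no data received - possible collection gap")
        elif now - received > timedelta(hours=max_hours):
            age_hours = (now - received).total_seconds() / 3600
            findings.append(f"{feed}: last update {age_hours:.0f}h ago (limit {max_hours}h)")
    return findings

# Example with made-up timestamps: one stale feed, one missing feed.
sample = {
    "hotline_reports": datetime.now(timezone.utc) - timedelta(hours=30),
    "expense_transactions": datetime.now(timezone.utc) - timedelta(hours=2),
}
for finding in audit_feed_freshness(sample):
    print(finding)
```

Findings like these would feed the periodic audit described above, giving compliance a concrete record of where data collection gaps exist.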

The Power of Adaptation

One of the most insightful aspects of the 2024 ECCP is its focus on learning from past mistakes—whether those mistakes occurred within the company or elsewhere in the industry. The DOJ encourages companies to conduct thorough root cause analyses after incidents of misconduct, using those insights to inform and improve compliance policies and procedures.

Incorporating lessons learned into a compliance program is key to continuous improvement. Companies should routinely review their own experiences and external enforcement actions to identify weaknesses and strengthen their controls. For example, a company that uncovers a gap in its third-party due diligence process should take immediate action to address it and prevent similar issues.

Compensation and Clawbacks: A Shift Toward Accountability

Finally, the DOJ's Compensation Incentives and Clawbacks Pilot Program is another area where continuous improvement can drive compliance excellence. By aligning compensation structures with ethical behavior, companies can incentivize employees to prioritize compliance. The DOJ now expects compensation systems to include criteria that promote compliance and deter misconduct, and early indications suggest this is positively impacting corporate behavior.

Continuous improvement in this area means regularly assessing whether the metrics used to evaluate employee performance are aligned with compliance objectives. Companies should also ensure that their compensation structures provide clear consequences for misconduct, such as clawing back bonuses or withholding future compensation from culpable employees.

In 2024, and as we move into 2025, continuous improvement is not a luxury but a necessity. Compliance professionals must remain vigilant, regularly evaluating and updating their programs to address new risks, leverage emerging technologies, and promote a strong culture of ethics. The DOJ's 2024 ECCP provides a roadmap for how companies can achieve these goals, but the responsibility ultimately falls on compliance professionals to ensure that their programs are well-designed and effective in practice.

As we progress, the key to success lies in our ability to embrace continuous improvement. We must make the necessary investments in compliance to prevent, detect, and remediate misconduct. By doing so, we protect our organizations from legal and financial risk and foster a corporate culture that values integrity and ethical leadership.

Categories
Compliance Tip of the Day

Compliance Tip of the Day: Why Data Access is Key to Compliance Effectiveness

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we explore why the DOJ will now evaluate whether compliance teams have adequate access to the necessary data to assess the effectiveness of their programs.

Categories
Compliance Tip of the Day

Compliance Tip of the Day: Fostering a Culture of Speak Up

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we explore how the DOJ has placed significant emphasis on encouraging a culture where employees feel comfortable reporting misconduct.

Categories
Compliance Tip of the Day

Compliance Tip of the Day: Embracing Continuous Improvement in Compliance Programs

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we explore why the DOJ’s emphasis on continuous improvement in compliance programs is a call to action for all of us.

Categories
Innovation in Compliance

Innovation in Compliance: Evie Wentink on Rethinking Compliance

Innovation comes in many areas, and compliance professionals need not only to be ready for it but to embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast.

In this episode, Tom welcomes back Evie Wentink to discuss the importance of rethinking ethics and compliance practices.

Evie shares insights from her recent LinkedIn articles on best practices for ethics hotlines and the importance of finding creative ways to engage employees in compliance topics. She reads a whimsical Dr. Seuss-inspired piece on reaching ethics hotlines and emphasizes the need for compliance messaging to be approachable and engaging. Additionally, Evie discusses the challenges compliance professionals face with limited budgets and offers practical solutions such as leveraging LinkedIn for networking and creating low-cost, effective compliance awareness tools.

The conversation also touches on the significance of changing the narrative around ethics and compliance for younger generations. Evie shares her experiences discussing compliance with her children and highlights the need for better education in schools to prepare future employees. She concludes by mentioning her new website, Ethical Edge Experts, and various platforms she’s using to spread compliance awareness. Tom and Evie agree on the necessity of continuous dialogue and innovation in the compliance field.

Key Highlights:

  • Rethinking Compliance Practices
  • Creative Messaging for Ethics Hotlines
  • Leveraging Low-Cost Resources
  • Engaging Managers in Compliance

Resources:
Evie Wentink on LinkedIn

Evie’s Top 10 Compliance Back to Basics

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Regulatory Ramblings

Regulatory Ramblings: Episode 54 – From Secret Service Agent to Global Financial Crime Fighter: David Caruso’s 30-Year Journey

David Caruso is the founder and managing director of the Dominion Advisory Group, a consulting firm based in Virginia, near the nation’s capital. The firm works with banks facing regulatory enforcement actions across the U.S., Europe, and Asia. David aids institutions and organizations in navigating financial crime risk and compliance modernization globally.

A former special agent with the US Secret Service and a graduate of George Washington University, he has been at the forefront of shaping the financial crime risk and compliance profession since 1996. Building anti-money laundering (AML) and sanctions compliance programs at banking and financial institutions across the US and internationally, overseeing headline-grabbing corruption and money laundering investigations, and building and selling a RegTech software firm have afforded him an ideal perspective from which to reflect on every major issue and trend in the financial crime compliance space over the past 25 years.

In this episode of Regulatory Ramblings, David shares his reflections on a nearly three-decade career in AML and financial crime compliance with our host, Ajay Shamdasani. 

He recounts having worked at global institutions like JP Morgan, Riggs Bank, Wachovia, Washington Mutual, and HSBC, to name a few. His notable achievements include his time as Riggs Bank’s chief compliance and AML officer.

In that role, he was hired to address some program weaknesses cited by the US Treasury Department’s Office of the Comptroller of the Currency (OCC). While at Riggs, David’s team uncovered two notorious international corruption schemes involving the government of Equatorial Guinea and former Chilean dictator Augusto Pinochet. The team’s work led to investigations by the Department of Justice and the U.S. Senate Permanent Subcommittee on Investigations. 

The cases drew worldwide media attention from justice authorities in the US, UK, Spain, and Chile. The facts uncovered by David at Riggs shook US lawmakers and regulators, kicking off 10 years of active regulatory and law enforcement action against banks across the US. 

After Riggs, David founded The Dominion Advisory Group in 2005. From his ringside seat near Washington, DC, he works closely with executive management, boards, and outside counsel to craft responses and build entire financial crime risk and compliance programs to address regulatory concerns—of which there has been no shortage in recent years. 

David also discusses the allure of AML and financial crime compliance and what brought him to the professional path he has been on for over three decades. Methodologically speaking, he recounts what has changed in AML and financial crime in that time and what has remained the same. 

He concurs that since 1970, so many additional requirements and expectations have been created that AML teams struggle to stay focused on their primary mission. Reflecting on the impact of the Bank Secrecy Act (1970), the USA PATRIOT Act (2001), the Foreign Account Tax Compliance Act (2010), or FATCA, and the more recent Anti-Money Laundering Act (2020), he shares his views on how the impact of regulatory action has distracted compliance professionals from their more critical tasks—with an eye towards how the regulatory exam-focused mindset of money laundering reporting officers (MLROs) affects operations and innovation.

David also describes the pervasive and ongoing discrepancies between what domestic and international/supranational policy-setting organizations, like the Paris-based Financial Action Task Force (FATF), say and what they do. He says, "No one wants to ask if new rules and regulations are working and whether they prevent crime or have the unintended consequence of reducing [economic] growth?"

He acknowledges the degree of geopolitical hypocrisy when it comes to AML and financial crime compliance, as well as to fighting bribery, fraud, and corruption internationally. Washington, New York, London, and Brussels all too often regulate the financial world. Yet, while the US and UK, and increasingly the EU, are among the most aggressive jurisdictions when it comes to financial crime enforcement actions, their regulatory apparatus is often used to further their geopolitical goals. It is a view that many outside the West hold.

The conversation concludes with David’s views on why sanctions against Russia stemming from its 2022 invasion of Ukraine have largely been unsuccessful, how technologies such as artificial intelligence can help AML/KYC/FCC compliance, and what policy recommendations he suggests moving forward. 

We are bringing you the Regulatory Ramblings podcasts with assistance from the HKU Faculty of Law, the University of Hong Kong’s Reg/Tech Lab, HKU-SCF Fintech Academy, Asia Global Institute, and HKU-edX Professional Certificate in Fintech.

Useful links in this episode:

  • Connect or follow David Caruso on LinkedIn

  • Dominion Advisory Group: Webpage

Connect with RR Podcast at:

LinkedIn: https://hk.linkedin.com/company/hkufintech 
Facebook: https://www.facebook.com/hkufintech.fb/
Instagram: https://www.instagram.com/hkufintech/ 
Twitter: https://twitter.com/HKUFinTech 
Threads: https://www.threads.net/@hkufintech
Website: https://www.hkufintech.com/regulatoryramblings 

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net/

Categories
Blog

Argentieri Speech and 2024 ECCP: Complying with the 2024 ECCP on AI

The Department of Justice (DOJ), in its 2024 Update, has explicitly directed companies to ensure they have robust processes in place to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is crucial to integrate these mandates into your enterprise risk management (ERM) strategies and broader compliance programs. The DOJ posed two sets of queries for compliance professionals. The first is found in Section I, entitled Is the Corporation's Compliance Program Well Designed? The following are questions a prosecutor could ask a company or compliance professional during an investigation.

Management of Emerging Risks to Ensure Compliance with Applicable Law

  • Does the company have a process for identifying and managing emerging internal and external risks, including risks related to the use of new technologies, that could potentially impact its ability to comply with the law?
  • How does the company assess the potential impact of new technologies, such as artificial intelligence (AI), on its ability to comply with criminal laws?
  • Is management of risks related to using AI and other new technologies integrated into broader enterprise risk management (ERM)  strategies?
  • What is the company’s approach to governance regarding the use of new technologies, such as AI, in its commercial business and compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from using technologies in its commercial business and compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
  • To the extent that the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company’s code of conduct?
  • Do controls exist to ensure the technology is used only for its intended purposes?
  • What baseline of human decision-making is used to assess AI?
  • How is accountability over the use of AI monitored and enforced?
  • How does the company train its employees on using emerging technologies such as AI?

The second set of questions ties AI to a company's values, ethics, and, most importantly, its culture. It is found in Section III, entitled Does the Corporation's Compliance Program Work in Practice?, under Evolving Updates, and poses the following questions:

  • If the company is using new technologies such as AI in its commercial operations or compliance program, is the company monitoring and testing the technologies so that it can evaluate whether they are functioning as intended and consistent with the company’s code of conduct?
  • How quickly can the company detect and correct decisions made by AI or other new technologies that are inconsistent with the company’s values?

Thinking through both sets of questions will lead to more questions and a deep dive into your compliance culture, philosophy, and corporate ethos. It will also bring about unprecedented opportunities for businesses. However, with these opportunities come significant risks, especially in the context of legal compliance. The DOJ has now explicitly directed companies to ensure they have robust processes to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is both crucial and even obligatory to integrate these mandates into your ERM strategies and broader compliance programs. Below are some ways a compliance professional can think through, and effectively respond to, the DOJ's latest guidance on the first set of questions.

Establish a Proactive Risk Identification Process

Managing emerging risks begins with a proactive approach to identifying potential threats before they manifest into significant compliance issues.

  • Implement a Dynamic Risk Assessment Framework. Develop a risk assessment process that continuously scans internal and external environments for emerging risks. This should include regular updates to risk profiles based on the latest technological developments, industry trends, and regulatory changes. Incorporating AI into your business and compliance operations requires that you assess its immediate impact and anticipate future risks it might pose as the technology evolves.
  • Engage Cross-Functional Teams. Ensure that your risk identification process is not siloed within the compliance function. Engage cross-functional teams, including IT, legal, HR, and operations, to provide diverse perspectives on potential risks associated with new technologies. This collaboration will help you capture a more comprehensive view of the risks and their potential impact on your organization’s ability to comply with applicable laws.

Establish Rigorous Monitoring Protocols

Monitoring AI and other new technologies isn’t just a box-ticking exercise; it’s a continuous process that requires a deep understanding of the technology and the ethical standards it must uphold.

  • Set Up Continuous Monitoring Systems. Implement real-time monitoring systems that track AI outputs and decisions as they occur. This is crucial for identifying deviations from expected behavior or ethical standards as soon as they happen. Automated monitoring tools can flag anomalies, such as decisions that fall outside predefined parameters, for further review by compliance officers.
  • Define Key Performance Indicators (KPIs). Develop KPIs that specifically measure the alignment of AI outputs with your company's code of conduct. These include fairness, transparency, accuracy, and ethical impact metrics. Regularly review these KPIs to ensure that AI systems perform within acceptable boundaries and contribute positively to your compliance objectives (an illustrative sketch of this kind of check follows this list).
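
To make this more concrete, here is a minimal sketch, assuming a hypothetical AI decision record with a confidence score, an approved amount, and a human-review flag; the field names and thresholds are illustrative placeholders rather than anything drawn from the ECCP or a particular AI platform:

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    decision_id: str
    amount_approved: float   # e.g., value of an AI-approved transaction
    confidence: float        # model confidence between 0 and 1
    human_reviewed: bool

# Illustrative parameters; real thresholds would come from the risk assessment.
MAX_UNREVIEWED_AMOUNT = 50_000.0
MIN_CONFIDENCE = 0.80

def flag_for_review(decision: AIDecision) -> list:
    """Return the reasons (if any) a decision falls outside predefined parameters."""
    reasons = []
    if decision.amount_approved > MAX_UNREVIEWED_AMOUNT and not decision.human_reviewed:
        reasons.append("high-value approval without human review")
    if decision.confidence < MIN_CONFIDENCE:
        reasons.append("model confidence below threshold")
    return reasons

def human_review_rate(decisions: list) -> float:
    """Simple KPI: share of AI decisions that received human review."""
    return sum(d.human_reviewed for d in decisions) / len(decisions) if decisions else 0.0
```

In practice, flagged decisions would be routed to a compliance officer, and KPIs such as the human-review rate would be tracked and reported over time.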

Integrate AI Risk Management into Your ERM Strategy

The DOJ expects companies to manage AI and other technological risks within the broader context of their enterprise risk management strategies.

  • Align AI Risk Management with ERM. Ensure that risks related to AI and other new technologies are integrated into your ERM framework. This means treating AI-related risks like any other enterprise risk, with appropriate controls, governance, and oversight. AI should not be viewed as a standalone issue but as an integral part of your organization's overall risk landscape.
  • Develop AI-Specific Risk Controls. Establish controls that specifically address the unique risks posed by AI. These might include measures to prevent algorithmic bias, safeguards against AI-driven fraud, and protocols to ensure data privacy and security. Regularly review and update these controls to keep pace with technological advancements and emerging threats.

Implement Comprehensive Testing and Validation

Testing and validating AI technologies should be an ongoing practice, not just a one-time event during the deployment phase. The DOJ expects companies to rigorously evaluate whether these technologies are functioning as intended.

  • Stress-Test AI Systems. Subject your AI systems to scenarios that test their decision-making processes under different conditions. This includes testing for biases, errors, and unintended consequences. By simulating real-world situations, you can better understand how the AI might behave in practice and identify any potential risks before they manifest (see the illustrative sketch after this list).
  • Periodic Audits and Reviews. Conduct regular audits of your AI systems to verify their continued compliance with company policies and ethical standards. These audits should include technical assessments and ethical evaluations, ensuring the AI’s decisions remain consistent with your company’s values over time.
  • External Validation. Consider bringing in third-party experts to validate your AI systems. External validation can objectively assess your AI’s functionality and ethical alignment, offering insights that might not be apparent to internal teams.
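
As one illustration of what a bias stress test might look like, the sketch below compares approval rates across groups in a test run and flags large disparities using a common rule-of-thumb ratio; the group labels, data shape, and 0.8 threshold are assumptions made for the example, not a legal standard:

```python
from collections import defaultdict

def approval_rates_by_group(records):
    """records: (group_label, was_approved) pairs from a stress-test run of the AI system."""
    totals = defaultdict(lambda: [0, 0])  # group -> [approved_count, total_count]
    for group, approved in records:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {group: approved / total for group, (approved, total) in totals.items()}

def disparity_flag(rates, threshold=0.8):
    """Flag if any group's approval rate falls below `threshold` times the highest rate."""
    if not rates:
        return False
    best = max(rates.values())
    return any(rate < threshold * best for rate in rates.values())

# Example with made-up test data.
test_records = [("group_a", True), ("group_a", True), ("group_b", True), ("group_b", False)]
rates = approval_rates_by_group(test_records)
print(rates, disparity_flag(rates))   # {'group_a': 1.0, 'group_b': 0.5} True
```

A flagged disparity would not by itself prove bias, but it would trigger the deeper technical and ethical review described above.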

Develop a Rapid Response Mechanism

No system is infallible; even the best-monitored AI systems can make mistakes. The key is how quickly and effectively your company can detect and correct these errors.

  • Establish a Rapid Response Team. Create a dedicated team within your compliance function responsible for addressing AI-related issues as they arise. This team should be equipped to investigate flagged decisions quickly, determine the root cause of any inconsistencies, and implement corrective actions.
  • Implement Feedback Loops. Develop feedback loops that allow for continuous learning and improvement of AI systems. When an error is detected, ensure that the AI system is updated or retrained to prevent similar issues in the future. This iterative process is essential for maintaining the integrity of AI systems over time.
  • Document and Report Corrections. Keep detailed records of any AI-related issues and the steps taken to correct them. This documentation is critical for internal tracking and for demonstrating to regulators, like the DOJ, that your company is serious about maintaining ethical AI practices (a minimal example of such a record follows this list).
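
A minimal sketch of what such a record might look like follows, assuming a simple append-only log file; the field names are hypothetical, and a real program would align them with the company's case-management or GRC system:

```python
import json
from datetime import datetime, timezone

def record_ai_correction(log_path, decision_id, issue, root_cause, corrective_action):
    """Append one structured record of an AI-related issue and the correction taken."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_id": decision_id,
        "issue": issue,
        "root_cause": root_cause,
        "corrective_action": corrective_action,
    }
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(entry) + "\n")   # one JSON object per line
    return entry

# Example usage with hypothetical details.
record_ai_correction(
    "ai_corrections.log",
    decision_id="TX-1042",
    issue="AI approved a transaction that exceeded the counterparty limit",
    root_cause="model trained on data predating the updated limit policy",
    corrective_action="transaction reversed; model retrained on current policy data",
)
```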

Strengthen AI Governance and Accountability

Governance is key to ensuring that AI and other new technologies are used responsibly and in compliance with the law.

  • Create a Governance Framework for Technology Use. Develop a governance framework outlining how AI and other emerging technologies will be used within your organization. This framework should define roles and responsibilities, set clear guidelines for the ethical use of technology, and establish protocols for monitoring and enforcement. Align the framework with your company's code of conduct and compliance objectives, and communicate these guidelines clearly to all stakeholders, including AI developers, compliance teams, and business leaders.
  • Enforce Accountability. Accountability for the use of AI should be clearly defined and enforced. This includes assigning specific oversight roles to ensure that AI systems are used as intended and that any deliberate or reckless misuse is swiftly addressed. Establish a chain of accountability spanning from the C-suite to the operational level, ensuring all stakeholders understand their responsibilities in managing AI risks.

Mitigate Unintended Consequences and Misuse

The DOJ is particularly concerned with the potential for AI and other technologies to be misused, deliberately or unintentionally, leading to compliance breaches.

  • Monitor for Unintended Consequences. Implement monitoring systems that can detect unintended consequences of AI use, such as biased decision-making, unethical outcomes, or operational inefficiencies. These systems should be capable of flagging anomalies in real-time, allowing your compliance team to intervene before issues escalate.
  • Restrict AI Usage to Intended Purposes. Ensure that AI and other technologies are used only for their intended purposes. This involves setting clear boundaries on how AI can be applied and establishing controls to prevent misuse. Regular audits should be conducted to verify that AI systems operate within these defined parameters and that any deviations are promptly corrected.

Ensure Trustworthiness and Human Oversight

As Sam Silverstein continually reminds us, culture is all about trust. The same is true for the use of AI in the workplace. AI’s trustworthiness and reliability are paramount in maintaining compliance and protecting your company’s reputation.

  • Implement Trustworthiness Controls. Develop controls to ensure the trustworthiness of AI systems, including regular validation of AI models, thorough testing for accuracy and reliability, and ongoing monitoring for performance consistency. These controls should be designed to prevent the AI from producing outputs that could lead to legal or ethical violations.
  • Maintain a Human Baseline. AI should complement, not replace, human judgment. Establish a baseline of human decision-making to assess AI outputs and ensure that human oversight is maintained where necessary. This could involve having human review processes for high-stakes decisions or integrating AI outputs into broader decision-making frameworks that involve human input.

Train Employees on Emerging Technologies

As AI and other technologies become more prevalent, employee training is essential to ensure that your workforce understands both the benefits and risks.

  • Develop Comprehensive Training Programs. Create training programs that educate employees on using AI and other emerging technologies, focusing on compliance and ethical considerations. Training should cover the potential risks, the importance of adhering to the company’s code of conduct, and the specific controls to mitigate those risks. Employees should understand how the technology works and how to identify and address any decisions that may conflict with company values. Regular training sessions reinforce the importance of ethical AI use across the organization.
  • Promote a Culture of Awareness. Encourage a culture where employees are vigilant about the risks associated with new technologies. This involves fostering an environment where employees feel empowered to speak up if they notice potential issues and are actively engaged in ensuring that AI and other technologies are used responsibly.
  • Promote a Speak-Up Culture. Encourage employees to report concerns about AI-driven decisions, just as they would report other misconduct. A robust speak-up culture is critical for catching ethical lapses early and ensuring that AI systems remain aligned with company values.

The DOJ's mandate on managing emerging risks, particularly those related to AI and other new technologies, underscores the need for a proactive, integrated approach to compliance. By embedding AI risk management within your broader ERM strategy, strengthening governance and accountability, mitigating unintended consequences, ensuring trustworthiness, and investing in employee training, compliance professionals can confidently navigate this complex landscape. The stakes are high, but with the right plan in place, your organization can harness the power of AI while staying firmly on the right side of the law.

Categories
Blog

Argentieri Speech and 2024 ECCP: Argentieri on Navigating AI Risks

Principal Deputy Assistant Attorney General Nicole M. Argentieri's speech highlighted a critical shift in the Department of Justice's (DOJ) approach to evaluating corporate compliance programs. As outlined in the updated 2024 Evaluation of Corporate Compliance Programs (2024 ECCP), the emphasis on data access signals a new era where compliance professionals are expected to wield data with the same rigor and sophistication as their business counterparts. This week, I am reviewing the speech and the 2024 ECCP. Over the next couple of blog posts, I will look at the most significant addition: the provisions around new technologies such as Artificial Intelligence (AI). Today, I will review Argentieri's remarks to see what she said. Tomorrow, I will dive deeply into the new areas in the 2024 ECCP around AI.

In her remarks, Argentieri said, “First, … Our updated ECCP includes an evaluation of how companies assess and manage risk related to using new technology such as artificial intelligence in their business and compliance programs. Under the ECCP, prosecutors will consider the technology that a company and its employees use to conduct business, whether the company has conducted a risk assessment of using that technology, and whether the company has taken appropriate steps to mitigate any associated risk. For example, prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology, such as false approvals and documentation generated by AI. If so, we will consider whether compliance controls and tools are in place to identify and mitigate those risks, such as tools to confirm the accuracy or reliability of data the business uses. We also want to know whether the company monitors and tests its technology to evaluate its functioning as intended and consistent with its code of conduct.”

Argentieri emphasizes the importance of managing risks associated with disruptive technologies like AI. These updates signal a clear directive for compliance professionals: you must take a proactive stance on AI risk management. You can take the following steps to align your compliance program with the DOJ’s latest expectations.

Conduct a Comprehensive Risk Assessment of AI Technologies

The first step in meeting the DOJ's expectations is to thoroughly assess the risks that AI and other disruptive technologies pose to your organization.

  • Identify AI Use Cases. Start by mapping out where AI is being used across your business operations. This could include everything from automated decision-making processes to AI-driven data analytics. Understanding the scope of AI use is essential for identifying potential risk areas.
  • Evaluate Vulnerabilities. Once you have a clear picture of how AI is utilized, conduct a detailed risk assessment. Look for vulnerabilities, such as the potential for AI to generate false approvals or fraudulent documentation. Consider scenarios where AI could be manipulated or fail to perform as expected, leading to compliance breaches or unethical outcomes.
  • Prioritize Risks. Not all risks are created equal. Prioritize them based on their potential impact on your business and the likelihood of occurrence. This prioritization will guide the allocation of resources and the development of mitigation strategies.

Implement Robust Compliance Controls and Tools

Once risks have been identified, the next step is to ensure that your compliance program includes strong controls and tools specifically designed to manage AI-related risks.

  • Develop AI-Specific Controls. Traditional compliance controls may not be sufficient to address AI's unique challenges. Develop or adapt controls to monitor AI-generated outputs, ensuring accuracy and consistency with company policies. This might include cross-referencing AI decisions with manual checks or implementing algorithms that flag unusual patterns for further review (see the illustrative sketch after this list).
  • Invest in AI-Compliance Tools. Specialized tools are available that can help compliance teams monitor AI systems and detect potential issues. Invest in these tools to enhance your ability to identify and mitigate AI-related risks. These tools should be capable of real-time monitoring and provide insights into the functioning of AI systems, including the accuracy and reliability of the data they generate.
  • Regular Testing and Validation. AI systems should not be a set-it-and-forget-it solution. Regularly test and validate your AI tools to ensure they function as intended. This should include stress testing under different scenarios to identify any weaknesses or biases in the system. If your company implements AI, the DOJ expects it to rigorously monitor the technology's performance and alignment with your compliance objectives.
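
As a rough sketch of the cross-referencing idea mentioned above, the function below routes an AI decision to a manual check when it is labeled high-risk, falls below a confidence threshold, or is picked for a random validation sample; the labels, 0.7 threshold, and 5% sample rate are illustrative assumptions only:

```python
import random

def requires_manual_check(ai_label, confidence, sample_rate=0.05):
    """Decide whether an AI decision should be cross-referenced with a manual check."""
    if ai_label == "high_risk":
        return True                        # every high-risk call gets a second set of eyes
    if confidence < 0.7:
        return True                        # low model confidence -> human review
    return random.random() < sample_rate   # spot-check a random slice of routine decisions

# Example usage with hypothetical decisions.
print(requires_manual_check("high_risk", 0.95))   # True
print(requires_manual_check("routine", 0.65))     # True
print(requires_manual_check("routine", 0.92))     # usually False; ~5% of the time True
```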

Monitor, Evaluate, and Adapt

AI technology and its associated risks constantly evolve, so your compliance program must be flexible and responsive.

  • Ongoing Monitoring. Continuously monitor AI systems’ performance to ensure they align with your company’s code of conduct and compliance requirements. This involves technical monitoring and assessing the ethical implications of AI decisions.
  • Adapt to New Risks. As AI technology advances, new risks will emerge. Stay informed about the latest developments in AI and disruptive technologies, and be ready to adapt your compliance program accordingly. This may involve updating risk assessments, enhancing controls, or revising your company’s overall approach to AI.
  • Engage with Technology Experts. Compliance professionals should work closely with IT and AI experts to stay ahead of potential risks. This collaboration is crucial for understanding the technical nuances of AI and ensuring that compliance strategies are technically sound and effectively implemented.

Ensure Alignment with the Company’s Code of Conduct

Finally, ensure that all AI initiatives align with your company's code of conduct and ethical standards.

  • Training and Awareness. Ensure that all employees, particularly those involved in AI development and deployment, are trained on the ethical implications of AI and the company’s code of conduct. This training should cover the importance of transparency, fairness, and accountability in AI operations.
  • Ethical AI Use. Embed ethical considerations into the AI development process. This means complying with the law and striving to use AI in ways that reflect your company's values. The DOJ will be looking to see whether your company is not only avoiding harm but also proactively promoting ethical AI use.

Argentieri’s remarks underscore the importance of managing the risks associated with AI and other disruptive technologies. Compliance professionals must take a proactive approach by conducting thorough risk assessments, implementing robust controls, and continuously monitoring AI systems to ensure they align with regulatory requirements and the company’s ethical standards. By taking these initial steps, you can meet the DOJ’s expectations and leverage AI to enhance your compliance program and overall business integrity. Join us tomorrow to take a deep dive into the new language of the 2024 ECCP and explore how to implement it.

Categories
Business Integrity Innovations

Business Integrity Innovations: Innovating Against the Odds – Dr. Amy Jadesimi on Ethical Business Practices in Nigeria

The Compliance Podcast Network (CPN) and the Center for International Private Enterprise (CIPE) bring you Business Integrity Innovations. This podcast is inspired by Ethics 1st – a multi-stakeholder initiative led by CIPE that creates pathways for accountable and sustainable investment in Africa. Companies can standardize their business practices, develop sound corporate governance systems, and demonstrate their commitment to compliance and business ethics using Ethics 1st.

In this episode of the Ethics 1st podcast, hosts Tom Fox and Lola Adekanye welcome Dr. Amy Jadesimi, a member of the Ethics 1st Advisory Committee and a trailblazer in Nigerian industrial development. Dr. Jadesimi shares her incredible journey from being a medical graduate at Oxford and a banker at Goldman Sachs to earning her MBA at Stanford and leading LADOL in Nigeria. Under her leadership, LADOL transformed from a single warehouse into a sprawling industrial hub, slashing deep offshore logistics support costs and partnering with major companies like Samsung to create significant job opportunities and foster local industry growth.

Dr. Jadesimi discusses key aspects of her work, highlighting how compliance, sustainability, and ethical business practices drive innovation and competitiveness in challenging environments. She explains the importance of maintaining compliance to build trust, secure investment, and enable sustainable growth, emphasizing the positive impact on local communities and economies. Dr. Jadesimi’s insights provide valuable lessons for businesses striving to navigate regulatory challenges and make ethical decisions that lead to long-term success.

Key Highlights:

  • Building LADOL: Challenges and Successes
  • Navigating Regulatory Challenges
  • The Importance of Compliance
  • Sustainability and Business Efficiency
  • Future of Business in Nigeria

Resources:

CIPE

Dr. Amy Jadesimi on LinkedIn

LADOL