Deputy Assistant Attorney General Nicole M. Argentieri’s speech highlighted a critical shift in the Department of Justice’s (DOJ) approach to evaluating corporate compliance programs. As outlined in the updated 2024 Evaluation of Corporate Compliance Programs (2024 ECCP), the emphasis on data access signals a new era where compliance professionals are expected to wield data with the same rigor and sophistication as their business counterparts. This week, I am reviewing the speech and the 2024 ECCP. Over the next couple of blog posts, I will look at the most significant addition: the guidance around Artificial Intelligence (AI). Today, I will review Argentieri’s remarks on this topic. Tomorrow, I will dive deeply into the new areas in the 2024 ECCP around new technologies such as AI.
In her remarks, Argentieri said, “First, … Our updated ECCP includes an evaluation of how companies assess and manage risk related to using new technology such as artificial intelligence in their business and compliance programs. Under the ECCP, prosecutors will consider the technology that a company and its employees use to conduct business, whether the company has conducted a risk assessment of using that technology, and whether the company has taken appropriate steps to mitigate any associated risk. For example, prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology, such as false approvals and documentation generated by AI. If so, we will consider whether compliance controls and tools are in place to identify and mitigate those risks, such as tools to confirm the accuracy or reliability of data the business uses. We also want to know whether the company monitors and tests its technology to evaluate its functioning as intended and consistent with its code of conduct.”
Argentieri emphasizes the importance of managing risks associated with disruptive technologies like AI. These updates signal a clear directive for compliance professionals: you must take a proactive stance on AI risk management. You can take the following steps to align your compliance program with the DOJ’s latest expectations.
Conduct a Comprehensive Risk Assessment of AI Technologies
The first step in meeting the DOJ’s expectations is to thoroughly assess the risks that AI and other disruptive technologies pose to your organization.
- Identify AI Use Cases. Start by mapping out where AI is being used across your business operations. This could include everything from automated decision-making processes to AI-driven data analytics. Understanding the scope of AI use is essential for identifying potential risk areas.
- Evaluate Vulnerabilities. Once you have a clear picture of how AI is utilized, conduct a detailed risk assessment. Look for vulnerabilities, such as the potential for AI to generate false approvals or fraudulent documentation. Consider scenarios where AI could be manipulated or fail to perform as expected, leading to compliance breaches or unethical outcomes.
- Prioritize Risks. Not all risks are created equal. Prioritize them based on their potential impact on your business and the likelihood of occurrence. This prioritization will guide the allocation of resources and the development of mitigation strategies.
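The prioritization step above can be sketched as a simple impact-by-likelihood scoring exercise. A minimal sketch, assuming an illustrative 1–5 scale; the risk names, scores, and the multiplicative scoring rule below are my own assumptions for illustration, not DOJ guidance or a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str        # an AI use case identified in the mapping step
    impact: int      # 1 (low) to 5 (severe) -- illustrative scale
    likelihood: int  # 1 (rare) to 5 (frequent) -- illustrative scale

    @property
    def score(self) -> int:
        # Simple impact x likelihood scoring; real programs may weight these differently
        return self.impact * self.likelihood

# Hypothetical risk register entries
risks = [
    AIRisk("AI-generated false approvals", impact=5, likelihood=3),
    AIRisk("Biased automated decisions", impact=4, likelihood=2),
    AIRisk("Inaccurate AI-driven analytics", impact=3, likelihood=4),
]

# Rank so the highest-scoring risks receive mitigation resources first
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score {r.score}")
```

However your program weights the factors, the output of this exercise should be a ranked register that drives where controls and resources go first.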
Implement Robust Compliance Controls and Tools
Once risks have been identified, the next step is to ensure that your compliance program includes strong controls and tools specifically designed to manage AI-related risks.
- Develop AI-Specific Controls. Traditional compliance controls may not be sufficient to address AI’s unique challenges. Develop or adapt controls to monitor AI-generated outputs, ensuring accuracy and consistency with company policies. This might include cross-referencing AI decisions with manual checks or implementing algorithms that flag unusual patterns for further review.
- Invest in AI-Compliance Tools. Specialized tools are available that can help compliance teams monitor AI systems and detect potential issues. Invest in these tools to enhance your ability to identify and mitigate AI-related risks. These tools should be capable of real-time monitoring and provide insights into the functioning of AI systems, including the accuracy and reliability of the data they generate.
- Regular Testing and Validation. AI systems should not be a set-it-and-forget-it solution. Regularly test and validate your AI tools to ensure they function as intended. This should include stress testing under different scenarios to identify any weaknesses or biases in the system. The DOJ expects your company not merely to implement AI but to rigorously monitor its performance and alignment with your compliance objectives.
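One way to implement the “flag unusual patterns for further review” control described above is a lightweight rule-based filter over AI-generated approvals. This is a minimal sketch, not a definitive implementation: the field names (`amount`, `confidence`, `human_reviewed`) and the thresholds are hypothetical assumptions standing in for whatever your systems actually record:

```python
# Minimal rule-based review filter for AI-generated approvals.
# Field names and thresholds below are hypothetical, for illustration only.

REVIEW_AMOUNT_THRESHOLD = 10_000   # escalate high-value AI approvals lacking human sign-off
MIN_CONFIDENCE = 0.90              # escalate low-confidence model outputs

def needs_manual_review(approval: dict) -> bool:
    """Return True if an AI-generated approval should be routed to a human reviewer."""
    high_value_unreviewed = (
        not approval.get("human_reviewed", False)
        and approval.get("amount", 0) >= REVIEW_AMOUNT_THRESHOLD
    )
    low_confidence = approval.get("confidence", 1.0) < MIN_CONFIDENCE
    return high_value_unreviewed or low_confidence

# Hypothetical batch of AI-generated approvals
approvals = [
    {"id": 1, "amount": 15_000, "confidence": 0.97, "human_reviewed": False},
    {"id": 2, "amount": 2_500,  "confidence": 0.70, "human_reviewed": False},
    {"id": 3, "amount": 1_000,  "confidence": 0.99, "human_reviewed": False},
]

flagged = [a["id"] for a in approvals if needs_manual_review(a)]
print(flagged)  # ids routed for manual compliance review
```

The design point is the cross-reference Argentieri’s remarks gesture at: AI outputs do not flow straight through, and the riskiest ones are forced back to a human check.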
Monitor, Evaluate, and Adapt
AI technology and its associated risks constantly evolve, so your compliance program must be flexible and responsive.
- Ongoing Monitoring. Continuously monitor AI systems’ performance to ensure they align with your company’s code of conduct and compliance requirements. This involves technical monitoring and assessing the ethical implications of AI decisions.
- Adapt to New Risks. As AI technology advances, new risks will emerge. Stay informed about the latest developments in AI and disruptive technologies, and be ready to adapt your compliance program accordingly. This may involve updating risk assessments, enhancing controls, or revising your company’s overall approach to AI.
- Engage with Technology Experts. Compliance professionals should work closely with IT and AI experts to stay ahead of potential risks. This collaboration is crucial for understanding the technical nuances of AI and ensuring that compliance strategies are technically sound and effectively implemented.
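The ongoing-monitoring idea above can be illustrated with a simple drift check: compare a recent period’s approval rate against a historical baseline and alert when it deviates beyond tolerance. The baseline rate, the tolerance, and the notion of monitoring an “approval rate” are illustrative assumptions; a real program would monitor whatever metrics its risk assessment identified:

```python
# Illustrative drift check on an AI system's approval rate. A sustained shift
# can be a signal that the system is no longer functioning as intended.

BASELINE_APPROVAL_RATE = 0.62  # hypothetical historical rate
TOLERANCE = 0.10               # hypothetical acceptable deviation

def approval_rate(decisions: list) -> float:
    """Fraction of True (approved) decisions in the period."""
    return sum(decisions) / len(decisions)

def drift_detected(decisions: list) -> bool:
    """True when the period's rate falls outside the tolerance band."""
    return abs(approval_rate(decisions) - BASELINE_APPROVAL_RATE) > TOLERANCE

recent = [True] * 85 + [False] * 15  # 85% approvals this period
if drift_detected(recent):
    print("Alert: AI approval rate outside tolerance; escalate for review")
```

A check like this is technical monitoring only; as noted above, it needs to sit alongside human assessment of whether the AI’s decisions remain consistent with the code of conduct.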
Ensure Alignment with the Company’s Code of Conduct
Finally, all AI initiatives must align with your code of conduct and ethical standards.
- Training and Awareness. Ensure that all employees, particularly those involved in AI development and deployment, are trained on the ethical implications of AI and the company’s code of conduct. This training should cover the importance of transparency, fairness, and accountability in AI operations.
- Ethical AI Use. Embed ethical considerations into the AI development process. This means complying with the law and striving to use AI in ways that reflect your company’s values. The DOJ will be looking to see whether your company is not only avoiding harm but proactively promoting ethical AI use.
Argentieri’s remarks underscore the importance of managing the risks associated with AI and other disruptive technologies. Compliance professionals must take a proactive approach by conducting thorough risk assessments, implementing robust controls, and continuously monitoring AI systems to ensure they align with regulatory requirements and the company’s ethical standards. By taking these initial steps, you can meet the DOJ’s expectations and leverage AI to enhance your compliance program and overall business integrity. Join us tomorrow to take a deep dive into the new language of the 2024 ECCP and explore how to implement it.