The Department of Justice (DOJ), in its 2024 Update, has explicitly directed companies to ensure they have robust processes in place to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is crucial to integrate these mandates into your enterprise risk management (ERM) strategies and broader compliance programs. The DOJ posed two sets of queries for compliance professionals. The first appears in Section I, entitled Is the Corporation’s Compliance Program Well Designed? These are the questions a prosecutor could ask a company or compliance professional during an investigation.
Management of Emerging Risks to Ensure Compliance with Applicable Law
- Does the company have a process for identifying and managing emerging internal and external risks, including risks related to the use of new technologies, that could potentially impact its ability to comply with the law?
- How does the company assess the potential impact of new technologies, such as artificial intelligence (AI), on its ability to comply with criminal laws?
- Is management of risks related to using AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
- What is the company’s approach to governance regarding the use of new technologies, such as AI, in its commercial business and compliance program?
- How is the company curbing any potential negative or unintended consequences resulting from using technologies in its commercial business and compliance program?
- How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
- To the extent that the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company’s code of conduct?
- Do controls exist to ensure the technology is used only for its intended purposes?
- What baseline of human decision-making is used to assess AI?
- How is accountability over the use of AI monitored and enforced?
- How does the company train its employees on using emerging technologies such as AI?
The second set of questions ties AI to a company’s values, ethics, and, most importantly, culture. It appears in Section III, entitled Does the Corporation’s Compliance Program Work in Practice?, under Evolving Updates, and poses the following questions:
- If the company is using new technologies such as AI in its commercial operations or compliance program, is the company monitoring and testing the technologies so that it can evaluate whether they are functioning as intended and consistent with the company’s code of conduct?
- How quickly can the company detect and correct decisions made by AI or other new technologies that are inconsistent with the company’s values?
Thinking across both sets of questions will lead to more questions and a deep dive into your compliance culture, philosophy, and corporate ethos. AI will also bring unprecedented opportunities for businesses. However, with these opportunities come significant risks, especially in the context of legal compliance. The DOJ has now explicitly directed companies to ensure they have robust processes to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is both crucial and even obligatory to integrate these mandates into your ERM strategies and broader compliance programs. Below are some ways a compliance professional can think through and effectively respond to the first series of questions in the DOJ’s latest guidance.
Establish a Proactive Risk Identification Process
Managing emerging risks begins with a proactive approach to identifying potential threats before they manifest into significant compliance issues.
- Implement a Dynamic Risk Assessment Framework. Develop a risk assessment process that continuously scans internal and external environments for emerging risks. This should include regular updates to risk profiles based on the latest technological developments, industry trends, and regulatory changes. Incorporating AI into your business and compliance operations requires that you assess its immediate impact and anticipate future risks it might pose as the technology evolves.
- Engage Cross-Functional Teams. Ensure that your risk identification process is not siloed within the compliance function. Engage cross-functional teams, including IT, legal, HR, and operations, to provide diverse perspectives on potential risks associated with new technologies. This collaboration will help you capture a more comprehensive view of the risks and their potential impact on your organization’s ability to comply with applicable laws.
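A dynamic, cross-functional risk assessment might be backed by something as simple as a living risk register that is re-scored on a regular review cadence. The sketch below is illustrative only; the scoring scale, field names, and example entries are assumptions, not part of the DOJ guidance:

```python
# A minimal sketch of a living risk register entry, re-scored on a
# review cadence. Scoring scale and example entries are illustrative.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    owner: str                 # cross-functional owner (IT, legal, HR, ...)
    likelihood: int            # 1 (rare) .. 5 (almost certain)
    impact: int                # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring for prioritization
        return self.likelihood * self.impact

register = [
    RiskEntry("AI model drift affecting screening accuracy", "IT", 3, 4),
    RiskEntry("Third-party AI vendor data handling", "Legal", 2, 5),
]

# Surface the highest-scored risk for the next review meeting
top_risk = max(register, key=lambda r: r.score)
```

Keeping the register in a structured form like this makes the "regular updates to risk profiles" auditable: each re-score is a recorded change rather than an informal judgment.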
Establish Rigorous Monitoring Protocols
Monitoring AI and other new technologies isn’t just a box-ticking exercise; it’s a continuous process that requires a deep understanding of the technology and the ethical standards it must uphold.
- Set Up Continuous Monitoring Systems. Implement real-time monitoring systems that track AI outputs and decisions as they occur. This is crucial for identifying deviations from expected behavior or ethical standards as soon as they happen. Automated monitoring tools can flag anomalies, such as decisions that fall outside predefined parameters, for further review by compliance officers.
- Define Key Performance Indicators (KPIs). Develop KPIs that specifically measure the alignment of AI outputs with your company’s code of conduct. These include fairness, transparency, accuracy, and ethical impact metrics. Regularly review these KPIs to ensure that AI systems perform within acceptable boundaries and contribute positively to your compliance objectives.
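The monitoring and KPI controls above might be sketched as a simple rules layer that flags decisions falling outside predefined parameters for compliance review. The thresholds and field names below are illustrative assumptions, not prescribed values:

```python
# A minimal sketch of automated anomaly flagging on AI decisions.
# All thresholds and record fields are illustrative assumptions.

MAX_APPROVAL_AMOUNT = 50_000   # decisions above this always need human review
MIN_CONFIDENCE = 0.85          # flag low-confidence model outputs

def flag_for_review(decision: dict) -> list[str]:
    """Return the reasons this AI decision falls outside predefined parameters."""
    reasons = []
    if decision.get("confidence", 0.0) < MIN_CONFIDENCE:
        reasons.append("low model confidence")
    if decision.get("amount", 0) > MAX_APPROVAL_AMOUNT:
        reasons.append("amount exceeds auto-approval limit")
    if decision.get("protected_attribute_used"):
        reasons.append("decision referenced a protected attribute")
    return reasons

# Example: a decision that trips two rules and would be routed to compliance
issues = flag_for_review({"confidence": 0.62, "amount": 75_000})
```

In practice the same flag counts can feed the KPIs: the rate of flagged decisions per thousand is itself a metric to trend over time.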
Integrate AI Risk Management into Your ERM Strategy
The DOJ expects companies to manage AI and other technological risks within the broader context of their enterprise risk management strategies.
- Align AI Risk Management with ERM. Ensure that risks related to AI and other new technologies are integrated into your ERM framework. This means treating AI-related risks like any other enterprise risk, with appropriate controls, governance, and oversight. AI should not be viewed as a standalone issue but as an integral part of your organization’s overall risk landscape.
- Develop AI-Specific Risk Controls. Establish controls that specifically address the unique risks posed by AI. These might include measures to prevent algorithmic bias, safeguards against AI-driven fraud, and protocols to ensure data privacy and security. Regularly review and update these controls to keep pace with technological advancements and emerging threats.
Implement Comprehensive Testing and Validation
Testing and validating AI technologies should be an ongoing practice, not just a one-time event during the deployment phase. The DOJ expects companies to rigorously evaluate whether these technologies are functioning as intended.
- Stress-Test AI Systems. Subject your AI systems to scenarios that test their decision-making processes under different conditions. This includes testing for biases, errors, and unintended consequences. By simulating real-world situations, you can better understand how the AI might behave in practice and identify any potential risks before they manifest.
- Periodic Audits and Reviews. Conduct regular audits of your AI systems to verify their continued compliance with company policies and ethical standards. These audits should include technical assessments and ethical evaluations, ensuring the AI’s decisions remain consistent with your company’s values over time.
- External Validation. Consider bringing in third-party experts to validate your AI systems. External validation can objectively assess your AI’s functionality and ethical alignment, offering insights that might not be apparent to internal teams.
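One common bias check that a stress test might include is comparing favorable-outcome rates across groups against the "four-fifths" rule of thumb. The sketch below is a toy illustration; the group labels and simulated outcomes are hypothetical:

```python
# A toy bias stress test: compare selection rates across groups against
# the four-fifths rule of thumb. Groups and outcomes are illustrative.

def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Each group's list holds 1 (favorable) / 0 (unfavorable) outcomes."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def passes_four_fifths(outcomes: dict[str, list[int]]) -> bool:
    # The lowest group's rate should be at least 80% of the highest group's
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values()) >= 0.8

# Simulated outputs from an AI screening model under one test scenario
scenario = {
    "group_a": [1, 1, 1, 0, 1],   # 80% favorable
    "group_b": [1, 0, 0, 0, 1],   # 40% favorable
}
biased = not passes_four_fifths(scenario)   # this scenario fails the check
```

Running checks like this across many simulated scenarios, rather than once at deployment, is what turns testing into the ongoing practice the DOJ expects.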
Develop a Rapid Response Mechanism
No system is infallible; even the best-monitored AI systems can make mistakes. The key is how quickly and effectively your company can detect and correct these errors.
- Establish a Rapid Response Team. Create a dedicated team within your compliance function responsible for addressing AI-related issues as they arise. This team should be equipped to investigate flagged decisions quickly, determine the root cause of any inconsistencies, and implement corrective actions.
- Implement Feedback Loops. Develop feedback loops that allow for continuous learning and improvement of AI systems. When an error is detected, ensure that the AI system is updated or retrained to prevent similar issues in the future. This iterative process is essential for maintaining the integrity of AI systems over time.
- Document and Report Corrections. Keep detailed records of any AI-related issues and the steps taken to correct them. This documentation is critical for internal tracking and for demonstrating to regulators, like the DOJ, that your company is serious about maintaining ethical AI practices.
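The documentation step above could be backed by an append-only correction log, so every issue and remediation step is preserved for later audit. The record fields below are illustrative assumptions; a real system would persist entries to durable storage:

```python
# A minimal sketch of an append-only log of AI-related corrections.
# Field names are illustrative; persistence is omitted for brevity.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class CorrectionRecord:
    system: str
    issue: str
    root_cause: str
    corrective_action: str
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[dict] = []

def record_correction(rec: CorrectionRecord) -> None:
    audit_log.append(asdict(rec))   # append-only: entries are never edited

record_correction(CorrectionRecord(
    system="invoice-screening-model",          # hypothetical system name
    issue="false positives spiked after retraining",
    root_cause="training data drift",
    corrective_action="rolled back model; retrained with refreshed data",
))
```

An append-only structure matters here: regulators will want to see the original issue and each corrective step, not a record edited after the fact.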
Strengthen AI Governance and Accountability
Governance is key to ensuring that AI and other new technologies are used responsibly and in compliance with the law.
- Create a Governance Framework for Technology Use. Develop a governance framework outlining how AI and other emerging technologies will be used within your organization. This framework should define roles and responsibilities, set clear guidelines for the ethical use of technology, and establish protocols for monitoring and enforcement. Ensure that this framework is aligned with your company’s code of conduct and compliance objectives, and communicate its guidelines clearly to all stakeholders, including AI developers, compliance teams, and business leaders.
- Enforce Accountability. Accountability for the use of AI should be clearly defined and enforced. This includes assigning specific oversight roles to ensure that AI systems are used as intended and that any deliberate or reckless misuse is swiftly addressed. Establish a chain of accountability spanning from the C-suite to the operational level, ensuring all stakeholders understand their responsibilities in managing AI risks.
Mitigate Unintended Consequences and Misuse
The DOJ is particularly concerned with the potential for AI and other technologies to be misused, deliberately or unintentionally, leading to compliance breaches.
- Monitor for Unintended Consequences. Implement monitoring systems that can detect unintended consequences of AI use, such as biased decision-making, unethical outcomes, or operational inefficiencies. These systems should be capable of flagging anomalies in real-time, allowing your compliance team to intervene before issues escalate.
- Restrict AI Usage to Intended Purposes. Ensure that AI and other technologies are used only for their intended purposes. This involves setting clear boundaries on how AI can be applied and establishing controls to prevent misuse. Regular audits should be conducted to verify that AI systems operate within these defined parameters and that any deviations are promptly corrected.
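One lightweight way to enforce "intended purposes only" is an allowlist check at the point where an AI system is invoked. The use-case identifiers below are hypothetical examples:

```python
# A sketch of restricting AI use to approved purposes via an allowlist.
# The use-case identifiers here are hypothetical examples.

APPROVED_USE_CASES = {"invoice_screening", "sanctions_list_matching"}

def authorize_use(use_case: str) -> bool:
    """Reject any AI invocation outside the approved purposes."""
    return use_case in APPROVED_USE_CASES

allowed = authorize_use("invoice_screening")       # approved purpose
blocked = authorize_use("employee_surveillance")   # not approved
```

Because the allowlist is explicit, periodic audits can compare actual invocation logs against it to surface any deviations for correction.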
Ensure Trustworthiness and Human Oversight
As Sam Silverstein continually reminds us, culture is all about trust. The same is true for the use of AI in the workplace. AI’s trustworthiness and reliability are paramount in maintaining compliance and protecting your company’s reputation.
- Implement Trustworthiness Controls. Develop controls to ensure the trustworthiness of AI systems, including regular validation of AI models, thorough testing for accuracy and reliability, and ongoing monitoring for performance consistency. These controls should be designed to prevent the AI from producing outputs that could lead to legal or ethical violations.
- Maintain a Human Baseline. AI should complement, not replace, human judgment. Establish a baseline of human decision-making to assess AI outputs and ensure that human oversight is maintained where necessary. This could involve having human review processes for high-stakes decisions or integrating AI outputs into broader decision-making frameworks that involve human input.
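The human-baseline principle can be sketched as simple routing logic: AI output is auto-accepted only when the decision is low-stakes and high-confidence, and everything else escalates to a human reviewer. The action names and thresholds below are illustrative assumptions:

```python
# A sketch of human-in-the-loop routing for AI decisions.
# Action names and the confidence threshold are illustrative.

HIGH_STAKES_ACTIONS = {"terminate_contract", "deny_claim", "report_to_regulator"}
AUTO_ACCEPT_CONFIDENCE = 0.95

def route(action: str, confidence: float) -> str:
    if action in HIGH_STAKES_ACTIONS:
        return "human_review"      # human baseline for high-stakes calls
    if confidence < AUTO_ACCEPT_CONFIDENCE:
        return "human_review"      # low-confidence output also escalates
    return "auto_accept"

routes = [route("deny_claim", 0.99), route("approve_invoice", 0.97)]
```

Note that a high-stakes action is escalated regardless of model confidence; the human review requirement is not something the model can score its way past.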
Train Employees on Emerging Technologies
As AI and other technologies become more prevalent, employee training is essential to ensure that your workforce understands both the benefits and risks.
- Develop Comprehensive Training Programs. Create training programs that educate employees on using AI and other emerging technologies, focusing on compliance and ethical considerations. Training should cover the potential risks, the importance of adhering to the company’s code of conduct, and the specific controls to mitigate those risks. Employees should understand how the technology works and how to identify and address any decisions that may conflict with company values. Regular training sessions reinforce the importance of ethical AI use across the organization.
- Promote a Culture of Awareness. Encourage a culture where employees are vigilant about the risks associated with new technologies. This involves fostering an environment where employees feel empowered to speak up if they notice potential issues and are actively engaged in ensuring that AI and other technologies are used responsibly.
- Promote a Speak-Up Culture. Encourage employees to report concerns about AI-driven decisions, just as they would report other misconduct. A robust speak-up culture is critical for catching ethical lapses early and ensuring that AI systems remain aligned with company values.
The DOJ’s mandate on managing emerging risks, particularly those related to AI and other new technologies, underscores the need for a proactive, integrated approach to compliance. Compliance professionals can confidently navigate this complex landscape by embedding AI risk management within their broader ERM strategies, strengthening governance and accountability, mitigating unintended consequences, ensuring trustworthiness, and investing in employee training. The stakes are high, but with the right plan in place, your organization can harness the power of AI while staying firmly on the right side of the law.