Categories
Daily Compliance News

Daily Compliance News: September 27, 2024 – The Hizzoner Indicted Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News. All from the Compliance Podcast Network.

Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

  • NYC Mayor Adams indicted on bribery and corruption charges. (NYT)
  • What happens when a news organization is a hedge fund or class action firm? (Bloomberg)
  • DOJ probing Super Micro Computer. (WSJ)
  • SEC fines 11 more firms for failures in messaging apps. (SEC Press Release)

Categories
Compliance Tip of the Day

Compliance Tip of the Day: Lesson from The John Deere FCPA Enforcement Action – Root Cause Analysis for Remediation

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we review why a root cause analysis is the first step you should take before you begin the remediation of your compliance program.

Categories
Data Driven Compliance

Data-Driven Compliance: The DOJ Mandate on Transforming Compliance Through Data Analytics and AI with Vince Walden

Are you struggling to keep up with the ever-changing compliance programs in your business? Look no further than the award-winning Data Driven Compliance podcast. Hosted by Tom Fox, it features in-depth conversations about the uses of data and data analytics in compliance programs. Data Driven Compliance is back with another exciting episode. Today, Vince Walden, founder of KonaAI, the sponsor of this podcast, returns to talk about the recent speech by Nicole Argentieri and the release of the 2024 Update to the Evaluation of Corporate Compliance Programs (ECCP).

Walden shares insights from Nicole Argentieri’s keynote and the ECCP update, emphasizing the DOJ’s focus on data access in compliance. We explore the importance of utilizing both compliance and business data for effective fraud and risk management. Walden underscores the necessity for compliance professionals to collaborate with internal audit and finance departments, advocating for a risk-based approach to data analytics and continuous controls monitoring. The discussion also delves into leveraging AI and machine learning to improve compliance efficacy and overall business operations, arguing for the proportional allocation of resources to match the company’s sophistication level.

Key Highlights:

  • DOJ’s Focus on Data Access
  • Understanding Compliance Data Analytics
  • Training Compliance Officers on Data
  • Implementing Continuous Controls Monitoring
  • Cost Savings and ROI in Compliance
  • Proportionate Resource Allocation
  • Documentation and Transparency

Resources:

Vince Walden on LinkedIn

KonaAI

Tom Fox 

Connect with me on the following sites:

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Regulatory Ramblings

Regulatory Ramblings: Episode 54 – From Secret Service Agent to Global Financial Crime Fighter: David Caruso’s 30-Year Journey

David Caruso is the founder and managing director of the Dominion Advisory Group, a consulting firm based in Virginia, near the nation’s capital. The firm works with banks facing regulatory enforcement actions across the U.S., Europe, and Asia. David aids institutions and organizations in navigating financial crime risk and compliance modernization globally.

As a former special agent with the US Secret Service and a 1996 graduate of George Washington University, he has been at the forefront of shaping the financial crime risk and compliance profession. Building anti-money laundering (AML) and sanctions compliance programs at banking and financial institutions across the US and internationally, overseeing headline-grabbing corruption and money laundering investigations, and building and selling a RegTech software firm have afforded him an ideal perspective to reflect on every major issue and trend in the financial crime compliance space over the past 25 years.

In this episode of Regulatory Ramblings, David shares his reflections on a nearly three-decade career in AML and financial crime compliance with our host, Ajay Shamdasani. 

He recounts having worked at global institutions like JP Morgan, Riggs Bank, Wachovia, Washington Mutual, and HSBC, to name a few. His notable achievements include his time as Riggs Bank’s chief compliance and AML officer.

In that role, he was hired to address some program weaknesses cited by the US Treasury Department’s Office of the Comptroller of the Currency (OCC). While at Riggs, David’s team uncovered two notorious international corruption schemes involving the government of Equatorial Guinea and former Chilean dictator Augusto Pinochet. The team’s work led to investigations by the Department of Justice and the U.S. Senate Permanent Subcommittee on Investigations. 

The cases drew worldwide media attention from justice authorities in the US, UK, Spain, and Chile. The facts uncovered by David at Riggs shook US lawmakers and regulators, kicking off 10 years of active regulatory and law enforcement action against banks across the US. 

After Riggs, David founded The Dominion Advisory Group in 2005. From his ringside seat near Washington, DC, he works closely with executive management, boards, and outside counsel to craft responses and build entire financial crime risk and compliance programs to address regulatory concerns—of which there has been no shortage in recent years. 

David also discusses the allure of AML and financial crime compliance and what brought him to the professional path he has been on for over three decades. Methodologically speaking, he recounts what has changed in AML and financial crime in that time and what has remained the same. 

He concurs that since 1970, so many additional requirements and expectations have been layered on that AML teams struggle to keep up with their primary mission. Reflecting on the impact of the Bank Secrecy Act (1970), the USA PATRIOT Act (2001), the Foreign Account Tax Compliance Act (2010), or FATCA, and the more recent Anti-Money Laundering Act (2020), he shares his views on how regulatory action has distracted compliance professionals from their more critical tasks, with an eye towards how the regulatory exam-focused mindset of money laundering reporting officers (MLROs) affects operations and innovation. 

David also depicts the pervasive and ongoing discrepancies between what domestic and international/supranational policy-setting organizations, like the Paris-based Financial Action Task Force (FATF), say and what they do. He says, “No one wants to ask if new rules and regulations are working and whether they prevent crime or have the unintended consequence of reducing [economic] growth?” 

He acknowledges the degree of geopolitical hypocrisy when it comes to AML and financial crime compliance, as well as when it comes to fighting bribery, fraud, and corruption internationally. Washington, New York, London, and Brussels all too often regulate the financial world. Yet, while the US and UK, and increasingly the EU, are some of the most aggressive jurisdictions regarding financial crime enforcement actions, their regulatory apparatus is often used to further their geopolitical goals. It is a view that many outside the West hold. 

The conversation concludes with David’s views on why sanctions against Russia stemming from its 2022 invasion of Ukraine have largely been unsuccessful, how technologies such as artificial intelligence can help AML/KYC/FCC compliance, and what policy recommendations he suggests moving forward. 

We are bringing you the Regulatory Ramblings podcasts with assistance from the HKU Faculty of Law, the University of Hong Kong’s Reg/Tech Lab, HKU-SCF Fintech Academy, Asia Global Institute, and HKU-edX Professional Certificate in Fintech.

Useful links in this episode:

  • Connect or follow David Caruso on LinkedIn

  • Dominion Advisory Group: Webpage

Connect with RR Podcast at:

LinkedIn: https://hk.linkedin.com/company/hkufintech 
Facebook: https://www.facebook.com/hkufintech.fb/
Instagram: https://www.instagram.com/hkufintech/ 
Twitter: https://twitter.com/HKUFinTech 
Threads: https://www.threads.net/@hkufintech
Website: https://www.hkufintech.com/regulatoryramblings 

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net/

Categories
Blog

Argentieri Speech and 2024 ECCP: Complying with the 2024 ECCP on AI

The Department of Justice (DOJ), in its 2024 Update, has explicitly directed companies to ensure they have robust processes in place to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is crucial to integrate these mandates into your enterprise risk management (ERM) strategies and broader compliance programs. The DOJ posed two sets of queries for compliance professionals. The first is found in Section I, entitled Is the Corporation’s Compliance Program Well Designed? The following are questions a prosecutor could ask a company or compliance professional during an investigation.

Management of Emerging Risks to Ensure Compliance with Applicable Law

  • Does the company have a process for identifying and managing emerging internal and external risks, including risks related to the use of new technologies, that could potentially impact its ability to comply with the law?
  • How does the company assess the potential impact of new technologies, such as artificial intelligence (AI), on its ability to comply with criminal laws?
  • Is management of risks related to using AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
  • What is the company’s approach to governance regarding the use of new technologies, such as AI, in its commercial business and compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from using technologies in its commercial business and compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
  • To the extent that the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company’s code of conduct?
  • Do controls exist to ensure the technology is used only for its intended purposes?
  • What baseline of human decision-making is used to assess AI?
  • How is accountability over the use of AI monitored and enforced?
  • How does the company train its employees on using emerging technologies such as AI?

The second set of queries ties AI to a company’s values, ethics, and, most importantly, culture. It is found in Section III, entitled Does the Corporation’s Compliance Program Work in Practice?, under Evolving Updates, and poses the following questions:

  • If the company is using new technologies such as AI in its commercial operations or compliance program, is the company monitoring and testing the technologies so that it can evaluate whether they are functioning as intended and consistent with the company’s code of conduct?
  • How quickly can the company detect and correct decisions made by AI or other new technologies that are inconsistent with the company’s values?

Thinking across both sets of questions will lead to more questions and a deep dive into your compliance culture, philosophy, and corporate ethos. It will also bring about unprecedented opportunities for businesses. However, with these opportunities come significant risks, especially in the context of legal compliance. The DOJ has now explicitly directed companies to ensure they have robust processes to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is both crucial and even obligatory to integrate these mandates into your ERM strategies and broader compliance programs. Below are some ways a compliance professional can think through and effectively respond to the DOJ’s latest guidance on the first series of questions.

Establish a Proactive Risk Identification Process

Managing emerging risks begins with a proactive approach to identifying potential threats before they manifest into significant compliance issues.

  • Implement a Dynamic Risk Assessment Framework. Develop a risk assessment process that continuously scans internal and external environments for emerging risks. This should include regular updates to risk profiles based on the latest technological developments, industry trends, and regulatory changes. Incorporating AI into your business and compliance operations requires that you assess its immediate impact and anticipate future risks it might pose as the technology evolves.
  • Engage Cross-Functional Teams. Ensure that your risk identification process is not siloed within the compliance function. Engage cross-functional teams, including IT, legal, HR, and operations, to provide diverse perspectives on potential risks associated with new technologies. This collaboration will help you capture a more comprehensive view of the risks and their potential impact on your organization’s ability to comply with applicable laws.
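As a rough illustration of a dynamic risk assessment framework, the sketch below keeps a register of emerging risks that can be re-scored as technology and regulation evolve. All names, categories, and the 1–5 likelihood/impact scale are hypothetical, not part of the DOJ guidance.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) - illustrative scale
    impact: int      # 1 (minor) to 5 (severe) - illustrative scale

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

class RiskRegister:
    """Minimal dynamic risk register: profiles are re-scored as the
    technology and regulatory landscape changes."""

    def __init__(self):
        self.risks = {}

    def assess(self, name: str, likelihood: int, impact: int) -> None:
        # Adds a new risk or updates an existing profile in place.
        self.risks[name] = Risk(name, likelihood, impact)

    def top_risks(self, n: int = 3):
        # Highest-scoring risks first, for prioritizing remediation effort.
        return sorted(self.risks.values(), key=lambda r: r.score, reverse=True)[:n]
```

In practice, cross-functional teams (IT, legal, HR, operations) would feed the assessments, and the register would be reviewed on a set cadence rather than only at deployment.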

Establish Rigorous Monitoring Protocols

Monitoring AI and other new technologies isn’t just a box-ticking exercise; it’s a continuous process that requires a deep understanding of the technology and the ethical standards it must uphold.

  • Set Up Continuous Monitoring Systems. Implement real-time monitoring systems that track AI outputs and decisions as they occur. This is crucial for identifying deviations from expected behavior or ethical standards as soon as they happen. Automated monitoring tools can flag anomalies, such as decisions that fall outside predefined parameters, for further review by compliance officers.
  • Define Key Performance Indicators (KPIs). Develop KPIs that specifically measure the alignment of AI outputs with your company’s code of conduct. These include fairness, transparency, accuracy, and ethical impact metrics. Regularly review these KPIs to ensure that AI systems perform within acceptable boundaries and contribute positively to your compliance objectives.
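To make the two bullets above concrete, here is a minimal sketch of a monitoring pass that flags AI decisions falling outside predefined parameters and computes one simple KPI. The score range, review threshold, and field names are all assumptions for illustration; real monitoring would be tuned to the specific system and code of conduct.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    decision_id: str
    score: float    # model output score, expected in [0.0, 1.0]
    approved: bool  # what the AI decided

# Hypothetical parameters a compliance team might predefine
SCORE_MIN, SCORE_MAX = 0.0, 1.0
REVIEW_THRESHOLD = 0.95  # auto-approvals below this confidence get human review

def flag_for_review(decisions):
    """Flag decisions that fall outside the predefined parameters."""
    flagged = []
    for d in decisions:
        out_of_range = not (SCORE_MIN <= d.score <= SCORE_MAX)
        low_confidence_approval = d.approved and d.score < REVIEW_THRESHOLD
        if out_of_range or low_confidence_approval:
            flagged.append(d.decision_id)
    return flagged

def approval_rate(decisions):
    """A simple KPI: the share of decisions the system approved."""
    return sum(d.approved for d in decisions) / len(decisions) if decisions else 0.0
```

Flagged items would route to a compliance officer for review, and the KPI would be tracked over time so drift away from acceptable boundaries is visible.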

Integrate AI Risk Management into Your ERM Strategy

The DOJ expects companies to manage AI and other technological risks within the broader context of their enterprise risk management strategies.

  • Align AI Risk Management with ERM. Ensure that risks related to AI and other new technologies are integrated into your ERM framework. This means treating AI-related risks like any other enterprise risk, with appropriate controls, governance, and oversight. AI should not be viewed as a standalone issue but as an integral part of your organization’s overall risk landscape.
  • Develop AI-Specific Risk Controls. Establish controls that specifically address the unique risks posed by AI. These might include measures to prevent algorithmic bias, safeguards against AI-driven fraud, and protocols to ensure data privacy and security. Regularly review and update these controls to keep pace with technological advancements and emerging threats.

Implement Comprehensive Testing and Validation

Testing and validating AI technologies should be an ongoing practice, not just a one-time event during the deployment phase. The DOJ expects companies to rigorously evaluate whether these technologies are functioning as intended.

  • Stress-Test AI Systems. Subject your AI systems to scenarios that test their decision-making processes under different conditions. This includes testing for biases, errors, and unintended consequences. By simulating real-world situations, you can better understand how the AI might behave in practice and identify any potential risks before they manifest.
  • Periodic Audits and Reviews. Conduct regular audits of your AI systems to verify their continued compliance with company policies and ethical standards. These audits should include technical assessments and ethical evaluations, ensuring the AI’s decisions remain consistent with your company’s values over time.
  • External Validation. Consider bringing in third-party experts to validate your AI systems. External validation can objectively assess your AI’s functionality and ethical alignment, offering insights that might not be apparent to internal teams.

Develop a Rapid Response Mechanism

No system is infallible; even the best-monitored AI systems can make mistakes. The key is how quickly and effectively your company can detect and correct these errors.

  • Establish a Rapid Response Team. Create a dedicated team within your compliance function responsible for addressing AI-related issues as they arise. This team should be equipped to investigate flagged decisions quickly, determine the root cause of any inconsistencies, and implement corrective actions.
  • Implement Feedback Loops. Develop feedback loops that allow for continuous learning and improvement of AI systems. When an error is detected, ensure that the AI system is updated or retrained to prevent similar issues in the future. This iterative process is essential for maintaining the integrity of AI systems over time.
  • Document and Report Corrections. Keep detailed records of any AI-related issues and the steps taken to correct them. This documentation is critical for internal tracking and for demonstrating to regulators, like the DOJ, that your company is serious about maintaining ethical AI practices.
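The documentation step above can be sketched as an append-only log of issues and corrections, the kind of paper trail a regulator might ask to see. The issue IDs, field names, and status values are illustrative assumptions, not a prescribed format.

```python
from datetime import datetime, timezone

class CorrectionLog:
    """Append-only record of AI-related issues and the corrective
    actions taken on them."""

    def __init__(self):
        self.entries = []

    def report_issue(self, issue_id, description):
        self.entries.append({
            "issue_id": issue_id,
            "description": description,
            "reported_at": datetime.now(timezone.utc).isoformat(),
            "status": "open",
            "correction": None,
        })

    def record_correction(self, issue_id, correction):
        # Marks the matching open issue as corrected; returns False if
        # there is no such issue, so errors are surfaced, not hidden.
        for entry in self.entries:
            if entry["issue_id"] == issue_id and entry["status"] == "open":
                entry["status"] = "corrected"
                entry["correction"] = correction
                return True
        return False

    def open_issues(self):
        return [e["issue_id"] for e in self.entries if e["status"] == "open"]
```

Keeping the log append-only (corrections change status rather than delete entries) preserves the full history for internal tracking and regulator demonstrations.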

Strengthen AI Governance and Accountability

Governance is key to ensuring that AI and other new technologies are used responsibly and in compliance with the law.

  • Create a Governance Framework for Technology Use. Develop a governance framework outlining how AI and other emerging technologies will be used within your organization. This framework should define roles and responsibilities, set clear guidelines for the ethical use of technology, and establish protocols for monitoring and enforcement. Ensure that this framework is aligned with your company’s code of conduct and compliance objectives. Ensure these guidelines are communicated clearly to all stakeholders, including AI developers, compliance teams, and business leaders.
  • Enforce Accountability. Accountability for the use of AI should be clearly defined and enforced. This includes assigning specific oversight roles to ensure that AI systems are used as intended and that any deliberate or reckless misuse is swiftly addressed. Establish a chain of accountability spanning from the C-suite to the operational level, ensuring all stakeholders understand their responsibilities in managing AI risks.

Mitigate Unintended Consequences and Misuse

The DOJ is particularly concerned with the potential for AI and other technologies to be misused, deliberately or unintentionally, leading to compliance breaches.

  • Monitor for Unintended Consequences. Implement monitoring systems that can detect unintended consequences of AI use, such as biased decision-making, unethical outcomes, or operational inefficiencies. These systems should be capable of flagging anomalies in real-time, allowing your compliance team to intervene before issues escalate.
  • Restrict AI Usage to Intended Purposes. Ensure that AI and other technologies are used only for their intended purposes. This involves setting clear boundaries on how AI can be applied and establishing controls to prevent misuse. Regular audits should be conducted to verify that AI systems operate within these defined parameters and that any deviations are promptly corrected.
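A deny-by-default allowlist is one simple way to enforce "intended purposes only". The sketch below assumes a hypothetical registry of use cases and authorized departments; the names are illustrative.

```python
# Hypothetical registry of approved AI use cases; anything not listed
# is denied by default, enforcing use for intended purposes only.
APPROVED_USES = {
    "invoice_screening": {"finance"},
    "sanctions_matching": {"compliance", "legal"},
}

def is_use_permitted(use_case: str, department: str) -> bool:
    """Deny-by-default control: the use case must be registered and the
    requesting department must be authorized for it."""
    return department in APPROVED_USES.get(use_case, set())
```

Periodic audits would then compare actual usage logs against this registry and escalate any deviation.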

Ensure Trustworthiness and Human Oversight

As Sam Silverstein continually reminds us, culture is all about trust. The same is true for the use of AI in the workplace. AI’s trustworthiness and reliability are paramount in maintaining compliance and protecting your company’s reputation.

  • Implement Trustworthiness Controls. Develop controls to ensure the trustworthiness of AI systems, including regular validation of AI models, thorough testing for accuracy and reliability, and ongoing monitoring for performance consistency. These controls should be designed to prevent the AI from producing outputs that could lead to legal or ethical violations.
  • Maintain a Human Baseline. AI should complement, not replace, human judgment. Establish a baseline of human decision-making to assess AI outputs and ensure that human oversight is maintained where necessary. This could involve having human review processes for high-stakes decisions or integrating AI outputs into broader decision-making frameworks that involve human input.

Train Employees on Emerging Technologies

As AI and other technologies become more prevalent, employee training is essential to ensure that your workforce understands both the benefits and risks.

  • Develop Comprehensive Training Programs. Create training programs that educate employees on using AI and other emerging technologies, focusing on compliance and ethical considerations. Training should cover the potential risks, the importance of adhering to the company’s code of conduct, and the specific controls to mitigate those risks. Employees should understand how the technology works and how to identify and address any decisions that may conflict with company values. Regular training sessions reinforce the importance of ethical AI use across the organization.
  • Promote a Culture of Awareness. Encourage a culture where employees are vigilant about the risks associated with new technologies. This involves fostering an environment where employees feel empowered to speak up if they notice potential issues and are actively engaged in ensuring that AI and other technologies are used responsibly.
  • Promote a Speak-Up Culture. Encourage employees to report concerns about AI-driven decisions, just as they would report other misconduct. A robust speak-up culture is critical for catching ethical lapses early and ensuring that AI systems remain aligned with company values.

The DOJ’s mandate on managing emerging risks, particularly those related to AI and other new technologies, underscores the need for a proactive, integrated approach to compliance. Compliance professionals can confidently navigate this complex landscape by embedding AI risk management within their broader ERM strategy, strengthening governance and accountability, mitigating unintended consequences, ensuring trustworthiness, and investing in employee training. The stakes are high, but with the right plan in place, your organization can harness the power of AI while staying firmly on the right side of the law.