Categories
Blog

Argentieri Speech and 2024 ECCP: Complying with the 2024 ECCP on AI

The Department of Justice (DOJ), in its 2024 Update to the Evaluation of Corporate Compliance Programs (ECCP), has explicitly directed companies to ensure they have robust processes in place to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is crucial to integrate these mandates into your enterprise risk management (ERM) strategies and broader compliance programs. The DOJ posed two sets of queries for compliance professionals. The first is found in Section I, entitled Is the Corporation’s Compliance Program Well Designed? The following are questions a prosecutor could ask a company or compliance professional during an investigation.

Management of Emerging Risks to Ensure Compliance with Applicable Law

  • Does the company have a process for identifying and managing emerging internal and external risks, including risks related to the use of new technologies, that could potentially impact its ability to comply with the law?
  • How does the company assess the potential impact of new technologies, such as artificial intelligence (AI), on its ability to comply with criminal laws?
  • Is management of risks related to using AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
  • What is the company’s approach to governance regarding the use of new technologies, such as AI, in its commercial business and compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from using technologies in its commercial business and compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
  • To the extent that the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company’s code of conduct?
  • Do controls exist to ensure the technology is used only for its intended purposes?
  • What baseline of human decision-making is used to assess AI?
  • How is accountability over the use of AI monitored and enforced?
  • How does the company train its employees on using emerging technologies such as AI?

The second set of questions ties AI to a company’s values, ethics, and, most importantly, its culture. It is found in Section III, entitled Does the Corporation’s Compliance Program Work in Practice?, under Evolving Updates, and poses the following questions:

  • If the company is using new technologies such as AI in its commercial operations or compliance program, is the company monitoring and testing the technologies so that it can evaluate whether they are functioning as intended and consistent with the company’s code of conduct?
  • How quickly can the company detect and correct decisions made by AI or other new technologies that are inconsistent with the company’s values?

Thinking across both sets of questions will lead to more questions and a deep dive into your compliance culture, philosophy, and corporate ethos. AI will also bring unprecedented opportunities for businesses. However, with those opportunities come significant risks, especially in the context of legal compliance. The DOJ has now explicitly directed companies to ensure they have robust processes to identify, manage, and mitigate emerging risks related to new technologies, including AI, and for compliance professionals it is now obligatory to integrate these mandates into ERM strategies and broader compliance programs. Below are some ways a compliance professional can think through, and effectively respond to, the first series of questions in the DOJ’s latest guidance.

Establish a Proactive Risk Identification Process

Managing emerging risks begins with a proactive approach to identifying potential threats before they manifest into significant compliance issues.

  • Implement a Dynamic Risk Assessment Framework. Develop a risk assessment process that continuously scans internal and external environments for emerging risks. This should include regular updates to risk profiles based on the latest technological developments, industry trends, and regulatory changes. Incorporating AI into your business and compliance operations requires that you assess its immediate impact and anticipate future risks it might pose as the technology evolves.
  • Engage Cross-Functional Teams. Ensure that your risk identification process is not siloed within the compliance function. Engage cross-functional teams, including IT, legal, HR, and operations, to provide diverse perspectives on potential risks associated with new technologies. This collaboration will help you capture a more comprehensive view of the risks and their potential impact on your organization’s ability to comply with applicable laws.

Establish Rigorous Monitoring Protocols

Monitoring AI and other new technologies isn’t just a box-ticking exercise; it’s a continuous process that requires a deep understanding of the technology and the ethical standards it must uphold.

  • Set Up Continuous Monitoring Systems. Implement real-time monitoring systems that track AI outputs and decisions as they occur. This is crucial for identifying deviations from expected behavior or ethical standards as soon as they happen. Automated monitoring tools can flag anomalies, such as decisions that fall outside predefined parameters, for further review by compliance officers.
  • Define Key Performance Indicators (KPIs). Develop KPIs that specifically measure the alignment of AI outputs with your company’s code of conduct. These include fairness, transparency, accuracy, and ethical impact metrics. Regularly review these KPIs to ensure that AI systems perform within acceptable boundaries and contribute positively to your compliance objectives. A minimal sketch of such a KPI check appears after this list.
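As an illustration only, the following Python sketch shows one way such a KPI check might be wired up: logged AI decisions are scored for accuracy and a simple selection-rate fairness ratio, and each metric is compared against a predefined threshold. The record fields, threshold values, and function names are hypothetical and not drawn from any particular monitoring tool.

```python
# Minimal sketch, assuming AI decisions are logged with an outcome and a group
# attribute. All names and thresholds are illustrative, not a vendor API.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    predicted: int   # AI decision (1 = approve, 0 = deny)
    actual: int      # ground-truth or human-reviewed outcome
    group: str       # attribute used for the fairness comparison (e.g., region)

THRESHOLDS = {"accuracy": 0.95, "selection_rate_parity": 0.80}  # example limits

def kpi_report(records: list[DecisionRecord]) -> dict[str, bool]:
    """Report whether each KPI sits within its acceptable boundary."""
    accuracy = sum(r.predicted == r.actual for r in records) / len(records)

    # Selection-rate parity: lowest group approval rate divided by the highest.
    by_group: dict[str, list[int]] = {}
    for r in records:
        by_group.setdefault(r.group, []).append(r.predicted)
    rates = [sum(v) / len(v) for v in by_group.values()]
    parity = min(rates) / max(rates) if max(rates) > 0 else 1.0

    return {
        "accuracy_ok": accuracy >= THRESHOLDS["accuracy"],
        "fairness_ok": parity >= THRESHOLDS["selection_rate_parity"],
    }

if __name__ == "__main__":
    sample = [
        DecisionRecord(1, 1, "north"), DecisionRecord(0, 0, "north"),
        DecisionRecord(1, 1, "south"), DecisionRecord(0, 1, "south"),
    ]
    print(kpi_report(sample))  # any False value would be escalated for review
```

In practice, the thresholds, the grouping attribute, and the escalation path would be set by the governance framework discussed below, and a failing KPI would be routed to a compliance officer rather than simply printed.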

Integrate AI Risk Management into Your ERM Strategy

The DOJ expects companies to manage AI and other technological risks within the broader context of their enterprise risk management strategies.

  • Align AI Risk Management with ERM. Ensure that risks related to AI and other new technologies are integrated into your ERM framework. This means treating AI-related risks like any other enterprise risk, with appropriate controls, governance, and oversight. AI should not be viewed as a standalone issue but as an integral part of your organization’s overall risk landscape.
  • Develop AI-Specific Risk Controls. Establish controls that specifically address the unique risks posed by AI. These might include measures to prevent algorithmic bias, safeguards against AI-driven fraud, and protocols to ensure data privacy and security. Regularly review and update these controls to keep pace with technological advancements and emerging threats. A simple sketch of one such control follows this list.
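As a purely illustrative example of an AI-specific control, the sketch below cross-checks an AI-generated approval against an authoritative system of record, one possible safeguard against AI-driven fraud such as fabricated approvals or documentation. The purchase-order data, field names, and function are hypothetical placeholders, not a real ERP integration.

```python
# Minimal sketch, assuming AI-generated approvals can be matched against an
# authoritative system of record. Data and field names are illustrative only.

SYSTEM_OF_RECORD = {  # stand-in for an ERP or payments system lookup
    "PO-1001": {"amount": 25_000, "vendor": "Acme Ltd"},
    "PO-1002": {"amount": 7_500, "vendor": "Globex"},
}

def validate_ai_approval(approval: dict) -> list[str]:
    """Return a list of exceptions to route to compliance for manual review."""
    exceptions = []
    record = SYSTEM_OF_RECORD.get(approval.get("po_number"))
    if record is None:
        exceptions.append("approval references a purchase order that does not exist")
    else:
        if record["amount"] != approval.get("amount"):
            exceptions.append("approved amount does not match the system of record")
        if record["vendor"] != approval.get("vendor"):
            exceptions.append("vendor name does not match the system of record")
    return exceptions

if __name__ == "__main__":
    ai_generated = {"po_number": "PO-1001", "amount": 40_000, "vendor": "Acme Ltd"}
    for issue in validate_ai_approval(ai_generated):
        print("FLAG:", issue)
```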

Implement Comprehensive Testing and Validation

Testing and validating AI technologies should be an ongoing practice, not just a one-time event during the deployment phase. The DOJ expects companies to rigorously evaluate whether these technologies are functioning as intended.

  • Stress-Test AI Systems. Subject your AI systems to scenarios that test their decision-making processes under different conditions. This includes testing for biases, errors, and unintended consequences. By simulating real-world situations, you can better understand how the AI might behave in practice and identify any potential risks before they manifest. A minimal stress-test sketch appears after this list.
  • Periodic Audits and Reviews. Conduct regular audits of your AI systems to verify their continued compliance with company policies and ethical standards. These audits should include technical assessments and ethical evaluations, ensuring the AI’s decisions remain consistent with your company’s values over time.
  • External Validation. Consider bringing in third-party experts to validate your AI systems. External validation can objectively assess your AI’s functionality and ethical alignment, offering insights that might not be apparent to internal teams.
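To make the stress-testing idea concrete, here is a minimal sketch that perturbs inputs which should not matter and measures how often the decision flips. The toy model, fields, and perturbation scheme are assumptions for illustration; a real exercise would run the production model against a curated scenario library.

```python
# Minimal sketch, assuming a deployed decision model can be called as a function.
# The model, fields, and perturbations below are illustrative placeholders.
import random

def toy_model(applicant: dict) -> str:
    """Stand-in for a deployed AI decision system."""
    return "approve" if applicant["score"] >= 650 else "deny"

def stress_test(model, base_case: dict, n_trials: int = 200) -> float:
    """Fraction of trials in which a small perturbation flips the decision."""
    baseline = model(base_case)
    flips = 0
    for _ in range(n_trials):
        perturbed = dict(base_case)
        perturbed["name"] = f"applicant-{random.randint(0, 10_000)}"    # irrelevant field
        perturbed["score"] = base_case["score"] + random.randint(-5, 5)  # small noise
        if model(perturbed) != baseline:
            flips += 1
    return flips / n_trials

if __name__ == "__main__":
    instability = stress_test(toy_model, {"name": "baseline", "score": 652})
    print(f"Decision flipped in {instability:.0%} of perturbed trials")
```

A high flip rate near a decision boundary is exactly the kind of finding a periodic audit or external validator would want documented.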

Develop a Rapid Response Mechanism

No system is infallible; even the best-monitored AI systems can make mistakes. The key is how quickly and effectively your company can detect and correct these errors.

  • Establish a Rapid Response Team. Create a dedicated team within your compliance function responsible for addressing AI-related issues as they arise. This team should be equipped to investigate flagged decisions quickly, determine the root cause of any inconsistencies, and implement corrective actions.
  • Implement Feedback Loops. Develop feedback loops that allow for continuous learning and improvement of AI systems. When an error is detected, ensure that the AI system is updated or retrained to prevent similar issues in the future. This iterative process is essential for maintaining the integrity of AI systems over time.
  • Document and Report Corrections. Keep detailed records of any AI-related issues and the steps taken to correct them. This documentation is critical for internal tracking and for demonstrating to regulators, like the DOJ, that your company is serious about maintaining ethical AI practices.

Strengthen AI Governance and Accountability

Governance is key to ensuring that AI and other new technologies are used responsibly and in compliance with the law.

  • Create a Governance Framework for Technology Use. Develop a governance framework outlining how AI and other emerging technologies will be used within your organization. This framework should define roles and responsibilities, set clear guidelines for the ethical use of technology, and establish protocols for monitoring and enforcement. Ensure that this framework is aligned with your company’s code of conduct and compliance objectives. Ensure these guidelines are communicated clearly to all stakeholders, including AI developers, compliance teams, and business leaders.
  • Enforce Accountability. Accountability for the use of AI should be clearly defined and enforced. This includes assigning specific oversight roles to ensure that AI systems are used as intended and that any deliberate or reckless misuse is swiftly addressed. Establish a chain of accountability spanning from the C-suite to the operational level, ensuring all stakeholders understand their responsibilities in managing AI risks.

Mitigate Unintended Consequences and Misuse

The DOJ is particularly concerned with the potential for AI and other technologies to be misused, deliberately or unintentionally, leading to compliance breaches.

  • Monitor for Unintended Consequences. Implement monitoring systems that can detect unintended consequences of AI use, such as biased decision-making, unethical outcomes, or operational inefficiencies. These systems should be capable of flagging anomalies in real-time, allowing your compliance team to intervene before issues escalate.
  • Restrict AI Usage to Intended Purposes. Ensure that AI and other technologies are used only for their intended purposes. This involves setting clear boundaries on how AI can be applied and establishing controls to prevent misuse. Regular audits should be conducted to verify that AI systems operate within these defined parameters and that any deviations are promptly corrected. A simple sketch of such a use-case guardrail follows this list.
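A minimal sketch of a use-case guardrail is below, assuming every call to an AI service must declare an approved purpose. The approved list, request fields, and logging are illustrative; in practice the control would sit in front of the model API and every denial would feed the compliance team’s monitoring.

```python
# Minimal sketch, assuming each request to an AI service declares its use case.
# The registry, names, and logging below are illustrative placeholders.

APPROVED_USE_CASES = {"invoice_triage", "policy_qa"}  # set by the governance process

def log_exception(use_case: str, requester: str) -> None:
    # In practice this would write to the compliance monitoring system.
    print(f"DENIED: '{use_case}' requested by {requester} is not an approved use case")

def authorize_request(use_case: str, requester: str) -> bool:
    """Allow the call only if the declared use case is on the approved list."""
    if use_case not in APPROVED_USE_CASES:
        log_exception(use_case, requester)
        return False
    return True

if __name__ == "__main__":
    authorize_request("invoice_triage", "ap-team")      # permitted
    authorize_request("employee_scoring", "hr-intern")  # flagged and denied
```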

Ensure Trustworthiness and Human Oversight

As Sam Silverstein continually reminds us, culture is all about trust. The same is true for the use of AI in the workplace. AI’s trustworthiness and reliability are paramount in maintaining compliance and protecting your company’s reputation.

  • Implement Trustworthiness Controls. Develop controls to ensure the trustworthiness of AI systems, including regular validation of AI models, thorough testing for accuracy and reliability, and ongoing monitoring for performance consistency. These controls should be designed to prevent the AI from producing outputs that could lead to legal or ethical violations.
  • Maintain a Human Baseline. AI should complement, not replace, human judgment. Establish a baseline of human decision-making to assess AI outputs and ensure that human oversight is maintained where necessary. This could involve having human review processes for high-stakes decisions or integrating AI outputs into broader decision-making frameworks that involve human input. A minimal sketch of such a routing rule follows this list.
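One way to express a human baseline in code is a routing rule: AI outputs below a confidence floor, or above a materiality cap, go to a person instead of being auto-actioned. The sketch below is illustrative only; the thresholds and field names are assumptions, not prescriptions.

```python
# Minimal sketch of human-in-the-loop routing. Thresholds are illustrative.

CONFIDENCE_FLOOR = 0.90    # below this, a person decides
MATERIALITY_CAP = 50_000   # above this amount, a person always decides

def route_decision(ai_label: str, confidence: float, amount: float) -> str:
    """Return where the decision should go: automated action or human review."""
    if amount > MATERIALITY_CAP or confidence < CONFIDENCE_FLOOR:
        return "human_review"          # AI complements, not replaces, human judgment
    return f"auto_{ai_label}"

if __name__ == "__main__":
    print(route_decision("approve", confidence=0.97, amount=12_000))  # auto_approve
    print(route_decision("approve", confidence=0.97, amount=80_000))  # human_review
    print(route_decision("deny", confidence=0.62, amount=5_000))      # human_review
```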

Train Employees on Emerging Technologies

As AI and other technologies become more prevalent, employee training is essential to ensure that your workforce understands both the benefits and risks.

  • Develop Comprehensive Training Programs. Create training programs that educate employees on using AI and other emerging technologies, focusing on compliance and ethical considerations. Training should cover the potential risks, the importance of adhering to the company’s code of conduct, and the specific controls to mitigate those risks. Employees should understand how the technology works and how to identify and address any decisions that may conflict with company values. Regular training sessions reinforce the importance of ethical AI use across the organization.
  • Promote a Culture of Awareness. Encourage a culture where employees are vigilant about the risks associated with new technologies. This involves fostering an environment where employees feel empowered to speak up if they notice potential issues and are actively engaged in ensuring that AI and other technologies are used responsibly.
  • Promote a Speak-Up Culture. Encourage employees to report concerns about AI-driven decisions, just as they would report other misconduct. A robust speak-up culture is critical for catching ethical lapses early and ensuring that AI systems remain aligned with company values.

The DOJ’s mandate on managing emerging risks, particularly those related to AI and other new technologies, underscores the need for a proactive, integrated approach to compliance. Compliance professionals can confidently navigate this complex landscape by embedding AI risk management within your broader ERM strategy, strengthening governance and accountability, mitigating unintended consequences, ensuring trustworthiness, and investing in employee training. The stakes are high, but with the right plan in place, your organization can harness the power of AI while staying firmly on the right side of the law.

Categories
Blog

Argentieri Speech and 2024 ECCP: Argentieri on Navigating AI Risks

Principal Deputy Assistant Attorney General Nicole M. Argentieri’s speech highlighted a critical shift in the Department of Justice’s (DOJ) approach to evaluating corporate compliance programs. As outlined in the updated 2024 Evaluation of Corporate Compliance Programs (2024 ECCP), the emphasis on data access signals a new era where compliance professionals are expected to wield data with the same rigor and sophistication as their business counterparts. This week, I am reviewing the speech and the 2024 ECCP. Over the next couple of blog posts, I will look at the most significant addition: the provisions around AI. Today, I will review Argentieri’s remarks to see what she said. Tomorrow, I will dive deeply into the new areas in the 2024 ECCP around new technologies such as Artificial Intelligence (AI).

In her remarks, Argentieri said, “First, … Our updated ECCP includes an evaluation of how companies assess and manage risk related to using new technology such as artificial intelligence in their business and compliance programs. Under the ECCP, prosecutors will consider the technology that a company and its employees use to conduct business, whether the company has conducted a risk assessment of using that technology, and whether the company has taken appropriate steps to mitigate any associated risk. For example, prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology, such as false approvals and documentation generated by AI. If so, we will consider whether compliance controls and tools are in place to identify and mitigate those risks, such as tools to confirm the accuracy or reliability of data the business uses. We also want to know whether the company monitors and tests its technology to evaluate its functioning as intended and consistent with its code of conduct.”

Argentieri emphasizes the importance of managing risks associated with disruptive technologies like AI. These updates signal a clear directive for compliance professionals: you must take a proactive stance on AI risk management. You can take the following steps to align your compliance program with the DOJ’s latest expectations.

Conduct a Comprehensive Risk Assessment of AI Technologies

The first step in meeting the DOJ’s expectations is a thorough assessment of the risks that AI and other disruptive technologies pose to your organization.

  • Identify AI Use Cases. Start by mapping out where AI is being used across your business operations. This could include everything from automated decision-making processes to AI-driven data analytics. Understanding the scope of AI use is essential for identifying potential risk areas.
  • Evaluate Vulnerabilities. Once you have a clear picture of how AI is utilized, conduct a detailed risk assessment. Look for vulnerabilities, such as the potential for AI to generate false approvals or fraudulent documentation. Consider scenarios where AI could be manipulated or fail to perform as expected, leading to compliance breaches or unethical outcomes.
  • Prioritize Risks. Not all risks are created equal. Prioritize them based on their potential impact on your business and the likelihood of occurrence. This prioritization will guide the allocation of resources and the development of mitigation strategies.

Implement Robust Compliance Controls and Tools

Once risks have been identified, the next step is to ensure that your compliance program includes strong controls and tools specifically designed to manage AI-related risks.

  • Develop AI-Specific Controls. Traditional compliance controls may not be sufficient to address AI’s unique challenges. Develop or adapt controls to monitor AI-generated outputs, ensuring accuracy and consistency with company policies. This might include cross-referencing AI decisions with manual checks or implementing algorithms that flag unusual patterns for further review. A minimal sketch of one such pattern check appears after this list.
  • Invest in AI-Compliance Tools. Specialized tools are available that can help compliance teams monitor AI systems and detect potential issues. Invest in these tools to enhance your ability to identify and mitigate AI-related risks. These tools should be capable of real-time monitoring and provide insights into the functioning of AI systems, including the accuracy and reliability of the data they generate.
  • Regular Testing and Validation. AI systems should not be a set-it-and-forget-it solution. Regularly test and validate your AI tools to ensure they function as intended. This should include stress testing under different scenarios to identify any weaknesses or biases in the system. The DOJ expects your company to implement AI and rigorously monitor its performance and alignment with your compliance objectives.
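As an illustration of an algorithm that flags unusual patterns, the sketch below applies a simple z-score test to a stream of values an AI system produces, for example invoice amounts on generated documents. The sample data, limit, and function name are hypothetical.

```python
# Minimal sketch of an unusual-pattern flag using a z-score test.
# History, the new value, and the limit below are illustrative placeholders.
from statistics import mean, stdev

def flag_unusual(history: list[float], new_value: float, z_limit: float = 3.0) -> bool:
    """True if new_value sits more than z_limit standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_limit

if __name__ == "__main__":
    recent_invoice_amounts = [1_020, 980, 1_050, 995, 1_010, 1_040]
    print(flag_unusual(recent_invoice_amounts, 1_025))  # False: within the normal range
    print(flag_unusual(recent_invoice_amounts, 9_900))  # True: escalate for manual review
```

A real deployment would tune the threshold, handle seasonality, and send flagged items into the manual review queue rather than printing them.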

Monitor, Evaluate, and Adapt

AI technology and its associated risks constantly evolve, so your compliance program must be flexible and responsive.

  • Ongoing Monitoring. Continuously monitor AI systems’ performance to ensure they align with your company’s code of conduct and compliance requirements. This involves technical monitoring and assessing the ethical implications of AI decisions.
  • Adapt to New Risks. As AI technology advances, new risks will emerge. Stay informed about the latest developments in AI and disruptive technologies, and be ready to adapt your compliance program accordingly. This may involve updating risk assessments, enhancing controls, or revising your company’s overall approach to AI.
  • Engage with Technology Experts. Compliance professionals should work closely with IT and AI experts to stay ahead of potential risks. This collaboration is crucial for understanding the technical nuances of AI and ensuring that compliance strategies are technically sound and effectively implemented.

Ensure Alignment with the Company’s Code of Conduct

Finally, all AI initiatives must align with your company’s code of conduct and ethical standards.

  • Training and Awareness. Ensure that all employees, particularly those involved in AI development and deployment, are trained on the ethical implications of AI and the company’s code of conduct. This training should cover the importance of transparency, fairness, and accountability in AI operations.
  • Ethical AI Use. Embed ethical considerations into the AI development process. This means not only complying with the law but also striving to use AI in ways that reflect your company’s values. The DOJ will be looking to see whether your company is both avoiding harm and proactively promoting ethical AI use.

Argentieri’s remarks underscore the importance of managing the risks associated with AI and other disruptive technologies. Compliance professionals must take a proactive approach by conducting thorough risk assessments, implementing robust controls, and continuously monitoring AI systems to ensure they align with regulatory requirements and the company’s ethical standards. By taking these initial steps, you can meet the DOJ’s expectations and leverage AI to enhance your compliance program and overall business integrity. Join us tomorrow to take a deep dive into the new language of the 2024 ECCP and explore how to implement it.

Categories
Blog

The Bre-X Mining Scandal: Part 6 – A Guide for the 2024 Compliance Professional (Part 2)

Today, we conclude a multipart blog post series exploring one of the biggest corporate scandals of the 1990s, the Bre-X mining scandal. Our most recent blog post explored the foundational lessons from the Bre-X scandal for today’s compliance professionals, focusing on due diligence, transparency, corporate governance, and more. In today’s concluding blog post, we focus on additional critical areas where compliance officers can play a pivotal role in ensuring organizational integrity. From fostering a strong whistleblowing culture to leveraging modern technologies for continuous monitoring, these strategies will help prevent financial fraud, uphold ethical standards, and keep businesses in compliance into 2024 and beyond.

The Role of Whistleblowing and Ethics Programs

A lack of transparency and accountability within Bre-X contributed to the persistence of fraud for years. If a robust whistleblowing mechanism had been in place, the red flags might have been raised earlier, potentially preventing the massive fallout.

  • Encouraging Whistleblowing. One of the most critical aspects of modern compliance is creating a culture where employees feel empowered to speak up without fear of retaliation. Compliance officers should focus on building and maintaining secure, confidential channels where employees can report unethical or suspicious activities. A strong whistleblowing framework protects the organization from reputational damage and demonstrates to employees that integrity is a top priority.
  • Ethics Training. In addition to promoting whistleblowing, regular ethics training can help build a culture of transparency and accountability. Employees must be educated on the importance of ethical decision-making and how their actions contribute to the company’s long-term success. Compliance teams can reinforce the core values of honesty and integrity across the organization through frequent workshops, case studies (including Bre-X), and clear guidance on ethical behavior.

Risk Management and Scenario Planning

The Bre-X scandal is a stark reminder of the importance of comprehensive risk management. The ability to foresee potential risks and prepare accordingly can be the difference between averting a disaster and getting caught in one.

  • Assessing and Mitigating Risk. Risk management is central to the work of a compliance officer. Rigorous risk assessments are non-negotiable in industries like mining, where speculation, large financial stakes, and geographical challenges intersect. Compliance professionals must develop strategies that identify, assess, and mitigate potential risks early, whether they stem from operational, financial, or reputational sources. For instance, resource overestimation, as seen in Bre-X, could have been mitigated with proper checks on geological data and third-party verification.
  • Scenario Planning. Preparing for various fraud scenarios, including “what if” situations similar to Bre-X, is a valuable exercise. Scenario planning enables organizations to consider how they would respond in the event of fraud or a major compliance breach. Companies should develop detailed crisis management plans, identify key decision-makers, and outline steps for navigating potential crises. In the event of another large-scale scandal, having these contingency plans in place will reduce the organization’s response time and limit damage.

Continuous Controls Monitoring and Auditing

The importance of continuous monitoring cannot be overstated, particularly in industries prone to high levels of fraud, such as mining, finance, or healthcare. Compliance professionals must champion ongoing oversight to ensure early detection of potential issues.

  • Ongoing Oversight. Continuous auditing of processes and transactions is an effective way to catch problems before they escalate. In the Bre-X case, regular audits of geological sample reporting and financial disclosures could have flagged discrepancies early on. Compliance teams today should implement robust monitoring programs that examine critical areas like financial performance, regulatory adherence, and ethical behavior. Routine audits of key operational processes, especially in high-risk industries, can prevent fraudulent behavior from going undetected.
  • Use of Technology. The rise of data analytics and artificial intelligence (AI) has transformed the compliance landscape. In 2024, compliance professionals must embrace technology that enhances real-time monitoring capabilities. By leveraging AI and big data, companies can detect anomalies or suspicious activities before they evolve into significant problems. For example, automated systems can track financial reporting patterns or identify irregular resource estimates, helping compliance teams intervene before major fraud occurs. A simple sketch of one such automated check follows this list.
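As an illustrative sketch only, the code below shows the kind of automated check a continuous monitoring program might run over reported resource estimates: any period-over-period increase above a plausibility threshold is queued for audit. The figures and threshold are invented for the example and are not Bre-X data.

```python
# Minimal sketch of a continuous-monitoring rule over reported resource estimates.
# The reporting periods, figures, and growth limit are illustrative only.

GROWTH_LIMIT = 0.50  # >50% growth in a single reporting period triggers review

def flag_irregular_estimates(estimates: list[tuple[str, float]]) -> list[str]:
    """estimates: (reporting period, estimated ounces). Returns alerts to audit."""
    alerts = []
    for (prev_period, prev), (period, curr) in zip(estimates, estimates[1:]):
        growth = (curr - prev) / prev
        if growth > GROWTH_LIMIT:
            alerts.append(f"{period}: estimate grew {growth:.0%} over {prev_period}")
    return alerts

if __name__ == "__main__":
    reported = [("Q1", 2.0e6), ("Q2", 2.6e6), ("Q3", 9.0e6)]  # invented figures
    for alert in flag_irregular_estimates(reported):
        print("AUDIT:", alert)
```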

Global Considerations and Jurisdictional Awareness

In today’s globalized business environment, companies often operate in multiple countries, each with its own regulatory requirements. Compliance professionals must stay abreast of international standards and ensure the organization complies in every region where it operates.

  • Navigating International Regulations. The Bre-X scandal highlighted the complexities of operating in different jurisdictions. While Bre-X was a Canadian company, much of its fraudulent activity occurred in Indonesia, and the regulatory landscape differed vastly between the two countries. In 2024, compliance officers must develop an in-depth understanding of the regulatory environments in each jurisdiction where their company operates. This includes not only legal requirements but also the cultural and business norms that could impact operations and risk management strategies.
  • Cross-Border Cooperation. In an interconnected world, no company is an island. Regulatory bodies across countries are increasingly cooperating on compliance and enforcement efforts, especially in mining, finance, and pharmaceuticals. Building relationships with regulatory agencies in different jurisdictions is vital for compliance professionals. These partnerships can help organizations navigate complex international regulations and stay on top of emerging global compliance trends.

The Bre-X scandal was a watershed moment for the mining industry and for compliance professionals across sectors. The lessons from this case are invaluable in shaping how compliance is approached in 2024. Compliance officers can safeguard their organizations from the devastating consequences of fraud by encouraging a culture of whistleblowing, implementing comprehensive risk management practices, leveraging technology for continuous monitoring, and understanding global regulatory landscapes.

Fraud prevention is a continuous journey that requires vigilance, transparency, and a proactive mindset. Today’s compliance professional’s responsibility is not just to respond to incidents but to anticipate them, fostering a corporate culture prioritizing ethics and accountability at every level. This concludes our series on the Bre-X scandal. By learning from the past, compliance professionals can build a more resilient, transparent future for their organizations.

Categories
Compliance and AI

Compliance and AI: How Saifr is Revolutionizing Financial Services Compliance – A Conversation with Vall Herard

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are but three of the many questions we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance.

In this episode, Tom visits with Vall Herard, CEO of Saifr.ai.

Saifr.ai is an AI company aimed at transforming compliance in the financial services industry. Herard shares his professional background, the founding and objectives of Saifr, and the company’s innovative AI solutions, including marketing communications compliance, electronic communications compliance, and AML/KYC capabilities. We cover how Saifr.ai uses AI to help compliance officers by providing tools that streamline their work and embed compliance checks in everyday processes. Herard also touches upon AI ethics, adaptive risk management, and the future of AI in compliance. He hints at upcoming innovations, including the compliant adaptation of large language models like ChatGPT for financial services.

Key Highlights:

  • Saifr AI’s Core Capabilities
  • KYC and AML Innovations
  • Creating a Culture of Compliance
  • AI Ethics in Compliance
  • Adaptive Risk Management
  • Future of AI in Compliance

Resources

Vall Herard on LinkedIn

Saifr.ai

 Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Daily Compliance News

Daily Compliance News: September 4, 2024 – The Don’t Ask for Something Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News. All from the Compliance Podcast Network.

Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

  • Don’t ask for something like regulatory reform, as you might get it. (WSJ)
  • Former Volkswagen Chief Executive goes to trial for the emissions testing scandal. (NYT)
  • Lebanon’s former central bank head is charged with corruption. (AP)
  • How about using AI to increase profits? (FT)

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.

Categories
Innovation in Compliance

Innovation in Compliance: The Evolution of Compliance and Technology: An Interview with Stuart Breslow

Innovation comes in many areas and compliance professionals need to not only be ready for it but embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast.

In this episode, Tom welcomes Stuart Breslow, a member of the Board of Directors at StarCompliance, who takes a deep dive into the evolution of tech solutions for compliance.

Breslow has had an extensive journey in compliance, including roles at Morgan Stanley, Credit Suisse, McKinsey, and Google Cloud; he served as CCO at Morgan Stanley. Our conversation takes a deep dive into the transformation of compliance through technological solutions, the evolution of Codes of Conduct, and the impact of digital tools on compliance efficiency.

Breslow advocates for the use of technology to scale compliance efforts, address evolving challenges, and integrate compliance more seamlessly with business operations. Emphasizing data analysis and proactive risk identification, Breslow believes that modern compliance tools not only enhance efficiency and effectiveness but also contribute significantly to business profitability. Breslow also explores the future role of generative AI and how StarCompliance is poised to leverage advanced data management to enhance compliance functions.

Key Highlights:

  • Evolution of Compliance Technology
  • The Role of Codes of Conduct in Compliance
  • Digital Transformation in Compliance
  • Future of Compliance with Generative AI

Resources:
StarCompliance

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Compliance and AI

Compliance and AI: Art Mueller on Enhancing Financial Crime Programs with AI

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These are but three questions we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance.

In this episode, Tom visits with Art Mueller, a thought leader in using AI to help fight financial crime.

Art Mueller, a seasoned expert with over 20 years in compliance programs and anti-financial crime initiatives, offers a transformative perspective on the role of AI and machine learning in financial crime programs. As the current lead at WorkFusion, he emphasizes the critical shift from manual processes to AI and automated solutions, enhancing efficiency and significantly reducing false positives. Mueller champions using AI to provide valuable insights into client risks and transactions, thereby improving job satisfaction for analysts and decreasing turnover rates. Drawing on his extensive field experience, he highlights the substantial advancements and benefits these technologies bring to risk management and mitigation in the financial sector.

Key Highlights:

  • Financial Crime Prevention Solutions with AI Technology
  • AI-enhanced Adverse Media Screening for Compliance
  • Enhancing Risk Management Through Anomaly Detection with AI
  • Enhancing Financial Crime Programs with AI

Resources:

Art Mueller on LinkedIn

WorkFusion

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Regulatory Ramblings

Regulatory Ramblings: Episode 52 – AI vs. Financial Scams: Why Banks Aren’t Doing Enough in the Fight Against Sextortion and Fraud with Oonagh van den Berg

A lawyer by training and an entrepreneur by vocation, Oonagh van den Berg founded the compliance consultancy and training firm RAW Compliance. She is a highly regarded international compliance professional with two decades of experience in London, Hong Kong, and Singapore.

Growing up in Northern Ireland against the violent backdrop of “The Troubles” during the tumultuous 1980s, she’s a veteran at weathering the sharp, harsh curveballs that life sometimes throws us. Despite the hardships she encountered as a young girl, including the Irish Republican Army shooting her police officer father, she went on to become a lawyer, compliance officer, recruiter, and later a consultant and educator.

This episode of Regulatory Ramblings is topical, timely, and deeply poignant. Oonagh talks to our host, Ajay Shamdasani, about the need for artificial intelligence (AI) to be deployed, mainly by international banking and financial institutions and by multinational corporations more generally, to combat financial scams, deep fakes, and sextortion.

The issue hit close to home earlier this summer when Oonagh, while working to raise awareness of the matter, learned that her 13-year-old daughter and a few of her school friends had become the victims of blackmail because of some innocent photos shared on Snapchat. Raising awareness, Oonagh says, can help prevent others from experiencing the same thing. She shares that RAW Compliance has been working on important awareness videos about social media scams and sextortion targeting pre-teens, teenagers, and young adults.

A recent poll by Europol revealed that cybercriminals are increasingly exploiting new technologies to commit complex and dangerous crimes – and, in many instances, using AI to commit vile acts of violation against the unwitting. For example, malicious large language models (LLMs) are used to develop scripts, phishing emails, and online fraud advertisements, and to overcome language barriers, allowing sex offenders to groom victims in any language and impersonate peers.

Then there is the threat of generative AI: AI-altered and fully artificial child sexual abuse materials are now so realistic, and so widely used in sextortion cases, that they have led to the blackmail and subsequent suicide of some victims.

Additionally, AI deepfakes are becoming more sophisticated and accessible. Such technologies make it vexatious for law enforcement to identify victims and find the appropriate legal framework to charge criminals. Yet, law enforcement has grown more tech-savvy and started using more advanced detection tools. It is still an uphill battle, however, as the authorities are all too often playing catch-up.

Oonagh also discusses her firm’s groundbreaking collaboration to support victims of financial scams and help recover their assets. She and Nick Leeson, the infamous former 90s-era Barings trader, combine their expertise to make a tangible difference in the fight against financial fraud. (Links below)

Oonagh says it matters because “Financial scams leave lasting impacts and destroy lives, with little to no help available. Recovery can feel overwhelming. By joining forces, we aim to turn the tide and provide the help and guidance victims need to reclaim their financial futures.”

In her view, banks are not doing enough to help victims of financial scams, mainly due to shortcomings in their technology and fraud detection systems. In the UK, for example, financial crime is a growing issue, with over 3.5 million people affected by scams annually, leading to losses exceeding £1.2 billion.

The problem is equally severe in continental Europe, with countries like Ireland and the Netherlands reporting significant increases in scam-related incidents, resulting in hundreds of millions of euros in losses.

Similarly, in the US, financial scams cost consumers over $3.3 billion annually.

The conversation continues with Oonagh fleshing out how financial institutions can navigate evolving regulations and effectively monitor child sexual abuse materials (CSAM). She also discusses the challenges and strategies for investigating CSAM and human trafficking in traditional and decentralized financial systems. She emphasizes the hurdles of global technology in combating such crimes and estimates the value of suspected CSAM transactions using fiat versus cryptocurrency.

The discussion concludes with Oonagh pointing out that the financial sector has often shirked its responsibility when it comes to anti-money laundering, “pig butchering,” human trafficking, and financial scams. The sad truth is that many victims will never truly be made whole.

She stresses that when it comes to law enforcement and investigators, the biggest takeaway for traditional financial crime compliance professionals and blockchain investigators is understanding suspicious red flags and other typologies supporting investigations.

We are bringing you the Regulatory Ramblings podcasts with assistance from the HKU Faculty of Law, the University of Hong Kong’s Reg/Tech Lab, HKU-SCF Fintech Academy, Asia Global Institute, and HKU-edX Professional Certificate in Fintech.

Useful links in this episode:

  • Connect or follow Oonagh van den Berg on LinkedIn

  • RAW Compliance: Webpage

  • Oonagh van den Berg and Nick Leeson, through FundsRehab.com, offer support and solutions for those impacted by financial scams, guiding them through asset recovery. FundsRehab.com is dedicated to combating financial fraud and driving change, with updates on their efforts posted to the website.

Connect with RR Podcast at:

LinkedIn: https://hk.linkedin.com/company/hkufintech 
Facebook: https://www.facebook.com/hkufintech.fb/
Instagram: https://www.instagram.com/hkufintech/ 
Twitter: https://twitter.com/HKUFinTech 
Threads: https://www.threads.net/@hkufintech
Website: https://www.hkufintech.com/regulatoryramblings 

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net/

Categories
Daily Compliance News

Daily Compliance News: August 29, 2024 – The Getting Ahead at Work Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News. All from the Compliance Podcast Network.

Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

  • GenZ guide for getting ahead at work. (WaPo)
  • A whistleblower lawyer who used fake AI cases says no harm, no foul. (Reuters)
  • Criminal convictions in Switzerland for 1MDB scandal. (Reuters)
  • Treasury loosens AML requirements for financial advisors, real estate agents. (WSJ)

For more information on the Ethico ROI Calculator and a free White Paper on the ROI of Compliance, click here.

Categories
Innovation in Compliance

Innovation in Compliance: Unpacking Healthcare Compliance with Maria Villanueva

Innovation comes in many forms, and compliance professionals must be ready for and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, Tom welcomes compliance aficionado Maria Villanueva to dive deeply into healthcare compliance.

In this episode, Tom and Maria discuss her diverse career trajectory from accounting to healthcare compliance and delve into the complexities of ethical selling, aggregate spending challenges, and the growing role of AI in the compliance industry. Drawing on her extensive experience, she offers valuable insights on balancing roles in compliance and HR, the impact of data analytics, and the future landscape of healthcare compliance.

Key Highlights

  • Passion for Healthcare
  • Challenges in Healthcare Compliance
  • Balancing Compliance and HR Roles
  • The Role of Data Analytics and AI in Compliance
  • Future of Healthcare Compliance

Resources:

Maria Villanueva on LinkedIn 

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn