Categories
Daily Compliance News

Daily Compliance News: September 27, 2024 – The Hiz Honor Indicted Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News. All from the Compliance Podcast Network.

Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

  • NYC Mayor Adams indicted on bribery and corruption charges. (NYT)
  • What happens when a news organization is a hedge fund or class action firm? (Bloomberg)
  • DOJ probing Super Micro Computer. (WSJ)
  • SEC fines 11 more firms for failures in messaging apps. (SEC Press Release)

Categories
Compliance Tip of the Day

Compliance Tip of the Day: Lesson from The John Deere FCPA Enforcement Action – Root Cause Analysis for Remediation

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements.

Whether you’re a seasoned compliance professional or just starting your journey, our aim is to provide you with bite-sized, actionable tips to help you stay on top of your compliance game.

Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law.

Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we review why a root cause analysis is the first step you should take before you begin the remediation of your compliance program.

Categories
Data Driven Compliance

Data-Driven Compliance: The DOJ Mandate on Transforming Compliance Through Data Analytics and AI with Vince Walden

Are you struggling to keep up with the ever-changing compliance requirements in your business? Look no further than the award-winning Data Driven Compliance podcast, hosted by Tom Fox, which features in-depth conversations about the uses of data and data analytics in compliance programs. Data Driven Compliance is back with another exciting episode. Today, Vince Walden, founder of KonaAI, the sponsor of this podcast, returns to talk about the recent speech by Nicole Argentieri and the release of the 2024 Update to the Evaluation of Corporate Compliance Programs (ECCP).

Walden shares insights from Nicole Argentieri’s keynote and the ECCP update, emphasizing the DOJ’s focus on data access in compliance. We explore the importance of utilizing both compliance and business data for effective fraud and risk management. Walden underscores the necessity for compliance professionals to collaborate with internal audit and finance departments, advocating for a risk-based approach to data analytics and continuous controls monitoring. The discussion also delves into leveraging AI and machine learning to improve compliance efficacy and overall business operations, arguing for the proportional allocation of resources to match the company’s sophistication level.

Key Highlights:

  • DOJ’s Focus on Data Access
  • Understanding Compliance Data Analytics
  • Training Compliance Officers on Data
  • Implementing Continuous Controls Monitoring
  • Cost Savings and ROI in Compliance
  • Proportionate Resource Allocation
  • Documentation and Transparency

Resources:

Vince Walden on LinkedIn

KonaAI

Tom Fox 

Connect with me on the following sites:

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Regulatory Ramblings

Regulatory Ramblings: Episode 54 – From Secret Service Agent to Global Financial Crime Fighter: David Caruso’s 30-Year Journey

David Caruso is the founder and managing director of the Dominion Advisory Group, a consulting firm based in Virginia, near the nation’s capital. The firm works with banks facing regulatory enforcement actions across the U.S., Europe, and Asia. David aids institutions and organizations in navigating financial crime risk and compliance modernization globally.

As a former special agent with the US Secret Service and a 1996 graduate of George Washington University, he has been at the forefront of shaping the financial crime risk and compliance profession. Building anti-money laundering (AML) and sanctions compliance programs at banking and financial institutions across the US and internationally, overseeing headline-grabbing corruption and money laundering investigations, and building and selling a RegTech software firm have afforded him an ideal perspective to reflect on every major issue and trend in the financial crime compliance space over the past 25 years.

In this episode of Regulatory Ramblings, David shares his reflections on a nearly three-decade career in AML and financial crime compliance with our host, Ajay Shamdasani. 

He recounts having worked at global institutions like JP Morgan, Riggs Bank, Wachovia, Washington Mutual, and HSBC, to name a few. His notable achievements include his time as Riggs Bank’s chief compliance and AML officer.

In that role, he was hired to address some program weaknesses cited by the US Treasury Department’s Office of the Comptroller of the Currency (OCC). While at Riggs, David’s team uncovered two notorious international corruption schemes involving the government of Equatorial Guinea and former Chilean dictator Augusto Pinochet. The team’s work led to investigations by the Department of Justice and the U.S. Senate Permanent Subcommittee on Investigations. 

The cases drew worldwide media attention from justice authorities in the US, UK, Spain, and Chile. The facts uncovered by David at Riggs shook US lawmakers and regulators, kicking off 10 years of active regulatory and law enforcement action against banks across the US. 

After Riggs, David founded The Dominion Advisory Group in 2005. From his ringside seat near Washington, DC, he works closely with executive management, boards, and outside counsel to craft responses and build entire financial crime risk and compliance programs to address regulatory concerns—of which there has been no shortage in recent years. 

David also discusses the allure of AML and financial crime compliance and what brought him to the professional path he has been on for over three decades. Methodologically speaking, he recounts what has changed in AML and financial crime in that time and what has remained the same. 

He concurs that since 1970, so many additional requirements and expectations have been created that AML teams have fallen behind on their primary mission. Reflecting on the impact of the Bank Secrecy Act (1970), the USA PATRIOT Act (2001), the Foreign Account Tax Compliance Act (2010), or FATCA, and the more recent Anti-Money Laundering Act (2020), he shares his views on how the weight of regulatory action has distracted from compliance professionals’ more critical tasks—with an eye towards how the regulatory exam-focused mindset of money laundering reporting officers (MLROs) affects operations and innovation.

David also describes the pervasive and ongoing discrepancies between what domestic and international/supranational policy-setting organizations, like the Paris-based Financial Action Task Force (FATF), say and what they do. He says, “No one wants to ask if new rules and regulations are working and whether they prevent crime or have the unintended consequence of reducing [economic] growth.”

He acknowledges the degree of geopolitical hypocrisy when it comes to AML and financial crime compliance, as well as to fighting bribery, fraud, and corruption internationally. Washington, New York, London, and Brussels all too often regulate the financial world. Yet, while the US and UK, and increasingly the EU, are some of the most aggressive jurisdictions regarding financial crime enforcement actions, their regulatory apparatus is often used to further their geopolitical goals. It is a view that many outside the West hold.

The conversation concludes with David’s views on why sanctions against Russia stemming from its 2022 invasion of Ukraine have largely been unsuccessful, how technologies such as artificial intelligence can help AML/KYC/FCC compliance, and what policy recommendations he suggests moving forward. 

We are bringing you the Regulatory Ramblings podcasts with assistance from the HKU Faculty of Law, the University of Hong Kong’s Reg/Tech Lab, HKU-SCF Fintech Academy, Asia Global Institute, and HKU-edX Professional Certificate in Fintech.

Useful links in this episode:

  • Connect or follow David Caruso on LinkedIn

  • Dominion Advisory Group: Webpage

Connect with RR Podcast at:

LinkedIn: https://hk.linkedin.com/company/hkufintech 
Facebook: https://www.facebook.com/hkufintech.fb/
Instagram: https://www.instagram.com/hkufintech/ 
Twitter: https://twitter.com/HKUFinTech 
Threads: https://www.threads.net/@hkufintech
Website: https://www.hkufintech.com/regulatoryramblings 

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net/

Categories
Blog

Argentieri Speech and 2024 ECCP: Complying with the 2024 ECCP on AI

The Department of Justice (DOJ), in its 2024 Update, has explicitly directed companies to ensure they have robust processes in place to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it’s crucial to integrate these mandates into your enterprise risk management (ERM) strategies and broader compliance programs. The DOJ posed two sets of queries for compliance professionals. The first is found in Section I, entitled Is the Corporation’s Compliance Program Well Designed? The following are questions a prosecutor could ask a company or compliance professional during an investigation.

Management of Emerging Risks to Ensure Compliance with Applicable Law

  • Does the company have a process for identifying and managing emerging internal and external risks, including risks related to the use of new technologies, that could potentially impact its ability to comply with the law?
  • How does the company assess the potential impact of new technologies, such as artificial intelligence (AI), on its ability to comply with criminal laws?
  • Is management of risks related to using AI and other new technologies integrated into broader enterprise risk management (ERM)  strategies?
  • What is the company’s approach to governance regarding the use of new technologies, such as AI, in its commercial business and compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from using technologies in its commercial business and compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?
  • To the extent that the company uses AI and similar technologies in its business or as part of its compliance program, are controls in place to monitor and ensure its trustworthiness, reliability, and use in compliance with applicable law and the company’s code of conduct?
  • Do controls exist to ensure the technology is used only for its intended purposes?
  • What baseline of human decision-making is used to assess AI?
  • How is accountability over the use of AI monitored and enforced?
  • How does the company train its employees on using emerging technologies such as AI?

The second set of questions ties AI to a company’s values, ethics, and, most importantly, culture. It is found in Section III, entitled Does the Corporation’s Compliance Program Work in Practice?, under Evolving Updates, and poses the following questions:

  • If the company is using new technologies such as AI in its commercial operations or compliance program, is the company monitoring and testing the technologies so that it can evaluate whether they are functioning as intended and consistent with the company’s code of conduct?
  • How quickly can the company detect and correct decisions made by AI or other new technologies that are inconsistent with the company’s values?

Thinking through both sets of questions will lead to more questions and a deep dive into your compliance culture, philosophy, and corporate ethos. It will also bring about unprecedented opportunities for businesses. However, with these opportunities come significant risks, especially in the context of legal compliance. The DOJ has now explicitly directed companies to ensure they have robust processes to identify, manage, and mitigate emerging risks related to new technologies, including AI. As compliance professionals, it is crucial, even obligatory, to integrate these mandates into your ERM strategies and broader compliance programs. Below are some ways a compliance professional can think through, and effectively respond to, the DOJ’s latest guidance on the first set of questions.

Establish a Proactive Risk Identification Process

Managing emerging risks begins with a proactive approach to identifying potential threats before they manifest into significant compliance issues.

  • Implement a Dynamic Risk Assessment Framework. Develop a risk assessment process that continuously scans internal and external environments for emerging risks. This should include regular updates to risk profiles based on the latest technological developments, industry trends, and regulatory changes. Incorporating AI into your business and compliance operations requires that you assess its immediate impact and anticipate future risks it might pose as the technology evolves.
  • Engage Cross-Functional Teams. Ensure that your risk identification process is not siloed within the compliance function. Engage cross-functional teams, including IT, legal, HR, and operations, to provide diverse perspectives on potential risks associated with new technologies. This collaboration will help you capture a more comprehensive view of the risks and their potential impact on your organization’s ability to comply with applicable laws.

Establish Rigorous Monitoring Protocols

Monitoring AI and other new technologies isn’t just a box-ticking exercise; it’s a continuous process that requires a deep understanding of the technology and the ethical standards it must uphold.

  • Set Up Continuous Monitoring Systems. Implement real-time monitoring systems that track AI outputs and decisions as they occur. This is crucial for identifying deviations from expected behavior or ethical standards as soon as they happen. Automated monitoring tools can flag anomalies, such as decisions that fall outside predefined parameters, for further review by compliance officers.
  • Define Key Performance Indicators (KPIs). Develop KPIs that specifically measure the alignment of AI outputs with your company’s code of conduct. These include fairness, transparency, accuracy, and ethical impact metrics. Regularly review these KPIs to ensure that AI systems perform within acceptable boundaries and contribute positively to your compliance objectives.
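
The monitoring ideas above can be made concrete with a small sketch. The following Python fragment shows one way automated monitoring might flag AI decisions that fall outside predefined parameters for human review; the thresholds, field names, and sample decisions are hypothetical illustrations, not anything prescribed by the DOJ:

```python
# Minimal sketch: flag AI decisions that fall outside predefined parameters
# for escalation to a compliance officer. All thresholds, field names, and
# sample records below are hypothetical.

APPROVAL_LIMIT = 50_000   # hypothetical: amounts above this need human sign-off
MIN_CONFIDENCE = 0.90     # hypothetical: low-confidence outputs are escalated

def flag_for_review(decision: dict) -> list[str]:
    """Return the reasons (if any) a decision should be escalated."""
    reasons = []
    if decision["amount"] > APPROVAL_LIMIT:
        reasons.append("amount exceeds approval limit")
    if decision["confidence"] < MIN_CONFIDENCE:
        reasons.append("model confidence below threshold")
    return reasons

ai_decisions = [
    {"id": "txn-001", "amount": 12_000, "confidence": 0.97},
    {"id": "txn-002", "amount": 75_000, "confidence": 0.95},
    {"id": "txn-003", "amount": 8_000,  "confidence": 0.62},
]

escalations = {d["id"]: flag_for_review(d) for d in ai_decisions}
flagged = {k: v for k, v in escalations.items() if v}
print(flagged)
```

In practice, the flagged items would feed a case-management queue for compliance review, and the thresholds themselves would be set, documented, and periodically revisited as part of the risk assessment.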

Integrate AI Risk Management into Your ERM Strategy

The DOJ expects companies to manage AI and other technological risks within the broader context of their enterprise risk management strategies.

  • Align AI Risk Management with ERM. Ensure that risks related to AI and other new technologies are integrated into your ERM framework. This means treating AI-related risks like any other enterprise risk, with appropriate controls, governance, and oversight. AI should not be viewed as a standalone issue but as an integral part of your organization’s overall risk landscape.
  • Develop AI-Specific Risk Controls. Establish controls that specifically address the unique risks posed by AI. These might include measures to prevent algorithmic bias, safeguards against AI-driven fraud, and protocols to ensure data privacy and security. Regularly review and update these controls to keep pace with technological advancements and emerging threats.

Implement Comprehensive Testing and Validation

Testing and validating AI technologies should be an ongoing practice, not just a one-time event during the deployment phase. The DOJ expects companies to rigorously evaluate whether these technologies are functioning as intended.

  • Stress-Test AI Systems. Subject your AI systems to scenarios that test their decision-making processes under different conditions. This includes testing for biases, errors, and unintended consequences. By simulating real-world situations, you can better understand how the AI might behave in practice and identify any potential risks before they manifest.
  • Periodic Audits and Reviews. Conduct regular audits of your AI systems to verify their continued compliance with company policies and ethical standards. These audits should include technical assessments and ethical evaluations, ensuring the AI’s decisions remain consistent with your company’s values over time.
  • External Validation. Consider bringing in third-party experts to validate your AI systems. External validation can objectively assess your AI’s functionality and ethical alignment, offering insights that might not be apparent to internal teams.
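
To make the audit idea tangible, here is a deliberately simplified sketch of one check a periodic review might run: comparing AI approval rates across two groups to surface potential bias for human investigation. The sample data and the tolerance threshold are invented for illustration only:

```python
# Minimal sketch of a periodic audit check: compare AI approval rates across
# two groups and flag a large gap for human investigation. The outcome data
# and the 10-percentage-point tolerance are hypothetical.

def approval_rate(outcomes: list[bool]) -> float:
    """Fraction of cases the AI approved."""
    return sum(outcomes) / len(outcomes)

group_a = [True, True, False, True, True, True, False, True]
group_b = [True, False, False, True, False, False, True, False]

gap = abs(approval_rate(group_a) - approval_rate(group_b))
TOLERANCE = 0.10  # hypothetical escalation threshold

if gap > TOLERANCE:
    print(f"Audit flag: approval-rate gap of {gap:.0%} exceeds tolerance")
```

A real audit would of course use production decision logs, statistically sound comparisons, and documented follow-up, but the shape of the check is the same: measure, compare against a pre-agreed boundary, and escalate.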

Develop a Rapid Response Mechanism

No system is infallible; even the best-monitored AI systems can make mistakes. The key is how quickly and effectively your company can detect and correct these errors.

  • Establish a Rapid Response Team. Create a dedicated team within your compliance function responsible for addressing AI-related issues as they arise. This team should be equipped to investigate flagged decisions quickly, determine the root cause of any inconsistencies, and implement corrective actions.
  • Implement Feedback Loops. Develop feedback loops that allow for continuous learning and improvement of AI systems. When an error is detected, ensure that the AI system is updated or retrained to prevent similar issues in the future. This iterative process is essential for maintaining the integrity of AI systems over time.
  • Document and Report Corrections. Keep detailed records of any AI-related issues and the steps taken to correct them. This documentation is critical for internal tracking and for demonstrating to regulators, like the DOJ, that your company is serious about maintaining ethical AI practices.

Strengthen AI Governance and Accountability

Governance is key to ensuring that AI and other new technologies are used responsibly and in compliance with the law.

  • Create a Governance Framework for Technology Use. Develop a governance framework outlining how AI and other emerging technologies will be used within your organization. This framework should define roles and responsibilities, set clear guidelines for the ethical use of technology, and establish protocols for monitoring and enforcement. Ensure that this framework is aligned with your company’s code of conduct and compliance objectives. Ensure these guidelines are communicated clearly to all stakeholders, including AI developers, compliance teams, and business leaders.
  • Enforce Accountability. Accountability for the use of AI should be clearly defined and enforced. This includes assigning specific oversight roles to ensure that AI systems are used as intended and that any deliberate or reckless misuse is swiftly addressed. Establish a chain of accountability spanning from the C-suite to the operational level, ensuring all stakeholders understand their responsibilities in managing AI risks.

Mitigate Unintended Consequences and Misuse

The DOJ is particularly concerned with the potential for AI and other technologies to be misused, deliberately or unintentionally, leading to compliance breaches.

  • Monitor for Unintended Consequences. Implement monitoring systems that can detect unintended consequences of AI use, such as biased decision-making, unethical outcomes, or operational inefficiencies. These systems should be capable of flagging anomalies in real-time, allowing your compliance team to intervene before issues escalate.
  • Restrict AI Usage to Intended Purposes. Ensure that AI and other technologies are used only for their intended purposes. This involves setting clear boundaries on how AI can be applied and establishing controls to prevent misuse. Regular audits should be conducted to verify that AI systems operate within these defined parameters and that any deviations are promptly corrected.

Ensure Trustworthiness and Human Oversight

As Sam Silverstein continually reminds us, culture is all about trust. The same is true for the use of AI in the workplace. AI’s trustworthiness and reliability are paramount in maintaining compliance and protecting your company’s reputation.

  • Implement Trustworthiness Controls. Develop controls to ensure the trustworthiness of AI systems, including regular validation of AI models, thorough testing for accuracy and reliability, and ongoing monitoring for performance consistency. These controls should be designed to prevent the AI from producing outputs that could lead to legal or ethical violations.
  • Maintain a Human Baseline. AI should complement, not replace, human judgment. Establish a baseline of human decision-making to assess AI outputs and ensure that human oversight is maintained where necessary. This could involve having human review processes for high-stakes decisions or integrating AI outputs into broader decision-making frameworks that involve human input.
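
A human baseline can be as simple as a routing rule: AI output is auto-accepted only for low-stakes cases, and everything else goes to a human reviewer. The sketch below is a minimal illustration; the "high-stakes" criteria (amount and counterparty novelty) are hypothetical:

```python
# Minimal sketch of a human-in-the-loop baseline: route high-stakes cases
# to a human reviewer instead of auto-accepting the AI's output.
# The criteria and the 10,000 threshold are hypothetical.

def route(case: dict) -> str:
    """Decide whether a case needs human review or can be auto-accepted."""
    high_stakes = case["amount"] > 10_000 or case["new_counterparty"]
    return "human_review" if high_stakes else "auto_accept"

print(route({"amount": 2_500, "new_counterparty": False}))   # routine case
print(route({"amount": 50_000, "new_counterparty": False}))  # high value
print(route({"amount": 100, "new_counterparty": True}))      # unknown party
```

The design point is that the boundary between "auto" and "human" is explicit, documented, and adjustable, which is exactly what a prosecutor reviewing the program would want to see.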

Train Employees on Emerging Technologies

As AI and other technologies become more prevalent, employee training is essential to ensure that your workforce understands both the benefits and risks.

  • Develop Comprehensive Training Programs. Create training programs that educate employees on using AI and other emerging technologies, focusing on compliance and ethical considerations. Training should cover the potential risks, the importance of adhering to the company’s code of conduct, and the specific controls to mitigate those risks. Employees should understand how the technology works and how to identify and address any decisions that may conflict with company values. Regular training sessions reinforce the importance of ethical AI use across the organization.
  • Promote a Culture of Awareness. Encourage a culture where employees are vigilant about the risks associated with new technologies. This involves fostering an environment where employees feel empowered to speak up if they notice potential issues and are actively engaged in ensuring that AI and other technologies are used responsibly.
  • Promote a Speak-Up Culture. Encourage employees to report concerns about AI-driven decisions, just as they would report other misconduct. A robust speak-up culture is critical for catching ethical lapses early and ensuring that AI systems remain aligned with company values.

The DOJ’s mandate on managing emerging risks, particularly those related to AI and other new technologies, underscores the need for a proactive, integrated approach to compliance. Compliance professionals can confidently navigate this complex landscape by embedding AI risk management within your broader ERM strategy, strengthening governance and accountability, mitigating unintended consequences, ensuring trustworthiness, and investing in employee training. The stakes are high, but with the right plan in place, your organization can harness the power of AI while staying firmly on the right side of the law.

Categories
Great Women in Compliance

Great Women in Compliance: 2024 SCCE CEI Wrap Up

This episode is a rare opportunity for #teamgwic to catch up in person at one of the key Ethics & Compliance events, the SCCE Compliance & Ethics Institute (CEI).  CEI was in Grapevine, Texas, and, as usual, was a great experience.

In this episode, Lisa, Hemma, Ellen, and Sarah discuss their highlights from the event. The first keynote was from Principal Deputy Assistant Attorney General Nicole M. Argentieri, who announced revisions to the Evaluation of Corporate Compliance Programs; the group touches on the significance of the changes and of having them announced at SCCE. There will be much more to come on this topic. Each of the women discusses her favorite panels and some key takeaways, including discussions of DEI, controls, and how to work with Boards, as a few examples. They also sent their well-wishes to Nick Gallo, who was missed but, more importantly, is on the road to recovery.

One of the best parts of the conference is the opportunity to network and share best practices, and the whole group thought this year’s exhibit hall, and the format of the conference with longer breaks, allowed people to make great connections and have the in-depth discussions that don’t always happen when you are rushing to make the next panel or event. And the second morning keynote from Matt Friedman, discussing his work in fighting human trafficking and modern slavery, was moving and inspirational, a reminder of the importance of what we do every day with our due diligence and knowing our customers.

All in all, it was a great week of connections and learning, one that provided real optimism about the contributions that ethics and compliance professionals make, and a chance to connect (or reconnect) with the amazing people in our community. If you were not able to attend, the team hopes this gives you a sense of the event.

#GWIC is proud to announce that it has been nominated for the WomenInPodcastAwards. This is a people’s choice award and whether you vote for #GWIC or other nominees, we ask that you send the elevator back down by voting. Voting closes October 1, 2024, and details can be found on the #GWIC LinkedIn page at http://www.linkedin.com/groups/12156164

Resources:

Join the Great Women in Compliance community on LinkedIn here.

Categories
Blog

Argentieri Speech and 2024 ECCP: Argentieri on Navigating AI Risks

Principal Deputy Assistant Attorney General Nicole M. Argentieri’s speech highlighted a critical shift in the Department of Justice’s (DOJ) approach to evaluating corporate compliance programs. As outlined in the updated 2024 Evaluation of Corporate Compliance Programs (2024 ECCP), the emphasis on data access signals a new era where compliance professionals are expected to wield data with the same rigor and sophistication as their business counterparts. This week, I am reviewing the speech and the 2024 ECCP. Over the next couple of blog posts, I will look at the most significant addition, the one concerning AI. Today, I will review Argentieri’s remarks to see what she said. Tomorrow, I will dive deeply into the new areas in the 2024 ECCP around new technologies such as Artificial Intelligence (AI).

In her remarks, Argentieri said, “First, … Our updated ECCP includes an evaluation of how companies assess and manage risk related to using new technology such as artificial intelligence in their business and compliance programs. Under the ECCP, prosecutors will consider the technology that a company and its employees use to conduct business, whether the company has conducted a risk assessment of using that technology, and whether the company has taken appropriate steps to mitigate any associated risk. For example, prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology, such as false approvals and documentation generated by AI. If so, we will consider whether compliance controls and tools are in place to identify and mitigate those risks, such as tools to confirm the accuracy or reliability of data the business uses. We also want to know whether the company monitors and tests its technology to evaluate its functioning as intended and consistent with its code of conduct.”

Argentieri emphasizes the importance of managing risks associated with disruptive technologies like AI. These updates signal a clear directive for compliance professionals: you must take a proactive stance on AI risk management. You can take the following steps to align your compliance program with the DOJ’s latest expectations.

Conduct a Comprehensive Risk Assessment of AI Technologies

The first step in meeting the DOJ’s expectations is to thoroughly assess the risks that AI and other disruptive technologies pose to your organization.

  • Identify AI Use Cases. Start by mapping out where AI is being used across your business operations. This could include everything from automated decision-making processes to AI-driven data analytics. Understanding the scope of AI use is essential for identifying potential risk areas.
  • Evaluate Vulnerabilities. Once you have a clear picture of how AI is utilized, conduct a detailed risk assessment. Look for vulnerabilities, such as the potential for AI to generate false approvals or fraudulent documentation. Consider scenarios where AI could be manipulated or fail to perform as expected, leading to compliance breaches or unethical outcomes.
  • Prioritize Risks. Not all risks are created equal. Prioritize them based on their potential impact on your business and the likelihood of occurrence. This prioritization will guide the allocation of resources and the development of mitigation strategies.
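
The prioritization step above is often reduced to a simple impact-times-likelihood score, so the riskiest items rise to the top of the remediation list. The sketch below illustrates the idea; the risk names and 1-to-5 scores are hypothetical:

```python
# Minimal sketch of risk prioritization: score each AI-related risk by
# impact x likelihood (both on a 1-5 scale) and sort descending.
# The risk register entries below are hypothetical.

risks = [
    {"name": "AI-generated false approvals",     "impact": 5, "likelihood": 3},
    {"name": "algorithmic bias in screening",    "impact": 4, "likelihood": 4},
    {"name": "data-privacy leakage via prompts", "impact": 5, "likelihood": 2},
]

for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritized:
    print(f'{r["score"]:>2}  {r["name"]}')
```

Even a crude scoring model like this forces the conversation about which AI risks get resources first, and the documented scores become evidence of a reasoned, risk-based approach.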

Implement Robust Compliance Controls and Tools

Once risks have been identified, the next step is to ensure that your compliance program includes strong controls and tools specifically designed to manage AI-related risks.

  • Develop AI-Specific Controls. Traditional compliance controls may not be sufficient to address AI’s unique challenges. Develop or adapt controls to monitor AI-generated outputs, ensuring accuracy and consistency with company policies. This might include cross-referencing AI decisions with manual checks or implementing algorithms that flag unusual patterns for further review.
  • Invest in AI-Compliance Tools. Specialized tools are available that can help compliance teams monitor AI systems and detect potential issues. Invest in these tools to enhance your ability to identify and mitigate AI-related risks. These tools should be capable of real-time monitoring and provide insights into the functioning of AI systems, including the accuracy and reliability of the data they generate.
  • Regular Testing and Validation. AI systems should not be a set-it-and-forget-it solution. Regularly test and validate your AI tools to ensure they function as intended. This should include stress testing under different scenarios to identify any weaknesses or biases in the system. The DOJ expects your company to implement AI and rigorously monitor its performance and alignment with your compliance objectives.

Monitor, Evaluate, and Adapt

AI technology and its associated risks constantly evolve, so your compliance program must be flexible and responsive.

  • Ongoing Monitoring. Continuously monitor AI systems’ performance to ensure they align with your company’s code of conduct and compliance requirements. This involves technical monitoring and assessing the ethical implications of AI decisions.
  • Adapt to New Risks. As AI technology advances, new risks will emerge. Stay informed about the latest developments in AI and disruptive technologies, and be ready to adapt your compliance program accordingly. This may involve updating risk assessments, enhancing controls, or revising your company’s overall approach to AI.
  • Engage with Technology Experts. Compliance professionals should work closely with IT and AI experts to stay ahead of potential risks. This collaboration is crucial for understanding the technical nuances of AI and ensuring that compliance strategies are technically sound and effectively implemented.

Ensure Alignment with the Company’s Code of Conduct

Finally, all AI initiatives must align with your company’s code of conduct and ethical standards.

  • Training and Awareness. Ensure that all employees, particularly those involved in AI development and deployment, are trained on the ethical implications of AI and the company’s code of conduct. This training should cover the importance of transparency, fairness, and accountability in AI operations.
  • Ethical AI Use. Embed ethical considerations into the AI development process. This means complying with the law and striving to use AI to reflect your company’s values. The DOJ will be looking to see if your company is avoiding harm and proactively promoting ethical AI use.

Argentieri’s remarks underscore the importance of managing the risks associated with AI and other disruptive technologies. Compliance professionals must take a proactive approach by conducting thorough risk assessments, implementing robust controls, and continuously monitoring AI systems to ensure they align with regulatory requirements and the company’s ethical standards. By taking these initial steps, you can meet the DOJ’s expectations and leverage AI to enhance your compliance program and overall business integrity. Join us tomorrow to take a deep dive into the new language of the 2024 ECCP and explore how to implement it.

Categories
Pawtastic Friends - The Paw Talk

Pawtastic Friends: The Paw Talk – Otto, Marchy and Alex

Welcome to Pawtastic Friends-The Paw Talk. In this podcast, host Tom Fox will visit with Michael and Melissa Novelli, co-founders of Pawtastic Friends, as well as those who work with them at Pawtastic Friends. Michael and Melissa are dedicated to helping shelter and rescue dogs in the Las Vegas area become more adaptable through enrichment training and activities such as yoga and aquatics training, as well as obedience and agility. This podcast is sure to tug on your heartstrings; just listen to how sweet this one dog is! Tune in now to hear more from Michael and Melissa Novelli as they discuss their passion for helping pups in need. Get ready for an exciting episode of Pawtastic Friends – The Paw Talk!

In this episode, we feature Otto, Marchy and Alex.

Otto and Marchy have both captured Melissa’s heart with their endearing personalities and eagerness to learn during their training at a rescue facility. Tom asks about the dogs’ potential. Michael and Melissa praise Otto and Marchy’s transformation from reluctance to engagement, emphasizing their need for caring adult homes that can provide them with adventures and companionship. Their experiences at the rescue facility, witnessing Otto and Marchy’s progress firsthand, shape their perspective that love, care, and engaging activities can profoundly impact the lives of rescue dogs, highlighting Otto and Marchy as perfect candidates for fur-ever families.

Quotes:

“At first, he just wanted to go to the door to leave the arena. And then afterwards, he really connected with Jenny and the engagement of watching him be a dog and play with ball and, you know, tug toys and going up and down the ramp and the a-frame and seeing him really absorb all the information she was sharing with him, it was just so awesome.” – Melissa Novelli

“It’s so fun to watch them actually play and, you know, come out of their shell and to really shine their personalities.” – Melissa Novelli

“We don’t charge anybody. We’re just looking to raise some money to, you know, fund our enrichment training center.” – Michael Novelli

Resources:

Pawtastic Friends

Donate to Pawtastic Friends

Pawtastic Friends on Instagram

Pawtastic Friends on Facebook

Categories
Daily Compliance News

Daily Compliance News: September 26, 2024 – The Legal Limbo Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen to the Daily Compliance News. All from the Compliance Podcast Network.

Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

  • Menendez sentencing was delayed. (Newsweek)
  • Ex-CEO of Skael faces criminal fraud charges. (WSJ)
  • S Iswaran convicted for corruption in Singapore. (BBC)
  • Roadmap of major DOJ antitrust cases. (NYT)

Categories
Business Integrity Innovations

Business Integrity Innovations: Innovating Against the Odds – Dr. Amy Jadesimi on Ethical Business Practices in Nigeria

The Compliance Podcast Network (CPN) and the Center for International Private Enterprise (CIPE) bring you Business Integrity Innovations. This podcast is inspired by Ethics 1st – a multi-stakeholder initiative led by CIPE that creates pathways for accountable and sustainable investment in Africa. Companies can standardize their business practices, develop sound corporate governance systems, and demonstrate their commitment to compliance and business ethics using Ethics 1st.

In this episode of the Ethics 1st podcast, hosts Tom Fox and Lola Adekanye welcome Dr. Amy Jadesimi, a member of the Ethics 1st advisory committee and a trailblazer in Nigerian industrial development. Dr. Jadesimi shares her incredible journey from being a medical graduate at Oxford and a banker at Goldman Sachs to earning her MBA at Stanford and leading LADOL in Nigeria. Under her leadership, LADOL transformed from a single warehouse into a sprawling industrial hub, slashing deep offshore logistics support costs and partnering with major companies like Samsung to create significant job opportunities and foster local industry growth.

Dr. Jadesimi discusses key aspects of her work, highlighting how compliance, sustainability, and ethical business practices drive innovation and competitiveness in challenging environments. She explains the importance of maintaining compliance to build trust, secure investment, and enable sustainable growth, emphasizing the positive impact on local communities and economies. Dr. Jadesimi’s insights provide valuable lessons for businesses striving to navigate regulatory challenges and make ethical decisions that lead to long-term success.

Key Highlights:

  • Building LADOL: Challenges and Successes
  • Navigating Regulatory Challenges
  • The Importance of Compliance
  • Sustainability and Business Efficiency
  • Future of Business in Nigeria

Resources:

CIPE

Dr. Amy Jadesimi on LinkedIn

LADOL