Categories
Blog

Building a Compliance Playbook for AI: Board-Level Lessons in Cybersecurity Oversight

Artificial intelligence (AI) has been heralded as one of the most transformative technologies of our time. It promises efficiency, productivity, and entirely new business models. Yet, as with any tool of such power, AI is both a friend and a foe. For corporate directors, compliance officers, and risk professionals, AI presents a dual challenge: leveraging its defensive strengths while preparing for its potential weaponization by malicious actors.

The National Association of Corporate Directors (NACD), in partnership with the Internet Security Alliance (ISA), has released a special supplement to its Directors’ Handbook on Cyber-Risk Oversight devoted entirely to AI in cybersecurity. It is a timely publication: 72% of companies were already using AI in 2024, and the risks are accelerating just as fast as adoption. For the compliance community, the report provides a roadmap for oversight, governance, and practical questions boards must ask management.

AI as Both Force Multiplier and Risk Multiplier

On one side of the ledger, AI enhances cybersecurity by automating threat detection, reducing false positives, identifying malware, and analyzing oceans of log data. Used wisely, AI allows companies to “get ahead of theft,” identifying vulnerabilities before criminals exploit them. Generative AI and large language models (LLMs), in particular, can speed detection, enrich threat indicators, and even suggest remediation steps.

However, these same capabilities are available to cybercriminals. AI lowers the barrier to entry for less sophisticated hackers, turbocharges phishing and social engineering campaigns, and allows nation-states to refine cyberattacks at scale. This duality makes AI unique: it amplifies both opportunity and risk simultaneously.

Oversight Imperatives for Boards

The handbook identifies four key imperatives for boards responsible for overseeing AI and cybersecurity.

1. Director Education – Boards must commit to continuous learning about AI’s risks, benefits, and regulatory developments. Few leaders yet possess the technical grounding needed to appreciate AI’s implications.

2. Threat and Opportunity Awareness – Directors must understand not just the dangers but also the strategic benefits AI can bring.

3. Regulation and Disclosure – Boards must anticipate evolving rules and disclosure obligations. AI oversight will require the same level of rigor as financial and ESG reporting.

4. Board Readiness – Boards must ensure management builds governance structures, ethical use frameworks, and clear communication channels about AI’s role.

Compliance Lessons from the NACD AI in Cybersecurity Handbook

1. Third-Party and Supply Chain Risk Will Intensify

Boards are advised to scrutinize vendors’ AI tools and data sources. As the handbook emphasizes, AI models can be trained on data of questionable provenance, including intellectual property, personally identifiable information, or even classified information. Using such models can expose organizations to liability. For compliance professionals, this means conducting enhanced due diligence on third-party AI systems. Ask vendors how they source training data, what models they use, and whether they have human oversight mechanisms in place to ensure quality. AI risk is now a key component of supply chain risk.

2. Transparency Is a Non-Negotiable

AI systems often function as “black boxes.” Their lack of explainability poses reputational and legal risks when decisions cannot be justified. Boards are urged to push for transparency in AI deployment, both internally and in customer-facing applications. For compliance professionals, this means incorporating explainability into your AI governance framework. Require documentation of training data, decision-making logic, and model limitations. If regulators ask, you must be able to show your work.

3. Continuous Monitoring Is the New Standard

As highlighted in the AI Seven-Step Governance Program, AI oversight requires more than pre-deployment testing. Continuous monitoring, auditing, and retraining must occur throughout the lifecycle of AI tools to ensure their effective use. For the compliance professional, this means your program must move beyond “check-the-box” vendor certifications. Build ongoing monitoring and assurance processes. Think of AI oversight as dynamic, not static.
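
To make “dynamic, not static” concrete, the sketch below illustrates one minimal form of post-deployment monitoring: comparing a live model’s behavior against its pre-deployment baseline. The metric (share of positive predictions) and the 10% tolerance are hypothetical choices for illustration, not prescriptions from the handbook.

```python
# Illustrative sketch only: a minimal drift check for a deployed AI model.
# The metric and the 10% tolerance below are hypothetical examples.

def positive_rate(predictions: list) -> float:
    """Fraction of predictions flagged positive (1) vs. negative (0)."""
    return sum(predictions) / len(predictions)

def drift_alert(baseline: list, live: list, tolerance: float = 0.10) -> bool:
    """True if the live positive rate drifts beyond tolerance from the
    pre-deployment baseline, signalling a need for review or retraining."""
    return abs(positive_rate(live) - positive_rate(baseline)) > tolerance

# Example: the model flagged 20% of cases at validation time,
# but live traffic is now flagging 45% of cases.
baseline = [1] * 20 + [0] * 80
live = [1] * 45 + [0] * 55

assert drift_alert(baseline, live)        # 0.25 gap exceeds 0.10 tolerance
assert not drift_alert(baseline, baseline)
```

In practice the check would run on a schedule against production logs, with alerts feeding the assurance process rather than a one-time pre-deployment sign-off.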

4. Regulation Will Come Fast and Furious

The NACD warns that while regulators often lag innovation by three to five years, the window for AI is already shortening. Boards relying on a “wait and see” approach will find themselves overwhelmed when rules arrive. Clearly, the compliance function must do more than wait for the regulators, particularly since the political will for comprehensive federal AI legislation in the US does not yet exist. This means you should align your approach today with emerging frameworks, such as the EU AI Act, the NIST AI Risk Management Framework, and the OECD AI Principles. Position your company to demonstrate proactive governance.

5. Disclosure Expectations Will Rise

AI adoption carries disclosure obligations across transparency, risk assessment, and incident reporting. Boards must assume that regulators and investors alike will demand clear, timely disclosure of AI-related incidents and governance practices. Compliance must lead the way in building AI into your corporation’s disclosure controls and procedures now. Ensure incidents involving AI failures are reported with the same rigor as material cybersecurity breaches.

6. The Board Must Get Educated—and Fast

The handbook emphasizes director education. Boards that lack AI fluency will struggle to provide proper oversight. Worse, they may overestimate management’s ability to mitigate AI risks. You should encourage board training through NACD, Carnegie Mellon’s CERT program, or trusted third-party advisors. Education is no longer optional; it may well become a fiduciary duty.

7. Governance Structures Must Evolve

Some companies are considering dedicated AI committees, while others integrate AI oversight into existing audit or risk committees. Either way, boards need clear lines of accountability. The questions boards should be asking management are listed extensively in the handbook, including:

  • How are competitors using AI?
  • Do we need a Chief AI Officer?
  • What is our exposure if adversaries use AI against us?
  • Have we segregated training data to know its provenance?
  • Are our policies aligned with the EU AI Act’s risk classifications?

Start these conversations today. Board agendas must include AI oversight as a recurring topic.

Building a Compliance Playbook for AI

Compliance professionals can translate the NACD’s recommendations into a practical playbook for their programs, incorporating the following key concepts.

  • Embed AI governance early – Don’t bolt compliance onto AI projects after the fact. Integrate governance into design and procurement stages.
  • Adopt a human-centered AI approach – Ensure AI is aligned with corporate values and ethical principles, not just efficiency goals.
  • Use risk quantification – Treat AI risk like any other enterprise risk: quantify, compare, and integrate into ERM frameworks.
  • Demand accountability – Require clear responsibility for AI oversight, whether it sits with the Chief Compliance Officer, CIO, or a new Chief AI Officer role.
  • Engage regulators early – Use disclosure and transparency as tools to build trust with regulators and stakeholders.
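
As a sketch of what “quantify, compare, and integrate” might look like in practice, the following illustrative Python scores AI systems on a standard likelihood-times-impact scale so they can be ranked inside an existing ERM register. The systems, scores, and the high-risk cutoff of 12 are invented for the example, not drawn from the NACD handbook.

```python
# Illustrative sketch only: ranking AI systems on a likelihood x impact
# scale, as an ERM register might. All systems and scores are hypothetical.

from dataclasses import dataclass

@dataclass
class AIRisk:
    system: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        """Simple inherent-risk score: likelihood x impact."""
        return self.likelihood * self.impact

register = [
    AIRisk("vendor chatbot", likelihood=4, impact=3),
    AIRisk("internal code assistant", likelihood=2, impact=2),
    AIRisk("credit-decisioning model", likelihood=3, impact=5),
]

# Rank highest-scoring risks first, as a board-level risk report might.
ranked = sorted(register, key=lambda r: r.score, reverse=True)
high_risk = [r.system for r in ranked if r.score >= 12]  # hypothetical cutoff
print(high_risk)  # credit-decisioning model (15), then vendor chatbot (12)
```

The point of the exercise is not the arithmetic but the comparability: once AI risks carry the same scoring scale as other enterprise risks, they compete for mitigation resources on equal footing.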

The handbook makes clear that AI in cybersecurity is not just a technology issue. It is an enterprise risk, a boardroom issue, and a compliance mandate. For compliance professionals, this means you must step into the AI oversight conversation.

As with the FCPA decades ago, regulators and stakeholders will expect companies to transition from a reactive to a proactive approach. The time to build frameworks, train directors, and embed oversight is now. AI, like every disruptive technology before it, will reward the prepared and punish the complacent. Compliance professionals are uniquely positioned to bridge the technical and governance divide. By applying lessons from the NACD handbook, we can ensure that AI becomes not a tool for criminals but a force multiplier for integrity, trust, and resilience in the digital age.

Categories
Great Women in Compliance

Great Women in Compliance – Compliance as a Product Differentiator with Susan Cooper

In today’s episode, Lisa Fine speaks with Susan Cooper, Vice President of Regulatory Compliance Programs and Global Data Protection Officer at Meta, about her approach to compliance in the technology sector. Susan discusses the path that led her to her current role, which is unique in that her team is embedded within Meta’s product organization.

Being part of the product development team allows compliance to work hand in hand with product development through their risk review process, which assesses privacy, security, content safety, and financial risks in a centralized process for over 1,400 products per month.

Susan also discusses how Meta utilizes “privacy-aware infrastructure,” embedding compliance requirements into standardized, reusable code components that can be used throughout the organization. She also provides some advice for compliance professionals, particularly those who are interested in technology companies, including:

  • Learn to speak “tech” if you want to work in tech compliance;
  • Get to know your stakeholders and their concerns;
  • Keep a growth mindset – be willing to ask questions and learn constantly; and
  • Embrace AI and automation tools to scale your work, and keep learning about these tools.

Categories
Upping Your Game

Upping Your Game – Leveraging Behavioral Analytics in Compliance: A Proactive Approach

In February, the Trump Administration suspended FCPA investigations and enforcement. Many compliance professionals have since wondered what this will mean for corporate compliance programs going forward. Hui Chen challenged compliance professionals with the statement, “It’s time to up your game.”

This podcast series, sponsored by Ethico and co-hosted with Ethico co-CEO Nick Gallo, hopes to meet Hui Chen’s challenge. We will discuss how compliance professionals can ‘Up Their Game’ by utilizing currently existing Generative AI (GenAI) tools to enhance their compliance programs significantly. As compliance professionals, it is crucial to recognize that this moment is not merely about incremental improvements but about elevating our profession to an entirely new level of effectiveness, efficiency, and organizational value.

Tom Fox and Nick Gallo explore the role of behavioral analytics in transforming cultural assessments and compliance programs. They discuss how AI and data analytics can help compliance officers transition from a reactive to a proactive approach, thereby enhancing decision-making and promoting positive behavior within organizations. The conversation covers the importance of continuously assessing culture, the challenges of measuring it, and the necessity of thinking in bets—much like a skilled poker player. Tune in to learn how to make smarter, more agile decisions in the compliance realm, and stay ahead of potential issues before they escalate.

Key highlights:

  • Behavioral Analytics in Compliance
  • The Importance of Measuring Culture
  • Evolution of Data Analytics in Compliance
  • Strategies for Gathering Behavioral Data

Resources:

Upping Your Game-How Compliance and Risk Management Move to 2030 and Beyond on Amazon.com

Nick Gallo on LinkedIn

Ethico

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI Today in 5

AI Today in 5: September 23, 2025, The $100bn Edition

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest. So start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  • Nvidia invests $100 billion in OpenAI. (NYT)
  • What is ‘human agency’? (FT)
  • AI investment as the new diplomacy. (Bloomberg)
  • UN wants Red Lines around AI. (NBC News)
  • Compliance in the age of AI. (Forbes)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy on Amazon.com.

Categories
AI Today in 5

AI Today in 5: September 22, 2025, The Chaos of Consent Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest. So start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

  • JFrog advances investment compliance. (Simply Wall St)
  • Using AI to navigate consent. (MarTech)
  • Making risk management a competitive advantage. (KPMG)
  • Using AI for cybersecurity. (IBM)
  • The AI race is like the Space Race. (Bloomberg)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy on Amazon.com.

Categories
10 For 10

10 For 10: Top Compliance Stories For the Week Ending September 20, 2025

Welcome to 10 For 10, the podcast that brings you the week’s top 10 compliance stories in one episode. Tom Fox, the Voice of Compliance, brings you, the compliance professional, the stories you need to know to end your busy week. Sit back, and in 10 minutes, hear about the stories every compliance professional should be aware of from the prior week. Every Saturday, 10 For 10 highlights the most important news, insights, and analysis for the compliance professional, all curated by the Voice of Compliance, Tom Fox. Get your weekly fill of compliance stories with 10 For 10, a podcast produced by the Compliance Podcast Network.

Top stories include:

  • A former Navy No. 2 was sentenced to 6 years for corruption. (NBC)
  • BCG employees to take Humanitarian Principles training. (FT)
  • DOJ is about to cut loose the Binance monitor. (Bloomberg)
  • Trump calls for the end of quarterly reporting for public companies. (NYT)
  • Trump claims there is a deal with TikTok. (FT)
  • Marcos says no one will be spared in the corruption investigation. (Reuters)
  • First AI CCO. (BBC)
  • CFTC probes Google, Amazon over advertising. (Reuters)
  • Can Zoom make your meetings better? (NYT)
  • DOJ is looking at Uber for Disabilities violations. (WSJ)

You can check out the Daily Compliance News for four curated compliance and ethics-related stories each day, here.

Connect with Tom 

Instagram

Facebook

YouTube

Twitter

LinkedIn

You can purchase a copy of my new book, Upping Your Game, on Amazon.com.

Categories
Compliance and AI

Compliance and AI: Innovation in Repurposing Content: Sheila Slick on PodtoBook.AI

What is the role of Artificial Intelligence in compliance? What about Machine Learning? Are you using ChatGPT? These questions are just three of the many we will explore in this cutting-edge podcast series, Compliance and AI, hosted by Tom Fox, the award-winning Voice of Compliance. In this episode, Tom Fox speaks with Sheila Slick, an entrepreneur and founder of PodtoBook.ai, a groundbreaking tool that repurposes podcast content into books.

Sheila shares her professional journey, from teaching math to founding a mobile application company, and her passion for sharing stories. She explains how her frustration with manual transcription and content creation led her to develop PodtoBook.AI. Sheila discusses the simplicity of the tool, which converts podcast episodes into first-draft manuscripts in just a few hours. She explores various innovative applications, including creating pitch books, event summaries, and preserving family stories. The conversation highlights the vast opportunities that AI offers in content repurposing and encourages listeners to embrace technological shifts to explore new business opportunities.

Key highlights:

  • Sheila’s Professional Journey
  • Founding PodtoBook.AI
  • The Power of AI in Content Creation
  • Using PodtoBook.AI for Business and Personal Stories

Resources:

PodtoBook.ai

Sheila Slick on LinkedIn | Instagram

Milestone Moments in Business & Leadership Podcast

Five Milestones

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
AI Today in 5

AI Today in 5: September 19, 2025, The AI is Scheming Edition Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest. So start your day, sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

Top AI stories include:

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy on Amazon.com.

Categories
Regulatory Ramblings

Regulatory Ramblings: Episode 78 – How Well Does the Money Laundering Control System Work? Spotlight on: Rethinking AI Regulation: Why Current Approaches Fall Short with Oonagh van den Berg, Prof. Peter Reuter, and Dr. Mirko Nazzari

In the initial spotlight segment of this episode, we speak with returning guest and regulatory compliance expert Oonagh van den Berg of Raw Compliance about an article she recently penned on LinkedIn titled “Rethinking AI Regulation: Why Current Approaches Are Falling Short” (check the links below).​

Following that, we chat with anti-money laundering (AML) and financial crime scholars Dr. Mirko Nazzari and Prof. Peter Reuter about their new article in the Journal of Crime & Justice, published by the University of Chicago Press, entitled “How Well Does the Money Laundering Control System Work?”

Oonagh van den Berg, a compliance veteran, is the founder of Raw Compliance, a compliance consultancy and training firm.

A lawyer by training and an entrepreneur by vocation, she grew up in Northern Ireland during the dark chapter of the country’s history better known as “The Troubles” and went on to achieve success after success: first as a lawyer, then as a compliance officer, a recruiter, and later, a consultant and educator. Having previously held roles in Asian financial hubs such as Singapore and Hong Kong, she is currently based in Braga, Portugal.

Dr. Mirko Nazzari is a postdoctoral research fellow in Political Science at Università degli Studi di Sassari, Italy. He holds a PhD in Criminology from Università Cattolica del Sacro Cuore (Italy), where he also served as a Research Fellow at Transcrime – Joint Research Centre on Innovation and Crime.

His research focuses on assessing and enhancing public policies for crime prevention and control, with particular emphasis on money laundering, cybercrime, and the policy challenges posed by emerging technologies. He has published extensively in these areas and contributed to applied policy research at both national and international levels.

Prof. Peter Reuter is Distinguished University Professor in the School of Public Policy and Department of Criminology at the University of Maryland. In 2019, he was awarded the Stockholm Prize in Criminology, the most prestigious award in the field. He founded the International Society for the Study of Drug Policy and RAND’s Drug Policy Research Center.

Discussion:

The podcast begins with a brief conversation between Oonagh and Regulatory Ramblings host Ajay Shamdasani about her September 8, 2025, article on LinkedIn, entitled “Rethinking AI Regulation: Why Current Approaches Are Falling Short.”

Her key takeaway for listeners and her readers is that: “AI isn’t just a technology—it’s an ecosystem. Regulating it requires cooperation, adaptability, and vision. Anything less will fail.”

Oonagh goes on to say: “Artificial Intelligence is evolving faster than regulators can keep up. Around the world, governments are racing to design frameworks to govern AI use, but the struggle is evident: how do you regulate something so pervasive, adaptive, and borderless without stifling innovation or missing critical risks?”

She assesses Hong Kong’s present dilemma, highlighted in a recent South China Morning Post article, which illustrates such challenges. The city faces obstacles in enforcing rules that would require AI-generated content to be labelled. Experts, she says, warn that the city’s market is “too small” to support bespoke legislation, and that without robust enforcement mechanisms, rules around watermarking and labelling may simply be ignored.

“This isn’t just a Hong Kong problem. It’s a global one. And it’s a sign that we need to rethink how AI regulation is designed and enforced,” she writes.

As the former British colony crafts its own AI rules regime, she highlights the challenges the city faces:

1. Fragmented and reactive regulation: Hong Kong currently relies on piecemeal laws—privacy, IP, finance—to govern AI. The lack of a unified statute leaves gaps and inconsistencies. This mirrors the situation in many jurisdictions where regulators patch AI onto existing frameworks rather than building something purpose-built.

2. Enforcement complexity: Even when rules exist, implementation is shaky. For example, China mandates labelling and watermarking of AI content, but technical evasion is easy, watermarking can be stripped, and compliance varies across platforms. Enforcement lags behind innovation.

3. Scale and coordination problems: Small markets like Hong Kong can’t realistically create standalone AI regimes that diverge too far from global standards. With multiple regulators (PCPD, HKMA, SFC) touching AI issues, coordination becomes another hurdle.

4. Ethical and societal risks remain unaddressed: Labelling helps promote transparency, but it doesn’t address deeper concerns, such as misinformation, deepfakes, privacy breaches, biased algorithms, or liability for harm.

Ultimately, Oonagh notes that the Special Administrative Region (SAR) needs to learn from other models.

For example, the EU AI Act is a superb piece of legislation. “The European Union has introduced the world’s most ambitious attempt at AI regulation,” she says. “Its risk-based approach divides AI systems into categories:

• Unacceptable risk (e.g., social scoring) – outright bans.

• High risk (e.g., biometrics, healthcare AI, financial services AI) – strict compliance, human oversight, mandatory audits.

• Low/minimal risk – lighter obligations.

“This is a principle-driven and comprehensive framework, but critics warn that its heavy compliance burden may stifle innovation in smaller companies. Enforcement capacity will also be tested—many national regulators are underfunded compared to the scope of responsibility,” she wrote.

Then there is the Singaporean model, which she acknowledges is “a more agile, industry-friendly approach with its Model AI Governance Framework.” Instead of rigid laws, it provides:

• Voluntary best practices (transparency, explainability, fairness).

• Industry sandboxes to experiment safely.

• A strong focus on multi-stakeholder collaboration between regulators, academia, and industry.

“This approach supports innovation while nudging companies toward responsible AI. But without legal force, it risks leaving gaps where bad actors can exploit weaknesses,” she says.

For Hong Kong to have a more workable approach, therefore, she recommends borrowing what works and is relevant to the local context. Namely:

Unified AI Regulation: Move beyond fragmented laws and adopt a dedicated AI framework, grounded in core principles: accountability, transparency, fairness, privacy, and safety.

Risk-Based Oversight: Like the EU Act, differentiate between high-risk and low-risk AI use, applying strict oversight only where harms could be severe.

Practical Enforcement Tools: Invest in watermarking and labelling standards that are technically robust, enforceable, and difficult to evade—while recognizing that labelling alone isn’t a silver bullet.

Dedicated Oversight Body: Create a central AI regulator to coordinate across sectors, avoid duplication, and respond quickly to emerging risks.

Public Engagement & Education: Foster societal trust by educating citizens on the risks, rights, and safeguards associated with AI, ensuring transparency in the decision-making process surrounding AI.

Global Alignment: For small markets like Hong Kong, aligning with global regimes—whether the EU Act’s structure or Singapore’s collaborative model—is key to avoiding regulatory isolation and easing compliance for international companies.

As Oonagh concludes, AI regulation cannot be built on ad hoc legal fixes or unenforceable guidelines. “Hong Kong’s struggles highlight the real-world limitations of trying to bolt rules onto outdated systems. The EU shows the power of principle-based, risk-tiered regulation, while Singapore demonstrates the agility of a collaborative, innovation-friendly approach,” she writes.

“The answer lies in combining these lessons: a unified, principle-driven law; proportionate, risk-based oversight; enforceable standards; and international harmonisation. Regulation must evolve as quickly as AI itself—not to slow it down, but to ensure that innovation happens safely, transparently, and for the benefit of society,” she says.

Moving into the lengthier discussion portion of the episode, Mirko and Peter discuss their article, published earlier this summer, entitled “How Well Does the Money Laundering Control System Work?”

The article takes a critical look at the global AML system and poses a simple yet fundamental question: Has it actually made money laundering more challenging or risky for criminals? The answer is more complicated, and less encouraging, than many might hope. And it’s a question for which there may be different answers at local, national, transnational, and global levels.

Mirko & Peter’s essay offers a critical and data-driven analysis of the global AML regime, highlighting:

  • The lack of empirical evidence that money laundering has become more difficult or less prevalent;
  • The often symbolic nature of international evaluations, such as the Financial Action Task Force Mutual Evaluations;
  • The high costs and unintended consequences of AML measures, including derisking; and
  • The central role of private entities in detecting suspicious activity, with significant operational implications.

Although lengthy, it is highly recommended reading for anyone working in or interested in AML, financial crime, and public policy evaluation.

Simply put, money laundering remains a significant concern worldwide, with substantial resources dedicated to preventing illicit funds from entering the financial system. Yet, despite decades of legislative and regulatory development, the effectiveness of AML frameworks remains dubious.

Again, the article is a sharp, data-informed critique of the current state of the international AML apparatus. The authors highlight seven key findings that challenge conventional wisdom:

  • Major banks regularly face hefty fines, but executives very rarely face criminal convictions
  • Money laundering is often no more complex or expensive today than it was in the late 1980s
  • Most laundering methods remain surprisingly basic
  • The system disproportionately benefits wealthy jurisdictions
  • AML measures yield valuable intelligence for law enforcement
  • But they also carry risks, including de-risking and data misuse
  • The real costs of AML compliance are rarely part of public debate; only occasionally is there mention of the costs borne by banks.

The abstract to their piece states: “The continued globalization of finances has generated an ever-larger array of methods for making criminal earnings appear legitimate. The global regime to control money laundering has become more sophisticated and comprehensive (i.e., expensive and intrusive). There is no evidence that money laundering is declining or becoming more difficult or expensive. The system’s failure has many sources. Nations that pushed for its creation and development have been unwilling to implement critical elements. Major banks have repeatedly failed to meet their obligations, suggesting either insufficient commitment or a lack of the necessary skills and systems to comply. Regulatory oversight has been inadequate. There is, however, evidence that the system aids enforcement of laws against criminal enterprises. Despite the consensus that the system works poorly, there is almost no discussion of substantial reforms.”

Their key conclusions are that simple laundering strategies remain pervasive, that adoption of sophisticated methods such as crypto has been relatively limited, and that most launderers move their own funds rather than avail themselves of the “professional services” of more experienced financial criminals.

The challenges they cite include the limited policy debate over AML and financial crime compliance in general, a tendency for policymakers and regulators to focus on incremental improvements rather than comprehensive reforms, and whether the current system of ever-growing suspicious activity report (SAR) filings is sustainable in the long term.

As Mirko says, “SARs are contributing to investigations,” but it is unclear whether such a system is sustainable over time. He highlights a common practice among money laundering reporting officers (MLROs) of reporting everything to avoid fines, sanctions, or personal reprimands—a phenomenon known as “defensive filing.”

However, as the example of the U.S. Treasury Department’s FinCEN shows, four million SARs are filed annually, a volume that cannot be effectively managed. This places significant strain on Financial Intelligence Units and law enforcement agencies, whose limited resources make it challenging to keep pace with the volume of reports.

Mirko added that not all money launderers are the same: the typologies of how a drug dealer, a kleptocrat, and a cryptocriminal launder funds may be very different.

When asked what policy choices they would advocate for regulators and law enforcement to adopt, both Mirko and Peter stressed the need to set realistic goals, develop alternative effectiveness metrics, and strike a balance between the competing yet compelling goals of AML controls and financial inclusion.

As the conversation concluded, Peter acknowledged that the White House’s statement earlier this year, indicating it would scale back AML enforcement, could lead to selective enforcement of such rules under the current Trump administration.

The Regulatory Ramblings podcast is brought to you by The University of Hong Kong – Reg/Tech Lab, HKU-SCF Fintech Academy, Asia Global Institute, and HKU-edX Professional Certificate in Fintech, with support from the HKU Faculty of Law.

Connect with RR Podcast at:

LinkedIn: https://hk.linkedin.com/company/hkufintech 
Facebook: https://www.facebook.com/hkufintech.fb/
Instagram: https://www.instagram.com/hkufintech/ 
Twitter: https://twitter.com/HKUFinTech 
Threads: https://www.threads.net/@hkufintech
Website: https://www.hkufintech.com/regulatoryramblings 

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net

Categories
Compliance Into the Weeds

Compliance into the Weeds: SCCE Compliance and Ethics Institute Report

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore a subject more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss Matt’s experiences at the recently concluded SCCE Compliance and Ethics Institute.

Matt shares his insights on the atmosphere, key sessions, and notable absences from the agenda. They explore the innovative use of AI in compliance programs, including the development of chatbots for policy inquiries. Additionally, they reflect on leadership changes within the SCCE and liken nurturing compliance to tending a bonsai tree, emphasizing the long-term growth and development of a compliance culture within organizations.

Key highlights:

  • The SCCE conference was well attended, with over 1,300 participants.
  • The absence of key representatives from the Trump administration was notable.
  • Innovative presentations offered fresh perspectives on compliance topics.
  • Compliance professionals must adapt policies to effectively support AI tools.
  • Leadership changes at SCCE signal a new direction for the organization.

Resources:

Matt on Radical Compliance 

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred the Davey, Communicator, and W3 Awards for podcast excellence.