Categories
10 For 10

10 For 10: Top Compliance Stories For the Week Ending September 20, 2025

Welcome to 10 For 10, the podcast that brings you the week’s Top 10 compliance stories in a single episode. Tom Fox, the Voice of Compliance, brings you, the compliance professional, the compliance stories you need to know to end your busy week. Sit back, and in 10 minutes, hear about the stories every compliance professional should be aware of from the prior week. Every Saturday, 10 For 10 highlights the most important news, insights, and analysis for the compliance professional, all curated by the Voice of Compliance, Tom Fox. Get your weekly fill of compliance stories with 10 For 10, a podcast produced by the Compliance Podcast Network.

Top stories include:

  • A former Navy No. 2 was sentenced to 6 years for corruption. (NBC)
  • BCG employees to take Humanitarian Principles training. (FT)
  • DOJ is about to cut loose the Binance monitor. (Bloomberg)
  • Trump calls for the end of quarterly reporting for public companies. (NYT)
  • Trump claims there is a deal with TikTok. (FT)
  • Marcos says no one will be spared in the corruption investigation. (Reuters)
  • First AI CCO. (BBC)
  • CFTC probes Google, Amazon over advertising. (Reuters)
  • Can Zoom make your meetings better? (NYT)
  • DOJ is looking at Uber for disability violations. (WSJ)

You can check out the Daily Compliance News for four curated compliance and ethics-related stories each day, here.

Connect with Tom 

Instagram

Facebook

YouTube

Twitter

LinkedIn

You can purchase a copy of my new book, Upping Your Game, on Amazon.com.

Categories
Compliance Into the Weeds

Compliance into the Weeds: SCCE Compliance and Ethics Institute Report

The award-winning Compliance into the Weeds is the only weekly podcast that takes a deep dive into a compliance-related topic, literally going into the weeds to explore a subject more fully. Looking for some hard-hitting insights on compliance? Look no further than Compliance into the Weeds! In this episode of Compliance into the Weeds, Tom Fox and Matt Kelly discuss Matt’s experiences at the recently concluded SCCE Compliance and Ethics Institute.

Matt shares his insights on the atmosphere, key sessions, and notable absences from the agenda. They explore the innovative use of AI in compliance programs, including the development of chatbots for policy inquiries. Additionally, they reflect on leadership changes within the SCCE and use the metaphor of tending a bonsai tree to describe nurturing a compliance culture, emphasizing its long-term growth and development within organizations.


Key highlights:

  • The SCCE conference was well attended, with over 1,300 participants.
  • The absence of key representatives from the Trump administration was notable.
  • Innovative presentations offered fresh perspectives on compliance topics.
  • Compliance professionals must adapt policies to effectively support AI tools.
  • Leadership changes at SCCE signal a new direction for the organization.

Resources:

Matt on Radical Compliance 

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

A multi-award-winning podcast, Compliance into the Weeds was most recently honored as one of the Top 25 Regulatory Compliance Podcasts, a Top 10 Business Law Podcast, and a Top 12 Risk Management Podcast. Compliance into the Weeds has been conferred the Davey, Communicator, and W3 Awards for podcast excellence.

Categories
Daily Compliance News

Daily Compliance News: September 18, 2025, The Four Humours Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, covering compliance, ethics, risk management, leadership, or general interest, all relevant to the compliance professional.

Top stories include:

  • Muzzled Ben and Jerry’s founder resigns. (NYT)
  • Data Privacy Policies: To Be or Not to Be. (Reuters)
  • The 4 personality types. (BBC)
  • DOJ is about to cut loose the Binance monitor. (Bloomberg)
Categories
Daily Compliance News

Daily Compliance News: September 15, 2025, The AI CCO Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, covering compliance, ethics, risk management, leadership, or general interest, all relevant to the compliance professional.

Top stories include:

  • First AI CCO. (BBC)
  • CFTC probes Google, Amazon over advertising. (Reuters)
  • Can Zoom make your meetings better? (NYT)
  • DOJ is looking at Uber for disability violations. (WSJ)
Categories
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 59 – The Foot Fetish Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

  • AI vs. AI: The Battle Over Fraudulent Receipts
  • Whistleblower Lessons: Nestlé CEO Dismissal Case
  • Forced Labor Legislation: UK and EU Developments
  • Boeing, DOJ, and the Role of Corporate Monitors
  • Workplace Activism: Managing Political Debate at Work
  • Data Privacy: French Fines Against Google and Shein
  • Corporate Wellness: Innovative Employee Perks
  • Children’s Data Privacy: Disney’s FTC Settlement
  • Florida Man Story: Compliance Lessons from the Absurd

Connect with the hosts:

Resources:

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

Categories
Blog

Declinations Are Not Exits: Using Liberty Mutual to Pressure-Test Your Compliance Program

In August 2025, the Department of Justice announced its first FCPA declination of the year, closing its investigation into Liberty Mutual Insurance Company. The facts, while concise, are significant: between 2017 and 2022, employees of Liberty General Insurance, Liberty Mutual’s Indian subsidiary, funneled approximately $1.47 million in bribes to officials at six state-owned banks in exchange for customer referrals. These illicit payments, concealed as marketing expenses and routed through third-party intermediaries, generated $9.2 million in revenue and $4.7 million in profits.

Despite this misconduct, DOJ declined prosecution, citing Liberty Mutual’s early self-disclosure in March 2024 while its internal investigation was still underway; its full and proactive cooperation, including naming individuals involved; and its timely remediation efforts, which included a full acceptance of responsibility, a systematic root cause analysis, and enhanced compliance controls. Notably, the company agreed to disgorge nearly $4.7 million in profits and adopted strengthened policies on third-party oversight, social media use, and ephemeral messaging apps.

Far from a routine declination, Liberty Mutual’s case is a blueprint for how DOJ expects companies to handle potential FCPA violations in 2025 and beyond. For compliance officers, it provides an opportunity to benchmark their programs against the department’s revised Corporate Enforcement Policy and assess whether their own organizations could withstand the scrutiny that Liberty Mutual faced.

What lessons should the compliance community draw from this “plain Jane” declination that is anything but ordinary? Today, we break it down.

Lesson 1: The Risks and Rewards of Early Self-Disclosure

Liberty Mutual’s decision to self-disclose in March 2024, before its internal investigation was complete, reflects the central tension in DOJ’s revised Corporate Enforcement Policy: disclose early or risk losing credit. Under the old guidance, companies were expected to report “immediately upon becoming aware” of potential misconduct, often before facts were clear. The 2025 revision softened the language slightly, but the expectation remains that companies step forward as soon as they have a clear understanding of the conduct, even if the picture is incomplete.

For compliance officers, this means preparing leadership and boards for tough judgment calls. Waiting for every fact to crystallize risks forfeiting the benefits of voluntary disclosure. Disclosing too early risks exposing the company to liability before it fully understands the problem. Building governance frameworks that allow rapid escalation, provisional risk assessment, and timely board engagement is no longer optional; it is a survival mechanism.

Lesson 2: “Full and Proactive” Cooperation

The declination letter praised Liberty Mutual for its “full and proactive cooperation.” This is a notable evolution in the DOJ’s vocabulary. We know what “full” means: produce documents, facilitate interviews, and respond to requests quickly. Note how this differs from the prior formulation by former Assistant Attorney General Kenneth Polite when discussing the DOJ’s Corporate Enforcement Policy. He defined cooperation as going “above and beyond the criteria for full cooperation” to provide “extraordinary” assistance, demonstrating immediacy, consistency, degree, and impact in both the disclosures and support of the investigation. Polite’s use of “extraordinary” went well beyond the framing of “full and proactive cooperation”: it demanded exceptional dedication to the investigation and active assistance to the DOJ in achieving its goals.

Liberty Mutual provided relevant facts about individuals, prepared materials the DOJ hadn’t specifically requested, and worked through foreign data privacy challenges to expedite production. That’s proactive.

For compliance professionals, the message is unmistakable: cooperation credit does not just come from answering questions; instead, it comes from anticipating them. Proactive means preparing translations before DOJ asks, synthesizing investigative findings into clear presentations, and offering additional documentation that regulators might find helpful. Companies that want declinations need to train investigative teams to think two steps ahead.

Lesson 3: Navigating Deconfliction and Investigative Boundaries

The Liberty Mutual matter also reminds us of the delicate dance of deconfliction: the DOJ’s practice of asking companies to delay interviewing certain employees so that prosecutors can conduct their interviews first. But cooperation doesn’t end there. The DOJ may also encourage companies to expand their investigations into new geographies or business units.

The 2025 CEP revisions signaled an intent to keep corporate investigations more focused, which gives companies leverage to push back on overreach while still demonstrating cooperation.

Compliance officers must strike a balance: honor deconfliction requests that allow prosecutors to proceed without interference, but defend investigative boundaries when asked to wander into areas where no evidence exists. A disciplined scope protects both resources and credibility with regulators.

Lesson 4: Fulsome Acceptance of Responsibility

One of the more striking phrases in the declination letter was DOJ’s recognition of Liberty Mutual’s “fulsome acceptance of responsibility.” This signals a shift from perfunctory acknowledgments of wrongdoing to meaningful ownership.

It is the difference between saying, “Yes, our subsidiary made mistakes,” versus declaring, “We, as the parent company, failed to prevent this misconduct, and we own the failure.” Liberty Mutual didn’t stop at distancing itself from bad actors; it accepted enterprise-level responsibility.

For boards and executives, this is a powerful compliance lesson. DOJ expects companies to shoulder responsibility broadly, not hide behind “rogue employees.” The tone set at the top must reflect ownership, contrition, and commitment to preventing recurrence.

Lesson 5: Root Cause Analysis as Compliance Bedrock

The declination also highlighted Liberty Mutual’s systematic root cause analysis. This is not a new concept in compliance circles, but it is increasingly central to the DOJ’s calculus. Simply removing the wrongdoer isn’t enough. The question is: what systemic weaknesses allowed the misconduct to occur?

Liberty Mutual conducted a thorough RCA that examined its control environment, third-party oversight, and cultural gaps. This analysis guided remediation efforts, including structural reorganization, increased compliance resources, and enhanced third-party monitoring.

For compliance officers, the takeaway is straightforward: build RCA into every investigative playbook. Document how each failure occurred, identify the control breakdowns, and map remediation directly back to those findings. DOJ does not just want to see discipline; it wants to see learning.

Lesson 6: Messaging, Social Media, and the New Compliance Frontier

Finally, the Liberty Mutual declination highlighted an issue that has been simmering beneath the surface: the use of ephemeral messaging and social media in business communications. DOJ specifically noted Liberty Mutual’s remediation in this area, a rarity in declinations.

This signals that DOJ expects compliance programs to account for modern communication risks, not just email and enterprise systems, but WhatsApp, Signal, Teams auto-delete, and even Facebook Messenger or Instagram DMs. These channels are increasingly central to both legitimate business and corrupt schemes.

For compliance officers, the challenge is twofold:

  1. Develop clear policies governing employee use of messaging and social media for business.
  2. Deploy monitoring and recordkeeping mechanisms that ensure compliance with legal and regulatory expectations.

This is the new frontier, and companies that fail to adapt may find themselves unable to demonstrate control credibly.
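The twofold challenge above has a concrete operational shape. As a minimal sketch, assuming a hypothetical channel policy (the channel names, retention periods, and field names below are illustrative, not drawn from any real system), a recordkeeping audit might flag business messages sent over channels the program does not capture, or held past their retention window:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: which channels are captured for recordkeeping,
# and how long records must be retained. Values are illustrative only.
POLICY = {
    "email":    {"captured": True,  "retention_days": 2555},  # ~7 years
    "teams":    {"captured": True,  "retention_days": 2555},
    "whatsapp": {"captured": False, "retention_days": 0},     # not archived
}

def audit_messages(messages, now=None):
    """Return (message, reason) pairs that violate capture or retention rules."""
    now = now or datetime.now(timezone.utc)
    violations = []
    for msg in messages:
        rule = POLICY.get(msg["channel"])
        if rule is None or not rule["captured"]:
            violations.append((msg, "channel not captured"))
        elif now - msg["sent"] > timedelta(days=rule["retention_days"]):
            violations.append((msg, "past retention period"))
    return violations

msgs = [
    {"channel": "email",    "sent": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"channel": "whatsapp", "sent": datetime(2025, 9, 1, tzinfo=timezone.utc)},
]
issues = audit_messages(msgs)
```

In this toy run, only the WhatsApp message is flagged, because the policy marks that channel as uncaptured; a real program would feed such findings into policy updates and employee training.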

Declinations as Roadmaps

The Liberty Mutual case may have looked routine at first glance, but it is anything but. For the compliance community, it serves as a roadmap for navigating the DOJ’s revised Corporate Enforcement Policy.

The lessons are clear: prepare for early self-disclosure, embrace proactive cooperation, defend investigative boundaries, accept responsibility broadly, conduct rigorous root cause analysis, and modernize oversight of communication.

Declinations are not just quiet exits; they are public teaching tools. Liberty Mutual’s experience demonstrates how a company can turn a damaging bribery scandal into a compliance success by owning the problem, learning from it, and showing a genuine commitment to reform. For today’s CCO, the real question is: if DOJ knocked on your door tomorrow, could you meet the Liberty Mutual standard?

Categories
All Things Investigations

All Things Investigations – DOJ’s Evolving Guidelines: Implications from Liberty Mutual’s FCPA Case

Welcome to the Hughes Hubbard Anti-Corruption & Internal Investigations Practice Group’s podcast, All Things Investigations. In this episode, host Tom Fox welcomes back Mike DeBernardis to discuss the first Foreign Corrupt Practices Act (FCPA) enforcement action of 2025, a declination involving Liberty Mutual Insurance Company.

Mike DeBernardis, partner at Hughes Hubbard & Reed, and Tom delve into the first FCPA enforcement action of 2025 involving Liberty Mutual. They discuss the nuances of self-disclosure during ongoing investigations, the challenges facing defense attorneys, and the expectations set by the new corporate enforcement policy. Key topics include proactive cooperation, dealing with deconfliction, and the importance of root cause analysis. The conversation provides valuable insights into how the Department of Justice communicates its expectations through enforcement actions and the evolving landscape of corporate compliance.

Key highlights:

  • Exploring the Liberty Mutual Case
  • Challenges of Early Self-Disclosure
  • Corporate Enforcement Policy Changes
  • Full and Proactive Cooperation
  • De-confliction in DOJ Investigations
  • Root Cause Analysis Importance
  • Social Media and Ephemeral Messaging

Resources:

Hughes Hubbard & Reed website

Mike DeBernardis

Categories
Blog

Using AI to Embed Compliance into Business Operations

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, I have created a one-page checklist for each post that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

Compliance programs have long wrestled with a central challenge: how to move from “bolt-on” to “built-in.” Too often, compliance has been perceived as an overlay, a set of policies and reviews that operate parallel to business activity. The Department of Justice has repeatedly emphasized that compliance should be integrated directly into operations, not treated as an afterthought.

Generative AI offers compliance professionals a new tool to achieve this. As Elisa Farri and Gabriele Rosani argue in their HBR article, “How AI Can Help Managers Think Through Problems,” AI is not just a productivity enhancer but a thought partner, capable of helping leaders frame problems, test assumptions, and engage in structured dialogues that improve decision-making.

Drawing on their article, I want to help compliance officers leverage AI to embed compliance into business processes more effectively. Today, I conclude my five-part series on using GenAI in compliance by exploring how AI can assist in building compliance into the business and what it means for the future of compliance programs. I also provide five key takeaways for compliance professionals on how to do so.

1. AI as a Co-Thinking Partner for Embedding Compliance into Workflows

One of the article’s most powerful insights is the concept of “co-thinking”; AI as a partner in structured dialogue rather than just a tool for quick answers. For compliance, this is transformative. Imagine using AI not simply to draft a policy, but to help you think through how that policy should be embedded in day-to-day operations.

For instance, when designing a gifts-and-entertainment approval process, AI can walk compliance through stakeholder perspectives: What does sales need? What would regulators expect? What friction will finance raise? By simulating these perspectives, AI helps compliance professionals design workflows that are practical and embedded, rather than abstract and detached.

This approach also makes compliance more proactive. Instead of reacting to risks after violations occur, AI-enabled co-thinking allows compliance to anticipate where policies may clash with business objectives and design operational solutions upfront. The compliance lesson is to treat AI as a structured dialogue partner to design compliance that lives inside the workflow, policies, and processes that are not just documented but operationalized.

2. Enhancing Stakeholder Engagement Through AI Simulations

Embedding compliance into business operations requires more than rules; it requires buy-in. The article highlights how AI can role-play different stakeholders, challenging managers to anticipate reactions. Compliance can use this capability to stress-test initiatives before rollout.

Suppose compliance is introducing a new due diligence system for third-party onboarding. AI can simulate how procurement might respond (“slows down vendor onboarding”), how business development might object (“hurts competitiveness”), and how regulators might evaluate (“strong demonstration of risk-based management”). This multi-stakeholder dialogue allows compliance teams to refine both process design and messaging before rollout.

The implication for compliance programs is clear: embedding compliance requires deep cultural alignment. AI makes it possible to test and rehearse that alignment at scale, reducing resistance and building smoother adoption. The compliance lesson is to use AI simulations to bring stakeholder voices into the design process, ensuring compliance is not bolted on but built with empathy for business realities.

3. AI-Assisted Root Cause Analysis Strengthens Business Integration

Compliance programs are expected to conduct root cause analysis after misconduct, but too often these reviews remain siloed. AI-enabled co-thinking helps expand root cause analysis into an exercise that strengthens business operations.

For example, when analyzing repeated travel and expense violations, AI can guide compliance through structured questions: Were training gaps to blame? Were approval workflows too weak? Were sales incentives misaligned? Then, critically, AI can help map remediation back into operations—tightening finance approvals, adjusting incentive structures, and embedding compliance flags directly into expense systems.

This is not about AI making the decision. It is about AI helping compliance think through the operational integration of lessons learned. Instead of a report that sits on a shelf, the outcome becomes operational adjustments inside business processes. The compliance implication is that the DOJ expects compliance programs to prevent recurrence through systemic fixes. AI co-thinking can ensure those fixes are operational, not theoretical.

4. Scaling Compliance Culture and Mindset Shifts Across the Organization

The article notes how AI can be used to coach managers through mindset shifts, helping them reflect on new behaviors and practices. Compliance can use the same approach to embed cultural expectations directly into business teams. For example, AI can be configured as a compliance coach embedded in daily tools, guiding managers through ethical dilemmas, prompting reflection during approval requests, or reinforcing company values during project planning. Instead of compliance being external and episodic, it becomes internal and continuous.

This democratizes compliance development. A frontline manager in Asia can interact with AI that reinforces compliance culture in real time, rather than waiting for annual training or sporadic compliance visits. It also gives compliance leaders data on where employees are struggling, revealing cultural gaps that can be addressed systemically.

The implication is that embedding compliance is not just about systems but about mindset. AI can make culture-building a daily, distributed activity rather than a centralized, one-time effort.

5. Ensuring Human Judgment Remains Central in AI-Enabled Compliance

Finally, while AI can enhance problem-solving and integration, the article underscores that co-thinking only works when humans stay actively engaged. Compliance cannot abdicate responsibility to machines. This has profound implications for compliance programs. AI can help frame problems, simulate stakeholders, and propose operational fixes, but it cannot weigh reputational risk, interpret regulatory expectations, or balance competing global obligations. Those decisions require human judgment.

The key is balance: AI accelerates and deepens thinking, but compliance leaders must build governance frameworks to ensure outputs are reviewed, validated, and contextualized. Embedding compliance into business operations does not mean letting AI run the show; it means letting AI augment human reasoning so that compliance becomes more practical, strategic, and defensible.

The compliance lesson, based on both the DOJ’s FCPA Resource Guide and the 2024 ECCP, is clear that compliance must be risk-based, well-resourced, and continuously improved. AI helps compliance think through integration, but humans remain accountable for ensuring it meets regulatory standards and ethical expectations.

AI as a Pathway to Embedded Compliance

The future of compliance is embedded, not bolted on. DOJ expects it. Boards demand it. Employees need it. The challenge is figuring out how to make it real. AI offers compliance professionals a powerful new tool: not as an oracle, but as a co-thinker. By helping compliance frame problems, simulate stakeholders, strengthen root cause analysis, scale cultural coaching, and reinforce human judgment, AI can accelerate the shift from compliance as oversight to compliance as an integrated business practice.

The call to action is simple: use AI not just to make compliance faster, but to make compliance inseparable from business. That is how compliance earns trust, drives culture, and meets regulatory expectations in the age of AI.

Categories
Blog

Trust and Verify: How Compliance Can Harness AI Agents Safely

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, I have created a one-page checklist for each post that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

When we think of “trust” in compliance, our minds usually go to whistleblowers, employees, or third parties. But increasingly, the question of trust must extend to a new category of actors: AI agents.

As Blair Levin and Larry Downes explain in their provocative Harvard Business Review piece, “Can AI Agents Be Trusted?”, AI agents are not just smarter chatbots. They are software systems that can collect data, make decisions, and even act autonomously based on rules and priorities. For compliance professionals, this changes the game. If AI agents can act on our behalf, can they also be trusted to uphold compliance principles?

The answer is yes, but only if we design and monitor them with the same rigor that we apply to employees, third parties, and business partners. Today, we look at five key takeaways from their article to guide compliance professionals in building AI agents into trustworthy components of their programs.

1. Trust Requires Oversight, Just as with Human Agents

The article makes a simple but powerful analogy: think of an AI agent the way you would think of an employee or contractor. Before delegating sensitive responsibilities, you conduct background checks, put controls in place, and possibly even require bonding. The same must hold for AI.

For compliance, this means creating oversight structures before deploying agents into live workflows. If your compliance AI assistant can monitor transactions for red flags, you must ensure that a human compliance officer reviews its outputs. If it can escalate potential whistleblower complaints, you must validate that escalation logic against regulatory requirements.

AI oversight also means testing for vulnerabilities. As Levin and Downes note, AI agents are susceptible to hacking, manipulation, and even misinformation. Compliance should require penetration testing of any agent integrated into company systems, just as IT would test network defenses.

Trust is never blind in compliance. It is built on verification, monitoring, and accountability. AI agents can and should be trusted, but only when they operate within a compliance framework that mirrors the controls we already use for human agents.

2. Recognize and Manage Bias and Conflicts of Interest

One of the major risks highlighted in the article is bias, whether introduced by marketers, advertisers, or flawed training data. Just as a conflicted employee can steer decisions for personal gain, an AI agent can be subtly manipulated to favor sponsors, advertisers, or even certain viewpoints.

For compliance professionals, this should raise alarms. Imagine an AI agent used for third-party due diligence. If biased data shapes its recommendations, you could end up onboarding a high-risk vendor while rejecting a low-risk one. Worse, if regulators discover that your system relied on biased algorithms, you’ll face serious questions about program effectiveness.

The solution is conflict-of-interest monitoring for AI. Just as employees must disclose outside interests, AI agents should be tested and audited for hidden preferences. Compliance should insist on transparency from vendors about training data sources and sponsorship arrangements. In some cases, contracts with AI providers may need explicit clauses guaranteeing independence from commercial influence.

Compliance has always been about spotting and mitigating conflicts. In the age of AI, that vigilance must extend to our digital agents. Only then can we claim that our programs are fair, impartial, and defensible.

3. Treat AI Agents as Fiduciaries of Compliance

Perhaps the most compelling insight from Levin and Downes is that AI agents should be treated as fiduciaries. Just as lawyers, trustees, and board members owe a heightened duty of care to their clients, AI agents entrusted with compliance responsibilities must be designed and governed under similar standards.

For compliance officers, this concept aligns directly with DOJ expectations. The Evaluation of Corporate Compliance Programs (2024 ECCP) emphasizes accountability, transparency, and independence. By treating AI agents as fiduciaries, compliance leaders can extend these principles to technology.

What does fiduciary duty look like in practice?

  • Obedience: AI must follow company policies and regulatory standards.
  • Loyalty: AI must prioritize the company’s compliance objectives over any hidden commercial interests.
  • Confidentiality: AI must protect sensitive compliance data from leaks or misuse.
  • Accountability: AI actions must be traceable, with clear logs and audit trails.
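The accountability duty above is the most directly testable. As a minimal sketch, assuming a hypothetical agent interface (the wrapper, function names, and log format below are illustrative, not a real AI vendor API), every agent action can be routed through a thin layer that writes an audit-trail entry before the result is returned:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = []  # in practice, an append-only store with tamper protection

def audited_action(agent_name, action, payload, handler):
    """Run an agent action and record who did what, when, with what result."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_name,
        "action": action,
        "input": payload,
    }
    result = handler(payload)   # the agent's actual work happens here
    entry["output"] = result
    audit_log.append(entry)     # traceability: every action leaves a record
    logging.info(json.dumps(entry))
    return result

# Toy stand-in for a real screening agent
flagged = audited_action(
    "third-party-screener", "risk_check",
    {"vendor": "Acme Ltd"},
    lambda p: {"risk": "low"},
)
```

The design choice here is that logging happens in the wrapper, not inside the agent, so the audit trail survives even if the underlying model or vendor changes.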

This fiduciary framing provides compliance professionals with a powerful tool. It not only reassures stakeholders that AI can be trusted, but it also sets a benchmark that regulators can understand and evaluate. In short, fiduciary AI is defensible AI.

4. Build Market and Insurance-Based Safeguards

The article notes that beyond regulation, market mechanisms such as insurance and independent oversight will be critical to ensuring AI trustworthiness. For compliance leaders, this presents both a risk management strategy and an opportunity.

Just as identity theft insurance evolved alongside online banking, AI liability insurance will likely become a standard corporate requirement. Compliance officers should begin engaging with insurers to explore coverage for AI-related risks, such as data leaks, wrongful denials of due diligence clearance, or biased decision-making.

Equally important are third-party oversight tools. The article envisions AI “credit bureaus” that could audit agent behavior, set decision thresholds, or freeze activity when risks escalate. For compliance, such independent monitoring could provide an external layer of assurance that your AI systems are behaving as intended.

The takeaway is clear: do not rely solely on internal controls. Pair them with market-based safeguards and external verification. Doing so not only strengthens trust in AI agents but also demonstrates to regulators that your program embraces both proactive and independent oversight.

5. Design for Data Security and Local Control

Finally, Levin and Downes stress the importance of keeping decisions local; that is, ensuring sensitive data stays on company-controlled devices and servers, rather than in external clouds. For compliance professionals, this echoes a familiar principle: control the data, control the risk.

Agentic AI, by definition, processes vast amounts of sensitive information. If compliance agents are reviewing hotline reports, transaction monitoring data, or due diligence files, any data leakage could be catastrophic. That’s why strong encryption, local processing, and secure enclaves are essential.

Compliance officers should demand that AI vendors support:

  • On-device or private cloud processing for sensitive tasks.
  • Encryption of all data in transit and at rest.
  • Independent verification of security claims by external auditors.
  • Full disclosure of sponsorships, promotions, and paid influence.

By designing AI agents with local control and transparency, compliance teams can build systems that are both effective and trustworthy. Data security is not just an IT concern; it is a compliance imperative.
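The vendor demands above can be tracked as a simple checklist. As a minimal sketch (the class and field names are illustrative, not from the article), a compliance team could record each vendor's posture and surface any unmet requirements:

```python
from dataclasses import dataclass

# Hypothetical vendor assessment record mirroring the four demands above.
@dataclass
class VendorSecurityProfile:
    local_or_private_cloud: bool
    encrypted_in_transit_and_at_rest: bool
    independently_audited: bool
    discloses_paid_influence: bool

    def gaps(self) -> list:
        """Return the names of any unmet requirements."""
        return [name for name, met in vars(self).items() if not met]

# Example: a vendor that meets everything except independent verification.
vendor = VendorSecurityProfile(
    local_or_private_cloud=True,
    encrypted_in_transit_and_at_rest=True,
    independently_audited=False,
    discloses_paid_influence=True,
)
print(vendor.gaps())  # ['independently_audited']
```

A gap report like this gives procurement and compliance a shared, auditable artifact for each AI vendor review.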

Trust, But Never Blindly

AI agents hold immense potential for compliance programs. They can streamline monitoring, accelerate due diligence, and support real-time risk management. But as Levin and Downes remind us, they must also be carefully governed to prevent bias, manipulation, and misuse.

For compliance leaders, the path forward is to treat AI like any other agent, or channel your inner Ronald Reagan: trust, but verify. With oversight, fiduciary framing, market safeguards, and strong data controls, AI can become a trusted partner in compliance—one that strengthens, rather than weakens, the ethical fabric of the organization.

Categories
Blog

Building Your Own AI Assistant: Compliance Lessons in Customization

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, for each blog post, I have created a one-page checklist for each article that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

In the ever-changing world of compliance, resource constraints remain one of our biggest hurdles. Whether you’re drafting policies, conducting risk assessments, or preparing investigation summaries, the work is often repetitive, labor-intensive, and subject to tight deadlines. Enter the AI assistant, not as a futuristic dream, but as a practical, buildable tool available to compliance professionals right now.

Alexandra Samuel’s Harvard Business Review article, “How to Build Your Own AI Assistant,” makes one point crystal clear: if you can describe a project in plain English, you can build your own AI assistant. And for compliance professionals, this represents a transformative opportunity to reduce administrative burdens while increasing consistency, accuracy, and adaptability.

But building your compliance AI assistant isn’t about chasing efficiency alone—it’s about making intentional design choices that reinforce compliance objectives, protect corporate culture, and ensure regulatory defensibility. Today, we consider five key takeaways for compliance professionals, each showing how you can harness AI assistants to enhance, not replace, your compliance program.

1. Start with the Right Use Cases

Before building, compliance leaders must ask: What problems do we want AI to solve? Samuel notes that AI assistants excel in four domains: writing and communications, troubleshooting, project management, and strategic coaching. For compliance, this translates into use cases like:

  • Drafting first-pass policy updates aligned with global regulations.
  • Summarizing enforcement actions for Board reporting.
  • Automating responses to routine employee compliance questions (e.g., “Can I accept this client gift?”).
  • Tracking investigation timelines and automatically extracting action items from meeting transcripts.

Choosing the right use case ensures your AI assistant is a force multiplier rather than a shiny distraction. Importantly, you want to start with low-risk, high-volume tasks. Drafting an anti-corruption annual training memo? AI can handle the boilerplate. Deciding whether to disclose a potential FCPA violation to the DOJ? That still belongs squarely in the human domain.

The real lesson here: compliance officers should not let “AI hype” dictate priorities. Instead, define pain points within your compliance workflow and build assistants targeted at those specific, recurring problems. Start small, iterate, and scale responsibly.
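One way to make the low-risk/high-risk split operational is a simple triage rule that routes tasks to the assistant or to a human. This is a minimal sketch under stated assumptions: the task names and risk tiers below are illustrative, and any real deployment would draw them from your own risk assessment:

```python
# Route low-risk, high-volume tasks to the AI assistant for drafting;
# keep judgment calls (and anything unrecognized) with humans.
LOW_RISK_TASKS = {
    "draft_training_memo",
    "summarize_enforcement_action",
    "answer_routine_gift_question",
    "extract_action_items",
}
HIGH_RISK_TASKS = {
    "fcpa_disclosure_decision",
    "whistleblower_investigation_finding",
}

def route_task(task: str) -> str:
    """Return who handles a task. The assistant only drafts; even
    low-risk outputs remain subject to human review."""
    if task in LOW_RISK_TASKS:
        return "ai_assistant_draft"
    return "human_only"  # default-deny for high-risk or unknown tasks

print(route_task("draft_training_memo"))       # ai_assistant_draft
print(route_task("fcpa_disclosure_decision"))  # human_only
```

Note the design choice: anything not explicitly classified defaults to a human, which is the conservative posture regulators would expect.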

2. Design Clear Instructions—Your Assistant Is Only as Good as Its Guidance

According to Samuel, the “heart” of a custom AI assistant is the set of instructions you provide. For compliance teams, this is where risk and opportunity intersect. If your assistant doesn’t know who it is, what standards to apply, and what tone to use, it will produce outputs that undermine your credibility.

Think of instructions as your assistant’s Code of Conduct. Instead of saying “you are a compliance assistant,” you can be more precise:

  • “You are a corporate compliance officer drafting policies for a multinational company. You must ensure all content aligns with DOJ guidance on effective compliance programs, uses a professional but approachable tone, and provides practical examples for employees.”

These custom instructions allow you to “bake in” compliance frameworks from day one. For example, you can require the assistant to reference the COSO Framework for Internal Controls, ISO 37001, or the DOJ’s Evaluation of Corporate Compliance Programs whenever relevant.
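Treating the instruction set as a reusable artifact, rather than ad hoc typing, keeps every assistant in the program on the same baseline. As a minimal sketch (the helper function and its parameters are hypothetical), the persona and required frameworks can be assembled programmatically:

```python
def build_instructions(role: str, frameworks: list, tone: str) -> str:
    """Assemble a system-prompt 'Code of Conduct' for an AI assistant,
    baking named compliance frameworks in from day one."""
    lines = [
        f"You are {role}.",
        f"Use a {tone} tone and provide practical examples for employees.",
        "Reference the following frameworks whenever relevant:",
    ]
    lines += [f"- {f}" for f in frameworks]
    return "\n".join(lines)

prompt = build_instructions(
    role=("a corporate compliance officer drafting policies "
          "for a multinational company"),
    frameworks=[
        "DOJ Evaluation of Corporate Compliance Programs",
        "COSO Framework for Internal Controls",
        "ISO 37001",
    ],
    tone="professional but approachable",
)
print(prompt)
```

The resulting string can then be supplied as the system instructions in whatever assistant platform your organization has approved, and version-controlled like any other policy document.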

The key compliance insight: good AI assistants reflect great compliance design. Just as vague compliance policies create ambiguity, vague AI instructions create unreliable outputs. Invest time in precise persona-building for your assistant, and you’ll reap consistent, defensible results.

3. Feed It Knowledge—Without Losing Control of Sensitive Data

Samuel emphasizes that AI assistants become truly powerful when equipped with background documents, such as policies, reports, contracts, or training decks. For compliance, this is both a gold mine and a minefield.

On one hand, uploading prior investigation reports, risk assessments, or compliance training modules allows your assistant to generate outputs that reflect your company’s real history and regulatory environment. Imagine an assistant that can instantly pull together a cross-border risk assessment using your own prior filings and internal guidance.

On the other hand, compliance officers must stay vigilant about data protection, privilege, and confidentiality. Sensitive HR records, whistleblower reports, and privileged investigation materials should never be indiscriminately fed into a platform without proper safeguards.

Here lies the balancing act: compliance teams must create AI assistants that are well-informed but tightly governed. This may involve anonymizing data, working through secure enterprise-grade AI platforms, or restricting inputs to public and non-sensitive internal documents.

The compliance lesson is simple but non-negotiable: context matters, but confidentiality reigns supreme. Building a compliance AI assistant means establishing protocols for what can and cannot be shared.
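A protocol like that can include an automated redaction pass before any document reaches an AI platform. The sketch below is illustrative only: the patterns are minimal, the `EMP-` identifier format is hypothetical, and a real program would use a vetted de-identification or DLP tool rather than hand-rolled regexes:

```python
import re

# Minimal redaction pass applied before upload to an AI platform.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
EMPLOYEE_ID = re.compile(r"\bEMP-\d{4,}\b")  # hypothetical internal ID format

def redact(text: str) -> str:
    """Replace emails and internal IDs with placeholder tokens."""
    text = EMAIL.sub("[REDACTED-EMAIL]", text)
    text = EMPLOYEE_ID.sub("[REDACTED-ID]", text)
    return text

report = "Hotline report filed by jane.doe@example.com (EMP-10234)."
print(redact(report))
# Hotline report filed by [REDACTED-EMAIL] ([REDACTED-ID]).
```

Even a simple gate like this turns "what can and cannot be shared" from a policy statement into an enforced control.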

4. Iterate Constantly—Think Like a Compliance Monitor

Just as compliance programs require continuous improvement, so too do AI assistants. Samuel makes it clear that assistants won’t be perfect out of the box. They require ongoing feedback, refinement, and adjustment.

For compliance professionals, this is second nature. We already think in terms of monitoring, auditing, and revising. Apply the same discipline to your AI assistant:

  • Audit its outputs for accuracy, tone, and regulatory defensibility.
  • Track where it consistently underperforms (e.g., misinterpreting data privacy rules) and feed corrective instructions.
  • Periodically “refresh” its context files to reflect updated regulations, new enforcement actions, or changes in corporate policy.

Samuel suggests asking your assistant to write its own revised instructions based on your feedback. That’s a compliance monitoring exercise in itself—your assistant becomes both subject and participant in continuous improvement.

The compliance takeaway: treat your AI assistant as a dynamic system, not a static tool. Just as the DOJ expects ongoing risk assessments and remediation, regulators will expect that AI tools in compliance are actively managed, not blindly trusted.
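The audit-and-refine loop can be as simple as logging every reviewed output and its failure category, then surfacing where the assistant consistently underperforms. A minimal sketch, with illustrative category names:

```python
from collections import Counter
from typing import Optional

class OutputAudit:
    """Log reviewed assistant outputs and tally recurring failure modes
    so corrective instructions can be fed back."""

    def __init__(self):
        self.failures = Counter()
        self.reviewed = 0

    def record(self, failure_category: Optional[str] = None):
        self.reviewed += 1
        if failure_category:
            self.failures[failure_category] += 1

    def top_gaps(self, n: int = 3):
        """Return the most frequent failure categories."""
        return self.failures.most_common(n)

audit = OutputAudit()
audit.record()  # a clean output
audit.record("misread_data_privacy_rule")
audit.record("misread_data_privacy_rule")
audit.record("wrong_tone")
print(audit.top_gaps())
# [('misread_data_privacy_rule', 2), ('wrong_tone', 1)]
```

A tally like this tells you exactly which corrective instructions to write next, and it doubles as documentation that the tool is actively managed.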

5. Embed Ethical Guardrails and Accountability

The most important compliance lesson in building your own AI assistant is ensuring accountability. As Samuel warns, assistants can hallucinate or produce flawed outputs. In compliance, this is not simply an annoyance; it is a potential liability.

That means your assistant must operate under ethical guardrails:

  • Always include a human-in-the-loop review before any AI-generated compliance document is finalized.
  • Require disclosures when AI was used in drafting policies, reports, or training.
  • Train employees not to treat AI outputs as gospel but as drafts for critical evaluation.
  • Align your assistant’s objectives with compliance KPIs (accuracy, transparency, and defensibility) rather than raw speed.

This mirrors the DOJ’s emphasis on corporate accountability. An AI assistant may help draft your gifts and entertainment policy, but it cannot stand before prosecutors and defend your compliance program. That responsibility remains squarely with leadership.

The compliance lesson here is unmistakable: AI is a tool, not a scapegoat. Build it to augment compliance decision-making, not to absolve it.
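The human-in-the-loop and disclosure guardrails can be enforced in code rather than left to habit. As a minimal sketch (class and field names are illustrative), an AI-generated draft simply cannot be finalized without a named reviewer, and the finalized document carries an AI-use disclosure:

```python
class ComplianceDraft:
    """A draft document that blocks finalization until a human signs off,
    and discloses when AI assisted in drafting."""

    def __init__(self, title: str, body: str, ai_generated: bool = True):
        self.title = title
        self.body = body
        self.ai_generated = ai_generated
        self.reviewer = None

    def approve(self, reviewer: str):
        self.reviewer = reviewer

    def finalize(self) -> str:
        if self.ai_generated and self.reviewer is None:
            raise PermissionError("AI-generated draft requires human review")
        disclosure = " [Drafted with AI assistance]" if self.ai_generated else ""
        return f"{self.title}{disclosure} (approved by {self.reviewer})"

draft = ComplianceDraft("Gifts and Entertainment Policy", "...")
try:
    draft.finalize()  # blocked: no human has reviewed it yet
except PermissionError:
    pass
draft.approve("CCO Jane Smith")
print(draft.finalize())
```

Making the review gate a hard failure, rather than a checkbox, mirrors the principle that accountability for the final document always rests with a named human.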

From Experiment to Integration

Building your own AI assistant is not a technical challenge. It is a compliance design challenge. As Alexandra Samuel reminds us, if you can describe your project, you can build your assistant. For compliance officers, that means thinking intentionally about use cases, precision in instructions, safeguards for sensitive data, iteration, and ethical guardrails.

The opportunity is immense. With thoughtfully designed AI assistants, compliance professionals can shift their focus from repetitive drafting to higher-order strategy, from administrative overload to proactive risk management. But the responsibility is equally immense. An AI assistant reflects the design choices of its creators, choices that must always prioritize compliance culture, accountability, and trust.