Categories
Compliance Tip of the Day

Compliance Tip of the Day – Trust and Verify

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we continue our 5-part series on using AI in a best practices compliance program by considering how to trust and verify your use of AI in your compliance program.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
Blog

Trust and Verify: How Compliance Can Harness AI Agents Safely

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, I have created a one-page checklist for each blog post that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

When we think of “trust” in compliance, our minds usually go to whistleblowers, employees, or third parties. But increasingly, the question of trust must extend to a new category of actors: AI agents.

As Blair Levin and Larry Downes explain in their provocative Harvard Business Review piece, titled “Can AI Agents Be Trusted?”, AI agents are not just smarter chatbots. They are software systems that can collect data, make decisions, and even act autonomously based on rules and priorities. For compliance professionals, this changes the game. If AI agents can act on our behalf, can they also be trusted to uphold compliance principles?

The answer is yes, but only if we design and monitor them with the same rigor that we apply to employees, third parties, and business partners. Today, we look at five key takeaways from their article to guide compliance professionals in building AI agents into trustworthy components of their programs.

1. Trust Requires Oversight, Just as with Human Agents

The article makes a simple but powerful analogy: think of an AI agent the way you would think of an employee or contractor. Before delegating sensitive responsibilities, you conduct background checks, put controls in place, and possibly even require bonding. The same must hold for AI.

For compliance, this means creating oversight structures before deploying agents into live workflows. If your compliance AI assistant can monitor transactions for red flags, you must ensure that a human compliance officer reviews its outputs. If it can escalate potential whistleblower complaints, you must validate that escalation logic against regulatory requirements.
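
To make the human-in-the-loop point concrete, here is a minimal sketch in Python of a review gate: the agent can only queue a red flag, and escalation happens solely after a named compliance officer signs off. The data fields, names, and threshold logic are illustrative assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RedFlag:
    """A transaction flagged by the AI agent (fields are illustrative)."""
    transaction_id: str
    reason: str
    risk_score: float  # 0.0-1.0, as scored by the agent

@dataclass
class ReviewQueue:
    """Holds agent findings until a human compliance officer disposes of them."""
    pending: list = field(default_factory=list)
    decisions: list = field(default_factory=list)

    def submit(self, flag: RedFlag) -> None:
        # The agent may only queue an escalation, never execute one itself.
        self.pending.append(flag)

    def review(self, flag: RedFlag, reviewer: str, approve: bool, note: str) -> None:
        # Every human decision is recorded with reviewer, rationale, and timestamp.
        self.decisions.append({
            "transaction_id": flag.transaction_id,
            "reviewer": reviewer,
            "approved": approve,
            "note": note,
            "reviewed_at": datetime.now(timezone.utc).isoformat(),
        })
        self.pending.remove(flag)
        if approve:
            escalate(flag)  # escalation happens only after human sign-off

def escalate(flag: RedFlag) -> None:
    print(f"Escalating {flag.transaction_id}: {flag.reason}")

queue = ReviewQueue()
queue.submit(RedFlag("TXN-0042", "payment to new third party in high-risk region", 0.87))
queue.review(queue.pending[0], reviewer="J. Doe, CCO", approve=True,
             note="Confirmed against the due diligence file; escalate to investigations.")
```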

AI oversight also means testing for vulnerabilities. As Levin and Downes note, AI agents are susceptible to hacking, manipulation, and even misinformation. Compliance should require penetration testing of any agent integrated into company systems, just as IT would test network defenses.

Trust is never blind in compliance. It is built on verification, monitoring, and accountability. AI agents can and should be trusted, but only when they operate within a compliance framework that mirrors the controls we already use for human agents.

2. Recognize and Manage Bias and Conflicts of Interest

One of the major risks highlighted in the article is bias, whether introduced by marketers, advertisers, or flawed training data. Just as a conflicted employee can steer decisions for personal gain, an AI agent can be subtly manipulated to favor sponsors, advertisers, or even certain viewpoints.

For compliance professionals, this should raise alarms. Imagine an AI agent used for third-party due diligence. If biased data shapes its recommendations, you could end up onboarding a high-risk vendor while rejecting a low-risk one. Worse, if regulators discover that your system relied on biased algorithms, you’ll face serious questions about program effectiveness.

The solution is conflict-of-interest monitoring for AI. Just as employees must disclose outside interests, AI agents should be tested and audited for hidden preferences. Compliance should insist on transparency from vendors about training data sources and sponsorship arrangements. In some cases, contracts with AI providers may need explicit clauses guaranteeing independence from commercial influence.
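
What might that testing look like in practice? Here is a minimal sketch, assuming the agent's due diligence decisions can be exported as (group, approved) pairs, of a simple disparity check: compare approval rates across vendor regions and flag large gaps for human review. The groupings and tolerance are illustrative.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs exported from the
    due diligence agent's decision log."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def needs_review(rates: dict, max_gap: float = 0.15) -> bool:
    """True if the gap between the most- and least-favored group exceeds
    an (illustrative) tolerance, signalling a possible hidden preference."""
    return max(rates.values()) - min(rates.values()) > max_gap

log = [("LATAM", True), ("LATAM", False), ("EMEA", True),
       ("EMEA", True), ("APAC", False), ("APAC", False)]
rates = approval_rates(log)
print(rates, "-> escalate for bias review:", needs_review(rates))
```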

Compliance has always been about spotting and mitigating conflicts. In the age of AI, that vigilance must extend to our digital agents. Only then can we claim that our programs are fair, impartial, and defensible.

3. Treat AI Agents as Fiduciaries of Compliance

Perhaps the most compelling insight from Levin and Downes is that AI agents should be treated as fiduciaries. Just as lawyers, trustees, and board members owe a heightened duty of care to their clients, AI agents entrusted with compliance responsibilities must be designed and governed under similar standards.

For compliance officers, this concept aligns directly with DOJ expectations. The Evaluation of Corporate Compliance Programs (2024 ECCP) emphasizes accountability, transparency, and independence. By treating AI agents as fiduciaries, compliance leaders can extend these principles to technology.

What does fiduciary duty look like in practice?

  • Obedience: AI must follow company policies and regulatory standards.
  • Loyalty: AI must prioritize the company’s compliance objectives over any hidden commercial interests.
  • Confidentiality: AI must protect sensitive compliance data from leaks or misuse.
  • Accountability: AI actions must be traceable, with clear logs and audit trails (see the sketch below).
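
The accountability duty is the easiest to make concrete. Below is a minimal sketch, assuming a Python-based agent, of an append-only audit trail in which each entry carries a hash of the previous one, so after-the-fact tampering breaks the chain. Field names and the storage format are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

class AgentAuditTrail:
    """Append-only log of agent actions. Each entry hashes the prior entry,
    so any retroactive edit is detectable when the chain is re-verified."""

    def __init__(self, path: str = "agent_audit.jsonl"):
        self.path = path
        self.prev_hash = "genesis"

    def record(self, agent: str, action: str, detail: dict) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "detail": detail,
            "prev_hash": self.prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.prev_hash = entry["hash"]
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

trail = AgentAuditTrail()
trail.record("dd-screening-agent", "vendor_risk_scored",
             {"vendor": "ACME Ltd", "score": 0.42, "model": "illustrative-model-v1"})
```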

This fiduciary framing provides compliance professionals with a powerful tool. It not only reassures stakeholders that AI can be trusted, but it also sets a benchmark that regulators can understand and evaluate. In short, fiduciary AI is defensible AI.

4. Build Market and Insurance-Based Safeguards

The article notes that beyond regulation, market mechanisms such as insurance and independent oversight will be critical to ensuring AI trustworthiness. For compliance leaders, this presents both a risk management strategy and an opportunity.

Just as identity theft insurance evolved alongside online banking, AI liability insurance will likely become a standard corporate requirement. Compliance officers should begin engaging with insurers to explore coverage for AI-related risks, such as data leaks, wrongful denials of due diligence clearance, or biased decision-making.

Equally important are third-party oversight tools. The article envisions AI “credit bureaus” that could audit agent behavior, set decision thresholds, or freeze activity when risks escalate. For compliance, such independent monitoring could provide an external layer of assurance that your AI systems are behaving as intended.

The takeaway is clear: do not rely solely on internal controls. Pair them with market-based safeguards and external verification. Doing so not only strengthens trust in AI agents but also demonstrates to regulators that your program embraces both proactive and independent oversight.

5. Design for Data Security and Local Control

Finally, Levin and Downes stress the importance of keeping decisions local; that is, ensuring sensitive data stays on company-controlled devices and servers, rather than in external clouds. For compliance professionals, this echoes a familiar principle: control the data, control the risk.

Agentic AI, by definition, processes vast amounts of sensitive information. If compliance agents are reviewing hotline reports, transaction monitoring data, or due diligence files, any data leakage could be catastrophic. That’s why strong encryption, local processing, and secure enclaves are essential.

Compliance officers should demand that AI vendors support:

  • On-device or private cloud processing for sensitive tasks.
  • Encryption of all data in transit and at rest (see the sketch after this list).
  • Independent verification of security claims by external auditors.
  • Full disclosure of sponsorships, promotions, and paid influences.
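
On the encryption bullet, here is a minimal sketch of at-rest protection using Python's widely used cryptography library. Key handling is deliberately oversimplified here; a real deployment would draw the key from an HSM or a managed key service.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Illustrative only: in production the key comes from an HSM or managed
# key service and is never generated or held in application code.
key = Fernet.generate_key()
fernet = Fernet(key)

report = b"Hotline report #881: alleged improper payment, EMEA region."

ciphertext = fernet.encrypt(report)    # what actually gets written to disk
restored = fernet.decrypt(ciphertext)  # recoverable only with the key

assert restored == report
print("ciphertext length:", len(ciphertext))
```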

By designing AI agents with local control and transparency, compliance teams can build systems that are both effective and trustworthy. Data security is not just an IT concern; it is a compliance imperative.

Trust, But Never Blindly

AI agents hold immense potential for compliance programs. They can streamline monitoring, accelerate due diligence, and support real-time risk management. But as Levin and Downes remind us, they must also be carefully governed to prevent bias, manipulation, and misuse.

For compliance leaders, the path forward is to treat AI like any other agent (or channel your inner Ronald Reagan: trust, but verify). With oversight, fiduciary framing, market safeguards, and strong data controls, AI can become a trusted partner in compliance—one that strengthens, rather than weakens, the ethical fabric of the organization.

Categories
Compliance Tip of the Day

Compliance Tip of the Day – AI Assistant for Compliance

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we continue our 5-part series on using AI in a best practices compliance program by considering how a compliance professional can use AI as an Assistant.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
AI Today in 5

AI Today in 5: August 19, 2025, The AI and Compliance Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox will bring you five stories about AI to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network. Each day, we consider five stories about AI from the business world, compliance, ethics, risk management, leadership, or general interest.

  • Texas AG goes after chatbots for kids’ mental health services. (KVUE)
  • China is turning to AI in information warfare. (NYT)
  • Does using AI put you on the wrong side of compliance? (UC Today)
  • Using AI for cross-border trade. (World Business Outlook)
  • Greenlight sues Compliance AI over trademark violation. (Bloomberg)

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Innovation in Compliance

Innovation in Compliance – Gaurav Kapoor on Risk Management and the Role of AI in GRC

Innovation comes in many areas, and compliance professionals need to be ready for it and embrace it. Join Tom Fox, the Voice of Compliance, as he visits with top innovative minds, thinkers, and creators in the award-winning Innovation in Compliance podcast. In this episode, Tom Fox interviews Gaurav Kapoor, Vice Chairman, Co-Founder and Board Member of MetricStream, discussing his extensive professional background, from co-founding MetricStream to his current focus on customer intimacy amid AI market disruptions.

Kapoor delves into the evolving landscape of risk management, emphasizing the importance of midyear reviews and the integration of various risk themes like operational risk, audit compliance, and cybersecurity. He elaborates on the role of AI in GRC, explaining how generative and agentic AI can streamline compliance processes and enhance risk management strategies. The conversation also touches on the increasing significance of cybersecurity, geopolitical instability, and climate impact on risk assessment. Kapoor highlights the shift from compliance to a more resilient and risk-aware culture within organizations.

Key highlights:

  • The Importance of July in Risk Management
  • AI’s Role in GRC
  • Emerging Risks and AI Applications
  • Counseling Boards on Risk Management
  • Top Concerns for the Second Half of 2025
  • Evolving Role of Compliance and Risk Officers

Resources:

MetricStream Website and on LinkedIn

Gaurav Kapoor on LinkedIn

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Building Your Own AI Assistant: Compliance Lessons in Customization

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, I have created a one-page checklist for each blog post that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

In the ever-changing world of compliance, resource constraints remain one of our biggest hurdles. Whether you’re drafting policies, conducting risk assessments, or preparing investigation summaries, the work is often repetitive, labor-intensive, and subject to tight deadlines. Enter the AI assistant, not as a futuristic dream, but as a practical, buildable tool available to compliance professionals right now.

Alexandra Samuel’s article in Harvard Business Review, titled “How to Build Your Own AI Assistant,” makes one point crystal clear: if you can describe a project in plain English, you can build your own AI assistant. And for compliance professionals, this represents a transformative opportunity to reduce administrative burdens while increasing consistency, accuracy, and adaptability.

But building your compliance AI assistant isn’t about chasing efficiency alone—it’s about making intentional design choices that reinforce compliance objectives, protect corporate culture, and ensure regulatory defensibility. Today, we consider five key takeaways for compliance professionals, each showing how you can harness AI assistants to enhance, not replace, your compliance program.

1. Start with the Right Use Cases

Before building, compliance leaders must ask: What problems do we want AI to solve? Samuel notes that AI assistants excel in four domains: writing and communications, troubleshooting, project management, and strategic coaching. For compliance, this translates into use cases like:

  • Drafting first-pass policy updates aligned with global regulations.
  • Summarizing enforcement actions for Board reporting.
  • Automating responses to routine employee compliance questions (e.g., “Can I accept this client gift?”).
  • Tracking investigation timelines and automatically extracting action items from meeting transcripts.

Choosing the right use case ensures your AI assistant is a force multiplier rather than a shiny distraction. Importantly, you want to start with low-risk, high-volume tasks. Drafting an anti-corruption annual training memo? AI can handle the boilerplate. Deciding whether to disclose a potential FCPA violation to the DOJ? That still belongs squarely in the human domain.

The real lesson here: compliance officers should not let “AI hype” dictate priorities. Instead, define pain points within your compliance workflow and build assistants targeted at those specific, recurring problems. Start small, iterate, and scale responsibly.

2. Design Clear Instructions—Your Assistant Is Only as Good as Its Guidance

According to Samuel, the “heart” of a custom AI assistant is the set of instructions you provide. For compliance teams, this is where risk and opportunity intersect. If your assistant doesn’t know who it is, what standards to apply, and what tone to use, it will produce outputs that undermine your credibility.

Think of instructions as your assistant’s Code of Conduct. Instead of saying “you are a compliance assistant,” you can be more precise:

  • “You are a corporate compliance officer drafting policies for a multinational company. You must ensure all content aligns with DOJ guidance on effective compliance programs, uses a professional but approachable tone, and provides practical examples for employees.”

These custom instructions allow you to “bake in” compliance frameworks from day one. For example, you can require the assistant to reference the COSO Framework for Internal Controls, ISO 37001, or the DOJ’s Evaluation of Corporate Compliance Programs whenever relevant.
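
In implementation terms, these custom instructions typically become the assistant's system prompt. Here is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, and the same pattern applies on any enterprise LLM platform.

```python
# pip install openai   (assumes OPENAI_API_KEY is set in the environment)
from openai import OpenAI

COMPLIANCE_PERSONA = """You are a corporate compliance officer drafting policies
for a multinational company. Ensure all content aligns with DOJ guidance on
effective compliance programs, reference ISO 37001 and the COSO Internal Control
framework where relevant, use a professional but approachable tone, and provide
practical examples for employees."""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use your organization's approved model
    messages=[
        {"role": "system", "content": COMPLIANCE_PERSONA},
        {"role": "user",
         "content": "Draft a one-page summary of our gifts and entertainment policy."},
    ],
)
print(response.choices[0].message.content)
```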

The key compliance insight: good AI assistants reflect great compliance design. Just as vague compliance policies create ambiguity, vague AI instructions create unreliable outputs. Invest time in precise persona-building for your assistant, and you’ll reap consistent, defensible results.

3. Feed It Knowledge—Without Losing Control of Sensitive Data

Samuel emphasizes that AI assistants become truly powerful when equipped with background documents, such as policies, reports, contracts, or training decks. For compliance, this is both a gold mine and a minefield.

On one hand, uploading prior investigation reports, risk assessments, or compliance training modules allows your assistant to generate outputs that reflect your company’s real history and regulatory environment. Imagine an assistant that can instantly pull together a cross-border risk assessment using your own prior filings and internal guidance.

On the other hand, compliance officers must stay vigilant about data protection, privilege, and confidentiality. Sensitive HR records, whistleblower reports, and privileged investigation materials should never be indiscriminately fed into a platform without proper safeguards.

Here lies the balancing act: compliance teams must create AI assistants that are well-informed but tightly governed. This may involve anonymizing data, working through secure enterprise-grade AI platforms, or restricting inputs to public and non-sensitive internal documents.
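
As a minimal sketch of that balancing act, the snippet below strips obvious identifiers before a document ever leaves the company's environment. The regex patterns are deliberately crude and purely illustrative; a real program would pair an enterprise-grade platform with a vetted PII-detection service.

```python
import re

# Hypothetical patterns; a production system would use a vetted
# PII/PHI detection service rather than ad hoc regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the text
    is shared with an AI assistant."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

snippet = "Reporter Jane Roe (jroe@example.com, 555-867-5309) alleges misconduct."
print(redact(snippet))
```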

The compliance lesson is simple but non-negotiable: context matters, but confidentiality reigns supreme. Building a compliance AI assistant means establishing protocols for what can and cannot be shared.

4. Iterate Constantly—Think Like a Compliance Monitor

Just as compliance programs require continuous improvement, so too do AI assistants. Samuel makes it clear that assistants won’t be perfect out of the box. They require ongoing feedback, refinement, and adjustment.

For compliance professionals, this is second nature. We already think in terms of monitoring, auditing, and revising. Apply the same discipline to your AI assistant:

  • Audit its outputs for accuracy, tone, and regulatory defensibility (one way to log those audits is sketched below).
  • Track where it consistently underperforms (e.g., misinterpreting data privacy rules) and feed corrective instructions.
  • Periodically “refresh” its context files to reflect updated regulations, new enforcement actions, or changes in corporate policy.
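
To operationalize that audit discipline, here is a minimal sketch that logs each output review to a CSV file and surfaces recurring failure categories, which can then drive corrective instructions. The file layout and column names are hypothetical.

```python
import csv
import os
from collections import Counter
from datetime import datetime, timezone

LOG = "assistant_audit_log.csv"
FIELDS = ["ts", "task", "verdict", "failure_category"]

def log_review(task: str, verdict: str, failure_category: str = "") -> None:
    """Record a human reviewer's verdict ('pass' or 'fail') on one output."""
    write_header = not os.path.exists(LOG)
    with open(LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "ts": datetime.now(timezone.utc).isoformat(),
            "task": task,
            "verdict": verdict,
            "failure_category": failure_category,
        })

def recurring_failures(min_count: int = 3):
    """Failure categories seen at least min_count times: the candidates
    for corrective instructions or refreshed context files."""
    with open(LOG, newline="") as f:
        fails = [r for r in csv.DictReader(f) if r["verdict"] == "fail"]
    counts = Counter(r["failure_category"] for r in fails)
    return [(cat, n) for cat, n in counts.most_common() if n >= min_count]

log_review("privacy policy summary", "fail", "misread GDPR transfer rules")
log_review("gift policy FAQ", "pass")
print(recurring_failures(min_count=1))
```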

Samuel suggests asking your assistant to write its own revised instructions based on your feedback. That’s a compliance monitoring exercise in itself—your assistant becomes both subject and participant in continuous improvement.

The compliance takeaway: treat your AI assistant as a dynamic system, not a static tool. Just as DOJ expects ongoing risk assessments and remediation, regulators will expect that AI tools in compliance are actively managed, not blindly trusted.

5. Embed Ethical Guardrails and Accountability

The most important compliance lesson in building your own AI assistant is ensuring accountability. As Samuel warns, assistants can hallucinate or produce flawed outputs. In compliance, this is not simply an annoyance; it is a potential liability.

That means your assistant must operate under ethical guardrails:

  • Always include a human-in-the-loop review before any AI-generated compliance document is finalized.
  • Require disclosures when AI was used in drafting policies, reports, or training.
  • Train employees not to treat AI outputs as gospel but as drafts for critical evaluation.
  • Align your assistant’s objectives with compliance KPIs (accuracy, transparency, and defensibility) rather than raw speed.

This mirrors the DOJ’s emphasis on corporate accountability. An AI assistant may help draft your gifts and entertainment policy, but it cannot stand before prosecutors and defend your compliance program. That responsibility remains squarely with leadership.

The compliance lesson here is unmistakable: AI is a tool, not a scapegoat. Build it to augment compliance decision-making, not to absolve it.

From Experiment to Integration

Building your own AI assistant is not a technical challenge. It is a compliance design challenge. As Alexandra Samuel reminds us, if you can describe your project, you can build your assistant. For compliance officers, that means thinking intentionally about use cases, precision in instructions, safeguards for sensitive data, iteration, and ethical guardrails.

The opportunity is immense. With thoughtfully designed AI assistants, compliance professionals can shift their focus from repetitive drafting to higher-order strategy, from administrative overload to proactive risk management. But the responsibility is equally immense. An AI assistant reflects the design choices of its creators, choices that must always prioritize compliance culture, accountability, and trust.

Categories
Blog

When the Captain Isn’t the Captain: Star Trek’s Turnabout Intruder as a Root Cause Analysis Case Study

One of the Department of Justice’s most consistent themes in its 2024 Update to the Evaluation of Corporate Compliance Programs (ECCP) is the need for companies to conduct effective root cause analysis following misconduct or control failures. It’s not enough to identify what went wrong; you must understand why it happened and implement measures to prevent it from happening again.

That principle is front and center in the Star Trek: The Original Series finale, Turnabout Intruder. In this episode, Captain Kirk is on an archaeological survey mission when he encounters Dr. Janice Lester, an old acquaintance from Starfleet Academy. Through a mysterious alien device, Lester transfers her consciousness into Kirk’s body, trapping his mind in her own body. What follows is a tense series of events in which “Kirk” behaves increasingly erratically, prompting suspicion among the crew.

For compliance professionals, the episode is a surprisingly apt case study in the perils of failing to dig past the surface when something seems off. Just as the crew needed to piece together the real cause of their captain’s strange behavior, compliance teams must be adept at peeling back layers to discover the true root cause of problems.

Here are five key root cause analysis lessons from Turnabout Intruder.

Lesson 1: Unusual Behavior Should Trigger an Investigation

Illustrated by: Shortly after the mind swap, “Kirk” begins making uncharacteristic decisions, belittling subordinates, ignoring Starfleet protocols, and punishing dissent in ways that are entirely out of character for the captain.

Compliance Lesson:

Behavior that deviates from established patterns should be a red flag. In corporate compliance, abrupt changes, whether in employee conduct, financial reporting patterns, or transaction activity, often indicate deeper issues.

Too often, organizations rationalize away early warning signs: “He’s under stress” or “That’s just her style.” But effective root cause analysis begins with the willingness to ask, Why is this happening now? Early detection is often the difference between a manageable problem and a full-blown crisis. Develop and maintain behavioral baselines for key personnel and functions. If something deviates sharply, investigate promptly rather than waiting for more evidence to emerge.

Lesson 2: Multiple Data Points Build a Stronger Case

Illustrated by: Several crew members—Spock, McCoy, Scotty—each notice something odd about “Kirk.” At first, their observations are anecdotal and separate. Only when they share information do they begin to see a pattern that suggests something is seriously wrong.

Compliance Lesson: Root cause analysis is stronger when it integrates multiple perspectives and sources of data. If you rely on a single source, such as one audit or one complaint, you risk drawing incomplete or biased conclusions.

In the episode, no single crew member had enough to prove that Kirk wasn’t himself. But when their observations were combined, the collective evidence pointed toward an anomaly that needed urgent action. Create processes that encourage information sharing across departments. Compliance, audit, HR, and operations should have mechanisms to cross-reference findings because the root cause may only emerge when different pieces are put together.

Lesson 3: Be Alert to Hidden Motives

Illustrated by: In Kirk’s body, Lester uses her new authority to sideline suspected opponents, reassigning or threatening crew who question her behavior. Her motive isn’t mission success; it’s consolidating her stolen command.

Compliance Lesson: The apparent cause of a problem may mask deeper personal or organizational motives. Misconduct often occurs because someone is pursuing goals that conflict with corporate policy, whether financial gain, personal vendettas, or reputational enhancement.

If your analysis stops at “This person violated policy,” you miss the opportunity to uncover why they were willing to risk consequences. In many cases, systemic issues (misaligned incentives, toxic culture, weak oversight) are the true drivers. In every investigation, ask “What’s in it for them?” Understanding incentives, pressures, and personal agendas can reveal root causes that process analysis alone won’t uncover.

Lesson 4: Authority Structures Can Delay Recognition of the Problem

Illustrated by: Even when evidence mounts, the crew is reluctant to challenge “Kirk” because of the chain of command. Starfleet discipline dictates deference to the captain, making it harder to act on suspicions.

Compliance Lesson: In organizations, hierarchy can be a barrier to identifying root causes. Employees may hesitate to report misconduct by senior leaders, or they may assume questionable directives are “above their pay grade.”

This dynamic often allows problems to persist far longer than they should. A compliance program must be designed to bypass those bottlenecks, giving employees safe, confidential, and credible ways to report concerns, even about top executives. Ensure that escalation procedures allow for independent review of senior management conduct. Whistleblower protections, ombuds functions, and anonymous hotlines can help surface issues that otherwise stay buried.

Lesson 5: Validate Assumptions Before Acting

Illustrated by: Spock eventually confronts “Kirk” and demands an explanation. Through logical analysis and a mind meld, he confirms the body-swap truth. Only then can the crew take decisive action to restore the captain to his rightful body.

Compliance Lesson: One of the biggest pitfalls in root cause analysis is acting on unverified assumptions. If you jump to conclusions too early, you may “fix” the wrong problem—or make it worse. Spock’s mind meld was the ultimate verification step. In compliance, your “mind meld” might be corroborating whistleblower claims with independent documentation, or testing an internal control in multiple scenarios before concluding it’s defective.

Build verification into your root cause analysis process. Don’t settle for the first plausible explanation; pressure-test your conclusions before implementing remediation.

Connecting Star Trek to DOJ Expectations

The DOJ’s ECCP explicitly asks:

  • “What is the root cause of the misconduct?”
  • “Were prior opportunities to detect the misconduct missed?”
  • “What systemic failures contributed to the issue?”

Turnabout Intruder illustrates the importance of addressing these questions. If the crew had stopped at “the captain is acting oddly” and focused on damage control, they might never have uncovered the deeper truth of Lester’s body swap. Similarly, in corporate investigations, stopping at the surface level (“employee violated policy”) without probing the environment that allowed it to happen fails both the DOJ’s expectations and your prevention mandate.

Final Compliance Log Reflections

In Turnabout Intruder, the crew’s slow realization of the true problem nearly cost them their captain and perhaps the Enterprise itself. In the compliance arena, a slow or shallow root cause analysis can allow misconduct to persist, control weaknesses to remain unaddressed, and systemic issues to metastasize.

Effective compliance leadership means not just spotting what’s wrong, but relentlessly pursuing why it went wrong. That’s how you fix the problem in a way that prevents recurrence.

Like Spock confronting “Kirk,” we must be willing to gather evidence methodically, test our conclusions, and take decisive action once the truth is clear. Root cause analysis isn’t about blame—it’s about ensuring your organization emerges stronger, more transparent, and more resilient than before.

Because in the end, just like the Enterprise, your mission depends on having the right people in the right roles, operating with integrity. That is a result only a thorough, well-executed root cause analysis can guarantee.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Trekking Through Compliance

Trekking Through Compliance: Episode 79 – Beneath the Surface: Turnabout Intruder and the Hunt for Root Causes

One of the Department of Justice’s most consistent themes in its 2024 Update to the Evaluation of Corporate Compliance Programs (ECCP) is the need for companies to conduct effective root cause analysis following misconduct or control failures. It’s not enough to just identify what went wrong; you must understand why it happened and implement measures to prevent it from happening again.

For compliance professionals, the episode is a surprisingly apt case study in the perils of failing to dig past the surface when something seems off. Just as the crew needed to piece together the real cause of their captain’s strange behavior, compliance teams must be adept at peeling back layers to discover the true root cause of problems. Here are five key root cause analysis lessons from Turnabout Intruder.

Lesson 1: Unusual Behavior Should Trigger an Investigation

Illustrated by: Shortly after the mind swap, “Kirk” begins making uncharacteristic decisions, belittling subordinates, ignoring Starfleet protocols, and punishing dissent in ways that are completely out of character for the captain.

Compliance Lesson:

Behavior that deviates from established patterns should be a red flag. In corporate compliance, abrupt changes, whether in employee conduct, financial reporting patterns, or transaction activity, often indicate deeper issues.

Lesson 2: Multiple Data Points Build a Stronger Case

Illustrated by: Several crew members—Spock, McCoy, Scotty—each notice something odd about “Kirk.” Only when they share information do they begin to see a pattern that suggests something is seriously wrong.

Compliance Lesson: Root cause analysis is stronger when it integrates multiple perspectives and sources of data. If you rely on a single source, such as one audit or one complaint, you risk drawing incomplete or biased conclusions.

Lesson 3: Be Alert to Hidden Motives

Illustrated by: In Kirk’s body, Lester uses her new authority to sideline suspected opponents, reassigning or threatening crew who question her behavior.

Compliance Lesson: The apparent cause of a problem may mask deeper personal or organizational motives. Misconduct often occurs because someone is pursuing goals that conflict with corporate policy, whether financial gain, personal vendettas, or reputational enhancement.

Lesson 4: Authority Structures Can Delay Recognition of the Problem

Illustrated by: Even when evidence mounts, the crew is reluctant to challenge “Kirk” because of the chain of command.

Compliance Lesson: In organizations, hierarchy can be a barrier to identifying root causes. Employees may hesitate to report misconduct by senior leaders, or they may assume questionable directives are “above their pay grade.”

Lesson 5: Validate Assumptions Before Acting

Illustrated by: Spock eventually confronts “Kirk” and demands an explanation. Through logical analysis and a mind meld, he confirms the body-swap truth.

Compliance Lesson: One of the biggest pitfalls in root cause analysis is acting on unverified assumptions. If you jump to conclusions too early, you may “fix” the wrong problem—or make it worse.

Final Compliance Log Reflections

In Turnabout Intruder, the crew’s slow realization of the true problem nearly cost them their captain and perhaps the Enterprise itself. In the compliance arena, a slow or shallow root cause analysis can allow misconduct to persist, control weaknesses to remain unaddressed, and systemic issues to metastasize. Effective compliance leadership means not just spotting what’s wrong but relentlessly pursuing why it went wrong. That’s how you fix the problem in a way that prevents recurrence.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Compliance Tip of the Day

Compliance Tip of the Day – Costs and Benefits of AI

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we begin a 5-part series on using AI in a best practices compliance program by considering the costs and benefits of using AI.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
Blog

Recalculating AI: Compliance Lessons in Weighing Costs and Benefits of GenAI

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, I have created a one-page checklist for each blog post that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

For compliance professionals, the rise of generative AI (GenAI) feels like déjà vu. We’ve been here before—with ERP rollouts, e-discovery software, and data analytics tools. Each new technology comes with the same pitch: faster, smarter, cheaper. And each time, compliance officers are tasked with answering a more difficult question: At what cost?

Mark Mortensen’s recent piece in Harvard Business Review, titled “Calculating the Costs and Benefits of GenAI,” provides a framework for thinking about this balancing act. While AI undeniably creates efficiency, Mortensen cautions that organizations risk losing knowledge, engagement, and trust if they fail to evaluate adoption carefully. For compliance leaders, the implications are profound.

Today, we consider five key takeaways from the article for compliance professionals—each one an area where AI’s promise and peril intersect.

1. Efficiency Gains Must Be Weighed Against Knowledge Loss

One of AI’s greatest selling points is speed. It can review contracts in minutes, summarize regulatory changes instantly, and generate risk assessments that previously took weeks. For perpetually under-resourced compliance departments, this is a tantalizing offer.

Yet here lies the first hidden cost: learning. Mortensen reminds us that struggling with a problem, through back-and-forth revisions of a policy draft, iterative risk-mapping discussions, and even the time spent combing through dense regulations, is what cements knowledge and deepens institutional expertise. If compliance teams begin to outsource too much of that process to AI, the organization risks eroding the very expertise it relies on to interpret nuance.

Consider this: an AI might draft your anti-bribery training materials, but without human engagement in the process, your team loses the chance to sharpen its understanding of new FCPA enforcement trends. Over time, this erodes your compliance program’s intellectual resilience.

The lesson for compliance leaders is clear: use AI to accelerate, not replace, your team’s learning. Make sure staff remain actively engaged in the interpretive process. AI should provide information, not serve as the final arbiter of compliance knowledge.

2. Short-Term Problem Solving Can Inhibit Long-Term Skill Development

“Practice makes perfect” is more than just a proverb; it is a professional truth. Drafting compliance reports builds writing skills, testing control frameworks sharpens analytical ability, and grappling with regulatory ambiguity builds judgment.

But if compliance teams lean too heavily on AI to generate audit memos or to identify anomalies in financial data, they risk undermining their development. Mortensen points out that when we hand tasks to AI, we sacrifice the chance to strengthen the very skills we will need tomorrow.

Consider a scenario where AI consistently handles first drafts of risk assessments. Compliance officers may grow accustomed to editing AI output rather than developing their structured thinking. Over time, the skill gap widens. This leaves organizations dependent on tools that cannot be held accountable when regulators ask tough questions.

From a compliance standpoint, this has a direct connection to sustainability. DOJ guidance emphasizes the need for continuous program improvement and the development of compliance capabilities. A department that loses skills to AI outsourcing may look efficient on paper, but it becomes brittle in practice.

Compliance leaders should strike a balance by reserving certain core tasks, like drafting root cause analyses or preparing investigation reports, for human-led execution, even if AI could technically do them faster. These are the muscle-building exercises of compliance, and like any workout, skipping them leads to long-term weakness.

3. AI Risks Weakening Relationships and Organizational Trust

Compliance does not happen in a vacuum. It thrives or fails based on relationships. Internal trust with business units, credibility with senior leadership, and even informal rapport built during brainstorming sessions all matter.

AI, however, threatens to reduce these interactions. Mortensen notes that the computational power of AI allows individuals to solve problems alone that previously required teams. While efficient, this independence comes at a cost: fewer interpersonal touchpoints, weaker social ties, and ultimately, reduced trust.

For compliance, this risk is especially acute. Much of our effectiveness hinges on being seen as collaborative partners, not bureaucratic enforcers. If AI reduces the frequency of conversations around risk assessments, policy updates, or investigations, compliance officers may lose opportunities to build influence. Worse, an “AI does it all” approach may reinforce perceptions that compliance is transactional rather than relational.

The takeaway here is that AI should never replace human dialogue in compliance. Use it to free up time so compliance officers can spend more energy building relationships with line managers, auditors, and employees, rather than less. The culture of compliance is rooted in trust, and no algorithm can generate that.

4. Engagement and Ownership Can Decline with Over-Automation

Engagement matters. Mortensen defines it as being psychologically present in the work. For compliance professionals, engagement translates into vigilance: spotting red flags, questioning anomalies, and challenging assumptions.

But AI introduces a risk of disengagement. When it summarizes investigation interviews or drafts compliance dashboards, humans can become passive consumers rather than active participants. Over time, “good enough” replaces “deep enough.”

This erosion of ownership is dangerous for compliance. Regulators increasingly expect companies to demonstrate not only robust processes but also genuine cultural buy-in. If compliance staff are disengaged because AI has taken over too many cognitive functions, the program risks becoming a paper tiger, form without substance.

To counter this, compliance leaders should intentionally design workflows where humans must interpret and add value to AI outputs. For example, AI can generate a first-pass risk heat map, but compliance officers should validate and adjust it based on local context and business realities. That layer of judgment keeps engagement alive and maintains a sense of accountability.

Ultimately, compliance is about judgment, not just information. AI can support but never substitute for human ownership of ethical decision-making.

5. Homogenization Threatens Compliance Program Uniqueness

Every compliance program reflects its company’s unique culture, risks, and leadership voice. Mortensen warns that because large language models are convergent technologies, they produce standardized answers. Leaders who rely on AI for memos, presentations, or policies risk erasing their distinctive tone and voice.

For compliance professionals, this risk translates into a loss of authenticity. Regulators, employees, and stakeholders can quickly tell the difference between a policy that reflects real company values and one that reads like a generic AI template. Over time, over-reliance on AI can strip a compliance program of its personality and with it, credibility.

The danger goes deeper. If multiple companies rely on AI to draft similar codes of conduct, policies may look indistinguishable. That creates industry-wide convergence at a time when regulators are looking for tailored programs that reflect specific risks. In effect, AI could make compliance programs less defensible, not more.

The path forward is to use AI as a scaffolding tool, not as a finished product. Compliance officers should inject their organization’s unique voice, industry-specific risks, and leadership tone into every AI-assisted document. Authenticity is non-negotiable in compliance. AI can never be allowed to flatten it.

AI Audits for Compliance Leaders

Mortensen’s framework for an “AI value audit” is particularly relevant for compliance. He suggests three steps: (1) determine the types of value a task creates, (2) prioritize and optimize them, and (3) continually reassess with a “milk test” to ensure the value hasn’t expired.

For compliance, this means asking: Does AI enhance our program without undermining knowledge, skills, trust, engagement, or authenticity? If not, the short-term benefits may not be worth the long-term costs.

AI is here to stay, and compliance officers must learn to harness it. But like every tool before it, AI is not a replacement for judgment, culture, and leadership. It is an assistant, not the evangelist for compliance.