Categories
The Ethics Experts

Episode 227 – Jason Lesandrini

In this episode of The Ethics Experts, Nick welcomes Jason Lesandrini.

Jason Lesandrini, PhD, FACHE, LPEC, HEC-C is a distinguished ethicist and leader with over two decades of experience in clinical, research, and organizational ethics. He is the founder and principal of The Ethics Architect, an outcomes-driven consulting firm specializing in the assessment and execution of ethics programming, the creation of ethical cultures, and the development of ethical leaders. In addition, he leads the departments of ethics, advance care planning, spiritual health, and language access for a large health system in Georgia. As a certified Healthcare Ethics Consultant (HEC-C) and a Leadership Professional in Ethics and Compliance (LPEC), Dr. Lesandrini brings a unique blend of theoretical knowledge and practical experience to his work. He has developed innovative programs in ethical leadership, ethical climate, and organizational ethics that have significantly improved organizational culture.

Connect with Jason on LinkedIn.

Categories
Corruption, Crime and Compliance

AI Legal Compliance and Governance

AI promises efficiency, innovation, and new opportunities – but are companies moving too fast in the rush to adopt it? The risks are very real, from false content to flawed decision-making, and the global regulatory patchwork is only getting more complex. The challenge now is building governance and compliance frameworks that keep pace without stifling progress.

In this episode of Corruption, Crime, and Compliance, Michael Volkov explains why an AI compliance program is essential to corporate governance today.

You’ll hear him discuss:

  • Why companies need to start with a clear use case and weigh benefits against potential legal and compliance risks before rolling out AI
  • The evolving patchwork of regulations, including the FTC, state-level laws in the US, and the EU’s AI Act
  • How sector-specific rules in healthcare, financial services, and defense add new layers of complexity
  • The two biggest risks: AI-generated false content that can cause liability and reputational harm, and decision-making systems that create unfair or discriminatory results
  • What strong AI governance looks like, from board oversight and compliance officers to clear policies and cross-functional committees
  • The role of training, documentation, and incident reporting in ensuring responsible, transparent AI use
  • Why embedding responsible AI into company values and employee performance reviews helps build a culture of accountability

Resources

Michael Volkov on LinkedIn | Twitter

The Volkov Law Group

Categories
AI Today in 5

AI Today in 5: August 18, 2025, The AI Music Episode

Welcome to AI Today in 5, the newest addition to the Compliance Podcast Network. Each day, Tom Fox brings you five stories about AI, drawn from the business world, compliance, ethics, risk management, leadership, or general interest, to start your day. Sit back, enjoy a cup of morning coffee, and listen in to AI Today in 5, all from the Compliance Podcast Network.

For more information on the use of AI in compliance programs, check out Tom Fox’s new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Daily Compliance News

Daily Compliance News: August 18, 2025, The All Corruption Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, compliance, ethics, risk management, leadership, or general interest for the compliance professional.

Top stories include:

  • Roots of South African corruption. (Sapiens)
  • New Orleans Mayor and lover charged with corruption. (NYT)
  • Senegal’s President exempts himself from the ABC laws. (Africa News)
  • Ethnic leaders in China are under scrutiny for corruption. (South China Morning Post)

You can donate to flood relief for victims of the Kerr County flooding by going to the Hill Country Flood Relief here.

Categories
FCPA Compliance Report

FCPA Compliance Report – Episode 771 – Accountability in Times of Crisis: A Conversation with Tom Fox and Sam Silverstein

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this episode, Tom Fox welcomes back Sam Silverstein in a conversation on the role of accountability in managing business disruptions and natural disasters.

Drawing from personal experiences and professional insights, they delve into the strategic framework necessary for businesses to navigate crises and rebuild stronger. Topics covered include pre-crisis preparedness, crisis response, stabilization phases, and recovery and growth, emphasizing the importance of a culture of accountability. Through practical steps and real-world examples, they explore how leaders can empower their teams, build trust with external stakeholders, and foster resilience within their organizations.

Key highlights:

  • The Role of Accountability in Crisis Management
  • Phases of Crisis Management
  • Pre-Crisis Preparedness
  • Crisis Response and Accountability
  • Stabilization and Recovery
  • The Importance of Truth in Leadership

Resources:

Connect with Sam Silverstein

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the use of AI in compliance programs, check out my new book, Upping Your Game. You can purchase a copy of the book on Amazon.com.

Categories
Blog

When the Captain Isn’t the Captain: Star Trek’s Turnabout Intruder as a Root Cause Analysis Case Study

One of the Department of Justice’s most consistent themes in its 2024 Update to the Evaluation of Corporate Compliance Programs (ECCP) is the need for companies to conduct effective root cause analysis following misconduct or control failures. It’s not enough to identify what went wrong; you must understand why it happened and implement measures to prevent it from happening again.

That principle is front and center in the Star Trek: The Original Series finale, Turnabout Intruder. In this episode, Captain Kirk is on an archaeological survey mission when he encounters Dr. Janice Lester, an old acquaintance from Starfleet Academy. Through a mysterious alien device, Lester transfers her consciousness into Kirk’s body, trapping his mind in her own body. What follows is a tense series of events in which “Kirk” behaves increasingly erratically, prompting suspicion among the crew.

For compliance professionals, the episode is a surprisingly apt case study in the perils of failing to dig past the surface when something seems off. Just as the crew needed to piece together the real cause of their captain’s strange behavior, compliance teams must be adept at peeling back layers to discover the true root cause of problems.

Here are five key root cause analysis lessons from Turnabout Intruder.

Lesson 1: Unusual Behavior Should Trigger an Investigation

Illustrated by: Shortly after the mind swap, “Kirk” begins making uncharacteristic decisions, belittling subordinates, ignoring Starfleet protocols, and punishing dissent in ways that are entirely out of character for the captain.

Compliance Lesson:

Behavior that deviates from established patterns should be a red flag. In corporate compliance, abrupt changes, whether in employee conduct, financial reporting patterns, or transaction activity, often indicate deeper issues.

Too often, organizations rationalize away early warning signs: “He’s under stress” or “That’s just her style.” But effective root cause analysis begins with the willingness to ask, Why is this happening now? Early detection is often the difference between a manageable problem and a full-blown crisis. Develop and maintain behavioral baselines for key personnel and functions. If something deviates sharply, investigate promptly rather than waiting for more evidence to emerge.

Lesson 2: Multiple Data Points Build a Stronger Case

Illustrated by: Several crew members—Spock, McCoy, Scotty—each notice something odd about “Kirk.” At first, their observations are anecdotal and separate. Only when they share information do they begin to see a pattern that suggests something is seriously wrong.

Compliance Lesson: Root cause analysis is stronger when it integrates multiple perspectives and sources of data. If you rely on a single source, such as one audit or one complaint, you risk drawing incomplete or biased conclusions.

In the episode, no single crew member had enough to prove that Kirk wasn’t himself. But when their observations were combined, the collective evidence pointed toward an anomaly that needed urgent action. Create processes that encourage information sharing across departments. Compliance, audit, HR, and operations should have mechanisms to cross-reference findings because the root cause may only emerge when different pieces are put together.

Lesson 3: Be Alert to Hidden Motives

Illustrated by: In Kirk’s body, Lester uses her new authority to sideline suspected opponents, reassigning or threatening crew who question her behavior. Her motive isn’t mission success; it’s consolidating her stolen command.

Compliance Lesson: The apparent cause of a problem may mask deeper personal or organizational motives. Misconduct often occurs because someone is pursuing goals that conflict with corporate policy, whether financial gain, personal vendettas, or reputational enhancement.

If your analysis stops at “This person violated policy,” you miss the opportunity to uncover why they were willing to risk consequences. In many cases, systemic issues, misaligned incentives, toxic culture, and weak oversight are the true drivers. In every investigation, ask “What’s in it for them?” Understanding incentives, pressures, and personal agendas can reveal root causes that process analysis alone won’t uncover.

Lesson 4: Authority Structures Can Delay Recognition of the Problem

Illustrated by: Even when evidence mounts, the crew is reluctant to challenge “Kirk” because of the chain of command. Starfleet discipline dictates deference to the captain, making it harder to act on suspicions.

Compliance Lesson: In organizations, hierarchy can be a barrier to identifying root causes. Employees may hesitate to report misconduct by senior leaders, or they may assume questionable directives are “above their pay grade” to question.

This dynamic often allows problems to persist far longer than they should. A compliance program must be designed to bypass those bottlenecks, giving employees safe, confidential, and credible ways to report concerns, even about top executives. Ensure that escalation procedures allow for independent review of senior management conduct. Whistleblower protections, ombuds functions, and anonymous hotlines can help surface issues that otherwise stay buried.

Lesson 5: Validate Assumptions Before Acting

Illustrated by: Spock eventually confronts “Kirk” and demands an explanation. Through logical analysis and a mind meld, he confirms the body-swap truth. Only then can the crew take decisive action to restore the captain to his rightful body.

Compliance Lesson: One of the biggest pitfalls in root cause analysis is acting on unverified assumptions. If you jump to conclusions too early, you may “fix” the wrong problem—or make it worse. Spock’s mind meld was the ultimate verification step. In compliance, your “mind meld” might be corroborating whistleblower claims with independent documentation, or testing an internal control in multiple scenarios before concluding it’s defective.

Build verification into your root cause analysis process. Don’t settle for the first plausible explanation; pressure-test your conclusions before implementing remediation.

Connecting Star Trek to DOJ Expectations

The DOJ’s ECCP explicitly asks:

  • “What is the root cause of the misconduct?”
  • “Were prior opportunities to detect the misconduct missed?”
  • “What systemic failures contributed to the issue?”

Turnabout Intruder illustrates the importance of addressing these questions. If the crew had stopped at “the captain is acting oddly” and focused on damage control, they might never have uncovered the deeper truth of Lester’s body swap. Similarly, in corporate investigations, stopping at the surface level (“employee violated policy”) without probing the environment that allowed it to happen fails both the DOJ’s expectations and your prevention mandate.

Final ComplianceLog Reflections

In Turnabout Intruder, the crew’s slow realization of the true problem nearly cost them their captain and perhaps the Enterprise itself. In the compliance arena, a slow or shallow root cause analysis can allow misconduct to persist, control weaknesses to remain unaddressed, and systemic issues to metastasize.

Effective compliance leadership means not just spotting what’s wrong, but relentlessly pursuing why it went wrong. That’s how you fix the problem in a way that prevents recurrence.

Like Spock confronting “Kirk,” we must be willing to gather evidence methodically, test our conclusions, and take decisive action once the truth is clear. Root cause analysis isn’t about blame—it’s about ensuring your organization emerges stronger, more transparent, and more resilient than before.

Because in the end, just like the Enterprise, your mission depends on having the right people in the right roles, operating with integrity. That is an outcome only a thorough, well-executed root cause analysis can deliver.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Trekking Through Compliance

Trekking Through Compliance: Episode 79 – Beneath the Surface: Turnabout Intruder and the Hunt for Root Causes

One of the Department of Justice’s most consistent themes in its 2024 Update to the Evaluation of Corporate Compliance Programs (ECCP) is the need for companies to conduct effective root cause analysis following misconduct or control failures. It’s not enough to just identify what went wrong; you must understand why it happened and implement measures to prevent it from happening again.

For compliance professionals, Turnabout Intruder is a surprisingly apt case study in the perils of failing to dig past the surface when something seems off. Just as the crew needed to piece together the real cause of their captain’s strange behavior, compliance teams must be adept at peeling back layers to discover the true root cause of problems. Here are five key root cause analysis lessons from Turnabout Intruder.

Lesson 1: Unusual Behavior Should Trigger an Investigation

Illustrated by: Shortly after the mind swap, “Kirk” begins making uncharacteristic decisions, belittling subordinates, ignoring Starfleet protocols, and punishing dissent in ways that are completely out of character for the captain.

Compliance Lesson:

Behavior that deviates from established patterns should be a red flag. In corporate compliance, abrupt changes, whether in employee conduct, financial reporting patterns, or transaction activity, often indicate deeper issues.

Lesson 2: Multiple Data Points Build a Stronger Case

Illustrated by: Several crew members—Spock, McCoy, Scotty—each notice something odd about “Kirk.” Only when they share information do they begin to see a pattern that suggests something is seriously wrong.

Compliance Lesson: Root cause analysis is stronger when it integrates multiple perspectives and sources of data. If you rely on a single source, such as one audit or one complaint, you risk drawing incomplete or biased conclusions.

Lesson 3: Be Alert to Hidden Motives

Illustrated by: In Kirk’s body, Lester uses her new authority to sideline suspected opponents, reassigning or threatening crew who question her behavior.

Compliance Lesson: The apparent cause of a problem may mask deeper personal or organizational motives. Misconduct often occurs because someone is pursuing goals that conflict with corporate policy, whether financial gain, personal vendettas, or reputational enhancement.

Lesson 4: Authority Structures Can Delay Recognition of the Problem

Illustrated by: Even when evidence mounts, the crew is reluctant to challenge “Kirk” because of the chain of command.

Compliance Lesson: In organizations, hierarchy can be a barrier to identifying root causes. Employees may hesitate to report misconduct by senior leaders, or they may assume questionable directives are “above their pay grade” to question.

Lesson 5: Validate Assumptions Before Acting

Illustrated by: Spock eventually confronts “Kirk” and demands an explanation. Through logical analysis and a mind meld, he confirms the body-swap truth.

Compliance Lesson: One of the biggest pitfalls in root cause analysis is acting on unverified assumptions. If you jump to conclusions too early, you may “fix” the wrong problem—or make it worse.

Final ComplianceLog Reflections

In Turnabout Intruder, the crew’s slow realization of the true problem nearly cost them their captain and perhaps the Enterprise itself. In the compliance arena, a slow or shallow root cause analysis can allow misconduct to persist, control weaknesses to remain unaddressed, and systemic issues to metastasize. Effective compliance leadership means not just spotting what’s wrong but relentlessly pursuing why it went wrong. That’s how you fix the problem in a way that prevents recurrence.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Compliance Tip of the Day

Compliance Tip of the Day – Costs and Benefits of AI

Welcome to “Compliance Tip of the Day,” the podcast where we bring you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, we aim to provide you with bite-sized, actionable tips to help you stay on top of your compliance game. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we begin a 5-part series on using AI in a best practices compliance program by considering the costs and benefits of using AI.

For more on this topic, check out The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, which LexisNexis recently released. It is available here.

Categories
Adventures in Compliance

Adventures in Compliance: The Novels – The Valley of Fear, Whistleblowers and Corporate Compliance

In this new season of Adventures in Compliance, host Tom Fox takes a deep dive into the Sherlock Holmes novels, devoting a four-part series to each one. The four novels we will consider from the ethics and compliance perspective are A Study in Scarlet, The Sign of Four, The Hound of the Baskervilles, and The Valley of Fear. For August, we conclude this season with the least well-known of the Sherlock Holmes novels, The Valley of Fear.

Timothy and Fiona return in Part 3 of our series on Sir Arthur Conan Doyle’s novel ‘The Valley of Fear’ to draw parallels with contemporary corporate challenges. Their discussion highlights how the novel’s depiction of fear, secrecy, and intimidation in a terror-ruled society resembles modern-day corporate environments where employees hesitate to speak up about issues due to fear of retaliation. Some of the key points they debate include the importance of anonymity, protection from retaliation, continuous communication with whistleblowers, and building a speak-up culture. These elements are identified as vital for effective compliance programs and fostering an environment of trust and integrity.

Key highlights:

  • Connecting Fiction to Modern Corporate Challenges
  • The Role of Whistleblowers in Corporate Compliance
  • The Importance of Anonymity
  • Protection from Retaliation
  • Building a Speak-Up Culture

Resources:

The New Annotated Sherlock Holmes

Sherlock Holmes FAQ by Dave Thompson

Connect with Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Recalculating AI: Compliance Lessons in Weighing Costs and Benefits of GenAI

Ed. Note: This week, we present a week-long series on the use of GenAI in a best practices compliance program. Additionally, for each blog post, I have created a one-page checklist for each article that you can use in presentations or for easier reference. Email my EA Jaja at jaja@compliancepodcastnetwork.net for a complimentary copy.

For compliance professionals, the rise of generative AI (GenAI) feels like déjà vu. We’ve been here before—with ERP rollouts, e-discovery software, and data analytics tools. Each new technology comes with the same pitch: faster, smarter, cheaper. And each time, compliance officers are tasked with answering a more difficult question: At what cost?

Mark Mortensen’s recent piece in Harvard Business Review, Calculating the Costs and Benefits of GenAI, provides a framework for thinking about this balancing act. While AI undeniably creates efficiency, Mortensen cautions that organizations risk losing knowledge, engagement, and trust if they fail to evaluate adoption carefully. For compliance leaders, the implications are profound.

Today, we consider five key takeaways from the article for compliance professionals—each one an area where AI’s promise and peril intersect.

1. Efficiency Gains Must Be Weighed Against Knowledge Loss

One of AI’s greatest selling points is speed. It can review contracts in minutes, summarize regulatory changes instantly, and generate risk assessments that previously took weeks. For perpetually under-resourced compliance departments, this is a tantalizing offer.

Yet here lies the first hidden cost: learning. Mortensen reminds us that the process of struggling with a problem, the back-and-forth revisions of a policy draft, the iterative risk-mapping discussions, even the time spent combing through dense regulations, is what cements knowledge and deepens institutional expertise. If compliance teams outsource too much of that process to AI, the organization risks eroding the very expertise it relies on to interpret nuance.

Consider this: an AI might draft your anti-bribery training materials, but without human engagement in the process, your team loses the chance to sharpen its understanding of new FCPA enforcement trends. Over time, this erodes your compliance program’s intellectual resilience.

The lesson for compliance leaders is clear: use AI to accelerate, not replace, your team’s learning. Make sure staff remain actively engaged in the interpretive process. AI should provide information, not serve as the final arbiter of compliance knowledge.

2. Short-Term Problem Solving Can Inhibit Long-Term Skill Development

“Practice makes perfect” is more than just a proverb; it is a professional truth. Drafting compliance reports builds writing skills, testing control frameworks sharpens analytical ability, and grappling with regulatory ambiguity builds judgment.

But if compliance teams lean too heavily on AI to generate audit memos or to identify anomalies in financial data, they risk undermining their development. Mortensen points out that when we hand tasks to AI, we sacrifice the chance to strengthen the very skills we will need tomorrow.

Consider a scenario where AI consistently handles first drafts of risk assessments. Compliance officers may grow accustomed to editing AI output rather than developing their structured thinking. Over time, the skill gap widens. This leaves organizations dependent on tools that cannot be held accountable when regulators ask tough questions.

From a compliance standpoint, this has a direct connection to sustainability. DOJ guidance emphasizes the need for continuous program improvement and the development of compliance capabilities. A department that loses skills to AI outsourcing may look efficient on paper, but it becomes brittle in practice.

Compliance leaders should strike a balance by reserving certain core tasks, like drafting root cause analyses or preparing investigation reports, for human-led execution, even if AI could technically do them faster. These are the muscle-building exercises of compliance, and like any workout, skipping them leads to long-term weakness.

3. AI Risks Weakening Relationships and Organizational Trust

Compliance does not happen in a vacuum. It thrives or fails based on relationships. Internal trust with business units, credibility with senior leadership, and even informal rapport built during brainstorming sessions all matter.

AI, however, threatens to reduce these interactions. Mortensen notes that the computational power of AI allows individuals to solve problems alone that previously required teams. While efficient, this independence comes at a cost: fewer interpersonal touchpoints, weaker social ties, and ultimately, reduced trust.

For compliance, this risk is especially acute. Much of our effectiveness hinges on being seen as collaborative partners, not bureaucratic enforcers. If AI reduces the frequency of conversations around risk assessments, policy updates, or investigations, compliance officers may lose opportunities to build influence. Worse, an “AI does it all” approach may reinforce perceptions that compliance is transactional rather than relational.

The takeaway here is that AI should never replace human dialogue in compliance. Use it to free up time so compliance officers can spend more energy building relationships with line managers, auditors, and employees, rather than less. The culture of compliance is rooted in trust, and no algorithm can generate that.

4. Engagement and Ownership Can Decline with Over-Automation

Engagement matters. Mortensen defines it as being psychologically present in the work. For compliance professionals, engagement translates into vigilance: spotting red flags, questioning anomalies, and challenging assumptions.

But AI introduces a risk of disengagement. When it summarizes investigation interviews or drafts compliance dashboards, humans can become passive consumers rather than active participants. Over time, “good enough” replaces “deep enough.”

This erosion of ownership is dangerous for compliance. Regulators increasingly expect companies to demonstrate not only robust processes but also genuine cultural buy-in. If compliance staff are disengaged because AI has taken over too many cognitive functions, the program risks becoming a paper tiger, form without substance.

To counter this, compliance leaders should intentionally design workflows where humans must interpret and add value to AI outputs. For example, AI can generate a first-pass risk heat map, but compliance officers should validate and adjust it based on local context and business realities. That layer of judgment keeps engagement alive and maintains a sense of accountability.

Ultimately, compliance is about judgment, not just information. AI can support but never substitute for human ownership of ethical decision-making.

5. Homogenization Threatens Compliance Program Uniqueness

Every compliance program reflects its company’s unique culture, risks, and leadership voice. Mortensen warns that because large language models are convergent technologies, they produce standardized answers. Leaders who rely on AI for memos, presentations, or policies risk erasing their distinctive tone and voice.

For compliance professionals, this risk translates into a loss of authenticity. Regulators, employees, and stakeholders can quickly tell the difference between a policy that reflects real company values and one that reads like a generic AI template. Over time, over-reliance on AI can strip a compliance program of its personality and with it, credibility.

The danger goes deeper. If multiple companies rely on AI to draft similar codes of conduct, policies may look indistinguishable. That creates industry-wide convergence at a time when regulators are looking for tailored programs that reflect specific risks. In effect, AI could make compliance programs less defensible, not more.

The path forward is to use AI as a scaffolding tool, not as a finished product. Compliance officers should inject their organization’s unique voice, industry-specific risks, and leadership tone into every AI-assisted document. Authenticity is non-negotiable in compliance. AI can never be allowed to flatten it.

AI Audits for Compliance Leaders

Mortensen’s framework for an “AI value audit” is particularly relevant for compliance. He suggests three steps: (1) determine the types of value a task creates, (2) prioritize and optimize them, and (3) continually reassess with a “milk test” to ensure the value hasn’t expired.

For compliance, this means asking: Does AI enhance our program without undermining knowledge, skills, trust, engagement, or authenticity? If not, the short-term benefits may not be worth the long-term costs.

AI is here to stay, and compliance officers must learn to harness it. But like every tool before it, AI is not a replacement for judgment, culture, and leadership. It is an assistant to compliance, not a substitute for it.