Categories
Compliance Tip of the Day

Compliance Tip of the Day – Crowd Sourcing Risk Intelligence

Welcome to “Compliance Tip of the Day,” the podcast that brings you daily insights and practical advice on navigating the ever-evolving landscape of compliance and regulatory requirements. Whether you’re a seasoned compliance professional or just starting your journey, our goal is to provide you with bite-sized, actionable tips to help you stay ahead in your compliance efforts. Join us as we explore the latest industry trends, share best practices, and demystify complex compliance issues to keep your organization on the right side of the law. Tune in daily for your dose of compliance wisdom, and let’s make compliance a little less daunting, one tip at a time.

Today, we consider how you can use your data to crowdsource your risk intelligence.

For more information on this topic, refer to The Compliance Handbook: A Guide to Operationalizing Your Compliance Program, 6th edition, recently released by LexisNexis. It is available here.

Categories
Daily Compliance News

Daily Compliance News: July 24, 2025, The In Phone Hell Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. We consider four stories from the business world, including compliance, ethics, risk management, leadership, or general interest, that are relevant to the compliance professional.

Top stories include:

  • Morgan Stanley screening draws scrutiny. (WSJ)
  • Carlos Ghosn finally faces justice. (Bloomberg)
  • No ‘hello,’ no answer? (FT)
  • Megadeals are in the offing. (Reuters)

You can donate to flood relief for victims of the Kerr County flooding by going to the Hill Country Flood Relief here.

Categories
Everything Compliance

Everything Compliance: Episode 157, The Q2 – 2025, Great Women in Compliance Edition

A few months ago, we hosted a Special Edition of Everything Compliance, featuring the two primary hosts of the Great Women in Compliance, Lisa Fine and Hemma Lomax, along with our female panelists from Everything Compliance, Karen Woody and Karen Moore, all moderated by Kristy Grant-Hart. The episode was so popular (and the host and guests had so much fun) that everyone involved decided to make it a quarterly event. Today’s episode is hosted by Kristy Grant-Hart, with panelists Karen Moore, Lisa Fine, and Hemma Lomax.

Highlights include:

  • Lisa Discusses UK Fraud Prevention Law
  • Hemma on the False Claims Act
  • Karen on Compliance, Rewards, and Incentives
  • Exploring Behavioral Science in Business
  • Ethics and Compliance Incentives
  • AI, Blackmail, and Whistleblowing
  • Sentient AI and Ethical Dilemmas
  • Rants and Raves: Compliance and Beyond

The members of this special episode of Everything Compliance (GWIC edition) are:

  • Karen Moore is an Adjunct Law Professor at the Fordham School of Law.
  • Lisa Fine is a co-host of the award-winning Great Women in Compliance.
  • Hemma Lomax is a co-host of the award-winning Great Women in Compliance.

The host of this special episode of Everything Compliance is Kristy Grant-Hart, VP, Head of Advisory Services at Diligent and co-host of the award-winning podcast 2 Gurus Talk Compliance.

Categories
Trekking Through Compliance

Trekking Through Compliance: Episode 53 – Starship Oversight: AI Governance Lessons from The Ultimate Computer

One of Star Trek’s enduring gifts to corporate compliance professionals is its willingness to ask: What happens when innovation runs ahead of governance? Nowhere is this question more provocatively posed than in the classic episode “The Ultimate Computer.” As we enter an era where artificial intelligence is no longer science fiction but a business reality, “The Ultimate Computer” is required viewing for every compliance officer and governance professional. The episode’s hard lessons about control, accountability, and the limits of machine logic remain as relevant in today’s boardrooms as they were on Gene Roddenberry’s bridge.

Today, we explore five AI governance lessons, each grounded in unforgettable moments from “The Ultimate Computer” that every compliance team should consider as they guide their organizations through the brave new world of AI.

Lesson 1: Human Oversight Is Irreplaceable—AI Needs Accountable Stewards

Illustrated By: Dr. Richard Daystrom, the M-5’s creator, insists that his AI can run the Enterprise more efficiently than its human crew. He disables manual controls, leaving the starship and its fate entirely in M-5’s digital hands.

Compliance Lesson: Too often, organizations are tempted to turn complex decisions over to AI, assuming that algorithms can “do it all.” But “The Ultimate Computer” makes one fact clear: even the smartest AI requires ongoing, independent human oversight.

Lesson 2: Understand Your AI—Transparency and Explainability Are Non-Negotiable

Illustrated By: As M-5 takes control, it makes a series of decisions that the crew cannot understand.

Compliance Lesson: AI systems, especially those built with deep learning or complex algorithms, can be notoriously opaque. If even your developers can’t explain how decisions are made, you’re courting disaster.

Lesson 3: Build in Ethics from the Start—Programming Without Principles Is Perilous

Illustrated By: Daystrom uploads his engrams, his personality and values, into M-5, believing that this will imbue the AI with human ethics.

Compliance Lesson: AI reflects not just the data it’s trained on, but the biases and blind spots of its creators. If you fail to embed clear ethical guidelines, guardrails, and values into your systems from the beginning, you risk unleashing “rogue AI” that optimizes for the wrong outcomes or perpetuates bias at scale.

Lesson 4: Test and Validate Continuously—Don’t Assume, Verify

Illustrated By: When exposed to the complexity and unpredictability of real-space maneuvers, M-5’s system flaws become evident only after it’s too late.

Compliance Lesson: No AI system should be considered “finished” on launch day. The real world is infinitely complex and ever-changing, and AI systems can degrade, drift, or encounter unanticipated circumstances.

Lesson 5: Assign Clear Responsibility—Accountability Can’t Be Delegated to a Machine

Illustrated By: Ultimately, it falls to Kirk to reassert human command and take responsibility for the ship’s fate.

Compliance Lesson: AI is a tool, not a scapegoat. Assigning accountability to a system erodes trust and undermines compliance. In the end, someone must always be responsible for decisions made “by the computer.”

Final ComplianceLog Reflections

“The Ultimate Computer” ends with Kirk reclaiming command, but not before costly lessons are learned. For today’s compliance and governance professionals, the message is clear: you can’t outsource accountability, ethics, or oversight to a machine. As AI reshapes our organizations, we must lead with principles and prepare for the unexpected.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Blog

The Ultimate Computer: Five Essential AI Governance Lessons from Star Trek

One of Star Trek’s enduring gifts to corporate compliance professionals is its willingness to ask: What happens when innovation runs ahead of governance? Nowhere is this question more provocatively posed than in the classic episode “The Ultimate Computer.” As Captain Kirk and the Enterprise crew test the revolutionary M-5 computer—a prototype artificial intelligence designed to automate starship operations—they find themselves on a collision course with the ethical, operational, and human dilemmas of entrusting machines with decisions without proper oversight.

As we enter an era where artificial intelligence is no longer science fiction but a business reality, “The Ultimate Computer” is required viewing for every compliance officer and governance professional. The episode’s hard lessons about control, accountability, and the limits of machine logic remain as relevant in today’s boardrooms as they were on Gene Roddenberry’s bridge.

Today, we explore five AI governance lessons, each grounded in unforgettable moments from “The Ultimate Computer” that every compliance team should consider as they guide their organizations through the brave new world of AI.

Lesson 1: Human Oversight Is Irreplaceable—AI Needs Accountable Stewards

Illustrated By: Dr. Richard Daystrom, the M-5’s creator, insists that his AI can run the Enterprise more efficiently than its human crew. He disables manual controls, leaving the starship and its fate entirely in M-5’s digital hands. When things go wrong, Kirk and his crew struggle to regain control as M-5 begins to operate independently, with catastrophic results.

Compliance Lesson: Too often, organizations are tempted to turn complex decisions over to AI, assuming that algorithms can “do it all.” But “The Ultimate Computer” makes one fact clear: even the smartest AI requires ongoing, independent human oversight. Without it, errors go unchecked and responsibility becomes dangerously diffuse.

Corporate boards, executives, and compliance officers must ensure that all AI systems, especially those with critical business or safety functions, are subject to robust oversight. This includes clearly defined roles for monitoring, intervention, and (crucially) the ability to override the machine. Establish an AI governance framework that requires periodic human review, real-time tracking, and escalation procedures for intervention. Always preserve the “off switch.”

Lesson 2: Understand Your AI—Transparency and Explainability Are Non-Negotiable

Illustrated By: As M-5 takes control, it makes a series of decisions that the crew can’t understand. When the computer begins attacking other ships during a training exercise, killing crew members in the process, no one knows why, because M-5’s reasoning is a black box even to its creator, Daystrom.

Compliance Lesson: AI systems, especially those built with deep learning or complex algorithms, can be notoriously opaque. If even your developers can’t explain how decisions are made, you’re courting disaster. “The Ultimate Computer” demonstrates the dangers of unexplainable AI: when the stakes are high, opacity erodes trust and prevents timely intervention.

Modern AI governance must demand explainability and transparency, particularly for systems that make or recommend decisions in compliance, risk, HR, or other regulated domains. You must be able to audit, understand, and document how your AI reaches its conclusions. Mandate that all critical AI deployments include documentation of model logic, data sources, and decision-making pathways. Require “explainable AI” solutions for high-risk use cases, and build audit trails for regulatory scrutiny.

Lesson 3: Build in Ethics from the Start—Programming Without Principles Is Perilous

Illustrated By: Daystrom uploads his engrams, his personality and values, into M-5, believing that this will imbue the AI with human ethics. But he fails to account for his unresolved traumas and emotional instability, which M-5 replicates and magnifies, leading to dangerous, unethical decisions.

Compliance Lesson: AI reflects not just the data it’s trained on, but the biases and blind spots of its creators. If you fail to embed clear ethical guidelines, guardrails, and values into your systems from the beginning, you risk unleashing “rogue AI” that optimizes for the wrong outcomes or perpetuates bias at scale.

AI governance is not just a technical challenge; rather, it is an ethical mandate. Involve compliance, legal, DEI, and other stakeholders in the design phase to ensure your systems align with your organization’s values and regulatory obligations. Establish cross-functional AI ethics committees to review training data, test for bias, and define the acceptable uses and limitations of AI. Document decisions and revisit them regularly as your business and regulatory landscape evolve.

Lesson 4: Test and Validate Continuously—Don’t Assume, Verify

Illustrated By: Before full deployment, M-5 is tested only in limited scenarios. When exposed to the complexity and unpredictability of real-space maneuvers, the system’s flaws become evident only after it’s too late. The lack of ongoing testing and validation costs lives and nearly destroys the Enterprise.

Compliance Lesson: No AI system should be considered “finished” on launch day. The real world is infinitely complex and ever-changing, and AI systems can degrade, drift, or encounter unanticipated circumstances. “Set it and forget it” is not an option in AI governance.

Organizations must commit to ongoing validation, testing, and recalibration of all critical AI systems to ensure their reliability and effectiveness. This includes stress-testing under simulated “edge cases” and periodic audits against evolving compliance and risk standards. Develop a continuous monitoring and testing protocol for AI, including regular scenario-based drills, compliance checks, and real-world audits to ensure adequate oversight. Implement “red team” exercises to identify vulnerabilities and unintended consequences.

Lesson 5: Assign Clear Responsibility—Accountability Can’t Be Delegated to a Machine

Illustrated By: As M-5’s rampage escalates, command responsibility is unclear. Daystrom blames the system, the system blames its programming, and the Starfleet brass threatens to destroy the Enterprise. Ultimately, it falls to Kirk to reassert human command and take responsibility for the ship’s fate.

Compliance Lesson: AI is a tool, not a scapegoat. Assigning accountability to a system erodes trust and undermines compliance. In the end, someone must always be responsible for decisions made “by the computer.” Regulators, investors, and the public will not accept “the algorithm did it” as a defense.

Every AI deployment must have designated human owners—individuals or teams empowered (and required) to monitor, question, and take responsibility for outcomes. Define roles and responsibilities for AI oversight in policies and procedures. Assign an accountable executive (“AI owner”) for each critical system and ensure they have the necessary authority and training to perform their duties effectively.

Final ComplianceLog Reflections

“The Ultimate Computer” ends with Kirk reclaiming command, but not before costly lessons are learned. For today’s compliance and governance professionals, the message is clear: you can’t outsource accountability, ethics, or oversight to a machine. As AI reshapes our organizations, we must lead with principles and prepare for the unexpected.

AI may be the “ultimate computer,” but governance remains the ultimate human challenge. As you chart your course through this new frontier, let the lessons of Star Trek remind you: the best technology serves humanity, not the other way around.

Resources:

Excruciatingly Detailed Plot Summary by Eric W. Weisstein

MissionLogPodcast.com

Memory Alpha

Categories
Life with GDPR

Life With GDPR: Episode 114 – Navigating GDPR in Global Outsourcing with Inge Zwick

Tom Fox takes a solo turn as Jonathan Armstrong is on assignment. Today, Tom visits with Inge Zwick, Executive Director, Head of Europe, and ESG Lead at Emapta Global, a global outsourcing company.

They discuss the company’s operations, with a particular focus on managing GDPR compliance within the outsourcing framework. They also discuss common misconceptions about outsourcing under the GDPR, risk assessment processes, handling data subject access requests, and integrating compliance into business operations. Zwick also shares insights into how Emapta collaborates with clients to ensure compliance and offers advice to business leaders on future-proofing their outsourcing strategies in light of GDPR requirements. Additionally, the discussion explores the integration of ESG initiatives within the company’s operations.

Key takeaways:

  • Outsourcing and GDPR Compliance
  • Risk Assessment and Data Security
  • Subject Access Requests (SAR)
  • Outsourcing Contracts and GDPR Obligations
  • Integrating Compliance into Operations

Resources:

Connect with Tom Fox

Connect with Inge Zwick

Connect with Emapta Global

Life with GDPR was recently honored as a Top Data Security Podcast.  

Categories
Hill Country Authors

Hill Country Authors – Exploring Mental Health and Community-Based Practices with Claudette Fette

Welcome to a new season of the award-winning Hill Country Authors Podcast, sponsored by Stoney Creek Publishing. In this podcast, Hill Country resident Tom Fox visits with authors who live in and write about the Texas Hill Country. In this episode, Tom visits with Claudette Fette, an academic from Texas Woman’s University, to talk about her professional background and her work in mental health and community-based practices.

Fette shares the journey that led her to occupational therapy and advocacy, influenced by her son’s struggles with mental illness and addiction. They discuss the development and principles of authentic wraparound services, the importance of multidisciplinary staffing, and the effectiveness of early intervention and preventative mental health support. Fette also touches on the failures of the criminal justice system in dealing with mental health and substance abuse, advocating for restorative justice practices. Additionally, she provides insights into her writing and publishing process for her book, ‘No Saints Here,’ and the ongoing resources she provides through her blog and website.

Key highlights:

  • Claudette Fette’s Professional and Academic Journey
  • The Story Behind ‘No Saints Here’
  • Community-Based Alternatives to Institutionalization
  • The Importance of Multidisciplinary Staffing
  • Early Intervention and Preventative Mental Health Support
  • Educational Interventions and Support Systems
  • Criminal Justice System and Mental Health
  • Authentic Wraparound and Recovery
  • Writing and Publishing Journey

Resources

Claudette Fette on Stoney Creek Publishing

No Saints Here on Texas A&M University Press

Stoney Creek Publishing Website

Podcast Cover Art 

Nancy Huffman Fine Art

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

Categories
Blog

Rethinking Compliance: Practical Steps for Adapting to the UK’s New Failure to Prevent Fraud Legislation

The Economic Crime and Corporate Transparency Act 2023 introduced the offense of failure to prevent fraud (FTPF), which takes effect on 1 September 2025. Every US company doing business in the UK or with UK companies must be aware of this law and its implications. Its jurisdictional reach is as broad as, or even broader than, that of the US Foreign Corrupt Practices Act (FCPA). Corporate compliance professionals find themselves in uncharted territory with this new legal framework, which requires a thorough understanding of how the legislation applies and how it can reshape their compliance strategies. Fortunately, the UK government has published a document entitled “Economic Crime and Corporate Transparency Act 2023: Guidance to organisations on the offence of failure to prevent fraud” (the Guidance). Over the next several blog posts, I will explore the Guidance and its implications for US-based compliance professionals.

The FTPF introduces corporate criminal liability for large organizations where an associated individual commits fraud, intending to benefit the organization or its clients. This represents a seismic shift for corporate compliance programs because senior management does not need to have ordered or even been aware of the fraud for liability to attach. The very act itself, if proven to benefit the organization or its clients, triggers organizational accountability.

Which companies fall under this statute? The scope applies to large organizations, defined as incorporated entities or partnerships that meet at least two of the following three criteria: more than 250 employees, turnover exceeding £36 million, or total assets exceeding £18 million. This definition intentionally includes subsidiaries and partnerships within its ambit, casting a wide net for compliance oversight.
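The two-of-three threshold test above is mechanical enough to express in code. The following is a minimal illustrative sketch, not legal advice: the `Organisation` record and the function name `is_large_organisation` are hypothetical names chosen for this example, and the thresholds are those stated in the Guidance.

```python
from dataclasses import dataclass


@dataclass
class Organisation:
    """Hypothetical record of the three statutory size measures."""
    employees: int
    turnover_gbp: float
    total_assets_gbp: float


def is_large_organisation(org: Organisation) -> bool:
    """Return True if the organisation meets at least two of the three
    'large organisation' criteria set out in the Guidance."""
    criteria_met = sum([
        org.employees > 250,                 # more than 250 employees
        org.turnover_gbp > 36_000_000,       # turnover exceeding £36 million
        org.total_assets_gbp > 18_000_000,   # total assets exceeding £18 million
    ])
    return criteria_met >= 2
```

For example, an entity with 300 employees and £40 million turnover is in scope even if its total assets are only £10 million, because it meets two of the three criteria; an entity that meets only one criterion is not.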

The Guidance clearly defines the types of fraud included under the new offense. These base fraud offenses include fraud by false representation, failing to disclose information, abuse of position, false accounting, cheating the public revenue, and fraudulent trading. Organizations must now look beyond mere regulatory adherence to proactive fraud detection and prevention strategies, given the broad spectrum of fraud covered.

The term “associated person” is critical. It extends beyond employees and explicitly includes agents, subsidiaries, or any other persons providing services for or on behalf of the organization. The Guidance notably excludes those merely supplying goods, emphasizing service relationships as the core focus. Understanding the depth and breadth of these associations will require enhanced due diligence processes, rigorous vetting of service providers, and a fundamental re-evaluation of contractual relationships.

Territoriality is another aspect that compliance professionals must closely evaluate. The offense holds a distinct UK nexus; thus, fraud committed by associated persons must either occur in the UK or involve gains or losses realized within UK boundaries. This global perspective on compliance places significant responsibility on UK-based operations with international associations and activities.

Notably, the Guidance outlines scenarios to clarify ambiguities. Consider, for instance, fraud committed by a payroll department that diverts employee pension funds to support other internal projects. Here, the payroll head abuses a position of trust to commit fraud intended to benefit the company’s operations. Even if no senior manager or director was aware of the fraud, the company could still face prosecution under this legislation unless it has demonstrably reasonable procedures in place to prevent such fraud.

In terms of defensive mechanisms, the Guidance emphasizes the implementation of “reasonable fraud prevention procedures.” This means corporations must adopt tailored compliance systems that address the specific risks of their industry, size, and operational territories. Generic fraud detection tools will likely fall short of this legal standard. Instead, robust, proactive, risk-specific compliance measures, supported by ongoing training and review, become non-negotiable.

The Serious Fraud Office will lead investigations into the FTPF, and the Crown Prosecution Service will handle any courtroom work. An interesting aspect here is the possibility of Deferred Prosecution Agreements (DPAs) in England and Wales, suggesting that organizations may negotiate terms if fraud prevention measures were deemed insufficient initially but have since been significantly improved.

The Guidance emphasizes the importance of corporate cooperation with enforcement authorities. Organizations that demonstrate transparent reporting, proactive fraud detection efforts, and comprehensive preventive frameworks are likely to receive more favorable prosecutorial discretion and may be eligible for DPAs.

From a compliance perspective, understanding intent to benefit is crucial. The Guidance explicitly notes that even indirect or unrealized benefits to the organization, such as a failed attempt to attract investors through false accounting, could trigger liability. The intent to benefit need not be the primary motivation; any incidental or indirect benefit, financial or otherwise, places the organization at risk. Compliance programs must thus anticipate, monitor, and mitigate even seemingly remote risks.

This guidance represents not only a legal shift but also a call for a cultural transformation within corporations. Compliance professionals must foster an environment where ethical practices are embedded, whistleblowers are supported, and robust prevention frameworks are continuously evaluated and strengthened.

Key Highlights for Corporate Compliance Professionals:

  1. Understand the expanded scope of corporate liability and who qualifies as an associated person.
  2. Clearly identify the specific types of fraud covered under the Act.
  3. Implement tailored and robust fraud prevention procedures.
  4. Recognize the importance of territorial considerations for global operations.
  5. Foster a proactive and ethical organizational culture, supported by strong whistleblowing protocols.

The Economic Crime and Corporate Transparency Act 2023 mandates a higher degree of vigilance, proactive risk management, and cultural alignment with anti-fraud values. Organizations failing to adapt swiftly to this evolving compliance landscape risk severe financial penalties, reputational damage, and operational disruption. Forward-looking compliance professionals will seize this moment to reinforce corporate integrity, safeguard organizational reputation, and ensure lasting resilience against fraud.

The Guidance devotes an entire section to compliance with the FTPF. Join us tomorrow as we take a deep dive into its prescriptions.

Categories
Red Flags Rising

Red Flags Rising: S01 E21 – “Secondary Tariffs” with Tom Fox

Mike and Brent were honored guests on the FCPA Compliance Report podcast with their podfather, Tom Fox, the Voice of Compliance and founder of the Compliance Podcast Network. They discuss the concept of “secondary tariffs” recently threatened by the U.S. against Russia’s trading partners (00:44); what such secondary tariffs would mean, and for whom (03:21); how multinational companies should start thinking through the impact of these potential tariffs (04:37); the need to be very, very, very careful about schemes that seem too good to be true (because they are) (06:03); how risk-based compliance can help multinationals evaluate proposed reconfigurations of procurement flows (09:36); and where self-certifications by suppliers might not be sufficient (10:22). They conclude with a deep dive into what False Claims Act enforcement for tariff evasion might look like and how to mitigate enforcement risks by understanding and leveraging the False Claims Act’s “knowledge” element (13:52).

Resources:

Compliance Podcast Network

Tom Fox on LinkedIn

FCPA Compliance Report podcast

More about Tom Fox

Brent LinkedIn

Mike LinkedIn

Mike & Brent’s “Fresh Looks” Series