Blog

When AI Becomes Evidence of Bad Governance: What CCOs and Boards Can Learn from Fortis Advisors

The Delaware Court of Chancery has handed compliance leaders and boards a timely lesson: generative AI is not a substitute for judgment, legal discipline, or governance. When leaders use AI to validate a predetermined objective, the technology does not reduce risk. It can become powerful evidence of intent, bad faith, and control failure.

A Cautionary Tale for Corporate Leaders

The recent Delaware Court of Chancery decision in Fortis Advisors, LLC v. Krafton, Inc. should be read by every Chief Compliance Officer (CCO), board member, general counsel, and corporate deal professional. The decision recounts a dispute in which a buyer, apparently unhappy with a substantial earnout obligation, turned to ChatGPT for advice on how to escape the economic consequences of the deal. According to the court’s account, the buyer then executed an AI-generated strategy designed to renegotiate the arrangement or wrest control from the seller’s management team. The court ultimately found that the buyer had wrongfully terminated key employees and improperly seized operational control; as a remedy, it reinstated the seller’s CEO and extended the earnout window to restore a genuine opportunity to achieve the payout.

The Real Compliance Lesson

For compliance professionals, the most important lesson is not that AI is dangerous. The lesson is that leadership can use AI in dangerous ways when governance is absent. That is a far more important point.

Too many organizations still approach AI governance as a technology problem. They focus on model performance, cybersecurity, or procurement review. Those are important issues, but this case reminds us that AI governance begins with human purpose. What question was asked? What objective was embedded in the prompt? What controls existed before action was taken? Who challenged the proposed course of conduct? Who documented the legal and ethical analysis? Those are compliance questions. Those are board questions.

Viewing the Case Through the DOJ ECCP Lens

This is also where the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) provides a useful lens. The ECCP asks whether a company’s program is well designed, adequately resourced, empowered to function effectively, and actually works in practice. Put that framework over this fact pattern, and the governance gaps become painfully clear. Was there a control around the use of generative AI in strategic or legal decision-making? Was there escalation to legal, compliance, or the board when a significant earnout exposure was at stake? Was there any meaningful challenge function, or did leadership use AI as a convenient amplifier for a business objective it had already chosen?

The case suggests the latter. That should concern every board. Generative AI can be useful in brainstorming, summarizing, and scenario testing. But when executives use it to reinforce a desired outcome, particularly one touching contractual obligations, employment decisions, or post-closing governance rights, the tool can become a mechanism for rationalizing misconduct.

When AI Chats Become Discoverable Evidence

Worse, it creates a record. The court noted that the AI chats were not privileged, were discoverable, and vividly underscored the buyer’s efforts to avoid its legal obligations. That point alone should stop corporate leaders in their tracks.

Many executives still treat AI chats as an informal thinking space, almost like talking to themselves. That is a serious mistake. Prompt histories, outputs, internal forwarding, and downstream use can all become evidence. If employees use public or enterprise AI tools to explore termination strategies, dispute positions, or ways around contractual commitments, they may be creating exactly the documentary record that plaintiffs, regulators, and judges will later find most compelling. In other words, the issue is not simply data leakage. It is discoverability, privilege erosion, and self-generated evidence of intent.

That is why CCOs and boards need to move beyond generic AI-use policies and build governance around high-risk use cases. The question should not be, “Do we allow ChatGPT?” The question should be, “Under what circumstances can generative AI be used in decisions involving legal rights, employee discipline, regulatory exposure, strategic transactions, or board-level matters?” If the answer is unclear, the company has work to do.

The M&A and Earnout Governance Lesson

The dealmaking lesson here is equally important. Earnouts are already fertile ground for post-closing disputes because they sit at the intersection of incentives, control, and timing. Buyers often want flexibility. Sellers want protection from interference. This case illustrates what can happen when a buyer attempts to manipulate operations in a way that affects the achievement of the earnout. The court not only found wrongful interference but also equitably extended the earnout period by 258 days and preserved a further contractual right to extend, thereby materially altering the deal’s economic landscape.

That is a governance lesson hiding inside an M&A lesson. Once a company acquires a business with earnout rights and operational covenants, post-closing conduct is no longer just integration management. It is compliance management. Interference with operational control, pretextual terminations, or actions designed to suppress performance metrics can lead to litigation, destroy value, and trigger judicial remedies that boards did not expect. CCOs should therefore insist that M&A integration playbooks include compliance review of earnout governance, decision rights, escalation protocols, and documentation standards.

Five Lessons for Boards and CCOs

What should boards and compliance officers do now? Here are five lessons.

  1. Govern the objective before you govern the tool. AI is only as sound as the purpose for which it is deployed. If leadership starts with a bad objective, AI can scale the problem. Boards should require management to define prohibited uses of AI in areas such as contract avoidance, pretextual employee actions, retaliation, and legal strategy pursued without oversight by counsel.
  2. Treat high-risk AI prompts and outputs as governed business records. If a prompt relates to litigation, terminations, regulatory response, deal rights, or board matters, it should fall within clear policies on retention, review, and escalation. Employees need to understand that AI interactions may be discoverable and may not be privileged.
  3. Embed legal and compliance into consequential AI use cases. The ECCP emphasizes whether compliance has stature, access, and authority. That principle applies directly here. Strategic uses of AI that touch contractual rights, employment decisions, or fiduciary issues should not proceed without legal and compliance review.
  4. Build AI governance into M&A and post-closing integration. Earnout structures, operational covenants, and seller management rights are precisely the areas where incentives can distort behavior. Boards should ask whether integration teams have controls preventing actions that could be viewed as interference, manipulation, or bad-faith conduct.
  5. Document challenge, not just action. A final decision, standing alone, does not prove good governance; the process surrounding it does. Was there dissent? Was there an analysis? Was there an escalation memo? Was there a documented rationale grounded in law, contract, and fiduciary duty? If not, the company may be left with a record that tells the wrong story.

Governance Must Come Before AI

In the end, this case is not really about a video game company. It is about a governance failure dressed in modern technology. Leaders appear to have used AI not to improve judgment, but to reinforce a course of conduct they already wanted to pursue. That is the compliance lesson. AI does not remove the need for fiduciary discipline, legal oversight, or ethical restraint. It makes those requirements more urgent.

For boards and CCOs, the mandate is clear. Governance must come first. Because when AI is used without guardrails, it does not merely create risk; it creates a record. It can become the evidence.

Daily Compliance News

Daily Compliance News: October 1, 2025, The Q4 Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, spanning compliance, ethics, risk management, leadership, and general interest, relevant to the compliance professional.

Top stories include:

  • Exxon seeks security assurances for the Mozambique LNG project. (FT)
  • TXSE gets SEC approval. (Reuters)
  • Charlie Javice received a prison sentence of more than 7 years. (WSJ)
  • ChatGPT has new parental controls. (NYT)
2 Gurus Talk Compliance

2 Gurus Talk Compliance – Episode 58 – The AI Edition

What happens when two top compliance commentators get together? They talk compliance, of course. Join Tom Fox and Kristy Grant-Hart in 2 Gurus Talk Compliance as they discuss the latest compliance issues in this week’s episode!

Stories this week include:

  • Compliance with the new CRD Regulations is six weeks away. (CDF Labor Law)
  • TikTok to Utilize AI as Content Moderators. (WSJ)
  • Is AI coming for culture? (New Yorker)
  • Is AI psychosis real? (BBC)
  • AI will not replace historians. (WSJ)
  • Google Could Get Broken Up This Week. Here’s What It Would Mean. (NYT)
  • Using AI Agents to Cheat on Training. (Radical Compliance)
  • AI Made Me Dumb & Sad. (Corporate Compliance Insights)
  • Incentives in Compliance and Ethics Programs: What Does ChatGPT Tell Us? (Ideas & Answers)
  • Woman Claims Wind Blew Cocaine Into Her Purse, Police Say – (CBS News)

Resources:

Kristy Grant-Hart on LinkedIn

Prove Your Worth

Tom

Instagram

Facebook

YouTube

Twitter

LinkedIn

Daily Compliance News

Daily Compliance News: August 28, 2025, The Occupied Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee, and listen in to the Daily Compliance News, all from the Compliance Podcast Network. Each day, we consider four stories from the business world, spanning compliance, ethics, risk management, leadership, and general interest, relevant to the compliance professional.

Top stories include:

  • The Argentine Central Bank raises reserves in response to allegations of presidential corruption. (Reuters)
  • Teen suicide and ChatGPT. (NYT)
  • South Africans confront a 54% increase in fraud. (Bloomberg)
  • Microsoft employees occupy the CEO’s office in protest over the situation in Gaza. (WSJ)

You can donate to flood relief for victims of the Kerr County flooding by going to the Hill Country Flood Relief here.

Sunday Book Review

Sunday Book Review: February 16, 2025, The Books on AI Edition

In the Sunday Book Review, Tom Fox considers books that would interest the compliance professional, the business executive, or anyone who might be curious. These could be books about business, compliance, history, leadership, current events, or anything else that might interest Tom. Today, we have a five-book look at the top books on AI for 2025.

  1. Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
  2. The Singularity Is Nearer: When We Merge with AI by Ray Kurzweil
  3. The Alignment Problem: Machine Learning and Human Values by Brian Christian
  4. Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson
  5. Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari

Resources:

The Best Books on AI in 2025, on FiveBooks.com

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.


FCPA Compliance Report

FCPA Compliance Report – DeepSeek and the Recalibration of Risk with Mike Huneke and Brent Carlson

Welcome to the award-winning FCPA Compliance Report, the longest-running podcast in compliance. In this episode, Tom welcomes back Mike Huneke and Brent Carlson for a special two-part podcast series on DeepSeek’s bombshell AI advancements announced on President Trump’s inauguration day. In Part 1, they review the business and compliance implications, and in Part 2, they consider the Sputnik Moment that has occurred.

Part 1 examines the immediate and significant repercussions across both the business and compliance landscapes. Key topics include the economic and geopolitical ramifications of DeepSeek’s innovations, changes in export control policies, and the unique compliance challenges AI technology poses. The discussion also examines how corporations can recalibrate their risk frameworks, integrate high-probability standards, and leverage data analytics to handle millions of transactions in a global economy. Emphasizing the importance of comprehensive compliance programs, the episode provides actionable insights for compliance professionals navigating this evolving landscape.

Key highlights:

  • DeepSeek’s AI Breakthrough
  • Economic and Compliance Implications
  • Export Controls and Legal Concerns
  • Compliance Strategies and Risk Management
  • Training and Organizational Culture

Resources

Mike Huneke

Hughes Hubbard & Reed website

Brent Carlson on LinkedIn

A Fresh Look at US Export Controls and Sanctions

DeepSeek Finds US Export Controls at a New ‘Sputnik Moment’ on Bloomberg Law

Tom Fox

Instagram

Facebook

YouTube

Twitter

LinkedIn

For more information on the Ethico Toolkit for Middle Managers, available at no charge, click here.

Great Women in Compliance

Great Women in Compliance: Jess Nall on Defending Tech Innovators

Welcome to the Great Women in Compliance Podcast. In this episode, Hemma visits with Jess Nall, a partner at Baker McKenzie.

Jess is a leader of Baker McKenzie’s AI and Cyber practice and leads the Firm’s government defense practice in the US heart of technological innovation, the San Francisco Bay Area. For more than twenty years, Jess has defended technology innovators in high-profile federal and state government enforcement actions and investigations involving AI, cybersecurity, algorithmic price-fixing, economic espionage, and trade sanctions.

With two decades of tech law experience and a pivotal role in numerous global technology enforcement cases, Jess has a grounded understanding of the complexities surrounding AI compliance and enforcement. She highlights the rapid evolution of global regulation and the increasing pressure it places on compliance professionals.

Jess advocates a proactive approach to understanding and preparing for the enforcement and governance aspects of AI, encouraging clients to develop robust good-faith narratives that illustrate their compliance efforts. This perspective is informed not only by her substantial professional experience but also by her deep understanding of the potential risks and misuses of AI technology.

Key Highlights:

  • AI Regulations: Impact on Businesses and Compliance
  • Navigating Risks in AI Compliance and Enforcement
  • Deceptive AI Marketing Practices in Industry
  • Fostering Collaboration for AI Compliance Success
  • Enhancing Regulatory Compliance with AI Analytics
  • Enhancing Legal Access with AI Translation

Resources:
Join the Great Women in Compliance community on LinkedIn here.

AI Strategy: The Whole Brain Approach Will Win, on Forbes.com

TechLaw10

TechLaw10: Eric Sinrod & Jonathan Armstrong on AI, ChatGPT & Legal Liability

In this edition of TechLaw10, Jonathan Armstrong talks to Attorney and Professor Eric Sinrod from his home in California. They look again at the legal issues surrounding AI, the rise of chatbot ChatGPT, and the legal and ethical issues it brings.

This film follows an earlier one that examined these issues: https://bit.ly/chatgptfilm. This time, Eric and Jonathan focus on liability for chatbot operators.

The topics include:

  • Can chatbots be liable for the things they do?
  • Should Section 230 of the US Communications Decency Act protect chatbots?
  • Can chatbots be liable for IP issues?
  • How is the UK Shetland Times case from 1996 relevant to today’s AI offerings?
  • Is ChatGPT right in its legal assessment of its liabilities?
  • What are the issues involved in scraping data to train AI?
  • What will the proposed new EU AI law do?
  • How will regulators handle AI complaints?

You can listen to earlier TechLaw10 audio podcasts with Eric and Jonathan at https://www.duanemorris.com/site/techlaw10.html

Connect with the Compliance Podcast Network at:

LinkedIn: https://www.linkedin.com/company/compliance-podcast-network/
Facebook: https://www.facebook.com/compliancepodcastnetwork/
YouTube: https://www.youtube.com/@CompliancePodcastNetwork
Twitter: https://twitter.com/tfoxlaw
Instagram: https://www.instagram.com/voiceofcompliance/
Website: https://compliancepodcastnetwork.net/

TechLaw10

Eric Sinrod & Jonathan Armstrong on AI & ChatGPT

In this edition of TechLaw10, Jonathan Armstrong talks with Attorney and Professor Eric Sinrod. They examine the legal issues surrounding AI, the rise of chatbot ChatGPT, and the legal and ethical issues it raises.

The topics include:

    • A Discussion on Ethics & Technology
    • The Need for AI Not to Discriminate
    • The Use of AI in Recruitment
    • The Use of Chatbots for Harm
    • AI & Copyright Issues
    • Does ChatGPT lie, and if so, what are the consequences?
    • Political Bias with Chatbots
    • A 1996 Case that could limit the Lawful Activity of some chatbots
    • How AI Harvesting Data Could be a Criminal offense under Computer Misuse Legislation
    • How data subject rights under GDPR may impact AI

You can listen to earlier TechLaw10 audio podcasts with Eric and Jonathan at https://www.duanemorris.com/site/techlaw10.html


Daily Compliance News

Daily Compliance News: January 31, 2024 – The $70,000 Watch Edition

Welcome to the Daily Compliance News. Each day, Tom Fox, the Voice of Compliance, brings you compliance-related stories to start your day. Sit back, enjoy a cup of morning coffee and listen to the Daily Compliance News. All from the Compliance Podcast Network. Each day, we consider four stories from the business world: compliance, ethics, risk management, leadership, or general interest for the compliance professional.

In today’s edition of Daily Compliance News:

• Germany to seize $2 billion worth of bitcoin. (NYT)

• Musk’s $55 billion pay package is voided.  (FT)

• An Ecuadorian official got a $70,000 watch as a bribe.  (Bloomberg)

• More lawyer trouble for fake ChatGPT citations.  (Reuters)

For more information on Ethico and a free White Paper on top compliance issues in 2024, click here.