The Delaware Court of Chancery has handed compliance leaders and boards a timely lesson: generative AI is not a substitute for judgment, legal discipline, or governance. When leaders use AI to validate a predetermined objective, the technology does not reduce risk. It can become powerful evidence of intent, bad faith, and control failure.
A Cautionary Tale for Corporate Leaders
The recent Delaware Court of Chancery decision in Fortis Advisors, LLC v. Krafton, Inc. should be read by every Chief Compliance Officer (CCO), board member, general counsel, and corporate deal professional. The decision recounts a dispute in which a buyer, apparently unhappy with a substantial earnout obligation, turned to ChatGPT for advice on how to escape the economic consequences of the deal. According to the court’s account, the buyer then executed an AI-generated strategy designed to renegotiate the arrangement or take control from the seller management team. The court ultimately found that the buyer had wrongfully terminated key employees and improperly seized operational control, and it ordered the reinstatement of the seller’s CEO and an extension of the earnout window to restore a genuine opportunity to achieve the payout.
The Real Compliance Lesson
For compliance professionals, the most important lesson is not that AI is dangerous. It is that leadership can use AI in dangerous ways when governance is absent. That distinction matters.
Too many organizations still approach AI governance as a technology problem. They focus on model performance, cybersecurity, or procurement review. Those are important issues, but this case reminds us that AI governance begins with human purpose. What question was asked? What objective was embedded in the prompt? What controls existed before action was taken? Who challenged the proposed course of conduct? Who documented the legal and ethical analysis? Those are compliance questions. Those are board questions.
Viewing the Case Through the DOJ ECCP Lens
This is also where the DOJ’s Evaluation of Corporate Compliance Programs (ECCP) provides a useful lens. The ECCP asks whether a company’s program is well designed, adequately resourced, empowered to function effectively, and actually works in practice. Put that framework over this fact pattern, and the governance gaps become painfully clear. Was there a control around the use of generative AI in strategic or legal decision-making? Was there escalation to legal, compliance, or the board when a significant earnout exposure was at stake? Was there any meaningful challenge function, or did leadership use AI as a convenient amplifier for a business objective it had already chosen?
The case suggests the latter. That should concern every board. Generative AI can be useful in brainstorming, summarizing, and scenario testing. But when executives use it to reinforce a desired outcome, particularly one touching contractual obligations, employment decisions, or post-closing governance rights, the tool can become a mechanism for rationalizing misconduct.
When AI Chats Become Discoverable Evidence
Worse, it creates a record. The Court notes that the AI chats were not privileged, were discoverable, and vividly underscored the buyer’s efforts to avoid its legal obligations. That point alone should stop corporate leaders in their tracks.
Many executives still treat AI chats as an informal thinking space, almost like talking to themselves. That is a serious mistake. Prompt histories, outputs, internal forwarding, and downstream use can all become evidence. If employees use public or enterprise AI tools to explore termination strategies, dispute positions, or ways around contractual commitments, they may be creating exactly the documentary record that plaintiffs, regulators, and judges will later find most compelling. In other words, the issue is not simply data leakage. It is discoverability, privilege erosion, and self-generated evidence of intent.
That is why CCOs and boards need to move beyond generic AI-use policies and build governance around high-risk use cases. The question should not be, “Do we allow ChatGPT?” The question should be, “Under what circumstances can generative AI be used in decisions involving legal rights, employee discipline, regulatory exposure, strategic transactions, or board-level matters?” If the answer is unclear, the company has work to do.
The M&A and Earnout Governance Lesson
The dealmaking lesson here is equally important. Earnouts are already fertile ground for post-closing disputes because they sit at the intersection of incentives, control, and timing. Buyers often want flexibility. Sellers want protection from interference. This case illustrates what can happen when a buyer attempts to manipulate operations in a way that affects the achievement of the earnout. The court not only found wrongful interference but also equitably extended the earnout period by 258 days and preserved a further contractual right to extend, thereby materially altering the deal’s economic landscape.
That is a governance lesson hiding inside an M&A lesson. Once a company acquires a business with earnout rights and operational covenants, post-closing conduct is no longer just integration management. It is compliance management. Interference with operational control, pretextual terminations, or actions designed to suppress performance metrics can lead to litigation, destroy value, and trigger judicial remedies that boards did not expect. CCOs should therefore insist that M&A integration playbooks include compliance review of earnout governance, decision rights, escalation protocols, and documentation standards.
Five Lessons for Boards and CCOs
What should boards and compliance officers do now? Here are five lessons.
- Govern the objective before you govern the tool. AI is only as sound as the purpose for which it is deployed. If leadership starts with a bad objective, AI can scale the problem. Boards should require management to define prohibited uses of AI in areas such as contract avoidance, pretextual employee actions, retaliation, and legal strategy without oversight by counsel.
- Treat high-risk AI prompts and outputs as governed business records. If a prompt relates to litigation, terminations, regulatory response, deal rights, or board matters, it should fall within clear policies on retention, review, and escalation. Employees need to understand that AI interactions may be discoverable and may not be privileged.
- Embed legal and compliance into consequential AI use cases. The ECCP emphasizes whether compliance has stature, access, and authority. That principle applies directly here. Strategic uses of AI that touch contractual rights, employment decisions, or fiduciary issues should not proceed without legal and compliance review.
- Build AI governance into M&A and post-closing integration. Earnout structures, operational covenants, and seller management rights are precisely the areas where incentives can distort behavior. Boards should ask whether integration teams have controls preventing actions that could be viewed as interference, manipulation, or bad-faith conduct.
- Document challenge, not just action. Good governance is not proved by a single final decision; it is proved by the process surrounding it. Was there dissent? Was there an analysis? Was there an escalation memo? Was there a documented rationale grounded in law, contract, and fiduciary duty? If not, the company may be left with a record that tells the wrong story.
Governance Must Come Before AI
In the end, this case is not really about a video game company. It is about a governance failure dressed in modern technology. Leaders appear to have used AI not to improve judgment, but to reinforce a course of conduct they already wanted to pursue. That is the compliance lesson. AI does not remove the need for fiduciary discipline, legal oversight, or ethical restraint. It makes those requirements more urgent.
For boards and CCOs, the mandate is clear. Governance must come first. Because when AI is used without guardrails, it does not merely create risk; it documents it. It can become the evidence.