There are court rulings that quietly shape doctrine, and others that detonate assumptions. The recent decision of Judge Jed Rakoff of the Southern District of New York in United States v. Heppner falls into the latter category. In a February 10, 2026, ruling, the Court made clear that neither the attorney-client privilege nor the work-product doctrine protected materials generated through a third-party generative AI platform. In plain English, what a defendant typed into a public AI system was discoverable.
For compliance professionals, this is not a narrow litigation footnote. It is a flashing red warning light. The era of casual AI experimentation inside corporations is over, and governance must now catch up with adoption. Today, we consider the Court's ruling and why it matters to the Chief Compliance Officer (CCO).
The Court’s Core Holding
The defendant in Heppner had used a third-party generative AI tool to draft and refine materials that were later shared with counsel. When prosecutors sought production, the defense argued that these materials were protected by privilege and work-product protections. The court disagreed.
The reasoning was straightforward and, frankly, predictable:
- The AI tool was not an attorney.
- The terms of service did not guarantee confidentiality and allowed retention or potential disclosure of inputs.
- The materials were not prepared at the direction of counsel for the purpose of obtaining legal advice.
- Simply sending AI-generated drafts to counsel after the fact did not, by itself, retroactively cloak them in privilege.
This is a fundamental point: privilege attaches to communications made in confidence for the purpose of seeking legal advice. When an employee enters sensitive facts into a third-party AI platform that disclaims confidentiality, that “confidence” is at best questionable. When those drafts are created independently of counsel’s direction, work-product arguments grow thin. The court did not create a new doctrine. It applied existing principles to new technology. That is precisely why this ruling is so important.
The Illusion of Confidentiality
Many business users treat AI platforms like a digital notebook. They assume that because the interaction occurs on a screen and feels private, it is private. That assumption is dangerous. Public and consumer AI platforms often reserve the right to store, analyze, or use inputs for service improvement. Even when vendors promise limited retention, those commitments may not meet the strict confidentiality standards necessary to preserve privilege. From a legal perspective, once you introduce a third party without adequate confidentiality protections, you risk waiving your rights.
The compliance lesson is blunt: generative AI is not your lawyer, and it is not your secure internal memo system. This is where governance intersects with culture. If employees are entering investigative summaries, draft responses to regulators, internal audit findings, or potential misconduct narratives into public AI tools, you are manufacturing discoverable evidence. That is not a hypothetical risk. That is now a litigated reality.
Why This Is a Board-Level Issue
The Department of Justice has made clear through the Evaluation of Corporate Compliance Programs (ECCP) that companies must identify and manage emerging risks. Artificial intelligence is no longer emerging. It is embedded in operations, marketing, finance, and legal workflows. The Heppner ruling converts AI usage from a technology convenience into a legal risk category. Boards of Directors should be asking:
- Do we have an inventory of AI tools used across the enterprise?
- Are employees permitted to input confidential, regulated, or legally sensitive information into third-party platforms?
- Have we reviewed the vendor’s terms of service regarding confidentiality, retention, and data ownership?
- Are legal and compliance functions involved in approving AI deployments?
If the answer to any of these questions is uncertain, there is a governance gap. AI governance is no longer solely about bias, explainability, or regulatory compliance. It is also about preserving privilege, managing litigation risk, and controlling evidence.
Privilege Cannot Be Recreated After the Fact
One of the most significant aspects of the ruling is the rejection of “retroactive privilege.” Sending AI-generated content to counsel after it is created does not transform it into protected communication. This matters for compliance investigations. Consider the following scenario:
An internal report of potential misconduct surfaces. An employee uses a public AI tool to summarize the facts and generate possible legal arguments before reaching out to in-house counsel. That summary now exists outside any protected legal channel. The vendor may retain it. It may be discoverable.
By the time counsel becomes involved, the privilege damage may already be done. The message for compliance teams is clear: legal engagement must precede, or at least direct, sensitive analysis, not follow it.
Work Product Is Not a Safety Net
Some may argue that AI-assisted drafting in anticipation of litigation should fall under the work-product doctrine. The court in Heppner was not persuaded. Work-product protection generally applies to materials prepared by or for an attorney in anticipation of litigation. When individuals independently generate content using AI tools without counsel’s direction, that protection is far from guaranteed. Compliance professionals should not assume that labeling a document “prepared in anticipation of litigation” will insulate AI-generated material. Courts will look at substance over form.
Practical Steps for Compliance Leaders
This ruling demands an operational response from every CCO. Here are some steps every compliance program should consider.
1. Treat Third-Party AI as Non-Confidential by Default
Unless you have a contractual, enterprise-level arrangement with robust confidentiality provisions and clear data controls, assume that information entered into a third-party AI platform is not protected. This default posture should be reflected in policy language, for example: "Information entered into any AI tool not approved by Legal is presumed non-confidential and must not include privileged, investigative, or regulated material."
2. Update Acceptable Use Policies
Your code of conduct and IT policies should explicitly address the use of generative AI. Prohibit the entry of:
- Privileged communications.
- Investigation details.
- Personally identifiable information.
- Trade secrets.
- Sensitive regulatory communications.
Policy must move from general warnings to specific examples, and where feasible into technical controls such as the illustrative prompt screen sketched below.
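Policy language is only half the control; some organizations also put a technical gate in front of third-party AI tools. The Python sketch below is a minimal, hypothetical illustration: the patterns, category labels, and screen_prompt function are placeholders of my own, and a real deployment would rely on dedicated DLP tooling with far broader coverage.

```python
import re

# Hypothetical, illustrative patterns only; a real control would use
# dedicated DLP tooling with far broader and more reliable coverage.
PROHIBITED_PATTERNS = {
    "possible SSN (PII)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "privilege marker": re.compile(r"attorney[- ]client|privileged", re.IGNORECASE),
    "investigation reference": re.compile(r"\binvestigation\b|\bwhistleblower\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the policy categories a prompt appears to violate."""
    return [label for label, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(prompt)]

# Screen the request before it ever reaches an external AI platform.
violations = screen_prompt("Summarize the whistleblower investigation notes.")
if violations:
    print("Blocked under AI acceptable-use policy:", ", ".join(violations))
else:
    print("Prompt cleared for the approved AI tool.")
```

A screen like this will never catch everything, which is exactly why it must sit alongside, not replace, the policy and training steps described here.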
3. Involve Legal in AI Governance
AI procurement should not be a purely IT function. Legal and compliance must review vendor terms, especially around:
- Data retention.
- Subprocessor use.
- Confidentiality obligations.
- Audit rights.
- Breach notification.
If you cannot articulate how your AI vendor protects inputs, you cannot defend privilege claims.
4. Implement Training That Reflects Real Risk
Annual compliance training should now include explicit guidance on AI usage. Employees should understand that entering confidential information into public AI tools can waive privilege and render it discoverable. Training should include practical scenarios. The objective is behavioral change, not abstract awareness.
5. Establish Secure AI Environments for Legal Work
If your organization intends to use AI in legal or investigative contexts, consider enterprise solutions that:
- Operate within your controlled environment.
- Restrict data sharing.
- Provide contractual confidentiality.
- Maintain clear audit logs.
Even then, legal oversight is essential. Secure does not automatically mean privileged. A sketch of what such a controlled channel might look like appears below.
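What operating "within your controlled environment" with "clear audit logs" can look like in practice is routing every AI interaction through an internal gateway. The Python sketch below is a hypothetical illustration, not any vendor's actual API: the InternalAIGateway class, the log format, and the placeholder model call are all assumptions made for this example.

```python
import datetime
import json
from pathlib import Path

class InternalAIGateway:
    """Hypothetical wrapper: all enterprise AI use flows through one
    controlled, logged channel instead of public consumer tools."""

    def __init__(self, audit_log: Path):
        self.audit_log = audit_log

    def complete(self, user: str, prompt: str) -> str:
        # Log before the model call so even failed requests leave a record.
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
        }
        with self.audit_log.open("a", encoding="utf-8") as log:
            log.write(json.dumps(entry) + "\n")
        # Placeholder for a model hosted inside the company's own
        # infrastructure; no data leaves the controlled environment.
        return "(response from internally hosted model)"

gateway = InternalAIGateway(Path("ai_audit.log"))
print(gateway.complete("jsmith", "Draft an outline of our vendor onboarding steps."))
```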
6. Align with Litigation Hold Procedures
AI interaction logs may constitute discoverable material. Ensure that your litigation hold processes account for AI-generated content. If your organization logs prompts and outputs, those logs may fall within the scope of preservation obligations. Ignoring this dimension creates spoliation risk.
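If your organization logs prompts and outputs in a form like the gateway sketch above, the hold process needs a way to sweep those records into the preservation set. Again a hypothetical sketch: the file layout, the preserve_ai_logs function, and the custodian matching are illustrative assumptions, not a description of any e-discovery product.

```python
import json
from pathlib import Path

def preserve_ai_logs(audit_log: Path, hold_dir: Path, custodians: set[str]) -> int:
    """Copy AI interaction records for named custodians into a
    hold directory; returns the number of records preserved."""
    hold_dir.mkdir(parents=True, exist_ok=True)
    hold_file = hold_dir / f"hold_{audit_log.stem}.jsonl"
    preserved = 0
    with audit_log.open(encoding="utf-8") as src, hold_file.open("a", encoding="utf-8") as dst:
        for line in src:
            entry = json.loads(line)
            # Custodian names would come from the litigation hold notice.
            if entry.get("user") in custodians:
                dst.write(line)
                preserved += 1
    return preserved

count = preserve_ai_logs(Path("ai_audit.log"), Path("legal_hold/matter_001"), {"jsmith"})
print(f"Preserved {count} AI interaction records under hold.")
```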
The Cultural Dimension
Technology adoption inside companies often outruns governance. Employees experiment. Business units optimize. Productivity improves. Compliance arrives later. That sequencing is no longer sustainable. The Heppner ruling should catalyze a shift from reactive to proactive governance. AI usage must be mapped, risk-ranked, and monitored, just as third-party intermediaries, high-risk markets, and financial controls are. If your risk assessment does not explicitly include generative AI, it is incomplete.
Connecting to the DOJ’s Expectations
The DOJ has repeatedly emphasized dynamic risk assessment. Artificial intelligence now clearly falls within the scope of corporate compliance evaluation. Prosecutors will not be sympathetic to arguments that “everyone was using it” or that policies were silent. They will ask:
- Did the company identify AI as a risk area?
- Did it implement controls?
- Did it train employees?
- Did it monitor usage?
- Did it respond to incidents?
The answers to those questions will influence charging decisions, resolutions, and penalty calculations.
A Final Word: Convenience Versus Control
Generative AI is transformative. It enhances drafting, analysis, and research. It can elevate compliance operations if deployed thoughtfully. However, convenience without control is exposure. The lesson of United States v. Heppner is not that AI should be avoided. It is that AI must be governed with the same rigor as any other high-impact enterprise tool.
Privilege is fragile. Once waived, it cannot be restored. In a world where a chat prompt can become an exhibit, compliance professionals must lead the charge in redefining responsible AI use. If you are a chief compliance officer, this is your moment. Update your policies. Engage your board. Coordinate with legal and IT. Embed AI governance into your compliance framework. Because the next time an AI conversation surfaces in discovery, you do not want to explain why your program treated it like a harmless experiment.
