As generative AI tools like ChatGPT and Claude become more common in litigation strategy and internal analysis, courts are beginning to confront a pressing question: Does using AI waive attorney-client privilege or work-product protection?
Two recent federal decisions suggest the answer depends heavily on how the AI tool is used, whether counsel directed its use, and what confidentiality protections are in place.
United States v. Heppner (SDNY): No Privilege for Public AI-Generated Materials
In February 2026, the U.S. District Court for the Southern District of New York addressed whether documents generated using a public generative AI platform were protected by attorney-client privilege or the work-product doctrine.
In United States v. Heppner, Judge Jed S. Rakoff held that materials generated by a defendant using a public AI tool (Anthropic’s Claude) and later shared with counsel were not protected.
The court’s reasoning focused on several key principles:
1. AI Is Not an Attorney
Communications with an AI platform are not communications with counsel. Because the AI tool is not a lawyer and does not provide legal advice, those communications do not qualify as privileged attorney-client communications.
2. Lack of Confidentiality
The AI platform’s terms permitted disclosure of user inputs and outputs to third parties. As a result, the court found there was no reasonable expectation of confidentiality — a core requirement for attorney-client privilege.
3. No Retroactive Privilege
Sharing AI-generated documents with counsel after they are created does not retroactively cloak them in privilege. If materials were unprivileged at creation, sending them to an attorney does not change their status.
4. Work-Product Doctrine Rejected
The court also declined to extend work-product protection because the materials were created independently by the defendant, without direction from counsel and not clearly in anticipation of litigation.
Under the specific facts in Heppner — public AI use, no confidentiality safeguards, and no counsel involvement — privilege protections failed.
A Different Approach: AI as a Litigation “Tool”
In contrast, a Michigan federal court decision has reportedly taken a more flexible view of AI-generated materials.
In that matter, the court reportedly treated AI as a tool, similar to research software or databases, rather than as a third party that automatically destroys confidentiality.
Under this reasoning, AI-generated materials may qualify for protection under the work-product doctrine when they are:
- Created in anticipation of litigation
- Reflective of strategic planning
- Developed in connection with legal preparation
This approach aligns with Federal Rule of Civil Procedure 26(b)(3), which protects materials prepared in anticipation of litigation — even if not drafted directly by attorneys.
Why These AI Privilege Decisions Diverge
The divergence between these decisions highlights several critical factors courts are evaluating when analyzing AI and attorney-client privilege.
Public vs. Enterprise AI Platforms
Consumer-grade AI tools often include terms that allow providers to retain, review, or use data for training purposes. Courts may view this as disclosure to a third party, undermining confidentiality.
Enterprise AI platforms with contractual confidentiality guarantees present a materially different analysis.
Counsel Direction
Courts appear more willing to consider work-product protection where AI use is directed by counsel as part of litigation strategy. Independent client use may not receive the same protection.
Anticipation of Litigation
Work-product doctrine requires that materials be prepared “in anticipation of litigation.” Courts are closely examining whether AI outputs truly reflect litigation preparation or simply exploratory use.
Does Using ChatGPT in Litigation Waive Privilege?
The emerging case law suggests that using generative AI does not automatically waive privilege — but careless use of public AI tools can create significant risk.
Key risk factors include:
- Entering sensitive facts into public AI systems
- Failing to review platform confidentiality terms
- Using AI without counsel direction
- Assuming privilege can be created after the fact
Emerging Themes in AI Attorney-Client Privilege Law
Courts are still developing frameworks for analyzing generative AI under traditional privilege doctrines. Early decisions suggest:
- AI itself is not inherently disqualifying
- Confidentiality remains the cornerstone of privilege
- The presence of a third-party AI provider matters
- Documentation of legal purpose and counsel oversight is critical
Different jurisdictions may adopt different analytical approaches as AI use becomes more common in litigation and compliance planning.
Practical Guidance for Lawyers and Organizations
Given this evolving legal landscape, organizations and counsel should consider:
- Using enterprise AI platforms with clear confidentiality protections
- Documenting counsel direction when AI is used for litigation preparation
- Avoiding consumer AI tools for sensitive analyses
- Conducting internal AI governance reviews before incorporating AI into legal workflows
The Bottom Line
The question is not whether AI destroys attorney-client privilege — it is whether the way AI is used preserves the core elements of confidentiality and litigation preparation.
As courts continue to define how generative AI interacts with attorney-client privilege and the work-product doctrine, practitioners should approach AI-assisted legal strategy deliberately and defensively.
Early decisions make one point clear: structure, documentation, and platform selection matter.