Alert
March 3, 2026

AI Chatbots, Privilege, and Pitfalls: Lessons for Keeping Generative AI Exchanges Out of the Hands of Legal Adversaries

Generative artificial intelligence presents a new frontier in the ongoing dialogue between technology and the law. Time will tell whether, as in the case of other technological advances, generative artificial intelligence will fulfill its promise to revolutionize the way we process information. But AI’s novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege and the work product doctrine.
– United States v. Heppner, 25 Cr. 503, slip op. at 2–3 (S.D.N.Y. Feb. 17, 2026) (Rakoff, D.J.)

As the use of generative AI tools surges in the legal profession, a federal trial court has issued an opinion rejecting a criminal defendant’s claims of privilege over his pre-indictment exchanges with a widely used chatbot, paving the way for prosecutors to use that evidence against him in a fraud and embezzlement case.

The opinion, issued by U.S. District Judge Jed S. Rakoff of the Southern District of New York (“SDNY”), carries implications for the use of generative AI tools in sensitive legal settings and will require attorneys and their clients to exercise heightened care to avoid having their legal strategy and other confidential exchanges used against them in litigation.

Background

On October 28, 2025, Bradley Heppner, the former chair of a publicly traded company, was charged with a variety of criminal counts, including securities fraud, wire fraud, conspiracy, and falsifying corporate records. In connection with his arrest, FBI agents executed a search warrant and seized numerous documents and electronic devices, among them records of the defendant’s communications with a widely used generative AI platform (“the AI Tool”) that took place after he became aware that he was the target of a criminal investigation. In these chats with the publicly available AI Tool, Heppner discussed his intended legal strategy in advance of his indictment.

On February 6, 2026, the government sought a ruling from the court that the documents that the defendant generated using the AI Tool are not protected by attorney-client privilege. Although the evidence showed that counsel had not directed Heppner’s AI Tool interactions, the defendant nevertheless opposed the government’s motion and attempted to maintain attorney-client privilege and work product protection over the communications with the AI Tool based upon three core arguments:

(1) The substance of his prompts consisted of information he learned from legal counsel;
(2) The documents were created for the purpose of coordinating with counsel to obtain legal advice; and
(3) The contents of the chats were later shared with counsel.

During a pre-trial conference on February 10, 2026, Judge Rakoff ruled from the bench that the defendant’s communications with the AI Tool were protected by neither the attorney-client privilege nor the work product doctrine. The following week, the judge issued a written memorandum outlining in detail his reasons for ruling in the government’s favor.

Attorney-Client Privilege

As Judge Rakoff noted, the attorney-client privilege is construed narrowly and protects from disclosure only “communications (1) between a client and his or her attorney (2) that are intended to be, and in fact were, kept confidential (3) for the purpose of obtaining or providing legal advice.” In rejecting the defendant’s argument that the communications were protected as privileged, the court found that Heppner’s communications with the AI Tool did not meet at least two, and likely all three, of these elements.

  1. AI is not an attorney. Because the AI Tool is not an attorney and “the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege,” Judge Rakoff found that Heppner’s privilege claim could be easily dispensed with. The court noted that legal privileges require “a trusting human relationship,” such as one with a licensed professional who is subject to professional discipline and owes fiduciary obligations to clients. While the AI Tool may have a human name and be programmed to converse with humans organically, it is not a licensed attorney, and the requisite attorney-client relationship was therefore never established.
  2. There is no reasonable expectation of privacy in AI communications. Judge Rakoff noted that the AI platform’s privacy policy should have put Heppner on notice that his communications and associated personal information may be used for secondary purposes, including training the AI Tool and making disclosures to third parties and governmental authorities. As a result, Heppner could not reasonably expect confidentiality – a bedrock of privileged attorney-client communications – in his communications with the AI Tool. 
  3. Heppner did not communicate with the AI Tool to obtain legal advice. While Heppner’s counsel asserted that he had communicated with the AI Tool for the “express purpose of talking to counsel,” the communications were not conducted at counsel’s direction. Although Heppner may have planned to subsequently share the AI Tool’s outputs with his attorney, Judge Rakoff noted that “non-privileged communications are not somehow alchemically changed into privileged ones upon being shared with counsel.”

Work Product Doctrine

Judge Rakoff also briefly addressed the work product doctrine, a related concept that protects from disclosure materials that memorialize attorneys’ mental processes and that are created in anticipation of litigation or trial. In some circumstances, the work product doctrine may shield from disclosure such materials created by non-lawyers, provided that individual is acting at the direction of counsel and in anticipation of litigation.

In this case, Judge Rakoff found that Heppner’s communications with the AI Tool did not warrant work product protection because they were not prepared “by or at the behest of counsel” but instead were prepared by Heppner on his own in coordination with the AI Tool. The court concluded that, because the work product doctrine is designed to protect attorneys’ development of legal theories and strategy, extending it to a user’s self-initiated coordination with an AI model, absent any attorney instruction, would defeat the doctrine’s purpose.

Key Takeaways

While Judge Rakoff’s opinion is a technology-neutral application of traditional legal principles to a specific set of facts, the Heppner opinion raises the following considerations for attorneys and clients when using generative AI in sensitive legal settings:

  • Public generative AI platforms are third parties — and should be treated accordingly. Because major AI operators routinely collect user inputs and outputs, use that data to train their models, and reserve the right to disclose data to third parties and governmental authorities, sharing information with AI chatbots may be legally indistinguishable from sharing information with any other third party. Disclosing attorney-client communications or privileged work product to a public AI platform may constitute a waiver of applicable legal privilege in connection with the underlying material, similar to other disclosures to unrelated third parties. 
  • Privilege does not attach simply because a lawyer is eventually involved. The court made clear that downstream sharing of AI-generated content with counsel does not retroactively cloak that content in privilege. Clients who independently consult AI platforms, even to prepare for privileged conversations with their attorneys, should assume their queries could be discoverable.
  • AI platforms' privacy policies carry legal implications. Whether or not consumers read the fine print, the privacy policies of AI operators often explicitly put users on notice that consumer data may be disclosed under certain circumstances, including to governmental regulatory authorities and in connection with training and improving models. Individuals should carefully review the terms of service and privacy policies of any AI platform before using a chatbot in connection with sensitive legal, regulatory, or investigative matters.

Recommendations

In light of the implications of this court ruling, businesses (and their counsel) should consider the following:

  • Implement AI governance. Organizations should implement an AI acceptable use policy governing the use of generative AI by employees, in-house counsel, and third-party vendors. The policy should identify approved tools, set parameters for what information may be entered, set out restrictions for third-party use of AI tools, and specify when AI use in connection with sensitive matters must be expressly approved. Organizations should also consider other components of AI governance, such as training, monitoring, and enforcement mechanisms, to enable compliance with the policy and operationalize responsible AI use.
  • Require attorney direction and documentation. Where generative AI is to be used in connection with legal strategy or privileged work, the use should be initiated and directed by counsel in anticipation of litigation, with such direction documented in order to preserve the strongest possible argument for privilege protection. Clients, employees, and vendors should not expect legal privilege to attach to independently initiated AI-assisted legal analysis.
  • Use enterprise tools and impose data minimization requirements. Organizational data privacy and confidentiality policies should prohibit entering privileged communications, case strategy, or other sensitive legal information into public-facing AI platforms absent clear attorney direction and a negotiated enterprise license. Where AI tools are used, preference should be given to enterprise or private deployment models with appropriate data protection and confidentiality agreements.
  • Conduct vendor due diligence. When outside vendors use AI tools to assist with privileged work, engagement letters and vendor agreements should address: (a) which AI tools may be used; (b) what data protection commitments are in place; (c) whether the vendor's AI platform retains user inputs; and (d) the AI platform's terms of service and confidentiality obligations.
  • Train lawyers and staff. Lawyers, paralegals, and other personnel handling sensitive legal matters should be trained on the implications of generative AI use in the context of attorney-client privilege, with particular emphasis on the risk that information entered into public AI platforms may be treated as disclosed to a third party and therefore not confidential.
  • Monitor developments. As Judge Rakoff noted, his ruling appears to answer a question of first impression nationwide. This area of law is developing rapidly. Organizations should monitor for further guidance from courts, bar associations, and regulators.

This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.