The U.S. District Court for the Southern District of New York recently ruled that documents generated by artificial intelligence are not protected by attorney-client privilege, in a case involving former CEO Bradley Heppner. The ruling, issued in February by Judge Jed Rakoff, held that Heppner’s use of a chatbot to discuss defense strategy during a $300 million securities fraud investigation did not constitute privileged communication with an attorney.
This decision is significant as it addresses the growing use of AI tools in legal practice and clarifies their limitations regarding confidentiality. As more lawyers and prosecutors turn to AI for tasks such as document review and data analysis, questions about privacy and accuracy have become increasingly important.
The court’s ruling emphasized that “generative artificial intelligence presents a new frontier,” but also noted that “AI’s novelty does not mean that its use is not subject to longstanding legal principles.” Judge Rakoff wrote that using chatbots does not equate to consulting with a lawyer, highlighting the importance of understanding software terms of service when handling sensitive information.
Legal professionals across the country are adopting AI for a range of functions. In Montgomery County, Texas, the district attorney’s office uses AI to summarize handwritten documents and translate foreign-language materials. The Los Angeles County Public Defender’s office employs similar technology to process police reports more efficiently. Prosecutors are also using AI to review body camera footage in criminal cases, aiming to improve productivity and reduce delays in the justice system.
However, concerns remain about the reliability of AI-generated content. Recent incidents include federal judges fining legal teams for submitting motions containing fabricated case law produced by AI tools. A database tracking such errors has recorded nearly 700 instances since early 2025. These mistakes can undermine public trust in the legal system, especially when made by prosecutors who represent state authority.
Some judicial officials argue that errors in legal briefs are not new and stress the ongoing need for human oversight. As Judge Xavier Rodriguez remarked, “Lawyers have been hallucinating well before AI,” underscoring existing professional standards requiring attorneys to verify their work.
The debate continues over how heavily courtroom work should rely on AI. While the technology can streamline routine tasks, critical functions such as evaluating evidence and persuading juries still depend on human judgment. Some jurisdictions have already set boundaries; in 2024, for example, the King County Prosecuting Attorney’s Office in Seattle decided it would not accept police narratives drafted with AI assistance as evidence.
As courts and legal practitioners adapt to these changes, experts suggest developing formal policies governing AI use—addressing confidentiality risks and ensuring transparency—to maintain fairness within the justice system.



