Morgan v. V2X, Inc. Decision Sets Precedent on AI Disclosure in Discovery
by Justin Smith
One of the top concerns legal professionals have had regarding the use of generative AI is hallucinations—the tendency of LLMs to invent fake case law. However, a recent decision from the U.S. District Court for the District of Colorado suggests the bench is moving toward a much more sophisticated concern: data privacy and the underlying data ingestion that powers these models.
In Morgan v. V2X, Inc. (D. Colo. Mar. 30, 2026), a routine employment discrimination case escalated into a high-stakes battle over how confidential discovery materials are handled in the age of AI. The result is a modified protective order that serves as a potential blueprint for modern litigation. As far as we are aware, this is the first order of its kind.
The Core Dispute: Productivity vs. Privacy
The conflict began when the plaintiff, appearing pro se, sought to use AI tools to bridge the technological gap between an unrepresented litigant and a well-funded corporate defendant. The defendant, V2X, Inc., raised alarms about whether its confidential information—including trade secrets and personnel files—was being fed into mainstream AI platforms like ChatGPT or Gemini.
This matters because a model trained on privileged or confidential information may replicate some or all of that information in future outputs.
While the plaintiff argued that his choice of software was protected work product, the court disagreed. Magistrate Judge Maritza Dominguez Braswell ruled that while a party’s mental impressions are protected, the identity of the AI tool is not. If confidential data is being uploaded to an AI platform that may compromise its confidentiality, the opposing side has a legitimate right to know where that data is going.
Privacy and Confidential Materials in the AI Age
One of the most intriguing parts of the decision is the court’s discussion of the Fourth Amendment and the reasonable expectation of privacy. The court acknowledged that in 2026, almost all data passes through third-party systems like Gmail. “Does that mean that anyone with a Gmail account has forfeited all rights to confidentiality and privacy?” the court asked.
Contextualizing its analysis in Fourth Amendment search and seizure case law, the court determined that the answer was a simple “no.”
“The Fourth Amendment governs searches and seizures and offers a wholly different legal framework from the work product doctrine, but the principle reflected in those cases is informative: routing information through a third-party system does not forfeit all privacy,” the order said.
However, Judge Braswell noted that unlike passive search engines, AI chatbots are designed to foster interactions that encourage the disclosure of sensitive information.
“Unlike a general-purpose search engine, which passively returns results, many modern AI platforms are specifically designed and trained to engage,” Judge Braswell wrote. “They invite candid and significant disclosure of information, including sensitive information. They simulate empathy, foster trust, and interact in a way that feels genuine and intimate. Research confirms that people share personal and sensitive information with AI chatbots, often without appreciating what happens to that information once shared.”
For pro se litigants, the court noted, that kind of engagement “closely resembles the kind of confidential, strategy-laden iterative work product that Rule 26(b)(3) was designed to protect.”
However, while work-product protections may have shielded the plaintiff from disclosing his AI usage—and the court leaves its determination at “may”—the plaintiff ultimately failed to make his case.
“Even if it is possible that in some contexts disclosing an AI tool can reveal mental impressions or strategy,” the court determined, “Plaintiff has not carried his burden to demonstrate that here.”
The court concluded that while routing data through an AI system doesn't automatically waive all protections, it does create a risk that mainstream, low-cost tools will persistently collect and store data for their own purposes.
The Court’s AI-Specific Language
In its decision, the court moved beyond general warnings and established a precise set of contractual must-haves for any legal professional looking to integrate AI into their workflow. To ensure that sensitive data remains protected from model training and unauthorized exposure, the court’s modified protective order explicitly mandates:
No party or authorized recipient may input, upload, or submit CONFIDENTIAL Information into any modern artificial intelligence platform, including any generative, analytical, or large language model-based tool (“AI”), unless the AI provider is contractually prohibited from:
(1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to facilitating delivery of the service. Where disclosure to a third party is essential to service delivery, any such third party shall be bound by obligations no less protective than those required by this Order. In addition, the AI provider must contractually afford the party or authorized recipient the ability to remove or delete all CONFIDENTIAL information upon request. A party intending to use AI that it contends meets these requirements must retain written documentation of these contractual protections.
Implications for Pro Se Litigants
A central part of the case was the plaintiff’s status as a pro se litigant seeking to leverage AI to manage a complex case. In the decision, the court directly addressed the reality that secure AI systems often come with a higher price tag.
“The Court recognizes that practically speaking, and in light of the current state of AI, this provision will (at least for now) bar the parties from using most, if not all, mainstream low-to-no cost AI to process Confidential Information. This type of restriction disadvantages pro se litigants,” the decision read.
It went on to discuss the fact that enterprise-tier AI accounts are often only available through organization-wide procurement and are too expensive for pro se litigants to access. However, it also acknowledged that the risk of uploading confidential data to mainstream tools was real and needed to be addressed.
While the court’s intention with this order was to prevent the uploading of confidential information to openly available AI tools, it also made clear that it was not restricting the pro se litigant from using those tools altogether.
“To be clear, the Court does not intend to leave pro se Plaintiff without the benefits of AI,” the order read. “Modern AI tools may be used in many ways that do not involve uploading Confidential Information, and nothing in this particular Order restricts those uses.”
The New Rules of Engagement
To mitigate these risks, the court issued an amended protective order with three strict requirements for using AI with confidential discovery.
Training Bans. The AI provider must be contractually prohibited from using inputs to train or improve its models.
Strict Confidentiality. The provider cannot disclose inputs to third parties unless essential for service delivery.
Right to Delete. The user must have the contractual ability to remove or delete all confidential information upon request.
The Bottom Line for Practitioners
The Morgan decision effectively bars the use of standard or free versions of mainstream AI for processing sensitive case data. As firms (and pro se litigants) look to integrate AI, the burden is now on the attorney to verify the terms of service.
In the eyes of the court, a secure environment isn't just about preventing hacks—it’s about ensuring your client’s data doesn't become the training set for tomorrow’s algorithm.
Justin Smith is a Senior Content Marketing Manager at Everlaw. He focuses on the ways AI is transforming the practice of law, the future of ediscovery, and how legal teams are adapting to a rapidly changing industry.