In a Denver courtroom, a legal filing meant to defend MyPillow CEO Mike Lindell turned into a cautionary tale about artificial intelligence gone wrong. On April 16, 2025, U.S. District Judge Nina Wang tore into a brief submitted by Lindell’s attorneys, uncovering nearly 30 errors: misquotes, mangled legal principles, and, most jaw-dropping, citations to cases that didn’t exist. The document, filed to counter a defamation lawsuit, was partly crafted by AI, and the fallout has the legal world buzzing.
The trouble started when attorneys Christopher Kachouroff and Jennifer DeMaster, representing Lindell, filed their opposition brief in the defamation suit brought by former Dominion Voting Systems executive Eric Coomer over Lindell’s election fraud claims. It was supposed to bolster Lindell’s defense. Instead, it became a masterclass in what not to do with AI. Judge Wang, in a scathing order, detailed the mess: nearly 30 citations were defective. Some quoted cases inaccurately. Others referenced legal principles that weren’t in the cited decisions at all. A few pointed to fictional cases, conjured out of thin air by the AI tool the lawyers leaned on.
Kachouroff and DeMaster didn’t deny the AI assist. They admitted to uploading a draft version of the brief by mistake, one that hadn’t been scrubbed of the AI’s hallucinations. Their final version, they claimed, was corrected. But Wang wasn’t buying the excuse. She ordered both lawyers to show cause why they shouldn’t face disciplinary action for violating court rules. Their response is due in May, and the stakes are high: professional reputations hang in the balance.
This isn’t AI’s first courtroom flop. Back in 2023, two Manhattan lawyers were fined $5,000 for submitting a ChatGPT-generated brief riddled with fake cases. The pattern’s clear: AI can churn out convincing prose, but it has a nasty habit of inventing facts. For Lindell’s team, the blunder adds another layer of chaos to an already contentious defamation fight, one rooted in his outspoken claims about the 2020 election.
The judge’s order laid bare the risks of leaning on tech without rigorous oversight. Legal briefs demand precision, and AI’s tendency to “fill in the blanks” with fiction doesn’t cut it. Wang was blunt: the errors weren’t just sloppy; they undermined the court’s trust. Now, Kachouroff and DeMaster face a reckoning, and the legal community’s left grappling with a thorny question: can AI ever be trusted in the high-stakes world of law?
As of April 27, 2025, the court awaits the attorneys’ response. The defamation case rolls on, but this AI misstep has stolen the spotlight.