KY Lawyer Slammed After AI-Generated Briefs Mislead Appeals Court

Mark Hoskins

April 8, 2026

3 Min Read

A Kentucky attorney has come under scrutiny after admitting he used artificial intelligence to draft appellate briefs that contained inaccurate legal citations.

A recent opinion from the U.S. Court of Appeals for the Sixth Circuit detailed how attorney Steven N. Howe submitted briefs in a criminal appeal that misquoted and misrepresented case law after he relied on AI-generated content without properly verifying it.

According to the court’s April 3 opinion in United States v. Farris, Howe acknowledged that he used an artificial intelligence tool to prepare both his principal and reply briefs and then filed them without confirming the accuracy of the cited authorities. The court determined that several quotations attributed to real cases “do not appear in their cited sources,” and in some instances misrepresented the holdings of prior decisions.

The panel stated that Howe’s actions violated core professional obligations, stressing that “attorneys have an ethical obligation to verify the citations and propositions they submit to courts,” regardless of whether they use AI tools. As a result, the court ruled that Howe would not receive compensation under the Criminal Justice Act, removed him from the case, and referred the matter for possible disciplinary action.

The ruling highlights growing concerns about the reliability of generative AI in legal practice, even as some systems demonstrate advanced capabilities. In 2023, researchers found that GPT-4 passed the Uniform Bar Exam, scoring in roughly the 90th percentile and outperforming many human test takers, according to a Stanford-affiliated analysis.

Still, the Sixth Circuit warned that such advancements do not replace attorney oversight, noting that “new technologies… are no substitute for tried-and-true safeguards managed by practicing attorneys.”

This case adds to a series of incidents involving lawyers misusing AI in court filings. In 2025, two attorneys representing MyPillow CEO Mike Lindell faced $3,000 sanctions each after submitting a brief containing nearly 30 defective citations, including references to nonexistent cases, according to a prior report by The Dallas Express.

Courts have increasingly raised concerns about so-called “hallucinations,” where AI systems produce false yet plausible-sounding information. A separate DX investigation in 2025 revealed that Google’s AI Overview tool incorrectly stated that Diana Ross had been arrested for cocaine possession, a claim not supported by public records.

In the Kentucky case, the Sixth Circuit emphasized that even though Howe cited real cases rather than entirely fabricated ones, the inaccurate quotations and misleading arguments still amounted to serious misconduct.

“Attorneys should not utilize technology without knowing the ways in which it can be misused or contribute to inaccuracies,” the court wrote, adding that relying on staff to review AI-generated work does not meet professional standards.

The decision also pointed to broader ethical concerns, including the need for lawyers to protect client information, stay competent with evolving technology, and remain transparent when using AI tools.

While Howe told the court he had not faced prior discipline in his 40-year career, the judges concluded that his failure to review the AI-generated material led to “inexcusable transgressions” that delayed proceedings and consumed judicial resources.

The Dallas Express contacted Howe for comment before publication but did not receive a response.
