Fifth Circuit Proposes Expansive AI Disclosure Rule

After a pair of New York attorneys were sanctioned in June by a federal district court for submitting wholly fictional case citations they obtained from ChatGPT, lawyers and legal scholars around the country have rapidly become more aware of the dangers associated with irresponsible use of generative artificial intelligence. Now, one federal appellate court appears to be taking this issue a step further.

On November 29, 2023, the 5th U.S. Circuit Court of Appeals proposed a new submission requirement concerning the use of artificial intelligence. The rule would require attorneys practicing before the court to include a certification in every pleading, similar to the existing certifications of compliance with formatting and other pleading standards, addressing the use of generative AI in the preparation of the pleading. Rather than foreclosing the use of AI entirely, however, the proposed rule would allow attorneys to state whether AI had been used in any capacity and, if so, to certify that a human being had reviewed the document for accuracy prior to its submission. The certification reads as follows:

This document complies with the AI usage reporting requirement of 5TH CIR. R. 32.3 because:

□             no generative artificial intelligence program was used in the drafting of this document, or

□             a generative artificial intelligence program was used in the drafting of this document and all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.

Proponents of the rule argue that it will act as an important reminder to help attorneys avoid relying too heavily on this emerging technology. In particular, requiring attorneys to choose between the two options could prompt them to more closely scrutinize pleadings that may have been prepared in part by support staff. Critics, however, point out that an attorney's signature on a pleading (required by Rule 32(d) of the Federal Rules of Appellate Procedure) already serves as an implicit certification that the attorney stands behind the pleading's validity, much like the explicit certification in the Federal Rules of Civil Procedure. Rule 11 of the FRCP provides that an attorney's signature serves as an affirmative declaration that "the claims, defenses, and other legal contentions" in the pleading "are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law." Because established rules and conventions already prohibit attorneys from submitting pleadings they have not reviewed, some view the new requirement as duplicative.

The legal profession has long depended on support staff for research and drafting assistance, but such assistance ordinarily carries with it an expectation of good faith. Unlike support staff, who might merely make mistakes in conducting research or checking citations, a generative artificial intelligence needs no malicious intent to invent citations or entire cases. As the New York attorneys learned, the possibility of "AI hallucination" (where an AI attempts to answer a question by imagining what the answer could be rather than relying on actual sources) exposes the legal profession to serious sanctions when AI-generated content and research is not checked for accuracy.

This summary is not intended to contain legal advice or to be an exhaustive review. If you have any questions regarding this article, please contact Chris Ferragamo at Jackson & Campbell, P.