(Bloomberg Law) -- A former federal judge is urging a change to the federal rules of evidence that would allow courts to weigh whether evidence is the product of generative artificial intelligence.

Former US District Judge Paul Grimm of Maryland, now the director of the Bolch Judicial Institute at Duke University, on Friday made the pitch to the federal judiciary’s Advisory Committee on Evidence Rules, which includes federal judges, lawyers, and academics.

Grimm said that when materials produced by algorithms, like deepfakes or other products of generative AI, have appeared in some federal cases, judges have declined to scrutinize the source, citing concerns over trade secrets.

He said one approach would be to change the federal rules of evidence so that a party seeking to admit AI-generated material must show it was created by software or a program that can reliably produce the evidence in question.

Grimm said this option would allow judges to hold hearings on the evidence, particularly if parties disagree on whether it is the product of artificial intelligence tools.

Grimm said he and his co-presenter, University of Waterloo computer science professor Maura Grossman, are focusing on recommendations they could make now for lawyers and judges.

“This evidence is being used now, and the experience of what’s happening when parties are trying to get access to it is problematic,” Grimm said, “and it deals with information that can be enormous and powerful.”

The advisory committee didn’t immediately take action on the recommendation. Chair US District Judge Patrick Schiltz of Minneapolis said the panel will hold a symposium on the topic in fall 2024 in order to hear from more experts on the issue.

Artificial intelligence has already drawn scrutiny from courts. Judge P. Kevin Castel of the Southern District of New York in June fined two Manhattan lawyers $5,000 after they admitted to including fake case citations, generated by an AI tool, in a court filing.

Judge Brantley D. Starr of the US District Court for the Northern District of Texas is requiring lawyers appearing in his court to certify that, if they used AI to draft their filings, a human verified the accuracy of that information.

To contact the reporter on this story: Jacqueline Thomsen at jthomsen@bloombergindustry.com

To contact the editors responsible for this story: Seth Stern at sstern@bloomberglaw.com

©2023 Bloomberg L.P.