A judge in California has sharply criticized two law firms for using undisclosed AI-generated research in a supplemental brief. In a ruling last week, Judge Michael Werner fined the firms $31,000 over the document's numerous false, inaccurate, and misleading legal citations and quotations.
In his ruling, Judge Werner wrote that he read the brief, was persuaded by the cases it cited, and looked them up, only to discover that they did not exist. He called the situation alarming, noting that it nearly led him to cite the fabricated authorities in a judicial order. He emphasized, "No reasonably competent lawyer should outsource research and writing to artificial intelligence."
According to court filings, a plaintiff's attorney in a civil lawsuit against State Farm used AI to generate an outline for the supplemental brief. When that outline was forwarded to a second firm, K&L Gates, the AI-generated research it contained was fabricated. Judge Werner noted that lawyers or staff at both firms evidently failed to verify or review the material before filing the documents.
On reviewing the filings, Judge Werner found that at least two of the cited cases did not exist. When he asked K&L Gates for clarification, the resubmitted brief contained even more fictitious citations and quotations. The judge then ordered the lawyers to explain, which ultimately led them to admit having used AI tools, including Google Gemini and the AI features of Westlaw Precision with CoCounsel.
This is not the first incident of its kind; lawyers have been caught citing fictitious cases in court before. Former Trump lawyer Michael Cohen, for example, once passed along fake case citations generated by Google's chatbot, which he had mistaken for a search engine. In another case, lawyers suing the Colombian airline Avianca filed a brief containing numerous fictitious cases generated by ChatGPT.
In his ruling, Judge Werner reiterated that the initial failure to disclose the use of AI in drafting the brief was "absolutely wrong." Sending such material to other lawyers without flagging its questionable AI origins, he added, put those professionals at risk.
Key points:
🔍 The judge fined the two law firms $31,000 for submitting false AI-generated materials.
⚖️ The judge found multiple fictitious cases in the filings, along with fabricated citations and quotations.
🤖 The lawyers admitted using AI tools for legal research without verifying or reviewing the output.