Judge slams AI use in case, refers lawyers to disciplinary committee for fake citations

A HIGH Court judge has referred two attorneys to the Disciplinary Committee of the Law Association of Trinidad and Tobago (LATT) after uncovering multiple fictitious legal authorities cited in court submissions in a lawsuit over the resignation of a lab assistant in 2023.
“Irresponsible use of internet sources or generative AI tools undermines not only individual cases but also the credibility of the legal system as a whole. If such conduct is not condemned and appropriately addressed, it could lead to a dangerous erosion of the rule of law,” Justice Westmin James said in his ruling on the breach-of-contract lawsuit. It is possibly the first local case in which the use of AI has resulted in sanctions.
In his ruling, James described the incident as a serious breach of professional ethics.
“The court recognises that errors can and do occur, even at the judicial level, as evidenced by the appellate process.
“However, the submission of fictitious or unverifiable legal authorities, whether sourced from generative AI tools or carelessly obtained from the internet, constitutes a serious breach of professional responsibility. Attorneys bear an ethical obligation to ensure that all materials submitted to the court are authentic, properly sourced, and reliable.
“The court must be able to place trust in the representations made by counsel as officers of the court.”
The issue arose during proceedings in an employment dispute in which attorneys for Nexgen Pathology Services Ltd, the claimant in the case, cited several cases to support their argument that an employer who funds an employee’s training assumes certain contractual obligations.
However, the judge said the cases cited “lacked proper citations and bore characteristics inconsistent with valid Industrial Court decisions in Trinidad and Tobago,” and he was unable to verify their existence.
“These purported cases formed the sole legal foundation for the claimant’s central argument. Had they been legitimate, they would have constituted persuasive authority,” James wrote in his ruling.
Upon investigation, James said, the court discovered that the cited cases bore formatting and structure inconsistent with Industrial Court rulings.
Notably, he said they were styled as disputes between individuals and companies, although only trade unions appear as parties before the Industrial Court in trade disputes.
He also noted that the cases were not included in the claimant’s bundle of authorities, although they were heavily relied on in submissions.
According to the judgment, the judge requested copies of the authorities. However, the claimant’s counsel responded that the cases had been retrieved from an online source that was “no longer accessible,” and gave the judge a screenshot of an error message.
That explanation was deemed “wholly unsatisfactory” by the judge, who said, “Legitimate court judgments do not simply disappear without a trace from all recognised legal databases.
“Moreover, the absence of these cases from any recognised legal database raises serious questions as to their authenticity.”
In a follow-up response, the attorney attributed the inclusion of the false cases to a junior assistant’s inexperience and acknowledged that the sources were drawn from unverified searches on Google and Google Scholar.
The attorney denied using any generative AI tools and accepted full responsibility for the oversight, citing heavy caseloads and inadequate supervision as the reasons for the inclusion of inaccurate and misattributed citations.
However, James said that while digital tools, including AI and internet-based platforms, are increasingly common and valuable in legal research, and while he, too, makes use of such tools where appropriate, “their use must be accompanied by discernment and subjected to rigorous verification.
“This is because AI-generated content is susceptible to producing what are commonly referred to as ‘hallucinations’ – fabricated, yet plausible-sounding outputs that may result from gaps or limitations in the model’s underlying data.
“Legal practitioners must not rely on such tools uncritically. Any information obtained through these means must be independently verified before being presented to the court.
“The court emphasises that citing non-existent cases, even inadvertently, constitutes a serious abuse of process and professionalism.
“It risks misleading the court, prejudicing the opposing party, and eroding public confidence in the administration of justice.
“Counsel are reminded that the duty of candour to the court requires that they verify the authenticity of every case cited.
“If any material has been generated with the assistance of AI or other non-traditional sources, full disclosure to the court is both appropriate and expected.”
He stressed, “It is regrettable that counsel failed to observe relevant guidance, including practice directions issued by the Caribbean Court of Justice and lessons from international jurisdictions, where similar conduct has resulted in disciplinary action against legal practitioners.”
In referring the matter to the Disciplinary Committee for investigation under section 37(2) of the Legal Profession Act, James noted, “This case illustrates that expertise in selecting and utilising research technologies, including those powered by AI, is now essential in modern legal practice.
“The court reaffirms that the integrity of the justice system relies on diligence, honesty, and professional accountability.
“Rest assured: the intelligence of this court is not artificial.”
In his ruling on Nexgen’s lawsuit against its former employee Darceuil Duncan, the judge found that the lab assistant breached her employment contract when she resigned shortly after completing a company-funded programme.
Duncan was ordered to pay Nexgen $38,740.17 in damages, plus interest, and the company’s legal costs, assessed at $11,595.
James ruled that Duncan was obligated to remain with the company for 36 months post-training in keeping with the company’s employee investment programme policy, which formed part of her terms of employment.
James noted that although she was not a permanent staff member, as typically required by the policy, this criterion had been waived, and the waiver did not invalidate the policy’s applicability.
“In such circumstances, a reasonable term of continued employment would be implied,” James said. “The defendant’s resignation shortly after completing the training was not within a reasonable period.”
Duncan claimed she resigned because of an incident with her supervisor. In his analysis of the evidence, James noted that Duncan admitted she was not threatened or assaulted, nor could she recall specific threatening language.
“Raising one’s voice, expressing criticism, or gesturing emphatically in a workplace, while perhaps unpleasant, does not rise to the level of a fundamental breach of the contract of employment.”
He noted the company’s CEO had also reached out to Duncan to investigate her concerns, but she resigned the same day without allowing the company time to address the issue. This, he said, “undermines her claim that resignation was her only option.”
He also dismissed Duncan’s counterclaim for damages for constructive dismissal and unpaid salary.
Surendra Ramkissoon and Larry Mooteeram represented Nexgen while Nneka Warner represented Duncan.