
8.05.25

Artificial Intelligence: a stark warning of its limitations as a research tool in legal proceedings

We first wrote about the risk of using AI for legal research in our November 2023 blog here.  Since then, its use by legal professionals and litigants-in-person has increased significantly.  It comes as no surprise that some of the risks we highlighted are playing out in practice.

The suspected use of AI led to five ‘fake’ cases being cited in proceedings before the High Court. The solicitors and the barrister responsible for submitting the citations in legal pleadings have received financial penalties and a direction from a High Court Judge that their conduct be reported to their respective regulators.

In judicial review proceedings in R (Ayinde) v The London Borough of Haringey [2025] EWHC 1040 (Admin), a junior barrister acting for the claimant lodged written submissions which contained five fake cases. Prior to the hearing, the submissions had been shared with the defendant local authority’s solicitors, who wrote to the claimant’s solicitors advising that they could not locate any of the cases and requesting that copies be supplied. A month later the claimant’s solicitors replied saying ‘there could be some concessions from our side in relation to any erroneous citation in the grounds, which are easily explained and can be corrected on the record if it were immediately necessary to do so’. The letter did not concede that the citations were not real cases and went on to describe them as ‘cosmetic errors’. Mr Justice Ritchie described it as a ‘remarkable communication’.

At the hearing, the claimant’s barrister claimed that the error had occurred when she dragged the cases from her own digital list of relevant cases into the document, once again referring to the fake cases as ‘minor citation errors’. The Judge rejected her explanation in an excoriating judgment, stating that he did not accept the factual basis for the errors and noting that ‘if she had dropped it into an important court pleading, for which she bears professional responsibility because she puts her name on it. She should not have been making the submission to a High Court Judge that this case actually ever existed, because it does not exist’.

When dealing with the defendant’s application for the wasted costs caused by the claimant’s legal team, the defendant’s barrister submitted that the more likely explanation was that the claimant’s barrister had used AI. Because the claimant’s barrister had not given sworn evidence, the Judge was unable to make a factual finding on whether AI had been used.

In determining the application for wasted costs (and costs more generally), the Judge concluded that both the barrister and the solicitors’ firm had behaved improperly, unreasonably and negligently. He gave a clear warning that responsibility for checking legal submissions does not lie with barristers alone: ‘I should say it is the responsibility of the legal team, including the solicitors, to see that the statement of facts and grounds are correct. They [the solicitors] should have been shocked when they were told that the citations did not exist’. He added that both should self-report to the Bar Council and the Solicitors Regulation Authority respectively, and the defendant was directed to send a copy of the judgment to the regulators.

The claimant’s barrister and solicitors were each made subject to a £2,000 wasted costs order. In addition, the costs recoverable for preparation for and attendance at the hearing were reduced by £1,500 for the barrister and by £5,000 for the solicitors (the latter representing 50% of their claimed fee).

The use of AI in the UK justice system has been examined in a report by the cross-party law reform charity JUSTICE, published in January 2025. The report, titled AI in our Justice System, sets out a framework for achieving trustworthy use of AI in the justice system, with two main requirements. The first is that AI development should have a clear goal of what it is attempting to improve, namely access to justice, fair and lawful decision-making, and transparency.

The second requirement is that the risks accompanying AI innovation must be managed. A specific requirement in managing those risks is a duty to act responsibly:

‘All those involved in the design, development and deployment of AI within the justice system have a responsibility to ensure that the core features of the rule of law and human rights are embedded in each stage’.

The judgment in the Ayinde case clearly demonstrates the peril that may ensue for legal professionals where AI is relied upon in place of conventional research methods.




Legal Disclaimer

Articles are intended as an introduction to the topic and do not constitute legal advice.