Is it safe to use AI for legal research?  Not yet

The development and use of artificial intelligence (AI) has increased dramatically in the past year. Widely available AI apps can be asked complex questions and will immediately return ostensibly sophisticated answers. The use of AI has extended to the legal sector, where chatbots can be used to search for cases on a particular topic or field of law to assist in legal research. This development raises various questions, including whether AI is a reliable source of legal knowledge.

What is AI and how is it being used?

AI programs analyse and seek to understand language so that they can adapt to various requests without human intervention. The result is that a computer can carry out tasks typically performed by humans. Everyday uses of AI include automated digital assistants, such as Alexa or Siri, on mobile phones, smart speakers and other devices. The aim of AI is to increase efficiency.

In the legal sector, AI is being used as an aid for legal research. AI chatbots such as ChatGPT, Google Bard and Bing AI allow the user to ask a question or enter a statement, to which the chatbot promptly responds with an answer. To do this, the chatbots draw on information from sources such as Wikipedia to put together responses. They are trained to identify patterns of words and ideas so that they stay on topic and generate a seemingly thoughtful and helpful answer.

The risks AI chatbots create

AI chatbots use whatever information they can access to create an answer to the user's question. Because of the unreliability of online content (including, in particular, so-called ‘fake news’), the answers provided, whilst appearing trustworthy and definitive, may in fact be unreliable themselves. This has proved to be the case with legal research.

When Google Bard was asked to provide details of the most important media law cases in England and Wales in the last 15 years, it provided correct analyses of key cases such as Reynolds v Times Newspapers Ltd [2001] 2 AC 127 and Campbell v MGN Ltd [2004] UKHL 22. However, both of these cases fall outside the timeframe requested. This is somewhat odd, as one would have thought that this binary criterion would be easy for a computer program to get right.

Jameel v Wall Street Journal Europe Sprl [2006] EWCA Civ 1047 was also cited, again outside the timeframe, and with an incorrect citation ([2006] UKHL 44 is the House of Lords decision; [2005] EWCA Civ 74 is the Court of Appeal decision). Moreover, Google Bard stated that the case “established that journalists can be held liable for defamation even if they are reporting on the views of others”. Whilst this general proposition of law is correct, Jameel did not establish it. Jameel is noteworthy because it recognised a form of abuse of process whereby the court could dismiss a defamation claim if no real and substantial tort had been committed within the jurisdiction (typically where there was no, or a very low level of, publication).

Lachaux v Independent Print Ltd [2019] UKSC 35 was identified as the case which “established that journalists can be held liable for defamation even if they did not intend to defame the claimant”. Again, the citation is incorrect ([2019] UKSC 27 is the correct citation; [2019] UKSC 35 concerns an unrelated immigration case). Again, the purported ratio is wrong and is simply a general proposition of law. Lachaux is important because it considered the interpretation of the test of serious harm to reputation under section 1(1) of the Defamation Act 2013.

The cases of Von Hannover v Germany (No. 2) [2012] ECHR 216 and Ashworth v MGN Ltd [2008] EQHC 1210 (QB) were also recognised by Bard, and whilst its analysis of them was correct, the citations, once again, are not.

Google Bard also listed the following cases as important:

  • MOS v Dowler [2011] EWHC 1004 (QB)
  • Tinkler v Associated Newspapers Ltd [2018] EWHC 2667 (QB)

These may seem unfamiliar to media lawyers, and for good reason: they are not real cases!

Google Bard does provide a pop-up with a link to Google so that users can double-check what they have been told. Nevertheless, as matters presently stand, lawyers and law students should generally avoid relying on AI as a means of legal research. Moreover, members of the public seeking legal advice should steer well clear of AI, as it could lead to very costly mistakes and misunderstandings. At the moment, at least, the only safe option when it comes to legal advice is to continue to consult a specialist solicitor.



Legal Disclaimer

Articles are intended as an introduction to the topic and do not constitute legal advice.