Artificial Intelligence (AI) tools are without doubt one of the hottest topics in legal forums today. What are they? How can they help us in practice? Should we be using them at all? These are all questions that lawyers are asking themselves. The answers may not be straightforward, but what is clear is that there is a balance to be struck between embracing all that AI tools have to offer and managing the very particular risks that they pose.

How can AI help us?

The range of AI tools on offer is vast. At one end of the spectrum are the extractive AI tools that many in the legal world will have been using for some time, for example case management systems that can filter information or legal research tools that summarise case law. At the other end of the scale are the emerging products using generative AI. These products create new material based on a given data set that has been ‘fed’ into them. Tools now on the market include those that can review contracts, carry out title checks, or assess risk as part of AML procedures.

What are the risks of using these emerging AI tools?

With opportunity comes risk, and with AI tools come some specific risks to look out for. These include:

  • Breaches of data protection and confidentiality
  • Inaccuracies and erroneous results
  • Bias and discrimination
  • Non-compliance with differing regulatory requirements

Practice Tips

Remember that you are responsible to your clients for the output from an AI tool. With this in mind, in making the most of the technology, you should take proactive steps to identify and manage the risks:

  1. Be transparent. Make sure your use of AI is explained in your terms and conditions and that clients know how their data will be used.
  2. Scrutinise your providers. You should carry out stringent due diligence on any tool from a third-party provider that you consider using. It is important that you read the terms of use and understand their implications.
  3. Consider training requirements. Make sure all staff have had appropriate training on any AI system that they will be using.
  4. Remember confidentiality and data protection obligations. Client data should never be entered into an open, publicly accessible AI tool.
  5. Check your results. Always subject any work generated by AI to a human check. Verify any output before sending anything to a client.
  6. Follow the Law Society rules. Ensure that you continue to comply with the Law Society of Scotland’s Practice Rules and related guidance. Look out for the Law Society of Scotland’s Technology Law and Practice Committee’s Guide to generative AI for the profession, which will be published shortly.

Further reading

For more coverage of these issues, see Lockton’s article AI: the future is here. This article by DWF LLP was first published in the Journal of the Law Society of Scotland on 14 June 2024. You may also be interested to read Deepfake and the risk of vendor fraud: challenges and solutions for solicitors. Written by our Lockton colleagues in London, it discusses the risks posed by fraudsters using AI to impersonate clients in order to circumvent onboarding identity checks. Although the article was originally written for solicitors in England and Wales and refers to SRA guidance, the content is also relevant to Scottish solicitors.