Artificial Intelligence: Action From The CLLS
By Maroulla Paul for CitySolicitor Magazine, Spring 2024 Edition
While criticism may be levelled at the Government for not being more proactive in legislating for AI, the CLLS is far from taking a back seat when it comes to navigating these new technologies and has set up a specialist committee to look at AI and the legal profession. It states its aims as:
‘The specialist Committee on Artificial Intelligence works to coordinate the City of London's response to AI.
For the law, the opportunities include automating manual tasks to save time and cost, improved access to justice, and – looking further ahead – perhaps AI-assisted negotiations. The challenges include ensuring the reliability and accuracy of AI, job displacement concerns, and meeting ethical standards and regulatory requirements in the legal industry.
The CLLS AI Committee will confront these opportunities and challenges head-on in order to provide guidance to our City members on what matters with this fast-moving and challenging technology.’
The Chair of the Committee is Minesh Tanna, a partner at Simmons & Simmons LLP in the firm’s Disputes & Investigations team and also their Global AI Lead - eminently well qualified for the role. In his Disputes work he has been focussing on technology for the past six to seven years and, for the last five years or so, on AI as an area likely to generate legal work. His expertise lies in AI regulation, AI governance and legal disputes arising out of the use of AI. Disputes frequently arise from regulation - although there is currently limited AI-specific regulation.
However, we are starting to see disputes arise from existing legal frameworks and their application to AI, for example in the data privacy, consumer protection and competition spheres. ChatGPT was temporarily blocked in Italy following action taken by the Italian Data Protection Authority against OpenAI. Looking ahead, on the regulatory side, Minesh predicts a great deal of regulatory action once AI-specific regulation comes into force, the EU’s AI Act being a good example. There will also be disputes arising from contracts, where the procurement of AI may go wrong and the customer may blame the developer for issues with the underlying technology and the way it was developed that may have caused unintended consequences and harm.
“It is important to remember that from a disputes angle we are dealing in its most sophisticated forms with an autonomous form of technology which raises quite novel issues when it comes to attributing blame and legal responsibility to humans because they may not have had any control or even awareness of how the AI system was operating - is it therefore fair or appropriate to be attributing blame to those humans?”
Minesh says this question is fundamental to why he became so interested in AI in the first place. While he accepts that AI’s output is still to a large extent determined by the data that humans input, AI is able to review that data and identify patterns in a way that humans cannot and, more importantly, to take decisions autonomously without being bound by a set of human parameters.
“Take an autonomous vehicle - as humans we cannot programme these to operate in the way they do because that would mean effectively coding, for example, for every turn that exists on roads to be able to guide the vehicle to make the turn. But every turn looks different, so coding for each one is practically impossible. Instead, we allow the vehicle to learn from a vast amount of data (including turns in the road) in the hope that it is able to identify when to make a turn - and when to stop when a pedestrian is crossing the road. Whilst the technology has vastly improved (such that autonomous vehicles may already be or soon become safer than human-driven vehicles), accidents are still happening - but where does the liability lie?”
Separately, AI is obviously going to have a significant impact on the legal profession, both in terms of advising on legal issues relating to AI and in terms of the use of AI within the profession itself.
“The legal profession is ripe for the use of AI for several reasons. Firstly, we deal with a lot of information and a lot of data. Secondly, it is a competitive market and legal services tend to be quite expensive. This is an opportunity to use technology to bring down costs. Thirdly, without any criticism of the profession, it is still a human-led and service-based industry, with fees (particularly in commercial legal services) based principally on the time spent rather than the output.”
The CLLS Committee has been established to, amongst other things, look at interesting and challenging issues around AI in the profession, particularly for City firms.
“We are dealing with a complex set of technologies that are not easy to understand and, with that in mind, the Committee has, in my view, three key goals. Number one is education - helping the profession to understand not just what AI is but how we can use it safely; what are the opportunities but also what are the risks that need to be considered? Next comes policy - contributing to important policy and regulatory development. Where there may be consultations and discussions around AI in the profession, this Committee will play a role. But also more widely where, for example, the UK is looking at regulation for AI generally, we will have an important part to play - not least because we have members who have been through the application of other digital regulation like GDPR, so we are in a good place to be commenting on AI policy and regulatory developments generally. Finally, collaboration. AI is all-pervasive, so we need to work alongside other bodies; there is no sector, domain or industry AI will not touch.”
Minesh sees the potential for significant opportunities which AI can bring to our profession. We have already seen that technology can undertake tasks currently done by humans to a similar or even higher quality.
“Take reviewing documents, for example. This is an area where law firms have already been using AI but not generative AI, which is the latest and most powerful form and which can produce content itself such as, in this context, drafting documents and correspondence which typically can take a lot of human time. The risks are, however, that we are dealing with an autonomous form of technology which we don’t necessarily fully understand - and, in particular, lawyers who may not always have a technical background will need to upskill and understand the technology in order to be able to use it safely and responsibly.”
One issue with large language models (LLMs), which produce linguistic content, is ‘hallucination’: the risk of an LLM producing inaccurate information. In a legal context, where truth and accuracy are paramount, that is obviously a key risk.
“Technologies have to be used in the right way and with the right parameters in place. LLMs, for example, may be treated as general chatbots that can give the answer to any question, but they principally view language through the lens of statistical functions; they do not understand language in the same way you or I may, and this can lead to factually inaccurate responses. Lawyers, in particular, need to be wary of this.”
Therefore - for now at least - generative AI in the legal profession may be treated more as a starting point, and at this stage human verification of its output is crucial. Time will change this as the technology becomes more sophisticated and accurate.
AI will also impact the shape of the legal industry. The trend towards democratising AI - making it available to everyone, without the need for technical expertise or coding - will allow clients to produce their own first drafts of documents, something they have previously relied on their lawyers to do.
It seems as though AI will shake up our profession in so many ways - which is why we are fortunate that the CLLS had the foresight to set up this Committee to help us navigate these uncharted waters.