What does AI mean for Consumers of Legal Services?


AI is a sophisticated technology but its use in consumer interactions is very new. Understanding how it is playing out in general consumer services would greatly add to the resources that regulators can rely on to develop frameworks that work for the legal sector.

Nothing captures the headlines or our imaginations quite like emerging technology. In the 15 years I’ve worked in tech policy, there have been many ‘Next Big Things’. Big data, the internet of things, smart data and cloud computing are just a few that have had their time in the sun.

Since OpenAI launched ChatGPT to mainstream users in November 2022, artificial intelligence (AI), and more specifically Generative Artificial Intelligence (GenAI), has taken top billing as possibly the most transformational Big Thing we’ve ever seen.

What do we talk about when we talk about AI?

AI is a broad term for a collection of advanced software that allows machines to simulate different aspects of human intelligence. Machine learning is a particular type of AI, but nowadays the term is often used interchangeably with AI. Advances in machine learning, deep learning and neural networks have enabled software that more closely mimics how humans learn and produce content.

These advances have given us Generative AI, which can generate synthetic content like text or speech that very accurately resembles human-created content. This new phase of Generative AI is likely to change the way consumer-facing services and sectors are managed behind the scenes and how they interact with end users.

AI: potential benefits and threats at the same time

Whether the transformation GenAI is predicted to usher in will be terrifying or exciting depends on your point of view. Crudely put, industry, investors and governments are excited about its potential, and keen to stake out an early lead in the space, whilst of course acknowledging risks.

However, many law-makers, politicians, creatives, academics, citizens and civil society organisations are deeply concerned that it could strip out human interactions, be used to discriminate against people, supercharge mis- and dis-information, replace labour or harm human rights and wellbeing.

What do consumers think of AI generally?

Consumer research shows that most people see its potential but are nervous about the risks. They are positive about its uses in healthcare, research or managing environmental resources, or simply as a fun, creative tool.

On the other hand, they worry about being manipulated, about their privacy and security, and that AI could lead to unfair discrimination against themselves or other social groups. Surveys also show citizens don’t think current regulation is enough to mitigate the risks and harms of AI.

What does AI mean for consumers of legal services?

Amongst this frenzy of activity and polarised opinions, all sectors are thinking through what AI means for them.

Legal service providers and regulators have looked across the huge breadth of areas where AI will have an impact: for example, legal research, contract review, e-discovery, document drafting or analysis, predicting risks and outcomes and summarising large bodies of text. There are also general business applications of AI like recruitment, training and marketing that law firms might use.

A survey by LexisNexis found that the number of lawyers using generative AI in their work has nearly quadrupled since last year, jumping from 11% in July 2023 to 41% in September 2024.

To different extents, all these activities will have an impact on consumers as end users of legal services.

However, there are also other areas where regulators of legal services have been encouraged to explore the way tech innovations like AI can support consumer needs such as greater access to justice, lower costs, quicker delivery and more specialised offers.

How are regulators supporting innovative tech like AI in legal services?  

Recognising the role of new innovations in meeting regulatory objectives, in April 2024 the LSB published guidance on promoting technology and innovation, including but not limited to AI, to improve access to legal services.

The guidance set out how regulators might:

“Promote the use of technology and innovation to improve access to legal services by helping to address unmet legal need and to ensure a broader array of legal services is delivered in the public interest”.

The guidance does a thorough job of referring to other regulators’ work or governance best practices, and guides regulators to look at how other sectors have promoted the use of technology and innovation for the benefit of consumers and the public. The LSB has also responded to the DSIT commission and flags the importance of case study learning from other regulators.

However, this could be greatly enhanced by tapping into the rich resource of insights, evidence and learning that the rapid deployment of AI has brought outside of regulated industries, to get to the heart of some of the common consumer usage issues.

Rapid learning can benefit everyone

Working in regulators’ favour is the fact that things move fast in AI and GenAI consumer deployment. In the eight months since the guidance was published, there are even more examples of how AI can help, hinder or even harm consumers in general consumer services and in legal settings.

Automating time-intensive tasks in things like conveyancing is becoming more common, which could eventually lower consumer costs, and free up staff to work on the thornier issues that inevitably arise.

But there are other cases where harm has occurred: for example, a party in a tax tribunal relied on legal authorities generated by an AI system which turned out to be fake, and in the ‘electronic courts’ in Poland, automated repayment orders sent to people were easily intercepted by scammers, leaving consumers without financial redress. There has also been further research into consumer uses in legal services and consumer attitudes more broadly, such as increasing fears identified by a global survey.

AI in non-legal consumer settings: what can we learn?

Consumers’ experience of AI in service delivery in other sectoral settings offers further insights relevant to legal services. Here are some examples of the real-life impact of the risks of AI on consumers and organisations that regulators will want to be mindful of. These deliberately focus on the potential pitfalls, which inevitably attract attention. Giving more detail on how top-level risks actually play out in practice for consumers in sectors of early AI adoption is something the LSB could emphasise more, as it is a rich source of learning. Of course, plenty can also be learnt from good practice, but this information is not so readily available.

  • Accountability: A passenger of Air Canada was incorrectly informed by an AI-enabled chatbot that he would not be charged to cancel a flight due to a family bereavement. Air Canada refused to reimburse him for his financial loss and, when taken to court, argued it was not responsible for the “misleading words” of its chatbot, as the chatbot was a “separate legal entity” that should be held responsible for its own actions. The airline lost, which helped establish a precedent on legal liability, but the case highlights the importance of clarity on accountability and transparency.

 

  • Bias and discrimination: AI systems can make mistakes due to poor data, or models trained on data which reflects structural bias, leading to an inaccurate assessment of circumstances and risk. Citizens Advice UK found people from an ethnic minority background were being charged more for car insurance even when all the other risk factors were the same. This suggests that the system made unfair inferences and inaccurate correlations. If AI systems in legal services are used to assess risk, those assessments need to be based on accurate and non-discriminatory data and models.

 

  • Misleading information: Consumer group testing found that Bing’s ChatGPT-powered search engine presented wrong information about the independent tests the group carries out on consumer appliances and its related recommendations. Consumers had no means to verify whether the information provided was correct, and queries raised at different times produced different results – demonstrating a problem both with the sources of information and with the processes by which that information was selected for presentation to consumers.

AI in legal services through a consumer lens: what can we learn?

Here are just a few examples of the type of insights we might glean by focusing on consumer interests, experiences, outcomes and expectations of AI in legal services:

  • Mixed perspectives: Findings from the Legal Services Consumer Panel 2024 Tracker suggest that consumers can hold mixed perspectives on the use of technology like AI in legal services: 68% of legal services consumers think accessing legal services digitally would make it more accessible to them, yet 56% of legal services consumers said they would trust a legal service less if they could only access it digitally.

 

  • Two-tier justice service? Research into AI in the family justice system published by the Nuffield Family Justice Observatory warned that lower-cost AI-enabled services which are not regulated for quality may lead to those with fewer resources getting a poorer service. Without proper quality controls, this would cancel out any gains made by increasing access to justice. We could end up in a place where a higher-level service is provided by humans, with a lower-level service delivered by a machine.

 

  • Accuracy of legal information: Consumers have been using Generative AI services for guidance on legal decisions such as divorce, mortgages and investments. Although many of these services use a disclaimer that advises getting professional help, the authoritative and plausible way in which information is presented can be very convincing to someone unfamiliar with that area of law or finance.

An Open University study found that the tools provided legal advice that was not always reliable or accurate, including misleading advice, advice based on out-of-date laws or laws in a different country, and advice that was too generic to be helpful.

Critically, the paid-for versions of the tools delivered higher-quality and more accurate advice than the free versions, linking to the point above about creating different tiers of service for consumers.

  • Digital confidence and legal know-how: Despite the high levels of connectivity in the UK, digital exclusion and disparities are still present. Skills, confidence and understanding of how AI-enabled tools work, what their business model is, what data they are trained on, and an individual’s rights to object to outputs (decisions) or inputs (for example, personal data) will vary widely amongst different users.

Let’s couple this with the way in which consumers interact with legal services. Consumers’ usage of legal services is likely to be irregular and infrequent, so there is already an information asymmetry between provider and consumer. Consumers are highly likely to have less knowledge about both the substance and the process of the law. If users are also less knowledgeable about how the delivery of sophisticated digital service offers like those involving AI works then they face a double disadvantage.

Why is learning from AI’s rapid rollout so important?

This blog has suggested just a handful of examples of how consumers are experiencing the arrival of AI in their lives.

Being able to quickly digest and learn about the impact of AI on people is crucial. Until very recently, AI was a technology designed for and largely used for industrial purposes. It is now being re-purposed in an ‘experimental phase’ for consumer uses like service provision, communication and information with few formal guardrails in place.

The focus to date has been on legal service providers being the primary users of AI technologies in legal practice, but we must remember that consumers may be using AI tools themselves or be impacted personally by the way in which AI is used to make decisions about them.

In this early stage of deploying AI for a complex sector like legal services, a more sophisticated understanding of the risks and disadvantages of using AI in direct consumer interactions is important, particularly if it is targeted at access to justice goals. There are inevitable concerns that this could stifle innovation, but without effectively anticipating risks we could end up in the situation that the Public Law Project has described as ‘hurt first, fix later’.

If regulators can learn quickly about how AI impacts people individually and collectively and why, they can strengthen their strategies and plans to support legal service providers in their AI innovation.

The LSCP will publish further research in 2025 to understand the different types of legal services that are best suited to digitalisation from a consumer perspective which will be helpful for regulators.

 

Liz Coll, LSCP Panel Member