The integration of artificial intelligence (AI) into the delivery of professional services presents exciting opportunities to drive efficiencies and operate more commercially for clients. However, AI has its limitations, and so its increasing use is also likely to provide fertile ground for regulatory complaints and negligence claims against professionals in years to come.
This is the first instalment in our series "AI: A Professional Negligence Perspective", where we will explore the risks and rewards for professionals when utilising AI. This article focuses on how the use of AI interplays with specific duties professionals owe to their clients.
Generative AI (Gen AI) in professional services
Gen AI is a form of artificial intelligence that is capable of creating new content, including text, images or sounds, based on patterns and data acquired from a body of training material.
In the legal sector, uses include the drafting and review of contracts, the preparation of chronologies, legal research and the drafting of submissions to the court. In the construction sector, examples include the use of automated valuation models (AVMs) by quantity surveyors and the use of AI-integrated building information modelling (BIM) by architects, which can detect and predict clashes between architectural, structural and mechanical systems. In the accountancy sector, Gen AI is being used in areas such as tax research and advice, financial reporting and auditing.
In terms of future trends, we see a compelling use case for mass document review and expect this to be a major growth area in coming years. Such tasks can be laborious, resource heavy and time consuming to carry out manually. However, Gen AI will be impactful here because such tasks are particularly well suited to its strengths of quickly analysing and synthesising vast amounts of information.
Professional duties of skill and care
Professionals require to exercise professional skill and care when providing advice and services. In doing so they require to act only in matters which are within their sphere of competence, apply the appropriate level of diligence to the matter, and provide independent advice. These duties cannot be delegated to a Gen AI system, and the professional remains responsible for their work product. Professionals must ensure that, to the extent such systems are relied upon in providing advice or services, they understand the systems' limitations and are in a position to independently verify whether the outputs are useful and appropriate for the purposes of their specific instruction.
Independent verification is especially important because Gen AI is trained to predict the most likely combination of words from a mass of training data it has been provided. The quality of the output will be dependent on the prompts entered by the user and the underlying dataset which could well be inaccurate, incomplete, misleading or biased. It does not necessarily provide the most accurate answer, and basic Gen AI does not check its responses against an authoritative database. Indeed, in that sense the “intelligence” part of the name is somewhat misleading as the essential element of critical analysis is still lacking.
This is recognised in the Refreshed Artificial Intelligence (AI) Guidance for Judicial Office Holders (April 2025), which states that “current public AI chatbots do not produce convincing analysis or reasoning”. For these reasons, Gen AI is prone to producing “hallucinations”, i.e. coherent and authoritative responses which are in fact inaccurate or fictitious. In the legal profession, examples of hallucinations include fabricated cases, legislation or other sources, as seen in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383. Accordingly, outputs should be treated with caution and verified by the professional, with particular attention paid to whether the information is accurate, up to date and relevant to both the issue(s) at hand and the jurisdiction of operation.
Where Gen AI is being used to produce a high volume of outputs, it may be more proportionate to undertake randomised dip samples at regular intervals to scrutinise the quality of outputs. If that is the case, professionals should be transparent with their client about their use of the system, its capabilities and limitations, and the scope of the work that they are agreeing to carry out, which may require adaptation of letters of engagement.
On the horizon is Agentic AI. These systems act autonomously (i.e. of their own agency) to achieve specific goals, problem solving and adapting strategy along the way, with limited supervision and prompting. The Law Society of England and Wales helpfully describes the difference between Gen and Agentic AI as follows: “consider generative AI as the intellect and agentic AI as both the mind and hands. Where a model like ChatGPT might help draft a report when asked, an agentic system can plan, write, revise and deliver it without further instruction”. It will be especially important for professionals to understand how such systems work and to build in appropriate check-in points to ensure that any errors are not propagated throughout the process and that the thread of the system's reasoning is not lost over time.
Client confidentiality
Professionals require to maintain their duties of client confidentiality when using Gen AI providers. When it comes to the legal profession, confidentiality is also the underpinning for the protection provided by legal professional privilege.
In terms of public providers, inputted information and documentation becomes part of the dataset and theoretically available to be used to respond to queries from other users. Therefore, confidential or client sensitive information should not be shared with such systems and any documentation being shared should have such information removed.
Private and more specialised Gen AI systems will likely provide enhanced safeguards. However, the professional still requires to satisfy themselves as to the extent of the safeguarding. The Law Society of Scotland’s (LSS) Guide to Generative AI recommends at a minimum that “(a) appropriate terms are in place with the vendor so that the information inputted will not be accessible by the vendor or used for any other purposes (b) the security arrangements meet appropriate information security standards and (c) that the use is compliant with the firm’s own terms of business with clients”.
Professionals should also consider whether client consent should be obtained prior to using confidential information in private Gen AI systems, with reference to any applicable industry guidance. The RICS guidance on the Responsible Use of Artificial Intelligence in Surveying Practice provides that regulated firms must refrain from uploading private or confidential data to an AI system unless they have the express written consent of all affected stakeholders to do so. The LSS guidance, by contrast, states that consideration should be given as to whether to inform clients, in order to provide them with reassurance that appropriate safeguards are in place and to answer any questions they may have.
Generative AI notetakers are commonplace and helpful tools for recording, transcribing and summarising meetings. However, if clients wish for certain conversations or advice to remain confidential, and covered by legal professional privilege, careful consideration should be given to how those AI-generated transcripts are stored by the provider and the extent to which they are accessible to the wider public. The risks of AI to legal professional privilege are something which we will explore in more depth later in this series.
Conclusions
While there are undoubtedly great opportunities arising from the use of AI, as with all new technology, alongside opportunity comes risk for professionals. If a client suffers loss due to a mistake arising from the use of AI, the client will inevitably look to the professional who used AI to deliver the service to recover that loss.
As different forms of AI are still developing, there is not a fully fledged normal or usual practice for professionals to follow. All professionals, and particularly solicitors, will need to monitor closely the guidance being issued by regulatory bodies around the use of AI. At an organisational level, guidance should be provided by the adoption of policies and clear guardrails on the proper use of AI. However, even so, we expect that risks will continue to arise in respect of shadow AI which IBM describes as “the unsanctioned use of any artificial intelligence (AI) tool or application by employees or end users without the formal approval or oversight of the information technology (IT) department”.
For now, the key takeaway is that ultimate responsibility rests with the professional to comply with their duties of skill and care when utilising AI systems, which are only a tool in delivering the service to the client. From a risk management perspective, professionals would be best advised to ensure that they are following any available regulatory guidance on its use, and that terms of business and letters of engagement are suitably updated to take account of any use of AI.
Clients instructing professionals should think carefully about the extent to which they are comfortable with professionals using AI to deliver services, and should ensure that they understand how it will be used. This is a theme which we will return to later in this series! If you would like to discuss this topic further then please get in touch with your usual Burness Paull contact or one of the contacts listed below.
Written by
Mhairi Morrison
Senior Solicitor
Dispute Resolution
09/12/2025