As AI rises in prominence, we are increasingly seeing it used in everyday life and filtering into the workplace, whether through company-authorised software or unofficial use by employees.

In this article, we focus on the state of play of AI in the workplace from an employment law perspective, examining new risks and trends, along with giving guidance on best practice. 

What is AI?

AI is a form of general-purpose technology in which the machine or software learns from the data that is input into it, which it then analyses. It can then predict patterns (including in language) and, based on what it learns from the data, improve its performance of certain tasks over time.

AI is made up of two main components: 

  • The data that is input – that could be a variety of things including text, audio, images, and video; and
  • The algorithms – this is the coding within the software that contains the instructions to perform specific tasks. 

We see examples of AI in a variety of different forms: 

  • ChatGPT or other generative AI software;
  • Chatbots;
  • Analysis tools;
  • Virtual assistants; and
  • Image and voice recognition software. 


Use of AI for employers in the workplace

There are already a range of AI-assisted tools available which can help employers to perform many workplace functions. Some examples of use cases include:

Recruiting and hiring:

  • Generating job descriptions which can be prepared quickly and then fine tuned by the hiring manager;
  • Asking or answering questions about preliminary job qualifications, salary ranges, and the hiring process, and potentially rejecting candidates based on set requirements;
  • Sourcing prospective candidates by searching social media for people with relevant qualifications, and screening applications;
  • Rating an applicant’s performance at aptitude testing; and
  • Conducting video and recorded interviews, and analysing the responses, facial movements, and speech tone and patterns.

Performance management, conduct and productivity:

  • Allocating tasks or shifts;
  • Measuring performance based on set criteria;
  • Selecting individuals for promotion based on set criteria or performance indicators;
  • Analysing productivity; and
  • Monitoring and tracking employee attendance, activity or behaviours.

Other uses:

  • Predicting which employees are likely to leave their role (taking account of a variety of data and factors), to provide employee retention information;
  • Selecting employees as part of a redundancy process based on set criteria;
  • Taking over admin heavy or repetitive tasks; and
  • Monitoring safety risks of working environments. 


Key risks of employers using AI in the workplace


Discrimination 

The primary concern with AI tools is that there is a lack of transparency around how they arrive at their output. There is a risk that prejudices and biases hidden in the software can lead to discriminatory outcomes in decision making processes.

AI relies on the data available to it. Therefore, if practices have been discriminatory or favoured a group previously, then AI is likely to replicate those outcomes. 

Direct discrimination claims related to the use of AI may be difficult to prove, as there is a lack of transparency around the data used to reach a decision. It would therefore be hard to show that a protected characteristic was a material factor for the person making the decision. However, there is still a risk that direct discrimination might arise: a combination of other personal information, such as sickness absence, education, employment history, length of service and seniority, could act as a proxy for a protected characteristic.

Where the risk may be greater is in relation to indirect discrimination. For example, assessment tools using video software may not work in the same manner for people of all skin colours, for those with facial disabilities, or for those with certain accents. 

Indirect discrimination can be justified where an employer demonstrates its actions were a proportionate means of achieving a legitimate aim. However, where justifying the decision of AI software, there may be issues in explaining the underlying rationale since sufficient information may not be available to employers to explain the outcome.  

Similarly, there’s potential for unlawful harassment to occur if an AI tool results in an employee experiencing unwanted conduct related to a relevant protected characteristic. Similar to the example above regarding indirect discrimination, that might happen where AI video software does not recognise the facial movements of a person with a facial disability; or where speech isn’t recognised for individuals with strong accents. 

This could result in the technology not responding appropriately to the interviewee and could accordingly have the effect of violating their dignity, or creating an intimidating, hostile, degrading, humiliating or offensive environment, within the meaning of the Equality Act 2010.

Unfair dismissal

The use of AI tools in decision making may increase the risk of unfair dismissal claims, for example where AI is used when making decisions about conduct, performance, or redundancy selection. The manager may not be able to justify the output and reliance on this may lead to outcomes which are substantially or procedurally unfair.  

Constructive dismissal might also be a risk where managers delegate decision making to AI and are unable to explain outcomes, which employees may consider a fundamental breach of their contract. Similarly, introducing significant use of AI into the workplace could constitute a change to employees’ terms and conditions. If that is done without consultation, and in a way which materially changes or undermines employees’ roles, it could amount to a breach of trust and confidence.

Data protection issues

Depending on the type of data being put into AI software, there may be data protection issues to consider: for example, where CVs and cover letters are screened and personal information is input into the software, or where AI is used in a decision-making process and information is not anonymised. Consideration therefore needs to be given to the lawful basis for data processing, how the software is being used, and data security.

Employee relations

Thought should be given to the impact that the introduction of AI might have on employee relations, particularly depending on the use cases being trialled. 

Where AI is being used to automate elements of employees’ roles, there may be concern from staff about whether their role remains viable and whether they will effectively be replaced by AI tools. Similarly, where AI is being used for the monitoring of staff, consideration should be given as to the impact this may have on them and whether it is proportionate. 

Before introducing measures/software, employers may wish to consult with their staff about use cases and proposals. 


Use of AI by employees in the workplace 

Employers are increasingly facing the reality that their staff are using AI at work, whether that is sanctioned or not. 

A recent study by Software AG concluded that half of all computer-based workers use AI tools, and 46 per cent of this contingent insisted they would continue to do so even if they were ordered not to. For employers, the risks of such unchecked usage are considerable.

Approved AI tools that are vetted and secured by IT departments come with their legal risks identified and managed, but those risks are multiplied and unchecked when the use of a tool is covert:

  • Legal liability: Anyone who has used AI tools is aware of their limitations, at least for now: for example, their disposition to hallucinate and their tendency to serve back an answer that sounds too good to be true. However, because the standard of their output is generally high, they can lull users into a false sense of security. Employees relying on AI tools without appropriate training and safeguards risk producing work that compromises quality and can lead to wider reputational damage and liability.

  • Loss of your USP: Non-organisation-specific AI tools are unlikely to match an organisation’s particular style and approach, leading to potentially jarring inconsistencies. As AI usage grows, more and more customers will become aware that they can use online tools to check whether they are being served AI-generated content. This can lead to a loss of trust and, at worst, allegations that the product or service is not as advertised, or even in breach of contract.

  • Data protection: Unregulated use of insecure AI tools risks the wider disclosure of sensitive company information, and the personal data of clients, customers, and colleagues. This presents a real danger of breaching data protection regulations and risking significant fines and reputational damage. 

  • Cyber crime and breaches of confidentiality: Unsanctioned AI tools carry the risk not only of sensitive information being mistakenly disclosed, but of its being actively used for cyber crime. Bad-faith actors who gain access to such information will potentially have the means to blackmail the compromised organisation, and its clients, customers and employees too. Leakage of sensitive information can also provide training data for ever more convincing phishing and ransomware attacks. Further, we have already seen the politicisation of certain AI tools, such as DeepSeek and Grok, and it is possible that data input by unwitting employees could end up in controversy or be used in the interests of others. There are also risks associated with confidential or sensitive information being input into an unauthorised system, as the employer has no control over where that data will go.

  • IP: Employees feeding work data into an AI tool could be inadvertently training the tool, shaping its knowledge base, and thus sharing company IP with other users of the tool on an industrial scale. This danger works both ways, with the reciprocal prospect of your employees, through AI tools, inadvertently using IP that does not belong to them and exposing their organisation to legal challenge.

  • Risk of harassment: Employees using software to generate images/content relating to other employees and then sharing it could constitute harassment. There is potential liability for employers where this occurs in the course of employment. This risk is heightened where there are allegations of sexual harassment given the new duty on employers to take reasonable steps to prevent this.

  • Grievances and litigation: We are seeing an increase in individuals using AI to obtain high-level legal advice to use against their employer and to escalate legal claims.

In our experience, employees are increasingly using it to assist in drafting grievances and correspondence related to this. The employee often covers this by saying they have taken legal advice from their “family solicitor”.

We are also seeing AI being used in Tribunal cases in which we are acting, with unrepresented litigants using it to draft their claim and to generate lengthy and frequent correspondence to us and to the Tribunal, often in record time. This creates a lot of extra time and cost, as the information in those communications is often difficult to decipher: it is frequently legally incoherent, referring to cases which are not relevant or failing to set out the proper legal basis for allegations or claims.

With this type of communication, we are increasingly having to take a pragmatic approach as to what we respond to. We are also relying on the Tribunal to intervene and give the Claimant directions to clarify their claim. However, the Tribunals are currently under significant resourcing pressure, especially in England and Wales, so the level of intervention and input we are getting on such cases varies from region to region.

Where AI is being used in this way, whether for litigation purposes or in grievance communications, the communications we are seeing often refer to confidential information, and there is a concern that large volumes of information, including emails, have been used to generate the output.


Best practice guidance

Identify where AI has been used

Things to look out for include:

  • Use of AI-typical words: many AI-generated texts lean on the same words regardless of context. Words like “essential” and “impressive” are good examples, and you should also be wary of Americanisms and excessive punctuation.

  • Repetitive sentence structures and phrasing: ultimately AI text is often based on a limited prompt, resulting in repetition. 

  • Lack of coherence: where AI has been used there can often be factual errors, or a lack of factual content which you think would be relevant.

  • Broad and generic language: AI-written texts often lack details or in-sentence examples.
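For illustration only, the manual checks above could be partly automated. The sketch below flags text that leans on AI-typical vocabulary or repetitive sentence openers; the word list and threshold are illustrative assumptions, not a validated detector, and no such heuristic should be relied on as proof of AI use.

```python
import re
from collections import Counter

# Illustrative marker words only -- an assumption, not a validated list.
AI_MARKER_WORDS = {"essential", "impressive", "delve", "furthermore", "moreover"}

def flag_ai_markers(text: str, threshold: int = 3) -> dict:
    """Crude heuristic mirroring the manual checks: count AI-typical
    words and repeated sentence openers. A sketch, not a reliable detector."""
    words = re.findall(r"[a-z']+", text.lower())
    marker_hits = sum(1 for w in words if w in AI_MARKER_WORDS)

    # Repetitive structure: how often do sentences start with the same word?
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    openers = Counter(s.split()[0].lower() for s in sentences)
    repeated_openers = sum(c - 1 for c in openers.values() if c > 1)

    score = marker_hits + repeated_openers
    return {"marker_hits": marker_hits,
            "repeated_openers": repeated_openers,
            "flagged": score >= threshold}
```

In practice a flag from a tool like this would only ever prompt a closer human read of the document, never a conclusion on its own.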

Job applicants using AI 

Employers might wish to warn job candidates that they will disqualify any applications that have been created using, or substantially copied from, the output of AI, and that AI-detection software may be used on their applications or CVs. However, doing that in practice could create risk, including data protection risk where applicants’ personal details are shared with screening software.

Manage how employees use AI

If employers are considering using AI in the workplace, it is recommended that they:

  • Determine how much, if at all, employees will be allowed to use AI to perform their work functions.
  • Train employees on any restrictions or limitations on its use.
  • Ensure any AI output is subject to rigorous review to avoid errors. AI should be cited when used, and staff may require training on how to get the best outputs.
  • Ensure staff understand the legal risks of using AI in the workplace.
  • Consult IT teams about approved platforms; and
  • Put a suitable policy in place which addresses:
    • the permitted and prohibited uses of AI in the workplace;
    • the use of AI in recruitment, appraisal and promotion;
    • how an employer might be incorporating AI in its own services or products;
    • how any contractual arrangements the employer has with third parties will address liability arising from the use of AI.

If you would like to discuss the impact of AI in your workplace in more detail, please reach out to a member of our employment team. 

Written by

Ross Gale
Senior Solicitor, Employment
ross.gale@burnesspaull.com | +44 (0)141 273 6785
