The field of AI is ever evolving, with regulation and practice often struggling to keep up.

Back in 2022, we wrote a blog about using artificial intelligence in a recruitment context and highlighted some of the associated risks. We have also been following the UK Government’s proposals towards regulation.

Now, one of the key areas of discussion in AI is the use of chatbots. We are seeing the proliferation of these systems, including uses in business, but what are the risks, and what can employers do to mitigate them?

What are chatbots?

Generative AI chatbots, such as OpenAI’s ChatGPT, have exploded onto the scene in recent months. They respond to questions posed in natural language, much like a human, using machine learning models trained on large volumes of text. There are numerous examples of chatbots providing users with instant answers to complex questions, assisting with drafting reports, suggesting well-worded and cohesive responses to emails, and even writing software code.

The capabilities of chatbots are already causing disruption across many industries, as companies come to terms with how they may be helped, or hindered, by this global phenomenon. Many employers are looking to embrace the benefits of chatbots in terms of both efficiency and cost-effectiveness.

However, as with any new technology, there are risks and legal issues associated with using chatbots, particularly when it comes to client- or customer-facing work.

What are the key risks and legal issues?

Some of the key risks and legal issues include:

  1. Contracting/Potential Liability: How to allocate liability has been an area of concern for AI generally, and this certainly applies in the case of chatbots. Whilst chatbots can provide sophisticated answers in some circumstances, they are prone to errors and (at the very least) may not offer the presentation or valued input of a human. This could lead to customer complaints and, potentially, liability, which would in most cases rest with the company using the chatbot rather than with the developers. Specific contracts may also include requirements that the contract be performed by particular individuals, and, depending on the use case, effectively outsourcing work to a chatbot could amount to a breach of contract.
  2. Data Protection and Security: Data protection legislation applies to the use of chatbots just as it does to any other aspect of business. Failure to comply with data protection requirements can lead to significant fines, as well as reputational damage. Users can be tempted to include personal information in a way they would not when working with an external company, and the same applies to potentially sensitive confidential information. Sharing sensitive information with a chatbot can also create cybersecurity risks, with cyber attacks often arising from information provided from inside an organisation.
  3. Intellectual Property: If you share your company’s proprietary IP with a chatbot, there are concerns around how it may be used by the chatbot and by others. The chatbot may use your IP to generate responses for other users, and others may copy your ideas if they are not protected. Conversely, there is also a potential risk of claims regarding third-party IP when relying on a chatbot’s response to create, for example, a customer-facing document.

What steps should employers take?

There are some proactive steps employers can take to mitigate these risks and ensure that chatbots are being used appropriately and safely to create an enhanced client/customer service.

Our top tips for employers are as follows:

  1. Establish guidelines

    Employers should establish guidelines for the use of chatbots in client/customer work, clearly setting out which types of client work are appropriate for chatbots (e.g. proposing email responses or creating training materials) and which should remain in the hands of employees (e.g. holding confidential conversations about grievances or drafting contract clauses). Even the chatbot platforms themselves recognise that they should not be used in every situation.

  2. Provide proper training

    Employers should provide training on how to use chatbots correctly and securely, to ensure that employees are familiar with the chatbot’s capabilities and limitations. This will help them use the chatbot effectively and provide the best possible service to clients. As part of this, consideration should be given to the terms of use of each chatbot. Employees should also be made aware of the potential contractual risks and liability, particularly to customers.

  3. Avoid inputting personal data or sensitive/proprietary information

    It may be difficult or even impossible for a user to control how personal or sensitive data is used or shared once it has been submitted to a chatbot. Most chatbots are open about using conversations to improve their systems – after all, chatbots are designed to learn from past experience to improve future output.

    Whilst some companies have prohibited chatbot use entirely in response to data privacy concerns, others are taking a less strict, but nevertheless cautious, approach of banning staff from inputting personal or sensitive data.

    If there is any suggestion that personal data may be entered, care should be taken to ensure that privacy notices and policies reflect this use, and that data protection principles more generally are complied with. The ICO has recently produced an AI and data protection risk toolkit, which may help with some of these decisions, see here: AI and data protection risk toolkit | ICO

    Similarly, care should be taken when including confidential information or proprietary IP. Whilst chatbots can assist with creating work based on these, this should not come at the cost of jeopardising confidentiality or the protection of your IP. These are also areas where mishandling could give rise to contractual liability.

  4. Double-check your work!

    Employers should remind staff to double-check their work after using a chatbot for client-facing tasks. While chatbots can be incredibly helpful in assisting employees with research and writing, it’s important to ensure that work is accurate and complete before it is sent to clients and customers.

  5. Regularly review chatbot performance

    Once it has been established when a chatbot can safely be used, employers should regularly review its performance in these areas to ensure that it is functioning properly and continuing to provide accurate and appropriate responses. This should include seeking feedback from employees on what they might have done differently in such situations before the introduction of chatbot use.

Will the use of chatbots in the UK be regulated?

It is important for businesses to recognise that the general law around matters such as contracts/liability, IP and data protection will apply to the use of chatbots in the same way as any other aspect of the business.

The UK government has recently published its white paper on AI regulation, which indicates a more “light-touch” approach than the one being proposed in the EU. This will focus on existing regulators, such as the ICO, working to cross-sectoral principles, including: (i) transparency and explainability; (ii) safety, security and robustness; (iii) accountability and governance; (iv) contestability and redress; and (v) fairness.

The UK government is seeking responses to its proposed approach, with the consultation closing on 21 June 2023. Users and providers of chatbot tools should monitor these developments: while there may not be strict penalties for non-compliance, failure to meet the standards could cause reputational damage.

Conclusion

Chatbots (and generative AI developments more generally) are currently at the forefront of public awareness: a technological advancement that has the potential to revolutionise the way we work.

By following our guidance above, employers can take steps to mitigate the risks associated with using chatbots and fully explore their potential.

The regulatory landscape for AI generally continues to evolve, and it is important to keep up to date with the most recent developments.

If you would like to know more about using chatbots/AI in your workplace and your responsibilities, please do get in touch with a member of our specialist teams.

First published by DIGIT here.