TRUST IN THE AGE OF ARTIFICIAL INTELLIGENCE

The Trust Economy is reshaping our world.

That was the headline from our recent paper, which looked at how technology has transformed the way we live our lives. Developments in artificial intelligence (AI) have begun to accelerate that transformation, and 2023 will be remembered as a pivotal year.

On the one hand, there were fears that the technology could trigger a nuclear war, with the governments of the UK, US and France lobbying the UN to develop a protocol for how AI could be used in defence. The nations said they feared that, without human intervention, nuclear deterrents employing AI could accidentally unleash Armageddon on the world. Against this backdrop, the UK held the first global AI Safety Summit at Bletchley Park in early November 2023.

On the other hand, with the generative AI system ChatGPT reaching 100 million monthly active users just two months after launch, there was much debate about how it would revolutionise the workplace. Hype around the system, which can digest huge amounts of text and data and converse with its users in natural language, was such that technology was widely predicted to be on the cusp of replacing large swathes of the human workforce.

For some employers the idea of being able to replace humans with machines, cutting costs in the process, will be attractive. Perhaps unsurprisingly, though, that is not how most people view the prospect. Indeed, when the Centre for Data Ethics and Innovation published its Public Attitudes to Data and AI tracker at the end of 2023, it found that more people had heard of AI, and had an idea of what it could do, than a year previously, yet proportionally fewer felt positive about it. Nearly half of respondents (45 per cent) said they feared it would take people’s jobs, 35 per cent said it would lead to a loss of human creativity and problem-solving skills, and 34 per cent were afraid that humans would ultimately lose control of the technology.

Steph Wright, Head of the Scottish AI Alliance — the delivery body for Scotland’s national AI Strategy and its vision for Scotland to become a leader in the development and use of trustworthy, ethical and inclusive AI — says it is positive that AI has become a mainstream subject most people have an opinion on, but warns that a lot of the information circulating is based on myth.

“AI has cultural baggage because we've had decades of sci-fi cinema and literature that give a preconceived idea of what intelligent machines are,” she says. “The majority of these are in dystopian scenarios where they take over the world or they become smarter than humans, and when you get news headlines that perhaps reinforce these preconceptions it's really hard.

“There's just so much hyperbole out there at the moment. There's so much fear mongering and so much hype, and it's hard to cut through the noise for most people because it's just a constant bombardment. There are really binary perspectives on AI that are very black and white. There's the whole ‘AI is the best thing since sliced bread and it can solve all your problems’ view. Then there's ‘it's going to kill us all and humanity's not going to survive’. The truth is not either of those, but somewhere in between. It's a much more complex thing than people are presenting it as and much more nuanced conversations need to be had about it.”

Callum Sinclair, Head of Technology & Commercial at Burness Paull, picks up on the tension: “At a time when some technology companies (particularly large businesses) have challenges around public trust, developing technologies are becoming more complex to understand and more opaque, though often easier to use. The systems and processes that businesses have in place to ensure they can demonstrate ethical and trustworthy adoption and deployment to customers, supply chain partners and employees in an age of generative AI have never been more important.”

We are cognisant of the fact that AI firms have scraped a huge amount of data as that is needed to train, test and validate their algorithms...but AI firms have not been particularly transparent around the data they have used to train their models.

SERENA DEDERDING - GENERAL COUNSEL, COPYRIGHT LICENSING AGENCY (CLA)

In its previous iteration, AI used machine learning to replicate human behaviours at much higher speeds. It could, for example, be trained to scan vast swathes of documents for particular words or phrases, drastically cutting the time required for the task and greatly reducing the margin for error. In its latest iteration it can do much more, using language-based models to ostensibly replicate human thought, coming up with answers to specific questions posed by the person operating it.
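To make that contrast concrete, the sketch below is a purely illustrative example of the earlier, pattern-matching style of review; the document names, text and search terms are invented for illustration. A generative system would instead be asked a question in natural language about the same material and would compose an answer.

```python
import re

# Purely illustrative documents and search terms, invented for this sketch.
documents = {
    "contract_001.txt": "The supplier shall indemnify the buyer against all losses.",
    "contract_002.txt": "Either party may terminate this agreement on 30 days' notice.",
}
search_terms = ["indemnify", "terminate", "liability"]

# The earlier style of AI-assisted review is essentially rapid pattern
# matching: scan every document for particular words or phrases.
for name, text in documents.items():
    hits = [term for term in search_terms
            if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)]
    if hits:
        print(f"{name}: matched {', '.join(hits)}")

# A language-model-based system is instead posed a natural-language question,
# e.g. "Which of these contracts contains an indemnity clause?", and composes
# an answer rather than returning a list of matches.
```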

When ChatGPT launched at the end of 2022 there was no shortage of people using it to see if it could do a better job than they could, with students asking it to write essays, professors having it summarise research papers, and journalists using it to generate news reports. The GPT Store, which OpenAI launched in January 2024, will presumably take that to the next level again.

While generative AI presents potential benefits and opportunities to individuals and society, Serena Dederding, General Counsel at the Copyright Licensing Agency (CLA), says that without appropriate guardrails it also risks damaging and undermining many aspects of society, in particular the creative industries. Before users can be comfortable and confident in their use of generative AI, questions need to be answered about the data used to train such systems: in particular, whether copyright-protected works form part of that training data and whether rights holders have consented to the use of their works for training purposes. Transparency from generative AI firms is key here. CLA is also concerned to ensure that generative AI or hybrid outputs are identified as such, as outlined in its Principles on Copyright and Generative AI, published last September.

“The UK is known for having a gold-standard copyright regime that is respected internationally and our view is that developers and deployers of generative AI systems must comply with applicable law, including copyright law,” Dederding says. “This is crucial to facilitate the ethical, safe and legal development of AI. We are cognisant of the fact that generative AI firms have scraped a huge amount of data, including copyright-protected works, without permission or fair remuneration to rights holders, because that data is needed to train, test and validate their system algorithms. The use of original, high-quality data is imperative as it will lead to improved outcomes and reduce the risk of bias or inaccurate and false outcomes, but generative AI firms have not been transparent around the data they have used to train their models. That is for a variety of reasons. One is certainly around the copyright infringement issue.”

For Wright, this opaqueness only serves to heighten fears around disinformation and misinformation, two phenomena that have come to prominence in the AI-enabled age.

“Disinformation and misinformation have been around for a while, enabled by data technology,” she says. “Cambridge Analytica and their algorithms ultimately impacted and interfered with political discourse and elections all around the world and AI is just an extra step on top of that, it just makes it better. That's a really interesting trust economy concept. You can find yourself in the situation where you don't really trust something but you're going to have to because you need to use it. That's such a rubbish position to be in - forced trust because if not, then you're excluded. And that's coming from a privileged position - imagine what it's like for marginalised, vulnerable communities. Disinformation, misinformation and deep fakes are a big problem right now and you have to consider what kind of world we live in if they become so prolific.”

A number of US cases on similar issues were in progress at the time of writing, and, notably, a recent report in The Sunday Times highlighted how academics at the universities of St Andrews and Edinburgh were impersonated by AI. The work it produced, which painted the central Asian country of Kazakhstan in a positive light, was passed off as genuine commentary and even unwittingly published by mainstream news sites.

It is against this backdrop that governments are moving fast to come up with legislation that will govern how AI operates and give the public confidence in its trustworthiness. For Martin Nolan, Chief Legal Officer at travel search engine Skyscanner, one benefit that came out of the heightened debate around AI last year is that it has brought the need for regulation to the forefront of legislators’ minds.

“That initial hysteria has died down and people have started to be a little bit more pragmatic about AI, but the hysteria did get it very much onto people's radars in a way that it might otherwise not have done,” he says. “It was really helpful in bringing about faster moves towards regulation in this space than we would otherwise have had. The EU is probably leading the charge in terms of regulation in this space, but would they have been so fast to regulate if that hysteria had not been going? I very much doubt it.”

There's a danger that some businesses might sign up to get this ‘all bells and whistles’ tech, but they don't actually need it. You've got to take a step back and really think about what the problem is that it's solving.

MARTIN NOLAN - CHIEF LEGAL OFFICER, SKYSCANNER

In December the European Parliament and EU member states reached a provisional agreement on what will become the bloc’s Artificial Intelligence Act, which is planned to take effect in stages from late 2024 or early 2025. However, Nolan says organisations are likely to begin thinking about how to comply as soon as the draft legislation is complete, with a key aspect of the act from his point of view being that it contains hefty GDPR-style fines for non-compliance.

“There are some really bad things you must not do and then there's some more innocent stuff,” he explains. “For that really bad category, the fines will be up to 7 per cent of a company’s global turnover. That’s in the frightening, GDPR category that reminds people why [misusing AI] is a really bad thing to do. There's some teeth to this.”

Callum Sinclair adds: “The UK has altered its stance on the need to legislate in this space, having initially taken an apparent ‘pro-innovation’ approach with the view that existing UK legislation was flexible enough to accommodate advances in technology. This view has now changed, but the UK is consequently behind the regulatory curve.”

Regardless of how quickly regulations come in, Nolan says it is vital that anyone considering using AI in the workplace is able to cut through the noise to work out how it can be helpful in their particular scenario. “It can be helpful and people are already doing some fantastic things with it, but some of it is probably still at the gimmicky stage,” he says. “There are elements where people are enjoying experimenting with it and playing with it, but they're not necessarily seeing industrial scale use of this to the extent that it's able to remove or repurpose workforces.

“For large-scale data analysis it's going to be fantastic. Hopefully it can also democratise access to data across organisations and I would also expect to see people having more time to devote to being creative and strategic rather than being bogged down by bureaucracy and administrative tasks. And who would really ever complain about that?

“[But] there'll be some organisations that don't actually need it. There's a danger that some businesses might sign up to get this ‘all bells and whistles’ tech, but they don't actually need it. You've got to take a step back and really think about what the problem is that it's solving.”

Wright agrees. She believes that AI “could make our lives amazing”, particularly in areas such as medical imaging, where an AI tool will never get tired or stressed like a human might. She also believes it could “make the world more equal and more fair if deployed in the right way”, but she cautions that it cannot be seen as the solution to every business problem.

“One thing I always say to businesses is just look beyond the hype and don't fall for the FOMO [fear of missing out],” she says. “Businesses can feel pressured to explore AI and go down the route of using it because they feel like they will be missing out or they'll be losing a competitive edge if they don't, but the truth is AI doesn't solve all problems.

“You really need to understand what your business problem is and understand whether AI is the technology to solve it because if you just go leaping into it because you want to use some shiny AI, you're going to end up going down a rabbit hole and throwing resources about. You need to acknowledge that AI doesn't solve it all. It is not magic — that's one thing I repeat everywhere — it is just not magic. It is very clever, but ultimately it's just very clever maths.”

And, at the end of the day, Wright stresses that technology can fail, and when it does it is the people behind it who will have to take responsibility.

“Let's not talk about robots taking over the world,” she says. “I'm a big fan of shifting the focus away from technology and onto the people behind it because, ultimately, you can regulate the technology or you can regulate the people behind it, making the decisions, developing it, selling it, deploying it. They are ultimately the people responsible. Don't blame the AI when something goes wrong because, ultimately, it is being instructed to do something. It's a tool that does what it's been made and told to do by a person so don't shirk that responsibility away.”

Regardless, there is no doubt that the paradigm has shifted.

The final word goes to Sinclair: “There is no putting the genie back in the bottle; generative AI, and all of the wonderful opportunities it can bring, are here to stay. But responsible creation, use and deployment will be a critical part of the Trust Economy if we are to avoid the worst of societal risks and challenges.”

Whether your business creates, sells or is enabled by technology, we, as a digitally native legal firm, give thoughtful and precisely informed advice.

To discuss how to ensure you have the right assurances and protections in place to win earned trust and benefit from growth in the trust economy, get in touch. We’d love to have a conversation.

CALLUM SINCLAIR - HEAD OF TECHNOLOGY & COMMERCIAL

callum.sinclair@burnesspaull.com
+44 (0)141 273 6882
+44 (0)7391 405 414