Nobody ever said it would be easy. As we identified in a previous blog on this subject, the rapid developments in the field of autonomous vehicles have posed a number of novel legal questions, and the introduction of the Automated and Electric Vehicles Act 2018 does not come close to providing all of the answers.

In an attempt to head off some of these issues at the pass, the Scottish Law Commission and the Law Commission of England & Wales have launched a preliminary consultation paper focussing on the safety of passenger vehicles.

More questions than answers

From a product liability perspective, the provisional view advanced by the joint Law Commissions is that, whilst the Act provides the necessary statutory basis for compensating victims in the event that an automated vehicle causes damage, further clarification or guidance may be required in relation to three key areas: causation, contributory negligence and data retention.

Whilst the 2018 Act provides that the insurer of a vehicle is liable for an accident that “is caused by an automated vehicle when driving itself”, it is left to the courts to interpret what the “cause” of an accident is. For example: where a cyclist, surprised to see a vehicle with no driver, loses control and crashes into a pedestrian, what has “caused” the accident? Similarly, there are gaps in the “single insurer” model, under which the insurer of an automated vehicle is first in line to compensate for damage caused by the vehicle, but can then recover from the person ultimately liable for the accident under existing laws.

In particular, what happens when an uninsured human driver crashes into an automated vehicle, which in turn shunts forward and damages a third vehicle, or where an automated vehicle swerves to avoid an erratic (and uninsured) cyclist, and in doing so hits a parked car? There is a danger of an insurer having to pay out for an accident where the automated vehicle they insure was not at fault, with no prospect of recovering from the real culprit. The question is: should there be some guidance for the courts on these issues?

A further potential blind spot in the Act relates to contributory negligence, i.e. situations where the victim is partly to blame for the accident. The current position only recognises a sharing of fault between the victim and another “person”. But what happens when the car that caused the damage was not driven by a person, and the insurer’s liability does not involve any suggestion of “fault” on the insurer’s part? Another unknown is what happens where an automated vehicle has “caused” the accident simply by its involvement, but the injured party is wholly responsible: could compensation to the injured party be reduced to zero (which is not usually permitted)? The paper questions whether this aspect of the Act is sufficiently clear.

The third and final area that the paper highlights is data retention. With automated vehicles generating a far larger amount of data than conventional vehicles (several terabytes per day), is it reasonable for insurers to be expected to preserve all of this data for the normal period (at least three years from the date of the accident)? Perhaps they should only have to do so if the claimant notifies the police or their insurer within a shorter set period.

The “state of the art defence” in product liability claims

Chapter 6 of the paper does provide some crumbs of comfort to manufacturers and component makers alike, in the form of the comments made regarding the “state of the art defence”. The defence means that producers are not liable where the state of scientific and technical knowledge at the time the product was put into circulation was not such that they could be expected to have discovered the defect. This is described as being “particularly relevant to automated vehicles as the technology is new and may give rise to novel and unexpected problems”. This is a view we would support.

Where do we go from here?

If there’s one thing that can be taken from the joint preliminary consultation paper, it is that the law is not yet fully prepared for the coming tide of automated vehicles. It is perhaps not surprising that we have reached this stage – we are, after all, entering into an area of the unknown, and the law is having to grapple with novel concepts that do not neatly fit within existing frameworks.

The legal profession is not alone in this dilemma, of course. The development of automated vehicles has reignited debate around the “Trolley Problem”. The modern reimagining of this age-old moral quandary asks whether it is better for an automated vehicle to take a course of action that leads to the death of its passengers, or one that leads to the death of innocent pedestrians.

A far-reaching recent study by the Massachusetts Institute of Technology (which surveyed millions of people across 233 countries and territories) posed many different variants of this question: what if the pedestrian is an old man, or a dog? What if the passengers are criminals? The results of the survey identified some global trends, such as a preference for sparing humans over animals. But they were far from uniform, with different nationalities and regions seeming to prioritise different traits, such as youth or lawfulness.

The idea behind the study was to initiate a conversation about the ethics of technology, and to inform and guide those who eventually decide how automated vehicles will be programmed to respond to real life dilemmas. However, it does raise an obvious question, which illustrates the difficulties faced by Government in legislating for automated vehicles: if humans cannot even agree on the “correct” answer to this moral dilemma, what chance have engineers (and lawmakers) got of correctly programming and monitoring this frighteningly powerful new technology?

The results of the joint consultation can be found here.