
“Who should be punished if a child is run over by a robotic car?”

Interview: Urs Hafner

Legal academic Sabine Gless has touched a raw nerve. She argues that the law is not geared up to deal with robots that make decisions themselves – and may put people’s lives in danger.

“Robots are not legal persons. They are not liable for any damage they may cause and cannot be convicted.” Sabine Gless (Image: Andreas Zimmermann)

The robots are coming! In some cases, they are now impossible to miss. Self-driving cars and mail vans are completing test drives and, according to their promoters, will soon be traveling along our streets. Pilotless drones are in the air taking pictures and – if major online retailers such as Amazon have their way – will soon be delivering packages. Many robots are already part of our daily lives, even if we are not aware of it – the Google search engine, for example, which tailors its results to our search preferences, or the robotic scalpels used in operations.

The public are fascinated by robots. They are seen as “intelligent”, meaning that they respond to their environment based on the data they continually collect and analyze, they do not get tired, and they never have a bad day. Yet robots are only machines, and they are only as intelligent as their programming. “Robots have no intuition and no morality. They can’t reflect on what they are doing,” Sabine Gless says. That is how they differ from people. As an academic lawyer, Gless is grappling with the legal issues thrown up by the use of “intelligent agents”.

UNI NOVA: Ms Gless, you say that the legal system isn’t geared up to deal with robots. However, a robot is not a person, but a thing. The law has experience with that, so where is the problem?

SABINE GLESS: Yes, robots are machines programmed by people. To that extent, they are things. Even if they carry out functions autonomously and without human supervision – for example, if they drive cars or respond to search queries on the net – the law does not equate them with people. They are not legal subjects, meaning that they are not liable for any damage they may cause and are not criminally responsible. However, if what many computer scientists claim is true – that in our daily lives we will soon be dealing with countless intelligent machines that control themselves – then in the near future people will be faced with many actors who bear no legal responsibility. The key question then is: who takes responsibility if something goes wrong?

UNI NOVA: Then the manufacturer is liable and the insurance company pays. Or you punish the programmer.

GLESS: That doesn’t necessarily follow. What if the manufacturer and the programmer can prove that their work met the latest scientific and technological standards and that they made no mistakes? Would it be right to hold someone liable for any damage caused by an innovative device that, we hope, will improve our quality of life?

UNI NOVA: That would be tough for the computer scientist concerned.

GLESS: That’s right. Take the example of robot carers, which are welcomed by society as a means of solving the problem of older people requiring care. Who is to appear in court for a device that, instead of lifting its elderly patient into bed, has thrown him onto the floor and possibly killed him? It is often said that society needs to see perpetrators punished; in the case of a fatal accident, how do we satisfy that need? Where a death has occurred, you can’t just reach an agreement in civil law and pay for the damage – society won’t stand for that. Are you suggesting that the robot be put in jail?

UNI NOVA: No, that would be ridiculous.

GLESS: Exactly. Criminal law is made for people. We punish convicted people on the basis that they feel remorse and reflect on the things they have done – in other words, that they remember them. That is an old idea that is still central to our law today. Moreover, our punishments are geared towards human sensibilities. It hurts a person to have to pay money or to be incarcerated. A robot feels no remorse.

UNI NOVA: Some computer scientists think that soon robots will be capable of that.

GLESS: There are also lawyers who think that. However, punishing a robot is not a convincing solution. We need instead to solve the problem of responsibility. As lawyers, we must not forget that, when damage takes place, it is the job of the law to apportion liability to the person who is responsible. If the law can’t achieve that, we have a problem.

UNI NOVA: Are there any judicial precedents?

GLESS: Not yet. But that doesn’t mean that there has not been damage. You find stories on the net about accidents involving self-driving cars, but nothing about legal disputes. Presumably those responsible want to avoid them and prefer to pay out.

UNI NOVA: Let’s take an example. Say the police deploy a drone to carry out surveillance on a far-right demonstration, but the device comes down and kills a man. The manufacturer of the drone would then be taken to court.

GLESS: Not necessarily. It is not clear who is to blame here. Is it the manufacturer, the police, or maybe a hacker who has tampered with the device? There is nothing to stop legislators from holding the manufacturer liable for all damage resulting from use of the drone, even where the manufacturer is not at fault. We call that strict liability. But the manufacturer would probably ask why the seller or user of the drone shouldn’t be liable as well.

UNI NOVA: So the case of the fatal drone accident could end without anyone being convicted?

GLESS: As the law stands, yes. That is not uncommon. Fatal accidents can take place even if no one makes a mistake. If a ski instructor, having carefully assessed the conditions despite a high avalanche risk, decides it is safe to take a group out, and a slab of snow then comes down and kills a student, we don’t necessarily blame him for that. The difference with a robot that causes harm is that there we are consciously giving a machine the power to make decisions that could result in damage.

UNI NOVA: And the party responsible is not subject to criminal law.

GLESS: Correct. That is why legislators need to decide how responsibility should be apportioned.

UNI NOVA: Robots are not generally at the center of legal attention. Are you a voice crying in the wilderness?

GLESS: No. I am not the only person, or even the first, to engage with this subject. However, jurisprudence tends not to deal with issues until they come before the courts. The world has changed considerably in the last 20 years, particularly because of the internet.

UNI NOVA: Doesn’t the law always lag behind new developments, anyway? I am thinking, for example, of family law in relation to same-sex parenthood.

GLESS: Yes, in many instances the law is too conservative. But its inertia also has positive sides. It can’t constantly be absorbing every conceivable socio-political current. That is the job of politics.

UNI NOVA: You take some of your arguments from science fiction films or literature. Are there people in the academic community who think, “Gless is going off the rails”?

GLESS: (laughs) No one has ever put it to me like that. Actually, in art you come across a lot of visionary ideas about the relationship between society, technology, and the law. Shakespeare and Dürrenmatt thought through some fundamental issues around guilt and the law. Literature and films often highlight social conflicts and how the law tries to resolve them.

UNI NOVA: What is distinctive about the law’s approach?

GLESS: As lawyers, we think that we can resolve conflicts of interest in society by creating and enforcing suitable rules. In that we differ from economists, for example, who seek to resolve conflicts by creating systems of incentives to motivate people to behave in particular ways.

UNI NOVA: Does the law really have a chance, faced with the might of the digital giants?

GLESS: It won’t be an easy battle. The situation today has been compared with the period following the industrial revolution, when the state had to rein in the new captains of industry. It is no accident that people describe data as the new oil. I think the legal concepts we have are sufficient. The question is, can we enforce them?

UNI NOVA: Do you know of any successful examples?

GLESS: Google, for instance, was required by the German courts to monitor its search engine. If an automatically generated search suggestion violates someone’s personality rights – by linking them to prostitution, a sect, or a criminal offense, for example – Google must block the suggestion so that the libel is not repeated over and over again.

UNI NOVA: What can the law learn from that case?

GLESS: That its starting point needs to be where the virtual becomes tangible in the real world, in the behavior of manufacturers, service providers, users, and licensing authorities. Lawyers need to sit down and have a conversation with computer scientists, technicians, and academics in the humanities and social sciences. If the law and technology don’t collaborate, there is a risk that we will have gaps in responsibility.

UNI NOVA: So responsibility lies with the state and with industry?

GLESS: It lies with everyone, including civil society. We should be under no illusions – industry will pursue its own interests. Google is not interested in the “right to be forgotten”. Everyone who uses intelligent devices, is active on the net, and leaves behind data trails needs to deal with the resulting conflicts of interest: Who do those data belong to?

UNI NOVA: Who do they belong to?

GLESS: That is an extremely controversial issue. First, we need to clarify what “belong to” means in each specific case. An item of data or information isn’t a piece of bread that you can put in your pocket. The only thing that is clear is that we need to tackle these questions. The more digital development speeds up, the easier it becomes to control us and the greater the danger to our freedoms. People are becoming more aware of that, I think. I started researching this area in Basel a few years ago because my eyes were opened by questions from computer science students attending my lecture on internet law.

Sabine Gless is Professor of Criminal Law and Criminal Procedural Law at Basel University’s Law Faculty. She is looking at the effects of digital development on the legal system. Her research specialisms are in criminal procedural law and international criminal law, especially the law on mutual legal assistance and European criminal law.
