Who is responsible for an autonomous robot accident? A lawyer answers

RoboTech Vision presentation

October 23, 2020

When developing a new product, it is very important to consider consumer protection from the perspective of safety and privacy. This issue is becoming increasingly important in the development of autonomous robots, which is also the focus of RoboTech Vision. We therefore contacted Zuzana Jahodníková, a partner at the law firm Malata, Pružinský, Hegedüš & Partners Ltd. (MPH) who specializes in IT and technology legislation, to find out who is legally responsible in the case of an autonomous vehicle accident and how the legislation protects users against the processing of their personal data.

The legal regulation of autonomous vehicles and robots that come into contact with people is only in its infancy. An example of its urgency is the accident in which a pedestrian was recently killed by an autonomous car during a test in Arizona. During the COVID-19 pandemic, the topic of GDPR in connection with intelligent devices again became a frequently discussed issue. In many of these areas, it is possible to apply existing laws from other sectors. However, when using artificial intelligence, we often reach the limits of the law, and questions of its predictability or responsibility therefore often remain unclear.

Can all risk scenarios be simulated in law? What can be done if the law is insufficient?

“Sometimes it is not possible to regulate or predict absolutely all situations that may occur in the real world through legal or other regulations. Objectively, this would sometimes not even be possible, as artificial intelligence also confronts us with certain limits on foreseeing its development. Generally, however, through the general formulation of legal norms, the legal system should strive to ensure that its regulatory action covers as many of the situations that life can bring as possible. Thanks to this feature of the law, today we can, with varying degrees of success, apply the existing legal framework to the regulation of relationships that result from the use of artificial intelligence.

Several legal tools are available to eliminate the “short-sightedness” of the current legislation. We often rely on analogy (the application of norms governing related or similar matters), on a broader interpretation of legal norms, or on the use of the general legal policies and principles on which our legal system is based. Of course, this is not an ideal situation, because it brings a great deal of legal uncertainty.”

Does legislation take into account the differences between man-operated robots and autonomous devices? Who is legally responsible in the case of an accident?

“Security is a big issue with technologies that use artificial intelligence, and of course, it raises the greatest concerns. The operation of means of transport is still a source of increased danger and an increased possibility of damage, even without the direct use of artificial intelligence in their control.

Natural and legal persons who carry out transport are responsible for damage caused by the special nature of this operation. They cannot be released from their liability if the damage was caused by circumstances arising from this operation. An exclusion of liability is possible only if it can be proved that the damage could not have been avoided despite all the necessary efforts. Where the operations of two or more vehicle operators collide, they are all liable in proportion to their share in the damage caused.

For things other than means of transport, the legal regulation concerning liability for damage caused by a device or a thing also applies. Everyone is liable for damage caused by circumstances arising from the nature of the device or other items used to fulfil an obligation. Nobody can avoid this responsibility. These rules are relatively well established, and the detailed conditions for their application are also determined by the case law, i.e. decisions of courts in cases in which these provisions have been applied to specific situations.

Therefore, if we are talking about liability for damage at the civil level, we already have a certain model of liability. However, it is difficult to imagine that these rules will be sufficient in the case of autonomous technologies and artificial intelligence.

At the criminal law level, for example, it has to be proven whether the injury or death of the victim was really caused by the perpetrator. This can be a problem in the case of technologies using artificial intelligence. Artificial intelligence already enables unpredictable behaviour, caused by the ability to act autonomously rather than only on the basis of predetermined algorithms, especially when we talk about the so-called “black box” effect that occurs in robots with artificial intelligence.

The “black box” effect is a metaphor for the fact that although in technologies with artificial intelligence we can identify the inputs (commands and algorithms) and the subsequent outputs (the behaviour or reaction of such technologies), we sometimes do not know how an autonomous robot arrived at a given action. It is precisely this lack of knowledge that can significantly complicate establishing the causal connection between an action, e.g. of a programmer, and the unlawful consequence of a pedestrian being killed by an autonomous vehicle. This is a problem for which the law has yet to find a generally acceptable solution.”

Is it possible to attribute guilt even if the robot enters an unknown environment and gets lost? Does material responsibility also apply to robots?

“To put it simply, we can say that in civil law everyone is basically liable for the damage caused by his own unlawful behaviour. This is, of course, a very general rule that is being refined, supplemented, and also changed, depending on the circumstances that may arise.

A robot does not have the capacity to hold rights and obligations or the capacity to take legal action. The law does not give robots rights and obligations comparable to those of humans. For this reason, they cannot conclude a material liability agreement with their “employers”. In the legal sense, robots still remain objects, things.

However, it is possible that future technological developments will raise the question for legislators as to whether humanoid technologies with artificial intelligence can be given legal personality and, if so, when and under what conditions. The fact that this is not entirely unrealistic is also confirmed by the first case of a humanoid named Sophia, who was granted citizenship by Saudi Arabia. Thus, Sophia became the first humanoid with a status that until then had been granted only to humans.”

In your opinion, would insurance be the solution in similar cases?

“Liability insurance for damage caused by technologies with artificial intelligence is, of course, one way to minimize the negative consequences caused by the operation of these technologies, or to protect against losses caused by the actions or inaction of artificial intelligence.

I assume that the commercial insurance market will bring products designed to cover such risk cases. It will therefore also be the task of insurance companies to prepare insurance products in this category. The activity of insurance companies will also depend on the success of adopting appropriate legal regulations, since the applicable legal framework must be taken into account when settling the fundamental questions of insurance conditions.”

Autonomous mobile robots use a sensor system, which may include a camera that can capture a face, or a scanner that records biometric data. If a robot moves in public, what are its operator's obligations regarding the GDPR?

“The rules for protecting personal data will apply to monitoring by autonomous robots, as is the case, for example, with standard industrial cameras. A correct definition of the purpose of processing personal data, a determination of the retention period of the recorded footage, and the fulfilment of information obligations are required. Operators must therefore meet their obligations in this area and pay attention to the lawfulness of the collection of personal data.

This was one of the areas in which, during the COVID-19 pandemic, attention had to be paid to the fulfilment of legal obligations, for example when measuring body temperature or monitoring whether faces are covered with a mask. Information about the health of a natural person belongs to a special category of personal data identified as sensitive data, which is subject to relatively strict legal protection and cannot normally be processed. At the same time, the expansion of body-temperature monitoring by employers and companies has increased public interest in the question of under what legal preconditions such monitoring may be performed at all.

The operator should always consider using the least invasive means, in accordance with the principle of proportionality, and also take care to protect the privacy of data subjects. The risk of inappropriately interfering with the right to privacy should therefore be assessed by operators before deploying autonomous robots, or robots in general.”

Author of the post

Dominika Krajčovičová

Marketing manager
