
Legal liability for AI decision making

Zahide Altunbaş Sancak, Attorney, z.altunbas@guleryuz.av.tr
Aziz Can Cengiz, Attorney, a.cengiz@guleryuz.av.tr
Guleryuz & Partners

TODAY, artificial intelligence and related technologies are being deployed by governments and businesses alike across a wide spectrum of sectors. Undoubtedly, these systems carry accompanying risks that are difficult to measure.

In a world with many examples of AI-induced harm, such as self-driving cars causing fatal crashes, legal liability arising from AI decisions has become an essential discussion.

CAN AI SYSTEMS BE HELD LIABLE FOR THEIR OWN DECISIONS?

Today, all around the world, legal entities as well as natural persons can be held liable for their actions. However, AI systems lack the element of awareness, which sets them apart from legal entities, which are formed and run by real people who are aware of their actions. Therefore, it is currently not possible for an AI system to be held liable for its own actions.

While the legal nature of artificial intelligence remains a point of disagreement in European law, the dominant view is that artificial intelligence systems in general use can be classified as a “product” within the meaning of Article 2 of the Product Liability Directive 85/374/EEC adopted by the Council of the European Communities, and can therefore be assessed in the context of “product liability”.

In Turkish law, the concepts of “product” and “producer” are not fully regulated under the Turkish Code of Obligations. After a long period in which there was a legal gap on the issue, the Product Safety and Technical Regulations Act No. 7223, or PLTRA, entered into force on March 12, 2021. Under this act, intangible goods, and by extension AI systems, are classified as “products”.

WHO CAN BE HELD LIABLE FOR AI DECISIONS?

AI is thus classified as a product under both the Directive and the newly adopted PLTRA in Turkish law. Liability for possible damages should therefore be assessed in accordance with this classification.

The Directive and the PLTRA are compatible in many respects. Most importantly, both regulations stipulate that the manufacturer and the importer, together with the distributor as a secondarily liable party, are jointly liable for any damage caused by a defect in the product, whether suffered by the user or by non-user third parties.

Since the Directive and the PLTRA adopt the principle of strict liability, it is sufficient to prove that the product is defective, that damage occurred, and that there is a causal link between the defect and the damage in order for the manufacturer to be held liable. In Turkish law, it is argued that the injured party’s burden of proving the product’s defectiveness should be interpreted as narrowly as possible.

HOW CAN MANUFACTURERS AVOID LIABILITY?

According to the PLTRA, the manufacturer is relieved of liability where they prove that they did not place the product on the market, that the defect was caused by the distributor’s or the user’s intervention, or that the defect resulted from the product’s compliance with mandatory technical regulations or other requirements. Moreover, even where the defect itself is not attributable to the user, the manufacturer’s liability may be reduced or removed entirely if the damage was partly caused by the user.

Under the Directive, the manufacturer is also relieved of liability if “the state of scientific and technical knowledge at the time when the product was put into circulation was not such as to enable the existence of the defect to be discovered”. This provision, the most commonly invoked ground for relief from liability, is frequently criticized. Since this defense is particularly applicable to products using new digital technologies such as artificial intelligence, many believe that a special regulation for such products is necessary.

CONCLUSION: WHAT SHOULD MANUFACTURERS AND USERS BE AWARE OF?

In light of the foregoing explanations, there are not yet concrete and specific regulations on determining legal and criminal liability arising from artificial intelligence decisions. However, existing legal institutions can provide guidance in this regard and offer solutions to possible problems. Manufacturers, who are the primary beneficiaries of artificial intelligence technologies, are also the primary bearers of liability. They should ensure that the artificial intelligence-based software and systems they put on the market will not make decisions that may cause damage, as they will be primarily responsible for the damages caused by a possible erroneous AI decision, even if they are not at fault. Users should likewise take the utmost care and avoid actions that may be interpreted as user fault in a possible accident, especially when using advanced systems, such as self-driving cars, where an error may cause great damage.
