What are the advantages of ignorance over intelligence?

The greatest vulnerability is ignorance

On a legal obligation to establish the traceability of artificially intelligent decisions

This article is part of the “Artificial Intelligence” series of articles.

I. Introductory remarks

In the article "The Third Me" in Recht innovativ (Ri) 02/2018, it was demanded that decisions made by artificially intelligent systems must be traceable. Something that acts both foreseeably and unforeseeably must not be released into legal transactions to operate independently. Someone, but not the machine itself, must be liable for any harmful consequences. In order to link claims to the negative consequences of decisions made by artificially intelligent systems, these decisions must be traceable to an error or misconduct, which in turn must be attributable to a liable party. The production of artificially intelligent systems and the offering of services based on them must not serve to evade or even conceal responsibility, as the creation of an "electronic person" would ultimately allow. The traceability of decisions made by artificially intelligent systems is a social necessity, because its absence creates vulnerability of unknown extent. Artificially intelligent systems will strongly influence our future, which is why they must not determine it unilaterally.

German law already offers a sufficient legal framework for assessing civil and criminal liability. However, such an assessment is only possible if the damage-triggering decision, based on pure computing processes, can be attributed to a responsible person. This article therefore investigates how a concrete legal obligation to establish the traceability of artificially intelligent decisions could be implemented.

II. Distinguishing traceability from transparency and explainability

1. Transparency

The concept of traceability must be distinguished from the transparency of algorithms and computing processes, for example in the context of machine learning (ML). The terms must not be used synonymously, because the transparency of algorithms and the like does not necessarily entail the traceability of the decisions made under their application, and in particular not the possibility of determining civil or criminal liability. In essence, the data processed by an algorithm or system determine the result, not the computing process itself. An algorithm or computing process can be error-free while the data used are faulty, discriminatory in tendency, or even "poisoned". Sensors from another manufacturer installed in a machine can incorrectly record and evaluate information from the environment and forward it to the still correctly working algorithm or to the next processing step. The feedback required for learning can likewise be misleading. A transparent algorithm or computing process is therefore only one, albeit important, aspect of the traceability of an artificially intelligent decision. Transparency of an algorithm or computing process is particularly important in research and development, in maintenance within the scope of performance obligations, and in troubleshooting within the scope of warranty obligations. As a rule, it is not the manufacturer but an obligated third party who could not fulfill his contractual obligations without an understanding of the processes.
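This "garbage in, garbage out" point can be made concrete with a minimal sketch; the decision rule, names and values below are hypothetical illustrations, not drawn from any real system:

```python
# Minimal sketch: a correctly implemented, fully transparent decision rule
# still produces a wrong, potentially damaging decision when fed faulty
# sensor data. All names and thresholds are hypothetical.

def brake_decision(distance_m: float) -> str:
    """Correct, transparent rule: brake when an obstacle is closer than 10 m."""
    return "BRAKE" if distance_m < 10.0 else "CONTINUE"

true_distance = 4.0            # the obstacle is actually 4 m away
faulty_sensor_reading = 40.0   # a miscalibrated third-party sensor reports 40 m

# The algorithm itself is error-free ...
assert brake_decision(true_distance) == "BRAKE"

# ... yet the faulty input data leads to a damaging decision:
print(brake_decision(faulty_sensor_reading))  # -> "CONTINUE"
```

Transparency of the rule alone would reveal nothing here; only knowledge of the sensor and its data would allow the error to be located and attributed.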

2. Explainability

The explainability of artificially intelligent decisions, which is also widely demanded, belongs to the phase preceding the use of artificially intelligent systems: for example, when informing the potential user so that he can make an informed decision for or against a purchase or use under other contracts. Explainability does not require a comprehensive explanation of the technical processes; as a rule, only a few people could understand one anyway, and establishing such a comprehensive understanding cannot be expected. However, the explanation of artificially intelligent decision-making must enable the recipient to understand which risks arise from the use and what happens to his personal data that are (and must be) processed in the course of it. Such an understanding can usually only be created with a simple presentation that dispenses with complex technical detail. Yet a simple explanation that is not wrong requires a thorough understanding on the part of the person explaining. Transparency alone is not enough.

3. Defining terms in a purpose-oriented manner

Of course, it cannot be ruled out that other terms can also be used meaningfully. In order to avoid loopholes, however, it is important always to distinguish according to the purpose that the disclosure of decision components is meant to serve. A legal obligation limited to making algorithms or computing processes transparent cannot, in any case, also establish an obligation to provide high-quality data on the part of those who could lead a correctly working algorithm or computing process to incorrect decisions with "garbage data". This additionally requires transparency of the interacting parties, interactions and interdependencies, in order at least to narrow down the sources of error. Determining the error, its source and its quality, and attributing them to a responsible person, are thus essentially the goals to be achieved with the required traceability of artificially intelligent decisions. Incidentally, what is traceable can also be explained simply.

4. The bracketing effect of traceability reduces vulnerability

Traceability therefore necessarily encompasses everything that is necessary and possible for the purpose of clarification. In short: as understood here, traceability means the continuous documentation, from the outset, of all processes and all components involved, whether of a human, organizational, mechanical, electronic or informational nature. A bracketing effect is therefore ascribed to it here.
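What such documentation of all involved components could look like in practice can be sketched as follows; the record structure and all field names are hypothetical assumptions, not a legally or technically defined format:

```python
# Minimal sketch of documenting "all processes and components involved"
# for a single artificially intelligent decision. Structure and field
# names are hypothetical illustrations.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str          # when the decision was made
    model_version: str      # informational component (software/model)
    training_data_id: str   # provenance of the data used
    sensor_ids: list        # mechanical/electronic components involved
    operator: str           # human/organizational component
    inputs: dict            # the concrete inputs to the decision
    output: str             # the decision itself

record = DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="model-2.3.1",
    training_data_id="dataset-2018-06",
    sensor_ids=["lidar-A", "camera-B"],
    operator="operator-17",
    inputs={"distance_m": 40.0},
    output="CONTINUE",
)
print(asdict(record))
```

Persisted for every decision, such records would allow an error to be attributed to a specific component, which is precisely the attribution the traceability requirement aims at.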

The explainability of the decision-making of artificially intelligent systems before use, their transparency during use, among other things, and the aforementioned traceability after a use that triggered a damaging event are seen here, in their interaction, as necessary and as the achievable opposite of the alleged "black box" of artificial intelligence. Its opacity prevents any form of understanding and thereby creates not only mistrust but also fear. Fear means vulnerability, and vulnerability must be reduced as far as possible.

III. Why must decisions made by artificially intelligent systems be traceable?

The reconstruction, and thus the visualization, of decision-relevant processes and errors by means of so-called reverse engineering must not be the task of the injured party, for example in order to prove the causality of the error vis-à-vis the manufacturer. A reversal of the burden of proof in the injured party's favor, owing to his lack of insight into the manufacturer's organization, does not lead to satisfactory results either if the manufacturer is allowed to invoke the "black box" and thus the supposedly impossible traceability of the decision-making of artificially intelligent systems. Such a disappointment damages innovation in the long term, because it lastingly undermines confidence in technological progress and prevents improvement of inadequate predecessors.

IV. How can decisions of artificially intelligent systems be made traceable?

1. Not everything must be understood in detail in order to assign responsibility

It must be made clear in advance that not all decisions made by artificially intelligent applications can be fully understood today. The more complex systems of artificial intelligence become, the greater the risk of incomplete troubleshooting and uncertain determination of liability. Machine learning in particular teaches us that as the number of layers, neurons or "nodes" increases, the overview and control of the creator decrease. Nevertheless, the best possible determination of responsibility must always be guaranteed. The fact that human intelligence still cannot be defined and understood down to the last detail has not yet saved anyone from criminal or civil liability.
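A small worked illustration of this growth, with hypothetical layer sizes, shows how quickly the number of learned parameters, and with it the limit of human overview, escalates in a fully connected network:

```python
# Worked illustration (hypothetical layer sizes): the parameter count of a
# fully connected network grows rapidly with depth and width, one reason
# human overview of the learned decision process shrinks.

def parameter_count(layer_widths):
    """Weights + biases of a fully connected network with the given layer widths."""
    return sum(w_in * w_out + w_out
               for w_in, w_out in zip(layer_widths, layer_widths[1:]))

print(parameter_count([10, 10, 1]))          # tiny net:               121 parameters
print(parameter_count([10, 100, 100, 1]))    # two hidden layers:   11,301 parameters
print(parameter_count([10, 1000, 1000, 1]))  # wider layers:     1,013,001 parameters
```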

2. Traceability requirements graded according to importance and intervention quality

Against the background of the very different relevance and intervention quality that artificially intelligent decisions have for people, this means that the requirements for traceability must be graded accordingly:

Traceability does not mean permanent, let alone official, monitoring when it comes to machine text recognition and autocorrection. An exclusively retrospective, damage-related assessment of artificially intelligent decisions, on the other hand, is out of the question, for example, in the case of care robots that "work" with human health and life. Nor does traceability mean that the manufacturer may have permanent access to devices owned by the user.

We must continue to live with the doubts arising from the unpredictability of (communal) life, especially in favor of the freedoms guaranteed by the constitution; this is part of the general risk of life. Doubts of a more far-reaching character, resulting from a foreseeable unpredictability that existed from the outset at the time of placing on the market, must however be borne by the person who consciously created or accepted them. The boundaries between general life risk and attributable risk are not clear, have never been clear, and will always be blurred. For these cases, German law provides indefinite legal terms such as "public interest" or "common good", with a margin of appreciation for the executive. However, this assessment cannot be carried out without a minimum of traceability.

3. What risk is acceptable and can even be contractually assumed?

"Risk" also means that under certain circumstances, for example in the case of a comprehensive explanation and explanation before the use of unpredictable, artificially intelligent decision-making systems, the responsibility of the manufacturer or provider can be reduced with reference to the contributory negligence of the user within the meaning of Section 254 of the German Civil Code .

However, such a reduction is out of the question if attempts are made to shift responsibility in violation of, for example, the provisions on general terms and conditions (§§ 305 ff. BGB), the principle of good faith (§ 242 BGB), the prohibition of usury and immorality (§ 138 BGB), statutory prohibitions (§ 134 BGB) or special laws such as the Act against Unfair Competition (UWG) and the General Data Protection Regulation (GDPR). Idleness and ignorance, profit maximization through savings on quality, and legal gimmicks must not become means by which manufacturers and providers of increasingly popular artificial intelligence products exempt themselves.

For this reason, the legal construction of the electronic person was also rejected here. It sets a completely wrong incentive and would mean that decisions made by artificially intelligent systems are not even subjected to an attempt to make them traceable, with extremely dire consequences for the injured and for unsuspecting subsequent users of artificial intelligence systems that cannot be improved. The protection of the weaker party in cases of information asymmetry, which comes into play, for example, within the framework of the provisions on general terms and conditions (§§ 305 ff. BGB), and likewise the reversal of the burden of proof developed by case law in the context of tortious producer liability (§§ 823 ff. BGB), must not be undermined simply by shifting the obligation to pay compensation onto machines.

Objectively avoidable damage should not have to occur in the first place. At the very least, the user must be able to estimate its probability of occurrence as far as possible, in order to be able to protect his physical and financial integrity from the start. If he deliberately takes such a serious risk, sole responsibility cannot be placed on the manufacturer or provider.

4. Traceability in a multidimensional space is a challenge

The required traceability thus spans, sometimes more tightly, sometimes more loosely, a large, multi-dimensional space of possible applications of artificially intelligent systems, with the different horizons and thus interests of users, providers, manufacturers and authorities. This multi-dimensional space must be defined on the one hand by public law, for example with regard to product safety, but also within the framework of contractual risk allocation and tortious producer liability. Of course, not every individual case can be regulated. Not everything can be monitored and checked. In addition, in order to offer a minimum of flexibility, laws must be fundamentally technology-neutral. A law must neither allow itself to be circumvented by the use of other technologies, nor prohibit a technology that is (also) useful. In this space, which cannot be fully delimited, the legislator makes the decisions of artificially intelligent systems explainable, transparent and traceable through a clear distribution of rights and obligations, as well as advantages and disadvantages, that optimally balances the interests involved.

V. How can a balance of interests, and thereby an attractive location, be created?

These requirements read as downright paralyzing. But this impression must not justify refusing the legal work of elaborating traceability for the purpose of determining responsibility for artificially intelligent decisions that cause damage. A way must be found.

1. More duties and fewer rights make the location less attractive

However, when it comes to the balancing distribution of legal responsibility, the following problems need to be solved:

- The more comprehensive the obligation to make the decisions of artificially intelligent systems traceable, the greater and more deterrent the associated liability risk for innovative developers and providers. There is a risk of migration abroad or into legal gray or black zones.

- The greater one's own liability risk, the lower the interest in creating traceability. Nobody wants to incriminate, let alone harm, themselves. Here, too, there is a risk of migration.

- In addition, a comprehensive legal obligation to create traceability means difficulties with, and even delays in, market launch, with the risk of being overtaken by other, less restrictive nations. Through the reduced attractiveness of the location, this again entails the risk of migration.

So how can Germany keep the risks for users small and the gates open to the opportunities of free, creative development, in order to establish itself as one of the leading nations in the field of artificial intelligence? More duties alone cannot be the solution. This also applies to offers of further developing, artificially intelligent decision-making systems that are no longer under the manufacturer's proprietary control, especially in the context of machine learning.

2. Compensatory approaches in favor of more freedom of development

The solution, however, cannot be to exempt innovative companies from the obligation to make the decisions of their artificially intelligent systems traceable. A satisfactory result can only be achieved through the development and interaction of many factors. This article cannot present them in full, but the following essential factors and possible adjusting screws should be named:

a) Facilitation of development must be integrated into the German legal system

The repeated reference to American conditions does not do justice to the balance of interests that has to be found. In the USA, the conditions for innovative developments are supposedly better because, for example, data protection requirements are lower and there is more room for innovation. However, this more favorable factual and legal situation, epitomized by Silicon Valley, is offset by enforceable (punitive) damages and compensation for pain and suffering on a scale unknown to the German legal system. Germany as a location cannot lower the legal requirements for manufacturers and providers of innovative and high-risk products without adjusting the (civil) legal consequences accordingly, especially upwards. It is not evident that the punitive damages and compensation for pain and suffering achievable in the USA have a deterrent effect there. However, Germany's options are limited: a general overhaul of a historically and organically grown legal system is not appropriate. It is therefore necessary to examine which known and proven instruments can also be used to achieve the desired goals. Otherwise the unity, and thus the consistency, of the legal order would be jeopardized.

With regard to the state's compensatory guarantee of constitutionally protected fundamental rights, reference can be made at this point to German trade law. To put it simply, public offers of the use of artificially intelligent decision-making systems can entail risks not dissimilar to those of gaming devices within the meaning of Section 33c of the German Trade Regulation (Gewerbeordnung, GewO); they may even go well beyond pure financial losses. Section 33c (1) sentence 1 GewO provides that the person

[who] wants to set up commercial gaming devices that are equipped with a technical device influencing the outcome of the game and that offer the possibility of winning [...] requires the permission of the competent authority.

Looking at Facebook's algorithms and the machine learning-based bubble formation in social networks of this kind, which, as in gambling, can trigger addiction-like, self-damaging and reality-detached effects in users because they demand and bind attention for the benefit of the provider, it is not far-fetched to subject such artificially intelligent decision-making systems to a prohibition subject to permission, as in the case of § 33c GewO.

b) Facilitation of development requires facilitation of claims enforcement in the event of damage

Easing the enforcement of claims by injured parties, which necessarily corresponds to easing (further) development, could also be achieved by creating a special statutory strict liability. In that case, fault, and thus proof of fault by the injured party, would not be required. Manufacturers or providers would be liable for creating and maintaining a permitted source of danger.

A renewed reference to the USA does not help here either, because US procedural law is structured considerably differently precisely in order to make it easier for injured parties to pursue claims. Accordingly, German civil procedural law would have to be changed considerably in order to compensate, on the side of the injured party and claimant, for a shift of legal advantages towards manufacturer and provider interests in the development and market launch phase. Special types of proceedings are conceivable. Model declaratory actions, however, are not an example of easier claims enforcement because of their predominantly delaying effect. Nor have they proven themselves, as the Capital Investor Model Proceedings Act (KapMuG) shows.

c) Self-administration to ensure the continuous quality of development and services

The creation of a further control instance with the aim of quality assurance is conceivable. Especially with regard to the results of machine learning, which increasingly elude human traceability, long-term consideration should be given to how specialist knowledge and technically experienced control and monitoring can be bundled efficiently, thereby enabling the required degree of objective, possibly machine-generated traceability.

Compulsory membership in legal entities under public law with self-administration tasks, such as that of lawyers in regional bar associations with associated professional courts, which monitor and adjudicate compliance with professional or area-specific obligations, is one way of, on the one hand, enabling the development of innovative, artificially intelligent systems (cf. the independence of the lawyer, § 1 Federal Lawyers' Act, BRAO) and, on the other hand, keeping it within a legally defined corridor of duties (cf. the lawyer's duties, §§ 43 ff. BRAO). Regardless of the minor weaknesses and disadvantages of such an association, this self-administration solution could in principle ensure that conscientiousness (cf. § 43 sentence 1 BRAO), objectivity (cf. § 43b BRAO) and specialist knowledge (cf. § 43a para. 6 BRAO) come more into focus than economic interests. Lurid marketing through distortion of reality would then be prohibited accordingly (cf. § 43b BRAO). Further training obligations organized within the self-administration, and certificates of special specialist knowledge issued by it, could promote quality in the development of artificially intelligent systems. This would ensure that essentially only experts make the decisions.

d) Creation of new duties flanking the new freedoms: compulsory insurance

If one wants to promote the development of artificially intelligent systems, and thus the opening of new sources of danger, in Germany, the new freedoms must be flanked by equally new obligations corresponding to these dangers:

It would make sense to create compulsory insurance (cf. the Compulsory Insurance Act, PflVG) similar to the professional liability insurance of lawyers (§ 51 BRAO), which, in the event of damage, benefits the injured party pursuant to Section 117 (1) of the Insurance Contract Act (VVG) even where the policyholder has behaved in breach of contract. In this constellation it would be essential that disclosure and risk-notification obligations exist and are fulfilled, so as not to jeopardize the internal insurance cover.

3. Creating the best possible traceability should be mandatory, but is above all in one's own interest

Special documentation and information obligations enabling the best possible traceability of artificially intelligent decisions should, however, exist not only in relation to compulsory insurance. Similar to the lawyer's record-keeping obligation (§ 50 para. 1 sentence 1 BRAO), which above all serves to prove dutiful conduct in the event of claims for breaches of the obligations under the mandate agreement, comprehensive, i.e. complete, chronological documentation of the (further) development of artificially intelligent systems must be mandatory for the sake of traceability. Even if breaches of this obligation were not monitored and prosecuted, and the documentation, where necessary, only had to be submitted to the chamber in the event of a complaint (cf. § 56 para. 1 BRAO), there would still be an incentive to create the best possible traceability: the complete documentation serves, here too, one's own protection.

Such documentation in one's own interest enables, not least, the obtaining of an official permit required in individual cases for publicly offering the use of artificially intelligent decision-making systems, the possibly necessary regular review of whether the permit requirements are still fulfilled, and protection against an official prohibition (cf. the prohibition of a trade, for the protection of the general public, due to the trader's unreliability, § 35 GewO).
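How chronological, complete documentation could be kept tamper-evident, and thus usable as proof of dutiful conduct, can be sketched with a simple hash chain; the log format and event fields are hypothetical assumptions, not a prescribed standard:

```python
# Minimal sketch of tamper-evident chronological documentation using a
# hash chain: each entry commits to its predecessor, so later alteration
# of any entry is detectable. Format and fields are hypothetical.
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

dev_log: list = []
append_entry(dev_log, {"step": "training run", "dataset": "dataset-2018-06"})
append_entry(dev_log, {"step": "model release", "version": "model-2.3.1"})

# Any later alteration of an entry breaks the chain and is detectable:
for i, entry in enumerate(dev_log[1:], start=1):
    assert entry["prev"] == dev_log[i - 1]["hash"]
```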

VI. Conclusion

It cannot be denied that Germany has to become more attractive as a location for innovative developments in the field of artificial intelligence. However, the necessary attractiveness for scientists and developers from all over the world cannot be created within a short period of time. In particular, Silicon Valley conditions cannot be created overnight, since the legal systems of Germany and the USA have grown and developed in completely different ways. Both have their advantages and disadvantages. Nevertheless, there is a need for optimization, and it is indeed seen here in giving (further) developing companies more freedom. However, the German legal system must not be thrown out of balance: its structure serves primarily to balance interests. This balance must now be "renegotiated" accordingly.

New freedoms bring new obligations, risks and legal consequences with them. The most important duty is to create the best possible traceability of artificially intelligent decisions. It is easiest to implement, for the benefit of everyone, if it primarily serves one's own protection. This obligation should be embedded in a regulatory system that is familiar to the German legal order and has largely proven itself. The lawyer's comprehensive chronological record-keeping and documentation obligation is a good example here.

The article was first published in Ri 03/2018, p. 136 ff.

Claudia Otto has been a lawyer since 2012, owner of the Frankfurt law firm COT Legal since 2016 and editor of the interdisciplinary journal Recht innovativ (Ri) since 2017. She advises, writes and lectures worldwide on questions of digitization, in particular emerging legal issues relating to technical innovations such as Artificial Intelligence (AI). Before founding COT Legal, the author worked for Hengeler Mueller for four and a half years. Claudia Otto has been a member of Transparency International e.V. since 2007, and a member of the Ri cooperation partner Robotics & AI Law Society e.V. (RAILS), European AI Alliance and GRUR since 2018.
