My forecast: anyone who wants to wait for a legally stable and predictable business environment for AI use will not need to read articles like this one again before the end of the 2020s. Even some of the statements made here are likely to be revised in one respect or another by the end of this year.
If that doesn’t put you off, you will find that legal certainty around artificial intelligence (AI) and its development and use is a problem for everyone involved. Nevertheless, guard rails already exist today on which current and future considerations can be based.
First, some terminology needs clarifying. Artificial intelligence is often understood as self-learning; that alone, however, does not make it autonomous in its decisions. Conversely, not every autonomously deciding system is self-learning: a vehicle that kept learning on the road would hardly be in the interest of many road users. Political papers on dealing with artificial intelligence, by contrast, very often treat AI as both autonomous and self-learning. From a practitioner’s point of view this is not accurate, but it does explain the regulatory ideas that result from it.
Because self-learning mechanisms are taken as a given, the data that is used, and that also serves as training material, is always of legal interest. Much legislative thinking revolves around it, especially ethical aspects such as discrimination and, of course, data protection. These ethical aspects, however, play no role in the following considerations: for manufacturers and operators of AI, the point is to know the existing legal framework.
The search for such regulations quickly yields a result, but a sobering one: neither in Germany nor in the EU is there a legal basis that regulates the production and operation of AI to any meaningful extent. There are several initiatives in which the topic is being driven politically: in Germany, for example, the Bundestag’s study commission on artificial intelligence will present its final report this year, and the European Commission already set out its position in a white paper earlier this year. These documents show, at least in part, the intellectual direction legislators will take in the future.
Even today, rules exist governing the liability of manufacturers and operators whose products cause personal injury or property damage. They apply to toasters just as they do to systems with AI: an AI incorrectly programmed by the manufacturer, an embedded product whose AI was trained incorrectly, or an AI left vulnerable for lack of security standards is to be treated the same way as a faulty solder joint in a toaster. The same goes for operators of products and systems with AI. Operators are liable if, for instance during ongoing learning, their products damage a customer because of incorrect settings, an unsuitable operating environment, or insufficient or incorrect data. Generally applicable liability rules cover all of this.
With AI, however, difficulties of proof are suspected, because the AI’s decisions are not transparent. This so-called opacity arises because many parties are usually involved in manufacturing and selling products with AI, so causation can rarely be traced to a single company or person. These concerns are partly justified, even if the same holds for many other products: a vehicle with 35,000 components and a control unit with over 50,000 parameters is hardly an easily verifiable source of error either.
From this, policymakers in the EU and in Germany conclude that AI requires a special approach. A look into the legislative crystal ball suggests that the future legal basis for producing and using AI will rest on a different approach than the existing product liability framework.
The efforts point toward a risk-based approach: low-risk AI products would face few regulations, while riskier solutions would carry considerable requirements, up to external monitoring and certification. To make damage claims easier to assert, there is likely to be strict liability, that is, liability without fault or even without a tort. It is not yet clear whether this strict liability will be anchored with the manufacturer, with the operator, or with both.
There are also efforts to hold the operator liable in much the same way as a vehicle owner. It is doubtful that this view will prevail, but regardless, operators should expect compulsory insurance to be considered. The injured party could then turn to the nearest tangible party, the operator, who has contributed nothing to the damage beyond operating the AI; reimbursement of the amount paid would then be settled between the insurer and the manufacturer. Initial reactions from the insurance industry, however, do not suggest that the first policy models will emerge quickly.
What does this mean for companies that want to manufacture and use AI? The most important takeaway is that today’s rules already apply to AI, software, and autonomous, self-learning systems. The requirements that case law places on defect-free products and on the organization of companies should therefore be implemented today.
The emerging special regulations for AI will often be based on standards and norms, from assessing the risk level to minimum requirements for programming, testing, and certification. The earlier and more strongly industry gets involved in shaping these processes, the more likely it is that the resulting legal framework will be practical. (hi / fm)