The European AI Law
On 21 April 2021, the EU Commission published the official draft of the European AI law. It would be the world's first legal framework for artificial intelligence (AI). With this post, we provide a brief overview of the planned regulation.
The aim of the regulation is to establish a legal framework for AI in the EU. On the one hand, the regulation intends to promote the development of AI; on the other hand, it seeks to ensure a high level of protection for public interests and to create trust in AI systems. The regulation includes prohibitions of certain AI practices, requirements and obligations for high-risk AI systems, and transparency obligations for certain other AI systems. It is divided into twelve titles; the main part of this post deals with Title III – high-risk AI systems. Title III is divided into five chapters:
- Chapter 1: Classification of AI systems as high-risk
- Chapter 2: Requirements for high-risk AI systems
- Chapter 3: Obligations of providers, users, and other parties
- Chapter 4: Notifying authorities and notified bodies (including auditing and certification bodies)
- Chapter 5: Standards, conformity assessment, certificates, and registration
The scope
The scope is technically broad, but the regulation primarily targets high-risk systems, leaving a wide range of AI systems unaffected for now. Parts of the regulation (the transparency obligations) also apply to AI systems that interact with humans or that create or generate images, audio, or video, regardless of the risk assessment.
The regulation applies to any placing on the market, putting into service, and any type of use of AI systems in the EU. It is irrelevant whether the provider of the AI system is based in the EU. According to Article 3(1) of the draft, "AI system" means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
Restrictions are provided to take the specificities of certain sectors of the economy into account, in particular for AI systems used as safety components of products or systems, e.g. in motor vehicles. For these AI systems, only the requirements in Article 84 on updating the list in Annex III (the list of high-risk AI systems) apply. The regulation does not apply to AI systems for military purposes.
Due to the high risk posed by certain AI-supported practices, Article 5 lists prohibited practices. These include:
- manipulative practices that influence people's opinions or behaviour (with children being particularly protected);
- “real-time” remote biometric identification for law enforcement purposes.
In any case, all remote biometric identification falls within the scope of high-risk AI systems. The use of remote biometric identification in public spaces for law enforcement purposes is always prohibited, except for a few exceptions regulated in Article 5(d). These include terrorist attacks, the search for missing children, and the prevention of an imminent threat to the life of a natural person. Critics fear that these exceptions could be interpreted so broadly that the ban on remote biometric identification is undermined. There is also criticism that there is no ban on identification by gender or sexual orientation. The introduction of the term "real-time" in particular holds potential for discussion: in the current draft, the ban is limited to "real-time" remote identification, while "post" remote biometric identification is not banned in principle. Even the recitals do not specify exactly which forms of remote identification fall under "real-time", and the legal definitions in Article 3 do not help to distinguish the two terms either. "Post" is simply meant to cover any remote biometric identification that is not covered by "real-time".
The prohibitions regarding artificial intelligence posing an unacceptable risk are less comprehensive in this official version. The previous draft specifically prohibited social scoring – not limited to public authorities – as well as mass surveillance. That draft was already criticized for prohibiting too few practices; now the prohibitions have been narrowed further, so there will certainly be even more potential for discussion in the further negotiations at EU level.
Article 6 of the AI Act regulates which other AI systems entail a high risk and therefore pose a danger to the health, safety, or fundamental rights of natural persons. Covered are, for example, AI systems connected with critical infrastructure, safety components of products, and AI systems used in the justice system. Article 6(3) refers to Annex III, which contains an additional list of high-risk AI systems; the Commission is to update this list regularly.
Introduction of a conformity assessment procedure for AI systems
Compliance with the requirements must be established by means of a conformity assessment procedure, prior to placing on the market (Article 43). This involves checking whether various requirements are met, namely:
- high quality data sets,
- documentation and recording,
- transparency and provision of information,
- human oversight,
- robustness, accuracy and safety.
High-quality datasets help to ensure that AI systems perform as intended and that risks to safety and fundamental rights are minimized. To this end, training and testing datasets must be sufficiently relevant, representative, and error-free. The Commission intends to establish common European data spaces as part of its "data strategy"; these are meant to provide trustworthy access to high-quality data.
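The draft does not define concrete technical tests for data quality. Purely as an illustration, the following Python sketch shows what minimal automated checks for "error-free" and "representative" training data could look like; the function name, the checked criteria, and the 10% threshold are hypothetical assumptions of ours, not requirements derived from the regulation.

```python
# Illustrative sketch only: the regulation prescribes no concrete checks;
# all names, criteria, and thresholds below are hypothetical.
from collections import Counter

def check_training_data(records, label_key="label", required_keys=("features", "label")):
    """Run basic quality checks on a list of training records (dicts)."""
    issues = []

    # "Error-free": flag records with missing fields or missing values.
    for i, rec in enumerate(records):
        for key in required_keys:
            if key not in rec or rec[key] is None:
                issues.append(f"record {i}: missing value for '{key}'")

    # "Representative": flag heavy class imbalance (hypothetical 10% floor).
    labels = Counter(r[label_key] for r in records if r.get(label_key) is not None)
    total = sum(labels.values())
    for label, count in labels.items():
        if total and count / total < 0.10:
            issues.append(f"class '{label}' underrepresented: {count}/{total} records")

    return issues

if __name__ == "__main__":
    data = [
        {"features": [1.0, 2.0], "label": "approve"},
        {"features": [0.5, 1.5], "label": "approve"},
        {"features": None, "label": "reject"},
    ]
    for issue in check_training_data(data):
        print(issue)
```

In practice, a provider would of course need far more extensive validation, tied to the intended purpose of the specific system.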
Obligations of the individual actors in AI
The third chapter deals with the obligations of the individual actors, i.e. providers, authorized representatives, importers, and distributors. The provider is responsible for ensuring that high-risk AI systems meet these obligations. They include:
- establishing a quality management system,
- carrying out the prescribed conformity assessment procedure,
- preparing the relevant documentation,
- establishing a post-market surveillance system, and
- ensuring automatic logging of activities so that results remain traceable (a brief illustrative sketch follows below).
In addition, if the provider is established outside the EU, an authorized representative in the EU must be appointed (Article 25). This representative is to ensure the conformity of the AI systems placed on the market or put into service by the provider.
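The regulation does not prescribe how such logging has to be implemented. As a rough illustration only, the following Python sketch shows one conceivable way to record each decision of an AI system in an append-only log; all field names and the JSON Lines format are our own hypothetical assumptions, not requirements of the draft.

```python
# Hypothetical sketch of automatic activity logging for traceability;
# the regulation prescribes no concrete format, so all fields are assumptions.
import json
import time
import uuid

def log_decision(logfile, model_version, inputs, output):
    """Append one traceable decision record to a JSON Lines log file."""
    record = {
        "event_id": str(uuid.uuid4()),   # unique ID for later audits
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which system version produced it
        "inputs": inputs,                # data the decision was based on
        "output": output,                # the resulting decision or prediction
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

# Example: log a single (fictional) credit-scoring decision.
event_id = log_decision("decisions.log", "model-1.3",
                        {"income": 42000, "age": 35}, {"score": 0.81})
print("logged event", event_id)
```

Append-only records of this kind are one conceivable basis for the traceability and post-market surveillance obligations listed above.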
The conformity assessment procedure (Chapter 4)
As a rule, the provider itself must carry out the conformity assessment. To indicate conformity with the regulation, AI systems must be CE-marked. The scope of the conformity assessment is regulated in Annexes VI and VII. In some cases, the conformity assessment may only be performed by a third party; "third parties" here means the notified bodies under the regulation, which verify that high-risk AI systems comply with it.
Drawing on lessons from the COVID-19 pandemic, market surveillance authorities (Article 47) may take action themselves for reasons of public safety or the protection of the life and health of natural persons: in such cases, they can individually authorize the placing on the market or putting into service of AI systems that have not yet undergone a conformity assessment.
There are also provisions to promote innovation (Article 53 et seq.) by creating controlled experimentation and testing environments, so-called regulatory sandboxes. SMEs and startups are to enjoy priority access to these.
A first conclusion on the draft European AI law
The EU has a difficult task: on the one hand, the EU economy should be strengthened in the use of AI and its development should be promoted; on the other hand, there should be no "wild west", and civil rights must be protected. In this respect, the law seems to aim for a minimum consensus. The regulation of high-risk applications establishes a baseline standard that companies and users can rely on.
It will now be exciting to see what changes the legislative process with the EU Parliament and the EU Council will bring; the negotiations could take years. The leaked draft, which we reported on a week ago, provided for stricter rules in some cases; the now official version has been weakened even further.
Developing laws for new technologies poses particular challenges, which the EU is now addressing; further developments remain to be seen. In particular, it will be exciting to see whether the planned regulation can also influence other countries in creating legal frameworks for AI systems.
Your contacts at BHO Legal: Dr. Matthias Lachenmann and Jan Helge Mey