Privacy & Data Protection

News Update Artificial Intelligence

European Commission proposes regulatory framework
2 July 2021

The European Commission published a proposal for an Artificial Intelligence (AI) Regulation, aiming to set rules governing the use of artificial intelligence whilst fostering innovation across the European Union.

The development and use of artificial intelligence have skyrocketed, bringing unparalleled advances in the technology's presence in day-to-day life. Artificial intelligence can be found in everyday services such as Netflix and Spotify (allowing applications to learn users' preferences), in self-driving cars and, more recently, in disease mapping. At the same time, these developments have led to growing calls to regulate the technology and address the new challenges faced by consumers, market players, governments, and regulators.

The proposed AI Regulation introduces new obligations for providers, importers, distributors, and users of artificial intelligence, thus impacting an extensive variety of businesses across many industries. At the same time, the proposal aims to support innovation in the EU and increase users' trust in the new and versatile generation of products.

Scope of the proposed regulation

The proposal broadly defines AI systems as all software that is developed with one or more of the following techniques and approaches:
  1. Machine learning approaches;
  2. Logic- and knowledge-based approaches; and
  3. Statistical approaches.
For a given set of human-defined objectives, these systems generate output such as content, predictions, recommendations, or decisions influencing the environments they interact with.

The draft regulation has a broad extra-territorial scope. It is applicable to (i) providers placing on the market or putting into service AI systems in the EU, regardless of whether they are established within the EU, (ii) AI systems users within the EU, and (iii) AI systems providers and users that are located outside the EU, where the output produced by the system is used in the EU.

Types of AI systems

The proposal takes a risk-based approach, dividing AI systems into four categories: (i) unacceptable risk, (ii) high risk, (iii) limited risk, and (iv) minimal risk. The regulations differ per risk category: the greater the risk, the stricter the rules.

Unacceptable Risk
AI systems considered to be a threat to the safety, livelihoods, and rights of people are prohibited. These include systems that:
  • Materially distort an individual's behaviour by deploying subliminal techniques, or exploit vulnerabilities due to the individual's age or physical or mental disability, in a manner likely to cause physical or psychological harm (e.g. voice-assisted toys encouraging dangerous behaviour in minors);
  • Evaluate or classify the trustworthiness of individuals over a certain period of time based on their social behaviour or known or predicted characteristics, leading to detrimental or unfavourable treatment (e.g. social scoring by governments); and
  • Use ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (e.g. facial recognition systems). Exceptions exist for targeted searches for crime victims and the prevention of a terrorist attack.

High risk
This category is the main focus of the proposal. AI systems are considered to be high risk (i) if it concerns (a safety component of) a product already subject to EU safety regulations, or (ii) if designated by the European Commission as high risk. In particular, if not prohibited, remote biometric identification systems are considered high risk.
  • The first category concerns AI systems to be used in various items, including toys, medical devices, lifts, cars, planes, machinery, and personal protective equipment.
  • The second category relates to stand-alone systems with potential fundamental rights implications, such as recruiting and evaluating employees, evaluating creditworthiness or establishing a credit score (except for small scale providers for their own use), and profiling of individuals and crime analytics in law enforcement.

High-risk AI systems are subject to strict obligations before they can be placed on the market. These obligations are mainly directed at AI system providers (i.e. anyone who develops an AI system or has an AI system developed) and include:
  • Establishing risk management systems;
  • Using high-quality datasets to train, validate, and test the AI system;
  • Preparing technical documentation, including instructions on the purpose and use;
  • Logging activity to ensure results are traceable;
  • Providing user information and instructions for use;
  • Developing human oversight measures to minimise risk;
  • Achieving an appropriate level of accuracy, robustness, and cybersecurity;
  • Registering stand-alone high-risk AI systems in a publicly accessible EU database; and
  • Monitoring to evaluate continuous compliance and reporting incidents.

Additionally, high-risk AI system providers must ensure that the system undergoes the relevant conformity assessment procedure. This can be either a self-assessment or a third-party assessment involving a notified body.

Similar obligations exist for users, importers, and distributors. Users must use an AI system in accordance with its instructions and inform the provider when they have reason to believe that its use may present a risk to health and safety or to fundamental rights. Importers must ensure that the relevant conformity assessment has been performed and that compliant technical documentation is available, and that the AI system bears the CE conformity marking and is accompanied by the required information. Likewise, distributors must verify that the high-risk AI system bears the required CE conformity marking and is accompanied by the required documents and instructions for use.

Limited risk
AI systems subject to specific transparency obligations, such as chatbots and deepfakes. Users must be made aware that they are interacting with a machine so that they can make an informed choice or step back.

Minimal risk
Applications that represent minimal or no risk for citizens' rights or safety remain unregulated.

Supervision and penalties

Member States are to appoint a national supervisory authority to supervise how the regulation will be applied and implemented. On an EU level, the proposed AI Regulation lays the foundation for a European Artificial Intelligence Board.

Failing to comply with the AI Regulation may result in fines of up to EUR 10-30 million or 2-6% of global annual turnover, whichever is higher, depending on the severity of the breach.
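This "whichever is higher" structure can be sketched as a small calculation. The tier names and the pairing of the fixed ceilings with the turnover percentages below (EUR 30m/6% for prohibited practices, EUR 20m/4% for other non-compliance, EUR 10m/2% for supplying incorrect information) follow the penalty provisions of the draft regulation as published, but the labels themselves are illustrative, not terms from the proposal:

```python
# Illustrative sketch of the draft AI Regulation's fine ceilings: the
# maximum fine per breach tier is the fixed amount or the percentage of
# worldwide annual turnover, whichever is higher. Tier names are
# illustrative labels, not terms defined in the proposal.

# tier -> (fixed ceiling in EUR, percentage of global annual turnover)
FINE_TIERS = {
    "prohibited_practices": (30_000_000, 6),   # most serious breaches
    "other_obligations": (20_000_000, 4),
    "incorrect_information": (10_000_000, 2),  # least serious breaches
}

def max_fine(tier: str, global_annual_turnover: int) -> int:
    """Return the fine ceiling (EUR) for a breach tier and a company's turnover."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, global_annual_turnover * pct // 100)

# A company with EUR 1 billion turnover: 6% (EUR 60m) exceeds the EUR 30m floor.
print(max_fine("prohibited_practices", 1_000_000_000))  # 60000000
```

For smaller companies the fixed amount dominates: at EUR 100 million turnover, 2% is EUR 2 million, so the ceiling for the lowest tier stays at EUR 10 million.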

Outlook

The European Parliament and the Member States will now review the European Commission's proposal. If adopted, the new rules and obligations will be directly applicable across the European Union. Given its broad scope and substantial penalties, the AI Regulation will have a significant impact on a range of industries. We expect that it could take until 2024 for the regulation to enter into force, once it has passed through the legislative process. In the meantime, the European Commission's proposal has rekindled the debate on AI's future and the rules that should govern it.

As part of this debate, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) published a joint opinion on 18 June 2021, outlining the high risks posed by remote biometric identification of individuals in publicly accessible spaces. The EDPB and the EDPS call for a general ban on any use of AI for the automated recognition of human features in publicly accessible spaces, including the recognition of faces, fingerprints, keystrokes, and voices. In their view, the categorisation of individuals into clusters based on characteristics such as ethnicity, gender, or sexual orientation, as referred to in Article 21 of the EU Charter of Fundamental Rights, should be explicitly prohibited. Lastly, the joint opinion states that the AI Regulation should require products bearing a CE conformity marking to comply with the GDPR.

The proposed AI Regulation imposes obligations, such as human oversight and monitoring duties, that carry added financial implications for AI providers. Once effective, the new rules may therefore drive substantive changes in how providers develop and redevelop their AI systems. Even though the proposed AI Regulation is not yet final, it sends a clear signal that the EU is serious about tackling unfettered AI systems. While the AI Regulation's material impact is yet to become tangible, public and private organisations in the sector should prepare for a regulatory shake-up.
Written by:
Thomas de Weerd

Key Contact

Amsterdam
Advocaat | Partner
+31 20 605 69 85
+31 6 5165 9208

Key Contact

London
Advocaat | Associate
+31 20 605 61 58
+31 6 8266 7027

Key Contact

Amsterdam
Advocaat | Associate
+31 20 605 65 40
+31 6 1291 2607