European Commission’s Proposed Artificial Intelligence Regulation

Published November 1, 2022

On 21 April 2021, the European Commission published its proposal for a Regulation laying down harmonised rules on artificial intelligence, together with a Coordinated Plan on artificial intelligence. On 19 October 2022, the Czech Presidency of the Council of the European Union offered its latest compromise text for the Proposed AI Regulation. Once the Proposed AI Regulation is adopted by the European Parliament and the member states, it will become directly enforceable across the European Union.

The Proposed AI Regulation aims to ensure that Artificial Intelligence (AI) systems placed on the European Union market respect the fundamental rights of individuals. It further aims to ensure legal certainty and facilitate the development of a single market for safe and trustworthy AI applications.

At 108 pages, the Proposed Regulation is complex and comprehensive. Let’s break it down into key points:

Definition of AI systems

The Regulation defines AI systems as software using one or more techniques and approaches “which generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with”. These techniques and approaches, set out in Annex I of the Proposed Regulation, include machine learning approaches; logic- and knowledge-based approaches; and statistical approaches, Bayesian estimation, and search and optimisation methods.

Scope of the Regulation

The Proposed Regulation applies to the following:

  • Providers and users of AI systems in the European Union, irrespective of whether those providers are established in the Union or a third country.
  • Providers and users of AI systems located in a third country, where the output produced by the system is used in the European Union.

The Regulation does not apply to AI systems developed or used exclusively for military purposes, nor to public authorities or international organisations in third countries that use AI in the context of international agreements for law enforcement or judicial cooperation.

Risk-based approach for AI Systems

The Regulation proposes a risk-based approach with four levels of risks for AI systems:

Unacceptable risk AI systems

This category includes those AI systems that pose a clear threat to the safety, livelihoods, and fundamental rights of people. The use of such AI systems is prohibited.

The placing on the market, putting into service, or use of the following AI systems is prohibited:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person belonging to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  • ‘Real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless certain limited exceptions apply. The latest compromise text offered by the Czech Presidency has clarified that “remote” means both that the system is used at a distance and that the identification occurs without the person’s active involvement;
  • AI systems used by public authorities, or on their behalf, for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
    1. Detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
    2. Detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity.

The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes is permitted only where it is strictly necessary to achieve a substantial public interest whose importance outweighs the risks. The permitted situations involve the targeted search for potential victims of crime, including missing children; the prevention of certain imminent threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of certain criminal offences, subject to specific protections and limitations.

High-risk AI systems

This category includes AI systems that create a high risk to the health and safety or fundamental rights of natural persons. Such AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. All high-risk AI systems are subject to strict obligations before they can be put on the market and throughout their lifecycle. For example, requirements of high-quality data, documentation and traceability, transparency, human oversight, accuracy, and robustness must be met to mitigate the risks to fundamental rights and safety before any high-risk AI system is put on the market.

The European Commission lists the following as high-risk AI systems:

  1. Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  2. Educational or vocational training, which may determine access to education and the professional course of someone's life (e.g. scoring of exams);
  3. Safety components of products (e.g. AI application in robot-assisted surgery);
  4. Employment, workers management, and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  5. Essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
  6. Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of the reliability of evidence);
  7. Migration, asylum, and border control management (e.g. verification of authenticity of travel documents);
  8. Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

Limited risk AI systems

This category includes AI systems subject to specific transparency obligations. Providers of such AI systems must ensure that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use. This enables natural persons to make an informed choice to continue using the AI system or to step back from a given situation. For example, users of the following AI systems are subject to transparency obligations:

  • Users of an emotion recognition system or a biometric categorisation system must inform the natural persons exposed of the operation of the system.
  • Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places, or other entities or events, or that would falsely appear to a person to be authentic or truthful (a ‘deep fake’), must disclose that the content has been artificially generated or manipulated.

However, the transparency obligations do not apply to AI systems authorised by law to detect, prevent, investigate, or prosecute criminal offences, unless those systems are available for the public to report a criminal offence.

Minimal risk AI systems

This category includes AI systems that pose only minimal or no risk to citizens’ rights or safety, such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category, and the Regulation allows the free use of such applications subject to existing legislation, without any additional legal obligations.
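
The tiered structure above lends itself to a simple illustration. The following Python sketch is purely our own illustrative model, not anything defined in the Regulation; the tier labels and obligation summaries are paraphrases of the four categories described in this section.

    from enum import Enum

    class RiskLevel(Enum):
        """The four risk tiers of the Proposed AI Regulation (our own labels)."""
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # permitted with mandatory requirements
        LIMITED = "limited"            # permitted with transparency obligations
        MINIMAL = "minimal"            # permitted freely under existing law

    # Paraphrased regulatory consequences; wording is ours, not the Regulation's.
    CONSEQUENCES = {
        RiskLevel.UNACCEPTABLE: "Placing on the market, putting into service, or use is prohibited.",
        RiskLevel.HIGH: "Permitted only after an ex-ante conformity assessment, with strict obligations throughout the lifecycle.",
        RiskLevel.LIMITED: "Permitted subject to transparency obligations (e.g. informing exposed persons, labelling deep fakes).",
        RiskLevel.MINIMAL: "Permitted without additional legal obligations beyond existing legislation.",
    }

    def consequence(level: RiskLevel) -> str:
        """Return the paraphrased regulatory consequence for a risk tier."""
        return CONSEQUENCES[level]

    print(consequence(RiskLevel.HIGH))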

Consequences for non-compliance

The Regulation provides potentially significant fines for non-compliance:

  1. Certain non-compliance, such as breach of the prohibition on specific AI systems or of the data governance requirements for high-risk AI systems, is subject to administrative fines of up to 30,000,000 EUR or 6% of the annual global turnover of the preceding financial year, whichever is higher.
  2. Other forms of non-compliance are subject to fines of up to 20,000,000 EUR or 4% of the annual global turnover of the preceding financial year, whichever is higher.

The supply of incorrect or misleading information to notified bodies or national competent authorities in reply to a request is subject to a fine of up to 10,000,000 EUR or 2% of the annual global turnover of the preceding financial year, whichever is higher.
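
Because each fine tier is capped at the higher of a fixed amount and a percentage of annual global turnover, the applicable maximum is straightforward to compute. Here is a minimal Python sketch of that arithmetic; the tier keys and function name are our own, and the figures are those listed above.

    # Maximum administrative fines under the Proposed AI Regulation:
    # the higher of a fixed cap and a share of the preceding financial
    # year's annual global turnover. Tier keys are our own labels.
    FINE_TIERS = {
        "prohibited_ai_or_data_governance": (30_000_000, 0.06),
        "other_non_compliance": (20_000_000, 0.04),
        "incorrect_or_misleading_information": (10_000_000, 0.02),
    }

    def max_fine(tier: str, annual_global_turnover_eur: float) -> float:
        """Return the maximum fine in EUR: whichever is higher of the
        fixed cap and the turnover percentage for the given tier."""
        fixed_cap, pct = FINE_TIERS[tier]
        return max(fixed_cap, pct * annual_global_turnover_eur)

    # A provider with 2 billion EUR turnover breaching a prohibition faces
    # up to max(30,000,000, 0.06 * 2,000,000,000) = 120,000,000 EUR.
    print(max_fine("prohibited_ai_or_data_governance", 2_000_000_000))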

What’s Next?

The risk-based approach adopted by the European Commission indicates that high-risk AI systems constitute a core part of the regulatory framework. However, despite this sound risk-based approach, certain gaps remain:

  • The Proposed AI Regulation does not address the need to completely ban the use of remote biometric identification systems including facial recognition in publicly accessible spaces, as pointed out by the EDPS.
  • The Proposed AI Regulation lacks a focus on the people directly impacted by biased algorithms and on whether any remedies are available to them.
  • The Proposed AI Regulation relies largely on providers’ self-assessment and therefore has weak enforcement power.
  • The exceptions under which unacceptable-risk AI systems are permitted are very broad. In particular, biometric mass surveillance needs attention, and the fact that transparency obligations do not apply to limited-risk AI systems authorised by law to detect or prevent criminal offences appears to run counter to the requirements of the EU data protection framework.

Following the end of the public consultation period on 6 August 2021, the Proposed AI Regulation is under review by the Council of the European Union and the European Parliament. The European Parliament and the member states will have to adopt the Proposed AI Regulation for it to become directly enforceable across the European Union. The Regulation appears to have strong potential to safeguard the fundamental rights of individuals as well as support technological innovation; however, it remains to be seen how well it aligns with the EU data protection framework. Nevertheless, it can be considered the starting point of the legislative process now underway in Europe.
