Article 3: Definitions | EU AI Act

Published July 12, 2024 / Updated October 2, 2024

Article 3 of the AI Act is straightforward, providing an extensive list of definitions for key concepts and terms used throughout the Act. These definitions matter for businesses because they clarify how the AI Act interprets specific concepts and practices, enabling organizations to adopt more effective compliance practices.

The important definitions covered by the AI Act include the following:

AI System

An AI system refers to a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Risk

Risk means the combination of the probability of an occurrence of harm and the severity of that harm.

Provider

A provider is any natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

Deployer

A deployer is any natural or legal person, public authority, agency, or other body that uses an AI system under its own authority, except where the system is used in the course of a personal, non-professional activity.

Authorized Representative

This refers to any natural or legal person located or established in the Union who has received and accepted a written mandate from an AI system or general-purpose AI model provider to perform and carry out, on the provider's behalf, the obligations and procedures established by the AI Act.

Importer

An importer refers to a natural or legal person located or established in the EU who places on the market an AI system bearing the name or trademark of a natural or legal person established in a third country.

Distributor

This refers to a natural or legal person within the supply chain, other than the provider or the importer, that makes an AI system available within the EU market.

Operator

This refers to a provider, product manufacturer, deployer, authorized representative, importer, or distributor.

Notifying Authority

This refers to the national authority responsible for setting up and carrying out the necessary procedures for assessing, designating, notifying, and monitoring conformity assessment bodies.

Conformity Assessment

This refers to the process of demonstrating whether the requirements set out in Chapter III, Section 2 of the AI Act relating to high-risk AI systems have been fulfilled.

Conformity Assessment Body

This refers to the body responsible for conducting third-party conformity assessment activities, including testing, certification, and inspection.

Notified Body

This refers to a conformity assessment body that has been notified in accordance with the provisions of the AI Act and other relevant Union harmonization legislation.

CE Marking

This refers to a marking by which a provider indicates that an AI system conforms with the requirements set out in Chapter III, Section 2 of the AI Act and other applicable Union harmonization legislation.

Post-Market Monitoring System

This refers to all activities carried out by providers of AI systems to collect and review experience gained from the use of the AI systems they place on the market or put into service, in order to identify any need for immediate corrective or preventive actions.

Market Surveillance Authority

This refers to any national authority responsible for carrying out the activities and taking the measures provided for under Regulation (EU) 2019/1020.

Biometric Data

This refers to personal data resulting from specific technical processing relating to the physical, physiological, or behavioral characteristics of a natural person, such as facial images or dactyloscopic data.

Biometric Identification

This refers to the automated recognition of physical, physiological, behavioral, or psychological human features to establish the identity of a natural person by comparing the biometric data of that individual to the biometric data of individuals stored in a database.

Biometric Verification

This means the automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data.
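
The distinction between identification (one-to-many matching against a database) and verification (one-to-one matching against previously provided data) can be illustrated with a short, purely illustrative Python sketch; the similarity function and threshold below are hypothetical placeholders, not anything prescribed by the Act:

```python
from typing import Callable, Optional

# Hypothetical placeholder: a real system would compare biometric templates
# (e.g., face embeddings); here, similarity is any function returning 0.0-1.0.
Similarity = Callable[[bytes, bytes], float]

def identify(probe: bytes, database: dict[str, bytes],
             similarity: Similarity, threshold: float = 0.9) -> Optional[str]:
    """One-to-many identification: compare the probe against every template
    in a reference database and return the best-matching identity, if any."""
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

def verify(probe: bytes, enrolled_template: bytes,
           similarity: Similarity, threshold: float = 0.9) -> bool:
    """One-to-one verification (authentication): confirm a claimed identity
    by comparing the probe against that person's previously provided data."""
    return similarity(probe, enrolled_template) >= threshold
```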

Special Categories of Personal Data

This refers to categories of personal data defined per Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680, and Article 10(1) of Regulation (EU) 2018/1725.

Emotion Recognition System

This refers to an AI system that can identify or infer the emotions or intentions of natural persons on the basis of their biometric data.

Biometric Categorization System

This refers to an AI system that assigns natural persons to specific categories on the basis of their biometric data, unless such categorization is ancillary to another commercial service and strictly necessary for objective technical reasons.

Remote Biometric Identification System

This refers to an AI system that can identify natural persons without their active involvement, typically at a distance through the comparison of their biometric data with that available in a reference database.

Real-time Remote Biometric Identification System

This refers to a remote biometric identification system in which the capture of biometric data, the comparison, and the identification all occur without significant delay; this covers not only instant identification but also limited short delays used to avoid circumvention.

Post Remote Biometric Identification System

This refers to any remote biometric identification system other than a real-time remote biometric identification system.

Law Enforcement Authority

Law Enforcement Authority refers to any public authority competent for preventing, investigating, detecting, or prosecuting criminal offenses or executing criminal penalties, including safeguarding against and preventing threats to public security. It may also refer to any other body or entity entrusted by Member State law to exercise public authority and public powers for those purposes.

Law Enforcement

This refers to the activities carried out by a law enforcement authority or on their behalf by third parties to prevent, investigate, detect, or prosecute criminal offenses or execute criminal penalties.

AI Office

This refers to the Commission's function of contributing to the implementation, monitoring, and supervision of AI systems and general-purpose AI models, and to AI governance, carried out through the European Artificial Intelligence Office.

National Competent Authority

This refers to a notifying authority or a market surveillance authority. As regards AI systems put into service or used by Union institutions, agencies, offices, and bodies, references to national competent authorities or market surveillance authorities in the Regulation are construed as references to the European Data Protection Supervisor.

Personal Data

Personal data is any information relating to an identified or identifiable natural person, as defined in Article 4(1) of Regulation (EU) 2016/679 (GDPR).

Non-Personal Data

This refers to any data other than personal data as defined in the GDPR.

Informed Consent

This refers to a subject's freely given, specific, unambiguous, and voluntary expression of his or her willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that are relevant to the subject's decision to participate.

Deep Fake

This refers to AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful.

General-Purpose AI Model

A general-purpose AI model is an AI model, including where it is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications.

Systemic Risk

This refers to risk specific to the high-impact capabilities of general-purpose AI models, which may pose a risk to the Union market owing to their reach or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole that can be propagated at scale across the value chain.

General-Purpose AI System

An AI system based on a general-purpose AI model that is capable of serving a variety of purposes, both for direct use and for integration into other AI systems.

Floating-Point Operation (FLOP)

This refers to any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.
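
To make this representation concrete, here is a minimal Python sketch (illustrative only, not part of the Act) that decomposes a double-precision number into an integer significand and an integer power-of-two exponent, matching the description above:

```python
import math

def decompose(x: float) -> tuple[int, int]:
    """Return (significand, exponent) such that x == significand * 2**exponent,
    illustrating 'an integer of fixed precision scaled by an integer exponent
    of a fixed base' (here, base 2 and a 53-bit significand for doubles)."""
    mantissa, exponent = math.frexp(x)        # x == mantissa * 2**exponent, 0.5 <= |mantissa| < 1
    significand = int(mantissa * (1 << 53))   # scale the mantissa to a 53-bit integer exactly
    return significand, exponent - 53

sig, exp = decompose(0.75)
print(sig, exp)                # 6755399441055744 -53
print(sig * 2.0**exp == 0.75)  # True
```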

Downstream Provider

This refers to a provider of an AI system, including a general-purpose AI system, that integrates an AI model, regardless of whether the AI model is provided by the provider itself and vertically integrated or provided by another entity on the basis of contractual relations.
