Article 5: Prohibited Artificial Intelligence Practices | EU AI Act

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syed Tatheer Kazmi

Associate Data Privacy Analyst, Securiti

CIPP/Europe

Article 5 of the AI Act sets out the AI practices that are expressly prohibited.

The AI Act prohibits the following practices:

Subliminal Techniques

No AI system or model shall be made available on the market that uses subliminal techniques operating beyond a person’s consciousness. This extends to the use of purposefully manipulative or deceptive techniques that may result in the distortion of a person’s ability to make an informed decision.

Exploitation of a Vulnerability

No AI system or model shall be made available on the market that exploits any vulnerabilities of a natural person or a specific group of persons due to their age, disability, or social or economic situation, in a manner that causes or is reasonably likely to cause significant harm to that person or another person.

Social Evaluation

No AI system or model shall be made available on the market whose purpose is to evaluate or classify a natural person or group of persons based on their social behavior or known, inferred, or predicted personal or personality characteristics, with the resulting social score leading to either or both of the following:

  • Unfavorable treatment of natural persons in social contexts unrelated to the context in which the data was originally generated or collected;
  • Unfavorable treatment of natural persons or groups of persons that is unjustified or disproportionate to their social behavior.

Risk Assessment

No AI system or model shall be made available on the market whose purpose is to make risk assessments of natural persons regarding the likelihood of that person committing a criminal offense, based solely on the profiling of that person. However, this prohibition does not apply to AI systems used to support human assessments of a person’s involvement in a criminal activity, where such assessments rely on objective and verifiable facts directly linked to the criminal conduct.

Facial Recognition

No AI system or model shall be made available on the market whose purpose is to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Emotion Recognition

No AI system or model shall be made available on the market whose purpose is to infer the emotions of natural persons in the workplace or in educational institutions. However, this prohibition does not apply where the AI system is used for medical or safety reasons.

Biometric Categorization

No AI system or model shall be made available on the market whose purpose is to use biometric categorization systems to categorize natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. However, this prohibition does not apply to the labeling or filtering of lawfully acquired biometric datasets by law enforcement agencies (LEAs).

Real-Time Remote Biometric Identification by LEA

No AI system or model shall be used for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes unless such use is strictly necessary to:

  • Conduct a targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation of human beings, as well as search for missing persons;
  • Prevent a specific, substantial, and imminent threat to the life or physical safety of natural persons, or a genuine and present or genuinely foreseeable threat of a terrorist attack;
  • Locate or identify a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution, or executing a criminal penalty, for offenses punishable by a custodial sentence or a detention order for a maximum period of at least four years.

LEAs using real-time remote biometric identification in a publicly accessible space must ensure the use is in accordance with the aforementioned purposes and take into account the following considerations:

  • The nature of the possible usage, as well as the seriousness, probability, and scale of harm that may occur if the AI system is not used;
  • The consequences of the AI system’s usage to the rights and freedoms of natural persons involved, as well as the seriousness and scale of these consequences.

Furthermore, the use of real-time remote biometric identification in publicly accessible spaces will only be authorized if the LEA concerned has conducted a fundamental rights impact assessment as required under the AI Act and has registered the system in the EU database. In duly justified cases of urgency, such systems may be used without prior registration, provided the LEA completes the registration without undue delay.

The use of real-time remote biometric identification in publicly accessible spaces will be subject to prior authorization granted by a judicial authority or a relevant independent administrative authority of the Member State in which the use is to take place, whose decision is binding. In duly justified cases of emergency, such systems may be used without prior authorization, provided the LEA requests the authorization without undue delay, and at the latest within 24 hours.

If the authorization request is rejected, the use must be stopped with immediate effect, and all collected data, as well as the results and outputs of that use, must be immediately discarded and deleted.

Each use of real-time remote biometric identification in a publicly accessible space must be notified to the relevant market surveillance authority and the national data protection authority in accordance with national rules. The notification must, at a minimum, contain the details outlined in Article 5(6) and must not include any sensitive operational data.
