Article 5: Prohibited Artificial Intelligence Practices | EU AI Act

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syed Tatheer Kazmi

Data Privacy Analyst

CIPP/Europe

Article 5 of the EU AI Act sets out the artificial intelligence practices that are expressly prohibited.

The AI Act prohibits the following practices:

Subliminal Techniques

No AI system or model shall be made available on the market that deploys subliminal techniques beyond a person’s consciousness. This extends to the use of purposefully manipulative or deceptive techniques that materially distort a person’s ability to make an informed decision.

Exploitation of a Vulnerability

No AI system or model shall be made available on the market that exploits any vulnerabilities of a natural person or a specific group of persons, including vulnerabilities related to their age, disability, or social or economic situation, in a manner that causes or is reasonably likely to cause significant harm to that person or another person.

Social Evaluation

No AI system or model shall be made available on the market whose purpose is to evaluate or classify a natural person or group of persons based on their social behavior or known, inferred, or predicted personal or personality characteristics, with the resulting social score leading to either or both of the following:

  • Unfavorable treatment of natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
  • Unfavorable treatment of natural persons or groups of persons that is unjustified or disproportionate to their social behavior or its gravity.

Risk Assessment

No AI system or model shall be made available on the market whose purpose is to make risk assessments of natural persons in order to predict the likelihood of a person committing a criminal offense based solely on the profiling of that person or on an assessment of their personality traits and characteristics. However, this prohibition does not apply to AI systems used to support human assessments of a person’s involvement in criminal activity, where such assessments rely on objective and verifiable facts directly linked to that activity.

Facial Recognition

No AI system or model shall be made available on the market whose purpose is to create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Employee Emotions

No AI system or model shall be made available on the market whose purpose is to infer the emotions of natural persons in the workplace or in educational institutions. However, this prohibition does not apply where the AI system is intended to be used for medical or safety reasons.

Biometric Categorization

No AI system or model shall be made available on the market whose purpose is to use biometric categorization systems to categorize natural persons based on their biometric data in order to deduce their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. However, this prohibition does not apply to the labeling or filtering of lawfully acquired biometric datasets, such as images, by law enforcement agencies (LEAs).

Real-Time Remote Biometric Identification by LEA

No AI system or model shall be made available on the market whose purpose is to use real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, unless such use is strictly necessary to:

  • Conduct a targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation of human beings, as well as searching for missing persons;
  • Prevent a specific, substantial, and imminent threat to the life or physical safety of natural persons, or a genuine and present or genuine and foreseeable threat of a terrorist attack;
  • Locate or identify a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution, or executing a criminal penalty, for offenses punishable in the Member State concerned by a custodial sentence or detention order for a maximum period of at least four years.

LEAs using real-time remote biometric identification in a publicly accessible space must ensure the use is in accordance with the aforementioned purposes and take into account the following considerations:

  • The nature of the situation giving rise to the possible use, as well as the seriousness, probability, and scale of the harm that may occur if the AI system is not used;
  • The consequences of the AI system’s usage to the rights and freedoms of natural persons involved, as well as the seriousness and scale of these consequences.

Furthermore, the use of real-time remote biometric identification in publicly accessible spaces will only be authorized if the LEA concerned conducts a fundamental rights impact assessment as required under the AI Act and ensures the system is appropriately registered in the EU database. In duly justified cases of urgency, such systems may be used without registration, provided the LEA completes the registration without undue delay.

The use of real-time remote biometric identification in publicly accessible spaces will be subject to prior authorization granted by a judicial authority or an independent administrative authority whose decision is binding in the Member State in which the use is to take place. In duly justified cases of urgency, such systems may be used without prior authorization, provided the LEA requests the authorization without undue delay and at the latest within 24 hours.

If the authorization request is rejected, use of the system must be stopped with immediate effect, and all data collected, as well as the results and outputs of that use, must be discarded and deleted.

Each use of real-time remote biometric identification in a publicly accessible space must be communicated to the relevant market surveillance authority and the national data protection authority in accordance with national rules. The notification must, at a minimum, contain the details outlined in Article 5(6) and must not include any sensitive operational data.
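For teams translating these prohibitions into an internal AI governance workflow, it can help to treat the Article 5 categories as a first-pass screening checklist for proposed AI use cases. The Python sketch below is illustrative only and assumes a hypothetical review process: the ProhibitedPractice labels are shorthand for the prohibitions summarized above, and screen_use_case is an invented helper, not part of the AI Act or any regulatory tooling.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ProhibitedPractice(Enum):
    """Shorthand labels for the Article 5 prohibitions summarized above (illustrative)."""
    SUBLIMINAL_OR_MANIPULATIVE_TECHNIQUES = auto()
    EXPLOITATION_OF_VULNERABILITIES = auto()
    SOCIAL_SCORING = auto()
    CRIMINAL_RISK_PREDICTION_BY_PROFILING = auto()
    UNTARGETED_FACIAL_IMAGE_SCRAPING = auto()
    EMOTION_INFERENCE_AT_WORK_OR_SCHOOL = auto()
    BIOMETRIC_CATEGORIZATION_OF_SENSITIVE_TRAITS = auto()
    REALTIME_REMOTE_BIOMETRIC_ID_FOR_LAW_ENFORCEMENT = auto()


@dataclass
class UseCaseReview:
    """Hypothetical record produced when screening a proposed AI use case."""
    name: str
    flagged: list

    @property
    def requires_legal_review(self) -> bool:
        # Any hit against an Article 5 category should block deployment pending
        # review; narrow exceptions (e.g., medical or safety uses of emotion
        # inference) must be confirmed by counsel, not by this screening step.
        return bool(self.flagged)


def screen_use_case(name: str, flags: set) -> UseCaseReview:
    """Return a review record listing the prohibited categories flagged for a use case."""
    return UseCaseReview(name=name, flagged=sorted(flags, key=lambda p: p.name))


if __name__ == "__main__":
    review = screen_use_case(
        "workplace sentiment dashboard",
        {ProhibitedPractice.EMOTION_INFERENCE_AT_WORK_OR_SCHOOL},
    )
    print(review.name, "requires legal review:", review.requires_legal_review)
```

A checklist like this only routes use cases to human reviewers; whether a given system actually falls within a prohibition, or within one of the narrow exceptions, remains a legal determination.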
