Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems | EU AI Act

Contributors

Adeel Hasan

Sr. Data Privacy Analyst at Securiti

CIPM, CIPP/Canada

Muhammad Faisal Sattar

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/Asia


On 13 March 2024, the European Parliament formally adopted the European Union Artificial Intelligence Act (EU AI Act), which aims to ensure the protection of fundamental rights while simultaneously boosting innovation. To achieve this objective, the EU AI Act introduces, among other things, the obligation to carry out a fundamental rights impact assessment (FRIA) in certain situations.

This blog provides an overview of the FRIA and covers important aspects, including the entities responsible for carrying out the assessment, what should be included in the assessment, and when it should be carried out.

What is an FRIA?

As the name suggests, the Fundamental Rights Impact Assessment (FRIA) under the EU AI Act is aimed at protecting individuals' fundamental rights from the adverse impacts of AI systems. The primary goal of an FRIA is to identify the specific risks to the rights of individuals or groups of individuals likely to be affected, and to identify the measures to be taken if those risks materialize.

Which AI Systems are Covered for an FRIA?

The high-risk AI systems referred to in Article 6(2) of the EU AI Act are subject to the requirements of FRIAs. Below is a brief description of the in-scope and exempted high-risk AI systems:

A. In-Scope High-Risk AI Systems

The high-risk AI systems listed in the following areas, as detailed in Annex III of the EU AI Act, are subject to the requirements of FRIAs:

  1. Biometrics;
  2. Educational and vocational training;
  3. Employment, workers management, and access to self-employment;
  4. Access to and enjoyment of essential private services and essential public services and benefits;
  5. Law enforcement;
  6. Migration, asylum, and border control management; and
  7. Administration of justice and democratic processes.

B. Exempted High-Risk AI Systems

The high-risk AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating, or electricity are exempted from the requirements related to the FRIAs.

Who Needs to Conduct an FRIA?

Article 27 of the EU AI Act obliges certain types of deployers to conduct an FRIA. Before delving into the covered types of deployers, let us first understand who a deployer is. As per Article 3(4) of the EU AI Act, a deployer is:

“a natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”

Let’s now briefly discuss each of the covered types of deployers obligated to conduct an FRIA:

A. Deployers that are bodies governed by public law

Deployers that are bodies governed by public law must conduct an FRIA before deploying a high-risk AI system. Although the EU AI Act does not explicitly define the phrase ‘bodies governed by public law’, the phrase is defined in other EU legislation. For example, under Article 2(1)(4) of Directive 2014/24/EU, ‘bodies governed by public law’ means bodies that have all of the following characteristics:

  • they are established for the specific purpose of meeting needs in the general interest, not having an industrial or commercial character;
  • they have legal personality; and
  • they are financed, for the most part, by the State, regional or local authorities, or by other bodies governed by public law; or are subject to management supervision by those authorities or bodies; or have an administrative, managerial or supervisory board, more than half of whose members are appointed by the State, regional or local authorities, or by other bodies governed by public law.

Assuming ‘bodies governed by public law’ is interpreted in the same manner as above for the purposes of the EU AI Act, a diverse range of deployers may fall within this category. It is pertinent to note that the types of deployers falling within this category may vary depending on the laws of the relevant member state.
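
The cumulative structure of this definition (two mandatory characteristics plus any one of three state-influence limbs) is easy to misread, so the following minimal Python sketch encodes it as a boolean test. It is purely illustrative: the class and field names are our own and do not come from the EU AI Act or the Directive.

```python
from dataclasses import dataclass

@dataclass
class DeployerProfile:
    """Hypothetical attributes mirroring Article 2(1)(4) of Directive 2014/24/EU."""
    meets_general_interest_needs: bool    # established to meet general-interest needs,
                                          # without an industrial or commercial character
    has_legal_personality: bool
    mostly_state_financed: bool           # financed mostly by the State or other public-law bodies
    state_management_supervision: bool    # subject to management supervision by those bodies
    majority_state_appointed_board: bool  # more than half of the board appointed by the State

def is_body_governed_by_public_law(d: DeployerProfile) -> bool:
    # The first two characteristics are mandatory; the third is satisfied
    # by any one of the financing, supervision, or board-appointment limbs.
    state_influence = (
        d.mostly_state_financed
        or d.state_management_supervision
        or d.majority_state_appointed_board
    )
    return d.meets_general_interest_needs and d.has_legal_personality and state_influence
```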

B. Deployers that are private entities providing public services

Deployers that are not governed by public law but provide public services are also obligated to conduct an FRIA before deploying a high-risk AI system. The EU AI Act does not define the term ‘public services’; however, Recital 96 sheds some light on the concept. According to that recital, examples of services of a public nature include education, healthcare, social services, housing, and the administration of justice.

A closer look at this category of covered deployers reveals that more entities will be subject to the requirements of the FRIA than would appear at first sight. The use of the broad term ‘public services’ without providing criteria to determine such services hints at the legislative intent to cover all the deployers involved in the provision of services that can reasonably affect the public interest.

C. Deployers of certain high-risk AI systems

Deployers of the following high-risk AI systems, referred to in points 5(b) and (c) of Annex III of the EU AI Act, are also obligated to conduct an FRIA:

  • AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; and
  • AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

The deployers of the above-mentioned high-risk AI systems must conduct FRIAs before deployment, regardless of whether they are bodies governed by public law or private entities providing public services.
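
Taken together, categories A and B turn on who the deployer is, while category C turns on what the high-risk AI system does. A minimal, purely illustrative Python sketch of that decision rule follows; the enum values and use-case labels are our own shorthand, not terms from the Act, and real scoping requires legal analysis.

```python
from enum import Enum, auto

class DeployerKind(Enum):
    PUBLIC_LAW_BODY = auto()         # category A above
    PRIVATE_PUBLIC_SERVICE = auto()  # category B above
    OTHER_PRIVATE = auto()

# Hypothetical labels for the Annex III point 5(b) and 5(c) use cases:
# creditworthiness evaluation / credit scoring (except fraud detection)
# and life/health insurance risk assessment and pricing.
ANNEX_III_5B_5C = {"credit_scoring", "life_health_insurance_pricing"}

def fria_required(kind: DeployerKind, annex_iii_use: str) -> bool:
    """Does Article 27 oblige this deployer of a high-risk AI system to run an FRIA?"""
    if kind in (DeployerKind.PUBLIC_LAW_BODY, DeployerKind.PRIVATE_PUBLIC_SERVICE):
        return True
    # Category C applies regardless of the deployer's legal nature.
    return annex_iii_use in ANNEX_III_5B_5C
```

For instance, under this rule a retail bank that is neither a public-law body nor a public-service provider would still be caught by category C when it deploys a credit-scoring system.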

When Should an FRIA be Conducted?

The obligation to conduct an FRIA applies to the first use of a high-risk AI system. Therefore, the deployer should conduct the FRIA before putting the system into service. It is pertinent to note that a deployer may rely on a previously conducted FRIA or existing impact assessment carried out by the provider; however, necessary steps must be taken to update the information in case any of the elements assessed in the FRIA (discussed below) has changed or is no longer up to date.

How Should an FRIA be Conducted?

The EU AI Act does not prescribe the manner in which an FRIA should be conducted; however, it refers to a questionnaire template, to be developed by the AI Office, to facilitate deployers in conducting an FRIA.

Irrespective of the manner in which the assessment is carried out, an FRIA should consist of the following elements (one way to record them is sketched after the list):

  (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
  (b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
  (c) the categories of natural persons and groups likely to be affected by its use in the specific context;
  (d) the specific risks of harm likely to have an impact on the categories of persons or groups of persons identified pursuant to point (c), taking into account the information given by the provider pursuant to Article 13 of the EU AI Act;
  (e) a description of the implementation of human oversight measures, according to the instructions for use;
  (f) the measures to be taken where those risks materialize, including arrangements for internal governance and complaint mechanisms.
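
As a practical matter, a deployer will need to keep these six elements together in an internal record until the AI Office publishes its official template. The data structure below is a hedged sketch of one way to do that; all field names are our own and do not anticipate the template's actual format.

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Illustrative record mirroring the elements of Article 27(1)(a)-(f)."""
    process_description: str        # (a) deployer's processes using the system, per its intended purpose
    period_and_frequency: str       # (b) period of time and frequency of intended use
    affected_groups: list[str]      # (c) categories of natural persons and groups likely to be affected
    risks_of_harm: list[str]        # (d) specific risks to those groups, using provider info (Article 13)
    human_oversight: str            # (e) human oversight measures per the instructions for use
    mitigation_measures: list[str]  # (f) measures if risks materialize, incl. governance and complaints
```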

It is important to note that a data protection impact assessment (DPIA) conducted pursuant to Article 35 of Regulation (EU) 2016/679 (GDPR) or Article 27 of Directive (EU) 2016/680 complements an FRIA conducted under the EU AI Act. Therefore, where any FRIA obligation has already been satisfied through such a DPIA, the deployer is deemed compliant with that obligation for the purposes of the FRIA. However, such DPIAs are only complementary to an FRIA, and deployers must still comply with any FRIA requirements not specifically addressed in the DPIA.

Notification of the FRIA

Once an FRIA has been carried out, the deployer must notify the relevant market surveillance authority of its results by filling out and submitting the questionnaire template, which is yet to be developed by the AI Office. Deployers may be exempt from the notification obligation where this is necessary for exceptional reasons of public security or the protection of the life and health of persons, environmental protection, or the protection of key industrial and infrastructural assets. However, such an exemption applies only for a limited period, and the necessary FRIA procedures must still be carried out without undue delay.
