Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems | EU AI Act

Contributors

Adeel Hasan

Sr. Data Privacy Analyst at Securiti

CIPM, CIPP/Canada

Muhammad Faisal Sattar

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/Asia

On 13 March 2024, the European Parliament formally adopted the European Union Artificial Intelligence Act (EU AI Act), which aims to ensure the protection of fundamental rights while simultaneously boosting innovation. To achieve this objective, the EU AI Act introduces, among other things, the obligation to carry out a fundamental rights impact assessment (FRIA) in certain situations.

This blog provides an overview of the FRIA and covers important aspects, including the entities responsible for carrying out the assessment, what should be included in the assessment, and when it should be carried out.

What is an FRIA?

As the name suggests, the Fundamental Rights Impact Assessment (FRIA) under the EU AI Act is aimed at protecting individuals' fundamental rights from the adverse impacts produced by AI systems. The primary goal of an FRIA is to identify the specific risks to the rights of the individuals or groups of individuals likely to be affected, and to identify the measures to be taken should those risks materialize.

Which AI Systems are Covered for an FRIA?

The high-risk AI systems referred to in Article 6(2) of the EU AI Act are subject to the requirements of FRIAs. Below is a brief description of the in-scope and exempted high-risk AI systems:

A. In-Scope High-Risk AI Systems

The high-risk AI systems listed in the following areas, as detailed in Annex III of the EU AI Act, are subject to the requirements of FRIAs:

  1. Biometrics;
  2. Educational and vocational training;
  3. Employment, workers management, and access to self-employment;
  4. Access to and enjoyment of essential private services and essential public services and benefits;
  5. Law enforcement;
  6. Migration, asylum, and border control management; and
  7. Administration of justice and democratic processes.

B. Exempted High-Risk AI Systems

High-risk AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity are exempt from the FRIA requirement.
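The scoping rule above can be sketched as a simple lookup. This is a minimal illustrative sketch, not a legal determination: the area labels below are informal shorthand for the Annex III areas named in this section, and real scoping requires reading the full Annex III descriptions.

```python
# Illustrative sketch: is a high-risk AI system in a given Annex III area
# subject to the FRIA requirement? Area names are informal labels, not
# terms defined by the EU AI Act.

# Annex III areas listed in this section as FRIA in-scope
FRIA_IN_SCOPE_AREAS = {
    "biometrics",
    "education_and_vocational_training",
    "employment_and_workers_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

# Safety components of critical infrastructure are exempt
FRIA_EXEMPT_AREAS = {"critical_infrastructure"}

def fria_required_for_area(annex_iii_area: str) -> bool:
    """Return True if a high-risk system in this area is FRIA in-scope."""
    if annex_iii_area in FRIA_EXEMPT_AREAS:
        return False
    return annex_iii_area in FRIA_IN_SCOPE_AREAS
```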

Who Needs to Conduct an FRIA?

Article 27 of the EU AI Act obliges certain types of deployers to conduct an FRIA. Before delving into the covered types of deployers, let us first understand who a deployer is. As per Article 3(4) of the EU AI Act, a deployer is:

“a natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”

Let’s now briefly discuss each of the covered types of deployers obligated to conduct an FRIA:

A. Deployers that are bodies governed by public law

Deployers that are governed by public law must conduct an FRIA before deploying a high-risk AI system. Although the EU AI Act does not explicitly define the phrase ‘bodies governed by public law’, it has been defined under other EU legislation. For example, under Article 2(1)(4) of Directive 2014/24/EU, ‘bodies governed by public law’ means bodies that have all of the following characteristics:

  • they are established for the specific purpose of meeting needs in the general interest, not having an industrial or commercial character;
  • they have legal personality; and
  • they are financed, for the most part, by the State, regional or local authorities, or by other bodies governed by public law; or are subject to management supervision by those authorities or bodies; or have an administrative, managerial or supervisory board, more than half of whose members are appointed by the State, regional or local authorities, or by other bodies governed by public law.

Assuming ‘bodies governed by public law’ is interpreted in the same manner as above for the purposes of the EU AI Act, a diverse range of deployers may fall within this category. It is pertinent to note that the types of deployers falling within this category may vary depending on the laws of the relevant member state.

B. Deployers that are private entities providing public services

Deployers that are not governed by public law but are involved in providing public services are also under an obligation to conduct an FRIA before deploying a high-risk AI system. The EU AI Act does not define the term ‘public services’; however, Recital 96 sheds some light on the concept. As per the said recital, some examples of services of a public nature are education, healthcare, social services, housing, and the administration of justice.

A closer look at this category of covered deployers reveals that more entities will be subject to the requirements of the FRIA than would appear at first sight. The use of the broad term ‘public services’ without providing criteria to determine such services hints at the legislative intent to cover all the deployers involved in the provision of services that can reasonably affect the public interest.

C. Deployers of certain high-risk AI systems

Deployers of the following high-risk AI systems, referred to in points 5(b) and (c) of Annex III of the EU AI Act, are also obligated to conduct an FRIA:

  • AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; and
  • AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

The deployers of the above-mentioned high-risk AI systems must conduct FRIAs before deployment, regardless of whether they are bodies governed by public law or private entities providing public services.
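Putting the three covered categories together, the obligation test can be sketched as follows. This is a simplified illustration under the assumptions of this section: the attribute names are hypothetical, and determining whether an entity is 'governed by public law' or 'provides public services' is itself a legal question, reduced here to booleans.

```python
from dataclasses import dataclass

@dataclass
class Deployer:
    """Hypothetical sketch of the three covered deployer categories."""
    governed_by_public_law: bool          # category A
    provides_public_services: bool        # category B
    # Category C: Annex III points 5(b)/(c) - creditworthiness scoring or
    # life/health insurance risk assessment and pricing
    deploys_credit_or_insurance_system: bool

def must_conduct_fria(d: Deployer) -> bool:
    """Falling into any one of the three categories triggers the
    obligation to conduct an FRIA before first use of the system."""
    return (
        d.governed_by_public_law
        or d.provides_public_services
        or d.deploys_credit_or_insurance_system
    )
```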

When Should an FRIA be Conducted?

The obligation to conduct an FRIA applies to the first use of a high-risk AI system. Therefore, the deployer should conduct the FRIA before putting the system into service. It is pertinent to note that a deployer may rely on a previously conducted FRIA or existing impact assessment carried out by the provider; however, necessary steps must be taken to update the information in case any of the elements assessed in the FRIA (discussed below) has changed or is no longer up to date.

How Should an FRIA be Conducted?

The EU AI Act does not specify any particular manner in which an FRIA should be conducted; however, it refers to a questionnaire template to be developed by the AI Office to facilitate deployers in conducting an FRIA.

Irrespective of the manner in which the assessment is carried out, an FRIA should consist of the following elements:

  (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
  (b) a description of the period of time within which and the frequency with which each high-risk AI system is intended to be used;
  (c) the categories of natural persons and groups likely to be affected by its use in the specific context;
  (d) the specific risks of harm likely to have an impact on the categories of persons or groups of persons identified pursuant to point (c) above, taking into account the information given by the provider pursuant to Article 13 of the EU AI Act;
  (e) a description of the implementation of human oversight measures according to the instructions for use; and
  (f) the measures to be taken where those risks materialize, including arrangements for internal governance and complaint mechanisms.
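The six elements above lend themselves to a simple record structure for documentation purposes. The sketch below is a hypothetical internal representation, not a format prescribed by the Act; the field names are illustrative shorthand for the six required elements.

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Hypothetical record of the six required FRIA elements."""
    process_description: str          # (a) deployer's processes using the system
    period_and_frequency: str         # (b) intended period and frequency of use
    affected_categories: list[str]    # (c) persons/groups likely affected
    risks_of_harm: list[str]          # (d) specific risks to those categories
    human_oversight_measures: str     # (e) oversight per instructions for use
    mitigation_measures: list[str]    # (f) measures if risks materialize

    def is_complete(self) -> bool:
        """All six elements must be present before first use."""
        return all([
            self.process_description,
            self.period_and_frequency,
            self.affected_categories,
            self.risks_of_harm,
            self.human_oversight_measures,
            self.mitigation_measures,
        ])
```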

It is important to note that a data protection impact assessment (DPIA) conducted pursuant to Article 35 of Regulation (EU) 2016/679 (GDPR) or Article 27 of Directive (EU) 2016/680 (the Directive) complements an FRIA conducted under the EU AI Act. Therefore, for the purposes of an FRIA, a deployer may be deemed in compliance with any obligations that have already been fulfilled through a DPIA conducted under the GDPR or the Directive. However, such DPIAs are only complementary to an FRIA, and deployers must still comply with any FRIA requirements not specifically addressed in the DPIAs.

Notification of the FRIA

Once an FRIA has been carried out, deployers must notify the relevant market surveillance authority of its results by filling out and submitting the questionnaire template, which is yet to be developed by the AI Office. Deployers may be exempted from the obligation to notify where necessary for exceptional reasons of public security, the protection of the life and health of persons, environmental protection, or the protection of key industrial and infrastructural assets. However, such an exemption shall be for a limited period only, and the necessary FRIA procedures must still be carried out without undue delay.
