On 13 March 2024, the European Parliament formally adopted the European Union Artificial Intelligence Act (EU AI Act), which aims to ensure the protection of fundamental rights while simultaneously boosting innovation. To achieve this objective, the EU AI Act introduces, among other things, the obligation to carry out a fundamental rights impact assessment (FRIA) in certain situations.
This blog provides an overview of the FRIA and covers important aspects, including the entities responsible for carrying out the assessment, what should be included in the assessment, and when it should be carried out.
What is an FRIA?
As the name suggests, the Fundamental Rights Impact Assessment (FRIA) under the EU AI Act is aimed at protecting individuals' fundamental rights from the adverse impacts of AI systems. The primary goal of an FRIA is to identify the specific risks to the rights of the individuals or groups of individuals likely to be affected and to identify the measures to be taken if those risks materialize.
Which AI Systems Are Covered by the FRIA Requirement?
The high-risk AI systems referred to in Article 6(2) of the EU AI Act are subject to the requirements of FRIAs. Below is a brief description of the in-scope and exempted high-risk AI systems:
A. In-Scope High-Risk AI Systems
The high-risk AI systems listed in the following areas, as detailed in Annex III of the EU AI Act, are subject to the requirements of FRIAs:
- Biometrics;
- Educational and vocational training;
- Employment, workers’ management, and access to self-employment;
- Access to and enjoyment of essential private services and essential public services and benefits;
- Law enforcement;
- Migration, asylum, and border control management; and
- Administration of justice and democratic processes.
B. Exempted High-Risk AI Systems
High-risk AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity are exempt from the FRIA requirement.
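To make this scoping rule concrete, below is a minimal Python sketch of a first-pass scope check. It is purely illustrative: the area labels and the function name are our own simplified inventions, not terms from the Act, and an actual scoping determination requires legal analysis of the specific system.

```python
# Purely illustrative sketch -- not a compliance tool or legal advice.
# The area labels below are simplified tags for the Annex III headings.

# Annex III areas whose high-risk AI systems fall within FRIA scope
FRIA_IN_SCOPE_AREAS = {
    "biometrics",
    "education_and_vocational_training",
    "employment_and_workers_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "administration_of_justice_and_democratic_processes",
}

# Critical-infrastructure safety components are exempt from the FRIA requirement
FRIA_EXEMPT_AREAS = {"critical_infrastructure"}


def system_in_fria_scope(annex_iii_area: str) -> bool:
    """First-pass check: does a high-risk AI system in this area need an FRIA?"""
    if annex_iii_area in FRIA_EXEMPT_AREAS:
        return False
    return annex_iii_area in FRIA_IN_SCOPE_AREAS
```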
Who Needs to Conduct an FRIA?
Article 27 of the EU AI Act obliges certain types of deployers to conduct an FRIA. Before delving into the covered types of deployers, let us first understand who a deployer is. As per Article 3(4) of the EU AI Act, a deployer is:
“a natural or legal person, public authority, agency, or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”
Let’s now briefly discuss each of the covered types of deployers obligated to conduct an FRIA:
A. Deployers that are bodies governed by public law
Deployers that are bodies governed by public law must conduct an FRIA before deploying a high-risk AI system. Although the EU AI Act does not explicitly define the phrase ‘bodies governed by public law’, it has been defined in other EU legislation. For example, under Article 2(1)(4) of Directive 2014/24/EU, ‘bodies governed by public law’ means bodies that have all of the following characteristics:
- they are established for the specific purpose of meeting needs in the general interest, not having an industrial or commercial character;
- they have legal personality; and
- they are financed, for the most part, by the State, regional or local authorities, or by other bodies governed by public law; or are subject to management supervision by those authorities or bodies; or have an administrative, managerial or supervisory board, more than half of whose members are appointed by the State, regional or local authorities, or by other bodies governed by public law.
Assuming ‘bodies governed by public law’ is interpreted in the same manner as above for the purposes of the EU AI Act, a diverse range of deployers may fall within this category. It is pertinent to note that the types of deployers falling within this category may vary depending on the laws of the relevant member state.
B. Deployers that are private entities providing public services
Deployers that are not governed by public law but provide public services are also obliged to conduct an FRIA before deploying a high-risk AI system. The EU AI Act does not define the term ‘public services’; however, Recital 96 sheds some light on the concept, citing education, healthcare, social services, housing, and the administration of justice as examples of services of a public nature.
A closer look at this category of covered deployers reveals that more entities will be subject to the FRIA requirement than might appear at first sight. The use of the broad term ‘public services’, without criteria for determining which services qualify, suggests a legislative intent to cover all deployers providing services that can reasonably affect the public interest.
C. Deployers of certain high-risk AI systems
Deployers of the following high-risk AI systems, referred to in points 5(b) and (c) of Annex III of the EU AI Act, are also obligated to conduct an FRIA:
- AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; and
- AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance.
The deployers of the above-mentioned high-risk AI systems must conduct FRIAs before deployment, regardless of whether they are bodies governed by public law or private entities providing public services.
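Putting the three categories together, the following Python sketch shows one way a deployer might structure a first-pass screen for the Article 27 obligation. The function and parameter names are hypothetical, and each boolean flag grossly simplifies a question that in practice calls for legal assessment.

```python
def deployer_must_conduct_fria(
    body_governed_by_public_law: bool,
    private_entity_providing_public_services: bool,
    deploys_credit_scoring_system: bool,     # Annex III, point 5(b)
    deploys_insurance_pricing_system: bool,  # Annex III, point 5(c)
) -> bool:
    """Rough first-pass screen for the Article 27 FRIA obligation.

    Assumes the system concerned is already a high-risk AI system within
    the FRIA scope discussed earlier.
    """
    # Categories A and B turn on the status of the deployer itself.
    if body_governed_by_public_law or private_entity_providing_public_services:
        return True
    # Category C attaches to the system, regardless of the deployer's status.
    return deploys_credit_scoring_system or deploys_insurance_pricing_system
```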
When Should an FRIA be Conducted?
The obligation to conduct an FRIA applies to the first use of a high-risk AI system; the deployer must therefore conduct the FRIA before putting the system into service. It is pertinent to note that, in similar cases, a deployer may rely on a previously conducted FRIA or on an existing impact assessment carried out by the provider; however, if any of the elements assessed in the FRIA (discussed below) has changed or is no longer up to date, the deployer must take the necessary steps to update the information.
How Should an FRIA be Conducted?
The EU AI Act does not prescribe the manner in which an FRIA should be conducted; however, it provides for a template questionnaire, to be developed by the AI Office, to facilitate deployers in conducting an FRIA.
Irrespective of the manner in which the assessment is carried out, an FRIA should consist of the following elements:
- (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;
- (b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
- (c) the categories of natural persons and groups likely to be affected by its use in the specific context;
- (d) the specific risks of harm likely to have an impact on the categories of persons or groups of persons identified pursuant to point (c), taking into account the information given by the provider pursuant to Article 13 of the EU AI Act;
- (e) a description of the implementation of human oversight measures, according to the instructions for use; and
- (f) the measures to be taken where those risks materialize, including arrangements for internal governance and complaint mechanisms.
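As an illustration of how these six elements might be captured in a structured record, here is a minimal Python sketch. The class and field names are our own paraphrases of Article 27(1)(a) to (f), not terms defined by the Act.

```python
from dataclasses import dataclass


@dataclass
class FriaRecord:
    """One field per Article 27(1) element; the names are our own paraphrases."""

    process_description: str            # (a) deployer's processes and intended purpose
    usage_period_and_frequency: str     # (b) period and frequency of intended use
    affected_persons: list[str]         # (c) categories of natural persons and groups
    specific_risks_of_harm: list[str]   # (d) risks of harm to the categories under (c)
    human_oversight_measures: str       # (e) oversight per the instructions for use
    risk_materialisation_measures: str  # (f) internal governance and complaint mechanisms

    def missing_elements(self) -> list[str]:
        """Completeness check: names of elements that are still empty."""
        return [name for name, value in vars(self).items() if not value]
```

On this sketch, an FRIA would not be treated as complete, and the system would not be put into service, until missing_elements() returns an empty list.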
It is important to note the relationship between an FRIA and a data protection impact assessment (DPIA) conducted pursuant to Article 35 of Regulation (EU) 2016/679 (GDPR) or Article 27 of Directive (EU) 2016/680. Where any of the FRIA obligations has already been met through such a DPIA, the FRIA complements that DPIA, and the deployer may be deemed in compliance with the obligations it has already satisfied. However, the DPIA is only complementary to the FRIA, and deployers must still comply with any FRIA requirements not specifically addressed in the DPIA.
Notification of the FRIA
Once an FRIA has been carried out, the deployer must notify the relevant market surveillance authority of its results by submitting the filled-out questionnaire template, which is yet to be developed by the AI Office. Deployers may be exempt from this notification obligation where necessary for exceptional reasons of public security or the protection of the life and health of persons, environmental protection, or the protection of key industrial and infrastructural assets. Such an exemption is, however, for a limited period only, and the necessary FRIA procedures must still be carried out without undue delay.