Article 6 of the AI Act outlines the criteria for classifying AI systems as high-risk or not. It also elaborates on the Commission’s role and responsibilities in making these classifications, providing clarity on the process and its oversight.
Irrespective of whether an AI system is placed on the market or put into service independently of the product concerned, it will be considered high-risk if both of the following conditions are met:
- The AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonization legislation;
- The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to being placed on the market or put into service.
However, an AI system referred to in Annex III will not be considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This derogation applies where any of the following conditions is met:
- The AI system will be used to perform a narrow procedural task;
- The AI system will be used to improve the result of a previously completed human activity;
- The AI system will be used to detect decision-making patterns, or deviations from prior patterns, and is not meant to replace or influence a previously completed human assessment without proper human review;
- The AI system will be used to perform a preparatory task for an assessment for the purposes listed in Annex III.
In any case, an AI system that performs profiling of natural persons will be considered high-risk.
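The classification rule above amounts to a small decision procedure: the two cumulative conditions trigger high-risk status, the Annex III derogations can remove it, and profiling overrides the derogations. The sketch below illustrates that logic only; the attribute names and structure are assumptions for illustration, not terms defined in the Act, and a real assessment would of course require legal analysis, not a boolean checklist.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Illustrative attributes; the field names are assumptions, not Act terminology."""
    safety_component_or_covered_product: bool  # first cumulative condition
    requires_third_party_assessment: bool      # second cumulative condition
    listed_in_annex_iii: bool
    performs_profiling: bool
    # Annex III derogation conditions (any single one suffices)
    narrow_procedural_task: bool = False
    improves_prior_human_activity: bool = False
    pattern_detection_with_human_review: bool = False
    preparatory_task_only: bool = False


def is_high_risk(s: AISystem) -> bool:
    # Both cumulative conditions met -> high-risk
    if s.safety_component_or_covered_product and s.requires_third_party_assessment:
        return True
    if s.listed_in_annex_iii:
        # Profiling of natural persons is high-risk in any case
        if s.performs_profiling:
            return True
        # Derogation: any one condition removes the high-risk classification
        derogation = (s.narrow_procedural_task
                      or s.improves_prior_human_activity
                      or s.pattern_detection_with_human_review
                      or s.preparatory_task_only)
        return not derogation
    return False
```

Note how the profiling check sits before the derogation check, mirroring the Act's "in any case" override of the Annex III exceptions.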
A provider that considers its AI system not to be high-risk must document its assessment before placing the system on the market or putting it into service. Such a provider remains subject to the registration obligation and must provide the documentation of its assessment to the national competent authorities upon request.
Commission’s Role
After consulting the European Artificial Intelligence Board, the Commission will, no later than 18 months after the AI Act enters into force, develop and publish guidelines specifying the practical implementation of this Article, including examples that distinguish high-risk from non-high-risk AI systems. The Commission will also have the authority to update the classification conditions, adding or modifying them as necessary on the basis of emerging evidence. Any such revision must not reduce the overall level of protection for health, safety, and fundamental rights.