Article 9: Risk Management System | EU AI Act

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syed Tatheer Kazmi

Associate Data Privacy Analyst, Securiti

Article 9 of the AI Act details the risk management system that organizations are obligated to deploy within their operations and products.

Per these requirements, organizations must establish, implement, document, and maintain a risk management system for each high-risk AI system. Such a system should be a continuous, iterative process, planned and run throughout the system's entire lifecycle and subject to regular review and updating.

The risk management system shall consist of the following steps:

  • The appropriate identification and analysis of all known and reasonably foreseeable risks to health, safety or fundamental rights when the high-risk AI system is used for its intended purpose;
  • The estimation of the likelihood of the foreseeable risks emerging when the high-risk AI system is used for its intended purposes;
  • The evaluation of all other relevant risks based on the analysis of data gathered from the post-market monitoring system;
  • The adoption of appropriate risk management practices and measures to adequately address the identified risks.
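
The four steps above describe an iterative process rather than a one-off assessment. Purely as a hypothetical illustration of how such a process might be tracked in software (the class, fields, and likelihood scale below are assumptions of this sketch, not terminology or requirements of the Act), a minimal risk register could look like this:

```python
# Hypothetical sketch of a risk register mirroring the four steps above.
# Nothing here is mandated by the AI Act; names and scales are illustrative.
from dataclasses import dataclass, field
from enum import Enum


class Likelihood(Enum):
    RARE = 1
    POSSIBLE = 2
    LIKELY = 3


@dataclass
class Risk:
    # Step 1: an identified risk to health, safety or fundamental rights
    description: str
    # Step 2: estimated likelihood when the system is used as intended
    likelihood: Likelihood
    # Step 3: whether post-market monitoring data flagged this risk
    post_market_signal: bool = False
    # Step 4: risk management measures adopted to address the risk
    mitigations: list[str] = field(default_factory=list)


def unmitigated(register: list[Risk]) -> list[Risk]:
    """Return identified risks that still lack a documented measure."""
    return [r for r in register if not r.mitigations]


register = [
    Risk("Biased scoring output affecting a protected group",
         Likelihood.POSSIBLE,
         mitigations=["Re-balance training data", "Human review of borderline cases"]),
    Risk("Model drift degrading accuracy after deployment",
         Likelihood.LIKELY,
         post_market_signal=True),
]

for risk in unmitigated(register):
    print(f"Unmitigated risk ({risk.likelihood.name}): {risk.description}")
```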

This Article only addresses risks that can be reasonably mitigated or eliminated through the design or development of the high-risk AI system, or the provision of adequate technical information.

Risk management measures should take into account the effects and possible interactions resulting from the combined application of the Act's requirements, with a view to minimizing risks more effectively while achieving an appropriate balance in implementing those measures. They should also ensure that the residual risk associated with each hazard, as well as the overall residual risk of the AI system, is judged acceptable, identifying the most appropriate measures to minimize risks to that acceptable level.

To that end, the following measures should be taken:

  • Elimination or reduction of identified risks as far as technically feasible through adequate design and development of the high-risk AI system;
  • Implementation of adequate mitigation and control measures to address the risks that cannot be eliminated;
  • Provision of the necessary information and training to deployers, considering their technical expertise, experience, and expected use case, to ensure the safe deployment of high-risk AI systems.

The high-risk AI system should be tested to identify effective and targeted risk management measures. Such tests shall ensure that the AI system performs consistently for its intended purpose and complies with the requirements set out under the provisions of this Act. Such tests may also include testing in real-world conditions.

These tests may be conducted at any time during the development process and, in any event, before the high-risk AI system is placed on the market or put into service. When implementing these risk management measures, the provider must give due consideration to whether the high-risk AI system, in view of its intended purpose, is likely to adversely impact persons under the age of 18 or other vulnerable groups.
