Article 43: Conformity Assessment | EU AI Act

Contributors

Semra Islam

Sr. Data Privacy Analyst

CIPM, CIPP/Europe

Muhammad Faisal Sattar

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/Asia

Article 43 of the EU AI Act elaborates on the essential details of the conformity assessment procedures for high-risk AI systems.

On 13 March 2024, the European Parliament formally adopted the European Union Artificial Intelligence Act (EU AI Act), which aims to ensure the protection of fundamental rights while simultaneously boosting innovation. To achieve this objective, the EU AI Act introduces, among other things, the obligations of conformity assessment and fundamental rights impact assessment. This blog provides an overview of these assessments and covers important aspects, including the entities responsible for carrying out these assessments, what should be included in these assessments, and when they should be carried out.

Conformity Assessment

Description

Conformity assessment is the process of demonstrating whether the requirements set out in the EU AI Act (Chapter III, Section 2) relating to high-risk AI systems have been fulfilled.

These requirements are as follows:

  • Establishing, implementing, documenting, and maintaining a risk management system to address the risks posed by a high-risk AI system;
  • Implementing effective data governance and management practices (including bias mitigation) for the training, validation, and testing data sets;
  • Maintaining requisite up-to-date technical documentation in a clear and comprehensive manner;
  • Ensuring that high-risk AI systems technically allow for the automatic recording of events (logs) over the lifetime of the system;
  • Developing and designing high-risk AI systems so that their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately, and to help both providers and deployers comply with their relevant obligations;
  • Developing and designing high-risk AI systems so that they can be subject to appropriate human oversight while in use;
  • Developing and designing high-risk AI systems so that they achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle.
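The logging requirement above can be made concrete with a short sketch. The Act requires that high-risk AI systems technically allow automatic recording of events over their lifetime, but it does not prescribe a log schema; the class, field names, and example events below are purely illustrative assumptions.

```python
import time
import uuid

class AuditLogger:
    """Append-only event log for a high-risk AI system.

    Illustrative sketch only: the EU AI Act requires automatic recording
    of events (logs) over the system's lifetime but does not prescribe a
    schema; all field and event names here are assumptions.
    """

    def __init__(self, system_id):
        self.system_id = system_id
        self.events = []  # in practice: durable, tamper-evident storage

    def record(self, event_type, **details):
        entry = {
            "event_id": str(uuid.uuid4()),   # unique, for traceability
            "system_id": self.system_id,
            "timestamp": time.time(),        # when the event occurred
            "event_type": event_type,        # e.g. "inference", "model_update"
            "details": details,
        }
        self.events.append(entry)
        return entry

# Hypothetical usage for a fictitious credit-scoring system
log = AuditLogger("credit-scoring-v2")
log.record("inference", input_ref="application-123", risk_score=0.82)
log.record("model_update", new_version="2.1.0")
```

A real deployment would write such entries to tamper-evident storage and retain them for the period the applicable rules require, rather than keeping them in memory.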

Further, where a product contains an AI system that is also subject to the requirements of the EU harmonization legislation listed in Annex I, Section A of the Act, the provider is responsible for ensuring that the product fully complies with all requirements of that legislation. The EU harmonization laws create common standards for products across the internal market; those listed in Annex I, Section A cover machinery, lifts, toys, personal protective equipment, medical devices, and more.

For an understanding of high-risk AI systems, please refer to Navigating the Future: How the EU AI Act Shapes AI Governance.

When Should a Conformity Assessment be Conducted?

A conformity assessment should be performed before a high-risk AI system is placed on the market or put into service.

Further, whenever a change occurs that may affect the compliance of a high-risk AI system with the EU AI Act or when the system's intended purpose changes, that AI system should undergo a new conformity assessment.

Who Needs to Conduct a Conformity Assessment?

 

  • Providers: when they place on the market or put into service a high-risk AI system.
  • Distributors, importers, deployers, or other third parties: when they put their name or trademark on a high-risk AI system already placed on the market or put into service; make a substantial modification to a high-risk AI system in such a way that it remains a high-risk AI system; or modify the intended purpose of an AI system that has not been classified as high-risk in such a way that the AI system becomes high-risk.
  • Product manufacturers: when a high-risk AI system that is a safety component of a product covered by the EU harmonization legislation is placed on the market or put into service under the name or trademark of the product manufacturer.

Types of Conformity Assessment

Conformity assessment procedures can be based on the following:

  • Internal control, without the involvement of a notified body; or
  • Assessment of the quality management system (QMS) and the technical documentation, with the involvement of a notified body.

Conformity Assessment based on Internal Control (Self-Assessment)

As part of the conformity assessment procedure based on internal control, the provider performs a self-assessment to:

  • Verify that the established QMS for the high-risk AI system is in compliance with the requirements of Article 17 of the EU AI Act. These requirements include regulatory compliance, technical specifications, quality controls and testing, data management, risk management, monitoring, reporting, etc.;
  • Examine the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements for high-risk AI systems (Chapter III, Section 2 of the AI Act), as listed above; and
  • Verify that the AI system's design and development process and post-market monitoring, as referred to in Article 72, are consistent with the technical documentation.
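The three verification steps above can be modeled as a simple checklist. This is a minimal sketch, not anything the Act prescribes; the step names and return shape are assumptions chosen for illustration.

```python
def internal_control_self_assessment(checks):
    """Return (passed, failed_steps) for the three verification steps
    of the internal-control procedure.

    Sketch only: step names are illustrative labels, not the Act's wording.
    `checks` maps each step name to True (verified) or False.
    """
    required_steps = (
        "qms_complies_with_article_17",
        "technical_documentation_shows_compliance",
        "design_and_post_market_monitoring_consistent",
    )
    failed = [step for step in required_steps if not checks.get(step, False)]
    return (not failed, failed)

# Example: one step not yet verified, so the self-assessment fails
passed, failed = internal_control_self_assessment({
    "qms_complies_with_article_17": True,
    "technical_documentation_shows_compliance": True,
    "design_and_post_market_monitoring_consistent": False,
})
```

All three steps must pass before the provider can consider the internal-control assessment complete; the returned list identifies which steps still need work.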

Conformity Assessment based on Assessment of the Quality Management System and the Technical Documentation with the Involvement of a Notified Body

This conformity assessment procedure involves a notified body. A notified body is a conformity assessment body notified in accordance with the EU AI Act and other relevant EU harmonization legislation, which performs third-party conformity assessment activities, including testing, certification, and inspection.

The provider shall submit an application to a notified body for the assessment of the QMS for its high-risk AI system as per the requirements of Article 17 of the EU AI Act. The application of the provider shall contain details of, amongst other things, the list of AI systems covered under the same QMS, the technical documentation of those AI systems, the documentation concerning the QMS, and a description of the procedures in place to ensure that the QMS remains adequate and effective. Any changes to an approved QMS shall be brought to the attention of the notified body. The approved QMS shall be subject to surveillance by the notified body.

Further, the provider shall also submit an application to a notified body for the assessment of the technical documentation related to the AI system. The application shall include the technical documentation necessary for the notified body to assess the compliance of the AI system with the relevant essential requirements for high-risk AI systems (Chapter III, Section 2 of the AI Act). The notified body shall be granted full access to the training, validation, and testing datasets used. The notified body may also require the provider to supply further evidence or carry out further tests and may directly carry out adequate tests. In exceptional circumstances, the notified body, upon a reasoned request, shall also be granted access to the training and trained models of the AI system.

Where the AI system is in conformity with the requirements for high-risk AI systems, an EU technical documentation assessment certificate shall be issued by the notified body. Any change to the AI system that could affect the compliance of the AI system with the EU AI Act’s requirements or its intended purpose shall be approved by the same notified body.

For assessment purposes, providers shall be required to allow the notified body to access the premises where the design, development, and testing of the AI systems is taking place and share with it all necessary information. The notified body shall carry out periodic audits to make sure that the provider maintains and applies the QMS and shall provide the provider with an audit report. In the context of those audits, the notified body may carry out additional tests of the AI systems for which an EU technical documentation assessment certificate was issued.

For the purposes of this type of conformity assessment, the provider may choose any of the notified bodies. However, where the high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities or by EU institutions, bodies, offices or agencies, the applicable market surveillance authority shall act as a notified body.

What Type of Conformity Assessment Should Be Carried Out?

1. High-risk AI systems referred to in Annex III, except those used in biometrics. These are AI systems in any of the following areas:

  • Critical infrastructure;
  • Educational and vocational training;
  • Employment, workers management, and access to self-employment;
  • Access to and enjoyment of essential private services and essential public services and benefits;
  • Law enforcement;
  • Migration, asylum, and border control management; and
  • Administration of justice and democratic processes.

For more details on these AI systems, please refer to Navigating the Future: How the EU AI Act Shapes AI Governance.

Required assessment: conformity assessment based on internal control.

2. High-risk AI systems used in biometrics, where the provider has applied harmonized standards or common specifications in demonstrating the compliance of the system with the requirements listed above.

Required assessment: either of the two procedures, at the option of the provider.

3. High-risk AI systems used in biometrics, where:

  • harmonized standards do not exist, and common specifications are not available;
  • the provider has not applied, or has applied only in part, the harmonized standards;
  • common specifications exist, but the provider has not applied them; or
  • one or more of the harmonized standards have been published with a restriction, and only for the part of the standard that was restricted.

Required assessment: conformity assessment based on the assessment of the QMS and the technical documentation, with the involvement of a notified body.

4. High-risk AI systems to which the laws listed in Annex I, Section A (Union harmonization legislation based on the New Legislative Framework) apply.

Required assessment: the relevant conformity assessment as required under the applicable harmonization law.
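The decision logic of the table above can be sketched as a small function. This is illustrative only: the parameter names and return strings are assumptions, and determining a system's actual classification requires legal analysis, not a boolean flag.

```python
def required_assessment(covered_by_annex_i_section_a,
                        annex_iii_high_risk,
                        used_in_biometrics,
                        standards_fully_applied):
    """Sketch of the conformity assessment routing described above.

    Illustrative assumption: each input is a boolean the provider has
    already established through its own legal analysis.
    """
    if covered_by_annex_i_section_a:
        # Annex I, Section A products follow their sectoral procedure
        return "procedure required under the applicable harmonization law"
    if not annex_iii_high_risk:
        return "no conformity assessment required under Article 43"
    if not used_in_biometrics:
        # Annex III systems outside biometrics: self-assessment route
        return "internal control"
    if standards_fully_applied:
        # Harmonized standards / common specifications fully applied
        return "provider's choice: internal control or notified body"
    # Standards absent, partially applied, or restricted
    return "notified body (QMS and technical documentation)"

# Example: an Annex III system outside biometrics
route = required_assessment(False, True, False, False)
```

For instance, an employment-screening system (Annex III, non-biometric) would land on the internal-control route, while a biometric system with no applicable harmonized standards would need a notified body.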

 

Deviation from the Conformity Assessment Procedure

The EU AI Act outlines specific scenarios in which deviation from the normal conformity assessment procedure is permissible, and a high-risk AI system may be placed on the market or put into service before the assessment process is completed. This exemption applies solely for exceptional reasons of public security, the protection of life and health, environmental protection, or the protection of key industrial and infrastructural assets.

How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. The Data + AI Command Center provides organizations access to a cluster of individual modules and solutions designed to ensure compliance with major data and AI-related regulations.

The Compliance Management Platform is one such solution that seamlessly automates compliance via standard controls and tests.

With it, organizations can access a built-in library of compliance standards customized to their unique needs, leverage predefined tests for multiple frameworks, identify and address compliance issues efficiently with targeted guidance, tracking, and auto-remediation, and generate comprehensive reports for documentation and stakeholder information purposes, among other benefits.

Similarly, the Assessment Automation module enables the automated generation of records of processing activities (RoPA), privacy impact assessments, and other data-related impact assessments in line with regulatory requirements.

Furthermore, the unified privacy dashboard lets organizations monitor the progress of multiple assessments simultaneously in real time, while completed assessments can be tracked to maintain an audit-ready trail.

Request a demo today to learn more about these modules and how else Securiti can help your organization comply with the various requirements of the EU’s AI Act.
