Introduction
General-purpose AI (GPAI) models are the heart of artificial intelligence (AI), bringing rapid industrial changes globally. These AI models are fundamental technologies that can be modified for multiple uses. The EU AI Act sets guidelines and establishes rules for developing and applying these AI models, ensuring their safe and transparent use. The AI Office of the European Commission has recently provided clarity on these guidelines through FAQs. Along with this, the Third Draft of the General-Purpose AI Code of Practice (the Code) has been prepared as per the EU AI Act, providing further guidance to the providers of these AI models.
Let’s explore the significance of these AI models and the obligations that their providers must meet.
What are GPAI Models?
GPAI Models
The EU AI Act defines GPAI models as AI models trained with large data and self-supervision at scale that are capable of performing diverse tasks and being integrated into various systems or applications. An example of GPAI models is a large generative AI model that allows for flexible content generation.
GPAI Models with Systemic Risk
Systemic risks are defined as large-scale harms caused by advanced AI models or other models with an equivalent impact. These risks can manifest themselves, e.g., “through the lowering of barriers for chemical or biological weapons development.” The EU AI Act classifies a GPAI model with systemic risk if it is one of the most advanced at that particular time or has an equivalent impact.
The EU AI Act has established a criterion of 10^25 floating-point operations (FLOP) for model training to identify the most advanced models. However, the European Commission's AI Office continuously monitors technological advancements and can change this threshold as necessary.
The Code of Practice
Section 4, Article 56 of the EU AI Act puts a responsibility on the AI Office to create code of practice. The purpose of the code of practice is to provide rules for the development and deployment of GPAI models, ensuring the proper application of the provisions of the EU AI Act, mainly Articles 53 and 55.
On March 11, 2025, the Chairs and the Vice-Chairs of the GPAI Code of Practice, with input from AI experts, policymakers, and industry leaders, presented the third draft of the Code. The Code includes obligations for the providers of GPAI models and GPAI models with systemic risks and outlines best practices for transparency, risk assessment, and safety measures. As compared to the first two drafts, this draft has a more streamlined structure with improved commitments and measures.
Working Groups
The Code has four working groups, each focusing on different parts of the EU AI Act. These include:
- Transparency & Copyright (WG1): to ensure that AI providers document their models properly and follow copyright laws.
- Risk Assessment (WG2): to evaluate if an AI model poses a systemic risk (a major risk that could harm society).
- Technical Risk Mitigation (WG3): to create ways to reduce the dangers of risky AI models.
- Governance Risk Mitigation (WG4): to set up responsible management and oversight for AI safety.
Obligations of Providers of GPAI Models
The EU AI Act establishes certain obligations that the providers of GPAI Models need to fulfill, which are expanded upon by the Code.
GPAI Models
The EU AI Act
As per the EU AI Act, the providers of GPAI models must:
- Maintain the technical documentation of the model and make it accessible to the AI Office and other national competent authorities and to the AI system providers who wish to integrate the GPAI model into their systems;
- Establish a policy to comply with Union copyright law and other related rights;
- Provide a detailed summary of training content as per the AI Office’s template;
- Cooperate with the Commission and national competent authorities;
- Must designate, in writing, an authorized representative before the release of the GPAI model in the market, when the provider is based in a third country and the GPAI model is being put on the Union market;
- Make it possible for its designated representative to carry out the duties outlined in the mandate obtained from the provider, and
- Treat all the obtained information and documentation as confidentiality obligations.
The EU AI Act further allows the providers of GPAI models to rely on the Code till a harmonized standard is released.
The Code of Practice
The Code expands on the obligations mentioned in the EU AI Act for the providers of the GPAI models. These include:
Transparency (Documentation)
- Maintaining model documentation
Providers are required to create a document titled "Information and Documentation about the General-Purpose AI Model" containing all requested information in the Model Documentation Form. They must report the information in the Computational Resources and Energy Consumption sections as per the EU AI Act. If changes occur, they must update the Model Documentation, keeping previous versions for 10 years.
- Providing relevant information
Providers are required to disclose contact information for the AI Office and downstream providers to request access to the Model Documentation and ensure confidentiality, provide updated information to the downstream providers, when necessary, informing them about the model’s capabilities and limitations. Providers must act promptly and consider public transparency. Some information may be summarized for training content.
- Ensuring quality, integrity, and security of information
Providers are required to maintain quality, integrity, and compliance with the EU AI Act’s obligations by following established protocols and technical standards in managing documented information.
Exception: Open-source AI models may be exempt from transparency rules, provided they meet certain conditions under Article 53(2) of the EU AI Act.
Copyright
The providers:
- Must establish and implement a copyright policy complying with the Union law;
- Must ensure that only legally available content is used when gathering online data to train their models;
- Using web crawlers (automated tools that scan the internet to collect data), must adhere to the copyright rules while collecting information to train their AI models;
- Must follow website rules, use Machine-Readable copyright signals, support standardized copyright protection, and provide transparency and content visibility, to adhere to the EU copyright law;
- Must obtain adequate information about protected content that is not web-crawled by the provider;
- Must limit the memorization of copyrighted content and prohibit copyright violations, to mitigate the risk of AI systems generating content that infringes copyright; and
- Must designate a point of contact and enable the lodging of complaints.
GPAI Models with Systemic Risk
As the GPAI models with systemic risk have the potential to cause significant societal and economic impacts because of their advanced capabilities and widespread use, there are stricter oversight and risk mitigation measures for these.
The EU AI Act
In addition to the obligations on providers for GPAI models, providers of GPAI models with systemic risk shall:
- conduct standardized evaluations using state-of-the-art tools, such as adversarial testing, to identify and mitigate risks;
- evaluate and mitigate potential risks along with their sources at the Union level that arise from their development, market placement, or deployment;
- promptly document and report serious incidents along with their possible corrective measures to the AI Office and national authorities;
- ensure adequate cybersecurity protection for GPAI with systemic risk and the model’s physical infrastructure; and
- treat all the obtained information and documentation as confidentiality obligations.
The EU AI Act further allows the providers of GPAI models with systemic risks to rely on the Code till a harmonized standard is released.
The Code of Practice
Expanding on the obligations under the EU AI Act, the Code provides 16 commitments to be followed by the providers of GPAI models with systemic risk. These include:
- Safety and Security Framework: Providers must adopt a “Safety and Security Framework” for systemic risk assessment, mitigation, and governance.
- Systemic Risk Assessment and Mitigation: Providers must assess risks and mitigate them throughout the model lifecycle, including development.
- Systemic Risk Identification: Providers must identify significant risks and characterize them for further analysis.
- Systemic Risk Analysis: Providers must rigorously analyze risks for severity and probability, using diverse evaluation methods.
- Systemic Risk Acceptance Determination: Providers must determine the acceptability of systemic risks based on predefined criteria before proceeding with deployment.
- Safety Mitigations: Providers must reduce systemic risks by the implementation of proportionate and state-of-the-art technical safety measures.
- Security Mitigations: Providers must prevent unauthorized access to model assets through strict security measures.
- Safety and Security Model Reports: Providers must create and submit a “Safety and Security Model Report” to the AI Office, documenting the results of systemic risk assessments and justifications for releasing the model in the market.
- Adequacy Assessments: Providers must periodically evaluate and update their Safety and Security Framework based on findings.
- Systemic Risk Responsibility Allocation: Providers must clearly assign the responsibilities for systemic risk management within the organization along with the provision of required resources for managing those risks.
- Independent External Assessors: Providers must obtain external evaluations of systemic risks before market release.
- Serious Incident Reporting: Providers must set up processes to report major AI-related incidents to the AI Office promptly.
- Non-Retaliation Protections: Providers must protect workers reporting systemic risks to authorities from retaliation.
- Notifications: Providers must regularly inform the AI Office about relevant AI models and compliance efforts.
- Documentation: Providers must correctly record and maintain the relevant compliance information.
- Public Transparency: Providers must publicly disclose key information about systemic risks to enable oversight.
Looking Forward
According to the EU AI Act, the Code must be finalized by May 2, 2025, and come into effect from August 2, 2025. This gives providers of GPAI models some time to align their practices with the requirements mentioned in the EU AI Act and the Code. However, in case the Code is not finalized by August 2, 2025, the European Commission can issue common rules regarding the obligations set out in Articles 53 and 55 of the EU AI Act.
The EU AI Act and the obligations under the Code pave the path to a safer, more responsible, and more transparent AI environment while unlocking the full capability of GPAI models.
How Securiti Can Help
Securiti’s robust automation modules enable organizations to navigate General-Purpose AI models under the EU AI Act and comply with applicable obligations.
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. Securiti provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Request a demo to learn more.