Navigating General-Purpose AI Models Under the EU AI Act

Published May 4, 2025
Contributors

Aswah Javed

Associate Data Privacy Analyst at Securiti

Adeel Hasan

Sr. Data Privacy Analyst at Securiti

CIPM, CIPP/Canada


Introduction

General-purpose AI (GPAI) models sit at the heart of artificial intelligence (AI), driving rapid industrial change globally. These foundational technologies can be adapted for a wide variety of uses. The EU AI Act establishes rules for developing and deploying such models to ensure their safe and transparent use. The European Commission's AI Office has recently clarified these rules through a set of FAQs. In addition, the Third Draft of the General-Purpose AI Code of Practice (the Code), prepared under the EU AI Act, provides further guidance to the providers of these AI models.

Let’s explore the significance of these AI models and the obligations that their providers must meet.

What are GPAI Models?

GPAI Models

The EU AI Act defines GPAI models as AI models trained on large amounts of data using self-supervision at scale that are capable of performing a wide range of tasks and of being integrated into various downstream systems or applications. An example of a GPAI model is a large generative AI model that allows for flexible content generation.

GPAI Models with Systemic Risk

Systemic risks are defined as large-scale harms caused by the most advanced AI models or by other models with an equivalent impact. These risks can manifest, for example, “through the lowering of barriers for chemical or biological weapons development.” The EU AI Act classifies a GPAI model as posing systemic risk if it is among the most advanced models at that particular time or has an equivalent impact.

The EU AI Act sets a training-compute threshold of 10^25 floating-point operations (FLOP) to identify the most advanced models: a model trained using more cumulative compute than this is presumed to pose systemic risk. However, the European Commission's AI Office continuously monitors technological developments and can adjust this threshold as necessary.
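For intuition, cumulative training compute is commonly approximated as roughly 6 FLOP per parameter per training token. A minimal sketch of checking that estimate against the Act's threshold (the parameter and token counts are illustrative assumptions, not figures from the Act):

```python
# Rough training-compute estimate using the common "6 * N * D" rule of thumb
# (~6 FLOP per parameter per training token). Parameter and token counts
# below are illustrative assumptions, not values from the EU AI Act.
GPAI_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # threshold set by the EU AI Act

def estimate_training_flop(num_parameters: float, num_tokens: float) -> float:
    """Approximate cumulative training compute in FLOP."""
    return 6 * num_parameters * num_tokens

# Hypothetical 500B-parameter model trained on 10T tokens:
flop = estimate_training_flop(5e11, 1e13)
print(f"{flop:.1e} FLOP")  # 3.0e+25
print(flop >= GPAI_SYSTEMIC_RISK_THRESHOLD_FLOP)  # True -> presumed systemic risk
```

This is only a back-of-the-envelope heuristic; a provider's actual compute accounting would follow whatever methodology the AI Office specifies.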

The Code of Practice

Article 56 (Section 4) of the EU AI Act tasks the AI Office with drawing up codes of practice. The purpose of a code of practice is to provide rules for the development and deployment of GPAI models, ensuring the proper application of the provisions of the EU AI Act, mainly Articles 53 and 55.

On March 11, 2025, the Chairs and Vice-Chairs of the GPAI Code of Practice, with input from AI experts, policymakers, and industry leaders, presented the third draft of the Code. The Code sets out obligations for providers of GPAI models and of GPAI models with systemic risk, and outlines best practices for transparency, risk assessment, and safety measures. Compared with the first two drafts, this version has a more streamlined structure with refined commitments and measures.

Working Groups

The Code has four working groups, each focusing on different parts of the EU AI Act. These include:

  • Transparency & Copyright (WG1): to ensure that AI providers document their models properly and follow copyright laws.
  • Risk Assessment (WG2): to evaluate if an AI model poses a systemic risk (a major risk that could harm society).
  • Technical Risk Mitigation (WG3): to create ways to reduce the dangers of risky AI models.
  • Governance Risk Mitigation (WG4): to set up responsible management and oversight for AI safety.

Obligations of Providers of GPAI Models

The EU AI Act establishes certain obligations that the providers of GPAI Models need to fulfill, which are expanded upon by the Code.

GPAI Models

The EU AI Act

As per the EU AI Act, the providers of GPAI models must:

  1. Maintain the model's technical documentation and make it available to the AI Office, other national competent authorities, and AI system providers who wish to integrate the GPAI model into their systems;
  2. Establish a policy to comply with Union copyright law and related rights;
  3. Provide a detailed summary of the training content, following the AI Office’s template;
  4. Cooperate with the Commission and national competent authorities;
  5. Designate, in writing, an authorized representative before placing the GPAI model on the Union market, where the provider is established in a third country;
  6. Enable the designated representative to carry out the duties outlined in the mandate received from the provider; and
  7. Handle all obtained information and documentation in line with applicable confidentiality obligations.

The EU AI Act further allows the providers of GPAI models to rely on the Code until a harmonized standard is published.

The Code of Practice

The Code expands on the obligations mentioned in the EU AI Act for the providers of the GPAI models. These include:

Transparency (Documentation)
  • Maintaining model documentation
    Providers are required to create a document titled "Information and Documentation about the General-Purpose AI Model" containing all requested information in the Model Documentation Form. They must report the information in the Computational Resources and Energy Consumption sections as per the EU AI Act. If changes occur, they must update the Model Documentation, keeping previous versions for 10 years.
  • Providing relevant information
    Providers are required to publish contact details through which the AI Office and downstream providers can request access to the Model Documentation, while ensuring confidentiality. They must supply downstream providers with updated information about the model’s capabilities and limitations when necessary, act on such requests promptly, and consider public transparency. Some information, such as the training content, may be disclosed in summarized form.
  • Ensuring quality, integrity, and security of information
    Providers are required to maintain quality, integrity, and compliance with the EU AI Act’s obligations by following established protocols and technical standards in managing documented information.

Exception: Open-source AI models may be exempt from transparency rules, provided they meet certain conditions under Article 53(2) of the EU AI Act.
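The 10-year retention of superseded documentation versions described above could be sketched as follows; the record structure and names here are hypothetical illustrations, not taken from the Model Documentation Form:

```python
# Illustrative sketch of retaining superseded Model Documentation versions
# for 10 years, as the Code's transparency measures describe. The record
# structure is a hypothetical assumption, not the Model Documentation Form.
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=365 * 10)  # "keeping previous versions for 10 years"

def archive_superseded(history: list, doc: dict, archived_at: datetime) -> list:
    """Append the superseded documentation version, then drop only the
    entries whose 10-year retention period has fully elapsed."""
    history = history + [(archived_at, doc)]
    return [(ts, d) for ts, d in history if archived_at - ts <= RETENTION_PERIOD]

# A version archived in 2025 is still retained when superseded again in 2030:
history = archive_superseded([], {"version": "1.0"}, datetime(2025, 8, 2))
history = archive_superseded(history, {"version": "1.1"}, datetime(2030, 8, 2))
print(len(history))  # 2
```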

Copyright

The providers:

  • Must establish and implement a copyright policy complying with the Union law;
  • Must ensure that only legally available content is used when gathering online data to train their models;
  • Must, when using web crawlers (automated tools that scan the internet to collect data), adhere to copyright rules while collecting information to train their AI models;
  • Must follow website rules, honor machine-readable copyright signals, support standardized copyright protection, and provide transparency and content visibility, in line with EU copyright law;
  • Must obtain adequate information about protected content that is not web-crawled by the provider;
  • Must limit the memorization of copyrighted content and prohibit copyright violations, to mitigate the risk of AI systems generating content that infringes copyright; and
  • Must designate a point of contact and enable the lodging of complaints.
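As one concrete example of honoring machine-readable signals, a crawler can consult a site's robots.txt rules before fetching content. A minimal sketch using Python's standard library (the bot name and URLs are illustrative placeholders):

```python
# Minimal sketch: respect a site's robots.txt (one common machine-readable
# crawl signal) before collecting content for training. The user agent and
# URLs below are illustrative placeholders.
from urllib.robotparser import RobotFileParser

def may_crawl(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the robots.txt rules permit this agent to fetch the URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

rules = """\
User-agent: example-training-bot
Disallow: /articles/
"""
print(may_crawl(rules, "example-training-bot", "https://example.com/articles/1"))  # False
print(may_crawl(rules, "example-training-bot", "https://example.com/about"))       # True
```

Note that robots.txt expresses crawl preferences rather than legal permission; it is only one of the machine-readable signals the Code contemplates.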

GPAI Models with Systemic Risk

Because GPAI models with systemic risk can cause significant societal and economic impacts through their advanced capabilities and widespread use, they are subject to stricter oversight and risk mitigation measures.

The EU AI Act

In addition to the obligations on providers for GPAI models, providers of GPAI models with systemic risk shall:

  1. conduct standardized evaluations using state-of-the-art tools, such as adversarial testing, to identify and mitigate risks;
  2. assess and mitigate possible systemic risks at the Union level, including their sources, that may stem from the model’s development, placing on the market, or use;
  3. promptly document and report serious incidents along with their possible corrective measures to the AI Office and national authorities;
  4. ensure adequate cybersecurity protection for GPAI with systemic risk and the model’s physical infrastructure; and
  5. handle all obtained information and documentation in line with applicable confidentiality obligations.

The EU AI Act further allows the providers of GPAI models with systemic risk to rely on the Code until a harmonized standard is published.

The Code of Practice

Expanding on the obligations under the EU AI Act, the Code provides 16 commitments to be followed by the providers of GPAI models with systemic risk. These include:

  1. Safety and Security Framework: Providers must adopt a “Safety and Security Framework” for systemic risk assessment, mitigation, and governance.
  2. Systemic Risk Assessment and Mitigation: Providers must assess risks and mitigate them throughout the model lifecycle, including development.
  3. Systemic Risk Identification: Providers must identify significant risks and characterize them for further analysis.
  4. Systemic Risk Analysis: Providers must rigorously analyze risks for severity and probability, using diverse evaluation methods.
  5. Systemic Risk Acceptance Determination: Providers must determine the acceptability of systemic risks based on predefined criteria before proceeding with deployment.
  6. Safety Mitigations: Providers must reduce systemic risks by implementing proportionate, state-of-the-art technical safety measures.
  7. Security Mitigations: Providers must prevent unauthorized access to model assets through strict security measures.
  8. Safety and Security Model Reports: Providers must create and submit a “Safety and Security Model Report” to the AI Office, documenting the results of systemic risk assessments and justifications for releasing the model in the market.
  9. Adequacy Assessments: Providers must periodically evaluate and update their Safety and Security Framework based on findings.
  10. Systemic Risk Responsibility Allocation: Providers must clearly assign the responsibilities for systemic risk management within the organization along with the provision of required resources for managing those risks.
  11. Independent External Assessors: Providers must obtain external evaluations of systemic risks before market release.
  12. Serious Incident Reporting: Providers must set up processes to report major AI-related incidents to the AI Office promptly.
  13. Non-Retaliation Protections: Providers must protect workers reporting systemic risks to authorities from retaliation.
  14. Notifications: Providers must regularly inform the AI Office about relevant AI models and compliance efforts.
  15. Documentation: Providers must correctly record and maintain the relevant compliance information.
  16. Public Transparency: Providers must publicly disclose key information about systemic risks to enable oversight.

Looking Forward

According to the EU AI Act, the Code must be finalized by May 2, 2025, with the underlying obligations applying from August 2, 2025. This gives providers of GPAI models some time to align their practices with the requirements of the EU AI Act and the Code. If the Code cannot be finalized by August 2, 2025, the European Commission may adopt common rules covering the obligations set out in Articles 53 and 55 of the EU AI Act.

The EU AI Act and the obligations under the Code pave the path to a safer, more responsible, and more transparent AI environment while unlocking the full capability of GPAI models.

How Securiti Can Help

Securiti’s robust automation modules enable organizations to navigate General-Purpose AI models under the EU AI Act and comply with applicable obligations.

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. Securiti provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Request a demo to learn more.
