EU AI Act Implementation Efforts: How Europe is Turning Policy into Practice

Contributors

Anas Baig

Product Marketing Manager at Securiti

Aiman Kanwal

Assoc. Data Privacy Analyst at Securiti

Published September 1, 2025


Compliance with the AI Act is a marathon, not a sprint. While the Act entered into force on August 1, 2024, its full application will roll out over several years. For instance, the first obligations for AI literacy and provisions on prohibited AI practices came into effect in February 2025, with governance rules for General-Purpose AI (GPAI) models following six months later in August 2025. The full application of the Act, including for certain high-risk AI systems, will extend into August 2027.
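The phased timeline above can be sketched as a simple lookup. This is an illustrative sketch only, not legal advice; the milestone labels are paraphrased from the paragraph above, and the specific days within each month are assumptions.

```python
from datetime import date

# Key application dates of the EU AI Act, per the timeline above.
# Exact days are assumptions; only the months are stated in the text.
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "AI literacy obligations and prohibited-practice provisions apply"),
    (date(2025, 8, 2), "Governance rules for GPAI models apply"),
    (date(2027, 8, 2), "Full application, including certain high-risk AI systems"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestone labels that have taken effect on or before `today`."""
    return [label for effective, label in MILESTONES if effective <= today]

# Three of the four milestones had taken effect by this article's publication date.
in_force = obligations_in_force(date(2025, 9, 1))
```

A date-keyed list like this makes it easy for a compliance team to answer "which obligations apply to us right now?" as each phase of the Act comes online.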

To support organizations through this phased implementation, both EU member states and EU bodies have launched various initiatives. These resources provide crucial guidance on important matters related to the use of AI, such as copyrights, intellectual property, documentation, public rights, and workplace applications.

Read on for an overview of these implementation efforts by both member states and EU-level bodies.

EU Level Implementation Efforts

EU-level implementation efforts by various bodies include:

I. AI Office - February 2024

The AI Office was inaugurated in February 2024 within the EU Commission to oversee the supervision and enforcement of the AI Act’s provisions. Given the Act’s phased enforcement timeline, the body acts as the central authority on all official matters related to the Act’s implementation.

II. European Commission’s FAQs on AI Literacy - May 2025

To support compliance with the AI literacy obligations under Article 4 of the EU AI Act, the EU Commission has published a set of FAQs. These FAQs clarify the required level of understanding for both AI system providers and deployers. The Commission also announced additional resources, including a web portal with learning packages and plans for a living repository of real-life examples of AI literacy in practice.

III. European Commission’s Instruments To Support Responsible Deployment Of GPAI Models - July 2025

More recently, in July 2025, the EU Commission released three instruments aimed at ensuring the responsible development and deployment of GPAI models. Chief among them is the GPAI Code of Practice.

The GPAI Code of Practice covers three highly sensitive aspects of data used in large language model (LLM) training: transparency, copyright, and safety & security. Although voluntary, the Code is a highly practical compliance tool because it offers a presumption of conformity, meaning signatories are assumed to be compliant with the relevant AI Act obligations. The Code's adequacy has been formally assessed and confirmed by the EU's AI Board.

The document is a crucial transitional tool, designed to bridge the gap until legally binding technical standards are developed. As a living document, it is expected to evolve to keep pace with rapid technological advancements. The Commission has published a signatory list of 25 organizations that have committed to the Code.

IV. EPRS Report On Copyrights In GenAI - July 2025

The European Parliament Research Services (EPRS) published a report examining the technological aspects of GenAI in the copyright context, motivated by the close interdependency between AI training data and generated output. It identifies two key challenges: attribution, i.e., how far a generated output is traceable to the original training data, and novelty, i.e., whether the output qualifies as a new creation or merely high-dimensional parroting.

V. BfDI & AZOP Joint Consultation on Personal Data Protection In LLM Training - July 2025

The Croatian Personal Data Protection Agency (AZOP) and Germany’s Federal Commissioner for Data Protection and Freedom of Information (BfDI) published a joint European consultation on the protection of personal data used in LLM training, seeking expert opinions on the challenges at the intersection of AI development and data protection. The consultation comprises eight questions and runs until August 30, 2025; a subsequent report will compile the experts’ recommendations and opinions alongside specific guidelines.

Member State Level Implementation Efforts

The EU’s member states have also launched their own various national initiatives to aid in AI Act implementation. These are as follows:

I. Croatia - March 2025

Croatia’s Personal Data Protection Agency (AZOP) has released guidance on conducting Fundamental Rights Impact Assessments (FRIAs). The assessment methodology, originally developed by the Data Protection Authority of Catalonia (APDCAT), encourages a multidisciplinary approach to proactively identifying relevant risks and mitigating them through the timely deployment of appropriate safeguards.

Moreover, these assessments are mandatory before any high-risk AI system can be used, even when no high-risk personal data is being processed.
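The gating rule described above can be sketched as a simple pre-deployment check. This is a hypothetical illustration of the policy logic, not part of AZOP's guidance; all names and types here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Minimal, hypothetical record of an AI system's governance status."""
    name: str
    high_risk: bool
    fria_completed: bool = False

def may_deploy(system: AISystem) -> bool:
    """High-risk systems require a completed FRIA before use,
    regardless of whether high-risk personal data is processed."""
    if system.high_risk:
        return system.fria_completed
    return True

# A high-risk system is blocked until its FRIA is done; others pass through.
blocked = may_deploy(AISystem("credit-scoring", high_risk=True))
allowed = may_deploy(AISystem("credit-scoring", high_risk=True, fria_completed=True))
```

Encoding the rule as an explicit gate like this keeps the "assessment before use" requirement enforceable in a deployment pipeline rather than relying on manual review alone.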

II. Denmark - August 2025

Danish efforts related to AI Act implementation include the following:

Datatilsynet, the Danish Data Protection Agency, has been appointed as a supervisory authority for specific provisions of the EU AI Act. The agency will oversee compliance with two key rules on prohibited practices under Article 5 of the Act:

  • Criminal Offenses: A ban on using AI to predict a person's risk of committing a crime based solely on profiling or personality traits.
  • Biometrics: A ban on using AI for biometric categorization to infer sensitive personal data such as race, political opinions, or sexual orientation.

The agency stated that while the right to complain and the obligation to report serious incidents will not take effect until August 2, 2026, it is ready to handle inquiries and initiate its own cases in the interim.

III. Germany - July 2025

Germany has been prolific in producing various resources meant to aid organizations in their AI Act compliance efforts.

  • The Federal Office for Information Security (BSI) released the QUAIDAL framework to ensure appropriate quality for all AI training data through a catalog of 143 metrics to help providers meet the strict regulatory requirements for data quality, transparency, fairness, and compliance.
  • The Federal Network Agency (Bundesnetzagentur) operates an interactive AI service desk with a compliance compass tool that gives companies practical information on whether they are subject to the AI Act and, if so, which obligations apply.
  • Similarly, the Society for Data Protection & Data Security (GDD) publishes its model guidelines that are meant to serve as a template for creating an internal AI governance policy that addresses the ethical, legal, and safety-related use of AI, from copyright compliance to security measures.
  • The Federal Commissioner for Data Protection and Freedom of Information (BfDI) has most recently launched a public consultation on personal data protection in AI models. Expert opinion is sought on issues like data anonymization, technical measures to prevent data memorization, risk assessment, and how individuals can exercise their rights within AI systems.

IV. Ireland - May 2025

In Ireland, the Department of Public Expenditure, NDP Delivery and Reform released guidelines for the responsible use of AI in public services, emphasizing human involvement to ensure continued public trust in these technologies. The guidelines highlight seven key principles for trustworthy AI, along with practical tools and examples to help integrate these principles into real-life scenarios.

V. Luxembourg - August 2025

Luxembourg’s National Commission for Data Protection (CNPD) has released comprehensive guidance on AI literacy obligations. Similar to the EU Commission’s FAQs on AI literacy, the CNPD’s guide also emphasizes the need for all personnel operating and using AI systems to have an appropriate level of AI knowledge and proficiency. This includes all employees, temporary workers, civil servants, trainees, and apprentices.

VI. Norway - June 2025

Norway’s Ministry of Digitalization and Public Administration has released a guide on the use of AI assistants in the workplace. Meant for both public and private organizations, it contains a checklist for responsible AI use covering key areas such as clarifying purpose and organizational readiness, ensuring legal compliance with the AI Act and the GDPR, and addressing technical setup, quality assurance, and ongoing operations.

The Ministry has also released its own draft of the AI Act to ensure the national regulatory framework is in line with the EU’s AI Act. The draft closely resembles the EU’s AI Act in areas such as monetary penalties, while also establishing the Norwegian Communications Authority (Nkom) as the primary regulator and allowing sector-specific authorities to handle industry-specific oversight. The public consultation on the draft will continue until September 30, 2025.

VII. Portugal - July 2025

The Portuguese Agency for Administrative Modernization (AMA) has published its AI guide, along with an ethical risk evaluation tool aimed at public administration entities; private organizations may use it as a reference as well. The guide requires that responsible AI be both transparent and robust, with the necessary mechanisms in place to ensure fairness and inclusivity while mitigating bias. It must also be sustainable, validated through pre-implementation testing, and consistently monitored.

VIII. Spain - June 2025

Spain is proactively building a comprehensive regulatory framework that aligns with the EU AI Act. The country has established Europe's first AI regulator, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which became operational in June 2024.

Spain is drafting a national AI law, titled "Good Use and Governance of Artificial Intelligence." The first draft of this law was approved in March 2025, with the aim of supplementing the EU Act with a domestic legal and sanctioning regime. The country has also implemented Europe’s first AI regulatory sandbox, selecting 12 high-risk projects in April 2025 to test systems and inform future regulations.

What Lies Ahead

As mentioned earlier, the AI Act has a fairly extensive enforcement timeline. Both EU bodies and member states will continue to monitor developments in GenAI and other critical technologies, producing further resources to aid organizations in their compliance efforts.

These include the EU’s AI Pact, a voluntary initiative that encourages organizations to comply with key provisions of the Act ahead of their legal deadlines. The European Commission is also expected to continue publishing guidelines and common rules to ensure consistent implementation across the bloc.

While the aforementioned member state efforts are already comprehensive, several more will join them over the coming months until full implementation in 2027. Guides, frameworks, toolkits, and legislation will continue to be made available, all intended to ensure GenAI capabilities can be leveraged to their fullest potential without compromising citizens’ rights.

How Securiti Helps

The AI Act is an extremely comprehensive regulation. Compliance need not be complicated, but it will nonetheless require extensive attention to detail and, more importantly, the selection of the right tools, processes, and mechanisms to ensure all regulatory requirements are met.

This is where Securiti can help.

Securiti’s Data Command Center, together with its AI Governance solution, offers a holistic approach to building safe, enterprise-grade generative AI systems. The solution comprises several components that can be used collectively to build end-to-end secure enterprise AI systems, or individually to address diverse AI use cases.

With the AI Governance solution, organizations can run comprehensive governance processes across all AI components and functionalities in their workflows, including model risk identification and analysis, controls, monitoring, documentation, risk categorization assessments, fundamental rights impact assessments, and conformity assessments.

Leveraged properly, these solutions ensure all critical obligations are met in an effective and timely manner without compromising an organization’s other operations.

Request a demo today and learn more about how Securiti can help you select and deploy the most appropriate modules and solutions to comply with the regulatory requirements of the EU’s AI Act.
