Compliance with the AI Act is a marathon, not a sprint. While the Act entered into force on August 1, 2024, its full application will roll out over several years. For instance, the first obligations for AI literacy and provisions on prohibited AI practices came into effect in February 2025, with governance rules for General-Purpose AI (GPAI) models following six months later in August 2025. The full application of the Act, including for certain high-risk AI systems, will extend into August 2027.
To support organizations through this phased implementation, both EU member states and EU-level bodies have launched various initiatives. These resources provide crucial guidance on matters such as copyright, intellectual property, documentation, the rights of the public, and workplace applications of AI.
Read on for an overview of these implementation efforts by both member states and EU-level bodies.
EU Level Implementation Efforts
EU-level implementation efforts by various bodies include:
I. AI Office - February 2024
The AI Office was established within the European Commission in February 2024 to oversee the supervision and enforcement of the AI Act's various provisions. Given the Act's phased enforcement timeline, the body acts as a central authority on all official matters related to the Act's implementation.
II. European Commission’s FAQs on AI Literacy - May 2025
To support compliance with the AI literacy obligations under Article 4 of the EU AI Act, the European Commission has published a list of FAQs. These clarify the level of understanding required of both providers and deployers of AI systems. The Commission also announced additional resources, including a web portal with learning packages and plans for a living repository of real-life examples of AI literacy in practice.
III. European Commission’s Instruments To Support Responsible Deployment Of GPAI Models - July 2025
More recently, in July 2025, the EU Commission released three different instruments aimed at ensuring the responsible development and deployment of GPAI models, including:
The GPAI Code of Practice covers three sensitive areas of large language model (LLM) development: transparency, copyright, and safety & security. Although voluntary, the Code is a highly practical compliance tool because it offers a presumption of conformity: signatories are assumed to comply with the relevant AI Act obligations. The Code's adequacy has been formally assessed and confirmed by the EU's AI Board.
The document is a crucial transitional tool, designed to bridge the gap until legally binding technical standards are developed. As a living document, it is expected to evolve to keep pace with rapid technological advancements. The Commission has published a signatory list of 25 organizations that have committed to the Code.
IV. EPRS Report On Copyrights In GenAI - July 2025
The European Parliamentary Research Service (EPRS) published a report examining the technological aspects of GenAI in the copyright context, motivated primarily by the close interdependency between AI training data and generated output. It identifies two key challenges: attribution, i.e., how far generated output can be traced back to the original training data, and novelty, i.e., whether generated output qualifies as a new creation or is merely high-dimensional parroting.
V. BfDI & AZOP Joint Consultation on Personal Data Protection In LLM Training - July 2025
The Croatian Personal Data Protection Agency (AZOP) and Germany’s Federal Commissioner for Data Protection and Freedom of Information (BfDI) published a joint European consultation on the protection of personal data used in LLM training, seeking expert opinions on the challenges at the intersection of AI development and data protection. The consultation comprises eight questions and runs until August 30, 2025; a report compiling experts’ recommendations and opinions, along with specific guidelines, will follow.
Member State Level Implementation Efforts
EU member states, along with EEA countries such as Norway, have also launched their own national initiatives to aid AI Act implementation. These are as follows:
I. Croatia - March 2025
Croatia’s Personal Data Protection Agency (AZOP) has released guidance on conducting Fundamental Rights Impact Assessments (FRIAs). The assessment methodology, originally developed by the Catalan Data Protection Authority (APDCAT), encourages a multidisciplinary approach to proactively identifying relevant risks and mitigating them through the timely deployment of appropriate safeguards.
Moreover, these assessments are mandatory before certain high-risk AI systems can be used, even when no high-risk personal data is being processed.
II. Denmark - August 2025
Datatilsynet, the Danish Data Protection Agency, has been appointed as a supervisory authority for specific provisions of the EU AI Act. The agency will oversee compliance with two key prohibitions under Article 5 of the Act:
- Criminal Offenses: A ban on using AI to predict a person's risk of committing a crime based solely on profiling or personality traits.
- Biometrics: A ban on using AI for biometric categorization to infer sensitive personal data such as race, political opinions, or sexual orientation.
The agency stated that while the right to complain and the obligation to report serious incidents will not take effect until August 2, 2026, it is ready to handle inquiries and initiate its own cases in the interim.
III. Germany - July 2025
Germany has been prolific in producing various resources meant to aid organizations in their AI Act compliance efforts.
- The Federal Office for Information Security (BSI) released the QUAIDAL framework, a catalog of 143 metrics for assuring the quality of AI training data, helping providers meet the strict regulatory requirements for data quality, transparency, fairness, and compliance.
- The Federal Network Agency (Bundesnetzagentur) operates an interactive AI Service Desk with a Compliance Compass tool that helps companies determine whether they are subject to the AI Act and, if so, which obligations apply to them.
- Similarly, the German Association for Data Protection and Data Security (GDD) has published model guidelines meant to serve as a template for an internal AI governance policy addressing the ethical, legal, and safety-related use of AI, from copyright compliance to security measures.
- The Federal Commissioner for Data Protection and Freedom of Information (BfDI) has most recently launched a public consultation on personal data protection in AI models. Expert opinion is sought on issues like data anonymization, technical measures to prevent data memorization, risk assessment, and how individuals can exercise their rights within AI systems.
IV. Ireland - May 2025
In Ireland, the Department of Public Expenditure, NDP Delivery and Reform released guidelines for the responsible use of AI in the public service, emphasizing human oversight to ensure continued public trust in these technologies. The guidelines set out seven key principles for trustworthy AI, along with practical tools and examples to support the integration of these principles into real-life scenarios.
V. Luxembourg - August 2025
Luxembourg’s National Commission for Data Protection (CNPD) has released comprehensive guidance on AI literacy obligations. Similar to the EU Commission’s FAQs on AI literacy, the CNPD’s guide also emphasizes the need for all personnel operating and using AI systems to have an appropriate level of AI knowledge and proficiency. This includes all employees, temporary workers, civil servants, trainees, and apprentices.
VI. Norway - June 2025
Norway’s Ministry of Digitalization and Public Administration has also released a guide on the use of AI assistants in the workplace. Aimed at both public and private organizations, it contains a checklist for responsible AI use covering key areas such as clarifying purpose and organizational readiness, ensuring legal compliance with the AI Act and the GDPR, and managing technical setup, quality assurance, and ongoing operations.
The Ministry has also released a draft national AI act to align Norway’s regulatory framework with the EU’s AI Act. The draft closely mirrors the EU Act in areas such as monetary penalties, while designating the Norwegian Communications Authority (Nkom) as the primary regulator and allowing sector-specific authorities to handle industry-specific oversight. The public consultation on the draft runs until September 30, 2025.
VII. Portugal - July 2025
The Portuguese Agency for Administrative Modernization (AMA) has published a guide to AI, along with an ethical risk evaluation tool, aimed at public administration entities; private organizations may use it as a reference as well. The guide holds that responsible AI must be both transparent and robust, with the necessary mechanisms in place to ensure fairness and inclusivity while mitigating bias. It must also be sustainable and thoroughly vetted through pre-implementation testing and continuous monitoring.
VIII. Spain - June 2025
Spain is proactively building a comprehensive regulatory framework that aligns with the EU AI Act. The country has established Europe's first AI regulator, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), which became operational in June 2024.
Spain is drafting a national AI law, titled "Good Use and Governance of Artificial Intelligence." The first draft of this law was approved in March 2025, with the aim of supplementing the EU Act with a domestic legal and sanctioning regime. The country has also implemented Europe’s first AI regulatory sandbox, selecting 12 high-risk projects in April 2025 to test systems and inform future regulations.
What Lies Ahead
As noted earlier, the AI Act has a fairly extensive enforcement timeline. EU bodies and member states will continue to monitor developments in GenAI technologies and other critical areas so they can keep producing resources that aid organizations in their compliance efforts.
One such resource is the EU’s AI Pact, a voluntary agreement under which organizations are encouraged to comply with key provisions ahead of their legal deadlines. The European Commission is also expected to continue publishing guidelines and common rules to ensure consistent implementation across the bloc.
While the aforementioned member state efforts are already comprehensive, they will be joined by several others over the coming months until full implementation in 2027. Further guides, frameworks, toolkits, and pieces of legislation will be made available, all intended to ensure that GenAI capabilities can be leveraged to their fullest potential without compromising citizens’ rights.
How Securiti Helps
The AI Act is an extremely comprehensive regulation. Compliance need not be complicated, but it will nonetheless require extensive attention to detail and, more importantly, the selection of the right tools, processes, and mechanisms to ensure all regulatory requirements are met.
This is where Securiti can help.
Securiti’s Data Command Center, with its AI Governance solution, provides a holistic platform for building safe, enterprise-grade generative AI systems. It comprises several components that can be used collectively to build end-to-end secure enterprise AI systems, or individually to address diverse AI use cases.
With the AI Governance solution, organizations can run comprehensive governance processes across all AI components and functionalities used in their workflows, including model risk identification, analysis, controls, monitoring, documentation, categorization assessments, fundamental rights impact assessments, and conformity assessments.
Leveraged properly, these solutions ensure all critical obligations are met in an effective and timely manner without compromising an organization’s other operations.
Request a demo today and learn more about how Securiti can help you select and deploy the most appropriate modules and solutions to comply with the regulatory requirements of the EU’s AI Act.