Italy’s AI Law: A Comprehensive Guide to Law No. 132/2025

By Aiman Kanwal, Assoc. Data Privacy Analyst at Securiti

Published November 4, 2025

Introduction

Italy's Law No. 132/2025 (Italy’s AI Law), effective October 10, 2025, marks a significant milestone as the first national AI legislation within the EU. This law complements the existing Regulation (EU) 2024/1689 (EU AI Act) by addressing areas not covered by the EU regulation, establishing a precedent for national AI frameworks in Europe. The legislation introduces clear governance structures and sector-specific safeguards for critical areas such as healthcare, employment, justice, and intellectual professions. Furthermore, it enhances protections for minors, democratic institutions, and intellectual property rights in the context of artificial intelligence. This article provides a comprehensive overview of these transformative changes.

Relationship to the EU AI Act

A critical feature of Italy's AI Law is its complementary relationship with the EU AI Act. The Italian national law must be interpreted and applied in accordance with EU AI Act rules and definitions, ensuring consistency across the European regulatory landscape.

Rather than creating parallel or conflicting requirements, the Italian law fills specific gaps in the EU framework with Italy-specific provisions addressing national priorities in sectors like healthcare, employment, justice, and culture. Importantly, the law does not introduce new compliance obligations beyond the EU AI Act; instead, it provides additional sectoral guidance and safeguards within the boundaries established by European legislation.

Definitions

Article 2 of Italy’s AI Law adopts the definitions set out in the EU AI Act, ensuring full alignment with the European framework. Most key terms, including artificial intelligence system and artificial intelligence model, follow the EU definitions directly.

The only definition introduced at the national level is “data,” defined as any digital representation of acts, facts, or information, including sound, visual, or audiovisual data. For all other terminology, the law refers to the corresponding provisions of the EU AI Act.

Structure

Italy's AI Law is divided into six chapters comprising 28 articles.

  • Chapter I: General Principles: Establishes fundamental principles, national AI strategy framework, and guiding values
  • Chapter II: Sector-Specific Rules: Contains provisions for healthcare, scientific research, employment, justice, and intellectual professions
  • Chapter III: Governance and Authorities: Defines the roles of Agency for Digital Italy (AgID), National Cybersecurity Agency (ACN), and other regulatory bodies
  • Chapter IV: Protection of Users and Copyright: Addresses data protection for minors, intellectual property rights, and AI-assisted works
  • Chapter V: Criminal and Procedural Law: Introduces new offenses and procedural modifications for AI-related disputes
  • Chapter VI: Final Provisions: Covers implementation timelines, delegated powers, and international cooperation

Principles

Article 4 of the Act establishes the foundational principles that govern AI deployment in Italy. These principles reflect a human-centric approach to AI governance and include:

  • Data Protection and Privacy: The law mandates lawful, correct, and transparent processing of personal data, reinforcing Italy's commitment to GDPR principles in the AI context.
  • Fundamental Rights Centrality: The law places fundamental rights at the center of AI governance, ensuring that technological advancement does not come at the expense of constitutional protections and human dignity.
  • Safety and Transparency: AI systems must be safe for users and stakeholders, with clear transparency requirements that enable individuals to understand when and how AI affects them.
  • Proportionality: Regulatory requirements must be proportionate to the risks posed by AI systems, avoiding unnecessary burdens while ensuring adequate protection.

National AI Strategy

Beyond specific regulatory requirements, the Act establishes a framework for ongoing strategic planning. The National AI Strategy must be updated every two years by the Interministerial Committee for Digital Transition, ensuring that Italy's approach evolves with technological developments. This strategy serves as a reference framework for policy and regulatory decisions across government, with support from the Department for Digital Transformation. The regular update requirement ensures that Italy's AI governance remains responsive to emerging challenges and opportunities.

Governance Structure

One of the most significant features of Italy's AI Law is its governance architecture, which distributes regulatory responsibilities between two primary authorities:

  • Agency for Digital Italy (AgID): AgID serves as the notifying authority responsible for innovation promotion and conformity assessment. This positioning reflects Italy's commitment to fostering AI innovation while ensuring technical compliance with regulatory standards.
  • National Cybersecurity Agency (ACN): ACN functions as the market surveillance authority responsible for oversight, inspection, and sanctions. Additionally, ACN serves as Italy's single point of contact with EU institutions on AI matters, streamlining international coordination.

This dual authority model balances the objectives of promoting innovation and ensuring compliance, with clear delineation of responsibilities to avoid regulatory overlap or confusion.

Coordination Mechanisms

Recognizing that AI impacts multiple regulatory domains, the law establishes coordination mechanisms. A Coordination Committee at the Presidency of the Council of Ministers provides high-level oversight and ensures consistency across government agencies.

The law preserves existing powers for specialized authorities, including the Garante (Italian Data Protection Authority) for privacy matters and the Italian Communications Authority (AGCOM) for communications. It also mandates collaboration with sectoral regulators, including the Bank of Italy, the National Commission for Companies and the Stock Exchange (CONSOB), and the Institute for Insurance Supervision (IVASS), ensuring that AI oversight leverages existing regulatory expertise. This coordinated approach prevents regulatory fragmentation while respecting the specialized knowledge of sector-specific authorities.

Sector-Specific Provisions

1. Healthcare & Scientific Research

Italy's AI Law includes particularly innovative provisions for healthcare and scientific research (Articles 7-8), recognizing both the transformative potential of AI in these fields and the sensitivity of health data.

  • Data Processing for Research: One of the most significant provisions allows secondary use of personal data (including sensitive health data) for public interest scientific research without requiring new consent. This provision aims to accelerate medical AI development while maintaining privacy protections through mandatory removal of direct identifiers. Before processing begins, organizations must notify the Garante at least 30 days in advance, detailing their GDPR compliance measures (Article 8). This secondary use is recognized as serving a significant public interest under the Italian Constitution, reflecting Italy's commitment to advancing medical research.
  • AI in Clinical Settings: The law establishes clear boundaries for AI in healthcare delivery. AI must function only as a support tool and cannot discriminate against patients or independently decide access to treatment. Final medical decisions must remain with human healthcare professionals, preserving the physician-patient relationship and professional responsibility.

2. Employment

Article 11 mandates that AI in the workplace be used to improve working conditions, protect the psychophysical integrity of workers, and increase productivity.

  • Principles: AI use must be safe, reliable, transparent, and cannot conflict with human dignity or violate the confidentiality of personal data.
  • Non-Discrimination: AI in managing the employment relationship must guarantee compliance with the inviolable rights of the worker without discrimination (including based on sex, age, ethnic origin, etc.).
  • Disclosure to Workers: The employer or client must inform workers when artificial intelligence is used.
  • Monitoring: The law establishes an Observatory on the adoption of artificial intelligence systems in the workplace at the Ministry of Labor and Social Policies. This body is tasked with defining a strategy for AI use in the workplace, monitoring its impact on the labor market, and promoting AI training for workers and employers.
  • Training Delegation: The Government is delegated to adopt decrees that provide for literacy and training paths on AI systems for professionals and workers.

3. Justice System

The Italian AI Law introduces specific safeguards for the use of artificial intelligence within the justice system, reaffirming that human judgment remains central to all judicial activity.

  • Reserved Decision-Making: Decisions involving the interpretation and application of law, the evaluation of facts and evidence, and the adoption of judicial measures are strictly reserved for magistrates. AI may support, but never replace, the exercise of judicial discretion or reasoning.
  • Authorized Uses and Oversight: AI systems may be used only for support functions, such as the organization of justice services, simplification of judicial work, and ancillary administrative activities. The Ministry of Justice regulates these applications and authorizes experimental use of AI in judicial offices, pending full implementation of the EU AI Act, after consulting the designated national authorities.
  • Judicial Training: The Ministry of Justice is tasked with promoting training and awareness programs for judges and administrative staff on AI technologies. These initiatives focus on digital literacy, responsible use, and understanding of AI-related risks and benefits within judicial operations.
  • Jurisdiction for AI Disputes: The law delegates to the government the power to issue legislative decrees within twelve months to establish a comprehensive legal framework governing the use of data, algorithms, and mathematical methods for training AI systems. This framework must define the applicable legal regime, specify the rights and obligations of relevant parties, and provide mechanisms for protection and sanctions in case of violations. Additionally, the law assigns jurisdiction over disputes arising from this regime to the specialized business sections, the dedicated divisions within Italy's ordinary courts that handle complex commercial and technological matters, supported by an amendment to the Code of Civil Procedure extending their competence to cases relating to the operation of an artificial intelligence system.

4. Intellectual Professions

The law addresses how AI affects traditional intellectual professions such as law, accounting, architecture, and engineering.

  • Protecting the Fiduciary Relationship: Professionals must inform clients about the use of AI systems in delivering intellectual services, maintaining trust and awareness of AI involvement.
  • Communication Standards: Information must be presented in clear, simple, and comprehensive language. AI systems may act only as support tools and cannot replace the professional's final judgment; clients must be fully informed of their use.

5. Democratic Institutions

The law explicitly prohibits AI use that could interfere with democratic institutions or distort public debate. This includes protections against disinformation campaigns, opinion manipulation, and AI-generated content designed to undermine electoral integrity or public discourse.

Data Protection Provisions

1. Minors Protection

The Act establishes age-differentiated consent requirements for minors' data in AI systems:

  • Young Children: Parental consent is required for children under 14, recognizing their developmental limitations in understanding AI's implications.
  • Adolescents: Children aged 14-18 can provide their own consent if information about AI systems is accessible and comprehensible to them. This graduated approach respects developing autonomy while ensuring age-appropriate protection.
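The age-differentiated rule above can be sketched as a simple decision function. This is an illustrative sketch only, not a compliance implementation: the thresholds (14 and 18) come from the law, but the function name, data structure, and condition labels are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConsentRequirement:
    consent_from: str  # who must provide consent
    condition: str     # additional condition attached to that consent, if any

def consent_requirement(age: int) -> ConsentRequirement:
    """Illustrative mapping of a data subject's age to the consent rule
    for processing their data in an AI system under Italy's AI Law."""
    if age < 14:
        # Young children: parental consent is required.
        return ConsentRequirement("parent_or_guardian", "none")
    if age < 18:
        # Adolescents (14-18) may consent themselves, but only if the
        # information about the AI system is accessible and comprehensible.
        return ConsentRequirement("minor", "accessible_and_comprehensible_information")
    # Adults consent for themselves under the general rules.
    return ConsentRequirement("data_subject", "none")
```

The two-tier structure makes the graduated approach explicit: the same processing operation triggers different consent paths depending solely on the data subject's age and the quality of the disclosure.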

2. Healthcare Data Processing

To promote innovation while protecting privacy, the law provides a framework for processing health data in AI development:

  • Public Interest Declaration: Processing personal and sensitive data for AI-driven scientific research (e.g., prevention, diagnosis, drug development, public health) is recognized as serving the public interest, consistent with Articles 32 and 33 of the Italian Constitution and GDPR Article 9(2)(g).
  • Secondary Use Authorization: Secondary use of personal data without direct identifiers is permitted for research, even for sensitive categories, without additional consent when initial consent is legally provided.
  • Transparency: Data controllers may fulfill information obligations via general disclosures on their websites.

3. Sports Research

The law also permits processing data related to athletic performance for research purposes:

  • Permitted Processing: Data may be anonymized, pseudonymized, or synthesized for studying athletic gestures, movements, and performances across all sports.
  • Condition: Such processing requires appropriate information to the data subject.

Intellectual Property

The Italian AI Law updates copyright legislation (Law 22 April 1941, n. 633) to clarify the status of works created with AI assistance and to regulate the use of copyrighted material for AI training.

  • Copyright Protection for Human-Created Works: The law reinforces that copyright applies only to works originating from human creativity:
    • Protection is limited to works reflecting the author’s intellectual effort.
    • AI-generated works without meaningful human contribution are not eligible for copyright, distinguishing them from human-created works.
  • Text and Data Mining (TDM) for AI Training: The law permits the use of copyrighted material to train AI models, under specific conditions:
    • Reproduction or extraction of works from databases or networks to which one has legitimate access is allowed for AI text and data mining, including generative AI.
    • TDM must respect Articles 70-ter and 70-quater of the Copyright Law, which allow content owners to exercise opt-out rights.
    • Violations of these provisions, including unauthorized reproduction or extraction through AI systems, are subject to penalties under the Copyright Law.
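The opt-out mechanism implies a pre-ingestion check before any work enters a TDM training corpus: the miner must have legitimate access to the source, and the rightsholder must not have exercised the opt-out under Articles 70-ter and 70-quater. A minimal sketch of such a filter follows; the `opted_out` registry, record fields, and access check are all hypothetical illustrations, since the law does not prescribe a technical format for opt-outs.

```python
def filter_tdm_corpus(works, opted_out, has_legitimate_access):
    """Keep only works that may lawfully be mined for AI training:
    legitimate access to the source and no rightsholder opt-out."""
    eligible = []
    for work in works:
        if work["rightsholder"] in opted_out:
            continue  # rightsholder exercised the TDM opt-out
        if not has_legitimate_access(work):
            continue  # no lawful access to the source database or network
        eligible.append(work)
    return eligible

# Hypothetical usage: publisher_b has opted out, so only w1 remains eligible.
corpus = [
    {"id": "w1", "rightsholder": "publisher_a"},
    {"id": "w2", "rightsholder": "publisher_b"},
]
result = filter_tdm_corpus(
    corpus,
    opted_out={"publisher_b"},
    has_legitimate_access=lambda w: True,
)
```

In practice, checking both conditions before ingestion (rather than after training) is what keeps the reproduction or extraction itself within the exception, since the violation attaches to the act of copying, not only to the trained model.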

Criminal Law Provisions

The Act creates new criminal offenses specifically addressing AI-related harms:

  • Deepfake Dissemination: A new offense criminalizes the unlawful dissemination of AI-manipulated content (deepfakes) with imprisonment penalties. This addresses growing concerns about synthetic media used for fraud, defamation, or manipulation.
  • AI as Aggravating Factor: The law establishes a general aggravating circumstance for crimes committed using AI tools, recognizing that AI can amplify criminal harm through scale, sophistication, or targeting capabilities.

The law amends Italy's Code of Civil Procedure to address AI-related disputes, ensuring that judicial processes can effectively handle novel questions arising from AI deployment.

Compliance Considerations

Organizations deploying AI in Italy must navigate a multi-layered compliance landscape:

  • EU AI Act Baseline: All EU AI Act obligations apply in Italy, including risk classification, conformity assessment, transparency requirements, and record-keeping for high-risk systems.
  • Sector-Specific Requirements: Additional obligations apply depending on the sector and use case. Healthcare organizations, employers, justice system entities, intellectual professionals, and others must comply with sector-specific provisions described above.
  • Enhanced Transparency: Beyond EU requirements, certain applications require broader information disclosure, particularly in employment contexts.
  • Impact Assessments: Organizations must conduct assessments to prevent algorithmic discrimination and ensure fundamental rights protection, particularly when deploying AI in sensitive contexts.

Interaction with Existing Frameworks

Italy's AI Law builds on and integrates with existing regulatory frameworks rather than replacing them:

  • Labor Law: AI employment provisions build on Italy's existing labor protection laws and rules governing remote monitoring of workers.
  • GDPR: Data protection requirements integrate with existing GDPR obligations, including data protection impact assessments (DPIAs) and privacy by design principles.
  • Copyright Law: Intellectual property provisions leverage and extend existing Italian copyright law frameworks.

International Context

Italy's pioneering legislation has significant implications beyond its borders:

  • EU Precedent: As the first EU member state with comprehensive national AI legislation, Italy provides a model for how other member states might supplement the EU AI Act with national provisions.
  • National Sovereignty: The law demonstrates how member states can address national priorities and concerns within the boundaries of EU harmonization, balancing centralized European regulation with local democratic control.
  • Regulatory Innovation: Italy's provisions on copyright for AI-assisted works, healthcare data for research, and criminal liability for deepfakes may influence other jurisdictions grappling with similar questions.

Conclusion

Italy's Law No. 132/2025 represents a landmark development in AI governance, the first comprehensive national AI law among EU member states. By complementing the EU AI Act with targeted sectoral provisions, Italy has charted a path for how member states can address national priorities while maintaining European regulatory harmonization.

The law's sector-specific approach to healthcare, employment, justice, and intellectual professions provides concrete guidance for high-stakes AI applications while preserving fundamental principles of human dignity, professional responsibility, and democratic integrity. Its innovative provisions on copyright protection for AI-assisted works and criminal liability for deepfakes address emerging challenges at the frontier of AI law.

For organizations operating in Italy, the law creates both obligations and opportunities. While compliance requires attention to sector-specific requirements beyond the EU AI Act baseline, the law also provides legal certainty and pathways for innovation, particularly in healthcare research and AI-assisted creative work.

As implementation proceeds and other member states observe Italy's experience, Italy’s AI Law may prove influential far beyond Italian borders, shaping how Europe balances AI innovation with fundamental rights protection in the years ahead.

Organizations should view compliance not as a one-time exercise but as an ongoing process of monitoring regulatory developments, engaging with authorities, and adapting practices as Italy's AI governance framework matures. With thoughtful implementation, Italy's AI Law can support responsible innovation that serves both economic competitiveness and societal values.
