Article 2: Scope | EU AI Act

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syed Tatheer Kazmi

Data Privacy Analyst

CIPP/Europe

Article 2 of the AI Act defines the scope of the regulation. More specifically, it specifies which AI systems fall under its jurisdiction and identifies critical exemptions. A thorough understanding of this aspect of the AI Act is important for stakeholders, including developers, users, and policymakers, as it delineates the boundaries of regulatory oversight.

Scope of Application

The AI Act covers all major AI models, systems, and applications that are placed on the market, put into service, or whose output is used within the European Union, regardless of whether the provider or user is physically based in the EU. This includes:

  • Providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the EU, regardless of whether they are located in the EU or not;
  • Deployers of AI systems that have their place of establishment or are located in the EU;
  • Providers and deployers of AI systems who have their place of establishment or who are located in a third country, where the output of their AI system is used within the EU;
  • Importers and distributors of AI systems;
  • Product manufacturers putting into service or placing on the market an AI system together with their product and under their own name or trademark;
  • Authorized representatives of providers not established in the EU; and
  • Affected persons located in the EU.

Exemptions

The AI Act also contains major exemptions. These ensure that the regulation does not have a blanket effect and that critical use cases remain outside its scope. The exemptions include the following:

  • AI systems placed on the market, put into service, or used exclusively for military, defense, or national security purposes;
  • AI systems released under free and open-source licenses, unless they are placed on the market or put into service as high-risk AI systems, fall under prohibited AI practices, or are subject to the transparency obligations set out in Chapter IV of the AI Act;
  • Research, testing, and development activities regarding AI systems or AI models prior to their being placed on the market or put into service (testing in real-world conditions is not covered by this exemption);
  • AI systems used exclusively for personal, non-professional activities;
  • Public authorities in third countries and international organizations whose use of AI systems falls within international agreements for law enforcement and judicial cooperation with the EU or member state(s), provided appropriate safeguards are in place for the protection of personal data.

Implications for Stakeholders

Article 2 carries significant implications for all stakeholders covered by the AI Act. Each must thoroughly understand its systems and products in order to assess whether they are subject to the AI Act's provisions. The accuracy of this assessment influences the development process, AI design principles, and how such stakeholders choose to market their products and services.

Article 2 provides policymakers and regulatory bodies with clear guidelines on how to assess which entities will be subject to regulatory oversight. These guidelines are vital for the proactive identification of high-risk AI systems to protect the public interest appropriately.

