Article 2: Scope | EU AI Act

Published July 11, 2024 / Updated August 8, 2024

Article 2 of the AI Act defines the scope of the regulation. More specifically, it identifies which AI systems fall under its jurisdiction and which are subject to critical exemptions. A thorough understanding of this aspect of the AI Act is important for stakeholders, including developers, users, and policymakers, as it delineates the boundaries of regulatory oversight.

Scope of Application

The AI Act covers AI models, systems, and applications that are placed on the market, put into service, or whose output is used within the European Union, regardless of whether the provider or user is based in the EU. This includes:

  • Providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU, regardless of whether they are located in the EU or not;
  • Deployers of AI systems that have their place of establishment or are located in the EU;
  • Providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output of their AI system is used within the EU;
  • Importers and distributors of AI systems;
  • Product manufacturers putting into service or placing on the market an AI system together with their product or under their own name or trademark;
  • Authorized representatives of providers not established in the EU; and
  • Affected persons located in the EU.

Exemptions

The AI Act also contains major exemptions. These ensure that the regulation's requirements do not apply indiscriminately, carving out critical use cases. The exemptions include the following:

  • AI systems to be used exclusively for military, defense, or national security purposes;
  • AI systems released under free and open-source licenses, unless they are placed on the market or put into service as high-risk AI systems, fall under prohibited AI practices, or are subject to the transparency obligations in Chapter IV of the AI Act;
  • Activities related to research, testing, and development of AI systems, provided such activities are subject to relevant EU law;
  • AI systems developed and to be used exclusively for personal non-professional activities;
  • Public authorities in third countries and international organizations whose use of AI systems occurs under international agreements for law enforcement and judicial cooperation with the EU or member state(s), provided appropriate safeguards are in place for the protection of personal data.

Implications for Stakeholders

Article 2 carries significant implications for all stakeholders tied to AI systems under the AI Act. Stakeholders need a thorough understanding of their systems and products to assess whether a given product or service is subject to the Act's provisions. The accuracy of this assessment shapes the development process, AI design principles, and how stakeholders choose to market their products and services.

Article 2 also provides policymakers and regulatory bodies with clear guidelines for determining which entities are subject to regulatory oversight. These guidelines are vital for the proactive identification of high-risk AI systems and for protecting the public interest appropriately.
