The emergence of artificial intelligence (AI) has created a flurry of disruption as businesses across industries restructure and redefine their paths toward innovation. And with great opportunity comes swift regulation. As with GDPR before it, the EU has the distinction of enacting the first comprehensive law on artificial intelligence, the aptly named “AI Act.”
On March 13, 2024, the EU Parliament approved the draft law, passing it by an overwhelming majority. Here’s what organizations need to know:
The aim of the EU AI Act
The AI Act lays out general guidelines for applying AI-driven systems, products, and services. It aims to protect both personal and non-personal data, as well as the fundamental rights and interests of individuals within the EU, by ensuring that AI systems are safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by people rather than by automation alone.
Defining an AI system
The Act establishes a technology-neutral, uniform definition of what constitutes an AI system, one that aims to be broad and flexible enough to encompass future developments in AI:
A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Who the AI Act applies to: Scope and exemptions
As with GDPR, the AI Act has extra-territorial application, meaning that businesses operating outside the EU may still fall within its scope.
The AI Act applies to:
- Providers that place on the market or put into service AI systems or general-purpose AI models in the EU, regardless of whether those providers are located in the EU.
- Deployers of AI systems that are established or located within the EU.
- Providers and deployers of AI systems whose output is used within the EU, regardless of whether the providers or deployers are established or located inside or outside the EU.
- Importers and distributors of AI systems.
- Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark.
- Authorized representatives of providers not established in the EU.
- Affected persons located in the EU.
The following systems and groups are exempted from the Act:
- Public authorities in non-EU countries and international organizations using AI systems under international agreements for law enforcement or judicial cooperation with the EU or member state(s), provided that adequate safeguards are in place for the protection of personal data.
- AI systems developed or used for purposes outside the scope of EU law, such as military, defense, or national security.
- Deployers who are natural persons using AI systems in the course of a purely personal, non-professional activity.
- Research, testing (excluding testing in real-world conditions), and development activities related to AI systems before market placement or service use, provided that these activities are conducted in accordance with applicable EU law.
- AI systems released under free and open-source licenses, unless they are placed on the market or put into service as high-risk AI systems or fall under Chapter II or Chapter IV of the AI Act.
A risk-based approach: Unacceptable and high-risk
The AI Act classifies AI systems into two main categories based on the risk posed by their application and use. Obligations vary depending on the level of risk an AI system creates.
Risk Category 1: Unacceptable-risk AI systems
AI systems categorized as posing “unacceptable” risks clearly endanger, or have the potential to infringe upon, an individual’s safety and fundamental rights, leading to physical or psychological harm. Under the AI Act, these systems are prohibited from being placed on the market, put into service, or used in the EU.
Risk Category 2: High-risk AI systems
AI systems that create a high risk to individuals’ safety, health, or fundamental rights are considered “high-risk” under the law and are permitted to be used subject to certain obligations. The category also covers AI systems that are safety components of products governed by sector-specific Union legislation; these are deemed high-risk when they are subject to third-party conformity assessment under that legislation. Obligations for high-risk AI systems include, among others, documentation, traceability, cybersecurity, and human oversight, along with transparency requirements obliging providers to inform end users that they are interacting with an AI system.
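To make this two-tier logic concrete, the sketch below shows, in Python, one way an internal compliance tool might encode each category’s treatment under the Act. It is a minimal illustration: the `RiskCategory` enum, the obligation list, and the `triage` function are invented names for this example, not official terminology or tooling.

```python
from enum import Enum


class RiskCategory(Enum):
    """Illustrative labels for the two risk tiers discussed above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"


# Core obligations the Act attaches to permitted high-risk systems.
HIGH_RISK_OBLIGATIONS = [
    "documentation",
    "traceability",
    "cybersecurity",
    "human oversight",
    "transparency (inform end users they are interacting with an AI)",
]


def triage(category: RiskCategory) -> dict:
    """Map a risk category to its regulatory treatment under the AI Act."""
    if category is RiskCategory.UNACCEPTABLE:
        # Prohibited outright: may not be placed on the market,
        # put into service, or used in the EU.
        return {"permitted": False, "obligations": []}
    # High-risk: permitted, but only if the listed obligations are met.
    return {"permitted": True, "obligations": HIGH_RISK_OBLIGATIONS}
```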
Regulatory authorities and penalties for non-compliance
The AI Act provides for a new enforcement authority at the EU level, the European Artificial Intelligence Board (EAIB, or “AI Board”), which will be responsible for creating codes of conduct, advising on implementation of the Act, promoting AI literacy, and collecting and sharing expertise and best practices. In addition, member states must designate at least one notifying authority and at least one market surveillance authority to ensure compliance at the national level.
The AI Act also calls for the establishment of an AI Office to coordinate enforcement of the Act and investigate infringements, a Scientific Panel of Independent Experts to advise on and monitor potential safety risks, and an Advisory Forum for Stakeholders to provide technical expertise and bring diverse perspectives into the decision-making process.
In alignment with the risk-based approach, penalties under the AI Act vary based on the severity of the violation:
- For breaches of the prohibitions set out in the AI Act, fines can reach up to €35 million or 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- Non-compliance in relation to high-risk systems, general-purpose AI (GPAI) models, and other systems may attract fines of up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- For supplying incorrect or misleading information, penalties can be up to €7.5 million or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- Small and medium-sized enterprises (SMEs) that fail to comply are subject to the same tiers, but each fine is capped at whichever of the two amounts is lower, as the worked example below illustrates.
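As a worked illustration of the “whichever is higher” rule and the lower cap for SMEs, the following Python sketch computes the maximum possible fine per violation tier. The tier keys and the `max_fine` function are assumptions made for illustration, not terms from the Act.

```python
# Maximum fines per violation tier: (fixed cap in EUR, share of total
# worldwide annual turnover for the preceding financial year).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_and_gpai": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}


def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine for a given violation tier.

    Standard rule: whichever of the two amounts is HIGHER.
    For SMEs, the applicable cap is whichever is LOWER.
    """
    fixed_cap, turnover_share = FINE_TIERS[tier]
    turnover_based = turnover_share * annual_turnover_eur
    pick = min if is_sme else max
    return pick(fixed_cap, turnover_based)


# A company with EUR 1bn turnover breaching a prohibition faces up to
# max(EUR 35m, EUR 70m) = EUR 70m; an SME with EUR 10m turnover would
# face up to min(EUR 35m, EUR 0.7m) = EUR 0.7m for the same breach.
print(max_fine("prohibited_practices", 1_000_000_000))        # 70000000.0
print(max_fine("prohibited_practices", 10_000_000, is_sme=True))  # 700000.0
```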
What should organizations do next to ensure compliance with the EU AI Act?
Enterprises that process personal data through AI systems must ensure that their practices comply with the EU AI Act. Using Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration for ensuring the safe use of data and AI — organizations can navigate existing and future regulatory compliance by:
- Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
- Conducting AI risk assessments to identify and classify AI systems by risk level.
- Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
- Implementing appropriate privacy, security, and governance guardrails for protecting data and AI systems.
- Ensuring compliance with applicable data and AI regulations (see the illustrative sketch below).
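To ground these steps, here is a minimal, vendor-neutral sketch of the kind of AI system inventory record such a workflow might maintain. The `AISystemRecord` class and its fields are hypothetical and do not represent Securiti’s actual API or data model.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one discovered AI system."""
    name: str
    environment: str                # e.g. public cloud, private cloud, SaaS
    purpose: str
    sanctioned: bool                # approved for use vs. shadow AI
    risk_category: str              # e.g. "unacceptable", "high", "other"
    data_sources: list[str] = field(default_factory=list)
    obligations: list[str] = field(default_factory=list)
    guardrails: list[str] = field(default_factory=list)


# Example entry produced by the discovery and assessment steps above.
record = AISystemRecord(
    name="support-chatbot",
    environment="SaaS",
    purpose="customer support",
    sanctioned=True,
    risk_category="high",
    data_sources=["crm_tickets"],
    obligations=["documentation", "human oversight", "transparency"],
    guardrails=["PII redaction", "access controls"],
)
```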
Check out the whitepaper to learn more about how the EU AI Act will shape the future of AI governance — and how you can ensure compliant and innovative AI use for your enterprise.
Explore AI Governance Center https://securiti.ai/ai-governance/