Inside the EU AI Act, the World’s First Comprehensive AI Law

Author: Ankur Gupta, Director for Data Governance and AI Products at Securiti


The emergence of artificial intelligence (AI) has created a flurry of disruption as businesses across industries restructure and redefine their paths toward innovation. And with great opportunity comes swift regulation. As with GDPR before it, the EU has the distinction of coming up with the first comprehensive law on artificial intelligence, the aptly named “AI Act.”

On March 13, 2024, the EU Parliament approved the draft law, passing it by an overwhelming majority. Here’s what organizations need to know:

The aim of the EU AI Act

The AI Act lays out general guidelines for applying AI-driven systems, products, and services. It aims to protect both personal and non-personal data, along with the fundamental rights and interests of individuals within the EU, by ensuring that AI systems are safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by people rather than by automation alone.

Defining AI System

The act establishes a technology-neutral, uniform definition for what an AI system is — one that aims to be broad and flexible enough to encompass future developments in AI:

A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Who the AI Act applies to: Scope and exemptions

As with GDPR, the AI Act has an extra-territorial application, meaning that businesses operating outside the EU may be implicated.

The AI Act applies to:

  • Providers that place on the market or put into service AI systems or general-purpose AI models in the EU, regardless of whether or not the providers are located in the EU.
  • Deployers of AI systems that are established or located within the EU.
  • Providers and deployers of AI systems whose output is used within the EU, regardless of where the providers or deployers are established or located;
  • Importers and distributors of AI systems;
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  • Authorized representatives of providers not established in the EU; and
  • Affected persons located in the EU.
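The applicability rules above can be read as a short decision procedure. The sketch below is a rough simplification; the `Actor` fields and function names are my own illustration, not terms from the Act.

```python
# Rough sketch of the AI Act's territorial-scope test as summarized above.
# Field names are illustrative simplifications, not official terminology.
from dataclasses import dataclass

@dataclass
class Actor:
    role: str                          # "provider" or "deployer"
    located_in_eu: bool = False
    places_on_eu_market: bool = False  # provider places system/model on the EU market
    output_used_in_eu: bool = False    # the system's output is used within the EU

def in_scope(a: Actor) -> bool:
    """Return True if the actor plausibly falls within the Act's scope."""
    if a.role == "provider" and a.places_on_eu_market:
        return True                    # applies regardless of provider location
    if a.role == "deployer" and a.located_in_eu:
        return True
    return a.output_used_in_eu         # output used in the EU pulls either role in
```

For example, a non-EU provider that places a model on the EU market is in scope, while a non-EU deployer whose system's output never reaches the EU is not.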

The following systems and groups are exempted from the Act:

  1. Public authorities in non-EU countries and international organizations using AI systems under international agreements for law enforcement or judicial cooperation with the EU or member state(s), given that adequate safeguards are in place for the protection of personal data.
  2. AI systems developed or used exclusively for purposes outside the scope of EU law, such as military, defense, or national security.
  3. Deployers who are natural persons using AI systems in the course of a purely personal, non-professional activity.
  4. Research, testing (excluding testing in real-world conditions), and development activities related to AI systems before market placement or service use, provided that these activities are conducted respecting applicable EU law.
  5. AI systems released under free and open-source licenses unless they are placed on the market or put into service as high-risk AI systems or fall under Chapter II and Chapter IV of the AI Act.

A risk-based approach: Unacceptable and high-risk

The AI Act classifies AI systems into two main categories based on the risk posed by their application and usage. Obligations vary according to the level of risk the AI system presents.

Risk Category 1: Unacceptable-risk AI systems

AI systems categorized as having “unacceptable” risks clearly endanger or have the potential to infringe upon an individual’s safety and fundamental rights, leading to physical or psychological harm. Under the AI Act, these AI systems are prohibited from being placed, operated, or used in the EU.

Risk Category 2: High-risk AI systems

AI systems that create a high risk to individuals’ safety, health, or fundamental rights are considered “high risk” under the law and may be used subject to certain obligations. This category also covers safety components of products regulated by sector-specific Union legislation, which are deemed high-risk when they must undergo third-party conformity assessment under that legislation. Obligations for high-risk AI systems include, among others, documentation, traceability, cybersecurity, and human oversight, along with transparency requirements that oblige providers to inform end-users that they are interacting with an AI system.
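The two tiers discussed above map to very different regulatory outcomes, which can be captured in a minimal data structure. The enum and strings below are illustrative only, not language from the Act.

```python
# Illustrative mapping of the two risk tiers described above to their
# broad regulatory outcomes; names and strings are not from the Act's text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"

OUTCOME = {
    RiskTier.UNACCEPTABLE: "prohibited in the EU",
    RiskTier.HIGH: "permitted, subject to obligations",
}

# Headline obligation areas for high-risk systems, per the article above.
HIGH_RISK_OBLIGATIONS = [
    "documentation", "traceability", "cybersecurity",
    "human oversight", "transparency toward end-users",
]
```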

Regulatory authorities and penalties for non-compliance

The AI Act establishes a new enforcement authority at the EU level, the European Artificial Intelligence Board (EAIB, or “AI Board”), which will be responsible for creating codes of conduct, advising on implementation of the Act, promoting AI literacy, and collecting and sharing expertise and best practices. In addition, member states must designate at least one notifying authority and at least one market surveillance authority to ensure compliance at the national level.

The AI Act also calls for the establishment of an AI Office to coordinate enforcement of the AI Act and to investigate infringement, a Scientific Panel of Independent Experts to advise and monitor potential safety risks, and an Advisory Forum for Stakeholders to provide technical expertise and engage various perspectives in the decision-making process.

In alignment with the risk-based approach, penalties for the AI Act vary based on the severity of the violation:

  • For breach of prohibitions provided under the AI Act, fines can go up to €35 million or 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Non-compliance relating to high-risk systems, GPAI models, and certain other systems may attract fines of up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
  • For supplying incorrect or misleading information, penalties can be up to €7.5 million or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Small- and medium-sized enterprises that fail to comply are subject to the same tiers, but the lower of the two applicable amounts applies.
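The tier arithmetic above follows one pattern: a fixed cap versus a percentage of worldwide annual turnover, with the higher amount applying to most enterprises and the lower to SMEs. The function below is a sketch of that rule using the figures cited in the bullets; the function itself and its names are my own, not from the Act.

```python
# Illustrative sketch of the AI Act's tiered penalty logic as described
# above; the function and parameter names are my own, not official terms.

def max_fine(fixed_cap_eur: int, pct_of_turnover: float,
             worldwide_turnover_eur: int, is_sme: bool = False) -> float:
    """Maximum fine for a violation tier.

    Standard rule: the higher of a fixed cap or a percentage of total
    worldwide annual turnover. For SMEs, the lower of the two applies.
    """
    pct_based = worldwide_turnover_eur * pct_of_turnover / 100
    return min(fixed_cap_eur, pct_based) if is_sme else max(fixed_cap_eur, pct_based)

# Breach of a prohibition (EUR 35M cap or 7% of turnover) for a company
# with EUR 1B worldwide turnover: 7% of 1B is 70M, exceeding the 35M cap.
print(max_fine(35_000_000, 7, 1_000_000_000))  # 70000000.0
```

For an SME with €100M turnover facing the same tier, 7% is €7M, which is below the €35M cap, so the lower €7M figure would apply.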

What should organizations do next to ensure compliance with the EU AI Act?

Enterprises that process personal data through AI systems must ensure that their practices comply with the EU AI Act. Using Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration for ensuring the safe use of data and AI — organizations can navigate existing and future regulatory compliance by:

  1. Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
  2. Conducting AI risk assessments to identify and classify AI systems by risk level.
  3. Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
  4. Implementing appropriate privacy, security, and governance guardrails for protecting data and AI systems.
  5. Ensuring compliance with applicable data and AI regulations.

Check out the whitepaper to learn more about how the EU AI Act will shape the future of AI governance — and how you can ensure compliant and innovative AI use for your enterprise.
