Inside the EU AI Act, the World’s First Comprehensive AI Law

Author

Ankur Gupta

Director for Data Governance and AI Products at Securiti

The emergence of artificial intelligence (AI) has created a flurry of disruption as businesses across industries restructure and redefine their paths toward innovation. And with great opportunity comes swift regulation. As with GDPR before it, the EU has the distinction of coming up with the first comprehensive law on artificial intelligence, the aptly named “AI Act.”

On March 13, 2024, the EU Parliament approved the draft law, passing it by an overwhelming majority. Here’s what organizations need to know:

The aim of the EU AI Act

The AI Act lays out general guidelines for applying AI-driven systems, products, and services. It aims to protect both personal and non-personal data, as well as the fundamental rights and interests of individuals within the EU, by ensuring that AI systems are safe, transparent, traceable, non-discriminatory, environmentally friendly, and overseen by people rather than by automated technologies alone.

Defining AI System

The act establishes a technology-neutral, uniform definition for what an AI system is — one that aims to be broad and flexible enough to encompass future developments in AI:

A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Who the AI Act applies to: Scope and exemptions

As with GDPR, the AI Act has an extra-territorial application, meaning that businesses operating outside the EU may be implicated.

The AI Act applies to:

  • Providers that place on the market or put into service AI systems or general-purpose AI models in the EU, regardless of whether the providers are located in the EU;
  • Deployers of AI systems that are established or located within the EU;
  • Providers and deployers of AI systems, wherever established or located, whose output is used within the EU;
  • Importers and distributors of AI systems;
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  • Authorized representatives of providers not established in the EU; and
  • Affected persons located in the EU.

The following systems and groups are exempted from the Act:

  1. Public authorities in non-EU countries and international organizations using AI systems under international agreements for law enforcement or judicial cooperation with the EU or member state(s), given that adequate safeguards are in place for the protection of personal data.
  2. AI systems developed or utilized for purposes beyond EU lawmaking authority, such as military, defense, or national security.
  3. Deployers who are natural persons using AI systems in the course of a purely personal, non-professional activity.
  4. Research, testing (excluding testing in real-world conditions), and development activities related to AI systems before market placement or service use, provided that these activities are conducted respecting applicable EU law.
  5. AI systems released under free and open-source licenses unless they are placed on the market or put into service as high-risk AI systems or fall under Chapter II and Chapter IV of the AI Act.
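Taken together, the scope and exemption rules above amount to a small decision procedure. The sketch below captures only the broad strokes; the parameter names are invented for illustration, and real applicability analysis requires legal review of the Act itself:

```python
def ai_act_applies(*,
                   provider_places_on_eu_market: bool = False,
                   deployer_in_eu: bool = False,
                   output_used_in_eu: bool = False,
                   purely_personal_use: bool = False,
                   military_or_national_security: bool = False) -> bool:
    """Rough sketch of the AI Act's territorial scope rules. Not legal advice.

    Exemptions are checked first: purely personal, non-professional use and
    military/defense/national-security purposes fall outside the Act.
    Otherwise, the Act reaches providers placing systems on the EU market,
    deployers established in the EU, and anyone whose system output is used
    in the EU, regardless of where they are located.
    """
    if purely_personal_use or military_or_national_security:
        return False
    return provider_places_on_eu_market or deployer_in_eu or output_used_in_eu
```

For example, a non-EU provider whose system's output is used inside the EU is still in scope, which is the extraterritorial reach the article compares to GDPR.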

A risk-based approach: Unacceptable and high-risk

The AI Act classifies AI systems based on the risk their application and usage pose, with obligations that scale accordingly. The two most heavily regulated categories are unacceptable risk and high risk.

Risk Category 1: Unacceptable-risk AI systems

AI systems categorized as having “unacceptable” risks clearly endanger or have the potential to infringe upon an individual’s safety and fundamental rights, leading to physical or psychological harm. Under the AI Act, these AI systems are prohibited from being placed, operated, or used in the EU.

Risk Category 2: High-risk AI systems

AI systems that pose a high risk to individuals’ safety, health, or fundamental rights are considered “high risk” under the law and may be used subject to certain obligations. This category also includes AI systems that serve as safety components of products covered by sector-specific Union legislation, where those products must undergo third-party conformity assessment under that legislation. Obligations for high-risk AI systems include, among others, documentation, traceability, cybersecurity, and human oversight, along with transparency requirements that oblige providers to inform end-users that they are interacting with an AI.
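One way to picture the risk-based structure described above is as a lookup from risk tier to regulatory consequence. The tier names and obligation lists below paraphrase this article and are not exhaustive; the data structure itself is purely illustrative:

```python
# Illustrative mapping of the AI Act's risk-based approach.
# Obligation lists paraphrase the article and are not exhaustive.
RISK_TIERS = {
    "unacceptable": {
        "permitted_in_eu": False,
        "obligations": ["prohibited from being placed, operated, or used in the EU"],
    },
    "high": {
        "permitted_in_eu": True,
        "obligations": [
            "documentation",
            "traceability",
            "cybersecurity",
            "human oversight",
            "transparency (inform end-users they are interacting with an AI)",
        ],
    },
}

def can_deploy(risk_tier: str) -> bool:
    """Return whether an AI system in the given tier may be used in the EU."""
    return RISK_TIERS[risk_tier]["permitted_in_eu"]
```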

Regulatory authorities and penalties for non-compliance

The AI Act establishes a new EU-level body, the European Artificial Intelligence Board (the “AI Board”), responsible for creating codes of conduct, advising on the Act’s implementation, promoting AI literacy, and collecting and sharing expertise and best practices. In addition, each member state must designate at least one notifying authority and at least one market surveillance authority to ensure compliance at the national level.

The AI Act also calls for the establishment of an AI Office to coordinate enforcement of the AI Act and to investigate infringement, a Scientific Panel of Independent Experts to advise and monitor potential safety risks, and an Advisory Forum for Stakeholders to provide technical expertise and engage various perspectives in the decision-making process.

In alignment with the risk-based approach, penalties for the AI Act vary based on the severity of the violation:

  • For breach of prohibitions provided under the AI Act, fines can go up to €35 million or 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Non-compliance with obligations relating to high-risk systems, GPAI models, and certain other systems may attract fines of up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
  • For supplying incorrect or misleading information, penalties can be up to €7.5 million or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
  • For small and medium-sized enterprises, including start-ups, the same fine tiers apply, but each fine is capped at whichever of the two amounts is lower.
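The fine logic above is simple arithmetic, and can be sketched as follows. The tier amounts come from the Act as summarized in this article; the function itself and its names are purely illustrative:

```python
def max_fine_eur(violation: str, annual_turnover_eur: float,
                 is_sme: bool = False) -> float:
    """Illustrative calculation of the maximum fine under the EU AI Act.

    Tiers: prohibited practices (EUR 35M / 7%), high-risk and GPAI
    obligations (EUR 15M / 3%), incorrect or misleading information
    (EUR 7.5M / 1%).
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_obligation": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed, pct = tiers[violation]
    pct_amount = pct * annual_turnover_eur
    # Standard rule: whichever amount is HIGHER; for SMEs, whichever is LOWER.
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

# Example: for a company with EUR 1B turnover breaching a prohibition,
# 7% of turnover (EUR 70M) exceeds the EUR 35M floor, so the percentage applies.
print(max_fine_eur("prohibited_practice", 1_000_000_000))
```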

What should organizations do next to ensure compliance with the EU AI Act?

Enterprises that process personal data through AI systems must ensure that their practices comply with the EU AI Act. Using Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration for ensuring the safe use of data and AI — organizations can navigate existing and future regulatory compliance by:

  1. Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
  2. Conducting AI risk assessments to identify and classify AI systems by risk level.
  3. Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
  4. Implementing appropriate privacy, security, and governance guardrails for protecting data and AI systems.
  5. Ensuring compliance with applicable data and AI regulations.
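The five steps above boil down to building and maintaining an inventory of AI systems with risk and mapping metadata attached. A minimal record might look like the sketch below; the field names are hypothetical and do not reflect Securiti’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative inventory entry for the discovery and assessment steps.

    Field names are hypothetical, not Securiti's actual data model.
    """
    name: str
    environment: str                                   # e.g. "public cloud", "SaaS"
    sanctioned: bool                                   # step 1: discovery
    risk_tier: str = "unassessed"                      # step 2: risk assessment
    data_sources: list = field(default_factory=list)   # step 3: mapping
    guardrails: list = field(default_factory=list)     # step 4: controls

    def needs_review(self) -> bool:
        # Unsanctioned or not-yet-assessed systems are flagged for follow-up.
        return not self.sanctioned or self.risk_tier == "unassessed"
```

A newly discovered model starts out flagged for review until it has been sanctioned and assigned a risk tier, which mirrors the discover-then-assess ordering of the steps.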

Check out the whitepaper to learn more about how the EU AI Act will shape the future of AI governance — and how you can ensure compliant and innovative AI use for your enterprise.
