Aligning Your AI Systems With GDPR: What You Need to Know

Contributors

Anas Baig

Product Marketing Manager at Securiti

Rohma Fatima Qayyum

Associate Data Privacy Analyst at Securiti

Published December 30, 2025

In a recent survey on the increased use of AI, nearly 78% of respondents said their organizations use AI in at least one business function. AI is reshaping how modern organizations operate and how efficiently they leverage their resources. From fundamental elements such as customer support chatbots to predictive hiring tools, AI systems have become a crucial value driver for organizations, and their importance is only expected to grow.

However, this integration poses ethical, operational, and, of course, regulatory challenges.

In the EU, these responsibilities can be viewed primarily through the lens of the General Data Protection Regulation (GDPR). In recent years, data regulators in the EU have been especially active in maintaining heightened scrutiny of AI-driven systems. The Italian Garante fined OpenAI €15 million for alleged GDPR violations in its ChatGPT functionalities, and the Dutch DPA fined Clearview AI €30.5 million for building an extensive biometric database to train its AI systems, a database central to its operations but in flagrant violation of GDPR transparency requirements. There are numerous other such examples, all carrying a single message: innovation cannot bypass accountability.

This blog explores how organizations can align their AI systems with the GDPR, not as a mere compliance requirement but as a strategic differentiator. It covers how best to interpret the GDPR in the AI context, critical data subject request (DSR) considerations under both the GDPR and the AI Act, practical steps organizations can take toward compliance, and, of course, solutions that can ease the path to compliance.

Read on to learn more.

Understanding GDPR in the AI Context

Almost all data protection authorities (DPAs) across the EU have released extensive resources aimed at helping organizations ensure their data processing complies with GDPR requirements. The surge in AI usage has prompted even more guidance from the DPAs. The sections below cover what these resources say about GDPR compliance in the AI context.

Key GDPR Principles Relevant For AI Systems

a. Lawful, Fair, And Transparent Processing

One of the GDPR’s most fundamental requirements is that all personal data processing be grounded in a valid legal basis (i.e., consent, contract, legal obligation, vital interests, public interest, or legitimate interests). That basis must be appropriately identified and communicated to the individuals whose data is being processed. This extends to personal data processing carried out by or for AI systems: individuals should understand how their data is being used to train, test, or improve AI models. Such transparency can be a challenge for organizations, since AI remains largely a “black box,” with considerable opacity around how its decision-making processes work.

b. Purpose Limitation & Data Minimization

Data usability and reusability are key facets of AI systems: data can be leveraged for multiple purposes and in different contexts. However, the GDPR strictly requires that data collected for specific purposes be used for those purposes only. An organization may only use such data in a way compatible with those purposes; any deviation must be justified and, in most cases, requires the consent of the individual the data belongs to. This is further complicated by data minimization requirements, under which organizations may only collect or retain data as far as it is necessary for the stated purpose. Hence, an organization cannot hope to feed extensive datasets into its AI training cycle without first ensuring it has appropriately informed users of such uses.
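
To make the idea concrete, here is a minimal sketch of a purpose-limitation gate applied before data reaches a training pipeline. The record structure, purpose labels, and the "model_training" tag are hypothetical illustrations, not a prescribed schema.

from dataclasses import dataclass, field

@dataclass
class PersonalDataRecord:
    subject_id: str
    payload: dict
    # Purposes the data subject was informed of at collection time.
    declared_purposes: set = field(default_factory=set)

def eligible_for_training(record: PersonalDataRecord) -> bool:
    """Purpose-limitation gate: only admit records whose declared
    purposes cover model training (or a documented compatible purpose)."""
    return "model_training" in record.declared_purposes

# Usage: filter a candidate dataset before it ever reaches the pipeline.
candidates = [
    PersonalDataRecord("u1", {"age": 34}, {"service_delivery", "model_training"}),
    PersonalDataRecord("u2", {"age": 41}, {"service_delivery"}),
]
training_set = [r for r in candidates if eligible_for_training(r)]
print([r.subject_id for r in training_set])  # ['u1'] -- 'u2' never declared training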

c. Data Accuracy

The GDPR requires personal data to be accurate and kept up to date. The AI Act builds on this principle by requiring AI systems to use unbiased data to prevent discriminatory outcomes.

d. Automated Decision-Making & Profiling

The GDPR gives individuals the right not to be subjected to decisions based solely on automated processing, especially where those decisions significantly affect them, such as credit approvals or employment screening. Individuals may request reconsideration by a human reviewer. Hence, any AI system used in such contexts must have effective human oversight measures in place, mechanisms that allow individuals to challenge decisions, and a reasonable degree of explanation of how those decisions are made. Consequently, businesses leveraging AI in these domains must adopt “explainable AI” frameworks to ensure individuals never face opaque and unchallengeable outcomes.

e. Accountability & Governance

The GDPR’s accountability principle requires organizations not only to be compliant but also to be able to demonstrate that compliance. In the AI context, this can be achieved through extensive documentation of all records of processing activities that involve AI, complemented by data lineage records, model training logs, DPIAs, and governance structures.
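
As an illustration of what such documentation might look like in practice, the following sketch models one Article 30-style record of a processing activity that feeds an AI model. All field names and values are hypothetical; real records should follow the organization's own RoPA template.

import json
from dataclasses import dataclass, asdict

@dataclass
class ProcessingActivityRecord:
    """One record of a processing activity involving AI.
    The fields are an illustrative minimum, not a mandated schema."""
    activity: str
    lawful_basis: str
    data_categories: list
    training_datasets: list  # data lineage: which datasets feed which models
    models_affected: list
    dpia_reference: str      # link to the DPIA covering this activity

record = ProcessingActivityRecord(
    activity="chatbot intent-model training",
    lawful_basis="legitimate_interests",
    data_categories=["support tickets", "chat transcripts"],
    training_datasets=["support_corpus_v3"],
    models_affected=["intent-classifier-2025-06"],
    dpia_reference="DPIA-0042",
)

# Demonstrating compliance means being able to hand this over on request.
print(json.dumps(asdict(record), indent=2))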

f. Security Of Processing

Regardless of context, organizations are expected to undertake technical and organizational measures appropriate to the risk to ensure data security throughout the processing lifecycle. AI systems involve complex data flows, further complicated by cloud-based training environments, APIs, and integrations with third-party providers. Risks unique to AI systems include potential bias in training data and manipulation of that data by unauthorized individuals. Measures organizations can take to ensure sufficient data security in such environments include encryption, access controls, penetration testing, and logging and monitoring.

Organizations can also rely on a few additional measures to ensure the quality and integrity of training data: data provenance to track data origin and monitor for sources of bias, anomaly detection, and human review of high-risk data points.
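
A minimal sketch of those last two measures, assuming a simple numeric training signal: each point carries a provenance tag, and a basic z-score screen flags outliers for the human review mentioned above. Real pipelines would use far richer provenance metadata and detection methods.

import statistics
from dataclasses import dataclass

@dataclass
class SourcedValue:
    value: float
    source: str  # provenance: where this training point came from

def flag_anomalies(points: list, z_threshold: float = 3.0) -> list:
    """Flag points whose value deviates strongly from the batch mean.
    A z-score check is only a first-pass screen; flagged points go to
    human review rather than being silently dropped."""
    values = [p.value for p in points]
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [p for p in points if abs(p.value - mean) / stdev > z_threshold]

batch = [SourcedValue(v, "crm_export") for v in (10, 11, 9, 10, 12)] + \
        [SourcedValue(250.0, "third_party_feed")]  # suspicious outlier
for p in flag_anomalies(batch, z_threshold=2.0):
    print(f"review: {p.value} from {p.source}")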

GDPR Data Subject Rights in the AI Context

Among the GDPR’s most critical obligations for subject organizations are data subject rights. Various DPAs across the EU, such as those in Belgium, Austria, and France, have offered extensive resources to help organizations harmonize data subject rights with the AI context.

a. Right to Access

Individuals have the right to know whether their data is being processed and for exactly what purpose. In AI contexts, this requires organizations to demonstrate not just the data’s use, but also the necessity of that use and exactly how the AI model used it. Moreover, the explanation must be easy enough for a layperson to understand. This overlaps with the AI Act’s requirement for high-risk AI system providers to offer as much information as possible about their systems’ capabilities, limitations, and intended use.

b. Right to Rectification

This is a peculiar case. Individuals may ask for their data to be rectified if it has become outdated, inaccurate, or obsolete, and it is in the best interest of an organization training AI models to hold the most accurate data possible. However, making such corrections can be tricky: unlike in traditional databases, correcting individual data points once they have been absorbed into a training dataset is not only strenuous but also hard to demonstrate, making it difficult to prove the correction has taken place.

c. Right to Erasure

Individuals have the right to request that all their data in the organization’s possession be erased, essentially allowing them to be “forgotten.” In the AI context, this presents a unique problem: organizations must ensure the data is appropriately erased from the training datasets used to improve AI models, and they must also minimize, as far as possible, that data’s residual influence on the trained model.
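
A simplified sketch of how an erasure request might propagate through training data, assuming hypothetical in-memory datasets and a registry mapping models to their source datasets. It removes the subject's records and flags affected models for retraining review; genuinely removing a data point's influence from an already-trained model ("machine unlearning") remains an open research problem.

def erase_subject(subject_id: str, datasets: dict, model_registry: dict) -> list:
    """Remove a subject's records from every training dataset and flag any
    model trained on an affected dataset for retraining or unlearning review.
    A sketch only: real pipelines also have to handle backups, caches,
    and vendor copies."""
    affected_models = []
    for name, records in datasets.items():
        before = len(records)
        records[:] = [r for r in records if r["subject_id"] != subject_id]
        if len(records) != before:
            affected_models += [m for m, src in model_registry.items() if src == name]
    return affected_models

datasets = {"support_corpus_v3": [{"subject_id": "u2", "text": "..."},
                                  {"subject_id": "u9", "text": "..."}]}
model_registry = {"intent-classifier-2025-06": "support_corpus_v3"}
print(erase_subject("u2", datasets, model_registry))  # models needing review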

d. Right to Restriction of Processing & Data Portability

Individuals have the right to request a limitation on how their data is used. They may also request that it be moved to another service. Within the AI context, this means organizations must have the means and mechanisms to identify and isolate an individual’s personal data within complex datasets. While this is a complex technical challenge, both the GDPR and the AI Act reinforce it by requiring high-risk AI systems to have data governance frameworks in place that make handling such requests easier and more accessible for individuals.
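
For illustration, a sketch of that "identify and isolate" step, assuming hypothetical sources keyed by a common subject_id: the same lookup underpins both restriction (quarantining the records from further processing) and portability (exporting them in a machine-readable format).

import json

def isolate_subject_records(subject_id: str, sources: dict) -> dict:
    """Locate one subject's data across heterogeneous sources so it can be
    restricted or exported. Source names and schemas are illustrative."""
    return {name: [r for r in records if r.get("subject_id") == subject_id]
            for name, records in sources.items()}

sources = {
    "crm": [{"subject_id": "u7", "email": "u7@example.com"}],
    "training_features": [{"subject_id": "u7", "tenure_months": 18},
                          {"subject_id": "u3", "tenure_months": 4}],
}
export = isolate_subject_records("u7", sources)
print(json.dumps(export, indent=2))  # portable JSON bundle for the data subject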

e. Right to Object

All individuals have the right to object to their data being processed. This extends to AI applications that operate at massive scale, such as targeted advertising, risk scoring, or profiling. Not only must organizations respect users’ right to object to their data being used in such contexts, but they must also provide opt-outs. Additionally, the AI Act obliges organizations to label such AI systems to inform individuals of the AI’s involvement and enable them to exercise their rights accordingly.

f. Right to Object to Automated Decision-Making

Arguably the most relevant data subject right in the context of AI is the individual’s right to object to decisions based solely on automated processing that produce legal consequences for them. The AI Act reinforces this by requiring all high-risk AI systems, such as hiring algorithms and fraud detection systems, to have appropriate human oversight, auditability, and risk management mechanisms, ensuring individuals are protected against harmful outcomes.

DPA Recommendations

a. Build Transparency & Explainability Into AI Systems From Day One

The German DPA’s Guidance on AI & Data Protection explains that organizations must treat transparency and explainability as vital elements of their AI system designs, rather than afterthoughts to be added later.

The processing logic, along with the input/output loop, must be documented in a manner that gives laypeople, especially data subjects, a reliable understanding of how decisions are made and, more importantly, how their data fits into the decision-making process. Additional elements, such as input history and model training iteration documentation, should reflect all the training loops that have been run, giving a comprehensive picture of how the model reached its current capabilities.

Frameworks that facilitate explainable AI (XAI), with elements such as decision-logic summaries and user-friendly disclosures, ensure there are understandable summaries of high-risk automated decision-making by AI models. Such documentation also supports compliance with both GDPR and AI Act requirements.
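
A minimal sketch of what one documented training iteration, paired with a plain-language decision-logic summary, could look like; the model and field names are invented for illustration, not taken from the guidance.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrainingIteration:
    """One entry in the model's training history; together these entries
    give the comprehensive, explainable picture described above."""
    model_name: str
    version: str
    dataset_id: str
    objective: str               # what this training loop changed
    plain_language_summary: str  # decision-logic summary for data subjects
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

history = [TrainingIteration(
    model_name="credit-prescreen",
    version="2.3.0",
    dataset_id="applications_2025_q2",
    objective="recalibrate income-to-debt feature weighting",
    plain_language_summary=("The model compares your declared income to your "
                            "existing debt; higher ratios lower the score."),
)]
print(history[0].plain_language_summary)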

b. Conduct Rigorous DPIAs, Audits, And Risk Assessments

Many AI systems rely on large-scale, often high-risk personal data, ingested as part of structured and unstructured training datasets. CNIL’s AI System Development: CNIL’s Recommendations to Comply with the GDPR recommends conducting a DPIA when at least two of the following criteria are met for the AI system’s development: the system collects sensitive data; it processes large volumes of personal data; it processes data of vulnerable persons (i.e., minors, persons with disabilities, etc.); it crosses or combines datasets; or it uses new and innovative technical solutions.

Even when fewer than two of these criteria are met, a DPIA must still be conducted if significant risks of data misuse, breach, or discrimination exist. A well-designed, comprehensive DPIA can be vital in establishing a critical chain of accountability and laying the foundation for the oversight mechanisms to follow.
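
That trigger can be encoded as a simple checklist. The flags below are hypothetical properties of one AI project; the logic requires a DPIA when at least two criteria are met, or whenever significant risk exists regardless of the count.

# Hypothetical boolean flags for one AI project; criterion names paraphrase
# the CNIL list quoted above.
project = {
    "sensitive_data": True,
    "large_scale": True,
    "vulnerable_subjects": False,
    "dataset_matching_or_combination": False,
    "innovative_technology": True,
    "significant_misuse_or_discrimination_risk": False,
}

def dpia_required(p: dict) -> bool:
    """CNIL-style trigger: a DPIA when at least two listed criteria are met,
    and in any case where significant risks exist."""
    criteria = ["sensitive_data", "large_scale", "vulnerable_subjects",
                "dataset_matching_or_combination", "innovative_technology"]
    met = sum(p.get(c, False) for c in criteria)
    return met >= 2 or p.get("significant_misuse_or_discrimination_risk", False)

print(dpia_required(project))  # True: three criteria met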

Further audits can focus on other important use cases and contexts, such as bias, discrimination, or reidentification risks. Through such regular tests, an organization can validate bias metrics, test edge cases, and evaluate potential harms, then leverage the results to improve both internal governance and external accountability.
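
As one concrete example of a bias metric such an audit might validate, the sketch below computes a demographic parity gap (the difference in positive-outcome rates between groups) over a toy sample of decisions; the groups and tolerance are illustrative.

def demographic_parity_gap(outcomes: list) -> float:
    """Difference in positive-outcome rates between groups; one of many
    possible bias metrics. Input: (group, approved) pairs."""
    rates = {}
    for group in {g for g, _ in outcomes}:
        group_outcomes = [ok for g, ok in outcomes if g == group]
        rates[group] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

audit_sample = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(audit_sample)
print(f"parity gap: {gap:.2f}")  # 0.33 here; teams set their own tolerance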

c. Limit Use & Retention Of Personal Data With Purpose Limitation

Through purpose limitation, organizations can take a proactive approach to data minimization when building compliant AI systems.

While AI models consistently require more and more data to keep improving, various GDPR provisions place strict limits on how and when data can be used and reused. Data minimization does not, however, prevent the use of data in large training datasets.

Therefore, dataset construction becomes more than just a starting block: organizations must ensure that only the data points necessary for model performance, and permitted by the individual, are fed into the training instance.
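
A minimal sketch of that filtering step, assuming a hypothetical field allowlist agreed during the DPIA: only fields deemed necessary for model performance survive into the training instance.

# Hypothetical allowlist: only fields demonstrably necessary for model
# performance and covered by the stated purpose.
NECESSARY_FIELDS = {"tenure_months", "plan_tier", "monthly_usage"}

def minimize(record: dict) -> dict:
    """Drop everything outside the allowlist before the record enters
    the training instance."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

raw = {"subject_id": "u7", "email": "u7@example.com", "tenure_months": 18,
       "plan_tier": "pro", "monthly_usage": 412, "home_address": "..."}
print(minimize(raw))  # identifiers and address never reach the training set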

d. Emphasize Human Oversight, Testing, And Control Over Automated Decisions

While accountability is a much-reiterated obligation under both the GDPR and the AI Act, embedding it into AI training processes and the AI system itself is easier said than done. It requires careful, considered development of roles, documentation, governance frameworks, and oversight mechanisms.

If done properly, and in a manner consistent with each organization’s unique needs and operational approach to AI system training, these elements ensure responsibilities are efficiently assigned for all key aspects, such as AI oversight, DPO involvement, escalation paths, and internal rules.

The Spanish DPA’s Guide on AI-Based Data Processing recommends documenting all model training, validation, versioning, and decision logs in an easy-to-access repository; adding version control, audit trails, and modification logs provides another layer of transparency that will aid an organization’s compliance efforts.
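
One way to make such logs tamper-evident is to chain entries with hashes, as in this sketch. It is a simplified illustration of the audit-trail layer, not a mechanism prescribed by the Spanish DPA.

import hashlib, json

class AuditTrail:
    """Append-only log where each entry commits to the previous one, so
    after-the-fact modification is detectable."""
    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev_hash, "hash": digest})

trail = AuditTrail()
trail.append({"action": "train", "model": "credit-prescreen", "version": "2.3.0"})
trail.append({"action": "validate", "model": "credit-prescreen", "auc": 0.81})
print(trail.entries[-1]["hash"][:16], "links back to", trail.entries[-1]["prev"][:16])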

Best Practices for AI Systems Compliance

a. Embed Privacy By Design & By Default

It may seem obvious, but organizations need to treat compliance as a critical necessity of AI systems rather than an afterthought. This is only possible if they adopt privacy by design, with safeguards such as data minimization, pseudonymization, and anonymization built into AI workflows. Not only does this reduce overall risk exposure, but it also simplifies demonstrating compliance when regulators or partners ask questions about data protection measures.

Moreover, with privacy by default, organizations ensure they only process data strictly necessary for their stated purposes. This can include limits on data retention periods or disabling by default any features that would otherwise expose an organization to regulatory and reputational risk.
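
A brief sketch of two such defaults, keyed pseudonymization and an enforced retention window. The key and retention period are placeholders, and note that pseudonymized data still counts as personal data under the GDPR, so the other safeguards continue to apply.

import hmac, hashlib
from datetime import datetime, timedelta, timezone

SECRET_KEY = b"rotate-and-vault-me"  # placeholder; store in a KMS, not in code

def pseudonymize(identifier: str) -> str:
    """Keyed hashing: stable tokens usable for analytics and training,
    reversible only by whoever holds the key mapping."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def past_retention(collected_at: datetime, retention_days: int = 365) -> bool:
    """Privacy by default: retention limits are enforced, not aspirational."""
    return datetime.now(timezone.utc) - collected_at > timedelta(days=retention_days)

print(pseudonymize("u7@example.com"))
print(past_retention(datetime(2023, 1, 1, tzinfo=timezone.utc)))  # True: purge it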

b. Maintain Comprehensive Documentation

At the heart of the GDPR is accountability. Regulators expect organizations processing data and using it to train their AI systems to be able to demonstrate their compliance with clear records to back up their claims. This includes documentation of training data sources, lawful bases, risk assessments, and decision-making processes throughout the AI lifecycle.

Such documentation demonstrates a strong governance structure that ensures compliance is not only implemented but also actively monitored and evaluated for continuous improvement. It can also be crucial in fostering collaboration between various organizational departments.

c. Leverage Privacy-Enhancing Technologies

Per the Spanish DPA’s Guidance on AI-based data processing, advanced Privacy-Enhancing Technologies (PETs) such as differential privacy, federated learning, and synthetic data can be leveraged in various contexts and forms, giving organizations the flexibility to maximize AI innovation while reducing overall privacy risk. These techniques are highly compatible with model training and help keep personal data protected throughout the training cycle.

Additionally, adopting such technologies sends a clear signal to regulators and customers about the organization’s dedication to data protection across the AI model training cycle.
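
Of the PETs named above, differential privacy is the easiest to show in miniature. The toy sketch below releases an aggregate count with Laplace noise calibrated to a sensitivity of 1; production systems would use a vetted DP library and careful privacy budgeting rather than this hand-rolled mechanism.

import math, random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release an aggregate count with Laplace noise (sensitivity 1: one
    person changes the count by at most 1). Smaller epsilon means stronger
    privacy and a noisier answer."""
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) via inverse transform on a symmetric uniform.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

print(dp_count(1342, epsilon=0.5))  # the exact count is never released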

d. Implement Robust Human Oversight & Contestability Mechanisms

AI systems that can significantly affect individuals, such as those used in credit scoring, hiring, and healthcare, must not be deployed without appropriate human oversight and review. Embedding a human-in-the-loop structure ensures outputs are validated and possible errors are caught. Results must in all cases remain contestable, in line with the GDPR’s philosophy of giving users the chance to challenge decisions.

Measures that facilitate such an approach include clear escalation channels, dashboards for human reviewers, and workflows that allow decisions to be reversed or overridden.
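
A minimal sketch of such a human-in-the-loop gate, with hypothetical field names: no model output becomes final until a named reviewer confirms or overrides it, and the record keeps room for a later contest.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    model_output: str        # e.g. "deny"
    final_outcome: str = ""  # set only after human review
    reviewer: str = ""
    contested: bool = False

def review(decision: Decision, reviewer: str,
           override: Optional[str] = None) -> Decision:
    """Human-in-the-loop gate: the reviewer confirms or overrides the
    model output before anything takes effect."""
    decision.reviewer = reviewer
    decision.final_outcome = override if override is not None else decision.model_output
    return decision

d = review(Decision("u7", model_output="deny"), reviewer="j.doe", override="approve")
print(d.final_outcome, "- reviewed by", d.reviewer)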

e. Regular Audits & Monitoring

Compliance is not a static activity; as an organization’s AI systems evolve in capability and scale to accommodate new use cases, the risks they pose will evolve too. Regular audits, both technical and legal, are therefore necessary so that issues such as bias, security vulnerabilities, or unintended consequences can be addressed before they escalate into compliance failures.

Moreover, continuous monitoring and updates are just as important, with feedback loops from user complaints, regulator guidance, and incident reports all playing a crucial role in making compliance an ongoing process that steadily improves an organization’s posture and resilience.

How Securiti Can Help

For organizations, GDPR compliance remains a significant challenge even years after the regulation came into effect. Technological developments, market trends, evolving customer expectations, new regulations, and organizational culture have all played their part in keeping it so. The leaps in AI capability are one such factor: while AI offers unprecedented operational benefits, it also poses critical challenges in keeping an organization’s data practices aligned with regulatory requirements.

This is where Securiti can help.

Securiti’s Gencore AI is a holistic solution for building safe, enterprise-grade GenAI systems. It consists of several components that can be used together to build end-to-end safe enterprise AI systems and to address AI data security obligations and challenges across various use cases.

This enables an effective yet simplified enterprise AI system, with comprehensive data controls and governance mechanisms that proactively mitigate identifiable risks. It can be further complemented with DSPM, which provides intelligent discovery, classification, and risk assessment, marking a shift from a reactive data security approach to proactive data security management suited to the AI context, while letting the organization continue to leverage its data resources to their full potential without sacrificing performance or effectiveness.

Request a demo today to learn how Securiti can help your organization navigate the complexities of balancing AI usage with GDPR compliance.
