EDPB Report on ChatGPT Taskforce: Navigating GDPR Compliance for LLMs

Contributors

Syed Tatheer Kazmi

Data Privacy Analyst

CIPP/Europe

Maria Khan

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/E

Published June 26, 2024

The widespread adoption of large language models (LLMs) has been a notable trend in recent years, driven by rapid advances in artificial intelligence, with applications across multiple domains and industries. These models rely heavily on web scraping, the automated collection and extraction of publicly available data from the web, which is then used for training purposes.

Since these models are trained on vast publicly available datasets that often contain personal data, their development and deployment must adhere to the requirements of the General Data Protection Regulation (GDPR) and other applicable data protection regulations. Over the last few months, data protection authorities across Europe, including those of the Netherlands, Italy, and the UK, have released guidance on web scraping for LLMs to help developers comply with data protection standards. Notably, Meta recently halted the use of public content shared by adults on Facebook and Instagram across the EU/EEA for training its AI model, following directions from the Irish Data Protection Commission (DPC).

While the EU AI Act aims to establish a legal framework for the deployment and use of AI systems, it should be read together with the GDPR if the processing of personal data is involved. On May 23, 2024, the European Data Protection Board (EDPB) published a report outlining the key takeaways from the ChatGPT Task Force's work (EDPB Report). While not formally designated as guidance, this Report provides valuable insights that will likely influence the evaluation of AI systems' compliance with the GDPR.

Although the EDPB Report is specific to ChatGPT, it offers several major takeaways and actionable insights for LLMs, which are as follows:

(1) Ensure data protection and security safeguards:

  • When processing personal data, LLMs must be designed and deployed to ensure accountability and data protection by design, prioritizing compliance with GDPR requirements at all processing stages. Controllers should proactively implement the necessary safeguards and measures to protect personal data and cannot invoke technical impossibility to justify non-compliance with the requirements of the GDPR.
  • LLM developers must adopt technical measures that define precise data collection criteria and ensure that certain data categories are not collected, or that certain sources (such as public social media profiles) are excluded from collection. Appropriate technical measures should also be adopted to delete or anonymize personal data collected via web scraping before the training stage.
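The technical measures above can be sketched as a minimal pre-training filter. This is an illustrative sketch only, not any vendor's actual pipeline: the source label, the excluded-source set, the regex patterns, and the redaction placeholders are all assumptions made for the example.

```python
import re

# Hypothetical source category to exclude entirely from collection.
EXCLUDED_SOURCES = {"social-media-profile"}

# Simplistic patterns for obvious personal-data identifiers; real systems
# would use far more robust detection than these two regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b")

def filter_and_anonymize(records):
    """Drop records from excluded sources, then redact e-mail addresses
    and phone numbers in the remaining text before training."""
    cleaned = []
    for rec in records:
        if rec["source"] in EXCLUDED_SOURCES:
            continue  # excluded source: never enters the training set
        text = EMAIL_RE.sub("[EMAIL]", rec["text"])
        text = PHONE_RE.sub("[PHONE]", text)
        cleaned.append({"source": rec["source"], "text": text})
    return cleaned
```

The key design point mirrors the EDPB's sequencing: exclusion happens at collection time, while redaction or anonymization happens before the training stage, so personal data from permitted sources never reaches the model in identifiable form.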

(2) Ensure lawful basis for collection and processing of personal data:

  • Although the EDPB Report does not indicate whether consent is likely to be an appropriate lawful basis in the context of web scraping, it does state that each processing of personal data must meet at least one of the lawful bases specified in Article 6(1) of the GDPR, or Article 9(2) of the GDPR in the case of sensitive personal data. The data subject’s consent is unlikely to be an appropriate lawful basis for web scraping, given the large scale of data collection and the difficulty of identifying the data subjects whose data is scraped. Similarly, performance of a contract is unlikely to be an appropriate legal basis, as there is no relationship between the data subject and the data controller that requires the data subject to provide his or her personal data.
  • The EDPB Report recognizes that OpenAI relies on legitimate interests as the basis for the collection and processing of personal data to train ChatGPT. The EDPB Report highlights that controllers of LLMs must establish a legitimate basis for collecting and processing personal data, including data scraped from the web and user inputs. To rely on legitimate interests as a legal basis for data processing, the following three tests must be met:
    • The purpose test, which requires the data controller to identify a legitimate interest pursued by the controller or by a third party to whom the data is disclosed,
    • The necessity test, which requires the data controller to show that processing the personal data is necessary for the legitimate interest pursued, and
    • The balancing test, which requires the data controller to balance the legitimate interests of the controller against the fundamental rights and freedoms of data subjects.

In this Report, the EDPB highlighted the need to balance the data controller’s interests with users' privacy rights and to implement safeguards that ensure compliance with the GDPR.

(3) Ensure the protection of sensitive personal data:

  • Article 9 of the GDPR lays down the conditions under which the processing of sensitive personal data may take place. One of these grounds applies where the processing relates to sensitive personal data that has been manifestly made public by the data subject. Even though LLMs rely on publicly available data, the EDPB has cautioned that it is important to ascertain whether the data subject intended, explicitly and by clear affirmative action, to make that sensitive personal data accessible to the general public; the mere fact that personal data is publicly accessible does not imply that it was manifestly made public by the data subject. This view is consistent with the Advocate General’s Opinion in Schrems v. Meta, in which the Advocate General noted that a statement made by a person about his or her sexual orientation in a panel discussion open to the public does not in itself permit the aggregation and analysis of data about that person’s sexual orientation for personalized advertising purposes.
  • To ensure only appropriate data is collected and retained, sensitive personal data categories should be filtered out, both during data collection (through selection criteria for what data is collected) and immediately after data collection (by deleting data).
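A two-stage filter of this kind can be sketched as follows. The keyword list is a deliberately tiny simplification introduced for the example; real systems would use trained classifiers keyed to the special-category definitions in Article 9, and all names here are hypothetical.

```python
# Illustrative two-stage filter for special-category (Article 9) data.
# SENSITIVE_TERMS is a hypothetical keyword list, not a real taxonomy.
SENSITIVE_TERMS = {"health condition", "religious belief", "trade union"}

def passes_collection_criteria(document: str) -> bool:
    """Stage 1 - at collection time: skip documents matching sensitive terms."""
    lowered = document.lower()
    return not any(term in lowered for term in SENSITIVE_TERMS)

def post_collection_scrub(corpus: list[str]) -> list[str]:
    """Stage 2 - immediately after collection: delete anything that
    slipped past the collection-time criteria."""
    return [doc for doc in corpus if passes_collection_criteria(doc)]
```

Running the same check at both stages reflects the EDPB's point that filtering must happen during collection and again immediately afterwards, rather than relying on a single gate.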

(4) Ensure user transparency and fairness:

  • LLMs relying on legitimate interests as the legal basis for using input and uploaded files from data subjects (referred to as "Content") for training purposes must clearly and demonstrably inform data subjects about this practice. Users must be aware that their Content may be used for such purposes and must be given the option to opt out. The EDPB further indicates that if the input data becomes part of the data model, OpenAI remains responsible for compliance with the GDPR and should not shift the onus onto data subjects by placing a clause in its Terms and Conditions stating that data subjects are responsible for their chat inputs.
  • The controller must provide proper information on the probabilistic mechanisms by which output is created and their limited reliability, including an explicit reference to the fact that the generated text may be biased or made up. Where notifying data subjects is not possible, the controller should take appropriate measures to protect data subjects’ rights and freedoms, including making the information publicly available.
  • LLMs must ensure that their data processing practices are fair and do not unfairly shift responsibility to users. It is important to provide clear information to users about how their data is used, especially regarding AI training. Moreover, personal data should not be processed in a detrimental, discriminatory, unjustifiable, unexpected, or misleading manner.

(5) Ensure data subjects’ rights fulfillment:

  • The EDPB emphasized the importance of making it easy for users to exercise their GDPR rights, such as accessing, correcting, and deleting their data. LLM providers must offer straightforward mechanisms for users to manage their data and make decisions about automated processing, and should continue to improve the modalities and interfaces that facilitate data subject rights.

Based on the aforementioned takeaways from the EDPB Report on the ChatGPT Taskforce, LLM developers must implement appropriate data protection measures both at the time of determining the means of data processing and at the time of the processing itself, and must integrate the necessary safeguards for the protection of data subjects’ rights. The EDPB Report indicates that web scraping on the basis of the data controller’s legitimate interests is possible if appropriate technical measures are in place.
