The CJEU’s Decision on Processing Personal Data for Advertising Purposes

Contributors

Semra Islam

Sr. Data Privacy Analyst

CIPM, CIPP/Europe

Syed Tatheer Kazmi

Data Privacy Analyst

CIPP/Europe

Published October 14, 2024

The Court of Justice of the European Union (CJEU) recently delivered a significant ruling on the processing of personal data for personalized advertising. The court’s decision imposes strict limitations on how organizations can use such data, particularly when it involves sensitive information.

Background of the Case

The case originated from a lawsuit filed in Austria by privacy activist Maximilian Schrems, challenging Meta Platforms Ireland’s processing of his personal data. Schrems argued that Meta unlawfully used his data, including sensitive information about his sexual orientation, for targeted advertising. Meta collects extensive data on Facebook users through various means, such as cookies, social plug-ins, and pixels, tracking users’ activities both on and off the platform. This data collection allows Meta to infer users’ interests and serve personalized ads. The case reached the CJEU after two key questions were referred for interpretation.

Questions Before the CJEU

The first question was whether Article 5(1)(c) of the GDPR (data minimization principle) allows a platform to aggregate, analyze, and process all personal data it holds for targeted advertising without restrictions concerning the storage time or the type of data.

The second question examined whether a statement made by a person about their own sexual orientation for the purposes of a panel discussion permits the processing of other data concerning their sexual orientation in order to aggregate and analyze that data for personalized advertising. This hinges on the interpretation of two provisions:

  • Article 5(1)(b), which mandates that personal data must be processed for specified, legitimate purposes and not in a way incompatible with those purposes.
  • Article 9(2)(e), which allows the processing of sensitive data if the data subject has made that data manifestly public.

First Question: Data Minimization Principle

The Court emphasized that the data minimization principle requires personal data to be adequate, relevant, and limited to what is necessary for the processing. This reflects the proportionality principle, meaning that the processing of personal data should be strictly necessary for the intended purpose, and any excessive or indiscriminate processing is unlawful.

The Court highlighted Article 5(1)(e), which requires that personal data be kept in a form that permits identification of data subjects for only as long as is necessary for the purposes of processing. Consequently, the Court declared that even if data processing is initially lawful, it can become unlawful if the data is kept beyond the necessary period or if it is used for purposes other than the original intention. Once the original purpose is fulfilled, the data must be deleted. The Court further stated that, in light of the principle of data minimization, the controller may not collect personal data in a generalized and indiscriminate manner and must refrain from collecting data that is not strictly necessary for the purposes of the processing.

As per the ruling, Meta Platforms Ireland collected personal data on Facebook users both on and outside the social network and also tracked users’ navigation patterns on third-party sites through social plug-ins and pixels embedded in those websites. The Court held that such extensive processing constitutes a serious interference with the fundamental rights of the data subjects, particularly the rights to respect for private life and to the protection of personal data.

While the referring court must make the final assessment of whether Meta’s processing was justified in light of its objective of disseminating targeted advertising, the Court declared that the principle of data minimization precludes controllers, such as social media platform operators, from aggregating, analyzing, and processing personal data collected either on or outside the platform for the purposes of targeted advertising without any restriction as to time and without distinction as to the type of data.

Second Question: Public Disclosure of Sensitive Data

The second question considered whether Schrems’ statement about his sexual orientation at a public panel discussion constituted implied consent for Meta to process additional data related to his sexual orientation for advertising purposes. The CJEU recognized that under Article 9(2)(e) of the GDPR, which provides one of the exemptions to the general prohibition on processing sensitive data, such data can be processed if the individual has manifestly made it public.

However, the Court clarified that even if Schrems had made his sexual orientation public during the discussion, this did not authorize Meta to process additional data related to his sexual orientation obtained from third-party websites and apps with a view to aggregating and analyzing such data for personalized advertising. The Court emphasized that the exception in Article 9(2)(e) must be interpreted strictly: the fact that a person has manifestly made a public statement concerning their sexual orientation does not imply that they have consented to the processing of other data relating to their sexual orientation by the operator of an online social network platform.

Conclusion

The CJEU’s ruling has significant implications for how organizations handle personal data. Organizations must ensure that personal data is collected for specific and legitimate purposes and that further processing is not done in ways that are incompatible with those purposes. The amount of data processed should be restricted to what is strictly necessary to achieve the intended purpose.

Furthermore, personal data must only be retained for as long as needed to fulfill the original processing purpose. Importantly, organizations cannot assume that the public disclosure of certain information permits unrestricted processing of related data without proper justification or further consent. This decision is a crucial reminder for organizations to review and refine their data processing practices to ensure compliance with data protection laws. By strengthening data governance frameworks that prioritize privacy and ensure minimal, proportionate use of personal data, organizations can demonstrate their commitment to regulatory compliance.
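As an illustrative sketch only (the ruling itself prescribes no implementation), the retention principle above can be operationalized by tagging each record with the purpose it was collected for and a purpose-bound retention period, then purging anything that has outlived that period. The `Record` type and `purge_expired` helper below are hypothetical names, not part of any standard or of the decision:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Record:
    subject_id: str
    purpose: str          # the specific, legitimate purpose the data was collected for
    collected_at: datetime
    retention: timedelta  # how long that purpose justifies keeping the data

def purge_expired(records: list[Record], now: datetime) -> list[Record]:
    """Keep only records still within their purpose-bound retention period."""
    return [r for r in records if now - r.collected_at < r.retention]

now = datetime(2024, 10, 14, tzinfo=timezone.utc)
records = [
    Record("u1", "order fulfilment", now - timedelta(days=30), timedelta(days=90)),
    Record("u2", "ad targeting", now - timedelta(days=400), timedelta(days=90)),
]
kept = purge_expired(records, now)
# the second record has outlived its 90-day retention period and is dropped
```

A real deployment would of course need per-purpose retention schedules grounded in a documented lawful basis; the point here is only that retention limits are enforceable as data, not policy text.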
