NOYB’s Privacy Complaint Against an AI Tech Giant: LLM Firewalls as a Viable Solution

Contributors

Anas Baig

Product Marketing Manager at Securiti

Omer Imran Malik

Data Privacy Legal Manager, Securiti

FIP, CIPT, CIPM, CIPP/US

Published June 26, 2024


In 2022, OpenAI amazed the world when it first launched ChatGPT. Fast-forward to 2024, and the AI tech giant now faces severe backlash from the activist group None Of Your Business (NOYB) for allegedly violating multiple provisions of the European Union’s General Data Protection Regulation (GDPR). The group filed a formal complaint with the Austrian Data Protection Authority after ChatGPT failed to state the date of birth of a well-known personality accurately and OpenAI failed to comply with the data subject’s requests for access, correction, and erasure.

Overview of ChatGPT’s Hallucination & NOYB’s Complaint

NOYB, a non-profit organization founded by Max Schrems, stated in its complaint that it asked the generative AI application for the date of birth of its founder. The model returned incorrect, made-up answers. ChatGPT is trained on vast volumes of data, mostly public, and generates responses “to user requests by predicting the next most likely words that might appear in response to each prompt.” Since the date of birth of the data subject (Max Schrems) is not available online, the model could not provide an accurate result and instead ‘hallucinated’ false answers. In its report, NOYB reiterated that LLMs are known to hallucinate, producing illogical responses when the model infers patterns that do not exist, and argued that if a model cannot be accurate about a person’s personal details, it should not be allowed to provide them in the first place.

Upon receiving the incorrect response, the complainant filed data subject requests with the controller for access, correction, and erasure of his personal data in ChatGPT. NOYB claimed that OpenAI failed to fully provide the requested information about the data processed by the model, and also admitted it could not comply with the correction and erasure requests because “there is no way to prevent its systems from displaying the data subject’s inaccurate date of birth in the output.”

Violations of Crucial GDPR Articles

As NOYB highlights, the complaint centers on two issues that breach several GDPR provisions: limited access to data and inaccurate information. Let’s take a quick look at the provisions the controller allegedly violated.

Article 5(1)(d) GDPR

Article 5(1)(d) of the GDPR requires the controller to ensure that data subjects’ personal data is accurate and kept up to date. In NOYB’s case, however, the controller refused to rectify or erase the inaccurate information, arguing that while it could block the data from appearing, doing so would filter out all information pertaining to the data subject. NOYB argued in its complaint that “as long as ChatGPT keeps showing inaccurate data on the respondent, the controller violates Article 5(1)(d) GDPR.”

NOYB requested that the Austrian Data Protection Authority investigate the matter and use its authority to order the controller to take corrective measures. The non-profit further suggested that the authority impose an administrative fine “to guarantee the controller’s future compliance with the GDPR.”

Articles 15 and 17 of GDPR

Articles 15 and 17 of the GDPR grant data subjects the rights of access and erasure, while Article 12 requires “transparent information, communication and modalities for the exercise of the rights of the data subject.” A date of birth qualifies as personal data under the GDPR, which means measures must be taken to ensure the appropriate handling, security, and accuracy of the data the controller collects, stores, and processes. NOYB claims the controller breached several GDPR provisions, especially Articles 12(3) and 17, by failing to provide access to the information processed by the large language model, i.e., ChatGPT, and by failing to correct or delete the inaccurate information.

What Do We Learn From It? Why Does an LLM Firewall Matter?

Large language models, the engines behind GenAI applications, are prone to mistakes because they are trained and fine-tuned on massive volumes of data. Since these datasets often include personally identifiable information (PII), including sensitive personal data, it is paramount to have strict controls and policies that safeguard data against unauthorized access and sensitive data exposure.

In OpenAI’s case, the company could not fulfill the requests to access, rectify, or remove personal information because once an LLM ingests personal data, it is arguably impossible to scrub that data from the model. Without appropriate controls around data wherever it interacts with LLMs, similar scenarios will recur, exposing businesses to heavy fines and penalties as well as reputational damage.

Here, LLM firewalls offer a viable solution, enabling enterprises to filter out harmful prompts, retrievals, and responses. These firewalls sit at the different points where users and systems interact with the LLM, protecting the model and its data from internal and external threats. On top of them, a robust policy framework can redact sensitive data or block hallucinated output.
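To make the idea concrete, here is a minimal, illustrative sketch of an LLM firewall in Python. Everything here (`BLOCKED_TOPICS`, `PII_PATTERNS`, `guarded_completion`) is a hypothetical assumption for illustration, not Securiti’s implementation; a production firewall would rely on context-aware classification rather than simple keyword and regex matching.

```python
import re

# Hypothetical policy: topics to refuse outright, and PII patterns to redact
# from model output. Illustrative only; real firewalls use richer classifiers.
BLOCKED_TOPICS = ["date of birth", "social security number"]
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def inspect_prompt(prompt: str):
    """Return a refusal message if the prompt touches a blocked topic, else None."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return f"Request blocked: queries about '{topic}' are not permitted."
    return None

def sanitize_response(response: str) -> str:
    """Redact sensitive patterns from model output before it reaches the user."""
    for label, pattern in PII_PATTERNS.items():
        response = pattern.sub(f"[REDACTED {label}]", response)
    return response

def guarded_completion(prompt: str, model_call) -> str:
    """Wrap any model call with prompt inspection and response sanitization."""
    refusal = inspect_prompt(prompt)
    if refusal:
        return refusal
    return sanitize_response(model_call(prompt))
```

Because the firewall wraps the model call rather than modifying the model, it can enforce policy (for example, never emitting a date of birth) even when, as in the NOYB complaint, the underlying model cannot be corrected.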


Protect Your LLM Landscape with Securiti’s Context-Aware LLM Firewalls

Securiti offers a new category of context-aware, distributed LLM Firewalls, built to protect LLMs against prohibited topics, hallucinations, harmful content, and sensitive data exposure. Our LLM Firewall solution offers features that include:

  • Automated sensitive data detection, classification, and redaction with inline sanitization.
  • Toxic content prevention and tone and guidelines compliance.
  • Compliance with global data and AI regulations and industry frameworks.

Request a demo today to learn more.
