Rite Aid Banned From Use of AI Facial Recognition: FTC Complaint’s Implications

Published January 17, 2024
Contributors

Anas Baig

Product Marketing Manager at Securiti

Adeel Hasan

Sr. Data Privacy Analyst at Securiti

CIPM, CIPP/Canada


Data privacy and AI governance are highly complicated domains within the United States. While there have been frequent calls for a federal data privacy law in the US, similar to the GDPR in the EU, such legislation is unlikely to come into effect in the near future. However, the federal government has taken a proactive interest in AI governance.

While several states have come out with their own specific regulations targeting various aspects of AI use, the federal government has also been involved in numerous initiatives and plans aimed at cementing the US' reputation as a global leader in AI.

On December 19, 2023, the Federal Trade Commission (FTC) made headlines by bringing an official complaint against Rite Aid that combined both of these domains.

The details of the complaint, including the alleged violations, malpractices, and technology in use, are likely to have a lasting impact with several lessons for both privacy and AI governance professionals.

These lessons will not only serve as critical foundations for organizations still developing their AI governance frameworks but also guide how to balance the possible capabilities of AI with data privacy obligations towards their customers.

Read on to learn more.

The Complaint In a Nutshell

Rite Aid is a drugstore chain in the United States. With more than 2,000 physical locations, it is the third-largest retail pharmacy in the country. On December 19, 2023, the FTC sued Rite Aid for violating Section 5 of the FTC Act. The FTC alleged that Rite Aid was involved in:

  • Unfair use of Facial Recognition Technology (FRT) that falsely flagged Rite Aid customers for shoplifting;
  • Violation of a 2010 FTC order by failing to implement a comprehensive security program to protect customers’ personal information.

In addition to the complaint, the FTC attached a stipulated order that not only bans Rite Aid from using FRT-related mechanisms for a period of five years but also requires Rite Aid to comprehensively overhaul its existing information security policies before it may resume the use of such technology after that period.

Apart from this baseline compliance, the FTC order reveals other best practices it will expect from Rite Aid, and from similar uses of biometric systems, in the future. Such best practices, as listed in the FTC order, are detailed below.

A Brief Background

This isn’t Rite Aid’s first run-in with the FTC. Back in November 2010, the FTC filed an official administrative complaint alleging that Rite Aid had failed to undertake appropriate measures to prevent unauthorized access to its users’ personal information.

At the time, Rite Aid agreed to conduct a thorough review of its internal policies and practices to ensure a modern information security plan was in place that afforded its users’ personal information an adequate degree of protection. Additionally, it agreed to maintain comprehensive documentation of all the steps it would take in pursuit of this task, to demonstrate its compliance.

Then, in July 2020, Reuters published a bombshell report as part of its Watchful Eyes investigative series. Per its investigation, Rite Aid had been using FRT mechanisms, specifically in lower-income, non-white neighborhoods across the country. The state-of-the-art technology was being procured from a company that had links to the Chinese government.

Over the course of the next few months, Rite Aid confirmed the existence of the technology but defended its use as an anti-theft measure devoid of any racial intent. Critically, Rite Aid assured Reuters that it had ceased the use of the technology, stating that it had also severed all business ties with the vendor providing the FRT services.

However, by this time, the FTC had opened its own investigation related to the matter. The investigation’s initial purpose was to verify whether Rite Aid was complying with the 2010 agreement. Through a series of information requests in 2022, the FTC now seems to have enough evidence to support its claims that Rite Aid was not only in breach of the 2010 agreement but was also in violation of Section 5 of the FTC Act, as explained earlier.

Details on FTC’s Complaint

Per the FTC’s complaint, Rite Aid deployed FRT mechanisms between 2012 and 2020. The drugstore chain maintained an extensive database of images it had procured from law enforcement. Rite Aid used these within its datasets to identify potential “persons of interest” considered likely to engage in or attempt criminal activity within Rite Aid’s stores.

The FRT that Rite Aid used cross-referenced each shopper against this dataset. In case of a positive match, store employees would receive an alert along with instructions on how to handle the situation.

Rite Aid’s alleged violations of the FTC Act can be summarized in the following critical areas:

Third-Party Services

Rite Aid engaged several third-party service providers when implementing and deploying its AI facial recognition function. However, it has now emerged that the company did not undertake sufficient measures to ensure that the outputs generated by these third-party services had an appropriate degree of accuracy and reliability. The vendor whose system was in use in the Rite Aid matter allegedly included a disclaimer that it made no representations or warranties as to the accuracy or reliability of the system.

Data Usage

An extension of the violation above: many of the outputs generated were of insufficient quality to warrant the actions they triggered. This included allegedly failing to account for low-quality images, which reduced the accuracy with which individuals could be recognized. Furthermore, employees were trained to “push for as many enrollments” of low-quality images as possible into the dataset being fed to the AI models. At the same time, these images were retained for extensive periods, often without appropriate user consent.
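The safeguard that was allegedly missing here, rejecting low-quality images before they enter a watchlist dataset, can be sketched as a simple quality gate. This is a minimal illustration, not Rite Aid's or any vendor's actual system; the function names and the 0.7 quality cutoff are hypothetical.

```python
def should_enroll(image_quality: float, min_quality: float = 0.7) -> bool:
    """Gate enrollment on a minimum image-quality score.

    Low-quality enrollments degrade match accuracy downstream, which is
    the failure mode the FTC complaint describes. The 0.7 threshold is
    an illustrative placeholder, not a value from the complaint.
    """
    return image_quality >= min_quality


def filter_enrollments(candidates: list[dict], min_quality: float = 0.7) -> list[dict]:
    """Keep only candidate images that clear the quality gate."""
    return [c for c in candidates if should_enroll(c["quality"], min_quality)]
```

In practice, a real deployment would score image quality with a dedicated model (sharpness, pose, occlusion) rather than a single float, but the gating principle is the same.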

Alert Process

The alerts generated by the facial recognition mechanism did not contain any confidence values denoting the likelihood of the individual present in the store being the same as the one identified by the mechanism.
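What a confidence-aware alert might look like can be sketched as follows. This is a hypothetical illustration of the missing safeguard, attaching a match score to every alert and suppressing matches below a floor; the class, function names, and 0.90 threshold are all assumptions, not anything from the FTC order.

```python
from dataclasses import dataclass
from typing import Optional

ALERT_THRESHOLD = 0.90  # illustrative cutoff, not a value from the order


@dataclass
class MatchAlert:
    person_id: str
    confidence: float  # similarity score in [0, 1], shown to staff


def build_alert(person_id: str, similarity: float) -> Optional[MatchAlert]:
    """Raise an alert only when the match clears a confidence floor,
    and always attach the score so staff can weigh the evidence."""
    if similarity < ALERT_THRESHOLD:
        return None  # suppress low-confidence matches instead of alerting
    return MatchAlert(person_id=person_id, confidence=similarity)
```

Per the complaint, Rite Aid's alerts carried no such score, so employees could not distinguish a near-certain match from a marginal one.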

Consequently, thousands of instances of false matches led to real-world harm for customers entering Rite Aid stores, including heightened surveillance, refusal of service, public embarrassment, false detainment, and police reports. Most of the customers affected by Rite Aid’s practices were Black, Asian, Latino, and women, further highlighting the biased nature of the harm caused by Rite Aid's use of the AI facial recognition technology.

Inadequate Information Security Program

Another key facet of the FTC’s allegations against Rite Aid is its inability to maintain an information security program that was sufficiently equipped to safeguard the personal data that was being collected and used.

Additionally, the existing program did not account for whether the third-party service providers it was relying on for the FRT mechanisms had similar programs to protect users’ personal information.

Lastly, Rite Aid failed to maintain the comprehensive documentation of its existing program that both parties had agreed to in relation to the 2010 complaint. The FTC concluded that, owing to Rite Aid’s violation of the 2010 agreement, its practices are likely to have caused substantial consumer injury.

Future Measures Rite Aid is Expected to Take

The Rite Aid case is poised to mark a significant development in terms of AI governance within the US. The FTC’s detailed report states that it will be using the findings of this incident to refine and evolve its expectations from organizations deploying similar AI functionalities across their systems.

As for Rite Aid itself, the order is only the start of what will be a lengthy remedial roadmap to undo the damage done by its use of facial recognition technology. Organizations using similar technologies within their physical stores will now be required to disclose their usage to consumers, a disclosure Rite Aid categorically instructed its employees not to make.

Furthermore, the FTC’s consent order lists the best practices Rite Aid will be expected to have implemented if it wishes to deploy a biometric system with similar functionality at the end of the 5-year injunction. These include the following:

Frequent Assessments

At the end of the 5-year period, if Rite Aid wishes to deploy a similar FRT mechanism, it will only be allowed to do so if it has conducted a thorough and comprehensive written assessment of its entire data infrastructure.

Such an assessment would need to include:

  • All the minor and major risks such as physical, financial, or reputational injury, stigma, or emotional distress that a consumer may experience;
  • Documentation of the testing;
  • The methodology behind the development of the FRT Rite Aid plans on deploying;
  • Data factors that may affect the accuracy of the outputs;
  • Review of the standard industry practices;
  • Training likely to be required to use the FRT mechanism;
  • Customers’ right to opt out of being subjected to the mechanism;
  • Policies governing the operation of the mechanism;
  • Analysis of potential adverse consequences.
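The assessment checklist above is essentially a structured document. One hedged way to picture it is as a record type whose fields mirror the required items; the class and field names below are illustrative paraphrases, not the order's legal language.

```python
from dataclasses import dataclass


@dataclass
class FRTDeploymentAssessment:
    """Written assessment record loosely mirroring the FTC order's
    checklist. Field names are illustrative, not the order's wording."""
    consumer_risks: list            # physical/financial/reputational injury, stigma, distress
    testing_documentation: str      # documentation of the testing performed
    development_methodology: str    # how the FRT was developed
    accuracy_data_factors: list     # e.g. image quality, lighting, demographics
    industry_practice_review: str   # review of standard industry practices
    required_training: str          # training needed to operate the mechanism
    opt_out_policy: str             # customers' right to opt out
    operating_policies: list        # policies governing the mechanism's operation
    adverse_consequence_analysis: str
```

Framing the assessment as a typed record makes it hard to "complete" it while silently skipping a required item, which is the spirit of a written assessment obligation.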

Accuracy & Reliability

One of the more damning aspects of the latest FTC complaint is how lax Rite Aid’s policies were with regard to assessing the accuracy and reliability of the mechanisms it had deployed. Hence, it will be required to undertake comprehensive testing and to appropriately document, implement, maintain, and assess the mechanisms, as well as the relevant safeguards deployed to counter any risks they pose.

Such tests will need to be conducted annually, with the FTC to provide additional information on the specific operational and documentation requirements related to these tests.

Employee Training

Rite Aid will be required to ensure its employees receive annual training on the information security program it adopts. To that end, its employees should be well-versed in the various governance risks associated with the FRT mechanism it deploys at the time, how to accurately interpret the outputs generated by the mechanism, and, most importantly, the theoretical limitations of the mechanism.

Employees’ knowledge of these requirements must be assessed and documented per metrics that Rite Aid must develop. Furthermore, these metrics will also need to be reviewed by the FTC before being used.

Data Quality Control

For Rite Aid, quality datasets will not only be a matter of operational efficiency but also a condition to resume deployment of FRT mechanisms. The quality of the dataset being used to train any AI-related mechanism will have a proportional impact on the quality of the outputs generated.

Rite Aid will be expected to undertake all possible measures to mitigate bias that may originate as a result of bad image quality that might affect any future FRT mechanisms. Furthermore, there will be strict data retention requirements, specifically for any biometric data it may end up using.
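A retention requirement of this kind typically reduces to purging biometric records once they exceed a maximum age. The sketch below illustrates that idea under stated assumptions: the record layout and the 365-day window are hypothetical placeholders, not figures from the order.

```python
from datetime import datetime, timedelta


def purge_expired(records: list[dict], now: datetime, max_age_days: int = 365) -> list[dict]:
    """Drop biometric records older than the retention window.

    Each record is assumed to carry an 'enrolled_at' datetime; the
    365-day window is an illustrative placeholder, not the order's figure.
    """
    cutoff = now - timedelta(days=max_age_days)
    return [r for r in records if r["enrolled_at"] >= cutoff]
```

A production system would also delete derived artifacts (face embeddings, match logs) alongside the source images, since retaining those would undercut the purge.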

Consent & Notice

Arguably, this is the most straightforward yet complicated requirement for Rite Aid. If it ever deploys a similar FRT mechanism, it will need the explicit consent of customers entering its physical premises who may be subject to the mechanism.

An additional notice will need to be provided if the mechanism triggers an action that may lead to physical, financial, or reputational harm to an individual. If an individual lodges a complaint, it will need to be resolved within 30 days.

Implications for Others

Bias has been a contentious issue since the advent of AI. With an extensive number of organizations integrating AI into their security apparatus, the problems of bias, consent, and best practices have unsurprisingly intersected.

The Rite Aid case is likely to act as a strong precedent of how the FTC aims to tackle issues within this domain. Similarly, its repercussions for Rite Aid will carry strong lessons for other retail companies that already have or are in the process of deploying facial recognition technology within their own premises.

The FTC has released multiple resources on how organizations must manage users’ biometric information, appropriately inform users in their privacy policies of how such information is collected and used, and, most importantly, obtain unambiguous consent from all users regarding the use of such information.

Most importantly, this episode lends a unique perspective into what the FTC expects from organizations. Some of these expectations were already present in its earlier report on AI harms, where it highlighted inaccuracy, biases, and discrimination as possible harmful side effects of poorly designed AI tools. Similarly, it also published a report specifically warning against potential abuse of FRT and individuals’ biometric information.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and Generative AI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Organizations aiming to shore up their data practices as a result of the Rite Aid episode will find the Data Command Center’s plethora of modules and solutions incredibly useful in their pursuit of effective compliance.

These solutions include a Privacy Policy Management solution that can be dynamically integrated into the privacy program and customized based on the unique business operations. The Data Quality module ensures key data is easy to discover, understand, and of high quality with the appropriate references and a list of business rules that have been applied to data.

All of these modules can be deployed, monitored, and adjusted from a centralized dashboard with an easy-to-use interface, allowing for seamless and prompt actions whenever necessary.

Request a demo today to learn more about how Securiti can help your organization create a fully compliant data privacy and security program while maximizing the potential benefits of AI.
