Data privacy and AI governance are highly complicated domains within the United States. While there have been frequent calls for federal data privacy legislation in the US, similar to the GDPR in the EU, such legislation is unlikely to come into effect in the near future. However, the federal government has taken a proactive interest in AI governance.
While several states have enacted their own regulations targeting various aspects of AI use, the federal government has also pursued numerous initiatives and plans aimed at cementing the US's reputation as a global leader in AI.
On December 19, 2023, the Federal Trade Commission (FTC) made headlines for bringing an official complaint against Rite Aid that combined both of the aforementioned domains.
The details of the complaint, including the alleged violations, malpractices, and technology in use, are likely to have a lasting impact, offering several lessons for both privacy and AI governance professionals.
These lessons will not only serve as critical foundations for organizations still developing their AI governance frameworks but will also guide them in balancing the capabilities of AI with their data privacy obligations to customers.
Read on to learn more.
The Complaint in a Nutshell
Rite Aid is a drugstore chain in the United States. With more than 2,000 physical locations, it is the third-largest retail pharmacy in the country. On December 19, 2023, the FTC sued Rite Aid for violating Section 5 of the FTC Act. The FTC alleged that Rite Aid was involved in:
- An unfair Facial Recognition Technology (FRT) practice, improperly using FRT that falsely flagged Rite Aid customers for shoplifting;
- Violating a 2010 FTC order by failing to implement a comprehensive security program to protect customers’ personal information.
In addition to the complaint, the FTC has attached a stipulated order that seeks not only to ban Rite Aid from using FRT-related mechanisms for a period of five years but also to require Rite Aid to undertake a comprehensive overhaul of its existing information security policies before it may resume the use of such technology after the five-year period.
Apart from this baseline compliance, the FTC order sets out other best practices it will expect from Rite Aid, and from similar uses of biometric systems in the future. These best practices are detailed below.
A Brief Background
This isn’t Rite Aid’s first run-in with the FTC. Back in November 2010, the FTC filed an official administrative complaint alleging that Rite Aid had failed to undertake appropriate measures to mitigate unauthorized access to its users’ personal information.
At the time, Rite Aid agreed to conduct a thorough review of its internal policies and practices to ensure a modern information security plan was in place that afforded its users’ personal information an adequate degree of protection. Additionally, it agreed to maintain comprehensive documentation of all the steps it would take in pursuit of that task to demonstrate its compliance.
Then, in July 2020, Reuters published a bombshell report as part of its Watchful Eyes investigative series. Per its investigation, Rite Aid had been using FRT mechanisms predominantly in lower-income, non-white neighborhoods across the country. The state-of-the-art technology was being procured from a company with links to the Chinese government.
Over the course of the next few months, Rite Aid confirmed the existence of the technology but defended its use as an anti-theft measure devoid of any racial intent. Critically, Rite Aid assured Reuters that it had ceased the use of the technology, stating that it had also severed all business ties with the vendor providing the FRT services.
However, by this time, the FTC had opened its own investigation into the matter. The investigation’s initial purpose was to verify whether Rite Aid was complying with the 2010 agreement. Through a series of information requests in 2022, the FTC gathered enough evidence to support its claims that Rite Aid was not only in breach of the 2010 agreement but also in violation of Section 5 of the FTC Act, as explained earlier.
Details of the FTC’s Complaint
Per the FTC’s complaint, Rite Aid deployed FRT mechanisms between 2012 and 2020. The drugstore chain maintained an extensive database of images it had procured from law enforcement. Rite Aid used these within its datasets to identify potential “persons of interest” deemed likely to engage in or attempt criminal activity within Rite Aid’s stores.
The FRT that Rite Aid used cross-referenced each shopper against this dataset. In the case of a positive match, store employees would receive an alert along with instructions on how to handle the situation.
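For illustration, the core loop the complaint describes, cross-referencing a shopper against an enrollment database and alerting on a match, might look something like the minimal sketch below. The embeddings, threshold, and identifiers are hypothetical, not details from Rite Aid's actual system.

```python
# Hypothetical sketch of a match-and-alert flow; all names, thresholds, and
# the embedding representation are illustrative assumptions.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# "Enrollment database": embeddings derived from watchlist images.
enrollment_db = {
    "person_of_interest_1": [0.12, 0.85, 0.51],
    "person_of_interest_2": [0.91, 0.02, 0.40],
}

MATCH_THRESHOLD = 0.90  # illustrative; real systems tune this against error-rate targets

def check_shopper(shopper_embedding: list[float]) -> dict | None:
    """Cross-reference a shopper's embedding against the enrollment database."""
    best_id, best_score = None, 0.0
    for person_id, enrolled in enrollment_db.items():
        score = cosine_similarity(shopper_embedding, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= MATCH_THRESHOLD:
        return {"match": best_id, "confidence": round(best_score, 3)}
    return None  # no alert

alert = check_shopper([0.11, 0.84, 0.52])
if alert:
    print(f"ALERT: possible match {alert['match']} (confidence {alert['confidence']})")
```

Note that even this toy version carries a confidence score with each alert; as discussed below, the absence of exactly that information is central to the FTC's allegations.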
Rite Aid’s alleged violations of the FTC Act can be summarized in the following critical areas:
Third-Party Services
Rite Aid engaged several third-party service providers when implementing and deploying its AI facial recognition function. However, it has now emerged that the company did not undertake sufficient measures to ensure the outputs generated by these third-party services had an appropriate degree of accuracy and reliability. The vendor whose system was in use in the Rite Aid matter allegedly included a disclaimer that it made no representations or warranties as to the accuracy or reliability of the system.
Data Usage
An extension of the offense mentioned above: many of the outputs generated were of insufficient quality to warrant the actions they triggered. This includes allegedly failing to account for low-quality images, which reduced the accuracy with which individuals could be recognized. Furthermore, employees were trained to “push for as many enrollments” of low-quality images as possible into the dataset being fed to the AI models. At the same time, these images were retained for extensive periods, often without appropriate user consent.
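A basic safeguard the complaint says was missing is a quality gate before enrollment. A minimal sketch, assuming a hypothetical image record and thresholds, could look like this:

```python
# Illustrative quality gate screening images before enrollment in a
# recognition dataset; the thresholds and record shape are assumptions.
from dataclasses import dataclass

@dataclass
class EnrollmentImage:
    image_id: str
    width: int
    height: int
    sharpness: float  # e.g., a variance-of-Laplacian score from an upstream step

MIN_RESOLUTION = (112, 112)   # illustrative floor for face crops
MIN_SHARPNESS = 100.0         # illustrative blur threshold

def passes_quality_gate(img: EnrollmentImage) -> bool:
    """Reject images too small or too blurry to support reliable matching."""
    if img.width < MIN_RESOLUTION[0] or img.height < MIN_RESOLUTION[1]:
        return False
    return img.sharpness >= MIN_SHARPNESS

candidates = [
    EnrollmentImage("cctv_still_001", 64, 64, 40.0),      # low-res CCTV frame
    EnrollmentImage("booking_photo_002", 480, 600, 310.0),
]
accepted = [c.image_id for c in candidates if passes_quality_gate(c)]
print(accepted)  # only the high-quality image is enrolled
```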
Alert Process
The alerts generated by the facial recognition mechanism did not contain any confidence values denoting the likelihood that the individual present in the store was the same as the one identified by the mechanism.
Consequently, thousands of instances of false matches led to real-world harm for customers entering Rite Aid stores, including heightened surveillance, refusal of service, public embarrassment, false detainment, and police reports. Most of the customers affected by Rite Aid’s practices were Black, Asian, Latino, and women, further highlighting the biased nature of the harm caused by Rite Aid's use of the AI facial recognition technology.
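By way of contrast, a confidence-aware alerting step, the kind of safeguard whose absence the FTC highlights, might look like the sketch below; the score bands and recommended actions are illustrative assumptions.

```python
# Illustrative confidence-aware alert triage: every alert carries a match
# score and a recommended action tier. Bands and actions are assumptions.

def triage_alert(match_id: str, confidence: float) -> dict:
    """Map a raw match score to an alert with an explicit confidence value."""
    if confidence >= 0.99:
        action = "notify manager for manual photo comparison"
    elif confidence >= 0.95:
        action = "observe only; do not approach"
    else:
        action = "suppress alert; score below actionable range"
    return {"match_id": match_id, "confidence": confidence, "action": action}

for score in (0.993, 0.96, 0.88):
    print(triage_alert("person_of_interest_1", score))
```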
Another key facet of the FTC’s allegations against Rite Aid is its failure to maintain an information security program sufficiently equipped to safeguard the personal data being collected and used.
Additionally, the existing program did not account for whether the third-party service providers it was relying on for the FRT mechanisms had similar programs to protect users’ personal information.
Lastly, Rite Aid failed to maintain comprehensive documentation of its existing program in the manner both parties had agreed to in relation to the 2010 complaint. The FTC has concluded that, owing to Rite Aid’s violation of the 2010 agreement, its violations are likely to have caused substantial consumer injury.
Future Measures Rite Aid Is Expected to Take
The Rite Aid case is poised to mark a significant development in AI governance within the US. The FTC’s detailed report states that it will use the findings of this incident to refine and evolve its expectations of organizations deploying similar AI functionalities across their systems.
As for Rite Aid itself, the order is only the start of what will be a lengthy remedial roadmap to undo the damage its use of facial recognition technology has done. Organizations using similar technologies within their physical stores will now be required to disclose that usage to consumers, the FTC noting that Rite Aid categorically instructed its employees not to do so.
Furthermore, the FTC’s consent order lists the best practices Rite Aid will be expected to have implemented if it wishes to deploy a biometric system with similar functionality at the end of the five-year injunction. These include the following:
Frequent Assessments
At the end of the five-year period, if Rite Aid wishes to deploy a similar FRT mechanism, it will only be allowed to do so if it has conducted a thorough and comprehensive written assessment of its entire data infrastructure.
Such an assessment would need to include the following (a structured sketch of how these elements might be tracked appears after the list):
- All the minor and major risks such as physical, financial, or reputational injury, stigma, or emotional distress that a consumer may experience;
- Documentation of the testing;
- The methodology behind the development of the FRT Rite Aid plans on deploying;
- Data factors that may affect the accuracy of the outputs;
- A review of standard industry practices;
- Training likely to be required to use the FRT mechanism;
- Customers’ right to opt out of being subjected to the mechanism;
- Policies governing the operation of the mechanism;
- Analysis of potential adverse consequences.
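To make the checklist concrete, here is a minimal sketch of how such a written assessment could be captured as a structured record; the schema and field names are our illustrative assumptions, not a format prescribed by the FTC order.

```python
# Hypothetical structured record for the FTC-required written assessment.
# Field names paraphrase the checklist above; the schema is an assumption.
from dataclasses import dataclass

@dataclass
class FRTDeploymentAssessment:
    consumer_risks: list[str]          # physical, financial, reputational injury, stigma, distress
    testing_documentation: list[str]   # references to test reports
    development_methodology: str       # how the FRT was developed
    accuracy_data_factors: list[str]   # e.g., image quality, lighting, demographics
    industry_practice_review: str
    required_training: list[str]
    opt_out_policy: str                # how customers can opt out
    operating_policies: list[str]
    adverse_consequence_analysis: str

    def is_complete(self) -> bool:
        """Gate any deployment decision on every checklist element being documented."""
        return all(bool(getattr(self, f)) for f in self.__dataclass_fields__)
```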
Accuracy & Reliability
One of the more damning aspects of the latest FTC complaint is how lax Rite Aid’s policies were with regard to assessing the accuracy and reliability of the mechanisms it had deployed. Hence, it will be required to undertake comprehensive testing to appropriately document, implement, maintain, and assess the mechanisms, as well as the relevant safeguards deployed to counter any risks they pose.
Such tests will need to be conducted annually, with the FTC to provide additional information on the specific operational and documentation requirements related to these tests.
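As a simple illustration of what such a periodic accuracy test could measure, the sketch below computes false match and false non-match rates over labeled test pairs; the data and metric choices are assumptions, since the FTC has yet to publish the specific requirements.

```python
# Illustrative annual accuracy test: evaluate the matcher on labeled pairs
# and report error rates. Test data and thresholds are assumptions.

# Each test case: (system said "match", ground truth is same person)
test_results = [
    (True, True), (True, False), (False, False),
    (False, True), (True, True), (False, False),
]

false_matches = sum(1 for pred, truth in test_results if pred and not truth)
false_non_matches = sum(1 for pred, truth in test_results if not pred and truth)
impostor_pairs = sum(1 for _, truth in test_results if not truth)
genuine_pairs = sum(1 for _, truth in test_results if truth)

fmr = false_matches / impostor_pairs        # false match rate
fnmr = false_non_matches / genuine_pairs    # false non-match rate
print(f"FMR={fmr:.2%}, FNMR={fnmr:.2%}")    # results to be documented annually
```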
Employee Training
Rite Aid will be required to ensure its employees are appropriately trained annually on the information security program it adopts. To that end, its employees should be well-versed in the various governance risks associated with the FRT mechanism it deploys at the time, how to interpret the outputs generated by the mechanism accurately, and, most importantly, the theoretical limitations of the mechanisms.
Employees’ knowledge of these requirements must be assessed and documented per metrics that Rite Aid must develop. Furthermore, these metrics will also need to be reviewed by the FTC before being used.
Data Quality Control
For Rite Aid, quality datasets will not only be a matter of operational efficiency but also a condition to resume deployment of FRT mechanisms. The quality of the dataset being used to train any AI-related mechanism will have a proportional impact on the quality of the outputs generated.
Rite Aid will be expected to undertake all possible measures to mitigate bias that may arise from poor image quality in any future FRT mechanisms. Furthermore, there will be strict data retention requirements, specifically for any biometric data it may end up using.
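A retention requirement of this kind typically reduces to an automated purge job. The sketch below assumes a hypothetical 90-day policy window and record shape; actual retention periods would come from the governing policy.

```python
# Illustrative retention check for biometric records; the 90-day window and
# record shape are assumptions, not values from the FTC order.
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=90)  # hypothetical policy value

def purge_expired(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Keep only records still within the retention window; the rest are dropped."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION_PERIOD]

records = [
    {"id": "bio_001", "collected_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": "bio_002", "collected_at": datetime.now(timezone.utc)},
]
print([r["id"] for r in purge_expired(records)])  # only the recent record survives
```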
User Consent
This is arguably the most straightforward yet most complicated requirement for Rite Aid. If it ever deploys a similar FRT mechanism, it will need the explicit consent of those customers entering its physical premises who may be subject to the mechanism.
An additional notice will need to be provided if the mechanism triggers an action that may lead to physical, financial, or reputational harm to an individual. If an individual lodges a complaint, it will need to be resolved within 30 days.
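In practice, such a requirement implies gating any FRT processing on recorded consent and tracking the 30-day complaint deadline. A minimal sketch, with a hypothetical consent registry:

```python
# Illustrative consent gate and complaint deadline; the registry and record
# shapes are assumptions for demonstration only.
from datetime import date, timedelta

consent_registry: set[str] = {"customer_123"}  # customers with explicit opt-in

def may_process(customer_id: str) -> bool:
    """Only process biometrics for customers who have given explicit consent."""
    return customer_id in consent_registry

COMPLAINT_SLA = timedelta(days=30)

def complaint_deadline(filed_on: date) -> date:
    """Complaints must be resolved within 30 days of being lodged."""
    return filed_on + COMPLAINT_SLA

print(may_process("customer_123"))           # True: consent on file
print(may_process("customer_456"))           # False: no consent recorded
print(complaint_deadline(date(2024, 3, 1)))  # 2024-03-31
```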
Implications for Others
Bias has been a contentious issue since the advent of AI. With an extensive number of organizations incorporating AI into their security apparatus, the problems of bias, consent, and best practices have unsurprisingly intersected.
The Rite Aid case is likely to act as a strong precedent for how the FTC aims to tackle issues within this domain. Similarly, its repercussions for Rite Aid carry strong lessons for other retail companies that have already deployed, or are in the process of deploying, facial recognition technology within their own premises.
The FTC has released multiple resources on how organizations must manage users’ biometric information, appropriately inform users in their privacy policy of how such information is collected and used, and, most importantly, obtain unambiguous consent from all users for the use of such information.
Most importantly, this episode offers a unique perspective on what the FTC expects from organizations. Some of these expectations were already present in its earlier report on AI harms, which highlighted inaccuracy, bias, and discrimination as possible harmful side effects of poorly designed AI tools. Similarly, it also published a report specifically warning against potential abuse of FRT and individuals’ biometric information.
How Securiti Can Help
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and Generative AI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Organizations aiming to shore up their data practices as a result of the Rite Aid episode will find the Data Command Center’s plethora of modules and solutions incredibly useful in their pursuit of effective compliance.
These solutions include a Privacy Policy Management solution that can be dynamically integrated into the privacy program and customized to an organization's unique business operations. The Data Quality module ensures key data is easy to discover and understand and is of high quality, with appropriate references and a list of the business rules that have been applied to the data.
All of these modules can be deployed, monitored, and adjusted from a centralized dashboard with an easy-to-use interface, allowing for seamless and prompt action whenever necessary.
Request a demo today to learn more about how Securiti can help your organization create a fully compliant data privacy and security program while maximizing the potential benefits of AI.