Automated Decision-Making (ADM) refers to the use of Artificial Intelligence (AI) systems and algorithms to make decisions or significantly influence them without direct human intervention. Under California’s proposed regulations, automated decision-making involves any technology that processes personal information to execute a decision, replace human decision-making, or substantially influence human decisions, including profiling.
Automated decision-making systems handle everything from simple tasks, such as sorting emails and files by name, modification date, or size, to complex ones, such as assessing individuals’ creditworthiness or personalizing users’ online browsing experiences. These systems rely on vast amounts of personal data to detect patterns and make autonomous decisions that can have significant legal, economic, or personal impacts.
However, the increasing integration of automated decision-making into everyday life has raised substantial concerns over privacy, discrimination, and transparency. Recognizing these risks, California’s data privacy regulator, the California Privacy Protection Agency (CPPA), has proposed detailed rules under the Automated Decisionmaking Technology (ADMT) Regulations.
The CPPA Board is now reviewing whether to adopt these regulations or make further changes. Read on to learn more.
Why Is California Developing ADMT Regulations?
California leads globally in technology innovation, thanks in part to Silicon Valley. While automated decision making offers transformative benefits, lawmakers and regulators recognize the need for responsible development and use of such technologies, especially where they impact individuals’ rights and opportunities.
The proposed ADMT Regulations have three main goals:
Preventing Discrimination & Bias
Discrimination and bias can arise in automated decision-making systems, most often because of bias in the datasets an AI model or system was trained on. Hence, a framework is needed that requires businesses to assess and document ADM’s potential impacts, including discrimination or disparate outcomes, and to implement measures that detect and mitigate bias before significant harm occurs.
Improving Transparency & Accountability
Transparency is central to the CPRA, and the ADMT Regulations continue this principle. Businesses must disclose how automated decision-making systems operate, including:
- How automated decision-making processes use personal information to make significant decisions,
- The types of outputs produced,
- How those outputs are used in decision-making,
- Whether and how human reviewers may influence the final decisions.
Earlier drafts required businesses to disclose the specific “logic used” and “key parameters” of automated decision-making systems. The current draft no longer demands this level of technical detail, instead allowing businesses to protect trade secrets and maintain security safeguards.
Businesses must also inform consumers about what happens if they choose to opt out of automated decision making, except where fraud prevention or safety exceptions apply. Importantly, pre-use notices can now be delivered alongside other privacy notices required under the CPRA.
By having ADMT Regulations in place, California can ensure users are adequately protected online without stifling businesses’ capacity to innovate responsibly. These standards allow for the ethical development, deployment, and assessment of such technologies, which not only benefits users but also further cements California’s reputation as a global leader in AI ethics and governance.
Protecting Consumer Rights
Automated decision-making systems can profoundly influence individuals’ lives. The ADMT Regulations aim to safeguard consumers, ensuring they can exercise privacy rights effectively and remain in control of decisions that affect them.
Significant Decisions Under ADMT Regulations
The draft regulations define a “significant decision” as one that results in:
- The provision or denial of financial or lending services, e.g., granting or denying loans, managing deposit accounts, or offering installment plans.
- Housing decisions, e.g., approvals or denials of permanent or temporary residence. Administrative decisions purely about availability or successful payment do not count as significant decisions.
- Education enrollment or opportunities, including admissions, awarding educational credentials, suspension, or expulsion.
- Employment or independent contracting opportunities or compensation, such as hiring, work assignments, promotions, demotions, termination, or salary decisions.
- Healthcare services, including diagnosis, treatment, or health assessments.
A major clarification in the most recent draft is that advertising and marketing activities are explicitly excluded from being treated as significant decisions. Therefore, businesses deploying automated decision-making purely for behavioral advertising are not subject to ADMT rules under the proposed regulations.
How to Opt Out of Automated Decision-Making in California
Under the current draft, consumers will have the right to opt out of automated decision-making processes used to make significant decisions about them.
Businesses must offer at least two methods for submitting opt-out requests, such as:
- Toll-free telephone numbers,
- Email addresses,
- Online forms,
- Physical forms submitted by mail or in person.
Businesses that deploy automated decision making for significant decisions are not automatically required to offer an opt-out if they provide an appeals process allowing consumers to request a human review. Notably:
- The earlier drafts required human reviewers to be “qualified” experts.
- The current draft only requires the human reviewer to know how to interpret and use the ADM’s output and to have the authority to make or change decisions based on that analysis.
In certain situations, consumers do not have opt-out rights. Businesses may refuse to offer opt-outs if automated decision making is used solely for:
- Detecting or preventing security incidents,
- Investigating, resisting, or preventing fraud,
- Protecting the life or physical safety of the consumer or others.
These exceptions are absolute. They exist because these uses of automated decision making are viewed as critical for security, fraud prevention, or emergency safety.
Additionally, businesses do not have to offer opt-outs or appeals in certain contexts (a simple sketch of this gating logic follows the list below):
- When automated decision making is used purely for workplace or educational purposes (e.g., HR processes).
- When profiling occurs in publicly accessible places.
- When automated decision making is used solely for training AI or machine learning models, rather than making decisions about specific individuals.
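For teams mapping these rules into a compliance system, the overall gating logic can be summarized in a short sketch. The following Python snippet is purely illustrative, based on this article’s summary of the draft rules; the `Purpose` labels and the `must_offer_opt_out` function are hypothetical names, not anything prescribed by the regulations, and this is not legal advice.

```python
from enum import Enum, auto

class Purpose(Enum):
    """Hypothetical labels for how a business uses ADMT."""
    SECURITY_INCIDENT_PREVENTION = auto()
    FRAUD_PREVENTION = auto()
    PHYSICAL_SAFETY = auto()
    SIGNIFICANT_DECISION = auto()
    WORKPLACE_OR_EDUCATIONAL = auto()
    PUBLIC_PLACE_PROFILING = auto()
    MODEL_TRAINING_ONLY = auto()

# Uses for which the draft lets businesses refuse opt-outs outright.
ABSOLUTE_EXCEPTIONS = {
    Purpose.SECURITY_INCIDENT_PREVENTION,
    Purpose.FRAUD_PREVENTION,
    Purpose.PHYSICAL_SAFETY,
}

# Contexts in which neither an opt-out nor an appeal is required.
EXEMPT_CONTEXTS = {
    Purpose.WORKPLACE_OR_EDUCATIONAL,
    Purpose.PUBLIC_PLACE_PROFILING,
    Purpose.MODEL_TRAINING_ONLY,
}

def must_offer_opt_out(purposes: set[Purpose], offers_human_appeal: bool) -> bool:
    """Return True if, per this article's summary, an opt-out must be offered."""
    # Solely security-, fraud-, or safety-related uses: opt-outs may be refused.
    if purposes and purposes <= ABSOLUTE_EXCEPTIONS:
        return False
    # Exempt contexts: no opt-out or appeal is required.
    if purposes and purposes <= EXEMPT_CONTEXTS:
        return False
    # Uses that never make significant decisions (e.g., behavioral advertising)
    # fall outside the ADMT rules altogether.
    if Purpose.SIGNIFICANT_DECISION not in purposes:
        return False
    # Significant decisions: opt-out required unless a human-review appeal is offered.
    return not offers_human_appeal

# Example: a lender scoring loan applicants with no appeal process.
assert must_offer_opt_out({Purpose.SIGNIFICANT_DECISION}, offers_human_appeal=False)
```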
Once a business verifies an opt-out request, it must stop using automated decision-making for the requesting individual and confirm receipt and completion of the request.
If a business denies an opt-out request, it must inform the consumer why, explain their right to appeal, and provide further guidance on available options. Consumers facing difficulties may also seek assistance from the CPPA.
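Continuing in the same illustrative vein, a business’s backend might track opt-out requests along the following lines. Everything here, including the `OptOutRequest` record, the `notify` helper, and the suppression list, is a hypothetical sketch of the flow described above, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Optional

def notify(consumer_id: str, message: str) -> None:
    """Placeholder delivery channel; a real system would contact the consumer."""
    print(f"[to {consumer_id}] {message}")

@dataclass
class OptOutRequest:
    consumer_id: str
    method: str                       # e.g., "toll-free", "email", "online form", "mail"
    verified: bool = False
    status: str = "received"          # received -> completed / denied
    denial_reason: Optional[str] = None

def handle_opt_out(request: OptOutRequest, adm_suppression_list: set) -> None:
    """Confirm receipt, then either suppress ADM for the consumer or deny with reasons."""
    notify(request.consumer_id, "Your opt-out request has been received.")
    if not request.verified:
        request.status = "denied"
        request.denial_reason = "identity could not be verified"
        # A denial must say why, explain the right to appeal, and point to
        # further options, including assistance from the CPPA.
        notify(request.consumer_id,
               f"Request denied: {request.denial_reason}. You may appeal, "
               "and you may also seek assistance from the CPPA.")
        return
    adm_suppression_list.add(request.consumer_id)  # stop ADM for this consumer
    request.status = "completed"
    notify(request.consumer_id, "Your opt-out request has been completed.")
```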
Right of Transparency/Access
Consumers have a right to know:
- Whether automated decision-making is being used to make significant decisions about them,
- How their personal information is processed in those decisions,
- The types of data inputs used and the types of outputs generated,
- The potential consequences or outcomes of automated decision-making usage.
This right empowers individuals to understand and potentially challenge the role of automated decision-making in decisions that affect them.
Businesses must provide clear and comprehensive explanations, allowing consumers to make informed choices about engaging with automated decision-making systems.
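As a rough illustration, the items above might map onto a simple disclosure record like the one below when a business responds to an access request. The field names and structure are assumptions for illustration only; the regulations do not prescribe a format.

```python
from dataclasses import dataclass

@dataclass
class ADMAccessDisclosure:
    """Illustrative fields a business might return for an ADM access request."""
    adm_used_for_significant_decision: bool  # whether ADM was used at all
    personal_info_processing: str            # how personal information was processed
    input_types: list[str]                   # e.g., ["payment history", "income data"]
    output_types: list[str]                  # e.g., ["credit risk score"]
    potential_consequences: str              # outcomes the consumer may face

# Example: a hypothetical lending-decision disclosure.
example = ADMAccessDisclosure(
    adm_used_for_significant_decision=True,
    personal_info_processing="Applicant records were scored by an automated model.",
    input_types=["payment history", "income data"],
    output_types=["credit risk score"],
    potential_consequences="The score may result in loan approval or denial.",
)
```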
Earlier drafts would have required highly technical disclosures of automated decision-making logic and key parameters. The current draft takes a more practical approach, balancing transparency with business confidentiality and security.
Profiling Under the Regulations
The draft regulations explicitly include profiling within the scope of ADM. Profiling involves using automated processing to evaluate personal aspects like:
- Intelligence,
- Behavior,
- Performance at work,
- Economic situation,
- Health,
- Personal preferences,
- Location and movements.
Hence, profiling activities fall under the same rules and opt-out rights as other automated decision-making uses, provided they result in significant decisions about consumers.
Importantly, the regulations clarify that behavioral advertising does not count as significant decision-making under the current draft.
How Securiti Can Help
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Numerous reputable and esteemed global enterprises rely on Securiti's Data Command Center for their data security, privacy, governance, and compliance needs.
This is because the Data Command Center is equipped with several solutions and modules designed to ensure swift and reliable compliance via complete automation. These modules, ranging from cookie consent management to assessment automation, universal consent, and vendor risk management, empower an organization to maintain real-time oversight of its compliance with all relevant regulatory requirements via a centralized dashboard.
This, in turn, enables an organization to take proactive measures if a potential violation or instance of non-compliance is detected.
In such delicate situations, preventing a potential incident can come down to a few vital minutes or even seconds, making a solution like the Data Command Center that much more critical.
Request a demo today to learn more about how Securiti can help your organization implement automation in a regulatory-compliant manner.
Frequently Asked Questions
Here are answers to some commonly asked questions related to automated decision-making under the CPRA.