What is Automated Decision-Making Under the CPRA's Proposed ADMT Regulations?

Author

Anas Baig

Product Marketing Manager at Securiti

Published July 15, 2025


Automated Decision-Making (ADM) refers to the use of Artificial Intelligence (AI) systems and algorithms to make decisions, or significantly influence them, without direct human intervention. Under California’s proposed regulations, automated decision-making involves any technology that processes personal information to execute a decision, replace human decision-making, or substantially influence human decisions, including profiling.

Automated decision-making systems range from simple tasks, like sorting emails and files by name, modification date, or size, to complex ones, such as assessing individuals’ creditworthiness or creating personalized digital browsing experiences for users online. These systems rely on vast amounts of personal data to detect patterns and make autonomous decisions that can have significant legal, economic, or personal impacts.

However, the increasing integration of automated decision-making into everyday life has raised substantial concerns over privacy, discrimination, and transparency. Recognizing these risks, California’s data privacy regulator, the California Privacy Protection Agency (CPPA), has proposed detailed rules under the Automated Decisionmaking Technology (ADMT) Regulations, which have gone through several drafts.

The CPPA Board is now reviewing whether to adopt these regulations or make further changes. Read on to learn more.

Why is California Developing ADMT Regulations?

California leads globally in technology innovation, thanks in part to Silicon Valley. While automated decision making offers transformative benefits, lawmakers and regulators recognize the need for responsible development and use of such technologies, especially where they impact individuals’ rights and opportunities.

The proposed ADMT Regulations focus on the following goals:

Preventing Discrimination & Bias

Discrimination and bias can occur in automated decision-making systems, most often because of bias in the datasets an AI model or system was trained on. Hence, a framework needs to be in place that requires businesses to assess and document ADM’s potential impacts, including discrimination or disparate outcomes, and to implement measures that detect and mitigate bias before significant harm occurs.

Better Transparency & Accountability

Transparency is central to the CPRA, and the ADMT Regulations continue this principle. Businesses must disclose how automated decision-making systems operate, including:

  • How automated decision-making processes use personal information to make significant decisions,
  • The types of outputs produced,
  • How those outputs are used in decision-making,
  • Whether and how human reviewers may influence the final decisions.

Earlier drafts required businesses to disclose the specific “logic used” and “key parameters” of automated decision-making systems. The current draft no longer demands this level of technical detail, instead allowing businesses to protect trade secrets and maintain security safeguards.

Businesses must also inform consumers about what happens if they choose to opt out of automated decision making, except where fraud prevention or safety exceptions apply. Importantly, pre-use notices can now be delivered alongside other privacy notices required under the CPRA.
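To make the disclosure requirements above concrete, here is a minimal sketch of how a business might model a pre-use notice record internally. All field names are illustrative assumptions for this example, not terminology drawn from the regulation text.

```python
from dataclasses import dataclass, field


@dataclass
class PreUseNotice:
    """Illustrative pre-use notice for an ADM system (all fields are hypothetical)."""
    purpose: str                # how personal information is used in the significant decision
    output_types: list[str]     # kinds of outputs the system produces
    output_usage: str           # how those outputs feed into the final decision
    human_review: bool          # whether a human reviewer may influence the outcome
    opt_out_consequences: str   # what happens if the consumer opts out
    exceptions: list[str] = field(default_factory=list)  # e.g. fraud prevention, safety


# Example record for a hypothetical tenant-screening system.
notice = PreUseNotice(
    purpose="Screen rental applications using payment-history data",
    output_types=["risk score"],
    output_usage="Scores below a threshold trigger manual review",
    human_review=True,
    opt_out_consequences="Application is routed to a fully manual review queue",
)
```

A record like this could then be rendered into the consumer-facing notice and, as the current draft allows, delivered alongside the business's other CPRA privacy notices.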

Promoting Ethical Development

By having ADMT Regulations in place, California can ensure users are adequately protected online without stifling businesses’ capacity to innovate responsibly. Such standards allow for the ethical development, deployment, and assessment of all such technologies, which not only benefits the users but also further cements California’s reputation as a global leader in AI ethics and governance.

Protecting Consumer Rights

Automated decision-making systems can profoundly influence individuals’ lives. The ADMT Regulations aim to safeguard consumers, ensuring they can exercise privacy rights effectively and remain in control of decisions that affect them.

Significant Decisions Under ADMT Regulations

The draft regulations define a “significant decision” as one that results in:

  • The provision or denial of financial or lending services, e.g., granting or denying loans, managing deposit accounts, or offering installment plans.
  • Housing decisions, e.g., approvals or denials of permanent or temporary residence. Administrative decisions purely about availability or successful payment do not count as significant decisions.
  • Education enrollment or opportunities, including admissions, awarding educational credentials, suspension, or expulsion.
  • Employment or independent contracting opportunities or compensation, such as hiring, work assignments, promotions, demotions, termination, or salary decisions.
  • Healthcare services, including diagnosis, treatment, or health assessments.

A major clarification in the most recent draft is that advertising and marketing activities are explicitly excluded from being treated as significant decisions. Therefore, businesses deploying automated decision-making purely for behavioral advertising are not subject to ADMT rules under the proposed regulations.
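The category list above can be sketched as a simple classification helper. This is a hypothetical illustration of the draft's logic, not an official taxonomy; the category strings are assumptions chosen for readability.

```python
# Categories the draft treats as "significant decisions" (summarized above).
SIGNIFICANT_CATEGORIES = {
    "financial_or_lending",   # granting/denying loans, deposit accounts, installment plans
    "housing",                # approvals or denials of permanent or temporary residence
    "education",              # admissions, credentials, suspension, expulsion
    "employment",             # hiring, promotions, compensation, termination
    "healthcare",             # diagnosis, treatment, health assessments
}

# Explicitly excluded in the most recent draft.
EXCLUDED_CATEGORIES = {"advertising", "marketing"}


def is_significant_decision(category: str) -> bool:
    """Return True if the category falls under the draft's significant-decision list."""
    if category in EXCLUDED_CATEGORIES:
        return False
    return category in SIGNIFICANT_CATEGORIES
```

For example, `is_significant_decision("housing")` returns True, while `is_significant_decision("advertising")` returns False, mirroring the draft's exclusion of behavioral advertising.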

How to Opt Out of Automated Decision-Making in California

Under the current draft, consumers will have the right to opt out of automated decision-making processes used to make significant decisions about them.

Businesses must offer at least two methods for submitting opt-out requests, such as:

  • Toll-free telephone numbers,
  • Email addresses,
  • Online forms,
  • Physical forms submitted by mail or in person.

Businesses that deploy automated decision making for significant decisions are not automatically required to offer an opt-out if they provide an appeals process allowing consumers to request a human review. Notably:

  • The earlier drafts required human reviewers to be “qualified” experts.
  • The current draft only requires the human reviewer to know how to interpret and use the ADM’s output and have the authority to make or change decisions based on that analysis.

In certain situations, consumers do not have opt-out rights. Businesses may refuse to offer opt-outs if automated decision making is used solely for:

  • Detecting or preventing security incidents,
  • Investigating, resisting, or preventing fraud,
  • Protecting the life or physical safety of the consumer or others.

These exceptions are absolute. They exist because these uses of automated decision making are viewed as critical for security, fraud prevention, or emergency safety.

Additionally, businesses do not have to offer opt-outs or appeals in certain contexts:

  • When automated decision making is used purely for workplace or educational purposes (e.g., HR processes).
  • When profiling occurs in publicly accessible places.
  • When automated decision making is used solely for training AI or machine learning models, rather than making decisions about specific individuals.

Once a business verifies an opt-out request, it must stop using automated decision-making for the requesting individual and confirm receipt and completion of the request.
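The opt-out rules above can be summarized as a simple decision flow: check the absolute exceptions first, then the exempt contexts, then whether a human-review appeals process substitutes for the opt-out. The sketch below is a hypothetical illustration of that flow; all names and category strings are assumptions, not regulatory terms.

```python
# Uses for which opt-outs may be refused outright (the "absolute" exceptions).
ABSOLUTE_EXCEPTIONS = {"security_incidents", "fraud_prevention", "physical_safety"}

# Contexts where neither opt-outs nor appeals are required under the draft.
EXEMPT_CONTEXTS = {"workplace_or_education", "public_place_profiling", "model_training_only"}


def handle_opt_out(use_case: str, context: str, offers_human_appeal: bool) -> str:
    """Return the disposition of a verified consumer opt-out request."""
    if use_case in ABSOLUTE_EXCEPTIONS:
        return "deny: exception applies (explain reason and appeal options)"
    if context in EXEMPT_CONTEXTS:
        return "deny: exempt context (explain reason and appeal options)"
    if offers_human_appeal:
        return "alternative: route to human-review appeals process"
    return "grant: stop ADM for this consumer and confirm completion"
```

Note that in the "deny" branches the business still owes the consumer an explanation and guidance, consistent with the denial requirements described above.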

If a business denies an opt-out request, it must inform the consumer why, explain their right to appeal, and provide further guidance on available options. Consumers facing difficulties may also seek assistance from the CPPA.

Right of Transparency/Access

Consumers have a right to know:

  • Whether automated decision-making is being used to make significant decisions about them,
  • How their personal information is processed in those decisions,
  • The types of data inputs used and the types of outputs generated,
  • The potential consequences or outcomes of automated decision-making usage.

This right empowers individuals to understand and potentially challenge the role of automated decision-making in decisions that affect them.

Businesses must provide clear and comprehensive explanations, allowing consumers to make informed choices about engaging with automated decision-making systems.

Earlier drafts would have required highly technical disclosures of automated decision-making logic and key parameters. The current draft takes a more practical approach, balancing transparency with business confidentiality and security.

Profiling Under the Regulations

The draft regulations explicitly include profiling within the scope of ADM. Profiling involves using automated processing to evaluate personal aspects like:

  • Intelligence,
  • Behavior,
  • Performance at work,
  • Economic situation,
  • Health,
  • Personal preferences,
  • Location and movements.

Hence, profiling activities fall under the same rules and opt-out rights as other automated decision-making uses, provided they result in significant decisions about consumers.

Importantly, the regulations clarify that behavioral advertising does not count as significant decision-making under the current draft.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Numerous reputable and esteemed global enterprises rely on Securiti's Data Command Center for their data security, privacy, governance, and compliance needs.

This is because the Data Command Center is equipped with several solutions and modules designed to ensure swift and reliable compliance via complete automation. These modules, ranging from cookie consent management to assessment automation, universal consent, and vendor risk management, empower an organization to maintain real-time oversight of its compliance with all relevant regulatory requirements via a centralized dashboard.

Furthermore, this enables an organization to take proactive measures as soon as a potential violation or instance of non-compliance is detected.

In such delicate situations, preventing an incident can come down to a few vital minutes or even seconds, making a solution like the Data Command Center all the more critical.

Request a demo today to learn more about how Securiti can help your organization implement automation in a regulatory-compliant manner.

Frequently Asked Questions

Here are some other commonly asked questions you may have related to automated decision-making under CPRA.

What is automated decision-making in layman's terms?

In layman's terms, automated decision-making refers to systems and processes that make decisions based on algorithms and machine learning models, without human involvement at the operational and functional levels. Such decisions can range from simple recommendations, like the best restaurants near a person, to complex evaluations, such as credit approvals and hiring decisions.

What is the difference between AI and automated decision-making?

Though they are closely related, there is a difference between the two. AI refers to a field of computer science focused on creating systems that can perform tasks that would typically require human intelligence. Automated decision-making, on the other hand, is the application of AI to make decisions without human input by analyzing data and determining the best course of action.

Is automated decision-making opt-in or opt-out under the CPRA?

The draft ADMT regulations under the CPRA give consumers opt-out rights, not opt-in rights. This means businesses can generally use automated decision-making technologies for significant decisions unless a consumer actively chooses to opt out.
