An Overview of California’s Assembly Bill 2885 on Artificial Intelligence

Contributors

Anas Baig

Product Marketing Manager at Securiti

Aswah Javed

Associate Data Privacy Analyst at Securiti

Published April 3, 2025


I. Introduction

The rapid evolution of artificial intelligence (AI) necessitates robust regulatory frameworks that address AI’s risks and influence on society. California, long a leader in technology regulation, is at the forefront of addressing AI-related risks.

California Assembly Bill 2885 (AB 2885), one of the state's most recent initiatives, specifically addresses AI development, deployment, and regulation. The bill standardizes the definition of "artificial intelligence" across various California laws. This is crucial considering the AI industry's explosive growth and growing applicability in everyday life. The bill amended the Business and Professions Code, the Education Code, and the Government Code relating to artificial intelligence. AB 2885 was approved by the Governor and chaptered by the Secretary of State on September 28, 2024.

This guide dives into the key definitions under AB 2885, amendments, key provisions, implications for businesses, and how Securiti can help ensure swift compliance.

II. Key Definitions

AB 2885 amended the Business and Professions Code, the Education Code, and the Government Code relating to artificial intelligence to standardize the following definitions:

A. Artificial Intelligence

“Artificial Intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

B. Automated Decision System

“Automated decision system” means a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decision-making and materially impacts natural persons. “Automated decision system” does not include a spam email filter, firewall, antivirus software, identity and access management tools, calculator, database, dataset, or other compilation of data.

C. Content

“Content” means statements or comments made by users and media that are created, posted, shared, or otherwise interacted with by users on an internet-based service or application. It does not include media put on a service or application exclusively for cloud storage, transmitting files, or file collaboration.

D. Deepfake

“Deepfake” means audio or visual content, generated or manipulated by artificial intelligence, that would falsely appear to be authentic or truthful and features depictions of people appearing to say or do things they did not say or do without their consent.

E. Digital Content Forgery

“Digital content forgery” means the use of technologies, including artificial intelligence and machine learning techniques, to fabricate or manipulate audio, visual, or text content with the intent to mislead.

F. Digital Content Provenance

“Digital content provenance” means the verifiable chronology of the original piece of digital content, such as an image, video, audio recording, or electronic document.

G. High-risk Automated Decision System

“High-risk automated decision system” means an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice.
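The definition above can be sketched as a simple predicate. The dataclass, field names, and domain list below are illustrative assumptions for clarity only; the bill prescribes no schema or code:

```python
from dataclasses import dataclass

# Domains AB 2885 lists as carrying a "legal or similarly significant effect".
HIGH_RISK_DOMAINS = {
    "housing", "education", "employment", "credit",
    "health care", "criminal justice",
}

@dataclass
class AutomatedDecisionSystem:
    name: str
    assists_or_replaces_human_decision: bool
    impact_domain: str  # area of life the output materially affects

def is_high_risk(ads: AutomatedDecisionSystem) -> bool:
    """Rough reading of the definition: the system must assist or replace
    discretionary human decisions AND materially impact a listed domain."""
    return (ads.assists_or_replaces_human_decision
            and ads.impact_domain in HIGH_RISK_DOMAINS)

# A tenant-screening model assists landlords' housing decisions: high risk.
screener = AutomatedDecisionSystem("tenant-screener", True, "housing")
# A spam filter is expressly excluded from "automated decision system".
spam = AutomatedDecisionSystem("spam-filter", False, "email")
print(is_high_risk(screener), is_high_risk(spam))  # True False
```

In practice the classification turns on legal analysis of the system's actual use, not metadata flags; the sketch only shows how the two statutory conditions compose.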

III. Government Operations Agency

Section 4 of AB 2885 outlines the amendments to the Government Code. Under these provisions, the Secretary of Government Operations must assess the impact and risks of deepfake and digital content forgery technologies on California’s state government, California-based businesses, and residents. The evaluation will cover:

  • The widespread prevalence of deepfakes and related privacy risks.
  • The potential privacy impacts of these technologies and the impact of digital content forgery technologies on civic engagement, including voter influence.
  • Legal implications surrounding these technologies.
  • Best practices for mitigating the risks, including the feasibility of adopting a digital content provenance standard to combat forgery and deepfakes.

This evaluation aims to enhance privacy, security, and trust in digital content. Additionally, the Secretary of Government Operations is tasked with developing a plan that includes:

  • Investigating the feasibility and challenges of developing digital content provenance standards for state departments.
  • Enhancing scrutiny of digital forgeries for internet companies, journalists, watchdog organizations, and the public.
  • Developing mechanisms for content creators to cryptographically certify the authenticity of original media and nondeceptive manipulations.
  • Developing or identifying tools for the public to verify media authenticity while safeguarding privacy and civil liberties.

This plan aims to improve trust in digital content while protecting personal privacy.
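The cryptographic certification idea in the plan above can be illustrated with a minimal sketch using only Python's standard library. Note the assumptions: the key, function names, and use of an HMAC are stand-ins; real provenance standards (e.g., C2PA) rely on public-key signatures and signed metadata manifests rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical creator signing key; a real scheme would use an
# asymmetric key pair so anyone can verify without the secret.
CREATOR_KEY = b"creator-signing-key"

def certify(media: bytes) -> str:
    """Bind the exact media bytes to the creator's key."""
    digest = hashlib.sha256(media).digest()
    return hmac.new(CREATOR_KEY, digest, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(certify(media), tag)

original = b"original photo bytes"
tag = certify(original)
print(verify(original, tag))        # True: untouched media verifies
print(verify(b"manipulated", tag))  # False: any edit breaks the tag
```

The point of the sketch is the property the plan seeks: any manipulation of the certified media, however small, invalidates the provenance tag.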

Report

The Secretary of Government Operations must submit a report to the Legislature by October 1, 2024, evaluating the possible applications and risks of deepfake technology for California businesses and the state government. The report will include a coordinated plan and recommendations for amending the definitions of digital content forgery and deepfake, and must comply with Government Code Section 9795. This provision will expire on January 1, 2025, unless future legislation extends it.

IV. The Department of Technology

AB 2885 amends the Government Code to require the Department of Technology to conduct a detailed inventory, no later than September 1, 2024, of all high-risk automated decision systems that any state agency has proposed for use, development, or procurement, or is currently using, developing, or procuring. The department must coordinate with other interagency bodies as it considers appropriate. This inventory must include the following:

  • Decisions that an automated decision system can make or support and intended benefits.
  • Research results that evaluate the effectiveness and comparative advantages of the automated decision system's applications and alternatives.
  • Data categories and personal information used.
  • Risk mitigation measures, including performance metrics, cybersecurity controls, privacy controls, and risk assessments.
  • Processes for contesting decisions made by these systems.

This aims to ensure transparency, security, and fairness in AI use by state entities.
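The required inventory elements can be modeled as a simple record. The class and field names below are illustrative paraphrases of the statute's list, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One record in the high-risk automated decision system inventory."""
    system_name: str
    decisions_supported: list[str]  # decisions made/supported and benefits
    effectiveness_research: str     # evaluation results and alternatives
    data_categories: list[str]      # incl. personal information used
    risk_mitigations: list[str]     # metrics, cyber/privacy controls, assessments
    contest_process: str            # how affected people can contest decisions

entry = InventoryEntry(
    system_name="benefits-eligibility-scorer",
    decisions_supported=["prioritize benefit applications for review"],
    effectiveness_research="2024 internal accuracy and fairness evaluation",
    data_categories=["income records", "household size"],
    risk_mitigations=["annual bias audit", "role-based access control"],
    contest_process="written appeal reviewed by a human caseworker",
)
print(entry.system_name)
```

Structuring each entry around the five statutory elements makes the annual reporting obligation (discussed below) a matter of serializing the records rather than assembling them ad hoc.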

Report

The Department of Technology must provide a comprehensive inventory report on high-risk automated decision systems to the Senate Committee on Governmental Organization and the Assembly Committee on Privacy and Consumer Protection by January 1, 2025, and annually thereafter. This reporting obligation will expire on January 1, 2029. All reports must follow the submission protocols for legislative reports set out in Section 9795 of the Government Code.

V. Key Provisions Under AB 2885

AB 2885’s key provisions include:

A. High-Risk AI Systems Inventory

The Department of Technology must identify and list state entities using "high-risk AI systems" in an inventory. These systems are classified as high-risk because they have the potential to substantially influence people or groups, especially in areas like public safety, employment, or healthcare. The aim is to ensure that governmental entities are well-informed about AI's applications and possible societal effects.

The Department of Technology will evaluate how these systems make decisions and ensure they are utilized responsibly. The inventory will include descriptions of the AI systems, their intended application, and any data used in their training or functioning.

B. Deepfake and Manipulative AI Content

AB 2885 highlights concerns over AI-generated manipulative content, such as deepfakes, which pose a significant risk to public safety and privacy. AB 2885 proposes mechanisms to detect and mitigate the use of AI-generated deceptive content within state operations.

C. Economic Development Subsidies

Section 53083.1 of the Government Code, as amended, mandates that local authorities provide comprehensive information to the public before granting warehouse distribution center subsidies for economic development. This includes the beneficiary's information, the subsidy’s schedule, projected job creation, wages, and tax revenue.

Agencies must also conduct yearly public hearings and report on AI's impacts on employment, particularly AI-related automation. Nondisclosure agreements are prohibited, and agencies must report to the Governor’s Office. These measures ensure transparency and accountability for public subsidies.

VI. Implications of AB 2885

AB 2885 carries implications for the various entities affected by its provisions.

A. For State Agencies

The bill mandates that state agencies conduct comprehensive audits of bias and fairness, evaluate their AI systems, and ensure that AI usage is transparent. Although these steps may result in higher administrative expenses, they aim to ensure AI's appropriate and moral use.

B. For Californians

The measure protects individuals against opaque or biased AI decisions. It gives individuals the right to know how AI tools affect them and the ability to dispute decisions made by AI systems.

C. For Technology Developers

AI system developers must comply with evolving guidelines on transparency, bias mitigation, and ethical considerations. Non-compliance may invite greater scrutiny of the AI models developers produce and could influence the design of future AI technologies.

VII. How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti’s Genstack AI Suite removes the complexities and risks inherent in the GenAI lifecycle, empowering organizations to swiftly and safely utilize their structured and unstructured data anywhere with any AI and LLMs. It provides features such as secure data ingestion and extraction, data masking, anonymization, and redaction, as well as indexing and retrieval capabilities. Additionally, it facilitates the configuration of LLMs for Q&A, inline data controls for governance, privacy, and security, and LLM firewalls to enable the safe adoption of GenAI.

Request a demo to learn more.

Frequently Asked Questions

What is AB 2885?

AB 2885 is a California law that establishes a consistent definition of artificial intelligence and sets guidelines for how state agencies use AI systems. It aims to improve clarity, oversight, and accountability in AI development and deployment.

Who needs to comply with AB 2885?

If your AI system is used by California state agencies, or supports them indirectly, you must comply with AB 2885. This includes being able to document how the system works, how data is handled, and how risk controls and transparency are maintained.

What does AB 2885 require of state agencies?

Under AB 2885, state agencies must:

  • Keep an inventory of all AI and automated decision systems they use
  • Identify systems considered high risk, especially those affecting people’s rights or access to services
  • Evaluate these systems for bias, accuracy, fairness, and transparency
  • Report findings to support accountability and public awareness

Together, these requirements help ensure AI is used in line with ethical and privacy standards.
