California’s Legal Advisories on AI

Author

Aswah Javed

Associate Data Privacy Analyst at Securiti

Introduction

California Attorney General Rob Bonta issued two legal advisories on January 13, 2025, reminding consumers of their rights and advising businesses and healthcare entities who develop, sell, or use artificial intelligence (AI) about their obligations under California law.

The first legal advisory advises consumers and entities about their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws; the second advisory provides guidance specific to healthcare entities about their obligations under California law.

The advisories provide guidance but are not intended to be comprehensive and do not identify all laws that may apply to the development and use of AI.

Advisory 1: Application of Existing California Laws to Artificial Intelligence Advisory

This advisory provides an overview of many existing California laws that may apply to entities that develop, sell, or use AI, including consumer protection, civil rights, competition, data protection laws, and election misinformation laws.

1. California’s Unfair Competition Law

California’s Unfair Competition Law protects the state’s residents against unlawful, unfair, or fraudulent business acts or practices. Practices that deceive or harm consumers fall squarely within the purview of the Unfair Competition Law, and developers, entities that use AI, and end-users of AI systems should be aware that traditional consumer legal protections apply equally in the AI context.

For example, it may be unlawful under California’s Unfair Competition Law to:

  • Falsely advertise the accuracy, quality, or utility of AI systems. This includes claiming that a system has a capability it lacks, representing that a system is entirely powered by AI when humans are involved in its operation, or claiming that AI performs a task better than a human when it does not.
  • Use AI to create deceptive content, such as deepfakes, chatbots, and voice clones that appear to represent people, events, or utterances that never existed or occurred.
  • Use AI to create and knowingly use another person’s name, voice, signature, photograph, or likeness without that person’s prior consent.
  • Use AI to impersonate a real person for purposes of harming, intimidating, threatening, or defrauding another person.
  • Use AI to impersonate a real person for purposes of receiving money or property.
  • Use AI to impersonate a real person for any unlawful purpose.
  • Use AI to impersonate a government official in the execution of official duties.
  • Use AI in a manner that is unfair, including using AI in a manner that results in negative impacts that outweigh its utility, or in a manner that offends public policy, is immoral, unethical, oppressive, or unscrupulous, or causes substantial injury.
  • Create, market, or disseminate an AI system that does not comply with federal or state laws, including those governing false advertising, civil rights, privacy, and specific industries and activities.

Businesses may also be liable for supplying AI products when they know or should have known that AI will be used to violate the law.

2. Other Laws

The advisory lists several other laws that apply to AI developers and deployers, including:

  • California’s False Advertising Law: prohibits false advertising regarding AI products, their capabilities, and the use of AI in goods or services.
  • California’s Competition Laws: AI developers and users should be aware of potential risks to fair competition, such as algorithmic price-setting by AI systems and the potential for dominant AI companies to take anticompetitive actions that harm competition in AI markets.
  • California’s Civil Rights Laws: Entities using AI systems should be wary of potential biases and must provide specific reasons for adverse actions taken against individuals, including when AI was used in the decision.
    • For instance, the federal Fair Credit Reporting Act, Equal Credit Opportunity Act, and the California Consumer Credit Reporting Agencies Act require such specific reasons to be provided to Californians who receive adverse actions based on their credit scores.
    • The Consumer Financial Protection Bureau has clarified that creditors using AI or complex credit models must still provide reasons when denying or taking another adverse action against an individual.
  • Election Misinformation Laws: California law restricts the use of AI in elections, prohibiting the use of undeclared chatbots to incentivize purchases or influence votes, the impersonation of candidates, and the distribution of deceptive election-related media.
  • Data Protection Laws:
    • California Consumer Privacy Act (CCPA): AI developers and users must ensure that data collection is proportionate to the intended purpose, that data is not used for undisclosed purposes, and that research uses are compatible with the context in which the data was collected.
    • AB 1008: confirms that the protections for personal information in the CCPA apply to personal information in AI systems that are capable of outputting personal information.
    • SB 1223: expands the definition of sensitive personal information to include “neural data.”
    • California Invasion of Privacy Act (CIPA): restricts recording or listening to private electronic communication and prohibits the use of systems that examine or record voiceprints to determine the truth or falsity of statements without consent.
    • Student Online Personal Information Protection Act (SOPIPA): protects K-12 students’ data, including education and healthcare data, from being sold, used for targeted advertising, or amassed into profiles by education technology providers; developers and users of such technology must ensure compliance with SOPIPA.
    • Confidentiality of Medical Information Act (CMIA): Developers and users should ensure that AI systems used for healthcare, including direct-to-consumer services, comply with the CMIA.

3. New Laws

This advisory also summarizes several new California AI laws that went into effect on January 1, 2025. These include laws regarding:

  • Disclosure Requirements for Businesses
    • AB 2013 requires AI developers to disclose information on their websites about their training data on or before January 1, 2026, including a high-level summary of the datasets used in the development of the AI system or service.
    • AB 2905 requires telemarketing calls that use an AI-generated or significantly modified artificial voice to disclose that use.
    • SB 942 requires AI developers, starting January 1, 2026, to make freely accessible tools available to detect whether specified content was generated by their generative AI systems.
  • Unauthorized Use of Likeness
      • AB 2602 requires that contracts authorizing the use of an individual’s voice and likeness in a digital replica created through AI technology include a “reasonably specific description” of the proposed use and that the individual be represented by legal counsel or by a labor union. Absent these requirements, the contract is unenforceable, unless the uses are otherwise consistent with the terms of the contract and the underlying work.
      • AB 1836 prohibits the use of a deceased personality’s digital replica without prior consent within 70 years of the personality’s death, imposing a minimum $10,000 fine for the violation. A deceased personality is any natural person whose name, voice, signature, photograph, or likeness has commercial value at the time of that person’s death, or because of that person’s death.
  • Use of AI in Election and Campaign Materials: AB 2355 and AB 2655
  • Prohibition and Reporting of Exploitative Uses of AI: AB 1831 and SB 981
  • Supervision of AI Tools in Healthcare Settings
    • SB 1120 requires health insurers to ensure that licensed physicians supervise the use of AI tools that make decisions about healthcare services and insurance claims.

Advisory 2: Application of Existing California Law to Artificial Intelligence in Healthcare

This advisory guides healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use AI and other automated decision systems by detailing entities’ obligations under California law, including under the state’s consumer protection, civil rights, data privacy, and professional licensing laws.

For example, it may be unlawful in California to:

  • Deny health insurance claims using AI or other automated decision-making systems in a manner that overrides doctors’ views about necessary treatment.
  • Use generative AI or automated decision-making tools to draft patient notes, communications, or medical orders that contain erroneous information or information based on stereotypes relating to race or other protected classifications.
  • Use AI-based decision-making systems to predict patients’ healthcare needs based on past claims data, with the effect of denying services to historically disadvantaged patients while enhancing services for groups with robust past access to care.
  • Double-book a patient’s appointment, or create other administrative barriers, because AI or other automated decision-making systems predict that a patient is the “type of person” more likely to miss an appointment.
  • Conduct cost/benefit analysis of medical treatments for patients with disabilities using AI or other automated decision-making systems that are based on stereotypes that undervalue the lives of people with disabilities.

California’s Health Consumer Protection Laws

California state law prohibits:

  • payment of referral fees or kickbacks for medical services and other types of fraudulent billing, such as the use of AI to generate fraudulent bills or inaccurate upcoding of patient records.
  • supplying AI tools when the businesses know, or should have known, that AI will be used to violate the law.

California's professional licensing laws mandate that only licensed human physicians may practice medicine; AI cannot be licensed to do so. Licensed physicians may violate conflict-of-interest laws if they or their family members have a financial interest in AI services. Using AI or other automated decision tools to make decisions about patients' medical treatment may also violate California's ban on the practice of medicine by corporations and other "artificial legal entities."

Recent amendments to the Knox-Keene Act and California Insurance Code limit healthcare service plans’ ability to use AI or other automated decision systems to deny coverage. Healthcare service plans must ensure that AI does not replace a licensed healthcare provider's decision-making, that decisions are based on individual enrollees' medical history and clinical circumstances, that the AI does not discriminate, that it is open to audit and periodically reviewed, and that patient data is not used beyond its intended and stated purpose.

California Anti-Discrimination Laws

The non-discrimination mandate of California law covers healthcare programs and activities. These rules prohibit the types of discriminatory practices likely to be caused by AI, including disparate impact discrimination (also known as “discriminatory effect” or “adverse impact”) and denial of full and equal access. The use of AI in healthcare is subject to additional state laws prohibiting discrimination against healthcare consumers in various settings, such as California’s Unruh Civil Rights Act, the California Insurance Code, the California Health and Safety Code, and the California Fair Employment and Housing Act (FEHA).

California’s Patient Privacy and Autonomy Laws

The health AI sector has experienced significant growth due to the vast amounts of patient data used to build and train AI and make decisions that impact health services. California state medical privacy laws provide more stringent protections than federal health privacy laws like HIPAA. For instance:

  • The Confidentiality of Medical Information Act (CMIA) and the Information Practices Act govern the use and disclosure of Californians' medical information, requiring entities to preserve confidentiality and ensure patients have access to that information.
  • Sensitive information, including mental and behavioral healthcare and reproductive and sexual healthcare, receives heightened protection.
  • California law requires physicians to provide information that a reasonable person in the patient's position would need for informed consent to a proposed course of treatment.
  • Recent amendments to the CMIA require providers and electronic health records (EHRs) to keep patients' reproductive and sexual health information confidential and separate from their medical records. As developers and users of EHRs and related applications increasingly incorporate AI, they must ensure compliance with the CMIA and limit access to and improper use of sensitive information.
  • The Genetic Privacy Information Act provides special protections for individuals’ genetic data, and California healthcare service plans and other entities are prohibited from disclosing to third parties the results of genetic tests without the patient’s permission.
  • “Dark patterns” cannot be used to obtain patient consent.
  • The Patient Access to Health Records Act provides California patients and their representatives with the right to obtain their own medical records.
  • The Insurance Information and Privacy Protection Act gives healthcare consumers the right to determine what information has been collected about them, and the reasons for adverse decisions.

California also has general privacy laws that apply to the use of AI, including the constitutional right to privacy that applies to both government and private entities.

These advisories are not exhaustive; they provide an overview of how existing laws apply to AI and underscore the importance of complying with state, federal, and local laws when developing and deploying AI.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti’s Genstack AI Suite removes the complexities and risks inherent in the GenAI lifecycle, empowering organizations to swiftly and safely utilize their structured and unstructured data anywhere with any AI and LLMs. It provides features such as secure data ingestion and extraction, data masking, anonymization, and redaction, as well as indexing and retrieval capabilities. Additionally, it facilitates the configuration of LLMs for Q&A, inline data controls for governance, privacy, and security, and LLM firewalls to enable the safe adoption of GenAI.

Request a demo to learn more.
