Introduction
California Attorney General Rob Bonta issued two legal advisories on January 13, 2025, reminding consumers of their rights and advising businesses and healthcare entities who develop, sell, or use artificial intelligence (AI) about their obligations under California law.
The first legal advisory advises consumers and entities about their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws; the second advisory provides guidance specific to healthcare entities about their obligations under California law.
The advisories provide guidance but are not intended to be comprehensive and do not identify all laws that may apply to the development and use of AI.
Advisory 1: Application of Existing California Laws to Artificial Intelligence
This advisory provides an overview of many existing California laws that may apply to entities that develop, sell, or use AI, including consumer protection, civil rights, competition, data protection laws, and election misinformation laws.
1. California’s Unfair Competition Law
California’s Unfair Competition Law protects the state’s residents against unlawful, unfair, or fraudulent business acts or practices. Practices that deceive or harm consumers fall squarely within the purview of the Unfair Competition Law, and developers, entities that use AI, and end-users of AI systems should be aware that traditional consumer legal protections apply equally in the AI context.
For example, it may be unlawful under California’s Unfair Competition Law to:
- Falsely advertise the accuracy, quality, or utility of AI systems. This includes claiming that an AI system has capabilities it lacks, representing that a system is entirely powered by AI when humans perform some of its functions, or claiming that an AI system performs tasks better than humans when it does not.
- Use AI to create deceptive content, such as deepfakes and undisclosed chatbots, or to create media falsely depicting real people doing or saying things they never did.
- Use AI to create and knowingly use another person’s name, voice, signature, photograph, or likeness without that person’s prior consent.
- Use AI to impersonate a real person for purposes of harming, intimidating, threatening, or defrauding another person.
- Use AI to impersonate a real person for purposes of receiving money or property.
- Use AI to impersonate a real person for any unlawful purpose.
- Use AI to impersonate a government official in the execution of official duties.
- Use AI in a manner that is unfair, including using AI in a manner that results in negative impacts that outweigh its utility, or in a manner that offends public policy, is immoral, unethical, oppressive, or unscrupulous, or causes substantial injury.
- Create, market, or disseminate an AI system that does not comply with federal or state laws, including laws governing false advertising, civil rights, and privacy, as well as laws regulating specific industries and activities.
Businesses may also be liable for supplying AI products when they know or should have known that AI will be used to violate the law.
2. Other Laws
The advisory identifies several other laws applicable to AI developers and deployers, including:
- California’s False Advertising Law: prohibits false advertising regarding AI products, their capabilities, and the use of AI in goods or services.
- California’s Competition Laws: AI developers and users should be aware of potential risks to fair competition, such as pricing set by AI systems, and the potential for anticompetitive actions by dominant AI companies to harm competition in AI markets.
- California’s Civil Rights Laws: entities that develop or use AI systems should guard against potential biases and must provide specific reasons for adverse actions taken against individuals, including when AI was used in the decision.
- For instance, the federal Fair Credit Reporting Act, Equal Credit Opportunity Act, and the California Consumer Credit Reporting Agencies Act require such specific reasons to be provided to Californians who receive adverse actions based on their credit scores.
- The Consumer Financial Protection Bureau has clarified that creditors using AI or complex credit models must still provide reasons when denying or taking another adverse action against an individual.
- Election Misinformation Laws: California law restricts deceptive uses of AI in elections, including deploying undisclosed bots to incentivize purchases or influence votes, impersonating candidates, and distributing deceptive election-related media.
- Data Protection Laws:
- California Consumer Privacy Act (CCPA): AI developers and users must ensure that data collection is proportionate to the disclosed purpose, that personal information is not used for undisclosed purposes, and that any research uses are compatible with the context in which the data was collected.
- AB 1008: confirms that the protections for personal information in the CCPA apply to personal information in AI systems that are capable of outputting personal information.
- SB 1223: expands the definition of sensitive personal information to include “neural data.”
- California Invasion of Privacy Act (CIPA): restricts recording or listening to private electronic communication and prohibits the use of systems that examine or record voiceprints to determine the truth or falsity of statements without consent.
- Student Online Personal Information Protection Act (SOPIPA): protects K-12 students’ data, including education and healthcare data, from being sold, used for targeted advertising, or amassed into profiles by education technology providers; AI developers and users serving K-12 schools must ensure compliance with SOPIPA.
- Confidentiality of Medical Information Act (CMIA): Developers and users should ensure that AI systems used for healthcare, including direct-to-consumer services, comply with the CMIA.
3. New Laws
This advisory also summarizes several new California AI laws that went into effect on January 1, 2025. These include laws regarding:
- Disclosure Requirements for Businesses
- AB 2013 requires AI developers to disclose information on their websites about their training data on or before January 1, 2026, including a high-level summary of the datasets used in the development of the AI system or service.
- AB 2905 requires telemarketing calls that use an artificial voice generated or significantly modified by AI to disclose that use.
- SB 942 places obligations on AI developers, starting January 1, 2026, to make free and accessible tools to detect whether specified content was generated by generative AI systems.
- Unauthorized Use of Likeness
- AB 2602 requires that contracts authorizing the use of an individual’s voice and likeness in a digital replica created through AI technology include a “reasonably specific description” of the proposed use and that the individual be represented by legal counsel or by a labor union. Absent these requirements, the contract is unenforceable, unless the uses are otherwise consistent with the terms of the contract and the underlying work.
- AB 1836 prohibits the use of a deceased personality’s digital replica without prior consent within 70 years of the personality’s death, with a minimum penalty of $10,000 per violation. A deceased personality is any natural person whose name, voice, signature, photograph, or likeness had commercial value at the time of, or because of, that person’s death.
- Use of AI in Election and Campaign Materials: AB 2355 and AB 2655
- Prohibition and Reporting of Exploitative Uses of AI: AB 1831 and SB 981
- Supervision of AI Tools in Healthcare Settings
- SB 1120 requires health insurers to ensure that licensed physicians supervise the use of AI tools that make decisions about healthcare services and insurance claims.
Advisory 2: Application of Existing California Law to Artificial Intelligence in Healthcare
This advisory guides healthcare providers, insurers, vendors, investors, and other healthcare entities that develop, sell, and use AI and other automated decision systems by detailing entities’ obligations under California law, including under the state’s consumer protection, civil rights, data privacy, and professional licensing laws.
For example, it may be unlawful in California to:
- Deny health insurance claims using AI or other automated decision-making systems in a manner that overrides doctors’ views about necessary treatment.
- Use generative AI or other automated decision-making tools to draft patient notes, communications, or medical orders that include erroneous information, including information based on stereotypes relating to race or other protected classifications.
- Use AI-based decision-making systems to predict future healthcare needs based on past claims data, denying services to patients who historically had less access to care while enhancing services for groups with robust past access.
- Double-book a patient’s appointment, or create other administrative barriers, because AI or other automated decision-making systems predict that a patient is the “type of person” more likely to miss an appointment.
- Conduct cost/benefit analysis of medical treatments for patients with disabilities using AI or other automated decision-making systems that are based on stereotypes that undervalue the lives of people with disabilities.
California’s Health Consumer Protection Laws
California state laws prohibit:
- payment of referral fees or kickbacks for medical services and other types of fraudulent billing, such as the use of AI to generate fraudulent bills or inaccurate upcoding of patient records.
- supplying AI tools when the businesses know, or should have known, that AI will be used to violate the law.
Under California's professional licensing laws, only human physicians may be licensed to practice medicine; AI cannot be licensed for this purpose. Licensed physicians may violate conflict-of-interest laws if they or their family members have a financial interest in AI services. Using AI or other automated decision tools to make decisions about patients' medical treatment may also violate California's ban on the practice of medicine by corporations and other "artificial legal entities."
Recent amendments to the Knox-Keene Act and the California Insurance Code limit healthcare service plans’ ability to use AI or other automated decision systems to deny coverage. Healthcare service plans must ensure that AI does not replace a licensed healthcare provider's decision-making, that determinations are based on an individual enrollee's medical history and clinical circumstances, and that the AI system does not discriminate, is open to audit, is periodically reviewed, and does not use patient data beyond its intended and stated purpose.
California Anti-Discrimination Laws
The non-discrimination mandate of California law covers healthcare programs and activities. These rules prohibit the types of discriminatory practices likely to be caused by AI, including disparate impact discrimination (also known as “discriminatory effect” or “adverse impact”) and denial of full and equal access. The use of AI in healthcare is subject to additional state laws prohibiting discrimination against healthcare consumers in various settings, such as California’s Unruh Civil Rights Act, California’s Insurance Code, California’s Health and Safety Code, and the California Fair Employment and Housing Act (FEHA).
California’s Patient Privacy and Autonomy Laws
The health AI sector has experienced significant growth due to the vast amounts of patient data used to build and train AI and make decisions that impact health services. California state medical privacy laws provide more stringent protections than federal health privacy laws like HIPAA. For instance:
- The Confidentiality of Medical Information Act (CMIA) and the Information Practices Act govern the use and disclosure of Californians' medical information, requiring entities to preserve confidentiality and ensure patients have access to that information.
- Sensitive information, including mental and behavioral healthcare and reproductive and sexual healthcare information, receives heightened protection.
- California law requires physicians to provide information that a reasonable person in the patient's position would need for informed consent to a proposed course of treatment.
- Recent amendments to the CMIA require providers and electronic health records (EHRs) to keep patients' reproductive and sexual health information confidential and separate from their medical records. As developers and users of EHRs and related applications increasingly incorporate AI, they must ensure compliance with the CMIA and limit access to and improper use of sensitive information.
- The Genetic Privacy Information Act provides special protections for individuals’ genetic data, and California healthcare service plans and other entities are prohibited from disclosing to third parties the results of genetic tests without the patient’s permission.
- “Dark patterns” cannot be used to obtain patient consent.
- The Patient Access to Health Records Act provides California patients and their representatives with the right to obtain their own medical records.
- The Insurance Information and Privacy Protection Act gives healthcare consumers the right to know what information has been collected about them and the reasons for adverse decisions.
California also has general privacy laws that apply to the use of AI, including the constitutional right to privacy that applies to both government and private entities.
These advisories are not exhaustive; they provide an overview of how existing laws apply to AI and highlight the importance of complying with state, federal, and local laws when developing and deploying AI.
How Securiti Can Help
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Securiti’s Genstack AI Suite removes the complexities and risks inherent in the GenAI lifecycle, empowering organizations to swiftly and safely utilize their structured and unstructured data anywhere with any AI and LLMs. It provides features such as secure data ingestion and extraction, data masking, anonymization, and redaction, as well as indexing and retrieval capabilities. Additionally, it facilitates the configuration of LLMs for Q&A, inline data controls for governance, privacy, and security, and LLM firewalls to enable the safe adoption of GenAI.
Request a demo to learn more.