Securiti’s AI Regulation digest provides a comprehensive overview of the most recent significant global developments, announcements, and changes in the field of AI regulation. Our website will regularly update this information, presenting a monthly roundup of key activities. Each regulatory update will include links to related resources at the bottom for your reference.
North and South America Jurisdiction
1. NIST Publishes Guidelines On Differential Privacy Evaluation
Date: March 6, 2025
Summary: The National Institute of Standards and Technology (NIST) released guidelines for evaluating differential privacy, a Privacy-Enhancing Technology (PET).
The guidelines aim to standardize the evaluation of privacy risks and mitigations in datasets, detailing the components of a differential privacy guarantee and providing an interactive software archive. They are intended for the range of professionals who manage data analytics risks, but are not intended to serve as legal guidance for government agencies. The guidelines cover the definition of differential privacy, techniques to achieve it, notably the addition of random noise, and practical deployment concerns such as trust models, implementation challenges, and security considerations. Read More.
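To make the "random noise" technique concrete, here is a minimal, illustrative sketch of the classic Laplace mechanism applied to a counting query. It is not drawn from the NIST guidelines themselves; the dataset, the epsilon values, and the function name are assumptions chosen purely for illustration.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# Not taken from the NIST guidelines; epsilon and the query are illustrative.
import numpy as np

def dp_count(data, predicate, epsilon=1.0):
    """Return a differentially private count of records satisfying `predicate`.

    A counting query has sensitivity 1 (adding or removing one record changes
    the true count by at most 1), so Laplace noise with scale 1/epsilon
    yields an epsilon-differentially-private answer.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: privately estimate how many ages in a toy dataset exceed 40.
ages = [23, 45, 31, 67, 52, 29, 41]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

Smaller epsilon values add more noise, trading accuracy for a stronger privacy guarantee, which is exactly the kind of deployment decision the guidelines discuss.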
2. California Civil Rights Council Adopts Regulations On Use Of Automated Decision Systems
Date: March 21, 2025
Summary: The California Civil Rights Council (CRC) has adopted its final regulations on the use of automated decision systems (ADS).
The regulations apply existing anti-discrimination law to AI, responding to employers' increasing use of ADS in personnel-related decisions. They target potential employment discrimination resulting from AI use, such as failing to hire individuals on a protected basis. The regulations provide a standardized definition of ADS and prohibit its discriminatory use in employment-related decisions: such systems may not be used to make employment decisions that discriminate on a protected basis, nor in ways that express a preference for certain individuals in pre-employment practices. The regulations also extend record-keeping requirements for employment-related documents from two to four years and add prohibitions on inquiring about criminal history and on discrimination based on English proficiency, height, weight, and other personal characteristics.
The regulations are now under review at the California Office of Administrative Law for approval. Read More.
3. Virginia AI Bill Vetoed By Governor
Date: March 24, 2025
Summary: The Virginia legislature passed HB 2094 on February 20, 2025. Modeled after Colorado's AI Act but narrower in scope, HB 2094 was meant to regulate the development and deployment of high-risk AI systems within the state; it applies only where the output of an AI system serves as the “principal basis” for a consequential decision without meaningful human review, oversight, involvement, or intervention. Unlike the Colorado AI Act, HB 2094 imposes no general disclosure or public reporting obligations, apart from transparency requirements for high-risk generative AI systems that generate or modify synthetic content.
However, Virginia Governor Glenn Youngkin vetoed the bill on March 24, 2025, stating that enacting it would establish a burdensome AI regulatory framework and stifle the state's AI industry. The veto follows a February call from the US Chamber of Commerce urging the Governor to reject the bill, citing, among other concerns, its potential adverse impact on small businesses. Read More.
4. Utah Legislature Passes Several AI-Related Laws
Date: March 26, 2025
Summary: The Utah legislature passed several laws on March 26, 2025, including the following:
HB 452: This law applies to mental health chatbots that use GenAI to converse with people seeking mental health therapy. Under the law, suppliers of such chatbots must refrain from advertising products or services during user interactions unless the user explicitly consents, and from selling or sharing any individually identifiable health information gathered from users. Licensed mental health professionals cannot be replaced with such GenAI systems. Suppliers must also maintain thorough documentation, including a detailed policy describing the involvement of licensed mental health professionals in chatbot development, processes for regular testing and review of chatbot performance, and measures to prevent discriminatory treatment of users.
SB 226: The Artificial Intelligence Consumer Protection Amendments narrow the law's scope by limiting GenAI disclosure requirements to instances where a consumer or supplier directly asks during a “high-risk” interaction. “High-risk” interactions are those in which a GenAI system collects sensitive personal information and involves significant decision-making, such as in financial, legal, medical, and mental health contexts. The law also includes a safe harbor for AI suppliers that provide clear disclosures at the start of or throughout an interaction, ensuring users are aware they are engaging with AI. It requires AI developers and deployers to disclose when GenAI is used in high-risk scenarios, with violations resulting in civil penalties of up to $5,000 per incident. Additionally, the legislation extends the repeal date of the state's Artificial Intelligence Policy Act to July 2027. Read More on HB 452 | Read More on SB 226.
EMEA Jurisdiction
5. EU's AI Office Publishes FAQs On General-Purpose AI Models In The AI Act
Date: March 10, 2025
Summary: The EU's AI Office has published Questions & Answers on General-Purpose AI Models in the AI Act. This resource is meant to address the interpretation of several AI Act provisions. However, it does not constitute the official position of the European Commission and remains subject to the Court of Justice of the European Union's (CJEU) interpretation of the AI Act. Read More.
6. European Commission Releases Third Draft Of General-Purpose AI Code Of Practice
Date: March 11, 2025
Summary: The European Commission has released the third draft of the General-Purpose AI Code of Practice. The draft refines the high-level commitments and the detailed measures for implementing them, including two commitments on transparency and copyright for GPAI providers and sixteen on safety and security for providers of models posing systemic risk. The latest draft includes an executive summary and an interactive website for stakeholder input. The final Code is expected to be released in May, giving providers a set of best practices for aligning with the AI Act. Read More.
Asia Jurisdiction
7. New Chinese Regulation Standardizing Identification Of AI-Generated Content To Take Effect In September
Date: March 14, 2025
Summary: New regulations meant to standardize the identification of AI-generated content will take effect in China on September 1, 2025. Under the regulations, AI-generated (synthetic) text, images, audio, and video must be labeled as such.
Additionally, all service providers must add visible warnings, such as text notices or watermarks, and embed metadata indicating that content is AI-generated. Online platforms must also verify content authenticity and flag any suspected AI-generated content. Read More.
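For illustration only, the sketch below shows one way a provider might embed machine-readable provenance metadata in a generated image. The regulation does not prescribe this particular format; the metadata keys, values, and filenames are assumptions, and the example uses the Pillow library's PNG text chunks as a stand-in for whatever labeling scheme a provider actually adopts.

```python
# Illustrative only: embed a provenance note in a PNG's metadata.
# The Chinese regulation does not mandate this format; keys and values are assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), color="gray")   # stand-in for generated output

metadata = PngInfo()
metadata.add_text("AIGC", "true")                     # hypothetical "AI-generated" flag
metadata.add_text("generator", "example-model-v1")    # hypothetical generator identifier

image.save("labeled_output.png", pnginfo=metadata)

# A platform-side check could then read the embedded text chunks back:
print(Image.open("labeled_output.png").text)
```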
8. Hong Kong's Office of the Privacy Commissioner for Personal Data Publishes Checklist On Guidelines For Use Of GenAI By Employees
Date: March 31, 2025
Summary: The Office of the Privacy Commissioner for Personal Data (PCPD) has published its Checklist on Guidelines for the Use of Generative AI by Employees. The checklist is meant to help organizations develop internal AI policies compliant with the Personal Data (Privacy) Ordinance (PDPO), covering the scope of permitted AI use, data privacy, security measures, and policy enforcement. Separately, the PCPD released the findings of its investigation into the ImagineX Management Company Limited data breach, which exposed the personal data of over 127,000 individuals due to security lapses, including the failure to remove a temporary account and the use of outdated software. ImagineX was found to have contravened Data Protection Principle 4(1) and was issued a notice requiring corrective actions. Read More.