Securiti’s AI Regulation digest provides a comprehensive overview of the most recent significant global developments, announcements, and changes in the field of AI regulation. Our website will regularly update this information, presenting a monthly roundup of key activities. Each regulatory update will include links to related resources at the bottom for your reference.
Editorial Note
AI Regulation Worldwide: Fast Moves, Diverging Paths
In June, the global AI legal and regulatory landscape was abuzz with activity, but the pace varied significantly across countries. Vietnam took the lead in enacting the world’s first law exclusively focused on the digital technology industry. Hong Kong’s PCPD followed closely, aiming to demystify compliance requirements by launching a new website and publishing guidelines that make compliance more “digestible” for the layman. South Korea was also active, with a lawmaker introducing an amendment that would obligate AI service providers to disclose the datasets used to train GenAI models. In the US, states like Texas, New York, and California advanced laws on AI transparency and accountability, while debates continue over federal proposals that could limit state-level regulation.
Global AI regulation is evolving along different tracks, with some regions moving swiftly to establish clear rules, while others are still defining their approach.
North and South America Jurisdiction
1. Texas Responsible AI Governance Act Signed into Law
June 22, 2025 Texas, United States
On June 22, 2025, Texas Governor Greg Abbott signed House Bill 149, enacting the Texas Responsible Artificial Intelligence Governance Act, which will take effect on January 1, 2026. The law establishes a regulatory framework for AI development and deployment in Texas, introducing requirements related to biometric identifiers, consent, and restrictions on certain AI practices such as social scoring.
The law applies to any person or entity conducting business in Texas, producing products or services used by Texas residents, or engaging in the development, distribution, or deployment of artificial intelligence systems within the state. It also prohibits the use of AI systems for purposes such as manipulating human behavior to incite harm or criminal activity, undermining individuals’ informed decision-making, and engaging in social scoring practices.
2. New York Advances Bill Requiring Warning Labels for Addictive Social Media Platforms
June 18, 2025 New York, United States
On June 18, 2025, New York’s State Assembly passed Bill 4505, which mandates warning labels on social media platforms deemed addictive. The measure aims to address mental health concerns among youth, defining addictive platforms based on features like autoplay and infinite scrolling.
Under the Bill, users must see health warnings each time they visit such platforms, following guidelines set by the Commissioner of Mental Health. These warnings cannot be hidden or obscured, and violations may result in civil penalties of up to $5,000 per incident.
The Bill now awaits the Governor’s signature to become law, signaling New York’s increasing focus on regulating the impact of digital platforms on mental health.
3. California Publishes Report on Frontier AI Policy
June 17, 2025 California, United States
On June 17, 2025, a policy working group convened by California Governor Gavin Newsom published its report on frontier AI policy, addressing critical issues such as data collection, safety measures, and security practices. The report recommends steps to strengthen accountability, including third-party risk assessments, stronger whistleblower protections, and clear guidelines for reporting adverse events, specifying what information should be reported and who should be responsible.
It also outlines four approaches for classifying foundation models, based on factors like the developer, development costs, technical characteristics, and real-world impact. The report signals California’s intent to play a proactive role in shaping policy around emerging AI technologies and ensuring responsible development and deployment practices.
4. Stop Deepfakes Act Passed by New York State Senate
June 12, 2025 New York, United States
On June 12, 2025, the New York State Senate passed Senate Bill 6954A, known as the Stop Deepfakes Act. The bill requires systems that generate synthetic content to embed provenance data detailing the content’s origin, modifications, AI use, provider identity, and timestamps.
It also places similar requirements on state agencies and bars hosting platforms from offering access to systems that don’t comply. The New York State Attorney General is authorized to impose civil penalties and injunctions for violations.
This legislation could set an important precedent for regulating deepfakes as synthetic media becomes more realistic. Organizations involved in creating, hosting, or distributing content should monitor how these provenance requirements might impact their technologies and workflows.
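The Act does not prescribe a technical format for provenance data, but the fields it names map naturally onto a structured manifest attached to generated content. As a purely illustrative sketch (the field names and values below are assumptions for illustration, not drawn from the bill’s text), a generator might record provenance like this:

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance record covering the categories the Act names:
# origin, modifications, AI involvement, provider identity, and timestamps.
provenance = {
    "origin": "user-upload",
    "modifications": [
        {"action": "background-replaced", "tool": "example-genai-model"},
    ],
    "ai_generated": True,
    "provider": "Example AI Provider Inc.",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Serialize the manifest so it can be embedded alongside the content.
manifest = json.dumps(provenance, indent=2)
print(manifest)
```

In practice, such a record would more likely follow an existing content-credential standard and be cryptographically bound to the media file rather than stored as loose JSON; the sketch only shows the kind of information involved.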
5. US Lawmakers Push Back Against Federal AI Preemption in One Big Beautiful Bill Act
June 3, 2025 United States
On June 3, 2025, a group of 260 US lawmakers released a letter opposing the AI provisions in the One Big Beautiful Bill Act, which passed the House on May 22, 2025. The letter criticizes the Act’s proposal to prohibit state-level enforcement of AI regulations for ten years, arguing that it would restrict policymakers’ ability to address emerging AI challenges and leave consumers unprotected. Lawmakers also expressed concern that the Act could undermine existing AI-related laws aimed at improving consumer transparency, regulating government technology procurement, and safeguarding patient rights in healthcare.
For anyone involved in AI development, use, regulation, or policy, following this debate is crucial, as the lawmakers’ opposition may lead to amendments that affect how existing AI laws apply nationwide.
Europe Jurisdiction
6. French CNIL Issues Recommendations on Using Legitimate Interest as Legal Basis
June 19, 2025 France
On June 19, 2025, France’s data protection authority, CNIL, published recommendations for AI developers on relying on legitimate interest under the GDPR for AI model development. The CNIL states that legitimate interest is valid only if the purpose is lawful, the processing is necessary, and it does not disproportionately affect data subjects’ rights.
The guidance offers practical examples and advises developers to avoid irrelevant sensitive data, limit certain data collection, and ensure transparency and data subject rights. These recommendations provide clearer pathways for AI innovation while helping organizations balance technological development with strong privacy protections.
7. European Parliamentary Committee Releases Draft Report Recommending a Directive on Algorithmic Management in the Workplace
June 18, 2025
On June 18, 2025, the European Parliament’s Employment and Social Affairs Committee released a draft report recommending a new Directive to regulate algorithmic management in the workplace. The proposal seeks to protect employee health and privacy, close gaps left by the AI Act and GDPR, and ensure transparency when large volumes of employee data are processed.
It would require employers to explain how algorithmic systems work and ban processing personal data related to emotions, private conversations, and predictions about fundamental rights. The proposed Directive could significantly reshape workplace dynamics by strengthening transparency and employee data rights.
8. EDPS & Spanish AEPD Publish Joint Report on Federated Learning as a Privacy-Enhancing Technology for Training AI Models
June 10, 2025
On June 10, 2025, the European Data Protection Supervisor (EDPS) and Spain’s data protection authority (AEPD) published a joint report exploring federated learning as a privacy-enhancing technology for AI development. Federated learning allows AI models to train locally on devices, sharing only aggregated results instead of raw data, and is being explored for applications in healthcare, speech recognition, and autonomous vehicles.
The report highlights federated learning’s potential to support data minimization, privacy protection, and reduced cybersecurity risks through decentralized processing, while also cautioning about risks like data leakage from model updates. Overall, the report provides valuable guidance for organizations seeking to innovate with AI while staying compliant with Europe’s strict data protection standards, especially in sensitive sectors.
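The core idea the report examines can be made concrete with a small sketch of federated averaging (FedAvg): each client trains on its own data locally, and only model weights, never raw records, are shared and aggregated. This is an illustrative toy implementation with a simple linear model and synthetic data, not the report’s methodology:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally; only the updated weights leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, client_data):
    """Aggregate client updates weighted by local dataset size (FedAvg)."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

# Two synthetic "devices": their raw data never pools centrally.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(30):  # communication rounds
    w = federated_average(w, clients)
```

The privacy benefit is structural: the server only ever sees weight vectors. The report’s caution about leakage also shows up here, since those shared updates can still reveal information about the underlying data if not further protected (e.g., with secure aggregation or differential privacy).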
9. European Data Protection Board Announces Two New Support Pool of Experts (SPE) Projects
June 5, 2025
On June 5, 2025, the European Data Protection Board (EDPB) announced two specialized AI Support Pool of Experts (SPE) projects following its June 3-4 meeting. Both initiatives aim to help organizations close compliance gaps when implementing AI systems.
The first report, Law & Compliance in AI Security & Data Protection, is tailored for legal professionals, providing essential AI knowledge under the GDPR and EU AI Act. It maps compliance risks across the AI lifecycle and offers practical guidance for deploying AI systems in line with regulatory requirements. The second report, Fundamentals of Secure AI Systems with Personal Data, targets technical professionals building high-risk AI systems. It translates privacy requirements into development practices, including secure coding, testing protocols, and continuous monitoring for compliant AI deployment.
Together, these resources offer coordinated legal and technical guidance, helping organizations meet both GDPR data protection standards and the EU AI Act’s obligations through integrated approaches rather than siloed efforts.
Asia Jurisdiction
10. Vietnam's National Assembly Passes Law on Digital Technology Industry
June 14, 2025 Vietnam
Vietnam’s National Assembly has passed the Law on Digital Technology Industry, laying the foundation for regulating emerging technologies such as AI, blockchain, and digital assets. The law formally recognizes digital and crypto assets under civil law, sets out governance principles for artificial intelligence, including transparency, accountability, and safety, and mandates licensing for high-risk AI systems.
The law also promotes innovation by introducing incentives for the development of AI, blockchain, and semiconductor technologies, positioning Vietnam as a competitive digital hub in the region. By addressing both the opportunities and risks of digital transformation, the legislation signals Vietnam’s commitment to responsible and future-oriented tech regulation.
11. Bill Amending Basic Act on AI Introduced in South Korea
June 13, 2025 South Korea
Democratic Party lawmaker Park Su-hyun has introduced a bill in South Korea’s National Assembly amending the Basic Act on Artificial Intelligence. The amendment would obligate AI service providers to disclose the datasets used to train GenAI models and give rights-holders mechanisms to verify whether their work was used. Although the bill includes no sanctions, it represents the first legislative attempt at dedicated copyright safeguards in the AI era.
This initiative reflects growing global concerns about intellectual property in AI development and signals a move toward greater transparency and accountability in dataset usage.
12. India's Data Security Council Publishes AI Adoption Guide
June 11, 2025 India
The Data Security Council of India (DSCI) has published a comprehensive policy template to support organizations in adopting AI responsibly and securely. The template provides guidance on assigning governance roles, implementing privacy and security measures throughout the AI lifecycle, and integrating practical controls.
This initiative reflects India’s growing focus on fostering ethical AI development while safeguarding data protection and trust.
13. Hong Kong’s PCPD Issues Checklist to Support PDPO Compliance Efforts
June 9, 2025 Hong Kong
On June 9, 2025, Hong Kong’s Privacy Commissioner for Personal Data (PCPD) issued a letter urging organizations to develop AI usage policies to protect personal data and organizational interests. Alongside this guidance, the PCPD published a “Checklist for Employee Use of Generative AI” to help ensure compliance with Hong Kong’s Personal Data (Privacy) Ordinance (PDPO). The checklist offers practical steps for tailoring internal policies, focusing on clear AI usage rules, data input guidelines, and anonymization practices.
The guidance encourages organizations to embed privacy safeguards at the operational level, particularly in employees’ use of AI tools. It not only helps prevent inadvertent data leaks but also signals a growing expectation of accountability and transparency in AI deployment, positioning organizations to better navigate compliance challenges while fostering responsible innovation.
14. PCPD Launches Website Offering Updates on AI Developments in Hong Kong & Globally
June 9, 2025 Hong Kong
Hong Kong’s Privacy Commissioner for Personal Data (PCPD) has launched its “AI Security” website. The site offers AI-related guidance and educational content covering regulatory updates and PCPD news, giving the public and organizations easy access to practical AI usage topics so they stay aware of AI-related developments both in Hong Kong and globally.
By centralizing AI-related information, the PCPD strengthens public understanding and organizational readiness in navigating AI’s privacy and security challenges. This initiative underscores the regulator’s commitment to proactive engagement, promoting responsible AI adoption through accessible and up-to-date educational content.
Comprehensive AI Law in China: Watch for China’s progress on a new AI law following calls from this year’s National People’s Congress to build a legal framework that promotes innovation while managing risks. Lawmakers are considering dynamic risk classification, ethical safeguards, and international cooperation to fill gaps left by existing laws like the Data Security Law and Personal Information Protection Law.
The EU Commission opens consultation on high-risk AI systems under EU AI Act: The European Commission's AI Office launched a targeted stakeholder consultation on June 6, 2025, seeking input on implementing the EU AI Act's rules for high-risk AI systems, with the consultation running for six weeks and submissions due by July 18, 2025.
NIST opens comments on integrating Artificial Intelligence: The National Institute of Standards and Technology (NIST) is seeking public comments on integrating artificial intelligence (AI) into the NICE Workforce Framework for Cybersecurity.
Bills advancing in the legislative process: In California, SB 1018 (Automated Decisions Safety Act), AB 1064 (Leading Ethical AI Development for Kids Act), SB 420 (AI Transparency Act), and AB 1018 (Automated Decisions Safety Act) are moving forward, while in Michigan, HB 4536 (insurance claims using AI) and HB 4537 (amending the Social Welfare Act to include AI reviews) are advancing through the legislative process.