Securiti’s AI Regulation digest provides a comprehensive overview of the most significant recent global developments, announcements, and changes in the field of AI regulation. We update this information regularly on our website as a monthly roundup of key activities. Each regulatory update includes links to related resources at the bottom for your reference.
Editorial Note
Responsible AI Use - Cautioned, Measured, and Much More Educated
“Responsible AI Use” has been the dominant theme this month as jurisdictions around the world move from exploration to enforcement. Governments are prioritizing AI literacy, ethical use, and risk mitigation, evident in the rising compliance demands for organizations in the U.S., the EU’s structured push for AI education and transparency, and Hong Kong’s call for cultivating a culture of AI awareness. At the same time, rising tensions, such as the backlash against sweeping U.S. federal preemption proposals, highlight the complex balancing act between fostering innovation and ensuring oversight. Whether through national strategies, ethical policies, or legal enforcement, one message is consistent: AI must be developed and deployed with accountability, trust, and human oversight at its core.
North and South America Jurisdiction
1. US House Passes One Big Beautiful Bill Act with 10-Year AI Preemption
May 22, 2025 United States
The U.S. House of Representatives passed the One Big Beautiful Bill Act, which includes a 10-year moratorium on state-level legislation regulating AI models and systems. The bill restricts states from enforcing any laws specific to AI, while allowing exceptions for measures that facilitate AI deployment, streamline administrative procedures, or impose requirements under existing federal or generally applicable laws. It also mandates that AI systems be treated similarly to non-AI systems with respect to fees, bonds, and equivalent functionality.
In response, a coalition of civil rights and consumer advocacy groups submitted a letter opposing the AI preemption clause. They argue the moratorium would undermine vital state protections and prevent accountability for harms caused by AI, particularly in areas like healthcare, housing, and child safety. As the bill moves to the Senate, debate continues over the appropriate balance between federal oversight and state innovation in AI governance.
2. US President Signs TAKE IT DOWN Act to Combat Non-Consensual Deepfake Content
May 19, 2025 United States
The U.S. President has signed the bipartisan TAKE IT DOWN Act into law, establishing federal criminal penalties for the distribution of non-consensual intimate imagery, including AI-generated deepfakes. The law requires online platforms to remove such content within 48 hours of a victim’s request, with enforcement led by the Federal Trade Commission.
The enactment reflects growing federal momentum to address the misuse of AI in harmful content creation and online abuse.
3. US Copyright Office Issues Report On GenAI Training
May 15, 2025 United States
The US Copyright Office has released a report on generative AI training, highlighting the challenges involved in licensing copyrighted works. It notes that compensation models must consider revenue-based structures, particularly for smaller AI developers, and that copyright owners may opt out if licensing terms conflict with their statutory rights. The report further emphasizes that the legality of training heavily depends on the types of works used, their sources, the purposes of training, and the level of control over AI outputs.
U.S. businesses involved in AI development using copyrighted content must closely review these findings to ensure compliance with emerging copyright standards.
4. Montana Signs HB 590 Into Law to Advance Health Data Access
May 13, 2025 Montana, United States
Montana’s Governor has signed HB 590 into law, introducing significant updates to the state’s electronic health records framework. The law requires health carriers to implement application programming interfaces (APIs) by July 1, 2026, to improve access to health data. It also prohibits information blocking by healthcare providers and mandates that certain medical test results be automatically released to patients’ electronic health records within 72 hours.
Healthcare providers and related businesses operating in Montana must review their data access and interoperability practices to ensure compliance with the new requirements.
5. Utah Enacts 4 AI Laws Targeting Transparency, Safety and Sectoral Oversight
May 7, 2025 Utah, United States
Utah has officially enacted four AI-related laws, signaling a broad push toward regulating AI across sectors. The new legislation includes:
Senate Bill 271 - Addresses unauthorized AI impersonation;
Senate Bill 226 - Requires transparency in high-risk AI interactions with consumers;
House Bill 452 - Regulates AI use in mental health and advertising contexts;
Senate Bill 180 - Mandates AI policy adoption by law enforcement.
Organizations operating in Utah should closely review these new laws to ensure compliance with sector-specific obligations. Read more on SB 271, SB 226, HB 452, and SB 180.
Europe & Africa Jurisdiction
6. Irish DPC Issues Statement On Meta’s Use Of Personal Data For LLM Training
May 21, 2025 Ireland
Ireland’s Data Protection Commission (DPC) has issued a statement on Meta’s processing of personal data from Facebook and Instagram for large language model (LLM) training within the EU/EEA. Following earlier concerns raised in June 2024, which led to a temporary halt, Meta has revised its approach and introduced additional safeguards. These include updated transparency notices, extended user notification periods, an improved objection mechanism, and technical measures such as data de-identification and output filtering.
The DPC continues to monitor the situation and requires Meta to submit an evaluation report by October 2025, while reminding users to review their privacy settings to manage how their data is used.
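For illustration only, the sketch below shows one highly simplified form of the kind of output filtering mentioned above: redacting common personal-data patterns from model responses before they reach users. It is a hypothetical example, not Meta’s actual safeguard; the patterns and function names are assumptions, and production-grade measures would typically add NER-based de-identification, allow/deny lists, and auditing.

```python
import re

# Illustrative only: a toy output filter that redacts common personal-data
# patterns (emails, phone numbers) from model responses before they are
# returned to users. This is not Meta's implementation.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def filter_output(text: str) -> str:
    """Redact email addresses and phone-like numbers from model output."""
    text = EMAIL_RE.sub("[REDACTED EMAIL]", text)
    text = PHONE_RE.sub("[REDACTED PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or +353 1 234 5678."
    print(filter_output(sample))
    # -> "Contact Jane at [REDACTED EMAIL] or [REDACTED PHONE]."
```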
7. Finland Issues New AI Guidance For Data Protection Compliance
May 20, 2025 Finland
Finland’s Office of the Data Protection Ombudsman has published new guidance for AI developers and deployers to ensure compliance with data protection requirements. The guidance outlines how organizations can lawfully use personal data within AI systems and align with GDPR standards.
Key areas include assessing data protection risks from the individual’s perspective, ensuring a clear legal basis for processing throughout the AI lifecycle, upholding data subject rights, and maintaining transparency about data use. The guidance also reinforces GDPR principles such as data minimization and purpose limitation.
Organizations operating in Finland must implement data protection by design, ensuring that AI systems are developed and deployed with a transparent, lawful, and rights-respecting approach.
8. Italy’s Garante Fines Replika AI €5M, Opens New Probe into GenAI Training Methods
May 19, 2025 Italy
Italy’s Privacy Authority, the Garante, has fined U.S.-based Luka Inc. €5 million for GDPR violations related to its AI chatbot, Replika. The fine follows findings that the company lacked a valid legal basis for data processing, failed to provide an adequate privacy policy, and did not implement sufficient age verification despite claiming to exclude minors from the platform. These issues were first raised when the Garante blocked the app’s use in Italy in February 2023.
In addition to the fine, the Garante has launched a new investigation into Replika’s generative AI training methods. The inquiry will assess how user data is handled across the system’s lifecycle, including risk assessments, data protection measures during model training, and whether anonymization or pseudonymization is applied.
This action reflects increasing scrutiny of generative AI practices in Europe, particularly around transparency, lawful data use, and the treatment of sensitive user groups.
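As a purely illustrative sketch of the pseudonymization question the Garante is examining, the example below replaces user identifiers in training records with keyed hashes. Whether such a measure would satisfy GDPR expectations depends on the full processing context; the key, field names, and records here are hypothetical.

```python
import hmac
import hashlib

# Illustrative only: pseudonymizing user identifiers in training records with
# a keyed hash (HMAC-SHA256). The key must be stored separately from the data;
# this sketch does not by itself establish GDPR compliance.

SECRET_KEY = b"replace-with-a-securely-managed-key"  # hypothetical key

def pseudonymize(user_id: str) -> str:
    """Return a stable pseudonym for a user identifier."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

records = [
    {"user_id": "user-1042", "message": "I love hiking on weekends."},
    {"user_id": "user-1042", "message": "My favourite trail is near the lake."},
]

# Replace direct identifiers with pseudonyms before any training use.
training_data = [
    {"user_ref": pseudonymize(r["user_id"]), "text": r["message"]} for r in records
]
print(training_data)
```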
9. Meta Faces Legal Showdown in EU Over Use of Personal Data for AI Training
May 14, 2025
NOYB has issued a cease-and-desist letter to Meta after the company revealed plans to use Facebook and Instagram users’ personal data across the EU for AI training without obtaining opt-in consent. Instead, Meta claims a “legitimate interest,” offering users only an opt-out option.
This move raises serious GDPR compliance concerns and could trigger EU-wide litigation under the Collective Redress Directive. NOYB and other Qualified Entities are considering injunctions and potential class actions, with damages that could reach billions. German consumer groups have also signaled legal action, while most DPAs have limited their response to advising users to opt out, rather than challenging Meta’s legal basis.
Despite growing pressure, Meta continues with its AI plans. If injunctions are granted, it may be forced to halt processing and delete AI models trained on unlawfully used EU data.
10. European Commission Releases Analysis Of Stakeholder Feedback On AI Definition & Prohibited Practices Under AI Act
May 12, 2025
The European Commission has released an analysis of stakeholder feedback on the AI definition and prohibited practices under the AI Act. The report is based on two consultations, which highlighted concerns about the vagueness of the “AI system” definition, especially the use of terms like “autonomy” and “adaptiveness,” which could wrongly bring traditional software such as tax tools or basic statistical models into scope.
As for prohibited practices, stakeholders found terms like “manipulation” and “social scoring” too ambiguous. They cited risks from deepfakes, manipulative chatbots, and biased algorithms, with ethical concerns related to AI systems’ prejudice against vulnerable groups in areas like micro-lending or advertising. Other activities that were strongly opposed include social scoring, predictive policing tools, and untargeted facial data scraping.
The report reflects ongoing efforts to clarify and fine-tune the AI Act to ensure legal certainty while upholding fundamental rights.
11. European Commission Issues FAQs On AI Literacy Requirements Under AI Act
May 7, 2025
The European Commission issued FAQs on AI literacy under the EU AI Act on May 7, 2025. In these FAQs, the Commission clarifies that providers and deployers of AI systems must ensure sufficient AI literacy among all individuals involved in their use, including employees, contractors, and clients. The level of literacy should be appropriate for the person’s role and the context of AI deployment. While formal assessments are not required, they may help demonstrate compliance with transparency and human oversight provisions of the Act. Organizations are advised to evaluate existing AI literacy levels across stakeholders and design role-based training accordingly.
Further guidance will follow from the AI Office, which has already launched an AI portal and provides role-specific AI training for its staff.
Asia Jurisdiction
12. Japan Passes Landmark AI Legislation to Guide Safe Development
May 28, 2025 Japan
Japan’s parliament passed its first-ever legislation on artificial intelligence during a House of Councillors plenary session, marking a significant step toward regulating the fast-developing technology.
The bill aims to promote the safe use and development of AI while addressing risks such as misinformation and disinformation. It empowers the government to request business cooperation in investigating AI misuse but does not impose any penalties. Moreover, a task force will formulate national AI policy and draft guidelines for companies and users. With this move, Japan joins the growing global momentum toward structured and risk-aware AI regulation.
13. Australia Issues Guidance on Emerging AI Cybersecurity Threats
May 23, 2025 Australia
The Australian Signals Directorate has released guidance on AI-related cyber threats, highlighting how AI amplifies existing risks while introducing new ones. The guidance explains core AI concepts and warns of vulnerabilities such as data leaks, targeted attacks on AI models, and malicious code hidden in models or datasets. It also recommends best practices such as data encryption, data provenance tracking, and secure storage.
Thus, organizations must adapt their cybersecurity strategies to counter the unique challenges posed by AI, which could compromise the integrity of AI systems at their very foundation.
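To make the provenance-tracking recommendation concrete, the sketch below verifies a downloaded model artifact against a SHA-256 digest recorded in a provenance manifest before it is loaded. It is a minimal illustration under assumed file names and manifest format, not a prescription from the ASD guidance.

```python
import hashlib
import hmac
import json
from pathlib import Path

# Illustrative only: check a model artifact against a recorded SHA-256 digest
# before loading it, one simple form of data/model provenance tracking.
# "model.safetensors" and "provenance.json" are hypothetical file names.

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(artifact: Path, manifest: Path) -> bool:
    """Return True only if the artifact's digest matches the manifest entry."""
    expected = json.loads(manifest.read_text())[artifact.name]["sha256"]
    return hmac.compare_digest(expected, sha256_of(artifact))

if __name__ == "__main__":
    if not verify_artifact(Path("model.safetensors"), Path("provenance.json")):
        raise SystemExit("Model artifact failed provenance check; refusing to load.")
```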
14. Oman Releases National Policy on Ethical AI Use
Oman
Oman's Ministry of Transport, Communications, and Information Technology (MTCIT) has released a national policy that promotes ethical AI use. The policy espouses principles such as transparency, fairness, accountability, inclusiveness, and privacy, while requiring human oversight in critical decisions, bias mitigation, governance frameworks, regular evaluations, and documented reporting.
This policy marks a significant milestone in aligning AI adoption with Oman Vision 2040, balancing innovation with societal values. By prioritizing trustworthy AI, Oman positions itself as a regional leader in digital ethics and regulatory readiness.
15. Hong Kong PCPD AI Inspection Report Finds No Breaches Of PDPO, Urges Stronger Oversight
May 8, 2025 Hong Kong
Hong Kong’s Privacy Commissioner for Personal Data (PCPD) released an AI inspection report covering 60 organizations, revealing that 80% currently use AI, with 88% of those having used it for over a year. The report found no breaches of the Personal Data (Privacy) Ordinance (PDPO), indicating that appropriate compliance measures are in place. However, the PCPD’s detailed recommendations, including the establishment of AI governance frameworks, regular risk assessments, employee training, AI incident response planning, and alignment with PCPD guidelines, reflect a growing regulatory focus on responsible AI use.
This signals that while current compliance levels are acceptable, the risks associated with AI necessitate structured oversight.
CJEU to Decide Landmark AI Copyright Case: In Like Company v. Google Ireland, the CJEU will assess whether AI outputs infringe copyright and whether training on protected content qualifies as “reproduction.” The case could reshape AI training practices and copyright enforcement across the EU.
Growing AI protections in Texas: Texas HB 149 and SB 1188, which aim to regulate AI, require U.S.-based health data storage, and mandate verification of AI-generated medical content, are progressing.
Federal AI bills are progressing in the US: U.S. bills such as S.1633 (AI system evaluation), S.1638 (AI tied to foreign threats), and the AI Whistleblower Protection Act are advancing.
Public comment period for draft regulations in California: Public comment on proposed Automated Decisionmaking Technology rules is open from May 9 to June 2, 2025.
EU Opens Consultation on Data Union Strategy: The European Commission is accepting public feedback until July 18, 2025, on its Data Union Strategy, which aims to streamline data rules, support global data flows, and enable high-quality datasets for advancing generative AI.