Securiti’s AI Regulation digest provides a comprehensive overview of the most recent significant global developments, announcements, and changes in the field of AI regulation. Our website will regularly update this information, presenting a monthly roundup of key activities. Each regulatory update will include links to related resources at the bottom for your reference.
Editorial Note
Global AI Regulation Gains Momentum: CPPA, White House, EDPB, and More
This month marks a shift from planning to real action. We are seeing the emergence of dual priorities: fostering innovation while tightening accountability. Countries like the U.S., EU member states, Singapore, and South Korea are converging on key themes: transparency, human oversight, and cross-border data safeguards. Organizations must now navigate a landscape where staying compliant demands constant adaptation, deep expertise, and a firm commitment to ethical AI practices. The pace of regulation is accelerating faster than ever; the next few months will reveal who can keep up and who risks falling behind.
North and South America Jurisdiction
1. Texas Advances Responsible AI Governance Act
Date: April 23, 2025
State: Texas, United States
House Bill 149, the Texas Responsible Artificial Intelligence Governance Act, passed the state House and is now pending before the Senate Business and Commerce Committee.
The Bill establishes comprehensive AI governance rules, applying to any entity developing, distributing, or deploying AI systems within Texas, or offering AI-related services or products to Texas residents. The legislation prohibits the use of AI systems for manipulative purposes, such as inciting harm, bypassing informed decision-making, or conducting social scoring. If enacted, it would position Texas as a leader among U.S. states in crafting ethical AI standards that prioritize both innovation and consumer protection.
Organizations developing or offering AI tools in Texas should monitor the bill’s progress closely and prepare to align their operations with emerging regulatory obligations. Read More
2. Arkansas Enacts Generative AI Ownership Law
Date: April 21, 2025
State: Arkansas, United States
Arkansas has enacted Act 927 (formerly House Bill 1876), establishing legal ownership rights over content and models generated using generative AI tools. Under the law, individuals who provide input to AI systems own the generated outputs and trained models unless the content infringes existing IP rights. In employment settings, ownership vests with the employer if the AI use occurs under the employee’s scope of duties and direction. The law explicitly excludes any AI-generated content that violates copyright or IP protections.
This landmark law sets a precedent in clarifying intellectual property rights in the age of generative AI.
Organizations and individuals working with AI in Arkansas should review their contractual frameworks to align with the new statute. Read More
3. Montana Enacts Nation’s First “Right to Compute” Law
Date: April 16, 2025
State: Montana, United States
Montana has made history with the enactment of the Right to Compute Act (SB 212), the first U.S. law affirming individuals’ fundamental right to own and use computational tools and AI technologies. Signed into law by Governor Greg Gianforte, the legislation enshrines protections under Montana’s Constitution, limiting state interference to only those restrictions that are “demonstrably necessary” for public health or safety.
Key provisions include mandated risk assessments and human override mechanisms for AI-managed critical infrastructure. The Act sets a national precedent for digital freedoms, contrasting with more restrictive AI bills vetoed in other states, such as California's SB 1047 and Virginia's HB 2094.
Organizations operating in Montana or shaping AI access policies should closely review the implications of this groundbreaking law. Read More
4. White House Releases Memorandums to Accelerate AI Adoption Across Federal Agencies
Date: April 7, 2025
Country: United States
The White House has released two memorandums, M-25-21 and M-25-22, under President Trump's Executive Order 14179, aiming to accelerate AI innovation and adoption within U.S. federal agencies.
M-25-21 sets guidelines for federal agencies' use of AI, emphasizing innovation, governance, and public trust. Key policies include maximizing the use of American-developed AI, designating a Chief AI Officer at each agency, creating a "high-impact" category for AI use cases, producing AI adoption assessments, and discontinuing high-impact AI uses that fail to meet minimum risk management practices by April 3, 2026.
M-25-22 reforms federal procurement practices, introducing performance-based contracts, restrictions on vendor lock-in, and protections against unauthorized commercialization of public datasets.
The White House also issued a fact sheet detailing these new policies, which aim to eliminate barriers to federal AI use and procurement.
The memorandums mark a significant policy shift, positioning AI as a cornerstone of government modernization and competitiveness. Read More
Europe and Africa Jurisdiction
5. European Commission Launches Consultation On General-Purpose AI (GPAI) Guidelines
Date: April 22, 2025
The European Commission launched a public consultation inviting stakeholders to provide practical insights to shape upcoming guidelines on general-purpose AI (GPAI) models under the AI Act.
These guidelines will be non-binding and aim to clarify various key concepts, such as what qualifies as a GPAI model, who the providers are in different scenarios, and what constitutes placing a model on the market. Additionally, these guidelines will also explain the role of the AI Office and how adherence to the forthcoming Code of Practice on GPAI could serve as a compliance benchmark. Feedback is open to all stakeholders until May 22, 2025.
The Commission also plans to launch a separate consultation on classifying high-risk AI systems to support the broader implementation of the AI Act.
If you are a GPAI provider, downstream AI developer, member of civil society, public authority, or an organization using GPAI models, you should actively participate in the consultation process by submitting your feedback and insights through the designated channels to help shape the regulatory landscape. Read More
6. Meta to Resume AI Training on Public Content from European Users
Date: April 16, 2025
Starting from the end of May 2025, Meta will begin using posts, photos, and other content shared by adult users in Europe, across Facebook, Instagram, WhatsApp, and Messenger, to train its AI models. Private messages and content from users under 18 will not be used.
After initially pausing the rollout due to privacy concerns raised by the Irish Data Protection Authority, Meta is now moving forward, offering a dedicated opt-out mechanism. Notifications are being sent via apps and email to users in Europe, explaining the changes and linking to the opt-out forms.
If you are a Meta user in Europe and do not wish your data, including past public content, to be used for AI training, you must opt out before the end of May 2025. After that deadline, data may already be incorporated into Meta's AI models and might not be removable. Read More
7. EDPB Releases Report on AI Privacy Risks & LLM Mitigation Strategies
Date: April 10, 2025
The European Data Protection Board (EDPB) has published an in-depth report on managing privacy risks in large language models (LLMs), offering a detailed framework for risk assessment and mitigation aligned with the GDPR and the AI Act. Using the AI lifecycle (from design to deployment), the report maps privacy risks to each stage and outlines actionable strategies, including continuous monitoring, incident response mechanisms, and maintaining risk registers.
The report emphasizes role clarity under EU law: deployers typically act as controllers, while providers may assume controller or processor roles depending on how data is handled. Key recommendations include maintaining risk registers, setting up incident response plans, applying privacy-by-design from the outset, and ensuring ongoing monitoring and governance even after deployment.
If your organization develops or uses LLMs, this report offers valuable guidance to ensure privacy-responsible deployment aligned with upcoming EU AI and GDPR requirements. Read More
Asia Jurisdiction
8. Saudi Arabia Proposes Global AI Hub Law
Date: April 24, 2025
Country: Saudi Arabia
Saudi Arabia’s Communication, Space and Technology Commission (CST) has opened the draft Global AI Hub Law for public feedback, laying the foundation for the Kingdom to become a leading global center for AI infrastructure and cooperation. The draft law outlines a framework for the establishment of foreign-operated data hubs within Saudi Arabia, aiming to create a trusted environment for cross-border collaboration in AI technologies.
Under the proposed law, “private,” “extended,” and “virtual” AI hubs would be established by foreign governments or service providers, each governed by distinct legal and operational terms. Foreign countries would be required to enter bilateral agreements with Saudi Arabia in order to operate dedicated “private” or “extended” hubs. These agreements would define operational responsibilities and ensure compliance with both Saudi and guest country laws.
Virtual hubs operated by authorized providers would offer services hosted under the jurisdiction of a designated foreign country. Each model is designed to provide legal certainty and clear regulatory conditions for international data operations hosted within the Kingdom.
Public consultation on the draft law is open until May 14, 2025, via the Saudi government’s Istitlaa platform. Read More
9. South Korea’s PIPC Finds DeepSeek In Violation Of Data Privacy Requirements
Date: April 24, 2025
Country: South Korea
South Korea’s Personal Information Protection Commission (PIPC) has announced preliminary findings against Hangzhou DeepSeek AI, citing multiple privacy violations, including unauthorized data transfers and AI training without user consent. The Commission has issued corrective action recommendations.
DeepSeek was found to have breached Korean privacy requirements by transferring personal data overseas without proper consent, inadequately disclosing its practices, and using user inputs for AI training without offering an opt-out. Although DeepSeek has since updated its policies, blocked improper data transfers, and added new safeguards, PIPC has mandated the immediate destruction of previously transferred user data and full compliance with enhanced protection measures.
DeepSeek must implement corrective actions within 60 days, or face further regulatory action.
The case signals tougher oversight on AI companies operating in South Korea, particularly around cross-border data transfers and AI model training practices. Read More
10. Singapore Expands Cyber Essentials and Cyber Trust Certifications to Cover AI, Cloud, and OT Security
Date: April 15, 2025
Country: Singapore
The Cyber Security Agency of Singapore (CSA) has expanded its Cyber Essentials and Cyber Trust certification programs to include cloud security, AI security, and operational technology (OT) security.
The update aims to streamline cybersecurity requirements for businesses, particularly SMEs, by providing practical guidance for cloud usage, mitigating AI-related risks like vulnerabilities in Large Language Models (LLMs), and securing legacy OT systems. The expansion is part of Singapore’s broader effort to strengthen digital trust and ensure organizations are better equipped to manage evolving threats.
CSA is also considering making these certifications mandatory for organizations handling sensitive data or bidding for government contracts. SMEs can access co-funding support to achieve certification through CSA’s CISO-as-a-Service program.
Organizations operating in Singapore should review the expanded standards to maintain cybersecurity readiness and compliance. Read More
11. UAE Launches AI-Powered Regulatory Intelligence Office
Date: April 14, 2025
Country: United Arab Emirates
The UAE Cabinet has approved the launch of the world's first integrated Regulatory Intelligence Ecosystem within government operations to modernize lawmaking. The new system will use artificial intelligence to connect and monitor federal and local laws, judicial rulings, public services, and executive actions, creating a dynamic, real-time legislative framework.
This AI-driven model aims to accelerate lawmaking by up to 70%, enabling faster updates, real-time tracking of global legislative changes, and aligning UAE laws with leading international best practices. The ecosystem will transition the UAE from static laws to adaptable, living regulations that better respond to economic, social, and technological changes. The Cabinet also approved the establishment of a Regulatory Intelligence Office under the General Secretariat to oversee implementation.
This shift positions the UAE as a front-runner in legal tech, connecting lawmakers directly with performance data and reducing time-to-reform. Read More
WHAT’S NEXT: Key AI Developments to Watch For