AI regulation is rapidly diverging across jurisdictions, revealing two dominant models: innovation-led frameworks and control-driven governance. While the U.S. emphasizes competitiveness and centralized policy direction, the EU continues refining risk-based regulation with added safeguards. In contrast, parts of Asia are moving toward stronger state oversight, combining broad prohibitions with flexible enforcement powers.
At the same time, a consistent theme is emerging: accountability is shifting back to humans, whether through clinical oversight, liability models, or governance frameworks. Transparency is also becoming non-negotiable, particularly around training data, bias, and AI-generated content.
Going forward, organizations should expect more granular, sector-specific rules, tighter scrutiny on high-risk AI, and increasing fragmentation across regions, making adaptability, governance maturity, and cross-border compliance strategies critical.
North & South America
1. California Issues Executive Order to Strengthen AI Procurement Standards
March 30, 2026 California, United States
California Governor Gavin Newsom has issued an executive order to strengthen AI governance in state procurement, requiring companies to demonstrate robust privacy, security, and ethical safeguards to do business with the state.
The order directs agencies to assess AI systems for risks such as bias, misuse, and violations of civil rights, while also promoting responsible adoption of generative AI to improve public services. It includes plans to develop best practices for watermarking AI-generated content and expanding oversight of AI vendors.
The order reflects California's more protective, accountability-driven approach to AI and underscores the growing divergence between state and federal AI governance strategies in the U.S.
2. White House Unveils National AI Legislative Framework
March 20, 2026 United States
The White House has introduced a comprehensive national AI legislative framework aimed at balancing innovation, economic competitiveness, and public trust.
The framework outlines six key priorities: protecting children through parental controls and safety features; strengthening national security and infrastructure; safeguarding intellectual property while enabling fair use for AI training; preventing AI-driven censorship; accelerating innovation by removing regulatory barriers; and developing an AI-ready workforce. It also emphasizes the need for a unified federal approach, warning that fragmented state-level regulation could undermine U.S. leadership in the global AI race.
The framework signals a shift toward centralized AI governance, with a strong focus on national security, economic growth, and global competitiveness.
Europe
3. European Parliament Adopts Position on AI “Digital Omnibus” Proposal
March 26, 2026 European Union
The European Parliament has adopted its position on the “Digital Omnibus” proposal to streamline the EU Artificial Intelligence Act, with strong majority support.
The proposal introduces a targeted ban on AI “nudifier” systems used to generate non-consensual intimate images, while allowing systems with built-in safeguards. It also sets clearer compliance timelines, including deadlines for high-risk AI systems and watermarking requirements for AI-generated content.
To reduce regulatory burden, the Parliament proposed easing obligations where AI systems are already governed by sector-specific laws and extending support mechanisms to small mid-cap enterprises. It also permits limited processing of personal data to detect algorithmic bias, subject to strict safeguards.
The proposal now moves to negotiations with the Council, marking a key step toward refining the EU’s AI regulatory framework.
4. UK Report Warns Generative AI Threatens Creative Industries
March 6, 2026 United Kingdom
The UK Parliament’s Communications and Digital Committee has warned that generative AI poses a significant risk to the country’s creative industries, primarily due to the unlicensed use of copyrighted material for AI training.
The report highlights concerns around the lack of transparency from AI developers, making it difficult for creators to determine whether their works have been used or to enforce their rights. It also identifies gaps in legal protection for digital likeness, style, and identity. Rather than reforming copyright law, the Committee recommends a licensing-based AI framework, mandatory transparency on training data, and stronger protections against unauthorized digital replicas. It also urges the government to reject proposals for broad text and data mining exceptions.
The findings underscore growing regulatory pressure to align AI development with creator rights and accountability.
Asia
5. Singapore’s MOH and HSA Launch Revised AI in Healthcare Guidelines (AIHGle 2.0)
March 10, 2026 Singapore
Singapore’s Ministry of Health and Health Sciences Authority have released updated guidelines for AI use in healthcare, strengthening oversight of advanced systems such as generative AI and deep learning.
The revised framework introduces a lifecycle-based approach, requiring developers to continuously validate AI systems, while healthcare providers must implement risk-based governance and oversight. It also addresses emerging risks such as model drift and lack of transparency. Importantly, the guidelines reinforce that healthcare professionals remain ultimately responsible for clinical decisions, ensuring AI outputs are validated and clearly communicated to patients.
The update reflects a growing focus on accountability, safety, and trust in the deployment of AI in high-risk sectors like healthcare.
6. China Opens Membership for AI Security Standards Working Group (WG9)
March 20, 2026 China
China’s National Cybersecurity Standardization Technical Committee (TC260) has opened applications for membership in its Artificial Intelligence Security Standards Working Group (WG9).
The group will focus on developing AI security standards, including assessing current risks, identifying emerging trends, and establishing a structured framework for AI security governance. Membership is open to qualified domestic entities, including companies, universities, and research institutions involved in relevant technical fields.
This move reflects China’s continued efforts to formalize AI security standards and strengthen regulatory control over AI development and deployment.
7. Hong Kong PCPD Warns of Privacy Risks in Agentic AI Tools
March 16, 2026 Hong Kong
Hong Kong’s Privacy Commissioner for Personal Data (PCPD) has issued an alert highlighting heightened privacy and security risks associated with agentic AI systems, such as OpenClaw.
Unlike traditional chatbots, agentic AI can autonomously perform multi-step tasks with broad system access, including files, emails, and external services, increasing risks of unauthorized access, data breaches, and system compromise. The PCPD advises organizations to adopt strict safeguards, including limiting access rights, using trusted and updated versions, securing system environments, and conducting continuous risk assessments. It also emphasizes the importance of a human-in-the-loop approach for high-impact decisions.
The alert highlights growing regulatory concern around advanced AI systems with autonomous capabilities and elevated data access.
8. Vietnam’s New AI Law Balances Innovation with State Control
March 1, 2026 Vietnam
Vietnam has introduced a comprehensive AI law, marking the first such framework in Southeast Asia and reflecting a dual focus on innovation and strong state oversight.
The law adopts a risk-based approach, requiring AI systems to be classified by risk level, with higher-risk systems subject to notification, audits, and stricter compliance obligations. It also mandates labeling of AI-generated content and prohibits uses such as deceptive deepfakes and activities threatening public order or national security. A key feature is its human accountability model, where responsibility for AI outcomes remains with individuals rather than systems. At the same time, the framework includes incentives to support domestic AI development.
The law signals a broader trend toward centralized governance models that combine regulatory control with strategic support for national AI ecosystems.
Upcoming Regulatory Developments
EU AI Transparency Code: The European Commission is advancing its Code of Practice on AI-generated content, with the final version expected by June 2026 following stakeholder feedback.
France AI in Healthcare Guidance: CNIL has opened consultation on draft guidelines for AI use in healthcare, with comments due by April 16, 2026.
South Korea AI Transparency Rules: The PIPC is set to release updated privacy policy standards in April 2026, strengthening disclosure requirements for generative AI.
U.S. AI Chatbot Regulation: Washington State has passed a law regulating AI companion chatbots, which takes effect January 1, 2027.