Global AI Regulations Roundup: Top Stories of March 2026

Watch: March's AI Pulse - All Major Highlights

A quick overview of global AI headlines you cannot afford to miss.
Contributors

Yasir Nawaz

Digital Content Producer at Securiti

Aamina Shekha

Associate Data Privacy Analyst at Securiti

Rohma Fatima Qayyum

Associate Data Privacy Analyst at Securiti

Faqiha Amjad

Associate Data Privacy Analyst at Securiti

Published April 6, 2026 / Updated April 16, 2026

Editorial

AI Governance Splits: Innovation vs Control

AI regulation is rapidly diverging across jurisdictions, revealing two dominant models: innovation-led frameworks and control-driven governance. While the U.S. emphasizes competitiveness and centralized policy direction, the EU continues refining risk-based regulation with added safeguards. In contrast, parts of Asia are moving toward stronger state oversight, combining broad prohibitions with flexible enforcement powers.

At the same time, a consistent theme is emerging: accountability is shifting back to humans, whether through clinical oversight, liability models, or governance frameworks. Transparency is also becoming non-negotiable, particularly around training data, bias, and AI-generated content.

Going forward, organizations should expect more granular, sector-specific rules, tighter scrutiny of high-risk AI, and increasing fragmentation across regions, making adaptability, governance maturity, and cross-border compliance strategies critical.

North & South America Jurisdiction

1. California Issues Executive Order to Strengthen AI Procurement Standards

March 30, 2026
California, United States

California Governor Gavin Newsom has issued an executive order to strengthen AI governance in state procurement, requiring companies to demonstrate robust privacy, security, and ethical safeguards to do business with the state.

The order directs agencies to assess AI systems for risks such as bias, misuse, and violations of civil rights, while also promoting responsible adoption of generative AI to improve public services. It includes plans to develop best practices for watermarking AI-generated content and expanding oversight of AI vendors.

The move positions California as pursuing a more protective and accountability-driven approach to AI, highlighting growing divergence between state and federal AI governance strategies in the U.S.


2. White House Unveils National AI Legislative Framework

March 20, 2026

The White House has introduced a comprehensive national AI legislative framework aimed at balancing innovation, economic competitiveness, and public trust.

The framework outlines six key priorities: protecting children through parental controls and safety features; strengthening national security and infrastructure; safeguarding intellectual property while enabling fair use for AI training; preventing AI-driven censorship; accelerating innovation by removing regulatory barriers; and developing an AI-ready workforce. It also emphasizes the need for a unified federal approach, warning that fragmented state-level regulation could undermine U.S. leadership in the global AI race.

This signals a shift toward centralized AI governance, with a strong focus on national security, economic growth, and global competitiveness.


Europe & Africa Jurisdiction

3. European Parliament Adopts Position on AI “Digital Omnibus” Proposal

March 26, 2026

The European Parliament has adopted its position on the “Digital Omnibus” proposal to streamline the EU Artificial Intelligence Act, with strong majority support.

The proposal introduces a targeted ban on AI “nudifier” systems used to generate non-consensual intimate images, while allowing systems with built-in safeguards. It also sets clearer compliance timelines, including deadlines for high-risk AI systems and watermarking requirements for AI-generated content.

To reduce regulatory burden, the Parliament proposed easing obligations where AI systems are already governed by sector-specific laws and extending support mechanisms to small mid-cap enterprises. It also permits limited processing of personal data to detect algorithmic bias, subject to strict safeguards.

The proposal now moves to negotiations with the Council, marking a key step toward refining the EU’s AI regulatory framework.


4. UK Report Warns Generative AI Threatens Creative Industries

March 6, 2026
United Kingdom

The UK Parliament’s Communications and Digital Committee has warned that generative AI poses a significant risk to the country’s creative industries, primarily due to the unlicensed use of copyrighted material for AI training.

The report highlights concerns around the lack of transparency from AI developers, making it difficult for creators to determine whether their works have been used or to enforce their rights. It also identifies gaps in legal protection for digital likeness, style, and identity. Rather than reforming copyright law, the Committee recommends a licensing-based AI framework, mandatory transparency on training data, and stronger protections against unauthorized digital replicas. It also urges the government to reject proposals for broad text and data mining exceptions.

The findings underscore growing regulatory pressure to align AI development with creator rights and accountability.


Asia Jurisdiction

5. Singapore’s MOH and HSA Launch Revised AI in Healthcare Guidelines (AIHGle 2.0)

March 10, 2026
Singapore

Singapore’s Ministry of Health and Health Sciences Authority have released updated guidelines for AI use in healthcare, strengthening oversight of advanced systems such as generative AI and deep learning.

The revised framework introduces a lifecycle-based approach, requiring developers to continuously validate AI systems, while healthcare providers must implement risk-based governance and oversight. It also addresses emerging risks such as model drift and lack of transparency. Importantly, the guidelines reinforce that healthcare professionals remain ultimately responsible for clinical decisions, ensuring AI outputs are validated and clearly communicated to patients.

The update reflects a growing focus on accountability, safety, and trust in the deployment of AI in high-risk sectors like healthcare.
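The lifecycle-based approach above — continuous validation by developers and risk-based monitoring by providers — can be illustrated with a minimal sketch. This is not from the AIHGle 2.0 guidelines themselves; the function name, threshold, and check below are illustrative assumptions showing one simple way a team might flag model drift between validation and deployment.

```python
# Hedged sketch: a basic post-deployment drift check, comparing the
# model's recent positive-prediction rate against its validation-time
# baseline. Threshold and names are illustrative, not from AIHGle 2.0.

def drift_alert(baseline_positive_rate: float,
                recent_predictions: list[int],
                tolerance: float = 0.10) -> bool:
    """Flag drift when the recent positive-prediction rate deviates
    from the validation baseline by more than `tolerance` (absolute)."""
    recent_rate = sum(recent_predictions) / len(recent_predictions)
    return abs(recent_rate - baseline_positive_rate) > tolerance
```

In practice a real validation regime would track far richer signals (feature distributions, calibration, subgroup performance), but even a check this simple makes "continuously validate" concrete: it turns a governance obligation into a recurring, testable measurement.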


6. China Opens Membership for AI Security Standards Working Group (WG9)

March 20, 2026
China

China’s National Cybersecurity Standardization Technical Committee (TC260) has opened applications for membership in its Artificial Intelligence Security Standards Working Group (WG9).

The group will focus on developing AI security standards, including assessing current risks, identifying emerging trends, and establishing a structured framework for AI security governance. Membership is open to qualified domestic entities, including companies, universities, and research institutions involved in relevant technical fields.

This move reflects China’s continued efforts to formalize AI security standards and strengthen regulatory control over AI development and deployment.


7. Hong Kong PCPD Warns of Privacy Risks in Agentic AI Tools

March 16, 2026
Hong Kong

Hong Kong’s Privacy Commissioner for Personal Data (PCPD) has issued an alert highlighting heightened privacy and security risks associated with agentic AI systems, such as OpenClaw.

Unlike traditional chatbots, agentic AI can autonomously perform multi-step tasks with broad system access, including files, emails, and external services, increasing risks of unauthorized access, data breaches, and system compromise. The PCPD advises organizations to adopt strict safeguards, including limiting access rights, using trusted and updated versions, securing system environments, and conducting continuous risk assessments. It also emphasizes the importance of a human-in-the-loop approach for high-impact decisions.

The alert highlights growing regulatory concern around advanced AI systems with autonomous capabilities and elevated data access.
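Two of the PCPD's recommended safeguards — limiting an agent's access rights and keeping a human in the loop for high-impact actions — can be sketched in a few lines. The action names and policy below are illustrative assumptions, not anything prescribed by the PCPD alert.

```python
# Illustrative sketch of least-privilege plus human-in-the-loop
# gating for an agentic AI system. ALLOWED_ACTIONS is the agent's
# granted scope; HIGH_IMPACT actions also require human sign-off.
# All action names here are hypothetical examples.

ALLOWED_ACTIONS = {"read_file", "send_email", "delete_file"}
HIGH_IMPACT = {"send_email", "delete_file"}

def execute_agent_action(action: str, approved_by_human: bool = False) -> str:
    # Deny anything outside the agent's explicitly granted scope.
    if action not in ALLOWED_ACTIONS:
        return f"denied: '{action}' is outside the agent's granted scope"
    # High-impact actions are queued until a human approves them.
    if action in HIGH_IMPACT and not approved_by_human:
        return f"pending: '{action}' queued for human review"
    return f"executed: {action}"
```

The design point is that the guardrail sits outside the model: whatever the agent plans, the executor enforces scope and escalation before anything touches files, email, or external services.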


8. Vietnam’s New AI Law Balances Innovation with State Control

March 1, 2026
Vietnam

Vietnam has introduced a comprehensive AI law, marking the first such framework in Southeast Asia and reflecting a dual focus on innovation and strong state oversight.

The law adopts a risk-based approach, requiring AI systems to be classified by risk level, with higher-risk systems subject to notification, audits, and stricter compliance obligations. It also mandates labeling of AI-generated content and prohibits uses such as deceptive deepfakes and activities threatening public order or national security. A key feature is its human accountability model, where responsibility for AI outcomes remains with individuals rather than systems. At the same time, the framework includes incentives to support domestic AI development.

The law signals a broader trend toward centralized governance models that combine regulatory control with strategic support for national AI ecosystems.
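The labeling mandate above implies that AI-generated content carries a machine-readable disclosure. As a purely illustrative sketch — the field names below are assumptions, not any statutory schema from Vietnam's law — a provider might attach a provenance label like this:

```python
# Hedged sketch: wrapping generated text with a machine-readable
# "AI-generated" disclosure label. Field names ("provenance",
# "ai_generated", etc.) are illustrative, not a legal schema.

import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> dict:
    """Return the content alongside a disclosure record."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_content("Example output text.", "example-model-v1")
print(json.dumps(record["provenance"], indent=2))
```

Real-world labeling regimes tend to go further (visible notices, watermarks embedded in media, signed provenance manifests such as C2PA), but the core compliance primitive is the same: disclosure travels with the content.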


WHAT'S NEXT:
Key AI Developments to Watch For

  • EU AI Transparency Code: The European Commission is advancing its Code of Practice on AI-generated content, with the final version expected by June 2026 following stakeholder feedback.
  • France AI in Healthcare Guidance: CNIL has opened consultation on draft guidelines for AI use in healthcare, with comments due by April 16, 2026.
  • South Korea AI Transparency Rules: The PIPC is set to release updated privacy policy standards in April 2026, strengthening disclosure requirements for generative AI.
  • U.S. AI Chatbot Regulation: Washington has passed a law regulating AI companion chatbots, taking effect January 1, 2027.
