Global AI Regulations Roundup: Top Stories of January 2026

Contributors

Yasir Nawaz

Digital Content Producer at Securiti

Rohma Fatima Qayyum

Associate Data Privacy Analyst at Securiti

Aiman Kanwal

Associate Data Privacy Analyst at Securiti

Faqiha Amjad

Associate Data Privacy Analyst at Securiti

Published February 5, 2026

Editorial Note

The Dilemma of AI Governance: State Enforcement vs Federal Preemption

January was marked by significant jurisdictional friction, representing a shift away from the "governance by policy" trend toward active legal conflict. Enforcement grew increasingly aggressive, with a simultaneous surge in state and federal actions. Although the creation of the federal AI Litigation Task Force signaled a push for centralized oversight, states such as California did not retreat; instead, they doubled down with high-profile enforcement actions.

Globally, we observed a heightened regulatory focus on technical standards, from Singapore’s world-first Agentic AI framework at Davos to the EU’s first implementation warnings. This demonstrates that compliance is no longer an end in itself but a means to an end, one that demands real-time monitoring and "responsible-by-design" engineering.

For organizations, this reality requires navigating the tension between federal preemption and state-level enforcement, and it mandates a flexible compliance framework rather than a rigid checklist. In 2026, competitive advantage belongs to those who treat transparency and adaptability as foundational elements of their AI strategy.

North & South America Jurisdiction

1. U.S. State Attorneys General Press xAI Over AI-Generated Nonconsensual Intimate Images

January 23, 2026
United States

A coalition of 35 U.S. state attorneys general has increased pressure on xAI over allegations that its chatbot Grok enabled the creation and spread of AI-generated nonconsensual intimate images (NCII), including content involving minors. The actions follow reports that Grok allowed users to transform ordinary images into sexually explicit content without consent.

The escalation includes a formal investigation and cease-and-desist order from California Attorney General Rob Bonta, alongside coordinated demands from multiple states for stronger safeguards, content removal, user suspensions, and reporting to law enforcement. The move signals rising state-level enforcement risk for AI developers over harmful model outputs.

Read More

2. Ontario Privacy and Human Rights Regulators Issue Joint Principles on Responsible AI

January 21, 2026
Ontario, Canada

The Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) have released joint Principles for the Responsible Use of Artificial Intelligence, setting expectations for organizations developing or deploying AI systems. The principles require AI to be valid and reliable, safe, privacy-protective, human-rights-affirming, transparent, and accountable throughout its lifecycle.

The guidance builds on earlier IPC-OHRC work and aligns with domestic and international AI governance frameworks. It signals that AI adoption in Ontario must balance innovation with enforceable privacy and human rights safeguards, and that failures in AI governance may attract multi-regulatory scrutiny.

Read More

3. California Targets xAI Over Grok-Generated Nonconsensual and Child Sexual Abuse Imagery

January 14-16, 2026
California, United States

California Attorney General Rob Bonta has escalated enforcement action against xAI over allegations that its chatbot Grok was used to generate and distribute AI-generated nonconsensual intimate images (NCII) and child sexual abuse material (CSAM). The action follows reports that users were able to upload ordinary images and receive sexually explicit outputs without the subjects’ consent. On January 14, the California Attorney General’s Office opened a formal investigation into xAI’s practices. Just two days later, on January 16, the state issued a cease-and-desist order, demanding that xAI immediately halt the creation and dissemination of such content.

The move marks one of the most significant state-level enforcement actions to date, directly targeting AI model outputs and signaling heightened legal risk for developers whose systems enable harmful generative content.

4. Department of Justice Establishes AI Litigation Task Force

January 13, 2026
United States

Following the Executive Order of December 11, 2025, the US Department of Justice has established an AI Litigation Task Force. Headed by the Attorney General, the group is mandated to oversee the deployment of AI within the federal government and to challenge state-level AI regulations that may conflict with federal interests or impede innovation. The Task Force will evaluate state laws on the grounds of unconstitutional interference with interstate commerce and federal preemption, with a specific focus on "onerous" requirements.

The group will act as a central hub for coordinating AI-related enforcement actions and policy across departments. While its primary focus is to promote a “minimally burdensome” standard, the group will also work to prevent algorithmic discrimination and to ensure that AI is used responsibly by private entities.

Overall, the creation of the Task Force as a specialized unit signals a shift toward centralized AI governance, in contrast to the emerging “patchwork” of state AI laws.

Read More

Europe & Africa Jurisdiction

5. EDPB and EDPS Issue Joint Opinion on Simplifying EU AI Rules

January 21, 2026
European Union

The European Data Protection Board and the European Data Protection Supervisor have published Joint Opinion 1/2026 on proposals to simplify the implementation of harmonized AI rules under the EU’s Digital Omnibus initiative. The regulators stress that the use of special categories of personal data for bias prevention must remain exceptional, strictly necessary, and subject to supervision by data protection authorities.

The opinion also opposes removing registration obligations for certain high-risk AI systems, emphasizing the need for public transparency, and calls for mandatory DPA involvement in EU-level AI sandboxes. While supporting centralized oversight by the AI Office, the boards warn against sidelining national authorities and caution that delaying high-risk AI obligations or weakening provider responsibilities could undermine fundamental rights and create legal uncertainty.

Read More

6. AEPD Publishes Guide on Risks of the Use of Third-Party Images in AI Systems

January 13, 2026
Spain

The Spanish Data Protection Authority (AEPD) has highlighted that the use of third-party images in AI systems is not a neutral act, as any identifiable image or video remains personal data even when modified by filters or avatars.

The entire process leads to an effective loss of control for users over their personally identifiable data, as AI systems often retain "non-visible" copies and generate technical metadata that can lead to persistent identification and re-identification of the subject. Because of the deep information asymmetry between AI providers and the individuals depicted, it becomes increasingly difficult for people to exercise their rights to erasure or objection, making even "playful" uses of AI a high-risk activity for data protection.
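
The guide’s warning about "non-visible" technical metadata is easy to see in practice. The following minimal Python sketch, which is illustrative and not drawn from the AEPD guide itself, lists the EXIF tags (timestamps, device model, GPS coordinates) an ordinary photo can carry with it when uploaded; the file name is hypothetical.

```python
# Illustrative sketch: surface the "non-visible" metadata embedded in an
# ordinary photo. Such tags often survive casual edits and can enable
# persistent identification of the people depicted.
from PIL import Image                 # pip install Pillow
from PIL.ExifTags import TAGS

def inspect_hidden_metadata(path: str) -> dict:
    """Return human-readable EXIF tags found in an image file."""
    with Image.open(path) as img:
        exif = img.getexif()
        # Map numeric EXIF tag IDs to names like "DateTime", "Model", "GPSInfo".
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    # "photo.jpg" is a hypothetical example file.
    for tag, value in inspect_hidden_metadata("photo.jpg").items():
        print(f"{tag}: {value}")
```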

Read More

7. Ofcom Opens Investigation Into X Over Grok-Generated Sexualised Content

January 12, 2026
United Kingdom

The UK’s Office of Communications (Ofcom) has launched a formal investigation into X to assess potential breaches of the Online Safety Act, following reports that the Grok chatbot was used to generate and share non-consensual sexualised images and child sexual abuse material.

The inquiry will examine whether X failed to properly assess risks, implement effective age-assurance measures, or remove illegal content promptly. While X has announced technical restrictions, Ofcom has stated the matter remains a priority. If violations are confirmed, X could face fines of up to £18 million or 10% of global turnover, whichever is greater, and potentially court-ordered service restrictions in the UK.

Read More

8. ICO’s New Tech Futures Report Discusses Agentic AI

January 8, 2026
United Kingdom

The Information Commissioner’s Office (ICO) has released a new Tech Futures report that explores the rise of Agentic AI and its implications for consumer privacy.

Among the report’s predictions, the most notable involve AI agents becoming a mainstay, capable of independently managing household finances, negotiating prices, and making proactive purchases based on learned preferences. While the ICO acknowledges the transformational potential of these autonomous systems, it warns that public trust depends on ensuring that technological advancements do not come at the cost of data privacy.

The ICO’s Executive Director has also emphasized that the office will proactively monitor developers and deployers this year to ensure agentic systems are built on strong data protection foundations. The report serves as a call to action for the industry to prioritize "privacy by design" as these digital companions move toward making independent real-world decisions.

Read More

9. Finland Adopts Decentralized Supervision Model for AI Regulation Implementation

January 7, 2026
Finland

Finland has empowered 15 national authorities to supervise the implementation of the AI Regulation. This decentralized model aims to ensure AI systems do not endanger health, safety, or fundamental rights, while fostering trust for innovation.

The Finnish Transport and Communications Agency will serve as the national contact point for coordination, while the Data Protection Commissioner will act as the market surveillance authority for high-risk systems and monitor prohibited practices.

Read More

10. AGCM Closes Investigation Into DeepSeek

January 6, 2026
Italy

Italy’s Autorità Garante della Concorrenza e del Mercato (AGCM) has closed its investigation into DeepSeek following the company’s agreement to legally binding transparency commitments. The probe, launched in June 2025, concerned allegations that users were not adequately informed about the risks of AI hallucinations, inaccurate outputs, and fabricated information.

To resolve the case, DeepSeek implemented enhanced and immediate risk disclosures, including clear warnings about hallucinations and output limitations. AGCM concluded that these measures sufficiently address consumer protection concerns, bringing the investigation to a close.

Read More

Asia Jurisdiction

11. South Korean Ministry of Science and ICT Launches AI Basic Law Support Desk

January 22, 2026
South Korea

The Ministry of Science and ICT (MSIT) has launched the AI Basic Law Support Desk, staffed by legal, regulatory, and technical experts to provide accurate and prompt consultations. All consultations will remain confidential; general inquiries will be answered within 72 hours on weekdays, and complex inquiries within 14 days.

This initiative supports the smooth implementation of the AI Basic Act, which is the world's second comprehensive AI law after the EU AI Act. The government has implemented a grace period of at least one year during which it will focus on consultations and education rather than investigations or administrative sanctions, helping companies determine whether their systems fall within the law's scope and how to comply accordingly.

Read More

12. Singapore Launches Global Governance Framework for Agentic AI Systems

January 22, 2026
Singapore

Singapore’s Infocomm Media Development Authority (IMDA) has unveiled the world’s first comprehensive Agentic AI System Governance Framework, announced at the World Economic Forum in Davos. Building on Singapore’s 2020 Model AI Governance Framework, the guidance applies to both in-house and third-party AI agents and focuses on practical deployment controls.

The framework addresses four core areas: risk scoping and limits on autonomy, meaningful human accountability, technical lifecycle controls (including testing and monitoring), and end-user transparency and responsibility. IMDA expects organizations globally to adopt the framework over the next 12-24 months, potentially positioning it as a de facto international standard for governing autonomous AI systems.
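
To make the framework’s first two areas concrete, here is a minimal Python sketch of what "limits on autonomy" and "meaningful human accountability" can look like as deployment controls. It is illustrative only, not part of IMDA’s framework; the action names, allow-list, and approval policy are all assumptions.

```python
# Illustrative only: a hypothetical guardrail wrapper, not IMDA's specification.
# It enforces an allow-list of agent actions (risk scoping / limits on autonomy)
# and routes high-impact actions to a human approver (human accountability),
# logging every decision for lifecycle monitoring.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

ALLOWED_ACTIONS = {"search_docs", "draft_email", "schedule_meeting"}  # assumed names
REQUIRES_HUMAN_APPROVAL = {"schedule_meeting"}                        # assumed policy

def execute_agent_action(action: str, payload: dict, approver=None) -> str:
    """Run an agent-proposed action only if policy allows it."""
    if action not in ALLOWED_ACTIONS:
        log.warning("Blocked out-of-scope action: %s", action)
        return "blocked"
    if action in REQUIRES_HUMAN_APPROVAL:
        if approver is None or not approver(action, payload):
            log.info("Deferred %s pending human approval", action)
            return "pending_approval"
    log.info("Executing %s with payload %s", action, payload)
    return "executed"

# Usage: a human-in-the-loop callback decides on gated actions.
print(execute_agent_action("draft_email", {"to": "team"}))            # executed
print(execute_agent_action("delete_records", {}))                     # blocked
print(execute_agent_action("schedule_meeting", {},
                           approver=lambda a, p: False))              # pending_approval
```

In a real deployment the approver callback would route to a review queue, and the decision log would feed the testing and monitoring the framework’s lifecycle controls call for.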

Read More

13. Australia Issues Cybersecurity Guidance on AI Adoption for Small Businesses

January 14, 2026
Australia

The Australian Signals Directorate (ASD), via the Australian Cyber Security Centre (ACSC) and in collaboration with New Zealand’s National Cyber Security Centre (NCSC-NZ) and the Council of Small Business Organisations Australia (COSBOA), has released guidance on managing cybersecurity risks associated with cloud-based AI systems. The guidance targets small businesses and highlights risks such as data leakage, privacy breaches, unauthorized access, output manipulation, and vendor security gaps.

Key recommendations include strengthening data governance, defining clear AI-use policies (including data that must not be uploaded), vetting AI vendors’ privacy and security practices, training staff on responsible AI use, and securing sensitive information. The guidance follows a 2025 incident involving a notifiable breach after personal and health data were uploaded to an AI system and forms part of ASD’s broader Small Business Hub resources.
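
As one concrete illustration of the "data that must not be uploaded" recommendation, the Python sketch below screens a prompt for sensitive patterns before it is sent to a cloud AI service. The patterns and categories are assumptions for illustration, not an ASD/ACSC-endorsed rule set.

```python
# A hypothetical pre-upload screen, sketched to illustrate the guidance's
# "data that must not be uploaded" policy point; the patterns below are
# examples only and would need tuning for any real deployment.
import re

BLOCKED_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "au_phone":      re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),
    "credit_card":   re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_before_upload(text: str) -> list[str]:
    """Return the categories of sensitive data found; empty means safe to send."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

prompt = "Summarise this complaint from jane.doe@example.com, ph 0412 345 678."
findings = screen_before_upload(prompt)
if findings:
    print(f"Upload blocked: found {', '.join(findings)}")  # email_address, au_phone
else:
    print("Prompt passed screening")
```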

Read More

14. Taiwan Passes Artificial Intelligence Basic Act, Establishing National AI Governance Framework

January 1, 2026
Taiwan

Taiwan’s Legislative Yuan has passed the Artificial Intelligence Basic Act, establishing a national, risk-based AI governance framework aligned with international standards. The Act designates the National Science and Technology Council (NSTC) as the central authority and mandates the Ministry of Digital Affairs (MODA) to develop an interoperable AI risk classification framework. It codifies core principles including privacy, transparency, fairness, accountability, and cybersecurity.

The law does not impose immediate private-sector obligations, with detailed requirements to follow through sector-specific rules over the next two years.

Organizations should begin mapping AI systems, assessing risk, and strengthening privacy-by-design practices in anticipation of future high-risk labeling, documentation, and oversight requirements.
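
One lightweight way to begin that system-mapping exercise is an internal inventory record that can later absorb MODA’s risk classifications. The Python sketch below is illustrative; its fields are assumptions, not requirements drawn from the Act.

```python
# An illustrative AI-system inventory record for the mapping exercise the
# article recommends; the fields are assumptions, not requirements from
# Taiwan's Artificial Intelligence Basic Act.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                       # accountable business owner
    purpose: str                     # what the system is used for
    personal_data_used: bool         # flags privacy-by-design review
    provisional_risk: str = "unclassified"  # to be replaced by MODA's framework
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("resume-screener", "HR", "rank job applicants",
                   personal_data_used=True, provisional_risk="high",
                   mitigations=["bias testing", "human review of rejections"]),
    AISystemRecord("doc-summariser", "Legal", "summarise contracts",
                   personal_data_used=False, provisional_risk="low"),
]

# Surface systems likely to attract future high-risk documentation duties.
for rec in inventory:
    if rec.provisional_risk == "high" or rec.personal_data_used:
        print(f"Review needed: {rec.name} ({rec.provisional_risk})")
```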

Read More

WHAT'S NEXT:
Key AI Developments to Watch For

Virginia (US): SB 245 was referred to the Senate Committee on Education and Health. The bill proposes tightened regulation of social media and AI by mandating age screening and prohibiting "dark patterns."
