Global AI Regulations Roundup: Top Stories of December 2025

Contributors

Yasir Nawaz

Digital Content Producer at Securiti

Aswah Javed

Associate Data Privacy Analyst at Securiti

Rohma Fatima Qayyum

Associate Data Privacy Analyst at Securiti

Faqiha Amjad

Associate Data Privacy Analyst at Securiti

Published January 1, 2026 / Updated January 5, 2026

Editorial Note

This month’s AI developments point to a clear shift from experimentation to enforceable governance. Regulators are no longer debating whether to regulate AI, but how quickly organizations must operationalize risk controls, transparency, and accountability. Across jurisdictions, we see convergence around core themes: child safety, misuse prevention, security-by-design, and clearer role allocation between AI developers and deployers.
At the same time, regulatory approaches are diverging in structure. The EU is advancing detailed, lifecycle-based obligations under the AI Act, while countries in the Asia-Pacific are prioritizing governance frameworks, security guidance, and national capability-building. In the U.S., momentum is building through sectoral enforcement and state-level AI laws rather than a single comprehensive statute.
For organizations, the takeaway is practical: AI compliance is becoming operational, not theoretical. Inventorying AI systems, documenting risk decisions, preparing incident reporting workflows, and aligning security controls will be critical to staying ahead of 2026 enforcement timelines.
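
To make that takeaway concrete, here is a minimal sketch of what an AI system inventory record might look like in Python. The field names and risk tiers are illustrative assumptions on our part, not terminology drawn from any statute discussed below.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative sketch only: field names and risk tiers are our own
# assumptions, not terms defined by the EU AI Act or any other law.
@dataclass
class AISystemRecord:
    name: str
    owner: str                 # accountable business owner
    risk_tier: str             # e.g., "minimal", "limited", "high"
    intended_purpose: str
    last_risk_review: date
    incident_contact: str

    def review_overdue(self, max_age_days: int = 365) -> bool:
        """Flag systems whose documented risk review is stale."""
        return date.today() - self.last_risk_review > timedelta(days=max_age_days)

inventory = [
    AISystemRecord(
        name="support-chatbot",
        owner="customer-ops",
        risk_tier="limited",
        intended_purpose="Answer first-line customer questions",
        last_risk_review=date(2025, 3, 1),
        incident_contact="ai-incidents@example.com",
    ),
]

for record in inventory:
    if record.review_overdue():
        print(f"Risk review overdue: {record.name}")
```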

North & South America Jurisdictions

1. New York Enacts RAISE Act Establishing AI Safety Obligations for Large Developers

December 19, 2025
New York, United States

New York has enacted the Responsible AI Safety and Education (RAISE) Act, establishing new safety and governance obligations for large frontier AI developers. The law requires covered developers to publish and annually review AI safety and security plans, addressing risk assessment, mitigation, cybersecurity, third-party evaluations, and incident response.

The Act introduces 72-hour reporting requirements for critical AI safety incidents and creates a dedicated AI oversight office within the New York State Department of Financial Services to enforce compliance. Penalties for repeat violations can reach $3 million. Overall, the RAISE Act raises compliance expectations for AI developers operating in New York and signals continued momentum toward state-level AI governance in the U.S.
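
As a minimal illustration of how a 72-hour window might be tracked operationally, the sketch below computes a reporting deadline from the moment an incident is detected; the record layout and field names are our own assumptions, not a form prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative sketch: the 72-hour window comes from the RAISE Act as
# described above; this record layout is our own invention.
REPORTING_WINDOW = timedelta(hours=72)

@dataclass
class SafetyIncident:
    summary: str
    detected_at: datetime  # use timezone-aware timestamps

    @property
    def report_due(self) -> datetime:
        return self.detected_at + REPORTING_WINDOW

incident = SafetyIncident(
    summary="Model produced unsafe capability output",
    detected_at=datetime(2026, 1, 3, 9, 0, tzinfo=timezone.utc),
)
print("Report due by:", incident.report_due.isoformat())
```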


2. U.S. Senators Press AI Companies on Scam Prevention and Platform Safeguards

December 12, 2025
United States

U.S. Senators Maggie Hassan and Josh Hawley have called on leading generative AI providers, including OpenAI, Google, Meta, Microsoft, Anthropic, xAI, and Perplexity AI, to strengthen safeguards against the growing misuse of AI for scams and fraud.

The bipartisan request reflects rising concern that generative AI is lowering the cost and scale of fraud by enabling highly personalized scam messages, voice cloning, and automated outreach at an industrial scale. Citing FBI data showing $16.6 billion in scam-related losses in 2024, the Senators questioned whether existing guardrails are effective or merely symbolic. The move signals increasing congressional scrutiny of AI risk controls and foreshadows potential regulatory expectations around misuse prevention, monitoring, and cooperation with law enforcement.


3. White House Executive Order Pushes for Federal AI Preemption

December 11, 2025
United States

U.S. President Donald Trump has issued a new Executive Order establishing a “minimally burdensome” national framework for artificial intelligence, signaling a strong push toward federal preemption of state AI laws. Issued amid growing state-level AI regulation, the Order frames U.S. AI leadership as an economic and national security priority and criticizes state laws deemed overly restrictive or innovation-stifling.

The EO directs federal agencies to identify, challenge, and potentially block state AI laws that conflict with federal policy, including through litigation, funding restrictions, and new federal reporting standards. It also calls for legislative recommendations to establish uniform federal AI rules, while carving out areas such as child safety and government use.

Overall, the Order signals a move toward centralized AI governance in the U.S., with implications for organizations navigating state-level AI compliance requirements and future federal standards.


4. GUARD Act Gains Cosponsors from Multiple U.S. Senators

December 10, 2025
United States

A bipartisan group of U.S. Senators has added new cosponsors to the GUARD Act, proposed legislation aimed at strengthening safeguards for children’s interactions with AI chatbots. The bill would prohibit AI companion systems designed for minors, require chatbots to clearly disclose their non-human status, and introduce criminal liability for companies that knowingly make available to minors AI systems that generate or solicit sexual content.

The proposal reflects growing regulatory concern about the risks posed by AI-driven conversational systems to children, including exposure to harmful content and manipulation. If enacted, the GUARD Act would significantly raise compliance expectations for AI developers, particularly those offering consumer-facing or conversational AI products, by introducing age-verification requirements, transparency obligations, and heightened content controls for child-related use cases.


5. Florida Proposes Artificial Intelligence Bill of Rights and Data Center Safeguards

December 4, 2025
Florida, United States

Florida has announced a proposal to establish a Citizen Bill of Rights for Artificial Intelligence, aimed at strengthening consumer protections related to AI use. The proposal would introduce requirements such as clear disclosure when users interact with AI systems, restrictions on the use of an individual’s name, image, or likeness without consent, enhanced parental controls for minors, limits on AI use in mental health services, and safeguards around data security and sharing. The proposal also includes measures limiting the use of AI in insurance claims decisions and prohibiting government use of certain foreign-developed AI tools.

In parallel, Florida proposed new rules governing hyperscale AI data centers, including restrictions on utility cost pass-throughs to consumers, limits on taxpayer subsidies, enhanced local government authority over siting decisions, and protections for environmental and water resources. Together, the proposals signal growing state-level attention to AI consumer rights, transparency, and infrastructure impacts.


6. CISA and International Partners Issue Guidance on AI Use in Operational Technology

December 3, 2025
United States

The U.S. Cybersecurity and Infrastructure Security Agency (CISA), together with the Australian Cyber Security Centre and other international partners, has released joint guidance on the secure integration of artificial intelligence into operational technology (OT) systems. The guidance targets critical infrastructure owners and operators deploying AI in industrial and control system environments.

The document outlines four core principles to help organizations manage safety, security, and reliability risks when integrating AI, including machine learning, large language models, and AI agents. It emphasizes continuous monitoring, validation, and risk management to prevent unintended impacts on critical operations.

The guidance reflects growing global alignment on securing AI-enabled industrial systems as AI adoption expands in critical infrastructure.


Europe & Africa Jurisdictions

7. European Commission Publishes First Draft of the Code of Practice for Marking and Labelling AI-Generated Content

December 17, 2025
European Union

The European Commission has released the first draft of a Code of Practice on marking and labelling AI-generated content, providing practical guidance for implementing the transparency obligations under Article 50 of the EU AI Act. The draft aims to ensure that synthetic content, such as deepfakes or AI-generated media, is clearly identifiable, supporting efforts to combat disinformation and impersonation.

The framework introduces a dual-layer approach: AI providers must embed machine-readable markers, such as metadata or watermarking, while professional deployers must apply visible labels or standardized disclosures for users. Once finalized, the Code will become legally enforceable by August 2, 2026, signaling upcoming compliance requirements for organizations developing or using generative AI in the EU.
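
As a rough sketch of the dual-layer idea, the snippet below embeds a machine-readable marker as a PNG metadata chunk and stamps a visible label on the image using Pillow. The key name and label wording are our own placeholders; the draft Code does not prescribe this particular format.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Illustrative only: "ai-generated" is a placeholder key, not a marker
# format specified by the draft Code of Practice.
def mark_and_label(img: Image.Image, out_path: str) -> None:
    # Layer 1: machine-readable marker embedded as PNG metadata.
    meta = PngInfo()
    meta.add_text("ai-generated", "true")

    # Layer 2: visible label for human viewers.
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "AI-generated content", fill="white")

    img.save(out_path, pnginfo=meta)

img = Image.new("RGB", (400, 200), color="navy")
mark_and_label(img, "labeled.png")

# A downstream consumer can read the marker back:
print(Image.open("labeled.png").text.get("ai-generated"))
```

Plain metadata like this is easily stripped in practice, which is one reason more robust techniques such as watermarking are also contemplated; the sketch only illustrates the two layers.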


8. UK Ofcom Clarifies Online Safety Act Obligations for AI Chatbots

December 16, 2025
United Kingdom

UK regulator Ofcom has clarified how the Online Safety Act applies to AI chatbots, particularly following concerns about self-harm encouragement and impersonation risks. The guidance confirms that AI chatbots fall within the Act when they enable user-to-user interactions, such as sharing AI-generated content, group chats involving chatbots, or user-created bots accessible by others.

Providers of in-scope services must assess and mitigate risks, with heightened protections for children, including safeguards against harmful content and robust age verification for services capable of generating pornographic material. Chatbots limited to one-to-one interactions, without content sharing or multi-user features, remain outside the Act’s scope. The clarification provides important boundary-setting for chatbot providers designing or deploying AI-driven services in the UK.


9. Spain’s AESIA Releases Guides on AI Act Compliance Obligations

December 10, 2025
Spain

Spain’s AI supervisory authority, AESIA, has released a series of non-binding guidance documents to help organizations prepare for compliance with the EU AI Act. The guides cover key areas including risk classification, roles and responsibilities, continuous risk management, transparency obligations, cybersecurity, and incident reporting.

Notably, AESIA introduced a checklist-based self-assessment tool aligned with 12 core AI Act requirements, alongside guidance on managing cybersecurity threats such as model poisoning, adversarial attacks, and supply-chain risks. While non-binding, the materials offer practical direction for providers and deployers implementing AI governance frameworks ahead of enforcement.
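
To illustrate the checklist mechanics, the sketch below scores a self-assessment against a list of requirement labels. The labels are invented stand-ins on our part; they are not AESIA’s actual 12 checklist items.

```python
# Illustrative placeholders: these labels are our own stand-ins, not
# AESIA's actual checklist items.
REQUIREMENTS = [
    "risk_classification_documented",
    "roles_and_responsibilities_assigned",
    "risk_management_process_in_place",
    "transparency_notices_prepared",
    "cybersecurity_controls_tested",
    "incident_reporting_workflow_defined",
]

def assess(answers: dict[str, bool]) -> list[str]:
    """Return the requirements that still need work."""
    return [req for req in REQUIREMENTS if not answers.get(req, False)]

answers = {
    "risk_classification_documented": True,
    "transparency_notices_prepared": True,
}
for gap in assess(answers):
    print("Gap:", gap)
```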


10. European Commission Investigates Google’s Use of Creators’ Content for AI

December 9, 2025
European Union

The European Commission has opened a formal antitrust investigation into whether Google unfairly used web publishers’ and YouTube creators’ content to develop its AI services, including AI Overviews and AI Mode. Regulators are examining whether content was used without adequate compensation or a meaningful opt-out, particularly where refusing use could result in loss of visibility on Google Search.

The investigation also considers whether Google restricted rival AI developers’ access to YouTube content, potentially distorting competition. If confirmed, the practices could constitute an abuse of dominant position under EU competition law, exposing Google to fines of up to 10% of its global annual turnover.

The case highlights growing regulatory scrutiny of how dominant platforms leverage content to train and deploy AI systems.


Asia-Pacific Jurisdictions

11. Hong Kong’s Privacy Commissioner Releases a Deepfakes Toolkit to Protect Children From AI Risks

December 17, 2025
Hong Kong

Hong Kong’s Privacy Commissioner for Personal Data (PCPD) has released new guidance on the abuse of AI-generated deepfakes, alongside findings from a CCTV privacy investigation. The Deepfakes Toolkit, aimed at schools and parents, addresses risks such as cyberbullying, scams, and falsified intimate images, and emphasizes that deepfake creation and use remain subject to existing data protection and criminal laws.

Separately, the PCPD issued an advisory letter following an investigation into CCTV placement near a restroom, stressing that surveillance should not occur in areas with a reasonable expectation of privacy and reinforcing requirements for lawful, fair, and proportionate data collection.


12. India Introduces AI Ethics and Accountability Bill

December 16, 2025
India

India has introduced the Artificial Intelligence (Ethics and Accountability) Bill 2025, proposing an ethics-focused framework for AI governance through the creation of an AI Ethics Committee.

The Bill targets risks such as algorithmic bias, lack of transparency, and misuse of AI in sensitive sectors, including surveillance, law enforcement, credit, and employment. The proposal would require AI developers to disclose system purposes and limitations, conduct bias audits, maintain compliance records, and participate in grievance redressal processes.

While the Bill marks an early step toward AI accountability, it leaves key issues such as copyright, data ownership, licensing, and compensation for training data unaddressed, highlighting gaps likely to shape future AI policy debates in India.


13. Vietnam’s National Assembly Passes Law on Artificial Intelligence

December 10, 2025
Vietnam

Vietnam’s National Assembly has passed a Law on Artificial Intelligence, set to take effect on March 1, 2026. The law establishes foundational definitions for AI systems and stakeholders, introduces principles such as human-centered design, fairness, and transparency, and adopts a risk-based classification of AI systems.

Enforcement will be overseen by the Ministry of Science and Technology, with penalties applicable to violations involving high-risk AI systems. The law significantly raises regulatory expectations for organizations developing or deploying AI in Vietnam, requiring early alignment of governance, risk management, and compliance practices ahead of the effective date.


14. South Korea’s Internet and Security Agency Releases AI Security Guide

December 10, 2025
South Korea

The Korea Internet & Security Agency (KISA) has published an AI Security Guide aimed at helping organizations prevent emerging AI-related security threats.

The guidance emphasizes establishing a clear AI security governance framework, including defined roles, policies, and accountability. Key recommendations include risk analysis and threat modeling, implementation of mitigation measures, securing AI data through encryption and access controls, and educating users on threats such as phishing and deepfakes.

The guide signals growing regulatory focus on proactive AI security practices as organizations increasingly integrate AI into their operations.
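
As one concrete reading of “securing AI data through encryption and access controls,” the sketch below encrypts a record at rest with Fernet from the cryptography package; the library choice is ours, not KISA’s.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Illustrative only: Fernet is our choice for the sketch; KISA's guide
# recommends encryption and access controls without mandating a library.
key = Fernet.generate_key()   # store in a KMS or secrets manager, not in code
fernet = Fernet(key)

record = b'{"user_prompt": "quarterly revenue summary"}'
ciphertext = fernet.encrypt(record)

# Only holders of the key (an access-control decision) can read it back.
assert fernet.decrypt(ciphertext) == record
```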


15. Australian Government Launches Its National AI Plan

December 1, 2025
Australia

Australia has launched a National AI Plan outlining a coordinated strategy to support AI innovation, adoption, and safety across the economy. The plan aims to position Australia as a developer and adopter of trusted AI, while ensuring benefits are broadly shared across businesses, communities, and the public sector.

Key pillars include investment in AI infrastructure and skills, support for SMEs and regional adoption, integration of AI into public services, and the establishment of an AI Safety Institute to monitor risks and promote responsible use. The plan emphasizes safety, transparency, and trust, alongside ongoing review of legal and regulatory frameworks addressing issues such as privacy, bias, and security.

Overall, the initiative signals a national, whole-of-economy approach to AI governance and competitiveness.


16. EU and Singapore Reinforce Digital Cooperation Through Digital Partnership Council

December 1, 2025
Singapore

The European Union and Singapore held their second Digital Partnership Council meeting, reaffirming commitments to deepen cooperation across key digital policy areas, including artificial intelligence, cybersecurity, online safety, data, and digital trust services. The discussions emphasized aligning approaches to AI safety, including collaboration on large language models, as well as joint efforts to tackle online harms and scams, with a focus on protecting minors.

The partners also explored interoperability of digital identity and trust services, expanded cooperation on cross-border data flows and data spaces, and continued collaboration on cyber resilience. Additional areas of interest included semiconductors and quantum technologies, highlighting shared priorities in innovation, standards-setting, and international digital governance.


WHAT'S NEXT:
Key AI Developments to Watch For

  1. The EU Commission announced that the AI Board has held its sixth meeting to discuss the Digital Omnibus proposal and AI Act implementation priorities. The AI Office is preparing guidelines on high-risk classifications, transparency, incident reporting, and the Act’s interplay with other EU laws. Stakeholders should monitor these guidelines, which are expected soon, for compliance implications.
  2. The EU Commission is collecting feedback on draft rules for AI regulatory sandboxes mandated by the AI Act. These sandboxes will allow providers to develop and test innovative AI systems under regulatory supervision, supporting both innovation and compliance. The draft is open for public feedback for five weeks until January 23, 2026.
  3. Singapore’s MAS has released AI Risk-Management Guidelines that propose governance, oversight, and lifecycle controls for financial institutions, including AI inventories, risk assessment, and proportional controls for technologies such as generative AI. Public consultation is open until January 31, 2026.
  4. South Korea’s Enforcement Decree of the AI Basic Act outlines criteria for AI support programs, safety and transparency requirements, and enforcement mechanisms, including a one-year grace period on fines after the Act takes effect on January 22, 2026.
  5. The National Institute of Standards and Technology (NIST) released a preliminary draft of its Cybersecurity Framework Profile for Artificial Intelligence, which provides organizations with a roadmap to manage AI-specific risks by mapping the core functions of NIST’s Cybersecurity Framework 2.0 to AI environments. The draft is open for public comments until January 30, 2026.
