Global AI Regulations Roundup: Top Stories of November 2025

Watch: November's AI Pulse - All Major Highlights

A quick overview of global AI headlines you cannot afford to miss.

Contributors

Yasir Nawaz

Digital Content Producer at Securiti

Aswah Javed

Associate Data Privacy Analyst at Securiti

Rohma Fatima Qayyum

Associate Data Privacy Analyst at Securiti

Faqiha Amjad

Associate Data Privacy Analyst at Securiti

Published December 4, 2025 / Updated December 18, 2025

Editorial Note

AI Governance: The Era of Enforceable Integrity

November signaled a decisive global shift: AI governance is no longer about setting principles; it’s about asserting power, jurisdiction, and enforceability. Across regions, regulators moved aggressively to close gaps between fast-moving AI deployment and slow-moving legal systems. What stands out is the convergence: governments are independently settling on risk-based frameworks, lifecycle controls, and sector-specific obligations, even as their political strategies diverge.

The U.S. is shaping a state-driven model resisting federal preemption, Europe is sharpening enforcement tools ahead of AI Act implementation, and Asia is racing to operationalize AI governance through decrees, consultations, and national oversight bodies. The common theme: AI governance is becoming operational, not aspirational.

For organizations, this next phase demands more than compliance checklists. It requires architectural readiness: data quality, documentation, human oversight, and risk testing, embedded directly into AI pipelines. The jurisdictions are moving fast. Those who wait will be regulated by surprise; those who prepare will differentiate by design.
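The "architectural readiness" the editorial describes can be pictured as a pre-deployment gate in an AI pipeline. The sketch below is purely illustrative: the control names and the `GovernanceRecord` structure are our own shorthand for the four controls named above, not a construct from any regulation discussed in this roundup.

```python
from dataclasses import dataclass

# Hypothetical governance gate; the four controls mirror the editorial's
# list (data quality, documentation, human oversight, risk testing).
@dataclass
class GovernanceRecord:
    data_quality_checked: bool = False
    documentation_complete: bool = False
    human_oversight_defined: bool = False
    risk_tests_passed: bool = False

def deployment_gate(record: GovernanceRecord) -> list[str]:
    """Return the list of unmet controls; an empty list means clear to deploy."""
    gaps = []
    if not record.data_quality_checked:
        gaps.append("data quality")
    if not record.documentation_complete:
        gaps.append("documentation")
    if not record.human_oversight_defined:
        gaps.append("human oversight")
    if not record.risk_tests_passed:
        gaps.append("risk testing")
    return gaps
```

The point of the sketch is that each control becomes a checkable artifact in the pipeline rather than a line in a compliance document.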

North & South America Jurisdiction

1. Attorneys General Coalition Sends Letter to Congress Over AI Regulation Moratorium

November 25, 2025
United States

A coalition of 36 state AGs has sent a letter to Congressional leaders opposing the proposed provision that inserts an AI regulation moratorium into the annual National Defense Authorization Act (NDAA). In the AGs’ opinion, such a federal preemption would leave communities exposed to significant and rapidly evolving risks associated with AI, stifle regulatory innovation by states, and effectively grant technology companies a shield of immunity, preventing AGs from using their enforcement authority to protect residents.

The coalition urges Congress to reject the moratorium and instead work with the states to establish effective, thoughtful federal regulations that serve as a floor, not a ceiling, for AI protection. The letter underscores the AGs' determination to preserve state authority to protect residents from AI-related harms.

2. Tennessee AI Advisory Council Releases Action Plan to Drive Responsible Innovation

November 24, 2025
Tennessee, United States

Tennessee has released its first Artificial Intelligence Advisory Council Action Plan, outlining a statewide strategy to guide responsible, transparent, and effective AI adoption across government, education, and industry. Submitted to state leadership, the plan positions Tennessee to become a national leader in ethical AI deployment by focusing on improved public services, economic growth, and workforce readiness.

The framework is built around four pillars: launching measurable AI pilot programs, strengthening statewide data and compute infrastructure, expanding AI literacy and reskilling initiatives, and enhancing governance, safety, and accountability to protect citizen rights and privacy. State officials emphasized that Tennessee is shifting from planning to implementation, with the Advisory Council continuing its work through 2028 and issuing annual progress reports.

The action plan serves as a starting point in the regulation of AI in Tennessee.

3. EPIC, FTC, & CFPB Co-Publish Guide on AI Chatbots for Kids & Teens

November 10, 2025
United States

EPIC, the FTC, and the CFPB have jointly released a new guide titled “How Existing Laws Apply to AI Chatbots for Kids and Teens,” reinforcing a core regulatory message: there is no AI exemption in existing law. The guide outlines how regulators can apply current frameworks: COPPA for parental consent and limits on data collection, state privacy laws to restrict targeted advertising and monetization of minors’ data, and UDAP authorities to address deceptive claims about chatbot safety or capabilities.

It also highlights how emerging state laws governing AI mental-health tools and “companion” chatbots can be used to establish guardrails against manipulation, self-harm risks, and misinformation. The publication serves as a practical enforcement roadmap for attorneys general and federal agencies as AI tools for minors rapidly expand.

Europe & Africa Jurisdiction

4. ODPA Releases Ten-Step Practical AI Guidance

November 25, 2025
Guernsey

Guernsey’s Office of the Data Protection Authority (ODPA) has issued Ten-Step Practical AI Guidance to help organizations responsibly deploy AI systems that rely on personal data. The guidance outlines clear expectations across the AI lifecycle, starting with identifying whether personal data is used and clarifying controller/processor roles and lawful bases. It stresses the need for DPIAs, transparency, fairness, bias testing, and human-review mechanisms for significant automated decisions.

The ODPA also emphasizes responsible handling of training data, avoiding unlawfully scraped datasets, ensuring accuracy, and applying minimization and anonymization where possible. Organizations are urged to adopt a risk-based approach, document their decisions, implement strong security measures, and maintain ongoing oversight as models evolve.

Overall, the guidance aims to provide practical, actionable steps for ensuring AI deployment complies with Guernsey’s data protection requirements.

5. AI Office Launches Whistleblower Tool for AI Act Violations

November 24, 2025

The EU’s AI Office has unveiled a new AI Act Whistleblower Tool, enabling individuals to report suspected violations of the AI Act confidentially. The platform supports anonymous submissions in any EU language, covers reports concerning providers of GPAI models and certain AI systems, and allows whistleblowers to upload supporting evidence and securely track the status of their reports.

The tool marks a significant step in strengthening EU enforcement capacity as the AI Act moves toward full implementation, signaling the bloc’s commitment to transparency, accountability, and early detection of systemic risks across the AI ecosystem.

6. Turkey’s Data Protection Authority Issues Guide on GenAI and Personal Data Protection

November 24, 2025
Turkey

Turkey’s Personal Data Protection Authority has released a new guide on the use of generative AI under Law No. 6698, outlining how personal data risks arise during model training, deployment, and output generation. The Authority stresses the need to clearly define data controller-processor roles based on actual functions, and reinforces compliance with core principles such as lawfulness, transparency, and accuracy.

The guide highlights key risks, including hallucinations, bias, security vulnerabilities, and intellectual property concerns, and sets out mitigation measures, emphasizing the importance of human oversight given the potential inaccuracy of AI outputs. It also introduces heightened safeguards for vulnerable groups, particularly children, signaling Turkey’s focus on responsible GenAI deployment.

7. European Commission Releases Digital Omnibus Proposal Amending the AI Act

November 19, 2025

The European Commission has unveiled its Digital Omnibus on AI Regulation, proposing targeted amendments to the AI Act to streamline implementation and make compliance more innovation-friendly.

The proposal links the enforcement timeline for high-risk AI obligations to the availability of conformity tools, meaning Annex I and III rules would only apply once standards and guidelines are formally in place. It also expands simplified obligations to Small Mid-Caps, centralizes enforcement for certain high-risk and GPAI-based systems under the AI Office, and authorizes limited processing of special-category data for bias detection under strict safeguards. The Omnibus further tasks the AI Office with establishing an EU-wide regulatory sandbox, broadens real-world testing allowances, redirects AI literacy responsibilities toward EU institutions, and removes the requirement for a harmonized post-market monitoring plan.

Collectively, the proposal aims to reduce compliance friction while tightening regulatory coherence ahead of full AI Act enforcement.

8. European Data Protection Supervisor Publishes AI Risk Assessment Guidance for Data Controllers

November 11, 2025

The EDPS has published guidance meant to support data controllers in conducting risk assessments while developing, procuring, and deploying AI capabilities.

Drawing on ISO 31000 principles, the document outlines a structured approach to evaluating risks across the AI lifecycle, emphasizing interpretability, explainability, fairness, accuracy, data minimization, and security. While not exhaustive, the guidance complements existing EDPS materials and reinforces that AI governance must integrate both technical safeguards and core data-protection principles. It also encourages tailored, context-specific assessments, emphasizing that even under rapidly evolving AI rules, accountability remains anchored in established EU privacy standards.

9. DPC Issues Statement Raising Concerns Over LinkedIn GenAI Models Training

November 7, 2025
Ireland

Ireland’s Data Protection Commission has issued a formal statement outlining significant concerns with LinkedIn’s plan to train its generative AI models using personal data from EU/EEA users. Although LinkedIn intended to begin model training in November 2025, the DPC flagged material risks relating to transparency, scope of data use, minors’ protections, and the handling of sensitive content. In response, LinkedIn has narrowed the categories and timeframe of data used, introduced stronger safeguards for users under 18, added filters to prevent the collection of sensitive information, and expanded its GDPR documentation, providing revised DPIAs, LIAs, and compatibility assessments.

The DPC has also required LinkedIn to submit a follow-up report within five months, assessing whether its safeguards function effectively in practice. Ongoing monitoring will focus on user control mechanisms, including opt-out settings and LinkedIn’s Data Processing Objection Form, signaling continued regulatory scrutiny over GenAI training practices in the EU.

10. Hungary Enacts National AI Act Implementing EU AI Rules

November 3, 2025
Hungary

Hungary has enacted Act LXXV of 2025, putting the EU AI Act into effect domestically and creating a full national enforcement structure.

The law applies to all AI providers, deployers, manufacturers, and importers operating in Hungary, and designates the AI Market Surveillance Authority as the primary regulator, with additional oversight for high-risk financial-sector AI by the Hungarian National Bank. The Act also establishes a Hungarian AI Council to coordinate public–private governance and monitor public-sector AI use. Violations will carry fines equivalent to those under the EU AI Act: up to HUF 13.3 billion (approximately $39 million) for serious infringements.

The law takes effect 31 days after promulgation, with certain obligations phased in through August 2026.

11. European Parliament Study Highlights Overlaps Between EU AI Act and Other Digital Laws

November 1, 2025

The European Parliament has published a study examining how the EU AI Act intersects with the GDPR, the Data Act, the DSA, the DMA, the Cyber Resilience Act, and NIS2.

The report warns that overlapping obligations, particularly around impact assessments, cybersecurity requirements, and transparency rules, may lead to duplicative compliance burdens and regulatory uncertainty. It highlights issues such as the parallel use of DPIAs and FRIAs, dual cybersecurity expectations for high-risk AI systems, and overlapping duties for VLOPs under both the AI Act and the DSA.

The study recommends coordinated short- and long-term reforms to streamline requirements and ensure a coherent, predictable regulatory environment.

Asia Jurisdiction

12. Saudi Arabia & US Sign Strategic AI Partnership

November 18, 2025
Saudi Arabia

Saudi Arabia and the United States have signed a Strategic AI Partnership focused on advancing semiconductor supply chains, AI application development, national capability building, and large-scale digital infrastructure. Beyond its economic and technological ambitions, the partnership carries regulatory significance: it may influence future cross-border AI standards, shape alignment on technology transfer and IP protections, and encourage both countries to harmonize parts of their domestic AI governance frameworks around shared priorities.

This marks a notable step in the geopolitics of AI cooperation.

13. Singapore Launches Public Consultation on New AI Risk Management Guidelines

November 13, 2025
Singapore

The Monetary Authority of Singapore (MAS) has launched a public consultation on new AI risk-management guidelines for financial institutions, detailing expectations around governance, oversight, and lifecycle controls such as data quality, fairness, transparency, human oversight, and ongoing model monitoring. The guidelines are designed to align with existing national frameworks such as FEAT and the IMDA Model AI Governance Framework. They also require institutions to maintain AI inventories, assess risk materiality, and scale controls proportionately to their size and use of technology.

MAS invites industry and stakeholder feedback until January 31, 2026.
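The inventory-plus-materiality requirement can be sketched as a simple data structure. Everything below is an assumption for illustration: the field names, use cases, and the two-factor materiality rule are not drawn from the MAS consultation paper itself.

```python
# Illustrative sketch of an AI inventory of the kind the MAS consultation
# describes; entries, field names, and thresholds are hypothetical.
AI_INVENTORY = [
    {"use_case": "credit scoring", "customer_impact": "high", "autonomy": "high"},
    {"use_case": "internal document search", "customer_impact": "low", "autonomy": "low"},
]

def risk_materiality(entry: dict) -> str:
    """Scale controls proportionately: more impact or autonomy, more controls."""
    high_flags = sum(entry[k] == "high" for k in ("customer_impact", "autonomy"))
    return ("low", "medium", "high")[high_flags]

# Annotate each inventory entry with its assessed materiality.
for entry in AI_INVENTORY:
    entry["materiality"] = risk_materiality(entry)
```

The design choice worth noting is that materiality is computed from recorded attributes, so the inventory itself becomes the audit trail for why a given system received lighter or heavier controls.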

14. Singapore’s MAS & UK’s FCA Form Strategic Partnership Advancing Responsible AI Adoption

November 12, 2025
Singapore

The Monetary Authority of Singapore (MAS) and the UK’s Financial Conduct Authority (FCA) have announced a major UK-Singapore AI-in-Finance Partnership, unveiled during the Singapore FinTech Festival. The initiative aims to accelerate safe, responsible, and cross-border AI innovation across both financial hubs.

The partnership will support joint testing of AI systems, deeper regulatory cooperation, and coordinated discussions on responsible AI. It also connects industry programs, MAS’s PathFin.ai and the FCA’s AI Spotlight, to help firms scale trustworthy AI solutions across both markets.

15. South Korea’s MSIT Opens Public Consultation on Draft Enforcement Decree of the AI Act

November 12, 2025
South Korea

South Korea’s Ministry of Science and ICT (MSIT) has opened a public consultation on the draft Enforcement Decree of the AI Act, with feedback accepted until December 22, ahead of the law’s effective date on January 22, 2026.

The draft outlines detailed compliance duties for AI developers, service providers, and designated entities, including pre-use disclosures for high-risk AI, harm-evaluation requirements, lifecycle governance controls, and public reporting obligations. It also sets criteria for AI R&D and national AI cluster operations, while expressly excluding national defense and security systems. Proposed penalties range from 500 to 3,000 administrative units, with higher fines for repeat or severe violations.

The consultation marks a key step in finalizing Korea’s operational AI governance framework.

16. Vietnam Proposes AI Law Introducing Four-Tier Risk Classification

November 6, 2025
Vietnam

Vietnam’s Ministry of Science and Technology (MST) has proposed a new AI law introducing a four-tier risk classification system: unacceptable, high, medium, and low risk. Each tier carries distinct obligations, including prohibitions, conformity assessments, governance controls, and transparency requirements.

Modeled closely on the EU AI Act, the proposal supports Vietnam’s digital transformation agenda and establishes a structured, risk-based regulatory approach to ensure safer and more responsible AI development and deployment nationwide.
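The proposed four-tier structure maps naturally to a tier-to-obligations table. The mapping below is our interpretation of the summary above, not the draft law's own allocation of duties; only the four tier names come from the proposal.

```python
from enum import Enum

# The four tiers are from Vietnam's proposal; the obligation mapping per
# tier is an assumed illustration, not the draft law's text.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["conformity assessment", "governance controls", "transparency"],
    RiskTier.MEDIUM: ["governance controls", "transparency"],
    RiskTier.LOW: ["transparency"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the duties attached to a risk tier."""
    return OBLIGATIONS[tier]
```

Risk-based regimes like this one (and the EU AI Act it is modeled on) are essentially such a lookup: classification first, then obligations follow mechanically from the tier.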

17. Vietnam’s State Bank Releases Report Exploring Role of AI in Financial Governance

November 6, 2025
Vietnam

The State Bank of Vietnam has released a new BIS report assessing how AI can enhance financial governance, supervision, and policymaking. The report highlights AI’s potential in macroeconomic analysis, data collection, payment-system monitoring, and financial-stability oversight, while also identifying key challenges, including privacy risks, cybersecurity pressures, talent shortages, and reliance on foreign technology providers.

To strengthen adoption, the report recommends increased data sharing among banks, improved collaboration across the sector, and the development of a unified national AI governance framework to ensure secure, trustworthy, and effective deployment.

18. India Releases National AI Governance Guidelines

November 5, 2025
India

India’s Ministry of Electronics and Information Technology has issued new AI Governance Guidelines under the IndiaAI Mission, establishing one of the country’s most comprehensive frameworks for responsible AI development.

The guidelines set out seven core principles: trust, a people-first approach, innovation, fairness, accountability, explainability, and safety, alongside recommendations across six governance pillars, an implementation action plan, and operational guidance for industry and regulators. The framework emphasizes transparency, compliance, auditability, and grievance redressal, while calling for a review of existing laws to ensure they align with emerging AI risks.

The release marks a major step toward building a unified national approach to ethical, trustworthy, and resilient AI deployment in India.

19. Kazakhstan Establishes New Information Security Committee to Oversee Data Protection

November 4, 2025
Kazakhstan

Kazakhstan’s government has approved the establishment of the Information Security Committee under its Ministry of Artificial Intelligence and Digital Development (MAIDD).

This committee will oversee informatization, personal data protection, and information security in addition to monitoring compliance across state bodies, individuals, and legal entities, responding to incidents, and coordinating with national and international partners on cybersecurity policies. The committee will also enforce laws on electronic documents, digital signatures, and personal data protection, issue penalties for violations, and promote information security awareness.

WHAT'S NEXT:
Key AI Developments to Watch For

  1. House Bill 5764, the AI for Main Street Act, continues to advance and could become the country’s first federal framework focused on supporting AI adoption by small and mid-sized businesses.
  2. Denmark is preparing landmark legislation to amend copyright law, giving individuals rights over their likeness, voice, and facial features to ban the unauthorized sharing of AI-generated deepfakes. The bill is expected to be submitted in 2025, with the aim of becoming law in late 2025 or early 2026.
  3. South Korea’s Enforcement Decree of the AI Act is coming into effect on January 22, 2026.
  4. China’s "Measures for the Certification of Personal Information Exported Overseas" will take effect on January 1, 2026, tightening controls on cross-border data transfers with new certification, audit, and oversight requirements.

