November signaled a decisive global shift: AI governance is no longer about setting principles; it’s about asserting power, jurisdiction, and enforceability. Across regions, regulators moved aggressively to close gaps between fast-moving AI deployment and slow-moving legal systems. What stands out is the convergence: governments are independently settling on risk-based frameworks, lifecycle controls, and sector-specific obligations, even as their political strategies diverge.
The U.S. is shaping a state-driven model resisting federal preemption, Europe is sharpening enforcement tools ahead of AI Act implementation, and Asia is racing to operationalize AI governance through decrees, consultations, and national oversight bodies. The common theme: AI governance is becoming operational, not aspirational.
For organizations, this next phase demands more than compliance checklists. It requires architectural readiness: data quality, documentation, human oversight, and risk testing, embedded directly into AI pipelines. The jurisdictions are moving fast. Those who wait will be regulated by surprise; those who prepare will differentiate by design.
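The architectural-readiness controls above can be sketched as a pre-deployment gate in an AI pipeline. This is a minimal illustrative sketch only: the class names, thresholds, and required tests are assumptions for demonstration, not drawn from any regulation discussed in this roundup.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRelease:
    name: str
    data_quality_score: float        # e.g. share of records passing validation
    documentation_complete: bool     # model card / impact assessment attached
    human_oversight_defined: bool    # escalation path for automated decisions
    risk_tests_passed: list = field(default_factory=list)

# Hypothetical set of required risk tests for this sketch
REQUIRED_TESTS = {"bias", "robustness", "privacy"}

def release_gate(m: ModelRelease) -> list:
    """Return a list of blocking issues; an empty list means release may proceed."""
    issues = []
    if m.data_quality_score < 0.95:  # assumed threshold
        issues.append("data quality below threshold")
    if not m.documentation_complete:
        issues.append("missing documentation")
    if not m.human_oversight_defined:
        issues.append("no human-oversight mechanism")
    missing = REQUIRED_TESTS - set(m.risk_tests_passed)
    if missing:
        issues.append(f"risk tests missing: {sorted(missing)}")
    return issues
```

Embedding checks like these in CI/CD, rather than in after-the-fact audits, is what distinguishes "compliance by design" from checklist compliance.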
North & South America Jurisdiction
1. Attorneys General Coalition Sends Letter to Congress Over AI Regulation Moratorium
November 25, 2025 United States
A coalition of 36 state AGs has sent a letter to Congressional leaders opposing the proposed provision that inserts an AI regulation moratorium into the annual National Defense Authorization Act (NDAA). In the AGs’ opinion, such a federal preemption would leave communities exposed to significant and rapidly evolving risks associated with AI, stifle regulatory innovation by states, and effectively grant technology companies a shield of immunity, preventing AGs from using their enforcement authority to protect residents.
The coalition urges Congress to reject the moratorium and instead work with the states to establish effective, thoughtful federal regulations that serve as a floor, not a ceiling, for AI protection. This step shows the strong stance of AGs to encourage AI protection in the country.
2. Tennessee AI Advisory Council Releases Action Plan to Drive Responsible Innovation
November 24, 2025 Tennessee, United States
Tennessee has released its first Artificial Intelligence Advisory Council Action Plan, outlining a statewide strategy to guide responsible, transparent, and effective AI adoption across government, education, and industry. Submitted to state leadership, the plan positions Tennessee to become a national leader in ethical AI deployment by focusing on improved public services, economic growth, and workforce readiness.
The framework is built around four pillars: launching measurable AI pilot programs, strengthening statewide data and compute infrastructure, expanding AI literacy and reskilling initiatives, and enhancing governance, safety, and accountability to protect citizen rights and privacy. State officials emphasized that Tennessee is shifting from planning to implementation, with the Advisory Council continuing its work through 2028 and issuing annual progress reports.
The action plan serves as a starting point in the regulation of AI in Tennessee.
3. EPIC, FTC, & CFPB Co-Publish Guide on AI Chatbots for Kids & Teens
November 10, 2025 United States
EPIC, the FTC, and the CFPB have jointly released a new guide titled “How Existing Laws Apply to AI Chatbots for Kids and Teens,” reinforcing a core regulatory message: there is no AI exemption in existing law. The guide outlines how regulators can apply current frameworks: COPPA for parental consent and limits on data collection, state privacy laws to restrict targeted advertising and monetization of minors’ data, and UDAP authorities to address deceptive claims about chatbot safety or capabilities.
It also highlights how emerging state laws governing AI mental-health tools and “companion” chatbots can be used to establish guardrails against manipulation, self-harm risks, and misinformation. The publication serves as a practical enforcement roadmap for attorneys general and federal agencies as AI tools for minors rapidly expand.
4. Guernsey’s ODPA Issues Ten-Step Practical AI Guidance
Guernsey’s Office of the Data Protection Authority (ODPA) has issued Ten-Step Practical AI Guidance to help organizations responsibly deploy AI systems that rely on personal data. The guidance outlines clear expectations across the AI lifecycle, starting with identifying whether personal data is used and clarifying controller/processor roles and lawful bases. It stresses the need for DPIAs, transparency, fairness, bias testing, and human-review mechanisms for significant automated decisions.
The ODPA also emphasizes responsible handling of training data, avoiding unlawfully scraped datasets, ensuring accuracy, and applying minimization and anonymization where possible. Organizations are urged to adopt a risk-based approach, document their decisions, implement strong security measures, and maintain ongoing oversight as models evolve.
Overall, the guidance aims to provide practical, actionable steps for ensuring AI deployment complies with Guernsey’s data protection requirements.
5. AI Office Launches Whistleblower Tool for AI Act Violations
November 24, 2025
The EU’s AI Office has unveiled a new AI Act Whistleblower Tool, enabling individuals to report suspected violations of the AI Act confidentially. The platform supports anonymous submissions in any EU language, covers providers of GPAI models and certain AI systems, and allows reporters to upload supporting evidence and securely track the status of their report.
The tool marks a significant step in strengthening EU enforcement capacity as the AI Act moves toward full implementation, signaling the bloc’s commitment to transparency, accountability, and early detection of systemic risks across the AI ecosystem.
6. Turkey’s Data Protection Authority Issues Guide on GenAI and Personal Data Protection
November 24, 2025 Turkey
Turkey’s Personal Data Protection Authority has released a new guide on the use of generative AI under Law No. 6698, outlining how personal data risks arise during model training, deployment, and output generation. The Authority stresses the need to clearly define data controller-processor roles based on actual functions, and reinforces compliance with core principles such as lawfulness, transparency, and accuracy.
The guide highlights key risks, including hallucinations, bias, security vulnerabilities, and intellectual property concerns, and sets out mitigation measures, emphasizing the importance of human oversight given the potential inaccuracy of AI outputs. It also introduces heightened safeguards for vulnerable groups, particularly children, signaling Turkey’s focus on responsible GenAI deployment.
7. European Commission Releases Digital Omnibus Proposal Amending the AI Act
November 19, 2025
The European Commission has unveiled its Digital Omnibus on AI Regulation, proposing targeted amendments to the AI Act to streamline implementation and make compliance more innovation-friendly.
The proposal links the enforcement timeline for high-risk AI obligations to the availability of conformity tools, meaning Annex I and III rules would only apply once standards and guidelines are formally in place. It also expands simplified obligations to Small Mid-Caps, centralizes enforcement for certain high-risk and GPAI-based systems under the AI Office, and authorizes limited processing of special-category data for bias detection under strict safeguards. The Omnibus further tasks the AI Office with establishing an EU-wide regulatory sandbox, broadens real-world testing allowances, redirects AI literacy responsibilities toward EU institutions, and removes the requirement for a harmonized post-market monitoring plan.
Collectively, the proposal aims to reduce compliance friction while tightening regulatory coherence ahead of full AI Act enforcement.
8. European Data Protection Supervisor Publishes AI Risk Assessment Guidance for Data Controllers
November 11, 2025
The EDPS has published guidance meant to support data controllers in conducting risk assessments while developing, procuring, and deploying AI capabilities.
Drawing on ISO 31000 principles, the document outlines a structured approach to evaluating risks across the AI lifecycle, emphasizing interpretability, explainability, fairness, accuracy, data minimization, and security. While not exhaustive, the guidance complements existing EDPS materials and reinforces that AI governance must integrate both technical safeguards and core data-protection principles. It also encourages tailored, context-specific assessments, emphasizing that even under rapidly evolving AI rules, accountability remains anchored in established EU privacy standards.
9. DPC Issues Statement Raising Concerns Over LinkedIn GenAI Models Training
November 7, 2025 Ireland
Ireland’s Data Protection Commission has issued a formal statement outlining significant concerns with LinkedIn’s plan to train its generative AI models using personal data from EU/EEA users. Although LinkedIn intended to begin model training in November 2025, the DPC flagged material risks relating to transparency, scope of data use, minors’ protections, and the handling of sensitive content. In response, LinkedIn has narrowed the categories and timeframe of data used, introduced stronger safeguards for users under 18, added filters to prevent the collection of sensitive information, and expanded its GDPR documentation, providing revised DPIAs, LIAs, and compatibility assessments.
The DPC has also required LinkedIn to submit a follow-up report within five months, assessing whether its safeguards function effectively in practice. Ongoing monitoring will focus on user control mechanisms, including opt-out settings and LinkedIn’s Data Processing Objection Form, signaling continued regulatory scrutiny over GenAI training practices in the EU.
10. Hungary Enacts National AI Act Implementing EU AI Rules
November 3, 2025 Hungary
The Hungarian government has announced the enactment of Act LXXV of 2025, which puts the EU AI Act into effect domestically and creates a full national enforcement structure.
The law applies to all AI providers, deployers, manufacturers, and importers operating in Hungary, and designates the AI Market Surveillance Authority as the primary regulator, with additional oversight for high-risk financial-sector AI by the Hungarian National Bank. The Act also establishes a Hungarian AI Council to coordinate public–private governance and monitor public-sector AI use. Violations will carry fines equivalent to those under the EU AI Act: up to HUF 13.3 billion (~$39 million) for serious infringements.
The law takes effect 31 days after promulgation, with certain obligations phased in through August 2026.
11. European Parliament Study Highlights Overlaps Between EU AI Act and Other Digital Laws
November 1, 2025
The European Parliament has published a study examining how the EU AI Act intersects with the GDPR, the Data Act, the DSA, the DMA, the Cyber Resilience Act, and NIS2.
The report warns that overlapping obligations, particularly around impact assessments, cybersecurity requirements, and transparency rules, may lead to duplicative compliance burdens and regulatory uncertainty. It highlights issues such as the parallel use of DPIAs and FRIAs, dual cybersecurity expectations for high-risk AI systems, and overlapping duties for VLOPs under both the AI Act and the DSA.
The study recommends coordinated short- and long-term reforms to streamline requirements and ensure a coherent, predictable regulatory environment.
12. Saudi Arabia & US Sign Strategic AI Partnership
November 18, 2025 Saudi Arabia
Saudi Arabia and the United States have signed a Strategic AI Partnership focused on advancing semiconductor supply chains, AI application development, national capability building, and large-scale digital infrastructure. Beyond its economic and technological ambitions, the partnership carries regulatory significance: it may influence future cross-border AI standards, shape alignment on technology transfer and IP protections, and encourage both countries to harmonize parts of their domestic AI governance frameworks around shared priorities.
This marks a notable step in the geopolitics of AI cooperation.
13. Singapore Launches Public Consultation on New AI Risk Management Guidelines
November 13, 2025 Singapore
The Monetary Authority of Singapore (MAS) has launched a public consultation on new AI risk-management guidelines for financial institutions, detailing expectations around governance, oversight, and lifecycle controls such as data quality, fairness, transparency, human oversight, and ongoing model monitoring. The guidelines are designed to align with existing national frameworks such as FEAT and the IMDA Model AI Governance Framework. They also require institutions to maintain AI inventories, assess risk materiality, and scale controls proportionately to their size and use of technology.
MAS invites industry and stakeholder feedback until January 31, 2026.
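An AI inventory with risk-materiality assessment, as the MAS consultation envisions, might look like the following sketch. The field names and scoring formula are purely hypothetical assumptions; the consultation paper does not prescribe a specific schema.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    system: str
    use_case: str
    customer_facing: bool        # interacts directly with customers
    autonomous_decisions: bool   # acts without human review
    data_sensitivity: int        # assumed scale: 1 (public) .. 3 (highly sensitive)

def risk_materiality(e: AIInventoryEntry) -> str:
    """Toy materiality score: weights here are illustrative assumptions."""
    score = e.data_sensitivity
    score += 2 if e.customer_facing else 0
    score += 2 if e.autonomous_decisions else 0
    if score >= 6:
        return "high"
    if score >= 4:
        return "medium"
    return "low"
```

The proportionality principle then follows naturally: high-materiality entries trigger the full set of lifecycle controls, while low-materiality ones carry lighter obligations.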
14. Singapore’s MAS & UK’s FCA Form Strategic Partnership Advancing Responsible AI Adoption
November 12, 2025 Singapore
The Monetary Authority of Singapore (MAS) and the UK’s Financial Conduct Authority (FCA) have announced a major UK-Singapore AI-in-Finance Partnership, unveiled during the Singapore FinTech Festival. The initiative aims to accelerate safe, responsible, and cross-border AI innovation across both financial hubs.
The partnership will support joint testing of AI systems, deeper regulatory cooperation, and coordinated discussions on responsible AI. It also connects industry programs, MAS’s PathFin.ai and the FCA’s AI Spotlight, to help firms scale trustworthy AI solutions across both markets.
15. South Korea’s MSIT Opens Public Consultation on Draft Enforcement Decree of the AI Act
November 12, 2025 South Korea
South Korea’s Ministry of Science and ICT (MSIT) has opened a public consultation on the draft Enforcement Decree of the AI Act, with feedback accepted until December 22, ahead of the law’s effective date on January 22, 2026.
The draft outlines detailed compliance duties for AI developers, service providers, and designated entities, including pre-use disclosures for high-risk AI, harm-evaluation requirements, lifecycle governance controls, and public reporting obligations. It also sets criteria for AI R&D and national AI cluster operations, while expressly excluding national defense and security systems. Proposed penalties range from 500 to 3,000 administrative units, with higher fines for repeat or severe violations.
The consultation marks a key step in finalizing Korea’s operational AI governance framework.
16. Vietnam Proposes AI Law Introducing Four-Tier Risk Classification
November 6, 2025 Vietnam
Vietnam’s Ministry of Science and Technology (MST) has proposed a new AI law introducing a four-tier risk classification system: unacceptable, high, medium, and low risk. Each tier carries distinct obligations, including prohibitions, conformity assessments, governance controls, and transparency requirements.
Modeled closely on the EU AI Act, the proposal supports Vietnam’s digital transformation agenda and establishes a structured, risk-based regulatory approach to ensure safer and more responsible AI development and deployment nationwide.
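The tier-to-obligation structure of such a risk-based framework can be sketched as a simple mapping. The tier names come from the proposal as reported above; the specific obligations attached to each tier here are paraphrased assumptions for illustration, not statutory text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

# Illustrative obligation mapping (assumed, not from the draft law)
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from deployment"],
    RiskTier.HIGH: ["conformity assessment", "governance controls", "transparency"],
    RiskTier.MEDIUM: ["governance controls", "transparency"],
    RiskTier.LOW: ["transparency"],
}

def obligations_for(tier: RiskTier) -> list:
    """Look up the compliance duties attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

The design choice mirrors the EU AI Act: obligations scale monotonically with risk, so classifying a system correctly is the first and most consequential compliance step.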
17. Vietnam’s State Bank Releases Report Exploring Role of AI in Financial Governance
November 6, 2025 Vietnam
The State Bank of Vietnam has released a new BIS report assessing how AI can enhance financial governance, supervision, and policymaking. The report highlights AI’s potential in macroeconomic analysis, data collection, payment-system monitoring, and financial-stability oversight, while also identifying key challenges, including privacy risks, cybersecurity pressures, talent shortages, and reliance on foreign technology providers.
To strengthen adoption, the report recommends increased data sharing among banks, improved collaboration across the sector, and the development of a unified national AI governance framework to ensure secure, trustworthy, and effective deployment.
18. India Releases National AI Governance Guidelines
November 5, 2025 India
India’s Ministry of Electronics and Information Technology has issued new AI Governance Guidelines under the IndiaAI Mission, establishing one of the country’s most comprehensive frameworks for responsible AI development.
The guidelines set out seven core principles: trust, a people-first approach, innovation, fairness, accountability, explainability, and safety, alongside recommendations across six governance pillars, an implementation action plan, and operational guidance for industry and regulators. The framework emphasizes transparency, compliance, auditability, and grievance redressal, while calling for a review of existing laws to ensure they align with emerging AI risks.
The release marks a major step toward building a unified national approach to ethical, trustworthy, and resilient AI deployment in India.
19. Kazakhstan Establishes New Information Security Committee to Oversee Data Protection
November 4, 2025 Kazakhstan
Kazakhstan’s government has approved the establishment of the Information Security Committee under its Ministry of Artificial Intelligence and Digital Development (MAIDD).
This committee will oversee informatization, personal data protection, and information security in addition to monitoring compliance across state bodies, individuals, and legal entities, responding to incidents, and coordinating with national and international partners on cybersecurity policies. The committee will also enforce laws on electronic documents, digital signatures, and personal data protection, issue penalties for violations, and promote information security awareness.
House Bill 5764, the AI for Mainstreet Act, continues to advance and could become the country’s first federal framework focused on supporting AI adoption by small and mid-sized businesses.
Denmark is preparing landmark legislation to amend copyright law, giving individuals rights over their likeness, voice, and facial features to ban the unauthorized sharing of AI-generated deepfakes. The bill is slated for submission in 2025, with the aim of becoming law in late 2025 or early 2026.