September shows that AI governance is moving from fragmented initiatives to a global pressure test. With laws emerging in Italy, California, and China, and frameworks launched in India and South Korea, the regulatory race is no longer about “if” but “how fast.” What unites these efforts isn’t uniformity, but a demand for proof: that transparency labels work, that risk assessments are real, and that governance is embedded, not decorative. Organizations should expect less room for interpretation and more emphasis on demonstrable accountability. In AI, the winners will be those who turn compliance into an engine for credibility and trust.
North & South America Jurisdiction
1. California Enacts Landmark Frontier AI Transparency Law
September 29, 2025 California, United States
On September 29, 2025, California’s Governor signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law. The measure applies to developers of “frontier models” trained using more than 10^26 computational operations, requiring them to publish a frontier AI framework and public transparency reports covering catastrophic-risk safeguards, cybersecurity practices, and internal governance.
California frames the law as the first of its kind in the U.S., combining guardrails with support for innovation to strengthen public trust. For major AI developers, this means new compliance obligations that echo the EU AI Act’s approach and set a national precedent in the absence of federal rules.
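To put the 10^26 threshold in perspective, here is a rough, back-of-the-envelope sketch (not part of SB 53 itself) using the common approximation that dense transformer training consumes roughly 6 × parameters × training tokens operations; the model sizes and token counts below are illustrative assumptions.

```python
# Rough estimate of training compute for a dense transformer using the common
# approximation: total operations ~= 6 * parameters * training tokens.
# The example figures are illustrative assumptions, not statutory definitions.

FRONTIER_THRESHOLD_OPS = 1e26  # compute threshold cited for "frontier models" under SB 53

def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6.0 * parameters * training_tokens

for params, tokens in [(70e9, 15e12), (1e12, 20e12)]:
    ops = estimated_training_ops(params, tokens)
    print(f"{params:.0e} params x {tokens:.0e} tokens ~= {ops:.1e} ops "
          f"-> above 1e26? {ops > FRONTIER_THRESHOLD_OPS}")
```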
2. Canada’s Regulators Release Joint Paper on Synthetic Media
September 18, 2025 Canada
On September 18, 2025, the Canadian Digital Regulators Forum, bringing together the Competition Bureau, Privacy Commissioner, CRTC, and Copyright Board, published a paper on the risks and opportunities of synthetic media, including deepfakes. The report reviews global regulatory approaches and highlights impacts across competition, copyright, broadcasting, and privacy, with particular concern over deceptive marketing, copyright ownership, and the use of personal data in AI-generated content.
The initiative signals Canada’s push toward cross-regulatory collaboration, showing that oversight of AI-generated content will not rest with a single regulator. Platforms, advertisers, and creators should expect greater scrutiny on labelling, transparency, and personal data use as regulators coordinate to keep pace with synthetic media’s rapid growth.
3. California Bill Regarding Use of AI in Employment Decisions Awaits Governor's Approval
September 17, 2025 California, United States
On September 17, 2025, California’s Legislature passed Senate Bill 7, now awaiting the Governor’s signature. The bill would require employers to notify workers when using Automated Decision Systems (ADS) for critical employment decisions and give employees the right to access the data used in those systems.
If signed, this law will expand worker transparency rights and impose new compliance duties on employers using AI in hiring, promotion, or termination. Companies should prepare now, as this aligns with broader global trends requiring explainability and accountability in workplace AI tools.
4. California’s LEAD for Kids Act Awaits Governor’s Approval
September 15, 2025 California
On September 23, 2025, California’s Assembly Bill 1064, the LEAD for Kids Act, was enrolled for Governor Newsom’s approval. The bill seeks to regulate the use of AI chatbots by children, introducing restrictions intended to reduce potential online harms.
If signed, the law would place new compliance obligations on chatbot providers, especially around design features and age-related access controls. It reflects California’s growing focus on AI safety for minors, but companies will need to prepare for uncertainty as enforcement details and practical application are clarified.
5. California Bill on AI and Healthcare Licenses Sent to Governor
September 15, 2025 California
On September 15, 2025, California’s Governor received Assembly Bill 489, which prohibits the use of terms or titles that falsely suggest a healthcare license or certification. The measure specifically bars AI systems from using designations that imply medical advice or care is being provided by a licensed professional. Enforcement would fall under healthcare licensing boards, which could seek injunctions or restraining orders for violations.
If signed, the law will set clear limits on how AI tools present themselves in healthcare contexts. This will affect AI developers, telehealth providers, and digital health platforms, requiring them to carefully review marketing, interfaces, and disclosures to avoid regulatory risk.
6. California AI Transparency Act Passes the State Legislature
September 12, 2025 California
On September 12, 2025, the California Legislature passed Assembly Bill 853, the AI Transparency Act, which introduces phased disclosure requirements for AI-generated content. The law takes effect on August 2, 2026, with key obligations starting January 1, 2027. From that date, large online platforms must detect and disclose AI-generated content and provide provenance data to users, while GenAI hosting platforms must embed the required disclosures. By January 1, 2028, capture device manufacturers will also need to offer users the option to include latent disclosures in recorded content.
These measures will significantly shape transparency standards in digital media. Businesses operating AI systems, online platforms, or capture devices should begin preparing now, as compliance will require both technical changes and new user-facing disclosure mechanisms.
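As an illustration only (AB 853 does not prescribe a specific data format, and the field names here are hypothetical), a platform-side check might inspect provenance metadata attached to an upload and surface a user-facing disclosure when it indicates AI generation:

```python
# Hypothetical provenance check; the schema and field names are illustrative,
# not taken from AB 853 or any particular provenance standard.
from typing import Optional

def ai_disclosure(provenance: Optional[dict]) -> Optional[str]:
    """Return a user-facing disclosure string if provenance data marks the content as AI-generated."""
    if not provenance:
        return None  # no provenance data attached; nothing to disclose from metadata alone
    if provenance.get("generator_type") == "ai":
        tool = provenance.get("tool_name", "an AI system")
        return f"This content was created or altered by {tool}."
    return None

print(ai_disclosure({"generator_type": "ai", "tool_name": "ExampleGen"}))
print(ai_disclosure({"generator_type": "camera"}))  # None: not flagged as AI-generated
```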
7. California Legislature Passes First AI Chatbot Safeguards
September 11, 2025 California
On September 11, 2025, California’s Legislature passed Senate Bill 243, the first bill of its kind in the U.S. to establish safeguards for AI companion chatbots. The bill requires operators of companion chatbots to implement protections for minors, including restrictions on sexual content, clear disclosures that users are interacting with AI, and protocols for handling suicidal ideation or self-harm. It also provides families with a private right of action against noncompliant developers. The bill now awaits the Governor’s signature.
If enacted, SB 243 will create some of the most stringent obligations on chatbot providers in the U.S., forcing platforms to adapt safety features and reporting practices. This marks a broader trend of states stepping in with AI rules in the absence of federal regulation.
8. Peru Publishes the Regulation of its AI Law No. 31814
September 2025 Peru
Peru has officially published the regulation implementing its AI Law, Law No. 31814, in the Official Gazette, establishing a framework for the ethical, safe, and inclusive use of AI to support economic and social development. The rules apply broadly to government, the private sector, academia, and civil society, with exemptions for personal and national defense uses. The regulation adopts a risk-based model, classifying AI systems as prohibited, high-risk, or acceptable risk, and sets obligations around transparency, data protection, human oversight, and ethical conduct.
This marks one of the first comprehensive AI regulations in Latin America, aligning Peru with global regulatory trends. Organizations deploying AI in Peru should prepare for stricter scrutiny of high-risk systems and ensure governance processes meet the new ethical and compliance standards.
9. Alberta Issues Guidance on AI Scribe Tools in Healthcare
September 3, 2025 Canada
On September 3, 2025, the Office of the Information and Privacy Commissioner of Alberta (OIPC) released new AI Scribe Privacy Impact Assessment Guidance under the Health Information Act. The guidance requires healthcare custodians using AI scribe tools to conduct detailed PIAs, map data flows, and structure vendor contracts to restrict use strictly to permitted purposes. Vendors must prohibit training on patient data, ensure secure destruction at contract termination, and provide detailed technical documentation covering architecture, hosting, integration with medical records, and security controls.
This marks a significant step in applying privacy law directly to AI in healthcare. Custodians and vendors will need to treat AI scribes with the same rigor as other regulated health information systems, with heightened attention to contractual safeguards and technical transparency.
10. North Carolina Governor Signs Executive Order 24 on AI Governance and Innovation
North Carolina, United States
Governor Josh Stein has signed Executive Order 24 to strengthen North Carolina’s role in AI governance and innovation. The order creates an AI Leadership Council co-chaired by the state’s IT and Commerce Secretaries, establishes an AI Accelerator within the Department of Information Technology, and requires each state agency to form AI Oversight Teams. It also introduces a public AI literacy and fraud-prevention training program.
With major AI investments such as Amazon’s $10 billion innovation campus and FUJIFILM’s biomanufacturing facility already in the state, this move positions North Carolina as a leader in AI deployment. The framework is expected to accelerate industry growth, improve government services, and shape workforce readiness, while also anticipating challenges such as energy demand and regulatory oversight.
11. LinkedIn to Begin Using Member Data to Train its Generative AI Models
September 23, 2025
On September 23, 2025, LinkedIn announced that starting November 3, 2025, it will begin using user data, including profiles, resumes, posts, and public activity, to train its generative AI models. The rollout begins in the UK before expanding to the EU, EEA, Switzerland, Canada, and Hong Kong. LinkedIn cites “legitimate interest” as its legal basis and offers an opt-out option.
This means users’ professional data will directly feed GenAI training, raising concerns over privacy, sensitive information exposure, and bias. For organizations, the move may trigger employee questions and regulatory scrutiny, as company-related content could be repurposed for AI, challenging the limits of “legitimate interest” under privacy laws.
EMEA Jurisdiction
12. European Parliament Publishes Briefing on AI Continent Action Plan
September 22, 2025
On September 22, 2025, the European Parliament released a briefing on the AI Continent Action Plan, aimed at strengthening investment, infrastructure, and skills to expand AI adoption across the EU’s public and private sectors. The plan highlights persistent weaknesses, including a fragmented market, low private investment, and dependence on foreign providers for cloud and semiconductors. Debate also continues between industry groups calling for lighter rules and civil society groups urging stronger safeguards.
For individuals, the plan signals more investment in skills and access to AI-enabled services, though protections against potential overreach remain contested. For organizations, the initiative underscores both new opportunities for funding and innovation and heightened pressure to comply with EU regulatory expectations as the bloc seeks greater technological independence.
13. Italy Becomes First EU Member State to Pass National AI Law
September 18, 2025 Italy
On September 17, 2025, Italy approved its first national AI law, making it the first EU member state to adopt legislation that complements the EU AI Act. The 28-article framework establishes governance through the Agency for Digital Italy, the National Cybersecurity Agency, and the Garante, while setting rules for health, labor, justice, protections for minors, and even criminal penalties for misuse of AI-generated content.
The law places Italy ahead of its peers, signaling that EU countries may move faster than Brussels in tailoring AI governance to local needs. Companies operating in Italy now face earlier compliance duties than in other member states, particularly in health and workplace AI applications, while individuals gain new safeguards around data use, safety, and transparency. This dual-track regulation could serve as a model for other EU governments looking to tighten oversight before the AI Act fully takes effect.
14. Dutch DPA Publishes Guidance on AI Act Product Compliance Standards
September 12, 2025 Netherlands
On September 12, 2025, the Dutch Data Protection Authority published guidance clarifying how product standards can be used to demonstrate compliance under the EU AI Act. Providers may either follow harmonized standards to benefit from a presumption of conformity or rely on national, European, or international standards and their own technical specifications to prove compliance. The guidance also urges high-risk AI providers to proactively track which standards they apply and monitor upcoming harmonized publications.
This guidance gives companies practical clarity on how to operationalize AI Act requirements while highlighting the burden on high-risk AI developers to maintain ongoing documentation and governance. It signals that regulators expect businesses to move early on standards adoption rather than wait for enforcement deadlines.
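One way a provider might operationalize the advice to track applied standards is a simple internal register per high-risk system, recording each claimed standard and whether it is harmonized (and so carries a presumption of conformity). This is a hypothetical structure for illustration, not a format prescribed by the Dutch DPA.

```python
# Hypothetical standards register; field names and the example entry are illustrative only.
from dataclasses import dataclass, field

@dataclass
class StandardClaim:
    standard_id: str   # e.g., an ISO/IEC or CEN-CENELEC identifier
    harmonized: bool   # True if listed as a harmonized standard under the AI Act
    evidence: str      # where the supporting conformance documentation lives

@dataclass
class HighRiskSystemRecord:
    system_name: str
    claims: list = field(default_factory=list)

    def presumption_of_conformity(self) -> bool:
        # Only harmonized standards carry a presumption of conformity;
        # other standards or a provider's own specifications still require separate proof.
        return any(claim.harmonized for claim in self.claims)

record = HighRiskSystemRecord(
    system_name="resume-screening-model",
    claims=[StandardClaim("ISO/IEC 42001", harmonized=False, evidence="audit-report-2025.pdf")],
)
print(record.presumption_of_conformity())  # False: no harmonized standard claimed yet
```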
15. The Government of Hungary Releases its AI Strategy 2025–2030
September 4, 2025 Hungary
On September 4, 2025, Hungary released its AI Strategy for 2025–2030, aiming to create a safe and innovation-friendly AI environment aligned with the EU AI Act. The plan avoids building parallel domestic rules, instead introducing complementary measures such as new data asset laws, sector-specific regulations, and transparency requirements for large language models. It also establishes two new bodies: an AI Council to guide policy and monitor EU compliance, and an AI Office to handle market surveillance and serve as the national contact point under EU rules.
The strategy signals Hungary’s intent to integrate closely with EU frameworks while tailoring oversight to local needs. For organizations, this means preparing for stricter transparency and data governance requirements, as well as opportunities to test AI solutions through upcoming regulatory sandbox programs.
Asia Jurisdiction
16. China’s TC260 Releases AI Emergency Response Guide for GenAI Providers
September 22, 2025 China
On September 22, 2025, China’s TC260 released an AI Emergency Response Guide for generative AI service providers. The guide classifies security incidents into four levels based on system importance, business loss, and social impact, and outlines a four-phase approach: preparedness, monitoring and early warning, incident handling, and review and improvement. Each phase includes detailed technical and management measures to strengthen incident response.
This move sets a clear benchmark for AI risk management in China. GenAI providers will need to align with structured response frameworks, while users and regulators can expect more consistent accountability in the event of AI-related disruptions.
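To illustrate the shape of such a framework (the level names, scoring, and thresholds below are placeholder assumptions, not the TC260 text), an incident-response playbook could encode four severity levels and the four phases like this:

```python
# Illustrative encoding of a four-level, four-phase AI incident response structure.
# Level names and scoring thresholds are placeholder assumptions, not quoted from the guide.
from enum import IntEnum

class IncidentLevel(IntEnum):
    CRITICAL = 1  # most severe
    MAJOR = 2
    MODERATE = 3
    MINOR = 4

PHASES = ["preparedness", "monitoring_and_early_warning", "incident_handling", "review_and_improvement"]

def classify(system_importance: int, business_loss: int, social_impact: int) -> IncidentLevel:
    """Toy scoring: each factor ranges 0-3; higher totals map to more severe levels."""
    score = system_importance + business_loss + social_impact
    if score >= 7:
        return IncidentLevel.CRITICAL
    if score >= 5:
        return IncidentLevel.MAJOR
    if score >= 3:
        return IncidentLevel.MODERATE
    return IncidentLevel.MINOR

print(classify(system_importance=3, business_loss=2, social_impact=3).name)  # CRITICAL
```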
17. South Korea Leads Global Push for Privacy-Protective, Innovation-Friendly AI at Seoul Assembly
September 17, 2025 South Korea
On September 17, 2025, at the 47th Global Privacy Assembly in Seoul, over 20 data protection authorities, including those from Canada, New Zealand, and Hong Kong, adopted a joint declaration supporting AI that is both privacy-protective and innovation-friendly. The declaration builds on the Paris AI Action Summit’s framework, emphasizing lawful processing, proportional risk management, Privacy by Design, and cross-border cooperation.
This development elevates South Korea’s role as a convener of global AI governance. For regulators and businesses alike, it signals growing international convergence on privacy-first AI principles, which may ease cross-border compliance but also raise expectations for stronger safeguards in AI deployment.
18. South Korea Releases Draft Decree for AI Basic Act
September 16, 2025 South Korea
On September 16, 2025, South Korea’s Ministry of Science and ICT (MSIT) released the draft enforcement decree for the AI Basic Act, set to take effect in January 2026. The decree excludes defense-related AI, establishes a National AI Strategy Committee, and introduces transparency duties, safety requirements, and criteria for high-impact AI. It mandates impact assessments for high-impact systems, with guidelines on identifying affected rights and populations, and provides subsidies to offset compliance costs. Penalties for violations may reach KRW 10 million.
This draft gives businesses early clarity on how Korea will implement its landmark AI law. Companies deploying high-impact AI should begin preparing governance and transparency processes now, as phased enforcement and impact assessments will directly shape market access and regulatory scrutiny.
19. South Korea’s Personal Information Protection Commission (PIPC) & CPO Council Issue AI Privacy Governance Joint Declaration
September 16, 2025 South Korea
South Korea’s PIPC and the Korea CPO Council have issued a joint declaration on AI privacy governance, signed by 61 major organizations, including Samsung and Hyundai. The declaration sets out seven key practices for a safe AI ecosystem, covering transparency, privacy risk management, and compliance with privacy laws, and calls for the development of a global code of conduct to balance innovation with the protection of individual rights.
This signals growing alignment between regulators and industry in South Korea, with major companies committing publicly to privacy-first AI governance. For global firms, the call for an international code of conduct suggests momentum toward shared baseline standards that could reduce fragmentation but raise compliance expectations worldwide.
20. China’s Cyberspace Administration Issues AI Safety Governance Framework 2.0
September 25, 2025 China
On September 25, 2025, China’s Cyberspace Administration (CAC) issued its updated AI Safety Governance Framework 2.0, laying out principles for safe and trustworthy AI. The framework stresses human oversight, national sovereignty, transparency, and proactive risk management, while placing obligations on developers, operators, providers, and users to ensure fairness, privacy, intellectual property protection, and strong security safeguards.
This update reflects China’s push to set global benchmarks in AI governance. For organizations, it means stricter accountability and operational standards, reinforcing the need to align products and practices with evolving Chinese regulatory expectations.
21. CCAPAC Publishes Report Calling for a “Holistic AI Cybersecurity Framework”
September 13, 2025
On September 13, 2025, the Coalition for Cybersecurity in Asia-Pacific (CCAPAC) released a report urging the creation of a holistic AI cybersecurity framework. The report recommends lifecycle security management, stronger data governance, coordinated incident response, and oversight mechanisms aligned with international standards. It also calls for governments to update cybersecurity strategies, foster international cooperation, and expand AI literacy while working closely with industry and academia.
This push reflects growing recognition that traditional cybersecurity rules are insufficient for AI. For governments, it highlights the need to adapt national security strategies to AI-specific risks. For companies, it signals rising expectations to embed cybersecurity and resilience into AI systems from design through deployment.
22. South Korea Launches Presidential Committee on National AI Strategy
September 9, 2025 South Korea
On September 9, 2025, South Korea established its Presidential Committee on National AI Strategy under the Ministry of Science and ICT. The 50-member body, including 34 civilian experts across eight subcommittees, will shape the country’s AI roadmap. Its work will build on the Korea AI Action Plan, the upcoming National AI Computing Center, and the AI Basic Act set to take effect in January 2026.
The committee underscores South Korea’s ambition to lead in global AI governance and innovation. For businesses, this signals clearer regulatory direction and opportunities to align with state-backed initiatives, while for policymakers, it sets a model for integrating expertise across government, industry, and academia.
23. India’s NCAIC Launches its AI Governance Framework
September 8, 2025 India
India’s National Cyber and AI Center (NCAIC) has unveiled the AI Governance Framework for India 2025–26. The framework introduces a risk-based taxonomy, defines governance roles such as Chief AI Risk Officer (CARO) and AI Risk and Ethics Committee (AIREC), and embeds security, privacy, and safety into AI systems through lifecycle controls. It also provides practical tools: templates, model cards, and an AI Bill of Materials (AIBOM) to guide ministries, public sector undertakings, and enterprises in responsible innovation, while ensuring compliance with the DPDP Act 2023 and CERT-In Directions.
This initiative sets a clear standard for trustworthy AI, positioning India as a leader in safe and ethical AI deployment.
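An AI Bill of Materials is essentially a structured inventory of what went into a deployed system. The record below is a minimal, hypothetical sketch; the field names are illustrative and not taken from the NCAIC templates.

```python
# Minimal, hypothetical AI Bill of Materials (AIBOM) entry; all fields are illustrative.
import json

aibom_entry = {
    "system_name": "citizen-grievance-classifier",
    "model": {"base_model": "example-llm-7b", "version": "1.2.0", "license": "apache-2.0"},
    "datasets": [
        {"name": "grievance-tickets-2024", "contains_personal_data": True, "lawful_basis": "consent"},
    ],
    "risk_classification": "high",
    "owners": {"chief_ai_risk_officer": "caro@example.gov", "ethics_committee": "airec@example.gov"},
    "lifecycle_controls": ["access-logging", "bias-evaluation", "human-in-the-loop-review"],
}

print(json.dumps(aibom_entry, indent=2))
```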
24. South Korea’s Personal Information Protection Commission (PIPC) Revises Privacy Impact Assessment (PIA) Standards
September 4, 2025 South Korea
Effective September 5, 2025, South Korea’s Personal Information Protection Commission (PIPC) requires public institutions to follow updated Privacy Impact Assessment standards when introducing or using AI. The revised standards set evaluation criteria for AI system learning, development, operation, and management. They emphasize lawful data use, avoiding unnecessary collection of sensitive data (particularly children’s data), clear retention and destruction rules, and measures to protect data subjects’ rights, including accountability, acceptable-use policies, and reporting requirements for AI-generated information.
These updates strengthen oversight of AI in the public sector and aim to reduce privacy risks, reinforcing public trust. Organizations engaging with public bodies in Korea should expect stricter compliance checks and be prepared to demonstrate privacy-by-design in AI projects.
25. China’s Measures for the Identification of AI-Generated Synthetic Content Take Effect
September 1, 2025 China
As of September 1, 2025, China’s Measures for the Identification of AI-Generated Synthetic Content are in force. The rules require all AI-generated text, images, audio, video, and virtual content to carry both explicit labels (such as visible warnings) and implicit identifiers (like metadata or watermarks). Providers must also prevent tampering and adopt standardized labeling practices in line with existing AI and cybersecurity regulations.
This marks a major step toward systematic traceability of AI content in China, reshaping how synthetic media is created and shared while setting a clear benchmark for transparency and accountability.
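As a minimal sketch of the two kinds of identifiers (the label wording and metadata keys are illustrative assumptions, not the official format), an image pipeline could stamp a visible notice onto generated output and also write an implicit identifier into the file’s metadata with Pillow:

```python
# Illustrative only: label text and metadata keys are assumptions, not the mandated format.
# Requires Pillow: pip install Pillow
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512), color="white")  # stand-in for AI-generated output

# Explicit label: a visible notice drawn onto the image itself.
draw = ImageDraw.Draw(image)
draw.text((10, 490), "AI-generated content", fill="black")

# Implicit identifier: machine-readable metadata embedded in the file.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-genai-service")

image.save("labeled_output.png", pnginfo=metadata)

# Verify that the implicit identifier survives a save/load round trip.
print(Image.open("labeled_output.png").text.get("ai_generated"))
```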
26. Thailand Releases New AI Study Following UNESCO’s AI Ethics Guidelines
September 1, 2025 Thailand
On September 1, 2025, Thailand’s Electronic Transactions Development Agency (ETDA) released a study on AI development aligned with UNESCO’s AI Ethics recommendations. While Thailand’s PDPA provides a strong privacy foundation, the country lacks a dedicated AI law and an Ethical Impact Assessment framework. The study recommends voluntary soft laws, AI certification schemes, safe testing sandboxes, and an interdisciplinary approach to responsible AI.
By adopting these measures, Thailand aims to foster trustworthy AI innovation and build public confidence. For businesses, the study signals an opportunity to engage early with voluntary frameworks that may shape future regulation, while positioning Thailand as a regional hub for ethical AI practices.
27. Singapore Drafts Guidance on GenAI in Legal Sector
September 1, 2025 Singapore
On September 1, 2025, Singapore’s Ministry of Law issued a draft guide for the responsible use of generative AI in the legal sector. The five-step framework covers governance, tool evaluation, piloting, and ongoing review, with a strong focus on ethics, confidentiality, and transparency. Lawyers would be required to protect client data, remain accountable, and disclose when GenAI tools are used. The draft is open for consultation until September 30, 2025.
This move sets an early benchmark for GenAI use in professional services, showing how sector-specific guidance can balance innovation with trust. For law firms, it signals that client transparency and data safeguards will be central to AI adoption in legal practice.
India’s Digital & AI Governance: Following the release of India’s AI Governance Framework, the government is moving ahead with its ₹500 crore IndiaAI Mission. Expect the rollout of more than 500 data labs nationwide, aimed at strengthening AI research capacity and grassroots innovation.
France's EU AI Act Enforcement Plan: France is preparing its national enforcement strategy for the EU AI Act, which will distribute oversight responsibilities across regulators. Parliament is expected to review draft legislation in the coming months. If approved, DGCCRF and Arcom would oversee manipulative AI bans, while CNIL monitors prohibited practices such as predictive policing.
EU AI Act Transparency Guidelines Consultation: The European Commission has opened a consultation on AI transparency guidelines under Article 50 of the AI Act, including disclosure of AI interactions and content recognition requirements. Submissions are due by October 2, 2025.
European Commission’s Draft AI Incident Reporting Guidance: A second Commission consultation is underway on draft guidance for reporting serious AI incidents, helping providers prepare for mandatory requirements that take effect in August 2026. Feedback is open until November 7, 2025.
New US Federal AI Bills: Two new federal bills are advancing: the FAIR Act (H.R. 5315), which would prohibit agencies from procuring biased LLMs, and the Sandbox Act, creating a regulatory sandbox for AI developers. Both could reshape federal AI procurement and innovation policy if passed.