Navigating China’s AI Regulatory Landscape in 2025: What Businesses Need to Know

Author

Syeda Eimaan Gardezi

Associate Data Privacy Analyst at Securiti

Published October 13, 2025


China is charging full speed into the future of artificial intelligence, and 2025 could be a make-or-break year for businesses looking to ride this wave. The country has built a powerful regulatory framework packed with enforceable rules, technical standards, and labeling requirements—designed to push innovation while keeping a tight grip on AI’s risks. For companies, these regulations aren’t just boxes to tick—they’re a roadmap for success in one of the world’s fastest-moving and most tightly regulated AI markets.

In this blog, we break down the key measures every organization must know to stay compliant, competitive, and ahead of the curve in China’s AI landscape.

Practical Compliance Checklist for 2025

  • Implement explicit and implicit labels on AI-generated content.
  • Secure training data and follow annotation security specifications.
  • Conduct privacy audits and define audit thresholds and frequency.
  • Register generative AI services with the CAC and maintain compliance records.
  • Align AI projects with “AI Plus” guidelines and engage in global governance initiatives.

Background of AI and China

China’s rapid ascent in artificial intelligence is no accident—it reflects a mix of heavy state investment, thriving private sector innovation, and an early recognition that AI would be central to economic and geopolitical strength. Unlike many jurisdictions where regulation trails technology, China has taken a proactive, top-down approach to governing AI. The 2023 Interim Measures for Generative AI Services marked a turning point, setting obligations for companies offering AI tools, from service registration and model filing to content governance and safety checks. These measures established transparency as a cornerstone, requiring AI services to display model names and filing numbers, and mandating that tools influencing public opinion register with the Cyberspace Administration of China (CAC). Since their rollout, the CAC has already approved and registered hundreds of generative AI platforms—including DeepSeek and Baidu’s Ernie Bot—demonstrating how regulation is actively shaping the AI market while positioning China as both a global innovator and regulator of artificial intelligence. These measures are also part of China’s broader “AI Plus” development plan, which seeks to promote innovation while ensuring that AI deployment aligns with legal, ethical, and societal standards.

Key AI Regulatory Milestones in 2025

Each milestone below is summarized by its published date, effective date, purpose, who it targets, and key points for businesses.

Emergency Response Guidelines for Generative Artificial Intelligence Services

Published: September 22, 2025 | Effective: N/A

Purpose: Implement the ‘Interim Measures for the Administration of Generative AI Services’ and guide the establishment of a standardized security emergency response framework for generative AI services.

Targeted towards: Generative AI service providers, their partners, and relevant departments responsible for managing or supervising generative AI service security.

Key points for businesses: The guidelines encourage organizations to:

  • implement security emergency response mechanisms in line with the Interim Measures for Generative AI Services,
  • establish robust governance structures and incident response teams and mechanisms,
  • promote lifecycle safety from AI research and development to deployment,
  • apply continuous monitoring, early-warning systems, and automated alerts for models, data, and networks,
  • classify and respond to security incidents by type and severity,
  • report major incidents promptly and restore services securely,
  • conduct post-incident reviews, and
  • safeguard against illegal, biased, false, or privacy/IP-violating content, data breaches, and network attacks.

AI Security Governance Framework (Version 2.0)

Published: September 15, 2025 | Effective: N/A

Purpose: Provide a comprehensive, structured approach to ensuring the safe, ethical, and responsible development, deployment, and use of AI technologies.

Targeted towards: AI developers, deployers, and operators.

Key points for businesses: The framework encourages organizations to:

  • implement technological safeguards,
  • establish robust governance measures,
  • promote lifecycle safety from research and development to deployment,
  • apply ethical principles,
  • conduct AI safety assessments, and
  • maintain traceability in AI-generated content.

Measures for Labeling AI-Generated Content

Published: March 7, 2025 | Effective: September 1, 2025

Purpose: Promote responsible AI use, protect user rights, and ensure transparency in online content.

Targeted towards: Online service providers offering generative AI services, covering AI-generated text, images, audio, video, and virtual scenes.

Key points for businesses: Providers must:

  • Label AI-generated content with:
    • Explicit labels (visible notices, audio alerts, or image/video markers that persist when shared or downloaded).
    • Implicit labels (metadata with attributes, provider details, reference numbers; digital watermarks encouraged).
  • Retain logs for six months if content is published without explicit labels.
  • Prohibit removal, alteration, or falsification of labels.
  • Prevent tools that bypass labeling requirements.
  • Follow all relevant laws, regulations, and standards under regulatory oversight.
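The implicit-label requirement above amounts to attaching machine-readable metadata (attributes, provider details, a reference number) to each generated item. A minimal sketch in Python, assuming illustrative field names that the Measures do not themselves mandate:

```python
import json
import hashlib
from datetime import datetime, timezone

def attach_implicit_label(content: str, provider: str, filing_no: str) -> dict:
    """Attach implicit-label metadata to AI-generated text.
    Field names here are illustrative assumptions, not regulatory text."""
    # Illustrative content reference number derived from the content itself
    reference = hashlib.sha256((content + provider).encode("utf-8")).hexdigest()[:16]
    return {
        "content": content,
        "metadata": {
            "ai_generated": True,            # attribute flag
            "provider": provider,            # service provider details
            "service_filing_no": filing_no,  # CAC filing number
            "reference_no": reference,       # traceable reference number
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = attach_implicit_label("Example AI-generated text.", "ExampleAI", "Filing-2025-001")
print(json.dumps(record["metadata"], indent=2))
```

In practice the metadata would travel with the content (e.g., in file metadata or a sidecar record) so it persists when shared or downloaded, and could be complemented by a digital watermark as the Measures encourage.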

Moreover, providers of online content transmission services should:

  • Detect and verify AI-generated content and metadata.
  • Respond to user claims about mislabeling.
  • Offer tools for labeling AI content.

Cybersecurity Technology—Basic Security Requirements for Generative Artificial Intelligence Services

Published: April 25, 2025 | Effective: November 1, 2025

Purpose: Provide basic security requirements for generative AI services, including training data security, model security, and safety measures.

Targeted towards: Service providers conducting safety assessments; also serves as a reference for relevant authorities and third-party evaluators.

Key points for businesses:

Training Data Security Requirements

  • Data Source Security: Service providers must conduct random sampling security assessments of data sources before collection. If more than 5% of the data contains illegal or harmful information, it should not be used for training.
  • Data Content Management: All training data must be filtered to remove illegal or harmful content before use.
  • Intellectual Property Protection: Service providers should have strategies and rules for managing the intellectual property of training data and establish channels for reporting and updating related issues.
  • Personal Information Protection: Before using training data containing personal information, service providers must obtain the individual's consent or comply with other legal requirements.
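The data source security rule above is essentially a sampled acceptance test: inspect a random sample of a source and reject it if the estimated share of illegal or harmful content exceeds 5%. A minimal sketch, assuming a caller-supplied classifier since the standard does not prescribe a specific detection method:

```python
import random

def sample_source_ok(items, is_harmful, sample_size=200, threshold=0.05, seed=0):
    """Randomly sample a data source and estimate the share of harmful
    items; accept the source only if the rate is within the threshold.
    `is_harmful` is an assumed, caller-supplied content classifier."""
    rng = random.Random(seed)
    sample = rng.sample(items, min(sample_size, len(items)))
    rate = sum(1 for item in sample if is_harmful(item)) / len(sample)
    return rate <= threshold, rate

# Toy example: 3 flagged items out of 100 -> 3% harmful, source accepted.
data = ["ok"] * 97 + ["flagged"] * 3
accepted, rate = sample_source_ok(data, lambda x: x == "flagged", sample_size=100)
print(accepted, rate)
```

A real pipeline would log the sampled items and the resulting rate as evidence for the pre-collection assessment the standard requires.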

Model Security Requirements

  • Model Development and Deployment: Service providers should ensure that models are developed and deployed securely, with measures to prevent unauthorized access and tampering.
  • Model Evaluation: Regular evaluations should be conducted to assess the model's performance and security, including its ability to handle various inputs safely.
  • Model Updates: Updates to models should be managed securely to prevent the introduction of vulnerabilities.

Safety Measures

  • User Data Security: Service providers must implement measures to protect user data, including encryption and access controls.
  • Incident Response: Establish procedures for responding to security incidents, including detection, reporting, and mitigation.
  • Compliance with Laws and Regulations: Service providers should comply with relevant cybersecurity, data security, and personal information protection laws and standards.

Generative Artificial Intelligence Data Annotation Security Specification (GB/T 45674—2025)

Published: April 25, 2025 | Effective: November 1, 2025

Purpose: Establishes comprehensive security requirements for the data annotation process in generative AI systems. Data annotation directly influences the quality and safety of training data and, consequently, the generated content.

Targeted towards: Organizations involved in generative AI data annotation activities.

Security Measures

  1. Platform or Tool Security: Organizations must conduct regular security assessments of annotation platforms or systems to identify and address potential vulnerabilities. Platforms should maintain detailed logs of user operations and system activities to facilitate investigations in case of security incidents.
  2. Rule Security: Clear and secure annotation rules should be established to guide the labeling process, ensuring consistency and safety in the generated data.
  3. Personnel Requirements: Personnel involved in data annotation must undergo security training and be managed effectively to prevent unauthorized access and ensure adherence to security protocols.
  4. Verification Requirements: There should be robust mechanisms to verify the accuracy and security of annotated data, including functional and security verification processes.

It also outlines methods to evaluate the security of annotation platforms, rules, personnel, and verification processes to ensure compliance with the established security requirements.
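The logging expectation for annotation platforms can be pictured as an append-only record of who did what to which item. A minimal sketch, with field names and structure that are illustrative assumptions rather than anything the specification prescribes:

```python
import json
from datetime import datetime, timezone

class AnnotationAuditLog:
    """Minimal append-only log of annotator operations, along the lines
    of the logging an annotation platform might keep for incident
    investigations. All field names are illustrative assumptions."""

    def __init__(self):
        self._entries = []

    def record(self, user: str, action: str, item_id: str) -> None:
        self._entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,        # annotator or reviewer identity
            "action": action,    # e.g. "label", "edit", "approve"
            "item_id": item_id,  # the annotated data item
        })

    def export(self) -> str:
        # JSON Lines export, convenient for later review
        return "\n".join(json.dumps(entry) for entry in self._entries)

log = AnnotationAuditLog()
log.record("annotator_01", "label", "item-42")
log.record("reviewer_02", "approve", "item-42")
print(log.export())
```

A production platform would additionally protect the log against tampering and retain it for whatever period internal policy or the regulator requires.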

Cybersecurity Technology—Security Specification for Generative Artificial Intelligence Pre-training and Fine-tuning Data

Published: April 25, 2025 | Effective: November 1, 2025

Purpose: Outline security requirements for data processing activities related to pre-training and fine-tuning of generative AI models.

Targeted towards: AI service providers conducting data processing and security self-assessments, as well as third-party institutions evaluating data security.

General Security Measures

  • Develop security management strategies for pre-training and fine-tuning data, including classification, data processing security, and incident response.
  • Implement data encryption during storage and transmission to prevent unauthorized access.
  • Ensure traceability of training data by establishing data identification between batches.
  • Comply with relevant standards for personal information protection and data processing security.
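The traceability measure above, establishing data identification between batches, could be implemented by giving each training-data batch a deterministic identifier chained to the previous batch. A minimal sketch; the chaining scheme is an assumption, since the standard requires traceability but does not mandate a mechanism:

```python
import hashlib
import json

def batch_id(records, prev_batch_id: str = "") -> str:
    """Derive a deterministic identifier for a training-data batch that
    chains to the previous batch's identifier, so any batch can be
    traced back through the pipeline. Illustrative scheme only."""
    digest = hashlib.sha256()
    digest.update(prev_batch_id.encode("utf-8"))
    for rec in records:
        # Canonical JSON so the same records always hash identically
        digest.update(json.dumps(rec, sort_keys=True).encode("utf-8"))
    return digest.hexdigest()

b1 = batch_id([{"text": "sample A"}, {"text": "sample B"}])
b2 = batch_id([{"text": "sample C"}], prev_batch_id=b1)
print(b1 != b2)  # each batch gets a distinct, chain-linked ID
```

Because each identifier incorporates its predecessor, altering any earlier batch changes every downstream identifier, which makes unrecorded modifications to the training corpus detectable.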

Security Measures for Pre-training Data Processing

  • Data Collection: Evaluate and record data to ensure that harmful or illegal content does not exceed 5%.
  • Data Preprocessing: Implement measures to clean and sanitize data, removing any malicious or irrelevant information.
  • Data Usage: Ensure that data used in training does not compromise model integrity or security.

Security Measures for Fine-tuning Data Processing

  • Data Collection: Follow similar protocols as pre-training data collection, ensuring data quality and legality.
  • Data Preprocessing: Apply domain-specific adjustments while maintaining data security.
  • Data Usage: Monitor and evaluate the impact of fine-tuning data on model performance and security.

It also outlines evaluation methods for data collection, preprocessing, and usage to ensure legality, quality, security, and performance throughout pre-training and fine-tuning.

Action Plan for Global Governance of AI

Published: July 26, 2025

Purpose: Create a human-centric, safe, and inclusive AI ecosystem that benefits all, guided by cooperation, fairness, and transparency.

Targeted towards: Stakeholders involved in AI development, deployment, and governance.

The Action Plan covers the following key points:

  • Collaboration & Innovation: Governments, industry, research institutions, and civil society are urged to work together to advance AI technology, digital infrastructure, and cross-border innovation.
  • AI Across Industries: From healthcare and education to smart cities and climate solutions, AI should empower every sector while supporting sustainable development goals.
  • Open & High-Quality Data: Promotes lawful data sharing, development of global datasets, and safeguards for privacy and diversity.
  • Sustainability & Efficiency: Encourages energy-efficient AI, green computing, and environmentally friendly development models.
  • Global Standards & Governance: Strengthens international norms, technical standards, and risk management frameworks, ensuring AI is ethical, transparent, and interoperable.
  • Capacity Building & Inclusion: Focuses on supporting developing countries, bridging the AI divide, and protecting the digital rights of women and children.
  • Multi-Stakeholder Engagement: Encourages enterprises, researchers, and policy makers to collaborate on innovation, safety, ethics, and global governance platforms.

China’s AI regulations and guidelines in 2025 mark a transition from aspirational guidance to concrete, enforceable obligations. From a legal and strategic perspective, businesses that proactively implement labeling, security, and privacy compliance measures will not only avoid regulatory pitfalls—they will also gain credibility in one of the world’s fastest-growing AI markets. In essence, navigating China’s AI landscape is no longer just about compliance—it’s about embedding accountability, transparency, and alignment with national priorities into the very DNA of AI operations.

How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.

Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks and protect sensitive data. This enables organizations to comply with China’s regulatory landscape.

Request a demo to learn more.
