**Emergency Response Guidelines for Generative Artificial Intelligence Services**
- Published Date: Sept 22, 2025
- Effective Date: N/A
- Purpose: To implement the ‘Interim Measures for the Administration of Generative AI Services’ and guide the establishment of a standardized security emergency response framework for generative AI services.
- Targeted Towards: Generative AI service providers, their partners, and relevant departments responsible for managing or supervising generative AI service security.
- Key Points for Businesses: Encourages organizations to:
  - implement security emergency response mechanisms in line with the Interim Measures,
  - establish robust governance structures, incident response teams, and response mechanisms,
  - promote lifecycle safety from AI research and development to deployment,
  - apply continuous monitoring, early-warning systems, and automated alerts for models, data, and networks,
  - classify and respond to security incidents by type and severity,
  - report major incidents promptly and restore services securely,
  - conduct post-incident reviews, and
  - safeguard against illegal, biased, false, or privacy/IP-violating content, data breaches, and network attacks.

**AI Security Governance Framework (Version 2.0)**
- Published Date: Sept 15, 2025
- Effective Date: N/A
- Purpose: Provide a comprehensive, structured approach to ensuring the safe, ethical, and responsible development, deployment, and use of AI technologies.
- Targeted Towards: AI developers, deployers, and operators.
- Key Points for Businesses: Encourages organizations to:
  - implement technological safeguards,
  - establish robust governance measures,
  - promote lifecycle safety from research and development to deployment,
  - apply ethical principles,
  - conduct AI safety assessments, and
  - maintain traceability in AI-generated content.

**Measures for Labeling AI-Generated Content**
- Published Date: March 7, 2025
- Effective Date: September 1, 2025
- Purpose: Promote responsible AI use, protect user rights, and ensure transparency in online content.
- Targeted Towards: Online service providers offering generative AI services, covering AI-generated text, images, audio, video, and virtual scenes.
- Key Points for Businesses:
  - Measures for providers:
    - Label AI-generated content with:
      - Explicit labels (visible notices, audio alerts, or image/video markers that persist when shared or downloaded).
      - Implicit labels (metadata carrying content attributes, provider details, and reference numbers; digital watermarks encouraged); see the sketch after this entry.
    - Retain logs for six months if content is published without explicit labels.
    - Prohibit removal, alteration, or falsification of labels.
    - Prevent tools that bypass labeling requirements.
    - Follow all relevant laws, regulations, and standards under regulatory oversight.
  - Moreover, providers offering online content transmission services should:
    - Detect and verify AI-generated content and metadata.
    - Respond to user claims about mislabeling.
    - Offer tools for labeling AI content.

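To make the implicit-label requirement concrete, here is a minimal sketch of one way a provider might embed and later recover label metadata in a PNG image using Pillow. The payload fields (`aigc`, `provider`, `content_id`) and the chunk key `aigc_label` are illustrative assumptions; the actual field names and schema come from the accompanying national labeling standard.

```python
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical implicit-label payload; the real schema is defined by the
# accompanying national standard, not by these field names.
LABEL = {
    "aigc": True,                    # flag: content is AI-generated
    "provider": "ExampleAI Co.",     # service provider details
    "content_id": "EX-2025-000123",  # provider-assigned reference number
}

def add_implicit_label(src_path: str, dst_path: str) -> None:
    """Embed the label as a PNG text chunk so it travels with the file."""
    meta = PngInfo()
    meta.add_text("aigc_label", json.dumps(LABEL))
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)

def read_implicit_label(path: str):
    """Recover the label, e.g. when a platform verifies incoming content."""
    with Image.open(path) as img:
        raw = img.text.get("aigc_label")  # Pillow parses tEXt/iTXt chunks
    return json.loads(raw) if raw else None
```

Because plain metadata is easy to strip, the Measures' encouragement of digital watermarks suggests pairing a chunk like this with a watermark that survives re-encoding.
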
**Cybersecurity Technology—Basic Security Requirements for Generative Artificial Intelligence Services**
- Published Date: April 25, 2025
- Effective Date: Nov 1, 2025
- Purpose: Provide basic security requirements for generative AI services, including training data security, model security, and safety measures.
- Targeted Towards: Service providers conducting safety assessments; also serves as a reference for relevant authorities and third-party evaluators.
- Key Points for Businesses:
  - Training Data Security Requirements:
    - Data Source Security: Service providers must conduct random-sampling security assessments of data sources before collection. If more than 5% of the sampled data contains illegal or harmful information, the source should not be used for training (a sampling sketch follows this entry).
    - Data Content Management: All training data must be filtered to remove illegal or harmful content before use.
    - Intellectual Property Protection: Service providers should have strategies and rules for managing the intellectual property of training data and establish channels for reporting and updating related issues.
    - Personal Information Protection: Before using training data containing personal information, service providers must obtain the individual's consent or comply with other legal requirements.
  - Model Security Requirements:
    - Model Development and Deployment: Service providers should ensure that models are developed and deployed securely, with measures to prevent unauthorized access and tampering.
    - Model Evaluation: Regular evaluations should assess the model's performance and security, including its ability to handle various inputs safely.
    - Model Updates: Updates to models should be managed securely to prevent the introduction of vulnerabilities.
  - Safety Measures:
    - User Data Security: Service providers must implement measures to protect user data, including encryption and access controls.
    - Incident Response: Establish procedures for responding to security incidents, including detection, reporting, and mitigation.
    - Compliance with Laws and Regulations: Service providers should comply with relevant cybersecurity, data security, and personal information protection laws and standards.

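As an illustration of the 5% rule above, here is a minimal sketch of a random-sampling assessment. The 5% ceiling comes from the standard; the sample size, function names, and the classifier stub are assumptions, since the standard does not prescribe a specific screening method.

```python
import random

SAMPLE_SIZE = 400          # assumption: the standard leaves sample size open
HARMFUL_THRESHOLD = 0.05   # the 5% ceiling set by the standard

def is_illegal_or_harmful(item: str) -> bool:
    """Stub for a real content-safety classifier or keyword screen."""
    raise NotImplementedError

def data_source_passes(corpus: list[str]) -> bool:
    """Randomly sample the source and reject it if flagged content
    exceeds the 5% threshold."""
    if not corpus:
        return False
    sample = random.sample(corpus, min(SAMPLE_SIZE, len(corpus)))
    flagged = sum(1 for item in sample if is_illegal_or_harmful(item))
    return flagged / len(sample) <= HARMFUL_THRESHOLD
```
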
**Generative Artificial Intelligence Data Annotation Security Specification (GB/T 45674—2025)**
- Published Date: April 25, 2025
- Effective Date: Nov 1, 2025
- Purpose: Establishes comprehensive security requirements for the data annotation* process in generative AI systems.
  *Data annotation is a critical activity that directly influences the quality and safety of training data and, consequently, the generated content.
- Targeted Towards: Organizations involved in generative AI data annotation activities.
- Key Points for Businesses:
  - Security Measures:
    - Platform or Tool Security: Organizations must conduct regular security assessments of annotation platforms or systems to identify and address potential vulnerabilities. Platforms should maintain detailed logs of user operations and system activities to facilitate investigations in case of security incidents (a logging sketch follows this entry).
    - Rule Security: Clear and secure annotation rules should be established to guide the labeling process, ensuring consistency and safety in the generated data.
    - Personnel Requirements: Personnel involved in data annotation must undergo security training and be managed effectively to prevent unauthorized access and ensure adherence to security protocols.
    - Verification Requirements: There should be robust mechanisms to verify the accuracy and security of annotated data, including functional and security verification processes.
  - The specification also outlines methods to evaluate the security of annotation platforms, rules, personnel, and verification processes to ensure compliance with the established security requirements.

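The logging requirement above lends itself to a short sketch. The record fields below (`event_id`, `user_id`, `action`, `item_id`) are assumptions for illustration; the specification requires detailed operation logs but does not mandate a particular schema.

```python
import json
import time
import uuid

def log_annotation_event(logfile, user_id: str, action: str, item_id: str) -> None:
    """Append one audit record per annotator operation, as JSON lines."""
    record = {
        "event_id": str(uuid.uuid4()),  # unique handle for investigations
        "timestamp": time.time(),       # when the operation happened
        "user_id": user_id,             # who performed it
        "action": action,               # e.g. "label", "edit", "export"
        "item_id": item_id,             # which data item was touched
    }
    logfile.write(json.dumps(record) + "\n")

# Example: record a labeling operation on an append-only log file.
with open("annotation_audit.jsonl", "a") as f:
    log_annotation_event(f, user_id="annotator-42", action="label", item_id="item-0001")
```
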
**Cybersecurity Technology—Security Specification for Generative Artificial Intelligence Pre-training and Fine-tuning Data**
- Published Date: April 25, 2025
- Effective Date: Nov 1, 2025
- Purpose: Outline security requirements for data processing activities related to pre-training and fine-tuning of generative AI models.
- Targeted Towards: AI service providers conducting data processing and security self-assessments, as well as third-party institutions evaluating data security.
- Key Points for Businesses:
  - General Security Measures:
    - Develop security management strategies for pre-training and fine-tuning data, covering classification, data processing security, and incident response.
    - Implement data encryption during storage and transmission to prevent unauthorized access.
    - Ensure traceability of training data by establishing identifiers that link data across batches (a traceability sketch follows this entry).
    - Comply with relevant standards for personal information protection and data processing security.
  - Security Measures for Pre-training Data Processing:
    - Data Collection: Evaluate and record data to ensure that harmful or illegal content does not exceed 5%.
    - Data Preprocessing: Implement measures to clean and sanitize data, removing any malicious or irrelevant information.
    - Data Usage: Ensure that data used in training does not compromise model integrity or security.
  - Security Measures for Fine-tuning Data Processing:
    - Data Collection: Follow protocols similar to pre-training data collection, ensuring data quality and legality.
    - Data Preprocessing: Apply domain-specific adjustments while maintaining data security.
    - Data Usage: Monitor and evaluate the impact of fine-tuning data on model performance and security.
  - The specification also outlines evaluation methods for data collection, preprocessing, and usage to ensure legality, quality, security, and performance throughout pre-training and fine-tuning.

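The batch-traceability requirement above does not prescribe a mechanism; the following sketch assumes one common approach, content hashing with a chained parent identifier, so that lineage from raw collection through preprocessing can be reconstructed. All names here are illustrative.

```python
import hashlib
import json

def batch_id(records: list[str], parent_id: str = "") -> str:
    """Derive a stable identifier for a data batch, chained to its parent
    batch so processing lineage stays traceable."""
    h = hashlib.sha256()
    h.update(parent_id.encode())
    for rec in records:  # order-sensitive; sort records upstream if needed
        h.update(hashlib.sha256(rec.encode()).digest())
    return h.hexdigest()

# Hypothetical lineage: raw collected batch -> cleaned pre-training batch.
raw_batch = ["document one ...", "document two ..."]
raw_id = batch_id(raw_batch)
cleaned_batch = [d for d in raw_batch if "malicious" not in d]  # stand-in filter
cleaned_id = batch_id(cleaned_batch, parent_id=raw_id)

# A manifest like this lets auditors walk the chain between batches.
print(json.dumps({"raw": raw_id, "cleaned": cleaned_id}, indent=2))
```
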
**Action Plan for Global Governance of AI**
- Published Date: July 26, 2025
- Effective Date: N/A
- Purpose: Create a human-centric, safe, and inclusive AI ecosystem that benefits all, guided by cooperation, fairness, and transparency.
- Targeted Towards: Stakeholders involved in AI development, deployment, and governance.
- Key Points for Businesses: The Action Plan covers the following key points:
  - Collaboration & Innovation: Governments, industry, research institutions, and civil society are urged to work together to advance AI technology, digital infrastructure, and cross-border innovation.
  - AI Across Industries: From healthcare and education to smart cities and climate solutions, AI should empower every sector while supporting sustainable development goals.
  - Open & High-Quality Data: Promotes lawful data sharing, development of global datasets, and safeguards for privacy and diversity.
  - Sustainability & Efficiency: Encourages energy-efficient AI, green computing, and environmentally friendly development models.
  - Global Standards & Governance: Strengthens international norms, technical standards, and risk-management frameworks, ensuring AI is ethical, transparent, and interoperable.
  - Capacity Building & Inclusion: Focuses on supporting developing countries, bridging the AI divide, and protecting the digital rights of women and children.
  - Multi-Stakeholder Engagement: Encourages enterprises, researchers, and policymakers to collaborate on innovation, safety, ethics, and global governance platforms.