I. Introduction
South Korea’s first major AI legislation, the Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (AI Basic Act), was signed into law on January 21, 2025. It aims to balance AI advancement with the protection of individuals’ rights and dignity in South Korea.
The AI Basic Act promotes the development of AI technologies that enhance quality of life while ensuring safety, reliability, and transparency in decision-making. It makes South Korea the second jurisdiction in the world, after the European Union, to establish a comprehensive legislative framework governing AI.
It requires clear explanations of the key criteria behind AI-generated results and mandates that the government ensure AI safety, support innovation, and develop policies to help residents adapt to AI-driven societal changes. In doing so, it focuses on fostering public trust in AI by setting clear ethical standards and regulatory frameworks to guide responsible use. The Act takes effect on January 22, 2026, one year after its promulgation, apart from minor provisions that take effect on January 24, 2026.
II. Who Needs to Comply with the AI Basic Act
A. Material Scope
The AI Basic Act covers the development, use, and regulation of AI technologies across various sectors. It assigns responsibilities to AI business operators, who are divided into two categories: AI developers (those who create AI systems) and AI operators (those who integrate AI into products or services). The Act also places obligations on the government.
B. Territorial Scope
The law has domestic and extraterritorial effect, covering foreign AI-related activities that impact South Korea. Exemptions apply to AI used in national defense or security.
III. Definitions of Key Terms
A. Artificial Intelligence (AI)
The electronic embodiment of human intellectual abilities, including learning, reasoning, perception, judgment, and language understanding.
B. Artificial Intelligence System
An AI-based system with varying degrees of autonomy and adaptability that generates predictions, recommendations, and decisions affecting real and virtual environments.
C. High-Impact Artificial Intelligence
AI systems that pose significant risks to human life, physical safety, and fundamental rights. These include AI applications in:
- energy supply;
- drinking water production;
- healthcare system operation;
- medical device development and use;
- nuclear power safety and operation;
- criminal investigations using biometric data;
- decisions impacting rights and obligations (e.g., hiring, loan approvals);
- transport systems;
- government decision-making affecting public services or taxes;
- student evaluation in education; and
- any other critical areas affecting safety and fundamental rights.
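As a first-pass screen, an organization can map each AI use case to the sectors listed above before seeking formal confirmation under Article 33. The Python sketch below is purely illustrative: the domain labels and function names are our own, and an actual determination depends on the Act's criteria and, where needed, the Minister's confirmation, not a lookup table.

```python
from enum import Enum, auto

# Hypothetical domain labels drawn from the Act's high-impact categories.
# This is an internal triage aid, not a legal classification.
class Domain(Enum):
    ENERGY_SUPPLY = auto()
    DRINKING_WATER = auto()
    HEALTHCARE = auto()
    MEDICAL_DEVICES = auto()
    NUCLEAR_SAFETY = auto()
    BIOMETRIC_CRIMINAL_INVESTIGATION = auto()
    RIGHTS_AND_OBLIGATIONS = auto()    # e.g., hiring, loan approvals
    TRANSPORT = auto()
    PUBLIC_SERVICES_AND_TAXES = auto()
    STUDENT_EVALUATION = auto()
    GENERAL_CONSUMER = auto()          # illustrative non-listed domain

# Every listed sector except the illustrative non-listed one.
HIGH_IMPACT_DOMAINS = frozenset(Domain) - {Domain.GENERAL_CONSUMER}

def is_potentially_high_impact(domain: Domain) -> bool:
    """Flag AI used in a listed sector for full Article 33 review."""
    return domain in HIGH_IMPACT_DOMAINS
```

A use case flagged by this screen would then go through the organization's own assessment and, if the result is unclear, a confirmation request to the Minister.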
D. Generative Artificial Intelligence
AI that generates outputs such as text, images, sound, and video based on input data.
E. Artificial Intelligence Business Operators
Entities involved in AI development and operation, categorized as AI Developers (those who create AI systems) and AI Operators (those who use AI in products or services).
IV. Obligations of AI Operators
A. Ethics and Reliability Requirements
Article 27 outlines Artificial Intelligence Ethics Principles, under which the government may establish and announce principles to promote ethics in AI. The principles will include the following:
- safety and reliability concerns to ensure that the creation and use of AI does not endanger human life, physical health, or mental well-being;
- accessibility concerns to ensure that everyone may utilize AI-powered goods and services without restrictions or inconvenience;
- matters concerning the creation and application of AI to enhance human well-being.
Article 28 also outlines the possibility of establishing a Private Autonomous Artificial Intelligence Ethics Committee to ensure compliance with ethical principles.
B. Transparency Requirement
Article 31 outlines the transparency requirement for high-impact and generative AI systems. AI operators must determine whether their AI qualifies as high-impact AI and, if needed, may seek confirmation from the Minister of Science and ICT (Minister). To ensure transparency when deploying high-impact AI or generative AI, operators must:
- inform users in advance that an AI-based product or service is being used;
- label AI-generated content (e.g., images, text, or videos) as being AI-generated; and
- clearly indicate AI-generated content, such as virtual sounds, images, or videos, that could be mistaken for real ones (for artistic or creative works, the disclosure may be made in a way that does not disrupt the audience’s experience).
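A minimal sketch of the labeling obligation: the wrapper below tags generative output with an "AI-generated" notice before it reaches the user. The data structure, notice wording, and placement are assumptions made for illustration; the Act does not prescribe a specific format.

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    """Hypothetical container for content produced by a generative model."""
    content: str
    ai_generated: bool = True

def label_output(output: AIOutput) -> str:
    # Article 31 requires AI-generated content to be indicated as such;
    # the exact notice text below is our own placeholder.
    if output.ai_generated:
        return f"[AI-generated] {output.content}"
    return output.content
```

In practice the notice would be adapted to the medium (watermarks for images, spoken disclosures for audio) rather than a text prefix.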
C. Confirmation of Impact Requirement
Article 33 requires organizations to conduct an initial assessment determining whether AI incorporated into their goods or services meets the criteria for high-impact AI. If necessary, they may ask the Minister for confirmation.
D. Safety and Reliability Requirements
Article 34 outlines safety and reliability requirements for high-impact AI systems, including:
- developing and operating a risk management plan;
- ensuring AI explainability by disclosing the reasoning behind AI-generated decisions, the key criteria used, and the datasets used for AI training and operation;
- implementing user protection measures;
- ensuring human oversight of AI operations; and
- maintaining documentation of safety and reliability measures.
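Internally, these Article 34 measures can be tracked as a simple documentation record so that gaps are visible before an audit. The field and method names below are our own illustration, not terminology from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Article34Record:
    """Illustrative record of the measures an operator of high-impact
    AI must document; field names are assumptions, not legal terms."""
    risk_management_plan: str = ""
    explanation_of_decisions: str = ""   # key criteria and training data
    user_protection_measures: list[str] = field(default_factory=list)
    human_oversight_contact: str = ""

    def missing_items(self) -> list[str]:
        """Return every required measure that is still undocumented."""
        missing = []
        if not self.risk_management_plan:
            missing.append("risk management plan")
        if not self.explanation_of_decisions:
            missing.append("explainability disclosure")
        if not self.user_protection_measures:
            missing.append("user protection measures")
        if not self.human_oversight_contact:
            missing.append("human oversight")
        return missing
```

A non-empty result from `missing_items()` would signal that the documentation obligation in the final bullet above is not yet satisfied.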
Moreover, Article 32 outlines additional safety requirements. AI operators whose systems are trained using cumulative computation above a threshold set by presidential decree must implement policies to identify, assess, and mitigate risks at every phase of the AI lifecycle. They must also establish a system for monitoring and managing risks and report the implementation outcomes to the Minister.
E. Impact Assessment Requirement
Article 35 outlines the impact assessment requirement for high-impact AI systems. Organizations that integrate such systems into their goods or services should assess, in advance, the systems’ possible impact on people’s fundamental rights. Additionally, when the government procures goods or services, priority must be given to those incorporating high-impact AI that has undergone such an impact assessment.
V. Obligations on Foreign AI Operators
Article 36 requires the designation of a local representative. Foreign AI operators without a physical presence in South Korea that meet a user or revenue threshold (to be set by presidential decree) must appoint a domestic representative and report the appointment to the Minister. The domestic representative’s responsibilities include:
- submitting compliance reports;
- requesting confirmation of high-impact AI classification;
- supporting compliance with AI safety and reliability measures; and
- maintaining an address or place of business in South Korea.
If the domestic representative violates any provision of the AI Basic Act, the AI operator that appointed the representative will be held liable under the Act.
VI. Obligations on the Government
A. Establishment of a Basic Plan for Artificial Intelligence
Article 6 details the establishment of the Basic Plan for Artificial Intelligence. The Minister must develop a Basic AI Plan every three years, with input from central agencies and local governments. The Basic Plan must include the following:
- AI policy direction and strategy;
- AI talent development and industry growth plans;
- AI ethics and regulatory frameworks;
- AI investment and financial support measures;
- fairness, transparency, responsibility, and safety of AI;
- international AI cooperation strategies; and
- AI’s impact on sectors such as education, labor, economy, and culture.
B. Establishment of Artificial Intelligence Policy Center
Article 11 details the establishment of the Artificial Intelligence Policy Center. The Minister may establish an AI Policy Center to guide policy development, support AI-related professional skills, and analyze societal impacts and trends.
Additionally, an AI Safety Research Institute may be created to address AI risks, protect individuals, and enhance public trust by researching safety policies, developing evaluation methods, and promoting international cooperation on AI safety.
C. Establishment of the National Artificial Intelligence Commission
Article 7 details the establishment of the National Artificial Intelligence Commission (Commission). The Commission will resolve issues related to major policies for the development of AI. It will also provide recommendations on ethics, safety, and regulations that agencies must implement.
D. Establishment of Artificial Intelligence Safety Research Institute
Article 12 details the establishment of the Artificial Intelligence Safety Research Institute. The institute will be responsible for defining and analyzing risks related to AI safety, and for researching AI safety policy, AI safety evaluation standards and methods, and AI safety technologies and standardization.
Additionally, the government should fund AI projects, support standardization, manage AI data, assist businesses, mentor startups, attract global talent, and foster cooperation. It will also establish the Korea AI Promotion Association, research AI progress and support the verification and certification of AI systems.
VII. Regulatory Authority
The Minister of Science and ICT (Minister) has the authority to investigate potential violations of AI regulations.
VIII. Penalties for Non-Compliance
Individuals or entities that fail to comply with disclosure requirements, fail to designate a domestic representative (in the case of a foreign operator), or fail to implement suspension or correction orders may be subject to an administrative fine of up to KRW 30 million.
IX. How Can an Organization Operationalize the AI Basic Act?
To operationalize the AI Basic Act, organizations should:
- understand the provisions outlined in the AI Basic Act to establish compliant internal policies;
- ensure transparency in AI operations and disclose practices as required by the AI Basic Act;
- establish an AI governance framework aligned with the principles outlined in the AI Basic Act;
- appoint a local representative along with an internal ethics and compliance officer;
- conduct AI impact assessments and establish regular monitoring mechanisms; and
- ensure compliance with international AI laws.
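The steps above can be tracked as a simple internal checklist so that open items surface in compliance reviews. This sketch is a hypothetical illustration; the step descriptions paraphrase the list above and are not language from the Act.

```python
# Illustrative checklist mirroring the operationalization steps above.
CHECKLIST = [
    "map internal policies to AI Basic Act provisions",
    "disclose AI use and label AI-generated content",
    "stand up an AI governance framework",
    "appoint local representative and compliance officer",
    "run impact assessments and periodic monitoring",
    "reconcile with other applicable AI laws",
]

def compliance_gaps(done: set[str]) -> list[str]:
    """Return checklist items not yet completed, preserving order."""
    return [step for step in CHECKLIST if step not in done]
```

A periodic review could feed the set of completed items into `compliance_gaps` and escalate anything still outstanding.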
X. How Securiti Can Help
Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.
Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This enables organizations to comply with the AI Basic Act.
Request a demo to learn more.