An Overview of South Korea’s Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (Basic AI Act)

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syeda Eimaan Gardezi

Associate Data Privacy Analyst at Securiti

Published March 3, 2025

I. Introduction

South Korea’s first major AI legislation, the Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (AI Basic Act), was signed into law on January 21, 2025. It aims to balance AI advancement with the protection of individuals’ rights and dignity in South Korea.

The AI Basic Act promotes the development of AI technologies that enhance the quality of life while ensuring safety, reliability, and transparency in decision-making, making South Korea the second country in the world to establish a comprehensive legislative framework governing AI after the European Union.

It requires clear explanations of the key criteria behind AI-generated final results and mandates that the government ensure AI safety, support innovation, and develop policies to help residents adapt to AI-driven societal changes. The Act thus focuses on fostering public trust in AI by setting clear ethical standards and regulatory frameworks for its responsible use. It will take effect on January 22, 2026, one year after its promulgation, apart from minor provisions that will take effect on January 24, 2026.

II. Who Needs to Comply with the AI Basic Act

A. Material Scope

The AI Basic Act covers the development, use, and regulation of AI technologies across various sectors. It assigns responsibilities to AI business operators, who fall into two categories: AI developers, who create AI systems, and AI operators, who integrate AI into products or services. The Act also places obligations on the government.

B. Territorial Scope

The law has domestic and extraterritorial effect, covering foreign AI-related activities that impact South Korea. Exemptions apply to AI used in national defense or security.

III. Definitions of Key Terms

A. Artificial Intelligence (AI)

The electronic embodiment of human intellectual abilities, including learning, reasoning, perception, judgment, and language understanding.

B. Artificial Intelligence System

An AI-based system with varying degrees of autonomy and adaptability that generates predictions, recommendations, and decisions affecting real and virtual environments.

C. High-Impact Artificial Intelligence

AI systems that pose significant risks to human life, physical safety, and fundamental rights. These include AI applications in:

  • energy supply;
  • drinking water production;
  • healthcare system operation;
  • medical device development and use;
  • nuclear power safety and operation;
  • criminal investigations using biometric data;
  • decisions impacting rights and obligations (e.g., hiring, loan approvals);
  • transport systems;
  • government decision-making affecting public services or taxes;
  • student evaluation in education; and
  • any other critical areas affecting safety and fundamental rights.
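For organizations triaging their AI portfolio, the sectors above can be encoded as a first-pass screening checklist. The following is a minimal Python sketch; the domain labels are illustrative rather than statutory terms, and the statutory criteria and the Minister's confirmation process remain authoritative:

```python
# Hypothetical first-pass screen for high-impact AI classification under the
# AI Basic Act. Domain labels below are illustrative, not statutory terms;
# a formal confirmation can be requested from the Minister.

HIGH_IMPACT_DOMAINS = {
    "energy_supply",
    "drinking_water",
    "healthcare_operations",
    "medical_devices",
    "nuclear_safety",
    "biometric_criminal_investigation",
    "rights_obligations_decisions",   # e.g., hiring, loan approvals
    "transport",
    "public_services_or_taxes",
    "student_evaluation",
}

def screen_high_impact(domains: set[str]) -> bool:
    """Return True if any declared application domain matches a high-impact category."""
    return bool(domains & HIGH_IMPACT_DOMAINS)

# Example: a loan-approval model touches decisions on rights and obligations.
print(screen_high_impact({"rights_obligations_decisions"}))  # True
print(screen_high_impact({"internal_document_search"}))      # False
```

A positive screen would trigger the confirmation and assessment steps described in the following sections; a negative screen does not by itself exempt a system, since the Act includes a catch-all category for other critical areas.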

D. Generative Artificial Intelligence

AI that generates outputs such as text, images, sound, and video based on input data.

E. Artificial Intelligence Business Operators

Entities involved in AI development and operation, categorized as AI Developers (those who create AI systems) and AI Operators (those who use AI in products or services).

IV. Obligations of AI Operators

A. Ethics and Reliability Requirements

Article 27 outlines Artificial Intelligence Ethics Principles, under which the government may establish and announce principles to promote ethics in AI. These principles will include:

  • safety and reliability concerns to ensure that the creation and use of AI does not endanger human life, physical health, or mental well-being;
  • accessibility concerns to ensure that everyone may utilize AI-powered goods and services without restrictions or inconvenience; and
  • matters concerning the creation and application of AI to enhance human well-being.

Article 28 also outlines the possibility of establishing a Private Autonomous Artificial Intelligence Ethics Committee to ensure compliance with ethical principles.

B. Transparency Requirement

Article 31 outlines the transparency requirement for high-impact and generative AI systems. AI operators must determine whether their AI qualifies as high-impact AI and, if needed, can seek confirmation from the Minister of Science and ICT (Minister). To ensure transparency when deploying high-impact AI or generative AI, operators must:

  • inform users in advance that an AI-based product or service is being used;
  • label AI-generated content (e.g., images, text, or videos) as being AI-generated; and
  • clearly indicate AI-generated content, such as virtual sounds, images, or videos, that closely resembles reality (unless the content is artistic or creative, in which case disclosure should not disrupt the experience).
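In application code, the labeling duties above can be approximated by attaching a disclosure notice to every generated artifact. The sketch below assumes a hypothetical metadata wrapper; the Act itself does not prescribe a technical labeling format:

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedArtifact:
    """Hypothetical wrapper carrying an AI-generated disclosure with the content."""
    content: bytes
    media_type: str            # e.g. "image/png", "text/plain"
    ai_generated: bool = True
    notice: str = field(init=False)

    def __post_init__(self):
        # Article 31: AI-generated output must be labeled as such.
        if self.ai_generated:
            self.notice = "This content was generated by an AI system."
        else:
            self.notice = ""

artifact = GeneratedArtifact(content=b"...", media_type="text/plain")
print(artifact.notice)  # This content was generated by an AI system.
```

How the notice is surfaced (watermark, caption, metadata field) would depend on the medium and on any implementing decrees, which this sketch does not cover.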

C. Confirmation of Impact Requirement

Article 33 requires organizations to conduct an initial assessment determining whether AI incorporated into their goods or services meets the criteria for high-impact AI. If necessary, they may ask the Minister for confirmation.

D. Safety and Reliability Requirements

Article 34 outlines safety and reliability requirements for high-impact AI systems, including:

  • developing and operating a risk management plan;
  • ensuring AI explainability by disclosing the reasoning behind AI-generated decisions, the key criteria used, and the datasets used for AI training and operation;
  • implementing user protection measures;
  • ensuring human oversight of AI operations; and
  • maintaining documentation of safety and reliability measures.

Moreover, Article 32 outlines additional safety requirements. AI operators whose systems involve training computation above a prescribed scale must implement policies to identify, assess, and mitigate risks at every phase of the AI lifecycle. They must also develop a monitoring and risk-management system and report the implementation outcomes to the Minister.

E. Impact Assessment Requirement

Article 35 outlines the impact assessment requirement for high-impact AI systems. Organizations that integrate such systems into their goods or services should assess the possible impact on people’s basic rights beforehand. Additionally, when the government procures goods or services, priority should be given to those incorporating high-impact AI systems that have undergone such impact assessments.

V. Obligations on Foreign AI Operators

Article 36 outlines the requirement of designating a local representative. Foreign AI operators (without a physical presence in South Korea) meeting a user or revenue threshold (set by presidential decree) must appoint a local representative and report it to the Minister. Responsibilities of the domestic representative include:

  • submitting compliance reports;
  • requesting confirmation of high-impact AI classification;
  • supporting compliance with AI safety and reliability measures; and
  • having a physical presence in South Korea.

If the domestic representative violates any provision of the AI Basic Act, the foreign AI operator that appointed it will be held accountable.

VI. Obligations on the Government

A. Establishment of a Basic Plan for Artificial Intelligence

Article 6 details the establishment of the Basic Plan for Artificial Intelligence. The Minister must develop a Basic AI Plan every three years, with input from central agencies and local governments. The Basic Plan must include the following:

  • AI policy direction and strategy;
  • AI talent development and industry growth plans;
  • AI ethics and regulatory frameworks;
  • AI investment and financial support measures;
  • fairness, transparency, responsibility, and safety of AI;
  • international AI cooperation strategies; and
  • AI’s impact on sectors such as education, labor, economy, and culture.

B. Establishment of Artificial Intelligence Policy Center

Article 11 details the establishment of the Artificial Intelligence Policy Center. The Minister may establish an AI Policy Center to guide policy development, support AI-related professional skills, and analyze societal impacts and trends.

Additionally, an AI Safety Research Institute may be created to address AI risks, protect individuals, and enhance public trust by researching safety policies, developing evaluation methods, and promoting international cooperation on AI safety.

C. Establishment of the National Artificial Intelligence Commission

Article 7 details the establishment of the National Artificial Intelligence Commission (Commission). The Commission will resolve issues related to major policies for the development of AI. It will also provide recommendations on ethics, safety, and regulations that agencies must implement.

D. Establishment of Artificial Intelligence Safety Research Institute

Article 12 details the establishment of the Artificial Intelligence Safety Research Institute. The institute will be responsible for defining and analyzing the risks related to AI safety, researching AI safety policy, AI safety evaluation standards and methods, AI safety technology and standardization, etc.

Additionally, the government should fund AI projects, support standardization, manage AI data, assist businesses, mentor startups, attract global talent, and foster cooperation. It will also establish the Korea AI Promotion Association, research AI progress, and support the verification and certification of AI systems.

VII. Regulatory Authority

The Minister of Science and ICT (Minister) has the authority to investigate potential violations of the AI Basic Act.

VIII. Penalties for Non-Compliance

Individuals or entities that fail to comply with disclosure requirements, to designate a domestic representative (in the case of foreign operators), or to implement suspension or correction orders may be fined up to 30 million won.

IX. How Can an Organization Operationalize the AI Basic Act

To operationalize the AI Basic Act, organizations should:

  • understand the provisions outlined in the AI Basic Act to establish compliant internal policies;
  • ensure transparency in AI operations and disclose practices as required by the AI Basic Act;
  • establish an AI governance framework aligned with the principles outlined in the AI Basic Act;
  • appoint a local representative along with an internal ethics and compliance officer;
  • conduct AI impact assessments and establish regular monitoring mechanisms; and
  • ensure compliance with international AI laws.
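The steps above can be tracked as a simple compliance checklist. The following is a minimal sketch with illustrative task names; an actual governance program would map each task to the relevant article and owner:

```python
# Illustrative checklist for operationalizing the AI Basic Act.
# Task names paraphrase the steps above; they are not statutory requirements.

CHECKLIST = [
    "Review AI Basic Act provisions and update internal policies",
    "Disclose AI use and label AI-generated content",
    "Establish an AI governance framework",
    "Appoint a local representative and an ethics/compliance officer",
    "Run AI impact assessments and set up regular monitoring",
    "Track overlapping international AI laws",
]

def outstanding(completed: set[str]) -> list[str]:
    """Return checklist items not yet completed, preserving the original order."""
    return [task for task in CHECKLIST if task not in completed]

done = {CHECKLIST[0], CHECKLIST[2]}
print(len(outstanding(done)))  # 4
```

A real program would also record evidence (assessment reports, Minister confirmations, labeling audits) against each item rather than a simple done/not-done flag.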

X. How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.

Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This helps organizations align with the requirements of the AI Basic Act.

Request a demo to learn more.
