An Overview of South Korea’s Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (Basic AI Act)

Published March 3, 2025
Contributors

Anas Baig

Product Marketing Manager at Securiti

Syeda Eimaan Gardezi

Associate Data Privacy Analyst at Securiti

I. Introduction

South Korea’s first major piece of AI legislation, the Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (AI Basic Act), was signed into law on January 21, 2025. It aims to balance AI advancement with the protection of individuals’ rights and dignity in South Korea.

The AI Basic Act promotes the development of AI technologies that enhance the quality of life while ensuring safety, reliability, and transparency in decision-making. It makes South Korea the second jurisdiction in the world, after the European Union, to establish a comprehensive legislative framework governing AI.

It requires clear explanations of the key criteria behind AI’s final results and mandates that the government ensure AI safety, support innovation, and develop policies that help residents adapt to AI-driven societal changes. The Act thus focuses on fostering public trust in AI by setting clear ethical standards and regulatory frameworks to guide its responsible use. It will take effect on January 22, 2026, one year after its promulgation, apart from minor provisions that will take effect on January 24, 2026.

II. Who Needs to Comply with the AI Basic Act

A. Material Scope

The AI Basic Act covers the development, use, and regulation of AI technologies across various sectors. It assigns responsibilities to AI business operators, who fall into two categories: AI developers (those who create AI systems) and AI operators (those who integrate AI into products or services). The Act also places obligations on the government.

B. Territorial Scope

The law has both domestic and extraterritorial effect, covering foreign AI-related activities that impact South Korea. AI used for national defense or security is exempt.

III. Definitions of Key Terms

A. Artificial Intelligence (AI)

The electronic embodiment of human intellectual abilities, including learning, reasoning, perception, judgment, and language understanding.

B. Artificial Intelligence System

An AI-based system with varying degrees of autonomy and adaptability that generates predictions, recommendations, and decisions affecting real and virtual environments.

C. High-Impact Artificial Intelligence

AI systems that pose significant risks to human life, physical safety, and fundamental rights. These include AI applications in:

  • energy supply;
  • drinking water production;
  • healthcare system operation;
  • medical device development and use;
  • nuclear power safety and operation;
  • criminal investigations using biometric data;
  • decisions impacting rights and obligations (e.g., hiring, loan approvals);
  • transport systems;
  • government decision-making affecting public services or taxes;
  • student evaluation in education; and
  • any other critical areas affecting safety and fundamental rights.
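As an illustration only, a first-pass screen against these enumerated domains might look like the following sketch. The domain identifiers and the helper function are assumptions made for the example, not terms defined in the statute:

```python
# Illustrative first-pass screen against the Act's enumerated high-impact
# domains. Domain identifiers and this helper are assumptions for the
# sketch, not statutory terms.

HIGH_IMPACT_DOMAINS = {
    "energy_supply",
    "drinking_water_production",
    "healthcare_system_operation",
    "medical_device_development",
    "nuclear_power_safety",
    "biometric_criminal_investigation",
    "rights_and_obligations_decisions",  # e.g., hiring, loan approvals
    "transport_systems",
    "government_decision_making",
    "student_evaluation",
}

def is_potentially_high_impact(declared_domains: set[str]) -> bool:
    """Flag a system for formal review if any declared domain is enumerated.

    A True result only triggers further review (e.g., seeking confirmation
    from the Minister); it is not a legal determination of status.
    """
    return bool(declared_domains & HIGH_IMPACT_DOMAINS)
```

For example, `is_potentially_high_impact({"transport_systems", "marketing"})` returns `True`, flagging the system for formal review, while a purely marketing use case returns `False`.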

D. Generative Artificial Intelligence

AI that generates outputs such as text, images, sound, and video based on input data.

E. Artificial Intelligence Business Operators

Entities involved in AI development and operation, categorized as AI Developers (those who create AI systems) and AI Operators (those who use AI in products or services).

IV. Obligations of AI Operators

A. Ethics and Reliability Requirements

Article 27 outlines Artificial Intelligence Ethics Principles, under which the government may establish and announce principles to promote ethics in AI. The principles will include the following:

  • safety and reliability concerns to ensure that the creation and use of AI does not endanger human life, physical health, or mental well-being;
  • accessibility concerns to ensure that everyone may utilize AI-powered goods and services without restrictions or inconvenience; and
  • matters concerning the creation and application of AI to enhance human well-being.

Article 28 also outlines the possibility of establishing a Private Autonomous Artificial Intelligence Ethics Committee to ensure compliance with ethical principles.

B. Transparency Requirement

Article 31 outlines the transparency requirements for high-impact and generative AI systems. AI operators must determine whether their AI qualifies as high-impact AI and, if needed, may seek confirmation from the Minister of Science and ICT (Minister). To ensure transparency when deploying high-impact AI or generative AI, operators must:

  • inform users in advance that an AI-based product or service is being used;
  • label AI-generated content (e.g., images, text, or videos) as being AI-generated; and
  • explicitly indicate when AI-generated content, such as virtual sounds, images, or videos, resembles real material (for artistic or creative works, the indication may be made in a way that does not disrupt the experience).
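The disclosure duties above can be pictured as a labeling step applied to each output before it reaches a user. The class and field names below are assumptions for this sketch; the Act prescribes the duty, not a data format:

```python
# Hedged sketch of applying the Article 31 disclosure duties to an output
# record. Class and field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class GeneratedOutput:
    content: str
    ai_generated: bool = True
    labels: list[str] = field(default_factory=list)

def apply_transparency_labels(output: GeneratedOutput,
                              resembles_real: bool = False) -> GeneratedOutput:
    """Attach the AI-use notice and AI-generated content label."""
    if output.ai_generated:
        output.labels.append("This content was generated by AI")
        if resembles_real:
            # Virtual sounds, images, or videos resembling real ones need
            # an explicit indication.
            output.labels.append("Synthetic media: does not depict real events")
    return output
```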

C. Confirmation of Impact Requirement

Article 33 requires organizations to conduct an initial assessment determining whether AI incorporated into their goods or services meets the criteria for high-impact AI. If necessary, they may ask the Minister for confirmation.

D. Safety and Reliability Requirements

Article 34 outlines safety and reliability requirements for high-impact AI systems, including:

  • developing and operating a risk management plan;
  • ensuring AI explainability by disclosing the reasoning behind AI-generated decisions, the key criteria used, and the datasets used for AI training and operation;
  • implementing user protection measures;
  • ensuring human oversight of AI operations; and
  • maintaining documentation of safety and reliability measures.

Moreover, Article 32 outlines additional safety requirements. AI operators whose systems involve learning computations above a prescribed threshold must implement policies to identify, assess, and mitigate risks at every phase of the AI lifecycle. They must also establish a system for monitoring and risk management and report the implementation outcomes to the Minister.
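One way to operationalize the Article 34 documentation duty is to keep a structured retention record per system. The record below is a minimal sketch; its field names are assumptions, since the Act does not prescribe a format:

```python
# Minimal sketch of a retention record for the Article 34 documentation
# duty. Field names are assumptions; the Act does not prescribe a format.

import json
from dataclasses import dataclass, asdict

@dataclass
class SafetyDossier:
    system_name: str
    risk_management_plan: str     # reference to the operating risk plan
    decision_criteria: str        # key criteria behind final results
    training_data_summary: str    # datasets used for training and operation
    user_protection_measures: str
    human_oversight_process: str

    def to_json(self) -> str:
        """Serialize the record for retention or a regulator request."""
        return json.dumps(asdict(self), indent=2)
```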

E. Impact Assessment Requirement

Article 35 outlines the impact assessment requirement for high-impact AI systems. Organizations that integrate such systems into their goods or services should assess their possible impact on people's basic rights beforehand. Additionally, when the government procures goods or services, priority must be given to those incorporating high-impact AI systems that have passed such impact assessments.

V. Obligations on Foreign AI Operators

Article 36 requires the designation of a local representative. Foreign AI operators without a physical presence in South Korea that meet a user or revenue threshold (to be set by presidential decree) must appoint a domestic representative and report the designation to the Minister. The domestic representative's responsibilities include:

  • submitting compliance reports;
  • requesting confirmation of high-impact AI classification;
  • supporting compliance with AI safety and reliability measures; and
  • maintaining a physical presence (an address or place of business) in South Korea.

If the domestic representative violates any provision of the AI Basic Act, the AI operator that appointed it will be held liable for that violation.

VI. Obligations on the Government

A. Establishment of a Basic Plan for Artificial Intelligence

Article 6 details the establishment of the Basic Plan for Artificial Intelligence. The Minister must develop a Basic AI Plan every three years, with input from central agencies and local governments. The Basic Plan must include the following:

  • AI policy direction and strategy;
  • AI talent development and industry growth plans;
  • AI ethics and regulatory frameworks;
  • AI investment and financial support measures;
  • fairness, transparency, responsibility, and safety of AI;
  • international AI cooperation strategies; and
  • AI’s impact on sectors such as education, labor, economy, and culture.

B. Establishment of Artificial Intelligence Policy Center

Article 11 details the establishment of the Artificial Intelligence Policy Center. The Minister may establish an AI Policy Center to guide policy development, support AI-related professional skills, and analyze societal impacts and trends.

Additionally, an AI Safety Research Institute may be created to address AI risks, protect individuals, and enhance public trust by researching safety policies, developing evaluation methods, and promoting international cooperation on AI safety.

C. Establishment of the National Artificial Intelligence Commission

Article 7 details the establishment of the National Artificial Intelligence Commission (Commission). The Commission will resolve issues related to major policies for the development of AI. It will also provide recommendations on ethics, safety, and regulations that agencies must implement.

D. Establishment of Artificial Intelligence Safety Research Institute

Article 12 details the establishment of the Artificial Intelligence Safety Research Institute. The institute will be responsible for defining and analyzing risks related to AI safety, and for researching AI safety policy, safety evaluation standards and methods, and safety technology and standardization.

Additionally, the government should fund AI projects, support standardization, manage AI data, assist businesses, mentor startups, attract global talent, and foster cooperation. It will also establish the Korea AI Promotion Association, research AI progress, and support the verification and certification of AI systems.

VII. Regulatory Authority

The Minister of Science and ICT (Minister) has the authority to investigate potential violations of AI regulations.

VIII. Penalties for Non-Compliance

Individuals or entities that fail to comply with disclosure requirements, fail to designate a domestic representative (in the case of a foreign operator), or fail to implement suspension or correction orders may be fined up to 30 million won.

IX. How Can an Organization Operationalize the AI Basic Act

To operationalize the AI Basic Act, organizations should:

  • understand the provisions outlined in the AI Basic Act to establish compliant internal policies;
  • ensure transparency in AI operations and disclose practices as required by the AI Basic Act;
  • establish an AI governance framework aligned with the principles outlined in the AI Basic Act;
  • appoint a local representative along with an internal ethics and compliance officer;
  • conduct AI impact assessments and establish regular monitoring mechanisms; and
  • ensure compliance with international AI laws.

X. How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.

Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This helps organizations comply with AI regulations such as South Korea's AI Basic Act.

Request a demo to learn more.
