An Overview of South Korea’s Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (AI Basic Act)

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syeda Eimaan Gardezi

Associate Data Privacy Analyst at Securiti

I. Introduction

South Korea’s first major AI legislation, the Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (AI Basic Act), was signed into law on January 21, 2025. It aims to balance AI advancement with the protection of individuals’ rights and dignity in South Korea.

The AI Basic Act promotes the development of AI technologies that enhance the quality of life while ensuring safety, reliability, and transparency in decision-making. It makes South Korea the second jurisdiction in the world, after the European Union, to establish a comprehensive legislative framework governing AI.

It requires clear explanations of the key criteria behind an AI system’s final results and mandates that the government ensure AI safety, support innovation, and develop policies to help residents adapt to AI-driven societal changes. The Act thus focuses on fostering public trust in AI by setting clear ethical standards and regulatory frameworks to guide its responsible use. It will take effect on January 22, 2026, one year after its promulgation, apart from minor provisions that will take effect on January 24, 2026.

II. Who Needs to Comply with the AI Basic Act

A. Material Scope

The AI Basic Act covers the development, use, and regulation of AI technologies across various sectors. It assigns responsibilities to AI business operators, who fall into two categories: AI developers (who create AI systems) and AI operators (who integrate AI into products or services). The Act also places obligations on the government.

B. Territorial Scope

The law has domestic and extraterritorial effect, covering foreign AI-related activities that impact South Korea. Exemptions apply to AI used in national defense or security.

III. Definitions of Key Terms

A. Artificial Intelligence (AI)

The electronic embodiment of human intellectual abilities, including learning, reasoning, perception, judgment, and language understanding.

B. Artificial Intelligence System

An AI-based system with varying degrees of autonomy and adaptability that generates predictions, recommendations, and decisions affecting real and virtual environments.

C. High-Impact Artificial Intelligence

AI systems that pose significant risks to human life, physical safety, and fundamental rights. These include AI applications in:

  • energy supply;
  • drinking water production;
  • healthcare system operation;
  • medical device development and use;
  • nuclear power safety and operation;
  • criminal investigations using biometric data;
  • decisions impacting rights and obligations (e.g., hiring, loan approvals);
  • transport systems;
  • government decision-making affecting public services or taxes;
  • student evaluation in education; and
  • any other critical areas affecting safety and fundamental rights.

D. Generative Artificial Intelligence

AI that generates outputs such as text, images, sound, and video based on input data.

E. Artificial Intelligence Business Operators

Entities involved in AI development and operation, categorized as AI Developers (those who create AI systems) and AI Operators (those who use AI in products or services).

IV. Obligations of AI Operators

A. Ethics and Reliability Requirements

Article 27 outlines Artificial Intelligence Ethics Principles, under which the government may establish and announce principles to promote ethics in AI. The principles will include the following:

  • safety and reliability matters, to ensure that the development and use of AI do not endanger human life, physical health, or mental well-being;
  • accessibility matters, to ensure that everyone can use AI-powered goods and services without restriction or inconvenience; and
  • matters concerning the development and application of AI to enhance human well-being.

Article 28 also outlines the possibility of establishing a Private Autonomous Artificial Intelligence Ethics Committee to ensure compliance with ethical principles.

B. Transparency Requirement

Article 31 outlines the transparency requirements for high-impact and generative AI systems. AI operators must determine whether their AI qualifies as high-impact AI and, if needed, may seek confirmation from the Minister of Science and ICT (Minister). To ensure transparency when deploying high-impact AI or generative AI, operators must:

  • inform users in advance that an AI-based product or service is being used;
  • label AI-generated content (e.g., images, text, or videos) as being AI-generated; and
  • explicitly indicate AI-generated content, such as virtual sounds, images, or videos, that resembles real content (for artistic or creative works, the disclosure may be made in a way that does not disrupt the experience).
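As a practical illustration, the sketch below shows one way an operator might attach the advance-notice and AI-generated-content labels described above to a generated output before it reaches users. It is a minimal sketch only; the class, function, and label names are hypothetical, and the Act does not prescribe any particular labeling mechanism or wording.

    # Minimal illustrative sketch (hypothetical names; the AI Basic Act does not
    # prescribe a specific labeling mechanism or wording).
    from dataclasses import dataclass, field

    @dataclass
    class GenerativeOutput:
        content: str    # e.g., generated text or a media URL
        modality: str   # "text", "image", "audio", or "video"
        labels: dict = field(default_factory=dict)

    def apply_transparency_labels(output: GenerativeOutput, resembles_real_media: bool) -> GenerativeOutput:
        """Attach disclosure metadata before the output is shown to users."""
        # Advance notice that an AI-based product or service produced this result.
        output.labels["ai_service_notice"] = "This result was produced by an AI-based service."
        # Mark the content itself as AI-generated.
        output.labels["ai_generated"] = True
        # Flag synthetic media that closely resembles real sounds, images, or video,
        # so the user interface can surface a clearer indication to the user.
        if output.modality in {"image", "audio", "video"} and resembles_real_media:
            output.labels["synthetic_media_notice"] = "AI-generated content resembling real media."
        return output

    # Example: labeling a generated image before it is displayed.
    image_result = GenerativeOutput(content="https://example.com/generated.png", modality="image")
    labeled = apply_transparency_labels(image_result, resembles_real_media=True)
    print(labeled.labels)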

C. Confirmation of Impact Requirement

Article 33 requires organizations to conduct an initial assessment to determine whether the AI incorporated into their goods or services meets the criteria for high-impact AI. If necessary, they may request confirmation from the Minister.

D. Safety and Reliability Requirements

Article 34 outlines safety and reliability requirements for high-impact AI systems, including:

  • developing and operating a risk management plan;
  • ensuring AI explainability by disclosing the reasoning behind AI-generated decisions, the key criteria used, and the datasets used for AI training and operation;
  • implementing user protection measures;
  • ensuring human oversight of AI operations; and
  • maintaining documentation of safety and reliability measures.

Moreover, Article 32 outlines additional safety requirements. AI operators whose systems involve learning computations above a specified scale must implement policies to identify, assess, and mitigate risks at every phase of the AI lifecycle. They must also establish a risk monitoring and management system and report the implementation results to the Minister.

E. Impact Assessment Requirement

Article 35 outlines the impact assessment requirement for high-impact AI systems. Organizations that integrate such systems into their goods or services should assess, in advance, their potential impact on people’s fundamental rights. Additionally, when goods or services are procured for government use, priority must be given to those based on high-impact AI that has undergone such an impact assessment.

V. Obligations on Foreign AI Operators

Article 36 requires the designation of a domestic representative. Foreign AI operators without a physical presence in South Korea that meet a user or revenue threshold (to be set by presidential decree) must appoint a domestic representative and report the appointment to the Minister. The domestic representative’s responsibilities include:

  • submitting compliance reports;
  • requesting confirmation of high-impact AI classification;
  • supporting compliance with AI safety and reliability measures; and
  • having a physical presence in South Korea.

If the domestic representative violates any provision of the AI Basic Act, the AI operator that appointed the representative will be held responsible for the violation.

VI. Obligations on the Government

A. Establishment of a Basic Plan for Artificial Intelligence

Article 6 details the establishment of the Basic Plan for Artificial Intelligence. The Minister must develop a Basic AI Plan every three years, with input from central agencies and local governments. The Basic Plan must include the following:

  • AI policy direction and strategy;
  • AI talent development and industry growth plans;
  • AI ethics and regulatory frameworks;
  • AI investment and financial support measures;
  • fairness, transparency, responsibility, and safety of AI;
  • international AI cooperation strategies; and
  • AI’s impact on sectors such as education, labor, economy, and culture.

B. Establishment of Artificial Intelligence Policy Center

Article 11 details the establishment of the Artificial Intelligence Policy Center. The Minister may establish an AI Policy Center to guide policy development, support AI-related professional skills, and analyze societal impacts and trends.

Additionally, an AI Safety Research Institute may be created to address AI risks, protect individuals, and enhance public trust by researching safety policies, developing evaluation methods, and promoting international cooperation on AI safety.

C. Establishment of the National Artificial Intelligence Commission

Article 7 details the establishment of the National Artificial Intelligence Commission (Commission). The Commission will resolve issues related to major policies for the development of AI. It will also provide recommendations on ethics, safety, and regulations that agencies must implement.

D. Establishment of Artificial Intelligence Safety Research Institute

Article 12 details the establishment of the Artificial Intelligence Safety Research Institute. The institute will be responsible for defining and analyzing AI safety risks, researching AI safety policy, developing AI safety evaluation standards and methods, and advancing AI safety technology and standardization.

Additionally, the government should fund AI projects, support standardization, manage AI data, assist businesses, mentor startups, attract global talent, and foster cooperation. It will also establish the Korea AI Promotion Association, research AI progress, and support the verification and certification of AI systems.

VII. Regulatory Authority

The Minister of Science and ICT (Minister) has the authority to investigate potential violations of the AI Basic Act and to issue suspension or correction orders.

VIII. Penalties for Non-Compliance

Individuals or entities that fail to comply with disclosure requirements, fail to designate a domestic representative (in the case of a foreign operator), or fail to implement suspension or correction orders may be fined up to 30 million won.

IX. How Can an Organization Operationalize the AI Basic Act

To operationalize the AI Basic Act, organizations should:

  • understand the provisions outlined in the AI Basic Act to establish compliant internal policies;
  • ensure transparency in AI operations and disclose practices as required by the AI Basic Act;
  • establish an AI governance framework aligned with the principles outlined in the AI Basic Act;
  • appoint a local representative along with an internal ethics and compliance officer;
  • conduct AI impact assessments and establish regular monitoring mechanisms; and
  • ensure compliance with international AI laws.

X. How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.

Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This helps organizations comply with the requirements of the AI Basic Act.

Request a demo to learn more.
