South Korea’s Safe Use of Personal Information in the Age of AI Guidance

Published November 6, 2023
Contributors

Anas Baig

Product Marketing Manager at Securiti

Muhammad Faisal Sattar

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/Asia


The Personal Information Protection Commission (PIPC) released its guidance on the safe use of personal information in the age of AI on August 3, 2023. The document addresses several ethical concerns related to using AI in tandem with personal information.

The guidance aims to provide appropriate instructions to all relevant stakeholders, both domestic and international, on how they may minimize the risk of privacy and data infringements.

Additionally, the guidance provides rules and factors to consider when interpreting the Personal Information Protection Act (PIPA) in the context of AI development and deployment.

Lastly, the guidance offers critical information on how South Korea plans to approach AI regulations, emphasizing public-private collaboration and strategic principles for specific sectors and industries.

1. A Principle-Based System

The guide acknowledges the rapid growth of the AI industry and the expansive nature of its direct and indirect involvement in people's lives. With that in mind, the PIPC has recommended establishing a principle-based discipline system. This is critical because the guide explicitly states that it will follow a more "principles-oriented approach rather than a regulatory one."

Under this principle, an "AI Privacy Team" will be established to act as the main consultation body for all organizations within South Korea seeking advice on the following matters:

  • Development and integration of AI models and services;
  • Providing legal interpretations of personal information processing guidelines;
  • Reviewing the application of regulatory sandboxes.

The "AI Privacy Team" will operate under a review system expected to be introduced in 2023. The primary purpose of this system would be to analyze an organization's business operations and help prepare a compliance recommendation related to the PIPA. However, this team will not have any powers to carry out administrative actions related to the results of such an analysis.

2. Guidelines For Each Sector Via Public-Private Cooperation

The current guidelines on AI represent only the most basic standards and principles. Further cooperation and collaboration with the private sector will be essential to create a more comprehensive plan that satisfies all concerned stakeholders.

To this end, the "AI Privacy Public-Private Policy Council" will be established to allow a common discussion platform for AI companies, developers, academics, legal professionals, and civic groups to voice their opinions, concerns, and recommendations. This Council will also work jointly with the PIPC to create data processing standards in AI environments for each specific sector.

The Council will also assist in expanding R&D capabilities to promote the adoption of privacy-enhancing technologies (PETs). Furthermore, in cases where there is ambiguity in PET application or a need for verification arises, technology development and verification can be conducted within a designated "personal information safe zone" that ensures both security and safety.

Lastly, an 'AI risk assessment model' capable of precisely assessing AI-related risks will be prepared. This model will facilitate the design of tailored regulations based on the risk level of the AI. Establishing such a risk assessment system effectively requires diverse experiments and initiatives. Hence, the PIPC plans to leverage a 'regulatory sandbox' to accumulate a range of AI cases and use them to identify risks by analyzing their operational performance and risk factors.

3. Strengthening the International Cooperation System

A global cooperative system will be implemented to form a digital international standard for AI. Based on the Paris Initiative, the PIPC hopes to establish a new level of digital order within South Korea.

The "AI and Data Privacy International Conference" organized in Seoul in June 2023 was the first step in this direction. The committee engaged in discussions with representatives from regulatory and supervisory bodies, reviewing a wide array of laws, policies, and cases from different countries concerning infringements and violations related to personal information caused by AI.

Further, the PIPC plans to host the Global Privacy Assembly in 2025 to discuss the new privacy issues that will have emerged within the AI industry by then. By hosting such an event, the PIPC hopes to position South Korea as a major international actor in shaping international standards. It also hopes to encourage greater collaboration between South Korean firms and global AI operators such as OpenAI, Google, and Meta.

4. Personal Information Processing Standards for AI Development & Service Stage

The guideline acknowledges the absence of distinct standards for managing personal information in the context of AI development and services. The PIPC intends to address this gap within the framework of the existing "Personal Information Protection Act".

The primary objective is to establish clear and specific principles and standards to guide each stage of the process, including AI development, data collection, AI training, and service provision. This approach aims to facilitate a more transparent and specific assessment of personal information processing within the stated guidelines.

a. Planning Stage

  • Organizations are advised to adopt a privacy-oriented design principle or Privacy by Design when planning their AI models and services.
  • Personal information protection-centered design principles (Privacy by Design) should be reflected during the modeling, learning, and operation processes to minimize the risk of personal information infringement.
  • Developers and personal data protection managers should collaborate to identify potential risks; design, apply, and manage the relevant countermeasures; and build a governance framework that involves privacy officers and handles response measures.

b. Data Collection Stage

  • During the data collection process, organizations should clearly distinguish the processing principles for general personal information, public information, video information, and biometric information.
  • When developing large-scale language models, the use of publicly available information may become necessary, and in such cases, the legal grounds and considerations for processing public information have been specified.
  • When data contains information from mobile video equipment such as drones and self-driving cars, organizations are advised to consider the upcoming revisions within the PIPA related to drones and autonomous vehicles.

c. AI Learning Stage

  • Personal information should be pseudonymized so that it can be used for AI research and development without the need for separate consent (see the illustrative sketch after this list).
  • Organizations must also undertake strict measures to prevent risks that may occur before and after this stage, such as re-identification through linkage and combination with other information.
  • Since these risks cannot be eliminated entirely, the adequacy of preventive measures will be judged by the degree of effort made to minimize them. Additionally, it is recommended to actively use Privacy Enhancing Technologies (PETs), such as synthetic data, to enhance privacy and mitigate risk in AI applications.
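The guidance does not prescribe any particular pseudonymization technique. Purely as an illustrative sketch, and not a method endorsed by the PIPC, the Python snippet below shows one common approach: replacing direct identifiers with keyed hashes (HMAC-SHA256) before a dataset is used for model training. The field names, secret key, and example records are hypothetical.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice, store and manage it separately from
# the pseudonymized dataset (e.g., in a key management service).
PSEUDONYMIZATION_KEY = b"replace-with-a-securely-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Without access to the key, the original value cannot be recovered from
    the pseudonym, which helps reduce re-identification risk when the data
    is used for AI training.
    """
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative records containing direct identifiers (hypothetical fields).
records = [
    {"name": "Kim Min-jun", "email": "minjun@example.com", "age_band": "30-39"},
    {"name": "Lee Seo-yeon", "email": "seoyeon@example.com", "age_band": "20-29"},
]

# Keep only non-identifying attributes plus a stable pseudonym per user.
training_records = [
    {"user_id": pseudonymize(r["email"]), "age_band": r["age_band"]}
    for r in records
]

print(training_records)
```

Note that keyed hashing alone does not remove linkage risk from the remaining attributes, which is why the guidance also points to PETs such as synthetic data and to the re-identification safeguards described above.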

d. AI Service Stage

  • Organizations should safeguard data subjects' rights and ensure transparency during both the development of the AI model and the subsequent commercialization of the service.
  • The PIPC will prepare further guidelines after sufficient review of the specific scope and method of disclosure and the ways data subjects can exercise their rights.
  • Even when using APIs from pre-existing AI models or plugins, organizations should actively guide users by providing detailed usage guidelines and technical documents so that they can comply with privacy measures.

How Securiti Can Help

Organizations understand the tremendous challenges and opportunities AI presents. Used appropriately, it can lead to a steep increase in both productivity and efficiency. However, its usage must be tempered with responsibility, owing to, among several other factors, the vast amount of users' personal information and data involved.

Striking the right balance between responsible data usage and leveraging AI capabilities to their maximum potential can be both a strategic and operational obstacle for organizations.

This is where Securiti can help.

With its Data Command Center, a centralized platform that enables the safe use of data and GenAI, Securiti provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Securiti's Data Command Center has a proven track record of providing data security, privacy, governance, and compliance for various organizations of varying sizes and industries.

With the Data Command Center, organizations can enable a proactive approach toward honoring their regulatory obligations with respect to data privacy while leveraging the maximum benefits from AI usage.

Request a demo today to learn more about how Securiti can help you comply with South Korea's PIPA and other global data privacy regulations.
