South Korea’s Safe Use of Personal Information in the Age of AI Guidance: What You Should Know

Published November 6, 2023

The Personal Information Protection Commission (PIPC) released its guidance on the safe use of personal information in the age of AI on August 3, 2023. The document addresses several ethical concerns related to using AI in tandem with personal information.

The guidance aims to provide appropriate instructions to all relevant stakeholders, both domestic and international, on how they may minimize the risk of privacy and data infringements.

Additionally, the guidance sets out rules and considerations for interpreting the Personal Information Protection Act (PIPA) in the context of AI development and deployment.

Lastly, the guidance offers critical information on how South Korea plans to approach AI regulations, emphasizing public-private collaboration and strategic principles for specific sectors and industries.

1. A Principle-Based System

The guide acknowledges the rapid growth of the AI industry and its expansive direct and indirect involvement in people's lives. With that in mind, the PIPC has recommended establishing a principle-based discipline system. This is critical because the guide explicitly states that it will follow a "principles-oriented approach rather than a regulatory one."

It is under this principle that an "AI Privacy Team" will be established to serve as the main consultation body for all organizations within South Korea seeking advice on the following matters:

  • Development and integration of AI models and services;
  • Providing legal interpretations of personal information processing guidelines;
  • Reviewing the application of regulatory sandboxes.

The "AI Privacy Team" will operate under a review system expected to be introduced in 2023. The primary purpose of this system would be to analyze an organization's business operations and help prepare a compliance recommendation related to the PIPA. However, this team will not have any powers to carry out administrative actions related to the results of such an analysis.

2. Guidelines For Each Sector Via Public-Private Cooperation

The current guidelines on AI represent only the most basic standards and principles. Further cooperation and collaboration with the private sector will be essential to create a more comprehensive plan that satisfies all concerned stakeholders.

To this end, the "AI Privacy Public-Private Policy Council" will be established to allow a common discussion platform for AI companies, developers, academics, legal professionals, and civic groups to voice their opinions, concerns, and recommendations. This Council will also work jointly with the PIPC to create data processing standards in AI environments for each specific sector.

The Council will also assist in the expansion of R&D capabilities for the activation of privacy enhancement technology (PET). Furthermore, in cases where there is ambiguity in PET application or the need for verification arises, technology development and verification can be conducted within a designated "personal information safe zone" that ensures both security and safety.

Lastly, an "AI risk assessment model" with the capability to precisely assess AI-related risks will be prepared. This model will facilitate the design of tailored regulations based on the risk level of AI. To establish such a risk assessment system effectively, it is imperative to conduct diverse experiments and initiatives. Hence, the PIPC plans to leverage a "regulatory sandbox" to accumulate a range of AI cases and, using these cases, identify the risks by analyzing their operational performance and risk factors.

3. Strengthening International Cooperation System

A global cooperative system will be implemented to form a digital international standard for AI. Based on the Paris Initiative, the PIPC hopes to establish a new level of digital order within South Korea.

The "AI and Data Privacy International Conference" organized in Seoul in June 2023 was the first step in this direction. The Commission engaged in discussions with representatives from regulatory and supervisory bodies, reviewing a wide array of laws, policies, and cases from different countries concerning infringements and violations related to personal information caused by AI.

Further, the PIPC plans to host a Global Privacy Assembly in 2025 to discuss new privacy issues that will emerge within the AI industry by then. The PIPC hopes to position South Korea as a major international actor in creating a standard international system by hosting such an event. It hopes to encourage greater collaboration between South Korean firms and global AI operators such as OpenAI, Google, and Meta.

4. Personal Information Processing Standards for AI Development & Service Stage

The guideline acknowledges the absence of distinct standards for managing personal information in the context of AI development and services. The PIPC intends to address this gap within the framework of the existing "Personal Information Protection Act".

The primary objective is to establish clear and specific principles and standards to guide each stage of the process, including AI development, data collection, AI training, and service provision. This approach aims to facilitate a more transparent and specific assessment of personal information processing within the stated guidelines.

a. Planning Stage

  • Organizations are advised to adopt a privacy-oriented design principle (Privacy by Design) when planning their AI models and services.
  • These Privacy by Design principles should be reflected throughout the modeling, training, and operation processes to minimize the risk of personal information infringement.
  • Developers and personal data protection managers should collaborate to identify potential risks; design, apply, and manage the relevant countermeasures; and build a governance framework that ensures collaboration with privacy officers and supports response measures.

b. Data Collection Stage

  • During the data collection process, organizations should ideally have clear divisions of processing principles for general personal information, public information, video information, and biometric information.
  • When developing large-scale language models, the use of publicly available information may become necessary, and in such cases, the legal grounds and considerations for processing public information have been specified.
  • When data contains information from mobile video equipment such as drones and self-driving cars, organizations are advised to consider the upcoming revisions within the PIPA related to drones and autonomous vehicles.

c. AI Learning Stage

  • All personal information must be pseudonymized to facilitate AI research and development without the need for separate consent.
  • Organizations must also take strict measures against risks that may arise before and after this stage, such as re-identification through linkage or combination with other information.
  • Since these risks cannot be eliminated entirely, the adequacy of preventive measures will be judged by the degree of effort made to minimize them. The active use of Privacy Enhancing Technologies (PETs), such as synthetic data, is recommended to enhance privacy and mitigate risk in AI applications.
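The guidance does not prescribe a particular pseudonymization technique. As one illustrative sketch, a keyed hash (HMAC) can replace direct identifiers in training records so that records remain linkable for AI training while the raw values stay protected; all names, values, and the key below are hypothetical.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    Unlike a plain hash, a keyed HMAC resists dictionary attacks on
    common identifiers, provided the key is stored separately from
    the pseudonymized dataset.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical training record: the email field is a direct identifier.
record = {"email": "user@example.com", "prompt": "Translate this sentence."}
key = b"key-held-by-privacy-officer"  # illustrative; use a managed secret in practice

record["email"] = pseudonymize(record["email"], key)

# The same identifier always maps to the same pseudonym, so records can
# still be linked across the training set without exposing the raw value.
assert record["email"] == pseudonymize("user@example.com", key)
```

Note that pseudonymized data is still personal information under most regimes, since re-identification remains possible for whoever holds the key; this is why the guidance pairs pseudonymization with measures against linkage and combination risks.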

d. AI Service Stage

  • Organizations should safeguard data subjects' rights and ensure transparency during both the development of the AI model and the subsequent commercialization of the service.
  • The PIPC will prepare further guidelines after sufficient review of the specific scope and method of disclosure and the ways data subjects can exercise their rights.
  • Even when using APIs from pre-existing AI models or plugins, organizations should actively guide users by providing detailed usage guidelines and technical documents so that they can comply with privacy measures.

How Securiti Can Help

Organizations understand the tremendous challenges and opportunities AI presents. Used appropriately, it can lead to a steep increase in both productivity and efficiency. However, its usage must be tempered with responsibility, owing to, among several other factors, the vast amount of users' personal information and data involved.

Striking the right balance between responsible data usage and leveraging AI capabilities to their maximum potential can be both a strategic and operational obstacle for organizations.

This is where Securiti can help.

With its Data Command Center, a centralized platform that enables the safe use of data and GenAI, Securiti provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Securiti's Data Command Center has a proven track record of providing data security, privacy, governance, and compliance for organizations of varying sizes and industries.

With the Data Command Center, organizations can enable a proactive approach toward honoring their regulatory obligations with respect to data privacy while leveraging the maximum benefits from AI usage.

Request a demo today to learn more about how Securiti can help you comply with South Korea's PIPA and other global data privacy regulations.
