New Zealand’s Privacy Commissioner Issues Guidance on AI Usage

Contributors

Anas Baig

Product Marketing Manager at Securiti

Muhammad Faisal Sattar

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/Asia


On September 21, 2023, New Zealand's Office of the Privacy Commissioner (OPC) published guidance on Artificial Intelligence and the Information Privacy Principles (IPPs). This guidance expands on the OPC's initial set of expectations around AI use, published on May 25, 2023. It is intended to help New Zealanders who use AI tools comply with the Privacy Act 2020.

The guidance explains how AI tools function, provides real-world examples, and sets out key questions to consider about privacy. It also explains how AI relates to the 13 IPPs in the Privacy Act, which applies whenever personal information is collected, used, or shared, including through AI tools.

According to the Privacy Commissioner, AI tools pose unique privacy challenges: AI enables novel ways of collecting and combining personal information, making it harder to see, understand, and justify how personal information is used. Organizations should exercise extra caution when feeding personal information into AI technologies, since it is often unclear how these systems produce a particular output.

Personal information under the Privacy Act includes information like a person's name, address, contact details, or photographs. It can also include technical metadata like map coordinates, Internet protocol addresses, or device identifiers related to a person. It also includes information about a person that is inaccurate or made up, including fake social profiles and deepfake images.

Overview of the Updated Guidance

The guidance takes a broad approach to AI systems and their potential privacy impacts as AI is still a developing area, where experts disagree on how capable current systems are and how this will develop over time.

The guidance suggests that AI users in New Zealand should align with IPPs at every stage of the AI process. The guidelines specifically state that organizations utilizing AI tools should:

  • Realize that privacy is a starting point for responsible use of AI tools, and the best time to start privacy work is at the beginning.
  • Think carefully about the use-case before relying on exciting new tools to solve it, and be confident that you understand potential privacy risks.
  • Have senior leadership approval based on full consideration of risks and mitigations.
  • Review whether a generative AI tool is necessary and proportionate given potential privacy impacts, and consider whether you could take a different approach.
  • Conduct a privacy impact assessment before using these tools, including what data sources they were trained on and how relevant and reliable they are for your purposes.
  • Be transparent, telling people how, when, and why the tool is being used.
  • Consider Māori perspectives and engage with them about potential risks and impacts to the taonga of their information.
  • Develop procedures about accuracy and access by individuals to their information.
  • Ensure human review before acting on AI outputs to reduce risks of inaccuracy and bias.
  • Ensure that the AI tool does not retain or disclose personal information.
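The last two points can be operationalized as a screening step that strips personal information from text before it ever reaches an AI tool. The sketch below is purely illustrative (the `redact_pii` helper and its regex patterns are our own, not part of the OPC guidance); real deployments would need far broader, locale-aware detection:

```python
import re

# Illustrative patterns only. Production systems need much wider, locale-aware
# detection (names, addresses, NZ-specific identifiers such as NHI numbers).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?64|0)[\s-]?\d[\d\s-]{6,}\b"),
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected personal information with placeholder tokens
    before the text is sent to an external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.co.nz or 021 555 1234."
print(redact_pii(prompt))
# → Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

Pattern-based redaction catches only obvious identifiers; names, addresses, and free-text references to individuals need more sophisticated techniques, so a filter like this complements rather than replaces the human review the guidance calls for.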

Understanding the potential risks will enable you to use privacy policies to govern your AI tools and ensure privacy statements set clear expectations.

Consider the IPPs When Using AI Tools

The 13 IPPs, which govern how agencies must handle personal information, are the fundamental component of the Privacy Act. They cover the collection, use, and sharing of personal information.

The IPPs apply whether you are developing your own AI tools, using AI tools to aid decision-making, or have team members who use AI informally at work. They also apply when overseas organizations provide AI tools to New Zealanders. You must take your privacy obligations into account in each situation.

The IPPs provide guidelines for handling personal data, including how to collect it (IPPs 1–4), use and protect it (IPPs 5–10), and share it (IPPs 11–12). There are also specific requirements for unique identifiers (IPP 13). Key questions to ask include:

Is the training data behind an AI tool relevant, reliable, and ethical?

AI tools reproduce patterns from their training data. Agencies are generally required to gather personal information directly from the individual it concerns (IPP 2) and to disclose what information they collect and how it will be used (IPP 3). Additionally, agencies must ensure that personal information is collected fairly and lawfully and does not unreasonably intrude into private matters, especially when obtaining data from minors (IPP 4).

Because AI tools replicate the patterns observed in their training data, an organization cannot know whether a tool incorporates personal information obtained in a way that violates IPPs 1–4 unless it has an in-depth understanding of the training data and the design methods used to create the tool. Any gaps or biases in that data may also limit accuracy (IPP 8).

Organizations must specify why they collect personal data and only use and disclose it for those reasons (IPPs 10 and 11). This implies that organizations must carefully assess the reasons behind their information collection needs and ensure they only obtain the information necessary to meet those requirements.

An organization must disclose at the time of collection if it intends to use personal information to train AI tools. Training data is the foundation of AI tools, and if an organization is offering a service, like a chatbot or a phone line, it must explicitly inform users of this and give them the option to opt out of having their information used for these purposes.
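One way to honour such an opt-out is to check a consent registry before any record enters a training set, excluding anyone who opted out or was never asked. A minimal sketch (the registry shape and function name are hypothetical, not prescribed by the guidance):

```python
# Hypothetical opt-out registry: True means the user opted out of training use.
opt_out_registry = {"user-001": True, "user-002": False}

def eligible_for_training(record):
    """Allow a record into the training set only if its owner was asked and
    explicitly declined to opt out; unknown users are excluded by default."""
    return opt_out_registry.get(record["user_id"]) is False

records = [
    {"user_id": "user-001", "text": "chat transcript ..."},  # opted out
    {"user_id": "user-002", "text": "chat transcript ..."},  # consented
    {"user_id": "user-003", "text": "chat transcript ..."},  # never asked
]
training_set = [r for r in records if eligible_for_training(r)]
# Only user-002's record survives the filter.
```

Defaulting to exclusion for unknown users mirrors the guidance's privacy-protective stance: information is reused for training only when that purpose was disclosed at collection and not declined.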

Additionally, organizations need to be confident they are using personal information in ways that fit the purpose for which it was collected. Reusing information for training may go against this (IPP10).

How are you keeping track of the information you collect and use with AI tools?

A person has the right to request access to, and correction of, any information an agency holds about them (IPP 6 and IPP 7). The Commissioner asserts that, before implementing an AI tool, organizations must establish processes for handling requests from individuals to access and correct their personal data. During procurement, before adopting an AI tool, consider the following:

  • Are you confident you can provide information about a person to them if they ask for it?
  • Are you confident that you can correct personal information?
  • How often are models you rely on updated? Can you correct AI outputs in a timely way?
  • How will you verify the identity of an individual requesting their information?

AI capabilities also make it easier for people to mimic other people realistically. Thus, organizations must be extra cautious when confirming the identity of someone requesting sensitive information.

How are you testing that AI tools are accurate and fair for your intended purpose? Are you talking with people and communities with an interest in these issues?

Agencies that hold personal information are required under IPP 8 to take reasonable steps to verify that the data is accurate, up to date, complete, relevant, and not misleading before using or disclosing it. This prompts the question: what "reasonable steps" can organizations take to ensure that AI technologies adhere to the principle of accuracy?

An organization should conduct privacy impact assessments and evaluate every stage of an AI tool's lifecycle. This might involve examining the training process that produced the tool and engaging with the community to understand and uphold fairness and accuracy; for instance, engaging with Māori about the potential risks and impacts to the taonga of their information.

What are you doing to track and manage new risks to information from AI tools?

Organizations must safeguard personal information, prompts, and training data from theft, unauthorized access, and other misuse (IPP5). This includes using cybersecurity measures, such as two-factor authentication.
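As an example of such a measure, the two-factor authentication the guidance mentions is commonly implemented with time-based one-time passwords. A minimal RFC 6238 TOTP sketch using only the Python standard library (the secret below is the RFC's published test key, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the big-endian time-step counter,
    then dynamic truncation to the requested number of digits."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return f"{code:0{digits}d}"

# RFC 6238 Appendix B test vector: ASCII key "12345678901234567890", T = 59 s.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59, digits=8))  # → 94287082
```

Server-side verification is the same computation plus a comparison against the submitted code; combined with a password, this provides the second factor the Commissioner names among the safeguards expected under IPP 5.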

As a result, organizations will have to decide if they can utilize AI tools without sharing back data or if they can rely on contractual clauses that prevent the provider from using the input data for training. Additionally, they will require privacy breach response strategies addressing the possible risks of using AI tools.

In Conclusion

If an organization is unsure about an AI tool, the Commissioner suggests that the safest course of action is to avoid putting personal information into it, and to ensure that everyone in the organization follows this practice.

We all rely on individuals and organizations taking accountability for their actions within the larger framework, which makes it important to be proactive about privacy in order to better control risk and use AI tools more effectively. Additionally, organizations should ensure that training data is obtained and handled in a manner that complies with data privacy regulations, AI laws, and ethical standards.
