New Zealand’s Privacy Commissioner Issues Guidance on AI Usage

Contributors

Anas Baig

Product Marketing Manager at Securiti

Muhammad Faisal Sattar

Data Privacy Legal Manager at Securiti

FIP, CIPT, CIPM, CIPP/Asia

Published November 3, 2023 / Updated December 12, 2023


On September 21, 2023, New Zealand's Office of the Privacy Commissioner (OPC) published guidance on Artificial Intelligence and the Information Privacy Principles (IPPs). This guidance expands upon the OPC’s initial set of expectations around AI use, published on May 25, 2023. The guidance is intended to help New Zealanders who use AI tools comply with the Privacy Act 2020.

The guidance explains how AI tools function, provides real-world examples, and sets out several questions to consider regarding privacy. It also explains how AI relates to the 13 IPPs in the Privacy Act, which apply whenever personal information is collected, used, or shared, including through AI tools.

According to the Privacy Commissioner, AI tools pose distinct privacy challenges because they enable new ways of collecting and combining personal information, making it harder to see, comprehend, and justify how personal information is used. Organizations should exercise extra caution when feeding personal information into AI technologies, since it is often unclear how these systems produce a specific result.

Personal information under the Privacy Act includes information like a person's name, address, contact details, or photographs. It can also include technical metadata like map coordinates, Internet protocol addresses, or device identifiers related to a person. It also includes information about a person that is inaccurate or made up, including fake social profiles and deepfake images.

Overview of the Updated Guidance

The guidance takes a broad approach to AI systems and their potential privacy impacts, since AI is still a developing area in which experts disagree about how capable current systems are and how they will evolve over time.

The guidance suggests that AI users in New Zealand should align with IPPs at every stage of the AI process. The guidelines specifically state that organizations utilizing AI tools should:

  • Realize that privacy is a starting point for responsible use of AI tools, and the best time to start privacy work is at the beginning.
  • Think carefully about the use-case before relying on exciting new tools to solve it, and be confident that you understand potential privacy risks.
  • Have senior leadership approval based on full consideration of risks and mitigations.
  • Review whether a generative AI tool is necessary and proportionate given potential privacy impacts, and consider whether you could take a different approach.
  • Conduct a privacy impact assessment before using these tools, including what data sources they were trained on and how relevant and reliable they are for your purposes.
  • Be transparent, telling people how, when, and why the tool is being used.
  • Consider Māori perspectives and engage with them about potential risks and impacts to the taonga of their information.
  • Develop procedures about accuracy and access by individuals to their information.
  • Ensure human review before acting on AI outputs to reduce risks of inaccuracy and bias.
  • Ensure that the AI tool does not retain or disclose personal information.

Understanding the potential risks will enable you to use privacy policies to govern your AI tools and ensure privacy statements set clear expectations.

Consider the IPPs When Using AI Tools

The 13 IPPs are the core of the Privacy Act, governing how agencies must handle personal information as they collect, use, and share it.

The IPPs apply whether you develop your own AI tools, use AI tools to aid decision-making, or have team members who use AI informally at work. They also apply when overseas organizations provide AI tools to New Zealanders. In each situation, you have to take your privacy obligations into account.

The IPPs provide guidelines for handling personal data, including how to collect it (IPPs 1–4), use and protect it (IPPs 5–10), and share it (IPPs 11–12). There are also specific requirements for unique identifiers (IPP 13). Key questions to ask include:

Is the training data behind an AI tool relevant, reliable, and ethical?

AI tools reproduce patterns from their training data. Agencies are generally required to collect personal information directly from the individual it concerns (IPP2) and to be transparent about what they collect and how it will be used (IPP3). Agencies must also ensure personal information is collected by fair and lawful means that do not unreasonably intrude into personal affairs, especially when collecting data from minors (IPP4).

Consequently, an organization cannot know whether a tool incorporates personal information obtained in a way that violates IPPs 1–4 unless it has an in-depth understanding of the training data and the methods used to create the tool. Any gaps or biases in that data may also limit accuracy (IPP8).

Organizations must specify why they collect personal data and only use and disclose it for those reasons (IPPs 10 and 11). This implies that organizations must carefully assess the reasons behind their information collection needs and ensure they only obtain the information necessary to meet those requirements.

An organization must disclose at the time of collection if it intends to use personal information for AI training. Training data is the foundation of AI tools, so if an organization offers a service, such as a chatbot or a phone line, it must explicitly inform users of this use and give them an option to opt out of having their information used for these purposes.

Additionally, organizations need to be confident they are using personal information in ways that fit the purpose for which it was collected. Reusing information for training may go against this (IPP10).
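To make the purpose-limitation idea concrete, the sketch below shows one way an organization might gate reuse of records for model training on the purposes disclosed at collection. All names and fields here are hypothetical illustrations, not anything prescribed by the OPC guidance:

```python
# Hypothetical purpose-limitation gate (in the spirit of IPP10): only
# records whose disclosed collection purposes include "ai_training"
# may be reused to train a model. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Record:
    subject_id: str
    data: dict
    purposes: set[str] = field(default_factory=set)  # purposes disclosed at collection

def eligible_for_training(records: list[Record]) -> list[Record]:
    """Keep only records collected with an explicit training purpose."""
    return [r for r in records if "ai_training" in r.purposes]

rows = [
    Record("u1", {"query": "order status"}, {"support"}),
    Record("u2", {"query": "refund"}, {"support", "ai_training"}),
]
print([r.subject_id for r in eligible_for_training(rows)])  # → ['u2']
```

Recording purposes alongside each record, rather than in a separate policy document, makes the check enforceable at the point where data is actually pulled into a training pipeline.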

How are you keeping track of the information you collect and use with AI tools?

A person has the right to request access to, and correction of, any information an agency holds about them (IPP6 and IPP7). The Commissioner states that organizations must establish processes for handling such requests before implementing an AI tool. During procurement, you may want to consider the following:

  • Are you confident you can provide information about a person to them if they ask for it?
  • Are you confident that you can correct personal information?
  • How often are models you rely on updated? Can you correct AI outputs in a timely way?
  • How will you verify the identity of an individual requesting their information?
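As a rough illustration of the access and correction processes this checklist points at, an agency's records system might expose simple handlers for IPP6 and IPP7 requests. This is a minimal hypothetical sketch; a real system would also need identity verification and audit logging:

```python
# Hypothetical IPP6/IPP7 request handlers over an in-memory store
# indexed by subject. All identifiers and fields are illustrative.
store: dict[str, dict[str, str]] = {
    "u1": {"email": "a@example.com", "city": "Auckland"},
}

def access_request(subject_id: str) -> dict[str, str]:
    """IPP6: return a copy of everything held about the subject."""
    return dict(store.get(subject_id, {}))

def correction_request(subject_id: str, field: str, value: str) -> bool:
    """IPP7: apply a correction if the subject and field exist."""
    if subject_id in store and field in store[subject_id]:
        store[subject_id][field] = value
        return True
    return False
```

Keeping personal information indexed by subject is what makes these requests answerable at all; data scattered through AI prompts or model outputs is much harder to retrieve or correct on demand.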

AI capabilities also make it easier to impersonate people convincingly, so organizations must be especially careful when confirming the identity of anyone requesting sensitive information.

How are you testing that AI tools are accurate and fair for your intended purpose? Are you talking with people and communities with an interest in these issues?

Agencies that hold personal information are required under IPP8 to take reasonable measures to verify that the data is accurate, up to date, complete, relevant, and not misleading before using or disclosing it. This raises the question: what "reasonable steps" can organizations take to ensure that AI technologies adhere to the accuracy principle?

An organization should conduct privacy impact assessments and evaluate every stage of an AI tool's lifecycle. This might involve examining the training process that produced the tool and engaging with affected communities, including Māori, to understand and uphold fairness and accuracy, for instance, by consulting Māori about the potential risks and impacts to the taonga of their information.

What are you doing to track and manage new risks to information from AI tools?

Organizations must safeguard personal information, prompts, and training data from theft, unauthorized access, and other misuse (IPP5). This includes using cybersecurity measures, such as two-factor authentication.

Organizations will therefore have to decide whether they can use AI tools without input data being shared back with the provider, or whether they can rely on contractual clauses preventing the provider from using input data for training. They will also need privacy breach response plans that address the particular risks of using AI tools.

In Conclusion

The Commissioner suggests that if an organization is unsure, the safest course of action is not to put personal information into an AI tool at all, and to ensure that everyone in the organization follows that rule.

We all rely on individuals and organizations taking accountability for their actions within the larger framework, which makes being proactive about privacy essential to controlling risk and using AI tools effectively. Organizations should also ensure that training data is obtained and handled in a manner that complies with data privacy regulations, AI laws, and ethical standards.
