New Zealand’s Privacy Commissioner Issues Guidance on AI Usage

Published November 3, 2023 / Updated December 12, 2023

On September 21, 2023, New Zealand's Office of the Privacy Commissioner (OPC) published guidance on Artificial Intelligence and the Information Privacy Principles (IPPs). This guidance expands upon the OPC’s initial set of expectations around AI use, published on May 25, 2023. The purpose of the guidance is to assist New Zealanders who use AI tools in complying with the Privacy Act 2020 as it relates to the usage of AI.

The guidance explains how AI tools function, provides real-world examples, and sets out several questions to consider regarding privacy. It also explains how AI relates to the 13 IPPs in the Privacy Act, which applies whenever personal information is collected, used, or shared, including through AI tools.

According to the Privacy Commissioner, AI tools pose unique privacy challenges as AI enables unique ways of collecting and combining personal information, making it more challenging to see, comprehend, and justify using personal information. Organizations should exercise extra caution when feeding personal information into AI technologies since it’s still unclear how these algorithms produce a specific result.

Personal information under the Privacy Act includes information like a person's name, address, contact details, or photographs. It can also include technical metadata like map coordinates, Internet protocol addresses, or device identifiers related to a person. It also includes information about a person that is inaccurate or made up, including fake social profiles and deepfake images.

Overview of the Updated Guidance

The guidance takes a broad approach to AI systems and their potential privacy impacts as AI is still a developing area, where experts disagree on how capable current systems are and how this will develop over time.

The guidance suggests that AI users in New Zealand should align with IPPs at every stage of the AI process. The guidelines specifically state that organizations utilizing AI tools should:

  • Realize that privacy is a starting point for responsible use of AI tools, and the best time to start privacy work is at the beginning.
  • Think carefully about the use-case before relying on exciting new tools to solve it, and be confident that you understand potential privacy risks.
  • Have senior leadership approval based on full consideration of risks and mitigations.
  • Review whether a generative AI tool is necessary and proportionate given potential privacy impacts, and consider whether you could take a different approach.
  • Conduct a privacy impact assessment before using these tools, including what data sources they were trained on and how relevant and reliable they are for your purposes.
  • Be transparent, telling people how, when, and why the tool is being used.
  • Consider Māori perspectives and engage with them about potential risks and impacts to the taonga of their information.
  • Develop procedures about accuracy and access by individuals to their information.
  • Ensure human review before acting on AI outputs to reduce risks of inaccuracy and bias.
  • Ensure that the AI tool does not retain or disclose personal information.

Understanding the potential risks will enable you to use privacy policies to govern your AI tools and ensure privacy statements set clear expectations.

Consider the IPPs When Using AI Tools

The 13 IPPs, which govern how agencies must handle personal information, are the fundamental component of the Privacy Act. The IPPs govern the activities of collecting, using, and sharing personal information.

The IPPs apply whether you're developing your own AI tools, using AI tools to aid decision-making, or have team members who use AI informally at work. They also apply when overseas organizations provide AI tools to New Zealanders. In each situation, you have to take your privacy obligations into account.

The IPPs provide guidelines for handling personal data, including how to collect it (IPPs 1–4), use and protect it (IPPs 5–10), and share it (IPPs 11–12). There are also specific requirements for unique identifiers (IPP13). Key questions to ask include:

Is the training data behind an AI tool relevant, reliable, and ethical?

AI tools reproduce patterns from their training data. Agencies are generally required to gather personal information directly from the individual it concerns (IPP2) and to tell individuals what information they collect and how it will be used (IPP3). Additionally, agencies have to ensure that personal information is collected fairly and lawfully and does not intrude unreasonably into private matters, especially when obtaining data from minors (IPP4).

Because AI tools replicate patterns observed in their training data, an organization cannot know whether a tool incorporates personal information obtained in a way that violates IPPs 1–4 unless it has an in-depth understanding of the training data and the design methods used to create the tool. Additionally, any gaps or biases in that data may limit accuracy (IPP8).

Organizations must specify why they collect personal data and only use and disclose it for those reasons (IPPs 10 and 11). This implies that organizations must carefully assess the reasons behind their information collection needs and ensure they only obtain the information necessary to meet those requirements.

Training data is the foundation of AI tools. An organization must therefore disclose, at the time of collection, if it intends to use personal information for AI tool training; if it offers a service such as a chatbot or a phone line, it needs to explicitly inform users of this and provide them with an option to opt out of having their information used for these purposes.

Additionally, organizations need to be confident they are using personal information in ways that fit the purpose for which it was collected. Reusing information for training may go against this (IPP10).

How are you keeping track of the information you collect and use with AI tools?

A person has the right to request access to, and correction of, any information an agency holds about them (IPP6 and IPP7). The Commissioner asserts that before implementing an AI tool, organizations must establish processes for handling such access and correction requests. During the procurement phase, before adopting an AI tool, you may want to consider the following:

  • Are you confident you can provide information about a person to them if they ask for it?
  • Are you confident that you can correct personal information?
  • How often are models you rely on updated? Can you correct AI outputs in a timely way?
  • How will you verify the identity of an individual requesting their information?

AI capabilities also make it easier to impersonate people realistically. Organizations must therefore be extra cautious when confirming the identity of someone requesting sensitive information.

How are you testing that AI tools are accurate and fair for your intended purpose? Are you talking with people and communities with an interest in these issues?

Agencies that possess personal information are required under IPP8 to take reasonable steps to verify that the data is accurate, up to date, complete, relevant, and not misleading before using or disclosing it. This raises the question: what "reasonable steps" can organizations take to ensure that AI technologies will adhere to the principle of accuracy?

An organization should conduct privacy impact assessments and evaluate every stage of an AI tool's lifecycle, which might involve examining the training process that developed it and engaging with affected communities to understand and uphold fairness and accuracy — for instance, engaging with Māori about the potential risks and impacts to the taonga of their information.

What are you doing to track and manage new risks to information from AI tools?

Organizations must safeguard personal information, prompts, and training data from theft, unauthorized access, and other misuse (IPP5). This includes using cybersecurity measures, such as two-factor authentication.

As a result, organizations will have to decide whether they can use AI tools without sharing data back to the provider, or whether they can rely on contractual clauses that prevent the provider from using input data for training. They will also need privacy breach response plans that address the specific risks of using AI tools.

In Conclusion

The Commissioner suggests that if an organization is unsure, the safest course of action is to avoid putting personal information into an AI tool at all, and to ensure that everyone in the organization does the same.

We all rely on individuals and organizations accepting accountability for their actions within the larger regulatory framework. Being proactive about privacy allows organizations to better control risk and use AI tools more effectively. Organizations should also ensure that training data is obtained and handled in a manner that complies with data privacy regulations, AI laws, and ethical standards.
