Generative Artificial Intelligence (GenAI) may well be the most significant technological development of modern times. Its applications seem limitless, and on potential alone it could transform much of what we know about work and productivity.
However, while GenAI promises unprecedented operational efficiency and unparalleled innovation, it also raises alarming questions related to data privacy and security.
GenAI relies on a constant stream of input data to power the core functionality that makes it such a valuable tool. The prime challenge for AI in general, then, is how to leverage these capabilities without forsaking the principles of user privacy.
Governments worldwide have attempted to address this question. New Zealand’s government did so with its Interim Guidance for Generative AI in Public Service. In its own words, the guidance provides advice from data, digital, privacy, procurement, and security system leaders about using GenAI tools in public service.
The guidance provides guardrails to promote the safe deployment of GenAI tools while also making recommendations on what practices businesses must avoid if they wish to leverage these tools without violating users’ trust and privacy or breaching regulatory requirements.
Necessity Above All
Organizations that offer GenAI services must ensure that the product/service they offer provides a public service benefit. More importantly, there should be an evident necessity for such services. Some of the considerations organizations must take into account include the following:
- Efficiency & Productivity: The product/service being offered must represent an upgrade over existing options via simplification and automation of the processes involved.
- Improved Service Design & Delivery: The product/service being offered must provide customers with a greater degree of customization and personalization.
- Enhanced Cybermonitoring: The product/service being offered must provide considerably better protection online via the use of modern techniques such as predictive analysis, vulnerability assessment, and threat detection.
- Innovation: The product/service being offered must represent a noticeable improvement on any existing product/service available on the market via big-data-based insights.
- Improved Policy Deployment: The product/service being offered must give users comprehensive, real-time analysis of all policies in place, enabling future improvements.
Critical Recommendations
The Interim Guidance provides some recommendations related to what practices organizations should encourage and avoid to enhance the chances of safe GenAI usage. These include the following:
Don’t Use GenAI for Sensitive Data
The Interim Guidance is straightforward in stating that organizations should steer clear of using any datasets containing sensitive data to train the GenAI tools in use. Additionally, organizations should take proactive measures to gain granular visibility into such datasets so that sensitive data is never used accidentally, as the potential risks outweigh the benefits.
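To make this concrete, a lightweight pre-training scan can flag records matching common sensitive-data patterns before they ever reach a training pipeline. The sketch below is a minimal illustration, not a method from the guidance; the patterns and field names are hypothetical, and a real deployment would rely on a dedicated data classification engine rather than hand-rolled regexes:

```python
import re

# Hypothetical patterns for common sensitive identifiers (illustrative only).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive_records(records):
    """Return (record_index, field, pattern_name) for every suspect value."""
    hits = []
    for i, record in enumerate(records):
        for field, value in record.items():
            for name, pattern in SENSITIVE_PATTERNS.items():
                if pattern.search(str(value)):
                    hits.append((i, field, name))
    return hits

sample = [{"comment": "Contact me at jane@example.com", "score": 4}]
for hit in flag_sensitive_records(sample):
    print("exclude from training:", hit)
```

Records flagged this way can be excluded or redacted before fine-tuning, keeping sensitive data out of model training entirely.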
Personal Data Governance
As with sensitive data, the guidance recommends against using personal data to train third-party GenAI tools. Where users have expressly consented to having their data processed for such applications, an organization may proceed, but only for GenAI applications running within its own internal network.
Additionally, organizations should avoid using any datasets that may not fully satisfy the usage criteria per the Official Information Act, as this could degrade public trust and confidence.
Don’t Use GenAI for Critical Decision-Making
GenAI usage faces several unique challenges. One such challenge is GenAI models generating inaccurate, incomplete, or, in some cases, completely false outputs, which exacerbates the likelihood of misinformation and bias. As developers continue to refine strategies to mitigate these issues, organizations would be well-advised not to rely on GenAI outputs when making critical business or public-facing decisions.
Deter Shadow IT
Organizations must adopt a uniform approach to GenAI usage internally. Failing to do so may lead to different teams using different GenAI tools, each posing its own security, data, and privacy risks. Mitigating such a patchwork of risks can exhaust resources that could be better used elsewhere.
Acknowledge the Risk
The number of GenAI tools available on the market has proliferated at a remarkable rate, but not all of them are equal. Beyond output quality, they differ in the risks they carry; free GenAI tools in particular often lack reliable privacy and security protocols. That is not to say paid GenAI tools are risk-free: each tool presents its own set of risks and challenges, and organizations must understand and acknowledge this in their overall AI governance framework.
Honor The Treaty of Waitangi
Organizations operating in New Zealand that deal with New Zealanders’ data must take extra precautions in this regard. This recommendation is more ethical than operational, as it calls on organizations using GenAI tools to work closely with iwi Māori when using Māori data in ways that may impact Māori. Understandably, since this is a fairly sensitive issue, the guidance recommends developing a deeper contextual understanding of the Māori-Crown relationship. As for exact measures, Māori Data Governance may provide the necessary insights.
Additionally, the existing Māori-Crown relationship approach provides a relationship engagement model that an organization and its Te Tiriti partners can use when making decisions involving Māori data.
Practical Steps For Safe & Responsible AI Deployment
AI’s benefits are manifold, and when used appropriately, it can significantly boost an organization’s productivity. However, these GenAI capabilities come with several risks; if not properly used and managed, they can severely damage public trust in the organization and harm its credibility and reputation.
To mitigate such risks, the interim guidance provides some “Dos”. These are practical measures that organizations can take related to deploying AI capabilities for public service:
Building a Robust Data Governance Framework
Organizations must place the development of a robust and reliable data governance framework at the forefront of their overall AI governance strategy since any AI-generated outputs will depend on the data an organization is able to feed into it. A comprehensive framework of this nature would involve several governance-related measures that empower an organization to utilize its data resources in a compliant manner. Some critical measures that can be adopted include the following:
- Data Minimization: Only the most necessary data should be collected, processed, used, and stored. This not only limits unnecessary data collection but also reduces the potential harm in the event of a data breach.
- Purpose Specification: Limiting the collection, processing, and use of data is only the first step; it must be paired with clearly defined reasons for each of those activities. Concrete purposes translate into transparent objectives, which in turn allow for more targeted compliance measures.
- Data Retention: A critical aspect of purpose specification is how to dispose of data once it has served its purpose. Appropriate security measures and protocols should be in place to allow for seamless deletion and destruction of data once it is no longer needed (a minimal retention sweep is sketched after this list).
- Data Quality & Accessibility: The higher the quality of the data, the better the output an AI model generates; data accuracy and relevance translate directly into output quality. Hence, appropriate measures must be taken to ensure any collected data can be accessed and corrected as needed.
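As a minimal sketch of the retention measure above (our illustration, not a mechanism prescribed by the guidance), the snippet below sweeps a record store and queues for deletion anything older than its purpose-specific retention period. The purposes, periods, and record structure are all assumptions:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per collection purpose (illustrative only).
RETENTION_PERIODS = {
    "support_ticket": timedelta(days=365),
    "chat_transcript": timedelta(days=90),
}

def sweep_expired(records, now=None):
    """Split records into (kept, expired) based on purpose-specific retention."""
    now = now or datetime.now(timezone.utc)
    kept, expired = [], []
    for record in records:
        limit = RETENTION_PERIODS.get(record["purpose"])
        if limit and now - record["created_at"] > limit:
            expired.append(record)  # queue for secure deletion/destruction
        else:
            kept.append(record)
    return kept, expired

records = [
    {"id": 1, "purpose": "chat_transcript",
     "created_at": datetime(2023, 1, 1, tzinfo=timezone.utc)},
]
kept, expired = sweep_expired(records)
print(f"{len(expired)} record(s) due for deletion")
```

Running a sweep like this on a schedule ties retention directly back to the stated purpose of collection, rather than leaving disposal as an afterthought.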
Continuous Education & Training
This may seem like a fairly straightforward step. Still, organizations must place strong emphasis on maintaining ongoing internal training programs and educational resources that keep employees informed of best practices for responsible AI usage. Doing so can offset a significant risk many organizations fall victim to: insider threats. Such training programs and educational resources can consist of the following:
- Role-Specific Training: “Employees” is a fairly broad term. Hence, tailoring the relevant training resources for different roles corresponding to their responsibilities and, ideally, their usage of AI resources can help organizations ensure they’ve equipped all their employees with the specific skills and knowledge needed to use AI capabilities responsibly.
- Regular Updates: Privacy and AI regulation is a highly dynamic domain, with rules updated near-constantly to keep pace with new developments. The training resources that teach employees to use AI capabilities responsibly must therefore be evaluated and updated just as regularly.
Public Engagement & Transparency
The modern customer is more educated and aware of their rights than ever before, and that awareness extends to how organizations use their data in tandem with AI capabilities when delivering products and services. Transparency from organizations in this regard, through resources that address public anxiety and concerns, can go a long way toward building a relationship of trust. Some steps that can help include the following:
- Public Consultations: Consumers should be a high priority for organizations aiming to use AI responsibly, not only for regulatory reasons but also to learn firsthand the potential societal impact of their AI usage and to address any grievances directly.
- Transparency Reports: In addition to public consultations, regular transparency reports can be published that include comprehensive details and information related to the organization’s use of AI capabilities, the outputs generated, methodologies used, and, most importantly, how the aforementioned governance framework helped them leverage AI responsibly.
- Stakeholder Pipelines: When using AI and data in tandem, an organization will be engaging with a high volume of stakeholders. It is necessary to have an accurate view of this engagement, specifically with marginalized groups, to ensure inclusive decision-making and AI usage.
Ethical AI Development
AI, in its current form, promises to have a major influence on business, society, and communication as we know them, and this influence will only grow as AI capabilities advance. The onus is on the organizations developing and deploying such capabilities to undertake the measures necessary to ensure all AI usage remains within ethical parameters. These measures can include:
- Bias Mitigation: Biases are a major concern when deploying AI capabilities. They can emerge for several reasons, and detecting them can be complicated. However, the proper techniques can proactively detect and mitigate elements within input datasets that may lead to bias, helping ensure fairness throughout the AI use lifecycle (a simple fairness check is sketched after this list).
- Explainability: Explainable AI (XAI) has emerged as a critical challenge related to AI usage within organizations. While AI provides tremendous benefits and leaps in productivity, organizations often have little visibility into how the AI models they deploy reach their decisions. Hence, steps must be taken to give both users and internal employees greater visibility into the models’ decision-making, both to enhance accountability and to satisfy relevant documentation requirements (see the second sketch after this list).
- Ethical Audits: An ethical audit involves a thorough evaluation of the organization’s governance measures to ensure compliance with the relevant ethical considerations and standards. Such audits highlight potential blindspots and areas for improvement.
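As one simple illustration of bias detection (a technique we are assuming here, not one named in the guidance), the sketch below computes a demographic parity gap: the difference in a model’s positive-outcome rate across groups. A large gap is a signal worth investigating, though parity alone does not prove fairness:

```python
from collections import defaultdict

def positive_rates(examples):
    """Rate of positive outcomes per group; a large gap between groups
    is one simple signal of potential bias worth investigating."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in examples:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative (group, model_outcome) pairs; 1 = positive decision.
examples = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rates(examples)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")
```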
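For explainability, permutation importance is one widely used, model-agnostic starting point. The sketch below uses scikit-learn, with synthetic data standing in for a real model and dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; in practice this would be the model's real inputs.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```

Reports like these do not fully explain a model, but they give users and auditors a concrete, documented view of what drives its decisions.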
Leverage PETs
Privacy has been and will continue to be a primary concern for most customers, and its importance may reach new heights in the face of modern AI usage and capabilities. Hence, to alleviate such concerns without compromising on the benefits of AI, organizations need to proactively adopt privacy-enhancing technologies (PETs) within their operations. These may include:
- Encryption: State-of-the-art encryption has long been considered the most reliable and effective measure an organization can deploy to ensure the privacy of collected data, and that still holds when protecting data in the context of AI usage. Advanced encryption protocols can ensure all data at rest, in transit, and in use is protected from tampering as well as unauthorized access (see the first sketch below).
- Differential Privacy: Differential privacy is a relatively new data protection concept, but one that has proven highly effective for the organizations deploying it. Differential privacy techniques add calibrated statistical noise so that datasets remain useful in aggregate while individual data points are obscured (see the second sketch below).
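As a minimal sketch of encryption at rest, assuming Python’s `cryptography` package is installed, symmetric encryption with Fernet might look like the following; the record content is illustrative:

```python
from cryptography.fernet import Fernet

# In practice the key would live in a key management service, never in code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"user_id": 42, "feedback": "example free-text input"}'
ciphertext = fernet.encrypt(record)      # store this, not the plaintext
plaintext = fernet.decrypt(ciphertext)   # requires the key

assert plaintext == record
print("encrypted length:", len(ciphertext))
```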
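And as a minimal sketch of differential privacy, the classic Laplace mechanism adds noise calibrated to a query’s sensitivity and a privacy budget ε. The example below privatizes a simple count; the dataset and ε value are illustrative:

```python
import numpy as np

def dp_count(values, epsilon=1.0):
    """Differentially private count: the true count plus Laplace noise.
    A counting query has sensitivity 1 (one person changes the count
    by at most 1), so the noise scale is 1/epsilon."""
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative query: how many users submitted feedback today?
submissions = ["a", "b", "c", "d", "e"]
print("true count:", len(submissions))
print("private count:", round(dp_count(submissions, epsilon=0.5), 1))
```

Smaller ε values mean more noise and stronger privacy; the right trade-off depends on how the aggregate result will be used.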
Audits, Assessments, Compliance Checks
Deploying all the aforementioned measures and recommendations is only the first step. Organizations must also verify, on an ongoing basis, that those measures continue to serve their intended purpose. Consequently, regular assessments, audits, and compliance checks are necessary. These include:
- Technology Audits: With a technology audit, organizations can conduct a thorough and comprehensive evaluation of any and all AI systems in use within their infrastructure to ensure they’re being used per the relevant regulations and ethical considerations.
- Governance Audits: Similar to a technology audit, a governance audit allows for a similar assessment of the governance structure, processes, policies, and mechanisms currently deployed within the organization. Such a practice highlights any blindspots or potential weaknesses, allowing for proactive and immediate corrective measures to be taken.
- Third-Party Reviews: Once an organization believes it has taken all necessary steps to ensure its usage of AI capabilities meets the necessary regulatory requirements, it can engage independent auditors to perform similar audits. Doing so will not only help promote transparency and trust but also provide the organization with helpful insights that can be leveraged for future audits.
How Securiti Can Help
The interim GenAI guidance explained above provides critical insights into the direction, tools, and overall strategies organizations are expected to adopt to ensure compliance with future regulations on the matter. Followed properly, it will allow organizations to leverage maximum benefits from leaps in AI while also mitigating any potential risks.
This is where Securiti can help such organizations.
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
The Data Command Center (DCC) is equipped with numerous modules and solutions designed to ensure regulatory compliance with any data and AI-related obligations an organization may be subject to. These solutions are not only easy to deploy and use but also allow for real-time assessment of an organization’s activities, enabling proactive interventions when necessary.
In addition to the DCC, Securiti provides a 5-Step Path to AI Governance that aims to empower organizations in discovering, assessing, and mapping all AI models and systems in use while also facilitating appropriate controls and compliance measures.
Request a demo today and get in touch with Securiti to learn more about how you can stay ahead of your competition in complying with relevant data and AI regulatory requirements in New Zealand.