An Overview of Australia’s Voluntary AI Safety Standard

Published January 1, 2025
Contributors

Anas Baig

Product Marketing Manager at Securiti

Asaad Ahmad Qureshy

Associate Data Privacy Analyst at Securiti

Salma Khan

Data Privacy Analyst at Securiti

CIPP/Asia


I. Introduction

On September 5, 2024, the Australian Government introduced the Voluntary AI Safety Standard (Standard), marking a significant step towards responsible and ethical development and deployment of Artificial Intelligence (AI) technologies. This Standard provides Australian organisations and developers with comprehensive guidelines for identifying and mitigating evolving AI risks. By addressing security, accountability, transparency, and fairness, the Standard positions Australia as a global leader in AI regulation and governance.

Notably, the Voluntary AI Safety Standard complements the government’s broader Safe and Responsible AI agenda, which includes the development of mandatory guardrails for high-risk AI settings. These mandatory guardrails, outlined in a concurrently released proposals paper, are closely aligned with the voluntary measures in the Standard. The alignment signals the government’s intent to transition these voluntary guidelines into enforceable regulations, encouraging organisations to proactively adopt these practices now to ease future compliance.

By adopting the Voluntary AI Safety Standard, organisations can ensure individuals and businesses safely leverage AI to its full potential while minimizing risks to data security and human rights.

This article dives into the key components of the Standard, its application, critical definitions, and how organisations can operationalize these guidelines to align their AI practices with its recommendations.

II. Whom Does the Standard Apply To

a. Material Scope

The Standard applies to organisations developing and deploying AI systems.

b. Territorial Scope

The Standard is intended for Australian organisations involved in the development and deployment of AI systems, as well as those whose AI technologies impact Australian citizens, organisations, or infrastructure.

Although voluntary, it sets expectations for organisations that develop, deploy, or interact with AI within the Australian jurisdiction. It may therefore be relevant to both local and international businesses whose operations impact Australian people or organisations and that want to meet Australia's AI safety expectations.

III. Key Definitions Under the Standard

a. Safe and Responsible AI

AI should be designed, developed, deployed, and used safely; its use should be responsible, trustworthy, and focused on individuals. The development and application of AI systems should minimize the possibility of adverse effects on individuals, communities, and society while maximizing benefits.

b. AI Deployer

An individual or business that supplies or uses an AI system to deliver a good or service. Deployment can occur within or outside an organisation and impact customers or individuals who are not system deployers.

c. AI Developer

An entity or organisation that designs, develops, tests, and provides AI technologies, including models and components.

d. AI User

An entity that utilizes or relies on an AI system. This entity may be a system, an individual, or an organisation (such as a business, government, or non-profit).

e. Affected Stakeholder

An organisation, person, community, or other system that is impacted by an AI system's decisions or actions.

IV. Key Components of the Standard

The Standard outlines ten voluntary guardrails designed to mitigate risks associated with AI:

Guardrail 1 – Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

By establishing accountability procedures, Guardrail 1 lays the groundwork for an organisation's safe and responsible use of AI. It involves designating an AI-use owner, developing an AI strategy, and providing the organisation with the necessary training.

Guardrail 2 – Establish and implement a risk management process to identify and mitigate risks.

Establish a risk management process to assess the impact and risks of AI use, considering potential harms informed by a stakeholder impact assessment. Conduct ongoing risk assessments to ensure that risk mitigations remain effective.
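The Standard does not prescribe specific tooling, but many organisations make this process concrete with a risk register that links each identified risk to its mitigations and review cadence. Below is a minimal, hypothetical sketch in Python; the field names and the 1-to-5 scoring scheme are illustrative assumptions, not requirements of the Standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical risk register entry. Field names and the 1-5 scoring
# scale are illustrative assumptions, not prescribed by the Standard.
@dataclass
class AIRiskEntry:
    system_name: str
    risk_description: str
    affected_stakeholders: list[str]
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigations: list[str] = field(default_factory=list)
    next_review: date = date(2025, 3, 1)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact scoring, a common convention.
        return self.likelihood * self.impact

entry = AIRiskEntry(
    system_name="loan-triage-model",
    risk_description="Model may disadvantage applicants from regional areas",
    affected_stakeholders=["loan applicants"],
    likelihood=3,
    impact=4,
    mitigations=["bias testing before release", "human review of declines"],
)
print(entry.risk_score)  # 12 -> prioritise mitigation and re-review
```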

Guardrail 3 – Protect AI systems, and implement data governance measures to manage data quality and provenance.

Implement robust cybersecurity, privacy, and data governance policies based on the AI system's use case and risk profile. These steps should address aspects unique to AI, such as cyber risks, data provenance, and data quality.
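One way to make data provenance and quality auditable is to record, for each training dataset, where it came from and a checksum that detects later changes. The sketch below is a hypothetical illustration; the Standard does not mandate any particular record format, and all field names here are assumptions.

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(path: str, source: str, licence: str) -> dict:
    """Build a hypothetical provenance record for a training dataset.

    The checksum lets auditors verify the dataset has not changed since
    the record was created; the other fields are illustrative only.
    """
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256.update(chunk)
    return {
        "dataset": path,
        "source": source,
        "licence": licence,
        "sha256": sha256.hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Example (assuming a local file "train.csv" exists):
# record = provenance_record("train.csv", "internal CRM export", "internal use")
```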

Guardrail 4 – Test AI models and systems to evaluate model performance and monitor the system once deployed.

Before deployment, thoroughly test AI models and systems, then continuously monitor them for changes in behavior or unintended consequences. Testing should be measured against acceptance criteria established through risk and impact assessments.
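In practice, acceptance criteria often take the form of a pre-deployment gate that compares measured metrics against thresholds agreed during risk assessment. The sketch below is hypothetical; the metric names and thresholds are illustrative assumptions, not values from the Standard.

```python
# Hypothetical pre-deployment gate. Metric names and thresholds are
# illustrative assumptions agreed during risk assessment, not values
# taken from the Standard itself.
ACCEPTANCE_CRITERIA = {
    "accuracy": 0.90,             # minimum acceptable accuracy
    "false_positive_rate": 0.05,  # maximum acceptable FPR
}

def passes_acceptance(metrics: dict[str, float]) -> bool:
    if metrics["accuracy"] < ACCEPTANCE_CRITERIA["accuracy"]:
        return False
    if metrics["false_positive_rate"] > ACCEPTANCE_CRITERIA["false_positive_rate"]:
        return False
    return True

# Evaluated on a held-out test set before deployment:
print(passes_acceptance({"accuracy": 0.93, "false_positive_rate": 0.04}))  # True
```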

Guardrail 5 – Enable human control or intervention in an AI system to achieve meaningful human oversight.

Ensure mechanisms for human control or intervention throughout the AI system's lifecycle. Since AI systems frequently incorporate components from several vendors, effective human oversight minimizes the possibility of unexpected outcomes and risks by enabling appropriate controls.
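A common pattern for meaningful human oversight is a confidence-based escalation gate that routes uncertain or high-stakes decisions to a human reviewer rather than auto-applying them. This is a hypothetical sketch; the threshold and queue mechanism are illustrative assumptions.

```python
from queue import Queue

# Hypothetical escalation gate: decisions below this confidence are
# routed to a human reviewer instead of being auto-actioned. The
# threshold is an illustrative assumption.
REVIEW_THRESHOLD = 0.80
review_queue: Queue = Queue()

def route_decision(case_id: str, decision: str, confidence: float) -> str:
    if confidence < REVIEW_THRESHOLD:
        review_queue.put((case_id, decision, confidence))
        return "escalated to human reviewer"
    return f"auto-applied: {decision}"

print(route_decision("case-001", "approve", 0.95))  # auto-applied
print(route_decision("case-002", "decline", 0.62))  # escalated
```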

Guardrail 6 – Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.

Build trust by transparently communicating when AI is used, what role it plays, and when content is AI-generated. Select the disclosure method best suited to the particular use case, stakeholders, and technology to reassure users and the public about ethical AI practices.
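For AI-generated content, disclosure can combine a visible notice with machine-readable metadata attached to the output. The sketch below is a hypothetical illustration; the field names are assumptions, and organisations may prefer an established content-provenance scheme such as C2PA.

```python
from datetime import datetime, timezone

def label_ai_output(text: str, model: str) -> dict:
    """Attach a hypothetical, machine-readable disclosure to AI output.

    Field names are illustrative assumptions, not a format prescribed
    by the Standard.
    """
    return {
        "content": text,
        "disclosure": "This content was generated by an AI system.",
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

print(label_ai_output("Your claim has been pre-assessed...", "support-bot-v2"))
```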

Guardrail 7 – Establish processes for people impacted by AI systems to challenge use or outcomes.

Businesses should provide procedures for users and other impacted stakeholders to contest and challenge AI-related choices, outcomes, or interactions.

Guardrail 8 – Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

Organisations should share information with others in the AI supply chain to help them understand AI’s components, how it was built, and the risks associated with the AI system.

Guardrail 9 – Keep and maintain records to allow third parties to assess compliance with guardrails.

Organisations should keep detailed records, including an AI inventory, records of processing activities, and thorough documentation of AI systems, to demonstrate compliance with the guardrails.
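An AI inventory can be as simple as a structured registry of systems, owners, purposes, and assessment dates that a third party could audit. The sketch below shows one hypothetical format; the Standard does not prescribe specific fields.

```python
import json

# Hypothetical AI inventory entry; field names are illustrative
# assumptions, not a format prescribed by the Standard.
inventory = [
    {
        "system": "resume-screening-assistant",
        "owner": "head-of-talent@example.com",
        "purpose": "shortlist candidates for recruiter review",
        "risk_level": "high",
        "guardrail_assessments": {
            "risk_assessment": "2024-11-02",
            "acceptance_testing": "2024-11-20",
        },
    }
]

# Persist the inventory so third parties can assess compliance.
with open("ai_inventory.json", "w") as f:
    json.dump(inventory, f, indent=2)
```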

Guardrail 10 – Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

Organisations should involve stakeholders throughout the AI system's lifecycle to detect and minimize potential risks, biases, and unintended consequences. This involves addressing accessibility, reducing unwanted bias, and upholding diversity, inclusion, and fairness throughout the AI system's design and use.

V. Alignment with International Standards

The Voluntary AI Safety Standard is consistent with international best practices, including AS ISO/IEC 42001:2023 and the NIST AI Risk Management Framework (AI RMF 1.0). This alignment means that Australian organisations adhering to the Standard are also well positioned to meet emerging international AI governance expectations.

VI. How Can Organisations Operationalize the Standard

Here are ways organisations can operationalize Australia’s Voluntary AI Safety Standard:

  • Establish Accountability: Appoint an AI lead in charge of strategy, oversight, and ensuring AI is safely utilized throughout the organisation.
  • Develop an AI Strategy: Establish a structured approach to implementing AI that includes objectives, risk controls, and ethical standards.
  • Implement Risk Management Processes: Regularly assess AI risks and impacts and conduct risk assessments to mitigate any possible negative effects.
  • Set Up Data Governance and Cybersecurity: Develop robust cybersecurity, privacy, and data handling procedures adapted to AI's unique needs, such as data provenance and quality.
  • Test and Monitor Thoroughly: Test AI systems before deployment and monitor them afterwards to identify unexpected impacts or changes in behavior.
  • Enable Human Oversight: Establish mechanisms that enable human intervention to handle unforeseen issues at any stage of the AI lifecycle.
  • Ensure Transparency and Disclosure: Clearly communicate AI use, roles, and when content is AI-generated, building trust with users and stakeholders.
  • Provide Challenge and Appeal Mechanisms: Establish channels for impacted stakeholders to challenge AI decisions or outcomes.
  • Share Information Across the AI Supply Chain: Collaborate with supply chain partners to understand and manage AI components, data sources, and AI risks.
  • Document Compliance Efforts: Maintain records, including an AI inventory and documentation, to demonstrate compliance with the standard and other AI laws.
  • Engage with Stakeholders: Regularly consult stakeholders to identify potential risks, address bias, and ensure AI accessibility, transparency and ethical fairness.

VII. How Securiti Can Help

Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti Gencore AI enables organisations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.

Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks, and protect sensitive data. This enables organisations to align their AI practices with the Voluntary AI Safety Standard.

Request a demo to learn more.
