Understanding Australia’s New Mandatory AI Guardrails: A Move Towards Safe and Responsible AI

Published September 18, 2024
Author

Asaad Ahmad Qureshy

Associate Data Privacy Analyst at Securiti

On September 5, 2024, the Department of Industry, Science, and Resources (DISR) in Australia initiated a public consultation on the proposed Mandatory Guardrails for the Safe Use of AI in High-Risk Settings.

While Australia has made progress with interim measures, such as the government's interim response to the Safe and Responsible AI in Australia consultation in early 2024, significant regulatory gaps remain, particularly in the upstream development and deployment of AI technologies. Upstream refers to the initial phases of AI development and use, before the technology is fully integrated into goods or services in high-risk industries; it covers the design, training, and testing phases, where critical safety decisions are made. These gaps concern accountability, transparency, and enforcement, leaving high-risk AI applications insufficiently governed.

The interim measures emphasized the need for voluntary guardrails but fell short of addressing the specific safety issues that arise in high-risk AI settings. The newly proposed mandatory guardrails directly target these gaps, aiming to establish clearer regulatory frameworks, improve accountability for AI developers, and provide stronger enforcement mechanisms.

Australia's current regulatory landscape for AI consists of laws that vary in scope and application across different sectors. For example, the Therapeutic Goods Administration (TGA) has issued guidance for regulating software-based medical devices, including AI-driven tools like large language models.

The eSafety Commissioner, under the Online Safety Act 2021, regulates AI-enabled internet services, with a focus on curbing illegal and harmful content through the Search Engine Services Code and the Designated Internet Services industry standard. These codes place obligations on AI services to manage risks like child exploitation and pro-terror content. Laws also include the Privacy Act 1988, which governs data protection; the Copyright Act 1968, addressing intellectual property; and the Criminal Code Act 1995, which tackles cybercrimes.

Additionally, economy-wide frameworks like the Corporations Act 2001, Fair Work Act 2009, and Competition and Consumer Act 2010 apply indirectly, influencing AI use through corporate, employment, and consumer protection standards.

However, these laws were not designed specifically for AI, and the resulting gaps mean current legal frameworks are often seen as inadequate for addressing AI's unique challenges, including:

  • Accountability: Determining who is legally responsible for harm caused by AI, especially when existing laws assume human decision-making.
  • Transparency: Ensuring a clear understanding of how AI systems work so that affected persons and regulators can identify and address potential harms.
  • Regulatory Gaps: Addressing the lack of oversight at the development phase of AI, such as during the training of AI models.
  • Enforcement Limitations: Existing remedies may be insufficient or difficult to enforce in practice, particularly in cases involving complex AI systems.

Aligning with International Standards

Australia’s proposed risk-based response aims to place obligations on all actors across the AI supply chain who are in a position to effectively prevent harm. This pre-market approach aligns Australia’s regulatory efforts with those of other jurisdictions, such as the European Union, Canada, and the United Kingdom, which are signatories to the multilateral Bletchley Declaration on AI safety.

Additionally, it builds on commitments from international initiatives like the Hiroshima AI Process Code of Conduct and the Frontier AI Safety Commitments. The pre-market approach refers to a regulatory model where AI systems are required to undergo risk assessments, testing, and certification before being deployed in the market. This approach ensures that high-risk AI applications meet mandatory safety and ethical standards before their use, helping to mitigate potential harms before they reach the public.

Defining High-Risk AI in Australia

In its effort to regulate AI, the Australian Government has proposed a two-tiered framework to identify "high-risk" AI systems. This approach will determine when mandatory guardrails should be applied to AI technologies, ensuring they are used safely and responsibly.

Two Categories of High-Risk AI

  1. AI with Known or Foreseeable Uses:
    • This category covers AI systems and general-purpose AI (GPAI) models whose potential applications are known or can be reasonably predicted. The paper defines a GPAI model as one capable of being used, or adapted for use, for a variety of purposes, both directly and through integration into other systems. The focus here is on regulating the use or application of the technology.
    • The risk is evaluated based on the context in which the AI system will be used or the anticipated applications of the GPAI model. Organizations must assess whether a specific use is "high-risk" using a set of proposed guiding principles.
  2. Advanced, Highly-Capable GPAI Models:
    • The second category targets advanced GPAI models with capabilities that may lead to unpredictable applications and risks.
    • These models are considered high-risk because their capabilities could be misused across a wide range of purposes, giving rise to emergent threats; by the time such risks are identified, it may be too late to implement effective preventative measures.

Guiding Principles for High-Risk AI

When determining whether an AI system's use is high-risk, organizations should consider the following principles (the sketch after this list illustrates one way to apply them):

  1. Human Rights: Evaluate the risk of adverse impacts on individual rights recognized under Australian human rights law and international obligations. This includes risks of discrimination based on age, disability, race, or sex.
  2. Health and Safety: Assess the potential adverse impacts on an individual's physical or mental health and safety. AI products that could harm health or safety should always be considered high-risk.
  3. Legal Effects: Consider the risk of adverse legal effects, such as defamation or significant impacts on an individual's legal rights. This is particularly important for AI systems affecting essential services like housing, finance, or legal proceedings.
  4. Group Impacts: Acknowledge the risk of adverse impacts on groups of individuals or the collective rights of cultural groups, particularly marginalized communities like First Nations people.
  5. Broader Societal Impact: Analyze the potential negative consequences for the Australian economy, society, environment, and rule of law.
  6. Severity and Extent: Assess the severity and extent of the adverse impacts identified under the preceding principles, considering factors such as the groups affected, the scale of harm, and the effectiveness of mitigation measures.
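
To illustrate how an organization might operationalize these principles in an internal triage tool, the following is a minimal sketch in Python. The principle list, scoring scale, and threshold are illustrative assumptions; the proposals paper does not prescribe any particular scoring scheme.

```python
from dataclasses import dataclass, field

# Hypothetical triage sketch: the first five guiding principles become
# yes/no screening flags, and principle six (severity and extent) becomes
# a rating. Thresholds are illustrative assumptions only.

PRINCIPLES = [
    "adverse impact on human rights (e.g., discrimination)",
    "adverse impact on physical or mental health and safety",
    "adverse legal effects (e.g., housing, finance, legal proceedings)",
    "adverse impact on groups or collective cultural rights",
    "adverse impact on the economy, society, environment, or rule of law",
]

@dataclass
class UseCaseAssessment:
    name: str
    # One flag per principle above, set during internal review.
    adverse_impacts: list = field(default_factory=lambda: [False] * len(PRINCIPLES))
    severity: int = 0  # 0-3 rating of the severity and extent of the impacts

    def is_high_risk(self) -> bool:
        # Illustrative rule: any flagged adverse impact combined with a
        # non-trivial severity/extent rating marks the use as high-risk.
        return any(self.adverse_impacts) and self.severity >= 2

# Usage: an automated loan-approval system with significant legal effects.
assessment = UseCaseAssessment(name="automated loan approval")
assessment.adverse_impacts[2] = True
assessment.severity = 3
print(assessment.name, "high-risk:", assessment.is_high_risk())
```

In practice, the outcome of such a checklist would feed into the documented risk management process described under Guardrail 2 below.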

Applying Guardrails Across the AI Supply Chain and Lifecycle

The Australian Government proposes to enforce mandatory guardrails for AI systems by allocating responsibilities to both developers and deployers. Given the complex AI supply chain, where multiple organizations may handle different stages of a system’s development and deployment, clear definitions and obligations are essential. Developers will be primarily responsible for implementing guardrails during the design and training phases, while deployers will manage risks during the operational phase. Special attention will be given to defining roles and handling open-source models to ensure effective regulation and accountability.

Proposed Guardrails

1. Guardrail 1: Establish, Implement, and Publish an Accountability Process

Organizations developing or deploying high-risk AI systems must create and publicly share an accountability process. This should include governance policies, roles, responsibilities, and regulatory compliance strategies, aligning with frameworks like Canada's Artificial Intelligence and Data Act (AIDA) and the EU AI Act.

2. Guardrail 2: Implement a Risk Management Process

Organizations must establish comprehensive risk management processes to identify and mitigate risks associated with high-risk AI systems. This involves assessing impacts, applying mitigation measures, and adapting strategies based on AI risk management standards.

3. Guardrail 3: Protect AI Systems and Implement Data Governance

Organizations must enforce robust data governance and cybersecurity measures to ensure data quality, legality, and security. This includes addressing biases, securing data from unauthorized access, and complying with relevant data protection and copyright laws.

4. Guardrail 4: Test and Monitor AI Models

Organizations must rigorously test AI models before deployment and continuously monitor them to ensure they perform as expected and manage risks effectively. Testing methods and metrics should align with the AI system’s intended use, and ongoing monitoring is crucial for detecting unintended consequences and performance changes.
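
As a concrete illustration of the testing-then-monitoring cycle this guardrail describes, the sketch below compares a model's live accuracy against its pre-deployment baseline and flags degradation. The metric (accuracy) and the drift tolerance are assumptions for illustration; the proposals paper leaves the choice of testing methods and metrics to the system's intended use.

```python
# Illustrative sketch: establish a pre-deployment baseline on a held-out
# test set, then periodically recompute the same metric on fresh labeled
# samples and flag drift beyond a tolerance. Metric and tolerance are
# assumptions, not requirements from the proposals paper.

def evaluate_accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def drift_detected(baseline_accuracy, live_accuracy, tolerance=0.05):
    """True if live performance has degraded beyond the tolerance."""
    return (baseline_accuracy - live_accuracy) > tolerance

# Pre-deployment testing: baseline accuracy on a held-out test set.
baseline = evaluate_accuracy(predictions=[1, 0, 1, 1], labels=[1, 0, 1, 0])

# Ongoing monitoring: recompute on a fresh, labeled operational sample.
live = evaluate_accuracy(predictions=[1, 1, 0, 0], labels=[1, 0, 1, 0])

if drift_detected(baseline, live):
    print("Performance drift detected: escalate under the risk process")
```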

5. Guardrail 5: Enable Human Control and Intervention

Organizations must design AI systems to ensure meaningful human oversight, allowing for effective understanding, monitoring, and intervention when necessary. This ensures that humans can manage and address risks during the deployment and operational phases of the AI system.

6. Guardrail 6: Inform End-Users about AI Interactions

Organizations must clearly inform end-users about AI’s role in decision-making, interactions, and AI-generated content. Transparency is key to fostering public trust and ensuring users can recognize and understand the impact of AI.

7. Guardrail 7: Establish Processes for Challenging AI Outcomes

Organizations must create processes that allow individuals affected by high-risk AI systems to contest decisions or file complaints. This includes establishing complaint-handling functions and providing information necessary for meaningful contestation and redress.

8. Guardrail 8: Ensure Transparency Across the AI Supply Chain

Organizations must share critical information about high-risk AI systems with other involved parties to facilitate effective risk management. This includes details on data sources, design decisions, and system limitations, with transparency helping to address risks and maintain accountability across the supply chain.

9. Guardrail 9: Maintain Comprehensive Records for Compliance Assessment

Organizations must keep detailed records about high-risk AI systems, including design specifications, data provenance, and risk management processes. These records must be available to relevant authorities for compliance verification and external scrutiny.
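
One way to keep such records auditable is a machine-readable schema. The sketch below is a hypothetical example; the field names and serialization format are assumptions, as the proposals paper specifies what to record but not how to store it.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Hypothetical record schema covering the kinds of information Guardrail 9
# calls for (design specifications, data provenance, risk management
# references). Field names and format are illustrative assumptions.

@dataclass
class AISystemRecord:
    system_name: str
    version: str
    design_specification: str    # link to or summary of design decisions
    training_data_sources: list  # provenance of the datasets used
    risk_assessments: list       # references to completed risk assessments
    last_conformity_review: date

    def to_audit_json(self) -> str:
        """Serialize the record for handover to a relevant authority."""
        record = asdict(self)
        record["last_conformity_review"] = self.last_conformity_review.isoformat()
        return json.dumps(record, indent=2)

# Usage example with placeholder values.
record = AISystemRecord(
    system_name="resume-screening-model",
    version="2.1.0",
    design_specification="docs/design/v2.1.0.md",
    training_data_sources=["internal HR dataset, 2020-2023"],
    risk_assessments=["RA-2024-014"],
    last_conformity_review=date(2024, 9, 1),
)
print(record.to_audit_json())
```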

10. Guardrail 10: Conduct Conformity Assessments for Compliance

Organizations must perform conformity assessments to verify adherence to guardrails before deploying high-risk AI systems. These assessments should be repeated periodically and whenever significant changes occur, ensuring ongoing compliance and accountability.

Regulatory Options for Mandating AI Guardrails

The current regulatory landscape may not fully address the unique challenges posed by AI, necessitating a review and potential overhaul of existing laws. The proposals paper's final section explores the regulatory options available to the Australian Government for mandating the proposed guardrails. These options range from adapting existing frameworks on a sector-by-sector basis to introducing entirely new legislation. Three primary options are considered:

Option 1: Domain-Specific Approach

The domain-specific approach involves incorporating AI guardrails into existing sector-specific regulations. This method uses Australia’s current regulatory framework, minimizing disruption and leveraging familiar laws to address AI risks.

Advantages:

  • It avoids duplicative regulations, reduces compliance burdens within single sectors, and allows gradual implementation.
  • Regulations can be tailored to specific sectoral risks, enhancing relevance.

Limitations:

  • This approach may lead to inconsistencies and gaps between sectors, resulting in uneven regulation.
  • The incremental process might be slower and more complex due to the need for multiple legislative updates.
  • Strong coordination and potentially additional reforms may be required to address these issues effectively.

Option 2: Framework Approach

The framework approach involves creating a new piece of legislation that sets out the fundamental principles and guardrails for AI regulation. This framework would define the key concepts and requirements for high-risk AI systems, which would then be implemented through amendments to existing laws.

Advantages:

  • Consistency: Provides a unified set of definitions and principles for AI regulation, promoting a coherent approach across various sectors.
  • Flexibility: Allows existing laws to integrate new standards without requiring a complete overhaul of the regulatory system.
  • International Alignment: Facilitates alignment with global standards, aiding Australian companies in international AI supply chains.
  • Regulatory Efficiency: Helps avoid regulatory gaps by addressing issues across multiple frameworks, reducing the risk of regulatory arbitrage.

Limitations:

  • Scope Limitations: The effectiveness depends on the existing laws it amends, which might not cover all AI-related issues.
  • Coordination Challenges: Requires extensive coordination among agencies, which could slow down implementation and create gaps.
  • Incremental Change: This may result in a slower process compared to a comprehensive new Act, as it relies on modifying existing legislation.

Option 3: Whole-of-Economy Approach

Introducing a new AI-specific Act would create a comprehensive legislative framework for regulating AI. This Act would define high-risk AI applications, establish mandatory guardrails, and set up a dedicated monitoring and enforcement regime overseen by an independent AI regulator. The approach would address gaps in existing regulations and ensure consistency across sectors, modeled after the Canadian AIDA. Unlike Option 2, which involves adapting existing regulatory frameworks through framework legislation, Option 3 introduces a stand-alone AI-specific Act with enforceable provisions and a dedicated regulator, offering a more centralized and consistent approach to AI regulation.

Advantages:

  • Consistency: Provides clear, uniform definitions and guardrails for AI, avoiding gaps and overlaps in existing regulations.
  • Comprehensive Coverage: Extends regulatory obligations to developers and ensures consistent enforcement across the economy.
  • Regulatory Efficiency: Streamlines AI regulation into a single framework rather than amending multiple laws.
  • International Alignment: Facilitates compatibility with international standards, enhancing Australian companies' global integration.
  • Dedicated Oversight: An independent AI regulator can develop and share specialized expertise.

Limitations:

  • Complexity: This may introduce additional complexity and potential duplication with existing frameworks, requiring careful legislative design.
  • Coordination Challenges: Potential difficulties in aligning the new Act with existing regulations, necessitating robust coordination efforts.
  • Regulator Establishment: This may require setting up a new regulator or expanding an existing one, which involves time and resources.
