On September 5, 2024, Australia's Department of Industry, Science and Resources (DISR) opened a public consultation on its proposed mandatory guardrails for the safe use of AI in high-risk settings.
While Australia has made progress with interim measures, such as the Government's interim response to the Safe and Responsible AI in Australia consultation in early 2024, significant regulatory gaps remain, particularly around the upstream development and deployment of AI technologies. "Upstream" refers to the early phases of AI development and use, before the technology is fully integrated into goods or services used in high-risk industries; it covers the design, training, and testing phases, where critical safety decisions are made. These gaps concern accountability, transparency, and enforcement, leaving high-risk AI applications insufficiently governed.
The interim measures emphasized the need for voluntary guardrails but fell short of addressing the specific safety issues that arise in high-risk AI settings. The newly proposed mandatory guardrails directly target these gaps, aiming to establish clearer regulatory frameworks, improve accountability for AI developers, and provide stronger enforcement mechanisms.
Australia's current regulatory landscape for AI consists of laws that vary in scope and application across different sectors. For example, the Therapeutic Goods Administration (TGA) has issued guidance for regulating software-based medical devices, including AI-driven tools like large language models.
The eSafety Commissioner, under the Online Safety Act 2021, regulates AI-enabled internet services, with a focus on curbing illegal and harmful content through the Search Engine Services Code and the Designated Internet Services industry standard. These codes place obligations on AI services to manage risks like child exploitation and pro-terror content. Other relevant laws include the Privacy Act 1988, which governs data protection; the Copyright Act 1968, which addresses intellectual property; and the Criminal Code Act 1995, which tackles cybercrime.
Additionally, economy-wide frameworks like the Corporations Act 2001, Fair Work Act 2009, and Competition and Consumer Act 2010 apply indirectly, influencing AI use through corporate, employment, and consumer protection standards.
However, these laws were not designed specifically for AI, leaving regulatory gaps around AI's distinct risks. As a result, current legal frameworks are often seen as inadequate for addressing AI's unique challenges, including:
- Accountability: Determining who is legally responsible for harm caused by AI, especially when existing laws assume human decision-making.
- Transparency: Ensuring a clear understanding of how AI systems work so that affected persons and regulators can identify and address potential harms.
- Regulatory Gaps: Addressing the lack of oversight at the development phase of AI, such as during the training of AI models.
- Enforcement Limitations: Existing remedies may be insufficient or difficult to enforce in practice, particularly in cases involving complex AI systems.
Aligning with International Standards
Australia’s proposed risk-based response aims to place obligations on all actors across the AI supply chain who are in a position to effectively prevent harm. This pre-market approach aligns Australia’s regulatory efforts with those of other jurisdictions, such as the European Union, Canada, and the United Kingdom, which are signatories to the multilateral Bletchley Declaration on AI safety.
Additionally, it builds on commitments from international initiatives like the Hiroshima AI Process Code of Conduct and the Frontier AI Safety Commitments. The pre-market approach refers to a regulatory model where AI systems are required to undergo risk assessments, testing, and certification before being deployed in the market. This approach ensures that high-risk AI applications meet mandatory safety and ethical standards before their use, helping to mitigate potential harms before they reach the public.
Defining High-Risk AI in Australia
In its effort to regulate AI, the Australian Government has proposed a two-tiered framework to identify "high-risk" AI systems. This approach will determine when mandatory guardrails should be applied to AI technologies, ensuring they are used safely and responsibly.
Two Categories of High-Risk AI
- AI with Known or Foreseeable Uses:
- This category covers AI systems and general-purpose AI (GPAI) models, which the paper defines as AI models capable of being used, or adapted for use, for a variety of purposes, whether directly or through integration into other systems, where the potential applications are known or can be reasonably predicted. The focus here is on regulating the use or application of the technology.
- The risk is evaluated based on the context in which the AI system will be used or the anticipated applications of the GPAI model. Organizations must assess whether a specific use is "high-risk" using a set of proposed guiding principles.
- Advanced, Highly-Capable GPAI Models:
- The second category targets advanced GPAI models with capabilities that may lead to unpredictable applications and risks.
- These models are considered high-risk because their emergent capabilities could be misused in ways that cannot be anticipated in advance. By the time such risks are identified, it might be too late to implement effective preventative measures.
Guiding Principles for High-Risk AI
When determining if an AI system's use is high-risk, organizations should consider the following:
- Human Rights: Evaluate the risk of adverse impacts on individual rights recognized under Australian human rights law and international obligations. This includes risks of discrimination based on age, disability, race, or sex.
- Health and Safety: Assess the potential adverse impacts on an individual's physical or mental health and safety. AI products that could harm health or safety should always be considered high-risk.
- Legal Effects: Consider the risk of adverse legal effects, such as defamation or significant impacts on an individual's legal rights. This is particularly important for AI systems affecting essential services like housing, finance, or legal proceedings.
- Group Impacts: Acknowledge the risk of adverse impacts on groups of individuals or the collective rights of cultural groups, particularly marginalized communities like First Nations people.
- Broader Societal Impact: Analyze the potential negative consequences for the Australian economy, society, environment, and rule of law. Consider the severity and extent of these adverse impacts.
- Severity and Extent: Across all of the above, assess the severity and extent of the adverse impacts, considering factors such as the groups affected, the scale of harm, and the effectiveness of available mitigation measures. (A minimal screening sketch applying these principles follows this list.)
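To make the screening concrete, here is a minimal sketch of how an organization might record these principles when triaging a proposed AI use case. The class, field names, and severity thresholds are hypothetical illustrations of ours, not a checklist prescribed by the proposals paper.

```python
# Illustrative only: a hypothetical pre-screening checklist that mirrors the
# proposed guiding principles. Field names and thresholds are assumptions,
# not requirements from the proposals paper.
from dataclasses import dataclass, field

@dataclass
class UseCaseAssessment:
    """Answers an organization might record for a single intended use of an AI system."""
    adverse_human_rights_impact: bool   # e.g. discrimination by age, disability, race, or sex
    adverse_health_or_safety_impact: bool
    adverse_legal_effects: bool         # e.g. defamation, or effects on housing, finance, legal proceedings
    adverse_group_impact: bool          # impacts on groups or collective rights, incl. First Nations people
    adverse_societal_impact: bool       # economy, society, environment, rule of law
    severity: str = "low"               # "low" | "moderate" | "severe"
    notes: list[str] = field(default_factory=list)

def is_high_risk(a: UseCaseAssessment) -> bool:
    """Flag a use case for the full mandatory-guardrail process.

    The paper treats health and safety harms as always high-risk; weighing the
    other principles against severity, as done here, is our own assumption.
    """
    if a.adverse_health_or_safety_impact:
        return True
    other_impacts = (
        a.adverse_human_rights_impact
        or a.adverse_legal_effects
        or a.adverse_group_impact
        or a.adverse_societal_impact
    )
    return other_impacts and a.severity in {"moderate", "severe"}

# Example: a resume-screening tool with a moderate discrimination risk
assessment = UseCaseAssessment(
    adverse_human_rights_impact=True,
    adverse_health_or_safety_impact=False,
    adverse_legal_effects=False,
    adverse_group_impact=True,
    adverse_societal_impact=False,
    severity="moderate",
)
print(is_high_risk(assessment))  # True -> the mandatory guardrails would apply
```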
Applying Guardrails Across the AI Supply Chain and Lifecycle
The Australian Government proposes to enforce mandatory guardrails for AI systems by allocating responsibilities to both developers and deployers. Given the complex AI supply chain, where multiple organizations may handle different stages of a system’s development and deployment, clear definitions and obligations are essential. Developers will be primarily responsible for implementing guardrails during the design and training phases, while deployers will manage risks during the operational phase. Special attention will be given to defining roles and handling open-source models to ensure effective regulation and accountability.
Proposed Guardrails
1. Guardrail 1: Establish, Implement, and Publish an Accountability Process
Organizations developing or deploying high-risk AI systems must create and publicly share an accountability process. This should include governance policies, roles, responsibilities, and regulatory compliance strategies, aligning with frameworks like Canada's Artificial Intelligence and Data Act (AIDA) and the EU AI Act.
2. Guardrail 2: Implement a Risk Management Process
Organizations must establish comprehensive risk management processes to identify and mitigate risks associated with high-risk AI systems. This involves assessing impacts, applying mitigation measures, and adapting strategies based on AI risk management standards.
3. Guardrail 3: Protect AI Systems and Implement Data Governance
Organizations must enforce robust data governance and cybersecurity measures to ensure data quality, legality, and security. This includes addressing biases, securing data from unauthorized access, and complying with relevant data protection and copyright laws.
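As one illustration of what a data governance step might look like in practice, the following minimal sketch checks a training dataset for missing values and under-represented groups before it is accepted. The column names (`group`, `label`) and the 5% representation threshold are assumptions of ours, not figures from the paper.

```python
# Illustrative only: a simple pre-training data check. Column names and the
# 5% representation threshold are assumptions, not prescribed requirements.
import pandas as pd

def data_governance_report(df: pd.DataFrame, group_col: str = "group",
                           min_group_share: float = 0.05) -> dict:
    """Summarize data-quality signals a developer might record before training."""
    issues = []

    # Missing values undermine data quality and downstream reliability.
    for col, share in df.isna().mean().items():
        if share > 0:
            issues.append(f"column '{col}' has {share:.1%} missing values")

    # Under-represented groups are a common source of biased model behavior.
    for group, share in df[group_col].value_counts(normalize=True).items():
        if share < min_group_share:
            issues.append(f"group '{group}' makes up only {share:.1%} of the data")

    return {"rows": len(df), "issues": issues, "accepted": not issues}

# Example with a small, deliberately skewed dataset
df = pd.DataFrame({
    "group": ["A"] * 96 + ["B"] * 4,
    "feature": range(100),
    "label": [0, 1] * 50,
})
print(data_governance_report(df))
```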
4. Guardrail 4: Test and Monitor AI Models
Organizations must rigorously test AI models before deployment and continuously monitor them to ensure they perform as expected and manage risks effectively. Testing methods and metrics should align with the AI system’s intended use, and ongoing monitoring is crucial for detecting unintended consequences and performance changes.
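To show how pre-deployment testing and ongoing monitoring might connect in practice, here is a minimal sketch assuming a binary classifier whose agreed metric is accuracy. The pass threshold and the drift-alert margin are assumptions chosen for illustration, not figures from the paper.

```python
# Illustrative only: accuracy stands in for whatever metrics match the
# system's intended use; the thresholds below are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def pre_deployment_test(model, X_test, y_test, required_accuracy: float = 0.80) -> float:
    """Gate deployment on an agreed metric measured on held-out data."""
    accuracy = accuracy_score(y_test, model.predict(X_test))
    if accuracy < required_accuracy:
        raise RuntimeError(f"accuracy {accuracy:.2%} below required {required_accuracy:.0%}")
    return accuracy

def monitor_batch(model, X_batch, y_batch, baseline_accuracy: float,
                  max_drop: float = 0.05) -> bool:
    """Compare live performance against the pre-deployment baseline and flag drift."""
    accuracy = accuracy_score(y_batch, model.predict(X_batch))
    drifted = accuracy < baseline_accuracy - max_drop
    if drifted:
        print(f"ALERT: accuracy fell to {accuracy:.2%} (baseline {baseline_accuracy:.2%})")
    return drifted

# Example: train a simple model, gate it on held-out data, then monitor a batch.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

baseline = pre_deployment_test(model, X_test, y_test)
monitor_batch(model, X_test, y_test, baseline_accuracy=baseline)
```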
5. Guardrail 5: Enable Human Control and Intervention
Organizations must design AI systems to ensure meaningful human oversight, allowing for effective understanding, monitoring, and intervention when necessary. This ensures that humans can manage and address risks during the deployment and operational phases of the AI system.
6. Guardrail 6: Inform End-Users About AI-Enabled Decisions, Interactions, and Content
Organizations must clearly inform end-users about AI's role in decision-making, their interactions with AI, and AI-generated content. Transparency is key to fostering public trust and ensuring users can recognize and understand the impact of AI.
7. Guardrail 7: Establish Processes for Challenging AI Outcomes
Organizations must create processes that allow individuals affected by high-risk AI systems to contest decisions or file complaints. This includes establishing complaint-handling functions and providing information necessary for meaningful contestation and redress.
8. Guardrail 8: Ensure Transparency Across the AI Supply Chain
Organizations must share critical information about high-risk AI systems with other involved parties to facilitate effective risk management. This includes details on data sources, design decisions, and system limitations, with transparency helping to address risks and maintain accountability across the supply chain.
9. Guardrail 9: Maintain Comprehensive Records for Compliance Assessment
Organizations must keep detailed records about high-risk AI systems, including design specifications, data provenance, and risk management processes. These records must be available to relevant authorities for compliance verification and external scrutiny.
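As an illustration of what such records might contain, the following minimal sketch serializes a hypothetical compliance record to JSON for later review by an authority. The field names loosely follow the items listed above and are our own, not a schema prescribed by the proposals paper.

```python
# Illustrative only: field names are assumptions based on the items listed in
# Guardrail 9, not a prescribed schema.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ComplianceRecord:
    system_name: str
    version: str
    intended_use: str
    design_specifications: str
    data_provenance: list[str]
    risk_management_summary: str
    known_limitations: list[str] = field(default_factory=list)

record = ComplianceRecord(
    system_name="loan-triage-model",
    version="2.3.0",
    intended_use="Prioritizing loan applications for human review",
    design_specifications="Gradient-boosted trees over tabular application features",
    data_provenance=["internal applications 2019-2023", "licensed credit bureau data"],
    risk_management_summary="Quarterly bias audit; human review of all declined applications",
    known_limitations=["Not validated for applicants under 21"],
)

# Persist to a file that a regulator or auditor could later request.
with open("compliance_record.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```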
10. Guardrail 10: Undertake Conformity Assessments
Organizations must perform conformity assessments to verify adherence to the guardrails before deploying high-risk AI systems. These assessments should be repeated periodically and whenever significant changes occur, ensuring ongoing compliance and accountability.
Regulatory Options for Mandating AI Guardrails
The current regulatory landscape may not fully address the unique challenges posed by AI, necessitating a review and potential overhaul of existing laws. The proposals paper's final section explores the regulatory options available to the Australian Government for mandating the proposed guardrails. These options range from adapting existing frameworks on a sector-by-sector basis to introducing entirely new legislation. Specifically, three primary options are considered:
Option 1: Domain-Specific Approach
The domain-specific approach involves incorporating AI guardrails into existing sector-specific regulations. This method uses Australia’s current regulatory framework, minimizing disruption and leveraging familiar laws to address AI risks.
Advantages:
- It avoids duplicative regulations, reduces compliance burdens within single sectors, and allows gradual implementation.
- Regulations can be tailored to specific sectoral risks, enhancing relevance.
Limitations:
- This approach may lead to inconsistencies and gaps between sectors, resulting in uneven regulation.
- The incremental process might be slower and more complex due to the need for multiple legislative updates.
- Strong coordination and potentially additional reforms may be required to address these issues effectively.
Option 2: Framework Approach
The framework approach involves creating a new piece of legislation that sets out the fundamental principles and guardrails for AI regulation. This framework would define the key concepts and requirements for high-risk AI systems, which would then be implemented through amendments to existing laws.
Advantages:
- Consistency: Provides a unified set of definitions and principles for AI regulation, promoting a coherent approach across various sectors.
- Flexibility: Allows existing laws to integrate new standards without requiring a complete overhaul of the regulatory system.
- International Alignment: Facilitates alignment with global standards, aiding Australian companies in international AI supply chains.
- Regulatory Efficiency: Helps avoid regulatory gaps by addressing issues across multiple frameworks, reducing the risk of regulatory arbitrage.
Limitations:
- Scope Limitations: The effectiveness depends on the existing laws it amends, which might not cover all AI-related issues.
- Coordination Challenges: Requires extensive coordination among agencies, which could slow down implementation and create gaps.
- Incremental Change: This may result in a slower process compared to a comprehensive new Act, as it relies on modifying existing legislation.
Option 3: Whole-of-Economy Approach
Introducing a new AI-specific Act would create a comprehensive legislative framework for regulating AI. This Act would define high-risk AI applications, establish mandatory guardrails, and set up a dedicated monitoring and enforcement regime overseen by an independent AI regulator. The approach would address gaps in existing regulations and ensure consistency across sectors, modeled after the Canadian AIDA. Unlike Option 2, which involves adapting existing regulatory frameworks through framework legislation, Option 3 introduces a stand-alone AI-specific Act with enforceable provisions and a dedicated regulator, offering a more centralized and consistent approach to AI regulation.
Advantages:
- Consistency: Provides clear, uniform definitions and guardrails for AI, avoiding gaps and overlaps in existing regulations.
- Comprehensive Coverage: Extends regulatory obligations to developers and ensures consistent enforcement across the economy.
- Regulatory Efficiency: Streamlines AI regulation into a single framework rather than amending multiple laws.
- International Alignment: Facilitates compatibility with international standards, enhancing Australian companies' global integration.
- Dedicated Oversight: An independent AI regulator can develop and share specialized expertise.
Limitations:
- Complexity: This may introduce additional complexity and potential duplication with existing frameworks, requiring careful legislative design.
- Coordination Challenges: Potential difficulties in aligning the new Act with existing regulations, necessitating robust coordination efforts.
- Regulator Establishment: This may require setting up a new regulator or expanding an existing one, which involves time and resources.