Overview
On 17 January 2024, the Department of Industry, Science and Resources (DISR) published the Australian Government's interim response to its consultation on the discussion paper 'Safe and Responsible AI in Australia.' The discussion paper, issued on 1 June 2023, sought views on how the Australian Government could mitigate the potential risks of artificial intelligence (AI) and support safe and responsible AI practices. The response outlines the feedback received from stakeholders and sets out the Government's strategy to ensure the safe development of AI.
The Australian Government’s Interim Response
From 1 June to 4 August 2023, the government engaged in extensive consultation, seeking input from a diverse range of stakeholders, including the public, advocacy groups, academia, industry, law firms, and government agencies. While the submissions to the consultation (“Submissions”) expressed enthusiasm about AI's potential benefits in areas such as healthcare, education, and productivity, they also raised concerns about potential harms throughout the AI lifecycle.
Examples included violations of intellectual property laws during data collection, biases impacting model outputs, environmental impacts during training, and competition issues affecting consumers. Notably, these Submissions emphasized the inadequacy of current regulatory frameworks in addressing AI risks, leading to a consensus on the necessity of regulatory guardrails, especially for high-risk AI applications.
Key Takeaways from the Interim Response
The government, having initiated a dialogue with the Australian community through a discussion paper, is committed to furthering this conversation on effectively leveraging AI opportunities while addressing associated risks. The initial analysis, encompassing Submissions and global discussions like the AI Safety Summit, has highlighted the following key insights:
- Acknowledging AI's positive impact on job creation and industry growth.
- Recognizing that not all AI applications necessitate regulatory responses, the government emphasizes the need to ensure unimpeded use of low-risk AI. Simultaneously, it acknowledges that the existing regulatory framework falls short, especially in addressing risks posed by high-risk AI applications in legitimate settings and frontier models.
- Existing laws are deemed insufficient to prevent AI-induced harms before they occur, and responses to harms after they occur also need strengthening. The unique speed and scale at which AI systems operate can worsen harms, sometimes making them irreversible. This prompts consideration of a tailored, AI-specific response.
- The government contemplates introducing mandatory obligations for those developing or using high-risk AI systems to ensure safety and emphasizes international collaboration to establish safety standards, acknowledging the integration of overseas-developed models in Australia.
The Australian government aims for safe AI development in high-risk settings and encourages AI use in low-risk settings. Immediate focus includes evaluating mandatory safeguards, considering implementation through existing laws or innovative approaches, and committing to close consultation with industry, academia, and the community.
Principles Guiding the Government’s Interim Response to Support Safe and Responsible AI
The Australian Government has committed to five principles to guide its interim response:
- Risk-Based Approach: Adopting a risk-based framework to facilitate the safe use of AI, tailoring obligations on developers and deployers based on the assessed level of risk associated with AI use, deployment, or development.
- Balanced and Proportionate: Avoiding unnecessary or disproportionate burdens on businesses, the community, and regulators. The government will balance the need for innovation and competition with the need to protect community interests, including privacy, security, and public and online safety.
- Collaborative and Transparent: Emphasizing openness, the government will actively engage with experts nationwide to shape its approach to safe and responsible AI use. Public involvement and technical expertise will be sought, ensuring clear government actions that empower AI developers, implementers, and users with knowledge of their rights and protections.
- Trusted International Partner: Acting consistently with the Bletchley Declaration, the government will leverage its strong foundations and domestic capabilities to support global action to address AI risks.
- Community First: Placing people and communities at the core, the government will prioritize the development and implementation of regulatory approaches that align with the needs, abilities, and social context of all individuals.
Next Steps for the Australian Government in AI
In line with the Australian Government’s overall objective to maximize the opportunities that AI presents for our economy and society, the proposed next steps relate to the following:
a. Preventing Harms
In response to concerns, the government aims to further explore regulatory guardrails focused on testing, transparency, and accountability to prevent AI-related harms. This includes:
- Testing: Internal and external testing, sharing safety best practices, ongoing auditing, and cybersecurity measures.
- Transparency: User awareness of AI system use, public reporting on limitations and capabilities, and disclosure of data processing details.
- Accountability: Designated roles for AI safety and mandatory training for developers, particularly in high-risk settings.
Further work includes defining 'high risk' and aligning with existing government initiatives. To complement these future regulatory considerations, the government will take the following immediate steps:
- AI Safety Standard: The National AI Centre will collaborate with industry to develop a voluntary AI Safety Standard, simplifying responsible AI adoption for businesses.
- Watermarking Consideration: DISR will engage with industry stakeholders to evaluate the potential benefits of voluntary watermarking or similar data provenance mechanisms, particularly in high-risk AI settings.
- Expert Advisory Group: Recognizing the need for expert input, an interim advisory group will support the government in developing options for AI guardrails. Future considerations may include a permanent advisory body.
Following this, the next steps include consulting on new mandatory guardrails, developing a voluntary AI Safety Standard, and exploring voluntary labeling for AI-generated content.
b. Clarifying and Strengthening Laws
To address concerns raised during consultations, substantial efforts are underway across the government to clarify and fortify laws, ensuring the protection of citizens. Key initiatives include:
- Developing new laws empowering the Australian Communications and Media Authority to combat online misinformation and disinformation.
- Statutory review of the Online Safety Act 2021 to adapt to evolving online harms.
- Collaborating with state and territory governments, industry, and the research community to establish a regulatory framework for automated vehicles in Australia, incorporating work health and safety laws.
- Undertaking research and consultation to address the implications of AI on copyright and broader intellectual property law.
- Implementing privacy law reforms to enhance protections in the context of AI applications.
- Strengthening Australia’s competition and consumer laws to tackle issues arising from digital platforms.
- Establishing an Australian Framework for Generative AI in Schools, developed with education ministers, to guide the responsible and ethical use of generative AI tools while ensuring privacy, security, and safety.
- Ensuring the security of AI tools through principles like security by design, under the Cyber Security Strategy.
c. International Collaboration
Australia is closely monitoring how other countries are responding to the challenges of AI, including initial efforts in the EU, the US, and Canada. Building on its engagement at the UK AI Safety Summit in November 2023, the Government will continue to work with other countries to shape international efforts in this area. The Interim Response indicates, however, that any new laws would need to be tailored to Australia. The Australian government will take the following actions:
- The Australian Government, aligning with the Bletchley Declaration, commits to supporting the development of a State of the Science report.
- Ongoing international engagement aims to shape global AI governance and promote safe and responsible AI deployment.
- Efforts are underway to enhance Australian participation in key international forums developing AI standards.
- A continuous dialogue with international partners ensures alignment and interoperability with Australia's domestic responses to AI risks.
d. Maximizing AI Benefits
In the 2023–24 Budget, the Australian government allocated $75.7 million for AI initiatives, emphasizing the following key areas:
- AI Adopt Program ($17 million): Creating centers to assist SMEs in making informed decisions on leveraging AI for business enhancement.
- National AI Centre Expansion ($21.6 million): Extending the center's scope for vital research and leadership in the AI industry.
- Next-Generation AI Graduates Programs ($34.5 million): Continuing funding to attract and train the next wave of job-ready AI specialists.
These initiatives complement substantial private investments in Australia's technology sector, particularly in AI, which reached $1.9 billion in 2022. The government is committed to exploring further opportunities for AI adoption and development, potentially including the creation of an AI Investment Plan, aligning with efforts to establish responsible AI use and build public trust.
Conclusion
The Australian Government's interim response demonstrates a commitment to fostering AI's benefits while addressing associated risks. Through a principled approach, it aims to ensure safe, responsible, and community-oriented AI development, contributing to Australia's economic growth and technological advancement. Ongoing consultations and collaboration will shape a comprehensive and effective regulatory framework for the evolving AI landscape.