Safe and Responsible AI in Australia – The Government’s Interim Response

Author

Sadaf Ayub Choudary

Data Privacy Analyst at Securiti

CIPP/US


Overview

On 17 January 2024, the Department of Industry, Science and Resources (DISR) published the Australian Government's interim response to its consultation on the discussion paper 'Safe and Responsible AI in Australia.' The discussion paper, issued on 1 June 2023, sought views on how the Australian Government could mitigate the potential risks of artificial intelligence (AI) and support safe and responsible AI practices. The interim response outlines the feedback received from stakeholders and sets out the Government's strategy for ensuring the safe development of AI.

The Australian Government’s Interim Response

From 1 June to 4 August 2023, the government engaged in extensive consultations, seeking input from diverse stakeholders including the public, advocacy groups, academia, industry, legal firms, and government agencies. While the submissions to the consultation (“Submissions”) expressed enthusiasm for AI's potential benefits in areas like healthcare, education, and productivity, they also raised concerns about potential harms throughout the AI lifecycle.

Examples included violations of intellectual property laws during data collection, biases impacting model outputs, environmental impacts during training, and competition issues affecting consumers. Notably, these Submissions emphasized the inadequacy of current regulatory frameworks in addressing AI risks, leading to a consensus on the necessity of regulatory guardrails, especially for high-risk AI applications.

Key Takeaways from the Interim Response

The government, having initiated a dialogue with the Australian community through a discussion paper, is committed to furthering this conversation on effectively leveraging AI opportunities while addressing associated risks. The initial analysis, encompassing Submissions and global discussions like the AI Safety Summit, has highlighted the following key insights:

  1. Acknowledging AI's positive impact on job creation and industry growth.
  2. Recognizing that not all AI applications necessitate regulatory responses, the government emphasizes the need to ensure unimpeded use of low-risk AI. At the same time, it acknowledges that the existing regulatory framework falls short, particularly in addressing risks posed by high-risk AI applications in otherwise legitimate settings and by frontier models.
  3. Existing laws are deemed insufficient to prevent AI-induced harms before they occur, and responses to harms after they occur also need strengthening. The unique speed and scale of AI systems can worsen harms, sometimes making them irreversible, prompting consideration of a tailored, AI-specific response.
  4. The government contemplates introducing mandatory obligations for those developing or using high-risk AI systems to ensure safety and emphasizes international collaboration to establish safety standards, acknowledging the integration of overseas-developed models in Australia.

The Australian government aims for safe AI development in high-risk settings and encourages AI use in low-risk settings. Immediate focus includes evaluating mandatory safeguards, considering implementation through existing laws or innovative approaches, and committing to close consultation with industry, academia, and the community.

Principles Guiding the Government’s Interim Response to Support Safe and Responsible AI

The Australian Government committed to five principles to guide its interim response:

  1. Risk-Based Approach: Adopting a risk-based framework to facilitate the safe use of AI, tailoring obligations on developers and deployers based on the assessed level of risk associated with AI use, deployment, or development.
  2. Balanced and Proportionate: Avoiding unnecessary or disproportionate burdens on businesses, the community, and regulators. The government will balance the need for innovation and competition with the need to protect community interests, including privacy, security, and public and online safety.
  3. Collaborative and Transparent: Emphasizing openness, the government will actively engage with experts nationwide to shape its approach to safe and responsible AI use. Public involvement and technical expertise will be sought, ensuring clear government actions that empower AI developers, implementers, and users with knowledge of their rights and protections.
  4. Trusted International Partner: Acting consistently with the Bletchley Declaration, the government will leverage its strong foundations and domestic capabilities to support global action to address AI risks.
  5. Community First: Placing people and communities at the core, the government will prioritize the development and implementation of regulatory approaches that align with the needs, abilities, and social context of all individuals.

Next Steps for the Australian Government in AI

In line with the Australian Government’s overall objective of maximizing the opportunities that AI presents for Australia's economy and society, the proposed next steps relate to the following:

a. Preventing Harms

In response to concerns, the government aims to further explore regulatory guardrails focused on testing, transparency, and accountability to prevent AI-related harms. This includes:

  • Testing: Internal and external testing, sharing safety best practices, ongoing auditing, and cybersecurity measures.
  • Transparency: User awareness of AI system use, public reporting on limitations and capabilities, and disclosure of data processing details.
  • Accountability: Designated roles for AI safety and mandatory training for developers, particularly in high-risk settings.

This includes defining 'high risk' and aligning with existing government initiatives. To complement future regulatory considerations, immediate steps involve:

  • AI Safety Standard: The National AI Centre will collaborate with industry to develop a voluntary AI Safety Standard, simplifying responsible AI adoption for businesses.
  • Watermarking Consideration: The Department of Industry will engage with industry stakeholders to evaluate the potential benefits of voluntary watermarking or similar data provenance mechanisms, particularly in high-risk AI settings.
  • Expert Advisory Group: Recognizing the need for expert input, an interim advisory group will support the government in developing options for AI guardrails. Future considerations may include a permanent advisory body.

Following this, the next steps include consulting on new mandatory guardrails, developing a voluntary AI Safety Standard, and exploring voluntary labeling for AI-generated content.

b. Clarifying and Strengthening Laws

To address concerns raised during consultations, substantial efforts are underway across the government to clarify and fortify laws, ensuring the protection of citizens. Key initiatives include:

  • Developing new laws empowering the Australian Communications and Media Authority to combat online misinformation and disinformation.
  • Statutory review of the Online Safety Act 2021 to adapt to evolving online harms.
  • Collaborating with state and territory governments, industry, and the research community to establish a regulatory framework for automated vehicles in Australia, incorporating work health and safety laws.
  • Undertaking research and consultation to address the implications of AI on copyright and broader intellectual property law.
  • Implementing privacy law reforms to enhance protections in the context of AI applications.
  • Strengthening Australia’s competition and consumer laws to tackle issues arising from digital platforms.
  • Establishing an Australian Framework for Generative AI in schools with education ministers guiding the responsible and ethical use of generative AI tools while ensuring privacy, security, and safety.
  • Ensuring the security of AI tools through principles like security by design, under the Cyber Security Strategy.

c. International Collaboration

Australia is closely monitoring how other countries are responding to the challenges of AI, including initial efforts in the EU, the US, and Canada. Building on its engagement at the UK AI Safety Summit in November, the Government will continue to work with other countries to shape international efforts in this area. The Interim Response indicates that any new laws would need to be tailored to Australia. The Australian government will take the following actions:

  • The Australian Government, aligning with the Bletchley Declaration, commits to supporting the development of a State of the Science report.
  • Ongoing international engagement aims to shape global AI governance and promote safe and responsible AI deployment.
  • Efforts to enhance Australian participation in key international forums developing AI standards are underway.
  • A continuous dialogue with international partners ensures alignment and interoperability with Australia's domestic responses to AI risks.

d. Maximizing AI Benefits

In the 2023–24 Budget, the Australian government allocated $75.7 million for AI initiatives, emphasizing the following key areas:

  • AI Adopt Program ($17 million): Creating centers to assist SMEs in making informed decisions on leveraging AI for business enhancement.
  • National AI Centre Expansion ($21.6 million): Extending the center's scope for vital research and leadership in the AI industry.
  • Next-Generation AI Graduates Programs ($34.5 million): Continuing funding to attract and train the next wave of job-ready AI specialists.

These initiatives complement substantial private investments in Australia's technology sector, particularly in AI, which reached $1.9 billion in 2022. The government is committed to exploring further opportunities for AI adoption and development, potentially including the creation of an AI Investment Plan, aligning with efforts to establish responsible AI use and build public trust.

Conclusion

The Australian Government's interim response demonstrates a commitment to fostering AI's benefits while addressing associated risks. Through a principled approach, it aims to ensure safe, responsible, and community-oriented AI development, contributing to Australia's economic growth and technological advancement. Ongoing consultations and collaboration will shape a comprehensive and effective regulatory framework for the evolving AI landscape.
