An Overview of Australia’s Framework for the Assurance of AI in Government

Contributors

Anas Baig

Product Marketing Manager at Securiti

Syeda Eimaan Gardezi

Associate Data Privacy Analyst at Securiti

Salma Khan, CIPP/Asia

Data Privacy Analyst at Securiti

Published November 17, 2024

Australia is witnessing rapid developments across its data privacy, cybersecurity, and AI landscape. As AI advances and reshapes public service delivery while enhancing social, economic, and environmental well-being, it is crucial to recognize and address the associated risks. Balancing innovation with responsibility helps ensure these advancements benefit society while mitigating potential harms.

The widespread adoption of AI poses significant risks, especially now that governments across Australia are using it. These include security, privacy, legal, and ethical risks, such as bias and unfairness. Moreover, as noted by the Department of the Prime Minister and Cabinet, public trust in AI remains low. To address these challenges, Australia has released a dedicated assurance framework.

Australia’s ‘National Framework for the Assurance of Artificial Intelligence in Government’ (the framework) provides a nationally consistent approach to assuring the use of AI in government. The framework was released on 21 June 2024 by the Data and Digital Ministers Meeting, a cross-jurisdictional group of ministers from Australia's federal, state, and territory governments. It primarily aims to standardize the public sector's use of AI systems.

With the framework's release, the Australian Government aims to bolster public confidence and trust in AI systems. The framework establishes a consistent, ethical, and lawful approach, based on Australia’s AI Ethics Principles, to guide safe and responsible AI development and deployment in the public sector.

This guide delves into the details of the framework, explaining how it establishes standards and policy expectations for the public sector. Although the framework is aimed at government, private sector organizations should also proactively align their business practices with it.

What is an AI System?

In November 2023, member countries of the Organization for Economic Co-operation and Development (OECD) approved the following revised definition of an AI system:

“A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

In simpler terms, an AI system takes inputs and produces outputs, such as predictions and recommendations, that can help individuals make decisions.

Implementing Australia’s AI Ethics Principles in Government

The national framework for AI assurance in government establishes a consistent, nationwide approach for ensuring the proper use of AI in government activities. The framework emphasizes foundational principles over technical details, enabling jurisdictions to develop specific policies and guidance tailored to their legislative, policy, and operational contexts.

An Overview of Australia’s Framework

The following practices align with Australia's eight AI Ethics Principles, which are designed to ensure AI is safe, secure, and reliable, and they show how governments can apply the principles in practice. How the practices are applied varies with jurisdictional governance and the specific use case, since different use cases present different risks and may require different levels of assurance; not every AI use case needs the detailed application of every practice to be deemed safe and responsible.

To apply the AI Ethics Principles effectively, governments should ground their assurance practices in cornerstones such as AI governance, data governance, and a risk-based approach.

1. Human, Societal and Environmental Wellbeing

AI systems should benefit individuals and the environment at every stage of the AI lifecycle. To enable this, governments should:

Document intentions

Governments should clearly define and document an AI use case's purpose, goals, and anticipated outcomes for individuals, communities, and the environment. They should assess risks, evaluate if AI is the best option, ensure a clear public benefit, and consider non-AI alternatives.

Consult with stakeholders

Governments should consult with stakeholders, including experts and impacted groups, early in the process to identify and mitigate risks effectively.

Assess impact

Governments should evaluate the potential impacts of an AI use case on individuals, communities, and the environment to ensure benefits outweigh the risks. They should manage these impacts using methods like algorithmic and stakeholder impact assessments.

2. Human-Centered Values

AI systems should uphold human rights, value diversity, and preserve individual autonomy. To enable this, governments should:

Comply with rights protections

Governments should ensure AI use complies with legal human rights protections, including legislation, international obligations, constitutions, and common law. AI use should also align with public sector, workplace, and diversity policies. Moreover, human rights impact assessments and expert advice may help identify and mitigate risks.

Incorporate diverse perspectives

Governments should engage individuals with diverse lived experiences throughout the AI use case lifecycle. This ensures informed perspectives and prevents essential factors from being overlooked. Representation should include people with disabilities, multicultural and religious communities, various socio-economic backgrounds, diverse genders and sexualities, and Aboriginal and Torres Strait Islander people.

Ensure digital inclusion

Governments should adhere to digital service and inclusion standards, considering individual users' needs, context, and experiences throughout the AI use case lifecycle. They should also support assistive technologies so that individuals with disabilities can access AI-enabled services.

3. Fairness

AI systems should prioritize inclusivity and accessibility, ensuring they do not cause or contribute to unjust discrimination against individuals, communities, or groups. To enable this, governments should:

Define fairness in context

Governments should evaluate the expected benefits, potential impacts, and vulnerabilities of impacted groups to gauge the ‘fairness’ of an AI use case.

Comply with anti-discrimination obligations

Governments should ensure AI use complies with anti-discrimination laws and guidelines for attributes such as age, disability, race, religion, sex, intersex status, gender identity, and sexual orientation. Staff should be trained to identify, report, and resolve biased AI outputs, and expert advice should be sought when necessary.

Ensure quality of data and design

Governments should maintain high-quality data and algorithmic design. Conducting audits of AI inputs and outputs for biases, utilizing data quality statements, and implementing strong data governance practices can help identify and reduce bias in AI systems.
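
As a concrete illustration of what an output audit can look like, the sketch below compares positive-outcome rates across demographic groups and flags large gaps. It is a minimal, hypothetical Python example: the field names, group labels, and the 0.2 tolerance are assumptions, and a real audit would be defined by the jurisdiction's fairness standards.

```python
# Illustrative sketch only: a minimal fairness audit that compares an AI
# system's positive-outcome rates across demographic groups. Thresholds,
# group labels, and record fields are hypothetical.
from collections import defaultdict

def positive_rate_by_group(records, group_key="group", outcome_key="approved"):
    """Return the share of positive outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += 1 if record[outcome_key] else 0
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in positive-outcome rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
rates = positive_rate_by_group(decisions)
gap = demographic_parity_gap(rates)
if gap > 0.2:  # example tolerance; a real threshold needs policy input
    print(f"Potential bias detected: parity gap {gap:.2f} across groups {rates}")
```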

4. Privacy Protection and Security

AI systems should respect and safeguard individuals' privacy rights and ensure data protection. To enable this, governments should:

Comply with privacy obligations

Governments should ensure AI use complies with laws and policies on the consent, collection, storage, use, disclosure, and retention of personal information, including informing individuals when their data is collected or used for AI training. The "privacy by design" principle should be applied, integrating privacy measures into development from the outset so that privacy is considered at every stage of a project, from planning to implementation. Privacy impact assessments (systematic evaluations of how a project may affect individuals' privacy) can help identify and address privacy risks, and expert advice should be sought where needed for effective compliance.

Minimize and protect personal information

Governments should evaluate if collecting, using, and disclosing personal information is necessary, reasonable, and proportionate for each AI use case. They should consider using privacy-enhancing technologies like synthetic data, anonymization, encryption, and secure aggregation to achieve similar outcomes while reducing privacy risks. Sensitive information should always be handled with caution.
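
To make data minimization concrete, here is a minimal sketch, assuming a hypothetical claims record, that drops fields the use case does not need, generalizes location, and pseudonymizes the direct identifier with a salted one-way hash. Real deployments would need proper key management and legal review; nothing here is a technique prescribed by the framework.

```python
# Illustrative sketch only: pseudonymizing direct identifiers before data
# is passed to an AI pipeline. Field names and the salt are hypothetical;
# real deployments need proper key management and a lawful basis.
import hashlib

SALT = b"replace-with-a-secret-salt"  # assumption: fetched from a secrets manager

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimize_record(record: dict) -> dict:
    """Keep only the fields the use case needs; pseudonymize the identifier."""
    return {
        "user_ref": pseudonymize(record["email"]),  # raw email is dropped entirely
        "postcode": record["postcode"][:3],          # generalize location
        "claim_amount": record["claim_amount"],
    }

print(minimize_record({"email": "jane@example.com", "postcode": "2600", "claim_amount": 420.0}))
```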

Secure systems and data

Governments should ensure AI use cases comply with security and data protection laws, policies, and guidelines throughout the supply chain. Moreover, security measures should align with relevant cybersecurity strategies. Access to systems and data should be restricted to authorized staff as needed for their duties, and expert advice should be sought when necessary.
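
Restricting access "as needed for their duties" is commonly implemented with role-based access control. The sketch below is a deliberately simplified illustration; the roles and permissions are hypothetical examples, not a mandated model.

```python
# Illustrative sketch only: restricting access to AI system data by role.
# Roles and permissions are hypothetical examples, not a prescribed model.
ROLE_PERMISSIONS = {
    "analyst": {"read_outputs"},
    "data_steward": {"read_outputs", "read_training_data"},
    "system_admin": {"read_outputs", "read_training_data", "modify_model"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("data_steward", "read_training_data")
assert not authorize("analyst", "modify_model")
```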

5. Reliability and Safety

This means that AI systems should reliably operate in accordance with their intended purpose throughout their life cycle. To enable this, governments should:

Use appropriate datasets

Governments should ensure AI systems are trained and validated on datasets that are accurate, representative, authenticated, reliable, and tailored to specific use cases.
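
The sketch below illustrates the kind of automated pre-training checks this practice implies: verifying that required fields are populated and that no group falls below a minimum share of the dataset. The column names and the 5% floor are placeholder assumptions; actual thresholds should come from the use case's risk assessment.

```python
# Illustrative sketch only: basic pre-training dataset checks for
# completeness and representativeness. Field names and the 5% floor are
# hypothetical placeholders for jurisdiction-specific standards.
def validate_dataset(rows, required_fields, group_field, min_group_share=0.05):
    """Return a list of data-quality issues found in the dataset."""
    if not rows:
        return ["dataset is empty"]
    issues = []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            issues.append(f"row {i}: missing fields {missing}")
    counts = {}
    for row in rows:
        counts[row[group_field]] = counts.get(row[group_field], 0) + 1
    for group, n in counts.items():
        if n / len(rows) < min_group_share:
            issues.append(f"group '{group}' underrepresented: {n}/{len(rows)} rows")
    return issues

print(validate_dataset(
    rows=[{"age": 34, "region": "NSW"}, {"age": None, "region": "NT"}],
    required_fields=["age", "region"],
    group_field="region",
))
```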

Conduct pilot studies

Governments should test AI systems in small-scale pilots to identify and address issues before scaling. They should balance governance with effectiveness, as highly controlled environments might not reveal all risks and opportunities, while less controlled settings may present governance challenges.

Test and verify

Governments should test and verify AI system performance using methods such as red teaming, conformity assessments, reinforcement learning from human feedback, metrics, and performance testing.
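
For the metrics and performance-testing part of this practice, a minimal, hypothetical acceptance test might look like the following; the 0.9 recall threshold is an invented example, not a figure from the framework.

```python
# Illustrative sketch only: verifying a model against a held-out test set
# with simple metrics and a hypothetical acceptance threshold.
def evaluate(predictions, labels):
    """Compute precision and recall for binary predictions."""
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"precision": precision, "recall": recall}

metrics = evaluate([1, 0, 1, 1], [1, 0, 0, 1])
assert metrics["recall"] >= 0.9, f"Recall below acceptance threshold: {metrics}"
```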

Monitor and evaluate

Governments should continuously assess AI systems to ensure they operate safely, reliably, and ethically. This includes evaluating system performance, user interactions, and impacts on individuals, society, and the environment while also incorporating feedback from those affected by AI outcomes.
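
Continuous monitoring can be as simple as watching the distribution of a system's outputs for drift away from a validated baseline. The sketch below shows one illustrative way to do it; the window size and tolerance are hypothetical tuning choices.

```python
# Illustrative sketch only: flagging drift when the share of positive model
# outputs in a recent window departs from a validated baseline. The window
# size and tolerance are hypothetical tuning choices.
from collections import deque

class OutputMonitor:
    def __init__(self, baseline_rate: float, window: int = 100, tolerance: float = 0.15):
        self.baseline = baseline_rate
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, positive: bool) -> bool:
        """Record one output; return True if drift is detected."""
        self.recent.append(1 if positive else 0)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance

monitor = OutputMonitor(baseline_rate=0.30, window=5, tolerance=0.15)
for outcome in [True, True, True, False, True]:
    if monitor.record(outcome):
        print("Drift detected: trigger human review of recent decisions")
```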

Be prepared to disengage

Governments should be ready to quickly and safely shut down an AI system if an unresolvable issue arises, such as a data breach, unauthorized access, or system compromise. These scenarios should be included in business continuity, data breach, and security response plans.
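
One common engineering pattern for rapid, safe shutdown is a kill switch checked before every AI call, with a manual fallback path. The sketch below is purely illustrative; all names are hypothetical stand-ins rather than a real service's API.

```python
# Illustrative sketch only: a kill switch checked before every AI call so
# operators can disengage the system quickly and route work to a manual
# process. All names here are hypothetical stand-ins, not a real API.
class KillSwitch:
    def __init__(self) -> None:
        self._reason = None

    def disengage(self, reason: str) -> None:
        self._reason = reason  # real systems would also alert on-call staff

    @property
    def active(self) -> bool:
        return self._reason is None

def ai_model_answer(query: str) -> str:          # placeholder model call
    return f"model answer to: {query}"

def fallback_manual_process(query: str) -> str:  # placeholder human-handled path
    return f"queued for manual handling: {query}"

switch = KillSwitch()

def handle_request(query: str) -> str:
    """Serve via the AI system unless it has been disengaged."""
    if not switch.active:
        return fallback_manual_process(query)
    return ai_model_answer(query)

print(handle_request("eligibility check"))
switch.disengage("suspected data breach")
print(handle_request("eligibility check"))
```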

6. Transparency and Explainability

This means that there should be transparency and responsible disclosure to ensure that individuals are aware when AI is significantly impacting them. Additionally, they should know when an AI system is engaging with them. To enable this, governments should:

Disclose the use of AI

Governments should be transparent about using AI, informing users and those affected by it. Additionally, they should keep a record that outlines when AI is employed, its objectives, intended applications, and any limitations.
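
The record this practice calls for can be kept as a structured register. Below is a minimal sketch of what one entry might contain; the fields are assumptions modeled on the paragraph above, not a schema prescribed by the framework.

```python
# Illustrative sketch only: a minimal AI use-case register entry supporting
# the disclosure record described above. Fields are assumptions, not the
# framework's prescribed schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    name: str
    objective: str
    intended_applications: list
    known_limitations: list
    publicly_disclosed: bool
    first_deployed: date = field(default_factory=date.today)

record = AIUseCaseRecord(
    name="Benefits triage assistant",            # hypothetical example entry
    objective="Prioritize incoming claims for human review",
    intended_applications=["claim routing"],
    known_limitations=["not validated for non-English submissions"],
    publicly_disclosed=True,
)
print(record)
```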

Maintain reliable data and information assets

Governments should adhere to laws, policies, and standards for keeping reliable records of AI decisions, testing, and data assets. This ensures transparency, allows for oversight from both internal and external parties, and promotes accountability and continuity of knowledge.

Provide clear explanations

Governments should clearly explain how AI systems reach outcomes, detailing inputs, variables, testing results, and human oversight. When explainability is limited, they should balance AI benefits against these limitations and, if proceeding, document reasons and apply increased oversight. In administrative decision-making, AI-influenced decisions must be explainable, and humans must be held accountable.

Support and enable frontline staff

Governments should train and support frontline staff to clearly explain AI-influenced outcomes to users. They should emphasize the importance of human-to-human relationships, especially for vulnerable individuals, those with complex needs, and those uneasy with the use of AI in government.

7. Contestability

This means that when an AI system significantly impacts a person, community, group, or environment, there should be a process for people to challenge its use or outcomes in a timely manner. To enable this, governments should:

Ensure lawful decision-making

Governments should ensure AI use in administrative decision-making complies with laws, policies, and guidelines, adhering to the principles of legality, fairness, rationality, and transparency. They should provide access to reviews, dispute resolution, and investigations, and seek advice to understand their obligations and the proposed use of AI.

Communicate rights and protections clearly

Governments should clearly inform individuals of their rights and protections regarding each AI use case, providing ways to raise concerns and objections and seek remedies. They should clearly communicate channels that can be used to challenge AI use or outcomes, with transparent feedback and response mechanisms ensuring timely human review throughout the AI lifecycle.

8. Accountability

This means those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the systems' outcomes. Additionally, human oversight of AI systems should be facilitated. To enable this, governments should:

Establish clear roles and responsibilities

Governments should manage their AI use through clearly defined roles and accountability lines. This includes assigning senior leadership and specific area responsibilities, addressing security, data governance, privacy, and other obligations, and integrating AI oversight with existing governance practices and risk management frameworks.

Train staff and embed capability

Governments should create policies, procedures, and training programs to ensure employees understand their duties, system limitations, and AI assurance practices.

Embed a positive risk culture

Governments should foster a positive risk culture by promoting open, proactive AI risk management as a daily practice. This encourages open dialogue about uncertainties and opportunities, empowers employees to voice concerns, and ensures processes exist to escalate issues to the relevant accountable parties.

Avoid overreliance

Governments are responsible for all AI-generated outputs and must identify and address incorrect outputs. They should consider how much they rely on AI, its associated risks, and the accountability issues that arise, as excessive reliance might result in the acceptance of biased or inaccurate outputs and risk business continuity.

Conclusion

Australia’s national framework for the assurance of AI systems in the public sector reminds the private sector of the importance of having clear foundations for the safe and responsible use of AI. By outlining clear standards and policy expectations, this framework enhances accountability and transparency within public institutions and encourages private enterprises to adopt similar practices. In this way, the framework acts as a catalyst for innovation while prioritizing safety and ethical considerations, paving the way for sustainable growth and responsible AI deployment across all sectors.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti has been recognized with numerous industry and analyst awards, including "Most Innovative Startup" by RSA, "Top 25 Machine Learning Startups" by Forbes, "Most Innovative AI Companies" by CB Insights, "Cool Vendor in Data Security" by Gartner, and "Privacy Management Wave Leader" by Forrester.

Securiti’s Genstack AI Suite removes the complexities and risks inherent in the GenAI lifecycle, empowering organizations to swiftly and safely utilize their structured and unstructured data anywhere with any AI and LLMs. It provides features such as secure data ingestion and extraction, data masking, anonymization, and redaction, as well as indexing and retrieval capabilities. Additionally, it facilitates the configuration of LLMs for Q&A, inline data controls for governance, privacy, and security, and LLM firewalls to enable the safe adoption of GenAI.

Request a demo to learn more.
