An Overview of Australia’s New “Online Safety Amendment (Social Media Minimum Age) Act 2024”

Salma Khan, Data Privacy Analyst at Securiti (CIPP/Asia)

Published December 5, 2024


Introduction

The Australian government is taking a significant step to enhance online safety for children through the Online Safety Amendment (Social Media Minimum Age) Act 2024 (Act). This law introduces new obligations for social media platforms to enforce age restrictions, protect children under 16 from creating accounts, and ensure the privacy of personal data collected during age verification.

Moreover, the Office of the Australian Information Commissioner (OAIC) has also released Privacy Guidance on the Social Media Minimum Age (SMMA), imposing additional privacy and accountability obligations on platforms and third-party age assurance providers. The guidance emphasizes the need for transparency, proportionality in data collection, and enhanced privacy safeguards.

This article provides an overview of the impact that this Act and the related guidance may have on social media providers, users, and regulators once its obligations take effect.

Defining “Age-Restricted Social Media Platforms”

The Act applies to platforms designed for online social interaction between users, particularly those enabling:

  • Sharing material for social purposes (excluding business use).
  • Interactions between users, such as linking accounts or posting content.

If social media platforms meet the above-stated criteria, they are considered “age-restricted.” A social media platform may also be explicitly designated as “age-restricted” under legislative rules. In practice, the Act covers popular services like Instagram and TikTok, as both enable users to share content for social purposes, interact with others, and post material.

However, platforms that do not allow access to Australian users, such as region-specific social media services like VK (VKontakte), could be exempt from these requirements. Additionally, services excluded by legislative rules or designed for business purposes (such as LinkedIn) are not considered age-restricted under the Act.

Children’s Data Protection Obligations of Covered Platforms

Platforms classified as age-restricted must take reasonable steps to prevent children under the age of 16 from creating accounts. The Act and the accompanying guidance translate this into the following obligations:

1. Comply with Reasonable Steps to Restrict Under-16 Access on Social Media Platforms

To clarify what this means in practice, the eSafety Commissioner released the Social Media Minimum Age Regulatory Guidance in September 2025. It clarifies that "reasonable steps" must be informed by principles that include being:

  • Reliable, accurate, robust, and effective.
  • Privacy-preserving and data-minimising.
  • Transparent.
  • Proportionate.

Platforms are expected to implement layered and risk-based age assurance measures, such as age verification, estimation, or inference, while avoiding over-reliance on self-declaration. They are also required to remove existing accounts held by users under 16, prevent circumvention, and establish fair review mechanisms. Importantly, the guidance emphasises that compliance extends beyond written policies to include demonstrable implementation, continuous monitoring, and improvement, supported by proper record-keeping and ongoing engagement with eSafety.
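
To make the layered, risk-based approach more concrete, here is a minimal sketch of how an age assurance decision could combine signals of differing strength. It is illustrative only: the AgeSignal structure, the method names, and the 0.7 confidence threshold are assumptions for the example, not anything prescribed by the Act or the eSafety guidance.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignal:
    """One age assurance signal, e.g. a document check, facial age estimation, or inference."""
    method: str                  # "verification", "estimation", or "inference"
    estimated_age: Optional[int]
    confidence: float            # 0.0 - 1.0, how reliable this signal is

def is_under_16(signals: list[AgeSignal]) -> bool:
    """Layered decision: prefer the strongest reliable signal available.

    Self-declaration is deliberately not a method here, reflecting the guidance's
    warning against over-reliance on it.
    """
    priority = {"verification": 0, "estimation": 1, "inference": 2}
    usable = [s for s in signals if s.estimated_age is not None and s.confidence >= 0.7]
    if not usable:
        # No sufficiently reliable signal: escalate to a stronger check rather than allow access.
        raise LookupError("No reliable age signal; request further assurance")
    best = min(usable, key=lambda s: (priority.get(s.method, 99), -s.confidence))
    return best.estimated_age < 16

# Example: a facial-age estimate plus a stronger document verification.
signals = [
    AgeSignal(method="estimation", estimated_age=15, confidence=0.80),
    AgeSignal(method="verification", estimated_age=17, confidence=0.99),
]
print(is_under_16(signals))  # False: the document verification outranks the estimate
```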

How Securiti Can Help

Securiti helps align age assurance with eSafety principles by ensuring data minimization and proportionality through automatic data classification and destruction. Securiti’s Consent Management provides the transparency and notice practices expected under the OAIC guidance, delivering the auditability required to prove measures are reliable and robust.

2. Comply with Age Verification Measures

To comply with the Act, platforms must:

  • Implement age assurance measures to verify users' ages, such as:
    • Age verification through official documents (e.g., submitting a government-issued ID).
    • Third-party age verification services that cross-check user data with trusted databases.
  • Protect the personal data collected for age verification, ensuring it is used solely for this purpose and not further processed for any other reasons.
  • Be transparent using just-in-time notices to explain what is collected, why, how long it is retained, and the user's choices.

Platforms holding personal information for age assurance must only use or disclose it for verifying age, or as otherwise permitted under the Australian Privacy Principles (APPs), as the sketch below illustrates.
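
One way to picture that use-limitation obligation is a purpose check applied before any access to age assurance data. The sketch below is a simplification under assumed names (PERMITTED_PURPOSES, access_age_assurance_record); a real platform would enforce the same rule at its storage and access-control layers rather than in a helper function.

```python
# Purpose-bound access to age assurance data: any use other than age verification
# (or a use separately permitted under the APPs) is rejected outright.
PERMITTED_PURPOSES = {"age_verification"}  # extend only where an APP exception genuinely applies

class PurposeViolation(Exception):
    """Raised when data collected for age assurance is requested for another purpose."""

def access_age_assurance_record(record: dict, purpose: str) -> dict:
    """Return the record only if the stated purpose is permitted."""
    if purpose not in PERMITTED_PURPOSES:
        raise PurposeViolation(f"'{purpose}' is not a permitted use of age assurance data")
    return record

record = {"user_id": "u-123", "over_16": True}
access_age_assurance_record(record, "age_verification")   # allowed
# access_age_assurance_record(record, "ad_targeting")     # raises PurposeViolation
```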

How Securiti Can Help

Securiti’s Sensitive Data Intelligence module detects hidden data in cloud platforms, classifies sensitive information, builds a searchable catalog, links personal data to owners for reporting, and highlights data risk areas.

3. Comply with Consent Requirements

To comply with the Act, platforms must ensure that:

  • Consent to collect age verification data is voluntary, informed, specific, and withdrawable, and the data is used only for age verification purposes unless additional consent is obtained (a minimal recording sketch follows this list).
  • The consent request is written and designed so users can understand what they are agreeing to.
  • The consent is unambiguous, meaning entities cannot seek it through pre-selected settings or opt-outs.
  • Secondary uses and disclosures of data are strictly optional and easily withdrawn.
  • For other personal data, such as a user's address, the same consent requirements apply, but consent must be obtained for the specific purpose the data is collected for, in line with the Privacy Act 1988.
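
As a rough illustration of what recording such consent might look like in practice, the sketch below binds each consent grant to a single purpose, refuses pre-selected defaults, and makes withdrawal a single call. The field names and ConsentRecord structure are assumptions for the example, not a schema required by the OAIC.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One explicit consent grant, bound to a single purpose."""
    user_id: str
    purpose: str                          # e.g. "age_verification"; secondary uses need a separate grant
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

def record_consent(user_id: str, purpose: str, user_ticked_box: bool) -> ConsentRecord:
    """Consent must be an affirmative act: a pre-selected setting or opt-out does not count."""
    if not user_ticked_box:
        raise ValueError("Consent requires an unambiguous, affirmative action by the user")
    return ConsentRecord(user_id=user_id, purpose=purpose,
                         granted_at=datetime.now(timezone.utc))

def withdraw_consent(record: ConsentRecord) -> None:
    """Withdrawal should be as easy as granting: one call, no conditions."""
    record.withdrawn_at = datetime.now(timezone.utc)

consent = record_consent("u-123", "age_verification", user_ticked_box=True)
withdraw_consent(consent)
print(consent.active)  # False: the grant is no longer relied on
```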

How Securiti Can Help

Securiti’s Consent Management automates the handling of first-party and third-party consent, enabling organizations to obtain, record, track, and manage individuals' explicit consent.

4. Comply with Data Retention and Destruction Obligations

To comply with the Act, platforms must destroy personal information collected for age verification once it has served its purpose:

  • There is no allowance for de-identification; the information must be destroyed.
  • Age assurance inputs (e.g., document images, selfies, biometric information) must be destroyed immediately once the purpose of age assurance has been met (see the sketch below).
  • Failure to destroy such information is deemed an interference with privacy under the Privacy Act 1988, making it subject to complaints.
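
The destruction obligation can be pictured as an age check that never persists its inputs: the raw document image or selfie exists only for the duration of the check, and only the outcome is retained. The estimate_age_from_document helper below is a hypothetical stand-in, not any provider's actual API.

```python
def estimate_age_from_document(document_image: bytes) -> int:
    """Hypothetical stand-in for a document-based age check."""
    return 17  # placeholder result for the example

def verify_age(document_image: bytes) -> dict:
    """Run the age check and return only the outcome.

    The raw input (document image, selfie, biometric data) is never written to
    disk or logs, and the local reference is dropped as soon as the purpose of
    age assurance has been met.
    """
    try:
        age = estimate_age_from_document(document_image)
        return {"over_16": age >= 16, "checked_with": "document"}
    finally:
        del document_image  # explicitly release the only reference held here

result = verify_age(b"...raw image bytes...")
print(result)  # {'over_16': True, 'checked_with': 'document'}
```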

How Securiti Can Help

Securiti’s Redundant, Obsolete and Trivial (ROT) Data Minimization solution uses AI to identify and remove unnecessary data, reducing storage costs and ensuring compliance with retention policies. It enables organizations to leverage granular insights and assess the security posture of data assets across on-premises, IaaS, SaaS, and data cloud environments.

Enforcement and Penalties

The Act grants significant powers to the eSafety Commissioner, Australia’s independent regulator for online safety, to monitor compliance, enforce the rules, and penalize breaches.

The Commissioner can request information from:

  • Age-restricted social media platforms to assess compliance.
  • Providers of electronic services, to determine whether their platforms fall within the scope of the rules.

The maximum penalty is 30,000 penalty units, or AUD 9.9 million, rising to 150,000 penalty units, or AUD 49.5 million, for corporations. If a platform is found non-compliant under the Act, the eSafety Commissioner may:

  • Issue a statement of non-compliance to the provider.
  • Publicly name and shame non-compliant platforms on the Commissioner’s website.

Implementation Timeline and Review

The Act provides a 12-month implementation period for platforms within its scope to comply with the new age restriction obligations. These rules will apply to both new and existing accounts.

To ensure the rules' effectiveness, a mandatory independent review will be conducted within two years of implementation. The review will assess the rules' impact and recommend any necessary adjustments.

Going Forward

The Act raises the bar for safeguarding children online. To avoid substantial penalties, social media platforms must prioritize age assurance systems, data protection, and compliance.

By implementing protective systems and aligning them with regulations, platforms can foster safer online environments for users while adhering to Australia’s strict privacy and safety standards.
