Australia is set to become the first country in the world to introduce a national social media age restriction law banning users under 16 from accessing platforms like TikTok, Instagram, and Snapchat. The Australian Parliament passed the Online Safety Amendment (Social Media Minimum Age) Act in late 2024. It represents a bold move to protect younger users from online harms, but it also raises serious questions about enforcement, privacy, and unintended consequences. In this article, we explain what the new law entails, how it will work, and what it means for children, parents, tech platforms, and the future of online regulation.

Overview of the New Social Media Ban

The law, passed by the Australian Parliament in November 2024, introduces a legal minimum age of 16 for all social media accounts. Platforms must take “reasonable steps” to ensure compliance, including verifying users’ ages and preventing underage access. The law applies to both new and existing accounts.

The implementation deadline is December 10, 2025, giving tech companies one year to develop and deploy compliant systems. It’s part of a broader government initiative to protect children online, including age assurance trials and regulatory consultations.

What Does the Law Say?

The teen social media ban introduced by the Australian government is a landmark policy that reshapes how young people in Australia interact with the digital world. Passed in late 2024, the legislation targets social media companies directly and places strict obligations on them to limit access for children under the age of 16.

Below is a comprehensive overview of the law’s core provisions:

Applies to All Australians Under the Age of 16

The ban prohibits Australian children under 16 from creating or maintaining accounts on social media platforms. This includes current users, not just new sign-ups.

Platforms, Not Parents, Are Responsible for Compliance

The onus is on social media companies to ensure compliance. Parents cannot approve or override the restriction, and companies cannot shift accountability onto guardians. The responsibility to identify and restrict underage users lies entirely with the platforms.

Users Under 16 Will Be Blocked from Creating or Maintaining Accounts

Platforms must actively prevent underage users from signing up and must deactivate existing accounts that belong to users under 16. This requirement includes identifying accounts already in use by minors and shutting them down prior to the December 2025 deadline.

Non-Compliant Platforms Face Fines up to AUD 49.5 Million

If social media companies fail to comply, they may face significant penalties, including fines of up to AUD 49.5 million or 5% of their global turnover. The eSafety Commissioner is empowered to enforce penalties and oversee platform accountability.

Even if a parent is comfortable with their child using social media, the law prohibits platforms from allowing access. This was a deliberate move by the Australian government to prevent workarounds and ensure uniform enforcement across the board.

Exemptions for Educational and Child-Focused Platforms

Certain platforms are exempt from the ban, particularly those designed with children’s development, education, or mental health in mind. These include:

  • YouTube Kids
  • Messenger Kids
  • Google Classroom
  • Kids Helpline
  • Other services approved by the eSafety Commissioner

These exemptions aim to preserve access to essential digital resources while reducing exposure to harmful content commonly found on mainstream social platforms.

How Will Platforms Enforce the Age Restriction?

Enforcement won’t be as simple as adding a checkbox for users to confirm their age. Under the new law, platforms must implement age-assurance systems designed to detect and restrict underage access. These systems must:

  • Deactivate accounts held by under-16 users
  • Prevent new underage signups
  • Comply with the eSafety Commissioner’s guidelines (expected later in 2025)
  • Cooperate with regulatory audits and respond to user complaints

While some companies may take a minimalist approach, others are likely to adopt robust identity and biometric verification mechanisms, especially given the high stakes involved.
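To make the obligation concrete, here is a minimal sketch in Python of the gate a platform might put in front of both signups and existing accounts. The names are hypothetical; the Act mandates the outcome, not any particular implementation:

```python
from dataclasses import dataclass

MINIMUM_AGE = 16  # minimum age set by the Act


@dataclass
class AgeCheckResult:
    verified: bool        # did the age-assurance step complete successfully?
    estimated_age: float  # age claimed or estimated by that step


def may_hold_account(result: AgeCheckResult) -> bool:
    """Gate applied to new signups and existing accounts alike."""
    return result.verified and result.estimated_age >= MINIMUM_AGE


def sweep_existing_accounts(accounts, run_age_check, deactivate):
    """Deactivate accounts that fail the age check before the deadline."""
    for account in accounts:
        if not may_hold_account(run_age_check(account)):
            deactivate(account)
```

The same predicate guards both paths because the law covers new and existing accounts equally; only the age-assurance step behind `run_age_check` differs between platforms.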

What Age Verification Methods Will Be Used?

To comply with the law, platforms are expected to rely on several types of age verification technologies. Each comes with different privacy, security, and usability considerations.

Document-based Verification

This method requires users to upload a government-issued ID (e.g., a passport or driver’s license) along with a selfie. Biometric software compares the face and document to verify both identity and age.

It’s accurate and fraud-resistant, but critics warn of potential privacy risks from mass ID collection and data breaches.
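As a rough illustration, the document-based flow reduces to two checks: the selfie must match the ID photo, and the date of birth on the ID must put the user at or above the minimum age. In this sketch, `face_match` is a hypothetical stand-in for a real biometric comparison service:

```python
from datetime import date


def age_from_dob(dob: date, today: date | None = None) -> int:
    """Whole years of age implied by a date of birth."""
    today = today or date.today()
    years = today.year - dob.year
    # Not yet had this year's birthday? Then one year younger.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years


def verify_with_document(id_photo, selfie, dob: date,
                         face_match, min_age: int = 16) -> bool:
    """Pass only if the selfie matches the ID photo and the ID's
    date of birth puts the user at or above the minimum age."""
    return face_match(id_photo, selfie) and age_from_dob(dob) >= min_age
```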

AI and Facial Recognition

Platforms may also use AI-driven facial age estimation, which evaluates a user’s age based on facial features without needing documents. This approach is faster and more privacy-friendly but less precise, so some platforms may use it as a first step with fallback to ID checks.
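One plausible way to combine the two, sketched below as an illustration rather than a mandated design, is a cascade: accept or reject only when the facial estimate is comfortably clear of the threshold, and escalate borderline cases to a document check so most users never upload an ID:

```python
def cascaded_age_check(selfie, estimate_age, document_check,
                       min_age: int = 16, margin: float = 2.0) -> bool:
    """Facial estimation first; escalate to an ID check only when the
    estimate lands within `margin` years of the threshold."""
    estimate = estimate_age(selfie)  # e.g. returns 21.4 (years)
    if estimate >= min_age + margin:
        return True                  # clearly old enough, no ID needed
    if estimate < min_age - margin:
        return False                 # clearly underage, no ID collected
    return document_check()          # borderline: fall back to documents
```

Widening `margin` reduces the chance of misclassifying someone near 16, at the cost of routing more users through the slower document path.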

Third-party or Telecom Provider Checks

Another option is to verify age via trusted intermediaries, such as mobile carriers, banks, or tokenized third-party services. For example, the eSafety Commissioner recommends a “double-blind token” system that confirms age without revealing identity to the platform. This reduces data exposure and protects user anonymity.
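In outline, a double-blind scheme splits the check between an age provider, which sees identity documents but not which platform is asking, and the platform, which sees only a signed “over 16” claim. The sketch below uses a shared-secret HMAC purely for illustration; a production scheme would use public-key signatures or zero-knowledge proofs rather than a secret shared with the platform:

```python
import base64
import hashlib
import hmac
import json
import time


def issue_age_token(secret: bytes, over_16: bool, ttl_seconds: int = 600) -> str:
    """Age provider: sign a short-lived claim carrying only the yes/no result."""
    claim = {"over_16": over_16, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode())
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"


def platform_accepts(secret: bytes, token: str) -> bool:
    """Platform: verify the signature and expiry; no identity is revealed."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim["over_16"] and claim["exp"] > time.time()
```

The token carries no name, birthdate, or document data, which is the point: the platform learns only “this user cleared the age check within the last few minutes.”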

Concerns, Criticism, and Risks

While the law aims to safeguard children, it has also faced strong criticism from privacy experts, child psychologists, digital rights groups, and tech companies.

  • Privacy: Critics argue that verifying every user’s age—even adults—could lead to overreach, mass surveillance, or increased risk of data breaches.
  • Effectiveness: Kids may still find ways to circumvent restrictions using VPNs, false IDs, or offshore apps that don’t comply with Australian laws.
  • Digital exclusion: Experts warn that banning under-16s may push them to fringe or unmoderated online spaces, removing access to supportive, educational, or creative communities.
  • Legal overreach: Some commentators note that Australian 16-year-olds can legally work, pay taxes, and hold a learner driver’s permit, yet won’t be allowed to use Instagram.

Global Precedents: Are Other Countries Doing the Same?

Australia’s new law is unique in its scope and enforcement.

Other countries, including the UK, EU members, and several US states, have introduced age verification laws for adult content or specific services. But none have implemented a national, platform-wide ban on under-16s for general social media.

UK: The Online Safety Act mandates age checks for sites hosting adult content, while the Age Appropriate Design Code sets child-safety design standards; social media platforms face new duty-of-care rules but no age bans.

EU: Digital Services Act encourages age-appropriate design, but doesn’t impose hard bans.

US: States like Utah, Arkansas, and Louisiana have proposed or passed laws requiring parental consent for minors, but enforcement varies, and many face legal challenges.

Australia’s approach is being closely watched by governments worldwide, and Prime Minister Albanese is expected to present the model at the UN General Assembly in September 2025.

What Comes Next?

Between now and December, platforms will need to finalize their age-verification systems in coordination with the eSafety Commissioner, who is leading industry consultations and drafting technical guidelines.

In parallel, the government is expanding age assurance expectations to include:

  • Search engines
  • Messaging apps
  • Gaming platforms
  • App stores
  • AI assistants and chatbot tools

The goal is to establish a comprehensive age-gated digital environment for minors. By year-end, Australia will have one of the most ambitious online child protection regimes in the world.

FAQ

What is Australia’s social media ban?
It’s a law that prohibits anyone under 16 from holding accounts on social media platforms such as TikTok, Instagram, Snapchat, X, and others.

When does the ban take effect?
It takes effect on December 10, 2025, one year after the law passed.

Are social media platforms themselves banned in Australia?
No, but they will be legally required to block under-16 users. Failure to do so may result in large fines.

How will platforms verify users’ ages?
Using age verification tools like document checks, facial recognition, AI-based age estimation, or data from third-party providers like banks and telecoms.

Is Australia the first country to do this?
Yes. While others are exploring age checks or parental controls, Australia is the first country to enact a national law banning under-16s from all major social media platforms.