In a sweeping pre-emptive strike ahead of a controversial new law, tech giant Meta has begun forcibly removing hundreds of thousands of Australian children from its platforms, launching a massive account purge that tells users under 16: “You’re Out.”
The company has started notifying and deactivating an estimated 150,000 Facebook accounts and 350,000 Instagram accounts belonging to teens aged 13 to 15, a week before Australia’s world-first social media ban for under-16s officially takes effect on December 10. The move, which also locks users out of the Threads platform, is Meta’s dramatic compliance with a law that threatens fines of up to A$49.5 million for companies that fail to take “reasonable steps” to block minors.
While pledging to follow the law, Meta fired a shot across the government’s bow, arguing that a “more effective, standardised, and privacy-preserving approach” would require app stores, not individual companies, to verify ages. Teens caught in the dragnet have a brief window to download their data and can appeal by submitting a “video selfie” or ID, but for most, the digital eviction is now underway.

Australia’s Social Media Ban: Quick Facts
What’s Happening?
From December 10, 2025, social media platforms are legally required to take “reasonable steps” to prevent Australians under the age of 16 from having an account. They must find and deactivate existing accounts and stop new ones from being created.
Which Platforms are Banned?
The ten key platforms are: Facebook, Instagram, Threads, Snapchat, TikTok, X, YouTube, Reddit, Kick, and Twitch. The list is based on platforms whose main purpose is “online social interaction”. Notably excluded are WhatsApp, Messenger, Discord, Roblox, and YouTube Kids.
Who’s Responsible?
The legal onus is entirely on the social media companies, not children or their parents. If platforms fail to take reasonable steps, they face fines of up to A$49.5 million (approx. US$32 million).
How Will Age Be Checked?
The law is strict: platforms cannot just rely on a user-provided birthdate. They must use age-assurance technology, which could include:
a) Frictionless Checks: Analysing account age, language, or connections.
b) Active Verification: Requesting a video selfie, bank record, or photo ID.
Most importantly, the law prohibits platforms from forcing users to provide a government ID; they must offer a reasonable alternative.
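Neither the legislation nor the platforms have published implementation details, but the two tiers above map naturally onto a simple decision flow: cheap, frictionless signals first, with active verification reserved for ambiguous cases. The sketch below is purely illustrative; the `AgeSignals` structure, the `assess` function, and every threshold in it are hypothetical, invented for this example rather than drawn from any platform’s actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    LIKELY_16_PLUS = auto()      # frictionless signals suffice; no action
    NEEDS_VERIFICATION = auto()  # ambiguous; offer a video selfie OR ID
    DEACTIVATE = auto()          # strong signals the user is under 16


@dataclass
class AgeSignals:
    """Hypothetical frictionless signals a platform might already hold."""
    account_age_years: float       # how long the account has existed
    stated_birthdate_age: int      # self-reported age (not sufficient alone)
    estimated_age_from_model: int  # e.g. inferred from language/connections
    model_confidence: float        # 0.0 to 1.0


def assess(signals: AgeSignals) -> Decision:
    """Tiered age assurance: frictionless checks first, active checks as fallback.

    The thresholds here are invented for illustration; the law only
    requires 'reasonable steps', not any specific cutoff.
    """
    # Tier 1 (frictionless): a confident model estimate of 16+ needs
    # no further friction for the user.
    if signals.estimated_age_from_model >= 16 and signals.model_confidence >= 0.9:
        return Decision.LIKELY_16_PLUS

    # Strong under-16 signal: deactivate, with an appeal path
    # (video selfie or ID) as the platforms have described.
    if signals.estimated_age_from_model < 16 and signals.model_confidence >= 0.9:
        return Decision.DEACTIVATE

    # Tier 2 (active): ambiguous cases escalate to verification. The law
    # forbids *requiring* government ID, so an alternative such as a
    # video selfie must always be on offer.
    return Decision.NEEDS_VERIFICATION


if __name__ == "__main__":
    teen = AgeSignals(account_age_years=1.5, stated_birthdate_age=17,
                      estimated_age_from_model=14, model_confidence=0.95)
    print(assess(teen))  # Decision.DEACTIVATE (subject to appeal)
```

Note how the tiering reflects the law’s design: most adult users never see a verification prompt, government ID is never the sole route, and only the uncertain middle band bears any friction at all.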
What Should Users Under 16 Do Now?
They should download the data they want to keep (photos, videos, messages) before their accounts are deactivated; Meta began this process on December 4. Users who believe they’ve been blocked in error can appeal through the platform’s official help channels.
Why It Matters
Meta is executing the ban with the clinical efficiency of an automated purge, showcasing both its ability to identify underage users at scale and its willingness to sacrifice them to avoid gargantuan fines.
The company’s public critique of the law is a masterclass in deflection. By blaming the government for a flawed approach while simultaneously proving it can enforce that approach with terrifying speed, Meta positions itself as both a responsible enforcer and a critic of heavy-handed regulation. It’s having its cake and eating it too.
For the half-million Australian teens about to be digitally disappeared, this is a brutal lesson in how little control they have over their online identities. Their profiles, memories, and social networks, the digital fabric of their adolescence, are being archived or erased based on a corporate algorithm’s guess about their age. The ban may protect them from some harms, but its execution reveals a far more unsettling truth: in the battle between child safety and platform power, the users themselves are merely collateral damage.