Australia’s internet regulator has accused the world’s biggest social platforms of failing to properly enforce the country’s prohibition on under-16s accessing their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices such as allowing banned users to repeatedly attempt age verification and weak safeguards against new account creation. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, cautioning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Compliance Failures Exposed in First Major Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance amongst the world’s biggest social media platforms in her inaugural review since the ban came into effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively failed to implement appropriate safeguards to prevent minors from using their services. Julie Inman Grant expressed particular concern about structural gaps in age verification systems, highlighting that some platforms have allowed children who initially declared themselves under 16 to later assert they were older, effectively circumventing the law’s intent.
The findings represent a notable intensification of regulatory action, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has stressed that simply showing some children still hold accounts is insufficient; rather, platforms must provide concrete evidence that they have established robust systems and processes designed to prevent under-16s from opening accounts in the first place. This shift reflects the government’s commitment to holding tech giants responsible, with possible sanctions looming for companies that fail to meet their statutory obligations.
- Allowing previously banned users to re-verify their age and regain account access
- Permitting repeated attempts at the same age assurance method without consequence
- Inadequate safeguards to prevent under-16s from creating new accounts
- Insufficient notification systems for families and the wider community
- Absence of transparent data about regulatory measures and user account terminations
The Magnitude of the Problem
The sheer scale of social media activity amongst Australian young people underscores the compliance challenge facing both the authorities and the platforms in question. With numerous accounts already removed or restricted since the ban took effect, the figures paint a picture of widespread initial non-compliance. The eSafety Commissioner’s conclusions indicate that the technical and procedural obstacles to enforcing age restrictions have proven far more complex than expected, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities wrestling with the core issue of whether current age verification technologies are fit for purpose.
Beyond the operational challenges lies a wider question about the willingness of companies to place compliance ahead of user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be making sufficient effort to implement the legally mandated systems. The shift towards active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they stand to incur substantial fines that could reshape their business models in Australia and possibly affect compliance frameworks internationally.
What the Statistics Demonstrate
In the first month following the ban’s launch, Australian officials indicated that 4.7 million accounts had been suspended or removed. Whilst this figure initially seemed to prove regulatory success, subsequent analysis reveals a more nuanced picture. The sheer volume of account removals suggests that many under-16s had managed to create accounts in the first place, revealing that protective safeguards were inadequate. Additionally, the data raises questions about whether removed accounts represent genuine enforcement or merely users closing their profiles of their own accord in light of the updated rules.
The limited transparency surrounding these figures has frustrated independent observers trying to determine the ban’s genuine effectiveness. Platforms have disclosed scant details about their compliance procedures, effectiveness metrics, or the demographics of removed accounts. This absence of transparency makes it hard for regulators and the public to assess whether the ban is operating as planned or whether teenagers are merely discovering different means to use social media. The Commissioner’s demand for comprehensive proof of systematic compliance measures reflects increasing concern about platforms’ unwillingness to share detailed data.
Sector Reaction and Pushback
The social media giants have responded to the regulatory enforcement measures with a combination of compliance assurances and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, stressed its dedication to adhering to Australian law whilst at the same time contending that precise age verification remains a major challenge across the industry. The company has advocated for an alternative strategy, suggesting that strong age verification systems and parental consent requirements put in place at the app store level would be more effective than platform-level enforcement. This position reflects wider concerns across the industry that the current regulatory framework puts an impractical burden on individual platforms.
Snap, the developer of Snapchat, has adopted a more assertive public position, announcing that it had suspended 450,000 accounts following the ban’s implementation and asserting it continues to suspend additional accounts each day. However, industry observers question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ business models—which traditionally depended on maximising user engagement and expansion—and the regulatory requirement to systematically remove a whole age group remains unresolved. Companies have long resisted rigorous age verification methods, pointing to privacy concerns and technical limitations, creating a standoff between authorities and platforms over who carries responsibility for enforcement.
- Meta argues age verification should occur at app store level instead of on individual platforms
- Snap claims to have suspended 450,000 accounts since the ban’s implementation in December
- Industry groups highlight privacy concerns and technical obstacles as barriers to effective age verification
- Platforms contend they are making their best effort whilst questioning the ban’s overall effectiveness
Broader Questions About the Ban’s Efficacy
As Australia’s under-16 social media ban enters its implementation stage, fundamental questions remain about whether the law will accomplish its stated objectives or merely push young users towards less regulated platforms. The regulatory authority’s initial compliance assessment reveals that significant loopholes persist—children continue finding ways to bypass age verification systems, and platforms have struggled to prevent new underage accounts from being created. Critics argue that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply migrate to alternative services, secure messaging apps, or VPNs that mask their location.
The ban’s worldwide implications add further complexity to any assessment of its effectiveness. Governments including the United Kingdom, Canada, and several European states are watching Australia’s approach closely, exploring similar regulatory measures for their own citizens. If the ban proves ineffective at reducing children’s social media usage or cannot protect them from harmful content, it could weaken the case for comparable regulations elsewhere. Conversely, if enforcement becomes sufficiently rigorous to genuinely restrict underage access, it may inspire other governments to implement similar strategies. The outcome could shape international regulatory direction for years to come, ensuring Australia’s enforcement efforts will be scrutinised far beyond its borders.
Who Benefits and Who Suffers
Mental health advocates and organisations focused on child safety have championed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has recognised the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also eliminates legitimate uses of social media for young people—keeping friendships alive, obtaining educational material, and engaging with online communities around shared interests. The regulatory approach assumes harm outweighs benefit, a calculation that some young people and their families challenge.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now encounter legal barriers to participation. Small Australian businesses that rely on social media marketing are cut off from younger demographic audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously utilised effectively. Meanwhile, the ban inadvertently advantages large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s impact reaches well beyond the simple goal of child protection.
What Lies Ahead for Enforcement
Australia’s eSafety Commissioner has signalled a marked change from hands-off observation to proactive action, marking a key milestone in the rollout of the youth access prohibition. The authority will now gather evidence to determine whether platforms have neglected to implement “reasonable steps” to restrict child participation, a legal standard that extends beyond simply recording that children remain on these platforms. This strategy demands demonstrable proof that organisations have introduced appropriate systems and processes intended to keep minors off their services. The enforcement team has indicated it will conduct enquiries systematically, building evidence that could lead to substantial penalties for non-compliance. This move from observation to action reflects increasing dissatisfaction with the services’ existing measures and indicates that voluntary cooperation alone will no longer suffice.
The enforcement phase raises important questions about the adequacy of penalties and the concrete procedures for holding tech giants accountable. Australia’s legislation provides enforcement mechanisms, but their success relies on the eSafety Commissioner’s willingness to pursue regulatory action and the platforms’ capacity to respond meaningfully. International observers, particularly regulators in Britain and Europe, will keenly watch Australia’s regulatory approach and its results. A robust enforcement effort could establish a model for other countries contemplating comparable restrictions, whilst failure might undermine the entire regulatory framework. The forthcoming period will reveal whether Australia’s pioneering regulatory approach translates into genuine protection for teenagers or becomes largely performative in its influence.
