eSafety Report Uncovers Loopholes in Social Media Age Verification for Australian Children

Australia’s online safety regulator has raised concerns about the ease with which children can bypass the minimum age requirements imposed by social media platforms. This issue has gained prominence ahead of the government’s upcoming ban on social media access for users under 16, a policy set to take effect at the end of 2025. The report from eSafety combines data from a national survey of social media usage among eight to 15-year-olds with input from eight major social media platforms, including YouTube, Facebook, and Twitch.

The decision to ban social media access for children under 16 positions Australia as a pioneer in online safety regulation, potentially influencing global standards. This landmark move reflects growing concerns about the impact of social media on young users, including issues related to privacy, mental health, and exposure to inappropriate content. The ban is intended to create a safer digital environment for children, setting a benchmark for other countries considering similar measures.

The report revealed that despite the platforms’ age restrictions, a significant number of young Australians are actively using social media. In 2024, 80 percent of children aged eight to 12 reported using social media, with YouTube, TikTok, Instagram, and Snapchat being the most popular choices. This widespread usage occurs despite the platforms’ policies that generally prohibit users under 13 from accessing their services. The findings highlight a gap between policy and practice, with children easily circumventing age restrictions.

YouTube is the only platform among the surveyed services that officially allows under-13 usage through family accounts with parental supervision. However, even on YouTube, no children aged eight to 12 reported having their accounts suspended for being underage, raising questions about the effectiveness of current age verification methods. The report also found that 95 percent of Australian teens under 16 used at least one of the eight surveyed platforms, underscoring the pervasive influence of social media among young people.

A key issue identified in the report is the reliance on self-declared dates of birth during the sign-up process. Except for Reddit, all the surveyed platforms require users to enter their date of birth when creating an account. However, they rely solely on self-declaration, without implementing additional age assurance tools to verify users’ ages. This loophole allows children to easily enter false birth dates, bypassing the platforms’ minimum age requirements.
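To illustrate why self-declaration is so weak, here is a minimal sketch of the kind of age gate the report describes. It is hypothetical, not any platform's actual sign-up code: the check trusts whatever date the user types, so entering a false year defeats it entirely.

```python
from datetime import date

MIN_AGE = 13  # typical minimum age on the surveyed platforms

def age_on(dob: date, today: date) -> int:
    """Compute age in whole years from a date of birth."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def signup_allowed(claimed_dob: date, today: date) -> bool:
    """A self-declaration gate: it trusts whatever date the user enters."""
    return age_on(claimed_dob, today) >= MIN_AGE

today = date(2025, 1, 1)
# A truthful 11-year-old is blocked at sign-up...
print(signup_allowed(date(2013, 6, 1), today))  # False
# ...but the same child entering a false birth year passes the gate.
print(signup_allowed(date(2008, 6, 1), today))  # True
```

The gate itself is logically sound; the loophole is that its only input is unverifiable, which is exactly the gap the report identifies.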

The eSafety commissioner, Julie Inman Grant, emphasized the need for stricter age verification measures, noting that significant work remains for platforms that currently depend on users’ honesty to determine age. With the enforcement of Australia’s minimum age legislation on the horizon, social media companies face mounting pressure to implement more robust age verification systems. The report suggests that without improved measures, platforms may struggle to comply with the upcoming regulations, potentially facing legal and financial repercussions.

The report also highlighted differences in the proactive detection of underage users among platforms. TikTok, Twitch, Snapchat, and YouTube have implemented tools to detect users under 13 more proactively, leveraging technologies such as artificial intelligence and behavioral analysis. These tools aim to identify and remove underage accounts before they engage extensively with the platform. In contrast, the other surveyed platforms did not deploy such tools, despite comparable technology being available to them. This inconsistency suggests a lack of industry-wide standards for age verification and underage user detection.
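The general shape of such proactive detection can be sketched as a signal-scoring pipeline. The code below is a toy illustration under assumed signals and weights, not a description of any platform's actual system: it combines hypothetical behavioral indicators into a score and routes high-scoring accounts to human review.

```python
# Illustrative only: a toy signal-based flagger, not any platform's real system.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical signals a proactive detection system might weigh.
    underage_reports: int = 0            # user reports flagging the account as under 13
    bio_mentions_school_year: bool = False
    follows_mostly_child_creators: bool = False

def underage_score(s: AccountSignals) -> float:
    """Combine weighted signals into a review-priority score in roughly [0, 1]."""
    score = 0.4 * min(s.underage_reports, 3) / 3  # cap report influence at 3 reports
    score += 0.3 if s.bio_mentions_school_year else 0.0
    score += 0.3 if s.follows_mostly_child_creators else 0.0
    return score

REVIEW_THRESHOLD = 0.5  # above this, route the account to human review

flagged = AccountSignals(underage_reports=2, bio_mentions_school_year=True)
print(underage_score(flagged) >= REVIEW_THRESHOLD)  # True
```

The point of the sketch is the architecture, not the weights: scoring accounts before they engage extensively is what separates the proactive platforms from those relying on self-declaration alone.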

Furthermore, the report revealed that most social media platforms are actively researching ways to improve their age assurance systems. A majority of the services surveyed are exploring advanced technologies, including facial recognition and artificial intelligence, to enhance accuracy in age verification. However, these technologies also raise privacy concerns, particularly regarding data collection and storage. Balancing effective age verification with user privacy and data protection remains a complex challenge for social media companies.

In addition to technical solutions, the report noted that some platforms provide user-friendly pathways for reporting underage accounts. These reporting mechanisms allow concerned users to flag suspected underage profiles, prompting the platform to review and take appropriate action. However, the effectiveness of these reporting systems depends on user participation and the platforms’ commitment to investigating flagged accounts thoroughly.

The findings from eSafety’s report underscore the urgent need for more effective age verification and assurance mechanisms on social media platforms. With Australia’s minimum age legislation set to take effect in 2025, social media companies face increasing pressure to enhance their systems and comply with stricter regulations. This regulatory push is likely to accelerate the development and deployment of advanced age verification technologies, potentially setting new global standards for online safety.

Australia’s proactive stance on social media regulation reflects a growing global trend towards stricter digital safety measures, particularly for children. As other jurisdictions observe Australia’s approach, it could inspire similar legislative actions worldwide, leading to more uniform age verification standards across social media platforms. For social media companies, this represents a critical juncture to innovate and invest in secure and reliable age verification solutions, ensuring both compliance and user safety.

As the digital landscape continues to evolve, the balance between protecting young users and maintaining user privacy will remain a key challenge for regulators and social media companies alike. Australia’s bold step towards enforcing a social media age limit sets a precedent that could reshape online safety norms globally, influencing how digital platforms operate and protect their youngest users.