Meta has expanded its Teen Accounts safety system worldwide, bringing the feature to Facebook, Messenger, and Instagram. The company says the move will automatically place hundreds of millions of teens under stricter protections, including limits on explicit content, account discovery, and the ability to livestream. But a new independent report suggests the tools do not work as promised and may leave teens exposed to the very risks Meta claims to be addressing.
The report, “Teen Accounts, Broken Promises,” was conducted by Cybersecurity for Democracy in collaboration with whistleblower Arturo Béjar and advocacy groups in the US and UK. Researchers tested 47 of Meta’s safety features and found that nearly two-thirds were either discontinued or ineffective. Only eight tools were judged to work as intended, while core protections such as sensitive content filters and restrictions on adult contact often failed. In some tests, adults were still able to message teens, bullying content passed through filters, and harmful material, including sexual and self-harm content, remained visible.
Experts argue that Meta’s approach overlooks how teens actually use these platforms. As Laura Edelson, co-director of Cybersecurity for Democracy, explained, teens often seek out risky material as part of normal adolescent behavior, which makes proactive safety barriers essential. Béjar compared Meta’s responsibility to that of a car manufacturer: the product itself must be fundamentally safe, rather than leaving parents to intervene after harm has already occurred.
Meta has disputed the report, claiming it misrepresents the tools and pointing to internal data showing that teens enrolled in Teen Accounts see less harmful content and receive less unwanted contact. The company insists its system “leads the industry” in safety protections and gives parents effective monitoring options, while acknowledging that improvements are ongoing.
Advocacy groups and bereaved parents, however, argue that Meta’s repeated safety announcements amount to marketing rather than meaningful protections. They point to previous findings that teens continue to encounter sexual content and predatory behavior despite the new settings. Meta has deleted over 600,000 accounts linked to such activity but continues to face scrutiny from lawmakers, parents, and watchdogs.
The debate is now spilling into policy. Some groups are calling for stronger enforcement of existing laws such as COPPA in the US and the UK’s Online Safety Act, while others advocate for new regulations like the Kids Online Safety Act. Earlier this month, whistleblower testimony before the Senate Judiciary Committee urged federal regulators to independently assess whether Meta’s tools provide genuine protection.
The expansion of Teen Accounts underscores the growing pressure on platforms to shield younger users, but the ongoing gap between Meta’s assurances and researchers’ findings raises questions about whether the company’s tools are capable of meeting that responsibility.