TLDRs:
- Zuckerberg reached out to Apple CEO Tim Cook in 2018 about teen safety efforts.
- Instagram engagement was not prioritized over youth well-being, court hears.
- Meta rolled out “Teen Accounts” and safety tools amid legal scrutiny.
- Studies show safety tools have limits, prompting ongoing platform reviews.
Meta CEO Mark Zuckerberg testified in Los Angeles Superior Court on February 18, revealing that he had contacted Apple CEO Tim Cook in 2018 to discuss teen and child safety on social media platforms.
The email exchange, Zuckerberg said, was intended to explore collaborative approaches to improving online safety for young users.
The testimony comes amid a broader trial in which social media companies face allegations that their platforms harm teenage users. Zuckerberg emphasized that Instagram’s growth and user engagement metrics were never intended to come at the expense of youth well-being, countering claims that Meta deliberately pushed addictive features to increase time spent on the platform.
Challenges With Underage Users and Filters
During the session, Zuckerberg acknowledged the difficulty of fully preventing underage users from accessing Instagram. While the platform removes accounts identified as belonging to users under 13, reports indicate that some children still bypass its age restrictions.
The CEO also discussed consultations with stakeholders regarding features such as beauty filters, which some studies have linked to negative self-perception among teenage girls. He maintained that Meta's approach sought to balance safety with users' freedom of expression, emphasizing that the company does not deliberately encourage harmful behaviors through its product design.
Product Tweaks Amid Legal Scrutiny
Facing mounting legal pressure, Meta has implemented a series of changes to address teen safety. The company rolled out “Teen Accounts,” which aim to provide age-appropriate experiences while limiting unwanted contact from adults. The updates include stricter controls over who can message young users, as well as other tools designed to help parents and guardians monitor interactions.
However, external reviews of these tools reveal ongoing challenges. One analysis found that approximately 64% of the Instagram safety features tested were either ineffective, easy to bypass, or discontinued. While some issues, such as adults messaging teens who did not follow them, have since been addressed, researchers continue to monitor the effectiveness of these safeguards in day-to-day use.
Trial Implications and Legal Context
The trial is seen as an early test case that could influence thousands of similar lawsuits from families and school districts seeking to hold social media platforms accountable for teen harm. Plaintiffs argue that specific product features, such as infinite scroll and autoplay, intentionally drive user engagement in ways that can be harmful, a design-based theory intended to fall outside the Section 230 protections that traditionally shield online platforms from liability over user-generated content.
Internal Meta documents cited during the trial, including engineers’ informal remarks comparing Instagram to drugs, gambling, and Big Tobacco, have further fueled public and legal scrutiny. As the case unfolds, the court is also examining Meta’s broader approach to artificial intelligence, platform addiction, and youth-focused policies, including how effectively the company enforces age restrictions and responds to underage users.
Zuckerberg’s testimony underscores Meta’s attempts to demonstrate proactive engagement on teen safety while navigating growing regulatory and legal pressures. Investors appear to be watching closely, as the company’s response to these challenges could have implications not only for its reputation but also for its stock performance.