Instagram will begin notifying parents when teenagers repeatedly search for content related to suicide or self-harm on the platform, owner Meta Platforms said Thursday, as the social media giant faces growing legal and regulatory pressure over child safety concerns.
The new parental supervision feature will send alerts to guardians if teens repeatedly search, within a short period of time, for terms linked to suicide or self-harm, or for expressions suggesting emotional distress.
Meta said the alerts are intended to help parents intervene early and access support resources for vulnerable young users.
“These alerts are designed to make sure parents are aware if their teen is repeatedly trying to search for this content, and to give them the resources they need to support their teen,” the company said in a statement.
The feature will begin rolling out next week in the United States, United Kingdom, Australia and Canada, with notifications delivered through email, text message, WhatsApp or directly within Instagram.
The system will function only when both the parent and the teenager are enrolled in Instagram's parental supervision programme.
Parents receiving alerts will be informed about concerning search patterns and directed to mental health resources, though Meta acknowledged that some notifications will not necessarily signal immediate risk.
The announcement comes as Meta — which operates Instagram, Facebook and WhatsApp — continues to face multiple court cases examining whether social media platforms contribute to declining mental health among young users.
Legal challenges involving major technology firms, including Google's YouTube, TikTok and Snap Inc., have increasingly been compared by experts to the tobacco industry's historic legal battles over product safety.
Meta chief executive Mark Zuckerberg recently testified in a California court case alleging that social media platforms were intentionally designed in ways that foster addiction among underage users.
During testimony, Zuckerberg argued that mobile operating system providers, including Apple and Google, are better positioned than app developers themselves to handle age verification.
The company also said it plans to expand parental alerts to certain artificial intelligence interactions in the future, notifying guardians if teenagers attempt to engage in conversations related to suicide or self-harm with AI-powered systems.
The planned expansion follows rising scrutiny of AI chatbots developed by major technology companies amid concerns that automated systems may respond inadequately to sensitive mental health discussions.
Regulatory attention has intensified in the United States, where authorities are reviewing online child protection rules governing how digital platforms collect data used for age verification technologies.
Meanwhile, separate legal filings tied to another case have raised questions about whether encryption measures on Meta platforms could complicate reporting of child exploitation material to law enforcement — allegations the company has denied.
The growing wave of lawsuits and investigations has placed digital safety for minors at the centre of global debate over social media regulation, with policymakers weighing stricter oversight of platform design and content moderation practices.
Meta said the new parental alert system represents an early step as it continues testing safeguards intended to balance youth safety, privacy protections and parental oversight on its platforms.