Instagram on Thursday said it would roll out a new feature to help protect teenagers who search for content related to suicide or self-harm.
The social media platform, which already restricts access to such content on teen accounts and redirects users to helplines instead, said that "repeated" searches for it will lead to parents being alerted.
However, this feature will only work for parents who are already enrolled in Instagram’s parental supervision programme.
"Our goal is to empower parents to step in if their teen’s searches suggest they may need support. We also want to avoid sending these notifications unnecessarily, which, if done too much, could make the notifications less useful overall," Meta said in a blog post.
The alerts will be sent to parents via email, text, or WhatsApp, depending on the contact information provided in the programme, as well as through an in-app notification.
"These alerts will roll out to parents who use Instagram’s parental supervision tools in the US, UK, Australia, and Canada next week, and will become available in other regions later this year," the social media giant added.
This comes as Instagram parent Meta is embroiled in two cases over allegedly harming teenagers' online wellbeing: a Los Angeles trial probing claims that social media platforms deliberately cause addiction in minors, and a New Mexico trial examining whether Meta failed to protect minors from online sexual exploitation on its platforms.
Thousands of families, along with school districts and government entities, have sued Meta and other social media platforms such as YouTube, claiming that the platforms were addictive by design and failed to protect kids from content that could lead to depression, eating disorders, and suicide, according to an Associated Press report.
Meta executives, including CEO Mark Zuckerberg, and the company's lawyer Paul Schmidt have maintained that the scientific community is divided over social media addiction and the effects of heavy social media use.
Meanwhile, in the same Thursday blog post, Meta said it also plans to roll out similar protections for teens interacting with its AI.
"We’re now building similar parental alerts for certain AI experiences. These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI. This is important work and we’ll have more to share in the coming months."