Meta will soon notify parents when their teenagers repeatedly search for suicide or self-harm terms on Instagram. The feature will be delivered through the company’s existing parental supervision tools. The move marks the first time Meta has proactively informed parents about harmful search activity rather than only blocking terms.
Until now, Instagram has blocked certain searches and redirected users to external support services. Meta is now adding direct alerts to parents as an extra safeguard. Parents and teens enrolled in Instagram’s Teen Accounts in the UK, US, Australia, and Canada will begin receiving notifications next week. The company plans a global rollout at a later stage.
Strong Criticism From Suicide Prevention Charity
The Molly Rose Foundation has sharply criticized the new measure. Chief executive Andy Burrows warns that the alerts could create serious risks. He says forced disclosures may cause more harm than good.
The family of Molly Russell founded the charity after she died in 2017 at age 14. She had viewed self-harm and suicide material on several platforms, including Instagram. Burrows says every parent wants to know if their child is struggling. However, he argues that sudden notifications could leave parents panicked and unprepared for sensitive conversations.
Meta says it will accompany the alerts with expert guidance and provide resources to help parents handle difficult discussions. Ian Russell, who chairs the foundation, remains skeptical. He says a parent receiving such a message at work could react with shock and confusion, and he questions whether support materials can help in that immediate moment of panic.
Charities Say Platforms Must Do More
Several charities argue that the announcement shows Meta acknowledges deeper problems. Ged Flynn, head of Papyrus Prevention of Young Suicide, welcomes the step but calls it insufficient. He says young people continue to get pulled into a dark online environment.
Flynn reports that parents contact his organization daily with concerns about online risks. He says families do not want warnings after harmful searches occur. They want platforms to prevent dangerous material from appearing in the first place.
Leanda Barrington-Leach, executive director at 5Rights Foundation, urges Meta to redesign its systems. She calls for age-appropriate protections by default. Burrows also points to research from his foundation. He claims Instagram still recommends harmful content about depression and suicide to vulnerable users.
He insists companies must address systemic risks instead of shifting responsibility to parents. Meta disputes the foundation’s findings from last September. The company says the report misrepresents its efforts to protect teenagers and empower families.
Increased Scrutiny on Social Media Platforms
Instagram designed the Teen Account alerts to flag sudden changes in search behavior. Meta says the system builds on existing protections. The platform already hides suicide and self-harm content and blocks certain dangerous searches.
Parents will receive alerts by email, text message, WhatsApp, or directly within the app. Meta selects the channel based on the contact details families provide. The company says its system may occasionally send alerts without serious cause. It states that it prefers to err on the side of caution.
Sameer Hinduja, co-director of the Cyberbullying Research Center, says any such alert will alarm parents. He argues that the value of the system depends on the quality of guidance that follows. He stresses that companies must not leave parents alone after sending a notification. He believes Meta recognizes that responsibility.
Instagram also plans to extend alerts to conversations with its AI chatbot. The company notes that many young users increasingly turn to artificial intelligence for support. Governments worldwide continue to pressure social media firms to strengthen child safety measures.
Australia has already banned social media use for children under 16. Spain, France, and the UK are considering similar restrictions. Regulators are closely examining how large technology firms engage with young users. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court. They defended the company against allegations that it targeted younger audiences.
