Instagram to alert parents when teens repeatedly search self-harm and suicide content
Summary
Meta says Instagram will notify parents in selected countries if teens repeatedly search for suicide or self-harm terms; the change begins next week for Teen Accounts in the UK, US, Australia and Canada and is drawing criticism from the Molly Rose Foundation.
Content
Meta says Instagram will notify parents if a teen repeatedly searches for suicide or self-harm terms on the platform. The alerts are part of Instagram's Teen Accounts experience and build on existing protections, such as hiding related material and blocking certain searches. Instagram will initially notify parents and teens in the UK, US, Australia and Canada next week, with other countries to follow. The measures have drawn criticism from the Molly Rose Foundation, a suicide-prevention charity, and remain under public scrutiny.
Key points:
- Meta will send alerts to parents when a teen's searches for self-harm or suicide terms increase within a short period on Instagram.
- The initial rollout covers parents and teens enrolled in Instagram's Teen Accounts in the UK, US, Australia and Canada next week; a wider rollout is planned later.
- Alerts may be delivered by email, text, WhatsApp or in the Instagram app, depending on the contact details available, and Meta says they will include expert resources and may sometimes "err on the side of caution."
- The Molly Rose Foundation criticised the approach as potentially harmful, while Meta disputed findings cited by the foundation; experts emphasise the importance of the quality of resources sent with alerts.
- Meta said it may extend similar alerts in the coming months to cases where teens discuss self-harm or suicide with AI chatbots on Instagram.
Summary:
Instagram's alerts are intended to flag sudden changes in a teen's search behaviour and to provide parents with expert resources, though Meta acknowledges the system may sometimes flag non-concerning activity. The announcement has prompted criticism from a suicide-prevention charity and continued scrutiny from experts and regulators. The rollout begins next week in several countries, and further extensions, including alerts tied to AI chatbot conversations, are under consideration.
