Instagram has announced a new feature that will notify parents if their teenager repeatedly searches for terms related to suicide or self-harm within a short period of time. Meta's announcement details how alerts will be sent to parents who use Instagram’s parental supervision tools, along with guidance and resources to help them approach sensitive conversations with their child.
We welcome measures that aim to strengthen online protections for young people and give parents greater awareness of the potential risks around exposure to harmful content online. Providing parents with timely information can be an important step in helping them support their children when they may be struggling with their wellbeing.
Intervention When Searching for Harmful Content
The new alerts are designed to trigger when a teen repeatedly attempts to search for phrases related to suicide or self-harm. Instagram already aims to block many of these searches, redirecting users to support resources and helplines instead; this additional feature aims to ensure parents are aware if their child continues to look for such content.
Providing parents with guidance on approaching sensitive topics alongside the alert is also a valuable aspect of the approach. Conversations about mental health can be difficult, and many parents may feel unsure how to respond if they believe their child is struggling, so the importance of offline conversations remains a constant priority.
The Limits of Keyword Detection
While the new alerts are well intentioned and will likely help prevent potential safeguarding concerns going forward, relying on harmful keyword searches alone will never capture the full picture of a young person’s wellbeing online.
Young people may use a wide range of language when discussing or searching for difficult emotions. They may avoid obvious keywords altogether, use slang or coded language, or engage with content through images, videos or conversations rather than direct searches. And as this feature is limited to Instagram, searches may well be happening across other platforms too.
The timeframe for notification, and what ‘repeatedly searching within a short period of time’ actually means, needs to be more clearly defined and explained. Repeatedly searching for harmful content in a short space of time should prompt an intervention, but a young person’s wellbeing cannot be defined by how quickly they search for this type of content. Searches may be more spaced out, developing over a longer period of time, and may not reflect an immediate crisis. In this sense, alerts should not be limited to these rather specific scenarios.
Technology Must Support (Not Replace) Communication
Online safety tools are most effective when they complement open communication between young people and trusted adults. If children feel that searching online could immediately trigger a notification to their parents, some may simply move their searches elsewhere or stop looking for information online entirely, especially if they are worried about their trusted adult finding out. In some cases, this could prevent them from getting the help they require.
For this reason, it is vital that any monitoring or alert system in place is accompanied by clear communication from a trusted adult about why these tools exist and how they are used. Young people should feel reassured that the purpose of these alerts is not to punish or monitor them, but to ensure that if they are struggling, there is support available.
The most important protective factor for young people online and offline is a trusted relationship with a supportive adult. Parents and carers can help create this environment by:
- Talking regularly with children about their online experiences
- Reassuring them that they can speak openly about difficult feelings
- Responding calmly and supportively if concerns arise
- Helping children understand where to find trusted sources of support outside of family and friends
When young people feel safe speaking to adults, they are far more likely to seek help directly.
Support for Schools
Despite the limitations, Instagram’s new parental alerts represent a meaningful step toward giving parents better tools to support their teenagers. Initiatives like this show that platforms are continuing to recognise and respond to ongoing wellbeing concerns and adapt to the changing climate. You can catch up on Instagram’s additional safety features, along with those of other social media platforms, through our Instagram checklists.
We continue to recognise the importance of these technological solutions through our own Assisted Monitoring Service (AMS), which supports schools with identifying harmful online searches. Moving beyond social media alone, AMS notifies around self-harm/suicide, radicalisation, pornography, extremism and other types of harmful content across school devices. Prioritising early intervention, AMS acts as a platform to initiate conversations with young people and can help schools create a culture of support from the outset.