Current Measures In Place

What currently exists to protect us from harmful online content?



Facebook has developed an algorithm which touches nearly every post, rating each piece of content on a scale from zero to one, with one representing the highest likelihood of "imminent harm," according to a Facebook representative.

Once a post is flagged for potential suicide risk, it is sent to Facebook's team of content moderators, who the company says are trained to screen posts accurately for that risk.

Facebook does not scan posts in the European Union because of the region's privacy protections under the General Data Protection Regulation (GDPR), which requires that users give websites specific consent to collect sensitive information, such as information about a person's mental health.



Users who encounter this type of content on Twitter can alert a dedicated Twitter team that handles reports about accounts which may be engaging in self-harm or suicidal behaviour.

After receiving a report of someone who may be thinking about self-harm or suicide, Twitter will contact the affected individual to let them know that someone who cares about them has identified that they may be at risk of harm.

Twitter will also encourage the user to seek support, and provide information about dedicated online and hotline resources that can help.



Users can report a harmful post to Instagram. Instagram may then send the person concerned resources developed with suicide prevention experts. In some cases, Instagram may contact emergency services if an individual seems to be in immediate danger.

Users can also report at-risk behaviour during a live broadcast; the person reported will then receive a message offering help, support, and resources.

To help people avoid posts that might be upsetting, Instagram may limit the visibility of certain posts that have been flagged by the Instagram community for containing sensitive content.



When a user searches on Google for a term relating to self-harm or suicide, a Samaritans helpline banner is displayed with a telephone number and website address.


Google Trends

Google Trends can be used to determine the popularity of keywords. It also provides Related Topics and Related Queries, which proved useful for assessing the relevance and accuracy of keywords.

However, the Related Topics and Queries for a term often indicate that results have been skewed by viral trends, e.g. the film Suicide Squad.
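One way to reduce this noise before analysing related queries is to filter out entries that match known viral terms. The sketch below assumes a hand-maintained list of viral terms and a hypothetical set of related-query results; in practice the queries would come from the Google Trends "Related queries" panel for a seed keyword.

```python
# Hypothetical related-query results for a seed keyword; in practice these
# would be copied or exported from the Google Trends "Related queries" panel.
related_queries = [
    "suicide squad",
    "suicide squad 2",
    "suicide hotline",
    "suicide prevention",
]

# Viral terms known to skew results (an assumed, hand-maintained list).
VIRAL_TERMS = {"suicide squad"}

def filter_viral(queries, viral_terms=VIRAL_TERMS):
    """Drop any query that contains a known viral term."""
    return [q for q in queries
            if not any(term in q.lower() for term in viral_terms)]

print(filter_viral(related_queries))
# ['suicide hotline', 'suicide prevention']
```

A substring match is used deliberately so that variants such as "suicide squad 2" are also excluded.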

Below are the top five related queries for "how to kill myself," generated with the Google Trends tool.

[Google Trends screenshot: top five related queries]

Ahrefs Keywords Explorer

Ahrefs Keywords Explorer (AKE) is a tool that provides exact search metrics. When testing the metrics of 10 keywords identified via Google Trends, AKE showed that each belonged to a broader parent topic, such as "how to tie a noose," "I hate my life," and "kill me."

Reviewing these parent topics surfaces hundreds of keywords representing suicide and depression.

Below is a combined analysis of all suicide-related keywords, showing 611,000 suicidal searches per month in the United States alone:

  • Of the individuals who searched for suicidal terms, only 40,000 clicked on the first link (the Suicide Prevention Lifeline).
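The figures above imply a click-through rate from suicidal searches to the top prevention resource of roughly 6.5%, as a quick calculation shows:

```python
# Figures from the Ahrefs Keywords Explorer analysis above.
monthly_searches = 611_000   # suicide-related searches per month, US
first_link_clicks = 40_000   # clicks on the first result (Suicide Prevention Lifeline)

ctr = first_link_clicks / monthly_searches
print(f"Click-through rate: {ctr:.1%}")
# Click-through rate: 6.5%
```

In other words, well over 90% of these searches never reach the most prominent prevention resource.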



Apify is a platform for web scraping, data extraction, and robotic process automation (RPA).

The tool enables scraping and downloading of posts, profiles, places, hashtags, photos, and comments from the social media platforms listed below. Apify also supports search queries and URL lists.

  • Facebook
  • Twitter
  • LinkedIn
  • Instagram
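A scraping run on Apify is typically driven through its API client. The sketch below uses the `apify-client` Python package; the Actor name (`apify/instagram-hashtag-scraper`) and the input field names (`hashtags`, `resultsLimit`) are illustrative assumptions, so the actual input schema should be checked in the Apify Console before use.

```python
def build_hashtag_input(hashtags, limit=20):
    """Build a run input for a hashtag-scraping Actor.

    Field names here are assumptions for illustration; consult the
    chosen Actor's input schema for the real ones.
    """
    return {"hashtags": list(hashtags), "resultsLimit": limit}

def scrape_hashtags(token, hashtags):
    """Run a hashtag-scraper Actor and yield the scraped items.

    Requires a real Apify API token and network access.
    """
    # Imported lazily so build_hashtag_input works without the package.
    from apify_client import ApifyClient  # pip install apify-client

    client = ApifyClient(token)
    run = client.actor("apify/instagram-hashtag-scraper").call(
        run_input=build_hashtag_input(hashtags)
    )
    yield from client.dataset(run["defaultDatasetId"]).iterate_items()

# Example call (needs a valid token):
#   for item in scrape_hashtags("YOUR_APIFY_TOKEN", ["mentalhealthsupport"]):
#       print(item)
print(build_hashtag_input(["mentalhealthsupport"]))
```

Results land in an Apify dataset, which can then be iterated or exported for keyword analysis like that described above.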

©2021 R;pple Suicide Prevention Ltd