How Reporting Violations on Social Media Works

There is a lot of confusion about reporting content on social media platforms. Here we hope to debunk some myths and help you understand the process, so that you can make more effective reports on social media.

What happens to my report?

When you report a piece of content or a profile on a social media platform, you have to select what type of problem it is: abuse, graphic content or copyright infringement, for example. Once you have selected this, the report is assessed by an algorithm against a pre-set standard for that type of report. The best way to explain this is with an impersonation report: someone on Facebook has taken your name and your pictures to create an account, which they then use to start adding your friends.

When you report this account and choose "they are pretending to be me", the algorithm kicks into action. It compares the profile you have reported from with the profile you are reporting. It will see that your genuine account has been active for a few years, whereas the fake was only set up in the last week or so. It will see that your genuine profile has posted statuses, shared pictures, checked into places and been tagged by friends fairly steadily since the account was opened, whereas the fake will have very little or none of this. In this situation the algorithm can quite quickly identify the account as fake, and it will be removed.

If, however, you had reported the page for being "abusive" (it can feel like a very abusive invasion of privacy), the algorithm would instead scan the page for abusive language, which may not be there. It's therefore really important to make the right report.
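
To make that comparison concrete, here is a minimal, purely illustrative sketch in Python of the kind of heuristic such a system might apply. The field names, thresholds and scoring below are our own assumptions for illustration, not any platform's actual logic.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class Profile:
        created: date        # when the account was opened
        post_count: int      # statuses, photos and check-ins combined
        tag_count: int       # times tagged by friends

    def looks_like_impersonation(genuine: Profile, reported: Profile) -> bool:
        """Illustrative heuristic only: flag the reported account when it is
        both much newer and much less active than the genuine one."""
        today = date.today()
        genuine_age = (today - genuine.created).days
        reported_age = (today - reported.created).days

        much_newer = reported_age < 30 and genuine_age > 365
        much_less_active = (reported.post_count + reported.tag_count) < \
            0.05 * (genuine.post_count + genuine.tag_count)
        return much_newer and much_less_active

    # Example: a years-old, active account versus a week-old copy with no history.
    today = date.today()
    me = Profile(created=today - timedelta(days=4 * 365), post_count=420, tag_count=150)
    fake = Profile(created=today - timedelta(days=7), post_count=3, tag_count=0)
    print(looks_like_impersonation(me, fake))  # True

Real systems will weigh far more signals than this, but the principle is the same: the report type determines which comparison gets run.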

Who can see my report?

The short answer is: only you. As we have explored above, it is really unlikely that your report will even be assessed by a human. Facebook alone has over 2 billion users; if it were to employ enough staff to manually check every report it receives, it would probably bankrupt the company. That's not to say platforms can't do more; the value of human moderation is widely appreciated, but it is very expensive. We also still hear people ask, "If I report someone or something, will the person know I have reported them or their content?" The answer is categorically no. This myth has put a lot of people off reporting content, for fear that the person posting it will find out. No mainstream social media platform will ever tell someone who has reported them or their content, as that information could obviously be used to fuel further abuse.

Why hasn’t my report worked?

It can be very frustrating when you can see something online that you know shouldn't be there, but reporting the violation hasn't been successful. The best tip is to report the violation correctly, as we have mentioned: if you are reporting an impersonation, report the impersonation. Remember that a computer is assessing this, and while these systems are designed to be clever, no machine will pick up context like a human can. You are giving the system a set of instructions about what to look for, so make sure they are the right ones.
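
As a rough sketch of why the report category matters, the hypothetical routing below runs a different automated check depending on the category selected. The category names, word list and thresholds are again our own assumptions, not any platform's real implementation.

    # Hypothetical routing: each report category triggers a different automated
    # check, so choosing the wrong category runs the wrong check on the content.
    ABUSIVE_TERMS = {"exampleslur"}  # placeholder word list, not a real one

    def check_abuse(profile_text: str) -> bool:
        # Scans the reported page for abusive language.
        return any(term in profile_text.lower() for term in ABUSIVE_TERMS)

    def check_impersonation(genuine_age_days: int, reported_age_days: int) -> bool:
        # Compares account ages, as in the impersonation example above.
        return reported_age_days < 30 and genuine_age_days > 365

    def handle_report(category: str, **details) -> bool:
        checks = {
            "abuse": lambda: check_abuse(details["profile_text"]),
            "impersonation": lambda: check_impersonation(
                details["genuine_age_days"], details["reported_age_days"]),
        }
        return checks[category]()

    # A fake profile with no abusive language passes the "abuse" check...
    print(handle_report("abuse", profile_text="Hi, add me!"))           # False
    # ...but the same account is caught when reported as impersonation.
    print(handle_report("impersonation",
                        genuine_age_days=1500, reported_age_days=7))    # True

The same content sails through one check and fails another, which is exactly why an impersonation reported as "abuse" can come back as "no violation found".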

On the Professionals Online Safety Helpline we are in the unique position of being able to escalate reports to social media companies; however, we are only able to extend this help to the children's workforce in the UK. As it stands, there is nowhere for the general public to escalate these reports or get a second opinion. The Government is starting to recognise this, and many conversations are taking place around legislation and accountability. Perhaps in the future there will be a service for this, but until then, follow the guidance we've provided, familiarise yourself with the functions on the network, and you may get a better result.
