Facebook's internal policies guiding moderators revealed for first time: The Guardian
For the first time, a behind-the-scenes look at Facebook's policies and standards for what the social media giant will and will not publish has been revealed in a report by The Guardian.
The results show how complicated it is to reconcile the opposing needs to permit a wide variety of speech and images on the one hand, and to prohibit language that can be inflammatory, or give rise to violence or danger on the other.
The British publication cites leaked policy guidelines, which show how ethically difficult it can be to moderate a site used by two billion people.
The Guardian has seen more than 100 training manuals meant for Facebook employees, which give an outline of how the giant website moderates issues like violence, hate speech, terrorism, pornography, and self-harm.
Moderators say they are so overwhelmed by the sheer bulk of posts that censorship decisions must be made in "just 10 seconds". Reportedly, there are 6.5 million reports a week relating purely to fake accounts. The site even has guidelines on match-fixing and cannibalism.
One source told The Guardian, "Facebook cannot keep control of its content...It has grown too big, too quickly".
Moderating sexual content is seen as perhaps the most difficult challenge. Separating what constitutes a real threat from mere frustration expressed in dry, sarcastic but violent terms has also proven difficult, and is often perplexing to users.
The report claims that remarks like "Someone shoot Trump" ought to be deleted, because as head of state he falls into a protected category. Yet Facebook can deem statements that many may find disturbing to be non-credible threats, such as: "To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat".
Facebook is under pressure in the US and in Europe to be regulated in the same way as mainstream broadcasters and publishers.