How Facebook decides what violent and explicit content is allowed

SAN FRANCISCO — Facebook is taking a lot of heat over the way it handles violent and disturbing content.

As the company scrambles to deal with videos of suicide and murder posted on its platform, a report from The Guardian gives new insights into the uncomfortable role the social media giant now plays as a content regulator.

The report is based on leaked documents that purportedly lay out the internal rules and guidelines that Facebook uses to review posts containing violence, hate speech, nudity, self-harm and terrorism.

It highlights the company’s struggle to censor harmful content without being accused of trampling on freedom of expression.

The Guardian says the documents were “supplied to Facebook moderators within the last year.” Facebook declined to confirm the Guardian’s reporting, but it didn’t dispute it.

“We work hard to make Facebook as safe as possible while enabling free speech,” said Monika Bickert, the company’s head of global policy management. “This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously.”

Facebook CEO Mark Zuckerberg announced earlier this month that the company was hiring 3,000 more people to help “review the millions of reports we get every week.”

The documents instruct Facebook moderators to delete remarks such as “Someone shoot Trump” because, as president of the United States, he falls into a protected category.

However, comments such as “To snap a b*tch’s neck, make sure to apply all your pressure to the middle of her throat,” or, “I hope someone kills you,” can be allowed to stay on the site because they aren’t viewed as credible.

Videos of violent deaths are also not always deleted because they can help create awareness for issues such as mental illness or war crimes, according to the guidelines.

Such videos are instead marked as disturbing and hidden from minors.

Facebook will allow users to livestream self-harm and suicide attempts because it doesn’t want to censor or punish people in distress, according to the documents.

If the person can no longer be helped, the footage will be taken down, unless the incident is deemed newsworthy.

Moderators are advised to ignore suicide threats expressed through hashtags or emoticons, or when the proposed method is unlikely to succeed.

Facebook has said in the past it is in a unique position to do more about the suicide epidemic.

In March, it announced it was testing artificial intelligence to identify potential “suicide or self injury” posts, using pattern recognition based on posts that have previously been flagged on the site.

The company said in a statement that it is trying to make it easier for reviewers to contact law enforcement if someone needs help. The goal, Facebook says, is to get resources to that person more quickly.

The documents show new guidelines on nudity after critics slammed Facebook last year for removing an iconic Vietnam War photo because the girl in the picture was naked. Such images are now permitted under “newsworthy exceptions.”

Photos of “child nudity in the context of the Holocaust,” however, are not allowed on the site.

“Handmade” art showing nudity and sexual activity is allowed, the documents show. Digitally made art depicting the same kind of content isn’t.

Revenge porn violates the social network’s abuse standards and will be removed from the site.

Facebook defines revenge porn as sharing nude or near-nude photos of someone, either publicly or with people the person did not want to see them, in order to shame or embarrass that person.