Facebook Releases Content Moderation Guidelines

A year after the Guardian revealed Facebook’s secret rules for content moderation, the company has released a public version of its guidelines for what is and is not allowed on the site, and for the first time created a process for individuals to appeal censorship decisions.

The disclosure comes amid a publicity blitz by the company to regain users’ trust following the Observer’s revelation in March that the personal Facebook data of tens of millions of users was improperly obtained by a political consultancy.

In a Facebook post on Tuesday, the company’s chief executive, Mark Zuckerberg, said that the publication of the guidelines was a step toward his personal goal “to develop a more democratic and independent system for determining Facebook’s Community Standards”.

The 27-page document provides insight into the rationale Facebook uses to develop its standards, which must balance users’ safety and the right to free expression while maintaining a sufficiently sanitized platform for advertisers.

The company has long banned content from terrorist organizations and hate groups, for example, but the document reveals how Facebook actually defines such groups.

“You have told us that you don’t understand our policies; it’s our responsibility to provide clarity,” said Monika Bickert, Facebook’s vice-president of global policy management, in a blogpost.

Other details include bans on posts in which an individual admits to using non-medical drugs “unless posted in a recovery context”, posts that compare a private individual to “animals that are culturally perceived as intellectually or physically inferior or to an inanimate object”, and images that include the “presence of by-products of sexual activity” unless the content is intended to be satirical or humorous.

One guideline bans images that include a “visible anus and/or fully nude close-ups of buttocks unless photoshopped on a public figure”. Spam messages are not allowed to “promise non-existent Facebook features”. The company also bans so-called “false flag” assertions (eg claiming that the victims of a violent tragedy are lying or are paid actors), defining such statements as harassment.

Facebook’s role as a global publisher and censor has long created controversy for the platform, and free speech advocates have for years called for more transparency from Facebook about its decision-making process for content takedowns.

Sarah T Roberts, a UCLA assistant professor of information studies, said: “[The disclosure] shows in no uncertain terms the great power these platforms have in terms of shaping people’s information consumption and how people formulate their points of view – and how important it is to understand the internal mechanisms at play.”

But the document leaves out the kind of specific examples of policy enforcement that the Guardian published in the Facebook Files. Those documents, which were used for training content moderators, revealed how complicated it is to apply seemingly straightforward standards. According to training slides on hate speech, for example, the statements “You are such a Jew” or “Migrants are so filthy” are allowed, but writing “Irish are the best, but really French sucks” is not.

The release of Facebook’s rules draws attention to the difficult work of the 7,500 content moderators, many of them subcontractors, who are tasked with applying the rules to the billions of pieces of content uploaded to the platform. The low-paid job involves being exposed to the most graphic and extreme content on the internet, making quick judgments about whether a certain symbol is linked to a terrorist group or whether a nude drawing is satirical, educational, or merely salacious.

“Now that we know about the policies, what can we know about the people who can enforce them?” said Roberts. “How can we be assured they have the support and information they need to make decisions on behalf of the platform and all of us?”

While free speech advocates have long called on Facebook to provide some form of appeals process for content removal, many said on Tuesday that the company’s current plan was inadequate.

The new plan, which the company said would be built upon over the next year, will allow users whose posts were removed for nudity, sexual activity, hate speech or violence to request having their posts re-reviewed by a human moderator.

“Users need a meaningful, robust right to appeal the removal of any post, and before it is removed,” Nicole Ozer, the director of technology and civil liberties for the ACLU, said in a statement. She called on Facebook to provide a process whereby users can explain why their content should not be censored; she also urged Facebook to release statistics about its content removals.

Ozer also raised concerns about Facebook’s reliance on artificial intelligence to enforce content rules. “AI will not solve these problems,” she said. “It will likely exacerbate them.”


