FACEBOOK has started internally rating its users' trustworthiness as part of its fight against fake news.
The social media giant developed the system over the course of a year as part of its effort to combat the spread of fake news, reports The Washington Post.
Through this system, each user is assigned a reputation score from zero to one that rates their trustworthiness.
The system is not an absolute indicator of a person’s credibility, but simply one of the many safeguards that Facebook has implemented to prevent the spread of misinformation.
According to the report, Facebook designed the credibility system to help moderators predict which users tend to erroneously flag others' content as a form of information warfare.
“[It is] not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” said Tessa Lyons, the product manager in charge of fighting misinformation, in an interview.
Facebook has not disclosed the criteria used to determine a user's score, in order to keep the system from being manipulated. However, the lack of detail about how such fake news countermeasures work tends to make some users uncomfortable.
Claire Wardle, director of First Draft, a research lab within the Harvard Kennedy School that focuses on the impact of misinformation and that is a fact-checking partner of Facebook, said, “The irony is that they can’t tell us how they are judging us—because if they do, the algorithms that they built will be gamed.”
Other tech companies have also kept mum about their methods of curtailing the spread of fake news to prevent what is referred to as “gaming,” or the manipulation of anti-fake news systems to further spread fake news.