Original caption: Internal data showed Facebook was unable to stop the spread of Trump's false claims even after labeling his posts. Source: cnBeta.COM
According to internal data seen by BuzzFeed News, Facebook has been labeling President Donald Trump's false election posts but has been unable to stop their spread on the platform. Since the 2020 US presidential election, Trump has repeatedly spread false information questioning President-elect Biden's victory, and has drawn enormous engagement on Facebook. The company has tried to mitigate the situation by adding labels to false claims and directing people to accurate information about the election and its results.
But according to discussion on the company's internal message board, this has done little to keep Trump's false claims from going viral on Facebook. After an employee asked last week whether the company had any data on the labels' effectiveness, a data scientist revealed that these labels, internally called "informs," did little to reduce sharing.
"We have evidence that applying these informs to posts reduces their reshares by approximately 8%," the data scientist said. "However, since Trump gets so many shares on any given post, the reduced proportion won't change things by orders of magnitude."
The data scientist noted that the labels were never expected to reduce the spread of false content; instead, they are used to "provide factual information related to the post."
"Prior to this election, we developed informational labels and applied them to candidate posts with the goal of connecting people with reliable sources about the election," Facebook spokesperson Liz Bourgeois said in a statement. She added that the labels are "just one piece of our larger election integrity efforts."
Before the election, both Facebook and Twitter clarified their content policies and practices, informing the public that they would attach labels to misleading posts pointing people toward more accurate information about the race. Twitter has been aggressive in restricting the spread of misleading election information, in some cases preventing Trump's tweets from being liked or retweeted. Last week, the company said it had labeled approximately 300,000 tweets as containing misleading election information and had restricted more than 450 tweets from being liked or retweeted.
"We saw an estimated 29% reduction in quote tweets of these labeled tweets, due in part to a prompt that warned people prior to sharing," the company wrote in a blog post, referring to the feature that lets users add their own comment to a tweet while sharing it.
Facebook, by contrast, has not implemented measures to prevent users from engaging with Trump's election-related posts. Even when they are labeled, people can still share or like them.
Earlier this year, Facebook removed a Trump post, but only because it violated the company's rules on COVID-19 misinformation.
"We have a responsibility to help protect the integrity of the election, to eliminate confusion, and to provide credible, authoritative information when possible," Facebook CEO Mark Zuckerberg told employees at a company-wide meeting on October 15. While he discussed the use of labels in that conversation, he did not mention any effort to limit the spread of Trump's election misinformation.
The 8% drop in engagement from the election labels compares poorly with Facebook's similar efforts to add context to false content. In 2017, the company claimed that once fact-checkers labeled content as false, its spread dropped by 80%. Facebook labels politicians' false election content, but it does not reduce that content's reach.
Facebook does not allow its fact-checking partners to evaluate content from politicians like Trump. Instead of issuing fact-checks directly, Facebook created a set of labels designed to point people toward credible election information. These labels have done little to stop Trump or the spread of his false information. On Sunday night and Monday morning, Trump posted twice: "I won the election!" The two false posts attracted more than 1.7 million reactions, 350,000 comments, and 90,000 shares.
According to data from CrowdTangle, a Facebook-owned analytics platform, these posts – along with another Sunday post in which Trump cast doubt on the election results – were the three most-engaged posts on Facebook in the past 24 hours.
Trump's posts, and Facebook's decision to keep them online, sparked public criticism and scrutiny from Facebook employees.
"Does anyone think that the 'This post might not be true' label is at all effective in curbing the spread of misinformation?" one Facebook employee asked on the company's internal message board. "I feel like people have quickly learned to ignore these labels at this point. Are we limiting the reach of these posts, or are we just hoping people will do it organically?"
One employee pointed to the high engagement rate on a Trump post that lied about his winning the election, saying it "feels like people are not put off by our soft context."
"Our refusal to hold accounts with millions of followers to the same standards as everyone else (and often holding them to lower ones) is one of the most disturbing things about working here," another employee said.
A Facebook researcher working on civic integrity said the company had been unable to measure people's reactions to the labels, and echoed the data scientist's point that their impact on engagement was negligible. They noted: "This would also indicate that, given the company's policy of not fact-checking politicians, there is no other option at this time."