There was a long list of election day horror stories that could have appeared on social media. There could have been rampant interference by foreign governments, widespread hoaxes, an avalanche of deliberately false information about the vote, and much more.
The worst possibilities appeared to go unrealized Tuesday, some tech researchers said, though they weren’t ready to give tech platforms a glowing review just yet, especially after President Donald Trump unleashed a new wave of misinformation early Wednesday by falsely claiming that he had won.
A full account of how the end of the campaign unfolded on sites like Facebook, Twitter and YouTube is yet to come, as researchers and the companies themselves examine how people used the platforms. But early evaluations indicated that, publicly at least, social media did not stand out as a problem on Election Day.
“We could find more information in the next few days, but I haven’t seen any evidence of anything significant,” said Dipayan Ghosh, co-director of the Democracy and Digital Platforms Project at Harvard Kennedy School. Ghosh, a former Facebook adviser, has often been critical of the company.
“Despite the massive target the United States is, we haven’t really seen much, and social media has been quite effective in addressing the issues,” he said.
Any success the platforms achieved was not due to a lack of misinformation. There were many examples of misleading information and misstatements, even if their precise impact on the election results remains uncertain.
False information about voting in Pennsylvania circulated across right-wing social media accounts and websites, while in Virginia, election officials said a misleading video was circulating that showed a person burning sample ballots.
One of the most viral videos of the day, posted by the editor of a conservative news website, appeared to show a poll watcher being denied entry to a polling station in Philadelphia. It was shared more than 33,000 times and had accumulated 3 million views as of Wednesday, although there was no evidence of a widespread or deeper issue.
And more examples are still coming in, as people post false information about polling or counting practices.
Some disinformation efforts before the election appeared to have gained traction, particularly those targeting Latino and Black voters in Florida. Private messaging apps have also been a concern, as misinformation flowing between individuals or small groups can be difficult to track.
“We’re not done,” said Alex Stamos, a former Facebook security chief who is now director of the Internet Observatory at Stanford University. This election season he helped organize the Election Integrity Partnership, which involves more than 120 contributors at various institutions documenting misinformation.
“We will continue to operate, finding and flagging electoral misinformation, for as long as there is a significant opportunity for it because the election result is in doubt,” Stamos said. He said Wednesday night that the day had been as busy for his team as Tuesday.
YouTube, for example, faced questions Wednesday over a video making the unsubstantiated claim that Democrats were committing voter fraud against Republican ballots. Disinformation was also spreading on the video app TikTok, researchers said. That platform, too, had announced efforts to curb misleading information.
At least one example of misinformation on social media Tuesday was self-inflicted. Some Instagram users reported seeing notices from the app telling them to remember to vote “tomorrow,” a problem the company said affected users who had not restarted the app.
“It is early to declare victory in many respects, even if the platforms were successful in dealing with the problems they were preparing for, but it seems that the catastrophic scenarios did not occur,” said Matt Perault, director of Duke University’s Center on Science and Technology Policy and former policy director at Facebook.
If tech companies ultimately get high marks for their handling of the election, it could give them some vindication that their services have improved after four years of relentless criticism from lawmakers, users and their own employees.
Almost immediately after the 2016 election, executives like Facebook CEO Mark Zuckerberg faced questions about whether their platforms had distorted political debates and whether they had given Trump generous help.
Since then, tech companies have made a series of changes to stem the flow of misinformation online, including investigating covert foreign networks more aggressively, limiting the types of targeting advertisers can use, and revising their policies on posts that could lead to voter suppression. They have also stepped up the use of fact-check labels, although not always consistently.
In the weeks and months leading up to Election Day, the companies moved to inoculate their platforms against known super-spreaders of disinformation and political violence. Facebook banned accounts promoting the QAnon conspiracy theory, and Twitter restricted their reach. Facebook also removed thousands of “militia” groups after various events planned on the platform ended in real-world violence.
They even made some last-minute changes that went to the heart of how social media operates: Facebook suspended its recommendations of political groups, and Instagram disabled a hashtag search feature. Twitter said it would slow down some posts that included disinformation.
The full scope of how people may have used technology platforms during the election may not be known for some time. The fact that Russian operatives bought ads on Facebook in 2016, for example, was not publicly known until September 2017.
Joan Donovan, research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard Kennedy School, said the platforms still largely operate behind the scenes, with little transparency about how they enforce their policies.
“We do not know the extent of influence operations on social media platforms or the actions these companies have taken in recent weeks,” Donovan said. And even when tech companies remove problematic content, she said, they don’t always explain their actions well, adding to the narrative that they are suppressing speech.
However, the election is not over, and the platforms now face a considerable challenge in the president’s false claims about mail-in votes.
Both Facebook and Twitter acted quickly early Wednesday when Trump began making false claims, putting warning labels on his posts. By Wednesday night, the platforms’ actions had become almost routine.
And at the top of their feeds, the companies proactively posted information noting that votes were still being counted.