The leaders of America’s big tech companies came to testify about the legislation that shaped the modern Internet. They ended up accused by senators of abusing their power over political speech six days before the election.
“Who the heck chose you and put you in charge of what the media can report?” Sen. Ted Cruz, a Republican from Texas, asked Twitter Inc. CEO Jack Dorsey at a congressional hearing on Wednesday.
While the fiercest attacks came from Republicans, Democrats on the Senate Commerce Committee also questioned Dorsey, Google’s Sundar Pichai, and Facebook Inc.’s Mark Zuckerberg to determine whether Section 230 of the Communications Decency Act must be updated.
The rule gives online platforms some legal immunity from content posted by users. It has come under intense scrutiny after Facebook and Twitter recently limited the online reach of a New York Post story about the family of former vice president and Democratic presidential candidate Joe Biden, prompting claims of bias and censorship. Twitter backtracked, but the episode sparked tense exchanges during Wednesday’s hearing.
“The time has come for that free pass to end,” said Senator Roger Wicker, a Mississippi Republican who chairs the panel, referring to Section 230. The questioning quickly turned into partisan disputes. Republicans criticized the tech companies’ restrictions on US President Donald Trump’s posts, while Democrats said they feared the hearing was a Republican attempt to pressure the CEOs days before the election.
“It seems they want to bully and intimidate the platforms here to try to tilt them in favor of President Trump,” said Senator Richard Blumenthal, a Democrat from Connecticut. “The timing seems inexplicable except to try to work the referees.”
The tech executives, all showing up remotely, began their testimony by explaining the importance of the rule to building their businesses. Dorsey called Section 230 “the most important law on the Internet for freedom of expression and security,” and argued that repealing it would lead to more surveillance of content, not less.
Wicker asked Dorsey about what he called Twitter’s “double standard” for labeling posts from different world leaders, saying he had compiled dozens of examples of uneven application of company policies. The CEO said that the potential for real-world harm is one of the factors Twitter considers when deciding whether to place a warning on a specific tweet.
Section 230, passed as part of the CDA in 1996, allows internet companies to moderate user speech after it is posted online rather than before, shielding their platforms from constant legal challenges. Google’s YouTube video sharing site doesn’t have to pre-screen the millions of videos uploaded daily, and Facebook doesn’t have to read every comment; they can let users’ posts flow freely and clean up later if something harmful appears.
“Our ability to provide access to a wide range of information is only possible thanks to existing legal frameworks,” Pichai said.
The hearing eventually turned to the actual language of Section 230, which gives platforms the ability to remove content they deem lewd, harassing, or “otherwise objectionable,” among other criteria, from their services as long as they act “in good faith.” Republican lawmakers have said the language is too vague and shields the removal of political speech.
Sen. Shelley Moore Capito, a West Virginia Republican, asked the executives how they define the phrase “otherwise objectionable.” Multiple Republican lawmakers have introduced bills that seek to narrow the phrase and allow companies to remove only particular categories of content, such as material promoting terrorism or self-harm.
Zuckerberg argued that the current language allows companies like Facebook to capture more content that could constitute bullying or harassment, and Pichai said companies need flexibility. Many of the rules on Facebook and Twitter, for example, are written in a way that gives the companies more leeway to tackle new and unexpected problems.
Capito also challenged Dorsey and Zuckerberg’s argument that repealing Section 230 would hurt startups. “How many small innovators are there, and what kind of market share could they have, when we see the dominance of the three of you?” she asked.
Zuckerberg said Section 230 was pivotal when Facebook started. “If we were subject to more content demands because 230 didn’t exist, that probably would have made it prohibitive for me as a dorm room college student to start this company,” he said.
Next week’s US presidential election was a common thread throughout the hearing, often cited in examples of good and bad content moderation. Sen. Tom Udall, a Democrat from New Mexico, asked the three executives whether Russia and other foreign countries continue to try to use their platforms to influence elections. All three CEOs said yes.
Twitter and Facebook have recently removed account networks that originated in Iran and Russia. Facebook now makes monthly announcements about the networks it removes, with its last update on Tuesday.
One idea that has been raised is to create a single system for moderating content across all platforms, ensuring they all follow the same rules. But the three tech companies have no incentive to build a shared system to identify and moderate harmful information, said Sridhar Ramaswamy, former head of the advertising business at Google, owned by Alphabet Inc., in a recent interview.
He argued that any new system should not be restricted to the largest tech companies but should apply to the Internet more broadly. “A result that gives more responsibility to small teams is not a great result,” said Ramaswamy, who now runs the startup Neeva. “Because that becomes a huge moat that no one else can cross.”