Live: Guide to the post-election hearing

At today's Senate hearing, Sen. Amy Klobuchar asked Jack Dorsey whether Facebook's decision to cut off Vine led to the demise of Twitter's beloved video-sharing platform.

"Could you tell me about the actual impact of Facebook's actions on Vine's business, Vine's ability to compete and your decision to shut down the service?" Klobuchar, the top Democrat on the Senate Judiciary antitrust subcommittee, asked Dorsey.

Dorsey said that Facebook's actions made it "challenging" for Vine to compete, leading to the app's shutdown in 2016.

"I don't know about the intent on the other side but I know our own experience was, we found it extremely challenging to compete ... and ultimately decided that the ball moved past us and we shut it down," Dorsey said. "Again, I don't know the specifics and the tactics and what was done, but we did find it [to be a] very, very challenging market to enter even though we existed prior to some of our peers doing the same thing."

Klobuchar raised Vine as a potential example of Facebook squashing an up-and-coming competitor.

During Tuesday's Senate Judiciary Committee hearing, Facebook CEO Mark Zuckerberg called on Congress to pass regulations that would require tech platforms to issue regular transparency reports, laying out the actions they've taken on violative content.

"That way, t he people who are responsible for holding all of us accountable, whether it's journalists, Congress, academics, could have an apples- to-a pples comparison about how all the different companies are doing," Zuckerberg said. He also suggested that such a law "require that companies even maintain a certain level of effectiveness."

That level of detail was new for Zuckerberg, who has, in the past, called for Section 230 to be modified to increase transparency, but has stopped short of saying exactly what he believes such transparency should look like. Facebook already releases intermittent transparency reports, showing not only the amount of content it's taking action on, but also what percentage of that content is caught by automatic filters, what percentage of it is appealed and what percentage is restored upon appeal.

Such a scientific dissection of these actions is complex even for a company of Facebook's size. It would be far more challenging for smaller platforms, leading some to argue that Zuckerberg's proposal is, in and of itself, anticompetitive. "This is why he wants to make it a requirement for everyone. The resources it would require from smaller platforms is now a competitive advantage for Facebook," Ifeoma Ozoma, a former Facebook policy staffer, tweeted Tuesday. "Him saying it would provide apples to apples comparisons is wrong. Also, [without] context, removal numbers are useless."

Mark Zuckerberg and Jack Dorsey testify Tuesday at 7 a.m. Pacific before the Senate Committee on the Judiciary, where they'll be grilled about "censorship, suppression, and the 2020 Election."

Jack Dorsey and Mark Zuckerberg are set to testify before Congress. Again. The hearing will be focused on evidence-free allegations that their platforms routinely censor right-wing voices. Again. But this time, the hearing will take place only a few weeks after election night, and it will focus on how the platforms handled their most consequential test since 2016.

The Senate Judiciary Committee's Tuesday hearing, titled "Breaking the News: Censorship, Suppression, and the 2020 Election," will be the third time the tech executives address Congress this year, and it's certain to be rife with political theater.

But it will be important to hear how Dorsey and Zuckerberg, the heads of two of the world's most powerful social networks, reflect on their handling of the 2020 election, especially after their mistakes and negligence in 2016 set off the so-called "techlash" in Washington. And those reflections will double as a defense as Republicans and Democrats question their decisions over the last several weeks.

Both Dorsey and Zuckerberg will also offer their fullest comments yet about how they'd like to see Section 230 reformed.

Here are the most important sections of Zuckerberg and Dorsey's prepared remarks.

Mark Zuckerberg

"We partnered with election officials to remove false claims about polling conditions and displayed warnings on more than 150 million pieces of content after review by our independent third-party fact-checkers."

"We strengthened our enforcement against militias, conspiracy networks, and other groups to help prevent them from using our platform to organize violence or civil unrest in the period after the election. We have already removed thousands of these groups from our platform, and we will continue our enforcement during this transitional period."

"Earlier this year, we announced a partnership with a team of independent external academics to conduct objective and empirically grounded research on social media's impact on democracy … We hope that the insights these researchers develop will help advance society's understanding of the intersection of technology and democracy and help Facebook learn how we can better play our part."

"Updating Section 230 is a significant decision, but we support the ideas around transparency and industry collaboration that are being discussed in some of the current bipartisan proposals, and I look forward to a meaningful dialogue about how we might update the law to deal with the problems we face today."

Jack Dorsey

"Knowing that overly burdensome government regulatory schemes are not always nimble nor quick and can have unintended consequences, encourage Congress to work with industry and civil society to build upon Section 230's foundation, whether it be through additions to Section 230, industry-wide self-regulation best practices, or a new legislative framework."

"Approximately 300,000 Tweets have been labeled under our Civic Integrity Policy for content that was disputed and potentially misleading … Approximately 74% of the people who viewed those Tweets saw them after we applied a label or warning message. We saw an estimated 29% decrease in Quote Tweets of these labeled Tweets due in part to a prompt that warned people prior to sharing."

"We … recognize that we can do even more to improve to provide greater algorithmic transparency and fair machine learning. The machine learning teams at Twitter are studying these techniques and developing a roadmap to ensure our present and algorithmic models uphold a high standard when it comes to transparency and fairness."

"We want to be very clear that we do not see our job in this space as done."

When they testify before the Senate Judiciary Committee on Tuesday, Mark Zuckerberg and Jack Dorsey will undoubtedly try to convince lawmakers that their companies took unprecedented actions this year to protect the 2020 election.

If lawmakers actually do their job this time, they could get answers about whether any of those actions worked.

Yes, the last Senate hearing featuring Zuckerberg, Dorsey and Sundar Pichai (who will not attend Tuesday's hearing) was an unmitigated partisan disaster, and there's no guarantee this one will be any different. But with the election behind us and attempts to undermine it still very much ongoing, members of the committee have a chance to redeem themselves by getting to the bottom of important questions about how these platforms have dealt with those attempts from President Trump on down.

The hearing was initially scheduled after Facebook and Twitter limited the spread of a viral New York Post story about President-elect Biden's son Hunter in October. Now, Republicans seem even more primed to cry censorship than when the hearing was first announced, given the huge volume of warning labels the two companies have since slapped on President Trump's own posts. That means if anyone is going to get to the bottom of whether the platforms' strategies were effective, it will likely be the Democrats.

Perhaps the most important question Zuckerberg and Dorsey could answer is whether warning labels actually stopped or slowed the spread of misinformation. That's nearly impossible for researchers outside of the companies to figure out. "As external researchers to those platforms, it's difficult to measure the effects of their interventions because we don't know what those interventions are or when they happen, or what combination of interventions are happening," Kate Starbird, an associate professor at the University of Washington, said on a recent call with the disinformation research group the Election Integrity Partnership.

Twitter hinted at some of these answers in a blog post last week, in which executives said tweets that included warning labels saw a 29% decrease in quote tweets. But that figure didn't distinguish between the subtle labels that appeared below some tweets and the more forceful ones that required users to click through an interstitial before they could view the tweet at all.

Twitter also touted its "pre-bunk" notifications, which appeared at the top of users' feeds and informed them that voting by mail was safe and that election results might be delayed. Those prompts were viewed by 389 million people, according to Twitter, but that number says very little about the impact those prompts had on those people.

So far, Facebook hasn't shared any such numbers illustrating its labels' effectiveness. "We saw the same posts on Twitter and Facebook receive pretty different treatments," said Jessica González, co-CEO of the advocacy group Free Press. "Facebook had a more general message, which was almost the same as the message they put on any post people posted that had anything to do with the election. I'm worried about the milquetoast nature."

González said lawmakers should use this opportunity to press both companies on whether and how they're studying those qualitative questions about their warning labels and what results, if any, they've found so far.

Erin Shields, national field organizer at MediaJustice, which is part of a group called the Disinfo Defense League, said Zuckerberg and Dorsey need to answer questions about their treatment of repeat offenders. This is a concern other disinformation researchers at the Election Integrity Partnership have recently raised as well, regarding a slew of far-right personalities who have repeatedly spread voting misinformation. Twitter recently permanently suspended an account belonging to Steve Bannon over a video in which he argued Dr. Anthony Fauci and FBI Director Christopher Wray should be beheaded. Facebook took down the video but left Bannon's account untouched.

"At what point do those rule violators get suspended?" Shields said. "Regular, everyday people get booted off the platform and get their accounts suspended for much less. It's interesting to see how much grace these platforms are giving political actors who are consistently violating their policies."

One related question from Shields: How much of the violative content consisted of live videos, like Bannon's, or memes? And how much longer did it take the platforms to act on those posts, as opposed to posts containing text? The answer to that question, Shields argues, could say a lot about how porous these platforms' defenses are when it comes to video and imagery.

"We know they have some ability to check words, but what are they doing about memes and graphics and, in particular, live video where disinformation and misinformation is being shared with no pushback from the platforms?" Shields said.

This question is what makes YouTube CEO Susan Wojcicki's absence from the hearing so conspicuous. YouTube took by far the most hands-off approach to election misinformation, allowing videos falsely declaring President Trump's victory to remain on the platform and rack up views. YouTube added subtle warning labels to some videos and removed their ability to run ads, but was far less proactive than either Facebook or Twitter in directly contradicting misinformation within warning labels.

YouTube has pushed back on some of the criticism it's faced, stating that 88% of the top 10 results in the U.S. for election-related searches come from authoritative sources. But that stat elides the fact that people often encounter YouTube videos on other websites or social media platforms, without ever touching YouTube search. Given how much coordination there was between tech platforms and federal agencies leading up to Election Day, it's unclear why YouTube took such a markedly different approach. "YouTube has been let off the hook here," Shields said. Without Wojcicki there, the Senate will have to save those questions for another day.