Hello, and welcome to Protocol Policy! Today, we’re talking about whether all those bans on political ads post-2016 were actually good for democracy. Plus, privacy groups aren’t happy with the privacy bill and the Chips Act’s future is in doubt.

Bad ads

Digital political ads got a bad rap after the 2016 election — and for good reason. The Internet Research Agency’s antics on just about every Big Tech platform made it clear that tech giants needed more guardrails around what political actors could pay to say on their sites.

But while the post-2016 scrutiny led giants like Facebook and Google to set up stronger vetting systems, it also drove other platforms — namely Spotify, LinkedIn, TikTok and Pinterest — to prohibit political advertising altogether. So far, Spotify is the only one that’s decided to bring it back.

As we barrel toward the midterms, the question now is: What did these political ad bans really accomplish, beyond shifting the spotlight off of the platforms that imposed them?

The bans aren’t as apolitical as they seem. Take Twitter’s ban:

  • Under Twitter’s rules, ExxonMobil can advertise about the wonders of natural gas, but a climate PAC like 314 Action can’t buy an ad pushing back. That’s because Twitter prohibits political candidates, parties, PACs and elected officials from advertising.
  • This has “tilted the playing field” in favor of businesses, said Erik Polyak, managing director of 314 Action.
  • It’s a problem for unions too. “Hotel chains and airlines say, ‘We’re rated the No. 1 employer. Our employees love us,’ and then the union that represents the workers can’t advertise at all because of how they’re organized,” said Stephanie Grasmick, CEO of Democratic marketing firm Rising Tide Interactive.

Not that other platforms have fared much better.

  • TikTok banned political ads in 2019, but paying influencers to peddle political messages instead has become a widespread workaround, despite the company’s stated policy against it.
  • “It’s performative,” Tatenda Musapatike, a former Facebook employee and current CEO of the nonprofit Voter Formation Project, said of these bans. “They don’t want to end up like Facebook in 2017.”

The bans don’t seem to have curbed misinformation, either — but they have made it harder for smaller campaigns to compete.

  • Researchers analyzed the effects of the temporary ban on new political ads that Facebook put in place immediately before and immediately following the 2020 election and found that while it had little effect on misinformation, it did seem to hurt smaller campaigns, since digital ads are cheaper than radio, TV or other platforms.
  • The bans also appeared to hurt Democrats more than Republicans, as Democrats generally relied more on Facebook ads for small-dollar fundraising.
  • “It hurts challengers who might not have name recognition, who are trying to figure out: For 100 bucks, where can they get a good return on their investment?” said Matt Perault, Facebook’s former director of Public Policy and a professor at UNC's School of Information and Library Science.

All of this has made digital political ads a lot harder to track, as money moves from mainstream platforms to programmatic ads.

  • Not only do programmatic ad platforms have few rules around what advertisers can say, but there’s very little transparency around the ads those platforms run. And there’s very little public scrutiny of the companies that run them.
  • Buying ads this way can also drive up costs for campaigns, because navigating programmatic systems often takes a trained marketing consultant.

Allowing foreign governments and bad actors to spread misinformation through unchecked digital ads has obvious costs for democracy. But creating lopsided rules around political speech has costs too — costs that Perault argues aren’t being adequately considered by companies that have imposed these bans or lawmakers that have at times pressured them into it.

“Are we — going into the next momentous moment in the democratic governance of our nation — going to be in a better position to make informed decisions about what the right approaches are?” Perault said. “I think it's a travesty that we don't have that information.”

— Issie Lapowsky (email | twitter)

A version of this story first appeared on Protocol.com. Read it here.

In Washington

The American Data Privacy and Protection Act is riddled with loopholes, consumer groups warn. The bill prohibits companies from serving ads if they have “actual knowledge” that targeted users are younger than 17, which advocates say gives companies too much leeway to argue they just didn’t know minors were using their apps.

The clock is ticking on the Chips Act. A group of 123 CEOs — including those from Alphabet, Microsoft, Intel, Samsung and Amazon — urged Congress to move quickly to pass competitiveness legislation and get it signed into law. Commerce Secretary Gina Raimondo has also warned that if Congress doesn’t pass the Chips Act, “companies will have no other choice but to build” abroad.

A bipartisan bill would require U.S. companies to notify the federal government of investments in China considered crucial to supply chains. According to The Wall Street Journal, a draft of the bill would let the government block private investments in China on national security grounds.

Sen. Ed Markey called Ring doorbells a “surveillance system” that “threatens the public in ways that go far beyond abstract privacy invasion.” In a letter to Amazon CEO Andy Jassy, he said the audio capture features put at risk “the public’s right to assemble, move, and converse without being tracked.”

The National Highway Traffic Safety Administration released its first major study into self-driving and driver assist systems. It examined 392 accidents, with more than two-thirds of those involving Tesla vehicles. The agency’s administrator warned, however, that the study may raise more questions than it answers.



On Protocol

CFPB’s regulatory sandbox experiment isn’t going so well. Last week, the CFPB revoked a no-action letter at the behest of fintech underwriter Upstart. Upstart said it wanted to leave the program due to “changing priorities” at the CFPB. The decision came shortly after CFPB Director Rohit Chopra said that no-action letters and the fintech sandbox program were “ineffective” and reorganized the office behind them.

EV adoption may be faster than anticipated. BCG researchers estimated that EVs will account for one in five global car sales by 2025, and around 59% by 2035. A report last year from the same research group had considerably lower estimates: EVs were on track for 11% adoption by 2025 and 45% by 2035.

Net zero is out, “real zero” is in … at least if you ask NextEra, which said it would achieve real zero emissions by 2045. The company trademarked the term to mean achieving zero carbon emissions without the use of carbon credits, offsets or capture.

Around the world

Amazon, Google and Microsoft are fighting cybersecurity legislation in the EU. The proposed law would force cloud providers to prove they meet certain cybersecurity standards. That EU legislation could go directly against laws in the U.S. that require government access to some data; to comply, the cloud providers might need to restructure operations, according to POLITICO.

Germany is investigating whether Apple’s App Tracking Transparency program is anticompetitive. In a statement to TechCrunch, Apple said in part that ATT “simply gives users the choice whether or not they want to allow apps to track them or share their information with data brokers.”

The EU’s General Court annulled the bloc’s 997 million euro fine against Qualcomm. The fine had been issued over payments Qualcomm made to Apple in exchange for exclusively using its chips. The court cited “a number of procedural irregularities” in dropping the fine.

Alphabet offered to allow rival ad firms to serve ads on YouTube as part of a potential settlement with the EU. The EU launched a probe in 2021 to determine whether Alphabet gave itself an anticompetitive advantage in the ad space by limiting access to user data.

Nigeria will require social media companies to open local offices and give the government reports on their efforts to combat disinformation. Nigeria’s government has been critical of U.S.-based social media platforms and even issued a ban on Twitter after the company removed a post from President Muhammadu Buhari.

In the media, culture and metaverse

Instagram launched its “Take a Break” feature for teens. The rollout comes just a week after Meta was hit with a string of lawsuits alleging that it knowingly contributed to mental health issues in young adults.

In the C-suite

Microsoft entered a legal agreement to remain neutral on employee unionization efforts at Activision Blizzard. The agreement with the Communications Workers of America also stipulates that Microsoft will give employees “innovative” ways to decide whether to unionize. Meanwhile, the FTC still has to figure out how to handle the Activision acquisition in the first place.

In data

0.75 percentage points: That’s how much the Federal Reserve is expected to raise interest rates today, according to The Wall Street Journal. Recent inflation reports showed the half-percentage-point hike from May didn’t have the hoped-for impact of slowing inflation, which is still at 40-year highs. The interest rate bump could further strain tech markets at a time when they’re already reeling.



What could possibly go wrong?

Elon Musk is expected to address Twitter employees at a company all-hands meeting tomorrow. Twitter executives told Tweeps they’re unwilling to negotiate the acquisition deal with Musk, who has questioned the company’s transparency when it comes to bots on the platform. The meeting sounds like a recipe for disaster, but who knows? Maybe we’ll all be surprised.

Thanks for reading — see you Friday!

Say ExxonMobil wanted to run an ad on Twitter about how natural gas is actually totally climate-friendly. The company could get certified as a “cause-based” advertiser, provide some basic details like its company ID and country of origin, and fire away.

But if Erik Polyak, managing director of the climate advocacy group 314 Action, wanted to run an ad debunking that very debunkable claim, he couldn’t. 314 Action is registered as a political action committee and, in late 2019, Twitter announced it would no longer take ads from PACs — or political candidates, parties or government officials, for that matter.

“We believe political message reach should be earned, not bought,” Jack Dorsey tweeted at the time. (Twitter spokesperson Elizabeth Busby shared a verbatim statement with Protocol.)

Political ads had become too susceptible to abuse, Dorsey wrote, presenting “entirely new challenges to civic discourse,” and he believed the company couldn’t credibly claim to be cleaning up its act while also taking money to push whatever misleading message political advertisers wanted.

Twitter wasn’t the only one reaching that conclusion. After facing intense scrutiny over the ways digital ad systems were abused during the 2016 election, some tech platforms — Spotify, LinkedIn, TikTok and Pinterest among them — decided that it might be easier to simply sit elections out and imposed various bans on political advertising. Only Spotify has opted to bring political ads back, albeit in a limited fashion.

But as the midterm elections loom, questions remain about how apolitical these policies really are and whether they’re actually reducing abuse or simply taking the spotlight off of the companies that imposed them. “They launched this policy that's really tilted the playing field,” Polyak said of Twitter. “Instead of getting serious about disinformation on the platform, they've just gravitated towards this one-size-fits-all policy that really favors big corporations and penalizes advocacy groups like us.”

“It’s performative,” added Tatenda Musapatike, CEO of the Voter Formation Project, a voter turnout nonprofit focused on underrepresented communities. Musapatike previously worked on political ads at Facebook. “[Platforms] have political messages on there, but they don’t want to open themselves up to the risk or the appearance of the risk,” she said. “They don’t want to end up like Facebook in 2017.”

It’s hard to measure the impact these political ad bans have had on elections, or on platforms, in part because none of the platforms has shared any research measuring their impact — that is, if any have done that research at all. Protocol asked Twitter, TikTok, LinkedIn and Pinterest if they had any data on the effect of the ad bans: Twitter, TikTok and Pinterest did not respond directly to the question, and LinkedIn said it didn't have data to share.

That’s a problem, said Matt Perault, a professor at UNC's School of Information and Library Science and a former director of Facebook’s public policy team. Tech platforms large and small made what Perault calls “historic interventions” in paid political speech before the 2020 race. “It might be that some form of ad restrictions are positive, and we want to keep them in place if they worked; or maybe they failed, and because they failed, we don't want to implement them in the midterms or in 2024,” Perault said. “We don’t know the answer to any of those questions.”

In the absence of data from the companies, Perault and his colleague Scott Babwah Brennen, head of Online Expression Policy at UNC’s Center on Technology Policy, went looking for answers of their own. In a paper published last year, they analyzed the effects of the temporary ban on new political ads that Facebook put in place immediately before and immediately following the 2020 election. They found that while the bans likely had minimal impact on curbing misinformation, they did seem to hurt smaller campaigns, since digital ads are cheaper than other mediums. The bans also appeared to hurt Democrats more than Republicans, since Democrats generally relied more on Facebook ads for small-dollar fundraising.

“It doesn't hurt Donald Trump. He has massive organic content, and he can spend money wherever he wants to spend it,” Perault said. “It hurts challengers who might not have name recognition, who are trying to figure out: For 100 bucks, where can they get a good return on their investment?”

The irony, of course, is that it was Democrats who primarily demonized digital political ads after 2016, warning of the dangers of “dark posts” and castigating tech giants for the way their ad systems had been misused by Russian operatives. Sens. Amy Klobuchar and Mark Warner introduced the Honest Ads Act, which would have required more disclosure and transparency around online political ads. Momentum behind that bill fizzled after its Republican co-sponsor, Sen. John McCain, died, but the increased public pressure did prompt Facebook, Google, Snap, Reddit and even, temporarily, Twitter to create ad archives of their own.

That was a clear win for transparency, but the increased scrutiny may have inadvertently driven some platforms away from political advertising altogether. These archives, after all, took substantial resources, and the new visibility has created unending bad press for platforms, especially Facebook. Ultimately, Brennen said, “Small companies just don't think it's really worth it.”

But the rules those platforms have since put in place forbidding political ads are imperfect at best. Take TikTok: The company banned political ads in 2019, but paying influencers to peddle political messages instead has become a widespread workaround, despite the company’s stated policy against it. Twitter’s prohibition on political ads, meanwhile, has effectively created a loophole for businesses while stymying advocacy groups, said Stephanie Grasmick, CEO of the Democratic marketing firm Rising Tide Interactive.

“Hotel chains and airlines say, ‘We’re rated the No. 1 employer. Our employees love us,’ and then the union that represents the workers can’t advertise at all because of how they’re organized,” Grasmick said. “If they’re going to have rules — and I don’t think there’s anything wrong with having rules — then the rules should apply equally to everybody.”

The bans also haven’t stopped potential misinformation from spreading through online ads. They’ve just made those ads harder to find. Perault and Brennen have been tracking where political advertising has moved online in the wake of these bans and discovered that a lot of it is migrating to programmatic advertising platforms, which have few rules about what advertisers can say and almost no transparency systems in place. Buying ads this way can also drive up costs for campaigns, Brennen said, because navigating programmatic systems often takes a trained marketing consultant. That only further obscures the flow of campaign cash: When campaigns hire consultants to do their digital advertising, those consultants by and large aren’t required to publicly report where the money goes.

Even on Facebook, where it’s still possible to run political ads, the landscape for political advertisers looks a lot different — and a lot pricier — in 2022. That’s due to Apple privacy changes that have torpedoed every app’s ability to track users, as well as to Facebook’s own decision to prevent advertisers from targeting users based on “sensitive” categories, including their political beliefs. There are still ways to find relevant audiences, Musapatike said, but “the targeting is less efficient.”

All of this has made it increasingly expensive for smaller, less resourced political campaigns to advertise online. That, Perault said, is a “social cost” that tech platforms need to assess at least as much as they assessed the upside of banning political ads. But nearly three years after many of these companies made that decision, and with a contentious midterm election just months away, Perault said, our understanding of the effects of those bans is still woefully inadequate.

“Are we — going into the next momentous moment in the democratic governance of our nation — going to be in a better position to make informed decisions about what the right approaches are?” he said. “I think it's a travesty that we don't have that information.”

Spotify stopped hosting political ads on its services in early 2020, citing a lack of “robustness” in its systems, ahead of what turned out to be the ugliest U.S. election in recent history.

Two years later, as the midterm primaries get going, the company is courting political advertisers once again, according to a company presentation and marketing email viewed by Protocol.


Hello, and welcome to Protocol Policy! Today, we’re talking about how Mark Zuckerberg found himself at the center of one of the U.S. election’s most enduring conspiracies. Plus, D.C. sues Zuckerberg and Twitter pays the “chaos tax.”

Zuck Bucks

If you’ve paid any attention to the ongoing efforts to overturn and undermine the 2020 election, you have almost certainly come across the term “Zuck Bucks.”

It’s the name a conspiratorial-minded conservative might use to refer to the whopping $419 million Mark Zuckerberg and his wife Priscilla Chan donated in 2020 to help election officials manage a historically high turnout election in the midst of a pandemic.

Ask those election officials — from around 2,500 election departments in 47 states — and they’ll tell you the money was heaven-sent and critical to their ability to buy essential equipment like ballot sorters and PPE. Ask a hardcore Trump supporter, and they’ll tell you the money was corrupt, and used by one of the country’s most divisive billionaires to buy the 2020 election for President Biden.

Today, Protocol published the story of what really happened and how Zuckerberg’s millions went from saving the 2020 election to becoming the beating heart of efforts to undermine it — and future elections going forward.

As in everything he does, Zuckerberg tried his best to stay neutral in 2020.

  • He offered nearly half a billion dollars in grants to any election official who wanted one, as long as those officials spent it on what a lot of people would consider mundane essentials: ballot sorters, drop boxes, poll workers and — because it was 2020 — hand sanitizer.
  • And when those election officials applied for more money than he originally offered, he kicked in another $119 million to satisfy the rest of the requests. Because the last thing he wanted was for anyone to claim they got stiffed and accuse him of bias. What a disaster that would be.

It was a fool’s errand.

  • At a time when Republicans are rapidly restricting access to the ballot box in states across the country, spending nearly half a billion dollars to do the exact opposite of that is tantamount to a partisan choice. Or, at least, it was bound to be viewed that way.
  • And Zuckerberg is hardly a neutral figure. The Zuck Bucks theory is in many ways the real-world analog of the accusations of bias Meta has been facing for years.

As much as the money was essential to carrying out the 2020 election, the backlash has had lasting consequences for elections in America.

  • It inspired new restrictions on election funding in more than a dozen states, leading to death threats and harassment against the nonprofit leaders who distributed the money and contributing to the resignations of election officials who accepted it.
  • In some ways, that’s left election offices across the country worse off than they were two years ago — still strapped for cash, and now with no ability to raise outside funding.

It’s likely to have lasting consequences for Meta too: Whatever moves the company makes in the midterms and beyond, this supposed scandal will be just another data point used to pressure Meta to bend to one party’s will.

Read the full story at Protocol.com.

— Issie Lapowsky (email | twitter)

In Washington

More than a dozen senators wrote to the FTC asking it to do more to protect people’s location data in the face of the Supreme Court’s likely decision to overturn Roe v. Wade. The letter follows reports on how SafeGraph and other companies have sold data on devices located near Planned Parenthood clinics.

Microsoft and the German Marshall Fund are launching a task force on transatlantic data sharing in hopes of advising policymakers as they work to negotiate the U.S.-EU Trans-Atlantic Data Privacy Framework. The task force, co-chaired by Microsoft’s Julie Brill and GMF’s Karen Kornbluh, also includes representatives from Meta and BSA, also known as the Software Alliance.

Speaking of international data flows, more than 50 countries are working to control how their citizens’ data can be accessed by other nations. That includes the U.S., where the Biden administration is circulating a draft order cutting China off from American data, the New York Times reports.

In the courts

Match Group withdrew a request for a temporary restraining order against Google after Google said it would let Match use alternative payment systems. The request for the restraining order came as part of Match’s antitrust suit over the Play Store’s fees. That suit is still ongoing and is set to go to trial in April 2023.

D.C. Attorney General Karl Racine has sued Mark Zuckerberg over Cambridge Analytica. The charges against Zuckerberg follow Racine’s separate 2018 suit over Facebook’s data practices. “This unprecedented security breach exposed tens of millions of Americans’ personal information, and Mr. Zuckerberg’s policies enabled a multi-year effort to mislead users about the extent of Facebook’s wrongful conduct,” Racine said in a statement.

Florida’s social media law likely violates the First Amendment, the 11th Circuit Court of Appeals wrote in an opinion. The court upheld an injunction on most of the law, with the exception of the less onerous provisions requiring disclosure of content standards, among other things. The decision comes as the tech industry awaits the Supreme Court’s reply to an emergency application in the Texas case.


On Protocol

Meta will finally share political ad targeting data with pre-vetted researchers. The company has publicly clashed with researchers at New York University over this very data. Meta will also share aggregate targeting information through its Ad Library, which is accessible to anyone.

Around the world

Apple is looking to India and Vietnam as potential production hubs as China’s COVID-19 restrictions inhibit the company’s operations. According to The Wall Street Journal, Apple has been telling contractors to increase production outside of China.

Here’s a look at a day in the life of startup founders in Ukraine, where the once-vibrant tech community is trying to make a comeback while huddling in bedroom closets and underground bunkers.

In the media, culture and metaverse

YouTube has taken “unprecedented action” related to the war in Ukraine, according to Chief Product Officer Neal Mohan. The company has removed more than 70,000 videos and 9,000 channels under a policy that forbids denial of major violent events. That includes videos that refer to the war as a “liberation mission.”

Clearview AI was ordered to delete all facial recognition data on U.K. residents. The order from the U.K.’s Information Commissioner’s office follows similar demands by France, Italy and Australia. But it’s unclear how these orders can be enforced.

In the C-suite

Twitter’s new head of Product, Jay Sullivan, told employees the “chaos tax” at Twitter is likely to continue, according to internal messages viewed by The Wall Street Journal. Elon Musk’s on-again, off-again attempt to acquire Twitter has led to 15 all-hands or large-scale meetings at Twitter in just a matter of weeks, the Journal reports.

In data

$10 billion: That’s how much Elon Musk’s net worth dropped in a single day, after Business Insider reported on the sexual harassment allegations against him. To think: Now, he’s only got $201 billion to his name.



Pour one out

The last pay phone in Manhattan was removed today. Just watch as it sails away to pay phone heaven. Rest in peace, old friend. The rotaries and Nokia bricks await you on the other side.

Thanks for reading — see you Wednesday!

Meta will finally give researchers access to targeting data for political ads — information that academics have been clamoring for and using legally risky workarounds to collect on their own for years.


If Mark Zuckerberg could have imagined the worst possible outcome of his decision to insert himself into the 2020 election, it might have looked something like the scene that unfolded inside Mar-a-Lago on a steamy evening in early April.

There in a gilded ballroom-turned-theater, MAGA world icons including Kellyanne Conway, Corey Lewandowski, Hope Hicks and former president Donald Trump himself were gathered for the premiere of “Rigged: The Zuckerberg Funded Plot to Defeat Donald Trump.”

The 41-minute film, produced by Citizens United’s David Bossie, accuses Zuckerberg of buying the election for President Biden. Its smoking gun? The very public $419 million in grants Zuckerberg and his wife Priscilla Chan donated to local and state election officials in 2020 to help them prepare for the unprecedented challenge of pulling off an election in a pandemic. On the film’s poster, Zuckerberg is pictured smugly dropping a crisp Benjamin into a ballot box.

Suffice it to say, this was not exactly what Zuckerberg had in mind.

The Facebook founder had tried in vain to make his grand entrance into the election appear impartial. He didn’t plow tens of millions of dollars into a single candidate’s super PAC, like his buddy Dustin Moskovitz did for Biden. He didn’t spread his wealth between Senate campaigns, like his other buddy Peter Thiel is doing right now.

He did it the Zuckerberg way. The Facebook way. Instead of explicitly picking a party — God forbid he be the arbiter of anything — he threw open the vault to his vast fortune and said: Have at it, America. He offered grants to any election official who wanted one, so long as they spent it on what a lot of people would consider mundane essentials that make it easier and safer for everyone to vote: ballot sorters, drop boxes, poll workers and — because it was 2020 — hand sanitizer.

And when those election officials from red, blue and purple places, all starved for funding, applied for more money than the whopping $300 million he already offered, he kicked in another $119 million to satisfy the rest of the requests. Because the last thing he wanted was for anyone to claim they got stiffed and accuse him of bias. What a disaster that would be.

By almost all accounts, the funding from Chan and Zuckerberg was heaven-sent for the people — left, right and center — who actually had to carry out a historically high-turnout election in the midst of a pandemic when poll workers, many of them elderly, were risking their health by just showing up. And some of the equipment the money paid for should last cash-strapped local governments for years.

But a year and a half later, Zuckerberg now finds himself smack in the center of one of the 2020 election’s multitudinous conspiracies, this one with its own catchy name: Zuck Bucks.

The truth is, Zuckerberg’s attempt to appear neutral was a fool’s errand. Because at a time when Republicans are rapidly restricting access to the ballot box in states across the country, spending nearly half a billion dollars to do the exact opposite of that is tantamount to a partisan choice. Or, at least, it was bound to be viewed that way by about half of the country.

If anyone could have predicted that, it should have been Zuckerberg; the Zuck Bucks ordeal is in many ways the real-world analog of the accusations of bias Meta has been facing for years. Much as Facebook’s efforts to combat hate speech have become synonymous in some circles with conservative censorship, expanding voter access has become equally synonymous with cheating. Both views lack substantive evidence to back them up, but that hasn’t much mattered. What’s true online is true in the real world: Turning the proverbial knob in any direction is only going to be viewed as neutral if you agree with the direction it’s turning.

As with seemingly everything Zuckerberg touches, the donations — and their ensuing backlash — have had disastrous unintended consequences, inspiring new restrictions on election funding in more than a dozen states, leading to death threats and harassment against the nonprofit leaders who distributed the money and contributing to the resignations of election officials who accepted it. The grants have been the subject of shambolic investigations and — as shown by what The Washington Post described as the “fraud fete” in honor of the “Rigged” premiere in April — have become a big part of the Big Lie.

Zuckerberg couldn’t have been naive about how his donation would be spun. But maybe he was willing to take his lumps. Or maybe he, like so many others, could never have imagined how bad things were actually about to get. Zuckerberg’s millions may have saved the 2020 election, but they’ve also become the beating heart of bad-faith efforts to undermine it — and future elections.

‘How would you spend it?’

David Becker was on vacation with his family in the Outer Banks, in the socially distanced days of late August 2020, when Zuckerberg’s philanthropic organization, the Chan Zuckerberg Initiative, called him with a question that could have been plucked from a dream: “If you had a lot of money right now,” Becker remembers them saying, “how would you spend it?”

The election was two months away, and Becker, a former senior trial attorney for the voting section of the Justice Department’s Civil Rights Division and the current executive director of the nonprofit Center for Election Innovation & Research, had been talking to election officials about the severe funding gap they were experiencing. COVID-19 was writing and rewriting new rules for how Americans would cast their ballots in November and how those ballots would be counted.

Congress awarded $400 million through the Cares Act to help states navigate those changes, but it wasn’t enough. By the time he got the call in August, Becker said, “it was clear the government wasn't going to step in and perform or satisfy its responsibility.”

Becker told the person on the other end of the line that if he had money to spend, he’d offer it to election officials to help them educate voters on the onslaught of changes that were coming their way. “I had no idea of the scope of the funding,” Becker said. “I didn't know if they were talking about $100,000 or what.”

It was more like “or what.” A little over a week after Becker got the call, Zuckerberg and Chan awarded $50 million to Becker’s organization to distribute voter education grants to any state that wanted one.

The money ended up coming not from CZI, but directly from Chan and Zuckerberg’s personal funds, which they routed through the Silicon Valley Community Foundation. “Like most major philanthropies, we regularly consider a wide variety of grant-making opportunities for alignment with our organizational priorities,” CZI spokesperson Jenny Mack told Protocol. “In this case, Mark and Priscilla chose to make a personal donation to help ensure that Americans could vote during the height of the pandemic.”

Becker rushed to email every state election director in the country, encouraging them to apply. About two dozen took him up on it (though Louisiana eventually withdrew), collectively asking for even more money than the $50 million Becker had to offer. So he went back to the money well, asking Zuckerberg and Chan — or rather, their people — to kick in the $19.5 million difference so he wouldn’t have to turn anyone down. “I thought it was really important to use as little discretion as possible,” he said. “I’m very grateful that they agreed.”

All in, CEIR got nearly $70 million, the vast majority of which it passed on to 22 states, plus Washington, D.C. — from bright blue Massachusetts to bright red Missouri — in the full amount they had requested.

Tiana Epps-Johnson founded and is the executive director at the Center for Tech and Civic Life. Photo: Abel Uribe/Chicago Tribune/Tribune News Service via Getty Images

Meanwhile, about 1,000 miles away in Chicago, Tiana Epps-Johnson had been having similar conversations with the Zuckerberg world. Her organization, the Center for Tech and Civic Life, did some work with Facebook during the 2016 election, helping the company with its sample ballot generator. One of Meta’s public policy managers, Maurice Turner, also sits on the group’s advisory committee.

Since 2016, CTCL had been focused on running cybersecurity training for election officials. That training had been licensed by the U.S. Election Assistance Commission under President Trump in 2020, and was offered to all election offices in the country.

But the COVID-19 crisis reoriented CTCL’s focus. Epps-Johnson had watched the Wisconsin state presidential primary, just one month into lockdowns. Nervous voters who had scarcely left their homes in weeks stood six feet apart in hours-long lines — and in some places, in a hailstorm — waiting to cast ballots. Major cities had closed most of their polling places. Milwaukee, for example, usually operates more than 100 polling places; that primary day, the city was able to open just five. “We wanted to figure out how we could use any tool in our toolbox to support these folks,” Epps-Johnson said.

In July, two months before Chan and Zuckerberg announced their election grants, CTCL awarded $6.3 million in grants to the five biggest cities in Wisconsin to help them with early voting and voting by mail, PPE and poll-worker recruitment and training, among other things. From there, the organization spent the summer doling out additional grants to select jurisdictions in Pennsylvania and Michigan, as well as launching a rural grant program for areas CTCL said were “often overlooked in the national election landscape.”

But those early grants largely went unnoticed until September, when CTCL also got funding from Chan and Zuckerberg — a whopping $250 million worth. Suddenly, CTCL had the money to expand its earlier grant program to every jurisdiction in the country. “That moment is indescribable,” Epps-Johnson said.

According to Zuckerberg spokesperson Ben LaBolt, who is himself a longtime Democratic communications consultant, CTCL and CEIR got picked based on their prior track records of working with election offices. “The team conducted due diligence to see what nonpartisan organizations had helped fund election infrastructure in states and local election jurisdictions previously,” LaBolt said. “CTCL and CEIR were identified as organizations that had relevant experience.”

Unlike the CEIR money that went to states for voter education, CTCL’s portion of the money would be regranted to local jurisdictions to pay for things like poll-worker recruitment, ballot-processing equipment, drive-through voting, protective gear and more. The announcement emphasized, as if predicting the backlash to come, that the money would go to “urban and rural counties in every corner of America.”

Like Becker, Epps-Johnson also fielded more applications than she had the money to match, and she too was wary of turning anyone down. She went back to Zuckerberg and Chan for more, and got it. In all, CTCL used Zuckerberg and Chan’s money to award more than $330 million in grants to around 2,500 election departments in 47 states, plus Washington, D.C. More than half of those grants went not to big left-leaning cities, but to jurisdictions with 25,000 voters or fewer. Every district that asked for a grant got one. “Even our biggest dreams about what might be possible with this program, we were able to exceed,” Epps-Johnson said.

‘One enormous conspiracy theory’

But the sudden influx of cash from one of the world’s most divisive billionaires instantly thrust CTCL’s spending into the spotlight. Conservative critics conflated the early grant program that came out of CTCL’s budget with the Zuckerberg grants that came later, seizing on the idea that CTCL and Zuckerberg had conspired to give Democratic stronghold cities like Milwaukee and Philadelphia a head start before making additional funding available to everyone.

This detail is not only a big part of the plot of “Rigged,” but also central to an investigation in Wisconsin that has been seeking to “decertify” the election there. Michael Gableman, the special counsel leading that investigation, now refers to the first cities CTCL funded in Wisconsin as the “Zuckerberg 5.”

“Why just pick the top five Democratic cities?” Gableman, who received a round of applause at the film’s premiere at Mar-a-Lago, asks in one scene in “Rigged.” “And then when they received criticism about that, then they sprinkled relatively minor amounts of money.”

In truth, Epps-Johnson said, before Zuckerberg and Chan made their donation, CTCL was in triage mode, spending its own modest operating budget on the places that were most likely to have a big problem on their hands come November. CTCL had been working with election offices in Michigan and Pennsylvania even before COVID-19, as both states had expanded access to voting by mail well before the pandemic began. Once COVID-19 hit, Epps-Johnson said, CTCL focused its attention and its grants on the counties in those states that were both struggling to contain the virus and bound to see the biggest influx of mail-in ballots. “In nearly every place, that leads you to population centers,” Epps-Johnson said.

The early grants weren’t the only reason CTCL had a target on its back, though. While the organization is nonpartisan, with both Democrats and Republicans sitting on its board, it’d be hard to claim the same about Epps-Johnson, a former Obama fellow, who, along with her co-founders, had worked at the progressive New Organizing Institute before launching CTCL. When former President Obama gave his April speech about disinformation, it was Epps-Johnson who introduced him and welcomed him to the stage with a hug.

Becker, too, had at least some progressive bona fides. While he’d spent nearly a decade at nonpartisan Pew Charitable Trusts, where he oversaw election initiatives, he’d also done a brief stint as a senior staff attorney at People for the American Way, a progressive advocacy group that now describes itself as being “founded to fight right-wing extremism.”

If Chan and Zuckerberg erred at all in their efforts to appear impartial, it was in entrusting the money to organizations whose founders’ resumes could easily double as Steve Bannon’s dartboard. Republicans argue that was a feature of Zuckerberg’s plan, not a bug.

An avalanche of lawsuits soon followed. In Louisiana, Attorney General Jeff Landry barred election officials from accepting the money they’d been granted and sued CTCL, alleging it had engaged in an illegal “financial contribution scheme.” In nine other states, The Amistad Project, which would go on to join the Trump campaign in challenging the election results, also backed lawsuits to block the grants from going through. “It was really clear there were legal challenges that were misinformation campaigns that were designed to undermine voter confidence,” Epps-Johnson said.

Zuckerberg himself made a rare statement about the suits in October 2020. “Since our initial donation, there have been multiple lawsuits filed in an attempt to block these funds from being used, based on claims that the organizations receiving donations have a partisan agenda,” he wrote in a Facebook post. “That's false.”

One by one, the suits were dismissed. In one Colorado case, a district judge even imposed sanctions on the attorneys who filed the suit, calling their complaint “one enormous conspiracy theory.” Only the Louisiana case still stands, after the dismissal was reversed on appeal in April 2022.

But the grant program’s court victories hardly stopped the Zuck Bucks theory from metastasizing and transforming from a nuisance into something a lot more menacing. CTCL was bombarded with death threats, forcing the organization to spend $180,000 on security in the last months of 2020 alone.

Becker of CEIR got a few threats, too, but whatever he dealt with, he said, “it’s nothing compared to what local election officials in cities and counties are experiencing.”

‘The consequences of telling the truth’

On a Thursday afternoon last year, Al Schmidt walked into a farmer’s market in a northwest corner of Philadelphia for what had to be his umpteenth interview, wearing a mask that read “VOTE” in big block letters. It was December of an off-cycle year, but such is Schmidt’s commitment to the role he now fills as defender of the franchise.

The former Philadelphia City Commissioner was among the election officials in the belly of the Pennsylvania Convention Center in November 2020, working around the clock and not returning home for days as ballots were counted and protests raged out front. At the time, and still to this day, Schmidt, who’s been a registered Republican since the ’90s, has delivered the same message: that the 2020 election was legitimate, but that the future of elections is in jeopardy.

For that, Schmidt was called a RINO by Trump on Twitter. And for that, Schmidt and his family have faced merciless harassment and targeted threats, which temporarily forced his wife and two kids to move in with family while Schmidt had a security system installed in their home. For months after the election, police escorts followed the family from the grocery store to the sledding hill. “The consequences of telling the truth aren’t easy,” Schmidt said. “That doesn't mean you shouldn't tell the truth.”

Schmidt is one of the many election officials across the country who have resigned from their positions since 2020. It was a transition he’d planned well before the 2020 election, but he said, “2020 certainly confirms in my mind that it's the right decision for my family.”

Philadelphia received a little over $10 million from CTCL, the most of any county in the hard-fought state of Pennsylvania. Philadelphia is by far Pennsylvania’s biggest county, so it stands to reason it would also get the biggest check. But the grant program’s conservative critics, including some Pennsylvania lawmakers, argue it’s not just that Philadelphia and other counties Biden won in Pennsylvania got more money in total. It’s also that they got more money per voter.

“Counties won by Biden in 2020 received an average of $4.99 Zuckerbucks per registered voter, compared to just $1.12 for counties won by Trump,” reads one analysis on Pennsylvania by the right-leaning think tank Foundation for Government Accountability. Other reports have looked at CTCL’s spending in Texas, Florida and Georgia and reached similar conclusions.

These reports have their own partisan roots. The Pennsylvania analysis was authored by a former Department of Labor official under Trump. Another report on Zuckerberg’s spending in Texas comes from an organization whose board includes, among other prominent Trump supporters, John Eastman, the Jan. 6 leader who pushed Mike Pence to reject the election results. Still more articles arguing Zuckerberg bought the election for Biden have been published by a new think tank called The Caesar Rodney Institute for American Election Research, whose primary purpose appears to be exposing the bias behind the CTCL grants. “Even if it wasn't partisan in its intent, and I would argue that it almost certainly was, it was certainly partisan in its effect,” said William Doyle, a former University of Dallas economics professor, who co-founded Caesar Rodney with an anonymous partner he described as his “shadow conspirator.”

Whatever the political motivations of their authors, anyone can see that the spending numbers do look lopsided. “Rigged” makes this the centerpiece of its argument. But what these analyses miss, Schmidt argues, is what’s behind those numbers. In elections, scale doesn’t necessarily drive down costs. “In a smaller county, if they have a [turnout] increase, they might be able to hire, you know, five more people to sort ballots by hand,” Schmidt said. But in a city like Philadelphia, which received 375,000 mail-in ballots in 2020, you need machines. And machines cost money — sometimes, a lot of it.

About half of the CTCL grant went toward equipment costs in Philadelphia, including the purchase of two ballot sorters that reportedly cost more than $500,000 each. It’s a similar story in states across the country that have been the subject of conservative scrutiny. “It’s no doubt that cities are going to be a little more expensive than rural areas in terms of their needs,” said Nate Persily, a professor at Stanford Law School and co-founder of the Healthy Elections Project. It also just so happens that cities across America tend to vote for Democrats.

In Philadelphia at least, the Zuckerberg money was essential, Schmidt said. He’d seen during the primaries how the slower-than-usual process of counting ballots was creating opportunities to exploit uncertainty among voters. “All that equipment allowed us to really speed up the whole process,” Schmidt said. That includes the process of comparing mailed ballots and in-person poll books to ensure people weren’t voting twice. And that equipment will continue to benefit Philadelphians, he said, “as long as that equipment keeps working.”

Al Schmidt, a former Philadelphia City Commissioner, is one of the many election officials across the country who have resigned from their positions since 2020. Photo: Lynsey Addario/Getty Images

Of course, none of that stopped people — including one very powerful person — from exploiting the uncertainty of it all anyway. “Very few of us who work in this space were prepared for the degree to which the losing candidate would continue to lie to their supporters and, in so doing, continue to weaken American democracy, just to keep the anger going and the donations coming,” said Becker, who is now running a pro bono legal defense network to help election officials fend off frivolous prosecution and harassment.

What bothers Schmidt most about the backlash to the grants is the notion that Zuckerberg alone created an imbalance or a distortion of the electoral system. He calls that line of argument “deceitful” because, in a country where elections are run and financed at the local level, there’s never been balance to begin with. “If Philadelphia wanted to spend $100 million on elections, it could,” Schmidt said. “There is no equality from county to county.”

It’s why studies show that districts with more minority voters often have fewer voting machines, leading to longer lines. Those studies have hardly animated conservatives the way the Zuck Bucks studies have. There are obvious partisan reasons for that. Underfunding elections in minority districts tends to hurt Democratic turnout. Funding elections in those districts, as Zuckerberg did, well, doesn’t.

But there’s another reason why the Zuck Bucks debacle is different. While the chronic starvation of election officials is a perpetual problem with plenty of blame to go around, this unprecedented funding of an election can be traced back to a single perfect villain: the Big Tech billionaire who conservatives believe has had it out for them all along.

‘Government should have provided these funds’

Anyone investing in voting access is bound to face opposition from the right. But the other big reason why Zuckerberg’s attempt to appear neutral was doomed from the start is because Zuckerberg is not seen by either party as anything close to a neutral figure. He’s somehow both the guy who got Trump elected and placated his administration and the guy who censored and conspired to defeat him. Nothing Zuckerberg does gets to be impartial. The $419 million he spent in 2020 ensures it never will be again.

Try though he did to ingratiate himself with the right throughout the Trump years, Republicans already decided long before he spent a penny that his all-powerful company had rigged the election against them. Or, at least, they decided it was in their interest to say so in order to raise money off the message and spook Facebook out of stifling the speech of the party’s most extreme factions — and its leader.

Then, of course, there’s the fact that Zuckerberg’s personal politics, if not his professional politics, have tended to lean left. He’s spent millions on causes like immigration reform and criminal justice reform . His spokesperson recently came off of Supreme Court Justice Ketanji Brown Jackson’s confirmation team, and the person leading CZI’s policy operation, David Plouffe, was President Obama’s campaign manager. (CZI said Plouffe wasn’t involved with the grant program).

That Zuckerberg was the boogeyman coughing up half a billion dollars to expand voting access almost made it too easy for the right to argue those kinds of donations should be forbidden altogether. Note the absence of outrage over the millions of dollars Arnold Schwarzenegger also spent in 2020, on a similar grant program that was, in fact, far more selective than Zuckerberg’s.

Zuckerberg’s election donation sparked a conservative scramble not just to prevent his money from being spent in 2020, but also to prevent anyone from personally spending money on any election ever again.

Since 2020, some 14 states have passed laws forbidding private funding of elections. Similar bills have passed the legislature in another five states, including Pennsylvania, but have been blocked by Democratic governors. The conservative group Heritage Action for America has backed these bills with a $10 million investment spread across eight states. “There is nothing more important than ensuring every American is confident their vote counts — and we will do whatever it takes to get there,” Jessica Anderson, executive director of Heritage Action, said when the investment was announced.

The Zuck Bucks theory was key to getting these laws passed. First you plant the seed of distrust, then you promise to nip what you planted in the bud.

The irony of that is that Becker, Epps-Johnson, Schmidt and even Zuckerberg himself all tend to agree that individual donors shouldn’t be the ones funding elections. After all, Zuckerberg may have been assiduously nonpartisan in his giving last time around, but he didn’t have to be. There was nothing stopping him from publicly picking favorites if he’d wanted to. And most everyone agrees that’s hardly a way to ensure trust or equity in the system. Zuckerberg even said as much in his October 2020 Facebook post about the grants. “[G]overnment should have provided these funds, not private citizens,” he wrote at the time, not missing the chance, for once, to rap Congress on the knuckles for not doing its job.

Center for Election Innovation & Research executive director and founder David Becker had spoken to election officials about the severe funding gap they were experiencing. Photo: Joshua Roberts-Pool/Getty Images

The problem is, the government didn’t provide those funds. With the midterm primaries now underway and the pandemic ongoing, it still hasn’t. That leaves some states now underfunded and unable to raise funding from anywhere else.

“What we're seeing is no ability for philanthropy to step in in the ways that they would if there was a struggling library or a school,” Epps-Johnson said of the laws being passed across the country, “and also no additional public funding at a time when we have election officials using technology that they purchased before the iPhone was invented.”

“We can’t both defund them and restrict their ability to get other funds,” Persily of Stanford said.

In his most recent budget, President Biden called for what would be a historic $10 billion investment in election infrastructure over the next 10 years. CTCL has been pushing for double that investment over the same period of time. But so far, Congress has shown little indication it’s going to act.

That’s one reason why Epps-Johnson’s new focus has been on helping election officials help each other through a group called the U.S. Alliance for Election Excellence. It will invite election officials from every district in the country to come together and swap expertise, and is backed by $80 million, some of which will go to election administration grants in states where that kind of thing is still possible.

But this time, the money’s not coming from Zuckerberg.

The man who spent years atoning for his company’s failure to secure the 2016 election — then rushed to the rescue of the 2020 election — appears to be taking a step back from politics. Or at least, he’s trying to. He made that much clear when he promoted Nick Clegg to president of Global Affairs at Meta earlier this year, freeing Zuckerberg up to post even more legless videos of himself inside the metaverse he’s desperately trying to build. Since then, Zuckerberg has been quieter than usual about the political events of the day, even letting Clegg take the lead after Meta was banned throughout Russia.

So despite the breathless headlines, it wasn’t any big surprise when LaBolt said last month that Zuckerberg wouldn’t be making any new grants this year. Citizens United president David Bossie, for one, celebrated the news as a “major victory.” In truth, LaBolt said, the grants were always meant to be a one-time deal.

But Zuckerberg can’t just walk away from this ordeal so easily. Like Cambridge Analytica and the Russian troll scandal before it, Zuck Bucks seems likely to hover over Zuckerberg, and Meta, for years, regardless of whether he makes any more donations. Whatever moves Meta makes in the midterms and beyond — including deciding whether to reinstate Trump’s account — this supposed scandal will be just another data point used to pressure Meta to bend to one party’s will.

A year and a half since polls closed in November 2020, Zuck Bucks remains one of the 2020 election’s most enduring conspiracy theories. It’s grown beyond what Trump and his acolytes say happened in 2020, and has now formed the basis of predictions about what will happen this year and two years from now. “They're gonna try and do it again and ’22 and ’24,” Trump says in “Rigged.”

Of course, what no one mentions in that film is that preventing private donors from funding ballot sorters and drop boxes won’t free U.S. elections from the unchecked influence of tech billionaires. If anyone understands that, it’s the folks at Citizens United who won the Supreme Court case to make it so. The midterm elections and the presidential race in 2024 will still be awash in tech money, and some of the very people who were in that screening room at Mar-a-Lago are already happily accepting it.

It’s just that, instead of paying for poll workers, tech money will cover the cost of attack ads and “strategic litigation” funds to take out members of the media. The money won’t be announced in press releases or disclosed in public reporting, and it won’t be offered up to everyone, regardless of party. It’ll flow into partisan dark-money groups that don’t disclose their donors and that work hard to hide their footprints. It’ll still be there. It’ll just be harder to find. Who knows? It may even wind up financing a film some day that tries to convince the world the election was rigged. And it may even work.

Larry Ellison was among the participants on a call in November 2020, during which top Trump allies discussed ways to contest the election results, according to The Washington Post. It's unclear what role Ellison played on the call, but The Post found evidence of Ellison's apparent involvement in court records and confirmed with one of the call's other participants.


As the Russian invasion of Ukraine continues, social media platforms are fighting an avalanche of misinformation and propaganda with a common weapon: information labels.

But do those labels actually stop people from spreading questionable content? Not exactly, according to a newly released study that analyzed how labels affected former President Trump’s ability to spread election lies on Twitter in 2020.

Researchers for the German Marshall Fund analyzed replies and engagement data for 1,241 tweets Trump wrote between October 2020 and January 2021. They found that labels alone didn’t change Twitter users’ likelihood of sharing or engaging with the tweets. That doesn’t mean, however, the labels were totally ineffective. When the label itself included a particularly strong rebuttal of a tweet containing false information, the researchers found, it led to fewer user interactions and less toxicity in the replies and comments.

This suggests that while labels on their own may not stop the spread of misinformation, labels designed in a particular way might. “As policymakers and platforms consider fact checks and warning labels as a strategy for fighting misinformation, it is imperative that we have more research to understand the impact of labels and label design on user engagement,” Ellen P. Goodman, a law professor at Rutgers, wrote in a statement. Goodman co-authored the study with Orestis Papakyriakopoulos, a postdoctoral research associate at Princeton.

“This new empirical research shows that the impacts can differ depending on label design, ranging from no impact at all to reductions in engagement and even toxicity of replies,” Goodman said.

Twitter did not immediately respond to a request for comment. But the company has shared its own data about labeling in the past, yielding somewhat different results. Shortly after the election, the company reported that it had observed a 29% decrease in quote tweets of labeled tweets between Oct. 27 and Nov. 11 of 2020. “These enforcement actions remain part of our continued strategy to add context and limit the spread of misleading information about election processes around the world on Twitter,” company executives wrote at the time.

Twitter has also pointed to data that shows warning labels that prompt users to reassess mean tweets before sending them actually work. In a test, 34% of users opted to delete or change their tweet after receiving a prompt.

Labels have continued to be one of the primary tools tech companies have used in the days since Russia invaded Ukraine. Twitter and Facebook have both said they will label all tweets with links to Russian state media and demote them in users’ feeds. The question is whether simply seeing a label on a tweet or post will be enough to stop users from engaging with it. The new study would suggest that the most effective way to actually stop a lie from spreading is to directly call it out as one.

There was a brief window when it almost looked like tech platforms were going to emerge from the 2020 U.S. election unscathed. They’d spent years nursing their wounds from 2016 and building sturdy defenses against future attacks. So when Election Day came and went without some obvious signs of foreign interference or outright civil war, tech leaders and even some in the tech press considered it a win.

“As soon as Biden was declared the winner, and you didn’t have mass protests in the streets, people sort of thought, ‘OK, we can finally turn the corner and not have to worry about this,’” said Katie Harbath, Facebook’s former public policy director.

One year ago today, it became clear those declarations of victory were as premature as former President Trump’s.

Much has been said and written about what tech platforms large and small failed to do in the weeks leading up to the Capitol riot. Just this week, for example, ProPublica and The Washington Post reported that after the election, Facebook rolled back protections against extremist groups right when the company arguably needed those protections most. Whether the riot would have happened — or happened like it did — if tech platforms had done things differently is and will forever be unknowable. An arguably better question is: What’s changed in a year and what impact, if any, have those changes had on the spread of election lies and domestic extremism?

“Ultimately what Jan. 6 and the last year has shown is that we can no longer think about these issues around election integrity and civic integrity as something that’s a finite period of time around Election Day,” Harbath said. “These companies need to think more about an always-on approach to this work.”

What changed?

The most immediate impact of the riot on tech platforms was that it revealed room for exceptions to even their most rigid rules. That Twitter and Facebook would ban a sitting U.S. president was all but unthinkable up until the moment it finally happened, a few weeks before Trump left office. After Jan. 6, those rules were being rewritten in real time, and remain fuzzy one year later. Facebook still hasn’t come to a conclusion about whether Trump will ever be allowed back when his two-year suspension is up.

But Trump’s suspension was still a watershed moment, indicating a new willingness among social media platforms to actually enforce their existing rules against high-profile violators. Up until that time, said Daniel Kreiss, a professor at the University of North Carolina’s Hussman School of Journalism and Media, platforms including Facebook and Twitter had rules on the books but often found ways to justify why Trump wasn’t running afoul of them.

“There was a lot of interpretive flexibility with their policies,” Kreiss said. “Since Jan. 6, the major platforms — I’m thinking particularly of Twitter and Facebook — have grown much more willing to enforce existing policies against powerful political figures.” Just this week, Twitter offered up another prominent example with the permanent suspension of Georgia Rep. Marjorie Taylor Greene.

Other work that began even before Jan. 6 took on new urgency after the riot. Before the election, Facebook had committed to temporarily stop recommending political and civic groups, after internal investigations found that the vast majority of the most active groups were cesspools of hate, misinformation and harassment. After the riot, that policy became permanent. Facebook also said late last January that it was considering reducing political content in the News Feed, a test that has only expanded since then.

The last year also saw tech platforms wrestle with what to do about posts and people who aren’t explicitly violating their rules, but are walking a fine line. Twitter and Facebook began to embrace a middle ground between completely removing posts or users and leaving them alone entirely by leaning in on warning labels and preventative prompts.

They also started taking a more expansive view of what constitutes harm, looking beyond “coordinated inauthentic behavior,” like Russian troll farms, and instead focusing more on networks of real users who are wreaking havoc without trying to mask their identities. In January of last year alone, Twitter permanently banned 70,000 QAnon-linked accounts under a relatively new policy forbidding “coordinated harmful activity.”

“Our approach both before and after January 6 has been to take strong enforcement action against accounts and Tweets that incite violence or have the potential to lead to offline harm,” spokesperson Trenton Kennedy told Protocol in a statement.

Facebook also wrestled with this question in an internal report on its role in the riot last year, first published by BuzzFeed. “What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy?” the authors of the report wrote. “What do we do when that authentic movement espouses hate or delegitimizes free elections?”

Those questions are still far from answered, said Kreiss. “Where’s the line between people saying in the wake of 2016 that Trump was only president because of Russian disinformation, and therefore it was an illegitimate election, and claims about non-existent voting fraud?” Kreiss said. “I can draw those lines, but platforms have struggled with it.”

In a statement, Facebook spokesperson Kevin McAlister told Protocol, “We have strong policies that we continue to enforce, including a ban on hate organizations and removing content that praises or supports them. We are in contact with law enforcement agencies, including those responsible for addressing threats of domestic terrorism.”

What didn’t?

The far bigger question looming over all of this is whether any of these tweaks and changes have had an impact on the larger problem of extremism in America — or whether it was naive to ever believe they could.

The great deplatforming of 2021 only prompted a “great scattering” of extremist groups to other alternative platforms, according to one Atlantic Council report. “These findings portray a domestic extremist landscape that was battered by the blowback it faced after the Capitol riot, but not broken by it,” the report read.

Steve Bannon’s War Room channel may have gotten yanked from YouTube and his account may have been banned from Twitter, but his extremist views have continued unabated on his podcast and on his website, where he’s been able to rake in money from Google Ads. And Bannon’s not alone: A recent report by news rating firm NewsGuard found that 81% of the top websites spreading misinformation about the 2020 election last year are still up and running, many of them backed by ads from major brands.

Google noted the company did demonetize at least two of the sites mentioned in the report — Gateway Pundit and American Thinker — last year, and has taken ads off of individual URLs mentioned in the report as well. “We take this very seriously and have strict policies prohibiting content that incites violence or undermines trust in elections across Google's products,” spokesperson Nicolas Lopez said in a statement, noting that the company has also removed tens of thousands of videos from YouTube for violating its election integrity policies.

Deplatforming can also create a measurable backlash effect, as those who have been unceremoniously excised from mainstream social media urge their supporters to follow them to whatever smaller platform will have them. One recent report on Parler activity leading up to the riot found that users who had been deplatformed elsewhere wore it like a badge of honor on Parler, which only mobilized them further. “Being ‘banned from Twitter’ is such a prominent theme among users in this subset that it raises troubling questions about the unintended consequences and efficacy of content moderation schemes on mainstream platforms,” the report, by the New America think tank, read.

“Did deplatforming really work or is it just accelerating this fractured news environment that we have where people are not sharing common areas where they’re getting their information?” Harbath asked. This fragmentation can also make it tougher to intervene in the less visible places where true believers are gathering.

There’s an upside to that, of course: Making this stuff harder to find is kind of the point. As Kreiss points out, deplatforming “reduces the visibility” of pernicious messages to the average person. Evidence overwhelmingly shows that the majority of people who were arrested in connection with the Capitol riot were average people with no known connections to extremist groups.

Still, while tech giants have had plenty to make up for this last year, ultimately, there’s only so much they can change at a time when some estimates suggest about a quarter of Americans believe the 2020 election was stolen and some 21 million Americans believe use of force would be justified to restore Trump as president. And they believe that not just because of what they see on social media, but because of what the political elites and elected officials in their party are saying on a regular basis.

“The biggest thing that hasn’t changed is the trajectory of the growing extremism of one of the two major U.S. political parties,” Kreiss said. “Platforms are downstream of a lot of that, and until that changes, we’re not going to be able to create new policies out of that problem.”




Correction: This story was updated Jan. 6, 2022 to clarify that Facebook was just considering reducing political content in the News Feed on its late January earnings call.

Political advertisers on Facebook are supposed to identify themselves as such. That way, Facebook can log their ads in an archive and, as was the company's policy in 2020, even prevent those ads from running close to an election.

Of course, both accidentally and intentionally, not every political advertiser plays by Facebook’s rules, meaning Facebook often has to decide what is and isn’t a political ad even after the ad runs. Now, a new study by researchers at New York University and KU Leuven in Belgium suggests that the vast majority of the time Facebook has to make these decisions, it makes the wrong call.

The researchers analyzed 189,000 political ads from around the world that Facebook needed to make an enforcement decision on between July 2020 and February 2021. They found that Facebook misidentified a whopping 83% of those ads.

According to the study, 117,000 of those ads clearly fell under Facebook’s definition of a political ad, but went undetected by Facebook. Another 40,000 ads were flagged as political, but the researchers say they clearly were not. For example, some ads run by Sen. Elizabeth Warren and Alaska Rep. Don Young were not marked as political, while ads for a “silky peanut butter pie” recipe and a Ford pickup truck listing were.

The researchers also found Facebook’s enforcement varied wildly depending on where the ads appeared. In the U.S., Facebook overcorrected and ended up mislabeling more ads that weren’t actually political. In Malaysia, the opposite was true, with Facebook letting some 45% of these political ads go unlabeled. That disparity mirrors the company’s struggles to moderate other aspects of its platform in non-English-speaking parts of the world.

But perhaps the most glaring issue the researchers found was related to Facebook’s much-debated decision to ban political ads leading up to and immediately following the U.S. election in 2020. According to the study, more than 1,000 advertisers who had run nothing but political ads prior to the ban were able to continue running ads — more than 70,000 of them — during the ban. This, the researchers argue, suggests that some political advertisers stopped self-reporting their ads to evade Facebook’s ban — and Facebook failed to stop them.

These failures, the researchers argue, suggest that Facebook needs a new approach to enforcing this policy that is consistent across borders and includes penalties for advertisers who don’t self-report. Right now, there are some penalties for advertisers that skirt these rules, but the researchers argue they’re not adequately enforced.

“Facebook isn’t a very good cop,” said Laura Edelson, one of the study’s authors and a computer science Ph.D. candidate at NYU. “We’re very fortunate that the vast majority of political advertisers are voluntarily complying with Facebook’s policy here, because when we actually study how good Facebook is at enforcing their own policy, when they have to do the enforcing, as you can see, their accuracy rate is pretty low.”

Protocol presented Meta with a summary of findings from the researchers, none of which the company directly contested. In a statement, a Meta spokesperson said, "The vast majority of political ads they studied were disclosed and labeled just as they should be. In fact, their findings suggest there were potential issues with less than 5 percent of all political ads."

The spokesperson said that Meta offers "more transparency into political advertising than TV, radio or any other digital advertising platform," and said that even overtly political figures need to disclose whether each individual ad is political or not.

If Edelson’s name sounds familiar, it’s because she is part of the NYU team that Facebook cut off from its platform earlier this year. Edelson and her fellow researchers had been scraping ads and ad-targeting data from users who installed their browser plug-in, a method Facebook said violated its terms of service.

But the NYU team had already been working on this project, and it relied not on the browser extension but on Facebook’s own ad library. That library contains an archive of all of the ads Facebook has identified as political, along with a repository of all the other ads running on the platform at any given moment. Because those non-political ads disappear from the library after they’ve stopped running, they’re difficult to study over time. So the NYU researchers built a tool that could scrape the library every 24 hours and store all of that information before ads disappeared. That way, the researchers could see which political ads Facebook had failed to identify.

The researchers scraped the ads' content and metadata for 14 days after they ran so they could observe how quickly Facebook detected political ads that hadn’t been labeled — and if it detected them at all. When Facebook detects unlabeled political ads, it takes those ads down.

Edelson and her fellow researchers then compared this larger trove of all Facebook ads to the smaller political ad archive. To find political ads Facebook missed — which the researchers referred to as “false negatives” — they looked exclusively at pages that are overtly political because they belong to, for instance, a registered politician or political group or self-identified as a political organization on their Facebook pages. They then looked at all the ads those pages ran to see which ones appeared in the political ad archive. The ads that didn’t were considered false negatives.

Edelson argues that means the error rate she and her colleagues found is a “floor” because there may have been lots of other missed political ads that didn’t come from one of these pages. “This is a very conservative estimate,” she said.
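The comparison the researchers describe boils down to a set difference: ads run by overtly political pages, minus the ads that appear in the political ad archive. A minimal sketch of that logic, with all names and data invented for illustration (the NYU team's actual tooling is not reproduced here):

```python
# Illustrative sketch of the false-negative comparison described above;
# all identifiers and data are hypothetical.

def find_false_negatives(ads_from_political_pages, archived_ad_ids):
    """Return ads run by overtly political pages that never appeared in
    the political ad archive -- the study's "false negatives"."""
    return [ad for ad in ads_from_political_pages
            if ad["id"] not in archived_ad_ids]

# Toy example: two ads from a political page, only one properly archived.
ads = [
    {"id": "ad-1", "page": "Example Campaign PAC"},
    {"id": "ad-2", "page": "Example Campaign PAC"},
]
archive = {"ad-1"}

missed = find_false_negatives(ads, archive)
print([ad["id"] for ad in missed])  # ['ad-2']
```

Because this method only counts ads from pages already known to be political, any missed political ad from an unflagged page goes uncounted, which is why Edelson calls the resulting error rate a floor.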

This is something that Facebook — and Meta as a whole — will have to grapple with as the U.S. midterm election approaches. The company is simultaneously struggling to adapt to new privacy protections put in place by Apple that make it harder to measure the effectiveness of ads that campaigns use for fundraising and voter turnout efforts. And it recently made changes that prevent advertisers from targeting users based on “sensitive” topics, including their politics, health, religion and more. This, too, will create challenges for political advertisers that have used Facebook to target key demographics for years. All of it ensures that the way political groups and campaigns have used Facebook for campaigning in the past is about to dramatically change.

All that said, the researchers found that more than 4 million political ads that ran during the study period were self-reported by advertisers. That makes the problem they’re describing — the 189,000-plus ads that slipped through the cracks — a small slice of the overall political advertising picture on Facebook. And Facebook has said that in 2020 it rejected 3.3 million ads targeting the U.S. before they ever ran for failing to complete the authorization process for political advertisers.

But the political ads Facebook is missing — and the regular ads it’s removing for no good reason — still matter, Edelson said. “Ads are theoretically where Facebook has the potential to do the best. They have information about who pays for the content. They literally have a profit motive to get this right, and this is what the accuracy rate is."

Researchers leading a study of Facebook's impact on the 2020 election are delaying the release of their study results until the first half of 2022, Protocol has learned. Facebook and the researchers had said last year that the results of the study would be ready by the summer of 2021.


Shortly after November's presidential election, a story appeared on the website of far-right personality Charlie Kirk, claiming that 10,000 dead people had returned mail-in ballots in Michigan. But after publication, a correction appeared at the top of the story, completely debunking the misleading headline, which, months later, remains unchanged.

"We are not aware of a single confirmed case showing that a ballot was actually cast on behalf of a deceased individual," the correction, which quoted Michigan election officials, read.

The note was a clear bait-and-switch on a story that boldly pushed a false conspiracy theory about the election. But just as striking as the editor's note is what sits at the top of this story, and every story on Kirk's site: a Google ad.

Since the 2016 election, endless attention has been paid to the way election misinformation can spread through targeted ads on social media platforms like Facebook and YouTube. But an equally insidious and less-discussed problem is how programmatic advertising, a field dominated by Google, has become the lifeblood of misinformation sites. With or without social platforms, these ads allow misinformation sites to exist and even thrive all on their own by providing a source of revenue, and companies like Google have a shoddy record of policing them.

In a new report released Thursday by the news rating company NewsGuard, researchers found ads for more than 1,600 mainstream brands, from Disney to Procter & Gamble, running on 160 sites (including Kirk's) that have openly pushed election conspiracies. Google was responsible for ads on a whopping 80 percent of those sites. Another ad exchange, The Trade Desk, was running ads on roughly half.

Among the examples NewsGuard lists: ads for Harvard University appearing on One America News Network's website, ads for AARP appearing on sites like The Gateway Pundit and ZeroHedge, and Walmart ads appearing on NOQ Report, a site that recently argued Satan uses Democrats to do his bidding, including stealing the election.

In some cases, the ads create discordant messaging between publisher and advertiser. While reporting this story, Protocol found ads for Planned Parenthood on Kirk's site, despite Kirk's frequent calls for Planned Parenthood to be defunded. Thanks to these ads, Planned Parenthood is effectively funding Kirk.

A Planned Parenthood fundraising ad appears on a story containing election misinformation at the top of Charlie Kirk's website. Screenshot: Protocol

Google has policies forbidding publishers who post demonstrably false election misinformation and other types of content from placing Google ads on their websites. "Claims that voter fraud was widespread or that the election was stolen are all prohibited by our policies. When we find content that violates our policies we remove its ability to monetize," a Google spokesperson said.

The company demonetizes individual stories first, reserving site-wide demonetization for egregious, persistent offenders. (The company recently demonetized a far-right militia site following the Capitol riot.) In 2019, the company removed ads from 21 million individual pages, the Google spokesperson said. The problem: Google serves billions of ads every day.

It's not that the company is unaware it's serving ads on the sites NewsGuard noted in its report. In August, a group of philanthropists wrote to Alphabet CEO Sundar Pichai after reports showed that ads for groups like the Red Cross and Save the Children were appearing alongside COVID-19 misinformation on The Gateway Pundit and elsewhere. The philanthropists urged Google to institute a new model that "does not put [advertisers] into unwanted and damaging associations that undermine their good works and values."

Since then, the company has taken action on individual Gateway Pundit articles, but five months later, Google has yet to fully demonetize the site despite repeated violations, enabling it to continue growing and spreading misinformation.

The power programmatic advertising has in sustaining these sites is particularly relevant now as social media giants begin to more forcefully crack down on accounts, including the president's, that regularly post dangerous conspiracy theories or incitements to violence. For many of these companies, last week's riot in the U.S. Capitol was a wake-up call, showing them the disastrous real-world consequences of allowing people to believe lies about the election being stolen. But chasing the accounts that peddle those lies off of Facebook, Twitter and YouTube is only part of the solution. As long as the people behind those accounts have a way to make money on their falsehoods, why would they ever stop?

On Friday night, Twitter announced that it was forever banning President Trump from the digital podium where he conducted his presidency and where, for more than a decade, he built an alternate reality where what he said was always the truth.

There are moral arguments for not doing business with the guy who provoked a violent mob to invade the U.S. Capitol, leaving several people dead. There have been moral arguments for years for not doing business with the guy who spent most of his early mornings and late nights filling the site with a relentless stream of pithy, all-caps conspiracy theories about everything from Barack Obama's birthplace to the 2020 election. There are also moral arguments against tech companies muzzling the president of the United States at all.

Wherever you stand, there's a hard truth to Twitter's decision to stop doing business with Trump now: Trump has already gotten all he needs from Twitter. He's already used the platform — and the company — to become the most powerful man in the world, bending Twitter's carefully crafted rules and dancing across its neatly drawn lines, knowing that by virtue of his position, he'd never face repercussions. Now that he has, that power is still his to keep, and it will follow him wherever he winds up next.

Deplatforming a controversial or even dangerous figure doesn't always work that way. Remember Milo Yiannopoulos? The conservative provocateur was credibly threatening to launch his own media company after being removed from Twitter and resigning from Breitbart in scandal — but when Yiannopoulos lost access to these forums, he lost his audience and effectively disappeared from public view.

But the fact that Twitter's decision is unprecedented also means that no one as powerful as the president of the United States has ever been permanently banished from any social media platform. Yes, he will soon lose the office of the presidency and the levers of power that office holds. But Wednesday's show of force by his adherents was evidence that there are many thousands of people willing to commit open insurrection in his name. And that's just the people who could afford a ticket to D.C. in the midst of an unemployment crisis. So it goes without saying that this time will be different.

As soon as Twitter broke the news, explaining that Trump's most recent tweets could be interpreted as incitements to violence in the context of this week, conservative commentators announced (on Twitter, of course) that they were boycotting the site, and invited their followers to seek them out on Parler, the far-right echo chamber of the moment. Shortly after, Google suspended Parler from the Play Store, saying in a statement that the company was "aware of continued posting in the Parler app that seeks to incite ongoing violence in the U.S.," including threats on elected officials and plans for a militia march. Apple also reportedly threatened to ban Parler, giving the company 24 hours to develop a moderation policy.

The president also briefly breached Twitter's blockade, tweeting Friday night from the official @POTUS account, which remains unaffected by the ban, to say he was looking into "building out our own platform in the near future." That tweet quickly disappeared, too.

Whether Trump takes his diatribes to Parler or Rumble or 4chan or any of the other platforms that would love his help in quadrupling their engagement, the man who was a media mogul long before he was president will find his venue. The only question is whether the mainstream media will, out of habit, morbid curiosity or concern for public safety, follow him there. If they — if we — do, the next four years will be no different from the last in terms of Trump's chokehold on the national conversation. If not, his audience might be smaller. But it won't be any less angry, any less capable of the violence that Twitter and Google and Facebook and the other tech companies that have taken action this week are afraid of. It will only be more insular, more tunnel-visioned, more removed from a world of information that Trump has said all along is fake news anyway.

In his post announcing that President Trump would be blocked from posting on Facebook until at least Inauguration Day, Mark Zuckerberg wrote that the president's incitement of the violent mob that stormed the U.S. Capitol building Wednesday was "fundamentally different" than any of the offenses he's committed on Facebook before. "The risks of allowing the President to continue to use our service during this period are simply too great," he wrote on Thursday.

That may be true. But there's another reason why — after four years spent insisting that a tech company has no business shutting up the president of the United States, no matter how much he threatens to shoot protesters or engages in voter suppression — Zuckerberg finally had a change of heart: Republicans just lost power.

Since at least 2016, when conservatives first set off on their crusade against Big Tech, armed with spurious claims of liberal bias, Facebook's leaders have cowered in fear of the right. That year, after Gizmodo reported that Facebook had kept some conservative news out of its trending topics feature, Zuckerberg invited a ragtag delegation of conservative pundits including Glenn Beck and Tucker Carlson to Facebook's headquarters to extend an olive branch.

From that day forward, Facebook has repeatedly sought to avoid the right's ire, elevating Joel Kaplan, a former George W. Bush staffer and himself a Brooks Brothers rioter, to the company's highest public policy position to navigate a Washington that was no longer starry-eyed about Silicon Valley. During Trump's tenure and under a Republican-controlled Congress, Facebook refused to prohibit white nationalist content and conspiracy theories like QAnon, shelved plans to promote healthier political dialogue for fear it would stoke Republican outrage and lavished special treatment on far-right pages that repeatedly violated its policies. All the while, conservative voices dominated the site in the U.S.

None of it helped, of course. Zuckerberg was still regularly excoriated by Republican members of the House and Senate in public hearings over imagined slights and asked to explain again and again why a given political ad had been taken down or why Diamond and Silk's Facebook traffic was falling. In those moments, Zuckerberg would steel himself and politely vow to look into the matter, bending over backward to prove he was taking each bad faith argument to heart.

But now that the people of Georgia have spoken, Democrats are about to assume control of both the White House and the Senate. It stands to reason, then, that Zuckerberg is beginning to bend in a new direction.

That fact didn't go unnoticed by some on the left, like former White House communications director Jennifer Palmieri, who tweeted Thursday, "It has not escaped my attention that the day social media companies decided there actually IS more they could do to police Trump's destructive behavior was the same day they learned Democrats would chair all the congressional committees that oversee them."

"Most big content moderation decisions are a reaction to policymakers, negative press coverage or advertisers, and we're seeing that play out this week," said Nu Wexler, a former policy spokesperson for Facebook.

That's not to say Zuckerberg was wrong or lying about the unprecedented and dangerous nature of Trump's actions on Wednesday. Facebook has historically avoided touching his posts, in part out of respect for the office of president and the significance that the president's words are supposed to hold in this democracy. But as Trump used the platform to express his love and understanding for the Capitol rioters and again repeat the baseless claims that sent them to the Capitol in the first place, he shattered the last democratic norm that Facebook was supposedly protecting. In muzzling Trump for the rest of his presidency and possibly beyond, Facebook has taken stronger action than either Twitter or YouTube have so far, though both companies have removed some Trump posts and warned that the president risks permanent bans if he commits more violations in the future.

That's the generous read on Facebook's actions, at least. Another read reveals more cynical motives. Unlike Republicans on the Hill, who have fixated on how they and their conservative constituents are supposedly silenced, Democrats have spent the last four years fixated on how real-world harm manifests from Facebook's decision not to silence people more. But with the possible exceptions of the shooting at a Congressional baseball game in 2017 and the mail bombing attempts by the so-called MAGA bomber in 2018, rarely have the real-world repercussions of allowing people to become radicalized by lies online hit as close to Congress' home as they did on Wednesday.

Facebook's leaders, including Zuckerberg, know Washington well enough by now to expect this week's chaos to become Democrats' animating issue for the foreseeable future. "The Mazie Hironos of the world are definitely going to call for hearings on online harms, and they should," said Graham Brookie, director of the Atlantic Council's Digital Forensic Research Lab, which researches disinformation online. "But now, it's gonna be like: 'Your platform coordinated an attempt on my life. What do you have to say?'"

And in those moments, Zuckerberg will steel himself once more and politely list all the steps Facebook has taken since that awful day to try to control the chaos, including, at least temporarily, banning Trump. Republicans will rant, as Democrats have done so often these last four years, about how the party holding the gavel is trying to force Facebook's hand and cow the company into carrying out their wishes. But without control of Congress or the White House, Republicans won't be able to do much about it. So Zuckerberg will hope instead that, at least in the eyes of the new people in power, these actions will help make up for the last four years. And he will almost certainly be wrong.

Democrats have won both Senate races in Georgia, taking back control of the Senate after five years of Republican leadership. The dramatic shift will undoubtedly reenergize the legislative landscape over the next several years — and could bring Democrats' tech agenda one step closer to reality.

That's good news and bad news for tech. An antitrust crackdown and other regulation becomes more likely — stock futures fell Wednesday as investors anticipated a new regulatory regime for Big Tech — but the tech sector could see gains on immigration and some relief from the Republicans' attacks on Section 230.

Here are the top reforms and nominations that could stand a chance in a new Congress controlled by Democrats.

Breaking up Big Tech

President-elect Joe Biden's campaign argued that tech giants have "not only abused their power, but misled the American people, damaged our democracy and evaded any form of responsibility." With Democrats in control of both the House and the Senate, Biden will have much more latitude to do something about it.

Antitrust reform actually has a shot in the 117th Congress, and Democrats have already put together a 449-page report laying out their game plan. Conversations about updating century-old trust-busting statutes will likely begin with that blueprint from Rep. David Cicilline, which claims Big Tech has "monopoly power" and should be broken up.

Cicilline campaigned aggressively for President-elect Joe Biden and maintains close relationships on his transition team. While he won't be able to get all of his biggest ideas through a narrowly divided Senate — which still requires 60 votes for most legislation — even moderate Democrats like Sen. Amy Klobuchar have said it's time to overhaul antitrust laws for the digital age. Reforms could include making it harder for Big Tech to acquire potential rivals and passing new rules around how corporations can muscle into new markets. At the very least, Congress is more likely than ever to inject real money into the Federal Trade Commission and Department of Justice to support their antitrust lawsuits against the tech giants.

Passing a federal privacy bill

Democratic Sen. Maria Cantwell will likely become the new chair of the powerful Senate Commerce Committee — if she wants it. If she does, she'll no doubt elevate her Consumer Online Privacy Rights Act, a comprehensive federal privacy framework that she first introduced last year.

Earlier this year, Cantwell criticized the array of other privacy bills in Congress, particularly those from her Republican counterparts on the committee. "These bills allow companies to maintain the status quo, burying important disclosure information in long contracts, hiding where consumer data is sold, and changing the use of consumer data without their consent," she said.

COPRA would give users the right to see and delete any personal information that companies have amassed about them and require tech companies to clearly explain what they are doing with users' data. It also includes provisions that would allow individuals to sue companies over privacy violations and enable states to pass their own separate privacy legislation. Those line items will certainly spur partisan wrangling and invite significant pushback from tech giants, who have consistently argued that federal legislation should override state laws.

But an updated COPRA could have legs — especially if Biden throws his weight behind it. COPRA is already co-sponsored by Klobuchar and Democratic Sen. Ed Markey, and it was the result of more than a year of behind-the-scenes conversations.

A federal privacy bill will still need support from both parties, but the Democratic win gives Cantwell a boost in negotiations.

Curbing bias in AI

Biden and Vice President-elect Kamala Harris have pledged to focus on civil rights across all policy areas, and tech won't be any exception. It's safe to expect that Congress will work to tackle issues including discriminatory algorithms and biased facial recognition technology over the next year, especially considering Harris herself has signed on to legislation that would tackle racial bias in AI.

Democrats' police reform bill, which could make a comeback in some form, included provisions to restrict the use of facial recognition technology by police officers. Many privacy and civil rights advocates, including a key member of Biden's transition team, have been pushing Congress to address AI bias in any forthcoming privacy legislation. Those conversations are expected to start with the Algorithmic Accountability Act, co-sponsored by Sens. Cory Booker and Ron Wyden, which would require companies to study and fix algorithms that result in discriminatory decision-making.

Expanding high-skilled immigration

Biden has promised to overturn some of President Trump's more draconian restrictions on high-skilled immigration, including the H-1B visa program that both tech giants and companies across sectors use to recruit technical talent from overseas. But with control of both chambers of Congress, Democrats could pursue more full-throated reforms, like the long-promised path to citizenship for "Dreamers," which tech giants like Microsoft, as well as tech leaders like Mark Zuckerberg and Tim Cook, have overwhelmingly endorsed.

The Democrats could also take another stab at legislation that would expand high-skilled immigration and fast-track permanent residence for people on student visas in STEM fields, similar to the policies they pursued in 2013. Back then, a bipartisan group known as the Gang of Eight successfully pushed an immigration bill through the Senate, with 14 Republicans joining Democrats in support. Ultimately, it failed in what was then a Republican-controlled House.

"We need legislation to build a modern, high-skilled immigration system — one that expands the ability for America to be the destination of talent from around the world," said Todd Schulte, president of FWD.us, an immigration advocacy group backed by Zuckerberg. At the top of the agenda, Schulte expects to see legislation that offers a path to citizenship for Deferred Action for Childhood Arrivals beneficiaries and officially gives the spouses of H-1B visa holders who are on H-4 visas the ability to work.

Taking a scalpel, not a hammer, to Section 230

While Democrats from Biden on down have expressed concerns about Section 230, they have not been nearly as preoccupied with the law as their Republican counterparts and, in particular, Trump. During recent congressional hearings on Section 230, leading Democrats expressed dismay and even disdain for Republican attempts to force social media giants' hands on content moderation decisions. With competing priorities like vaccine distribution and delivering additional stimulus funding to Americans on Democrats' to-do list, Section 230 reform will likely fall farther down on the legislative agenda.

But that doesn't mean Democrats will leave the law alone. Unlike Republicans, who have repeatedly blamed Section 230 for tech companies' ability to moderate content, Democrats have primarily expressed concerns about tech companies' lack of moderation and the protections that Section 230 affords them when extremist, violent or otherwise offensive content causes real-world harm.

Recently, leading voices on the left, including Biden's new deputy chief of staff, Bruce Reed, have called for changes to Section 230, particularly as it pertains to content that hurts children online. In the Senate, Democrats Brian Schatz and Richard Blumenthal have each sponsored bills that would reform Section 230, albeit in dramatically different ways. And in the House, Reps. Anna Eshoo and Tom Malinowski introduced a bill of their own in 2020, which would allow companies to be held liable if their algorithms amplify extremist content. They've promised to reintroduce that bill this year.

Nominating progressives as top tech watchdogs

With Democrats in control of the Senate, Biden will no longer have to consider how Mitch McConnell will feel about his nominees to various federal agencies, including the FTC, DOJ and Federal Communications Commission. It's likely that he will quickly seek to fill the open Democratic slot at the FCC, ensuring Democrats can push through their agenda at the agency, despite the last-minute GOP confirmation of Nathan Simington last year. And he's now freer than ever to elevate progressives a GOP-led Senate would have resisted — for example, by handing the FTC chair's slot to Commissioner Rohit Chopra, a close ally of Sen. Elizabeth Warren.

Updated: This story was updated at 1:48 p.m. PT after Jon Ossoff's win was confirmed.

Facebook and Google are still banning political ads on their platforms, but that hasn't stopped Republican super PACs from spending millions of dollars on other platforms, including Hulu, to reelect Sens. Kelly Loeffler and David Perdue. The only difference: On those platforms, there's no way of knowing what the ads say.

After the 2016 election, Facebook and Google created imperfect but extensive databases of every political ad that runs on their sites, complete with information on who's running them and how much they're spending. But these measures are entirely self-imposed and haven't been adopted by the majority of companies, including large streaming platforms. When Facebook and Google decided to prohibit all political ads after the U.S. election — and then decided to extend that ban, likely through the end of the year — they more or less pushed all digital ad spending for the Georgia runoffs onto platforms that offer no transparency at all.

"We're not going to know what the content of these ads are," said Brendan Fischer, director of federal reform at the Campaign Legal Center. "It'll be much harder in the Georgia senate race to identify what digital messages these super PACs are disseminating to voters and make it harder to correct the record if misinformation is distributed."

The Campaign Legal Center reviewed Federal Election Commission filings for digital ads targeting the Georgia runoffs and found that as of Tuesday, the vast majority of the spending is coming from Republican super PACs. Americans for Prosperity Action, the Koch brothers-funded group, has reported $1 million in spending this month on digital ad expenses in support of Perdue and in opposition to his Democratic opponent Jon Ossoff. Another group, the National Victory Action Fund, reported $2.75 million in online advertising, email communication and SMS messages to support Loeffler and Perdue, though it's not clear what percentage of that went into ads.

The groups aren't required to say where their ads are running, but one group, FreedomWorks for America, reported spending about $346,000 in pro-Loeffler and Perdue ads directly on Hulu.

All in, the Campaign Legal Center found Republican super PACs have spent over $5 million on digital ads and outreach in the Georgia runoff, compared to under $700,000 in digital ads from Democrats. None of those ads are visible to anyone who wasn't targeted by them.

"Many of the ads are pretty ugly," Fischer said of the Georgia attack ads running on television. "We're seeing some of those kinds of inflammatory messages being disseminated in public. You could only imagine what kind of messages would be communicated in secret with targeted ads that are not otherwise publicly available."

This is not the first time the Campaign Legal Center has found millions of dollars in advertising flowing to platforms that don't disclose political ads. But Facebook and Google's ongoing ad ban essentially ensures that those platforms with no accountability are the only place digital political ads can go.

To Fischer, this is yet another data point to illustrate why Facebook and Google's self-imposed transparency initiatives are insufficient and why the country needs laws that make digital platforms subject to the same record-keeping and disclosure requirements that television and radio broadcasters are held to. New York state passed a law imposing disclosure requirements on digital political ads targeting the state, but federal efforts to pass such reforms through a bill called the Honest Ads Act have stalled out in Congress.

Democrats have vehemently opposed Facebook and Google's ongoing ad ban, arguing it gives Republican incumbents, both of whom are independently wealthy and may be able to afford bigger budget TV ads, a leg up. "Facebook and Google are putting their fingers on the scale for millionaire Republican candidates while ignoring the rampant disinformation on their platforms and engaging in their own version of voter suppression," Ossoff communications director Miryam Lipper recently told Protocol in a statement. "Facebook and Google should exempt Georgia Senate candidates from the ban."

When they testify before the Senate Judiciary Committee on Tuesday, Mark Zuckerberg and Jack Dorsey will undoubtedly try to convince lawmakers that their companies took unprecedented actions this year to protect the 2020 election.

If lawmakers actually do their job this time, they could get answers about whether any of those actions worked.

Yes, the last Senate hearing featuring Zuckerberg, Dorsey and Sundar Pichai (who will not attend Tuesday's hearing) was an unmitigated partisan disaster, and there's no guarantee this one will be any different. But with the election behind us and attempts to undermine it still very much ongoing, members of the committee have a chance to redeem themselves by getting to the bottom of important questions about how these platforms have dealt with those attempts from President Trump on down.

The hearing was initially scheduled after Facebook and Twitter limited the spread of a viral New York Post story about President-elect Biden's son Hunter in October. Now, Republicans seem even more primed to cry censorship than when the hearing was first announced, given the huge volume of warning labels the two companies have since slapped on President Trump's own posts. That means if anyone is going to get to the bottom of whether the platforms' strategies were effective, it will likely be the Democrats.

Perhaps the most important question Zuckerberg and Dorsey could answer is whether warning labels actually stopped or slowed the spread of misinformation. That's nearly impossible for researchers outside of the companies to figure out. "As external researchers to those platforms, it's difficult to measure the effects of their interventions because we don't know what those interventions are or when they happen, or what combination of interventions are happening," Kate Starbird, an associate professor at the University of Washington, said on a recent call with the disinformation research group the Election Integrity Partnership.

Twitter hinted at some of these answers in a blog post last week, in which executives said tweets that included warning labels saw a 29% decrease in quote retweets. But that figure didn't distinguish between the subtle labels that appeared below some tweets and the more forceful ones that required users to click through an interstitial before they could view the tweet at all.

Twitter also touted its "pre-bunk" notifications, which appeared at the top of users' feeds and informed them that voting by mail was safe and that the election results might be delayed. Those prompts were viewed by 389 million people, according to Twitter, but that number says very little about the impact those prompts had on those people.

So far, Facebook hasn't shared any such numbers illustrating its labels' effectiveness. "We saw the same posts on Twitter and Facebook receive pretty different treatments," said Jessica González, co-CEO of the advocacy group Free Press. "Facebook had a more general message, which was almost the same as the message they put on any post people posted that had anything to do with the election. I'm worried about the milquetoast nature."

González said lawmakers should use this opportunity to press both companies on whether and how they're studying those qualitative questions about their warning labels and what results, if any, they've found so far.

Erin Shields, national field organizer at MediaJustice, which is part of a group called the Disinfo Defense League, said Zuckerberg and Dorsey need to answer questions about their treatment of repeat offenders. This is a concern other disinformation researchers at the Election Integrity Partnership have recently raised as well, regarding a slew of far-right personalities who have repeatedly spread voting misinformation. Twitter recently permanently suspended an account belonging to Steve Bannon over a video in which he argued Dr. Anthony Fauci and FBI director Christopher Wray should be beheaded. Facebook took down the video, but left Bannon's account untouched.

"At what point do those rule violators get suspended?" Shields said. "Regular, everyday people get booted off the platform and get their accounts suspended for much less. It's interesting to see how much grace these platforms are giving political actors who are consistently violating their policies."

One related question from Shields: How much of the violative content consisted of live videos, like Bannon's, or memes? And how much longer did it take to take action on those posts, as opposed to posts containing text? The answer to that question, Shields argues, could say a lot about how porous these platforms' defenses are when it comes to video and imagery.

"We know they have some ability to check words, but what are they doing about memes and graphics and, in particular, live video where disinformation and misinformation is being shared with no pushback from the platforms?" Shields said.

This question is what makes YouTube CEO Susan Wojcicki's absence from the hearing so conspicuous. YouTube took by far the most hands-off approach to election misinformation, allowing videos falsely declaring President Trump's victory to remain on the platform and rack up views. YouTube added subtle warning labels to some videos and removed their ability to run ads, but was far less proactive than either Facebook or Twitter in directly contradicting misinformation within warning labels.

YouTube has pushed back on some of the criticism it's faced, stating that 88% of the top 10 results in the U.S. for election-related searches come from authoritative sources. But that stat elides the fact that people often encounter YouTube videos on other websites or social media platforms, without ever touching YouTube search. Given how much coordination there was between tech platforms and federal agencies leading up to Election Day, it's unclear why YouTube took such a markedly different approach. "YouTube has been let off the hook here," Shields said. Without Wojcicki there, the Senate will have to save those questions for another day.

If politics is all about people, the tech industry might be set for an excellent four years. Analysis by Protocol shows that Joe Biden, Kamala Harris and their teams have close and sprawling links to the industry, with almost all of the major tech companies represented in some capacity.

To better understand the incoming administration's ties to tech, Protocol mapped out those connections. The map reveals a complex web of relationships, with familial and professional ties overlapping to create a network of interests.

Some of the ties are simple: Apple is linked to the transition team through former VP Cynthia Hogan; Harris' brother-in-law is Uber's chief legal officer. But others are more complex: Harris' niece, Meena, used to work at Uber, Slack and Facebook, and Meena's husband is a current Facebook executive. Biden's son-in-law, meanwhile, is involved in a health-tech investment firm, and Airbnb staff are well-represented in the transition team. Oh, and Harris attended Sean Parker's wedding.

Protocol used certain criteria to make this map: To be included, people must be current or former employees of Biden or Harris (so people who only worked for former President Obama in the last Democratic administration are excluded), or must have well-known, close professional or personal links to one of the two. Potential Cabinet members and people on the transition's agency review teams are also included. Click on individual people and connections to see more detail on the links.

Do you know of other people who should be included on the map? Email shakeel@protocol.com and we'll add them.

Additional reporting by Emily Birnbaum, Anna Kramer, Issie Lapowsky and Biz Carson.

When Joe Biden won the presidential election last week, he joined a rarefied group. Sure, he'll be one of just 45 men to have ever served as president — but he's also one of the only politicians ever to draw more attention on Twitter than President Trump.

During the first week of November, Biden's tweets received more likes and retweets on average than President Trump's, even as Trump's Twitter feed swelled with a relentless stream of baseless voter fraud claims. That's according to new data from the social media insights company Conviva. According to its data, Biden has only bested Trump at Twitter engagement once before in 2020 — narrowly, in September. This time, Biden pulled in an average of 300,000 engagements per tweet in a single week, roughly double that of an average month for Trump.

Image: Conviva

It's unclear what role Twitter's decision to hide Trump's tweets behind misinformation labels played in those numbers. But it is clear that Biden's social media audience has expanded exponentially in November. While Biden had amassed just 7.67 million Twitter followers between January and October, he got another 4.76 million followers in the first eight days of November alone. Between Nov. 3 and 8, he gained 11.4 million followers across Facebook, YouTube, Twitter and Instagram. That's compared to roughly 4 million new followers for Trump across those platforms in the same time frame.

Image: Conviva

Of course, Trump's social media audience is so immense that getting new followers isn't really the problem, and Biden still has a long way to go to catch up with Trump's overall audience. As of Nov. 8, Trump had 146.8 million followers across those four social networks, compared to Biden's 32.7 million. He also has a massive network of hyperpartisan media companies and personalities working overtime to echo his unsubstantiated claims about voter fraud, including on Twitter. While Biden may be gaining ground on Trump's personal Twitter engagement, on Facebook, Trump is still getting millions of interactions a day on his posts.

According to CrowdTangle data, at a certain point on Tuesday afternoon, Trump's Facebook page was drawing 89% of all interactions on presidential and vice presidential candidates' pages. On Wednesday, groups like Nationwide Recount & Audit 2020 and Stop the Steal topped CrowdTangle's charts among Facebook groups discussing voting in the U.S. election. Meanwhile, Trump's own page and other conservative pages that support him continue to dominate the top U.S. posts containing links on Facebook.

In any other race, getting the most interactions on social media doesn't win you much more than bragging rights. But in this spectacularly peculiar election, in which the sitting president is pushing to nearly 150 million people an alternate reality in which he claims he didn't actually lose, social media chatter has consequences. According to a new study published Tuesday, Trump's tweets about alleged voter fraud have serious deleterious effects on his followers' trust in elections.

As for who's winning the race on Instagram? Neither Trump nor Biden can claim that title for November. Vice President-elect Kamala Harris ran away with it.

Image: Conviva

The announcement that the ban would be extended, which Facebook made in an update to a blog post Wednesday, fueled mounting frustration among Democrats, who say that the ban is limiting Georgia Senate candidates Jon Ossoff and the Rev. Raphael Warnock's ability to reach voters ahead of a crucial runoff election in January.
