Policy

One year since Jan. 6, has anything really changed for tech?

Tech platforms have had a lot to make up for this last year. Did any of it matter?

Rioters scaling the U.S. Capitol walls during the insurrection

Photo: Blink O'faneye/Flickr

There was a brief window when it almost looked like tech platforms were going to emerge from the 2020 U.S. election unscathed. They’d spent years nursing their wounds from 2016 and building sturdy defenses against future attacks. So when Election Day came and went without any obvious signs of foreign interference or outright civil war, tech leaders and even some in the tech press considered it a win.

“As soon as Biden was declared the winner, and you didn’t have mass protests in the streets, people sort of thought, ‘OK, we can finally turn the corner and not have to worry about this,’” said Katie Harbath, Facebook’s former public policy director.

One year ago today, it became clear those declarations of victory were as premature as former President Trump’s.

Much has been said and written about what tech platforms large and small failed to do in the weeks leading up to the Capitol riot. Just this week, for example, ProPublica and The Washington Post reported that after the election, Facebook rolled back protections against extremist groups right when the company arguably needed those protections most. Whether the riot would have happened — or happened like it did — if tech platforms had done things differently is and will forever be unknowable. An arguably better question is: What’s changed in a year and what impact, if any, have those changes had on the spread of election lies and domestic extremism?

“Ultimately what Jan. 6 and the last year has shown is that we can no longer think about these issues around election integrity and civic integrity as something that’s a finite period of time around Election Day,” Harbath said. “These companies need to think more about an always-on approach to this work.”

What changed?

The most immediate impact of the riot on tech platforms was that it revealed room for exceptions to even their most rigid rules. That Twitter and Facebook would ban a sitting U.S. president was all but unthinkable up until the moment it finally happened, a few weeks before Trump left office. After Jan. 6, those rules were being rewritten in real time, and remain fuzzy one year later. Facebook still hasn’t come to a conclusion about whether Trump will ever be allowed back when his two-year suspension is up.

But Trump’s suspension was still a watershed moment, indicating a new willingness among social media platforms to actually enforce their existing rules against high-profile violators. Up until that time, said Daniel Kreiss, a professor at the University of North Carolina’s Hussman School of Journalism and Media, platforms including Facebook and Twitter had rules on the books but often found ways to justify why Trump wasn’t running afoul of them.

“There was a lot of interpretive flexibility with their policies,” Kreiss said. “Since Jan. 6, the major platforms — I’m thinking particularly of Twitter and Facebook — have grown much more willing to enforce existing policies against powerful political figures.” Just this week, Twitter offered up another prominent example with the permanent suspension of Georgia Rep. Marjorie Taylor Greene.

Other work that began even before Jan. 6 took on new urgency after the riot. Before the election, Facebook had committed to temporarily pausing recommendations of political and civic groups, after internal investigations found that the vast majority of the most active groups were cesspools of hate, misinformation and harassment. After the riot, that policy became permanent. Facebook also said late last January that it was considering reducing political content in the News Feed, a test that has only expanded since then.

The last year also saw tech platforms wrestle with what to do about posts and people that don’t explicitly violate their rules but walk a fine line. Twitter and Facebook began to embrace a middle ground between removing posts or users outright and leaving them alone entirely, leaning more heavily on warning labels and preventative prompts.

They also started taking a more expansive view of what constitutes harm, looking beyond “coordinated inauthentic behavior,” like Russian troll farms, and instead focusing more on networks of real users who are wreaking havoc without trying to mask their identities. In January of last year alone, Twitter permanently banned 70,000 QAnon-linked accounts under a relatively new policy forbidding “coordinated harmful activity.”

“Our approach both before and after January 6 has been to take strong enforcement action against accounts and Tweets that incite violence or have the potential to lead to offline harm,” spokesperson Trenton Kennedy told Protocol in a statement.

Facebook also wrestled with this question in an internal report on its role in the riot last year, first published by BuzzFeed News. “What do we do when a movement is authentic, coordinated through grassroots or authentic means, but is inherently harmful and violates the spirit of our policy?” the authors of the report wrote. “What do we do when that authentic movement espouses hate or delegitimizes free elections?”

Those questions are still far from answered, said Kreiss. “Where’s the line between people saying in the wake of 2016 that Trump was only president because of Russian disinformation, and therefore it was an illegitimate election, and claims about non-existent voting fraud?” Kreiss said. “I can draw those lines, but platforms have struggled with it.”

In a statement, Facebook spokesperson Kevin McAlister told Protocol, “We have strong policies that we continue to enforce, including a ban on hate organizations and removing content that praises or supports them. We are in contact with law enforcement agencies, including those responsible for addressing threats of domestic terrorism.”

What didn’t?

The far bigger question looming over all of this is whether any of these tweaks and changes have had an impact on the larger problem of extremism in America — or whether it was naive to ever believe they could.

The great deplatforming of 2021 only prompted a “great scattering” of extremist groups to alternative platforms, according to one Atlantic Council report. “These findings portray a domestic extremist landscape that was battered by the blowback it faced after the Capitol riot, but not broken by it,” the report read.

Steve Bannon’s War Room channel may have gotten yanked from YouTube and his account may have been banned from Twitter, but his extremist views have continued unabated on his podcast and on his website, where he’s been able to rake in money from Google Ads. And Bannon’s not alone: A recent report by news rating firm NewsGuard found that 81% of the top websites spreading misinformation about the 2020 election last year are still up and running, many of them backed by ads from major brands.

Google noted the company did demonetize at least two of the sites mentioned in the report — Gateway Pundit and American Thinker — last year, and has taken ads off of individual URLs mentioned in the report as well. “We take this very seriously and have strict policies prohibiting content that incites violence or undermines trust in elections across Google's products,” spokesperson Nicolas Lopez said in a statement, noting that the company has also removed tens of thousands of videos from YouTube for violating its election integrity policies.

Deplatforming can also create a measurable backlash effect, as those who have been unceremoniously excised from mainstream social media urge their supporters to follow them to whatever smaller platform will have them. One recent report on Parler activity leading up to the riot found that users who had been deplatformed elsewhere wore it like a badge of honor on Parler, which only mobilized them further. “Being ‘banned from Twitter’ is such a prominent theme among users in this subset that it raises troubling questions about the unintended consequences and efficacy of content moderation schemes on mainstream platforms,” the report, by the New America think tank, read.

“Did deplatforming really work or is it just accelerating this fractured news environment that we have where people are not sharing common areas where they’re getting their information?” Harbath asked. This fragmentation can also make it tougher to intervene in the less visible places where true believers are gathering.

There’s an upside to that, of course: Making this stuff harder to find is kind of the point. As Kreiss points out, deplatforming “reduces the visibility” of pernicious messages to the average person. And evidence overwhelmingly suggests that the majority of people arrested in connection with the Capitol riot were just that: average people with no known connections to extremist groups.

Still, while tech giants have had plenty to make up for this last year, ultimately, there’s only so much they can change at a time when some estimates suggest about a quarter of Americans believe the 2020 election was stolen and some 21 million Americans believe use of force would be justified to restore Trump as president. And they believe that not just because of what they see on social media, but because of what the political elites and elected officials in their party are saying on a regular basis.

“The biggest thing that hasn’t changed is the trajectory of the growing extremism of one of the two major U.S. political parties,” Kreiss said. “Platforms are downstream of a lot of that, and until that changes, we’re not going to be able to create new policies out of that problem.”


Correction: This story was updated Jan. 6, 2022, to clarify that Facebook was just considering reducing political content in the News Feed on its late January earnings call.
