Politics

The hardest questions tech CEOs could be asked at the Section 230 hearing

There will be plenty of political point-scoring on Wednesday. But here's what senators should actually ask if they're serious about fixing the internet's favorite law.


Mark Zuckerberg, Sundar Pichai and Jack Dorsey are all set to testify before the Senate on issues related to Section 230 of the Communications Decency Act.

Photo: Graeme Jennings-Pool/Getty Images

Mark Zuckerberg, are your views on freedom of expression hypocritical? Sundar Pichai, are you ready for collective responsibility for online harm? Jack Dorsey, should revenge porn sites really have the same legal protections as Twitter?

Those are the kinds of hard questions that top experts on Section 230 of the Communications Decency Act think could stop the CEOs of Facebook, Google and Twitter in their tracks on Wednesday, when they're due to testify before the U.S. Senate Committee on Commerce, Science and Transportation about how the law has enabled "bad behavior" by Big Tech.

In the past, when Zuckerberg, Pichai and Dorsey have appeared before Congress, lawmakers have deluged them with questions about how their companies favor or suppress various viewpoints, often built on cherry-picked examples of controversial content that was either taken down or left online. With Election Day just one week away and tensions over tech platforms' treatment of political discourse at an all-time high, Wednesday's hearing will surely feature plenty of that.

But this is the first congressional hearing featuring these CEOs to focus on Section 230, and it could give lawmakers an opportunity to deepen their understanding of how the law really ought to be updated. In case they're willing to look beyond partisan quarrels, Protocol asked some of the top experts on Section 230 for the toughest questions they would put to Zuckerberg, Pichai and Dorsey. Here's what they had to say:

There's bipartisan support for the PACT Act, which would mean that you couldn't use Section 230 as a defense if you leave content up after a judge orders you to remove it. Do you support this reform?

— Matt Perault, former Facebook director of public policy and current director of Duke University's Center on Science and Technology Policy

This bipartisan bill, sponsored by Sens. Brian Schatz and John Thune, would make relatively light-touch changes to Section 230, including requiring platforms to explain their moderation policies, issue quarterly reports on moderation decisions and take down content deemed illegal in court within 24 hours. Facebook, Google and Twitter already comply with many of the provisions in the bill, but the Internet Association, which represents all three companies, has expressed concerns about it. Pinning these powerful CEOs down on their personal feelings about the legislation would be a meaningful contribution to the debate.

Let's say Congress repeals Section 230 tomorrow. How does that change your content moderation practices?

— Jeff Kosseff, assistant professor of cybersecurity law at the United States Naval Academy's Cyber Science Department

Because Section 230 protects companies from liability for filtering out offensive or objectionable content, one concern is that removing that protection would lead tech companies to stop filtering altogether. Kosseff posits the opposite: that companies would filter even more aggressively to limit their liability for whatever they leave up. How the CEOs answer could be telling.

How should the platforms address false statements and disinformation camouflaged as opinion? A statement that "I believe all Blacks are lazy" is not on its face an assertion of fact, but could be considered hate speech. What safeguards can ensure that any restrictions levied against such speech will be employed in the interest of public safety, and not merely to stifle a viewpoint with which a platform simply disagrees?

— Lateef Mtima, professor of law at Howard University

Tech platforms are under increasingly intense pressure to crack down on hate speech against minority groups, particularly as research shows that Facebook, Twitter and Google have fanned the flames of racism in the U.S. and abroad. The platforms have recently taken action against speech that promotes real-world violence, but they're still working out how aggressively they should act against bigoted opinions. "There's not yet a perfect tool or system that can reliably find and distinguish posts that cross the line from expressive opinion into unacceptable hate speech," a Facebook executive wrote in 2017. This is an area where the platforms' stances are changing quickly, and it will be important to hear the executives' thoughts on it now.

In the physical world, collective responsibility is a familiar concept: A person can be partly responsible for harm even if he did not intend for it to happen and was not its direct cause. Do you believe that tech companies should continue to be granted a special exemption from the rules of collective responsibility? Why?

— Mary Anne Franks, professor of law at the University of Miami School of Law and president of the Cyber Civil Rights Initiative

There's an ongoing debate over why tech platforms aren't subject to the same liability that brick-and-mortar businesses face in the offline world. Steering the conversation toward the actual harms that tech platforms facilitate, rather than baseless accusations of political bias, would be one way to make the exchange more substantive.

Would you support an amendment to Section 230 that excludes from protection any interactive computer service provider that manifests deliberate indifference to harmful content? Why or why not?

— Franks

Though they often fail, Facebook, Google and Twitter arguably at least attempt to make their platforms safe for users. But Section 230 doesn't just protect companies that are trying to do the right thing and sometimes get it wrong; it also shields companies that either invite or completely ignore bad behavior. Tech companies spend so much time answering for their own misdeeds that they rarely get asked how the law ought to handle explicitly bad actors.

Narrowing Section 230 immunity doesn't mean platforms will automatically be held liable. Victims still must prove their case. If they have a credible claim they've been harmed at the hands of platforms, why should victims be denied an opportunity for justice?

— Neil Fried, founder of DigitalFrontiers Advocacy, former chief counsel of the House Energy and Commerce Committee and SVP of the Motion Picture Association

Twitter, Facebook and Google have argued that reforming Section 230 could unleash a barrage of frivolous lawsuits against any company with an online footprint. But Section 230 has also been a major obstacle in court for very real victims of crimes facilitated by tech platforms, including genocide and online impersonation. Most judges throw out cases against the platforms immediately because Section 230 makes them so difficult to try. Section 230 reformers want to make it easier for victims to sue major online platforms for those harms. Tech giants have fought these cases vigorously in court but have rarely addressed them publicly.

Should a business that is knowingly facilitating an illegal activity be exempt from state and local criminal laws?

— Rick Lane, former 21st Century Fox SVP, currently advising victims' advocacy groups on Section 230

Section 230 defenders often point out that the law doesn't protect companies from being charged with federal crimes. The subtext: If the feds are so concerned about criminal activity happening online, they should enforce the law themselves. But the counter-argument boils down to a lack of resources at the federal level. Opening platforms up to state and local criminal liability would essentially expand the number of cops on the beat. It could also invite more activist enforcement from politically appointed attorneys general.

How consistent are your defenses of 230 with the rest of your views around maintaining freedom of expression and preventing a chilling effect? Those values seem to vanish into the ether when it comes to removing NDAs that keep employees from exercising that same freedom of expression. Where is the fear of a chilling effect when company whistleblowers are intimidated, retaliated against, then fired without recourse?

— Ifeoma Ozoma, First Draft board member, former public policy and social impact manager at Pinterest

The tech executives will likely argue that reforming Section 230 could limit free expression online, potentially forcing the companies to more aggressively remove content posted by their billions of users. But their companies have been accused of silencing criticism by maintaining restrictive NDAs and firing employees who speak out. It could be revealing to hear Pichai and Zuckerberg in particular talk about their recent employee unrest and how they plan to navigate future internal dissent.

Your services enable users to treat each other awfully. However, people also treat each other awfully in the offline world. What specific steps does/will your service take to reduce the quantum of awful behavior on your service so that it is lower than the offline baseline of awfulness?

— Eric Goldman, professor at Santa Clara University School of Law

This question feels tailor-made for Dorsey, who has spoken at length about creating "healthier" conversations on Twitter. Tech CEOs are used to being grilled about all the ways they punish people for bad behavior online, but there's often less focus on whether anything can be done to discourage people from doing so many bad things online in the first place.
