Workplace

How I decided to call out the 'toxic culture' of CS

Edward Lee, a professor emeritus at Berkeley, wrote a blog post chastising the rejection culture at computer science conferences. Here’s the story of how he decided to write it and why it went viral.


"I have seen some extremely talented people leave the field because of brutal reviews. And that's just unacceptable," computer science professor Edward Lee told Protocol.

Illustration: Christopher T. Fong/Protocol


Wannabe computer science superstars must all run the same rather scary and capricious gauntlet, one that sounds deceptively dull: the computer science conference paper review process. To have a research paper accepted for presentation at a CS conference is a coveted rite of passage among academics and professionals, bestowing on its author a status symbol that can open the door to tenure or competitive job offers.

Last month, the University of California, Berkeley’s much-respected Edward Lee, a professor emeritus of electrical engineering and computer sciences who for several decades has served on program committees that judge research papers, caused an uproar in the CS community after he publicly shared a scathing review of the system, which he’d sent earlier to fellow judges. Program committee members who decide which papers are accepted are volunteers, members of the academic community who agree to spend hours of their time (theoretically) reading submissions, writing opinions and voting on whether papers are worthy of the hallowed halls of whatever conference is in session.

But ever since conferences adopted a new review process that shields the names of judges, as well as papers’ authors (many made the change in the early 2000s), critics say a new problem has arisen: Rejection notes are often so random, or just factually incorrect, that authors suspect nobody actually read their paper. The issue came to a head for Lee earlier this year. Here is his story of why he’s chastising the community he’s been a loyal member of for so long, and what he thinks can be done to address the problem.

Lee’s story, as told to Protocol, has been edited for clarity and brevity.

It started when I was serving on the program committee for one of my favorite conferences. That experience pushed me over the edge, and I wrote a letter to the entire program committee and resigned.

I had found myself fighting with a lot of the program committee members over determinations about papers, and a number of papers that I thought were highly worthy got rejected. There were two papers that were principally authored by students that I had worked with closely, and of course I couldn't participate in the deliberations because of conflict of interest rules. But those papers got rejected with what I considered to be unsound reviews. And I have enough experience to know that these two papers were excellent papers.

My resignation and protest letter got quite a few people upset. I got quite a bit of feedback. It has become very clear to me that there are a lot of people who are very frustrated with the current situation. This piece seems to have really resonated with a lot of people because everyone in the community is facing 10% acceptance rates for their papers. I have seen some extremely talented people leave the field because of brutal reviews. And that's just unacceptable.

Employers need to know that the reality is this: Getting conference papers accepted is extremely random. Looking at published conference papers in computer science as a measure of the quality of a candidate is flawed. You're looking at luck. If you want to hire lucky people, OK. But that's usually not what employers are looking for.

And then the SIGBED blog editors somehow got wind of my open letter to the program committee and asked me if I would submit it as a blog post, which is how the SIGBED blog post came about.

I've been doing these kinds of reviews for my entire career. So that's 40 years. I get invited to be on a lot of these program committees, but I simply don't have the bandwidth for them. So I typically serve on two or three a year, trying to pick the type of conferences that I can contribute the most to.

The problem has been there all along, but it was much less visible to me because the reviews didn't use to be double-blind. The students that I worked with the most closely were almost always from Berkeley, and Berkeley papers weren't rejected as often as papers from other places.

So in some ways, the institution of double-blind review processes has been a very good thing, because prejudices were unknowingly creeping into the review process. Papers from the best institutions were more likely to be accepted, papers written by men were more likely to be accepted than papers written by women, and papers whose authors had Chinese names were more likely to be rejected. The double-blind review process put an end to that problem.

But that also exposed to me the high rejection rates and the frustration that accompanies them, because the reviews are frankly capricious and often unsound.

Part of the problem is that the program committees are being asked to do more than is actually possible. In the past, they could rely on a kind of crutch: It's an MIT paper, so it's probably pretty good. Let's just accept it. But they can't do that anymore.

There’s also the anonymity. There are good reasons for keeping the reviewers anonymous: you don't want junior people who are reviewing to be vulnerable to retribution from senior people who get their papers rejected. But people can be much meaner when they are anonymous. Moreover, because the reviews themselves never get published, a reviewer's critique of a paper is protected from scrutiny.

One thing we could do that would improve things quite a bit is to keep the double-blind process but publish the original submission and the reviews right along with the paper. That way the conference gets associated with the reviews, and if a conference has a lot of capricious reviews, that's going to degrade its reputation. Right now there's basically a lot of power with no accountability, which is almost never a good thing.

The first open letter that I sent got circulated to all the new program committee members of another related conference shortly thereafter. I've seen quite a bit of discussion about being a lot more careful about using novelty as a criterion for rejection, for example, which is one of the things I argue against in the blog.

I'm hoping that there will be some impact. I've been collecting notes from all the feedback I've been getting, and I might have enough to put together a more upbeat follow-up blog that discusses some real concrete actions that can be taken.
