Workplace

How Twitter hired tech's biggest critics to build ethical AI

Twitter's META team is made up of some of tech's most notorious critics, and two more will soon be joining them: Sarah Roberts and Kristian Lum.

Twitter's Ethical AI lead, Rumman Chowdhury

Rumman Chowdhury, the head of Twitter's META team, sees her job as finding ways to distribute power and authority, rather than collect it for herself.

Photo: Rumman Chowdhury

Machine learning engineer Ari Font was worried about the future of Twitter's algorithms. It was mid-2020, and the leader of the team researching ethics and accountability for the company's ML had just left Twitter. For Font, the future of the ethics research was unclear.

Font was the manager of Twitter's machine learning platforms teams — part of Twitter Cortex, the company's central ML organization — at the time, but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.

So she volunteered to help rebuild Twitter's META team (META stands for Machine Learning, Ethics, Transparency and Accountability), embarking on what she called a roadshow to persuade Jack Dorsey and his team that ML ethics didn't only belong in research. Over the course of a few months and a series of conversations with Dorsey and other senior leaders, Font secured more than just a more powerful, operationalized place for the once-small team. Alongside the budget for increased headcount and a new director, she eventually persuaded Dorsey and Twitter's board of directors to make Responsible ML one of Twitter's main 2021 priorities, which came with the power to scale META's work inside of Twitter's products.

"I wanted to ensure that the very important research was having an impact on product, and was scaling. It was a very strategic next step for META that would allow us to take it to the next level," Font said. "We had strategy talks with Twitter staff, including Jack, and ultimately with the board. It was a very intense and fast process."

One year later, Twitter's commitment to Font's team has convinced even the most skeptical people in tech — the ethics research community itself. Rumman Chowdhury, notorious in tech and beloved by her fellow researchers for her commitment to algorithmic auditing, announced that she would be leaving her new startup to become Twitter's META leader. Kristian Lum, a University of Pennsylvania professor renowned for her work building machine-learning models that could reshape criminal justice, will join Twitter at the end of June as its new head of research. And Sarah Roberts, famous for her critiques of tech companies and the co-director of the Center for Critical Internet Inquiry at UCLA, will become a consultant for the META team this summer, researching what Twitter users actually want from algorithmic transparency.

(If something about this team feels different, it's because all of its leaders are women, and four of them have Ph.D.s. Twitter has been on a massive hiring spree, and not just for META, and the outcome has been proof that, actually, there is no shortage of top talent with widely varying backgrounds in tech.)

These hires are a massive coup for a social media platform desperate to escape the waves of vitriol and criticism enveloping Google and Facebook's work around algorithms, machine learning and artificial intelligence. While Google was forcing out prominent AI ethicists and researchers Timnit Gebru and Margaret Mitchell and Facebook was trying and failing to persuade politicians and researchers that it did not have the power to manipulate the way algorithms amplified misinformation, Twitter was giving Font and Jutta Williams, the product manager in charge of helping operationalize META's work, the resources and leeway to hire a team of people who could actually act on Twitter's promise to listen to its researchers.

Font's "roadshow" happened before Gebru and Mitchell's very public dismissals — Chowdhury said she would join Twitter the same week Google forced Mitchell out — but that explosion of attention on algorithms in 2020 nonetheless helped persuade Dorsey and his board of directors that ethical algorithms are worth spending money on.

Over the last year, the amplification of former President Donald Trump's social media posts via Facebook engagement algorithms drew widespread outrage from the left; Facebook's decision to very temporarily adapt those algorithms in response drew even sharper rebuke from the right. The spread of coronavirus misinformation followed a similar trajectory, while the nationwide conversation about criminal justice and race-based policing awakened the general public to the biases inherent in algorithms. All of this new awareness found a flashpoint in Google's Gebru. Her forced exit made the entire world pay attention to ethical AI.

"The ideological polarization … is also coming into responsible AI. We are being specifically targeted by names that I will not mention to you because then they will specifically come after me the way they have come after Timnit," Chowdhury said. "The very violent ideological divide is being pulled into our field."

The birth of META

Font wanted Chowdhury to run META from the beginning, but she thought there would be no way to persuade her. "We needed to get the right leader. I spent months doing this. I was OK that it took that long," Font said. "I wanted someone who was already established and well-respected, which, as you know, is not a community that is easy to please necessarily. This was a tricky quest."

But something about that first phone call made Chowdhury — who'd recently left her job as the senior principal for Responsible AI at Accenture to found her own startup — reconsider her future. "My goal was always to drive change in this industry. The industry is so young. I just want to see it succeed," she explained. If Twitter was actually serious about META, this job offer could be the chance she thought she might never have.

"I asked to talk to everybody. From leadership at Twitter down, I talked to everyone, from policy, from comms. It was absolutely critical to me that every single person who would be interacting with META was really on board. And I always left every interview so impressed. There was never any question of whether or not Twitter had the right kind of ethos," she said.

She took the job four months ago. Since then, in addition to the company's public commitment to its 2021 Responsible ML Initiative (which means Twitter will publicly share how it makes decisions about its algorithms, and how race and politics shape its ML), Twitter has already released an assessment of its image-cropping algorithm and removed the algorithm entirely based on the findings from the research.

Senior leadership said it would commit to Chowdhury's team, promising regular communication. They've been acting on that promise since before she arrived: Team members meet with Dorsey and his senior staff regularly to discuss progress, explain their work, secure additional resources and get buy-in from Dorsey on the research, education and changes they hope to implement.

"We present to Jack and his staff about every six weeks — we report our progress and where we are. They are most interested in learning what we've learned and how they can help. They actually really want to know — what did you learn, where are you going next — they very quickly want to help," Font said.

Williams, the program manager, was skeptical of Twitter's intentions when she agreed in 2020 to leave her job as the senior technical leader for privacy at Facebook and join the team. "It's incredibly disheartening as a very committed person, you go to a place and you think you're going to make a difference. I've had to make pivots and changes in my career because I bought into the hype," Williams said. "I was a bit disheartened about social media when Twitter told me, 'Please come and just talk to this team about this job.'"

Williams took the job, but she didn't give up on the idea that she might go back into health care privacy or nonprofit work: "I carried that healthy skepticism for quite some time."

The reality of change

Solving Twitter's problems means actually defining what users' "problems" are. "It's a lot easier to teach a model how to do something on behalf of people with their input," Williams explained. Roberts, who will be joining Twitter in early July, agreed to come on board to help answer precisely that question. She'll be given independence and latitude to help Twitter learn how to give people choice in usable ways. "We don't really know the answer to that," Williams said.

One of the few easily identifiable problems users had long vocalized was Twitter's image auto-cropping algorithm, which many people felt cropped uploaded images in a way that favored lighter-skinned people and sexualized female bodies. Williams, Font and Chowdhury cited their work on that algorithm as an example of how they plan to run their team.

In their first publicly detailed research project since Chowdhury's start, META created a test to assess how the algorithm actually performed on a wide range of photos. They found a slight race-based bias, and though they could have dismissed the numbers as small, they decided instead to work with the engineers to help remove the algorithm entirely. Rather than conduct their work separately from the team that would be affected if changes were made to the algorithm, they worked alongside that team, letting the engineers know early in the process about the research project. And when their findings showed that change should happen, they helped create the plan to remove the algorithm in partnership with the engineers in question.

And after the algorithm was removed, META published both a press release explaining how they reached their conclusions and a scientific paper showing how they conducted their research.

"To be perfectly honest, people have no problem taking Jack to task on Twitter. And Congress is literally just following what they heard people say," Chowdhury said.

"That's why we just develop in the open now," Williams added.

Beyond user choice and public transparency, Chowdhury's goal is to create a system of rules and assessments that functions like a government over the models: a system that could prevent harms from occurring, rather than just addressing them after people have been hurt.

The team centers the idea that machine-learning engineers don't have bad intentions; they often just lack an understanding of what they're capable of doing and how to govern their work ethically. An ethical, holistic approach isn't necessarily taught in most artificial intelligence grad programs, and very few tech companies support ethicists, auditors and researchers of Chowdhury's caliber with freedom and buy-in (see: Google's collapse of its own ethical AI work).

"Our engineers are looking for guidance and expertise. Things are actionable because they know we can do better, it's hard to know what to do differently unless you have a workflow," Font said. "People don't always know what it is they can do, even if they are smart and good-hearted."

What the META team doesn't have is serious enforcement power. They say they don't want it at the moment — "You can't really drive change through fear of enforcement, but for long-term investment in change you do much better by growing education," according to Williams — but at the end of the day, META is a knowledge-creating team, not a police force. While they can research and propose changes, they cannot necessarily force other teams to fall into line. Their work is democratic, not authoritarian.

"There's a life cycle to enacting change," Williams explained. "You have to focus on enhancement; your first iteration or two is more on monitoring than it is on auditing. This as a concept is so new that focusing very directly on discipline and enforcement, you can't really drive change through fear."

"Ethics is literally about the world of unintended consequences. We're talking about engineers who are well-intentioned in trying to build something who didn't have the background or education," Chowdhury said. "We're talking to people who wanted to do the right thing and didn't know how to do the right thing."

Chowdhury reads widely as a way of processing her thoughts — she cited countless books and papers during our conversation — and she sees herself creating a leadership style through a feminist lens. Rather than punishing or controlling the people she works with, she defines leadership as finding ways to share resources and power, not keeping them for herself. Seeking enforcement authority would run counter to that definition of leadership. "I worry very much about the consolidation of ruthless authority," she said.

Many of the researchers and leaders in the ethical machine-learning world believe that working inside a tech company and accepting a role as an adviser (rather than an enforcer) makes the work useless. That idea frustrated Chowdhury, Williams and Font, all of whom kept returning to the point that you can't make real progress if you stand forever apart from the industry you're critiquing. "Everyone outside the industry is pointing their fingers at you as if you are the problem. You are trying your best to do your job and do a good job and people are like, you are fundamentally unethical because you take a paycheck from them," Chowdhury said.

"But the goal of META is not to be this shining example of finger-pointing where we get to be the good guys while throwing our company under the bus," she added. "That's actually not very productive if our goal is to change the industry and drive the industry toward actionable positive output."