White House AI Bill of Rights lacks specific recommendations for AI rules

The document unveiled today by the White House Office of Science and Technology Policy is long on tech guidance, but short on restrictions for AI.

It was a year in the making, but people eagerly anticipating the White House's AI Bill of Rights will have to keep waiting for concrete recommendations on future AI policy or restrictions.

Instead, the document unveiled today by the White House Office of Science and Technology Policy is legally non-binding and intended to be used as a handbook and a “guide for society” that could someday inform government AI legislation or regulations.

The Blueprint for an AI Bill of Rights features five guidelines for protecting people affected by AI:

  • People should be protected from unsafe or ineffective automated systems.
  • They should not face discrimination by algorithmic systems on the basis of their race, color, ethnicity, or sex (a minimal audit sketch of this principle follows the list).
  • They should be protected from abusive data practices and unchecked use of surveillance technologies.
  • They should be notified when an AI system is in use and understand how it makes decisions affecting them.
  • They should be able to opt out of AI system use and, where appropriate, have access to a person, including when it comes to AI used in sensitive areas such as criminal justice, employment, education, and health.
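
The blueprint stops short of prescribing how engineers should implement these protections, but the anti-discrimination principle maps onto audits many ML teams already run. Below is a minimal, illustrative Python sketch of a disparate-impact check; the group labels and outcome data are hypothetical, and the 0.8 threshold follows the EEOC's informal "four-fifths" rule of thumb rather than anything in the blueprint itself.

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 are commonly treated as evidence of adverse impact
    (the EEOC "four-fifths" rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit of an automated hiring system's outcomes.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
for group, ratio in sorted(disparate_impact_ratios(outcomes).items()):
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} ({flag})")
```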

What’s not in the AI Bill of Rights

While the document provides extensive suggestions for how to incorporate AI rights into technical design, it does not recommend any restrictions on controversial forms of AI, such as systems that identify people in real time using facial images or other biometric data, or lethal autonomous weapons.

In fact, the document begins with a detailed disclaimer noting that the principles therein are “not intended to, and do not, prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or intelligence activities.”

Alondra Nelson, the OSTP’s deputy director for science and society, pushed back on suggestions that the document could disappoint human rights and AI watchdogs who had hoped for a document recommending more concrete rules for AI.

“I categorically reject that kind of framing of it,” Nelson told Protocol. “The document moves as the title says from principles to practice. Upwards of 80% of the document is about precise prescriptive things that different stakeholders can do to ensure that people’s rights are protected in the design and use of technologies,” she said, adding, “Our job at OSTP is to offer technical advice and scientific advice to the president.”

A year ago, Nelson and former OSTP Director Eric Lander co-authored a splashy Wired opinion piece announcing the agency’s plans to produce an AI Bill of Rights that might help alleviate problems with AI systems that had been unleashed by industry for use with no federal regulatory guidelines.

Nelson and Lander mentioned AI systems that reinforce discriminatory patterns in hiring and health care as well as faulty policing software using inaccurate facial recognition that has led to wrongful arrests of Black people. And, linking to an article about surveillance tech used in China to track and control the Muslim minority Uyghur population there, they alluded to use of AI by autocracies “as a tool of state-sponsored oppression, division, and discrimination.”

Soon after the announcement, OSTP held several public listening sessions in November 2021 on AI-enabled biometric technologies, consumer and “smart city” products, and AI used for employment, education, housing, health care, social welfare, financial services, and in the criminal justice system.

While some advocacy groups have indicated frustration with the slow process for publishing the AI Bill of Rights, Nelson said by one measure — the Biden-Harris administration’s Summit for Democracy held in December 2021 — it is actually early.

“We had committed by December to finish this, and we are completing it with a little bit of time to spare,” Nelson said.

Scandal has plagued OSTP this year. Former OSTP Director Lander resigned in February amid accusations that he created “an atmosphere of intimidation at OSTP through flagrant verbal abuse.” In March, POLITICO revealed that Lander had helped enable an organization led by former Google CEO Eric Schmidt to pay the salaries of some OSTP staff.

Lawmakers have proposed legislation that would take the ethical commitments made by the government out of the realm of theory and into practice. Legislation introduced in February, for example, would require companies to assess the impact of the AI and automated systems they use to make decisions affecting people’s employment, finances, and housing, and to submit annual reports on those assessments to the FTC.
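
The bill text does not specify a reporting format, but the workflow it describes (assess each consequential automated system, then report the results to the FTC) is straightforward to model. Below is a hedged sketch of what such an assessment record could look like; every field name is hypothetical rather than drawn from the legislation.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ImpactAssessment:
    """Hypothetical record for an automated-decision impact assessment.
    The proposed legislation defines no schema; these fields are
    illustrative only."""
    system_name: str
    decision_domain: str          # e.g. "employment", "finance", "housing"
    intended_use: str
    populations_affected: list = field(default_factory=list)
    known_disparities: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def to_report_json(self) -> str:
        """Serialize the assessment for inclusion in an annual report."""
        return json.dumps(asdict(self), indent=2)

assessment = ImpactAssessment(
    system_name="resume-screener-v2",
    decision_domain="employment",
    intended_use="rank applicants for recruiter review",
    populations_affected=["job applicants"],
    known_disparities=["lower pass-through rate for resumes with employment gaps"],
    mitigations=["removed employment-gap feature", "quarterly disparate-impact audit"],
)
print(assessment.to_report_json())
```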

Despite a lack of federal AI laws or regulations, the U.S. has agreed to uphold international principles established in 2019 by the Organization for Economic Cooperation and Development that call on makers and users of AI systems to be held accountable for them, and ensure they respect human rights and democratic values including privacy, non-discrimination, fairness, and labor rights. Those principles also called on AI builders and users to make sure that the systems are transparent, provide understandable and traceable explanations for their decisions, and are safe and secure.
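
The OECD call for transparent, traceable decisions likewise suggests familiar engineering practice: log every automated decision together with a human-reviewable explanation. Here is a minimal sketch, assuming a generic append-only log and an explanation already computed upstream; the record schema is invented for illustration.

```python
import json
import time
import uuid

def log_decision(model_version, inputs, decision, explanation, sink):
    """Append a traceable record of one automated decision.

    `explanation` is whatever human-reviewable rationale the model
    pipeline produced upstream (e.g. top feature attributions).
    The record schema is illustrative, not a standard."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "explanation": explanation,
    }
    sink.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Usage with a plain file as the log sink.
with open("decisions.log", "a") as sink:
    log_decision(
        model_version="credit-model-1.4",
        inputs={"income": 52000, "debt_ratio": 0.31},
        decision="approved",
        explanation=["debt_ratio below 0.35", "income above threshold"],
        sink=sink,
    )
```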

In conjunction with the publication of the AI Bill of Rights, other federal agencies are expected to signal commitment to take actions reflective of its tenets. For example, the Department of Health and Human Services plans to release recommendations for methods to reduce algorithmic discrimination in health care AI and the Department of Education is planning to release recommendations on the use of AI for teaching and learning by early 2023.
