Why an 'us vs. them' approach to China lets the US avoid hard AI questions

As national security framing and fears of China’s AI advancements propel U.S. AI policy, some human rights watchdogs worry it will concentrate attention on AI investments with military applications and allow the U.S. to deflect scrutiny of its own AI practices.

Schoolchildren walk below surveillance cameras in Akto, south of Kashgar, in China’s western Xinjiang region, on June 4, 2019.

Photo: Greg Baker/AFP via Getty Images

Before the COVID-19 pandemic, William McClellan “Mac” Thornberry — a former U.S. congressman representing Texas and the top Republican on the Armed Services Committee from 2015 to 2019 — traveled around the U.S. speaking to business and community leaders and showing them photos of surveillance tech in China.

“I’d show pictures of the Chinese surveillance cameras and talk about their social credit system, and how the government is using technology to control its population. And they’re exporting it to other countries, and so there’s a real competition about what the future is going to look like between government control and not,” Thornberry said while speaking at an event held by the Special Competitive Studies Project, a nongovernmental organization funded by former Google CEO and AI investor Eric Schmidt that advocates for more U.S. AI spending.

A member of the Pentagon’s emerging tech advisory group, the Defense Innovation Board, Thornberry has also championed the use of AI and emerging tech to help the U.S. defend against China and preserve democratic values. By displaying those photos of China’s AI-fueled surveillance apparatus, Thornberry aimed to illustrate exactly what the U.S. Defense Department is up against.

“You have to remind people the context, the bigger picture and why it matters,” Thornberry, also a member of SCSP’s board, said.

But as national security fears of China’s AI advancements propel U.S. AI policy, some human rights and AI watchdogs worry that investments in AI with military applications will become a major focus, allowing the U.S. to deflect scrutiny of its own AI practices and avoid legal guardrails around them.

“I’m far more worried about the risks to our society from failing to regulate AI than the risk that we fall behind China in some aspects of the technology,” said Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace.

Renard Bridgewater, a member of New Orleans’ Eye On Surveillance coalition who has advocated against surveillance tech there, including AI-based technologies, questioned Thornberry’s use of China surveillance photos.

“It feels vaguely hypocritical, if we’re talking about China in one way, and using that as a motivation of sorts to spend more money on AI here, when metropolitan areas across the country — predominantly Black and brown communities — are negatively and directly impacted by that same technology or similar tech,” Bridgewater said during a Protocol event last week.

One of hundreds of networked surveillance cameras installed throughout the city of New Orleans hangs on a lamppost. In July, the New Orleans City Council voted to reverse its facial recognition prohibition. Photo: Kate Kaye/Protocol

China’s use of AI-based surveillance technologies to monitor and penalize minority Uyghurs is often cited by U.S. lawmakers, national security officials, and tech investors as a key justification for blocking China’s access to tech that could advance its surveillance and military AI capabilities, as well as for increasing federal spending on unregulated AI in this country.

The so-called AI race is considered not only a competition with China for economic and technological superiority, but also one of democratic values. Miriam Vogel, co-chair of the White House National AI Advisory Committee, suggested at a POLITICO event in September that democratic values can be baked into U.S. tech like cinnamon and nutmeg in an apple pie.

“AI embeds our culture, and our culture in the U.S. is trust and democratic values,” Vogel said.

Vogel’s remarks mirrored sentiments found in one of the most influential documents guiding U.S. AI policy and investments thus far: the 2021 final report of the National Security Commission on Artificial Intelligence.

“The AI competition is also a values competition,” stated the report. In an effort to stay ahead of China and combat what the report called the “chilling precedent” created by China’s use of “AI as a tool of repression and surveillance,” the commission called on the federal government to double annual non-defense funding for AI research and development to $32 billion per year by 2026.

Today, Thornberry and others working at Schmidt’s SCSP have picked up the NSCAI’s mantle in hopes of influencing federal spending on AI and emerging tech.

Still, the U.S. has yet to pass any federal regulations or laws governing AI development and use, despite an explosion of AI deployment by businesses and government. Letting China’s AI threat distract the U.S. from meaningful AI regulations would be a mistake, Sheehan said.

“We’ve already seen the way technology left to its own devices can widen inequality, deepen social divisions, and exacerbate political extremism. Unchecked AI deployment could put risks like those on steroids in a way that threatens the foundations of our democracy,” he said.

Surveillance in the USA

In September, when China’s Suzhou Keda Technology promoted its “smart community” project involving 2,000 facial recognition-enabled cameras installed in communities in Xinghua, a city about 150 miles north of Shanghai, the company said the system would identify people and vehicles to accurately warn of security risks and improve safety for residents.

It sounded familiar. When municipalities and everyday homeowners in the U.S. implement surveillance technology, protecting safety is often a primary reason.

“All I want is a safer city,” said New Orleans city council member Freddie King III in July when he voted for the heavily surveilled city to reverse a facial recognition prohibition, allowing use of the technology by the New Orleans Police Department.

Other U.S. cities, including Detroit and San Francisco, are home to growing publicly and privately owned surveillance camera networks that law enforcement can access. In small towns, private homeowners associations are installing AI-based license plate readers and vehicle recognition cameras with police access. There is little accountability or transparency when private entities deploy surveillance tech.

There’s also a buildup of AI-enabled surveillance tech in use by U.S. Customs and Border Protection at the southern U.S. border. Earlier this year, the U.S. Government Accountability Office warned of the border protection agency's failure to notify people of its use of facial recognition at U.S. airports.

“The way that the Uyghur people of China are continuously surveilled in such a highly oppressive way, that could readily happen here [in] a slow, creep-like fashion,” Bridgewater said.

In the U.S., Black people and women have been subjected to discriminatory AI systems used in hiring, banking, and health care. Some Black men have been wrongfully arrested because of inaccurate facial recognition in policing software.

And other controversial forms of AI that have sparked concern among civil and human rights advocates when deployed in China are growing in the U.S., too. Emotion AI, which is intended to determine people’s emotional states, has been baked into software sold and used throughout the U.S. by companies including Google and Microsoft. Emotion AI providers in the U.S. have attracted millions of dollars in venture capital funding.

But even though various U.S. agencies including the Department of Defense, intelligence agencies, and the White House Office of Science and Technology Policy have released nonbinding guidance on AI principles and rights, there are no federal AI regulations or laws in the U.S. And the country still has not enacted federal data privacy legislation, despite the indiscriminate harvesting and use of people’s data to build AI.

At the same time, China has established new data protections and AI-related regulations. The country enacted its Personal Information Protection Law in 2021, which some consider similar to Europe’s General Data Protection Regulation. That year, China’s Supreme People's Court ruled that businesses must obtain consent to use facial recognition. In January, China’s Cyberspace Administration became one of the first regulatory bodies to establish rules requiring algorithmic transparency and explainability, allowing people to opt out of algorithmic content targeting.

AI policy watchdogs recognized that China’s regulations serve a dual purpose, also allowing the government to censor and shape public discourse. However, they said China’s regulations could have some positive influence on how other governments craft regulations and how corporations implement them.

“These regulations will cause private companies to experiment with transparency and explainability and impact assessments. China can help the global conversations around that because they’re moving from principle to practice,” said Merve Hickok, senior research director and chair of the board for the Center for AI and Digital Policy, a nonprofit AI policy and human rights watchdog.

Sheehan also saw value in China’s AI laws. “The irony here is that Chinese leaders get this,” he said. “They are putting out some of the most concrete regulations on algorithms anywhere in the world, and they’ve spent two years going after monopolies in their tech sector. We obviously shouldn’t try to mimic China’s controls on free speech, but we should recognize that strong regulation doesn’t need to be in opposition to innovation.”

Fighting regulations with AI values assumptions

Schmidt, who has the ear of several high-powered U.S. lawmakers and current and former government officials when it comes to AI policy, has vocally advocated against U.S. AI regulations.

In October, when the White House unveiled a nonbinding “Blueprint for an AI Bill of Rights,” he told The Wall Street Journal that the U.S. should not regulate AI yet because “there are too many things that early regulation may prevent from being discovered.” It’s a stance inspired by a common Silicon Valley motto, “Move fast and break things,” an approach Schmidt seems to openly espouse when it comes to AI advancement.

“Why don’t we wait until something bad happens and then we can figure out how to regulate it — otherwise, you’re going to slow everybody down. Trust me, China is not busy stopping things because of regulation. They’re starting new things,” he said during an interview last year.

At the same time, Schmidt and others suggest that AI built in China is ethically flawed. Earlier this year, during a panel discussion at the Aspen Institute’s Security Forum, when Schmidt referenced Microsoft software that automatically writes programming code, he implied that it would be inherently nefarious had it been built in China: “Now imagine if all of that was being developed in China and not here. What would it mean?” he said.

Since then, Microsoft has been sued for copyright infringement in relation to that software.

“[A lot of people] have this notion that AI that's developed in China somehow embeds a different system of ethics and values that's uniquely Chinese,” said Rebecca Arcesati, an analyst at the Mercator Institute for China Studies.

“I fear that sometimes we may risk falling into this Orientalist trap, seeing China as this alien place where things are just different from what we are used to in the West,” Arcesati said.

There’s little indication that a technology’s country of origin automatically instills values — particularly for AI technologies that are commonly constructed from borderless, open-source components. For instance, computer vision AI researchers from the U.S. and around the world have resisted requests to consider fairness or prevent discrimination in their work, even though some of it can be used to build controversial systems such as facial recognition and surveillance tech, deepfake videos, and AI that is meant to detect people’s emotions.

When chairs of one of the world’s most important computer vision AI conferences, held this year in New Orleans, tried to make minor ethics-related changes to research reviews, they were met with resistance from researchers, including some from the U.S. who told Protocol that requiring ethical reviews would hamper their independence and was “not their job.”

Abigail Coplin, an assistant professor of sociology and science, technology, and society at Vassar College who studies research and development in the AI-enhanced realms of biotech and agro-biotechnology in China, agreed. “There’s a very prevalent discourse right now, definitely in political circles, [about] whether values are intrinsically baked into technologies. I would say I’m a little bit skeptical of that,” Coplin said.

“It’s easy to criticize China or some of the other autocratic governments and shield the U.S. and other democratic countries from criticism,” Hickok said. “Some of it is legitimate criticism, but it’s very easy to use that approach to deflect any responsibility and accountability for the [things] that other countries are doing, and use this AI race framing for more funding into military or surveillance technologies which then find their way into experiments in domestic law enforcement or migration management,” she said.

Ultimately, by planning AI strategy and investment through a national security lens, the U.S. could drown out important efforts such as drug discovery, development of climate change-related technologies, and global AI standards that could benefit from collaboration with China, Arcesati said.

“At the time when this rhetoric of an AI arms race is really crowding out other conversations, global links with Chinese academia and Chinese researchers are fundamental and should be strengthened even further,” Arcesati said. “While countering and pushing back against China’s use of AI in ways incompatible with international human rights law and norms, democracies like the U.S. will also have to find ways not to shut the door on cooperation completely.”
