IBM's quantum computer. Photo: IBM Research

The next front in the U.S.-China tech war

Protocol Enterprise

Hello and welcome to Protocol Enterprise! Today: export restrictions on quantum computing technology and AI software could be the next front in the U.S.-China tech war, why AI is sometimes actually pretty explainable, and how the use of edge computing continues to evolve.

The tech war's next front

U.S. efforts to freeze China’s technological progress in key areas may turn out to be about more than just advanced chips.

  • Bloomberg reports that technologies that could be used in quantum computing, along with “artificial intelligence software,” might be the Biden administration’s next targets for export controls.
  • That would potentially represent a significant expansion of the U.S. tech war with China. It could not only stifle a major U.S. rival economically, but also prevent the Chinese government from one day leveraging quantum computers for encryption-breaking cyberattacks.
  • Quantum computing and the associated threat to encryption are years, possibly even decades, away from commercialization. There's no guarantee that quantum computing will ever actually work the way it’s been theorized.
  • At the same time, the U.S. government has clearly been taking the threat of quantum-based attacks very seriously, while China has openly placed a high priority on acquiring quantum computers. Given that China ranks as a large and growing cyberthreat to U.S. industry and government, it’s not far-fetched to think the White House is at least considering ways to nip China’s quantum capabilities in the bud.

The Bloomberg report indicates that the Biden administration’s planning is at an early stage, and doesn't specify what sorts of quantum technologies might be targeted.

  • The recent U.S. blockade on exports of advanced chip technology to China essentially already prevents the country from getting the chip technologies it needs for quantum computing from U.S. suppliers, according to Aidan Madigan-Curtis, a supply chain expert who formerly worked for Apple.
  • Any restrictions on quantum-related technologies, then, might focus on the software and materials science that underpin the technology. For instance, "I could definitely see additional restrictions coming down the line that would limit further scientific developments and experimentation [in quantum]," said Madigan-Curtis, now a partner at Eclipse Ventures.
  • Such an action would also “likely hurt the ability of some young Chinese scientists to receive U.S. training in quantum computing,” said Chris Monroe, a quantum computing pioneer and Duke University physics professor, in an email to Protocol.
  • Over the short term, restrictions along these lines would “certainly have an impact” on China’s quantum capabilities, said Monroe, who is also co-founder and chief scientist at quantum computing vendor IonQ. “In the long run, however, the Chinese quantum program will probably not be affected much by these actions.”

The Bloomberg report also has a notable lack of specifics regarding "AI software," an extremely broad category that can mean different things depending on the context.

  • Since the report says that the discussions are at an early stage, it wouldn't be surprising if the White House hasn't actually narrowed down what sort of AI software might be restricted just yet.
  • If the Biden administration follows a playbook for quantum computing and AI software that resembles the semiconductor export controls, it would likely establish technical thresholds for the export of quantum machines themselves, as well as for the tools necessary to build them.
  • For chips, that meant issuing rules to block the export of devices capable of performance above the computational throughput needed to train large AI models, with the goal of keeping them out of the hands of the People’s Liberation Army and China’s domestic surveillance apparatus. Presumably, the administration could develop a similar rule tailored to quantum computing technologies.
  • The White House published a list of critical and emerging technologies in February that could be used to inform national security-related activities such as new export controls or investment screening. The list included network sensing, quantum information technologies, and AI.

But in the past, the tech industry has pushed back on potential new rules that could catch too wide an array of tech in their net.

  • Monroe told Protocol that he’d expect export controls on quantum computing technologies to ultimately hurt the U.S. position in the field: “Restricting the building and use of quantum computers based on borders could have a negative impact and shrink our lead.”
— Kyle Alspach, Kate Kaye, and Max A. Cherney

A MESSAGE FROM CAPITAL ONE SOFTWARE

As companies scale in the cloud, they face new data management challenges. Join our webinar to learn how Capital One approached scaling its data ecosystem by federating data governance responsibility, and how companies can operate more efficiently by combining centralized tooling and policy with federated data management responsibility.

Register to attend

Rules for model disruption

These days, much of the conversation around AI — including AI used by businesses — centers on machine learning and deep learning: complex models built to automate decisions by spotting patterns and making predictions. One drawback of models built purely with machine learning is a lack of control or explainability.

But sometimes, AI does incorporate more controllable and transparent elements.

“It's very rare that you'd have an automated system that's just machine learning,” said Danny Shayman, AI and machine-learning product manager at InRule, a company that sells automated intelligence software to employment, insurance, and financial services customers.

Rules-based algorithmic systems have been used for decades, of course: for making bank loan decisions, pricing insurance rates, or even determining whether someone should receive a discount email. But companies today are finding ways to blend the best of both rules-based systems and machine-learning automation.

“You can go and use your human understanding of what these disruptions are, encode those via rules, and apply those rules on top of your model, and you don't have to start a model from zero again,” Shayman said.
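To make that concrete, here's a minimal sketch of rules layered on top of a model's output. It's illustrative only (not InRule's product or API), and every function name and adjustment factor below is invented:

```python
# Hypothetical sketch: adjust a trained model's prediction with
# hand-written rules, rather than retraining the model from zero.

def model_predict(features: dict) -> float:
    """Stand-in for a model trained on pre-disruption data."""
    return 0.82  # e.g., a demand forecast or risk score

def apply_disruption_rules(features: dict, prediction: float) -> float:
    """Encode human knowledge of disruptions as adjustments on top."""
    if features.get("region_weather") == "severe":
        prediction *= 0.5  # assumed correction for severe weather
    if features.get("pandemic_lockdown"):
        prediction *= 0.3  # assumed correction for lockdowns
    return prediction

features = {"region_weather": "severe", "pandemic_lockdown": False}
raw = model_predict(features)
adjusted = apply_disruption_rules(features, raw)
print(f"model: {raw:.2f} -> after rules: {adjusted:.2f}")
```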

The approach is appealing for companies that want to keep using older models that were trained on data collected before the pandemic or other disruptive events, such as bad weather in a particular region.

The hybrid approach also allows companies to automate decisions related to a majority of standard situations while ensuring that a human makes the final decision in certain cases, such as when a home property firm wants a person to weigh the pros and cons of issuing a jumbo mortgage loan.

“You're still augmenting the decision, but you're not automating 100% because you want a human in the loop,” said Rik Chomko, InRule’s CEO.
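A sketch of that human-in-the-loop gate might look like the following; the loan-size threshold, score cutoff, and routing labels are all hypothetical:

```python
# Hypothetical sketch: automate routine decisions, but route flagged
# cases (here, jumbo-sized loans) to a human reviewer.

JUMBO_THRESHOLD = 700_000  # made-up loan-size cutoff

def decide_loan(application: dict, model_score: float) -> str:
    # Rule: jumbo loans always go to a person, whatever the model says.
    if application["amount"] > JUMBO_THRESHOLD:
        return "route_to_human_review"
    # Otherwise automate on the model's score (0.7 cutoff is assumed).
    return "approve" if model_score >= 0.7 else "decline"

print(decide_loan({"amount": 900_000}, model_score=0.92))  # route_to_human_review
print(decide_loan({"amount": 300_000}, model_score=0.92))  # approve
```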

— Kate Kaye

It’s not privacy vs. security anymore

In the last few years, the roles of privacy and security executives — and the budgets they control — have grown significantly as organizations have worked to stymie the rising threat of cyberattacks and navigate the ever-changing landscape of data regulation. But good privacy and security strategies are often as much about people as they are about policy, and the push and pull between the two remits can sometimes create friction within an organization.

Join Protocol Enterprise’s Kyle Alspach for an event recorded live at KubeCon North America at 11 a.m. PDT on Thursday, Oct. 27. Kyle will be joined in discussion by Chris Burrows, chief information security officer at Rocket Companies; Jacob DePriest, vice president and deputy chief security officer at GitHub; Elise Houlik, chief privacy officer at Intuit; and Deepak Goel, chief technology officer at D2iQ. RSVP here.


Living on the (consolidated) edge

Equinix’s Jon Lin chatted with Protocol this week about edge computing and artificial intelligence as growth areas for the data center REIT.

“We're still excited about digital transformation within the enterprise,” said Lin, executive vice president and general manager of Equinix’s global data center services business. “That continues to be a massive driver, [including] customers that are really starting to identify use cases around what they think of as their edge, which is multinational deployment and their applications and end users all around the world.”

While there’s murkiness around what exactly constitutes the edge, Lin said Equinix is seeing more of a consolidated edge.

“Where in the past, a customer might have one or maybe two locations on a continent or region, what we're seeing is they're driving more towards three or four or five or six deployments per continent now,” he said. “That's sufficient for the near term. Being within … a 20- to 30-millisecond radius from wherever there's data being generated, consolidating that back into those locations for processing, and then basically analytics-sharing back into their central compute and analytics engines is what we've seen continuing to emerge there.”
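Read as an architecture, that "consolidated edge" comes down to routing data to the nearest regional site within a latency budget, then shipping summarized analytics back to a central engine. Here's a toy sketch; the site names and round-trip times are invented:

```python
# Toy sketch of nearest-site routing under a latency budget.
from typing import Optional

LATENCY_BUDGET_MS = 30  # the "20- to 30-millisecond radius" Lin cites

# Invented regional sites with measured round-trip times in ms.
edge_sites = {"us-east": 12, "us-west": 25, "eu-central": 95}

def pick_edge_site(rtt_by_site: dict) -> Optional[str]:
    """Send raw data to the closest regional site inside the budget."""
    site, rtt = min(rtt_by_site.items(), key=lambda kv: kv[1])
    return site if rtt <= LATENCY_BUDGET_MS else None

print(pick_edge_site(edge_sites))  # 'us-east'
```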

What Equinix hasn’t seen a ton of is the really far-edge use cases such as autonomous vehicles, “that kind of use case that folks have been talking about for data centers in hundreds of cities around the world,” he said. “That one we're still watching carefully, but we haven't seen that emerge as a big sector yet.”

The growth of data-intensive artificial intelligence technology is starting to impact how data centers are built, according to Lin.

“Certainly the AI use cases that we've seen and the technology have generally been more dense than most deployments in the past,” he said. “That's the one area that we see as pushing our technology and our designs more than most, which is how do we make sure we're supporting these workloads that are far more compute-intense than in the past and also more data-heavy in terms of getting those data and streams from a customer — wherever their sensor data lives — back into our locations. We have existing customers doing large AI workloads in our data centers today, so we know how to … support that well, and it's an exciting growth vector for us.”

— Donna Goodison

Around the enterprise

The cybersecurity vendor that disclosed a leak of Microsoft customer data this week said it is standing by its claim that the incident was “one of the largest B2B data leaks” to date, following Microsoft’s criticism that the vendor “greatly exaggerated the scope” of the issue.

Microsoft is reportedly eyeing an increase in its investment in OpenAI, beyond the $1 billion it invested in 2019.

Texas’ attorney general has filed a lawsuit against Google that accuses the company of collecting residents’ biometric data without their consent.


Thanks for reading — see you Monday!
