Quantum computing manual

The best quantum computers money can buy: Underpowered, unreliable but amazing

Despite the machines' shortcomings, researchers still say they're going to be useful.

IBM's quantum computer: It may look like an exotic water heater, but this is in fact the cutting edge of quantum computing. (Photo: IBM Research)

At the dawn of the computing age, the world's most powerful computers occupied entire rooms. Now, at the dawn of the quantum computing age, history appears to be repeating itself.

Today's quantum computers are very, very large: In early 2019, IBM announced its first standalone device, which fit into a cube 9 feet wide. Typically, the guts of the current best-in-class devices are housed in a large metal cylinder, which looks like a water heater, hooked up to banks of electronic control devices and ordinary computers.

Open the cylinder and you'll find a tangle of wires and tubes snaking down through several tiers, like an inverted wedding cake, until they reach the heart of the device. There, you'll find a microcircuit the size of a thumbnail, which looks much like those you'd find in your laptop but bears a handful of quantum bits, or qubits.

In other words, most of the machine is housing and controls. And much of the bulk is occupied by cryogenic systems — a liquid-helium coolant and clever electronic cooling systems — to keep the qubits at a temperature just a fraction of a degree above absolute zero (-273 degrees Celsius), the coldest temperature possible.

Under these conditions, the qubits of devices being used by Google and IBM become superconducting: able to carry electrical current without any resistance. That's a property governed by the rules of quantum mechanics, giving quantum computers the ability to do extraordinary calculations. (There are other, competing technologies hot on the heels of superconducting qubits, though; read more about that here.)

Engineering breakthroughs

Computers that conjure phenomenal power from quantum physics were still a dream 10 years ago. But when IBM made a quantum computer with 5 qubits available to researchers via the cloud in 2016, it looked inevitable that the technology, proposed in 1982 by the legendary physicist Richard Feynman, would finally arrive.

By early 2019, IBM and Google were announcing 20-qubit devices; today both companies, along with others such as Rigetti (founded by its current CEO, Chad Rigetti, and based in Berkeley and Fremont, California), boast machines with 30 to 50 qubits, and 70-qubit machines are on the way. More qubits means more computing power, though already just this handful appears capable of some feats that would tax the billions of regular bits in conventional computers.

"This ramping up of industrial activity has happened sooner and more suddenly than most of us expected," says quantum information theorist John Preskill of the California Institute of Technology in Pasadena. Since researchers figured out how to make a decent qubit, there's been no holding back.

But the quantum components in these devices are still far from ideal. In particular, they are prone to errors: Randomizing influences such as heat in the surroundings can nudge a qubit to switch, say, from a 1 to a 0 state, a phenomenon known as noise.

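To get a feel for why even rare flips matter, here is a deliberately simplified, purely classical sketch (the flip probability, qubit count and step count are illustrative assumptions, not measured figures): each bit has a tiny chance of being flipped by noise at every time step, and those chances compound over a long computation.

```python
import random

def run_noisy_register(n_qubits=5, n_steps=100, p_flip=0.001):
    """Toy model: each 'qubit' is just a classical 0/1, and at every
    time step environmental noise flips it with probability p_flip."""
    qubits = [0] * n_qubits
    for _ in range(n_steps):
        for i in range(n_qubits):
            if random.random() < p_flip:
                qubits[i] ^= 1  # a noise-induced bit flip
    return qubits

# Any single step is almost always error-free, yet after 100 steps each
# bit has roughly a 10% chance of ending up flipped:
# 1 - (1 - 0.001)**100 ≈ 0.095.
print(run_noisy_register())
```

(Real qubit errors are richer than bit flips: a qubit's phase can also drift, which has no classical analogue.)
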
And while there has been good progress in increasing the number of qubits into the tens, that could soon plateau. Keeping many of them coordinated in their delicate, intertwined quantum state for long enough to perform a computation is a challenge that gets rapidly harder as the numbers grow: The quantum information they hold can become scrambled within a fraction of a second, a problem called decoherence.

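A back-of-the-envelope model shows why this gets harder so fast. Assuming, purely for illustration, that each qubit independently loses its quantum character over a coherence time of 100 microseconds (a rough order of magnitude for superconducting qubits, not a spec for any real machine), the chance that an entire entangled register survives a computation shrinks exponentially with qubit count:

```python
import math

def survival_probability(n_qubits, t_us, t2_us=100.0):
    """Crude estimate: if each qubit independently decoheres with
    characteristic time t2_us, an n-qubit entangled state survives
    for t_us microseconds with probability exp(-n * t / T2)."""
    return math.exp(-n_qubits * t_us / t2_us)

# A 10-microsecond computation:
for n in (5, 20, 50):
    print(f"{n:2d} qubits: {survival_probability(n, t_us=10):.3f}")
# 5 qubits:  0.607
# 20 qubits: 0.135
# 50 qubits: 0.007
```
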
Underpowered, unreliable but amazing

We are, says Preskill, currently — and for the foreseeable future — stuck with what he calls noisy, intermediate-scale quantum, or NISQ, devices. And now we have to figure out how to get the best from them.

The problem with error-prone qubits is that errors can potentially upset an entire calculation, yet we can't deal with them in the same way as we do in classical computers. There, you can just make several copies of each bit for backup, and take the consensus value to be the correct one; the chance of a majority all switching in error is tiny. But in quantum computers this isn't possible: The entire quantum computation relies on not knowing which state the qubit is in until the calculation is completed, and a fundamental principle of quantum mechanics says that you can't make a copy of an unknown quantum state without changing it.

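The classical copy-and-vote trick that the paragraph describes is simple to sketch (the three-copy count and 1 percent flip rate here are arbitrary, illustrative choices), and it is exactly this copying step that the no-cloning principle forbids for qubits:

```python
import random
from collections import Counter

def store_with_redundancy(bit, copies=3):
    """Classical backup: simply keep several copies of the bit."""
    return [bit] * copies

def read_with_majority(stored, p_flip=0.01):
    """Each copy may have flipped with probability p_flip; take the
    consensus (majority) value as the correct one."""
    noisy = [b ^ (random.random() < p_flip) for b in stored]
    return Counter(noisy).most_common(1)[0][0]

# A wrong majority needs at least 2 of the 3 copies to flip at once:
# roughly 3 * 0.01**2 = 0.0003, far below the 1% single-copy error rate.
print(read_with_majority(store_with_redundancy(1)))
```
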
So one key to making better use of the kinds of machines that are now in use is to drive down the rate at which qubits incur errors. In other words, the performance of a device is not by any means just a matter of how many qubits it has, but also of how good they are. Researchers at IBM argue that performance should be measured not in terms of crude qubit counts but using a quantity they call "quantum volume," which takes into account such things as error rate. Researchers are working out how to make these kinds of improvements, but it will be a long, slow journey.

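As a sketch of the bookkeeping behind such a benchmark (the pass/fail results below are hypothetical; the actual quantum-volume protocol runs randomized "square" test circuits, as wide as they are deep, and checks their output statistics), the headline figure is 2 raised to the largest circuit size the machine handles reliably:

```python
def quantum_volume(benchmark_results):
    """benchmark_results maps a circuit size n (n qubits, n layers of
    gates) to whether the device passed the test at that size; the
    quantum volume is 2**n for the largest passing n."""
    best = max((n for n, passed in benchmark_results.items() if passed),
               default=0)
    return 2 ** best

# Hypothetical device: 20 qubits, but noise makes circuits deeper than
# 6 layers fail, so raw qubit count overstates its real capability.
print(quantum_volume({4: True, 5: True, 6: True, 7: False, 8: False}))  # 64
```
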
Still, there's plenty that can be done on these imperfect machines, from investigating new chemistry to build better batteries to testing whether quantum computing could transform the world of finance. With noisy qubits, running a quantum computation is a bit like making an experimental measurement: There are error bars on the result, but if you know how to keep the random noise within limits, the results can still be useful.

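The measurement analogy can be made concrete: repeat the noisy computation many times and average, and the error bars shrink with the square root of the number of runs (the "true value" and noise level below are stand-ins for, say, a molecular energy estimated on a real device):

```python
import random
import statistics

def noisy_run(true_value=0.42, noise=0.1):
    """Stand-in for one run of a noisy quantum computation that
    estimates some quantity; noise smears each individual answer."""
    return true_value + random.gauss(0, noise)

runs = [noisy_run() for _ in range(1000)]
mean = statistics.mean(runs)
stderr = statistics.stdev(runs) / len(runs) ** 0.5  # shrinks as 1/sqrt(N)
print(f"estimate = {mean:.3f} +/- {stderr:.3f}")  # e.g. 0.423 +/- 0.003
```
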
For now, the steady march of progress looks set to continue. The devices that have been built so far show "that we know how to build a very complex quantum system," says John Martinis, who until recently led Google's quantum computing hardware lab. "So more good results will be coming."

Correction: An earlier version of this article misstated the year in which IBM made a quantum computer with 5 qubits available to researchers via the cloud. It was 2016, not 2018. Updated May 5, 2020.
