Quantum computing manual

How a quantum computer actually works

Your five-minute guide to how the computers of the future actually operate. Honest.

A 4-qubit quantum chip

Quantum computers use the inherently undefined properties of quantum objects as the basis for their equivalent of a classical computer's bits. Each of the circuits pictured contains four of these so-called qubits.

Photo: IBM

Quantum computing sounds exotic. But really it's just the next step in a historical progression, and one you need to get your head around.

Technology has always harnessed natural phenomena to perform useful tasks. In the 19th century, high-energy water molecules — steam — powered the industrial revolution. In the 20th century, charge on electrons was harnessed by the electronics industry. In the 21st century we will use the quantum properties of materials to process information.

Quantum information processing is often described as a straightforward speedup for traditional computing. But that's misleading: A quantum computer can perform only certain tasks faster than a classical one. What's more, it does them in an entirely new, and admittedly sometimes confusing, way.

Allow us to demystify things for you.

Ditch your bits for qubits

In a classical computer, tiny electronic switches called bits take values of either 0 or 1. Programs that run on them use this binary logic to perform operations: If this is true, then do that.
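In code, that certainty is mundane. A bit is always exactly 0 or 1, and the logic built on it is a chain of definite either-ors (a trivial Python illustration):

```python
bit = 1  # a classical bit: always exactly 0 or 1, never anything in between

# Binary logic: if this is true, then do that.
if bit == 1:
    print("switch is on")
else:
    print("switch is off")
```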

In a quantum computer, you can forget that kind of certainty. What we tend to treat as "particles" of matter — molecules, atoms and subatomic entities such as electrons — have hidden depths, with inherently undefined properties. An electron's position, for instance, can be impossible to pin down precisely.

Quantum computing manipulates these undefined properties. It starts with deciding how to define a binary digit, or bit, using a quantum object. Take an electron: It has a quality known as spin, a measure of how it rotates, with two extremes, "up" and "down." We can assign the binary value 0 to an up spin and 1 to a down spin. (It could equally be any other quantum object, such as a photon or a molecule, with a different quality used to define the binary values.) The result is the quantum equivalent of a binary bit, known as a qubit.
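If you want to see that definition in something concrete, here is a minimal sketch in Python with NumPy (our choice of illustration, not anything the hardware itself runs). The two definite spin states become two basis vectors:

```python
import numpy as np

# Spin "up" is assigned the binary value 0; spin "down" the value 1.
# In the standard state-vector picture, these are two orthogonal vectors.
ket0 = np.array([1, 0], dtype=complex)  # |0>: spin up
ket1 = np.array([0, 1], dtype=complex)  # |1>: spin down

# A general qubit state is a complex combination a*|0> + b*|1>,
# with |a|**2 + |b|**2 == 1.
```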

Now, we know the electron follows quantum laws and can therefore exist with its spin undefined in a "superposition" state that is a complex combination of the 0 and 1 states. It will only become defined (as 0 or 1) if it interacts strongly with something in its environment, an interaction that quantum physicists often refer to as a measurement.
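In the same toy notation, a superposition is literally a complex combination of those two vectors, and a measurement picks 0 or 1 with odds given by the squared amplitudes (the standard Born rule):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = np.array([0, 1], dtype=complex)  # |1>

# An equal superposition: the spin is genuinely undefined, not secretly 0 or 1.
psi = (ket0 + ket1) / np.sqrt(2)

# "Measurement" forces a definite outcome; each outcome's probability is
# the squared magnitude of its amplitude (the Born rule).
probs = np.abs(psi) ** 2                  # -> [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probs)
print(f"measured {outcome}")
```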

This doesn't mean it is 0 and 1 at the same time. You could say it will maybe turn out to be 0 or 1, but MIT's Scott Aaronson has called this a "new kind of 'maybe,'" where the process of defining the final outcome can change what you get.

That process is called "interference." When you assemble multiple qubits, it's possible to apply a set of starting conditions to each of them that skews the undefined spins and sets their interactions off on a particular course, creating what quantum physicists call "interference effects." Like waves rippling together on an ocean, the qubits head toward an increased likelihood of arriving at a particular final state.
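One way to watch interference at work is with the simplest superposition-creating operation there is, the Hadamard gate. In this toy state-vector sketch, applying it twice makes the two routes to the 1 outcome cancel while the routes to 0 reinforce, so the qubit returns to 0 with certainty:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1, 0], dtype=complex)

after_one = H @ ket0       # equal superposition: amplitudes [0.707..., 0.707...]
after_two = H @ after_one  # the two paths to |1> cancel; paths to |0> reinforce

print(np.abs(after_two) ** 2)  # -> [1. 0.] (up to rounding): certainly 0
```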

This is basically what software on a quantum computer has to be designed to do. "The art of designing quantum algorithms lies in designing an interference that increases the probability of measuring a state that encodes the solution," says Andrea Rocchetto, who researches quantum computing at the University of Texas at Austin.

Scaling up for speed

Superposition and interference alone are not quite enough to allow a quantum computer to beat a classical one. The secret sauce that makes quantum computers perform certain computations faster than classical computers also requires a vital third phenomenon, known as quantum entanglement.

Entanglement, which has no parallel in the classical world, arises between quantum particles that interact in particular ways, causing all their properties to become shared between the particles. The result is that measurements on one particle can affect the outcome of subsequent measurements on any particles entangled with it — even when they are not physically connected. That's why Albert Einstein described the phenomenon as "spooky action at a distance."
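Entanglement also fits in a few lines of the same toy simulation. The canonical example is a Bell state: put one qubit into superposition, then apply a controlled-NOT gate, and the two measurement outcomes always agree:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],   # flips the second qubit
                 [0, 1, 0, 0],   # when the first qubit is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
psi = np.kron(ket0, ket0)            # both qubits start as |00>
psi = CNOT @ (np.kron(H, I2) @ psi)  # Bell state: (|00> + |11>) / sqrt(2)

# Only "00" and "11" ever occur: the outcomes are perfectly correlated.
probs = np.abs(psi) ** 2             # -> [0.5, 0, 0, 0.5]
print(np.random.choice(["00", "01", "10", "11"], p=probs))
```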

Because entanglement is woven into the mathematics of superposition and interference, it plays a subtle role in the way quantum algorithms work — a role that is extremely difficult to replicate on classical computers. "Quantum computers without entanglement would be easy to simulate classically and thus unable to achieve any computational speedup," Rocchetto says.

Quantum software

Perhaps this all sounds a little suspect. After all, how do you increase the probability of getting a right answer using interference, when you don't know what the right answer is? Essentially, it comes down to knowing the kinds of properties a correct answer will have, which is why quantum computers give us a potential advantage only in certain kinds of problems.

Those problems are generally ones where it is easy to check an answer once you have it, but incredibly difficult to find it in the first place. Take factoring, for instance: Given a large number, you have to find the two numbers that multiply together to make it. There is no known mathematical shortcut; all a classical computer can do is keep trying combinations of numbers until it finds ones that work. The difficulty of factoring is what protects our best encryption algorithms. But in 1994, MIT mathematician Peter Shor created an algorithm that uses quantum interference effects to factor large numbers with ease.
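A small classical sketch (plain Python, toy numbers) makes that asymmetry concrete: checking a factorization takes a single multiplication, while finding one means searching:

```python
def factor_by_trial(n: int) -> tuple[int, int]:
    """Find a factor pair of n by brute force: the hard direction."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d  # found a pair: n == d * (n // d)
        d += 1
    raise ValueError(f"{n} is prime")

p, q = factor_by_trial(15)  # finding: a search whose cost grows with n
assert p * q == 15          # checking: one multiplication
print(p, q)                 # -> 3 5
```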

The unknown factors are represented by the undefined qubits. Shor's algorithm allows their wave properties to interfere in carefully orchestrated ways. The algorithm then uses some of the known properties of the right answer to boost its "loudness" and set it apart from all the wrong answers. "With a few repetitions of this process, the right answer becomes very easy to distinguish from the rest," says James Wootton, who works on quantum computing at IBM's Zurich Research Laboratory. And out pops the answer.
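For a flavor of how the pieces fit together, here is a toy sketch of the classical scaffolding around Shor's algorithm. The period-finding step in the middle is exactly the part quantum interference performs; lacking a quantum computer, this sketch simply brute-forces it:

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1. This is the step Shor's
    algorithm does with quantum interference; we brute-force it here."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factors_from_period(n: int, a: int) -> tuple[int, int]:
    """Shor's classical post-processing: turn the period into factors."""
    r = find_period(a, n)
    if r % 2 == 1:
        raise ValueError("odd period; pick a different a")
    return gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)

print(factors_from_period(15, 7))  # -> (3, 5)
```

In unlucky cases the recovered pair is trivial (1 and n), and the algorithm simply retries with a different value of a.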

Well, in theory — no quantum computer is yet advanced enough to run Shor's algorithm in any meaningful way.

A future of possibility

Despite all this near-magical potential, Shor's algorithm is one of only three applications where we are confident that quantum computers will eventually outperform classical machines. Another is Grover's algorithm, which offers a quadratic speedup for search tasks, such as looking for data in an unstructured database. The third is the simulation of quantum systems such as atoms and molecules.
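Grover's algorithm is compact enough to simulate in a few lines. In this toy sketch over eight items, each iteration flips the marked item's amplitude and then reflects everything about the average, nudging probability toward the answer; roughly the square root of N iterations suffice where a classical search needs about N tries:

```python
import numpy as np

N, marked = 8, 5                        # search among 8 items for item 5
state = np.full(N, 1 / np.sqrt(N))      # start in a uniform superposition

for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~sqrt(N) Grover iterations
    state[marked] *= -1                 # oracle: flip the marked amplitude
    state = 2 * state.mean() - state    # diffusion: invert about the mean

probs = state ** 2
print(probs.argmax(), round(probs[marked], 3))  # -> 5 0.945
```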

But — as the rest of this Manual shows — these three applications are enough to excite the interest of the military and intelligence communities, pharmaceutical companies and high-tech engineering and information processing industries. And we are still only at the beginning of algorithm design: More significant applications may be discovered down the line.

"There is an enormous space of unexplored territory," says Travis Humble, director of Oak Ridge National Laboratory's Quantum Computing Institute.

A TIMELINE OF QUANTUM COMPUTING

May 1981 — Caltech physicist Richard Feynman gives a lecture outlining the potential advantages of computing with quantum systems. A major application: simulating the physical properties of matter, which creates a new path toward developing novel materials and pharmaceuticals.

July 1985 — British physicist David Deutsch publishes the idea of a "universal quantum computer" that would operate beyond the limits of any classical machine.

November 1994 — MIT mathematician Peter Shor presents an algorithm that can efficiently find the factors of large numbers, in theory significantly outperforming the best classical algorithm.

May 1996 — Lov Grover, a mathematician at Bell Labs, presents an algorithm that would offer significant quantum advantage in searching unstructured databases.

May 1998 — Jonathan Jones, Michele Mosca and Rasmus Hansen of Oxford University publish the first implementation of a quantum algorithm. They use a 2-qubit quantum computer to run Grover's algorithm.

December 2001 — A collaboration between IBM and Stanford University publishes the first implementation of Shor's algorithm, factoring 15 into its prime factors on a 7-qubit processor.

March 2012 — Caltech physicist John Preskill describes the moment when "well-controlled quantum systems can perform tasks surpassing what can be done in the classical world" as the arrival of "quantum supremacy."

October 2019 — Google claims the achievement of quantum supremacy. The precise details are disputed, but the claim is ultimately accepted as valid.
