More Quantum Computing with Fewer Qubits? Meet our New Error-Correction Code

Posted on 25 January 2024
  • By Paul Magnard, Senior Quantum Physicist at Alice & Bob
Physicist at Alice & Bob’s Paris lab

Quantum computing needs quantum error correction

Quantum computers hold the promise of revolutionizing many fields of research. There are groundbreaking problems which could never be cracked using classical supercomputers, but which could be solved with just a few hundred qubits. The only thing is, for these algorithms to work, you need to operate qubits with exceedingly low error rates. Such algorithms would require between tens of millions and a quadrillion (10⁷ and 10¹⁵) operations. That means the probability of making an error in any given quantum operation has to be less than one part in ten billion if we want quantum computers to be really useful. Today, even the best-quality qubits, with their 0.1 to 1% error rates, are way off target, and improvement in physical error rates has not been impressive in the past few years.
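Where does the one-in-ten-billion figure come from? A back-of-the-envelope estimate (our simplification: it treats errors as independent and asks for the whole run to finish before a single error strikes) goes like this:

```latex
% Probability that a run of N operations succeeds, if each operation
% fails independently with probability p:
P_{\text{success}} \approx (1 - p)^N \approx e^{-Np}
% Keeping this close to 1 requires Np \lesssim 1. For a mid-range
% algorithm with N = 10^{10} operations:
p \lesssim \frac{1}{N} = 10^{-10} \quad \text{(one part in ten billion)}
```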

Fortunately, there is a way of dealing with a large part of this problem. It’s called quantum error correction, or QEC for short. The idea is to operate multiple noisy qubits in such a way that they encode the information of an error-free logical qubit. The most widely studied QEC code, called the surface code, uses about a thousand noisy qubits arranged in a square array to encode a single logical qubit.

It’s like using a whole orchestra to play a single perfect-pitch note.
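For a sense of where that thousand comes from, here is the textbook counting for a distance-d surface code (a sketch of ours, not a figure from the paper; the distance you actually need depends on the physical error rate and the target logical error rate, so treat the numbers as indicative):

```latex
% A distance-d (rotated) surface code uses d^2 data qubits,
% plus d^2 - 1 ancilla qubits to measure the parity checks:
n_{\text{physical}} = d^2 + (d^2 - 1) = 2d^2 - 1
% A distance of d \approx 23 already means n \approx 1057 physical
% qubits for a single, error-free logical qubit.
```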

The hidden cost of QEC

“Well,” you might say, “let’s put together a few hundred logical qubits. That’s a few hundred thousand noisy qubits. That sounds doable.” But not so fast! What about the connections between them? What infrastructure would you need? A single logical qubit with just a thousand physical, hence noisy, qubits already involves a significant overhead. For superconducting qubits – one of the most advanced kinds of quantum computing hardware, championed by big tech companies like Google, IBM, and AWS, and start-ups like Rigetti, IQM, and Alice & Bob – you need an extra-large dilution refrigerator and three racks tightly packed with microwave cables, components, and instruments. You’re talking about 10 million USD and a 50 kW power supply.  

And that’s just one logical qubit. But you need to multiply this by a hundred to solve the first use-cases of quantum computing in areas like fundamental physics and chemistry. Now you’re talking about a warehouse-sized machine, costing around 1 billion USD and consuming 5 MW. And to handle the hardest but most impactful problems, like Shor’s factorization algorithm, you’ll need to multiply these numbers by another factor of 200.  

Of course, we can expect technological progress and scale economies in cryogenics and microwave instruments to bring the cost and energy consumption down, perhaps by a factor of 10, but even then, such a machine would cost tens of billions of dollars, and it would swallow up a significant fraction of the output of a nuclear reactor.  

In short, the thousand-fold increase in the qubit-count overhead introduced by surface code QEC means that quantum computing would simply not be economically viable. This brings us to a key question of practical quantum computing: how can we reduce the QEC overhead?  

Or, put another way: how do we make quantum computers hardware efficient?

For just a thousand physical, hence noisy, qubits, you’re talking about 10 million USD and a 50 kW power supply, and that’s just one logical qubit.

Cat Qubits

One option is to use a kind of qubit which, by its very design, is inherently protected against one type of noise. Think, for instance, of the “cat qubits” we use at Alice & Bob. QEC then only has to correct the one remaining noise dimension, reducing correction to a 1D problem (see the explanation in the insert below). A surface code would need a 30 × 30 square array of physical qubits, but now you only need one row of it. That means 30 cat qubits, as compared with nearly 1,000 qubits in the standard approach!

Standard qubits suffer from two kinds of noise because they occupy a two-dimensional space. You can picture the state of the qubit as an arrow from the center of a sphere to its surface. The poles correspond to the 0 and 1 states, and the rest of the sphere represents all possible quantum superpositions of these states. We call this the Bloch sphere. Noise can be viewed as unwanted rotations of this sphere around its center, making a given state wander over the two dimensions of its surface, for example, along the meridians or along the parallels or anywhere in-between.  The surface code is designed to correct noise in both of these dimensions. This requires a 2D square array of physical qubits. It’s an oversimplification, but you can think of each dimension of this square as correcting one dimension of noise. 
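In standard textbook notation (our addition, the post itself stays informal): a qubit state is a point on the Bloch sphere parametrized by two angles, and the two noise dimensions correspond to the Pauli bit-flip (X) and phase-flip (Z) rotations:

```latex
% A pure qubit state, with polar angle \theta and azimuthal angle \varphi:
|\psi\rangle = \cos\tfrac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\tfrac{\theta}{2}\,|1\rangle
% Bit-flip noise (Pauli X) moves the state along the meridians,
% swapping the poles:
X|0\rangle = |1\rangle, \qquad X|1\rangle = |0\rangle
% Phase-flip noise (Pauli Z) moves it along the parallels,
% shifting the azimuth \varphi by \pi:
Z|\psi\rangle = \cos\tfrac{\theta}{2}\,|0\rangle - e^{i\varphi}\sin\tfrac{\theta}{2}\,|1\rangle
```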

Cat qubits are a special type of qubit. They encode the 0 and 1 states into classical oscillations of a resonator. Imagine two pendulums, swinging completely out of phase. Because the 0 and 1 states are so different, it is practically impossible for the system to jump from state 0 to state 1 or vice versa. What does this mean on our sphere? States 0 and 1 can no longer move. It’s as though the sphere were fixed on an axis through the poles. In this case, noise can only rotate the sphere in such a way that the qubit moves along parallels, as the sphere turns one way or the other around its polar axis. This is how cat qubits project out one dimension of noise, reducing the error correction scheme to a 1D problem. And to deal with this, you only need a repetition code, that is, a single row of the square array forming the surface code. This is good news because it means you need far fewer qubits.
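To make the repetition-code idea concrete, here is a minimal sketch in Python. It is a classical caricature, assuming each cat qubit’s residual phase flip behaves as an independent classical bit flip and that decoding is a plain majority vote; a real device would infer the flips from syndrome measurements instead:

```python
import random

def majority_vote(bits):
    """Recover the encoded bit by majority vote."""
    return int(sum(bits) > len(bits) / 2)

def logical_error_rate(n_qubits, p_flip, n_trials=100_000):
    """Estimate how often a length-n repetition code fails when each
    qubit independently suffers a flip with probability p_flip."""
    failures = 0
    for _ in range(n_trials):
        # Encode logical 0 as n_qubits copies of 0, then apply random flips.
        received = [1 if random.random() < p_flip else 0 for _ in range(n_qubits)]
        failures += majority_vote(received)  # decoding to 1 is a logical error
    return failures / n_trials

# The logical error rate drops rapidly as the row gets longer:
for n in (1, 3, 5, 7, 9):
    print(n, logical_error_rate(n, p_flip=0.05))
```

As long as fewer than half the qubits in the row flip, the vote recovers the encoded value, so the logical error rate falls exponentially with the length of the row.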

Actually, in a publication last year, we showed that a cat-qubit architecture would require 60 times fewer qubits than the standard approach for large practical problems like Shor’s factorization algorithm. You can read more about that in this blog post.

LDPC codes

A second option is to outsmart the surface code. This is no easy matter. The surface code is popular for good reason. It’s the best QEC code you can get if you want: 

  1. To perform a bounded number of operations on each qubit per round of QEC as the code size increases,
  2. To have local operations, i.e., only involving neighboring qubits, on a 2D array of qubits.   

A code complying with the first condition is called a low-density parity check (LDPC) code. Why so? First, error correction is done by checking the parity of a group of qubits. Now, the density of a parity check refers to the number of qubits it involves, so we say “low density” when this number stays bounded as the code size increases. And since each qubit involved in a parity check undergoes a quantum operation, the LDPC qualifier implies a bounded number of operations per qubit. This condition is crucial for quantum error correction, because the checking operations themselves introduce errors. If the number of checking operations per qubit per round of QEC is unbounded, then so are the errors those operations introduce, and the QEC code will introduce more errors than it can correct. In short, QEC codes must be LDPC if we want them to scale up.
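As an illustration, here are the parity checks of the simplest LDPC code of all, the repetition code, written as a matrix in Python (a toy example of ours, not a construction from the papers discussed here; a real QEC code measures these parities through ancilla qubits rather than reading the data qubits directly):

```python
import numpy as np

def repetition_parity_checks(n):
    """Parity-check matrix of a length-n repetition code:
    check i compares neighboring qubits i and i+1."""
    H = np.zeros((n - 1, n), dtype=int)
    for i in range(n - 1):
        H[i, i] = H[i, i + 1] = 1
    return H

H = repetition_parity_checks(5)
error = np.array([0, 0, 1, 0, 0])   # a flip on qubit 2
syndrome = H @ error % 2            # checks 1 and 2 fire, locating the flip
print(syndrome)                     # -> [0 1 1 0]

# "Low density": every check touches exactly 2 qubits, and every qubit
# sits in at most 2 checks, no matter how large n grows.
```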

Basically then, the surface code is LDPC. To do better than the surface code, the second condition has to be relaxed: you must introduce extra, non-local operations between distant data qubits and parity-check qubits, or go to a 3D array of qubits. A recent theoretical study has produced a simple LDPC recipe: start with a 2D surface-code array of qubits, add two extra long-range connections per qubit on top of it, and hey presto! You get about fourteen times more logical qubits of similar quality out of the same number of physical qubits. And this fourteen-fold overhead reduction would probably be higher, more like thirty-fold, for more demanding applications like Shor’s algorithm.

LDPC applied to the surface code: more quantum computing with fewer qubits.

LDPC codes on standard qubits can be implemented by adding extra connections between distant qubits on a surface-code tile. A set of 14 to 30 logical qubits can then be encoded in a single surface-code tile, in other words, in a tile that would otherwise encode only one logical qubit. The problem is then how to perform gates on such a compact quantum memory. At present, the best way to perform gates on LDPC qubits is to apply them one at a time through a surface-code-like tile placed next to the memory. This already doubles the number of physical qubits. To parallelize logical gates, additional surface-code tiles would be required, quickly cancelling the overhead reduction of the LDPC approach.

These results should have a huge impact on the feasibility of quantum computing. But there are two snags:  

  1. The connectivity scheme is extremely complex and poses major engineering challenges, at least for fixed qubit architectures like superconducting qubits. It’s a real can of worms.  
  2. There is no straightforward way to apply gates to LDPC qubits. Instead, gates must be applied to an auxiliary system made of a non-compact code, like the surface code, which is then mapped to the LDPC chip. These operations can only be done on one logical qubit at a time, which limits operational speed. 

And what if we could choose both options at once?

As LDPC enthusiasts and true cat lovers, we naturally asked ourselves: What about cat qubits? Can we correct their errors with LDPC codes? And could this reduce the QEC overhead even further?   

So, we decided to try the standard LDPC recipe on a row of cat qubits arranged in a repetition code, adding some long-range connections.

But it was immediately clear that any such connectivity would just be an unimaginable mess unless we first folded the row of cat qubits back and forth a few times, rather like a snake. That way, distant qubits became neighbors, easy to connect. In other words, we got the equivalent of a 1D row of cat qubits with extra, long-range connections by making a 2D array of cat qubits using only local connections. This met the first challenge of producing LDPC codes on standard qubits: using cats, you can outsmart the repetition code with LDPC and local connectivity! 
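Here is a toy Python illustration of the folding idea (the fold and connection pattern of our actual construction differ; this only shows how snaking a 1D chain across a grid turns pairs that are far apart along the chain into nearest neighbors on the grid):

```python
def serpentine_position(i, width):
    """Place qubit i of a 1D chain on a 2D grid, folding the chain
    back and forth like a snake (boustrophedon order)."""
    row, col = divmod(i, width)
    if row % 2 == 1:          # odd rows run right-to-left
        col = width - 1 - col
    return row, col

width = 5
for i, j in [(0, 9), (2, 7), (12, 17)]:   # pairs far apart along the chain
    (r1, c1) = serpentine_position(i, width)
    (r2, c2) = serpentine_position(j, width)
    grid_dist = abs(r1 - r2) + abs(c1 - c2)
    print(f"chain distance {j - i}, grid distance {grid_dist}")
    # all three pairs end up at grid distance 1: nearest neighbors
```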

Just one more thing, though. Not all 2D arrays of cats give a good LDPC code. There are countless ways to make a 2D array of qubits, and almost all of them lead to poor LDPC codes, too complex for too small a gain. Actually, finding a good LDPC code is like finding a needle in a haystack. So, guess what! We went through the haystack. And we found a 2D pattern of cat qubits with low-degree local connectivity that encodes up to five times more logical qubits than if you arrange them in a row. The result is so good it’s close to the theoretical optimum!

LDPC applied to the cat-qubit repetition code: more quantum computing with fewer qubits.

That still leaves the question of gates. As with LDPC codes on standard qubits, gates must be applied to an auxiliary system made of a non-compact code, which is then mapped to the LDPC chip. However, because we use cat qubits, the minimal shape of this auxiliary system is a simple 1D repetition code. If the auxiliary system takes the form of a 2D array of cat qubits, it can host multiple repetition codes at once, enabling the operation of many logical gates simultaneously. Logical gates can be parallelized!

A processor based on LDPC cat qubits thus takes the following simple form. We have the LDPC array on one chip, acting as an efficient quantum memory, and the 2D auxiliary array of cat qubits on a flip chip, acting as the computing part of the processor.

So, using cat qubits in an LDPC code kills a whole flock of birds with one stone. More than any lone cat ever could! It gets over all the experimental hurdles involved in using LDPC codes while pushing the quantum-error-correction overhead to its theoretical limit. 

You can read the paper that Alice & Bob just published in collaboration with our partners at Inria.

How do we finish this race sooner?

As separate options, LDPC codes on standard qubits and 1D codes on cat qubits are already quite efficient at reducing the number of qubits needed to build a useful quantum computer. But when you combine the two, the gain is dramatic: you would need just 100,000 qubits to run Shor’s algorithm. That’s 200 times fewer than the state of the art with standard qubits, and it can only be done using cat qubits.

At the end of the day, the hardware needed to run useful quantum algorithms is substantially reduced. The first applications solving fundamental problems in physics and chemistry would require a 1,500-cat-qubit processor, all it takes to operate 100 logical qubits. That only calls for hardware which could, in principle, be built today. For more complex algorithms like Shor’s, the machine would still be large, but it would be realistic and much more likely to be economically viable, although that depends on the application.

The race to provide useful quantum computing is actually two races running along opposing tracks: a race to build increasingly large quantum processors, and a race to reduce the size at which such a processor is useful. The quantum computing era will get under way when the two runners meet. Our way of using LDPC codes on cat qubits has given us a tremendous boost on the second track, and we may be on the verge of meeting the champion on the first track. When that happens, we’ll be able to build the first useful quantum computer.
