# Computing 256-bit elliptic curve logarithm in 9 hours with 126,133 cat qubits

By **Jérémie Guillaud**, Chief of Theory at Alice&Bob

*In this blog post, I review the recent results of the paper [1] obtained in collaboration with CEA, discuss the motivations for this work and the working hypotheses.*

## Cat qubits for hardware-efficient error correction

Many envisioned applications of quantum computing (quantum dynamics, quantum chemistry, financial optimization, …) require a large-scale quantum computer – typically composed of a few thousand qubits and the ability to run a very large number of quantum gates [2-4]. Running such algorithms is well beyond the capabilities of current quantum hardware, which is in the range of a few hundred to a few thousand noisy qubits (with error rates around $10^{-3}$ for the best hardware, allowing only up to a few thousand quantum gates to be run). Although the theory of fault-tolerant quantum computing (FTQC) solves this issue in principle, operating a quantum error correcting code (QECC) below its accuracy threshold remains experimentally challenging, and this strategy comes with a significant hardware overhead (in the number of physical qubits), such that building a fault-tolerant quantum computer at scale (composed of millions of qubits) is a daunting engineering task.

The bosonic approach to fault-tolerant quantum computing attempts to ease some of this burden by leveraging the infinite-dimensional Hilbert space of a quantum harmonic oscillator to implement some of the redundancy of quantum information required by error correcting schemes. Bosonic qubits are therefore ‘hardware-efficient’, as they make it possible to implement quantum error correcting codes with a lower hardware footprint. Our work focuses on the case of the ‘cat qubits’ built by Alice&Bob, where the fundamental bit of quantum information is encoded in superpositions of two coherent states with the same amplitude but opposite phases. The crux of this qubit is the *tunable noise bias* inherited from the fact that the typical noise affecting the hardware produces extremely asymmetric noise at the level of the encoding: the bit-flip error rate is suppressed exponentially with the average photon number of the cat states, at the cost of a linear increase of the phase-flip error rate. Assuming that, for a sufficiently large average photon number, the bit-flip errors of cat qubits can be made sufficiently rare, one may build a logical qubit by correcting only the phase-flip errors. This concatenated error correction scheme, where half of the errors are suppressed passively at the hardware level by using cat qubits and the other half are corrected by an active quantum error correcting code, produces a resource-efficient architecture.
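To make the bias concrete, here is a minimal numerical sketch of these two scalings. The exponential suppression in the mean photon number $\bar n$ and the linear growth of the phase-flip rate follow the standard cat qubit noise model; the prefactors (`gamma0`, `kappa1`) are illustrative placeholders, not measured values.

```python
import math

def cat_error_rates(nbar, kappa1=1e3, gamma0=1e6):
    """Illustrative scaling of cat-qubit error rates with mean photon number.

    Bit-flip rate is suppressed exponentially in nbar, while the phase-flip
    rate grows only linearly. Prefactors are placeholders for illustration.
    """
    bit_flip = gamma0 * math.exp(-2.0 * nbar)  # exponential suppression
    phase_flip = kappa1 * nbar                 # linear increase
    return bit_flip, phase_flip

for nbar in (4, 8, 16):
    b, p = cat_error_rates(nbar)
    print(f"nbar={nbar:2d}  bit-flip ~{b:.2e}/s  phase-flip ~{p:.2e}/s")
```

Doubling the photon number only doubles the phase-flip rate but divides the bit-flip rate by a large exponential factor, which is why a modest increase in cat size buys many orders of magnitude of passive protection.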

## Context & results of our work

While the analysis of concatenated error correction schemes with bosonic qubits already gives a hint of the resource savings of this approach, one may wonder how those savings impact the final size of a full-scale quantum computer. In our work [1], we perform an end-to-end resource analysis of the number of qubits and the computational time required to solve two cryptographically relevant problems using a repetition cat code architecture: integer factorization and elliptic curve discrete logarithm computation. Quantifying the quantum resources required to solve these problems is important as they are widely considered to be amongst the most difficult; these estimates therefore shed light on the ultimate scale that quantum computers will have to reach to fulfill their potential. These numbers may also be useful in assessing the level of maturity required for quantum computers to become a real threat to currently used cryptographic protocols.

The objective of this work is three-fold:

1. Serve as a guideline to benchmark fault-tolerant architectures;
2. Describe in detail the architecture (physical layout of the processor, physical- and logical-level description of gates, compilation of the algorithm on this specific processor) and optimize it in a co-design approach;
3. Precisely quantify the space (number of qubits) and time (algorithm run time) cost of solving such difficult problems on a ‘hardware-efficient’ architecture, and more precisely the reduction in complexity inherited from the use of (extremely) biased-noise qubits.

Under the physical hardware assumptions (*see below for a discussion*) that a millisecond-range bosonic mode lifetime and two-photon stabilization are realized, and that a 1D repetition code can be used, we find that a repetition cat code architecture can compute a 256-bit elliptic curve discrete logarithm in 9 hours using 126,133 cat qubits, and that a 2048-bit RSA integer can be factored in 4 days using 350,000 cat qubits. While these numbers are well beyond current capabilities, they are lower than what was previously thought [4, 5], which reflects how the ‘built-in’ error correction capabilities of cat qubits at the lowest hardware level significantly reduce the final size of a large-scale fault-tolerant quantum computer.

## Benchmarking fault-tolerant architectures

Performing an end-to-end resource analysis from the quantum algorithm down to detailed hardware-level considerations may provide a useful benchmark of quantum architectures. Indeed, the zoo of quantum hardware with different physical realizations (trapped ions, photonics, superconducting circuits, …) or different computational models (analog quantum computing, gate or measurement based, …) makes it difficult to directly compare the different approaches to large-scale quantum computing, or to estimate the time-to-market of the technology. Recently, some metrics (such as the Quantum Volume or the Q-score) have been proposed to compare the small-to-medium-scale, noisy quantum processors available today. These metrics capture most of the important features of a quantum processor, such as clock speed vs. coherence times, single- and multi-qubit gate fidelities, qubit connectivity, etc.; but they somewhat miss some of the important features of a fault-tolerant architecture, such as structure in the noise, physical noise vs. accuracy threshold (also known as the $\Lambda$ factor), subthreshold scaling of a given code, QECC decoding complexity, ‘teraquop’ footprint, etc.

Some of these features, such as the $\Lambda$ factor, are of immediate short-term importance, as they shed light on which architectures are good candidates to reach experimental macroscopic quantum coherence times. Others, such as the ‘teraquop’ or application footprint (the number of physical qubits required to implement a logical qubit with an error rate of $10^{-12}$, or to solve a specific problem), are of mid-to-long-term importance in assessing the potential of a given technology towards addressing the full range of applications envisioned for quantum computers. At a theoretical level, use-case-level resource estimation provides valuable insight into the potential of a technology, as it aggregates all the advantages of a platform into a single number and may be useful in assessing the practical complexity of a given problem.

## Hardware hypotheses and outlook

When comparing the obtained numbers of qubits and computational time for a given algorithm, one should however always bear in mind that these numbers are highly dependent on the assumptions made about the hardware. Indeed, in a fault-tolerant architecture operated below its fault-tolerance threshold, the logical error rate is proportional to $A\,(p/p_{\mathrm{th}})^{(d+1)/2}$, where $A$ is some constant, $p/p_{\mathrm{th}}$ is the ratio between the assumed physical noise and the threshold of the architecture, and $d$ is the code distance; the final results therefore depend strongly on the assumed levels of physical noise. Our work relies on three main hardware assumptions.
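As a quick illustration of how such a subthreshold scaling law sets the code distance, the sketch below picks the smallest odd distance whose predicted logical error rate meets a target such as the $10^{-12}$ ‘teraquop’ level. The prefactor `A` and the noise-to-threshold ratio are illustrative values, not figures from the paper.

```python
def logical_error_rate(p_ratio, d, A=0.1):
    """epsilon_L ≈ A * (p/p_th)^((d+1)/2); valid below threshold (p_ratio < 1)."""
    return A * p_ratio ** ((d + 1) / 2)

def distance_for_target(p_ratio, target, A=0.1):
    """Smallest odd code distance whose predicted logical error rate meets target."""
    d = 3
    while logical_error_rate(p_ratio, d, A) > target:
        d += 2  # code distances are conventionally odd
    return d

# e.g. physical noise at a tenth of threshold, 'teraquop' target of 1e-12
print(f"required distance: {distance_for_target(0.1, 1e-12)}")
```

The exponential dependence on $d$ is why a moderately better $p/p_{\mathrm{th}}$ translates into a much smaller code, and hence a much smaller machine.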

First, we assume that the classical processing associated with error correction (decoding the streams of syndromes) is instantaneous and that classical computations are error-free. While this assumption is fairly common in such works, building (classical) decoders that can keep up with the clock speed of a superconducting architecture is a major challenge for the field that has yet to be addressed. Note, however, that the ‘hardware-efficient’ cat qubit approach with some ‘built-in’ partial error correcting capabilities decreases the complexity of the decoding problem.

The second one concerns the hardware quality itself. Focusing on single-photon loss, by far the dominant error channel of our physical platform, we assume a memory lifetime in the millisecond range. While such numbers have been achieved experimentally in bare high-quality 3D resonators, maintaining such long lifetimes in a complex circuit Quantum Electrodynamics (cQED) environment will be challenging. A key metric capturing the quality of a cat qubit processor is the ratio $\kappa_1/\kappa_2$ between the single-photon loss rate $\kappa_1$ and the two-photon stabilization rate $\kappa_2$ of the cat qubit. We here assume that a small 2-to-1 photon loss ratio can be achieved, which translates to a stabilization rate $\kappa_2$ in the MHz range, a rather conservative value deemed attainable by crafting proper circuits to stabilize cat qubits. Note that achieving even higher values of $\kappa_2$ will ease the challenging requirements on $\kappa_1$, and achieving higher values of both $\kappa_1$ and $\kappa_2$ for the same ratio is in general desirable as it increases the clock speed of the architecture.
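For intuition on the orders of magnitude involved, the following sketch converts an assumed memory lifetime into a single-photon loss rate $\kappa_1$ and a 2-to-1 loss ratio $\kappa_1/\kappa_2$. The 1 ms lifetime and 10 MHz stabilization rate below are illustrative round numbers, not the paper's exact working point.

```python
def loss_ratio(lifetime_s, kappa2_hz):
    """2-to-1 photon loss ratio kappa1/kappa2 for a given memory lifetime.

    kappa1 is the single-photon loss rate (inverse lifetime); kappa2 is the
    two-photon stabilization rate of the cat qubit.
    """
    kappa1 = 1.0 / lifetime_s
    return kappa1 / kappa2_hz

# Illustrative assumption: 1 ms lifetime and a 10 MHz stabilization rate
ratio = loss_ratio(1e-3, 1e7)
print(f"kappa1/kappa2 ~ {ratio:.0e}")
```

A longer lifetime or a faster stabilization both shrink the ratio; obtaining the same ratio with larger $\kappa_1$ and $\kappa_2$ is preferable, since the gate and error correction cycle times scale inversely with $\kappa_2$.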

Third, we assume in this work that the correction of bit-flip errors may be entirely handled by the passive ‘self-correcting’ capabilities of cat qubits (no saturation of bit-flips up to the working point), such that a simple 1D repetition code against phase-flips suffices to produce a computational logical qubit of sufficient quality. In practice, we find that this means cat qubits with an average photon number of about 19 must be used. This hypothesis about the noise bias achievable in practice is crucial to the ‘hardware-efficiency’ of the concatenated cat codes approach: assuming unbiased noise of strength $10^{-3}$, it is estimated that a few tens of millions of physical qubits are required to factor an RSA-2048 integer [5], split into a few tens of thousands of logical qubits times roughly a thousand physical qubits per logical qubit. The use of a 1D repetition code roughly takes the square root of the latter number (although other factors also help, such as the higher threshold of the repetition code, or the shorter code cycle leading to smaller code distances), reducing the total number of qubits to a few hundred thousand. While we have focused our work exclusively on the working hypothesis of a 1D repetition code, we roughly estimate that if a saturation of bit-flip errors happens at a moderate photon number (necessarily implying first-order active correction of bit-flips), a rectangular surface code would be required instead of a repetition code, substantially increasing the number of physical qubits. While this would considerably complicate the construction of the processor, it would still be a sizable reduction compared to unbiased-noise architectures. This sheds light on the importance of this hypothesis for the expected performance of the cat qubit approach.
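To see where the savings per logical qubit come from, one can compare the standard qubit counts of the two codes at equal distance: a distance-$d$ repetition code uses $2d-1$ qubits, while a distance-$d$ rotated surface code uses $2d^2-1$. The distance $d = 21$ below is an illustrative value, not the paper's.

```python
def repetition_code_qubits(d):
    # 1D repetition code: d data qubits + (d - 1) syndrome ancillas
    return 2 * d - 1

def surface_code_qubits(d):
    # rotated surface code: d*d data qubits + (d*d - 1) ancillas
    return 2 * d * d - 1

d = 21
print(repetition_code_qubits(d))  # -> 41
print(surface_code_qubits(d))     # -> 881
```

At fixed distance, the repetition code is lighter by a factor of roughly $d$, which is the origin of the large per-logical-qubit saving discussed above.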

Finally, the bosonic approach to FTQC is still undergoing fast theoretical development aimed at easing some of the requirements just discussed. Recent proposals that will either further reduce the final overhead (under identical hardware hypotheses) or ease the constraints on the hardware (for an identical overhead) include using squeezed cat qubits [6-8] to improve the scaling of bit-flip error suppression (thereby reaching a sufficient level of bit-flip suppression with fewer photons, at the cost of a more complex cat stabilization scheme), or using an asymmetric architecture [9] that significantly enhances the speed of the error correction cycle, and therefore the threshold, leading to a low footprint in the deep subthreshold regime.

For this reason, I believe the obtained numbers should not be interpreted as the ultimate goal of the hardware effort but rather as an illustration of how concatenated cat codes may drastically reduce the engineering challenge.

## References

[1] É. Gouzien, D. Ruiz, F.-M. L. Régent, J. Guillaud, and N. Sangouard, “Computing 256-bit elliptic curve logarithm in 9 hours with 126 133 cat qubits,” 2023. [Online]. Available: https://arxiv.org/abs/2302.06639

[2] M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, “Elucidating reaction mechanisms on quantum computers,” Proceedings of the National Academy of Sciences, vol. 114, no. 29, pp. 7555–7560, jul 2017. [Online]. Available: https://doi.org/10.1073%2Fpnas.1619152114

[3] S. Chakrabarti, R. Krishnakumar, G. Mazzola, N. Stamatopoulos, S. Woerner, and W. J. Zeng, “A threshold for quantum advantage in derivative pricing,” Quantum, vol. 5, p. 463, jun 2021. [Online]. Available: https://doi.org/10.22331%2Fq-2021-06-01-463

[4] M. E. Beverland, P. Murali, M. Troyer, K. M. Svore, T. Hoefler, V. Kliuchnikov, G. H. Low, M. Soeken, A. Sundaram, and A. Vaschillo, “Assessing requirements to scale to practical quantum advantage,” 2022. [Online]. Available: https://arxiv.org/abs/2211.07629

[5] C. Gidney and M. Ekerå, “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits,” Quantum, vol. 5, p. 433, apr 2021. [Online]. Available: https://doi.org/10.22331%2Fq-2021-04-15-433

[6] D. S. Schlegel, F. Minganti, and V. Savona, “Quantum error correction using squeezed schrödinger cat states,” Physical Review A, vol. 106, no. 2, aug 2022. [Online]. Available: https://doi.org/10.1103%2Fphysreva.106.022431

[7] T. Hillmann and F. Quijandría, “Quantum error correction with dissipatively stabilized squeezed cat qubits,” 2022. [Online]. Available: https://arxiv.org/abs/2210.13359

[8] Q. Xu, G. Zheng, Y.-X. Wang, P. Zoller, A. A. Clerk, and L. Jiang, “Autonomous quantum error correction and fault-tolerant quantum computation with squeezed cat qubits,” 2022. [Online]. Available: https://arxiv.org/abs/2210.13406

[9] F.-M. L. Régent, C. Berdou, Z. Leghtas, J. Guillaud, and M. Mirrahimi, “High-performance repetition cat code using fast noisy operations,” 2022. [Online]. Available: https://arxiv.org/abs/2212.11927