Alice & Bob tests Azure Quantum Resource Estimator, highlighting the need for fault-tolerant qubits

At Alice & Bob, we meet many companies looking to understand when quantum computing can help them. As Chief Product Officer, the questions I am asked most often are: “Can a quantum computer solve this computational problem that our classical computers can’t? When? For what size of problem?” Historically, answering those questions with quantified precision has been hard: I had to rely on results from quantum complexity theory, or cite papers on error correction. Microsoft’s latest release will now help me formulate quantified answers.
Why does the quantum industry need the Azure Quantum Resource Estimator?
Because the hardware is not available yet, we turn to simulators to explore quantum computing. But simulators can only solve problems of limited size. So, we need a way to estimate the hardware specs required to reach quantum advantage for each specific problem.
This is why we are so excited by Azure Quantum’s latest release: the Resource Estimator. By providing answers to those questions, it allows enterprise end users to set their expectations about when a quantum computer will become a relevant solution to their problem. It fills a real need in the quantum value chain: a subject that used to be discussed with much hand-waving can now be the topic of quantified conversations.
We anticipate that this tool will become a key part of our customer engagements. The Resource Estimator will enable enterprise end users to make informed technology decisions based on their use cases.
Let’s take it for a test drive on Shor’s algorithm
When Jérémie Guillaud, our Chief of Theory, learned about the tool, he said: “Let’s test it on Shor.” Shor’s algorithm is pivotal in our industry, and Jérémie is currently exploring how well suited our technology is for running it.
We decided to use Qiskit for this test. Using the Resource Estimator from Qiskit is really simple, and we knew Qiskit would give us the pre-packaged circuits necessary for running Shor’s algorithm. After setting up the resource as explained in this Microsoft post, the estimator can be invoked simply by using it as the target for a run.
So, after the ceremony of importing the right modules,
from azure.quantum import Workspace
from azure.quantum.qiskit import AzureQuantumProvider
from qiskit.tools.monitor import job_monitor
from qiskit import QuantumCircuit
from qiskit.aqua.algorithms import Shor  # Qiskit Aqua API; in Aqua-free Qiskit versions, Shor lives in qiskit.algorithms
from qiskit.visualization import plot_histogram
from matplotlib import pyplot as plt
import numpy as np
Declaring the right workspace and provider:
workspace = Workspace(
    subscription_id="XXXXXXX",
    resource_group="azurequantum",
    name="YYYY",
    location="westeurope"
)
provider = AzureQuantumProvider(
    resource_id="/subscriptions/XXXXXXXX/resourceGroups/AzureQuantum/providers/Microsoft.Quantum/Workspaces/YYYY",
    location="West Europe"
)
The magic starts with
backend = provider.get_backend('microsoft.estimator')
First, we need to define the kind of physical qubit we want to use. We started with these parameters, as proposed in the Microsoft documentation.
qubitParams = {
    "name": "qubit_gate_ns_e3",
    "instructionSet": "GateBased",
    "oneQubitMeasurementTime": "100 ns",
    "oneQubitGateTime": "50 ns",
    "twoQubitGateTime": "50 ns",
    "tGateTime": "50 ns",
    "oneQubitMeasurementErrorRate": 1e-3,
    "oneQubitGateErrorRate": 1e-3,
    "twoQubitGateErrorRate": 1e-3,
    "tGateErrorRate": 1e-3
}
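If you only want Microsoft’s predefined values, our understanding of the documentation is that passing just the name selects the whole predefined set (the explicit fields above simply restate its defaults); treat this shorthand as an assumption to verify against the current docs:
# Assumption: a predefined qubit parameter set can be selected by name alone
qubitParams = {"name": "qubit_gate_ns_e3"}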
Then we can instantiate the relevant Shor circuit and run the Resource Estimator on it. Since we are going to do this often, let’s first define a function. It takes as input the number to factorize (N), the acceptable error rate of the algorithm (error), and the qubit parameters:
def resource_estimate_factorize(N, error, qubitParams):
    # Build the Shor circuit for N, then submit it to the Resource Estimator
    shor = Shor(N)
    circuit = shor.construct_circuit()
    job = backend.run(
        circuit,
        errorBudget=error,
        qubitParams=qubitParams
    )
    job_monitor(job)
    return job.result()
result = resource_estimate_factorize(9, 0.7, qubitParams)
The Resource Estimator displays results in the form of a collapsible list:
result

It is also possible to get a more compact display, in which case the additional details appear on hover. This is the format we will use in this post.
result.summary


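For scripting, the same numbers can also be read programmatically from the result’s data dictionary; this is what we rely on for the plots below. The 'physicalQubits' key is the one we use later, while the 'runtime' field (in nanoseconds) is our reading of the output schema and may vary across estimator versions:
data = result.data()
# Total physical qubit count (used for the plots later in this post)
print(data['physicalCounts']['physicalQubits'])
# Total runtime in nanoseconds (assumed field name from the output schema)
print(data['physicalCounts']['runtime'])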
Of course, the numbers are quite a sticker shock: under our assumptions, we need 130,000 physical qubits and the execution time is 55 seconds!
So let’s understand where they come from. The first thing to do is to expand “Pre-Layout logical resources”, which holds a summary of the input circuit:

First, there is a cost due to topology. The tool estimates that we need more logical qubits to be able to do our layout. Let’s open “Resource breakdown”:

We can see that we have a factor of 49/18 ≈ 2.7 to account for the layout constraints. Again, the Resource Estimator explains how it came to this conclusion:

And how big are these logical qubits? Let’s open “Logical qubit parameters” and find out:

242 is the answer. So we need 49 × 242 = 11,858 physical qubits for the computation (the 18 algorithmic qubits times the 49/18 layout factor give the 49 logical qubits). How do we get to a total of 130k? We need an additional 116k qubits for the 18 T factories (numbers from “Resource breakdown”).
Let’s open “T factory parameters” for more details:

So we need 18 factories with 6,480 qubits each, explaining the total budget.
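As a quick sanity check, the estimator’s accounting can be reproduced from the numbers reported above:
# Computation: 49 logical qubits (18 algorithmic qubits * 49/18 layout
# factor), each encoded in 242 physical qubits
computation_qubits = 49 * 242               # 11,858

# Magic state distillation: 18 T factories of 6,480 physical qubits each
factory_qubits = 18 * 6480                  # 116,640

print(computation_qubits + factory_qubits)  # 128,498, i.e. ~130k in total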
Let’s explore further. It is quite tempting to play with those numbers: how do they change when N gets bigger? When the error rate target changes?
In order to scale the problem, we ran it on numbers close to 2^p (p from 2 to 10) that happen to be products of odd primes, and plotted the number of physical qubits required to factorize them. We also toggled the targeted error rate of the algorithm from low (10%) to high (70%).
tab1 = []
tab7 = []
# Products of odd primes close to 2^p, for p from 2 to 10
products = [3*1, 1*5, 3*5, 5*5, 3*19, 3*41, 11*23, 7*73, 3*337]
for n in products:
    print(n, end=' ')
    result1 = resource_estimate_factorize(n, 0.1, qubitParams)
    result7 = resource_estimate_factorize(n, 0.7, qubitParams)
    tab1.append(result1.data()['physicalCounts']['physicalQubits'])
    tab7.append(result7.data()['physicalCounts']['physicalQubits'])
plt.plot(range(2, 11), tab1, '-bo')
plt.plot(range(2, 11), tab7, '-rx')
plt.legend(['Error rate = 0.1', 'Error rate = 0.7'])
plt.xlabel('nb of bits of the number to factorize')
plt.ylabel('nb of physical qubits')

Of course, there is much more to explore: execution time, the importance of gate fidelity, etc. I would very much like to understand what happens when factorizing 25, for example.
A small note: as the circuits get bigger (more than 6 bits), we start running into limitations due to using Qiskit. Circuit generation time increases to the point that, beyond 11 bits, the script crashes. We talked to our Microsoft contacts and they told us that, when using Qiskit, the circuit has to be flattened and optimized before being passed to the Resource Estimator. It is this phase, happening within Qiskit, that is time consuming. They tell us that switching to a Q# implementation would solve the issue, as each function is then represented only once, however many times it is called.
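For readers who want to reproduce this observation, here is a minimal sketch (using the same Shor class as above) that times circuit generation alone, without any estimator call; exact timings will of course depend on your machine and Qiskit version:
import time

for n in [3*5, 3*19, 11*23, 7*73]:  # 4-, 6-, 8- and 9-bit inputs
    t0 = time.perf_counter()
    circuit = Shor(n).construct_circuit()  # generation only, no submission
    dt = time.perf_counter() - t0
    print(f"N={n}: {circuit.num_qubits} qubits, generated in {dt:.1f} s")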
What does it mean for Quantum Computing and for Alice & Bob?
From this small experiment, we can draw a few takeaways:
- The targeted error rate matters a lot. Shor’s algorithm can get away with a high error rate, as its results can be checked through a simple multiplication (see the sketch after this list). This makes it very clear why error correction needs are problem-dependent.
- Overall, the numbers are big. For real-world use, you would want to go to 500 bits and more. This short investigation emphasizes the burden that surface code and magic state factory overheads put on quantum computing. Cat qubits, with their lean error correction and low magic state requirements, address this issue.
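To make the first takeaway concrete, here is a back-of-the-envelope sketch: reading the 70% error budget as a per-run failure probability, each run still succeeds with probability 0.3, a wrong answer is caught by a single multiplication, and a handful of repetitions makes overall failure unlikely:
error_budget = 0.7  # per-run failure probability, as used above
# A wrong factorization is detected by multiplying the candidate factors,
# so we can simply rerun until the check passes.
for k in [1, 5, 10]:
    p_success = 1 - error_budget ** k
    print(f"after {k} run(s): P(verified factors) = {p_success:.3f}")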
At Alice & Bob, it has long been our belief that practical uses of quantum computing will require error correction. This experiment strengthens our conviction. Our technology is based on cat qubits and therefore allows for a leaner error correction roadmap.
The future has never been so exciting for the Alice & Bob roadmap, and thanks to Microsoft’s Resource Estimator, we can now say precisely how much!
We will soon publish a more detailed blog post about our roadmap. We will also share results on the resources required to run Shor at scale on cat qubits. (Spoiler: we think the numbers will be much smaller.) So stay tuned.