Stop Asking What Number a Quantum Computer Factored. Ask These Five Questions Instead
The “factored 15” or “factored 21” trope mistakes a toy demo for a progress meter. A better way to track the real quantum threat is to watch the engineering milestones that actually determine whether a machine could break cryptography.
One of the laziest talking points in quantum security is that quantum computing has “gone nowhere” because people still talk about factoring 15. That confuses an early proof-of-concept with the real engineering path to a cryptographically relevant quantum computer. The 2001 Nature experiment explicitly described factoring 15 as the “simplest instance” of Shor’s algorithm, and later analysis showed that compiled factoring demos can depend more on how aggressively the circuit was simplified than on the size of the number itself.
If you want the full technical version, I unpack that in my CRQC Quantum Capability Framework. That framework breaks the problem into nine interdependent capabilities across foundational, logical-gate, and system layers, then compresses them into three executive levers: Logical Qubit Capacity, Logical Operations Budget, and Quantum Operations Throughput.
For a busy security team, though, the same idea can be simplified into five practical questions. Three are the framework’s top-level metrics. Two are reality checks that tell you whether those numbers are becoming operational. You can play with those numbers in my CRQC Readiness Benchmark (Q-Day Estimator) tool.
1. How many usable logical qubits does the machine actually have?
This is the first question because raw physical qubit counts are not the same thing as cryptanalytic capability. A physical qubit is noisy. A logical qubit is an error-corrected qubit you can actually trust. In the framework, this is Logical Qubit Capacity (LQC): how many usable logical qubits the machine can field at once. For RSA-2048, the framework’s simplified methodology uses a rough requirement of about 1,400 logical qubits, not just “a lot of qubits.” That is why recent results on logical qubits matter far more than another tiny factoring demo. Google’s below-threshold surface-code result reported a 101-qubit distance-7 logical memory, and Microsoft with Quantinuum reported 12 highly reliable logical qubits on trapped-ion hardware.
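To make the overhead behind that number concrete, here is a back-of-envelope sketch (in Python, with an illustrative `physical_qubits` helper of my own). It assumes the textbook rotated surface-code footprint of roughly 2d² − 1 physical qubits per logical qubit (d² data qubits plus d² − 1 measurement qubits); real layouts add routing space and factory qubits on top, so read the outputs as floors, not estimates.

```python
# Back-of-envelope sketch, not the framework's exact methodology.
# Assumes a rotated-surface-code footprint of 2*d^2 - 1 physical qubits
# per logical qubit (d^2 data + d^2 - 1 measurement qubits).

LOGICAL_QUBITS_RSA2048 = 1_400  # rough requirement cited in the framework

def physical_qubits(logical: int, distance: int) -> int:
    """Physical qubits needed for `logical` logical qubits at code distance d."""
    per_logical = 2 * distance**2 - 1
    return logical * per_logical

for d in (7, 15, 25):
    print(f"d={d:>2}: ~{physical_qubits(LOGICAL_QUBITS_RSA2048, d):,} physical qubits")
# d= 7: ~135,800 physical qubits
# d=15: ~628,600 physical qubits
# d=25: ~1,748,600 physical qubits
```

Even under these charitable assumptions, the jump from today's ~100-physical-qubit logical demos to a cryptanalytic machine is several orders of magnitude.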
2. How deep a protected computation can it survive before failing?
A machine is not threatening just because it can hold a state briefly. It has to keep going. In the framework, this is Logical Operations Budget (LOB): how many logical operations a system can execute before the probability of failure becomes overwhelming. Put simply, this is the endurance metric. The framework uses a rough RSA-2048 requirement in the neighborhood of 10¹¹ to 10¹² logical operations. That is why short demos, pretty visualizations, or one-off gate benchmarks are not enough. A machine can look impressive for milliseconds and still be nowhere near the circuit depth needed for real cryptanalysis.
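To see why that budget is so demanding, assume (as a simplification) that logical errors strike independently per operation, so a circuit of N logical operations with per-operation logical error rate p succeeds with probability roughly (1 − p)^N ≈ exp(−Np). A minimal sketch under that assumption:

```python
import math

# Simplifying assumption: logical errors are independent per operation, so
# a circuit of N logical ops with per-op error rate p succeeds with
# probability roughly (1 - p)^N ≈ exp(-N * p).

def required_error_rate(n_ops: float, target_success: float = 0.5) -> float:
    """Per-operation logical error rate needed to hit the target success probability."""
    return -math.log(target_success) / n_ops

for n in (1e11, 1e12):
    p = required_error_rate(n)
    print(f"N = {n:.0e} ops -> per-op logical error rate <= {p:.1e}")
# N = 1e+11 ops -> per-op logical error rate <= 6.9e-12
# N = 1e+12 ops -> per-op logical error rate <= 6.9e-13
```

The upper end of the framework's range implies per-operation logical error rates near 10⁻¹², orders of magnitude below anything demonstrated so far.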
3. How fast can it run once error correction is switched on?
Even if a machine has enough logical qubits and enough depth, speed still matters. A cryptanalytic machine that would take months or years to complete one job is not the same risk as one that can finish in days. In the framework, this is Quantum Operations Throughput (QOT): how many logical operations per second the system can sustain under fault-tolerant operation. The framework’s RSA-2048 view is simple: throughput has to be high enough that an enormous logical workload finishes in days, not geologic time. This is where system-level details suddenly matter. Google’s below-threshold result did not just show better logical behavior with larger codes; it also reported real-time decoding with average latency of 63 microseconds and a 1.1-microsecond cycle time. That is the kind of number that starts to matter once you stop judging the field by toy factorizations.
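A toy wall-clock calculator makes the point. The workload below is the upper end of the framework's rough LOB range; the throughput values are illustrative, and the model treats the workload as strictly sequential, ignoring parallelism across logical qubits:

```python
# Illustrative wall-clock calculator, not a resource estimate. Real machines
# parallelize across logical qubits; this just converts a sequential logical
# workload and a sustained throughput into days of runtime.

SECONDS_PER_DAY = 86_400

def runtime_days(logical_ops: float, ops_per_second: float) -> float:
    return logical_ops / ops_per_second / SECONDS_PER_DAY

workload = 1e12  # upper end of the framework's rough RSA-2048 LOB range
for qot in (1e4, 1e5, 1e6):
    print(f"QOT = {qot:.0e} ops/s -> {runtime_days(workload, qot):,.1f} days")
# QOT = 1e+04 ops/s -> 1,157.4 days
# QOT = 1e+05 ops/s -> 115.7 days
# QOT = 1e+06 ops/s -> 11.6 days
```

The gap between "days" and "years" here is purely a throughput question, which is why QOT belongs on the dashboard alongside qubit counts.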
4. Can the machine supply the expensive gates that Shor’s algorithm really needs?
This is where many public conversations become too vague. In the full framework, one of the blocking capabilities is magic-state production and injection. In plain English: can the machine produce the costly non-Clifford resources that large fault-tolerant algorithms consume in huge numbers? It is not enough to have logical qubits on a slide deck. The system has to feed the hard gates fast enough, and reliably enough, to keep a large algorithm moving. The framework treats this as a major bottleneck and notes that no large-scale magic-state factory has yet been demonstrated. That bottleneck also shows up in attack estimates: Craig Gidney’s 2025 RSA-2048 estimate cut the requirement to under one million noisy qubits and under a week partly by improving arithmetic and partly by reducing the magic-state burden through techniques such as magic-state cultivation.
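To see why supply rate is the crux, here is a purely hypothetical supply-versus-demand sketch. Every number in it is a placeholder rather than a published figure; it just asks how many parallel factories you would need to deliver a given total of T states within a target runtime.

```python
import math

# Hypothetical supply-vs-demand sketch for magic states. All constants are
# placeholders for illustration, not measured or published numbers.

def factories_needed(t_states_total: float,
                     runtime_seconds: float,
                     rate_per_factory: float) -> int:
    """Parallel factories required to meet the average T-state demand."""
    demand_per_second = t_states_total / runtime_seconds
    return math.ceil(demand_per_second / rate_per_factory)

T_STATES = 3e9        # placeholder: total T states consumed by the attack
RUNTIME = 5 * 86_400  # placeholder target: finish in ~5 days
RATE = 1e3            # placeholder: 1,000 states/s per factory

print(factories_needed(T_STATES, RUNTIME, RATE))  # -> 7
```

The point is not the specific answer; it is that factory throughput, not gate existence, sets whether the algorithm starves.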
5. Can the whole stack run end to end, continuously, without human babysitting?
This is the reality check that ties everything together. A cryptographically relevant quantum computer (CRQC) is not just a chip. It is a full stack: logical qubits, logical gates, routing, measurements, decoding, classical control, and operational stability working together for hours or days. The framework treats full fault-tolerant algorithm integration, decoder performance, and continuous operation as separate capabilities for exactly that reason. It also notes that multi-day stability remains a major hurdle. Recent progress is real: Google reported below-threshold memories with real-time decoding sustained up to a million cycles, and Microsoft with Quantinuum reported repeated rounds of error correction together with logical computation. But those are stepping stones, not the final state. Security teams should pay close attention when a vendor moves from “we protected one logical qubit briefly” to “we ran a multi-logical-qubit, fault-tolerant workflow continuously and at useful speed.”
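The decoder illustrates why "continuously" is the hard word. Using the cycle-time and latency figures above, a minimal keep-up check (an illustrative sketch, not Google's implementation) shows the real constraint: a pipelined decoder can tolerate constant latency, but its average per-round processing time must stay at or below the cycle time, or the backlog grows without bound.

```python
# Keep-up check for real-time decoding, motivated by the reported 1.1 us
# cycle time and ~63 us average decode latency. Latency spans many rounds
# in flight; the binding constraint is per-round throughput.

def decoder_keeps_up(cycle_time_us: float, avg_decode_time_per_round_us: float) -> bool:
    """True if average decoding throughput matches syndrome production."""
    return avg_decode_time_per_round_us <= cycle_time_us

print(decoder_keeps_up(cycle_time_us=1.1, avg_decode_time_per_round_us=1.0))  # True
print(decoder_keeps_up(cycle_time_us=1.1, avg_decode_time_per_round_us=2.2))  # False: backlog grows
```

That is why "real-time decoding sustained for a million cycles" is a far more meaningful headline than any compiled factoring demo.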
The broader point is simple. “Largest number factored” is the wrong scoreboard. It assumes progress toward CRQC should look like a tidy sequence of classroom integers: 15, then 21, then 35, then 77. Real progress does not look like that. It looks like more usable logical qubits, deeper reliable circuits, faster logical clocks, scalable supply of non-Clifford resources, and longer end-to-end fault-tolerant operation. The full framework gives the detailed map. These five questions are the simplified dashboard.
And for defenders, this is not just an academic debate. NIST finalized its first three principal PQC standards in August 2024, selected HQC as a backup KEM in March 2025, and has published transition material pointing toward deprecating some quantum-vulnerable public-key algorithms after 2030 and disallowing them after 2035 in many cases. The right response is therefore not to obsess over whether “15” sounds small. It is to track the right metrics and migrate in time. Measure the machine, not the toy number.
Quantum Upside & Quantum Risk - Handled
My company, Applied Quantum, helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.