Why “They’ve Only Factored 15” Is the Wrong Way to Judge Quantum Computing
Early Shor demos were proofs of control, not the real scoreboard. The path to a cryptographically relevant quantum computer runs through error correction, logical qubits, and fault tolerance – not through a neat sequence of ever-larger classroom factorizations.
One of the most persistent anti-quantum talking points goes like this: “After 25 years, quantum computers still haven’t factored anything bigger than 15, so the field clearly hasn’t progressed.” It sounds devastating. It is also a badly chosen metric – and, strictly speaking, not even quite right as trivia. The landmark 2001 Nature experiment implemented the simplest instance of Shor’s algorithm for N=15, and a later compiled photonic experiment reported factoring N=21. But whether the demo integer was 15 or 21 is almost beside the point.
The real issue is that factoring 15 was never meant to be the summit of the field. In the 2001 experiment, Lieven Vandersypen and colleagues used seven spin-1/2 nuclei to implement the simplest instance of Shor’s algorithm. The authors themselves were clear about what mattered: not “we have started a smooth march toward RSA-2048 by increasing the toy number,” but that they had demonstrated precise control and modeling of a small quantum system. They also explicitly noted that scalability was not implied by that result.
That distinction matters because many early factoring demos were heavily simplified or “compiled.” In 2013, John Smolin, Graeme Smith, and Alex Vargo made the critique explicit: previous experimental implementations had used simplifications that depended on knowing the factors in advance, and the difficulty of the experiment depended on the level of simplification chosen, not on the size of the number being factored. In other words, “largest number factored” is a deeply misleading scoreboard. It can reward clever circuit compression and prior knowledge more than genuine progress toward scalable quantum computation.
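To make that critique concrete, here is a small classical sketch of what a fully "compiled" factoring demo amounts to. The primes and variable names below are my own toy stand-ins, not taken from any specific experiment: the point is that once you are allowed to use the factors while designing the experiment, you can pick a base whose period is 2 by construction, and the supposedly hard quantum step has nothing left to do.

```python
# A classical sketch of the "compiled demo" shortcut, using toy stand-in
# primes. Given the factors p and q up front, the Chinese Remainder Theorem
# yields a base whose period modulo n is 2 by construction, so the "hard"
# quantum order-finding step is already solved before the experiment starts.
from math import gcd

def compiled_base(p: int, q: int) -> int:
    """Build a with a = 1 (mod p) and a = -1 (mod q), so a^2 = 1 (mod p*q)."""
    n = p * q
    return (q * pow(q, -1, p) + (q - 1) * p * pow(p, -1, q)) % n

p, q = 10007, 10009            # toy "secret" primes, known to the experimenter
n = p * q
a = compiled_base(p, q)        # only computable because p and q are known
assert pow(a, 2, n) == 1       # the period of a modulo n is 2 by construction
print(gcd(a - 1, n), gcd(a + 1, n))   # instantly "recovers" 10007 and 10009
```

Nothing quantum happens here, which is exactly the worry: an experiment built this way tells you very little about progress toward factoring numbers whose factors you do not already know.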
So what should we measure instead? Not the biggest toy integer in a compiled demo, but the capabilities that actually determine whether a machine is becoming cryptographically relevant: usable logical qubits, computational depth, throughput, and the rest of the fault-tolerant stack. I break that down in a companion explainer adapted from my broader CRQC capability framework: have a look at “Stop Asking What Number a Quantum Computer Factored. Ask These Five Questions Instead”.
This is the part many critics miss: quantum factoring is not a staircase problem where progress should look like 15, then 21, then 35, then 77, then 221. It is a threshold problem. Shor’s original result showed that factoring and discrete logarithms could, in principle, be solved in time polynomial in the input size on a quantum computer. But turning that asymptotic result into a machine that can actually break RSA depends less on the next cute integer demo and far more on whether you can build a fault-tolerant computer that keeps errors under control for a very long computation.
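For readers who want to see exactly where the quantum machine enters, here is a bare-bones sketch of the classical shell around Shor's algorithm, with a brute-force order finder standing in for the quantum subroutine. The function names and the toy target are mine; the structure is the standard reduction. Only the order-finding step needs a quantum computer, and that is precisely the step that demands a long, fault-tolerant computation.

```python
# A minimal classical sketch of the structure of Shor's algorithm. The
# brute-force order finder below is a stand-in for the quantum subroutine:
# it is exponential in the bit length of n, whereas Shor's quantum routine
# finds the order in polynomial time. Everything else is cheap classical math.
from math import gcd
from random import randrange

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n), by brute force (the quantum step)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_shell(n):
    while True:
        a = randrange(2, n - 1)
        d = gcd(a, n)
        if d > 1:                        # lucky guess already shares a factor
            return d, n // d
        r = find_order(a, n)             # <-- where the quantum computer goes
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            p = gcd(pow(a, r // 2, n) - 1, n)
            if 1 < p < n:
                return p, n // p

print(shor_classical_shell(15))          # e.g. (3, 5)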
A better analogy is rocketry. You do not measure progress toward orbit by asking why prototypes did not go from 37 meters, to 370 meters, to 3.7 kilometers in tidy linear increments. Before orbit, there is a long phase in which engineers are solving propulsion, guidance, materials, stability, and staging. Quantum computing looks similar. When physical error rates sit above the error-correction threshold, adding more qubits can actually make things worse, because you are adding more places for noise to enter the system. Only once physical error rates fall below that threshold does increasing the size of the code begin to reduce logical error rates. That is why modern researchers celebrate below-threshold error correction, not another lab demo of a tiny compiled factoring circuit.
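A toy calculation makes that threshold behavior easy to see. The scaling rule and constants below are illustrative assumptions (a rough surface-code-style model with an assumed ~1% threshold), not measured numbers from any device.

```python
# Toy model of threshold behavior, assuming the common rule of thumb that
# surface-code logical error per round scales roughly as (p/p_th)^((d+1)/2).
# The threshold and prefactor are illustrative assumptions, not device data.
P_THRESHOLD = 0.01   # assumed ~1% threshold for a surface-code-like scheme

def logical_error_rate(p_physical: float, distance: int) -> float:
    return 0.1 * (p_physical / P_THRESHOLD) ** ((distance + 1) / 2)

for p in (0.02, 0.005, 0.001):          # above, slightly below, well below
    rates = {d: logical_error_rate(p, d) for d in (3, 5, 7)}
    trend = "worse" if rates[7] > rates[3] else "better"
    print(f"p={p}: d=3,5,7 -> {rates}  (bigger code gets {trend})")
```

Run it and the same code family gets worse with size above threshold and better with size below it, which is exactly why below-threshold operation is the milestone worth celebrating.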
And on that real scorecard, the field has advanced materially. In 2023, Google reported that a distance-5 surface-code logical qubit modestly outperformed distance-3 instances, marking a point where error correction began to improve performance as code size increased. Then, in its later below-threshold result on the Willow generation, Google reported a 101-qubit distance-7 surface-code memory with logical error suppression as code distance increased, and a logical memory lifetime exceeding that of its best physical qubit by a factor of 2.4. Those are not flashy public-facing milestones like “we factored 35,” but they are much closer to the heart of what must happen before Shor becomes cryptographically relevant.
The same pattern shows up elsewhere. A 2024 trapped-ion result reported logical error rates below physical error rates, including entangled logical qubits with error rates between 9.8x and 500x lower than the physical level, and up to 800x lower in some configurations. Microsoft and Quantinuum then reported 12 highly reliable logical qubits on Quantinuum’s H2 system, with a logical circuit error rate about 22 times better than the corresponding physical-qubit circuit error rate. Again, none of this breaks RSA today. But all of it is exactly the kind of machine-building progress that matters. (On a personal note: these were the achievements that convinced me to drop my high-paying Big 4 Partner job and start Applied Quantum.)
Hardware scale has also moved far beyond the era of seven-qubit demos. IBM introduced its 1,121-qubit Condor processor in 2023, while also highlighting Heron, a 133-qubit processor designed around better performance rather than raw count, with a reported 3–5x improvement over its earlier 127-qubit Eagle generation. That distinction is important: serious observers no longer treat qubit count alone as the measure of progress. What matters is how qubit count, connectivity, fidelity, control, and error correction come together into a system that can support logical computation.
The cryptography angle makes the “still only 15” line even weaker. The relevant question for defenders is not whether someone has staged a media-friendly factoring demo of a slightly larger semiprime. It is how the resource estimates for a real attack are evolving. In 2021, Craig Gidney and Martin Ekerå estimated that factoring RSA-2048 would require around 20 million noisy qubits and about eight hours under a specific set of assumptions. In 2025, Gidney published a new estimate cutting that to less than one million noisy qubits and less than a week, again under stated assumptions. Those systems are still well beyond today’s hardware. But the direction of travel is obvious: the engineering frontier and the algorithmic frontier both move, and neither is captured by shouting “15!” from the sidelines.
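For scale, here is the back-of-the-envelope comparison between those two published estimates. The figures are the headline numbers cited above; treating "less than a week" as seven days is my own simplification for the arithmetic.

```python
# Rough arithmetic on the two resource estimates cited in the text.
# "Less than a week" is treated as 7 days purely for illustration.
gidney_ekera_2021 = {"noisy_qubits": 20_000_000, "runtime_hours": 8}
gidney_2025 = {"noisy_qubits": 1_000_000, "runtime_hours": 7 * 24}

reduction = gidney_ekera_2021["noisy_qubits"] / gidney_2025["noisy_qubits"]
print(f"Estimated physical-qubit requirement fell by about {reduction:.0f}x "
      f"(20M -> 1M), while the projected runtime moved from hours to under a week.")
```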
That is also why security planning has not waited for a dramatic public factoring stunt. In August 2024, NIST finalized its first three principal post-quantum cryptography standards: ML-KEM, ML-DSA, and SLH-DSA. In March 2025, it selected HQC as a backup algorithm for general encryption, explicitly telling organizations to continue migrating to the 2024 standards. Standards bodies are not making those moves because a lab has publicly factored RSA-2048. They are making them because prudent risk management works ahead of the final visible “break” moment.
So the right answer to the “still only 15” taunt is simple: you are looking at the wrong scoreboard. Early factoring experiments were proofs of control over primitive hardware. The real modern scorecard is fault tolerance: below-threshold operation, logical qubits that outperform physical qubits, logical gates with manageable error rates, real-time decoding, and hardware architectures that can scale from dozens of logical qubits to the thousands required for a cryptographically relevant machine. See my CRQC Capability Framework if you want to really understand what to look for. On that score, quantum computing has not been frozen for 25 years. It has been climbing the hard part of the mountain.
Quantum Upside & Quantum Risk – Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.