How Close Is the Quantum Threat? Resource Estimates for Breaking Blockchain Cryptography
Introduction
On March 30, 2026, two papers landed within hours of each other. Google Quantum AI published estimates showing that breaking the 256-bit elliptic curve cryptography securing Bitcoin and Ethereum would require fewer than 500,000 superconducting qubits and roughly nine minutes of runtime. The same day, a team from Oratomic, Caltech, and UC Berkeley published estimates showing that the same computation could be performed with as few as 10,000 reconfigurable neutral atom qubits over a period of roughly ten days.
Two hardware paradigms. One cryptographic target. The same conclusion: the engineering specification for a machine that breaks cryptocurrency cryptography is far smaller than the field believed even twelve months earlier.
Neither paper claims such machines exist today. What they demonstrate is that the goalposts keep moving in one direction. This article traces that trajectory, examines what it means for each quantum computing platform, and maps the remaining engineering challenges against my CRQC Quantum Capability Framework to assess how much stands between the current state of the art and a cryptographically relevant quantum computer.
The Pattern: Attacks Always Get Better
The single most important observation in quantum cryptanalysis is not any individual resource estimate. It is the trajectory.
For RSA-2048, the benchmark that the quantum computing community has optimized against for over a decade, the physical qubit requirement on comparable superconducting hardware has dropped steadily for nearly fifteen years. In 2012, estimates from Fowler et al. and Jones et al. placed the requirement at roughly 1 billion physical qubits. By 2017, improved error correction architectures brought it below 300 million. Gidney and Ekerå’s 2019 paper established 20 million as the new standard. Gidney’s 2025 update dropped that below 1 million on the same planar surface code assumptions. The Pinnacle architecture paper in early 2026 pushed below 100,000, though on more aggressive hardware assumptions (degree-ten non-planar connectivity using qLDPC codes that have not been demonstrated at scale).
The reduction averages roughly 20x per major publication cycle, sustained for over a decade. As the Google paper quotes the cryptography community’s standing observation: “attacks always get better.”
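As a sanity check on that trajectory, the short sketch below takes the rounded milestone figures quoted above at face value and computes the cumulative reduction and the implied year-over-year rate. The numbers are this article's rounded citations, not the papers' exact figures.

```python
# Back-of-envelope check on the RSA-2048 physical-qubit trajectory, using the
# rounded milestone figures quoted in this article (not the papers' exact numbers).
milestones = {
    2012: 1_000_000_000,  # Fowler et al. / Jones et al.
    2017: 300_000_000,    # improved error-correction architectures
    2019: 20_000_000,     # Gidney & Ekera
    2025: 1_000_000,      # Gidney
    2026: 100_000,        # Pinnacle architecture (aggressive qLDPC assumptions)
}

years = sorted(milestones)
total_reduction = milestones[years[0]] / milestones[years[-1]]
span_years = years[-1] - years[0]
per_year = total_reduction ** (1 / span_years)

print(f"Cumulative reduction, {years[0]}-{years[-1]}: {total_reduction:,.0f}x")
print(f"Implied average reduction per year: {per_year:.2f}x")
# -> roughly 10,000x overall, i.e. the requirement has halved about once a year.
```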
For ECC-256, the specific problem that protects blockchain cryptography, the optimization history is shorter and less populated. That asymmetry is itself the critical observation.
ECC: The Under-Researched Target
For years, I have argued that ECC was receiving disproportionately little attention from quantum cryptanalysis researchers relative to its real-world importance. RSA-2048 became the unofficial benchmark for quantum computing progress, attracting the bulk of circuit optimization effort. Meanwhile, the cryptographic primitive protecting the largest pool of immediately stealable value (ECDSA on the secp256k1 curve) had far fewer researchers working on efficient Shor’s algorithm implementations.
The thesis was straightforward: once serious algorithmic talent turned its attention to ECC, the resource estimates would compress fast. That is what happened in the first half of 2026.
The published landscape for ECDLP-256 on secp256k1 evolved as follows:
Proos and Zalka (2004) established the earliest rigorous estimates at roughly 1,500 logical qubits and 100 billion Toffoli gates, a computation that would have required astronomical runtimes on any conceivable hardware.
Roetteler et al. (2017) improved the gate count to roughly 50 billion Toffoli gates with ~2,300 logical qubits, but the spacetime volume remained enormous.
Häner et al. (2020) made incremental improvements, pushing the frontier forward without a qualitative leap.
Litinski (2023) achieved the first major modern reduction. The paper’s title advertises 50 million Toffoli gates, but that figure reflects amortized costs when solving multiple ECDLP instances in batch using Montgomery’s trick. For a single-instance attack (the scenario most relevant to stealing from one address), the cost is roughly 200 million Toffoli gates with ~2,500 logical qubits. On a photonic architecture, this translated to approximately 9 million physical qubits. Through 2025, this was the estimate most cited in the cryptocurrency community.
Chevignard, Fouque, and Schrottenloher (EUROCRYPT 2026) attacked the problem from the qubit-minimization direction, achieving ~1,100 logical qubits. The cost: more than 100 billion Toffoli gates. Spacetime volume barely improved because of the extreme gate count.
Google/Babbush et al. (March 2026) found the sweet spot: 1,200 logical qubits with 90 million Toffoli gates, or 1,450 logical qubits with 70 million Toffoli gates. The spacetime volume (the product of qubits and gates that ultimately drives physical resource overhead) dropped by roughly 10x compared to the best prior single-instance estimate.
In the span of three years, the physical qubit estimate for breaking secp256k1 fell from the roughly 9 million of Litinski’s photonic architecture to Google’s 500,000 on superconducting surface code assumptions. An 18x reduction. And Google’s authors are explicit that more optimization headroom likely remains: RSA and quantum chemistry algorithms “have been the focus of significantly more published research historically than quantum algorithms for breaking ECDLP, so it may be the case that algorithms for those applications are closer to optimal than they are for ECDLP.”
The 500,000-qubit, 9-minute estimate is almost certainly not the floor.
ECC vs. RSA: Why Blockchain Cryptography Is More Immediately Vulnerable
The comparison between ECC-256 and RSA-2048 resource estimates reveals why the cryptocurrency ecosystem should be paying closer attention than traditional enterprise IT.
ECC-256 requires roughly 100 times fewer Toffoli gates than RSA-2048 to break: 70-90 million versus 6.5 billion. This follows from the underlying mathematics; the elliptic curve discrete logarithm problem has a different structure than integer factorization, and Shor’s algorithm exploits that structure more efficiently at these key sizes. Breaking RSA-2048 takes days to weeks of continuous fault-tolerant computation. Breaking ECC-256 takes minutes.
On physical qubits, ECC-256 requires roughly half what RSA-2048 demands on comparable surface code assumptions: ~500,000 versus ~1 million. The machine that breaks cryptocurrency is smaller than the machine that breaks traditional web encryption.
| Target | Logical Qubits | Toffoli Gates | Physical Qubits | Runtime | Source |
|---|---|---|---|---|---|
| RSA-2048 (2019) | ~6,000 | ~3 billion | ~20 million | ~8 hours | Gidney & Ekerå |
| RSA-2048 (2025) | ~1,400 | ~6.5 billion | <1 million | <1 week | Gidney |
| RSA-2048 (Pinnacle) | ~1,000-1,500 | (optimized) | <100,000 | ~1 month | Webster et al. |
| ECC-256 (2023) | ~2,500 | ~200 million | ~9 million | Hours | Litinski |
| ECC-256 (EUROCRYPT 2026) | ~1,100 | >100 billion | (very large) | Very long | Chevignard et al. |
| ECC-256 (Google 2026) | 1,200-1,450 | 70-90 million | <500,000 | 9-23 minutes | Babbush et al. |
| ECC-256 (Oratomic 2026) | ~1,200 | (Google circuits) | ~10,000-26,000 | ~10 days | Cain, Preskill et al. |
The final two rows tell the story of 2026. Two independent teams, using different hardware paradigms, arrived at the same cryptographic target with strikingly different machine designs, both vastly smaller than anything published before.
Fast-Clock vs. Slow-Clock: The Architecture Fork
One of the most analytically valuable contributions from the Google whitepaper is the distinction between “fast-clock” and “slow-clock” CRQC architectures. I have incorporated this distinction into my CRQC Quantum Capability Framework analysis.
Fast-clock architectures include superconducting qubits (Google, IBM, Rigetti), photonic qubits (PsiQuantum, Xanadu), and silicon spin qubits (Intel, Diraq). These platforms operate with error correction cycle times on the order of 1 microsecond. At this speed, executing 70 million Toffoli gates takes roughly 18 minutes (or about 9 minutes from a “primed” state where the first half of Shor’s algorithm has been precomputed). Fast-clock machines can launch on-spend attacks: deriving a private key from a public key within a blockchain’s block settlement window.
The magic state production bottleneck illustrates why clock speed matters beyond just runtime. Executing 70 million Toffoli gates in 9 minutes requires generating roughly 500,000 T states per second. On a fast-clock architecture, this demands about 25,000 physical qubits dedicated to T state production, a small fraction of the ~500,000 total. On-spend attacks become viable at essentially the same machine size as at-rest attacks.
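The arithmetic behind those throughput figures is worth making explicit. The sketch below reproduces it, assuming the standard four-T-states-per-Toffoli decomposition; every other number is one quoted earlier in this article.

```python
# Magic-state throughput implied by a 9-minute ECC-256 run on a fast-clock machine.
# Assumes the common 4-T-states-per-Toffoli decomposition; all other figures are
# the rounded numbers quoted in this article.
toffoli_count = 70e6          # Toffoli gates for ECDLP-256 (Google 2026 estimate)
t_per_toffoli = 4             # T states consumed per Toffoli (standard decomposition)
runtime_s = 9 * 60            # "primed" fast-clock runtime, in seconds

t_rate = toffoli_count * t_per_toffoli / runtime_s
print(f"Required T-state throughput: {t_rate:,.0f} per second")   # ~520,000 / s

factory_qubits = 25_000       # fast-clock T-factory footprint quoted above
total_qubits = 500_000
print(f"Factory share of machine: {factory_qubits / total_qubits:.0%}")  # ~5%
```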
Slow-clock architectures include neutral atom (QuEra, Infleqtion, Atom Computing, Pasqal, Oratomic) and ion trap (IonQ, Quantinuum, Alpine Quantum Technologies) platforms. These operate with error correction cycle times roughly 100-1000x slower, on the order of 100 microseconds to 1 millisecond. The same computation takes hours to days rather than minutes.
For slow-clock architectures, the magic state production economics change in kind, not just in degree. The same T state throughput that costs 25,000 qubits on a fast-clock machine would cost roughly 2.5 million qubits on a slow-clock machine: five times the total qubit budget of the fast-clock design. Slow-clock platforms must either build a much larger machine for the same throughput (enabling fast attacks) or accept slower T state production and run the algorithm over days (limiting themselves to at-rest attacks).
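The penalty follows directly from the clock: a factory's T-state output per second scales inversely with the error-correction cycle time, so holding throughput fixed means the factory footprint grows roughly linearly with the cycle time. A rough sketch, using the footprints and cycle times quoted above:

```python
# Rough scaling of T-factory footprint with error-correction cycle time,
# holding T-state throughput fixed at ~500,000 per second (figures as quoted above).
fast_cycle_s = 1e-6            # fast-clock cycle time (~1 microsecond)
fast_factory_qubits = 25_000   # factory footprint at that cycle time

for slow_cycle_s in (100e-6, 1e-3):
    # Each factory produces T states at a rate proportional to 1 / cycle time,
    # so the qubit cost of a fixed throughput grows linearly with the cycle time.
    slow_factory_qubits = fast_factory_qubits * (slow_cycle_s / fast_cycle_s)
    print(f"{slow_cycle_s * 1e6:>6.0f} us cycle -> ~{slow_factory_qubits:,.0f} factory qubits")
# 100 us cycle  -> ~2,500,000 qubits (5x the entire 500,000-qubit fast-clock design)
# 1000 us cycle -> ~25,000,000 qubits, which is why slow-clock designs accept
#                  multi-day runtimes instead of matching fast-clock throughput.
```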
The Oratomic/Caltech paper takes the second approach. By accepting a ~10-day runtime and exploiting the high connectivity of reconfigurable atom arrays (which can implement more efficient error correction codes with fewer physical qubits per logical qubit), they achieve the same cryptographic result with a machine roughly 20-50x smaller in physical qubit count.
This creates two distinct threat scenarios that the cryptocurrency community must plan for:
Scenario 1: Fast-clock CRQC arrives first. On-spend and at-rest attacks become viable simultaneously. Every blockchain transaction is at risk during the settlement window. Faster block times provide partial mitigation (Litecoin, Zcash, and Dogecoin are harder targets), but Bitcoin’s 10-minute average block time gives a 9-minute key derivation roughly a 41% chance of finishing before the next block confirms the victim’s transaction (block arrivals are approximately memoryless, so the success probability is about e^(-9/10) ≈ 0.41; see the sketch after Scenario 2). Private mempools and commit-reveal schemes become essential interim defenses.
Scenario 2: Slow-clock CRQC arrives first. Only at-rest attacks are viable initially. The attacker has unlimited time but cannot intercept transactions in real time. Immediate targets include the 1.7 million BTC in P2PK scripts, the ~5 million BTC in reused addresses, every Ethereum account that has sent a transaction, and any blockchain address type that permanently exposes its public key. Users following strict address hygiene (no reuse, no P2TR, no public key exposure) would be temporarily safe.
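For readers who want the 41% figure from Scenario 1 derived rather than asserted: treat block arrivals as a memoryless (Poisson) process, so the on-spend attack succeeds if no block confirms the victim's transaction before the key derivation finishes. A minimal sketch, using this article's runtimes and approximate average block times:

```python
import math

# On-spend attack success probability under a memoryless block-arrival model:
# the attack succeeds if no block arrives during the key-derivation runtime.
def on_spend_success(derivation_minutes: float, avg_block_minutes: float) -> float:
    """P(no block in the derivation window) for Poisson block arrivals."""
    return math.exp(-derivation_minutes / avg_block_minutes)

derivation = 9.0  # minutes, the "primed" fast-clock runtime quoted above

# Approximate average block times (minutes) for a few chains.
for chain, block_time in [("Bitcoin", 10.0), ("Litecoin", 2.5), ("Dogecoin", 1.0), ("Zcash", 1.25)]:
    p = on_spend_success(derivation, block_time)
    print(f"{chain:<9} {block_time:>5.2f} min blocks -> success probability ~{p:.0%}")
# Bitcoin lands near 41%; the faster chains fall to a few percent or less, which is
# the "partial mitigation" from shorter block times mentioned in Scenario 1.
```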
Given the breadth of quantum computing platforms under active development, with major well-funded programs on both fast-clock and slow-clock architectures, the prudent planning assumption is that both scenarios are plausible.
Where the Difficulty Remains: The CRQC Capability Framework Assessment
Resource estimates are blueprints, not machines. The 500,000-qubit figure describes what needs to be built; it does not guarantee a particular timeline. Through the lens of my CRQC Quantum Capability Framework, the remaining engineering challenges map to specific capability dimensions, each at a different stage of maturity.
Below-Threshold Operation & Scaling (B.3): Google demonstrated below-threshold surface code operation on its Willow processor in late 2024, confirming that adding more physical qubits reduces rather than increases the logical error rate. USTC in China replicated the result in early 2026. But these demonstrations involved a handful of logical qubits. Maintaining below-threshold operation across three orders of magnitude of qubit count introduces qualitatively different challenges: correlated noise, fabrication uniformity across large chips, thermal management at scale.
Magic State Production & Injection (C.2): The recent “magic state cultivation” technique from Google’s team reduced the overhead for producing T states (the non-Clifford operations that dominate fault-tolerant computation costs) by a substantial factor. This contributed directly to the physical qubit estimate dropping so sharply. Producing 500,000 T states per second continuously for 9-23 minutes, with sufficiently low error, remains undemonstrated at anything close to the required scale.
Decoder Performance (D.2): Real-time syndrome decoding for a 500,000-qubit device means processing terabytes of measurement data per second. Sub-microsecond decoders exist for small code blocks, but scaling decoder throughput to the full machine introduces latency that can itself cause logical errors. Promising results exist, but no demonstrated solution at cryptographic scale.
Continuous Operation / Long-Duration Stability (D.3): The algorithm runs for 18-23 minutes of continuous fault-tolerant computation (or ~10 days on slow-clock architectures). Current experimental demonstrations of error-corrected logical operations span hours at most, on far fewer qubits. Drift, calibration decay, leakage accumulation, and correlated errors over sustained runtimes at production scale remain largely uncharacterized.
Engineering Scale & Manufacturability (E.1): Producing ~500,000 physical qubits at 10⁻³ error rates on a planar grid, or ~10,000-26,000 neutral atom qubits with the connectivity and fidelity assumed in the Oratomic paper, represents a manufacturing challenge roughly 100-500x beyond today’s largest demonstrated devices. For superconducting platforms, this means scaling cryogenic wiring, control electronics, and chip fabrication simultaneously. For neutral atom platforms, it means scaling trap arrays, laser control systems, and atom transport mechanisms well beyond current demonstrations.
These are engineering problems. Below-threshold operation has been demonstrated. Error correction codes work. Shor’s algorithm is mathematically sound. What remains is scaling, integration, and sustained reliability, the kind of work that money, talent, and determination can solve. When Google announces a 2029 internal PQC migration deadline while publishing resource estimates built on its own demonstrated architecture, it is reasonable to assume its internal assessment of these timelines is more aggressive than what it publishes.
The “Nothing and Then All at Once” Risk
Conventional models of technological progress assume steady, incremental, measurable advancement. Quantum computing’s path toward cryptographic relevance does not follow that model. The Google paper makes this point directly, citing the technology evolution framework from Anderson and Tushman where an “era of ferment” characterized by competing approaches and discrete capability jumps eventually produces a dominant design.
Quantum computing is still in the era of ferment. Progress arrives as threshold crossings, not smooth curves: achieving below-threshold error correction, demonstrating logical operations, implementing coherent interconnects for modular architectures. Simple metrics like physical qubit count fail to capture these qualitative transitions. A machine with 1,000 qubits below threshold is qualitatively more capable than a machine with 10,000 qubits above it.
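That last claim can be made concrete with the textbook surface-code scaling heuristic, where the logical error rate falls (or grows) geometrically with code distance depending on which side of threshold the hardware sits. The threshold and prefactor in the sketch below are illustrative round numbers, not measurements from any specific device:

```python
# Illustrative surface-code scaling: logical error per cycle ~ A * (p/p_th)^((d+1)/2).
# The threshold and prefactor here are illustrative round numbers only.
def logical_error_rate(p_phys, d, p_threshold=1e-2, prefactor=0.1):
    """Textbook estimate of the logical error rate per cycle at code distance d."""
    return prefactor * (p_phys / p_threshold) ** ((d + 1) / 2)

for p_phys in (1e-3, 2e-2):                # below threshold vs. above threshold
    side = "below" if p_phys < 1e-2 else "above"
    print(f"p_phys = {p_phys:.0e} ({side} threshold)")
    for d in (7, 15, 25):
        rate = logical_error_rate(p_phys, d)
        print(f"  d = {d:>2}: ~{min(rate, 1.0):.1e} logical errors per cycle")
# Below threshold, raising d (adding physical qubits) drives the logical rate toward
# the very low per-operation error rates a CRQC needs; above threshold, adding
# qubits only makes things worse.
```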
One practical consequence: the ECDLP challenge ladder proposed by Dallaire-Demers, Doyle, and Foo (2025), a sequence of increasingly difficult ECDLP instances from 6-bit to 256-bit intended as an early warning system, may provide less warning than expected. If a leading platform overcomes all scaling barriers before producing a device capable of solving 32-bit ECDLP, there could be little time between breaking 32-bit and breaking 256-bit curves. The Google paper warns that the scaling between small instances and cryptographically relevant ones is not the bottleneck; the engineering thresholds are.
The paper’s starkest warning follows from this logic: “a successful public demonstration of Shor’s algorithm on a 32-bit elliptic curve should not be seen as a wake-up call to adopt PQC as much as a potential signal that PQC adoption has already failed.”
And the scenario that should keep CISOs awake: “it is conceivable that the existence of early CRQCs may first be detected on the blockchain rather than announced.” A covert actor that achieves CRQC capability has every incentive to exploit it silently, draining vulnerable addresses over weeks rather than publishing a paper.
The Convergence
Resource estimates for breaking ECC-256 have converged from multiple independent research groups, using different optimization strategies and targeting different hardware platforms, to a consistent range of 1,200-1,450 logical qubits and 70-90 million Toffoli gates. The physical qubit overhead varies by architecture: 500,000 for conservative superconducting surface code, potentially under 100,000 for aggressive qLDPC codes, and 10,000-26,000 for reconfigurable neutral atoms. All represent machines that are plausibly within reach of current scaling roadmaps.
The gap between current capability and what is needed is shrinking from both directions. Algorithmic advances lower the target. Hardware advances raise the capability. The organizations building the hardware are the same ones publishing the resource estimates, and they are setting their own PQC migration deadlines for 2029.
For the cryptocurrency ecosystem, the implication is that the margin for error in every migration timeline is narrowing. The detailed vulnerability analysis for Bitcoin, Ethereum, and the Lightning Network in subsequent articles translates these resource estimates into specific exposure numbers and concrete mitigation strategies. The technical migration roadmaps for fixing Bitcoin and fixing Lightning at the protocol level describe what needs to be built. Whether the ecosystem builds it in time depends on governance coordination more than it depends on engineering.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.