Pinnacle Architecture: 100,000 Qubits to Break RSA-2048, but at What Cost?
Introduction
In 2019, the consensus estimate for breaking RSA-2048 on a quantum computer stood at 20 million physical qubits. By May 2025, Craig Gidney had collapsed that to under one million. Now, in February 2026, a team of eight researchers at a Sydney startup claims the number is under 100,000. Each order-of-magnitude drop took less time than the last.
But here is the pattern that almost nobody in the press is reporting: every generation of these papers achieves its qubit reduction by making the remaining hardware problem harder in ways that are not captured by a single number. The qubits get fewer; the engineering gets nastier. Understanding where the difficulty went – not just how many qubits are left – is what separates a useful risk assessment from a misleading headline.
Iceberg Quantum’s Pinnacle Architecture paper is the latest and most aggressive entry in this tradition. The result is credible – acknowledged as valid by Craig Gidney himself, the very researcher whose record it claims to beat. But it shifts the burden from qubit count to equally daunting challenges: non-local qubit connectivity, fast QLDPC decoding that nobody has demonstrated at scale, and month-long sustained fault-tolerant operation that exceeds anything attempted by orders of magnitude. In my initial analysis published the day after the paper dropped, I argued that the hyped-up headlines were not warranted. Since then, every major expert who has weighed in agrees. No government agency has revised any cryptographic timeline in response.
Since the headlines – even from outlets like New Scientist – are not abating, let’s pull the architecture apart and examine what Pinnacle actually achieved, where its assumptions buckle, and what it means for organizations planning their PQC migration.
The paper and its claims in detail
The preprint (“The Pinnacle Architecture: Reducing the cost of breaking RSA-2048 to 100,000 physical qubits using quantum LDPC codes,” arXiv:2602.11457, February 12, 2026) comes from eight researchers at Iceberg Quantum in Sydney: Paul Webster, Lucas Berent, Omprakash Chandra, Evan T. Hockings, Nouédyn Baspin, Felix Thomsen, Samuel C. Smith, and Lawrence Z. Cohen.
It has not been peer-reviewed.
The headline resource estimates assume superconducting qubits with p = 10⁻³ physical error rate, 1 µs code cycle time, and 10 µs classical reaction time. (This assumption is explicitly stated in Section III.D of the paper: "For simplicity, we assume throughout that the reaction time is equal to ten times the code cycle time, i.e., $$t_r = 10t_c$$.") Under these assumptions, the paper reports four configurations:
| Configuration | Physical qubits | Runtime |
|---|---|---|
| Absolute minimum qubits | ~97,000 | ~1 year |
| One-month minimum | ~98,000 | ~1 month |
| Balanced | ~151,000 | ~1 week |
| Minimum time | ~471,000 | ~1 day |
For slower hardware platforms typical of trapped ions (1 ms cycle, p = 10⁻⁴), the paper estimates 3.1 million physical qubits for a one-month factoring run. For neutral atoms (1 ms cycle, p = 10⁻³), it balloons to 13 million qubits. (This explosion in spacetime volume occurs because the paper rigidly assumes classical reaction time is always 10× the code cycle time, $$t_r = 10t_c$$. Slowing the quantum cycle to 1 ms inherently forces the model to assume a sluggish 10 ms classical reaction time. To compensate for this crippled logical throughput and still hit a one-month runtime, the algorithm is forced to massively parallelize – which is exactly why the physical qubit count balloons back into the millions.)
The architecture uses generalized bicycle (GB) QLDPC codes – specifically instances including [[254,14,16]], which is a known valid code in the literature, and [[510,16,24]], which the paper itself presents as conjectured parameters based on a proposed family of generalized bicycle codes (Section V.A: "We conjecture that the GB codes constructed in this way have parameters…"). (I use [[n,k,d]] notation, which is equivalent to the paper's ⟦n,k,d⟧ notation.)
The encoding rate is where the qubit savings actually come from, and it is worth quantifying precisely. A distance-24 surface code encodes a single logical qubit in roughly 1,152 physical qubits (2d² for the rotated variant). Gidney’s “yoked” surface codes improve idle storage to approximately 430 physical qubits per logical qubit, but active “hot” patches still cost around 1,352 physical qubits per logical qubit at d=25. Pinnacle’s distance-24 GB code block, by contrast, encodes 16 logical qubits in 1,020 data and check qubits – and after adding the four static measurement gadgets and inter-block bridges, the full processing block comes to approximately 1,620 physical qubits. That works out to roughly 101 physical qubits per logical qubit at the processing-block level – a 13× density improvement over Gidney’s hot patches.
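To make the comparison concrete, here is a minimal sketch – using only the numbers quoted above, with Pinnacle's gadget and bridge overheads folded into the 1,620-qubit block total – that reproduces the per-logical-qubit densities:

```python
def rotated_surface_code_qubits(d: int) -> int:
    """Data + check qubits for one logical qubit in a rotated surface code."""
    return 2 * d * d  # 1,152 at d = 24

# (physical qubits, logical qubits) per block, from the figures quoted above
schemes = {
    "surface code (d=24)":       (rotated_surface_code_qubits(24), 1),
    "yoked surface, hot (d=25)": (1352, 1),    # Gidney 2025, active patch
    "Pinnacle GB block (d=24)":  (1620, 16),   # [[510,16,24]] + gadgets/bridges
}

density = {name: n / k for name, (n, k) in schemes.items()}
for name, qubits_per_logical in density.items():
    print(f"{name:27s} {qubits_per_logical:7.1f} physical qubits per logical qubit")

ratio = density["yoked surface, hot (d=25)"] / density["Pinnacle GB block (d=24)"]
print(f"hot-patch / GB-block density ratio: {ratio:.1f}x")  # ~13x
```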
This is not a marginal gain. It is a structural advantage baked into the mathematics of QLDPC codes: they can encode k logical qubits in n physical qubits at rates k/n that approach non-zero constants as distance grows, while surface codes are locked at k/n = 1/d², which decays to zero. The question has never been whether QLDPC codes are theoretically denser – that has been known for years. The question is whether you can compute on them efficiently. That is what the Pinnacle architecture attempts to answer.
The algorithm is Shor’s (not Regev’s), building directly on Gidney’s May 2025 implementation using approximate residue number system arithmetic. Pinnacle extends this with a parallelization scheme that processes multiple residue primes simultaneously across separate processing units while sharing the input register via read-only memory access.
A novel “magic engine” module performs magic state distillation and injection in a single code block, drawing on fold-transversal cultivation techniques from Sahay et al. (arXiv:2509.05212). This is significant because magic state production (Capability C.2 in my CRQC framework) is typically the dominant cost center in fault-tolerant architectures – Gidney’s 2025 design devoted six dedicated factory modules to it.
The magic engine mechanism deserves closer attention because it is, in my assessment, the most architecturally novel element of the paper. Each magic engine is a single QLDPC code block whose logical qubits are partitioned into two sectors – Left (L) and Right (R). On odd-numbered logical cycles, the L sector runs a 15-to-1 distillation circuit on noisy input states while the R sector simultaneously injects a previously distilled magic state into the adjacent processing unit. On even cycles, the roles swap. This alternating pipeline delivers one high-fidelity magic state (target infidelity ~10⁻¹¹) per logical cycle per processing unit – eliminating the dedicated “magic state factory” that consumes the majority of physical space in conventional surface-code designs.
The trade-off is rejection rate. At p = 10⁻³, the paper reports a magic-engine rejection probability of approximately 6%, driven primarily by the upstream state preparation rather than the distillation protocol itself. Each rejection wastes a logical cycle – tolerable when amortized over billions of cycles in a month-long computation, but a meaningful source of variance in runtime estimates. Ironically, to hit their targets at the 10⁻³ physical error rate, Pinnacle's "magic engines" do not entirely escape the surface code. The authors quietly admit in Section V.B.2 that they must still rely on a hybrid approach – running "fold-transversal cultivation" inside fifteen ancillary d = 9 rotated surface codes just to prepare the initial noisy magic states before injecting them into their highly efficient QLDPC blocks. The "surface code killer" architecture still relies on the surface code tax to spark its engines.
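To put the 6% figure in context, here is a toy geometric-retry model – my own simplification, not the paper's analysis – assuming a flat per-attempt rejection probability with an immediate retry, which glosses over the paper's distinction between preparation and distillation failures:

```python
# Toy geometric-retry model of magic-engine rejection (assumption: flat 6%
# per-attempt rejection probability, retry on the next logical cycle).
P_REJECT = 0.06

attempts_per_state = 1 / (1 - P_REJECT)   # mean attempts per delivered state
print(f"average logical cycles per delivered magic state: {attempts_per_state:.3f}")
print(f"expected runtime inflation from rejections: {(attempts_per_state - 1) * 100:.1f}%")
```

The expected inflation is only ~6.4%, which is why the cost amortizes; the larger risk is that the 6% figure itself proves optimistic on real hardware.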
The paper also benchmarks the architecture on the 2D Fermi-Hubbard model – a condensed-matter simulation relevant to high-temperature superconductivity research. For a 16×16 lattice, Pinnacle estimates 62,000 physical qubits at p = 10⁻³ and just 22,000 at p = 10⁻⁴, with runtimes of 1.6 to 3.6 minutes per shot. Comparable surface-code estimates require 940,000 and 200,000 physical qubits respectively – reductions of roughly 15× and 9×. These numbers are arguably more meaningful than the RSA headline because the runtimes are minutes rather than months, eliminating the month-long stability concern that dominates the cryptanalysis estimate.
One technical note that has received insufficient attention: some of the code parameters in the generalized bicycle family are presented as conjectural. The paper describes a parameterization derived from simplex-code structure and tabulates specific instances with simulated performance, but the underlying family is not presented as a fully closed mathematical result. Concrete instances are verified through simulation, but the broader family’s properties remain an active area of research. This does not invalidate the estimates, but it does mean parts of the theoretical foundation are still provisional – a fact that should temper any impulse to treat the 98,000-qubit figure as a hard floor.
Craig Gidney’s measured criticism
Gidney – the Google Quantum AI researcher whose May 2025 paper established the previous benchmark of fewer than one million qubits – offered the most technically pointed assessment. On Scott Aaronson’s blog (Comment #18, February 16, 2026), he wrote:
“I agree with the assessment that your mileage from this paper depends entirely upon how much you’re willing to tolerate the additional demands they are making of the hypothetical hardware.”
“I have various issues with the paper (e.g. they assume the same decoder reaction time but have a much harder decoding problem), but in a sense it’s sort of irrelevant. Even if this paper’s details were all wrong, the overarching point that non-nearest-neighbor-connections are a resource that can be used to substantially reduce qubit overhead would remain correct. The question is only whether the tradeoffs you have to make to get that benefit are worth the cost.”
In his X/Twitter thread, Gidney stated the paper demands “a lot more qubit connectivity” for its headline number. In a New Scientist interview, he added: “These more stringent requirements make it harder to make hardware, and making hardware is already the hardest part.”
I want to be direct about why this matters more than any other technical detail in the paper: the decoder assumption is the load-bearing wall of the entire estimate.
Here is the problem in concrete terms. Both Gidney’s 2025 paper and the Pinnacle Architecture assume a classical decoder reaction time of 10 µs – ten code cycles. For surface codes, this is achievable and demonstrated: the decoding problem reduces to minimum-weight perfect matching on a planar graph, solvable in microseconds on commodity FPGAs. Union-Find decoders do it even faster. These algorithms are well-characterized, with known scaling and proven implementations.
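To show how routine matching-based decoding has become, here is a minimal sketch using the open-source PyMatching library – run on a 5-bit repetition code rather than a full surface code, since the matching problem has the same structure on a smaller graph (the repetition-code check matrix is my illustrative stand-in):

```python
import numpy as np
import pymatching

# Parity checks between neighboring bits of a 5-bit repetition code.
H = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])

matching = pymatching.Matching(H)        # build the matching graph from H

error = np.array([0, 0, 1, 0, 0])        # a single bit-flip on qubit 2
syndrome = H @ error % 2                 # the two adjacent checks fire

correction = matching.decode(syndrome)   # minimum-weight perfect matching
print("syndrome:  ", syndrome)           # [0 1 1 0]
print("correction:", correction)         # [0 0 1 0 0] – recovers the error
```

For surface codes this same decoding primitive runs in microseconds on FPGAs; the point of the paragraphs that follow is that no equivalent primitive exists for Pinnacle's codes.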
For QLDPC codes, the decoding problem is categorically different. The Tanner graph of a generalized bicycle code is non-planar, dense with short cycles, and riddled with “trapping sets” that cause standard belief-propagation (BP) decoders to converge to incorrect solutions. The practical workaround – BP followed by ordered statistics decoding (BP+OSD) – requires matrix inversion with O(n³) worst-case complexity. For a [[510,16,24]] code block with 1,020 physical qubits, that inversion must complete within 10 µs. As one decoder paper put it bluntly: “inverting the matrix of a graph cannot realistically be achieved within the decoherence time of a typical qubit.”
The Pinnacle paper sidesteps this entirely. Its simulations use most-likely-error (MLE) decoding – a computationally optimal but exponentially expensive approach implemented via mixed-integer programming. This gives you the best-case logical error rates the code could achieve, but it is not a real-time algorithm and never will be. The authors explicitly acknowledge that developing a fast enough decoder "is outside the scope of this paper" (Section V.C). They are not hiding this – but the media coverage has buried it. When New Scientist writes that "breaking encryption just got 10 times easier," it is treating the decoder problem as a footnote when it should be the headline.
Aaronson’s endorsement – and the question he’s really asking
Scott Aaronson’s February 15 blog post provided the most visible expert assessment, calling the work “serious” and the claim “entirely plausible” while noting he had not verified the details. His primary concern echoed Gidney’s: QLDPC codes require “wildly nonlocal measurements of the error syndromes,” which is especially problematic for superconducting qubits.
But the most revealing moment in Aaronson’s post was a backstory about responsible communication. The paper’s original title was about “breaking RSA-2048” with 100,000 physical qubits. Aaronson told the authors to change it, warning that journalists would “predictably misinterpret it to mean that they’d already done it.” The concern proved prescient within 48 hours.
What I found most valuable in Aaronson’s commentary, however, was not his endorsement or his caveats – both were predictable from a rigorous theorist. It was his reframing of the timeline question. He was frank about his uncertainty: “I have no idea by how much this shortens the timeline for breaking RSA-2048 on a quantum computer. A few months? Dunno.” But he then offered, in a later comment, the sharpest formulation of why the paper matters strategically: when you “merely” need to scale up qubit count by 1,000× while maintaining their quality, “it becomes important to ask, well, how many years? 3? 4? 5?”
That framing cuts through months of debate. It concedes that a CRQC is years away while insisting that the number of years is now a meaningful and urgent question. And it implicitly acknowledges what I have been arguing since the paper dropped: the resource estimates keep falling, but the engineering gap – Aaronson’s “1,000×” – is measured in physics, not theory. Whether that gap closes in 5 years or 15 will not be determined by architecture papers.
The spacetime volume question is more nuanced than “1.5×”
A common claim in early commentary was that Pinnacle is "only ~1.5× better" than Gidney 2025 in total spacetime cost (qubit-days). The evidence suggests this figure is incorrect, or at best incomplete. A cleaner way to compare Pinnacle to Gidney (2025) is to compute qubit-days (physical qubits × wall-clock days) using the explicit numbers each paper provides. Gidney's 2025 estimate is 897,864 physical qubits and 4.96 days expected runtime (rounded up to "<1 week" for slack), i.e., about 4.45 million qubit-days. Pinnacle's Table VI gives several Pareto points under the same headline assumptions (p = 10⁻³, t_c = 1 µs): 98k qubits for ≤1 month (~2.94M qubit-days if one takes "month" ≈ 30 days), 151k for ≤1 week (~1.06M qubit-days), and 471k for ≤1 day (~0.47M qubit-days). On that basis, Pinnacle ranges from about 1.5× lower spacetime at the minimum-qubit point to about 9-11× lower spacetime in the aggressively parallelized day-scale regime (depending on whether you compare Table VI's 471k/1-day point or the text's "1M qubits in ~10 hours" point).
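The arithmetic is simple enough to check directly; a minimal sketch using only the headline figures quoted above:

```python
# Qubit-days comparison from the papers' headline numbers.
QD_GIDNEY_2025 = 897_864 * 4.96          # ~4.45M qubit-days

pinnacle = {                             # Table VI Pareto points (p = 10⁻³, t_c = 1 µs)
    "minimum qubits (~1 month)": (98_000, 30),
    "balanced (~1 week)":        (151_000, 7),
    "minimum time (~1 day)":     (471_000, 1),
}

for name, (qubits, days) in pinnacle.items():
    qd = qubits * days
    print(f"{name:26s} {qd / 1e6:5.2f}M qubit-days "
          f"({QD_GIDNEY_2025 / qd:4.1f}x below Gidney 2025)")
```

Running this gives ratios of roughly 1.5×, 4.2×, and 9.5× for the three configurations.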
Importantly, none of these ratios are "free": the qubit-days reduction is purchased with stricter demands on QLDPC-scale decoding latency and module connectivity, which the Pinnacle preprint does not yet demonstrate end-to-end.
The practical implication: Pinnacle’s advantage scales with available hardware. With minimal qubits, it is modestly better than Gidney’s approach in total computational work. With more qubits available, its parallelization yields much larger gains. This nuance has been largely lost in media coverage.
What most of the qubits are actually doing
One detail that has been almost entirely absent from the public discussion is what kind of qubits dominate Pinnacle’s 98,000-qubit budget. The answer reshapes how the headline number should be interpreted.
In the ~100k-qubit example shown in Figure 1(a) of the preprint, the layout is labeled MEMORY (~75k qubits), PROCESSING UNIT (~15k qubits), and MAGIC ENGINE (~9k qubits) – so roughly 76% of the physical qubits are allocated to memory rather than active compute. These are QLDPC code blocks that store the large input register (the number being factored) without full processing gadgetry. The actual compute happens on a much smaller set of processing units and magic engines. The architecture works because Shor's algorithm, as compiled through the residue-number-system approach, spends most of its time doing structured lookups against a large but mostly idle register.
This matters for two reasons. First, it means the headline figure is dominated by storage qubits that need only maintain encoded data at low error rates – a less demanding task than active computation. The processing qubits face a much harder job. Second, it means the architecture is fundamentally sensitive to how cheaply you can store quantum data. If QLDPC memory blocks turn out to have higher effective error rates than the simulations predict (due to correlated errors, leakage, or drift over a month-long computation), the entire estimate unravels from its largest component.
The technique that makes the memory architecture work – and that enables Pinnacle’s parallelization advantage – is called Clifford frame cleaning. This is, in my assessment, the paper’s most underappreciated contribution, and it is independent of the QLDPC substrate.
In Pauli-based computation, Clifford gates (like CNOTs) are not physically executed. Instead, they are tracked in classical software by updating a “Clifford frame” that records how the Pauli measurement bases transform under conjugation. This is computationally free – until an entangling Clifford gate links two different processing units. At that point, their frames become entangled, and any subsequent T-gate injection on one unit spreads its measurement support across both. Since each processing unit can perform only one logical measurement per cycle, this “software entanglement” forces sequential execution and destroys parallelism.
Pinnacle’s solution is to physically “clean” the frame after each memory access. When a processing unit reads from shared memory via a controlled operation, the resulting frame entanglement is actively un-computed through an optimized sequence of just 2w Pauli rotations (where w is the window size of the memory accessed). Because memory access relies only on Z-type controls, it bypasses the heavier 4k-rotation cost required for general entanglement. The cost is modest – a handful of logical cycles per memory-access window – but the benefit is decisive: processing units are immediately decoupled and can resume parallel T-gate execution.
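To see why frame entanglement forces sequential execution, consider a toy Pauli-tracking sketch – entirely my own illustration, not code from the paper, with qubit indices and the signless conjugation table as illustrative assumptions:

```python
# Toy illustration of how a tracked entangling Clifford spreads a Pauli
# operator's support across processing units.
# Signless CNOT conjugation rules: X_c -> X_c X_t,  Z_t -> Z_c Z_t.

def cnot_conjugate(pauli: dict[int, str], c: int, t: int) -> dict[int, str]:
    """Propagate a Pauli product (qubit -> letter, signs ignored) through a CNOT."""
    mul_x = {"I": "X", "X": "I", "Z": "Y", "Y": "Z"}  # multiply by X (signless)
    mul_z = {"I": "Z", "Z": "I", "X": "Y", "Y": "X"}  # multiply by Z (signless)
    out = dict(pauli)
    if pauli.get(c, "I") in ("X", "Y"):   # X on the control copies onto the target
        out[t] = mul_x[pauli.get(t, "I")]
    if pauli.get(t, "I") in ("Z", "Y"):   # Z on the target copies onto the control
        out[c] = mul_z[pauli.get(c, "I")]
    return {q: p for q, p in out.items() if p != "I"}

# A logical Z measurement initially supported only on unit A (qubit 0):
frame = {0: "Z"}
# After a tracked CNOT with control on unit B (qubit 1) and target on qubit 0,
# its support spans both units, so they can no longer measure independently:
print(cnot_conjugate(frame, c=1, t=0))   # -> {0: 'Z', 1: 'Z'}
```

Clifford frame cleaning is, in effect, the 2w-rotation sequence that physically un-computes exactly this kind of spread, returning each unit's measurement support to local form.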
This is what allows multiple processing units to share a single input register without duplicating it – and it is the specific mechanism that makes the space-time trade-off curve so favorable. Without Clifford frame cleaning, parallelizing the residue-prime computation would require replicating the memory register for each processing stream, inflating the qubit count by roughly ρ× (where ρ is the parallelization factor). With it, the memory is shared and the qubit overhead for parallelization is sublinear.
Importantly, this technique is portable. It does not depend on QLDPC codes. In principle, Clifford frame cleaning could be applied to surface-code architectures to achieve similar parallelization benefits, albeit at higher absolute qubit counts. If this is correct – and I have not seen it disputed – then part of Pinnacle’s end-to-end advantage is not uniquely QLDPC-dependent, which complicates the “10× better than surface codes” narrative.
How we got here: the accelerating resource reduction
The Pinnacle paper sits at the end of a remarkable trajectory of declining resource estimates for RSA-2048 factoring:
| Year | Paper | Physical qubits | Runtime | Error correction |
|---|---|---|---|---|
| 2012 | Fowler et al. | ~1 billion | Days | Surface code |
| 2019 | Gidney & Ekerå | ~20 million | 8 hours | Surface code + distillation |
| 2021 | Gouzien & Sangouard | 13,436 processor qubits (+ 28M optical memory modes) | 177 days | 3D gauge color codes + multimode memory |
| 2025 | Gidney (Google) | <1 million | <1 week | Surface code + cultivation + yoked codes |
| 2025 | IBM Tour de Gross | N/A (general arch.) | N/A | Bivariate bicycle QLDPC |
| 2026 | Pinnacle (Iceberg) | ~98,000 | ~1 month | Generalized bicycle QLDPC |
Each generation achieved its reduction through distinct innovations. Gidney's May 2025 paper (arXiv:2505.15917) introduced three key advances while maintaining the same conservative hardware assumptions as the 2019 paper: approximate residue arithmetic that avoids storing full 2048-bit numbers; yoked surface codes that triple idle-qubit storage density; and magic state cultivation that grows high-fidelity magic states inside surface code patches far more efficiently than traditional distillation factories. His Toffoli count of ~6.5 × 10⁹ was more than 100× lower than the Chevignard-Fouque-Schrottenloher approach.
Gouzien and Sangouard's 2021 paper achieved the lowest processor qubit count (13,436) but required a multimode quantum memory with 28 million spatial modes in rare-earth-ion-doped solids – a technology that does not yet exist. A 2023 follow-up (Gouzien et al.) targeted 256-bit elliptic curve discrete logarithms (not RSA-2048) using cat qubits and repetition codes, estimating 126,133 cat qubits in 9 hours, but required an extremely demanding single-to-two-photon loss ratio of 10⁻⁵.
IBM’s Tour de Gross (arXiv:2506.03094, June 2025, Yoder et al.) established the bivariate bicycle code framework – specifically the [[144,12,12]] “gross” code encoding 12 logical qubits in 288 physical qubits. While not providing RSA-specific estimates, it demonstrated ~10× qubit efficiency over surface codes for general computation and introduced the Relay-BP decoder achieving sub-480ns decoding on FPGAs. This paper represents IBM’s roadmap toward its Starling processor (200 logical qubits by 2029).
The QLDPC decoding gap remains the hardest open problem
The most technically significant criticism of Pinnacle centers on real-time QLDPC decoding. For surface codes, decoding maps to minimum-weight perfect matching – a well-understood problem solvable in microseconds on FPGAs. For QLDPC codes, the standard approach is belief propagation plus ordered statistics decoding (BP+OSD), where OSD has O(n³) worst-case complexity and involves matrix inversion over graphs with tens of thousands of nodes. As one decoder paper put it: “Even with specialized hardware, inverting the matrix of a graph cannot realistically be achieved within the decoherence time of a typical qubit.”
Recent progress has been rapid but incomplete:
- IBM’s Relay-BP: Achieves <480 ns on the [[144,12,12]] gross code on FPGA hardware – well within Pinnacle’s 10 µs budget. But this is for a distance-12 code; Pinnacle requires distance-24+ codes.
- NVIDIA NVQLink + Quantinuum: Demonstrated the world’s first real-time QLDPC decoding on a live QPU in November 2025, decoding Bring’s code [[30,8,3]] on the Helios processor with 67 µs median latency using BP+OSD on an NVIDIA GH200 GPU.
- GARI-NMS-ensemble: Achieves 273 ns average decoding time for bivariate bicycle codes, sub-microsecond in 99.99% of cases.
- Vegapunk FPGA accelerator: Targets ~1 µs with accuracy comparable to BP+OSD.
It is worth pausing on what these numbers actually tell us. The NVIDIA-Quantinuum demonstration decoded Bring’s code – a [[30,8,3]] code with 30 physical qubits and distance 3 – in 67 µs on a GPU. Pinnacle requires decoding a [[510,16,24]] code with 510+ physical qubits and distance 24. That is 17× more physical qubits and 8× higher distance, with a decoder whose worst-case complexity scales as O(n³). A naive scaling estimate suggests the decoding time could increase by a factor of 4,900× or more – from 67 µs to potentially hundreds of milliseconds. Even optimistic projections using faster algorithms (Relay-BP, GARI-NMS) face the same fundamental challenge: distance-24 codes have exponentially more failure modes than distance-3 codes, and any decoder that works well at small scale may hit error floors or convergence failures at the distances Pinnacle requires.
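Here is that extrapolation made explicit – a deliberately naive sketch that assumes the O(n³) worst case transfers directly from the demonstrated data point:

```python
# Naive cubic extrapolation from the NVIDIA-Quantinuum demonstration.
# Assumption (stated above): BP+OSD worst-case cost scales as O(n^3) in the
# number of physical qubits, anchored to the measured 67 µs on Bring's code.
n_demo, t_demo_us = 30, 67     # [[30,8,3]] on Helios + GH200
n_target = 510                 # [[510,16,24]] block Pinnacle requires

factor = (n_target / n_demo) ** 3
print(f"cubic scaling factor: {factor:,.0f}x")                    # ~4,913x
print(f"extrapolated latency: {t_demo_us * factor / 1e6:.2f} s")  # ~0.33 s per round
```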
The crucial unknown is whether these approaches scale to the larger codes (distance 16-24) that Pinnacle requires. IBM’s Relay-BP results are for a 144-qubit code; Pinnacle’s [[510,16,24]] code is 3.5× larger with nearly double the distance. Decoding is the single biggest gap – what my CRQC Capability Framework classifies as Capability D.2: Decoder Performance. For surface codes, D.2 is at TRL 5 with FPGA/ASIC solutions approaching the required throughput. For the QLDPC codes Pinnacle requires, it drops to something closer to TRL 3 – the paper’s error rate simulations use near-optimal decoding that is not itself a real-time algorithm.
There is a deeper issue here that I think the decoder research community has not yet fully confronted. Pinnacle’s architecture does not need fast decoding for one code block in isolation – it needs it for dozens of code blocks operating simultaneously, each generating syndrome data every microsecond, all feeding into a classical control system that must issue correlated feedback across the entire machine. The aggregate classical throughput requirement – potentially tens of gigabytes of syndrome data per second, processed with sub-10µs latency – is a systems engineering challenge as much as an algorithmic one. No current decoder demonstration has even attempted this regime.
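A back-of-envelope calculation makes the regime concrete. Every parameter below is my illustrative assumption, not a figure from the Pinnacle paper:

```python
# Aggregate syndrome throughput, order-of-magnitude only.
PHYS_QUBITS    = 100_000   # headline machine size
CHECK_FRACTION = 0.5       # assume roughly half the qubits produce syndrome bits
CYCLE_S        = 1e-6      # 1 µs code cycle (the paper's headline assumption)
FRAMING        = 3         # assumed multiplier for timestamps/routing metadata

raw_bytes_per_s = PHYS_QUBITS * CHECK_FRACTION / CYCLE_S / 8
print(f"raw syndrome stream: {raw_bytes_per_s / 1e9:.1f} GB/s")           # ~6.3 GB/s
print(f"with {FRAMING}x framing:     {raw_bytes_per_s * FRAMING / 1e9:.1f} GB/s")  # ~19 GB/s
```

Even this crude estimate lands in the tens of gigabytes per second, all of which must be consumed with sub-10 µs latency – continuously, for a month.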
The “shift the difficulty” pattern
The Pinnacle paper exemplifies a recurring phenomenon in quantum resource estimation: each generation of papers reduces one metric (qubit count) while increasing demands elsewhere. This is not a criticism – it reflects genuine progress – but it demands clear-eyed accounting of what has actually gotten easier versus what has merely been relocated.
Pinnacle’s specific difficulty shifts include:
- non-local qubit connectivity (generalized bicycle codes require bounded but non-local connections, fundamentally more demanding than surface codes’ nearest-neighbor 2D grid – this is Capability B.4 in my CRQC framework, currently at TRL 3-4);
- month-long sustained operation (the 98k configuration requires continuous fault-tolerant operation for weeks – an extraordinary demand that maps to Capability D.3: Continuous Operation in my CRQC framework, currently at TRL 1-2, with the longest demonstrated stability runs measured in hours, not weeks);
- fast QLDPC decoding (assuming 10 µs reaction time for a problem much harder than surface code decoding); and
- undemonstrated code families (the largest QLDPC code demonstrated on hardware is Bring’s [[30,8,3]] – Pinnacle requires codes ~17× larger at much higher distance).
I want to state a view on this directly: I believe the “shift the difficulty” pattern is not a bug in the research process – it is the research process. Every generation of resource-estimation papers should explore different trade-off surfaces. Gidney’s genius in 2019 and 2025 was to push the arithmetic as far as possible within the surface-code paradigm. Pinnacle’s contribution is to ask: what if we relax locality constraints and accept different engineering demands? Both approaches are valuable. But readers – especially CISOs and board-level decision-makers – need to understand that a 10× reduction in one metric does not mean the overall problem got 10× easier. It means the shape of the problem changed.
The most honest way to think about Pinnacle’s 98,000-qubit number is as the output of a constrained optimization: minimize physical qubits, subject to the constraints of Shor’s algorithm, QLDPC encoding rates, and a specific noise model. The result tells you the theoretical floor for that specific set of assumptions. It does not tell you how hard it is to build the machine – because the difficulty of building the machine lives in the constraints, not the objective function.
Previous papers exhibited analogous shifts. Gouzien and Sangouard 2021 reduced processor qubits to 13,436 but required 28 million spatial modes of exotic quantum memory. Chevignard et al. 2024 reduced logical qubits to ~1,700 but needed ~2 × 10¹² Toffoli gates and ~40 repeated runs. The Chinese team (Yan et al. 2023) claimed 372 qubits via a hybrid approach that Aaronson characterized as “the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates).” Each represented real theoretical insight wrapped in engineering requirements that remain years from realization.
For readers who want to track these requirements systematically, my CRQC Quantum Capability Framework maps the nine interdependent capabilities – from quantum error correction and below-threshold scaling to real-time decoding and continuous multi-day operation – that must all converge for a cryptographically relevant quantum computer to exist. Pinnacle’s difficulty shifts land squarely on the capabilities that are currently least mature: qubit connectivity (TRL 3-4), continuous operation (TRL 1-2), and QLDPC-scale decoder performance that goes well beyond the surface-code decoders currently at TRL 5.
How far is the hardware?
Current quantum systems are roughly 100-1000× short of Pinnacle’s requirements in qubit count, though error rates are already in range:
| Platform | Current best qubits | Best 2Q fidelity | Gap to 100k |
|---|---|---|---|
| IBM (superconducting) | 156 (Heron r3) | 99.75% median | ~640× |
| Google (superconducting) | 105 (Willow) | 99.67% average | ~950× |
| Quantinuum (trapped ion) | 98 (Helios) | 99.92% | ~1,020× |
| Neutral atoms (research) | ~448 computing | 99.71% (specific pairs) | ~225× |
The physical error rate of 10⁻³ that Pinnacle assumes is already achieved by multiple platforms. IBM’s Kookaburra processor, expected in 2026, will be the first QLDPC-native module with 1,386 qubits per chip (4,158 in 3-chip systems). IBM’s Starling, targeting 2029, aims for ~10,000 physical qubits and 200 logical qubits. Reaching 100,000 physical qubits with QLDPC-grade connectivity likely requires the early 2030s at the earliest. For a structured assessment of each engineering bottleneck on that path – from below-threshold scaling to manufacturing and cryogenic infrastructure – see my CRQC Quantum Capability Framework, which tracks nine capabilities across Technology Readiness Levels.
Aaronson captured the scale challenge precisely: "State-of-the-art systems right now have ~100 physical qubits, or a few hundred at most. Some companies advertise systems with thousands of physical qubits, but then they can't control them as well. The new paper assumes that you can control the 100,000 qubits about as well as they're currently controlled in the 100-qubit systems."
Media coverage ranged from careful to misleading
The Quantum Insider and Quantum Computing Report both provided balanced coverage, noting the preprint's unreviewed status and simulation-only basis. I previously published what I intended to be the most thorough independent analysis, titled "No, the 'Pinnacle Architecture' Is Not Bringing Q-Day Closer 2-5 Years (but It Is Credible Research)." The Quantum Pirates newsletter offered sharp skepticism: Iceberg's partner claims of "3-5 years" to hardware realization were characterized as "marketing-grade timelines – this is where my eyebrow tries to file for asylum."
New Scientist’s coverage was the most problematic, with its headline “Breaking encryption with a quantum computer just got 10 times easier” implying imminent practical impact. On social media, claims that Pinnacle “brings Q-Day 2-5 years closer” circulated without authoritative sourcing.
Iceberg Quantum: the ARM of quantum computing?
Iceberg Quantum is a quantum architecture company (software and IP, not hardware) founded in mid-2024 in Sydney by three University of Sydney PhD colleagues: Felix Thomsen (CEO), Lawrence Z. Cohen (CSO), and Samuel C. Smith (CTO). Their advisor is Professor Stephen Bartlett, a leading quantum error correction researcher. Paul Webster, the Pinnacle paper's lead author, joined after the company was founded.
The company raised a $2 million pre-seed in March 2025 (led by Blackbird, with LocalGlobe) and a $6 million seed round in February 2026 (led by LocalGlobe, with Blackbird and DCVC), for roughly $8 million total. DCVC, whose portfolio also includes IonQ, Atom Computing, and Q-CTRL, described Iceberg as “aspiring to be the ARM of the quantum computing world.” Prineha Narang of DCVC stated: “The path to FTQC needs exactly the type of innovations we’ve seen from the Iceberg team.”
Iceberg has announced partnerships with PsiQuantum (photonics), Diraq (spin qubits), IonQ (trapped ions), and Oxford Ionics (trapped ions). Andre Saraiva, Diraq’s Head of Theory, stated: “Iceberg’s advances in qLDPC-based architectures will bring forward utility-scale applications on our devices by years.” The company does not build quantum computers; it designs fault-tolerant architectures for hardware partners to implement.
Cohen, in a New Scientist interview, offered a notably aggressive framing: “I think it’s important to never be conservative about the timelines of when things like this happen. There would be big consequences for someone breaking RSA, and it’s always much, much better to be wrong because it could happen sooner rather than later.”
The Pinnacle paper’s publication also reignited a recurring debate in the quantum security community: should highly optimized decryption architectures be published openly? Some engineering-oriented practitioners have questioned the wisdom of open-sourcing what amounts to increasingly detailed blueprints for breaking deployed cryptography. The cryptographic community’s answer has been consistent and, in my view, correct: open publication is not only defensible but essential. Under Kerckhoffs’s principle – the foundational doctrine that a cryptographic system’s security should depend on key secrecy, not on the obscurity of its design – transparent resource estimates serve as the alarm bells that catalyze migration. The alternative – classified estimates circulating only among nation-state actors while the commercial world remains complacent – is strictly worse for defenders. Pinnacle’s value to the security community is not that it helps attackers build a CRQC; it is that it gives defenders an increasingly precise picture of what they are racing against.
What this actually means for your PQC migration timeline
The Pinnacle paper has not caused any government agency to revise its quantum threat or migration timeline. NIST finalized three post-quantum cryptography standards in August 2024 (FIPS 203/ML-KEM, FIPS 204/ML-DSA, FIPS 205/SLH-DSA) and selected HQC as a backup KEM in March 2025. NIST’s IR 8547 sets deprecation of quantum-vulnerable algorithms by 2030 and disallowance by 2035. The UK’s NCSC published a three-phase migration roadmap targeting completion by 2035. NSA’s CNSA 2.0 requires new national security system acquisitions to be compliant by January 2027.
But I want to flag something that even technically informed organizations frequently get wrong: elliptic curve cryptography may fall before RSA, and most migration plans are not structured around this reality.
Aaronson raised this point in his blog post, and it deserves amplification. Shor’s algorithm attacks the mathematical structure underlying both RSA and ECC, but the resource requirements scale with key size. RSA-2048 requires factoring a 2048-bit integer. ECDSA over P-256 – the curve that underpins the vast majority of TLS handshakes, code signing, and digital authentication today – requires computing a discrete logarithm on a 256-bit curve. The quantum circuit for ECC is smaller. The qubit count is lower. The runtime is shorter.
This is not an abstract concern for any industry. ECDSA P-256 is embedded across the entire digital trust stack – in TLS certificates protecting data in transit, in HSM key hierarchies managing secrets at rest, in the digital signatures that validate software and firmware updates, in authentication protocols for IoT devices and operational technology, and in the identity infrastructure that governs access to enterprise systems and cloud services. If a CRQC arrives that can handle 256-bit elliptic curves but not yet 2048-bit RSA, it is these ECC-dependent systems – not RSA key exchange – that face the most immediate exposure.
The practical implication: migration plans that treat RSA deprecation as the primary deliverable and defer ECC migration are optimizing against the wrong threat vector. Organizations should be running dual-track cryptographic inventories, and ECC-dependent systems with long data-retention requirements or harvest-now-decrypt-later exposure should be prioritized alongside – or ahead of – RSA systems.
Conclusion
The Pinnacle Architecture represents genuine scientific progress in an accelerating field. Its core contribution – demonstrating that QLDPC codes can yield an order-of-magnitude qubit reduction for Shor’s algorithm – is acknowledged as valid by the very researcher (Gidney) whose record it claims to beat. The progression from one billion qubits (2012) to 20 million (2019) to under one million (2025) to under 100,000 (2026) is real and consequential.
But three observations cut through the hype:
- First, the 98,000-qubit headline is the best case for a single hardware platform (superconducting, 99.9% fidelity, 1 µs cycles) running for a full month; for trapped ions or neutral atoms, the numbers inflate to millions.
- Second, every qubit saved is paid for elsewhere – in connectivity complexity, decoder speed requirements, and operational stability demands that have never been demonstrated at any scale.
- Third, as Gidney noted with characteristic precision, the paper does not change a fundamental reality: “making hardware is already the hardest part.”
The most important number in this paper is not 98,000 qubits. It is not even the 10× improvement over Gidney. It is the rate of improvement: from 20 million (2019) to one million (2025) to under 100,000 (2026). Each generation took less time and relied on fewer exotic assumptions than the last. QLDPC codes were a theoretical curiosity five years ago; today they are the basis of IBM’s hardware roadmap and Google’s research agenda. Fast QLDPC decoders were an unsolved problem two years ago; today they are being demonstrated on real hardware, albeit at toy scale.
I have said this before and I will say it again: the question is not whether a CRQC will be built. The question is whether your organization will have completed its PQC migration before one arrives. Pinnacle does not change the answer to the first question in any material way. But it narrows the margin of error on the second – and for organizations that have not yet started their cryptographic inventory, that margin was already uncomfortably thin.
Organizations should treat the paper as further confirmation that PQC migration cannot wait, not as evidence that Q-Day has drawn closer. For those who want to track how the gap between current hardware and CRQC requirements is actually evolving – capability by capability, not headline by headline – I maintain a CRQC Quantum Capability Framework and Q-Day Estimator designed for exactly that purpose.
Quantum Upside & Quantum Risk – Handled
My company – Applied Quantum – helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.
