
Pinnacle Architecture: 100,000 Qubits to Break RSA-2048, but at What Cost?

Introduction

Iceberg Quantum’s Pinnacle Architecture paper claims RSA-2048 can be factored with fewer than 100,000 physical qubits – a genuine 10× reduction over the previous state of the art – by replacing surface codes with quantum LDPC codes. The result is credible but shifts difficulty from qubit count to equally daunting engineering challenges: non-local connectivity, fast QLDPC decoding, and month-long sustained fault-tolerant operation. In my paper announcement post I argued that despite hyped-up headlines, the paper does not materially change when a cryptographically relevant quantum computer might arrive. Since then, a number of major experts have weighed in and agree that the core insight is valid while flagging that the hard part remains building the hardware. No government agency has revised any cryptographic timeline in response.

But since the hyped-up headlines, even from outlets like New Scientist, are not abating, let’s dig into what the Pinnacle Architecture paper actually achieved.

The paper and its claims in detail

The preprint (“The Pinnacle Architecture: Reducing the cost of breaking RSA-2048 to 100,000 physical qubits using quantum LDPC codes,” arXiv:2602.11457, February 12, 2026) comes from eight researchers at Iceberg Quantum in Sydney: Paul Webster, Lucas Berent, Omprakash Chandra, Evan T. Hockings, Nouédyn Baspin, Felix Thomsen, Samuel C. Smith, and Lawrence Z. Cohen.

It has not been peer-reviewed.

The headline resource estimates assume superconducting qubits with p = 10⁻³ physical error rate, 1 µs code cycle time, and 10 µs classical reaction time:

Configuration   | Physical qubits | Runtime
Minimum qubits  | ~98,000         | ~1 month
Balanced        | ~151,000        | ~1 week
Minimum time    | ~471,000        | ~1 day

For slower hardware platforms typical of trapped ions (1 ms cycle, p = 10⁻⁴), the paper estimates 3.1 million physical qubits for a one-month factoring run. For neutral atoms (1 ms cycle, p = 10⁻³), it balloons to 13 million qubits.

The architecture uses generalized bicycle (GB) QLDPC codes – specifically instances including [[254,14,16]], which is a known valid code in the literature, and [[510,16,24]], which I could not independently verify from external sources. The algorithm is Shor’s (not Regev’s), building directly on Gidney’s May 2025 implementation using approximate residue number system arithmetic. Pinnacle extends this with a parallelization scheme that processes multiple residue primes simultaneously across separate processing units while sharing the input register via read-only memory access. A novel “magic engine” module performs magic state distillation and injection in a single code block, drawing on fold-transversal cultivation techniques from Sahay et al. (arXiv:2509.05212).
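To make the residue number system idea concrete, here is a minimal classical sketch in Python. It is illustrative only: the Gidney/Pinnacle constructions perform approximate RNS arithmetic inside quantum registers, but even the classical version shows why the work splits naturally across independent residue primes, each of which can be handled by its own processing unit.

```python
# Minimal classical sketch of residue-number-system (RNS) arithmetic.
# Illustrative only: it is NOT the paper's construction, just the classical
# idea that arithmetic decomposes into independent per-prime pieces.
from math import prod

def to_rns(x, moduli):
    """Represent x by its residues modulo pairwise-coprime moduli."""
    return [x % m for m in moduli]

def rns_mul(a_res, b_res, moduli):
    """Multiply two RNS numbers: each residue is processed independently,
    which is what makes per-prime parallelization possible."""
    return [(a * b) % m for a, b, m in zip(a_res, b_res, moduli)]

def from_rns(residues, moduli):
    """Recover the integer via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m) is the modular inverse
    return x % M

moduli = [251, 241, 239, 233]          # small pairwise-coprime "residue primes"
a, b = 123_456, 789_012
c_res = rns_mul(to_rns(a, moduli), to_rns(b, moduli), moduli)
assert from_rns(c_res, moduli) == (a * b) % prod(moduli)
```

In the quantum setting the residues live in separate registers, which is why the paper’s processing units can each work on a different prime in parallel while only reading the shared input register.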

Craig Gidney’s measured criticism

Gidney – the Google Quantum AI researcher whose May 2025 paper established the previous benchmark of fewer than one million qubits – offered the most technically pointed assessment. On Scott Aaronson’s blog (Comment #18, February 16, 2026), he wrote:

“I agree with the assessment that your mileage from this paper depends entirely upon how much you’re willing to tolerate the additional demands they are making of the hypothetical hardware.”

“I have various issues with the paper (e.g. they assume the same decoder reaction time but have a much harder decoding problem), but in a sense it’s sort of irrelevant. Even if this paper’s details were all wrong, the overarching point that non-nearest-neighbor-connections are a resource that can be used to substantially reduce qubit overhead would remain correct. The question is only whether the tradeoffs you have to make to get that benefit are worth the cost.”

In his X/Twitter thread, Gidney stated the paper demands “a lot more qubit connectivity” for its headline number. In a New Scientist interview, he added: “These more stringent requirements make it harder to make hardware, and making hardware is already the hardest part.”

The decoder issue Gidney raised deserves emphasis. The Pinnacle paper assumes a 10 µs reaction time – identical to Gidney’s surface-code estimate – but QLDPC decoding is fundamentally harder. Surface codes map to graph-matching problems solvable at MHz speeds on FPGAs, while QLDPC codes require solving hypergraph matching with no known fast exact decoder. The paper’s simulations use most-likely-error decoding (effectively maximum-likelihood), and the authors explicitly acknowledge that building a fast enough real-time decoder is out of scope.

Aaronson’s endorsement with caveats

Scott Aaronson’s February 15, 2026 blog post provided the most visible expert assessment. His core evaluation: “Yes, this is serious work. The claim seems entirely plausible to me, although it would be an understatement to say that I haven’t verified the details.” His primary concern was practical: “LDPC codes are harder to engineer than the surface code (especially for superconducting qubits, less so for trapped-ion), because you need wildly nonlocal measurements of the error syndromes.”

Aaronson revealed a telling backstory about responsible communication: “In the acknowledgments of the paper, I’m thanked for ‘thoughtful feedback on the title.’ Indeed, their original title was about ‘breaking RSA-2048’ with 100,000 physical qubits. When they sent me a draft, I pointed out to them that they need to change it, since journalists would predictably misinterpret it to mean that they’d already done it.” This concern proved prescient – New Scientist published under the headline “Breaking encryption with a quantum computer just got 10 times easier.” Others did too. And I am getting pinged daily by my clients and my network.

On the timeline question, Aaronson was frank: “I have no idea by how much this shortens the timeline for breaking RSA-2048 on a quantum computer. A few months? Dunno. I, for one, had already ‘baked in’ the assumption that further improvements were surely possible by using better error-correcting codes.” In a later comment, he offered the sharpest framing of why the paper matters: “Fault-tolerant QC being ‘years away’ is nothing new. What’s new is that, when you ‘merely’ need to scale up the number of qubits by 1000× while maintaining their quality, it becomes important to ask, well, how many years? 3? 4? 5?”

The spacetime volume question is more nuanced than “1.5×”

A common claim in early commentary was that Pinnacle is “only ~1.5× better” than Gidney 2025 in total spacetime cost (qubit-days). The evidence suggests this figure is incorrect or at best incomplete. The paper’s data shows two distinct regimes. At the minimum-qubit configuration (~98k qubits, one month), the spacetime volume is roughly half of Gidney’s estimate – about 2× better. But at the parallelized 471k-qubit configuration (one day), it drops to just 6% of Gidney’s spacetime cost, representing roughly a 17× improvement. This dramatic difference stems from Pinnacle’s parallelization scheme, which distributes computation across multiple processing units sharing a common input register.
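A rough back-of-the-envelope check reproduces both regimes from simple qubit-days arithmetic. The baseline below is my own rounded assumption (Gidney 2025 at roughly one million qubits for roughly one week), so the exact ratios will shift with whichever baseline figures you plug in.

```python
# Back-of-the-envelope qubit-days comparison. The baseline is an assumed
# rounding of Gidney 2025 (~1,000,000 qubits for ~7 days), not a paper figure.
baseline_qubit_days = 1_000_000 * 7            # ~7.0 million qubit-days

configs = {
    "Pinnacle minimum-qubit (~98k qubits, ~30 days)": 98_000 * 30,
    "Pinnacle parallelized (~471k qubits, ~1 day)":   471_000 * 1,
}

for name, qubit_days in configs.items():
    ratio = qubit_days / baseline_qubit_days
    print(f"{name}: {qubit_days / 1e6:.2f}M qubit-days, "
          f"{ratio:.0%} of the baseline (~{1 / ratio:.0f}x better)")
```

With these rounded inputs the minimum-qubit configuration comes out at a bit under half the baseline spacetime volume, and the parallelized configuration at a few percent of it, close to, though not exactly matching, the figures quoted above.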

The practical implication: Pinnacle’s advantage scales with available hardware. With minimal qubits, it is modestly better than Gidney’s approach in total computational work. With more qubits available, its parallelization yields much larger gains. This nuance has been largely lost in media coverage.

How we got here: the accelerating resource reduction

The Pinnacle paper sits at the end of a remarkable trajectory of declining resource estimates for RSA-2048 factoring:

Year | Paper               | Physical qubits                       | Runtime  | Error correction
2012 | Fowler et al.       | ~1 billion                            | Days     | Surface code
2019 | Gidney & Ekerå      | ~20 million                           | 8 hours  | Surface code + distillation
2021 | Gouzien & Sangouard | 13,436 processor + ~28M memory modes  | 177 days | 3D gauge color codes
2025 | Gidney (Google)     | <1 million                            | <1 week  | Surface code + cultivation + yoked codes
2025 | IBM Tour de Gross   | N/A (general arch.)                   | N/A      | Bivariate bicycle QLDPC
2026 | Pinnacle (Iceberg)  | ~98,000                               | ~1 month | Generalized bicycle QLDPC

Each generation achieved its reduction through distinct innovations. Gidney’s May 2025 paper (arXiv:2505.15917) introduced three key advances while maintaining the same conservative hardware assumptions as the 2019 paper: approximate residue arithmetic that avoids storing full 2048-bit numbers; yoked surface codes that triple idle-qubit storage density; and magic state cultivation that grows high-fidelity magic states inside surface code patches far more efficiently than traditional distillation factories. His Toffoli count of ~6.5 × 10⁹ was more than 100× lower than that of the Chevignard-Fouque-Schrottenloher approach.

Gouzien and Sangouard’s 2021 paper achieved the lowest processor qubit count (13,436) but required a multimode quantum memory with 28 million spatial modes in rare-earth-ion-doped solids – a technology that does not yet exist. Their Gouzien et al. 2023 follow-up targeted 256-bit elliptic curve discrete logarithms (not RSA-2048) using cat qubits and repetition codes, estimating 126,133 cat qubits in 9 hours, but required an extremely demanding single-to-two-photon loss ratio of 10⁻⁵.

IBM’s Tour de Gross (arXiv:2506.03094, June 2025, Yoder et al.) established the bivariate bicycle code framework – specifically the [[144,12,12]] “gross” code encoding 12 logical qubits in 288 physical qubits. While not providing RSA-specific estimates, it demonstrated ~10× qubit efficiency over surface codes for general computation and introduced the Relay-BP decoder achieving sub-480ns decoding on FPGAs. This paper represents IBM’s roadmap toward its Starling processor (200 logical qubits by 2029).

The QLDPC decoding gap remains the hardest open problem

The most technically significant criticism of Pinnacle centers on real-time QLDPC decoding. For surface codes, decoding maps to minimum-weight perfect matching – a well-understood problem solvable in microseconds on FPGAs. For QLDPC codes, the standard approach is belief propagation plus ordered statistics decoding (BP+OSD), where OSD has O(n³) worst-case complexity and involves matrix inversion over graphs with tens of thousands of nodes. As one decoder paper put it: “Even with specialized hardware, inverting the matrix of a graph cannot realistically be achieved within the decoherence time of a typical qubit.”
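To see what a real-time decoder is up against, here is a deliberately tiny, purely illustrative sketch of the underlying problem: given a parity-check matrix and a measured syndrome, find a low-weight error consistent with it. The brute-force search below is exponential; surface codes escape this via a reduction to graph matching, while general QLDPC codes rely on heuristics such as BP+OSD. The matrix shown is a made-up toy, not any of the codes discussed here.

```python
# Toy illustration of the generic decoding problem a real-time decoder must
# solve: given parity-check matrix H and measured syndrome s, find a
# low-weight error e with H @ e = s (mod 2). Brute force is exponential in n.
from itertools import combinations
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # a tiny, hypothetical check matrix,
              [0, 1, 1, 0, 1, 0],      # NOT one of the codes in the paper
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def decode_min_weight(H, syndrome):
    """Exhaustively search for the lowest-weight error matching the syndrome."""
    n = H.shape[1]
    for weight in range(n + 1):
        for support in combinations(range(n), weight):
            e = np.zeros(n, dtype=np.uint8)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, syndrome):
                return e
    return None

true_error = np.array([0, 1, 0, 0, 0, 0], dtype=np.uint8)
syndrome = H @ true_error % 2
print(decode_min_weight(H, syndrome))   # recovers a weight-1 correction
```

Real decoders replace this search with structured approximations; the open question is whether those approximations stay both accurate and fast enough at the code sizes Pinnacle needs.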

Recent progress has been rapid but incomplete:

  • IBM’s Relay-BP: Achieves <480 ns on the [[144,12,12]] gross code on FPGA hardware – well within Pinnacle’s 10 µs budget. But this is for a distance-12 code; Pinnacle requires distance-24+ codes.
  • NVIDIA NVQLink + Quantinuum: Demonstrated the world’s first real-time QLDPC decoding on a live QPU in November 2025, decoding Bring’s code [[30,8,3]] on the Helios processor with 67 µs median latency using BP+OSD on an NVIDIA GH200 GPU.
  • GARI-NMS-ensemble: Achieves 273 ns average decoding time for bivariate bicycle codes, sub-microsecond in 99.99% of cases.
  • Vegapunk FPGA accelerator: Targets ~1 µs with accuracy comparable to BP+OSD.

The crucial unknown is whether these approaches scale to the larger codes (distance 16-24) that Pinnacle requires. IBM’s Relay-BP results are for a 144-qubit code; Pinnacle’s [[510,16,24]] code is 3.5× larger with nearly double the distance. Decoding is the single biggest gap – the paper’s error rate simulations use near-optimal decoding that is not itself a real-time algorithm.

The “shift the difficulty” pattern

The Pinnacle paper exemplifies a recurring phenomenon in quantum resource estimation: each generation of papers reduces one metric (qubit count) while increasing demands elsewhere. This is not a criticism – it reflects genuine progress – but it demands clear-eyed accounting of what has actually gotten easier versus what has merely been relocated.

Pinnacle’s specific difficulty shifts include:

  • non-local qubit connectivity (generalized bicycle codes require bounded but non-local connections, fundamentally more demanding than surface codes’ nearest-neighbor 2D grid);
  • month-long sustained operation (the 98k configuration requires continuous fault-tolerant operation for weeks, which is an extraordinary demand for end-to-end stability);
  • fast QLDPC decoding (assuming 10 µs reaction time for a problem much harder than surface code decoding); and
  • undemonstrated code families (the largest QLDPC code demonstrated on hardware is Bring’s [[30,8,3]] – Pinnacle requires codes ~17× larger at much higher distance).

Previous papers exhibited analogous shifts. Gouzien and Sangouard 2021 reduced processor qubits to 13,436 but required 28 million spatial modes of exotic quantum memory. Chevignard et al. 2024 reduced logical qubits to ~1,700 but needed ~2 × 10¹² Toffoli gates and ~40 repeated runs. The Chinese team (Yan et al. 2023) claimed 372 qubits via a hybrid approach that Aaronson characterized as “the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates).” Each represented real theoretical insight wrapped in engineering requirements that remain years from realization.

How far is the hardware?

Current quantum systems are roughly 100-1000× short of Pinnacle’s requirements in qubit count, though error rates are already in range:

Platform                 | Current best qubits | Best 2Q fidelity         | Gap to 100k
IBM (superconducting)    | 156 (Heron r3)      | 99.75% median            | ~640×
Google (superconducting) | 105 (Willow)        | 99.67% average           | ~950×
Quantinuum (trapped ion) | 98 (Helios)         | 99.92%                   | ~1,000×
Neutral atoms (research) | ~448 computing      | 99.71% (specific pairs)  | ~225×

The physical error rate of 10⁻³ that Pinnacle assumes is already achieved by multiple platforms. IBM’s Kookaburra processor, expected in 2026, will be the first QLDPC-native module with 1,386 qubits per chip (4,158 in 3-chip systems). IBM’s Starling, targeting 2029, aims for ~10,000 physical qubits and 200 logical qubits. Reaching 100,000 physical qubits with QLDPC-grade connectivity likely requires the early 2030s at the earliest.

Aaronson captured the scale challenge precisely: “State-of-the-art systems right now have ~100 physical qubits, or a few hundred at most. Some companies advertise systems with thousands of physical qubits, but then they can’t control them as well. The new paper assumes that you can control the 100,000 qubits about as well as they’re currently controlled in the 100-qubit systems.”

Media coverage ranged from careful to misleading

The Quantum Insider and Quantum Computing Report both provided balanced coverage, noting the preprint’s unreviewed status and simulation-only basis. I previously published what I intended to be the most thorough independent analysis, titled “No, the ‘Pinnacle Architecture’ Is Not Bringing Q-Day Closer 2-5 Years (but It Is Credible Research).” The Quantum Pirates newsletter offered sharp skepticism: Iceberg’s partner claims of “3-5 years” to hardware realization were characterized as “marketing-grade timelines – this is where my eyebrow tries to file for asylum.”

New Scientist’s coverage was the most problematic, with its headline “Breaking encryption with a quantum computer just got 10 times easier” implying imminent practical impact. On social media, claims that Pinnacle “brings Q-Day 2-5 years closer” circulated without authoritative sourcing.

Iceberg Quantum: the ARM of quantum computing?

Iceberg Quantum is a quantum architecture company (software and IP, not hardware) founded in 2024-2025 in Sydney by three University of Sydney PhD colleagues: Felix Thomsen (CEO), Lawrence Z. Cohen (CSO), and Samuel C. Smith (CTO). Their advisor is Professor Stephen Bartlett, a leading quantum error correction researcher. Paul Webster, the Pinnacle paper’s lead author, was hired after the company was founded.

The company raised a $2 million pre-seed in March 2025 (led by Blackbird, with LocalGlobe) and a $6 million seed round in February 2026 (led by LocalGlobe, with Blackbird and DCVC), for roughly $8 million total. DCVC, whose portfolio also includes IonQ, Atom Computing, and Q-CTRL, described Iceberg as “aspiring to be the ARM of the quantum computing world.” Prineha Narang of DCVC stated: “The path to FTQC needs exactly the type of innovations we’ve seen from the Iceberg team.”

Iceberg has announced partnerships with PsiQuantum (photonics), Diraq (spin qubits), IonQ (trapped ions), and Oxford Ionics (trapped ions). Andre Saraiva, Diraq’s Head of Theory, stated: “Iceberg’s advances in qLDPC-based architectures will bring forward utility-scale applications on our devices by years.” The company does not build quantum computers; it designs fault-tolerant architectures for hardware partners to implement.

Cohen, in a New Scientist interview, offered a notably aggressive framing: “I think it’s important to never be conservative about the timelines of when things like this happen. There would be big consequences for someone breaking RSA, and it’s always much, much better to be wrong because it could happen sooner rather than later.”

PQC migration: urgency confirmed, timelines unchanged

The Pinnacle paper has not caused any government agency to revise its quantum threat or migration timeline. NIST finalized three post-quantum cryptography standards in August 2024 (FIPS 203/ML-KEM, FIPS 204/ML-DSA, FIPS 205/SLH-DSA) and selected HQC as a backup KEM in March 2025. NIST’s IR 8547 sets deprecation of quantum-vulnerable algorithms by 2030 and disallowance by 2035. The UK’s NCSC published a three-phase migration roadmap targeting completion by 2035. NSA’s CNSA 2.0 requires new national security system acquisitions to be compliant by January 2027.

Conclusion

The Pinnacle Architecture represents genuine scientific progress in an accelerating field. Its core contribution – demonstrating that QLDPC codes can yield an order-of-magnitude qubit reduction for Shor’s algorithm – is acknowledged as valid by the very researcher (Gidney) whose record it claims to beat. The progression from one billion qubits (2012) to 20 million (2019) to under one million (2025) to under 100,000 (2026) is real and consequential.

But three observations cut through the hype:

  • First, the 98,000-qubit headline is the best case for a single hardware platform (superconducting, 99.9% fidelity, 1 µs cycles) running for a full month; for trapped ions or neutral atoms, the numbers inflate to millions.
  • Second, every qubit saved is paid for elsewhere – in connectivity complexity, decoder speed requirements, and operational stability demands that have never been demonstrated at any scale.
  • Third, as Gidney noted with characteristic precision, the paper does not change a fundamental reality: “making hardware is already the hardest part.”

The most important number in this paper may not be 98,000. It may be Aaronson’s “1000×” – the gap between current quantum systems and what Pinnacle requires. Whether that gap closes in 5 years or 15 will determine whether this paper is remembered as a roadmap or a milestone, and that answer lies not in architecture papers but in the physics of scaled-up qubit fabrication. Organizations should treat the paper as further confirmation that PQC migration cannot wait, not as evidence that Q-Day has drawn closer.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.