Quantum Low-Density Parity-Check (qLDPC) Codes
Note: This article is an updated and expanded version of my earlier deep-dive, Quantum LDPC & Cluster States, which introduced the theoretical foundations of qLDPC codes and their relationship to measurement-based quantum computing. Since that piece was published in 2023, the field has moved at extraordinary speed – from IBM’s “Tour de Gross” architecture laying out one of the most detailed end-to-end fault-tolerant blueprints built on qLDPC codes [Yoder et al., arXiv:2506.03094, 2025], to Iceberg Quantum’s Pinnacle Architecture claim that RSA-2048 can be broken with fewer than 100,000 physical qubits using generalized bicycle codes [Webster et al., arXiv:2602.11457, 2026]. This article goes deeper into the technical underpinnings that make those claims possible.
Quantum Low-Density Parity-Check (qLDPC) codes are an emerging class of quantum error-correcting codes that promise to significantly reduce the overhead required for fault-tolerant quantum computing. Much like their classical LDPC counterparts, qLDPC codes are defined by sparse parity-check constraints: each check (stabilizer) acts on only a small number of qubits, and each qubit participates in only a few checks. This sparse structure can enable more efficient error syndrome extraction and potentially higher error correction performance than traditional approaches like the surface code.
In this article, we dive deep into what qLDPC codes are, their origins and theory, how they work, how they differ from surface codes and other QEC methods, and why they are generating excitement – most critically, as the enabling technology behind recent architectures that could dramatically accelerate the timeline to a cryptographically relevant quantum computer (CRQC). We also examine which companies and research groups are pursuing qLDPC codes, the specific code families (such as bivariate bicycle and generalized bicycle codes) now at the center of industry roadmaps, and the challenges to making them work in practice.
For readers coming from our analysis of the Pinnacle Architecture, this article provides the technical foundation to understand why that architecture’s claims are plausible – and what “Big IFs” still stand in the way.
What Are Quantum LDPC Codes?
In essence, a quantum LDPC code is a large quantum error-correcting code (typically a stabilizer code in the CSS framework) where each stabilizer generator involves a bounded number of qubits and each qubit touches only a bounded number of stabilizers. For example, a stabilizer might be a Pauli operator acting on, say, 4 or 6 qubits out of potentially thousands. “Low-density” refers to this sparsity: as the code grows, the weight of each parity check remains constant (or at most grows very slowly), unlike in generic quantum codes where checks might involve very many qubits. By measuring all these multi-qubit parity checks (stabilizers), we get a syndrome that reveals information about errors without collapsing the quantum information. The goal is to preserve one or more logical qubits in a sea of noisy physical qubits, such that the logical qubits are far more robust to errors.
To put it another way, qLDPC codes extend the powerful concepts of classical LDPC codes to the quantum realm. Classical LDPC codes (introduced by Gallager in the 1960s) are widely used in today’s communication systems because they allow efficient error correction with sparse parity-check matrices and iterative decoding algorithms. Quantum LDPC codes attempt to bring similar benefits – high rate (meaning a relatively large number of logical qubits encoded per physical qubit) and low overhead – to quantum error correction. Notably, many quantum LDPC codes are constructed from pairs of classical codes using the CSS construction (named after Calderbank-Shor-Steane), which makes designing them more tractable by leveraging classical coding theory.
Key features of qLDPC codes include:
- Sparse stabilizers: Each parity check involves a small fixed number of qubits (e.g. each stabilizer might be a tensor product of Pauli operators on, say, 3, 4, or 6 qubits). This contrasts with a generic quantum code where a stabilizer could act on dozens of qubits. Sparse checks simplify measurement circuits and may allow parallel syndrome extraction.
- Constant weight and degree: Each qubit is typically involved in a constant number of stabilizers (the stabilizer degree of a qubit is bounded). This locality in the code’s Tanner graph can help with parallel error correction and potentially with decoding complexity.
- Often CSS codes: Many qLDPC codes are of CSS type, meaning they have separate sets of X-type and Z-type parity checks derived from two classical codes. This structure allows leveraging classical LDPC code design (e.g. using two classical LDPC codes whose parity-check matrices satisfy certain orthogonality conditions to define a quantum code).
- High error threshold: In principle, well-designed qLDPC codes can exhibit high error thresholds – potentially comparable to or higher than the surface code – meaning they can tolerate a higher physical error rate before the error correction fails. (Recent theoretical constructions indeed aim for good thresholds by using expander graphs and high-dimensional complexes.) For example, Oxford Ionics notes that qLDPC codes “promise high error thresholds and dramatically lower qubit overhead” compared to surface codes.
- Better scalability of distance and rate: Perhaps the most touted advantage is that qLDPC codes can achieve much better scaling of code distance (the minimum number of physical qubits that must fail before a logical qubit is corrupted) and coding rate (logical qubits per physical qubit) than the surface code or other topological codes. In fact, asymptotically good qLDPC codes have been proven to exist – meaning you can have a finite fraction of the qubits be logical (constant rate) and distance scaling linearly with the number of physical qubits. This is a stark contrast to the surface code, which encodes a vanishing fraction of qubits (rate → 0 as system grows) and has distance scaling only as √N.
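The CSS condition referenced above is easy to check numerically: the X-type and Z-type check matrices must satisfy $$H_X H_Z^T = 0 \pmod 2$$ so that the two sets of stabilizers commute. As a minimal (fixed-size, so not itself LDPC) illustration, the Steane code uses the classical [7,4] Hamming parity-check matrix for both check types:

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]       # move pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2       # eliminate the column
        rank += 1
    return rank

# CSS condition: X-checks and Z-checks must commute, i.e. H_X @ H_Z.T = 0 (mod 2).
H_X, H_Z = H, H
assert not ((H_X @ H_Z.T) % 2).any()

# Number of logical qubits: k = n - rank(H_X) - rank(H_Z).
n = H.shape[1]
k = n - gf2_rank(H_X) - gf2_rank(H_Z)
print(k)  # → 1: the Steane code is a [[7, 1, 3]] code
```

The same two computations – the commutation check and the rank-based count of logical qubits – apply unchanged to the large sparse check matrices of genuine qLDPC codes.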
In summary, a qLDPC code aims to protect quantum data with far fewer physical qubits per logical qubit by using clever sparse-check constructions. If successful, this would vastly reduce the overhead on the path to large-scale quantum computing – instead of needing, say, 1000 or more physical qubits to make one logical qubit with error ~$$10^{-9}$$ (as in surface codes), qLDPC codes might do the same job with tens of qubits. This promise has made qLDPC a hot topic in quantum computing as researchers chase more practical error correction.
Bivariate Bicycle and Generalized Bicycle Codes: The Practical qLDPC Workhorses
While the theoretical landscape of qLDPC codes includes many families – hypergraph product codes, lifted product codes, quantum Tanner codes – the constructions that have gained the most traction in industry are the bivariate bicycle (BB) codes and their generalization, generalized bicycle (GB) codes. These deserve special attention because they are the specific qLDPC codes underpinning both IBM’s fault-tolerant roadmap and the Pinnacle Architecture’s claim of breaking RSA-2048 with fewer than 100,000 qubits.
Bivariate bicycle codes, introduced by Bravyi, Cross, Gambetta, Maslov, Rall, and Yoder in their landmark 2023/2024 work (arXiv:2308.07915, published Nature 2024), are CSS codes constructed from two cyclic polynomials over a bivariate group algebra. The key insight is that these codes can encode many logical qubits into a relatively compact block – for example, IBM’s “gross” code encodes 12 logical qubits into 144 data qubits and 144 syndrome ancillas (288 total physical qubits). While denoted as [[144, 12, 12]] in reference to its data qubits, it achieves a net physical encoding rate of $$k/n \approx 4.2\%$$ (or $$\approx 8.3\%$$ relative to data qubits alone). This is a dramatic improvement over surface codes, which encode just 1 logical qubit per patch. IBM adopted this specific code family as the foundation of their “Tour de Gross” architecture [Yoder et al., arXiv:2506.03094, 2025] – the title being a wordplay on “gross” (144 = 12 × 12), referencing the 144 data qubits in each code block.
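The bivariate bicycle construction is compact enough to reproduce directly. The sketch below builds the gross code’s check matrices from the polynomials reported in the Bravyi et al. paper ($$A = x^3 + y + y^2$$, $$B = y^3 + x + x^2$$ with $$l = 12$$, $$m = 6$$); the variable names and the GF(2) rank helper are my own:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

l, m = 12, 6                            # gross code: n = 2*l*m = 144 data qubits
I_l, I_m = np.eye(l, dtype=int), np.eye(m, dtype=int)
x = np.kron(np.roll(I_l, 1, axis=1), I_m)   # commuting cyclic-shift generators
y = np.kron(I_l, np.roll(I_m, 1, axis=1))

mp = np.linalg.matrix_power
A = (mp(x, 3) + y + mp(y, 2)) % 2       # A = x^3 + y + y^2
B = (mp(y, 3) + x + mp(x, 2)) % 2       # B = y^3 + x + x^2

H_X = np.hstack([A, B])                 # X-type checks
H_Z = np.hstack([B.T, A.T])             # Z-type checks

# Checks commute because A and B commute: H_X @ H_Z.T = AB + BA = 0 (mod 2).
assert not ((H_X @ H_Z.T) % 2).any()

# LDPC property: every check has weight 6, every data qubit sits in 6 checks.
assert set(H_X.sum(axis=1)) == {6}
assert set(H_X.sum(axis=0) + H_Z.sum(axis=0)) == {6}

n = 2 * l * m
k = n - gf2_rank(H_X) - gf2_rank(H_Z)
print(n, k)  # 144 data qubits encoding 12 logical qubits
```

The weight-6 rows and degree-6 columns verified here are exactly the sparsity and connectivity constraints that IBM’s c-coupler hardware is designed to satisfy.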
Generalized bicycle (GB) codes extend the BB construction to broader abelian groups, enabling higher distances (e.g., d=24) at comparable block sizes – the exact family used in the Pinnacle Architecture to reach the error suppression needed for RSA-2048 [Webster et al., arXiv:2602.11457, 2026]. Both BB and GB retain weight-6 stabilizers and degree-6 qubit connectivity, mapping to IBM’s c-coupler hardware.
Crucially, both BB and GB codes have bounded check weight (typically weight-6 stabilizers) and bounded qubit degree (each qubit participates in a small number of checks), making them genuine qLDPC codes. They also admit relatively simple Tanner graph structures that can be mapped onto hardware with degree-6 connectivity – a constraint IBM has shown is achievable with their c-coupler technology and multi-layer chip fabrication. This hardware-code co-design is what makes BB/GB codes the most likely candidates for near-term qLDPC implementations, as opposed to the more exotic quantum Tanner codes that require higher-dimensional expander graph structures.
For readers following the Pinnacle Architecture analysis, the distinction between BB and GB codes matters: Pinnacle’s qubit estimates depend on using GB codes with higher distance than IBM’s current BB implementations, and whether GB codes at distance 24 can be decoded in real time at microsecond latencies remains one of the critical open questions – what I’ve called one of the “Big IFs” of the Pinnacle claim.
Origin and Key Theoretical Breakthroughs
The idea of quantum LDPC codes has been around for over two decades, even if the term “qLDPC” wasn’t used initially. One can trace the origins back to the topological codes of the 1990s. In fact, Kitaev’s toric code (1997) is often cited as an early example of a quantum LDPC code. The toric code is a 2D topological code defined on a lattice; each stabilizer acts on 4 qubits (a small constant) and each qubit participates in 4 stabilizers – this sparsity qualifies it as LDPC. Kitaev’s toric code encodes 2 logical qubits in an $$n$$-qubit lattice and has a distance scaling as $$d = O(\sqrt{n})$$ (which grows sublinearly), demonstrating the power of local-check quantum codes. While the toric code’s rate is essentially 0% (2 logical qubits out of potentially thousands of physical), it showed that highly local, sparse parity checks could preserve quantum information and introduced the idea of a threshold error rate for fault tolerance.
In the early 2000s, other researchers sought to improve on the toric code’s parameters. Freedman, Meyer & Luo (2002) provided one of the first extensions beyond the toric code. They constructed a family of quantum codes using concepts from algebraic topology (e.g. projective planes), achieving a slightly better distance scaling. While still sublinear, this was a theoretical advance hinting that clever constructions could beat the toric code’s distance. These are sometimes called systolic codes and were early examples of non-geometric LDPC codes.
A major milestone came with Tillich & Zémor’s paper (first presented 2009, journal version 2014) introducing the hypergraph product code. This construction took two classical LDPC codes and “tensored” them to produce a quantum LDPC code (a CSS code). Crucially, the hypergraph product codes achieved positive rate (meaning k/n is constant as n → ∞) while still maintaining distance scaling as d = Θ(√n). In other words, unlike the toric code (whose rate goes to 0 for large lattices), the hypergraph product code can encode a linear number of logical qubits. This broke a long-held assumption that quantum codes with good distance had to have vanishing rate – Tillich and Zémor showed you can have your cake and eat some of it too: constant-rate, LDPC, with distance d = Θ(√n). It wasn’t “fully good” (distance wasn’t linear), but it firmly established that qLDPC codes could in principle outperform surface/toric codes in encoding efficiency. Hypergraph product codes spurred a new wave of interest in qLDPC constructions, and variations on this theme have been a rich research area since.
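The hypergraph product is concrete enough to reproduce in a few lines. A sketch below feeds the classical [7,4] Hamming code into both slots of the Tillich–Zémor construction, yielding a quantum code with 58 physical and 16 logical qubits (a rate of roughly 0.28); the GF(2) rank helper is my own:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

# Classical [7,4] Hamming code as both input codes.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
m, n = H.shape

# Hypergraph product: "tensor" the classical code with itself.
H_X = np.hstack([np.kron(H, np.eye(n, dtype=int)),
                 np.kron(np.eye(m, dtype=int), H.T)])
H_Z = np.hstack([np.kron(np.eye(n, dtype=int), H),
                 np.kron(H.T, np.eye(m, dtype=int))])

# CSS condition holds automatically: H_X @ H_Z.T = H⊗H.T + H⊗H.T = 0 (mod 2).
assert not ((H_X @ H_Z.T) % 2).any()

N = n * n + m * m                       # 58 physical qubits
K = N - gf2_rank(H_X) - gf2_rank(H_Z)   # 16 logical qubits
print(N, K)  # 58 16
```

Note how the product inflates a tiny classical code into a constant-rate quantum code: the rate $$K/N$$ stays bounded away from zero as larger classical LDPC inputs are used.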
The real breakthroughs toward asymptotically good quantum LDPC codes occurred starting around 2020. Key milestones include:
- Panteleev & Kalachev (2020): “Quantum LDPC Codes with Almost Linear Minimum Distance” (arXiv:2012.04068) achieved Θ(N/log N) distance while keeping the code LDPC, but with only k = Θ(log N) logical qubits. The result was a major step because it broke the long-standing √N-style distance barrier and introduced the lifted-product toolbox that later “good qLDPC” constructions build on.
- Breuckmann & Eberhardt (2020): “Balanced Product Quantum Codes” (arXiv:2012.09271) gave the first explicit (non-random) qLDPC family with improved scaling, such as $$k=\Theta(n^{4/5})$$ and $$d=\Omega(n^{3/5})$$. The rate still vanishes asymptotically, but the construction mattered because it delivered unconditional parameters and helped normalize the idea that qLDPC can beat √N-distance families in explicit form.
- Panteleev & Kalachev (2021/2022): “Asymptotically Good Quantum and Locally Testable Classical LDPC Codes” (arXiv:2111.03654) produced constant-rate qLDPC codes with linear distance scaling, i.e., asymptotically good codes. In coding-theory terms, this is one of the key results that “proved the qLDPC conjecture” and shifted the debate from existence to practicality and implementability.
- Leverrier & Zémor (2022): “Quantum Tanner codes” (arXiv:2202.13641) reformulated the new “good qLDPC” constructions through a Tanner-code lens on expander-like structures (left-right Cayley complexes). It is presented as a simplified variant of the Panteleev–Kalachev approach with improved distance estimates and has driven follow-on work on decoding strategies and more implementation-friendly interpretations.
- Dinur, Hsieh, Lin & Vidick (2022): “Good Quantum LDPC Codes with Linear Time Decoders” (arXiv:2206.07750) strengthened the story by combining good-code parameters with linear-time decoding (in a theoretical construction). The key contribution is not just “good qLDPC exists,” but that efficient decoding is plausible in principle – an important bridge from existential results to systems-level feasibility.
- Bravyi, Cross, Gambetta, Maslov, Rall & Yoder (2023/2024): “High-threshold and low-overhead fault-tolerant quantum memory” (arXiv:2308.07915, published Nature 2024) moved the conversation from asymptotics to engineering by presenting an end-to-end fault-tolerant memory protocol using high-rate LDPC codes (bivariate bicycle codes). This is the work that gave industry a concrete finite-size family and a measurable protocol stack to build hardware roadmaps around.
These academic advances cemented qLDPC codes as a promising route to fault-tolerant quantum computing with far less overhead. To summarize the theoretical status: we now know it’s possible to have quantum codes that approach the performance of classical LDPC codes (constant rate, high distance, high threshold) – something that for many years was uncertain. The remaining leap is to implement these codes in real quantum hardware and demonstrate their advantages in practice.
(For completeness, other notable theory work includes Delfosse & Zémor (2012) on LDPC bounds, Kovalev & Pryadko (2013) on finite-rate constructions, and many papers on decoding algorithms for qLDPC – e.g. Poulin & Chung (2008) introduced iterative belief-propagation decoders for quantum codes, and Roffe et al. (2020) benchmarked decoder performance across the qLDPC code landscape. These underpin the ongoing research into making qLDPC codes practical.)
How Do qLDPC Codes Work?
At a high level, qLDPC codes work similarly to other stabilizer QEC codes: by introducing redundancy and performing measurements that don’t disturb the quantum information but reveal errors. However, the structure of those measurements and the strategies for decoding have unique aspects due to the LDPC nature.
Stabilizers and Syndrome Extraction
In a qLDPC code, we have a set of stabilizer generators {S₁, S₂, …, Sₘ} that are tensor products of Pauli operators (X, Y, Z) acting on the physical qubits. Each Sᵢ is of low weight (say it acts on 4 qubits). These stabilizers mutually commute and define the codespace (the joint +1 eigenspace of all Sᵢ is where the logical qubits reside). In operation, one repeatedly measures all stabilizers to check for violations (syndromes). From my CRQC Framework, CRQC Capability B.2 (Syndrome Extraction) lives here: fast, parallel, high-fidelity stabilizer measurement is the sensing loop of fault-tolerant computing. Because each stabilizer acts on a small number of qubits, measuring it typically involves a small entangling circuit (e.g. using an ancillary qubit that interacts with the few qubits of the stabilizer, then measuring the ancilla). The outcome (±1) tells us whether an odd number of errors occurred on those qubits. By collecting all the syndrome bits from all parity-check measurements, we get a syndrome pattern pointing to where errors occurred.
The sparsity of qLDPC codes means syndrome extraction can potentially be done in parallel across many checks, since each check touches only a few qubits. This parallelism can speed up the error detection cycle. Additionally, sparse checks might be less prone to introducing additional errors during measurement (since fewer qubits and gates are involved per check).
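In the CSS picture, extracting the Z-type syndrome is just a parity computation: multiply the X-error indicator vector by the check matrix over GF(2). A toy sketch (the check matrix here is made up for illustration, not a real qLDPC code):

```python
import numpy as np

# Illustrative weight-3 check matrix: each row is one stabilizer,
# each column one data qubit; each qubit touches at most 2 checks.
H_Z = np.array([[1, 1, 0, 1, 0, 0],
                [0, 1, 1, 0, 1, 0],
                [0, 0, 1, 1, 0, 1]])

def syndrome(H, error):
    """Each check reports the parity of the errors on its support."""
    return (H @ error) % 2

e = np.array([0, 1, 0, 0, 0, 0])   # a single X error on qubit 1
print(syndrome(H_Z, e))            # → [1 1 0]: checks 0 and 1 fire
```

Because each row is sparse, every syndrome bit depends on only a few qubits, which is what allows the corresponding measurement circuits to run in parallel across the device.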
Decoding
Once a syndrome is obtained, the challenge is to infer the most likely set of errors that caused it, and then apply corrections (or track those errors in software). Decoding a qLDPC code is generally a hard problem (NP-hard in the worst case), but so is decoding a classical LDPC code – yet in practice, we have fast iterative decoders (like belief propagation or message-passing algorithms) that work well for classical LDPCs. Quantum LDPC decoders often adapt similar strategies, with additional considerations for quantum issues (like dealing with correlated X/Z errors in CSS codes).
Imagine a big sparse graph (Tanner graph) where qubits are variable nodes and stabilizers are check nodes, and edges connect a qubit to the stabilizers that involve it. Errors on qubits are like bits flipping, and each stabilizer check node “sees” the parity of errors on its adjacent qubits. A decoding algorithm passes messages along this graph to guess the error bits. This is essentially belief propagation. Poulin and Chung (2008) showed one can do this for quantum codes, though standard BP isn’t always sufficient if the code has many short cycles in the graph. Improved decoders introduce techniques like union-find decoding or randomized decoding or even neural network decoders for qLDPC. The good news is that some of the recent breakthroughs (like IBM’s work discussed later) demonstrate that high-speed decoders for qLDPC codes are feasible – e.g. IBM built a decoder that can process syndromes in under 1 microsecond on FPGAs, which is faster than the physical gate times, thereby keeping up with the experiment.
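The message-passing idea can be illustrated with its simplest ancestor, a Gallager-style bit-flip decoder – far cruder than the belief-propagation variants used in practice, and shown here on a 5-bit repetition code rather than a real qLDPC code:

```python
import numpy as np

def bit_flip_decode(H, s, max_iters=20):
    """Greedy bit-flip decoding: repeatedly flip the bit that participates
    in the largest number of currently unsatisfied checks."""
    e = np.zeros(H.shape[1], dtype=int)
    resid = s.copy()
    for _ in range(max_iters):
        if not resid.any():
            break                       # all checks satisfied
        unsat = H.T @ resid             # unsatisfied-check count per bit
        j = int(np.argmax(unsat))
        e[j] ^= 1                       # flip the worst offender
        resid = (s + H @ e) % 2         # recompute residual syndrome
    return e

# Repetition code of length 5: each check compares two adjacent bits.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
true_error = np.array([0, 0, 1, 0, 0])
s = (H @ true_error) % 2
print(bit_flip_decode(H, s))  # → [0 0 1 0 0]: the error is recovered
```

Real qLDPC decoders replace the greedy flip with soft probabilistic messages passed along the Tanner graph edges, but the loop structure – compute per-bit evidence from the syndrome, update, repeat – is the same.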
One important aspect of qLDPC decoding is that unlike the surface code (which has a convenient planar structure that enables efficient minimum-weight perfect matching decoders), qLDPC codes often lack a simple visualizable structure. They might require more computationally heavy decoding, but the trade-off is you have to decode fewer total qubits (due to lower overhead). Recent research into quantum Tanner code decoders and use of expander graphs shows that one can achieve linear-time decoding for certain qLDPC families. In practice, companies like IBM and Rigetti are co-designing classical control hardware to perform decoding in real time as part of their qLDPC code implementations.
Logical Operation and Overhead
qLDPC codes encode many logical qubits per block (e.g., 12 in IBM’s gross code), yielding far lower overhead than surface-code patches. Photonic’s SHYPS codes achieve the same logical error rates as surface code with up to 20× fewer physical qubits; Oxford Ionics targets ~13:1 physical-to-logical at error ~10⁻⁸ versus surface code’s ~1000:1. These gains assume hardware supports non-local interactions – ion traps and photonic links provide this natively, while superconducting platforms use long-range couplers or qubit shuttling.
In summary, qLDPC codes work by spreading quantum information across a large, sparsely connected network of qubits and catching errors through many small joint measurements. They bring the algorithmic and theoretical benefits of LDPC coding into the quantum domain. The catch is that they require more complex connectivity and decoding, which we will examine as part of the challenges.
qLDPC vs. Surface Codes (and Other QEC Methods)
The surface code (and its cousin, the toric code) is currently the most well-known and experimentally advanced QEC code. It’s a prime example of a topological LDPC code with local checks. However, surface codes have significant overhead costs, motivating the search for alternatives like qLDPC codes. Here’s a comparison of qLDPC codes with surface codes and other approaches:
Overhead (Physical Qubits per Logical Qubit)
Surface codes are exceptionally costly in qubit overhead. To achieve a logical error rate suitable for long algorithms (e.g., $$10^{-9}$$), surface codes typically require ~1,000 to 2,000 physical qubits per logical qubit. Google’s 2023 experiment, for instance, used a 72-qubit superconducting device supporting a 49-qubit distance-5 surface code encoding 1 logical qubit (with a reported logical error rate of 2.914% per cycle). By contrast, qLDPC codes aim to drastically lower this ratio. If a qLDPC code can achieve the same error suppression with only tens of physical qubits, it fundamentally alters the scaling math. Indeed, published roadmaps imply overhead improvements in the 10x-100x range: QuEra targets 10,000 physical qubits to support ~100 logical qubits by 2026 (≈100:1), while Oxford Ionics targets >10,000 physical qubits supporting >700 logical qubits (≈13:1). This staggering reduction in overhead is the primary advantage of qLDPC codes.
Coding Rate (Encodings)
A surface code patch typically encodes only 1 logical qubit; as the code scales to higher distances, the encoding rate ($$k/n$$) approaches zero. In contrast, many qLDPC codes offer a finite, constant rate – the number of logical qubits grows in proportion to the number of physical qubits. For instance, hypergraph product codes can achieve a constant encoding rate (e.g., $$k/n \approx 0.1$$) with distance scaling as $$\Theta(\sqrt{n})$$. Quantum Tanner codes go further, keeping both $$k/n$$ and $$d/n$$ constant. This is important for applications where you need many logical qubits (e.g. storing a large quantum memory or running algorithms that need hundreds or thousands of logical qubits simultaneously).
Distance and Error Suppression
Surface code distance scales with the linear size of the 2D patch ($$d=O(\sqrt{n})$$), meaning physical qubit counts must grow quadratically ($$n \propto d^{2}$$) to achieve higher distances. In contrast, qLDPC codes can achieve near-linear distance scaling ($$d=\Theta(n)$$), suppressing errors far more rapidly. For instance, using ~1,000 physical qubits, a surface code might yield $$d \approx 31$$, whereas a qLDPC code could reach $$d \approx 100$$. Assuming both operate well below the error threshold, the qLDPC code will yield a profoundly lower logical error rate. This suggests qLDPC codes could tolerate deeper circuits and longer computations before failing.
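The arithmetic behind that comparison can be made explicit using the standard below-threshold heuristic that logical error falls off as $$\sim A\,(p/p_{th})^{(d+1)/2}$$. All numbers below (the prefactor $$A$$, the physical error rate, the threshold) are illustrative assumptions, not measured values:

```python
import math

def surface_code_distance(n):
    """Rotated surface code: a patch of n physical qubits has d ~ sqrt(n)."""
    return int(math.isqrt(n))

def logical_error(p, p_th, d, A=0.1):
    """Illustrative below-threshold scaling: p_L ~ A * (p/p_th)^((d+1)/2)."""
    return A * (p / p_th) ** ((d + 1) // 2)

n = 1000
d_surface = surface_code_distance(n)   # 31, as quoted in the text
d_ldpc = 100                           # assumed near-linear-distance qLDPC code

p, p_th = 1e-3, 1e-2                   # physical error 10x below threshold
print(d_surface)                                # 31
print(logical_error(p, p_th, d_surface))        # ~1e-17
print(logical_error(p, p_th, d_ldpc))           # ~1e-51, vastly smaller
```

The absolute values are toy numbers; the point is the exponent. Because the suppression exponent scales with $$d$$, the same 1,000 qubits buy dozens of extra orders of magnitude of error suppression when distance scales linearly rather than as $$\sqrt{n}$$.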
Moreover, many qLDPC codes are highly redundant, which can translate to higher error thresholds (if the checks are well-chosen). Simulations by groups like Photonic Inc. indicate that their qLDPC codes can indeed reach target logical fidelities with dramatically fewer qubits than surface code, implying a high effective threshold when scaled up.
Connectivity and Locality
Here lies one of the trade-offs. The surface code’s big strength is geometric locality – all parity checks involve nearest neighbors on a 2D grid. This is perfectly suited to, say, a 2D chip of superconducting qubits or a 2D array of trapped atoms. It means the hardware wiring and crosstalk can be managed, and the error-check cycle is uniform across the array. qLDPC codes, especially those with good parameters, usually do not adhere to strict 2D locality. Their parity checks often connect qubits that are not neighbors in any simple geometry.
In hardware like superconducting qubits with limited range couplers, implementing a qLDPC stabilizer might require a series of SWAP operations or long-range couplers that introduce complexity. IBM’s approach with their “Loon” chip is explicitly adding extra-long couplers and multilayer wiring so that distant qubits on the chip can be linked for qLDPC checks.
Photonic and ion-trap systems have an advantage here: ion traps offer basically all-to-all gate connectivity (any ion can be entangled with any other via sequential operations), and Photonic’s modular architecture uses optical links to entangle distant qubits easily. Neutral atoms also can be rearranged or use multi-qubit gates to effectively get nonlocal interactions.
So, while qLDPC codes break the 2D locality of surface codes, emerging hardware platforms are stepping up with creative solutions (move the qubits, or add communication buses) to supply the needed connectivity.
Error Thresholds
The surface code is known for a relatively high error threshold (~0.5–1% for physical gate error rates) which is one reason it’s popular – you don’t need extremely perfect gates to start below threshold. The threshold for a given qLDPC code depends on its construction; some simpler qLDPC codes had modest thresholds, but the recently discovered quantum Tanner codes are expected to have pretty good thresholds (potentially a few percent) because they’re built from expander graphs (which in classical LDPC gives good threshold). One might speculate that some qLDPC codes will have thresholds comparable to surface code, while offering the benefit that once over the threshold, the error suppression with distance is much steeper (due to larger $d$ for same $n$).
In any case, high fidelity hardware is generally prerequisite for any QEC, and the companies pursuing qLDPC are simultaneously pushing physical error rates down into the 0.1% or 0.01% range to have plenty of margin.
Complexity of Syndrome Measurement
Surface code checks are weight-4 and can be measured with a small fixed circuit on a square of 4 qubits using an ancilla. qLDPC checks might be weight-6 or weight-8 (for example) and may not map so neatly onto a small planar circuit. Measuring a higher-weight stabilizer might require a series of two-qubit gates between an ancilla and each qubit in the check, possibly done in a certain order. This can take more time and might expose the ancilla to more error. Techniques like performing multi-qubit parity measurements via ancilla cat states or parallel interactions exist, but they are more control-intensive. In hardware like ion traps, one could do a multi-qubit Mølmer-Sørensen gate to parity-check many ions at once, which could speed it up.
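To make the circuit-depth point concrete: with a single ancilla, measuring a weight-$$w$$ Z-type stabilizer takes $$w$$ sequential entangling gates plus preparation and readout, so a weight-6 qLDPC check runs a longer (and more error-exposed) circuit than a weight-4 surface-code check. A schematic sketch – the gate names and qubit labels are illustrative, not tied to any framework:

```python
def z_check_schedule(support, ancilla="anc"):
    """Schedule for measuring a Z-stabilizer on `support` with one ancilla:
    prepare |0>, one CNOT per data qubit (accumulating the parity on the
    ancilla), then measure the ancilla in the Z basis."""
    ops = [("prep_0", ancilla)]
    ops += [("cnot", q, ancilla) for q in support]   # data -> ancilla parity
    ops += [("measure_z", ancilla)]
    return ops

surface_check = z_check_schedule([0, 1, 2, 3])           # weight-4 check
ldpc_check = z_check_schedule([0, 7, 19, 42, 88, 131])   # weight-6 check
print(len(surface_check), len(ldpc_check))  # 6 8
```

Each extra CNOT is another window in which the ancilla can pick up an error and spread it back onto the data – which is exactly why flag qubits and cat-state ancillas come up in this context.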
This is a technical challenge where more research is needed: how to efficiently measure a high-weight stabilizer without accumulating too much error. Some proposals break a high-weight check into overlapping smaller gates, or use flag qubits to catch ancillary errors, etc.
Decoding algorithms and latency
Surface codes can use efficient matching-based decoders – minimum-weight perfect matching (MWPM) or the faster Union-Find decoder – that operate in near-linear time in the number of checks. qLDPC decoders, as mentioned, might use belief propagation or other iterative methods. These can potentially also be made fast (especially using parallelism on FPGAs or GPUs). IBM’s RelayBP decoder is an example where they optimized a belief-propagation decoder in hardware and got it running in microseconds. Rigetti and others are working with companies like Riverlane to integrate fast decoders into their control systems.
The bottom line is that decoding qLDPC might be more compute-heavy than decoding surface codes, but it appears feasible with dedicated classical hardware. The trade-off is you’ll be decoding fewer qubits (because of lower overhead) so the total decoding work might actually be comparable or even less in some regimes.
Other QEC Approaches
Beyond surface codes and qLDPC codes, there are also bosonic codes (like GKP codes or cat codes), which encode qubits in harmonic oscillator modes rather than many two-level qubits. Companies like Alice & Bob in France focus on cat qubits that autonomously correct certain errors. Those approaches are quite different and can be complementary (one could even imagine using bosonic qubits with qLDPC on top).
There’s also the whole paradigm of error mitigation (avoiding QEC but reducing errors via software post-processing) for near-term devices, but that won’t get to arbitrarily low error rates. And there’s the measurement-based topological codes (like cluster-state codes in a 3D cluster state, which is essentially equivalent to surface codes but in a one-way quantum computing model). Interestingly, cluster-state error correction (as used in e.g. Fusion-based quantum computing or PsiQuantum’s approach) could also potentially use LDPC-like structures by entangling qubits in complex 3D graphs. But currently, the surface code and qLDPC code are two of the main contenders for large-scale fault tolerance.
In summary, qLDPC vs surface code is a classic case of higher efficiency at the cost of higher complexity. qLDPC codes can encode more and protect better with fewer qubits, but need more from the hardware (connectivity, fast processing). Surface codes are simpler to implement in near-term devices but are wasteful in qubit count.
As hardware and control tech improves, the balance is tilting toward trying more efficient codes because the qubit counts needed for surface code are daunting (millions of qubits to do useful algorithms like breaking RSA), whereas qLDPC codes might cut that down to the hundreds of thousands or even tens of thousands. That can be the difference between reaching CRQC in a decade versus several decades. This is exactly why I developed the CRQC Quantum Capability Framework: it reframes “qubit count” into capability gates and executive metrics (LQC, LOB, QOT). Thus, many quantum hardware companies are now actively exploring qLDPC codes instead of relying solely on surface codes.
Industry Adoption: Who Is Pursuing qLDPC Codes?
Thanks to the theoretical progress and the pressing need to reduce overhead, several leading quantum computing companies and research teams have embraced qLDPC codes as part of their roadmap to fault tolerance. Here are some notable examples and their approaches:
IBM
IBM has undergone what may be the most consequential strategic pivot in quantum error correction. After championing surface codes for years, IBM has now made qLDPC codes – specifically bivariate bicycle codes – the centerpiece of their fault-tolerant roadmap. This shift crystallized in two landmark developments.
First, in November 2025, IBM unveiled the Loon processor, an experimental chip explicitly designed as a testbed for qLDPC code implementation. Loon integrates multi-layer routing and extra-long-range c-couplers that connect distant qubits on the chip, providing the degree-6 qubit connectivity that bivariate bicycle codes require. Alongside Loon, IBM reported real-time qLDPC decoding latencies below 480 nanoseconds (building on earlier sub-microsecond milestones). This is CRQC Capability D.2 (Decoder Performance): the classical loop must close in real time, or the QEC cycle collapses under backlog and miscorrection. By combining the Loon hardware with this fast decoder, IBM claimed to have “the cornerstones needed to scale qLDPC codes on high-speed, high-fidelity superconducting qubits”.
Second, and more architecturally significant, was the June 2025 publication of the “Tour de Gross” paper [Yoder, Beverland et al., arXiv:2506.03094] – a 68-page blueprint for IBM’s planned Starling quantum computer. The title is a play on words: a “gross” is 144, referencing the [[144, 12, 12]] bivariate bicycle code that encodes 12 logical qubits into 144 data qubits (288 physical qubits once syndrome-measurement ancillas are included). This paper represents the first complete, end-to-end fault-tolerant architecture built on qLDPC codes, covering everything from code choice to compilation to decoding to modular interconnects. That scope is CRQC Capability D.1 (Full Fault‑Tolerant Algorithm Integration): the point where all subsystems must work together under one timing and error budget. Key architectural innovations include Logical Processing Units (LPUs) that enable fault-tolerant logical operations via generalized lattice surgery using only 90 additional qubits per code module, and universal adapters and bridges that connect modules via microwave l-couplers (link couplers) spanning up to approximately one meter. The Tour de Gross architecture delivers roughly a 10× qubit efficiency improvement over equivalent surface-code architectures. (For a deeper analysis of this paper, see our Tour de Gross coverage.) IBM’s roadmap built on this architecture targets Loon (2025) for qLDPC proof-of-concept, Kookaburra (2026) for the first memory + LPU module, Cockatoo (2027) for inter-module entanglement, and Starling (2029) for approximately 200 logical qubits executing 100 million gates. Critically, the Tour de Gross architecture became the direct baseline that the Pinnacle Architecture improves upon – Iceberg Quantum’s February 2026 paper [arXiv:2602.11457] explicitly builds on and extends IBM’s bicycle code framework, using generalized bicycle codes at higher distance and introducing innovations that eliminate the time overhead penalties inherent in IBM’s Pauli-based computation model.
This IBM → Pinnacle lineage makes the Tour de Gross paper essential context for understanding how qLDPC codes went from theory to a plausible path to breaking RSA-2048.
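To make the “gross” code concrete, the sketch below builds its check matrices from the parameters reported for IBM’s bivariate bicycle construction (ℓ = 12, m = 6, A = x³ + y + y², B = y³ + x + x²) and verifies the [[144, 12, 12]] code’s qubit count, logical count, and sparse weight-6 checks. The GF(2) rank routine is a hand-rolled helper for illustration, not a library call:

```python
import numpy as np

def cyclic_shift(size):
    """Cyclic-shift permutation matrix S with S**size = identity."""
    return np.roll(np.eye(size, dtype=np.uint8), 1, axis=1)

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy()
    rank = 0
    for col in range(M.shape[1]):
        pivots = np.nonzero(M[rank:, col])[0]
        if pivots.size == 0:
            continue
        M[[rank, rank + pivots[0]]] = M[[rank + pivots[0], rank]]
        others = (M[:, col] == 1) & (np.arange(M.shape[0]) != rank)
        M[others] ^= M[rank]          # clear the column everywhere else
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

l, m = 12, 6                          # "gross" code parameters
x = np.kron(cyclic_shift(l), np.eye(m, dtype=np.uint8))
y = np.kron(np.eye(l, dtype=np.uint8), cyclic_shift(m))

# A = x^3 + y + y^2 and B = y^3 + x + x^2, arithmetic over GF(2)
A = (np.linalg.matrix_power(x, 3) + y + np.linalg.matrix_power(y, 2)) % 2
B = (np.linalg.matrix_power(y, 3) + x + np.linalg.matrix_power(x, 2)) % 2

HX = np.hstack([A, B])                # X-type stabilizer checks
HZ = np.hstack([B.T, A.T])            # Z-type stabilizer checks

n = HX.shape[1]
assert (HX @ HZ.T % 2 == 0).all()     # CSS commutation: AB + BA = 0 mod 2
assert (HX.sum(axis=1) == 6).all()    # every check touches only 6 qubits
k = n - gf2_rank(HX) - gf2_rank(HZ)   # logical qubit count
print(f"[[{n}, {k}]] code with weight-6 checks")
```

Because x and y commute, A and B (polynomials in x and y) commute as well, which is what makes the CSS commutation condition hold automatically in this family.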
Google
Google hasn’t publicly announced moving away from surface codes yet – in 2023 they demonstrated a distance-5 surface code logical qubit as a proof-of-concept. Their focus has been on improving surface code fidelity and scalability (e.g. their “Sycamore” and “Rainbow” processors). However, Google is certainly aware of LDPC codes – some of the leading qLDPC theorists (like Vidick and Dinur) have Google affiliations. Moreover, Google invested in startups like QuEra, which do pursue LDPC codes (more on that below). So while Google’s mainline approach is still the surface code (owing to their 2D superconducting architecture), it wouldn’t be surprising if they eventually incorporate LDPC ideas or even test them on their hardware, especially if IBM shows results. For now, Google is pushing towards ~1000 physical qubits with the surface code and is exploring modular surface-code networks. We might see Google experiment with small LDPC codes once they hit limits with the surface approach.
Rigetti
Rigetti Computing (a superconducting qubit startup in the US) has explicitly embraced qLDPC codes. Under new leadership (CEO Subodh Kulkarni from 2022), Rigetti laid out a plan focusing on efficient QEC codes and modular scaling. Rigetti partnered with Riverlane, a quantum software firm, to develop an integrated QEC stack. In 2025, they were selected for DARPA’s Quantum Benchmarking Initiative (QBI) and proposed a “utility-scale” quantum computer for 2033 that uses multi-chip modules + qLDPC codes.
Rigetti’s roadmap involves building a series of testbeds: they already demonstrated a 36-qubit modular chip (called Cepheus) with improved fidelities. The next steps include a ~100 qubit multi-chip system and a dedicated 36-qubit QEC testbed where they can implement and refine qLDPC codes. Rigetti and Riverlane are co-designing the hardware and firmware to natively support qLDPC code execution, meaning the qubit chip, control electronics, and software all will be optimized for the chosen LDPC code. The idea is to reduce the number of physical qubits per logical qubit and speed up syndrome processing via tight integration.
If successful, Rigetti believes this could give them a more qubit-efficient path to fault tolerance than competitors relying on surface codes. They cite that by ~2030, if they can get on the order of 100 logical qubits with LDPC codes (which might be tens of thousands of physical qubits), that’s a big inflection point – from there, scaling to 1000 logical qubits might put breaking RSA-2048 within reach. Rigetti’s strategy showcases how even smaller players are betting on qLDPC to leapfrog in the race.
IQM
IQM is a Finnish-German quantum hardware company building superconducting qubits, and it’s notable for focusing on on-premises deployments and customized architectures. In their roadmap, IQM emphasizes chip layouts tailored for QEC codes, specifically mentioning quantum LDPC codes. They have developed two layout styles: “Crystal” (a grid-like structure) and “Star” (hub-and-spoke resonator coupling) as alternate ways to connect qubits. These are intended to make certain qLDPC codes more efficient – likely by providing the needed connectivity for parity checks with fewer hops. For instance, a Star topology might connect many qubits to a central bus, allowing a high-weight check to be done via that bus.
IQM’s view is that conventional surface-code lattices are not the only way, and by co-designing the hardware with the error-correcting code in mind, one can get better resource efficiency. They are targeting hundreds of logical qubits around 2030 using these techniques. This kind of co-design (hardware-aware codes, or code-aware hardware) is a trend we’ll see more of.
Oxford Ionics / IonQ
Oxford Ionics, a UK-based startup (recently acquired by IonQ in 2025), uses trapped-ion qubits with a unique “Electronic Qubit Control” method (no big lasers, everything on chip). Their hardware boasts some of the highest fidelities in the industry (99.97% 2-qubit gate fidelity as of 2024). High fidelity plus full connectivity (any ion can talk to any other via electromagnetic interactions) makes their platform ideal for trying advanced error correction schemes.
Indeed, Oxford Ionics has been a vocal proponent of qLDPC codes. They announced a partnership with Iceberg Quantum, an Australian company specializing in LDPC codes and the authors of the above-mentioned Pinnacle Architecture. The goal is to design a fault-tolerant architecture with minimal overhead, marrying Oxford’s hardware with Iceberg’s qLDPC codes.
Oxford Ionics has explicitly stated it is exploring qLDPC codes because they could enable constant-overhead error correction where adding physical qubits increases logical qubits linearly – essentially the holy grail of scaling. In their roadmap, they project that ~700 logical qubits could be made with ~10k physical qubits (roughly 14 physical per logical) once their error rates hit certain targets. By comparison, 10k physical qubits in a surface code might yield only a handful of logical qubits.
Oxford’s confidence in qLDPC is tied to their hardware strengths: long coherence and all-to-all gates mean even checks that require entangling distant qubits are feasible. They are implementing things like mid-circuit measurement and fast feed-forward (real-time classical processing to adjust operations based on syndrome outcomes) on their 256-qubit upcoming devices, explicitly to support QEC experiments.
After being acquired, IonQ’s CEO said the combined company’s mission is to move faster to fault-tolerant quantum computers with 2 million physical and 80k logical qubits by 2030. Achieving tens of thousands of logical qubits in that timeframe almost certainly requires LDPC-style codes. In short, IonQ/Oxford Ionics are heavily investing in qLDPC as a way to leap to scalable quantum computing within this decade.
Iceberg Quantum
Iceberg Quantum, an Australian startup founded in 2024, has rapidly emerged as one of the most consequential players in the qLDPC space – not as a hardware builder, but as an architecture and algorithm company specializing in qLDPC code design, decoding, and fault-tolerant compilation.
In February 2026, Iceberg published the Pinnacle Architecture [Webster, Berent et al., arXiv:2602.11457], which made the extraordinary claim that RSA-2048 can be factored with fewer than 100,000 physical qubits – an order-of-magnitude reduction from prior surface-code estimates. The architecture uses generalized bicycle (GB) codes (a broader family encompassing IBM’s bivariate bicycle codes) at distance 24, combined with three key innovations:
- processing units with measurement gadget systems enabling arbitrary logical Pauli product measurements every clock cycle (eliminating the time overhead that limited IBM’s Tour de Gross architecture);
- magic engines that exploit the multiple logical qubits within a single qLDPC code block to simultaneously distill and consume magic states, delivering one high-fidelity magic state per cycle via pipelined 15-to-1 distillation; and
- Clifford frame cleaning, a technique that enables efficient parallelism across processing units.
In my CRQC Framework this is CRQC Capability C.2 (Magic State Production & Injection): “magic engines” are a throughput story, and throughput is what makes Shor-scale workloads practical or impossible.
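To ground the magic-engine throughput discussion: the 15-to-1 distillation protocol that Pinnacle pipelines suppresses the input T-state error cubically, roughly p_out ≈ 35·p_in³ to leading order. A quick sketch of how few rounds are needed (the target error rate below is an illustrative assumption, not a figure from the paper):

```python
def distill_15_to_1(p_in):
    # Leading-order output error of one 15-to-1 distillation round
    return 35 * p_in ** 3

p, rounds = 1e-3, 0
while p > 1e-12:            # illustrative target for a deep Shor-scale circuit
    p = distill_15_to_1(p)
    rounds += 1             # each round consumes 15 input states per output

print(rounds, p)            # 2 rounds: 1e-3 -> 3.5e-8 -> ~1.5e-21
```

Two rounds means roughly 15² = 225 raw states per distilled output, which is why sustained per-cycle throughput, not just fidelity, is the operative constraint.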
The factoring approach uses Shor’s algorithm with residue number system (RNS) arithmetic, and the qubit estimates range from approximately 97,000 physical qubits for a one-month computation to approximately 471,000 for a one-day computation (assuming physical error rate p = 10⁻³ and code cycle time of 1 μs). I have analyzed the Pinnacle paper in detail in my articles “No, the ‘Pinnacle Architecture’ Is Not Bringing Q-Day Closer 2–5 Years (but It Is Credible Research),” and “Pinnacle Architecture: 100,000 Qubits to Break RSA-2048, but at What Cost?” where I identified four critical assumptions – what I call the “Big IFs” – that must hold for these estimates to translate into reality:
- the existence of a practical real-time decoder for GB codes at distance 24 (currently undemonstrated);
- achieving the required non-local qubit connectivity at scale;
- sustaining the magic state throughput pipeline; and
- maintaining fault-tolerant operation continuously for weeks to months.
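The residue number system arithmetic underpinning the factoring approach can be illustrated classically: a big-integer operation is split into independent channels modulo pairwise-coprime numbers and reconstructed with the Chinese Remainder Theorem, which is the source of the parallelism. A toy sketch with small, arbitrarily chosen moduli (a real RNS for Shor-scale arithmetic uses many large coprime moduli):

```python
from math import prod

moduli = [7, 11, 13, 17]   # pairwise coprime; product 17017 bounds the result

def to_rns(x):
    """Split an integer into one residue per channel."""
    return [x % m for m in moduli]

def rns_mul(a, b):
    """Each channel multiplies independently -> embarrassingly parallel."""
    return [(ai * bi) % m for ai, bi, m in zip(a, b, moduli)]

def from_rns(residues):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

a, b = 123, 89
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == a * b   # 10947 < 17017
```

As noted in my Pinnacle analysis, this parallelization trick is independent of the choice of error-correcting code.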
Iceberg’s significance lies not in building quantum hardware, but in demonstrating that qLDPC code design and compilation can be a force multiplier that dramatically reduces the hardware requirements for the most consequential quantum computing application: breaking public-key cryptography.
Photonic Inc.
Photonic Inc. (not to be confused with PsiQuantum, although both deal with photonics) is a Canadian startup taking a unique approach: using silicon spin qubits (T-centers in silicon) that are optically linked with photons. Essentially, they create small silicon qubit nodes that communicate via fiber-optic entanglement – a distributed quantum computing architecture.
Photonic has explicitly focused on fault tolerance from day one, and notably, in February 2025 they announced an error-correction breakthrough: a new family of qLDPC codes they call “SHYPS” codes. According to their release, these qLDPC SHYPS codes can perform universal quantum logic with up to 20× fewer qubits than surface code approaches. Dr. Stephanie Simmons (Photonic’s founder) referred to high-performance qLDPC codes as a “holy grail” and claimed they have cracked the codes to dramatically reduce overhead. The catch is such codes require extremely high connectivity among qubits – but that’s exactly what Photonic’s entanglement-first modular design provides. Essentially, they can entangle any qubit with any other via photonic links, so even if a SHYPS code stabilizer graph is complex, they can physically implement it. They validated this new code family with extensive simulations, showing the potential to slash qubit requirements for fault tolerance.
Photonic is now working to implement logical qubits using these codes in their upcoming prototypes. It’s an “industry first” to see a startup develop its own qLDPC code family and align it with custom hardware. Photonic’s strategy underlines how new players can innovate on both code and architecture to leap ahead. By 2025 they had 150+ employees and DARPA funding for their utility-scale quantum computer concept, with aims to demonstrate a fault-tolerant module before 2030. Microsoft’s investment in Photonic also indicates big tech interest – Microsoft potentially sees Photonic’s qLDPC-enabled approach as complementary to its own (Microsoft is pursuing Majorana-based qubits, a different paradigm, but also invested in error correction).
QuEra
QuEra Computing is a Harvard/MIT spin-off focusing on neutral atom arrays (Rydberg atom qubits). They already have a 256-qubit analog quantum simulator (Aquila) on AWS. QuEra’s roadmap, buoyed by a $230M funding in 2025 from investors including Google, explicitly targets a useful fully programmable quantum computer in 3–5 years (i.e. by 2028-2030). A core part of their plan is fault tolerance with neutral atoms, and they are indeed exploring qLDPC codes.
In mid-2024 QuEra researchers published a fault-tolerance method to speed up syndrome extraction using reconfigurable atoms, and they are looking at quantum LDPC codes for more efficient encoding of logical qubits. Neutral atoms have an interesting advantage: you can physically move qubits (optical tweezers can transport atoms) and you can arrange them in 2D or even 3D layouts arbitrarily. This means one can implement even complicated check graphs by moving atoms into positions to interact, effectively achieving the high connectivity that qLDPC codes need.
QuEra even demonstrated a variant of a toric code by dynamically connecting edges of a lattice via atom shuttling. Their goal is clearly to minimize overhead – they want low physical-to-logical ratios for their logical qubits. In fact, QuEra has talked about reaching ~30 logical qubits with 3000 physical in 2025 (100:1 ratio), and 100 logical qubits with 10,000 physical by 2026 (100:1 still). Those ratios are impressively low in the industry. QuEra’s neutral atom platform also can do multi-qubit gates (e.g. they can excite a whole subset of atoms and do a collective entangling operation), which might simplify some LDPC stabilizer measurements. Their CTO Nate Gemelke emphasizes moving from NISQ to error-corrected as the key transition now. With companies like QuEra, we see that every modality – not just superconductors or ions – is eyeing qLDPC codes because the overhead challenge is universal.
Others: Beyond the above, many academic labs are testing small qLDPC codes (e.g. there have been recent experiments encoding a qubit in a distance-3 color code or other small LDPC codes on ion traps and superconductors). Startups like Riverlane (UK) focus on QEC software and decoders, partnering with hardware companies to implement LDPC codes (they work with Rigetti, as noted, and others in the UK quantum program). Another startup, QunaSys in Japan, has worked on quantum error correction software and might be involved in LDPC code research. Iceberg Quantum (Australia), discussed above alongside Oxford Ionics, was early in demonstrating logical gates on LDPC-encoded qubits in simulation. PsiQuantum, which is building a photonic fault-tolerant machine, hasn’t explicitly said “LDPC”, but their model (fusion-based quantum computing with photonic cluster states) can effectively implement any code, and they will likely choose high-rate codes to reduce the required million-photon overhead. So while the surface code was the only game in town a few years ago, the landscape is now rich: multiple companies are deeply investigating qLDPC codes as a way to accelerate the path to useful quantum computers.
Challenges in Making qLDPC Codes Work
Despite their promise, qLDPC codes come with significant challenges. It’s one thing to prove a good code exists or even simulate it; it’s another to reliably implement it on hardware with real noise and imperfections. Here are some of the key hurdles:
Connectivity and Hardware Complexity
As discussed, many qLDPC codes assume a topology where any of a qubit’s few stabilizers could involve qubits that are physically distant. That constraint maps to CRQC Capability B.4 (Qubit Connectivity & Routing Efficiency): code efficiency trades against connectivity realism. If your hardware is a fixed 2D grid (like most superconducting chips), you face a problem: how to perform parity checks on qubits that aren’t neighbors? IBM’s solution with Loon was to add long-range couplers in silicon, effectively adding wiring layers to connect far-apart qubits. This is very demanding on fabrication and design – it’s like turning a flat chessboard into a multi-layer 3D interconnected board. IBM leverages advanced 300mm semiconductor fabrication (e.g., at the Albany NanoTech Complex) to physically enable this complex multi-layer routing and scaling. Not every company has access to such advanced foundries. Other platforms like ions or photonics provide connectivity in other ways, but those come with their own issues (ions can do all-to-all gates but too many in a row causes crosstalk; photonic links can connect modules but with some probability of loss that needs mitigation). Ensuring reliable entanglement between non-neighbor qubits across a large system is a major challenge.
In summary, physically realizing the check matrix of an LDPC code is much tougher than for a surface code, requiring either advanced chip engineering or extra operational overhead (swapping qubits around, etc.), which can introduce more error if not done carefully.
Syndrome Measurement Overhead
If a stabilizer involves 6 qubits, you might need, say, 6 two-qubit gates (between an ancilla and each data qubit) to measure it. If done sequentially, that’s longer than measuring a weight-4 stabilizer (which might be done with 4 gates). Longer circuits mean more chance for errors to creep in during the measurement itself. One way to mitigate this is to run parts of the parity-check measurements in parallel – e.g. use multiple ancillas, each measuring part of the stabilizer network at once – or create GHZ ancilla states to parity-check multiple qubits in one go.
These techniques, however, also create complex error syndromes (e.g. an ancilla failure can spoof multiple syndrome bits). Researchers use flag qubits to detect ancilla errors in high-weight checks, but again it’s more involved than the surface code’s well-studied stabilizer circuit. The challenge is to design fault-tolerant circuits for syndrome extraction that are themselves low-overhead and don’t become the Achilles’ heel of the system. This is an active area of QEC research (known as fault-tolerant syndrome extraction design).
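The ancilla-failure hazard can be seen in a toy propagation model (a bookkeeping sketch, not a circuit simulation): when a weight-w X stabilizer is measured with one ancilla controlling w sequential CNOTs, a single X fault on the ancilla copies onto every data qubit touched by the remaining CNOTs, and, modulo multiplying by the stabilizer itself, the worst case is a weight-⌊w/2⌋ “hook” error:

```python
def effective_hook_weight(w, fault_after):
    """Effective data-error weight from one X fault on the ancilla after
    `fault_after` of the w sequential CNOTs have been applied."""
    raw = w - fault_after        # later CNOTs copy the X onto the data qubits
    return min(raw, w - raw)     # multiplying by the stabilizer flips support

for w in (4, 6):
    worst = max(effective_hook_weight(w, t) for t in range(1, w))
    print(w, worst)              # worst effective hook weight is w // 2
```

For a weight-4 surface-code check the worst hook is weight 2, but for a weight-6 qLDPC check it is weight 3 – half the code distance’s worth of damage from one fault, which is exactly why flag qubits or more careful scheduling are needed.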
Decoder Complexity and Speed
qLDPC decoders can be complex algorithms. If your code has thousands of checks, a naive belief propagation might require many iterations and could be slow on a classical processor, potentially bottlenecking the QEC cycle. The decoder must keep up with the quantum clock cycle (which might be on the order of microseconds to milliseconds depending on the technology). IBM’s sub-microsecond decoding demonstration is impressive, but that was for a prototype code on a small system. As codes scale up, ensuring that decoder latency and accuracy remain good is hard.
There’s also a risk of oscillations or errors in iterative decoders (belief propagation can sometimes oscillate or get trapped in suboptimal states, especially on loopy graphs which quantum codes have). Researchers mitigate this with techniques like “ordered statistic decoding” or combining decoders (e.g. a quick heuristic decoder followed by a slow maximum-likelihood decoder if needed).
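As a stand-in for the full BP+OSD machinery, the minimal hard-decision bit-flip decoder below (a simplified Gallager-style heuristic, not what production decoders use) shows the basic loop – compute the residual syndrome, flip the bit implicated in the most unsatisfied checks, repeat – and makes the failure mode explicit: if the loop never clears the syndrome, the decoder has oscillated or stalled:

```python
import numpy as np

def bit_flip_decode(H, syndrome, max_iters=50):
    """Toy hard-decision decoder for a binary parity-check matrix H."""
    n = H.shape[1]
    e = np.zeros(n, dtype=np.uint8)            # current error hypothesis
    for _ in range(max_iters):
        s = (H @ e + syndrome) % 2             # residual syndrome
        if not s.any():
            return e                           # syndrome cleared: done
        votes = H.T @ s                        # unsatisfied checks per bit
        e[np.argmax(votes)] ^= 1               # flip the most-blamed bit
    return None                                # failed to converge (oscillation)

# 3-bit repetition code: checks q0+q1 and q1+q2
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)
true_error = np.array([1, 0, 0], dtype=np.uint8)
syndrome = (H @ true_error) % 2
print(bit_flip_decode(H, syndrome))            # recovers [1 0 0]
```

On the loopy, degenerate check graphs of real qLDPC codes this naive loop fails often, which is precisely why ordered-statistics post-processing and decoder combinations are used in practice.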
The point is, a lot of classical software and hardware development is needed alongside the quantum hardware to make qLDPC feasible. Companies like Rigetti identified this – they highlight tackling “classical processing bottlenecks in decoding and feedback” as crucial. Indeed, that’s why they integrate FPGAs and fast electronics for decoding. If the decoder is too slow or yields too many logical failures, the whole QEC scheme falters. This is a non-trivial engineering effort that requires multidisciplinary expertise.
Threshold Uncertainties
While some LDPC codes have theoretically good thresholds, the actual threshold in a real system can be lower, especially in the presence of correlated noise or when the code parameters are small. This is CRQC Capability B.3 (Below‑Threshold Operation & Scaling): a threshold only matters if it survives realistic noise, scaling, and real circuits. Many of the “good” qLDPC codes are asymptotic – meaning you need quite large block sizes to see the huge benefits. At small sizes, they might not outperform a surface code of similar size. It may be that for the first logical-qubit demonstrations (distance 3, 5, or 7, say), the surface code is still easier and better. Only when you start aiming for distance ≳15 or 20 might the LDPC codes pull ahead in efficiency. So there’s a catch-22: you need to build a fairly large system to truly demonstrate qLDPC advantages, but that large system is exactly what you can’t easily build until you’ve solved QEC. This is why companies are targeting mid-decade to show a logical qubit with LDPC that beats what a surface code logical qubit can do. We don’t yet have experimental proof that any qLDPC code has a high threshold in a lab setting – it’s all simulations. So there’s some risk that unforeseen noise issues (like correlated cross-talk errors or leakage from qubits) could limit the performance of LDPC codes. Surface codes have been studied in experiments since around 2015, so they’re better understood in practice.
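The catch-22 can be made quantitative with the standard sub-threshold scaling rule of thumb, p_L ≈ A·(p/p_th)^((d+1)/2). The constants below are illustrative assumptions, not measurements for any particular code, but they show why small-distance demonstrations reveal little of the asymptotic payoff:

```python
# Rule-of-thumb sub-threshold scaling; A and p_th are illustrative assumptions.
p, p_th, A = 1e-3, 1e-2, 0.1

def logical_error(d):
    return A * (p / p_th) ** ((d + 1) / 2)

for d in (3, 7, 15, 25):
    # at p = p_th / 10, each +2 of distance buys another ~10x suppression
    print(d, f"{logical_error(d):.0e}")
```

At distance 3 the suppression is modest, while the twelve-or-more orders of magnitude needed for billion-gate algorithms only appear at distances in the twenties – exactly the regime where qLDPC codes are projected to overtake the surface code in qubit efficiency.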
Hardware Stability and Uniformity
qLDPC codes often assume a large number of qubits that all have roughly similar error rates and are all connected in some irregular graph. If some qubits are far worse than others (in terms of error), they can become “hotspots” that fail the code. Surface codes, by contrast, are somewhat more forgiving – you can sometimes remove or bypass a few bad qubits on a lattice by rerouting around them, thanks to locality. In a nonlocal code, a single bad qubit or bad link (coupler) that participates in many checks could drag the whole code down or complicate decoding. So hardware has to have high uniformity in performance. Many groups are working on improving qubit uniformity, but scaling to thousands of qubits with uniform quality is a challenge still. Any variation needs to be calibrated out or accounted for in the decoder’s error model.
Integration of New Techniques
To really harness qLDPC, you might need to integrate several advanced techniques: mid-circuit reset (to reuse ancilla qubits frequently), real-time adaptive control (changing gates on the fly based on syndrome, which Oxford Ionics emphasizes), and maybe even ideas like teleportation-based gates (to effectively create long-range gates through entanglement swapping). Each of these is an active area of research on its own. Getting them all to work together in a single system is daunting. For example, mid-circuit measurement and feed-forward was only first shown on small scales in recent years (e.g. Google’s repetition code experiments, IonQ’s early work). Scaling that up to hundreds of qubits doing many measurements per second is uncharted territory.
Intellectual Overhead
A lighter note, but worth mentioning: qLDPC codes and their decoding are mathematically complex. It requires highly specialized knowledge to design and optimize them. The quantum workforce is only so large, and many have been trained so far on surface codes (because that was the paradigm for a while). There’s a learning curve for teams to become proficient in LDPC code design and troubleshooting. Missteps in code construction or decoder design could cause delays. That said, with the flurry of recent papers, knowledge is spreading, and collaborations (like hardware startups teaming with academic experts or specialist companies like Riverlane) help bridge this gap.
Sustained Fault-Tolerant Operation Over Weeks or Months
The Pinnacle Architecture’s minimum-qubit configuration (approximately 97,000 qubits) requires roughly one month of continuous fault-tolerant operation to factor RSA-2048 [Webster et al., arXiv:2602.11457, 2026]. This is CRQC Capability D.3 (Continuous Operation): multi-day/week stability is a gating requirement for cryptanalysis-scale runs, not a “nice to have.” This represents an entirely new category of challenge beyond what current QEC experiments have demonstrated. Today’s longest QEC experiments sustain logical qubits for milliseconds to seconds; a month is six to nine orders of magnitude longer. During that time, the system must maintain coherent error correction without any catastrophic failure – no qubit drifts out of calibration fatally, no decoder gets overwhelmed by an error burst, no classical control system crashes. Even more forgiving time estimates (the one-week configuration at approximately 151,000 qubits, or the one-day configuration at approximately 471,000) represent unprecedented operational endurance. This challenge applies to any large-scale quantum computation, but qLDPC architectures surface it more acutely because their qubit efficiency gains are partly “paid for” in longer run times at the minimum-qubit operating point.
The Decoding Gap – From Simulation to Real-Time
Perhaps the single most critical open problem for qLDPC codes today, and the one that most directly determines whether architectures like Pinnacle are feasible, is real-time decoding. The Pinnacle paper’s qubit estimates rely on simulated logical error rates under most-likely-error (MLE) decoding – essentially an idealized maximum-likelihood optimization that is computationally intractable for real-time use. The paper explicitly notes that building a practical decoder matching these performance assumptions is out of scope. IBM’s Relay-BP decoder [arXiv:2506.01779] achieves sub-microsecond latency, but targets only bivariate bicycle codes at distance 12–18, not the distance-24 generalized bicycle codes that Pinnacle requires. Riverlane’s Ambiguity Clustering decoder [Wolanski & Barber, arXiv:2406.14527] approaches MLE accuracy at tractable computational costs, but whether it can be demonstrated at the single-digit microsecond latencies required for distance-24 generalized bicycle codes remains to be shown. This is not just a software problem: the decoder must keep pace with the quantum clock cycle, meaning it needs dedicated FPGA or ASIC hardware co-designed with the specific code family. If the decoder is even slightly too slow or slightly less accurate than the simulated ideal, the logical error rates degrade and the qubit estimates inflate – potentially by orders of magnitude. This is what I have called the first and most important “Big IF” of the Pinnacle Architecture.
In short, making qLDPC codes work is a grand systems engineering challenge. It pushes on all fronts: qubit quality, chip design, control electronics, firmware, software, and theory. But the consensus is that the reward – potentially achieving fault tolerance with far fewer qubits – is worth the struggle. As IBM noted, if pieces of this puzzle (like fast decoders and long-range couplers) weren’t demonstrated, the approach could falter. But step by step, the community is checking off these pieces. Demonstrating a fully error-corrected logical qubit with a qLDPC code, outperforming an equivalent logical qubit in the surface code, will be a watershed moment that many are aiming for in the next year or two.
Implications for CRQC and Q-Day
“CRQC” stands for Cryptographically Relevant Quantum Computer, meaning a quantum computer big and reliable enough to break modern cryptography (like RSA or ECC) by running Shor’s algorithm or other attacks. “Q-Day” (or Y2Q) refers to the day such a machine is realized – effectively the day encrypted data is no longer safe. The progress in qLDPC codes has direct implications on how soon a CRQC might be built, because error correction efficiency is one of the key limiting factors for scaling quantum computers.
Faster path to scale
The most dramatic illustration of qLDPC codes’ impact on CRQC timelines is the progression of estimates for breaking RSA-2048. In 2019, Gidney and Ekerå estimated that factoring a 2048-bit RSA integer would require approximately 20 million noisy qubits using surface codes [arXiv:1905.09749]. In May 2025, Gidney published an updated estimate showing this could be done with fewer than 1 million qubits – still using surface codes, but with improved algorithms and arithmetic circuits [arXiv:2505.15917]. Then in February 2026, the Pinnacle Architecture from Iceberg Quantum demonstrated that by switching from surface codes to qLDPC (generalized bicycle) codes and introducing architectural innovations, the estimate drops to fewer than 100,000 physical qubits [Webster et al., arXiv:2602.11457]. That is a roughly 200× reduction in seven years. While algorithmic improvements (such as residue number system arithmetic and better parallelization strategies) account for a significant portion of this reduction, the switch from surface codes to qLDPC codes is the single largest contributing factor in the most recent jump. (It is worth noting, as I discussed in my Pinnacle analysis, that Pinnacle’s parallelization scheme using RNS arithmetic is actually independent of the choice of error-correcting code and could also benefit surface-code implementations – so the contributions of algorithms vs. qLDPC codes should not be conflated.)
If qLDPC codes let you achieve a given algorithm with 10× or 20× fewer physical qubits, that can dramatically accelerate the timeline for reaching cryptographically relevant scales. For instance, estimates for breaking RSA-2048 with surface codes often run into millions of physical qubits and many hours of operation – something like 1 million to 20 million qubits (depending on the era and optimization level), and billions of gate operations, which seemed a 2035+ prospect. However, if you could cut that overhead by a factor of 20, suddenly you might “only” need e.g. 100k physical qubits and perhaps it becomes a 2030 prospect. Oxford Ionics, for example, argues that because their approach can achieve logical qubits with ~10-20 physical qubits (thanks to high fidelity and LDPC codes), a CRQC could be on the horizon sooner than expected. They project that IonQ’s 2030 goal of ~80k logical qubits (with ~2 million physical) would definitely be enough to break current encryption. Indeed, 80k logical qubits running Shor’s algorithm could factor 2048-bit RSA in maybe days or less, which is game over for RSA/ECC. That timeline – 2030 – is far earlier than many older projections which were 2040s.
Other companies likewise suggest more aggressive timelines if high-efficiency QEC comes to fruition. QuEra’s team and other industry watchers have started revising estimates of Q-Day to around 2030 ± 2 years, given the recent rapid progress. They note improvements in algorithms (like better factoring algorithms that reduce qubit requirements) and hardware gains have shifted the outlook from “decades away” to possibly a late-2020s event in the most optimistic scenario. QuEra’s own roadmap aims for hundreds of logical qubits before 2030, which is in line with what might be needed for some smaller crypto-breaking tasks or at least to demonstrate at smaller key sizes. Rigetti’s alignment with DARPA and NSA goals also indicates early 2030s as a key horizon: the NSA in 2022 set 2035 as a deadline for completing the transition to PQC, and Rigetti’s work in programs targeting 2033 for utility-scale quantum fits into that. If Rigetti and others can do even 100 logical qubits by 2030, as they hope, then one can see a path to 1000 logical by maybe 2032, and that could indeed threaten RSA-2048.
Urgency for PQC
The potential of qLDPC codes to accelerate quantum progress has not gone unnoticed by cybersecurity experts and governments. If Q-Day could arrive as soon as 2028–2030 in an optimistic case, that is essentially tomorrow in terms of transitioning global cryptography to quantum-safe standards: it typically takes years to adopt new cryptographic infrastructure. Bodies like NIST have been urging migration to post-quantum cryptography (PQC) precisely because these timelines are uncertain and possibly sooner than expected. Oxford Ionics’ achievements, for example, were explicitly noted by national security agencies: Germany’s Cyberagentur awarded Oxford Ionics and Infineon a contract to develop a mobile quantum computer (project “MinIon”, also described as “Mini-Q”), citing interest in the technology for national security uses (and, implicitly, concerns about it) and signaling government appetite for field-deployable quantum systems.
As hardware gets closer to CRQC capability, the urgency of deploying PQC increases. We might see “Q-Day” not as a single dramatic event but as a threshold after which certain weaker cryptosystems start falling. For instance, Oxford Ionics hinted that even before full CRQC, smaller quantum computers could attack shorter RSA keys or aid classical cryptanalysis of some schemes. A 64-qubit high-fidelity machine isn’t going to break RSA-2048, but it might break a 256-bit RSA key of the sort some legacy embedded systems still use (a key size already weak against classical attacks), or weaken symmetric ciphers with insufficient key lengths via Grover’s algorithm. The progression might be: by the late 2020s, some niche but alarming cryptographic breaks happen, triggering a final rush to upgrade systems.
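The Grover point above can be made concrete with a back-of-envelope sketch. Grover’s algorithm searches an N-element key space in roughly √N queries, so a k-bit symmetric key offers only about k/2 bits of effective security against a quantum attacker. The function names below are mine, purely for illustration:

```python
# Illustrative sketch (my own, not from any cited source): why Grover's
# algorithm halves the effective strength of a symmetric key.
import math

def grover_effective_bits(key_bits: int) -> float:
    """Effective security of a k-bit symmetric key under a Grover attack."""
    # Grover needs ~(pi/4) * 2**(k/2) iterations; the constant factor is
    # negligible on a log2 scale, so effective strength is ~k/2 bits.
    return key_bits / 2

def grover_iterations(key_bits: int) -> float:
    """Approximate number of Grover iterations to recover the key."""
    return (math.pi / 4) * 2 ** (key_bits / 2)

for k in (64, 128, 256):
    print(f"{k}-bit key: ~{grover_effective_bits(k):.0f} effective bits, "
          f"~2^{math.log2(grover_iterations(k)):.1f} Grover iterations")
```

This is why 128-bit symmetric keys drop to roughly 64-bit effective strength, while 256-bit keys retain a comfortable ~128-bit margin – the reason NIST guidance treats AES-256 as quantum-safe but flags shorter keys.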
From a capability perspective, if qLDPC codes succeed, then once a team demonstrates even ~50–100 logical qubits with good error rates, they can start composing larger algorithms. Breaking RSA-2048 might need thousands of logical qubits sustained over billions of operations, but we won’t jump from zero to that overnight. There will be intermediate milestones: breaking 1024-bit RSA (which might need a few thousand logical qubits) as a proof of concept, or breaking some elliptic-curve cryptography. Those could happen a couple of years before the full 2048-bit break. So Q-Day may be not a singular day but a period in which “cryptographically relevant” capability is reached stepwise.
Economic and strategic implications: If qLDPC codes make large-scale quantum computers feasible earlier, whoever masters this technology will have a strategic edge. That’s why we see DARPA initiatives, large funding rounds (Google & others investing in QuEra, Microsoft in Photonic, etc.), and even international competition. The EU, for instance, is heavily funding startups like IQM and Pasqal, partly to not fall behind the US/China. China, it should be noted, also has research on quantum codes and could be working on their own LDPC implementations (though details are scarce publicly). A sudden leap via qLDPC in one country’s program could compress the timeline to a CRQC, catching others off-guard. This drives efforts like the CRQC Readiness Benchmark and Q-Day Estimator tools that attempt to forecast these developments.
In summary, qLDPC codes act as a force multiplier for quantum development. They don’t change the fundamental need for many qubits and low error rates, but they change the multiplier in front of those requirements. If the multiplier drops from ~1,000 physical qubits per logical qubit (surface-code overhead) to ~50 (qLDPC overhead), then a computer that threatens cryptography might require only tens of thousands of physical qubits instead of millions. At the current pace, a machine with, say, 50,000 physical qubits could plausibly be built around 2030 by a concerted effort (IBM’s roadmap shows modular clusters of chips aiming in that direction; IonQ/Oxford’s plan relies on networking chips; etc.). That is why some experts are now saying Q-Day could realistically happen in the late 2020s or very early 2030s if everything goes right – a more urgent timeline than the 2035–2040 range often quoted a few years ago. It underscores the importance of transitioning to PQC now, since a secure communications system typically needs a few years’ head start before Q-Day to ensure data isn’t harvested in transit and later decrypted.
On the other hand, it must be acknowledged that huge technical challenges remain to reach thousands of logical qubits; qLDPC codes are a tool to help, not a magic wand.
qLDPC codes change the “multiplier” in front of fault tolerance: they may reduce the number of physical qubits needed per logical qubit compared to surface-code-centric designs, which is why architecture papers now explore RSA‑2048 resource estimates in the ~10⁵‑qubit regime instead of ~10⁶–10⁷.
But the timeline only compresses if four engineering conditions hold:
- real-time decoding at microsecond (or sub‑microsecond) latencies with near‑ideal accuracy;
- non-local connectivity at scale (or a modular equivalent) compatible with qLDPC connectivity graphs;
- magic-state throughput that doesn’t dominate runtime; and
- multi‑day to multi‑week stability without drift-driven logical failure.

These four conditions map directly onto the CRQC capability gates for decoder performance, connectivity/routing, magic-state production, and continuous operation.
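The first condition – real-time decoding – can be quantified with a toy backlog model of my own (not from any cited paper): if syndrome rounds arrive every microsecond but the decoder takes longer than that per round, undecoded data accumulates linearly and the machine eventually stalls. The function and parameter names here are illustrative:

```python
# Toy backlog model (illustrative): syndrome rounds arrive every `cycle_us`
# microseconds; decoding one round takes `decode_us` microseconds. If the
# decoder is slower than the cycle, the backlog grows without bound.

def decoder_backlog(cycle_us: float, decode_us: float, rounds: int) -> float:
    """Rounds of syndrome data still waiting to be decoded after `rounds` cycles."""
    # Backlog grows linearly in the per-round deficit; zero if the decoder keeps up.
    return max(0.0, (decode_us - cycle_us) / cycle_us * rounds)

# A 5-day run at a 1 µs syndrome cycle is ~4.3e11 rounds:
rounds = 5 * 24 * 3600 * 10**6

print(decoder_backlog(1.0, 1.0, rounds))  # decoder keeps up: backlog stays at 0
print(decoder_backlog(1.0, 1.1, rounds))  # only 10% too slow: ~4.3e10 rounds behind
```

The asymmetry is the point: a decoder that is even slightly slower than the syndrome cycle fails catastrophically over a multi-day run, which is why sub-microsecond decoding is listed as a hard gate rather than a nice-to-have.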
The Pinnacle result does not mean Q-Day is imminent; it means the theoretical floor for what is required has dropped into a range that multiple hardware roadmaps (PsiQuantum, IonQ, Diraq, IBM) project reaching within 3–5 years. Whether crossing that floor translates to actual cryptanalytic capability depends entirely on whether these Big IFs can be resolved on a similar timeline.
My CRQC Quantum Capability Framework organizes the path to a cryptographically relevant quantum computer into nine interdependent capabilities – from physical error correction to continuous multi-day operation – that collectively determine three executive metrics: Logical Qubit Capacity (LQC), Logical Operations Budget (LOB), and Quantum Operations Throughput (QOT). Under this framework, the baseline CRQC target (per Gidney 2025) is ~1,399 logical qubits encoded in ~1 million physical qubits at surface-code distance 25, running for ~5 days. qLDPC codes could dramatically shift the LQC equation: the Pinnacle Architecture claims to cut the physical qubit count to ~100,000 by improving the encoding efficiency of Capability B.1 (QEC).
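The encoding-efficiency shift can be made concrete with back-of-envelope arithmetic on the headline numbers cited above. The function name is mine, and – as an assumption for comparison’s sake – I reuse the same ~1,399-logical-qubit target for both estimates, which the Pinnacle paper’s actual accounting may not match exactly:

```python
# Back-of-envelope comparison (illustrative): physical-qubit overhead per
# logical qubit implied by the two resource estimates discussed in the text.

def overhead(physical_qubits: int, logical_qubits: int) -> float:
    """Average physical qubits spent per logical qubit."""
    return physical_qubits / logical_qubits

surface_baseline = overhead(1_000_000, 1399)  # Gidney-2025-style surface-code estimate
pinnacle_qldpc = overhead(100_000, 1399)      # Pinnacle Architecture claim (same logical count assumed)

print(f"Surface-code baseline: ~{surface_baseline:.0f} physical per logical")
print(f"qLDPC (Pinnacle):      ~{pinnacle_qldpc:.0f} physical per logical")
print(f"Reduction factor:      ~{surface_baseline / pinnacle_qldpc:.0f}x")
```

Under these assumptions the per-logical-qubit overhead falls from roughly 715 to roughly 72 – an order-of-magnitude improvement in LQC for a fixed physical-qubit budget, which is exactly the lever the Pinnacle claim pulls.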
Conclusion
Quantum LDPC codes have rapidly evolved from a theoretical curiosity into a linchpin of many quantum roadmaps. They represent a paradigm shift in quantum error correction – aiming to do more with less, to reach the era of true quantum advantage and fault tolerance faster and with fewer qubits than previously thought possible. We explored how qLDPC codes work and their key advantages: sparse parity checks enabling parallel error detection, high rates and distances allowing many logical qubits with strong protection, and the tantalizing prospect of constant-overhead scaling of logical qubits. We also discussed how they differ from the stalwart surface code: trading off local simplicity for global efficiency.
The heavy involvement of industry leaders and startups alike in qLDPC development – IBM with its Loon chip, Relay-BP decoder, and the Tour de Gross bicycle-code architecture; Rigetti with integrated LDPC decoding and DARPA plans; IonQ/Oxford Ionics with ion-trap LDPC experiments and their Iceberg Quantum partnership; Iceberg Quantum with the Pinnacle Architecture demonstrating sub-100,000-qubit RSA-2048 factoring using generalized bicycle codes [arXiv:2602.11457]; Photonic with a novel LDPC code family; QuEra with neutral-atom LDPC schemes; and others – is a strong signal that this approach is viewed as the way forward. The progression from IBM’s Tour de Gross (the first end-to-end qLDPC fault-tolerant blueprint) to Iceberg’s Pinnacle (the first qLDPC-based cryptanalysis architecture) in just eight months underscores the speed at which this field is moving. These efforts are in many cases international and collaborative, indicating how critical qLDPC codes are for the global quantum effort.
Challenges remain plentiful: hardware innovation to meet connectivity demands, ultra-fast and reliable decoders, and demonstrating that qLDPC codes can indeed achieve their theoretical potential in real devices. The next couple of years will likely see the first experimental logical qubit maintained by a qLDPC code. From there, it will be a race to scale up logical qubit counts. The companies and labs that master qLDPC codes will be the ones to deliver early fault-tolerant prototypes – and perhaps the first CRQC. As I’ve outlined, that has direct ramifications on cybersecurity and the countdown to Q-Day.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.