Capability 1.1: Quantum Error Correction (QEC)
This piece is part of an eight‑article series mapping the capabilities needed to reach a cryptanalytically relevant quantum computer (CRQC). For definitions, interdependencies, and the Q‑Day roadmap, begin with the overview: The Path to CRQC – A Capability‑Driven Method for Predicting Q‑Day.
(Updated in Sep 2025)
(Note: This is a living document. I update it as credible results, vendor roadmaps, or standards shift. Figures and timelines may lag new announcements; no warranties are given; always validate key assumptions against primary sources and your own risk posture.)
Introduction
Quantum Error Correction (QEC) is the first and arguably most critical capability in the roadmap toward a cryptographically relevant quantum computer (CRQC). Without QEC, a large-scale quantum computer cannot reliably perform the billions of operations needed to break modern encryption – no matter how many qubits we build. In essence, QEC is what allows many noisy physical qubits to behave like a single near-perfect qubit. This concept underpins fault-tolerant quantum computing, enabling long algorithms to run to completion without being derailed by errors.
In this post, I explore what QEC is and why it matters, tracing its development from theoretical insight to laboratory demonstrations. I’ll highlight foundational papers that kickstarted QEC research in the 1990s, key experimental milestones over the past two decades, the latest breakthroughs achieved in recent years, and what advances are expected in the near future.
(For a primer on QEC concepts and the surface code, see my related posts – here I focus on the timeline of progress and how to gauge future developments.)
What is Quantum Error Correction?
At its core, quantum error correction is a method to preserve quantum information by encoding a single logical qubit into multiple physical qubits. During computation, special multi-qubit measurements (called syndrome measurements) continuously check for error symptoms and allow the system to correct them without revealing the actual data. In effect, a group of imperfect qubits, together with an error-correcting code, can mimic one “near-perfect” qubit.
This concept is analogous to classical error-correcting codes (like adding parity bits or using RAID for disks), except that in quantum computing the error-checking must happen continuously and carefully, so as not to disturb the fragile quantum state.
For example, a leading QEC scheme called the surface code arranges physical qubits in a 2D grid and encodes a logical qubit in a patch of dozens of them. Ancillary qubits repeatedly entangle with their neighbors to detect error syndromes (bit flips or phase flips) without directly measuring the data qubits. If an error occurs on any one qubit, it yields an unusual syndrome pattern that signals which qubit went astray; a corrective operation can then be applied, all while the encoded logical information remains in a protected superposition state. As long as errors are infrequent and mostly local, the surface code (or any robust QEC code) can “catch” and fix errors on the fly, preventing them from propagating into logical faults.
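To make the syndrome-measurement idea concrete, here is a minimal sketch of the simplest building block – the three-qubit bit-flip repetition code – simulated purely classically (only X errors are tracked as bit flips, and the parity checks stand in for the ancilla-assisted stabilizer measurements; the function names are my own, illustrative choices):

```python
import random

def measure_syndrome(bits):
    """Parity checks Z0Z1 and Z1Z2 of the 3-qubit bit-flip code,
    tracked classically (1 = parity violated)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Lookup table: syndrome -> which qubit (if any) to flip back
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def qec_cycle(bits, p_error):
    """One round: independent bit-flip noise, syndrome extraction, correction."""
    noisy = [b ^ (random.random() < p_error) for b in bits]
    syndrome = measure_syndrome(noisy)
    fix = CORRECTION[syndrome]
    if fix is not None:
        noisy[fix] ^= 1
    return noisy

# A logical |0> is encoded as 000; single flips are always caught,
# but two simultaneous flips fool the decoder (a logical error).
random.seed(0)
failures = sum(qec_cycle([0, 0, 0], p_error=0.05) != [0, 0, 0] for _ in range(100_000))
print(f"logical error rate ≈ {failures / 100_000:.4f}  (physical rate 0.05)")
```

Running it shows the point of QEC in miniature: any single flip is caught and undone, while the residual logical error rate scales roughly as $$3p^2$$ – already better than the raw physical rate $$p$$ whenever $$p$$ is small.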
Hardware topology matters: because surface‑code stabilizers assume local neighbor couplings, sparse or modular layouts must route those interactions via SWAP/teleportation, inflating the cycle count – see qubit connectivity & routing efficiency for how the connectivity graph constrains practical QEC scheduling.
Without QEC, quantum bits (qubits) lose their quantum state (through decoherence or gate errors) in a fraction of a second – far too quickly to perform complex calculations. Today’s best physical qubits have error rates on the order of 0.1-1% per operation. At those error rates, even a circuit with a few thousand operations will likely fail, let alone the billions of operations required for breaking cryptography.
QEC provides a path to suppress errors dramatically by trading quantity for quality: we use many physical qubits and operations to encode each logical qubit, so that the effective error rate of the logical qubit becomes much lower than the physical error rate.
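A common rule of thumb from surface-code resource estimates (the constant $$A$$ and threshold $$p_{\mathrm{th}}$$ below are illustrative, not measured values) captures this trade-off: once the physical error rate $$p$$ sits below the threshold, the logical error rate per QEC cycle falls roughly exponentially in the code distance $$d$$:

$$p_L \approx A \left(\frac{p}{p_{\mathrm{th}}}\right)^{(d+1)/2}$$

Every extra margin below threshold, or every two additional units of distance, buys roughly another constant factor of suppression – the Λ factor quoted in recent experiments.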
The ultimate goal is a fault-tolerant quantum computer where logical qubits are so reliable that even an algorithm taking days and $$6.5\times10^9$$ quantum gates can run to completion without errors.
From Theory to Threshold: Foundational Breakthroughs
Quantum error correction was once thought impossible – how can you check a quantum state for errors without observing (and destroying) it? The breakthrough came in the mid-1990s, when researchers discovered that it’s possible to spread the information of one qubit across many qubits in an entangled code word. In 1995, Peter Shor unveiled the first quantum error-correcting code, using 9 physical qubits to encode 1 logical qubit. Shor showed how specific joint measurements could detect whether any one qubit had flipped or lost phase, and then recover the original quantum state using the remaining qubits. Hot on Shor’s heels, Andrew Steane developed a 7-qubit code in 1996 that achieved the same error-correcting capability with fewer qubits. Together, the Shor and Steane codes demonstrated that the “no-cloning” principle of quantum mechanics could be finessed – one could redundantly encode quantum data and measure only the syndrome (error pattern) without measuring the data itself.
These theoretical codes gave birth to an explosion of research. Dozens of new codes were discovered in the late 1990s (Calderbank-Shor-Steane codes, Quantum BCH codes, etc.), and crucially, researchers proved that fault-tolerant quantum computation is possible in principle. In 1997-98, work by Dorit Aharonov and Michael Ben-Or, as well as Raymond Laflamme, John Preskill, and others, led to the quantum threshold theorem. This theorem guarantees that if the physical error rate of qubits is below a certain threshold (on the order of $$10^{-3}$$ to $$10^{-2}$$ for typical codes), then QEC can suppress errors arbitrarily well by making the code large enough. In other words, once your hardware is just good enough, adding more qubits and more QEC cycles will reduce the error rate instead of adding to it – enabling scalable quantum computing. This insight was monumental: it turned building a quantum computer from a hopeless task into a long but feasible engineering road, provided that threshold can be met.
Another visionary idea from this period was topological codes. Alexei Kitaev’s 1997 “toric code” showed that encoding qubits in a 2D surface with periodic boundary conditions could naturally localize and suppress errors, taking advantage of exotic physics (anyons and topology). This concept evolved into the surface code – a planar version amenable to actual chip layouts – which was found to have a relatively high error threshold around 1%. By the early 2000s, analyses by Kitaev, E. Dennis, A. Fowler and others firmed up the surface code as a leading choice, thanks to its combination of high threshold (~0.75% in some studies) and use of only local nearest-neighbor interactions. The stage was set to build these codes in real hardware, to see if reality would cooperate with theory.
Early Experimental Demonstrations
Turning QEC theory into practice proved to be a long, painstaking journey. The first proof-of-concept came in 1998, when a team led by D. G. Cory demonstrated quantum error correction on a 3-qubit NMR quantum computer. They used nuclear magnetic resonance to encode a single logical bit into three nuclear spins and correct simple bit-flip errors – effectively implementing a quantum analog of the classical repetition code. While NMR systems are not scalable, this 1998 result was a crucial “hello world” for QEC, showing that quantum information could indeed be protected in the lab (albeit for one logical qubit in a system that could not be scaled into a computer).
Throughout the 2000s and early 2010s, experiments on different platforms incrementally advanced the state of the art. Ion traps and superconducting circuits – the leading hardware modalities – each took on small QEC codes. Trapped-ion groups demonstrated repetitive error correction on a three-ion repetition code in 2011 and, in 2014, encoded a single logical qubit in a 7-qubit color code (equivalent to the Steane [[7,1,3]] code), implementing error detection and correction on it. By 2020, the UMD/Duke ion-trap team (led by physicist Chris Monroe, co-founder of IonQ) showed elements of fault tolerance using a Bacon-Shor code on 9 data qubits + 4 ancillas – meaning the error-correction procedures themselves were designed not to introduce new errors. They prepared, manipulated, and read out a logical qubit in such a way that any single physical gate failure would not corrupt it, a hallmark of fault-tolerant design. This was described as a “foundational step” toward scalable QEC on ion traps.
Superconducting qubits, on the other hand, had fewer qubits available in the early 2010s, so progress began with simpler codes. In 2015, the Martinis group (then at UC Santa Barbara, later Google Quantum AI) demonstrated a nine-qubit linear repetition code – a one-dimensional precursor to the surface code – on a superconducting chip, showing they could perform repetitive error detection cycles for bit-flip errors.
Around the same time, IBM researchers implemented a 5-qubit error correcting code on a small superconducting device – notable for being the smallest code that can correct an arbitrary single-qubit error (the so-called [[5,1,3]] code). These initial superconducting experiments did not yet beat the break-even point (sometimes the logical qubit was less stable than a single physical qubit, due to overhead), but they allowed researchers to shake out control issues and develop the fast electronics needed for QEC.
By the late 2010s, QEC demos had become more sophisticated. Both industry and academia achieved multi-round error correction. For example, in 2021 Google reported running repeated stabilizer cycles on its Sycamore processor – repetition codes of growing size plus a small surface code – demonstrating exponential suppression of bit-flip or phase-flip errors, though not yet full correction of both at once. Importantly, researchers were learning how to handle crosstalk and imperfections – for instance, how to perform mid-circuit measurements (needed for syndrome extraction) and feed the results to a decoder. This period also saw experiments with bosonic codes (QEC in a single high-dimensional quantum mode, like a microwave cavity). In 2021, researchers demonstrated “autonomous quantum error correction” of a bosonic cat qubit – encoding the qubit in two coherent states of a cavity and letting engineered dissipation stabilize it – and related cavity-encoding experiments pushed logical lifetimes to and beyond the break-even point. These were among the first quantitative signs that QEC can extend a qubit’s useful lifetime rather than shorten it – a tantalizing hint that the threshold could be met.
Recent Milestones: Toward Fault-Tolerant Logical Qubits
In the last few years, QEC research has entered a new phase. The focus has shifted from simply demonstrating error correction on one logical qubit to scaling up the code distance and reducing the logical error rates. Several milestones achieved from 2021 to 2023 suggest that we’re at the dawn of the fault-tolerance era, where logical qubits can outperform physical qubits:
2022-23 – First “Break-Even” Experiments
Google and other groups announced they had finally crossed the break-even point for QEC. In 2023, Google Quantum AI published results in Nature showing that a distance-5 surface code (49 physical qubits) had a slightly lower error rate than a smaller distance-3 code (17 qubits). This roughly 4% improvement in logical error rate with a larger code was small but hugely significant. It marked the first time that adding more qubits actually reduced the error rate, indicating the system was operating below the surface code threshold. In other words, their qubits were good enough (errors ~0.1-0.2%) that the QEC overhead was net positive – a turning point hailed as evidence that quantum computing can scale. Google dubbed this achievement a “logical qubit prototype” and noted it as a key step on their roadmap.
2024 – Below-Threshold Operation & Scaling
Building on the 2023 result that first showed error suppression when growing a surface‑code logical qubit, Google moved to its Willow generation in 2024 and demonstrated two below‑threshold surface‑code memories on hardware: a distance‑5 memory on a 72‑qubit processor with an integrated real‑time decoder, and a distance‑7 memory on a 105‑qubit processor (a ~101‑qubit patch). They observed an error‑suppression factor of Λ ≈ 2 when increasing the code distance by 2, consistent with operating well below threshold. The distance‑7 logical memory achieved 0.143% ± 0.003% error per QEC cycle and a lifetime 2.4× longer than the best constituent physical qubit. On the distance‑5 system, the real‑time decoder sustained operation for up to one million cycles at a 1.1 µs QEC cadence with ~63 µs average decode latency. 
These results represent a prolongation of quantum memory far beyond what any single qubit could do, and crucially, they validate that error rates decrease exponentially as code distance grows (the cornerstone of fault-tolerance).
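As a back-of-the-envelope illustration of what Λ ≈ 2 implies (a sketch using only the figures quoted above; real devices will deviate as correlated errors and other floors appear), one can extrapolate the per-cycle logical error rate to larger distances:

```python
# Extrapolate per-cycle logical error from the reported distance-7 figure,
# assuming the error-suppression factor Lambda ≈ 2 holds for each +2 in distance.
# (Illustrative only; reaching cryptographically relevant error rates at modest
# distances requires Lambda well above 2, i.e. better physical qubits.)
eps_d7 = 1.43e-3   # ~0.143% logical error per QEC cycle at distance 7 (reported)
LAMBDA = 2.0       # suppression per distance step of 2 (reported ~2.1)

for d in range(7, 28, 2):
    eps = eps_d7 / LAMBDA ** ((d - 7) / 2)
    print(f"d={d:2d}  projected logical error/cycle ≈ {eps:.2e}")
```

The exercise makes the scaling tangible: at Λ ≈ 2 even distance 25 only reaches the $$10^{-6}$$ ballpark, which is why further improvements in physical error rates (and hence larger Λ) matter as much as adding qubits.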
2023-24 – Multiple Logical Qubits & Operations
While much of the focus has been on improving a single logical qubit, researchers have also started demonstrating operations between logical qubits – essential for building a full fault‑tolerant machine.
In April 2024 (preprint; published Dec 2024), IBM researchers used a 133‑qubit heavy‑hex device to realize two logical qubits – a distance‑3 surface‑code (3CX variant) and a distance‑3 Bacon–Shor – and entangled them via lattice surgery combined with transversal CX. They verified a logical Bell state (simultaneous XX,YY,ZZ checks) over up to five rounds of stabilizer measurements; the d=3 case was confirmed with QEC only, and d=2 achieved ~94% logical‑Bell fidelity with post‑selection.
In January 2025, Besedin et al. demonstrated lattice surgery between two distance‑3 repetition‑code qubits by splitting a distance‑3 surface‑code qubit—useful building blocks for surgery, though repetition codes protect only against bit‑flip errors and thus are not full surface‑code logicals.
For a trapped-ion comparison: earlier, in 2022, Quantinuum demonstrated fault‑tolerant entangling gates between logical qubits with real‑time QEC, comparing two encodings (the five‑qubit code and the [[7,1,3]] color code) and reporting logical‑level CNOT sequences with higher fidelity than analogous physical‑level circuits – an important cross‑platform milestone for logical two‑qubit gates.
Separately, the UMD/Duke trapped‑ion team reported Bacon–Shor QEC comparisons (Shor vs Steane extraction) on a 23‑ion chain (preprint Dec 2023; Science Advances 2024), pushing multi‑round QEC on ions and informing logical‑operation design – even though that specific work did not entangle two logical qubits.
Why this matters: these multi‑logical demonstrations (on both superconducting and ion platforms) show that logical gates, measurements, and QEC cycles can be choreographed together, a prerequisite for scaling from single‑logical breakeven to large logical registers needed in cryptography-class workloads. A future cryptography-breaking quantum computer will need thousands of logical qubits all interacting, not just one logical qubit in isolation.
Trapped Ions and Concatenated Codes
In 2025, Quantinuum (a leading trapped-ion company) reported a milestone using concatenated error-correcting codes. They demonstrated a logical qubit where an inner code is protected by an outer code, achieving exponential suppression of error with two layers of QEC. This was essentially an experimental realization of the full threshold theorem vision proposed by Aharonov and Ben-Or: by nesting one code inside another (and using ions’ long coherence to advantage), the team showed that noise could be reduced at the cost of a modest increase in qubit overhead. While Google’s approach used a single large surface code, the Quantinuum result used smaller codes in two levels (a strategy that may prove more efficient for certain platforms). Notably, they claimed this concatenated scheme might offer a faster route to fault tolerance than a brute-force surface code, since each layer handles different aspects of error and can reduce overhead in ancilla qubits.
The fact that multiple QEC architectures (e.g. topological codes, concatenated codes) are now working in practice is a great sign for the field – it means we have options to optimize for different hardware.
Taken together, these recent milestones show that QEC has evolved from theory to small-scale reality. Logical qubits with below-physical error rates have been achieved on both superconducting qubit arrays and in trapped-ion systems. We have seen QEC codes of distance 5, 7, and even concatenated levels, all functioning and improving as qubit quality improves.
However, we should note that current logical qubits still have error probabilities in the $$10^{-2}$$ to $$10^{-3}$$ range per operation or per circuit cycle – that’s a big improvement over physical qubits, but still far too high for running a lengthy algorithm. The next big step is to push logical error rates down further (e.g. to $$10^{-5}$$, $$10^{-6}$$, and below) by increasing code size and hardware performance.
Competing QEC Codes and Approaches
While the surface code has dominated the spotlight (thanks to its high threshold and 2D layout suitability), it’s not the only game in town. Different quantum hardware may benefit from different QEC strategies, and even within superconducting qubits there’s active research on codes that could reduce the daunting overhead of QEC.
Surface Code Variants (XZZX, Heavy-Hex, etc.)
Researchers have introduced tweaks to the standard surface code to boost its performance. One variant, the XZZX code, changes the mix of Pauli checks and has been shown to raise the error threshold above 2% for biased noise (where one type of error dominates). IBM’s devices use a heavy-hexagon lattice, which is essentially a surface code adapted to a degree-3 connectivity graph – this sacrifices a bit of error threshold (simulations say ~0.8% vs ~1% for the regular surface code) in exchange for easier hardware engineering. So far, these variants perform similarly to the basic surface code at small scales, but they may prove advantageous as systems grow and certain error types need targeting.
Quantum Low-Density Parity-Check (LDPC) Codes
A newer frontier is quantum LDPC codes, inspired by classical LDPC error-correcting codes. In these codes each parity check still involves only a few qubits (and each qubit sits in only a few checks), but the checks can connect qubits that are far apart, which lets a single code block protect many logical qubits at once. Theoretically, some quantum LDPC codes promise a roughly constant overhead per logical qubit (rather than the ~d² overhead of surface codes) if sufficiently large block sizes are used.
In 2023-24, experimental groups (including a team in China and IBM as well) demonstrated small instances of LDPC codes on real hardware. For example, one experiment implemented a distance-4 “bicycle” LDPC code with 32 superconducting qubits, encoding four logical qubits in that lattice. They showed that all the multi-qubit parity checks (some involving 6 qubits at a time) could be measured in parallel, and they achieved a logical error rate per cycle around a few percent – not yet better than surface code, but proving the concept.
IBM has a particular interest in LDPC/bicycle codes, as reflected in a 2024 Nature paper and their 2025 roadmap: they claim these codes will form the backbone of a modular fault-tolerant machine by 2029. The allure is that if LDPC codes can be made to work at scale, they might significantly cut down the number of physical qubits needed for each logical qubit (maybe tens of qubits per logical instead of thousands).
However, LDPC codes typically have lower error thresholds and much more complex decoding, so a lot of R&D remains to see if they can outperform surface codes in practice.
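To see why the overhead argument is so compelling, here is a rough, hedged comparison of physical-qubits-per-logical-qubit for a rotated surface code versus a generic [[n,k,d]] LDPC block (the block parameters and the 2× ancilla factor are assumptions in the spirit of recently proposed bivariate-bicycle codes, not measured figures):

```python
def surface_code_qubits_per_logical(d):
    """Rotated surface code: d*d data qubits plus d*d - 1 measure qubits per logical qubit."""
    return 2 * d * d - 1

def ldpc_qubits_per_logical(n, k, ancilla_factor=2.0):
    """Generic [[n, k, d]] block code: n data qubits shared by k logical qubits,
    with an assumed ancilla overhead factor for syndrome extraction."""
    return ancilla_factor * n / k

print("surface code, d=5 :", surface_code_qubits_per_logical(5), "physical per logical")
print("surface code, d=25:", surface_code_qubits_per_logical(25), "physical per logical")
# Hypothetical LDPC block with parameters in the spirit of bivariate-bicycle proposals:
print("LDPC [[144,12,12]]:", ldpc_qubits_per_logical(144, 12), "physical per logical (assumed 2x ancilla)")
```

The caveat is that the two codes here have different distances and thresholds, so this is an encoding-rate comparison only – but it illustrates exactly the order-of-magnitude gap (tens versus roughly a thousand physical qubits per logical) that the LDPC program is chasing.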
Bosonic Codes (Cat & GKP Codes)
An alternative approach is to encode quantum information in oscillator modes (like photons in a cavity) rather than in many discrete qubits. Bosonic codes such as the cat code and Gottesman-Kitaev-Preskill (GKP) code leverage the fact that a single microwave cavity can have a long-lived multi-dimensional state. A “cat qubit” encodes 0 and 1 in two coherent states of a resonator (analogous to Schrödinger’s cat being “alive” or “dead”), which naturally gives some protection against certain errors (phase flips, in this case). Error correction then only needs to correct the remaining error type, reducing the problem to effectively one dimension.
Experiments at Yale and elsewhere have shown that cat qubits can have exponentially longer lifetimes as the average photon number is increased, and recently a distance-5 repetition cat code (five cavities with error correction across them) was demonstrated on a superconducting platform. They reported operating this concatenated bosonic code below threshold for phase errors, achieving logical error rates of ~1.7% per cycle for distance 5 – comparable to distance-3, showing that going to distance-5 didn’t degrade it. While those numbers are still high, the key point is the hardware efficiency: the bosonic approach achieved distance-5 with 5 cavities and 4 ancilla qubits (essentially 9 physical entities), whereas a distance-5 surface code needs 49 physical qubits.
Companies like Alice & Bob in France are pursuing cat qubits as a way to get to useful logical qubits with far fewer physical qubits. They argue that a network of 30 cat qubits might achieve the same logical performance as ~1000 transmon qubits in a surface code – an enticing reduction in scale. Bosonic codes are still in early days, but if they continue to improve, they could significantly accelerate the arrival of practical QEC by relaxing hardware demands.
Other Codes
Several other QEC codes are being explored on the fringes. For instance, color codes (a cousin of surface codes with 3D or 2D colorable lattices) have seen some small demonstrations, and erasure codes (exploiting qubit loss errors that can be detected) are proposed to improve effective error rates.
Thus far, none of these have matched the simplicity and performance of the surface code at small scales, but research is ongoing.
Even Microsoft’s long-shot approach of topological Majorana qubits can be seen as a form of “built-in” error correction (the physical qubit is encoded non-locally in Majorana zero modes). In 2023, Microsoft claimed evidence of creating topological qubits, but a full demonstration of their error correction advantage is still pending.
The landscape of QEC is rich, but given current data, the surface code (and its close relatives) remains the leading candidate for the first generation of true fault-tolerant quantum computers.
Challenges, Overheads, and Interdependencies
Quantum error correction doesn’t exist in a vacuum – it depends intimately on the underlying hardware and even on classical computing support. A critical cross‑dependency is the qubit connectivity graph: it dictates which stabilizers can run locally and which require routing, directly affecting QEC cycle length and effective distance. Here we outline the key challenges and dependencies that QEC research grapples with, as these will determine how quickly we can scale to a CRQC.
1. Physical Qubit Quality (“Below-Threshold” Hardware)
The foremost requirement is that the physical qubits and gates must have error rates below the QEC code’s threshold. If your qubits are too noisy (above threshold), adding QEC only makes things worse. This creates a chicken-and-egg problem: you need high-fidelity operations to benefit from QEC, but you need QEC to run the very large circuits that may be the only way to fully stress-test fidelity. In practice, the community has slowly pushed two-qubit gate error rates down to the ~0.1% ($$10^{-3}$$) range, which is finally within reach of the surface code threshold window (~0.5-1%). Further improvements in coherence, gate calibration, and materials (for superconducting qubits) are still needed to give more margin below threshold and to reduce correlated errors that can undermine QEC. If next-generation qubits can achieve, say, 0.01% error rates, the overhead to reach a given logical quality will decrease dramatically.
2. Speed of Syndrome Extraction and Feedback
QEC requires frequent measurements of error syndromes – typically every few microseconds in a superconducting system. The faster you can cycle the QEC loop, the more errors you catch before they accumulate. Today’s superconducting qubits can be measured in 1-2 microseconds with high fidelity, and gates operate in tens of nanoseconds, so a QEC cycle of ~1 µs is feasible.
Trapped ions are slower (gate times in tens of microseconds), but they have other advantages like longer memory.
Regardless of platform, the classical decoder must keep up with the flood of syndrome data. This has become its own subfield: building ultra-fast decoders (often on FPGAs or specialized hardware) that can infer the error and suggest a correction in real time.
Google’s latest experiments sustained real-time decoding with an average latency of around 63 µs while keeping pace with a 1.1 µs cycle time over a million consecutive QEC cycles – the decoder’s throughput matched the syndrome data rate even though individual corrections lagged by tens of microseconds. IBM is exploring a hybrid approach where some decoding is done in the cloud with slight delays. The bottom line is that real-time error correction is an orchestration of quantum and classical systems, and the classical side shouldn’t be underestimated – it might involve petabytes of data and massive parallel processing in a large-scale quantum computer.
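A simple way I find useful for reasoning about this requirement is to separate latency from throughput (a sketch with assumed numbers drawn loosely from the figures above):

```python
# Real-time decoding budget: the decoder may lag individual rounds (latency),
# but its sustained throughput must match the rate at which syndrome rounds
# are produced, or the backlog grows without bound. Numbers are illustrative.
cycle_time_us = 1.1          # QEC cycle cadence (one round of syndrome data per cycle)
decode_latency_us = 63.0     # average reaction time before a correction is known
decode_throughput_us = 1.0   # average decoding time per round once pipelined

rounds_in_flight = decode_latency_us / cycle_time_us
print(f"rounds awaiting decoding at any moment: ~{rounds_in_flight:.0f}")

if decode_throughput_us <= cycle_time_us:
    print("throughput keeps up: the backlog stays bounded and real-time QEC is sustainable")
else:
    backlog_growth_us = decode_throughput_us - cycle_time_us
    print(f"backlog grows by {backlog_growth_us:.2f} us of work per cycle -> the decoder falls behind")
```

The design lesson is that tens of microseconds of latency are tolerable (corrections can be tracked in software through Clifford operations and applied retroactively), but per-round throughput below the cycle time is non-negotiable.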
3. Overhead and Scale: Perhaps the biggest challenge
QEC demands a lot of physical qubits and gates for each logical qubit. The surface code, for example, might need on the order of 1,000 physical qubits to make one very reliable logical qubit (and that’s for a logical error rate around $$10^{-9}$$, suitable for breaking RSA). This overhead multiplies when you consider a full algorithm needing hundreds or thousands of logical qubits. The result is a machine of millions of physical qubits – which implies enormous hardware engineering feats, from fabricating chips and controlling electronics to cooling and power requirements. As my analysis noted, a million-qubit superconducting quantum computer might consume on the order of megawatts of power and cost billions of dollars if done with today’s technology. This is why there is such urgency in exploring more hardware-efficient QEC (like bosonic or LDPC codes) that could cut down the overhead by an order of magnitude or more. Even a factor of 10 reduction (from 1000 qubits per logical to 100 per logical) would profoundly change the practicality of building a CRQC. Until such breakthroughs occur, scaling up QEC will likely be a gradual climb – e.g., demonstrating 10 logical qubits, then 50, and so on, while juggling complexity and yield issues.
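Putting the suppression heuristic and the surface-code footprint together shows roughly where the million-qubit figures come from. The sketch below is not a vetted resource estimate – the constants $$A$$ and $$p_{\mathrm{th}}$$, the target error rate, and the logical-qubit count are all assumptions for illustration:

```python
def required_distance(p_phys, p_target, p_th=1e-2, A=0.1):
    """Smallest odd distance d with A * (p_phys/p_th)**((d+1)//2) <= p_target
    (heuristic surface-code scaling; all constants are illustrative)."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) // 2) > p_target:
        d += 2
    return d

def physical_qubits(n_logical, d):
    """Rotated surface-code footprint: 2*d^2 - 1 physical qubits per logical patch."""
    return n_logical * (2 * d * d - 1)

p_phys = 1e-3      # assumed physical error rate (~0.1%)
p_target = 1e-12   # assumed per-operation logical error rate for an RSA-2048-scale run
d = required_distance(p_phys, p_target)
print(f"distance needed: {d}")
print(f"physical qubits for 3,000 logical qubits: {physical_qubits(3000, d):,}")
```

With these assumptions the answer lands in the low millions of physical qubits – and the same little calculator makes the payoff of better hardware obvious: rerunning it with p_phys = 1e-4 roughly halves the required distance and cuts the physical-qubit count by about a factor of four.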
Sparse connectivity multiplies overhead: every non‑local stabilizer or data‑data interaction paid for with SWAP chains/teleportation adds depth and error opportunities – hence the centrality of connectivity & routing to keep QEC round time and LOB in check.
4. Preservation of Benefits at Scale
A subtle but important point is ensuring that as QEC codes grow (distance increases, more logical qubits), the error suppression continues and doesn’t plateau or reverse. There are concerns about correlated errors (e.g., a cosmic ray or a chip glitch flipping many qubits at once) that could wipe out the gains of a larger code. Indeed, Google has found that rare correlated error events can set a floor that dominates at large code distances unless mitigated. Engineering the system to minimize shared error sources – like using tunable couplers to eliminate unwanted interactions, improving cryogenics to avoid quasiparticle bursts, etc. – is part of making QEC continue to work as we scale.
Similarly, managing the yield and “dead qubits” on a large chip matters: a few bad qubits in a patch could lower the effective distance. So scaling QEC isn’t just a matter of throwing more qubits on a wafer; one must maintain high uniformity and handle error sources that might grow with system size.
5. Interdependencies
QEC is deeply intertwined with other capabilities in the CRQC stack. For instance, it depends on high-fidelity syndrome measurement – fast, accurate readout of qubits is required to get the error signals. It also relies on below-threshold operation of the raw hardware – if the qubits aren’t good enough, QEC fails. The efficiency of QEC feeds into the Logical Qubit Capacity (LQC) metric – essentially how many good logical qubits you have – and the Logical Operations Budget (LOB) – how many operations you can do before failure. In fact, QEC is the primary lever to improve those top-level metrics. But it puts pressure on other parts: e.g., the decoder (often called out as its own capability) must be fast and accurate ; the throughput of operations (how fast you can cycle qubits) matters because slower cycles mean more errors accumulate between corrections. In summary, improving QEC will often require a holistic approach: better qubit hardware, better cryo and control engineering, smarter classical processing, and even software optimizations to schedule and compile fault-tolerant circuits efficiently. It truly touches everything.
QEC performance is inseparable from the hardware connectivity graph. Surface‑code stabilizers assume local neighbor couplings; any missing edge must be synthesized via routing (SWAP chains, shuttling, or teleportation), which adds cycles, compounds two‑qubit error exposure, and can serialize checks – raising LOB and depressing QOT. 
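To put a number on that routing penalty, here is a hedged sketch of how a chain of SWAPs compounds two-qubit error exposure for a single non-local interaction (assuming each SWAP compiles to three two-qubit gates and errors are independent – both simplifications):

```python
def routed_interaction_error(p2q, hops):
    """Probability that at least one two-qubit gate fails while routing one
    interaction across `hops` SWAP steps (3 two-qubit gates per SWAP),
    plus the final entangling gate itself. Independent-error assumption."""
    n_gates = 3 * hops + 1
    return 1 - (1 - p2q) ** n_gates

p2q = 1e-3  # assumed two-qubit gate error rate
for hops in (0, 2, 5, 10):
    print(f"{hops:2d} SWAP hops -> effective error ≈ {routed_interaction_error(p2q, hops):.4f}")
```

Ten hops turns a 0.1% gate into an effective ~3% interaction – worse than the surface-code threshold – which is why connectivity and routing are treated as first-class capabilities rather than compiler details.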
The Road Ahead: How to Track QEC Progress
With QEC now at an experimental inflection point, how can professionals monitor the field’s progress toward a cryptography-breaking quantum computer? Here are some key metrics and milestones to watch for in upcoming research papers and announcements:
Increasing Code Distance
Code distance (d) is the primary lever for lowering logical error rates. We’ve seen d=5 and d=7 in 2023-24; the next few years may bring d=9, 11, 13 on various platforms. Each step up in distance should ideally show an exponential suppression of logical error rate. Tracking papers for phrases like “distance-9 surface code” or “d=11 Bacon-Shor code” will indicate steady progress. The holy grail is distance ~25 for surface code, which is what’s estimated for full-scale cryptographic algorithms.
Logical Qubit Error Rates
Keep an eye on the reported logical error per operation or per circuit figures. Right now they hover around $$10^{-2}$$ to $$10^{-3}$$. Getting to $$10^{-5}$$ or $$10^{-6}$$ would be a major milestone, indicating that a logical qubit is as reliable as, say, typical classical memory bits. For context, breaking RSA might require logical error rates of ~$$10^{-12}$$ or better, so there’s a long way to go. But a steady march downward in logical error probability, year over year, will show that scaling and engineering improvements are paying off.
Number of Logical Qubits Encoded Simultaneously
So far, experiments have typically dealt with one logical qubit (or at most two). The transition to multiple logical qubits in one device will be significant. IBM’s entangled logical qubits were an early example. If we start seeing demonstrations of, say, a register of 5 or 10 logical qubits all maintained with QEC, that will be big news. It will also surface new challenges around crosstalk and scheduling, so it’s a true test of system integration.
Logical Operations and Connectivity
Creating a logical qubit that just sits there (logical memory) is the first step. Next is performing logical gates between them – e.g. a fault-tolerant CNOT or T-gate. Watch for experiments showing two logical qubits interacting while error correction continues in the background. Also, look for lattice surgery or braiding techniques being used to move and combine logical qubits (this is how one performs operations in topological codes). A big future milestone would be a small algorithm implemented on logical qubits, like a logical version of Grover’s search on 2-3 logical qubits.
Real-Time Decoding Latency
On the more engineering side, pay attention to decoder improvements. As codes grow, decoding speed and accuracy must keep up. If papers report faster decoder implementations (e.g. “sub-microsecond decoding achieved” or use of customized decoder chips), that’s a strong signal that we’re solving the classical bottleneck. Similarly, demonstrations of feedback – where a quantum computer actually applies corrections in real time based on decoder output – will be a milestone in closing the QEC loop fully.
QEC in Different Modalities
Thus far, superconducting qubits and trapped ions have led QEC experiments. It’s worth watching whether other platforms join in force. For example, neutral atom arrays have recently achieved two-qubit gate fidelities around 99.5%, and reconfigurable atom arrays have already shown early logical-qubit circuit demonstrations; sustained, repeated QEC cycles on that platform would broaden the base of hardware options. Likewise, if photonic qubit systems (which inherently require error correction for loss) demonstrate rudimentary QEC, that would be notable. Each modality might favor different codes (ions might do concatenated codes or small block codes; superconductors do surface codes; photonics do cluster-state codes, etc.). Note the modality‑level connectivity differences (e.g., ions’ all‑to‑all within a trap vs. planar nearest‑neighbor lattices): they change how much routing QEC must perform each cycle. See quantum connectivity & routing efficiency for the implications across platforms.
So keep an eye on QEC news from companies like IonQ, Quantinuum, IBM, Google, as well as emerging startups.
Overhead Reduction Techniques
Finally, track research on reducing overhead: this includes bosonic code advancements, like larger cat codes or the first logic gates between bosonic logical qubits; qLDPC code experiments that push to higher distance or integrate with superconducting multi-chip modules (since long-range couplers can help LDPC); and any signs of alternate error correction strategies (e.g. Microsoft’s Majorana approach or new hybrid schemes combining error mitigation and partial error correction). An interesting development to watch would be if someone demonstrates a logical qubit with, say, only 50 physical qubits that beats a surface-code logical of 1000 qubits – that would turn heads, as it hints at a leap in efficiency.
Conclusion
Quantum Error Correction has progressed from a theoretical curiosity to an experimental reality at small scales. It is truly the foundation for any long quantum program – without it, quantum computers remain short-lived, noisy toys incapable of solving big problems. With it, and with continued improvements, a quantum computer can in principle run forever, churning through billions of operations reliably until a solution emerges. In this article, we reviewed how QEC works and why it’s needed, then traced the arc of progress from Shor’s 9-qubit code in 1995 to today’s fledgling logical qubits that are just beginning to outperform their physical counterparts. The journey has involved inventing clever codes, proving the threshold theorem, and years of meticulous experiments to reach that threshold in practice.
As of 2025, the current status of QEC is that we can encode and maintain single logical qubits with error rates slightly better than physical qubits (Technology Readiness Level ~4 on a 1-9 scale). We have not yet built a fully error-corrected universal quantum computer – far from it – but the “superscaling” era (where adding qubits actually gains us ground) has finally arrived. The gap to a CRQC remains large: we likely need on the order of a few thousand logical qubits at distance ~25, meaning perhaps a million physical qubits, to run something like Shor’s algorithm on RSA-2048. The path to get there will require continued advances across the board: qubit quality, quantity, architecture, and error-correction software. One practical gatekeeper is whether millions of physical qubits can act as a single machine with near‑transparent logical‑level connectivity. The coming years will be about scaling up by an order of magnitude at a time – from ~100 qubits with QEC today, to 1000s with QEC, and so on – while keeping error rates in check.
Encouragingly, major quantum hardware players are now laser-focused on fault tolerance. IBM, for instance, has declared a goal to deliver a 200-logical-qubit fault-tolerant quantum computer by 2029, which implies aggressive improvements in QEC and modular scaling. Google’s roadmap similarly places error-corrected qubits as the next big milestone after quantum advantage. Each breakthrough in QEC – every extra distance, every extra logical qubit, every extra 9’s of fidelity – brings us closer to the era of useful quantum computers that can tackle problems beyond the reach of classical machines.
Quantum Upside & Quantum Risk - Handled
My company – Applied Quantum – helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.