(Updated in Sep 2025)
(Note: This is a living document. I update it as credible results, vendor roadmaps, or standards shift. Figures and timelines may lag new announcements; no warranties are given; always validate key assumptions against primary sources and your own risk posture.)
Introduction
This guide is the most detailed, end‑to‑end map I know of for understanding what it will actually take to reach a cryptographically relevant quantum computer (CRQC) – i.e., a machine able to break RSA-2048 – not just headline qubit counts. It breaks the problem into the capabilities that determine CRQC feasibility and timing, shows their interdependencies, and anchors each one in observable metrics, current status, gaps, and TRLs. If you’re trying to forecast Q‑Day with rigor (or defend against it), this is designed to be your working reference, not a one‑off blog post.
How to read this guide
The roadmap is organized into three categories that reflect how real systems mature:
- Foundational Capabilities: (1) Quantum Error Correction, (2) Syndrome Extraction, (3) Below‑Threshold Operation & Scaling
- Core Logical Operations: (4) High‑Fidelity Logical Cliffords, (5) Magic‑State Production & Injection (Non‑Cliffords)
- End‑to‑End Execution: (6) Full Fault‑Tolerant Algorithm Integration, (7) Decoder Performance (real‑time QEC), (8) Continuous Operation (multi‑day stability)
Each of the 8 capabilities is presented in three layers:
- A one‑page table for quick triage: Direct CRQC Impact; What it means; CRQC Requirements; Current Status; The Gap to CRQC; Interdependency Alert; Why it matters; and a Primary Readiness Lever (how this capability moves LQC / LOB / QOT).
- A short chapter that explains the concept in plain language, cites the best public results, and spells out what must still be demonstrated.
- A linked dedicated deep‑dive article with more citations for each capability.
If you’re skimming, browse just the tables to grasp priorities and blockers. If you’re planning or auditing, read the chapters. If you are trying to fully understand each capability, read also the linked dedicated deep‑dive article with more citations, platform nuances, and milestone checklists.
How this connects to the executive view
Alongside this capability map, I maintain a CRQC Readiness Benchmark (Q‑Day Estimator) that compresses the eight capabilities into three top‑level levers:
- Logical Qubit Capacity (LQC),
- Logical Operations Budget (LOB), and
- Quantum Operations Throughput (QOT).
The tool rolls those into a single readiness score and a projected Q‑Day crossing – useful for boards, risk committees, and scenario analysis.
In short: use the Benchmark for a fast, comparable, crypto‑focused dashboard, and use this capability framework to interrogate assumptions, tie claims to observable milestones and TRLs, and make risk‑based PQC migration decisions.
What Is a “Cryptographically Relevant” Quantum Computer (CRQC)?
A cryptographically relevant quantum computer (CRQC) is a quantum system powerful enough to break modern cryptographic algorithms. In practical terms, it means a quantum computer with thousands of error-corrected (“logical”) qubits (built from millions of physical qubits) that can run algorithms like Shor’s factoring at scale to crack widely used public-key encryption (e.g. RSA, ECC). Today’s noisy quantum devices are far from this threshold – they lack the stability, qubit count, and fault-tolerant code needed to threaten even modest cryptographic keys.
To understand the gap, consider a recent milestone estimate by Google’s Quantum AI team (Craig Gidney, 2025). Factoring a 2048-bit RSA key (a standard encryption strength) is projected to require on the order of 1,399 logical qubits (error-corrected qubits), which corresponds to roughly 1 million physical qubits when using the leading quantum error correction code (the surface code at about distance 25). Such a machine would need to perform around 6.5 billion nontrivial gate operations (Toffoli/T gates) over a runtime of about 5 days continuous operation, with each physical gate error kept below ~0.1%. These numbers – 5 days of sustained, error-corrected computation on a million-qubit device – are staggering compared to today’s quantum computers (which have at most a few hundred qubits running for seconds). Not surprisingly, no current quantum system meets these requirements. Achieving a CRQC will demand major advances in multiple dimensions of quantum technology.
Overview: The CRQC Target from Gidney 2025
First, let’s establish what we need for a CRQC based on Gidney’s May 2025 paper:
| Logical qubits: | ~1,399 logical qubits (with 1,280 in “cold storage” and others in “hot storage”) |
| Physical qubits: | ~1 million physical qubits total |
| Code distance: | 25 (using surface codes) |
| Runtime: | ~5 days (4.96 days estimated) |
| Toffoli gates (non-Clifford operations): | ~6.5 billion |
| Physical error rate: | 0.1% (1 error per 1,000 operations) |
| Surface code cycle time: | 1 microsecond |
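To make these figures concrete, here is a minimal sanity‑check sketch in Python. It takes the cycle time, runtime, distance, and the hot/cold storage split from the table and the QEC chapter below (≈1,352 physical qubits per “hot” logical patch, ≈430 per “cold” yoked‑storage logical); routing, factory, and workspace qubits are deliberately left out, which is why the result lands well under the ~1 million total.

```python
# Back-of-envelope check of the CRQC target figures tabulated above.
# Hot/cold costs and the 1,280-cold split come from the table and the QEC
# chapter below; factory and routing qubits are intentionally not counted.

SECONDS_PER_DAY = 86_400
cycle_time_s = 1e-6                      # 1 µs surface-code cycle
runtime_s = 5 * SECONDS_PER_DAY          # ~5 days of continuous operation
distance = 25

total_cycles = runtime_s / cycle_time_s
print(f"QEC cycles over the run: {total_cycles:.2e}")        # ~4.32e11

hot_cost = 2 * (distance + 1) ** 2       # ~1,352 physical qubits per "hot" logical
cold_cost = 430                          # per "cold" logical (yoked storage)
cold_logicals, hot_logicals = 1_280, 1_399 - 1_280
storage_qubits = cold_logicals * cold_cost + hot_logicals * hot_cost
print(f"Storage-only physical qubits: {storage_qubits:,}")   # ~711k before overheads
```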
Now let’s map each capability:
1. Foundational Capabilities
Foundational capabilities are the base layers enabling any large-scale, fault-tolerant quantum computer. They include quantum error correction, syndrome extraction, and below-threshold operation & scaling. These are “make-or-break” capabilities – without them, a CRQC cannot be realized. They deal with managing quantum errors and scaling up qubits in a stable way, which is the fundamental obstacle to useful quantum computing.
1.1 Quantum Error Correction (QEC)
| Direct CRQC Impact: | CRITICAL & BLOCKING – the foundation for any long program |
| What it means: | Encode one reliable logical qubit from many physical qubits; continuously detect/correct errors without revealing the data. |
| CRQC Requirements: | Surface code at distance ~25; ~2×(25+1)² = 1,352 physical qubits per “hot” logical qubit; ~430 per “cold” logical qubit (yoked storage); continuous cycles at ~1 µs. |
| Current Status: | ✅ Logical qubits and below‑threshold error suppression shown at small distances (d≈3–7); multi‑logicals still limited (TRL ~4). |
| The Gap to CRQC: | Scale to d≈25 with many logicals operating concurrently; preserve gain at size; reduce overhead. |
| Interdependency Alert: | Depends on syndrome extraction speed/fidelity; below‑threshold hardware; decoder accuracy/latency; physical‑to‑logical overhead impacts everything. |
| Why it matters for CRQC: | Without robust QEC, you can’t store – let alone compute across – 6.5B non‑Cliffords over days. |
| Primary Readiness Lever: | LQC & LOB (secondary effect on QOT via cycle cadence). |
1.1.1 What it is
Quantum error correction is the method of encoding a single logical qubit of information into multiple physical qubits in order to detect and correct errors. In essence, many noisy qubits work together to behave like one near-perfect qubit.
For example, in a surface code (a leading QEC code), a logical qubit might be encoded in a two-dimensional patch of dozens of physical qubits, with entangled measurements (“stabilizers”) constantly checking for errors.
The idea is analogous to classical error-correcting codes for data storage, but for qubits the process is continuous. If one qubit “blips” due to noise, the error correction scheme identifies it from the syndrome (the pattern of measurement outcomes) and corrects it without disturbing the encoded data.
1.1.2 Role in CRQC
QEC is absolutely critical – it is the only known way to run billions of quantum operations reliably.
Physical qubits by themselves are extremely error-prone (today’s best have error rates on the order of 1 in 1,000 per operation). At that error rate, running 6.5 billion operations would almost certainly fail. Error correction suppresses the effective error rate by orders of magnitude, making long computations possible.
A CRQC, by definition, must be a fault-tolerant machine where logical qubits have error rates low enough to carry out cryptographic algorithms to completion. Without QEC, even millions of physical qubits would decohere long before breaking any encryption.
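A one‑line calculation shows why. Assuming, purely for illustration, independent errors at the quoted physical rate of one per 1,000 operations:

```python
import math

# Illustrative only: independent errors at today's best physical error rates.
p_physical = 1e-3          # ~1 error per 1,000 operations
ops = 6.5e9                # non-Clifford operation count for RSA-2048 (Gidney 2025)

expected_errors = p_physical * ops
log10_p_success = ops * math.log1p(-p_physical) / math.log(10)
print(f"Expected uncorrected errors: {expected_errors:.1e}")   # ~6.5 million
print(f"P(zero errors) ≈ 10^{log10_p_success:.0f}")            # hopeless without QEC
```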
1.1.3 Status & current research
Quantum error correction has progressed from theory to small-scale demonstrations. Notably, Google demonstrated surface-code logical qubits on superconducting hardware – distance 5 (49 physical qubits) in 2023, extended to distance 7 in 2024 – showing that the logical error rate drops as the code is enlarged, with the distance-7 logical qubit outperforming the best single physical qubit in the device. This was a landmark proof-of-concept: it showed that adding qubits (and performing QEC cycles) can actually reduce error – meaning the hardware was operating below the error threshold (more on that below).
Other groups (IBM, Quantinuum, academic labs) have also realized small logical qubits or repeated error-correction cycles in various platforms. However, these logical qubits are still very limited – typically the logical error rate is only slightly better than physical, or even worse in some cases, and only one logical qubit may be encoded at a time. We have not yet achieved a “fully error-corrected” register of many logical qubits.
Still, the progress is encouraging: the fundamental concept works in practice. Researchers are now trying to increase code distances (to further suppress errors) and to implement QEC on larger arrays of qubits. The surface code is the primary choice for most because of its high error threshold and compatibility with 2D qubit layouts. Alternative codes (e.g. heavy-hexagon variants, bosonic codes) are also being explored for efficiency. At this stage, QEC is laboratory-proven in small scales – TRL ~4 – but not yet a solved engineering problem for large systems.
1.1.4 Key interdependencies
QEC doesn’t stand alone – it relies on fast, reliable syndrome extraction (to get error info) and on physical qubits operating below a certain error rate (the threshold).
It also produces overhead: QEC only works if we can afford many extra qubits and operations for encoding and correcting. This ties QEC to system scale and to decoder performance (a classical processor must rapidly crunch the syndrome data to tell us which correction to apply).
In short, effective QEC is intertwined with almost every other capability on this list: without high fidelity hardware, real-time readout and decoding, and enough qubits to spare, error correction will falter. Conversely, improvements in QEC (better codes, higher distances) can relax demands on physical hardware or decoder speed. It’s a delicate balancing act.
1.1.5 TRL estimate
TRL 4 (Small-scale experimental validation). [QEC has been demonstrated on a few qubits with limited performance. We’ve seen a logical qubit outperform a physical one in error rate, but we are far from a fully error-corrected computer.]
1.2 Syndrome Extraction (Error Syndrome Measurement)
| Direct CRQC Impact: | CRITICAL – sets the heartbeat/clock of fault tolerance. |
| What it means: | Measure stabilizers each cycle to locate errors without collapsing logical information; stream syndrome bits to the decoder in real time. |
| CRQC Requirements: | ~1 µs cycle time sustained for ~5 days (≈4.32×10¹¹ cycles) across ≈10⁶ physical qubits; high‑fidelity, low‑crosstalk readout with rapid reset. |
| Current Status: | ⚠️ Repeated rounds on small codes; ~10⁶ cycles (~1 s) shown; far from 10¹¹-10¹² cycles (TRL ~4). |
| The Gap to CRQC: | 5-6 orders of magnitude longer continuous operation at target speed and scale; robust parallel readout. |
| Interdependency Alert: | Faster cycles -> decoder throughput burden; readout fidelity <-> below‑threshold margin; infrastructure & control bandwidth. |
| Why it matters for CRQC: | Missed/late syndromes cause uncorrected error growth -> logical failure. |
| Primary Readiness Lever: | QOT (secondary lift to LOB through cleaner measurements). |
1.2.1 What it is
Syndrome extraction is the process of measuring the collective properties of a group of qubits to detect errors without collapsing the quantum data. In a QEC code, “syndrome” bits are the outcomes of measuring parity-check operators (stabilizers) that indicate where an error has occurred. For example, in a surface code you continuously measure certain multi-qubit correlations; a change in a measurement result (“syndrome flip”) tells you an error happened on one of the qubits involved in that check.
Importantly, these measurements are designed so that they don’t reveal the logical data (they give no information about the encoded 0/1, only about errors). Syndrome extraction typically involves entangling the data qubits with dedicated ancilla qubits that are then measured. The result is a stream of classical bits (the syndrome) that feed into the decoder.
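As a toy illustration of the principle (a classical simulation of the 3‑bit repetition code against bit flips, not the surface code), the two parity checks below locate a single flipped bit without ever reading out the encoded value:

```python
# Toy model: 3-bit repetition code, bit-flip errors only, simulated classically.
# The two parity checks play the role of stabilizers; their outcomes are the syndrome.

def syndrome(bits):
    """Return the two parity-check outcomes; 1 means the check was violated."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

logical_zero = [0, 0, 0]            # encoded 0; [1, 1, 1] would encode 1
for flipped in (None, 0, 1, 2):     # no error, or a flip on bit 0 / 1 / 2
    state = list(logical_zero)
    if flipped is not None:
        state[flipped] ^= 1
    print(f"error on bit {flipped}: syndrome = {syndrome(state)}")
# Each single error yields a distinct syndrome -- (0,0), (1,0), (1,1), (0,1) --
# and the same pattern appears whether the encoded value is 0 or 1.
```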
1.2.2 Role in CRQC
Timely and reliable syndrome extraction is essential for QEC to work. It’s the sensor system that finds errors so they can be corrected. In a CRQC, syndrome measurements will be happening constantly across the entire processor – effectively forming the “heartbeat” of the machine’s error correction cycle (e.g. one round every microsecond).
If syndrome extraction fails (due to measurement errors, delays, or crosstalk), errors could go undetected and spread. Thus, this capability is critical to keep the quantum computer on track. One can view it as the real-time monitoring layer that makes fault tolerance possible.
1.2.3 Status & current challenges
Basic syndrome extraction has been demonstrated in small QEC experiments. For instance, the Google surface code experiment performed repeated rounds of syndrome measurements on 49 qubits, and other groups have measured syndromes on distance-3 or distance-2 codes in ion traps and superconducting chips. In these small tests, measurements are possible but remain a significant challenge.
Today’s superconducting qubits take on the order of a few microseconds to be measured (and more time to reset), and readouts are not perfectly accurate. In a surface code with a 1 µs cycle time assumption, measurement and processing need to be extremely fast – faster than what we currently achieve in larger systems.
Moreover, performing many measurements in parallel can introduce crosstalk and require very robust classical control systems.
The current state is that we can extract syndromes on small scales, but not yet at the speed and fidelity a CRQC demands. For example, some experiments have demonstrated a few consecutive rounds of error correction, but no system is doing millions of rounds autonomously.
Improving quantum measurement technology (e.g. faster qubit readout resonators, multiplexed measurement, better signal-to-noise) and ensuring the readouts themselves don’t inject too much error are active research areas.
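The “5-6 orders of magnitude” gap quoted in the table can be checked directly. Both inputs below are order‑of‑magnitude figures: the demonstrated endurance assumes roughly 10⁶ consecutive rounds (about one second at a 1 µs cycle), as in the status row above.

```python
import math

# Endurance gap between demonstrated and required syndrome extraction.
required_cycles = 5 * 86_400 / 1e-6    # ~4.3e11 rounds over ~5 days at 1 µs/cycle
demonstrated_cycles = 1e6              # ~1 s of continuous rounds (status row above)

gap = required_cycles / demonstrated_cycles
print(f"Endurance gap: {gap:.1e}x (~{math.log10(gap):.1f} orders of magnitude)")
```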
1.2.4 Key interdependencies
Syndrome extraction sits at the intersection of quantum and classical systems. It depends on hardware (the qubits and readout electronics) and on the decoder (which must receive and interpret the syndrome quickly). It also ties into continuous operation – the qubits must endure rapid-fire measurements constantly. Any slow-down or glitch in syndrome extraction could cause a backlog of errors.
Thus, the syndrome rate essentially sets the clock speed for the whole computer. It’s closely linked to decoder performance (the classical processing must keep up with the syndrome data rate).
Finally, syndrome extraction benefits from qubit physical improvements: higher measurement fidelity and lower crosstalk directly translate to more reliable error info for the decoder.
1.2.5 TRL estimate
TRL 4 (Validated in lab at small scale). [Simple syndrome measurement loops have been run on small codes, but not yet at the scale or speed required for CRQC. We have the principle working; scaling it is an ongoing engineering challenge.]
1.3 Below-Threshold Operation & Scaling
| Direct CRQC Impact: | CRITICAL & BLOCKING — the “do or die” criterion. |
| What it means: | As you increase code distance/size, logical error must drop exponentially; requires physical error well below threshold. |
| CRQC Requirements: | Physical error ≲0.1% per operation; maintain exponential suppression up to d≈25 on large arrays; stability over scale. |
| Current Status: | ⚠️ Exponential suppression shown to d≈7; threshold‑adjacent fidelities on small devices (TRL ~3–4). |
| The Gap to CRQC: | Prove suppression to d≈20–25 on 10³-10⁶ qubits while holding/further lowering physical error. |
| Interdependency Alert: | Hardware fidelity/coherence; syndrome quality; decoder quality; correlated noise/crosstalk at scale. |
| Why it matters for CRQC: | If you’re not below threshold at scale, more qubits make things worse -> no CRQC. |
| Primary Readiness Lever: | LQC & LOB (secondary drag on QOT as distance – and cycles per gate – grow). |
1.3.1 What it is
“Below-threshold” refers to operating the quantum hardware at error rates beneath the critical threshold of the error-correcting code. Every QEC code has a threshold: roughly, a physical error rate below which adding more qubits and doing more correction reduces the logical error rate (and above which, error correction fails to help).
For the surface code, the threshold is often cited around ~1% error per gate – if physical gate error is below ~1%, increasing code size will exponentially suppress logical errors. If above 1%, adding more qubits actually makes things worse.
Scaling in this context means growing the number of qubits and operations while staying in that below-threshold regime. Essentially, “below-threshold scaling” is the ability to build a bigger quantum processor without your effective error rates blowing up. It requires both that each qubit and gate is high-quality, and that system-wide noise doesn’t increase too much as you add more qubits.
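The usual rule of thumb for this suppression is p_L ≈ A·(p/p_th)^((d+1)/2). The sketch below plugs in assumed round numbers (A = 0.1, threshold 1%, physical error 0.1%) purely to show the exponential scaling – it does not model any specific device:

```python
# Exponential suppression below threshold (rule-of-thumb model, assumed constants).
A, p_th = 0.1, 1e-2     # prefactor and surface-code threshold (illustrative values)
p = 1e-3                # physical error per operation, i.e. 10x below threshold

for d in (3, 7, 15, 25):
    p_logical = A * (p / p_th) ** ((d + 1) / 2)
    print(f"d = {d:2d}: logical error per QEC round ~ {p_logical:.0e}")
```

Every two steps of code distance buys roughly another factor of p/p_th (here 10×) in logical error suppression – which is why distance ~25 can survive a multi‑day run, provided the hardware really is an order of magnitude below threshold.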
1.3.2 Role in CRQC
This capability is a fundamental blocking requirement. If we cannot maintain sub-threshold error rates as we scale to thousands or millions of qubits, a CRQC will remain impossible. Error correction only works when each additional layer of redundancy yields net improvement in error rates.
For a CRQC, we need to string together billions of operations; that only becomes feasible if each logical operation fails with extremely low probability (say 10⁻⁹ or less). Achieving that means pushing physical error rates low enough and having enough headroom that even when we scale up, the error stays controlled.
In short, below-threshold operation is what enables fault-tolerant scaling: the machine can grow and run longer computations without encountering an error avalanche. This is often considered the critical transition in quantum computing – moving from the noisy intermediate scale (NISQ) regime to the fault-tolerant regime.
1.3.3 Status & feasibility
Evidence is emerging that some platforms are at or near the error threshold in small devices. The Google experiment mentioned earlier was a strong sign – by achieving a logical qubit that outperformed a single qubit, they effectively showed they were operating below threshold for that code.
Superconducting qubits and ion traps have reported two-qubit gate errors on the order of 0.1% – 1%, which is around the threshold range for surface code. However, these rates have only been achieved on systems with tens of qubits. It remains to be seen if a device with, say, 1000 qubits can still have each gate error be 0.1%. Often, scaling up introduces new errors (cross-talk, vibrations, inhomogeneities, etc.).
The current state is that below-threshold performance has been demonstrated in principle on a small scale, but scaling it up is unproven. We know the physical error rates that are needed – roughly on the order of 10⁻³ or better for gates – and these are at the edge of today’s best results.
For instance, 0.1% (1 error in 1,000) was assumed in Gidney’s design, which is roughly five to ten times better than the ~0.5–1% that most current quantum hardware achieves on a few qubits.
There’s also the matter of system size: a million-qubit machine introduces engineering challenges (heat, control wiring, etc.) that could impact error rates. Scaling must therefore go hand-in-hand with improving each qubit’s coherence and gate fidelity.
Companies like IBM, Google, and IonQ have roadmaps to increase qubit counts into the thousands over this decade, and a key question will be whether those larger chips can keep errors in check. If error rates rise above threshold as devices grow, engineers will have to find ways to mitigate noise or else the approach will hit a wall.
1.3.4 Key interdependencies
Naturally, below-threshold operation depends on the hardware quality – materials, fabrication, isolation, better qubit designs – to get error per gate down.
It also depends on stability and calibration: keeping a large array below threshold over time (ties into continuous operation).
There’s interplay with error correction codes: some codes have higher thresholds or can tolerate certain noise biases better, which might ease hardware demands. Conversely, extremely good hardware could allow smaller codes or alternative schemes.
This capability also links to magic state production: magic state distillation has its own “threshold” for input state fidelity. If the physical qubits are below threshold for error correction, they also need to be good enough that raw magic states are above the distillation threshold (otherwise you have to use extra rounds of distillation).
In summary, maintaining below-threshold error rates while scaling up is a holistic challenge involving nearly every part of the system (materials, control electronics, environment).
1.3.5 TRL estimate
TRL 3 (Analytical/experimental proof of concept). [We have signs of below-threshold behavior on small devices and credible physics models, but we haven’t yet scaled it. This is an ongoing experimental challenge as qubit counts grow.]
2. Core Logical Operations
Even with a stable foundation of many logical qubits, we need the logical operations that actually execute an algorithm like Shor’s. Core operations include the full set of quantum gates – especially the distinction between Clifford gates (the “easy” ones) and non-Clifford gates (the “hard” ones that typically require special resources like magic states).
In a fault-tolerant quantum computer, some logical gates can be done transversally or via clever code manipulations, while others (non-Cliffords like the T gate) cannot – they demand a heavy overhead in the form of magic state production. This section covers the capabilities to implement both kinds of logical operations at scale.
2.1 High-Fidelity Logical Clifford Gates
| Direct CRQC Impact: | HIGH – not usually the bottleneck, but essential scaffolding. |
| What it means: | Fast, reliable logical X/Y/Z, H, S, and CNOT (often via lattice surgery) on many logical qubits in parallel. |
| CRQC Requirements: | Low‑latency logical Cliffords with error << logical budget; multi‑cycle surgery that fits 1 µs cadence; scalable parallelism across ~1.4k logicals. |
| Current Status: | ⚠️ Logical Cliffords demonstrated at small distances; surgery patterns improving; platform‑dependent limits (TRL ~4–5). |
| The Gap to CRQC: | Hundreds to thousands of logical qubits, high‑fan‑out entanglement, and schedule‑aware routing without degrading distance. |
| Interdependency Alert: | Syndrome cadence; decoder latency between surgery rounds; layout/connectivity (esp. superconducting). |
| Why it matters for CRQC: | Cliffords carry the circuit “bulk” and enable distillation; slow Cliffords throttle throughput. |
| Primary Readiness Lever: | QOT (secondary support to LOB by reducing error accumulation per layer). |
2.1.1 What they are
Clifford gates are a set of quantum operations (including things like Pauli X, Y, Z, the Hadamard, Phase, and CNOT gates) that have special algebraic properties. In many error-correcting codes, Clifford operations are relatively “easy” to implement fault-tolerantly – for example, by applying the gate transversally to all physical qubits in a code block, or by using lattice surgery (merging and splitting code patches) to enact entangling gates like CNOT. The key point is that these operations don’t increase the complexity of errors (they map Pauli errors to Pauli errors), so error correction can keep up without extra magic.
In practice, performing a logical Clifford gate might involve a sequence of coordinated physical operations on the encoded qubits (e.g. a series of multi-qubit measurements for lattice surgery). They are called “logical” gates because they act on the encoded logical qubits (as opposed to just physical-level manipulations).
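The “errors stay Pauli errors” property can be verified directly with small matrices. A minimal numpy sketch (generic linear algebra, nothing specific to any code):

```python
import numpy as np

# Single-qubit Paulis, the Hadamard, and a CNOT (control = first qubit).
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Conjugating a Pauli error by a Clifford yields another Pauli (possibly on more qubits).
print(np.allclose(H @ X @ H.conj().T, Z))                                 # H maps X -> Z
print(np.allclose(CNOT @ np.kron(X, I) @ CNOT.conj().T, np.kron(X, X)))   # X -> X⊗X
```

A T gate, by contrast, maps X to a superposition of Paulis – which is exactly why non‑Cliffords need the special treatment described in capability 2.2.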
2.1.2 Role in CRQC
Clifford gates form the backbone of quantum circuits. A cryptographic algorithm implementation will contain a large number of Clifford operations (for instance, preparations of Bell pairs, Fourier transform layers, etc.), typically interwoven with non-Clifford T gates.
While Clifford gates alone can be simulated classically (they’re not enough for quantum advantage), in a quantum computer they are the workhorse operations that move data around and perform most of the “easy” parts of the computation.
In a CRQC, we must be able to perform logical Clifford gates reliably on potentially thousands of logical qubits. Fortunately, since they are easier to implement fault-tolerantly, they are generally not the limiting factor – but they are still critical. Every logical Clifford needs to be high-fidelity and fast relative to error correction cycles.
If Clifford gates are too slow or error-prone, they could bottleneck the algorithm or introduce logical faults.
2.1.3 Status & current research
Among all CRQC capabilities, logical Cliffords are relatively advanced in development. Many experiments have already demonstrated elementary logical gate operations on small codes. For example, researchers have used lattice surgery techniques (which involve measuring joint stabilizers between code patches) to entangle and disentangle logical qubits – essentially performing logical CNOT gates – on prototypes. Clifford operations like state initialization, measurement of logical qubits in the X/Z bases, and basic logic between encoded qubits have been shown on distance-2 or distance-3 surface codes in a few labs.
These successes show that we know how to do logical Cliffords with current technology, at least on a limited scale. The main limitation right now is that these have been done with very few logical qubits (often just two logical qubits interacting, or one logical qubit being manipulated while error-corrected).
The fidelities also need to improve as we go to higher distance codes.
But overall, performing a logical CNOT or a logical Hadamard gate is not expected to be the hardest part of reaching a CRQC. It’s more about scaling up the number of such gates and doing them in parallel on many qubits.
Ongoing research is refining methods to make logical Cliffords faster and more efficient – for instance, optimizing lattice surgery patterns, or exploring newer codes where more gates are transversal.
Some new error-correcting code proposals claim to allow an even larger set of gates transversally (e.g. some exotic LDPC codes), which could simplify logic operations.
For now, the surface-code approach means we’ll rely on measurement-based operations (like lattice surgery) for entangling gates, which have been proven out in small demos.
2.1.4 Key interdependencies
Logical Clifford gates depend directly on the error-corrected qubits being in place – they assume we have encoded qubits of a certain distance. They also rely on the syndrome extraction and decoding process continuing to run in the background; performing a gate can spread errors, so error correction must keep running during the operation to catch any resulting issues.
The speed of a logical gate is often limited by how fast we can do the necessary sequence of operations and then allow the decoder to catch up. For example, a lattice-surgery CNOT requires multiple rounds of syndrome measurements between the two patches (on the order of the code distance) to complete. If those rounds are too slow, it slows the algorithm. Thus, Clifford gate performance is tied to the cycle time (hardware speed) and decoder latency.
Another interdependency is with magic state injection: some Clifford operations are used in the process of injecting magic states or distilling them, so robust Clifford gates make the non-Clifford generation more efficient too.
On the flip side, because Cliffords are “easy,” one strategy is to use as many Clifford gates as possible in an algorithm and only use non-Cliffords when absolutely necessary (this is the idea behind Clifford+T circuit optimization). So the better we are at Clifford gates, the less overhead we need for the overall computation aside from the T gates.
2.1.5 TRL estimate
TRL 4-5 (Lab demo to component validation). [Logical Clifford operations have been successfully demonstrated on small encoded qubits. The concept is well understood, and scaling is mainly an engineering task.]
2.2 Magic State Production & Injection (Non-Clifford Gates)
| Direct CRQC Impact: | ABSOLUTELY CRITICAL – dominant cost center for Shor‑class workloads. |
| What it means: | Generate high‑fidelity T/CCZ magic states and inject them at scale to realize non‑Clifford gates. |
| CRQC Requirements: | ~6.5 B T/Toffoli‑equivalents; six factories outputting ≈1 CCZ / 150 cycles; cultivated state error ≲10⁻⁷, distilled to <10⁻¹². |
| Current Status: | 🔴 Only small logical magic‑state demos; no sustained factories; cultivation unproven on hardware (TRL ~2–3). |
| The Gap to CRQC: | Billion‑scale, continuous production at target fidelity and rate; robust injection with real‑time conditional corrections. |
| Interdependency Alert: | Decoder reaction (branching); below‑threshold sets distillation rounds; Clifford engine performance; qubit budget for factories. |
| Why it matters for CRQC: | Non‑Cliffords set both operations budget and throughput; shortfall here explodes runtime or qubit count. |
| Primary Readiness Lever: | LOB & QOT (consumes LQC to provision factories). |
2.2.1 What it is
Non-Clifford gates (like the T gate, CCZ gate, or arbitrary rotations) are the quantum operations that do not belong to the easy Clifford group and are needed to achieve universal quantum computing.
In many QEC codes (like the surface code), there is no simple, direct way to perform a non-Clifford operation on a logical qubit without introducing unmanageable errors. The standard approach to get around this is magic state distillation and injection. A magic state is a specially prepared quantum state that, when consumed via a process called state injection, effectively realizes a non-Clifford gate on a data qubit. For example, a |𝑇⟩ state (an eigenstate of the T gate) can be injected to perform a T gate on a logical qubit.
Magic states are called “magic” because they don’t naturally appear from Clifford operations; they have to be carefully produced. Typically, one produces many noisy copies of such a state and then applies a distillation protocol (which is a fault-tolerant circuit composed of Clifford gates) to purify a smaller number of high-fidelity magic states. These high-fidelity states are then injected (used) to perform the actual non-Clifford gates in the algorithm.
In summary, magic state production & injection is the capability to supply a stream of non-Clifford resources (like T states) with sufficient fidelity and rate to carry out the algorithm’s needs.
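A quick way to see why the T gate sits outside the Clifford group: conjugating a Pauli by T does not return a Pauli, so errors pushed through a bare T gate would leave the error set that surface‑code QEC tracks. A small numpy check (illustrative only):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
T = np.diag([1, np.exp(1j * np.pi / 4)])   # the T gate

conjugated = T @ X @ T.conj().T
print(np.round(conjugated, 3))                            # not a single Pauli matrix
print(np.allclose(conjugated, (X + Y) / np.sqrt(2)))      # True: it's (X + Y)/sqrt(2)
```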
2.2.2 Role in CRQC
This is often cited as the primary bottleneck or “cost center” for a cryptographically relevant quantum computer. For algorithms like Shor’s, the vast majority of the overhead (in qubit count and time) comes from handling T gates (or Toffoli gates, which are built from T gates). In the RSA-2048 factoring example, over 6.5 billion Toffoli/T operations are needed, each of which requires a magic state. Without an efficient way to generate those, the computation would be infeasible. Thus, the capability to mass-produce magic states and inject them reliably is blocking – a CRQC cannot be realized until we can supply these non-Clifford operations at scale.
The role of this capability is to unlock universal quantum computing: it provides the “magic ingredient” that, combined with Cliffords, gives the power to run any quantum algorithm.
In a CRQC, magic state factories will likely be a significant portion of the machine, continuously outputting high-quality magic states that are consumed by the computation. If this process is too slow or error-prone, it directly limits the overall performance.
In short, CRQC is impossible without a fast and reliable non-Clifford gate mechanism, and magic states are the leading method to achieve that.
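For a sense of scale, the production rate implied by the requirements row above (both inputs are paper‑level targets, not demonstrated numbers):

```python
# Magic-state throughput implied by the CRQC requirements quoted above.
toffoli_count = 6.5e9
runtime_s = 5 * 86_400

avg_rate_needed = toffoli_count / runtime_s
print(f"Average magic states consumed: ~{avg_rate_needed:,.0f} per second")   # ~15,000/s

factories, cycles_per_ccz, cycle_s = 6, 150, 1e-6
nominal_output = factories / (cycles_per_ccz * cycle_s)
print(f"Nominal factory output: ~{nominal_output:,.0f} per second")           # ~40,000/s
```

In other words, the factories must sustain tens of thousands of high‑fidelity non‑Clifford states per second for days on end – a regime no experiment has approached.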
2.2.3 Status & current research
Magic state generation is currently one of the least mature aspects of quantum computing. Only very recently have experiments even begun to demonstrate magic state injection on small codes. A 2023 study on IBM hardware, for example, reported preparing a logical magic state on a distance-3 surface code with fidelity above the distillation threshold. This means they could, in principle, use that state to distill an even higher fidelity magic state – a first step toward the whole distillation pipeline.
Only two experimental works to date have achieved logical magic states at all, and none have implemented a full distillation factory yet. So currently, magic state production is at a proof-of-concept stage (TRL ~3).
Researchers are actively looking for ways to make it more efficient. One major advance (theoretically) has been magic state cultivation and other improved distillation protocols. Gidney’s 2025 paper, for instance, used magic state cultivation to greatly reduce the number of raw magic states and extra qubits needed: this new method “grows” a high-quality magic state from a few lower-quality ones, cutting the cost such that producing a CCZ state was almost as cheap as a Clifford operation. This is an active research front – finding distillation methods that use fewer qubits or fewer rounds (because traditional distillation is extremely resource-hungry).
Additionally, some approaches aim to avoid magic states altogether by using alternative schemes (e.g. certain photonic cluster state schemes or particular code switchings), but those are speculative. As of now, no quantum computer has a “magic state factory”, even at small scale, that runs continuously. But the blueprint exists on paper: for example, to factor RSA-2048, one might design six magic state factory units, each outputting a distilled CCZ state every so-many cycles. The challenge is now to implement and verify such factories step by step in hardware.
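The arithmetic behind “fewer distillation steps” can be sketched with the textbook 15‑to‑1 protocol, whose output infidelity scales to leading order as roughly 35·p³ per round (Clifford noise inside the factory is ignored here; the 10⁻⁷ “cultivated” input and the 10⁻¹² target come from the requirements row above):

```python
# Leading-order error suppression of 15-to-1 magic-state distillation: p -> ~35*p^3.
def distillation_rounds(p_in, target=1e-12):
    p, rounds = p_in, 0
    while p > target:
        p = 35 * p ** 3
        rounds += 1
    return rounds, p

for p0 in (1e-2, 1e-3, 1e-7):   # raw injected states vs. "cultivated" states (~1e-7)
    rounds, p_out = distillation_rounds(p0)
    print(f"input error {p0:.0e}: {rounds} round(s) -> output ~ {p_out:.1e}")
```

Cleaner inputs collapse the pipeline: a cultivated state near 10⁻⁷ clears the 10⁻¹² target in a single round, while a raw 1% state needs three nested rounds – and each extra round multiplies the qubit and time overhead.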
2.2.4 Key interdependencies
Magic state production depends heavily on error correction and Clifford operations.
The entire distillation procedure is done with Clifford gates on encoded qubits – so you need a robust Clifford engine to even start producing magic states.
It also depends on having enough below-threshold qubits to devote to the factories (they don’t directly contribute to the algorithm’s main register but run in parallel).
There is a circular dependency in that the quality of the physical qubits and error correction dictates how many rounds of distillation you need: if physical qubits are better (lower error), the initial “raw” magic states are less noisy, so you might need fewer distillation steps. Conversely, if physical error rates improve significantly, at some point magic state production becomes easier (and eventually, if physical error were extremely low, you might not need distillation at all!).
Magic state injection also ties into the decoder and timing – when you inject a magic state, typically you measure some qubits and conditionally apply a correction (a process that requires decoding the outcome quickly to know how the gate affected the state). This means the whole injection process must be synchronized with the error correction cycle and decoder decisions.
Another interdependency is with the algorithm’s structure: some algorithms can be rearranged to use fewer non-Clifford gates (trading them for more Clifford operations), which would reduce the burden on magic state factories. In summary, magic state production sits on top of the foundational layer – it needs everything below (stable qubits, error correction, fast decoding) to function, and it then enables the non-Clifford logic for the algorithm.
2.2.5 TRL estimate
TRL 2-3 (Concept formulated, initial proof-of-concept). [Logical magic states have only been demonstrated in very small codes recently. Full-scale magic state factories remain theoretical, though improved protocols (cultivation, etc.) are actively being developed.]
3. End-to-End Execution
The final category is about the integration and operation of the whole quantum computer as a system. Even if we have good logical qubits and gates, we must orchestrate them to actually perform a large computation reliably. End-to-end execution covers capabilities like running a full fault-tolerant algorithm (not just isolated gates), the performance of the decoder that keeps errors corrected throughout the run, and maintaining continuous operation over long periods. These capabilities ensure that all the pieces work together seamlessly for days, which is what a CRQC requires.
3.1 Full Fault-Tolerant Algorithm Integration
| Direct CRQC Impact: | HIGH – the capstone system demonstration. |
| What it means: | Orchestrate memory, Cliffords, magic‑state factories, measurements, and feedback to run a full target algorithm end‑to‑end. |
| CRQC Requirements: | Modular exponentiation/Shor on ~1,399 logicals, ~5 days; correct scheduling, routing, and just‑in‑time magic supply. |
| Current Status: | 🔴 No end‑to‑end fault‑tolerant algorithm yet; only micro‑demos on few logicals (TRL ~1–2). |
| The Gap to CRQC: | From factor‑21 FT demo -> medium‑scale FT circuits -> RSA‑class depth with continuous QEC. |
| Interdependency Alert: | Decoder (no backlog), continuous operation, magic state throughput, syndrome cadence, control software maturity. |
| Why it matters for CRQC: | Proves the architecture actually delivers usable LQC/LOB/QOT in practice – not just on paper. |
| Primary Readiness Lever: | Delivered LQC, LOB & QOT (system‑level proof, not component proxies). |
3.1.1 What it is
This is the capability to execute an entire quantum algorithm in a fault-tolerant manner, from start to finish, using the above building blocks. It’s essentially the system integration of quantum computing. It involves scheduling and coordinating possibly thousands of logical qubits, millions of physical qubits, and trillions of operations (counting all the error correction cycles and gates) in the precise sequence needed for the algorithm (like Shor’s).
It also means managing intermediate measurements and classical feedback within the algorithm (for example, certain quantum algorithms require measuring a qubit mid-computation and using that result to decide later operations – all of which must be handled without disrupting fault-tolerance).
In simpler terms, algorithm integration is making the quantum computer actually do something useful (like factor a number) reliably, as opposed to just demonstrating one piece of the puzzle in isolation.
3.1.2 Role in CRQC
This is the capstone capability that truly signifies a CRQC. You can have good qubits and good gates, but until you put them together to run a full high-depth circuit reliably, you haven’t achieved the mission.
From a security perspective, a cryptographically relevant quantum attack is essentially a fault-tolerant quantum algorithm in action. So this capability represents the actual threat materializing (or the goal being reached). In that sense, it’s critical.
Achieving full algorithm integration means you can combine error correction, logical operations, and control systems to produce a result like “we factored this 2048-bit number in 5 days and got the prime factors” without any errors corrupting the computation. It is the ultimate validation that the system design works.
Until this is demonstrated on smaller scales, one cannot be fully confident in the whole approach. Hence, milestones like “run a small instance of Shor’s algorithm with full error correction” will be major proof points on the road to CRQC.
3.1.3 Status & challenges
At present, no quantum computer has run a non-trivial algorithm end-to-end with fault tolerance. We are still in the stage of testing components in relative isolation. For example, one experiment might demonstrate a logical qubit memory (storing quantum information for some time with error correction), another might demonstrate a logical gate, but we haven’t strung those together to do an entire multi-step algorithm on logical qubits.
The biggest algorithm run on real quantum hardware so far (Shor’s algorithm to factor small numbers like 15) was done without full error correction (using error mitigation or just brute force with many physical qubits and some classical help).
Fully fault-tolerant algorithm execution remains an open problem and an active area of research. Researchers are currently developing software and compilation techniques to break down large algorithms into sequences of fault-tolerant operations, and verifying through simulations that the protocols hold up.
Gidney’s RSA factoring paper is itself an example of an integration blueprint: it maps out how one could arrange memory regions, compute regions, and factories to perform the algorithm in space and time. But this exists only on paper and in simulations. The actual hardware control systems to coordinate thousands of logical qubits don’t exist yet.
We also anticipate new issues will surface when integration is attempted: for instance, error correlations between distant parts of the circuit, resource contention (multiple operations needing the same ancilla at once), or simply the logistics of feeding in classical decisions (like if an algorithm needs a classical compute step in the middle).
Current status: mostly conceptual and simulation-based. Groups have begun doing “small scale integrations” – for example, running a simple algorithm on two logical qubits (like a tiny logic circuit) – but even that is very cutting-edge. We’re probably a few years away from a demonstration like “factor 15 with fully error-corrected qubits” as a proof of integration, and more years from factoring, say, a 512-bit RSA number.
3.1.4 Key interdependencies
This capability inherently depends on all other capabilities functioning in unison. It’s where the rubber meets the road.
Some specific interdependencies: decoder performance becomes especially critical here, because during a long algorithm, you’ll be continuously decoding errors – any slowdown or backlog can derail the computation. Continuous operation is also a part of this – the system has to remain stable for the full run. Integration also depends on good scheduling and control software: deciding when and where to perform each gate, when to initiate distillation rounds so that magic states are ready just in time, how to route qubits around for interactions (while not introducing too much delay or error).
There’s a dependency on classical compute integration: some algorithms (and even some error correction routines) require real-time classical computation (decoding is one, also something like period finding in Shor’s has a classical post-processing step at the end, though that can wait until after the quantum part). Ensuring the classical and quantum parts talk seamlessly is part of integration. In addition, integration will have to handle faults gracefully: if a component fails (say one ancilla qubit goes bad during the run), there needs to be redundancy or a way to work around it so the algorithm can continue. This edges into architectures that include real-time monitoring and possibly adaptive routines.
All these complexities must come together for a successful end-to-end run.
3.1.5 TRL estimate
TRL 1-2 (Concept and design stage). [No complete fault-tolerant algorithm has been executed yet. We have designs and simulations, but hardware implementation is yet to be done. Initial integration tests on very small scales are just beginning.]
3.2 Decoder Performance (Real-Time Error Correction Processing)
| Direct CRQC Impact: | CRITICAL – the nervous system of fault tolerance. |
| What it means: | Ingest syndrome streams and decide corrections within microseconds, continuously, at scale, without backlog. |
| CRQC Requirements: | Keep pace with ~1 µs cycles across ~10⁶ qubits; ≤10 µs reaction for conditional branches; near‑optimal accuracy. |
| Current Status: | ⚠️ Fast FPGA/ASIC decoders at patch‑scale; integration growing; system‑scale throughput unproven (TRL ~5). |
| The Gap to CRQC: | Proven decoding for 100k–1M qubits in real time for days, power‑efficient and tightly integrated with controls. |
| Interdependency Alert: | Faster syndrome -> higher data rate; code distance -> more bits per cycle; magic injection needs low‑latency branching. |
| Why it matters for CRQC: | If the decoder lags, backlog stalls the machine or forces slower cycles—killing advantage. |
| Primary Readiness Lever: | QOT (secondary lift to LOB from fewer residual logical faults). |
3.2.1 What it is
The decoder is the classical computing component that processes syndrome data from the quantum error correction system and determines the appropriate correction for the qubits. In essence, every cycle of error correction produces a pattern of syndrome bits; the decoder algorithm uses those to infer where errors occurred and then either flags a logical error or tells the quantum controller to apply corrective operations (like flipping a particular qubit) to fix the error.
Decoder performance refers to both the accuracy of this inference (it should correctly identify errors) and the latency/speed at which it runs.
In a CRQC, the decoder must operate continuously, keeping pace with the quantum cycles – this often means megahertz-level throughput, since syndromes might be generated every microsecond from potentially millions of stabilizer measurements. Decoding is a non-trivial computational task (often mapped to a graph problem like matching or a Bayesian inference on a network), and doing it in real time at scale is challenging.
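In its simplest form, a decoder is a mapping from syndrome patterns to corrections. Here is a lookup‑table sketch for the distance‑3 bit‑flip repetition code – a toy stand‑in; real surface‑code decoders solve a much larger matching or inference problem, but the role is the same:

```python
# Minimal decoder: syndrome pattern -> correction (3-bit repetition code, bit flips only).
CORRECTIONS = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # flip bit 0 back
    (1, 1): 1,      # flip bit 1 back
    (0, 1): 2,      # flip bit 2 back
}

def decode_and_correct(bits):
    syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    target = CORRECTIONS[syndrome]
    if target is not None:
        bits[target] ^= 1
    return bits

print(decode_and_correct([1, 0, 0]))   # single flip on bit 0 -> restored to [0, 0, 0]
print(decode_and_correct([1, 1, 1]))   # clean encoded "1" -> left untouched
```

At surface-code scale the lookup table is replaced by algorithms such as minimum‑weight perfect matching or union‑find, and the hard part becomes doing this inference within the microsecond budget, every cycle, indefinitely.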
3.2.2 Role in CRQC
The decoder is the brain of the error correction system. Without a fast and accurate decoder, the whole fault-tolerant scheme falls apart. If decoding decisions lag behind, errors might compound faster than they’re corrected, leading to logical qubit failure. Thus, decoder performance is critical for achieving the ultra-low logical error rates needed for CRQC. It’s not enough to have a good code and good qubits – you also need a powerful decoding engine to actually realize the error suppression.
In practical terms, the decoder determines the effective error rate of logical qubits under fast noise. A slow or suboptimal decoder could turn what should be a small error into a larger one by missing corrections or applying them late.
Additionally, the decoder’s efficiency can impact how large a code distance you need – better decoding might squeeze more out of the same number of physical qubits. For CRQC, we need decoders that can ingest on the order of 10⁸–10⁹ syndrome bits per second per logical qubit (a distance‑25 patch produces hundreds of stabilizer outcomes every microsecond), across more than a thousand logical qubits, all without choking. This is an enormous data rate, so decoder hardware and algorithms are as important as the quantum hardware in reaching CRQC.
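A rough aggregate estimate for the whole machine, assuming (hypothetically) that about half of the ~10⁶ physical qubits are ancillas producing one syndrome bit every 1 µs cycle; real systems stream richer soft‑readout data, so actual I/O bandwidth would be higher:

```python
# Rough aggregate syndrome bandwidth at CRQC scale (assumptions stated above).
physical_qubits = 1e6
ancilla_fraction = 0.5       # assume ~half the qubits are measured each cycle
cycle_s = 1e-6

bits_per_second = physical_qubits * ancilla_fraction / cycle_s
print(f"Syndrome stream: ~{bits_per_second:.1e} bits/s "
      f"(~{bits_per_second / 8 / 1e9:.0f} GB/s of raw binary syndromes)")
```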
3.2.3 Status & recent progress
Interestingly, decoder development has seen significant progress, leveraging advances in classical computing. Specialized decoder algorithms (e.g. Union-Find, Minimum Weight Perfect Matching, belief propagation, etc.) have been optimized, and importantly, implemented on custom hardware for speed.
For example, engineers have built FPGA and ASIC-based decoders that achieve megahertz-level decoding speeds on sizeable code patches. One recent result demonstrated decoding a ~1000-qubit surface code in under a microsecond using an ASIC (and similarly fast performance on an FPGA). This meets the rough requirement for a 1 µs cycle time – a promising sign that decoding can keep up with a CRQC’s needs.
Riverlane and other companies have shown FPGA decoders processing syndromes for hundreds of qubits with latency on the order of 100-1000 nanoseconds, which is faster than the physical experiment cycle times. These are still prototype systems (tested on benchmark data or small-scale quantum experiments), but they indicate that real-time decoding is feasible with dedicated hardware.
Accuracy-wise, decoders like minimum-weight matching have been extensively tested in simulation and are known to approach optimal error-correction thresholds for codes like the surface code. So the algorithms are quite advanced; the challenge was speed, which dedicated hardware is addressing.
Current research is focusing on scaling decoders to larger codes (more qubits), optimizing power and integration (the decoder hardware might need to sit close to the quantum chip, possibly even at cryogenic temperatures, so power efficiency is important), and handling large-scale parallelism (decoding many logical qubits simultaneously).
We are also seeing work on decoders for more complex codes (beyond surface code) and handling things like burst errors or correlations that simpler decoders might not assume.
In summary, decoder technology is one of the more mature pieces – we’re at a stage of building robust prototypes (TRL perhaps 5). The big integration step next is to incorporate these decoders into an actual quantum computing setup with live data, as opposed to just simulated data. In fact, some experiments have started integrating fast feedback: for example, a team at Rigetti demonstrated a low-latency FPGA decoder feeding back corrections in real time on a small superconducting chip. These are early but crucial steps showing the closed-loop error correction with a real device.
3.2.4 Key interdependencies
The decoder links the quantum and classical worlds. It depends on syndrome extraction – if the syndrome data is faulty (due to measurement errors that are unaccounted for), the decoder might get confused, so often decoders incorporate the error rates of syndrome bits into their model.
It also depends on classical computing capabilities: high-speed digital logic, possibly cryo-compatible electronics if needed.
There’s interplay with continuous operation: the decoder must run reliably for days without crashing or desynchronizing. It’s essentially running a real-time OS for error correction. If the quantum computer scales to more qubits, the decoder architecture must scale accordingly, likely in a modular way (perhaps a network of many decoders each handling a region).
Decoder performance also feeds back into error correction strategy: if a decoder is extremely fast, one might do more frequent QEC cycles or use more complex codes; if it were slow, one might opt for simpler codes or more slack time between operations. In designing a CRQC, one must co-optimize the code, error rates, and decoder to ensure the error suppression is sufficient.
Additionally, decoding might not be 100% perfect – there’s always a small chance it fails to infer the error correctly, which contributes to logical error. Pushing that failure rate down means using good algorithms and maybe additional redundancy (at cost of more computation). So the better the decoder, the closer the system will operate to the theoretical capability of the code.
Finally, decoders need to be adaptable: if the error rates or noise properties change over time (drifts, etc.), an ideal decoder might adjust its parameters. This could tie into machine learning approaches or real-time calibration inputs.
All told, decoder performance is a linchpin that connects to hardware, software, and algorithmic aspects of the CRQC.
3.2.5 TRL estimate
TRL 5 (Component validation in relevant environment). [High-speed decoders have been demonstrated on specialized hardware with performance meeting requirements. Integration with actual quantum hardware is underway, but the concept is proven and scalable solutions are in sight.]
3.3 Continuous Operation (Long-Duration Stability)
| Direct CRQC Impact: | CRITICAL — the difference between a demo and a break of RSA‑2048. |
| What it means: | Run autonomously for ~5 days with active QEC: stable calibrations, drift management, rare‑event resilience, no manual resets. |
| CRQC Requirements: | ≥4.32×10¹¹ cycles at target error rates; automated monitoring/recalibration; fault bypass/spares; robust cryo/control uptime. |
| Current Status: | 🔴 Hours‑scale stability with periodic recal; no multi‑day FT runs yet (TRL ~1–2). |
| The Gap to CRQC: | Two–three orders‑of‑magnitude more uptime under load; automated ops; handling radiation/crosstalk bursts at scale. |
| Interdependency Alert: | Below‑threshold margin over time; decoder reliability; syndrome fidelity; environmental control and system engineering. |
| Why it matters for CRQC: | You must actually consume the operations budget in wall‑clock time – any need to pause equals failure. |
| Primary Readiness Lever: | LOB (secondary preservation of QOT over wall‑clock via zero‑downtime operation). |
3.3.1 What it is
Continuous operation is the ability of the quantum computer to run a complex algorithm non-stop for an extended period (on the order of days) without losing quantum coherence or requiring a reset.
In practical terms, it means the entire system – qubits, control electronics, error correction processes, cooling systems – must sustain stable performance for the whole duration of the computation. This includes maintaining qubit calibrations, keeping error rates low and consistent, preventing excessive downtime (you can’t exactly pause a quantum calculation and resume it later if decoherence catches up), and generally having the reliability of, say, a server running a 5-day computation.
For a CRQC, “continuous operation” likely implies something like 100+ hours of up-time while actively executing error-corrected circuits and feeding in classical control decisions. Essentially, the system must behave like a marathon runner, not a sprinter.
3.3.2 Role in CRQC
This capability is absolutely blocking from a practical standpoint. If all other capabilities exist but the machine can only run for a few minutes before needing a reboot or recalibration, you won’t factor a 2048-bit number.
In Gidney’s scenario, ~5 days of runtime is required. This figure already includes some slack; it’s the order of magnitude to expect. So continuous operation is about ensuring that nothing in the system drifts out of spec or fails over that timescale.
Many current quantum setups can only operate in bursts (due to qubit coherence times, or needing to recalibrate lasers, or replenish cryogenics, etc.). Achieving continuous operation means engineering the system for stability and automation: the system should auto-correct drifts (perhaps via background calibrations that don’t interrupt the algorithm), handle minor hardware faults gracefully, and avoid any manual intervention.
For CISOs thinking about quantum threats, continuous operation is one reason why CRQCs are not here yet – it’s not just getting enough qubits, it’s running them with supercomputer-like reliability. A CRQC will essentially need to be an industrial-grade quantum machine.
3.3.3 Status & issues
Today’s quantum computers are far from this ideal. Typically, a quantum processor requires frequent recalibration – at least every few hours, often more frequently. Qubits can drift in frequency, and crosstalk can worsen as devices warm slightly or over the course of usage.
Also, control electronics (AWGs, microwave sources) can have up-time issues or need adjusting. Currently, quantum experiments are usually short – maybe seconds or minutes of coherent operation at most, because either decoherence limits that or the experimental sequence ends.
No one has demonstrated running a quantum algorithm continuously for even hours, let alone days. One fundamental reason is qubit coherence times are limited (microseconds to milliseconds for superconducting qubits, maybe seconds for ion trap qubits, but then gate speeds are slower there, etc.).
Error correction, in principle, extends effective coherence indefinitely if everything works perfectly – but that’s a big if at present. Another factor is cryogenics and infrastructure: superconducting qubits need dilution fridges, which can occasionally suffer disturbances (even vibrations or power spikes could interrupt things).
Five days of continuous 1 µs cycles means 4.3×10¹¹ error correction cycles – an immense sequence where nothing catastrophic can happen. We also have to consider cosmic rays or background radiation: a known issue is that high-energy particles can cause bursts of errors in superconducting qubits every once in a while, which could momentarily raise error rates above threshold. Over a 5-day period, you’re almost guaranteed to get some of these events. Part of continuous operation will be designing the system to withstand or quickly recover from such rare error bursts (maybe using buffer qubits or fast reset schemes).
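To see why such bursts are effectively unavoidable over a full run, here is the expected count under an assumed rate of one high‑energy event per ~10 seconds of chip time – an illustrative figure in the spirit of published superconducting‑chip observations, not a property of any particular device:

```python
# Expected rare burst events over a ~5-day run (event rate is an assumption).
runtime_s = 5 * 86_400
assumed_event_interval_s = 10          # assumption: one high-energy event per ~10 s

expected_events = runtime_s / assumed_event_interval_s
print(f"Expected burst events over the run: ~{expected_events:,.0f}")   # ~43,200
```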
On the positive side, classical computing and control systems are quite advanced in reliability – we know how to build classical servers that run for months. So the heavy lifting lies in the quantum hardware and its interface.
Current status: At best, researchers can maintain stable qubit operations for maybe hours with periodic calibration. For example, some labs have automated calibration routines that run in the background or between runs to keep qubits tuned. But doing that without stopping the algorithm is tricky – it’s an area needing innovation. We’re essentially at TRL 1-2 here; continuous operation for days is an aspiration based on engineering extrapolation rather than demonstration.
3.3.4 Key interdependencies
Continuous operation ties into everything because any weak link can break the chain over long durations. It depends on physical qubit stability (if a qubit’s error rate slowly increases, error correction might eventually fail), and on error correction robustness (the system needs to correct not just steady-state errors but any jumps or drifts). It’s related to below-threshold scaling in time: not only must the average error be below threshold, but the error over time must not spike too often. The decoder and control system must handle a never-ending stream of syndrome data – which means memory management and perhaps fault tolerance on the classical side as well (the decoder can’t crash on hour 100!).
Another interdependency is fault tolerance in a broader sense: can the system tolerate a failing qubit? In a 5-day run, it’s possible a few physical qubits might malfunction (e.g., one qubit might suddenly freeze out or a control line might drop).
A truly robust continuous operation might need redundancy such that the system can map out a bad qubit and continue with spares. That kind of “autonomic” behavior is not yet in place but might be needed for a large, long-running quantum computer.
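As a purely hypothetical illustration of that "autonomic" behavior, the sketch below remaps the role of a failed physical qubit to a pre-reserved spare; the layout, names, and policy are invented for exposition and do not correspond to any real control stack:

```python
# Hypothetical sketch: route a logical patch's roles around failed physical qubits.
from typing import Dict, Set, Tuple

Coord = Tuple[int, int]

def assign_patch(grid_size: int,
                 bad_qubits: Set[Coord],
                 spares: Dict[Coord, Coord]) -> Dict[Coord, Coord]:
    """Return a physical placement for each lattice site, substituting spares."""
    placement: Dict[Coord, Coord] = {}
    for row in range(grid_size):
        for col in range(grid_size):
            site = (row, col)
            if site in bad_qubits:
                if site not in spares:
                    raise RuntimeError(f"no spare reserved for failed qubit {site}")
                placement[site] = spares[site]  # hand the role to a spare qubit
            else:
                placement[site] = site
    return placement

# Example: one qubit freezes out mid-run; its role moves to a reserved spare.
placement = assign_patch(grid_size=5, bad_qubits={(2, 3)}, spares={(2, 3): (5, 3)})
```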
Continuous operation also heavily depends on environmental control: vibration isolation, temperature stability, shielding from electromagnetic interference, etc., all maintained for days.
Many of these are classical engineering issues similar to keeping large telescopes or particle accelerators running (those systems also need high stability over long times). The difference is the added fragility of quantum coherence.
In summary, continuous operation is the convergence of all subsystems working reliably over time. It will be one of the final milestones to conquer on the path to CRQC – likely proven only when someone actually runs a sustained computation to completion.
3.3.5 TRL estimate
TRL 1-2 (Basic principles observed, concept defined). [Present quantum computers cannot yet run multi-day computations. The requirement is understood (e.g. 5 days for RSA-2048), but achieving it will require significant engineering progress and has not been demonstrated.]
Summary of Key CRQC Capabilities
The table below summarizes each capability discussed, along with its role toward CRQC, the quantitative requirements (where applicable), current status, and a rough Technology Readiness Level:
| Capability | CRQC Role | CRQC Requirement (approx.) | Current Status (Oct 2025) | TRL |
|---|---|---|---|---|
| Quantum Error Correction | Blocking – enables stable logical qubits (foundation) | Logical error rate << 1e-9 per gate (e.g. distance ~25 code to protect 5-day run). | Demonstrated on small codes (d=3–7); logical error below best physical qubit achieved. Still limited to one or a few logical qubits on ~100-qubit devices. | 4 |
| Syndrome Extraction | Critical – continuous error info readout | ~1 MHz syndrome measurement cycles across millions of qubits (meas. fidelity >99% within ~1 μs each). | Basic stabilizer measurements performed in small QEC experiments. Measurement speed and parallelism still limited (μs-scale readouts, significant error). | 4 |
| Below-Threshold Scaling | Blocking – high fidelity at large scale | Physical error rates ~0.1% or better across 10⁵–10⁶ qubits (well below ~1% threshold). Maintain error rates as system grows. | Best two-qubit gates ~0.1–1% error on <100 qubit systems. No data yet on multi-thousand-qubit coherence. Scaling tends to introduce new errors; engineering solutions in progress. | 3 |
| Logical Clifford Operations | Optimization – efficient “easy” gates (supporting) | Fault-tolerant CNOT, H, etc., with high success (>99.9% logical fidelity). Execute in a few QEC cycles (microseconds-scale). | Transversal gates and lattice surgery demonstrated on small codes. Successful logical CNOTs, etc. on distance-2/3 codes. Needs extension to many qubits in parallel. | 4 |
| Magic State Prod. & Injection | Blocking – provides non-Clifford gates (T, CCZ) | Ability to supply ~6.5×10⁹ T-gate equivalents over runtime. Magic state error per state ≪ 1% (post-distillation). Factory output ~10⁶ states/second. | Only preliminary demos: e.g. first logical magic states prepared (distance-3) above distillation threshold. No large-scale magic state factory yet; distillation remains extremely resource-intensive (theoretical schemes in development). | 3 |
| Full FT Algorithm Integration | Critical – orchestrates entire attack computation | Integration of all components to run Shor’s algorithm for RSA-2048 (~5 days, 6.5e9 gates) with no fatal errors. Requires ~1400 logical qubits working in concert. | No complete algorithm run with error-corrected qubits yet. So far only component-level tests (e.g. logical memory or a simple logical gate). Designs and simulations exist for large-scale integration, but hardware implementation pending. | 2 |
| Decoder Performance | Critical – real-time error correction decisions | ~1 μs or better decision latency for each QEC round. Throughput to handle ~10⁶ syndromes/sec per logical qubit (peta-scale processing overall). <10 μs feedback loop. | High-speed decoders (FPGA/ASIC) demonstrated: e.g. decoding 1000-qubit code in <1 μs. Hardware decoders can reach >1 MHz throughput. Integration with live quantum systems starting (small-scale real-time QEC shown). | 5 |
| Continuous Operation | Blocking – long mission duration stability | 100+ hours sustained quantum operation. No significant degradation or need for external recalibration during run. Automated recovery from minor faults. | Quantum hardware currently limited to minutes or less of stable run-time. Frequent manual calibrations needed. No demonstration of multi-hour quantum algorithm. Achieving days of stability remains a major engineering hurdle. | 2 |
Outlook: Milestones and Monitoring Progress Toward CRQC
Achieving a cryptographically relevant quantum computer will be a gradual process, with clear milestones indicating that a breakthrough is near. For organizations concerned with post-quantum risk, understanding these milestones – and tracking them – is crucial for timing defenses. Below I highlight what signs of progress to watch for in the coming years, and how to stay informed:
Higher-Distance Logical Qubits
One near-term milestone is the creation of logical qubits with significantly higher distance (e.g. distance 11, 15, or 25) that demonstrably suppress errors exponentially better than physical qubits. So far, research devices have reached distance-7 (with distance-3 and -5 shown by several groups).
When labs announce they have, say, a distance-11 logical qubit with error rates an order of magnitude lower than physical, that will signal that the foundational error correction is truly working. Even more so if they can run multiple logical qubits simultaneously. This will indicate that the below-threshold scaling challenge is being met in hardware.
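To see why higher distance is such a meaningful signal, here is a back-of-envelope using the standard below-threshold scaling model (logical error falling geometrically with code distance); the prefactor and suppression factors below are illustrative assumptions, not measured values:

```python
# Standard below-threshold scaling model: eps_L(d) ≈ A * Lambda ** (-(d + 1) / 2),
# where Lambda is the error-suppression factor gained per increase of d by 2.
# A and the Lambda values are illustrative assumptions, not measurements.
A = 1e-2

def logical_error(distance: int, lam: float, a: float = A) -> float:
    return a * lam ** (-(distance + 1) / 2)

for lam in (2.0, 4.0, 8.0):
    print(f"Lambda={lam}: eps_L(d=11) ≈ {logical_error(11, lam):.1e}, "
          f"eps_L(d=25) ≈ {logical_error(25, lam):.1e}")
# Under these assumptions, Lambda=2 leaves even d=25 far above a ~1e-9 target,
# while Lambda≈4 or better clears it – the suppression factor matters as much
# as raw distance.
```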
Small Fault-Tolerant Circuits
Another key milestone will be successfully running a complete algorithm on logical qubits, even a small one. For example, if a team can fault-tolerantly execute a simplified cryptographic task – such as factoring a small RSA modulus (e.g. 128 or 256 bits) or computing a discrete log on a short elliptic curve – that would be proof that end-to-end integration works.
These tasks are far easier than RSA-2048, but achieving them with fully error-corrected operations (no “cheating” with post-selection or error mitigation) would be a watershed moment. It would demonstrate that all the pieces (QEC, gates, decoder, etc.) can run together continuously. We should watch for research papers or press releases about “first demonstration of a fault-tolerant algorithm” in the next few years.
Scaled Hardware Prototypes
On the industrial side, major quantum hardware developers have roadmaps calling for dramatic scale-up. For instance, IBM, Quantinuum, PsiQuantum and others have publicly stated goals to build quantum systems with hundreds of thousands to a million qubits by the early 2030s. Monitoring whether those roadmaps stay on track is important.
If by, say, 2028-2030 we actually see machines with >100,000 physical qubits operating with error correction, that suggests a CRQC is on the horizon. On the other hand, if progress stalls at, e.g., a few thousand physical qubits with no error correction breakthrough, then CRQC might be further out.
Keep an eye on technology announcements at major conferences (APS, IEEE Quantum Week, etc.) and on corporate roadmap updates. Track not just qubit counts but also metrics like two-qubit gate fidelity and quantum volume – while remembering that logical qubit counts and logical error rates are the real signposts of CRQC readiness, even if companies still mainly quote physical qubit numbers today.
Magic State Factory Demonstrations
Because non-Clifford gates are a huge part of the CRQC challenge, any experimental progress on magic state distillation will be significant. If a group demonstrates a small-scale distillation circuit working on real hardware – for example, consuming 15 noisy T states to produce one purified state via the standard 15-to-1 protocol – that's a big deal. It moves the "magic state" capability from theory toward practice.
Eventually, a full magic state factory might be demonstrated in a stepwise fashion (first one round of distillation, then multiple rounds pipelined). Watching academic literature for phrases like “fault-tolerant implementation of T gate” or “experimental magic state distillation” will help in catching these milestones.
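For intuition on why distillation is so resource-hungry, here is a toy model of the widely used 15-to-1 protocol, in which each round consumes 15 input states and suppresses the error roughly as 35·p³ (ignoring Clifford and measurement errors, which real factory estimates must include); the starting error below is an assumption for illustration:

```python
# Toy model of 15-to-1 T-state distillation: output error ≈ 35 * p_in**3 per round,
# ignoring Clifford/measurement errors. Starting error is an illustrative assumption.
def distill_15_to_1(p_in: float) -> float:
    return 35 * p_in ** 3

p = 1e-2                 # assumed raw (injected) T-state error
raw_states_per_output = 1
for round_no in (1, 2):
    p = distill_15_to_1(p)
    raw_states_per_output *= 15
    print(f"after round {round_no}: error ≈ {p:.1e}, "
          f"raw states consumed per output ≈ {raw_states_per_output}")
# One round: ~3.5e-5 from 15 raw states; two rounds: ~1.5e-12 from 225 raw states.
```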
Improved Decoders & Real-Time QEC
On the supporting tech side, look for continued improvements in decoder technology and its integration. A milestone to note would be a quantum processor running autonomous error correction in real time – for example, a system where every cycle errors are detected and corrected without human intervention, possibly with a fast classical co-processor.
Rigetti and Riverlane's 2024 demonstration of real-time feedback with an FPGA decoder is an early precursor. If we see that scaled to larger codes (say, a 50-qubit code being actively stabilized by a custom decoder chip), it will indicate the error correction "nervous system" is reaching maturity. Such results may come from quantum hardware startups or consortium projects focused on control systems.
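A simple way to see why µs-scale decoding is non-negotiable is a backlog model: if the average decode time per round exceeds the QEC cycle time, undecoded rounds pile up linearly and feedback stalls. The numbers below are illustrative, not benchmarks of any particular decoder:

```python
# Illustrative streaming-decoder backlog model (not a benchmark of any real decoder).
def backlog_after(rounds: int, cycle_time_us: float, decode_time_us: float) -> float:
    """Pending undecoded rounds after `rounds` cycles, in this toy model."""
    deficit_per_round = max(0.0, decode_time_us - cycle_time_us)
    return rounds * deficit_per_round / cycle_time_us

for decode_us in (0.8, 1.0, 1.2):
    pending = backlog_after(rounds=1_000_000, cycle_time_us=1.0, decode_time_us=decode_us)
    print(f"decode {decode_us} µs/round -> backlog after 1e6 rounds (1 s): {pending:,.0f} rounds")
# 0.8 and 1.0 µs keep pace; 1.2 µs is already 200,000 rounds behind after one second.
```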
System Reliability Achievements
Achieving continuous operation will likely come gradually – perhaps first a 1-hour stable run of a logical qubit memory, then a 10-hour run, etc. When a team reports that they kept a qubit alive with error correction for hours, or that their quantum computer ran a job for an unprecedented duration, that’s a sign of increasing stability.
Similarly, improvements in cryogenic technology (e.g., new dilution refrigerators that can handle larger qubit counts or have less downtime) could indirectly further this goal. In practical terms, any claim like “quantum system operates 24 hours without recalibration” would be a milestone worth noting.
How organizations can track progress
Given the specialized nature of this research, CISOs and professionals can leverage a few strategies to stay informed:
- Regularly follow reputable quantum research outlets and summaries. Resources like the Quantum Computing Report, NIST newsletters, academic review articles, and industry blogs (e.g. IBM’s research blog, Google AI blog) often distill the latest advances. They will highlight when a new record or milestone has been achieved (such as “first error-corrected gate on 50 qubits” or “million-qubit simulator results”).
- Watch for government and standards body assessments. Organizations like NIST, the NSA, and the EU Quantum Flagship periodically release reports on the status of quantum technology, and they often frame progress in terms of cryptographic relevance. For example, if the NSA or NIST updates its guidance to say "we now estimate a viable CRQC could arrive by year X given recent progress," that's a strong signal to heed. (As of now, NIST's guidance is to migrate to post-quantum cryptography over 2030–2035 as a precaution, which implicitly assumes a CRQC could plausibly arrive in the 2030s.)
- Engage with the quantum community. Consider having an in-house quantum liaison or team that can interpret quantum news. Attending quantum tech conferences or inviting experts for talks can provide insight beyond press releases. Often, what’s published in journals is a few steps behind what’s happening in cutting-edge labs (due to publication lag and proprietary research). Being plugged into the community can yield early warnings of breakthroughs.
- Track quantum-readiness metrics for your organization. In parallel, ensure your own cryptographic inventory is being upgraded. The timeline for CRQC is uncertain – it could surprise us by coming earlier via an unexpected breakthrough, or it might take longer than optimistic projections. The prudent course, as echoed by experts, is to prepare well in advance of the threat becoming real. By monitoring technical milestones as described, you can gauge how urgent the threat is becoming and adjust your quantum-safe migration plans accordingly.
In conclusion, the path to CRQC is an ambitious multi-front campaign. I outlined the major capabilities required – from foundational error correction to the high-level integration – and it’s clear that each is non-trivial. Yet, the steady progress in recent years (algorithmic improvements reducing qubit counts, hardware demonstrations of error correction, faster decoders, etc.) suggests that a CRQC, while not imminent this year or next, is a matter of “when” not “if.” The when could be two decades or it could possibly be less than a decade with some luck – the jury is still out. Because “attacks only get better” in time, the prudent approach for security leaders is to assume the CRQC will arrive on the earlier side of predictions and plan accordingly.
The milestones discussed will serve as the early warning system. By the time a CRQC is operational, many intermediate feats (like those high-distance logical qubits, small fault-tolerant circuits, large qubit counts, etc.) will have been achieved and publicized. Each of those is covered in my related articles focusing on the technical deep-dives of each capability – providing insight into how researchers overcame challenges or what remains unsolved. In the meantime, organizations should accelerate migration to quantum-safe cryptography in line with official guidance, and keep a watchful eye on the quantum computing race.
Appendix: CRQC Readiness Benchmark (Q‑Day Estimator)
As a companion to this deep‑dive capability map, I also maintain a simpler, executive‑facing CRQC Readiness Benchmark (Q‑Day Estimator) that collapses the full stack into three top‑level levers – Logical Qubit Capacity (LQC), Logical Operations Budget (LOB), and Quantum Operations Throughput (QOT) – plus an assumed annual growth factor. The tool combines these into a composite readiness score (default baselines LQC₀=1,000, LOB₀=10¹², QOT₀=10⁶, where Score ≈ 1.0 corresponds to week‑scale factoring of RSA‑2048) and projects a Q‑Day when that threshold is crossed.
It is intentionally crypto‑specific (not a generic “quantum advantage” yardstick) and intended for scenario exploration rather than prediction or policy decisions; the underlying methodology explains why these three axes track the core determinants of cryptographic breakability and how to interpret them against current vendor roadmaps and error‑correction progress.
In contrast, this article’s capability framework deliberately opens the black box – mapping how LOB actually depends on below‑threshold scaling, code distance, decoder latency, and magic‑state supply; how LQC emerges from physical‑to‑logical overhead; and how QOT is bounded by cycle time and parallelism.
The two serve different needs by design: reach for the Estimator when you want a fast, comparable, crypto‑focused dashboard for timelines; come back to the capabilities map when you need to interrogate assumptions, tie claims to observable milestones and TRLs, and make risk‑based decisions about PQC migration and controls.
The capabilities map cleanly onto the three levers in the CRQC Readiness Benchmark. Think of LQC (Logical Qubit Capacity) as “how many usable logical qubits you can field at once,” LOB (Logical Operations Budget) as “how deep a logical circuit you can run before it almost certainly fails,” and QOT (Quantum Operations Throughput) as “how many logical operations you can push per second.” (Those are the benchmark’s definitions; for RSA‑2048 the methodology pegs rough needs at LQC ≈ 1.4k logical qubits, LOB ≈ 10¹¹–10¹² logical gates, and QOT high enough to finish in ~days rather than months.)
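For readers who want to see the shape of such a roll-up, here is a hedged sketch: the baselines come from the Estimator's defaults quoted above, but the min() aggregation, the hypothetical current values, and the compounding-growth projection are my own simplifying assumptions, not the benchmark's published formula:

```python
# Hedged sketch of a readiness roll-up. Baselines are the Estimator's defaults;
# the aggregation rule and growth model are simplifying assumptions for illustration.
import math

LQC0, LOB0, QOT0 = 1_000, 1e12, 1e6      # baselines where a score of ~1.0 ≈ CRQC-ready

def readiness_score(lqc: float, lob: float, qot: float) -> float:
    # Assumption: the weakest lever gates overall readiness.
    return min(lqc / LQC0, lob / LOB0, qot / QOT0)

def years_to_qday(score_now: float, annual_growth: float) -> float:
    # Assumption: the score compounds by `annual_growth` each year until it hits 1.0.
    if score_now >= 1.0:
        return 0.0
    return math.log(1.0 / score_now) / math.log(annual_growth)

score = readiness_score(lqc=100, lob=1e7, qot=1e4)   # hypothetical current state
print(f"score ≈ {score:.1e}; projected Q-Day in ~{years_to_qday(score, 2.0):.0f} years at 2x/yr")
```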
How each capability moves LQC, LOB, and QOT
- Quantum Error Correction (QEC) (Foundations) Primary -> LQC & LOB; Secondary -> QOT. QEC sets the physical‑to‑logical overhead (how many physical qubits per logical qubit at a given code distance), so it directly determines how many logical qubits you can field from a fixed hardware budget (LQC). Better suppression (higher distance, below‑threshold operation) also lowers the logical error per gate, increasing the number of gates you can run before failure (LOB). Finally, cycle time and stabilizer schedule affect how quickly you can step the code – one ingredient in QOT. (The benchmark explicitly treats LQC as “how many logical qubits you can actually use simultaneously.”)
- Syndrome Extraction (Foundations) Primary -> QOT; Secondary -> LOB. Measurement fidelity and cycle time (e.g., ≈1 µs targets) bound the logical clock rate, so they are a first‑order limiter on QOT. Cleaner, faster readout indirectly improves LOB by keeping decoder decisions timely and reducing measurement‑induced faults that would consume the operations budget.
- Below‑Threshold Operation & Scaling (Foundations) Primary -> LQC & LOB; Secondary -> QOT. Staying well below the code threshold as you scale is what allows code distance to buy exponential error suppression. That yields more reliable logical qubits (LQC) and deeper feasible circuits (LOB). The required distance also sets patch sizes and surgery durations; higher distances typically mean more cycles per logical gate, nudging QOT down unless hardware speed compensates.
- Logical Clifford Operations (Core) Primary -> QOT; Secondary -> LOB. The latency and parallelism of logical CNOT/H/S (often via lattice surgery) determine how quickly you can progress through Clifford layers and how much concurrency you can exploit – key to QOT. Cleaner Cliffords also reduce error accumulation between T layers, modestly enlarging the effective LOB.
- Magic‑State Production & Injection (Core) Primary -> LOB & QOT; Secondary -> LQC. Non‑Cliffords (T/CCZ) dominate the resource curve for Shor. The quality of distilled/cultivated magic states sets the error floor for those gates and thus the operations budget you can consume before failure (LOB ≈ how many high‑fidelity non‑Cliffords you can perform). The throughput of state factories bounds the rate at which you can feed T gates – often the tightest throttle on QOT for factoring. Factories also consume logical qubits, so provisioning more factories to raise QOT trades off against the logical‑qubit headroom available for data/ancilla (LQC). (The methodology calls this out explicitly by folding depth and non‑Clifford cost into the LOB axis and throughput into QOT.)
- Full Fault‑Tolerant Algorithm Integration (End‑to‑End) Systems‑level expression of all three. Integration reveals true usable LQC (how many logical qubits can be orchestrated concurrently), the real LOB (depth survived by a full application circuit), and the delivered QOT (ops/s including factory latencies, measurement waits, routing, and feedback). It’s the on‑bench measurement of the three axes working together.
- Decoder Performance (End‑to‑End) Primary -> QOT; Secondary -> LOB. Real‑time decoding (µs‑scale latency) must keep pace with the syndrome firehose; otherwise a backlog builds that stalls the pipeline or forces slower cycle rates – directly capping QOT. Better decoding (accuracy + latency) also reduces residual logical‑error events over a given depth, extending LOB. (This is why the methodology links QOT to rQOPS‑style thinking – logical ops/s at a target logical error rate.)
- Continuous Operation for Days (End‑to‑End) Primary -> LOB; Secondary -> QOT. A five‑day mission means your operations budget must be consumed without catastrophic drift – so uptime, stability, and rare‑event resilience (e.g., radiation‑induced bursts) are effectively multipliers on LOB. Operational automation (background calibrations, fault bypass) keeps the logical clock running, preserving QOT over wall‑time instead of pausing the machine.
| Capability | LQC (how many logical qubits) | LOB (how deep a circuit) | QOT (how fast) | Why |
|---|---|---|---|---|
| QEC | ++ | ++ | + | Overhead & distance set usable logical count; error suppression extends depth; cycle timing affects speed. |
| Syndrome extraction | – | + | ++ | Readout fidelity & 1 µs‑class cycles drive logical clock rate; cleaner readout slightly raises depth. |
| Below‑threshold scaling | ++ | ++ | + | Staying below threshold at scale expands logical fleet and safe circuit depth; distance costs cycles. |
| Logical Cliffords | – | + | + | Gate latency/parallelism set pace through Clifford scaffolding; less error accumulation per layer. |
| Magic‑state prod./injection | + (costs qubits for factories) | ++ | ++ | State fidelity sets non‑Clifford error budget (depth); factory output throttles T‑gate rate (speed). |
| FT algorithm integration | + | + | + | System‑level realization of all three axes (usable count, depth survived, sustained ops/s). |
| Decoder performance | – | + | + | µs‑class decode keeps cycles on‑time; better decoding reduces logical faults over depth. |
| Continuous operation | – | ++ | + | Stability over days lets you actually consume the budget and keep the clock running. |
Heuristics for the benchmark:
- LQC is “how many logical qubits you truly have usable at once” (methodology notes RSA‑2048 ≈ LQC ~ 1400);
- LOB is “how many logical gates you can execute before the failure probability ≈ 1” (RSA‑2048 ≈ 10¹¹–10¹²);
- QOT is “logical ops/sec delivered end‑to‑end,” with back‑of‑envelope reasoning that ~10⁶ ops/s puts 10¹² ops into ~11.6 days, while higher QOT shortens runtime toward the week‑scale target.
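Those heuristics reduce to a one-line calculation, sketched below (wall-clock runtime ≈ LOB / QOT); the LOB value and QOT sweep are just the round numbers from the bullets above:

```python
# Back-of-envelope: wall-clock runtime = LOB / QOT, using the round numbers above.
SECONDS_PER_DAY = 86_400

def runtime_days(lob_ops: float, qot_ops_per_s: float) -> float:
    return lob_ops / qot_ops_per_s / SECONDS_PER_DAY

for qot in (1e6, 3e6, 1e7):
    print(f"QOT = {qot:.0e} ops/s -> {runtime_days(1e12, qot):.1f} days for 1e12 logical ops")
# 1e6 ops/s ≈ 11.6 days; ~3e6 ops/s reaches the few-day target; 1e7 ≈ 1.2 days.
```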
Bottom line:
- the Foundations (QEC, syndrome, below‑threshold scaling) mostly set LQC and the quality of each logical step (hence LOB),
- the Core (Cliffords + magic) dictates how much and how fast useful computation you can drive (major on LOB/QOT),
- and the End‑to‑End (integration, decoder, continuous run) determines whether those paper capabilities translate into a delivered QOT and a consumed LOB over wall‑clock days.
That’s why the Benchmark’s three axes are a good “compressed view” of the capability map – and why shifts in any one capability should be reflected as movements in LQC, LOB, or QOT in your dashboard.
Quantum Upside & Quantum Risk - Handled
My company – Applied Quantum – helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.