The Decoder Bottleneck: The CRQC Challenge Nobody Is Talking About
The Capability Nobody Mentioned
When Google announced below-threshold error correction with its Willow processor in December 2024, the headlines focused on what headlines always focus on: qubit count and error rates. The achievement was genuine and significant. For the first time, adding more physical qubits to a surface code reduced the logical error rate rather than increasing it, crossing the threshold that error correction theory has predicted for decades. I covered this milestone in detail through the CRQC Quantum Capability Framework, where it advanced several capability dimensions simultaneously.
Almost nobody mentioned the decoder.
The decoder is the classical computer sitting next to the quantum processor, receiving a continuous stream of error syndrome measurements and determining, in real time, what corrections to apply. It is the bridge between the quantum world (where errors occur continuously and unpredictably) and the classical control system (which must decide how to fix those errors before they cascade into uncorrectable failures). Without a decoder that can keep pace with the quantum hardware, quantum error correction is an offline statistical exercise, not a fault-tolerant computation. A million qubits without a real-time decoder is a million qubits that cannot compute.
This is the capability dimension that I track as D.2: Decoder Performance in the CRQC Capability Framework, and it is the one I believe is most likely to determine whether a cryptographically relevant quantum computer (CRQC) arrives in 2030 or 2045. It is the classical engineering challenge hiding inside the quantum machine, the piece that connects the quantum physics to the classical computation, and the capability that most threat assessments ignore entirely.
The quantum computing field has trained the public, and by extension the security community, to track one number: how many qubits. Security professionals need a more sophisticated model.
What the Decoder Does and Why It Matters
Quantum error correction works by encoding a single logical qubit across many physical qubits and constantly measuring parity checks (called syndromes) to detect when errors have occurred. These syndrome measurements do not reveal the quantum state itself (that would destroy it), but they reveal the pattern of errors, which can then be corrected. Think of it as a continuous diagnostic system: the quantum hardware is constantly reporting which qubits might have flipped, and something has to interpret those reports and decide what to fix.
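The mechanism has a simple classical analogue. The sketch below is illustrative only (a 3-bit repetition code with a small lookup decoder, not how surface codes are actually decoded): parity checks between neighboring bits locate a single flip without ever reading the encoded value itself.

```python
# Classical toy analogue of syndrome extraction: the 3-bit repetition code.
# Two parity checks on neighboring bits locate a single bit flip without
# reading the encoded value, the same principle the surface code applies
# to quantum states via stabilizer measurements.

def syndromes(bits: tuple[int, int, int]) -> tuple[int, int]:
    """Parity of each neighboring pair; a nonzero entry flags an error."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits: tuple[int, int, int]) -> tuple[int, int, int]:
    """Map the syndrome pattern to the single most likely flip and undo it."""
    pattern_to_flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    flip = pattern_to_flip.get(syndromes(bits))
    corrected = list(bits)
    if flip is not None:
        corrected[flip] ^= 1
    return tuple(corrected)

print(decode((1, 0, 0)))  # flip on bit 0, corrected to (0, 0, 0)
print(decode((0, 1, 0)))  # flip on bit 1, corrected to (0, 0, 0)
```

A real surface-code decoder faces the same shape of problem at vastly larger scale: map an observed syndrome pattern to the most likely error, fast enough to act before the next round of reports arrives.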
The decoder is the classical algorithm that interprets those syndrome reports and determines what correction to apply. It sits between the quantum processor and the classical control system, receiving a torrent of syndrome data and making correction decisions in real time. For a surface code with distance d (a measure of the code’s error-correcting strength), the decoder must process O(d²) syndrome measurements per error correction cycle. These cycles happen on the microsecond timescale for superconducting qubits, the dominant hardware modality today. A distance-23 surface code (which is roughly what you need for the logical qubit quality required by recent RSA-2048 resource estimates) produces approximately 529 syndrome measurements per cycle, and those cycles repeat every microsecond or so.
If the decoder cannot keep up with the syndrome stream, a backlog forms. Syndrome data accumulates faster than the decoder can process it. Errors continue occurring in the quantum hardware while the decoder is still working on corrections from previous cycles. The backlog grows exponentially, and eventually the computation fails. This is the “backlog problem,” and it is the reason that decoder speed is not a performance optimization but a hard requirement. A decoder that is 10% too slow does not produce a 10% slower computation. It produces a computation that fails entirely.
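The dynamic can be illustrated with a toy queueing model. The rates below are hypothetical, and reality is worse than this linear sketch, because a growing backlog also makes each decoding problem larger; but even the toy version shows why "slightly too slow" means failure rather than slowdown.

```python
# Toy model of the decoder backlog: one round of syndrome data arrives per
# QEC cycle; the decoder drains what it can each cycle. Rates are
# hypothetical, and the real failure mode is worse than linear, since an
# accumulating backlog also enlarges each subsequent decoding problem.

def simulate_backlog(arrival_per_cycle: float, service_per_cycle: float,
                     cycles: int) -> float:
    """Return the number of undecoded rounds remaining after `cycles`."""
    backlog = 0.0
    for _ in range(cycles):
        backlog += arrival_per_cycle                # new syndrome data lands
        backlog -= min(backlog, service_per_cycle)  # decoder drains its share
    return backlog

# A decoder 10% too slow never recovers; one with 10% headroom stays current.
print(simulate_backlog(1.0, 0.9, 10_000))  # ~1000 rounds behind and climbing
print(simulate_backlog(1.0, 1.1, 10_000))  # 0.0: no backlog ever forms
```

The boundary is binary: at or above the arrival rate, the backlog stays pinned at zero; below it, the backlog grows without bound.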
An analogy for security professionals: imagine a network intrusion detection system that must process packets in real time. If the IDS falls behind the packet stream, it does not miss 10% of intrusions. The buffer overflows, packets are dropped unexamined, and the system is blind. The decoder faces the same dynamic: fall behind the syndrome stream, and the quantum computation is no longer protected.
The decoder must satisfy three simultaneous requirements. It must be fast enough: for superconducting qubits, this means sub-microsecond per decoding round. It must be accurate enough: a decoder that is fast but makes poor correction decisions produces a higher logical error rate, which means you need more physical qubits to compensate, which means more syndromes to decode, creating a vicious cycle. And it must be scalable: a decoder that works for one logical qubit must work for 1,399 logical qubits simultaneously (per Gidney’s 2025 estimate for RSA-2048), each producing its own syndrome stream, continuously, for the duration of a computation that may run for hours or days (per D.3: Continuous Operation).
Speed, accuracy, and scale. Getting any two of three is tractable. Getting all three simultaneously is the unsolved engineering problem at the center of the CRQC timeline.
Why Decoding Is Harder Than It Sounds
The difficulty of quantum decoding comes from three interlocking challenges that compound each other as the system scales.
The Speed-Accuracy Trade-off
The theoretically optimal decoder for surface codes is minimum-weight perfect matching (MWPM), which finds the most likely error pattern consistent with the observed syndromes. MWPM is accurate but computationally expensive. Its runtime scales polynomially with code size, but the constants are large enough that it cannot meet real-time requirements for superconducting hardware at the code distances required for cryptographic applications. Running MWPM on a CPU for a distance-23 surface code takes milliseconds, three orders of magnitude too slow for the microsecond deadlines that superconducting qubits impose.
Fast decoders exist. Union-find algorithms can decode in nearly linear time. Lookup tables can decode in nanoseconds for small codes, as the LILLIPUT decoder demonstrated with under 7% FPGA logic utilization. But these fast decoders sacrifice accuracy. A less accurate decoder means a higher residual logical error rate after correction, which means you need a larger code distance to achieve the same logical error rate, which means more physical qubits, more syndrome measurements, and a harder decoding problem. The speed-accuracy trade-off is not a simple engineering compromise; it feeds back into the resource estimates for the entire quantum computer. A 2× loss in decoder accuracy can translate into a 2-4× increase in the physical qubit count required for the same computation, because you need a larger code distance to compensate.
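The feedback loop can be sketched with the standard surface-code suppression heuristic, P_L ≈ A·(p/p_th)^((d+1)/2). Everything in the sketch is an illustrative assumption: the prefactor A = 0.1, the 10⁻¹² logical error target, and a less accurate decoder modeled (crudely) as a halved effective threshold.

```python
# Sketch of the decoder-accuracy feedback loop, using the common
# surface-code suppression heuristic P_L ~ A * (p/p_th)**((d+1)/2).
# The prefactor, thresholds, and target are illustrative assumptions,
# not fitted values from any real decoder.

def required_distance(p: float, p_th: float, target: float, A: float = 0.1) -> int:
    """Smallest odd code distance meeting the target logical error rate."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

p, target = 1e-3, 1e-12
d_accurate = required_distance(p, p_th=9e-3, target=target)    # accurate decoder
d_degraded = required_distance(p, p_th=4.5e-3, target=target)  # 2x worse (assumed)

print(d_accurate, d_accurate**2)  # d=23: ~529 physical qubits per logical qubit
print(d_degraded, d_degraded**2)  # d=33: ~1089, roughly double the overhead
```

Under these assumptions, halving the decoder's effective threshold roughly doubles the physical qubit count per logical qubit, the low end of the 2-4× range cited above.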
The most promising recent development in this space is the Cascade convolutional neural network decoder, published by Andi Gu and colleagues at Harvard in April 2026. Cascade exploits the geometric structure of quantum error correction codes by using structure-aware 3D convolutions that respect the code geometry. On the [[144, 12, 12]] Gross code (a bivariate bicycle qLDPC code), Cascade achieved logical error rates of 10⁻¹⁰ at a physical error rate of 0.1%, representing a 17-fold improvement over the best prior practical decoders. Perhaps more striking, Cascade revealed what the authors call a “waterfall” regime of error suppression, where logical error rates fall far more steeply than the naive distance scaling predicts. On the Gross code, the error suppression followed a P_L ~ p^{11} scaling, dramatically exceeding the p^{6.4} scaling predicted by the code distance alone.
The implication is significant for CRQC resource estimates: if neural decoders can consistently achieve waterfall-regime error suppression, quantum computers may need fewer physical qubits than current estimates assume, because the effective error correction performance exceeds what the code distance alone predicts. This is precisely the kind of result that could shift the CRQC timeline, and it came from the decoder research, not from a qubit count announcement.
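The exponents translate into concrete suppression factors. The arithmetic below is just that: the exponents are the quoted Cascade figures, while the factor-of-two improvement in physical error rate is a hypothetical input.

```python
# With P_L proportional to p^k, improving the physical error rate by a
# factor r suppresses the logical error rate by r^k. The exponents are
# the quoted Cascade results; the 2x hardware improvement is hypothetical.

def suppression_gain(rate_improvement: float, exponent: float) -> float:
    """Factor by which the logical error rate drops."""
    return rate_improvement ** exponent

print(round(suppression_gain(2.0, 6.4)))   # ~84x from distance scaling alone
print(round(suppression_gain(2.0, 11.0)))  # 2048x in the waterfall regime
```

In other words, under these exponents the same incremental hardware improvement buys roughly 24× more error suppression in the waterfall regime than naive distance scaling predicts, which is why the result matters for resource estimates.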
Separately, a team including Google researchers demonstrated a self-coordinating neural network decoder benchmarked on the Zuchongzhi 3.2 superconducting processor, showing that a single TPU v6e could decode surface codes with distances up to 25 within 1 μs per decoding round. This is the first demonstration of a neural decoder meeting superconducting-timescale requirements at a code distance approaching practical relevance.
The risk with neural decoders should be stated clearly: they insert a machine learning inference pipeline into a microsecond-latency critical path. The decoder must produce correct results with extremely high reliability (a single decoding failure at the wrong time can corrupt the entire computation). Neural networks are probabilistic by nature, and their failure modes are harder to characterize than those of deterministic algorithms. Training data requirements, generalization across different error models, and the challenge of deploying GPU/TPU inference hardware alongside cryogenic quantum processors all present engineering obstacles that must be solved before neural decoders are production-ready. But the accuracy advantages are compelling enough that the research community is investing heavily in this direction.
The Modality Dependency
The decoder bottleneck is not equally severe for all quantum computing hardware, and this has direct implications for which quantum computing approaches pose the earliest cryptographic threat.
Superconducting qubits, which currently lead in qubit count (Google, IBM, and others operate processors with 50-1,000+ qubits), have error correction cycle times on the order of 1 microsecond. The decoder must complete its work within that microsecond to avoid backlog. This is the most demanding timing constraint in the quantum computing stack, and it is why FPGA and ASIC implementations are necessary: software decoders running on conventional CPUs cannot reliably meet microsecond deadlines for codes at the distances required for cryptographic applications. The superconducting decoder challenge is fundamentally a hardware problem. The algorithm must be compiled into dedicated circuitry (FPGA or ASIC) that executes in deterministic time, without the variable-latency interrupts, cache misses, and scheduling jitter that make general-purpose CPUs unsuitable for hard real-time constraints.
Trapped-ion qubits have cycle times on the order of 1 millisecond, giving the decoder roughly 1,000 times more time per round. This three-order-of-magnitude difference transforms the decoder problem from a hardware challenge into a software challenge. IonQ’s April 2025 work demonstrated that software decoders running on commodity CPUs can keep pace with trapped-ion hardware. Their analysis estimated that 3 CPUs with 32 cores each could decode 1,000 logical qubits for trapped-ion systems. This is a remarkable result: it means the decoder bottleneck for trapped ions can potentially be solved with off-the-shelf computing hardware, without the custom FPGA/ASIC development that superconducting systems require.
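A back-of-envelope check shows why the quoted configuration is plausible. This is pure arithmetic on the figures above; the actual per-round decoding cost depends on the code and decoder.

```python
# What the IonQ estimate implies per core: 3 CPUs x 32 cores decoding
# 1,000 logical qubits with ~1 ms trapped-ion QEC cycles. The derived
# values are arithmetic on the figures quoted in the text.

cycle_time_s = 1e-3        # trapped-ion QEC round, roughly 1 ms
logical_qubits = 1_000
cores = 3 * 32             # 96 cores total

qubits_per_core = logical_qubits / cores                     # ~10.4 each
budget_per_round_us = cycle_time_s / qubits_per_core * 1e6   # ~96 us

print(f"{qubits_per_core:.1f} logical qubits per core")
print(f"{budget_per_round_us:.0f} us of CPU time per logical qubit per round")
```

Roughly 96 µs of CPU time per logical qubit per round is a comfortable software budget; the superconducting equivalent is under 1 µs, which is why that modality needs FPGAs or ASICs.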
Neutral atom systems (QuEra, Pasqal, Atom Computing) fall somewhere between these extremes, with cycle times typically in the tens to hundreds of microseconds range. Photonic systems (PsiQuantum, Xanadu) have their own timing characteristics depending on the architecture. Each modality faces its own version of the decoder challenge, calibrated to its own cycle time.
This modality dependency has a direct implication for threat timeline assessment. The security community tends to track the modality with the most qubits (superconducting), but the modality that can most easily solve the decoder bottleneck (trapped ions) may reach CRQC-scale fault tolerance sooner despite having fewer physical qubits. A trapped-ion system with 100,000 physical qubits and a real-time software decoder could be more cryptographically threatening than a superconducting system with 1 million physical qubits and a decoder that cannot keep up. Qubit count without decoder performance is not a meaningful threat metric.
Quantinuum’s trapped-ion systems have demonstrated the highest reported two-qubit gate fidelities in the industry, and their logical qubit demonstrations have used real-time decoding. If Quantinuum or IonQ can scale their trapped-ion qubit counts while maintaining the decoder advantage, the CRQC may arrive through a path that most qubit-count-focused threat assessments would miss entirely.
For readers who track the taxonomy of quantum computing modalities I maintain, the decoder bottleneck is one of the reasons I advise against fixating on any single modality when assessing quantum threat timelines.
Scaling to Cryptographic Relevance
The gap between current decoder demonstrations and CRQC requirements is large and precisely quantifiable.
Gidney’s 2025 paper estimated that breaking RSA-2048 requires approximately 1,399 logical qubits with under 1 million physical qubits and under one week of runtime. Under a surface code architecture, each logical qubit at the required code distance (roughly d=23-27) produces approximately 500-730 syndrome measurements per error correction cycle. The decoder must handle all 1,399 logical qubits simultaneously, each producing its own syndrome stream, continuously, for the duration of the computation.
To put concrete numbers on this: 1,399 logical qubits at distance 23, with syndrome extraction every microsecond, produce roughly 740 billion syndrome measurements per second. Sustained over one week of computation, that is approximately 4.5 × 10¹⁷ syndrome measurements that must be decoded correctly in real time. A single decoding error at a critical point in the computation can propagate into a logical failure that invalidates the entire result. The decoder must maintain both speed and accuracy across this entire duration, without interruption, without backlog, and without memory leaks or performance degradation over time.
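The syndrome volume follows mechanically from the code parameters. A few lines reproduce it (using d² as the per-logical-qubit check count; the exact stabilizer count for a rotated surface code is d² − 1):

```python
# Syndrome volume implied by the RSA-2048 parameters in the text:
# 1,399 logical qubits, distance-23 patches, one QEC round per
# microsecond, sustained for one week.

logical_qubits = 1_399
distance = 23
checks_per_cycle = distance ** 2      # ~529 syndrome bits per logical qubit
cycle_rate_hz = 1_000_000             # 1 us QEC rounds
week_s = 7 * 24 * 3600

per_second = logical_qubits * checks_per_cycle * cycle_rate_hz
per_week = per_second * week_s

print(f"{per_second:.1e} syndrome measurements per second")  # ~7.4e11
print(f"{per_week:.1e} over one week")                       # ~4.5e17
```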
The Chevignard et al. EUROCRYPT 2026 paper showed that breaking P-256 ECDSA requires 1,193 logical qubits. The decoder requirements are proportionally similar, with a somewhat smaller total syndrome volume but the same fundamental challenges of speed, accuracy, and sustained operation.
Where do current demonstrations stand? The Riverlane/Rigetti FPGA decoder (October 2024) demonstrated sub-1 μs decoding for an 8-qubit stability experiment on Rigetti’s Ankaa-2 processor. This was a genuine milestone: the first real-time FPGA-based decoder integrated into the control system of a superconducting quantum processor and demonstrated on real hardware, not simulation. But 8 qubits is a long way from 1,399 logical qubits.
Riverlane’s Local Clustering Decoder (LCD), announced in December 2025, demonstrated sub-microsecond decoding with adaptive noise tracking on FPGA hardware. It is deployed with Infleqtion, Oxford Quantum Circuits, and Oak Ridge National Laboratory. Riverlane’s collision clustering decoder, in ASIC implementation, demonstrated MHz decoding speed for surface codes up to 1,057 physical qubits (roughly 30-35 logical qubits at d=5). Riverlane’s Deltaflow 3, expected in late 2026, will introduce “streaming logic” for continuous error correction, which is essential for the long-duration computations that Shor’s algorithm requires.
IBM’s research team (November 2025) published FPGA-tailored algorithms for real-time decoding of quantum LDPC codes, analyzing three decoder classes (message passing, ordered statistics, and clustering) and concluding that message passing via their Relay decoder is the most viable route to real-time qLDPC decoding. This work is specifically targeting the decoder challenge for IBM’s qLDPC-based architecture.
The trajectory is encouraging: from no real-time decoder demonstrations in 2023 to 8-qubit real-hardware demonstrations in 2024 to ~1,000-qubit simulated demonstrations in 2025 to neural decoders meeting microsecond requirements at distance 25 in 2026. The progress curve is steep. But the destination (1,399 logical qubits, continuously, for days) remains far ahead.
The qLDPC Complication
Quantum low-density parity-check (qLDPC) codes are the most promising path to reducing the physical qubit overhead required for fault-tolerant quantum computing. Surface codes, the dominant error correction scheme today, encode one logical qubit across roughly d² physical qubits (so a distance-23 surface code uses about 529 physical qubits per logical qubit). qLDPC codes can encode multiple logical qubits with far fewer physical qubits per logical qubit, potentially reducing the total physical qubit count for a CRQC by an order of magnitude. This is why architectures like Pinnacle propose qLDPC codes for CRQC-scale computations, and why IBM is investing heavily in bivariate bicycle codes (a specific family of qLDPC codes) as the foundation of their fault-tolerant roadmap.
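The overhead difference is easy to see at matched distance. The comparison below counts data qubits only, ignoring ancillas on both sides to keep it simple:

```python
# Encoding-rate comparison at equal distance 12: a rotated surface code
# patch uses d^2 data qubits for one logical qubit, while the
# [[144, 12, 12]] Gross code uses 144 data qubits for twelve logical
# qubits. Ancilla qubits are ignored on both sides for simplicity.

surface_per_logical = 12 ** 2      # 144 data qubits, 1 logical qubit
gross_per_logical = 144 / 12       # 12 data qubits per logical qubit

print(surface_per_logical / gross_per_logical)  # 12.0: an order of magnitude
```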
But qLDPC codes have more complex syndrome structures than surface codes. A surface code’s syndrome graph is a 2D planar grid, which has simple topological properties that decoders can exploit. Algorithms like union-find and matching are natural fits for this planar structure. A qLDPC code’s syndrome graph is a higher-dimensional, non-planar structure with complex connectivity. The decoding problem becomes more like solving a graph optimization problem on an irregular graph with long-range connections, rather than a regular 2D lattice. The algorithms that work well for surface codes do not directly apply, and new decoder architectures are needed.
IonQ’s December 2025 work specifically addressed qLDPC decoding, achieving a 26× improvement in worst-case decoding latency for qLDPC codes. But this was on trapped-ion timescales (milliseconds), where the timing constraint is relaxed. IBM’s FPGA-tailored qLDPC decoder work (November 2025) is targeting superconducting timescales. Their analysis of three decoder classes concluded that message passing (specifically their Relay decoder) is the most viable route to real-time qLDPC decoding on FPGAs. They found that the ordered statistics and clustering approaches, though accurate in principle, were both slower and less accurate than message passing when implemented on FPGA hardware. This is a significant finding: it narrows the design space and focuses the engineering effort on a specific algorithmic approach.
The Cascade neural decoder’s April 2026 results on the Gross code (a qLDPC code) are the most promising development for qLDPC decoding to date. By learning the geometric structure of the code, Cascade achieved error suppression that dramatically exceeded what previous decoders could accomplish on qLDPC codes. The “waterfall” regime Cascade discovered suggests that qLDPC codes may be even more powerful than their theoretical distance suggests, but only with the right decoder. A mediocre decoder sees a mediocre code. The right decoder sees a code that suppresses errors far more aggressively than the distance alone predicts.
The decoder is the reason qLDPC codes’ theoretical advantages have not yet translated to experimental demonstrations on real hardware. Solving the qLDPC decoder problem is the critical path to capitalizing on qLDPC’s encoding rate advantages. If qLDPC decoders are solved for superconducting timescales, the physical qubit count for a CRQC drops substantially, potentially bringing the CRQC timeline forward by years. If they are not solved, CRQC-scale quantum computing must rely on surface codes, which require more physical qubits but have better-understood decoding. Either way, the decoder is the determining factor.
What This Means for the CRQC Timeline
Qubit count and error rates get the headlines. Decoder performance determines whether those qubits can actually be used for fault-tolerant computation. A quantum computer that can correct errors offline, by collecting syndrome data and processing it after the computation is finished, is a science experiment. A quantum computer that corrects errors in real time is a computation engine. The decoder is the capability that converts one into the other.
This distinction matters for security planning because most quantum computing milestones reported in the media are closer to the “science experiment” end of the spectrum than the “computation engine” end. When a research group demonstrates error correction on a small surface code by post-processing syndrome data on a classical computer hours after the quantum experiment, they have demonstrated that their hardware can produce correctable errors. They have not demonstrated that their hardware can compute with error correction in real time. The decoder is the missing piece between those two capabilities, and it is the piece that almost never makes the headline.
For security professionals tracking the quantum threat through my CRQC Quantum Capability Framework, the decoder bottleneck has several specific implications for timeline assessment.
The gap between current demonstrations and CRQC requirements is significant but narrowing. In 2023, no real-time decoder had been demonstrated on quantum hardware. By mid-2026, FPGA decoders are deployed on real superconducting processors, neural decoders are achieving microsecond latency at relevant code distances, and software decoders have been shown to keep pace with trapped-ion hardware for 1,000 logical qubits. The progress trajectory is steep. Extrapolating it is dangerous (engineering progress is not linear, and the problems that remain may be harder than the problems that have been solved), but the direction is unambiguous.
The decoder problem is modality-dependent, which means the CRQC threat is modality-dependent. Trapped-ion systems with their relaxed timing requirements may achieve CRQC-scale decoding through software alone, while superconducting systems require specialized FPGA or ASIC hardware. A threat assessment that only tracks the modality with the highest qubit count misses the possibility that a different modality solves the decoder problem first. This is one of the reasons the CRQC Capability Framework tracks multiple dimensions rather than a single qubit-count metric.
The Cascade neural decoder’s “waterfall” effect could meaningfully shift resource estimates. If neural decoders consistently achieve error suppression beyond what code distance predicts, the number of physical qubits required for a CRQC could be substantially lower than current estimates. Gidney’s 2025 estimate of under 1 million physical qubits already assumed a competent decoder; if the decoder is not just competent but achieves waterfall-regime error suppression, the physical qubit requirement could drop further. This is exactly the kind of development that shifts the CRQC timeline: not a new qubit modality or a new error rate record, but a classical algorithm innovation that changes the relationship between physical resources and computational capability.
This result is preliminary and requires validation across code families and hardware implementations. The waterfall regime was demonstrated on specific qLDPC codes and may not generalize to all code families. The GPU inference latency, while fast by CPU standards, is not yet equivalent to the deterministic sub-microsecond latency that FPGA decoders provide. But Cascade illustrates a broader pattern: the decoder research community is finding that the speed-accuracy frontier is not fixed. New algorithmic ideas can shift it, and when they do, the resource estimates for the entire quantum computing stack change with them. This is why decoder papers can be more informative about the CRQC timeline than qubit count announcements.
The Balanced Assessment
This article fights on both fronts of PostQuantum.com’s editorial philosophy, because the decoder bottleneck is precisely the kind of issue that both hype and denialism get wrong.
Against hype: When a company announces that it has 1,000 qubits or 10,000 qubits, the security community should ask: Can they decode fast enough to use those qubits for fault-tolerant computation? At what code distance? For how many logical qubits? Without satisfactory answers to these questions, the qubit count is a manufacturing achievement, not a computational capability. A chip with 10,000 physical qubits and no real-time decoder is, from a cryptographic threat perspective, no more dangerous than a chip with 100 qubits and no real-time decoder. Neither can run Shor’s algorithm fault-tolerantly.
The decoder bottleneck is a concrete, technical reason why qubit count alone is a misleading metric for assessing Q-Day timelines, and why I have warned against qubit-count-driven Q-FUD and the Q-Day confidence crisis that results from mistaking manufacturing milestones for computational ones. The next time a quantum computing company issues a press release celebrating a qubit milestone, look for the decoder. If it is not mentioned, the milestone is incomplete.
Against dismissiveness: The decoder problem is hard, but it is an engineering problem, not a physics problem. There is no fundamental physical law preventing its solution. No theorem states that real-time decoding at CRQC scale is impossible. The problem is bounded: the input is a known-format stream of syndrome data, the output is a correction decision, and the latency requirement is fixed by the hardware’s cycle time. This is exactly the kind of problem that FPGA designers, ASIC architects, and machine learning engineers are good at solving.
And they are solving it. FPGA decoders are demonstrating sub-microsecond performance on real hardware. Neural decoders are approaching microsecond latency at practically relevant code distances. Software decoders can already keep pace with trapped-ion hardware for 1,000 logical qubits. The collision clustering decoder has reached 1,057-qubit surface codes in ASIC simulation at MHz speed. The progress in decoder research over the past two years has been remarkable by any measure.
Anyone who cites the decoder bottleneck as a reason to dismiss the quantum threat or delay PQC migration is making the same mistake in the opposite direction as those who cite qubit counts as proof that Q-Day is imminent. The problem is hard and the problem is being solved. Both statements are true simultaneously. As I argued in my rebuttal of Tim Palmer’s 1,000-qubit ceiling thesis, engineering obstacles should inform timeline estimates, not be used to justify inaction.
The decoder should not be used as an excuse to delay migration. As I have argued repeatedly, the deadlines are already set by regulators, standards bodies, and industry mandates. NIST IR 8547 targets 2030 deprecation and 2035 disallowance of quantum-vulnerable algorithms. These deadlines do not move based on decoder progress. Whether the decoder bottleneck delays CRQC by five years or is solved next year, the action for CISOs remains the same: start your PQC migration now, because the migration timeline is measured in years, and the decoder gap may close faster than your migration completes.
What Security Leaders Should Track
For the CISO and CTO audience that PostQuantum.com serves, the decoder bottleneck translates into five actionable considerations.
Watch decoder research as closely as qubit research. Papers from Riverlane, IBM’s QEC team, Google’s quantum AI group, the Harvard Cascade team, and the broader AI decoder community are more informative about the CRQC timeline than qubit-count press releases. When a team demonstrates real-time decoding at a new code distance on real hardware, that is a capability milestone that advances the CRQC timeline. When a team announces a new qubit count without demonstrating real-time error correction at scale, that is a manufacturing milestone with uncertain computational implications.
Pay attention to trapped-ion progress. The security community’s mental model of the quantum threat is dominated by superconducting qubits because Google, IBM, and others generate the most visible qubit-count headlines. But trapped-ion systems (IonQ, Quantinuum) have a structural advantage on the decoder problem: their slower cycle times give the decoder 1,000× more time per round. A trapped-ion system that solves the decoder problem with commodity CPUs could reach CRQC-scale fault tolerance through a fundamentally different path than superconducting systems. The quantum threat does not come only from the modality with the most qubits.
Understand why qubit-count headlines are misleading for threat assessment. When evaluating vendor announcements or media coverage, ask three questions: Can they decode in real time? At what code distance? For how many logical qubits simultaneously? If these questions are not answered, the announcement tells you about manufacturing capability, not about computational capability. The CRQC Quantum Capability Framework provides the analytical structure for this kind of assessment across all ten capability dimensions.
Do not use the decoder bottleneck as an excuse to delay PQC migration. The bottleneck is real and it adds uncertainty to the CRQC timeline. But migration programs measured in years cannot afford to bet on that uncertainty. The decoder gap may close rapidly, as the progress since 2023 demonstrates. Regulatory deadlines are already binding. The HNDL threat is active today regardless of when a CRQC arrives. Start your migration with PQCFramework.com and Practical Steps to Quantum Readiness.
Follow the full capability picture, not single metrics. The path to a CRQC is not a single number. It is a system of interdependent engineering challenges: error correction, syndrome extraction, below-threshold operation, decoder performance, magic state production, algorithm integration, continuous operation, and engineering scale. The decoder is one dimension among ten. It may be the most consequential single dimension for timeline prediction, but a CRQC requires all ten to be solved. The Framework tracks them all for exactly this reason. If your current threat assessment methodology reduces the quantum threat to a single qubit-count metric, the CRQC Quantum Capability Framework provides the multi-dimensional model that security planning actually requires.
The Invisible Bottleneck
The quantum computing field has trained the world to count qubits. Every new processor is announced with a qubit count in the headline. Conference talks lead with qubit numbers. Investor presentations chart qubit growth on logarithmic scales. The implied narrative is simple: more qubits equals more power, and enough qubits equals a quantum computer that can break cryptography.
That narrative is incomplete in several ways, but the decoder is perhaps the most consequential omission. A million qubits producing a million syndrome measurements per microsecond, with no classical system capable of processing those measurements fast enough, is a million qubits generating noise, not computation. The decoder is the capability that converts physical qubits into computational power, and it is the least visible engineering challenge on the path to a CRQC.
It receives a fraction of the media attention that qubit milestones receive. It is rarely mentioned in vendor press releases. It is almost never discussed in the security community’s quantum risk assessments. Most Q-Day prediction exercises focus on qubit counts and error rates, treating the decoder as an implementation detail that will be solved when the time comes. My CRQC Readiness Benchmark includes decoder performance as an explicit parameter precisely because omitting it produces unrealistically optimistic timelines.
The decoder may be the least visible of the ten capabilities I track. It may also be the most consequential for predicting when the quantum threat to cryptography becomes real.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.