
Q-Day Revisited – RSA-2048 Broken by 2030: Detailed Analysis

Introduction

It’s time to mark a controversial date on the calendar: 2030 is the year RSA-2048 will be broken by a quantum computer. That’s my bold prediction, and I don’t make it lightly.

In cybersecurity circles, the countdown to “Q-Day” or Y2Q (the day a cryptographically relevant quantum computer cracks our public-key encryption) has been a topic of intense debate. Lately, the noise has become deafening: some doom-and-gloom reports insist the quantum cryptopocalypse is just a year or two away, or is already here in secret government labs, while hardened skeptics claim it’s so distant as to never happen. The truth lies between these extremes.

As someone who’s been tracking quantum computing progress and making public Q-Day predictions for over 15 years, I’ve consistently argued that it’s not enough to watch the raw count of qubits in labs. I previously developed The Path to CRQC – A Capability‑Driven Method for Predicting Q‑Day, which predicts Q-Day by analyzing progress across eight foundational capabilities that determine whether a quantum computer can actually threaten cryptography. In CRQC Readiness Benchmark I then proposed rolling these capabilities up into three critical levers:

  • Logical Qubit Capacity (LQC): How many error-corrected qubits you can field
  • Logical Operations Budget (LOB): How deep a circuit you can run before failure
  • Quantum Operations Throughput (QOT): How fast you can execute logical operations

By using these approaches, I update my forecast every few years. For years I held to an estimate of 2032 for Q-Day – but a string of major developments in just the last few weeks has compelled me to move that prediction forward. Three pieces of recent news triggered this reassessment, each hitting a different “axis” of quantum progress: algorithmic efficiency, hardware error rates, and engineering roadmaps.

As I summarized, in just the past few weeks, researchers have slashed (physical) qubit requirements for factoring RSA-2048 from millions to under one million, demonstrated quantum gate fidelities at or beyond the threshold needed for effective error correction, and laid hardware roadmaps for large-scale fault-tolerant quantum computers by the end of this decade. In short, the pieces needed to factor a 2048-bit RSA key are rapidly falling into place. While this doesn’t mean an overnight collapse of cryptography, it does mean governments and industry must urgently recalibrate their post-quantum migration plans. The three recent breakthroughs I analyzed in separate posts:

  1. First, a new factoring algorithm published by Google researchers slashed the qubit count needed to factor RSA-2048 by an order of magnitude, which I analyzed here: “Quantum Breakthrough Slashes Qubit Needs for RSA-2048 Factoring.”
  2. Second, physicists at Oxford achieved a record-breaking low error rate in quantum operations (only 1 error in 6.7 million), foreshadowing much lower overhead for error correction. I summarize their paper here: “Oxford Achieves 10⁻⁷-Level Qubit Gate Error, Shattering Quantum Fidelity Records.”
  3. And third, IBM unveiled a detailed roadmap promising a fault-tolerant quantum computer by 2029, years ahead of many expectations. I analyzed this announcement here: “IBM’s Roadmap to Large-Scale Fault-Tolerant Quantum Computing (FTQC) by 2029 – News & Analysis.”

These advances, in algorithmics, error correction, and scalable hardware, all point to the same conclusion: the timeline to a cryptanalytically (or cryptographically) relevant quantum computer (CRQC) is accelerating.

The punchline? If current trends hold, a quantum computer capable of breaking RSA-2048 will likely exist by around 2030 (± 2 years). That doesn’t mean internet encryption collapses overnight or that we should all panic. But it does mean the prudent window for migrating to quantum-safe cryptography is right now. The latest science has shifted Q-Day from an “if” to a concrete question of “when,” and the smart bet is “sooner than previously thought.” Let’s explore why.

Understanding the CRQC Challenge Through Eight Capabilities

Before diving into recent breakthroughs, we need to understand what actually constitutes a cryptographically relevant quantum computer (CRQC). Building on my detailed analysis in The Path to CRQC – A Capability‑Driven Method for Predicting Q‑Day, achieving Q-Day requires mastering eight distinct capabilities:

Foundational Capabilities (the “make-or-break” requirements):

  1. Quantum Error Correction (QEC) – Encoding logical qubits from many physical qubits
  2. Syndrome Extraction – Continuously measuring errors without destroying quantum data
  3. Below-Threshold Operation & Scaling – Maintaining error rates low enough that adding qubits helps

Core Logical Operations (what lets you actually compute):

  4. High-Fidelity Logical Clifford Gates – Fast, reliable basic quantum operations
  5. Magic State Production & Injection – The dominant bottleneck for non-Clifford gates needed in Shor’s algorithm

End-to-End Execution (making it all work together):

  6. Full Fault-Tolerant Algorithm Integration – Orchestrating everything to run a complete algorithm
  7. Decoder Performance – Real-time classical processing to correct errors as they happen
  8. Continuous Operation – Running stably for days without failures

Each capability has measurable requirements, current status, and interdependencies. For RSA-2048, Gidney’s May 2025 analysis established the target:

  • Logical qubits (LQC): ~1,399 logical qubits
  • Physical qubits: ~1 million physical qubits total
  • Code distance: 25 (using surface codes)
  • Runtime: ~5 days
  • Toffoli gates (LOB): ~6.5 billion non-Clifford operations
  • Physical error rate: 0.1% (1 error per 1,000 operations)
  • Cycle time (QOT input): 1 microsecond
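One consequence of these targets is worth making explicit: the runtime and cycle time together fix the total number of error-correction cycles the machine must survive. A quick sketch using only the numbers quoted above:

```python
# Cross-check: total QEC cycles implied by the targets above
# (runtime and cycle time are the values quoted from Gidney's 2025 estimate).
RUNTIME_DAYS = 5           # ~5 days of continuous operation
CYCLE_TIME_US = 1          # 1 microsecond per surface-code cycle

runtime_us = RUNTIME_DAYS * 24 * 3600 * 1_000_000
total_cycles = runtime_us / CYCLE_TIME_US
print(f"Total QEC cycles: {total_cycles:.2e}")  # ~4.3e11 cycles
```

Roughly 4.3×10¹¹ cycles, every one of which the error-correction machinery must execute without an uncorrectable failure.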

The three recent breakthroughs I’ll analyze have each moved the needle on specific capabilities, collectively bringing us dramatically closer to meeting these requirements.


Three Breakthrough Developments and What They Actually Mean


Gidney’s Algorithmic Breakthrough: Slashing the Logical Operations Budget

Primary Impact: LOB (Logical Operations Budget) & Magic State Production (Capability 5)

Craig Gidney’s May 2025 paper represents a quantum leap (pun intended) in Capability 5: Magic State Production & Injection, the capability widely recognized as the primary bottleneck for CRQC. To understand why this matters, we need to trace the remarkable evolution of Shor’s algorithm requirements.

Shor’s Algorithm and the Road to Fewer Qubits

The quantum threat to RSA encryption originates with Shor’s algorithm, discovered in 1994, which theoretically allows a large quantum computer to factor integers (and thus break RSA) exponentially faster than any classical method. In simple terms, Shor’s algorithm uses a quantum routine to find the secret period of a modular arithmetic function, from which the prime factors of an RSA modulus can be deduced.

The catch: implementing Shor’s algorithm for a 2048-bit number has always looked prohibitively demanding in terms of quantum resources. Early estimates were downright astronomical. Around 2012, researchers estimated that factoring a 2048-bit RSA key might require on the order of 10⁹ physical qubits under then-known techniques. That figure – a billion qubits – put Q-Day safely beyond any near-term horizon.

Even a few years ago, in 2019, more refined analysis by Craig Gidney and Martin Ekerå brought the requirement down but still pegged it at roughly 20 million physical qubits to factor RSA-2048 in about 8 hours. These numbers seemed fantastical when real devices had only tens of qubits. It’s no wonder many experts felt RSA-2048 would remain secure well into the 2030s or 2040s.

But here’s the thing about cryptographic attacks: given enough brilliant minds and time, they almost always get better.

The Beauregard Breakthrough (2003): Proving Low Qubit Counts Were Possible

The first big breakthrough came from mathematician Stéphane Beauregard in 2003, who showed that you could factor an n-bit number using roughly 2n + 3 logical qubits. In principle, that’s only ≈4,100 logical qubits for n = 2,048.

Beauregard’s approach cleverly re-used quantum registers to cut down qubit count (in contrast to doing everything in parallel which would need ~3n qubits). The trade-off was time: his circuit had enormous depth (scaling on the order of n³ operations) which made it vulnerable to errors. Still, it demonstrated that in theory Shor’s algorithm wasn’t outrageously qubit-hungry – it could run with thousands of qubits, not trillions, if you didn’t mind running it very slowly.
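Beauregard’s 2n + 3 scaling is simple enough to evaluate directly; a minimal sketch of the count it implies for RSA-2048:

```python
# Beauregard (2003): ~2n + 3 logical qubits to factor an n-bit RSA modulus.
def beauregard_logical_qubits(n_bits: int) -> int:
    return 2 * n_bits + 3

print(beauregard_logical_qubits(2048))  # 4099, the "~4,100" figure above
```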

Back then, however, even thousands of error-corrected qubits implied millions of physical qubits, once error correction overhead was accounted for. So Beauregard’s result was academically interesting but didn’t change the bottom line that a CRQC was far out of reach.

In capability terms: Beauregard showed that Capability 5 (non-Clifford gates) was the real bottleneck, not total qubit count. The problem was doing those operations reliably.

Gidney & Ekerå (2019): Making It Practical

Fast-forward to 2019, and we saw a major step toward practicality. Gidney and Ekerå revisited Shor’s algorithm with a bag of modern optimization tricks. They combined improved arithmetic circuits (like better adders and multipliers), qubit recycling strategies, and finely tuned space-time tradeoffs.

The result: RSA-2048 could be factored with about 6,000 logical qubits in an 8-hour quantum computation. Under reasonable error-correction assumptions, that translated to roughly 20 million physical qubits.

While 20 million is still huge, this was a 50× improvement over some prior estimates. Gidney & Ekerå’s paper became a landmark, often cited as the “state of the art” target for a cryptographically relevant quantum computer. It suggested that if you could build a machine with tens of millions of qubits and keep it running for a day, you could crack RSA-2048. In other words, the quantum “wall” defending RSA was high, but perhaps not unclimbable in a multi-decade timeframe.

In capability terms: This established concrete targets for LQC (~6,000 logical qubits), LOB (depth limited by 8-hour runtime), and QOT (needing fast enough throughput to complete in hours, not years).

The Chevignard Approach (2024): Ultra-Low Qubits, Impractical Runtime

The relentless drive to reduce qubit count didn’t stop there. By 2021-2022, researchers were exploring even more radical ways to trade off quantum space (qubits) for time (number of operations).

A breakthrough came in late 2024 when Chevignard, Fouque, and Schrottenloher introduced an algorithm using an Approximate Number System for modular exponentiation. Without diving into heavy detail: they found a way to perform the modular multiplications “piecewise,” using a tiny quantum register that handles a few bits of the number at a time, rather than holding the entire 2048-bit number in quantum memory.

By recycling a small set of qubits over and over for each chunk of the calculation (and cleverly tolerating some approximation error that can later be corrected), they slashed the logical qubit requirement dramatically, down to roughly 1,730 logical qubits for RSA-2048. That’s about half the qubits of the Gidney-Ekerå approach.

The catch? Their method required many more sequential steps. In fact, it would take on the order of 2³⁶ quantum operations (roughly 70 billion) repeated about 40 times. This could mean running the quantum computer non-stop for months for a single factorization – an eternity in quantum-coherence terms. So while the ~1,700 logical qubit ballpark was astonishingly low, the algorithm was impractically slow given foreseeable error-corrected clock speeds.

In capability terms: This dramatically reduced LQC requirements but exploded LOB (operations budget) and made QOT (throughput) requirements impossible with current Capability 8 (continuous operation) maturity.

Gidney 2025: The Best of Both Worlds

Now, here’s where 2025 changed the game. Gidney’s new paper (May 2025) essentially took the best of both worlds: the low qubit footprint of the approximate method and the manageable runtime of the 2019 approach.

His recipe shows “How to factor 2048-bit RSA with less than a million noisy qubits.” In concrete terms, Gidney demonstrated that a fully error-corrected quantum computer could factor a 2048-bit RSA key in under one week with 1,399 logical qubits (including ancilla, magic-state, and “idle” qubits) encoded into <1 million physical qubits.

This is roughly a 20× reduction in qubit count from the 2019 estimate, at the cost of a comparable increase in runtime (from 8 hours to several days). But here’s the crucial insight: a week is still attack-relevant. Days of computation is feasible. Months or years is not.

What Changed: Magic State Cultivation and the LOB Collapse

At the heart of Gidney’s 2025 method are clever space-time optimizations and error-correction-aware techniques that directly address Capability 5: Magic State Production & Injection.

Here’s what most people miss about Shor’s algorithm: the vast majority of the computational cost comes from non-Clifford gates – specifically T gates and Toffoli gates. In Gidney’s design, you need roughly 6.5 billion Toffoli-equivalents. Each of these requires a high-fidelity “magic state” to execute fault-tolerantly.

Traditional magic state distillation is brutally expensive: you might need thousands of physical qubits and dozens of error correction cycles to produce a single high-quality magic state. This was the dominant cost in all previous designs. Magic state factories would consume most of your quantum computer’s resources, becoming the primary bottleneck for QOT (throughput).

Gidney’s breakthrough was magic state cultivation, a new approach developed through 2024-2025. Instead of brute-force distillation, cultivation “grows” high-quality states from lower-quality ones with dramatically reduced overhead. The technique cuts the cost of producing a CCZ state (the key resource for Toffolis) to nearly that of a Clifford operation.

The impact is staggering:

  • Traditional distillation: Might need 10,000+ physical qubits per factory, multiple distillation rounds
  • Cultivation: ~280 logical qubits across 6 factories total, producing states just-in-time

By breaking the huge 2048-bit modular exponentiation into smaller pieces and tolerating tiny calculation errors (that can be corrected later), the algorithm also avoids the old “one qubit per bit of the number” rule. This qubit recycling strategy, combined with “yoked” high-density storage of idle qubits, cuts the logical qubit requirements to the bone.

Crucially, Gidney managed to reduce the total gate count (operations) by over 100× compared to other low-qubit methods, ensuring the runtime doesn’t blow up even as qubit count drops.

The Capability Framework Perspective

In CRQC capability terms, Gidney’s work means:

LOB (Logical Operations Budget) – Dramatic Reduction:

  • Reduced total non-Clifford operations from >10¹³ to ~6.5×10⁹
  • This is a reduction of 3-4 orders of magnitude in the operations budget
  • Makes the depth requirement achievable with current error correction approaches
  • Capability 5 bottleneck shifts from “impossible” to “very challenging but feasible”

LQC (Logical Qubit Capacity) – Factory Efficiency:

  • Only ~280 logical qubits needed for all six magic state factories
  • Remaining ~1,100 logical qubits for computation and storage
  • Physical-to-logical ratio of ~1,000:1 at distance-25 surface code = ~1M physical qubits
  • Within range of IBM’s 2029-2032 roadmap

QOT (Quantum Operations Throughput) – Factory Performance:

  • Six factories producing ≈1 CCZ state every 150 cycles
  • At 1 µs per cycle = ~6,666 CCZ/second across all factories
  • 6.5B operations ÷ 6,666/sec = ~270 hours = ~11 days
  • Week-scale factoring becomes the new benchmark
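The factory arithmetic above is easy to reproduce. A sketch assuming, as stated, that the six factories collectively emit about one CCZ state every 150 cycles at a 1 µs cycle time:

```python
# Reproducing the QOT (throughput) arithmetic above; all inputs are the
# figures quoted in this post, not independently derived.
CYCLES_PER_CCZ = 150       # six factories collectively: ~1 CCZ per 150 cycles
CYCLE_TIME_S = 1e-6        # 1 microsecond per surface-code cycle
TOFFOLI_COUNT = 6.5e9      # non-Clifford operations for RSA-2048

ccz_per_second = 1 / (CYCLES_PER_CCZ * CYCLE_TIME_S)  # ~6,666 CCZ/s
runtime_s = TOFFOLI_COUNT / ccz_per_second
print(f"{ccz_per_second:,.0f} CCZ/s -> {runtime_s / 3600:,.0f} hours "
      f"(~{runtime_s / 86400:.0f} days)")
```

The throughput of the magic state factories, not the qubit count, is what sets the wall-clock time of the attack.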

Interdependency Impact:

  • Directly addresses Capability 5 (currently TRL 2-3), providing a clear blueprint
  • Reduces pressure on Capability 1 (QEC) by lowering qubit count requirements
  • Makes Capability 8 (continuous operation) more feasible – 5 days vs months
  • Eases Capability 7 (decoder performance) by having fewer qubits to monitor

Why This Changes Everything

The trajectory is unmistakable: the “quantum qubit barrier” for breaking RSA has been dropping by orders of magnitude roughly every 5-6 years:

  • 2012: 1,000,000,000 qubits (billion)
  • 2019: 20,000,000 qubits (20 million)
  • 2025: 1,000,000 qubits (1 million)

And importantly, none of these developments violate known physics or require sci-fi tech – they’re working within the constraints of what near-future quantum hardware is expected to achieve. Gidney’s latest work even explicitly aligns with NIST’s conservative planning timeline for quantum risk, noting that migrating to post-quantum cryptography by the early 2030s is vital since attacks only get better.

This rapid algorithmic progress is one big reason I’ve pulled in my Q-Day estimate. When you can track the capability maturation systematically, you see that Capability 5 has gone from “we have no idea how to do this efficiently” (TRL 1-2 in 2020) to “we have a detailed blueprint that works on paper” (TRL 4-5 theoretically, though still TRL 2-3 experimentally).

The remaining gap? Actually building and demonstrating those magic state factories in hardware. But that’s an engineering challenge, not a physics barrier. And with the theoretical overhead reduced by 100×, the engineering becomes far more tractable.


Oxford’s Fidelity Milestone: Enabling Below-Threshold Operation at Scale

Primary Impact: Below-Threshold Operation & Scaling (Capability 3) → Reduces LQC overhead

So far we’ve focused on qubit counts and algorithms, but an equally crucial piece of the Q-Day puzzle is quantum error correction (QEC). No matter how clever your algorithm is, if your qubits are too noisy, you’ll never complete the computation.

That’s why every theoretical qubit estimate (like “1 million qubits needed”) is implicitly talking about physical qubits that are error-corrected to act as a much smaller number of logical qubits. The overhead cost of error correction has traditionally been the biggest barrier to realizing a CRQC. It’s like a tax on quantum computation: if each logical qubit needs 1,000 physical qubits to stay error-free, suddenly your 1,399-logical-qubit algorithm demands ~1.4 million physical qubits.

But what if you could lower that tax? Recent advances suggest we can, by improving the quality of individual qubits – which directly addresses Capability 3: Below-Threshold Operation & Scaling.

The Oxford Achievement: 10⁻⁷ Error Rates

In June 2025, a team at Oxford University announced a milestone in qubit fidelity that truly turned heads. They demonstrated single-qubit gate operations with an error rate below 10⁻⁷ – that is, only one error in ten million operations. In terms of fidelity, that’s 99.99999% accuracy, the highest ever recorded for any quantum hardware.

For perspective, until now the best qubits (in ion traps or superconducting circuits) topped out around 99.9%-99.99% (10⁻³ to 10⁻⁴ error) for single-qubit gates. Hitting the 10⁻⁷ error scale is a big deal – it’s like leaping from a car that occasionally hiccups to one that drives hundreds of miles without a single sputter.

The Oxford group achieved this using a trapped-ion qubit (a calcium-43 ion) manipulated by ultra-stable microwave pulses instead of the usual lasers, which dramatically reduced noise and calibration errors. The ion’s coherence time stretched to an incredible 70 seconds, letting them run millions of gate operations while hardly accumulating any decoherence. In short, they built a qubit that is for practical purposes almost perfectly reliable for single-qubit flips.

Why 10⁻⁷ Matters: The Error Correction Tax

Why does a 10⁻⁷ error rate matter so profoundly? Because in quantum error correction, lower physical error directly translates to lower overhead for achieving a given logical accuracy.

Most Quantum Error Correction (QEC) codes (like the popular surface code) have a threshold error rate around 10⁻² to 10⁻³ – meaning if your qubits are noisier than ~0.1-1%, error correction doesn’t even work. The system crosses below threshold and errors compound faster than you can correct them.

When operating just below threshold (say, 0.1% physical error rate), you might need:

  • Code distance 25: ~1,352 physical qubits per “hot” logical qubit (actively computing)
  • Code distance 15 for storage: ~430 physical qubits per “cold” logical qubit
  • Total overhead: Thousands of physical qubits per logical qubit to push the logical error rate down to, say, 10⁻¹⁵ (good enough for a long computation)

But if your physical error is 10⁻⁷ instead of 10⁻³, the math changes dramatically:

Lower Code Distances Suffice:

  • You might achieve the same logical fidelity (10⁻¹⁵) with only code distance 5-7 instead of 25
  • Distance 5 uses only ~50-100 physical qubits per logical qubit
  • Distance 7 uses ~200-300 physical qubits per logical qubit
  • This is an order of magnitude reduction in the physical-to-logical ratio

Implications for LQC:

  • The 1 million physical qubit target could yield 5,000-10,000 logical qubits instead of ~1,400
  • Alternatively, you could achieve 1,400 logical qubits with only 200,000-300,000 physical qubits
  • Either way, the hardware requirements drop dramatically
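A hedged sketch of why this follows, using the commonly quoted surface-code scaling p_logical ≈ A·(p/p_th)^((d+1)/2). The constants A ≈ 0.1 and threshold p_th ≈ 1% are illustrative assumptions, not values from a specific paper; real resource estimates (including Gidney’s) use more detailed formulas and add routing overhead, so exact figures differ slightly from those above:

```python
# Rough surface-code overhead model: p_logical ~ A * (p/p_th)**((d+1)/2).
# A = 0.1 and p_th = 1e-2 are ASSUMED illustrative constants.
def min_distance(p_phys, p_target=1e-15, p_th=1e-2, A=0.1):
    """Smallest odd code distance reaching the target logical error rate."""
    d = 3
    # the 1.001 factor absorbs floating-point noise at exact boundaries
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target * 1.001:
        d += 2
    return d

def physical_per_logical(d):
    # Rotated surface code: d*d data qubits + (d*d - 1) measurement qubits.
    return 2 * d * d - 1

for p in (1e-3, 1e-7):
    d = min_distance(p)
    print(f"p = {p:.0e}: distance {d}, "
          f"~{physical_per_logical(d)} physical qubits per logical qubit")
```

With these rough constants, a 10⁻³ physical error rate needs distance ~27 (on the order of 1,400 physical qubits per logical qubit), while 10⁻⁷ needs only distance 5 (~49), matching the order-of-magnitude claims above.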

The Capability Cascade Effect

The deeper implication is proving that exponential error suppression with increasing code distance – the fundamental promise of QEC – is achievable in real hardware. Oxford’s work validates the theoretical models we’ve been using for years.

Capability 1: Quantum Error Correction (TRL 4 → 5)

  • Demonstrates that surface codes can work at the fidelities needed for high code distances
  • Proves that physical qubits can be “good enough” that error correction actually helps
  • Shifts QEC from “we think this will work” to “we’ve proven the foundation works”

Capability 2: Syndrome Extraction (TRL 4 → 4-5)

  • With a 70-second coherence time, a qubit survives ~70 million cycles at 1 µs per cycle between decoherence events
  • That is enormous headroom per error-correction round; with syndrome extraction continually refreshing the qubits, sustaining the ~4.3×10¹¹ cycles needed for RSA-2048 factoring (5 days) becomes plausible
  • Proves that measurement errors won’t dominate at these fidelities

Capability 3: Below-Threshold Operation & Scaling (TRL 3-4 → 4-5)

  • This is the big one: Oxford proved you can operate far below threshold
  • At 10⁻⁷, you’re not barely below threshold (~10⁻³), you’re four orders of magnitude below
  • This gives enormous headroom for:
    • Scaling to more qubits (some can be worse, average still excellent)
    • Running longer (rare error events won’t accumulate)
    • Higher code distances with lower overhead

Capability 8: Continuous Operation (TRL 1-2 → 2-3)

  • 70-second coherence without drift shows that hour-to-day scale stability is physically possible
  • The “can we keep qubits stable for 5 days?” question shifts from “maybe impossible” to “requires engineering, not miracles”
  • Automated calibration and drift correction become tractable

The Remaining Gap: Two-Qubit Gates

Now for the sobering reality check. As the Oxford researchers themselves caution, they only demonstrated this fidelity on a single qubit. The hardest part – two-qubit entangling gates – still lags behind.

In the best ion-trap systems and superconducting chips today, two-qubit gate errors are around 0.1% to 1% (10⁻³ to 10⁻²). That’s much worse than 10⁻⁷. Some recent progress:

  • Quantinuum (Honeywell): Two-qubit gates at 99.9% fidelity (10⁻³ error)
  • MIT superconducting: Two-qubit gates at 99.92% fidelity (8×10⁻⁴ error)
  • Oxford’s work: Effectively says “we’ve eliminated single-qubit errors as a concern; now all focus is on multi-qubit gates, memory errors, and readout errors”

The Path Forward:

  • If two-qubit gates can reach 99.99% (10⁻⁴) in 2-3 years → dramatic LQC reduction
  • If they reach 99.999% (10⁻⁵) by late 2020s → physical-to-logical ratio drops 5-10×
  • Even staying at 10⁻³ but with 10⁻⁷ single-qubit gates → still a major improvement in average error rate

The broader point is this: quantum hardware is rapidly improving in quality, not just quantity. There’s a growing “fidelity-first” movement in quantum engineering that prioritizes perfecting a qubit’s performance before scaling up in number.

Why This Shifts the Timeline

The rationale is simple: a few high-quality qubits can teach us how to build thousands later without an unmanageable error burden. As one observer put it, trying to scale up “faulty” qubits is a dead end; it’s better to make qubits much more reliable now so that when we scale to, say, 1,000 qubits, they behave like 1,000 “nearly perfect” qubits and require far less error-corrective overhead.

The Oxford achievement is proof-of-concept that with relentless engineering, physical qubits can approach the kind of error rates that make fault-tolerance almost easy. If that philosophy takes hold across the industry, we could see the effective timeline to a useful quantum computer shorten dramatically.

From a Q-Day perspective, the takeaway is optimistic (for the quantum builders) and ominous (for our crypto safety nets): the hardware error problem is gradually being tamed. When you hear quantum contrarians say “quantum computers will never work because of decoherence and noise,” point them to the Oxford result. Yes, it’s one qubit, but it’s a qubit that ran 10 million operations with essentially no mistakes. That’s strong evidence that noise can be beaten, at least at the single-qubit level.

And as noise falls, the “effective qubits needed” to break RSA falls too. A machine with fewer but cleaner qubits might do the job just as well as one with lots of noisy qubits. This means we have to keep an eye not only on qubit counts but also on fidelity records when estimating Q-Day.

Capability Assessment Update:

  • Capability 3 shifts from “we’re not sure this will work at scale” (TRL 3) to “we’ve proven it can work, now we need to engineer it across many qubits” (TRL 4-5)
  • This reduces uncertainty in LQC projections significantly
  • Makes IBM’s 2029 target of 200 logical qubits more credible (they need less physical overhead per logical qubit)
  • Brings the 1,399 logical qubit target within reach of late 2020s hardware timelines

Hardware Platform Roadmaps and IBM’s 2029 Plan: Proving Logical Qubit Capacity is Within Reach

Primary Impact: LQC (Logical Qubit Capacity) & End-to-End Integration

The final piece of the puzzle is scaling up the hardware. We’ve seen algorithms and error rates trending favorably, but do we have any realistic path to building a quantum computer with, say, one million physical qubits (or equivalently a few thousand logical qubits)?

Until recently, that question elicited eye-rolls – today’s devices only have on the order of hundreds of qubits, and none are fully error-corrected. However, 2023-2025 has seen quantum industry roadmaps become markedly bolder and more concrete about reaching the million-qubit scale. The key players – IBM, Google, and others – are no longer shy about targeting that milestone within the next decade.

IBM’s Quantum “Starling”: The 2029 Milestone

In June 2025, IBM made waves by announcing plans to deliver a fault-tolerant quantum computer by 2029. Codenamed Quantum “Starling”, this system is slated to have about 200 logical qubits, which, IBM notes, would likely be enough to show clear advantages over classical supercomputers on certain tasks.

To get there, IBM is leveraging a modular architecture – and this is where the capability framework really shines in analyzing their approach.

The Single-Chip Ceiling

IBM had already achieved a 1,121-physical-qubit chip in 2023 (the “Condor” processor), effectively hitting the limits of a single-chip design. The problem with building, say, a 10,000-qubit monolithic chip is not just qubit quality – it’s:

  • I/O bottleneck: Getting control signals in and readout signals out
  • Cooling constraints: Dilution fridges can only handle so much heat dissipation
  • Cross-talk and noise: Qubits interfere with each other at high densities
  • Manufacturing complexity: Yield rates drop precipitously with chip size

The Modular Solution

Instead of a single massive chip, IBM is developing modular quantum chips that can be linked. Their 156-qubit “Heron 2” chips, introduced in 2024, are designed to connect to each other via high-fidelity interconnects.

Essentially, IBM’s plan is to build a quantum computer the way we build supercomputers: multiple modules networked together, with both quantum and classical links coordinating them. This is what makes their 2029 timeline credible from a capability standpoint:

Capability 1: Quantum Error Correction

  • 2023 achievement: Entangled 2 logical qubits (encoded on 133 physical qubits total)
  • Fidelity: ~94% for logical entangled state
  • What it proves: Even with current chips, logical qubits can be realized and operated
  • 2027 target: Multi-module systems with error correction running
  • 2029 target: 200 logical qubits = ~100,000 physical qubits at distance 15-20
  • TRL progression: Currently TRL 4 → targeting TRL 6-7 by 2029

Capability 6: Full Fault-Tolerant Algorithm Integration

  • Current status: Only isolated components tested (logical gates, QEC rounds)
  • IBM’s bet: By 2027, run small fault-tolerant circuits end-to-end
  • By 2029: Full integration of 200 logical qubits running coordinated algorithms
  • What this means: They’re explicitly targeting the systems-level integration that’s currently TRL 1-2
  • Risk: This is the biggest unknown – no one has done this before

Capability 7: Decoder Performance

  • Current status: IBM has partnered with decoder specialists (Riverlane, others)
  • Achievement: FPGA decoders handling hundreds of qubits at microsecond latency
  • 2029 requirement: Scale to 100,000+ physical qubits in real-time
  • Assessment: This is an engineering challenge, not a science problem (TRL 5 already)

The Confidence Signal

IBM’s Chief of Quantum, Jay Gambetta, stated: “We’ve answered the science questions… now it’s an engineering challenge.” This is a crucial statement for capability assessment.

What he’s saying:

  • Capabilities 1-3 (foundational): The physics works, proven in lab
  • Capability 4 (Clifford gates): Well understood, scaling is straightforward
  • Capability 5 (magic states): The hard one, but Gidney’s work provides the blueprint
  • Capabilities 6-8 (integration, decoders, continuous operation): Engineering, not science

When a $150B+ company with a 100+ year history commits billions to deliver fault-tolerant quantum computing by 2029, that’s not hype – it’s a bet on achievable engineering.

  • By 2027: IBM aims to have multi-module systems with error correction running
  • By 2029: The full Starling system operational with 200 logical qubits
  • By 2031-2032: Straightforward scaling to 1,000+ logical qubits (they’ve explicitly outlined this path)

The 1,399 logical qubits needed for RSA-2048 sits right in this 2030-2032 window.

Google’s Quantum AI: The Silent Competitor

Google has been somewhat quieter on public roadmaps, but their goals are similarly ambitious. In 2020, Google’s CEO hinted at aiming for a useful error-corrected quantum computer by the end of the decade (the “2029” timeframe), and internally, Google’s researchers have eyed the million physical qubit threshold as well.

2023: The Exponential Suppression Milestone

Google’s focus has been demonstrating the fundamentals of error correction. In 2023, they reported the first instance where a larger quantum error-correcting code outperformed a smaller code, meaning adding qubits actually reduced the error rate of a logical qubit.

This was a crucial proof-of-concept that QEC works as advertised on real hardware:

  • Distance-3 code (17 physical qubits): Logical error rate X
  • Distance-5 code (49 physical qubits): Logical error rate < X
  • Key insight: Crossed the threshold – more qubits = better, not worse

They used a 49-qubit grid (distance-5 surface code) and saw lower logical error than with a 17-qubit (distance-3) code, even beating the error of the best single physical qubit. This validated Capability 3 (below-threshold operation) experimentally.

2024-2025: Logical Operations and Gidney’s Work

With that milestone reached, Google is now trying to string together logical operations. In 2025, they have a prototype logical qubit that lasts long enough to perform multi-step algorithms. They are aggressively experimenting with:

  • New chip designs: Dual-layer chips for routing
  • Better materials: To cut error rates
  • Photonic links: For networking modules
  • Superconducting resonators: For module-to-module connections

Google has also been a leader in algorithmic research – Gidney is a Google researcher. This is not a coincidence. Their hardware targets are informed by their algorithmic understanding:

  • By ~2028: 100+ logical qubits (extrapolating from their pace)
  • By early 2030s: Scaling to thousands of logical qubits
  • Partnership with NIST: They clearly see early 2030s as the danger zone

Google’s approach is more research-driven, but they’re on a parallel track to IBM, targeting the same general timeframe.

Trapped Ion Platforms: Quantinuum and IonQ

The ion trap approach trades off speed for fidelity. Companies like Quantinuum (Honeywell) and IonQ have far fewer qubits on their devices (dozens), but with impressively low error rates and full connectivity (any qubit can interact with any other).

Quantinuum’s Record

Quantinuum demonstrated a fully error-corrected logical qubit (using the 7-qubit Steane code) as early as 2021, including real-time error correction cycles. Progress:

  • 2021: First logical qubit with real-time QEC
  • 2023: Kept logical qubit “alive” through multiple QEC rounds
  • 2024-2025: Approaching break-even (logical error < physical error)
  • Roadmap: Small fault-tolerant computer (tens of logical qubits) by late this decade
  • Scaling plan: Ion trap networking via photonic interconnects (distributed quantum computing)

Their approach: don’t build one massive trap, build many small traps and link them.

IonQ’s Scaling Strategy

IonQ is pursuing a different path: moving ions between multiple traps and using advanced ion transport to handle more qubits. Per their published roadmap:

  • Public goal: #AQ > 1,000 by 2028 (their “algorithmic qubit” metric)
  • #AQ roughly: Number of qubits × fidelity × connectivity
  • Translation: ~100 logical qubits by 2028-2029
  • Architecture: Mobile ions + trap arrays + photonic connections

If ion-based systems can crack a few hundred logical qubits with extremely high fidelity by ~2030, they too could threaten RSA soon after. Remember: at error rates near 10⁻⁷ (which ions are approaching), you might not need a million physical qubits – with the right error correction, a few hundred physical qubits per logical qubit could suffice.

These companies haven’t made splashy “2030 we break RSA” claims, but they consistently cite the mid-2030s as when they expect full-blown fault-tolerant machines to be operational.

Photonic Quantum Computers: PsiQuantum’s Moonshot

Perhaps the most bullish of all is PsiQuantum, a well-funded startup that insists a million-qubit photonic quantum computer is the only way to achieve useful quantum computing. They’ve been aiming straight for that goal from the start, with a timeline reportedly targeting around 2027-2028 for having a first large-scale system in place.

The Photonic Advantage

Photonic qubits (single photons) don’t suffer from decoherence in the same way as matter-based qubits:

  • Room temperature operation: No dilution fridges needed
  • Fiber optic routing: Can use existing telecom infrastructure concepts
  • Scalability: Conceptually easier to build a huge network

The Plan:

  • Scale: Room-sized machine with silicon photonic chips
  • Interconnects: Thousands of fiber cables
  • Approach: Generate and manipulate photons to create entangled cluster states
  • Fabrication: Using conventional semiconductor fabs (not exotic materials)
  • Philosophy: “No physics left to be solved, only scale-up”

The Challenges:

  • Photon gates are probabilistic (not deterministic like matter qubits)
  • Requires enormous overhead (optical switches, detectors to manage losses)
  • Each logical qubit might need thousands of photons
  • But if they can field a million-photon system, even at 1,000:1 overhead = 1,000 logical qubits

If PsiQuantum fields a million-physical-qubit (photonic) system by 2028, even with high overhead, they could reach triple-digit logical qubits relatively soon.
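The overhead arithmetic above reduces to one division; a minimal sketch (the 1,000:1 ratio is this article’s illustrative figure, not a PsiQuantum specification):

```python
def logical_qubits(physical_qubits: int, overhead_ratio: int) -> int:
    """Estimate fielded logical qubits from the physical qubit count
    and the physical-to-logical encoding overhead."""
    return physical_qubits // overhead_ratio

# A million-photon machine at the 1,000:1 overhead cited above:
print(logical_qubits(1_000_000, 1_000))  # 1000
```

Even at that pessimistic overhead, a million physical photons lands in the four-digit logical range – the same ballpark as the 1,399 logical qubits quoted for RSA-2048.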

One of PsiQuantum’s founders suggested that IBM’s 2029 superconducting FTQC will face enormous cryogenic and wiring challenges, whereas a photonic machine could be built in a more data-center-like environment. Time will tell, but it’s worth noting that multiple approaches are racing neck-and-neck to the same finish line.

Wild Cards: Microsoft, Intel, and Neutral Atoms

Microsoft’s Topological Qubits

Microsoft’s pursuit of topological qubits (based on Majorana zero modes) is high-risk, high-reward. If they succeed, each qubit would be inherently more stable, potentially cutting the error correction cost dramatically.

  • Status: After years of struggle, reported some progress in 2022-2023
  • Evidence: Possible Majorana states detected
  • Reality: Functional qubit not there yet
  • Impact if successful: Could accelerate Q-Day by making quantum computers much more compact
  • Likelihood: Low probability, but non-zero

Intel’s Silicon Spin Qubits

Intel hopes its chip-fab expertise can produce dense arrays of tiny spin qubits and control them with on-chip electronics:

  • Advantage: Avoids wiring bottleneck of superconductors
  • Status: 49-qubit test chips demonstrated
  • Roadmap: Talk of 1,000+ qubit arrays later this decade
  • Philosophy: Leverage existing semiconductor manufacturing

Neutral Atom Arrays

Neutral-atom platforms, such as those developed by Pasqal and Infleqtion, can naturally trap hundreds of atoms:

  • Demonstrated: >1,000 atom sites in 3D (Pasqal)
  • Challenge: Controlling them with low error
  • Potential: Could suddenly boost qubit counts if control improves

Any of these could have a breakthrough that suddenly boosts qubit counts or reliability beyond the current curve.

The Convergence Across Platforms

All told, the hardware timeline has dramatically firmed up. What used to be tentative “maybe 15-20 years” statements have turned into concrete promises like “by 2029, we will have X.”

The Alignment Signal:

  • IBM: 200 logical qubits by 2029, path to 1,000+ by early 2030s
  • Google: 100+ logical qubits by 2028, thousands by early 2030s
  • Quantinuum/IonQ: Tens to hundreds of logical qubits by 2028-2030
  • PsiQuantum: Million photons (hundreds-thousands of logical) by 2028-2030
  • Multiple platforms: All converging on late 2020s / early 2030s for CRQC-scale machines

This is not one company making wild claims – it’s the entire quantum computing industry aligning on the same timeframe. They are effectively betting billions of dollars that this can be done within ~5 years. I track all of them as well as their track record in hitting published timelines here: Quantum Hardware Companies and Roadmaps Comparison (2025 Edition).

This lends enormous credence to the idea that Q-Day, which requires such a machine, is likely in that same timeframe. Indeed, as media coverage noted when Gidney’s 2025 factoring result came out, multiple corporate and government roadmaps now point to around 2030 for million-qubit processors.

If those machines materialize on schedule, running Shor’s algorithm against 2048-bit RSA “in about a week” becomes a realistic capability. Even if the hardware arrives a little late – say by 2032 or 2033 – that’s still within the planning horizon of today’s cybersecurity roadmaps.

The National Security Wild Card

One more consideration: national security programs. Everything above is based on publicly available info from companies and academic labs. But given what’s at stake, it’s plausible (indeed likely) that some nation-state projects are quietly pushing toward a CRQC on a similar timeline.

Government-funded quantum efforts in the U.S., EU, and China are massive:

  • China: Has poured substantial resources, though details are opaque
  • U.S.: NQIA, DOE labs, DARPA programs
  • EU: Quantum Flagship Initiative with billions in funding

The Unknown Unknown: If one of these programs made a breakthrough (either in hardware or algorithms), they might keep it classified, at least for a while, to exploit the advantage. This adds a layer of uncertainty. It’s conceivable that Q-Day could even arrive sooner than the consensus timeline if a secret project hits paydirt.

That’s speculative, but it’s a risk scenario serious enough that, for example, the U.S. NSA has been warning about “harvest now, decrypt later” tactics for years. They’re taking the threat seriously, which suggests they have intelligence or projections consistent with early-2030s timelines.

Capability Assessment Across Platforms:

| Platform | LQC Target (2030) | Strength | Challenge |
|---|---|---|---|
| IBM (Superconducting) | 200-1,000 logical | Modular scaling, decoder tech | Cryogenics, wiring complexity |
| Google (Superconducting) | 100-1,000 logical | Algorithm leadership, integration focus | Similar to IBM |
| Quantinuum (Ion trap) | 50-200 logical | Ultra-high fidelity, full connectivity | Scaling via networking |
| IonQ (Ion trap) | 100-500 logical | Mobile ions, trap arrays | Complexity of distributed system |
| PsiQuantum (Photonic) | 100-1,000 logical | Room temp, telecom manufacturing | Probabilistic gates, high overhead |

Bottom line: Multiple independent paths are converging on 1,000+ logical qubits by 2030-2032. The 1,399 required for RSA-2048 sits squarely in this window. This is not reliant on one technology or one company succeeding – there are multiple shots on goal.

From Weeks of Computation to Megawatts of Power: The Practical Realities

While the capability analysis shows CRQC is approaching, I also want to address the practical realities: even once a cryptographically relevant quantum computer exists, breaking RSA-2048 won’t be trivial in practice. It will likely be an expensive, specialized endeavor.

Gidney’s design, for instance, would consume on the order of a week of runtime on a million-qubit machine. Each 2048-bit number factored might require billions of quantum gate operations executed in sequence. In today’s terms, that’s a massive computation – by comparison, current noisy devices struggle to maintain state beyond a few hundred gates.

So while a future CRQC could crack a single RSA key in, say, 3-7 days, it won’t be cracking thousands of keys on a whim without significant upgrades in throughput.

The Energy Equation

Furthermore, the energy cost of such a feat will be enormous. Large-scale quantum computers (especially superconducting ones) demand power-hungry cryogenics and control systems:

  • Dilution fridges: ~25-50 kW per system for million-qubit machines
  • Classical control: ~100-200 kW for real-time decoding and pulse generation
  • Facility infrastructure: Cooling, monitoring, power conditioning
  • Total: ~200-500 kW continuous for 5 days = 24-60 MWh per RSA key

At industrial electricity rates (~$0.10-0.20/kWh), that’s $2,400-$12,000 in electricity alone per key broken. Add in:

  • Amortized hardware costs: $100M+ machine depreciation
  • Personnel: Expert operators and physicists
  • Facility: Specialized quantum computing data center

Total cost per RSA-2048 key broken: likely tens to hundreds of thousands of dollars in the early 2030s.
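The electricity figures above follow from straightforward arithmetic; a minimal sketch (the power draws, runtime, and rates are this section’s estimates, not measured values):

```python
def energy_cost(power_kw: float, days: float, rate_per_kwh: float):
    """Return (energy in MWh, electricity cost in USD) for one
    continuous run at the given average power draw."""
    kwh = power_kw * days * 24
    return kwh / 1000, kwh * rate_per_kwh

# Low end: 200 kW for 5 days at $0.10/kWh -> ~24 MWh, ~$2,400
# High end: 500 kW for 5 days at $0.20/kWh -> ~60 MWh, ~$12,000
print(energy_cost(200, 5, 0.10))
print(energy_cost(500, 5, 0.20))
```

These are electricity costs only; the amortized hardware, personnel, and facility items listed above dominate the total.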

By contrast, classical supercomputers, while also power-hungry, would need billions of years to do the same task – so we’ve traded impossible time for heavy energy and capital costs.

The Takeaway: Early quantum attacks will be the domain of nation-states or elite organizations – those who can allocate dedicated facilities and power budgets to target the “crown jewels” of encrypted data. As I previously wrote: breaking RSA is “not just about qubits and math; it’s about megawatts” as well.

This doesn’t make the threat less real – it just means that in 2030-2032, quantum attacks will be:

  • Selective: Targeting high-value keys (government secrets, financial systems, critical infrastructure)
  • Strategic: Not mass-decryption of everything, but focused intelligence operations
  • Escalating: As the technology matures, costs drop and capabilities improve

By 2035-2040, what costs $100K per key in 2030 might cost $10K. By 2040-2045, possibly $1K. The threat trajectory only gets worse over time.

Addressing the Outlier Claims

Before finalizing the timeline, let’s address two camps: the doomsayers and the nay-sayers.

The 372-Qubit Hype (2022-2023)

In late 2022, a Chinese research group claimed RSA-2048 might be breakable with just 372 qubits using a hybrid quantum-classical approach. The news sparked exaggerated headlines (“RSA broken!”) in early 2023.

What actually happened:

  • They factored a 48-bit number (trivially small) using variational quantum algorithms
  • Extrapolated wildly that 372 “high-quality” qubits might crack 2048-bit RSA
  • Method relied on heuristic lattice algorithms and QAOA that don’t scale cleanly

Community response:

  • Independent experts could not replicate results for anything larger than toy problems
  • The approach bypasses Shor’s algorithm entirely (relies on lattice reduction heuristics)
  • No rigorous proof it works at scale
  • Consensus: More hype than reality

As of 2025, Shor’s algorithm and its optimized descendants remain the only established path to factoring RSA-2048 on a quantum computer. (I wrote more about this in “Breaking RSA Encryption: Quantum Hype Meets Reality (2022-2025).”)

The Extreme Skeptics

On the other end, you have respected cryptographers or physicists who remain deeply skeptical, saying things like:

  • “Quantum computers will never be scalable”
  • “We won’t see this for many decades, if ever”
  • “The error correction problem is insurmountable”

The rebuttal from capability analysis:

  • Capability 1 (QEC): Proven in lab at small scale, exponential suppression demonstrated (Google 2023)
  • Capability 3 (Below-threshold): Oxford achieved 10⁻⁷ error rates – proof that noise can be beaten
  • Capability 7 (Decoders): Already TRL 5, microsecond latency achieved
  • Multiple platforms: Independent approaches all progressing simultaneously

We’ve seen too many “impossible” milestones reached in the last few years to claim it’ll never happen:

  • 2023: First logical qubit outperforming physical qubits
  • 2024: 99.9% two-qubit gate fidelity in multiple platforms
  • 2025: 10⁻⁷ single-qubit error, cultivation algorithms slashing resource needs
  • 2025: Major corporations putting concrete dates and billions of dollars on fault-tolerant machines

The conversation has shifted from “if” to “when”, even among cautious experts (with most settling on the second half of the 2030s as the outside guess).

In my view, clinging to “never” is wishful thinking that could leave you badly exposed if wrong. History of technology is full of examples where breakthroughs came sooner than anticipated once a field hit an exponential growth phase – and quantum computing appears to be at the cusp of such a phase right now.

The Capability Maturity Curve: When you track TRL progression systematically, you see clear acceleration:

  • 2015-2020: Most capabilities at TRL 1-2 (basic principles)
  • 2020-2023: Jump to TRL 2-4 (proof of concept, small validation)
  • 2023-2025: Jump to TRL 4-5 (component validation, some at TRL 5-6)
  • 2025-2029: Projected jump to TRL 6-8 (system demo to operational)

This is a classic technology S-curve entering the steep part. Dismissing it as “decades away” ignores the clear trajectory.


Mapping Breakthroughs to CRQC Readiness: The LQC-LOB-QOT Framework

The beauty of the capability-driven approach is that it lets us translate disparate breakthroughs into a unified readiness assessment using the three-lever framework from the CRQC Readiness Benchmark.

How the Three Breakthroughs Stack

| Breakthrough | Primary Capability Impact | LQC Effect | LOB Effect | QOT Effect |
|---|---|---|---|---|
| Gidney 2025 | Magic State Production (Cap 5) | ↓ Factory overhead | ↓↓ Non-Clifford budget (10¹³→10⁹) | ↑ Factory throughput |
| Oxford 2025 | Below-Threshold Scaling (Cap 3) | ↓↓ Physical/logical ratio | ↑ Gates before failure | ↑ Faster stable cycles |
| IBM 2029 | QEC + Integration (Cap 1, 6) | ↑↑ Logical qubits fielded | ↑ Sustained operation | ↑ End-to-end throughput |

The Composite Effect

When you combine these three advances:

LQC (Logical Qubit Capacity):

  • Gidney showed we only need 1,399 logical qubits (down from 6,000+)
  • Oxford’s fidelity suggests physical-to-logical overhead could drop 5-10×
  • IBM’s roadmap puts 1,000-2,000 logical qubits within reach by 2030-2032
  • Assessment: LQC requirements are approaching hardware capabilities

LOB (Logical Operations Budget):

  • Gidney’s cultivation reduced operations from >10¹³ to ~6.5×10⁹ non-Cliffords
  • Oxford’s fidelity enables higher code distances without overhead explosion
  • IBM’s multi-day stability targets address Capability 8 (continuous operation)
  • Assessment: LOB is achievable with 5-day runtime at microsecond cycles

QOT (Quantum Operations Throughput):

  • Gidney’s factories: ~1 CCZ/150 cycles across 6 parallel factories
  • Oxford’s work: faster cycle times possible at higher fidelities
  • IBM’s decoders: real-time processing at scale is proven
  • Assessment: QOT targets (~10⁶ ops/sec) are engineering challenges, not physics barriers
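A back-of-envelope check on the non-Clifford stream, using the factory parameters quoted above (a sketch only – real runtimes also depend on Clifford overhead, routing, and decoder latency):

```python
# Rough runtime estimate from the figures cited in this section.
non_clifford_gates = 6.5e9  # Gidney's ~6.5 x 10^9 non-Clifford budget
cycles_per_ccz = 150        # ~1 CCZ per 150 cycles per factory
factories = 6               # parallel magic state factories
cycle_time_s = 1e-6         # 1 microsecond error-correction cycle

ccz_per_second = factories / (cycles_per_ccz * cycle_time_s)  # 40,000/s
runtime_days = non_clifford_gates / ccz_per_second / 86_400

print(f"{ccz_per_second:,.0f} CCZ/s -> {runtime_days:.1f} days")
```

Under these assumptions the magic-state stream alone takes roughly two days, which is consistent with the ~3-7 day runtime window cited elsewhere in this article once the remaining overheads are added back in.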

The Bottom Line: When you map these breakthroughs onto the capability framework, you see a clear convergence. The gaps that existed in 2020 (TRL 1-2 across most capabilities) have closed to TRL 3-5 by 2025. The remaining jumps to TRL 6-8 (system demonstration to operational deployment) are substantial but no longer require scientific breakthroughs – just sustained engineering effort.


The Eight Capabilities: Where We Stand Today

Let me provide a snapshot of the current state across all eight CRQC capabilities, highlighting what recent breakthroughs changed:

Foundational Capabilities

1. Quantum Error Correction: TRL 4 → Trending TRL 5

  • 2023 baseline: Small logical qubits at distance 3-5 demonstrated (Google, IBM)
  • 2025 impact: Oxford’s fidelity + IBM’s roadmap prove distance 25 is feasible
  • Gap to CRQC: Scaling to 1,399 concurrent logical qubits; ~2-3 years away

2. Syndrome Extraction: TRL 4 (Stable)

  • Current: Repeated rounds on small codes; ~10⁶ cycles demonstrated
  • Challenge: Need 5-6 orders of magnitude more cycles (10¹¹-10¹² for 5-day runtime)
  • Gap to CRQC: Continuous operation at microsecond cadence; Oxford’s coherence is encouraging

3. Below-Threshold Operation & Scaling: TRL 3-4 → Accelerating to TRL 4-5

  • 2025 impact: Oxford proved 10⁻⁷ single-qubit error; exponential suppression validated
  • Challenge: Prove at distance 20-25 on 10³-10⁶ qubits
  • Gap to CRQC: Two-qubit gates need to reach ~10⁻⁴ to 10⁻⁵; 2-4 years of development

Core Logical Operations

4. High-Fidelity Logical Clifford Gates: TRL 4-5 (Mature)

  • Status: Demonstrated at small distances; not a primary blocker
  • Gap to CRQC: Scaling to hundreds-thousands of logicals; IBM roadmap addresses this

5. Magic State Production & Injection: TRL 2-3 → Theoretically TRL 5 (Paper)

  • 2025 impact: Gidney’s cultivation slashed costs by 100×+
  • Challenge: No hardware demonstration yet; TRL 2-3 experimentally
  • Gap to CRQC: First factory demonstration needed; likely 3-5 years to TRL 5-6

End-to-End Execution

6. Full Fault-Tolerant Algorithm Integration: TRL 1-2 (Critical Path)

  • Status: No complete FT algorithm demonstrated
  • Gap to CRQC: IBM 2029 target suggests first full integration in 4 years
  • Note: This is where timelines live or die

7. Decoder Performance: TRL 5 (Well Advanced)

  • Status: FPGA/ASIC decoders proven at patch-scale
  • Gap to CRQC: Scaling to 100k-1M qubits; power efficiency; 2-3 years

8. Continuous Operation: TRL 1-2 → Near-term focus

  • Status: Hours-scale with recalibration; no multi-day runs
  • 2025 impact: Oxford’s 70-second coherence shows long stability is possible
  • Gap to CRQC: 2-3 orders of magnitude more uptime; automated ops

The Capability Maturation Timeline

Looking at TRL progression rates:

  • 2020-2023: Jump from TRL 2-3 to TRL 3-4 across Capabilities 1-3 (foundational)
  • 2023-2025: Jump from TRL 3-4 to TRL 4-5 (validation) accelerating
  • 2025-2029: Expected jump to TRL 6-7 (system demo) for key capabilities
  • 2029-2032: TRL 7-8 (operational) – CRQC crosses threshold

This progression is consistent with a 2030 ± 2 year Q-Day estimate.


From Forecast to Reality: Why 2030

Bringing together all these threads through the lens of the CRQC capability framework, let’s answer the key question: When will RSA-2048 actually be broken by a quantum computer?

My updated prediction is 2030, give or take a year. Here’s the reasoning through the capability lens:

The Convergence Argument

LQC Convergence (Logical Qubit Capacity):

  • Required: 1,399 logical qubits
  • Current trajectory:
    • 2023: ~2-3 logical qubits demonstrated
    • 2027-2028: 50-100 logical qubits (IBM intermediate milestones)
    • 2029: 200 logical qubits (IBM Starling target)
    • 2030-2032: 1,000+ logical qubits (multiple vendors)
  • Confidence: High – this is mostly an engineering scaling problem now

LOB Convergence (Logical Operations Budget):

  • Required: ~6.5×10⁹ non-Clifford gates with failure probability <0.5
  • Enablers:
    • Gidney’s cultivation reduces gate count by 100×+
    • Oxford’s fidelity enables deeper circuits before failure
    • 5-day runtime × 1 µs cycles × high fidelity = achievable
  • Confidence: Medium-High – depends on magic state factories (Cap 5) maturing from TRL 2-3 to TRL 5-6

QOT Convergence (Quantum Operations Throughput):

  • Required: ~10⁶ logical ops/sec to complete the computation in about a week
  • Enablers:
    • 6 parallel magic state factories
    • 1 µs cycle times (current hardware achieves this)
    • Real-time decoders (already TRL 5)
  • Confidence: Medium – integration complexity is the wildcard

The Mosca Rule Applied

Using Dr. Michele Mosca’s risk formula:

  • X (years data must stay secure): 5-20 years for most organizations
  • Y (years to deploy new crypto): 3-7 years for large enterprises
  • Z (years before CRQC breaks crypto): 5-7 years (my estimate for first capability)

If X + Y > Z, you’re in danger now.

For most organizations: 10 + 5 = 15 years > 5-7 years → We’re already in the red zone.
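Mosca’s inequality is simple enough to encode directly (the X/Y/Z values below are the illustrative figures from this section):

```python
def mosca_at_risk(x_shelf_life: float, y_migration: float, z_crqc: float) -> bool:
    """Mosca's theorem: if data must stay secure for X years and
    migrating to new crypto takes Y years, you are exposed whenever
    X + Y > Z, where Z is the years until a CRQC exists."""
    return x_shelf_life + y_migration > z_crqc

# Typical organization: 10-year data shelf life, 5-year migration,
# CRQC in ~7 years (the upper end of this article's estimate):
print(mosca_at_risk(10, 5, 7))  # True -> already in the red zone
```

Note that shortening any of the three inputs (deleting old data, speeding up migration) is what moves an organization back out of the red zone – waiting does not.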

Why Not Earlier? Why Not Later?

Why Not 2027-2028?

  • Capability 5 (Magic States) is still TRL 2-3; needs 3-4 years to reach TRL 5-6
  • Capability 6 (Full Integration) has never been demonstrated; needs 4-5 years
  • Capability 8 (Continuous Operation) requires 2-3 orders of magnitude improvement in uptime
  • Even with aggressive timelines, 2027-2028 is too soon for all capabilities to converge

Why Not 2035+?

  • That assumes no further algorithmic improvements (history says we’ll get more)
  • It assumes linear hardware scaling (quantum is showing exponential characteristics)
  • It ignores classified programs (nation-states may be ahead of public timelines)
  • NIST’s 2030-2035 migration deadline was set for a reason – they see the same risk

Why 2030 ± 1-2 years Makes Sense:

  • IBM’s 2029 milestone (200 logical qubits) + 1-2 years of scaling = 1,400 logical qubits
  • Magic state factories: TRL 2-3 (2025) + 4 years aggressive development = TRL 5-6 (2029)
  • Integration: First demos (2026-2027) + 3 years refinement = production (2029-2030)
  • Below-threshold scaling: Current TRL 3-4 + Oxford’s advances = TRL 6 by 2029-2030

The Risk Distribution

I’d characterize the probability distribution as:

  • By 2028: 10-15% (requires accelerated breakthroughs across all capabilities)
  • By 2030: 35-45% (base case scenario with no major setbacks)
  • By 2032: 65-75% (accounts for typical engineering delays)
  • By 2035: 85-95% (high confidence, even with significant setbacks)

This is not a guarantee – it’s a risk assessment based on capability maturation rates, hardware roadmaps, and the historical trajectory of cryptographic attacks getting better over time.
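One way to work with a cumulative estimate like this is to interpolate between the stated anchor years. The sketch below uses the midpoints of the ranges above as point values – an assumption for illustration, not a calibrated forecast model:

```python
# Midpoints of the cumulative probability ranges stated above.
cdf_points = {2028: 0.125, 2030: 0.40, 2032: 0.70, 2035: 0.90}

def p_crqc_by(year: float) -> float:
    """Linearly interpolate the cumulative probability of a CRQC by `year`."""
    years = sorted(cdf_points)
    if year <= years[0]:
        return cdf_points[years[0]]
    if year >= years[-1]:
        return cdf_points[years[-1]]
    for y0, y1 in zip(years, years[1:]):
        if y0 <= year <= y1:
            p0, p1 = cdf_points[y0], cdf_points[y1]
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(f"{p_crqc_by(2031):.0%}")  # 55%
```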

The “Unknown Unknowns”

The capability framework also highlights what could accelerate or delay timelines:

Potential Accelerators:

  • Better error correction codes: New codes beyond surface codes could reduce overhead dramatically
  • Hardware surprises: Microsoft’s topological qubits, if successful, could be game-changers
  • Algorithmic leaps: We’ve seen 100× improvements in just a few years; another could arrive
  • Classified programs: Nation-state efforts may already be further along

Potential Delays:

  • Magic state bottleneck: If Capability 5 stalls at TRL 3-4, the whole timeline slips
  • Integration nightmares: Capability 6 (full algorithm integration) has never been tried; hidden complexity could emerge
  • Decoder scaling: Classical processing at 10⁶-10⁷ syndrome bits/sec might have unforeseen bottlenecks
  • Continuous operation: Cosmic rays, vibrations, drift – keeping a million-qubit machine stable for 5 days is hard

Net Assessment: The accelerators and delays roughly balance out around the 2030 ± 2 year window.


Conclusion

If there’s one message to take away, it’s this: the quantum threat to cryptography is no longer a distant abstraction; it’s a tangible and approaching reality that can be systematically tracked through measurable capabilities.

The capability-driven methodology reveals something crucial that headline qubit counts obscure: progress toward CRQC is multi-dimensional. You can’t just count qubits or watch error rates or track algorithms in isolation. You must monitor all eight capabilities and understand how they interact through the LQC-LOB-QOT framework.

What we’ve seen in 2023-2025 is a synchronized advance across multiple fronts:

  • Gidney’s algorithm collapsed the LOB requirement by orders of magnitude (Capability 5)
  • Oxford’s fidelity proved below-threshold scaling is achievable (Capability 3)
  • IBM’s roadmap demonstrated LQC is within reach (Capabilities 1, 6, 7)

When you map these onto the capability framework, a clear picture emerges: we are on a collision course with CRQC in the 2029-2032 timeframe, with 2030 as the median estimate.

Four Key Takeaways for Security Professionals

1. The Quantum Attack Trajectory is Shortening, and It’s Measurable

Using the capability framework, you can track specific milestones:

  • Next 12 months (2026): Watch for first magic state factory demonstrations (Cap 5, TRL 3→4)
  • 2026-2027: Look for small fault-tolerant algorithm runs (Cap 6, TRL 2→3)
  • 2028-2029: IBM’s 200 logical qubit milestone (Cap 1, 6, 7 at TRL 6-7)
  • 2030-2032: First CRQC demonstrations if all capabilities reach TRL 7-8

This isn’t guesswork – these are concrete technical milestones that can be monitored against the CRQC Readiness Benchmark.

The trajectory is unmistakable:

  • 2012: 10⁹ qubits needed (impossible)
  • 2019: 2×10⁷ qubits needed (decades away)
  • 2025: 10⁶ qubits needed (5-7 years away)
  • Trend: Order of magnitude reduction every 5-6 years
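The claimed trend can be checked directly from the three data points above (published resource estimates vary with assumptions; this just measures the decline rate between the figures this article cites):

```python
import math

# Physical-qubit estimates for factoring RSA-2048, by year.
estimates = {2012: 1e9, 2019: 2e7, 2025: 1e6}

years = sorted(estimates)
for y0, y1 in zip(years, years[1:]):
    ooms = math.log10(estimates[y0] / estimates[y1])  # orders of magnitude dropped
    rate = (y1 - y0) / ooms                           # years per order of magnitude
    print(f"{y0}->{y1}: {ooms:.1f} orders of magnitude, one per {rate:.1f} years")
```

Over these points the decline works out to roughly one order of magnitude every 4-5 years – if anything slightly faster than the headline trend, which is the point: requirements keep falling while hardware keeps growing.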

2. PQC Migration is Urgent and Unavoidable

With the capability analysis showing convergence in 5-7 years:

NIST’s 2030-2035 timeline is validated – not conservative. The U.S. National Institute of Standards and Technology selected and standardized PQC algorithms (like CRYSTALS-Kyber and Dilithium) designed to resist quantum attacks. They’ve recommended a clear timeline:

  • Begin phasing out vulnerable crypto by 2030
  • Eliminate it entirely by 2035

This timeline wasn’t picked casually – it aligns with when a quantum threat becomes not just possible but probable. Given the lead time required to transition systems (which can be 5-10 years for large enterprises or government agencies), starting now is the only viable strategy.

“Harvest Now, Decrypt Later” is a Real Threat: Data that is encrypted today can be recorded by adversaries and kept until they have a quantum computer to decrypt it. This especially affects sensitive data with long confidentiality needs:

  • National security intelligence
  • Healthcare records (HIPAA requires protection for decades)
  • Confidential business plans and intellectual property
  • Personal data protected by privacy laws (GDPR, etc.)
  • Financial records and transactions

If such data has a shelf life of more than ~5-10 years, assume that anything encrypted with RSA/ECC today might be readable by the 2030s. The only defense is to either:

  • Stop using vulnerable encryption now for long-term data
  • Use hybrid approaches (PQC + classical) immediately
  • Shorten the lifetime of your secrets (secure deletion, key rotation)

Every year of delay increases the risk of being caught by Q-Day before you’ve finished upgrading. Remember, cryptographic agility (the ability to swap out algorithms) is part of resilience. If you haven’t inventoried where you use RSA/ECC and developed a migration plan, you’re already behind.

3. Not All Quantum Computers Threaten RSA Equally

The capability framework clarifies an important point: a quantum computer with 1,000 physical qubits that achieves 10⁻⁷ error rates (strong on Capability 3) might be more dangerous than one with 100,000 qubits at 10⁻² error rates (weak on Capability 3).

Watch the right metrics:

  • Don’t just count physical qubits – track logical qubits (LQC)
  • Don’t just watch error rates – monitor capability TRL progression
  • Don’t just hype-chase – use the Path to CRQC framework to interpret announcements

Key indicators to monitor:

  • Logical qubit demonstrations: When companies announce “X logical qubits demonstrated”
  • Magic state factory milestones: First demonstrations of continuous magic state production
  • Error correction distance: Reports of distance-15, distance-20, distance-25 codes working
  • Integration demos: “First fault-tolerant algorithm run end-to-end” announcements
  • Fidelity records: New records in two-qubit gate error rates approaching 10⁻⁴ or better

Organizations should be asking: “Which of our cryptographic dependencies will fail first once Capability 5 (magic states) matures?” The answer determines your migration priorities.

4. Security Should Not Be Contingent on Progress Being Slow

Whether Q-Day arrives in 2028, 2030, or 2033, the difference is marginal from a risk management perspective. All are soon enough that we must prepare today.

We are roughly five years out from the first potential quantum disruptions to cryptography, and about ten years out from them becoming widespread. This is well within strategic IT planning horizons.

The capability-driven analysis makes one thing clear: we’re essentially in the final countdown to Y2Q – akin to the final stretch before Y2K, except this “millennium bug” for encryption doesn’t have a fixed date and won’t announce itself in advance.

Final Guidance: Act as if Q-Day Will Arrive on the Early Side

The prudent course is to act as if Q-Day will hit in the early 2030s, because the cost of being prepared a little early is far lower than the cost of being even one day late.

Consider the asymmetry:

  • If you prepare now and Q-Day is delayed to 2035: Minor inefficiency, but you’re secure
  • If you delay and Q-Day arrives in 2030: Catastrophic exposure, encrypted data compromised

From a risk management perspective, this is a no-brainer: prepare for the worst-case plausible timeline, not the optimistic one.

Concrete actions:

  1. Inventory your crypto: Know where RSA/ECC is used across your systems
  2. Assess data longevity: Identify data that needs protection beyond 2030-2035
  3. Pilot PQC implementations: Start testing post-quantum algorithms in non-production
  4. Build crypto-agility: Design systems that can swap algorithms without full rewrites
  5. Track quantum progress: Monitor the capability milestones outlined in this article
  6. Engage leadership: Ensure boards and executives understand the quantum timeline
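Step 1, the crypto inventory, can start as something as simple as a scan of your code tree for quantum-vulnerable algorithm identifiers. This is a minimal sketch only: the pattern list and file extensions are illustrative assumptions, and a real inventory would also cover certificates, key stores, protocol configurations, and vendor dependencies.

```python
import re
from pathlib import Path

# Illustrative pattern of quantum-vulnerable algorithm names (RSA/ECC/DH family).
VULNERABLE = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DH|secp256k1|P-256)\b")

def inventory(root: str) -> dict[str, list[int]]:
    """Map each source/config file under `root` to the line numbers that
    mention a quantum-vulnerable algorithm."""
    findings: dict[str, list[int]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".java", ".go", ".ts", ".c", ".cfg", ".conf"}:
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file; a real tool would log this
        hits = [i for i, line in enumerate(lines, 1) if VULNERABLE.search(line)]
        if hits:
            findings[str(path)] = hits
    return findings
```

Even a crude scan like this gives you the raw material for steps 2-4: once you know where RSA/ECC lives, you can rank those locations by data longevity and wrap them behind a crypto-agile interface.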

The Bottom Line

The three breakthroughs analyzed in this article – Gidney’s algorithmic work, Oxford’s fidelity milestone, and IBM’s hardware roadmap – represent more than just incremental progress. They represent fundamental shifts in three different dimensions of the CRQC challenge:

  • LOB dimension: From “impossible operations budget” to “challenging but feasible”
  • LQC dimension: From “we need magical error correction” to “we have a proven path”
  • Integration dimension: From “no one knows how to build this” to “it’s an engineering challenge”

When you track these systematically through the eight-capability framework, you see the convergence clearly. Capabilities that were open research problems in 2020 had reached TRL 4-5 by 2025. The remaining jumps to TRL 6-8 are substantial but no longer require scientific breakthroughs – just sustained engineering effort backed by billions in funding from companies and governments worldwide.

2030 is not a prophecy – it’s a data-driven estimate based on capability maturation rates, hardware roadmaps, and the historical trajectory of cryptographic attacks getting better over time. Whether I’m off by a couple years either way doesn’t change the core advice.


To explore how close we are to CRQC using your own assumptions, try the CRQC Readiness Benchmark (Q-Day Estimator). For a detailed capability-by-capability analysis, see “The Path to CRQC – A Capability-Driven Method for Predicting Q-Day”.

For tracking ongoing developments, follow my newsletter “The Quantum Observer” where I analyze each major quantum breakthrough through this capability framework.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.