
The Quantum Utility Ladder: What Fault-Tolerant Quantum Computers Will Actually Be Used For


This article is the technical foundation of my Quantum Utility Map Deep Dive series, a seven-part investigation into what fault-tolerant quantum computers will actually be used for, which industries they will transform, and what organizations and nations should do about it. The series includes: Quantum Computing by 2033 (competitive analysis by industry), Quantum Sovereignty and the Utility Trap (geopolitical implications), Why Quantum Won’t Save Wall Street (finance deep dive), Quantum Chemistry’s Honest Ledger (chemistry and materials assessment), The Error Correction Revolution (technology acceleration), and The Narrow Advantage (capstone synthesis).


Introduction

Every quantum computing headline fixates on the same question: when will a quantum computer break encryption? It’s the wrong question, or at least, it’s the wrong first question. Long before any machine accumulates the roughly 1,200–1,400 logical qubits (today’s estimates, which may yet change) needed to threaten RSA or elliptic curve cryptography, fault-tolerant quantum computers will start doing something far more interesting: solving problems in chemistry, materials science, and physics that no classical supercomputer on Earth can touch.

The fault-tolerant era will be defined by what quantum computers build: new catalysts, new batteries, new drugs, new materials. And for the first time, we have enough concrete resource estimates to map exactly what becomes possible at each rung of the logical-qubit ladder.

I tried to write this article a few years ago. But things have changed. A lot. I’ve spent the past few weeks assembling what I believe is the most comprehensive catalog of fault-tolerant quantum algorithms mapped to their resource requirements (logical qubits, T-gate counts, estimated runtimes) and the real-world problems they solve.

The picture that emerges is more nuanced, more uneven, and ultimately more exciting than the smooth “every extra 50 qubits unlocks a new killer app” staircase that vendor marketing suggests. The utility landscape is lumpy. Some problems become tractable at 130 logical qubits. Others need 100,000. And a surprising number of the applications people talk about most (finance, logistics, machine learning) face a structural barrier that no amount of better hardware can fix.

This article is my attempt to lay it all out: what fault-tolerant quantum computers will actually do, when, and what the evidence supports. It’s long. There’s a comprehensive table. And I’ve tried to be honest about where the evidence is strong, where it’s speculative, and where the hype has outrun the science.

The Three Dimensions That Actually Matter

Before climbing the ladder, we need to understand what determines whether a quantum algorithm can run on a given machine. It isn’t just the number of logical qubits, though that gets all the attention. Three resource dimensions interact to determine feasibility:

Logical width is the number of simultaneously active logical qubits, including ancillas for arithmetic, phase estimation, and magic state production. This is what people usually mean when they say “you need X qubits.”

Logical depth is the total number of sequential non-Clifford operations (T-gates or Toffoli gates) in the critical path of the circuit. This determines runtime. If your algorithm needs 10¹⁵ T-gates and your machine produces one T-gate per microsecond, you’re looking at roughly 30 years. Depth is often the binding constraint, not width.

T-count is the total number of non-Clifford gates consumed across the entire computation, which determines the spatial footprint of “magic state factories”: the dedicated regions of the quantum processor that continuously manufacture the high-fidelity ancillary states needed to execute each T-gate. A higher T-count means either more factories (more physical qubits) or longer runtime (more depth).

These three dimensions interact in ways that make simple “qubit count” comparisons misleading. An algorithm requiring 300 logical qubits and 10⁸ T-gates is a fundamentally different engineering challenge than one requiring 300 logical qubits and 10¹⁵ T-gates. The first might run in minutes. The second might take centuries on the same hardware.
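The runtime arithmetic here is worth making explicit. A minimal Python sketch using the illustrative figures above (the 1 MHz T-gate rate and the two gate counts are assumptions for illustration, not vendor specs):

```python
# Back-of-envelope feasibility check: runtime is T-count divided by
# T-gate throughput. Serial execution of non-Clifford gates assumed.

SECONDS_PER_YEAR = 3.156e7

def runtime_seconds(t_count: float, t_rate_hz: float) -> float:
    """Wall-clock time to consume t_count non-Clifford gates
    at t_rate_hz gates per second."""
    return t_count / t_rate_hz

# Two algorithms with identical width (say, 300 logical qubits) but
# very different depth, on the same assumed 1 MHz machine:
fast = runtime_seconds(1e8, 1e6)   # 1e8 T-gates
slow = runtime_seconds(1e15, 1e6)  # 1e15 T-gates
print(f"1e8 T-gates:  {fast:.0f} s")                          # 100 s: minutes
print(f"1e15 T-gates: {slow / SECONDS_PER_YEAR:.0f} years")   # ~32 years
```

The same division explains why slower logical clocks push the second case from decades toward centuries.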

IBM’s roadmap makes this explicit: their Starling system, targeted for 2029, aims for 200 logical qubits executing 100 million gates. Their Blue Jay, projected for 2033 or beyond, targets 2,000 logical qubits executing 1 billion gates. The qubit count is the headline, but the gate budget is what determines which algorithms actually run.

The Resource Estimate Revolution

One of the most remarkable, and underreported, stories of the past decade is how dramatically fault-tolerant resource estimates have fallen. Every major quantum algorithm has seen its requirements drop by orders of magnitude through better compilation, smarter Hamiltonian decomposition, and architectural innovations.

The canonical example is the FeMo-cofactor of nitrogenase, the enzyme that fixes atmospheric nitrogen at ambient temperature, a trick that the industrial Haber-Bosch process replicates only at 450°C and 200 atmospheres while consuming roughly 1–2% of global energy production. In 2017, Reiher, Wiebe, Svore, Wecker, and Troyer published the first serious resource estimate for simulating FeMoco on a quantum computer: approximately 111 logical qubits, but an astonishing ~10¹⁴ T-gates using second-order Trotterization. The runtime would have stretched into years.

Then the algorithms got better. Berry, Gidney, and colleagues introduced sparse qubitization in 2019, bringing the T-count down to ~10¹⁰. Lee et al. applied tensor hypercontraction in 2021, reaching 2,142 logical qubits and 5.3×10⁹ Toffoli gates, roughly four days at 4 million physical qubits. Rocca et al. (2024) halved that further with symmetry-compressed double factorization. And in 2025, Low, Berry, Rubin et al. achieved a 4×–195× speedup through spectrum amplification, while Caesura et al. (PsiQuantum/Boehringer Ingelheim) demonstrated a 278× speedup using photonic active-volume architecture.

Five orders of magnitude in eight years. The algorithm didn’t change; it’s still quantum phase estimation applied to an electronic Hamiltonian. What changed was how cleverly the Hamiltonian was decomposed and how efficiently the non-Clifford operations were compiled.
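To see what the falling counts buy, here is a sketch converting three of the published FeMoco gate counts into wall-clock time at a single fixed Toffoli rate. The 1 kHz rate is an illustrative assumption; the published runtimes assume different, faster hardware:

```python
# Published FeMoco non-Clifford gate counts (figures cited above),
# normalized to one assumed 1 kHz Toffoli/T rate for comparison.

estimates = {
    "Reiher et al. 2017 (Trotter)":       1e14,
    "Berry, Gidney et al. 2019 (sparse)": 1e10,
    "Lee et al. 2021 (THC)":              5.3e9,
}

GATE_RATE_HZ = 1e3  # assumed, for apples-to-apples comparison only

for paper, gates in estimates.items():
    days = gates / GATE_RATE_HZ / 86_400
    factor = estimates["Reiher et al. 2017 (Trotter)"] / gates
    print(f"{paper:38s} {gates:.1e} gates  ~{days:>12,.0f} days  ({factor:,.0f}x vs 2017)")
```

At a fixed rate, the 2017 estimate runs for millennia while the 2021 estimate runs for weeks; the compression, not the clock, did the work.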

This pattern repeats across every major application. And it means any “utility ladder” I present today will look conservative by 2030. Resource estimates are still falling.

Rung 1: 25–100 Logical Qubits – The Scientific Beachhead

The first clear scientific advantages from fault-tolerant quantum computing won’t require thousands of logical qubits. They’ll require tens, paired with the ability to execute millions to trillions of gates without losing coherence.

A 2025 perspective paper published in the Journal of Chemical Theory and Computation explicitly analyzed the 25–100 logical qubit regime and identified it as the threshold where early fault-tolerant machines can tackle scientifically meaningful electronic-structure problems that are qualitatively beyond what classical solvers handle reliably. The key targets are the bread-and-butter problems where classical quantum chemistry hits a wall: multireference charge-transfer states, conical intersections in photochemistry, and small catalytic fragments with strong electron correlation.

Glycine ground-state energy has been costed at 55 logical qubits using quantum phase estimation in the QuRE resource estimation framework, making it a concrete early target. The width is modest; the challenge is the T-count, which reaches 10¹⁰–10¹¹ logical operations.

Fermi-Hubbard models on classically intractable lattices represent perhaps the strongest near-term candidate for a scientifically meaningful quantum computation. Campbell’s plaquette Trotterization (2022) achieves approximately 10⁶ Toffoli gates for 2D Fermi-Hubbard QPE at lattice sizes (L≥8) beyond exact classical diagonalization, requiring roughly 130 logical qubits. Kan (PsiQuantum) and Symons (Oxford) pushed further in 2025, achieving ~8×10⁵ Toffolis for Fermi-Hubbard and ~5×10⁶ for single-orbital cuprate models using catalyzed Hamming-weight phasing.

Google Quantum AI’s Stage-IV quantum utility compendium lists a 121-site 2D Ising quench at 140 logical qubits, a 70-site Heisenberg quench at 131 logical qubits, and a 128-spin-orbital 2D Hubbard quench at 160 logical qubits as concrete early targets. These are condensed matter model dynamics, not the kind of chemistry stories that make pharmaceutical executives excited, but they are among the strongest candidates for the first genuinely classically intractable logical-qubit computation.

In March 2026, IBM and the DOE Quantum Science Center hit a related milestone: a 50-qubit Heron processor simulation of the magnetic material KCuF₃ that reproduced experimental inelastic-neutron-scattering spectra, the first quantum benchmark validated against a physical experiment rather than a classical computation.

The bottom line for this rung: The first fault-tolerant quantum advantage will likely appear in condensed-matter model dynamics and carefully curated active-space chemistry. It will be scientifically important but won’t transform industries. Think of it as the Wright Brothers moment: proving the thing flies, not replacing airlines.

Rung 2: 100–300 Logical Qubits – First Industrial Signals

This is where the landscape starts to get interesting for anyone beyond academic physics departments. Multiple independent lines of evidence converge on this range as the threshold where quantum computers begin producing results that feed into real industrial workflows.

Rovibrational Spectroscopy

One of the most striking concrete examples comes from a 2025 algorithm for high-accuracy nuclear-motion Hamiltonians using a Walsh-Hadamard QROM architecture. For a 30-dimensional model system representing a 12-atom molecule with a six-body coupled potential, classical calculation of spectroscopic-accuracy energy levels would require over 30,000 years on the world’s fastest supercomputer. The optimized quantum algorithm requires fewer than 300 logical qubits and completes in approximately three months on a 1 MHz fault-tolerant processor. This reduces the required quantum computational volume by a factor of 10⁵ to 10⁶ compared to earlier quantum approaches.

This matters for astrochemistry, atmospheric modeling, and molecular spectroscopy, domains where precise understanding of molecular vibration and rotation is foundational.

NMR Spectral Prediction

A 2024 resource estimation paper established zero-to-ultralow-field NMR spectral simulation as an early FTQC candidate. Natural products and small proteins could be simulated with a few hundred logical qubits and fewer than 10¹² T-gates, corresponding to runtimes of days on plausible hardware. This use case has a powerful advantage: the results can be directly verified against laboratory NMR experiments, providing a natural “proof of quantum utility” channel.

OLED Materials Discovery

Advanced organic light-emitting diode molecules with heavy transition metals (platinum, iridium) introduce severe electron correlation and relativistic effects that push classical simulation to its limits. Resource estimates indicate that robust simulation of these systems requires at least 300 logical qubits with gate depths reaching into millions of logical operations.

The economic stakes are significant. The global OLED market exceeds $40 billion annually, and designing better emitters is largely a computational bottleneck. Classical quantum-inspired algorithms running on massive cloud infrastructure are approaching their theoretical ceilings for these heavy-metal complexes.

Photodynamic Cancer Therapy

Photosensitizer design for photodynamic cancer therapy (PDT) requires exact modeling of excited electronic states, particularly the intersystem crossing from singlet to triplet states that generates the reactive oxygen species that destroy tumors. Classical methods struggle with the heavily correlated excited states of advanced BODIPY-derivative photosensitizers.

In late 2025, Xanadu researchers published fault-tolerant resource estimates specifically targeting cumulative absorption and intersystem crossing rates in these BODIPY systems. Their estimates indicate that simulating active spaces ranging from 11 to 45 spatial orbitals requires 180–350 logical qubits and Toffoli gate depths between 10⁷ and 10⁹.

This is worth pausing on. A cancer drug design calculation sits within reach of hardware that multiple vendors project will exist by 2029–2031, with Toffoli depths in the tens to hundreds of millions rather than the trillions. That is not the usual “quantum computing will someday help with drug discovery” hand-wave. That is a concrete therapeutic application with a credible resource profile.

What This Range Does Not Yet Do

A note on what 100–300 logical qubits cannot do. It cannot simulate FeMoco at full scale. It cannot model complete catalytic cycles. It cannot run quantum amplitude estimation for derivative pricing, which requires ~8,000 logical qubits. It cannot solve industrial-scale optimization problems.

What it can do is produce scientific results that are (a) beyond classical exact methods, (b) verifiable against laboratory experiments, and (c) embedded in workflows that inform real industrial decisions about molecules and materials. That is a genuine inflection point: classical computing augmented at specific bottlenecks where it cannot reach the required accuracy alone.

Rung 3: 300–1,000 Logical Qubits – The Early Industrial Era

Between 300 and 1,000 logical qubits, the character of accessible problems changes. We move from benchmark-scale active spaces to systems that start looking like real industrial targets.

Battery Materials

The battery industry represents one of the most compelling near-term quantum computing applications, because the key bottleneck, understanding degradation mechanisms at the atomic level, involves exactly the kind of strongly correlated electronic states that classical DFT handles poorly.

Lithium-rich NMC (Nickel Manganese Cobalt) cathodes theoretically offer massive energy densities but suffer from structural degradation and voltage fade. Understanding this requires simulating Resonant Inelastic X-ray Scattering (RIXS) spectra, a second-order spectroscopic technique that probes the correlated electronic excited states responsible for degradation.

Initial Trotterized estimates required more than 2,000 logical qubits and 10¹³ Toffoli gates, a completely impractical requirement. But a series of breakthroughs by Xanadu and the National Research Council of Canada has changed this. For X-ray absorption spectroscopy (XAS), they applied compressed double-factorized Hamiltonians and Lorentzian-kernel sampling to reduce the cost of simulating a Li₄Mn₂O cluster to roughly 100–350 logical qubits and under 4×10⁸ T-gates. Moving to the more complex RIXS spectra, a February 2026 preprint applied active-space reduction to suppress requirements to fewer than 500 logical qubits, bringing quantum simulation of battery cathode spectra within near-term reach.

Separately, a first-quantization approach to the LiNiO₂ cathode problem requires only 1,380 logical qubits, compared to roughly 75,000 in second-quantized encoding, a dramatic illustration of how representation choice can shift a problem by nearly two orders of magnitude.

Gene Regulatory Networks

In systems biology, genetic regulatory networks modeled as Boolean networks require finding structural attractors that dictate cellular phenotypes. Specialized quantum algorithms scale linearly with both the number of Boolean agents and the required time steps. Simulating contemporary models featuring 40–50 genes across 10 time steps demands approximately 400 logical qubits, offering a pathway to explore classically infeasible genomic networks.

Thermal State Preparation and Surface Chemistry

Universal Gibbs thermal state preparation, vital for modeling surface chemistry and fundamental magnetic phenomena, requires approximately 810 logical qubits and ~10⁸ logical gates (arXiv:2406.06281). This is a foundational capability: thermal states are the starting point for modeling everything from catalytic surfaces to magnetic materials at finite temperature.

Chemical Dynamics

A 2026 paper on pre-Born-Oppenheimer quantum dynamics provides detailed resource estimates for simulating real chemical reactions with full electron-nuclear coupling. NH₃ + BF₃ requires 1,362 logical qubits (312 ancilla) and 8.72×10⁹ Toffolis per femtosecond of simulated time. Similar estimates hold for 2NO₂ (1,419 LQ), C₂H₄ + O₃ (1,341 LQ), and C₂H₄ + O₂ (1,453 LQ). This is the regime where quantum computers start modeling actual chemical reactions as they unfold, not just static snapshots of molecular ground states.
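Because these costs are quoted per femtosecond of simulated dynamics, total cost scales linearly with trajectory length. A sketch using the NH₃ + BF₃ figure and an assumed 1 MHz Toffoli rate (the rate is an illustrative assumption, not a hardware spec):

```python
# Dynamics cost scales linearly with simulated time: total Toffolis
# = (Toffolis per femtosecond) * (femtoseconds simulated).

TOFFOLI_PER_FS = 8.72e9   # NH3 + BF3 figure quoted above
TOFFOLI_RATE_HZ = 1e6     # assumed 1 MHz execution rate

def wall_clock_days(sim_time_fs: float) -> float:
    """Days of machine time to simulate sim_time_fs of dynamics."""
    return sim_time_fs * TOFFOLI_PER_FS / TOFFOLI_RATE_HZ / 86_400

for fs in (1, 10, 100):
    print(f"{fs:>4} fs of dynamics -> {wall_clock_days(fs):6.1f} days")
```

Under these assumptions a single femtosecond costs a couple of hours of machine time, and a 100 fs trajectory, short by chemical-dynamics standards, already costs on the order of ten days.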

Machine Learning: Tensor PCA

An end-to-end algorithmic framework for Tensor Principal Component Analysis and planted kXOR problems demonstrates that 900 logical qubits can solve problems requiring approximately 10²³ classical FLOPs, a genuine superquadratic quantum speedup. The quantum algorithm demands approximately 10¹⁵ total gates and a gate depth of 10¹², which is staggering but still dramatically better than the classical alternative. Without recent compilation improvements, the quantum gate count would have exceeded 10¹⁹.

This is one of the rare examples outside chemistry and physics where a fault-tolerant quantum algorithm offers a provable advantage that survives careful end-to-end resource analysis.

Rung 4: 1,000–5,000 Logical Qubits – Grand Challenge Chemistry

This is where quantum computing starts to look like the pitch deck, but with real numbers attached.

The FeMoco Problem at Scale

Google Quantum AI’s Stage-IV compendium places the full FeMoco nitrogen fixation calculation at approximately 1,500 logical qubits for a 76-orbital problem, with a quoted runtime of about 9 hours under their stated assumptions. The latest tensor hypercontraction estimates from Lee et al. (2021) settle at 2,142 logical qubits and 5.3×10⁹ Toffoli gates, running in under four days.

Understanding how nitrogenase fixes nitrogen at ambient conditions could revolutionize ammonia production (currently responsible for roughly 1–2% of global energy consumption and 1.2–3% of global CO₂ emissions through the Haber-Bosch process). The Bellonzi et al. study on homogeneous nitrogen fixation catalysts is one of the few papers that connects physics, economics, and runtime: the highest-utility calculation is valued at approximately $200,000, with the quantum workload estimated at 139,000 QPU-hours versus 400,000 CPU-hours for the equivalent classical DMRG calculation.

Cytochrome P450

Cytochrome P450 enzymes metabolize roughly 50% of marketed drugs through extreme spin-state fluctuations at their heme cores. Goings et al. (PNAS 2022) estimated the quantum resources at approximately 4,900 logical qubits, ~10⁹ Toffoli gates, and 73 hours of runtime on roughly 4.6 million physical qubits. Caesura et al. (2025) demonstrated a 234× speedup using photonic active-volume architecture, making this a plausible target for machines in the 2030–2035 timeframe.

Ruthenium CO₂ Catalyst

Von Burg et al. estimated that a ruthenium catalyst for CO₂-to-methanol conversion requires approximately 4,000 logical qubits, ~10¹⁰ Toffoli gates, and 28 hours at 10 μs per Toffoli. This is the kind of computation that could accelerate the design of carbon capture catalysts, a climate application with obvious urgency.

The Chemistry Advantage Debate

Any honest assessment of this rung must contend with the Dalzell-Lee analysis (Nature Communications 14:1952, 2023), which examined the empirical performance of classical methods (DMRG, CCSD(T), FCIQMC, selected CI) across chemical space and concluded that evidence for exponential quantum advantage in ground-state chemistry has yet to be found. Features that enable efficient quantum state preparation tend to also enable efficient classical heuristics.

This doesn’t mean quantum chemistry simulations are useless; polynomial speedups on strongly correlated subsystems remain genuine targets. But it does mean the “killer app” framing of 2017 has quietly been revised. The quantum advantage in chemistry is likely to be real but narrower than the early hype suggested: specific strongly correlated systems where classical methods cannot achieve the required accuracy, embedded in larger hybrid quantum-classical workflows.

Rung 5: 5,000–100,000+ Logical Qubits – Materials at Scale and Fundamental Physics

Past 5,000 logical qubits, the problems start to look genuinely transformative, but the timelines stretch correspondingly.

Inertial Confinement Fusion

Rubin, Berry, Baczewski et al. (Google-Sandia, PNAS 2024) computed that α-particle stopping power in warm-dense matter, directly relevant to National Ignition Facility target design, requires roughly 10⁴ logical qubits and 10¹¹–10¹² Toffoli gates. This is the clearest near-term fusion-energy quantum application. Tokamak MHD, by contrast, has no credible FTQC resource estimate.

Battery Electrolytes at Scale

Google’s compendium places a LiPF₆ battery electrolyte problem at roughly 18,000 logical qubits, the scale where quantum simulation moves from heroic individual calculations to modeling system sizes that map naturally onto real industrial R&D needs.

Bulk Solid-State Physics

Calculating exact ground-state energies of solid-state lattice structures like NiO and PdO requires double- or triple-ζ polarized basis sets to accurately model bulk material properties. While small-unit-cell simulations (8–16 atoms) need only a few thousand logical qubits, larger supercells (up to 72 atoms) drive requirements to approximately 100,000 logical qubits and 65 million physical qubits at a 0.01% physical error rate.

Lattice Gauge Theory

The crown jewel of quantum physics simulation. Rhodes, Kreshchuk, and Pathak (2024) demonstrated up to 25 orders of magnitude reduction in spacetime volume for non-Abelian SU(2)/SU(3) lattice gauge theories via qubitization, compared to Trotterized approaches. Quantum simulation avoids QCD’s notorious sign problem by construction, since Hamiltonian evolution dispenses with importance sampling. Full 3+1D QCD simulation remains well beyond current estimates, but lower-dimensional gauge theories are accessible in the 1,000–10,000 logical qubit range.

The Quadratic Speedup Problem: Why Finance, Logistics, and ML Face a Structural Barrier

I need to address the elephant in the room. Every quantum computing pitch deck includes slides about portfolio optimization, derivative pricing, vehicle routing, and machine learning. The evidence for quantum advantage in these domains is dramatically weaker than for chemistry and physics. And the reason is structural, not merely technological.

The core argument was articulated most clearly by Babbush, McClean, Newman, Gidney, Boixo, and Neven in their 2021 paper “Focus Beyond Quadratic Speedups for Error-Corrected Quantum Advantage.” Surface-code logical clock rates (~10 kHz today, potentially ~1 MHz long term), combined with physical-to-logical overhead of 10³–10⁴, make quadratic quantum speedups uncompetitive against massively parallelized classical heuristics. And quadratic speedups are exactly what Grover-type search, quantum amplitude estimation for Monte Carlo, QAOA, and quantum-accelerated simulated annealing provide. (I’ve written separately about why Grover’s algorithm won’t kill AES – the same structural argument applies across all these domains.)

Sanders, Berry, Gidney, and Babbush quantified this concretely: quantum-accelerated simulated annealing would require approximately one day and one million physical qubits to match what classical simulated annealing solves in four CPU-minutes on spin-glass instances.
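The Babbush et al. argument reduces to simple arithmetic: classical cost N·t_c versus Grover-type quantum cost √N·t_q. A sketch with illustrative rates (none of these constants come from the paper; they are assumptions chosen to show the shape of the problem):

```python
# Quadratic-speedup break-even analysis. Classical brute force costs
# N * t_c; a Grover-type algorithm costs sqrt(N) * t_q. The quantum
# side wins only for N > (t_q / t_c)**2, and the runtime both sides
# need at that break-even point is t_q**2 / t_c.

t_q = 1e-4          # seconds per Grover iteration at a ~10 kHz logical clock (assumed)
t_c_serial = 1e-9   # ~1 ns per candidate on one classical core (assumed)
cores = 1e6         # classical heuristics parallelize; Grover largely doesn't
t_c = t_c_serial / cores

break_even_n = (t_q / t_c) ** 2
break_even_runtime_s = t_q ** 2 / t_c

print(f"break-even size:    {break_even_n:.0e} candidates")
print(f"break-even runtime: {break_even_runtime_s:.1e} s "
      f"(~{break_even_runtime_s / 86_400:.0f} days)")
```

With these assumptions the quantum machine only pulls ahead on instances of ~10²² candidates, at which point both sides have already been running for months. Slower logical clocks or more classical cores push the break-even point further out still.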

Derivative Pricing

JPMorgan and IBM’s resource estimate for pricing a 3-asset autocallable basket option landed at ~8,000 logical qubits, a T-depth of 5.4×10⁷, and ~10¹⁰ total T-gates; running in under one second would require a ~10 MHz logical clock, roughly 1,000× faster than any currently forecast machine. Stamatopoulos and Zeng (2024) used quantum signal processing to cut this to ~4,700 logical qubits and ~10⁹ T-gates but still need ~45 MHz T-gate throughput. The global OTC derivatives market is $846 trillion (BIS, mid-2025); the addressable value is enormous, but the resource gap is 3–4 orders of magnitude from realistic hardware.

Portfolio Optimization

The Goldman Sachs-AWS end-to-end resource analysis of quantum interior-point methods for portfolio optimization (Dalzell et al., PRX Quantum 4:040325, 2023) found per-iteration T-counts of 10²²–10²⁴ on 100-stock problems because tomography dominates. The authors explicitly conclude QIPM is not practically advantageous for portfolio optimization without fundamental algorithmic improvements.

Machine Learning’s Great Dequantization

Ewin Tang’s 2018 breakthrough demonstrated that the Kerenidis-Prakash exponential speedup for recommendation systems evaporates under matching classical sampling access. Subsequent work extended this to quantum SVM, PCA, low-rank regression, and discriminant analysis. What survived dequantization: algorithms on genuinely quantum inputs (learning from quantum experiments) and Hamiltonian-simulation-based linear systems without QRAM. Variational quantum machine learning faces mounting evidence against scalability: barren plateaus, training landscape traps, NP-hardness of optimization, and a 2025 result showing that barren-plateau-free VQA architectures admit efficient classical surrogates.

The honest summary: finance, optimization, and generic ML are the weakest cases for fault-tolerant quantum advantage. This doesn’t mean quantum computing will never help in these domains. But the path requires either (a) discovering super-polynomial speedups for specific problem classes, or (b) engineering quantum clock rates three orders of magnitude beyond current projections. Neither is impossible, but neither should be assumed.

The Comprehensive Utility Table

The table below consolidates the most authoritative end-to-end resource estimates by application domain and logical-qubit tier. I’ve excluded cryptanalysis (I’ve covered that extensively elsewhere) to focus on the positive utility landscape.

Early Fault-Tolerant (25–300 Logical Qubits)

| Application | Problem | Logical Qubits | Toffoli/T-Gate Count | Estimated Runtime | Key Citation |
|---|---|---|---|---|---|
| Chemistry | Glycine ground-state energy (QPE) | 55 | ~10¹⁰–10¹¹ | Hours–days | QuRE toolbox |
| Condensed matter | Fermi-Hubbard L=8 (QPE) | ~130 | ~10⁶ Toffoli | Hours | Campbell 2022 |
| Condensed matter | 2D Ising quench (121-site) | 140 | – | – | Google Stage-IV |
| Condensed matter | 2D Hubbard quench (128-orbital) | 160 | – | – | Google Stage-IV |
| Materials | SrVO₃ supercell (Trotter) | 180 | depth 884/layer | Many layers | Clinton et al. 2024 |
| Pharma | Photosensitizers/BODIPY (PDT) | 180–350 | 10⁷–10⁹ Toffoli depth | Hours–days | Xanadu 2025 |
| Condensed matter | Cuprate Hubbard+t′ | 200–500 | ~5×10⁶ Toffoli | Hours | Kan-Symons 2025 |
| Chemistry | Rovibrational Hamiltonians (12-atom) | <300 | – | ~3 months at 1 MHz | arXiv:2510.19062 |
| Materials | OLED heavy-metal complexes (Pt, Ir) | ~200–300 | Millions of gates | – | Ryabinkin et al. 2025 |
| Spectroscopy | NMR spectral prediction | Few hundred | <10¹² T-gates | Days | arXiv:2406.09340 |

Mid-Scale Fault-Tolerant (300–1,000 Logical Qubits)

| Application | Problem | Logical Qubits | Toffoli/T-Gate Count | Estimated Runtime | Key Citation |
|---|---|---|---|---|---|
| Biology | Boolean gene regulatory networks (40–50 genes) | ~400 | – | – | Rossini et al. 2025 |
| Energy | Li-rich NMC cathodes (RIXS) | <500 | Massively reduced from 10¹³ | – | Xanadu/NRC 2026 |
| Physics | Gibbs thermal state preparation | ~810 | ~10⁸ logical gates | – | arXiv:2406.06281 |
| ML/Data | Tensor PCA & planted kXOR | ~900 | ~10¹⁵ total gates | – | arXiv:2510.07273 |

Large-Scale Fault-Tolerant (1,000–10,000 Logical Qubits)

| Application | Problem | Logical Qubits | Toffoli/T-Gate Count | Estimated Runtime | Key Citation |
|---|---|---|---|---|---|
| Chemistry | FeMoco (THC, full estimate) | 2,142 | 5.3×10⁹ Toffoli | <4 days | Lee et al. 2021 |
| Materials | LiNiO₂ cathode (first quantization) | 1,380 | – | – | Berry et al. 2024 |
| Chemistry | Pre-B-O dynamics: NH₃ + BF₃ | 1,362 | 8.72×10⁹ Toffoli/fs | – | Pocrnic et al. 2026 |
| Chemistry | Pre-B-O dynamics: C₂H₄ + O₃ | 1,341 | – | – | Pocrnic et al. 2026 |
| Pharma | Cytochrome P450 (drug metabolism) | ~4,900 | ~10⁹ Toffoli | ~73 hours | Goings et al. 2022 |
| Chemistry | Ru-CO₂ catalyst | ~4,000 | ~10¹⁰ Toffoli | ~28 hours | von Burg et al. 2021 |
| Energy | ICF stopping power | ~10,000 | ~10¹¹–10¹² Toffoli | – | Rubin et al. 2024 |
| Finance | Derivative pricing (autocallable) | ~8,000 | T-depth 5.4×10⁷ | Needs 10 MHz clock | Chakrabarti et al. 2021 |

Extreme Scale (10,000+ Logical Qubits)

| Application | Problem | Logical Qubits | Toffoli/T-Gate Count | Estimated Runtime | Key Citation |
|---|---|---|---|---|---|
| Energy | LiPF₆ battery electrolyte | ~18,000 | – | – | Google Stage-IV |
| Materials | NiO/PdO supercells (72-atom bulk) | ~100,000 | – | – | Hariharan et al. 2024 |
| Energy | Li₂FeSiO₄ battery cathode (plane-wave) | >2,000 | >10¹³ Toffoli | – | Delgado et al. 2022 |

What the Ladder Actually Tells Us

Step back from the individual entries and three patterns emerge:

First, physical-system simulation dominates. Of the applications with strong evidence for quantum advantage, the overwhelming majority involve simulating quantum mechanical systems: molecules, materials, lattice models, gauge theories. Quantum computers are literally made of quantum stuff, so this makes sense. They simulate quantum systems natively. Richard Feynman’s original 1982 insight remains the most accurate prediction in the field: quantum computers will simulate quantum systems.

Second, the center of gravity has shifted. The marquee application of 2017 was FeMoco, the full nitrogenase cofactor simulation requiring thousands of logical qubits. The most exciting near-term targets today are condensed-matter lattice models (Hubbard, cuprate, pnictide) at 100–500 logical qubits. These require 10⁶–10⁸ Toffoli gates rather than 10⁹–10¹³, putting them within reach of announced 2029 hardware. The first fault-tolerant scientific result, almost certainly a condensed-matter Hamiltonian or a photosensitizer calculation rather than FeMoco, is plausible before 2030.

Third, the utility landscape is narrower than advertised. Finance, logistics, and generic machine learning face the quadratic speedup barrier. Vendor marketing treats these as equivalent application domains to chemistry and physics, but the evidence base is dramatically weaker. The honest assessment is that chemistry and physics applications have provable super-polynomial advantages (or at least strong polynomial advantages on specific systems), while optimization and ML applications mostly offer quadratic advantages that cannot overcome fault-tolerant overhead.

The Error Correction Revolution That Makes It All Possible

None of this happens without a revolution in error correction efficiency that is already underway.

The surface code, the workhorse of quantum error correction since the early 2000s, encodes one logical qubit per ~2d²−1 physical qubits, where d is the code distance. At distance 25, that’s roughly 1,250 physical qubits per logical qubit. For 1,000 logical qubits at distance 25, you need 1.25 million physical qubits just for data storage, plus magic state factories.

qLDPC codes are changing the math. IBM’s bivariate bicycle “gross” code [[144,12,12]] encodes 12 logical qubits in 144 data qubits (12:1 ratio); including syndrome extraction ancillas, total overhead reaches roughly 24:1, compared to surface code’s 1,000:1 at equivalent error suppression. That’s a 40× compression.

Magic state cultivation, developed by Gidney, Shutty, and Jones (2024), dramatically reduces the spatial overhead of T-gate production by integrating distillation into the computational fabric. Algorithmic fault tolerance (QuEra/Harvard/Yale, 2025) cuts runtime overhead 10–100× by running one error-check round per logical layer rather than dozens.

These are architectural shifts that change which rungs of the utility ladder are reachable with which hardware. A 100,000-physical-qubit machine with surface codes gives you perhaps 80–100 logical qubits. The same machine with advanced qLDPC codes could give you 1,000–4,000. That is the difference between glycine ground-state calculations and FeMoco.
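The surface-code versus qLDPC comparison reduces to simple division. The overhead ratios below come from the figures above (1,249:1 for the distance-25 surface code, ~24:1 for a gross-code-style qLDPC layout); magic state factories and routing overhead are deliberately ignored, which is why real machines land toward the lower end of each range.

```python
# How a fixed physical-qubit budget translates into logical qubits
# under two encoding overheads. Ratios are from the article's figures;
# magic state factories are ignored for simplicity.

PHYSICAL_BUDGET = 100_000

overheads = {
    "surface code (d=25)": 1_249,  # 2*25^2 - 1 physical per logical
    "qLDPC gross code":    24,     # [[144,12,12]] data + check qubits
}

for name, ratio in overheads.items():
    print(f"{name}: ~{PHYSICAL_BUDGET // ratio} logical qubits")
```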

When Will Each Rung Be Reached?

Based on current vendor roadmaps and the pace of error-correction improvements:

2026–2028: 10–50 logical qubits at modest fidelity. Validation of QPE techniques, small lattice-model demonstrations. Scientific publications, not industrial workflows.

2028–2030: 100–200 logical qubits (IBM Starling at 200 LQ, QuEra at 100 LQ, Quantinuum Apollo at hundreds of LQ). First classically intractable lattice-model computations. Photosensitizer and OLED calculations. Early-FT chemistry with embedded active spaces. This is the “Wright Brothers” moment for fault-tolerant quantum utility.

2030–2033: 500–2,000 logical qubits (IBM Blue Jay at 2,000 LQ, IonQ targeting 800+ LQ). Battery materials RIXS. FeMoco-class calculations. Pre-Born-Oppenheimer dynamics. First results feeding into real pharmaceutical and materials R&D pipelines. This is where the economic case starts to close, and, as I’ve analyzed elsewhere, where the cryptographic implications of 1,200+ logical qubits begin to demand attention.

2033–2040: 5,000–100,000 logical qubits. Bulk solid-state physics. Full catalytic cycle modeling. ICF target design. Lattice gauge theory approaching QCD.

These timelines assume continued progress in error correction, algorithmic compression, and physical qubit quality. They are not predictions, just scenarios. The IonQ projection of 40,000–80,000 logical qubits by 2030 is the most aggressive publicly stated roadmap and depends on successful integration of Oxford Ionics’ chip-integrated ion traps. IBM’s Blue Jay date carries the footnote “2033 or maybe beyond.”

What Should Organizations Do Now?

The natural question for CISOs, CTOs, and R&D leaders reading this is: what should I do about it?

For organizations in chemistry, materials science, energy, and pharma: the time to build quantum computing teams and identify target calculations is now. The Rung 2 applications (300 logical qubits) align with hardware projected for 2028–2030. That means the algorithms need to be identified, the classical preprocessing pipelines built, and the hybrid quantum-classical workflows designed before the hardware arrives. Organizations that wait until machines exist will be three to five years behind those that start today.

For organizations in finance and logistics: maintain awareness but manage expectations. The evidence for near-term quantum advantage in these domains is weak. The quadratic speedup barrier is real. Focus quantum computing investment on understanding your organization’s actual computational bottlenecks and whether any of them involve simulating quantum mechanical systems (e.g., materials science for manufacturing, molecular modeling for biotech). That’s where the near-term utility lives.

For everyone: the error-correction revolution means that physical qubit counts will translate into dramatically more logical qubits than anyone projected five years ago. The qLDPC compression is real and accelerating. The utility thresholds I’ve laid out in this article will be reached sooner than the raw physical-qubit roadmaps suggest.

The Cryptographic Threshold Comes First

One of the most consequential findings from this analysis deserves explicit attention. The logical qubit requirements for breaking RSA-2048 (~1,399 logical qubits) and 256-bit elliptic curve cryptography (~1,200 logical qubits) sit below the requirements for most of the grand challenge chemistry and materials applications on this ladder. FeMoco simulation requires 2,142. Cytochrome P450 requires 4,900. Full-scale materials modeling requires tens of thousands.
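Laid out as data, the ordering is stark. The figures below are the article’s cited point estimates; the lattice-model entry uses the top of the 100–300 range as a stand-in, which is my simplification.

```python
# Logical-qubit thresholds cited in this article, sorted to show where
# cryptanalysis falls relative to the chemistry grand challenges.
# The lattice-model figure (300) is the top of the cited range.

thresholds = {
    "Condensed-matter lattice models": 300,
    "ECC-256 cryptanalysis":           1_200,
    "RSA-2048 cryptanalysis":          1_399,
    "FeMoco ground state":             2_142,
    "Cytochrome P450":                 4_900,
}

for name, lq in sorted(thresholds.items(), key=lambda kv: kv[1]):
    print(f"{lq:>6,} logical qubits  {name}")
```

Sorting by logical qubits puts both cryptanalysis milestones below every grand-challenge chemistry target, which is the whole argument of this section in one loop.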

This means a cryptographically relevant quantum computer (CRQC) arrives before most of the transformative scientific applications, not after. There is a widespread misconception that CRQC represents the hardest quantum computing milestone, and that the field will pass through a visible sequence of scientific achievements that provides years of warning. The ladder tells a different story. The first fault-tolerant machines capable of running condensed-matter simulations at 100–300 logical qubits will be scientifically exciting but commercially modest. The next threshold up the ladder, at roughly 1,200–1,400 logical qubits, is cryptanalysis. The grand challenge chemistry comes after that.

Organizations waiting for quantum chemistry breakthroughs as their signal to begin post-quantum cryptography migration are reading the ladder upside down. The harvest-now-decrypt-later threat is active today, and the migration itself takes years. By the time a 2,000-logical-qubit machine is running FeMoco simulations, a 1,200-logical-qubit machine will already have been capable of breaking your encryption for some time. The deadlines are already set; the ladder confirms why they are urgent.

The View from the Top of the Ladder

The useful mental model for the fault-tolerant era is not “quantum will be broadly useful soon” but “quantum will be narrowly useful first.” Physical-system simulation (lattice models, strongly correlated chemistry, spectroscopy, materials dynamics) sits at 10⁶–10⁹ Toffoli gates and 10²–10³ logical qubits, within one order of magnitude of announced 2029 roadmaps.

Full-scale industrial chemistry (FeMoco, P450, catalytic cycles) sits one rung higher, at 10³–10⁴ logical qubits, a 2030–2035 story under optimistic assumptions. And the evidence from Dalzell and Lee (2023) suggests the advantage there will be polynomial rather than exponential for most systems, making the case for quantum investment more nuanced than the early hype implied.

Finance and optimization face a structural problem that better hardware cannot fix. The gap between published resource estimates and hardware roadmaps is not narrowing fast enough to close before 2035 on quadratic-speedup applications.

What has changed most since 2022 is not the algorithmic landscape but the error-correction overhead. qLDPC codes, magic state cultivation, and algorithmic fault tolerance are the levers that will determine whether the first meaningful fault-tolerant quantum computation is a 2028 Hubbard model, a 2031 FeMoco calculation, or something further out. The answer depends less on what new algorithms get invented than on whether the engineering compression of the past four years continues at its current pace.

Quantum Upside & Quantum Risk - Handled

My company, Applied Quantum, helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.