Grover’s Algorithm vs AES – Why “Ignore It” Is Almost Right
In September 2024, Sam Jaques, a quantum resource estimation specialist at the University of Waterloo, stood before the CHES (Cryptographic Hardware and Embedded Systems) conference and delivered a talk with a conclusion that would have surprised many security professionals: Grover’s algorithm will probably never break AES-128. Not in our lifetimes. Possibly not ever.
That is not the fringe opinion it might sound like. It is rapidly becoming the dominant view among quantum computing researchers who actually work through the resource estimates. And in April 2026, Filippo Valsorda, the former Go cryptography lead at Google, published a detailed technical analysis reaching the same conclusion, complete with concrete numbers: breaking AES-128 with Grover would require approximately 140 trillion parallel quantum circuits, each running continuously for a decade, at a total computational cost roughly $$10^{23}$$ times higher than breaking 256-bit elliptic curves with Shor’s algorithm.
The argument has been building for years. In 2021, Ryan Babbush and colleagues at Google Quantum AI published “Focus Beyond Quadratic Speedups for Error-Corrected Quantum Advantage” in PRX Quantum, concluding that quadratic speedups (the class that includes Grover’s algorithm) will not enable quantum advantage on early fault-tolerant devices unless there is a fundamental improvement in quantum error correction. In 2023, Torsten Hoefler, Thomas Häner, and Matthias Troyer at Microsoft and ETH Zurich published “Disentangling Hype from Practicality,” reaching an even starker conclusion: with quadratic speedup, even a single floating-point or integer operation per oracle call leads to crossover times of several months against a single classical chip – making non-trivial applications essentially impossible.
These are not marginal voices. This is Google Quantum AI. This is Microsoft Research. This is ETH Zurich. And NIST agrees – explicitly stating that “it is quite likely that Grover’s algorithm will provide little or no advantage in attacking AES, and AES-128 will remain secure for decades to come.”
So the question I want to explore in this article is not whether they are wrong. On the merits of their analysis, they are right. The question is whether “probably never” – the phrase Jaques used – is a statement about the physics, or a statement about the assumptions.
Why the Math Works: Three Pillars of Infeasibility
To understand why the consensus exists, you need to understand three distinct problems that compound against Grover’s algorithm when you move from textbook theory to engineering reality.
Pillar 1: Grover Parallelizes Badly
This is the most important and least intuitive point. In the textbook version, Grover’s algorithm searches a space of size N in approximately √N steps – a quadratic speedup over classical brute force. For AES-128, that means roughly $$2^{64}$$ oracle evaluations instead of $$2^{128}$$. Sounds devastating.
But $$2^{64}$$ sequential quantum operations is an enormous number. At a logical gate speed of 1 microsecond—an optimistic assumption for error-corrected surface-code operations on superconducting hardware—running $$2^{64}$$ operations in sequence would take over 500,000 years.
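That figure is easy to check. The sketch below is my own back-of-the-envelope arithmetic, using the same assumed 1 µs logical gate time as above:

```python
# Rough check of the sequential-runtime claim: 2**64 logical operations
# at an assumed 1 microsecond per error-corrected gate.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

ops = 2**64
gate_time_s = 1e-6  # optimistic surface-code logical gate time (assumption)
years = ops * gate_time_s / SECONDS_PER_YEAR
print(f"{years:,.0f} years")  # roughly 585,000 years
```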
The classical solution to this problem is obvious: parallelize. Buy more machines, split the work, combine results. Classical brute force parallelizes perfectly.
Grover’s algorithm does not have this property. Zalka proved in 1997 that the optimal strategy for parallelizing Grover is simply to partition the search space. If you have P quantum computers, each searches a space of size N/P, requiring √(N/P) steps. The speedup is only √P, not P. Worse, the total work across all machines is √(N/P) × P = √N × √P, which grows with parallelization.
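The partition strategy can be illustrated numerically. This is my own sketch of the scaling described above: depth falls only as √P, while total work grows as √P.

```python
import math

# Partition strategy: P machines each search N/P keys, so each runs
# sqrt(N/P) Grover iterations; total work is P * sqrt(N/P) = sqrt(N*P).
def grover_parallel(N, P):
    depth = math.sqrt(N / P)   # sequential iterations per machine
    total_work = P * depth     # grows with P: sqrt(N) * sqrt(P)
    return depth, total_work

N = 2**128  # AES-128 key space
for P in (1, 2**20, 2**40):
    depth, work = grover_parallel(N, P)
    print(f"P=2^{round(math.log2(P)):>2}: depth 2^{math.log2(depth):.0f}, "
          f"total work 2^{math.log2(work):.0f}")
```

A million machines (P ≈ 2^20) cut the runtime by only a factor of about a thousand, while the total bill goes up by the same factor.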
This is a structural limitation that fundamentally changes the economics of the attack. As Jaques put it in his CHES talk: “The more you need to parallelize, the more you’re losing your relative advantage to classical computing.”
Valsorda worked through the concrete numbers for AES-128 with a generous 10-year time budget and 1 µs gate speed. To fit within a decade, you need roughly $$2^{47}$$ parallel quantum circuits (about 140 trillion), each using 724 logical qubits. The total DW-cost (depth × width, analogous to core-cycles in classical computing) comes to approximately $$2^{104.5}$$. That is an extraordinary number, and it is only the logical cost, before accounting for the physical overhead of error correction.
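These figures hang together arithmetically. The check below is mine, using the stated inputs (10-year budget, 1 µs gates, 2^47 circuits of 724 logical qubits); the small deviation from 2^104.5 is rounding in the quoted inputs:

```python
import math

# DW-cost = depth x width, summed over all parallel circuits.
budget_steps = 10 * 365.25 * 24 * 3600 * 1e6  # 1 us gate steps in 10 years, ~2^48.2
circuits = 2**47                              # parallel Grover instances
width = 724                                   # logical qubits per circuit

dw_cost = circuits * width * budget_steps
print(f"DW-cost ~ 2^{math.log2(dw_cost):.1f}")  # close to the quoted 2^104.5
```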
Pillar 2: The Error Correction Tax
Every one of those logical qubits needs to be implemented using physical qubits through quantum error correction. With the surface code, the ratio is roughly 1,000 physical qubits per logical qubit. This is a fundamental overhead of maintaining quantum coherence through redundancy.
Jaques estimated that for a surface-code implementation of Grover on AES-128, each logical qubit-timestep requires approximately $$2^{16}$$ physical operations for error correction overhead. Multiply that across the depth and width of the computation, and the total quantum operations reach approximately $$2^{136}$$ – compared to $$2^{143}$$ for a classical brute-force attack. The quantum “advantage” has shrunk from a factor of $$2^{64}$$ to barely a factor of 100.
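A one-line check on the quoted gap:

```python
# Remaining quantum "advantage" after error-correction overhead:
# ~2^143 classical operations vs ~2^136 quantum operations.
advantage = 2.0**143 / 2.0**136
print(f"~{advantage:.0f}x")  # 2^7 = 128: barely two orders of magnitude
```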
At that scale, as Jaques noted in his CHES presentation, the computation runs into fundamental physical limits. Back-of-the-envelope Landauer calculations push the energy for such an attack into nuclear-power-plant territory, and the implied hardware footprint becomes astronomically large.
Pillar 3: The Constant Factor Is Not a Constant
The textbook presentation of Grover’s algorithm suggests the cost is “$$2^{64}$$ times a small constant.” But as Jaques emphasized, that “small constant” encompasses the entire quantum circuit for AES evaluation, including its translation into fault-tolerant gate sets, the routing overhead of surface-code architectures, magic state distillation for T-gates, and the additional parallelization penalty.
The research community has been optimizing these AES quantum circuits for a decade. Grassl et al. published the first estimates in 2015. Jaques et al. improved them significantly at EUROCRYPT 2020. Jang et al. and Huang et al. pushed further at ASIACRYPT 2022 and 2023. Most recently, Liao and Luo (2025) achieved the lowest known DW-cost. But as Valsorda noted, the AES oracle depth and width contribute only about 17 bits to the total attack cost, and there is probably little room left for improvement. Unlike Shor’s algorithm where resource estimates have dropped by orders of magnitude over the past decade, the Grover circuit cost is approaching a floor.
Hoefler et al. made the same point from a different angle. They compared a hypothetical future quantum computer (10,000 error-corrected logical qubits, 10 µs gate time, all-to-all connectivity) against a single classical chip manufactured today (similar to an NVIDIA A100 GPU). Even with these wildly optimistic quantum assumptions and pessimistic classical assumptions, the classical chip’s operation throughput exceeds the quantum computer’s by a factor of more than $$2^{10}$$ for binary operations. For a quadratic speedup to overcome this constant-factor disadvantage within a two-week runtime budget, the oracle function can contain at most 68 binary operations — far too few for anything non-trivial, let alone AES.
What the Consensus Gets Right
The analytical work behind this consensus is excellent. Jaques, Babbush et al., Hoefler et al., and Valsorda are all doing precisely the kind of rigorous, assumption-explicit resource estimation that the quantum computing field desperately needs more of. They are the antidote to the breathless “Q-FUD” that exaggerates every quantum development into an existential threat.
Their core conclusion – that Grover’s algorithm on AES-128 is practically infeasible with anything resembling current or near-future quantum architectures – is well-supported. This matters enormously for prioritization. As Valsorda put it, breaking AES-128 with Grover is approximately $$10^{23}$$ times more expensive than breaking 256-bit elliptic curves with Shor’s algorithm. Organizations should be spending their limited PQC migration resources on replacing RSA and ECC, not worrying about AES key sizes.
NIST explicitly states that current applications can continue using AES with 128-, 192-, or 256-bit keys and focuses its PQC transition guidance on public-key algorithms, not symmetric primitives. The BSI (German Federal Office for Information Security) concurs.
Where “Probably Never” Becomes an Assumption
Now here is where I part company (slightly and carefully) with the strongest version of this consensus. Not on the analysis, but on the framing.
Every one of the infeasibility arguments above rests on a specific set of architectural assumptions. And those assumptions are already showing cracks.
Assumption 1: Surface Codes Are the Error Correction Paradigm
All three major analyses (Babbush et al., Hoefler et al., and Jaques) use surface codes as their reference architecture. This is reasonable: surface codes are the most mature, most studied, and most practically demonstrated quantum error correction scheme. Google’s landmark 2023 result demonstrating a logical qubit outperforming its constituent physical qubits used a surface code. It is the standard for a reason.
But surface codes are also extraordinarily expensive. They require roughly 1,000 physical qubits per logical qubit. Their gate operations, particularly T-gates which dominate the cost of cryptographic circuits, require elaborate magic state distillation factories that consume enormous chip area and time.
Quantum low-density parity-check (qLDPC) codes are already changing this picture. In February 2026, Iceberg Quantum published the “Pinnacle Architecture,” demonstrating that qLDPC codes can reduce the physical qubit count for RSA-2048 factoring from roughly one million (surface code) to under 100,000 – a tenfold reduction. Photonic Inc.’s SHYPS codes claim up to a 20x reduction in physical overhead per logical qubit compared to surface codes, with single-shot error correction that reduces logical cycle time by a factor of 30.
Iceberg has hardware partnerships with PsiQuantum, Diraq, Oxford Ionics, and IonQ. Photonic has run fault-tolerant simulations of SHYPS codes on 98 data qubits for 126 clock cycles. IBM announced its transition to qLDPC codes in 2024, and Riverlane predicts industry-wide adoption by 2026-2027.
If the error correction overhead drops by even one order of magnitude, from 1,000:1 to 100:1, the total cost of a Grover attack changes substantially. Not enough to make it feasible tomorrow. But enough to make “probably never” less certain than “probably not in the foreseeable future.”
To be clear: better error correction codes change the constant factors, not the fundamental scaling. Grover’s √P parallelization penalty is algorithmic, not architectural — it applies regardless of whether the underlying machine uses surface codes, qLDPC codes, or something yet to be invented. What qLDPC codes could change is the physical cost per logical operation, which determines how large those constant factors are. That distinction matters, but it is quantitative, not qualitative.
Assumption 2: Gate Speeds Are Inherently Slow
The Babbush et al. analysis assumed surface-code logical gate times on the order of 1-10 microseconds, driven by the physics of superconducting qubits. Hoefler et al. assumed 10 µs. Even Valsorda’s generous 1 µs is a surface-code superconducting assumption.
But not all quantum computing modalities have the same clock speed. Photonic quantum computers operate at the speed of light through optical components. Their physical gate operations occur on nanosecond or sub-nanosecond timescales, potentially 1,000x faster than superconducting logical gates.
The challenge for photonics has always been achieving fault-tolerant error correction, not speed. That challenge is real but evolving. In April 2026, QuiX Quantum demonstrated the first “below-threshold” error mitigation on a photonic quantum computer, achieving a net 1.2x error reduction through photon distillation, with modeling suggesting that combining photon distillation with error correction could substantially reduce the number of photon sources required per logical qubit. This is not fault tolerance. But it is the first time any photonic platform has demonstrated net-positive error suppression, a prerequisite for the path to fault tolerance.
If photonic quantum computing achieves fault tolerance – a significant “if” – the gate-speed advantage would directly attack the depth constraint that makes Grover parallelization so expensive. A 1,000x improvement in gate speed means a 1,000x increase in the number of Grover iterations you can run sequentially, which in turn means a 1,000,000x reduction in the number of parallel circuits required. That would not make Grover feasible, but it would move the needle from “physically impossible” to “extraordinarily expensive.”
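The quadratic relationship between gate speed and machine count follows directly from the depth constraint. This is my own derivation, not a figure from the sources; the absolute counts below ignore oracle depth, which cancels in the ratio:

```python
# With gate time t and wall-clock budget T, each circuit can run ~T/t steps,
# so sqrt(N/P) <= T/t and hence P >= N * (t/T)**2: a k-fold faster gate
# shrinks the required number of parallel circuits by k**2.
def circuits_needed(N, gate_time_s, budget_s):
    return N * (gate_time_s / budget_s) ** 2

N = 2**128
budget = 10 * 365.25 * 24 * 3600          # ten years, in seconds

slow = circuits_needed(N, 1e-6, budget)   # 1 us superconducting logical gate
fast = circuits_needed(N, 1e-9, budget)   # hypothetical 1 ns photonic gate
print(f"reduction: {slow / fast:.0e}x")   # a million-fold fewer circuits
```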
Assumption 3: We Can Predict Architectures Decades Out
Perhaps the most important point, and the one Jaques himself acknowledged, is that we are making predictions about quantum computers that do not yet exist for attacks that would need to succeed many decades from now.
Jaques was candid about this in his CHES talk. He noted that while he does not believe Grover will break AES-128 in our lifetimes, he was “not confident enough to say that you shouldn’t use AES-256.” He described the Grover threat as requiring “at least one or two more paradigm shifts” beyond the shift needed for breaking RSA, and acknowledged that predicting architectures across multiple paradigm shifts is “wild guessing.” Even with optimistic breakthroughs in error correction and gate speed, a Grover attack on AES-128 would still require engineering scales orders of magnitude beyond anything needed for Shor’s algorithm against RSA or ECC — making asymmetric cryptography the clear and urgent near-term priority.
Babbush et al. included a crucial qualifier in their own paper: their conclusion persists “unless there is a significant improvement in how we realize quantum error correction.” Hoefler et al. noted that their analysis “will remain valid even with significant advances in quantum technology of multiple orders of magnitude”, but that is because their comparison is already so conservative for classical computing (a single chip) and so generous for quantum (all-to-all connectivity, simultaneous gate operations). The qualifier is built into the premises.
The history of quantum computing resource estimates should give us pause about “never” claims. In 2019, Gidney and Ekerå estimated that breaking RSA-2048 would require 20 million noisy qubits. By 2025, Gidney alone had reduced that to under a million. The Pinnacle Architecture, published in February 2026, brought it to under 100,000. That is a 200x reduction in six years. We do not know what six more years of architectural innovation will produce.
The Structured Attack Wildcard
There is one more caveat that the consensus explicitly sets aside: structured quantum attacks. Every infeasibility argument above treats AES as a black box and relies on the theoretical optimality of Grover for unstructured search. But AES is not a black box – it has algebraic structure – and quantum algorithms might one day exploit it. Quantum collision-finding algorithms like Brassard–Høyer–Tapp already offer a theoretical cube-root speedup over classical birthday attacks on hash functions, though practical applicability remains limited by qRAM requirements and the same parallelization constraints that hobble Grover. Whether analogous structured approaches could find purchase against AES remains an open question.
Jaques was explicit about this scope limitation: “I want to assume that AES is well-designed and we don’t figure out some structural weakness.” He noted the existence of quantum structured attacks like period finding and cited research on quantum symmetric cryptanalysis, but set them aside.
This is methodologically sound. You cannot estimate the cost of an attack that has not been discovered. But it means the “probably never” conclusion applies specifically to Grover-class key search, not to quantum attacks on AES in general. The quantum threat to symmetric cryptography is broader than Grover, even if Grover is the only fully specified attack today.
What This Means for My CRQC Quantum Capability Framework
In my CRQC Quantum Capability Framework, I track the engineering capabilities required to build a cryptographically relevant quantum computer (CRQC). A CRQC capable of running Shor’s algorithm against RSA-2048 or ECC-256 requires capabilities across error correction (B.1), below-threshold operation (B.3), magic state production (C.2), decoder performance (D.2), and engineering scale (E.1).
A CRQC capable of running Grover’s algorithm against AES-128 would require all of those capabilities, plus sustained runtimes that exceed RSA-class machines by orders of magnitude on the continuous operation dimension (D.3), and manufacturing scale (E.1) beyond anything currently conceivable.
This is the right way to frame the distinction: Shor’s algorithm against asymmetric cryptography is a plausible engineering target on the visible horizon. Grover’s algorithm against AES-128 is not. But “not on the visible horizon” is different from “impossible,” and the horizon keeps expanding.
The Bottom Line: Right Conclusion, Wrong Framing
The consensus is correct that Grover’s algorithm is not a practical threat to AES-128 today, and will not be one for decades under any plausible near-term architecture. Organizations should not divert PQC migration resources to AES key-size upgrades. NIST is right to define AES-128 as the Category 1 benchmark. The urgency is in replacing RSA, ECC, and other asymmetric primitives vulnerable to Shor’s algorithm, and that urgency is real, as I have argued in “Forget Q-Day Predictions — Regulators, Insurers, Investors, Clients Are Your New Quantum Clock.”
But I think framing this as “Grover is dead” or “completely ignore symmetric key upgrades” overstates what the evidence supports. The infeasibility arguments are built on three pillars – surface-code overhead, slow logical gate speeds, and current error correction paradigms – and all three are under active assault by qLDPC codes, photonic architectures, and novel error correction schemes. “Probably never” is a prediction about quantum architectures that have not been invented yet, applied to a timescale of decades to centuries.
AES-256 is not a PQC migration project. It is a configuration change. As Daniel Bernstein has observed, the performance penalty is modest. For systems protecting secrets with multi-decade lifespans – government classified information, long-term healthcare records, critical infrastructure designs – it is cheap insurance against assumptions that might be wrong.
The right mental model is not “Grover is dead.” It is: Grover is in a coma, and the medical consensus says it will never wake up, but the consensus is based on the machines we have today, and someone keeps building better machines.
For organizations navigating the quantum transition, the practical guidance is straightforward: migrate away from RSA and ECC immediately – that is where the real threat lives. Use AES-256 where the switch is trivial and the data is long-lived. Do not spend migration budget on AES upgrades at the expense of PQC deployment. And do not assume that any “never” prediction about quantum computing will age well.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.