A Viral Paper Claims Lattice-Based PQC Has “Fundamental Limitations” — The Arguments Are Old and the Conclusion Is Wrong
May 10, 2026 – A preprint posted to arXiv on May 6, 2026, titled “Fundamental Limitations of Post-Quantum Cryptographic Architectures,” has been making the rounds in security circles this week. The paper, authored by Jiho Jung, Donghwa Ji, Mingyu Lee, and Kabgyun Jeong, with affiliations including Seoul National University’s Team QST and the Korean National Police University, argues that calling lattice-based cryptography “post-quantum” is premature because its security rests on unproven computational assumptions rather than proven theoretical lower bounds.
The paper has prompted a reaction I’ve been hearing from multiple contacts: “If PQC isn’t actually quantum-safe, maybe we should wait for something better before migrating.”
That conclusion is exactly wrong. And it reveals a misunderstanding of what this paper actually says, what cryptographers have always known about PQC, and why the case for migrating now is stronger than ever.
What the Paper Claims
The authors examine the Learning with Errors (LWE) problem, the canonical hardness assumption behind much of lattice-based post-quantum cryptography. That includes the lineage of ML-KEM (FIPS 203, formerly CRYSTALS-Kyber) and overlaps with the broader security analysis of module-lattice schemes such as ML-DSA (FIPS 204, formerly CRYSTALS-Dilithium). The paper treats generic LWE as a proxy for lattice-based PQC more broadly, though it does not address the distinct mathematical assumptions behind FN-DSA (which is NTRU-lattice based), SLH-DSA (hash-based), or HQC (code-based). They argue across four domains that LWE’s security is not absolute:
Complexity theory. There is no mathematical proof that LWE problems lie outside BQP (the class of problems efficiently solvable by quantum computers). The security assumption could, in principle, be wrong.
Information theory. Using Shannon’s noisy-channel coding theorem and Fano’s inequality, the authors argue that the noise injected into LWE ciphertexts does not permanently destroy the secret (the sketch after this list shows where that noise enters). The information is obscured, not erased.
Quantum error correction. The paper draws a structural analogy between LWE noise and the displacement errors corrected by Gottesman-Kitaev-Preskill (GKP) codes, arguing that quantum error correction techniques could theoretically “filter out” cryptographic noise.
Quantum learning theory. The GKZ algorithm (Grilo, Kerenidis, and Zijlstra, 2019) can solve LWE in polynomial time given access to quantum superposition samples of the LWE problem.
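To make the object under discussion concrete, here is the standard search-LWE formulation in conventional notation (mine, not the paper’s): the attacker sees many noisy inner products with a secret vector and must recover that secret.

```latex
% Search-LWE in standard notation: recover the secret s from noisy linear
% equations. Without the error term e this is Gaussian elimination; with it,
% the best known classical and quantum attacks scale exponentially in n for
% the parameter regimes standardized in ML-KEM and ML-DSA.
\[
\mathbf{b} \;=\; \mathbf{A}\,\mathbf{s} + \mathbf{e} \pmod{q},
\qquad
\mathbf{A} \in \mathbb{Z}_q^{m \times n},\;
\mathbf{s} \in \mathbb{Z}_q^{n},\;
\mathbf{e} \leftarrow \chi^{m}\ (\text{small error}).
\]
```

The information-theoretic point is simply that the error e has bounded entropy, so the map from secret to ciphertext is not information-destroying in Shannon’s sense; the cryptographic claim has only ever been that inverting it is computationally hard.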
The paper’s conclusion: LWE-based cryptography is a “robust transitional alternative” but should not be considered unconditionally secure. The security stems from “transient physical bottlenecks rather than impenetrable theoretical boundaries.”
My Analysis
I want to be precise about what I think this paper gets right, what it gets wrong, and why the circulating “wait on PQC” interpretation is dangerous.
What the Paper Gets Right
The core observation is correct: lattice-based cryptography relies on computational hardness assumptions, not information-theoretic proofs of security. Nobody has proven that LWE is hard for quantum computers. The security of ML-KEM, ML-DSA, and every other lattice-based scheme rests on the assumption that no efficient quantum algorithm exists for these problems.
This is true. It has also been true since Oded Regev introduced LWE in 2005. Every lattice cryptographer knows this. NIST knows this. The NIST standardization documentation says this explicitly. The European ECCG’s Agreed Cryptographic Mechanisms v2.0 mandates hybrid deployment of lattice-based schemes with classical algorithms precisely because of this uncertainty. France’s ANSSI and Germany’s BSI have been saying this for years.
The paper is demolishing a position that no serious cryptographer holds. Nobody in the PQC community claims lattice-based cryptography offers “unconditional security” or that it is “fundamentally immune to quantum mechanics.” The term “post-quantum” means resistant to known quantum attacks, not provably immune to all possible quantum computation. This is a distinction the paper’s own text acknowledges but that its framing obscures.
What the Paper Gets Wrong
The paper invokes real ideas from quantum algorithms, complexity theory, information theory, error correction, and quantum learning. But the leap from those ideas to the claim that LWE-based cryptography has “fundamental” post-quantum limitations is much weaker than the title and conclusion imply.
The GKZ attack model assumes quantum sample access that no one can instantiate at cryptographic scale. The authors acknowledge that the GKZ algorithm — the most concrete quantum attack discussed — requires the adversary to prepare coherent quantum superpositions of LWE samples, an access model usually assumed to be provided by quantum random access memory (QRAM). They correctly identify this as a “profound physical bottleneck.” But the paper then suggests this bottleneck is being systematically dismantled by divide-and-conquer strategies and ε-net sample compression techniques, without providing evidence that these approaches work at cryptographically relevant parameter scales (ML-KEM-768, ML-KEM-1024). Scalable QRAM and coherent state preparation remain major unresolved bottlenecks in fault-tolerant quantum computing. Waving at theoretical optimizations is not the same as demonstrating a viable attack path.
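To make the access-model gap concrete, the quantum-sample setting assumed by GKZ-style learning results looks roughly like the state below (a schematic paraphrase in my notation, not the paper’s): each query hands the attacker a coherent superposition over all inputs, entangled with their noisy inner products with the secret.

```latex
% Schematic quantum LWE sample: a superposition over all a, each carrying its
% noisy label <a, s> + e. No classical transcript of TLS traffic or stored
% ciphertexts yields such a state; something holding the secret-dependent
% function would have to answer queries in superposition (QRAM-style access).
\[
|\psi_{\mathbf{s}}\rangle \;\approx\;
\frac{1}{\sqrt{q^{\,n}}}
\sum_{\mathbf{a} \in \mathbb{Z}_q^{n}}
|\mathbf{a}\rangle\,
\big|\langle \mathbf{a}, \mathbf{s}\rangle + e_{\mathbf{a}} \bmod q\big\rangle
\]
```

This is why the bottleneck the authors concede is not a mere engineering detail: harvested classical ciphertexts cannot be retroactively converted into quantum samples of this form.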
The GKP error correction analogy is a structural observation, not an attack. The most creative section of the paper argues that LWE noise is “structurally equivalent” to the displacement errors corrected by GKP bosonic codes. This is an interesting physics observation, but it does not constitute a cryptographic attack. The paper provides no resource estimates, no analysis of what GKP-based “decryption” would require for standardized parameter sets (ML-KEM-768, ML-KEM-1024), and no engagement with the practical reality that GKP codes themselves are still in early experimental stages. Structural analogy is not operational equivalence.
The NISQ/hybrid claims are speculative. The paper cites variational quantum algorithms for LWE on noisy intermediate-scale quantum (NISQ) devices (Zeng et al., 2025) as evidence that the threat is not confined to fault-tolerant quantum computing. These algorithms have been demonstrated only on toy-scale problems, orders of magnitude smaller than any standardized parameter set. Extrapolating from a proof-of-concept on a handful of qubits to a threat against ML-KEM is like extrapolating from factoring the number 15 on a quantum computer to breaking RSA-2048. The gap is astronomical.
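For a sense of scale (my arithmetic, not the paper’s): the celebrated quantum factoring demonstrations handle a 4-bit number, while RSA-2048 uses a 2048-bit modulus, and the resources Shor’s algorithm needs grow much faster than linearly in that bit length.

```latex
% The extrapolation gap in the factoring analogy, in plain numbers.
\[
N = 15 \;\Rightarrow\; \lceil \log_2 N \rceil = 4 \text{ bits}
\qquad\text{versus}\qquad
2048 \text{ bits for an RSA-2048 modulus}.
\]
```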
The paper does not engage with the concrete assumptions in the deployed standards. ML-KEM is based on Module-LWE; ML-DSA rests on Module-LWE and Module-SIS; FN-DSA is NTRU-lattice based; and SLH-DSA is hash-based. Treating generic LWE as a stand-in for the entire NIST PQC portfolio is too coarse. A paper claiming to assess the “fundamental limitations” of PQC architectures should engage with the specific mathematical problems underlying the actual deployed standards, not just the generic LWE problem.
What the Paper Does Not Say
Because the misreadings are already circulating, let me state explicitly what this paper does not claim:
It does not present a new quantum attack on LWE. The GKZ algorithm is from 2019. The information-theoretic arguments apply textbook results (Shannon’s 1948 noisy-channel coding theorem and Fano’s inequality) to a cryptographic context. There is no new cryptanalytic algorithm, concrete attack, or resource estimate against deployed PQC standards in this paper.
It does not demonstrate that current PQC standards are broken or breakable with near-term quantum hardware, nor does it show that ML-KEM, ML-DSA, FN-DSA, SLH-DSA, or HQC can be compromised at standardized parameter scales. The authors explicitly call lattice-based cryptography a “robust transitional alternative.”
It does not provide any timeline for when the theoretical vulnerabilities it identifies could become practical. There are no resource estimates, no hardware projections, no connection to the CRQC Quantum Capability Framework or any equivalent analytical structure.
It does not address hash-based signature schemes (SLH-DSA), code-based key encapsulation (HQC), or any non-lattice PQC approach. The title says “post-quantum cryptographic architectures” (plural), but the analysis covers only lattice-based schemes.
The “Wait for Something Better” Fallacy
Here is why the “maybe we should wait” reaction is not just wrong but actively harmful.
The entire PQC standardization process was designed for a world where hardness assumptions might turn out to be wrong. This is why NIST standardized multiple algorithmic families, not just lattice-based schemes. SLH-DSA’s security rests on hash functions, not lattice problems. HQC, the code-based backup now progressing through NIST’s process, relies on entirely different mathematical foundations. The system was built with algorithmic diversity and crypto-agility as explicit design goals.
If lattice-based assumptions were broken tomorrow (an event this paper does not come close to predicting), organizations with crypto-agile architectures could transition to hash-based and code-based alternatives. Organizations that “waited for something better” would still be running RSA and ECC, fully exposed to both the future threat from a cryptographically relevant quantum computer (CRQC) and the ongoing Harvest Now, Decrypt Later threat.
The irony is striking: a paper about the theoretical fragility of one cryptographic assumption is being used to justify continued reliance on cryptographic assumptions (RSA, ECC) that we know are broken by Shor’s algorithm. The “known bad” is being preferred over the “possibly imperfect” — which is not a rational security strategy by any definition.
The Real Lesson
If anything, this paper reinforces what I have been arguing for years: treat PQC migration as an ongoing discipline, not a one-time switch. Build crypto-agility into your architecture so you can swap algorithms when the cryptanalytic landscape evolves. Deploy hybrid implementations as the European ECCG recommends. Maintain awareness of the research frontier. And above all, start now, because the deadlines are already set — by regulators, by certification bodies, by supply chain partners, by procurement requirements, and increasingly by cyber-insurance scrutiny — regardless of whether any particular hardness assumption holds.
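To show what “hybrid” and “crypto-agile” mean at the code level, here is a minimal sketch of a hybrid key-establishment combiner, under stated assumptions: the X25519 and HKDF calls come from the widely used Python `cryptography` package; `mlkem_encapsulate` is a hypothetical placeholder for whatever ML-KEM implementation you deploy; and the concatenate-then-KDF combiner is one common construction, not the only standardized option.

```python
# Minimal sketch of a hybrid key-establishment combiner (illustrative only).
# X25519 and HKDF come from the `cryptography` package; `mlkem_encapsulate`
# is a hypothetical stand-in for your PQC library's ML-KEM-768 encapsulation.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def mlkem_encapsulate(mlkem_public_key: bytes) -> tuple[bytes, bytes]:
    """Placeholder: return (ciphertext, shared_secret) from ML-KEM-768.

    Swap in a real PQC library here; this function is the crypto-agility seam.
    """
    raise NotImplementedError("plug in a real ML-KEM implementation")


def hybrid_encapsulate(
    peer_x25519_pub: X25519PublicKey, peer_mlkem_pub: bytes
) -> tuple[bytes, bytes, bytes]:
    """Return (x25519_ephemeral_pub, mlkem_ciphertext, session_key).

    The session key depends on BOTH shared secrets, so it stays safe as long
    as at least one of the two underlying problems remains hard.
    """
    # Classical component: ephemeral X25519 exchange.
    eph = X25519PrivateKey.generate()
    ss_classical = eph.exchange(peer_x25519_pub)

    # Post-quantum component: ML-KEM encapsulation against the peer's PQ key.
    mlkem_ct, ss_pq = mlkem_encapsulate(peer_mlkem_pub)

    # Combine: concatenate both secrets and run them through a KDF.
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid-x25519-mlkem768-demo",
    ).derive(ss_classical + ss_pq)

    eph_pub_bytes = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    return eph_pub_bytes, mlkem_ct, session_key
```

The placeholder is the point: if a lattice assumption weakens, you swap one function behind that seam rather than redesigning the protocol; an organization still running only RSA and ECC has no seam to swap.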
The Jung et al. paper is a competent survey of known theoretical observations about lattice-based cryptography, repackaged with a provocative title. It will be cited by quantum-panic vendors selling alternative solutions and by procrastination enthusiasts looking for reasons to delay. It deserves neither role.
Quantum Upside & Quantum Risk - Handled
My company, Applied Quantum, helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.