Quantum Security & PQC

Coinbase Quantum Paper: What It Gets Right, Wrong, and Misses

21 Apr 2026 – Coinbase’s Independent Advisory Board on Quantum Computing and Blockchain published its inaugural 51-page position paper, representing the most comprehensive assessment of quantum threats to cryptocurrency from a major exchange to date. The paper, authored by Scott Aaronson (UT Austin), Dan Boneh (Stanford), Justin Drake (Ethereum Foundation), Sreeram Kannan (Eigen Labs and University of Washington), Yehuda Lindell (Coinbase and Bar-Ilan University), and Dahlia Malkhi (UC Santa Barbara), concludes that a fault-tolerant quantum computer will eventually be built and that blockchains must begin preparing now.

The board’s central position is that the cryptographic threat is “not imminent but is now clearly on the horizon,” and that the debate over exact timelines is “largely irrelevant” since migrations should be planned and prepared immediately. The paper notes that NIST recommends completing post-quantum migrations by 2035, and the board does not rule out the possibility that a cryptographically relevant quantum computer (CRQC) will exist by that date.

The paper covers five major areas: the current state of quantum computing hardware; post-quantum cryptography (PQC) standards; migration challenges for both consensus and execution layers; specific plans for Bitcoin, Ethereum, Solana, Algorand, Sui, and Aptos; and post-quantum security challenges beyond digital signatures.

On Bitcoin, the paper highlights that approximately 6.9 million BTC (other analyses, including separate Coinbase-affiliated research, have estimated ~6.5 million BTC using different dataset cutoffs; the order of magnitude is consistent) are currently held in UTXOs where the public key is visible on-chain, with about 1.7 million of those in legacy Pay-to-Public-Key (P2PK) outputs including early Satoshi-era mining rewards. It also flags that all Pay-to-Taproot (P2TR) outputs expose cleartext public keys on-chain.
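The exposure logic behind these numbers can be sketched as a simple classification: an output is attackable at rest only if its public key is already on-chain, either because the script type embeds it (P2PK, P2TR) or because address reuse revealed it on an earlier spend. This is a minimal illustrative sketch using common descriptor-style script names; it is not code from the paper.

```python
# Hypothetical sketch: classifying Bitcoin output types by whether the
# public key is exposed on-chain before the output is ever spent.
# Script-type names follow common descriptor conventions.

EXPOSES_PUBKEY_AT_REST = {
    "p2pk": True,     # raw public key in the output script (Satoshi-era)
    "p2pkh": False,   # only HASH160(pubkey) visible until first spend
    "p2sh": False,    # only a script hash visible until first spend
    "p2wpkh": False,  # only a key hash visible until first spend
    "p2wsh": False,   # only a script hash visible until first spend
    "p2tr": True,     # x-only public key sits in cleartext in the output
}

def quantum_exposed_at_rest(script_type: str, address_reused: bool = False) -> bool:
    """An output is attackable at rest if its pubkey is already on-chain,
    either by script design or because the address was reused after a spend."""
    return EXPOSES_PUBKEY_AT_REST.get(script_type, False) or address_reused

print(quantum_exposed_at_rest("p2tr"))         # True: Taproot exposes the key
print(quantum_exposed_at_rest("p2pkh"))        # False: hash-shielded until spent
print(quantum_exposed_at_rest("p2pkh", True))  # True: reuse revealed the key
```

The same check explains why pervasive address reuse at exchanges (discussed later in this piece) converts even hash-shielded output types into at-rest targets.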

For Ethereum, the paper identifies four distinct quantum vulnerabilities: ECDSA-signed externally owned account transactions, BLS-signed validator attestations, pairing-based ZK proof precompiles, and KZG commitments used for data availability sampling. The Ethereum community’s plan involves transitioning to hash-based signatures (leanXMSS for consensus, leanSPHINCS for execution) with SNARK-based signature aggregation.

The board recommends a “1-of-2” hybrid signing strategy for the execution layer (essentially Bitcoin’s BIP-360 approach) where blockchains deploy both classical and post-quantum key material now but only require post-quantum signatures when the threat materializes. For the consensus layer, it recommends PQ-signed checkpoints at regular intervals to cryptographically anchor chain history.
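The 1-of-2 policy can be captured in a few lines: an account registers both a classical and a post-quantum verification key up front, classical signatures are accepted until a quantum-threat flag activates, and only post-quantum signatures spend afterward. This is a minimal sketch of that policy logic; the function and field names are illustrative, not from any real implementation.

```python
# Minimal sketch of the "1-of-2" hybrid signing policy, assuming
# pluggable verifier callbacks for the two schemes.
from dataclasses import dataclass

@dataclass
class HybridAccount:
    ecdsa_pubkey: bytes   # classical key material (e.g. secp256k1)
    pq_pubkey: bytes      # post-quantum key material (e.g. ML-DSA or hash-based)

def verify_spend(acct: HybridAccount, msg: bytes, sig: bytes, scheme: str,
                 pq_flag_active: bool, verify_classical, verify_pq) -> bool:
    """1-of-2: either signature authorizes a spend before the flag;
    only the post-quantum signature authorizes one after it."""
    if scheme == "pq":
        return verify_pq(acct.pq_pubkey, msg, sig)
    if scheme == "classical" and not pq_flag_active:
        return verify_classical(acct.ecdsa_pubkey, msg, sig)
    return False  # classical signatures are rejected once the flag activates
```

The design choice worth noting: because the PQ key is committed before the threat materializes, flipping `pq_flag_active` requires no key rotation under adversarial conditions, only a consensus rule change.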

The paper explicitly dismisses quantum key distribution (QKD) as impractical and irrelevant for blockchain applications, and notes that Grover’s algorithm poses no practical threat to proof-of-work mining due to the enormous overhead of fault-tolerant quantum operations compared to optimized ASICs. More broadly, the board concludes that symmetric cryptography (AES-256, SHA-256) remains secure against quantum attackers, a position that aligns with the emerging consensus that Grover’s algorithm will likely never break AES in practice.
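The back-of-envelope arithmetic behind that symmetric-crypto conclusion is worth making explicit: Grover's quadratic speedup only halves the security exponent, and the remaining work is still astronomical even before error-correction overhead. The numbers below are the standard calculation, not figures from the paper; the 1 MHz logical iteration rate is an optimistic assumption.

```python
# Why Grover's algorithm does not endanger AES-256: the quadratic
# speedup halves the effective security exponent, and the surviving
# 2^128 serialized iterations remain far out of reach.

classical_keyspace_bits = 256                  # AES-256 key length
grover_effective_bits = classical_keyspace_bits / 2
print(grover_effective_bits)                   # 128.0

# Even at an optimistic 1 MHz logical Grover-iteration rate (assumed),
# 2^128 serial iterations take astronomically long:
iterations = 2 ** 128
seconds = iterations / 1e6
years = seconds / (3600 * 24 * 365)
print(f"{years:.2e} years")                    # on the order of 1e25 years
```

Shor's algorithm, by contrast, is exponentially faster than classical attacks on ECDSA, which is why the asymmetry between symmetric and public-key primitives is so stark.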

One of the paper’s most politically charged sections addresses abandoned assets – wallets whose owners cannot or will not migrate to post-quantum addresses. The board frames this as an unavoidable decision every blockchain community must face, presenting two options: a “flag day” after which exposed assets are revoked, or leaving them as permanent quantum honeypots. For Bitcoin specifically, it mentions the “Hourglass” spending rule that would limit spending from legacy P2PK outputs to 1 BTC per block, using Satoshi-era coins as a canary for when a large quantum computer comes online.


My Analysis: A Serious Paper That Underestimates How Fast the Ground Is Shifting

This is an important paper. When Scott Aaronson, Dan Boneh, and Justin Drake jointly tell the crypto industry to stop debating timelines and start migrating, that carries weight no vendor whitepaper or consultant report can match. The caliber of the advisory board, spanning quantum theory, applied cryptography, blockchain architecture, and consensus systems, gives this document a credibility that crypto’s quantum discussion has badly needed.

The paper’s central thesis, that timeline debates are irrelevant and preparation must begin now, is one I have been arguing for years. Reading their statement that “we believe that this debate on timelines is largely irrelevant (beyond that it is not imminent) since migrations should be planned for and prepared now” felt like a vindication of what PostQuantum.com has been saying since I launched the site. The deadlines are already set by regulators, insurers, and clients. The CRQC arrival date is a red herring.

That said, having spent 30 years in offensive security, crypto security (through Cryptosec), and now quantum security (through Applied Quantum and Secure Quantum), I see several places where this paper, despite its stellar authorship, either understates risks, misses angles, or presents an incomplete picture.

The Resource Estimates Are Moving Faster Than This Paper Acknowledges

The paper’s treatment of how many qubits it would take to break blockchain cryptography is surprisingly vague. It acknowledges that recent work has improved resource estimates “by perhaps two orders of magnitude” and that “at least another two orders of magnitude still remain between the scale that’s been demonstrated experimentally and the scale known to suffice.” But it never names specific numbers or engages with the trajectory of recent research.

This matters because the resource estimates for breaking the elliptic curve cryptography that protects Bitcoin and Ethereum have been dropping steadily. As one example, recent work has demonstrated that breaking 256-bit ECDLP, the specific problem underlying secp256k1 used by Bitcoin and Ethereum, may require fewer than 1,500 logical qubits and under 100 million Toffoli gates, potentially translatable to under half a million physical qubits on a superconducting architecture. As I have tracked, each new research cycle has reduced the resource bar, sometimes dramatically.
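To make the logical-to-physical translation concrete, here is the standard surface-code overhead model, under which each logical qubit costs roughly 2d² physical qubits at code distance d. The 1,500-logical-qubit figure is from the ECDLP estimates discussed above; the distances and the overhead model itself are illustrative assumptions on my part, not values from the Coinbase paper.

```python
# Rough surface-code translation of a logical-qubit estimate into
# physical qubits: ~2*d^2 (data + ancilla) per logical qubit at
# code distance d. Distances shown are illustrative assumptions.

def physical_qubits(logical: int, distance: int) -> int:
    return logical * 2 * distance ** 2

for d in (12, 21, 27):
    print(f"d={d}: {physical_qubits(1500, d):,} physical qubits")
# d=12 (requires very good physical error rates) lands under half a
# million physical qubits; conservative distances push into the millions.
```

The point is not any single number but the sensitivity: a modest improvement in physical error rates lowers the required distance and cuts the physical-qubit bill quadratically, which is exactly why these estimates keep falling.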

And there is good reason to expect this trend to continue. Compared to RSA factoring which has received significantly more published attention, quantum algorithms for ECDLP are less mature and likely further from optimal. The research community has historically focused more on factoring than on elliptic curve discrete logarithms. As more attention shifts to ECDLP, I expect further algorithmic improvements that bring the resource requirements down even more. The ECDLP resource estimate trajectory is one of the most important leading indicators of the quantum threat to crypto, and the paper should have engaged with it more directly rather than leaving readers with a vague “two orders of magnitude” framing.

The Taproot Blind Spot Deserves More Alarm

The paper correctly identifies that all P2TR (Pay-to-Taproot) UTXOs expose cleartext public keys on-chain and are therefore “currently quantum vulnerable.” But it buries this in the Bitcoin section without giving it the emphasis it deserves.

Taproot is not some legacy artifact. It was introduced in November 2021 as Bitcoin’s newest and most advanced transaction type. As I have analyzed in detail, P2TR handled over 21% of all Bitcoin transactions in 2025 and moved approximately 16.8 million BTC. The Bitcoin community actively promoted Taproot adoption for its privacy and scripting benefits, and in doing so, systematically increased the network’s quantum attack surface.
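The structural reason every P2TR output is exposed at rest is visible in the script layout itself: per BIP-341, a Taproot scriptPubKey is simply OP_1 followed by a 32-byte x-only public key, so the key an attacker would feed to Shor's algorithm sits in cleartext in the UTXO set. The helper below is an illustrative sketch of that check, not production parsing code.

```python
# Sketch: detecting a P2TR output and extracting its exposed key.
# Per BIP-341, the scriptPubKey is OP_1 (0x51) + push-32 + x-only key.

OP_1 = 0x51

def extract_taproot_key(script_pubkey: bytes):
    """Return the x-only public key if this is a P2TR output, else None."""
    if (len(script_pubkey) == 34
            and script_pubkey[0] == OP_1
            and script_pubkey[1] == 32):
        return script_pubkey[2:]   # 32-byte x-only key, fully visible on-chain
    return None

# A synthetic P2TR scriptPubKey: OP_1, PUSH32, then the key bytes
fake_key = bytes(range(32))
script = bytes([OP_1, 32]) + fake_key
assert extract_taproot_key(script) == fake_key  # the key is right there
```

Contrast this with P2PKH or P2WPKH, where the script carries only a hash of the key and the key itself appears on-chain only at spend time.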

This is a pattern I keep seeing: blockchain developers optimizing for today’s threat model while inadvertently expanding quantum vulnerability. The paper should have been louder about this. Every P2TR UTXO created today is a quantum-vulnerable asset that will eventually need migration – and unlike old P2PK outputs, these represent active, sophisticated users who chose the “latest and greatest” transaction format.

(A minor but telling drafting error: the paper describes P2TR as containing an “onchain cleartext ECDSA key,” when Taproot actually uses Schnorr signatures with x-only public keys. The quantum vulnerability is identical (both rely on the same secp256k1 curve) but getting the signature scheme wrong on Bitcoin’s newest transaction type suggests the paper was drafted at some distance from Bitcoin’s operational details.)

On-Spend Attacks: The Paper’s Reassurance May Not Age Well

The paper mentions that a quantum adversary could theoretically break a key during the several minutes a transaction spends in the mempool, but then reassures readers that “a quantum computer that can break a key in this short amount of time seems further away than one that can do so in days or weeks.”

This assessment deserves more scrutiny. If resource estimates continue their downward trajectory – and I believe they will, since ECDLP optimization is less mature than RSA factoring optimization – on-spend attacks against live Bitcoin transactions could become feasible with smaller quantum computers than the paper implies. Recent research suggests that fast-clock architectures (superconducting, silicon, photonic qubits) could potentially execute Shor’s algorithm for ECDLP within timeframes that overlap with Bitcoin’s 10-minute average block interval.

The paper does not address the critical distinction between “fast-clock” and “slow-clock” quantum architectures at all – a distinction that determines whether on-spend attacks are feasible or only at-rest attacks are. Slow-clock systems (neutral atoms, trapped ions) would take hours or days per key derivation, making them useful only against stationary targets with exposed public keys. Fast-clock systems could potentially attack transactions in flight. This determines whether blockchain users are at risk only for holding assets or also for spending them.
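The arithmetic behind the fast-clock/slow-clock distinction is simple. Taking the ~10⁸-Toffoli-gate ECDLP circuit discussed earlier and assuming runtime is dominated by serialized logical operations at the machine's cycle rate, the architectures diverge by three orders of magnitude. The logical operation rates below are stylized assumptions for illustration, not measured figures.

```python
# Illustrative runtime arithmetic for fast-clock vs slow-clock
# architectures running the ~1e8-Toffoli ECDLP circuit, assuming
# serialized logical operations at the stated (assumed) rates.
toffoli_gates = 1e8

rates_hz = {
    "superconducting (fast clock)": 1e6,  # ~1 MHz logical ops (assumed)
    "trapped ion (slow clock)":     1e3,  # ~1 kHz logical ops (assumed)
}

for arch, rate in rates_hz.items():
    minutes = toffoli_gates / rate / 60
    print(f"{arch}: ~{minutes:,.0f} minutes per key")
# Fast clock: ~2 minutes, inside a Bitcoin mempool window.
# Slow clock: ~1,667 minutes (over a day), useful only against
# stationary targets with exposed public keys.
```

Under these assumptions, the same circuit that enables only at-rest attacks on one architecture enables on-spend attacks on another, which is why the distinction deserved explicit treatment in the paper.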

Missing: The Intelligence Motivation

The paper makes an interesting argument that quantum simulation is “the primary driver” of quantum computing development, and that “if quantum simulation does not materialize soon, the timeline for a cryptographic threat might extend significantly as (non-government) funding dries up.” It even goes so far as to suggest that “breaking cryptography alone may not necessarily financially justify the investment in constructing a quantum computer.”

This is where the paper’s academic perspective shows its limits. The analysis largely ignores state-funded intelligence motivations for building CRQCs. Breaking cryptography is not just about cryptocurrency theft – it is about intelligence collection, sanctions enforcement, and strategic disruption. As I have detailed in my China’s Quantum Ambition series, state actors are investing in quantum computing with national security objectives that are independent of commercial returns from quantum simulation.

The paper does acknowledge that “one needs to allow for the possibility that, once cryptographically relevant quantum computing is near, further advances toward that goal might stop being published openly.” But this is presented almost as an afterthought rather than a central planning assumption. For the blockchain industry, which prides itself on trustlessness, assuming that the most significant quantum threat will arrive announced and published seems naive.

The “Don’t Weaken Current Security” Principle Is Exactly Right

One of the paper’s strongest contributions is its articulation of Property P1 for execution layer migration: “The transition does not compromise our current security posture, in the sense that any attacker who can break the new scheme can already break the existing scheme.” This is a principle I have been advocating through the PQC Migration Framework – that PQC migration must not introduce new vulnerabilities in the process of addressing future ones.

The paper’s concern about switching from ECDSA to ML-DSA (formerly CRYSTALS-Dilithium) is well-founded. Lattice-based schemes have been studied for decades, but our operational confidence in them is not comparable to ECDSA. The SIKE debacle, where a leading NIST candidate was completely broken by a classical attack after surviving five years of competition, is a powerful reminder that newer is not always better.

The hybrid 1-of-2 approach the board recommends strikes the right balance. Deploy both key types now, use classical signatures while they remain secure, and switch to PQ signatures when the threat materializes. This is essentially what crypto-agility looks like in practice for blockchain systems.

It is worth noting the sharp asymmetry the paper correctly identifies but could have emphasized more: the symmetric primitives that blockchains rely on are essentially quantum-safe because Grover’s quadratic speedup is consumed by error correction overhead. But the public-key primitives (ECDSA, BLS, KZG) face an existential Shor’s algorithm threat. This asymmetry is why blockchains are simultaneously more resilient (their hash-linked structure survives quantum computing) and more vulnerable (their signature infrastructure does not) than many people appreciate.

Where Does Trust Now, Forge Later Fit?

The paper thoroughly discusses Harvest Now, Decrypt Later (HNDL) in the context of TLS and public key encryption, correctly noting that for blockchain transactions, short-term confidentiality may not be a critical concern. What about the signature analog: Trust Now, Forge Later (TNFL)?

I want to be precise here. In traditional PKI – document signing, code signing, certificate authorities – TNFL is a clear and present danger: signatures trusted today can be forged tomorrow by anyone who derives the private key. But blockchain’s hash-chaining structure provides a natural defense that traditional signed documents lack. You cannot insert a forged transaction into a past block without breaking the hash chain, and the paper’s checkpoint-based strategy for the consensus layer correctly leverages this property.
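The hash-chaining defense is easy to demonstrate: rewriting any past payload changes that block's hash, which breaks every subsequent block's back-link. This is a generic minimal sketch of the property, not any specific chain's block format.

```python
# Why hash-linking blunts Trust Now, Forge Later: a forged past
# transaction invalidates every later block's prev_hash link.
import hashlib

GENESIS_PREV = "00" * 32

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], GENESIS_PREV
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"prev": prev, "payload": p, "hash": h})
        prev = h
    return chain

def valid(chain) -> bool:
    prev = GENESIS_PREV
    for b in chain:
        if b["prev"] != prev or block_hash(b["prev"], b["payload"]) != b["hash"]:
            return False
        prev = b["hash"]
    return True

chain = build_chain(["tx: A->B 1", "tx: B->C 2", "tx: C->D 3"])
assert valid(chain)
chain[1]["payload"] = "tx: B->attacker 2"  # forge a past transaction...
assert not valid(chain)                    # ...and every later link breaks
```

This is precisely the property the paper's PQ-signed checkpoints exploit: anchoring one recent hash with a quantum-safe signature transitively protects everything behind it.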

That said, there are important edge cases where TNFL-like threats do apply to the blockchain ecosystem, and the paper does not explore them. Cross-chain bridges that rely on committee signatures are one example: if bridge relayer keys are derived from quantum-vulnerable schemes, a future attacker could forge bridge messages. Light client protocols that verify block headers via signatures rather than full chain validation are another. Off-chain signed governance proposals, multisig authorizations, and oracle attestations that are verified against known public keys also carry TNFL risk. These are not core blockchain threats, but as the ecosystem builds increasingly complex layers of signed messages above and around the base chain, the TNFL attack surface grows in ways that pure on-chain analysis misses.

The Dormant Assets Dilemma: Well-Framed, but Not New

The paper’s treatment of abandoned assets is useful and well-articulated for its target audience – crypto executives and developers who may not have followed the Bitcoin developer mailing list debates closely. The framing of the fundamental dilemma – flag day versus permanent honeypot – is clear and honest. The observation that market forces would likely favor the fork that revokes exposed tokens (because it reduces supply) is sharp game theory.

But readers should know that this is not new ground. The flag-day-versus-honeypot debate has been active on the Bitcoin developer mailing list for years, with proposals like QRAMP (set a deadline, burn what does not migrate) generating heated discussion about immutability, property rights, and the philosophical foundations of Bitcoin. The Hourglass proposal, originally put forward by developer Hunter Beast as a BIP, deserves attention as a creative compromise: allow P2PK spending but limit it to 1 BTC per block, throttling a potential quantum-driven supply shock from catastrophic to manageable. Under Hourglass V2, it would take over 32 years to drain all P2PK coins, compared to hours without any constraint.
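The Hourglass throttle arithmetic cited above checks out under Bitcoin's nominal 10-minute block target, as a quick calculation shows. The ~1.7 million BTC figure is the exposed-P2PK estimate from earlier in this piece.

```python
# Checking the Hourglass drain-time claim: 1 BTC per block against
# ~1.7M BTC in exposed P2PK outputs, at Bitcoin's 10-minute target.
p2pk_btc = 1_700_000        # approx. BTC in exposed P2PK outputs
btc_per_block = 1
blocks_per_day = 24 * 6     # 144 blocks at a 10-minute target
btc_per_day = btc_per_block * blocks_per_day  # 144 BTC/day

years_to_drain = p2pk_btc / (btc_per_day * 365)
print(f"{years_to_drain:.1f} years")  # ~32.3 years, matching the >32-year claim
```

Without the constraint, the same coins could move in a matter of hours, which is the supply-shock scenario the rule is designed to prevent.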

What the paper’s academic authors do not fully reckon with is the market signal dimension of these proposals – and this is where the paper’s biggest blind spot becomes visible.

The Blind Spot: Market Infrastructure, Regulation, and the First Satoshi Coin

The Coinbase advisory board is composed of brilliant academics and researchers. But reading the paper, I was struck by what it does not discuss: the institutional, regulatory, and market infrastructure implications that a practitioner – a CISO, a custodian, a regulated exchange – would immediately recognize.

Consider the Hourglass scenario. The paper describes it as a technical mitigation: limit P2PK spending to 1 BTC per block, use Satoshi’s coins as a canary. Technically sound. But imagine the actual event. The first Satoshi-era coin moves. Even at 1 BTC. The headline writes itself. Twitter explodes. Bloomberg runs it above the fold. The market does not care that the Hourglass rule limits the flow to 144 BTC per day. What the market hears is: ECDSA is broken. Satoshi’s coins are moving. Every Bitcoin address with an exposed public key is now a target.

The price impact of that information signal would dwarf any concern about the 1 BTC actually moved. This is not a supply shock problem. It is a confidence crisis. And the paper’s framing, focused on controlling the rate of coin release, misses that the damage is in the signal, not the flow.

Now extend this to regulated entities. Coinbase itself is a custodian. Major exchanges hold significant amounts of Bitcoin in addresses with exposed public keys due to pervasive address reuse. At what point does a regulated financial institution have a fiduciary obligation to migrate customer assets to quantum-resistant addresses? At what point does a securities regulator require disclosure of quantum vulnerability as a material risk? The paper treats the dormant asset question as a community governance issue, but for any exchange or custodian operating under financial regulation, it is a compliance question that is already relevant today.

And the insurance dimension is entirely absent. As quantum risk becomes quantifiable, and papers like this one make it more quantifiable, insurers will start asking whether custodians have taken reasonable steps to protect against known cryptographic vulnerabilities. The answer “we are waiting for the Bitcoin community to decide” may not satisfy an insurance underwriter or a regulator examining a post-breach failure.

This is perhaps the most important gap in the paper: it was written by academics and cryptographers, and it reads like it. The technical analysis is excellent. But the paper does not think like a CISO, a custodian risk officer, or a financial regulator – and those are exactly the people who need to act on its recommendations. As someone who has served as interim CISO and CTO for Fortune Global 500 organizations and now advises on quantum security, I see this disconnect between academic assessment and operational reality as the paper’s most consequential limitation.

The Bottom Line for Blockchain

The Coinbase advisory board has produced the most authoritative assessment of quantum threats to blockchain to date. Their core recommendation – stop debating timelines, start building PQC readiness now – is correct and overdue. The paper’s technical analysis of consensus layer migration, execution layer strategies, and per-chain migration plans provides a concrete roadmap that the industry can act on.

But the paper’s measured academic tone risks understating the urgency. The resource estimates for breaking blockchain cryptography are improving faster than the paper acknowledges. On-spend attacks against live Bitcoin transactions may become feasible with smaller quantum computers than previously assumed. And the assumption that the quantum threat will arrive via published research rather than classified state programs is a planning vulnerability the crypto industry cannot afford.

The Coinbase paper’s most important contribution may be the simplest: it takes the quantum threat to crypto out of the realm of speculation and places it firmly in the category of strategic planning. Now the industry needs to act on it, before the window for orderly migration closes.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.