IonQ Publishes Complete Fault-Tolerant Blueprint for Trapped Ions: “The Walking Cat Architecture”
22 Apr 2026 — IonQ has published a 110-page preprint detailing what it calls the “walking cat architecture”: a complete, end-to-end blueprint for a fault-tolerant quantum computer (FTQC) built on trapped ions. The paper, titled “Fault-Tolerant Quantum Computing with Trapped Ions: The Walking Cat Architecture,” was posted to arXiv on April 21, 2026, with 18 authors, all from IonQ.
The headline numbers are striking. IonQ claims its densest configuration delivers 110 logical qubits capable of executing roughly one million T gates per day using only 2,514 physical qubits, a count that includes every ancilla and routing qubit, plus all qubits devoted to error correction, leakage and loss handling, magic state factories, and qubit reservoirs. With approximately 10,000 physical qubits, the team estimates it could run a quantum Hamiltonian simulation of a Heisenberg model on 100 sites within about one month, a computation they argue would be classically intractable and beyond the reach of any NISQ machine.
The paper also compiles Shor’s algorithm for integer factorization, estimating that roughly 13,000 physical qubits could factor 30-bit numbers in less than a day. The paper specifies the target: 1,071,514,531 = 32,749 × 32,719. The authors note, with the kind of dry understatement that 110-page architecture papers permit, that the current record for Shor’s algorithm on actual quantum hardware remains the factorization of 15 = 3 × 5.
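The stated target is easy to sanity-check. A few lines of Python (purely illustrative; the semiprime and its factors come from the paper) confirm the product, the primality of both factors, and the 30-bit size:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division; fine for numbers this small."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

N = 1_071_514_531        # the paper's stated Shor target
p, q = 32_749, 32_719    # the claimed prime factors

assert p * q == N                    # the product checks out
assert is_prime(p) and is_prime(q)   # both factors are prime
assert N.bit_length() == 30          # a 30-bit semiprime
print(f"{N} = {p} x {q} ({N.bit_length()}-bit)")
```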
The architecture is entirely theoretical. None of this has been built or experimentally validated at the proposed scale.
My Analysis
This paper is significant not because it promises a revolution (that remains to be earned through engineering) but because of what it represents in the accelerating race to design viable fault-tolerant architectures. Let me unpack what makes this interesting, where it fits among competing proposals, and where healthy skepticism is warranted.
Why qLDPC Codes on Trapped Ions Make Structural Sense
The most consequential design choice in this paper is one that sounds dry but has profound implications: the walking cat architecture is built entirely on quantum low-density parity-check (qLDPC) codes, abandoning the surface code that has dominated fault-tolerant architectures for nearly two decades.
Surface codes encode one logical qubit per patch of roughly d² physical qubits. The IonQ paper states that at distance 9, encoding 22 logical qubits with surface codes would require 1,782 physical qubits (22 × 9² data qubits, on the paper’s counting convention). Their new [[102, 22, 9]] code does it with 102. On that comparison convention, that is roughly a 17× gain in encoding density.
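The density gain can be reproduced directly on the paper’s data-qubit counting convention (a back-of-envelope check, not the paper’s full accounting):

```python
d = 9   # code distance
k = 22  # logical qubits

surface_qubits = k * d * d  # one d x d patch of data qubits per logical qubit
qldpc_qubits = 102          # the [[102, 22, 9]] code encodes all 22 at once

print(surface_qubits)                           # 1782
print(round(surface_qubits / qldpc_qubits, 1))  # 17.5, i.e. roughly 17x denser
```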
The reason this works for trapped ions, and would be far harder for superconducting qubits, is ion transport. qLDPC codes require non-local qubit connectivity. Superconducting architectures pay for this with physical long-range couplers or complex chip designs that introduce new engineering challenges. Trapped ions get this more naturally through QCCD-style ion transport, though not for free: the paper explicitly budgets transport latency, transport noise, and routing overhead. The QCCD approach has been validated experimentally on trapped-ion systems with up to 98 qubits, though scaling to thousands represents a fundamentally different engineering challenge.
This structural advantage is real and worth underscoring. When Iceberg Quantum’s Pinnacle architecture proposed breaking RSA-2048 with fewer than 100,000 physical qubits using qLDPC codes earlier this year, the proposal required degree-ten, non-planar connectivity, something that has never been demonstrated on superconducting hardware. When the Oratomic/Caltech team proposed Shor’s algorithm with 10,000 neutral atom qubits, they leveraged the natural reconfigurability of atom arrays for the same purpose. IonQ’s contribution is to show that trapped ions, with their established ion transport capabilities, provide yet another natural hardware match for these high-rate codes.
The Walking Cat: What’s Novel in the Design
The architecture’s name comes from its reliance on physical “cat states” (entangled multi-qubit states named after Schrödinger’s cat) which are prepared in dedicated factories and then physically transported (“walked”) through the chip to wherever they are needed for logical operations.
Cat states have been known in fault-tolerant quantum computing since Shor’s original 1996 paper, but they have rarely been used in practical architecture proposals because preparing them requires high-fidelity operations. If gate fidelities are too low, the post-selection process that filters out bad cat states rejects almost everything, and the factory becomes useless. IonQ argues that their demonstrated 99.99% two-qubit gate fidelity, a world record set in October 2025, puts them in the regime where cat-state production becomes practical.
Before diving into specific components, it is worth noting the design philosophy. The IonQ team borrows three principles from classical computer architecture textbooks (Hierarchy, Modularity, Regularity) and adds a fourth of their own: Simplicity. They call this the HMRS framework (and helpfully suggest you pronounce it “hammers”). Most architecture papers optimize for the lowest theoretical resource count. The walking cat paper explicitly takes the opposite stance: the authors state they “prioritize simplicity over hypothetical performance to facilitate the fabrication of the actual machine, recognizing the integration complexity of a system at this scale.” This is a deliberate bet that a less efficient design you can actually build beats a more efficient one you cannot.
That philosophy shows up concretely in several design choices:
The single-code architecture. One configuration uses a single qLDPC code ([[70, 6, 9]]) for both memory and magic state production. Every block on the chip is identical. Any block can dynamically switch roles between memory and magic factory during computation, with the allocation varying from all-memory to all-factory at any point to match the computational phase. Need more T gates for a rotation-heavy circuit stage? Convert memory blocks to magic factories. Entering a memory-heavy phase? Convert them back. This runtime flexibility means the compiler and the hardware can co-adapt, an architectural feature that fixed-function designs like surface-code lattice surgery cannot easily replicate.
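A toy sketch of what such runtime role-switching could look like from the compiler’s side. This is entirely hypothetical: the names and the demand heuristic are mine, not the paper’s; the point is only that identical blocks make reallocation a bookkeeping operation.

```python
from dataclasses import dataclass

@dataclass
class Block:
    """One [[70, 6, 9]] code block; every block is identical hardware."""
    role: str = "memory"  # "memory" or "factory"

def rebalance(blocks, pending_t_gates, t_per_factory_cycle=1):
    """Convert just enough memory blocks into magic state factories to
    cover the T-gate demand of the upcoming circuit phase."""
    needed = min(len(blocks), -(-pending_t_gates // t_per_factory_cycle))
    for i, b in enumerate(blocks):
        b.role = "factory" if i < needed else "memory"
    return sum(b.role == "factory" for b in blocks)

blocks = [Block() for _ in range(8)]
assert rebalance(blocks, pending_t_gates=3) == 3  # rotation-heavy phase
assert rebalance(blocks, pending_t_gates=0) == 0  # memory-heavy phase
```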
Clifford gates for free. For the [[70, 6, 9]] code, the architecture provides frame-tracking for the entire 6-qubit Clifford group, meaning any Clifford gate on the six logical qubits within a memory block is implemented purely in software, with zero physical operations and zero time cost. For comparison, surface code architectures generally only allow frame-tracking of Pauli operations. This is a substantial operational advantage: Clifford gates make up the majority of gates in most quantum algorithms, and implementing them with zero overhead sharply cuts the fraction of computation that depends on expensive magic states.
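To make “implemented purely in software” concrete, here is a minimal single-qubit Pauli-frame tracker of the kind surface-code stacks already use (signs omitted for brevity). The walking cat design extends this idea from Pauli frames to the full Clifford group on each 6-qubit block; the rules below are standard Clifford conjugation relations, not code from the paper.

```python
# How each Clifford gate conjugates the tracked Pauli frame (phases ignored):
# H maps X<->Z and fixes Y; S maps X<->Y and fixes Z.
H_RULE = {"I": "I", "X": "Z", "Y": "Y", "Z": "X"}
S_RULE = {"I": "I", "X": "Y", "Y": "X", "Z": "Z"}

def apply_clifford(frame: str, gate: str) -> str:
    """Instead of physically applying a correction, update the recorded
    frame in classical memory: zero physical operations, zero time cost."""
    rule = {"H": H_RULE, "S": S_RULE}[gate]
    return rule[frame]

frame = "X"                       # a pending (virtual) X correction
frame = apply_clifford(frame, "H")
assert frame == "Z"               # H maps X -> Z, tracked purely in software
```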
New high-rate codes with a structural impossibility proof. IonQ introduces two new qLDPC codes: a [[70, 6, 9]] code and a [[102, 22, 9]] code. The [[102, 22, 9]] code is particularly notable: it encodes 22 logical qubits into 102 physical qubits at distance 9. But perhaps more interesting than the code itself is Proposition 6 in the appendix, which proves that any syndrome-extraction Tanner graph with check degree 8 or higher cannot be biplanar. This is a mathematical impossibility result, not an engineering limitation; it means the best-performing codes IonQ found for their architecture are structurally inaccessible to the biplanar architectures assumed in superconducting qLDPC proposals like IBM’s bivariate bicycle codes. Trapped ions and neutral atoms, with their ability to create non-planar connectivity through physical qubit transport, can access a strictly larger family of error-correcting codes.
Comprehensive noise model. Unlike many resource estimations that model only circuit-level depolarizing noise, IonQ’s “moving-qubit model” includes ion loss (qubits physically disappearing from the trap) and leakage (qubits escaping the computational subspace). They include dedicated qubit factories and local reservoirs to replace lost ions, and they simulate the performance impact of these realistic noise sources. The leakage correction gadget uses a teleportation-based approach adapted from recent work on both trapped-ion and neutral-atom systems. The paper even calculates ion loss rates from first-principles collision physics, estimating hydrogen molecule collision rates in cryogenic traps at 5 Kelvin to arrive at target loss rates of one event per ion every 33 minutes. This is the kind of engineering-level detail that separates a buildable blueprint from a theoretical sketch.
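That loss target implies a concrete replacement budget. A quick back-of-envelope (my arithmetic, combining the paper’s 2,514-qubit configuration with its 33-minute per-ion loss interval):

```python
ions = 2_514
seconds_per_loss_per_ion = 33 * 60  # one loss event per ion every 33 minutes

fleet_loss_rate = ions / seconds_per_loss_per_ion  # expected losses per second
print(round(fleet_loss_rate, 2))      # ~1.27 ions lost per second, chip-wide
print(round(fleet_loss_rate * 3600))  # ~4,571 replacements needed per hour
```

At that rate, ion replacement is a continuous background process, not a rare fault, which is presumably why the architecture budgets dedicated qubit factories and local reservoirs.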
A decoder that was actually stress-tested. The paper provides a streaming beam-search decoder and does something I wish every architecture paper did: it runs the decoder over one million consecutive error-correction cycles and reports the full runtime distribution. The mean reaction time (the interval between the final measurement and the decoded result) is 0.35 milliseconds for the [[70, 6, 9]] code and 0.85 milliseconds for the [[102, 22, 9]] code. Even at the 99.9th percentile, reaction times stay under 1.7 milliseconds. This matters because a decoder that is fast on average but occasionally stalls can catastrophically delay the entire computation.
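Why report the full distribution rather than just the mean? A synthetic illustration (invented numbers, not the paper’s data): two decoders with nearly identical average latency can behave very differently once rare stalls are accounted for.

```python
# Decoder A: a constant 0.5 ms per decode.
mean_a = 0.5

# Decoder B: 0.4 ms usually, but stalls for 100 ms once per thousand cycles.
mean_b = 0.999 * 0.4 + 0.001 * 100.0  # mean ~0.4996 ms, indistinguishable from A
worst_case_b = 100.0                  # one stall blocks all dependent logic

print(round(mean_b, 4))        # 0.4996 -- the means look interchangeable
print(worst_case_b / mean_a)   # 200.0  -- but B's tail is 200x the mean
```

Hence the value of a million-cycle stress test that reports percentiles, not averages.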
The Numbers in Context
The physical qubit counts IonQ reports are strikingly low. Here is how they compare to recent architecture proposals, holding roughly comparable logical qubit counts:
For approximately 100–200 logical qubits at a million-plus T gates per day, the walking cat architecture requires 2,500 to 13,000 physical qubits depending on configuration. For comparison, surface-code architectures for comparable computational tasks typically require hundreds of thousands to millions of physical qubits.
However, direct comparison across these proposals requires caution. The underlying hardware assumptions differ in ways that matter. IonQ’s baseline estimates use p = 10⁻⁴ for two-qubit gate errors in their moving-qubit noise model (matching their demonstrated fidelity on small devices), with 200-microsecond operation cycles and 10-microsecond transport steps. The Pinnacle architecture assumes 10⁻³ error rates with 1-microsecond code cycles, roughly 200× faster. Oratomic’s neutral atom proposal assumes 10⁻³ error rates with millisecond-scale cycles.
The trapped-ion speed disadvantage matters more than the qubit counts suggest. IonQ estimates that running Shor’s algorithm to factor a 30-bit integer takes about 23 hours. This is a remarkable proof-of-concept demonstration target, but it also exposes the slow-clock nature of the platform. Scaling to cryptographically relevant key sizes like RSA-2048 or ECC-256 would require vastly more qubits and far longer runtimes that the paper does not directly estimate for this architecture, though it references the broader literature suggesting roughly 100,000 physical qubits could suffice with modern codes and algorithms.
Why Fault Tolerance Changes the Game: The NISQ Comparison
One of the most compelling sections in the paper, buried 88 pages in, is the head-to-head comparison between the walking cat architecture and NISQ approaches for the same Hamiltonian simulation problem. The results illustrate why fault tolerance is not merely an improvement over NISQ but a qualitative shift.
The strongest benchmarked NISQ Hamiltonian simulations to date use 2D grid connectivity, fewer than roughly 100 qubits, observable errors around 3–5%, and at most about 40 second-order Trotter steps. The walking cat architecture targets a 100-site Heisenberg model on degree-seven random graphs (higher connectivity, more qubits, and 250 second-order Trotter steps) at 10⁻³ accuracy. That is roughly an order of magnitude deeper and two orders of magnitude more precise than NISQ hardware can currently manage.
The authors show that because the NISQ sampling overhead (Γ²) scales exponentially with circuit depth and qubit count, running this same problem on NISQ hardware with 10,000 qubits would require what they call an “astronomical” time-to-solution, even assuming world-leading 99.99% gate fidelity, 50-nanosecond gate layers, and 100 parallel replicas. The exponential penalty of uncorrected errors makes the calculation impossible in any practical sense. On the walking cat architecture with the same 10,000 physical qubits, the estimated runtime is about one month.
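The exponential penalty is easy to reproduce at back-of-envelope level (the gate-count assumptions below are mine, not the paper’s exact model): with two-qubit fidelity f, a circuit of N such gates survives uncorrupted with probability roughly f^N, and the sampling overhead of error mitigation grows at least as fast as 1/f^(2N).

```python
fidelity = 0.9999                    # world-leading two-qubit gate fidelity
sites, degree, trotter_steps = 100, 7, 250
gates_per_step = sites * degree // 2       # ~350 two-qubit gates per step
n_gates = gates_per_step * trotter_steps   # 87,500 two-qubit gates total

circuit_fidelity = fidelity ** n_gates       # ~1.6e-4: nearly every shot corrupted
sampling_overhead = 1 / circuit_fidelity**2  # ~4e7 shots per useful sample

print(f"{circuit_fidelity:.1e}, {sampling_overhead:.1e}")
```

Every additional Trotter step or qubit multiplies N, so the overhead compounds exponentially; that is the source of the “astronomical” time-to-solution the authors describe.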
This comparison captures the central argument for fault tolerance more clearly than any abstract discussion can: error correction does not just improve performance; it makes otherwise impossible computations tractable.
What This Means for the Path to a CRQC
The paper explicitly situates itself in the broader trajectory toward a cryptographically relevant quantum computer (CRQC). It cites Gidney’s 2025 estimate of under a million qubits for RSA-2048 using surface codes, the Pinnacle architecture’s reduction to 100,000 physical qubits using qLDPC codes, and Oratomic’s 10,000-qubit estimate for neutral atoms. The implication is clear: if IonQ can build a walking cat architecture with thousands of qubits and demonstrate it works, the path to tens of thousands, and eventually the CRQC threshold, becomes a matter of scaling, not fundamental redesign.
This is precisely the kind of development I track in my CRQC Quantum Capability Framework. The paper touches multiple capability dimensions: quantum error correction (B.1), below-threshold operation (B.3), magic state production (C.2), decoder performance (D.2), continuous operation (D.3), and engineering scale and manufacturability (E.1). It provides detailed designs and simulations for most of these, an unusually comprehensive treatment for an architecture paper.
The Claim-Evidence Gap
The gap between this paper and a working machine is substantial and should not be understated.
Scale. The trapped-ion community has demonstrated QCCD operation on devices with up to 98 qubits. The walking cat architecture’s simplest useful configuration requires approximately 2,500 qubits. That is a 25× scale-up, in a domain where every incremental qubit introduces new noise sources, manufacturing defects, and control challenges.
Simulation vs. reality. The paper’s simulations, while thorough, are performed within its own noise model. There is no experimental demonstration of any component of the walking cat architecture: no cat factory, no magic factory, no memory block operating with these codes at these fidelities. The decoder, while impressively stress-tested in simulation, has not been implemented on actual hardware with real-time classical processing constraints.
The “near-term” claim. The paper states that the proposed FTQC “can be built in the near term.” This is a statement of aspiration, not a statement of engineering fact. It is consistent with IonQ’s roadmap targets of 200,000 physical qubits by 2028 and 2 million by 2030, but those targets themselves have not been independently validated.
The Broader Landscape: Architecture Proposals Are Accelerating
This paper arrives in a period of unprecedented activity in fault-tolerant architecture design. In the past year alone:
- Gidney (Google, 2025) reduced RSA-2048 resource estimates to under a million qubits with surface codes
- The Pinnacle architecture (Iceberg Quantum, February 2026) pushed that below 100,000 using qLDPC codes on superconducting hardware
- Oratomic/Caltech (March 2026) claimed 10,000 neutral atom qubits could run Shor’s algorithm at cryptographically relevant scales
- IBM published its Tour de Gross modular architecture based on bivariate bicycle codes
- And now IonQ provides a trapped-ion-specific qLDPC blueprint
What is striking is the shift: many of the newest low-overhead architecture proposals now lean on qLDPC codes rather than surface codes, even though surface-code architectures remain active in both theory and experiment. The physical qubit requirements keep falling. This trend reflects genuine algorithmic and coding-theoretic progress. But it also raises a question that is easy to overlook: as theoretical architectures get more aggressive in their qubit-efficiency claims, the gap between the design and a working machine may actually widen, because the architectures demand more from the hardware (higher fidelities, faster decoders, more complex classical control) to achieve their promised compression.
The Bottom Line for Quantum Security
None of this paper changes the immediate PQC migration calculus. The walking cat architecture, if built as described, would be a landmark achievement, but it would factor 30-bit integers, not 2,048-bit RSA keys. The path from thousands of physical qubits to the hundreds of thousands needed for cryptographic relevance remains long.
What it does reinforce is the message I have been emphasizing: the engineering community is converging on increasingly concrete and efficient paths to fault-tolerant quantum computation. The theoretical qubit requirements for useful computation, and eventually for cryptographic threats, continue to fall. And trapped ions, with their natural advantages for qLDPC code implementation, are now firmly in the conversation alongside superconducting qubits and neutral atoms.
For organizations planning their PQC migration, the lesson is unchanged: the deadlines that matter are the ones set by regulators, insurers, and clients, not the unpredictable timeline of CRQC arrival. But papers like this one make the arrival of useful fault-tolerant quantum computers, capable of computations beyond classical simulation if not yet capable of breaking encryption, feel closer than it did a year ago.
The IonQ team’s own conclusion captures a broader truth worth sitting with: “History teaches us that, until a FTQC capable of running millions of logical operations is in the hands of the broad community of scientists, we will only scratch the surface of what is possible.” That framing, the walking cat as a tool for scientific discovery rather than a cryptanalysis weapon, may be the most important thing about this paper. The threat to cryptography comes later. The capability comes first.
Quantum Upside & Quantum Risk - Handled
My company, Applied Quantum, helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.