Quantum Physics Paper Analysis

This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:

  • CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
  • Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
  • Quantum Sensing – Metrology, magnetometry, and precision measurement advances
  • Quantum Networking – QKD, quantum repeaters, and entanglement distribution

Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.

Updated automatically as new papers are published. Each week covers one arXiv publishing cycle (Sunday through Thursday); an archive of previous weeks is at the bottom of the page.

This Week: Apr 12 - Apr 16, 2026
200 Papers This Week
618 CRQC/Y2Q Total
5443 Total Analyzed

Scalable Neural Decoders for Practical Fault-Tolerant Quantum Computation

Andi Gu, J. Pablo Bonilla Ataides, Mikhail D. Lukin, Susanne F. Yelin

2604.08358 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a neural network-based decoder for quantum error correction that significantly improves the accuracy and speed of correcting errors in quantum computers. The decoder achieves much lower logical error rates and higher throughput than existing methods, making fault-tolerant quantum computing more practical.

Key Contributions

  • Development of convolutional neural network decoder for quantum error correction codes
  • Demonstration of up to 17x lower logical error rates together with 3-5 orders of magnitude higher throughput
  • Discovery of waterfall regime showing practical fault-tolerant quantum computing achievable with modest code sizes
quantum error correction • fault-tolerant quantum computing • neural network decoder • logical error rates • quantum low-density parity-check codes
Full Abstract

Quantum error correction (QEC) is essential for scalable quantum computing. However, it requires classical decoders that are fast and accurate enough to keep pace with quantum hardware. While quantum low-density parity-check codes have recently emerged as a promising route to efficient fault tolerance, current decoding algorithms do not allow one to realize the full potential of these codes in practical settings. Here, we introduce a convolutional neural network decoder that exploits the geometric structure of QEC codes, and use it to probe a novel "waterfall" regime of error suppression, demonstrating that the logical error rates required for large-scale fault-tolerant algorithms are attainable with modest code sizes at current physical error rates, and with latencies within the real-time budgets of several leading hardware platforms. For example, for the $[144, 12, 12]$ Gross code, the decoder achieves logical error rates up to $\sim 17$x below existing decoders - reaching logical error rates $\sim 10^{-10}$ at physical error $p=0.1\%$ - with 3-5 orders of magnitude higher throughput. This decoder also produces well-calibrated confidence estimates that can significantly reduce the time overhead of repeat-until-success protocols. Taken together, these results suggest that the space-time costs associated with fault-tolerant quantum computation may be significantly lower than previously anticipated.

Optimized Gottesman-Kitaev-Preskill Error Correction via Tunable Preprocessing

Xiang-Jiang Chen, Hao-Miao Jiang, Liu-Jun Wang, Qing Chen

2604.08247 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper proposes an improved error correction scheme for Gottesman-Kitaev-Preskill (GKP) quantum error correction codes by introducing a tunable preprocessing stage with squeezing parameters. The new P-Steane scheme can outperform existing methods by actively reshaping noise propagation patterns in bosonic quantum systems.

Key Contributions

  • Introduction of tunable preprocessing stage with squeezing parameters for GKP error correction
  • Unified framework that encompasses existing ME-Steane and teleportation-based schemes as special cases
  • Demonstration of improved performance over ME-Steane scheme under specific conditions with optimized parameter selection
Gottesman-Kitaev-Preskill • bosonic codes • quantum error correction • fault-tolerant quantum computing • Steane syndrome extraction
Full Abstract

The Gottesman-Kitaev-Preskill (GKP) code is a promising bosonic candidate for realizing fault-tolerant quantum computation. Among existing error-correction protocols for GKP code, the Steane-type scheme is a canonical and widely adopted paradigm, yet its intrinsic noise propagation pattern limits further performance improvement. In this work, we propose a preprocessing-based Steane-type (P-Steane) scheme, which introduces a tunable preprocessing stage with squeezing parameters $a$ and $b$ to actively reshape noise propagation, thereby constituting a parameter framework. This framework spans a spectrum of protocols beyond existing methods, reproducing the performance of both the ME-Steane scheme ($a=1$, $b=1$) and the teleportation-based scheme ($a=1/\sqrt{2}$, $b=\sqrt{2}$) as special cases. Crucially, in the small-noise regime and when the data qubit is noisier than the ancilla qubits, P-Steane scheme achieves the minimum product of position- and momentum-quadrature output noise variances when $2a = b$, and consistently outperforms the ME-Steane scheme within a specific squeezing-parameter range under this condition.

Belief Propagation Convergence Prediction for Bivariate Bicycle Quantum Error Correction Codes

Anton Pakhunov

2604.07995 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a simple predictive method for determining whether Belief Propagation decoding will converge for quantum error correction codes: check whether the syndrome defect count is divisible by the code's column weight. The predictor achieves high accuracy (AUC = 0.995) and can reduce computational overhead in quantum error correction.

Key Contributions

  • Development of a modulo-based convergence predictor for Belief Propagation decoding with AUC = 0.995
  • Identification of structural mechanism linking syndrome defect count divisibility to BP convergence probability
  • Validation across multiple Bivariate Bicycle codes, including the Gross codes IBM targets for 2026-2028 deployment
quantum error correction • belief propagation • bivariate bicycle codes • syndrome decoding • LDPC codes
Full Abstract

Decoding Bivariate Bicycle (BB) quantum error correction codes typically requires Belief Propagation (BP) followed by Ordered Statistics Decoding (OSD) post-processing when BP fails to converge. Whether BP will converge on a given syndrome is currently determined only after running BP to completion. We show that convergence can be predicted in advance by a single modulo operation: if the syndrome defect count is divisible by the code's column weight w, BP converges with high probability (100% at p <= 0.001, degrading to 87% at p = 0.01); otherwise, BP fails with probability >= 90%. The mechanism is structural: each physical data error activates exactly w stabilizers, so a defect count not divisible by w implies the presence of measurement errors outside BP's model space. Validated on five BB codes with column weights w = 2, 3, and 4, mod-w achieves AUC = 0.995 as a convergence classifier at p = 0.001 under phenomenological noise, dominating all other syndrome features (next best: AUC = 0.52). The false positive rate scales empirically as O(p^2.05) (R^2 = 0.98), confirming the analytical bound from Proposition 2. Among BP failures on mod-w = 0 syndromes, 82% contain weight-2 data error clusters, directly confirming the dominant failure mechanism. The prediction is invariant under BP scheduling strategy and decoder variant, including Relay-BP - the strongest known BP enhancement for quantum LDPC codes. These results apply directly to IBM's Gross code [[144, 12, 12]] and Two-Gross code [[288, 12, 18]], targeted for deployment in 2026-2028.
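The mod-w predictor described in the abstract is simple enough to state in code. This minimal sketch shows the check; the syndromes and column weight below are illustrative values, not data from the paper:

```python
def predict_bp_convergence(syndrome, column_weight):
    """Predict whether Belief Propagation will converge on a syndrome.

    Heuristic from the abstract: each physical data error activates
    exactly `column_weight` stabilizers, so a defect count that is NOT
    divisible by the column weight implies measurement errors outside
    BP's model space, and BP is likely to fail.
    """
    defect_count = sum(syndrome)      # number of flipped stabilizer bits
    return defect_count % column_weight == 0

# Toy syndromes for an illustrative column weight w = 3:
w = 3
likely_converges = predict_bp_convergence([1, 1, 1, 0, 0, 0], w)      # 3 defects
likely_fails = not predict_bp_convergence([1, 1, 0, 0, 0, 0], w)      # 2 defects
```

In a decoder pipeline, a `False` prediction could route the syndrome straight to OSD post-processing instead of running BP to completion first.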

A Review of Variational Quantum Algorithms: Insights into Fault-Tolerant Quantum Computing

Zhirao Wang, Junxiang Huang, Runyu Ye, Qingyu Li, Qi-Ming Ding, Yiming Huang, Ting Zhang, Yumeng Zeng, Jianshuo Gao, Xiao Yuan, Yuan Yao

2604.07909 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper reviews variational quantum algorithms (VQAs), which combine quantum circuits with classical optimization to work on current noisy quantum computers, and analyzes how these algorithms might evolve as quantum computers become fault-tolerant. The review examines current challenges like barren plateaus and explores applications across physics, chemistry, and machine learning.

Key Contributions

  • Systematic analysis of VQA evolution from NISQ to fault-tolerant quantum computing regimes
  • Comprehensive review of training bottlenecks like barren plateaus and mitigation strategies
  • Theoretical roadmap for adapting variational algorithms to error-corrected quantum systems
variational quantum algorithms • fault-tolerant quantum computing • NISQ • parameterized quantum circuits • barren plateaus
Full Abstract

Variational quantum algorithms (VQAs) have established themselves as a central computational paradigm in the Noisy Intermediate-Scale Quantum (NISQ) era. By coupling parameterized quantum circuits (PQCs) with classical optimization, they operate effectively under strict hardware limitations. However, as quantum architectures transition toward early fault-tolerant (EFT) and ultimate fault-tolerant (FT) regimes, the foundational principles and long-term viability of VQAs require systematic reassessment. This review offers an insightful analysis of VQAs and their progression toward the fault-tolerant regime. We deconstruct the core algorithmic framework by examining ansatz design and classical optimization strategies, including cost function formulation, gradient computation, and optimizer selection. Concurrently, we evaluate critical training bottlenecks, notably barren plateaus (BPs), alongside established mitigation strategies. The discussion then explores the EFT phase, detailing how the integration of quantum error mitigation and partial error correction can sustain algorithmic performance. Addressing the FT phase, we analyze the inherent challenges confronting current hybrid VQA models. Furthermore, we synthesize recent VQA applications across diverse domains, including many-body physics, quantum chemistry, machine learning, and mathematical optimization. Ultimately, this review outlines a theoretical roadmap for adapting quantum algorithms to future hardware generations, elucidating how variational principles can be systematically refined to maintain their relevance and efficiency within an error-corrected computational environment.
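The PQC-plus-classical-optimizer loop at the heart of every VQA can be sketched in a few lines. This minimal single-qubit example (an Ry ansatz, a ⟨Z⟩ cost, and parameter-shift gradients, all textbook choices rather than anything taken from the review) illustrates the structure:

```python
import numpy as np

def ry(theta):
    """Single-qubit Ry rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])
ket0 = np.array([1.0, 0.0])

def cost(theta):
    """Expectation <psi|Z|psi> for psi = Ry(theta)|0>; equals cos(theta)."""
    psi = ry(theta) @ ket0
    return float(psi @ Z @ psi)

def parameter_shift_grad(theta):
    """Gradient of the cost via the parameter-shift rule (exact for Ry)."""
    return 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))

theta, lr = 0.4, 0.5
for _ in range(200):              # plain gradient descent as the classical loop
    theta -= lr * parameter_shift_grad(theta)
# theta is now close to pi, where <Z> reaches its minimum of -1
```

The parameter-shift rule matters in practice because it evaluates gradients from the same circuit executions used for the cost, rather than requiring analytic access to the quantum state.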

Fast and Coherent Transfer of Atomic Qubits in Optical Tweezers using Fiber Array Architecture

Jia-Chao Wang, Zai-Zheng Zhang, Xiao Li, Guang-Wei Wang, Xiao-Dong He, Min Liu, Peng Xu

2604.07862 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a new fiber array architecture for neutral-atom quantum computers that enables fast, coherent transfer of atomic qubits between optical trap sites with extremely low heating and high fidelity. The technique allows qubits to be moved between different locations in the quantum processor while maintaining their quantum states, which is crucial for implementing quantum algorithms that require connectivity between distant qubits.

Key Contributions

  • Demonstrated ultrafast qubit transfer (10 μs) with extremely high fidelity (0.99992 per cycle) and ultralow motional heating
  • Developed fiber array architecture with site-resolved trap depth control enabling smooth amplitude exchange between static and moving traps
  • Established theoretical model connecting array inhomogeneity to transfer heating rates through parallel transfer experiments
neutral atoms • optical tweezers • qubit transfer • quantum computing architecture • motional heating
Full Abstract

Programmable neutral-atom arrays offer a promising route toward scalable quantum computing, where coherent qubit transfer enables non-local connectivity and reduces resource overhead. However, transfer speed and motional heating remain key bottlenecks for fast and deep quantum circuits. Here, we employ a fiber array neutral-atom quantum computing architecture with site-resolved control of trap depths to realize smooth amplitude exchange between static and moving traps, thereby enabling fast and coherent qubit transfer with ultralow motional heating. With a 10 $\mu$s in situ transfer between static and moving traps, we obtain a per-cycle heating rate of 0.156(9) $\mu$K, sustain over 500 cycles with negligible atom loss, and achieve a quantum state fidelity of 0.99992(5) per cycle. For inter-site transfer between two separated static traps, the operation takes 120 $\mu$s with 0.783(17) $\mu$K heating per transfer, and atom loss remains negligible for up to 100 repeated cycles with a fidelity of 0.9998(1) per transfer. Furthermore, through experimental studies of parallel transfer, we establish a model that elucidates the relationship between array inhomogeneity and the transfer heating rate. This fast, low-heating coherent transfer capability provides a practical route for improving both speed and fidelity in atom-shuttling based quantum computing.

Trotterization with Many-body Coulomb Interactions: Convergence for General Initial Conditions and State-Dependent Improvements

Di Fang, Xiaoxu Wu

2604.07704 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper establishes rigorous error bounds for Trotter formulas when simulating many-body quantum systems with Coulomb interactions, showing that second-order Trotter achieves polynomial scaling in particle number despite the challenging mathematical properties of Coulomb potentials. The work identifies conditions under which convergence rates can be improved and connects these to physically meaningful quantum states.

Key Contributions

  • Rigorous proof that second-order Trotter formulas achieve 1/4 convergence rate with polynomial particle number dependence for Coulomb systems
  • Identification of physically meaningful initial state conditions that improve convergence rates to first and second order
Trotterization • Coulomb interactions • quantum simulation • many-body systems • convergence analysis
Full Abstract

Efficiently simulating many-body quantum systems with Coulomb interactions is a fundamental question in quantum physics, quantum chemistry, and quantum computing, yet it presents unique challenges: the Hamiltonian is an unbounded operator (both kinetic and potential parts are unbounded); its Hilbert space dimension grows exponentially with particle number; and the Coulomb potential is singular, long-ranged, non-smooth, and unbounded, violating the regularity assumptions of many prior state-of-the-art many-body simulation analyses. In this work, we establish rigorous error bounds for Trotter formulas applied to many-body quantum systems with Coulomb interactions. Our first main result shows that for general initial conditions in the domain of the Hamiltonian, second-order Trotter achieves a sharp $1/4$ convergence rate with explicit polynomial dependence of the error prefactor on the particle number. The polynomial dependence on system size suggests that the algorithm remains quantumly efficient, even without introducing any regularization of the Coulomb singularity. Notably, although the result under general conditions constitutes a worst-case bound, this rate has been observed in prior work for the hydrogen ground state, demonstrating its relevance to physically and practically important initial conditions. Our second main result identifies a set of physically meaningful conditions on the initial state under which the convergence rate improves to first and second order. For hydrogenic systems, these conditions are connected to excited states with sufficiently high angular momentum. Our theoretical findings are consistent with prior numerical observations.
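The Coulomb analysis above is well beyond a toy script, but the second-order (Strang) Trotter formula it studies is easy to demonstrate numerically. In the sketch below, small random Hermitian matrices stand in for the kinetic and potential terms, so the familiar second-order scaling for bounded, smooth generators appears rather than the 1/4 rate proved for singular Coulomb potentials; everything here is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_hermitian(n):
    """Small random Hermitian matrix (stand-in for a bounded Hamiltonian term)."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

def expm_herm(h, t):
    """exp(-i*h*t) for Hermitian h, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(h)
    return vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T

n, t = 8, 1.0
A, B = rand_hermitian(n), rand_hermitian(n)   # "kinetic" and "potential" parts
exact = expm_herm(A + B, t)

def strang_error(steps):
    """Spectral-norm error of the second-order (Strang) product formula."""
    dt = t / steps
    step = expm_herm(A, dt / 2) @ expm_herm(B, dt) @ expm_herm(A, dt / 2)
    return np.linalg.norm(np.linalg.matrix_power(step, steps) - exact, 2)

e1, e2 = strang_error(50), strang_error(100)
# Halving the step size cuts the global error by roughly 4x (second order).
```

Part of the paper's contribution is showing what replaces this clean ratio-of-4 behavior when the potential is unbounded and singular, as the Coulomb interaction is.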

Defect-free arrays at the thousand-atom scale in a 4-K cryogenic environment

Desiree Lim, Hadriel Mamann, Grégoire Pichard, Lilian Bourachot, Arvid Lindberg, Clotilde Hamot, Hugo Le Bars, Florian Fasola, Siddhy Tan, Gwennolé ...

2604.07205 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper demonstrates a cryogenic system operating at 4 K that can create and maintain arrays of up to 1024 individual atoms trapped by laser tweezers, achieving extremely long trapping times of around 5000 seconds. The system is designed to be compatible with Rydberg-state manipulation, enabling large-scale quantum computing applications.

Key Contributions

  • Development of a 4 K cryogenic platform with high numerical aperture optics for thousand-atom-scale arrays
  • Achievement of 5000-second trapping lifetimes enabling extended experimental time
  • Demonstration of defect-free arrays up to 1024 atoms using dual-wavelength trapping
optical tweezers • Rydberg atoms • cryogenic systems • neutral atom quantum computing • large-scale quantum arrays
Full Abstract

We report on a cryogenic platform at 4 K incorporating high numerical aperture optics for the generation of large-scale tweezers arrays, and compatible with Rydberg-state manipulation. We achieve trapping lifetimes of around 5000 s, significantly extending the available experimental time for the preparation of large-scale arrays. By combining two trapping lasers at different wavelengths and by minimizing other atom losses during the rearrangement and imaging processes, we demonstrate the preparation of defect-free arrays with up to 1024 atoms. Our cryogenic design opens exciting prospects for analog and digital quantum computing.

Coherence and entanglement dynamics in Shor's algorithm

Linlin Ye, Zhaoqi Wu, Shao-Ming Fei

2604.06639 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes how quantum coherence and entanglement change during the execution of Shor's algorithm for factoring large numbers. The researchers show that Shor's algorithm generally decreases coherence while increasing entanglement, and they establish relationships between these quantum resources throughout the algorithm's steps.

Key Contributions

  • Analysis of coherence and entanglement dynamics throughout Shor's algorithm execution
  • Demonstration that Shor's algorithm depletes coherence while producing entanglement
  • Establishment of relationships between geometric coherence and geometric entanglement in quantum algorithms
Shor's algorithm • quantum coherence • quantum entanglement • prime factorization • quantum algorithms
Full Abstract

Shor's algorithm outperforms its classical counterpart in efficient prime factorization. We explore the coherence and entanglement dynamics of the evolved states within Shor's algorithm, showing that the coherence in each step relies on the dimension of register or the order, and discuss the relations between geometric coherence and geometric entanglement. We investigate how unitary operators induce variations in coherence and entanglement, and analyze the variations of coherence and entanglement within the entire algorithm, demonstrating that the overall effect of Shor's algorithm tends to deplete coherence and produce entanglement. Our research not only deepens the understanding of this algorithm but also provides methodological references for studying resource dynamics in other quantum algorithms.
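For readers tracking how coherence and entanglement evolve step by step, it helps to recall the skeleton of Shor's algorithm. The sketch below is standard textbook material, not code from the paper: the brute-force `find_order` loop stands in for the quantum period-finding subroutine, and the rest is the classical post-processing.

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n). This brute-force loop is the
    step a quantum computer replaces with phase estimation over the
    modular-exponentiation unitary."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    """Classical pre/post-processing of Shor's algorithm: try to split n
    using the order of a. Returns a nontrivial factor, or None if this
    choice of a fails and another should be drawn."""
    g = gcd(a, n)
    if g != 1:
        return g                  # a already shares a factor with n
    r = find_order(a, n)
    if r % 2 == 1:
        return None               # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry
    g = gcd(y - 1, n)
    return g if g > 1 else None

factor = shor_classical(15, 7)    # order of 7 mod 15 is 4 -> yields factor 3
```

The coherence and entanglement dynamics the paper studies live inside the quantum subroutine that `find_order` replaces here: superposition preparation, modular exponentiation, and the inverse quantum Fourier transform.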

Quantifying magic via quantum $(α,β)$ Jensen-Shannon divergence

Linmao Wang, Zhaoqi Wu

2604.06604 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new mathematical tools to measure 'magic' in quantum states, which refers to how much a quantum state differs from easily simulatable stabilizer states. The authors propose quantum Jensen-Shannon divergence-based measures that can efficiently quantify this magic property, which is crucial for fault-tolerant quantum computing.

Key Contributions

  • Introduction of two new magic quantifiers based on quantum (α,β) Jensen-Shannon divergence
  • Demonstration that these quantifiers are efficiently computable in low-dimensional systems and have desirable mathematical properties
  • Analysis of how initial nonstabilizerness can enhance magic generation for specific quantum gates
magic states • fault-tolerant quantum computing • stabilizer states • quantum resource theory • Jensen-Shannon divergence
Full Abstract

Magic states play an important role in fault-tolerant quantum computation, and so the quantification of magic for quantum states is of great significance. In this work, we propose two new magic quantifiers by introducing two versions of quantum $(α,β)$ Jensen-Shannon divergence based on the quantum $(α,β)$ entropy and the quantum $(α,β)$-relative entropy, respectively. We derive many desirable properties for our magic quantifiers, and find that they are efficiently computable in low-dimensional Hilbert spaces. We also show that the initial nonstabilizerness in the input state can boost the magic generating power for our magic quantifiers with appropriate parameter ranges for a certain class of quantum gates. Our magic quantifiers may provide new tools for addressing some specific problems in magic resource theory.

Database Reordering for Compact Grover Oracles with ESOP Minimization

Yusuke Kimura, Yutaka Takita

2604.06578 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes optimizing Grover's quantum search algorithm by reordering database entries and using ESOP minimization to reduce the gate count and circuit depth of the quantum oracle circuit. The researchers demonstrate that strategic database reordering combined with simulated annealing can reduce circuit size by approximately 30% compared to unoptimized approaches.

Key Contributions

  • Demonstrated that database reordering can reduce Grover oracle circuit size by up to a factor of two
  • Developed a proxy metric for estimating circuit size without full compilation and combined it with simulated annealing for efficient optimization
  • Showed 30% circuit size reduction compared to ESOP minimization without reordering through experimental validation
Grover's algorithm • quantum oracle • circuit optimization • ESOP minimization • QROM
Full Abstract

Grover's algorithm searches for data satisfying a desired condition in an unstructured database. This algorithm can search a space of size $N$ in $\sqrt{N}$ queries, thereby achieving a quadratic speedup. However, within the Grover oracle circuit that is repeatedly applied, the quantum state preparation circuit -- which embeds database information into quantum states -- suffers from a large gate count and circuit depth. To address this problem, we propose reducing the quantum state preparation circuit by reordering the database. Specifically, we consider a Quantum Read-Only Memory (QROM), where data are assigned to addresses, and assume that the address assignment of data can be freely permuted. By applying Exclusive Sum-of-Products (ESOP) minimization to the resulting truth table, we reduce the quantum circuit. Although the resulting circuit logic differs from the original, the state preparation remains correct in the sense that every desired datum is encoded at some address. Furthermore, we propose a proxy metric that estimates circuit size without compilation, and combine it with simulated annealing to efficiently find a near-optimal data ordering. In our experiments, an exhaustive search over all orderings for databases of size $N=8$ reveals that circuit size varies by up to approximately a factor of two depending on the ordering, demonstrating the utility of reordering. Compared with applying ESOP minimization without reordering, simulated annealing reduces the circuit size by approximately 30\% and yields circuits close to optimal. For $N=64$ and $128$, simulated annealing is shown to discover smaller circuits compared with random search.
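The reorder-then-minimize pipeline can be sketched with a generic simulated-annealing loop over address permutations. The proxy cost below (total Hamming distance between data words at adjacent addresses, a crude stand-in for how ESOP minimization merges similar product terms) is a hypothetical choice for illustration only; the paper's actual proxy metric is not reproduced here:

```python
import math, random

random.seed(1)

def hamming(x, y):
    """Number of differing bits between two data words."""
    return bin(x ^ y).count("1")

def proxy_cost(order, data):
    """Hypothetical stand-in for estimated oracle-circuit size: placing
    similar words at adjacent addresses should let ESOP-style
    minimization merge product terms."""
    return sum(hamming(data[order[i]], data[order[i + 1]])
               for i in range(len(order) - 1))

def anneal(data, steps=5000, t0=2.0):
    """Simulated annealing over address permutations of the database."""
    order = list(range(len(data)))
    cost = proxy_cost(order, data)
    best, best_cost = order[:], cost
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9             # linear cooling schedule
        i, j = random.sample(range(len(data)), 2)
        order[i], order[j] = order[j], order[i]        # propose a swap
        new_cost = proxy_cost(order, data)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]    # reject: undo the swap
    return best, best_cost

data = [0b000, 0b111, 0b001, 0b110, 0b011, 0b100, 0b010, 0b101]  # toy N = 8
order, cost = anneal(data)   # best ordering found and its proxy cost
```

Because the oracle only needs every desired datum encoded at *some* address, any permutation found this way is a valid database layout, which is what makes the ordering a free optimization variable.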

Discrete-variable assisted error correction of continuous-variable quantum information

Negin Razian, En-Jui Chang, Hoi-Kwan Lau

2604.06565 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper presents a new quantum error correction method for continuous-variable quantum systems that uses discrete-variable ancilla qubits instead of the difficult-to-prepare GKP states. The approach suppresses CV infidelity by more than 20% with a single-qubit ancilla and offers a more practical path to implementing error correction in hybrid quantum systems.

Key Contributions

  • Novel CV quantum error correction scheme using DV ancilla instead of GKP states
  • Demonstration of >20% infidelity suppression with single-qubit ancilla
  • New oscillator-in-oscillator code architecture without GKP states
  • Practical implementation pathway for CV QEC on realistic platforms
quantum error correction • continuous-variable quantum computing • discrete-variable ancilla • bosonic quantum codes • hybrid quantum systems
Full Abstract

Robust continuous-variable (CV) quantum information processing requires correcting realistic errors in bosonic systems, but all existing schemes rely on auxiliary Gottesman-Kitaev-Preskill (GKP) states whose preparation and operation are demanding on many platforms. In this work, we propose a novel CV quantum error correction (QEC) scheme that utilizes a broadly accessible resource: discrete-variable (DV) ancilla. Our scheme extracts information about the CV displacement into the DV ancilla; measuring the ancilla allows the unwanted displacement error to be counteracted. We show that a simple single-qubit ancilla can already suppress CV infidelity by more than 20%. By concatenating with DV QEC codes, our scheme is robust against the physical errors in hybrid CV-DV systems, and yields a new class of oscillator-in-oscillator code that does not involve GKP states. Our work facilitates the implementation of CV QEC on realistic platforms.

Error Correction in Lattice Quantum Electrodynamics with Quantum Reference Frames

Elias Rothlin, Carla Ferradini, Lin-Qing Chen

2604.06149 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper explores how gauge symmetries in lattice quantum electrodynamics can be understood as quantum error-correcting codes, showing that gauge redundancy serves as a resource for protecting quantum information. The authors construct explicit error recovery operations using quantum reference frames and demonstrate two QECC structures within lattice QED.

Key Contributions

  • Established lattice QED as a quantum error-correcting code beyond stabilizer codes
  • Constructed explicit recovery operations using quantum reference frames for both gauge and fermionic sectors
  • Demonstrated how gauge symmetry provides encoding structure that supports quantum error correction
quantum error correction • gauge theory • lattice QED • quantum reference frames • stabilizer codes
Full Abstract

Is gauge symmetry merely a redundancy in our description, or does it carry a deeper information-theoretic significance? Quantum error-correcting codes (QECCs) show that redundancy can serve as a resource for protecting information against noise. In this work, we ask whether gauge theories can be understood in similar terms, and make this idea concrete in lattice quantum electrodynamics (QED), building on and extending earlier works that established a bridge between gauge systems, stabilizer codes, and quantum reference frames (QRFs). For Abelian gauge groups, we show that explicit recovery operations can be constructed using group-theoretical methods for error sets determined by both ideal and non-ideal QRFs. Applied to lattice QED, this yields two QECC structures: one in the pure-gauge sector and one including fermions. We construct a gauge-field QRF based on spanning trees of the lattice and a fermionic field QRF from the matter field, thereby making explicit how physical information is encoded. While the syndromes of gauge-violating errors associated with constraint measurements are generically degenerate, QRFs resolve this degeneracy and single out families of correctable errors. This establishes lattice QED as a QECC beyond the stabilizer setting and shows concretely how gauge symmetry provides an encoding structure that supports error correction.

Gauss law codes and vacuum codes from lattice gauge theories

Javier P. Lacambra, Aidan Chatwin-Davies, Masazumi Honda, Philipp A. Hoehn

2604.06087 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a framework for creating quantum error correcting codes from lattice gauge theories, showing how gauge symmetries can be used to protect quantum information. The work demonstrates connections between quantum error correction and gauge theory physics, with potential applications for simulating gauge theories on noisy quantum computers.

Key Contributions

  • Comprehensive framework for constructing QECCs from Abelian lattice gauge theories using quantum reference frames
  • Development of two classes of codes: Gauss law codes and vacuum codes with detailed characterization of their algebraic structures
  • Demonstration of unitary equivalence between vacuum codes and pure gauge theory codes under specific conditions
quantum error correction • lattice gauge theory • quantum reference frames • subsystem codes • gauge symmetry
Full Abstract

We develop a comprehensive framework for constructing quantum error correcting codes (QECCs) from Abelian lattice gauge theories (LGTs) using quantum reference frames (QRFs) as a unifying formalism. We consider LGTs with arbitrary compact Abelian gauge groups supported on lattices in arbitrary numbers of spatial dimensions, and we work with both pure gauge theories and theories with couplings to bosonic and fermionic matter. The codes that we construct fall into two classes: First, Gauss law codes identify the code subspace with the full gauge-invariant sector of the theory. In models with matter coupled to gauge fields, these codes inherit a natural subsystem structure in which gauge-invariant Wilson loops and dressed matter excitations factorize the code space. Second, vacuum codes restrict the code subspace to the matter vacuum sector within the gauge-invariant subspace, yielding codes where errors correspond to gauge-invariant charge excitations rather than to violations of the Gauss law. Despite their distinct setup, we show that when the gauge group is finite, vacuum codes are unitarily equivalent to pure gauge theory Gauss law codes, and that when the group is continuous, this is only true upon a charge coarse-graining of the vacuum code. In all cases, QRFs provide a systematic apparatus for fully characterizing the codes' algebraic structures and correctable error sets. For clarity, we illustrate our general results in $\mathbb{Z}_2$-gauge theory, as well as in scalar and fermionic QED. These findings offer fundamental insights into the parallelism between quantum error correction and gauge theory and point toward practical advantages for simulating LGTs on noisy quantum devices.

Adaptive Deformation of Color Code in Square Lattices with Defects

Tian-Hao Wei, Jia-Xuan Zhang, Jia-Ning Li, Wei-Cheng Kong, Yu-Chun Wu, Guo-Ping Guo

2604.05874 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops methods to adapt color code quantum error correction to work on hardware with defective qubits, proposing a universal scheme that handles both data and ancilla qubit defects while maintaining low error rates and supporting fault-tolerant operations.

Key Contributions

  • Universal superstabilizer scheme for handling data qubit defects in arbitrary stabilizer codes
  • Concrete repair methods for isolated defects in color codes on square lattices
  • Two optimization schemes for ancilla qubit defects that avoid resource waste
  • Comprehensive defect adaptive architecture supporting transversal Clifford gates and lattice surgery
quantum error correction, color codes, fault tolerant quantum computing, topological codes, stabilizer codes

Quantum error correction is a crucial technology for fault tolerant quantum computing. On superconducting platforms, hardware defects in large scale quantum processors can disrupt the regular lattice structure of topological codes and impair their error correction capabilities. Although defect adaptive methods for surface codes have been extensively studied, other topological codes such as color codes still lack a systematic framework for handling defects. To address this issue, we propose a universal superstabilizer scheme applicable to data qubit defects in arbitrary stabilizer codes. Based on this scheme, we develop concrete repair methods for isolated defects of both internal data qubits and ancilla qubits in color codes defined on square lattices. Furthermore, for ancilla qubit defects, we present two optimization schemes. One scheme reuses neighboring ancilla qubits, and the other employs iSWAP gates. Unlike conventional approaches that directly disable neighboring data qubits and thus cause resource waste, both of our schemes avoid such waste and consequently achieve a lower logical error rate. Integrating the above techniques, we construct a comprehensive defect adaptive architecture for color codes to handle various defect clusters. We also show that our scheme supports a full transversal Clifford gate set and lattice surgery operations. These results provide a systematic theoretical pathway for deploying robust and low overhead color codes on defective quantum hardware.

Dynamical decoupling and quantum error correction with SU(d) symmetries

Colin Read, Eduardo Serrano-Ensástiga, John Martin

2604.05871 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: high Network: medium

This paper develops a general framework for dynamical decoupling in qudit (multi-level quantum) systems using Lie group theory, extending beyond the typical qubit case. The authors show how to systematically identify decoupling sequences for higher-dimensional quantum systems and demonstrate that the same mathematical framework unifies dynamical decoupling with quantum error correction.

Key Contributions

  • General framework for dynamical decoupling in qudit systems based on SU(d) symmetries and Lie group theory
  • Unification of dynamical decoupling and quantum error correction through symmetry-based approach
  • Construction of new pulse sequences for qutrit systems and spin-1 systems with practical experimental considerations
dynamical decoupling, quantum error correction, qudit systems, SU(d) symmetries, Lie group theory

Dynamical decoupling is a long-established and effective way to suppress unwanted interactions in qubit systems, enabling advances in fields ranging from quantum metrology to quantum computing. For general qudit systems, however, comparable protocols remain rare, mainly because Hamiltonian engineering in higher dimensions lacks the geometric intuition available for qubits. Here we present a general framework for dynamical decoupling in qudit systems, based on Lie group representation theory. By extending the group theory approach to dynamical decoupling, we show how decoupling groups can be systematically identified among the finite subgroups of SU(d) by analyzing their access to the irreducible components of the operator space. As an application, we construct new pulse sequences for interacting qutrit systems based on finite subgroups of SU(3), and show how subgroup factorizations and group orientations can be exploited to obtain shorter and more experimentally practical protocols for spin-1 systems with large zero-field splitting. We further show that the same symmetry-based framework yields quantum error-correcting codes: whenever a finite subgroup of SU(d) acts as a decoupling group for the relevant error algebra, the associated one-dimensional symmetry sectors define codespaces satisfying the Knill-Laflamme conditions, thereby unifying dynamical decoupling and quantum error correction in multi-level quantum systems.
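
The decoupling-group condition described in this abstract can be checked numerically in the qubit case ($d=2$). A minimal sketch, my own illustration rather than the authors' construction: twirling an arbitrary traceless error Hamiltonian over the single-qubit Pauli group, the canonical decoupling group for the full su(2) error algebra, cancels every error term.

```python
import numpy as np

# Single-qubit Pauli operators; the Pauli group {I, X, Y, Z} is the
# textbook decoupling group for the full su(2) error algebra.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def group_average(H, group):
    """Twirl H over a finite group: (1/|G|) * sum_g g^dagger H g."""
    return sum(g.conj().T @ H @ g for g in group) / len(group)

H_err = 0.3 * X + 0.7 * Y - 0.2 * Z      # arbitrary traceless error Hamiltonian
avg = group_average(H_err, [I2, X, Y, Z])
print(np.allclose(avg, np.zeros((2, 2))))  # True: the twirl cancels every term
```

By contrast, averaging over the subgroup {I, X} alone only echoes away the Y and Z components and leaves the X term intact, which is why identifying a subgroup with access to every irreducible component of the operator space matters.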

Fault-Tolerant One-Shot Entanglement Generation with Constant-Sized Quantum Devices in the Plane

Dylan Harley, Robert Koenig

2604.05870 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper presents a fault-tolerant protocol that can generate high-fidelity entangled Bell pairs between distant qubits on a 2D grid in constant time, even in the presence of noise. The protocol works with constant-sized quantum devices and requires only a grid that scales linearly with distance in one dimension and logarithmically in the other.

Key Contributions

  • First one-shot fault-tolerant entanglement generation protocol for 2D grids with constant-sized devices
  • Demonstration of long-range localizable entanglement in short-range entangled 2D states robust to local Pauli noise
  • Construction of 2D-local stabilizer Hamiltonian with long-range entanglement at finite temperature
fault-tolerant quantum computing, entanglement generation, quantum repeaters, 2D quantum systems, Bell pairs

Consider a rectangular grid of qubits in 2D with single-qubit and nearest-neighbor two-qubit operations subject to local stochastic Pauli noise. At different length scales, this setup describes both a single quantum computing device with geometrically limited connectivity between qubits arranged on a disc, and planar networks composed of quantum repeater stations of constant size. We give a protocol which robustly generates entanglement between distant qubits in this setup. For noise below a constant threshold error strength, it generates a constant-fidelity Bell pair between qubits separated by an arbitrarily large distance $R$. To generate distance-$R$ entanglement, a rectangular grid of qubits of dimensions $\Theta(R)\times \Theta(\mathsf{poly}(\log R))$ suffices. Our protocol applies quantum operations in one shot, establishing a Bell state in a constant time up to a known Pauli correction. In contrast, existing entanglement generation protocols either require local quantum devices controlling a number of qubits growing with the targeted distance, or are not single-shot, i.e., have a distance-dependent execution time. The protocol leverages many-body entanglement in networks and provides the first example of a short-range entangled state in 2D with long-range localizable entanglement robust to local stochastic Pauli noise. As an immediate corollary, we construct a 2D-local stabilizer Hamiltonian whose Gibbs states possess long-range localizable entanglement at constant positive temperature.

A plug-and-play superconducting quantum controller at millikelvin temperatures enables exceeding 99.9% average gate fidelity

Kuang Liu, Zhiyuan Wang, Xiaoliang He, Siqi Li, Hao Wu, Xiangyu Ren, Zhengqi Niu, Wangpeng Gao, Chenluo Zhang, Pei Huang, Yu Wu, Liliang Ying, Wei Pen...

2604.05693 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper presents a superconducting quantum controller that operates at millikelvin temperatures and can directly connect to quantum bits, achieving over 99.9% gate fidelity with very low power consumption. The controller addresses a major bottleneck in scaling up superconducting quantum computers by enabling high-precision control operations at the same ultra-cold temperatures where the qubits operate.

Key Contributions

  • Development of a plug-and-play superconducting quantum controller operating at 10 mK with direct chip-to-chip qubit interconnection
  • Achievement of 99.9% average Clifford gate fidelity with ultralow power consumption of 0.121 fJ per gate operation
  • Demonstration of solution to control bottleneck in large-scale superconducting quantum computing
superconducting quantum computing, quantum control, gate fidelity, Josephson junctions, randomized benchmarking

The development of large-scale superconducting quantum computing requires efficient in-situ control methods that allow high-fidelity operations at millikelvin temperatures. Superconducting circuits based on Josephson junctions offer a promising solution due to their high speed, low power dissipation, and cryogenic nature. Here, we report a superconducting quantum controller that enables direct chip-to-chip interconnection with qubits at 10 mK and high-fidelity, all-digital manipulation. Randomized benchmarking reveals a uniformly high average Clifford fidelity of 99.9% with leakage to high energy levels on the order of $10^{-4}$, and an estimated average gate operation energy of 0.121 fJ, demonstrating the potential to resolve the control bottleneck in superconducting quantum computing.

PQC-Enhanced QKD Networks: A Layered Approach

Paul Spooren, Andreas Neuhold, Sebastian Ramacher, Thomas Hühn

2604.05599 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: none Sensing: none Network: high

This paper presents a hybrid network security architecture that combines Quantum Key Distribution (QKD) with Post-Quantum Cryptography (PQC) to create secure communication networks. The approach uses a layered design where QKD provides hop-by-hop security between trusted nodes, while PQC enables end-to-end encryption across the entire network.

Key Contributions

  • Layered network architecture combining QKD and PQC for scalable quantum-safe security
  • Practical implementation using open-source components with validation in simulated and lab environments
  • Compositional security analysis preserving individual component security properties
quantum key distribution, post-quantum cryptography, quantum networks, network security, cryptographic protocols

We present a layered and modular network architecture that combines Quantum Key Distribution (QKD) and Post-Quantum Cryptography (PQC) to provide scalable end-to-end security across long distance multi-hop, trusted-node quantum networks. To ensure interoperability and efficient practical deployment, hop-wise tunnels between physically secured nodes are protected by WireGuard with periodically rotated pre-shared keys sourced via the ETSI GS QKD 014 interface. On top, Rosenpass performs a PQC key exchange to establish an end-to-end data channel without modifying deployed QKD devices or network protocols. This dual-layer composition yields post-quantum forward secrecy and authenticity under practical assumptions. We implement the design using open-source components and validate and evaluate it in simulated and lab test-beds. Experiments show uninterrupted operation over multi-hop paths, low resource footprint and fail-safe mechanisms. We further discuss the design's compositional security, wherein the security of each individual component is preserved under their combination and outline migration paths for operators integrating QKD-aware overlays in existing infrastructures.
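
The hop-wise layer described above pairs keys fetched over the ETSI GS QKD 014 REST interface with WireGuard's preshared-key slot. A minimal sketch of that glue, assuming an illustrative 014-style JSON payload (the `keys`/`key_ID`/`key` field names follow the ETSI format; the interface name and peer public key are placeholders, and nothing here is the authors' implementation):

```python
import base64
import json

# Hypothetical ETSI GS QKD 014 "Get key" response; real deployments return
# base64-encoded keys with UUID key IDs over an authenticated REST call.
sample_response = json.dumps({
    "keys": [{"key_ID": "00000000-0000-0000-0000-000000000000",
              "key": base64.b64encode(b"\x01" * 32).decode()}]
})

def psk_rotation_command(response_json: str, interface: str, peer_pubkey: str):
    """Build the `wg set` invocation that installs a QKD-sourced preshared key.

    WireGuard reads the preshared key from a file (or stdin), so the caller
    is expected to pipe the base64 key in; we return argv plus the key."""
    key = json.loads(response_json)["keys"][0]["key"]
    argv = ["wg", "set", interface, "peer", peer_pubkey,
            "preshared-key", "/dev/stdin"]
    return argv, key

argv, key = psk_rotation_command(sample_response, "wg0", "PEER_PUBKEY")
print(" ".join(argv))
print(len(base64.b64decode(key)))  # WireGuard preshared keys are 32 bytes
```

Periodically re-running this rotation against fresh QKD-delivered keys gives the hop-by-hop layer; the Rosenpass end-to-end exchange rides on top unchanged.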

Phase-Fidelity-Aware Truncated Quantum Fourier Transform for Scalable Phase Estimation on NISQ Hardware

Akoramurthy B, Surendiran B

2604.05456 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: none

This paper introduces an optimized quantum Fourier transform algorithm called PFA-TQFT that reduces the number of gates needed for quantum phase estimation from O(m²) to O(m log m) by intelligently truncating low-fidelity operations. The method maintains estimation accuracy while making quantum phase estimation more practical on current noisy quantum computers.

Key Contributions

  • Development of Phase-Fidelity-Aware Truncated QFT algorithm that reduces gate complexity from O(m²) to O(m log m)
  • Theoretical bound showing estimation error grows by at most O(2^-d) while achieving significant gate count reduction
  • Hardware-calibrated truncation strategy that adapts to native gate fidelities of specific quantum devices
  • Demonstration of noise-truncation synergy where the truncated algorithm outperforms full QFT under realistic NISQ noise conditions
quantum phase estimation quantum Fourier transform NISQ gate optimization quantum algorithms
View Full Abstract

Quantum phase estimation (QPE) is central to numerous quantum algorithms, yet its standard implementation demands an $\mathcal{O}(m^{2})$-gate quantum Fourier transform (QFT) on $m$ control qubits, a prohibitive overhead on near-term noisy intermediate-scale quantum (NISQ) devices. We introduce the \emph{Phase-Fidelity-Aware Truncated QFT} (PFA-TQFT), a family of approximate QFT circuits parameterised by a truncation depth $d$ that omits controlled-phase rotations below a hardware-calibrated fidelity threshold $\epsilon$. Our central result establishes $\mathrm{TV}(P_{\varphi},P_{\varphi}^{d})\leq \pi(m-d)/2^{d}$, showing that for $d=\mathcal{O}(\log m)$ the circuit size collapses from $\mathcal{O}(m^{2})$ to $\mathcal{O}(m\log m)$ while the estimation error grows by at most $\mathcal{O}(2^{-d})$. We characterise $d^{\ast}=\lfloor\log_{2}(2\pi/\epsilon_{2q})\rfloor$ directly from native gate fidelities, demonstrating a 31.3-43.7\% gate-count reduction at $m=30$ on IBM Eagle/Heron and IonQ Aria with negligible accuracy loss. Numerical experiments on the transverse-field Ising model confirm all theoretical predictions and reveal a \emph{noise-truncation synergy}: PFA-TQFT outperforms the full QFT under NISQ noise $\epsilon_{2q}\gtrsim 2\times10^{-3}$.
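
The gate-count arithmetic behind the truncation is easy to reproduce. A sketch, assuming the natural reading that each control qubit keeps only its $d$ nearest controlled-phase rotations; the rotation-selection rule, the prefactor-free bound, and the choice of $d$ here are illustrative, not the paper's hardware-calibrated $d^{\ast}$ (which is what yields the reported 31.3-43.7\% figures):

```python
import math

def full_qft_cp_gates(m: int) -> int:
    """Controlled-phase count of the textbook m-qubit QFT: m(m-1)/2."""
    return m * (m - 1) // 2

def truncated_cp_gates(m: int, d: int) -> int:
    """Each qubit keeps at most its d nearest controlled-phase rotations."""
    return sum(min(m - 1 - j, d) for j in range(m))

def tv_bound(m: int, d: int) -> float:
    """Total-variation bound quoted in the abstract: pi*(m - d)/2**d."""
    return math.pi * (m - d) / 2 ** d

m = 30
for d in (5, 10):
    print(d, full_qft_cp_gates(m), truncated_cp_gates(m, d),
          round(tv_bound(m, d), 4))
```

At $m=30$, $d=10$ already makes the bound small ($\approx 0.06$) while cutting the controlled-phase count from 435 to 245.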

Phase-Stable Hologram Updates for Large-Scale Neutral-Atom Array Reconfiguration

Erdong Huang, Jiayi Huang, Hongshun Yao, Xin Wang, Jin-Guo Liu

2604.04600 • Apr 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper introduces a new algorithm called weighted-projective Gerchberg-Saxton (WPGS) that improves how large arrays of neutral atoms are assembled and reconfigured for quantum computing by maintaining phase stability when updating holographic optical tweezers, preventing atom loss during transitions.

Key Contributions

  • Development of the WPGS algorithm that enforces inter-frame trap-phase continuity to prevent transient trap loss during hologram updates
  • Demonstration of scalable neutral-atom array reconfiguration with over 1000 traps including 2D/3D configurations and multilayer assembly
neutral atoms, Rydberg atoms, optical tweezers, holographic control, quantum array assembly

Assembling large-scale, defect-free Rydberg atom arrays is a key technology for neutral-atom quantum computation. Dynamic holographic optical tweezers enable the assembly and reconfiguration of such arrays, but phase mismatches between successive holograms can induce destructive interference and transient trap loss during spatial-light-modulator refresh. In this work, we introduce the weighted-projective Gerchberg--Saxton (WPGS) algorithm, a phase-stable approach to dynamic hologram updates for large-scale Rydberg atom-array reconfiguration. By enforcing inter-frame trap-phase continuity while retaining weighted intensity equalization, WPGS suppresses refresh-induced transient degradation. The phase-difference distribution between consecutive holograms further provides a simple diagnostic of transient robustness. Moreover, enforcing the phase constraint reduces the number of iterations required at each update step, thereby accelerating hologram generation. Numerical simulations of 2D and 3D reconfiguration with more than $10^3$ traps, including multilayer assembly and interlayer transport, show robust transient intensities and significantly faster updates than conventional methods. These results establish inter-frame phase continuity as a practical design principle for dynamic holographic control and scalable neutral-atom array reconfiguration.
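
The weighted Gerchberg-Saxton loop with an inter-frame phase pin can be sketched in a few lines of numpy. This is a 1D toy reconstruction of the algorithm class, not the authors' WPGS implementation: `fixed_trap_phase=None` gives plain weighted GS, while passing a previous frame's trap phases pins them across the update, which is the continuity constraint the paper builds on.

```python
import numpy as np

def wgs_hologram(n, trap_idx, n_iter=50, fixed_trap_phase=None, rng=None):
    """Weighted GS on a 1D lattice, with the FFT as the far-field propagator."""
    rng = np.random.default_rng(rng)
    phase = rng.uniform(0.0, 2.0 * np.pi, n)   # phase-only SLM pattern
    w = np.ones(len(trap_idx))                 # per-trap intensity weights
    for _ in range(n_iter):
        far = np.fft.fft(np.exp(1j * phase)) / np.sqrt(n)
        amp = np.abs(far[trap_idx])
        w *= amp.mean() / np.maximum(amp, 1e-12)   # equalize trap intensities
        trap_phase = (fixed_trap_phase if fixed_trap_phase is not None
                      else np.angle(far[trap_idx]))  # free vs pinned phases
        constrained = np.zeros(n, dtype=complex)
        constrained[trap_idx] = w * np.exp(1j * trap_phase)
        phase = np.angle(np.fft.ifft(constrained))   # back-propagate, keep phase
    far = np.fft.fft(np.exp(1j * phase)) / np.sqrt(n)
    return phase, np.abs(far[trap_idx]) ** 2

traps = np.array([10, 20, 30, 40])
_, I = wgs_hologram(256, traps, rng=0)
print(I.std() / I.mean())  # trap-intensity non-uniformity after convergence
```

Seeding the next frame's iteration with the current frame's trap phases as `fixed_trap_phase` is what keeps consecutive holograms from interfering destructively during the SLM refresh.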

Digital-Analog Quantum Simulation and Computing: A Perspective on Past and Future Developments

Lucas Lamata

2604.04438 • Apr 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This perspective paper reviews the emerging digital-analog quantum computing paradigm, which combines large analog quantum operations (from native platform interactions) with digital quantum gates to achieve both scalability and universality. The author provides an overview of the field's evolution over the past decade and discusses future possibilities for this hybrid approach.

Key Contributions

  • Comprehensive review of digital-analog quantum computing paradigm evolution
  • Analysis of how hybrid approaches can overcome limitations of purely digital or analog quantum computing
  • Perspective on future developments combining scalability with universality
digital-analog quantum computing, quantum simulation, hybrid quantum algorithms, quantum gates, scalability

Quantum simulation and computing have traditionally been based on two main paradigms, namely, digital and analog. In the digital paradigm, usually single and two-qubit gates (where qubit is an acronym for quantum bit) are employed as building blocks for scalable, universal quantum computing, although errors add up fast and error correction will be ultimately needed for scaling up. In the analog paradigm, large analog blocks are normally employed for a unitary dynamics that carries out the computation, enabling quantum operations on many qubits with reduced errors, but with the drawback of a limited choice of evolutions and lack of universality. In the past decade, a new paradigm has emerged, showing interesting possibilities for quantum simulation and computing in the near and mid term. This is the paradigm of digital-analog quantum technologies, which proposes to combine the best of both paradigms: large analog blocks, provided by native interactions of the employed quantum platform, enabling scalability, combined with digital gates, allowing for more versatility and, ultimately, universality. In this Perspective, I give an overview of the evolution of the field along the past decade, and an outlook for its future possibilities.

Noise tolerance via reinforcement in the quantum search problem

Marjan Homayouni-Sangari, Abolfazl Ramezanpour

2604.04137 • Apr 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper demonstrates that reinforcement techniques can exponentially improve quantum search algorithms, reducing computation time from √D to ln D steps and significantly increasing noise tolerance. The researchers use numerical simulations to show that reinforced quantum search maintains higher success probability in noisy environments compared to standard quantum search algorithms.

Key Contributions

  • Exponential speedup of quantum search from √D to ln D complexity through reinforcement
  • Demonstrated exponentially larger noise threshold for reinforced quantum search algorithms
  • Numerical characterization of noise tolerance for both coherent and incoherent noise in multi-qubit and qudit systems
quantum search, Grover's algorithm, reinforcement, noise tolerance, error mitigation

We find that reinforcement exponentially reduces the computation time of the quantum search problem from $\sqrt{D}$ to $\ln D$ in a $D$-dimensional system. Therefore, a reinforced quantum search is expected to exhibit an exponentially larger noise threshold compared to a standard search algorithm in a noisy environment. We use numerical simulations to characterize the level of noise tolerance via reinforcement in the presence of both coherent and incoherent noise, considering a system of $N$ qubits and a single $D$-level (qudit) system. Our results show that reinforcement significantly enhances the algorithm's success probability and improves the scaling of its computation time with system size. These findings indicate that reinforcement offers a promising strategy for error mitigation, especially when a precise noise model is unavailable.
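
To see what the quoted scalings mean in step counts, here is a back-of-the-envelope comparison, assuming the standard Grover constant $\pi/4$ for the $\sqrt{D}$ case and a unit prefactor for the reinforced $\ln D$ case (the paper's actual prefactors are not given in the abstract):

```python
import math

def grover_steps(D: int) -> int:
    """Standard quantum search: about (pi/4) * sqrt(D) oracle calls."""
    return math.ceil(math.pi / 4 * math.sqrt(D))

def reinforced_steps(D: int) -> int:
    """Reinforced-search scaling from the abstract, with unit prefactor."""
    return math.ceil(math.log(D))

for k in (10, 20, 30):
    D = 2 ** k
    print(f"D=2^{k}: sqrt-scaling {grover_steps(D)}, ln-scaling {reinforced_steps(D)}")
```

For $D = 2^{20}$ this is roughly 805 steps versus 14, which is the gap that translates into the exponentially larger tolerance to per-step noise.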

Microstructural Topology as a Prescriptor for Quantum Coherence: Towards A Unified Framework for Decoherence in Superconducting Qubits

Vinayak P. Dravid, Akshay A. Murthy, Peter Lim, Gabriel T. dos Santos, Ramandeep Mandia, James M. Rondinelli, Mark C. Hersam, Roberto dos Reis

2604.03951 • Apr 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops a theoretical framework to separate different causes of decoherence in superconducting quantum bits (qubits) by distinguishing between material microstructure effects and device geometry effects. The authors propose a way to independently measure and control these factors to better engineer quantum devices with longer coherence times.

Key Contributions

  • Introduction of separable framework distinguishing classical and quantum microstructure effects from geometry-dependent coupling in superconducting qubits
  • Development of channel-specific prescriptor methodology for independent optimization of decoherence loss pathways
  • Establishment of perturbative separability criterion and falsifiable experimental protocol for validating the theoretical framework
superconducting qubits, decoherence, transmon, quantum coherence, microstructure

In superconducting quantum circuits, decoherence improvements are frequently obtained through process interventions that simultaneously modify surface chemistry, microstructural topology, and device geometry, leaving mechanistic attribution structurally underdetermined. Predictive materials engineering requires measurable structural statistics to be separated from geometry-dependent coupling coefficients into independently testable factors. We introduce the concept of classical and quantum microstructure. In that context, we formulate a channel-wise separable framework for decoherence in superconducting transmon qubits in which each loss channel is described by a reduced prescriptor. Here, a channel-specific microstructural state variable is determined independently of device geometry, and a geometry-dependent coupling functional is computable from field solutions without reference to surface chemistry. We derive this product form from a spatially resolved kernel representation and establish a perturbative separability criterion that defines the regime where independent variation of the variables is valid. The framework specifies five prescriptor classes for dominant loss pathways in transmon-class devices. Falsifiability is operationalized through a pre-committed 2x2 experimental protocol in which the variables must satisfy independent ratio checks within propagated uncertainty. A Minimum-Dataset Specification standardizes reporting for cross-laboratory inference. Part I establishes the conceptual and mathematical architecture; coordinated experimental validation is reserved for Part II.

Novel permanent magnet array geometries for scalable trapped-ion quantum computing in a laser-free entanglement architecture

Mitchell G. Peaks

2604.03116 • Apr 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper presents a new permanent magnet array design for trapped-ion quantum computers that creates localized magnetic field gradients to enable laser-free qubit operations and individual qubit addressing. The design improves scalability by allowing easier ion transport and relaxing alignment constraints compared to existing magnetic field approaches.

Key Contributions

  • Novel permanent magnet geometry that creates localized asymmetric magnetic fields for improved ion transport in QCCD architectures
  • Laser-free entanglement scheme using magnetic field gradients that reduces engineering complexity and improves scalability
  • Relaxed alignment tolerances in two dimensions making experimental implementation more practical
trapped-ion quantum computing, QCCD, permanent magnet arrays, magnetic field gradients, laser-free entanglement

A novel design is presented for a permanent magnet array to address specific challenges with scalable trapped-ion quantum computing systems. Design and optimization of this magnet geometry is motivated by concepts for large-scale Quantum Charge-Coupled Device (QCCD) architectures. This proposal is relevant to magnetic field gradient schemes for laser-free entanglement using long-wavelength radiation, and individual addressing based on spatially dependent, magnetic field sensitive qubits. This configuration generates a localized, asymmetric magnetic field, yielding a region for ion transport into and out of a strong magnetic field gradient, while minimizing the absolute field experienced by the ion. This is a distinct improvement for scalability over dipolar magnet geometries where a strong magnetic field surrounds a magnetic field nil in three dimensions, which is problematic for ion transport applications. The design also relaxes the alignment constraints for experimental setup by allowing greater tolerance to misalignment in two dimensions. Additionally, the potential to scale a permanent magnet scheme in QCCD systems circumvents engineering challenges associated with using large electrical currents to generate the field gradient. Finally, a conceptual discussion is given for incorporating the design into a scalable QCCD type architecture.

Universal Robust Quantum Gates via Doubly Geometric Control

Hai Xu, Tao Chen, Junkai Zeng, Xiu-Hao Deng, Fang Gao, Xin Wang, Zheng-Yuan Xue, Chengxian Zhang

2604.02962 • Apr 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops a new framework for creating robust quantum gates that can suppress multiple types of errors simultaneously using geometric quantum computation principles. The approach achieves fourth-order suppression of control errors and can be extended to sixth-order suppression, potentially enabling more fault-tolerant quantum computing.

Key Contributions

  • Established a general framework for doubly geometric quantum gates with systematic error characterization
  • Demonstrated simultaneous fourth-order suppression of control errors with extension to sixth-order suppression
geometric quantum computation, fault-tolerant quantum computing, error suppression, robust quantum gates, geometric phases

Geometric quantum computation offers a potential route to fault-tolerant quantum information processing by exploiting the global nature of geometric phases. However, achieving controlled high-order suppression of multiple error sources remains a long-standing limitation, particularly in realistic large-scale circuits with complex noise environments. This limitation is largely due to the absence of a general framework that directly characterizes error accumulation and enables systematic improvement. Here we establish such a framework for universal doubly geometric gates by embedding target operations into a hierarchy of level-n identity constructions. This approach enables direct quantification of error accumulation while removing structural constraints inherent in previous schemes. We analytically show that the defining conditions lead to simultaneous fourth-order suppression of control errors, with a systematic extension to sixth-order suppression via higher-level constructions. Our results establish doubly geometric control as a general and scalable route toward high-order robust quantum gates, with potential implications for fault-tolerant quantum information processing.

Space-Efficient Quantum Algorithm for Elliptic Curve Discrete Logarithms with Resource Estimation

Han Luo, Ziyi Yang, Ziruo Wang, Yuexin Su, Tongyang Li

2604.02311 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a more space-efficient quantum algorithm for breaking elliptic curve cryptography by optimizing Shor's algorithm to use fewer logical qubits. The researchers improved the modular inversion operation to reduce the quantum computer requirements from 2124 to 1333 logical qubits for 256-bit curves.

Key Contributions

  • Space-efficient reversible modular inversion algorithm using 3n + 4⌊log₂ n⌋ + O(1) logical qubits
  • Reduced logical qubit requirements for ECDLP from 2124 to 1333 qubits for 256-bit curves
  • Optimized controlled arithmetic components with concrete circuit constructions
Shor's algorithm, elliptic curve cryptography, quantum cryptanalysis, modular inversion, logical qubits

Solving the Elliptic Curve Discrete Logarithm Problem (ECDLP) is critical for evaluating the quantum security of widely deployed elliptic-curve cryptosystems. Consequently, minimizing the number of logical qubits required to execute this algorithm is a key objective. In implementations of Shor's algorithm, the space complexity is largely dictated by the modular inversion operation during point addition. Starting from the extended Euclidean algorithm (EEA), we refine the register-sharing method of Proos and Zalka and propose a space-efficient reversible modular inversion algorithm. We use length registers together with location-controlled arithmetic to store the intermediate variables in a compact form throughout the computation. We then optimize the stepwise update rules and give concrete circuit constructions for the resulting controlled arithmetic components. This leads to a modular inversion circuit that uses $3n + 4\lfloor \log_2 n \rfloor + O(1)$ logical qubits and $204n^2\log_2 n + O(n^2)$ Toffoli gates. By inserting this modular inversion component into the controlled affine point-addition circuit, we obtain a space-efficient algorithm for the ECDLP with $5n + 4\lfloor \log_2 n \rfloor + O(1)$ qubits and $O(n^3)$ Toffoli gates. In particular, for a 256-bit prime-field curve, our estimate reduces the logical-qubit count to 1333, compared with 2124 in the previous low-width implementation of Häner et al.
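
The headline qubit counts can be reproduced from the leading terms quoted in the abstract. A quick sketch; the $O(1)$ additive constants are not specified there, which is why the $n=256$ leading-order figure of 1312 sits 21 qubits below the paper's full count of 1333:

```python
def floor_log2(n: int) -> int:
    """floor(log2 n) for positive integers, via bit length."""
    return n.bit_length() - 1

def inversion_width(n: int) -> int:
    """Modular-inversion width, leading terms: 3n + 4*floor(log2 n)."""
    return 3 * n + 4 * floor_log2(n)

def ecdlp_width(n: int) -> int:
    """Full ECDLP circuit width, leading terms: 5n + 4*floor(log2 n)."""
    return 5 * n + 4 * floor_log2(n)

print(inversion_width(256), ecdlp_width(256))  # 800 1312
```

At the NIST P-256 size ($n=256$), the leading-order count of 1312 plus the unspecified constant matches the reported 1333, against 2124 for the prior low-width construction.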

Lemniscate phase trajectories for high-fidelity GHZ state preparation in trapped-ion chains

Evgeny V. Anikin, Andrey Chuchalin, Dimitrii Donchenko, Olga Lakhmanskaya, Kirill Lakhmanskiy

2604.02301 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper develops improved laser pulse techniques for creating high-fidelity GHZ (maximally entangled) states in chains of trapped ions. The new 'lemniscate pulse' method reduces preparation errors from η⁴ to η⁶ scaling by using special amplitude and phase modulation that traces a figure-eight pattern.

Key Contributions

  • Development of lemniscate pulse technique that improves GHZ state fidelity scaling from η⁴ to η⁶
  • Demonstration of 10⁻⁴ infidelity achievable for 20-ion chains, significantly better than conventional bell-like pulses
trapped-ion, GHZ-states, multipartite-entanglement, Lamb-Dicke-parameter, quantum-gates

In trapped-ion chains, multipartite GHZ states can be prepared natively with the help of a single bichromatic laser pulse. However, higher-order terms in the expansion in the Lamb-Dicke parameter $\eta$ limit the GHZ state preparation infidelity for rectangular and bell-like pulses to the order of $\eta^4$. For tens of ions, the infidelity caused by out-of-Lamb-Dicke effects can reach several percent. We propose an amplitude- and phase-modulated pulse shape, an "echoed lemniscate pulse", which cancels this contribution to the error in the leading order. For the proposed pulse, the infidelity scales as $\eta^6$. The improved scaling is achieved because of a special phase trajectory of a collective motional mode following the figure-eight curve (lemniscate). We demonstrate that the lemniscate pulse allows achieving lower infidelity than bell-like pulses, which can be as low as $10^{-4}$ for $20$-ion chains.

Quantum Time-Space Tradeoffs for Exponential Dynamic Programming

Susanna Caroppo, Jevgēnijs Vihrovs, Dārta Zajakina, Aleksejs Zajakins

2604.02233 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops quantum algorithms for dynamic programming problems that use less quantum memory (QRAM) than previous approaches by trading memory requirements for computation time. The work builds on earlier quantum dynamic programming algorithms but makes them more practically implementable by reducing their demanding memory requirements while still maintaining quantum speedups over classical methods.

Key Contributions

  • Novel quantum time-space tradeoffs for dynamic programming algorithms that reduce QRAM requirements
  • Combination of quantum algorithms with quantized classical strategies to achieve better space complexity while retaining speedups
quantum algorithms dynamic programming QRAM time-space tradeoffs NP-hard problems
View Full Abstract

We investigate the quantum algorithms for dynamic programming by Ambainis et al. (SODA'19). While giving provable complexity speedups and applicable to a variety of NP-hard problems, these algorithms have a notable drawback: they require a large amount of Quantum Random Access Memory (QRAM), which potentially could be very challenging to implement in a physical quantum computer. In this work, we study how we can improve the space complexity by trading it for time, while still retaining a speedup over the classical algorithms. We show novel quantum time-space tradeoffs, which we obtain by adjusting the parameters of these algorithms and combining them with "quantized" classical strategies.
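The canonical exponential dynamic program these quantum algorithms accelerate is Held-Karp for the travelling salesman problem, whose $2^n$-sized table is exactly the kind of state that drives the QRAM requirement. A classical baseline sketch (the standard textbook algorithm, not code from the paper):

```python
import itertools

def held_karp(dist):
    """Classical Held-Karp dynamic program for TSP: O(2^n * n^2) time,
    O(2^n * n) space. The 2^n-sized table is the kind of state the
    quantum DP algorithms of Ambainis et al. must hold in QRAM,
    motivating the time-space tradeoffs studied in the paper."""
    n = len(dist)
    # dp[(mask, j)] = cheapest path starting at 0, visiting set `mask`, ending at j
    dp = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in itertools.combinations(range(1, n), size - 1):
            mask = 1 | sum(1 << j for j in subset)
            for j in subset:
                prev = mask ^ (1 << j)
                dp[(mask, j)] = min(
                    dp[(prev, k)] + dist[k][j] for k in subset if k != j
                )
    full = (1 << n) - 1
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# 4-city example: the optimal tour 0-1-3-2-0 has cost 80
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(held_karp(dist))  # 80
```

The paper's tradeoffs interpolate between this full-table regime and slower, lower-memory variants while retaining a quantum speedup.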

High-threshold decoding of non-Pauli codes for 2D universality

Julio C. Magdalena de la Fuente, Noa Feldman, Jens Eisert, Andreas Bauer

2604.02033 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new decoding method for non-Pauli quantum error correction codes that can achieve universal quantum computation in 2D topological codes. The researchers demonstrate a high error threshold of ~2.5% using a just-in-time matching decoder, which is close to the performance of conventional Pauli codes.

Key Contributions

  • Development of just-in-time matching decoder for non-Pauli stabilizer codes achieving ~2.5% error threshold
  • Demonstration of universal gate set implementation on 2D topological codes with comparable performance to quantum memory
quantum error correction topological codes fault-tolerant quantum computing non-Pauli stabilizers universal gate set
View Full Abstract

Topological codes have many desirable properties that allow fault-tolerant quantum computation with relatively low overhead. A core challenge for these codes, however, is to achieve a low-overhead universal gate set with limited connectivity. In this work, we explore a non-Pauli stabilizer code that can be used to complete a universal gate set on topological toric and surface codes in strictly two dimensions. Fault-tolerant syndrome extraction for the non-Pauli code requires mid-circuit $X$ corrections, a key difference to conventional Pauli codes. We construct and benchmark a just-in-time (JIT) matching decoder to reliably decide these corrections. Under a phenomenological error model with equally likely physical and measurement errors, we find a high threshold of $\approx 2.5\,\%$, close to the $\approx 2.9\,\%$ of a decoder with access to the full syndrome history. We also perform a finite-size scaling analysis to estimate how the logical error rate scales below threshold and verify an exponential suppression in both physical error rate and in the system size. A second global decoding step for $Z$ errors is required and the non-Clifford gates in the circuit reduce the threshold from $\approx 2.9\,\%$ to $\approx 1.8\,\%$ with a naive decoder. We show how $Z$ decoding can be improved using knowledge of the $X$ corrections, pushing the threshold to $\approx 2.2\,\%$. Our results suggest non-Clifford logic in 2D codes could perform comparably to 2D quantum memory. Our formalism for efficient benchmarking and decoding directly generalizes to a broader family of CSS codes whose $X$ stabilizers are twisted by diagonal Clifford operators, and spacetime versions thereof, defined by CSS-like circuits enriched by $CCZ$, $CS$, and $T$ gates.

Transversal non-Clifford gates on almost-good quantum LDPC and quantum locally testable codes

Yiming Li, Zimu Li, Zi-Wen Liu

2604.01874 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates that certain quantum error-correcting codes with excellent parameters can implement fault-tolerant non-Clifford gates (specifically multi-controlled-Z gates) directly through transversal operations. The authors use topological methods to construct these gates on quantum LDPC and locally testable codes, achieving nearly optimal code performance while enabling universal quantum computation.

Key Contributions

  • First demonstration of transversal non-Clifford gates on quantum codes with nearly optimal parameters
  • Development of algebraic-topological framework for constructing 'cupcap gates' that enable fault-tolerant universal quantum computation
  • Proof that multi-controlled-Z gates arise naturally as topological phenomena in quantum LDPC codes
quantum error correction LDPC codes fault tolerance transversal gates non-Clifford gates
View Full Abstract

We exhibit nontrivial transversal logical multi-controlled-$Z$ gates on $[\![N,Θ(N),\tilde{Θ}(N)]\!]$ quantum low-density parity-check codes and $[\![N,Θ(N),\tilde{Θ}(N)]\!]$ quantum locally testable codes with soundness $\tilde{Θ}(1)$, combining nearly optimal code parameters with fault-tolerant non-Clifford gates for the first time. Remarkably, our proofs are almost entirely algebraic-topological, showing that such presumably intricate logical gates naturally arise as a fundamental topological phenomenon. We develop a general framework for constructing a rich new family of homological invariant forms which we call "cupcap gates" that induce transversal logical multi-controlled-$Z$ and, building on insights from [Li et al., arXiv:2603.25831], covering space methods to certify their nontriviality. The claimed almost-good code results follow immediately as examples.
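The target logical operation here is a diagonal gate: a multi-controlled-$Z$ flips the sign of exactly one computational basis state, the all-ones pattern on its qubits. A minimal statevector sketch of that gate alone (the code-level transversal construction in the paper is far more involved):

```python
def multi_controlled_z(state, controls):
    """Multiply by -1 the amplitudes whose bits are 1 on all `controls`:
    the action of a multi-controlled-Z. This shows only the logical gate
    the paper's transversal 'cupcap' constructions implement, not the
    homological construction itself."""
    return [-a if all((i >> q) & 1 for q in controls) else a
            for i, a in enumerate(state)]

# CCZ on the uniform 3-qubit superposition: only |111> changes sign
state = [0.5 ** 1.5] * 8
state = multi_controlled_z(state, [0, 1, 2])
print([round(a, 3) for a in state])  # seven entries +0.354, last entry -0.354
```

Because the gate is diagonal with a degree-3 phase polynomial, it is non-Clifford, which is exactly what makes a transversal realization on good qLDPC codes notable.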

Twisted Fiber Bundle Codes over Group Algebras

Chaobin Liu

2604.01478 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper introduces a new method for constructing quantum error-correcting codes called twisted fiber bundle codes, which use mathematical structures over group algebras to potentially create codes with more logical qubits than existing methods while maintaining the same physical qubit count and error-correction capability.

Key Contributions

  • Introduction of twisted fiber bundle construction for quantum CSS codes over group algebras
  • Demonstration that singular chain-compatible twists can increase the number of logical qubits while maintaining blocklength and minimum distance
quantum error correction CSS codes logical qubits group algebras fiber bundle codes
View Full Abstract

We introduce a twisted fiber-bundle construction of quantum CSS codes over group algebras \(R=\mathbb F_2[G]\), where each base generator carries a generator-dependent \(R\)-linear fiber twist satisfying a flatness condition. This construction extends the untwisted lifted product code, recovered when all twists are identities. We show that invertible twists (satisfying a flatness condition) give a complex chain-isomorphic to the untwisted one, so the resulting binary CSS codes have the same blocklength \(n\) and encoded dimension \(k\). In contrast, singular chain-compatible twists can lower boundary ranks and increase the number of logical qubits. Examples over \(R=\mathbb F_2[D_3]\) show that the twisted fiber bundle code can outperform the corresponding untwisted lifted-product code in \(k\) while keeping the same \(n\) and, in our examples, the same minimum distance \(d\).
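For context, a CSS code is specified by binary parity-check matrices $H_X$, $H_Z$ satisfying $H_X H_Z^T = 0$ over $\mathbb F_2$, and encodes $k = n - \mathrm{rank}(H_X) - \mathrm{rank}(H_Z)$ logical qubits, the quantity the singular twists increase. A minimal GF(2) bookkeeping sketch (generic CSS arithmetic with the Steane code as example, not the paper's group-algebra construction):

```python
def gf2_rank(matrix):
    """Rank over GF(2); rows are lists of 0/1."""
    m = [row[:] for row in matrix]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def css_logical_qubits(hx, hz):
    """Check the CSS condition Hx * Hz^T = 0 (mod 2) and return the
    number of logical qubits k = n - rank(Hx) - rank(Hz)."""
    for rx in hx:
        for rz in hz:
            assert sum(a & b for a, b in zip(rx, rz)) % 2 == 0, "CSS condition fails"
    return len(hx[0]) - gf2_rank(hx) - gf2_rank(hz)

# Steane [[7,1,3]] code: Hx = Hz = Hamming(7,4) parity checks, so k = 7 - 3 - 3 = 1
hamming = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]
print(css_logical_qubits(hamming, hamming))  # 1
```

In the paper's terms, a singular twist that lowers the boundary (check-matrix) ranks raises $k$ at fixed $n$, which is what this formula makes explicit.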

Simultaneous operation of an 18-qubit modular array in germanium

J. J. Dijkema, X. Zhang, A. Bardakas, D. Bouman, A. Cuzzocrea, D. van Driel, D. Girardi, L. E. A. Stehouwer, G. Scappucci, A. M. J. Zwerver, N. W. Hen...

2604.01063 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper demonstrates the successful operation of an 18-qubit quantum computing system built using germanium semiconductor technology, achieving high-fidelity quantum operations across all qubits simultaneously. The researchers developed a modular architecture that can be scaled up to larger systems while maintaining excellent performance, with single-qubit gate fidelities averaging 99.8%.

Key Contributions

  • Demonstration of simultaneous operation of 18 qubits in a germanium semiconductor platform
  • Achievement of high-fidelity single-qubit gates (99.8% average) across the entire array
  • Development of scalable 2xN modular architecture for semiconductor quantum processors
  • Implementation of controlled-Z gates and generation of three-qubit GHZ entangled states
semiconductor qubits spin qubits germanium quantum gates scalable architecture
View Full Abstract

Utility-scale quantum computing requires the integration and operation of a large-scale qubit register. Semiconductor spin qubits are a primary candidate for this, due to the prospects of building integrated hybrid quantum-classical architectures. However, scaling spin-qubit systems while preserving performance and control has remained a challenge. Here, we demonstrate the operation of an 18-qubit array in germanium based on an extendable 2xN architecture. We achieve simultaneous initialization, control, and readout across the entire array, enabled by parallel operation of modular unit cells. Across the array, we achieve average and median single-qubit gate fidelities of 99.8% and 99.9%, respectively. Finally, we characterize the nearest-neighbor exchange couplings throughout the device and implement high-quality controlled-Z gates to generate a three-qubit Greenberger-Horne-Zeilinger (GHZ) state. These results demonstrate that spin-qubit arrays can be scaled while maintaining high-fidelity operation and establish a modular, extendable architecture for planar semiconductor quantum processors.
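The reported GHZ preparation uses single-qubit gates plus the native controlled-Z. In a statevector picture the standard circuit is easy to verify (a generic textbook sequence assuming a CZ-native gate set, where CNOT is compiled as H·CZ·H on the target; this is not the device's calibrated pulse sequence):

```python
import math

def apply_h(state, q):
    """Hadamard on qubit q (qubit 0 = least-significant bit)."""
    s = 1 / math.sqrt(2)
    out = list(state)
    for i0 in range(len(state)):
        if (i0 >> q) & 1:
            continue
        i1 = i0 | (1 << q)
        a0, a1 = state[i0], state[i1]
        out[i0], out[i1] = s * (a0 + a1), s * (a0 - a1)
    return out

def apply_cz(state, q1, q2):
    """Controlled-Z: phase -1 where both qubits are 1 (the native
    two-qubit gate reported for the germanium array)."""
    return [-a if (i >> q1) & 1 and (i >> q2) & 1 else a
            for i, a in enumerate(state)]

def cnot_via_cz(state, control, target):
    """CNOT compiled as H(target) . CZ . H(target) for a CZ-native gate set."""
    return apply_h(apply_cz(apply_h(state, target), control, target), target)

# |000> -> GHZ: Hadamard on qubit 0, then fan out with two CNOTs
state = [1.0] + [0.0] * 7
state = apply_h(state, 0)
state = cnot_via_cz(state, 0, 1)
state = cnot_via_cz(state, 1, 2)
# amplitudes concentrate on |000> and |111>, each 1/sqrt(2)
print([round(abs(a), 4) for a in state])
```

The interesting experimental content is, of course, doing this with 99.8% average single-qubit fidelity across 18 simultaneously operated qubits, not the circuit itself.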

Tsim: Fast Universal Simulator for Quantum Error Correction

Rafael Haenel, Xiuzhe Luo, Chen Zhao

2604.01059 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents Tsim, a high-performance quantum circuit simulator designed for quantum error correction research. The simulator uses ZX diagrams to represent quantum circuits and achieves fast sampling performance that scales linearly with Clifford gates and exponentially only with non-Clifford gates.

Key Contributions

  • Development of Tsim simulator using ZX diagram representation for quantum error correction circuits
  • Achievement of linear-time sampling in Clifford gates with GPU acceleration and vectorized compilation
  • Extension of Stim API compatibility to include T gates and arbitrary single-qubit rotations
quantum error correction quantum circuit simulation ZX diagrams Clifford gates GPU acceleration
View Full Abstract

We present Tsim, an open-source high-throughput simulator for universal noisy quantum circuits targeting quantum error correction. Tsim represents quantum circuits as ZX diagrams, where Pauli channels are modeled as parameterized vertices. Diagrams are simplified via parameterized ZX rules, and then compiled for vectorized sampling with GPU acceleration. After the one-time compilation, one can sample detector or measurement shots in linear time in the number of Clifford gates and exponentially only in the number of non-Clifford gates. Tsim implements the Stim API and fully supports the Stim circuit format, extending it with T and arbitrary single-qubit rotation instructions. For low-magic circuits, Tsim throughput can match the sampling performance of Stim.
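Tsim's cost model, linear in Clifford instructions and exponential only in non-Clifford ones, can be illustrated on a Stim-format circuit string. The parser below is a toy written for this sketch (it is not the Stim or Tsim API); the `T` instruction is the extension the paper adds to the format:

```python
# Stim-format circuit text using real Stim instruction names, plus the T
# instruction that the paper's Tsim extension adds to the format.
circuit_text = """
H 0
CX 0 1
T 1
S 1
CX 1 2
T 2
M 0 1 2
"""

CLIFFORD = {"H", "S", "CX", "CZ", "X", "Y", "Z", "M", "R"}
NON_CLIFFORD = {"T"}

def gate_counts(text):
    """Count Clifford vs non-Clifford instructions (toy parser, one
    instruction per line; ignores arguments and repeat blocks)."""
    clifford = magic = 0
    for line in text.strip().splitlines():
        name = line.split()[0]
        if name in NON_CLIFFORD:
            magic += 1
        elif name in CLIFFORD:
            clifford += 1
    return clifford, magic

clifford, magic = gate_counts(circuit_text)
print(f"Clifford instructions: {clifford}")  # sampling cost linear in these
print(f"T instructions: {magic}; cost factor ~2^{magic} = {2 ** magic}")
```

For low-magic circuits the exponential factor stays small, which is why, per the abstract, Tsim can match Stim's sampling throughput in that regime.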

Distilling Unitary Operations: A No-Go Theorem and Minimal Realization

Jiayi Zhao, Yu-Ao Chen, Guocheng Zhen, Chengkai Zhu, Ranyiliu Chen, Xin Wang

2604.01048 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper investigates how to purify noisy quantum gates, proving that no scheme using only two calls to a depolarized single-qubit gate can universally restore its ideal action, showing that three calls are both necessary and sufficient, and deriving the optimal three-call strategy.

Key Contributions

  • Proved fundamental no-go theorem that 2-slot higher-order operations cannot universally purify single-qubit unitaries
  • Established 3-slot architecture as minimal realization for non-trivial universal purification with optimal fidelity analysis
  • Provided concrete quantum circuit construction for optimal higher-order purification operation
unitary purification quantum error mitigation higher-order operations depolarizing noise indefinite causal order
View Full Abstract

Quantum gates executed on physical hardware are inevitably degraded by environmental noise. While state purification effectively distills static quantum resources, the dynamic execution of quantum algorithms requires a higher-order approach to mitigate errors on the operations themselves. In this work, we investigate unitary purification: the task of utilizing a quantum higher-order operation to partially restore the ideal action of an unknown unitary corrupted by a known noise model. Focusing on canonical depolarizing noise, we first reveal a fundamental operational obstruction. We prove that within the indefinite causal order framework, no nontrivial 2-slot higher-order operation can universally purify the set of single-qubit unitaries. Overcoming this strict limitation, we establish that a 3-slot architecture provides the minimal realization for non-trivial universal purification. We analytically derive the optimal average fidelity for the 3-slot regime, demonstrating that it strictly surpasses trivial strategies by systematically utilizing ancillary qubits as a quantum memory to absorb errors. Furthermore, we provide a concrete quantum circuit construction for this optimal higher-order operation. Our results establish the strict theoretical boundaries of distilling clean operations from noisy gates, offering immediate architectural insights for robust gate design.

Geometry-induced correlated noise in qLDPC syndrome extraction

Angelo Di Bella

2604.01040 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper investigates how the physical layout geometry of quantum error correction circuits affects correlated noise patterns in quantum LDPC codes. The research shows that optimizing the geometric routing of syndrome extraction circuits can significantly reduce logical error rates, demonstrating that physical layout should be considered alongside code design and decoding algorithms.

Key Contributions

  • Derived geometry-conditioned fault models for bivariate-bicycle quantum LDPC codes showing how physical layout affects correlated errors
  • Demonstrated through Monte Carlo simulations that optimized geometric layouts can reduce logical error rates by over 26% in tested quantum error correction codes
  • Established two key geometric metrics (effective fault weight and weighted exposure) that predict logical performance across different layout configurations
quantum error correction LDPC codes fault tolerance syndrome extraction geometric optimization
View Full Abstract

With code and syndrome-extraction schedule fixed, can routed geometry alone change the correlated fault model enough to impact logical performance? Starting from a geometry-conditioned same-tick interaction Hamiltonian, we derive a controlled retained single-and-pair data-fault model for bivariate-bicycle (BB) layouts. Two geometry metrics emerge in two kernel regimes: under a crossing-local diagnostic kernel, a matching argument reduces the support-level effective fault weight; when every support pair appears in at least one retained round with finite same-round separation, strictly positive kernels saturate the support graph, and weighted exposure becomes the discriminating quantity. Circuit-level Monte Carlo on the $[\![72, 12, 6]\!]$ and $[\![144, 12, 12]\!]$ benchmarks confirms that a biplanar bounded-thickness layout suppresses the monomial single-layer embedding penalty, with weighted exposure tracking logical error rate across 101 operating points (Spearman correlation 0.893). A single-layer logical-family optimization on BB72 reduces worst-case exposure by 26.11% and lowers logical error rate in the tested power-law window. Routed geometry should be optimized together with code, schedule, and decoder.
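The reported Spearman correlation of 0.893 between weighted exposure and logical error rate is a rank correlation: Pearson correlation computed on the ranks of the data. A stdlib sketch with made-up numbers (not the paper's 101 operating points):

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Synthetic example (invented numbers, not the paper's data): a metric whose
# ordering matches the error rate's ordering gives rank correlation 1.0
exposure = [1.0, 1.5, 2.1, 2.0, 3.2, 4.0]
log_err = [1e-5, 2e-5, 9e-5, 8e-5, 5e-4, 2e-3]
print(round(spearman(exposure, log_err), 3))  # 1.0
```

Rank correlation is the natural choice here because logical error rates span orders of magnitude, so only the ordering, not the scale, should matter.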

Two Problems on Quantum Computing in Finite Abelian Groups

Ulises Pastor-Díaz, José M. Tornero

2604.00929 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents quantum computing solutions to two mathematical problems involving finite Abelian groups: the Hidden Subgroup Problem (originally solved by Simon) and the newly introduced Fully Balanced Image Problem. The authors develop algorithms using Boolean conversion techniques combined with the Generalized Phase-Kick Back method.

Key Contributions

  • Novel quantum algorithm for the Fully Balanced Image Problem using Generalized Phase-Kick Back technique
  • Boolean conversion framework for solving group-theoretic problems on quantum computers
hidden subgroup problem finite abelian groups phase kickback quantum algorithms boolean conversion
View Full Abstract

In the context of finite Abelian groups two problems are presented and solved using quantum computing techniques. The first is the well-known Hidden Subgroup Problem, originally solved by Simon in a landmark work. The second is the Fully Balanced Image Problem, originally introduced by the authors (joint with J. Ossorio-Castillo), which is related to a certain class of mappings (which contains strictly, for instance, the family of group morphisms). Both problems are tackled using a combination of two techniques: first, a conversion into Boolean objects, better suited for quantum computing arguments, and subsequently a custom-tailored algorithm which takes advantage of the Generalised Phase-Kick Back technique.
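In Simon-style hidden subgroup algorithms the quantum subroutine returns bitstrings $y$ with $y \cdot s = 0 \pmod 2$ for the hidden string $s$, and the hidden structure is then recovered by classical linear algebra over GF(2). A sketch of that classical step only (the samples below are hand-picked for illustration, not generated by a quantum device):

```python
def solve_hidden_string(samples, n):
    """Given n-bit samples y (as ints) satisfying y . s = 0 (mod 2) for an
    unknown nonzero s, recover s as the one-dimensional GF(2) nullspace.
    This is only the classical post-processing of Simon's algorithm; the
    samples themselves come from the quantum subroutine."""
    basis = {}  # pivot bit -> reduced row
    for y in samples:
        while y:
            p = y.bit_length() - 1
            if p not in basis:
                basis[p] = y
                break
            y ^= basis[p]
    # back-substitute to reduced row echelon form
    for p in sorted(basis, reverse=True):
        for q in basis:
            if q != p and (basis[q] >> p) & 1:
                basis[q] ^= basis[p]
    free = [b for b in range(n) if b not in basis]
    assert len(free) == 1, "need more independent samples"
    s = 1 << free[0]
    # each row reads s_pivot = (row's coefficient on the free bit)
    for p, row in basis.items():
        if (row >> free[0]) & 1:
            s |= 1 << p
    return s

# Hidden string s = 101: every sample has even overlap with it
print(bin(solve_hidden_string([0b010, 0b101, 0b111], 3)))  # 0b101
```

With $n-1$ independent samples this takes $O(n^3)$ classical work, which is why the quantum part of such algorithms is the only expensive piece.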

Highly-Parallel Atom-Detection Accelerator for Tweezer-Based Neutral Atom Quantum Computers

Jonas Winklmann, Yian Yu, Xiaorang Guo, Korbinian Staudacher, Martin Schulz

2604.00816 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper presents a specialized computer chip (FPGA) accelerator that dramatically speeds up the process of detecting and measuring individual atoms in neutral atom quantum computers, reducing image analysis time from several milliseconds to just 115 microseconds. The acceleration helps overcome one of the major bottlenecks in operating these quantum computers efficiently.

Key Contributions

  • FPGA-based accelerator achieving 34.9x speedup over CPU baseline for atom detection in neutral atom quantum computers
  • Algorithm-level optimizations and hardware design solutions including prefetching mechanisms for improved scalability
  • Demonstration of consistent resource utilization across various atom array sizes contributing to scalable NAQC control systems
neutral atom quantum computing FPGA acceleration atom detection quantum control systems image analysis
View Full Abstract

Neutral atom quantum computers (NAQCs) are among the most promising computational platforms for quantum computing. Controlling and measuring individual atoms and their states, which often requires multiple imaging and image-analysis procedures, is typically the most time-consuming task during computation and contributes significantly to overall cycle times. To resolve this challenge, we propose a highly-parallel atom-detection accelerator for tweezer-based NAQCs. Our design builds on an existing state-reconstruction method and combines an algorithm-level optimization with a Field Programmable Gate Array (FPGA) implementation to maximize parallelism and reduce the run time of the image-analysis process. We identify and overcome several challenges for an FPGA implementation, such as introducing a prefetching mechanism to improve scalability and customizing bus transfers to support large bandwidths. Tested on a Xilinx UltraScale+ FPGA, our design can analyze a 256x256-pixel fluorescence image in just 115 µs, achieving 34.9x and 6.3x speedups over the original and optimized CPU baseline, respectively. Moreover, our accelerator can maintain consistent resource utilization across various atom array sizes, contributing to the ongoing efforts toward scalable and fully integrated FPGA-based control systems for NAQCs.
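At its simplest, the per-site detection task amounts to integrating fluorescence counts over a region of interest around each tweezer and thresholding. A toy software version of that step (sites, radius, threshold, and image are all invented for illustration; the paper accelerates a more involved state-reconstruction method):

```python
def detect_atoms(image, sites, radius, threshold):
    """Classify each tweezer site as occupied/empty by summing fluorescence
    counts in a square region of interest around it and thresholding.
    Toy stand-in for the image-analysis step the FPGA accelerator speeds up."""
    h, w = len(image), len(image[0])
    occupied = []
    for cy, cx in sites:
        total = 0
        for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
                total += image[y][x]
        occupied.append(total >= threshold)
    return occupied

# 8x8 toy frame: a bright atom near (2, 2), uniform background elsewhere
image = [[1] * 8 for _ in range(8)]
for y, x in [(1, 2), (2, 1), (2, 2), (2, 3), (3, 2)]:
    image[y][x] = 40
print(detect_atoms(image, sites=[(2, 2), (6, 6)], radius=1, threshold=100))
# [True, False]
```

The FPGA design's point is that each site's ROI sum is independent, so all sites can be evaluated in parallel as the pixel stream arrives.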

Quantum-Safe Code Auditing: LLM-Assisted Static Analysis and Quantum-Aware Risk Scoring for Post-Quantum Cryptography Migration

Animesh Shaw

2604.00560 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: none

This paper presents a software tool that automatically scans code to find cryptographic functions that would be vulnerable to quantum computer attacks, uses AI to assess the severity of each finding, and prioritizes which code needs to be updated first when migrating to quantum-safe cryptography.

Key Contributions

  • Development of an automated static analysis framework for identifying quantum-vulnerable cryptographic primitives in codebases
  • Integration of LLM-assisted contextual analysis with VQE-based risk scoring for prioritizing post-quantum cryptography migration
post-quantum cryptography static analysis cryptographic migration Shor's algorithm VQE
View Full Abstract

The impending arrival of cryptographically relevant quantum computers (CRQCs) threatens the security foundations of modern software: Shor's algorithm breaks RSA, ECDSA, ECDH, and Diffie-Hellman, while Grover's algorithm reduces the effective security of symmetric and hash-based schemes. Despite NIST standardising post-quantum cryptography (PQC) in 2024 (FIPS 203 ML-KEM, FIPS 204 ML-DSA, FIPS 205 SLH-DSA), most codebases lack automated tooling to inventory classical cryptographic usage and prioritise migration based on quantum risk. We present Quantum-Safe Code Auditor, a quantum-aware static analysis framework that combines (i) regex-based detection of 15 classes of quantum-vulnerable primitives, (ii) LLM-assisted contextual enrichment to classify usage and severity, and (iii) risk scoring via a Variational Quantum Eigensolver (VQE) model implemented in Qiskit 2.x, incorporating qubit-cost estimates to prioritise findings. We evaluate the system across five open-source libraries -- python-rsa, python-ecdsa, python-jose, node-jsonwebtoken, and Bouncy Castle Java -- covering 5,775 findings. On a stratified sample of 602 labelled instances, we achieve 71.98% precision, 100% recall, and an F1 score of 83.71%. All code, data, and reproduction scripts are released as open-source.
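The first stage of the pipeline is regex-based detection of quantum-vulnerable primitives. A toy scanner in the same spirit (the patterns and labels below are invented for this sketch; the actual tool covers 15 classes of primitives and layers LLM triage and VQE-based scoring on top):

```python
import re

# Toy patterns illustrating the kind of signatures a first-pass scanner
# matches; not the paper's actual rule set.
VULNERABLE_PATTERNS = {
    "RSA keygen (Shor-breakable)": re.compile(r"\bRSA\.generate\s*\("),
    "ECDSA signing (Shor-breakable)": re.compile(r"\becdsa\.|\bSigningKey\b"),
    "SHA-1 (weak hash)": re.compile(r"\bhashlib\.sha1\b"),
}

def scan(source):
    """Return (line number, label, line) for every pattern hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in VULNERABLE_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label, line.strip()))
    return findings

sample = """\
from Crypto.PublicKey import RSA
key = RSA.generate(2048)
import hashlib
digest = hashlib.sha1(data).hexdigest()
"""
for lineno, label, line in scan(sample):
    print(f"line {lineno}: {label}: {line}")
```

Pure pattern matching yields the high recall the paper reports; the precision figure then comes from the LLM stage deciding which hits are real migration work.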

LLM-Guided Evolutionary Search for Algebraic T-Count Optimization

Daniil Fisher, Valentin Khrulkov, Mikhail Saygin, Ivan Oseledets, Stanislav Straupe

2603.29894 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces VarTODD, a method for optimizing quantum circuits by reducing the number of expensive T gates using machine learning-guided search strategies. The approach uses large language models to guide evolutionary algorithms in finding better ways to minimize T-count in fault-tolerant quantum circuits, achieving improvements on standard arithmetic benchmarks.

Key Contributions

  • Introduction of VarTODD, a policy-parameterized variant of FastTODD that separates algebraic correctness from search policy optimization
  • Demonstration of LLM-guided evolutionary optimization (GigaEvo) for automated tuning of quantum circuit optimization policies, achieving significant T-count reductions on arithmetic benchmarks
T-count optimization fault-tolerant quantum computing Clifford+T circuits quantum compilation evolutionary algorithms
View Full Abstract

Reducing the non-Clifford cost of fault-tolerant quantum circuits is a central challenge in quantum compilation, since T gates are typically far more expensive than Clifford operations in error-corrected architectures. For Clifford+T circuits, minimizing T-count remains a difficult combinatorial problem even for highly structured algebraic optimizers. We introduce VarTODD, a policy-parameterized variant of FastTODD in which the correctness-preserving algebraic transformations are left unchanged while candidate generation, pooling, and action selection are exposed as tunable heuristic components. This separates the quality of the algebraic rewrite system from the quality of the search policy. On standard arithmetic benchmarks, fixed hand-designed VarTODD policies already match or improve strong FastTODD baselines, including reductions from 147 to 139 for GF(2^9) and from 173 to 163 for GF(2^10) in the corresponding benchmark branches. As a proof of principle for automated tuning, we then optimize VarTODD policies with GigaEvo, an LLM-guided evolutionary framework, and obtain additional gains on harder instances, reaching 157 for GF(2^10) and 385 for GF(2^16). These results identify policy optimization as an independent and practical lever for improving algebraic T-count reduction, while LLM-guided evolution provides one viable way to exploit it.
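The simplest T-count reduction is purely local: consecutive Z-axis phase gates on one qubit merge modulo eight eighth-turns (T = 1, S = 2, Z = 4), and only odd totals cost a T gate. TODD-family optimizers work on whole phase polynomials instead, so the sketch below (with an invented circuit representation) shows only the bookkeeping, not the paper's method:

```python
def merge_phase_gates(circuit):
    """Merge runs of consecutive Z-axis phase gates on the same qubit.
    Gates are (name, qubit, eighths) with phases in eighths of a Z turn:
    T = 1, S = 2, Z = 4 (mod 8). Toy local pass; real T-count optimizers
    such as (Fast)TODD operate on whole phase polynomials."""
    merged = []
    for gate, qubit, eighths in circuit:
        if merged and gate == "PHASE" and merged[-1][0] == "PHASE" \
                and merged[-1][1] == qubit:
            _, _, prev = merged.pop()
            total = (prev + eighths) % 8
            if total:
                merged.append(("PHASE", qubit, total))
        else:
            merged.append((gate, qubit, eighths))
    return merged

def t_count(circuit):
    """Odd eighth-turn phases each cost one T; even ones are Clifford."""
    return sum(1 for gate, _, eighths in circuit
               if gate == "PHASE" and eighths % 2)

# T; T; T on qubit 0 merges to a single 3/8-turn phase: one T, not three
circuit = [("PHASE", 0, 1), ("PHASE", 0, 1), ("PHASE", 0, 1), ("CX", (0, 1), 0)]
print(t_count(circuit), "->", t_count(merge_phase_gates(circuit)))  # 3 -> 1
```

The paper's contribution is in the search policy layered over such algebraic rewrites, i.e., which candidate merges and rewrites to attempt in what order.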

Floquet Codes from Derived Semi-Regular Hyperbolic Tessellations on Orientable and Non-Orientable Surfaces

Douglas F. Copatti, Giuliano G. La Guardia, Waldir S. Soares, Edson D. Carvalho, Eduardo B. Silva

2603.29811 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new quantum Floquet codes by using hyperbolic tessellations on various types of surfaces (both orientable and non-orientable). The authors construct these quantum error-correcting codes by mapping surfaces to hyperbolic polygons and analyzing their geometric properties.

Key Contributions

  • Construction of new quantum Floquet codes on compact orientable and non-orientable surfaces
  • Generalization of hyperbolic Floquet code constructions using semi-regular tessellations
  • Performance analysis and asymptotic behavior investigation of the developed codes
quantum error correction Floquet codes hyperbolic tessellations quantum codes fault tolerance
View Full Abstract

In this paper, we construct several new quantum Floquet codes on compact, orientable, as well as non-orientable surfaces. In order to obtain such codes, we identify these surfaces with hyperbolic polygons and examine hyperbolic semi-regular tessellations on such surfaces. The method of construction presented here generalizes similar constructions concerning hyperbolic Floquet codes on connected and compact surfaces with genus $g \geq 2$. A performance analysis and an investigation of the asymptotic behavior of these codes are also presented.
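Such constructions are governed by Euler's formula $V - E + F = 2 - 2g$, which fixes how many faces a $\{p,q\}$ tessellation needs on a genus-$g$ surface. A sketch for the regular orientable case only (the paper treats semi-regular tessellations and non-orientable surfaces, which this does not cover):

```python
from fractions import Fraction

def tessellation_data(p, q, faces):
    """Vertex count, edge count, and genus g for a regular {p,q}
    tessellation with `faces` p-gons on a closed orientable surface,
    from Euler's formula V - E + F = 2 - 2g. Illustrative only: the
    paper's codes use semi-regular tessellations and also
    non-orientable surfaces."""
    E = Fraction(p * faces, 2)   # each edge is shared by two faces
    V = Fraction(p * faces, q)   # each vertex is shared by q faces
    g = (2 - (V - E + faces)) / 2
    assert V.denominator == 1 and E.denominator == 1 and g.denominator == 1, \
        "no such tessellation with integer counts"
    return int(V), int(E), int(g)

# Six octagons of the {8,3} tessellation tile a genus-2 surface
print(tessellation_data(8, 3, 6))   # (16, 24, 2)
# The square {4,4} tessellation with four faces lives on the torus (g = 1)
print(tessellation_data(4, 4, 4))   # (4, 8, 1)
```

For hyperbolic tessellations ($1/p + 1/q < 1/2$) the genus grows linearly with the number of faces, which is what lets hyperbolic codes pack many logical qubits per physical qubit.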

Logical-to-Physical Compilation for Reducing Depth in Distributed Quantum Systems

Folkert de Ronde, Stephan Wong, Sebastian Feld

2603.29536 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper presents a compiler that optimizes distributed quantum computing circuits by identifying sequences of CNOT gates that can be parallelized and rescheduled to reduce circuit depth. The approach combines logical-to-physical decomposition with scheduling to minimize execution time while maintaining logical equivalence, specifically targeting the overhead introduced when quantum operations must be distributed across multiple connected processors.

Key Contributions

  • Development of a compiler that integrates logical-to-physical decomposition with depth-aware rescheduling for distributed quantum circuits
  • Algorithm for parallelizing sequential CNOT gate structures while maintaining logical equivalence and never increasing circuit depth
distributed quantum computing circuit compilation CNOT gates quantum circuit depth entanglement distribution
View Full Abstract

Quantum computing is expected to become a foundational technology for solving problems that exceed the capabilities of classical systems. As quantum algorithms and hardware technologies continue to advance, the need for scalable architectures becomes increasingly clear. Distributed quantum computing offers a promising path forward by interconnecting multiple smaller processors into a larger, more powerful system. However, distributed quantum computing introduces significant circuit depth overhead, as logical operations are typically decomposed into sequential physical procedures that require entanglement generation. These sequential operations limit the reliability of quantum algorithms in the NISQ era due to noise. In this work, we present a compiler that integrates logical-to-physical decomposition with depth-aware rescheduling to reduce the execution cost of distributed quantum circuits. The compiler identifies sequences of logical CNOT gates that share a control or target qubit, reschedules them into parallel instruction groups, and applies decompositions that allow multiple gates to be executed simultaneously using distributed shared entanglement resources. An algorithm is proposed that ensures parallelism is created when possible while keeping logical equivalence and that circuit depth is never increased. Benchmark results demonstrate that the compiler consistently reduces circuit depth for circuits containing inherently sequential CNOT structures, while leaving already-parallel circuits unchanged. These results highlight the value of combining scheduling and hardware-aware decomposition, and establish the compiler as a practical tool for improving the fidelity of distributed quantum computations.
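The key structural observation is that consecutive CNOTs sharing a control qubit commute with one another, so a fan-out run can be served by one shared entanglement resource (the cat-state fan-out commonly used in distributed quantum computing) instead of sequential gate teleportations. A toy grouping pass in that spirit (heavily simplified; not the paper's compiler):

```python
def group_shared_control(cnots):
    """Group maximal runs of consecutive CNOTs that share a control qubit.
    CNOTs with a common control and distinct targets commute, so each
    group can be executed in parallel using one shared entanglement
    resource. Toy version of the grouping idea only."""
    groups = []
    for control, target in cnots:
        if groups and groups[-1][0] == control and target not in groups[-1][1]:
            groups[-1][1].append(target)
        else:
            groups.append((control, [target]))
    return groups

# Fan-out of qubit 0 onto 1, 2, 3 collapses from three sequential
# distributed gates to a single parallel group
cnots = [(0, 1), (0, 2), (0, 3), (2, 4)]
for control, targets in group_shared_control(cnots):
    print(f"control {control} -> targets {targets}")
# control 0 -> targets [1, 2, 3]
# control 2 -> targets [4]
```

The paper's compiler additionally handles shared-target runs, rescheduling across non-adjacent gates, and the guarantee that depth never increases.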

PAEMS: Precise and Adaptive Error Model for Superconducting Quantum Processors

Songhuan He, Yifei Cui, Cheng Wang

2603.29439 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces PAEMS, a new error modeling system for superconducting quantum processors that more accurately simulates qubit errors for quantum error correction. The model uses a qubit-wise separation framework to capture how errors evolve across space and time, showing significant improvements in error correlation accuracy compared to existing models.

Key Contributions

  • Introduction of PAEMS error model with qubit-wise separation framework and leakage propagation
  • Significant reduction in error correlations (19.5x timelike, 9.3x spacelike, 5.2x spacetime) compared to previous methods
  • 58-73% accuracy improvement over Google's SI1000 model across multiple quantum platforms
quantum error correction superconducting qubits error modeling fault tolerance QPU
View Full Abstract

Superconducting quantum processor units (QPUs) are incapable of producing massive datasets for quantum error correction (QEC) because of hardware limitations. Thus, QEC decoders heavily depend on synthetic data from qubit error models. Classic depolarizing error models with polynomial complexity present limited accuracy. Coherent density matrix methods suffer from exponential complexity $\propto O(4^n)$ where $n$ represents the number of qubits. This paper introduces PAEMS: a precise and adaptive qubit error model. Its qubit-wise separation framework, incorporating leakage propagation, captures error evolvements crossing spatial and temporal domains. Utilizing repetition-code experiment datasets, PAEMS effectively identifies the intrinsic qubit errors through an end-to-end optimization pipeline. Experiments on IBM's QPUs have demonstrated a 19.5$\times$, 9.3$\times$, and 5.2$\times$ reduction in timelike, spacelike, and spacetime error correlation, respectively, surpassing all of the previous works. It also outperforms the accuracy of Google's SI1000 error model by 58$\sim$73\% on multiple quantum platforms, including IBM's Brisbane, Sherbrooke, and Torino, as well as China Mobile's Wuyue and QuantumCTek's Tianyan.

YZ-plane measurement-based quantum computation: Universality and Parity Architecture implementation

Jaroslav Kysela, Katharina Ludwig, Nitica Sakharwade, Anette Messinger, Wolfgang Lechner

2603.29379 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies measurement-based quantum computation (MBQC) using measurements restricted to the YZ plane of the Bloch sphere, proving that certain deterministic quantum computations must use such measurements and demonstrating universal quantum computation is possible with only YZ-plane measurements. The authors also show how these restricted measurement patterns can be implemented using local interactions within the Parity Architecture framework.

Key Contributions

  • Proof that uniformly deterministic MBQC with input-output coincidence requires YZ-plane measurements on register-logic graphs
  • Demonstration of universal quantum computation using only YZ-plane measurements and connection to XZ-plane patterns
  • Implementation framework for YZ-plane patterns in Parity Architecture with purely local interactions
Tags: measurement-based quantum computation, MBQC, YZ-plane measurements, universal quantum computation, register-logic graphs
Abstract

We define the class of register-logic graphs and prove that any uniformly deterministic measurement-based quantum computation (MBQC) where the inputs coincide with the outputs must be driven on such graphs by measurements in the $YZ$ plane of the Bloch sphere. This observation is revisited in the context that goes beyond uniform determinism, where we present a universal $YZ$-plane-only measurement pattern and establish a connection between $YZ$-plane-only and $XZ$-plane-only patterns. These results conclude the line of research on universal patterns with measurements restricted to one of the principal planes of the Bloch sphere. We further demonstrate, within the framework of the Parity Architecture, that $YZ$-plane patterns with the register-logic graph can be embedded into another graph with purely local interactions, and we extend this case to the scenario of universal quantum computation.
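For readers unfamiliar with the geometry, a YZ-plane measurement projects onto single-qubit states whose Bloch vector has no X component. A minimal NumPy check, using one common parametrization of such states (an assumption here; conventions differ by a phase):

```python
import numpy as np

# States in the YZ plane of the Bloch sphere have Bloch vector
# (0, sin t, cos t). One standard parametrization (assumed here) is
#   |yz(t)> = cos(t/2)|0> + i sin(t/2)|1>.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def yz_state(t):
    return np.array([np.cos(t / 2), 1j * np.sin(t / 2)])

t = 0.7
psi = yz_state(t)
expect = lambda P: np.real(psi.conj() @ P @ psi)
# Bloch vector lies in the YZ plane: <X> = 0, <Y> = sin t, <Z> = cos t.
print(expect(X), np.isclose(expect(Y), np.sin(t)), np.isclose(expect(Z), np.cos(t)))
```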

Shor's algorithm is possible with as few as 10,000 reconfigurable atomic qubits

Madelyn Cain, Qian Xu, Robbie King, Lewis R. B. Picard, Harry Levine, Manuel Endres, John Preskill, Hsin-Yuan Huang, Dolev Bluvstein

2603.28627 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper shows that Shor's algorithm for breaking cryptography could be implemented with as few as 10,000 neutral-atom qubits, dramatically reducing previous estimates that required millions of qubits. The researchers achieve this by using advanced quantum error correction codes and optimized circuit designs to make cryptographically relevant quantum computing more feasible.

Key Contributions

  • Reduced resource requirements for Shor's algorithm from millions to ~10,000 physical qubits
  • Demonstrated feasibility of cryptographically relevant quantum computing with neutral-atom architectures
  • Provided concrete runtime estimates for breaking P-256 elliptic curves and RSA-2048 encryption
Tags: Shor's algorithm, quantum error correction, neutral atoms, fault-tolerant quantum computing, cryptography
Abstract

Quantum computers have the potential to perform computational tasks beyond the reach of classical machines. A prominent example is Shor's algorithm for integer factorization and discrete logarithms, which is of both fundamental importance and practical relevance to cryptography. However, due to the high overhead of quantum error correction, optimized resource estimates for cryptographically relevant instances of Shor's algorithm require millions of physical qubits. Here, by leveraging advances in high-rate quantum error-correcting codes, efficient logical instruction sets, and circuit design, we show that Shor's algorithm can be executed at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits. Increasing the number of physical qubits improves time efficiency by enabling greater parallelism; under plausible assumptions, the runtime for discrete logarithms on the P-256 elliptic curve could be just a few days for a system with 26,000 physical qubits, while the runtime for factoring RSA-2048 integers is one to two orders of magnitude longer. Recent neutral-atom experiments have demonstrated universal fault-tolerant operations below the error-correction threshold, computation on arrays of hundreds of qubits, and trapping arrays with more than 6,000 highly coherent qubits. Although substantial engineering challenges remain, our theoretical analysis indicates that an appropriately designed neutral-atom architecture could support quantum computation at cryptographically relevant scales. More broadly, these results highlight the capability of neutral atoms for fault-tolerant quantum computing with wide-ranging scientific and technological applications.
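The number-theoretic core that Shor's algorithm accelerates can be illustrated classically: factoring $N$ reduces to finding the multiplicative order $r$ of a base $a$ modulo $N$. The toy sketch below finds $r$ by brute force, which is exactly the step the quantum computer replaces; it is not the paper's construction:

```python
from math import gcd

# Classical illustration of the reduction behind Shor's algorithm: find
# the order r of a mod N, then gcd(a**(r//2) +/- 1, N) yields a factor
# when r is even and the result is nontrivial.

def order(a: int, N: int) -> int:
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N: int, a: int):
    r = order(a, N)
    if r % 2:  # need an even order
        return None
    f = gcd(pow(a, r // 2) - 1, N)
    return (f, N // f) if 1 < f < N else None

print(shor_classical(15, 7))  # -> (3, 5)
```

Brute-force order finding takes exponential time in the bit length of N; the quantum Fourier transform finds r in polynomial time, which is why resource estimates like the one above matter for cryptography.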

Tunable Nonlocal ZZ Interaction for Remote Controlled-Z Gates Between Distributed Fixed-Frequency Qubits

Benzheng Yuan, Chaojie Zhang, Haoran He, Yangyang Fei, Chuanbing Han, Shuya Wang, Huihui Sun, Qing Mu, Bo Zhao, Fudong Liu, Weilong Wang, Zheng Shan

2603.28526 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper presents a method for creating high-fidelity quantum gates between superconducting qubits located in separate modules connected by long cables, using double-transmon couplers to enable remote quantum operations with over 99.99% fidelity across 25 cm distances.

Key Contributions

  • Development of double-transmon coupler architecture enabling remote controlled-Z gates with >99.99% fidelity
  • Demonstration of tunable nonlocal ZZ interaction with on/off ratio exceeding 10^6 for distributed quantum processors
  • Hardware-efficient solution for scaling superconducting quantum computers through modular architectures
Tags: superconducting qubits, distributed quantum computing, controlled-Z gates, double-transmon couplers, fault-tolerant quantum computing
Abstract

Fault-tolerant quantum computing requires large-scale superconducting processors, yet monolithic architectures face increasing constraints from wiring density, crosstalk, and fabrication yield. Modular superconducting platforms offer a scalable alternative, but achieving high-fidelity entangling gates between distant modules remains a central challenge, particularly for highly coherent fixed-frequency qubits. Here, we propose a distributed hardware architecture designed to overcome this bottleneck by employing a pair of double-transmon couplers (DTCs). By synchronously controlling the two DTCs stationed at opposite ends of a macroscopic cable, our scheme strongly suppresses residual static inter-module coupling while enabling on-demand activation of a non-local cross-Kerr interaction with an on/off ratio exceeding $10^6$. Through comprehensive system-level numerical simulations incorporating realistic hardware parameters, we demonstrate that this mechanism can realize a remote controlled-Z (CZ) gate with a fidelity over 99.99\% between fixed-frequency transmons housed in separate packages interconnected by a 25 cm coaxial cable. These results establish a highly viable, hardware-efficient route toward high-performance distributed superconducting processors.
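The textbook identity behind using a cross-Kerr (ZZ-type) interaction for a CZ gate: evolving under $H = \chi\,|11\rangle\langle 11|$ for a time $t$ with $\chi t = \pi$ yields exactly $\mathrm{diag}(1,1,1,-1)$, i.e. CZ. A minimal NumPy check of that identity (not a simulation of the proposed hardware):

```python
import numpy as np

# Cross-Kerr interaction H = chi * |11><11|; evolving with chi*t = pi
# gives exp(-i*pi*|11><11|) = diag(1, 1, 1, -1) = CZ.
ket11 = np.zeros(4); ket11[3] = 1.0
P11 = np.outer(ket11, ket11)                      # projector |11><11|
U = np.diag(np.exp(-1j * np.pi * np.diag(P11)))   # exp(-i*pi*P11); P11 is diagonal
CZ = np.diag([1, 1, 1, -1]).astype(complex)
print(np.allclose(U, CZ))  # -> True
```

The paper's contribution is making this interaction tunable and nonlocal across a 25 cm cable with a large on/off ratio; the gate identity itself is standard.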

Open-System Adiabatic Quantum Search under Dephasing

Afaf El Kalai, Peter J. Eder, Christian B. Mendl

2603.28506 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes how to optimize adiabatic quantum search algorithms (the adiabatic counterpart of Grover search) when the quantum system experiences dephasing noise, finding the best evolution schedule that balances speed against decoherence effects. The researchers derive mathematical expressions for optimal timing and identify fundamental limits where noise prevents further acceleration of the search algorithm.

Key Contributions

  • Derived closed-form expressions for optimal evolution schedule in noisy adiabatic Grover search
  • Identified critical dephasing threshold that defines fundamental limits for noise-assisted quantum algorithm acceleration
Tags: adiabatic quantum computing, Grover search, decoherence, dephasing, quantum algorithms
Abstract

Adiabatic quantum algorithms must evolve slowly enough to suppress non-adiabatic transitions while remaining fast enough to be practical. In open systems, this trade-off is reshaped by decoherence. For Hamiltonians subject to dephasing Lindbladians, Avron et al. [1] showed that a unique timetable exists that maximizes the fidelity with a target state. This optimal schedule is characterized by a constant tunneling rate along the adiabatic path. In this work, we revisit their analysis and apply it to the adiabatic Grover search framework, obtaining closed-form expressions for the optimal evolution schedule, the minimum runtime, and the resulting achievable fidelity. Moreover, by invoking an energy-time uncertainty argument, we identify a critical dephasing threshold, beyond which further noise-assisted acceleration is prohibited, thereby defining the physically realizable boundaries for dephasing-based adiabatic quantum search protocols.
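For context, the $O(\sqrt{N})$ scaling shared by circuit-model and adiabatic Grover search is easy to see numerically in the circuit picture: each Grover iteration rotates the state by a fixed angle, so roughly $(\pi/4)\sqrt{N}$ iterations maximize the success probability. A standard textbook sketch, unrelated to the paper's open-system schedule:

```python
import numpy as np

# Amplitude amplification: after k Grover iterations on a size-N search
# space with one marked item, the success probability is
# sin^2((2k+1) * theta) with theta = arcsin(1/sqrt(N)).

def grover_success(N: int, steps: int) -> float:
    theta = np.arcsin(1 / np.sqrt(N))          # rotation angle per iteration
    return np.sin((2 * steps + 1) * theta) ** 2

N = 4096
k = int(np.floor(np.pi / 4 * np.sqrt(N)))      # ~ (pi/4) * 64 = 50 iterations
print(k, grover_success(N, k))
```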

Mixed-register Stabilizer Codes: A Coding-theoretic Perspective

Himanshu Dongre, Lane G. Gunderman

2603.28459 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops quantum error correction codes for systems where different quantum locations can have different numbers of basis states (mixed qudits), rather than all being qubits or all having the same dimension. The authors prove theoretical constraints and construct optimal stabilizer codes by combining codes with coprime dimensions.

Key Contributions

  • General theoretical results for mixed-register Pauli operators and forbidden stabilizer code structures
  • Construction of coding-theoretically optimal mixed-register stabilizer codes from coprime local-dimensions
Tags: quantum error correction, stabilizer codes, qudits, mixed register, fault tolerance
Abstract

Protecting information in systems that have more than two basis states (qudits) offers a promising route to reducing the number of individual quantum locations that must be protected, more accurately reflects the structure of realistic quantum hardware, and has some enticing foundational strengths. While past work has largely focused on protecting information in quantum devices whose locations share a single consistent local structure, this work considers coding-theoretic constraints on devices constructed from locations which may vary in their local structures -- these are mixed-register quantum devices. In this work we provide some general results for mixed-register Pauli operators, then identify some stabilizer encoded information forms that are forbidden. Building on these insights, we construct coding-theoretically optimal mixed-register stabilizer codes from sets of codes defined on coprime local-dimensions. The construction of such codes results in codes with logical subspaces that do not directly correspond to any of the constituent local-dimensions.
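The coprime local-dimension construction leans on the same arithmetic as the Chinese remainder theorem: a register of dimension $d_1 d_2$ with $\gcd(d_1, d_2) = 1$ decomposes into a $d_1$-dimensional system paired with a $d_2$-dimensional one. A minimal check of the underlying bijection (illustrative only; the paper's codes are not reproduced here):

```python
from math import gcd

# CRT bijection: for coprime d1, d2, the map x -> (x mod d1, x mod d2)
# is one-to-one on {0, ..., d1*d2 - 1}. Checked for (d1, d2) = (2, 3).
d1, d2 = 2, 3
assert gcd(d1, d2) == 1
pairs = {(x % d1, x % d2) for x in range(d1 * d2)}
print(len(pairs) == d1 * d2)  # every pair (a, b) hit exactly once -> True
```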

Autonomous Hamiltonian certification and changepoint detection

Steven T. Flammia, Dmitrii Khitrin, Muzhou Ma, Jamie Sikora, Yu Tong, Alice Zheng

2603.26655 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: none

This paper develops protocols for quantum devices to autonomously monitor whether their internal components (Hamiltonians) have drifted from their calibrated values, using only simple single-qubit operations to detect when expensive recalibration is needed. The approach allows quantum computers to self-diagnose calibration problems without requiring external reference devices or complex multi-qubit operations that might themselves be miscalibrated.

Key Contributions

  • Autonomous Hamiltonian certification protocol that distinguishes calibration drift using only single-qubit gates and measurements
  • Online changepoint detection algorithm for continuous monitoring of quantum device calibration with optimal scaling
  • Sample complexity bounds of O(nM²ln(1/δ)/ε²) for n-qubit systems with practical evolution time requirements
Tags: Hamiltonian certification, quantum calibration, changepoint detection, stabilizer states, quantum device characterization
Abstract

Modern quantum devices require high-precision Hamiltonian dynamics, but environmental noise can cause calibrated Hamiltonian parameters to drift over time, necessitating expensive recalibration. Detecting when recalibration is needed is challenging, especially since the very gates required for sophisticated verification protocols may themselves be miscalibrated. While cloud quantum computing services implement heuristic routines for triggering recalibration, the fundamental limits of optimal recalibration are not yet known. We develop efficient Hamiltonian certification and changepoint detection protocols in the autonomous setting, where we cannot rely on an external noiseless device and use only single-qubit gates and measurements, making the protocols robust to the calibration issues for multi-qubit operations they aim to detect. For unknown $n$-qubit Hamiltonians $H$ and $H_0$ with operator norm bounded by $M$, our certification protocol distinguishes whether $\|H-H_0\|_F \geq \varepsilon$ or $\|H-H_0\|_F \leq O(\varepsilon/\sqrt{n})$ with sample complexity $O(nM^2\ln(1/\delta)/\varepsilon^2)$ and total evolution time $O(nM\ln(1/\delta)/\varepsilon^2)$. We achieve this by evolving random stabilizer product states and performing adaptive single-qubit measurements based on a classically simulable hypothesis state. Extending this to continuous monitoring, we develop an online changepoint detection algorithm using the CUSUM procedure that achieves a detection delay time bound of $O(nM\ln(M\,\mathbb{E}_\infty[T])/\varepsilon^2)$, matching the known asymptotically optimal scaling with respect to false alarm run time $\mathbb{E}_\infty[T]$. Our approach enables quantum devices to autonomously monitor their own calibration status without requiring ancillary systems, entangling operations, or a trusted reference device, offering a practical solution for robust quantum computing with contemporary noisy devices.
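The CUSUM procedure the monitoring protocol builds on is a classical online changepoint detector. A minimal sketch for a mean shift in scalar data (the drift and threshold values are illustrative assumptions, not the paper's parameters):

```python
import random

# CUSUM for an upward mean shift: the statistic accumulates x - drift,
# clips at zero, and declares a changepoint when it crosses threshold h.

def cusum(samples, drift=0.5, h=8.0):
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + x - drift)
        if s > h:
            return i   # index at which the change is declared
    return None

random.seed(0)
pre = [random.gauss(0, 1) for _ in range(200)]    # calibrated regime
post = [random.gauss(2, 1) for _ in range(50)]    # drifted regime
print(cusum(pre + post))
```

In the paper's setting the stream of samples would come from the single-qubit measurement statistics of the certification protocol rather than a synthetic Gaussian.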

Majorana-XYZ subsystem code

Tobias Busse, Lauri Toikka

2603.26311 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a new quantum error correction code called the Majorana-XYZ subsystem code that can protect a macroscopic number of logical qubits using topological properties. The code uses 3-local nearest-neighbor check operations on a honeycomb lattice and can encode approximately L/2 logical qubits into L² physical qubits with distance L.

Key Contributions

  • Introduction of a novel subsystem quantum error correction code that combines topological and local gauge protection
  • Demonstration of macroscopic logical qubit encoding with 3-local nearest-neighbor check operations on experimentally feasible Majorana fermion systems
Tags: quantum error correction, subsystem codes, topological codes, Majorana fermions, honeycomb lattice
Abstract

We present a new type of quantum error correction code, termed the Majorana-XYZ code, where the logical quantum information scales macroscopically yet is protected by topologically non-trivial degrees of freedom. It is a $[n,k,g,d]$ subsystem code with $n=L^2$ physical qubits, $k= \lfloor L/2 \rfloor$ logical qubits, $g \sim L^2$ gauge qubits, and distance $d = L$. The physical check operations, i.e. the measurements needed to obtain the error syndrome, are $3$-local and nearest-neighbour. The code detects every 1- and 2-qubit error, as well as every error of weight 3 and higher (constrained by the distance) that is not a product of the 3-qubit check operations; such products act only on the gauge qubits, leaving the code space invariant. The undetected weight-3 and higher operators are confined to the gauge group and do not affect logical information. While the code does not have local stabiliser generators, the logical qubits cannot be modified locally by an undetectable error, and in this sense the Majorana-XYZ code combines notions of both topological and local gauge codes while providing a macroscopic number of topological logical qubits. Taken as a non-gauge stabiliser code we can encode $k \sim L^2 - 3L$ logical qubits into $L^2$ physical qubits; however, the check operators then become weight $2L$. The code is derived from an experimentally promising system of Majorana fermions on the honeycomb lattice with only nearest-neighbour interactions.
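The stated parameters $n = L^2$, $k = \lfloor L/2 \rfloor$, $d = L$ imply an encoding rate $k/n$ that falls off roughly as $1/(2L)$; a quick look at a few sizes, computed directly from the abstract's formulas:

```python
# Subsystem-code parameters from the abstract: n = L^2 physical qubits,
# k = floor(L/2) logical qubits, distance d = L.

def xyz_params(L: int):
    n, k, d = L * L, L // 2, L
    return n, k, d

for L in (4, 8, 16):
    n, k, d = xyz_params(L)
    print(f"L={L}: n={n}, k={k}, d={d}, rate={k / n:.4f}")
```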

Decomposition of Multi-Qubit Gates for Circuit Cutting

Ryota Tamura, Tomoya Kashimata, Yohei Hamakawa, Kosuke Tatsumura, Hiroshi Imai

2603.26278 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a method to reduce the computational overhead when cutting large quantum circuits into smaller pieces by optimizing how multi-qubit gates are decomposed before the cutting process. The approach uses additional helper qubits strategically to minimize the extra sampling required when reconstructing results from the cut circuit pieces.

Key Contributions

  • Novel decomposition strategy for multi-qubit gates that reduces sampling overhead in circuit cutting
  • Demonstration using MCX and CCCX gates showing effectiveness of ancilla-based approach for optimizing cut locations
Tags: circuit cutting, multi-qubit gates, sampling overhead, quantum circuit decomposition, ancilla qubits
Abstract

A large-scale quantum circuit can be partitioned into multiple subcircuits through circuit cutting, where each subcircuit is executed multiple times and the expectation value of the original circuit is reconstructed by classical post-processing from their measurement (sampling) results. In this process, appropriate cut locations are identified after the user-designed quantum circuit, including multi-qubit gates that act on three or more qubits, has been decomposed into single-qubit gates and two-qubit gates such as the CNOT gate. Here, we present a method for reducing the sampling overhead, which refers to the increase in the number of samples required due to the cutting process, by modifying the decomposition strategy of multi-qubit gates. Using MCX and CCCX gates as representatives of multi-qubit gates, we demonstrate that the proposed decomposition method, which introduces a small number of ancilla qubits according to the identified cut locations, effectively decreases the sampling overhead.
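The sampling overhead at stake here has a simple multiplicative structure: each cut contributes a quasi-probability 1-norm $\gamma$, and the required sample count inflates by the square of the product of these factors. A sketch with illustrative $\gamma$ values (assumed for the example, not taken from the paper):

```python
import math

# Sampling overhead of circuit cutting: with per-cut 1-norms gamma_i,
# the number of samples inflates by (prod_i gamma_i)^2. This is why
# reducing either the number of cuts or their gamma values (as the
# paper's decomposition strategy does) pays off multiplicatively.

def sampling_overhead(gammas):
    return math.prod(gammas) ** 2

print(sampling_overhead([3, 3]))      # two cuts with gamma = 3 each -> 81
print(sampling_overhead([3, 3, 3]))   # a third cut multiplies it to 729
```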

Distributed Quantum Discrete Logarithm Algorithm

Renjie Xu, Daowen Qiu, Ligang Xiao, Le Luo, Xu Zhou

2603.26160 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper proposes a distributed quantum algorithm for solving the discrete logarithm problem that requires smaller quantum registers than Shor's original algorithm. The approach works by identifying intersections of sets that contain the solution, avoiding the need for quantum communication while potentially improving success probability.

Key Contributions

  • Distributed quantum algorithm for discrete logarithm problem with reduced register size requirements
  • Method to determine solution containment in given sets without quantum communication
  • Approach that can improve success probability compared to Shor's algorithm
Tags: discrete logarithm, Shor's algorithm, distributed quantum computing, cryptanalysis, quantum registers
Abstract

Solving the discrete logarithm problem (DLP) with quantum computers is a fundamental task with important implications. Beyond Shor's algorithm, many researchers have proposed alternative solutions in recent years. However, due to current hardware limitations, the scale of DLP instances that can be addressed by quantum computers remains insufficient. To overcome this limitation, we propose a distributed quantum discrete logarithm algorithm that reduces the required quantum register size for solving DLPs. Specifically, we design a distributed quantum algorithm to determine whether the solution is contained in a given set. Based on this procedure, our method solves DLPs by identifying the intersection of sets containing the solution. Compared with Shor's original algorithm, our approach reduces the register size and can improve the success probability, while requiring no quantum communication.
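The set-intersection idea has a simple classical analogue: a node can test whether the unknown exponent lies in its assigned candidate set, and the answer is recovered from the sets that report membership. Below is a brute-force classical stand-in for the quantum subroutine; the toy parameters $g = 5$, $p = 23$ are assumptions for illustration:

```python
# Toy classical analogue of the set-membership subroutine: each "node"
# tests whether the discrete log x (with g^x = h mod p) lies in its
# assigned candidate set; the solution is then located inside the sets
# that report membership.

def contains_solution(g, h, p, candidates):
    return any(pow(g, x, p) == h for x in candidates)

g, p = 5, 23                 # 5 generates the multiplicative group mod 23
x_secret = 13
h = pow(g, x_secret, p)

# Partition the exponent range {0, ..., 21} between two "nodes".
sets = [range(0, 11), range(11, 22)]
hits = [S for S in sets if contains_solution(g, h, p, S)]
solution = next(x for x in hits[0] if pow(g, x, p) == h)
print(solution)  # -> 13
```

The classical membership test above costs time proportional to the set size; the paper's point is that the quantum version of this test needs a smaller register than running Shor's algorithm over the full range.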

MoSAIC: Scalable Probabilistic Error Cancellation via Variational Blockwise Noise Aggregation

Maya Ma, Rimika Jaiswal, Murphy Yuezhen Niu

2603.26063 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces MoSAIC, a new quantum error mitigation technique that reduces the computational overhead of probabilistic error cancellation by grouping quantum circuit operations into blocks and learning effective noise models for each block. The method maintains accuracy while dramatically reducing sampling costs, enabling error mitigation on much larger quantum systems than previously possible.

Key Contributions

  • Development of MoSAIC framework that preserves unbiasedness of probabilistic error cancellation while reducing sampling overhead by 1-2 orders of magnitude
  • Largest experimental demonstration of PEC-based error mitigation on IBM's 156-qubit Heron processors with validation on 50-qubit transverse-field Ising model systems
  • Blockwise noise aggregation approach that enables scalable quantum error mitigation beyond the operating regime of standard probabilistic error cancellation
Tags: quantum error mitigation, probabilistic error cancellation, NISQ, noise models, variational optimization
Abstract

Quantum error mitigation is essential for extracting trustworthy results from noisy intermediate-scale quantum (NISQ) processors. Yet, current approaches face a core scalability bottleneck: unbiased methods such as probabilistic error cancellation (PEC) incur exponential sampling overhead, while approximate techniques like zero-noise extrapolation trade accuracy for efficiency. We introduce and experimentally demonstrate MoSAIC (Modular Spatio-temporal Aggregation for Inverted Channels), a scalable quantum error mitigation framework that preserves the unbiasedness of PEC while dramatically reducing sampling costs. MoSAIC partitions a circuit into noise-aligned blocks, learns an effective block noise model using classical variational optimization, and applies quasi-probabilistic inversion once per block instead of after every layer. This blockwise aggregation reduces both sampling overhead and circuit-depth overhead, enabling mitigation far beyond the operating regime of standard PEC. We also experimentally validate MoSAIC on IBM's 156-qubit Heron processors, performing the largest PEC-based mitigation demonstration on hardware to date. As a physically meaningful benchmark, we prepare the critical one-dimensional transverse-field Ising (TFIM) ground state for system sizes up to 50 qubits. We show that MoSAIC can achieve at least 1 to 2 orders of magnitude better accuracy than standard PEC under identical sampling budgets. This enables MoSAIC to recover accurate observables for larger system sizes, even when standard PEC fails due to its prohibitive sampling overhead. We also present CUDA-Q accelerated simulations to validate performance trends under a range of different noise models. These results demonstrate that MoSAIC is not only theoretically scalable but also practically deployable for high-accuracy, large-scale quantum experiments on today's quantum hardware.
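Standard PEC, which MoSAIC makes scalable, estimates an ideal expectation value by sampling from a quasi-probability decomposition in which some coefficients are negative; the variance, and hence the sample count, grows as the square of the 1-norm $\gamma$. A toy scalar Monte Carlo version with assumed coefficients (not the paper's learned noise model):

```python
import random

# Quasi-probability sampling as in PEC: draw operation i with probability
# |c_i| / gamma and weight the outcome by gamma * sign(c_i). The estimator
# is unbiased, but its variance scales with gamma^2.

coeffs = [1.25, -0.15, -0.10]     # quasi-probabilities, sum to 1 (assumed)
values = [0.8, 0.3, 0.1]          # "measured" outcome under each operation
gamma = sum(abs(c) for c in coeffs)            # 1-norm = 1.5
probs = [abs(c) / gamma for c in coeffs]
exact = sum(c * v for c, v in zip(coeffs, values))

random.seed(1)
n = 200_000
est = 0.0
for _ in range(n):
    i = random.choices(range(3), weights=probs)[0]
    sign = 1 if coeffs[i] > 0 else -1
    est += gamma * sign * values[i]
est /= n
print(exact, est)
```

Blockwise aggregation as in MoSAIC attacks exactly this cost: inverting one effective channel per block keeps the product of $\gamma$ factors, and therefore the variance, far smaller than inverting after every layer.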

Achieving double-logarithmic precision dependence in optimization-based quantum unstructured search

Zhijian Lai, Dong An, Jiang Hu, Zaiwen Wen

2603.26039 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper improves Grover's quantum search algorithm by reformulating it as an optimization problem and applying Riemannian modified Newton methods. The new approach achieves better precision scaling with O(√N log log(1/ε)) complexity instead of the previous O(√N log(1/ε)), while remaining compatible with standard Grover operators.

Key Contributions

  • Development of Riemannian modified Newton method for quantum search with quadratic convergence rate
  • Achievement of double-logarithmic precision dependence O(√N log log(1/ε)) complexity
  • Proof that Riemannian gradient is an eigenvector of the Riemannian Hessian in quantum search setting
  • Maintaining Grover-compatibility using only standard oracle and diffusion operators
Tags: Grover algorithm, quantum search, Riemannian optimization, quantum algorithms, unstructured search
Abstract

Grover's algorithm is a fundamental quantum algorithm that achieves a quadratic speedup for unstructured search problems of size $N$. Recent studies have reformulated this task as a maximization problem on the unitary manifold and solved it via linearly convergent Riemannian gradient ascent (RGA) methods, resulting in a complexity of $O(\sqrt{N}\log (1/\varepsilon))$. In this work, we adopt the Riemannian modified Newton (RMN) method to solve the quantum search problem. We show that, in the setting of quantum search, the Riemannian Newton direction is collinear with the Riemannian gradient in the sense that the Riemannian gradient is always an eigenvector of the corresponding Riemannian Hessian. As a result, without additional overhead, the proposed RMN method numerically achieves a quadratic convergence rate with respect to error $\varepsilon$, implying a complexity of $O(\sqrt{N}\log\log (1/\varepsilon))$, which is double-logarithmic in precision. Furthermore, our approach remains Grover-compatible, namely, it relies exclusively on the standard Grover oracle and diffusion operators to ensure algorithmic implementability, and its parameter update process can be efficiently precomputed on classical computers.
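The jump from $\log(1/\varepsilon)$ to $\log\log(1/\varepsilon)$ is the usual gap between linear and quadratic convergence: a linearly convergent method multiplies the error by a constant per step, while a quadratically convergent one squares it. A minimal counting sketch:

```python
# Count iterations to reach tolerance eps under linear convergence
# (error *= rate) versus quadratic convergence (error squared each step).

def steps_linear(e0, rate, eps):
    n = 0
    while e0 > eps:
        e0 *= rate
        n += 1
    return n

def steps_quadratic(e0, eps):
    n = 0
    while e0 > eps:
        e0 = e0 ** 2
        n += 1
    return n

eps = 1e-12
print(steps_linear(0.5, 0.5, eps), steps_quadratic(0.5, eps))  # -> 39 6
```

The linear count grows like $\log(1/\varepsilon)$, the quadratic one like $\log\log(1/\varepsilon)$, mirroring the RGA-to-RMN improvement in the abstract.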

Scalable topological quantum computing based on Sine-Cosine chain models

A. Lykholat, G. F. Moreira, I. R. Martins, D. Sousa, A. M. Marques, R. G. Dias

2603.25952 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper proposes a new approach to topological quantum computing using Sine-Cosine chain models that can encode multiple quantum bits (qudits) in single systems, potentially requiring fewer physical resources than current methods. The researchers describe how these chains could be used for quantum gate operations and memory storage with some protection against errors.

Key Contributions

  • Novel scalable framework for topological quantum computing using Matryoshka-type Sine-Cosine chains
  • High-dimensional qudit encoding approach that reduces physical resource overhead
  • Y-junction braiding protocols for gate operations with extended memory architectures
Tags: topological quantum computing, qudit encoding, braiding protocols, fault tolerance, resource optimization
Abstract

This work proposes a scalable framework for topological quantum computing using Matryoshka-type Sine-Cosine chains. These chains support high-dimensional qudit encoding within single systems, reducing the physical resource overhead compared to conventional qubit arrays. We describe how these chains can be used in Y-junction braiding protocols for gate operations and in extended memory architectures capable of storing multiple qubits simultaneously. Fidelity analysis shows partial topological protection against disorder, suggesting this approach is a possible pathway toward low-overhead quantum hardware.

Theory of (Co)homological Invariants on Quantum LDPC Codes

Zimu Li, Yuguo Shao, Fuchuan Wei, Yiming Li, Zi-Wen Liu

2603.25831 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a mathematical framework for analyzing quantum LDPC (Low-Density Parity-Check) codes by studying their topological and algebraic properties. The work extends theoretical tools from HGP codes to sheaf codes and shows how to construct families of quantum error-correcting codes while preserving their logical operation capabilities.

Key Contributions

  • Systematic mathematical framework for analyzing cohomological invariants of quantum LDPC codes
  • Generalization of canonical logical representatives from HGP codes to sheaf codes
  • First comprehensive computation of cup products in sheaf codes enabling parallel quantum gates
  • Inductive scheme for generating code families while preserving logical operations and invariants
Tags: quantum LDPC codes, quantum error correction, fault tolerance, cohomological invariants, sheaf codes
Abstract

With recent breakthroughs in the construction of good qLDPC codes and nearly good qLTCs, the study of (co)homological invariants of quantum code complexes, which fundamentally underlie their logical operations, has become evidently important. In this work, we establish a systematic framework for mathematically analyzing these invariants across a broad spectrum of constructions, from HGP codes to sheaf codes, by synthesizing advanced math tools. We generalize the notion of canonical logical representatives from HGP codes to the sheaf code setting, resolving a long-standing challenge in explicitly characterizing sheaf codewords. Building on this foundation, we present the first comprehensive computation of cup products within the intricate framework of sheaf codes. Given Artin's primitive root conjecture which holds under the generalized Riemann hypothesis, we prove that $\tilde{\Theta}(N)$ independent cup products can be supported on almost good qLDPC codes and qLTCs of length $N$, opening the possibility of achieving linearly many parallel, nontrivial, constant-depth multi-controlled-Z gates. Moreover, by interpreting sheaf codes as covering spaces of HGP codes via graph lifts, we propose a scheme that inductively generates families of both HGP and sheaf codes in an interlaced fashion from a constant-size HGP code. Notably, the induction preserves all (co)homological invariants of the initial code. This provides a general framework for lifting invariants or logical gates from small codes to infinite code families, and enables efficient verification of such features by checking on small instances. Our theory provides a substantive methodology for studying invariants in HGP codes and extends it to sheaf codes. In doing so, we reveal deep and unexpected connections between qLDPC codes and math, thereby laying the groundwork for future advances in quantum coding, fault tolerance, and physics.

Non-linear Sigma Model for the Surface Code with Coherent Errors

Stephen W. Yan, Yimu Bao, Sagar Vijay

2603.25665 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies how well the surface code (a leading quantum error correction scheme) performs when affected by coherent errors rather than random errors. The authors develop a mathematical framework to analyze different decoding strategies and discover a new type of failure mode called a 'thermal-metal' phase that occurs when the decoder doesn't have perfect information about the coherent errors.

Key Contributions

  • Derivation of a non-linear sigma model framework for analyzing surface code performance under coherent errors
  • Discovery of a 'thermal-metal' phase representing a new type of non-decodable regime distinct from conventional Pauli error failures
  • Demonstration of sharp performance differences between optimal decoding (with known error parameters) and suboptimal decoding (with imperfect parameter knowledge)
Tags: surface code, quantum error correction, coherent errors, maximum-likelihood decoding, non-linear sigma model
Abstract

The surface code is a promising platform for a quantum memory, but its threshold under coherent errors remains incompletely understood. We study maximum-likelihood decoding of the square-lattice surface code in the presence of single-qubit unitary rotations that create electric anyon excitations. We microscopically derive a non-linear sigma model with target space $\mathrm{SO}(2n)/\mathrm{U}(n)$ as the effective long-distance theory of this decoding problem, with distinct replica limits: $n\to1$ for optimal decoding, which assumes knowledge of the coherent rotation angle, and $n\to0$ for suboptimal decoding with imperfect angle information. This exposes a sharp distinction between the two decoders. The suboptimal decoder supports a "thermal-metal" phase, a non-decodable regime that is qualitatively distinct from the conventional non-decodable phase of the surface code under incoherent Pauli errors. By contrast, the metal phase cannot arise in optimal decoding, since the metallic fixed-point becomes unstable in the $n\to 1$ replica limit. We argue that optimal decoding may be possible up to the maximally-coherent rotation angle. Within the sigma model description, we show that the decoding fidelity is related to twist defects of the order-parameter field, yielding quantitative predictions for its system-size dependence near the metallic fixed point for both decoders. We examine our analytic predictions for the decoding fidelity as well as other physical observables with extensive numerical simulations. We discuss how the symmetries and the target space for the sigma model rely on the lattice of the surface code, and how a stable thermal metal phase can arise in optimal decoding when the syndromes reside on a non-bipartite lattice.

Weighted Nested Commutators for Scalable Counterdiabatic State Preparation

Jialiang Tang, Xi Chen, Zhi-Yuan Wei

2603.25625 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper introduces a new method called weighted nested commutators (WNC) to efficiently prepare quantum states in large systems by approximating complex nonlocal operations with simpler local ones. The approach significantly improves quantum state preparation for systems with up to 1000 qubits compared to existing methods.

Key Contributions

  • Introduction of weighted nested-commutator (WNC) ansatz that generalizes standard nested-commutator approaches with independent variational weights
  • Demonstration of efficient quantum state preparation for large systems up to 1000 qubits using counterdiabatic driving with local optimization
counterdiabatic driving quantum state preparation adiabatic gauge potentials matrix product states variational optimization
View Full Abstract

Counterdiabatic (CD) driving enables efficient quantum state preparation, but it requires implementing highly nonlocal adiabatic gauge potentials (AGP) that are impractical to compute and realize in large many-body systems. We introduce a \textit{weighted nested-commutator} (WNC) ansatz to approximate AGP using local operators. The WNC ansatz generalizes the standard nested-commutator ansatz by assigning independent variational weights to commutators of local Hamiltonian terms, thereby enlarging the variational space while preserving a fixed operator range. We show that the WNC ansatz can be efficiently optimized using a local optimization scheme. Moreover, it systematically outperforms the nested-commutator ansatz in preparing one-dimensional matrix product states (MPS) and the ground state of a nonintegrable quantum Ising model. We then numerically demonstrate that CD driving based on the WNC ansatz significantly accelerates the preparation of 1D MPS for system sizes up to $N = 1000$ qubits, as well as the two-dimensional Affleck-Kennedy-Lieb-Tasaki state on a hexagonal lattice with up to $N = 3 \times 10$ sites.
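
As a rough numerical illustration of the idea, the sketch below minimizes the standard variational AGP objective $\|\partial_\lambda H + i[A, H]\|_F$ over a first-order commutator ansatz for a small mixed-field Ising model. Assigning an independent weight to the commutator of each local term is an illustrative reading of the WNC construction, not the paper's exact ansatz:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def comm(a, b):
    return a @ b - b @ a

# Two-qubit mixed-field Ising model H = -ZZ - lam*(X1+X2) - h*(Z1+Z2);
# the driven parameter is lam, so dH = -(X1+X2).
lam, h = 0.5, 0.3
ZZ, X1, X2 = kron(Z, Z), kron(X, I2), kron(I2, X)
Z1, Z2 = kron(Z, I2), kron(I2, Z)
terms = [-ZZ, -lam * X1, -lam * X2, -h * Z1, -h * Z2]  # local terms of H
H = sum(terms)
dH = -(X1 + X2)

def best_residual(basis):
    """Least-squares minimize ||dH + i[A,H]||_F over A = sum_k w_k b_k
    with real weights (the standard variational AGP principle)."""
    M = np.stack([(1j * comm(b, H)).ravel() for b in basis], axis=1)
    Mr = np.vstack([M.real, M.imag])          # force real weights
    rhs = np.concatenate([(-dH).ravel().real, (-dH).ravel().imag])
    w, *_ = np.linalg.lstsq(Mr, rhs, rcond=None)
    A = sum(wk * b for wk, b in zip(w, basis))
    return np.linalg.norm(dH + 1j * comm(A, H))

# Standard nested-commutator ansatz (order 1): one weight on i[H, dH].
res_nc = best_residual([1j * comm(H, dH)])
# WNC-style ansatz: an independent weight per local-term commutator.
res_wnc = best_residual([1j * comm(t, dH) for t in terms])
print(f"NC residual: {res_nc:.4f}   WNC residual: {res_wnc:.4f}")
```

Because the single NC weight is a special case of the per-term weights, the WNC residual can never be worse, and on models where different local terms contribute differently to the gauge potential it is generically smaller.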

Kardashev scale Quantum Computing for Bitcoin Mining

Pierre-Luc Dallaire-Demers

2603.25519 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes the practical feasibility of using quantum computers to mine Bitcoin by applying Grover's algorithm to accelerate the cryptographic hash calculations. The authors find that while quantum mining could theoretically provide advantages, the physical resource requirements (qubits and energy) scale to astronomical levels that make it impractical even for civilizations operating at planetary energy scales.

Key Contributions

  • First comprehensive end-to-end cost analysis of fault-tolerant quantum hardware requirements for Bitcoin mining using Grover's algorithm
  • Open-source estimator tool that models the full attack surface including surface-code error correction, fleet logistics, and energy requirements at astronomical scales
Grover's algorithm Bitcoin mining fault-tolerant quantum computing surface code cryptographic hash functions
View Full Abstract

Bitcoin already faces a quantum threat through Shor attacks on elliptic-curve signatures. This paper isolates the other component that public discussion often conflates with it: mining. Grover's algorithm halves the exponent of brute-force search, promising a quadratic edge to any quantum miner of Bitcoin. Exactly how large that edge grows depends on fault-tolerant hardware. No prior study has costed that hardware end to end. We build an open-source estimator that sweeps the full attack surface: reversible oracles for double-SHA-256 mining and RIPEMD-based address preimages, surface-code factory sizing, fleet logistics under Nakamoto-consensus timing, and Kardashev-scale energy accounting. A parametric sweep over difficulty bits b, runtime caps, and target success probabilities reveals a sharp transition. At the most favourable partial-preimage setting (b = 32, 2^224 marked states), a superconducting surface-code fleet still requires about 10^8 physical qubits and about 10^4 MW. That load is comparable to a large national grid. Tightening to Bitcoin's January 2025 mainnet difficulty (b about 79) explodes the bill to about 10^23 qubits and about 10^25 W, approaching the Kardashev Type II threshold. These numbers settle a narrower question than "Is Bitcoin quantum-secure?" Once Grover mining is lifted from asymptotic query counts to fault-tolerant physical cost, practical quantum mining collapses under oracle, distillation, and fleet overhead. To push mining into non-trivial consensus effects, one must invoke astronomical quantum fleets operating at energy scales that lie far above present-day civilization.
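
The quadratic edge quoted above is easy to reproduce at the level of query counts: a classical miner expects about $2^b$ hashes, a Grover miner about $(\pi/4)\sqrt{2^b}$ oracle calls. The hardware numbers below (T gates per reversible double-SHA-256 oracle, logical T-gate rate) are placeholder assumptions for illustration, not figures from the paper:

```python
import math

def grover_iterations(difficulty_bits: int) -> float:
    """Grover iterations needed to find one of the marked states with
    high probability: ~ (pi/4) * sqrt(2^b), where b is the number of
    leading bits the double-SHA-256 hash must match."""
    return (math.pi / 4) * math.sqrt(2 ** difficulty_bits)

# Favourable partial-preimage setting from the paper (b = 32)
print(f"b=32: {grover_iterations(32):.3e} oracle calls")
# Bitcoin mainnet difficulty, Jan 2025 (b ~ 79)
print(f"b=79: {grover_iterations(79):.3e} oracle calls")

# Illustrative serial-runtime estimate under assumed hardware numbers:
# ~1e9 T gates per fault-tolerant double-SHA-256 oracle call and ~1e6
# logical T gates per second (assumptions, not the paper's figures).
t_per_oracle = 1e9
t_rate = 1e6
for b in (32, 79):
    seconds = grover_iterations(b) * t_per_oracle / t_rate
    print(f"b={b}: ~{seconds:.1e} s on a single sequential machine")
```

Grover iterations cannot be parallelized the way classical hashing can, which is why the paper's fleet-logistics accounting under Nakamoto-consensus timing drives the qubit counts so high.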

Weak distillation of quantum resources

Shinnosuke Onishi, Oliver Hahn, Ryuji Takagi

2603.25358 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper develops a new framework that allows quantum computers to simulate quantum operations they cannot directly perform by using sampling techniques based on quasi-probability distributions. Instead of just estimating average values, their method can actually sample from the desired quantum distributions while using fewer quantum resources than previous approaches.

Key Contributions

  • General framework converting quasi-probability protocols from expectation value estimation to full weak simulation
  • Significant reduction in sampling requirements compared to naive approaches, with cost proportional to quasi-probability negativity
  • Introduction of weak quantum resource distillation as alternative to physical state distillation
quantum error mitigation quasi-probability decomposition importance sampling magic state distillation entanglement distillation
View Full Abstract

Importance sampling based on quasi-probability decomposition is the backbone of many widely used techniques, such as error mitigation, circuit knitting, and, more generally, virtual quantum resource distillation, as it allows one to simulate operations that are not accessible in a given setting. However, this class of protocols faces a fundamental problem: it only allows one to estimate expectation values. Here, we provide a general framework that lifts any quasi-probability-based protocol from expectation value estimation to a weak simulator, realizing sampling from the desired distribution using only a restricted class of quantum resources. Our method runs with a sampling cost proportional to the negativity of the quasi-probability, in stark contrast to the naive estimation-based approach that requires a large number of samples even in the case of small negativity. We show that our method requires significantly fewer samples in a number of relevant scenarios, such as error mitigation, entanglement distillation, and magic state distillation. Our framework realizes the weak simulation of quantum resources without actually distilling the state, introducing a new notion of quantum resource distillation.
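
For context, here is a minimal sketch of the baseline estimation-only protocol that the paper generalizes: write the inaccessible operation as a signed combination of implementable ones, sample branch $i$ with probability $|q_i|/\gamma$, and reweight by $\gamma\,\mathrm{sign}(q_i)$. The coefficients and branch outcomes below are toy numbers:

```python
import random

# Quasi-probability decomposition: a target (inaccessible) operation is
# written as E = sum_i q_i * F_i, with implementable F_i and possibly
# negative real coefficients q_i. Hypothetical toy numbers:
q = [1.6, -0.3, -0.3]                 # sums to 1.0
gamma = sum(abs(c) for c in q)        # 1-norm ("negativity" cost) = 2.2

# Toy expectation values that each implementable branch would yield
branch_means = [0.9, 0.5, 0.1]
exact = sum(c * m for c, m in zip(q, branch_means))

def qpd_estimate(n_samples: int, seed: int = 0) -> float:
    """Importance-sample the decomposition: pick branch i with prob
    |q_i|/gamma, reweight by gamma*sign(q_i). Unbiased; variance ~ gamma^2."""
    rng = random.Random(seed)
    probs = [abs(c) / gamma for c in q]
    total = 0.0
    for _ in range(n_samples):
        r, acc, i = rng.random(), 0.0, 0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                break
        sign = 1.0 if q[i] >= 0 else -1.0
        total += gamma * sign * branch_means[i]  # noiseless branch outcome
    return total / n_samples

print("exact:", exact, " estimate:", round(qpd_estimate(200_000), 3))
```

The estimator is unbiased but its variance scales with $\gamma^2$, and it only ever yields a mean value; the paper's contribution is to lift protocols of this kind from estimating means to actually sampling outcomes, at a cost proportional to the negativity.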

T Count as a Numerically Solvable Minimization Problem

Marc Grau Davis, Ed Younis, Mathias Weiden, Hyeongrak Choi, Dirk Englund

2603.25101 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new method to find quantum circuits that minimize the number of T gates (which are expensive in fault-tolerant quantum computing) by formulating it as a continuous optimization problem that can be solved numerically. The authors demonstrate their approach works for small circuits and show how to extend it to larger circuits by breaking them into smaller optimizable pieces.

Key Contributions

  • Formulates T-count minimization as numerically solvable continuous optimization problems using binary search
  • Demonstrates circuit partitioning approach to scale the optimization method to larger quantum circuits
T-count optimization fault-tolerant quantum computing quantum circuit synthesis binary search optimization circuit partitioning
View Full Abstract

We present a formulation of the problem of finding the smallest T-count circuit that implements a given unitary as a binary search over a sequence of continuous minimization problems, and demonstrate that these problems are numerically solvable in practice. We reproduce best-known results for the synthesis of circuits with a small number of qubits, and push the bounds of the largest circuits that can be solved in this way. Additionally, we show that circuit partitioning can be used to adapt this technique to optimize the T-count of circuits with large numbers of qubits by breaking the circuit into a series of smaller sub-circuits that can be optimized independently.
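
The binary-search wrapper described above can be sketched as follows, with a mock feasibility oracle standing in for the paper's continuous minimization (which would optimize a T-count-t circuit ansatz and report whether the distance to the target unitary reaches zero):

```python
def min_t_count(feasible, t_max: int) -> int:
    """Binary search for the smallest t with feasible(t) True, assuming
    feasibility is monotone in t (extra T gates never hurt)."""
    lo, hi = 0, t_max
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Stand-in for the continuous minimization: a mock oracle with a known
# hypothetical answer, used only to exercise the search logic.
TRUE_T_COUNT = 7
mock_feasible = lambda t: t >= TRUE_T_COUNT
print(min_t_count(mock_feasible, 64))   # -> 7
```

The search makes $O(\log t_{\max})$ calls to the oracle, so the overall cost is dominated by the numerical minimizations themselves.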

Uncertainty Quantification for Quantum Computing

Ryan Bennink, Olena Burkovska, Konstantin Pieper, Jorge Ramirez, Elaine Wong

2603.25039 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This review paper introduces uncertainty quantification methods to quantum computing, showing how mathematical tools like probabilistic modeling and Bayesian inference can help address noise and error propagation in quantum devices. It aims to bridge applied mathematics and quantum information science to improve algorithm design and error mitigation.

Key Contributions

  • Bridging uncertainty quantification methodologies with quantum computing error analysis
  • Providing mathematical framework for noise characterization and error mitigation in quantum devices
  • Establishing rigorous statistical inference approaches for quantum computational reliability
uncertainty quantification quantum error mitigation noise characterization probabilistic modeling Bayesian inference
View Full Abstract

This review is designed to introduce mathematicians and computational scientists to quantum computing (QC) through the lens of uncertainty quantification (UQ) by presenting a mathematically rigorous and accessible narrative for understanding how noise and intrinsic randomness shape quantum computational outcomes in the language of mathematics. By grounding quantum computation in statistical inference, we highlight how mathematical tools such as probabilistic modeling, stochastic analysis, Bayesian inference, and sensitivity analysis can directly address error propagation and reliability challenges in today's quantum devices. We also connect these methods to key scientific priorities in the field, including scalable uncertainty-aware algorithms and the characterization of correlated errors. The purpose is to narrow the conceptual divide between applied mathematics, scientific computing, and the quantum information sciences, demonstrating how mathematically rooted UQ methodologies can guide validation, error mitigation, and principled algorithm design for emerging quantum technologies, in order to address the challenges and opportunities of modern-day quantum high-performance computing and fault-tolerant quantum computing paradigms.

Finite-Degree Quantum LDPC Codes Reaching the Gilbert-Varshamov Bound

Kenta Kasai

2603.24588 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops new quantum error-correcting codes called quantum LDPC codes that achieve optimal error correction performance (reaching the Gilbert-Varshamov bound) while maintaining practical constraints on code structure. The researchers construct these codes using nested classical error-correcting codes and prove their effectiveness both theoretically and through computer-assisted verification.

Key Contributions

  • Construction of quantum LDPC codes with finite degree that achieve Gilbert-Varshamov bound performance
  • Rigorous computer-assisted proof demonstrating optimal distance properties for practical code parameters
quantum error correction LDPC codes Calderbank-Shor-Steane codes Gilbert-Varshamov bound fault tolerance
View Full Abstract

We construct nested Calderbank-Shor-Steane code pairs with non-vanishing coding rate from Hsu-Anastasopoulos codes and MacKay-Neal codes. In the fixed-degree regime, we prove relative linear distance with high probability. Moreover, for several finite degree settings, we prove Gilbert-Varshamov distance by a rigorous computer-assisted proof.
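
For reference, the quantum Gilbert-Varshamov trade-off for CSS codes is commonly written as $R = 1 - 2h(\delta)$, with $h$ the binary entropy and $\delta$ the relative distance. Under that assumed form, a small solver gives the relative distance a random CSS code of a given rate should reach:

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def css_gv_relative_distance(rate: float, tol: float = 1e-12) -> float:
    """Solve 1 - 2*h2(delta) = rate for delta in (0, 1/2) by bisection;
    1 - 2*h2 is monotone decreasing on that interval."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - 2 * h2(mid) > rate:
            lo = mid
        else:
            hi = mid
    return lo

for R in (0.05, 0.1, 0.2):
    print(f"rate {R}: GV relative distance ~ {css_gv_relative_distance(R):.4f}")
```

The paper's point is that this bound, normally attained only by random (dense) CSS codes, can be reached by codes whose check matrices have bounded row and column weight.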

Flagging the Clifford hierarchy: Fault-tolerant logical $\frac{\pi}{2^l}$ rotations via measuring circuit gauge operators of non-Cliffords

Shival Dasu, Ben Criger

2603.24573 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops fault-tolerant quantum circuits for implementing specific rotation gates in the Clifford hierarchy using flag-based error detection. The authors provide efficient circuits with linear overhead for performing precise rotations that are essential for fault-tolerant quantum computing applications.

Key Contributions

  • Recursive flag circuits for detecting logical errors in non-Clifford rotation gates
  • O(l) overhead circuits for fault-tolerant logical rotations on CSS codes
  • Methods to increase fault distance through concatenation and Cliffordization
  • Resource state preparation circuits for gate teleportation implementations
fault-tolerant quantum computing error correction Clifford hierarchy CSS codes flag circuits
View Full Abstract

We provide a recursively defined sequence of flag circuits which will detect logical errors induced by non-fault-tolerant $R_{\overline{Z}}(\frac{\pi}{2^l})$ gates on CSS codes with a fault distance of two. As applications, we give a family of circuits with $O(l)$ gates and ancillae which implement fault-tolerant logical $R_{Z}(\frac{\pi}{2^l})$ or $R_{ZZ}(\frac{\pi}{2^l})$ gates on any $[[k + 2, k, 2]]$ iceberg code and fault-tolerant circuits of size $O(l)$ for preparing $|\frac{\pi}{2^l}\rangle$ resource states in the $[[7,1,3]]$ code, which can be used to perform fault-tolerant $R_{\overline{Z}}(\frac{\pi}{2^l})$ rotations via gate teleportation, allowing for implementations of these gates that bypass the high overheads of gate synthesis when $l$ is small relative to the precision required. We show how the circuits above can be generalized to $\pi(x_0.x_{1}x_{2}\ldots x_{l}) = \sum_{j=0}^{l} \pi\frac{x_j}{2^j}$ rotations with identical overheads in $l$, which could be useful in quantum simulations where time is digitized in binary. Finally, we illustrate two approaches to increase the fault distance of our construction. We show how to increase the fault distance of a Cliffordized version of the T gate circuit to $3$ in the Steane code and how to increase the fault distance of the $\frac{\pi}{2}$ iceberg circuit to $4$ through concatenation in two-level iceberg codes. This yields a targeted logical $R_{\overline{Z}}(\frac{\pi}{2})$ gate with fault distance $4$ on any row of logical qubits in an $[[(k_2+2)(k_1+2), k_1k_2, 4]]$ code.

Robust Parametric Quantum Gate Against Stochastic Time-Varying Noise

Yang He, Zigui Zhang, Zibo Miao

2603.24345 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops an improved method called FF-QCRL for creating robust quantum control pulses that can handle realistic time-varying noise in quantum processors. The method combines filter function formalism with quantum control robustness landscape techniques to generate better control sequences for quantum gates that remain effective despite environmental disturbances.

Key Contributions

  • Integration of filter function formalism into quantum control robustness landscape framework
  • Development of FF-QCRL algorithm for robust pulse generation under realistic time-varying noise
quantum control robust gates NISQ filter functions noise mitigation
View Full Abstract

The performance of quantum processors in the noisy intermediate-scale quantum (NISQ) era is severely constrained by environmental noise and other uncertainties. While the recently proposed quantum control robustness landscape (QCRL) offers a powerful framework for generating robust control pulses for parametric gate families, its application has been practically restricted to quasi-static noise. To address the spectrally complex, time-varying noise prevalent in reality, we propose filter function-enhanced QCRL (FF-QCRL), which integrates filter function formalism into the QCRL framework. The resulting FF-QCRL algorithm minimizes a generalized robustness metric that faithfully encodes the impact of stochastic processes, enabling robust pulse-family generation for parametric gates under realistic time-varying noise. Numerical validation in a representative single-qubit setting confirms the effectiveness of the proposed method.
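
The filter-function formalism the method builds on assigns each control sequence a spectral weight $F(\omega) = |\int_0^T y(t)\,e^{i\omega t}\,dt|^2$, so that noise at frequency $\omega$ contributes in proportion to $F(\omega)$. A generic first-order dephasing example (not the paper's FF-QCRL algorithm) comparing free evolution with a single spin echo:

```python
import numpy as np

def filter_function(switching_times, T, omegas):
    """First-order dephasing filter function F(w) = |int_0^T y(t) e^{iwt} dt|^2
    for a piecewise-constant sign function y(t) that flips at the given
    times (pi-pulse instants)."""
    edges = [0.0, *switching_times, T]
    F = np.zeros_like(omegas, dtype=float)
    for k, w in enumerate(omegas):
        integral = 0.0 + 0.0j
        sign = 1.0
        for a, b in zip(edges[:-1], edges[1:]):
            if abs(w) < 1e-12:
                integral += sign * (b - a)
            else:
                integral += sign * (np.exp(1j * w * b) - np.exp(1j * w * a)) / (1j * w)
            sign = -sign
        F[k] = abs(integral) ** 2
    return F

T = 1.0
omegas = np.linspace(0.1, 40.0, 400)
fid = filter_function([], T, omegas)         # free evolution (Ramsey)
echo = filter_function([T / 2], T, omegas)   # single spin echo at T/2

# The echo's filter function vanishes as w -> 0: static and slow noise
# is removed, at the cost of reshaping sensitivity at higher frequencies.
print("F(w=0.1): free =", round(fid[0], 4), " echo =", round(echo[0], 6))
```

Robust pulse design in this language means shaping the control so that $F(\omega)$ is small wherever the noise power spectral density is large, which is the generalized robustness metric the FF-QCRL algorithm minimizes.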

Correlated Atom Loss as a Resource for Quantum Error Correction

Hugo Perrin, Gatien Roger, Guido Pupillo

2603.24237 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an improved quantum error correction decoder for neutral-atom quantum computers that exploits correlations in atom loss events. The new approach reduces logical error rates by up to 10x compared to existing methods that treat atom losses as independent events.

Key Contributions

  • Novel decoding strategy that exploits loss correlations in neutral-atom quantum processors
  • Demonstration of order-of-magnitude reduction in logical error probability and increased loss threshold from 3.2% to 4%
quantum error correction surface code neutral atoms atom loss erasure channels
View Full Abstract

Atom loss is a dominant error source in neutral-atom quantum processors, yet its correlated structure remains largely unexploited by existing quantum error correction decoders. We analyze the performance of the surface code equipped with teleportation-based loss-detection units for neutral-atom quantum processors subject to circuit-level, partially correlated atom loss and depolarizing noise. We introduce and implement a decoding strategy that exploits loss correlations, effectively converting the \textit{delayed} erasure channels stemming from atom loss to erasure channels. The decoder constructs a loss graph and dynamically updates loss probabilities, a procedure that is highly parallelizable and compatible with real-time operation. Compared to a decoder that assumes independent loss events, our approach achieves up to an order-of-magnitude reduction in logical error probability and increases the loss threshold from $3.2\%$ to $4\%$. Our approach extends to experimentally relevant regimes with partially correlated loss, demonstrating robust gains beyond the idealized fully correlated setting.

Mitigating Dynamic Crosstalk with Optimal Control

Matthias G. Krauss, Luise C. Butzke, Christiane P. Koch

2603.24205 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a method to eliminate dynamic crosstalk in quantum computers using optimal control theory and the perfect entangler spectrum. The technique requires only minimal modifications to control pulses to suppress unwanted interactions between qubits that occur during gate operations.

Key Contributions

  • Development of optimal control method using perfect entangler spectrum to suppress dynamic crosstalk
  • Demonstration that minimal pulse modifications can eliminate the most difficult-to-predict form of quantum crosstalk
  • Establishment of generalizable control principle for eliminating unwanted interactions in quantum hardware
dynamic crosstalk optimal control perfect entangler spectrum parametric gates tunable coupler
View Full Abstract

The prevalence of quantum crosstalk is an important barrier to scaling frequency-addressable qubit architectures, with dynamic crosstalk being particularly difficult to detect and suppress. This form of crosstalk refers to unintended interactions driven by the gate control fields themselves. Here, we minimize dynamic crosstalk using quantum optimal control based on the perfect entangler spectrum, where spectral peaks signal unwanted entanglement with spectator qubits. Focusing on parametric gates in tunable coupler systems, we derive pulse shapes that eliminate dynamic crosstalk. Remarkably, only minimal pulse modifications are required to mitigate the form of crosstalk that is otherwise most difficult to predict. The ability to suppress dynamic crosstalk via the perfect entangler spectrum establishes a generalizable control principle for eliminating unwanted interactions in quantum hardware.

STAR-Magic Mutation: Even More Efficient Analog Rotation Gates for Early Fault-Tolerant Quantum Computer

Riki Toshio, Shota Kanasugi, Jun Fujisaki, Hirotaka Oshima, Shintaro Sato, Keisuke Fujii

2603.22891 • Mar 24, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces STAR-magic mutation, a new protocol for implementing rotation gates on fault-tolerant quantum computers that achieves better error scaling and significantly reduces execution time for small-angle rotations. The authors also propose a new quantum computing architecture called 'STAR ver. 3' that could simulate quantum many-body systems with only hundreds of thousands of physical qubits.

Key Contributions

  • Development of STAR-magic mutation protocol with improved error scaling O(θ_L^{2(1-Θ(1/d))}p_ph) for logical rotation gates
  • Introduction of STAR ver. 3 quantum computing architecture using Clifford+T+φ gate set for early fault-tolerant quantum computers
  • Demonstration that realistic quantum many-body system simulations are feasible with hundreds of thousands of physical qubits at 10^-3 error rates
fault-tolerant quantum computing surface codes magic state distillation rotation gates quantum simulation
View Full Abstract

We introduce STAR-magic mutation, an efficient protocol for implementing logical rotation gates on early fault-tolerant quantum computers. This protocol judiciously combines two of the latest state preparation protocols: the transversal multi-rotation protocol and magic state cultivation. It achieves a logical rotation gate with a favorable error scaling of $\mathcal{O}(\theta_L^{2(1-\Theta(1/d))}p_{\text{ph}})$, while requiring only the ancillary space of a single surface code patch. Here, $\theta_L$ is the logical rotation angle, $p_{\text{ph}}$ is the physical error rate, and $d$ is the code distance. This scaling marks a significant improvement over the previous state of the art, $\mathcal{O}(\theta_L p_{\text{ph}})$, making our protocol particularly powerful for implementing a sequence of small-angle rotation gates, as in Trotter-based circuits. Notably, for $\theta_L \lesssim 10^{-5}$, our protocol achieves a two-order-of-magnitude reduction in both the execution time and the error rate of analog rotation gates compared to the standard $T$-gate synthesis using cultivated magic states. Building upon this protocol, we also propose a novel quantum computing architecture designed for early fault-tolerant quantum computers, dubbed "STAR ver. 3". It employs a refined circuit compilation strategy based on the Clifford+$T$+$\varphi$ gate set, rather than the conventional Clifford+$T$ or Clifford+$\varphi$ gate sets. We establish a theoretical bound on the feasible circuit size on this architecture and illustrate its capabilities by analyzing the spacetime costs for simulating the dynamics of quantum many-body systems. Specifically, we demonstrate that our architecture can simulate biologically relevant molecules or lattice models at scales beyond the reach of exact classical simulation, with only a few hundred thousand physical qubits, even assuming a realistic error rate of $p_{\text{ph}}=10^{-3}$.
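
Taking the quoted scalings at face value (constant-free, in the large-$d$ limit where the exponent approaches 2), the advantage for small angles is simple arithmetic:

```python
# Compare the asymptotic, constant-free logical-error scalings quoted
# above for a small-angle rotation at physical error rate p_ph:
# previous state of the art ~ theta * p_ph, new protocol ~ theta^2 * p_ph.
p_ph = 1e-3
for theta in (1e-2, 1e-4, 1e-5):
    old = theta * p_ph
    new = theta ** 2 * p_ph
    print(f"theta={theta:.0e}: old ~{old:.1e}, new ~{new:.1e}, ratio {old / new:.0e}")
```

The paper's two-order-of-magnitude claim at $\theta_L \lesssim 10^{-5}$ accounts for constants, finite $d$, and synthesis overheads, so it is more conservative than this idealized ratio.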

Low Latency GNN Accelerator for Quantum Error Correction

Alessio Cicero, Luigi Altamura, Moritz Lange, Mats Granath, Pedro Trancoso

2603.22149 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a specialized computer chip (FPGA accelerator) that uses neural networks to quickly detect and correct errors in quantum computers. The system can perform quantum error correction within the strict 1 microsecond timing requirement while maintaining higher accuracy than existing methods.

Key Contributions

  • FPGA accelerator implementation of GNN-based quantum error correction decoder
  • Hardware-aware optimizations achieving sub-1μs latency while maintaining high accuracy
  • Demonstrated performance improvements over state-of-the-art methods for surface codes up to distance d=7
quantum error correction surface codes neural network decoder FPGA accelerator superconducting qubits
View Full Abstract

Quantum computers have the potential to solve certain complex problems in a much more efficient way than classical computers. Nevertheless, current quantum computer implementations are limited by high physical error rates. This issue is addressed by Quantum Error Correction (QEC) codes, which use multiple physical qubits to form a logical qubit with a lower logical error rate, with the surface code being one of the most commonly used. The most time-critical step in this process is interpreting the measurements of the physical qubits to determine which errors have most likely occurred, a task called decoding. Consequently, the main challenge for QEC is to achieve error correction with high accuracy within the tight $1\,\mu s$ decoding time budget imposed by superconducting qubits. State-of-the-art QEC approaches trade accuracy for latency. In this work, we propose an FPGA accelerator for a neural-network-based decoder as a way to achieve a lower logical error rate than current methods within the tight time constraint, for code distance up to d=7. We achieved this goal by applying different hardware-aware optimizations to a high-accuracy GNN-based decoder. In addition, we propose several accelerator optimizations leading to the FPGA-based decoder achieving a latency smaller than $1\,\mu s$, with a lower error rate compared to the state of the art.

The color code, the surface code, and the transversal CNOT: NP-hardness of minimum-weight decoding

Shouzhen Gu, Lily Wang, Aleksander Kubica

2603.22064 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper proves that finding the minimum-weight decoding solution for quantum error correction codes is computationally intractable (NP-hard) for three important cases: color codes with Z errors, surface codes with general Pauli errors, and surface codes with transversal CNOT gates. The results establish fundamental computational limits for optimal decoding in fault-tolerant quantum computing.

Key Contributions

  • Proves NP-hardness of minimum-weight decoding for color codes with Pauli Z errors
  • Demonstrates computational intractability of optimal decoding for surface codes with general Pauli errors and transversal CNOT operations
  • Establishes sharp complexity separation between optimal and approximate decoding methods in fault-tolerant quantum computing
quantum error correction surface codes color codes minimum-weight decoding fault-tolerant quantum computing
View Full Abstract

The decoding problem is a ubiquitous algorithmic task in fault-tolerant quantum computing, and solving it efficiently is essential for scalable quantum computing. Here, we prove that minimum-weight decoding is NP-hard in three quintessential settings: (i) the color code with Pauli $Z$ errors, (ii) the surface code with Pauli $X$, $Y$ and $Z$ errors, and (iii) the surface code with a transversal CNOT gate, Pauli $Z$ and measurement bit-flip errors. Our results show that computational intractability already arises in basic and practically relevant decoding problems central to both quantum memories and logical circuit implementations, highlighting a sharp computational complexity separation between minimum-weight decoding and its approximate realizations.

Neural Belief-Matching Decoding for Topological Quantum Error Correction Codes

Luca Menti, Francisco Lázaro

2603.21730 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a neural network approach to improve quantum error correction decoding for topological codes like the toric code, replacing traditional belief-propagation methods with a neural belief-matching decoder that reduces computational complexity while maintaining performance.

Key Contributions

  • Development of neural belief-matching decoder that reduces average decoding complexity for topological quantum error correction
  • Introduction of convolutional architecture enabling weight sharing and transfer learning from small to large code instances without performance loss
quantum error correction topological codes toric code neural networks belief propagation
View Full Abstract

Quantum error correction (QEC) is critical for scalable fault-tolerant quantum computing. Topological codes, such as the toric code, offer hardware-efficient architectures but their Tanner graphs contain many girth-4 cycles that degrade the performance of belief-propagation (BP) decoding. For this reason, BP decoding is typically followed by a more complex second stage decoder such as minimum-weight perfect matching. These combined decoders achieve a remarkable performance, albeit at the cost of increased complexity. In this paper we propose two key improvements for the decoding of toric code. The first one is replacing the BP decoder by a neural BP decoder, giving rise to the neural belief-matching decoder which substantially decreases the average decoding complexity. The main drawback of this approach is the high cost associated with the training of the neural BP decoder. To address this issue, we impose a convolutional architecture on the neural BP decoder, enabling weight sharing across the spatially homogeneous structure of the code's factor graph. This design allows a model trained on a modest-size topological code to be directly transferred to much larger instances, preserving decoding quality while dramatically lowering the training burden. Our numerical experiments on toric-code lattices of various sizes demonstrate that this technique does not result in a noticeable loss in performance.

All-optical quantum memory using bosonic quantum error correction codes

Kaustav Chatterjee, Niklas Budinger, Kian Latifi Yaghin, Lucas Borg Clausen, Ulrik Lund Andersen

2603.21721 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: high

This paper develops an all-optical quantum memory system that stores quantum information in fiber loops using Gottesman-Kitaev-Preskill error correction codes. The researchers optimize the error correction strategy and identify key performance thresholds, demonstrating storage times exceeding 400ms with high fidelity at sufficient squeezing levels.

Key Contributions

  • Developed optimized syndrome decoder for GKP codes that significantly outperforms standard decoders in finite-squeezing regime
  • Identified squeezing threshold of 6.7 dB and optimal correction spacing for maximizing memory lifetime
  • Demonstrated path to scalable all-optical fault-tolerant quantum storage with clear performance benchmarks
quantum memory GKP codes bosonic quantum error correction all-optical fault-tolerant quantum computing
View Full Abstract

Reliable quantum memory is essential for scalable quantum networks and fault-tolerant photonic quantum computing. We present a quantitative analysis of an all-optical quantum memory architecture in which a Gottesman-Kitaev-Preskill (GKP) encoded qubit is stored in a fibre loop and periodically stabilized using teleportation-based error correction. By modelling fibre propagation as a pure-loss channel and representing each correction round as an effective logical map acting on the Bloch vector, we obtain a compact description of the full multi-round memory channel. We show that syndrome decoder optimization plays a crucial role in the experimentally relevant finite-squeezing regime. The optimal decoder deviates from the standard square-grid GKP decoder in both tile size and tile shape, leading to significantly improved logical performance. Using this optimized decoding strategy, we identify a squeezing-dependent optimal spacing between correction nodes that maximizes the memory lifetime. Remarkably, this optimal segment length is largely independent of the desired storage time, providing a simple and practical design rule for fibre-loop quantum memory. We further find a squeezing threshold of approximately 6.7 dB below which intermediate error correction becomes counterproductive, while above threshold the achievable storage time increases approximately exponentially with squeezing. For example, at 17 dB squeezing, storage times exceeding 400 ms can be achieved with logical infidelity below 1%. These results establish clear performance benchmarks and reveal the fundamental trade-off between photon loss, squeezing, and correction frequency in continuous-variable architectures. Our findings provide actionable design principles for near-term photonic quantum memory and clarify the path toward scalable all-optical fault-tolerant quantum storage.
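
A back-of-the-envelope version of the squeezing dependence: for a finitely squeezed GKP state under Gaussian quadrature noise, a logical error requires a shift past the decision boundary at $\sqrt{\pi}/2$. The dB-to-variance convention below (vacuum variance $1/2$) is an assumption, and this toy per-round error ignores the loss channel, the teleportation step, and the optimized non-square decoder modeled in the paper:

```python
import math

def gkp_shift_error(squeezing_db: float) -> float:
    """Probability that a Gaussian quadrature shift of a finite-energy GKP
    state crosses the nearest square-grid decision boundary at sqrt(pi)/2.
    Assumed convention: vacuum variance 1/2, so sigma^2 = 10^(-s/10) / 2."""
    sigma = math.sqrt(10 ** (-squeezing_db / 10) / 2)
    boundary = math.sqrt(math.pi) / 2
    # P(|shift| > boundary) for shift ~ N(0, sigma^2)
    return math.erfc(boundary / (sigma * math.sqrt(2)))

for s in (6.7, 10, 17):
    print(f"{s} dB squeezing -> per-round shift error ~ {gkp_shift_error(s):.2e}")
```

Even this crude model shows the error falling super-exponentially with squeezing, which is qualitatively why the achievable storage time above threshold grows so quickly in the paper's full analysis.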

Neural network approach to mitigating intra-gate crosstalk in superconducting CZ gates

Yiming Yu, Yexiong Zeng, Ye-Hong Chen, Franco Nori, Yan Xia

2603.21631 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a neural network approach called Physics-Guided Neural Control (PGNC) to create better control pulses for quantum gates in superconducting quantum computers. The method specifically targets reducing crosstalk errors during two-qubit CZ gate operations, showing improved gate fidelity compared to existing optimization methods.

Key Contributions

  • Development of Physics-Guided Neural Control framework for quantum gate optimization
  • Demonstration of superior CZ gate fidelity and robustness against crosstalk in superconducting transmon systems
superconducting qubits crosstalk mitigation neural networks CZ gate transmon
View Full Abstract

The potential of quantum computing is fundamentally constrained by the inherent susceptibility of qubits to noise and crosstalk, particularly during multi-qubit gate operations. Existing strategies, such as hardware isolation and dynamical decoupling, face limitations in scalability, experimental feasibility, and robustness against complex noise sources. In this manuscript, we propose a physics-guided neural control (PGNC) framework to generate robust control pulses for superconducting transmon qubit systems, specifically targeting crosstalk mitigation. By combining a hardware-aware parameterization with a Hamiltonian-informed objective that accounts for condition-dependent crosstalk distortions, PGNC steers the search toward smooth and physically realizable pulses while efficiently exploring high-dimensional control landscapes. Numerical simulations for the CZ gate demonstrate superior fidelity and pulse smoothness compared to a Krotov baseline under matched constraints. Taken together, the results show consistent and practically meaningful improvements in both nominal and perturbed conditions, with pronounced gains in worst-case fidelity, supporting PGNC as a viable route to robust control on near-term transmon devices.

Systematic construction of digital autonomous quantum error correction for state preparation and error suppression via conditional Gaussian operations

Keitaro Anai, Suguru Endo, Shuntaro Takeda, Tomohiro Shitara

2603.21598 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops a new approach for autonomous quantum error correction in continuous-variable quantum computing that uses conditional Gaussian operations to automatically steer noisy quantum states toward target states without requiring explicit measurements and feedback. The method is demonstrated for preparing non-Gaussian resource states needed for universal quantum computation and for suppressing errors in cat states.

Key Contributions

  • Development of nullifier-based digital autonomous quantum error correction using conditional Gaussian operations
  • Demonstration of autonomous preparation of non-Gaussian resource states including cubic phase states and trisqueezed states for universal quantum computation
  • Autonomous error suppression scheme for cat and squeezed cat states with explicit gate decompositions and realistic noise analysis
autonomous quantum error correction continuous-variable quantum computing conditional Gaussian operations non-Gaussian states cat states
View Full Abstract

In continuous-variable quantum computing, autonomous quantum error correction (QEC) can dissipatively steer a noisy quantum state into a target state or manifold, enabling robust quantum information processing without explicit syndrome measurements and feedback. Here, we propose a nullifier-based digital autonomous QEC enabled by conditional Gaussian operations. By designing jump operators for target nullifiers and compiling the resulting Lindbladian into a Trotterized sequence of elementary conditional Gaussian operations, we demonstrate two use cases: (i) deterministic preparation of non-Gaussian resource states for universal computation, including finitely squeezed cubic phase states and approximate trisqueezed states, and (ii) autonomous suppression of dephasing error for cat and squeezed cat states. We provide explicit gate decompositions for the required conditional Gaussian operations and numerically evaluate the performance under realistic imperfections, including photon loss in the bosonic mode and ancillary-qubit decoherence. Our results clarify the resource requirements and trade-offs, such as circuit depth, time-step choices, and the required set of conditional Gaussian operations, for scalable, gate-level implementations of autonomous state preparation and error suppression.

High-yield integration design of fixed-frequency superconducting qubit systems using siZZle-CZ gates

Kazuhisa Ogawa, Yutaka Tabuchi, Makoto Negoro

2603.21537 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces the siZZle-CZ gate as an alternative to cross-resonance gates for fixed-frequency superconducting qubits, demonstrating that it can achieve high fidelities while being more robust to frequency collisions that limit manufacturing yields in large quantum processors.

Key Contributions

  • Development of siZZle-CZ gate architecture that relaxes frequency collision constraints in superconducting qubit systems
  • Demonstration of >99.6% fidelity controlled-Z gates across wide operating windows
  • Design of scalable lattice architectures with >1000 qubits showing 80-100% zero-collision yields
superconducting qubits transmon controlled-Z gate quantum gate fidelity scalable quantum computing
View Full Abstract

Fixed-frequency transmon qubits, characterized by simple architectures and long coherence times, are promising platforms for large-scale quantum computing. However, the rapidly increasing frequency collisions, which directly reduce the fabrication yield, hinder scaling, especially in cross-resonance (CR) gate-based architectures, wherein the restricted drive frequency severely limits the available design space. We investigate the Stark-induced ZZ by level excursions (siZZle) gate, which relaxes this limitation by allowing arbitrary drive-frequency choices. Extensive numerical analyses across a broad parameter range -- including the far-detuned regime that has received negligible prior attention -- reveal wide operating windows that support controlled-Z (CZ) fidelities >99.6%. Leveraging these windows, we design lattice architectures containing >1000 qubits, showing that even under 0.25% fabrication-induced frequency dispersion, the zero-collision yields in square and heavy-hexagonal lattices reach 80% and 100%, respectively. Thus, the siZZle-CZ gate is a scalable and collision-robust alternative to the CR gate, offering a viable route toward high-yield fixed-frequency transmon quantum processors.
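The yield argument (zero-collision probability under fabrication dispersion) can be conveyed with a toy Monte Carlo. This sketch samples neighbor frequencies on a 1D chain with Gaussian scatter and counts samples with no pair inside a collision window; the target frequencies, window width, and chain geometry are all illustrative assumptions, not the paper's lattice model.

```python
import random

def zero_collision_yield(n_qubits=20, n_trials=500, f0=5.0,
                         dispersion=0.0025, window=0.002, seed=1):
    """Toy Monte Carlo for zero-collision yield on a 1D qubit chain.

    Each qubit targets an alternating frequency pattern around f0 (GHz);
    fabrication scatters it by a relative Gaussian `dispersion`.  A
    'collision' is any neighboring pair closer than `window` GHz.  All
    numbers here are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    targets = [f0 + 0.1 * (i % 2) for i in range(n_qubits)]  # alternating pattern
    good = 0
    for _ in range(n_trials):
        freqs = [rng.gauss(t, dispersion * t) for t in targets]
        if all(abs(freqs[i] - freqs[i + 1]) > window
               for i in range(n_qubits - 1)):
            good += 1
    return good / n_trials
```

Widening the drive-frequency design space, as the siZZle-CZ gate does, corresponds here to being able to choose target patterns with larger guaranteed separations, which pushes this yield toward 1.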

Optimal Compilation of Syndrome Extraction Circuits for General Quantum LDPC Codes

Kai Zhang, Dingchao Gao, Zhaohui Yang, Runshi Zhou, Fangming Liu, Zhengfeng Ji, Jianxin Chen

2603.21499 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents Auto-Stabilizer-Check (ASC), a software framework that automatically generates optimal quantum circuits for error correction in quantum low-density parity-check codes. ASC reduces circuit depth by approximately 50% and achieves 7-8x better error suppression compared to existing methods, making these advanced error correction codes more practical for large-scale quantum computers.

Key Contributions

  • Development of ASC framework for optimal syndrome extraction circuit compilation for arbitrary qLDPC codes
  • Definitive solution to IBM's open problem regarding depth-6 syndrome extraction circuits for bivariate bicycle codes
  • 50% reduction in circuit depth and 7-8x improvement in logical error rate suppression compared to existing methods
quantum error correction qLDPC codes syndrome extraction circuit compilation fault tolerance
View Full Abstract

Quantum error correcting codes (QECC) are essential for constructing large-scale quantum computers that deliver faithful results. As strong competitors to the conventional surface code, quantum low-density parity-check (qLDPC) codes are emerging rapidly: they offer high encoding rates while maintaining reasonable physical-qubit connectivity requirements. Despite the existence of numerous code constructions, a notable gap persists between these designs -- some of which remain purely theoretical -- and their circuit-level deployment. In this work, we propose Auto-Stabilizer-Check (ASC), a universal compilation framework that generates depth-optimal syndrome extraction circuits for arbitrary qLDPC codes. ASC leverages the sparsity of parity-check matrices and exploits the commutativity of X and Z stabilizer measurement subroutines to search for optimal compilation schemes. By iteratively invoking an SMT solver, ASC returns a depth-optimal solution if a satisfying assignment is found, and a near-optimal solution in cases of solver timeouts. Notably, ASC provides the first definitive answer to one of IBM's open problems: for all instances of bivariate bicycle (BB) code reported in their work, our compiler certifies that no depth-6 syndrome extraction circuit exists. Furthermore, by integrating ASC with an end-to-end evaluation framework -- one that assesses different compilation settings under a circuit-level noise model -- ASC reduces circuit depth by approximately 50% and achieves an average 7x-8x suppression of the logical error rate for general qLDPC codes, compared with as-soon-as-possible (ASAP) and coloration-based scheduling. ASC thus substantially reduces manual design overhead and demonstrates its strong potential to serve as a key component in accelerating hardware deployment of qLDPC codes.
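The scheduling problem ASC solves optimally can be illustrated with the naive baseline it improves on: pack the two-qubit gates of a syndrome-extraction circuit into layers so that no qubit is touched twice in a layer. The greedy packer below gives *a* valid depth, whereas ASC's SMT search certifies the optimal one; the interaction lists are hypothetical.

```python
def greedy_cnot_layers(cnots):
    """Greedily pack CNOTs into layers with no qubit used twice per layer.

    `cnots` is a list of (control, target) pairs from a syndrome-extraction
    circuit.  This is the simple as-soon-as-possible-style baseline that
    solver-based compilers like ASC improve on: greedy packing yields a
    valid circuit depth, not a provably minimal one.
    """
    layers = []
    for c, t in cnots:
        for layer in layers:
            busy = {q for pair in layer for q in pair}
            if c not in busy and t not in busy:
                layer.append((c, t))
                break
        else:
            layers.append([(c, t)])
    return layers

# A weight-4 check qubit "a" touching data qubits 1-4 forces depth 4,
# while gates on disjoint qubits share a layer.
depth = len(greedy_cnot_layers([("a", 1), ("a", 2), ("a", 3), ("a", 4)]))
```

Exploiting the commutativity of X- and Z-check subroutines, as ASC does, effectively enlarges the set of orderings such a packer is allowed to consider.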

Analyzing Decoders for Quantum Error Correction

Abtin Molavi, Feras Saad, Aws Albarghouthi

2603.20127 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new systematic method for evaluating quantum error correction decoders that can outperform traditional Monte Carlo simulation, especially at low error rates. The approach uses structured search over possible errors and polynomial optimization to quantify both decoder accuracy and robustness to changes in physical error rates.

Key Contributions

  • Novel formal semantics for QEC programs based on the Stim circuit format
  • Systematic decoder evaluation method using structured error space search and constrained polynomial optimization that outperforms Monte Carlo simulation
quantum error correction decoder analysis fault tolerance Stim circuits polynomial optimization
View Full Abstract

Quantum error correction (QEC) enables reliable computation on noisy hardware by encoding logical information across many physical qubits and periodically measuring parities to detect errors. A decoder is the classical algorithm that uses these measurements to infer which error most likely occurred, so that the system can correct it. The decoder's accuracy-how rarely it makes the wrong guess-directly determines the scale of quantum computation that can be reliably executed. With a wealth of competing decoding algorithms, a QEC system designer needs reliable methods to evaluate them. Today, the dominant approach is to evaluate decoders using Monte Carlo simulation. However, simulation has several drawbacks such as requiring many samples to produce low variance estimates. In this work, we develop a new systematic analysis for evaluating decoders. We introduce a novel formal semantics of a core language for QEC programs that captures the de facto standard Stim circuit format, providing a principled theoretical foundation for the emerging space of fault-tolerant quantum systems design. Given a QEC program and a decoder, our verifier can quantify both the decoder accuracy and the decoder robustness to drift in physical error rate. Our approach has two key components: (i) a structured search over the space of possible errors; and (ii) a constrained polynomial optimization kernel. A thorough empirical evaluation of our approach suggests that it can outperform simulation, especially in low error rate regimes, and that it can be deployed to quantify decoder robustness over an interval of physical error rates.
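The sampling drawback the abstract mentions is easy to quantify: a Monte Carlo estimate of a logical error rate p has standard error sqrt(p(1-p)/N), so its *relative* uncertainty blows up as p shrinks. A minimal sketch, treating the decoder as a black box that fails with a fixed probability per shot (an assumption made purely for illustration):

```python
import math
import random

def mc_logical_error_estimate(p_true, shots, seed=0):
    """Monte Carlo estimate of a logical error rate, with its standard error.

    Models the decoder as a black box that fails independently with
    probability `p_true` on each shot.  Illustrates why simulation struggles
    in low-error-rate regimes: the relative standard error scales like
    1 / sqrt(shots * p_true).
    """
    rng = random.Random(seed)
    fails = sum(rng.random() < p_true for _ in range(shots))
    p_hat = fails / shots
    stderr = math.sqrt(p_hat * (1 - p_hat) / shots)
    return p_hat, stderr

p_hat, stderr = mc_logical_error_estimate(0.1, 100000)
```

At p ~ 1e-6, the same relative precision would need millions of times more shots, which is the regime where the paper's structured search and polynomial optimization pay off.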

Adaptive Parallelism-Aware Qubit Routing for Ion Trap QCCD Architectures

Anabel Ovide, Andreu Angles-Castillo, Carmen G. Almudever

2603.19969 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents a new method for efficiently moving qubits (trapped ions) between different zones in modular quantum computers, optimizing both the physical transport of ions and the parallel execution of quantum operations to improve overall performance and fidelity.

trapped-ion QCCD qubit routing ion transport quantum compilation
View Full Abstract

Trapped-ion Quantum Charge-Coupled Device (QCCD) architectures promise scalability through interconnected trap zones and dynamic ion transport; however, this transport capability creates a complex compilation challenge: how to move qubits efficiently without degrading fidelity. We introduce a routing strategy that turns this challenge into an advantage by exploiting operational parallelism across traps while adapting to both algorithmic structure and device topology through a configurable multi-parameter scoring mechanism. Across a broad suite of benchmarks and QCCD layouts, the method consistently reduces ion-transport overhead and improves execution fidelity, outperforming state-of-the-art routing techniques. These results highlight that explicitly balancing movement overhead and execution parallelism under architectural constraints is key to unlocking the full potential of modular trapped-ion quantum processors.
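The idea of a configurable multi-parameter scoring mechanism can be sketched as a weighted trade-off between transport overhead and exposed parallelism. The field names and weights below are hypothetical, chosen only to mirror the balancing act the paper describes, not its actual scoring function.

```python
def pick_route(candidates, w_move=1.0, w_par=0.5):
    """Choose an ion-transport plan by a configurable weighted score.

    Each candidate is a dict with a movement-overhead count ("moves") and
    the number of gates it lets run in parallel ("parallel_gates").  The
    idea -- balancing transport cost against parallelism with tunable
    weights -- mirrors the paper's scoring mechanism; the field names and
    default weights here are hypothetical.
    """
    def score(c):
        return w_move * c["moves"] - w_par * c["parallel_gates"]
    return min(candidates, key=score)

# Fewer moves usually wins, unless another plan unlocks enough parallelism.
best = pick_route([{"moves": 4, "parallel_gates": 1},
                   {"moves": 6, "parallel_gates": 8}])
```

Tuning the weights per algorithm and per QCCD topology is what makes such a scheme "adaptive" in the sense of the title.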

SDP bounds on quantum codes: rational certificates

Gerard Anglès Munné, Felix Huber

2603.19901 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops mathematical methods to determine the maximum possible size of quantum error-correcting codes with given parameters. The researchers use semidefinite programming with rational certificates to rigorously prove improved upper bounds on code sizes for quantum systems with 6-19 qubits.

Key Contributions

  • Development of rational infeasibility certificates for semidefinite programming bounds on quantum codes
  • Improvement of 18 upper bounds on maximum quantum code sizes for n-qubit systems with 6 ≤ n ≤ 19
quantum error correction quantum codes semidefinite programming coding theory fault tolerance
View Full Abstract

A fundamental problem in quantum coding theory is to determine the maximum size of quantum codes of given block length and distance. A recent work introduced bounds based on semidefinite programming, strengthening the well-known quantum linear programming bounds. However, floating-point inaccuracies prevent the extraction of rigorous non-existence proofs from the numerical methods. Here, we address this by providing rational infeasibility certificates for a range of quantum codes. Using a clustered low-rank solver with heuristic rounding to algebraic expressions, we can improve upon $18$ upper bounds on the maximum size of $n$-qubit codes with $6 \leq n \leq 19$. Our work highlights the practicality and scalability of semidefinite programming for quantum coding bounds.
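The role of a rational certificate, checkable in exact arithmetic with no floating-point doubt, can be shown on a toy linear program via Farkas' lemma rather than an SDP. This is a deliberately shrunken analogue of the paper's construction, using Python's exact `Fraction` type:

```python
from fractions import Fraction as F

def verify_farkas_certificate(A, b, y):
    """Exactly verify a Farkas infeasibility certificate in rational arithmetic.

    By Farkas' lemma, Ax = b with x >= 0 has no solution iff some y satisfies
    A^T y >= 0 (componentwise) and b^T y < 0.  Verifying the certificate
    needs only exact rational arithmetic -- the same role the paper's
    rational certificates play for SDP bounds, here shrunk to a toy LP.
    """
    m, n = len(A), len(A[0])
    At_y = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    b_y = sum(b[i] * y[i] for i in range(m))
    return all(v >= 0 for v in At_y) and b_y < 0

# x1 + x2 = -1 has no nonnegative solution; y = (1,) certifies this exactly.
A = [[F(1), F(1)]]
b = [F(-1)]
certified = verify_farkas_certificate(A, b, [F(1)])
```

The paper's heuristic step (rounding a numerical SDP solution to algebraic expressions) produces exactly this kind of object: once the certificate is rational, verification is rigorous regardless of how it was found.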

Linear-optical generation of hybrid GKP entanglement from small-amplitude cat states

Shohei Kiryu, Yohji Chin, Masahiro Takeoka, Kosuke Fukui

2603.19870 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper proposes a method to create hybrid quantum states that combine two different types of quantum error correction codes using only standard optical equipment and small cat states. The approach could make fault-tolerant quantum computing more experimentally feasible by avoiding the need for complex non-Gaussian resources.

Key Contributions

  • Novel linear-optical scheme for generating hybrid GKP-photon entangled states using only small-amplitude cat states
  • Breeding process method to increase non-Gaussianity without complex resources
  • Extension to hybrid qudit states for enhanced quantum error correction capabilities
GKP codes hybrid bosonic codes linear optics cat states quantum error correction
View Full Abstract

Hybrid bosonic codes combining bosonic codes with photon states offer a promising pathway for fault-tolerant quantum computation. However, the efficient generation of such states in optical setups remains technically challenging due to the requirement for complex non-Gaussian resources. In this paper, we propose a novel scheme to efficiently generate hybrid entangled states between a GKP qubit and a photon-number state using small-amplitude cat states as the primary resource. We apply a breeding process using small-amplitude cat states to increase the non-Gaussianity of the input states. This method requires only linear optical elements and homodyne measurements. Furthermore, we demonstrate that this protocol can be extended to generate hybrid qudit states. This scheme has the potential to provide a resource-efficient and experimentally attractive route toward implementing hybrid quantum error correction.

Beyond-Ten-Hour Coherence in a Decoherence-Free Trapped-Ion Clock Qubit

Jiahao Pi, Xiangjia Liu, Junle Cao, Pengfei Wang, Lingfeng Ou, Erfu Gao, Hengchao Tu, Menglin Zou, Xiang Zhang, Junhua Zhang, Kihwan Kim

2603.19631 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: high Network: medium

This paper demonstrates quantum coherence lasting over 10 hours in trapped ion systems by combining clock-state qubits with decoherence-free subspace encoding. The technique uses pairs of ytterbium and barium ions to reject noise and maintain quantum information without requiring magnetic shielding or complex stabilization systems.

Key Contributions

  • Achieved >10 hour coherence times in trapped ion qubits using decoherence-free subspace encoding
  • Demonstrated passive error correction technique that eliminates technical noise constraints without magnetic shielding
  • Established pathway toward million-year coherence potential in atomic ion quantum systems
trapped ions decoherence-free subspace quantum coherence clock states quantum memory
View Full Abstract

Quantum systems promise to revolutionize information processing science and technology [1-3]. The preservation of quantum coherence, the defining property of qubits, fundamentally constrains the performance of quantum information processing with quantum memories [4]. While trapped atomic ions theoretically support million-year coherence based on spontaneous emission [5-7], experimental demonstrations have fallen far short, reaching only about an hour [8-13]. Here we combine clock-state qubits with decoherence-free subspace (DFS) encoding to achieve coherence exceeding ten hours. Using correlation-based phase tracking in 171Yb+ ion pairs sympathetically cooled by a 138Ba+ ion, we demonstrate this without the magnetic shielding or enhanced microwave phase stabilization that previously limited coherence times. DFS encoding references the qubit phase to the inter-ion energy difference to reject microwave phase noise and common-mode magnetic fluctuations, while clock states provide environmental insensitivity. Throughout measurements extended to 1600 seconds, we observe minimal coherence decay, with exponential fits yielding a coherence time of (3.77 +/- 1.09) x 10^4 seconds. Our results establish DFS encoding as a form of passive error correction that eliminates technical noise constraints, unlocking the million-year coherence potential of atomic ions for scalable quantum information processing.
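The "exponential fits yielding a coherence time" step is a standard procedure that can be sketched with a linearized least-squares fit of C(t) = exp(-t/T2). The data below are synthetic, generated from an assumed T2 near the paper's reported value purely to show the fit recovers it; real inputs would be measured Ramsey contrasts.

```python
import math

def fit_coherence_time(times, contrasts):
    """Fit C(t) = exp(-t / T2) by least squares on log C(t).

    A linearized version of the exponential fit used to extract coherence
    times from decay data.  Returns the fitted T2 in the same units as
    `times`.  Real inputs would be measured contrasts; here any noiseless
    exponential is recovered exactly.
    """
    xs, ys = times, [math.log(c) for c in contrasts]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -1.0 / slope

# Synthetic noiseless decay with T2 = 37700 s (an assumed value) is
# recovered by the fit.
ts = [0, 400, 800, 1600]
t2 = fit_coherence_time(ts, [math.exp(-t / 37700) for t in ts])
```

With measurement noise, a weighted fit (or a nonlinear fit on the raw contrasts) gives the quoted uncertainty band; the log-linear version above is the minimal form of the idea.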

Stabilizer Formalism for EAQECCs with Noise ebits

Ruihu Li, Guanmin Guo, Yang Liu, Hao Song

2603.19597 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper develops a mathematical framework called stabilizer formalism for entanglement-assisted quantum error correcting codes (EAQECCs) that can work with imperfect entangled bits (ebits). The work provides theoretical tools to construct and analyze quantum error correction schemes when the shared entanglement resources contain noise.

Key Contributions

  • Development of stabilizer formalism for EAQECCs with noisy ebits
  • Derivation of equivalent formalisms using symplectic geometry and additive codes
  • Construction and performance analysis of specific EAQECCs with noise ebits
quantum error correction stabilizer codes entanglement-assisted codes noisy entanglement symplectic geometry
View Full Abstract

We introduce a stabilizer formalism for EAQECCs with noise ebits, using special subgroups of product groups of two Pauli groups. This formalism includes, as special cases, the two coding schemes for EAQECCs with imperfect ebits given by Lai and Brun (C. Y. Lai and T. A. Brun, Phys. Rev. A 86, 032319 (2012)). Two equivalent reformulations are then derived in the language of symplectic geometry and additive codes. We apply this theory to construct some EAQECCs with noise ebits and analyze their performance.
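The symplectic language the abstract refers to rests on one basic fact: writing an n-qubit Pauli as a binary vector pair (x|z), two Paulis commute iff their binary symplectic product vanishes mod 2. A minimal sketch of that primitive:

```python
def symplectic_product(p, q):
    """Binary symplectic product of two n-qubit Paulis p = (x|z), q = (x'|z').

    Two Pauli operators commute iff this product is 0 (mod 2) -- the basic
    fact underlying stabilizer formalisms, stated here in the symplectic
    language the abstract refers to.  Each Pauli is a pair of equal-length
    bit lists (x, z).
    """
    (x1, z1), (x2, z2) = p, q
    return (sum(a * b for a, b in zip(x1, z2)) +
            sum(a * b for a, b in zip(z1, x2))) % 2

# Single-qubit X = ([1],[0]) and Z = ([0],[1]) anticommute; X and X commute.
xz = symplectic_product(([1], [0]), ([0], [1]))
xx = symplectic_product(([1], [0]), ([1], [0]))
```

Stabilizer groups are exactly the subgroups on which this product vanishes pairwise; the paper's subgroups of products of two Pauli groups generalize that picture to account for noisy ebits.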

Preserving MWPM-Decodability in Fault-Equivalent Rewrites

Maximilian Schweikart, Linnea Grans-Samuelsson, Aleks Kissinger, Benjamin Rodatz

2603.19522 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops methods to preserve the efficient decodability of quantum error correction codes when implementing fault-tolerant quantum circuits. The authors show how to maintain the special mathematical structure that allows fast decoding while constructing practical quantum computing operations.

Key Contributions

  • Formalized how ZX circuit rewrites affect quantum error correction decodability
  • Identified specific circuit transformations that preserve minimum-weight perfect matching decodability
  • Demonstrated construction of efficiently decodable fault-tolerant syndrome extraction circuits for matchable codes
quantum error correction fault tolerance minimum weight perfect matching ZX calculus surface codes
View Full Abstract

Decoding a quantum error correction code is generally NP-hard, but corrections must be applied at a high frequency to suppress noise successfully. Matchable codes, like the surface code, exhibit a special structure that makes it possible to efficiently, approximately solve the decoding problem through minimum-weight perfect matching (MWPM). However, this efficiency-enabling property can be lost when constructing implementations for fault-tolerant gadgets such as syndrome-extraction circuits or logical operations. In this work, we take a circuit-centric perspective to formalise how the decoding problem changes when applying ZX rewrites to a ZX diagram with a given detector basis. We demonstrate a set of rewrites that preserve MWPM-decodability of circuits and show that these matchability-preserving rewrites can be used to fault-tolerantly extract quantum circuits from phase-free ZX diagrams. In particular, this allows us to build efficiently decodable, fault-tolerant syndrome-extraction circuits for matchable codes.
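What MWPM actually computes is easiest to see on a 1D repetition code, where bit-flip chains create syndrome "defects" in pairs and decoding pairs them back up at minimum total distance. The exhaustive recursion below conveys the objective; production decoders solve the same problem in polynomial time with Blossom-style algorithms, and this toy ignores boundaries.

```python
def min_weight_matching(defects):
    """Brute-force minimum-weight perfect matching of an even set of defects.

    `defects` holds positions on a line where the syndrome fired.  MWPM
    decoding pairs them up with minimum total distance; this exhaustive
    recursion conveys the objective, while real decoders use
    polynomial-time Blossom-style algorithms (and handle boundaries).
    """
    if not defects:
        return 0
    first, rest = defects[0], defects[1:]
    return min(abs(first - rest[i]) +
               min_weight_matching(rest[:i] + rest[i + 1:])
               for i in range(len(rest)))

# Defects at sites 2,3 and 7,8 (two separate short error chains): the
# minimum-weight pairing matches each chain's endpoints, total weight 2.
cost = min_weight_matching([2, 3, 7, 8])
```

The "matchability" the paper preserves is the property that every fault flips at most two detectors, which is precisely what lets the decoding problem be phrased as a matching like this one.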

Assessing Spatiotemporally Correlated Noise in Superconducting Qubits via Pulse-Based Quantum Noise Spectroscopy

Mayra Amezcua, Leigh Norris, Tom Gilliss, Ryan Sitler, James Shackford, Gregory Quiroz, Kevin Schultz

2603.19373 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper develops a new method called quantum noise spectroscopy (QNS) to characterize correlated noise between multiple superconducting qubits, which is important for understanding and mitigating errors that can spread across quantum devices. The researchers demonstrate their technique can better identify these problematic noise correlations compared to existing methods.

Key Contributions

  • Development of nonparametric quantum noise spectroscopy protocol for characterizing spatiotemporally correlated noise in multi-qubit systems
  • Demonstration of superior performance over existing comb-based QNS protocols for noise characterization
  • Validation through engineered noise processes and application to quantum crosstalk characterization
quantum noise spectroscopy superconducting qubits spatiotemporal correlation quantum crosstalk error correction
View Full Abstract

Spatiotemporally correlated errors are widespread in quantum devices and are particularly adversarial to error correcting schemes. To characterize these errors, we propose and validate a nonparametric quantum noise spectroscopy (QNS) protocol to estimate both spectra and static errors associated with spatiotemporally correlated dephasing noise and fluctuating quantum crosstalk on two qubits. Our scheme reconstructs the real and imaginary components of the two-qubit cross-spectrum by using fixed total time pulse sequences and single qubit and joint two-qubit measurements to separately resolve spatially correlated noise processes. We benchmark our protocol by reconstructing the spectra of spatiotemporally correlated noise processes engineered via the Schrödinger Wave Autoregressive Moving Average technique, emulating dephasing errors. Furthermore, we show that the protocol can outperform existing comb-based QNS protocols. Our results demonstrate the utility of our protocol in characterizing spatiotemporally correlated noise and quantum crosstalk in a multi-qubit device for potential use in noise-adapted control or error protection schemes.

Low-weight quantum syndrome errors in belief propagation decoding

Haggai Landa

2603.19126 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops methods to identify problematic low-weight error patterns in quantum error correction codes that cause belief propagation decoding algorithms to converge slowly or fail. The authors analyze how these decoding failures occur and propose improvements to the decoder by modifying the decoding matrix to reduce both logical errors and decoding time.

Key Contributions

  • Empirical method to identify low-weight error syndromes that cause belief propagation decoding convergence issues
  • Analysis of BP dynamics for weight-four and weight-five errors showing exponential activation behavior
  • Decoder improvement technique using fault column combinations to reduce logical errors and decoding time
quantum error correction belief propagation syndrome decoding low-density parity check fault tolerance
View Full Abstract

We describe an empirical approach to identify low-weight combinations of columns of the decoding matrices of a quantum circuit-level noise model for which belief-propagation (BP) algorithms may converge very slowly. Focusing on the logical-idle syndrome cycle of the low-density parity check gross code, we identify criteria providing a characterization of the Tanner subgraph of such low-weight error syndromes. We analyze the dynamics of iterations when BP is used to decode weight-four and weight-five errors, finding statistics akin to exponential activation in the presence of noise or escape from chaotic phase-space domains. We study how BP convergence improves when adding relevant combinations of fault columns to the decoding matrix, and show that the suggested decoder amendment can reduce both logical errors and decoding time.
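The notion of problematic low-weight column combinations can be illustrated by brute force: in a binary check matrix, column sets that XOR to zero are undetectable error patterns, one source of the degenerate syndromes that confuse BP. This toy search is far cruder than the paper's criteria (which operate on the Tanner subgraph of circuit-level decoding matrices), but shows the object being hunted; the example matrix is made up.

```python
from itertools import combinations

def low_weight_null_combos(columns, max_weight):
    """Brute-force search for low-weight column combinations that cancel.

    `columns` is a binary check matrix given column-by-column (tuples of
    bits).  Column sets whose XOR is the zero vector correspond to
    undetectable error patterns -- one simple source of the degenerate,
    hard-to-decode syndromes the paper probes with subtler criteria.
    """
    n, m = len(columns), len(columns[0])
    hits = []
    for w in range(2, max_weight + 1):
        for idx in combinations(range(n), w):
            acc = [0] * m
            for i in idx:
                acc = [(a + b) % 2 for a, b in zip(acc, columns[i])]
            if not any(acc):
                hits.append(idx)
    return hits

# A toy matrix with a duplicated column: columns 0 and 2 cancel.
H = [(1, 0), (1, 1), (1, 0)]
combos = low_weight_null_combos(H, 2)
```

The paper's decoder amendment, adding relevant fault-column combinations to the decoding matrix, amounts to giving BP explicit variables for patterns found by this kind of search.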

Post-Quantum Cryptography from Quantum Stabilizer Decoding

Jonathan Z. Lu, Alexander Poremba, Yihui Quek, Akshar Ramkumar

2603.19110 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: low

This paper proposes quantum stabilizer code decoding as a new hardness assumption for post-quantum cryptography, showing it can support key cryptographic primitives like public-key encryption and oblivious transfer. The authors argue this provides a quantum-native alternative to current post-quantum assumptions that could be more resistant to both classical and quantum attacks.

Key Contributions

  • Establishing quantum stabilizer decoding as a viable post-quantum cryptographic assumption with reductions to core cryptographic primitives
  • Developing new scrambling techniques for structured linear spaces with symplectic algebraic structure to enable security proofs
post-quantum cryptography quantum stabilizer codes cryptographic hardness assumptions public-key encryption oblivious transfer
View Full Abstract

Post-quantum cryptography currently rests on a small number of hardness assumptions, posing significant risks should any one of them be compromised. This vulnerability motivates the search for new and cryptographically versatile assumptions that make a convincing case for quantum hardness. In this work, we argue that decoding random quantum stabilizer codes -- a quantum analog of the well-studied LPN problem -- is an excellent candidate. This task occupies a unique middle ground: it is inherently native to quantum computation, yet admits an equivalent formulation with purely classical input and output, as recently shown by Khesin et al. (STOC '26). We prove that the average-case hardness of quantum stabilizer decoding implies the core primitives of classical Cryptomania, including public-key encryption (PKE) and oblivious transfer (OT), as well as one-way functions. Our constructions are moreover practical: our PKE scheme achieves essentially the same efficiency as state-of-the-art LPN-based PKE, and our OT is round-optimal. We also provide substantial evidence that stabilizer decoding does not reduce to LPN, suggesting that the former problem constitutes a genuinely new post-quantum assumption. Our primary technical contributions are twofold. First, we give a reduction from random quantum stabilizer decoding to an average-case problem closely resembling LPN, but which is equipped with additional symplectic algebraic structure. While this structure is essential to the quantum nature of the problem, it raises significant barriers to cryptographic security reductions. Second, we develop a new suit of scrambling techniques for such structured linear spaces, and use them to produce rigorous security proofs for all of our constructions.
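The classical problem the abstract cites as the nearest cousin of stabilizer decoding, learning parity with noise (LPN), is simple to state in code: each sample is a random vector together with its noisy inner product against a secret. The generator below is a textbook LPN sketch; the quantum stabilizer version adds the symplectic structure this toy ignores.

```python
import random

def lpn_samples(secret, n_samples, noise_rate, seed=3):
    """Generate learning-parity-with-noise (LPN) samples.

    Each sample is (a, <a, s> + e mod 2) for a uniformly random bit vector
    a and a Bernoulli(noise_rate) error bit e.  Recovering the secret s
    from many such samples is the well-studied hardness assumption the
    abstract compares stabilizer decoding to; the quantum version carries
    extra symplectic algebraic structure that this toy omits.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        a = [rng.randrange(2) for _ in secret]
        e = 1 if rng.random() < noise_rate else 0
        b = (sum(x * y for x, y in zip(a, secret)) + e) % 2
        out.append((a, b))
    return out
```

With noise_rate = 0 the secret is recoverable by Gaussian elimination; the noise is what makes the problem (conjecturally) hard, for classical and quantum attackers alike.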

Fair Decoder Baselines and Rigorous Finite-Size Scaling for Bivariate Bicycle Codes on the Quantum Erasure Channel

Tushar Pandey

2603.19062 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper evaluates bivariate bicycle quantum error-correcting codes on erasure channels, addressing unfair decoder comparisons in previous work and using rigorous statistical methods to estimate true asymptotic error thresholds. The study shows these codes can achieve near-optimal performance without maximum-likelihood decoding and outperform surface codes in some metrics.

Key Contributions

  • Establishes fair decoder baselines for comparing bivariate bicycle codes against surface codes on quantum erasure channels
  • Provides rigorous finite-size scaling analysis to estimate true asymptotic error thresholds rather than finite-size pseudo-thresholds
  • Demonstrates bivariate bicycle codes achieve ~0.488 threshold within 2.4% of theoretical limit with 12x lower normalized overhead than surface codes
quantum error correction bivariate bicycle codes surface codes quantum erasure channel finite-size scaling
View Full Abstract

Fair threshold estimation for bivariate bicycle (BB) codes on the quantum erasure channel runs into two recurring problems: decoder-baseline unfairness and the conflation of finite-size pseudo-thresholds with true asymptotic thresholds. We run both uninformed and \emph{erasure-aware} minimum-weight perfect matching (MWPM) surface code baselines alongside BP-OSD decoding of BB codes. With standard depolarizing-weight MWPM and no erasure information, performance matches random guessing on the erasure channel in our tested regime -- so prior work that compares against this baseline is really comparing decoders, not codes. Using 200{,}000 shots per point and bootstrap confidence intervals, we sweep five BB code sizes from $N=144$ to $N=1296$. Pseudo-thresholds (WER = 0.10) run from $p^* = 0.370$ to $0.471$; finite-size scaling (FSS) gives an asymptotic threshold $p^*_\infty \approx 0.488$, within 2.4\% of the zero-rate limit and without maximum-likelihood decoding. On the fair baseline, BB at $N=1296$ has a modest edge in threshold over the surface code at twice the qubit count, and a 12$\times$ lower normalized overhead -- the latter is where the practical advantage sits. All runs are reproducible from recorded seeds and package versions.
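The bootstrap confidence intervals mentioned in the abstract can be sketched with a percentile (parametric) bootstrap on a word-error-rate estimate. The counts below are illustrative, not the paper's data; the paper's runs use 200,000 shots per point.

```python
import random

def bootstrap_wer_ci(failures, shots, n_boot=1000, seed=7):
    """Percentile parametric-bootstrap 95% confidence interval for a WER.

    Resamples Bernoulli outcomes at the observed failure fraction and takes
    the 2.5% and 97.5% percentiles of the resampled estimates -- the same
    style of uncertainty quantification the paper applies to its decoder
    comparisons.  Inputs here are illustrative counts.
    """
    rng = random.Random(seed)
    p_hat = failures / shots
    estimates = []
    for _ in range(n_boot):
        re_fail = sum(rng.random() < p_hat for _ in range(shots))
        estimates.append(re_fail / shots)
    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

lo, hi = bootstrap_wer_ci(100, 1000, n_boot=200)
```

Quoting such intervals alongside pseudo-thresholds is what lets a finite-size-scaling fit distinguish a genuine asymptotic threshold from finite-size drift.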

XCOM: Full Mesh Network Synchronization and Low-Latency Communication for QICK (Quantum Instrumentation Control Kit)

Diego Martin, Luis H. Arnaldi, Kenneth Treptow, Neal Wilcer, Sho Uemura, Sara Sussman, David I Schuster, Gustavo Cancelo

2603.18977 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper presents XCOM, a networking system that enables precise synchronization (within 100 picoseconds) and low-latency communication between multiple quantum control boards in large-scale quantum computing systems. The system addresses the critical challenge of coordinating many hardware components needed to control hundreds or thousands of qubits in superconducting and spin qubit testbeds.

Key Contributions

  • Development of the XCOM network achieving sub-100 ps synchronization between quantum control boards
  • Enabling scalable multi-board control systems for large-qubit-count quantum computers
  • Providing deterministic all-to-all communication with sub-185 ns latency for quantum control hardware
quantum control hardware synchronization QICK superconducting qubits scalable quantum systems
View Full Abstract

Quantum computing experiments and testbeds with large qubit counts have until recently been a privilege afforded only to large companies or quantum technologies where scaling to hundreds or thousands of qubits does not require a substantial increase in quantum control hardware (neutral atoms, trapped ions, or spin defects). Superconducting and spin qubit testbeds critically depend on scaling their control systems beyond what a single electronics board can provide. Multi-board control systems combining RF, fast DC control, bias, and readout require precise synchronization and communication across many hardware and firmware components. To address this, we present XCOM, a network that synchronizes QICK boards and the absolute clocks governing quantum program execution to within 100 ps, free of drift and loss of lock. XCOM also provides deterministic, all-to-all simultaneous data communication with latency below 185 ns. Like QICK itself, XCOM is compatible with a broad range of qubit technologies and is designed to scale to large systems.

A Flexible GKP-State-Embedded Fault-Tolerant Quantum Computation Configuration Based on a Three-Dimensional Cluster State

Peilin Du, Jing Zhang, Tiancai Zhang, Rongguo Yang, Kui Liu, Jiangrui Gao

2603.18778 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper proposes a new architecture for fault-tolerant quantum computing that uses three-dimensional cluster states built from optical photons with different properties (polarization, frequency, and orbital angular momentum). The researchers combine this with Gottesman-Kitaev-Preskill (GKP) error correction codes to create a flexible, scalable system for reliable quantum computation.

Key Contributions

  • Novel three-dimensional cluster state architecture using multiple optical degrees of freedom
  • Integration of partially squeezed surface-GKP codes achieving a fault-tolerant squeezing threshold of 11.5 dB
fault-tolerant quantum computing GKP states cluster states optical quantum computing error correction
View Full Abstract

The integration of diverse quantum resources and the exploitation of more degrees of freedom provide key operational flexibility for universal fault-tolerant quantum computation. In this work, we propose a flexible Gottesman-Kitaev-Preskill-state-embedded fault-tolerant quantum computation architecture based on a three-dimensional cluster state constructed in polarization, frequency, and orbital angular momentum domains. Specifically, we design optical entanglement generators to produce three diverse entangled pairs, and subsequently construct a three-dimensional cluster state via a beam-splitter network with several time delays. Furthermore, we present a partially squeezed surface-GKP code to achieve fault-tolerant quantum computation and ultimately find the optimal choice of implementing the squeezing gate to give the best fault-tolerant performance (the fault-tolerant squeezing threshold is 11.5 dB). Our scheme is flexible, scalable, and experimentally feasible, providing versatile options for future optical fault-tolerant quantum computation architecture.

High-threshold magic state distillation with quantum quadratic residue codes

Michael Zurel, Santanil Jana, Nadish de Silva

2603.18560 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a unified framework using quantum quadratic residue codes for magic state distillation, showing that several well-known quantum error-correcting codes are special cases of this framework. The authors demonstrate new codes that achieve high thresholds for distilling T states and Strange states, which are essential resources for fault-tolerant quantum computation.

Key Contributions

  • Unified existing magic state distillation codes under quantum quadratic residue framework
  • Presented new quantum quadratic residue codes with high thresholds for T state and Strange state distillation
  • Proved existence of infinitely many quantum quadratic residue codes for T state distillation with non-trivial thresholds
magic state distillation quantum error correction fault-tolerant quantum computing quadratic residue codes T states
View Full Abstract

We present applications of quantum quadratic residue codes in magic state distillation. This includes showing that existing codes which are known to distill magic states, like the $5$-qubit perfect code, the $7$-qubit Steane code, and the $11$-qutrit and $23$-qubit Golay codes, are equivalent to certain quantum quadratic residue codes. We also present new examples of quantum quadratic residue codes that distill qubit $T$ states and qutrit Strange states with high thresholds, and we show that there are infinitely many quantum quadratic residue codes that distill $T$ states with a non-trivial threshold. All of these codes, including the codes with the highest currently known thresholds for $T$ state and Strange state distillation, are unified under the umbrella of quantum quadratic residue codes.
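As background for how distillation thresholds work (illustrative only: the numbers below are for the textbook 15-to-1 $T$-state protocol with cubic suppression $p_{\text{out}} \approx 35p^3$, not the quadratic residue codes of this paper), iterating the suppression map drives the error to zero exactly when the input error is below the map's fixed point:

```python
def distill_error(p_in, rounds, c=35.0, k=3):
    """Iterate p -> c * p**k; converges to 0 when c * p**(k-1) < 1."""
    p = p_in
    for _ in range(rounds):
        p = c * p ** k
    return p

# Fixed point of p = 35 p^3: the illustrative threshold ~0.169.
threshold = (1 / 35.0) ** 0.5
```

Below `threshold` each round cubes the error; above it the iteration diverges, which is what "non-trivial threshold" quantifies for a given code.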

Simulating Quantum Error Correction beyond Pauli Stochastic Errors

Jordan Hines, Corey Ostrove, Kenneth Rudinger, Stefan Seritan, Kevin Young, Robin Blume-Kohout, Timothy Proctor

2603.18457 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new methods to simulate how realistic quantum errors (beyond simple Pauli errors) affect quantum error correction protocols, showing that coherent errors can significantly degrade fault-tolerant quantum computing performance compared to standard error models.

Key Contributions

  • Development of detector error model (DEM) mapping technique for non-Pauli and coherent errors in fault-tolerant quantum circuits
  • Demonstration that coherent errors can shift fault-tolerance thresholds and increase logical error rates by an order of magnitude compared to stochastic Pauli errors
quantum error correction fault-tolerant quantum computing coherent errors surface codes magic state distillation
View Full Abstract

Quantum error correction (QEC), the lynchpin of fault-tolerant quantum computing (FTQC), is designed and validated against well-behaved Pauli stochastic error models. But in real-world deployment, QEC protocols encounter a vast array of other errors -- coherent and non-Pauli errors -- whose impacts on quantum circuits are vastly different than those of stochastic Pauli errors. The impacts of these errors on QEC and FTQC protocols have been largely unpredictable to date due to exponential classical simulation cost. Here, we show how to accurately and efficiently model the effects of coherent and non-Pauli errors on FTQC, and we study the effects of such errors on syndrome extraction for surface and bivariate bicycle codes, and on magic state cultivation. Our analysis suggests that coherent error can shift fault-tolerance thresholds, increase the space-time cost of magic state cultivation, and can increase logical error rates by an order of magnitude compared to equivalent stochastic errors. These analyses are enabled by a new technique for mapping any Markovian circuit-level error model with sufficiently small error rates onto a detector error model (DEM) for an FTQC circuit. The resulting DEM enables Monte Carlo estimation of logical error rates and noise-adapted decoding, and its parameters can be analytically related to the underlying physical noise parameters to enable approximate strong simulation.
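The detector error model the abstract refers to can be illustrated with a toy sampler (the mechanism list below is hypothetical; real DEMs, e.g. Stim's, also carry detector coordinates and decompositions): each mechanism fires independently with its probability and XORs its detector and observable sets, enabling Monte Carlo estimation of logical error rates.

```python
import random

def sample_dem(mechanisms, shots, seed=0):
    """Monte Carlo sampling of a toy detector error model (DEM).

    Each mechanism is (probability, detector_ids, observable_ids).
    Returns per-shot (detectors fired, observables flipped) as frozensets.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(shots):
        dets, obs = set(), set()
        for p, d_ids, o_ids in mechanisms:
            if rng.random() < p:
                dets ^= set(d_ids)   # mechanisms compose by XOR
                obs ^= set(o_ids)
        results.append((frozenset(dets), frozenset(obs)))
    return results

# Toy DEM: two mechanisms share detector 0; only the second flips logical 0.
toy = [(0.1, [0, 1], []), (0.05, [0], [0])]
samples = sample_dem(toy, 10000)
logical_rate = sum(1 for _, obs in samples if 0 in obs) / len(samples)
```

The paper's contribution is mapping non-Pauli, coherent circuit-level noise onto such a model so that this cheap sampling remains accurate.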

Adaptive Loss-tolerant Syndrome Measurements

Yuanjia Wang, Todd A. Brun

2603.17988 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops adaptive protocols for quantum error correction that can handle both traditional Pauli errors and qubit losses (erasures) simultaneously. The authors extend existing fault-tolerant error correction methods to work with mixed error models and optimize syndrome measurement sequences to minimize overhead when qubits are lost.

Key Contributions

  • Development of adaptive syndrome measurement protocols for mixed Pauli error and erasure models
  • Quantification of minimal overhead for converting correctable erasures to located errors
  • Generalization of fault-tolerant error correction conditions to handle qubit losses
  • Extension of adaptive Shor-style measurement sequences to loss-tolerant quantum error correction
fault-tolerant quantum computing quantum error correction syndrome measurement qubit loss erasure errors
View Full Abstract

In the presence of qubit losses, the building blocks of fault-tolerant error correction (FTEC) must be revisited. Existing loss-tolerant approaches are mainly architecture-specific, and little attention has been given to optimizing the syndrome measurement sequences under loss. Schemes designed for the standard Pauli error model are not directly applicable because the syndrome patterns differ when both Pauli errors and erasures can occur. Based on recent advances in loss detection units and loss-tolerant syndrome extraction gadgets, we extend the study of adaptive Shor-style measurement sequences to the mixed error model. We begin by discussing how to adaptively convert correctable erasures into located errors. The minimal overhead is quantified by the number of stabilizer measurements, which can be reduced to a subgroup dimension problem for erasures arising in any FTEC circuit for qubits and prime-dimensional qudits. As a byproduct, we provide the construction of the canonical generating set with respect to a given bipartite partition for a stabilizer group on qudits of composite dimension. We then generalize both the weak and strong FTEC conditions. Finally, we present adaptive syndrome-measurement protocols for the mixed error model, generalizing the adaptive protocols for the standard Pauli error model.
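The benefit of converting erasures into located errors shows up already in a classical toy model (my own illustration, not the paper's construction): a 3-bit repetition code in which an erased bit is replaced by a uniformly random value at a known location, decoded either with or without that location information.

```python
import random

def run_repetition_erasure(p_erase, shots, use_locations, seed=0):
    """Failure rate of a 3-bit repetition code (logical 0) under erasures.

    An erased bit becomes a uniformly random value -- a 'located error'.
    A location-aware decoder majority-votes over surviving bits only.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(shots):
        bits, erased = [0, 0, 0], []
        for i in range(3):
            if rng.random() < p_erase:
                bits[i] = rng.randrange(2)
                erased.append(i)
        if use_locations:
            kept = [b for i, b in enumerate(bits) if i not in erased]
            vote = kept if kept else bits  # all erased: forced to guess
        else:
            vote = bits                    # locations unknown: vote over all
        decoded = 1 if sum(vote) * 2 > len(vote) else 0
        failures += decoded != 0
    return failures / shots
```

At 30% erasure rate the location-aware decoder fails only when every bit is erased and the guess is wrong, markedly less often than blind majority voting.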

Quantum Depth Compression via Local Dynamic Circuits

Benjamin Hall, Palash Goiporia, Rich Rines

2603.17774 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces Quantum Depth Compression (QDC), a compilation framework that uses dynamic circuits to significantly reduce the depth of quantum circuits by reorganizing non-Clifford gates and utilizing mid-circuit measurements. The method achieves depth linear in the number of non-Clifford gates while avoiding expensive SWAP operations for connectivity constraints.

Key Contributions

  • Development of QDC framework that reduces circuit depth to linear in non-Clifford gates
  • Method to achieve grid connectivity without SWAP networks using dynamic circuits
  • Demonstration of reduced depth and CNOT count compared to standard compilers
quantum circuit compilation dynamic circuits depth compression Clifford gates non-Clifford gates
View Full Abstract

We present Quantum Depth Compression (QDC), a general compilation framework that utilizes dynamic circuits to reduce arbitrary quantum circuits to depth linear in the number of non-Clifford gates and to grid connectivity without the need for expensive SWAP networks. The framework consists of pushing Clifford gates to the end of the circuit, resulting in a sequence of non-Clifford Pauli-phasors followed by an all-Clifford sub-circuit, both of which are then reduced to constant depth via dynamic circuits. We show that applying QDC to random Pauli-phasor circuits lowers both their depth and CNOT count compared to a standard alternative compiler.
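The Clifford-pushing step can be sketched symplectically: propagate each Pauli axis through the Clifford gates that follow it, using the standard conjugation rules for H, S, and CNOT. A minimal sketch that tracks only the axis and ignores phases (illustrative; QDC itself must also track signs and compress the residual Clifford):

```python
def push_through_clifford(pauli, gates):
    """Conjugate a Pauli (dict qubit -> 'X'/'Y'/'Z') through Clifford gates,
    applied in circuit order, returning the axis of C P C^dagger.
    Phases/signs are deliberately ignored to keep the sketch short."""
    X = {q: v in ('X', 'Y') for q, v in pauli.items()}  # x-bit per qubit
    Z = {q: v in ('Z', 'Y') for q, v in pauli.items()}  # z-bit per qubit
    def gx(q): return X.get(q, False)
    def gz(q): return Z.get(q, False)
    for gate in gates:
        if gate[0] == 'H':            # H swaps X and Z
            q = gate[1]
            X[q], Z[q] = gz(q), gx(q)
        elif gate[0] == 'S':          # S: X -> Y (adds a Z component)
            q = gate[1]
            Z[q] = gz(q) ^ gx(q)
        elif gate[0] == 'CNOT':       # X copies forward, Z copies backward
            c, t = gate[1], gate[2]
            X[t] = gx(t) ^ gx(c)
            Z[c] = gz(c) ^ gz(t)
    axes = {(True, False): 'X', (False, True): 'Z', (True, True): 'Y'}
    return {q: axes[(gx(q), gz(q))]
            for q in set(X) | set(Z) if (gx(q), gz(q)) in axes}
```

Pushing every non-Clifford rotation's Pauli axis forward this way is what leaves the circuit as phasors followed by a single Clifford block.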

Fast stabilizer state preparation via AI-optimized graph decimation

Michael Doherty, Matteo Puviani, Jasmine Brewer, Gabriel Matos, David Amaro, Ben Criger, David T. Stephen

2603.17743 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents AI-optimized methods to prepare stabilizer states (important quantum states used in error correction) more efficiently by reducing the number of two-qubit gates needed. The researchers use reinforcement learning and Monte Carlo tree search to find better ways to construct these quantum states, achieving up to 2.5x reduction in gate count for large quantum error correcting codes.

Key Contributions

  • AI-based method (QuSynth) combining reinforcement learning and Monte Carlo tree search for optimal Clifford gate selection
  • Demonstration of up to 2.5x reduction in two-qubit gate count for stabilizer state preparation including large codes like the 144-qubit gross code
stabilizer states quantum error correction Clifford gates reinforcement learning Monte Carlo tree search
View Full Abstract

We propose a general method for preparing stabilizer states with reduced two-qubit gate count and depth compared to the state of the art. The method starts from a graph state representation of the stabilizer state and iteratively reduces the number of edges in the graph using two-qubit Clifford gates to produce a unitary preparation circuit. We explore various heuristic search and AI-based approaches to optimally choose Clifford gates at each step, the most sophisticated of which is a combination of reinforcement learning and Monte Carlo tree search that we call QuSynth. We apply our method to synthesize code states of various quantum error correcting codes including the 23-qubit Golay code and the 144-qubit gross code, the latter of which is significantly beyond the qubit number that is accessible to prior optimal circuit synthesis methods. We demonstrate that our techniques are capable of reducing the required two-qubit gates by up to a factor of 2.5 compared to previous approaches while retaining low circuit depth.
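The edge-reduction idea can be illustrated with local complementation, the graph operation induced by single-qubit Cliffords on graph states: complementing at a vertex toggles all edges among its neighbors. A greedy hill-climbing stand-in for the paper's RL/MCTS search (illustrative only; QuSynth also uses two-qubit Cliffords and optimizes depth):

```python
from itertools import combinations

def local_complement(edges, v):
    """Toggle all edges among the neighbors of v (graph-state local Clifford)."""
    nbrs = sorted({a if b == v else b for a, b in edges if v in (a, b)})
    new = set(edges)
    for a, b in combinations(nbrs, 2):
        new ^= {(a, b)}          # XOR toggles the edge
    return new

def greedy_decimate(edges, vertices, max_steps=50):
    """Apply local complementations greedily while they reduce edge count."""
    edges = {tuple(sorted(e)) for e in edges}
    for _ in range(max_steps):
        best_v, best = None, edges
        for v in vertices:
            cand = local_complement(edges, v)
            if len(cand) < len(best):
                best_v, best = v, cand
        if best_v is None:       # no move reduces edges: local minimum
            break
        edges = best
    return edges
```

On the complete graph $K_4$ (6 edges) one complementation reaches a 3-edge star, the kind of reduction that translates into fewer two-qubit gates.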

Independent Trivariate Bicycle Codes

Aygul Azatovna Galimova

2603.17703 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a new class of quantum error-correcting codes called independent trivariate bicycle codes that extend existing bicycle codes to three dimensions, achieving better performance metrics and lower error rates than previous multivariate bicycle codes.

Key Contributions

  • Development of independent trivariate bicycle codes extending bivariate framework to three cyclic dimensions
  • Construction of high-performance codes including [[140,6,14]] code with superior kd²/n ratio and pseudothreshold performance
  • Demonstration of improved error correction capabilities on realistic superconducting noise models
quantum error correction LDPC codes bicycle codes fault tolerance quantum computing
View Full Abstract

We introduce six independent trivariate bicycle (ITB) codes, which extend the bivariate bicycle framework of Bravyi et al. to three cyclic dimensions. Using asymmetric polynomial pairs on three-dimensional tori, we construct four codes including a $[[140,6,14]]$ code with $kd^2/n = 8.40$. In the code-capacity setting, the $[[140,6,14]]$ code achieves a pseudothreshold of $8.0\%$ and $kd^2/n = 8.40$, exceeding the best multivariate bicycle code of Voss et al. ($7.9\%$, $kd^2/n = 2.67$). With circuit-level depolarizing noise, pseudothresholds reach $0.59\%$ for $[[140,6,14]]$ and $0.53\%$ for $[[84,6,10]]$. On the SI1000 superconducting noise model, the $[[140,6,14]]$ code achieves a per-round per-observable rate of $5.6 \times 10^{-5}$ at $p = 0.20\%$. We additionally present two self-dual codes with weight-8 stabilizers: $[[54,14,5]]$ ($kd^2/n = 6.48$) and $[[128,20,8]]$ ($kd^2/n = 10.0$). These results expand the design space of algebraic quantum LDPC codes and demonstrate that the third cyclic dimension yields competitive candidates for practical fault-tolerant implementations.
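For context, the bivariate construction these codes extend is easy to check numerically: with $A$ and $B$ polynomials in commuting cyclic shifts $x$ and $y$, the parity checks $H_X = [A \mid B]$ and $H_Z = [B^T \mid A^T]$ satisfy $H_X H_Z^T = AB + BA = 0 \pmod 2$. A sketch using polynomials of the form popularized by Bravyi et al. (the lattice size and exponents here are illustrative, not the ITB codes of this paper):

```python
import numpy as np

def shift(n):
    """n x n cyclic shift permutation matrix."""
    return np.roll(np.eye(n, dtype=int), 1, axis=1)

def poly(terms, l, m):
    """Sum over GF(2) of x^i y^j, with x and y commuting cyclic shifts."""
    x = np.kron(shift(l), np.eye(m, dtype=int))
    y = np.kron(np.eye(l, dtype=int), shift(m))
    total = np.zeros((l * m, l * m), dtype=int)
    for i, j in terms:
        term = np.linalg.matrix_power(x, i) @ np.linalg.matrix_power(y, j)
        total = (total + term) % 2
    return total

l, m = 6, 6                                # illustrative torus size
A = poly([(3, 0), (0, 1), (0, 2)], l, m)   # x^3 + y + y^2
B = poly([(0, 3), (1, 0), (2, 0)], l, m)   # y^3 + x + x^2
HX = np.hstack([A, B])
HZ = np.hstack([B.T, A.T])
assert ((HX @ HZ.T) % 2 == 0).all()        # CSS condition holds
```

The trivariate extension adds a third commuting shift $z$ via one more Kronecker factor; commutativity, and hence the CSS condition, survives unchanged.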

General circuit compilation protocol into partially fault-tolerant quantum computing architecture

Tomochika Kurita

2603.17428 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a new circuit execution protocol for fault-tolerant quantum computers that can efficiently perform continuous rotation gates using lattice surgery with surface codes. The approach uses optimization techniques to minimize time overhead from probabilistic operations and includes performance prediction tools.

Key Contributions

  • Circuit execution protocol for STAR architecture enabling direct continuous Rz(θ) gate operations
  • QUBO-based optimization for resource state allocation to reduce time overhead
  • Performance estimation framework for predicting execution time and optimizing qubit topology
fault-tolerant quantum computing surface codes lattice surgery logical qubits error correction
View Full Abstract

As we are entering an early-FTQC era, circuit execution protocols with logical qubits and certain error-correcting codes are being discussed. Here, we propose a circuit execution protocol for the space-time efficient analog rotation (STAR) architecture. Gate operations within the STAR architecture are based on lattice surgery with surface codes, but the architecture allows direct execution of continuous gates $Rz(θ)$ as non-Clifford gates instead of $T = Rz(π/4)$. $Rz(θ)$ operations involve creation of resource states $|m_θ\rangle = \frac{1}{\sqrt{2}}(|0\rangle + e^{iθ}|1\rangle)$ followed by ZZ joint measurements with target logical qubits. While employing $Rz(θ)$ enables more efficient circuit execution, both the state creations and joint measurements are probabilistic processes and adopt repeat-until-success (RUS) protocols, which are likely to result in considerable time overhead. Our circuit execution protocol aims to reduce such time overhead by parallel trials of resource state creations and more frequent trials of joint measurements. By employing quadratic unconstrained binary optimization (QUBO) in determining resource state allocations within the space, we make our protocol efficient. Furthermore, we propose performance estimators given the target circuit and qubit topology. They predict time performance faster than actual simulations and help find the optimal qubit topology to run the target circuits efficiently.

Noise-resilient nonadiabatic geometric quantum computation for bosonic binomial codes

Dong-Sheng Li, Yang Xiao, Yu Wang, Yang Liu, Zhi-Cheng Shi, Ye-Hong Chen, Yi-Hao Kang, Yan Xia

2603.17250 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a method for quantum computing that combines binomial codes (which protect against certain types of errors) with geometric quantum gates (which are naturally resistant to noise) in superconducting systems. The researchers develop control protocols that make quantum computations more reliable by leveraging both error correction techniques and noise-resilient gate operations.

Key Contributions

  • Integration of binomial codes with nonadiabatic geometric quantum computation for enhanced error resilience
  • Development of customized control protocols combining reverse engineering and optimal control for superconducting systems
  • Demonstration of high-fidelity quantum gates with tolerance to parameter fluctuations and decoherence
nonadiabatic geometric quantum computation binomial codes error correction superconducting qubits quantum gates
View Full Abstract

The binomial code is renowned for its parity-mediated loss immunity and loss-error recoverability, while geometric phases are widely recognized for their intrinsic resilience against noise. Capitalizing on their complementary merits, we propose a noise-resilient protocol to realize nonadiabatic geometric quantum computation with binomial codes in a superconducting system composed of a microwave cavity dispersively coupled to a qutrit. The control field is designed by integrating reverse engineering and optimal control. This design provides a customized control protocol featuring strong error tolerance and inherent noise resilience. Using experimentally accessible parameters in superconducting systems, numerical simulations show that the protocol yields relatively high average fidelity for geometric quantum gates based on the binomial code, even in the presence of parameter fluctuations and decoherence. Thus, this protocol may provide a practical approach for realizing reliable nonadiabatic geometric quantum computation with binomial codes with current technology.
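The "parity-mediated loss immunity" has a compact numerical illustration (this is the standard smallest binomial code, not this paper's protocol): the codewords $|0_L\rangle = (|0\rangle + |4\rangle)/\sqrt{2}$ and $|1_L\rangle = |2\rangle$ both have even photon-number parity, and a single photon loss flips the parity, so the loss is detectable by a parity measurement.

```python
import numpy as np

dim = 8                                          # truncated Fock space
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)     # annihilation operator
parity = np.diag((-1.0) ** np.arange(dim))       # photon-number parity

def fock(n):
    v = np.zeros(dim)
    v[n] = 1.0
    return v

zero_L = (fock(0) + fock(4)) / np.sqrt(2)        # smallest binomial codewords
one_L = fock(2)

for psi in (zero_L, one_L):
    assert np.isclose(psi @ parity @ psi, 1.0)   # even parity before loss
    lost = a @ psi
    lost = lost / np.linalg.norm(lost)
    assert np.isclose(lost @ parity @ lost, -1.0)  # odd parity after one loss
```

Detecting the parity flip without learning the logical state is what makes single-photon loss a correctable error for this code.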

Optimizing Logical Mappings for Quantum Low-Density Parity Check Codes

Sayam Sethi, Sahil Khan, Maxwell Poster, Abhinav Anand, Jonathan Mark Baker

2603.17167 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new compilation and mapping techniques for quantum low-density parity check (LDPC) codes, specifically the Gross code, to reduce error rates in fault-tolerant quantum computing. The authors introduce a two-stage pipeline using hypergraph partitioning and priority-based algorithms to optimize how logical qubits are mapped onto hardware, achieving significant reductions in program failure rates.

Key Contributions

  • Two-stage mapping pipeline using hypergraph partitioning for logical qubit placement on Gross code architectures
  • Demonstration of up to 36% reduction in error rates from inter-module measurements compared to existing mapping approaches
  • Analysis showing that existing NISQ and FTQC mappers are insufficient for LDPC code architectures due to two-level mapping complexity
fault-tolerant quantum computing LDPC codes logical qubit mapping error correction quantum compilation
View Full Abstract

Early demonstrations of fault tolerant quantum systems have paved the way for logical-level compilation. For fault-tolerant applications to succeed, execution must finish with a low total program error rate (i.e., a low program failure rate). In this work, we study a promising candidate for future fault-tolerant architectures with low spatial overhead: the Gross code. Compilation for the Gross code entails compiling to Pauli Based Computation and then reducing the rotations and measurements to the Bicycle ISA. Depending on the configuration of modules and the placement of code modules on hardware, one can reduce the amount of resulting Bicycle instructions to produce a lower overall error rate. We find that NISQ-based, and existing FTQC mappers are insufficient for mapping logical qubits on Gross code architectures because 1. they do not account for the two-level nature of the logical qubit mapping problem, which separates into code modules with distinct measurements, and 2. they naively account only for length two interactions, whereas Pauli-Products are up to length $n$, where $n$ is the number of logical qubits in the circuit. For these reasons, we introduce a two-stage pipeline that first uses hypergraph partitioning to create in-module clusters, and then executes a priority-based algorithm to efficiently assign clusters onto hardware. We find that our mapping policy reduces the error contribution from inter-module measurements, the largest source of error in the Gross Code, by up to $\sim36\%$ in the best case, with an average reduction of $\sim13\%$. On average, we reduce the failure rates from inter-module measurements by $\sim22\%$ with localized factory availability, and by $\sim17\%$ on grid architectures, allowing hardware developers to be less constrained in developing scalable fault tolerant systems due to software driven reductions in program failure rates.
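The first stage of the pipeline, hypergraph partitioning, can be sketched with a greedy stand-in (my own simplification; production mappers use dedicated partitioners such as KaHyPar): treat each multi-qubit measurement as a hyperedge and repeatedly merge the two clusters whose qubits co-occur most often, so that heavy interactions stay inside a module.

```python
from collections import Counter
from itertools import combinations

def cluster_qubits(hyperedges, num_clusters):
    """Greedy agglomerative stand-in for hypergraph partitioning.

    hyperedges: iterable of qubit collections (one per measurement).
    Merges the two clusters with the largest pairwise co-occurrence weight
    until num_clusters clusters remain.
    """
    qubits = sorted({q for e in hyperedges for q in e})
    clusters = [{q} for q in qubits]
    weight = Counter()
    for e in hyperedges:
        for a, b in combinations(sorted(e), 2):
            weight[(a, b)] += 1
    def link(c1, c2):
        return sum(weight[tuple(sorted((a, b)))] for a in c1 for b in c2)
    while len(clusters) > num_clusters:
        i, j = max(combinations(range(len(clusters)), 2),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] |= clusters.pop(j)
    return clusters
```

The second stage (priority-based assignment of clusters to hardware modules) would then consume these clusters; the real pipeline additionally weighs Pauli-product lengths and factory availability.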

Secure Quantum Communication: Simulation and Analysis of Quantum Key Distribution Protocols

Mahendra Rasay, Emmanuel D. Sebastian, Subhash Prasad Sah, David Chinamerem Akah, Ajay Kumar Singh

2603.16690 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: high

This paper simulates and analyzes quantum key distribution protocols (BB84, B92, and E91) using IBM Qiskit, evaluating their performance under realistic conditions like noise and eavesdropping. The study aims to assess the practical feasibility of QKD as a secure communication method in the quantum computing era.

Key Contributions

  • Simulation-based comparative analysis of three major QKD protocols (BB84, B92, E91) using IBM Qiskit
  • Evaluation of protocol performance under realistic quantum channel conditions including noise, decoherence, and eavesdropping attacks
quantum key distribution QKD protocols BB84 quantum cryptography quantum communication
View Full Abstract

Quantum computing poses significant threats to conventional cryptographic techniques such as RSA and AES, motivating the need for quantum secure communication methods. Quantum Key Distribution (QKD) offers information theoretic security based on fundamental quantum principles. This paper presents a simulation-based analysis of well-known QKD protocols, namely BB84, B92, and E91, using the IBM Qiskit framework. Realistic quantum channel effects, including noise, decoherence, and eavesdropping, are modeled to evaluate protocol performance. Key metrics such as error rate, secret key generation, and security characteristics are analyzed and compared. The study highlights practical challenges in QKD implementation, including hardware limitations and channel losses, and discusses insights toward scalable and robust quantum communication systems. The results support the feasibility of QKD as a promising solution for secure communication in the quantum era.
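The intercept-resend attack such simulations model has a well-known signature that a few lines of plain Python reproduce (an idealized sketch, not the paper's Qiskit pipeline): an eavesdropper measuring in a random basis induces an expected 25% error rate on the sifted key, which is how BB84 reveals eavesdropping.

```python
import random

def bb84(num_bits, eavesdrop, seed=0):
    """Idealized BB84 sifting with an optional intercept-resend Eve.

    Returns (sifted key length, observed error rate on the sifted key).
    Basis 0/1 stands for the rectilinear/diagonal basis.
    """
    rng = random.Random(seed)
    sifted, errors = 0, 0
    for _ in range(num_bits):
        bit, a_basis = rng.randrange(2), rng.randrange(2)
        b_basis = rng.randrange(2)
        sent_bit, sent_basis = bit, a_basis
        if eavesdrop:                      # Eve measures in a random basis
            e_basis = rng.randrange(2)
            e_bit = bit if e_basis == a_basis else rng.randrange(2)
            sent_bit, sent_basis = e_bit, e_basis
        # Bob's result is definite only if he measures in the sending basis.
        measured = sent_bit if b_basis == sent_basis else rng.randrange(2)
        if a_basis == b_basis:             # sifting: keep matching bases
            sifted += 1
            errors += measured != bit
    return sifted, (errors / sifted if sifted else 0.0)
```

Without Eve the sifted key is error-free; with Eve roughly a quarter of the sifted bits disagree, well above any abort threshold.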

CryoCMOS RF multiplexer for superconducting qubit control, readout and flux biasing at millikelvin temperatures with picowatt power consumption

Liam Fallik, Sriram Balamurali, Alican Caglar, Rohith Acharya, Jacques Van Damme, Tsvetan Ivanov, Shana Massar, Ruben Asanovski, A. M. Vadiraj, Massim...

2603.16608 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a cryogenic CMOS RF multiplexer that operates at extremely low temperatures (10 millikelvin) with ultra-low power consumption, designed to address the input-output bottleneck in large-scale superconducting quantum computers by enabling multiple qubits to share the same control and readout lines.

Key Contributions

  • Record-low 200 pW power consumption cryoCMOS RF multiplexer operating at 10 mK
  • Demonstration of direct qubit connection with minimal impact on coherence times
  • Scalable solution for multiplexing readout, flux, and control lines in superconducting quantum processors
cryogenic CMOS superconducting qubits RF multiplexer quantum control scalable quantum systems
View Full Abstract

Large-scale cryogenic quantum systems are constrained by an input-output bottleneck between room-temperature electronics and millikelvin stages, particularly in superconducting qubit platforms. This bottleneck is most acute for output lines, where bulky and expensive microwave components limit scalability. A promising approach for scalable characterization and testing is to perform signal multiplexing directly at the qubit plane. We demonstrate a cryogenic CMOS (cryoCMOS) RF multiplexer operating at 10 millikelvin with record-low static power consumption of 200 pW. The device provides < 2 dB insertion loss and > 30 dB isolation across DC-8 GHz. Direct connection to transmon qubits marginally affects coherence times in the range of 100 microseconds, enabling multiplexing of readout, flux and, in principle, XY drive lines. This work introduces cryoCMOS multiplexers as valuable tools for scalable, high-throughput cryogenic characterization and testing, and advances co-integrated quantum-classical control for future large-scale quantum processors.

Quantum classification and search algorithms using spinorial representations

Lauro Mascarenhas, Vinicius N. A. Lula-Rocha, Marco A. S. Trindade

2603.16564 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents two quantum algorithms - one for classification and one for search with non-uniform initial conditions - both formulated using Clifford algebras and spinorial representations. The approach provides a unified algebraic framework where quantum states and operators are constructed from spinor representations, with the classification algorithm using orthogonal states for different classes and the search algorithm implementing oracles directly through Clifford algebra generators.

Key Contributions

  • Novel algebraic formulation of quantum classification algorithm using spinorial representations
  • Unified framework based on Clifford algebras for both classification and search algorithms
  • Simplified oracle implementation for quantum search using Clifford algebra generators
quantum algorithms Clifford algebras spinorial representations quantum classification quantum search
View Full Abstract

We propose an algebraic formulation for two distinct quantum algorithms: a quantum classification algorithm and a quantum search algorithm with a non-uniform initial distribution, both based on Clifford algebras and spinorial representations. In the classification algorithm, we exploit properties of spinorial representations to construct orthogonal quantum states associated with different classes, allowing the identification of an item's class through the evaluation of expectation values of operators derived from the generators of the Clifford algebra. In the quantum search algorithm, we consider a database with prior information in which the oracle is implemented directly using generators of the Clifford algebra, simplifying its realization. The proposed approach provides a unified algebraic description for both algorithms, employing spinorial representations in the construction of quantum states and operators. Computational implementations are presented.
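The algebraic setting can be made concrete with the smallest nontrivial spinorial representation (a background illustration, not the paper's algorithms): the Pauli matrices realize the generators of the Clifford algebra $Cl(3)$, satisfying the defining relation $\{e_i, e_j\} = 2\delta_{ij}I$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
gens = [X, Y, Z]                 # Cl(3) generators in the spinor rep

for i, a in enumerate(gens):
    for j, b in enumerate(gens):
        anti = a @ b + b @ a     # anticommutator {e_i, e_j}
        expected = 2 * np.eye(2) * (1 if i == j else 0)
        assert np.allclose(anti, expected)
```

Distinct generators anticommute while each squares to the identity; the orthogonal class states and oracle constructions in the paper are built on exactly this structure in higher-dimensional representations.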

Distinguishing types of correlated errors in superconducting qubits

Hannah P. Binney, H. Douglas Pinckney, Kate Azar, Patrick M. Harrington, Shantanu Jha, Mingyu Li, Jiatong Yang, Felipe Contipelli, Renée DePencier Pi...

2603.16494 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper investigates two types of correlated errors in superconducting qubits - those caused by radiation-induced quasiparticles and those caused by mechanical vibrations from refrigeration equipment. The researchers develop methods to distinguish between these error types and show that certain qubit designs with larger superconducting gaps can protect against both types of correlated errors.

Key Contributions

  • Method for distinguishing radiation-induced vs vibration-induced correlated errors in superconducting qubits
  • Demonstration that transmon qubits with superconducting gap greater than qubit energy are protected against both radiation and vibration errors
superconducting qubits correlated errors quantum error correction transmon quasiparticles
View Full Abstract

Errors in superconducting qubits that are correlated in time and space can pose problems for quantum error correction codes. Radiation from cosmic and terrestrial sources can increase the quasiparticle (QP) density in a superconducting qubit device, resulting in an increased rate of QPs tunneling across proximal Josephson junctions (JJs) and causing correlated errors. Mechanical vibrations, such as those induced by the pulse tube in a dry dilution refrigerator, are also a known source of correlated errors. We present a method for distinguishing these two types of errors by their temporal, spatial, and frequency domain features, enabling physically motivated error-mitigation strategies. We also present accelerometer data to study the correlation between dilution refrigerator vibrations and the errors. We measure arrays of transmon qubits where the difference in superconducting gap across the JJ is less than the qubit energy, as well as those where the gap is greater than the qubit energy, which has been shown to mitigate radiation-induced errors. We show that these latter devices are also protected against vibration-induced errors.

Reducing C-NOT Counts for State Preparation and Block Encoding via Diagonal Matrix Migration

Zexian Li, Guofeng Zhang, Xiao-Ming Zhang

2603.16492 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper presents algorithms to reduce the number of C-NOT gates needed for quantum state preparation and block encoding, which are fundamental operations in quantum computing. The authors achieve significant improvements in gate counts by developing a diagonal matrix migration technique that takes advantage of how diagonal matrices commute with certain quantum operations.

Key Contributions

  • Improved C-NOT count for n-qubit state preparation from (23/24)2^n to (11/12)2^n gates
  • Single-ancilla block encoding protocol achieving (11/48)4^n C-NOT count for 2^(n-1)×2^(n-1) matrices
  • Diagonal matrix migration technique based on commutativity properties to minimize C-NOT gate usage
  • Optimized algorithms for low-rank matrices with C-NOT count (K+(11/12))2^n for rank-K matrices
C-NOT gates state preparation block encoding gate complexity quantum circuits
View Full Abstract

Quantum state preparation and block encoding are versatile and practical input models for quantum algorithms in scientific computing. The circuit complexity of state preparation and block encoding frequently dominates the end-to-end gate complexity of quantum algorithms. We give algorithms with lower C-NOT counts for both the state preparation and block encoding. For a general $n$-qubit state, we improve the C-NOT count from Plesch-Brukner algorithm, proposed in 2011, from $(23/24)2^n$ to $(11/12)2^n$. For block encoding, our single-ancilla protocol for $2^{n-1}\times 2^{n-1}$ matrices uses the spectral norm as subnormalization and achieves a C-NOT count leading term $(11/48)4^n$. This result even exceeds the lower bound of $(1/4)4^n$ for $n$-qubit unitary synthesis. Further optimization is performed for low-rank matrices, which frequently arise in practical applications. Specifically, we achieve the C-NOT count leading term $(K+(11/12))2^n$ for a rank-$K$ matrix. Our approach builds upon the recursive block-ZXZ decomposition from Krol et al. and introduces a diagonal matrix migration technique based on the commutativity of the diagonal matrix and the uniformly controlled rotation about the $z$-axis to minimize the use of C-NOT gates.
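
The leading-term counts quoted above are easy to compare directly. A minimal sketch (the two formulas are taken from the abstract; the choice of n is purely illustrative):

```python
from fractions import Fraction

def plesch_brukner_count(n: int) -> Fraction:
    """Leading-term C-NOT count of the 2011 Plesch-Brukner scheme: (23/24) * 2^n."""
    return Fraction(23, 24) * 2**n

def migrated_count(n: int) -> Fraction:
    """Leading-term C-NOT count with diagonal matrix migration: (11/12) * 2^n."""
    return Fraction(11, 12) * 2**n

n = 20  # illustrative qubit count
saving = 1 - migrated_count(n) / plesch_brukner_count(n)
print(saving)  # 1/23, i.e. ~4.3% fewer C-NOT gates, independent of n
```

Since the ratio (11/12)/(23/24) = 22/23 holds for every n, the migration technique trims the leading term by a fixed 1/23.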

Chipmunq: Fault-Tolerant Compiler for Chiplet Quantum Architectures

Peter Wegmann, Aleksandra Świerkowska, Emmanouil Giortamis, Pramod Bhatotia

2603.16389 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents Chipmunq, a specialized compiler designed to map fault-tolerant quantum circuits onto modular chiplet quantum computer architectures. The compiler addresses the challenge of efficiently compiling large-scale quantum error correction circuits while managing the constraints of distributed quantum hardware connected by noisy inter-chiplet links.

Key Contributions

  • First hardware-aware compiler specifically designed for fault-tolerant quantum circuits on modular chiplet architectures
  • Quantum-error-correction-aware partitioning strategy that preserves logical qubit patch integrity
  • Significant improvements in compilation efficiency and circuit quality, including a 13.5x compilation-time speedup and an 86.4% circuit-depth reduction
fault-tolerant quantum computing quantum error correction chiplet architecture quantum compiler logical qubits
View Full Abstract

As quantum computing advances toward fault-tolerance through quantum error correction, modular chiplet architectures have emerged to provide the massive qubit counts required while overcoming fabrication limits of monolithic chips. However, this transition introduces a critical compilation gap: existing frameworks cannot handle the scale of fault-tolerant quantum circuits while managing the noisy, sparse interconnects of chiplet backends. We present Chipmunq, the first hardware-aware compiler for mapping and routing fault-tolerant circuits onto modular architectures. Chipmunq employs a quantum-error-correction-aware partitioning strategy that preserves the integrity of logical qubit patches, preventing prohibitive gate overheads common in general-purpose compilers. Our evaluation demonstrates that Chipmunq achieves a 13.5x speedup in compilation time compared to state-of-the-art tools. By incorporating chiplet constraints and defective qubits, it reduces circuit depth by 86.4% and SWAP gate counts by 91.4% across varying code distances. Crucially, Chipmunq overcomes heterogeneous inter-chiplet links, improving logical error rates by up to two orders of magnitude.

A Scalable Open-Source QEC System with Sub-Microsecond Decoding-Feedback Latency

Junyi Liu, Yi Lee, Yilun Xu, Gang Huang, Xiaodi Wu

2603.16203 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents an open-source quantum error correction (QEC) system that integrates real-time qubit control with ultra-fast error syndrome decoding and correction feedback. The system achieves 446 nanosecond end-to-end latency for a distance-3 surface code and can theoretically scale to handle ~881 physical qubits with sub-microsecond latency.

Key Contributions

  • First fully integrated open-source QEC system with sub-microsecond decoding-feedback latency
  • Scalable distributed multi-board FPGA architecture that can handle up to distance-21 surface codes
  • Complete hardware platform ready for deployment with superconducting qubits including real-time control and communication
quantum error correction surface codes fault-tolerant quantum computing FPGA real-time control
View Full Abstract

Quantum error correction (QEC) is essential for realizing large-scale, fault-tolerant quantum computation, yet its practical implementation remains a major engineering challenge. In particular, QEC demands precise real-time control of a large number of qubits and low-latency, high-throughput and accurate decoding of error syndromes. While most prior work has focused primarily on decoder design, the overall performance of any QEC system depends critically on all its subsystems including control, communication, and decoding, as well as their integration. To address this challenge, we present an open-source, fully integrated QEC system built on RISC-Q, a generator for RISC-V-based quantum control architectures. Implemented on RFSoC FPGAs, our system prototype integrates real-time qubit control, a scalable distributed multi-board architecture, and the state-of-the-art hardware QEC decoder within a low-latency, high-throughput decoding pipeline, forming a complete hardware platform ready for deployment with superconducting qubits. Experimental evaluation on a three-board prototype based on AMD ZCU216 RFSoCs demonstrates an end-to-end QEC decoding-feedback latency of 446 ns for a distance-3 surface code, including syndrome aggregation, network communication, syndrome decoding, and error distribution. Extrapolating from measured subsystem performance and state-of-the-art decoder benchmarks, the architecture can achieve sub-microsecond decoding-feedback latency up to a distance-21 surface code ($\sim$881 physical qubits) when scaled to larger hardware configurations.
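
The ~881-qubit figure matches the standard rotated surface code layout of d² data qubits plus d² − 1 syndrome qubits at distance d. A sketch under that assumption (the abstract does not spell out the layout):

```python
def rotated_surface_code_qubits(d: int) -> int:
    """Physical qubits in a distance-d rotated surface code:
    d*d data qubits plus d*d - 1 syndrome (ancilla) qubits."""
    return 2 * d * d - 1

print(rotated_surface_code_qubits(3))   # 17  -- the scale of the three-board prototype
print(rotated_surface_code_qubits(21))  # 881 -- the extrapolated scaling target
```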

Monolithic Segmented 3D Ion Trap for Quantum Technology Applications

Abhishek Menon, Michael Strauss, George Tomaras, Liam Jeanette, April X. Sheffield, Devon Valdez, Yuanheng Xie, Visal So, Henry De Luo, Midhuna Durais...

2603.16048 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper presents a new design for ion trap quantum computers using a monolithic 3D fused silica structure that can trap heavy ions like Yb+ and Ba+ with very low heating rates and high optical access. The researchers demonstrate high-fidelity two-qubit gate operations (99.3%) and establish this as a scalable platform for quantum computing with trapped ions.

Key Contributions

  • Development of monolithic 3D fused silica blade trap with 250 μm ion-electrode distance enabling stable high RF voltage operation
  • Demonstration of 99.3% two-qubit gate fidelity with heavy ions (Yb+) and low motional heating rates (1.1 quanta/s)
  • Achievement of high numerical aperture optical access (0.7 NA) while maintaining deep trapping potentials for scalable quantum computing
  • Establishment of modular platform suitable for quantum simulation, computation, metrology and networking applications
ion trap trapped ions quantum gates Yb+ Paul trap
View Full Abstract

Monolithic three-dimensional (3D) Paul traps combine the high-precision microfabrication of two-dimensional (2D) chip traps with the deep trapping potentials and low heating rates characteristic of macroscopic Paul traps, which are typically manually assembled. However, achieving low motional heating rates and optical access with a high numerical aperture (NA) while maintaining the high radio-frequency (RF) voltages required for heavy ionic species, such as Yb$^{+}$ and Ba$^{+}$, remains a significant technical challenge. In this work, we present a segmented, monolithic 3D fused silica blade trap, featuring an ion-electrode distance of 250 $\mu$m with stable operation at high RF voltages. We benchmark the performance of the trap using Yb$^{+}$ ions, demonstrating axially homogeneous trapping potentials for 200 $\mu$m around the axial center of the trap, high multi-directional optical access (up to 0.7 NA), and radial motional heating rate as low as 1.1 $\pm$ 0.1 quanta/s at radial trap frequencies about 3 MHz near room temperature. Furthermore, we observe a motional Ramsey coherence time, $T_{2}$, of around 95 ms for the radial center-of-mass mode. We demonstrate a two-qubit gate fidelity of ${99.3}^{+ 0.7}_{- 1.5}$$\%$ with state preparation and measurement correction. These results establish fused-silica monolithic blade traps as a scalable, modular platform for quantum simulation, computation, metrology, and networking with heavy ionic species.

CSS codes from the Bruhat order of Coxeter groups

Kamil Bradler

2603.16036 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a new method for constructing CSS quantum error-correcting codes using the mathematical structure of Coxeter groups and their Bruhat ordering. The approach generates families of CSS codes with controllable parameters and stabilizer weights by exploiting the geometric properties of these algebraic structures.

Key Contributions

  • Novel method for generating CSS codes using Coxeter group Bruhat order and chain complexes
  • Construction of CSS code families with controlled stabilizer weights and parameters, including codes with thousands of qubits
  • Development of weight-reduction techniques for handling heavy stabilizers in irregular weight distributions
CSS codes quantum error correction Coxeter groups Bruhat order stabilizer codes
View Full Abstract

I introduce a method to generate families of CSS codes with interesting code parameters. The object of study is Coxeter groups, both finite and infinite (reducible or not), and a geometrically motivated partial order of Coxeter group elements named after Bruhat. The Bruhat order is known to provide a link to algebraic topology -- it doubles as a face poset capturing the inclusion relations of the $p$-dimensional cells of a regular CW complex and that is what makes it interesting for QEC code design. Assisted by the Bruhat face poset interval structure unique to Coxeter groups I show that the corresponding chain complexes can be turned into multitudes of CSS codes. Depending on the approach, I obtain CSS codes (and their families) with controlled stabilizer weights, for example $[6006, 924, \{{\leq14},{\leq7}\}]$ (stabilizer weights 14 and 9) and $[22880,3432,\{{\leq8},{\leq16}\}]$ (weights 16 and 10), and CSS codes with highly irregular stabilizer weight distributions such as $[571,199,\{5,5\}]$. For the latter, I develop a weight-reduction method to deal with rare heavy stabilizers. Finally, I show how to extract four-term (length three) chain complexes that can be interpreted as CSS codes with a metacheck.
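
As a quick sanity check on the quoted parameters, the encoding rate k/n of each [[n, k]] family can be computed directly (a minimal sketch; only the rate is derived here, not the distances or stabilizer weights):

```python
from fractions import Fraction

def code_rate(n: int, k: int) -> Fraction:
    """Encoding rate k/n of an [[n, k, d]] CSS code, in lowest terms."""
    return Fraction(k, n)

# (n, k) pairs quoted in the abstract
print(code_rate(6006, 924))    # 2/13, about 0.154
print(code_rate(22880, 3432))  # 3/20, exactly 0.15
print(code_rate(571, 199))     # 199/571, about 0.349
```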

Universal Weakly Fault-Tolerant Quantum Computation via Code Switching in the [[8,3,2]] Code

Shixin Wu, Dawei Zhong, Todd A. Brun, Daniel A. Lidar

2603.15610 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a fault-tolerant quantum computing protocol that achieves universal quantum computation by switching between two versions of an [[8,3,2]] quantum error correction code, where one supports single-qubit operations and the other supports multi-qubit gates, circumventing theoretical limitations on gate sets within single codes.

Key Contributions

  • Development of a fault-tolerant code-switching protocol between two versions of the [[8,3,2]] quantum error correction code
  • Demonstration of universal quantum computation using postselected error detection with quadratic logical error suppression
  • Numerical validation through implementation of Grover's search algorithm on three logical qubits
fault-tolerant quantum computing quantum error correction code switching Eastin-Knill theorem transversal gates
View Full Abstract

Code-switching offers a route to universal, fault-tolerant quantum computation by circumventing the limitation implied by the Eastin-Knill theorem against a universal transversal gate set within a single quantum code. Here, we present a fault-tolerant code-switching protocol between two versions of the $[[8, 3, 2]]$ code. One version supports weakly fault-tolerant single-qubit Clifford gates, while the other supports a logical $\overline{\mathrm{CCZ}}$ gate via transversal $T/T^\dagger$ together with logical $\overline{\mathrm{CZ}}$, $\overline{\mathrm{CNOT}}$, and $\overline{\mathrm{SWAP}}$ gates. Because both codes have distance 2, the protocol operates in a postselected, error-detecting regime: single faults lead to detectable outcomes, and accepted runs exhibit quadratic suppression of logical error rates. This yields a universal scheme for postselected fault-tolerant computation. We validate the protocol numerically through simulations of state preparation, code switching, and a three-logical-qubit implementation of Grover's search.

A direct controlled-phase gate between microwave photons

Adrian Copetudo, Amon M. Kasper, Tanjung Krisnanda, Gregoire Veyrac, Shushen Qin, Hui Khoon Ng, Yvonne Y. Gao

2603.15587 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper demonstrates a new method to create direct interactions between microwave photons in superconducting cavities without exciting ancillary nonlinear elements, which reduces noise and decoherence. The researchers use this approach to implement a controlled-phase gate that directly entangles photons, providing a key building block for fault-tolerant bosonic quantum computing.

Key Contributions

  • Engineering a Raman-assisted cross-Kerr interaction between microwave photons without exciting nonlinear elements
  • Implementing a direct controlled-phase gate between oscillators that operates within bosonic code spaces
  • Demonstrating photon-number parity mapping for error detection while preserving coherence
  • Expanding the bosonic cQED toolbox for fault-tolerant quantum computing
bosonic quantum computing controlled-phase gate superconducting cavities cross-Kerr interaction fault tolerance
View Full Abstract

Useful quantum information processing ultimately requires operations over large Hilbert spaces, where logical information can be encoded efficiently and protected against noise. Harmonic oscillators naturally provide access to such high-dimensional spaces and enable hardware-efficient, error-correctable bosonic encodings. However, direct entangling operations between oscillators remain an outstanding challenge. Existing strategies typically rely on parametrically activating interactions that populate the excited states of an ancillary nonlinear element. This induces an effective interaction between the oscillators, at the expense of introducing additional dissipation channels and potential leakage from the encoded manifold. Here, we engineer a Raman-assisted cross-Kerr interaction between microwave photons hosted in two superconducting cavities, without exciting the nonlinear element, thereby suppressing coupler-induced decoherence. This approach generates a direct coupling between microwave photons that is exploited to implement a controlled-phase gate within the single- and two-photon subspaces of two oscillators, directly entangling them. Finally, we harness this dynamics to map the photon-number parity of a storage cavity onto an auxiliary oscillator rather than a nonlinear element, enabling error detection while protecting the storage mode from measurement-induced decoherence. Our work expands the bosonic circuit quantum electrodynamics (cQED) toolbox by enabling coherence-preserving direct photon-photon interactions between oscillators. This realizes an entangling gate that operates entirely within a bosonic code space while suppressing decoherence from nonlinear ancilla excitations, providing a key primitive for fault-tolerant bosonic quantum computing.

Simulating the Open System Dynamics of Multiple Exchange-Only Qubits using Subspace Monte Carlo

Tameem Albash, N. Tobias Jacobson

2603.15577 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a Monte Carlo simulation method for modeling multiple exchange-only qubits in open quantum systems by leveraging the fact that spin projection quantum numbers remain unchanged under exchange operations. The method reduces the density-matrix representation from dimension 8^(2n) to 3^(2n) for n qubits and is applied to study multi-round Bell state stabilization circuits using 6 exchange-only qubits.

Key Contributions

  • Development of a Subspace Monte Carlo method that shrinks the density-matrix representation for n exchange-only qubits from dimension 8^(2n) to 3^(2n)
  • Demonstration of the method on multi-round Bell state stabilization circuits with reset-if-leaked gadgets using 6 EO qubits
exchange-only qubits open system dynamics Monte Carlo simulation Bell state stabilization quantum error correction
View Full Abstract

We propose a Monte Carlo based method for simulating the open system dynamics of multiple exchange-only (EO) qubits. In the EO encoding, the total spin projection quantum number along the $z$-axis of the three constituent spins remains unchanged under exchange operations, in contrast to the open system (or multi-qubit miscalibration) setting where coherent and incoherent mixing of states with different quantum numbers occurs. In our approach, we choose to measure the total spin component along the $z$-axis of each EO qubit after every logical quantum operation, which decoheres coherent mixtures of states with different spin projection quantum numbers. Independent simulations thus give different trajectories of the system in the associated subspaces, so we refer to this method as the Subspace Monte Carlo method. With each EO qubit having a definite spin projection quantum number, the density matrix of $n$ qubits can be represented by a vector of dimension $3^{2n}$, instead of $8^{2n}$, with an additional vector of dimension $n$ to label the quantum number of each qubit. We show that this approximation of the dynamics remains faithful to the true dynamics when the simulated circuits twirl the noise, converting coherent errors to stochastic errors, which can be achieved using randomized compiling. We use this simulation approach to study how correlations in measurement outcomes of circuits with reset-if-leaked gadgets, such as a multi-round Bell state stabilization circuit that uses 6 EO qubits, are affected by the choice of CNOT implementations.
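
The size of the saving is easy to make concrete. A sketch for the paper's 6-EO-qubit Bell-stabilization circuit, using the two dimensions stated in the abstract:

```python
def full_dim(n: int) -> int:
    """Vectorized density-matrix dimension over all three spins per EO qubit: 8**(2n)."""
    return 8 ** (2 * n)

def subspace_dim(n: int) -> int:
    """Dimension once each EO qubit carries a definite spin-projection quantum number: 3**(2n)."""
    return 3 ** (2 * n)

n = 6  # the multi-round Bell-stabilization circuit uses 6 EO qubits
print(full_dim(n))                    # 68719476736
print(subspace_dim(n))                # 531441
print(full_dim(n) / subspace_dim(n))  # ~1.3e5-fold reduction in vector size
```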

Velocity-Enabled Quantum Computing with Neutral Atoms

Ohad Lib, Hendrik Timme, Maximilian Ammenwerth, Flavien Gyger, Renhao Tao, Shijia Sun, Immanuel Bloch, Johannes Zeiher

2603.15561 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper introduces a new approach to quantum computing with neutral atoms that uses atom velocity as a control parameter, enabling selective operations on moving atoms through Doppler shifts and spatial phase manipulation. The researchers demonstrate key quantum error correction primitives including high-fidelity gates, cluster state generation, and error detection codes while reducing hardware complexity.

Key Contributions

  • Introduction of velocity as a new degree of freedom for neutral atom quantum computing architectures
  • Demonstration of velocity-selective state preparation and measurement using controlled Doppler shifts
  • Achievement of 99.86% fidelity CZ gates and implementation of quantum error correction primitives including 8-qubit cluster states and a [[4,2,2]] error detection code
  • Reduction of hardware overhead by enabling selective operations on moving atoms with global control beams
neutral atoms quantum error correction logical qubits Doppler shifts cluster states
View Full Abstract

Realizing error-corrected logical qubits is a central goal for the current development of digital quantum computers. Neutral atoms offer the opportunity to coherently shuttle atoms for realizing efficient quantum error correction based on long-range connectivity and parallel atom transport. Nevertheless, time overheads in shuttling atoms and complex control hardware pose challenges to scaling current architectures. Here, we introduce atom velocity as a new degree of freedom in neutral-atom architectures tailored to quantum error correction. Through controlled Doppler shifts, we demonstrate velocity-selective mid-circuit state preparation and measurement on moving atoms, leaving spectator atoms unaffected. Furthermore, we achieve on-the-fly local single-qubit rotations by mapping micron-scale atom displacements to the spatial phase of global control beams. Complementing these techniques with CZ entangling gates with a fidelity of 99.86(4)%, we experimentally implement key primitives for quantum error correction and measurement-based quantum computing. We generate an eight-qubit entangled cluster state with an average stabilizer value of 0.830(4), realize an [[4,2,2]] error-detection code with 99.0(3) % logical Bell-state fidelity, and perform stabilizer measurements using a flying ancilla. By enabling selective operations on continuously moving atoms using only global beams, this velocity-enabled architecture reduces hardware overhead while minimizing shuttling and transfer delays, opening a new pathway for fast, large-scale atom-based quantum computation.
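
Velocity selectivity rests on the first-order Doppler shift Δf = v/λ. A sketch with hypothetical numbers (neither the shuttling velocity nor the wavelength is given in the abstract):

```python
def doppler_shift_hz(velocity_m_s: float, wavelength_m: float) -> float:
    """First-order Doppler shift seen by an atom moving at the given velocity
    along a beam of the given wavelength: delta_f = v / lambda."""
    return velocity_m_s / wavelength_m

# Hypothetical numbers -- neither value is taken from the paper.
v = 0.5       # m/s, an assumed shuttling velocity
lam = 689e-9  # m, an assumed optical wavelength
print(doppler_shift_hz(v, lam))  # ~7e5 Hz, i.e. a shift of hundreds of kHz
```

Shifts of this order can exceed a narrow transition linewidth, which is what lets a global beam address only the moving atoms while leaving stationary spectators untouched.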

Error semitransparent universal control of a bosonic logical qubit

Saswata Roy, Owen C. Wetherbee, Valla Fatemi

2603.15356 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates error semi-transparent gates for bosonic logical qubits, achieving universal quantum computation with reduced errors from photon loss. The researchers show a five-fold reduction in infidelity and construct a complete gate set including non-Clifford operations necessary for fault-tolerant quantum computing.

Key Contributions

  • Introduction of error semi-transparent framework for universal bosonic logical qubit gates
  • Demonstration of complete gate set {X, H, T} with five-fold infidelity reduction
  • Construction of composite non-Clifford operations using error-corrected bosonic qubits
bosonic codes error correction fault-tolerant quantum computing logical qubits universal gates
View Full Abstract

Bosonic codes offer hardware-efficient approaches to logical qubit construction and hosted the first demonstration of beyond-break-even logical quantum memory. However, such accomplishments were done for idling information, and realization of fault-tolerant logical operations remains a critical bottleneck for universal quantum computation in scaled systems. Error-transparent (ET) gates offer an avenue to resolve this issue, but experimental demonstrations have been limited to phase gates. Here, we introduce a framework based on dynamic encoding subspaces that enables simple linear drives to accomplish universal gates that are error semi-transparent (EsT) to oscillator photon loss. With an EsT logical gate set of {X, H, T}, we observe a five-fold reduction in infidelity conditioned on photon loss, demonstrate extended active-manipulation lifetimes with quantum error correction, and construct a composite EsT non-Clifford operation using a sequence of eight gates from the set. Our approach is compatible with methods for detectable ancilla errors, offering an approach to error-mitigated universal control of bosonic logical qubits with the standard quantum control toolkit.

Asymptotically good bosonic Fock state codes: Exact and approximate

Dor Elimelech, Arda Aydin, Alexander Barg

2603.15190 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: high

This paper develops new quantum error correction codes for photonic quantum systems that can protect against photon loss (amplitude damping). The authors prove that exact and approximate error correction are equivalent for these codes and construct families of asymptotically good codes with bounded photon numbers per mode.

Key Contributions

  • Proved equivalence of exact and approximate error correction for Fock state codes against amplitude damping
  • Constructed asymptotically good bosonic Fock state codes with bounded per-mode occupancy
  • Established connection to permutation invariant codes and extended results to qudit systems
quantum error correction bosonic codes Fock states amplitude damping photonic quantum computing
View Full Abstract

We examine exact and approximate error correction for multi-mode Fock state codes protecting against the amplitude damping noise. Based on a new formalization of the truncated amplitude damping channel, we show the equivalence of exact and approximate error correction for Fock state codes against random photon losses. Leveraging the recently found construction method based on classical codes with large distance measured in the $\ell_1$ metric, we construct asymptotically good (exact and approximate) Fock state codes. These codes have an additional property of bounded per-mode occupancy, which increases the coherence lifetime of code states and reduces the photon loss probability, both of which have a positive impact on the stability of the system. Using the relation between Fock state code construction and permutation invariant (PI) codes, we also obtain families of asymptotically good qudit PI codes as well as codes in monolithic nuclear state spaces.

Scalable Self-Testing of Mutually Anticommuting Observables and Maximally Entangled Two-Qudits

Souradeep Sasmal, Ritesh K. Singh, Prabuddha Roy, A. K. Pan

2603.15018 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: high

This paper develops a method to verify quantum systems using Bell inequalities, specifically testing high-dimensional entangled states and mutually anticommuting measurements without needing to trust the measurement devices. The framework can scale to certify increasingly complex quantum resources needed for advanced quantum technologies.

Key Contributions

  • Simultaneous self-testing framework for maximally entangled two-qudit states and mutually anticommuting observables
  • Derivation of optimal quantum bounds using Sum-of-Squares decomposition without dimensional assumptions
  • Proof that maximal quantum violation corresponds to Clifford algebra representations with minimal required dimensions
  • Establishment of quantitative robustness bounds relating Bell value deviations to strategy fidelity
self-testing Bell inequalities entanglement anticommuting observables Clifford algebra
View Full Abstract

The next frontier in device-independent quantum information lies in the certification of scalable and parallel quantum resources, which underpin advanced quantum technologies. We put forth a simultaneous self-testing framework for maximally entangled two-qudit state of local dimension $m_*=2^{\lfloor n/2 \rfloor}$ (equivalently $\lfloor n/2 \rfloor$ copies of maximally entangled two-qubit pairs), together with $n$ numbers of anti-commuting observables on one side. To this end, we employ an $n$-settings Bell inequality comprising two space-like separated observers, Alice and Bob, having $2^{n-1}$ and $n$ number of measurement settings, respectively. We derive the local ontic bound of this inequality and, crucially, employ the Sum-of-Squares decomposition to determine the optimal quantum bound without presupposing the dimension of the state or observables. We then establish that any physical realisation achieving the maximal quantum violation must, up to local isometries and complex conjugation, correspond to a reference strategy consisting of a maximally entangled state of local dimension of at least $2^{\lfloor n/2 \rfloor}$ and local observables forming an irreducible representation of the Clifford algebra. This construction thereby demonstrates that the minimal dimension compatible with $n$ mutually anticommuting observables is naturally self-tested by the maximal violation of the proposed Bell functional. Finally, we analyse the robustness of the protocol by establishing quantitative bounds relating deviations in the observed Bell value to the fidelity between the realised and the ideal strategies. Our results thus provide a scalable, dimension-independent route for the certification of high-dimensional entanglement and Clifford measurements in a fully device-independent framework.
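
The scaling of the self-tested resources can be tabulated directly from the abstract's formulas: certified local dimension m* = 2^⌊n/2⌋, with 2^(n-1) measurement settings for Alice and n for Bob. A minimal sketch:

```python
def certified_local_dim(n: int) -> int:
    """Self-tested local dimension m* = 2**floor(n/2) for n mutually
    anticommuting observables on one side."""
    return 2 ** (n // 2)

for n in (3, 5, 8):
    alice_settings = 2 ** (n - 1)  # Alice's measurement settings
    bob_settings = n               # Bob's measurement settings
    print(n, certified_local_dim(n), alice_settings, bob_settings)
```

The certified dimension grows exponentially in n, but so does Alice's settings count, which is the practical cost of the scalability.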

Cavity-Free Distributed Quantum Computing with Rydberg Ensembles via Collective Enhancement

Aman Ullah

2603.14854 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper presents a quantum networking architecture that uses Rydberg atom ensembles to create entangled connections between distant quantum computers without needing optical cavities. The approach achieves high-fidelity quantum gates and atom-photon conversion, enabling practical distributed quantum computing with entanglement generation rates exceeding 600 Hz at 20 km distances.

Key Contributions

  • Cavity-free quantum networking architecture using Rydberg atom ensembles
  • High-fidelity distributed quantum computing protocol with 99.93% gate fidelity and >97.5% Bell state fidelity
  • Practical scalable approach achieving 600+ Hz entanglement rates at 20 km separation
Rydberg atoms distributed quantum computing quantum networking entanglement distribution cavity-free
View Full Abstract

A complete architecture for cavity-free quantum networking based on collective enhancement in Rydberg atom ensembles is presented. The protocol exploits Rydberg blockade and phase-matched directional emission to eliminate optical cavities without sacrificing performance. The architecture comprises three steps: (i) local control-ensemble entanglement via Rydberg blockade with fidelity $F_{\mathrm{gate}} \approx 99.93\%$; (ii) atom-photon conversion via Raman transitions, achieving directional emission ($\eta_{\mathrm{dir}} \approx 35\%$) and single-node efficiency $\eta_{\mathrm{node}} \approx 19\%$; and (iii) remote atom-atom entanglement via Hong-Ou-Mandel interference, producing Bell states with fidelity $F > 97.5\%$. With quantum memories enabling retry protocols, entanglement generation rates exceed $600$ Hz at 20 km separation. This cavity-free approach provides a practical and scalable pathway for distributed quantum computing and secure quantum communication.
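
A toy heralded-entanglement rate model helps place the >600 Hz figure. Only the node efficiency of 19% comes from the abstract; the attempt rate, fiber loss, and midpoint geometry are all assumptions of this sketch:

```python
def heralded_rate_hz(attempt_rate_hz: float, eta_node: float,
                     distance_km: float, loss_db_per_km: float = 0.2) -> float:
    """Toy rate model for two-photon heralded entanglement: each node's photon
    travels half the distance to a midpoint station, and the Bell-state
    measurement succeeds for at most 1/2 of coincident arrivals."""
    eta_fiber = 10 ** (-loss_db_per_km * (distance_km / 2) / 10)  # per photon
    p_success = 0.5 * (eta_node * eta_fiber) ** 2
    return attempt_rate_hz * p_success

# eta_node = 0.19 is from the abstract; the 1 MHz attempt rate and
# 0.2 dB/km fiber loss are assumed values for illustration only.
print(heralded_rate_hz(attempt_rate_hz=1e6, eta_node=0.19, distance_km=20))
```

Under these assumed parameters the model lands at kHz-scale rates, i.e. the same order as the quoted >600 Hz; it is a plausibility check, not a reproduction of the paper's rate calculation.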

Protecting Distributed Blockchain with Twin-Field Quantum Key Distribution: A Quantum Resistant Approach

Xuan Li, Ying Guo

2603.14826 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: high

This paper proposes a quantum-resistant blockchain architecture that uses twin-field quantum key distribution (TF-QKD) to protect distributed blockchain networks from quantum computing threats. The approach aims to overcome distance and scalability limitations of traditional QKD systems by implementing a measurement-device-independent topology that reduces infrastructure complexity.

Key Contributions

  • Scalable quantum-resistant blockchain architecture using TF-QKD protocol
  • Linear scaling optimization reducing infrastructure complexity from quadratic to linear
  • Integration of measurement-device-independent topology to overcome rate-loss limits in quantum networks
quantum key distribution twin-field QKD blockchain security measurement-device-independent quantum-resistant cryptography
View Full Abstract

Quantum computing poses multi-layered security challenges to classical blockchain systems. Quantum-secured blockchains that rely on quantum key distribution (QKD) to establish secure channels can address this threat. This paper presents a scalable quantum-resistant blockchain architecture designed to address the connectivity and distance limitations of QKD-integrated quantum networks. By leveraging the twin-field (TF) QKD protocol within a measurement-device-independent (MDI) topology, the proposed framework reduces the infrastructure complexity from quadratic to linear scaling. This architecture effectively integrates information-theoretic security with distributed consensus mechanisms, allowing the system to overcome the fundamental rate-loss limits inherent in traditional point-to-point links. The proposed scheme offers a theoretically sound and feasible solution for deploying large-scale, long-distance consortium blockchains.
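
The quadratic-to-linear claim is a topology-counting argument; a minimal sketch (our illustration, not the paper's code) contrasts the link counts of a fully meshed point-to-point QKD network with an MDI/TF-QKD star topology around one untrusted hub:

```python
# Illustrative link-count scaling (not from the paper): a fully meshed
# point-to-point QKD network versus a measurement-device-independent
# (MDI/TF-QKD) star topology with a single untrusted measurement hub.

def pairwise_links(n: int) -> int:
    """Point-to-point QKD: every pair of nodes needs its own secure link."""
    return n * (n - 1) // 2  # quadratic in n

def mdi_star_links(n: int) -> int:
    """MDI/TF-QKD: each node connects only to the untrusted hub."""
    return n  # linear in n

for n in (10, 100, 1000):
    print(n, pairwise_links(n), mdi_star_links(n))
```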

Adaptive Control of Stochastic Error Accumulation in Fault-Tolerant Quantum Computation

Tirtha Haque

2603.14687 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a machine learning approach called Chronological Deep Q-Network (Ch-DQN) for adaptive quantum error correction that tracks how noise changes over time, rather than treating each error correction cycle independently. The method aims to prevent the gradual accumulation of errors that can cause logical qubits to fail in fault-tolerant quantum computers.

Key Contributions

  • Introduction of adaptive error correction using deep reinforcement learning that accounts for temporal noise correlations
  • Novel approach treating fault-tolerant quantum computation as a stochastic control problem with hazard accumulation
  • Development of Ch-DQN algorithm with backward trajectory refinement and fractional meta-updates for non-stationary noise environments
fault-tolerant quantum computing quantum error correction adaptive control deep reinforcement learning stochastic noise
View Full Abstract

In realistic hardware for fault-tolerant quantum computation, non-stationary noise and stochastic drift lead to logical failure through the temporal accumulation of errors, not through independent events. Static decoding and fixed calibration techniques are structurally incompatible with this situation because they do not take into account temporal error correlations or control-induced back-action. These effects motivate control policies that must track noise evolution across correction cycles, rather than respond to individual syndromes in isolation. We treat fault-tolerant quantum computation as a stochastic control problem, modelled using reduced quantum dynamics in which Pauli error processes are governed by latent noise parameters that vary temporally. From this perspective, logical failure arises through the accumulation of a hazard variable, and the corresponding control objective depends on the full history of observations. Operating under these conditions, a Chronological Deep Q-Network (Ch-DQN) maintains an internal belief state that tracks both noise evolution and accumulated hazard. During training, backward refinement of trajectories is used to sample slowly drifting modes of operation, while runtime inference remains strictly causal. A fractional meta-update stabilizes learning in the presence of non-stationary, control-coupled dynamics. Through multi-distance simulations that incorporate stochastic drift and feedback from decision-making, Ch-DQN suppresses hazard accumulation and extends logical survival time relative to static and recurrent baselines. Error correction in this regime is therefore no longer a static decoding task, but a control process whose success is determined over time by the underlying noise dynamics.

Quantifying surface losses in superconducting aluminum microwave resonators

Elizabeth Hedrick, Faranak Bahrami, Alexander C. Pakpour-Tabrizi, Atharv Joshi, Q. Rumman Rahman, Ambrose Yang, Ray D. Chang, Matthew P. Bland, Apoorv...

2603.13183 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper investigates how surface defects in aluminum oxide layers limit the performance of superconducting quantum devices. The researchers measure microwave losses caused by two-level systems in aluminum resonators and find that native aluminum oxide contributes significantly to qubit decoherence, providing insights for improving quantum device fabrication.

Key Contributions

  • Quantified that surface two-level systems in 2.7 nm aluminum oxide layers are the primary source of losses in superconducting aluminum resonators
  • Demonstrated that aluminum interface defects contribute approximately 27% of the relaxation rate in state-of-the-art tantalum-on-silicon qubits
  • Showed that HF treatment removes aluminum oxide but rapid regrowth limits long-term improvements in device performance
superconducting qubits two-level systems aluminum oxide transmon qubits microwave resonators
View Full Abstract

The recent realization of millisecond-scale coherence with tantalum-on-silicon transmon qubits showed that depositing the Al/AlOx/Al Josephson junction in a high purity, ultrahigh vacuum environment was critical for achieving lifetime-limited coherence, motivating careful examination of the aluminum surface two-level system (TLS) bath. Here, we measure the microwave absorption arising from surface TLSs in superconducting aluminum resonators, following methodology developed for tantalum resonators. We vary film and surface properties and correlate microwave measurements with materials characterization. We find that the lifetimes of superconducting aluminum resonators are primarily limited by surface losses associated with TLSs in the 2.7 nm-thick native AlOx. Treatment with 49% HF removes surface AlOx completely; however, rapid oxide regrowth limits improvements in surface loss and long term device stability. Using these measurements we estimate that TLSs in aluminum interfaces contribute around 27% of the relaxation rate of state-of-the-art tantalum-on-silicon qubits that incorporate aluminum-based Josephson junctions.

Beta Tantalum Transmon Qubits with Quality Factors Approaching 10 Million

Atharv Joshi, Apoorv Jindal, Paal H. Prestegaard, Faranak Bahrami, Elizabeth Hedrick, Matthew P. Bland, Tunmay Gerg, Guangming Cheng, Nan Yao, Robert ...

2603.13174 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper demonstrates that beta-phase tantalum can be used to create high-quality superconducting qubits for quantum computers, achieving quality factors approaching 10 million despite previous beliefs that this material phase would be inferior to alpha-phase tantalum.

Key Contributions

  • Demonstrated that beta-Ta can achieve exceptionally high qubit quality factors (up to 10.1 million), challenging previous assumptions about material requirements
  • Established beta-Ta on sapphire as a viable platform for scalable qubit fabrication since beta-Ta readily nucleates at room temperature
  • Characterized the loss mechanisms in beta-Ta qubits, showing surface two-level systems as the dominant loss channel
transmon qubits beta tantalum quality factor superconducting qubits two-level systems
View Full Abstract

Tantalum-based transmon qubits are a promising platform for building large-scale quantum processors. So far, these qubits have been made from tantalum films grown exclusively in the alpha phase (α-Ta). The beta phase of tantalum (β-Ta) readily nucleates at room temperature, making it attractive for scalable qubit fabrication. However, β-Ta is widely believed to be detrimental to qubit performance because it has a lower superconducting critical temperature than α-Ta. We challenge this prevailing belief by fabricating low-loss transmon qubits from β-Ta films on sapphire. Across 11 qubits, the mean time-averaged quality factor is (5.6 +/- 2.3) x 10^6, with the best qubit recording a time-averaged quality factor of (10.1 +/- 1.3) x 10^6. Resonator studies demonstrate that the dominant microwave loss channel is surface two-level systems, with the surface loss contribution for β-Ta being about twice that of α-Ta. β-Ta films exhibit significant kinetic inductance, consistent with an estimated magnetic penetration depth of (1.78 +/- 0.02) μm. This work establishes β-Ta on sapphire as a material platform for realizing low-loss transmon qubits and other superconducting devices such as compact resonators, kinetic inductance detectors, and quasiparticle traps.

Circuit Optimization for Universality Transformation

Yasuaki Nakayama, Yuki Takeuchi, Seiseki Akibue

2603.13169 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents more efficient quantum circuit constructions for transforming between different universal gate sets, specifically showing how to convert a computationally universal gate set to a strictly universal one using fewer resources. The work demonstrates that any multi-qubit quantum operation can be generated using only real single-qubit gates, CCZ gates, and a single special quantum state.

Key Contributions

  • Circuit optimization that eliminates non-imaginary ancillary qubits in universality transformation
  • Extension to continuous gate-set setting showing exact generation of any multi-qubit unitary using constrained gate set
quantum gates circuit optimization universal computation gate synthesis quantum compilation
View Full Abstract

It is known that a computationally universal gate set $\{H,CCZ\}$ can be transformed to a strictly universal one $\{\Lambda(S), H\}$ using one maximally imaginary state $|+i\rangle$ and non-imaginary ancillary qubits. We achieve this transformation with a shorter circuit that eliminates non-imaginary ancillary qubits. We further extend this to the continuous gate-set setting, showing that any multi-qubit unitary can be exactly generated by real single-qubit unitary gates, $CCZ$ gates and $|+i\rangle$.

On-Demand Correlated Errors in Superconducting Qubits from a Particle Accelerator

Thomas McJunkin, A. W. Hunt, Yenuel Jones-Alberty, T. M. Haard, M. K. Spear, James Shackford, Tom Gilliss, Mayra Amezcua, C. A. Watson, T. M. Sweeney,...

2603.13124 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper describes a new experimental facility that uses a particle accelerator to study how ionizing radiation creates correlated errors in superconducting quantum computers. The researchers can now generate radiation-induced errors on demand to better understand and characterize how cosmic rays and other high-energy particles interfere with quantum computations.

Key Contributions

  • Development of a controllable facility coupling electron linear accelerator to dilution refrigerator for studying radiation effects on quantum systems
  • Demonstration of on-demand generation and characterization of radiation-induced qubit errors including relaxation, excitation, and detuning errors
  • Systematic study showing error signatures depend on junction placement and superconducting gap properties
superconducting qubits quantum error correction ionizing radiation correlated errors transmon
View Full Abstract

Ionizing radiation is a known source of correlated errors in superconducting quantum processors, inhibiting the functionality of quantum error correction surface codes. High-energy photons and charged particles deposit pair-breaking energy into these systems leading to excess quasiparticles near Josephson junctions that increase qubit decoherence. Previous investigations of this problem have relied on ambient, stochastic sources of ionizing radiation or alternative methods of quasiparticle generation. Here, we present a facility that couples an electron linear accelerator (linac) to a dilution refrigerator to study ionizing radiation in quantum systems. A single linac electron closely mimics the energy deposition characteristics of a typical cosmic-ray muon, and we demonstrate the facility's usefulness with a multi-qubit superconducting transmon chip. Characteristic radiation-induced relaxation errors are quickly and easily collected with the speed and timing information of the linac. Additionally, we present qubit excitation and detuning errors that can be difficult to detect without the on-demand source of ionizing radiation. These error signatures are shown to be dependent on the junction placement and surrounding superconducting gaps.

Partially Fault-Tolerant Quantum Computation for Megaquop Applications

Ming-Zhi Chung, Ali H. Z. Kavaki, Artur Scherer, Abdullah Khalid, Xiangzhou Kong, Toru Kawakubo, Namit Anand, Gebremedhin A Dagnew, Zachary Webb, Ally...

2603.13093 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes partially fault-tolerant quantum computing approaches for executing large-scale quantum circuits with millions of operations, focusing on the STAR architecture for efficient analog rotations and comparing resource requirements against full fault-tolerant methods. The authors demonstrate that partial fault tolerance could enable practical quantum simulation of condensed matter systems like the 2D Fermi-Hubbard model with hundreds of thousands of qubits.

Key Contributions

  • Quantum resource estimation comparison between partial and full fault-tolerant quantum computing architectures
  • Development of code growth procedure to reduce factory size for analog rotation state production
  • Analysis of space-time trade-offs and identification of optimal circuit regimes for partial FTQC
  • Demonstration that 2D Fermi-Hubbard model simulation is well-suited for STAR architecture implementation
fault-tolerant quantum computing quantum resource estimation STAR architecture analog rotations quantum error correction
View Full Abstract

Partially fault-tolerant quantum computing (FTQC) has recently emerged as a promising approach for the execution of megaquop-scale circuits with millions of logical operations. In this work, we demonstrate the strengths and the limitations of this approach by conducting quantum resource estimation (QRE) of the space-time-efficient analog rotation (STAR) architecture using realistic hardware specifications for superconducting processors, and compare it against the QRE of the full FTQC architecture. We show how the performance of the STAR architecture's protocols is affected by hardware improvements. We also reduce the space requirements for partial FTQC by developing a procedure leveraging code growth to decrease the size of a factory producing analog rotation states. Our results reveal a non-trivial dependence of the optimal pre-growth code distance on the rotation angle with respect to post-growth infidelity. Further, we analyze space-time trade-offs between the factory size and the error-mitigation overhead, and observe that in an application-agnostic setting, there is a Goldilocks zone for circuits in the regime of roughly $10^5$ to $10^6$ small-angle rotation gates. We show that quantum simulation of 2D Fermi-Hubbard model systems is a particularly well-suited application for the STAR architecture, requiring only hundreds of thousands of physical qubits and runtimes on the order of minutes for modest system sizes. Due to its favourable algorithmic scaling to larger system sizes, utility-scale simulation of the 2D Fermi-Hubbard model could potentially be attained using partial FTQC.

Asymptotically Optimal Quantum Circuits for Comparators and Incrementers

Vivien Vandaele

2603.12917 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops more efficient quantum circuits for basic arithmetic operations like comparisons and increments, achieving optimal performance in terms of gate count, circuit depth, and qubit usage. The authors show these improvements can significantly reduce the complexity of Shor's factoring algorithm from O(n³) to O(n² log² n) depth.

Key Contributions

  • Asymptotically optimal quantum circuits for comparators and incrementers with Θ(n) gates and Θ(log n) depth
  • Improved Shor's algorithm implementation reducing circuit depth from O(n³) to O(n² log² n)
  • General theorem for trading ancilla qubits for control qubits with low overhead
quantum circuits Shor's algorithm quantum arithmetic fault-tolerant quantum computing circuit optimization
View Full Abstract

We present quantum circuits for comparison and increment operations that achieve an asymptotically optimal gate count of $\Theta(n)$ and depth of $\Theta(\log n)$ over the Clifford+Toffoli gate set, while using a provably minimal number of qubits. We extend these results to classical-quantum comparators, yielding an improved classical-quantum adder with an optimal qubit count. Given the ubiquity of these operations as algorithmic building blocks, our constructions translate directly into reduced circuit complexity for many quantum algorithms. As a notable example, they can be used to improve a space-efficient circuit for Shor's factoring algorithm, reducing circuit depth from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2 \log^2 n)$ without increasing either the qubit count or the asymptotic gate complexity. Underpinning these results is a general theorem demonstrating how to trade ancilla qubits for control qubits with low overhead in both depth and gate count, providing a broadly applicable tool for quantum circuit design.
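
The logarithmic-depth claim rests on a divide-and-conquer pattern that can be illustrated classically: compare the high halves and low halves of two bit strings in parallel, then merge the two results in constant depth. The sketch below illustrates that recursion only; it is our illustration, not the paper's Clifford+Toffoli construction.

```python
# Classical sketch of the divide-and-conquer idea behind logarithmic-depth
# comparators (illustration only; the paper's circuits are quantum).
# Compare two n-bit strings by comparing their high and low halves in
# parallel, then merging in O(1): result = high-half verdict unless it ties.

def compare(a: tuple, b: tuple, depth: int = 0):
    """Return (-1/0/+1 for a<b / a==b / a>b, recursion depth used)."""
    if len(a) == 1:
        return (a[0] > b[0]) - (a[0] < b[0]), depth
    mid = len(a) // 2
    hi, d1 = compare(a[:mid], b[:mid], depth + 1)   # these two calls are
    lo, d2 = compare(a[mid:], b[mid:], depth + 1)   # independent: parallel depth
    return (hi if hi != 0 else lo), max(d1, d2)

bits = lambda x, n: tuple((x >> i) & 1 for i in reversed(range(n)))
sign, depth = compare(bits(300, 10), bits(299, 10))
print(sign, depth)  # 300 > 299, resolved at depth ~ log2(10)
```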

Fisher information based lower bounds on the cost of quantum phase estimation

Ryosuke Kimura, Kosuke Mitarai

2603.12706 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: high Network: none

This paper analyzes the fundamental performance limits of quantum phase estimation (QPE) algorithms using Fisher information theory, comparing two main approaches (QFT-QPE and Hadamard test-based QPE) and showing that their optimal choice depends on the overlap between input and target states.

Key Contributions

  • Establishes fundamental lower bounds on QPE performance using Fisher information and Cramér-Rao bounds, separating circuit limitations from classical post-processing
  • Demonstrates performance crossover between QFT-QPE and HT-QPE paradigms depending on state overlap, with QFT-QPE having more favorable scaling
quantum phase estimation Fisher information Cramér-Rao bound quantum algorithms circuit depth
View Full Abstract

Quantum phase estimation (QPE) is a cornerstone of quantum algorithms designed to estimate the eigenvalues of a unitary operator. QPE is typically implemented through two paradigms with distinct circuit structures: quantum Fourier transform-based QPE (QFT-QPE) and Hadamard test-based QPE (HT-QPE). Existing performance assessments fail to separate the statistical information inherent in the quantum circuit from the efficiency of classical post-processing, thereby obscuring the limits intrinsic to the circuit structure itself. In this study, we employ Fisher information and the Cramér-Rao lower bound to formulate the performance limits of circuit designs independent of the efficiency of classical post-processing. Defining the circuit depth as $T$ and the total runtime as $t_{\rm total}$, our results demonstrate that the achievable scaling is constrained by a non-trivial lower bound on their product $T\,t_{\rm total}$, although previous studies have typically treated the circuit depth $T$ and the total runtime $t_{\rm total}$ as separate resources. Notably, QFT-QPE possesses a more favorable scaling with respect to the overlap between the input state and the target eigenstate corresponding to the desired eigenvalue than HT-QPE. Numerical simulations confirm these theoretical findings, demonstrating a clear performance crossover between the two paradigms depending on the overlap. Furthermore, we verify that practical algorithms, specifically the quantum multiple eigenvalue Gaussian filtered search (QMEGS) and curve-fitted QPE, achieve performance levels closely approaching our derived limits. By elucidating the performance limits inherent in quantum circuit structures, this work concludes that the optimal choice of circuit configuration depends significantly on the overlap.
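
The role of circuit depth can be illustrated with the standard textbook model of a Hadamard test (our sketch, not the paper's derivation): applying the unitary T times on an eigenstate gives outcome probability p = (1 + cos(T·φ))/2, and the per-shot classical Fisher information of that Bernoulli outcome works out to exactly T², so deeper circuits buy quadratically more information per shot.

```python
# Numerical check (standard textbook model, not the paper's analysis):
# per-shot classical Fisher information of a Hadamard-test circuit whose
# unitary is applied T times, with outcome probability p = (1 + cos(T*phi))/2.
# Analytically: I(phi) = (dp/dphi)^2 / (p(1-p)) = T^2.

import math

def fisher_information(phi: float, T: int, h: float = 1e-6) -> float:
    p = lambda x: (1 + math.cos(T * x)) / 2
    dp = (p(phi + h) - p(phi - h)) / (2 * h)   # central-difference derivative
    return dp**2 / (p(phi) * (1 - p(phi)))     # Fisher info of a Bernoulli

for T in (1, 4, 16):
    print(T, round(fisher_information(0.3, T), 3))  # ≈ T**2
```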

Optimal control with flag qubits

Liang-Xu Xie, Lui Zuccherelli de Paula, Weizhou Cai, Qing-Xuan Jie, Luyan Sun, Chang-Ling Zou, Guang-Can Guo, Zi-Jie Chen, Xu-Bo Zou

2603.12162 • Mar 12, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper introduces Flag-GRAPE, a new quantum control algorithm that uses auxiliary 'flag' qubits to actively combat decoherence in quantum operations. By correlating noise errors with measurable ancilla states and using post-selection, the method reduces infidelity by 51% compared with traditional approaches and converts random errors into more manageable erasure errors for quantum error correction.

Key Contributions

  • Introduction of Flag-GRAPE algorithm that actively tailors noise structure using flag ancillas for improved quantum control
  • Demonstration of 51% infidelity reduction compared to traditional methods and conversion of decoherence into heralded erasure errors
  • Integration with quantum error correction showing enhanced logical state preparation for fault-tolerant quantum computing
optimal control flag qubits quantum error correction fault-tolerant quantum computing decoherence
View Full Abstract

High-fidelity quantum operations are the cornerstone of fault-tolerant quantum computation. In open quantum systems, traditional optimal control only passively resists decoherence, leaving environment-induced uncertainty as a fundamental performance bottleneck. To overcome this, we propose a new optimal control framework with flag ancillas and the Flag-GRAPE algorithm, which can actively tailor the system's noise structure. Through embedding post-selection directly into the objective function, Flag-GRAPE correlates decoherence errors with the ancilla's unexpected state. Subsequent measurement and post-selection effectively expel this uncertainty, circumventing the fidelity bounds of traditional control. Numerical simulations in a superconducting quantum circuit demonstrate a $51\%$ reduction in infidelity compared to traditional closed-system pulses and also show that such enhancement is robust across broad noise regimes. Furthermore, by actively converting unstructured decoherence into heralded erasure errors, Flag-GRAPE is inherently compatible with quantum error correction. We demonstrate this by initializing a logical cat-code state, showing that the combination between Flag-GRAPE and QEC yields immediate state preparation enhancements. This new framework can reduce hardware overhead for fault-tolerant architectures and open up a practical path toward logical state preparation gain in near-term experiments.

Measurement-Induced State Transitions in Inductively-Shunted Transmons

Nicholas Zobrist, John Mark Kreikebaum, Mostafa Khezri, Sergei V. Isakov, Brian J. Lester, Yaxing Zhang, Agustin Di Paolo, Daniel Sank, W. Clarke Smit...

2603.12114 • Mar 12, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies measurement-induced state transitions (MIST) in superconducting quantum bits, where fast qubit measurements cause unwanted energy transitions. The researchers add inductive shunts to transmon qubits to stabilize these problematic transitions and make them more predictable.

Key Contributions

  • Demonstration of inductive shunts to eliminate offset charge dependence in MIST
  • Experimental characterization and theoretical modeling of MIST in inductively-shunted transmons
superconducting qubits transmon quantum measurement error correction MIST
View Full Abstract

Fast and high-fidelity qubit measurement plays a key role in quantum error correction. In superconducting qubits, measurement is typically performed using a resonant microwave drive on a readout resonator dispersively coupled to the qubit. Shorter measurement times require larger numbers of photons populating the readout resonator, which ultimately leads to undesired measurement-induced state transitions (MIST) of the qubit. MIST can be particularly problematic because these transitions often leave the qubit in a high energy state, and the MIST locations in readout parameter space drift as a function of qubit offset charge. In transmon qubits, these drifts have been avoided using very large qubit-resonator detunings or dedicated offset charge biases. In this work, we take an alternative approach and add an inductive shunt to the transmon to eliminate the offset charge dependence and stabilize the MIST. We experimentally characterize MIST in several different inductively-shunted transmons, in agreement with quantum and semiclassical models for MIST. These results extend to other inductively-shunted qubits.

Climbing the Clifford Hierarchy

Luca Bastioni, Samuel Glandon, Tefjol Pllaha, Madison Stewart, Phillip Waitkevich

2603.12088 • Mar 12, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies the Clifford Hierarchy in quantum computation, specifically characterizing which Clifford gates have square roots that advance to the third level of the hierarchy. The work extends understanding of how gates can 'climb' between hierarchy levels through mathematical operations like taking square roots.

Key Contributions

  • Full characterization of Clifford gates whose square roots climb to the third level of the hierarchy
  • Extension of the theoretical framework for understanding gate relationships within the Clifford Hierarchy
Clifford Hierarchy fault-tolerant quantum computation magic state distillation Clifford gates quantum gates
View Full Abstract

The Clifford Hierarchy has been a central topic in quantum computation due to its strong connections with fault-tolerant quantum computation, magic state distillation, and more. Nevertheless, only sections of the hierarchy are fully understood, such as diagonal gates and third level gates. The diagonal part of the hierarchy can be climbed by taking square roots and adding controls. Similarly, square roots of Pauli gates (first level) are Clifford gates (climb to the second level). Based on this theme, we study gates whose square roots climb to the next level. In particular, we fully characterize Clifford gates whose square roots climb to the third level.
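
The 'climbing' statement can be checked numerically in the simplest single-qubit case (our sketch; the paper's characterization is far more general): S = √Z passes the Clifford test, while T = √S fails it but conjugates the Pauli generators into Clifford gates, which places it in the third level.

```python
# Numerical illustration (standard single-qubit example, not the paper's
# construction): S = sqrt(Z) is Clifford (second level), and its square
# root T climbs to the third level: T P T^dag is Clifford for every Pauli P.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
PAULIS = [np.eye(2, dtype=complex), X, Y, Z]

def is_pauli(U):
    """True iff U equals some Pauli matrix up to a global phase."""
    return any(abs(abs(np.trace(P.conj().T @ U)) - 2) < 1e-9 for P in PAULIS)

def is_clifford(U):
    """True iff U conjugates the Pauli generators X, Z into Paulis."""
    return all(is_pauli(U @ P @ U.conj().T) for P in (X, Z))

S = np.diag([1, 1j])                      # sqrt(Z)
T = np.diag([1, np.exp(1j * np.pi / 4)])  # sqrt(S)

level3 = all(is_clifford(T @ P @ T.conj().T) for P in (X, Z))
print(is_clifford(S), is_clifford(T), level3)  # True False True
```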

Noise Correlations as a Resource in Pauli-Twirled Circuits

Antoine Brillant, Rohan N Rajmohan, Peter Groszkowski, Alireza Seif, Jens Koch, Aashish Clerk

2603.12054 • Mar 12, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper studies how randomized compiling transforms correlated quantum noise into simpler Pauli errors in quantum circuits. The researchers show that noise correlations actually improve circuit performance and that randomized compiling reduces the strength and duration of these correlations, making circuits more robust to memory effects.

Key Contributions

  • Analytical proof that correlated Gaussian noise under randomized compiling reduces correlation strength and temporal range
  • Discovery that noise correlations increase circuit fidelity in randomly compiled circuits, making correlations a resource
  • Demonstration that randomized compiling suppresses quantum bath correlations, allowing classical noise treatment for weak coupling
randomized compiling noise correlations Pauli errors circuit fidelity quantum error mitigation
View Full Abstract

Randomized compiling (RC) is an established tool to tailor arbitrary quantum noise channels into Pauli errors. The effect of both spatial and temporal noise correlations in randomly compiled circuits, however, is not fully understood. Here, we show that for a broad class of correlated Gaussian noise, RC reduces both the strength and temporal range of correlations. For Clifford circuits, we derive a simple analytical expression for the circuit fidelity of randomly compiled circuits. Surprisingly, we show that this fidelity is always increased by the presence of correlations, suggesting that correlations are a resource in randomly compiled circuits. To leading order in system-bath coupling, we also show that RC suppresses the quantum component of bath correlations, implying that one can safely treat weak noise as being classical. Finally, through extensive numerical simulations, we show that our results remain valid for many relevant non-Clifford circuits. These results clarify how RC mitigates memory effects and enhances circuit robustness.
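
The tailoring that RC performs can be seen in a minimal single-qubit example (a standard textbook construction, not the paper's correlated-noise analysis): averaging a channel over conjugation by the Pauli group keeps only the diagonal of its Pauli transfer matrix, i.e. the twirled channel is a stochastic Pauli error channel.

```python
# Pauli-twirling sketch (generic construction, not the paper's model):
# twirl an amplitude-damping channel, E_T(rho) = (1/4) sum_P P^dag E(P rho P^dag) P,
# and check that only the diagonal of the Pauli transfer matrix survives.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
PAULIS = [np.eye(2, dtype=complex), X, Y, Z]

gamma = 0.3  # amplitude-damping strength (arbitrary illustrative value)
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

def channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

def ptm(apply_map):
    """Pauli transfer matrix R_ij = Tr[P_i apply_map(P_j)] / 2."""
    return np.array([[np.trace(Pi @ apply_map(Pj)).real / 2
                      for Pj in PAULIS] for Pi in PAULIS])

raw = ptm(lambda r: channel(r, [K0, K1]))
twirled = ptm(lambda r: sum(P.conj().T @ channel(P @ r @ P.conj().T, [K0, K1]) @ P
                            for P in PAULIS) / 4)

print(np.round(raw, 3))      # has off-diagonal entries (non-Pauli part)
print(np.round(twirled, 3))  # diagonal only: a stochastic Pauli channel
```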

Probing the memory of a superconducting qubit environment

Nicolas Gosling, Denis Bénâtre, Nicolas Zapata, Paul Kugler, Mitchell Field, Sumeru Hazra, Simon Günzler, Thomas Reisinger, Martin Spiecker, Mathie...

2603.11889 • Mar 12, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper investigates how superconducting qubits interact with their environment, specifically identifying long-lived two-level systems that retain memory of past qubit states and can disrupt fault-tolerant quantum computing. The researchers develop methods to distinguish these problematic memory effects from standard environmental noise by analyzing quantum jump patterns.

Key Contributions

  • Development of method to distinguish non-Markovian environmental memory from standard Markovian noise in superconducting qubits
  • Demonstration that non-Poissonian quantum jump traces can identify long-lived two-level systems that threaten fault-tolerant operation
superconducting qubits fault tolerance non-Markovian dynamics two-level systems quantum jumps
View Full Abstract

Achieving fault tolerance with superconducting quantum processors requires qubits to operate within the regime of threshold theorems based on the Born-Markov approximation. This approximation, which models dissipation as constant energy decay into a memoryless environment, breaks down when qubits couple to long-lived two-level systems (TLSs) that become polarized during operation and retain memory of past qubit states. Here, we show that non-Poissonian quantum jump traces carry the information required to distinguish long-lived TLSs from the standard Markovian bath. By fitting the Solomon equations to measured quantum jump dynamics arising naturally due to thermal fluctuations, we can disentangle the coupling of the qubit to the two environments. Sweeping the qubit frequency reveals distinct peaks, each associated with a TLS that outlives the qubit, providing a handle to understand their microscopic origin.
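
The non-Poissonian signature the authors exploit can be illustrated generically (our sketch, unrelated to the Solomon-equation fit): a memoryless bath yields exponential waiting times between jumps with coefficient of variation (CV) equal to 1, while a slow TLS that toggles the jump rate produces a hyperexponential mixture with CV > 1.

```python
# Generic statistics sketch (illustration, not the paper's method): waiting
# times of a Poisson jump process have CV = std/mean = 1; a bath whose rate
# switches between two values (e.g. a slow TLS) gives a mixture with CV > 1.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

poisson_waits = rng.exponential(1.0, n)         # single fixed jump rate
rates = rng.choice([0.2, 5.0], size=n)          # assumed TLS-toggled rates
tls_waits = rng.exponential(1.0 / rates)        # hyperexponential mixture

cv = lambda w: w.std() / w.mean()
print(f"CV Poisson: {cv(poisson_waits):.2f}, CV with TLS: {cv(tls_waits):.2f}")
```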

Demonstration of High-Fidelity Gates in a Strongly Anharmonic C-Shunt Flux Qubit with Long Coherence

Silu Zhao, Li Li, Weiping Yuan, Xinhui Ruan, Jinzhe Wang, Bingjie Chen, Yunhao Shi, Guihan Liang, Shi Xiao, Jiacheng Song, Jinming Guo, Xiaohui Song, ...

2603.11692 • Mar 12, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates high-fidelity quantum gates on a C-shunt flux qubit that achieves both large anharmonicity and long coherence times. The researchers used advanced pulse techniques to achieve gate fidelities exceeding 99.9%, showing this qubit design could be promising for building large-scale quantum computers.

Key Contributions

  • Demonstration of 99.9% gate fidelity on C-shunt flux qubits with large anharmonicity and long coherence
  • Establishing C-shunt flux qubits as a promising platform for scalable quantum computing
flux qubit gate fidelity anharmonicity DRAG pulses randomized benchmarking
View Full Abstract

We demonstrate high-fidelity single-qubit gates on a C-shunt flux qubit that simultaneously combines a large anharmonicity ($\mathcal{A}/2\pi = 848~\mathrm{MHz}$) with a long relaxation time ($T_1 = 23~\mu\mathrm{s}$). The large anharmonicity significantly suppresses leakage to higher energy levels, enabling fast and precise microwave control. Using DRAG pulses and randomized benchmarking, the qubit achieves gate fidelities exceeding 99.9%, highlighting the capability of C-shunt flux qubits for robust and high-performance quantum operations. These results establish them as a promising platform for scalable quantum information processing.
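The 99.9% figure comes from randomized benchmarking. The standard single-qubit RB analysis (a generic sketch with synthetic data, not the authors' exact pipeline) fits the survival probability to A * p^m + B and converts the depolarizing parameter p to an average gate error r = (1 - p)/2:

```python
import numpy as np

# Synthetic RB data: survival probability after m random Cliffords
# decays as A * p**m + B. Depolarizing parameter p encodes fidelity.
p_true, A, B = 0.998, 0.5, 0.5
m = np.arange(0, 2000, 50)
survival = A * p_true**m + B

# Fit the decay on a log scale (noise-free toy data, so a linear
# fit suffices; real data would use nonlinear least squares).
p_fit = np.exp(np.polyfit(m, np.log(survival - B), 1)[0])

# Average Clifford error for a single qubit (dimension d = 2):
# r = (1 - p) * (d - 1) / d = (1 - p) / 2.
r = (1 - p_fit) / 2
print(f"fitted p = {p_fit:.5f}, avg gate fidelity = {1 - r:.5f}")
```

With experimental data the asymptote B would itself be a fit parameter rather than assumed known.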

Quantum Error Correction by Purification

Jonathan Raghoonanan, Tim Byrnes

2603.11568 • Mar 12, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper introduces a new quantum error correction method called purification quantum error correction (PQEC) that uses multiple noisy copies of quantum states and the SWAP test to reduce errors without requiring knowledge of the original state. The method achieves high error thresholds of 75% for depolarizing noise and 50% for dephasing noise.

Key Contributions

  • Novel purification-based quantum error correction scheme using SWAP test
  • Demonstration of high error thresholds (75% for depolarizing channel, 50% for dephasing)
  • General-purpose method requiring no prior state knowledge or postselection
quantum error correction state purification SWAP test fault tolerance depolarizing channel
View Full Abstract

We present a general-purpose quantum error correction primitive based on state purification via the SWAP test, which we refer to as purification quantum error correction (PQEC). This method operates on $N$ noisy copies and requires a minimum of $O(M\log_2 N)$ data qubits to process the $M$-qubit inputs. As in standard QEC, the purification steps may be interleaved within a quantum algorithm to suppress the logical error rate. No postselection is performed and no knowledge of the state is required. We analyze its performance under a variety of error channels and find that PQEC is highly effective at boosting fidelity and reducing logical error rates, particularly for the depolarizing channel. Error thresholds for the local depolarizing channel are found to be $75\%$ for any register size. For local dephasing, the error threshold is reduced to $50\%$ but may be boosted using twirling.
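The SWAP test at the core of PQEC measures the overlap of two states: the ancilla reads 0 ("pass") with probability (1 + Tr(rho sigma))/2. A minimal density-matrix sketch of that statistic, using illustrative states rather than the paper's protocol:

```python
import numpy as np

def swap_test_pass_prob(rho, sigma):
    """Probability the SWAP test's ancilla reads 0:
    P(pass) = (1 + Tr(rho @ sigma)) / 2."""
    return 0.5 * (1.0 + np.trace(rho @ sigma).real)

# Ideal state |0><0| and a depolarized copy of it.
ket0 = np.array([[1.0], [0.0]])
rho = ket0 @ ket0.T

def depolarize(rho, p):
    """Mix a state with the maximally mixed state with weight p."""
    return (1 - p) * rho + p * np.eye(2) / 2

print(swap_test_pass_prob(rho, rho))                   # 1.0 for identical pure states
print(swap_test_pass_prob(rho, depolarize(rho, 0.4)))  # < 1 for a noisy copy
```

PQEC builds on this primitive by arranging many such comparisons across the $N$ copies; this snippet only shows why the pass probability carries fidelity information.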

Mitigating crosstalk errors for simultaneous single-qubit gates on a superconducting quantum processor

Jaap J. Wesdorp, Eric Hyyppä, Joona Andersson, Janos Adam, Rohit Beriwal, Ville Bergholm, Saga Dahl, Simone Diego Fasciati, Alejandro Gomez Friero, Z...

2603.11018 • Mar 11, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper addresses crosstalk errors that occur when multiple qubits are controlled simultaneously on superconducting quantum processors, developing techniques to optimize qubit frequencies and shape control pulses so as to minimize interference between neighboring qubits. The researchers achieved 99.96% fidelity for simultaneous single-qubit gates on a 49-qubit processor and showed, via simulations, scalability to systems with up to 1000 qubits.

Key Contributions

  • Analytical model for simultaneous single-qubit gate errors caused by microwave crosstalk
  • Model-based optimization strategy for qubit frequencies to minimize crosstalk errors
  • Crosstalk transition suppression (CTS) pulse shaping technique
  • Demonstration of scalability to 1000-qubit systems through simulations
superconducting qubits crosstalk mitigation gate fidelity pulse shaping frequency optimization
View Full Abstract

Single-qubit gates on superconducting quantum processors are typically implemented using microwave pulses applied through dedicated control lines. However, these microwave pulses may also drive other qubits due to crosstalk arising from capacitive coupling and wavefunction overlap in systems with closely spaced transition frequencies. Crosstalk and frequency crowding increase errors during simultaneous single-qubit operations relative to isolated gates, thus forming a major bottleneck for scaling superconducting quantum processors. In this work, we combine model-based qubit frequency optimization with pulse shaping to demonstrate crosstalk error mitigation in single-qubit gates on a 49-qubit superconducting quantum processor. We introduce and experimentally verify an analytical model of simultaneous single-qubit gate error caused by microwave crosstalk that depends on a given pulse shape. By employing a model-based optimization strategy of qubit frequencies, we minimize the crosstalk-induced error across the processor and achieve a mean simultaneous single-qubit gate fidelity of 99.96% for a 16-ns gate duration, approaching the mean individual gate fidelity. To further reduce the simultaneous error and required qubit frequency bandwidth on high-crosstalk qubit pairs, we introduce a crosstalk transition suppression (CTS) pulse shaping technique that minimizes the spectral energy around transitions inducing leakage and crosstalk errors. Finally, we combine CTS with model-based frequency optimization across the device and experimentally show a systematic reduction in the required qubit frequency bandwidth for high-fidelity simultaneous gates, supported by simulations of systems with up to 1000 qubits. By alleviating constraints on qubit frequency bandwidth for parallel single-qubit operations, this work represents an important step for scaling towards larger quantum processors.
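The intuition behind suppressing spectral energy at unwanted transitions can be sketched with a plain Gaussian envelope (the paper's CTS pulses are more elaborate): the relative spectral weight a Gaussian pulse places at detuning delta falls off as exp(-(2*pi*delta)^2 * sigma^2 / 2), so longer envelopes drive detuned neighbors exponentially less. The detuning and widths below are illustrative numbers, not values from the paper.

```python
import numpy as np

def gaussian_spectral_weight(delta_mhz, sigma_ns):
    """Relative spectral amplitude of a Gaussian pulse envelope at
    detuning delta (MHz), for envelope width sigma (ns):
    |S(delta)| / |S(0)| = exp(-(2*pi*delta)**2 * sigma**2 / 2)."""
    delta = 2 * np.pi * delta_mhz * 1e6  # rad/s
    sigma = sigma_ns * 1e-9              # s
    return np.exp(-(delta * sigma) ** 2 / 2)

# Crosstalk drive on a neighbor detuned by 100 MHz, for several
# envelope widths (illustrative):
for sigma in (2, 4, 8):  # ns
    w = gaussian_spectral_weight(100, sigma)
    print(f"sigma = {sigma} ns -> relative drive {w:.2e}")
```

CTS goes further by shaping the pulse so its spectrum has notches exactly at the leakage and crosstalk transitions, rather than relying on the envelope's overall roll-off.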

Permutation-invariant codes: a numerical study and qudit constructions

Liam J. Bond, Jiří Minář, Māris Ozols, Arghavan Safavi-Naini, Vladyslav Visnevskyi

2603.10981 • Mar 11, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper studies permutation-invariant quantum error-correcting codes that can protect quantum information stored in qudits (quantum systems with more than two levels) from deletion errors. The researchers investigate how the required number of physical qudits scales with the desired error correction capability and find that using higher-dimensional qudits can reduce the overhead needed for error correction.

Key Contributions

  • Conjectured lower bound on block length for qubit PI codes correcting deletion errors with scaling n(d) ≥ (3d² + 1)/4
  • Demonstrated that increasing physical qudit dimension reduces block length requirements and approaches the quantum Singleton bound
  • Extended AAB construction from qubits to qudits using semi-analytic methods with linear programming
quantum error correction permutation-invariant codes qudits deletion errors Knill-Laflamme conditions
View Full Abstract

We investigate Permutation-Invariant (PI) quantum error-correcting codes encoding a logical qudit of dimension $\mathrm{d}_\mathrm{L}$ in PI states using physical qudits of dimension $\mathrm{d}_\mathrm{P}$. We extend the Knill-Laflamme (KL) conditions for $d-1$ deletion errors from qubits to qudits and investigate numerically both qubit ($\mathrm{d}_\mathrm{L} = \mathrm{d}_\mathrm{P} = 2$) and qudit ($\mathrm{d}_\mathrm{L} > 2$ or $\mathrm{d}_\mathrm{P} > 2$) PI codes. We analyze the scaling of the block length $n$ in terms of the code distance $d$, and compare to existing families of PI codes due to Ouyang, Aydin-Alekseyev-Barg (AAB) and Pollatsek-Ruskai (PR). Our three main findings are: (i) We conjecture that qubit PI codes correcting up to $d-1$ deletion errors have block length $n(d) \geq (3d^2 + 1) / 4$, which implies an upper bound $d \leq \sqrt{12n-3}/3$ on their code distance, and that PR codes can saturate this bound. (ii) For qudit PI codes encoding a single qudit we numerically observe that increasing $\mathrm{d}_\mathrm{P}$ results in $n$ monotonically decreasing and approaching the quantum Singleton bound $n(d) \geq 2d-1$. (iii) We propose a semi-analytic extension of the qubit AAB construction to qudits that finds explicit solutions by solving a linear program. Our results therefore provide key insights into lower bounds on the block length scaling of both qubit and qudit PI codes, and demonstrate the benefit of increased physical local dimension in the context of PI codes.
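The two block-length bounds quoted in the abstract are easy to tabulate side by side, which makes the overhead gap that higher qudit dimension closes concrete:

```python
import math

def qubit_pi_bound(d):
    """Conjectured minimal block length for qubit PI codes
    correcting d-1 deletions: n >= (3*d**2 + 1) / 4."""
    return math.ceil((3 * d * d + 1) / 4)

def singleton_bound(d):
    """Quantum Singleton bound: n >= 2*d - 1."""
    return 2 * d - 1

for d in range(2, 7):
    print(f"d={d}: qubit PI n >= {qubit_pi_bound(d)}, "
          f"Singleton n >= {singleton_bound(d)}")
```

The quadratic-vs-linear scaling is the point: qubit PI codes pay quadratically in block length, while the paper's qudit constructions approach the linear Singleton bound.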

Efficient and accurate two-qubit-gate operation in a high-connectivity transmon lattice utilizing a tunable coupling to a shared mode

Tuure Orell, Hao Hsu, Joona Andersson, Jani Tuorila, Frank Deppe, Hsiang-Sheng Ku

2603.10699 • Mar 11, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a new quantum computer architecture using a honeycomb lattice of superconducting qubits where each group of qubits connects through tunable couplers and a shared central element, enabling faster two-qubit gates and better connectivity. The researchers develop improved gate protocols and analyze how this design reduces errors while allowing more qubits to operate simultaneously.

Key Contributions

  • Novel honeycomb qubit lattice architecture with tunable multi-mode coupling for all-to-all connectivity
  • Efficient single-step conditional-Z gate protocol that improves gate speed compared to previous center-mode architectures
  • Analysis of spectator qubit effects and crosstalk mitigation in simultaneous gate operations
  • Analytical error estimates for relaxation and dephasing in multi-mode coupling structures
superconducting qubits transmon two-qubit gates quantum connectivity conditional-Z gate
View Full Abstract

Increasing connectivity and decreasing qubit-state delocalization without compromising the speed and accuracy of elementary gate operations are topical challenges in the development of large-scale superconducting quantum computers. In this theoretical work, we study a special honeycomb qubit lattice where each qubit inside a unit cell is coupled to every other one via two dedicated tunable couplers and a common central element. This results in an effective multi-mode interaction enabling tunable, on-demand, all-to-all connectivity between each qubit pair within the unit cell. We provide a thorough analysis of the unit cell, including a proposal for a novel and efficient conditional-Z gate scheme which takes advantage of the effective multi-mode coupling. We develop an experimentally viable pulse protocol for a single-step gate implementation which considerably improves the gate speed compared to the previous two-qubit-gate realizations suggested for architectures utilizing a center mode. We also show numerical results on how the presence of spectator qubits affects the average two-qubit-gate fidelity, and analyse how the multi-mode coupling structure mitigates the delocalization-induced crosstalk during simultaneous single-qubit gates within the unit cell. We also provide analytical estimates for the errors caused by relaxation and dephasing during a two-qubit-gate operation, including noise terms for the multi-mode coupling structure. Our multi-mode coupling architecture results in a good balance between increased connectivity and available parallelism, especially when several interacting unit cells form a quantum processing unit. We anticipate that the obtained results pave the way towards high-connectivity quantum processors with efficient and low-overhead quantum algorithms.

Reducing Quantum Error Mitigation Bias Using Verifiable Benchmark Circuits

Joseph Harris, Kevin Lively, Peter Schuhmacher

2603.10224 • Mar 10, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents new methods to reduce bias in quantum error mitigation techniques by using specially designed benchmark circuits that match the noise characteristics of the target quantum computation. The authors demonstrate up to 15% fidelity improvements on 100-qubit circuits compared to standard error mitigation approaches.

Key Contributions

  • Development of verifiable benchmark circuits that mirror application circuit noise profiles for bias mitigation
  • Introduction of benchmarked-noise zero-noise extrapolation (bnZNE) as an improved error mitigation method
  • Demonstration of 15% fidelity improvements on utility-scale 100-qubit circuits with up to 2000 entangling gates
quantum error mitigation zero-noise extrapolation benchmark circuits quantum fidelity NISQ
View Full Abstract

We present a simple, malleable and low-overhead approach for improving generic biased quantum error mitigation (QEM) methods, achieving up to 15% fidelity improvements over standard QEM on 100-qubit circuits with up to 2000 entangling gates. We do so by constructing verifiable benchmark circuits which mirror the application circuit's native-gate structure and thus noise profile. These circuits can be used to benchmark and mitigate the bias of the underlying error mitigation method, requiring only the application circuit and hardware native gate set. We present two methods for generating benchmark circuits; one is agnostic to the target hardware at the expense of a small overhead of single-qubit gates, while the other is specific to the IBM superconducting hardware and has no gate overhead. As a corollary, we introduce benchmarked-noise zero-noise extrapolation (bnZNE) as a simple adaptation of zero-noise extrapolation (ZNE), one of the most popular error mitigation methods. We consider as an example the bias-mitigated ZNE and bnZNE of Trotterized Hamiltonian simulations, observing that our approaches outperform standard ZNE using both small-scale classical simulations and 100-qubit utility-scale experiments on the IBM superconducting hardware. We consider the measurement of both single-site observables as well as two-site correlations along a one-dimensional qubit chain. We also provide a software package for implementing the error mitigation techniques used in this research.
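For readers unfamiliar with the baseline method, generic zero-noise extrapolation (not the paper's bnZNE variant) measures an observable at amplified noise levels and extrapolates back to zero noise. The exponential decay model below is an assumption for illustration only:

```python
import numpy as np

def zne_extrapolate(scale_factors, values, degree=2):
    """Fit <O>(lambda) with a polynomial and evaluate at lambda = 0."""
    coeffs = np.polyfit(scale_factors, values, degree)
    return np.polyval(coeffs, 0.0)

# Toy noise model: the measured expectation decays with the noise
# scale factor lambda (lambda = 1 is the unamplified circuit).
ideal = 1.0
lam = np.array([1.0, 1.5, 2.0, 3.0])
measured = ideal * np.exp(-0.2 * lam)  # what the device reports

estimate = zne_extrapolate(lam, measured)
print(f"raw (lambda=1): {measured[0]:.3f}, ZNE estimate: {estimate:.3f}")
```

The residual gap between the extrapolated estimate and the ideal value is exactly the bias the paper's benchmark circuits are designed to measure and remove.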

Crosstalk in Multi-Qubit Fluxonium Architectures with Transmon Couplers

Martijn F. S. Zwanenburg, Christian Kraglund Andersen

2603.09870 • Mar 10, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies the scalability of quantum computing architectures that use transmon qubits as couplers between fluxonium qubits, finding that spectator qubit crosstalk limits gate fidelity but can be mitigated through reduced coupling strength and dynamic tuning. The work demonstrates methods to reduce spectator errors to below 10^-4 while maintaining high-fidelity two-qubit operations.

Key Contributions

  • Analysis of scalability limitations in fluxonium-transmon hybrid quantum architectures due to spectator qubit crosstalk
  • Development of mitigation strategies including coupling strength reduction and dynamic transmon tuning to achieve spectator errors below 10^-4
fluxonium transmon superconducting qubits crosstalk two-qubit gates
View Full Abstract

In recent years, several architectures have been proposed for implementing two-qubit operations on fluxonium superconducting qubits. A particularly promising approach, which was demonstrated experimentally by Refs. [1,2], employs a transmon superconducting qubit as a tunable coupler between the fluxonium qubits. These experiments have shown that the transmon coupler enables fast, high-fidelity two-qubit operations while suppressing unwanted ZZ crosstalk between the fluxonium qubits. In this work, we numerically study the scalability of this architecture. We find that, when trivially scaling this architecture, crosstalk from spectator qubits limits the gate fidelity to below 90%. We show that these spectator errors can be reduced to below $10^{-4}$ by reducing the coupling strength and by dynamically tuning transmons that are not used for a two-qubit operation to an off position. We further investigate the resilience of the operation to direct capacitive coupling between the transmon couplers and to microwave crosstalk.

Fictitious Copy Quantum Error Mitigation

Akib Karim, Harish J. Vallury, Muhammad Usman

2603.09302 • Mar 10, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a new quantum error mitigation technique called Fictitious Copy Quantum Error Mitigation (FCQEM) that corrects quantum computing errors using only classical post-processing without requiring additional quantum resources. The method works by analyzing joint probability distributions from quantum circuit measurements and was demonstrated to successfully recover ground state energies in molecular and spin models.

Key Contributions

  • Novel quantum error mitigation method requiring no additional quantum resources
  • Classical post-processing technique that corrects expectation values using joint probability distributions
  • Demonstration of compatibility with existing QEM methods like Quantum Computed Moments
  • Experimental validation on 84-qubit superconducting quantum processor
quantum error mitigation error correction quantum algorithms noisy quantum circuits classical post-processing
View Full Abstract

Errors are arguably the most pressing challenge impeding practical applications of quantum computers, which has instigated vigorous research on the development of quantum error mitigation (QEM) techniques. Existing QEM methods suppress errors with a varying degree of efficacy but importantly demand significant additional quantum and classical computational resources. In this work, we present Fictitious Copy Quantum Error Mitigation (FCQEM) method which corrects quantum errors without requiring any additional quantum resources and purely relies on using classical postprocessing of a joint probability distribution to correct expectation values. The joint probability distribution can be measured "fictitiously" by sampling one copy of noisy quantum circuit twice, or classically squaring probabilities from simply one copy. We show that FCQEM can recover eigenvalues even if exact eigenstates are not prepared. Furthermore, our technique can benefit other noise mitigation techniques with no additional quantum resources, which is demonstrated by combining FCQEM with the Quantum Computed Moments (QCM) method. FCQEM can compensate for noise that is pathological to QCM, and QCM allows for FCQEM to recover the ground state energy with a larger variety of trial states. We show that our technique can find the exact ground state energy of molecular and spin models under simulated noise models as well as experiments on a Rigetti 84-qubit superconducting quantum processor. The reported FCQEM method is general purpose for the current generation of quantum devices and is applicable to any problem that measures eigenvalues of operators on sharply peaked distributions.
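The "classically squaring probabilities" step can be illustrated schematically: squaring and renormalizing a measurement distribution sharpens its dominant peak relative to a flat noise background, which is why the method targets sharply peaked distributions. The toy distribution below is an assumption for illustration, not the paper's molecular data:

```python
import numpy as np

def square_and_renormalize(p):
    """Square a probability distribution and renormalize; this
    sharpens peaks and suppresses a flat noise background."""
    q = p ** 2
    return q / q.sum()

# Toy readout over 3-bit outcomes: a sharp peak at '000' plus a
# uniform, depolarizing-style noise background.
d = 8
peak = np.zeros(d)
peak[0] = 1.0
noise_strength = 0.3
p = (1 - noise_strength) * peak + noise_strength / d

q = square_and_renormalize(p)
print(f"peak weight before: {p[0]:.3f}, after squaring: {q[0]:.3f}")
```

FCQEM then uses such sharpened (joint) distributions to correct expectation values; the snippet only shows the peak-amplification effect of squaring.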

Reconfigurable Superconducting Quantum Circuits Enabled by Micro-Scale Liquid-Metal Interconnects

Zhancheng Yao, Nicholas E. Fuhr, Nicholas Russo, David W. Abraham, Kevin E. Smith, David J. Bishop

2603.09096 • Mar 10, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper demonstrates liquid-metal interconnects for superconducting quantum circuits that allow modular quantum processors to be reconfigured by replacing components without destroying the system. The researchers show these gallium-based connections maintain high microwave performance and can survive thermal cycling between room temperature and millikelvin temperatures.

Key Contributions

  • Demonstration of chip-scale liquid-metal interconnects for superconducting quantum circuits with performance comparable to conventional waveguides
  • Proof of concept for plug-and-play modular quantum processor architecture that enables non-destructive component replacement
  • Characterization of power-dependent loss mechanisms and kinetic inductance effects in liquid-metal quantum interconnects
superconducting quantum circuits modular quantum processors liquid-metal interconnects scalable quantum computing microwave performance
View Full Abstract

Modular architectures are a promising route toward scalable superconducting quantum processors, but finite fabrication yield and the lack of high quality temporary interconnects impose fundamental limitations on system size. Here, we demonstrate chip-scale liquid-metal interconnects that show promise for plug-and-play superconducting quantum circuits by enabling non-destructive module replacement while maintaining high microwave performance. Using gallium-based liquid metals, we realize high-quality inter-module signal and ground interconnects, comparable in performance to conventional coplanar waveguide resonators. We illustrate consistent device characteristics across three thermal cycles between room temperature and 15 mK, as well as the ability to reform superconducting connections following module replacement. A width-dependent resonance frequency shift reveals a significant kinetic inductance fraction, which we attribute to the presence of $\beta$-phase tantalum as confirmed by X-ray characterization. Finally, we investigate power-dependent loss mechanisms and observe high-power dissipative nonlinearities qualitatively consistent with a readout-power heating model. These results establish liquid metals as viable chip-scale interconnects for reconfigurable, modular superconducting quantum systems.

Coupled-Layer Construction of Quantum Product Codes

Shuyu Zhang, Tzu-Chieh Wei, Nathanan Tantivasadakarn

2603.08711 • Mar 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents a new physical framework for constructing quantum product codes, an important class of quantum error-correcting codes, by showing how they can be built as coupled layers: a stack of one code with excitations condensed in a pattern given by the checks of the other. The work provides an intuitive physical mechanism for creating these codes that was previously unclear despite their mathematical formulation.

Key Contributions

  • Developed coupled-layer construction framework for tensor and balanced product codes providing intuitive physical assembly mechanism
  • Unified known physical mechanisms for constructing higher dimensional topological phases via anyon condensation and extended to non-topological codes
quantum error correction product codes qLDPC codes topological codes anyon condensation
View Full Abstract

Product codes are a class of quantum error correcting codes built from two or more constituent codes. They have recently gained prominence for a breakthrough yielding quantum low-density parity-check (qLDPC) codes with favorable scaling of both code distance and encoding rate. However, despite its powerful algebraic formulation, the physical mechanism for assembling a general product code from its constituents remains unclear. In this letter, we show that the tensor and balanced product codes admit an intuitive coupled-layer construction by taking a stack of one code and condensing a set of excitations in the pattern given by the checks of the other code. Our framework accommodates both classical or quantum CSS input codes, unifies known physical mechanisms for constructing higher dimensional topological phases via anyon condensation, and naturally extends to non-topological codes.

Scalable Postselection of Quantum Resources

J. Wilson Staples, Winston Fu, Jeff D. Thompson

2603.08697 • Mar 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a technique called scalable postselection to reduce quantum error correction overhead by selectively choosing better-performing quantum resource states based on decoder information. The method achieves a 4x reduction in overhead per logical gate while maintaining the same error probability, potentially making quantum computers more practical.

Key Contributions

  • Introduction of scalable postselection technique that reduces quantum error correction overhead by 4x
  • Development of the partial gap metric to predict resource state quality after consumption
  • Demonstration of scalable improvements in logical error rates through postselection of sub-circuits
quantum error correction postselection fault tolerance resource states cluster states
View Full Abstract

The large overhead imposed by quantum error correction is a critical challenge to the realization of quantum computers, and motivates searching for alternative error correcting codes and fault-tolerant circuit constructions. Postselection is a powerful tool that builds large programs out of probabilistically generated sub-circuits, and has been shown to increase the threshold of quantum error correction based on fusing fixed-size resource states or concatenated codes. In this work, we present an approach to lower the overhead of quantum computing using scalable postselection, based on directly postselecting sub-circuits with a size extensive in the code distance using decoder soft information. We introduce a metric, the partial gap, that estimates what the logical gap of a resource state will be after it is consumed, and show that postselection based on the partial gap leads to scalable improvements in the logical error rate. In the specific context of implementing logical gates via teleportation through a cluster state, we demonstrate that scalable postselection provides a $4\times$ reduction in the overhead per logical gate, at the same logical error probability.

Construction of a Family of Quantum Codes Using Sub-exceding Functions via the Hypergraph Product and the Generalized Shor Construction

Luc Rabefihavanana, Harinaivo Andriatahiny, Randriamiarampanahy Ferdinand

2603.08213 • Mar 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a new family of quantum error-correcting codes by combining classical linear codes with mathematical constructions called hypergraph products and generalized Shor codes. The resulting quantum codes have good error-correction properties and efficient structures that could help build more reliable quantum computers.

Key Contributions

  • Introduction of new quantum LDPC codes with parameters [[6k^2, k^2, d]] derived from sub-exceding functions
  • Combination of hypergraph product framework with generalized Shor construction for scalable quantum code design
quantum error correction LDPC codes stabilizer codes hypergraph product Shor construction
View Full Abstract

In this paper, we introduce a new family of stabilizer quantum LDPC codes derived from the classical linear codes $L_k$ and $L_k^{+}$, defined via sub-exceding functions. In previous work, these codes demonstrated strong performance in minimum distance, decoding efficiency, and structural simplicity. By combining the hypergraph product framework with a generalized Shor construction, we obtain a scalable class of quantum codes with parameters $[[6k^2,\, k^2,\, d]]$. The resulting quantum codes exhibit a rich combinatorial structure and promising properties, particularly in terms of locality, low-density parity-check (LDPC) structure, and asymptotic behavior. The minimum distance satisfies $d=3$ for $k=3$ and $d=4$ for $k\ge4$, establishing a new framework for structured quantum LDPC code design and optimization.
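The quoted family's parameters and encoding rate are straightforward to tabulate from the abstract (block length n = 6k^2, k^2 logical qudits, d = 3 for k = 3 and d = 4 for k >= 4):

```python
def code_params(k):
    """Parameters [[n, kk, d]] of the quoted family: n = 6*k**2
    physical qudits, kk = k**2 logical qudits, and minimum
    distance d = 3 for k = 3, d = 4 for k >= 4 (per the abstract)."""
    n, kk = 6 * k * k, k * k
    d = 3 if k == 3 else 4
    return n, kk, d

for k in range(3, 7):
    n, kk, d = code_params(k)
    print(f"k={k}: [[{n}, {kk}, {d}]], rate = {kk / n:.3f}")
```

The constant 1/6 encoding rate at every k is what makes the family scalable, even though the distance saturates at 4.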

Lattice: A Post-Quantum Settlement Layer

David Alejandro Trejo Pizzo

2603.07947 • Mar 9, 2026

CRQC/Y2Q RELEVANT QC: low Sensing: none Network: none

This paper presents Lattice, a cryptocurrency designed to be resistant to quantum computer attacks through post-quantum cryptographic signatures, CPU-only mining, and adaptive difficulty adjustment mechanisms.

Key Contributions

  • Implementation of ML-DSA-44 post-quantum digital signatures from genesis block
  • Multi-layered defense against quantum threats through hardware, network, and cryptographic resilience
post-quantum cryptography lattice-based signatures ML-DSA-44 quantum-resistant NIST FIPS 204
View Full Abstract

We present Lattice (L, ticker: LAT), a peer-to-peer electronic cash system designed as a post-quantum settlement layer for the era of quantum computing. Lattice combines three independent defense vectors: hardware resilience through RandomX CPU-only proof-of-work, network resilience through LWMA-1 per-block difficulty adjustment (mitigating the Flash Hash Rate vulnerability that affects fixed-interval retarget protocols), and cryptographic resilience through ML-DSA-44 post-quantum digital signatures (NIST FIPS 204, lattice-based), enforced exclusively from the genesis block with no classical signature fallback. The protocol uses a brief warm-up period of 5,670 fast blocks (53-second target, 25 LAT reduced reward) for network bootstrap, then transitions permanently to 240-second blocks, following a 295,000-block halving schedule with a perpetual tail emission floor of 0.15 LAT per block. Block weight capacity grows in stages (11M to 28M to 56M) as the network matures. The smallest unit of LAT is the shor, named after Peter Shor, where 1 LAT = 10^8 shors.
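The quoted monetary parameters are easy to sanity-check with the halving and tail-emission arithmetic. Note the abstract does not state the post-warm-up starting subsidy, so the 50 LAT figure below is purely a placeholder assumption:

```python
SHORS_PER_LAT = 10**8        # 1 LAT = 1e8 shors (named after Peter Shor)
HALVING_INTERVAL = 295_000   # blocks per halving epoch
TAIL_EMISSION = 0.15         # perpetual floor, LAT per block

def block_reward(height, initial_reward):
    """Reward at a given post-warm-up block height. initial_reward
    is a placeholder: the abstract does not state the starting subsidy."""
    epoch = height // HALVING_INTERVAL
    return max(initial_reward / 2**epoch, TAIL_EMISSION)

def to_shors(lat):
    """Convert LAT to the protocol's smallest unit."""
    return round(lat * SHORS_PER_LAT)

# With a hypothetical 50 LAT starting subsidy:
for h in (0, 295_000, 2_950_000):
    r = block_reward(h, 50.0)
    print(f"height {h}: {r} LAT = {to_shors(r)} shors")
```

Whatever the actual starting subsidy, the tail floor guarantees every sufficiently late epoch pays 0.15 LAT per block.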

A Scalable Distributed Quantum Optimization Framework via Factor Graph Paradigm

Yuwen Huang, Xiaojun Lin, Bin Luo, John C. S. Lui

2603.07673 • Mar 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper presents a new framework for distributed quantum computing that breaks down optimization problems using factor graphs to run on multiple small quantum processors connected by entanglement. The approach maintains the quadratic speedup of Grover's algorithm while reducing the number of qubits needed per processor.

Key Contributions

  • Structure-aware distributed quantum optimization framework using factor graph decomposition
  • Proof that Grover-like O(√N) scaling is preserved across distributed processors
  • Hierarchical divide-and-conquer strategy with both fault-tolerant and near-term operating modes
distributed quantum computing quantum optimization Grover algorithm factor graphs quantum entanglement
View Full Abstract

Distributed quantum computing (DQC) connects many small quantum processors into a single logical machine, offering a practical route to scalable quantum computation. However, most existing DQC paradigms are structure-agnostic. Circuit cutting proposed by Peng et al. in [Phys. Rev. Lett., Oct. 2020] reduces per-device qubits at the cost of exponential classical post-processing, while search-space partitioning proposed by Avron et al. in [Phys. Rev. A., Nov. 2021] distributes the workload but weakens Grover's ideal quadratic speedup. In this paper, we introduce a structure-aware framework for distributed quantum optimization that resolves this complexity-resource trade-off. We model the objective function as a factor graph and expose its sparse interaction structure. We cut the graph along its natural "seams", i.e., a separator of boundary variables, to obtain loosely coupled subproblems that fit on resource-constrained processors. We coordinate these subproblems with shared entanglement, so the network executes a single globally coherent search rather than independent local searches. We prove that this design preserves Grover-like scaling: for a search space of size $N$, our framework achieves $O(\sqrt{N})$ query complexity up to processor- and separator-dependent factors, while relaxing the qubit requirement of each processor. We extend the framework with a hierarchical divide-and-conquer strategy that scales to large-scale optimization problems and supports two operating modes: a fully coherent mode for fault-tolerant networks and a hybrid mode that inserts measurements to cap circuit depth on near-term devices. We validate the predicted query-entanglement trade-offs through simulations over diverse network topologies, and we show that structure-aware decomposition delivers a practical path to scalable distributed quantum optimization on quantum networks.
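The quadratic speedup being preserved is concrete: for a single marked item in a search space of size N, Grover's algorithm needs about (pi/4) * sqrt(N) oracle queries, versus roughly N/2 expected classically. A quick comparison:

```python
import math

def grover_queries(n_items):
    """Optimal number of Grover iterations for one marked item:
    about (pi / 4) * sqrt(N)."""
    return math.ceil(math.pi / 4 * math.sqrt(n_items))

for bits in (20, 30, 40):
    n = 2 ** bits
    print(f"N = 2^{bits}: Grover ~ {grover_queries(n):,} queries "
          f"vs classical ~ {n // 2:,}")
```

The paper's contribution is that this count survives distribution across processors, up to the processor- and separator-dependent factors noted in the abstract.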

Remote Entanglement in Lattice Surgery: To Distill, or Not to Distill

Sitong Liu, John Stack, Ke Sun, Roel Van Beeumen, Inder Monga, Katherine Klymko, Kenneth R. Brown, Erhan Saglamyurek

2603.06513 • Mar 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper analyzes whether to use error-corrected or raw entangled photons for connecting distributed quantum computers, finding that the optimal choice depends on entanglement quality and can reduce resource requirements by up to 100x in some cases.

Key Contributions

  • Identified fidelity crossover point determining optimal strategy between distillation vs higher surface code distance
  • Demonstrated up to two orders of magnitude resource reduction through proper strategy selection
  • Provided co-design principles for photonic interconnects in fault-tolerant distributed quantum computers
distributed quantum computing lattice surgery surface code entanglement distillation fault tolerance

Distributed quantum computing can potentially address the scalability challenge by networking processors through photon-mediated remote entanglement. Prior approaches assumed that remote Bell pairs require distillation to achieve sufficiently high fidelity before use, resulting in substantial overhead. However, recent results show that lattice-surgery operations at logical qubit boundaries tolerate significantly higher error rates than previously assumed. We quantify the resource trade-offs between distillation overhead and surface-code distance requirements under realistic constraints including probabilistic entanglement generation and memory decoherence. We identify the fidelity crossover point separating the two regimes and show that choosing the right strategy can reduce resource overhead by up to two orders of magnitude at low fidelities and up to 68% at high fidelities. We briefly describe the application of these methods to ion-trap and neutral-atom platforms. These results provide co-design principles for optimizing photonic interconnects and fault-tolerant architectures in distributed quantum computers.

Quantum Hamlets: Distributed Compilation of Large Algorithmic Graph States

Anthony Micciche, Naphan Benchasattabuse, Andrew McGregor, Michal Hajdušek, Rodney Van Meter, Stefan Krastanov

2603.06387 • Mar 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper develops a new algorithm called BURY for efficiently distributing the creation of quantum graph states across multiple quantum processors, reducing the number of entangled Bell pairs needed for distributed quantum computation. The work focuses on optimizing how to partition large quantum computational resources to minimize communication overhead between quantum devices.

Key Contributions

  • Development of BURY heuristic algorithm for balanced k-graph partitioning that minimizes Bell pair requirements
  • Introduction of maximum matching minimization as a better metric than cut edges for measuring entanglement requirements in distributed quantum systems
  • Scalable framework for distributed measurement-based quantum computation with reduced quantum network overhead
distributed quantum computing graph states measurement-based quantum computation Bell pairs quantum networking

We investigate the problem of compiling the generation of graph states to arbitrarily many distributed homogeneous quantum processing units (QPUs), providing a scalable partitioning algorithm and graph state generation protocol to minimize the number of Bell pairs required. To this end, we consider the problem of balanced k-graph partitioning with the objective of minimizing the sizes of the maximum matchings between partitions, a more natural measure of entanglement compared to the naive but common metric of cut edges. We show that our heuristic algorithm, BURY, partitions graph states to require fewer Bell pairs for generation than state-of-the-art k-partition algorithms. Furthermore, we show that BURY reduces the cut-rank of the partitions, demonstrating that the partitioning found by our algorithm is likely to minimize the Bell pair utilization of any future improved distributed graph state generation protocol. Additionally, we discuss how one could straightforwardly apply our methods to the dynamic case where the graph state generation and measurement are performed concurrently. Our study of the balanced minimum-maximum-matching k-partition problem and the heuristic algorithm we design provides a scalable foundation for reducing quantum network overhead for distributed measurement-based quantum computation (MBQC), as well as any scheme where distributed graph state generation is desired.
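To see why a maximum matching across the cut can be a tighter entanglement proxy than the raw cut-edge count (the metric contrast the abstract draws), here is a minimal sketch using Kuhn's augmenting-path algorithm on a hypothetical two-partition toy graph. The vertex labels and edge sets are illustrative inventions, not from the paper:

```python
def max_bipartite_matching(left, right, edges):
    """Maximum matching between two partitions via augmenting paths
    (Kuhn's algorithm). `edges` maps each left vertex to its
    neighbours in the right partition."""
    match = {}  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in edges.get(u, ()):
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in left)

# Toy graph state split into partitions A = {0, 1} and B = {2, 3, 4}.
# Vertex 0 has three cut edges, but they share an endpoint, so the
# maximum matching across the cut is smaller than the raw cut count.
cut = {0: [2, 3, 4], 1: [2]}
n_cut_edges = sum(len(v) for v in cut.values())
n_matching = max_bipartite_matching([0, 1], [2, 3, 4], cut)
print(n_cut_edges, n_matching)
```

In this toy the cut has 4 edges but the matching has size 2, illustrating how matching-based accounting can demand fewer Bell pairs than edge counting.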

A Scheduler for the Active Volume Architecture

Sam Heavey, Athena Caesura

2603.06376 • Mar 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops scheduling software for the Active Volume quantum computing architecture that more accurately estimates resource requirements and execution times. The work shows that improved scheduling can reduce overhead costs and allow larger quantum circuits to run on a given quantum computer than previously predicted.

Key Contributions

  • Development of greedy scheduling algorithm for Active Volume architecture that reduces bridge and stale-state qubit overheads by 1.44x
  • Empirical derivation of novel formula for overhead calculations that improves runtime estimate accuracy by 1.76x
  • Demonstration that larger quantum circuits can execute on given hardware than previously predicted by analytic models
active volume architecture quantum scheduling logical qubits resource estimation fault-tolerant quantum computing

We improve the accuracy of Active Volume resource estimates by explicitly scheduling when Active Volume blocks execute. We present software that uses a greedy strategy to assign each logical qubit a role in each logical cycle (e.g., workspace, stale state storage, and bridge qubits). We empirically derive a novel formula for bridge- and stale-state-qubit overheads and improve the accuracy of runtime estimates, revealing that larger circuits can run on a given computer than previously predicted by analytic models. For a $4\times4$ Fermi-Hubbard simulation test circuit, this yields a $1.76\times$ runtime speedup with a $1.44\times$ reduction in bridge- and stale-state-qubit overheads compared to the model used in arXiv:2501.06165. Moreover, we show that for this test circuit, reaction times are insignificant in runtime estimates for computers with fewer than 600 logical qubits and that the number of reaction layers per logical cycle remains 1 in this regime. Our results pave the way for a full compilation pipeline for the Active Volume architecture and improved analytic resource estimates.

Vertical ion transport in a surface Paul trap: escalator and elevator approaches

Alexey Russkikh, Nikita Zhadnov

2603.06208 • Mar 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops methods to move trapped ions vertically (perpendicular to the chip surface) in surface Paul traps, introducing an 'escalator' approach using geometrically optimized transitions and comparing two 'elevator' configurations that dynamically reposition ions using electrode voltages.

Key Contributions

  • Introduction of 'escalator' approach for vertical ion transport using geometrically optimized transitions between trapping zones
  • Comparative analysis of two 'elevator' configurations that dynamically reposition RF null via additional electrode voltages
surface Paul trap ion transport QCCD trapped ion qubits quantum processor

Surface ion traps confining and manipulating tens of ion qubits have become the leading platform for quantum processors with high quantum volume. These devices employ the Quantum Charge-Coupled Device (QCCD) architecture, wherein multiple trapping zones are linked by an on-chip transport network that shuttles ion chains, enabling full connectivity through physical ion transport in a plane parallel to the chip surface. The ability to move ions perpendicular to this plane can offer additional advantages, including tuning the laser-ion interaction strength, systematic studies of surface-induced heating mechanisms, and precise alignment with a mode of an external optical cavity. We introduce an "escalator" - a geometrically optimized transition between trapping zones of different confinement heights - and present a comparative analysis of two "elevator" configurations that reposition the RF null dynamically via additional electrode voltages. Both approaches enable nearly a twofold change in the ion confinement height above the chip surface.

Universal quantum computation with group surface codes

Naren Manjunath, Vieri Mattei, Apoorv Tiwari, Tyler D. Ellison

2603.05502 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces group surface codes, which generalize the standard surface code used in quantum error correction. The authors show these codes can perform non-Clifford gates transversally, enabling universal quantum computation by bypassing theoretical limitations that restrict the computational power of standard topological quantum error correction schemes.

Key Contributions

  • Introduction of group surface codes as a generalization of Z2 surface codes
  • Demonstration that non-Clifford gates can be performed transversally in these codes, enabling universal quantum computation
  • Method to bypass Bravyi-König theorem restrictions on topological Pauli stabilizer models
  • Unified framework connecting various recent constructions including sliding group surface codes and magic state preparation
surface codes quantum error correction universal quantum computation non-Clifford gates topological quantum computing

We introduce group surface codes, which are a natural generalization of the $\mathbb{Z}_2$ surface code, and equivalent to quantum double models of finite groups with specific boundary conditions. We show that group surface codes can be leveraged to perform non-Clifford gates in $\mathbb{Z}_2$ surface codes, thus enabling universal computation with well-established means of performing logical Clifford gates. Moreover, for suitably chosen groups, we demonstrate that arbitrary reversible classical gates can be implemented transversally in the group surface code. We present the logical operations in terms of a set of elementary logical operations, which include transversal logical gates, a means of transferring encoded information into and out of group surface codes, and preparation and readout. By composing these elementary operations, we implement a wide variety of logical gates and provide a unified perspective on recent constructions in the literature for sliding group surface codes and preparing magic states. We furthermore use tensor networks inspired by ZX-calculus to construct spacetime implementations of the elementary operations. This spacetime perspective also allows us to establish explicit correspondences with topological gauge theories. Our work extends recent efforts in performing universal quantum computation in topological orders without the braiding of anyons, and shows how certain group surface codes allow us to bypass the restrictions set by the Bravyi-König theorem, which limits the computational power of topological Pauli stabilizer models.

Mirror codes: High-threshold quantum LDPC codes beyond the CSS regime

Andrey Boris Khesin, Jonathan Z. Lu

2603.05496 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces mirror codes, a new class of quantum error-correcting codes that go beyond traditional CSS codes and achieve high error correction thresholds. The authors demonstrate these codes can achieve error pseudothresholds around 0.2% with efficient syndrome extraction circuits, making them promising for near-term fault-tolerant quantum devices.

Key Contributions

  • Introduction of mirror codes, a flexible LDPC stabilizer code construction that generalizes beyond CSS codes
  • Development of syndrome extraction circuits with provable fault tolerance using 1-6 ancillae per check
quantum error correction LDPC codes fault tolerance stabilizer codes syndrome extraction

The realization of quantum error correction protocols whose logical error rates are suppressed far below physical error rates relies on an intricate combination: the error-correcting code's efficiency, the syndrome extraction circuit's fault tolerance and overhead, the decoder's quality, and the device's constraints, such as physical qubit count and connectivity. This work makes two contributions towards error-corrected quantum devices. First, we introduce mirror codes, a simple yet flexible construction of LDPC stabilizer codes parameterized by a group $G$ and two subsets of $G$ whose total size bounds the check weight. These codes contain all abelian two-block group algebra codes, such as bivariate bicycle (BB) codes. At the same time, they are manifestly not CSS in general, thus deviating substantially from most prior constructions. Fixing a check weight of 6, we find $[[ 60, 4, 10 ]], [[ 36, 6, 6 ]], [[ 48, 8, 6 ]]$, and $[[ 85, 8, 9 ]]$ codes, all of which are not CSS; we also find several weight-7 codes with $kd > n$. Next, we construct syndrome extraction circuits that trade overhead for provable fault tolerance. These circuits use 1-2, 3, and 6 ancillae per check, and respectively are partially fault-tolerant (FT), provably FT on weight-6 CSS codes, and provably FT on \emph{all} weight-6 stabilizer codes. Using our constructions, we perform end-to-end quantum memory experiments on several representative mirror codes under circuit-level noise. We achieve an error pseudothreshold on the order of $0.2\%$, approximately matching that of the $[[ 144, 12, 12 ]]$ BB code under the same model. These findings position mirror codes as a versatile candidate for fault-tolerant quantum memory, especially on smaller-scale devices in the near term.

Improved Decoding of Quantum Tanner Codes Using Generalized Check Nodes

Olai Å. Mostad, Eirik Rosnes, Hsuan-Yin Lin

2603.05486 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper improves the decoding of quantum Tanner codes by grouping check nodes into more powerful generalized check nodes and using enhanced iterative belief propagation decoding. The proposed method significantly outperforms standard quaternary BP decoders and other recent approaches for quantum low-density parity-check codes.

Key Contributions

  • Enhanced generalized belief propagation decoder for quantum Tanner codes that significantly outperforms existing methods
  • Greedy algorithm for combining checks in generalized BP decoding for quantum LDPC codes
  • Theoretical cycle analysis for various quantum LDPC code classes
quantum error correction quantum LDPC codes belief propagation quantum Tanner codes fault tolerance

We study the decoding problem for quantum Tanner codes and propose to exploit the underlying local code structure by grouping check nodes into more powerful generalized check nodes for enhanced iterative belief propagation (BP) decoding, in which the generalized checks are decoded with a maximum a posteriori (MAP) decoder as part of the check node processing of each decoding iteration. We mainly study the finite-length setting and show that the proposed enhanced generalized BP decoder for quantum Tanner codes significantly outperforms the standard quaternary BP decoder with memory effects, as well as the recently proposed Relay-BP decoder, even outperforming generalized bicycle (GB) codes with comparable parameters in some cases. For other classes of quantum low-density parity-check (qLDPC) codes, we propose a greedy algorithm to combine checks for generalized BP decoding. However, for GB codes, bivariate bicycle codes, hypergraph product codes, and lifted-product codes, there seems to be limited gain by combining simple checks into more powerful ones. To back up our findings, we also provide a theoretical cycle analysis for the considered qLDPC codes.

High-performance syndrome extraction circuits for quantum codes

Armands Strikis, Dan E. Browne, Michael E. Beverland

2603.05481 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an improved framework for designing syndrome extraction circuits used in quantum error correction, achieving up to an order of magnitude better performance than existing designs. The authors generalize left-right circuit constructions to work with arbitrary CSS quantum codes and introduce formal tools to analyze error propagation and optimize circuit performance.

Key Contributions

  • Generalization of left-right syndrome extraction circuits to arbitrary CSS codes with optimized performance
  • Introduction of formal residual error analysis framework for quantifying circuit-level error propagation
  • Demonstration of order-of-magnitude improvements in logical performance over existing single-ancilla designs
quantum error correction syndrome extraction circuits CSS codes fault tolerance circuit distance

We present a fast and effective framework for analysing and designing syndrome-extraction circuits (SECs). Our approach is based on left-right circuits, a general design for SECs which maintain low depth by staggering $X$ and $Z$ checks without interleaving gates. Initially proposed for specific classes of codes, we generalise this construction to arbitrary CSS codes and optimise the circuit structure to achieve low qubit idling time, large effective distance, and reduced minimum-weight failure mechanisms. A key component of our framework is the formal notion of residual errors and their associated distance metrics, which form lightweight tools for capturing error propagation and quantifying the potential harm of circuit-level errors. Applying our automated framework to diverse classes of codes, we observe consistent improvements in logical performance of up to an order of magnitude compared to existing single-ancilla SEC designs. We also use these tools to prove that no non-interleaving SEC can achieve circuit distance $12$ for the gross code, and identify an explicit circuit that we conjecture achieves distance $11$, exceeding previously known constructions.

Low-depth amplitude estimation via statistical eigengap estimation

Po-Wei Huang, Bálint Koczor

2603.05475 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: none

This paper develops new algorithms for quantum amplitude estimation by reframing the problem as estimating energy gaps in effective Hamiltonians rather than using traditional phase estimation approaches. The methods achieve optimal performance while using simpler classical post-processing and offering flexible tradeoffs between query complexity and circuit depth.

Key Contributions

  • Reframes amplitude estimation as statistical eigengap estimation of effective Hamiltonians
  • Develops algorithms achieving Heisenberg-limited scaling with simplified classical post-processing
  • Establishes optimal query-depth tradeoffs for low-depth quantum circuits with theoretical guarantees
amplitude estimation quantum algorithms Heisenberg limit fault-tolerant quantum computing eigengap estimation

Amplitude estimation, in its original form, is formulated as phase estimation upon the Grover walk operator. Since its introduction, subsequent improvements to the algorithm have removed the use of phase estimation and introduced low-depth variants that trade speedup factors for lower circuit depth. We make the key observation that amplitude estimation is equivalent to estimating the energy gap of an effective Hamiltonian, whereby discrete time evolution is generated by amplitude amplification. This enables us to develop two amplitude estimation algorithms for both Heisenberg-limited and low-depth circuit regimes, inspired by statistical phase estimation techniques developed for seemingly unrelated early fault-tolerant ground-state energy estimation. Our approach has significant technical and practical benefits, and uses simplified classical post-processing compared to prior techniques -- our theoretical and numerical results indicate that we achieve state-of-the-art performance. Furthermore, while our approach achieves Heisenberg-limited scaling, we also establish optimal query-depth tradeoffs up to polylogarithmic factors in the low-depth regime with provable theoretical guarantees. Due to its flexibility, generality, and robustness, we expect our approach to be a key enabler for a broad range of early fault-tolerant applications.
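The low-depth regime the abstract targets can be illustrated with a classical toy: circuits with $k$ rounds of amplitude amplification yield outcomes with probability $\sin^2((2k+1)\theta)$, so deeper circuits sharpen the likelihood of $a = \sin^2\theta$. The sketch below uses a plain grid-search maximum-likelihood estimate over simulated shots at several depths (a generic MLAE-style toy, not the paper's eigengap estimator); the amplitude, depths, and shot counts are illustrative assumptions:

```python
import math, random

random.seed(7)
a_true = 0.30                       # amplitude a = sin^2(theta), assumed
theta = math.asin(math.sqrt(a_true))

def sample(k, shots):
    """Simulated outcomes after k amplification rounds:
    success probability sin^2((2k+1) * theta)."""
    prob = math.sin((2 * k + 1) * theta) ** 2
    return sum(random.random() < prob for _ in range(shots))

depths = [0, 1, 2, 4, 8]
shots = 200
hits = {k: sample(k, shots) for k in depths}

def neg_log_likelihood(a):
    """Binomial negative log-likelihood of all depth data at amplitude a."""
    th = math.asin(math.sqrt(a))
    nll = 0.0
    for k in depths:
        prob = min(max(math.sin((2 * k + 1) * th) ** 2, 1e-12), 1 - 1e-12)
        nll -= hits[k] * math.log(prob) + (shots - hits[k]) * math.log(1 - prob)
    return nll

grid = [i / 10000 for i in range(1, 10000)]
a_hat = min(grid, key=neg_log_likelihood)
print(round(a_hat, 3))
```

The shallow-depth data resolves the ambiguity between likelihood peaks while the deeper data narrows the estimate, which is the query-depth trade-off flavour the abstract describes.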

Spatiotemporal Pauli processes: Quantum combs for modelling correlated noise in quantum error correction

John F Kam, Angus Southwell, Spiro Gicev, Muhammad Usman, Kavan Modi

2603.05474 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper introduces Spatiotemporal Pauli Processes (SPPs), a mathematical framework that bridges the gap between simple error models used in quantum error correction and the complex, correlated noise that occurs in real quantum devices. The authors demonstrate how this framework can model realistic noise patterns and show that certain types of correlated noise can cause complete breakdown of quantum error correction codes.

Key Contributions

  • Introduction of Spatiotemporal Pauli Processes framework that maps arbitrary non-Markovian quantum dynamics to tractable multi-time Pauli processes
  • Demonstration that correlated noise can cause complete breakdown of surface code error correction through critical slowing down and macroscopic error avalanches
  • Development of efficient tensor network representations and transfer operator diagnostics for analyzing correlated quantum noise
quantum error correction correlated noise surface codes non-Markovian dynamics process tensors

Correlated noise is a critical failure mode in quantum error correction (QEC), as temporal memory and spatial structure concentrate faults into error bursts that undermine standard threshold assumptions. Yet, a fundamental gap persists between the stochastic Pauli models ubiquitous in QEC and the microscopic, non-Markovian descriptions of physical device dynamics. We close this gap by introducing \emph{Spatiotemporal Pauli Processes} (SPPs). By applying a multi-time Pauli twirl -- operationally realised by Pauli-frame randomisation -- to a general process tensor, we map arbitrary multi-time, non-Markovian dynamics to a multi-time Pauli process. This process is represented by a process-separable comb, or equivalently, a well-defined joint probability distribution over Pauli trajectories in spacetime. We show that SPPs inherit efficient tensor network representations whose bond dimensions are bounded by the environment's Liouville-space dimension. To interpret these structures, we develop transfer operator diagnostics linking spectra to correlation decay, and exact hidden Markov representations for suitable classes of SPPs. We demonstrate the framework via surface code memory and stability simulations of up to distance \(19\) for (i) a temporally correlated ``storm'' model that tunes correlation length at fixed marginal error rates, and (ii) a genuinely spatiotemporal 2D quantum cellular automaton bath that maps exactly to a nonlinear probabilistic cellular automaton under twirling. Tuning coherent bath interactions drives the system into a pseudo-critical regime, exhibiting critical slowing down and macroscopic error avalanches that cause a complete breakdown of surface code distance scaling. Together, these results justify SPPs as an operationally grounded, scalable toolkit for modelling, diagnosing, and benchmarking correlated noise in QEC.

Heuristics for Shuttling Sequence Optimization for a Linear Segmented Trapped-Ion Quantum Computer

J. Durandau, C. A. Brunet, F. Schmidt-Kaler, U. Poschinger, F. Mailhot, Y. Bérubé-Lauzière

2603.05464 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops optimization algorithms for moving trapped ions between different zones in a linear ion-trap quantum computer, focusing on minimizing the number of physical ion movements needed to execute quantum circuits. The authors present heuristics for determining optimal initial ion ordering and demonstrate improved performance for quantum Fourier transform-like circuits.

Key Contributions

  • Development of heuristic algorithms for optimizing ion shuttling sequences in trapped-ion quantum computers
  • Implementation of qubit mapping strategies to determine optimal initial ion ordering
  • Demonstration that multiple interaction zones can reduce register reordering overhead
trapped-ion quantum computing shuttling optimization qubit mapping quantum Fourier transform ion displacement

An algorithm for the generation of shuttling sequences is necessary for the operation of a linear segmented ion-trap quantum computer. The present work provides an implementation of an algorithm that produces sequences proved to be optimal for circuits with a quantum Fourier transform-like structure; this optimality was proved in previous work of our group. We first present an approach for qubit mapping, i.e., determining the initial ordering of the ions, termed the common ion order, and develop a heuristic algorithm for its implementation. We explain how this heuristic is integrated into the shuttling sequence generation algorithm described in the previous work. The results show that the heuristic improves performance by reducing the number of required shuttling operations. The number of ion displacements required grows polynomially with the number of qubits, such that these operations become the main contribution to the overall resource cost. Furthermore, we show that multiple zones for gate interactions can reduce the amount of qubit register reordering.

Constant depth magic state cultivation with Clifford measurements by gauging

Bence Hetényi, Benjamin J. Brown, Dominic. J. Williamson

2603.05429 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a method to improve magic state preparation for quantum error correction by using constant-depth measurements instead of depth-scaling measurements, making the approach practical for larger quantum codes. The technique uses 'gauging' to perform logical measurements on color codes with better scalability than previous cultivation methods.

Key Contributions

  • Development of constant-depth logical measurement circuits for color codes using gauging technique
  • Achievement of 10^-12 logical error rates for d=7 color codes with improved scalability over magic state cultivation
magic states quantum error correction color codes Clifford measurements fault tolerance

Magic states are a scarce resource for two-dimensional qubit stabilizer codes. Magic state cultivation was recently proposed to reduce the cost of magic state preparation by measuring the transversal Clifford operator of the color code. Cultivation achieves $\sim 10^{-9}$ logical error rates for the $d=5$ color code, with substantially lower space-time overhead than magic state distillation. However, due to the $\mathcal{O}(d)$ depth of the Clifford measurement circuit, magic state cultivation becomes impractical for $d>5$. Here, we perform logical $XS^\dagger$ measurements on the color code by gauging a transversal Clifford gate, resulting in a constant-depth logical measurement circuit. We employ repeated gauging measurements with post-selection rather than performing error correction on the Clifford stabilizer code that emerges during the gauging protocol, thus gaining simplicity at the cost of scalability. Our protocol requires a regular square grid connectivity and yields logical error rates comparable to magic state cultivation. The $d=7$ version of our protocol gives access to the $10^{-12}$ logical error rate regime at $0.05\%$ physical error rate while retaining more than $1\%$ of the shots after the equivalent of the cultivation stage.

Optimal Decoding with the Worm

Zac Tobias, Nikolas P. Breuckmann, Benedikt Placke

2603.05428 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a new quantum error correction decoder called the 'worm algorithm' that uses Markov-Chain Monte-Carlo methods to optimally decode errors in quantum low-density parity-check (qLDPC) codes. The decoder can handle various quantum error correction codes including surface codes and hyperbolic surface codes, and demonstrates superior performance compared to existing decoding methods.

Key Contributions

  • Novel worm algorithm decoder for optimal decoding of matchable qLDPC codes using MCMC methods
  • Rigorous analysis of mixing time guarantees and connection to defect susceptibility
  • Demonstration of superior decoding thresholds compared to minimum-weight perfect matching
  • Extension to correlated decoding schemes that work beyond independent error models
quantum error correction qLDPC codes surface codes MCMC decoding fault tolerance

We propose a new decoder for ``matchable'' qLDPC codes that uses a Markov-Chain Monte-Carlo algorithm -- called the \emph{worm algorithm} -- to approximately compute the probabilities of logical error classes given a syndrome. The algorithm hence performs (approximate) \emph{optimal} decoding, and we expect it to be computationally efficient in certain settings. The algorithm is applicable to decoding random errors for the surface code, the honeycomb Floquet code, and hyperbolic surface codes with constant rate, in all cases with and without measurement errors. The efficiency of the decoder hinges on the mixing time of the underlying Markov chain. We give a rigorous mixing time guarantee in terms of a quantity that we call the \emph{defect susceptibility}. We connect this quantity to the notion of disorder operators in statistical mechanics and use this to argue (non-rigorously) that the algorithm is efficient for \emph{typical} errors in the entire decodable phase. We also demonstrate the effectiveness of the worm decoder numerically by applying it to the surface code with measurement errors as well as a family of hyperbolic surface codes. For most codes, the matchability condition restricts direct application of our decoder to noise models with independent bit-flip, phase-flip, and measurement errors. However, our decoder returns \emph{soft information} which makes it useful also in heuristic ``correlated decoding'' schemes which work beyond this simple setting. We demonstrate this by simulating decoding of the surface code under depolarizing noise, and we find that the threshold for ``correlated worm decoding'' is substantially higher than for both minimum-weight perfect matching and for correlated matching.
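The idea of MCMC-based optimal decoding — summing probabilities over entire logical error classes rather than finding one minimum-weight correction — can be sketched on a tiny example. The toy below uses a 2×2 toric-code edge layout with a plain Metropolis chain (not the paper's worm algorithm) and checks the sampled class probabilities against brute-force enumeration; the layout, error rate, and move set are all illustrative assumptions:

```python
import itertools, random

random.seed(2)
p = 0.1          # independent flip probability per edge (assumed)
N_EDGES = 8      # 2x2 torus: h(i,j) -> 2*i+j, v(i,j) -> 4 + 2*i + j

# Vertex checks (syndrome bits) and face plaquettes (syndrome-preserving moves).
VERTICES = [{2*i+j, 2*i+(j+1) % 2, 4+2*i+j, 4+2*((i+1) % 2)+j}
            for i in (0, 1) for j in (0, 1)]
PLAQUETTES = [{2*i+j, 2*((i+1) % 2)+j, 4+2*i+j, 4+2*i+(j+1) % 2}
              for i in (0, 1) for j in (0, 1)]
LOGICALS = [{0, 1}, {4, 6}]           # non-contractible wrapping loops
CUTS = [{0, 2}, {4, 5}]               # cocycles: logical-class invariants

def syndrome(e):
    return tuple(len(e & v) % 2 for v in VERTICES)

def logical_class(e):
    return tuple(len(e & c) % 2 for c in CUTS)

def weight_prob(e):
    return p ** len(e) * (1 - p) ** (N_EDGES - len(e))

def exact_class_probs(s):
    """Brute-force optimal decoding: sum error probabilities per class."""
    probs = {}
    for bits in itertools.product((0, 1), repeat=N_EDGES):
        e = {i for i, b in enumerate(bits) if b}
        if syndrome(e) == s:
            c = logical_class(e)
            probs[c] = probs.get(c, 0.0) + weight_prob(e)
    z = sum(probs.values())
    return {c: q / z for c, q in probs.items()}

def mcmc_class_probs(e0, steps=200_000):
    """Metropolis chain over the syndrome coset; moves are plaquette
    flips (same class) and logical-loop flips (class changes)."""
    moves = PLAQUETTES + LOGICALS
    e, counts = set(e0), {}
    for _ in range(steps):
        trial = e ^ random.choice(moves)
        if random.random() < weight_prob(trial) / weight_prob(e):
            e = trial
        c = logical_class(e)
        counts[c] = counts.get(c, 0) + 1
    return {c: n / steps for c, n in counts.items()}

true_error = {0, 4}                   # a sample two-edge error
s = syndrome(true_error)
exact = exact_class_probs(s)
approx = mcmc_class_probs(true_error)
for c in sorted(exact):
    print(c, round(exact[c], 3), round(approx.get(c, 0.0), 3))
```

The sampled occupancies of the four logical classes converge to the exact coset probabilities, which is the "soft information" that makes such samplers useful beyond hard minimum-weight matching.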

Decay Rates in Interleaved Benchmarking with Single-Qubit References

Ilya A. Simakov, Arina V. Zotova, Tatyana A. Chudakova, Alena S. Kazmina, Artyom M. Polyanskiy, Nikolay N. Abramov, Mikhail A. Tarkhov, Alexander M. M...

2603.05422 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops improved theoretical foundations for characterizing multi-qubit quantum gates using cross-entropy benchmarking with single-qubit reference sequences. The authors identify and correct systematic errors in current approaches, providing more accurate gate fidelity measurements that match standard benchmarking methods while achieving higher precision.

Key Contributions

  • Derived analytical expression for joint decay of simultaneous single-qubit reference sequences
  • Introduced refined expression for interleaved gate fidelity estimation that corrects systematic overestimation
  • Validated theory experimentally on superconducting quantum processor showing agreement with standard interleaved randomized benchmarking
cross-entropy benchmarking gate fidelity interleaved randomized benchmarking multi-qubit gates superconducting quantum processor
View Full Abstract

Cross-entropy benchmarking (XEB) with single-qubit reference sequences is widely used to characterize multi-qubit gates in large-scale quantum processors, despite the lack of a rigorous theoretical justification. Here we show that the commonly employed additive single-qubit-error approximation underlying this approach breaks down and leads to a systematic overestimation of gate fidelities. We derive an analytical expression for the joint decay of simultaneous single-qubit reference sequences and introduce a refined expression for the interleaved gate fidelity estimation. Experiments on a superconducting quantum processor validate the theory and demonstrate that fidelities obtained using XEB with single-qubit references agree with those extracted from standard interleaved randomized benchmarking (IRB), while achieving higher precision due to reduced reference-sequence errors. Our results establish a theoretical foundation for single-qubit-based XEB and show that, with appropriate post-processing, it enables a reliable and robust approach for benchmarking entangling gates without the need for multi-qubit Clifford reference sequences.
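
The workflow the paper refines, fitting exponential decays to reference and interleaved sequences and forming a fidelity estimate from the ratio of decay parameters, can be sketched as follows. This uses the textbook interleaved-RB point estimator, not the paper's corrected expression; the function names are illustrative.

```python
import math

def fit_decay(lengths, survivals):
    """Log-linear least-squares fit of F(m) = A * p**m; returns the decay p."""
    ys = [math.log(f) for f in survivals]
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(lengths, ys))
    den = sum((x - mx) ** 2 for x in lengths)
    return math.exp(num / den)

def interleaved_fidelity(p_ref, p_int, n_qubits):
    """Textbook IRB point estimate from the reference and interleaved decays.
    (The paper derives a refined expression correcting this formula's
    systematic overestimation in the single-qubit-reference setting.)"""
    d = 2 ** n_qubits
    return 1 - (d - 1) / d * (1 - p_int / p_ref)
```

For example, decays p_ref = 0.99 and p_int = 0.97 on a two-qubit gate give an estimated gate fidelity of about 0.985.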

Recursive Magic State Distillation on the Surface Code

Jonathan E. Moussa

2603.05409 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a more efficient method for preparing magic states needed for quantum computation by using recursive 15-to-1 distillation with lattice surgery on surface codes. The approach reduces the physical qubit requirements and time needed to create high-quality magic states, though it requires lower physical error rates to be effective.

Key Contributions

  • Recursive implementation of 15-to-1 magic state distillation reducing resource overhead
  • Specific resource estimates for T and CCZ magic state preparation on surface codes with lattice surgery
magic state distillation surface code lattice surgery fault-tolerant quantum computing error correction
View Full Abstract

I reduce the cost to prepare magic states with lattice surgery operations on the surface code by using a recursive implementation of 15-to-1 magic state distillation. On a rotated surface code with distance $d$, $|T\rangle$ preparation requires a $d$-by-$3 d$ grid of data qubits for up to $15 d$ error correction cycles, and $|CCZ\rangle$ preparation requires a $3 d$-by-$2 d$ grid for up to $10.5 d$ cycles. However, a significantly lower physical error threshold than that of the underlying surface code is required to match the error probability of the output magic state with the logical error rate of the output surface code at large code distances.
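
For context, one round of 15-to-1 distillation suppresses an input error rate p to roughly 35 p^3 at leading order, and recursion simply iterates this map. The sketch below uses that standard background formula; it is not the paper's contribution, which is the lattice-surgery layout implementing the recursion on the surface code.

```python
def distill_15_to_1(p_in, rounds):
    """Leading-order output error of recursive 15-to-1 magic state
    distillation: each round maps the input error rate p to ~35 * p**3
    (standard background formula, valid for small p)."""
    p = p_in
    for _ in range(rounds):
        p = 35 * p ** 3
    return p
```

With p_in = 1e-3, a single round already reaches ~3.5e-8, and two rounds ~1.5e-21, which is why one or two recursion levels usually suffice.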

Generalized matching decoders for 2D topological translationally-invariant codes

Shi Jie Samuel Tan, Ian Gill, Eric Huang, Pengyu Liu, Chen Zhao, Hossein Dehghani, Aleksander Kubica, Hengyun Zhou, Arpit Dua

2603.05402 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new graph-matching decoders for 2D topological quantum error-correcting codes like bivariate bicycle codes, which work by converting the syndrome information into an equivalent toric code representation that can be efficiently decoded using graph-matching techniques.

Key Contributions

  • Development of graph-matching decoders for general translationally-invariant topological codes
  • Proof that these decoders correct errors up to a constant fraction of code distance with provable performance guarantees
  • Numerical demonstration of competitive performance with existing decoders for bivariate bicycle codes
quantum error correction topological codes graph matching fault tolerant quantum computing bivariate bicycle codes
View Full Abstract

Two-dimensional topological translationally-invariant (TTI) quantum codes, such as the toric code (TC) and bivariate bicycle (BB) codes, are promising candidates for fault-tolerant quantum computation. For such codes to be practically relevant, their decoders must successfully correct the most likely errors while remaining computationally efficient. For the TC, graph-matching decoders satisfy both requirements and, additionally, admit provable performance guarantees. Given the equivalence between TTI codes and (multiple copies of) the TC, one may then ask whether TTI codes also admit analogous graph-matching decoders. In this work, we develop a graph-matching approach to decoding general TTI codes. Intuitively, our approach coarse-grains the TTI code to obtain an effective description of the syndrome in terms of TC excitations, which can then be removed using graph-matching techniques. We prove that our decoders correct errors of weight up to a constant fraction of the code distance and achieve non-zero code-capacity thresholds. We further numerically study a variant optimized for practically relevant BB codes and observe performance comparable to that of the belief propagation with ordered statistics decoder. Our results indicate that graph-matching decoders are a viable approach to decoding BB codes and other TTI codes.

QGPU: Parallel logic in quantum LDPC codes

Boren Gu, Andy Zeyi Liu, Armanda O. Quintavalle, Qian Xu, Jens Eisert, Joschka Roffe

2603.05398 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces clustered-cyclic codes, a new family of quantum low-density parity-check (LDPC) codes that enable highly parallel logical operations, and proposes parallel product surgery techniques to perform multiple logical measurements simultaneously with fixed overhead.

Key Contributions

  • Introduction of clustered-cyclic quantum LDPC codes with competitive parameters like [[136,8,14]] and [[198,18,10]]
  • Development of parallel product surgery protocol enabling surface-code-style maximal parallelism for logical operations
  • Proof that parallel product surgery preserves code distance and demonstration of fault-tolerant Clifford group generation
quantum error correction LDPC codes fault-tolerant quantum computing logical qubits parallel operations
View Full Abstract

Quantum error correction is critical to the design and manufacture of scalable quantum computing systems. Recently, there has been growing interest in quantum low-density parity-check codes as a resource-efficient alternative to surface codes. Their adoption is hindered by the difficulty of compiling fault-tolerant logical operations. A key challenge is that logical qubits do not necessarily map to disjoint sets of physical qubits, which limits parallelism. We introduce clustered-cyclic codes, a quantum low-density parity-check code family with finite-size instances such as [[136,8,14]] and [[198,18,10]] that are competitive with state-of-the-art constructions. These codes admit a directly addressable logical basis, enabling highly parallel logical measurement layers. To leverage this structure, we propose parallel product surgery for quantum product codes. Using an auxiliary copy of the data patch and an engineered product-connection structure, the protocol performs many logical Pauli-product measurements in a single surgery round with small, fixed overhead. For clustered-cyclic codes, this yields surface-code-style maximal parallelism: up to k/2 disjoint Pauli-product measurements per round under explicit algebraic conditions. We prove that parallel product surgery preserves the code distance for hypergraph product codes and numerically verify distance preservation for the listed clustered-cyclic instances with k = 8. Finally, for the [[24,8,3]] clustered-cyclic code, treating half of the logical qubits as auxiliaries enables arbitrary parallel CNOTs on disjoint pairs; combined with symmetry-derived operations, these gates generate the full Clifford group fault-tolerantly.

SpiderCat: Optimal Fault-Tolerant Cat State Preparation

Andrey Boris Khesin, Sarah Meng Li, Boldizsár Poór, Benjamin Rodatz, John van de Wetering, Richie Yeung

2603.05391 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops optimal methods for preparing fault-tolerant CAT states (multi-qubit entangled states) needed for quantum error correction, using graph theory to find circuits that minimize the number of CNOT gates while preventing error spread. The authors provide both theoretical lower bounds and practical constructions that significantly improve upon previous resource requirements.

Key Contributions

  • Derived formal lower bounds on CNOT gate requirements for fault-tolerant n-qubit CAT state preparation using ZX-diagram analysis and graph theory
  • Provided explicit optimal circuit constructions for CAT states up to n≤100 qubits that significantly improve resource counts over previous methods
  • Developed constant-depth fault-tolerant implementations using O(n) ancilla qubits and O(n) CNOT gates
fault-tolerant quantum computing CAT states GHZ states quantum error correction CNOT optimization
View Full Abstract

The ability to fault-tolerantly prepare CAT states, also known as multi-qubit GHZ states, is an important primitive for quantum error correction. It is required for Shor-style syndrome extraction, and can also be used as a subroutine for doing fault-tolerant state preparation of CSS codewords. Existing approaches to fault-tolerant CAT state preparations have been found using computationally expensive heuristics involving SAT solving, reinforcement learning, or exhaustive analysis. In this paper, we constructively find optimal circuits for CAT states in a more scalable way. In particular, we derive formal lower bounds on the number of CNOT gates required for circuits implementing $n$-qubit CAT states that do not spread errors of weight at most $t$ for $1\leq t \leq 5$. We do this by using fault-equivalent rewrites of ZX-diagrams to reduce it to a problem of characterising certain 3-regular simple graphs. We then provide families of such optimal graphs for infinitely many values of $n$ and $t\leq5$. By encoding the construction of optimal graphs as a constraint satisfaction problem we find explicit constructions for circuits that match this lower bound on CNOT count for all $n\leq50$ and $t \leq 5$ and for nearly all pairs $(n,t)$ with $n\leq 100$ and $t\leq 5$ or $n\leq 50$ and $t\leq 7$, significantly extending the regimes that were achievable by previous methods and improving the resource counts for existing constructions. We additionally show how to trade CNOT count against depth, allowing us to construct constant-depth fault-tolerant implementations using $O(n)$ ancilla and $O(n)$ CNOT gates.
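
A CAT (GHZ) state is conventionally prepared by a Hadamard followed by a CNOT ladder; the point of the paper is that this naive ladder spreads errors, which is what its optimized circuits avoid. As a reference for the target state only, here is a toy dense-statevector simulation of the naive (non-fault-tolerant) preparation; all names are illustrative.

```python
import math

def apply_h(state, q):
    """Apply a Hadamard to qubit q of a dense statevector (qubit q = bit q)."""
    mask = 1 << q
    for i in range(len(state)):
        if not i & mask:
            a, b = state[i], state[i | mask]
            state[i] = (a + b) / math.sqrt(2)
            state[i | mask] = (a - b) / math.sqrt(2)

def apply_cnot(state, c, t):
    """Apply CNOT(control=c, target=t) by swapping the paired amplitudes."""
    cmask, tmask = 1 << c, 1 << t
    for i in range(len(state)):
        if i & cmask and not i & tmask:
            j = i | tmask
            state[i], state[j] = state[j], state[i]

def cat_state(n):
    """Prepare (|0...0> + |1...1>)/sqrt(2) via H on qubit 0 + a CNOT ladder."""
    state = [0.0] * (1 << n)
    state[0] = 1.0
    apply_h(state, 0)
    for q in range(n - 1):
        apply_cnot(state, q, q + 1)
    return state
```

The ladder uses n-1 CNOTs but lets a single fault early in the chain spread to many qubits, which is exactly the error-weight growth the paper's circuits are designed to bound.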

Achieving Thresholds via Standalone Belief Propagation on Surface Codes

Pedro Hack, Luca Menti, Francisco Lazaro, Alexandru Paler

2603.05381 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new belief propagation decoders for quantum error correction that achieve threshold performance on surface codes by operating on decoding graphs rather than traditional Tanner graphs. The approach achieves performance comparable to minimum weight perfect matching decoders while being more suitable for hardware acceleration.

Key Contributions

  • Novel belief propagation decoders that achieve thresholds on surface codes by operating on decoding graphs instead of Tanner graphs
  • Hardware-scalable decoder implementation that matches minimum weight perfect matching performance
quantum error correction surface codes belief propagation decoding threshold fault tolerance
View Full Abstract

Standard belief propagation (BP) decoders exchange local information on the Tanner graph of the quantum error-correcting (QEC) code and, in particular, are known not to have a threshold for the surface code. We propose novel BP decoders that exchange messages on the decoding graph and obtain code capacity thresholds via standalone BP for the surface code under depolarizing noise. Our approach, similarly to the minimum weight perfect matching (MWPM) decoder, is applicable to any graphlike QEC code. The thresholds observed with our decoders are close to those obtained by MWPM. This result opens the path towards scalable hardware-accelerated implementations of MWPM-compatible decoders.
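
Matching-based decoding, the baseline these BP decoders are compared against, reduces to minimum-weight pairing of syndrome defects on the decoding graph. A brute-force toy version on a 1D path graph (where edge weight is just distance) is sketched below; production MWPM decoders use the blossom algorithm rather than exhaustive pairing, and this sketch is only illustrative.

```python
def min_weight_pairing(defects):
    """Brute-force minimum total weight over all ways of pairing up an even
    number of defect positions on a 1D decoding graph (weight = distance).
    Toy stand-in for MWPM; exponential in the number of defects."""
    if not defects:
        return 0
    first, rest = defects[0], list(defects[1:])
    return min(
        abs(first - rest[i]) + min_weight_pairing(rest[:i] + rest[i + 1:])
        for i in range(len(rest))
    )
```

For defects at positions [1, 4, 5, 8], pairing (1,4) with (5,8) gives total weight 6, beating the crossing pairings of weight 8.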

Simplified circuit-level decoding using Knill error correction

Ewan Murphy, Subhayan Sahu, Michael Vasmer

2603.05320 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper investigates Knill error correction, a quantum error correction technique that uses a single round of measurements instead of repeated syndrome measurements, requiring an auxiliary logical Bell state. The authors prove its fault tolerance and show that it can use simpler decoding algorithms, potentially reducing the classical control requirements for large-scale quantum computers.

Key Contributions

  • Theoretical proof of fault tolerance for Knill error correction under circuit-level noise
  • Demonstration that Knill error correction can use simpler code-capacity decoders instead of complex circuit-level decoders
  • Numerical benchmarking of the protocol's performance on quantum low-density parity-check codes
quantum error correction Knill error correction fault tolerance quantum decoding circuit-level noise
View Full Abstract

Quantum error correction will likely be essential for building a large-scale quantum computer, but it comes with significant requirements at the level of classical control software. In particular, a quantum error-correcting code must be supplemented with a fast and accurate classical decoding algorithm. Standard techniques for measuring the parity-check operators of a quantum error-correcting code involve repeated measurements, which both increases the amount of data that needs to be processed by the decoder, and changes the nature of the decoding problem. Knill error correction is a technique that replaces repeated syndrome measurements with a single round of measurements, but requires an auxiliary logical Bell state. Here, we provide a theoretical and numerical investigation into Knill error correction from the perspective of decoding. We give a self-contained description of the protocol, prove its fault tolerance under locally decaying (circuit-level) noise, and numerically benchmark its performance for quantum low-density parity-check codes. We show analytically and numerically that the time-constrained decoding problem for Knill error correction can be solved using the same decoder used for the simpler code-capacity noise model, illustrating that Knill error correction may alleviate the stringent requirements on classical control required for building a large-scale quantum computer.
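
The code-capacity decoding problem that Knill error correction reduces to can, for tiny codes, be solved with a plain syndrome lookup table. The toy example below does this for the 3-bit repetition code; it is purely illustrative of the code-capacity setting, not the qLDPC decoders benchmarked in the paper.

```python
# Code-capacity decoding of the 3-bit repetition code via syndrome lookup.
H = [(1, 1, 0), (0, 1, 1)]  # parity checks

def syndrome(error):
    """Syndrome of a binary error pattern under the checks H."""
    return tuple(sum(h * e for h, e in zip(row, error)) % 2 for row in H)

# Minimum-weight correction for each of the four possible syndromes.
LOOKUP = {
    (0, 0): (0, 0, 0),
    (1, 0): (1, 0, 0),
    (1, 1): (0, 1, 0),
    (0, 1): (0, 0, 1),
}

def decode(error):
    """Return the residual error after applying the lookup-table correction."""
    corr = LOOKUP[syndrome(error)]
    return tuple((e + c) % 2 for e, c in zip(error, corr))
```

Every weight-1 error decodes to the zero residual, as expected for a distance-3 code; weight-2 errors produce a logical failure.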

Robust and optimal control of open quantum systems

Zi-Jie Chen, Hongwei Huang, Lida Sun, Qing-Xuan Jie, Jie Zhou, Ziyue Hua, Yifang Xu, Weiting Wang, Guang-Can Guo, Chang-Ling Zou, Luyan Sun, Xu-Bo Zou

2603.05249 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops an improved algorithm for controlling quantum systems that accounts for real-world imperfections like noise and parameter uncertainties. The researchers demonstrate their approach experimentally using superconducting quantum circuits, achieving very low error rates of about 0.60%.

Key Contributions

  • Enhanced scalable algorithm for robust quantum control in open systems with noise and imperfections
  • Experimental validation achieving 0.60% infidelity in superconducting quantum circuits
quantum control open quantum systems decoherence superconducting circuits quantum error mitigation
View Full Abstract

Recent advancements in quantum technologies have highlighted the importance of mitigating system imperfections, including parameter uncertainties and decoherence effects, to improve the performance of experimental platforms. However, most of the previous efforts in quantum control are devoted to the realization of arbitrary unitary operations in a closed quantum system. Here, we improve the algorithm that suppresses system imperfections and noise, providing notably enhanced scalability for robust and optimal control of open quantum systems. Through experimental validation in a superconducting quantum circuit, we demonstrate that our approach outperforms its conventional counterpart for closed quantum systems with an ultra-low infidelity of about $0.60\%$, while the complexity of this algorithm exhibits the same scaling, with only a modest increase in the prefactor. This work represents a notable advancement in quantum optimal control techniques, paving the way for realizing quantum-enhanced technologies in practical applications.

Quantum advantages for syndrome-aware noisy logical observable estimation

Kento Tsubouchi, Hyukgun Kwon, Liang Jiang, Nobuyuki Yoshioka

2603.05145 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: none

This paper develops a theoretical framework to analyze how error syndrome information can improve the estimation of quantum observables in fault-tolerant quantum computers. The research shows that classical post-processing of syndromes provides limited improvement, but quantum protocols that adapt measurements based on syndromes can achieve exponentially better performance.

Key Contributions

  • Proves universal limitation that classical syndrome-aware protocols can improve logical error rates by at most factor of two
  • Demonstrates quantum protocols with syndrome-conditioned control can achieve exponential improvement in effective logical error rate
fault-tolerant quantum computing error correction quantum estimation theory logical observables syndrome information
View Full Abstract

Recent progress in fault-tolerant quantum computing suggests that leveraging error-syndrome information at the logical layer can substantially improve performance, including the estimation of logical observables from noisy states. In this work, based on quantum estimation theory, we develop an information-theoretic framework to quantify the utility of error syndromes for noisy logical observable estimation. We distinguish two operational regimes of such syndrome-aware protocols: classical protocols, in which the logical measurement basis is fixed and syndrome information is used only in classical post-processing, and quantum protocols, in which the logical quantum control can be tailored to depend on the observed error syndrome. For classical syndrome-aware protocols, we prove a universal limitation: on average, syndrome information can improve the effective logical error rate by at most a factor of two, implying at most a quadratic reduction in sampling overhead. In contrast, once syndrome-conditioned quantum control is permitted, we exhibit settings in which the effective logical error rate decays exponentially with the number of logical qubits. These findings provide fundamental guidance for designing future fault-tolerant architectures that actively exploit syndrome records rather than discarding them after decoding.

Parsimonious Quantum Low-Density Parity-Check Code Surgery

Andrew C. Yuan, Alexander Cowtan, Zhiyang He, Ting-Chun Lin, Dominic J. Williamson

2603.05082 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a more efficient method for quantum code surgery in quantum Low-Density Parity-Check codes, reducing the number of ancilla qubits needed from linear to logarithmic scaling when measuring logical operators. The work improves the overhead costs of fault-tolerant quantum computing schemes that rely on measuring logical operators within error-correcting codes.

Key Contributions

  • Development of O(W log W) ancilla system construction for measuring logical Pauli operators of weight W
  • Asymptotic overhead reduction across various quantum code surgery schemes in qLDPC codes
quantum error correction fault-tolerant quantum computing qLDPC codes quantum code surgery logical operators
View Full Abstract

Quantum code surgery offers a flexible, low-overhead framework for executing logical measurements within quantum error-correcting codes. It encompasses several fault-tolerant logical computation schemes, including parallel surgery, universal adapters and fast surgery, and serves as the key primitive in extractor architectures. The efficiency of these schemes crucially depends on constructing low-overhead ancilla systems for measuring arbitrary logical operators in general quantum Low-Density Parity-Check (qLDPC) codes. In this work, we introduce a method to construct an ancilla system of qubit size $O(W \log W)$ to measure an arbitrary logical Pauli operator of weight $W$ in any qLDPC stabilizer code. This new construction immediately reduces the asymptotic overhead across various quantum code surgery schemes.

Quantum Weight Reduction with Layer Codes

Andrew C. Yuan, Nouédyn Baspin, Dominic J. Williamson

2603.04883 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper introduces a new quantum weight reduction method that makes quantum error correction codes easier to implement by replacing components of existing codes with surface code patches connected together. The method achieves lower weight checks and qubit degrees than existing approaches, making the codes more practical for modular quantum computing architectures.

Key Contributions

  • Novel quantum weight reduction procedure achieving check weight 6 and qubit degree 6
  • Introduction of Layer Codes formed by connecting surface code patches for practical implementation
quantum error correction surface codes Calderbank-Shor-Steane codes weight reduction fault tolerance
View Full Abstract

Quantum weight reduction procedures ease the implementation of quantum codes by sparsifying them, resulting in low-weight checks and low-degree qubits. However, to date, only a few quantum weight reduction methods have been explored. In this work we introduce a simple and general procedure for quantum weight reduction that achieves check weight 6 and total qubit degree 6, lower than existing procedures at the cost of a potentially larger qubit overhead. Our quantum weight reduction procedure replaces each qubit and check in an arbitrary Calderbank-Shor-Steane code with an ample patch of surface code; these patches are then joined together to form a geometrically nonlocal Layer Code. This is a quantum analog of the simple classical weight reduction procedure where each bit and check is replaced by a repetition code. Due to the simplicity of our weight reduction procedure, bounds on the weight and degree of the resulting code follow directly from the Layer Code construction and hence are easily verified by inspection. Our procedure is well suited for implementation in modular architectures that consist of surface code patches networked via long-range interconnects.
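
To give a flavour of classical weight reduction, one simple variant splits any high-weight parity check into lower-weight checks linked by auxiliary bits. The sketch below implements that check-splitting variant; it is a hypothetical illustration, not the paper's construction, whose quantum analogue instead replaces each qubit and check with a surface-code patch.

```python
def weight_reduce(checks, n_bits, wmax=3):
    """Split any parity check of weight > wmax into two checks that share one
    new auxiliary bit; repeat until every check has weight <= wmax.
    Checks are given as sorted lists of bit indices (their supports).
    XOR-ing the two child checks recovers the parent, so the code's
    constraints are preserved on the original bits."""
    todo = [sorted(c) for c in checks]
    done = []
    nxt = n_bits  # index to assign to the next auxiliary bit
    while todo:
        c = todo.pop()
        if len(c) <= wmax:
            done.append(c)
        else:
            head, tail = c[:wmax - 1], c[wmax - 1:]
            todo.append(head + [nxt])  # weight exactly wmax
            todo.append(tail + [nxt])  # may still be too heavy; re-split
            nxt += 1
    return done, nxt
```

A single weight-6 check over bits 0-5 is reduced to four checks of weight at most 3 using three auxiliary bits, mirroring how sparsification trades qubit overhead for low-weight checks.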

HyQBench: A Benchmark Suite for Hybrid CV-DV Quantum Computing

Shubdeep Mohapatra, Yuan Liu, Eddy Z. Zhang, Huiyang Zhou

2603.04398 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper introduces HyQBench, a benchmarking framework for hybrid quantum systems that combine continuous-variable (CV) and discrete-variable (DV) quantum computing approaches. The researchers developed a simulation tool and created standardized benchmarks to evaluate the performance and capabilities of these hybrid quantum systems across various computational tasks.

Key Contributions

  • Development of HyQBench simulation and benchmarking framework for hybrid CV-DV quantum circuits using Bosonic Qiskit
  • Creation of standardized benchmark suite including cat state generation, GKP states, hybrid quantum Fourier transform, and Shor's algorithm
  • Definition of CV-DV-specific feature maps and metrics for evaluating circuit complexity, scalability, and hardware requirements
hybrid quantum computing continuous variable discrete variable quantum benchmarking Shor's algorithm
View Full Abstract

Hybrid continuous-variable (CV)-discrete-variable (DV) quantum systems present a promising direction for quantum computing by combining the high dimensional encoding capabilities of qumodes with the control offered by DV qubits on the coupled qumodes. There has been exciting recent progress in hybrid CV-DV quantum computing, including variational algorithms, error correction, compiler-level optimizations for Hamiltonian simulation, etc. However, there is a lack of a standardized CV-DV benchmark suite for assessing various emerging hardware platforms and evaluating software optimizations on hybrid CV-DV circuits. In this work, we introduce a simulation and benchmarking framework for hybrid CV-DV circuits, implemented using Bosonic Qiskit, a tool specifically designed to model CV-DV systems, along with QuTiP for functional correctness verification. We construct and characterize representative CV-DV benchmarks, including cat state generation, GKP state generation, CV-DV state transfers, hybrid quantum Fourier transform, variational quantum algorithms, Hamiltonian simulation, and Shor's algorithm. To assess circuit complexity and scalability, we define a feature map organized into two categories: general features (e.g., qubit/qumode count, gate counts) and CV-DV-specific features (e.g., Wigner negativity, energy, truncation cost). These metrics enable evaluation of both classical simulability and hardware resource requirements. Our results, including one benchmark on real hardware, demonstrate that hybrid CV-DV architectures are not only viable but well-suited for a range of computational tasks, from optimization to Hamiltonian simulation. This framework lays the groundwork for systematic evaluation and future development of hybrid quantum systems.

On Error Thresholds for Pauli Channels: Some answers with many more questions

Avantika Agarwal, Alan Bu, Amolak Ratan Kalra, Debbie Leung, Luke Schaeffer, Graeme Smith

2603.04357 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper analyzes error thresholds for quantum error correction codes, specifically studying how well different stabilizer codes can protect quantum information from Pauli noise channels. The researchers compute bounds on error rates that quantum codes can tolerate and discover some codes perform better when combined than theory predicts.

Key Contributions

  • Numerical computation of lower bounds for error thresholds in Pauli channels using coset weight enumerators
  • Discovery of significant non-additivity in concatenated stabilizer codes and closed-form expressions for repetition code concatenations
  • Optimization of channel parameters for maximal non-additivity and threshold estimates for large concatenated codes
error correction stabilizer codes Pauli channels error thresholds concatenated codes
View Full Abstract

This paper focuses on error thresholds for Pauli channels. We numerically compute lower bounds for the thresholds using the analytic framework of coset weight enumerators pioneered by DiVincenzo, Shor and Smolin in 1998. In particular, we study potential non-additivity of a variety of small stabilizer codes and their concatenations, and report several new concatenated stabilizer codes of small length that show significant non-additivity. We also give a closed-form expression for the coset weight enumerators of concatenated phase- and bit-flip repetition codes. Using insights from this formalism, we estimate the threshold for concatenated repetition codes of large lengths. Finally, for several concatenations of small stabilizer codes we optimize for channels which lead to maximal non-additivity at the hashing point of the corresponding channel. We supplement these results with a discussion on the performance of various stabilizer codes from the perspective of the non-additivity and threshold problem. We report both positive and negative results, and highlight some counterintuitive observations, to support subsequent work on lower bounds for error thresholds.
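
The "hashing point" referenced in the abstract is the error rate at which the hashing lower bound on the channel's rate hits zero. For the canonical depolarizing channel this is straightforward to compute numerically, as in the sketch below; for a general Pauli channel one substitutes its probability vector. Function names are illustrative.

```python
import math

def hashing_rate(p):
    """Hashing lower bound on the depolarizing channel's achievable rate:
    R = 1 - H({1-p, p/3, p/3, p/3}), with H the Shannon entropy in bits.
    For a general Pauli channel, use its probability vector instead."""
    probs = [1 - p, p / 3, p / 3, p / 3]
    H = -sum(q * math.log2(q) for q in probs if q > 0)
    return 1 - H

def hashing_point(tol=1e-10):
    """Bisect for the hashing point: the depolarizing rate where R = 0."""
    lo, hi = 0.0, 0.25
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if hashing_rate(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

The bisection recovers the well-known depolarizing hashing point of about 18.9%; non-additivity results of the kind the paper reports push the achievable threshold above this value.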

Magic state distillation with permutation-invariant codes and a two-qubit example

Heather Leitch, Yingkai Ouyang

2603.04310 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a new magic state distillation protocol that uses permutation-invariant codes as small as two qubits to create clean quantum states needed for fault-tolerant quantum computing. The protocol achieves better performance than previous methods by allowing non-Clifford gates and flexible output states, with a 0.5 error threshold and can distill magic states with arbitrary magic levels.

Key Contributions

  • Novel magic state distillation protocol using permutation-invariant codes with minimal two-qubit overhead
  • Achievement of 0.5 error threshold and 1/2 distillation rate surpassing comparable schemes
  • Flexible protocol that can distill magic states with arbitrary magic levels by varying ideal input state positions
magic state distillation fault-tolerant quantum computation permutation-invariant codes non-Clifford gates gate teleportation
View Full Abstract

Magic states, by allowing non-Clifford gates through gate teleportation, are important building blocks of fault-tolerant quantum computation. Magic state distillation protocols aim to create clean copies of magic states from many noisier copies. However, the prevailing protocols require substantial qubit overhead. We present a distillation protocol based on permutation-invariant gnu codes, as small as two qubits. The two-qubit protocol achieves a 0.5 error threshold and 1/2 distillation rate, surpassing prior schemes for comparable codes. Our protocol furthermore distils magic states with arbitrary magic by varying the position of the ideal input states on the Bloch sphere. We achieve this by departing from the usual magic state distillation formalism, allowing the use of non-Clifford gates in the distillation protocol, and allowing the form of the output state to differ from the input state. Our protocol can be used in tandem with existing magic state distillation protocols to enhance their performance.

Minimum Weight Decoding in the Colour Code is NP-hard

Mark Walters, Mark L. Turner

2603.04234 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proves that exact decoding of the colour code, a promising quantum error correction scheme, is computationally intractable (NP-hard). Unlike the surface code which can be decoded efficiently, colour code decoding cannot be solved exactly in polynomial time unless P=NP.

Key Contributions

  • Proves that minimum weight decoding in the colour code is NP-hard
  • Establishes fundamental computational limitations that distinguish colour codes from surface codes
quantum error correction colour code topological codes computational complexity NP-hard
View Full Abstract

All utility-scale quantum computers will require some form of Quantum Error Correction in which logical qubits are encoded in a larger number of physical qubits. One promising encoding is known as the colour code which has broad applicability across all qubit types and can decisively reduce the overhead of certain logical operations when compared to other two-dimensional topological codes such as the surface code. However, whereas the surface code decoding problem can be solved exactly in polynomial time by finding minimum weight matchings in a graph, prior to this work, it was not known whether exact and efficient colour code decoding was possible. Optimism in this area, stemming from the colour code's significant structure and well understood similarities to the surface code, fanned this uncertainty. In this paper we resolve this, proving that exact decoding of the colour code is NP-hard -- that is, there does not exist a polynomial time algorithm unless P=NP. This highlights a notable contrast to some of the colour code's key competitors, such as the surface code, and motivates continued work in the narrower space of heuristic and approximate algorithms for fast, accurate and scalable colour code decoding.

Achieving Optimal-Distance Atom-Loss Correction via Pauli Envelope

Pengyu Liu, Shi Jie Samuel Tan, Eric Huang, Umut A. Acar, Hengyun Zhou, Chen Zhao

2603.04156 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new methods to correct atom loss errors in neutral-atom quantum computers, which account for over 40% of physical errors. The researchers propose a 'Pauli Envelope' framework with improved syndrome extraction circuits and decoders that achieve better error correction performance than existing approaches.

Key Contributions

  • Pauli Envelope framework for bounding atom loss effects with efficient computation
  • Mid-SWAP syndrome extraction circuit that reduces error propagation without additional overhead
  • Envelope-MLE decoder achieving optimal effective code distance for atom-loss errors
  • Envelope-Matching decoder providing improved performance within MWPM framework
quantum error correction atom loss neutral atom quantum computing syndrome extraction quantum decoding
View Full Abstract

Atom loss is a major error source in neutral-atom quantum computers, accounting for over 40% of the total physical errors in recent experiments. Unlike Pauli errors, atom loss poses significant challenges for both syndrome extraction and decoding due to its nonlinearity and correlated nature. Current syndrome extraction circuits either require additional physical overhead or do not provide optimal loss tolerance. On the decoding side, existing methods are either computationally inefficient, achieve suboptimal logical error rates, or rely on machine learning without provable guarantees. To address these challenges, we propose the Pauli Envelope framework. This framework constructs a Pauli envelope that bounds the effect of atom loss while remaining low weight and efficiently computable. Guided by this framework, we first design a new atom-replenishing syndrome extraction circuit, the Mid-SWAP syndrome extraction, that reduces error propagation with no additional space-time cost. We then propose an optimal decoder for Mid-SWAP syndrome extraction: the Envelope-MLE decoder formulated as an MILP that achieves optimal effective code distance d_loss ~ d for atom-loss errors. Inspired by the exclusivity constraint of the optimal decoder, we also propose an Envelope-Matching decoder to approximately enforce the exclusivity constraint within the MWPM framework. This decoder achieves d_loss ~ 2d/3, surpassing the previous best algorithmic decoder, which achieves d_loss ~ d/2 even with an MILP formulation. Circuit-level simulations demonstrate that our approach attains up to 40% higher thresholds and 30% higher effective distances compared with existing algorithmic decoders and syndrome extraction circuits in the loss-dominated regime. On recent experimental data, our Envelope-MLE decoder improves the error suppression factor of a hybrid MLE-machine-learning decoder from 2.14 to 2.24.

Efficient Time-Aware Partitioning of Quantum Circuits for Distributed Quantum Computing

Raymond P. H. Wu, Chathu Ranaweera, Sutharshan Rajasegarar, Ria Rushin Joseph, Jinho Choi, Seng W. Loke

2603.04126 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper develops a time-aware algorithm based on beam search to efficiently partition quantum circuits across multiple quantum processing units in distributed quantum computing networks. The algorithm minimizes communication costs between remote quantum processors while providing significant computational speedup over existing methods.

Key Contributions

  • Time-aware beam search heuristic for quantum circuit partitioning in distributed systems
  • Algorithm with quadratic scaling in qubits and linear scaling in circuit depth, providing computational speedup over metaheuristics
  • Demonstrated reduction in quantum communication overhead across various circuit sizes and network topologies
distributed quantum computing quantum circuit partitioning quantum communication beam search quantum teleportation
View Full Abstract

To overcome the physical limitations of scaling monolithic quantum computers, distributed quantum computing (DQC) interconnects multiple smaller-scale quantum processing units (QPUs) to form a quantum network. However, this approach introduces a critical challenge, namely the high cost of quantum communication between remote QPUs incurred by quantum state teleportation and quantum gate teleportation. To minimize this communication overhead, DQC compilers must strategically partition quantum circuits by mapping logical qubits to distributed physical QPUs. Static graph partitioning methods are fundamentally ill-equipped for this task as they ignore execution dynamics and underlying network topology, while metaheuristics require substantial computational runtime. In this work, we propose a heuristic based on beam search to solve the circuit partitioning problem. Our time-aware algorithm incrementally constructs a low-cost sequence of qubit assignments across successive time steps to minimize overall communication overhead. The time and space complexities of the proposed algorithm scale quadratically with the number of qubits and linearly with circuit depth, offering a significant computational speedup over common metaheuristics. We demonstrate that our proposed algorithm consistently achieves significantly lower communication costs than static baselines across varying circuit sizes, depths, and network topologies, providing an efficient compilation tool for near-term distributed quantum hardware.
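Beam search itself is a generic heuristic; the time-aware idea can be sketched in a few lines. This is a toy model, not the authors' implementation: the cost model (one unit per qubit teleport, one per cross-QPU gate teleport), the balanced-capacity constraint, and the exhaustive per-step enumeration are all assumptions made here for clarity. The paper's incremental construction avoids that enumeration to reach quadratic scaling in qubit count.

```python
from itertools import product

def balanced_assignments(n_qubits, n_qpus):
    """All placements of qubits on QPUs with balanced capacity."""
    cap = n_qubits // n_qpus
    for a in product(range(n_qpus), repeat=n_qubits):
        if all(a.count(q) <= cap for q in range(n_qpus)):
            yield a

def beam_partition(n_qubits, n_qpus, layers, beam=4):
    """Time-aware beam search: after each circuit layer, keep only the
    `beam` cheapest qubit-to-QPU assignments. Toy cost model: one unit
    per qubit moved between QPUs (state teleportation) and one per
    cross-QPU two-qubit gate (gate teleportation)."""
    states = sorted((0, a) for a in balanced_assignments(n_qubits, n_qpus))[:beam]
    for gates in layers:                        # gates: [(q1, q2), ...]
        candidates = []
        for cost, prev in states:
            for a in balanced_assignments(n_qubits, n_qpus):
                moves = sum(p != q for p, q in zip(prev, a))
                cuts = sum(a[u] != a[v] for u, v in gates)
                candidates.append((cost + moves + cuts, a))
        states = sorted(candidates)[:beam]      # prune to beam width
    return states[0]

# Two layers on 4 qubits over 2 QPUs: only the (1, 2) gate must cross,
# so the best schedule pays a single gate teleport (total cost 1).
cost, assign = beam_partition(4, 2, [[(0, 1), (2, 3)], [(1, 2)]])
```

Keeping a beam of low-cost partial assignments, rather than one greedy choice, is what lets the search trade a qubit move now against cheaper gate teleports in later layers.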

Spectrally Corrected Polynomial Approximation for Quantum Singular Value Transformation

Krishnan Suresh

2603.03998 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper improves Quantum Singular Value Transformation (QSVT) by developing a spectral correction method that uses prior knowledge of some eigenvalues to create more efficient polynomial approximations. The approach achieves up to 5× reduction in quantum circuit depth while maintaining high fidelity, demonstrated on solving linear systems like the Poisson equation.

Key Contributions

  • Development of spectral correction method for QSVT that exploits prior eigenvalue knowledge to reduce polynomial degree
  • Demonstration of up to 5× circuit depth reduction while maintaining unit fidelity on linear system solving problems
  • Framework that is agnostic to base polynomial choice and robust to eigenvalue perturbations up to 10%
quantum singular value transformation QSVT polynomial approximation linear systems circuit depth optimization
View Full Abstract

Quantum Singular Value Transformation (QSVT) provides a unified framework for applying polynomial functions to the singular values of a block-encoded matrix. QSVT prepares a state proportional to $\mathbf{A}^{-1}\mathbf{b}$ with circuit depth $O(d\cdot\mathrm{polylog}(N))$, where $d$ is the polynomial degree of the $1/x$ approximation and $N$ is the size of $\mathbf{A}$. Current polynomial approximation methods are over the continuous interval $[a,1]$, giving $d = O(\sqrt{\kappa}\log(1/\varepsilon))$, and make no use of any properties of $\mathbf{A}$. We observe here that QSVT solution accuracy depends only on the polynomial accuracy at the eigenvalues of $\mathbf{A}$. When all $N$ eigenvalues are known exactly, a pure spectral polynomial $p_{S}$ can interpolate $1/x$ at these eigenvalues and achieve unit fidelity at reduced degree. But its practical applicability is limited. To address this, we propose a spectral correction that exploits prior knowledge of $K$ eigenvalues of $\mathbf{A}$. Given any base polynomial $p_0$, such as Remez, of degree $d_0$, a $K\times K$ linear system enforces exact interpolation of $1/x$ only at these $K$ eigenvalues without increasing $d_0$. The spectrally corrected polynomial $p_{SC}$ preserves the continuous error profile between eigenvalues and inherits the parity of $p_0$. QSVT experiments on the 1D Poisson equation demonstrate up to a $5\times$ reduction in circuit depth relative to the base polynomial, at unit fidelity and improved compliance error. The correction is agnostic to the choice of base polynomial and robust to eigenvalue perturbations up to $10\%$ relative error. Extension to the 2D Poisson equation suggests that correcting a small fraction of the spectrum may suffice to achieve fidelity above $0.999$.
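The correction step can be sketched numerically. In the sketch below, a least-squares odd polynomial stands in for the Remez base polynomial $p_0$, the eigenvalues are invented, and the first $K$ odd monomials serve as the degree-preserving correction basis; none of these choices are claimed to match the paper's, but the $K\times K$ system is the same idea: force $p_{SC}(\lambda)=1/\lambda$ at the known eigenvalues without raising the degree.

```python
import numpy as np

# Toy setup: interval [a, 1], odd base polynomial of degree d0
# (least-squares stand-in for a Remez approximation of 1/x),
# and K "known" eigenvalues -- all values invented for illustration.
a, d0, K = 0.1, 9, 3
xs = np.linspace(a, 1.0, 400)
odd_deg = np.arange(1, d0 + 1, 2)          # odd monomials x, x^3, ..., x^9

coef0, *_ = np.linalg.lstsq(xs[:, None] ** odd_deg, 1.0 / xs, rcond=None)

eigs = np.array([0.12, 0.35, 0.8])         # K known eigenvalues of A

# K x K linear system: solve for a correction over the first K odd
# monomials so that p_SC(lam) = 1/lam exactly at each known eigenvalue.
Phi = eigs[:, None] ** odd_deg[:K]
residual = 1.0 / eigs - (eigs[:, None] ** odd_deg) @ coef0
delta = np.linalg.solve(Phi, residual)

def p_sc(x):
    """Spectrally corrected polynomial: same degree d0 as the base."""
    x = np.asarray(x, dtype=float)
    return (x[..., None] ** odd_deg) @ coef0 + (x[..., None] ** odd_deg[:K]) @ delta
```

Because the correction reuses monomials already of degree at most $d_0$ and of odd parity, the interpolation constraints are met with no increase in polynomial degree, which is what translates into reduced QSVT circuit depth.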

Overflow-Safe Polylog-Time Parallel Minimum-Weight Perfect Matching Decoder: Toward Experimental Demonstration

Ryo Mikami, Hayata Yamasaki

2603.03776 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an improved algorithm for quantum error correction that can decode errors much faster than existing methods by solving the minimum-weight perfect matching problem in polylogarithmic time rather than polynomial time. The key innovation is using a truncated polynomial ring framework that prevents numerical overflow issues and reduces memory requirements by over 99.9% while maintaining the speed advantage.

Key Contributions

  • Development of overflow-safe polylog-time parallel MWPM decoder using truncated polynomial ring framework
  • Reduction of arithmetic bit length requirements by over 99.9% while preserving polylogarithmic runtime scaling
  • Hardware-friendly implementation using only bitwise XOR and shift operations
fault-tolerant quantum computation quantum error correction minimum-weight perfect matching polylogarithmic time determinant-based decoding
View Full Abstract

Fault-tolerant quantum computation (FTQC) requires fast and accurate decoding of quantum errors, which is often formulated as a minimum-weight perfect matching (MWPM) problem. A determinant-based approach has been proposed as a promising method to surpass the conventional polynomial runtime of MWPM decoding via the blossom algorithm, asymptotically achieving polylogarithmic parallel runtime. However, the existing approach requires an impractically large bit length to represent intermediate values during the computation of the matrix determinant; moreover, when implemented on a finite-bit machine, the algorithm cannot detect overflow, and therefore, the mathematical correctness of such algorithms cannot be guaranteed. In this work, we address these issues by presenting a polylog-time MWPM decoder that detects overflow in finite-bit representations by employing an algebraic framework over a truncated polynomial ring. Within this framework, all arithmetic operations are implemented using bitwise XOR and shift operations, enabling efficient and hardware-friendly implementation. Furthermore, with algorithmic optimizations tailored to the structure of the determinant-based approach, we reduce the arithmetic bit length required to represent intermediate values in the determinant computation by more than $99.9\%$, while preserving its polylogarithmic runtime scaling. These results open the possibility of a proof-of-principle demonstration of the polylog-time MWPM decoding in the early FTQC regime.
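The XOR-and-shift arithmetic the abstract mentions is the natural arithmetic of a truncated polynomial ring over GF(2), i.e. GF(2)[x]/x^n, where polynomial coefficients are bits and addition is XOR. A generic illustration of carry-less multiplication in that ring (not the paper's decoder):

```python
def clmul_trunc(a: int, b: int, n: int) -> int:
    """Multiply two GF(2)[x] polynomials (bit i = coefficient of x^i)
    using only XOR and shifts, then truncate to degree < n, i.e. work
    in the ring GF(2)[x] / x^n. Illustrative, not the paper's decoder."""
    result = 0
    shift = 0
    while b:
        if b & 1:                  # coefficient of x^shift in b is 1
            result ^= a << shift   # add (XOR) a * x^shift
        b >>= 1
        shift += 1
    return result & ((1 << n) - 1)  # drop terms of degree >= n

# (x + 1) * (x + 1) = x^2 + 1 over GF(2): 0b11 * 0b11 = 0b101
assert clmul_trunc(0b11, 0b11, 8) == 0b101
# Truncating at x^2 discards the x^2 term, leaving just 1
assert clmul_trunc(0b11, 0b11, 2) == 0b001
```

Working modulo x^n caps the bit length of every intermediate value, which is the mechanism that makes overflow detectable and keeps the arithmetic hardware-friendly.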

Resource-Efficient Emulation of Majorana Zero Mode Braiding on a Superconducting Trijunction

Rahul Signh, Weixin Lu, Kaelyn J Ferris, Javad Shabani

2603.03645 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a more efficient method for simulating Majorana zero modes (exotic quantum particles) on quantum computers, specifically focusing on braiding operations that could enable fault-tolerant quantum computing. The authors develop direct braiding operators that reduce the computational overhead compared to previous simulation approaches that required very deep quantum circuits.

Key Contributions

  • Development of resource-efficient direct braiding operators for MZM simulation
  • Generalization of the method to extended trijunction architectures based on Kitaev chains
majorana zero modes topological quantum computing fault-tolerant quantum computation braiding operations superconducting qubits
View Full Abstract

Topological superconductivity could host quasiparticles that are key candidates for fault-tolerant quantum computation due to their immunity to noise and their non-Abelian exchange statistics. For example, in the case of Majorana Zero Modes (MZM), braiding enables two topologically protected quantum gates. While their direct manipulation in solid-state systems remains experimentally challenging, digital emulation of MZM behavior has provided insight as well as a deeper understanding of controlling these topological quantum systems. This emulation is typically accomplished by mapping the topological and trivial phases of a Majorana system to ferromagnetic and paramagnetic Hamiltonians of a spin-glass model. This approach usually relies on adiabatic evolution of superconducting Hamiltonians, which require circuits with very large depths. In this work, we present a resource-efficient method to emulate MZM braiding in a trijunction geometry using a quantum processor. We introduce direct braiding operators which simulate the evolution more efficiently, reducing the quantum gate overhead. We then further generalize this method to emulate braiding operations in extended trijunction architectures based on Kitaev chains.

Mitigating many-body quantum crosstalk with tensor-network robust control

Nguyen H. Le, Florian Mintert, Eran Ginossar

2603.03639 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper develops a method to suppress quantum crosstalk in large quantum systems by combining tensor network simulations with robust control algorithms. The approach successfully designs high-fidelity quantum operations for up to 50 qubits, achieving order-of-magnitude improvements in performance when unwanted interactions between neighboring qubits are present.

Key Contributions

  • Development of tensor-network based robust control method that overcomes exponential scaling limitations
  • Demonstration of order-of-magnitude fidelity improvements for large-scale quantum operations up to 50 qubits in presence of crosstalk
  • Efficient random sampling technique for noise ensembles combined with GRAPE algorithm for practical implementation
quantum crosstalk robust control tensor networks GRAPE algorithm multi-qubit gates
View Full Abstract

Quantum crosstalk poses a major challenge to scaling up quantum computations as its strength is typically unknown and its effect accumulates exponentially as system size grows. Here, we show that many-body robust control can be utilized to suppress unwanted couplings during multi-qubit gate operations and state preparation. By combining tensor network simulations with the GRAPE algorithm, and leveraging an efficient random sampling over noise ensembles, our method overcomes the exponential scaling of the Hilbert space. We demonstrate its effectiveness for designing control solutions for high-fidelity implementations of parallel X gates and parallel CNOT on a chain of 50 qubits, and for realizing a 30-qubit GHZ state and the ground state of a 20-qubit Heisenberg model. In the presence of many-body quantum crosstalk due to parasitic interaction between neighboring qubits, robust control results in order-of-magnitude improvement in fidelity for large system sizes. These findings pave the way for more reliable operations on near-term quantum processors.

Quantum Lego Power-up: Designing Transversal Gates with Tensor Networks

ChunJun Cao, Brad Lackey

2603.03542 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a new approach using tensor networks and 'quantum lego' formalism to systematically design quantum error-correcting codes that support transversal gates, which are the simplest fault-tolerant quantum gates. The method allows construction of codes with addressable non-Clifford gates like T gates and multi-qubit gates, overcoming limitations of traditional stabilizer code constructions.

Key Contributions

  • Development of tensor network framework for systematic construction of quantum error-correcting codes with transversal gates
  • Construction of new finite-rate code families supporting non-Clifford transversal gates including T, CCZ, and other complex gates
  • Demonstration of addressable transversal gates in holographic codes, reducing overhead for universal fault-tolerant computation
quantum error correction transversal gates fault-tolerant quantum computing tensor networks stabilizer codes
View Full Abstract

Transversal gates are the simplest form of fault-tolerant gates and are relatively easy to implement in practice. Yet designing codes that support useful transversal operations -- especially non-Clifford or addressable gates -- remains difficult within the stabilizer formalism or CSS constructions alone. We show that these limitations can be overcome using tensor-network frameworks such as the quantum lego formalism, where transversal gates naturally appear as global or localized symmetries. Within the quantum lego formalism, small codes carrying desirable symmetries can be "glued" into larger ones, with operator-flow rules guiding how logical symmetries are preserved. This approach enables the systematic construction of codes with addressable transversal single- and multi-qubit gates targeting specific logical qubits regardless of whether the gate is Clifford or not. As a proof of principle, we build new finite-rate code families that support strongly transversal $T$, $CCZ$, $SH$, and Gottesman's $K_3$ gates, structures that are challenging to realize with conventional methods. We further construct holographic and fractal-like codes that admit addressable transversal inter-, meso-, and intra-block $T$, $CS$, and $C^\ell Z$ gates. As a corollary, we demonstrate that the heterogeneous holographic Steane-Reed-Muller black hole code also supports fully addressable transversal inter- and intra-block $CZ$ gates, significantly lowering the overhead for universal fault-tolerant computation.

Generalised All-Optical Cat Correction

Ari John Boon, Olivier Landon-Cardinal, Nicolás Quesada

2603.03263 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper develops improved error correction methods for quantum cat codes using all-optical techniques, showing that higher-order cat codes can dramatically reduce the number of correction iterations needed while using more photons per correction.

Key Contributions

  • Generalized all-optical telecorrection protocol for higher-order cat codes
  • Demonstrated 70x reduction in correction iterations for third-order vs first-order cat codes
  • Introduced probabilistic scheme for correcting state deformation with basis-changing capability
cat codes quantum error correction all-optical telecorrection photonic quantum computing
View Full Abstract

We have generalised an all-optical telecorrection protocol for the higher orders of the cat code, and show that with these higher orders we can achieve target performance at substantially reduced iteration counts at the cost of a higher mean photon-number. We also introduce a probabilistic scheme for correcting deformation of the state, which highlights two interesting abilities of telecorrection: to encode new sets of transformations, and to change the basis of the code. We find that for a target channel fidelity of $99.9\%$ over a channel with $1\text{ dB}$ of loss, a third-order cat code requires $70$ times fewer telecorrection iterations than a first-order one, at a cost of a $3.6$-fold increase in mean photon-number.

Entanglement-Assisted Codes Outside the Stabilizer Framework

Jaszmine DeFranco, Andrew Nemec

2603.03182 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper presents methods for constructing entanglement-assisted quantum error-correcting codes from arbitrary quantum codes by connecting them to erasure channel codes. The work extends beyond traditional stabilizer codes to include new types like permutation-invariant and XP-stabilizer codes.

Key Contributions

  • Novel construction method for entanglement-assisted codes from arbitrary quantum codes via erasure channel association
  • First examples of entanglement-assisted codes outside stabilizer and codeword-stabilized frameworks
  • Compression techniques for degenerate codes with analysis of error-correction trade-offs
entanglement-assisted codes quantum error correction erasure channels stabilizer codes quantum communication
View Full Abstract

We show how entanglement-assisted codes can be constructed from arbitrary quantum codes by associating them with quantum codes for erasure channels. If a subset of physical qubits is correctable for an erasure error, then it naturally forms the receiver's share of a bipartite state that can be used for entanglement-assisted communications, both in the noiseless and noisy ebit error models. In the case of degenerate codes, we show that the receiver's share of the bipartite state can sometimes be compressed, at the cost of potentially reduced error-correction ability in the noisy ebit error model. We also give examples of permutation-invariant and XP-stabilizer entanglement-assisted codes, the first outside of the stabilizer and codeword-stabilized frameworks.

Scaling of silicon spin qubits under correlated noise

Juan S. Rojas-Arias, Leon C. Camenzind, Yi-Hsien Wu, Peter Stano, Akito Noiri, Kenta Takeda, Takashi Nakajima, Takashi Kobayashi, Giordano Scappucci, ...

2603.03051 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper studies how noise correlations between closely-packed silicon spin qubits affect quantum error correction by measuring noise in a five-qubit array. The researchers found that while magnetic field drifts create problematic correlations, charge noise correlations are manageable and compatible with fault-tolerant quantum computing.

Key Contributions

  • Quantified spatial extent of noise correlations in silicon spin qubit arrays and identified two distinct sources: global magnetic drifts and localized charge noise
  • Established that charge noise correlations are moderate and compatible with fault-tolerant quantum error correction with minimal overhead
silicon spin qubits quantum error correction correlated noise fault tolerance scalable quantum computing
View Full Abstract

The path to fault-tolerant quantum computing hinges on hardware that scales while remaining compatible with quantum error correction (QEC). Silicon spin qubits are a leading hardware candidate because they combine industrial fabrication compatibility with a nanoscale footprint that could accommodate millions of qubits on a chip. However, their suitability for QEC remains uncertain since spatially correlated noise naturally emerges from the resulting close proximity of qubits. These correlations increase the likelihood of simultaneous errors and erode the redundancy that QEC depends on. Here we quantify the spatial extent of noise correlations in a five-qubit silicon array and assess their impact on QEC. We identify two distinct sources of correlated noise: global magnetic field drifts that generate perfectly correlated fluctuations, and charge noise from two-level fluctuators that produces short-range correlations decaying within neighboring qubits. While magnetic drifts represent a critical correlated noise source that can compromise QEC, they can be mitigated. In contrast, the measured charge noise correlations are moderate, electrically tunable, and compatible with fault-tolerant operation with minimal qubit overhead. Our results establish quantitative benchmarks for correlated noise and clarify how such correlations impact the viability of quantum error correction in scalable qubit arrays.

QFlowNet: Fast, Diverse, and Efficient Unitary Synthesis with Generative Flow Networks

Inhoe Koo, Hyunho Cha, Jungwoo Lee

2603.03045 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces QFlowNet, a machine learning framework that combines Generative Flow Networks with Transformers to efficiently decompose quantum unitary operations into sequences of quantum gates, achieving 99.7% success rate on 3-qubit benchmarks while generating diverse solution sets.

Key Contributions

  • Novel combination of GFlowNet and Transformers for unitary synthesis that generates diverse solutions rather than single policies
  • Achievement of 99.7% success rate on 3-qubit unitary synthesis benchmark with efficient learning from sparse reward signals
unitary synthesis quantum compilation generative flow networks quantum gates transformers
View Full Abstract

Unitary Synthesis, the decomposition of a unitary matrix into a sequence of quantum gates, is a fundamental challenge in quantum compilation. Prevailing reinforcement learning (RL) approaches are often hampered by sparse reward signals, which necessitate complex reward shaping or long training times, and typically converge to a single policy, lacking solution diversity. In this work, we propose QFlowNet, a novel framework that learns efficiently from sparse signals by pairing a Generative Flow Network (GFlowNet) with Transformers. Our approach addresses two key challenges. First, the GFlowNet framework is fundamentally designed to learn a diverse policy that samples solutions proportional to their reward, overcoming the single-solution limitation of RL while offering faster inference than other generative models like diffusion. Second, the Transformers act as a powerful encoder, capturing the non-local structure of unitary matrices and compressing a high-dimensional state into a dense latent representation for the policy network. Our agent achieves an overall success rate of 99.7% on a 3-qubit benchmark (lengths 1-12) and discovers a diverse set of compact circuits, establishing QFlowNet as an efficient and diverse paradigm for unitary synthesis.

Ultra-low loss piezo-optomechanical low-confinement silicon nitride platform for visible wavelength quantum photonic circuits

Mayank Mishra, Gwangho Choi, Wenhua He, Gina M. Talcott, Katherine Kearney, Michael Gehl, Andrew Leenheer, Daniel Dominguez, Nils T. Otterstrom, Matt ...

2603.02584 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: high

This paper demonstrates an ultra-low loss silicon nitride photonic platform that combines excellent passive optical properties (0.026 dB/cm loss) with active control via piezo-optomechanical actuation, enabling scalable quantum photonic circuits that operate at visible wavelengths with low power consumption and fast reconfiguration.

Key Contributions

  • Achieved ultra-low propagation loss of 0.026 dB/cm at 780 nm in a low-confinement silicon nitride platform
  • Demonstrated piezo-optomechanical phase shifters with MHz bandwidth and 2.8 V·m voltage-length product
  • Combined passive and active properties to enable scalable visible-wavelength quantum photonic circuits
photonic quantum computing silicon nitride piezo-optomechanical low-loss waveguides visible wavelength
View Full Abstract

The stringent demands of photonic quantum computing protocols motivate photonic integrated circuit (PIC) platforms with passive optical properties such as extremely low losses and correspondingly large circuit depths, as well as active optical properties such as high reconfiguration rates, low power dissipation, and minimal crosstalk. At the same time, many quantum photonic resource state generators, such as single-photon sources and quantum memories, require operation in the visible wavelength range. These requirements make the passive optical properties of CMOS-fabricated, ultralow-loss, low-confinement silicon nitride waveguides especially attractive. However, the conventional active properties of these systems based on thermo-optic modulation are plagued by high levels of crosstalk, slow modulation rates, and high power dissipation. Although there have been recent demonstrations of CMOS-fabricated, visible wavelength, piezo-optomechanical PICs that solve the above challenges associated with implementing active functionality, these have made use of high-confinement waveguides with currently demonstrated losses of order $0.3$-$1~\mathrm{dB/cm}$, precluding circuit depths required for scalable quantum algorithms. Here, we demonstrate that combining piezo-optomechanical actuation with a low-confinement, ultra-low loss silicon nitride platform addresses the scalability challenge while enabling high-performance active functionality at visible wavelengths. This platform achieves a propagation loss of $0.026~\mathrm{dB/cm}$ at $780~\mathrm{nm}$, modulation bandwidths in the MHz range, a phase shifter voltage-length product ($V_\pi L$) of approximately $2.8~\mathrm{V\cdot m}$, and negligible hysteresis. We further demonstrate reconfigurable Mach-Zehnder interferometers based on spiral phase shifters with 0.63 dB loss per phase shifter.

Steering paths mid-flight for fault-tolerance in measurement-based holonomic gates

Anirudh Lanka, Juan Garcia-Nila, Todd A. Brun

2603.02552 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a fault-tolerant framework for implementing holonomic quantum gates using continuous measurements and real-time feedback. The approach can correct errors mid-computation and relaxes strict timing requirements, enabling faster and more robust quantum gate operations.

Key Contributions

  • Fault-tolerant framework for measurement-based holonomic gates with real-time error correction
  • Method to suppress non-Markovian decoherence through quantum Zeno effect
  • Protocol for correcting measurement-induced errors from non-adiabatic effects
  • Relaxation of adiabaticity requirements enabling faster gate implementation
holonomic quantum computation fault tolerance continuous measurement quantum Zeno effect error correction
View Full Abstract

Continuous measurement-based holonomic quantum computation provides a route to universal logical computation in quantum error correcting codes. We introduce a fault-tolerant framework for implementing measurement-based holonomic gates that leverages continuous measurements with real-time feedback. We show that non-Markovian decoherence is intrinsically suppressed through the quantum Zeno effect, while Markovian errors are identified by the decoding of measurement records to reveal the rotated syndrome subspace populated during the evolution. This information enables steering holonomic paths mid-flight to ensure that the final evolution realizes the target logical gate. We further demonstrate that non-adiabatic effects give rise to measurement-induced errors, and we show that these can also be corrected by an analogous protocol. This approach relaxes the stringent adiabaticity requirement and enables faster implementation of holonomic gates.

Constant-Time Surgery on 2D Hypergraph Product Codes with Near-Constant Space Overhead

Kathleen Chang, Zhiyang He, Theodore J. Yoder, Guanyu Zhu, Tomas Jochym-O'Connor

2603.02157 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new techniques for performing fault-tolerant quantum computations on quantum error-correcting codes that dramatically reduce the time overhead from O(d) to constant time O(1) while maintaining very low space requirements. The work focuses on improving 'code surgery' methods that allow logical operations on quantum low-density parity-check codes.

Key Contributions

  • Development of constant-time surgery gadgets for 2D hypergraph product codes that achieve O(1) time overhead
  • Demonstration that performing d surgery operations in O(d) time maintains fault tolerance through amortization
quantum error correction fault-tolerant quantum computing qLDPC codes code surgery hypergraph product codes
View Full Abstract

Generalized code surgery is a versatile and low-overhead technique for performing fault-tolerant computation on quantum low-density parity-check (qLDPC) codes. In many settings, surgery exhibits practical space overheads, while its time overhead remains a bottleneck at $O(d)$ syndrome rounds per operation. In this work, we construct surgery gadgets that perform parallel logical measurements on 2D hypergraph product codes in constant time overhead ($O(1)$) and near-constant space overhead ($\tilde{O}(1)$). The reduced time overhead is a result of amortization, as we show, following the formulation by Cowtan et al. (arXiv:2510.14895), that performing $d$ surgery operations in $O(d)$ time is fault tolerant. Our gadgets combine the strengths of different approaches to fault-tolerant logical operations: they partially retain the flexibility of surgery while achieving overheads comparable to transversal gates. Consequently, they are well-suited for near-term experimental realization and demonstrate new possibilities in the design of gadgets for fast logical computation.

Obstacles to Continuous Quantum Error Correction via Parity Measurements

Anton Halaski, Christiane P. Koch

2603.02106 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper identifies fundamental problems with continuous quantum error correction using parity measurements in circuit quantum electrodynamics platforms. The researchers show that approximating required three-body interactions with two-body couplings corrupts the logical quantum information, limiting practical implementation of continuous error correction.

Key Contributions

  • Demonstrates that common parity-measurement protocols in circuit QED corrupt logical information during continuous operation
  • Identifies that the failure mechanism stems from approximating three-body interactions with two-body couplings to meters
  • Proposes alternative approaches including native three-body interaction architectures and erasure-based encodings
Keywords: quantum error correction, continuous measurements, circuit quantum electrodynamics, parity measurements, stabilizer codes

Full Abstract

Time-continuous quantum error correction, necessary to protect quantum information under time-dependent Hamiltonians, relies on weak continuous syndrome measurements. Implementing these measurements requires a continuous coupling among at least two qubits and a meter, a demanding requirement. We show that, under continuous operation, common parity-measurement protocols in the circuit quantum electrodynamics platform corrupt the logical information. The failure arises from approximating the three-body interaction by a sum of two-body couplings to the meter, which prevents simultaneous suppression of measurement backaction on the logical and error subspaces. We argue that the same mechanism applies more generally beyond the circuit quantum electrodynamics setting. Taken together, our results impose a practical limitation on continuous stabilizer quantum error correction and point to the viable alternatives -- architectures that realize native three-body interactions, or erasure-based encodings in which the error subspace need not be protected.

No More Hooks in the Surface Code: Distance-Preserving Syndrome Extraction for Arbitrary Layouts at Minimum Depth

Yuga Hirai, Shota Ikari, Yosuke Ueno, Yasunari Suzuki

2603.01628 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a new method called ZX interleaving syndrome extraction for quantum error correction in surface codes that eliminates problematic 'hook errors' while maintaining minimum circuit depth. The technique preserves the full fault tolerance distance for any surface code layout, improving upon existing methods that either add circuit overhead or reduce error correction capability.

Key Contributions

  • ZX interleaving syndrome extraction method that preserves full fault distance d for arbitrary surface code layouts at minimum depth
  • Elimination of hook errors without additional circuit depth or simultaneous measurement/CNOT execution requirements
  • Numerical validation showing full fault distance d achievement versus d-1 for existing minimum-depth approaches
Keywords: surface code, quantum error correction, fault tolerance, syndrome extraction, hook errors

Full Abstract

Hook errors are a major challenge in implementing logical operations with the surface code, because they can reduce the fault distance below the code distance. This motivates syndrome-extraction circuits that suppress hook-error effects for the stabilizer layouts that appear during logical operations. However, the existing methods either increase circuit depth or require simultaneous execution of measurements and CNOT gates, both of which introduce additional overheads and degrade the threshold. We propose the ZX interleaving syndrome extraction, which preserves the full fault distance $d$ for any surface-code layout with regular stabilizer tiles at minimum depth, i.e., four layers of CNOT gates, without requiring additional circuit depth or simultaneous execution of measurements and CNOT gates. The key idea is to interleave the Z and X stabilizer tiles so that hook-error edges in the decoding graph are shortened and effectively eliminated. Numerical simulations under uniform depolarizing noise for memory and lattice-surgery experiments confirm that the proposed method achieves a full fault distance of $d$, whereas the best existing minimum-depth approach achieves $d-1$. Since the full fault distance is achievable for any regular tiling layout of the surface code, the proposed method may serve as an indispensable technique for practical fault-tolerant quantum computation.

Sustaining high-fidelity quantum logic in neutral-atom circuits via mid-circuit operations

Rui Lin, You Li, Le-Tian Zheng, Tai-Ran Hu, Si-Yuan Chen, Hong-Ming Wu, Yu-Chen Zhang, Hao-Wen Cheng, Yu-Hao Deng, Zhan Wu, Ming-Cheng Chen, Jun Rui, ...

2603.01612 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a neutral-atom quantum computing system that maintains high gate fidelities (~99.8%) across multiple operational rounds by using mid-circuit cooling and qubit reinitialization to counteract atom loss and heating. The approach enables sustained high-performance operation needed for fault-tolerant quantum error correction.

Key Contributions

  • Demonstration of 99.81% fidelity two-qubit gates with erasure detection in neutral atoms
  • In-circuit Raman sideband cooling and qubit re-initialization maintaining ~99.8% fidelity across multiple rounds
  • Hardware-efficient mid-circuit operations framework enabling sustainable deep quantum circuits
Keywords: neutral atoms, fault-tolerant quantum computing, mid-circuit operations, quantum error correction, gate fidelity

Full Abstract

The realization of fault-tolerant quantum computation hinges on the ability to execute deep quantum circuits while maintaining gate fidelities consistently above error-correction thresholds. Although neutral-atom arrays have recently demonstrated high-fidelity two-qubit gates and early-stage logical quantum processors, sustaining such high performance across deep, repetitive circuits remains a formidable challenge due to cumulative motional heating and atom loss. Here we demonstrate a sustainable neutral-atom framework that overcomes these limitations by integrating a suite of hardware-efficient mid-circuit operations. We report a two-qubit controlled logic gate with a raw fidelity of 99.60(1)%, which is further increased to a fidelity of 99.81(1)% via non-destructive erasure detection. Crucially, by implementing in-circuit Raman sideband cooling and qubit re-initialization, we demonstrate that gate fidelities can be maintained at the ~99.8% level across multiple operational rounds without observable degradation. By actively managing the internal and motional entropy of the system mid-stream, our in-situ refreshable architecture provides a critical pathway for executing the repeated syndrome-extraction cycles required for large-scale, continuous quantum error correction.

QuMeld: A Modular Framework for Benchmarking Qubit Mapping Algorithms

Gabrielius Keibas, Linas Petkevičius

2603.01578 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents QuMeld, an open-source software framework designed to systematically evaluate and compare different algorithms for mapping logical qubits to physical qubits on quantum computers. The framework supports multiple mapping algorithms, quantum computer topologies, and evaluation metrics in a modular design that allows for future extensions.

Key Contributions

  • Development of unified benchmarking framework for qubit mapping algorithms
  • Modular design supporting six algorithms and sixteen quantum computer topologies with extensibility for future additions
Keywords: qubit mapping, quantum circuits, benchmarking framework, quantum computer topologies, compilation optimization

Full Abstract

The qubit mapping problem is a core challenge in quantum computing: logical qubits must be assigned to the physical qubits of a quantum computer. Due to the diversity of quantum computer topologies and circuits, numerous approaches to this problem exist. Finding the best solution for a specific combination of topology and circuit remains difficult, and no unified framework currently exists for systematically evaluating and comparing qubit mapping algorithms across different cases. We present QuMeld, an open-source framework designed to address this gap. The framework currently supports six qubit mapping algorithms, sixteen quantum computer topologies, and multiple evaluation metrics. Its modular design allows integration of new mapping algorithms, quantum circuits, hardware topologies, and evaluation metrics, ensuring extensibility and adaptability to future developments.
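The evaluation task QuMeld automates can be illustrated with a toy cost metric (the function names, the 2x2 grid, and the circuit below are hypothetical and are not QuMeld's actual API): score a logical-to-physical mapping by how many extra hops each two-qubit gate needs on the hardware coupling graph, a rough proxy for the SWAP overhead a router would insert.

```python
from collections import deque

def bfs_distances(adj, start):
    """Shortest hop counts from `start` in an unweighted coupling graph."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def mapping_cost(adj, mapping, gates):
    """Total extra hops a mapping forces on a list of two-qubit gates.

    `mapping` sends logical qubits to physical nodes; each hop beyond 1
    stands in for roughly one SWAP a real router would have to insert.
    """
    cost = 0
    for a, b in gates:
        d = bfs_distances(adj, mapping[a])[mapping[b]]
        cost += d - 1  # gates on adjacent qubits (d == 1) cost nothing
    return cost

# 2x2 grid topology with couplings 0-1, 0-2, 1-3, 2-3.
grid = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
circuit = [(0, 1), (1, 2), (0, 2)]  # logical two-qubit gates

trivial = {0: 0, 1: 1, 2: 2, 3: 3}
print(mapping_cost(grid, trivial, circuit))
```

A benchmarking framework like QuMeld would compare such costs (and deeper metrics) across mapping algorithms and topologies; this sketch only shows the shape of that evaluation.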

Estimating the performance boundary of Gottesman-Kitaev-Preskill codes and number-phase codes

Kai-Xuan Wen, Dong-Long Hu, Shengyong Li, Ze-Liang Xiang

2602.24102 • Feb 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper compares two types of quantum error-correcting codes that encode information in harmonic-oscillator modes such as photons (bosonic codes), the GKP and number-phase codes, to determine which performs better under different noise conditions. The researchers found that the choice between the codes depends critically on the ratio of photon loss to dephasing noise, with a clear crossover when dephasing is about 100 times weaker than loss.

Key Contributions

  • Established quantitative performance boundary between GKP and number-phase codes under photon loss and dephasing noise
  • Developed practical methodology for benchmarking and optimizing bosonic quantum error-correcting codes
  • Identified sharp crossover regime where dephasing strength is approximately two orders of magnitude smaller than loss strength
Keywords: bosonic codes, Gottesman-Kitaev-Preskill, quantum error correction, photon loss, dephasing

Full Abstract

Bosonic quantum error-correcting codes encode logical information in a harmonic oscillator, with the Gottesman-Kitaev-Preskill (GKP) and number-phase (NP) codes representing two fundamentally different encoding paradigms. Although both have been extensively studied, it remains unclear under what physical noise conditions (including photon loss and dephasing) one encoding intrinsically outperforms the other. Here we estimate a quantitative performance boundary between GKP and NP codes under general photon loss-dephasing noise. By optimizing code parameters within each encoding family, we identify the noise regimes in which each code exhibits a fundamental advantage. In particular, we find that the crossover occurs when the dephasing strength is approximately two orders of magnitude smaller than the loss strength, revealing a sharp separation between operational regimes. Beyond this specific comparison, our work establishes a practical and extensible methodology for benchmarking bosonic codes and optimizing their parameters, providing concrete guidance for the experimental selection and deployment of bosonic encodings in realistic noise environments.

A frequency-agile microwave-optical interface for superconducting qubits

Yufeng Wu, Yiyu Zhou, Haoqi Zhao, Danqing Wang, Matthew D. LaHaye, Daniel L. Campbell, Hong X. Tang

2602.24098 • Feb 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper demonstrates a frequency-agile interface that converts microwave signals from superconducting qubits to optical signals for transmission over fiber optic cables. The system overcomes bandwidth limitations by cascading a microwave-to-microwave frequency converter with a microwave-to-optical transducer, enabling quantum communication between distant superconducting quantum processors.

Key Contributions

  • Development of a frequency-agile microwave-optical interface with continuous frequency coverage from 5.0 to 8.5 GHz
  • Demonstration of optical readout of a superconducting qubit detuned by 1.7 GHz from the native transducer frequency
  • Cascaded M2M-M2O architecture enabling heterogeneous superconducting device networking
Keywords: superconducting qubits, microwave-optical transduction, quantum networking, frequency conversion, quantum communication

Full Abstract

Superconducting quantum processors operate at microwave frequencies in millikelvin environments, making it challenging to interconnect distant nodes using conventional microwave wiring. Coherent microwave-to-optical (M2O) transduction enables superconducting quantum networks by interfacing itinerant microwave photons with low-loss optical fiber. However, many state-of-the-art transducers provide efficient conversion only over a narrow frequency span, complicating deployment with heterogeneous superconducting devices that are detuned by gigahertz-scale offsets. Here we demonstrate a frequency-agile microwave-optical interface that overcomes this bandwidth mismatch by cascading an electro-optic M2O transducer with a multimode microwave-to-microwave (M2M) frequency converter, with in situ tunability of the microwave resonances in both stages. Using this architecture, we realize continuous frequency coverage from 5.0 to 8.5 GHz within a single system. As an application relevant to superconducting-qubit networking, we use the cascaded M2M-M2O interface to optically read out a superconducting qubit whose readout resonator is detuned by 1.7 GHz from the native M2O microwave resonance, demonstrating a scalable route toward fiber-linked superconducting quantum nodes.

Optimized Compilation for Distributed Quantum Computing

Michele Bandini, Davide Ferrari, Stefano Carretta, Michele Amoretti

2602.24062 • Feb 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper develops a greedy algorithm to optimize quantum circuit compilation for distributed quantum computing by minimizing the use of Einstein-Podolsky-Rosen (EPR) pairs. The approach groups non-local gates to share EPR pairs and reorders commutative gates to reduce circuit depth and resource consumption.

Key Contributions

  • Greedy algorithm for optimizing EPR pair usage in distributed quantum circuits
  • Circuit compilation strategy that groups non-local gates and reorders commutative operations
Keywords: distributed quantum computing, EPR pairs, quantum compilation, TeleGate, quantum networking

Full Abstract

In many practical applications, quantum algorithms require far more qubits than are available on current noisy intermediate-scale quantum processors. Distributed quantum computing (DQC) is considered a scalable approach to increasing the number of qubits available for computational tasks. In the DQC setting, a quantum compiler must find the best partitioning for the quantum algorithm and then schedule non-local operations intelligently to optimize the consumption of Einstein-Podolsky-Rosen (EPR) pairs. In this work, the focus is on minimizing the use of EPR pairs when the circuit structure allows multiple non-local gates to utilize a single TeleGate operation. This is achieved with a greedy algorithm that explores the circuit and groups together gates that could share an EPR pair, reordering commutative gates when necessary. With this preliminary pass, the compiled circuits show reduced depth and EPR usage. Since the quality of each EPR pair deteriorates quickly, the number of non-local gates using the same EPR pair should also be bounded. This means that, depending on the features of the target quantum network, the user can achieve different levels of optimization. Here, it is shown that this approach brings benefits even under the assumption of a low EPR pair lifetime.
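The grouping idea can be sketched as follows. This is a simplified illustration, not the paper's algorithm: it batches gates only by the processor pair they cross and a lifetime cap, ignoring the commutation and control-structure checks a real TeleGate merge would require; all names and the bound of three gates per pair are assumptions.

```python
def group_nonlocal_gates(gates, partition, max_per_epr=3):
    """Greedily batch non-local gates so consecutive gates crossing the
    same pair of processors can share one EPR pair (a TeleGate group).

    `partition` maps each qubit to its processor; `max_per_epr` bounds how
    many gates may reuse one pair before its quality decays too far.
    """
    groups = []
    current_link, current = None, []
    for a, b in gates:
        link = tuple(sorted((partition[a], partition[b])))
        if link[0] == link[1]:
            continue  # local gate: consumes no EPR pair
        if link == current_link and len(current) < max_per_epr:
            current.append((a, b))  # reuse the open EPR pair
        else:
            if current:
                groups.append(current)
            current_link, current = link, [(a, b)]  # open a fresh pair
    if current:
        groups.append(current)
    return groups

# Qubits 0,1 live on processor A; qubits 2,3 on processor B.
part = {0: "A", 1: "A", 2: "B", 3: "B"}
gates = [(0, 2), (1, 2), (0, 1), (1, 3), (2, 3)]
print(group_nonlocal_gates(gates, part))
```

In this toy run, the three A-B gates collapse into a single group sharing one EPR pair, while the two local gates need none; lowering `max_per_epr` models a shorter EPR lifetime and splits the group.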

3D Integrated Embedded Filters for Superconducting Quantum Circuits

Waqas Ahmad, Gioele Consani, Mohammad Tasnimul Haque, Jacob Dunstan, Brian Vlastakis

2602.24003 • Feb 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a new design for microwave filters used in superconducting quantum computers, where the filters are embedded in printed circuit boards rather than on the qubit chip itself. This approach improves qubit isolation and enables better scaling to larger quantum processors while maintaining high qubit coherence times.

Key Contributions

  • Novel off-chip PCB-embedded Purcell filter design that removes filter components from qubit substrate
  • Demonstration of thousand-fold improvement in qubit isolation with multiplexed readout capability for up to 9 resonators
  • Experimental validation showing compatibility with high-coherence qubits and scalability to large qubit counts
Keywords: superconducting qubits, Purcell filters, quantum readout, qubit coherence, scalable quantum computing

Full Abstract

Microwave filtering for superconducting qubits is a key element of quantum computing technology, enabling high coherence and fast state detection. This work presents the design and implementation of novel microwave Purcell filters for superconducting quantum circuits, integrated within a multilayer printed circuit board (PCB). The off-chip design removes all filter components from the qubit substrate, reducing device complexity, improving layout footprint and allowing better scalability to large qubit counts. Each embedded filter can couple up to nine readout resonators, enabling efficient multiplexed readout. Electromagnetic simulations of the filter predict a thousand-fold improvement in qubit isolation from the readout port. The design was experimentally validated under cryogenic conditions in conjunction with a 35-qubit device, demonstrating compatibility of the PCB-based filter with high-coherence superconducting qubits. The comparison of the measured qubit median T1 of 84 $\mu$s with the expected radiative limit from electromagnetic simulations validated the presence of Purcell filtering in the system.

Characterization of Josephson Junction Aging and Annealing Under Different Environments

Rangga P. Budoyo, Rasanayagam S. Kajen, Bing Wen Cheah, Long H. Nguyen, Rainer Dumke

2602.23888 • Feb 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper studies how Josephson junctions used in quantum computers degrade over time under different storage conditions and how thermal annealing can restore their properties. The researchers found that aging follows predictable patterns and can be controlled through proper storage environments and annealing procedures.

Key Contributions

  • Characterized aging behavior of Josephson junctions following logarithmic curves with fabrication-dependent amplitude and storage-dependent speed
  • Demonstrated that controlled annealing can restore junction properties with environment-dependent effects on resistance
Keywords: Josephson junctions, superconducting qubits, aging, annealing, quantum processors

Full Abstract

Understanding the aging behavior of Josephson junctions and the effect of annealing on junction resistances is important for building large-scale superconducting quantum processors. Here we study the aging of Josephson junctions under different storage conditions, from immediately after fabrication up to 2 to 3 months. We find that the aging follows a logarithmic curve, with the aging amplitude mainly determined by fabrication conditions and the aging speed determined by storage conditions. Junctions stored at ambient laboratory conditions aged faster than junctions stored in a nitrogen atmosphere or vacuum, with the aging speed changing appreciably when the storage condition changed. We also compared the effect of thermal annealing in a nitrogen environment with annealing under ambient conditions up to 250$^\circ$C. We find that in a nitrogen environment the resistances decreased at all temperatures tested, while under ambient conditions the resistances increased at 200$^\circ$C and decreased at 250$^\circ$C. We were unable to decrease the resistance below its initial value, suggesting a lower limit on the range of resistance tuning.
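The reported logarithmic aging can be summarized with an assumed parameterization R(t) = R0(1 + a ln(1 + t/t0)), where the amplitude a would be fabrication-set and the effective timescale storage-dependent. The model form, t0, and all numbers below are illustrative stand-ins, not values from the paper; since the model is linear in a, the least-squares fit is closed form.

```python
import math

def fit_log_aging(times, resistances, r0, t0=1.0):
    """Least-squares fit of the aging amplitude `a` in the assumed model
    R(t) = r0 * (1 + a * ln(1 + t / t0)).

    Linear in `a`, so the fit reduces to a = sum(x*y) / sum(x*x)
    with x = ln(1 + t/t0) and y = R/r0 - 1.
    """
    xs = [math.log(1 + t / t0) for t in times]
    ys = [r / r0 - 1 for r in resistances]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic junction data: amplitude 0.05, sampled over ~3 months (days).
r0, a_true = 100.0, 0.05
days = [1, 3, 7, 14, 30, 60, 90]
data = [r0 * (1 + a_true * math.log(1 + t)) for t in days]

a_fit = fit_log_aging(days, data, r0)
print(round(a_fit, 3))
```

On real data one would fit each storage condition separately and compare the recovered amplitudes and timescales, mirroring the paper's fabrication-versus-storage decomposition.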

Spin stiffness and resilience phase transition in a noisy toric-rotor code

Morteza Zarei, Mohammad Hossein Zarei

2602.23751 • Feb 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper studies how well the toric-rotor code (a type of quantum error-correcting code) can protect quantum information from phase-shift noise. The researchers use mathematical connections to classical physics models to identify a critical noise threshold above which the code loses its protective properties.

Key Contributions

  • Mapped the resilience properties of toric-rotor codes under noise to the Kosterlitz-Thouless phase transition in the classical XY model
  • Developed a quantum formalism for spin stiffness that corresponds to gate fidelity in the logical subspace
  • Identified a critical noise threshold (σc ≈ 0.89) for partial resilience in toric-rotor codes
  • Provided mathematical framework using quantum partition functions for studying correctability in continuous-variable quantum codes
Keywords: quantum error correction, toric-rotor code, phase transition, Kosterlitz-Thouless, topological quantum codes

Full Abstract

We use a quantum formalism for the partition function of the classical $XY$ model to identify a resilience phase transition in a noisy toric-rotor code. Specifically, we consider the toric-rotor code under phase-shift noise described by a von Mises probability distribution and show that the fidelity between the final state after noise and the initial state is proportional to the partition function of the $XY$ model. We map the temperature of the $XY$ model to the width of the noise in the toric-rotor code, such that a Kosterlitz--Thouless phase transition at a critical temperature $T_{c}$ corresponds to a mixed-state phase transition at a critical width $\sigma_c$. To characterize this phase transition, we develop a quantum formalism for the spin stiffness in the $XY$ model and show that it is mapped to the gate fidelity in the logical subspace of the toric-rotor code. In particular, we introduce a topological order parameter that characterizes the resilience of the toric-rotor code to decoherence within the logical subspace. We show that the logical subspace does not exhibit complete resilience to noise, which is a necessary condition for correctability. However, it exhibits partial resilience to noise for widths less than $\sigma_c \approx 0.89$, where the resilience order parameter takes values near $1$ and then drops to zero at $\sigma_c$. We also use our results to shed light on the correctability of toric-rotor codes in higher dimensions $d > 2$. Our work shows that the quantum formalism for partition functions provides a mathematically rigorous framework for studying correctability in continuous-variable quantum codes.
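The central mapping (fidelity proportional to the XY partition function, with noise width playing the role of temperature) can be made concrete with a brute-force, discretized partition function. The four-rotor ring below is purely illustrative: the Kosterlitz-Thouless physics in the paper requires the full 2D lattice, and the angle discretization is an approximation.

```python
import math
from itertools import product

def xy_partition_function(beta, bonds, n_sites, n_angles=16):
    """Discretized partition function of a tiny classical XY model:
    Z = average over angle configurations of exp(beta * sum_<ij> cos(ti - tj)).
    Brute force over n_angles**n_sites configurations, so only viable
    for a handful of rotors.
    """
    angles = [2 * math.pi * k / n_angles for k in range(n_angles)]
    z = 0.0
    for config in product(angles, repeat=n_sites):
        energy = sum(math.cos(config[i] - config[j]) for i, j in bonds)
        z += math.exp(beta * energy)
    return z / n_angles ** n_sites  # normalize the angle measure

# Four rotors on a ring (a 1D stand-in for the paper's 2D lattice).
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
z_hot = xy_partition_function(0.1, ring, 4)   # wide noise / high temperature
z_cold = xy_partition_function(2.0, ring, 4)  # narrow noise / low temperature
print(z_hot, z_cold)
```

As the effective temperature drops (noise narrows), Z grows as aligned configurations dominate; in the paper's mapping this corresponds to the fidelity surviving in the small-width, resilient regime.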

Copy-cup Gates in Tensor Products of Group Algebra Codes

Ryan Tiew, Nikolas P. Breuckmann

2602.23307 • Feb 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops quantum error-correcting codes with built-in constant-depth quantum gates (CZ and CCZ) by analyzing when classical group algebra codes can support specific mathematical structures called copy-cup gates. The researchers connect this problem to graph theory and provide concrete conditions for constructing these enhanced quantum codes.

Key Contributions

  • Established conditions for classical group algebra codes to support copy-cup gates that enable constant-depth CZ and CCZ quantum gates
  • Connected the copy-cup gate problem to perfect matching in graph theory
  • Fully characterized conditions for 2- and 3-copy-cup gates in weight 4 group algebra codes
  • Demonstrated that bivariate bicycle codes lack pre-orientation for copy-cup gates
Keywords: quantum error correction, group algebra codes, constant-depth gates, CZ gates, CCZ gates

Full Abstract

We determine conditions on classical group algebra codes so that they have pre-orientation for cup products and copy-cup gates. This defines quantum codes that have constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates constructed via tensor products of classical group algebra codes, including hypergraph and balanced products. We show that determining the conditions relies on solving the perfect matching problem in graph theory. Conditions are fully determined for the 2- and 3-copy-cup gates, for group algebra codes up to weight 4, including for codes with odd check weight. These include the bivariate bicycle codes, which we show do not have the pre-orientation for either type of copy-cup gate. We show that abelian weight 4 group algebra codes satisfying the non-associative 3-copy-cup gate necessarily have a code distance of 2, whereas codes that satisfy conditions for the symmetric 3-copy-cup gate can have higher distances, and in fact also satisfy conditions for the 2-copy-cup gate. Finally we find examples of quantum codes from the product of abelian group algebra codes that have inter-code constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates.
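The perfect-matching subproblem the authors reduce to can be stated in a few lines of brute force. This is a toy existence checker for small graphs only; how the paper's graphs arise from the group algebra structure is not modeled here, and the examples are generic.

```python
def has_perfect_matching(vertices, edges):
    """Decide whether an undirected graph admits a perfect matching by
    recursively pairing off the first unmatched vertex.

    Exponential in the worst case, which is fine for small instances;
    polynomial algorithms (e.g. Blossom) exist for large graphs.
    """
    if not vertices:
        return True
    if len(vertices) % 2:
        return False  # odd vertex count: no perfect matching possible
    v = vertices[0]
    for u in vertices[1:]:
        if (v, u) in edges or (u, v) in edges:
            rest = [w for w in vertices if w not in (v, u)]
            if has_perfect_matching(rest, edges):
                return True
    return False

# A 4-cycle has a perfect matching; a path on 3 vertices cannot.
cycle4 = ([0, 1, 2, 3], {(0, 1), (1, 2), (2, 3), (3, 0)})
path3 = ([0, 1, 2], {(0, 1), (1, 2)})
print(has_perfect_matching(*cycle4), has_perfect_matching(*path3))
```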

Hyperbolic and Semi-Hyperbolic Floquet Codes for Photonic Quantum Computing

Aygul Azatovna Galimova

2602.22906 • Feb 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper develops new quantum error correcting codes called hyperbolic and semi-hyperbolic Floquet codes that are specifically designed for photonic quantum computing systems. The codes use only simple weight-2 measurements and are tested under various noise models, showing improved performance compared to surface codes for photon-mediated quantum computing applications.

Key Contributions

  • Construction of new hyperbolic Floquet codes from {10,3} and {12,3} tessellations using the LINS algorithm
  • Demonstration that these codes achieve better fault-tolerant performance than surface codes in photonic quantum computing with 2.2x larger fault-tolerant area while encoding 10 logical qubits
Keywords: quantum error correction, Floquet codes, photonic quantum computing, fault tolerance, hyperbolic tessellations

Full Abstract

Tailoring error correcting codes to the structure of the physical noise can reduce the overhead of fault-tolerant quantum computation. Hyperbolic Floquet codes use only weight-2 measurements and can be implemented directly on hardware with native pair measurements. We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations via the Wythoff kaleidoscopic construction with the Low-Index Normal Subgroups (LINS) algorithm. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes. We evaluate these codes under four noise models: phenomenological, ancilla Entangling Measurement (EM3), Single-step Depolarizing EM3 (SDEM3), and erasure. Under phenomenological noise, specific-logical threshold crossings occur near $p_e \approx 0.3$--$0.5\%$ for $\{8,3\}$ ($k=6$--$56$) and $0.15$--$0.2\%$ for $\{10,3\}$ ($k=12$--$146$). EM3 ancilla noise yields a threshold of ${\sim}1.5\%$ for all three families. SDEM3 is a depolarizing noise model motivated by Majorana tetron architectures; fine-grained codes achieve thresholds of ${\sim}1.0$--$1.2\%$ for all three families. The erasure model captures detected photon loss on spin-optical links; fine-grained codes achieve erasure thresholds of ${\sim}8.5$--$9\%$ for $\{8,3\}$, ${\sim}7$--$8\%$ for $\{10,3\}$, and ${\sim}6.5$--$8\%$ for $\{12,3\}$. Photon loss is the dominant error source in photon-mediated quantum computing. Under the full three-parameter SPOQC-2 noise model, the $\{8,3\}$ codes achieve a 2D fault-tolerant area $2.2\times$ that of the surface code compiled to pair measurements while encoding $k = 10$ logical qubits. In a companion paper, we evaluate the same code families in a distributed setting.

Spin-Cat Qubit with Biased Noise in an Optical Tweezer Array

Toshi Kusano, Kosuke Shibata, Chih-Han Yeh, Keito Saito, Yuma Nakamura, Rei Yokoyama, Takumi Kashimoto, Tetsushi Takano, Yosuke Takasu, Ryuji Takagi, ...

2602.22883 • Feb 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper demonstrates the implementation of spin-cat qubits using ytterbium-173 atoms with nuclear spin 5/2 in optical tweezers, showing how these qubits exhibit biased noise that favors dephasing errors over bit-flip errors. The researchers achieved single-qubit gate operations and characterized the noise properties, demonstrating the feasibility of using these qubits for bias-tailored quantum error correction codes.

Key Contributions

  • Demonstration of single-qubit controls for spin-cat qubits in ytterbium-173 with nuclear spin I=5/2
  • Characterization of biased noise in spin-cat qubits showing preference for dephasing errors over bit-flip errors
  • Achievement of covariant SU(2) rotations and benchmarking of gate fidelities for bias-tailored quantum error correction
Keywords: spin-cat qubits, bias-tailored quantum error correction, optical tweezer arrays, nuclear spin, ytterbium

Full Abstract

Bias-tailored quantum error correcting codes (QECCs) offer a higher error threshold than standard QECCs and have the potential to achieve lower logical errors with less space overhead. The spin-cat qubit, encoded in a large nuclear spin-$F$ system, is a promising candidate for bias-tailored QECCs. Yet its feasibility is hindered by the difficulty of performing fast covariant SU(2) rotation with arbitrary rotation angles for nuclear spins and by a lack of noise characterization for gate operations in neutral atom platforms. Here we demonstrate single-qubit controls of ${}^{173}\mathrm{Yb}$ spin-cat qubits with nuclear spin $I=5/2$ in an optical tweezer array. We implement a covariant SU(2) rotation and non-linear rotations by optical beams and achieve an averaged single-Clifford gate fidelity of $0.961_{-5}^{+5}$. The measurement of the coherence time and spin relaxation time shows that the idling error becomes increasingly biased toward dephasing errors as the magnitude of the encoded sublevel $|m_F|$ increases. Furthermore, we benchmark the noise bias of rank-preserving gates on spin-cat qubits, demonstrating a finite bias of $18_{-11}^{+132}$, in contrast to the case of the two-level system in ${}^{171}\mathrm{Yb}$, which shows no bias within the experimental uncertainty. Our work demonstrates the feasibility of spin-cat qubits for realizing bias-tailored QECCs, paving the way for achieving hardware-efficient quantum error correction.

A matching decoder for bivariate bicycle codes

Kaavya Sahay, Dominic J. Williamson, Benjamin J. Brown

2602.22770 • Feb 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new decoding algorithm for bivariate bicycle quantum error-correcting codes using minimum-weight perfect matching, introducing a 'cylinder trick' method that leverages code symmetries to efficiently find error corrections.

Key Contributions

  • Development of matching-based decoder for bivariate bicycle codes using the 'cylinder trick' method
  • Demonstration of improved decoder performance through augmentation with belief propagation and over-matching strategies
Keywords: quantum error correction, bivariate bicycle codes, minimum-weight perfect matching, toric codes, decoder algorithms

Full Abstract

The discovery of new quantum error-correcting codes that encode several logical qubits into relatively few physical qubits motivates the development of efficient and accurate methods of decoding these systems. Here, we adopt the minimum-weight perfect matching algorithm, a subroutine invaluable to decoding topological codes, to decode bivariate bicycle codes. Using the equivalence of bivariate bicycle codes to copies of the toric code, we propose a method we call the 'cylinder trick' to rapidly find a correction using matching on code symmetries. We benchmark our decoder on the gross code family, cyclic hypergraph-product codes, generalized toric codes, and recently proposed directional codes, demonstrating the general applicability of our protocol. For a subset of these codes, we find that our decoder can be significantly improved by augmenting matching with strategies including belief propagation and 'over-matching', thus achieving performance competitive with state-of-the-art approaches.
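The target of any minimum-weight decoder, matching-based or otherwise, is the lowest-weight error consistent with the observed syndrome. The toy below brute-forces that target on a small repetition code (an illustration of the decoding objective only, not the paper's cylinder-trick decoder); MWPM recovers the same correction efficiently at scale by pairing syndrome defects.

```python
from itertools import combinations

def syndrome(error, n):
    """Parity checks of the length-n repetition code: one check per
    adjacent bit pair, flagged when exactly one of the two bits flipped."""
    return tuple(error[i] ^ error[i + 1] for i in range(n - 1))

def min_weight_decode(s, n):
    """Brute-force the lowest-weight error reproducing syndrome `s`.

    Enumerates error patterns in order of increasing weight, so the
    first hit is guaranteed minimum weight; n must stay small.
    """
    for w in range(n + 1):
        for flips in combinations(range(n), w):
            e = [0] * n
            for i in flips:
                e[i] = 1
            if syndrome(e, n) == s:
                return e
    return None

n = 7
true_error = [0, 1, 1, 0, 0, 0, 0]
s = syndrome(true_error, n)       # defects flank the flipped run
correction = min_weight_decode(s, n)
print(correction)
```

Here the weight-2 error is recovered exactly; for degenerate codes a minimum-weight correction need only be logically equivalent to the true error, which is the regime where decoder design choices like those in the paper matter.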

The Road to Useful Quantum Computers

Timothy Proctor, Robin Blume-Kohout, Andrew Baczewski

2602.22540 • Feb 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper provides a comprehensive overview of the current state of quantum computing development, examining the gap between existing quantum computer capabilities and the goal of achieving 'quantum utility' where quantum computers solve practically important problems. The authors analyze the key scientific and engineering challenges that must be overcome to build useful quantum computers.

Key Contributions

  • Comprehensive assessment of current quantum computing capabilities versus requirements for quantum utility
  • Identification and analysis of key scientific and engineering challenges blocking progress toward useful quantum computers
  • Framework for tracking progress from current prototypes to quantum utility applications
quantum utility • quantum algorithms • error correction • quantum computing roadmap • Shor's algorithm
Full Abstract

Building a useful quantum computer is a grand science and engineering challenge, currently pursued intensely by teams around the world. In the 1980s, Richard Feynman and Yuri Manin observed independently that computers based on quantum mechanics might enable better simulations of quantum phenomena. Their vision remained an intellectual curiosity until Peter Shor published his famous quantum algorithm for integer factoring, and shortly thereafter a proof that errors in quantum computations can be corrected. Since then, quantum computing R&D has progressed rapidly, from small-scale experiments in university physics laboratories to well-funded industrial efforts and prototypes. Hype notwithstanding, quantum computers have yet to solve scientifically or practically important problems -- a target often called quantum utility. In this article, we describe the capabilities of contemporary quantum computers, compare them to the requirements of quantum utility, and illustrate how to track progress from today to utility. We highlight key science and engineering challenges on the road to quantum utility, touching on relevant aspects of our own research.

Computing with many encoded logical qubits beyond break-even

Shival Dasu, Matthew DeCross, Andrew Y. Guo, Ali Lavasani, Jan Behrends, Asmae Benhemou, Yi-Hsiang Chen, Karl Mayer, Chris N. Self, Selwyn Simsek, Bas...

2602.22211 • Feb 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates quantum error correction codes that encode many logical qubits while outperforming unencoded qubits, using up to 94 logical qubits on a 98-qubit trapped-ion quantum computer. The researchers achieved 'beyond break-even' performance, where error correction improves rather than degrades computation quality.

Key Contributions

  • First demonstration of beyond break-even performance with high-rate quantum error correction codes using up to 94 logical qubits
  • Implementation of fault-tolerant operations including state preparation, measurement, and quantum simulation on the 98-qubit Quantinuum Helios processor
  • Development of new encoded operation gadgets for iceberg QED and two-level concatenated iceberg QEC codes
quantum error correction • fault tolerance • logical qubits • trapped ions • high-rate codes
Full Abstract

High-rate quantum error correcting (QEC) codes encode many logical qubits in a given number of physical qubits, making them promising candidates for quantum computation. Implementing high-rate codes at a scale that both frustrates classical computing and improves performance by encoding requires both high fidelity gates and long-range qubit connectivity -- both of which are offered by trapped-ion quantum computers. Here, we demonstrate computations that outperform their unencoded counterparts in the high-rate $[[ k+2,\, k,\, 2 ]]$ iceberg quantum error detecting (QED) and $[[ (k_2 + 2)(k_1 + 2),\, k_2k_1,\, 4 ]]$ two-level concatenated iceberg QEC codes, using the 98-qubit Quantinuum Helios trapped-ion quantum processor. Utilizing new gadgets for encoded operations, we realize this "beyond break-even" performance with reasonable postselection rates across a range of fault-tolerant (FT) and partially-fault-tolerant (pFT) component and application benchmarks with between $48$ and $94$ logical qubits. These benchmarks include FT state preparation and measurement, QEC cycle benchmarking, logical gate benchmarking, GHZ state preparation, and a pFT quantum simulation of the three-dimensional $XY$ model of quantum magnetism. Additionally, we illustrate that postselection rates can be suppressed by increasing the code distance via concatenation. Our results represent state-of-the-art logical component and state fidelities and provide evidence that high-rate QED/QEC codes are viable on contemporary quantum computers for near-term beyond-classical-scale computation.
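The code parameters quoted in the abstract above are simple arithmetic in the number of encoded qubits, which makes the high rate of the iceberg family easy to see. A quick sketch — note that the mapping of 94 logical qubits onto a single block of Helios's 98 physical qubits is an illustrative assumption, not a claim about the paper's exact layout:

```python
def iceberg(k):
    """Parameters [[n, k, d]] of the [[k+2, k, 2]] iceberg
    quantum error *detecting* (QED) code: k logical qubits in
    k+2 physical qubits, distance 2."""
    return (k + 2, k, 2)

def concatenated_iceberg(k1, k2):
    """Two-level concatenation from the abstract:
    [[(k2+2)(k1+2), k2*k1, 4]] -- distance 4, so it can
    *correct* (not just detect) single errors."""
    return ((k2 + 2) * (k1 + 2), k2 * k1, 4)

# 94 logical qubits fit in a [[96, 94, 2]] iceberg block,
# within 98 physical qubits (illustrative; rate k/n ~ 0.98).
print(iceberg(94))                 # (96, 94, 2)
print(concatenated_iceberg(4, 4))  # (36, 16, 4)
```

The contrast with low-rate codes is the point: a distance-2 iceberg block costs only two extra qubits regardless of k, whereas surface-code-style encodings spend many physical qubits per logical qubit.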