Quantum Physics Paper Analysis

This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically evaluated, briefly summarized, and assessed for relevance across four key areas:

  • CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
  • Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
  • Quantum Sensing – Metrology, magnetometry, and precision measurement advances
  • Quantum Networking – QKD, quantum repeaters, and entanglement distribution

Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.

Updated automatically as new papers are published. The page shows one week of arXiv publishing (Sunday through Thursday); an archive of previous weeks appears at the bottom.

This Week: Apr 26 - Apr 30, 2026
200 Papers This Week
696 CRQC/Y2Q Total
5976 Total Analyzed

Boundary-Aware Stabilizer Scheduling for Distributed Quantum Error Correction

Sanidhya Gupta, Sanidhay Bhambay, Narges Alavisamani, Neil Walton, Thirupathaiah Vasantam

2604.22471 • Apr 24, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: medium

This paper addresses quantum error correction in distributed quantum computers by developing scheduling algorithms that optimize when to perform error-checking operations across connected quantum processing units. The research focuses on reducing overhead from slow remote operations while maintaining effective error correction, showing improved performance for certain parameter regimes.

Key Contributions

  • Development of Skip-Seam-τ and Adaptive Skip-τ scheduling policies for distributed quantum error correction
  • Demonstration of fault-tolerant scaling behavior with reduced logical error rates compared to baseline approaches
Keywords: quantum error correction, distributed quantum computing, topological codes, fault tolerance, scheduling algorithms

Full Abstract

Future quantum architectures are expected to be modular, connecting multiple quantum processing units (QPUs) via photonic interconnects. In topological quantum error correction, such as color codes, this creates seam boundaries where parity checks require remote CNOT operations using heralded Bell pairs. These non-local checks are slower and noisier than bulk local checks because entanglement generation is probabilistic, causing data qubits to accumulate idle noise while waiting for remote operations. A natural way to reduce this overhead is to skip some seam measurements; however, doing so makes seam syndrome information stale and can degrade decoding. The central scheduling problem is therefore to determine how frequently seam checks should be measured so as to balance remote-operation and waiting noise against syndrome staleness. To address this trade-off, we develop a scheduling module that integrates directly into standard syndrome-extraction circuits. We consider two policies: Skip-Seam-$τ$ (SS-$τ$), which measures all bulk checks every round while measuring seam checks once every $τ$ rounds and copying the most recent syndrome in skipped rounds, and Adaptive Skip-$τ$ (AST), which selects $τ$ as a function of code distance and entanglement generation rate (EGR). We evaluate these policies on triangular color codes under circuit-level noise in Stim, including idling errors induced by Bell-pair generation delays. Our simulations show that SS-$τ$ and AST reduce remote-operation overhead and can lower the logical error rate (LER) relative to the Measure-All (MA) baseline. For physical error rate $p = 10^{-3}$, we identify an EGR regime in which both SS-$τ$ and AST exhibit behavior consistent with fault-tolerant scaling, with LER decreasing as code distance increases. Across these regimes, SS-$τ$ and AST outperform MA.
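The core of an SS-$τ$ style policy can be sketched in a few lines: bulk checks are measured every round, seam checks only every $τ$-th round, with the stale syndrome copied forward in between. This is an illustrative toy (function names, the record format, and the placeholder noise model are invented here, not taken from the paper):

```python
# Toy sketch of a Skip-Seam-tau style schedule: bulk stabilizers are
# measured every round, seam stabilizers only every tau-th round, with the
# most recent seam syndrome copied forward in skipped rounds.
# All names and the flip_p noise placeholder are illustrative.
import random

def run_schedule(n_bulk, n_seam, rounds, tau, flip_p=0.0, seed=0):
    """Return per-round syndrome records under an SS-tau style schedule."""
    rng = random.Random(seed)
    last_seam = [0] * n_seam          # most recently measured seam syndrome
    history = []
    for r in range(rounds):
        bulk = [rng.random() < flip_p for _ in range(n_bulk)]  # every round
        if r % tau == 0:              # seam checks measured this round
            last_seam = [rng.random() < flip_p for _ in range(n_seam)]
            seam_fresh = True
        else:
            seam_fresh = False        # copied (stale) seam syndrome
        history.append({"round": r, "bulk": bulk,
                        "seam": list(last_seam), "seam_fresh": seam_fresh})
    return history

hist = run_schedule(n_bulk=8, n_seam=2, rounds=6, tau=3)
fresh_rounds = [h["round"] for h in hist if h["seam_fresh"]]
print(fresh_rounds)  # seam checks measured in rounds 0 and 3
```

The paper's trade-off lives in the choice of `tau`: larger values cut remote-operation overhead but make the copied seam syndrome staler.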

Loss-biased fault-tolerant quantum error correction

Laura Pecorari, Gavin K. Brennen, Stanimir S. Kondov, Guido Pupillo

2604.21876 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper introduces a technique called 'loss biasing' for neutral-atom quantum computers that converts problematic Rydberg excitation errors into atom loss events, which are easier to handle with error correction. The method enables faster quantum error correction cycles by transforming correlated errors into erasure-like noise that can be more effectively corrected.

Key Contributions

  • Introduction of loss biasing technique to convert Rydberg excitation errors into atom loss for improved error correction
  • Demonstration that loss biasing restores fault-tolerant logical error scaling and enables sub-millisecond QEC cycles
  • Practical implementation pathway using autoionization in alkaline-earth atoms for neutral-atom quantum processors
Keywords: quantum error correction, fault tolerance, neutral atoms, Rydberg states, loss biasing

Full Abstract

We investigate the limits of quantum error correction (QEC) in neutral-atom processors approaching high-fidelity gates and fast cycle times. We show that shorter QEC cycles amplify platform-specific errors, notably Rydberg excitation hopping, and hinder decay of residual Rydberg population, leading to non-Markovian correlated errors that degrade logical performance. To address this, we introduce loss biasing, where spurious Rydberg excitations are rapidly converted into atom loss via mid-circuit ionization, transforming errors into erasure-like noise and suppressing their propagation. Loss biasing restores the fault-tolerant logical error scaling for intra-cycle Pauli errors; furthermore, we argue that when supported with loss-aware decoding, it can achieve the optimal scaling of erasures while enabling shorter QEC cycles with reduced hardware overhead. We outline an implementation using fast autoionization in alkaline-earth(-like) atoms, establishing loss biasing as a practical route toward fault-tolerant quantum computing with sub-millisecond QEC cycles.
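The advantage of converting errors into located loss can be seen in a classical toy: a distance-$d$ code corrects up to $d-1$ erasures (errors with known positions) but only $\lfloor(d-1)/2\rfloor$ unlocated flips. The sketch below decodes a $d$-bit repetition code under both error types; it is a pedagogical stand-in, not the paper's neutral-atom noise model:

```python
# Toy illustration of why converting errors into *located* loss (erasure)
# helps: majority vote on a d-bit repetition code fails beyond (d-1)//2
# unlocated flips, but survives up to d-1 erasures. Illustrative only.

def decode_pauli(bits):
    """Majority vote: fails once more than (d-1)//2 bits are flipped."""
    return 1 if sum(bits) > len(bits) / 2 else 0

def decode_erasure(bits, erased):
    """Majority vote over surviving (non-erased) bits: even one
    surviving bit still reveals the logical value."""
    survivors = [b for b, e in zip(bits, erased) if not e]
    return 1 if sum(survivors) > len(survivors) / 2 else 0

# Unlocated errors: 3 flips on a logical 0 defeat a distance-5 majority vote
assert decode_pauli([1, 1, 1, 0, 0]) == 1   # wrong: the logical value was 0
# ...but erasing those same 3 positions leaves a correct decode
assert decode_erasure([0, 0, 0, 0, 0], [True, True, True, False, False]) == 0
print("erasures easier than unlocated flips")
```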

High-performance cellular automaton decoders for quantum repetition and toric code

Don Winter, Thiago L. M. Guedes, Markus Müller

2604.21866 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper introduces SCALA, a new cellular automaton decoder for quantum error correction that processes errors locally rather than globally. The decoder is designed to be fast, scalable, and robust enough for real-time quantum error correction in large-scale quantum computers.

Key Contributions

  • Development of SCALA, a novel non-hierarchical cellular automaton decoder for quantum error correction
  • Demonstration of scalable local decoding architecture with computational resources independent of system size
  • Achievement of strong performance metrics including 7.5% error threshold and robust scaling for toric codes
Keywords: quantum error correction, cellular automaton decoder, toric code, fault tolerance, scalable quantum computing

Full Abstract

Execution of quantum algorithms on large-scale quantum computers will require extremely low logical error rates, which necessitates the development of scalable decoding architectures. Local decoders are promising candidates for this task, as they avoid the communication and data processing bottlenecks inherent in global decoding strategies. Cellular automaton (CA) decoders represent a distinct class of local decoders, offering a path toward the low-latency, real-time decoding required for practical applications. In this work, we present SCALA (Signaling CA with Local Attraction), a novel non-hierarchical cellular automaton decoder for quantum repetition and toric codes. By evaluating SCALA alongside the hierarchical CA decoder proposed by Harrington, we provide a direct comparison between non-hierarchical and renormalization-group-style local decoding strategies. We characterize SCALA across three key metrics: Performance, scalability, and robustness. Our results show that SCALA achieves a code-capacity threshold of approximately $p_c\approx 7.5\%$ and provides strong sub-threshold scaling of about $p_L\propto p^{d/4}$ on the toric code. In terms of scalability, our non-hierarchical design ensures that the local computational resources remain independent of system size, yielding a modular local architecture suitable for hardware implementation. Finally, SCALA demonstrates strong robustness to qubit measurement errors and noise within the decoder itself, a critical advantage for real-time decoding on noisy hardware. Our results establish SCALA as a high-performance, scalable, and robust local decoder for scalable quantum error correction.
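The flavor of "local attraction" decoding can be conveyed with a drastically simplified toy: syndrome defects on a 1D ring hop toward their nearest partner and annihilate in pairs. SCALA's actual signaling CA is far more sophisticated (and genuinely local); the rule and names below are illustrative inventions:

```python
# Drastically simplified "attraction" toy for repetition-code defects on a
# ring of n sites. A real signaling CA propagates information locally;
# here we cheat and find the nearest partner directly, for illustration.

def attract_step(defects, n):
    """One step: the lowest-indexed defect hops one site toward its
    nearest partner; adjacent pairs annihilate. Assumes an even number
    of defects, as produced by bit-flip errors on a ring."""
    if len(defects) < 2:
        return []
    defects = sorted(defects)
    d, others = defects[0], defects[1:]
    tgt = min(others, key=lambda o: min((o - d) % n, (d - o) % n))
    if min((tgt - d) % n, (d - tgt) % n) == 1:
        return [o for o in others if o != tgt]      # pair annihilates
    step = 1 if (tgt - d) % n <= (d - tgt) % n else -1
    return [(d + step) % n] + others

defects, n, steps = [2, 5], 16, 0
while defects:
    defects = attract_step(defects, n)
    steps += 1
print(steps)  # → 3: the pair at sites 2 and 5 meets after three steps
```

Because each update touches only a defect and its neighborhood, the per-site resources of such rules stay constant as the lattice grows, which is the scalability property the paper emphasizes.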

Replay-buffer engineering for noise-robust quantum circuit optimization

Akash Kundu, Sebastian Feld

2604.21863 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops improved machine learning techniques for optimizing quantum circuits, focusing on better ways to store and reuse training data (replay buffers) to make quantum circuit optimization more efficient and robust to hardware noise.

Key Contributions

  • ReaPER+ annealed replay rule that improves sample efficiency 4-32x over existing methods
  • OptCRLQAS method that reduces optimization wall-clock time by up to 67.5% by amortizing quantum evaluations
  • Lightweight replay-buffer transfer scheme that reduces training steps by 85-90% when transitioning from noiseless to noisy quantum hardware
Keywords: quantum circuit optimization, deep reinforcement learning, replay buffer, noise robustness, quantum architecture search

Full Abstract


Deep reinforcement learning (RL) for quantum circuit optimization faces three fundamental bottlenecks: replay buffers that ignore the reliability of temporal-difference (TD) targets, curriculum-based architecture search that triggers a full quantum-classical evaluation at every environment step, and the routine discard of noiseless trajectories when retraining under hardware noise. We address all three by treating the replay buffer as a primary algorithmic lever for quantum optimization. We introduce ReaPER$+$, an annealed replay rule that transitions from TD error-driven prioritization early in training to reliability-aware sampling as value estimates mature, achieving $4-32\times$ gains in sample efficiency over fixed PER, ReaPER, and uniform replay while consistently discovering more compact circuits across quantum compilation and QAS benchmarks; validation on LunarLander-v3 confirms the principle is domain-agnostic. Furthermore we eliminate the quantum-classical evaluation bottleneck in curriculum RL by introducing OptCRLQAS which amortizes expensive evaluations over multiple architectural edits, cutting wall-clock time per episode by up to $67.5\%$ on a 12-qubit optimization problem without degrading solution quality. Finally we introduce a lightweight replay-buffer transfer scheme that warm-starts noisy-setting learning by reusing noiseless trajectories, without network-weight transfer or $ε$-greedy pretraining. This reduces steps to chemical accuracy by up to $85-90\%$ and final energy error by up to $90\%$ over from-scratch baselines on 6-, 8-, and 12-qubit molecular tasks. Together, these results establish that experience storage, sampling, and transfer are decisive levers for scalable, noise-robust quantum circuit optimization.
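An annealed replay rule of this kind can be sketched generically: sampling weight starts out proportional to TD error and is gradually shifted toward a per-transition reliability score. The linear mixing schedule, the `sample_index` helper, and both scores below are hypothetical stand-ins, not ReaPER$+$'s actual rule:

```python
# Illustrative sketch of an annealed replay-sampling rule in the spirit of
# ReaPER+: early in training sample by TD error, later by reliability.
# The mixing schedule and both scores are hypothetical stand-ins.
import random

def sample_index(td_errors, reliabilities, progress, rng):
    """progress in [0, 1]: 0 = pure TD-error prioritization,
    1 = pure reliability-aware sampling."""
    weights = [(1 - progress) * abs(td) + progress * rel
               for td, rel in zip(td_errors, reliabilities)]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

rng = random.Random(1)
td = [2.0, 0.1, 0.1]          # transition 0 has a large TD error...
rel = [0.0, 1.0, 1.0]         # ...but a low reliability score
early = [sample_index(td, rel, 0.0, rng) for _ in range(1000)]
late = [sample_index(td, rel, 1.0, rng) for _ in range(1000)]
print(early.count(0), late.count(0))
```

Early in training the high-TD-error transition dominates sampling; once `progress` reaches 1 it is never drawn, mimicking the transition from error-driven prioritization to reliability-aware sampling as value estimates mature.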

Deterministic generation of grid states with programmable nonlinear bosonic circuits

Yanis Le Fur, Javier Lalueza-Puértolas, Carlos Sánchez Muñoz, Alberto Muñoz de las Heras, Alejandro González-Tudela

2604.21824 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper proposes new deterministic methods for generating bosonic quantum error-correcting codes using programmable circuits with squeezing, displacement, and Kerr operations. The authors develop 'phased-comb states' as an alternative to standard grid states, demonstrating comparable error correction performance while being more naturally achievable with current technology.

Key Contributions

  • Deterministic protocol for generating bosonic grid states using only squeezing, displacement, and Kerr operations
  • Introduction of phased-comb states as a new class of bosonic quantum error-correcting codes with near-optimal performance
  • Demonstration of universal gate set implementation for the proposed phased-comb states
  • Analysis showing competitive error correction performance compared to GKP states under boson loss
Keywords: bosonic quantum error correction, GKP states, grid states, nonlinear bosonic circuits, Kerr operations

Full Abstract

Bosonic quantum error correction enables hardware-efficient protection of quantum information by encoding logical qubits in harmonic oscillators. Bosonic grid states, such as Gottesman-Kitaev-Preskill (GKP) states, are particularly promising due to their potential to correct small displacements and boson loss. However, their generation remains challenging, typically relying on probabilistic protocols or auxiliary qubit systems. Here, we propose deterministic protocols for generating bosonic grid states using programmable nonlinear bosonic circuits composed solely of squeezing, displacement, and Kerr operations. We show that aiming to enforce GKP symmetries in the output of these circuits yields states with competitive performance with respect to current realizations, but whose quality saturates with increasing circuit depth due to imperfect symmetry restoration. Instead, we find that these bosonic circuits naturally give rise to a distinct class of states, that we label as phased-comb states, which are unitarily related to standard grid states but feature an intrinsic phase structure. We demonstrate that these states define a scalable bosonic quantum error-correcting code with near-optimal performance under boson loss comparable to that of approximate GKP states. We further analyze their logical operations and show how to implement a universal gate set for them. Our results establish programmable nonlinear bosonic circuits as a viable route towards the generation of scalable bosonic quantum error-correcting states beyond standard GKP encodings.

Variance Geometry of Exact Pauli-Detecting Codes: Continuous Landscapes Beyond Stabilizers

Arunaday Gupta, Baisong Sun, Xi He, Bei Zeng

2604.21800 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops a geometric framework for analyzing quantum error-correcting codes that can detect specific Pauli errors, showing that such codes form continuous families rather than just discrete collections. The authors introduce a parameter λ* that characterizes code performance and demonstrate that stabilizer codes represent only a small subset of possible exact quantum codes.

Key Contributions

  • Introduced geometric framework using higher-rank numerical ranges for exact Pauli-detecting codes
  • Demonstrated that exact quantum codes form continuous families characterized by parameter λ*
  • Showed stabilizer codes occupy only measure-zero subsets, revealing unexplored nonadditive code families
  • Unified analysis of stabilizer, symmetric, and nonadditive codes under single variance framework
Keywords: quantum error correction, Pauli codes, stabilizer codes, nonadditive codes, Knill-Laflamme conditions

Full Abstract

Exact quantum codes detecting a prescribed set of Pauli errors are approached through algebraic constructions--stabilizer, codeword-stabilized, permutation-invariant, topological, and related families. Geometrically, exact Pauli detection is governed by joint higher-rank numerical ranges of these Pauli operators, whose structure for rank $\geq 2$ is largely uncharted. From this viewpoint, we show that such codes often form connected continuous families rather than collections of disjoint solution regions. These families are characterized by a single scalar derived from the Knill-Laflamme conditions: denoted $λ^*$, it is the Euclidean norm of the signature vector of Pauli expectation values on the maximally mixed code state, and provides a one-parameter summary of the code's joint Pauli variance profile. Within these continuous landscapes, stabilizer codes occupy only discrete, measure-zero subsets of the attainable $λ^*$-spectrum, exposing a largely unexplored continuum of genuinely nonadditive exact codes. We establish this picture by analyzing the geometry of higher-rank operator compressions, and extend it to symmetry-restricted settings where cyclic and permutation symmetries are imposed on both the error model and the code projector. Small-system cases reveal interval, singleton, and empty regimes through eigenvalue interlacing and symmetry-sector decompositions; larger systems are treated numerically via Stiefel-manifold optimization and symmetry-adapted parameterizations. In every unrestricted and symmetry-compatible case analyzed, the attainable $λ^*$-spectrum forms a single closed interval whenever nonempty--although a general proof remains open. These results place stabilizer, symmetric, and nonadditive code families within a unified higher-rank variance framework, suggesting a continuous geometric perspective on the landscape of exact quantum codes.

Partial oracles quantum algorithm framework -- Part I: Analysis of in-place operations

Fintan M. Bolton

2604.21788 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops a quantum search algorithm framework called 'partial oracles' that could potentially exceed Grover's quadratic speedup, providing explicit construction methods for the search iteration operator when limited to in-place operations. The authors introduce a 'reciprocal transform' and demonstrate its application to components of the SHA-256 hash algorithm, though they note this specific implementation doesn't yet show quantum advantage.

Key Contributions

  • Introduction of the reciprocal transform with chain rule properties for quantum oracle construction
  • Explicit construction method for partial oracles quantum search algorithm using in-place operations
  • Application to SHA-256 hash algorithm components and development of QFrame python library for automation
Keywords: quantum search algorithms, partial oracles, reciprocal transform, SHA-256, cryptanalysis

Full Abstract

The partial oracles framework is a quantum search algorithm that has the potential to exceed the quadratic speedup of Grover's algorithm, up to a theoretical maximum of an exponential speedup. Until now, however, the framework has lacked an explicit method for constructing the operator that represents the search iteration. In this paper, we provide the missing construction, for the special case of an oracle function definable using only in-place operations (that is, where the calculated result of the oracle function can be read just from the qubits in the search index). The restriction to in-place operations means that the current work does not yet exhibit quantum advantage: oracle functions constructed using only in-place operations are always classically reversible. To demonstrate quantum advantage, it will be necessary to extend this construction method to include out-of-place operations (part II). As part of the construction of the search iteration operator, we define a new type of transform, the reciprocal transform, which is applied to the oracle function. We show that the reciprocal transform obeys a chain rule, which makes it possible to break down complex transforms into simple steps. To illustrate the practical application of this search method, we apply the reciprocal transform to elementary operations from the SHA-256 hash algorithm: addition modulo $2^n$, the $Maj(a, b, c)$ function, the $Ch(a, b, c)$ function, and the bit shift functions. We also introduce the QFrame python library, which is used to automate the construction of quantum circuits that represent reciprocal transforms.
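The elementary SHA-256 operations named in the abstract are classically simple; the standard definitions (from the Secure Hash Standard) are shown below, with modular addition paired with its inverse to illustrate what "classically reversible, in-place" means in this context. Only the classical functions are shown; the paper's reciprocal transform itself is not reproduced here:

```python
# The SHA-256 elementary operations mentioned above, in classical form.
# Maj and Ch follow the standard FIPS 180-4 definitions; addition modulo
# 2^32 is shown with its inverse, illustrating a classically reversible
# ("in-place") operation of the kind the construction is restricted to.
MASK32 = (1 << 32) - 1

def maj(a, b, c):
    """Bitwise majority of each bit column."""
    return (a & b) ^ (a & c) ^ (b & c)

def ch(a, b, c):
    """'Choose': each bit of a selects the corresponding bit of b or c."""
    return (a & b) ^ (~a & c & MASK32)

def add_mod(a, b):          # in-place: invertible for fixed b
    return (a + b) & MASK32

def sub_mod(a, b):          # inverse of add_mod
    return (a - b) & MASK32

x, k = 0xDEADBEEF, 0x428A2F98        # k is the first SHA-256 round constant
assert sub_mod(add_mod(x, k), k) == x            # classical reversibility
print(bin(maj(0b1100, 0b1010, 0b0110)))  # → 0b1110
```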

Photon Sorting with a Quantum Emitter

Kasper H. Nielsen, Etienne Corminboeuf, Benedikt Tissot, Love A. Pettersson, Sven Scholz, Arne Ludwig, Leonardo Midolo, Anders S. Sørensen, Peter Lod...

2604.21758 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: high

This paper demonstrates a quantum photon-sorting circuit that uses a solid-state quantum emitter to create nonlinear interactions between photons, enabling more efficient Bell state measurements that exceed the fundamental limits of linear optical systems.

Key Contributions

  • Demonstration of passive photon-sorting with 62% success probability using quantum emitter nonlinearity
  • Achievement of Bell state measurements exceeding 50% linear-optical limit at 57% success probability
  • Integration of directional waveguide-emitter coupling interface into on-chip linear optical circuit
Keywords: photonic quantum computing, Bell state measurement, quantum emitter, nonlinear optics, photon sorting

Full Abstract

High-quality photonic Bell state measurements (BSMs) enable scalable universal quantum computing and long distance quantum communication. However, when implemented with linear optics, BSMs are fundamentally probabilistic, introducing substantial hardware overheads and limiting noise tolerance in photonic quantum computing architectures. Nonlinear interactions at the single-photon level can overcome these limitations by enabling near-deterministic photon-photon gates. Here, we demonstrate a passive photon-sorting circuit based on the induced nonlinearity arising from photon scattering in a solid-state quantum emitter. The scattering is implemented in a directional waveguide-emitter coupling interface and embedded on-chip into a linear optical circuit, through which we demonstrate sorting of one- and two-photon components with a success probability of 62%. We find that the current system can enable BSMs with a 57% post-selected success probability without ancillary photons, exceeding the linear-optical limit of 50%, and can be readily improved to >65% with design optimisations.

Near-Term Reduction in Nonlocal Gate Count from Distributed Logical Qubits

Bruno Avritzer, Nathan Sankary

2604.21722 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: medium

This paper develops techniques for efficiently distributing quantum error-corrected computations across multiple quantum processors by optimizing how logical qubits are allocated to minimize costly inter-processor operations. The work focuses on color codes and demonstrates a 10% reduction in nonlocal gates, with methods for implementing universal gate sets in distributed quantum systems.

Key Contributions

  • Development of qubit allocation techniques for color codes that achieve 10% reduction in processor-nonlocal gates
  • Evaluation of methods for universal gate sets in distributed logical quantum computing including magic state distillation and code switching
  • Framework for scalable allocation algorithms for modular quantum computing architectures
Keywords: distributed quantum computing, modular quantum computing, color codes, error correction, logical qubits

Full Abstract

Modular quantum computing architectures require error correction schemes that remain effective in the presence of noisy inter-processor operations. As such, minimizing the number of such operations on logical circuits partitioned across quantum processors is a primary objective of distributed quantum computing. In this work, we develop basic techniques for qubit allocation using an exemplar color code family and explore generalizations to other color codes. In particular, we show that a 10% reduction in processor-nonlocal gates is achievable in a setting where syndrome extraction occurs after every logical gate, as in today's devices, and that this scales to significantly greater advantages in the multi-qubit case. We also explore methods of achieving universal gate sets efficiently in this distributed logical setting and evaluate the trade-offs of multiple approaches such as magic state distillation, code switching, and a new method based on logical swaps. Finally, we discuss some considerations for an allocation algorithm for these architectures to perform scalably and connect it to existing work on quantum circuit partitions.
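Stripped to its combinatorial core, the allocation problem is a graph partition: place logical qubits on processors so that as few two-qubit gates as possible cross the seam. The brute-force toy below (made-up circuit, two processors) shows the objective; real allocators must of course scale far beyond exhaustive search:

```python
# Toy version of the allocation problem: place logical qubits on two
# processors so that as few two-qubit gates as possible are nonlocal.
# The circuit and sizes are invented for illustration.
from itertools import combinations

def nonlocal_gates(gates, left):
    """Count gates whose two endpoints sit on different processors."""
    return sum((a in left) != (b in left) for a, b in gates)

def best_allocation(n_qubits, gates, per_proc):
    """Brute-force the left-processor qubit set minimizing nonlocal gates."""
    return min((set(c) for c in combinations(range(n_qubits), per_proc)),
               key=lambda left: nonlocal_gates(gates, left))

# 4 logical qubits, 2 per processor; gates mostly couple {0,1} and {2,3}
gates = [(0, 1), (0, 1), (2, 3), (1, 2)]
left = best_allocation(4, gates, 2)
print(sorted(left), nonlocal_gates(gates, left))  # → [0, 1] 1
```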

Composite quantum gates simultaneously compensated for multiple errors

Hristo Tochev, Nikolay Vitanov

2604.21594 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: medium • Network: low

This paper develops composite pulse sequences that create robust quantum gates (X and Hadamard) by simultaneously correcting for multiple types of control errors including amplitude, frequency, and timing errors. The researchers derive both analytical five-pulse solutions and numerically optimized longer sequences that significantly improve gate fidelity across large error ranges.

Key Contributions

  • Symmetric five-pulse composite gate sequences with closed-form phases that cancel first-order error terms for amplitude, detuning, and duration errors simultaneously
  • Demonstration that standard universal five-pulse sequences (U5a/U5b) are special cases of their symmetric solutions
  • Numerical optimization of longer pulse sequences (up to 15 pulses) for higher-order error suppression
  • Construction of variable-area sequences for Rx(π/2) gates equivalent to Hadamard gates up to virtual Z rotations
Keywords: composite pulses, quantum gates, error correction, gate fidelity, robust control

Full Abstract

Systematic control errors remain a primary obstacle to realizing high-fidelity single-qubit gates. We introduce composite pulse sequences that implement X and Hadamard gates while simultaneously compensating amplitude (Rabi-frequency), detuning (frequency), and duration errors. Our construction uses two complementary strategies: (i) derivative-based cancellation of error terms in the full unitary (not just the transition probability), formulated via the Cayley-Klein parametrization, and (ii) direct minimization of the average gate infidelity over prescribed error ranges. We derive symmetric five-pulse solutions with closed-form phases that cancel all first-order terms (including the mixed derivative), and numerically optimize longer sequences -- up to 15 pulses -- to achieve higher-order suppression. We also show that standard ``universal'' five-pulse sequences (U5a/U5b) emerge as simple phase-shifted instances of our symmetric solutions, yielding broad robustness to both detuning and amplitude errors. Finally, we construct variable-area sequences for $R_x(π/2)$, which, up to virtual Z rotations, benchmark the Hadamard gate. Across all families we observe the expected trade-off between sequence length and robustness window, with substantial boosts in fidelity over large error domains.
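The compensation mechanism can be demonstrated numerically with the classic BB1 sequence (Wimperis) rather than the paper's own five-pulse solutions: a naive X gate loses fidelity linearly in a fractional amplitude error, while the composite version barely moves. This is a hedged sketch of the general technique, not a reproduction of the paper's sequences:

```python
# Demo of amplitude-error compensation with the standard symmetric BB1
# composite pulse (not the paper's sequences). Every pulse area carries
# the same fractional error eps; BB1 cancels its effect to second order.
import math, cmath

def rot(theta, phi):
    """SU(2) rotation by theta about the axis (cos phi, sin phi, 0)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s * cmath.exp(-1j * phi)],
            [-1j * s * cmath.exp(1j * phi), c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def fidelity(U, V):
    """|tr(U^dag V)| / 2, insensitive to global phase."""
    t = sum(U[i][j].conjugate() * V[i][j] for i in range(2) for j in range(2))
    return abs(t) / 2

def bb1_x(eps):
    """Symmetric BB1 X gate under fractional amplitude error eps."""
    theta = math.pi
    phi = math.acos(-theta / (4 * math.pi))
    seq = [(theta / 2, 0.0), (math.pi, phi), (2 * math.pi, 3 * phi),
           (math.pi, phi), (theta / 2, 0.0)]
    U = [[1, 0], [0, 1]]
    for area, ph in seq:                 # apply pulses left to right
        U = matmul(rot(area * (1 + eps), ph), U)
    return U

target = rot(math.pi, 0)                 # ideal X gate
naive = fidelity(target, rot(math.pi * 1.05, 0))
composite = fidelity(target, bb1_x(0.05))
print(naive < composite)
```

The paper's contribution goes further than this demo: its sequences compensate amplitude, detuning, and duration errors simultaneously, whereas BB1 addresses amplitude errors only.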

Pulse Shaping for Superconducting Qubits

Animesh Patra, Ankur Raina

2604.21565 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper provides a comprehensive educational guide to microwave pulse shaping techniques for controlling superconducting qubits, covering the DRAG technique for reducing errors, hardware implementation considerations, and extensions to two-qubit gates like cross-resonance operations.

Key Contributions

  • Unified pedagogical framework for pulse shaping in superconducting qubits
  • Magnus expansion analysis of DRAG technique for error suppression
  • Integration of hardware considerations with theoretical pulse design
  • Extension to two-qubit cross-resonance gate operations
Keywords: superconducting qubits, pulse shaping, DRAG, transmon, cross-resonance

Full Abstract

High-fidelity control of superconducting qubits requires carefully shaped microwave pulses that account for multiple error channels. In this work, we present a pedagogical introduction to pulse-shaping techniques for transmon qubits, aiming to provide a unified, accessible framework that integrates physical intuition for pulse design, analytical understanding of gate-level descriptions, and practical considerations of hardware. This article further aims to serve as a guide for students and early researchers entering superconducting quantum computing. We begin by examining simple pulse envelopes and their spectral properties, highlighting how finite bandwidth leads to leakage outside the computational subspace. These observations motivate the introduction of the derivative removal by adiabatic gate (DRAG) technique, which uses a quadrature component proportional to the pulse's time derivative to suppress off-resonant excitations. We analyze the single-qubit case using the Magnus expansion, which provides a clear understanding of the order-by-order introduction of error channels. We discuss the practical hardware realities of control pulse generation, focusing on arbitrary waveform generators (AWG), local oscillators (LO), and IQ mixing. Common imperfections are discussed in terms of their impact on the effective pulse shape and qubit Hamiltonian. Finally, we extend the discussion to two-qubit operations, focusing on the cross-resonance gate and the emergence of effective interactions.
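First-order DRAG is easy to state concretely: the in-phase quadrature is the chosen envelope and the out-of-phase quadrature is its time derivative scaled by the anharmonicity. The sketch below uses a Gaussian envelope with illustrative (not hardware-calibrated) parameter values:

```python
# Minimal sketch of a first-order DRAG drive: the in-phase quadrature is a
# Gaussian envelope; the out-of-phase quadrature is its time derivative
# scaled by -1/alpha, with alpha the (negative) transmon anharmonicity.
# Parameter values are illustrative, not calibrated to any device.
import math

def gaussian(t, t_mid, sigma, amp):
    return amp * math.exp(-((t - t_mid) ** 2) / (2 * sigma ** 2))

def drag_quadratures(t, t_mid, sigma, amp, alpha):
    """Return (omega_x, omega_y) at time t for first-order DRAG."""
    omega_x = gaussian(t, t_mid, sigma, amp)
    d_omega_x = -(t - t_mid) / sigma ** 2 * omega_x   # analytic derivative
    omega_y = -d_omega_x / alpha
    return omega_x, omega_y

t_mid, sigma, amp = 20.0, 5.0, 0.05          # ns, ns, GHz (illustrative)
alpha = -2 * math.pi * 0.33                  # rad/ns, typical transmon scale
wx, wy = drag_quadratures(t_mid, t_mid, sigma, amp, alpha)
print(wx, wy)   # derivative vanishes at the peak, so omega_y = 0 there
```

Because the correction is proportional to the envelope's derivative, it is largest on the pulse edges, exactly where the spectral content responsible for leakage to the second excited state is concentrated.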

Suppressing the Erasure Error of Fusion Operation in Photonic Quantum Computing

Xiangyu Ren, Yuexun Huang, Zhemin Zhang, Yuchen Zhu, Tsung-Yi Ho, Antonio Barbalace, Zhiding Liang

2604.21475 • Apr 23, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: medium

This paper develops a new compilation method for photonic quantum computing that reduces errors during graph state construction by introducing tree-encoded fusion operations and spin qubit quantum memory to better handle photon loss errors compared to existing approaches.

Key Contributions

  • Tree-encoded fusion strategy that suppresses erasure errors during graph-state generation in photonic quantum computing
  • MBQC compiler framework incorporating spin qubit quantum memory with algorithms to reduce quantum program execution overhead
Keywords: photonic quantum computing, measurement-based quantum computation, fusion operations, graph states, error correction

Full Abstract

Photonic quantum computing (PQC) provides a promising route toward quantum computation by naturally supporting the measurement-based quantum computation (MBQC) model. In MBQC, programs are executed through measurements on a pre-generated graph state, whose construction largely depends on probabilistic fusion operations. However, fusion operations in PQC are vulnerable to two major error sources: fusion failure and fusion erasure. As a result, MBQC compilation must account for both error mechanisms to generate reliable and efficient photonic executions. Prior state-of-the-art MBQC compilation, represented by OneAdapt, is designed for all-photonic architectures and mainly focuses on handling fusion failures. Nevertheless, it does not explicitly model fusion erasures induced by photon loss, which can be substantially more damaging than fusion failures. To mitigate fusion erasure errors, we introduce a new MBQC compilation scheme built upon the spin qubit quantum memory. We propose tree-encoded fusion, an encoding strategy that suppresses erasure errors during graph-state generation. We further incorporate this scheme into a compiler framework with algorithms that reduce the execution overhead of quantum programs. We evaluate the proposed framework using a realistic PQC simulator on six representative quantum algorithm benchmarks across multiple program scales. The results show that tree-encoded fusion achieves better robustness than alternative fusion-encoding strategies, and that our compiler provides exponential improvement over OneAdapt. In addition, we validate the feasibility of our approach through a proof-of-concept demonstration on real PQC hardware.

LightStim: A Framework for QEC Protocol Evaluation and Prototyping with Automated DEM Construction

Xiang Fang, Ming Wang, Yue Wu, Sharanya Prabhu, Dean Tullsen, Narasinga Rao Miniskar, Frank Mueller, Travis Humble, Yufei Ding

2604.21472 • Apr 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents LightStim, a software framework that automatically constructs Detector Error Models (DEMs) for quantum error correction protocols, eliminating the need for manual annotation and enabling systematic evaluation of fault-tolerant quantum computing circuits from simple memory experiments to complex distillation protocols.

Key Contributions

  • Automated DEM construction framework that eliminates manual annotation requirements for quantum error correction protocol evaluation
  • Demonstration of novel heterogeneous cross-code lattice surgery between surface and punctured quantum Reed-Muller codes
  • Unified infrastructure enabling systematic QEC protocol evaluation and accelerated exploration of new fault-tolerant quantum computing approaches
quantum error correction fault-tolerant quantum computing detector error models circuit compilation lattice surgery
View Full Abstract

Fault-tolerant quantum computing increasingly demands rigorous, circuit-level evaluation of diverse quantum error correction (QEC) protocols and efficient prototyping of new ones. Such evaluation requires both the physical circuit and its Detector Error Model (DEM) to simulate end-to-end logical error rates. However, DEM construction today is performed by manual annotation, a tedious and error-prone process that effectively limits evaluation to simple memory experiments. We present LightStim, a framework that automates DEM construction concurrently with circuit compilation by maintaining a Pauli tableau augmented with measurement records, with no protocol-specific input required. We benchmark LightStim across protocols from memory experiments to end-to-end distillation circuits; cross-validation against public implementations confirms exact detector and observable counts and consistent logical error rates. LightStim additionally accelerates the exploration of new protocols, which we demonstrate through a novel heterogeneous cross-code lattice surgery design between surface and punctured quantum Reed-Muller codes. These capabilities together make LightStim a unified infrastructure for systematic QEC protocol evaluation and exploration.
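The central object LightStim automates — the Detector Error Model — is built from "detectors," parities of measurement records that are deterministic in the absence of errors. The sketch below is not LightStim's API (which the abstract does not expose); it is a minimal pure-Python illustration of the detector concept on a 3-qubit repetition-code memory, where an injected bit flip fires exactly one detection event.

```python
# Generic illustration of the "detector" idea behind Detector Error Models:
# a detector is a parity of measurement records that is 0 when no error
# occurs. This is NOT LightStim's API -- just a hypothetical 3-qubit
# repetition-code memory sketched in pure Python.

def syndromes(data, rounds, error_round=None, error_qubit=None):
    """Measure the two ZZ checks of a 3-qubit repetition code each round.
    Optionally inject a single bit flip before the given round."""
    data = list(data)
    records = []
    for r in range(rounds):
        if r == error_round:
            data[error_qubit] ^= 1          # inject an X error
        records.append([data[0] ^ data[1],  # check on qubits (0,1)
                        data[1] ^ data[2]]) # check on qubits (1,2)
    return records

def detectors(records):
    """Detection events: XOR of each check with its value in the previous
    round. All zero for an error-free run."""
    events = [list(records[0])]             # round 0 compared to |000> prep
    for prev, cur in zip(records, records[1:]):
        events.append([p ^ c for p, c in zip(prev, cur)])
    return events

clean = detectors(syndromes([0, 0, 0], rounds=3))
faulty = detectors(syndromes([0, 0, 0], rounds=3, error_round=1, error_qubit=0))
print(clean)   # all-zero detection events
print(faulty)  # the error fires the (0,1) detector exactly once, in round 1
```

A DEM then records which error mechanisms flip which detectors; automating that bookkeeping across whole protocols is what the framework contributes.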

pygridsynth: A fast numerical tool for ancilla-free Clifford+T synthesis

Shuntaro Yamamoto, Nobuyuki Yoshioka

2604.21333 • Apr 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents pygridsynth, a Python library for efficiently converting quantum operations into sequences of Clifford+T gates (a universal gate set for fault-tolerant quantum computing). The tool provides fast synthesis whose runtime scales logarithmically in the inverse precision and introduces techniques to reduce the number of expensive T gates needed for multi-qubit operations.

Key Contributions

  • Open-source Python library for ancilla-free Clifford+T synthesis with O(log(1/ε)) complexity
  • Partial-decomposition technique for n≥3 qubits that reduces the constant factors in the T-gate count
  • Mixed-synthesis workflow using probabilistic mixtures that reduces the synthesis error from ε to ε²/(2n)
Clifford+T synthesis fault-tolerant quantum computing T-gate optimization quantum compilation ancilla-free synthesis
View Full Abstract

We present pygridsynth, an open-source Python library for ancilla-free approximate Clifford+$T$ synthesis that runs in $O(\log(1/ε))$ for precision $ε$. For $n=1, 2$ qubits, the library builds upon established efficient and high-precision synthesis routines, such as nearly optimal $Z$-rotation synthesis and magnitude approximation. For $n\ge 3$ qubits, we introduce a partial-decomposition technique that generalizes the magnitude approximation, reducing constant factors in the $T$-count as $(\frac{21}{8}\cdot 4^n - \frac{9}{2}\cdot 2^n + 9)\log_2(1/ε) + o(\log(1/ε))$. The package also exposes a mixed-synthesis workflow that approximates target unitary channels by probabilistic mixtures of Clifford+$T$ circuits, for which we empirically find that the synthesis error is reduced from $ε$ to $ε^2/(2n)$. Taken together, these features make pygridsynth a Python-native platform for high-precision Clifford$+T$ synthesis and for benchmarking unitary and mixed synthesis strategies on multi-qubit instances.
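The two quantitative claims in the abstract are easy to sanity-check numerically. The snippet below is plain Python, not the pygridsynth API: it evaluates the quoted leading T-count coefficient for n ≥ 3 qubits and the empirical ε → ε²/(2n) improvement from mixed synthesis.

```python
import math

# Leading T-count term quoted in the abstract for n >= 3 qubits:
# (21/8 * 4^n - 9/2 * 2^n + 9) * log2(1/eps) + o(log(1/eps)).
# Plain-Python evaluation only -- this is not the pygridsynth API.

def t_count_leading(n, eps):
    coeff = (21 / 8) * 4**n - (9 / 2) * 2**n + 9
    return coeff * math.log2(1 / eps)

def mixed_synthesis_error(n, eps):
    # Empirical improvement reported for the mixed-synthesis workflow.
    return eps**2 / (2 * n)

print(t_count_leading(3, 1e-10) / math.log2(1e10))  # coefficient for n=3: 141.0
print(mixed_synthesis_error(3, 1e-5))               # roughly 1.7e-11
```

For three qubits the leading coefficient works out to 141 T gates per bit of precision, which puts concrete numbers on the "constant factor" reduction the partial-decomposition technique targets.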

StabilizerBench: A Benchmark for AI-Assisted Quantum Error Correction Circuit Synthesis

Andres Paz, Christian Tarta, Cordelia Yuqiao Li, Mayee Sun, Sarju Patel, Sylvie Lausier

2604.21287 • Apr 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces StabilizerBench, a benchmark suite for evaluating AI agents' ability to automatically generate quantum error correction circuits. The benchmark includes 192 stabilizer codes across various sizes and difficulties, with three tasks testing circuit generation, optimization, and fault-tolerant synthesis capabilities.

Key Contributions

  • Creation of the first benchmark suite specifically for AI-assisted quantum error correction circuit synthesis
  • Development of a unified scoring system with capability and quality metrics for evaluating quantum circuit generation
  • Introduction of continuous fault tolerance and optimization metrics that go beyond binary pass/fail assessment
quantum error correction stabilizer codes AI benchmarking fault tolerance circuit synthesis
View Full Abstract

As quantum hardware scales toward fault tolerant operation, the demand for correct quantum error correction (QEC) circuits far outpaces manual design capacity. AI agents offer a promising path to automating this synthesis, yet no benchmark exists to measure their progress on the specialized task of generating QEC circuits. We introduce StabilizerBench, a benchmark suite of 192 stabilizer codes spanning 12 families, 4-196 qubits, and distances 2-21, organized into three tasks of increasing difficulty: state preparation circuit generation, circuit optimization under semantic constraints, and fault tolerant circuit synthesis. Although motivated by QEC, stabilizer circuits exercise core competencies required for general quantum programming, including gate decomposition, qubit routing, and semantic preserving transformations, while admitting efficient verification via the Gottesman Knill theorem, enabling the benchmark to scale to large codes without the exponential cost of full unitary comparison. We define a unified generator weighted scoring system with two tiers: a capability score measuring breadth of success and a quality score capturing circuit merit. We also introduce continuous fault tolerance and optimization metrics that grade error resilience and circuit improvements beyond binary pass or fail. Following the design of classical benchmarks such as SWE-bench, StabilizerBench specifies inputs, verification oracles, and scoring but leaves prompts and agent strategies open. We evaluate three frontier AI agents and find the benchmark discriminates across models and tasks with substantial headroom for improvement.
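The efficient verification the benchmark relies on (via the Gottesman-Knill theorem) amounts to tracking how Clifford gates conjugate stabilizer generators, which needs only bit operations rather than exponential state vectors. The following is a hypothetical minimal sketch of that idea in pure Python (signs ignored), not StabilizerBench's harness, shown on Bell-state preparation.

```python
# Minimal Gottesman-Knill-style stabilizer tracking (phases ignored):
# each generator is a pair of X/Z bit vectors, and Clifford gates act by
# simple bit updates. A hypothetical sketch, not StabilizerBench code.

def h(stab, q):
    x, z = stab
    x[q], z[q] = z[q], x[q]          # H swaps X and Z on qubit q

def cnot(stab, c, t):
    x, z = stab
    x[t] ^= x[c]                     # X on control propagates to target
    z[c] ^= z[t]                     # Z on target propagates to control

# Start from |00>, stabilized by Z0 and Z1.
gens = [([0, 0], [1, 0]),            # Z0
        ([0, 0], [0, 1])]            # Z1

for g in gens:                       # Bell-state circuit: H(0); CNOT(0,1)
    h(g, 0)
    cnot(g, 0, 1)

print(gens[0])  # ([1, 1], [0, 0])  -> XX
print(gens[1])  # ([0, 0], [1, 1])  -> ZZ
```

Because the update cost is polynomial in qubit count, the same bookkeeping scales to the benchmark's 196-qubit codes, which is what lets verification avoid full unitary comparison.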

High-Girth Regular Quantum LDPC Codes from Affine-Coset Structures

Koki Okada, Kenta Kasai

2604.20838 • Apr 22, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a new family of quantum low-density parity-check (LDPC) codes using mathematical structures called affine cosets, creating error correction codes that can protect quantum information with improved performance. The researchers demonstrate a specific code that can correct errors in over 16,000 quantum bits with very low failure rates.

Key Contributions

  • Construction of high-performance quantum LDPC codes using affine-coset structures from 3-dimensional subspaces
  • Demonstration of scalable quantum error correction achieving frame error rates of 10^-8 for large-scale quantum systems
quantum error correction LDPC codes fault tolerance belief propagation CSS codes
View Full Abstract

We construct a quantum low-density parity-check code family from a length-512 Calderbank-Shor-Steane base matrix pair. The base pair is $(3,8)$-regular, both Tanner graphs have girth 8, and the base code has parameters $[[512,174,8]]$. The construction uses affine cosets of six 3-dimensional subspaces of $\mathbb{F}_2^9$ as check supports, and then applies circulant permutation matrix (CPM) lifts. The main decoding experiment uses the CPM-lifted code with lift factor $P=32$, which has parameters $[[16384, 4142, \leq 40]]$, under the code-capacity depolarizing model. A belief-propagation decoder with post-processing achieved a frame error rate of about $10^{-8}$ at $p=0.085$, and one observed logical residual of weight 40 gives a decoder-derived upper bound $d \leq 40$.
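Any CSS construction like the one above must satisfy the orthogonality condition $H_X H_Z^T = 0$ over $\mathbb{F}_2$, so that X checks commute with Z checks. As a toy stand-in for the paper's length-512 base pair, the pure-Python check below verifies the condition for the $[7,4]$ Hamming matrix underlying the well-known $[[7,1,3]]$ Steane code.

```python
# CSS validity condition: every X check must commute with every Z check,
# i.e. H_X @ H_Z^T = 0 over F_2. Illustrated on the [7,4] Hamming parity
# check matrix of the [[7,1,3]] Steane code -- a toy stand-in for the
# paper's length-512 base matrix pair.

H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def gf2_product(A, B):
    """Compute A @ B^T over F_2."""
    return [[sum(a * b for a, b in zip(ra, rb)) % 2 for rb in B] for ra in A]

# Steane uses H_X = H_Z = H; self-orthogonality makes the CSS pair valid.
print(gf2_product(H, H))  # 3x3 all-zero matrix
```

The affine-coset structure in the paper plays the analogous role at scale: it guarantees this orthogonality while also enforcing $(3,8)$-regularity and girth 8.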

Controllable non-Hermitian topology in a dynamically protected cat qubit

Tian-Le Yang, Pei-Rong Han, Zhen-Biao Yang, Shi-Biao Zheng

2604.20680 • Apr 22, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper investigates the non-Hermitian topology of dissipatively stabilized cat qubits, showing how the phase of a two-photon drive can coherently control exceptional points in the system's spectrum. The work demonstrates that these quantum error-corrected qubits maintain high-fidelity operation while exhibiting controllable topological features.

Key Contributions

  • Discovery of controllable second- and third-order Liouvillian exceptional points in cat qubits using two-photon drive phase control
  • Introduction of a topological invariant based on winding numbers to characterize exceptional points in open quantum systems
  • Demonstration that cat qubit dynamics remain confined to logical subspace with near-unity fidelity despite non-Hermitian topology
cat qubits non-Hermitian topology exceptional points fault-tolerant quantum computing dissipative stabilization
View Full Abstract

Dissipatively stabilized cat qubits are promising for fault-tolerant quantum information processing, yet their non-Hermitian (NH) spectral topology remains largely unexplored. We uncover rich Liouvillian exceptional structures in a cat-qubit mode stabilized by two-photon drive (TPD) and engineered two-photon loss, in the presence of single-photon drive (SPD) and single-photon loss. In the parameter space spanned by SPD strength and detuning, we identify both second- and third-order Liouvillian exceptional points (LEP2s and LEP3s). Remarkably, we show that the phase $θ$ of TPD provides coherent control over these exceptional points: the LEP3 diverges and vanishes at $θ=π/2$, while remaining stable and tunable elsewhere. We introduce a topological invariant based on the winding number of a resultant vector, which robustly identifies LEP3s with unit topological charge. Full master-equation simulations confirm that the system dynamics remains confined to the logical subspace with near-unity fidelity. Our results bridge dissipative stabilization, phase-coherent control, and NH topology, demonstrating controllable higher-order LEPs in open quantum systems.

Valley-Aware Optimal Control of Spin Shuttling Using Cryogenic Integrated Electronics

Pau Dietz Romero, Nermine Chaabani, Lammert Duipmans, Alessandro David, Felix Motzoi, Stefan van Waasen, Lotte Geck

2604.20482 • Apr 22, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an integrated approach to improve electron spin shuttling in silicon quantum devices by combining cryogenic control electronics with optimization algorithms that account for valley disorder and electronic noise. The work achieves 99.99% fidelity for moving electron spins over 10 micrometers while using low-power on-chip control circuits.

Key Contributions

  • End-to-end co-simulation framework combining valley disorder maps with cryogenic circuit simulations
  • Fully integrated cryogenic shuttling-signal generator with velocity modulation and on-chip memory
  • Noise-aware optimization procedure for high-fidelity spin shuttling using discrete circuit controls
spin shuttling silicon qubits valley splitting cryogenic electronics quantum control
View Full Abstract

Electron shuttling is emerging as a key mechanism for enabling long-range coupling in scalable spin-qubit architectures. Bringing shuttling waveform generation into the cryostat can improve scalability, but imposes strict area and power constraints on the control electronics. Concurrently, shuttling in Si/SiGe is further limited by a spatially varying valley splitting that induces spin--valley mixing and degrades coherence. Here, we make three contributions that address these limitations jointly: (i) an end-to-end co-simulation framework that combines disorder-informed valley maps with transistor-level cryogenic circuit simulations including electronic noise; (ii) a fully integrated cryogenic shuttling-signal generator tailored to velocity modulation, enabling period-wise waveform shaping through discrete circuit settings stored in on-chip memory; and (iii) a noise-aware optimization procedure that tunes only these implementable circuit controls, using one of four discrete resistor settings per period, to generate high-fidelity shuttling sequences. Across simulated valley and noise realizations in our co-simulation framework, the optimized velocity-modulation waveforms improve transport performance, achieving an average shuttling fidelity of $99.99 \pm 0.007\%$ at $v_{\mathrm{avg}} = 20~\mathrm{m\,s^{-1}}$ over a distance of $10~μ\mathrm{m}$, while maintaining active analog power consumption in the tens of $μ\mathrm{W}$ during shuttling. This validates on-chip storage and replay of optimized control settings as a practical strategy to mitigate valley disorder in scalable shuttling architectures.

Direct U(2) approximation via repeat-until-success circuits

Vadym Kliuchnikov, Jendrik Brachter, Marcus P. da Silva

2604.20033 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper presents a new method for approximating arbitrary single-qubit quantum operations using repeat-until-success circuits with one extra qubit, avoiding traditional decomposition methods. The approach uses mathematical tools from lattice theory and can work with various quantum gate sets including Clifford gates.

Key Contributions

  • Direct approximation of U(2) unitaries without Euler decomposition using repeat-until-success circuits
  • Extension to multi-qubit gate sets including Clifford+CS and Clifford+CCZ combinations
  • Application of lattice-based synthesis algorithms and integer point enumeration for quantum gate approximation
quantum gate synthesis repeat-until-success circuits unitary approximation Clifford gates lattice algorithms
View Full Abstract

We show how to directly and efficiently approximate arbitrary one-qubit unitaries, bypassing the Euler decomposition and the magnitude approximation problem, at the cost of one ancillary qubit. Our technique also applies to approximating unitaries with multi-qubit gate sets such as Clifford and CS, or Clifford and CCZ, as well as to approximating orthogonal matrices using multi-qubit gate sets such as Real Clifford and CCZ. The key tools are repeat-until-success circuits, lattice-based exact synthesis algorithms, integer point enumeration in convex sets, and relative norm equations.

Assessing System Capabilities and Bottlenecks of an Early Fault-Tolerant Bicycle Architecture

Kun Liu, Ben Foxman, Gian-Luca R. Anselmetti, Yongshan Ding

2604.20013 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper analyzes early fault-tolerant quantum computers using Bivariate Bicycle codes, identifying inter-module communication as the main bottleneck and developing compiler optimizations to improve performance. The researchers test their optimizations on 40+ quantum algorithm benchmarks and show significant improvements in circuit failure rates and execution times.

Key Contributions

  • Identification of inter-module communication as the dominant bottleneck in modular fault-tolerant quantum computers
  • Development of compiler optimizations including synthesis at factory, transvection-based Clifford deferral, and Clifford insertion techniques
  • Comprehensive evaluation on 40+ benchmark categories showing 9x reduction in circuit failure probability and significant performance improvements
fault-tolerant quantum computing Bivariate Bicycle codes quantum compiler optimization modular quantum architecture magic state factory
View Full Abstract

Early modular fault tolerant quantum computers remain constrained by costly inter-module communication and limited magic state factory service. Understanding such bottlenecks and investigating the compiler optimizations that most effectively close the gap between algorithm requirements and hardware capabilities is a concrete and practically urgent systems problem. We study the modular architectures based on Bivariate Bicycle codes and identify the dominant bottleneck: inter-module communication induced by non-Clifford operations. We build a compilation pipeline to fill the missing parts of prior works and propose compiler optimizations: synthesizing arbitrary-angle rotations at the factory (syn@fac), transvection based Clifford deferral, and Clifford insertion for critical path duration reduction. We extend the evaluation scope of the prior work to 40+ benchmark categories drawn from PennyLane and MQTBench, including quantum algorithms and Hamiltonian simulations with varying sizes. Under the present instruction cost, syn@fac reduces estimated circuit failure probability by a factor of 9.0 on average across non-Clifford benchmarks. The robustness persists across sweeps of instruction cost ratios, LPU count, and factory count. In addition, transvection reduces Clifford deferral compile time by 77.04\%, while Clifford insertion reduces end-to-end circuit duration by 11.54\% on average on MQTBench, with smaller gains on Hamiltonian simulations. We hope this work inspires further studies on compiler optimizations for early modular FTQC systems.

Reinforcement Learning for Robust Calibration of Multi-Qudit Quantum Gates

Amine Jaouadi, Sahel Ashhab

2604.19990 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper develops a hybrid approach combining optimal control theory with deep reinforcement learning to create more robust quantum gates for qudits (higher-dimensional quantum systems beyond qubits). The method uses optimal control to design initial high-quality control pulses, then applies reinforcement learning to learn small corrections that maintain gate performance when hardware parameters vary from their ideal values.

Key Contributions

  • Hybrid optimization framework combining optimal control theory with contextual deep reinforcement learning for quantum gate calibration
  • Demonstration of robust controlled-phase gates on two qutrits with enhanced transfer robustness across device ensembles
  • Scalable approach for high-fidelity quantum gate control in higher-dimensional quantum systems
quantum gates qudits reinforcement learning optimal control gate calibration
View Full Abstract

Higher-dimensional quantum systems, such as qudits, offer architectural and algorithmic advantages over qubits, but their increased spectral crowding and limited controllability render high-fidelity quantum gates particularly challenging. We propose a hybrid optimization framework that integrates optimal control theory methods with contextual deep reinforcement learning to achieve robust controlled-phase gates on two qutrits. Optimal control is first used to design high-fidelity control pulses for a nominal system model. Reinforcement learning is then employed as a calibration stage that learns small residual corrections to these pulses in the presence of static model mismatch, thereby preserving good gate performance under realistic parameter uncertainties. By learning structured, low-dimensional residual corrections conditioned on device-specific parameter variations, reinforcement learning enhances the transfer robustness of nominally optimal but parameter-sensitive control solutions across ensembles of devices. Crucially, the reinforcement learning step in our framework does not compete with the optimal control step but provides the adaptability required for realistic hardware, systematically reducing the sensitivity to parameter fluctuations. Our results establish reinforcement learning as a practical and scalable ingredient for robust calibration of quantum gates in high-dimensional systems.

Architecting Early Fault Tolerant Neutral Atoms Systems with Quantum Advantage

Sahil Khan, Sayam Sethi, Kaavya Sahay, Yingjia Lin, Jude Alnas, Suhas Kurapati, Abhinav Anand, Jonathan M. Baker, Kenneth R. Brown

2604.19735 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops improved fault-tolerant quantum computing architectures for neutral atom systems by introducing teleportation-based schemes that parallelize logical operations, achieving up to 3x speedup over existing methods. The authors demonstrate that quantum advantage could be achieved with as few as 11,495 atoms in about 15 hours runtime.

Key Contributions

  • Teleportation-based fault-tolerant architecture that achieves ~3x speedup over extractor architectures
  • Comprehensive simulation framework including gate scheduling, shuttling patterns, and resource-state nondeterminism
  • Concrete resource estimates showing quantum advantage achievable with 11,495 atoms and ~15 hours runtime
fault-tolerant quantum computing neutral atoms quantum error correction logical operations quantum advantage
View Full Abstract

Recent advancements in neutral atom platforms have enabled exploration of early fault-tolerant (FT) architectures for applications with quantum advantage, such as quantum dynamics simulations. An efficient fault-tolerant architecture has both spatially efficient quantum error correction codes (low qubit overhead), and efficient methodologies (transversal based gates, extractor based gates, etc.) for logical computation, to minimize overall execution time. Achieving the right balance between space and time can be critical for enabling early FT demonstrations of quantum advantage. In this work, we identify bottlenecks in existing spatially efficient schemes, which tend to be very serial, and do not take advantage of unutilized space. We introduce a teleportation-based scheme that leverages the reconfigurable connectivity of neutral atoms to parallelize logical operations. Our approach achieves up to \textbf{$\mathbf{\sim 3 \times}$ speedup} over extractor architectures at no extra space cost and achieves the best spacetime performance among other viable architectures before accounting for external \textit{resource-states}. To rigorously evaluate performance, we construct explicit quantum advantage benchmarks and \textit{simulate} compilation to a fault-tolerant instruction set, including low-level gate scheduling and shuttling patterns, and resource-state nondeterminism. We find that our speedups still apply and report exact space-time cost along with success probabilities, identifying architectures capable of achieving quantum advantage \textbf{with as little as $\mathbf{11,495}$ atoms and a runtime of $\mathbf{\sim 15}$ hours}.

Qubit Routing for (Almost) Free

Arianne Meijer-van de Griend

2604.19717 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proves mathematical bounds on the number of CNOT gates needed to synthesize quantum phase polynomials and shows that by only using gates natively allowed by the hardware architecture, qubit routing overhead can be reduced from logarithmic-to-polynomial factors down to constant factors.

Key Contributions

  • Mathematical proof of tight bounds O(gn/max(log g,1)) to O(gn) for CNOT gate count in phase polynomial synthesis
  • Demonstration that architecture-aware synthesis reduces the routing overhead factor from between O(log n) and O(n log²n) down to an O(1) constant
CNOT gates phase polynomials qubit routing quantum compilation hardware constraints
View Full Abstract

In this paper, we give a mathematical proof that bounds the number of CNOT gates required to synthesize an $n$ qubit phase polynomial with $g$ terms to be at least $O(\frac{gn}{\max (\log g, 1)})$ and at most $O(gn)$. However, when targeting restricted hardware, not all CNOTs are allowed. If we were to use SWAP-based methods to route the qubits on the architecture such that the earlier synthesized gates are natively allowed, we increase the number of CNOTs by a routing overhead factor of $O(\log n) \leq α\leq O(n \log^2 n)$. However, if we only synthesize allowed gates, we do not need to route any qubits. Moreover, in that case the routing overhead factor is $1 \leq α\leq 4 \simeq O(1)$. Additionally, since phase polynomials and Hadamard gates together form a universal gate set, we get qubit routing for almost free.
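The quoted bounds become concrete with a few numbers plugged in. The snippet below takes the asymptotic constants as 1 (purely for illustration, since the paper only states orders of growth): the CNOT count for an n-qubit phase polynomial with g terms lies between order gn/max(log₂ g, 1) and gn, SWAP-based routing multiplies it by between log n and n log² n, and architecture-aware synthesis by at most 4.

```python
import math

# Plugging numbers into the paper's asymptotic bounds, with all hidden
# constants taken as 1 purely for illustration: CNOT count for an n-qubit
# phase polynomial with g terms, and the routing-overhead factor with
# SWAP insertion versus native (architecture-aware) synthesis.

def cnot_bounds(n, g):
    lower = g * n / max(math.log2(g), 1)
    upper = g * n
    return lower, upper

def routed_range(n, g):
    lo, hi = cnot_bounds(n, g)
    swap_min, swap_max = math.log2(n), n * math.log2(n) ** 2
    native_max = 4                   # architecture-aware synthesis: O(1)
    return lo * swap_min, hi * swap_max, hi * native_max

lo, hi = cnot_bounds(16, 256)
print(lo, hi)                        # 512.0 4096
print(routed_range(16, 256))         # SWAP routing vs. native synthesis
```

Even at this toy scale (n = 16, g = 256) the SWAP-routed worst case exceeds the native-synthesis worst case by two orders of magnitude, which is the sense in which routing becomes "almost free."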

Quantum Eigenvalue Transformations for Arbitrary Matrices

Xabier Gutiérrez, Lorenzo Laneve, Mikel Sanz

2604.19688 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper extends quantum signal processing and quantum singular value transformation techniques to work with arbitrary matrices (not just unitary or Hermitian ones) by introducing n-regular block encodings that preserve matrix powers and allow polynomial transformations of eigenvalues.

Key Contributions

  • Introduction of n-regular block encoding concept that extends QSP/QSVT to arbitrary non-Hermitian matrices
  • Construction method to transform any block encoding into n-regular form using O(log n) ancillary qubits
  • Proof that the method works for eigenvalue transformations in Jordan normal form
quantum signal processing quantum singular value transformation block encoding eigenvalue transformation quantum algorithms
View Full Abstract

Quantum Signal Processing (QSP) and Quantum Singular Value Transformation (QSVT) provide an efficient framework for implementing polynomials of block-encoded matrices, and thus offer a systematic approach to quantum algorithm design. However, despite a number of recent advances, important limitations remain. In particular, QSP can only transform unitary matrices, by applying a polynomial to their eigenvalues, while QSVT is a singular-value transformation and thus one can only obtain the polynomial of Hermitian matrices. As a consequence, these techniques do not directly apply to an arbitrary non-Hermitian matrix that is not diagonalizable. In this work, we propose a simple yet powerful method to extend these ideas to arbitrary square matrices by acting on their eigenvalues. To this end, we introduce the notion of an $n$-regular block encoding, namely, a block encoding whose $k$-th power reproduces the $k$-th power of the encoded matrix for every $0 < k < n$. We show that applying QSP to any unitary with this property is equivalent to applying a polynomial of degree at most $n$ to the block-encoded matrix, independently of its internal structure. Moreover, we provide a simple construction that transforms any block encoding into an $n$-regular one using only $O(\log n)$ ancillary qubits and operations. Finally, we show that this construction induces the desired transformation on the eigenvalues associated with the Jordan normal form of the matrix.
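The need for n-regularity can be seen in one line of arithmetic: an ordinary unitary dilation of a scalar a block-encodes a in its top-left entry, yet its square block-encodes 1 rather than a², so powers of the encoding do not reproduce powers of the encoded matrix. A minimal numeric sketch (illustrative only, not the paper's construction):

```python
import math

# Why n-regularity matters: the standard 1-qubit unitary dilation of the
# scalar a = 0.5 block-encodes a, but U^2 block-encodes 1, not a^2 = 0.25,
# because the dilation is a reflection. Pure-Python 2x2 arithmetic.

a = 0.5
s = math.sqrt(1 - a * a)
U = [[a, s],
     [s, -a]]                        # unitary dilation of a

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

U2 = matmul(U, U)
print(U[0][0])    # 0.5 -- the encoding reproduces a
print(U2[0][0])   # 1.0 -- NOT a^2: this encoding is not 2-regular
```

The paper's O(log n)-ancilla construction repairs exactly this failure, making U^k reproduce A^k for all k < n so that QSP polynomials act on the eigenvalues of A itself.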

Spin Kerr-cat qubits

Z. M. McIntyre, Daniel Loss

2604.19687 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper introduces a new type of noise-robust qubit encoding called 'spin Kerr-cat qubits' that uses nuclear spins in quadrupolar nuclei to suppress dephasing noise. The researchers estimate that using antimony donors in silicon, these qubits could achieve very long coherence times of 100 seconds and high gate fidelities of 99%.

Key Contributions

  • Introduction of spin Kerr-cat encoding using nuclear spins with first-order dephasing noise suppression
  • Theoretical analysis showing potential for 100-second coherence times and 99% gate fidelity using antimony donors in silicon
  • Proposal for two-qubit gates mediated by hopping electrons
spin Kerr-cat qubits noise-robust encoding nuclear spins quadrupolar nuclei dephasing suppression
View Full Abstract

The use of noise-robust qubit encodings provides a way of extending the lifetime of quantum information at the hardware level. In this work, we introduce the spin Kerr-cat encoding, which leverages a clock transition in the spectrum of quadrupolar nuclei (having spin length $I\geq 1$) to achieve a first-order suppression of noise leading to qubit dephasing. The basis states of the spin Kerr-cat qubit are given by the two lowest levels of a $\mathbb{Z}_2$-symmetric nuclear-spin Hamiltonian and are well approximated by spin cat states. We compute the dephasing time of the spin Kerr-cat qubit under a model of $1/f$ noise, as well as relaxation of the qubit due to breaking of the $\mathbb{Z}_2$ symmetry by charge-noise-induced fluctuations of the quadrupolar tensor. Using measured parameters for antimony (${}^{123}\mathrm{Sb}$) donors in silicon, we estimate that a coherence time of $T_2^*=100$ s could be achieved with this encoding. We propose a two-qubit gate mediated by hopping electrons and estimate that with an enhancement of measured quadrupolar splittings by a factor of $\approx 4$, a gate fidelity of $99\%$ could be achieved for spin Kerr-cat qubits encoded in ${}^{123}\mathrm{Sb}$ nuclear spins, neglecting errors that impact the electron while it is being shuttled and read out.

Fault-Tolerant Quantum Computing with Trapped Ions: The Walking Cat Architecture

Felix Tripier, Woo Chang Chung, Jacob Young, Safwan Alam, Bryce Bjork, Aharon Brodutch, Finn Lasse Buessen, Nolan J. Coble, Thomas Dellaert, Dmitri Ma...

2604.19481 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a comprehensive fault-tolerant quantum computer architecture called 'walking cat' for trapped-ion systems, using LDPC quantum error-correction codes and cat state factories to enable practical quantum computing with hundreds of logical qubits. The authors provide detailed blueprints including compiler, decoder, and micro-architecture designs, demonstrating how their approach could achieve classically intractable physics simulations using thousands of physical qubits.

Key Contributions

  • Complete fault-tolerant quantum computer architecture blueprint for trapped ions using LDPC codes
  • Demonstration of three architectural variants with specific resource estimates for hundreds of logical qubits
  • Practical design achieving million T gates per day with only 2,514 physical qubits for dense architecture
  • Resource estimates showing Heisenberg model simulation on 100 sites achievable within one month using 10,000 physical qubits
fault-tolerant quantum computing trapped ions LDPC codes quantum error correction cat states
View Full Abstract

We propose a fault-tolerant quantum computer architecture for trapped-ion devices, which we call the walking cat architecture. Our blueprint includes a compiler, a detailed description of all the quantum error-correction protocols, a micro-architecture, a sufficiently fast decoder, and thorough simulations. The backbone of the architecture is a cat factory, producing cat states distributed throughout the machine, which are consumed to perform logical operations. The walking cat architecture is based entirely on a modern quantum error-correction approach called low-density parity-check (LDPC) codes. We identify promising instances of the walking cat architecture, such as (1) a simple architecture based on a single LDPC code, (2) a fast architecture based on fast logical gates relying on a [[70, 6, 9]] code, equipped with Clifford-frame tracking for any 6-qubit Clifford gate, and (3) a dense architecture based on a [[102, 22, 9]] code encoding 22 logical qubits per memory block. Our dense architecture provides a design with 110 logical qubits executing about one million T gates per day using only 2,514 physical qubits. We estimate that the quantum Hamiltonian simulation of a Heisenberg model on 100 sites can be executed within one month with 10,000 physical qubits, including all shots required to achieve chemical accuracy, suggesting that such a device could enter the regime of classically intractable physics simulations. Our design relies on hardware components that have been experimentally demonstrated on small devices. We emphasize simplicity over hypothetical performance to facilitate the practical realization of this machine. Based on this approach, we believe that a fault-tolerant quantum computer with hundreds of logical qubits capable of running millions of logical gates can be built in the near term, providing a platform to explore a broad range of applications.

Photonic Chirality for Braiding and Readout of Non-Abelian Anyons

Netzer Moriya

2604.19456 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes using photonic cavities to control and measure non-Abelian anyons in quantum systems by creating rotating electromagnetic fields that can braid these exotic particles and read out their quantum states through cavity measurements.

Key Contributions

  • Novel cavity-based scheme for controlling non-Abelian anyon braiding using photonic chirality
  • Theoretical framework for reading out anyon quantum states through cavity intermode coherence measurements
Keywords: non-Abelian anyons, topological quantum computing, photonic chirality, braiding operations, cavity QED
Full Abstract

We propose a cavity-based scheme that uses photonic chirality to control braiding and read out non-Abelian anyons in a fractional quantum Hall platform. Counter-propagating cavity modes interfere with a classical reference tone to create a rotating pinning landscape whose direction is set by photon circulation, so that opposite photonic branches drive opposite anyon loops. This realizes a branch-conditioned braid operation and maps the resulting braid response onto cavity intermode coherence. We derive the rotating pinning term and the readout relation at the effective-theory level, identify an operating window set by subgap driving, adiabatic transport, localization, and cavity coherence, and provide phenomenological diagnostics of transport locking. In the minimal four-anyon Ising realization, the leading signal reduces to a calibrated phase; more generally, the same readout structure becomes state dependent when the relative braid operator is non-scalar. The scheme provides a cavity route to braid-sensitive readout of non-Abelian anyons without relying on fragile electronic interference fringes.

Quantum Homomorphic Encryption: Towards Practical and Private Computation on Untrusted Quantum Hardware

Jon Hernández-Bueno, Oscar Lage, Marivi Higuero, Jasone Astorga

2604.19256 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper develops a practical quantum homomorphic encryption scheme that allows secure computation on encrypted quantum data using untrusted quantum hardware. The approach extends quantum one-time pad encryption to enable arbitrary quantum computations while preserving privacy and has been validated on real quantum processors.

Key Contributions

  • Development of QOTPH framework for universal quantum homomorphic encryption
  • Non-interactive homomorphic evaluation of Clifford+T gate sets
  • Experimental validation on real quantum hardware demonstrating practical feasibility
Keywords: quantum homomorphic encryption, quantum one-time pad, privacy-preserving quantum computation, Clifford+T gates, delegated quantum computation
Full Abstract

As quantum computing matures into a practical paradigm, the need for secure and private quantum computation on untrusted hardware becomes increasingly urgent. While classical fully homomorphic encryption has enabled computation over encrypted data in untrusted environments, a fully homomorphic and practically implementable quantum counterpart remains elusive. In this work, we propose a universal quantum homomorphic encryption (QHE) framework developed from the Quantum One-Time Pad (QOTP) scheme. Our approach (QOTPH) maintains information-theoretic security and supports a broad class of quantum operations on encrypted quantum states through a systematic set of homomorphic gate decompositions and key update rules. By leveraging the symmetric structure of QOTP and exploiting the transformation properties of quantum gates under Pauli encryption, we enable non-interactive homomorphic evaluation of arbitrary circuits expressible in the Clifford+T gate set, as well as controlled and parameterized operations relevant to variational quantum algorithms and delegated computation. We provide a formal specification of the proposed encryption model, detail its implementation procedure, and report the results obtained from both simulated environments and real quantum processors. Experimental validation demonstrates the correctness of the homomorphic operations and the preservation of key secrecy under circuit-level noise and real-device constraints. This work takes a step toward bridging the gap between theoretical quantum homomorphic encryption and practical realization on near-term quantum hardware, offering a scalable and symmetric cryptographic primitive for privacy-preserving quantum computation.
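
The key-update idea behind Pauli-mask schemes like QOTP can be illustrated on a single qubit: the client encrypts with a random Pauli mask X^a Z^b, the server applies a Clifford gate blindly, and the client only updates its classical key. A minimal numpy sketch of this standard mechanism (illustrative only, not the paper's QOTPH protocol; `psi` and the key bits are arbitrary examples):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def pauli(a, b):
    """Quantum one-time pad mask X^a Z^b for key bits (a, b)."""
    return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)

rng = np.random.default_rng(7)
a, b = (int(v) for v in rng.integers(0, 2, size=2))   # random key bits
psi = np.array([0.6, 0.8])                            # example state to protect
enc = pauli(a, b) @ psi                               # encrypt

# Server applies H "blindly"; since H X^a Z^b = ± X^b Z^a H, the client
# just swaps its key bits: (a, b) -> (b, a). No interaction is needed.
out = H @ enc
dec = pauli(b, a) @ out        # Paulis square to ±identity, so this decrypts

# Decryption matches H|psi> up to a global phase.
assert np.allclose(np.abs(dec), np.abs(H @ psi))
```

The same pattern extends to other Clifford gates via their own key-update rules; T gates are what force extra corrections in general QHE constructions.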

Noise Reduction for Universal Hybrid Oscillator-Qubit Quantum Computation

Mohammad Nobakht, Ivan Kassal

2604.19163 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new noise reduction scheme for hybrid quantum computers that use both continuous-variable (oscillator) and discrete-variable (qubit) systems. The authors introduce an ancilla qubit technique that extends existing error correction methods to work with universal gate sets, including non-Gaussian operations, reducing noise from σ to approximately σ².

Key Contributions

  • Extended GKP-stabilizer codes with ancilla qubits to enable noise reduction for universal CV-DV gate sets
  • Demonstrated quadratic noise suppression (σ to ~σ²) for arbitrary CV-DV operations including non-Gaussian gates
  • Showed improved fidelity in preparation of non-Gaussian cat and Fock states as proof-of-concept
Keywords: hybrid quantum computing, continuous variable, discrete variable, GKP codes, error correction
Full Abstract

Hybrid continuous-variable-discrete-variable (CV-DV) architectures process quantum information in bosonic modes and qubits, but noise limits their performance. To reduce the noise, existing DV error correction must be complemented by CV noise reduction. Existing CV noise-reduction schemes, such as GKP-stabilizer codes, can reduce CV noise, but only for Gaussian gates. Therefore, no current noise-reduction scheme can correct arbitrary CV-DV gates, including non-Gaussian ones. Here, we develop noise reduction for a universal CV-DV gate set, making it applicable to arbitrary CV-DV gates. We do so by introducing an ancilla qubit into a GKP-stabilizer code, allowing us to reduce the standard deviation of Gaussian displacement noise from $σ$ to $\tilde O(σ^2)$. To demonstrate the scheme, we show that it significantly reduces noise and improves fidelity in the preparation of non-Gaussian cat and Fock states.

MonteQ: A Monte Carlo Tree Search Based Quantum Circuit Synthesis Framework

Mulundano Machiya, Matt Menickelly, Paul Hovland, Ji Liu

2604.19029 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper presents MonteQ, a framework for optimizing quantum circuits used in Hamiltonian simulation by combining Monte Carlo Tree Search with low-level synthesis techniques. The approach explores different orderings of Pauli rotations to reduce gate counts, achieving up to 53% improvement in CNOT gates compared to existing compilers.

Key Contributions

  • Novel two-level quantum circuit synthesis framework combining Monte Carlo Tree Search with low-level heuristics
  • Flexible approach supporting different Pauli term orderings and constraints for various simulation algorithms
  • Significant CNOT gate reduction (up to 53%) compared to state-of-the-art quantum compilers
Keywords: quantum circuit synthesis, Hamiltonian simulation, Monte Carlo Tree Search, Pauli rotations, quantum compiler optimization
Full Abstract

Hamiltonian simulation is one of the most promising paths toward quantum advantage. Most prior approaches to Hamiltonian simulation circuit synthesis focus on local rewrite rules and low-level optimizations, and give limited attention to high-level scheduling of Pauli terms under varying constraints. In practice, different simulation algorithms require different orderings of the Pauli terms, yet many prior IR-based methods assume a fixed commutation structure, which limits their flexibility. We present MonteQ, a novel quantum circuit synthesis framework for Hamiltonian simulation. MonteQ leverages a two-level design that combines low-level synthesis heuristics with an upper-level tree structure to explore sequences of Pauli rotations. To avoid enumerating this factorially large tree, the Monte Carlo Tree Search algorithm serves as the workhorse for judiciously exploring promising paths to leaf nodes. With this two-level design, MonteQ supports both logical-level and hardware-aware synthesis by selecting different low-level heuristics. It also supports different ordering constraints on the Pauli rotations by adjusting the high-level tree structure. For example, MonteQ can preserve the target unitary by using a directed acyclic graph that records the commutation relations among the Pauli rotations, or it can relax the unitary-preservation constraint to uncover additional optimization options. Our experimental results show that MonteQ can achieve an improvement, as measured in CNOT gate counts, of up to 53% (30% on average) against state-of-the-art compilers like Rustiq on a set of representative synthesis tasks.
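
The high-level search layer MonteQ builds on is standard Monte Carlo Tree Search (UCT). A generic sketch over orderings of a small item set, with a toy cost standing in for a low-level synthesis heuristic (nothing here is MonteQ's actual implementation; `mcts_order` and `inversions` are illustrative names):

```python
import math
import random

def mcts_order(items, cost, iters=1000, c=1.4, seed=1):
    """Generic UCT search over orderings of `items`, minimizing `cost(order)`.
    A toy stand-in for a two-level design: `cost` plays the role of a
    low-level heuristic (e.g. one reporting a CNOT count for a sequence)."""
    rng = random.Random(seed)
    stats = {}                        # prefix tuple -> [visits, total reward]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        prefix = ()
        # Selection: descend through visited prefixes by UCB1;
        # expansion: stop at the first unvisited child.
        while len(prefix) < len(items):
            choices = [x for x in items if x not in prefix]
            unvisited = [x for x in choices if prefix + (x,) not in stats]
            if unvisited:
                prefix += (rng.choice(unvisited),)
                stats[prefix] = [0, 0.0]
                break
            n_parent = sum(stats[prefix + (x,)][0] for x in choices)
            def ucb(x):
                visits, total = stats[prefix + (x,)]
                return total / visits + c * math.sqrt(math.log(n_parent) / visits)
            prefix += (max(choices, key=ucb),)
        # Rollout: complete the ordering uniformly at random.
        tail = [x for x in items if x not in prefix]
        rng.shuffle(tail)
        order = prefix + tuple(tail)
        reward = -cost(order)         # lower cost = higher reward
        if cost(order) < best_cost:
            best, best_cost = order, cost(order)
        # Backpropagation along the selected path.
        for k in range(1, len(prefix) + 1):
            stats[prefix[:k]][0] += 1
            stats[prefix[:k]][1] += reward
    return best, best_cost

# Toy cost: adjacent out-of-order pairs (0 for the sorted ordering).
inversions = lambda o: sum(o[i] > o[i + 1] for i in range(len(o) - 1))
best, best_c = mcts_order(list(range(6)), inversions, iters=800)
```

Ordering constraints (e.g. a commutation DAG) would enter by restricting `choices` at each node.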

Quantum Decoherence of the Surface Code: A Generalized Caldeira-Leggett Approach

E. Novais, A. H. Castro-Neto

2604.18968 • Apr 21, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper studies how quantum error correction in surface codes performs when coupled to realistic continuous quantum environments, rather than the simplified discrete noise typically assumed. By mapping the logical qubit's long-time evolution to a boundary conformal field theory, the authors show that quantum error correction has fundamental limits depending on the type of environmental coupling, with long-range environments potentially undermining the protective benefits of larger code sizes.

Key Contributions

  • Establishes fundamental limits of surface code error correction under realistic continuous quantum environments using Caldeira-Leggett framework
  • Proves exact mapping between logical qubit evolution and anisotropic Kondo model through boundary conformal field theory
  • Demonstrates existence of thermodynamic threshold only for short-range environments, with long-range coupling undermining topological protection
Keywords: surface code, quantum error correction, decoherence, Caldeira-Leggett, topological protection
Full Abstract

Standard quantum error correction (QEC) models typically assume discrete, Markovian noise, obscuring the continuous quantum nature of physical environments. In this manuscript, we investigate the fundamental limits of an actively corrected surface code coupled to a continuous, un-reset quantum environment at zero and finite temperature. Using the generalized Caldeira-Leggett framework, we map the long-time evolution of the logical qubit to a boundary conformal field theory, establishing an exact equivalence to the anisotropic Kondo model. We evaluate computational times for a finite code distance $L$ for all spatial and temporal correlations. Our analysis reveals that a true thermodynamic threshold exists strictly for short-range environments ($z>1/(s+1)$). In critical or long-range regimes, the macroscopic footprint of the code weaponizes the continuous bath, hindering the topological protection.

Understanding Quantum Instruments

Akel Hashim

2604.18884 • Apr 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper provides practical guidance for understanding quantum instrument error models, which are essential for accurately modeling mid-circuit measurements in quantum computing applications like adaptive circuits and quantum error correction. The work addresses how to interpret error models when the quantum-classical system has distinct errors for each measurement outcome.

Key Contributions

  • Provides practical guidance for interpreting quantum instrument error models in mid-circuit measurements
  • Clarifies how superoperator error representations work for joint quantum-classical states with outcome-dependent errors
Keywords: quantum instruments, mid-circuit measurements, quantum error correction, superoperators, adaptive circuits
Full Abstract

The quantum instrument (QI) formalism is required to model mid-circuit measurements (MCMs) and the dependence of the post-measurement state on the measurement outcome. Correctly modeling QIs is essential for applications using MCMs, such as adaptive circuits and quantum error correction. Although QIs yield a joint quantum-classical state after measurement, errors in QIs can still be represented by a $d^2 \times d^2$ superoperator (e.g., process or transfer matrix) for each outcome, just as superoperators describe Markovian errors on unitary gates. However, because the joint quantum-classical system has a distinct error model for each outcome, this complicates the usual interpretation of process- or transfer-matrix error models. This Note offers practical guidance on understanding and interpreting QI error models.
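
The per-outcome superoperator picture the Note discusses can be made concrete with the simplest example: an ideal Z-basis instrument on one qubit, where each outcome gets its own $d^2 \times d^2$ transfer matrix (a textbook illustration, not the Note's error models; the column-stacking convention is one common choice):

```python
import numpy as np

# Ideal Z-basis quantum instrument on one qubit (d = 2): one Kraus
# operator per outcome, K_m = |m><m|. Each outcome's action is a
# d^2 x d^2 superoperator; with column-stacking, S_m = conj(K_m) (x) K_m,
# since vec(K rho K^dag) = (conj(K) (x) K) vec(rho).
d = 2
K = [np.outer(e, e) for e in np.eye(d)]           # K_0 = |0><0|, K_1 = |1><1|
S = [np.kron(Km.conj(), Km) for Km in K]          # one 4x4 matrix per outcome

rho = np.array([[0.7, 0.3], [0.3, 0.3]])          # example input state
vec = rho.flatten(order="F")                      # column-stacked vec(rho)

# S_m produces the unnormalized post-measurement state; its trace is
# the probability of outcome m — the joint quantum-classical structure.
post = [(Sm @ vec).reshape((d, d), order="F") for Sm in S]
probs = [P.trace().real for P in post]
```

Noisy instruments replace each ideal `S_m` with an imperfect outcome-dependent superoperator, which is exactly what complicates the usual process-matrix interpretation.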

Engineered broadband Purcell protection using a shared $Π$-filter for multiplexed superconducting qubits

Samuel D. Escribano, Yael Kriheli, Samuel Goldstein, Daniel Dahan, Nadav Katz

2604.18387 • Apr 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a new broadband filter design (Π-filter) that protects multiple superconducting qubits from unwanted energy loss through their control lines. The shared filter can simultaneously protect many qubits with minimal additional hardware, achieving protection over a wide frequency range of 1.5 GHz.

Key Contributions

  • Novel Π-filter geometry for broadband Purcell protection of multiple superconducting qubits simultaneously
  • Demonstration of Purcell-limited relaxation times exceeding 1 ms over a 1.5 GHz frequency span with minimal hardware overhead
  • Scalable architecture compatible with standard dispersive readout protocols
Keywords: superconducting qubits, Purcell effect, coherence time, quantum error mitigation, microwave engineering
Full Abstract

We propose a broadband Purcell-protection scheme based on a single shared filter integrated directly into the feedline, enabling simultaneous protection of multiple qubits in a compact architecture with minimal hardware overhead. The filter consists of two open-ended stubs connected by an in-line transmission line, forming a $Π$ geometry, and operates via engineered passive microwave interference that suppresses the real part of the environmental admittance over a wide frequency window. Circuit simulations and finite-element modeling show strong suppression of transmission within the target band (the qubits' frequencies) while preserving the readout and reset modes of the multiplexed architecture. For realistic device parameters, the proposed design yields Purcell-limited relaxation times exceeding $1$ ms over a frequency span of approximately $1.5$ GHz, which can be further extended with straightforward modifications of the design. Our results establish the $Π$-filter as a compact and scalable solution for broadband impedance engineering in superconducting quantum circuits, compatible with standard dispersive readout protocols.

Block-encodings as programming abstractions: The Eclipse Qrisp BlockEncoding Interface

Matic Petrič, René Zander

2604.18276 • Apr 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper introduces a software programming interface called BlockEncoding within the Eclipse Qrisp framework that makes advanced quantum algorithms more accessible by providing high-level abstractions for block-encoding techniques. The interface simplifies the implementation of complex quantum algorithms like matrix inversion and Hamiltonian simulation that are foundational to many quantum computing applications.

Key Contributions

  • Development of BlockEncoding programming interface in Eclipse Qrisp framework for implementing advanced quantum algorithms
  • Abstraction of complex block-encoding techniques into accessible high-level programming constructs
  • Integration of key algorithms like Childs-Kothari-Somma algorithm and practical examples for matrix operations and Hamiltonian simulation
Keywords: block-encoding, quantum algorithms, QSVT, quantum signal processing, Hamiltonian simulation
Full Abstract

Block-encoding is a foundational technique in modern quantum algorithms, enabling the implementation of non-unitary operations by embedding them into larger unitary matrices. While theoretically powerful and essential for advanced protocols like Quantum Singular Value Transformation (QSVT) and Quantum Signal Processing (QSP), the generation of compilable implementations of block-encodings poses a formidable challenge. This work presents the BlockEncoding interface within the Eclipse Qrisp framework, establishing block-encodings as a high-level programming abstraction accessible to a broad scientific audience. Serving as both a technical framework introduction and a hands-on tutorial, this paper explicitly details key underlying concepts abstracted away by the interface, such as block-encoding construction and qubitization, and their practical integration into methods like the Childs-Kothari-Somma (CKS) algorithm. We outline the interface's software architecture, encompassing constructors, core utilities, arithmetic composition, and algorithmic applications such as matrix inversion, polynomial filtering, and Hamiltonian simulation. Through code examples, we demonstrate how this interface simplifies both the practical realization of advanced quantum algorithms and their associated resource estimation.
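
The core idea the interface abstracts — embedding a non-unitary matrix into a block of a larger unitary — can be shown directly in numpy. A minimal single-ancilla construction for a Hermitian matrix (a mathematical sketch only, not the Qrisp `BlockEncoding` API; `block_encode` is an illustrative name):

```python
import numpy as np

def block_encode(A):
    """Embed Hermitian A with ||A|| <= 1 as the top-left block of a
    unitary U = [[A, B], [B, -A]], with B = sqrt(I - A^2). A minimal
    single-ancilla construction; practical block-encodings also track
    a normalization factor alpha, which this sketch omits."""
    w, V = np.linalg.eigh(A)
    assert np.max(np.abs(w)) <= 1 + 1e-12, "need ||A|| <= 1"
    B = V @ np.diag(np.sqrt(np.clip(1 - w ** 2, 0.0, None))) @ V.conj().T
    return np.block([[A, B], [B, -A]])

A = np.array([[0.3, 0.4], [0.4, -0.1]])   # Hermitian with norm < 1
U = block_encode(A)

# U is unitary, and projecting the ancilla onto |0> recovers A:
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.allclose(U[:2, :2], A)
```

Algorithms like QSVT then apply polynomial transformations to the singular values of the encoded block; generating circuit-level implementations of such unitaries is the hard part the interface addresses.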

Fast, High-Fidelity Erasure Detection of Dual-Rail Qubits with Symmetrically Coupled Readout

Jimmy Shih-Chun Hung, Arbel Haim, Mouktik Raha, Gihwan Kim, Ziwen Huang, Ming-Han Chou, Mitch D'Ewart, Erik Davis, Anurag Mishra, Patricio Arrangoiz A...

2604.16292 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a fast and accurate method for detecting when quantum information is lost (erasure detection) in dual-rail qubits using transmon quantum processors. The researchers achieved detection in 384 nanoseconds with very low error rates, and showed they can perform this detection continuously while running quantum gates.

Key Contributions

  • Fast single-shot erasure detection in 384 ns with residual error of 6.0×10^-4 using symmetrically coupled readout
  • Demonstration of time-continuous erasure detection performed in parallel with single-qubit gates achieving median 7.2×10^-5 error per gate
Keywords: erasure qubits, quantum error correction, dual-rail encoding, dispersive readout, transmon
Full Abstract

Erasure qubits are a promising platform for implementing hardware-efficient quantum error correction. Realizing the error-correction advantages of this encoding requires frequent mid-circuit erasure checks that are fast, high-fidelity, and scalable. Here, we realize erasure detection with a hardware-efficient circuit consisting of a single readout resonator dispersively and symmetrically coupled to both transmons of a dual-rail qubit. We use this circuit to demonstrate single-shot erasure detection in 384 ns with minimal impact on the dual-rail logical manifold, achieving a residual error per check of $6.0(2) \times 10^{-4}$, with only $8(3) \times 10^{-5}$ induced dephasing per check, and an erasure error per check of $2.54(1)\times 10^{-2}$. The high degree of matched dispersive readout coupling ($χ$-matching) within the dual-rail qubit code space also allows us to realize a new modality: time-continuous erasure detection performed in parallel with single-qubit gates. Here we achieve a median $7.2 \times 10^{-5}$ error per gate with $< 1 \times 10^{-5}$ error induced by erasure detection. This demonstrates a reduction in erasure detection overhead as well as a crucial ingredient for soft information quantum error correction. Together, these results establish symmetrically coupled dispersive readout as a fast, hardware-efficient, and scalable component for erasure-based quantum error correction using transmon dual-rail qubits.

Yttrium ion as a platform for quantum information processing

Christopher N. Gilbreth, Dmytro Filin, Marianna S. Safronova, Guanming Lao, Eric R. Hudson

2604.16274 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper investigates yttrium ions as a new platform for quantum computing, performing spectroscopy measurements and theoretical calculations to characterize their properties. The researchers analyze how these ions could be used for quantum information processing tasks like qubit storage, gates, and readout in trapped-ion quantum computers.

Key Contributions

  • High-resolution spectroscopy measurements of yttrium ion hyperfine structure
  • Comprehensive electronic structure calculations for transition properties and lifetimes
  • Analysis of quantum gate schemes and qubit operations for yttrium ions
  • Demonstration of field-insensitive nuclear-spin qubit storage capabilities
Keywords: trapped ions, quantum computing, hyperfine structure, nuclear spin qubits, quantum gates
Full Abstract

Engineering large-scale quantum computers which simultaneously provide high-fidelity quantum operations, low memory errors, low crosstalk, and reasonable resource usage remains an outstanding challenge across quantum computing platforms. In trapped ions, progress has largely focused on alkaline-earth and ytterbium ions, whose simple electronic structures facilitate control over their internal state. Here we investigate singly-ionized yttrium ($^{89}\mathrm{Y}^+$), a two-valence-electron ion whose ground-state manifold hosts a nuclear-spin qubit and which also features a variety of low-lying metastable manifolds, for applications in quantum information processing. Because experimental data are limited, we perform high-resolution laser-induced fluorescence spectroscopy to measure the hyperfine structure of several low-lying levels, and carry out comprehensive electronic structure calculations to determine lifetimes, transition matrix elements, and hyperfine coefficients for manifolds addressable with visible, near-visible, or infrared wavelengths. Using these results, we analyze schemes for qubit storage, initialization, readout, leakage mitigation, and single- and two-qubit gates. These results position $^{89}\mathrm{Y}^+$ as a uniquely capable next-generation trapped-ion qubit, combining field-insensitive nuclear-spin or clock-qubit storage with spectrally isolated transitions for operations.

A digitally controlled silicon quantum processing unit

Members of the HRL Quantum Team and Collaborators: Michael Abraham, Edwin Acuna, Tower S. Adams, Moonmoon Akmal, Matthew R. Alfaro, I. Alvarado, Ja...

2604.16216 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a silicon quantum processing unit that integrates a cryogenic CMOS controller with an array of 54 quantum dots configured as up to 18 exchange-only qubits. The researchers achieved order-of-magnitude improvements in qubit performance and successfully implemented quantum error correction codes, advancing silicon-based quantum computing toward commercial scalability.

Key Contributions

  • Order-of-magnitude improvement in exchange-only qubit performance for both single-qubit and two-qubit operations
  • Integrated quantum processing unit combining cryogenic CMOS controller with high-density superconducting ribbon cable
  • Successful demonstration of distance-5 repetition code and quantum error detecting code on silicon platform
  • Scalable architecture with 54 quantum dots configurable as up to 18 exchange-only qubits
Keywords: silicon qubits, exchange-only qubits, quantum error correction, cryogenic CMOS, quantum processing unit
Full Abstract

Commercially-relevant quantum computers will require large numbers of high-performing qubits that can be manufactured, integrated, and controlled at scale. Silicon exchange-only (EO) qubits are a strong candidate modality due to their control-signal simplicity and compatibility with advanced semiconductor manufacturing, but questions remain around the achievability of sufficiently low noise and a scalable control and wiring solution. Here we introduce a quantum processing unit composed of a custom-designed cryogenic CMOS controller, a novel high-density superconducting ribbon cable, and a low-noise EO qubit device. The quantum chip features a three-rail array of 54 exchange-coupled quantum dots, configurable to host up to 18 EO qubits. We integrate and use these components to demonstrate qubit performance for both single-qubit and entangling operations that advances the EO state of the art by an order of magnitude. We further validate this system by implementing a distance-5 repetition code and a quantum error-detecting code, then make detailed comparisons with simulations. Our approach facilitates a utility-scale quantum computer with manageable operational and capital requirements.
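
Why a distance-5 repetition code is a meaningful benchmark: under an idealized independent bit-flip model, majority-vote decoding suppresses the logical error rate rapidly with distance. A textbook sketch (this single-round model is not the experiment's circuit-level noise, and `logical_error_rate` is an illustrative name):

```python
from math import comb

def logical_error_rate(p, d):
    """Majority-vote logical error rate of a distance-d repetition code
    under i.i.d. bit flips with physical error rate p: the code fails
    when more than half of the d qubits flip."""
    return sum(comb(d, k) * p**k * (1 - p) ** (d - k)
               for k in range((d + 1) // 2, d + 1))

p = 0.01
rates = {d: logical_error_rate(p, d) for d in (1, 3, 5)}
# Error suppression with distance: rates[5] < rates[3] < rates[1] = p.
```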

Towards Ultra-High-Rate Quantum Error Correction with Reconfigurable Atom Arrays

Chen Zhao, Casey Duckering, Andi Gu, Nishad Maskara, Hengyun Zhou

2604.16209 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops ultra-high-rate quantum error correction codes that can encode more than half of the qubits as logical qubits (encoding rate >1/2) while maintaining very low error rates, specifically designed for implementation on reconfigurable neutral atom quantum computers.

Key Contributions

  • Development of structural conditions for ultra-high-rate quantum codes with encoding rates exceeding 1/2
  • Demonstration of extremely low logical error rates (10^-11 to 10^-13) with large quantum codes on neutral atom arrays
Keywords: quantum error correction, ultra-high-rate codes, neutral atom arrays, fault tolerance, LDPC codes
Full Abstract

Quantum error correction is widely believed to be essential for large-scale quantum computation, but the required qubit overhead remains a central challenge. Quantum low-density parity-check codes can substantially reduce this overhead through high-rate encodings, yet finite-size instances with practical logical error rates often achieve encoding rates only around or below $1/10$. Here, building on a recent ultra-high-rate construction by Kasai, we identify new structural conditions on the underlying affine permutation matrices that make encoding rates exceeding $1/2$ compatible with efficient implementation on reconfigurable neutral atom arrays. These conditions define a co-designed family of ultra-high-rate quantum codes that supports efficient syndrome extraction and atom rearrangement under realistic parallel control constraints. Using a hierarchical decoder with high accuracy and good throughput, we study the performance under a circuit-level noise model with $p=0.1\%$, achieving per-logical-per-round error rates of $1.3_{-0.9}^{+3.0} \times 10^{-13}$ with a $[[2304,1156,\leq 14]]$ code and $2.9_{-1.5}^{+3.1} \times 10^{-11}$ with a $[[1152,580,\leq 12]]$ code. These results approach the teraquop regime, highlighting the promise of this code family for practical ultra-high-rate quantum error correction.
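
The "encoding rate exceeding 1/2" claim can be read directly off the quoted $[[n, k, d]]$ parameters, since the rate is simply $k/n$ (arithmetic on the abstract's numbers only):

```python
# Encoding rate k/n for the two codes quoted in the abstract; both just
# exceed 1/2, versus the ~1/10 typical of earlier finite-size instances.
codes = [("[[2304, 1156, <=14]]", 2304, 1156),
         ("[[1152, 580, <=12]]", 1152, 580)]
rates = {name: k / n for name, n, k in codes}
assert all(r > 0.5 for r in rates.values())
```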

Coherence dynamics in Simon's quantum algorithm

Linlin Ye, Zhaoqi Wu, Shao-Ming Fei

2604.16190 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes how quantum coherence changes during the execution of Simon's quantum algorithm, using mathematical measures to track coherence in different parts of the quantum system. The researchers find that coherence behavior depends on the system size, with larger systems generally producing coherence while smaller ones deplete it.

Key Contributions

  • Mathematical analysis of coherence dynamics in Simon's algorithm using Tsallis relative entropy and l1,p norm measures
  • Proof that coherence production versus depletion depends on system dimension N, with a critical threshold at N=4
Keywords: Simon's algorithm, quantum coherence, Tsallis entropy, quantum algorithms, coherence dynamics
Full Abstract

Quantum coherence plays a pivotal role in quantum algorithms. We study the coherence dynamics of the evolved states in Simon's quantum algorithm based on the Tsallis relative $α$ entropy and the $l_{1,p}$ norm. We prove that the coherences of the first and second registers both depend on the dimension $N$ of the state space of the $n$-qubit systems, and increase as $N$ increases. We show that the oracle operator $O$ does not change the coherence. Moreover, we study the coherence dynamics in Simon's quantum algorithm and prove that, overall, coherence is produced when $N>4$ and depleted when $N<4$.
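
The dimension dependence is easy to see for the simplest related measure, the standard $l_1$-norm coherence (a close relative of the paper's $l_{1,p}$ family; this sketch is illustrative, not the paper's calculation). For the uniform superposition that the initial Hadamard layer prepares, $C_{l_1} = N - 1$, which grows with $N$:

```python
import numpy as np

def l1_coherence(rho):
    """Standard l1-norm coherence: sum of |off-diagonal| entries of rho."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

# Uniform superposition over N = 2^n basis states, as produced in the
# first register by the initial Hadamard layer: C_l1 = N - 1.
vals = []
for n in (1, 2, 3):
    N = 2 ** n
    psi = np.full(N, 1 / np.sqrt(N))
    vals.append((N, l1_coherence(np.outer(psi, psi))))
```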

Quantum-Resistant Quantum Teleportation

Xin Jin, Nitish Kumar Chandra, Mohadeseh Azari, Jinglei Cheng, Zilin Shen, Kaushik P. Seshadreesan, Junyu Liu

2604.16101 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: high

This paper proposes a quantum-resistant quantum teleportation framework that uses post-quantum cryptography to protect the classical communication channel from quantum computer attacks. The researchers analyze how quantum memory limitations affect both communication distance and security, finding optimal attack windows and deriving bounds for information leakage under various attack scenarios.

Key Contributions

  • Framework combining quantum teleportation with post-quantum cryptography to resist quantum adversaries
  • Analysis showing quantum memory coherence time creates bounded optimal attack windows with Bell-shaped probability profiles
  • Closed-form expressions for information leakage bounds under four stochastic attack models on classical correction bits
Keywords: quantum teleportation, post-quantum cryptography, quantum communication, quantum memory, Holevo quantity
Full Abstract

We propose a quantum-resistant quantum teleportation (QRQT) framework protected by post-quantum cryptography (PQC) to secure the classical correction channel, which is vulnerable to quantum adversaries. By applying PQC to the classical control bits, QRQT eliminates the classical attack surface of quantum teleportation. Our analysis reveals that quantum memory is a hidden bottleneck linking physical and computational security: its finite coherence time simultaneously limits communication distance, constrains tolerable PQC overhead, and restricts the adversary attack window. Under realistic parameters (1 ms coherence, fiber-optic propagation), the maximum secure teleportation distance ranges from 191 km (FrodoKEM-1344) to 199 km (Kyber512). We show that the joint classical-quantum attack probability exhibits a non-monotonic, Bell-shaped profile due to the opposing time dependencies of classical cryptanalysis and quantum decoherence, establishing a bounded optimal attack window beyond which adversarial success decays exponentially. We further analyze how leakage of classical correction bits affects teleportation security under four stochastic leakage models: independent exponential, sequential, burst, and correlated leakage, also accounting for amplitude damping on the shared Bell pair. For each scenario, we derive closed-form expressions for the average Holevo quantity and teleportation fidelity as functions of time, providing measurement-independent upper bounds on extractable information and guiding the design of leakage-resilient quantum communication protocols.
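
The distance ceiling follows from simple kinematics: the quantum memory must survive while the classical correction bits propagate. A back-of-the-envelope check, assuming a typical fiber group velocity of roughly $2 \times 10^8$ m/s (the velocity is our assumption, not a number from the paper):

```python
# Memory-limited distance ceiling: the 1 ms coherence time quoted in the
# abstract caps the one-way classical-correction distance at ~200 km,
# bracketing the reported 191 km (FrodoKEM-1344) to 199 km (Kyber512)
# once PQC processing overhead is subtracted.
v_fiber = 2.0e8        # m/s, assumed fiber group velocity
t_coh = 1e-3           # s, memory coherence time from the abstract

d_max_km = v_fiber * t_coh / 1e3
assert 191 <= d_max_km  # the quoted secure distances sit just below this cap
```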

MacWilliams Identities for Intrinsic Quantum Codes

Eric Kubischta, Ian Teixeira

2604.16023 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a mathematical framework for analyzing quantum error correction codes using group theory and representation theory. The authors introduce 'intrinsic quantum codes' and create tools called projector and twirl enumerators that help determine the optimal properties and limits of quantum error correction codes, particularly for systems with certain symmetries.

Key Contributions

  • Development of intrinsic enumerator framework for quantum error correction using group theory
  • Derivation of MacWilliams identities for quantum codes with symmetries
  • Linear programming bounds for permutation-invariant quantum codes
  • Extension to matrix-valued enumerators for multiplicity cases
quantum error correction • group theory • representation theory • MacWilliams identities • linear programming bounds
View Full Abstract

We develop an intrinsic enumerator framework for quantum error correction in unitary representations of symmetry groups. An intrinsic quantum code is a subspace of a representation $V$ of a group $G$, and errors are organized by the decomposition of the conjugation representation on $\mathcal{L}(V)$ into isotypic subspaces. Associated with any orthogonal decomposition of $\mathcal{L}(V)$ we introduce two families of quadratic enumerators, called projector and twirl enumerators, which satisfy positivity, normalization, and Knill-Laflamme type inequalities. When the conjugation representation is multiplicity-free, these enumerators are related by a linear transform that we interpret as an intrinsic MacWilliams identity. For $G=\mathrm{SU}(2)$, we compute this transform explicitly in terms of Wigner $6j$-symbols. Applied to symmetric-power representations, this gives linear programming bounds for permutation-invariant qubit and qudit codes, including extremality results for the four-qubit, seven-qubit, and three-qutrit examples treated here. We also develop the general equivariant theory in the presence of multiplicities, where the enumerators become matrix-valued, the MacWilliams transform becomes block unitary, and the resulting feasibility problem becomes semidefinite; we illustrate this theory in a first non-multiplicity-free $\mathrm{SU}(3)$ example.
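For orientation, the classical MacWilliams identity that these intrinsic transforms generalize relates the weight enumerator of a binary linear code $C \subseteq \mathbb{F}_2^n$ to that of its dual:

```latex
W_{C^\perp}(x, y) \;=\; \frac{1}{|C|}\, W_C(x + y,\; x - y),
\qquad
W_C(x, y) \;=\; \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)} .
```

Roughly speaking, the paper's projector and twirl enumerators play the role of the pair $(W_C, W_{C^\perp})$, with the Wigner $6j$-symbol transform replacing the substitution $(x, y) \mapsto (x+y,\, x-y)$.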

Digital Predistortion for Flux Control of Tunable Superconducting Qubits

Dharun Venkateswaran, Felice Francesco Tafuri, Yuanzheng Paul Tan, Bruno Aznar Martinez, Alisa Danilenko, Likai Yang, Arnaud Carignan-Dugas, Christoph...

2604.15895 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a digital predistortion framework to correct distortions in flux control pulses used for two-qubit gates in superconducting quantum computers. The method uses digital filters to compensate for hardware imperfections and demonstrates sub-percent deviations from ideal control signals.

Key Contributions

  • Digital predistortion framework combining IIR and FIR filters for flux control correction
  • Experimental demonstration of automated calibration achieving sub-percent deviations from ideal control signals
superconducting qubits • flux control • digital predistortion • gate fidelity • quantum processing unit
View Full Abstract

Flux-tunable superconducting qubits rely on fast flux control pulses to implement two-qubit entangling quantum gates, a key building block for quantum algorithms. However, distortion effects introduced by non-ideal control electronics, parasitic components, and the cryogenic quantum chip response can all degrade the gate fidelity. We present a digital predistortion (DPD) framework for characterizing and then compensating for these distortions using a combination of infinite impulse response (IIR) and finite impulse response (FIR) filters. Experiments on a flux-tunable quantum processing unit (QPU) demonstrate a successful correction of step-response distortions on the flux-control line, with a compensated control signal showing only sub-percent deviations from the ideal target linear behavior. The demonstrated method enables automated rapid calibration of flux control channels for superconducting QPUs.
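The inversion principle behind such predistortion can be sketched with a single-pole toy model: if the flux line behaves as a one-pole low-pass IIR filter, the exact inverse filter can be applied to the target waveform before it enters the line. This is only an illustration of the idea; the paper's framework identifies and cascades several IIR and FIR sections from measured step responses.

```python
import numpy as np

def distort(x, a):
    """Toy flux-line distortion: one-pole low-pass IIR
    y[n] = a*y[n-1] + (1-a)*x[n]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = a * (y[n - 1] if n else 0.0) + (1 - a) * x[n]
    return y

def predistort(target, a):
    """Exact inverse of the one-pole model above:
    x_pre[n] = (target[n] - a*target[n-1]) / (1 - a)."""
    prev = np.concatenate(([0.0], target[:-1]))
    return (target - a * prev) / (1 - a)

a = 0.9
target = np.ones(50)                          # ideal step pulse
raw = distort(target, a)                      # slow exponential settling
corrected = distort(predistort(target, a), a) # predistorted, then distorted
print(np.max(np.abs(raw - target)), np.max(np.abs(corrected - target)))
```

Without correction the step settles slowly toward the target; with the inverse filter applied first, the line's output tracks the ideal step to numerical precision.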

Module Lattice Security (Part I): Unconditional Verification of Weber's Conjecture for $k \le 12$

Ming-Xing Luo

2604.15858 • Apr 17, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: low

This paper provides the first unconditional mathematical proof of Weber's conjecture for cases k ≤ 12, which is important for the security foundations of lattice-based cryptography including Ring-LWE and Module-LWE cryptographic systems.

Key Contributions

  • First unconditional proof of Weber's conjecture for k ≤ 12 without relying on Generalized Riemann Hypothesis
  • Novel combination of Fukuda-Komatsu computational sieve, cyclotomic Z_2-tower structure, and Herbrand's theorem for lattice security analysis
lattice-based cryptography • Weber's conjecture • Ring-LWE • Module-LWE • post-quantum cryptography
View Full Abstract

Weber's conjecture (1886) governs three aspects of lattice-based cryptography: the solvability of the Principal Ideal Problem, the freeness of modules over rings of integers, and the tightness of worst-case-to-average-case reductions in Ring-LWE (R-LWE) and Module-LWE (MLWE). Existing verifications for $k \ge 9$ rely on the Generalized Riemann Hypothesis (GRH). In this paper, we present the first unconditional proof for $k \le 12$. Our method combines the Fukuda-Komatsu computational sieve, the inductive structure of the cyclotomic $\mathbb{Z}_2$-tower, and Herbrand's theorem.
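For context, Weber's conjecture asserts (in one common indexing; conventions for the level vary) that the maximal real subfields of the $2$-power cyclotomic tower all have trivial class group:

```latex
h\!\left( \mathbb{Q}\!\left(\zeta_{2^{k+2}}\right)^{+} \right) = 1
\quad \text{for all } k \ge 1 .
```

Trivial class number means every ideal in these rings of integers is principal, which is what ties the conjecture to the Principal Ideal Problem and to the freeness of modules mentioned in the abstract.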

Heuristic Search for Minimum-Distance Upper-Bound Witnesses in Quantum APM-LDPC Codes

Kenta Kasai

2604.15307 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops methods to find better upper bounds on the minimum distance of quantum LDPC codes by systematically searching for low-weight logical operators that aren't stabilizers. The authors create a framework to generate and verify these witnesses across different construction methods, providing certified bounds for codes built from affine permutation matrices.

Key Contributions

  • Unified framework for finding and certifying minimum distance upper bounds in quantum LDPC codes through multiple witness construction methods
  • Development of exact certification criteria for block-compression cases and systematic verification procedures for excluding stabilizer row space membership
quantum LDPC codes • minimum distance bounds • quantum error correction • stabilizer codes • fault tolerance
View Full Abstract

This paper investigates certified upper bounds on the minimum distance of an explicit family of Calderbank-Shor-Steane quantum LDPC codes constructed from affine permutation matrices. All codes considered here have active Tanner graphs of girth eight. Rather than attempting to prove a general lower bound for the full code distance, we focus on constructing low-weight non-stabilizer logical representatives, which yield valid upper bounds once they are verified to lie in the opposite parity-check kernel and outside the stabilizer row space. We develop a unified framework for such witnesses arising from latent row relations, restricted-lift subspaces including block-compressed, selected-fiber, and CRT-stripe constructions, cycle-8 elementary trapping-set structures, and decoder-failure residuals. In every case, search is used only to generate candidates; the reported bounds begin only after explicit kernel and row-space exclusion tests have been passed. For the latent part, we also identify a block-compression criterion under which the certification becomes exact. Applying these methods to representative APM-LDPC codes sharpens previously reported upper bounds and provides concrete certified values across the explored parameter range.
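The certification step described above, checking that a candidate lies in the opposite parity-check kernel and outside the stabilizer row space, can be sketched generically for a CSS code. This is a textbook GF(2) check on a toy code, not the paper's APM-specific witness constructions.

```python
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 vectors, by Gaussian elimination."""
    rows = [np.array(r) % 2 for r in rows]
    if not rows:
        return 0
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = (rows[i] + rows[rank]) % 2
        rank += 1
    return rank

def is_distance_witness(v, H_opposite, stabilizer_rows):
    """v certifies d <= wt(v) iff H_opposite @ v = 0 over GF(2)
    and v is NOT in the span of the stabilizer rows."""
    in_kernel = not np.any(H_opposite @ v % 2)
    in_rowspace = gf2_rank(stabilizer_rows + [v]) == gf2_rank(stabilizer_rows)
    return bool(in_kernel and not in_rowspace)

# Toy example: 3-qubit bit-flip code (Z checks only, no X stabilizers),
# so v = (1,1,1) is a non-stabilizer logical giving the bound d_X <= 3.
H_Z = np.array([[1, 1, 0], [0, 1, 1]])
v = np.array([1, 1, 1])
print(is_distance_witness(v, H_Z, []), "-> certifies d_X <=", int(v.sum()))
```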

Universal quantum state purification with energy-preserving operations

Xing-Chen Guo, Benchi Zhao, Xin Wang

2604.15228 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper develops methods for purifying quantum states (removing noise) while conserving energy, establishing fundamental limits on what's possible and providing optimal protocols. The work extends traditional quantum error correction by showing how to clean up noisy quantum states using only energy-preserving operations, which is more realistic for practical quantum devices.

Key Contributions

  • Established fundamental limits for energy-preserving quantum state purification under depolarizing noise
  • Derived optimal protocols for universal state purification with energy conservation constraints
  • Provided systematic implementation methods using only energy-preserving operations
quantum error correction • state purification • energy conservation • depolarizing noise • quantum error mitigation
View Full Abstract

Quantum state purification, which operates not by identifying and correcting specific errors but by repeatedly projecting multiple noisy copies onto special subspaces, provides a syndrome-free alternative to quantum error correction. Existing purification protocols, however, generally assume unconstrained operations and thus overlook the energetic restrictions inherent in realistic quantum devices. Here, we establish a general framework for universal state purification under energy-conservation constraints for depolarizing noise. We derive a necessary and sufficient condition for the nonexistence of universal energy-preserving purification and, whenever such purification is feasible, analytically determine the optimal performance and the corresponding protocols. We further show how the optimal protocols can be systematically implemented using only energy-preserving operations. Numerical results confirm the effectiveness of the proposed scheme. Our framework recovers the standard purification setting as a special case and naturally extends to scenarios assisted by external energy resources. These results identify fundamental physical limits on state distillation and provide an energy-efficient route to quantum error mitigation.
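The unconstrained baseline that this paper restricts to energy-preserving operations can be illustrated numerically: project two copies of a depolarized qubit onto the symmetric subspace and keep one copy. The sketch below is the standard two-copy purification, not the paper's energy-constrained protocol.

```python
import numpy as np

p = 0.2                                            # depolarizing strength
rho = (1 - p) * np.diag([1.0, 0.0]) + p * np.eye(2) / 2   # noisy |0>

# Projector onto the symmetric subspace of two qubits:
# identity minus the singlet projector.
singlet = np.zeros(4)
singlet[1], singlet[2] = 1.0, -1.0
singlet /= np.sqrt(2)
P_sym = np.eye(4) - np.outer(singlet, singlet)

joint = P_sym @ np.kron(rho, rho) @ P_sym
joint /= np.trace(joint)                           # postselect on success
# Trace out the second copy: result[a,c] = sum_b joint[(a,b),(c,b)].
reduced = joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

fid_in, fid_out = float(rho[0, 0]), float(reduced[0, 0])
print(fid_in, fid_out)    # fidelity with |0> before and after purification
```

For p = 0.2 the fidelity rises from 0.9 to about 0.94, showing the syndrome-free "projection onto special subspaces" mechanism the abstract refers to.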

Constraints on phantom codes from automorphism group bounds

Arthur S. Morris, Daniel Malz

2604.15111 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes phantom codes, a special type of quantum error correction code where logical CNOT operations can be implemented using simple physical permutations. The authors prove that these codes have fundamental limitations, showing they can only encode a logarithmic number of logical qubits relative to physical qubits, severely restricting their encoding efficiency.

Key Contributions

  • Proved fundamental bound k ≤ log₂(n+1) for phantom codes encoding k logical qubits into n physical qubits
  • Demonstrated that phantom codes have inherently low encoding rates, limiting their practical utility for fault-tolerant quantum computing
  • Established general theorem connecting quantum code length to automorphism group structure with broader applications
phantom codes • quantum error correction • fault-tolerant quantum computing • automorphism groups • encoding rate bounds
View Full Abstract

Executing a logical quantum circuit fault-tolerantly incurs a large spacetime overhead. Recent work has proposed and investigated phantom codes, defined by the property that every in-block logical $\mathrm{CNOT}$ circuit can be implemented with a physical permutation, a property that has the potential to greatly reduce the depth of compiled circuits. Here we show that phantomness comes at the cost of low encoding rate. Specifically, we prove that any binary phantom code encoding $k$ logical qubits into $n$ physical qubits with distance $d\geq 2$ obeys the bound $k\leq \log_2(n+1)$ for all $k\neq 4$. For $k=4$ we explicitly construct a nonstabiliser $(\!(8, 2^4, 2)\!)$ phantom code that violates the bound and has a transversal non-Clifford gate. We further show that, within the class of nontrivial CSS phantom codes with $k\neq 4$, there is a unique family of codes saturating this bound. In addition, we prove that this logarithmic ceiling cannot be circumvented by permitting additional local unitary gates, or by making use of subsystem codes: any subspace or subsystem code admitting a $\mathrm{SWAP}$-transversal implementation of every logical $\mathrm{CNOT}$ circuit is constrained to satisfy the same bound. These bounds follow from a general theorem relating the length of a quantum code to the structure of its automorphism group, a result which may find applications beyond phantom codes.
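The severity of the $k \leq \log_2(n+1)$ ceiling is easy to make concrete: it is equivalent to needing at least $2^k - 1$ physical qubits per $k$ logical qubits, so the rate $k/n$ decays exponentially. A direct restatement (ignoring the $k=4$ exception noted in the abstract):

```python
import math

def max_logical_qubits(n: int) -> int:
    """Phantom-code ceiling from the paper: k <= log2(n+1), k != 4."""
    return math.floor(math.log2(n + 1))

def min_physical_qubits(k: int) -> int:
    """Equivalent form: supporting k logical qubits needs n >= 2**k - 1."""
    return 2**k - 1

print(max_logical_qubits(7))     # a 7-qubit phantom code encodes at most 3
print(min_physical_qubits(10))   # 10 logical qubits already need 1023 physical
```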

O3LS: Optimizing Lattice Surgery via Automatic Layout Searching and Loose Scheduling

Chenghong Zhu, Xian Wu, Jiahan Chen, Keming He, Junjie Wu, Xin Wang, Lingling Lao

2604.15099 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces O3LS, a framework that optimizes lattice surgery operations for surface code quantum error correction by automatically designing compact circuit layouts and scheduling quantum operations more efficiently. The approach reduces both space overhead (up to 46.7%) and time overhead (up to 36%) while suppressing logical error rates by up to an order of magnitude compared to existing compilers.

Key Contributions

  • Automatic layout search algorithm that generates compact data layouts for lattice surgery operations
  • Loose scheduling framework combined with circuit synthesis to optimize time overhead while maintaining low error rates
  • Comprehensive optimization balancing space-time trade-offs to minimize overall logical error rates in fault-tolerant quantum computation
lattice surgery • surface codes • quantum error correction • fault-tolerant quantum computing • compiler optimization
View Full Abstract

Toward the large-scale, practical realization of quantum computing, quantum error correction is essential. Among various quantum error-correcting codes, the surface code stands out as a leading candidate, and lattice surgery based on surface codes has emerged as a promising technique for fault-tolerant quantum computation (FTQC). However, implementing quantum algorithms using lattice surgery introduces both resource and time overhead. Existing approaches typically focus on large layout designs, with compiler passes aimed primarily at optimizing time overhead. This often overlooks the trade-off between rotation bottlenecks and movement distance, which leads to inefficient resource utilization and prevents further reduction of the quantum computation failure rate. To address these challenges, we introduce O3LS, a framework for optimizing lattice surgery through automatic layout search and loose scheduling. O3LS achieves an optimal balance by automatically generating squeezed data layouts to reduce space requirements and employing loose scheduling algorithms combined with circuit synthesis techniques to reduce time overhead, thereby effectively minimizing overall logical error rates. Numerical results indicate that O3LS can reduce space overhead by 28.0% over standard layouts and 46.7% over sparse layouts without increasing the number of time steps, leading to suppression of logical error rates by up to 16% relative to larger data layout designs. O3LS can also achieve time overhead reductions of 36.07% and 24.76% in compact and standard data layout designs, respectively. It suppresses logical error rates by up to an order of magnitude compared to prior compilers that focus primarily on maximizing parallelism.

SyQMA: A memory-efficient, symbolic and exact universal simulator for quantum error correction

George Umbrarescu, David Amaro

2604.15043 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents SyQMA, a quantum circuit simulator specifically designed for quantum error correction that can efficiently represent quantum states with stabilizer formalism and compute exact symbolic expressions for error rates and measurement probabilities. The simulator uses auxiliary qubits to handle non-Clifford operations while maintaining polynomial memory requirements, making it particularly useful for analyzing fault-tolerant quantum protocols.

Key Contributions

  • Memory-efficient quantum circuit simulator with exact symbolic computation of error rates for quantum error correction
  • Novel representation using auxiliary qubits to handle non-Clifford operations while maintaining polynomial memory scaling
  • Circuit-level maximum-likelihood decoding and fault distance verification for fault-tolerant protocols
  • Exact conversion between disjoint and independent error probabilities for multi-qubit Pauli channels
quantum error correction • fault tolerance • stabilizer codes • quantum simulation • magic state preparation
View Full Abstract

The classical simulation of universal quantum circuits is crucial both fundamentally and practically for quantum computation. We propose SyQMA, a simulator with several convenient features, particularly suited for quantum error correction (QEC). SyQMA simulates universal quantum circuits with incoherent Pauli noise and computes exact expectation values and measurement probabilities as symbolic functions of circuit parameters: rotation angles, measurement outcomes, and noise rates. This simulator can sample measurement outcomes, enabling the simulation of dynamic quantum programs where circuit composition depends on prior measurement outputs. For QEC, it performs circuit-level maximum-likelihood decoding, provides exact symbolic expressions for logical error rates, and verifies the fault distance of fault-tolerant (FT) stabiliser and magic state preparation protocols. These features are enabled by an intuitive extension of stabiliser simulators, where each non-Clifford Pauli rotation and incoherent Pauli channel is compactly represented via auxiliary qubits and a modified trace. Representing the state requires only polynomial memory and time, while computing expectation values and measurement probabilities takes exponential time in the number of non-Clifford rotations and deterministic measurements, but only polynomial memory. The FT preparation of stabiliser and magic states, including the first stage of magic state cultivation, is analysed without approximations. We also exactly convert the disjoint error probabilities of a general multi-qubit Pauli channel to independent ones, a key step for creating and sampling from detector error models. The code is publicly available and open-source.

Runtime-efficient zero-noise extrapolation from mixed physical and logical data

D. V. Babukhin, W. V. Pogosov

2604.15014 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a hybrid quantum error correction method that combines a small amount of expensive error-corrected data with cheaper uncorrected data to improve zero-noise extrapolation. The approach can reduce computational runtime by orders of magnitude while maintaining accuracy, offering a practical path toward useful quantum computation before full fault tolerance is achieved.

Key Contributions

  • Develops mixed physical/logical data strategy for zero-noise extrapolation that reduces variance amplification
  • Demonstrates orders-of-magnitude reduction in physical runtime requirements when error correction suppression factor γ≤0.1
  • Provides theoretical variance analysis and experimental validation on six-spin transverse-field Ising model
zero-noise extrapolation • quantum error correction • quantum error mitigation • Richardson extrapolation • fault tolerance
View Full Abstract

Partial quantum error correction and quantum error mitigation are expected to coexist in the pre-fault-tolerant regime, yet the resource advantage of combining them remains insufficiently quantified. We study zero-noise extrapolation constructed from mixed datasets that contain a small number of error-corrected data points together with data obtained without error correction. The low-noise logical points anchor the extrapolation, while the higher-noise physical points enlarge the noise baseline at a much smaller runtime cost. Under a simple model in which error correction suppresses the effective gate error rate from $p$ to $\gamma p$, we derive the variance of the zero-noise estimator and compare the physical runtime required to reach a target precision. For Richardson extrapolation, the mixed-data strategy reduces variance amplification and can lower the required physical runtime by several orders of magnitude when $\gamma \leq 0.1$. As a proof of principle, we apply the method to digital quantum simulation of a six-spin transverse-field Ising model and find that mixed physical/logical datasets yield lower-variance zero-noise estimates and outperform extrapolation based only on error-corrected data in the parameter regime studied here. These results identify hybrid error correction and error mitigation as a practical route to resource-efficient quantum computation before full fault tolerance.
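The mechanics of the mixed-data extrapolation can be sketched in a few lines: one suppressed-noise "logical" anchor at scale gamma*p plus several cheaper "physical" points at larger scales, all fed into a polynomial (Richardson-style) fit evaluated at zero noise. The quadratic decay model below is synthetic, for illustration only; it is not the paper's Ising-model data or variance analysis.

```python
import numpy as np

def expectation(noise):
    """Assumed (synthetic) noise response of some observable."""
    return 1.0 - 3.0 * noise + 2.0 * noise**2

gamma, p = 0.1, 0.1
# One error-corrected anchor at gamma*p, three uncorrected points at >= p:
scales = np.array([gamma * p, p, 1.5 * p, 2.0 * p])
values = np.array([expectation(s) for s in scales])

# Polynomial fit through the mixed dataset, extrapolated to zero noise:
zne_estimate = np.polyval(np.polyfit(scales, values, deg=2), 0.0)
print(zne_estimate)   # recovers the noiseless value 1.0
```

In practice each point carries shot noise, and the paper's contribution is quantifying how the low-noise anchor tames the variance amplification of this extrapolation; the deterministic model here only shows the fitting mechanics.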

Ultrafast all-optical quantum teleportation

Takumi Suzuki, Takaya Hoshi, Akito Kawasaki, Shotaro Oki, Konhi Ichii, Hironari Nagayoshi, Kazuma Takahashi, Takahiro Kashiwazaki, Taichi Yamashima, A...

2604.14959 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: high

This paper demonstrates quantum teleportation operating at terahertz speeds by using all-optical methods instead of electronic feedforward, achieving 1000 times faster operation than previous electrical methods. The researchers successfully teleported quantum states with high fidelity at unprecedented speeds, limited only by the nonlinear optical medium response time.

Key Contributions

  • First demonstration of 1-terahertz-bandwidth all-optical quantum teleportation bypassing electronic bottlenecks
  • Achievement of quantum teleportation fidelities above classical limits (0.784 and 0.770) at ultrafast speeds with 42-picosecond temporal resolution
  • Establishing that optical quantum processing speeds are fundamentally limited by nonlinear medium response rather than electronic interfaces
quantum teleportation • continuous variable • all-optical • terahertz • feedforward
View Full Abstract

Light's intrinsic carrier frequency of hundreds of terahertz theoretically enables information processing at terahertz clock rates. In optical quantum computing, continuous-variable quantum teleportation is the fundamental building block for deterministic logic operations. This protocol transfers unknown quantum states between nodes using quantum entanglement and real-time feedforward of measurement outcomes. However, electrical feedforward bottlenecks currently restrict operational bandwidths to approximately 100 megahertz, preventing the exploitation of light's ultimate speed. Here we show 1-terahertz-bandwidth all-optical quantum teleportation, completely bypassing this electronic limitation. By transferring Bell measurement outcomes optically, we successfully teleported vacuum states across the terahertz band and real-time random coherent wavepackets with a 42-picosecond temporal width. Evaluating the intrinsic state transfer quality, we achieved teleportation fidelities of $\mathcal{F}=0.784$ for the broadband vacuum states and $\mathcal{F}=0.770$ for the dynamic coherent wavepackets. Both results strictly surpass the classical limit of $\mathcal{F}=0.5$, demonstrating genuine quantum teleportation at ultrafast speeds. Our results establish that optical quantum processing speeds are constrained solely by the nonlinear medium's 1-picosecond-scale response, rather than classical electrical interfaces. This methodology provides a cornerstone for terahertz-clock quantum computers capable of overcoming Moore's law, and paves the way for a high-capacity, telecom-compatible quantum internet.
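The reported fidelities can be put in perspective using the standard unity-gain relation for continuous-variable teleportation of coherent states, $F = 1/(1 + e^{-2r})$ with two-mode squeezing $r$ (a textbook Braunstein-Kimble-style relation, not an analysis taken from this paper). Inverting it shows roughly how much squeezing each fidelity implies.

```python
import math

def implied_squeezing(F):
    """Invert F = 1/(1 + e^(-2r)) for the squeezing parameter r.
    Returns (r in natural units, squeezing in dB)."""
    x = 1.0 / F - 1.0            # e^(-2r)
    r = -0.5 * math.log(x)
    return r, -10.0 * math.log10(x)

for F in (0.5, 0.784, 0.770):
    r, db = implied_squeezing(F)
    print(f"F={F}: r={r:.3f} ({db:.2f} dB)")
```

The classical limit F = 0.5 corresponds to zero squeezing, while F = 0.784 and F = 0.770 correspond to roughly 5-6 dB of effective squeezing under this idealized model.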

Learning to Concatenate Quantum Codes

Nico Meyer, Christopher Mutschler, Dominik Seuß, Andreas Maier, Daniel D. Scherer

2604.14931 • Apr 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a machine learning approach to automatically optimize the concatenation of quantum error correction codes by adapting the code selection at each level based on the evolving noise structure. The method uses custom learning-based encoders for structured noise and switches to standard codes when noise becomes uniform, achieving significant qubit count reductions.

Key Contributions

  • Automated optimization of quantum error correction code concatenation using learning-based methods
  • Hybrid approach that adapts encoder selection based on noise structure evolution
  • Demonstration of up to two orders of magnitude reduction in qubit requirements for fault-tolerant quantum computing
quantum error correction • code concatenation • fault tolerance • machine learning • noise adaptation
View Full Abstract

Concatenating quantum error correction codes scales error correction capability by driving logical error rates down double-exponentially across levels. However, the noise structure shifts under concatenation, making it hard to choose an optimal code sequence. We automate this choice by estimating the effective noise channel after each level and selecting the next code accordingly. In particular, we use learning-based methods to tailor small, non-additive encoders when the noise exhibits sufficient structure, then switch to standard codes once the noise is nearly uniform. In simulations, this level-wise adaptation achieves a target logical error rate with far fewer qubits than concatenating stabilizer codes alone, reducing qubit counts by up to two orders of magnitude for strongly structured noise. Therefore, this hybrid, learning-based strategy offers a promising tool for early fault-tolerant quantum computing.
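The "double-exponential" suppression mentioned above has a simple threshold-style form, shown in the sketch below. The threshold value used is illustrative; real thresholds depend on the code family and noise model.

```python
def logical_error_rate(p, p_th, levels):
    """Threshold-style scaling under concatenation:
    p_L = p_th * (p / p_th) ** (2 ** L), doubly exponential in L."""
    return p_th * (p / p_th) ** (2 ** levels)

# Illustrative numbers: physical error rate 1e-3, assumed threshold 1e-2.
p, p_th = 1e-3, 1e-2
for L in range(4):
    print(L, logical_error_rate(p, p_th, L))   # 1e-3, 1e-4, 1e-6, 1e-10
```

Because each level squares the suppression factor, picking the right code at each level (the paper's learning problem) has an outsized effect on the total qubit count needed to hit a target logical rate.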

A Modular and T-Gate Efficient Architecture for Quantum Leading-Zero/One Counter

Lei-Han Yao, Shang-Wei Lin, Yu-Chung Chen, Yean-Ru Chen

2604.13943 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an improved quantum circuit architecture for counting leading zeros or ones in binary numbers, which is essential for quantum arithmetic operations. The proposed design significantly reduces the number of expensive T-gates required and achieves better scalability compared to existing approaches.

Key Contributions

  • Modular and scalable quantum leading-zero/one counter architecture with functional polymorphism
  • Parallel variants (PQLZOC and FO-PQLZOC) that reduce T-depth from O(m) to O(log m)
  • 40% reduction in T-count and 60% reduction in T-depth compared to state-of-the-art designs
quantum arithmetic • T-gate optimization • fault-tolerant quantum computing • quantum circuit design • resource efficiency
View Full Abstract

The Quantum Leading-Zero/One Counter (QLZOC) is a fundamental component in quantum arithmetic, playing a critical role in normalization, floating-point units, dynamic range scaling, and logarithmic approximations. Conventional designs primarily rely on direct Boolean-to-quantum mapping, which results in inefficient resource utilization such as irregular gate growth and width-dependent resource overhead. In this work, we propose a scalable, modular, and resource-efficient architecture for QLZOC by reformulating the counting process into a sequence of systematic conditional bit-flip operations. Moreover, our design achieves functional polymorphism so that the same design can be easily toggled between zero and one detection, while ensuring seamless scalability to any bit-width without manual re-tuning. We further introduce a Parallel QLZOC (PQLZOC) variant and a Fan-Out optimized (FO-PQLZOC) design. We evaluate resource efficiency using the standard T-gate cost criteria: the total number of T gates used (T-count) and the number of sequential T-gate layers (T-depth). By exploiting the properties of all-zero/one qubit blocks and a hierarchical merge strategy, the proposed FO-PQLZOC reduces the T-depth from O(m) to O(log m), where m is the input size. Comparative analysis demonstrates that our optimized architecture achieves a 40% reduction in T-count and a 60% reduction in T-depth over state-of-the-art designs, providing a high-performance, T-gate efficient solution for general-purpose quantum arithmetic processors.
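The hierarchical-merge idea behind the O(log m) depth can be seen in a classical analogue: combine leading-zero counts of adjacent blocks in a binary tree, so the depth grows with log m rather than with a left-to-right scan. This is purely a classical sketch of the recurrence; the quantum circuit implements it with conditional bit-flips on qubit registers.

```python
def clz_tree(bits):
    """Count leading zeros by binary-tree merge (O(log m) depth)."""
    if len(bits) == 1:
        return 1 - bits[0]          # one leading zero iff the bit is 0
    mid = len(bits) // 2
    left, right = clz_tree(bits[:mid]), clz_tree(bits[mid:])
    # If the left half is all zeros (count equals its full width), the
    # right half's count carries over; otherwise the left count stands.
    return left + right if left == mid else left

print(clz_tree([0, 0, 0, 1, 0, 1, 0, 0]))   # 3 leading zeros
# Functional polymorphism: counting leading ones is the same recurrence
# on complemented input, e.g. clz_tree([1 - b for b in bits]).
```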

Tsallis relative $\alpha$ entropy of coherence dynamics in Grover's search algorithm

Linlin Ye, Zhaoqi Wu, Shao-Ming Fei

2604.13910 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes quantum coherence dynamics in Grover's search algorithm using Tsallis relative α entropy, proving that coherence decreases as success probability increases and establishing complementarity relations between these quantities. The work provides theoretical insights into how coherence behaves during quantum search operations and its relationship with entanglement.

Key Contributions

  • Proves that Tsallis relative α entropy of coherence decreases with increasing success probability in Grover's algorithm
  • Derives complementarity relations between quantum coherence and success probability
  • Establishes relationships between coherence and entanglement in superposition states of target items
Grover's algorithm • quantum coherence • Tsallis entropy • quantum search • complementarity relations
View Full Abstract

Quantum coherence plays a central role in Grover's search algorithm. We study the Tsallis relative $\alpha$ entropy of coherence dynamics of the evolved state in Grover's search algorithm. We prove that the Tsallis relative $\alpha$ entropy of coherence decreases as the success probability increases, and derive complementarity relations between the coherence and the success probability. We show that the operator coherence of the first $H^{\otimes n}$ depends on the size of the database $N$, the success probability, and the target states. Moreover, we illustrate the relationships between coherence and entanglement of the superposition state of targets, as well as the production and deletion of coherence in Grover iterations.
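The success probability whose growth drives the coherence decay follows the standard Grover rotation formula, sketched below. This is the textbook dynamics, not a result specific to this paper.

```python
import math

def grover_success(N, M, t):
    """Success probability after t Grover iterations on N items with
    M marked targets: P(t) = sin^2((2t+1)*theta), sin(theta) = sqrt(M/N)."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * t + 1) * theta) ** 2

N, M = 1024, 1
# Optimal iteration count ~ pi/(4*theta) - 1/2:
t_opt = round(math.pi / (4 * math.asin(math.sqrt(M / N))) - 0.5)
print(t_opt, grover_success(N, M, t_opt))
```

As P(t) climbs from M/N toward 1 over the iterations, the paper's complementarity relations say the Tsallis relative alpha entropy of coherence must fall correspondingly.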

dqc_simulator: an easy-to-use distributed quantum computing simulator

Kenny Campbell

2604.13909 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper introduces dqc_simulator, a Python-based simulation toolkit for distributed quantum computing systems. The tool automates complex simulation workflows and enables testing of both hardware and software components in distributed quantum computing architectures.

Key Contributions

  • Development of dqc_simulator toolkit for distributed quantum computing simulation
  • Automation of complex DQC simulation workflows
  • Enabling realistic testing and benchmarking of the full DQC stack
distributed quantum computing • quantum simulation • DQC simulator • quantum networking • scalability
View Full Abstract

Distributed quantum computing (DQC) is a promising proposal for overcoming the scalability challenges of quantum computing. However, the evaluation of DQC hardware and software is difficult due to the relative dearth of classical simulation tools available for DQC devices. In this work, we introduce dqc_simulator, a novel simulation toolkit, written in Python, which automates many of the most challenging aspects of the DQC simulation workflow. dqc_simulator enables the easy simulation of both hardware and software, making it easy to create realistic and robust tests and benchmarks for the full DQC stack.

AlphaCNOT: Learning CNOT Minimization with Model-Based Planning

Jacopo Cossio, Daniele Lizzio Bosco, Riccardo Romanello, Giuseppe Serra, Carla Piazza

2604.13812 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces AlphaCNOT, a reinforcement learning framework that uses Monte Carlo Tree Search to optimize quantum circuits by minimizing the number of CNOT gates. The method achieves up to 32% reduction in CNOT gates compared to existing approaches, which is important for reducing errors in current quantum devices.

Key Contributions

  • Development of AlphaCNOT, a model-based reinforcement learning framework using Monte Carlo Tree Search for CNOT gate minimization
  • Achievement of up to 32% reduction in CNOT gate count compared to state-of-the-art methods like Patel-Markov-Hayes algorithm
  • Demonstration of effective quantum circuit optimization for both unconstrained and topology-constrained quantum architectures
quantum circuit optimization • CNOT minimization • reinforcement learning • Monte Carlo Tree Search • quantum gates
View Full Abstract

Quantum circuit optimization is a central task in Quantum Computing, as current Noisy Intermediate Scale Quantum devices suffer from error propagation that often scales with the number of operations. Among quantum operations, the CNOT gate is of fundamental importance, being the only 2-qubit gate in the universal Clifford+T set. The problem of CNOT gate minimization has been addressed by heuristic algorithms such as the well-known Patel-Markov-Hayes (PMH) for linear reversible synthesis (i.e., CNOT minimization with no topological constraints), and more recently by Reinforcement Learning (RL) based strategies in the more complex case of topology-aware synthesis, where each CNOT can act only on a subset of all qubit pairs. In this work we introduce AlphaCNOT, an RL framework based on Monte Carlo Tree Search (MCTS) that effectively addresses the CNOT minimization problem by modeling it as a planning problem. In contrast to other RL-based solutions, our method is model-based, i.e., it can leverage lookahead search to evaluate future trajectories, thus finding more efficient sequences of CNOTs. Our method achieves a reduction of up to 32% in CNOT gate count compared to the PMH baseline on linear reversible synthesis, while in the constrained version we report a consistent gate count reduction on a variety of topologies with up to 8 qubits with respect to state-of-the-art RL-based solutions. Our results suggest that the combination of RL with search-based strategies can be applied to different circuit optimization tasks, such as Clifford minimization, thus fostering the transition toward the "quantum utility" era.
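For context, the PMH baseline mentioned in the abstract builds on plain Gaussian elimination over F2: a CNOT with control c and target t adds row c into row t of the circuit's binary matrix, so reducing that matrix to the identity yields a CNOT sequence. A minimal sketch of that baseline idea (not the paper's RL/MCTS method; all names are illustrative):

```python
# Sketch of the classical baseline behind CNOT-count synthesis: plain
# Gaussian elimination over F2. PMH refines this with column blocking
# (reaching O(n^2/log n) gates); AlphaCNOT's RL/MCTS search is NOT
# reproduced here.

def synthesize_cnots(M):
    """Return a CNOT list [(control, target), ...] whose circuit
    implements the invertible binary matrix M (x -> Mx over F2)."""
    n = len(M)
    A = [row[:] for row in M]               # work on a copy
    cnots = []
    for col in range(n):
        if A[col][col] == 0:                # bring a pivot 1 onto the diagonal
            for r in range(col + 1, n):
                if A[r][col]:
                    A[col] = [a ^ b for a, b in zip(A[col], A[r])]
                    cnots.append((r, col))  # CNOT(control=r, target=col)
                    break
        for r in range(n):                  # clear the rest of the column
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
                cnots.append((col, r))
    # Elimination implements M^{-1}; CNOTs are self-inverse, so the
    # reversed gate list implements M itself.
    return cnots[::-1]
```

Gate counts from this naive elimination are what both PMH and the RL-based methods compete against.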

Theory of spin qubits and the path to scalability

Z. M. McIntyre, Abhikbrata Sarkar, Daniel Loss

2604.13644 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper provides a comprehensive review of spin qubits as a platform for quantum computing, covering different implementations like electron-spin and hole-spin qubits, and analyzing approaches for scaling up these systems through long-range coupling mechanisms.

Key Contributions

  • Comprehensive review of spin qubit implementations and their theoretical foundations
  • Analysis of scalability mechanisms including circuit QED, Andreev qubits, and spin shuttling
  • Review of topological spin textures for linking spin qubits
spin qubits quantum information processing semiconductor heterostructures circuit QED quantum scalability
View Full Abstract

Spin qubits have emerged as a leading platform for quantum information processing due to their long coherence times, small footprint, and compatibility with the existing semiconductor industry. We first provide an introduction to the different qubit implementations currently being investigated, including single electron-spin qubits, hole-spin qubits, donor qubits, and multispin encodings. We discuss how the confinement and strain present in semiconductor heterostructures produce addressable levels whose spin degree of freedom can be used to encode a qubit. A large emphasis is placed on reviewing the theoretical foundations and recent experimental demonstrations of proposed mechanisms for long-range coupling, including hybrid approaches based on circuit QED and Andreev qubits, as well as spin shuttling. Finally, we review a recent proposal for linking spin qubits using topological spin textures.

A $\boldsymbol{2d \times d \times d}$ Spacetime Volume Implementation of a Logical S Gate in the Surface Code

Yuga Hirai, Shota Ikari, Yosuke Ueno, Yasunari Suzuki

2604.13632 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a more efficient method for implementing logical S gates in surface code quantum error correction, reducing the required spacetime volume from 2d×2d×d to 2d×d×d while maintaining comparable error rates. The work provides detailed circuit-level implementations and numerical analysis of fault tolerance performance.

Key Contributions

  • Novel twist defect braiding protocol that reduces spacetime volume requirements for logical S gates
  • First circuit-level implementations of existing logical S-gate methods with quantitative fault tolerance analysis
  • Demonstration that the space-efficient method maintains comparable logical error rates at practical code distances
surface code fault-tolerant quantum computing logical gates twist defects quantum error correction
View Full Abstract

The logical S gate implemented via twist defect braiding in the surface code is one of the major sources of overhead in fault-tolerant quantum computing, since an S-gate correction is required in every logical T-gate teleportation. Existing logical S-gate implementations require spacetime volumes of \(2d \times 2d \times d\) or \(2d \times 1.5d \times d\), where $d$ is the code distance of the surface code. To the best of our knowledge, their circuit-level implementations have not yet been shown, hindering quantitative comparisons of fault distances and logical error rates. In this work, we provide these missing circuit-level implementations. Additionally, we propose a novel twist defect braiding protocol that reduces the spacetime volume to \(2d \times d \times d\). First, we construct an implementation of the proposed method using constant-length non-local gates, and then refine it to utilize only nearest-neighbor two-qubit gates on a square grid, without requiring additional two-qubit gate depth beyond that of standard syndrome extraction circuits. Through numerical simulations, we evaluate the fault distances and logical error rates for both existing and proposed methods. Our results show that, although the proposed method reduces the fault distance by one or three, its logical error rates remain comparable to those of existing methods at large code distances (\(d \ge 5\)) and at physical error rates near \(p = 10^{-3}\). This demonstrates that the proposed method is promising for near-term fault-tolerant quantum computing.

Stabilization of finite-energy grid states of a quantum harmonic oscillator by reservoir engineering with two dissipation channels

Rémi Robin, Pierre Rouchon, Lev-Arcady Sellem

2604.13529 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: high Network: low

This paper proposes a simplified method to create and stabilize special quantum states called GKP grid states in a harmonic oscillator using engineered dissipation. These states are useful for quantum error correction and precision measurement applications.

Key Contributions

  • Simplified Lindblad master equation approach for stabilizing GKP grid states with reduced implementation constraints
  • Explicit energy estimates and convergence rate analysis for GKP qubit stabilization
  • Numerical studies of noise effects and parameter optimization for metrological applications
GKP states quantum error correction reservoir engineering Lindblad master equation quantum metrology
View Full Abstract

We propose and analyze an experimentally accessible Lindblad master equation for a quantum harmonic oscillator, simplifying a previous proposal to alleviate implementation constraints. It approximately stabilizes periodic grid states introduced in 2001 by Gottesman, Kitaev and Preskill (GKP), with applications for quantum error correction and quantum metrology. We obtain explicit estimates for the energy of the solutions of the Lindblad master equation. We estimate the convergence rate to the codespace when stabilizing a GKP qubit, and numerically study the effect of noise. We then present simulations illustrating how a modification of parameters allows preparing states of metrological interest in steady-state.

Coherent Rydberg excitation of single atoms using a pulsed fiber amplifier

Ying-Wen Zhang, Yang Wang, Chen-Long Xu, Yi-Bo Wang, Peng Xu

2604.13436 • Apr 15, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops a fiber-based laser amplifier system to efficiently excite rubidium atoms to high-energy Rydberg states in programmable atom arrays. The technique addresses technical challenges with pulsed lasers and demonstrates performance comparable to continuous-wave methods, enabling scaling to larger quantum systems.

Key Contributions

  • Development of fiber-based master-oscillator power-amplifier system for Rydberg excitation
  • Demonstration of efficient coherent Rydberg excitation comparable to CW methods
  • Technical pathway for scaling programmable neutral-atom arrays
Rydberg atoms neutral atom arrays quantum simulation fiber amplifier coherent excitation
View Full Abstract

In recent years, the growing scale of programmable neutral-atom arrays has led to an increasing demand for higher-power Rydberg excitation light. Although pulsed amplifiers deliver higher peak power than continuous-wave lasers, their use for efficient coherent Rydberg excitation of single atoms in arrays has been limited by challenges such as pulse distortion, synchronization with excitation sequences, and spectral linewidth broadening. Here, we address these issues using a fiber-based master-oscillator power-amplifier system. We demonstrate efficient coherent Rydberg excitation of single atoms in a rubidium atom array, achieving performance comparable to continuous-wave methods. This study provides a potentially new technical pathway toward future large-scale quantum simulation and computation with Rydberg atom arrays.

Quantum-safe IPsec in the banking industry

2604.12985 • Apr 14, 2026

CRQC/Y2Q RELEVANT QC: low Sensing: none Network: high

This paper presents a hybrid quantum-safe communication architecture for banking networks that combines classical cryptography, quantum key distribution (QKD), and post-quantum cryptography within a software-defined networking framework. The researchers validated their approach across a five-node testbed spanning multiple geographic locations to demonstrate scalable, secure financial communications that can withstand future quantum computer attacks.

Key Contributions

  • Development of hybrid quantum-safe IPsec architecture combining CC, QKD, and PQC for banking networks
  • Demonstration of interoperable framework across heterogeneous devices and QKD implementations (DV-QKD, CV-QKD) with multiple key-delivery interfaces
  • Validation of scalable quantum-safe communications through multi-node testbed spanning Spain and Mexico
quantum key distribution post-quantum cryptography quantum-safe communications IPsec banking security
View Full Abstract

The emergence of Cryptographically Relevant Quantum Computers (CRQCs) presents a critical threat to classical cryptographic systems, particularly widely adopted protocols such as RSA, Diffie-Hellman (DH), and Elliptic Curve Cryptography (ECC). Given their extensive use in the financial sector, the advent of quantum adversaries compels banking institutions to proactively develop and adopt quantum-safe communication mechanisms. This paper introduces a hybrid quantum-safe architecture, orchestrated via Software-Defined Networking (SDN) key distribution. The proposed framework enables the early integration of Classical Cryptography (CC), Quantum Key Distribution (QKD), and Post-Quantum Cryptography (PQC) within a Dynamic Multipoint Virtual Private Network (DMVPN) environment, providing highly scalable, full-mesh, site-to-site encrypted communications for enterprise networks. This is particularly relevant at a time when PQC algorithms have not yet been incorporated into finalized IPsec standards. The architecture has been validated across a five-node testbed comprising three physical nodes within a campus network in Madrid and two private-cloud nodes located in the north of Spain and Mexico. The deployment leverages a heterogeneous mix of physical and virtual devices, diverse technology providers, Discrete Variable QKD (DV-QKD) and Continuous Variable QKD (CV-QKD) implementations, and mutually incompatible key-delivery interfaces (ETSI004, ETSI014 and Cisco SKIP), demonstrating flexibility, scalability, and interoperability across environments. Through this framework, we demonstrate that quantum-safe communication in financial networks is not only technically feasible but also scalable, interoperable, and resilient. The proposed architecture establishes a robust, flexible, and future-proof foundation for secure financial communications in the era of quantum computing.
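The hybrid CC+QKD+PQC layering described above is commonly realized by feeding all key sources into a single key-derivation step, so the session key remains secure if any one source is compromised. A generic sketch (illustrative only, not the paper's concrete key-management design) using the RFC 5869 HKDF construction with SHA-256:

```python
# Illustrative hybrid key combination (hypothetical names; not the
# paper's protocol): concatenate a classical (EC)DH secret, a
# QKD-delivered key, and a PQC KEM secret, then run HKDF (RFC 5869).
# An attacker must know all three inputs to reconstruct the output.
import hashlib
import hmac

def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand step
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def combine_keys(classical: bytes, qkd: bytes, pqc: bytes) -> bytes:
    # Salt and info labels are illustrative placeholders.
    return hkdf(salt=b"hybrid-ipsec-demo", ikm=classical + qkd + pqc,
                info=b"session-key", length=32)
```

Standards work on hybrid key exchange for IKEv2 follows the same principle: derived keying material depends on every negotiated secret.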

Fast and accurate AI-based pre-decoders for surface codes

Christopher Chamberland, Jan Olle, Muyuan Li, Scott Thornton, Igor Baratta

2604.12841 • Apr 14, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents an AI-based pre-decoder system for quantum error correction in surface codes that can quickly identify and correct most errors locally before passing remaining problems to a global decoder. The system achieves microsecond-scale decoding times on GPUs and can learn optimal decoding parameters directly from experimental data without needing detailed noise models.

Key Contributions

  • Scalable AI-based pre-decoder architecture for surface codes with microsecond decoding times
  • Modular system that works with existing global decoders and reduces logical error rates
  • Noise-learning capability that infers optimal decoding weights from experimental syndrome data
  • Open-source implementation with block-wise parallel processing for multiple GPUs
surface codes quantum error correction fault-tolerant quantum computing AI decoder logical error rates
View Full Abstract

Fast, scalable decoding architectures that operate in a block-wise parallel fashion across space and time are essential for real-time fault-tolerant quantum computing. We introduce a scalable AI-based pre-decoder for the surface code that performs local, parallel error correction with low decoding runtimes, removing the majority of physical errors before passing residual syndromes to a downstream global decoder. This modular architecture is backend-agnostic and composes with arbitrary global decoding algorithms designed for surface codes, and our implementation is completely open source. Integrated with uncorrelated PyMatching, the pipeline achieves end-to-end decoding runtimes of order $\mathcal{O}(1 μ\text{s})$ per round at large code distances on NVIDIA GB300 GPUs while reducing logical error rates (LERs) relative to global decoding alone. In a block-wise parallel decoding scheme with access to multiple GPUs, the decoding runtime can be reduced to well below $\mathcal{O}(1 μ\text{s})$ per round. We observe further LER improvements by training a larger model, outperforming correlated PyMatching up to distance-13. We additionally introduce a noise-learning architecture that infers decoding weights directly from experimentally accessible syndrome statistics without requiring an explicit circuit-level noise model. We show that purely data-driven graph weight estimation can nearly match uncorrelated PyMatching and exceed correlated PyMatching in certain regimes, enabling highly-optimized decoding when hardware noise models are unknown or time-varying, as well as training pre-decoders with realistic noise models. Together, these results establish a practical, modular, and high-throughput decoding framework suitable for large-distance surface-code implementations.
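The pre-decoding idea, stripping local high-probability errors before a global matching pass, can be illustrated with a toy 1D analogue (not the paper's AI model): adjacent syndrome defects of a repetition code are paired and corrected locally, and only the residual syndrome is forwarded to the global decoder.

```python
# Toy 1D pre-decoder sketch (conceptual analogue only). For a repetition
# code with data bits d_0..d_n and checks s_i = d_i XOR d_{i+1}, a single
# flipped data bit d_{i+1} fires the adjacent defect pair (s_i, s_{i+1}).
# The local pass removes exactly those pairs.

def pre_decode(syndrome):
    """Pair up adjacent defects locally; return (correction, residual)."""
    s = list(syndrome)
    correction = [0] * (len(s) + 1)      # one data bit per check gap + ends
    i = 0
    while i < len(s) - 1:
        if s[i] and s[i + 1]:            # two neighbouring defects:
            correction[i + 1] = 1        # flip the data bit between them
            s[i] = s[i + 1] = 0          # both defects are cleared
            i += 2
        else:
            i += 1
    return correction, s                 # residual goes to a global decoder
```

In the paper's setting the local pass is a learned model acting on surface-code syndrome blocks, but the division of labour is the same: cheap local corrections first, expensive global matching on what remains.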

Quasi-Orthogonal Stabilizer Design for Efficient Quantum Error Suppression

Valentine Nyirahafashimana, Sharifah Kartini Said Husain, Umair Abdul Halim, Ahmed Jellal, Nurisya Mohd Shah

2604.12684 • Apr 14, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper introduces a new approach to quantum error correction that relaxes strict geometric constraints in stabilizer codes while maintaining their error-correcting properties. The quasi-orthogonal framework allows for more flexible code designs that demonstrate significantly improved performance over traditional methods.

Key Contributions

  • Development of quasi-orthogonal geometric framework for stabilizer codes that relaxes orthogonality constraints
  • Demonstration of improved logical error rates and fidelities by up to two orders of magnitude under depolarizing noise
  • Construction of specific quasi-orthogonal code variants with better performance than strictly orthogonal counterparts
quantum error correction stabilizer codes quasi-orthogonal symplectic geometry logical error rates
View Full Abstract

Orthogonal geometric constructions are the basis of many quantum error-correcting (QEC) codes, but strict orthogonality constraints limit design flexibility and resource efficiency. We introduce a quasi-orthogonal geometric framework for stabilizer codes that relaxes these constraints while preserving the symplectic commutation structure on the binary symplectic space $\mathbb{F}_{2}^{2n}$. The approach permits controlled overlap between X- and Z-check supports, leading to quasi-orthogonal Pauli operators and a generalized notion of effective distance defined via induced anti-commutation with logical operators. This relaxation expands the stabilizer design space, enabling codes that approach the Gilbert-Varshamov regime with improved logical rates at moderate distances. Finite-length constructions, including quasi-orthogonal variants of the $[[8,3,\approx 3]]$, $[[10,4,\approx 3]]$, $[[13,1,5]]$, and $[[29,1,11]]$ codes, demonstrate consistent improvements over strictly orthogonal counterparts. Under depolarizing noise with error rates up to $p=0.30$, logical error rates, fidelities, and trace distances improve by up to two orders of magnitude. These improvements reflect the increased connectivity of the underlying stabilizer geometry while remaining compatible with standard decoding schemes. The proposed framework offers a principled extension of stabilizer code design through quasi-orthogonal geometric structures.
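The symplectic commutation structure the framework preserves is concrete: an n-qubit Pauli maps to a vector (x|z) over F_2, and two Paulis commute exactly when their binary symplectic product vanishes. A minimal check:

```python
# Binary symplectic product for n-qubit Paulis. A Pauli is encoded as a
# 2n-bit vector (x_1..x_n | z_1..z_n); two Paulis commute iff
# x1.z2 + x2.z1 = 0 (mod 2). This is the standard structure stabilizer
# codes (orthogonal or quasi-orthogonal) must respect.

def symplectic_product(p1, p2, n):
    x1, z1 = p1[:n], p1[n:]
    x2, z2 = p2[:n], p2[n:]
    return (sum(a & b for a, b in zip(x1, z2)) +
            sum(a & b for a, b in zip(x2, z1))) % 2  # 0 = commute, 1 = anticommute
```

For example, X on qubit 1 anticommutes with Z on qubit 1 but commutes with Z on qubit 2, which is exactly what the product reports.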

Design automation and space-time reduction for surface-code logical operations using a SAT-based EDA kernel compatible with general encodings

Wang Liao, Rei Tokami, Yasunari Suzuki

2604.12560 • Apr 14, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents KOVAL-Q, a software framework for optimizing fault-tolerant quantum computing operations using surface codes by formulating the optimization as a satisfiability problem. The tool can verify and minimize the space-time costs of logical operations like CNOT gates and patch rotations, achieving about 10% performance improvements in quantum applications.

Key Contributions

  • Development of KOVAL-Q EDA kernel that uses SAT-based optimization for surface-code logical operations with flexible encodings
  • Demonstration of minimum execution time determination for fundamental operations like d-cycle CNOTs and 2d-cycle patch rotations
  • Achievement of ~10% execution time reduction in fault-tolerant quantum computing applications through optimized logical operations
surface codes fault-tolerant quantum computing lattice surgery SAT optimization logical operations
View Full Abstract

Fault-tolerant quantum computers (FTQCs) based on surface codes and lattice surgery have been widely studied, and there is strong demand for a framework that can identify logical operations with low space-time cost, verify their functionality and fault tolerance, and demonstrate their optimality within a given search space, much like electronic design automation (EDA) in classical circuit design. In this paper, we propose KOVAL-Q, an EDA kernel that verifies and optimizes surface-code logical operations by formulating them as a satisfiability (SAT) problem. Compared with existing SAT-based frameworks such as LaSsynth, our method can handle logical qubits with more flexible surface-code encodings, both as target configurations and as intermediate states. This extension enables the optimization of advanced layouts, such as fast blocks, and broadens the search space for logical operations. We demonstrate that KOVAL-Q can determine the minimum execution time of fundamental logical operations in given spatial layouts, such as $d$-cycle logical CNOTs and $2d$-cycle patch rotations. Their use reduces the execution time of widely studied FTQC applications by about 10% under a simplified scheduling model. KOVAL-Q consists of three subkernels corresponding to different types of constraints, which facilitates its integration as a submodule into scalable heuristic frameworks. Thus, our proposal provides an essential framework for optimizing and validating core FTQC subroutines.

Demonstrating Record Fidelity for the Quantum Fourier Transform

Philipp Aumann, Michael Fellner, David Alber, Max Cykiert, Christoph Fleckenstein, Roeland ter Hoeven, Leo Stenzel, Riccardo J. Valencia-Tortora, Wolf...

2604.12465 • Apr 14, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a new 'Parity Architecture' approach for implementing the quantum Fourier transform (QFT) on quantum hardware, achieving record performance with process fidelity of 10^-2 for 50 qubits on IBM quantum processors. The method provides super-exponential speedup compared to previous swap-based approaches.

Key Contributions

  • Introduction of Parity Architecture for quantum algorithm implementation
  • Record fidelity QFT demonstration on 50 qubits with 10^-2 process fidelity
  • Super-exponential scaling improvement over swap-based methods
quantum Fourier transform parity architecture process fidelity quantum algorithms IBM Heron
View Full Abstract

We demonstrate the Parity Architecture on quantum hardware, using the quantum Fourier transform (QFT) as a benchmark. As a result, a record performance in both fidelity and qubit count is achieved using quantum processors with a native CZ-based instruction set. On the IBM Heron r3 chip, a process fidelity of the QFT algorithm of ${F \approx 10^{-2}}$ for ${N=50}$ qubits is achieved. The scaling of the speedup compared to previous swap-based methods is super-exponential $\mathcal{O}(\exp(N^2))$. Furthermore, we show that the scaling can be improved further by including iSWAP gates in the instruction set.
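As a matrix, the N-qubit QFT is simply the unitary DFT on 2^N amplitudes, which makes small instances easy to sanity-check numerically (the hardware demonstration at N = 50 is of course far beyond dense-matrix simulation):

```python
# The QFT as a dense matrix: QFT|j> = (1/sqrt(D)) * sum_k w^{jk} |k>
# with w = exp(2*pi*i/D), D = 2^N. Tiny-N sanity check only.
import numpy as np

def qft_matrix(num_qubits):
    dim = 2 ** num_qubits
    omega = np.exp(2j * np.pi / dim)
    j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
    return omega ** (j * k) / np.sqrt(dim)

U = qft_matrix(3)
# the QFT is unitary: U U^dagger = I
assert np.allclose(U @ U.conj().T, np.eye(8))
```

The same matrix coincides with the orthonormal inverse-FFT convention, so classical FFT routines can serve as a reference for small-scale circuit verification.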

Quantum circuit optimization for arbitrary high-dimensional bipartite quantum computation

Gui-Long Jiang, Hai-Rui Wei

2604.11534 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper develops an optimized method for constructing quantum circuits that operate on high-dimensional quantum systems with arbitrary dimensions n and m. The authors show that controlled increment gates combined with local gates can efficiently implement any quantum operation on these systems, requiring only O(n²) gates and improving on previous, less efficient approaches.

Key Contributions

  • Proof that CINC gates combined with local gates form a universal gate set for high-dimensional quantum computation
  • Achievement of O(n²) upper bound for arbitrary quNit-quMit gate implementation, improving from previous 2n requirement to only 2 CINC gates for controlled operations
high-dimensional quantum computing quantum circuit optimization controlled increment gates universal gate sets qudit systems
View Full Abstract

Implementation of high-dimensional (HD) quantum gates shows very promising perspectives for HD quantum computation. A bipartite quantum system with arbitrary dimensions $n$ and $m$ is termed a quNit-quMit. Here we propose a synthesis scheme to construct the quantum circuit for general quNit-quMit gates with controlled increment (CINC) gates and local gates. This shows that CINC gates combined with local gates form a universal gate set for HD quantum computation. An upper bound of $O(n^2)$ CINC gates is achieved for arbitrary quNit-quMit gate implementation in the proposed scheme, which is the best known result. Especially for the controlled quNit-quMit gates, our scheme requires only 2 CINC gates, whereas the previous scheme required $2n$.
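The CINC gate at the centre of the scheme is a simple permutation: under one common convention it increments the target quMit modulo m when the control quNit sits in a designated level. A sketch of the matrix under that convention (an illustration; the paper's exact definition may differ):

```python
# Controlled-increment (CINC) on a quNit-quMit pair with dimensions n, m.
# Illustrative convention: when the control is in its top level |n-1>,
# the target is incremented modulo m; otherwise nothing happens.
# Basis ordering: |c,t> -> index c*m + t.

def cinc(n, m):
    dim = n * m
    U = [[0] * dim for _ in range(dim)]
    for c in range(n):
        for t in range(m):
            t_out = (t + 1) % m if c == n - 1 else t
            U[c * m + t_out][c * m + t] = 1   # column |c,t> maps to row |c,t_out>
    return U
```

Being a permutation matrix, CINC is trivially unitary; the paper's result is that O(n²) such gates, plus local gates, suffice for any quNit-quMit unitary.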

Tackling instabilities of quantum Krylov subspace methods: an analysis of the numerical and statistical errors

Maria Gabriela Jordão Oliveira, Karl Michael Ziems, Nina Glaser

2604.11532 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes quantum Krylov subspace methods for finding ground-state energies, showing that in realistic noisy conditions the main problem is statistical fluctuations rather than mathematical ill-conditioning. The authors introduce two new filtering techniques to assess solution reliability without knowing the correct answer beforehand.

Key Contributions

  • Analysis showing statistical noise dominates over ill-conditioning in realistic quantum Krylov methods
  • Introduction of imaginary and unitary filters to assess solution reliability without prior knowledge of eigenspectrum
quantum algorithms Krylov subspace ground state energy fault-tolerant quantum computing numerical stability
View Full Abstract

Krylov subspace methods are among the most extensively studied early fault-tolerant quantum algorithms for estimating ground-state energies of quantum systems. However, the rapid onset of ill-conditioning might make accurate energies difficult or even impossible to retrieve. In this communication, we analyse the numerical stability and statistical problems of these methods using numerical simulations both in the presence and absence of sampling noise. While in ideal numerical simulations the generalized eigenvalue problem indeed becomes unstable with increased Krylov subspace size, we find that, in realistic noisy settings, these methods do not primarily suffer from ill-conditioning. Instead, statistical fluctuations dominate and can prevent reliable solution extraction unless appropriate regularization or filtering techniques are employed. We consequently introduce two new metrics, the imaginary and unitary filters, that successfully assess the reliability of the obtained solutions without any knowledge of the true eigenspectrum.
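The ill-conditioning seen in ideal simulations shows up directly in the Krylov overlap (Gram) matrix built from real-time evolved states: by eigenvalue interlacing of nested principal submatrices, its condition number can only grow as the subspace is enlarged. A small numerical illustration with a random Hamiltonian (not the paper's systems; all parameters are arbitrary):

```python
# Condition number of the Krylov overlap matrix S_jk = <psi_j|psi_k>
# with |psi_k> = U^k |psi>, U = exp(-i H dt), for a random Hermitian H.
import numpy as np

rng = np.random.default_rng(7)
dim, d_max, dt = 16, 8, 0.1
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                      # random Hermitian "Hamiltonian"
w, P = np.linalg.eigh(H)
U = P @ np.diag(np.exp(-1j * dt * w)) @ P.conj().T   # U = exp(-i H dt)

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

vecs = [psi]
for _ in range(d_max - 1):
    vecs.append(U @ vecs[-1])                 # real-time Krylov basis
S = np.array([[v.conj() @ u for u in vecs] for v in vecs])  # Gram matrix

# condition number of the leading d x d block, for growing d
conds = [np.linalg.cond(S[:d, :d]) for d in range(2, d_max + 1)]
```

In the noiseless setting this growth is the instability the authors analyse; their point is that with sampling noise, statistical fluctuations in the estimated S and H matrix elements dominate before ill-conditioning does.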

When T-Depth Misleads: Predicting Fault-Tolerant Quantum Execution Slowdown under Magic-State Delivery Constraints

Boshuai Ye, Arif Ali Khan, Peng Liang

2604.11409 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new metrics to predict the actual execution time of fault-tolerant quantum algorithms by modeling the bottleneck of magic state production rates, showing that traditional T-depth measurements poorly predict real performance under realistic hardware constraints.

Key Contributions

  • Introduction of slack ratio and Delta_max metrics for predicting quantum algorithm execution slowdown under magic state delivery constraints
  • Demonstration that these new metrics are superior predictors of performance compared to traditional T-depth measurements
  • Provable lower bound on executable makespan with empirical validation across 4,904 test instances
fault-tolerant quantum computing magic states T-depth quantum compilation execution scheduling
View Full Abstract

The efficient execution of fault-tolerant quantum algorithms is fundamentally limited by the production rate of magic states required for non-Clifford operations. While circuit optimization typically targets T-depth, static T-depth does not reliably predict executable performance under bounded T-state delivery. We introduce a model that captures demand-supply imbalance using two key quantities: slack ratio, a structural indicator of scheduling flexibility, and Delta_max, a measure of cumulative demand surplus. We show that Delta_max is a strong schedule-level indicator of execution slowdown and yields a provable lower bound on executable makespan for a fixed schedule. Empirical evaluation on constructed directed acyclic graph (DAG) families, with arithmetic circuits and exact quantum Fourier transform (QFT) traces providing additional grounding, shows that slack ratio is a stronger structural predictor than T-depth for stall and inversion risk, while Delta_max is the strongest predictor of slowdown. Across 4,904 instances, the lower bound shows zero violations, with 88.9% of cases within one cycle. These results highlight the importance of explicitly modeling delivery constraints in fault-tolerant quantum compilation.
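One way to make "cumulative demand surplus" concrete is a toy delivery model (illustrative only; the paper's precise definitions of slack ratio and Delta_max may differ): each logical cycle demands some number of T states while a factory produces at a fixed rate, and a cycle cannot start until cumulative supply covers cumulative demand.

```python
# Toy magic-state delivery model (hypothetical; not the paper's exact
# formulation). demand[t] T states are consumed at logical cycle t and
# the factory delivers `rate` states per elapsed cycle; execution stalls
# whenever cumulative demand outruns cumulative supply.

def makespan(demand, rate):
    elapsed, cum = 0, 0
    for d in demand:
        cum += d
        # either the schedule advances one cycle, or it waits for supply
        elapsed = max(elapsed + 1, -(-cum // rate))  # ceil(cum / rate)
    return elapsed

def slowdown(demand, rate):
    # Delta_max-flavoured quantity: extra cycles forced by delivery limits,
    # beyond the static (T-depth-like) schedule length
    return makespan(demand, rate) - len(demand)
```

This captures the paper's headline observation in miniature: two schedules with identical length (T-depth) can have very different makespans once their demand profiles are run against a bounded delivery rate.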

From GDSII to Wafer: EDA Design Flow and Data Conversion for Wafer-Scale Manufacturing of Superconducting Quantum Chips

Ling Qiao, Fumin Luo, Qinglang Guo

2604.11379 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a comprehensive electronic design automation (EDA) framework for manufacturing large-scale superconducting quantum processors at the wafer level. The work addresses the critical challenge of converting quantum circuit designs into manufacturable chip layouts by developing specialized design rules, verification processes, and data conversion pipelines that bridge the gap between quantum circuit design and semiconductor fabrication.

Key Contributions

  • Development of quantum-specific design rule checking (DRC) rules and multi-layer process stack model for superconducting quantum chips
  • Systematic Q-EDA technology stack architecture enabling seamless conversion from GDSII design files to wafer-scale manufacturing data
  • Comprehensive analysis and benchmarking of manufacturing data-flow coverage for quantum chip fabrication tools
superconducting quantum computing wafer-scale manufacturing electronic design automation quantum chip fabrication design rule checking
View Full Abstract

Superconducting quantum computing is advancing toward the thousand- and even million-qubit regime, making wafer-scale fabrication an essential pathway for achieving large-scale, cost-effective quantum processors. This manufacturing paradigm imposes new requirements on quantum-chip electronic design automation (Q-EDA): design tools must not only generate layouts (GDSII files) that satisfy quantum-circuit physical constraints but also ensure that the design data can be seamlessly converted into a complete set of manufacturing files executable by a wafer foundry, thereby enabling reliable translation from design intent to physical chip. This paper focuses on this critical data-conversion pipeline and presents a systematic treatment of the Q-EDA technology stack for wafer-scale fabrication. Starting from GDSII as the single authoritative data source, we analyze the key stages including process-design-kit (PDK)-based design rule checking (DRC), layout-versus-schematic (LVS) verification, design for manufacturability (DFM) optimization, wafer layout planning, and mask data preparation (MDP). We describe the concrete architecture of a Q-EDA system, present nine quantum-specific DRC rules together with their physical underpinnings and a multi-layer process stack model, and benchmark the manufacturing data-flow coverage of mainstream Q-EDA tools. Finally, we discuss the core challenges and future directions in this field.

Analytical Theory of Greedy Peeling for Bivariate Bicycle Codes and Two-Shot Streaming Decoding

Anton Pakhunov

2604.11352 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an analytical theory for fast 'greedy peeling' decoding of quantum error correction codes called bivariate bicycle codes, achieving 330x faster decoding than standard methods while maintaining the same error correction performance. The work provides mathematical formulas to predict decoding success and demonstrates ultra-fast two-shot decoding that could enable real-time quantum error correction.

Key Contributions

  • Closed-form analytical formula for collision resolution factor A_0 that predicts greedy peeling decoder performance with no free parameters
  • Demonstration of two-shot streaming decoding achieving ~50 ns latency with 89% success rate for quantum error correction
quantum error correction bivariate bicycle codes greedy peeling decoding fault-tolerant quantum computing syndrome decoding
View Full Abstract

We present an analytical theory of greedy peeling decoding for bivariate bicycle (BB) codes under circuit-level noise. The deferred greedy decoder achieves a 330x latency reduction over belief propagation (BP) at $p = 10^{-3}$ while maintaining identical logical error rate. Our main theoretical contribution is a closed-form collision resolution factor $A_0 = |\text{true collisions}| / |\text{birthday collisions}|$, derived from XOR syndrome analysis with no free parameters, that quantifies the fraction of detector-sharing fault pairs genuinely blocking iterative peeling. For the $[[144,12,12]]$ Gross code, $A_0 = 0.8685$ (within 0.5% of the empirical value), with shared-2 pairs (4-cycles) always resolving under peeling. We show $A_0$ depends on the mean fault-graph degree $\bar{d}$ rather than code size: $A_0 = 0.87$ for $\bar{d} = 52$ (Gross family) versus $A_0 = 0.76$ for $\bar{d} = 17$ ($[[32,8,6]]$). We establish a syndrome code stopping distance $d_S = n/4.5$ for the Gross family and demonstrate that $[[32,8,6]]$ ($d_S = 4$) enables two-shot streaming decoding: $T = 2$ rounds achieve 89% peeling success with a $1.29 \pm 0.03$ LER ratio versus $T = 12$, at an estimated latency of ~50 ns. The full formula $P_\text{peel} = \exp(-A_0 \, \gamma_\text{analytic} \, e^{-BTp} \, n p^2)$ is validated across five BB codes, four noise levels, and four values of $T$ with $R^2 = 0.86$. Cross-platform reproduction of the Kunlun $[[18,4,4]]$ experiment matches their hardware LER within 0.73 percentage points.

Autonomous Quantum Error Correction of Spin-Oscillator Hybrid Qubits

Sungjoo Cho, Ju-yeon Gyhm, Hyukjoon Kwon, Hyunseok Jeong

2604.11145 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper proposes a new method for quantum error correction that doesn't require constant measurements and feedback. Instead, it uses engineered dissipation in a hybrid system combining spin qubits with oscillator modes to automatically stabilize quantum information, making error correction more hardware-efficient.

Key Contributions

  • Novel measurement-free autonomous quantum error correction scheme
  • Hardware-efficient hybrid spin-oscillator approach that simplifies system-bath coupling requirements
  • Practical implementation pathway using existing trapped-ion platform capabilities
Keywords: quantum error correction · autonomous QEC · spin-oscillator hybrid · dissipation engineering · Lindbladian

Abstract

We propose a novel measurement-free scheme for stabilizing a spin-oscillator hybrid qubit via autonomous quantum error correction. The engineered Lindbladian renders the code space into an attractive steady-state subspace, realized by coupling the storage mode to a rapidly cooled bath through a controlled beam-splitter and spin-dependent displacement interactions. The continuous variable-discrete variable hybrid approach to autonomous quantum error correction preserves the hardware efficiency of conventional dissipation engineering while simplifying the required system-bath coupling. The construction is compatible with simple logical gates and leverages primitives already demonstrated in experimental platforms, such as trapped-ion systems, suggesting a practical route to hardware-efficient, noise-biased logical qubits without repeated syndrome measurements and feedforward.

QuMod: Parallel Quantum Job Scheduling on Modular QPUs using Circuit Cutting

Vinooth Kulkarni, Aaron Orenstein, Xinpeng Li, Shuai Xu, Daniel Blankenberg, Vipin Chaudhary

2604.11013 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper develops a scheduling system for modular quantum processing units (QPUs) that can execute multiple quantum jobs in parallel across connected quantum devices. The scheduler manages how quantum circuits are divided and distributed across multiple QPUs while coordinating operations like qubit mapping and teleportation between devices.

Key Contributions

  • Multi-programmable scheduler for modular quantum systems
  • Joint optimization of qubit mapping, parallel circuit execution, and inter-QPU teleportation operations
Keywords: modular QPUs · quantum job scheduling · circuit cutting · parallel quantum computing · quantum teleportation

Abstract

The quantum computing community is increasingly positioning quantum processors as accelerators within classical HPC workflows, analogous to GPUs and TPUs. However, many real-world applications require scaling to hundreds or thousands of physical qubits to realize logical qubits via error correction. To reach these scales, hardware vendors employing diverse technologies -- such as trapped ions, photonics, neutral atoms, and superconducting circuits -- are moving beyond single, monolithic QPUs toward modular architectures connected via interconnects. For example, IonQ has proposed photonic links for scaling, while IBM has demonstrated a modular QPU architecture by classically linking two 127-qubit devices. Using dynamic circuits, Bell-pair-based teleportation, and circuit cutting, they have shown how to execute a large quantum circuit that cannot fit on a single QPU. As interest in quantum computing grows, cloud providers must ensure fair and efficient resource allocation for multiple users sharing such modular systems. Classical interconnection of QPUs introduces new scheduling challenges, particularly when multiple jobs execute in parallel. In this work, we develop a multi-programmable scheduler for modular quantum systems that jointly considers qubit mapping, parallel circuit execution, measurement synchronization across subcircuits, and teleportation operations between QPUs using dynamic circuits.
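To make the scheduling problem concrete, here is a toy sketch, not QuMod's algorithm: first-fit-decreasing placement of a batch of jobs on fixed-capacity QPUs, with a greedy split across devices standing in for circuit cutting plus teleportation. Job names and capacities are made up for illustration.

```python
def schedule(jobs, qpu_caps):
    """Toy first-fit-decreasing scheduler for one batch on modular QPUs.

    jobs: {name: qubit count}; qpu_caps: [capacity per QPU].
    A job that fits on one QPU is placed whole; a larger job is 'cut'
    greedily across QPUs (a stand-in for circuit cutting + teleportation).
    Returns {name: [(qpu_index, qubits_used), ...]}.
    """
    free = list(qpu_caps)
    placement = {}
    for name, q in sorted(jobs.items(), key=lambda kv: -kv[1]):
        parts, need = [], q
        # try whole-job placement first (avoids cutting overhead)
        for i, f in enumerate(free):
            if f >= need:
                parts = [(i, need)]
                free[i] -= need
                need = 0
                break
        # otherwise cut the job across QPUs with remaining capacity
        if need:
            for i, f in enumerate(free):
                if f == 0:
                    continue
                take = min(f, need)
                parts.append((i, take))
                free[i] -= take
                need -= take
                if need == 0:
                    break
        if need:
            raise ValueError(f"not enough total capacity for {name}")
        placement[name] = parts
    return placement

# two hypothetical 127-qubit QPUs, IBM-style; the 160-qubit job must be cut
plan = schedule({"qft": 160, "vqe": 60, "ghz": 30}, qpu_caps=[127, 127])
```

A real scheduler like the one described must additionally co-optimize qubit mapping, synchronize mid-circuit measurements across subcircuits, and budget Bell pairs for teleportation; this sketch only captures the capacity-packing layer.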

Compiler Framework for Directional Transport in Zoned Neutral Atom Systems with AOD Assistance: A Hybrid Remote CZ Approach

Lingyi Kong, Chen Huang, Zhemin Zhang, Yidong Zhou, Xiangyu Ren, Shaochen Li, Zhiding Liang

2604.11000 • Apr 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents a new method for creating quantum gates between distant qubits in neutral atom quantum computers by using directional transport of Rydberg excitations through ancilla atoms, rather than physically moving the qubits themselves. The approach significantly reduces gate operation time and enables long-distance qubit connectivity beyond current hardware limitations.

Key Contributions

  • Novel directional-transport based remote CZ gate implementation for neutral atom systems
  • Compiler framework that reduces entangling gate duration by 50-90% compared to AOD-only approaches
  • Method to achieve long-distance qubit connectivity beyond physical shuttling limitations
Keywords: neutral atoms · remote entanglement · CZ gate · Rydberg excitation · AOD

Abstract

We present a directional-transport (DT)-based remote CZ gate and compiler for zoned neutral-atom arrays that overcomes movement-bound entanglement limitations. Current AOD-based shuttling faces row/column non-crossing constraints, device-speed limits, and hardware-restricted range - bottlenecks for long-distance connectivity. Our approach reserves AODs for channel setup and micro-tuning while making DT the default for remote entanglement. Under antiblockade, a detuning-modulated pi-pulse sequence drives directional transport of a Rydberg excitation along a dynamic and resettable ancilla corridor, realizing a CZ gate between stationary, non-adjacent qubits. This cuts entangling-stage duration by approximately 50 to 90 percent versus AOD-only baselines and enables long-distance connectivity beyond objective-limited shuttling.

Crosstalk-robust superconducting two-qubit geometric gates using tunable couplers

Bo-Xun Deng, Jia-Qi Hu, Cheng-Yun Ding, Zheng-Yuan Xue, Tao Chen

2604.08861 • Apr 10, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new method for implementing two-qubit gates in superconducting quantum computers that uses tunable couplers and geometric control to simultaneously reduce crosstalk interference and maintain fast gate operation speeds. The approach adds extra control parameters to steer the quantum system evolution away from crosstalk-sensitive regions while preserving high gate fidelity.

Key Contributions

  • Development of crosstalk-robust two-qubit geometric gates using tunable couplers that simultaneously optimize for both crosstalk suppression and fast gate operation
  • Demonstration of strong robustness against experimental imperfections including qubit frequency drift and decoherence while maintaining high fidelity
Keywords: superconducting qubits · two-qubit gates · geometric gates · crosstalk suppression · tunable couplers

Abstract

The design of coupler-based superconducting two-qubit gates simplifies circuit layout and alleviates frequency crowding, thereby enhancing the scalability and flexibility of quantum chips. However, in such architectures, a trade-off often exists between suppressing crosstalk and reducing gate duration, and how to achieve synergistic optimization of both remains an open challenge. To address this challenge, this paper proposes a coupler-assisted superconducting two-qubit geometric gate scheme oriented towards crosstalk robustness. By introducing additional parametric degrees of freedom, the scheme steers the system evolution along desired trajectories, thereby flexibly avoiding crosstalk-sensitive operational regions. Numerical simulations demonstrate that the proposed scheme can effectively suppress crosstalk errors while enabling fast gate operations, and exhibits strong robustness against typical experimental imperfections such as qubit frequency drift. Moreover, even when accounting for unavoidable high-frequency oscillation terms and qubit decoherence in realistic physical systems, our crosstalk-robust two-qubit geometric gates still achieve high fidelity. This work provides a feasible pathway toward robust and efficient two-qubit gate implementation in superconducting quantum computation.

The MQT Compiler Collection: A Blueprint for a Future-Proof Quantum-Classical Compilation Framework

Lukas Burgholzer, Daniel Haag, Yannick Stade, Damian Rovara, Patrick Hopf, Robert Wille

2604.08674 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents the MQT Compiler Collection, a new quantum computing compilation framework that uses a classical-first approach based on Multi-Level Intermediate Representation (MLIR) to better handle quantum-classical hybrid algorithms. The framework addresses limitations of previous quantum-first approaches by supporting complex optimizations and structured control flow needed for modern quantum algorithms and error correction schemes.

Key Contributions

  • Introduction of a classical-first quantum compilation framework using MLIR
  • Blueprint for supporting quantum-classical hybrid programs with structured control flow
  • Framework designed to enable complex optimizations beyond simple gate operations
Keywords: quantum compilation · MLIR · quantum-classical hybrid · compiler optimization · quantum algorithms

Abstract

As the capabilities of quantum computing hardware continue to rise, algorithms that exploit them are becoming increasingly complex. These developments increase the need for sophisticated compilation frameworks that translate high-level algorithms into executable code. In the past, most solutions were built with a quantum-first approach and handled mostly pure quantum programs without classical elements such as structured control flow. However, developments in quantum algorithms, error correction, and optimization, as well as the integration into high-performance computing (HPC) environments, depend on such classical elements. As quantum-first approaches increasingly struggle to handle these concepts, classical-first approaches are becoming a promising alternative. In this work, we present the MQT Compiler Collection, a blueprint for a future-proof quantum-classical compilation framework built on the Multi-Level Intermediate Representation (MLIR). After years of experience with the quantum-first approach and its shortcomings, we propose a framework that embraces core MLIR concepts to support the full compilation pipeline from high-level algorithms to hardware-specific instructions. The proposed architecture is designed from the ground up to support complex optimizations beyond, e.g., simple gate cancellation. It is publicly available at https://github.com/munich-quantum-toolkit/core.
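For context on the baseline the authors say their architecture goes beyond, "simple gate cancellation" fits in a few lines over a hypothetical gate-list IR. This sketch is not MQT's MLIR dialects or API, just an illustration of the kind of peephole pass a compilation framework composes.

```python
# Gates that are their own inverse: G followed by G on the same qubits is identity.
SELF_INVERSE = {"h", "x", "y", "z", "cx"}

def cancel_adjacent(ops):
    """Peephole pass removing adjacent identical self-inverse gates on the
    same qubits. Using a stack means nested pairs like H X X H also
    collapse in a single sweep. ops: list of (gate_name, qubit_tuple)."""
    out = []
    for op in ops:
        if out and out[-1] == op and op[0] in SELF_INVERSE:
            out.pop()          # G . G = I
        else:
            out.append(op)
    return out

circ = [("h", (0,)), ("h", (0,)), ("cx", (0, 1)), ("cx", (0, 1)), ("x", (1,))]
```

The point of a classical-first, MLIR-based design is that passes like this can then be interleaved with classical control-flow analyses (loops, branches, function calls) rather than being limited to flat gate streams.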

Hardware-Efficient Erasure Qubits With Superconducting Transmon Qutrits

Bao-Jie Liu, Ying-Ying Wang, Yu-Xin Wang, Manthan Badbaria, Shruti Puri, Chen Wang

2604.08672 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper demonstrates a hardware-efficient approach to quantum error correction by using superconducting transmon qutrits (three-level quantum systems) as erasure qubits, where dominant errors can be detected and converted into erasures. The method achieves significantly improved logical qubit lifetimes and high-fidelity operations while being compatible with existing superconducting quantum computing hardware.

Key Contributions

  • Hardware-efficient erasure qubit implementation using transmon qutrits that is compatible with standard superconducting circuit hardware
  • Demonstration of 10x improvement in logical qubit T1 lifetime (exceeding 500 μs) compared to physical qubits through erasure detection
  • Achievement of single-qubit gate fidelities on the order of 10^-4 and demonstration of heralded Bell state generation between erasure qubits
Keywords: quantum error correction · erasure qubits · superconducting transmons · fault tolerance · quantum computing hardware

Abstract

Quantum error correction using erasure qubits offers higher fault-tolerant thresholds and improved scaling by converting dominant physical errors into detectable erasures. In superconducting circuits, erasure qubits can be constructed using the dual-rail approach, which, however, requires additional qubit-count overhead and tailored coupling elements. Here, we demonstrate a hardware-efficient scheme that operates transmon qutrits as erasure qubits, which is compatible with standard superconducting circuit-QED hardware. The logical states $\ket{0_\text{L}}$ and $\ket{1_\text{L}}$ are represented by the ground and second excited states, while the dominant relaxation errors can be detected via an ancilla qubit using a microwave-activated two-qutrit SWAP gate. We demonstrate a logical qubit $T_1$ lifetime exceeding $500\,μ\mathrm{s}$, post-selected with repeated mid-circuit erasure detection, which is ten times longer than the $T_1$ time of the transmon physical qubit. Coherence times beyond $300\,μ\mathrm{s}$ are achieved using dynamical decoupling. Single-qubit gate operations reach average Clifford gate infidelity on the order of $10^{-4}$. We further demonstrate dual-purposing an ancilla qubit for both erasure detection and parity checking, showing heralded generation of Bell states between erasure qubits. These results suggest that mainstream architectures of transmon qubit arrays may already be capable of implementing erasure-based QEC strategies for hardware-efficient fault-tolerant quantum computing.

An Algorithm for Fast Assembling Large-Scale Defect-Free Atom Arrays

Tao Zhang, Xiaodi Li, Hui Zhai, Linghui Chen

2604.08669 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper presents algorithms to efficiently assemble large-scale defect-free arrays of 10,000+ atomic qubits using optical tweezers, solving key computational bottlenecks in path planning and potential generation that previously limited scalability.

Key Contributions

  • Graph neural network with modified auction decoder for fast path-planning with ~5ms constant overhead
  • Phase and profile-aware Weighted Gerchberg-Saxton algorithm for rapid potential generation in 0.5ms
Keywords: atom arrays · optical tweezers · scalable quantum computing · path planning · spatial light modulators

Abstract

It is widely believed that tens of thousands of physical qubits are needed to build a practically useful quantum computer. Atom arrays formed by optical tweezers are among the most promising platforms for achieving this goal, owing to the excellent scalability and mobility of atomic qubits. However, assembling a defect-free atom array with ~ 10^4 qubits remains algorithmically challenging, alongside other hardware limitations. This is due to the computationally hard path-planning problems and the time-consuming generation of sufficiently smooth trajectories for optical tweezer potentials by spatial light modulators (SLM). Here, we present a unified framework comprising two innovative components to fully address these algorithmic challenges: (1) a path-planning module that employs a supervised learning approach using a graph neural network combined with a modified auction decoder, and (2) a potential-generation module called the phase and profile-aware Weighted Gerchberg-Saxton algorithm. The inference time for the first module is nearly a size-independent constant overhead of ~ 5 ms, and the second module generates a potential frame in about 0.5 ms, a timescale shorter than the current commercial SLM refresh time. Altogether, our algorithm enables the assembly of an atom array with 10^4 qubits on a timescale much shorter than the typical vacuum lifetime of the trapped atoms.
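To see what a rearrangement plan looks like in the simplest setting, here is a toy 1-D compaction using the classic order-preserving assignment (sort both sides and pair them up). This is deliberately not the paper's GNN + auction-decoder planner; it only illustrates the input/output shape of the path-planning problem.

```python
def compaction_moves(loaded, target_start, target_size):
    """Toy 1-D rearrangement: move loaded tweezer sites (integer indices)
    into a contiguous defect-free block [target_start, target_start+target_size).
    Sorting both atoms and targets and pairing them in order yields
    non-crossing moves in 1-D, which AOD row/column constraints require.
    Returns a list of (from_site, to_site) moves; identity moves omitted."""
    atoms = sorted(loaded)
    if len(atoms) < target_size:
        raise ValueError("not enough atoms to fill the target block")
    chosen = atoms[:target_size]  # toy selection; real planners choose optimally
    targets = range(target_start, target_start + target_size)
    return [(a, t) for a, t in zip(chosen, targets) if a != t]

# stochastic loading left atoms at sites {0, 2, 3, 7, 9}; build a block at 2..5
moves = compaction_moves(loaded={0, 2, 3, 7, 9}, target_start=2, target_size=4)
```

The hard part at 10^4 sites, which the paper's learned planner targets, is doing this in 2-D under tweezer non-crossing constraints at millisecond timescales.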

High-Fidelity Transmon Reset with a Multimode Acoustic Resonator

Andraž Omahen, Simon Storz, Igor Kladarić, Yiwen Chu

2604.08655 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper demonstrates a new method for resetting superconducting qubits to their ground state by coupling them to a high-overtone bulk acoustic resonator (HBAR) that acts as a cold phononic bath. The technique achieves exceptionally low residual excited-state populations below 10^-4, which is 10-100 times better than existing reset methods.

Key Contributions

  • Demonstration of phononic bath cooling for transmon qubit reset achieving residual excited-state population below 10^-4
  • Introduction of high-overtone bulk acoustic resonator (HBAR) coupling as an alternative to existing reset protocols
  • Achievement of 1-2 orders of magnitude improvement in reset fidelity compared to existing schemes
Keywords: transmon qubit reset · superconducting circuits · phononic bath · HBAR

Abstract

Achieving sufficiently low residual excited-state populations remains a key challenge in superconducting quantum circuits, particularly for protocols operating close to noise limits or requiring repeated qubit initialization. Existing protocols primarily address this challenge through sophisticated control, engineered dissipation, or feedback mechanisms. Here, we demonstrate an alternative approach in which a superconducting qubit is reset using a physically distinct, intrinsically colder phononic bath. Specifically, we interface a transmon with a high-overtone bulk acoustic resonator (HBAR), enabling cooling of the qubit into GHz-frequency modes. Using this approach, we achieve a residual excited-state population of the qubit below $10^{-4}$, representing an improvement of one to two orders of magnitude compared to existing reset schemes. These results highlight the potential of phononic baths as a resource for high-fidelity qubit initialization in superconducting circuits.

Decoding coherent errors in toric codes on honeycomb and square lattices: duality to Majorana monitored dynamics and symmetry classes

Zhou Yang, Andreas W. W. Ludwig, Chao-Ming Jian

2604.08650 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies how toric codes (quantum error correction schemes) perform when subject to coherent quantum errors that involve interference effects. The authors establish a mathematical connection between decoding these errors and the dynamics of Majorana fermions, revealing that different lattice geometries and error types belong to distinct symmetry classes with different decodability properties.

Key Contributions

  • Established duality between toric code decoding under coherent errors and Majorana fermion monitored dynamics
  • Showed that Altland-Zirnbauer symmetry classes govern decodability phase diagrams
  • Characterized universal behavior of decodability transitions for different lattice geometries and error types
Keywords: toric codes · quantum error correction · coherent errors · topological codes · fault-tolerant quantum computing

Abstract

Topological stabilizer codes, such as the toric and surface codes, are leading candidates for fault-tolerant quantum computation. While their decodability under stochastic noise has been extensively studied, the effects of coherent errors, which involve quantum interference, remain less explored. In this work, we study the decodability of toric codes on honeycomb and square lattices subject to $X$- and $Z$-type coherent errors generated by the $X$- and $Z$-rotations on each qubit. We establish a duality between these decoding problems and 1+1D monitored dynamics of non-interacting Majorana fermions. This duality shows that the Altland-Zirnbauer symmetry class of the dual Majorana dynamics governs the universal structure of the decodability phase diagram. We show that the honeycomb-lattice toric code (hTC) with $X$-type error is dual to class-DIII dynamics, while the hTC with $Z$-type error and the square-lattice toric code (sTC) with both error types are dual to class-D dynamics. The key distinction arises from time-reversal symmetry. In class DIII, the generic transition out of the decodable phase is dual to a measurement-induced transition between dynamical phases with area-law and logarithmic entanglement scaling. In contrast, in class D, the generic decodability transition corresponds to a transition between two topologically distinct area-law phases. To explore these transitions in microscopic models, we consider hTC and sTC with $X$-type errors as representatives and introduce a minimal two-parameter coherent error model with spatially varying rotation angles. Using analytical and numerical methods, we map out the decodability phase diagrams and characterize the universal behavior of the transitions. We find that the decodability of sTC is more vulnerable to spatially varying coherent errors than uniform ones.

Scalable Neural Decoders for Practical Fault-Tolerant Quantum Computation

Andi Gu, J. Pablo Bonilla Ataides, Mikhail D. Lukin, Susanne F. Yelin

2604.08358 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a neural network-based decoder for quantum error correction that significantly improves the accuracy and speed of correcting errors in quantum computers. The decoder achieves much lower logical error rates and higher throughput than existing methods, making fault-tolerant quantum computing more practical.

Key Contributions

  • Development of convolutional neural network decoder for quantum error correction codes
  • Demonstration of 17x improvement in logical error rates with 3-5 orders of magnitude higher throughput
  • Discovery of waterfall regime showing practical fault-tolerant quantum computing achievable with modest code sizes
Keywords: quantum error correction · fault-tolerant quantum computing · neural network decoder · logical error rates · quantum low-density parity-check codes

Abstract

Quantum error correction (QEC) is essential for scalable quantum computing. However, it requires classical decoders that are fast and accurate enough to keep pace with quantum hardware. While quantum low-density parity-check codes have recently emerged as a promising route to efficient fault tolerance, current decoding algorithms do not allow one to realize the full potential of these codes in practical settings. Here, we introduce a convolutional neural network decoder that exploits the geometric structure of QEC codes, and use it to probe a novel "waterfall" regime of error suppression, demonstrating that the logical error rates required for large-scale fault-tolerant algorithms are attainable with modest code sizes at current physical error rates, and with latencies within the real-time budgets of several leading hardware platforms. For example, for the $[144, 12, 12]$ Gross code, the decoder achieves logical error rates up to $\sim 17$x below existing decoders - reaching logical error rates $\sim 10^{-10}$ at physical error $p=0.1\%$ - with 3-5 orders of magnitude higher throughput. This decoder also produces well-calibrated confidence estimates that can significantly reduce the time overhead of repeat-until-success protocols. Taken together, these results suggest that the space-time costs associated with fault-tolerant quantum computation may be significantly lower than previously anticipated.
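The abstract's point about calibrated confidence estimates reducing repeat-until-success overhead can be illustrated with a toy computation: post-selecting shots on a confidence threshold trades acceptance rate (hence expected attempts) against the error rate of accepted shots. The data below is hand-made synthetic data, not from the paper.

```python
def repeat_until_success_stats(shots, threshold):
    """Toy repeat-until-success accounting with a confidence-aware decoder.
    Shots whose decoder confidence falls below the threshold are discarded
    and retried. shots: list of (decode_correct: bool, confidence: float).
    Returns (error rate among accepted shots, expected attempts per accepted shot)."""
    accepted = [ok for ok, conf in shots if conf >= threshold]
    if not accepted:
        return None, float("inf")
    err = 1 - sum(accepted) / len(accepted)
    attempts = len(shots) / len(accepted)
    return err, attempts

# synthetic shots: high confidence correlates with correct decoding
shots = [(True, 0.99), (True, 0.95), (False, 0.60), (True, 0.90), (False, 0.55),
         (True, 0.97), (True, 0.40), (True, 0.92), (False, 0.50), (True, 0.98)]
err_all, att_all = repeat_until_success_stats(shots, threshold=0.0)   # accept everything
err_sel, att_sel = repeat_until_success_stats(shots, threshold=0.8)   # post-select
```

With well-calibrated confidences, a modest increase in expected attempts buys a large drop in accepted-shot error rate, which is the trade the paper exploits.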

Optimized Gottesman-Kitaev-Preskill Error Correction via Tunable Preprocessing

Xiang-Jiang Chen, Hao-Miao Jiang, Liu-Jun Wang, Qing Chen

2604.08247 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper proposes an improved error correction scheme for Gottesman-Kitaev-Preskill (GKP) quantum error correction codes by introducing a tunable preprocessing stage with squeezing parameters. The new P-Steane scheme can outperform existing methods by actively reshaping noise propagation patterns in bosonic quantum systems.

Key Contributions

  • Introduction of tunable preprocessing stage with squeezing parameters for GKP error correction
  • Unified framework that encompasses existing ME-Steane and teleportation-based schemes as special cases
  • Demonstration of improved performance over ME-Steane scheme under specific conditions with optimized parameter selection
Keywords: Gottesman-Kitaev-Preskill · bosonic codes · quantum error correction · fault-tolerant quantum computing · Steane syndrome extraction

Abstract

The Gottesman-Kitaev-Preskill (GKP) code is a promising bosonic candidate for realizing fault-tolerant quantum computation. Among existing error-correction protocols for GKP code, the Steane-type scheme is a canonical and widely adopted paradigm, yet its intrinsic noise propagation pattern limits further performance improvement. In this work, we propose a preprocessing-based Steane-type (P-Steane) scheme, which introduces a tunable preprocessing stage with squeezing parameters $a$ and $b$ to actively reshape noise propagation, thereby constituting a parameter framework. This framework spans a spectrum of protocols beyond existing methods, reproducing the performance of both the ME-Steane scheme ($a=1$, $b=1$) and the teleportation-based scheme ($a=1/\sqrt{2}$, $b=\sqrt{2}$) as special cases. Crucially, in the small-noise regime and when the data qubit is noisier than the ancilla qubits, P-Steane scheme achieves the minimum product of position- and momentum-quadrature output noise variances when $2a = b$, and consistently outperforms the ME-Steane scheme within a specific squeezing-parameter range under this condition.
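The parameter framework above can be made concrete by classifying where a given pair of squeezing parameters (a, b) sits. A small sketch based only on the special cases and the 2a = b condition quoted in the abstract; the tolerance and function name are our own.

```python
import math

def classify_psteane(a, b, tol=1e-9):
    """Locate (a, b) within the P-Steane parameter framework described in
    the abstract: ME-Steane (a=1, b=1), teleportation-based
    (a=1/sqrt(2), b=sqrt(2)), and the line 2a = b on which the scheme
    minimizes the product of output noise variances (small-noise regime,
    data qubit noisier than ancillas)."""
    labels = []
    if abs(a - 1) < tol and abs(b - 1) < tol:
        labels.append("ME-Steane special case")
    if abs(a - 1 / math.sqrt(2)) < tol and abs(b - math.sqrt(2)) < tol:
        labels.append("teleportation-based special case")
    if abs(2 * a - b) < tol:
        labels.append("optimal condition 2a = b")
    return labels or ["generic P-Steane point"]
```

Note that the teleportation-based point satisfies 2a = b exactly (2/sqrt(2) = sqrt(2)), so it already lies on the optimal line, whereas the ME-Steane point (a=1, b=1) does not.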

Belief Propagation Convergence Prediction for Bivariate Bicycle Quantum Error Correction Codes

Anton Pakhunov

2604.07995 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a simple method for predicting whether Belief Propagation decoding will converge for quantum error correction codes: check whether the syndrome defect count is divisible by the code's column weight. The predictor achieves over 95% accuracy and could reduce computational overhead in quantum error correction.

Key Contributions

  • Development of a modulo-based convergence predictor for Belief Propagation decoding with AUC = 0.995
  • Identification of structural mechanism linking syndrome defect count divisibility to BP convergence probability
  • Validation across multiple Bivariate Bicycle codes including IBM's targeted Gross codes for 2026-2028 deployment
Keywords: quantum error correction · belief propagation · bivariate bicycle codes · syndrome decoding · LDPC codes

Abstract

Decoding Bivariate Bicycle (BB) quantum error correction codes typically requires Belief Propagation (BP) followed by Ordered Statistics Decoding (OSD) post-processing when BP fails to converge. Whether BP will converge on a given syndrome is currently determined only after running BP to completion. We show that convergence can be predicted in advance by a single modulo operation: if the syndrome defect count is divisible by the code's column weight w, BP converges with high probability (100% at p <= 0.001, degrading to 87% at p = 0.01); otherwise, BP fails with probability >= 90%. The mechanism is structural: each physical data error activates exactly w stabilizers, so a defect count not divisible by w implies the presence of measurement errors outside BP's model space. Validated on five BB codes with column weights w = 2, 3, and 4, mod-w achieves AUC = 0.995 as a convergence classifier at p = 0.001 under phenomenological noise, dominating all other syndrome features (next best: AUC = 0.52). The false positive rate scales empirically as O(p^2.05) (R^2 = 0.98), confirming the analytical bound from Proposition 2. Among BP failures on mod-w = 0 syndromes, 82% contain weight-2 data error clusters, directly confirming the dominant failure mechanism. The prediction is invariant under BP scheduling strategy and decoder variant, including Relay-BP - the strongest known BP enhancement for quantum LDPC codes. These results apply directly to IBM's Gross code [[144, 12, 12]] and Two-Gross code [[288, 12, 18]], targeted for deployment in 2026-2028.
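The mod-w predictor itself is a one-liner, which is precisely the appeal: it can gate whether to pre-provision OSD post-processing before BP even runs. A minimal sketch of the rule as stated in the abstract (the example syndrome is invented):

```python
def predict_bp_convergence(syndrome, w):
    """mod-w predictor from the abstract: each physical data error flips
    exactly w stabilizers, so a defect count not divisible by the column
    weight w implies measurement errors outside BP's model space, and BP
    is predicted to fail. syndrome: iterable of 0/1 detector outcomes."""
    defects = sum(1 for s in syndrome if s)
    return defects % w == 0

# w = 3 (one of the column weights studied); 6 defects -> predict convergence
converges = predict_bp_convergence([1, 0, 1, 1, 0, 1, 1, 1], w=3)
```

Per the abstract, the prediction errs mainly via O(p^2) false positives (e.g. two data errors whose defect counts sum to a multiple of w), which is why accuracy degrades from ~100% at p <= 0.001 to 87% at p = 0.01.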

A Review of Variational Quantum Algorithms: Insights into Fault-Tolerant Quantum Computing

Zhirao Wang, Junxiang Huang, Runyu Ye, Qingyu Li, Qi-Ming Ding, Yiming Huang, Ting Zhang, Yumeng Zeng, Jianshuo Gao, Xiao Yuan, Yuan Yao

2604.07909 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper reviews variational quantum algorithms (VQAs), which combine quantum circuits with classical optimization to work on current noisy quantum computers, and analyzes how these algorithms might evolve as quantum computers become fault-tolerant. The review examines current challenges like barren plateaus and explores applications across physics, chemistry, and machine learning.

Key Contributions

  • Systematic analysis of VQA evolution from NISQ to fault-tolerant quantum computing regimes
  • Comprehensive review of training bottlenecks like barren plateaus and mitigation strategies
  • Theoretical roadmap for adapting variational algorithms to error-corrected quantum systems
Keywords: variational quantum algorithms · fault-tolerant quantum computing · NISQ · parameterized quantum circuits · barren plateaus

Abstract

Variational quantum algorithms (VQAs) have established themselves as a central computational paradigm in the Noisy Intermediate-Scale Quantum (NISQ) era. By coupling parameterized quantum circuits (PQCs) with classical optimization, they operate effectively under strict hardware limitations. However, as quantum architectures transition toward early fault-tolerant (EFT) and ultimate fault-tolerant (FT) regimes, the foundational principles and long-term viability of VQAs require systematic reassessment. This review offers an insightful analysis of VQAs and their progression toward the fault-tolerant regime. We deconstruct the core algorithmic framework by examining ansatz design and classical optimization strategies, including cost function formulation, gradient computation, and optimizer selection. Concurrently, we evaluate critical training bottlenecks, notably barren plateaus (BPs), alongside established mitigation strategies. The discussion then explores the EFT phase, detailing how the integration of quantum error mitigation and partial error correction can sustain algorithmic performance. Addressing the FT phase, we analyze the inherent challenges confronting current hybrid VQA models. Furthermore, we synthesize recent VQA applications across diverse domains, including many-body physics, quantum chemistry, machine learning, and mathematical optimization. Ultimately, this review outlines a theoretical roadmap for adapting quantum algorithms to future hardware generations, elucidating how variational principles can be systematically refined to maintain their relevance and efficiency within an error-corrected computational environment.
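The core loop the review surveys — a parameterized circuit, a cost function, and a classical optimizer fed by parameter-shift gradients — fits in a few lines for a one-qubit toy. Here the analytic expectation <Z> = cos(theta) after Ry(theta)|0> stands in for a hardware measurement; everything else is the standard VQA recipe.

```python
import math

def energy(theta):
    """Cost function: <Z> after Ry(theta)|0>, which equals cos(theta).
    On real hardware this value would come from repeated measurements."""
    return math.cos(theta)

def parameter_shift_grad(f, theta):
    """Exact gradient via the parameter-shift rule, valid for Pauli
    rotations: dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2."""
    return (f(theta + math.pi / 2) - f(theta - math.pi / 2)) / 2

theta = 0.3                 # arbitrary starting parameter
for _ in range(200):        # plain gradient descent as the classical optimizer
    theta -= 0.1 * parameter_shift_grad(energy, theta)
# descent drives theta toward pi, where the ground-state cost <Z> = -1 is reached
```

The barren-plateau problem the review discusses is what happens to `parameter_shift_grad` at scale: for many deep, expressive ansätze its magnitude concentrates exponentially close to zero in the qubit count, starving exactly this loop.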

Fast and Coherent Transfer of Atomic Qubits in Optical Tweezers using Fiber Array Architecture

Jia-Chao Wang, Zai-Zheng Zhang, Xiao Li, Guang-Wei Wang, Xiao-Dong He, Min Liu, Peng Xu

2604.07862 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a new fiber array architecture for neutral-atom quantum computers that enables fast, coherent transfer of atomic qubits between optical trap sites with extremely low heating and high fidelity. The technique allows qubits to be moved between different locations in the quantum processor while maintaining their quantum states, which is crucial for implementing quantum algorithms that require connectivity between distant qubits.

Key Contributions

  • Demonstrated ultrafast qubit transfer (10 μs) with extremely high fidelity (0.99992 per cycle) and ultralow motional heating
  • Developed fiber array architecture with site-resolved trap depth control enabling smooth amplitude exchange between static and moving traps
  • Established theoretical model connecting array inhomogeneity to transfer heating rates through parallel transfer experiments
neutral atoms optical tweezers qubit transfer quantum computing architecture motional heating
View Full Abstract

Programmable neutral-atom arrays offer a promising route toward scalable quantum computing, where coherent qubit transfer enables non-local connectivity and reduces resource overhead. However, transfer speed and motional heating remain key bottlenecks for fast and deep quantum circuits. Here, we employ a fiber array neutral-atom quantum computing architecture with site-resolved control of trap depths to realize smooth amplitude exchange between static and moving traps, thereby enabling fast and coherent qubit transfer with ultralow motional heating. With a 10 μs in situ transfer between static and moving traps, we obtain a per-cycle heating rate of 0.156(9) μK, sustain over 500 cycles with negligible atom loss, and achieve a quantum state fidelity of 0.99992(5) per cycle. For inter-site transfer between two separated static traps, the operation takes 120 μs with 0.783(17) μK heating per transfer, and atom loss remains negligible for up to 100 repeated cycles with a fidelity of 0.9998(1) per transfer. Furthermore, through experimental studies of parallel transfer, we establish a model that elucidates the relationship between array inhomogeneity and the transfer heating rate. This fast, low-heating coherent transfer capability provides a practical route for improving both speed and fidelity in atom-shuttling based quantum computing.

Trotterization with Many-body Coulomb Interactions: Convergence for General Initial Conditions and State-Dependent Improvements

Di Fang, Xiaoxu Wu

2604.07704 • Apr 9, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper establishes rigorous error bounds for Trotter formulas when simulating many-body quantum systems with Coulomb interactions, showing that second-order Trotter achieves polynomial scaling in particle number despite the challenging mathematical properties of Coulomb potentials. The work identifies conditions under which convergence rates can be improved and connects these to physically meaningful quantum states.

Key Contributions

  • Rigorous proof that second-order Trotter formulas achieve 1/4 convergence rate with polynomial particle number dependence for Coulomb systems
  • Identification of physically meaningful initial state conditions that improve convergence rates to first and second order
Trotterization Coulomb interactions quantum simulation many-body systems convergence analysis
View Full Abstract

Efficiently simulating many-body quantum systems with Coulomb interactions is a fundamental question in quantum physics, quantum chemistry, and quantum computing, yet it presents unique challenges: the Hamiltonian is an unbounded operator (both kinetic and potential parts are unbounded); its Hilbert space dimension grows exponentially with particle number; and the Coulomb potential is singular, long-ranged, non-smooth, and unbounded, violating the regularity assumptions of many prior state-of-the-art many-body simulation analyses. In this work, we establish rigorous error bounds for Trotter formulas applied to many-body quantum systems with Coulomb interactions. Our first main result shows that for general initial conditions in the domain of the Hamiltonian, second-order Trotter achieves a sharp $1/4$ convergence rate with explicit polynomial dependence of the error prefactor on the particle number. The polynomial dependence on system size suggests that the algorithm remains quantumly efficient, even without introducing any regularization of the Coulomb singularity. Notably, although the result under general conditions constitutes a worst-case bound, this rate has been observed in prior work for the hydrogen ground state, demonstrating its relevance to physically and practically important initial conditions. Our second main result identifies a set of physically meaningful conditions on the initial state under which the convergence rate improves to first and second order. For hydrogenic systems, these conditions are connected to excited states with sufficiently high angular momentum. Our theoretical findings are consistent with prior numerical observations.
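The convergence rates discussed above can be illustrated numerically. The sketch below checks the second-order Trotter scaling on a toy bounded two-level Hamiltonian — a stand-in for the kinetic-plus-Coulomb split, not the unbounded singular setting the paper actually analyzes, where the worst-case rate degrades to 1/4:

```python
import numpy as np

def expm_herm(H, t):
    """exp(-i H t) for a Hermitian matrix H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Toy non-commuting split H = A + B (a bounded stand-in for the
# kinetic + potential decomposition analyzed in the paper).
A = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
B = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z

t = 1.0
exact = expm_herm(A + B, t)

def trotter2_error(n):
    """Operator-norm error of n second-order Trotter (Strang) steps."""
    dt = t / n
    step = expm_herm(A, dt / 2) @ expm_herm(B, dt) @ expm_herm(A, dt / 2)
    return np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2)

# For a smooth bounded Hamiltonian, halving the step size cuts the
# error by ~4x (second order); the Coulomb case is exactly where this
# clean scaling can break down for generic initial data.
e1, e2 = trotter2_error(10), trotter2_error(20)
print(e1 / e2)  # ≈ 4
```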

Defect-free arrays at the thousand-atom scale in a 4-K cryogenic environment

Desiree Lim, Hadriel Mamann, Grégoire Pichard, Lilian Bourachot, Arvid Lindberg, Clotilde Hamot, Hugo Le Bars, Florian Fasola, Siddhy Tan, Gwennolé ...

2604.07205 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper demonstrates a cryogenic system operating at 4 K that can create and maintain arrays of up to 1024 individual atoms trapped by laser tweezers, achieving extremely long trapping times of around 5000 seconds. The system is designed to be compatible with Rydberg-state manipulation, enabling large-scale quantum computing applications.

Key Contributions

  • Development of 4K cryogenic platform with high numerical aperture optics for thousand-atom scale arrays
  • Achievement of 5000-second trapping lifetimes enabling extended experimental time
  • Demonstration of defect-free arrays up to 1024 atoms using dual-wavelength trapping
optical tweezers Rydberg atoms cryogenic systems neutral atom quantum computing large-scale quantum arrays
View Full Abstract

We report on a cryogenic platform at 4 K incorporating high numerical aperture optics for the generation of large-scale tweezers arrays, and compatible with Rydberg-state manipulation. We achieve trapping lifetimes of around 5000 s, significantly extending the available experimental time for the preparation of large-scale arrays. By combining two trapping lasers at different wavelengths and by minimizing other atom losses during the rearrangement and imaging processes, we demonstrate the preparation of defect-free arrays with up to 1024 atoms. Our cryogenic design opens exciting prospects for analog and digital quantum computing.

Coherence and entanglement dynamics in Shor's algorithm

Linlin Ye, Zhaoqi Wu, Shao-Ming Fei

2604.06639 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes how quantum coherence and entanglement change during the execution of Shor's algorithm for factoring large numbers. The researchers show that Shor's algorithm generally decreases coherence while increasing entanglement, and they establish relationships between these quantum resources throughout the algorithm's steps.

Key Contributions

  • Analysis of coherence and entanglement dynamics throughout Shor's algorithm execution
  • Demonstration that Shor's algorithm depletes coherence while producing entanglement
  • Establishment of relationships between geometric coherence and geometric entanglement in quantum algorithms
Shor's algorithm quantum coherence quantum entanglement prime factorization quantum algorithms
View Full Abstract

Shor's algorithm outperforms its classical counterpart in efficient prime factorization. We explore the coherence and entanglement dynamics of the evolved states within Shor's algorithm, showing that the coherence at each step depends on the dimension of the register or on the order, and discuss the relations between geometric coherence and geometric entanglement. We investigate how unitary operators induce variations in coherence and entanglement, and analyze the variations of coherence and entanglement within the entire algorithm, demonstrating that the overall effect of Shor's algorithm tends to deplete coherence and produce entanglement. Our research not only deepens the understanding of this algorithm but also provides methodological references for studying resource dynamics in other quantum algorithms.
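The resource dynamics studied here concern the quantum order-finding core; for orientation, this minimal sketch shows the standard classical post-processing that consumes the order once the quantum subroutine has found it (textbook material, not code from the paper):

```python
from math import gcd

def factor_from_order(N, a, r):
    """Classical step of Shor's algorithm: given the order r of a mod N
    (found by the quantum order-finding subroutine), try to split N."""
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None  # unlucky base a: retry with another one
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

# N = 15, a = 7: 7^4 ≡ 1 (mod 15), so the order is r = 4.
print(factor_from_order(15, 7, 4))  # → (3, 5)
```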

Quantifying magic via quantum $(α,β)$ Jensen-Shannon divergence

Linmao Wang, Zhaoqi Wu

2604.06604 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new mathematical tools to measure 'magic' in quantum states, which refers to how much a quantum state differs from easily simulatable stabilizer states. The authors propose quantum Jensen-Shannon divergence-based measures that can efficiently quantify this magic property, which is crucial for fault-tolerant quantum computing.

Key Contributions

  • Introduction of two new magic quantifiers based on quantum (α,β) Jensen-Shannon divergence
  • Demonstration that these quantifiers are efficiently computable in low-dimensional systems and have desirable mathematical properties
  • Analysis of how initial nonstabilizerness can enhance magic generation for specific quantum gates
magic states fault-tolerant quantum computing stabilizer states quantum resource theory Jensen-Shannon divergence
View Full Abstract

Magic states play an important role in fault-tolerant quantum computation, and so the quantification of magic for quantum states is of great significance. In this work, we propose two new magic quantifiers by introducing two versions of quantum $(α,β)$ Jensen-Shannon divergence based on the quantum $(α,β)$ entropy and the quantum $(α,β)$-relative entropy, respectively. We derive many desirable properties for our magic quantifiers, and find that they are efficiently computable in low-dimensional Hilbert spaces. We also show that the initial nonstabilizerness in the input state can boost the magic generating power for our magic quantifiers with appropriate parameter ranges for a certain class of quantum gates. Our magic quantifiers may provide new tools for addressing some specific problems in magic resource theory.
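The paper's (α,β)-parameterized divergences are not reproduced here, but the underlying idea — distance from the stabilizer set under a Jensen-Shannon-type divergence — can be sketched with the ordinary quantum JS divergence for a single qubit. The `js_magic` quantifier below is an illustrative stand-in, not the paper's measure:

```python
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def qjs_divergence(rho, sigma):
    """Quantum Jensen-Shannon divergence (symmetric, bounded)."""
    return vn_entropy((rho + sigma) / 2) - (vn_entropy(rho) + vn_entropy(sigma)) / 2

def proj(vec):
    v = np.asarray(vec, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# The six single-qubit pure stabilizer states.
STABS = [proj(v) for v in ([1, 0], [0, 1], [1, 1], [1, -1], [1, 1j], [1, -1j])]

def js_magic(rho):
    """Illustrative quantifier: min QJS divergence to a pure stabilizer state."""
    return min(qjs_divergence(rho, s) for s in STABS)

T = proj([1, np.exp(1j * np.pi / 4)])  # magic T state
print(js_magic(STABS[0]))  # 0 for a stabilizer state
print(js_magic(T))         # strictly positive for the T state
```

The minimization over six states is what makes such quantifiers cheap in low dimensions; the stabilizer set grows super-exponentially with qubit number, which is why efficient computability is a selling point only for small systems.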

Database Reordering for Compact Grover Oracles with ESOP Minimization

Yusuke Kimura, Yutaka Takita

2604.06578 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes optimizing Grover's quantum search algorithm by reordering database entries and using ESOP minimization to reduce the gate count and circuit depth of the quantum oracle circuit. The researchers demonstrate that strategic database reordering combined with simulated annealing can reduce circuit size by approximately 30% compared to unoptimized approaches.

Key Contributions

  • Demonstrated that database reordering can reduce Grover oracle circuit size by up to a factor of two
  • Developed a proxy metric for estimating circuit size without full compilation and combined it with simulated annealing for efficient optimization
  • Showed 30% circuit size reduction compared to ESOP minimization without reordering through experimental validation
Grover's algorithm quantum oracle circuit optimization ESOP minimization QROM
View Full Abstract

Grover's algorithm searches for data satisfying a desired condition in an unstructured database. This algorithm can search a space of size $N$ in $\sqrt{N}$ queries, thereby achieving a quadratic speedup. However, within the Grover oracle circuit that is repeatedly applied, the quantum state preparation circuit -- which embeds database information into quantum states -- suffers from a large gate count and circuit depth. To address this problem, we propose reducing the quantum state preparation circuit by reordering the database. Specifically, we consider a Quantum Read-Only Memory (QROM), where data are assigned to addresses, and assume that the address assignment of data can be freely permuted. By applying Exclusive Sum-of-Products (ESOP) minimization to the resulting truth table, we reduce the quantum circuit. Although the resulting circuit logic differs from the original, the state preparation remains correct in the sense that every desired datum is encoded at some address. Furthermore, we propose a proxy metric that estimates circuit size without compilation, and combine it with simulated annealing to efficiently find a near-optimal data ordering. In our experiments, an exhaustive search over all orderings for databases of size $N=8$ reveals that circuit size varies by up to approximately a factor of two depending on the ordering, demonstrating the utility of reordering. Compared with applying ESOP minimization without reordering, simulated annealing reduces the circuit size by approximately 30\% and yields circuits close to optimal. For $N=64$ and $128$, simulated annealing is shown to discover smaller circuits compared with random search.
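The annealing loop over address permutations can be sketched as follows. The proxy cost here — total Hamming distance between data words at consecutive addresses — is a hypothetical stand-in for the paper's proxy metric, chosen only because smoother truth tables tend to compress better; the ESOP minimization itself is not reproduced:

```python
import math
import random

def proxy_cost(order, data):
    """Hypothetical proxy for post-ESOP circuit size: total Hamming
    distance between data words at consecutive addresses."""
    seq = [data[i] for i in order]
    return sum(bin(a ^ b).count("1") for a, b in zip(seq, seq[1:]))

def anneal(data, steps=20000, t0=2.0, seed=0):
    """Simulated annealing over address permutations of the database."""
    rng = random.Random(seed)
    order = list(range(len(data)))
    cost = best = proxy_cost(order, data)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        i, j = rng.sample(range(len(data)), 2)
        order[i], order[j] = order[j], order[i]      # propose a swap
        new = proxy_cost(order, data)
        if new <= cost or rng.random() < math.exp(-(new - cost) / temp):
            cost = new                               # accept (Metropolis rule)
            best = min(best, cost)
        else:
            order[i], order[j] = order[j], order[i]  # revert
    return best, order

rng = random.Random(1)
data = [rng.randrange(256) for _ in range(64)]       # toy N=64 database of bytes
initial = proxy_cost(list(range(len(data))), data)
best, _ = anneal(data)
print(initial, best)  # annealing never ends worse than the initial ordering
```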

Discrete-variable assisted error correction of continuous-variable quantum information

Negin Razian, En-Jui Chang, Hoi-Kwan Lau

2604.06565 • Apr 8, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper presents a new quantum error correction method for continuous-variable quantum systems that uses discrete-variable ancilla qubits instead of the difficult-to-prepare GKP states. The approach suppresses continuous-variable infidelity by more than 20% with even a single-qubit ancilla and offers a more practical path to implementing error correction in hybrid quantum systems.

Key Contributions

  • Novel CV quantum error correction scheme using DV ancilla instead of GKP states
  • Demonstration of >20% infidelity suppression with single-qubit ancilla
  • New oscillator-in-oscillator code architecture without GKP states
  • Practical implementation pathway for CV QEC on realistic platforms
quantum error correction continuous-variable quantum computing discrete-variable ancilla bosonic quantum codes hybrid quantum systems
View Full Abstract

Robust continuous-variable (CV) quantum information processing requires correcting realistic errors in bosonic systems, but all existing schemes rely on auxiliary Gottesman-Kitaev-Preskill (GKP) states, whose preparation and operation are demanding on many platforms. In this work, we propose a novel CV quantum error correction (QEC) scheme that utilizes a broadly accessible resource: discrete-variable (DV) ancilla. Our scheme extracts information about the CV displacement into the DV ancilla; measuring the ancilla then allows counteracting the unwanted displacement error. We show that a simple single-qubit ancilla can already suppress CV infidelity by more than 20%. By concatenating with DV QEC codes, our scheme is robust against the physical errors in hybrid CV-DV systems, and yields a new class of oscillator-in-oscillator code that does not involve GKP states. Our work facilitates the implementation of CV QEC on realistic platforms.

Error Correction in Lattice Quantum Electrodynamics with Quantum Reference Frames

Elias Rothlin, Carla Ferradini, Lin-Qing Chen

2604.06149 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper explores how gauge symmetries in lattice quantum electrodynamics can be understood as quantum error-correcting codes, showing that gauge redundancy serves as a resource for protecting quantum information. The authors construct explicit error recovery operations using quantum reference frames and demonstrate two QECC structures within lattice QED.

Key Contributions

  • Established lattice QED as a quantum error-correcting code beyond stabilizer codes
  • Constructed explicit recovery operations using quantum reference frames for both gauge and fermionic sectors
  • Demonstrated how gauge symmetry provides encoding structure that supports quantum error correction
quantum error correction gauge theory lattice QED quantum reference frames stabilizer codes
View Full Abstract

Is gauge symmetry merely a redundancy in our description, or does it carry a deeper information-theoretic significance? Quantum error-correcting codes (QECCs) show that redundancy can serve as a resource for protecting information against noise. In this work, we ask whether gauge theories can be understood in similar terms, and make this idea concrete in lattice quantum electrodynamics (QED), building on and extending earlier works that established a bridge between gauge systems, stabilizer codes, and quantum reference frames (QRFs). For Abelian gauge groups, we show that explicit recovery operations can be constructed using group-theoretical methods for error sets determined by both ideal and non-ideal QRFs. Applied to lattice QED, this yields two QECC structures: one in the pure-gauge sector and one including fermions. We construct a gauge-field QRF based on spanning trees of the lattice and a fermionic field QRF from the matter field, thereby making explicit how physical information is encoded. While the syndromes of gauge-violating errors associated with constraint measurements are generically degenerate, QRFs resolve this degeneracy and single out families of correctable errors. This establishes lattice QED as a QECC beyond the stabilizer setting and shows concretely how gauge symmetry provides an encoding structure that supports error correction.

Gauss law codes and vacuum codes from lattice gauge theories

Javier P. Lacambra, Aidan Chatwin-Davies, Masazumi Honda, Philipp A. Hoehn

2604.06087 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a framework for creating quantum error correcting codes from lattice gauge theories, showing how gauge symmetries can be used to protect quantum information. The work demonstrates connections between quantum error correction and gauge theory physics, with potential applications for simulating gauge theories on noisy quantum computers.

Key Contributions

  • Comprehensive framework for constructing QECCs from Abelian lattice gauge theories using quantum reference frames
  • Development of two classes of codes: Gauss law codes and vacuum codes with detailed characterization of their algebraic structures
  • Demonstration of unitary equivalence between vacuum codes and pure gauge theory codes under specific conditions
quantum error correction lattice gauge theory quantum reference frames subsystem codes gauge symmetry
View Full Abstract

We develop a comprehensive framework for constructing quantum error correcting codes (QECCs) from Abelian lattice gauge theories (LGTs) using quantum reference frames (QRFs) as a unifying formalism. We consider LGTs with arbitrary compact Abelian gauge groups supported on lattices in arbitrary numbers of spatial dimensions, and we work with both pure gauge theories and theories with couplings to bosonic and fermionic matter. The codes that we construct fall into two classes: First, Gauss law codes identify the code subspace with the full gauge-invariant sector of the theory. In models with matter coupled to gauge fields, these codes inherit a natural subsystem structure in which gauge-invariant Wilson loops and dressed matter excitations factorize the code space. Second, vacuum codes restrict the code subspace to the matter vacuum sector within the gauge-invariant subspace, yielding codes where errors correspond to gauge-invariant charge excitations rather than to violations of the Gauss law. Despite their distinct setup, we show that when the gauge group is finite, vacuum codes are unitarily equivalent to pure gauge theory Gauss law codes, and that when the group is continuous, this is only true upon a charge coarse-graining of the vacuum code. In all cases, QRFs provide a systematic apparatus for fully characterizing the codes' algebraic structures and correctable error sets. For clarity, we illustrate our general results in $\mathbb{Z}_2$-gauge theory, as well as in scalar and fermionic QED. These findings offer fundamental insights into the parallelism between quantum error correction and gauge theory and point toward practical advantages for simulating LGTs on noisy quantum devices.

Adaptive Deformation of Color Code in Square Lattices with Defects

Tian-Hao Wei, Jia-Xuan Zhang, Jia-Ning Li, Wei-Cheng Kong, Yu-Chun Wu, Guo-Ping Guo

2604.05874 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops methods to adapt color code quantum error correction to work on hardware with defective qubits, proposing a universal scheme that handles both data and ancilla qubit defects while maintaining low error rates and supporting fault-tolerant operations.

Key Contributions

  • Universal superstabilizer scheme for handling data qubit defects in arbitrary stabilizer codes
  • Concrete repair methods for isolated defects in color codes on square lattices
  • Two optimization schemes for ancilla qubit defects that avoid resource waste
  • Comprehensive defect adaptive architecture supporting transversal Clifford gates and lattice surgery
quantum error correction color codes fault tolerant quantum computing topological codes stabilizer codes
View Full Abstract

Quantum error correction is a crucial technology for fault tolerant quantum computing. On superconducting platforms, hardware defects in large scale quantum processors can disrupt the regular lattice structure of topological codes and impair their error correction capabilities. Although defect adaptive methods for surface codes have been extensively studied, other topological codes such as color codes still lack a systematic framework for handling defects. To address this issue, we propose a universal superstabilizer scheme applicable to data qubit defects in arbitrary stabilizer codes. Based on this scheme, we develop concrete repair methods for isolated defects of both internal data qubits and ancilla qubits in color codes defined on square lattices. Furthermore, for ancilla qubit defects, we present two optimization schemes. One scheme reuses neighboring ancilla qubits, and the other employs iSWAP gates. Unlike conventional approaches that directly disable neighboring data qubits and thus cause resource waste, both of our schemes avoid such waste and consequently achieve a lower logical error rate. Integrating the above techniques, we construct a comprehensive defect adaptive architecture for color codes to handle various defect clusters. We also show that our scheme supports a full transversal Clifford gate set and lattice surgery operations. These results provide a systematic theoretical pathway for deploying robust and low overhead color codes on defective quantum hardware.

Dynamical decoupling and quantum error correction with SU(d) symmetries

Colin Read, Eduardo Serrano-Ensástiga, John Martin

2604.05871 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: high Network: medium

This paper develops a general framework for dynamical decoupling in qudit (multi-level quantum) systems using Lie group theory, extending beyond the typical qubit case. The authors show how to systematically identify decoupling sequences for higher-dimensional quantum systems and demonstrate that the same mathematical framework unifies dynamical decoupling with quantum error correction.

Key Contributions

  • General framework for dynamical decoupling in qudit systems based on SU(d) symmetries and Lie group theory
  • Unification of dynamical decoupling and quantum error correction through symmetry-based approach
  • Construction of new pulse sequences for qutrit systems and spin-1 systems with practical experimental considerations
dynamical decoupling quantum error correction qudit systems SU(d) symmetries Lie group theory
View Full Abstract

Dynamical decoupling is a long-established and effective way to suppress unwanted interactions in qubit systems, enabling advances in fields ranging from quantum metrology to quantum computing. For general qudit systems, however, comparable protocols remain rare, mainly because Hamiltonian engineering in higher dimensions lacks the geometric intuition available for qubits. Here we present a general framework for dynamical decoupling in qudit systems, based on Lie group representation theory. By extending the group theory approach to dynamical decoupling, we show how decoupling groups can be systematically identified among the finite subgroups of SU(d) by analyzing their access to the irreducible components of the operator space. As an application, we construct new pulse sequences for interacting qutrit systems based on finite subgroups of SU(3), and show how subgroup factorizations and group orientations can be exploited to obtain shorter and more experimentally practical protocols for spin-1 systems with large zero-field splitting. We further show that the same symmetry-based framework yields quantum error-correcting codes: whenever a finite subgroup of SU(d) acts as a decoupling group for the relevant error algebra, the associated one-dimensional symmetry sectors define codespaces satisfying the Knill-Laflamme conditions, thereby unifying dynamical decoupling and quantum error correction in multi-level quantum systems.
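The qubit special case of the group-theoretic decoupling condition is easy to verify directly: averaging a system Hamiltonian over the single-qubit Pauli group projects out everything except its identity (trace) component, so any traceless error Hamiltonian is fully decoupled. A minimal numerical check:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_twirl(H):
    """Average H over the single-qubit Pauli group -- the d = 2 analogue
    of averaging over a decoupling subgroup of SU(d)."""
    return sum(P @ H @ P.conj().T for P in (I2, X, Y, Z)) / 4

# Any traceless error Hamiltonian averages to zero:
H_err = 0.3 * X + 0.7 * Y - 1.1 * Z
print(np.allclose(pauli_twirl(H_err), 0))  # True

# Only the identity component survives the average:
H = H_err + 0.5 * I2
print(np.allclose(pauli_twirl(H), 0.5 * I2))  # True
```

The paper's contribution is precisely the analogue of this calculation for d > 2, where the error algebra splits into many irreducible components and suitable finite subgroups of SU(d) must be identified to kill the unwanted ones.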

Fault-Tolerant One-Shot Entanglement Generation with Constant-Sized Quantum Devices in the Plane

Dylan Harley, Robert Koenig

2604.05870 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper presents a fault-tolerant protocol that can generate high-fidelity entangled Bell pairs between distant qubits on a 2D grid in constant time, even in the presence of noise. The protocol works with constant-sized quantum devices and requires only a grid that scales linearly with distance in one dimension and logarithmically in the other.

Key Contributions

  • First one-shot fault-tolerant entanglement generation protocol for 2D grids with constant-sized devices
  • Demonstration of long-range localizable entanglement in short-range entangled 2D states robust to local Pauli noise
  • Construction of 2D-local stabilizer Hamiltonian with long-range entanglement at finite temperature
fault-tolerant quantum computing entanglement generation quantum repeaters 2D quantum systems Bell pairs
View Full Abstract

Consider a rectangular grid of qubits in 2D with single-qubit and nearest-neighbor two-qubit operations subject to local stochastic Pauli noise. At different length scales, this setup describes both a single quantum computing device with geometrically limited connectivity between qubits arranged on a disc, and planar networks composed of quantum repeater stations of constant size. We give a protocol which robustly generates entanglement between distant qubits in this setup. For noise below a constant threshold error strength, it generates a constant-fidelity Bell pair between qubits separated by an arbitrarily large distance $R$. To generate distance-$R$ entanglement, a rectangular grid of qubits of dimensions $Θ(R)\times Θ(\mathsf{poly}(\log R))$ suffices. Our protocol applies quantum operations in one shot, establishing a Bell state in a constant time up to a known Pauli correction. In contrast, existing entanglement generation protocols either require local quantum devices controlling a number of qubits growing with the targeted distance, or are not single-shot, i.e., have a distance-dependent execution time. The protocol leverages many-body entanglement in networks and provides the first example of a short-range entangled state in 2D with long-range localizable entanglement robust to local stochastic Pauli noise. As an immediate corollary, we construct a 2D-local stabilizer Hamiltonian whose Gibbs states possess long-range localizable entanglement at constant positive temperature.

A plug-and-play superconducting quantum controller at millikelvin temperatures enables exceeding 99.9% average gate fidelity

Kuang Liu, Zhiyuan Wang, Xiaoliang He, Siqi Li, Hao Wu, Xiangyu Ren, Zhengqi Niu, Wangpeng Gao, Chenluo Zhang, Pei Huang, Yu Wu, Liliang Ying, Wei Pen...

2604.05693 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper presents a superconducting quantum controller that operates at millikelvin temperatures and can directly connect to quantum bits, achieving over 99.9% gate fidelity with very low power consumption. The controller addresses a major bottleneck in scaling up superconducting quantum computers by enabling high-precision control operations at the same ultra-cold temperatures where the qubits operate.

Key Contributions

  • Development of a plug-and-play superconducting quantum controller operating at 10 mK with direct chip-to-chip qubit interconnection
  • Achievement of 99.9% average Clifford gate fidelity with ultralow power consumption of 0.121 fJ per gate operation
  • Demonstration of a solution to the control bottleneck in large-scale superconducting quantum computing
superconducting quantum computing quantum control gate fidelity Josephson junctions randomized benchmarking
View Full Abstract

The development of large-scale superconducting quantum computing requires efficient in-situ control methods that allow high-fidelity operations at millikelvin temperatures. Superconducting circuits based on Josephson junctions offer a promising solution due to their high speed, low power dissipation, and cryogenic nature. Here, we report a superconducting quantum controller that enables direct chip-to-chip interconnection with qubits at 10 mK and high-fidelity, all-digital manipulation. Randomized benchmarking reveals a uniformly high average Clifford fidelity of 99.9% with leakage to high energy levels on the order of $10^{-4}$, and an estimated average gate operation energy of 0.121 fJ, demonstrating the potential to resolve the control bottleneck in superconducting quantum computing.

PQC-Enhanced QKD Networks: A Layered Approach

Paul Spooren, Andreas Neuhold, Sebastian Ramacher, Thomas Hühn

2604.05599 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: none Sensing: none Network: high

This paper presents a hybrid network security architecture that combines Quantum Key Distribution (QKD) with Post-Quantum Cryptography (PQC) to create secure communication networks. The approach uses a layered design where QKD provides hop-by-hop security between trusted nodes, while PQC enables end-to-end encryption across the entire network.

Key Contributions

  • Layered network architecture combining QKD and PQC for scalable quantum-safe security
  • Practical implementation using open-source components with validation in simulated and lab environments
  • Compositional security analysis preserving individual component security properties
quantum key distribution post-quantum cryptography quantum networks network security cryptographic protocols
View Full Abstract

We present a layered and modular network architecture that combines Quantum Key Distribution (QKD) and Post-Quantum Cryptography (PQC) to provide scalable end-to-end security across long distance multi-hop, trusted-node quantum networks. To ensure interoperability and efficient practical deployment, hop-wise tunnels between physically secured nodes are protected by WireGuard with periodically rotated pre-shared keys sourced via the ETSI GS QKD 014 interface. On top, Rosenpass performs a PQC key exchange to establish an end-to-end data channel without modifying deployed QKD devices or network protocols. This dual-layer composition yields post-quantum forward secrecy and authenticity under practical assumptions. We implement the design using open-source components and validate and evaluate it in simulated and lab test-beds. Experiments show uninterrupted operation over multi-hop paths, low resource footprint and fail-safe mechanisms. We further discuss the design's compositional security, wherein the security of each individual component is preserved under their combination and outline migration paths for operators integrating QKD-aware overlays in existing infrastructures.
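The dual-layer composition — hop-by-hop QKD keys under an end-to-end PQC exchange — follows a standard hybridization pattern: derive session keys from both sources so that secrecy survives as long as either layer does. The HKDF-style sketch below illustrates that pattern only; it is not the Rosenpass or WireGuard internals, and the function name and context label are invented for illustration:

```python
import hashlib
import hmac

def hybrid_key(qkd_key: bytes, pqc_shared_secret: bytes, context: bytes) -> bytes:
    """Illustrative extract-then-expand KDF combining a hop-level QKD key
    with an end-to-end PQC shared secret: the output stays secret as long
    as EITHER input does (sketch of the dual-layer idea, not the paper's
    exact construction)."""
    prk = hmac.new(qkd_key, pqc_shared_secret, hashlib.sha256).digest()  # extract
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()     # expand

k = hybrid_key(b"\x00" * 32, b"\x11" * 32, b"tunnel-1")
print(len(k))  # 32-byte session key
```

Binding a per-tunnel context into the expand step keeps keys for different hops and sessions independent even when the same QKD/PQC material is reused.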

Phase-Fidelity-Aware Truncated Quantum Fourier Transform for Scalable Phase Estimation on NISQ Hardware

Akoramurthy B, Surendiran. B

2604.05456 • Apr 7, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: none

This paper introduces an optimized quantum Fourier transform algorithm called PFA-TQFT that reduces the number of gates needed for quantum phase estimation from O(m²) to O(m log m) by intelligently truncating low-fidelity operations. The method maintains estimation accuracy while making quantum phase estimation more practical on current noisy quantum computers.

Key Contributions

  • Development of Phase-Fidelity-Aware Truncated QFT algorithm that reduces gate complexity from O(m²) to O(m log m)
  • Theoretical bound showing estimation error grows by at most O(2^-d) while achieving significant gate count reduction
  • Hardware-calibrated truncation strategy that adapts to native gate fidelities of specific quantum devices
  • Demonstration of noise-truncation synergy where the truncated algorithm outperforms full QFT under realistic NISQ noise conditions
quantum phase estimation quantum Fourier transform NISQ gate optimization quantum algorithms
View Full Abstract

Quantum phase estimation (QPE) is central to numerous quantum algorithms, yet its standard implementation demands an $\mathcal{O}(m^{2})$-gate quantum Fourier transform (QFT) on $m$ control qubits, a prohibitive overhead on near-term noisy intermediate-scale quantum (NISQ) devices. We introduce the Phase-Fidelity-Aware Truncated QFT (PFA-TQFT), a family of approximate QFT circuits parameterised by a truncation depth $d$ that omits controlled-phase rotations below a hardware-calibrated fidelity threshold $\epsilon$. Our central result establishes $\mathrm{TV}(P_{\varphi},P_{\varphi}^{d}) \leq \pi(m-d)/2^{d}$, showing that for $d=\mathcal{O}(\log m)$ the circuit size collapses from $\mathcal{O}(m^{2})$ to $\mathcal{O}(m\log m)$ while the estimation error grows by at most $\mathcal{O}(2^{-d})$. We characterise $d^{\star}=\lfloor\log_{2}(2\pi/\epsilon_{2q})\rfloor$ directly from native gate fidelities, demonstrating a 31.3-43.7% gate-count reduction at $m = 30$ on IBM Eagle/Heron and IonQ Aria with negligible accuracy loss. Numerical experiments on the transverse-field Ising model confirm all theoretical predictions and reveal a "noise-truncation synergy": PFA-TQFT outperforms full QFT under NISQ noise $\epsilon_{2q}\gtrsim 2\times10^{-3}$.
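The gate-count arithmetic behind the $\mathcal{O}(m^2) \to \mathcal{O}(m\log m)$ claim can be checked on the textbook QFT layout. A minimal sketch, assuming the standard circuit in which qubit $j$ carries controlled rotations $R_k$ ($k = 2, \ldots, m-j$) with angle $\pi/2^{k-1}$, and truncation keeps only $k \le d$; the paper's hardware-calibrated thresholding is more refined than this:

```python
from math import floor, log2, pi

def qft_cphase_count(m: int) -> int:
    """Controlled-phase gates in the textbook QFT on m qubits: m(m-1)/2."""
    return m * (m - 1) // 2

def truncated_qft_cphase_count(m: int, d: int) -> int:
    """Keep only rotations R_k with k <= d; qubit j normally carries k = 2..m-j."""
    return sum(min(m - j - 1, d - 1) for j in range(m))

def truncation_depth(eps_2q: float) -> int:
    """d* = floor(log2(2*pi / eps_2q)) from the native two-qubit gate error."""
    return floor(log2(2 * pi / eps_2q))
```

For $m = 30$ and $\epsilon_{2q} = 2\times10^{-3}$ this gives $d^{\star} = 11$ and a drop from 435 to 245 controlled-phase gates, about 43.7%, consistent with the upper end of the range reported in the abstract.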

Phase-Stable Hologram Updates for Large-Scale Neutral-Atom Array Reconfiguration

Erdong Huang, Jiayi Huang, Hongshun Yao, Xin Wang, Jin-Guo Liu

2604.04600 • Apr 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper introduces a new algorithm called weighted-projective Gerchberg-Saxton (WPGS) that improves how large arrays of neutral atoms are assembled and reconfigured for quantum computing by maintaining phase stability when updating holographic optical tweezers, preventing atom loss during transitions.

Key Contributions

  • Development of the WPGS algorithm that enforces inter-frame trap-phase continuity to prevent transient trap loss during hologram updates
  • Demonstration of scalable neutral-atom array reconfiguration with over 1000 traps including 2D/3D configurations and multilayer assembly
neutral atoms Rydberg atoms optical tweezers holographic control quantum array assembly
View Full Abstract

Assembling large-scale, defect-free Rydberg atom arrays is a key technology for neutral-atom quantum computation. Dynamic holographic optical tweezers enable the assembly and reconfiguration of such arrays, but phase mismatches between successive holograms can induce destructive interference and transient trap loss during spatial-light-modulator refresh. In this work, we introduce the weighted-projective Gerchberg-Saxton (WPGS) algorithm, a phase-stable approach to dynamic hologram updates for large-scale Rydberg atom-array reconfiguration. By enforcing inter-frame trap-phase continuity while retaining weighted intensity equalization, WPGS suppresses refresh-induced transient degradation. The phase-difference distribution between consecutive holograms further provides a simple diagnostic of transient robustness. Moreover, enforcing the phase constraint reduces the number of iterations required at each update step, thereby accelerating hologram generation. Numerical simulations of 2D and 3D reconfiguration with more than $10^3$ traps, including multilayer assembly and interlayer transport, show robust transient intensities and significantly faster updates than conventional methods. These results establish inter-frame phase continuity as a practical design principle for dynamic holographic control and scalable neutral-atom array reconfiguration.

Digital-Analog Quantum Simulation and Computing: A Perspective on Past and Future Developments

Lucas Lamata

2604.04438 • Apr 6, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This perspective paper reviews the emerging digital-analog quantum computing paradigm, which combines large analog quantum operations (from native platform interactions) with digital quantum gates to achieve both scalability and universality. The author provides an overview of the field's evolution over the past decade and discusses future possibilities for this hybrid approach.

Key Contributions

  • Comprehensive review of digital-analog quantum computing paradigm evolution
  • Analysis of how hybrid approaches can overcome limitations of purely digital or analog quantum computing
  • Perspective on future developments combining scalability with universality
digital-analog quantum computing quantum simulation hybrid quantum algorithms quantum gates scalability
View Full Abstract

Quantum simulation and computing have traditionally been based on two main paradigms, namely, digital and analog. In the digital paradigm, single- and two-qubit gates (where "qubit" is short for "quantum bit") are usually employed as building blocks for scalable, universal quantum computing, although errors add up fast and error correction will ultimately be needed for scaling up. In the analog paradigm, large analog blocks are normally employed for a unitary dynamics that carries out the computation, enabling quantum operations on many qubits with reduced errors, but with the drawback of a limited choice of evolutions and a lack of universality. In the past decade, a new paradigm has emerged, showing interesting possibilities for quantum simulation and computing in the near and mid term. This is the paradigm of digital-analog quantum technologies, which proposes to combine the best of both paradigms: large analog blocks, provided by native interactions of the employed quantum platform, enabling scalability, combined with digital gates, allowing for more versatility and, ultimately, universality. In this Perspective, I give an overview of the evolution of the field over the past decade, and an outlook on its future possibilities.

Noise tolerance via reinforcement in the quantum search problem

Marjan Homayouni-Sangari, Abolfazl Ramezanpour

2604.04137 • Apr 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper demonstrates that reinforcement techniques can exponentially improve quantum search algorithms, reducing computation time from √D to ln D steps and significantly increasing noise tolerance. The researchers use numerical simulations to show that reinforced quantum search maintains higher success probability in noisy environments compared to standard quantum search algorithms.

Key Contributions

  • Exponential speedup of quantum search from √D to ln D complexity through reinforcement
  • Demonstrated exponentially larger noise threshold for reinforced quantum search algorithms
  • Numerical characterization of noise tolerance for both coherent and incoherent noise in multi-qubit and qudit systems
quantum search Grover's algorithm reinforcement noise tolerance error mitigation
View Full Abstract

We find that reinforcement exponentially reduces the computation time of the quantum search problem from $\sqrt{D}$ to $\ln D$ in a $D$-dimensional system. Therefore, a reinforced quantum search is expected to exhibit an exponentially larger noise threshold compared to a standard search algorithm in a noisy environment. We use numerical simulations to characterize the level of noise tolerance via reinforcement in the presence of both coherent and incoherent noise, considering a system of $N$ qubits and a single $D$-level (qudit) system. Our results show that reinforcement significantly enhances the algorithm's success probability and improves the scaling of its computation time with system size. These findings indicate that reinforcement offers a promising strategy for error mitigation, especially when a precise noise model is unavailable.
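For reference, the $\sqrt{D}$ baseline that reinforcement improves on is easy to reproduce with a textbook Grover statevector simulation (the paper's reinforcement scheme itself is not implemented here):

```python
import numpy as np

def grover_success_prob(D: int, marked: int, iterations: int) -> float:
    """Success probability of textbook Grover search over D items, one marked."""
    state = np.full(D, 1.0 / np.sqrt(D))    # uniform superposition
    for _ in range(iterations):
        state[marked] *= -1.0               # oracle: phase-flip the marked item
        state = 2.0 * state.mean() - state  # diffusion: inversion about the mean
    return float(state[marked] ** 2)

D = 64
k_opt = round(np.pi / 4 * np.sqrt(D))  # optimal iteration count ~ (pi/4) sqrt(D)
```

For $D = 64$ the success probability peaks near $(\pi/4)\sqrt{D} \approx 6$ iterations at above 99%, the $\mathcal{O}(\sqrt{D})$ behavior that reinforcement reduces to $\mathcal{O}(\ln D)$.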

Microstructural Topology as a Prescriptor for Quantum Coherence: Towards A Unified Framework for Decoherence in Superconducting Qubits

Vinayak P. Dravid, Akshay A. Murthy, Peter Lim, Gabriel T. dos Santos, Ramandeep Mandia, James M. Rondinelli, Mark C. Hersam, Roberto dos Reis

2604.03951 • Apr 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops a theoretical framework to separate different causes of decoherence in superconducting quantum bits (qubits) by distinguishing between material microstructure effects and device geometry effects. The authors propose a way to independently measure and control these factors to better engineer quantum devices with longer coherence times.

Key Contributions

  • Introduction of separable framework distinguishing classical and quantum microstructure effects from geometry-dependent coupling in superconducting qubits
  • Development of channel-specific prescriptor methodology for independent optimization of decoherence loss pathways
  • Establishment of perturbative separability criterion and falsifiable experimental protocol for validating the theoretical framework
superconducting qubits decoherence transmon quantum coherence microstructure
View Full Abstract

In superconducting quantum circuits, decoherence improvements are frequently obtained through process interventions that simultaneously modify surface chemistry, microstructural topology, and device geometry, leaving mechanistic attribution structurally underdetermined. Predictive materials engineering requires measurable structural statistics to be separated from geometry-dependent coupling coefficients into independently testable factors. We introduce the concept of classical and quantum microstructure. In that context, we formulate a channel-wise separable framework for decoherence in superconducting transmon qubits in which each loss channel is described by a reduced prescriptor. Here, a channel-specific microstructural state variable is determined independently of device geometry, and a geometry-dependent coupling functional is computable from field solutions without reference to surface chemistry. We derive this product form from a spatially resolved kernel representation and establish a perturbative separability criterion that defines the regime where independent variation of the variables is valid. The framework specifies five prescriptor classes for dominant loss pathways in transmon-class devices. Falsifiability is operationalized through a pre-committed 2x2 experimental protocol in which the variables must satisfy independent ratio checks within propagated uncertainty. A Minimum-Dataset Specification standardizes reporting for cross-laboratory inference. Part I establishes the conceptual and mathematical architecture; coordinated experimental validation is reserved for Part II.

Novel permanent magnet array geometries for scalable trapped-ion quantum computing in a laser-free entanglement architecture

Mitchell G. Peaks

2604.03116 • Apr 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper presents a new permanent magnet array design for trapped-ion quantum computers that creates localized magnetic field gradients to enable laser-free qubit operations and individual qubit addressing. The design improves scalability by allowing easier ion transport and relaxing alignment constraints compared to existing magnetic field approaches.

Key Contributions

  • Novel permanent magnet geometry that creates localized asymmetric magnetic fields for improved ion transport in QCCD architectures
  • Laser-free entanglement scheme using magnetic field gradients that reduces engineering complexity and improves scalability
  • Relaxed alignment tolerances in two dimensions making experimental implementation more practical
trapped-ion quantum computing QCCD permanent magnet arrays magnetic field gradients laser-free entanglement
View Full Abstract

A novel design is presented for a permanent magnet array to address specific challenges with scalable trapped-ion quantum computing systems. Design and optimization of this magnet geometry is motivated by concepts for large-scale Quantum Charge-Coupled Device (QCCD) architectures. This proposal is relevant to magnetic field gradient schemes for laser-free entanglement using long-wavelength radiation, and individual addressing based on spatially dependent, magnetic field sensitive qubits. This configuration generates a localized, asymmetric magnetic field, yielding a region for ion transport into and out of a strong magnetic field gradient, while minimizing the absolute field experienced by the ion. This is a distinct improvement for scalability over dipolar magnet geometries where a strong magnetic field surrounds a magnetic field null in three dimensions, which is problematic for ion transport applications. The design also relaxes the alignment constraints for experimental setup by allowing greater tolerance to misalignment in two dimensions. Additionally, the potential to scale a permanent magnet scheme in QCCD systems circumvents engineering challenges associated with using large electrical currents to generate the field gradient. Finally, a conceptual discussion is given for incorporating the design into a scalable QCCD type architecture.

Universal Robust Quantum Gates via Doubly Geometric Control

Hai Xu, Tao Chen, Junkai Zeng, Xiu-Hao Deng, Fang Gao, Xin Wang, Zheng-Yuan Xue, Chengxian Zhang

2604.02962 • Apr 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops a new framework for creating robust quantum gates that can suppress multiple types of errors simultaneously using geometric quantum computation principles. The approach achieves fourth-order suppression of control errors and can be extended to sixth-order suppression, potentially enabling more fault-tolerant quantum computing.

Key Contributions

  • Established a general framework for doubly geometric quantum gates with systematic error characterization
  • Demonstrated simultaneous fourth-order suppression of control errors with extension to sixth-order suppression
geometric quantum computation fault-tolerant quantum computing error suppression robust quantum gates geometric phases
View Full Abstract

Geometric quantum computation offers a potential route to fault-tolerant quantum information processing by exploiting the global nature of geometric phases. However, achieving controlled high-order suppression of multiple error sources remains a long-standing limitation, particularly in realistic large-scale circuits with complex noise environments. This limitation is largely due to the absence of a general framework that directly characterizes error accumulation and enables systematic improvement. Here we establish such a framework for universal doubly geometric gates by embedding target operations into a hierarchy of level-n identity constructions. This approach enables direct quantification of error accumulation while removing structural constraints inherent in previous schemes. We analytically show that the defining conditions lead to simultaneous fourth-order suppression of control errors, with a systematic extension to sixth-order suppression via higher-level constructions. Our results establish doubly geometric control as a general and scalable route toward high-order robust quantum gates, with potential implications for fault-tolerant quantum information processing.

Space-Efficient Quantum Algorithm for Elliptic Curve Discrete Logarithms with Resource Estimation

Han Luo, Ziyi Yang, Ziruo Wang, Yuexin Su, Tongyang Li

2604.02311 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a more space-efficient quantum algorithm for breaking elliptic curve cryptography by optimizing Shor's algorithm to use fewer logical qubits. The researchers improved the modular inversion operation to reduce the quantum computer requirements from 2124 to 1333 logical qubits for 256-bit curves.

Key Contributions

  • Space-efficient reversible modular inversion algorithm using 3n + 4⌊log₂ n⌋ + O(1) logical qubits
  • Reduced logical qubit requirements for ECDLP from 2124 to 1333 qubits for 256-bit curves
  • Optimized controlled arithmetic components with concrete circuit constructions
Shor's algorithm elliptic curve cryptography quantum cryptanalysis modular inversion logical qubits
View Full Abstract

Solving the Elliptic Curve Discrete Logarithm Problem (ECDLP) is critical for evaluating the quantum security of widely deployed elliptic-curve cryptosystems. Consequently, minimizing the number of logical qubits required by such an algorithm is a key objective. In implementations of Shor's algorithm, the space complexity is largely dictated by the modular inversion operation during point addition. Starting from the extended Euclidean algorithm (EEA), we refine the register-sharing method of Proos and Zalka and propose a space-efficient reversible modular inversion algorithm. We use length registers together with location-controlled arithmetic to store the intermediate variables in a compact form throughout the computation. We then optimize the stepwise update rules and give concrete circuit constructions for the resulting controlled arithmetic components. This leads to a modular inversion circuit that uses $3n + 4\lfloor \log_2 n \rfloor + O(1)$ logical qubits and $204n^2\log_2 n + O(n^2)$ Toffoli gates. By inserting this modular inversion component into the controlled affine point-addition circuit, we obtain a space-efficient algorithm for the ECDLP with $5n + 4\lfloor \log_2 n \rfloor + O(1)$ qubits and $O(n^3)$ Toffoli gates. In particular, for a 256-bit prime-field curve, our estimate reduces the logical-qubit count to 1333, compared with 2124 in the previous low-width implementation of Häner et al.
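The headline qubit counts follow directly from the stated width formulas. A small sketch; the additive constant $c = 21$ is inferred here from the reported 1333 qubits at $n = 256$ and is not given explicitly in the abstract:

```python
from math import floor, log2

def inversion_width(n: int, c: int = 0) -> int:
    """Modular-inversion width: 3n + 4*floor(log2 n) + O(1) logical qubits."""
    return 3 * n + 4 * floor(log2(n)) + c

def ecdlp_width(n: int, c: int = 21) -> int:
    """Full ECDLP width: 5n + 4*floor(log2 n) + O(1); c = 21 is inferred from
    the reported 1333 qubits at n = 256, not stated in the abstract."""
    return 5 * n + 4 * floor(log2(n)) + c
```

At $n = 256$ this reproduces the 1333 logical qubits, a saving of 791 qubits (about 37%) over the 2124 of the previous low-width implementation.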

Lemniscate phase trajectories for high-fidelity GHZ state preparation in trapped-ion chains

Evgeny V. Anikin, Andrey Chuchalin, Dimitrii Donchenko, Olga Lakhmanskaya, Kirill Lakhmanskiy

2604.02301 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper develops improved laser pulse techniques for creating high-fidelity GHZ (maximally entangled) states in chains of trapped ions. The new 'lemniscate pulse' method reduces preparation errors from η⁴ to η⁶ scaling by using special amplitude and phase modulation that traces a figure-eight pattern.

Key Contributions

  • Development of lemniscate pulse technique that improves GHZ state fidelity scaling from η⁴ to η⁶
  • Demonstration of 10⁻⁴ infidelity achievable for 20-ion chains, significantly better than conventional bell-like pulses
trapped-ion GHZ-states multipartite-entanglement Lamb-Dicke-parameter quantum-gates
View Full Abstract

In trapped-ion chains, multipartite GHZ states can be prepared natively with the help of a single bichromatic laser pulse. However, higher-order terms in the expansion in the Lamb-Dicke parameter $\eta$ limit the GHZ state preparation infidelity for rectangular and bell-like pulses to the order of $\eta^4$. For tens of ions, the infidelity caused by out-of-Lamb-Dicke effects can reach several percent. We propose an amplitude- and phase-modulated pulse shape, an "echoed lemniscate pulse", which cancels this error contribution at leading order. For the proposed pulse, the infidelity scales as $\eta^6$. The improved scaling is achieved because of a special phase trajectory of a collective motional mode following the figure-eight curve (lemniscate). We demonstrate that the lemniscate pulse allows achieving lower infidelity than bell-like pulses, which can be as low as $10^{-4}$ for $20$-ion chains.
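The practical gain from the improved scaling can be illustrated with bare power laws; the prefactors are omitted because they are not stated in the abstract, so these numbers indicate scaling only:

```python
def infidelity_scaling(eta: float, order: int) -> float:
    """Bare out-of-Lamb-Dicke error scaling eta**order (prefactors omitted)."""
    return eta ** order

def eta_for_budget(budget: float, order: int) -> float:
    """Largest Lamb-Dicke parameter meeting an infidelity budget, unit prefactor."""
    return budget ** (1.0 / order)
```

Under a unit-prefactor reading, an infidelity budget of $10^{-4}$ allows $\eta = 0.1$ at fourth order but $\eta \approx 0.22$ at sixth order, i.e. the lemniscate pulse tolerates a substantially larger Lamb-Dicke parameter for the same error target.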

Quantum Time-Space Tradeoffs for Exponential Dynamic Programming

Susanna Caroppo, Jevgēnijs Vihrovs, Dārta Zajakina, Aleksejs Zajakins

2604.02233 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops quantum algorithms for dynamic programming problems that use less quantum memory (QRAM) than previous approaches by trading memory requirements for computation time. The work builds on earlier quantum dynamic programming algorithms but makes them more practically implementable by reducing their demanding memory requirements while still maintaining quantum speedups over classical methods.

Key Contributions

  • Novel quantum time-space tradeoffs for dynamic programming algorithms that reduce QRAM requirements
  • Combination of quantum algorithms with quantized classical strategies to achieve better space complexity while retaining speedups
quantum algorithms dynamic programming QRAM time-space tradeoffs NP-hard problems
View Full Abstract

We investigate the quantum algorithms for dynamic programming by Ambainis et al. (SODA'19). While giving provable complexity speedups and applicable to a variety of NP-hard problems, these algorithms have a notable drawback: they require a large amount of Quantum Random Access Memory (QRAM), which potentially could be very challenging to implement in a physical quantum computer. In this work, we study how we can improve the space complexity by trading it for time, while still retaining a speedup over the classical algorithms. We show novel quantum time-space tradeoffs, which we obtain by adjusting the parameters of these algorithms and combining them with "quantized" classical strategies.

High-threshold decoding of non-Pauli codes for 2D universality

Julio C. Magdalena de la Fuente, Noa Feldman, Jens Eisert, Andreas Bauer

2604.02033 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new decoding method for non-Pauli quantum error correction codes that can achieve universal quantum computation in 2D topological codes. The researchers demonstrate a high error threshold of ~2.5% using a just-in-time matching decoder, which is close to the performance of conventional Pauli codes.

Key Contributions

  • Development of just-in-time matching decoder for non-Pauli stabilizer codes achieving ~2.5% error threshold
  • Demonstration of universal gate set implementation on 2D topological codes with comparable performance to quantum memory
quantum error correction topological codes fault-tolerant quantum computing non-Pauli stabilizers universal gate set
View Full Abstract

Topological codes have many desirable properties that allow fault-tolerant quantum computation with relatively low overhead. A core challenge for these codes, however, is to achieve a low-overhead universal gate set with limited connectivity. In this work, we explore a non-Pauli stabilizer code that can be used to complete a universal gate set on topological toric and surface codes in strictly two dimensions. Fault-tolerant syndrome extraction for the non-Pauli code requires mid-circuit $X$ corrections, a key difference to conventional Pauli codes. We construct and benchmark a just-in-time (JIT) matching decoder to reliably decide these corrections. Under a phenomenological error model with equally likely physical and measurement errors, we find a high threshold of $\approx 2.5\,\%$, close to the $\approx 2.9\,\%$ of a decoder with access to the full syndrome history. We also perform a finite-size scaling analysis to estimate how the logical error rate scales below threshold and verify an exponential suppression in both physical error rate and in the system size. A second global decoding step for $Z$ errors is required and the non-Clifford gates in the circuit reduce the threshold from $\approx 2.9\,\%$ to $\approx 1.8\,\%$ with a naive decoder. We show how $Z$ decoding can be improved using knowledge of the $X$ corrections, pushing the threshold to $\approx 2.2\,\%$. Our results suggest non-Clifford logic in 2D codes could perform comparably to 2D quantum memory. Our formalism for efficient benchmarking and decoding directly generalizes to a broader family of CSS codes whose $X$ stabilizers are twisted by diagonal Clifford operators, and spacetime versions thereof, defined by CSS-like circuits enriched by $CCZ$, $CS$, and $T$ gates.
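The exponential sub-threshold suppression verified above is conventionally summarized by the scaling ansatz $p_L \approx A\,(p/p_{\mathrm{th}})^{\lfloor (d+1)/2 \rfloor}$. The sketch below uses this standard form with the paper's $\approx 2.5\%$ threshold; the prefactor $A$ and the exact exponent form are generic assumptions, not fitted values from the paper:

```python
def logical_error_rate(p: float, d: int, p_th: float = 0.025, A: float = 0.1) -> float:
    """Sub-threshold scaling ansatz: p_L ~ A * (p/p_th)**floor((d+1)/2).

    p: physical error rate; d: code distance; p_th from the JIT decoder
    threshold reported in the abstract; A is an assumed prefactor.
    """
    return A * (p / p_th) ** ((d + 1) // 2)
```

In this toy model, operating at half threshold ($p = p_{\mathrm{th}}/2$) and growing the distance from 5 to 11 suppresses the modeled logical rate by an extra factor of $2^3 = 8$, the exponential suppression in system size the authors verify numerically.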

Transversal non-Clifford gates on almost-good quantum LDPC and quantum locally testable codes

Yiming Li, Zimu Li, Zi-Wen Liu

2604.01874 • Apr 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates that certain quantum error-correcting codes with excellent parameters can implement fault-tolerant non-Clifford gates (specifically multi-controlled-Z gates) directly through transversal operations. The authors use topological methods to construct these gates on quantum LDPC and locally testable codes, achieving nearly optimal code performance while enabling universal quantum computation.

Key Contributions

  • First demonstration of transversal non-Clifford gates on quantum codes with nearly optimal parameters
  • Development of algebraic-topological framework for constructing 'cupcap gates' that enable fault-tolerant universal quantum computation
  • Proof that multi-controlled-Z gates arise naturally as topological phenomena in quantum LDPC codes
quantum error correction LDPC codes fault tolerance transversal gates non-Clifford gates
View Full Abstract

We exhibit nontrivial transversal logical multi-controlled-$Z$ gates on $[\![N,\Theta(N),\tilde{\Theta}(N)]\!]$ quantum low-density parity-check codes and $[\![N,\Theta(N),\tilde{\Theta}(N)]\!]$ quantum locally testable codes with soundness $\tilde{\Theta}(1)$, combining nearly optimal code parameters with fault-tolerant non-Clifford gates for the first time. Remarkably, our proofs are almost entirely algebraic-topological, showing that such presumably intricate logical gates naturally arise as a fundamental topological phenomenon. We develop a general framework for constructing a rich new family of homological invariant forms which we call "cupcap gates" that induce transversal logical multi-controlled-$Z$ and, building on insights from [Li et al., arXiv:2603.25831], covering space methods to certify their nontriviality. The claimed almost-good code results follow immediately as examples.

Twisted Fiber Bundle Codes over Group Algebras

Chaobin Liu

2604.01478 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper introduces a new method for constructing quantum error-correcting codes called twisted fiber bundle codes, which use mathematical structures over group algebras to potentially create codes with more logical qubits than existing methods while maintaining the same physical qubit count and error-correction capability.

Key Contributions

  • Introduction of twisted fiber bundle construction for quantum CSS codes over group algebras
  • Demonstration that singular chain-compatible twists can increase the number of logical qubits while maintaining blocklength and minimum distance
quantum error correction CSS codes logical qubits group algebras fiber bundle codes
View Full Abstract

We introduce a twisted fiber-bundle construction of quantum CSS codes over group algebras \(R=\mathbb F_2[G]\), where each base generator carries a generator-dependent \(R\)-linear fiber twist satisfying a flatness condition. This construction extends the untwisted lifted product code, recovered when all twists are identities. We show that invertible twists (satisfying a flatness condition) give a complex chain-isomorphic to the untwisted one, so the resulting binary CSS codes have the same blocklength \(n\) and encoded dimension \(k\). In contrast, singular chain-compatible twists can lower boundary ranks and increase the number of logical qubits. Examples over \(R=\mathbb F_2[D_3]\) show that the twisted fiber bundle code can outperform the corresponding untwisted lifted-product code in \(k\) while keeping the same \(n\) and, in our examples, the same minimum distance \(d\).

Simultaneous operation of an 18-qubit modular array in germanium

J. J. Dijkema, X. Zhang, A. Bardakas, D. Bouman, A. Cuzzocrea, D. van Driel, D. Girardi, L. E. A. Stehouwer, G. Scappucci, A. M. J. Zwerver, N. W. Hen...

2604.01063 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper demonstrates the successful operation of an 18-qubit quantum computing system built using germanium semiconductor technology, achieving high-fidelity quantum operations across all qubits simultaneously. The researchers developed a modular architecture that can be scaled up to larger systems while maintaining excellent performance, with single-qubit gate fidelities averaging 99.8%.

Key Contributions

  • Demonstration of simultaneous operation of 18 qubits in a germanium semiconductor platform
  • Achievement of high-fidelity single-qubit gates (99.8% average) across the entire array
  • Development of scalable 2xN modular architecture for semiconductor quantum processors
  • Implementation of controlled-Z gates and generation of three-qubit GHZ entangled states
semiconductor qubits spin qubits germanium quantum gates scalable architecture
View Full Abstract

Utility-scale quantum computing requires the integration and operation of a large-scale qubit register. Semiconductor spin qubits are a primary candidate for this, due to the prospects of building integrated hybrid quantum-classical architectures. However, scaling spin-qubit systems while preserving performance and control has remained a challenge. Here, we demonstrate the operation of an 18-qubit array in germanium based on an extendable 2xN architecture. We achieve simultaneous initialization, control, and readout across the entire array, enabled by parallel operation of modular unit cells. Across the array, we achieve average and median single-qubit gate fidelities of 99.8% and 99.9%, respectively. Finally, we characterize the nearest-neighbor exchange couplings throughout the device and implement high-quality controlled-Z gates to generate a three-qubit Greenberger-Horne-Zeilinger (GHZ) state. These results demonstrate that spin-qubit arrays can be scaled while maintaining high-fidelity operation and establish a modular, extendable architecture for planar semiconductor quantum processors.

Tsim: Fast Universal Simulator for Quantum Error Correction

Rafael Haenel, Xiuzhe Luo, Chen Zhao

2604.01059 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents Tsim, a high-performance quantum circuit simulator designed for quantum error correction research. The simulator uses ZX diagrams to represent quantum circuits and achieves fast sampling performance that scales linearly with Clifford gates and exponentially only with non-Clifford gates.

Key Contributions

  • Development of Tsim simulator using ZX diagram representation for quantum error correction circuits
  • Achievement of linear-time sampling in Clifford gates with GPU acceleration and vectorized compilation
  • Extension of Stim API compatibility to include T gates and arbitrary single-qubit rotations
quantum error correction quantum circuit simulation ZX diagrams Clifford gates GPU acceleration
View Full Abstract

We present Tsim, an open-source high-throughput simulator for universal noisy quantum circuits targeting quantum error correction. Tsim represents quantum circuits as ZX diagrams, where Pauli channels are modeled as parameterized vertices. Diagrams are simplified via parameterized ZX rules, and then compiled for vectorized sampling with GPU acceleration. After the one-time compilation, one can sample detector or measurement shots in linear time in the number of Clifford gates and exponentially only in the number of non-Clifford gates. Tsim implements the Stim API and fully supports the Stim circuit format, extending it with T and arbitrary single-qubit rotation instructions. For low-magic circuits, Tsim throughput can match the sampling performance of Stim.

Distilling Unitary Operations: A No-Go Theorem and Minimal Realization

Jiayi Zhao, Yu-Ao Chen, Guocheng Zhen, Chengkai Zhu, Ranyiliu Chen, Xin Wang

2604.01048 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper investigates how to purify noisy quantum gates, proving that at least three uses of a corrupted gate are needed to non-trivially clean up its errors, and providing the optimal strategy for doing so.

Key Contributions

  • Proved fundamental no-go theorem that 2-slot higher-order operations cannot universally purify single-qubit unitaries
  • Established 3-slot architecture as minimal realization for non-trivial universal purification with optimal fidelity analysis
  • Provided concrete quantum circuit construction for optimal higher-order purification operation
unitary purification quantum error mitigation higher-order operations depolarizing noise indefinite causal order
View Full Abstract

Quantum gates executed on physical hardware are inevitably degraded by environmental noise. While state purification effectively distills static quantum resources, the dynamic execution of quantum algorithms requires a higher-order approach to mitigate errors on the operations themselves. In this work, we investigate unitary purification: the task of utilizing a quantum higher-order operation to partially restore the ideal action of an unknown unitary corrupted by a known noise model. Focusing on canonical depolarizing noise, we first reveal a fundamental operational obstruction. We prove that within the indefinite causal order framework, no nontrivial 2-slot higher-order operation can universally purify the set of single-qubit unitaries. Overcoming this strict limitation, we establish that a 3-slot architecture provides the minimal realization for non-trivial universal purification. We analytically derive the optimal average fidelity for the 3-slot regime, demonstrating that it strictly surpasses trivial strategies by systematically utilizing ancillary qubits as a quantum memory to absorb errors. Furthermore, we provide a concrete quantum circuit construction for this optimal higher-order operation. Our results establish the strict theoretical boundaries of distilling clean operations from noisy gates, offering immediate architectural insights for robust gate design.

Geometry-induced correlated noise in qLDPC syndrome extraction

Angelo Di Bella

2604.01040 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper investigates how the physical layout geometry of quantum error correction circuits affects correlated noise patterns in quantum LDPC codes. The research shows that optimizing the geometric routing of syndrome extraction circuits can significantly reduce logical error rates, demonstrating that physical layout should be considered alongside code design and decoding algorithms.

Key Contributions

  • Derived geometry-conditioned fault models for bivariate-bicycle quantum LDPC codes showing how physical layout affects correlated errors
  • Demonstrated through Monte Carlo simulations that optimized geometric layouts reduce worst-case weighted exposure by over 26% and lower logical error rates in tested quantum error correction codes
  • Established two key geometric metrics (effective fault weight and weighted exposure) that predict logical performance across different layout configurations
quantum error correction LDPC codes fault tolerance syndrome extraction geometric optimization
View Full Abstract

With code and syndrome-extraction schedule fixed, can routed geometry alone change the correlated fault model enough to impact logical performance? Starting from a geometry-conditioned same-tick interaction Hamiltonian, we derive a controlled retained single-and-pair data-fault model for bivariate-bicycle (BB) layouts. Two geometry metrics emerge in two kernel regimes: under a crossing-local diagnostic kernel, a matching argument reduces the support-level effective fault weight; when every support pair appears in at least one retained round with finite same-round separation, strictly positive kernels saturate the support graph, and weighted exposure becomes the discriminating quantity. Circuit-level Monte Carlo on the $[\![72, 12, 6]\!]$ and $[\![144, 12, 12]\!]$ benchmarks confirms that a biplanar bounded-thickness layout suppresses the monomial single-layer embedding penalty, with weighted exposure tracking logical error rate across 101 operating points (Spearman correlation 0.893). A single-layer logical-family optimization on BB72 reduces worst-case exposure by 26.11% and lowers logical error rate in the tested power-law window. Routed geometry should be optimized together with code, schedule, and decoder.
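The reported link between weighted exposure and logical error rate is a rank correlation. As a minimal, self-contained sketch of how such a Spearman check can be computed (the layout metrics and error rates below are invented placeholders, not the paper's data):

```python
def rank(values):
    """Assign average 1-based ranks to values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical (weighted exposure, logical error rate) pairs for a few layouts.
exposure = [3.1, 2.4, 5.0, 4.2, 2.9]
error_rate = [1.2e-3, 0.8e-3, 3.0e-3, 2.1e-3, 1.0e-3]
print(round(spearman(exposure, error_rate), 3))  # rank-perfect toy data -> 1.0
```

On perfectly monotone toy data the coefficient is 1.0; the paper reports 0.893 across 101 real operating points.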

Two Problems on Quantum Computing in Finite Abelian Groups

Ulises Pastor-Díaz, José M. Tornero

2604.00929 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents quantum computing solutions to two mathematical problems involving finite Abelian groups: the Hidden Subgroup Problem (originally solved by Simon) and the newly introduced Fully Balanced Image Problem. The authors develop algorithms using Boolean conversion techniques combined with the Generalized Phase-Kick Back method.

Key Contributions

  • Novel quantum algorithm for the Fully Balanced Image Problem using Generalized Phase-Kick Back technique
  • Boolean conversion framework for solving group-theoretic problems on quantum computers
hidden subgroup problem finite abelian groups phase kickback quantum algorithms boolean conversion
View Full Abstract

In the context of finite Abelian groups two problems are presented and solved using quantum computing techniques. The first is the well-known Hidden Subgroup Problem, originally solved by Simon in a landmark work. The second is the Fully Balanced Image Problem, originally introduced by the authors (joint with J. Ossorio-Castillo), which is related to a certain class of mappings (which contains strictly, for instance, the family of group morphisms). Both problems are tackled using a combination of two techniques: first, a conversion into Boolean objects, better suited for quantum computing arguments, and subsequently a custom-tailored algorithm which takes advantage of the Generalised Phase-Kick Back technique.

Highly-Parallel Atom-Detection Accelerator for Tweezer-Based Neutral Atom Quantum Computers

Jonas Winklmann, Yian Yu, Xiaorang Guo, Korbinian Staudacher, Martin Schulz

2604.00816 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper presents a specialized computer chip (FPGA) accelerator that dramatically speeds up the process of detecting and measuring individual atoms in neutral atom quantum computers, reducing image analysis time from several milliseconds to just 115 microseconds. The acceleration helps overcome one of the major bottlenecks in operating these quantum computers efficiently.

Key Contributions

  • FPGA-based accelerator achieving 34.9x speedup over CPU baseline for atom detection in neutral atom quantum computers
  • Algorithm-level optimizations and hardware design solutions including prefetching mechanisms for improved scalability
  • Demonstration of consistent resource utilization across various atom array sizes contributing to scalable NAQC control systems
neutral atom quantum computing FPGA acceleration atom detection quantum control systems image analysis
View Full Abstract

Neutral atom quantum computers (NAQCs) are among the most promising computational platforms for quantum computing. Controlling and measuring individual atoms and their states, which often requires multiple imaging and image-analysis procedures, is typically the most time-consuming task during computation and contributes significantly to overall cycle times. To resolve this challenge, we propose a highly-parallel atom-detection accelerator for tweezer-based NAQCs. Our design builds on an existing state-reconstruction method and combines an algorithm-level optimization with a Field Programmable Gate Array (FPGA) implementation to maximize parallelism and reduce the run time of the image-analysis process. We identify and overcome several challenges for an FPGA implementation, such as introducing a prefetching mechanism to improve scalability and customizing bus transfers to support large bandwidths. Tested on a Xilinx UltraScale+ FPGA, our design can analyze a 256×256-pixel fluorescence image in just 115 μs, achieving 34.9x and 6.3x speedups over the original and optimized CPU baseline, respectively. Moreover, our accelerator can maintain consistent resource utilization across various atom array sizes, contributing to the ongoing efforts toward scalable and fully integrated FPGA-based control systems for NAQCs.

Quantum-Safe Code Auditing: LLM-Assisted Static Analysis and Quantum-Aware Risk Scoring for Post-Quantum Cryptography Migration

Animesh Shaw

2604.00560 • Apr 1, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: none

This paper presents a software tool that automatically scans code to find cryptographic functions that would be vulnerable to quantum computer attacks, uses AI to assess the severity of each finding, and prioritizes which code needs to be updated first when migrating to quantum-safe cryptography.

Key Contributions

  • Development of an automated static analysis framework for identifying quantum-vulnerable cryptographic primitives in codebases
  • Integration of LLM-assisted contextual analysis with VQE-based risk scoring for prioritizing post-quantum cryptography migration
post-quantum cryptography static analysis cryptographic migration Shor's algorithm VQE
View Full Abstract

The impending arrival of cryptographically relevant quantum computers (CRQCs) threatens the security foundations of modern software: Shor's algorithm breaks RSA, ECDSA, ECDH, and Diffie-Hellman, while Grover's algorithm reduces the effective security of symmetric and hash-based schemes. Despite NIST standardising post-quantum cryptography (PQC) in 2024 (FIPS 203 ML-KEM, FIPS 204 ML-DSA, FIPS 205 SLH-DSA), most codebases lack automated tooling to inventory classical cryptographic usage and prioritise migration based on quantum risk. We present Quantum-Safe Code Auditor, a quantum-aware static analysis framework that combines (i) regex-based detection of 15 classes of quantum-vulnerable primitives, (ii) LLM-assisted contextual enrichment to classify usage and severity, and (iii) risk scoring via a Variational Quantum Eigensolver (VQE) model implemented in Qiskit 2.x, incorporating qubit-cost estimates to prioritise findings. We evaluate the system across five open-source libraries -- python-rsa, python-ecdsa, python-jose, node-jsonwebtoken, and Bouncy Castle Java -- covering 5,775 findings. On a stratified sample of 602 labelled instances, we achieve 71.98% precision, 100% recall, and an F1 score of 83.71%. All code, data, and reproduction scripts are released as open-source.
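Component (i) of the pipeline is ordinary regex scanning. A toy illustration of what such a detection stage might look like — the pattern set, primitive names, and line-level granularity here are assumptions for the example, not the tool's actual rules (the real tool covers 15 classes):

```python
import re

# Toy patterns for a few quantum-vulnerable primitives (illustrative only).
VULNERABLE_PATTERNS = {
    "RSA":   re.compile(r"\bRSA\.generate\b|\brsa\.newkeys\b"),
    "ECDSA": re.compile(r"\becdsa\.SigningKey\b|\bECDSA\b"),
    "DH":    re.compile(r"\bDiffieHellman\b|\bdh\.parameters\b"),
    "SHA1":  re.compile(r"\bhashlib\.sha1\b"),
}

def scan(source: str):
    """Return (line_number, primitive) findings for quantum-vulnerable calls."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in VULNERABLE_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = """\
import hashlib
key = rsa.newkeys(2048)
digest = hashlib.sha1(data).hexdigest()
"""
print(scan(sample))  # -> [(2, 'RSA'), (3, 'SHA1')]
```

In the paper's architecture, raw findings like these are then passed to the LLM stage for contextual severity classification and to the VQE-based scorer for prioritization.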

LLM-Guided Evolutionary Search for Algebraic T-Count Optimization

Daniil Fisher, Valentin Khrulkov, Mikhail Saygin, Ivan Oseledets, Stanislav Straupe

2603.29894 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces VarTODD, a method for optimizing quantum circuits by reducing the number of expensive T gates using machine learning-guided search strategies. The approach uses large language models to guide evolutionary algorithms in finding better ways to minimize T-count in fault-tolerant quantum circuits, achieving improvements on standard arithmetic benchmarks.

Key Contributions

  • Introduction of VarTODD, a policy-parameterized variant of FastTODD that separates algebraic correctness from search policy optimization
  • Demonstration of LLM-guided evolutionary optimization (GigaEvo) for automated tuning of quantum circuit optimization policies, achieving significant T-count reductions on arithmetic benchmarks
T-count optimization fault-tolerant quantum computing Clifford+T circuits quantum compilation evolutionary algorithms
View Full Abstract

Reducing the non-Clifford cost of fault-tolerant quantum circuits is a central challenge in quantum compilation, since T gates are typically far more expensive than Clifford operations in error-corrected architectures. For Clifford+T circuits, minimizing T-count remains a difficult combinatorial problem even for highly structured algebraic optimizers. We introduce VarTODD, a policy-parameterized variant of FastTODD in which the correctness-preserving algebraic transformations are left unchanged while candidate generation, pooling, and action selection are exposed as tunable heuristic components. This separates the quality of the algebraic rewrite system from the quality of the search policy. On standard arithmetic benchmarks, fixed hand-designed VarTODD policies already match or improve strong FastTODD baselines, including reductions from 147 to 139 for GF(2^9) and from 173 to 163 for GF(2^10) in the corresponding benchmark branches. As a proof of principle for automated tuning, we then optimize VarTODD policies with GigaEvo, an LLM-guided evolutionary framework, and obtain additional gains on harder instances, reaching 157 for GF(2^10) and 385 for GF(2^16). These results identify policy optimization as an independent and practical lever for improving algebraic T-count reduction, while LLM-guided evolution provides one viable way to exploit it.
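The separation VarTODD draws — a fixed set of correctness-preserving rewrites, with only the search policy swappable — can be illustrated with a toy rewrite system. Here the single rewrite uses the identity T·T = S, so every rewrite provably preserves the unitary while the policy only picks where to apply it; this is a conceptual sketch, not the FastTODD rewrite set:

```python
def t_count(circ):
    """Number of expensive T gates in a circuit given as a gate string."""
    return circ.count("T")

def rewrite_at(circ, i):
    """Correctness-preserving rewrite: two adjacent T gates equal one S gate
    (T^2 = S), so applying it can never change the circuit's unitary."""
    if circ[i:i + 2] == "TT":
        return circ[:i] + "S" + circ[i + 2:]
    return None

def var_search(circ, policy):
    """The policy only chooses WHICH candidate rewrite to take next;
    correctness is guaranteed by the rewrite system itself."""
    while True:
        candidates = [rewrite_at(circ, i) for i in range(len(circ) - 1)]
        candidates = [c for c in candidates if c is not None]
        if not candidates:
            return circ
        circ = policy(candidates)

def leftmost(cands):
    return cands[0]

print(var_search("TTHTT", leftmost), t_count(var_search("TTHTT", leftmost)))
# -> SHS 0
```

In VarTODD the tunable components are candidate generation, pooling, and action selection; an LLM-guided evolutionary loop then searches over such policies rather than over circuits directly.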

Floquet Codes from Derived Semi-Regular Hyperbolic Tessellations on Orientable and Non-Orientable Surfaces

Douglas F. Copatti, Giuliano G. La Guardia, Waldir S. Soares, Edson D. Carvalho, Eduardo B. Silva

2603.29811 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new quantum Floquet codes by using hyperbolic tessellations on various types of surfaces (both orientable and non-orientable). The authors construct these quantum error-correcting codes by mapping surfaces to hyperbolic polygons and analyzing their geometric properties.

Key Contributions

  • Construction of new quantum Floquet codes on compact orientable and non-orientable surfaces
  • Generalization of hyperbolic Floquet code constructions using semi-regular tessellations
  • Performance analysis and asymptotic behavior investigation of the developed codes
quantum error correction Floquet codes hyperbolic tessellations quantum codes fault tolerance
View Full Abstract

In this paper, we construct several new quantum Floquet codes on compact, orientable, as well as non-orientable surfaces. In order to obtain such codes, we identify these surfaces with hyperbolic polygons and examine hyperbolic semi-regular tessellations on such surfaces. The method of construction presented here generalizes similar constructions concerning hyperbolic Floquet codes on connected and compact surfaces with genus $g \geq 2$. A performance analysis and an investigation of the asymptotic behavior of these codes are also presented.

Logical-to-Physical Compilation for Reducing Depth in Distributed Quantum Systems

Folkert de Ronde, Stephan Wong, Sebastian Feld

2603.29536 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper presents a compiler that optimizes distributed quantum computing circuits by identifying sequences of CNOT gates that can be parallelized and rescheduled to reduce circuit depth. The approach combines logical-to-physical decomposition with scheduling to minimize execution time while maintaining logical equivalence, specifically targeting the overhead introduced when quantum operations must be distributed across multiple connected processors.

Key Contributions

  • Development of a compiler that integrates logical-to-physical decomposition with depth-aware rescheduling for distributed quantum circuits
  • Algorithm for parallelizing sequential CNOT gate structures while maintaining logical equivalence and never increasing circuit depth
distributed quantum computing circuit compilation CNOT gates quantum circuit depth entanglement distribution
View Full Abstract

Quantum computing is expected to become a foundational technology for solving problems that exceed the capabilities of classical systems. As quantum algorithms and hardware technologies continue to advance, the need for scalable architectures becomes increasingly clear. Distributed quantum computing offers a promising path forward by interconnecting multiple smaller processors into a larger, more powerful system. However, distributed quantum computing introduces significant circuit depth overhead, as logical operations are typically decomposed into sequential physical procedures that require entanglement generation. These sequential operations limit the reliability of quantum algorithms in the NISQ era due to noise. In this work, we present a compiler that integrates logical-to-physical decomposition with depth-aware rescheduling to reduce the execution cost of distributed quantum circuits. The compiler identifies sequences of logical CNOT gates that share a control or target qubit, reschedules them into parallel instruction groups, and applies decompositions that allow multiple gates to be executed simultaneously using distributed shared entanglement resources. An algorithm is proposed that ensures parallelism is created when possible while keeping logical equivalence and that circuit depth is never increased. Benchmark results demonstrate that the compiler consistently reduces circuit depth for circuits containing inherently sequential CNOT structures, while leaving already-parallel circuits unchanged. These results highlight the value of combining scheduling and hardware-aware decomposition, and establish the compiler as a practical tool for improving the fidelity of distributed quantum computations.
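A simplified version of the rescheduling step — merging consecutive CNOTs that share a control into one parallel layer — might look like the sketch below. The grouping rule is a deliberate simplification of the paper's algorithm, assumed for illustration:

```python
from collections import namedtuple

CNOT = namedtuple("CNOT", ["control", "target"])

def group_shared_control(gates):
    """Greedily merge runs of CNOTs that share a control qubit and touch
    distinct targets into one parallel group; such gates commute, so one
    round of distributed shared entanglement can serve the whole group."""
    groups = []
    for gate in gates:
        if (groups
                and groups[-1][0].control == gate.control
                and gate.target not in {g.target for g in groups[-1]}):
            groups[-1].append(gate)
        else:
            groups.append([gate])
    return groups

circuit = [CNOT(0, 1), CNOT(0, 2), CNOT(0, 3), CNOT(2, 4)]
print(len(group_shared_control(circuit)))  # depth 4 collapses to 2 layers -> 2
```

Already-parallel circuits pass through unchanged, matching the paper's guarantee that depth is never increased.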

PAEMS: Precise and Adaptive Error Model for Superconducting Quantum Processors

Songhuan He, Yifei Cui, Cheng Wang

2603.29439 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces PAEMS, a new error modeling system for superconducting quantum processors that more accurately simulates qubit errors for quantum error correction. The model uses a qubit-wise separation framework to capture how errors evolve across space and time, showing significant improvements in error correlation accuracy compared to existing models.

Key Contributions

  • Introduction of PAEMS error model with qubit-wise separation framework and leakage propagation
  • Significant reduction in error correlations (19.5x timelike, 9.3x spacelike, 5.2x spacetime) compared to previous methods
  • 58-73% accuracy improvement over Google's SI1000 model across multiple quantum platforms
quantum error correction superconducting qubits error modeling fault tolerance QPU
View Full Abstract

Superconducting quantum processor units (QPUs) are incapable of producing massive datasets for quantum error correction (QEC) because of hardware limitations. Thus, QEC decoders heavily depend on synthetic data from qubit error models. Classic depolarizing error models with polynomial complexity present limited accuracy. Coherent density matrix methods suffer from exponential complexity $\propto O(4^n)$ where $n$ represents the number of qubits. This paper introduces PAEMS: a precise and adaptive qubit error model. Its qubit-wise separation framework, incorporating leakage propagation, captures error evolvements crossing spatial and temporal domains. Utilizing repetition-code experiment datasets, PAEMS effectively identifies the intrinsic qubit errors through an end-to-end optimization pipeline. Experiments on IBM's QPUs have demonstrated a 19.5$\times$, 9.3$\times$, and 5.2$\times$ reduction in timelike, spacelike, and spacetime error correlation, respectively, surpassing all of the previous works. It also outperforms the accuracy of Google's SI1000 error model by 58$\sim$73\% on multiple quantum platforms, including IBM's Brisbane, Sherbrooke, and Torino, as well as China Mobile's Wuyue and QuantumCTek's Tianyan.

YZ-plane measurement-based quantum computation: Universality and Parity Architecture implementation

Jaroslav Kysela, Katharina Ludwig, Nitica Sakharwade, Anette Messinger, Wolfgang Lechner

2603.29379 • Mar 31, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies measurement-based quantum computation (MBQC) using measurements restricted to the YZ plane of the Bloch sphere, proving that certain deterministic quantum computations must use such measurements and demonstrating universal quantum computation is possible with only YZ-plane measurements. The authors also show how these restricted measurement patterns can be implemented using local interactions within the Parity Architecture framework.

Key Contributions

  • Proof that uniformly deterministic MBQC with input-output coincidence requires YZ-plane measurements on register-logic graphs
  • Demonstration of universal quantum computation using only YZ-plane measurements and connection to XZ-plane patterns
  • Implementation framework for YZ-plane patterns in Parity Architecture with purely local interactions
measurement-based quantum computation MBQC YZ-plane measurements universal quantum computation register-logic graphs
View Full Abstract

We define the class of register-logic graphs and prove that any uniformly deterministic measurement-based quantum computation (MBQC) where the inputs coincide with the outputs must be driven on such graphs by measurements in the $YZ$ plane of the Bloch sphere. This observation is revisited in the context that goes beyond uniform determinism, where we present a universal $YZ$-plane-only measurement pattern and establish a connection between $YZ$-plane-only and $XZ$-plane-only patterns. These results conclude the line of research on universal patterns with measurements restricted to one of the principal planes of the Bloch sphere. We further demonstrate, within the framework of the Parity Architecture, that $YZ$-plane patterns with the register-logic graph can be embedded into another graph with purely local interactions, and we extend this case to the scenario of universal quantum computation.

Shor's algorithm is possible with as few as 10,000 reconfigurable atomic qubits

Madelyn Cain, Qian Xu, Robbie King, Lewis R. B. Picard, Harry Levine, Manuel Endres, John Preskill, Hsin-Yuan Huang, Dolev Bluvstein

2603.28627 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper shows that Shor's algorithm for breaking cryptography could be implemented with as few as 10,000 neutral-atom qubits, dramatically reducing previous estimates that required millions of qubits. The researchers achieve this by using advanced quantum error correction codes and optimized circuit designs to make cryptographically relevant quantum computing more feasible.

Key Contributions

  • Reduced resource requirements for Shor's algorithm from millions to ~10,000 physical qubits
  • Demonstrated feasibility of cryptographically relevant quantum computing with neutral-atom architectures
  • Provided concrete runtime estimates for breaking P-256 elliptic curves and RSA-2048 encryption
Shor's algorithm quantum error correction neutral atoms fault-tolerant quantum computing cryptography
View Full Abstract

Quantum computers have the potential to perform computational tasks beyond the reach of classical machines. A prominent example is Shor's algorithm for integer factorization and discrete logarithms, which is of both fundamental importance and practical relevance to cryptography. However, due to the high overhead of quantum error correction, optimized resource estimates for cryptographically relevant instances of Shor's algorithm require millions of physical qubits. Here, by leveraging advances in high-rate quantum error-correcting codes, efficient logical instruction sets, and circuit design, we show that Shor's algorithm can be executed at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits. Increasing the number of physical qubits improves time efficiency by enabling greater parallelism; under plausible assumptions, the runtime for discrete logarithms on the P-256 elliptic curve could be just a few days for a system with 26,000 physical qubits, while the runtime for factoring RSA-2048 integers is one to two orders of magnitude longer. Recent neutral-atom experiments have demonstrated universal fault-tolerant operations below the error-correction threshold, computation on arrays of hundreds of qubits, and trapping arrays with more than 6,000 highly coherent qubits. Although substantial engineering challenges remain, our theoretical analysis indicates that an appropriately designed neutral-atom architecture could support quantum computation at cryptographically relevant scales. More broadly, these results highlight the capability of neutral atoms for fault-tolerant quantum computing with wide-ranging scientific and technological applications.

Tunable Nonlocal ZZ Interaction for Remote Controlled-Z Gates Between Distributed Fixed-Frequency Qubits

Benzheng Yuan, Chaojie Zhang, Haoran He, Yangyang Fei, Chuanbing Han, Shuya Wang, Huihui Sun, Qing Mu, Bo Zhao, Fudong Liu, Weilong Wang, Zheng Shan

2603.28526 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper presents a method for creating high-fidelity quantum gates between superconducting qubits located in separate modules connected by long cables, using double-transmon couplers to enable remote quantum operations with over 99.99% fidelity across 25 cm distances.

Key Contributions

  • Development of double-transmon coupler architecture enabling remote controlled-Z gates with >99.99% fidelity
  • Demonstration of tunable nonlocal ZZ interaction with on/off ratio exceeding 10^6 for distributed quantum processors
  • Hardware-efficient solution for scaling superconducting quantum computers through modular architectures
superconducting qubits distributed quantum computing controlled-Z gates double-transmon couplers fault-tolerant quantum computing
View Full Abstract

Fault-tolerant quantum computing requires large-scale superconducting processors, yet monolithic architectures face increasing constraints from wiring density, crosstalk, and fabrication yield. Modular superconducting platforms offer a scalable alternative, but achieving high-fidelity entangling gates between distant modules remains a central challenge, particularly for highly coherent fixed-frequency qubits. Here, we propose a distributed hardware architecture designed to overcome this bottleneck by employing a pair of double-transmon couplers (DTCs). By synchronously controlling the two DTCs stationed at opposite ends of a macroscopic cable, our scheme strongly suppresses residual static inter-module coupling while enabling on-demand activation of a non-local cross-Kerr interaction with an on/off ratio exceeding $10^6$. Through comprehensive system-level numerical simulations incorporating realistic hardware parameters, we demonstrate that this mechanism can realize a remote controlled-Z (CZ) gate with a fidelity over 99.99\% between fixed-frequency transmons housed in separate packages interconnected by a 25 cm coaxial cable. These results establish a highly viable, hardware-efficient route toward high-performance distributed superconducting processors.

Open-System Adiabatic Quantum Search under Dephasing

Afaf El Kalai, Peter J. Eder, Christian B. Mendl

2603.28506 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes how to optimize adiabatic quantum search (the adiabatic counterpart of Grover's algorithm) when the quantum system experiences dephasing noise, finding the evolution schedule that best balances speed against decoherence. The researchers derive closed-form expressions for the optimal timing and identify a critical noise threshold beyond which the search cannot be accelerated further.

Key Contributions

  • Derived closed-form expressions for optimal evolution schedule in noisy adiabatic Grover search
  • Identified critical dephasing threshold that defines fundamental limits for noise-assisted quantum algorithm acceleration
adiabatic quantum computing Grover search decoherence dephasing quantum algorithms
View Full Abstract

Adiabatic quantum algorithms must evolve slowly enough to suppress non-adiabatic transitions while remaining fast enough to be practical. In open systems, this trade-off is reshaped by decoherence. For Hamiltonians subject to dephasing Lindbladians, Avron et al. [1] showed that a unique timetable exists that maximizes the fidelity with a target state. This optimal schedule is characterized by a constant tunneling rate along the adiabatic path. In this work, we revisit their analysis and apply it to the adiabatic Grover search framework, obtaining closed-form expressions for the optimal evolution schedule, the minimum runtime, and the resulting achievable fidelity. Moreover, by invoking an energy-time uncertainty argument, we identify a critical dephasing threshold, beyond which further noise-assisted acceleration is prohibited, thereby defining the physically realizable boundaries for dephasing-based adiabatic quantum search protocols.
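For context, in the noiseless limit a constant-transition-rate schedule for adiabatic Grover search reduces to the well-known local adiabatic schedule of Roland and Cerf, which can be integrated numerically. The sketch below shows only that background √N runtime scaling, not the paper's dephasing-optimal timetable:

```python
import math

def gap(s, n_items):
    """Spectral gap of the adiabatic Grover Hamiltonian at schedule point s."""
    return math.sqrt(1.0 - 4.0 * (1.0 - 1.0 / n_items) * s * (1.0 - s))

def local_schedule_runtime(n_items, eps=0.1, steps=100_000):
    """Total runtime of the local adiabatic schedule ds/dt = eps * gap(s)^2,
    which holds the adiabatic transition rate constant along the path
    (midpoint-rule integration of dt = ds / (eps * gap^2))."""
    ds = 1.0 / steps
    total = 0.0
    for k in range(steps):
        s = (k + 0.5) * ds
        total += ds / (eps * gap(s, n_items) ** 2)
    return total

# Quadrupling N roughly doubles the runtime: Grover-like sqrt(N) scaling.
t_small = local_schedule_runtime(1 << 10)
t_large = local_schedule_runtime(1 << 12)
print(round(t_large / t_small, 2))
```

The schedule spends almost all of its time near s = 1/2, where the gap shrinks to about 1/√N; that is the regime the paper's dephasing analysis reshapes.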

Mixed-register Stabilizer Codes: A Coding-theoretic Perspective

Himanshu Dongre, Lane G. Gunderman

2603.28459 • Mar 30, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops quantum error correction codes for systems where different quantum locations can have different numbers of basis states (mixed qudits), rather than all being qubits or all having the same dimension. The authors prove theoretical constraints and construct optimal stabilizer codes by combining codes with coprime dimensions.

Key Contributions

  • General theoretical results for mixed-register Pauli operators and forbidden stabilizer code structures
  • Construction of coding-theoretically optimal mixed-register stabilizer codes from coprime local-dimensions
quantum error correction stabilizer codes qudits mixed-register fault tolerance
View Full Abstract

Protecting information in systems that have more than two basis states (qudits) not only offers a promising route for reducing the number of individual quantum locations that must be protected, while more accurately reflecting the structure of realistic quantum hardware, but also has some possibly enticing foundational strengths. While work in the past has largely focused on protecting information in quantum devices with locations that are some consistent local structure, this work considers coding-theoretic constraints on devices constructed from locations which may vary in their local structures -- these are mixed-register quantum devices. In this work we provide some general results for mixed-register Pauli operators, then identify some stabilizer encoded information forms that are forbidden. Building on these insights, we construct coding-theoretically optimal mixed-register stabilizer codes from sets of codes defined on coprime local-dimensions. The construction of such codes results in codes with logical subspaces that do not directly correspond to any of the constituent local-dimensions.
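Under the standard generalized-Pauli conventions, two mixed-register Pauli operators commute iff a dimension-weighted symplectic sum is an integer: each register of dimension d contributes (b·c − e·a)/d to the swap phase. A sketch of that check (the encoding and function name are illustrative, not from the paper):

```python
from fractions import Fraction

def commutes(p, q, dims):
    """Check whether two mixed-register Pauli operators commute.

    p and q are lists of (x_power, z_power) pairs, one per register, and
    dims[i] is the local dimension of register i.  Swapping the operators
    picks up the phase exp(2*pi*1j * sum_i (b_i*c_i - e_i*a_i)/d_i), so
    they commute exactly when that sum is an integer.
    """
    phase = sum(
        Fraction(b * c - e * a, d)
        for (a, b), (c, e), d in zip(p, q, dims)
    )
    return phase.denominator == 1

# A qubit (d=2) next to a qutrit (d=3): coprime local dimensions.
dims = [2, 3]
P = [(1, 0), (1, 0)]   # X on both registers
Q = [(0, 1), (0, 1)]   # Z on both registers
print(commutes(P, P, dims), commutes(P, Q, dims))  # -> True False
```

The coprime-dimension constructions in the paper exploit exactly this arithmetic: phase contributions from registers with coprime dimensions can only cancel in constrained ways.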

Autonomous Hamiltonian certification and changepoint detection

Steven T. Flammia, Dmitrii Khitrin, Muzhou Ma, Jamie Sikora, Yu Tong, Alice Zheng

2603.26655 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: none

This paper develops protocols for quantum devices to autonomously monitor whether their internal components (Hamiltonians) have drifted from their calibrated values, using only simple single-qubit operations to detect when expensive recalibration is needed. The approach allows quantum computers to self-diagnose calibration problems without requiring external reference devices or complex multi-qubit operations that might themselves be miscalibrated.

Key Contributions

  • Autonomous Hamiltonian certification protocol that distinguishes calibration drift using only single-qubit gates and measurements
  • Online changepoint detection algorithm for continuous monitoring of quantum device calibration with optimal scaling
  • Sample complexity bounds of O(nM²ln(1/δ)/ε²) for n-qubit systems with practical evolution time requirements
hamiltonian certification quantum calibration changepoint detection stabilizer states quantum device characterization
View Full Abstract

Modern quantum devices require high-precision Hamiltonian dynamics, but environmental noise can cause calibrated Hamiltonian parameters to drift over time, necessitating expensive recalibration. Detecting when recalibration is needed is challenging, especially since the very gates required for sophisticated verification protocols may themselves be miscalibrated. While cloud quantum computing services implement heuristic routines for triggering recalibration, the fundamental limits of optimal recalibration are not yet known. We develop efficient Hamiltonian certification and changepoint detection protocols in the autonomous setting, where we cannot rely on an external noiseless device and use only single-qubit gates and measurements, making the protocols robust to the calibration issues for multi-qubit operations they aim to detect. For unknown $n$-qubit Hamiltonians $H$ and $H_0$ with operator norm bounded by $M$, our certification protocol distinguishes whether $\|H-H_0\|_F\geq\varepsilon$ or $\|H-H_0\|_F\leq O(\varepsilon/\sqrt{n})$ with sample complexity $O(nM^2\ln(1/\delta)/\varepsilon^2)$ and total evolution time $O(nM\ln(1/\delta)/\varepsilon^2)$. We achieve this by evolving random stabilizer product states and performing adaptive single-qubit measurements based on a classically simulable hypothesis state. Extending this to continuous monitoring, we develop an online changepoint detection algorithm using the CUSUM procedure that achieves a detection delay time bound of $O(nM\ln(M\mathbb{E}_\infty[T])/\varepsilon^2)$, matching the known asymptotically optimal scaling with respect to false alarm run time $\mathbb{E}_\infty[T]$. Our approach enables quantum devices to autonomously monitor their own calibration status without requiring ancillary systems, entangling operations, or a trusted reference device, offering a practical solution for robust quantum computing with contemporary noisy devices.
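The online-monitoring component builds on the classical CUSUM procedure. As a point of reference, here is a minimal classical CUSUM sketch on a simulated drifting observable; the Gaussian score model, means, and threshold are illustrative assumptions, not the paper's quantum statistics:

```python
import random

def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """One-sided CUSUM: accumulate log-likelihood ratios for a mean
    shift from mu0 to mu1; flag a changepoint when the running sum,
    clipped below at zero, crosses the threshold."""
    s = 0.0
    for t, x in enumerate(samples):
        # Gaussian log-likelihood ratio of N(mu1, sigma) vs N(mu0, sigma)
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        s = max(0.0, s + llr)
        if s > threshold:
            return t  # first index at which drift is declared
    return None

random.seed(0)
pre = [random.gauss(0.0, 1.0) for _ in range(200)]   # calibrated regime
post = [random.gauss(1.0, 1.0) for _ in range(200)]  # drifted regime
alarm = cusum_detect(pre + post, mu0=0.0, mu1=1.0, sigma=1.0, threshold=10.0)
print(alarm)
```

Raising the threshold lengthens the mean time to false alarm at the cost of detection delay, the same trade-off the paper's $O(nM\ln(M\mathbb{E}_\infty[T])/\varepsilon^2)$ delay bound quantifies.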

Majorana-XYZ subsystem code

Tobias Busse, Lauri Toikka

2603.26311 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a new quantum error correction code called the Majorana-XYZ subsystem code that can protect a macroscopic number of logical qubits using topological properties. The code uses 3-local nearest-neighbor check operations on a honeycomb lattice and can encode approximately L/2 logical qubits into L² physical qubits with distance L.

Key Contributions

  • Introduction of a novel subsystem quantum error correction code that combines topological and local gauge protection
  • Demonstration of macroscopic logical qubit encoding with 3-local nearest-neighbor check operations on experimentally feasible Majorana fermion systems
quantum error correction subsystem codes topological codes Majorana fermions honeycomb lattice
View Full Abstract

We present a new type of quantum error correction code, termed the Majorana-XYZ code, where the logical quantum information scales macroscopically yet is protected by topologically non-trivial degrees of freedom. It is a $[n,k,g,d]$ subsystem code with $n=L^2$ physical qubits, $k= \lfloor L/2 \rfloor$ logical qubits, $g \sim L^2$ gauge qubits, and distance $d = L$. The physical check operations, i.e. the measurements needed to obtain the error syndrome, are $3$-local and nearest-neighbour. The code detects every 1- and 2-qubit error, and every error of weight 3 and higher (constrained by the distance) that is not a product of the 3-qubit check operations; such products act only on the gauge qubits, leaving the code space invariant. The undetected weight-3 and higher operators are confined to the gauge group and do not affect logical information. While the code does not have local stabiliser generators, the logical qubits cannot be modified locally by an undetectable error, and in this sense the Majorana-XYZ code combines notions of both topological and local gauge codes while providing a macroscopic number of topological logical qubits. Taken as a non-gauge stabiliser code, it can encode $k \sim L^2 - 3L$ logical qubits into $L^2$ physical qubits; however, the check operators then become weight $2L$. The code is derived from an experimentally promising system of Majorana fermions on the honeycomb lattice with only nearest-neighbour interactions.

Decomposition of Multi-Qubit Gates for Circuit Cutting

Ryota Tamura, Tomoya Kashimata, Yohei Hamakawa, Kosuke Tatsumura, Hiroshi Imai

2603.26278 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a method to reduce the computational overhead when cutting large quantum circuits into smaller pieces by optimizing how multi-qubit gates are decomposed before the cutting process. The approach uses additional helper qubits strategically to minimize the extra sampling required when reconstructing results from the cut circuit pieces.

Key Contributions

  • Novel decomposition strategy for multi-qubit gates that reduces sampling overhead in circuit cutting
  • Demonstration using MCX and CCCX gates showing effectiveness of ancilla-based approach for optimizing cut locations
circuit cutting multi-qubit gates sampling overhead quantum circuit decomposition ancilla qubits
View Full Abstract

A large-scale quantum circuit can be partitioned into multiple subcircuits through circuit cutting, where each subcircuit is executed multiple times and the expectation value of the original circuit is reconstructed by classical post-processing from their measurement (sampling) results. In this process, appropriate cut locations are identified after the user-designed quantum circuit, including multi-qubit gates that act on three or more qubits, has been decomposed into single-qubit gates and two-qubit gates such as the CNOT gate. Here, we present a method for reducing the sampling overhead, which refers to the increase in the number of samples required due to the cutting process, by modifying the decomposition strategy of multi-qubit gates. Using MCX and CCCX gates as representatives of multi-qubit gates, we demonstrate that the proposed decomposition method, which introduces a small number of ancilla qubits according to the identified cut locations, effectively decreases the sampling overhead.

Distributed Quantum Discrete Logarithm Algorithm

Renjie Xu, Daowen Qiu, Ligang Xiao, Le Luo, Xu Zhou

2603.26160 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper proposes a distributed quantum algorithm for solving the discrete logarithm problem that requires smaller quantum registers than Shor's original algorithm. The approach works by identifying intersections of sets that contain the solution, avoiding the need for quantum communication while potentially improving success probability.

Key Contributions

  • Distributed quantum algorithm for discrete logarithm problem with reduced register size requirements
  • Method to determine solution containment in given sets without quantum communication
  • Approach that can improve success probability compared to Shor's algorithm
discrete logarithm Shor's algorithm distributed quantum computing cryptanalysis quantum registers
View Full Abstract

Solving the discrete logarithm problem (DLP) with quantum computers is a fundamental task with important implications. Beyond Shor's algorithm, many researchers have proposed alternative solutions in recent years. However, due to current hardware limitations, the scale of DLP instances that can be addressed by quantum computers remains insufficient. To overcome this limitation, we propose a distributed quantum discrete logarithm algorithm that reduces the required quantum register size for solving DLPs. Specifically, we design a distributed quantum algorithm to determine whether the solution is contained in a given set. Based on this procedure, our method solves DLPs by identifying the intersection of sets containing the solution. Compared with Shor's original algorithm, our approach reduces the register size and can improve the success probability, while requiring no quantum communication.
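The set-intersection idea can be caricatured classically: each node reports which members of its assigned candidate set are consistent with the instance, and the solution survives in the intersection. In this toy sketch the group, parameters, and brute-force membership test are all stand-ins for the paper's quantum subroutine:

```python
def dlog_by_set_intersection(g, h, p, candidate_sets):
    """Toy classical analogue: each node keeps the subset of its
    candidate set consistent with g^x = h (mod p); the solution
    lies in the intersection of the surviving subsets."""
    survivors = None
    for s in candidate_sets:
        hits = {x for x in s if pow(g, x, p) == h}
        survivors = hits if survivors is None else survivors & hits
    return survivors

p, g = 101, 2          # 2 is a primitive root mod 101
x_true = 37
h = pow(g, x_true, p)
# Two overlapping candidate sets, each containing the solution
sets = [set(range(0, 60)), set(range(30, 100))]
print(dlog_by_set_intersection(g, h, p, sets))  # -> {37}
```

The quantum point of the paper is that a small-register device can answer the "is the solution in this set?" question far more cheaply than this brute-force test, with no quantum communication between nodes.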

MoSAIC: Scalable Probabilistic Error Cancellation via Variational Blockwise Noise Aggregation

Maya Ma, Rimika Jaiswal, Murphy Yuezhen Niu

2603.26063 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces MoSAIC, a new quantum error mitigation technique that reduces the computational overhead of probabilistic error cancellation by grouping quantum circuit operations into blocks and learning effective noise models for each block. The method maintains accuracy while dramatically reducing sampling costs, enabling error mitigation on much larger quantum systems than previously possible.

Key Contributions

  • Development of MoSAIC framework that preserves unbiasedness of probabilistic error cancellation while reducing sampling overhead by 1-2 orders of magnitude
  • Largest experimental demonstration of PEC-based error mitigation on IBM's 156-qubit Heron processors with validation on 50-qubit transverse-field Ising model systems
  • Blockwise noise aggregation approach that enables scalable quantum error mitigation beyond the operating regime of standard probabilistic error cancellation
quantum error mitigation probabilistic error cancellation NISQ noise models variational optimization
View Full Abstract

Quantum error mitigation is essential for extracting trustworthy results from noisy intermediate-scale quantum (NISQ) processors. Yet, current approaches face a core scalability bottleneck: unbiased methods such as probabilistic error cancellation (PEC) incur exponential sampling overhead, while approximate techniques like zero-noise extrapolation trade accuracy for efficiency. We introduce and experimentally demonstrate MoSAIC (Modular Spatio-temporal Aggregation for Inverted Channels), a scalable quantum error mitigation framework that preserves the unbiasedness of PEC while dramatically reducing sampling costs. MoSAIC partitions a circuit into noise-aligned blocks, learns an effective block noise model using classical variational optimization, and applies quasi-probabilistic inversion once per block instead of after every layer. This blockwise aggregation reduces both sampling overhead and circuit-depth overhead, enabling mitigation far beyond the operating regime of standard PEC. We also experimentally validate MoSAIC on IBM's 156-qubit Heron processors, performing the largest PEC-based mitigation demonstration on hardware to date. As a physically meaningful benchmark, we prepare the critical one-dimensional transverse-field Ising (TFIM) ground state for system sizes up to 50 qubits. We show that MoSAIC can achieve at least 1 to 2 orders of magnitude better accuracy than standard PEC under identical sampling budgets. This enables MoSAIC to recover accurate observables for larger system sizes, even when standard PEC fails due to its prohibitive sampling overhead. We also present CUDA-Q accelerated simulations to validate performance trends under a range of different noise models. These results demonstrate that MoSAIC is not only theoretically scalable but also practically deployable for high-accuracy, large-scale quantum experiments on today's quantum hardware.
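The quasi-probability sampling at the core of PEC, and the cost that blockwise aggregation attacks, can be sketched classically. A minimal sketch, with toy scalar coefficients and per-term expectation values that are purely illustrative (nothing here is taken from the paper): the estimator is unbiased, but its variance grows like the square of the 1-norm $\gamma$ of the quasi-probability:

```python
import random

def qp_estimate(coeffs, values, shots, rng):
    """Unbiased Monte-Carlo estimate of sum_i c_i * v_i by sampling
    index i with probability |c_i| / gamma and weighting each draw
    by sign(c_i) * gamma.  Variance grows like gamma**2."""
    gamma = sum(abs(c) for c in coeffs)
    probs = [abs(c) / gamma for c in coeffs]
    total = 0.0
    for _ in range(shots):
        i = rng.choices(range(len(coeffs)), weights=probs)[0]
        sign = 1.0 if coeffs[i] >= 0 else -1.0
        total += sign * gamma * values[i]
    return total / shots, gamma

rng = random.Random(1)
# Toy quasi-probability inverse: one positive and two negative terms
coeffs = [1.4, -0.2, -0.2]   # gamma = 1.8
values = [0.9, 0.5, 0.3]     # noisy expectation value per term
est, gamma = qp_estimate(coeffs, values, shots=200_000, rng=rng)
exact = sum(c * v for c, v in zip(coeffs, values))
print(round(exact, 3), round(est, 3), gamma)
```

Because $\gamma$ multiplies under composition, inverting every layer compounds the cost exponentially in depth; learning one effective inverse per block with a smaller aggregate $\gamma$, as MoSAIC does, directly shrinks the shot budget while keeping the estimator unbiased.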

Achieving double-logarithmic precision dependence in optimization-based quantum unstructured search

Zhijian Lai, Dong An, Jiang Hu, Zaiwen Wen

2603.26039 • Mar 27, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper improves Grover's quantum search algorithm by reformulating it as an optimization problem and applying Riemannian modified Newton methods. The new approach achieves better precision scaling with O(√N log log(1/ε)) complexity instead of the previous O(√N log(1/ε)), while remaining compatible with standard Grover operators.

Key Contributions

  • Development of Riemannian modified Newton method for quantum search with quadratic convergence rate
  • Achievement of double-logarithmic precision dependence O(√N log log(1/ε)) complexity
  • Proof that Riemannian gradient is an eigenvector of the Riemannian Hessian in quantum search setting
  • Maintaining Grover-compatibility using only standard oracle and diffusion operators
Grover algorithm quantum search Riemannian optimization quantum algorithms unstructured search
View Full Abstract

Grover's algorithm is a fundamental quantum algorithm that achieves a quadratic speedup for unstructured search problems of size $N$. Recent studies have reformulated this task as a maximization problem on the unitary manifold and solved it via linearly convergent Riemannian gradient ascent (RGA) methods, resulting in a complexity of $O(\sqrt{N}\log (1/\varepsilon))$. In this work, we adopt the Riemannian modified Newton (RMN) method to solve the quantum search problem. We show that, in the setting of quantum search, the Riemannian Newton direction is collinear with the Riemannian gradient in the sense that the Riemannian gradient is always an eigenvector of the corresponding Riemannian Hessian. As a result, without additional overhead, the proposed RMN method numerically achieves a quadratic convergence rate with respect to error $\varepsilon$, implying a complexity of $O(\sqrt{N}\log\log (1/\varepsilon))$, which is double-logarithmic in precision. Furthermore, our approach remains Grover-compatible, namely, it relies exclusively on the standard Grover oracle and diffusion operators to ensure algorithmic implementability, and its parameter update process can be efficiently precomputed on classical computers.
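The jump from $O(\sqrt{N}\log(1/\varepsilon))$ to $O(\sqrt{N}\log\log(1/\varepsilon))$ is the familiar gap between linear and quadratic convergence. A minimal classical illustration of that gap, using Newton's method on $f(x)=x^2-2$ rather than the Riemannian method itself:

```python
import math

def newton_iters(eps):
    """Newton on f(x) = x^2 - 2: the error roughly squares each step,
    so the iteration count grows like log log(1/eps)."""
    x, n = 2.0, 0
    while abs(x - math.sqrt(2)) > eps:
        x = 0.5 * (x + 2.0 / x)
        n += 1
    return n

def linear_iters(eps, rate=0.5):
    """Linearly convergent scheme: the error shrinks by a constant
    factor per step, so the count grows like log(1/eps)."""
    err, n = 1.0, 0
    while err > eps:
        err *= rate
        n += 1
    return n

for eps in (1e-2, 1e-4, 1e-8):
    print(eps, newton_iters(eps), linear_iters(eps))
```

Squaring the precision adds only one Newton step but doubles the count for the linear scheme, which is exactly the per-iteration behavior the RMN method transplants into the Grover setting.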

Scalable topological quantum computing based on Sine-Cosine chain models

A. Lykholat, G. F. Moreira, I. R. Martins, D. Sousa, A. M. Marques, R. G. Dias

2603.25952 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper proposes a new approach to topological quantum computing using Sine-Cosine chain models that can encode multiple quantum bits (qudits) in single systems, potentially requiring fewer physical resources than current methods. The researchers describe how these chains could be used for quantum gate operations and memory storage with some protection against errors.

Key Contributions

  • Novel scalable framework for topological quantum computing using Matryoshka-type Sine-Cosine chains
  • High-dimensional qudit encoding approach that reduces physical resource overhead
  • Y-junction braiding protocols for gate operations with extended memory architectures
topological quantum computing qudit encoding braiding protocols fault tolerance resource optimization
View Full Abstract

This work proposes a scalable framework for topological quantum computing using Matryoshka-type Sine-Cosine chains. These chains support high-dimensional qudit encoding within single systems, reducing the physical resource overhead compared to conventional qubit arrays. We describe how these chains can be used in Y-junction braiding protocols for gate operations and in extended memory architectures capable of storing multiple qubits simultaneously. Fidelity analysis shows partial topological protection against disorder, suggesting this approach is a possible pathway toward low-overhead quantum hardware.

Theory of (Co)homological Invariants on Quantum LDPC Codes

Zimu Li, Yuguo Shao, Fuchuan Wei, Yiming Li, Zi-Wen Liu

2603.25831 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a mathematical framework for analyzing quantum LDPC (Low-Density Parity-Check) codes by studying their topological and algebraic properties. The work extends theoretical tools from HGP codes to sheaf codes and shows how to construct families of quantum error-correcting codes while preserving their logical operation capabilities.

Key Contributions

  • Systematic mathematical framework for analyzing cohomological invariants of quantum LDPC codes
  • Generalization of canonical logical representatives from HGP codes to sheaf codes
  • First comprehensive computation of cup products in sheaf codes enabling parallel quantum gates
  • Inductive scheme for generating code families while preserving logical operations and invariants
quantum LDPC codes quantum error correction fault tolerance cohomological invariants sheaf codes
View Full Abstract

With recent breakthroughs in the construction of good qLDPC codes and nearly good qLTCs, the study of (co)homological invariants of quantum code complexes, which fundamentally underlie their logical operations, has become evidently important. In this work, we establish a systematic framework for mathematically analyzing these invariants across a broad spectrum of constructions, from HGP codes to sheaf codes, by synthesizing advanced math tools. We generalize the notion of canonical logical representatives from HGP codes to the sheaf code setting, resolving a long-standing challenge in explicitly characterizing sheaf codewords. Building on this foundation, we present the first comprehensive computation of cup products within the intricate framework of sheaf codes. Given Artin's primitive root conjecture which holds under the generalized Riemann hypothesis, we prove that $\tilde{\Theta}(N)$ independent cup products can be supported on almost good qLDPC codes and qLTCs of length N, opening the possibility of achieving linearly many parallel, nontrivial, constant-depth multi-controlled-Z gates. Moreover, by interpreting sheaf codes as covering spaces of HGP codes via graph lifts, we propose a scheme that inductively generates families of both HGP and sheaf codes in an interlaced fashion from a constant-size HGP code. Notably, the induction preserves all (co)homological invariants of the initial code. This provides a general framework for lifting invariants or logical gates from small codes to infinite code families, and enables efficient verification of such features by checking on small instances. Our theory provides a substantive methodology for studying invariants in HGP codes and extends it to sheaf codes. In doing so, we reveal deep and unexpected connections between qLDPC codes and math, thereby laying the groundwork for future advances in quantum coding, fault tolerance, and physics.

Non-linear Sigma Model for the Surface Code with Coherent Errors

Stephen W. Yan, Yimu Bao, Sagar Vijay

2603.25665 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper studies how well the surface code (a leading quantum error correction scheme) performs when affected by coherent errors rather than random errors. The authors develop a mathematical framework to analyze different decoding strategies and discover a new type of failure mode called a 'thermal-metal' phase that occurs when the decoder doesn't have perfect information about the coherent errors.

Key Contributions

  • Derivation of a non-linear sigma model framework for analyzing surface code performance under coherent errors
  • Discovery of a 'thermal-metal' phase representing a new type of non-decodable regime distinct from conventional Pauli error failures
  • Demonstration of sharp performance differences between optimal decoding (with known error parameters) and suboptimal decoding (with imperfect parameter knowledge)
surface code quantum error correction coherent errors maximum-likelihood decoding non-linear sigma model
View Full Abstract

The surface code is a promising platform for a quantum memory, but its threshold under coherent errors remains incompletely understood. We study maximum-likelihood decoding of the square-lattice surface code in the presence of single-qubit unitary rotations that create electric anyon excitations. We microscopically derive a non-linear sigma model with target space $\mathrm{SO}(2n)/\mathrm{U}(n)$ as the effective long-distance theory of this decoding problem, with distinct replica limits: $n\to1$ for optimal decoding, which assumes knowledge of the coherent rotation angle, and $n\to0$ for suboptimal decoding with imperfect angle information. This exposes a sharp distinction between the two decoders. The suboptimal decoder supports a "thermal-metal" phase, a non-decodable regime that is qualitatively distinct from the conventional non-decodable phase of the surface code under incoherent Pauli errors. By contrast, the metal phase cannot arise in optimal decoding, since the metallic fixed point becomes unstable in the $n\to 1$ replica limit. We argue that optimal decoding may be possible up to the maximally-coherent rotation angle. Within the sigma model description, we show that the decoding fidelity is related to twist defects of the order-parameter field, yielding quantitative predictions for its system-size dependence near the metallic fixed point for both decoders. We examine our analytic predictions for the decoding fidelity as well as other physical observables with extensive numerical simulations. We discuss how the symmetries and the target space for the sigma model rely on the lattice of the surface code, and how a stable thermal metal phase can arise in optimal decoding when the syndromes reside on a non-bipartite lattice.

Weighted Nested Commutators for Scalable Counterdiabatic State Preparation

Jialiang Tang, Xi Chen, Zhi-Yuan Wei

2603.25625 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper introduces a new method called weighted nested commutators (WNC) to efficiently prepare quantum states in large systems by approximating complex nonlocal operations with simpler local ones. The approach significantly improves quantum state preparation for systems with up to 1000 qubits compared to existing methods.

Key Contributions

  • Introduction of weighted nested-commutator (WNC) ansatz that generalizes standard nested-commutator approaches with independent variational weights
  • Demonstration of efficient quantum state preparation for large systems up to 1000 qubits using counterdiabatic driving with local optimization
counterdiabatic driving quantum state preparation adiabatic gauge potentials matrix product states variational optimization
View Full Abstract

Counterdiabatic (CD) driving enables efficient quantum state preparation, but it requires implementing highly nonlocal adiabatic gauge potentials (AGP) that are impractical to compute and realize in large many-body systems. We introduce a \textit{weighted nested-commutator} (WNC) ansatz to approximate AGP using local operators. The WNC ansatz generalizes the standard nested-commutator ansatz by assigning independent variational weights to commutators of local Hamiltonian terms, thereby enlarging the variational space while preserving a fixed operator range. We show that the WNC ansatz can be efficiently optimized using a local optimization scheme. Moreover, it systematically outperforms the nested-commutator ansatz in preparing one-dimensional matrix product states (MPS) and the ground state of a nonintegrable quantum Ising model. We then numerically demonstrate that CD driving based on the WNC ansatz significantly accelerates the preparation of 1D MPS for system sizes up to $N = 1000$ qubits, as well as the two-dimensional Affleck-Kennedy-Lieb-Tasaki state on a hexagonal lattice with up to $N = 3 \times 10$ sites.
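To make the ansatz concrete, here is a minimal single-qubit sketch of weighted nested commutators: a weighted sum of odd-order nested commutators of $H$ with $\partial_s H$, in the spirit of standard nested-commutator AGP expansions. The operators, weights, and two-term truncation are illustrative only; the paper works with many-body local terms and optimizes the weights variationally:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    return a @ b - b @ a

def nested(h, v, depth):
    """depth-fold nested commutator [h, [h, ... [h, v]]]."""
    out = v
    for _ in range(depth):
        out = comm(h, out)
    return out

# Toy single-qubit sweep H(s) = (1 - s) Z + s X, evaluated at s = 0.5
H = 0.5 * Z + 0.5 * X
dH = X - Z                     # dH/ds
w = [0.3, -0.05]               # illustrative variational weights
# Weighted ansatz: A = i * (w0 [H, dH] + w1 [H, [H, [H, dH]]])
A = 1j * (w[0] * nested(H, dH, 1) + w[1] * nested(H, dH, 3))
print(np.allclose(A, A.conj().T))  # Hermitian, as a gauge potential must be
```

Giving each commutator its own weight, rather than one coefficient per nesting order, is the WNC generalization: the operator range is unchanged but the variational space is larger.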

Kardashev scale Quantum Computing for Bitcoin Mining

Pierre-Luc Dallaire-Demers

2603.25519 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes the practical feasibility of using quantum computers to mine Bitcoin by applying Grover's algorithm to accelerate the cryptographic hash calculations. The authors find that while quantum mining could theoretically provide advantages, the physical resource requirements (qubits and energy) scale to astronomical levels that make it impractical even for civilizations operating at planetary energy scales.

Key Contributions

  • First comprehensive end-to-end cost analysis of fault-tolerant quantum hardware requirements for Bitcoin mining using Grover's algorithm
  • Open-source estimator tool that models the full attack surface including surface-code error correction, fleet logistics, and energy requirements at astronomical scales
Grover's algorithm Bitcoin mining fault-tolerant quantum computing surface code cryptographic hash functions
View Full Abstract

Bitcoin already faces a quantum threat through Shor attacks on elliptic-curve signatures. This paper isolates the other component that public discussion often conflates with it: mining. Grover's algorithm halves the exponent of brute-force search, promising a quadratic edge to any quantum miner of Bitcoin. Exactly how large that edge grows depends on fault-tolerant hardware. No prior study has costed that hardware end to end. We build an open-source estimator that sweeps the full attack surface: reversible oracles for double-SHA-256 mining and RIPEMD-based address preimages, surface-code factory sizing, fleet logistics under Nakamoto-consensus timing, and Kardashev-scale energy accounting. A parametric sweep over difficulty bits b, runtime caps, and target success probabilities reveals a sharp transition. At the most favourable partial-preimage setting (b = 32, 2^224 marked states), a superconducting surface-code fleet still requires about 10^8 physical qubits and about 10^4 MW. That load is comparable to a large national grid. Tightening to Bitcoin's January 2025 mainnet difficulty (b about 79) explodes the bill to about 10^23 qubits and about 10^25 W, approaching the Kardashev Type II threshold. These numbers settle a narrower question than "Is Bitcoin quantum-secure?" Once Grover mining is lifted from asymptotic query counts to fault-tolerant physical cost, practical quantum mining collapses under oracle, distillation, and fleet overhead. To push mining into non-trivial consensus effects, one must invoke astronomical quantum fleets operating at energy scales that lie far above present-day civilization.
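The quadratic edge quoted above follows from the standard Grover iteration count, roughly $(\pi/4)\sqrt{N/M}$ for $M$ marked items among $N$. A sketch reproducing just that scaling for a partial-preimage search over a 256-bit hash (the paper's qubit and wattage figures come from a full surface-code and fleet-logistics model that this does not attempt):

```python
import math

def grover_iterations(b):
    """Optimal Grover iteration count for partial-preimage mining at
    difficulty b: N = 2**256 candidates, M = 2**(256 - b) marked,
    so iterations ~ (pi/4) * sqrt(N / M) = (pi/4) * 2**(b / 2)."""
    return (math.pi / 4) * 2 ** (b / 2)

for b in (32, 79):
    print(b, f"{grover_iterations(b):.3e}")
```

Each of those iterations is a coherent double-SHA-256 oracle call; the paper's point is that the fault-tolerant cost per call, not the query count itself, is what makes the attack astronomical.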

Weak distillation of quantum resources

Shinnosuke Onishi, Oliver Hahn, Ryuji Takagi

2603.25358 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper develops a new framework that allows quantum computers to simulate quantum operations they cannot directly perform by using sampling techniques based on quasi-probability distributions. Instead of just estimating average values, their method can actually sample from the desired quantum distributions while using fewer quantum resources than previous approaches.

Key Contributions

  • General framework converting quasi-probability protocols from expectation value estimation to full weak simulation
  • Significant reduction in sampling requirements compared to naive approaches, with cost proportional to quasi-probability negativity
  • Introduction of weak quantum resource distillation as alternative to physical state distillation
quantum error mitigation quasi-probability decomposition importance sampling magic state distillation entanglement distillation
View Full Abstract

Importance sampling based on quasi-probability decomposition is the backbone of many widely used techniques, such as error mitigation, circuit knitting, and, more generally, virtual quantum resource distillation, as it allows one to simulate operations that are not accessible in a given setting. However, this class of protocols faces a fundamental problem -- it only allows to estimate expectation values. Here, we provide a general framework that lifts any quasi-probability-based protocol from expectation value estimation to a weak simulator, realizing sampling from the desired distribution only using a restricted class of quantum resources. Our method runs with the sampling cost proportional to the negativity of the quasi-probability, in stark contrast to the naive estimation-based approach that requires a large number of samples even in the case of small negativity. We show that our method requires significantly fewer samples in a number of relevant scenarios, such as error mitigation, entanglement distillation and magic state distillation. Our framework realizes the weak simulation of quantum resources without actually distilling the state, introducing a new notion of quantum resource distillation.

T Count as a Numerically Solvable Minimization Problem

Marc Grau Davis, Ed Younis, Mathias Weiden, Hyeongrak Choi, Dirk Englund

2603.25101 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new method to find quantum circuits that minimize the number of T gates (which are expensive in fault-tolerant quantum computing) by formulating it as a continuous optimization problem that can be solved numerically. The authors demonstrate their approach works for small circuits and show how to extend it to larger circuits by breaking them into smaller optimizable pieces.

Key Contributions

  • Formulates T-count minimization as numerically solvable continuous optimization problems using binary search
  • Demonstrates circuit partitioning approach to scale the optimization method to larger quantum circuits
T-count optimization fault-tolerant quantum computing quantum circuit synthesis binary search optimization circuit partitioning
View Full Abstract

We present a formulation of the problem of finding the smallest T-count circuit that implements a given unitary as a binary search over a sequence of continuous minimization problems, and demonstrate that these problems are numerically solvable in practice. We reproduce best-known results for synthesis of circuits with a small number of qubits, and push the bounds of the largest circuits that can be solved for in this way. Additionally, we show that circuit partitioning can be used to adapt this technique to optimize the T-count of circuits with large numbers of qubits by breaking the circuit into a series of smaller sub-circuits that can be optimized independently.
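The outer loop of this formulation is ordinary bisection over a monotone feasibility predicate, "is the unitary implementable with at most t T gates?". A sketch with a stand-in oracle (the paper's oracle is the continuous numerical minimization, which this placeholder does not implement):

```python
def min_t_count(feasible, t_max):
    """Binary search for the smallest t in [0, t_max] with feasible(t)
    True, assuming monotone feasibility: anything achievable with t
    T gates is achievable with any larger budget."""
    lo, hi = 0, t_max
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid        # achievable: try fewer T gates
        else:
            lo = mid + 1    # not achievable: need more
    return lo

# Stand-in oracle: pretend the target unitary needs exactly 7 T gates.
print(min_t_count(lambda t: t >= 7, t_max=64))  # -> 7
```

Only O(log t_max) feasibility checks are needed, so the cost is dominated by the numerical minimization inside each check.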

Uncertainty Quantification for Quantum Computing

Ryan Bennink, Olena Burkovska, Konstantin Pieper, Jorge Ramirez, Elaine Wong

2603.25039 • Mar 26, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This review paper introduces uncertainty quantification methods to quantum computing, showing how mathematical tools like probabilistic modeling and Bayesian inference can help address noise and error propagation in quantum devices. It aims to bridge applied mathematics and quantum information science to improve algorithm design and error mitigation.

Key Contributions

  • Bridging uncertainty quantification methodologies with quantum computing error analysis
  • Providing mathematical framework for noise characterization and error mitigation in quantum devices
  • Establishing rigorous statistical inference approaches for quantum computational reliability
uncertainty quantification quantum error mitigation noise characterization probabilistic modeling Bayesian inference
View Full Abstract

This review is designed to introduce mathematicians and computational scientists to quantum computing (QC) through the lens of uncertainty quantification (UQ) by presenting a mathematically rigorous and accessible narrative for understanding how noise and intrinsic randomness shape quantum computational outcomes in the language of mathematics. By grounding quantum computation in statistical inference, we highlight how mathematical tools such as probabilistic modeling, stochastic analysis, Bayesian inference, and sensitivity analysis, can directly address error propagation and reliability challenges in today's quantum devices. We also connect these methods to key scientific priorities in the field, including scalable uncertainty-aware algorithms and characterization of correlated errors. The purpose is to narrow the conceptual divide between applied mathematics, scientific computing and quantum information sciences, demonstrating how mathematically rooted UQ methodologies can guide validation, error mitigation, and principled algorithm design for emerging quantum technologies, in order to address challenges and opportunities present in modern-day quantum high performance and fault-tolerant quantum computing paradigms.

Finite-Degree Quantum LDPC Codes Reaching the Gilbert-Varshamov Bound

Kenta Kasai

2603.24588 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops new quantum error-correcting codes called quantum LDPC codes that achieve optimal error correction performance (reaching the Gilbert-Varshamov bound) while maintaining practical constraints on code structure. The researchers construct these codes using nested classical error-correcting codes and prove their effectiveness both theoretically and through computer-assisted verification.

Key Contributions

  • Construction of quantum LDPC codes with finite degree that achieve Gilbert-Varshamov bound performance
  • Rigorous computer-assisted proof demonstrating optimal distance properties for practical code parameters
quantum error correction LDPC codes Calderbank-Shor-Steane codes Gilbert-Varshamov bound fault tolerance
View Full Abstract

We construct nested Calderbank-Shor-Steane code pairs with non-vanishing coding rate from Hsu-Anastasopoulos codes and MacKay-Neal codes. In the fixed-degree regime, we prove relative linear distance with high probability. Moreover, for several finite degree settings, we prove Gilbert-Varshamov distance by a rigorous computer-assisted proof.
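For context on the bound named in the title: the quantum Gilbert-Varshamov bound for CSS codes guarantees codes of rate $R = 1 - 2h(\delta)$ at relative distance $\delta$, where $h$ is the binary entropy. The sketch below (our illustration, not the paper's construction) inverts that formula numerically.

```python
import math

def binary_entropy(x):
    """h(x) = -x log2(x) - (1-x) log2(1-x), with h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gv_relative_distance(rate, tol=1e-12):
    """Largest delta in (0, 1/2) with 1 - 2*h(delta) >= rate, by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - 2 * binary_entropy(mid) >= rate:
            lo = mid
        else:
            hi = mid
    return lo

# At vanishing rate the CSS GV distance is delta ~ 0.11 (where h(delta) = 1/2).
delta0 = gv_relative_distance(0.0)
```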

Flagging the Clifford hierarchy: Fault-tolerant logical $\frac{\pi}{2^l}$ rotations via measuring circuit gauge operators of non-Cliffords

Shival Dasu, Ben Criger

2603.24573 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops fault-tolerant quantum circuits for implementing specific rotation gates in the Clifford hierarchy using flag-based error detection. The authors provide efficient circuits with linear overhead for performing precise rotations that are essential for fault-tolerant quantum computing applications.

Key Contributions

  • Recursive flag circuits for detecting logical errors in non-Clifford rotation gates
  • O(l) overhead circuits for fault-tolerant logical rotations on CSS codes
  • Methods to increase fault distance through concatenation and Cliffordization
  • Resource state preparation circuits for gate teleportation implementations
fault-tolerant quantum computing error correction Clifford hierarchy CSS codes flag circuits
View Full Abstract

We provide a recursively defined sequence of flag circuits which will detect logical errors induced by non-fault-tolerant $R_{\overline{Z}}(\frac{\pi}{2^l})$ gates on CSS codes with a fault distance of two. As applications, we give a family of circuits with $O(l)$ gates and ancillae which implement fault-tolerant logical $R_{Z}(\frac{\pi}{2^l})$ or $R_{ZZ}(\frac{\pi}{2^l})$ gates on any $[[k + 2, k, 2]]$ iceberg code and fault-tolerant circuits of size $O(l)$ for preparing $|\frac{\pi}{2^l}\rangle$ resource states in the $[[7,1,3]]$ code, which can be used to perform fault-tolerant $R_{\overline{Z}}(\frac{\pi}{2^l})$ rotations via gate teleportation, allowing for implementations of these gates that bypass the high overheads of gate synthesis when $l$ is small relative to the precision required. We show how the circuits above can be generalized to $\pi(x_0.x_{1}x_{2}\ldots x_{l}) = \sum_{j=0}^{l} \pi\frac{x_j}{2^j}$ rotations with identical overheads in $l$, which could be useful in quantum simulations where time is digitized in binary. Finally, we illustrate two approaches to increase the fault distance of our construction. We show how to increase the fault distance of a Cliffordized version of the T gate circuit to $3$ in the Steane code and how to increase the fault distance of the $\frac{\pi}{2}$ iceberg circuit to $4$ through concatenation in two-level iceberg codes. This yields a targeted logical $R_{\overline{Z}}(\frac{\pi}{2})$ gate with fault distance $4$ on any row of logical qubits in an $[[(k_2+2)(k_1+2), k_1k_2, 4]]$ code.
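The binary-digitized rotation angle in the abstract is a sum of dyadic angles, one per set bit, which is why it decomposes into the paper's dyadic $Z$-rotations at cost linear in $l$. A toy numeric check (the bit string is made up):

```python
import math

# The angle pi * (x_0 . x_1 x_2 ... x_l) in binary is a sum of dyadic
# terms pi * x_j / 2**j, so the rotation decomposes into one pi/2**j
# rotation per nonzero bit.

def binary_fraction_angle(bits):
    """bits = [x_0, x_1, ..., x_l]; return pi * (x_0 . x_1 ... x_l)."""
    return sum(math.pi * x / 2**j for j, x in enumerate(bits))

# 1.011 in binary is 1 + 1/4 + 1/8 = 1.375, so the angle is 1.375*pi.
angle = binary_fraction_angle([1, 0, 1, 1])
```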

Robust Parametric Quantum Gate Against Stochastic Time-Varying Noise

Yang He, Zigui Zhang, Zibo Miao

2603.24345 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops an improved method called FF-QCRL for creating robust quantum control pulses that can handle realistic time-varying noise in quantum processors. The method combines filter function formalism with quantum control robustness landscape techniques to generate better control sequences for quantum gates that remain effective despite environmental disturbances.

Key Contributions

  • Integration of filter function formalism into quantum control robustness landscape framework
  • Development of FF-QCRL algorithm for robust pulse generation under realistic time-varying noise
quantum control robust gates NISQ filter functions noise mitigation
View Full Abstract

The performance of quantum processors in the noisy intermediate-scale quantum (NISQ) era is severely constrained by environmental noise and other uncertainties. While the recently proposed quantum control robustness landscape (QCRL) offers a powerful framework for generating robust control pulses for parametric gate families, its application has been practically restricted to quasi-static noise. To address the spectrally complex, time-varying noise prevalent in reality, we propose filter function-enhanced QCRL (FF-QCRL), which integrates filter function formalism into the QCRL framework. The resulting FF-QCRL algorithm minimizes a generalized robustness metric that faithfully encodes the impact of stochastic processes, enabling robust pulse-family generation for parametric gates under realistic time-varying noise. Numerical validation in a representative single-qubit setting confirms the effectiveness of the proposed method.

Correlated Atom Loss as a Resource for Quantum Error Correction

Hugo Perrin, Gatien Roger, Guido Pupillo

2603.24237 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an improved quantum error correction decoder for neutral-atom quantum computers that exploits correlations in atom loss events. The new approach reduces logical error rates by up to 10x compared to existing methods that treat atom losses as independent events.

Key Contributions

  • Novel decoding strategy that exploits loss correlations in neutral-atom quantum processors
  • Demonstration of order-of-magnitude reduction in logical error probability and increased loss threshold from 3.2% to 4%
quantum error correction surface code neutral atoms atom loss erasure channels
View Full Abstract

Atom loss is a dominant error source in neutral-atom quantum processors, yet its correlated structure remains largely unexploited by existing quantum error correction decoders. We analyze the performance of the surface code equipped with teleportation-based loss-detection units for neutral-atom quantum processors subject to circuit-level, partially correlated atom loss and depolarizing noise. We introduce and implement a decoding strategy that exploits loss correlations, effectively converting the delayed erasure channels stemming from atom loss to erasure channels. The decoder constructs a loss graph and dynamically updates loss probabilities, a procedure that is highly parallelizable and compatible with real-time operation. Compared to a decoder that assumes independent loss events, our approach achieves up to an order-of-magnitude reduction in logical error probability and increases the loss threshold from $3.2\%$ to $4\%$. Our approach extends to experimentally relevant regimes with partially correlated loss, demonstrating robust gains beyond the idealized fully correlated setting.

Mitigating Dynamic Crosstalk with Optimal Control

Matthias G. Krauss, Luise C. Butzke, Christiane P. Koch

2603.24205 • Mar 25, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a method to eliminate dynamic crosstalk in quantum computers using optimal control theory and the perfect entangler spectrum. The technique requires only minimal modifications to control pulses to suppress unwanted interactions between qubits that occur during gate operations.

Key Contributions

  • Development of optimal control method using perfect entangler spectrum to suppress dynamic crosstalk
  • Demonstration that minimal pulse modifications can eliminate the most difficult-to-predict form of quantum crosstalk
  • Establishment of generalizable control principle for eliminating unwanted interactions in quantum hardware
dynamic crosstalk optimal control perfect entangler spectrum parametric gates tunable coupler
View Full Abstract

The prevalence of quantum crosstalk is an important barrier to scaling frequency-addressable qubit architectures, with dynamic crosstalk being particularly difficult to detect and suppress. This form of crosstalk refers to unintended interactions driven by the gate control fields themselves. Here, we minimize dynamic crosstalk using quantum optimal control based on the perfect entangler spectrum, where spectral peaks signal unwanted entanglement with spectator qubits. Focusing on parametric gates in tunable coupler systems, we derive pulse shapes that eliminate dynamic crosstalk. Remarkably, only minimal pulse modifications are required to mitigate the form of crosstalk that is otherwise most difficult to predict. The ability to suppress dynamic crosstalk via the perfect entangler spectrum establishes a generalizable control principle for eliminating unwanted interactions in quantum hardware.

STAR-Magic Mutation: Even More Efficient Analog Rotation Gates for Early Fault-Tolerant Quantum Computers

Riki Toshio, Shota Kanasugi, Jun Fujisaki, Hirotaka Oshima, Shintaro Sato, Keisuke Fujii

2603.22891 • Mar 24, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces STAR-magic mutation, a new protocol for implementing rotation gates on fault-tolerant quantum computers that achieves better error scaling and significantly reduces execution time for small-angle rotations. The authors also propose a new quantum computing architecture called 'STAR ver. 3' that could simulate quantum many-body systems with only hundreds of thousands of physical qubits.

Key Contributions

  • Development of STAR-magic mutation protocol with improved error scaling O(θ_L^{2(1-Θ(1/d))}p_ph) for logical rotation gates
  • Introduction of STAR ver. 3 quantum computing architecture using Clifford+T+φ gate set for early fault-tolerant quantum computers
  • Demonstration that realistic quantum many-body system simulations are feasible with hundreds of thousands of physical qubits at 10^-3 error rates
fault-tolerant quantum computing surface codes magic state distillation rotation gates quantum simulation
View Full Abstract

We introduce STAR-magic mutation, an efficient protocol for implementing logical rotation gates on early fault-tolerant quantum computers. This protocol judiciously combines two of the latest state preparation protocols: transversal multi-rotation protocol and magic state cultivation. It achieves a logical rotation gate with a favorable error scaling of $\mathcal{O}(θ_L^{2(1-Θ(1/d))}p_{\text{ph}})$, while requiring only the ancillary space of a single surface code patch. Here, $θ_L$ is the logical rotation angle, $p_{\text{ph}}$ is the physical error rate, and $d$ is the code distance. This scaling marks a significant improvement over the previous state-of-the-art, $\mathcal{O}(θ_L p_{\text{ph}})$, making our protocol particularly powerful for implementing a sequence of small-angle rotation gates, like Trotter-based circuits. Notably, for $θ_L \lesssim 10^{-5}$, our protocol achieves a two-order-of-magnitude reduction in both the execution time and the error rate of analog rotation gates compared to the standard $T$-gate synthesis using cultivated magic states. Building upon this protocol, we also propose a novel quantum computing architecture designed for early fault-tolerant quantum computers, dubbed "STAR ver. 3". It employs a refined circuit compilation strategy based on Clifford+$T$+$φ$ gate set, rather than the conventional Clifford+$T$ or Clifford+$φ$ gate sets. We establish a theoretical bound on the feasible circuit size on this architecture and illustrate its capabilities by analyzing the spacetime costs for simulating the dynamics of quantum many-body systems. Specifically, we demonstrate that our architecture can simulate biologically-relevant molecules or lattice models at scales beyond the reach of exact classical simulation, with only a few hundred thousand physical qubits, even assuming a realistic error rate of $p_{\text{ph}}=10^{-3}$.

Low Latency GNN Accelerator for Quantum Error Correction

Alessio Cicero, Luigi Altamura, Moritz Lange, Mats Granath, Pedro Trancoso

2603.22149 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a specialized computer chip (FPGA accelerator) that uses neural networks to quickly detect and correct errors in quantum computers. The system can perform quantum error correction within the strict 1 microsecond timing requirement while maintaining higher accuracy than existing methods.

Key Contributions

  • FPGA accelerator implementation of GNN-based quantum error correction decoder
  • Hardware-aware optimizations achieving sub-1μs latency while maintaining high accuracy
  • Demonstrated performance improvements over state-of-the-art methods for surface codes up to distance d=7
quantum error correction surface codes neural network decoder FPGA accelerator superconducting qubits
View Full Abstract

Quantum computers have the potential to solve certain complex problems in a much more efficient way than classical computers. Nevertheless, current quantum computer implementations are limited by high physical error rates. This issue is addressed by Quantum Error Correction (QEC) codes, which use multiple physical qubits to form a logical qubit to achieve a lower logical error rate, with the surface code being one of the most commonly used. The most time-critical step in this process is interpreting the measurements of the physical qubits to determine which errors have most likely occurred - a task called decoding. Consequently, the main challenge for QEC is to achieve error correction with high accuracy within the tight $1μs$ decoding time budget imposed by superconducting qubits. State-of-the-art QEC approaches trade accuracy for latency. In this work, we propose an FPGA accelerator for a Neural Network based decoder as a way to achieve a lower logical error rate than current methods within the tight time constraint, for code distance up to d=7. We achieved this goal by applying different hardware-aware optimizations to a high-accuracy GNN-based decoder. In addition, we propose several accelerator optimizations leading to the FPGA-based decoder achieving a latency smaller than $1μs$, with a lower error rate compared to the state-of-the-art.

The color code, the surface code, and the transversal CNOT: NP-hardness of minimum-weight decoding

Shouzhen Gu, Lily Wang, Aleksander Kubica

2603.22064 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper proves that finding the minimum-weight decoding solution for quantum error correction codes is computationally intractable (NP-hard) for three important cases: color codes with Z errors, surface codes with general Pauli errors, and surface codes with transversal CNOT gates. The results establish fundamental computational limits for optimal decoding in fault-tolerant quantum computing.

Key Contributions

  • Proves NP-hardness of minimum-weight decoding for color codes with Pauli Z errors
  • Demonstrates computational intractability of optimal decoding for surface codes with general Pauli errors and transversal CNOT operations
  • Establishes sharp complexity separation between optimal and approximate decoding methods in fault-tolerant quantum computing
quantum error correction surface codes color codes minimum-weight decoding fault-tolerant quantum computing
View Full Abstract

The decoding problem is a ubiquitous algorithmic task in fault-tolerant quantum computing, and solving it efficiently is essential for scalable quantum computing. Here, we prove that minimum-weight decoding is NP-hard in three quintessential settings: (i) the color code with Pauli $Z$ errors, (ii) the surface code with Pauli $X$, $Y$ and $Z$ errors, and (iii) the surface code with a transversal CNOT gate, Pauli $Z$ and measurement bit-flip errors. Our results show that computational intractability already arises in basic and practically relevant decoding problems central to both quantum memories and logical circuit implementations, highlighting a sharp computational complexity separation between minimum-weight decoding and its approximate realizations.

Neural Belief-Matching Decoding for Topological Quantum Error Correction Codes

Luca Menti, Francisco Lázaro

2603.21730 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a neural network approach to improve quantum error correction decoding for topological codes like the toric code, replacing traditional belief-propagation methods with a neural belief-matching decoder that reduces computational complexity while maintaining performance.

Key Contributions

  • Development of neural belief-matching decoder that reduces average decoding complexity for topological quantum error correction
  • Introduction of convolutional architecture enabling weight sharing and transfer learning from small to large code instances without performance loss
quantum error correction topological codes toric code neural networks belief propagation
View Full Abstract

Quantum error correction (QEC) is critical for scalable fault-tolerant quantum computing. Topological codes, such as the toric code, offer hardware-efficient architectures but their Tanner graphs contain many girth-4 cycles that degrade the performance of belief-propagation (BP) decoding. For this reason, BP decoding is typically followed by a more complex second-stage decoder such as minimum-weight perfect matching. These combined decoders achieve remarkable performance, albeit at the cost of increased complexity. In this paper we propose two key improvements for the decoding of the toric code. The first one is replacing the BP decoder by a neural BP decoder, giving rise to the neural belief-matching decoder, which substantially decreases the average decoding complexity. The main drawback of this approach is the high cost associated with the training of the neural BP decoder. To address this issue, we impose a convolutional architecture on the neural BP decoder, enabling weight sharing across the spatially homogeneous structure of the code's factor graph. This design allows a model trained on a modest-size topological code to be directly transferred to much larger instances, preserving decoding quality while dramatically lowering the training burden. Our numerical experiments on toric-code lattices of various sizes demonstrate that this technique does not result in a noticeable loss in performance.
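The girth-4 cycles mentioned in the abstract are easy to see directly in a parity-check matrix: any two check rows that share at least two variable columns close a length-4 cycle in the Tanner graph. A minimal sketch (the toy matrix is illustrative, not a toric-code check matrix):

```python
def count_4cycles(H):
    """Count check-row pairs sharing >= 2 variables (each closes a 4-cycle)."""
    m = len(H)
    cycles = 0
    for i in range(m):
        for j in range(i + 1, m):
            overlap = sum(a & b for a, b in zip(H[i], H[j]))
            if overlap >= 2:
                cycles += 1
    return cycles

# Checks 0 and 1 both involve variables 0 and 1, so BP messages circulate
# on a short loop between them.
H = [[1, 1, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 1]]
```

On such short loops, BP messages re-circulate the same evidence, which is the degradation the neural decoder is trained to compensate for.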

All-optical quantum memory using bosonic quantum error correction codes

Kaustav Chatterjee, Niklas Budinger, Kian Latifi Yaghin, Lucas Borg Clausen, Ulrik Lund Andersen

2603.21721 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: high

This paper develops an all-optical quantum memory system that stores quantum information in fiber loops using Gottesman-Kitaev-Preskill error correction codes. The researchers optimize the error correction strategy and identify key performance thresholds, demonstrating storage times exceeding 400ms with high fidelity at sufficient squeezing levels.

Key Contributions

  • Developed optimized syndrome decoder for GKP codes that significantly outperforms standard decoders in finite-squeezing regime
  • Identified squeezing threshold of 6.7 dB and optimal correction spacing for maximizing memory lifetime
  • Demonstrated path to scalable all-optical fault-tolerant quantum storage with clear performance benchmarks
quantum memory GKP codes bosonic quantum error correction all-optical fault-tolerant quantum computing
View Full Abstract

Reliable quantum memory is essential for scalable quantum networks and fault-tolerant photonic quantum computing. We present a quantitative analysis of an all-optical quantum memory architecture in which a Gottesman-Kitaev-Preskill (GKP) encoded qubit is stored in a fibre loop and periodically stabilized using teleportation-based error correction. By modelling fibre propagation as a pure-loss channel and representing each correction round as an effective logical map acting on the Bloch vector, we obtain a compact description of the full multi-round memory channel. We show that syndrome decoder optimization plays a crucial role in the experimentally relevant finite-squeezing regime. The optimal decoder deviates from the standard square-grid GKP decoder in both tile size and tile shape, leading to significantly improved logical performance. Using this optimized decoding strategy, we identify a squeezing-dependent optimal spacing between correction nodes that maximizes the memory lifetime. Remarkably, this optimal segment length is largely independent of the desired storage time, providing a simple and practical design rule for fibre-loop quantum memory. We further find a squeezing threshold of approximately 6.7 dB below which intermediate error correction becomes counterproductive, while above threshold the achievable storage time increases approximately exponentially with squeezing. For example, at 17 dB squeezing, storage times exceeding 400 ms can be achieved with logical infidelity below 1%. These results establish clear performance benchmarks and reveal the fundamental trade-off between photon loss, squeezing, and correction frequency in continuous-variable architectures. Our findings provide actionable design principles for near-term photonic quantum memory and clarify the path toward scalable all-optical fault-tolerant quantum storage.
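For reference, the square-grid baseline that the paper's optimized decoder is compared against reduces, for a single quadrature, to rounding the measured shift to the nearest lattice point spaced $\sqrt{\pi}$ apart; a shift is mis-decoded exactly when it rounds to an odd multiple. A minimal single-quadrature sketch (our simplification of standard GKP decoding, not the paper's decoder):

```python
import math

SPACING = math.sqrt(math.pi)  # square-grid GKP lattice spacing

def correction(shift):
    """Displacement that returns the state to the nearest lattice point."""
    return -(shift - SPACING * round(shift / SPACING))

def logical_flip(shift):
    """True when rounding lands on an odd multiple of sqrt(pi), i.e. the
    correction completes a logical Pauli instead of undoing the shift."""
    return round(shift / SPACING) % 2 != 0

# Small shifts are undone; shifts beyond sqrt(pi)/2 flip the logical qubit.
```

With finite squeezing the shift distribution broadens, which is why reshaping the decision tiles away from this square grid can pay off.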

Neural network approach to mitigating intra-gate crosstalk in superconducting CZ gates

Yiming Yu, Yexiong Zeng, Ye-Hong Chen, Franco Nori, Yan Xia

2603.21631 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a neural network approach called Physics-Guided Neural Control (PGNC) to create better control pulses for quantum gates in superconducting quantum computers. The method specifically targets reducing crosstalk errors during two-qubit CZ gate operations, showing improved gate fidelity compared to existing optimization methods.

Key Contributions

  • Development of Physics-Guided Neural Control framework for quantum gate optimization
  • Demonstration of superior CZ gate fidelity and robustness against crosstalk in superconducting transmon systems
superconducting qubits crosstalk mitigation neural networks CZ gate transmon
View Full Abstract

The potential of quantum computing is fundamentally constrained by the inherent susceptibility of qubits to noise and crosstalk, particularly during multi-qubit gate operations. Existing strategies, such as hardware isolation and dynamical decoupling, face limitations in scalability, experimental feasibility, and robustness against complex noise sources. In this manuscript, we propose a physics-guided neural control (PGNC) framework to generate robust control pulses for superconducting transmon qubit systems, specifically targeting crosstalk mitigation. By combining a hardware-aware parameterization with a Hamiltonian-informed objective that accounts for condition-dependent crosstalk distortions, PGNC steers the search toward smooth and physically realizable pulses while efficiently exploring high-dimensional control landscapes. Numerical simulations for the CZ gate demonstrate superior fidelity and pulse smoothness compared to a Krotov baseline under matched constraints. Taken together, the results show consistent and practically meaningful improvements in both nominal and perturbed conditions, with pronounced gains in worst-case fidelity, supporting PGNC as a viable route to robust control on near-term transmon devices.

Systematic construction of digital autonomous quantum error correction for state preparation and error suppression via conditional Gaussian operations

Keitaro Anai, Suguru Endo, Shuntaro Takeda, Tomohiro Shitara

2603.21598 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops a new approach for autonomous quantum error correction in continuous-variable quantum computing that uses conditional Gaussian operations to automatically steer noisy quantum states toward target states without requiring explicit measurements and feedback. The method is demonstrated for preparing non-Gaussian resource states needed for universal quantum computation and for suppressing errors in cat states.

Key Contributions

  • Development of nullifier-based digital autonomous quantum error correction using conditional Gaussian operations
  • Demonstration of autonomous preparation of non-Gaussian resource states including cubic phase states and trisqueezed states for universal quantum computation
  • Autonomous error suppression scheme for cat and squeezed cat states with explicit gate decompositions and realistic noise analysis
autonomous quantum error correction continuous-variable quantum computing conditional Gaussian operations non-Gaussian states cat states
View Full Abstract

In continuous-variable quantum computing, autonomous quantum error correction (QEC) can dissipatively steer a noisy quantum state into a target state or manifold, enabling robust quantum information processing without explicit syndrome measurements and feedback. Here, we propose a nullifier-based digital autonomous QEC enabled by conditional Gaussian operations. By designing jump operators for target nullifiers and compiling the resulting Lindbladian into a Trotterized sequence of elementary conditional Gaussian operations, we demonstrate two use cases: (i) deterministic preparation of non-Gaussian resource states for universal computation, including finitely squeezed cubic phase states and approximate trisqueezed states, and (ii) autonomous suppression of dephasing error for cat and squeezed cat states. We provide explicit gate decompositions for the required conditional Gaussian operations and numerically evaluate the performance under realistic imperfections, including photon loss in the bosonic mode and ancillary-qubit decoherence. Our results clarify the resource requirements and trade-offs, such as circuit depth, time-step choices, and the required set of conditional Gaussian operations, for scalable, gate-level implementations of autonomous state preparation and error suppression.

High-yield integration design of fixed-frequency superconducting qubit systems using siZZle-CZ gates

Kazuhisa Ogawa, Yutaka Tabuchi, Makoto Negoro

2603.21537 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces the siZZle-CZ gate as an alternative to cross-resonance gates for fixed-frequency superconducting qubits, demonstrating that it can achieve high fidelities while being more robust to frequency collisions that limit manufacturing yields in large quantum processors.

Key Contributions

  • Development of siZZle-CZ gate architecture that relaxes frequency collision constraints in superconducting qubit systems
  • Demonstration of >99.6% fidelity controlled-Z gates across wide operating windows
  • Design of scalable lattice architectures with >1000 qubits showing 80-100% zero-collision yields
superconducting qubits transmon controlled-Z gate quantum gate fidelity scalable quantum computing
View Full Abstract

Fixed-frequency transmon qubits, characterized by simple architectures and long coherence times, are promising platforms for large-scale quantum computing. However, the rapidly increasing frequency collisions, which directly reduce the fabrication yield, hinder scaling, especially in cross-resonance (CR) gate-based architectures, wherein the restricted drive frequency severely limits the available design space. We investigate the Stark-induced ZZ by level excursions (siZZle) gate, which relaxes this limitation by allowing arbitrary drive-frequency choices. Extensive numerical analyses across a broad parameter range -- including the far-detuned regime that has received negligible prior attention -- reveal wide operating windows that support controlled-Z (CZ) fidelities >99.6%. Leveraging these windows, we design lattice architectures containing >1000 qubits, showing that even under 0.25% fabrication-induced frequency dispersion, the zero-collision yields in square and heavy-hexagonal lattices reach 80% and 100%, respectively. Thus, the siZZle-CZ gate is a scalable and collision-robust alternative to the CR gate, offering a viable route toward high-yield fixed-frequency transmon quantum processors.

Optimal Compilation of Syndrome Extraction Circuits for General Quantum LDPC Codes

Kai Zhang, Dingchao Gao, Zhaohui Yang, Runshi Zhou, Fangming Liu, Zhengfeng Ji, Jianxin Chen

2603.21499 • Mar 23, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents Auto-Stabilizer-Check (ASC), a software framework that automatically generates optimal quantum circuits for error correction in quantum low-density parity-check codes. ASC reduces circuit depth by approximately 50% and achieves 7-8x better error suppression compared to existing methods, making these advanced error correction codes more practical for large-scale quantum computers.

Key Contributions

  • Development of ASC framework for optimal syndrome extraction circuit compilation for arbitrary qLDPC codes
  • Definitive solution to IBM's open problem regarding depth-6 syndrome extraction circuits for bivariate bicycle codes
  • 50% reduction in circuit depth and 7-8x improvement in logical error rate suppression compared to existing methods
quantum error correction qLDPC codes syndrome extraction circuit compilation fault tolerance
View Full Abstract

Quantum error correcting codes (QECC) are essential for constructing large-scale quantum computers that deliver faithful results. As strong competitors to the conventional surface code, quantum low-density parity-check (qLDPC) codes are emerging rapidly: they offer high encoding rates while maintaining reasonable physical-qubit connectivity requirements. Despite the existence of numerous code constructions, a notable gap persists between these designs -- some of which remain purely theoretical -- and their circuit-level deployment. In this work, we propose Auto-Stabilizer-Check (ASC), a universal compilation framework that generates depth-optimal syndrome extraction circuits for arbitrary qLDPC codes. ASC leverages the sparsity of parity-check matrices and exploits the commutativity of X and Z stabilizer measurement subroutines to search for optimal compilation schemes. By iteratively invoking an SMT solver, ASC returns a depth-optimal solution if a satisfying assignment is found, and a near-optimal solution in cases of solver timeouts. Notably, ASC provides the first definitive answer to one of IBM's open problems: for all instances of bivariate bicycle (BB) code reported in their work, our compiler certifies that no depth-6 syndrome extraction circuit exists. Furthermore, by integrating ASC with an end-to-end evaluation framework -- one that assesses different compilation settings under a circuit-level noise model -- ASC reduces circuit depth by approximately 50% and achieves an average 7x-8x suppression of the logical error rate for general qLDPC codes, compared with as-soon-as-possible (ASAP) and coloration-based scheduling. ASC thus substantially reduces manual design overhead and demonstrates its strong potential to serve as a key component in accelerating hardware deployment of qLDPC codes.

Analyzing Decoders for Quantum Error Correction

Abtin Molavi, Feras Saad, Aws Albarghouthi

2603.20127 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a new systematic method for evaluating quantum error correction decoders that can outperform traditional Monte Carlo simulation, especially at low error rates. The approach uses structured search over possible errors and polynomial optimization to quantify both decoder accuracy and robustness to changes in physical error rates.

Key Contributions

  • Novel formal semantics for QEC programs based on the Stim circuit format
  • Systematic decoder evaluation method using structured error space search and constrained polynomial optimization that outperforms Monte Carlo simulation
quantum error correction decoder analysis fault tolerance Stim circuits polynomial optimization

Quantum error correction (QEC) enables reliable computation on noisy hardware by encoding logical information across many physical qubits and periodically measuring parities to detect errors. A decoder is the classical algorithm that uses these measurements to infer which error most likely occurred, so that the system can correct it. The decoder's accuracy (how rarely it makes the wrong guess) directly determines the scale of quantum computation that can be reliably executed. With a wealth of competing decoding algorithms, a QEC system designer needs reliable methods to evaluate them. Today, the dominant approach is to evaluate decoders using Monte Carlo simulation. However, simulation has several drawbacks, such as requiring many samples to produce low-variance estimates. In this work, we develop a new systematic analysis for evaluating decoders. We introduce a novel formal semantics of a core language for QEC programs that captures the de facto standard Stim circuit format, providing a principled theoretical foundation for the emerging space of fault-tolerant quantum systems design. Given a QEC program and a decoder, our verifier can quantify both the decoder accuracy and the decoder robustness to drift in physical error rate. Our approach has two key components: (i) a structured search over the space of possible errors; and (ii) a constrained polynomial optimization kernel. A thorough empirical evaluation of our approach suggests that it can outperform simulation, especially in low error rate regimes, and that it can be deployed to quantify decoder robustness over an interval of physical error rates.
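The contrast with Monte Carlo is visible on the smallest possible example: for a code tiny enough to enumerate every error pattern, the logical failure probability is an exact polynomial in the physical error rate p, with zero sampling variance at any p, which is why structured search pays off in the low-error-rate regime. A sketch on the 3-bit repetition code with a majority-vote decoder (not the paper's Stim-based semantics):

```python
from itertools import product

def exact_logical_error_rate(n, p):
    """Enumerate every error pattern on an n-bit repetition code and sum
    the probabilities of the patterns that defeat majority-vote
    decoding.  This is the structured-search alternative to Monte Carlo
    sampling: exact at any p, with no shot noise."""
    total = 0.0
    for pattern in product([0, 1], repeat=n):
        w = sum(pattern)
        if w > n // 2:  # majority decoder fails on this pattern
            total += p**w * (1 - p)**(n - w)
    return total

p = 1e-4
exact = exact_logical_error_rate(3, p)
# closed form for n = 3: 3 p^2 (1 - p) + p^3
closed = 3 * p**2 * (1 - p) + p**3
```

At p = 1e-4 the failure rate is about 3e-8; a Monte Carlo estimate of that with low variance would need on the order of 10^9 shots.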

Adaptive Parallelism-Aware Qubit Routing for Ion Trap QCCD Architectures

Anabel Ovide, Andreu Angles-Castillo, Carmen G. Almudever

2603.19969 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents a new method for efficiently moving qubits (trapped ions) between different zones in modular quantum computers, optimizing both the physical transport of ions and the parallel execution of quantum operations to improve overall performance and fidelity.

trapped-ion QCCD qubit routing ion transport quantum compilation

Trapped-ion Quantum Charge-Coupled Device (QCCD) architectures promise scalability through interconnected trap zones and dynamic ion transport; however, this transport capability creates a complex compilation challenge: how to move qubits efficiently without degrading fidelity. We introduce a routing strategy that turns this challenge into an advantage by exploiting operational parallelism across traps while adapting to both algorithmic structure and device topology through a configurable multi-parameter scoring mechanism. Across a broad suite of benchmarks and QCCD layouts, the method consistently reduces ion-transport overhead and improves execution fidelity, outperforming state-of-the-art routing techniques. These results highlight that explicitly balancing movement overhead and execution parallelism under architectural constraints is key to unlocking the full potential of modular trapped-ion quantum processors.
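The configurable multi-parameter scoring can be sketched abstractly: each candidate ion transport is scored by the parallelism it unlocks minus weighted penalties for transport overhead, and the router picks the best-scoring move. The feature names and weights below are invented for illustration, not taken from the paper:

```python
def score_move(move, weights):
    """Hypothetical multi-parameter score for a candidate ion transport
    (larger is better).  `move` carries illustrative features of the
    candidate; `weights` is the configurable trade-off between
    transport overhead and the parallelism the move unlocks."""
    return (weights["parallelism"] * move["gates_enabled"]
            - weights["distance"] * move["hops"]
            - weights["congestion"] * move["junction_crossings"])

candidates = [
    {"name": "A", "gates_enabled": 3, "hops": 2, "junction_crossings": 1},
    {"name": "B", "gates_enabled": 1, "hops": 1, "junction_crossings": 0},
]
w = {"parallelism": 1.0, "distance": 0.3, "congestion": 0.5}
best = max(candidates, key=lambda m: score_move(m, w))
```

Retuning `w` per device topology or per algorithm is what makes such a scheme adaptive: the same router can favor short transports on junction-heavy layouts and favor parallelism on linear ones.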

SDP bounds on quantum codes: rational certificates

Gerard Anglès Munné, Felix Huber

2603.19901 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops mathematical methods to determine the maximum possible size of quantum error-correcting codes with given parameters. The researchers use semidefinite programming with rational certificates to rigorously prove improved upper bounds on code sizes for quantum systems with 6-19 qubits.

Key Contributions

  • Development of rational infeasibility certificates for semidefinite programming bounds on quantum codes
  • Improvement of 18 upper bounds on maximum quantum code sizes for n-qubit systems with 6 ≤ n ≤ 19
quantum error correction quantum codes semidefinite programming coding theory fault tolerance

A fundamental problem in quantum coding theory is to determine the maximum size of quantum codes of given block length and distance. A recent work introduced bounds based on semidefinite programming, strengthening the well-known quantum linear programming bounds. However, floating-point inaccuracies prevent the extraction of rigorous non-existence proofs from the numerical methods. Here, we address this by providing rational infeasibility certificates for a range of quantum codes. Using a clustered low-rank solver with heuristic rounding to algebraic expressions, we can improve upon $18$ upper bounds on the maximum size of $n$-qubit codes with $6 \leq n \leq 19$. Our work highlights the practicality and scalability of semidefinite programming for quantum coding bounds.
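The core idea, replacing floating-point solver output with an exactly checkable certificate, is easiest to see in the linear-programming special case. A minimal sketch using a Farkas certificate verified over the rationals (a toy LP analogue of the paper's approach, not its clustered low-rank SDP machinery):

```python
from fractions import Fraction as F

def verify_farkas_certificate(A, b, y):
    """Check, in exact rational arithmetic, a Farkas certificate that
    {A x = b, x >= 0} has no solution: y^T A >= 0 componentwise and
    y^T b < 0.  Because every operation is exact, a verified certificate
    is a rigorous non-existence proof, free of floating-point doubt."""
    m, n = len(A), len(A[0])
    yTA = [sum(y[i] * A[i][j] for i in range(m)) for j in range(n)]
    yTb = sum(y[i] * b[i] for i in range(m))
    return all(c >= 0 for c in yTA) and yTb < 0

# Toy infeasible system: x1 + x2 = 1 and x1 + x2 = 2 simultaneously.
A = [[F(1), F(1)], [F(1), F(1)]]
b = [F(1), F(2)]
y = [F(1), F(-1)]  # y^T A = (0, 0) >= 0 and y^T b = -1 < 0
certified_infeasible = verify_farkas_certificate(A, b, y)
```

The paper's certificates play the same role for the semidefinite relaxation of the code-existence problem: a rounded, exact dual object that any referee can re-verify.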

Linear-optical generation of hybrid GKP entanglement from small-amplitude cat states

Shohei Kiryu, Yohji Chin, Masahiro Takeoka, Kosuke Fukui

2603.19870 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper proposes a method to create hybrid quantum states that combine two different types of quantum error correction codes using only standard optical equipment and small cat states. The approach could make fault-tolerant quantum computing more experimentally feasible by avoiding the need for complex non-Gaussian resources.

Key Contributions

  • Novel linear-optical scheme for generating hybrid GKP-photon entangled states using only small-amplitude cat states
  • Breeding process method to increase non-Gaussianity without complex resources
  • Extension to hybrid qudit states for enhanced quantum error correction capabilities
GKP codes hybrid bosonic codes linear optics cat states quantum error correction

Hybrid bosonic codes combining bosonic codes with photon states offer a promising pathway for fault-tolerant quantum computation. However, the efficient generation of such states in optical setups remains technically challenging due to the requirement for complex non-Gaussian resources. In this paper, we propose a novel scheme to efficiently generate hybrid entangled states between a GKP qubit and a photon-number state using small-amplitude cat states as the primary resource. We apply a breeding process using small-amplitude cat states to increase the non-Gaussianity of the input states. This method requires only linear optical elements and homodyne measurements. Furthermore, we demonstrate that this protocol can be extended to generate hybrid qudit states. This scheme has the potential to provide a resource-efficient and experimentally attractive route toward implementing hybrid quantum error correction.

Beyond-Ten-Hour Coherence in a Decoherence-Free Trapped-Ion Clock Qubit

Jiahao Pi, Xiangjia Liu, Junle Cao, Pengfei Wang, Lingfeng Ou, Erfu Gao, Hengchao Tu, Menglin Zou, Xiang Zhang, Junhua Zhang, Kihwan Kim

2603.19631 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: high Network: medium

This paper demonstrates quantum coherence lasting over 10 hours in trapped ion systems by combining clock-state qubits with decoherence-free subspace encoding. The technique uses pairs of ytterbium and barium ions to reject noise and maintain quantum information without requiring magnetic shielding or complex stabilization systems.

Key Contributions

  • Achieved >10 hour coherence times in trapped ion qubits using decoherence-free subspace encoding
  • Demonstrated passive error correction technique that eliminates technical noise constraints without magnetic shielding
  • Established pathway toward million-year coherence potential in atomic ion quantum systems
trapped ions decoherence-free subspace quantum coherence clock states quantum memory

Quantum systems promise to revolutionize information processing science and technology [1-3]. The preservation of quantum coherence, the defining property of qubits, fundamentally constrains the performance of quantum information processing with quantum memories [4]. While trapped atomic ions theoretically support million-year coherence based on spontaneous emission [5-7], experimental demonstrations have reached far less, only about an hour [8-13]. Here we combine clock-state qubits with decoherence-free subspace (DFS) encoding to achieve coherence exceeding ten hours. Using correlation-based phase tracking in 171Yb+ ion pairs sympathetically cooled by a 138Ba+ ion, we demonstrate this without the magnetic shielding or enhanced microwave phase stabilization that previously limited coherence times. DFS encoding references the qubit phase to the inter-ion energy difference to reject microwave phase noise and common-mode magnetic fluctuations, while clock states provide environmental insensitivity. Throughout measurements extended to 1600 seconds, we observe minimal coherence decay, with exponential fits yielding a coherence time of (3.77 +/- 1.09) x 10^4 seconds. Our results establish DFS encoding as a form of passive error correction that eliminates technical noise constraints, unlocking the million-year coherence potential of atomic ions for scalable quantum information processing.
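Why the DFS rejects common-mode dephasing can be checked directly on a toy two-qubit model: the encoded state lives in the zero eigenspace of Z1 + Z2, so a phase kick shared by both ions acts trivially, while a bare single-qubit superposition dephases. A minimal numpy sketch (illustrative only, not a model of the actual experiment):

```python
import numpy as np

def collective_dephasing(phi):
    """Diagonal of exp(-i*phi/2*(Z(x)I + I(x)Z)): both qubits see the
    same phase kick phi, the common-mode noise a DFS rejects."""
    z = np.array([1.0, -1.0])
    return np.exp(-1j * phi / 2 * np.add.outer(z, z).ravel())

# DFS logical state (|01> - |10>)/sqrt(2): zero eigenvector of Z1 + Z2.
dfs = np.zeros(4, complex)
dfs[1], dfs[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)

phi = 1.234  # arbitrary common-mode phase kick
fidelity = abs(np.vdot(dfs, collective_dephasing(phi) * dfs)) ** 2

# A bare |+> qubit under the same kick acquires a relative phase:
plus = np.array([1.0, 1.0]) / np.sqrt(2)
plus_after = np.exp(-1j * phi / 2 * np.array([1.0, -1.0])) * plus
plus_fid = abs(np.vdot(plus, plus_after)) ** 2  # = cos^2(phi/2) < 1
```

The DFS fidelity is exactly 1 for any common-mode phi; only differential noise between the two ions can dephase the encoded qubit, which is what pushes the limit toward the spontaneous-emission floor.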

Stabilizer Formalism for EAQECCs with Noise ebits

Ruihu Li, Guanmin Guo, Yang Liu, Hao Song

2603.19597 • Mar 20, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper develops a mathematical framework called stabilizer formalism for entanglement-assisted quantum error correcting codes (EAQECCs) that can work with imperfect entangled bits (ebits). The work provides theoretical tools to construct and analyze quantum error correction schemes when the shared entanglement resources contain noise.

Key Contributions

  • Development of stabilizer formalism for EAQECCs with noisy ebits
  • Derivation of equivalent formalisms using symplectic geometry and additive codes
  • Construction and performance analysis of specific EAQECCs with noise ebits
quantum error correction stabilizer codes entanglement-assisted codes noisy entanglement symplectic geometry

We introduce a stabilizer formalism for EAQECCs with noise ebits, using special subgroups of product groups of two Pauli groups. This formalism includes as special cases the two coding schemes for EAQECCs with imperfect ebits given by Lai and Brun (C. Y. Lai and T. A. Brun, Physical Review A 86, 032319 (2012)). Two equivalent formulations of the formalism are then derived in the nomenclature of symplectic geometry and additive codes. We apply this theory to construct some EAQECCs with noise ebits and analyze their performance.

Preserving MWPM-Decodability in Fault-Equivalent Rewrites

Maximilian Schweikart, Linnea Grans-Samuelsson, Aleks Kissinger, Benjamin Rodatz

2603.19522 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops methods to preserve the efficient decodability of quantum error correction codes when implementing fault-tolerant quantum circuits. The authors show how to maintain the special mathematical structure that allows fast decoding while constructing practical quantum computing operations.

Key Contributions

  • Formalized how ZX circuit rewrites affect quantum error correction decodability
  • Identified specific circuit transformations that preserve minimum-weight perfect matching decodability
  • Demonstrated construction of efficiently decodable fault-tolerant syndrome extraction circuits for matchable codes
quantum error correction fault tolerance minimum weight perfect matching ZX calculus surface codes

Decoding a quantum error correction code is generally NP-hard, but corrections must be applied at a high frequency to suppress noise successfully. Matchable codes, like the surface code, exhibit a special structure that makes it possible to efficiently, approximately solve the decoding problem through minimum-weight perfect matching (MWPM). However, this efficiency-enabling property can be lost when constructing implementations for fault-tolerant gadgets such as syndrome-extraction circuits or logical operations. In this work, we take a circuit-centric perspective to formalise how the decoding problem changes when applying ZX rewrites to a ZX diagram with a given detector basis. We demonstrate a set of rewrites that preserve MWPM-decodability of circuits and show that these matchability-preserving rewrites can be used to fault-tolerantly extract quantum circuits from phase-free ZX diagrams. In particular, this allows us to build efficiently decodable, fault-tolerant syndrome-extraction circuits for matchable codes.
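What "MWPM-decodable" buys in practice is easiest to see on the simplest matchable code. For the 1D repetition code, decoding reduces to pairing violated checks with each other or with a boundary, and on a line the optimal matching never crosses, so a short dynamic program finds it. A hedged sketch of that standard construction (not the paper's ZX rewrite machinery):

```python
from functools import lru_cache

def repetition_syndrome(errors):
    """Syndrome of the n-bit repetition code: check j compares bits j, j+1."""
    return [errors[j] ^ errors[j + 1] for j in range(len(errors) - 1)]

def mwpm_correction_weight(syndrome, n):
    """Minimum-weight perfect matching decoder for the 1D repetition
    code.  Defects (violated checks) are matched to each other or to a
    boundary; on a line the optimal matching never crosses, so a
    left-to-right DP over sorted defects is exact."""
    defects = [j for j, s in enumerate(syndrome) if s]

    @lru_cache(None)
    def best(i):
        if i >= len(defects):
            return 0
        d = defects[i]
        # match defect i to its nearest boundary ...
        cost = min(d + 1, n - 1 - d) + best(i + 1)
        # ... or to the next defect along the line
        if i + 1 < len(defects):
            cost = min(cost, defects[i + 1] - d + best(i + 2))
        return cost

    return best(0)

# A weight-2 error in the bulk produces two defects that MWPM pairs up.
n = 7
errors = [0, 0, 1, 1, 0, 0, 0]   # bits 2 and 3 flipped
w = mwpm_correction_weight(repetition_syndrome(errors), n)
```

The special structure exploited here (every fault flips at most two detectors, so decoding is graph matching) is exactly the matchability property the paper's rewrites are designed to preserve when building larger fault-tolerant gadgets.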

Assessing Spatiotemporally Correlated Noise in Superconducting Qubits via Pulse-Based Quantum Noise Spectroscopy

Mayra Amezcua, Leigh Norris, Tom Gilliss, Ryan Sitler, James Shackford, Gregory Quiroz, Kevin Schultz

2603.19373 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper develops a new method called quantum noise spectroscopy (QNS) to characterize correlated noise between multiple superconducting qubits, which is important for understanding and mitigating errors that can spread across quantum devices. The researchers demonstrate their technique can better identify these problematic noise correlations compared to existing methods.

Key Contributions

  • Development of nonparametric quantum noise spectroscopy protocol for characterizing spatiotemporally correlated noise in multi-qubit systems
  • Demonstration of superior performance over existing comb-based QNS protocols for noise characterization
  • Validation through engineered noise processes and application to quantum crosstalk characterization
quantum noise spectroscopy superconducting qubits spatiotemporal correlation quantum crosstalk error correction

Spatiotemporally correlated errors are widespread in quantum devices and are particularly adversarial to error correcting schemes. To characterize these errors, we propose and validate a nonparametric quantum noise spectroscopy (QNS) protocol to estimate both spectra and static errors associated with spatiotemporally correlated dephasing noise and fluctuating quantum crosstalk on two qubits. Our scheme reconstructs the real and imaginary components of the two-qubit cross-spectrum by using fixed total time pulse sequences and single qubit and joint two-qubit measurements to separately resolve spatially correlated noise processes. We benchmark our protocol by reconstructing the spectra of spatiotemporally correlated noise processes engineered via the Schrödinger Wave Autoregressive Moving Average technique, emulating dephasing errors. Furthermore, we show that the protocol can outperform existing comb-based QNS protocols. Our results demonstrate the utility of our protocol in characterizing spatiotemporally correlated noise and quantum crosstalk in a multi-qubit device for potential use in noise-adapted control or error protection schemes.

Low-weight quantum syndrome errors in belief propagation decoding

Haggai Landa

2603.19126 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops methods to identify problematic low-weight error patterns in quantum error correction codes that cause belief propagation decoding algorithms to converge slowly or fail. The authors analyze how these decoding failures occur and propose improvements to the decoder by modifying the decoding matrix to reduce both logical errors and decoding time.

Key Contributions

  • Empirical method to identify low-weight error syndromes that cause belief propagation decoding convergence issues
  • Analysis of BP dynamics for weight-four and weight-five errors showing exponential activation behavior
  • Decoder improvement technique using fault column combinations to reduce logical errors and decoding time
quantum error correction belief propagation syndrome decoding low-density parity check fault tolerance

We describe an empirical approach to identify low-weight combinations of columns of the decoding matrices of a quantum circuit-level noise model, for which belief-propagation (BP) algorithms converge possibly very slowly. Focusing on the logical-idle syndrome cycle of the low-density parity check gross code, we identify criteria providing a characterization of the Tanner subgraph of such low-weight error syndromes. We analyze the dynamics of iterations when BP is used to decode weight-four and weight-five errors, finding statistics akin to exponential activation in the presence of noise or escape from chaotic phase-space domains. We study how BP convergence improves when adding to the decoding matrix relevant combinations of fault columns, and show that the suggested decoder amendment can result in the reduction of both logical errors and decoding time.
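The empirical search for troublesome low-weight fault combinations can be brute-forced at toy scale: look for small sets of columns of the decoding matrix whose syndromes cancel over GF(2) while their combined logical action does not. A minimal sketch (the matrix names and the tiny example are illustrative, far below the gross code's size):

```python
from itertools import combinations

def find_low_weight_logicals(H, L, max_weight):
    """Find sets of at most `max_weight` fault columns whose syndromes
    (columns of H over GF(2)) cancel while their logical action
    (columns of L) does not -- the undetectable low-weight events that
    stress an iterative decoder."""
    m, n = len(H), len(H[0])
    hits = []
    for w in range(1, max_weight + 1):
        for cols in combinations(range(n), w):
            syn = [0] * m
            log = [0] * len(L)
            for c in cols:
                for i in range(m):
                    syn[i] ^= H[i][c]
                for i in range(len(L)):
                    log[i] ^= L[i][c]
            if not any(syn) and any(log):
                hits.append(cols)
    return hits

# Toy check matrix of the 3-bit repetition code; the all-ones column
# set is its (weight-3) logical operator.
H = [[1, 1, 0],
     [0, 1, 1]]
L = [[1, 1, 1]]
hits = find_low_weight_logicals(H, L, max_weight=3)
```

At realistic circuit-level sizes exhaustive enumeration is infeasible, which is why the paper characterizes the Tanner subgraphs of such syndromes instead of enumerating them outright.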

Post-Quantum Cryptography from Quantum Stabilizer Decoding

Jonathan Z. Lu, Alexander Poremba, Yihui Quek, Akshar Ramkumar

2603.19110 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: low

This paper proposes quantum stabilizer code decoding as a new hardness assumption for post-quantum cryptography, showing it can support key cryptographic primitives like public-key encryption and oblivious transfer. The authors argue this provides a quantum-native alternative to current post-quantum assumptions that could be more resistant to both classical and quantum attacks.

Key Contributions

  • Establishing quantum stabilizer decoding as a viable post-quantum cryptographic assumption with reductions to core cryptographic primitives
  • Developing new scrambling techniques for structured linear spaces with symplectic algebraic structure to enable security proofs
post-quantum cryptography quantum stabilizer codes cryptographic hardness assumptions public-key encryption oblivious transfer

Post-quantum cryptography currently rests on a small number of hardness assumptions, posing significant risks should any one of them be compromised. This vulnerability motivates the search for new and cryptographically versatile assumptions that make a convincing case for quantum hardness. In this work, we argue that decoding random quantum stabilizer codes -- a quantum analog of the well-studied LPN problem -- is an excellent candidate. This task occupies a unique middle ground: it is inherently native to quantum computation, yet admits an equivalent formulation with purely classical input and output, as recently shown by Khesin et al. (STOC '26). We prove that the average-case hardness of quantum stabilizer decoding implies the core primitives of classical Cryptomania, including public-key encryption (PKE) and oblivious transfer (OT), as well as one-way functions. Our constructions are moreover practical: our PKE scheme achieves essentially the same efficiency as state-of-the-art LPN-based PKE, and our OT is round-optimal. We also provide substantial evidence that stabilizer decoding does not reduce to LPN, suggesting that the former problem constitutes a genuinely new post-quantum assumption. Our primary technical contributions are twofold. First, we give a reduction from random quantum stabilizer decoding to an average-case problem closely resembling LPN, but which is equipped with additional symplectic algebraic structure. While this structure is essential to the quantum nature of the problem, it raises significant barriers to cryptographic security reductions. Second, we develop a new suite of scrambling techniques for such structured linear spaces, and use them to produce rigorous security proofs for all of our constructions.
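For orientation, the classical template the paper builds on is LPN: given (A, As + e) with sparse Bernoulli noise e, recover the secret s (or distinguish the pair from uniform). A hedged sketch of sampling a plain LPN instance (the paper's assumption adds symplectic algebraic structure on top of this template, which is not modeled here):

```python
import random

def lpn_sample(n, m, noise_rate, rng):
    """Sample an LPN instance: secret s in F_2^n, matrix A in F_2^{m x n},
    labels b = A s + e (mod 2) with Bernoulli(noise_rate) noise e.
    Without the noise e this is Gaussian elimination; with it, recovering
    s is the (conjecturally hard) LPN problem."""
    s = [rng.randrange(2) for _ in range(n)]
    A = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
    e = [1 if rng.random() < noise_rate else 0 for _ in range(m)]
    b = [(sum(a * x for a, x in zip(row, s)) + err) % 2
         for row, err in zip(A, e)]
    return s, A, b, e

rng = random.Random(7)
s, A, b, e = lpn_sample(n=16, m=64, noise_rate=0.1, rng=rng)
# Noiseless rows satisfy <a, s> = b (mod 2); exactly the noisy rows violate it.
violations = sum((sum(a * x for a, x in zip(row, s)) % 2) != bi
                 for row, bi in zip(A, b))
```

In the stabilizer-decoding analogue, the role of A is played by random stabilizer generators and the linear algebra lives in a symplectic space over F_2, which is the structural twist the paper's scrambling techniques have to tame.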

Fair Decoder Baselines and Rigorous Finite-Size Scaling for Bivariate Bicycle Codes on the Quantum Erasure Channel

Tushar Pandey

2603.19062 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper evaluates bivariate bicycle quantum error-correcting codes on erasure channels, addressing unfair decoder comparisons in previous work and using rigorous statistical methods to estimate true asymptotic error thresholds. The study shows these codes can achieve near-optimal performance without maximum-likelihood decoding and outperform surface codes in some metrics.

Key Contributions

  • Establishes fair decoder baselines for comparing bivariate bicycle codes against surface codes on quantum erasure channels
  • Provides rigorous finite-size scaling analysis to estimate true asymptotic error thresholds rather than finite-size pseudo-thresholds
  • Demonstrates bivariate bicycle codes achieve ~0.488 threshold within 2.4% of theoretical limit with 12x lower normalized overhead than surface codes
quantum error correction bivariate bicycle codes surface codes quantum erasure channel finite-size scaling

Fair threshold estimation for bivariate bicycle (BB) codes on the quantum erasure channel runs into two recurring problems: decoder-baseline unfairness and the conflation of finite-size pseudo-thresholds with true asymptotic thresholds. We run both uninformed and erasure-aware minimum-weight perfect matching (MWPM) surface code baselines alongside BP-OSD decoding of BB codes. With standard depolarizing-weight MWPM and no erasure information, performance matches random guessing on the erasure channel in our tested regime -- so prior work that compares against this baseline is really comparing decoders, not codes. Using 200,000 shots per point and bootstrap confidence intervals, we sweep five BB code sizes from $N=144$ to $N=1296$. Pseudo-thresholds (WER = 0.10) run from $p^* = 0.370$ to $0.471$; finite-size scaling (FSS) gives an asymptotic threshold $p^*_\infty \approx 0.488$, within 2.4% of the zero-rate limit and without maximum-likelihood decoding. On the fair baseline, BB at $N=1296$ has a modest edge in threshold over the surface code at twice the qubit count, and a 12x lower normalized overhead -- the latter is where the practical advantage sits. All runs are reproducible from recorded seeds and package versions.
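The bootstrap confidence intervals used for the threshold points can be sketched in a few lines: resample the recorded shots with replacement and take percentiles of the re-estimated word error rate. A minimal percentile bootstrap (the paper's exact procedure and parameters may differ):

```python
import random

def bootstrap_wer_ci(failures, shots, n_boot, rng, alpha=0.05):
    """Percentile-bootstrap confidence interval for a word error rate
    estimated from Monte Carlo shots.  `failures` is the number of
    logical failures observed out of `shots` decoding trials."""
    outcomes = [1] * failures + [0] * (shots - failures)
    estimates = []
    for _ in range(n_boot):
        resample = [outcomes[rng.randrange(shots)] for _ in range(shots)]
        estimates.append(sum(resample) / shots)
    estimates.sort()
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return sum(outcomes) / shots, (lo, hi)

rng = random.Random(0)
wer, (lo, hi) = bootstrap_wer_ci(failures=100, shots=2000,
                                 n_boot=500, rng=rng)
```

Reporting the interval rather than the point estimate is what lets crossing points of WER curves (and hence pseudo-thresholds) be quoted with honest uncertainty.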

XCOM: Full Mesh Network Synchronization and Low-Latency Communication for QICK (Quantum Instrumentation Control Kit)

Diego Martin, Luis H. Arnaldi, Kenneth Treptow, Neal Wilcer, Sho Uemura, Sara Sussman, David I Schuster, Gustavo Cancelo

2603.18977 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper presents XCOM, a networking system that enables precise synchronization (within 100 picoseconds) and low-latency communication between multiple quantum control boards in large-scale quantum computing systems. The system addresses the critical challenge of coordinating many hardware components needed to control hundreds or thousands of qubits in superconducting and spin qubit testbeds.

Key Contributions

  • Development of XCOM network achieving sub-100ps synchronization between quantum control boards
  • Enabling scalable multi-board control systems for large qubit count quantum computers
  • Providing deterministic all-to-all communication with sub-185ns latency for quantum control hardware
quantum control hardware synchronization QICK superconducting qubits scalable quantum systems

Quantum computing experiments and testbeds with large qubit counts have until recently been a privilege afforded only to large companies or quantum technologies where scaling to hundreds or thousands of qubits does not require a substantial increase in quantum control hardware (neutral atoms, trapped ions, or spin defects). Superconducting and spin qubit testbeds critically depend on scaling their control systems beyond what a single electronics board can provide. Multi-board control systems combining RF, fast DC control, bias, and readout require precise synchronization and communication across many hardware and firmware components. To address this, we present XCOM, a network that synchronizes QICK boards and the absolute clocks governing quantum program execution to within 100 ps, free of drift and loss of lock. XCOM also provides deterministic, all-to-all simultaneous data communication with latency below 185 ns. Like QICK itself, XCOM is compatible with a broad range of qubit technologies and is designed to scale to large systems.

A Flexible GKP-State-Embedded Fault-Tolerant Quantum Computation Configuration Based on a Three-Dimensional Cluster State

Peilin Du, Jing Zhang, Tiancai Zhang, Rongguo Yang, Kui Liu, Jiangrui Gao

2603.18778 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper proposes a new architecture for fault-tolerant quantum computing that uses three-dimensional cluster states built from optical photons with different properties (polarization, frequency, and orbital angular momentum). The researchers combine this with Gottesman-Kitaev-Preskill (GKP) error correction codes to create a flexible, scalable system for reliable quantum computation.

Key Contributions

  • Novel three-dimensional cluster state architecture using multiple optical degrees of freedom
  • Integration of partially squeezed surface-GKP codes with optimal squeezing threshold of 11.5 dB for fault-tolerant quantum computation
fault-tolerant quantum computing GKP states cluster states optical quantum computing error correction

The integration of diverse quantum resources and the exploitation of more degrees of freedom provide key operational flexibility for universal fault-tolerant quantum computation. In this work, we propose a flexible Gottesman-Kitaev-Preskill-state-embedded fault-tolerant quantum computation architecture based on a three-dimensional cluster state constructed in polarization, frequency, and orbital angular momentum domains. Specifically, we design optical entanglement generators to produce three diverse entangled pairs, and subsequently construct a three-dimensional cluster state via a beam-splitter network with several time delays. Furthermore, we present a partially squeezed surface-GKP code to achieve fault-tolerant quantum computation and ultimately find the optimal choice of implementing the squeezing gate to give the best fault-tolerant performance (the fault-tolerant squeezing threshold is 11.5 dB). Our scheme is flexible, scalable, and experimentally feasible, providing versatile options for future optical fault-tolerant quantum computation architecture.

High-threshold magic state distillation with quantum quadratic residue codes

Michael Zurel, Santanil Jana, Nadish de Silva

2603.18560 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a unified framework using quantum quadratic residue codes for magic state distillation, showing that several well-known quantum error-correcting codes are special cases of this framework. The authors demonstrate new codes that achieve high thresholds for distilling T states and Strange states, which are essential resources for fault-tolerant quantum computation.

Key Contributions

  • Unified existing magic state distillation codes under quantum quadratic residue framework
  • Presented new quantum quadratic residue codes with high thresholds for T state and Strange state distillation
  • Proved existence of infinitely many quantum quadratic residue codes for T state distillation with non-trivial thresholds
magic state distillation quantum error correction fault-tolerant quantum computing quadratic residue codes T states

We present applications of quantum quadratic residue codes in magic state distillation. This includes showing that existing codes which are known to distill magic states, like the $5$-qubit perfect code, the $7$-qubit Steane code, and the $11$-qutrit and $23$-qubit Golay codes, are equivalent to certain quantum quadratic residue codes. We also present new examples of quantum quadratic residue codes that distill qubit $T$ states and qutrit Strange states with high thresholds, and we show that there are infinitely many quantum quadratic residue codes that distill $T$ states with a non-trivial threshold. All of these codes, including the codes with the highest currently known thresholds for $T$ state and Strange state distillation, are unified under the umbrella of quantum quadratic residue codes.
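The arithmetic object underneath all of these codes is small enough to compute directly: the nonzero quadratic residues modulo an odd prime p form the index-2 subgroup of squares that defines a quadratic residue code's support. A quick sketch (binary QR codes require p ≡ ±1 mod 8, so that 2 is itself a residue):

```python
def quadratic_residues(p):
    """Nonzero quadratic residues modulo an odd prime p: the set
    {x^2 mod p : 1 <= x < p}, of size (p - 1) / 2, which serves as the
    defining set of a quadratic residue code of length p."""
    return sorted({(x * x) % p for x in range(1, p)})

# p = 23 is the classical case behind the binary Golay code mentioned
# above; p = 7 underlies the Steane code's classical ancestor.
qr23 = quadratic_residues(23)
```

That a handful of famous codes (Steane, Golay) and the new high-threshold distillation codes all arise from this one construction is what makes the unified framework attractive.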

Simulating Quantum Error Correction beyond Pauli Stochastic Errors

Jordan Hines, Corey Ostrove, Kenneth Rudinger, Stefan Seritan, Kevin Young, Robin Blume-Kohout, Timothy Proctor

2603.18457 • Mar 19, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new methods to simulate how realistic quantum errors (beyond simple Pauli errors) affect quantum error correction protocols, showing that coherent errors can significantly degrade fault-tolerant quantum computing performance compared to standard error models.

Key Contributions

  • Development of detector error model (DEM) mapping technique for non-Pauli and coherent errors in fault-tolerant quantum circuits
  • Demonstration that coherent errors can shift fault-tolerance thresholds and increase logical error rates by an order of magnitude compared to stochastic Pauli errors
quantum error correction fault-tolerant quantum computing coherent errors surface codes magic state distillation

Quantum error correction (QEC), the lynchpin of fault-tolerant quantum computing (FTQC), is designed and validated against well-behaved Pauli stochastic error models. But in real-world deployment, QEC protocols encounter a vast array of other errors -- coherent and non-Pauli errors -- whose impacts on quantum circuits are vastly different than those of stochastic Pauli errors. The impacts of these errors on QEC and FTQC protocols have been largely unpredictable to date due to exponential classical simulation cost. Here, we show how to accurately and efficiently model the effects of coherent and non-Pauli errors on FTQC, and we study the effects of such errors on syndrome extraction for surface and bivariate bicycle codes, and on magic state cultivation. Our analysis suggests that coherent error can shift fault-tolerance thresholds, increase the space-time cost of magic state cultivation, and can increase logical error rates by an order of magnitude compared to equivalent stochastic errors. These analyses are enabled by a new technique for mapping any Markovian circuit-level error model with sufficiently small error rates onto a detector error model (DEM) for an FTQC circuit. The resulting DEM enables Monte Carlo estimation of logical error rates and noise-adapted decoding, and its parameters can be analytically related to the underlying physical noise parameters to enable approximate strong simulation.
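The headline gap between coherent and stochastic errors already shows up for a single qubit: a systematic over-rotation applied n times adds amplitudes (error ~ n^2 theta^2 / 4), while its Pauli-twirled counterpart adds probabilities (error ~ n theta^2 / 4). A minimal sketch of that scaling (illustrative only; the paper's DEM technique handles full circuit-level models):

```python
import numpy as np

theta, n = 0.02, 50

# Coherent over-rotation about X, applied n times to |0>: the rotations
# compose to Rx(n*theta), so amplitudes add and the error probability
# grows quadratically in n.
coherent_err = np.sin(n * theta / 2) ** 2

# Pauli twirl of the same channel: an X flip with probability
# p = sin^2(theta/2) each round.  The residual error is the chance of
# an odd number of flips (exact for i.i.d. flips).
p = np.sin(theta / 2) ** 2
stochastic_err = 0.5 * (1 - (1 - 2 * p) ** n)

ratio = coherent_err / stochastic_err
```

For these parameters the coherent channel is tens of times worse after 50 rounds than its stochastic twirl, the single-qubit shadow of the order-of-magnitude logical-error-rate shifts reported above.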

Adaptive Loss-tolerant Syndrome Measurements

Yuanjia Wang, Todd A. Brun

2603.17988 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops adaptive protocols for quantum error correction that can handle both traditional Pauli errors and qubit losses (erasures) simultaneously. The authors extend existing fault-tolerant error correction methods to work with mixed error models and optimize syndrome measurement sequences to minimize overhead when qubits are lost.

Key Contributions

  • Development of adaptive syndrome measurement protocols for mixed Pauli error and erasure models
  • Quantification of minimal overhead for converting correctable erasures to located errors
  • Generalization of fault-tolerant error correction conditions to handle qubit losses
  • Extension of adaptive Shor-style measurement sequences to loss-tolerant quantum error correction
fault-tolerant quantum computing quantum error correction syndrome measurement qubit loss erasure errors
View Full Abstract

In the presence of qubit losses, the building blocks of fault-tolerant error correction (FTEC) must be revisited. Existing loss-tolerant approaches are mainly architecture-specific, and little attention has been given to optimizing the syndrome measurement sequences under loss. Schemes designed for the standard Pauli error model are not directly applicable because the syndrome patterns differ when both Pauli errors and erasures can occur. Based on recent advances in loss detection units and loss-tolerant syndrome extraction gadgets, we extend the study of adaptive Shor-style measurement sequences to the mixed error model. We begin by discussing how to adaptively convert correctable erasures into located errors. The minimal overhead is quantified by the number of stabilizer measurements, which can be reduced to a subgroup dimension problem for erasures arising in any FTEC circuit for qubits and prime-dimensional qudits. As a byproduct, we provide the construction of the canonical generating set with respect to a given bipartite partition for a stabilizer group on qudits of composite dimension. We then generalize both the weak and strong FTEC conditions. Finally, we present adaptive syndrome-measurement protocols for the mixed error model, generalizing the adaptive protocols for the standard Pauli error model.

Quantum Depth Compression via Local Dynamic Circuits

Benjamin Hall, Palash Goiporia, Rich Rines

2603.17774 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces Quantum Depth Compression (QDC), a compilation framework that uses dynamic circuits to significantly reduce the depth of quantum circuits by reorganizing non-Clifford gates and utilizing mid-circuit measurements. The method achieves depth linear in the number of non-Clifford gates while avoiding expensive SWAP operations for connectivity constraints.

Key Contributions

  • Development of QDC framework that reduces circuit depth to linear in non-Clifford gates
  • Method to achieve grid connectivity without SWAP networks using dynamic circuits
  • Demonstration of reduced depth and CNOT count compared to standard compilers
quantum circuit compilation dynamic circuits depth compression Clifford gates non-Clifford gates
View Full Abstract

We present Quantum Depth Compression (QDC), a general compilation framework that utilizes dynamic circuits to reduce arbitrary quantum circuits to depth linear in the number of non-Clifford gates and to grid connectivity without the need for expensive SWAP networks. The framework consists of pushing Clifford gates to the end of the circuit, resulting in a sequence of non-Clifford Pauli-phasors followed by an all-Clifford sub-circuit, both of which are then reduced to constant depth via dynamic circuits. We show that applying QDC to random Pauli-phasor circuits lowers both their depth and CNOT count compared to a standard alternative compiler.
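The rewrite at the heart of QDC -- pushing a Clifford gate past a non-Clifford rotation -- rests on the conjugation identity: a circuit "C then Rz(θ)" equals "Rz'(θ) then C", where Rz' is the rotation about the Clifford-conjugated Pauli axis. A minimal numpy check of this identity for C = H on one qubit (an illustration only, not the paper's compiler):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def rot(P, theta):
    # exp(-i * theta/2 * P) for a Pauli matrix P
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * P

theta = 0.37
# Circuit "H then Rz(theta)" equals "Rx(theta) then H":
# since H Z H = X, the Clifford conjugates the Z-phasor to an X-phasor
# and moves to the end of the circuit.
assert np.allclose(rot(Z, theta) @ H, H @ rot(X, theta))
```

Repeating this move for every Clifford leaves a chain of Pauli-phasors followed by a purely Clifford tail, the form QDC then compresses with dynamic circuits.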

Fast stabilizer state preparation via AI-optimized graph decimation

Michael Doherty, Matteo Puviani, Jasmine Brewer, Gabriel Matos, David Amaro, Ben Criger, David T. Stephen

2603.17743 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents AI-optimized methods to prepare stabilizer states (important quantum states used in error correction) more efficiently by reducing the number of two-qubit gates needed. The researchers use reinforcement learning and Monte Carlo tree search to find better ways to construct these quantum states, achieving up to 2.5x reduction in gate count for large quantum error correcting codes.

Key Contributions

  • AI-based method (QuSynth) combining reinforcement learning and Monte Carlo tree search for optimal Clifford gate selection
  • Demonstration of up to 2.5x reduction in two-qubit gate count for stabilizer state preparation including large codes like the 144-qubit gross code
stabilizer states quantum error correction Clifford gates reinforcement learning Monte Carlo tree search
View Full Abstract

We propose a general method for preparing stabilizer states with reduced two-qubit gate count and depth compared to the state of the art. The method starts from a graph state representation of the stabilizer state and iteratively reduces the number of edges in the graph using two-qubit Clifford gates to produce a unitary preparation circuit. We explore various heuristic search and AI-based approaches to optimally choose Clifford gates at each step, the most sophisticated of which is a combination of reinforcement learning and Monte Carlo tree search that we call QuSynth. We apply our method to synthesize code states of various quantum error correcting codes including the 23-qubit Golay code and the 144-qubit gross code, the latter of which is significantly beyond the qubit number that is accessible to prior optimal circuit synthesis methods. We demonstrate that our techniques are capable of reducing the required two-qubit gates by up to a factor of 2.5 compared to previous approaches while retaining low circuit depth.
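The edge-reduction idea can be illustrated with two standard graph-state moves: a CZ applied across an edge toggles that edge, and local complementation at a vertex (a free single-qubit Clifford) toggles all edges among its neighbours. The toy greedy reducer below uses only these two moves and is purely illustrative -- it is not the paper's QuSynth search, which combines reinforcement learning with Monte Carlo tree search:

```python
import numpy as np

def local_complement(adj, v):
    """Toggle all edges among the neighbours of v (a local Clifford on a graph state)."""
    nbrs = np.flatnonzero(adj[v])
    new = adj.copy()
    for i in nbrs:
        for j in nbrs:
            if i < j:
                new[i, j] ^= 1
                new[j, i] ^= 1
    return new

def greedy_decimate(adj):
    """Toy heuristic: apply an edge-count-reducing local complementation when one
    exists, otherwise delete one edge with a CZ; return the two-qubit gate count."""
    adj = adj.copy()
    cz_count = 0
    while adj.any():
        best = min(range(len(adj)), key=lambda v: local_complement(adj, v).sum())
        if local_complement(adj, best).sum() < adj.sum():
            adj = local_complement(adj, best)      # free single-qubit move
        else:
            i, j = np.argwhere(np.triu(adj))[0]
            adj[i, j] = adj[j, i] = 0              # CZ on (i, j) toggles the edge away
            cz_count += 1
    return cz_count

# A triangle: one free local complementation removes an edge, then two CZs
# finish the job -- versus three CZs naively.
triangle = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)
assert greedy_decimate(triangle) == 2
```

Each loop iteration strictly reduces the edge count, so the routine always terminates with a disentangling circuit whose reverse prepares the graph state.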

Independent Trivariate Bicycle Codes

Aygul Azatovna Galimova

2603.17703 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a new class of quantum error-correcting codes called independent trivariate bicycle codes that extend existing bicycle codes to three dimensions, achieving better performance metrics and lower error rates than previous multivariate bicycle codes.

Key Contributions

  • Development of independent trivariate bicycle codes extending bivariate framework to three cyclic dimensions
  • Construction of high-performance codes including [[140,6,14]] code with superior kd²/n ratio and pseudothreshold performance
  • Demonstration of improved error correction capabilities on realistic superconducting noise models
quantum error correction LDPC codes bicycle codes fault tolerance quantum computing
View Full Abstract

We introduce six independent trivariate bicycle (ITB) codes, which extend the bivariate bicycle framework of Bravyi et al. to three cyclic dimensions. Using asymmetric polynomial pairs on three-dimensional tori, we construct four codes including a $[[140,6,14]]$ code with $kd^2/n = 8.40$. In the code-capacity setting, the $[[140,6,14]]$ code achieves a pseudothreshold of $8.0\%$ and $kd^2/n = 8.40$, exceeding the best multivariate bicycle code of Voss et al. ($7.9\%$, $kd^2/n = 2.67$). With circuit-level depolarizing noise, pseudothresholds reach $0.59\%$ for $[[140,6,14]]$ and $0.53\%$ for $[[84,6,10]]$. On the SI1000 superconducting noise model, the $[[140,6,14]]$ code achieves a per-round per-observable rate of $5.6 \times 10^{-5}$ at $p = 0.20\%$. We additionally present two self-dual codes with weight-8 stabilizers: $[[54,14,5]]$ ($kd^2/n = 6.48$) and $[[128,20,8]]$ ($kd^2/n = 10.0$). These results expand the design space of algebraic quantum LDPC codes and demonstrate that the third cyclic dimension yields competitive candidates for practical fault-tolerant implementations.
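The bicycle construction can be sketched numerically: take three commuting cyclic shift matrices on an $l \times m \times p$ torus, form $A$ and $B$ as polynomials in them, and set $H_X = [A\,|\,B]$, $H_Z = [B^T\,|\,A^T]$. Stabilizer commutation is then automatic, since $H_X H_Z^T = AB + BA = 2AB \equiv 0 \pmod 2$. A minimal numpy sketch with an arbitrary polynomial pair (the dimensions and polynomials here are illustrative, not the paper's codes):

```python
import numpy as np

def shift(l):
    """l x l cyclic shift (permutation) matrix."""
    return np.roll(np.eye(l, dtype=int), 1, axis=1)

# Three commuting cyclic shifts on an l x m x p torus.
l, m, p = 3, 3, 3
x = np.kron(np.kron(shift(l), np.eye(m, dtype=int)), np.eye(p, dtype=int))
y = np.kron(np.kron(np.eye(l, dtype=int), shift(m)), np.eye(p, dtype=int))
z = np.kron(np.kron(np.eye(l, dtype=int), np.eye(m, dtype=int)), shift(p))

# Example polynomial pair over GF(2) (made up for illustration).
A = (x + y + z) % 2
B = (np.linalg.matrix_power(x, 2) + y @ z + np.eye(l * m * p, dtype=int)) % 2

# CSS parity checks in bicycle form.
HX = np.hstack([A, B]) % 2
HZ = np.hstack([B.T, A.T]) % 2

# Commutation: HX @ HZ^T = A B + B A = 2 A B = 0 (mod 2),
# because A and B are polynomials in commuting shifts.
assert not ((HX @ HZ.T) % 2).any()
```

The construction's freedom lies entirely in choosing the polynomial pair, which is what the ITB codes optimize over the third cyclic dimension.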

General circuit compilation protocol into partially fault-tolerant quantum computing architecture

Tomochika Kurita

2603.17428 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a new circuit execution protocol for fault-tolerant quantum computers that can efficiently perform continuous rotation gates using lattice surgery with surface codes. The approach uses optimization techniques to minimize time overhead from probabilistic operations and includes performance prediction tools.

Key Contributions

  • Circuit execution protocol for STAR architecture enabling direct continuous Rz(θ) gate operations
  • QUBO-based optimization for resource state allocation to reduce time overhead
  • Performance estimation framework for predicting execution time and optimizing qubit topology
fault-tolerant quantum computing surface codes lattice surgery logical qubits error correction
View Full Abstract

As we enter the early-FTQC era, circuit execution protocols with logical qubits and certain error-correcting codes are being discussed. Here, we propose a circuit execution protocol for the space-time efficient analog rotation (STAR) architecture. Gate operations within the STAR architecture are based on lattice surgery with surface codes, but it allows direct execution of continuous gates $Rz(θ)$ as non-Clifford gates instead of $T = Rz(π/4)$. $Rz(θ)$ operations involve creation of resource states $|m_θ\rangle = \frac{1}{\sqrt{2}} (|0 \rangle + e^{iθ} |1\rangle ) $ followed by ZZ joint measurements with target logical qubits. While employing $Rz(θ)$ enables more efficient circuit execution, both the state creations and the joint measurements are probabilistic processes that adopt repeat-until-success (RUS) protocols, which are likely to result in considerable time overhead. Our circuit execution protocol aims to reduce this time overhead through parallel trials of resource state creation and more frequent trials of joint measurements. By employing quadratic unconstrained binary optimization (QUBO) to determine resource state allocations within the space, we make our protocol efficient. Furthermore, we propose performance estimators given the target circuit and qubit topology. These predict the time performance in less time than actual simulations do, and help find the optimal qubit topology to run the target circuits efficiently.
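The RUS time overhead the protocol targets follows geometric statistics: if each trial succeeds independently with probability p and k trials run in parallel per round, a round succeeds with probability 1-(1-p)^k and the expected number of rounds is its reciprocal. A small illustration (p and k are arbitrary here; this is not the paper's full timing model):

```python
def expected_rounds(p, k=1):
    """Expected repeat-until-success rounds with k parallel trials per round,
    each succeeding independently with probability p (geometric expectation)."""
    q = 1.0 - (1.0 - p) ** k
    return 1.0 / q

# A single trial with p = 0.5 needs 2 rounds on average;
# four parallel trials cut this to 16/15, close to deterministic.
assert expected_rounds(0.5) == 2.0
assert abs(expected_rounds(0.5, k=4) - 16 / 15) < 1e-12
```

This is why parallel resource-state creation pays off: overhead shrinks exponentially in the number of simultaneous trials, at the cost of the extra space the QUBO allocation has to manage.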

Noise-resilient nonadiabatic geometric quantum computation for bosonic binomial codes

Dong-Sheng Li, Yang Xiao, Yu Wang, Yang Liu, Zhi-Cheng Shi, Ye-Hong Chen, Yi-Hao Kang, Yan Xia

2603.17250 • Mar 18, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a method for quantum computing that combines binomial codes (which protect against certain types of errors) with geometric quantum gates (which are naturally resistant to noise) in superconducting systems. The researchers develop control protocols that make quantum computations more reliable by leveraging both error correction techniques and noise-resilient gate operations.

Key Contributions

  • Integration of binomial codes with nonadiabatic geometric quantum computation for enhanced error resilience
  • Development of customized control protocols combining reverse engineering and optimal control for superconducting systems
  • Demonstration of high-fidelity quantum gates with tolerance to parameter fluctuations and decoherence
nonadiabatic geometric quantum computation binomial codes error correction superconducting qubits quantum gates
View Full Abstract

The binomial code is renowned for its parity-mediated loss immunity and loss-error recoverability, while geometric phases are widely recognized for their intrinsic resilience against noise. Capitalizing on their complementary merits, we propose a noise-resilient protocol to realize nonadiabatic geometric quantum computation with binomial codes in a superconducting system composed of a microwave cavity off-resonantly dispersively coupled to a three-level qutrit. The control field is designed by integrating reverse engineering and optimal control. This design provides a customized control protocol featuring strong error-tolerance and inherent noise-resilience. Using experimentally accessible parameters in superconducting systems, numerical simulations show that the protocol yields relatively high average fidelity for geometric quantum gates based on the binomial code, even in the presence of parameter fluctuations and decoherence. Thus, this protocol may provide a practical approach for realizing reliable nonadiabatic geometric quantum computation with binomial codes with current technology.

Optimizing Logical Mappings for Quantum Low-Density Parity Check Codes

Sayam Sethi, Sahil Khan, Maxwell Poster, Abhinav Anand, Jonathan Mark Baker

2603.17167 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new compilation and mapping techniques for quantum low-density parity check (LDPC) codes, specifically the Gross code, to reduce error rates in fault-tolerant quantum computing. The authors introduce a two-stage pipeline using hypergraph partitioning and priority-based algorithms to optimize how logical qubits are mapped onto hardware, achieving significant reductions in program failure rates.

Key Contributions

  • Two-stage mapping pipeline using hypergraph partitioning for logical qubit placement on Gross code architectures
  • Demonstration of up to 36% reduction in error rates from inter-module measurements compared to existing mapping approaches
  • Analysis showing that existing NISQ and FTQC mappers are insufficient for LDPC code architectures due to two-level mapping complexity
fault-tolerant quantum computing LDPC codes logical qubit mapping error correction quantum compilation
View Full Abstract

Early demonstrations of fault tolerant quantum systems have paved the way for logical-level compilation. For fault-tolerant applications to succeed, execution must finish with a low total program error rate (i.e., a low program failure rate). In this work, we study a promising candidate for future fault-tolerant architectures with low spatial overhead: the Gross code. Compilation for the Gross code entails compiling to Pauli Based Computation and then reducing the rotations and measurements to the Bicycle ISA. Depending on the configuration of modules and the placement of code modules on hardware, one can reduce the amount of resulting Bicycle instructions to produce a lower overall error rate. We find that NISQ-based, and existing FTQC mappers are insufficient for mapping logical qubits on Gross code architectures because 1. they do not account for the two-level nature of the logical qubit mapping problem, which separates into code modules with distinct measurements, and 2. they naively account only for length two interactions, whereas Pauli-Products are up to length $n$, where $n$ is the number of logical qubits in the circuit. For these reasons, we introduce a two-stage pipeline that first uses hypergraph partitioning to create in-module clusters, and then executes a priority-based algorithm to efficiently assign clusters onto hardware. We find that our mapping policy reduces the error contribution from inter-module measurements, the largest source of error in the Gross Code, by up to $\sim36\%$ in the best case, with an average reduction of $\sim13\%$. On average, we reduce the failure rates from inter-module measurements by $\sim22\%$ with localized factory availability, and by $\sim17\%$ on grid architectures, allowing hardware developers to be less constrained in developing scalable fault tolerant systems due to software driven reductions in program failure rates.

Secure Quantum Communication: Simulation and Analysis of Quantum Key Distribution Protocols

Mahendra Rasay, Emmanuel D. Sebastian, Subhash Prasad Sah, David Chinamerem Akah, Ajay Kumar Singh

2603.16690 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: high

This paper simulates and analyzes quantum key distribution protocols (BB84, B92, and E91) using IBM Qiskit, evaluating their performance under realistic conditions like noise and eavesdropping. The study aims to assess the practical feasibility of QKD as a secure communication method in the quantum computing era.

Key Contributions

  • Simulation-based comparative analysis of three major QKD protocols (BB84, B92, E91) using IBM Qiskit
  • Evaluation of protocol performance under realistic quantum channel conditions including noise, decoherence, and eavesdropping attacks
quantum key distribution QKD protocols BB84 quantum cryptography quantum communication
View Full Abstract

Quantum computing poses significant threats to conventional cryptographic techniques such as RSA and AES, motivating the need for quantum-secure communication methods. Quantum Key Distribution (QKD) offers information-theoretic security based on fundamental quantum principles. This paper presents a simulation-based analysis of well-known QKD protocols, namely BB84, B92, and E91, using the IBM Qiskit framework. Realistic quantum channel effects, including noise, decoherence, and eavesdropping, are modeled to evaluate protocol performance. Key metrics such as error rate, secret key generation, and security characteristics are analyzed and compared. The study highlights practical challenges in QKD implementation, including hardware limitations and channel losses, and discusses insights toward scalable and robust quantum communication systems. The results support the feasibility of QKD as a promising solution for secure communication in the quantum era.
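The BB84 sifting and eavesdropping logic the simulations evaluate can be sketched without Qiskit: Alice sends random bits in random bases, an intercept-resend Eve optionally measures in her own random basis, and Bob keeps only the rounds where his basis matches Alice's. A toy model of this (illustrative only; the paper's Qiskit simulations additionally model channel noise and decoherence):

```python
import random

def bb84_sift(n, eavesdrop=False, seed=0):
    """Toy BB84: returns (sifted key length, quantum bit error rate)."""
    rng = random.Random(seed)
    sifted, errors = 0, 0
    for _ in range(n):
        bit = rng.randrange(2)              # Alice's bit
        a_basis = rng.randrange(2)          # Alice's basis (0 = Z, 1 = X)
        value, basis = bit, a_basis
        if eavesdrop:                       # intercept-resend attack
            e_basis = rng.randrange(2)
            if e_basis != basis:
                value = rng.randrange(2)    # wrong-basis measurement randomizes
            basis = e_basis                 # Eve resends in her own basis
        b_basis = rng.randrange(2)          # Bob's basis
        b_bit = value if b_basis == basis else rng.randrange(2)
        if b_basis == a_basis:              # sifting: keep matching-basis rounds
            sifted += 1
            errors += (b_bit != bit)
    return sifted, errors / sifted
```

Without Eve the sifted key is error-free; an intercept-resend attack raises the QBER to roughly 25%, which is exactly the statistical signature BB84 uses to detect eavesdropping.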

CryoCMOS RF multiplexer for superconducting qubit control, readout and flux biasing at millikelvin temperatures with picowatt power consumption

Liam Fallik, Sriram Balamurali, Alican Caglar, Rohith Acharya, Jacques Van Damme, Tsvetan Ivanov, Shana Massar, Ruben Asanovski, A. M. Vadiraj, Massim...

2603.16608 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a cryogenic CMOS RF multiplexer that operates at extremely low temperatures (10 millikelvin) with ultra-low power consumption, designed to address the input-output bottleneck in large-scale superconducting quantum computers by enabling multiple qubits to share the same control and readout lines.

Key Contributions

  • Record-low 200 pW power consumption cryoCMOS RF multiplexer operating at 10 mK
  • Demonstration of direct qubit connection with minimal impact on coherence times
  • Scalable solution for multiplexing readout, flux, and control lines in superconducting quantum processors
cryogenic CMOS superconducting qubits RF multiplexer quantum control scalable quantum systems
View Full Abstract

Large-scale cryogenic quantum systems are constrained by an input-output bottleneck between room-temperature electronics and millikelvin stages, particularly in superconducting qubit platforms. This bottleneck is most acute for output lines, where bulky and expensive microwave components limit scalability. A promising approach for scalable characterization and testing is to perform signal multiplexing directly at the qubit plane. We demonstrate a cryogenic CMOS (cryoCMOS) RF multiplexer operating at 10 millikelvin with record-low static power consumption of 200 pW. The device provides < 2 dB insertion loss and > 30 dB isolation across DC-8 GHz. Direct connection to transmon qubits marginally affects coherence times in the range of 100 microseconds, enabling multiplexing of readout, flux and, in principle, XY drive lines. This work introduces cryoCMOS multiplexers as valuable tools for scalable, high-throughput cryogenic characterization and testing, and advances co-integrated quantum-classical control for future large-scale quantum processors.

Quantum classification and search algorithms using spinorial representations

Lauro Mascarenhas, Vinicius N. A. Lula-Rocha, Marco A. S. Trindade

2603.16564 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents two quantum algorithms - one for classification and one for search with non-uniform initial conditions - both formulated using Clifford algebras and spinorial representations. The approach provides a unified algebraic framework where quantum states and operators are constructed from spinor representations, with the classification algorithm using orthogonal states for different classes and the search algorithm implementing oracles directly through Clifford algebra generators.

Key Contributions

  • Novel algebraic formulation of quantum classification algorithm using spinorial representations
  • Unified framework based on Clifford algebras for both classification and search algorithms
  • Simplified oracle implementation for quantum search using Clifford algebra generators
quantum algorithms Clifford algebras spinorial representations quantum classification quantum search
View Full Abstract

We propose an algebraic formulation for two distinct quantum algorithms: a quantum classification algorithm and a quantum search algorithm with a non-uniform initial distribution, both based on Clifford algebras and spinorial representations. In the classification algorithm, we exploit properties of spinorial representations to construct orthogonal quantum states associated with different classes, allowing the identification of an item's class through the evaluation of expectation values of operators derived from the generators of the Clifford algebra. In the quantum search algorithm, we consider a database with prior information in which the oracle is implemented directly using generators of the Clifford algebra, simplifying its realization. The proposed approach provides a unified algebraic description for both algorithms, employing spinorial representations in the construction of quantum states and operators. Computational implementations are presented.
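The spinorial machinery rests on the defining Clifford-algebra relation $\{γ_i, γ_j\} = 2δ_{ij}$. For three generators the Pauli matrices furnish the smallest such representation, which a quick numpy check confirms (a generic illustration of the algebra, not the paper's construction):

```python
import numpy as np

# Generators of the Clifford algebra Cl(3): the Pauli matrices satisfy
# g_i g_j + g_j g_i = 2 * delta_ij * I, the relation underlying
# spinorial representations.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
gens = [X, Y, Z]

for i, gi in enumerate(gens):
    for j, gj in enumerate(gens):
        anti = gi @ gj + gj @ gi
        expected = 2 * np.eye(2) if i == j else np.zeros((2, 2))
        assert np.allclose(anti, expected)
```

Because the generators square to the identity and pairwise anticommute, expectation values of products of generators cleanly separate orthogonal states -- the property the classification algorithm exploits.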

Distinguishing types of correlated errors in superconducting qubits

Hannah P. Binney, H. Douglas Pinckney, Kate Azar, Patrick M. Harrington, Shantanu Jha, Mingyu Li, Jiatong Yang, Felipe Contipelli, Renée DePencier Pi...

2603.16494 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper investigates two types of correlated errors in superconducting qubits - those caused by radiation-induced quasiparticles and those caused by mechanical vibrations from refrigeration equipment. The researchers develop methods to distinguish between these error types and show that certain qubit designs with larger superconducting gaps can protect against both types of correlated errors.

Key Contributions

  • Method for distinguishing radiation-induced vs vibration-induced correlated errors in superconducting qubits
  • Demonstration that transmon qubits with superconducting gap greater than qubit energy are protected against both radiation and vibration errors
superconducting qubits correlated errors quantum error correction transmon quasiparticles
View Full Abstract

Errors in superconducting qubits that are correlated in time and space can pose problems for quantum error correction codes. Radiation from cosmic and terrestrial sources can increase the quasiparticle (QP) density in a superconducting qubit device, resulting in an increased rate of QPs tunneling across proximal Josephson junctions (JJs) and causing correlated errors. Mechanical vibrations, such as those induced by the pulse tube in a dry dilution refrigerator, are also a known source of correlated errors. We present a method for distinguishing these two types of errors by their temporal, spatial, and frequency domain features, enabling physically motivated error-mitigation strategies. We also present accelerometer data to study the correlation between dilution refrigerator vibrations and the errors. We measure arrays of transmon qubits where the difference in superconducting gap across the JJ is less than the qubit energy, as well as those where the gap is greater than the qubit energy, which has been shown to mitigate radiation-induced errors. We show that these latter devices are also protected against vibration-induced errors.

Reducing C-NOT Counts for State Preparation and Block Encoding via Diagonal Matrix Migration

Zexian Li, Guofeng Zhang, Xiao-Ming Zhang

2603.16492 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper presents algorithms to reduce the number of C-NOT gates needed for quantum state preparation and block encoding, which are fundamental operations in quantum computing. The authors achieve significant improvements in gate counts by developing a diagonal matrix migration technique that takes advantage of how diagonal matrices commute with certain quantum operations.

Key Contributions

  • Improved C-NOT count for n-qubit state preparation from (23/24)2^n to (11/12)2^n gates
  • Single-ancilla block encoding protocol achieving (11/48)4^n C-NOT count for 2^(n-1)×2^(n-1) matrices
  • Diagonal matrix migration technique based on commutativity properties to minimize C-NOT gate usage
  • Optimized algorithms for low-rank matrices with C-NOT count (K+(11/12))2^n for rank-K matrices
C-NOT gates state preparation block encoding gate complexity quantum circuits
View Full Abstract

Quantum state preparation and block encoding are versatile and practical input models for quantum algorithms in scientific computing. The circuit complexity of state preparation and block encoding frequently dominates the end-to-end gate complexity of quantum algorithms. We give algorithms with lower C-NOT counts for both state preparation and block encoding. For a general $n$-qubit state, we improve the C-NOT count of the Plesch-Brukner algorithm, proposed in 2011, from $(23/24)2^n$ to $(11/12)2^n$. For block encoding, our single-ancilla protocol for $2^{n-1}\times 2^{n-1}$ matrices uses the spectral norm as subnormalization and achieves a C-NOT count with leading term $(11/48)4^n$. This count even falls below the lower bound of $(1/4)4^n$ for $n$-qubit unitary synthesis. Further optimization is performed for low-rank matrices, which frequently arise in practical applications. Specifically, we achieve a C-NOT count with leading term $(K+(11/12))2^n$ for a rank-$K$ matrix. Our approach builds upon the recursive block-ZXZ decomposition of Krol et al. and introduces a diagonal matrix migration technique, based on the commutativity of a diagonal matrix with a uniformly controlled rotation about the $z$-axis, to minimize the use of C-NOT gates.
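The migration step exploits a simple fact: a uniformly controlled rotation about the z-axis is itself a diagonal matrix, so any diagonal matrix commutes with it and can be moved ("migrated") across it without extra gates. A two-qubit numerical spot-check (the angles are arbitrary; this is only the commutativity fact, not the paper's full decomposition):

```python
import numpy as np

rng = np.random.default_rng(1)

def ucrz(angles):
    """Uniformly controlled Rz: apply Rz(angles[k]) on the target when the
    control register is in state k. The result is a diagonal unitary."""
    phases = np.concatenate([[-a / 2, a / 2] for a in angles])
    return np.diag(np.exp(1j * phases))

D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4)))   # arbitrary diagonal unitary
U = ucrz(rng.uniform(0, 2 * np.pi, 2))

# Diagonal matrices commute, so D can be migrated past the uniformly
# controlled z-rotation for free.
assert np.allclose(D @ U, U @ D)
```

Collecting such diagonal factors and merging them at a single point in the circuit is what shaves the constant in the C-NOT counts above.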

Chipmunq: Fault-Tolerant Compiler for Chiplet Quantum Architectures

Peter Wegmann, Aleksandra Świerkowska, Emmanouil Giortamis, Pramod Bhatotia

2603.16389 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper presents Chipmunq, a specialized compiler designed to map fault-tolerant quantum circuits onto modular chiplet quantum computer architectures. The compiler addresses the challenge of efficiently compiling large-scale quantum error correction circuits while managing the constraints of distributed quantum hardware connected by noisy inter-chiplet links.

Key Contributions

  • First hardware-aware compiler specifically designed for fault-tolerant quantum circuits on modular chiplet architectures
  • Quantum-error-correction-aware partitioning strategy that preserves logical qubit patch integrity
  • Significant improvements in compilation efficiency and circuit performance metrics including 13.5x speedup and 86.4% depth reduction
fault-tolerant quantum computing quantum error correction chiplet architecture quantum compiler logical qubits
View Full Abstract

As quantum computing advances toward fault-tolerance through quantum error correction, modular chiplet architectures have emerged to provide the massive qubit counts required while overcoming fabrication limits of monolithic chips. However, this transition introduces a critical compilation gap: existing frameworks cannot handle the scale of fault-tolerant quantum circuits while managing the noisy, sparse interconnects of chiplet backends. We present Chipmunq, the first hardware-aware compiler for mapping and routing fault-tolerant circuits onto modular architectures. Chipmunq employs a quantum-error-correction-aware partitioning strategy that preserves the integrity of logical qubit patches, preventing prohibitive gate overheads common in general-purpose compilers. Our evaluation demonstrates that Chipmunq achieves a 13.5x speedup in compilation time compared to state-of-the-art tools. By incorporating chiplet constraints and defective qubits, it reduces circuit depth by 86.4% and SWAP gate counts by 91.4% across varying code distances. Crucially, Chipmunq overcomes heterogeneous inter-chiplet links, improving logical error rates by up to two orders of magnitude.

A Scalable Open-Source QEC System with Sub-Microsecond Decoding-Feedback Latency

Junyi Liu, Yi Lee, Yilun Xu, Gang Huang, Xiaodi Wu

2603.16203 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents an open-source quantum error correction (QEC) system that integrates real-time qubit control with ultra-fast error syndrome decoding and correction feedback. The system achieves 446 nanosecond end-to-end latency for a distance-3 surface code and can theoretically scale to handle ~881 physical qubits with sub-microsecond latency.

Key Contributions

  • First fully integrated open-source QEC system with sub-microsecond decoding-feedback latency
  • Scalable distributed multi-board FPGA architecture that can handle up to distance-21 surface codes
  • Complete hardware platform ready for deployment with superconducting qubits including real-time control and communication
quantum error correction surface codes fault-tolerant quantum computing FPGA real-time control
View Full Abstract

Quantum error correction (QEC) is essential for realizing large-scale, fault-tolerant quantum computation, yet its practical implementation remains a major engineering challenge. In particular, QEC demands precise real-time control of a large number of qubits and low-latency, high-throughput and accurate decoding of error syndromes. While most prior work has focused primarily on decoder design, the overall performance of any QEC system depends critically on all its subsystems including control, communication, and decoding, as well as their integration. To address this challenge, we present an open-source, fully integrated QEC system built on RISC-Q, a generator for RISC-V-based quantum control architectures. Implemented on RFSoC FPGAs, our system prototype integrates real-time qubit control, a scalable distributed multi-board architecture, and the state-of-the-art hardware QEC decoder within a low-latency, high-throughput decoding pipeline, forming a complete hardware platform ready for deployment with superconducting qubits. Experimental evaluation on a three-board prototype based on AMD ZCU216 RFSoCs demonstrates an end-to-end QEC decoding-feedback latency of 446 ns for a distance-3 surface code, including syndrome aggregation, network communication, syndrome decoding, and error distribution. Extrapolating from measured subsystem performance and state-of-the-art decoder benchmarks, the architecture can achieve sub-microsecond decoding-feedback latency up to a distance-21 surface code ($\sim$881 physical qubits) when scaled to larger hardware configurations.

Monolithic Segmented 3D Ion Trap for Quantum Technology Applications

Abhishek Menon, Michael Strauss, George Tomaras, Liam Jeanette, April X. Sheffield, Devon Valdez, Yuanheng Xie, Visal So, Henry De Luo, Midhuna Durais...

2603.16048 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper presents a new design for ion trap quantum computers using a monolithic 3D fused silica structure that can trap heavy ions like Yb+ and Ba+ with very low heating rates and high optical access. The researchers demonstrate high-fidelity two-qubit gate operations (99.3%) and establish this as a scalable platform for quantum computing with trapped ions.

Key Contributions

  • Development of monolithic 3D fused silica blade trap with 250 μm ion-electrode distance enabling stable high RF voltage operation
  • Demonstration of 99.3% two-qubit gate fidelity with heavy ions (Yb+) and low motional heating rates (1.1 quanta/s)
  • Achievement of high numerical aperture optical access (0.7 NA) while maintaining deep trapping potentials for scalable quantum computing
  • Establishment of modular platform suitable for quantum simulation, computation, metrology and networking applications
ion trap trapped ions quantum gates Yb+ Paul trap
View Full Abstract

Monolithic three-dimensional (3D) Paul traps combine the high-precision microfabrication of two-dimensional (2D) chip traps with the deep trapping potentials and low heating rates characteristic of macroscopic Paul traps, which are typically manually assembled. However, achieving low motional heating rates and optical access with a high numerical aperture (NA) while maintaining the high radio-frequency (RF) voltages required for heavy ionic species, such as Yb$^{+}$ and Ba$^{+}$, remains a significant technical challenge. In this work, we present a segmented, monolithic 3D fused silica blade trap, featuring an ion-electrode distance of 250 μm with stable operation at high RF voltages. We benchmark the performance of the trap using Yb$^{+}$ ions, demonstrating axially homogeneous trapping potentials over 200 μm around the axial center of the trap, high multi-directional optical access (up to 0.7 NA), and a radial motional heating rate as low as 1.1 ± 0.1 quanta/s at radial trap frequencies of about 3 MHz near room temperature. Furthermore, we observe a motional Ramsey coherence time, $T_{2}$, of around 95 ms for the radial center-of-mass mode. We demonstrate a two-qubit gate fidelity of ${99.3}^{+0.7}_{-1.5}\%$ with state preparation and measurement correction. These results establish fused-silica monolithic blade traps as a scalable, modular platform for quantum simulation, computation, metrology, and networking with heavy ionic species.

CSS codes from the Bruhat order of Coxeter groups

Kamil Bradler

2603.16036 • Mar 17, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a new method for constructing CSS quantum error-correcting codes using the mathematical structure of Coxeter groups and their Bruhat ordering. The approach generates families of CSS codes with controllable parameters and stabilizer weights by exploiting the geometric properties of these algebraic structures.

Key Contributions

  • Novel method for generating CSS codes using Coxeter group Bruhat order and chain complexes
  • Construction of CSS code families with controlled stabilizer weights and parameters, including codes with thousands of qubits
  • Development of weight-reduction techniques for handling heavy stabilizers in irregular weight distributions
CSS codes quantum error correction Coxeter groups Bruhat order stabilizer codes
View Full Abstract

I introduce a method to generate families of CSS codes with interesting code parameters. The object of study is Coxeter groups, both finite and infinite (reducible or not), and a geometrically motivated partial order of Coxeter group elements named after Bruhat. The Bruhat order is known to provide a link to algebraic topology -- it doubles as a face poset capturing the inclusion relations of the $p$-dimensional cells of a regular CW complex, and that is what makes it interesting for QEC code design. Assisted by the Bruhat face poset interval structure unique to Coxeter groups, I show that the corresponding chain complexes can be turned into multitudes of CSS codes. Depending on the approach, I obtain CSS codes (and their families) with controlled stabilizer weights, for example $[6006, 924, \{{\leq14},{\leq7}\}]$ (stabilizer weights 14 and 9) and $[22880,3432,\{{\leq8},{\leq16}\}]$ (weights 16 and 10), and CSS codes with highly irregular stabilizer weight distributions such as $[571,199,\{5,5\}]$. For the latter, I develop a weight-reduction method to deal with rare heavy stabilizers. Finally, I show how to extract four-term (length three) chain complexes that can be interpreted as CSS codes with a metacheck.
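Any CSS construction ultimately yields a pair of parity-check matrices satisfying the orthogonality condition H_X H_Z^T = 0 (mod 2), which is what a chain complex guarantees. A minimal numeric check, using the well-known [[7,1,3]] Steane code rather than the paper's Coxeter-group codes:

```python
import numpy as np

# Hamming(7,4) parity-check matrix; the Steane code uses it for both
# X- and Z-type stabilizers.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_css_pair(hx, hz):
    """CSS condition: every X-check commutes with every Z-check,
    i.e. H_X @ H_Z.T vanishes mod 2."""
    return not np.any((hx @ hz.T) % 2)

ok = is_css_pair(H, H)
n_qubits = H.shape[1]                       # 7 physical qubits
k_logical = n_qubits - 2 * H.shape[0]       # 7 - 6 = 1 logical qubit
```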

Universal Weakly Fault-Tolerant Quantum Computation via Code Switching in the [[8,3,2]] Code

Shixin Wu, Dawei Zhong, Todd A. Brun, Daniel A. Lidar

2603.15610 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a fault-tolerant quantum computing protocol that achieves universal quantum computation by switching between two versions of an [[8,3,2]] quantum error correction code, where one supports single-qubit operations and the other supports multi-qubit gates, circumventing theoretical limitations on gate sets within single codes.

Key Contributions

  • Development of a fault-tolerant code-switching protocol between two versions of the [[8,3,2]] quantum error correction code
  • Demonstration of universal quantum computation using postselected error detection with quadratic logical error suppression
  • Numerical validation through implementation of Grover's search algorithm on three logical qubits
fault-tolerant quantum computing quantum error correction code switching Eastin-Knill theorem transversal gates
View Full Abstract

Code-switching offers a route to universal, fault-tolerant quantum computation by circumventing the limitation implied by the Eastin-Knill theorem against a universal transversal gate set within a single quantum code. Here, we present a fault-tolerant code-switching protocol between two versions of the $[[8, 3, 2]]$ code. One version supports weakly fault-tolerant single-qubit Clifford gates, while the other supports a logical $\overline{\mathrm{CCZ}}$ gate via transversal $T/T^\dagger$ together with logical $\overline{\mathrm{CZ}}$, $\overline{\mathrm{CNOT}}$, and $\overline{\mathrm{SWAP}}$ gates. Because both codes have distance 2, the protocol operates in a postselected, error-detecting regime: single faults lead to detectable outcomes, and accepted runs exhibit quadratic suppression of logical error rates. This yields a universal scheme for postselected fault-tolerant computation. We validate the protocol numerically through simulations of state preparation, code switching, and a three-logical-qubit implementation of Grover's search.
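Because both codes have distance 2, single faults are detected and the run discarded, so accepted runs fail only when two or more faults conspire. A toy numeric sketch of this quadratic suppression (the location count and undetected fraction are invented parameters, not from the paper):

```python
from math import comb

def accepted_logical_rate(p, n_locations=10, undetected_frac=0.5):
    """Toy model of a distance-2 error-detecting code: any single
    fault is detected (run rejected); a fixed fraction of multi-fault
    events slips through as a logical error on an accepted run."""
    p_k = lambda k: comb(n_locations, k) * p**k * (1 - p)**(n_locations - k)
    p_multi = 1 - p_k(0) - p_k(1)            # two or more faults
    p_logical = undetected_frac * p_multi    # accepted but wrong
    p_accept = p_k(0) + p_logical
    return p_logical / p_accept              # error rate on accepted runs

r1 = accepted_logical_rate(1e-3)
r2 = accepted_logical_rate(2e-3)
# doubling p roughly quadruples the accepted-run logical error rate
```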

A direct controlled-phase gate between microwave photons

Adrian Copetudo, Amon M. Kasper, Tanjung Krisnanda, Gregoire Veyrac, Shushen Qin, Hui Khoon Ng, Yvonne Y. Gao

2603.15587 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: medium

This paper demonstrates a new method to create direct interactions between microwave photons in superconducting cavities without exciting ancillary nonlinear elements, which reduces noise and decoherence. The researchers use this approach to implement a controlled-phase gate that directly entangles photons, providing a key building block for fault-tolerant bosonic quantum computing.

Key Contributions

  • Engineering a Raman-assisted cross-Kerr interaction between microwave photons without exciting nonlinear elements
  • Implementing a direct controlled-phase gate between oscillators that operates within bosonic code spaces
  • Demonstrating photon-number parity mapping for error detection while preserving coherence
  • Expanding the bosonic cQED toolbox for fault-tolerant quantum computing
bosonic quantum computing controlled-phase gate superconducting cavities cross-Kerr interaction fault tolerance
View Full Abstract

Useful quantum information processing ultimately requires operations over large Hilbert spaces, where logical information can be encoded efficiently and protected against noise. Harmonic oscillators naturally provide access to such high-dimensional spaces and enable hardware-efficient, error-correctable bosonic encodings. However, direct entangling operations between oscillators remain an outstanding challenge. Existing strategies typically rely on parametrically activating interactions that populate the excited states of an ancillary nonlinear element. This induces an effective interaction between the oscillators, at the expense of introducing additional dissipation channels and potential leakage from the encoded manifold. Here, we engineer a Raman-assisted cross-Kerr interaction between microwave photons hosted in two superconducting cavities, without exciting the nonlinear element, thereby suppressing coupler-induced decoherence. This approach generates a direct coupling between microwave photons that is exploited to implement a controlled-phase gate within the single- and two-photon subspaces of two oscillators, directly entangling them. Finally, we harness this dynamics to map the photon-number parity of a storage cavity onto an auxiliary oscillator rather than a nonlinear element, enabling error detection while protecting the storage mode from measurement-induced decoherence. Our work expands the bosonic circuit quantum electrodynamics (cQED) toolbox by enabling coherence-preserving direct photon-photon interactions between oscillators. This realizes an entangling gate that operates entirely within a bosonic code space while suppressing decoherence from nonlinear ancilla excitations, providing a key primitive for fault-tolerant bosonic quantum computing.
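A cross-Kerr interaction χ·n_a·n_b accumulates a phase exp(iχt·n_a·n_b), which acts as a controlled-phase (CZ) gate on the zero/one-photon subspace when χt = π. This is the textbook relation, not the paper's Raman-assisted derivation:

```python
import numpy as np

dim = 2  # truncate each mode to the {|0>, |1>} photon subspace
n = np.diag(np.arange(dim))
n_a = np.kron(n, np.eye(dim))      # photon number of mode a
n_b = np.kron(np.eye(dim), n)      # photon number of mode b

chi_t = np.pi                      # interaction strength * gate time
# Cross-Kerr evolution is diagonal in the two-mode Fock basis
U = np.diag(np.exp(1j * chi_t * np.diag(n_a @ n_b)))

cz = np.diag([1, 1, 1, -1]).astype(complex)
matches_cz = np.allclose(U, cz)    # only |11> acquires a minus sign
```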

Simulating the Open System Dynamics of Multiple Exchange-Only Qubits using Subspace Monte Carlo

Tameem Albash, N. Tobias Jacobson

2603.15577 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops a Monte Carlo simulation method for modeling multiple exchange-only qubits in open quantum systems by leveraging the fact that spin projection quantum numbers remain unchanged under exchange operations. The method reduces computational complexity from 8^(2n) to 3^(2n) dimensions and is applied to study multi-round Bell state stabilization circuits using 6 exchange-only qubits.

Key Contributions

  • Development of Subspace Monte Carlo method that reduces computational complexity for simulating multiple exchange-only qubits from 8^(2n) to 3^(2n) dimensions
  • Demonstration of the method on multi-round Bell state stabilization circuits with reset-if-leaked gadgets using 6 EO qubits
exchange-only qubits open system dynamics Monte Carlo simulation Bell state stabilization quantum error correction
View Full Abstract

We propose a Monte Carlo based method for simulating the open system dynamics of multiple exchange-only (EO) qubits. In the EO encoding, the total spin projection quantum number along the $z$-axis of the three constituent spins remains unchanged under exchange operations, in contrast to the open system (or multi-qubit miscalibration) setting where coherent and incoherent mixing of states with different quantum numbers occurs. In our approach, we choose to measure the total spin component along the $z$-axis of each EO qubit after every logical quantum operation, which decoheres coherent mixtures of states with different spin projection quantum numbers. Independent simulations thus give different trajectories of the system in the associated subspaces, so we refer to this method as the Subspace Monte Carlo method. With each EO qubit having a definite spin projection quantum number, the density matrix of $n$ qubits can be represented by a vector of dimension $3^{2n}$, instead of $8^{2n}$, with an additional vector of dimension $n$ to label the quantum number of each qubit. We show that this approximation of the dynamics remains faithful to the true dynamics when the simulated circuits twirl the noise, converting coherent errors to stochastic errors, which can be achieved using randomized compiling. We use this simulation approach to study how correlations in measurement outcomes of circuits with reset-if-leaked gadgets, such as a multi-round Bell state stabilization circuit that uses 6 EO qubits, are affected by the choice of CNOT implementations.

Velocity-Enabled Quantum Computing with Neutral Atoms

Ohad Lib, Hendrik Timme, Maximilian Ammenwerth, Flavien Gyger, Renhao Tao, Shijia Sun, Immanuel Bloch, Johannes Zeiher

2603.15561 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper introduces a new approach to quantum computing with neutral atoms that uses atom velocity as a control parameter, enabling selective operations on moving atoms through Doppler shifts and spatial phase manipulation. The researchers demonstrate key quantum error correction primitives including high-fidelity gates, cluster state generation, and error detection codes while reducing hardware complexity.

Key Contributions

  • Introduction of velocity as a new degree of freedom for neutral atom quantum computing architectures
  • Demonstration of velocity-selective state preparation and measurement using controlled Doppler shifts
  • Achievement of 99.86% fidelity CZ gates and implementation of quantum error correction primitives including 8-qubit cluster states and [[4,2,2]] error detection code
  • Reduction of hardware overhead by enabling selective operations on moving atoms with global control beams
neutral atoms quantum error correction logical qubits Doppler shifts cluster states
View Full Abstract

Realizing error-corrected logical qubits is a central goal for the current development of digital quantum computers. Neutral atoms offer the opportunity to coherently shuttle atoms for realizing efficient quantum error correction based on long-range connectivity and parallel atom transport. Nevertheless, time overheads in shuttling atoms and complex control hardware pose challenges to scaling current architectures. Here, we introduce atom velocity as a new degree of freedom in neutral-atom architectures tailored to quantum error correction. Through controlled Doppler shifts, we demonstrate velocity-selective mid-circuit state preparation and measurement on moving atoms, leaving spectator atoms unaffected. Furthermore, we achieve on-the-fly local single-qubit rotations by mapping micron-scale atom displacements to the spatial phase of global control beams. Complementing these techniques with CZ entangling gates with a fidelity of 99.86(4)%, we experimentally implement key primitives for quantum error correction and measurement-based quantum computing. We generate an eight-qubit entangled cluster state with an average stabilizer value of 0.830(4), realize a [[4,2,2]] error-detection code with 99.0(3)% logical Bell-state fidelity, and perform stabilizer measurements using a flying ancilla. By enabling selective operations on continuously moving atoms using only global beams, this velocity-enabled architecture reduces hardware overhead while minimizing shuttling and transfer delays, opening a new pathway for fast, large-scale atom-based quantum computation.
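The velocity selectivity rests on the first-order Doppler shift Δν = v/λ seen by an atom moving along the control beam. A minimal sketch (the 0.5 m/s velocity and 500 nm wavelength are illustrative assumptions, not the paper's parameters):

```python
def doppler_shift_hz(v_mps, wavelength_m):
    """First-order Doppler shift for an atom moving at speed v along
    the beam: delta_nu = v / lambda."""
    return v_mps / wavelength_m

# An atom shuttled at 0.5 m/s on a 500 nm transition is detuned by
# about 1 MHz, while stationary spectator atoms see no shift.
shift = doppler_shift_hz(0.5, 500e-9)
```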

Error semitransparent universal control of a bosonic logical qubit

Saswata Roy, Owen C. Wetherbee, Valla Fatemi

2603.15356 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates error semi-transparent gates for bosonic logical qubits, achieving universal quantum computation with reduced errors from photon loss. The researchers show a five-fold reduction in infidelity and construct a complete gate set including non-Clifford operations necessary for fault-tolerant quantum computing.

Key Contributions

  • Introduction of error semi-transparent framework for universal bosonic logical qubit gates
  • Demonstration of complete gate set {X, H, T} with five-fold infidelity reduction
  • Construction of composite non-Clifford operations using error-corrected bosonic qubits
bosonic codes error correction fault-tolerant quantum computing logical qubits universal gates
View Full Abstract

Bosonic codes offer hardware-efficient approaches to logical qubit construction and hosted the first demonstration of beyond-break-even logical quantum memory. However, such accomplishments were achieved for idling information, and realization of fault-tolerant logical operations remains a critical bottleneck for universal quantum computation in scaled systems. Error-transparent (ET) gates offer an avenue to resolve this issue, but experimental demonstrations have been limited to phase gates. Here, we introduce a framework based on dynamic encoding subspaces that enables simple linear drives to accomplish universal gates that are error semi-transparent (EsT) to oscillator photon loss. With an EsT logical gate set of {X, H, T}, we observe a five-fold reduction in infidelity conditioned on photon loss, demonstrate extended active-manipulation lifetimes with quantum error correction, and construct a composite EsT non-Clifford operation using a sequence of eight gates from the set. Our approach is compatible with methods for detectable ancilla errors, offering an approach to error-mitigated universal control of bosonic logical qubits with the standard quantum control toolkit.

Asymptotically good bosonic Fock state codes: Exact and approximate

Dor Elimelech, Arda Aydin, Alexander Barg

2603.15190 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: high

This paper develops new quantum error correction codes for photonic quantum systems that can protect against photon loss (amplitude damping). The authors prove that exact and approximate error correction are equivalent for these codes and construct families of asymptotically good codes with bounded photon numbers per mode.

Key Contributions

  • Proved equivalence of exact and approximate error correction for Fock state codes against amplitude damping
  • Constructed asymptotically good bosonic Fock state codes with bounded per-mode occupancy
  • Established connection to permutation invariant codes and extended results to qudit systems
quantum error correction bosonic codes Fock states amplitude damping photonic quantum computing
View Full Abstract

We examine exact and approximate error correction for multi-mode Fock state codes protecting against the amplitude damping noise. Based on a new formalization of the truncated amplitude damping channel, we show the equivalence of exact and approximate error correction for Fock state codes against random photon losses. Leveraging the recently found construction method based on classical codes with large distance measured in the $\ell_1$ metric, we construct asymptotically good (exact and approximate) Fock state codes. These codes have an additional property of bounded per-mode occupancy, which increases the coherence lifetime of code states and reduces the photon loss probability, both of which have a positive impact on the stability of the system. Using the relation between Fock state code construction and permutation invariant (PI) codes, we also obtain families of asymptotically good qudit PI codes as well as codes in monolithic nuclear state spaces.
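The construction starts from classical codes with large distance in the $\ell_1$ metric over occupation-number vectors. A minimal illustration of that metric (the toy codewords are hypothetical, not from the paper):

```python
from itertools import combinations

def l1_dist(x, y):
    """ell_1 distance between two occupation-number vectors."""
    return sum(abs(a - b) for a, b in zip(x, y))

def min_l1_distance(codewords):
    """Minimum pairwise ell_1 distance of a classical code."""
    return min(l1_dist(x, y) for x, y in combinations(codewords, 2))

# Hypothetical code over 3 modes with at most 2 photons per mode
# (illustrating the bounded per-mode occupancy property):
code = [(0, 2, 0), (2, 0, 0), (0, 0, 2), (1, 1, 1)]
d = min_l1_distance(code)
```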

Scalable Self-Testing of Mutually Anticommuting Observables and Maximally Entangled Two-Qudits

Souradeep Sasmal, Ritesh K. Singh, Prabuddha Roy, A. K. Pan

2603.15018 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: high

This paper develops a method to verify quantum systems using Bell inequalities, specifically testing high-dimensional entangled states and mutually anticommuting measurements without needing to trust the measurement devices. The framework can scale to certify increasingly complex quantum resources needed for advanced quantum technologies.

Key Contributions

  • Simultaneous self-testing framework for maximally entangled two-qudit states and mutually anticommuting observables
  • Derivation of optimal quantum bounds using Sum-of-Squares decomposition without dimensional assumptions
  • Proof that maximal quantum violation corresponds to Clifford algebra representations with minimal required dimensions
  • Establishment of quantitative robustness bounds relating Bell value deviations to strategy fidelity
self-testing Bell inequalities entanglement anticommuting observables Clifford algebra
View Full Abstract

The next frontier in device-independent quantum information lies in the certification of scalable and parallel quantum resources, which underpin advanced quantum technologies. We put forth a simultaneous self-testing framework for a maximally entangled two-qudit state of local dimension $m_*=2^{\lfloor n/2 \rfloor}$ (equivalently $\lfloor n/2 \rfloor$ copies of maximally entangled two-qubit pairs), together with $n$ mutually anticommuting observables on one side. To this end, we employ an $n$-setting Bell inequality involving two space-like separated observers, Alice and Bob, with $2^{n-1}$ and $n$ measurement settings, respectively. We derive the local ontic bound of this inequality and, crucially, employ the Sum-of-Squares decomposition to determine the optimal quantum bound without presupposing the dimension of the state or observables. We then establish that any physical realisation achieving the maximal quantum violation must, up to local isometries and complex conjugation, correspond to a reference strategy consisting of a maximally entangled state of local dimension of at least $2^{\lfloor n/2 \rfloor}$ and local observables forming an irreducible representation of the Clifford algebra. This construction thereby demonstrates that the minimal dimension compatible with $n$ mutually anticommuting observables is naturally self-tested by the maximal violation of the proposed Bell functional. Finally, we analyse the robustness of the protocol by establishing quantitative bounds relating deviations in the observed Bell value to the fidelity between the realised and the ideal strategies. Our results thus provide a scalable, dimension-independent route for the certification of high-dimensional entanglement and Clifford measurements in a fully device-independent framework.
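A standard Jordan-Wigner-style construction realizes $n$ mutually anticommuting observables on $\lfloor n/2 \rfloor$ qubits, matching the minimal local dimension $m_*=2^{\lfloor n/2 \rfloor}$ in the abstract. The sketch below verifies the anticommutation numerically; it is a generic Clifford-algebra representation, not the paper's specific self-testing strategy:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def kron_all(ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def anticommuting_set(m):
    """2m+1 mutually anticommuting observables on m qubits:
    Z...Z X I...I, Z...Z Y I...I for each position, plus Z...Z."""
    gammas = []
    for k in range(m):
        prefix, suffix = [Z] * k, [I] * (m - k - 1)
        gammas.append(kron_all(prefix + [X] + suffix))
        gammas.append(kron_all(prefix + [Y] + suffix))
    gammas.append(kron_all([Z] * m))
    return gammas

# n = 5 observables on 2 qubits: local dimension 4 = 2^(5//2)
g = anticommuting_set(2)
max_anticomm = max(np.abs(a @ b + b @ a).max()
                   for i, a in enumerate(g) for b in g[i + 1:])
```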

Cavity-Free Distributed Quantum Computing with Rydberg Ensembles via Collective Enhancement

Aman Ullah

2603.14854 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper presents a quantum networking architecture that uses Rydberg atom ensembles to create entangled connections between distant quantum computers without needing optical cavities. The approach achieves high-fidelity quantum gates and atom-photon conversion, enabling practical distributed quantum computing with entanglement generation rates exceeding 600 Hz at 20 km distances.

Key Contributions

  • Cavity-free quantum networking architecture using Rydberg atom ensembles
  • High-fidelity distributed quantum computing protocol with 99.93% gate fidelity and >97.5% Bell state fidelity
  • Practical scalable approach achieving 600+ Hz entanglement rates at 20 km separation
Rydberg atoms distributed quantum computing quantum networking entanglement distribution cavity-free
View Full Abstract

A complete architecture for cavity-free quantum networking based on collective enhancement in Rydberg atom ensembles is presented. The protocol exploits Rydberg blockade and phase-matched directional emission to eliminate optical cavities without sacrificing performance. The architecture comprises three steps: (i) local control-ensemble entanglement via Rydberg blockade with fidelity $F_{\mathrm{gate}} \approx 99.93\%$; (ii) atom-photon conversion via Raman transitions, achieving directional emission ($η_{\mathrm{dir}} \approx 35\%$) and single-node efficiency $η_{\mathrm{node}} \approx 19\%$; and (iii) remote atom-atom entanglement via Hong-Ou-Mandel interference, producing Bell states with fidelity $F > 97.5\%$. With quantum memories enabling retry protocols, entanglement generation rates exceed $600$ Hz at 20 km separation. This cavity-free approach provides a practical and scalable pathway for distributed quantum computing and secure quantum communication.
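The quoted rate can be sanity-checked with a simple attempt-based model; the model itself, the 50 kHz attempt rate, and the 0.2 dB/km fiber loss are our assumptions, not the paper's analysis:

```python
def entanglement_rate_hz(attempt_rate_hz, eta_node, length_km,
                         fiber_loss_db_per_km=0.2):
    """Toy heralded-entanglement rate: both nodes must emit and
    convert successfully, and the photons must survive the fiber."""
    transmission = 10 ** (-fiber_loss_db_per_km * length_km / 10)
    p_success = eta_node ** 2 * transmission
    return attempt_rate_hz * p_success

# With the paper's eta_node ~ 0.19 at 20 km separation, a ~50 kHz
# attempt rate already clears the quoted 600 Hz in this toy model.
rate = entanglement_rate_hz(50_000, 0.19, 20)
```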

Protecting Distributed Blockchain with Twin-Field Quantum Key Distribution: A Quantum Resistant Approach

Xuan Li, Ying Guo

2603.14826 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: medium Sensing: none Network: high

This paper proposes a quantum-resistant blockchain architecture that uses twin-field quantum key distribution (TF-QKD) to protect distributed blockchain networks from quantum computing threats. The approach aims to overcome distance and scalability limitations of traditional QKD systems by implementing a measurement-device-independent topology that reduces infrastructure complexity.

Key Contributions

  • Scalable quantum-resistant blockchain architecture using TF-QKD protocol
  • Linear scaling optimization reducing infrastructure complexity from quadratic to linear
  • Integration of measurement-device-independent topology to overcome rate-loss limits in quantum networks
quantum key distribution twin-field QKD blockchain security measurement-device-independent quantum-resistant cryptography
View Full Abstract

Quantum computing poses multi-layered security challenges to classical blockchain systems. Quantum-secured blockchains, which rely on quantum key distribution (QKD) to establish secure channels, can address this potential threat. This paper presents a scalable quantum-resistant blockchain architecture designed to address the connectivity and distance limitations of QKD-integrated quantum networks. By leveraging the twin-field (TF) QKD protocol within a measurement-device-independent (MDI) topology, the proposed framework reduces the infrastructure complexity from quadratic to linear scaling. This architecture integrates information-theoretic security with distributed consensus mechanisms, allowing the system to overcome the fundamental rate-loss limits inherent in traditional point-to-point links. The proposed scheme offers a theoretically sound and feasible solution for deploying large-scale, long-distance consortium blockchains.
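The quadratic-to-linear scaling corresponds to replacing pairwise point-to-point QKD links with one link per node to an untrusted measurement node, as a quick count shows:

```python
def pairwise_links(n):
    """Point-to-point QKD between every pair of n nodes: n(n-1)/2 links."""
    return n * (n - 1) // 2

def mdi_star_links(n):
    """MDI/TF-QKD star topology: each node links only to an untrusted
    central measurement node, so n links suffice."""
    return n

# A 50-node consortium needs 1225 pairwise links but only 50 in the star.
saved = pairwise_links(50) - mdi_star_links(50)
```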

Adaptive Control of Stochastic Error Accumulation in Fault-Tolerant Quantum Computation

Tirtha Haque

2603.14687 • Mar 16, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a machine learning approach called Chronological Deep Q-Network (Ch-DQN) for adaptive quantum error correction that tracks how noise changes over time, rather than treating each error correction cycle independently. The method aims to prevent the gradual accumulation of errors that can cause logical qubits to fail in fault-tolerant quantum computers.

Key Contributions

  • Introduction of adaptive error correction using deep reinforcement learning that accounts for temporal noise correlations
  • Novel approach treating fault-tolerant quantum computation as a stochastic control problem with hazard accumulation
  • Development of Ch-DQN algorithm with backward trajectory refinement and fractional meta-updates for non-stationary noise environments
fault-tolerant quantum computing quantum error correction adaptive control deep reinforcement learning stochastic noise
View Full Abstract

In realistic hardware for quantum computation that possesses fault-tolerance, non-stationary noise and stochastic drift lead to logical failure from the temporal accumulation of errors, not from independent events. Static decoding and fixed calibration techniques are structurally incompatible with this situation because they do not take into account temporal correlations between errors or control-induced back-action of errors. These effects motivate control policies that must track noise evolution across correction cycles, rather than respond to individual syndromes in isolation. We treat fault-tolerant quantum computation as a stochastic control problem, modelled using reduced quantum dynamics in which Pauli error processes are governed by latent noise parameters that vary temporally. From this perspective, logical failure arises through the accumulation of a hazard variable, and the corresponding control objective depends on the full history of observations. Operating under these conditions, a Chronological Deep Q-Network (Ch-DQN) maintains an internal belief state that tracks both noise evolution and accumulated hazard. During training, backward refinement of trajectories is used to sample slowly drifting modes of operation, while runtime inference remains strictly causal. A fractional meta-update stabilizes learning in the presence of non-stationary, control-coupled dynamics. Through multi-distance simulations that incorporate stochastic drift and feedback from decision-making, Ch-DQN suppresses hazard accumulation and extends logical survival time relative to static and recurrent baselines. Error correction in this regime is therefore no longer a static decoding task, but a control process whose success is determined over time by the underlying noise dynamics.

Quantifying surface losses in superconducting aluminum microwave resonators

Elizabeth Hedrick, Faranak Bahrami, Alexander C. Pakpour-Tabrizi, Atharv Joshi, Q. Rumman Rahman, Ambrose Yang, Ray D. Chang, Matthew P. Bland, Apoorv...

2603.13183 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper investigates how surface defects in aluminum oxide layers limit the performance of superconducting quantum devices. The researchers measure microwave losses caused by two-level systems in aluminum resonators and find that native aluminum oxide contributes significantly to qubit decoherence, providing insights for improving quantum device fabrication.

Key Contributions

  • Quantified that surface two-level systems in 2.7 nm aluminum oxide layers are the primary source of losses in superconducting aluminum resonators
  • Demonstrated that aluminum interface defects contribute approximately 27% of the relaxation rate in state-of-the-art tantalum-on-silicon qubits
  • Showed that HF treatment removes aluminum oxide but rapid regrowth limits long-term improvements in device performance
superconducting qubits two-level systems aluminum oxide transmon qubits microwave resonators
View Full Abstract

The recent realization of millisecond-scale coherence with tantalum-on-silicon transmon qubits showed that depositing the Al/AlOx/Al Josephson junction in a high purity, ultrahigh vacuum environment was critical for achieving lifetime-limited coherence, motivating careful examination of the aluminum surface two-level system (TLS) bath. Here, we measure the microwave absorption arising from surface TLSs in superconducting aluminum resonators, following methodology developed for tantalum resonators. We vary film and surface properties and correlate microwave measurements with materials characterization. We find that the lifetimes of superconducting aluminum resonators are primarily limited by surface losses associated with TLSs in the 2.7 nm-thick native AlOx. Treatment with 49% HF removes surface AlOx completely; however, rapid oxide regrowth limits improvements in surface loss and long term device stability. Using these measurements we estimate that TLSs in aluminum interfaces contribute around 27% of the relaxation rate of state-of-the-art tantalum-on-silicon qubits that incorporate aluminum-based Josephson junctions.
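The resonator analysis follows the standard participation-ratio picture, in which each lossy interface contributes to the total loss as 1/Q = Σ pᵢ·tanδᵢ. A sketch with illustrative numbers (not the paper's fitted values):

```python
def tls_limited_q(participations, loss_tangents):
    """Participation-ratio TLS loss model: 1/Q = sum(p_i * tan_delta_i).
    Standard surface-loss model; the inputs below are illustrative."""
    inv_q = sum(p * t for p, t in zip(participations, loss_tangents))
    return 1.0 / inv_q

# Hypothetical single-interface example: a surface oxide participating
# at the 1e-3 level with loss tangent 2e-3 caps Q at 5e5.
q = tls_limited_q([1e-3], [2e-3])
```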

Beta Tantalum Transmon Qubits with Quality Factors Approaching 10 Million

Atharv Joshi, Apoorv Jindal, Paal H. Prestegaard, Faranak Bahrami, Elizabeth Hedrick, Matthew P. Bland, Tunmay Gerg, Guangming Cheng, Nan Yao, Robert ...

2603.13174 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper demonstrates that beta-phase tantalum can be used to create high-quality superconducting qubits for quantum computers, achieving quality factors approaching 10 million despite previous beliefs that this material phase would be inferior to alpha-phase tantalum.

Key Contributions

  • Demonstrated that beta-Ta can achieve exceptionally high qubit quality factors (up to 10.1 million), challenging previous assumptions about material requirements
  • Established beta-Ta on sapphire as a viable platform for scalable qubit fabrication since beta-Ta readily nucleates at room temperature
  • Characterized the loss mechanisms in beta-Ta qubits, showing surface two-level systems as the dominant loss channel
transmon qubits beta tantalum quality factor superconducting qubits two-level systems
View Full Abstract

Tantalum-based transmon qubits are a promising platform for building large-scale quantum processors. So far, these qubits have been made from tantalum films grown exclusively in the alpha phase (α-Ta). The beta phase of tantalum (β-Ta) readily nucleates at room temperature, making it attractive for scalable qubit fabrication. However, β-Ta is widely believed to be detrimental to qubit performance because it has a lower superconducting critical temperature than α-Ta. We challenge this prevailing belief by fabricating low-loss transmon qubits from β-Ta films on sapphire. Across 11 qubits, the mean time-averaged quality factor is (5.6 ± 2.3) × 10^6, with the best qubit recording a time-averaged quality factor of (10.1 ± 1.3) × 10^6. Resonator studies demonstrate that the dominant microwave loss channel is surface two-level systems, with the surface loss contribution for β-Ta being about twice that of α-Ta. β-Ta films exhibit significant kinetic inductance, consistent with an estimated magnetic penetration depth of (1.78 ± 0.02) μm. This work establishes β-Ta on sapphire as a material platform for realizing low-loss transmon qubits and other superconducting devices such as compact resonators, kinetic inductance detectors, and quasiparticle traps.
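A quality factor translates to an energy-relaxation time via the standard relation Q = 2πf·T₁; the ~5 GHz transmon frequency assumed below is typical but not stated in the abstract:

```python
from math import pi

def t1_from_q(quality_factor, freq_hz):
    """Energy-relaxation time implied by a quality factor,
    using Q = 2*pi*f*T1 (standard relation)."""
    return quality_factor / (2 * pi * freq_hz)

# The best reported Q of 10.1e6 at an assumed 5 GHz corresponds to
# roughly a third of a millisecond of T1.
t1 = t1_from_q(10.1e6, 5e9)
```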

Circuit Optimization for Universality Transformation

Yasuaki Nakayama, Yuki Takeuchi, Seiseki Akibue

2603.13169 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents more efficient quantum circuit constructions for transforming between different universal gate sets, specifically showing how to convert a computationally universal gate set to a strictly universal one using fewer resources. The work demonstrates that any multi-qubit quantum operation can be generated using only real single-qubit gates, CCZ gates, and a single special quantum state.

Key Contributions

  • Circuit optimization that eliminates non-imaginary ancillary qubits in universality transformation
  • Extension to continuous gate-set setting showing exact generation of any multi-qubit unitary using constrained gate set
quantum gates · circuit optimization · universal computation · gate synthesis · quantum compilation

It is known that a computationally universal gate set $\{H,CCZ\}$ can be transformed to a strictly universal one $\{Λ(S), H\}$ using one maximally imaginary state $|+i\rangle$ and non-imaginary ancillary qubits. We achieve this transformation with a shorter circuit that eliminates the non-imaginary ancillary qubits. We further extend this result to the continuous gate-set setting, showing that any multi-qubit unitary can be exactly generated by real single-qubit unitary gates, $CCZ$ gates, and $|+i\rangle$.
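
A quick way to see why the resource state $|+i\rangle$ is needed at all: every matrix entry of H and CCZ is real, so circuits over $\{H, CCZ\}$ alone can never produce complex amplitudes, whereas $Λ(S)$ (controlled-S) and $|+i\rangle$ carry the imaginary unit. A minimal NumPy check, with the standard gate matrices written out by hand (not code from the paper):

```python
import numpy as np

# Standard gate matrices appearing in the transformation.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard (real)
CCZ = np.diag([1, 1, 1, 1, 1, 1, 1, -1])         # CCZ (real)
CS = np.diag([1, 1, 1, 1j])                      # Lambda(S): controlled-S
plus_i = np.array([1, 1j]) / np.sqrt(2)          # |+i> resource state

# {H, CCZ} is entirely real: computationally universal,
# but not strictly universal on its own.
print(np.isrealobj(H) and np.isrealobj(CCZ))          # True
# Lambda(S) and |+i> supply the imaginary part.
print(np.iscomplexobj(CS) and np.iscomplexobj(plus_i))  # True
```

This realness is exactly why the maximally imaginary state must be injected to reach strict universality.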

On-Demand Correlated Errors in Superconducting Qubits from a Particle Accelerator

Thomas McJunkin, A. W. Hunt, Yenuel Jones-Alberty, T. M. Haard, M. K. Spear, James Shackford, Tom Gilliss, Mayra Amezcua, C. A. Watson, T. M. Sweeney,...

2603.13124 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper describes a new experimental facility that uses a particle accelerator to study how ionizing radiation creates correlated errors in superconducting quantum computers. The researchers can now generate radiation-induced errors on demand to better understand and characterize how cosmic rays and other high-energy particles interfere with quantum computations.

Key Contributions

  • Development of a controllable facility coupling electron linear accelerator to dilution refrigerator for studying radiation effects on quantum systems
  • Demonstration of on-demand generation and characterization of radiation-induced qubit errors including relaxation, excitation, and detuning errors
  • Systematic study showing error signatures depend on junction placement and superconducting gap properties
superconducting qubits · quantum error correction · ionizing radiation · correlated errors · transmon

Ionizing radiation is a known source of correlated errors in superconducting quantum processors, inhibiting the functionality of quantum error correction surface codes. High-energy photons and charged particles deposit pair-breaking energy into these systems leading to excess quasiparticles near Josephson junctions that increase qubit decoherence. Previous investigations of this problem have relied on ambient, stochastic sources of ionizing radiation or alternative methods of quasiparticle generation. Here, we present a facility that couples an electron linear accelerator (linac) to a dilution refrigerator to study ionizing radiation in quantum systems. A single linac electron closely mimics the energy deposition characteristics of a typical cosmic-ray muon, and we demonstrate the facility's usefulness with a multi-qubit superconducting transmon chip. Characteristic radiation-induced relaxation errors are quickly and easily collected with the speed and timing information of the linac. Additionally, we present qubit excitation and detuning errors that can be difficult to detect without the on-demand source of ionizing radiation. These error signatures are shown to be dependent on the junction placement and surrounding superconducting gaps.

Partially Fault-Tolerant Quantum Computation for Megaquop Applications

Ming-Zhi Chung, Ali H. Z. Kavaki, Artur Scherer, Abdullah Khalid, Xiangzhou Kong, Toru Kawakubo, Namit Anand, Gebremedhin A Dagnew, Zachary Webb, Ally...

2603.13093 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper analyzes partially fault-tolerant quantum computing approaches for executing large-scale quantum circuits with millions of operations, focusing on the STAR architecture for efficient analog rotations and comparing resource requirements against full fault-tolerant methods. The authors demonstrate that partial fault tolerance could enable practical quantum simulation of condensed matter systems like the 2D Fermi-Hubbard model with hundreds of thousands of qubits.

Key Contributions

  • Quantum resource estimation comparison between partial and full fault-tolerant quantum computing architectures
  • Development of code growth procedure to reduce factory size for analog rotation state production
  • Analysis of space-time trade-offs and identification of optimal circuit regimes for partial FTQC
  • Demonstration that 2D Fermi-Hubbard model simulation is well-suited for STAR architecture implementation
fault-tolerant quantum computing · quantum resource estimation · STAR architecture · analog rotations · quantum error correction

Partially fault-tolerant quantum computing (FTQC) has recently emerged as a promising approach for the execution of megaquop-scale circuits with millions of logical operations. In this work, we demonstrate the strengths and the limitations of this approach by conducting quantum resource estimation (QRE) of the space-time-efficient analog rotation (STAR) architecture using realistic hardware specifications for superconducting processors, and compare it against the QRE of the full FTQC architecture. We show how the performance of the STAR architecture's protocols is affected by hardware improvements. We also reduce the space requirements for partial FTQC by developing a procedure leveraging code growth to decrease the size of a factory producing analog rotation states. Our results reveal a non-trivial dependence of the optimal pre-growth code distance on the rotation angle with respect to post-growth infidelity. Further, we analyze space-time trade-offs between the factory size and the error-mitigation overhead, and observe that in an application-agnostic setting, there is a Goldilocks zone for circuits in the regime of roughly $10^5$–$10^6$ small-angle rotation gates. We show that quantum simulation of 2D Fermi-Hubbard model systems is a particularly well-suited application for the STAR architecture, requiring only hundreds of thousands of physical qubits and runtimes on the order of minutes for modest system sizes. Due to its favourable algorithmic scaling to larger system sizes, utility-scale simulation of the 2D Fermi-Hubbard model could potentially be attained using partial FTQC.

Asymptotically Optimal Quantum Circuits for Comparators and Incrementers

Vivien Vandaele

2603.12917 • Mar 13, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops more efficient quantum circuits for basic arithmetic operations like comparisons and increments, achieving optimal performance in terms of gate count, circuit depth, and qubit usage. The authors show these improvements can significantly reduce the complexity of Shor's factoring algorithm from O(n³) to O(n² log² n) depth.

Key Contributions

  • Asymptotically optimal quantum circuits for comparators and incrementers with Θ(n) gates and Θ(log n) depth
  • Improved Shor's algorithm implementation reducing circuit depth from O(n³) to O(n² log² n)
  • General theorem for trading ancilla qubits for control qubits with low overhead
quantum circuits Shor's algorithm quantum arithmetic fault-tolerant quantum computing circuit optimization
View Full Abstract

We present quantum circuits for comparison and increment operations that achieve an asymptotically optimal gate count of $Θ(n)$ and depth of $Θ(\log n)$ over the Clifford+Toffoli gate set, while using a provably minimal number of qubits. We extend these results to classical-quantum comparators, yielding an improved classical-quantum adder with an optimal qubit count. Given the ubiquity of these operations as algorithmic building blocks, our constructions translate directly into reduced circuit complexity for many quantum algorithms. As a notable example, they can be used to improve a space-efficient circuit for Shor's factoring algorithm, reducing circuit depth from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2 \log^2 n)$ without increasing either the qubit count or the asymptotic gate complexity. Underpinning these results is a general theorem demonstrating how to trade ancilla qubits for control qubits with low overhead in both depth and gate count, providing a broadly applicable tool for quantum circuit design.
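
To get a feel for the claimed depth reduction in Shor's algorithm, one can compare the two asymptotic scalings with all hidden constants set to 1. This is an illustration of the asymptotics only, not a gate-level depth count from the paper:

```python
import math

def depth_cubic(n: int) -> float:
    """Prior space-efficient construction: O(n^3) depth (constant = 1)."""
    return n ** 3

def depth_improved(n: int) -> float:
    """This paper's construction: O(n^2 * log^2 n) depth (constant = 1)."""
    return n ** 2 * math.log2(n) ** 2

n = 2048  # RSA-2048 modulus size, a common benchmark for CRQC estimates
print(depth_cubic(n) / depth_improved(n))  # ratio n / log2(n)^2
```

At n = 2048 the ratio n / log₂²(n) is about 17, and it keeps growing with n, which is why the improvement matters for cryptographically relevant problem sizes.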