Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
Updated automatically as new papers are published. Each page covers one week of arXiv publishing (Sunday through Thursday); an archive of previous weeks is at the bottom.
Low-weight quantum syndrome errors in belief propagation decoding
This paper develops methods to identify problematic low-weight error patterns in quantum error correction codes that cause belief propagation decoding algorithms to converge slowly or fail. The authors analyze how these decoding failures occur and propose improvements to the decoder by modifying the decoding matrix to reduce both logical errors and decoding time.
Key Contributions
- Empirical method to identify low-weight error syndromes that cause belief propagation decoding convergence issues
- Analysis of BP dynamics for weight-four and weight-five errors showing exponential activation behavior
- Decoder improvement technique using fault column combinations to reduce logical errors and decoding time
View Full Abstract
We describe an empirical approach to identify low-weight combinations of columns of the decoding matrices of a quantum circuit-level noise model, for which belief-propagation (BP) algorithms converge possibly very slowly. Focusing on the logical-idle syndrome cycle of the low-density parity check gross code, we identify criteria providing a characterization of the Tanner subgraph of such low-weight error syndromes. We analyze the dynamics of iterations when BP is used to decode weight-four and weight-five errors, finding statistics akin to exponential activation in the presence of noise or escape from chaotic phase-space domains. We study how BP convergence improves when adding to the decoding matrix relevant combinations of fault columns, and show that the suggested decoder amendment can result in the reduction of both logical errors and decoding time.
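The decoder amendment the abstract describes, appending XOR combinations of fault columns to the decoding matrix so BP can treat a problematic low-weight fault as a single variable, can be sketched in a few lines. This is a toy illustration with an invented 3×4 matrix; the paper's criteria for selecting which column combinations to add are not reproduced here.

```python
import numpy as np

def augment_decoding_matrix(H, column_pairs):
    """Append XOR-combined fault columns to a binary decoding matrix.

    Each tuple in column_pairs names columns whose XOR is appended as a
    new column, letting BP treat the combined low-weight fault as one
    variable. (Illustrative sketch, not the paper's selection method.)
    """
    extra = [np.bitwise_xor.reduce(H[:, list(p)], axis=1) for p in column_pairs]
    return np.concatenate([H] + [e[:, None] for e in extra], axis=1)

# toy 3x4 decoding matrix (invented for illustration)
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=np.uint8)

H_aug = augment_decoding_matrix(H, [(0, 2)])
print(H_aug.shape)   # (3, 5)
print(H_aug[:, 4])   # [1 1 1], the XOR of columns 0 and 2
```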
Post-Quantum Cryptography from Quantum Stabilizer Decoding
This paper proposes quantum stabilizer code decoding as a new hardness assumption for post-quantum cryptography, showing it can support key cryptographic primitives like public-key encryption and oblivious transfer. The authors argue this provides a quantum-native alternative to current post-quantum assumptions that could be more resistant to both classical and quantum attacks.
Key Contributions
- Establishing quantum stabilizer decoding as a viable post-quantum cryptographic assumption with reductions to core cryptographic primitives
- Developing new scrambling techniques for structured linear spaces with symplectic algebraic structure to enable security proofs
View Full Abstract
Post-quantum cryptography currently rests on a small number of hardness assumptions, posing significant risks should any one of them be compromised. This vulnerability motivates the search for new and cryptographically versatile assumptions that make a convincing case for quantum hardness. In this work, we argue that decoding random quantum stabilizer codes -- a quantum analog of the well-studied LPN problem -- is an excellent candidate. This task occupies a unique middle ground: it is inherently native to quantum computation, yet admits an equivalent formulation with purely classical input and output, as recently shown by Khesin et al. (STOC '26). We prove that the average-case hardness of quantum stabilizer decoding implies the core primitives of classical Cryptomania, including public-key encryption (PKE) and oblivious transfer (OT), as well as one-way functions. Our constructions are moreover practical: our PKE scheme achieves essentially the same efficiency as state-of-the-art LPN-based PKE, and our OT is round-optimal. We also provide substantial evidence that stabilizer decoding does not reduce to LPN, suggesting that the former problem constitutes a genuinely new post-quantum assumption. Our primary technical contributions are twofold. First, we give a reduction from random quantum stabilizer decoding to an average-case problem closely resembling LPN, but which is equipped with additional symplectic algebraic structure. While this structure is essential to the quantum nature of the problem, it raises significant barriers to cryptographic security reductions. Second, we develop a new suite of scrambling techniques for such structured linear spaces, and use them to produce rigorous security proofs for all of our constructions.
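The abstract compares stabilizer decoding to LPN. As background, a minimal plain-LPN sampler looks like the following; the stabilizer-decoding problem additionally carries symplectic algebraic structure that this sketch deliberately omits, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def lpn_samples(s, m, tau):
    """Generate m LPN samples (A, A@s + e mod 2) with Bernoulli(tau) noise.

    Plain LPN only -- the paper's stabilizer-decoding problem carries
    extra symplectic structure that this background sketch omits.
    """
    n = len(s)
    A = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
    e = (rng.random(m) < tau).astype(np.uint8)
    b = ((A @ s) + e) % 2
    return A, b.astype(np.uint8)

s = rng.integers(0, 2, size=16, dtype=np.uint8)   # secret
A, b = lpn_samples(s, m=64, tau=0.1)

# sanity check: with tau = 0 the labels are exactly A @ s mod 2
A0, b0 = lpn_samples(s, m=64, tau=0.0)
print(np.array_equal(b0, (A0 @ s) % 2))  # True
```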
Fair Decoder Baselines and Rigorous Finite-Size Scaling for Bivariate Bicycle Codes on the Quantum Erasure Channel
This paper evaluates bivariate bicycle quantum error-correcting codes on erasure channels, addressing unfair decoder comparisons in previous work and using rigorous statistical methods to estimate true asymptotic error thresholds. The study shows these codes can achieve near-optimal performance without maximum-likelihood decoding and outperform surface codes in some metrics.
Key Contributions
- Establishes fair decoder baselines for comparing bivariate bicycle codes against surface codes on quantum erasure channels
- Provides rigorous finite-size scaling analysis to estimate true asymptotic error thresholds rather than finite-size pseudo-thresholds
- Demonstrates bivariate bicycle codes achieve an asymptotic threshold of ~0.488, within 2.4% of the theoretical limit, with 12x lower normalized overhead than surface codes
View Full Abstract
Fair threshold estimation for bivariate bicycle (BB) codes on the quantum erasure channel runs into two recurring problems: decoder-baseline unfairness and the conflation of finite-size pseudo-thresholds with true asymptotic thresholds. We run both uninformed and erasure-aware minimum-weight perfect matching (MWPM) surface code baselines alongside BP-OSD decoding of BB codes. With standard depolarizing-weight MWPM and no erasure information, performance matches random guessing on the erasure channel in our tested regime -- so prior work that compares against this baseline is really comparing decoders, not codes. Using 200,000 shots per point and bootstrap confidence intervals, we sweep five BB code sizes from $N=144$ to $N=1296$. Pseudo-thresholds (WER = 0.10) run from $p^* = 0.370$ to $0.471$; finite-size scaling (FSS) gives an asymptotic threshold $p^*_\infty \approx 0.488$, within 2.4% of the zero-rate limit and without maximum-likelihood decoding. On the fair baseline, BB at $N=1296$ has a modest edge in threshold over the surface code at twice the qubit count, and a 12× lower normalized overhead -- the latter is where the practical advantage sits. All runs are reproducible from recorded seeds and package versions.
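The statistical machinery the abstract relies on, bootstrap confidence intervals on word-error rates estimated from fixed shot counts, can be sketched as follows. The failure and shot counts here are invented for illustration, and the paper's finite-size scaling fit is not reproduced. Resampling 0/1 outcomes with replacement is equivalent to a single binomial draw, which keeps the sketch fast.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_wer_ci(failures, shots, n_boot=2000, alpha=0.05):
    """Bootstrap a confidence interval for a word-error rate.

    Resampling `shots` 0/1 outcomes with replacement is equivalent to
    drawing Binomial(shots, wer), so we draw binomials directly.
    """
    wer = failures / shots
    wers = rng.binomial(shots, wer, size=n_boot) / shots
    lo, hi = np.quantile(wers, [alpha / 2, 1 - alpha / 2])
    return wer, (lo, hi)

# invented counts: 2,000 failures in 200,000 shots (WER = 0.01)
wer, (lo, hi) = bootstrap_wer_ci(failures=2000, shots=200_000)
print(wer)              # 0.01
print(lo <= wer <= hi)  # True
```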
XCOM: Full Mesh Network Synchronization and Low-Latency Communication for QICK (Quantum Instrumentation Control Kit)
This paper presents XCOM, a networking system that enables precise synchronization (within 100 picoseconds) and low-latency communication between multiple quantum control boards in large-scale quantum computing systems. The system addresses the critical challenge of coordinating many hardware components needed to control hundreds or thousands of qubits in superconducting and spin qubit testbeds.
Key Contributions
- Development of XCOM network achieving sub-100ps synchronization between quantum control boards
- Enabling scalable multi-board control systems for large qubit count quantum computers
- Providing deterministic all-to-all communication with sub-185ns latency for quantum control hardware
View Full Abstract
Quantum computing experiments and testbeds with large qubit counts have until recently been a privilege afforded only to large companies or quantum technologies where scaling to hundreds or thousands of qubits does not require a substantial increase in quantum control hardware (neutral atoms, trapped ions, or spin defects). Superconducting and spin qubit testbeds critically depend on scaling their control systems beyond what a single electronics board can provide. Multi-board control systems combining RF, fast DC control, bias, and readout require precise synchronization and communication across many hardware and firmware components. To address this, we present XCOM, a network that synchronizes QICK boards and the absolute clocks governing quantum program execution to within 100 ps, free of drift and loss of lock. XCOM also provides deterministic, all-to-all simultaneous data communication with latency below 185 ns. Like QICK itself, XCOM is compatible with a broad range of qubit technologies and is designed to scale to large systems.
A Flexible GKP-State-Embedded Fault-Tolerant Quantum Computation Configuration Based on a Three-Dimensional Cluster State
This paper proposes a new architecture for fault-tolerant quantum computing that uses three-dimensional cluster states built from optical photons with different properties (polarization, frequency, and orbital angular momentum). The researchers combine this with Gottesman-Kitaev-Preskill (GKP) error correction codes to create a flexible, scalable system for reliable quantum computation.
Key Contributions
- Novel three-dimensional cluster state architecture using multiple optical degrees of freedom
- Integration of partially squeezed surface-GKP codes achieving a fault-tolerant squeezing threshold of 11.5 dB
View Full Abstract
The integration of diverse quantum resources and the exploitation of more degrees of freedom provide key operational flexibility for universal fault-tolerant quantum computation. In this work, we propose a flexible Gottesman-Kitaev-Preskill-state-embedded fault-tolerant quantum computation architecture based on a three-dimensional cluster state constructed in polarization, frequency, and orbital angular momentum domains. Specifically, we design optical entanglement generators to produce three diverse entangled pairs, and subsequently construct a three-dimensional cluster state via a beam-splitter network with several time delays. Furthermore, we present a partially squeezed surface-GKP code to achieve fault-tolerant quantum computation and ultimately find the optimal choice of implementing the squeezing gate to give the best fault-tolerant performance (the fault-tolerant squeezing threshold is 11.5 dB). Our scheme is flexible, scalable, and experimentally feasible, providing versatile options for future optical fault-tolerant quantum computation architecture.
High-threshold magic state distillation with quantum quadratic residue codes
This paper presents a unified framework using quantum quadratic residue codes for magic state distillation, showing that several well-known quantum error-correcting codes are special cases of this framework. The authors demonstrate new codes that achieve high thresholds for distilling T states and Strange states, which are essential resources for fault-tolerant quantum computation.
Key Contributions
- Unified existing magic state distillation codes under quantum quadratic residue framework
- Presented new quantum quadratic residue codes with high thresholds for T state and Strange state distillation
- Proved existence of infinitely many quantum quadratic residue codes for T state distillation with non-trivial thresholds
View Full Abstract
We present applications of quantum quadratic residue codes in magic state distillation. This includes showing that existing codes which are known to distill magic states, like the $5$-qubit perfect code, the $7$-qubit Steane code, and the $11$-qutrit and $23$-qubit Golay codes, are equivalent to certain quantum quadratic residue codes. We also present new examples of quantum quadratic residue codes that distill qubit $T$ states and qutrit Strange states with high thresholds, and we show that there are infinitely many quantum quadratic residue codes that distill $T$ states with a non-trivial threshold. All of these codes, including the codes with the highest currently known thresholds for $T$ state and Strange state distillation, are unified under the umbrella of quantum quadratic residue codes.
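As background, classical quadratic residue codes are built from the set of nonzero squares modulo a prime, and the lengths 5, 7, and 23 named in the abstract are such primes. A minimal sketch of the residue computation follows; the quantum construction and distillation analysis in the paper are not reproduced here.

```python
def quadratic_residues(p):
    """Nonzero quadratic residues modulo an odd prime p -- the index set
    underlying classical quadratic residue codes of length p."""
    return sorted({(x * x) % p for x in range(1, p)})

# lengths matching codes named in the abstract
print(quadratic_residues(5))   # [1, 4]
print(quadratic_residues(7))   # [1, 2, 4]
print(quadratic_residues(23))  # [1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18]

# an odd prime always has (p - 1) / 2 nonzero residues
print(len(quadratic_residues(23)) == (23 - 1) // 2)  # True
```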
Simulating Quantum Error Correction beyond Pauli Stochastic Errors
This paper develops new methods to simulate how realistic quantum errors (beyond simple Pauli errors) affect quantum error correction protocols, showing that coherent errors can significantly degrade fault-tolerant quantum computing performance compared to standard error models.
Key Contributions
- Development of detector error model (DEM) mapping technique for non-Pauli and coherent errors in fault-tolerant quantum circuits
- Demonstration that coherent errors can shift fault-tolerance thresholds and increase logical error rates by an order of magnitude compared to stochastic Pauli errors
View Full Abstract
Quantum error correction (QEC), the lynchpin of fault-tolerant quantum computing (FTQC), is designed and validated against well-behaved Pauli stochastic error models. But in real-world deployment, QEC protocols encounter a vast array of other errors -- coherent and non-Pauli errors -- whose impacts on quantum circuits are vastly different than those of stochastic Pauli errors. The impacts of these errors on QEC and FTQC protocols have been largely unpredictable to date due to exponential classical simulation cost. Here, we show how to accurately and efficiently model the effects of coherent and non-Pauli errors on FTQC, and we study the effects of such errors on syndrome extraction for surface and bivariate bicycle codes, and on magic state cultivation. Our analysis suggests that coherent errors can shift fault-tolerance thresholds, increase the space-time cost of magic state cultivation, and increase logical error rates by an order of magnitude compared to equivalent stochastic errors. These analyses are enabled by a new technique for mapping any Markovian circuit-level error model with sufficiently small error rates onto a detector error model (DEM) for an FTQC circuit. The resulting DEM enables Monte Carlo estimation of logical error rates and noise-adapted decoding, and its parameters can be analytically related to the underlying physical noise parameters to enable approximate strong simulation.
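The Monte Carlo role of a detector error model can be illustrated with a toy DEM: independent error mechanisms, each firing with some probability and flipping a set of detectors and possibly a logical observable. All mechanisms and probabilities below are invented, and the paper's technique for deriving a DEM from coherent noise is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy detector error model: each mechanism fires independently with
# probability p, flipping some detectors and possibly a logical
# observable. (Illustrative numbers only -- not taken from the paper.)
dem = [
    {"p": 0.01,  "detectors": [0, 1], "logical": False},
    {"p": 0.01,  "detectors": [1, 2], "logical": False},
    {"p": 0.002, "detectors": [2],    "logical": True},
]

def sample_shot(dem, n_det):
    """Sample one shot: detector outcomes plus the logical flip bit."""
    det = np.zeros(n_det, dtype=np.uint8)
    log = 0
    for mech in dem:
        if rng.random() < mech["p"]:
            det[mech["detectors"]] ^= 1
            log ^= mech["logical"]
    return det, log

shots = 100_000
logical_flips = sum(sample_shot(dem, 3)[1] for _ in range(shots))
rate = logical_flips / shots
print(abs(rate - 0.002) < 0.001)  # True: empirical rate tracks 0.002
```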
Adaptive Loss-tolerant Syndrome Measurements
This paper develops adaptive protocols for quantum error correction that can handle both traditional Pauli errors and qubit losses (erasures) simultaneously. The authors extend existing fault-tolerant error correction methods to work with mixed error models and optimize syndrome measurement sequences to minimize overhead when qubits are lost.
Key Contributions
- Development of adaptive syndrome measurement protocols for mixed Pauli error and erasure models
- Quantification of minimal overhead for converting correctable erasures to located errors
- Generalization of fault-tolerant error correction conditions to handle qubit losses
- Extension of adaptive Shor-style measurement sequences to loss-tolerant quantum error correction
View Full Abstract
In the presence of qubit losses, the building blocks of fault-tolerant error correction (FTEC) must be revisited. Existing loss-tolerant approaches are mainly architecture-specific, and little attention has been given to optimizing the syndrome measurement sequences under loss. Schemes designed for the standard Pauli error model are not directly applicable because the syndrome patterns differ when both Pauli errors and erasures can occur. Based on recent advances in loss detection units and loss-tolerant syndrome extraction gadgets, we extend the study of adaptive Shor-style measurement sequences to the mixed error model. We begin by discussing how to adaptively convert correctable erasures into located errors. The minimal overhead is quantified by the number of stabilizer measurements, which can be reduced to a subgroup dimension problem for erasures arising in any FTEC circuit for qubits and prime-dimensional qudits. As a byproduct, we provide the construction of the canonical generating set with respect to a given bipartite partition for a stabilizer group on qudits of composite dimension. We then generalize both the weak and strong FTEC conditions. Finally, we present adaptive syndrome-measurement protocols for the mixed error model, generalizing the adaptive protocols for the standard Pauli error model.
Quantum Depth Compression via Local Dynamic Circuits
This paper introduces Quantum Depth Compression (QDC), a compilation framework that uses dynamic circuits to significantly reduce the depth of quantum circuits by reorganizing non-Clifford gates and utilizing mid-circuit measurements. The method achieves depth linear in the number of non-Clifford gates while avoiding expensive SWAP operations for connectivity constraints.
Key Contributions
- Development of QDC framework that reduces circuit depth to linear in non-Clifford gates
- Method to achieve grid connectivity without SWAP networks using dynamic circuits
- Demonstration of reduced depth and CNOT count compared to standard compilers
View Full Abstract
We present Quantum Depth Compression (QDC), a general compilation framework that utilizes dynamic circuits to reduce arbitrary quantum circuits to depth linear in the number of non-Clifford gates and to grid connectivity without the need for expensive SWAP-networks. The framework consists of pushing Clifford gates to the end of the circuit, resulting in a sequence of non-Clifford Pauli-phasors followed by an all Clifford sub-circuit, both of which are then reduced to constant depth via dynamic circuits. We show that applying QDC to random Pauli-phasor circuits lowers both their depth and CNOT count compared to a standard alternative compiler.
Fast stabilizer state preparation via AI-optimized graph decimation
This paper presents AI-optimized methods to prepare stabilizer states (important quantum states used in error correction) more efficiently by reducing the number of two-qubit gates needed. The researchers use reinforcement learning and Monte Carlo tree search to find better ways to construct these quantum states, achieving up to 2.5x reduction in gate count for large quantum error correcting codes.
Key Contributions
- AI-based method (QuSynth) combining reinforcement learning and Monte Carlo tree search for optimal Clifford gate selection
- Demonstration of up to 2.5x reduction in two-qubit gate count for stabilizer state preparation including large codes like the 144-qubit gross code
View Full Abstract
We propose a general method for preparing stabilizer states with reduced two-qubit gate count and depth compared to the state of the art. The method starts from a graph state representation of the stabilizer state and iteratively reduces the number of edges in the graph using two-qubit Clifford gates to produce a unitary preparation circuit. We explore various heuristic search and AI-based approaches to optimally choose Clifford gates at each step, the most sophisticated of which is a combination of reinforcement learning and Monte Carlo tree search that we call QuSynth. We apply our method to synthesize code states of various quantum error correcting codes including the 23-qubit Golay code and the 144-qubit gross code, the latter of which is significantly beyond the qubit number that is accessible to prior optimal circuit synthesis methods. We demonstrate that our techniques are capable of reducing the required two-qubit gates by up to a factor of 2.5 compared to previous approaches while retaining low circuit depth.
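One standard graph-state primitive relevant to edge reduction is local complementation, which toggles every edge among a vertex's neighbors and can remove several edges at once. The sketch below shows only this primitive on a toy graph; the reinforcement-learning and Monte Carlo tree search gate selection in QuSynth is not reproduced.

```python
def neighbors(edges, v):
    """Neighbors of vertex v in a graph given as a set of frozenset edges."""
    return {u for e in edges if v in e for u in e if u != v}

def local_complement(edges, v):
    """Toggle every edge among the neighbors of v.

    On graph states this operation is induced by single-qubit Cliffords;
    picking the right vertex can delete several edges in one step.
    """
    nb = sorted(neighbors(edges, v))
    new = set(edges)
    for i in range(len(nb)):
        for j in range(i + 1, len(nb)):
            new ^= {frozenset((nb[i], nb[j]))}   # toggle the edge
    return new

# toy graph: a 4-cycle 0-1-2-3 plus the chord 0-2
g = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]}
g2 = local_complement(g, 0)   # toggles edges among {1, 2, 3}
print(len(g), len(g2))        # 5 4
```

Here complementing at vertex 0 removes edges 1-2 and 2-3 while adding 1-3, a net reduction of one edge.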
Independent Trivariate Bicycle Codes
This paper introduces a new class of quantum error-correcting codes called independent trivariate bicycle codes that extend existing bicycle codes to three dimensions, achieving better performance metrics and lower error rates than previous multivariate bicycle codes.
Key Contributions
- Development of independent trivariate bicycle codes extending bivariate framework to three cyclic dimensions
- Construction of high-performance codes including [[140,6,14]] code with superior kd²/n ratio and pseudothreshold performance
- Demonstration of improved error correction capabilities on realistic superconducting noise models
View Full Abstract
We introduce six independent trivariate bicycle (ITB) codes, which extend the bivariate bicycle framework of Bravyi et al. to three cyclic dimensions. Using asymmetric polynomial pairs on three-dimensional tori, we construct four codes including a $[[140,6,14]]$ code with $kd^2/n = 8.40$. In the code-capacity setting, the $[[140,6,14]]$ code achieves a pseudothreshold of $8.0\%$ and $kd^2/n = 8.40$, exceeding the best multivariate bicycle code of Voss et al. ($7.9\%$, $kd^2/n = 2.67$). With circuit-level depolarizing noise, pseudothresholds reach $0.59\%$ for $[[140,6,14]]$ and $0.53\%$ for $[[84,6,10]]$. On the SI1000 superconducting noise model, the $[[140,6,14]]$ code achieves a per-round per-observable rate of $5.6 \times 10^{-5}$ at $p = 0.20\%$. We additionally present two self-dual codes with weight-8 stabilizers: $[[54,14,5]]$ ($kd^2/n = 6.48$) and $[[128,20,8]]$ ($kd^2/n = 10.0$). These results expand the design space of algebraic quantum LDPC codes and demonstrate that the third cyclic dimension yields competitive candidates for practical fault-tolerant implementations.
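The figure of merit $kd^2/n$ quoted for each code is plain arithmetic on the $[[n,k,d]]$ parameters, and the abstract's numbers can be checked directly:

```python
def kd2_over_n(n, k, d):
    """Distance-weighted rate k*d^2/n used to compare the ITB codes."""
    return k * d * d / n

# [[n, k, d]] parameters quoted in the abstract
codes = {"[[140,6,14]]": (140, 6, 14),
         "[[54,14,5]]":  (54, 14, 5),
         "[[128,20,8]]": (128, 20, 8)}

for name, (n, k, d) in codes.items():
    print(name, round(kd2_over_n(n, k, d), 2))
# [[140,6,14]] 8.4
# [[54,14,5]] 6.48
# [[128,20,8]] 10.0
```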
General circuit compilation protocol into partially fault-tolerant quantum computing architecture
This paper proposes a new circuit execution protocol for fault-tolerant quantum computers that can efficiently perform continuous rotation gates using lattice surgery with surface codes. The approach uses optimization techniques to minimize time overhead from probabilistic operations and includes performance prediction tools.
Key Contributions
- Circuit execution protocol for STAR architecture enabling direct continuous Rz(θ) gate operations
- QUBO-based optimization for resource state allocation to reduce time overhead
- Performance estimation framework for predicting execution time and optimizing qubit topology
View Full Abstract
As we enter the early-FTQC era, circuit execution protocols with logical qubits and certain error-correcting codes are being discussed. Here, we propose a circuit execution protocol for the space-time efficient analog rotation (STAR) architecture. Gate operations within the STAR architecture are based on lattice surgery with surface codes, but it allows direct execution of continuous gates $Rz(θ)$ as non-Clifford gates instead of $T = Rz(π/4)$. $Rz(θ)$ operations involve creation of resource states $|m_θ\rangle = \frac{1}{\sqrt{2}} (|0 \rangle + e^{iθ} |1\rangle ) $ followed by ZZ joint measurements with target logical qubits. While employing $Rz(θ)$ enables more efficient circuit execution, both the state creations and the joint measurements are probabilistic processes that adopt repeat-until-success (RUS) protocols, which can incur considerable time overhead. Our circuit execution protocol reduces this overhead through parallel trials of resource state creation and more frequent trials of joint measurements. By employing quadratic unconstrained binary optimization (QUBO) to determine resource state allocations within the space, we make our protocol efficient. Furthermore, we propose a performance estimator for a given target circuit and qubit topology. It predicts time performance in less time than actual simulation requires, and helps find the optimal qubit topology for running the target circuits efficiently.
Noise-resilient nonadiabatic geometric quantum computation for bosonic binomial codes
This paper proposes a method for quantum computing that combines binomial codes (which protect against certain types of errors) with geometric quantum gates (which are naturally resistant to noise) in superconducting systems. The researchers develop control protocols that make quantum computations more reliable by leveraging both error correction techniques and noise-resilient gate operations.
Key Contributions
- Integration of binomial codes with nonadiabatic geometric quantum computation for enhanced error resilience
- Development of customized control protocols combining reverse engineering and optimal control for superconducting systems
- Demonstration of high-fidelity quantum gates with tolerance to parameter fluctuations and decoherence
View Full Abstract
The binomial code is renowned for its parity-mediated loss immunity and loss-error recoverability, while geometric phases are widely recognized for their intrinsic resilience against noise. Capitalizing on their complementary merits, we propose a noise-resilient protocol to realize nonadiabatic geometric quantum computation with binomial codes in a superconducting system composed of a microwave cavity dispersively coupled to a qutrit. The control field is designed by integrating reverse engineering and optimal control. This design provides a customized control protocol featuring strong error-tolerance and inherent noise-resilience. Using experimentally accessible parameters in superconducting systems, numerical simulations show that the protocol yields relatively high average fidelity for geometric quantum gates based on the binomial code, even in the presence of parameter fluctuations and decoherence. Thus, this protocol may provide a practical approach for realizing reliable nonadiabatic geometric quantum computation with binomial codes with current technology.
Optimizing Logical Mappings for Quantum Low-Density Parity Check Codes
This paper develops new compilation and mapping techniques for quantum low-density parity check (LDPC) codes, specifically the Gross code, to reduce error rates in fault-tolerant quantum computing. The authors introduce a two-stage pipeline using hypergraph partitioning and priority-based algorithms to optimize how logical qubits are mapped onto hardware, achieving significant reductions in program failure rates.
Key Contributions
- Two-stage mapping pipeline using hypergraph partitioning for logical qubit placement on Gross code architectures
- Demonstration of up to 36% reduction in error rates from inter-module measurements compared to existing mapping approaches
- Analysis showing that existing NISQ and FTQC mappers are insufficient for LDPC code architectures due to two-level mapping complexity
View Full Abstract
Early demonstrations of fault-tolerant quantum systems have paved the way for logical-level compilation. For fault-tolerant applications to succeed, execution must finish with a low total program error rate (i.e., a low program failure rate). In this work, we study a promising candidate for future fault-tolerant architectures with low spatial overhead: the Gross code. Compilation for the Gross code entails compiling to Pauli Based Computation and then reducing the rotations and measurements to the Bicycle ISA. Depending on the configuration of modules and the placement of code modules on hardware, one can reduce the number of resulting Bicycle instructions to produce a lower overall error rate. We find that NISQ-based and existing FTQC mappers are insufficient for mapping logical qubits on Gross code architectures because (1) they do not account for the two-level nature of the logical qubit mapping problem, which separates into code modules with distinct measurements, and (2) they naively account only for length-two interactions, whereas Pauli products are up to length $n$, where $n$ is the number of logical qubits in the circuit. For these reasons, we introduce a two-stage pipeline that first uses hypergraph partitioning to create in-module clusters, and then executes a priority-based algorithm to efficiently assign clusters onto hardware. We find that our mapping policy reduces the error contribution from inter-module measurements, the largest source of error in the Gross code, by up to $\sim36\%$ in the best case, with an average reduction of $\sim13\%$. On average, we reduce the failure rates from inter-module measurements by $\sim22\%$ with localized factory availability, and by $\sim17\%$ on grid architectures, allowing hardware developers to be less constrained in developing scalable fault-tolerant systems thanks to software-driven reductions in program failure rates.
Secure Quantum Communication: Simulation and Analysis of Quantum Key Distribution Protocols
This paper simulates and analyzes quantum key distribution protocols (BB84, B92, and E91) using IBM Qiskit, evaluating their performance under realistic conditions like noise and eavesdropping. The study aims to assess the practical feasibility of QKD as a secure communication method in the quantum computing era.
Key Contributions
- Simulation-based comparative analysis of three major QKD protocols (BB84, B92, E91) using IBM Qiskit
- Evaluation of protocol performance under realistic quantum channel conditions including noise, decoherence, and eavesdropping attacks
View Full Abstract
Quantum computing poses significant threats to conventional cryptographic techniques such as RSA and AES, motivating the need for quantum secure communication methods. Quantum Key Distribution (QKD) offers information theoretic security based on fundamental quantum principles. This paper presents a simulation-based analysis of well-known QKD protocols, namely BB84, B92, and E91, using the IBM Qiskit framework. Realistic quantum channel effects, including noise, decoherence, and eavesdropping, are modeled to evaluate protocol performance. Key metrics such as error rate, secret key generation, and security characteristics are analyzed and compared. The study highlights practical challenges in QKD implementation, including hardware limitations and channel losses, and discusses insights toward scalable and robust quantum communication systems. The results support the feasibility of QKD as a promising solution for secure communication in the quantum era.
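The basis-sifting step shared by BB84-style protocols can be sketched without Qiskit using an idealized channel. This plain-Python toy models noise as independent bit flips on Bob's results and omits eavesdropping; it is not the paper's Qiskit simulation, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def bb84_sift(n_bits, flip_prob=0.0):
    """Idealized BB84 sifting: keep positions where Alice's and Bob's
    random basis choices match; model channel noise as independent bit
    flips on Bob's measurement results."""
    alice_bits = rng.integers(0, 2, n_bits)
    alice_bases = rng.integers(0, 2, n_bits)
    bob_bases = rng.integers(0, 2, n_bits)
    # matching basis -> Bob reads Alice's bit; mismatched -> random outcome
    bob_bits = np.where(alice_bases == bob_bases,
                        alice_bits, rng.integers(0, 2, n_bits))
    bob_bits = bob_bits ^ (rng.random(n_bits) < flip_prob)
    keep = alice_bases == bob_bases
    return alice_bits[keep], bob_bits[keep]

a, b = bb84_sift(10_000, flip_prob=0.0)
print(np.array_equal(a, b))   # True: noiseless sifted keys agree

a, b = bb84_sift(10_000, flip_prob=0.05)
qber = float(np.mean(a != b))  # quantum bit error rate on the sifted key
```

In a full protocol the measured QBER would then be compared against the security threshold to decide whether to abort or proceed to error correction and privacy amplification.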
CryoCMOS RF multiplexer for superconducting qubit control, readout and flux biasing at millikelvin temperatures with picowatt power consumption
This paper demonstrates a cryogenic CMOS RF multiplexer that operates at extremely low temperatures (10 millikelvin) with ultra-low power consumption, designed to address the input-output bottleneck in large-scale superconducting quantum computers by enabling multiple qubits to share the same control and readout lines.
Key Contributions
- CryoCMOS RF multiplexer operating at 10 mK with record-low 200 pW power consumption
- Demonstration of direct qubit connection with minimal impact on coherence times
- Scalable solution for multiplexing readout, flux, and control lines in superconducting quantum processors
View Full Abstract
Large-scale cryogenic quantum systems are constrained by an input-output bottleneck between room-temperature electronics and millikelvin stages, particularly in superconducting qubit platforms. This bottleneck is most acute for output lines, where bulky and expensive microwave components limit scalability. A promising approach for scalable characterization and testing is to perform signal multiplexing directly at the qubit plane. We demonstrate a cryogenic CMOS (cryoCMOS) RF multiplexer operating at 10 millikelvin with record-low static power consumption of 200 pW. The device provides < 2 dB insertion loss and > 30 dB isolation across DC-8 GHz. Direct connection to transmon qubits marginally affects coherence times in the range of 100 microseconds, enabling multiplexing of readout, flux and, in principle, XY drive lines. This work introduces cryoCMOS multiplexers as valuable tools for scalable, high-throughput cryogenic characterization and testing, and advances co-integrated quantum-classical control for future large-scale quantum processors.
Quantum classification and search algorithms using spinorial representations
This paper presents two quantum algorithms - one for classification and one for search with non-uniform initial conditions - both formulated using Clifford algebras and spinorial representations. The approach provides a unified algebraic framework where quantum states and operators are constructed from spinor representations, with the classification algorithm using orthogonal states for different classes and the search algorithm implementing oracles directly through Clifford algebra generators.
Key Contributions
- Novel algebraic formulation of quantum classification algorithm using spinorial representations
- Unified framework based on Clifford algebras for both classification and search algorithms
- Simplified oracle implementation for quantum search using Clifford algebra generators
View Full Abstract
We propose an algebraic formulation for two distinct quantum algorithms: a quantum classification algorithm and a quantum search algorithm with a non-uniform initial distribution, both based on Clifford algebras and spinorial representations. In the classification algorithm, we exploit properties of spinorial representations to construct orthogonal quantum states associated with different classes, allowing the identification of an item's class through the evaluation of expectation values of operators derived from the generators of the Clifford algebra. In the quantum search algorithm, we consider a database with prior information in which the oracle is implemented directly using generators of the Clifford algebra, simplifying its realization. The proposed approach provides a unified algebraic description for both algorithms, employing spinorial representations in the construction of quantum states and operators. Computational implementations are presented.
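The defining relations of the Clifford algebra generators the abstract refers to can be checked concretely with the Pauli matrices, the standard minimal example (this is textbook algebra, not the paper's construction):

```python
# Pauli X and Z as 2x2 integer matrices. They generate a Clifford algebra:
# X^2 = Z^2 = I and XZ + ZX = 0 (mutual anticommutation), the algebraic
# structure exploited for the orthogonal class states and the oracle.
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
I = [[1, 0], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

assert matmul(X, X) == I and matmul(Z, Z) == I               # generators square to I
assert matadd(matmul(X, Z), matmul(Z, X)) == [[0, 0], [0, 0]]  # {X, Z} = 0
```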
Distinguishing types of correlated errors in superconducting qubits
This paper investigates two types of correlated errors in superconducting qubits - those caused by radiation-induced quasiparticles and those caused by mechanical vibrations from refrigeration equipment. The researchers develop methods to distinguish between these error types and show that certain qubit designs with larger superconducting gaps can protect against both types of correlated errors.
Key Contributions
- Method for distinguishing radiation-induced vs vibration-induced correlated errors in superconducting qubits
- Demonstration that transmon qubits with superconducting gap greater than qubit energy are protected against both radiation and vibration errors
View Full Abstract
Errors in superconducting qubits that are correlated in time and space can pose problems for quantum error correction codes. Radiation from cosmic and terrestrial sources can increase the quasiparticle (QP) density in a superconducting qubit device, resulting in an increased rate of QPs tunneling across proximal Josephson junctions (JJs) and causing correlated errors. Mechanical vibrations, such as those induced by the pulse tube in a dry dilution refrigerator, are also a known source of correlated errors. We present a method for distinguishing these two types of errors by their temporal, spatial, and frequency domain features, enabling physically motivated error-mitigation strategies. We also present accelerometer data to study the correlation between dilution refrigerator vibrations and the errors. We measure arrays of transmon qubits where the difference in superconducting gap across the JJ is less than the qubit energy, as well as those where the gap is greater than the qubit energy, which has been shown to mitigate radiation-induced errors. We show that these latter devices are also protected against vibration-induced errors.
Reducing C-NOT Counts for State Preparation and Block Encoding via Diagonal Matrix Migration
This paper presents algorithms to reduce the number of C-NOT gates needed for quantum state preparation and block encoding, which are fundamental operations in quantum computing. The authors achieve significant improvements in gate counts by developing a diagonal matrix migration technique that takes advantage of how diagonal matrices commute with certain quantum operations.
Key Contributions
- Improved C-NOT count for n-qubit state preparation from (23/24)2^n to (11/12)2^n gates
- Single-ancilla block encoding protocol achieving (11/48)4^n C-NOT count for 2^(n-1)×2^(n-1) matrices
- Diagonal matrix migration technique based on commutativity properties to minimize C-NOT gate usage
- Optimized algorithms for low-rank matrices with C-NOT count (K+(11/12))2^n for rank-K matrices
View Full Abstract
Quantum state preparation and block encoding are versatile and practical input models for quantum algorithms in scientific computing. The circuit complexity of state preparation and block encoding frequently dominates the end-to-end gate complexity of quantum algorithms. We give algorithms with lower C-NOT counts for both state preparation and block encoding. For a general $n$-qubit state, we improve the C-NOT count of the Plesch-Brukner algorithm, proposed in 2011, from $(23/24)2^n$ to $(11/12)2^n$. For block encoding, our single-ancilla protocol for $2^{n-1}\times 2^{n-1}$ matrices uses the spectral norm as subnormalization and achieves a C-NOT count leading term $(11/48)4^n$. This result even exceeds the lower bound of $(1/4)4^n$ for $n$-qubit unitary synthesis. Further optimization is performed for low-rank matrices, which frequently arise in practical applications. Specifically, we achieve the C-NOT count leading term $(K+(11/12))2^n$ for a rank-$K$ matrix. Our approach builds upon the recursive block-ZXZ decomposition from Krol et al. and introduces a diagonal matrix migration technique based on the commutativity of the diagonal matrix and the uniformly controlled rotation about the $z$-axis to minimize the use of C-NOT gates.
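The size of the state-preparation improvement follows directly from the quoted leading terms; evaluating them for a concrete qubit count makes the saving explicit (the choice n = 10 is ours, for illustration):

```python
# Leading-term C-NOT counts from the abstract, evaluated at n = 10 qubits.
n = 10
plesch_brukner = (23 / 24) * 2**n   # prior state-preparation count, ~981.3
this_work      = (11 / 12) * 2**n   # improved count, ~938.7

# Relative saving: (23/24 - 11/12) / (23/24) = 1/23, i.e. about 4.3% fewer
# C-NOT gates at leading order, independent of n.
saving = 1 - this_work / plesch_brukner
```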
Chipmunq: Fault-Tolerant Compiler for Chiplet Quantum Architectures
This paper presents Chipmunq, a specialized compiler designed to map fault-tolerant quantum circuits onto modular chiplet quantum computer architectures. The compiler addresses the challenge of efficiently compiling large-scale quantum error correction circuits while managing the constraints of distributed quantum hardware connected by noisy inter-chiplet links.
Key Contributions
- First hardware-aware compiler specifically designed for fault-tolerant quantum circuits on modular chiplet architectures
- Quantum-error-correction-aware partitioning strategy that preserves logical qubit patch integrity
- Significant improvements in compilation efficiency and circuit performance metrics including 13.5x speedup and 86.4% depth reduction
View Full Abstract
As quantum computing advances toward fault-tolerance through quantum error correction, modular chiplet architectures have emerged to provide the massive qubit counts required while overcoming fabrication limits of monolithic chips. However, this transition introduces a critical compilation gap: existing frameworks cannot handle the scale of fault-tolerant quantum circuits while managing the noisy, sparse interconnects of chiplet backends. We present Chipmunq, the first hardware-aware compiler for mapping and routing fault-tolerant circuits onto modular architectures. Chipmunq employs a quantum-error-correction-aware partitioning strategy that preserves the integrity of logical qubit patches, preventing prohibitive gate overheads common in general-purpose compilers. Our evaluation demonstrates that Chipmunq achieves a 13.5x speedup in compilation time compared to state-of-the-art tools. By incorporating chiplet constraints and defective qubits, it reduces circuit depth by 86.4% and SWAP gate counts by 91.4% across varying code distances. Crucially, Chipmunq overcomes heterogeneous inter-chiplet links, improving logical error rates by up to two orders of magnitude.
A Scalable Open-Source QEC System with Sub-Microsecond Decoding-Feedback Latency
This paper presents an open-source quantum error correction (QEC) system that integrates real-time qubit control with ultra-fast error syndrome decoding and correction feedback. The system achieves 446 nanosecond end-to-end latency for a distance-3 surface code and can theoretically scale to handle ~881 physical qubits with sub-microsecond latency.
Key Contributions
- First fully integrated open-source QEC system with sub-microsecond decoding-feedback latency
- Scalable distributed multi-board FPGA architecture that can handle up to distance-21 surface codes
- Complete hardware platform ready for deployment with superconducting qubits including real-time control and communication
View Full Abstract
Quantum error correction (QEC) is essential for realizing large-scale, fault-tolerant quantum computation, yet its practical implementation remains a major engineering challenge. In particular, QEC demands precise real-time control of a large number of qubits and low-latency, high-throughput and accurate decoding of error syndromes. While most prior work has focused primarily on decoder design, the overall performance of any QEC system depends critically on all its subsystems including control, communication, and decoding, as well as their integration. To address this challenge, we present an open-source, fully integrated QEC system built on RISC-Q, a generator for RISC-V-based quantum control architectures. Implemented on RFSoC FPGAs, our system prototype integrates real-time qubit control, a scalable distributed multi-board architecture, and the state-of-the-art hardware QEC decoder within a low-latency, high-throughput decoding pipeline, forming a complete hardware platform ready for deployment with superconducting qubits. Experimental evaluation on a three-board prototype based on AMD ZCU216 RFSoCs demonstrates an end-to-end QEC decoding-feedback latency of 446 ns for a distance-3 surface code, including syndrome aggregation, network communication, syndrome decoding, and error distribution. Extrapolating from measured subsystem performance and state-of-the-art decoder benchmarks, the architecture can achieve sub-microsecond decoding-feedback latency up to a distance-21 surface code ($\sim$881 physical qubits) when scaled to larger hardware configurations.
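The "$\sim$881 physical qubits" figure for distance 21 is consistent with the rotated surface code layout ($d^2$ data qubits plus $d^2 - 1$ measure qubits); assuming that layout, a quick check:

```python
def surface_code_qubits(d):
    # Rotated surface code: d*d data qubits plus d*d - 1 measure qubits.
    return 2 * d * d - 1

assert surface_code_qubits(3) == 17    # distance-3 code in the 446 ns demo
assert surface_code_qubits(21) == 881  # matches the ~881-qubit extrapolation
```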
Monolithic Segmented 3D Ion Trap for Quantum Technology Applications
This paper presents a new design for ion trap quantum computers using a monolithic 3D fused silica structure that can trap heavy ions like Yb+ and Ba+ with very low heating rates and high optical access. The researchers demonstrate high-fidelity two-qubit gate operations (99.3%) and establish this as a scalable platform for quantum computing with trapped ions.
Key Contributions
- Development of monolithic 3D fused silica blade trap with 250 μm ion-electrode distance enabling stable high RF voltage operation
- Demonstration of 99.3% two-qubit gate fidelity with heavy ions (Yb+) and low motional heating rates (1.1 quanta/s)
- Achievement of high numerical aperture optical access (0.7 NA) while maintaining deep trapping potentials for scalable quantum computing
- Establishment of modular platform suitable for quantum simulation, computation, metrology and networking applications
View Full Abstract
Monolithic three-dimensional (3D) Paul traps combine the high-precision microfabrication of two-dimensional (2D) chip traps with the deep trapping potentials and low heating rates characteristic of macroscopic Paul traps, which are typically manually assembled. However, achieving low motional heating rates and optical access with a high numerical aperture (NA) while maintaining the high radio-frequency (RF) voltages required for heavy ionic species, such as Yb$^{+}$ and Ba$^{+}$, remains a significant technical challenge. In this work, we present a segmented, monolithic 3D fused silica blade trap, featuring an ion-electrode distance of 250 $\mu$m with stable operation at high RF voltages. We benchmark the performance of the trap using Yb$^{+}$ ions, demonstrating axially homogeneous trapping potentials for 200 $\mu$m around the axial center of the trap, high multi-directional optical access (up to 0.7 NA), and a radial motional heating rate as low as 1.1 $\pm$ 0.1 quanta/s at radial trap frequencies of about 3 MHz near room temperature. Furthermore, we observe a motional Ramsey coherence time, $T_{2}$, of around 95 ms for the radial center-of-mass mode. We demonstrate a two-qubit gate fidelity of ${99.3}^{+0.7}_{-1.5}\%$ with state preparation and measurement correction. These results establish fused-silica monolithic blade traps as a scalable, modular platform for quantum simulation, computation, metrology, and networking with heavy ionic species.
CSS codes from the Bruhat order of Coxeter groups
This paper develops a new method for constructing CSS quantum error-correcting codes using the mathematical structure of Coxeter groups and their Bruhat ordering. The approach generates families of CSS codes with controllable parameters and stabilizer weights by exploiting the geometric properties of these algebraic structures.
Key Contributions
- Novel method for generating CSS codes using Coxeter group Bruhat order and chain complexes
- Construction of CSS code families with controlled stabilizer weights and parameters, including codes with thousands of qubits
- Development of weight-reduction techniques for handling heavy stabilizers in irregular weight distributions
View Full Abstract
I introduce a method to generate families of CSS codes with interesting code parameters. The object of study is Coxeter groups, both finite and infinite (reducible or not), and a geometrically motivated partial order of Coxeter group elements named after Bruhat. The Bruhat order is known to provide a link to algebraic topology -- it doubles as a face poset capturing the inclusion relations of the $p$-dimensional cells of a regular CW complex and that is what makes it interesting for QEC code design. Assisted by the Bruhat face poset interval structure unique to Coxeter groups I show that the corresponding chain complexes can be turned into multitudes of CSS codes. Depending on the approach, I obtain CSS codes (and their families) with controlled stabilizer weights, for example $[6006, 924, \{{\leq14},{\leq7}\}]$ (stabilizer weights 14 and 9) and $[22880,3432,\{{\leq8},{\leq16}\}]$ (weights 16 and 10), and CSS codes with highly irregular stabilizer weight distributions such as $[571,199,\{5,5\}]$. For the latter, I develop a weight-reduction method to deal with rare heavy stabilizers. Finally, I show how to extract four-term (length three) chain complexes that can be interpreted as CSS codes with a metacheck.
Universal Weakly Fault-Tolerant Quantum Computation via Code Switching in the [[8,3,2]] Code
This paper presents a fault-tolerant quantum computing protocol that achieves universal quantum computation by switching between two versions of an [[8,3,2]] quantum error correction code, where one supports single-qubit operations and the other supports multi-qubit gates, circumventing theoretical limitations on gate sets within single codes.
Key Contributions
- Development of a fault-tolerant code-switching protocol between two versions of the [[8,3,2]] quantum error correction code
- Demonstration of universal quantum computation using postselected error detection with quadratic logical error suppression
- Numerical validation through implementation of Grover's search algorithm on three logical qubits
View Full Abstract
Code-switching offers a route to universal, fault-tolerant quantum computation by circumventing the limitation implied by the Eastin-Knill theorem against a universal transversal gate set within a single quantum code. Here, we present a fault-tolerant code-switching protocol between two versions of the $[[8, 3, 2]]$ code. One version supports weakly fault-tolerant single-qubit Clifford gates, while the other supports a logical $\overline{\mathrm{CCZ}}$ gate via transversal $T/T^\dagger$ together with logical $\overline{\mathrm{CZ}}$, $\overline{\mathrm{CNOT}}$, and $\overline{\mathrm{SWAP}}$ gates. Because both codes have distance 2, the protocol operates in a postselected, error-detecting regime: single faults lead to detectable outcomes, and accepted runs exhibit quadratic suppression of logical error rates. This yields a universal scheme for postselected fault-tolerant computation. We validate the protocol numerically through simulations of state preparation, code switching, and a three-logical-qubit implementation of Grover's search.
A direct controlled-phase gate between microwave photons
This paper demonstrates a new method to create direct interactions between microwave photons in superconducting cavities without exciting ancillary nonlinear elements, which reduces noise and decoherence. The researchers use this approach to implement a controlled-phase gate that directly entangles photons, providing a key building block for fault-tolerant bosonic quantum computing.
Key Contributions
- Engineering a Raman-assisted cross-Kerr interaction between microwave photons without exciting nonlinear elements
- Implementing a direct controlled-phase gate between oscillators that operates within bosonic code spaces
- Demonstrating photon-number parity mapping for error detection while preserving coherence
- Expanding the bosonic cQED toolbox for fault-tolerant quantum computing
View Full Abstract
Useful quantum information processing ultimately requires operations over large Hilbert spaces, where logical information can be encoded efficiently and protected against noise. Harmonic oscillators naturally provide access to such high-dimensional spaces and enable hardware-efficient, error-correctable bosonic encodings. However, direct entangling operations between oscillators remain an outstanding challenge. Existing strategies typically rely on parametrically activating interactions that populate the excited states of an ancillary nonlinear element. This induces an effective interaction between the oscillators, at the expense of introducing additional dissipation channels and potential leakage from the encoded manifold. Here, we engineer a Raman-assisted cross-Kerr interaction between microwave photons hosted in two superconducting cavities, without exciting the nonlinear element, thereby suppressing coupler-induced decoherence. This approach generates a direct coupling between microwave photons that is exploited to implement a controlled-phase gate within the single- and two-photon subspaces of two oscillators, directly entangling them. Finally, we harness this dynamics to map the photon-number parity of a storage cavity onto an auxiliary oscillator rather than a nonlinear element, enabling error detection while protecting the storage mode from measurement-induced decoherence. Our work expands the bosonic circuit quantum electrodynamics (cQED) toolbox by enabling coherence-preserving direct photon-photon interactions between oscillators. This realizes an entangling gate that operates entirely within a bosonic code space while suppressing decoherence from nonlinear ancilla excitations, providing a key primitive for fault-tolerant bosonic quantum computing.
Simulating the Open System Dynamics of Multiple Exchange-Only Qubits using Subspace Monte Carlo
This paper develops a Monte Carlo simulation method for modeling multiple exchange-only qubits in open quantum systems by leveraging the fact that spin projection quantum numbers remain unchanged under exchange operations. The method reduces computational complexity from 8^(2n) to 3^(2n) dimensions and is applied to study multi-round Bell state stabilization circuits using 6 exchange-only qubits.
Key Contributions
- Development of Subspace Monte Carlo method that reduces computational complexity for simulating multiple exchange-only qubits from 8^(2n) to 3^(2n) dimensions
- Demonstration of the method on multi-round Bell state stabilization circuits with reset-if-leaked gadgets using 6 EO qubits
View Full Abstract
We propose a Monte Carlo based method for simulating the open system dynamics of multiple exchange-only (EO) qubits. In the EO encoding, the total spin projection quantum number along the $z$-axis of the three constituent spins remains unchanged under exchange operations, in contrast to the open system (or multi-qubit miscalibration) setting where coherent and incoherent mixing of states with different quantum numbers occurs. In our approach, we choose to measure the total spin component along the $z$-axis of each EO qubit after every logical quantum operation, which decoheres coherent mixtures of states with different spin projection quantum numbers. Independent simulations thus give different trajectories of the system in the associated subspaces, so we refer to this method as the Subspace Monte Carlo method. With each EO qubit having a definite spin projection quantum number, the density matrix of $n$ qubits can be represented by a vector of dimension $3^{2n}$, instead of $8^{2n}$, with an additional vector of dimension $n$ to label the quantum number of each qubit. We show that this approximation of the dynamics remains faithful to the true dynamics when the simulated circuits twirl the noise, converting coherent errors to stochastic errors, which can be achieved using randomized compiling. We use this simulation approach to study how correlations in measurement outcomes of circuits with reset-if-leaked gadgets, such as a multi-round Bell state stabilization circuit that uses 6 EO qubits, are affected by the choice of CNOT implementations.
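The dimension counts quoted in the abstract can be checked directly; for the paper's 6-qubit Bell-state stabilization study, the subspace representation is roughly five orders of magnitude smaller:

```python
n = 6  # EO qubits in the Bell-state stabilization circuit

# Each EO qubit is three spin-1/2 particles (Hilbert dimension 8), so the
# full n-qubit density matrix, flattened to a vector, has dimension 8^(2n).
full = 8 ** (2 * n)

# With each qubit pinned to a definite spin-projection subspace, the
# density matrix fits in a vector of dimension 3^(2n).
subspace = 3 ** (2 * n)

reduction = full / subspace  # ~1.3e5 for n = 6
```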
Velocity-Enabled Quantum Computing with Neutral Atoms
This paper introduces a new approach to quantum computing with neutral atoms that uses atom velocity as a control parameter, enabling selective operations on moving atoms through Doppler shifts and spatial phase manipulation. The researchers demonstrate key quantum error correction primitives including high-fidelity gates, cluster state generation, and error detection codes while reducing hardware complexity.
Key Contributions
- Introduction of velocity as a new degree of freedom for neutral atom quantum computing architectures
- Demonstration of velocity-selective state preparation and measurement using controlled Doppler shifts
- Achievement of 99.86% fidelity CZ gates and implementation of quantum error correction primitives including 8-qubit cluster states and [[4,2,2]] error detection code
- Reduction of hardware overhead by enabling selective operations on moving atoms with global control beams
View Full Abstract
Realizing error-corrected logical qubits is a central goal for the current development of digital quantum computers. Neutral atoms offer the opportunity to coherently shuttle atoms for realizing efficient quantum error correction based on long-range connectivity and parallel atom transport. Nevertheless, time overheads in shuttling atoms and complex control hardware pose challenges to scaling current architectures. Here, we introduce atom velocity as a new degree of freedom in neutral-atom architectures tailored to quantum error correction. Through controlled Doppler shifts, we demonstrate velocity-selective mid-circuit state preparation and measurement on moving atoms, leaving spectator atoms unaffected. Furthermore, we achieve on-the-fly local single-qubit rotations by mapping micron-scale atom displacements to the spatial phase of global control beams. Complementing these techniques with CZ entangling gates with a fidelity of 99.86(4)%, we experimentally implement key primitives for quantum error correction and measurement-based quantum computing. We generate an eight-qubit entangled cluster state with an average stabilizer value of 0.830(4), realize a [[4,2,2]] error-detection code with 99.0(3)% logical Bell-state fidelity, and perform stabilizer measurements using a flying ancilla. By enabling selective operations on continuously moving atoms using only global beams, this velocity-enabled architecture reduces hardware overhead while minimizing shuttling and transfer delays, opening a new pathway for fast, large-scale atom-based quantum computation.
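The scale of the Doppler shifts involved is easy to estimate. The sketch below is generic first-order Doppler arithmetic with placeholder numbers of our choosing (the paper does not quote a wavelength or transport velocity here), intended only to show why velocity can act as an addressing knob:

```python
# First-order Doppler shift seen by a moving atom: delta_f = v / lam.
# Both values below are assumptions for illustration, not from the paper.
lam = 780e-9   # assumed optical wavelength (m), typical for alkali atoms
v   = 0.5      # assumed atom transport velocity (m/s)

doppler_shift_hz = v / lam   # ~0.64 MHz
```

A sub-MHz shift of this size can move a moving atom in or out of resonance with a narrow transition while stationary spectator atoms stay unaffected, which is the selectivity mechanism the summary describes.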
Error semitransparent universal control of a bosonic logical qubit
This paper demonstrates error semi-transparent gates for bosonic logical qubits, achieving universal quantum computation with reduced errors from photon loss. The researchers show a five-fold reduction in infidelity and construct a complete gate set including non-Clifford operations necessary for fault-tolerant quantum computing.
Key Contributions
- Introduction of error semi-transparent framework for universal bosonic logical qubit gates
- Demonstration of complete gate set {X, H, T} with five-fold infidelity reduction
- Construction of composite non-Clifford operations using error-corrected bosonic qubits
View Full Abstract
Bosonic codes offer hardware-efficient approaches to logical qubit construction and hosted the first demonstration of beyond-break-even logical quantum memory. However, such accomplishments were done for idling information, and realization of fault-tolerant logical operations remains a critical bottleneck for universal quantum computation in scaled systems. Error-transparent (ET) gates offer an avenue to resolve this issue, but experimental demonstrations have been limited to phase gates. Here, we introduce a framework based on dynamic encoding subspaces that enables simple linear drives to accomplish universal gates that are error semi-transparent (EsT) to oscillator photon loss. With an EsT logical gate set of {X, H, T}, we observe a five-fold reduction in infidelity conditioned on photon loss, demonstrate extended active-manipulation lifetimes with quantum error correction, and construct a composite EsT non-Clifford operation using a sequence of eight gates from the set. Our approach is compatible with methods for detectable ancilla errors, offering an approach to error-mitigated universal control of bosonic logical qubits with the standard quantum control toolkit.
Asymptotically good bosonic Fock state codes: Exact and approximate
This paper develops new quantum error correction codes for photonic quantum systems that can protect against photon loss (amplitude damping). The authors prove that exact and approximate error correction are equivalent for these codes and construct families of asymptotically good codes with bounded photon numbers per mode.
Key Contributions
- Proved equivalence of exact and approximate error correction for Fock state codes against amplitude damping
- Constructed asymptotically good bosonic Fock state codes with bounded per-mode occupancy
- Established connection to permutation invariant codes and extended results to qudit systems
View Full Abstract
We examine exact and approximate error correction for multi-mode Fock state codes protecting against the amplitude damping noise. Based on a new formalization of the truncated amplitude damping channel, we show the equivalence of exact and approximate error correction for Fock state codes against random photon losses. Leveraging the recently found construction method based on classical codes with large distance measured in the $\ell_1$ metric, we construct asymptotically good (exact and approximate) Fock state codes. These codes have an additional property of bounded per-mode occupancy, which increases the coherence lifetime of code states and reduces the photon loss probability, both of which have a positive impact on the stability of the system. Using the relation between Fock state code construction and permutation invariant (PI) codes, we also obtain families of asymptotically good qudit PI codes as well as codes in monolithic nuclear state spaces.
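The $\ell_1$ metric the construction relies on is just the Manhattan distance between occupation-number vectors. A minimal sketch, with occupation patterns we made up for illustration (not codewords from the paper):

```python
def l1_distance(u, v):
    """l1 (Manhattan) distance between per-mode Fock occupation vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

# Losing one photon from one mode moves a state by l1 distance exactly 1,
# so codewords kept far apart in the l1 metric remain distinguishable
# after multiple photon losses -- the intuition behind the construction.
assert l1_distance([2, 0, 1], [2, 0, 0]) == 1   # a single photon loss
assert l1_distance([3, 0, 0], [0, 3, 0]) == 6   # two well-separated patterns
```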
Scalable Self-Testing of Mutually Anticommuting Observables and Maximally Entangled Two-Qudits
This paper develops a method to verify quantum systems using Bell inequalities, specifically testing high-dimensional entangled states and mutually anticommuting measurements without needing to trust the measurement devices. The framework can scale to certify increasingly complex quantum resources needed for advanced quantum technologies.
Key Contributions
- Simultaneous self-testing framework for maximally entangled two-qudit states and mutually anticommuting observables
- Derivation of optimal quantum bounds using Sum-of-Squares decomposition without dimensional assumptions
- Proof that maximal quantum violation corresponds to Clifford algebra representations with minimal required dimensions
- Establishment of quantitative robustness bounds relating Bell value deviations to strategy fidelity
View Full Abstract
The next frontier in device-independent quantum information lies in the certification of scalable and parallel quantum resources, which underpin advanced quantum technologies. We put forth a simultaneous self-testing framework for a maximally entangled two-qudit state of local dimension $m_*=2^{\lfloor n/2 \rfloor}$ (equivalently $\lfloor n/2 \rfloor$ copies of maximally entangled two-qubit pairs), together with $n$ mutually anticommuting observables on one side. To this end, we employ an $n$-settings Bell inequality comprising two space-like separated observers, Alice and Bob, having $2^{n-1}$ and $n$ measurement settings, respectively. We derive the local ontic bound of this inequality and, crucially, employ the Sum-of-Squares decomposition to determine the optimal quantum bound without presupposing the dimension of the state or observables. We then establish that any physical realisation achieving the maximal quantum violation must, up to local isometries and complex conjugation, correspond to a reference strategy consisting of a maximally entangled state of local dimension of at least $2^{\lfloor n/2 \rfloor}$ and local observables forming an irreducible representation of the Clifford algebra. This construction thereby demonstrates that the minimal dimension compatible with $n$ mutually anticommuting observables is naturally self-tested by the maximal violation of the proposed Bell functional. Finally, we analyse the robustness of the protocol by establishing quantitative bounds relating deviations in the observed Bell value to the fidelity between the realised and the ideal strategies. Our results thus provide a scalable, dimension-independent route for the certification of high-dimensional entanglement and Clifford measurements in a fully device-independent framework.
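The scaling the abstract certifies, $m_* = 2^{\lfloor n/2 \rfloor}$, is simple to tabulate; this helper just evaluates that formula from the abstract:

```python
def min_local_dimension(n):
    """Minimal local dimension hosting n mutually anticommuting observables,
    per the abstract: m* = 2^floor(n/2)."""
    return 2 ** (n // 2)

assert min_local_dimension(2) == 2   # e.g. Pauli X and Z on a single qubit
assert min_local_dimension(3) == 2   # X, Y, Z still fit in dimension 2
assert min_local_dimension(4) == 4   # certifying one extra Bell pair
assert min_local_dimension(7) == 8
```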
Cavity-Free Distributed Quantum Computing with Rydberg Ensembles via Collective Enhancement
This paper presents a quantum networking architecture that uses Rydberg atom ensembles to create entangled connections between distant quantum computers without needing optical cavities. The approach achieves high-fidelity quantum gates and atom-photon conversion, enabling practical distributed quantum computing with entanglement generation rates exceeding 600 Hz at 20 km distances.
Key Contributions
- Cavity-free quantum networking architecture using Rydberg atom ensembles
- High-fidelity distributed quantum computing protocol with 99.93% gate fidelity and >97.5% Bell state fidelity
- Practical scalable approach achieving 600+ Hz entanglement rates at 20 km separation
View Full Abstract
A complete architecture for cavity-free quantum networking based on collective enhancement in Rydberg atom ensembles is presented. The protocol exploits Rydberg blockade and phase-matched directional emission to eliminate optical cavities without sacrificing performance. The architecture comprises three steps: (i) local control-ensemble entanglement via Rydberg blockade with fidelity $F_{\mathrm{gate}} \approx 99.93\%$; (ii) atom-photon conversion via Raman transitions, achieving directional emission ($\eta_{\mathrm{dir}} \approx 35\%$) and single-node efficiency $\eta_{\mathrm{node}} \approx 19\%$; and (iii) remote atom-atom entanglement via Hong-Ou-Mandel interference, producing Bell states with fidelity $F > 97.5\%$. With quantum memories enabling retry protocols, entanglement generation rates exceed $600$ Hz at 20 km separation. This cavity-free approach provides a practical and scalable pathway for distributed quantum computing and secure quantum communication.
Protecting Distributed Blockchain with Twin-Field Quantum Key Distribution: A Quantum Resistant Approach
This paper proposes a quantum-resistant blockchain architecture that uses twin-field quantum key distribution (TF-QKD) to protect distributed blockchain networks from quantum computing threats. The approach aims to overcome distance and scalability limitations of traditional QKD systems by implementing a measurement-device-independent topology that reduces infrastructure complexity.
Key Contributions
- Scalable quantum-resistant blockchain architecture using TF-QKD protocol
- Linear scaling optimization reducing infrastructure complexity from quadratic to linear
- Integration of measurement-device-independent topology to overcome rate-loss limits in quantum networks
View Full Abstract
Quantum computing poses multi-layered security challenges to classical blockchain systems. Quantum-secured blockchains, which rely on quantum key distribution (QKD) to establish secure channels, can address this potential threat. This paper presents a scalable quantum-resistant blockchain architecture designed to address the connectivity and distance limitations of QKD-integrated quantum networks. By leveraging the twin-field (TF) QKD protocol within a measurement-device-independent (MDI) topology, the proposed framework reduces the infrastructure complexity from quadratic to linear scaling. This architecture integrates information-theoretic security with distributed consensus mechanisms, allowing the system to overcome the fundamental rate-loss limits inherent in traditional point-to-point links. The proposed scheme offers a theoretically sound and feasible solution for deploying large-scale, long-distance consortium blockchains.
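The quadratic-to-linear claim follows from simple link counting: a full mesh of point-to-point QKD channels grows as n(n-1)/2, while an MDI/TF-QKD topology with an untrusted central measurement node needs only one channel per participant. A minimal sketch of that counting argument (not the paper's construction):

```python
def point_to_point_links(n):
    """Full mesh of pairwise QKD channels: every pair of the n
    blockchain nodes needs its own link -> O(n^2)."""
    return n * (n - 1) // 2

def tf_qkd_links(n):
    """TF-QKD/MDI star topology: each node connects only to an
    untrusted central measurement station -> O(n)."""
    return n

for n in (10, 50, 100):
    print(n, point_to_point_links(n), tf_qkd_links(n))
```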
Adaptive Control of Stochastic Error Accumulation in Fault-Tolerant Quantum Computation
This paper presents a machine learning approach called Chronological Deep Q-Network (Ch-DQN) for adaptive quantum error correction that tracks how noise changes over time, rather than treating each error correction cycle independently. The method aims to prevent the gradual accumulation of errors that can cause logical qubits to fail in fault-tolerant quantum computers.
Key Contributions
- Introduction of adaptive error correction using deep reinforcement learning that accounts for temporal noise correlations
- Novel approach treating fault-tolerant quantum computation as a stochastic control problem with hazard accumulation
- Development of Ch-DQN algorithm with backward trajectory refinement and fractional meta-updates for non-stationary noise environments
View Full Abstract
In realistic fault-tolerant quantum computing hardware, non-stationary noise and stochastic drift lead to logical failure through the temporal accumulation of errors, not through independent events. Static decoding and fixed calibration techniques are structurally incompatible with this situation because they do not account for temporal correlations between errors or control-induced back-action. These effects motivate control policies that must track noise evolution across correction cycles, rather than respond to individual syndromes in isolation. We treat fault-tolerant quantum computation as a stochastic control problem, modelled using reduced quantum dynamics in which Pauli error processes are governed by latent noise parameters that vary temporally. From this perspective, logical failure arises through the accumulation of a hazard variable, and the corresponding control objective depends on the full history of observations. Operating under these conditions, a Chronological Deep Q-Network (Ch-DQN) maintains an internal belief state that tracks both noise evolution and accumulated hazard. During training, backward refinement of trajectories is used to sample slowly drifting modes of operation, while runtime inference remains strictly causal. A fractional meta-update stabilizes learning in the presence of non-stationary, control-coupled dynamics. Through multi-distance simulations that incorporate stochastic drift and feedback from decision-making, Ch-DQN suppresses hazard accumulation and extends logical survival time relative to static and recurrent baselines. Error correction in this regime is therefore no longer a static decoding task, but a control process whose success is determined over time by the underlying noise dynamics.
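The hazard-accumulation framing can be illustrated with a toy model: a latent error rate drifts between correction cycles, and the logical survival probability decays with the integrated hazard rather than with independent per-cycle failures. All constants and dynamics below are illustrative assumptions, not the paper's model.

```python
import math, random

random.seed(0)

# Toy version of the paper's framing: a latent noise parameter p_t drifts
# slowly between correction cycles, logical failure is governed by an
# accumulated hazard H_t = sum_t h(p_t), and survival = exp(-H_t).
# All constants and dynamics here are illustrative assumptions.
p, hazard = 1e-3, 0.0
survival = []
for t in range(200):
    p = max(1e-5, p + random.gauss(0.0, 5e-5))  # stochastic drift of the latent rate
    hazard += 50.0 * p                          # per-cycle hazard contribution (assumed)
    survival.append(math.exp(-hazard))          # P(no logical failure by cycle t)

print(f"accumulated hazard: {hazard:.2f}, survival probability: {survival[-1]:.4f}")
```

A static decoder corresponds to fixing the hazard model once; a policy like Ch-DQN instead conditions its actions on an estimate of the drifting latent rate.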
Quantifying surface losses in superconducting aluminum microwave resonators
This paper investigates how surface defects in aluminum oxide layers limit the performance of superconducting quantum devices. The researchers measure microwave losses caused by two-level systems in aluminum resonators and find that native aluminum oxide contributes significantly to qubit decoherence, providing insights for improving quantum device fabrication.
Key Contributions
- Quantified that surface two-level systems in 2.7 nm aluminum oxide layers are the primary source of losses in superconducting aluminum resonators
- Demonstrated that aluminum interface defects contribute approximately 27% of the relaxation rate in state-of-the-art tantalum-on-silicon qubits
- Showed that HF treatment removes aluminum oxide but rapid regrowth limits long-term improvements in device performance
View Full Abstract
The recent realization of millisecond-scale coherence with tantalum-on-silicon transmon qubits showed that depositing the Al/AlOx/Al Josephson junction in a high purity, ultrahigh vacuum environment was critical for achieving lifetime-limited coherence, motivating careful examination of the aluminum surface two-level system (TLS) bath. Here, we measure the microwave absorption arising from surface TLSs in superconducting aluminum resonators, following methodology developed for tantalum resonators. We vary film and surface properties and correlate microwave measurements with materials characterization. We find that the lifetimes of superconducting aluminum resonators are primarily limited by surface losses associated with TLSs in the 2.7 nm-thick native AlOx. Treatment with 49% HF removes surface AlOx completely; however, rapid oxide regrowth limits improvements in surface loss and long term device stability. Using these measurements we estimate that TLSs in aluminum interfaces contribute around 27% of the relaxation rate of state-of-the-art tantalum-on-silicon qubits that incorporate aluminum-based Josephson junctions.
Beta Tantalum Transmon Qubits with Quality Factors Approaching 10 Million
This paper demonstrates that beta-phase tantalum can be used to create high-quality superconducting qubits for quantum computers, achieving quality factors approaching 10 million despite previous beliefs that this material phase would be inferior to alpha-phase tantalum.
Key Contributions
- Demonstrated that beta-Ta can achieve exceptionally high qubit quality factors (up to 10.1 million), challenging previous assumptions about material requirements
- Established beta-Ta on sapphire as a viable platform for scalable qubit fabrication since beta-Ta readily nucleates at room temperature
- Characterized the loss mechanisms in beta-Ta qubits, showing surface two-level systems as the dominant loss channel
View Full Abstract
Tantalum-based transmon qubits are a promising platform for building large-scale quantum processors. So far, these qubits have been made from tantalum films grown exclusively in the alpha phase (α-Ta). The beta phase of tantalum (β-Ta) readily nucleates at room temperature, making it attractive for scalable qubit fabrication. However, β-Ta is widely believed to be detrimental to qubit performance because it has a lower superconducting critical temperature than α-Ta. We challenge this prevailing belief by fabricating low-loss transmon qubits from β-Ta films on sapphire. Across 11 qubits, the mean time-averaged quality factor is (5.6 ± 2.3) × 10^6, with the best qubit recording a time-averaged quality factor of (10.1 ± 1.3) × 10^6. Resonator studies demonstrate that the dominant microwave loss channel is surface two-level systems, with the surface loss contribution for β-Ta being about twice that of α-Ta. β-Ta films exhibit significant kinetic inductance, consistent with an estimated magnetic penetration depth of (1.78 ± 0.02) μm. This work establishes β-Ta on sapphire as a material platform for realizing low-loss transmon qubits and other superconducting devices such as compact resonators, kinetic inductance detectors, and quasiparticle traps.
Circuit Optimization for Universality Transformation
This paper presents more efficient quantum circuit constructions for transforming between different universal gate sets, specifically showing how to convert a computationally universal gate set to a strictly universal one using fewer resources. The work demonstrates that any multi-qubit quantum operation can be generated using only real single-qubit gates, CCZ gates, and a single special quantum state.
Key Contributions
- Circuit optimization that eliminates non-imaginary ancillary qubits in universality transformation
- Extension to continuous gate-set setting showing exact generation of any multi-qubit unitary using constrained gate set
View Full Abstract
It is known that a computationally universal gate set $\{H,CCZ\}$ can be transformed to a strictly universal one $\{Λ(S), H\}$ using one maximally imaginary state $|+i\rangle$ and non-imaginary ancillary qubits. We achieve this transformation with a shorter circuit that eliminates non-imaginary ancillary qubits. We further extend this to the continuous gate-set setting, showing that any multi-qubit unitary can be exactly generated by real single-qubit unitary gates, $CCZ$ gates and $|+i\rangle$.
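The role of the imaginary resource can be sanity-checked numerically: $H$ and $CCZ$ are real matrices, so any circuit over them alone has a real matrix, while the target gate $Λ(S)$ (controlled-$S$) and the state $|+i\rangle$ are genuinely complex. A minimal check, for illustration only:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CCZ = np.diag([1.0, 1, 1, 1, 1, 1, 1, -1])
CS = np.diag([1, 1, 1, 1j])              # Lambda(S): the controlled-S gate
plus_i = np.array([1, 1j]) / np.sqrt(2)  # the maximally imaginary state |+i>

# {H, CCZ} are real, so circuits built from them alone stay real;
# the imaginarity needed for strict universality comes from |+i>.
assert np.isrealobj(H) and np.isrealobj(CCZ)
assert not np.isrealobj(CS) and not np.allclose(plus_i.imag, 0)
print("real gate set; complex target gate and resource state")
```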
On-Demand Correlated Errors in Superconducting Qubits from a Particle Accelerator
This paper describes a new experimental facility that uses a particle accelerator to study how ionizing radiation creates correlated errors in superconducting quantum computers. The researchers can now generate radiation-induced errors on demand to better understand and characterize how cosmic rays and other high-energy particles interfere with quantum computations.
Key Contributions
- Development of a controllable facility coupling electron linear accelerator to dilution refrigerator for studying radiation effects on quantum systems
- Demonstration of on-demand generation and characterization of radiation-induced qubit errors including relaxation, excitation, and detuning errors
- Systematic study showing error signatures depend on junction placement and superconducting gap properties
View Full Abstract
Ionizing radiation is a known source of correlated errors in superconducting quantum processors, inhibiting the functionality of quantum error correction surface codes. High-energy photons and charged particles deposit pair-breaking energy into these systems leading to excess quasiparticles near Josephson junctions that increase qubit decoherence. Previous investigations of this problem have relied on ambient, stochastic sources of ionizing radiation or alternative methods of quasiparticle generation. Here, we present a facility that couples an electron linear accelerator (linac) to a dilution refrigerator to study ionizing radiation in quantum systems. A single linac electron closely mimics the energy deposition characteristics of a typical cosmic-ray muon, and we demonstrate the facility's usefulness with a multi-qubit superconducting transmon chip. Characteristic radiation-induced relaxation errors are quickly and easily collected with the speed and timing information of the linac. Additionally, we present qubit excitation and detuning errors that can be difficult to detect without the on-demand source of ionizing radiation. These error signatures are shown to be dependent on the junction placement and surrounding superconducting gaps.
Partially Fault-Tolerant Quantum Computation for Megaquop Applications
This paper analyzes partially fault-tolerant quantum computing approaches for executing large-scale quantum circuits with millions of operations, focusing on the STAR architecture for efficient analog rotations and comparing resource requirements against full fault-tolerant methods. The authors demonstrate that partial fault tolerance could enable practical quantum simulation of condensed matter systems like the 2D Fermi-Hubbard model with hundreds of thousands of qubits.
Key Contributions
- Quantum resource estimation comparison between partial and full fault-tolerant quantum computing architectures
- Development of code growth procedure to reduce factory size for analog rotation state production
- Analysis of space-time trade-offs and identification of optimal circuit regimes for partial FTQC
- Demonstration that 2D Fermi-Hubbard model simulation is well-suited for STAR architecture implementation
View Full Abstract
Partially fault-tolerant quantum computing (FTQC) has recently emerged as a promising approach for the execution of megaquop-scale circuits with millions of logical operations. In this work, we demonstrate the strengths and the limitations of this approach by conducting quantum resource estimation (QRE) of the space--time-efficient analog rotation (STAR) architecture using realistic hardware specifications for superconducting processors, and compare it against the QRE of the full FTQC architecture. We show how the performance of the STAR architecture's protocols is affected by hardware improvements. We also reduce the space requirements for partial FTQC by developing a procedure leveraging code growth to decrease the size of a factory producing analog rotation states. Our results reveal a non-trivial dependence of the optimal pre-growth code distance on the rotation angle with respect to post-growth infidelity. Further, we analyze space--time trade-offs between the factory size and the error-mitigation overhead, and observe that in an application-agnostic setting, there is a Goldilocks zone for circuits in the regime of roughly $10^5$--$10^6$ small-angle rotation gates. We show that quantum simulation of 2D Fermi--Hubbard model systems is a particularly well-suited application for the STAR architecture, requiring only hundreds of thousands of physical qubits and runtimes on the order of minutes for modest system sizes. Due to its favourable algorithmic scaling to larger system sizes, utility-scale simulation of the 2D Fermi--Hubbard model could potentially be attained using partial FTQC.
Asymptotically Optimal Quantum Circuits for Comparators and Incrementers
This paper develops more efficient quantum circuits for basic arithmetic operations like comparisons and increments, achieving optimal performance in terms of gate count, circuit depth, and qubit usage. The authors show these improvements can significantly reduce the complexity of Shor's factoring algorithm from O(n³) to O(n² log² n) depth.
Key Contributions
- Asymptotically optimal quantum circuits for comparators and incrementers with Θ(n) gates and Θ(log n) depth
- Improved Shor's algorithm implementation reducing circuit depth from O(n³) to O(n² log² n)
- General theorem for trading ancilla qubits for control qubits with low overhead
View Full Abstract
We present quantum circuits for comparison and increment operations that achieve an asymptotically optimal gate count of $Θ(n)$ and depth of $Θ(\log n)$ over the Clifford+Toffoli gate set, while using a provably minimal number of qubits. We extend these results to classical-quantum comparators, yielding an improved classical-quantum adder with an optimal qubit count. Given the ubiquity of these operations as algorithmic building blocks, our constructions translate directly into reduced circuit complexity for many quantum algorithms. As a notable example, they can be used to improve a space-efficient circuit for Shor's factoring algorithm, reducing circuit depth from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2 \log^2 n)$ without increasing either the qubit count or the asymptotic gate complexity. Underpinning these results is a general theorem demonstrating how to trade ancilla qubits for control qubits with low overhead in both depth and gate count, providing a broadly applicable tool for quantum circuit design.
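The $Θ(\log n)$ depth of such comparators comes from merging per-bit comparison flags in a balanced binary tree. The classical skeleton below mirrors that tree structure; it illustrates the depth and gate-count scaling, not the paper's quantum circuit.

```python
def compare(a_bits, b_bits):
    """Is a > b?  Per-bit (equal, greater) flags are merged pairwise in a
    balanced tree: O(n) merge operations arranged in O(log n) depth."""
    # leaves, most significant bit first
    flags = [(a == b, a > b) for a, b in zip(a_bits, b_bits)]
    depth = 0
    while len(flags) > 1:
        merged = []
        for i in range(0, len(flags) - 1, 2):
            (eq_hi, gt_hi), (eq_lo, gt_lo) = flags[i], flags[i + 1]
            # the high half decides unless it is all-equal
            merged.append((eq_hi and eq_lo, gt_hi or (eq_hi and gt_lo)))
        if len(flags) % 2:
            merged.append(flags[-1])
        flags = merged
        depth += 1
    return flags[0][1], depth

def bits(x, n):  # big-endian n-bit representation
    return [(x >> (n - 1 - i)) & 1 for i in range(n)]

gt, depth = compare(bits(41235, 16), bits(41234, 16))
print(gt, depth)  # True, depth 4 = log2(16)
```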
Fisher information based lower bounds on the cost of quantum phase estimation
This paper analyzes the fundamental performance limits of quantum phase estimation (QPE) algorithms using Fisher information theory, comparing two main approaches (QFT-QPE and Hadamard test-based QPE) and showing that their optimal choice depends on the overlap between input and target states.
Key Contributions
- Establishes fundamental lower bounds on QPE performance using Fisher information and Cramer-Rao bounds, separating circuit limitations from classical post-processing
- Demonstrates performance crossover between QFT-QPE and HT-QPE paradigms depending on state overlap, with QFT-QPE having more favorable scaling
View Full Abstract
Quantum phase estimation (QPE) is a cornerstone of quantum algorithms designed to estimate the eigenvalues of a unitary operator. QPE is typically implemented through two paradigms with distinct circuit structures: quantum Fourier transform-based QPE (QFT-QPE) and Hadamard test-based QPE (HT-QPE). Existing performance assessments fail to separate the statistical information inherent in the quantum circuit from the efficiency of classical post-processing, thereby obscuring the limits intrinsic to the circuit structure itself. In this study, we employ Fisher information and the Cramer-Rao lower bound to formulate the performance limits of circuit designs independent of the efficiency of classical post-processing. Defining the circuit depth as $T$ and the total runtime as $t_{\rm total}$, our results demonstrate that the achievable scaling is constrained by a non-trivial lower bound on their product $T\,t_{\rm total}$, although previous studies have typically treated the circuit depth $T$ and the total runtime $t_{\rm total}$ as separate resources. Notably, QFT-QPE possesses a more favorable scaling with respect to the overlap between the input state and the target eigenstate corresponding to the desired eigenvalue than HT-QPE. Numerical simulations confirm these theoretical findings, demonstrating a clear performance crossover between the two paradigms depending on the overlap. Furthermore, we verify that practical algorithms, specifically the quantum multiple eigenvalue Gaussian filtered search (QMEGS) and curve-fitted QPE, achieve performance levels closely approaching our derived limits. By elucidating the performance limits inherent in quantum circuit structures, this work concludes that the optimal choice of circuit configuration depends significantly on the overlap.
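The circuit-level limit can be reproduced in miniature: for a depth-$T$ Hadamard test with outcome distribution $p(0|θ) = (1+\cos Tθ)/2$, the classical Fisher information per shot is $T^2$, and the Cramér-Rao bound with $M$ shots (total runtime proportional to $MT$) then ties the achievable precision to the product $T\,t_{\rm total}$. This is a textbook sketch, not the paper's derivation.

```python
import math

def fisher_information(T, theta, eps=1e-6):
    """Classical Fisher information of one depth-T Hadamard-test shot
    with outcome distribution p(0|theta) = (1 + cos(T*theta))/2."""
    def p0(th):
        return (1 + math.cos(T * th)) / 2
    dp0 = (p0(theta + eps) - p0(theta - eps)) / (2 * eps)  # numerical derivative
    p = p0(theta)
    return dp0 ** 2 * (1 / p + 1 / (1 - p))

# I(theta) = T^2: doubling the depth quadruples the per-shot information,
# and Cramer-Rao gives Var(theta) >= 1/(M * T^2) for M shots, i.e. a bound
# governed by the product of depth and total runtime.
for T in (1, 4, 16):
    print(T, round(fisher_information(T, theta=0.7), 3))
```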
Optimal control with flag qubits
This paper introduces Flag-GRAPE, a new quantum control algorithm that uses auxiliary 'flag' qubits to actively combat decoherence in quantum operations. By correlating noise errors with measurable ancilla states and using post-selection, the method achieves 51% better fidelity than traditional approaches and converts random errors into more manageable erasure errors for quantum error correction.
Key Contributions
- Introduction of Flag-GRAPE algorithm that actively tailors noise structure using flag ancillas for improved quantum control
- Demonstration of 51% infidelity reduction compared to traditional methods and conversion of decoherence into heralded erasure errors
- Integration with quantum error correction showing enhanced logical state preparation for fault-tolerant quantum computing
View Full Abstract
High-fidelity quantum operations are the cornerstone of fault-tolerant quantum computation. In open quantum systems, traditional optimal control only passively resists decoherence, leaving environment-induced uncertainty as a fundamental performance bottleneck. To overcome this, we propose a new optimal control framework with flag ancillas and the Flag-GRAPE algorithm, which can actively tailor the system's noise structure. Through embedding post-selection directly into the objective function, Flag-GRAPE correlates decoherence errors with the ancilla's unexpected state. Subsequent measurement and post-selection effectively expel this uncertainty, circumventing the fidelity bounds of traditional control. Numerical simulations in a superconducting quantum circuit demonstrate a $51\%$ reduction in infidelity compared to traditional closed-system pulses and also show that such enhancement is robust across broad noise regimes. Furthermore, by actively converting unstructured decoherence into heralded erasure errors, Flag-GRAPE is inherently compatible with quantum error correction. We demonstrate this by initializing a logical cat-code state, showing that the combination between Flag-GRAPE and QEC yields immediate state preparation enhancements. This new framework can reduce hardware overhead for fault-tolerant architectures and open up a practical path toward logical state preparation gain in near-term experiments.
Measurement-Induced State Transitions in Inductively-Shunted Transmons
This paper studies measurement-induced state transitions (MIST) in superconducting quantum bits, where fast qubit measurements cause unwanted energy transitions. The researchers add inductive shunts to transmon qubits to stabilize these problematic transitions and make them more predictable.
Key Contributions
- Demonstration of inductive shunts to eliminate offset charge dependence in MIST
- Experimental characterization and theoretical modeling of MIST in inductively-shunted transmons
View Full Abstract
Fast and high-fidelity qubit measurement plays a key role in quantum error correction. In superconducting qubits, measurement is typically performed using a resonant microwave drive on a readout resonator dispersively coupled to the qubit. Shorter measurement times require larger numbers of photons populating the readout resonator, which ultimately leads to undesired measurement-induced state transitions (MIST) of the qubit. MIST can be particularly problematic because these transitions often leave the qubit in a high energy state, and the MIST locations in readout parameter space drift as a function of qubit offset charge. In transmon qubits, these drifts have been avoided using very large qubit-resonator detunings or dedicated offset charge biases. In this work, we take an alternative approach and add an inductive shunt to the transmon to eliminate the offset charge dependence and stabilize the MIST. We experimentally characterize MIST in several different inductively-shunted transmons, in agreement with quantum and semiclassical models for MIST. These results extend to other inductively-shunted qubits.
Climbing the Clifford Hierarchy
This paper studies the Clifford Hierarchy in quantum computation, specifically characterizing which Clifford gates have square roots that advance to the third level of the hierarchy. The work extends understanding of how gates can 'climb' between hierarchy levels through mathematical operations like taking square roots.
Key Contributions
- Full characterization of Clifford gates whose square roots climb to the third level of the hierarchy
- Extension of the theoretical framework for understanding gate relationships within the Clifford Hierarchy
View Full Abstract
The Clifford Hierarchy has been a central topic in quantum computation due to its strong connections with fault-tolerant quantum computation, magic state distillation, and more. Nevertheless, only sections of the hierarchy are fully understood, such as diagonal gates and third level gates. The diagonal part of the hierarchy can be climbed by taking square roots and adding controls. Similarly, square roots of Pauli gates (first level) are Clifford gates (climb to the second level). Based on this theme, we study gates whose square roots climb to the next level. In particular, we fully characterize Clifford gates whose square roots climb to the third level.
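The climbing theme is easy to verify in the smallest case: $S = \sqrt{Z}$ is Clifford (second level), while $T = \sqrt{S}$ maps $X$ to $(X+Y)/\sqrt{2}$ and sits at the third level. A quick numerical check using the standard "maps Paulis to Paulis under conjugation" criterion (an illustration of the theme, not the paper's characterization):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
PAULIS = [np.eye(2, dtype=complex), X, Y, Z]

def is_pauli_up_to_phase(U):
    # |tr(P^dag U)| = 2 iff U is proportional to the Pauli P
    return any(abs(abs(np.trace(P.conj().T @ U)) - 2) < 1e-9 for P in PAULIS)

def is_clifford(U):
    # U is Clifford iff it maps the generators X and Z to Paulis
    return all(is_pauli_up_to_phase(U @ P @ U.conj().T) for P in (X, Z))

S = np.diag([1, 1j])                      # S = sqrt(Z): square root of a first-level gate
T = np.diag([1, np.exp(1j * np.pi / 4)])  # T = sqrt(S): square root of a second-level gate

assert is_clifford(S)       # sqrt of a Pauli climbs to the second level
assert not is_clifford(T)   # sqrt of a Clifford climbs out of the second level
print("S is Clifford, T is not")
```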
Noise Correlations as a Resource in Pauli-Twirled Circuits
This paper studies how randomized compiling transforms correlated quantum noise into simpler Pauli errors in quantum circuits. The researchers show that noise correlations actually improve circuit performance and that randomized compiling reduces the strength and duration of these correlations, making circuits more robust to memory effects.
Key Contributions
- Analytical proof that randomized compiling reduces the strength and temporal range of a broad class of correlated Gaussian noise
- Discovery that noise correlations increase circuit fidelity in randomly compiled circuits, making correlations a resource
- Demonstration that randomized compiling suppresses quantum bath correlations, allowing classical noise treatment for weak coupling
View Full Abstract
Randomized compiling (RC) is an established tool to tailor arbitrary quantum noise channels into Pauli errors. The effect of both spatial and temporal noise correlations in randomly compiled circuits, however, is not fully understood. Here, we show that for a broad class of correlated Gaussian noise, RC reduces both the strength and temporal range of correlations. For Clifford circuits, we derive a simple analytical expression for the circuit fidelity of randomly compiled circuits. Surprisingly, we show that this fidelity is always increased by the presence of correlations, suggesting that correlations are a resource in randomly compiled circuits. To leading order in system-bath coupling, we also show that RC suppresses the quantum component of bath correlations, implying that one can safely treat weak noise as being classical. Finally, through extensive numerical simulations, we show that our results remain valid for many relevant non-Clifford circuits. These results clarify how RC mitigates memory effects and enhances circuit robustness.
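The tailoring mechanism RC relies on can be seen on a single qubit: averaging a channel over Pauli conjugations leaves a Pauli channel, i.e. a diagonal Pauli transfer matrix. The sketch below twirls an amplitude-damping channel; this is a standard illustration of the twirling identity, and the correlations that are the paper's actual subject are not modelled.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
PAULIS = [I2, X, Y, Z]

# Amplitude-damping channel (gamma = 0.1) as Kraus operators
g = 0.1
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

def channel(rho, kraus):
    return sum(K @ rho @ K.conj().T for K in kraus)

def twirled(rho, kraus):
    # Pauli twirl: average P^dag Lambda(P rho P^dag) P over the Pauli group
    return sum(P.conj().T @ channel(P @ rho @ P.conj().T, kraus) @ P
               for P in PAULIS) / 4

def ptm(chan):
    # Pauli transfer matrix R_ij = tr(P_i chan(P_j)) / 2
    return np.array([[np.trace(Pi @ chan(Pj)).real / 2
                      for Pj in PAULIS] for Pi in PAULIS])

R = ptm(lambda r: twirled(r, [K0, K1]))
# A Pauli channel has a diagonal PTM: the twirl has removed the
# non-Pauli (off-diagonal) components of amplitude damping.
assert np.allclose(R, np.diag(np.diag(R)))
print(np.round(np.diag(R), 4))
```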
Probing the memory of a superconducting qubit environment
This paper investigates how superconducting qubits interact with their environment, specifically identifying long-lived two-level systems that retain memory of past qubit states and can disrupt fault-tolerant quantum computing. The researchers develop methods to distinguish these problematic memory effects from standard environmental noise by analyzing quantum jump patterns.
Key Contributions
- Development of method to distinguish non-Markovian environmental memory from standard Markovian noise in superconducting qubits
- Demonstration that non-Poissonian quantum jump traces can identify long-lived two-level systems that threaten fault-tolerant operation
View Full Abstract
Achieving fault tolerance with superconducting quantum processors requires qubits to operate within the regime of threshold theorems based on the Born-Markov approximation. This approximation, which models dissipation as constant energy decay into a memoryless environment, breaks down when qubits couple to long-lived two-level systems (TLSs) that become polarized during operation and retain memory of past qubit states. Here, we show that non-Poissonian quantum jump traces carry the information required to distinguish long-lived TLSs from the standard Markovian bath. By fitting the Solomon equations to the measured quantum-jump dynamics arising naturally from thermal fluctuations, we can disentangle the coupling of the qubit to the two environments. Sweeping the qubit frequency reveals distinct peaks, each associated with a TLS that outlives the qubit, providing a handle to understand their microscopic origin.
Demonstration of High-Fidelity Gates in a Strongly Anharmonic C-Shunt Flux Qubit with Long Coherence
This paper demonstrates high-fidelity quantum gates on a C-shunt flux qubit that achieves both large anharmonicity and long coherence times. The researchers used advanced pulse techniques to achieve gate fidelities exceeding 99.9%, showing this qubit design could be promising for building large-scale quantum computers.
Key Contributions
- Demonstration of 99.9% gate fidelity on C-shunt flux qubits with large anharmonicity and long coherence
- Establishing C-shunt flux qubits as a promising platform for scalable quantum computing
View Full Abstract
We demonstrate high-fidelity single-qubit gates on a C-shunt flux qubit that simultaneously combines a large anharmonicity ($\mathcal{A}/2π=848~\mathrm{MHz}$) with long relaxation time ($T_1 = 23~μ\text{s}$). The large anharmonicity significantly suppresses leakage to higher energy levels, enabling fast and precise microwave control. Using DRAG pulses and randomized benchmarking, the qubit achieves gate fidelities exceeding 99.9\%, highlighting the capability of C-shunt flux qubits for robust and high-performance quantum operations. These results establish them as a promising platform for scalable quantum information processing.
Quantum Error Correction by Purification
This paper introduces a new quantum error correction method called purification quantum error correction (PQEC) that uses multiple noisy copies of quantum states and the SWAP test to reduce errors without requiring knowledge of the original state. The method achieves high error thresholds of 75% for depolarizing noise and 50% for dephasing noise.
Key Contributions
- Novel purification-based quantum error correction scheme using SWAP test
- Demonstration of high error thresholds (75% for depolarizing channel, 50% for dephasing)
- General-purpose method requiring no prior state knowledge or postselection
View Full Abstract
We present a general-purpose quantum error correction primitive based on state purification via the SWAP test, which we refer to as purification quantum error correction (PQEC). This method operates on $N$ noisy copies and requires a minimum of $O(M\log_2 N)$ data qubits to process the $M$-qubit inputs. In a similar way to standard QEC, the purification steps may be interleaved within a quantum algorithm to suppress the logical error rate. No postselection is performed and no knowledge of the state is required. We analyze its performance under a variety of error channels and find that PQEC is highly effective at boosting fidelity and reducing logical error rates, particularly for the depolarizing channel. Error thresholds for the local depolarizing channel are found to be $75\%$ for any register size. For local dephasing, the error threshold is reduced to $50\%$ but may be boosted using twirling.
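The SWAP-test primitive behind PQEC accepts with probability $(1+|\langleψ|φ\rangle|^2)/2$, so identical copies always pass while errors lower the acceptance probability. A minimal single-qubit sketch of that primitive (the full $N$-copy scheme is not reproduced here):

```python
import numpy as np

def swap_test_p0(psi, phi):
    """P(ancilla = 0) in a SWAP test on pure states |psi>, |phi>."""
    joint = np.kron(psi, phi)
    SWAP = np.eye(4)[[0, 2, 1, 3]]  # two-qubit SWAP operator
    # After H - controlled-SWAP - H on the ancilla:
    # P(0) = (1 + <joint|SWAP|joint>)/2 = (1 + |<psi|phi>|^2)/2
    return (1 + np.real(np.vdot(joint, SWAP @ joint))) / 2

psi = np.array([1, 0], dtype=complex)
noisy = np.array([np.cos(0.2), np.sin(0.2)], dtype=complex)  # a slightly errored copy

print(swap_test_p0(psi, psi))    # identical copies: accept with probability 1
print(swap_test_p0(psi, noisy))  # errored copy: acceptance probability drops below 1
```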
Mitigating crosstalk errors for simultaneous single-qubit gates on a superconducting quantum processor
This paper addresses crosstalk errors that occur when multiple qubits are controlled simultaneously on superconducting quantum processors, developing techniques to optimize qubit frequencies and shape control pulses to minimize interference between neighboring qubits. The researchers achieved 99.96% fidelity for simultaneous single-qubit gates on a 49-qubit processor and demonstrated scalability to systems with up to 1000 qubits.
Key Contributions
- Analytical model for simultaneous single-qubit gate errors caused by microwave crosstalk
- Model-based optimization strategy for qubit frequencies to minimize crosstalk errors
- Crosstalk transition suppression (CTS) pulse shaping technique
- Demonstration of scalability to 1000-qubit systems through simulations
View Full Abstract
Single-qubit gates on superconducting quantum processors are typically implemented using microwave pulses applied through dedicated control lines. However, these microwave pulses may also drive other qubits due to crosstalk arising from capacitive coupling and wavefunction overlap in systems with closely spaced transition frequencies. Crosstalk and frequency crowding increase errors during simultaneous single-qubit operations relative to isolated gates, thus forming a major bottleneck for scaling superconducting quantum processors. In this work, we combine model-based qubit frequency optimization with pulse shaping to demonstrate crosstalk error mitigation in single-qubit gates on a 49-qubit superconducting quantum processor. We introduce and experimentally verify an analytical model of simultaneous single-qubit gate error caused by microwave crosstalk that depends on a given pulse shape. By employing a model-based optimization strategy of qubit frequencies, we minimize the crosstalk-induced error across the processor and achieve a mean simultaneous single-qubit gate fidelity of 99.96% for a 16-ns gate duration, approaching the mean individual gate fidelity. To further reduce the simultaneous error and required qubit frequency bandwidth on high-crosstalk qubit pairs, we introduce a crosstalk transition suppression (CTS) pulse shaping technique that minimizes the spectral energy around transitions inducing leakage and crosstalk errors. Finally, we combine CTS with model-based frequency optimization across the device and experimentally show a systematic reduction in the required qubit frequency bandwidth for high-fidelity simultaneous gates, supported by simulations of systems with up to 1000 qubits. By alleviating constraints on qubit frequency bandwidth for parallel single-qubit operations, this work represents an important step for scaling towards larger quantum processors.
Permutation-invariant codes: a numerical study and qudit constructions
This paper studies permutation-invariant quantum error-correcting codes that can protect quantum information stored in qudits (d-level generalizations of qubits) from deletion errors. The researchers investigate how the required number of physical qudits scales with the desired error correction capability and find that using higher-dimensional qudits can reduce the overhead needed for error correction.
Key Contributions
- Conjectured lower bound on block length for qubit PI codes correcting deletion errors with scaling n(d) ≥ (3d² + 1)/4
- Demonstrated that increasing physical qudit dimension reduces block length requirements and approaches the quantum Singleton bound
- Extended AAB construction from qubits to qudits using semi-analytic methods with linear programming
View Full Abstract
We investigate Permutation-Invariant (PI) quantum error-correcting codes encoding a logical qudit of dimension $\mathrm{d}_\mathrm{L}$ in PI states using physical qudits of dimension $\mathrm{d}_\mathrm{P}$. We extend the Knill--Laflamme (KL) conditions for $d-1$ deletion errors from qubits to qudits and investigate numerically both qubit ($\mathrm{d}_\mathrm{L} = \mathrm{d}_\mathrm{P} = 2$) and qudit ($\mathrm{d}_\mathrm{L} > 2$ or $\mathrm{d}_\mathrm{P} > 2$) PI codes. We analyze the scaling of the block length $n$ in terms of the code distance $d$, and compare to existing families of PI codes due to Ouyang, Aydin--Alekseyev--Barg (AAB) and Pollatsek--Ruskai (PR). Our three main findings are: (i) We conjecture that qubit PI codes correcting up to $d-1$ deletion errors have block length $n(d) \geq (3d^2 + 1) / 4$, which implies an upper bound $d \leq \sqrt{12n-3}/3$ on their code distance, and that PR codes can saturate this bound. (ii) For qudit PI codes encoding a single qudit we numerically observe that increasing $\mathrm{d}_\mathrm{P}$ results in $n$ monotonically decreasing and approaching the quantum Singleton bound $n(d) \geq 2d-1$. (iii) We propose a semi-analytic extension of the qubit AAB construction to qudits that finds explicit solutions by solving a linear program. Our results therefore provide key insights into lower bounds on the block length scaling of both qubit and qudit PI codes, and demonstrate the benefit of increased physical local dimension in the context of PI codes.
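The bounds quoted above are simple enough to evaluate directly. Here is a minimal sketch (function names are illustrative, not from the paper) comparing the conjectured qubit PI-code bound with the quantum Singleton bound that qudit codes are observed to approach:

```python
import math

def conjectured_min_block_length(d):
    """Conjectured qubit PI-code bound from the paper: n(d) >= (3*d**2 + 1) / 4."""
    return (3 * d**2 + 1) / 4

def implied_max_distance(n):
    """Equivalent upper bound on code distance: d <= sqrt(12*n - 3) / 3."""
    return math.sqrt(12 * n - 3) / 3

def singleton_min_block_length(d):
    """Quantum Singleton bound approached as the qudit dimension grows: n >= 2*d - 1."""
    return 2 * d - 1

for d in (2, 3, 4, 5):
    print(d, conjectured_min_block_length(d), singleton_min_block_length(d))
```

For d = 3 the conjecture requires n ≥ 7 qubits while the Singleton bound would allow n = 5, illustrating the gap that increasing the physical qudit dimension is observed to close.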
Efficient and accurate two-qubit-gate operation in a high-connectivity transmon lattice utilizing a tunable coupling to a shared mode
This paper proposes a quantum computer architecture based on a honeycomb lattice of superconducting qubits in which every qubit pair within a unit cell is coupled through dedicated tunable couplers and a shared central element, enabling faster two-qubit gates and all-to-all connectivity within the cell. The researchers develop improved gate protocols and analyze how this design reduces errors while allowing more qubits to operate simultaneously.
Key Contributions
- Novel honeycomb qubit lattice architecture with tunable multi-mode coupling for all-to-all connectivity
- Efficient single-step conditional-Z gate protocol that improves gate speed compared to previous center-mode architectures
- Analysis of spectator qubit effects and crosstalk mitigation in simultaneous gate operations
- Analytical error estimates for relaxation and dephasing in multi-mode coupling structures
View Full Abstract
Increasing connectivity and decreasing qubit-state delocalization without compromising the speed and accuracy of elementary gate operations are topical challenges in the development of large-scale superconducting quantum computers. In this theoretical work, we study a special honeycomb qubit lattice where each qubit inside a unit cell is coupled to every other one via two dedicated tunable couplers and a common central element. This results in an effective multi-mode interaction enabling tunable, on-demand, all-to-all connectivity between each qubit pair within the unit cell. We provide a thorough analysis of the unit cell, including a proposal for a novel and efficient conditional-Z gate scheme which takes advantage of the effective multi-mode coupling. We develop an experimentally viable pulse protocol for a single-step gate implementation which considerably improves the gate speed compared to the previous two-qubit-gate realizations suggested for architectures utilizing a center mode. We also show numerical results on how the presence of spectator qubits affects the average two-qubit-gate fidelity, and analyse how the multi-mode coupling structure mitigates the delocalization-induced crosstalk during simultaneous single-qubit gates within the unit cell. We also provide analytical estimates for the errors caused by relaxation and dephasing during a two-qubit-gate operation, including noise terms for the multi-mode coupling structure. Our multi-mode coupling architecture results in a good balance between increased connectivity and available parallelism, especially when several interacting unit cells form a quantum processing unit. We anticipate that the obtained results pave the way towards high-connectivity quantum processors with efficient and low-overhead quantum algorithms.
Reducing Quantum Error Mitigation Bias Using Verifiable Benchmark Circuits
This paper presents new methods to reduce bias in quantum error mitigation techniques by using specially designed benchmark circuits that match the noise characteristics of the target quantum computation. The authors demonstrate up to 15% fidelity improvements on 100-qubit circuits compared to standard error mitigation approaches.
Key Contributions
- Development of verifiable benchmark circuits that mirror application circuit noise profiles for bias mitigation
- Introduction of benchmarked-noise zero-noise extrapolation (bnZNE) as an improved error mitigation method
- Demonstration of up to 15% fidelity improvements on utility-scale 100-qubit circuits with up to 2000 entangling gates
View Full Abstract
We present a simple, malleable and low-overhead approach for improving generic biased quantum error mitigation (QEM) methods, achieving up to 15% fidelity improvements over standard QEM on 100-qubit circuits with up to 2000 entangling gates. We do so by constructing verifiable benchmark circuits which mirror the application circuit's native-gate structure and thus noise profile. These circuits can be used to benchmark and mitigate the bias of the underlying error mitigation method, requiring only the application circuit and hardware native gate set. We present two methods for generating benchmark circuits; one is agnostic to the target hardware at the expense of a small overhead of single-qubit gates, while the other is specific to the IBM superconducting hardware and has no gate overhead. As a corollary, we introduce benchmarked-noise zero-noise extrapolation (bnZNE) as a simple adaptation of zero-noise extrapolation (ZNE), one of the most popular error mitigation methods. We consider as an example the bias-mitigated ZNE and bnZNE of Trotterized Hamiltonian simulations, observing that our approaches outperform standard ZNE using both small-scale classical simulations and 100-qubit utility-scale experiments on the IBM superconducting hardware. We consider the measurement of both single-site observables as well as two-site correlations along a one-dimensional qubit chain. We also provide a software package for implementing the error mitigation techniques used in this research.
Crosstalk in Multi-Qubit Fluxonium Architectures with Transmon Couplers
This paper studies the scalability of quantum computing architectures that use transmon qubits as couplers between fluxonium qubits, finding that spectator qubit crosstalk limits gate fidelity but can be mitigated through reduced coupling strength and dynamic tuning. The work demonstrates methods to reduce spectator errors to below 10^-4 while maintaining high-fidelity two-qubit operations.
Key Contributions
- Analysis of scalability limitations in fluxonium-transmon hybrid quantum architectures due to spectator qubit crosstalk
- Development of mitigation strategies including coupling strength reduction and dynamic transmon tuning to achieve spectator errors below 10^-4
View Full Abstract
In recent years, several architectures have been proposed for implementing two-qubit operations on fluxonium superconducting qubits. A particularly promising approach, which was demonstrated experimentally by Refs. [1,2], employs a transmon superconducting qubit as a tunable coupler between the fluxonium qubits. These experiments have shown that the transmon coupler enables fast, high-fidelity two-qubit operations while suppressing unwanted ZZ crosstalk between the fluxonium qubits. In this work, we numerically study the scalability of this architecture. We find that, when trivially scaling this architecture, crosstalk from spectator qubits limits the gate fidelity to below 90%. We show that these spectator errors can be reduced to below $10^{-4}$ by reducing the coupling strength and by dynamically tuning transmons that are not used for a two-qubit operation to an off position. We further investigate the resilience of the operation to direct capacitive coupling between the transmon couplers and to microwave crosstalk.
Fictitious Copy Quantum Error Mitigation
This paper introduces a new quantum error mitigation technique called Fictitious Copy Quantum Error Mitigation (FCQEM) that corrects quantum computing errors using only classical post-processing without requiring additional quantum resources. The method works by analyzing joint probability distributions from quantum circuit measurements and was demonstrated to successfully recover ground state energies in molecular and spin models.
Key Contributions
- Novel quantum error mitigation method requiring no additional quantum resources
- Classical post-processing technique that corrects expectation values using joint probability distributions
- Demonstration of compatibility with existing QEM methods like Quantum Computed Moments
- Experimental validation on 84-qubit superconducting quantum processor
View Full Abstract
Errors are arguably the most pressing challenge impeding practical applications of quantum computers, which has instigated vigorous research on the development of quantum error mitigation (QEM) techniques. Existing QEM methods suppress errors with a varying degree of efficacy but importantly demand significant additional quantum and classical computational resources. In this work, we present Fictitious Copy Quantum Error Mitigation (FCQEM) method which corrects quantum errors without requiring any additional quantum resources and purely relies on using classical postprocessing of a joint probability distribution to correct expectation values. The joint probability distribution can be measured "fictitiously" by sampling one copy of noisy quantum circuit twice, or classically squaring probabilities from simply one copy. We show that FCQEM can recover eigenvalues even if exact eigenstates are not prepared. Furthermore, our technique can benefit other noise mitigation techniques with no additional quantum resources, which is demonstrated by combining FCQEM with the Quantum Computed Moments (QCM) method. FCQEM can compensate for noise that is pathological to QCM, and QCM allows for FCQEM to recover the ground state energy with a larger variety of trial states. We show that our technique can find the exact ground state energy of molecular and spin models under simulated noise models as well as experiments on a Rigetti 84-qubit superconducting quantum processor. The reported FCQEM method is general purpose for the current generation of quantum devices and is applicable to any problem that measures eigenvalues of operators on sharply peaked distributions.
Reconfigurable Superconducting Quantum Circuits Enabled by Micro-Scale Liquid-Metal Interconnects
This paper demonstrates liquid-metal interconnects for superconducting quantum circuits that allow modular quantum processors to be reconfigured by replacing components without destroying the system. The researchers show these gallium-based connections maintain high microwave performance and can survive thermal cycling between room temperature and millikelvin temperatures.
Key Contributions
- Demonstration of chip-scale liquid-metal interconnects for superconducting quantum circuits with performance comparable to conventional waveguides
- Proof of concept for plug-and-play modular quantum processor architecture that enables non-destructive component replacement
- Characterization of power-dependent loss mechanisms and kinetic inductance effects in liquid-metal quantum interconnects
View Full Abstract
Modular architectures are a promising route toward scalable superconducting quantum processors, but finite fabrication yield and the lack of high quality temporary interconnects impose fundamental limitations on system size. Here, we demonstrate chip-scale liquid-metal interconnects that show promise for plug-and-play superconducting quantum circuits by enabling non-destructive module replacement while maintaining high microwave performance. Using gallium-based liquid metals, we realize high-quality inter-module signal and ground interconnects, comparable in performance to conventional coplanar waveguide resonators. We illustrate consistent device characteristics across three thermal cycles between room temperature and 15 mK, as well as the ability to reform superconducting connections following module replacement. A width-dependent resonance frequency shift reveals a significant kinetic inductance fraction, which we attribute to the presence of $β$-phase tantalum as confirmed by X-ray characterization. Finally, we investigate power-dependent loss mechanisms and observe high-power dissipative nonlinearities qualitatively consistent with a readout-power heating model. These results establish liquid metals as viable chip-scale interconnects for reconfigurable, modular superconducting quantum systems.
Coupled-Layer Construction of Quantum Product Codes
This paper presents a physical framework for constructing quantum product codes, an important class of quantum error-correcting codes, by showing that they can be built by stacking layers of one constituent code and condensing excitations in a pattern set by the checks of the other. The work provides an intuitive physical mechanism for assembling these codes, which had remained unclear despite their well-developed algebraic formulation.
Key Contributions
- Developed coupled-layer construction framework for tensor and balanced product codes providing intuitive physical assembly mechanism
- Unified known physical mechanisms for constructing higher dimensional topological phases via anyon condensation and extended to non-topological codes
View Full Abstract
Product codes are a class of quantum error correcting codes built from two or more constituent codes. They have recently gained prominence for a breakthrough yielding quantum low-density parity-check (qLDPC) codes with favorable scaling of both code distance and encoding rate. However, despite its powerful algebraic formulation, the physical mechanism for assembling a general product code from its constituents remains unclear. In this letter, we show that the tensor and balanced product codes admit an intuitive coupled-layer construction by taking a stack of one code and condensing a set of excitations in the pattern given by the checks of the other code. Our framework accommodates both classical or quantum CSS input codes, unifies known physical mechanisms for constructing higher dimensional topological phases via anyon condensation, and naturally extends to non-topological codes.
Scalable Postselection of Quantum Resources
This paper develops a technique called scalable postselection to reduce quantum error correction overhead by selectively choosing better-performing quantum resource states based on decoder information. The method achieves a 4x reduction in overhead per logical gate while maintaining the same error probability, potentially making quantum computers more practical.
Key Contributions
- Introduction of scalable postselection technique that reduces quantum error correction overhead by 4x
- Development of the partial gap metric to predict resource state quality after consumption
- Demonstration of scalable improvements in logical error rates through postselection of sub-circuits
View Full Abstract
The large overhead imposed by quantum error correction is a critical challenge to the realization of quantum computers, and motivates searching for alternative error correcting codes and fault-tolerant circuit constructions. Postselection is a powerful tool that builds large programs out of probabilistically generated sub-circuits, and has been shown to increase the threshold of quantum error correction based on fusing fixed-size resource states or concatenated codes. In this work, we present an approach to lower the overhead of quantum computing using scalable postselection, based on directly postselecting sub-circuits with a size extensive in the code distance using decoder soft information. We introduce a metric, the partial gap, that estimates what the logical gap of a resource state will be after it is consumed, and show that postselection based on the partial gap leads to scalable improvements in the logical error rate. In the specific context of implementing logical gates via teleportation through a cluster state, we demonstrate that scalable postselection provides a $4\times$ reduction in the overhead per logical gate, at the same logical error probability.
Construction of a Family of Quantum Codes Using Sub-exceding Functions via the Hypergraph Product and the Generalized Shor Construction
This paper develops a new family of quantum error-correcting codes by combining classical linear codes defined via sub-exceding functions with the hypergraph product and a generalized Shor construction. The resulting quantum LDPC codes have good error-correction properties and efficient structures that could help build more reliable quantum computers.
Key Contributions
- Introduction of new quantum LDPC codes with parameters [[6k^2, k^2, d]] derived from sub-exceding functions
- Combination of hypergraph product framework with generalized Shor construction for scalable quantum code design
View Full Abstract
In this paper, we introduce a new family of stabilizer quantum LDPC codes derived from the classical linear codes $L_k$ and $L_k^{+}$, defined via sub-exceding functions. In previous work, these codes demonstrated strong performance in minimum distance, decoding efficiency, and structural simplicity. By combining the hypergraph product framework with a generalized Shor construction, we obtain a scalable class of quantum codes with parameters $[[6k^2,\, k^2,\, d]]$. The resulting quantum codes exhibit a rich combinatorial structure and promising properties, particularly in terms of locality, low-density parity-check (LDPC) structure, and asymptotic behavior. The minimum distance satisfies $d=3$ for $k=3$ and $d=4$ for $k\ge4$, establishing a new framework for structured quantum LDPC code design and optimization.
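The family's parameters as stated in the abstract can be tabulated directly. A small sketch (the helper name is hypothetical):

```python
def quantum_code_parameters(k):
    """[[n, k_logical, d]] for the family in the abstract: n = 6*k**2 physical
    qubits, k**2 logical qubits, d = 3 for k = 3 and d = 4 for k >= 4."""
    if k < 3:
        raise ValueError("the stated distances cover k >= 3")
    return 6 * k**2, k**2, 3 if k == 3 else 4

for k in (3, 4, 5):
    print(quantum_code_parameters(k))  # encoding rate is always k**2 / (6*k**2) = 1/6
```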
Lattice: A Post-Quantum Settlement Layer
This paper presents Lattice, a cryptocurrency designed to be resistant to quantum computer attacks through post-quantum cryptographic signatures, CPU-only mining, and adaptive difficulty adjustment mechanisms.
Key Contributions
- Implementation of ML-DSA-44 post-quantum digital signatures from genesis block
- Multi-layered defense against quantum threats through hardware, network, and cryptographic resilience
View Full Abstract
We present Lattice (L, ticker: LAT), a peer-to-peer electronic cash system designed as a post-quantum settlement layer for the era of quantum computing. Lattice combines three independent defense vectors: hardware resilience through RandomX CPU-only proof-of-work, network resilience through LWMA-1 per-block difficulty adjustment (mitigating the Flash Hash Rate vulnerability that affects fixed-interval retarget protocols), and cryptographic resilience through ML-DSA-44 post-quantum digital signatures (NIST FIPS 204, lattice-based), enforced exclusively from the genesis block with no classical signature fallback. The protocol uses a brief warm-up period of 5,670 fast blocks (53-second target, 25 LAT reduced reward) for network bootstrap, then transitions permanently to 240-second blocks, following a 295,000-block halving schedule with a perpetual tail emission floor of 0.15 LAT per block. Block weight capacity grows in stages (11M to 28M to 56M) as the network matures. The smallest unit of LAT is the shor, named after Peter Shor, where 1 LAT = 10^8 shors.
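The steady-state emission figures follow from the constants given in the abstract. A minimal sketch (variable names are illustrative; the warm-up phase and halving schedule are not modeled here):

```python
# Constants stated in the Lattice abstract.
SHORS_PER_LAT = 10**8        # 1 LAT = 10^8 shors, named after Peter Shor
BLOCK_TARGET_SECONDS = 240   # steady-state block time after the warm-up period
TAIL_EMISSION_LAT = 0.15     # perpetual per-block reward floor

def lat_to_shors(lat):
    """Convert an amount in LAT to its integer representation in shors."""
    return round(lat * SHORS_PER_LAT)

blocks_per_day = 24 * 60 * 60 // BLOCK_TARGET_SECONDS
tail_lat_per_day = TAIL_EMISSION_LAT * blocks_per_day

print(blocks_per_day)                    # 360 blocks per day
print(tail_lat_per_day)                  # 54.0 LAT per day at the tail floor
print(lat_to_shors(TAIL_EMISSION_LAT))   # 15000000 shors per tail block
```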
A Scalable Distributed Quantum Optimization Framework via Factor Graph Paradigm
This paper presents a new framework for distributed quantum computing that breaks down optimization problems using factor graphs to run on multiple small quantum processors connected by entanglement. The approach maintains the quadratic speedup of Grover's algorithm while reducing the number of qubits needed per processor.
Key Contributions
- Structure-aware distributed quantum optimization framework using factor graph decomposition
- Proof that Grover-like O(√N) scaling is preserved across distributed processors
- Hierarchical divide-and-conquer strategy with both fault-tolerant and near-term operating modes
View Full Abstract
Distributed quantum computing (DQC) connects many small quantum processors into a single logical machine, offering a practical route to scalable quantum computation. However, most existing DQC paradigms are structure-agnostic. Circuit cutting proposed by Peng et al. in [Phys. Rev. Lett., Oct. 2020] reduces per-device qubits at the cost of exponential classical post-processing, while search-space partitioning proposed by Avron et al. in [Phys. Rev. A., Nov. 2021] distributes the workload but weakens Grover's ideal quadratic speedup. In this paper, we introduce a structure-aware framework for distributed quantum optimization that resolves this complexity-resource trade-off. We model the objective function as a factor graph and expose its sparse interaction structure. We cut the graph along its natural "seams", i.e., a separator of boundary variables, to obtain loosely coupled subproblems that fit on resource-constrained processors. We coordinate these subproblems with shared entanglement, so the network executes a single globally coherent search rather than independent local searches. We prove that this design preserves Grover-like scaling: for a search space of size $N$, our framework achieves $O(\sqrt{N})$ query complexity up to processors and separator dependent factors, while relaxing the qubit requirement of each processor. We extend the framework with a hierarchical divide-and-conquer strategy that scales to large-scale optimization problems and supports two operating modes: a fully coherent mode for fault-tolerant networks and a hybrid mode that inserts measurements to cap circuit depth on near-term devices. We validate the predicted query-entanglement trade-offs through simulations over diverse network topologies, and we show that structure-aware decomposition delivers a practical path to scalable distributed quantum optimization on quantum networks.
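The quadratic speedup the framework preserves can be made concrete: textbook Grover search over N items needs roughly (π/4)√N oracle queries, versus about N/2 expected classically. A minimal sketch of that baseline scaling (illustrative only; the paper's processor- and separator-dependent factors are omitted):

```python
import math

def grover_queries(n_items):
    """Approximate oracle queries for Grover search with a single marked item:
    about (pi/4) * sqrt(N)."""
    return math.ceil(math.pi / 4 * math.sqrt(n_items))

for n in (2**10, 2**20, 2**30):
    # Grover queries grow as sqrt(N); classical expected queries grow as N/2.
    print(n, grover_queries(n), n // 2)
```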
Remote Entanglement in Lattice Surgery: To Distill, or Not to Distill
This paper analyzes whether remote Bell pairs used to connect distributed quantum computers should be distilled before use or consumed raw with larger surface-code distances, finding that the optimal choice depends on entanglement fidelity and that choosing the right strategy can reduce resource requirements by up to two orders of magnitude.
Key Contributions
- Identified fidelity crossover point determining optimal strategy between distillation vs higher surface code distance
- Demonstrated up to two orders of magnitude resource reduction through proper strategy selection
- Provided co-design principles for photonic interconnects in fault-tolerant distributed quantum computers
View Full Abstract
Distributed quantum computing can potentially address the scalability challenge by networking processors through photon-mediated remote entanglement. Prior approaches assumed that remote Bell pairs require distillation, resulting in substantial overhead, to achieve sufficiently high fidelity before use. However, recent results show that lattice-surgery operations at logical qubit boundaries tolerate significantly higher error rates than previously assumed. We quantify the resource trade-offs between distillation overhead and surface-code distance requirements under realistic constraints including probabilistic entanglement generation and memory decoherence. We identify the fidelity crossover point separating the two regimes and show that choosing the right strategy can reduce resource overhead by up to two orders of magnitude at low fidelities and up to 68% at high fidelities. We briefly describe the application of these methods to ion-trap and neutral-atom platforms. These results provide co-design principles for optimizing photonic interconnects and fault-tolerant architectures in distributed quantum computers.
Quantum Hamlets: Distributed Compilation of Large Algorithmic Graph States
This paper develops a new algorithm called BURY for efficiently distributing the creation of quantum graph states across multiple quantum processors, reducing the number of entangled Bell pairs needed for distributed quantum computation. The work focuses on optimizing how to partition large quantum computational resources to minimize communication overhead between quantum devices.
Key Contributions
- Development of BURY heuristic algorithm for balanced k-graph partitioning that minimizes Bell pair requirements
- Introduction of maximum matching minimization as a better metric than cut edges for measuring entanglement requirements in distributed quantum systems
- Scalable framework for distributed measurement-based quantum computation with reduced quantum network overhead
View Full Abstract
We investigate the problem of compiling the generation of graph states to arbitrarily many distributed homogeneous quantum processing units (QPUs), providing a scalable partitioning algorithm and graph state generation protocol to minimize the number of Bell pairs required. To this goal, we consider the problem of balanced k graph partitioning with the objective of minimizing the sizes of the maximum matchings between partitions, a more natural measure of entanglement compared to the naive but common metric of cut edges. We show that our heuristic algorithm, BURY, partitions graph states to require fewer Bell pairs for generation than state-of-the-art k partition algorithms. Furthermore, we show that BURY reduces the cut-rank of the partitions, demonstrating that the partitioning found by our algorithm is likely to minimize the Bell pair utilization of any future improved distributed graph state generation protocol. Additionally, we discuss how one could straightforwardly apply our methods to the dynamic case where the graph state generation and measurement are performed concurrently. Our study of the balanced minimum maximum matching k partition problem and the heuristic algorithm we design provides a scalable foundation for reducing quantum network overhead for distributed measurement-based quantum computation (MBQC), as well as any scheme where distributed graph state generation is desired.
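The paper's key metric, the size of the maximum matching between partitions rather than the naive count of cut edges, can be illustrated with a short pure-Python sketch (Kuhn's augmenting-path algorithm on a hypothetical two-partition cut; BURY itself is not reproduced here):

```python
def max_bipartite_matching(edges, left):
    """Size of a maximum matching between two partitions, treating the cut
    edges as a bipartite graph (Kuhn's augmenting-path algorithm)."""
    adj = {u: [] for u in left}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # right-side vertex -> left-side vertex it is matched to

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # Take v if free, or re-route its current partner elsewhere.
                if v not in match or try_augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in left)

# Hypothetical cut: qubits {a, b} in one partition, {x, y, z} in the other.
cut_edges = [("a", "x"), ("a", "y"), ("a", "z"), ("b", "x")]
print("cut edges:", len(cut_edges))                                    # 4
print("max matching:", max_bipartite_matching(cut_edges, ["a", "b"]))  # 2
```

The example shows why the matching is the more natural measure: four edges cross the cut, but only two disjoint pairs can be entangled simultaneously.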
A Scheduler for the Active Volume Architecture
This paper develops scheduling software for the Active Volume quantum computing architecture that more accurately estimates resource requirements and execution times. The work shows that improved scheduling can reduce overhead costs and allow larger quantum circuits to run on a given quantum computer than previously predicted.
Key Contributions
- Development of greedy scheduling algorithm for Active Volume architecture that reduces bridge and stale-state qubit overheads by 1.44x
- Empirical derivation of novel formula for overhead calculations that improves runtime estimate accuracy by 1.76x
- Demonstration that larger quantum circuits can execute on given hardware than previously predicted by analytic models
View Full Abstract
We improve the accuracy of Active Volume resource estimates by explicitly scheduling when Active Volume blocks execute. We present software that uses a greedy strategy to assign each logical qubit a role in each logical cycle (e.g., workspace, stale state storage, and bridge qubits). We empirically derive a novel formula for bridge- and stale-state-qubit overheads and improve the accuracy of runtime estimates, revealing that larger circuits can run on a given computer than previously predicted by analytic models. For a $4\times4$ Fermi-Hubbard simulation test circuit, this yields a $1.76\times$ runtime speedup with a $1.44\times$ reduction in bridge- and stale-state-qubit overheads compared to the model used in arXiv:2501.06165. Moreover, we show that for this test circuit, reaction times are insignificant in runtime estimates for computers with fewer than 600 logical qubits and that the number of reaction layers per logical cycle remains 1 in this regime. Our results pave the way for a full compilation pipeline for the Active Volume architecture and improved analytic resource estimates.
Vertical ion transport in a surface Paul trap: escalator and elevator approaches
This paper develops methods to move trapped ions vertically (perpendicular to the chip surface) in surface Paul traps, introducing an 'escalator' approach using geometrically optimized transitions and comparing two 'elevator' configurations that dynamically reposition ions using electrode voltages.
Key Contributions
- Introduction of 'escalator' approach for vertical ion transport using geometrically optimized transitions between trapping zones
- Comparative analysis of two 'elevator' configurations that dynamically reposition RF null via additional electrode voltages
View Full Abstract
Surface ion traps confining and manipulating tens of ion qubits have become the leading platform for quantum processors with high quantum volume. These devices employ the Quantum Charge-Coupled Device (QCCD) architecture, wherein multiple trapping zones are linked by an on-chip transport network that shuttles ion chains, enabling full connectivity through physical ion transport in a plane parallel to the chip surface. The ability to move ions perpendicular to this plane can offer additional advantages, including tuning the laser-ion interaction strength, systematic studies of surface-induced heating mechanisms, and precise alignment with a mode of an external optical cavity. We introduce an "escalator" - a geometrically optimized transition between trapping zones of different confinement heights - and present a comparative analysis of two "elevator" configurations that reposition the RF null dynamically via additional electrode voltages. Both approaches enable nearly a twofold change in the ion confinement height above the chip surface.
Universal quantum computation with group surface codes
This paper introduces group surface codes, which generalize the standard surface code used in quantum error correction. The authors show these codes can perform non-Clifford gates transversally, enabling universal quantum computation by bypassing theoretical limitations that restrict the computational power of standard topological quantum error correction schemes.
Key Contributions
- Introduction of group surface codes as a generalization of Z2 surface codes
- Demonstration that non-Clifford gates can be performed transversally in these codes, enabling universal quantum computation
- Method to bypass Bravyi-König theorem restrictions on topological Pauli stabilizer models
- Unified framework connecting various recent constructions including sliding group surface codes and magic state preparation
View Full Abstract
We introduce group surface codes, which are a natural generalization of the $\mathbb{Z}_2$ surface code, and equivalent to quantum double models of finite groups with specific boundary conditions. We show that group surface codes can be leveraged to perform non-Clifford gates in $\mathbb{Z}_2$ surface codes, thus enabling universal computation with well-established means of performing logical Clifford gates. Moreover, for suitably chosen groups, we demonstrate that arbitrary reversible classical gates can be implemented transversally in the group surface code. We present the logical operations in terms of a set of elementary logical operations, which include transversal logical gates, a means of transferring encoded information into and out of group surface codes, and preparation and readout. By composing these elementary operations, we implement a wide variety of logical gates and provide a unified perspective on recent constructions in the literature for sliding group surface codes and preparing magic states. We furthermore use tensor networks inspired by ZX-calculus to construct spacetime implementations of the elementary operations. This spacetime perspective also allows us to establish explicit correspondences with topological gauge theories. Our work extends recent efforts in performing universal quantum computation in topological orders without the braiding of anyons, and shows how certain group surface codes allow us to bypass the restrictions set by the Bravyi-König theorem, which limits the computational power of topological Pauli stabilizer models.
Mirror codes: High-threshold quantum LDPC codes beyond the CSS regime
This paper introduces mirror codes, a new class of quantum error-correcting codes that go beyond traditional CSS codes and achieve high error correction thresholds. The authors demonstrate these codes can achieve error pseudothresholds around 0.2% with efficient syndrome extraction circuits, making them promising for near-term fault-tolerant quantum devices.
Key Contributions
- Introduction of mirror codes, a flexible LDPC stabilizer code construction that generalizes beyond CSS codes
- Development of syndrome extraction circuits with provable fault tolerance using 1-6 ancillae per check
View Full Abstract
The realization of quantum error correction protocols whose logical error rates are suppressed far below physical error rates relies on an intricate combination: the error-correcting code's efficiency, the syndrome extraction circuit's fault tolerance and overhead, the decoder's quality, and the device's constraints, such as physical qubit count and connectivity. This work makes two contributions towards error-corrected quantum devices. First, we introduce mirror codes, a simple yet flexible construction of LDPC stabilizer codes parameterized by a group $G$ and two subsets of $G$ whose total size bounds the check weight. These codes contain all abelian two-block group algebra codes, such as bivariate bicycle (BB) codes. At the same time, they are manifestly not CSS in general, thus deviating substantially from most prior constructions. Fixing a check weight of 6, we find $[[ 60, 4, 10 ]], [[ 36, 6, 6 ]], [[ 48, 8, 6 ]]$, and $[[ 85, 8, 9 ]]$ codes, all of which are not CSS; we also find several weight-7 codes with $kd > n$. Next, we construct syndrome extraction circuits that trade overhead for provable fault tolerance. These circuits use 1-2, 3, and 6 ancillae per check, and respectively are partially fault-tolerant (FT), provably FT on weight-6 CSS codes, and provably FT on \emph{all} weight-6 stabilizer codes. Using our constructions, we perform end-to-end quantum memory experiments on several representative mirror codes under circuit-level noise. We achieve an error pseudothreshold on the order of $0.2\%$, approximately matching that of the $[[ 144, 12, 12 ]]$ BB code under the same model. These findings position mirror codes as a versatile candidate for fault-tolerant quantum memory, especially on smaller-scale devices in the near term.
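As a quick orientation to the quoted code parameters, here is a tiny script (illustrative, not from the paper) tabulating the encoding rate and the k*d figure of merit; note that none of the listed weight-6 codes exceeds kd = n, which the abstract claims only for the weight-7 codes:

```python
# Quick sanity check (illustrative, not from the paper) of the quoted
# weight-6 mirror-code parameters [[n, k, d]]: encoding rate k/n and
# the k*d figure of merit mentioned for the weight-7 codes.
CODES = [(60, 4, 10), (36, 6, 6), (48, 8, 6), (85, 8, 9)]

def summarize(codes):
    return [{"n": n, "k": k, "d": d, "rate": k / n, "kd": k * d}
            for n, k, d in codes]

for r in summarize(CODES):
    print(f"[[{r['n']},{r['k']},{r['d']}]]  rate={r['rate']:.3f}  kd={r['kd']}")
```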
Improved Decoding of Quantum Tanner Codes Using Generalized Check Nodes
This paper improves the decoding of quantum Tanner codes by grouping check nodes into more powerful generalized check nodes and using enhanced iterative belief propagation decoding. The proposed method significantly outperforms standard quaternary BP decoders and other recent approaches for quantum low-density parity-check codes.
Key Contributions
- Enhanced generalized belief propagation decoder for quantum Tanner codes that significantly outperforms existing methods
- Greedy algorithm for combining checks in generalized BP decoding for quantum LDPC codes
- Theoretical cycle analysis for various quantum LDPC code classes
View Full Abstract
We study the decoding problem for quantum Tanner codes and propose to exploit the underlying local code structure by grouping check nodes into more powerful generalized check nodes for enhanced iterative belief propagation (BP) decoding: the generalized checks are decoded with a maximum a posteriori (MAP) decoder as part of the check node processing of each decoding iteration. We mainly study the finite-length setting and show that the proposed enhanced generalized BP decoder for quantum Tanner codes significantly outperforms the standard quaternary BP decoder with memory effects, as well as the recently proposed Relay-BP decoder; in some cases the resulting quantum Tanner codes even outperform generalized bicycle (GB) codes with comparable parameters. For other classes of quantum low-density parity-check (qLDPC) codes, we propose a greedy algorithm to combine checks for generalized BP decoding. However, for GB codes, bivariate bicycle codes, hypergraph product codes, and lifted-product codes, there seems to be limited gain from combining simple checks into more powerful ones. To back up our findings, we also provide a theoretical cycle analysis for the considered qLDPC codes.
High-performance syndrome extraction circuits for quantum codes
This paper develops an improved framework for designing syndrome extraction circuits used in quantum error correction, achieving up to an order of magnitude better performance than existing designs. The authors generalize left-right circuit constructions to work with arbitrary CSS quantum codes and introduce formal tools to analyze error propagation and optimize circuit performance.
Key Contributions
- Generalization of left-right syndrome extraction circuits to arbitrary CSS codes with optimized performance
- Introduction of formal residual error analysis framework for quantifying circuit-level error propagation
- Demonstration of order-of-magnitude improvements in logical performance over existing single-ancilla designs
View Full Abstract
We present a fast and effective framework for analysing and designing syndrome-extraction circuits (SECs). Our approach is based on left-right circuits, a general design for SECs which maintain low depth by staggering $X$ and $Z$ checks without interleaving gates. Initially proposed for specific classes of codes, we generalise this construction to arbitrary CSS codes and optimise the circuit structure to achieve low qubit idling time, large effective distance, and reduced minimum-weight failure mechanisms. A key component of our framework is the formal notion of residual errors and their associated distance metrics, which form lightweight tools for capturing error propagation and quantifying the potential harm of circuit-level errors. Applying our automated framework to diverse classes of codes, we observe consistent improvements in logical performance of up to an order of magnitude compared to existing single-ancilla SEC designs. We also use these tools to prove that no non-interleaving SEC can achieve circuit distance $12$ for the gross code, and identify an explicit circuit that we conjecture achieves distance $11$, exceeding previously known constructions.
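The core bookkeeping behind residual-error analysis is how Pauli errors propagate through the Clifford gates of a syndrome-extraction circuit. A minimal sketch of that propagation (illustrative only; the paper's formal residual-error distance metrics are more involved), using the standard CNOT conjugation rules:

```python
# Toy Pauli-propagation bookkeeping through CNOT gates, the basic
# ingredient of analysing error spread in syndrome-extraction circuits
# (illustrative sketch, not the paper's formal residual-error framework).
# A Pauli on n qubits is stored as a pair of X/Z bit vectors.
def propagate_cnot(x, z, c, t):
    # CNOT rules: X_c -> X_c X_t and Z_t -> Z_c Z_t
    # (X_t and Z_c are unchanged).
    x = list(x)
    z = list(z)
    x[t] ^= x[c]
    z[c] ^= z[t]
    return x, z

# A single X error on an ancilla (qubit 0) before two CNOTs fanning out
# to data qubits 1 and 2 spreads to a weight-3 X error:
x, z = [1, 0, 0], [0, 0, 0]
x, z = propagate_cnot(x, z, 0, 1)
x, z = propagate_cnot(x, z, 0, 2)
print(x)  # the spread X error's support
```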
Low-depth amplitude estimation via statistical eigengap estimation
This paper develops new algorithms for quantum amplitude estimation by reframing the problem as estimating energy gaps in effective Hamiltonians rather than using traditional phase estimation approaches. The methods achieve optimal performance while using simpler classical post-processing and offering flexible tradeoffs between query complexity and circuit depth.
Key Contributions
- Reframes amplitude estimation as statistical eigengap estimation of effective Hamiltonians
- Develops algorithms achieving Heisenberg-limited scaling with simplified classical post-processing
- Establishes optimal query-depth tradeoffs for low-depth quantum circuits with theoretical guarantees
View Full Abstract
Amplitude estimation, in its original form, is formulated as phase estimation upon the Grover walk operator. Since its introduction, subsequent improvements to the algorithm have removed the use of phase estimation and introduced low-depth variants that trade speedup factors for lower circuit depth. We make the key observation that amplitude estimation is equivalent to estimating the energy gap of an effective Hamiltonian, whereby discrete time evolution is generated by amplitude amplification. This enables us to develop two amplitude estimation algorithms for both Heisenberg-limited and low-depth circuit regimes, inspired by statistical phase estimation techniques developed for seemingly unrelated early fault-tolerant ground-state energy estimation. Our approach has significant technical and practical benefits, and uses simplified classical post-processing compared to prior techniques -- our theoretical and numerical results indicate that we achieve state-of-the-art performance. Furthermore, while our approach achieves Heisenberg-limited scaling, we also establish optimal query-depth tradeoffs up to polylogarithmic factors in the low-depth regime with provable theoretical guarantees. Due to its flexibility, generality, and robustness, we expect our approach to be a key enabler for a broad range of early fault-tolerant applications.
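For context, the textbook amplitude-amplification relation behind the eigengap viewpoint (standard background, not the paper's specific construction) is that the Grover walk operator $Q$ rotates a two-dimensional invariant subspace, encoding the amplitude in its eigenphases:

```latex
% Standard amplitude-amplification spectrum (textbook background):
Q\,|\psi_\pm\rangle = e^{\pm 2i\theta}\,|\psi_\pm\rangle,
\qquad a = \sin^2\theta .
```

Writing $Q = e^{-iH_{\mathrm{eff}}}$, the two eigenphases of $H_{\mathrm{eff}}$ are separated by $4\theta \approx 4\sqrt{a}$ for small $a$, so estimating this energy gap recovers the amplitude $a$.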
Spatiotemporal Pauli processes: Quantum combs for modelling correlated noise in quantum error correction
This paper introduces Spatiotemporal Pauli Processes (SPPs), a mathematical framework that bridges the gap between simple error models used in quantum error correction and the complex, correlated noise that occurs in real quantum devices. The authors demonstrate how this framework can model realistic noise patterns and show that certain types of correlated noise can cause complete breakdown of quantum error correction codes.
Key Contributions
- Introduction of Spatiotemporal Pauli Processes framework that maps arbitrary non-Markovian quantum dynamics to tractable multi-time Pauli processes
- Demonstration that correlated noise can cause complete breakdown of surface code error correction through critical slowing down and macroscopic error avalanches
- Development of efficient tensor network representations and transfer operator diagnostics for analyzing correlated quantum noise
View Full Abstract
Correlated noise is a critical failure mode in quantum error correction (QEC), as temporal memory and spatial structure concentrate faults into error bursts that undermine standard threshold assumptions. Yet, a fundamental gap persists between the stochastic Pauli models ubiquitous in QEC and the microscopic, non-Markovian descriptions of physical device dynamics. We close this gap by introducing "Spatiotemporal Pauli Processes" (SPPs). By applying a multi-time Pauli twirl -- operationally realised by Pauli-frame randomisation -- to a general process tensor, we map arbitrary multi-time, non-Markovian dynamics to a multi-time Pauli process. This process is represented by a process-separable comb, or equivalently, a well-defined joint probability distribution over Pauli trajectories in spacetime. We show that SPPs inherit efficient tensor network representations whose bond dimensions are bounded by the environment's Liouville-space dimension. To interpret these structures, we develop transfer operator diagnostics linking spectra to correlation decay, and exact hidden Markov representations for suitable classes of SPPs. We demonstrate the framework via surface code memory and stability simulations of up to distance $19$ for (i) a temporally correlated "storm" model that tunes correlation length at fixed marginal error rates, and (ii) a genuinely spatiotemporal 2D quantum cellular automaton bath that maps exactly to a nonlinear probabilistic cellular automaton under twirling. Tuning coherent bath interactions drives the system into a pseudo-critical regime, exhibiting critical slowing down and macroscopic error avalanches that cause a complete breakdown of surface code distance scaling. Together, these results justify SPPs as an operationally grounded, scalable toolkit for modelling, diagnosing, and benchmarking correlated noise in QEC.
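To make the notion of temporally correlated "burst" noise concrete, here is a toy two-state hidden-Markov bit-flip sampler (a hypothetical caricature, not the paper's SPP construction): a calm/storm environment modulates the flip rate, so errors cluster in time even though the marginal rate stays modest.

```python
import random

# Toy hidden-Markov bit-flip noise: a two-state environment
# ("calm"/"storm") modulates the physical error rate over time,
# producing error bursts. An illustrative caricature of temporally
# correlated noise, not the paper's SPP framework.
P_STAY = {"calm": 0.95, "storm": 0.80}   # probability of staying put
P_FLIP = {"calm": 0.001, "storm": 0.20}  # per-step bit-flip probability

def sample_trajectory(steps, rng, state="calm"):
    flips = []
    for _ in range(steps):
        flips.append(rng.random() < P_FLIP[state])
        if rng.random() > P_STAY[state]:
            state = "storm" if state == "calm" else "calm"
    return flips

rng = random.Random(7)
traj = sample_trajectory(10_000, rng)
print(f"marginal flip rate: {sum(traj) / len(traj):.3f}")
```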
Heuristics for Shuttling Sequence Optimization for a Linear Segmented Trapped-Ion Quantum Computer
This paper develops optimization algorithms for moving trapped ions between different zones in a linear ion-trap quantum computer, focusing on minimizing the number of physical ion movements needed to execute quantum circuits. The authors present heuristics for determining optimal initial ion ordering and demonstrate improved performance for quantum Fourier transform-like circuits.
Key Contributions
- Development of heuristic algorithms for optimizing ion shuttling sequences in trapped-ion quantum computers
- Implementation of qubit mapping strategies to determine optimal initial ion ordering
- Demonstration that multiple interaction zones can reduce register reordering overhead
View Full Abstract
An algorithm for the generation of shuttling sequences is necessary for the operation of a linear segmented ion-trap quantum computer. The present work provides an implementation of an algorithm that produces sequences proved to be optimal for circuits with a quantum Fourier transform-like structure. Such optimality was proved in previous work of our group. We first present an approach for qubit mapping, i.e. determining the initial ordering of the ions, termed the common ion order, and develop a heuristic algorithm for its implementation. We explain how this heuristic is integrated in the shuttling sequence generation algorithm described in the previous work. The results show the increased performance of the heuristic in terms of reducing the number of required shuttling operations. The number of ion displacements required exhibits a polynomial increase in terms of the number of qubits, such that these operations become the main contribution to the overall resource cost. Furthermore, we show that multiple zones for gate interactions can reduce the amount of qubit register reordering.
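As a toy illustration of why register reordering dominates the resource cost, one crude proxy (a hypothetical cost model, not the paper's shuttling algorithm) counts the adjacent transpositions, i.e. permutation inversions, needed to realize a target ion ordering in a linear register:

```python
# Toy proxy for register-reordering cost in a linear trap: the number
# of adjacent transpositions needed to realize a target ion ordering
# equals the number of inversions of the permutation. Hypothetical
# cost model for illustration, not the paper's shuttling heuristic.
def inversions(order):
    n = len(order)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if order[i] > order[j])

print(inversions([2, 0, 1, 3]))  # 2 adjacent swaps suffice
```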
Constant depth magic state cultivation with Clifford measurements by gauging
This paper introduces a method to improve magic state preparation for quantum error correction by replacing the O(d)-depth Clifford measurement circuit with a constant-depth one, making the approach practical at larger code distances. The technique uses 'gauging' to perform logical measurements on color codes with better scalability than previous cultivation methods.
Key Contributions
- Development of constant-depth logical measurement circuits for color codes using gauging technique
- Achievement of 10^-12 logical error rates for d=7 color codes with improved scalability over magic state cultivation
View Full Abstract
Magic states are a scarce resource for two-dimensional qubit stabilizer codes. Magic state cultivation was recently proposed to reduce the cost of magic state preparation by measuring the transversal Clifford operator of the color code. Cultivation achieves $\sim 10^{-9}$ logical error rates for the $d=5$ color code, with substantially lower space-time overhead than magic state distillation. However, due to the $\mathcal{O}(d)$ depth of the Clifford measurement circuit, magic state cultivation becomes impractical for $d>5$. Here, we perform logical $XS^\dagger$ measurements on the color code by gauging a transversal Clifford gate, resulting in a constant-depth logical measurement circuit. We employ repeated gauging measurements with post-selection rather than performing error correction on the Clifford stabilizer code that emerges during the gauging protocol, thus gaining simplicity at the cost of scalability. Our protocol requires a regular square grid connectivity and yields logical error rates comparable to magic state cultivation. The $d=7$ version of our protocol gives access to the $10^{-12}$ logical error rate regime at $0.05\%$ physical error rate while retaining more than $1\%$ of the shots after the equivalent of the cultivation stage.
Optimal Decoding with the Worm
This paper introduces a new quantum error correction decoder called the 'worm algorithm' that uses Markov-Chain Monte-Carlo methods to optimally decode errors in quantum low-density parity-check (qLDPC) codes. The decoder can handle various quantum error correction codes including surface codes and hyperbolic surface codes, and demonstrates superior performance compared to existing decoding methods.
Key Contributions
- Novel worm algorithm decoder for optimal decoding of matchable qLDPC codes using MCMC methods
- Rigorous analysis of mixing time guarantees and connection to defect susceptibility
- Demonstration of superior decoding thresholds compared to minimum-weight perfect matching
- Extension to correlated decoding schemes that work beyond independent error models
View Full Abstract
We propose a new decoder for "matchable" qLDPC codes that uses a Markov-Chain Monte-Carlo algorithm -- called the "worm algorithm" -- to approximately compute the probabilities of logical error classes given a syndrome. The algorithm hence performs (approximate) optimal decoding, and we expect it to be computationally efficient in certain settings. The algorithm is applicable to decoding random errors for the surface code, the honeycomb Floquet code, and hyperbolic surface codes with constant rate, in all cases with and without measurement errors. The efficiency of the decoder hinges on the mixing time of the underlying Markov chain. We give a rigorous mixing time guarantee in terms of a quantity that we call the "defect susceptibility". We connect this quantity to the notion of disorder operators in statistical mechanics and use this to argue (non-rigorously) that the algorithm is efficient for typical errors in the entire decodable phase. We also demonstrate the effectiveness of the worm decoder numerically by applying it to the surface code with measurement errors as well as a family of hyperbolic surface codes. For most codes, the matchability condition restricts direct application of our decoder to noise models with independent bit-flip, phase-flip, and measurement errors. However, our decoder returns soft information which makes it useful also in heuristic "correlated decoding" schemes which work beyond this simple setting. We demonstrate this by simulating decoding of the surface code under depolarizing noise, and we find that the threshold for "correlated worm decoding" is substantially higher than for both minimum-weight perfect matching and for correlated matching.
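The idea of decoding by comparing logical-class probabilities can be seen on a toy example: for the classical repetition code under i.i.d. bit flips, the errors consistent with a syndrome split into exactly two classes, whose probabilities can be computed exhaustively. The worm decoder replaces this kind of exact computation with MCMC sampling on far larger codes; this sketch only illustrates the "decode by class probability" principle.

```python
# Exact optimal (degenerate) decoding for the classical n-bit
# repetition code under i.i.d. bit flips: errors with the same
# syndrome form two classes (a representative e and its complement);
# optimal decoding picks the likelier class. Illustrative toy only.
def class_probabilities(syndrome, p):
    # Reconstruct one representative error from the adjacent-parity
    # syndrome s_i = e_i XOR e_{i+1}, fixing e_0 = 0.
    e = [0]
    for s in syndrome:
        e.append(e[-1] ^ s)
    n = len(e)
    w = sum(e)
    prob = lambda weight: p**weight * (1 - p)**(n - weight)
    p0, p1 = prob(w), prob(n - w)  # class of e, class of its complement
    total = p0 + p1
    return p0 / total, p1 / total

p0, p1 = class_probabilities([1, 0, 0, 0], p=0.05)
print(f"P(class e)={p0:.4f}  P(class ~e)={p1:.4f}")
```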
Decay Rates in Interleaved Benchmarking with Single-Qubit References
This paper develops improved theoretical foundations for characterizing multi-qubit quantum gates using cross-entropy benchmarking with single-qubit reference sequences. The authors identify and correct systematic errors in current approaches, providing more accurate gate fidelity measurements that match standard benchmarking methods while achieving higher precision.
Key Contributions
- Derived analytical expression for joint decay of simultaneous single-qubit reference sequences
- Introduced refined expression for interleaved gate fidelity estimation that corrects systematic overestimation
- Validated theory experimentally on superconducting quantum processor showing agreement with standard interleaved randomized benchmarking
View Full Abstract
Cross-entropy benchmarking (XEB) with single-qubit reference sequences is widely used to characterize multi-qubit gates in large-scale quantum processors, despite the lack of a rigorous theoretical justification. Here we show that the commonly employed additive single-qubit errors approximation underlying this approach breaks down and leads to a systematic overestimation of gate fidelities. We derive an analytical expression for the joint decay of simultaneous single-qubit reference sequences and introduce a refined expression for the interleaved gate fidelity estimation. Experiments on a superconducting quantum processor validate the theory and demonstrate that fidelities obtained using XEB with single-qubit references agree with those extracted from standard interleaved randomized benchmarking (IRB), while achieving higher precision due to reduced reference-sequence errors. Our results establish a theoretical foundation for single-qubit-based XEB and show that, with appropriate post-processing, it enables a reliable and robust approach for benchmarking entangling gates without the need for multi-qubit Clifford reference sequences.
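For orientation, the standard interleaved-RB extraction that the refined XEB estimator parallels (textbook formulas, not the paper's corrected expression): sequence fidelities are fit to exponential decays, and the gate error is estimated from the ratio of decay parameters,

```latex
F(m) = A\,p^{m} + B, \qquad
r_{\mathcal{C}} \approx \frac{d-1}{d}\left(1 - \frac{p_{\mathcal{C}}}{p}\right),
```

where $p$ and $p_{\mathcal{C}}$ are the reference and interleaved decay parameters and $d = 2^n$. The paper's point is that the analogous additive-error approximation for single-qubit reference sequences breaks down without the derived joint-decay correction.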
Recursive Magic State Distillation on the Surface Code
This paper develops a more efficient method for preparing magic states needed for quantum computation by using recursive 15-to-1 distillation with lattice surgery on surface codes. The approach reduces the physical qubit requirements and time needed to create high-quality magic states, though it requires lower physical error rates to be effective.
Key Contributions
- Recursive implementation of 15-to-1 magic state distillation reducing resource overhead
- Specific resource estimates for T and CCZ magic state preparation on surface codes with lattice surgery
View Full Abstract
I reduce the cost to prepare magic states with lattice surgery operations on the surface code by using a recursive implementation of 15-to-1 magic state distillation. On a rotated surface code with distance $d$, $|T\rangle$ preparation requires a $d$-by-$3 d$ grid of data qubits for up to $15 d$ error correction cycles, and $|CCZ\rangle$ preparation requires a $3 d$-by-$2 d$ grid for up to $10.5 d$ cycles. However, a significantly lower physical error threshold than that of the underlying surface code is required to match the error probability of the output magic state with the logical error rate of the output surface code at large code distances.
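The leading-order suppression of 15-to-1 distillation, p_out ≈ 35 p_in³, is the standard result for this protocol (prefactors and acceptance rates in the paper's lattice-surgery implementation will differ); iterating it shows why recursion reaches very low output error rates quickly, and also why a sufficiently low physical error rate is needed for each round to improve on the last:

```python
# The 15-to-1 protocol suppresses T-state infidelity as
# p_out ≈ 35 * p_in**3 (standard leading-order result; the paper's
# surface-code implementation has different overheads and thresholds).
def distill(p_in, rounds):
    p = p_in
    for _ in range(rounds):
        p = 35 * p**3
    return p

print(distill(1e-3, 1))  # ~3.5e-8 after one round
print(distill(1e-3, 2))  # ~1.5e-21 after two rounds
```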
Generalized matching decoders for 2D topological translationally-invariant codes
This paper develops new graph-matching decoders for 2D topological translationally-invariant quantum error-correcting codes such as bivariate bicycle codes. The decoders work by coarse-graining the syndrome into an equivalent toric-code representation, which can then be decoded efficiently with graph-matching techniques.
Key Contributions
- Development of graph-matching decoders for general translationally-invariant topological codes
- Proof that these decoders correct errors up to a constant fraction of code distance with provable performance guarantees
- Numerical demonstration of competitive performance with existing decoders for bivariate bicycle codes
View Full Abstract
Two-dimensional topological translationally-invariant (TTI) quantum codes, such as the toric code (TC) and bivariate bicycle (BB) codes, are promising candidates for fault-tolerant quantum computation. For such codes to be practically relevant, their decoders must successfully correct the most likely errors while remaining computationally efficient. For the TC, graph-matching decoders satisfy both requirements and, additionally, admit provable performance guarantees. Given the equivalence between TTI codes and (multiple copies of) the TC, one may then ask whether TTI codes also admit analogous graph-matching decoders. In this work, we develop a graph-matching approach to decoding general TTI codes. Intuitively, our approach coarse-grains the TTI code to obtain an effective description of the syndrome in terms of TC excitations, which can then be removed using graph-matching techniques. We prove that our decoders correct errors of weight up to a constant fraction of the code distance and achieve non-zero code-capacity thresholds. We further numerically study a variant optimized for practically relevant BB codes and observe performance comparable to that of the belief propagation with ordered statistics decoder. Our results indicate that graph-matching decoders are a viable approach to decoding BB codes and other TTI codes.
QGPU: Parallel logic in quantum LDPC codes
This paper introduces clustered-cyclic codes, a new family of quantum low-density parity-check (LDPC) codes that enable highly parallel logical operations, and proposes parallel product surgery techniques to perform multiple logical measurements simultaneously with fixed overhead.
Key Contributions
- Introduction of clustered-cyclic quantum LDPC codes with competitive parameters like [[136,8,14]] and [[198,18,10]]
- Development of parallel product surgery protocol enabling surface-code-style maximal parallelism for logical operations
- Proof that parallel product surgery preserves code distance and demonstration of fault-tolerant Clifford group generation
View Full Abstract
Quantum error correction is critical to the design and manufacture of scalable quantum computing systems. Recently, there has been growing interest in quantum low-density parity-check codes as a resource-efficient alternative to surface codes. Their adoption is hindered by the difficulty of compiling fault-tolerant logical operations. A key challenge is that logical qubits do not necessarily map to disjoint sets of physical qubits, which limits parallelism. We introduce clustered-cyclic codes, a quantum low-density parity-check code family with finite-size instances such as [[136,8,14]] and [[198,18,10]] that are competitive with state-of-the-art constructions. These codes admit a directly addressable logical basis, enabling highly parallel logical measurement layers. To leverage this structure, we propose parallel product surgery for quantum product codes. Using an auxiliary copy of the data patch and an engineered product-connection structure, the protocol performs many logical Pauli-product measurements in a single surgery round with small, fixed overhead. For clustered-cyclic codes, this yields surface-code-style maximal parallelism: up to k/2 disjoint Pauli-product measurements per round under explicit algebraic conditions. We prove that parallel product surgery preserves the code distance for hypergraph product codes and numerically verify distance preservation for the listed clustered-cyclic instances with k = 8. Finally, for the [[24,8,3]] clustered-cyclic code, treating half of the logical qubits as auxiliaries enables arbitrary parallel CNOTs on disjoint pairs; combined with symmetry-derived operations, these gates generate the full Clifford group fault-tolerantly.
SpiderCat: Optimal Fault-Tolerant Cat State Preparation
This paper develops optimal methods for preparing fault-tolerant CAT states (multi-qubit entangled states) needed for quantum error correction, using graph theory to find circuits that minimize the number of CNOT gates while preventing error spread. The authors provide both theoretical lower bounds and practical constructions that significantly improve upon previous resource requirements.
Key Contributions
- Derived formal lower bounds on CNOT gate requirements for fault-tolerant n-qubit CAT state preparation using ZX-diagram analysis and graph theory
- Provided explicit optimal circuit constructions for CAT states up to n≤100 qubits that significantly improve resource counts over previous methods
- Developed constant-depth fault-tolerant implementations using O(n) ancilla qubits and O(n) CNOT gates
View Full Abstract
The ability to fault-tolerantly prepare CAT states, also known as multi-qubit GHZ states, is an important primitive for quantum error correction. It is required for Shor-style syndrome extraction, and can also be used as a subroutine for doing fault-tolerant state preparation of CSS codewords. Existing approaches to fault-tolerant CAT state preparations have been found using computationally expensive heuristics involving SAT solving, reinforcement learning, or exhaustive analysis. In this paper, we constructively find optimal circuits for CAT states in a more scalable way. In particular, we derive formal lower bounds on the number of CNOT gates required for circuits implementing $n$-qubit CAT states that do not spread errors of weight at most $t$ for $1\leq t \leq 5$. We do this by using fault-equivalent rewrites of ZX-diagrams to reduce it to a problem of characterising certain 3-regular simple graphs. We then provide families of such optimal graphs for infinitely many values of $n$ and $t\leq5$. By encoding the construction of optimal graphs as a constraint satisfaction problem we find explicit constructions for circuits that match this lower bound on CNOT count for all $n\leq50$ and $t \leq 5$ and for nearly all pairs $(n,t)$ with $n\leq 100$ and $t\leq 5$ or $n\leq 50$ and $t\leq 7$, significantly extending the regimes that were achievable by previous methods and improving the resource counts for existing constructions. We additionally show how to trade CNOT count against depth, allowing us to construct constant-depth fault-tolerant implementations using $O(n)$ ancilla and $O(n)$ CNOT gates.
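For contrast with the paper's fault-tolerant constructions, the plain (non-fault-tolerant) CAT/GHZ preparation is a Hadamard followed by a CNOT ladder; a single fault on the first qubit spreads to high weight, which is exactly what the paper's optimized circuits avoid. A minimal statevector sketch of the naive ladder:

```python
from math import sqrt

# Naive n-qubit CAT (GHZ) preparation: H on qubit 0, then a CNOT
# ladder. This is NOT fault-tolerant (errors on early qubits spread);
# it only shows the target state (|0...0> + |1...1>)/sqrt(2).
def cnot(psi, n, c, t):
    # CNOT with control qubit c and target t (qubit 0 = leftmost bit).
    cm = 1 << (n - 1 - c)
    tm = 1 << (n - 1 - t)
    for i in range(len(psi)):
        if (i & cm) and not (i & tm):
            j = i | tm
            psi[i], psi[j] = psi[j], psi[i]
    return psi

def ghz_state(n):
    psi = [0.0] * (1 << n)
    psi[0] = 1.0
    top = 1 << (n - 1)
    for i in range(top):  # Hadamard on qubit 0
        a, b = psi[i], psi[i | top]
        psi[i], psi[i | top] = (a + b) / sqrt(2), (a - b) / sqrt(2)
    for k in range(n - 1):  # CNOT ladder: qubit k controls qubit k+1
        psi = cnot(psi, n, k, k + 1)
    return psi

psi = ghz_state(4)
print(round(psi[0], 4), round(psi[-1], 4))  # 0.7071 0.7071
```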
Achieving Thresholds via Standalone Belief Propagation on Surface Codes
This paper develops new belief propagation decoders for quantum error correction that achieve threshold performance on surface codes by operating on decoding graphs rather than traditional Tanner graphs. The approach achieves performance comparable to minimum weight perfect matching decoders while being more suitable for hardware acceleration.
Key Contributions
- Novel belief propagation decoders that achieve thresholds on surface codes by operating on decoding graphs instead of Tanner graphs
- Hardware-scalable decoder implementation that matches minimum weight perfect matching performance
View Full Abstract
The usual belief propagation (BP) decoders are, in general, exchanging local information on the Tanner graph of the quantum error-correcting (QEC) code and, in particular, are known to not have a threshold for the surface code. We propose novel BP decoders that exchange messages on the decoding graph and obtain code capacity thresholds via standalone BP for the surface code under depolarizing noise. Our approach, similarly to the minimum weight perfect matching (MWPM) decoder, is applicable to any graphlike QEC code. The thresholds observed with our decoders are close to those obtained by MWPM. This result opens the path towards scalable hardware-accelerated implementations of MWPM-compatible decoders.
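For readers unfamiliar with the baseline, standard BP exchanges log-likelihood messages on the code's Tanner graph. The sketch below (illustrative only, not the paper's decoding-graph construction) runs sum-product decoding of a 3-bit repetition-code error model, where the Tanner graph is a tree and BP is therefore exact:

```python
from math import log, tanh, atanh

# Exact sum-product BP for a 3-bit repetition-code error model with
# checks s0 = e0 XOR e1 and s1 = e1 XOR e2. The Tanner graph is a
# tree, so BP converges to the exact posterior marginals. Illustrative
# baseline only; the paper's decoders pass messages on the decoding
# graph instead.
CHECKS = [(0, 1), (1, 2)]

def bp_marginals(syndrome, p, iters=5):
    prior = log((1 - p) / p)  # LLR favouring "no error" on each bit
    msgs = [{v: 0.0 for v in chk} for chk in CHECKS]  # check -> var
    for _ in range(iters):
        new = []
        for c, chk in enumerate(CHECKS):
            out = {}
            for v in chk:
                prod = 1.0
                for u in chk:
                    if u == v:
                        continue
                    # variable-to-check message: prior + other checks
                    lu = prior + sum(msgs[c2][u]
                                     for c2, chk2 in enumerate(CHECKS)
                                     if u in chk2 and c2 != c)
                    prod *= tanh(lu / 2)
                sign = -1.0 if syndrome[c] else 1.0
                prod = max(min(prod, 1 - 1e-12), -(1 - 1e-12))
                out[v] = sign * 2 * atanh(prod)
            new.append(out)
        msgs = new
    return [prior + sum(msgs[c][v]
                        for c, chk in enumerate(CHECKS) if v in chk)
            for v in range(3)]

llrs = bp_marginals([1, 0], p=0.05)
hard = [1 if l < 0 else 0 for l in llrs]
print(hard)  # hard decision: most likely error given the syndrome
```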
Simplified circuit-level decoding using Knill error correction
This paper investigates Knill error correction, a quantum error correction technique that uses a single round of measurements instead of repeated syndrome measurements, requiring an auxiliary logical Bell state. The authors prove its fault tolerance and show that it can use simpler decoding algorithms, potentially reducing the classical control requirements for large-scale quantum computers.
Key Contributions
- Theoretical proof of fault tolerance for Knill error correction under circuit-level noise
- Demonstration that Knill error correction can use simpler code-capacity decoders instead of complex circuit-level decoders
- Numerical benchmarking of the protocol's performance on quantum low-density parity-check codes
View Full Abstract
Quantum error correction will likely be essential for building a large-scale quantum computer, but it comes with significant requirements at the level of classical control software. In particular, a quantum error-correcting code must be supplemented with a fast and accurate classical decoding algorithm. Standard techniques for measuring the parity-check operators of a quantum error-correcting code involve repeated measurements, which both increases the amount of data that needs to be processed by the decoder, and changes the nature of the decoding problem. Knill error correction is a technique that replaces repeated syndrome measurements with a single round of measurements, but requires an auxiliary logical Bell state. Here, we provide a theoretical and numerical investigation into Knill error correction from the perspective of decoding. We give a self-contained description of the protocol, prove its fault tolerance under locally decaying (circuit-level) noise, and numerically benchmark its performance for quantum low-density parity-check codes. We show analytically and numerically that the time-constrained decoding problem for Knill error correction can be solved using the same decoder used for the simpler code-capacity noise model, illustrating that Knill error correction may alleviate the stringent requirements on classical control required for building a large-scale quantum computer.
Robust and optimal control of open quantum systems
This paper develops an improved algorithm for controlling quantum systems that accounts for real-world imperfections like noise and parameter uncertainties. The researchers demonstrate their approach experimentally using superconducting quantum circuits, achieving very low error rates of about 0.60%.
Key Contributions
- Enhanced scalable algorithm for robust quantum control in open systems with noise and imperfections
- Experimental validation achieving 0.60% infidelity in superconducting quantum circuits
View Full Abstract
Recent advancements in quantum technologies have highlighted the importance of mitigating system imperfections, including parameter uncertainties and decoherence effects, to improve the performance of experimental platforms. However, most of the previous efforts in quantum control are devoted to the realization of arbitrary unitary operations in a closed quantum system. Here, we improve the algorithm that suppresses system imperfections and noise, providing notably enhanced scalability for robust and optimal control of open quantum systems. Through experimental validation in a superconducting quantum circuit, we demonstrate that our approach outperforms its conventional counterpart for closed quantum systems with an ultra-low infidelity of about $0.60\%$, while the complexity of this algorithm exhibits the same scaling, with only a modest increase in the prefactor. This work represents a notable advancement in quantum optimal control techniques, paving the way for realizing quantum-enhanced technologies in practical applications.
Quantum advantages for syndrome-aware noisy logical observable estimation
This paper develops a theoretical framework to analyze how error syndrome information can improve the estimation of quantum observables in fault-tolerant quantum computers. The research shows that classical post-processing of syndromes provides limited improvement, but quantum protocols that adapt measurements based on syndromes can achieve exponentially better performance.
Key Contributions
- Proves universal limitation that classical syndrome-aware protocols can improve logical error rates by at most a factor of two
- Demonstrates quantum protocols with syndrome-conditioned control can achieve exponential improvement in effective logical error rate
View Full Abstract
Recent progress in fault-tolerant quantum computing suggests that leveraging error-syndrome information at the logical layer can substantially improve performance, including the estimation of logical observables from noisy states. In this work, based on quantum estimation theory, we develop an information-theoretic framework to quantify the utility of error syndromes for noisy logical observable estimation. We distinguish two operational regimes of such syndrome-aware protocols: classical protocols, in which the logical measurement basis is fixed and syndrome information is used only in classical post-processing, and quantum protocols, in which the logical quantum control can be tailored to depend on the observed error syndrome. For classical syndrome-aware protocols, we prove a universal limitation: on average, syndrome information can improve the effective logical error rate by at most a factor of two, implying at most a quadratic reduction in sampling overhead. In contrast, once syndrome-conditioned quantum control is permitted, we exhibit settings in which the effective logical error rate decays exponentially with the number of logical qubits. These findings provide fundamental guidance for designing future fault-tolerant architectures that actively exploit syndrome records rather than discarding them after decoding.
Parsimonious Quantum Low-Density Parity-Check Code Surgery
This paper introduces a more efficient method for quantum code surgery in quantum Low-Density Parity-Check codes, reducing the number of ancilla qubits needed from linear to logarithmic scaling when measuring logical operators. The work improves the overhead costs of fault-tolerant quantum computing schemes that rely on measuring logical operators within error-correcting codes.
Key Contributions
- Development of O(W log W) ancilla system construction for measuring logical Pauli operators of weight W
- Asymptotic overhead reduction across various quantum code surgery schemes in qLDPC codes
View Full Abstract
Quantum code surgery offers a flexible, low-overhead framework for executing logical measurements within quantum error-correcting codes. It encompasses several fault-tolerant logical computation schemes, including parallel surgery, universal adapters and fast surgery, and serves as the key primitive in extractor architectures. The efficiency of these schemes crucially depends on constructing low-overhead ancilla systems for measuring arbitrary logical operators in general quantum Low-Density Parity-Check (qLDPC) codes. In this work, we introduce a method to construct an ancilla system of qubit size $O(W \log W)$ to measure an arbitrary logical Pauli operator of weight $W$ in any qLDPC stabilizer code. This new construction immediately reduces the asymptotic overhead across various quantum code surgery schemes.
Quantum Weight Reduction with Layer Codes
This paper introduces a new quantum weight reduction method that makes quantum error correction codes easier to implement by replacing components of existing codes with surface code patches connected together. The method achieves lower weight checks and qubit degrees than existing approaches, making the codes more practical for modular quantum computing architectures.
Key Contributions
- Novel quantum weight reduction procedure achieving check weight 6 and qubit degree 6
- Introduction of Layer Codes formed by connecting surface code patches for practical implementation
View Full Abstract
Quantum weight reduction procedures ease the implementation of quantum codes by sparsifying them, resulting in low-weight checks and low-degree qubits. However, to date, only a few quantum weight reduction methods have been explored. In this work we introduce a simple and general procedure for quantum weight reduction that achieves check weight 6 and total qubit degree 6, lower than existing procedures at the cost of a potentially larger qubit overhead. Our quantum weight reduction procedure replaces each qubit and check in an arbitrary Calderbank-Shor-Steane code with an ample patch of surface code; these patches are then joined together to form a geometrically nonlocal Layer Code. This is a quantum analog of the simple classical weight reduction procedure where each bit and check is replaced by a repetition code. Due to the simplicity of our weight reduction procedure, bounds on the weight and degree of the resulting code follow directly from the Layer Code construction and hence are easily verified by inspection. Our procedure is well suited for implementation in modular architectures that consist of surface code patches networked via long-range interconnects.
HyQBench: A Benchmark Suite for Hybrid CV-DV Quantum Computing
This paper introduces HyQBench, a benchmarking framework for hybrid quantum systems that combine continuous-variable (CV) and discrete-variable (DV) quantum computing approaches. The researchers developed a simulation tool and created standardized benchmarks to evaluate the performance and capabilities of these hybrid quantum systems across various computational tasks.
Key Contributions
- Development of HyQBench simulation and benchmarking framework for hybrid CV-DV quantum circuits using Bosonic Qiskit
- Creation of standardized benchmark suite including cat state generation, GKP states, hybrid quantum Fourier transform, and Shor's algorithm
- Definition of CV-DV-specific feature maps and metrics for evaluating circuit complexity, scalability, and hardware requirements
View Full Abstract
Hybrid continuous-variable (CV)-discrete-variable (DV) quantum systems present a promising direction for quantum computing by combining the high dimensional encoding capabilities of qumodes with the control offered by DV qubits on the coupled qumodes. There has been exciting recent progress on hybrid CV-DV quantum computing, including variational algorithms, error correction, compiler-level optimizations for Hamiltonian simulation, etc. However, there is a lack of a standardized CV-DV benchmark suite for assessing various emerging hardware platforms and evaluating software optimizations on hybrid CV-DV circuits. In this work, we introduce a simulation and benchmarking framework for hybrid CV-DV circuits, implemented using Bosonic Qiskit, a tool specifically designed to model CV-DV systems, along with QuTiP for functional correctness verification. We construct and characterize representative CV-DV benchmarks, including cat state generation, GKP state generation, CV-DV state transfers, hybrid quantum Fourier transform, variational quantum algorithms, Hamiltonian simulation, and Shor's algorithm. To assess circuit complexity and scalability, we define a feature map organized into two categories: general features (e.g., qubit/qumode count, gate counts) and CV-DV-specific features (e.g., Wigner negativity, energy, truncation cost). These metrics enable evaluation of both classical simulability and hardware resource requirements. Our results, including one benchmark on real hardware, demonstrate that hybrid CV-DV architectures are not only viable but well-suited for a range of computational tasks, from optimization to Hamiltonian simulation. This framework lays the groundwork for systematic evaluation and future development of hybrid quantum systems.
On Error Thresholds for Pauli Channels: Some answers with many more questions
This paper analyzes error thresholds for quantum error correction codes, specifically studying how well different stabilizer codes can protect quantum information from Pauli noise channels. The researchers compute bounds on error rates that quantum codes can tolerate and discover that some codes perform better when combined than theory predicts.
Key Contributions
- Numerical computation of lower bounds for error thresholds in Pauli channels using coset weight enumerators
- Discovery of significant non-additivity in concatenated stabilizer codes and closed-form expressions for repetition code concatenations
- Optimization of channel parameters for maximal non-additivity and threshold estimates for large concatenated codes
View Full Abstract
This paper focuses on error thresholds for Pauli channels. We numerically compute lower bounds for the thresholds using the analytic framework of coset weight enumerators pioneered by DiVincenzo, Shor and Smolin in 1998. In particular, we study potential non-additivity of a variety of small stabilizer codes and their concatenations, and report several new concatenated stabilizer codes of small length that show significant non-additivity. We also give a closed-form expression for the coset weight enumerators of concatenated phase and bit flip repetition codes. Using insights from this formalism, we estimate the threshold for concatenated repetition codes of large lengths. Finally, for several concatenations of small stabilizer codes we optimize for channels which lead to maximal non-additivity at the hashing point of the corresponding channel. We supplement these results with a discussion on the performance of various stabilizer codes from the perspective of the non-additivity and threshold problem. We report both positive and negative results, and highlight some counterintuitive observations, to support subsequent work on lower bounds for error thresholds.
Magic state distillation with permutation-invariant codes and a two-qubit example
This paper presents a new magic state distillation protocol that uses permutation-invariant codes as small as two qubits to create clean quantum states needed for fault-tolerant quantum computing. The protocol outperforms previous methods by allowing non-Clifford gates and flexible output states, achieves a 0.5 error threshold, and can distill magic states with arbitrary levels of magic.
Key Contributions
- Novel magic state distillation protocol using permutation-invariant codes with minimal two-qubit overhead
- Achievement of 0.5 error threshold and 1/2 distillation rate surpassing comparable schemes
- Flexible protocol that can distill magic states with arbitrary magic levels by varying ideal input state positions
View Full Abstract
Magic states, by allowing non-Clifford gates through gate teleportation, are important building blocks of fault-tolerant quantum computation. Magic state distillation protocols aim to create clean copies of magic states from many noisier copies. However, the prevailing protocols require substantial qubit overhead. We present a distillation protocol based on permutation-invariant gnu codes, as small as two qubits. The two-qubit protocol achieves a 0.5 error threshold and 1/2 distillation rate, surpassing prior schemes for comparable codes. Our protocol furthermore distils magic states with arbitrary magic by varying the position of the ideal input states on the Bloch sphere. We achieve this by departing from the usual magic state distillation formalism, allowing the use of non-Clifford gates in the distillation protocol, and allowing the form of the output state to differ from the input state. Our protocol can be used in tandem with existing magic state distillation protocols to enhance their performance.
Minimum Weight Decoding in the Colour Code is NP-hard
This paper proves that exact decoding of the colour code, a promising quantum error correction scheme, is computationally intractable (NP-hard). Unlike the surface code which can be decoded efficiently, colour code decoding cannot be solved exactly in polynomial time unless P=NP.
Key Contributions
- Proves that minimum weight decoding in the colour code is NP-hard
- Establishes fundamental computational limitations that distinguish colour codes from surface codes
View Full Abstract
All utility-scale quantum computers will require some form of Quantum Error Correction in which logical qubits are encoded in a larger number of physical qubits. One promising encoding is known as the colour code which has broad applicability across all qubit types and can decisively reduce the overhead of certain logical operations when compared to other two-dimensional topological codes such as the surface code. However, whereas the surface code decoding problem can be solved exactly in polynomial time by finding minimum weight matchings in a graph, prior to this work, it was not known whether exact and efficient colour code decoding was possible. Optimism in this area, stemming from the colour code's significant structure and well understood similarities to the surface code, fanned this uncertainty. In this paper we resolve this, proving that exact decoding of the colour code is NP-hard -- that is, there does not exist a polynomial time algorithm unless P=NP. This highlights a notable contrast to some of the colour code's key competitors, such as the surface code, and motivates continued work in the narrower space of heuristic and approximate algorithms for fast, accurate and scalable colour code decoding.
Achieving Optimal-Distance Atom-Loss Correction via Pauli Envelope
This paper develops new methods to correct atom loss errors in neutral-atom quantum computers, which account for over 40% of physical errors. The researchers propose a 'Pauli Envelope' framework with improved syndrome extraction circuits and decoders that achieve better error correction performance than existing approaches.
Key Contributions
- Pauli Envelope framework for bounding atom loss effects with efficient computation
- Mid-SWAP syndrome extraction circuit that reduces error propagation without additional overhead
- Envelope-MLE decoder achieving optimal effective code distance for atom-loss errors
- Envelope-Matching decoder providing improved performance within MWPM framework
View Full Abstract
Atom loss is a major error source in neutral-atom quantum computers, accounting for over 40% of the total physical errors in recent experiments. Unlike Pauli errors, atom loss poses significant challenges for both syndrome extraction and decoding due to its nonlinearity and correlated nature. Current syndrome extraction circuits either require additional physical overhead or do not provide optimal loss tolerance. On the decoding side, existing methods are either computationally inefficient, achieve suboptimal logical error rates, or rely on machine learning without provable guarantees. To address these challenges, we propose the Pauli Envelope framework. This framework constructs a Pauli envelope that bounds the effect of atom loss while remaining low weight and efficiently computable. Guided by this framework, we first design a new atom-replenishing syndrome extraction circuit, the Mid-SWAP syndrome extraction, that reduces error propagation with no additional space-time cost. We then propose an optimal decoder for Mid-SWAP syndrome extraction: the Envelope-MLE decoder formulated as an MILP that achieves optimal effective code distance d_loss ≈ d for atom-loss errors. Inspired by the exclusivity constraint of the optimal decoder, we also propose an Envelope-Matching decoder to approximately enforce the exclusivity constraint within the MWPM framework. This decoder achieves d_loss ≈ 2d/3, surpassing the previous best algorithmic decoder, which achieves d_loss ≈ d/2 even with an MILP formulation. Circuit-level simulations demonstrate that our approach attains up to 40% higher thresholds and 30% higher effective distances compared with existing algorithmic decoders and syndrome extraction circuits in the loss-dominated regime. On recent experimental data, our Envelope-MLE decoder improves the error suppression factor of a hybrid MLE-machine-learning decoder from 2.14 to 2.24.
Efficient Time-Aware Partitioning of Quantum Circuits for Distributed Quantum Computing
This paper develops a time-aware algorithm based on beam search to efficiently partition quantum circuits across multiple quantum processing units in distributed quantum computing networks. The algorithm minimizes communication costs between remote quantum processors while providing significant computational speedup over existing methods.
Key Contributions
- Time-aware beam search heuristic for quantum circuit partitioning in distributed systems
- Algorithm with quadratic scaling in qubits and linear scaling in circuit depth, providing computational speedup over metaheuristics
- Demonstrated reduction in quantum communication overhead across various circuit sizes and network topologies
View Full Abstract
To overcome the physical limitations of scaling monolithic quantum computers, distributed quantum computing (DQC) interconnects multiple smaller-scale quantum processing units (QPUs) to form a quantum network. However, this approach introduces a critical challenge, namely the high cost of quantum communication between remote QPUs incurred by quantum state teleportation and quantum gate teleportation. To minimize this communication overhead, DQC compilers must strategically partition quantum circuits by mapping logical qubits to distributed physical QPUs. Static graph partitioning methods are fundamentally ill-equipped for this task as they ignore execution dynamics and underlying network topology, while metaheuristics require substantial computational runtime. In this work, we propose a heuristic based on beam search to solve the circuit partitioning problem. Our time-aware algorithm incrementally constructs a low-cost sequence of qubit assignments across successive time steps to minimize overall communication overhead. The time and space complexities of the proposed algorithm scale quadratically with the number of qubits and linearly with circuit depth, offering a significant computational speedup over common metaheuristics. We demonstrate that our proposed algorithm consistently achieves significantly lower communication costs than static baselines across varying circuit sizes, depths, and network topologies, providing an efficient compilation tool for near-term distributed quantum hardware.
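The general shape of a time-aware beam search over qubit assignments can be sketched as follows. This is our own toy illustration with a simplified cost model (one ebit per cut gate left in place, one per qubit migration) and move set, not the paper's algorithm:

```python
def beam_search_partition(layers, n_qubits, cap, beam_width=8):
    """Toy time-aware beam-search partitioner (illustrative sketch).
    `layers`: list of gate layers, each a list of (q1, q2) two-qubit gates.
    `cap`: per-QPU qubit capacity.
    Returns (total_cost, best assignment after each layer)."""
    start = tuple(q // cap for q in range(n_qubits))  # naive initial placement
    beam = [(0, start)]
    history = []
    for layer in layers:
        candidates = {}
        for cost, asg in beam:
            # Option 1: keep the placement and pay one ebit per cut gate.
            cut = sum(1 for a, b in layer if asg[a] != asg[b])
            if cost + cut < candidates.get(asg, float("inf")):
                candidates[asg] = cost + cut
            # Option 2: migrate one endpoint of a cut gate to its partner's QPU.
            for a, b in layer:
                if asg[a] == asg[b]:
                    continue
                for mover, target in ((a, asg[b]), (b, asg[a])):
                    new = list(asg)
                    new[mover] = target
                    if sum(1 for p in new if p == target) > cap:
                        continue  # target QPU full, migration not allowed
                    new = tuple(new)
                    ncut = sum(1 for x, y in layer if new[x] != new[y])
                    c = cost + 1 + ncut  # one teleportation + remaining cuts
                    if c < candidates.get(new, float("inf")):
                        candidates[new] = c
        # Keep only the beam_width cheapest assignments for the next layer.
        beam = sorted((c, a) for a, c in candidates.items())[:beam_width]
        history.append(beam[0][1])
    return beam[0][0], history
```

Incrementally pruning to a fixed beam width is what keeps the runtime polynomial (here, roughly quadratic in qubit count per layer), in contrast to metaheuristics that re-optimize globally.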
Spectrally Corrected Polynomial Approximation for Quantum Singular Value Transformation
This paper improves Quantum Singular Value Transformation (QSVT) by developing a spectral correction method that uses prior knowledge of some eigenvalues to create more efficient polynomial approximations. The approach achieves up to 5× reduction in quantum circuit depth while maintaining high fidelity, demonstrated on solving linear systems like the Poisson equation.
Key Contributions
- Development of spectral correction method for QSVT that exploits prior eigenvalue knowledge to reduce polynomial degree
- Demonstration of up to 5× circuit depth reduction while maintaining unit fidelity on linear system solving problems
- Framework that is agnostic to base polynomial choice and robust to eigenvalue perturbations up to 10%
View Full Abstract
Quantum Singular Value Transformation (QSVT) provides a unified framework for applying polynomial functions to the singular values of a block-encoded matrix. QSVT prepares a state proportional to $\mathbf{A}^{-1}\mathbf{b}$ with circuit depth $O(d\cdot\mathrm{polylog}(N))$, where $d$ is the polynomial degree of the $1/x$ approximation and $N$ is the size of $\mathbf{A}$. Current polynomial approximation methods are over the continuous interval $[a,1]$, giving $d = O(\sqrt{\kappa}\log(1/\varepsilon))$, and make no use of any properties of $\mathbf{A}$. We observe here that QSVT solution accuracy depends only on the polynomial accuracy at the eigenvalues of $\mathbf{A}$. When all $N$ eigenvalues are known exactly, a pure spectral polynomial $p_{S}$ can interpolate $1/x$ at these eigenvalues and achieve unit fidelity at reduced degree. But its practical applicability is limited. To address this, we propose a spectral correction that exploits prior knowledge of $K$ eigenvalues of $\mathbf{A}$. Given any base polynomial $p_0$, such as Remez, of degree $d_0$, a $K\times K$ linear system enforces exact interpolation of $1/x$ only at these $K$ eigenvalues without increasing $d_0$. The spectrally corrected polynomial $p_{SC}$ preserves the continuous error profile between eigenvalues and inherits the parity of $p_0$. QSVT experiments on the 1D Poisson equation demonstrate up to a $5\times$ reduction in circuit depth relative to the base polynomial, at unit fidelity and improved compliance error. The correction is agnostic to the choice of base polynomial and robust to eigenvalue perturbations up to $10\%$ relative error. Extension to the 2D Poisson equation suggests that correcting a small fraction of the spectrum may suffice to achieve fidelity above $0.999$.
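The $K \times K$ correction system described above can be illustrated numerically. In this sketch the base polynomial is a least-squares Chebyshev fit of $1/x$ and the correction basis is the $K$ lowest-order odd Chebyshev terms within the base degree; both choices are our assumptions for illustration and may differ from the paper's construction:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def spectrally_correct(base_coeffs, eigs):
    """Given Chebyshev coefficients of a base polynomial p0(x) ~ 1/x,
    solve a K x K linear system so the corrected polynomial interpolates
    1/x exactly at the K known eigenvalues, without raising the degree.
    Assumes deg(p0) >= 2K - 1 so the odd correction basis fits."""
    eigs = np.asarray(eigs, dtype=float)
    K = len(eigs)
    # Residual of the base polynomial at the known eigenvalues.
    r = 1.0 / eigs - C.chebval(eigs, base_coeffs)
    # Correction basis: odd Chebyshev polynomials T_1, T_3, ..., T_{2K-1}.
    M = np.empty((K, K))
    for j in range(K):
        cj = np.zeros(2 * j + 2)
        cj[2 * j + 1] = 1.0          # coefficient vector of T_{2j+1}
        M[:, j] = C.chebval(eigs, cj)
    c = np.linalg.solve(M, r)        # the K x K linear system
    corrected = np.array(base_coeffs, dtype=float)
    for j in range(K):
        corrected[2 * j + 1] += c[j]
    return corrected
```

After correction, evaluating the polynomial at the $K$ known eigenvalues reproduces $1/x$ to machine precision, while its behavior between eigenvalues stays close to the base approximation.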
Overflow-Safe Polylog-Time Parallel Minimum-Weight Perfect Matching Decoder: Toward Experimental Demonstration
This paper develops an improved algorithm for quantum error correction that can decode errors much faster than existing methods by solving the minimum-weight perfect matching problem in polylogarithmic time rather than polynomial time. The key innovation is using a truncated polynomial ring framework that prevents numerical overflow issues and reduces memory requirements by over 99.9% while maintaining the speed advantage.
Key Contributions
- Development of overflow-safe polylog-time parallel MWPM decoder using truncated polynomial ring framework
- Reduction of arithmetic bit length requirements by over 99.9% while preserving polylogarithmic runtime scaling
- Hardware-friendly implementation using only bitwise XOR and shift operations
View Full Abstract
Fault-tolerant quantum computation (FTQC) requires fast and accurate decoding of quantum errors, which is often formulated as a minimum-weight perfect matching (MWPM) problem. A determinant-based approach has been proposed as a promising method to surpass the conventional polynomial runtime of MWPM decoding via the blossom algorithm, asymptotically achieving polylogarithmic parallel runtime. However, the existing approach requires an impractically large bit length to represent intermediate values during the computation of the matrix determinant; moreover, when implemented on a finite-bit machine, the algorithm cannot detect overflow, and therefore, the mathematical correctness of such algorithms cannot be guaranteed. In this work, we address these issues by presenting a polylog-time MWPM decoder that detects overflow in finite-bit representations by employing an algebraic framework over a truncated polynomial ring. Within this framework, all arithmetic operations are implemented using bitwise XOR and shift operations, enabling efficient and hardware-friendly implementation. Furthermore, with algorithmic optimizations tailored to the structure of the determinant-based approach, we reduce the arithmetic bit length required to represent intermediate values in the determinant computation by more than $99.9\%$, while preserving its polylogarithmic runtime scaling. These results open the possibility of a proof-of-principle demonstration of the polylog-time MWPM decoding in the early FTQC regime.
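The "bitwise XOR and shift" arithmetic over a truncated polynomial ring is easy to illustrate: a polynomial over GF(2) packs naturally into an integer (bit $i$ holds the coefficient of $x^i$), addition is XOR, and multiplication modulo $x^n$ needs only shifts and XORs. This is a generic sketch of arithmetic in GF(2)[x]/(x^n), not the paper's decoder:

```python
def gf2x_add(a, b):
    """Addition in GF(2)[x] is coefficient-wise XOR."""
    return a ^ b

def gf2x_mul_trunc(a, b, n):
    """Multiply two GF(2)[x] polynomials in the ring GF(2)[x]/(x^n),
    i.e., truncating at degree < n, using only shifts and XOR."""
    result = 0
    i = 0
    while b >> i:
        if (b >> i) & 1:          # b has an x^i term,
            result ^= a << i      # so add a * x^i (shift, then XOR)
        i += 1
    return result & ((1 << n) - 1)  # truncate: drop x^n and above
```

For example, (1 + x)(1 + x) = 1 + x^2 in GF(2)[x] because the cross terms cancel, and any product term of degree at least n simply vanishes in the truncated ring.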
Resource-Efficient Emulation of Majorana Zero Mode Braiding on a Superconducting Trijunction
This paper presents a more efficient method for simulating Majorana zero modes (exotic quantum particles) on quantum computers, specifically focusing on braiding operations that could enable fault-tolerant quantum computing. The authors develop direct braiding operators that reduce the computational overhead compared to previous simulation approaches that required very deep quantum circuits.
Key Contributions
- Development of resource-efficient direct braiding operators for MZM simulation
- Generalization of the method to extended trijunction architectures based on Kitaev chains
View Full Abstract
Topological superconductivity could host quasiparticles that are key candidates for fault-tolerant quantum computation due to their immunity to noise and the non-Abelian exchange statistics they obey. For example, in the case of Majorana Zero Modes (MZM), braiding enables two topologically protected quantum gates. While their direct manipulation in solid-state systems remains experimentally challenging, digital emulation of MZM behavior has provided insight as well as a deeper understanding of controlling these topological quantum systems. This emulation is typically accomplished by mapping the topological and trivial phases of a Majorana system to ferromagnetic and paramagnetic Hamiltonians of a spin-glass model. This approach usually relies on adiabatic evolution of superconducting Hamiltonians, which requires circuits of very large depth. In this work, we present a resource-efficient method to emulate MZM braiding in a trijunction geometry using a quantum processor. We introduce direct braiding operators which simulate the evolution more efficiently, reducing the quantum gate overhead. We then further generalize this method to emulate braiding operations in extended trijunction architectures based on Kitaev chains.
Mitigating many-body quantum crosstalk with tensor-network robust control
This paper develops a method to suppress quantum crosstalk in large quantum systems by combining tensor network simulations with robust control algorithms. The approach successfully designs high-fidelity quantum operations for up to 50 qubits, achieving order-of-magnitude improvements in performance when unwanted interactions between neighboring qubits are present.
Key Contributions
- Development of tensor-network based robust control method that overcomes exponential scaling limitations
- Demonstration of order-of-magnitude fidelity improvements for large-scale quantum operations up to 50 qubits in presence of crosstalk
- Efficient random sampling technique for noise ensembles combined with GRAPE algorithm for practical implementation
View Full Abstract
Quantum crosstalk poses a major challenge to scaling up quantum computations as its strength is typically unknown and its effect accumulates exponentially as system size grows. Here, we show that many-body robust control can be utilized to suppress unwanted couplings during multi-qubit gate operations and state preparation. By combining tensor network simulations with the GRAPE algorithm, and leveraging an efficient random sampling over noise ensembles, our method overcomes the exponential scaling of the Hilbert space. We demonstrate its effectiveness for designing control solutions for high-fidelity implementations of parallel X and CNOT gates on a chain of 50 qubits, and for realizing a 30-qubit GHZ state and the ground state of a 20-qubit Heisenberg model. In the presence of many-body quantum crosstalk due to parasitic interaction between neighboring qubits, robust control results in order-of-magnitude improvement in fidelity for large system sizes. These findings pave the way for more reliable operations on near-term quantum processors.
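For readers unfamiliar with GRAPE, the basic optimization loop the paper builds on can be sketched for a single qubit. This toy uses finite-difference gradients and no tensor networks or noise ensembles (those are the paper's contribution and out of scope here); all model parameters are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def propagate(amps, dt=0.1, drift=0.0):
    """Piecewise-constant evolution: U = prod_k exp(-i(drift*Z + a_k*X)*dt)."""
    U = np.eye(2, dtype=complex)
    for a in amps:
        U = expm(-1j * (drift * SZ + a * SX) * dt) @ U
    return U

def gate_fidelity(amps, target):
    """Phase-insensitive gate fidelity |Tr(V^dag U)|^2 / d^2."""
    return abs(np.trace(target.conj().T @ propagate(amps))) ** 2 / 4

def grape(target, n_slices=10, iters=200, lr=0.5, eps=1e-6):
    """GRAPE loop: gradient ascent on fidelity over pulse amplitudes."""
    amps = 0.1 * np.ones(n_slices)        # flat initial pulse
    for _ in range(iters):
        f0 = gate_fidelity(amps, target)
        grad = np.zeros(n_slices)
        for k in range(n_slices):         # forward-difference gradient
            bumped = amps.copy()
            bumped[k] += eps
            grad[k] = (gate_fidelity(bumped, target) - f0) / eps
        amps += lr * grad                 # ascend toward higher fidelity
    return amps
```

Running `grape(SX)` converges to a pulse implementing an X gate to high fidelity in this noiseless toy model; the paper's contribution is making this kind of loop tractable and noise-robust for ~50-qubit many-body systems.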
Quantum Lego Power-up: Designing Transversal Gates with Tensor Networks
This paper presents a new approach using tensor networks and 'quantum lego' formalism to systematically design quantum error-correcting codes that support transversal gates, which are the simplest fault-tolerant quantum gates. The method allows construction of codes with addressable non-Clifford gates like T gates and multi-qubit gates, overcoming limitations of traditional stabilizer code constructions.
Key Contributions
- Development of tensor network framework for systematic construction of quantum error-correcting codes with transversal gates
- Construction of new finite-rate code families supporting non-Clifford transversal gates including T, CCZ, and other complex gates
- Demonstration of addressable transversal gates in holographic codes, reducing overhead for universal fault-tolerant computation
View Full Abstract
Transversal gates are the simplest form of fault-tolerant gates and are relatively easy to implement in practice. Yet designing codes that support useful transversal operations -- especially non-Clifford or addressable gates -- remains difficult within the stabilizer formalism or CSS constructions alone. We show that these limitations can be overcome using tensor-network frameworks such as the quantum lego formalism, where transversal gates naturally appear as global or localized symmetries. Within the quantum lego formalism, small codes carrying desirable symmetries can be "glued" into larger ones, with operator-flow rules guiding how logical symmetries are preserved. This approach enables the systematic construction of codes with addressable transversal single- and multi-qubit gates targeting specific logical qubits regardless of whether the gate is Clifford or not. As a proof of principle, we build new finite-rate code families that support strongly transversal $T$, $CCZ$, $SH$, and Gottesman's $K_3$ gates, structures that are challenging to realize with conventional methods. We further construct holographic and fractal-like codes that admit addressable transversal inter-, meso-, and intra-block $T$, $CS$, and $C^\ell Z$ gates. As a corollary, we demonstrate that the heterogeneous holographic Steane-Reed-Muller black hole code also supports fully addressable transversal inter- and intra-block $CZ$ gates, significantly lowering the overhead for universal fault-tolerant computation.
Generalised All-Optical Cat Correction
This paper develops improved error correction methods for quantum cat codes using all-optical techniques, showing that higher-order cat codes can dramatically reduce the number of correction iterations needed while using more photons per correction.
Key Contributions
- Generalized all-optical telecorrection protocol for higher-order cat codes
- Demonstrated 70x reduction in correction iterations for third-order vs first-order cat codes
- Introduced probabilistic scheme for correcting state deformation with basis-changing capability
View Full Abstract
We have generalised an all-optical telecorrection protocol for the higher orders of the cat code, and show that with these higher orders we can achieve target performance at substantially reduced iteration counts at the cost of a higher mean photon-number. We also introduce a probabilistic scheme for correcting deformation of the state, which highlights two interesting abilities of telecorrection: to encode new sets of transformations, and to change the basis of the code. We find that for a target channel fidelity of $99.9\%$ over a channel with $1\text{ dB}$ of loss, a third-order cat code requires $70$ times fewer telecorrection iterations than a first-order one, at a cost of a $3.6$-fold increase in mean photon-number.
Entanglement-Assisted Codes Outside the Stabilizer Framework
This paper presents methods for constructing entanglement-assisted quantum error-correcting codes from arbitrary quantum codes by connecting them to erasure channel codes. The work extends beyond traditional stabilizer codes to include new types like permutation-invariant and XP-stabilizer codes.
Key Contributions
- Novel construction method for entanglement-assisted codes from arbitrary quantum codes via erasure channel association
- First examples of entanglement-assisted codes outside stabilizer and codeword-stabilized frameworks
- Compression techniques for degenerate codes with analysis of error-correction trade-offs
View Full Abstract
We show how entanglement-assisted codes can be constructed from arbitrary quantum codes by associating them with quantum codes for erasure channels. If a subset of physical qubits is correctable for an erasure error, then it naturally forms the receiver's share of a bipartite state that can be used for entanglement-assisted communications, both in the noiseless and noisy ebit error models. In the case of degenerate codes, we show that the receiver's share of the bipartite state can sometimes be compressed, at the cost of potentially reduced error-correction ability in the noisy ebit error model. We also give examples of permutation-invariant and XP-stabilizer entanglement-assisted codes, the first outside of the stabilizer and codeword-stabilized frameworks.
Scaling of silicon spin qubits under correlated noise
This paper studies how noise correlations between closely-packed silicon spin qubits affect quantum error correction by measuring noise in a five-qubit array. The researchers found that while magnetic field drifts create problematic correlations, charge noise correlations are manageable and compatible with fault-tolerant quantum computing.
Key Contributions
- Quantified spatial extent of noise correlations in silicon spin qubit arrays and identified two distinct sources: global magnetic drifts and localized charge noise
- Established that charge noise correlations are moderate and compatible with fault-tolerant quantum error correction with minimal overhead
View Full Abstract
The path to fault-tolerant quantum computing hinges on hardware that scales while remaining compatible with quantum error correction (QEC). Silicon spin qubits are a leading hardware candidate because they combine industrial fabrication compatibility with a nanoscale footprint that could accommodate millions of qubits on a chip. However, their suitability for QEC remains uncertain since spatially correlated noise naturally emerges from the resulting close proximity of qubits. These correlations increase the likelihood of simultaneous errors and erode the redundancy that QEC depends on. Here we quantify the spatial extent of noise correlations in a five-qubit silicon array and assess their impact on QEC. We identify two distinct sources of correlated noise: global magnetic field drifts that generate perfectly correlated fluctuations, and charge noise from two-level fluctuators that produces short-range correlations decaying within neighboring qubits. While magnetic drifts represent a critical correlated noise source that can compromise QEC, they can be mitigated. In contrast, the measured charge noise correlations are moderate, electrically tunable, and compatible with fault-tolerant operation with minimal qubit overhead. Our results establish quantitative benchmarks for correlated noise and clarify how such correlations impact the viability of quantum error correction in scalable qubit arrays.
QFlowNet: Fast, Diverse, and Efficient Unitary Synthesis with Generative Flow Networks
This paper introduces QFlowNet, a machine learning framework that combines Generative Flow Networks with Transformers to efficiently decompose quantum unitary operations into sequences of quantum gates, achieving 99.7% success rate on 3-qubit benchmarks while generating diverse solution sets.
Key Contributions
- Novel combination of GFlowNet and Transformers for unitary synthesis that generates diverse solutions rather than single policies
- Achievement of 99.7% success rate on 3-qubit unitary synthesis benchmark with efficient learning from sparse reward signals
View Full Abstract
Unitary Synthesis, the decomposition of a unitary matrix into a sequence of quantum gates, is a fundamental challenge in quantum compilation. Prevailing reinforcement learning (RL) approaches are often hampered by sparse reward signals, which necessitate complex reward shaping or long training times, and typically converge to a single policy, lacking solution diversity. In this work, we propose QFlowNet, a novel framework that learns efficiently from sparse signals by pairing a Generative Flow Network (GFlowNet) with Transformers. Our approach addresses two key challenges. First, the GFlowNet framework is fundamentally designed to learn a diverse policy that samples solutions proportional to their reward, overcoming the single-solution limitation of RL while offering faster inference than other generative models such as diffusion. Second, the Transformers act as a powerful encoder, capturing the non-local structure of unitary matrices and compressing a high-dimensional state into a dense latent representation for the policy network. Our agent achieves an overall success rate of 99.7% on a 3-qubit benchmark (lengths 1-12) and discovers a diverse set of compact circuits, establishing QFlowNet as an efficient and diverse paradigm for unitary synthesis.
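The GFlowNet property the abstract highlights, sampling decompositions in proportion to their reward rather than converging to one argmax policy, can be illustrated in miniature. This toy sketch (not QFlowNet itself; the learned sampler is replaced by exact enumeration over a tiny 1-qubit gate set) shows what the target distribution looks like:

```python
import itertools
import numpy as np

# Toy illustration of reward-proportional sampling over gate sequences.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "T": T}

def reward(seq, target):
    """Phase-invariant closeness |tr(U_seq^dag target)| / 2 of a gate sequence."""
    u = np.eye(2, dtype=complex)
    for name in seq:
        u = GATES[name] @ u
    return abs(np.trace(u.conj().T @ target)) / 2

target = H @ T @ H  # some fixed 1-qubit target unitary
seqs = [s for n in range(1, 4) for s in itertools.product("HT", repeat=n)]
rewards = [reward(s, target) ** 8 for s in seqs]  # sharpened reward R(x)
Z = sum(rewards)
probs = [r / Z for r in rewards]  # GFlowNet's target distribution: p(x) ~ R(x)

best = max(zip(seqs, rewards), key=lambda x: x[1])
print(best[0])  # the exact decomposition ('H', 'T', 'H') carries the top weight
```

A trained GFlowNet approximates `probs` without enumerating `seqs`, which is what makes the approach viable beyond toy sizes.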
Ultra-low loss piezo-optomechanical low-confinement silicon nitride platform for visible wavelength quantum photonic circuits
This paper demonstrates an ultra-low loss silicon nitride photonic platform that combines excellent passive optical properties (0.026 dB/cm loss) with active control via piezo-optomechanical actuation, enabling scalable quantum photonic circuits that operate at visible wavelengths with low power consumption and fast reconfiguration.
Key Contributions
- Achieved ultra-low propagation loss of 0.026 dB/cm at 780 nm in a low-confinement silicon nitride platform
- Demonstrated piezo-optomechanical phase shifters with MHz bandwidth and 2.8 V·m voltage-length product
- Combined passive and active properties to enable scalable visible-wavelength quantum photonic circuits
View Full Abstract
The stringent demands of photonic quantum computing protocols motivate photonic integrated circuit (PIC) platforms with passive optical properties such as extremely low losses and correspondingly large circuit depths, as well as active optical properties such as high reconfiguration rates, low power dissipation, and minimal crosstalk. At the same time, many quantum photonic resource state generators, such as single-photon sources and quantum memories, require operation in the visible wavelength range. These requirements make the passive optical properties of CMOS-fabricated, ultralow-loss, low-confinement silicon nitride waveguides especially attractive. However, the conventional active properties of these systems based on thermo-optic modulation are plagued by high levels of crosstalk, slow modulation rates, and high power dissipation. Although there have been recent demonstrations of CMOS-fabricated, visible wavelength, piezo-optomechanical PICs that solve the above challenges associated with implementing active functionality, these have made use of high-confinement waveguides with currently demonstrated losses of order $0.3$-$1~\mathrm{dB/cm}$, precluding circuit depths required for scalable quantum algorithms. Here, we demonstrate that combining piezo-optomechanical actuation with a low-confinement, ultra-low loss silicon nitride platform addresses the scalability challenge while enabling high-performance active functionality at visible wavelengths. This platform achieves a propagation loss of $0.026~\mathrm{dB/cm}$ at $780~\mathrm{nm}$, modulation bandwidths in the MHz range, a phase-shifter voltage-length product ($V_\pi L$) of approximately $2.8~\mathrm{V}\cdot\mathrm{m}$, and negligible hysteresis. We further demonstrate reconfigurable Mach-Zehnder interferometers based on spiral phase shifters with 0.63 dB loss per phase shifter.
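Some back-of-the-envelope conversions from the reported figures (0.026 dB/cm propagation loss and a voltage-length product of about 2.8 V·m); the lengths chosen below are illustrative, not devices from the paper:

```python
# Conversions only; LOSS_DB_PER_CM and VPI_L are the paper's reported figures,
# the lengths are illustrative.
LOSS_DB_PER_CM = 0.026
VPI_L = 2.8  # V·m

def total_loss_db(length_cm):
    return LOSS_DB_PER_CM * length_cm

def v_pi(length_m):
    """Half-wave voltage of a phase shifter of the given length."""
    return VPI_L / length_m

print(round(total_loss_db(100), 2))      # 2.6 dB of propagation loss for a 1 m spiral
print(round(v_pi(0.1), 1))               # 28.0 V half-wave voltage for a 10 cm shifter
print(round(0.63 / LOSS_DB_PER_CM, 1))   # 24.2 cm: spiral length consistent with 0.63 dB/shifter
```

The trade is visible directly: longer spirals lower the drive voltage but add propagation loss, which is why the ultra-low-loss waveguide matters for the phase shifters.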
Steering paths mid-flight for fault-tolerance in measurement-based holonomic gates
This paper develops a fault-tolerant framework for implementing holonomic quantum gates using continuous measurements and real-time feedback. The approach can correct errors mid-computation and relaxes strict timing requirements, enabling faster and more robust quantum gate operations.
Key Contributions
- Fault-tolerant framework for measurement-based holonomic gates with real-time error correction
- Method to suppress non-Markovian decoherence through quantum Zeno effect
- Protocol for correcting measurement-induced errors from non-adiabatic effects
- Relaxation of adiabaticity requirements enabling faster gate implementation
View Full Abstract
Continuous measurement-based holonomic quantum computation provides a route to universal logical computation in quantum error correcting codes. We introduce a fault-tolerant framework for implementing measurement-based holonomic gates that leverages continuous measurements with real-time feedback. We show that non-Markovian decoherence is intrinsically suppressed through the quantum Zeno effect, while Markovian errors are identified by the decoding of measurement records to reveal the rotated syndrome subspace populated during the evolution. This information enables steering holonomic paths mid-flight to ensure that the final evolution realizes the target logical gate. We further demonstrate that non-adiabatic effects give rise to measurement-induced errors, and we show that these can also be corrected by an analogous protocol. This approach relaxes the stringent adiabaticity requirement and enables faster implementation of holonomic gates.
Constant-Time Surgery on 2D Hypergraph Product Codes with Near-Constant Space Overhead
This paper develops new techniques for performing fault-tolerant quantum computations on quantum error-correcting codes that dramatically reduce the time overhead from O(d) to constant time O(1) while maintaining very low space requirements. The work focuses on improving 'code surgery' methods that allow logical operations on quantum low-density parity-check codes.
Key Contributions
- Development of constant-time surgery gadgets for 2D hypergraph product codes that achieve O(1) time overhead
- Demonstration that performing d surgery operations in O(d) time maintains fault tolerance through amortization
View Full Abstract
Generalized code surgery is a versatile and low-overhead technique for performing fault-tolerant computation on quantum low-density parity-check (qLDPC) codes. In many settings, surgery exhibits practical space overheads, while its time overhead remains a bottleneck at $O(d)$ syndrome rounds per operation. In this work, we construct surgery gadgets that perform parallel logical measurements on 2D hypergraph product codes in constant time overhead ($O(1)$) and near-constant space overhead ($\tilde{O}(1)$). The reduced time overhead is a result of amortization, as we show, following the formulation by Cowtan et al. (arXiv:2510.14895), that performing $d$ surgery operations in $O(d)$ time is fault tolerant. Our gadgets combine the strengths of different approaches to fault-tolerant logical operations: they partially retain the flexibility of surgery while achieving overheads comparable to transversal gates. Consequently, they are well-suited for near-term experimental realization and demonstrate new possibilities in the design of gadgets for fast logical computation.
Obstacles to Continuous Quantum Error Correction via Parity Measurements
This paper identifies fundamental problems with continuous quantum error correction using parity measurements in circuit quantum electrodynamics platforms. The researchers show that approximating required three-body interactions with two-body couplings corrupts the logical quantum information, limiting practical implementation of continuous error correction.
Key Contributions
- Demonstrates that common parity-measurement protocols in circuit QED corrupt logical information during continuous operation
- Identifies that the failure mechanism stems from approximating three-body interactions with two-body couplings to meters
- Proposes alternative approaches including native three-body interaction architectures and erasure-based encodings
View Full Abstract
Time-continuous quantum error correction, necessary to protect quantum information under time-dependent Hamiltonians, relies on weak continuous syndrome measurements. Implementing these measurements requires a continuous coupling among at least two qubits and a meter, a demanding requirement. We show that, under continuous operation, common parity-measurement protocols in the circuit quantum electrodynamics platform corrupt the logical information. The failure arises from approximating the three-body interaction by a sum of two-body couplings to the meter, which prevents simultaneous suppression of measurement backaction on the logical and error subspaces. We argue that the same mechanism applies more generally beyond the circuit quantum electrodynamics setting. Taken together, our results impose a practical limitation on continuous stabilizer quantum error correction and point to the viable alternatives -- architectures that realize native three-body interactions, or erasure-based encodings in which the error subspace need not be protected.
No More Hooks in the Surface Code: Distance-Preserving Syndrome Extraction for Arbitrary Layouts at Minimum Depth
This paper proposes a new method called ZX interleaving syndrome extraction for quantum error correction in surface codes that eliminates problematic 'hook errors' while maintaining minimum circuit depth. The technique preserves the full fault tolerance distance for any surface code layout, improving upon existing methods that either add circuit overhead or reduce error correction capability.
Key Contributions
- ZX interleaving syndrome extraction method that preserves full fault distance d for arbitrary surface code layouts at minimum depth
- Elimination of hook errors without additional circuit depth or simultaneous measurement/CNOT execution requirements
- Numerical validation showing full fault distance d achievement versus d-1 for existing minimum-depth approaches
View Full Abstract
Hook errors are a major challenge in implementing logical operations with the surface code, because they can reduce the fault distance below the code distance. This motivates syndrome-extraction circuits that suppress hook-error effects for the stabilizer layouts that appear during logical operations. However, the existing methods either increase circuit depth or require simultaneous execution of measurements and CNOT gates, both of which introduce additional overheads and degrade the threshold. We propose the ZX interleaving syndrome extraction, which preserves the full fault distance $d$ for any surface-code layout with regular stabilizer tiles at minimum depth, i.e., four layers of CNOT gates, without requiring additional circuit depth or simultaneous execution of measurements and CNOT gates. The key idea is to interleave the Z and X stabilizer tiles so that hook-error edges in the decoding graph are shortened and effectively eliminated. Numerical simulations under uniform depolarizing noise for memory and lattice-surgery experiments confirm that the proposed method achieves a full fault distance of $d$, whereas the best existing minimum-depth approach achieves $d-1$. Since the full fault distance is achievable for any regular tiling layout of the surface code, the proposed method may serve as an indispensable technique for practical fault-tolerant quantum computation.
Sustaining high-fidelity quantum logic in neutral-atom circuits via mid-circuit operations
This paper demonstrates a neutral-atom quantum computing system that maintains high gate fidelities (~99.8%) across multiple operational rounds by using mid-circuit cooling and qubit reinitialization to counteract atom loss and heating. The approach enables sustained high-performance operation needed for fault-tolerant quantum error correction.
Key Contributions
- Demonstration of 99.81% fidelity two-qubit gates with erasure detection in neutral atoms
- In-circuit Raman sideband cooling and qubit re-initialization maintaining ~99.8% fidelity across multiple rounds
- Hardware-efficient mid-circuit operations framework enabling sustainable deep quantum circuits
View Full Abstract
The realization of fault-tolerant quantum computation hinges on the ability to execute deep quantum circuits while maintaining gate fidelities consistently above error-correction thresholds. Although neutral-atom arrays have recently demonstrated high-fidelity two-qubit gates and early-stage logical quantum processors, sustaining such high performance across deep, repetitive circuits remains a formidable challenge due to cumulative motional heating and atom loss. Here we demonstrate a sustainable neutral-atom framework that overcomes these limitations by integrating a suite of hardware-efficient mid-circuit operations. We report a two-qubit controlled logic gate with a raw fidelity of 99.60(1)%, which is further increased to a fidelity of 99.81(1)% via non-destructive erasure detection. Crucially, by implementing in-circuit Raman sideband cooling and qubit re-initialization, we demonstrate that gate fidelities can be maintained at the ~99.8% level across multiple operational rounds without observable degradation. By actively managing the internal and motional entropy of the system mid-stream, our in-situ refreshable architecture provides a critical pathway for executing the repeated syndrome-extraction cycles required for large-scale, continuous quantum error correction.
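Why *sustained* fidelity matters can be seen with a crude independent-error model (an illustration, not the paper's analysis): whole-circuit fidelity decays exponentially with gate count, so any drift below the ~99.8% per-gate level compounds quickly in deep circuits.

```python
# Crude independent-error model: F_circuit = F_gate ** n_gates.
def circuit_fidelity(gate_fidelity, n_gates):
    return gate_fidelity ** n_gates

for n in (100, 500, 1000):
    print(n, round(circuit_fidelity(0.998, n), 3))
# 100 -> 0.819, 500 -> 0.368, 1000 -> 0.135
```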
QuMeld: A Modular Framework for Benchmarking Qubit Mapping Algorithms
This paper presents QuMeld, an open-source software framework designed to systematically evaluate and compare different algorithms for mapping logical qubits to physical qubits on quantum computers. The framework supports multiple mapping algorithms, quantum computer topologies, and evaluation metrics in a modular design that allows for future extensions.
Key Contributions
- Development of unified benchmarking framework for qubit mapping algorithms
- Modular design supporting six algorithms and sixteen quantum computer topologies with extensibility for future additions
View Full Abstract
The qubit mapping problem, the task of assigning the logical qubits of a circuit to the physical qubits of a quantum computer, is a central challenge in quantum computing. Due to the diversity of quantum computer topologies and circuits, numerous approaches to this problem exist. Finding the best solution for a specific combination of topology and circuit remains difficult, and no unified framework currently exists for systematically evaluating and comparing qubit mapping algorithms across different cases. We present QuMeld, an open-source framework designed to address this gap. The framework currently supports six qubit mapping algorithms, sixteen quantum computer topologies, and multiple evaluation metrics. Its modular design allows integration of new mapping algorithms, quantum circuits, hardware topologies, and evaluation metrics, ensuring extensibility and adaptability to future developments.
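A hypothetical sketch of the kind of metric such a framework can unify: count the two-qubit gates whose operands land on non-adjacent physical qubits under a given mapping (each such gate incurs routing/SWAP overhead). The function names and the toy topology are illustrative, not QuMeld's actual API.

```python
# Toy mapping-quality metric: gates needing routing on a coupling graph.
def non_adjacent_gates(circuit, edges, mapping):
    """circuit: list of (logical_q1, logical_q2) two-qubit gates.
    edges: set of frozensets of physical qubits (coupling graph).
    mapping: dict logical qubit -> physical qubit."""
    return sum(
        1 for a, b in circuit
        if frozenset((mapping[a], mapping[b])) not in edges
    )

# 4-qubit line topology 0-1-2-3 and a small circuit.
line = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
circuit = [(0, 1), (0, 2), (1, 3), (2, 3)]
trivial = {q: q for q in range(4)}
print(non_adjacent_gates(circuit, line, trivial))  # 2: gates (0,2) and (1,3) need routing
```

Comparing such counts across mapping algorithms and topologies is exactly the kind of experiment a benchmarking framework automates.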
Estimating the performance boundary of Gottesman-Kitaev-Preskill codes and number-phase codes
This paper compares two types of quantum error-correcting codes that use light particles (bosonic codes) - GKP and number-phase codes - to determine which performs better under different noise conditions. The researchers found that the choice between codes depends critically on the ratio of photon loss to dephasing noise, with a clear crossover point when dephasing is about 100 times weaker than loss.
Key Contributions
- Established quantitative performance boundary between GKP and number-phase codes under photon loss and dephasing noise
- Developed practical methodology for benchmarking and optimizing bosonic quantum error-correcting codes
- Identified sharp crossover regime where dephasing strength is approximately two orders of magnitude smaller than loss strength
View Full Abstract
Bosonic quantum error-correcting codes encode logical information in a harmonic oscillator, with the Gottesman-Kitaev-Preskill (GKP) and number-phase (NP) codes representing two fundamentally different encoding paradigms. Although both have been extensively studied, it remains unclear under what physical noise conditions (including photon loss and dephasing) one encoding intrinsically outperforms the other. Here we estimate a quantitative performance boundary between GKP and NP codes under general photon loss-dephasing noise. By optimizing code parameters within each encoding family, we identify the noise regimes in which each code exhibits a fundamental advantage. In particular, we find that the crossover occurs when the dephasing strength is approximately two orders of magnitude smaller than the loss strength, revealing a sharp separation between operational regimes. Beyond this specific comparison, our work establishes a practical and extensible methodology for benchmarking bosonic codes and optimizing their parameters, providing concrete guidance for the experimental selection and deployment of bosonic encodings in realistic noise environments.
A frequency-agile microwave-optical interface for superconducting qubits
This paper demonstrates a frequency-agile interface that converts microwave signals from superconducting qubits to optical signals for transmission over fiber optic cables. The system overcomes bandwidth limitations by cascading a microwave-to-microwave frequency converter with a microwave-to-optical transducer, enabling quantum communication between distant superconducting quantum processors.
Key Contributions
- Development of a frequency-agile microwave-optical interface with continuous frequency coverage from 5.0 to 8.5 GHz
- Demonstration of optical readout of a superconducting qubit detuned by 1.7 GHz from the native transducer frequency
- Cascaded M2M-M2O architecture enabling heterogeneous superconducting device networking
View Full Abstract
Superconducting quantum processors operate at microwave frequencies in millikelvin environments, making it challenging to interconnect distant nodes using conventional microwave wiring. Coherent microwave-to-optical (M2O) transduction enables superconducting quantum networks by interfacing itinerant microwave photons with low-loss optical fiber. However, many state-of-the-art transducers provide efficient conversion only over a narrow frequency span, complicating deployment with heterogeneous superconducting devices that are detuned by gigahertz-scale offsets. Here we demonstrate a frequency-agile microwave-optical interface that overcomes this bandwidth mismatch by cascading an electro-optic M2O transducer with a multimode microwave-to-microwave (M2M) frequency converter, with in situ tunability of the microwave resonances in both stages. Using this architecture, we realize continuous frequency coverage from 5.0 to 8.5 GHz within a single system. As an application relevant to superconducting-qubit networking, we use the cascaded M2M-M2O interface to optically read out a superconducting qubit whose readout resonator is detuned by 1.7 GHz from the native M2O microwave resonance, demonstrating a scalable route toward fiber-linked superconducting quantum nodes.
Optimized Compilation for Distributed Quantum Computing
This paper develops a greedy algorithm to optimize quantum circuit compilation for distributed quantum computing by minimizing the use of Einstein-Podolsky-Rosen (EPR) pairs. The approach groups non-local gates to share EPR pairs and reorders commutative gates to reduce circuit depth and resource consumption.
Key Contributions
- Greedy algorithm for optimizing EPR pair usage in distributed quantum circuits
- Circuit compilation strategy that groups non-local gates and reorders commutative operations
View Full Abstract
In many practical applications, quantum algorithms require far more qubits than are available with current noisy intermediate-scale quantum processors. Distributed quantum computing (DQC) is considered a scalable approach to increasing the number of available qubits for computational tasks. In the DQC setting, a quantum compiler must find the best partitioning for the quantum algorithm and then schedule non-local operations intelligently to optimize the consumption of Einstein-Podolsky-Rosen (EPR) pairs. In this work, the focus is on minimizing the use of EPR pairs when the circuit structure allows for multiple non-local gates to utilize a single TeleGate operation. This is achieved by using a greedy algorithm that explores the circuit and groups together the gates that could share an EPR pair while also changing the order of commutative gates when necessary. With this preliminary pass, the compiled circuits show reduced depth and EPR usage. Since the quality of each EPR pair quickly deteriorates, the number of non-local gates using the same EPR pair should also be bounded. This means that, depending on the features of the target quantum network, the user can achieve different levels of optimization. Here, it is shown that this approach brings benefits even while assuming a low EPR pair lifetime.
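A minimal sketch of the grouping idea (not the paper's exact algorithm, which also reorders commutative gates and respects TeleGate mergeability constraints): walk the gate list, reuse the open EPR pair while consecutive non-local gates cross the same processor link, and cap the gates charged to one pair to model EPR decay.

```python
# Greedy EPR-pair accounting for a partitioned circuit (simplified sketch).
def group_nonlocal_gates(gates, partition, max_per_epr=3):
    """gates: list of (q1, q2); partition: dict qubit -> processor id.
    Returns the number of EPR pairs consumed under greedy grouping."""
    epr_pairs = 0
    current_link = None   # processor pair served by the open EPR pair
    used = 0              # gates already charged to the open pair
    for a, b in gates:
        pa, pb = partition[a], partition[b]
        if pa == pb:
            continue  # local gate: no EPR pair needed
        link = tuple(sorted((pa, pb)))
        if link == current_link and used < max_per_epr:
            used += 1         # reuse the open EPR pair
        else:
            epr_pairs += 1    # open a fresh EPR pair
            current_link, used = link, 1
    return epr_pairs

partition = {0: "A", 1: "A", 2: "B", 3: "B"}
gates = [(0, 2), (1, 3), (0, 3), (0, 1), (1, 2)]
print(group_nonlocal_gates(gates, partition))  # 2: the cap forces a second pair
```

Raising `max_per_epr` (a longer effective EPR lifetime) lets the same circuit run on a single pair, which mirrors the paper's point that the achievable optimization depends on the target network's EPR quality.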
3D Integrated Embedded Filters for Superconducting Quantum Circuits
This paper presents a new design for microwave filters used in superconducting quantum computers, where the filters are embedded in printed circuit boards rather than on the qubit chip itself. This approach improves qubit isolation and enables better scaling to larger quantum processors while maintaining high qubit coherence times.
Key Contributions
- Novel off-chip PCB-embedded Purcell filter design that removes filter components from qubit substrate
- Demonstration of thousand-fold improvement in qubit isolation with multiplexed readout capability for up to 9 resonators
- Experimental validation showing compatibility with high-coherence qubits and scalability to large qubit counts
View Full Abstract
Microwave filtering for superconducting qubits is a key element of quantum computing technology, enabling high coherence and fast state detection. This work presents the design and implementation of novel microwave Purcell filters for superconducting quantum circuits, integrated within a multilayer printed circuit board (PCB). The off-chip design removes all filter components from the qubit substrate, reducing device complexity, improving layout footprint and allowing better scalability to large qubit counts. Each embedded filter can couple up to nine readout resonators, enabling efficient multiplexed readout. Electromagnetic simulations of the filter predict a thousand-fold improvement in qubit isolation from the readout port. The design was experimentally validated under cryogenic conditions in conjunction with a 35-qubit device, demonstrating compatibility of the PCB-based filter with high-coherence superconducting qubits. The comparison of the measured qubit median T1 of 84 $\mu$s with the expected radiative limit from electromagnetic simulations validated the presence of Purcell filtering in the system.
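For scale, the textbook Purcell-limit estimate $\Gamma \approx (g/\Delta)^2\,\kappa$ shows how filter isolation moves the radiative T1 limit. This is not a formula or numbers taken from the paper; all parameter values below are hypothetical, chosen only to illustrate the effect of a thousand-fold isolation improvement.

```python
import math

# Textbook Purcell estimate with hypothetical parameters (not from the paper).
def purcell_t1_us(g_mhz, delta_mhz, kappa_mhz, isolation=1.0):
    """Radiative T1 limit in microseconds from Gamma = (g/Delta)^2 * kappa,
    with kappa suppressed by the filter's isolation factor."""
    gamma = (g_mhz / delta_mhz) ** 2 * (2 * math.pi * kappa_mhz * 1e6) / isolation
    return 1e6 / gamma  # rate in 1/s -> T1 in microseconds

bare = purcell_t1_us(g_mhz=100, delta_mhz=1500, kappa_mhz=5)
filtered = purcell_t1_us(g_mhz=100, delta_mhz=1500, kappa_mhz=5, isolation=1000)
print(round(bare, 1), round(filtered / 1000, 1))  # ~7.2 us unfiltered vs ~7.2 ms filtered
```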
Characterization of Josephson Junction Aging and Annealing Under Different Environments
This paper studies how Josephson junctions used in quantum computers degrade over time under different storage conditions and how thermal annealing can restore their properties. The researchers found that aging follows predictable patterns and can be controlled through proper storage environments and annealing procedures.
Key Contributions
- Characterized aging behavior of Josephson junctions following logarithmic curves with fabrication-dependent amplitude and storage-dependent speed
- Demonstrated that controlled annealing can restore junction properties with environment-dependent effects on resistance
View Full Abstract
Understanding the aging behavior of Josephson junctions and the effect of annealing on junction resistances is important in building large-scale superconducting quantum processors. Here we study the effects of aging of Josephson junctions under different storage conditions from immediately after fabrication up to 2 to 3 months. We find that aging follows a logarithmic curve, with the aging amplitude mainly determined by fabrication conditions and the aging speed determined by storage conditions. Junctions stored at ambient laboratory conditions aged faster than junctions stored in a nitrogen atmosphere or vacuum, with the aging speed changing appreciably when the storage condition changed. We also compared the effect of thermal annealing under a nitrogen environment with annealing under ambient conditions up to 250$^\circ$C. We find that under a nitrogen environment, the resistances decreased at all temperatures tested, while under an ambient environment the resistances increased at 200$^\circ$C and decreased at 250$^\circ$C instead. We were unable to decrease the resistance below the initial-time resistance, suggesting a lower limit on the range of resistance tuning.
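One common way to write the logarithmic aging law described above is $R(t) = R_0\,(1 + A\ln(1 + t/t_0))$, with the amplitude $A$ set mainly by fabrication and the timescale $t_0$ by storage. This sketch assumes that functional form and uses illustrative parameter values, not data from the paper:

```python
import math

# Illustrative logarithmic aging curve; r0, amplitude, t0 are hypothetical.
def resistance_kohm(t_hours, r0=8.0, amplitude=0.02, t0_hours=1.0):
    """R(t) = R0 * (1 + A * ln(1 + t/t0)); monotone, ever-slowing growth."""
    return r0 * (1.0 + amplitude * math.log(1.0 + t_hours / t0_hours))

for t in (0, 24, 24 * 30, 24 * 90):  # fresh, one day, one month, three months
    print(t, round(resistance_kohm(t), 3))
```

The shape matches the qualitative findings: most of the drift happens early, and changing the storage condition (here, `t0_hours`) rescales how fast the curve is traversed without changing its logarithmic form.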
Spin stiffness and resilience phase transition in a noisy toric-rotor code
This paper studies how well the toric-rotor code (a type of quantum error-correcting code) can protect quantum information from phase-shift noise. The researchers use mathematical connections to classical physics models to identify a critical noise threshold above which the code loses its protective properties.
Key Contributions
- Mapped the resilience properties of toric-rotor codes under noise to the Kosterlitz-Thouless phase transition in the classical XY model
- Developed a quantum formalism for spin stiffness that corresponds to gate fidelity in the logical subspace
- Identified a critical noise threshold (σc ≈ 0.89) for partial resilience in toric-rotor codes
- Provided mathematical framework using quantum partition functions for studying correctability in continuous-variable quantum codes
View Full Abstract
We use a quantum formalism for the partition function of the classical $XY$ model to identify a resilience phase transition in a noisy toric-rotor code. Specifically, we consider the toric-rotor code under phase-shift noise described by a von Mises probability distribution and show that the fidelity between the final state after noise and the initial state is proportional to the partition function of the $XY$ model. We map the temperature of the $XY$ model to the width of the noise in the toric-rotor code, such that a Kosterlitz--Thouless phase transition at a critical temperature $T_{c}$ corresponds to a mixed-state phase transition at a critical width $\sigma_c$. To characterize this phase transition, we develop a quantum formalism for the spin stiffness in the $XY$ model and show that it is mapped to the gate fidelity in the logical subspace of the toric-rotor code. In particular, we introduce a topological order parameter that characterizes the resilience of the toric-rotor code to decoherence within the logical subspace. We show that the logical subspace does not exhibit complete resilience to noise, which is a necessary condition for correctability. However, it exhibits partial resilience to noise for widths less than $\sigma_c\approx 0.89$, where the resilience order parameter takes values near $1$ and then drops to zero at $\sigma_c$. We also use our results to shed light on the correctability of toric-rotor codes in higher dimensions $d > 2$. Our work shows that the quantum formalism for partition functions provides a mathematically rigorous framework for studying correctability in continuous-variable quantum codes.
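For context (standard Kosterlitz-Thouless physics, not a derivation from this paper): the 2D $XY$ model's transition is located by the universal jump of the spin stiffness (helicity modulus), and Monte Carlo studies place it near $k_B T_c/J \approx 0.893$, numerically close to the reported critical width $\sigma_c \approx 0.89$, consistent with the paper's temperature-to-width mapping.

```latex
% Universal-jump criterion for the KT transition (standard result,
% not taken from this paper): the spin stiffness \Upsilon drops
% discontinuously to zero at T_c, with the jump fixed by
\Upsilon(T_c^{-}) \;=\; \frac{2 k_B T_c}{\pi},
\qquad \frac{k_B T_c}{J} \approx 0.893 \quad \text{(2D $XY$ model, Monte Carlo)}.
```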
Copy-cup Gates in Tensor Products of Group Algebra Codes
This paper develops quantum error-correcting codes with built-in constant-depth quantum gates (CZ and CCZ) by analyzing when classical group algebra codes can support specific mathematical structures called copy-cup gates. The researchers connect this problem to graph theory and provide concrete conditions for constructing these enhanced quantum codes.
Key Contributions
- Established conditions for classical group algebra codes to support copy-cup gates that enable constant-depth CZ and CCZ quantum gates
- Connected the copy-cup gate problem to perfect matching in graph theory
- Fully characterized conditions for 2- and 3-copy-cup gates in weight 4 group algebra codes
- Demonstrated that bivariate bicycle codes lack pre-orientation for copy-cup gates
View Full Abstract
We determine conditions on classical group algebra codes so that they have pre-orientation for cup products and copy-cup gates. This defines quantum codes that have constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates constructed via tensor products of classical group algebra codes, including hypergraph and balanced products. We show that determining the conditions relies on solving the perfect matching problem in graph theory. Conditions are fully determined for the 2- and 3-copy-cup gates, for group algebra codes up to weight 4, including for codes with odd check weight. These include the bivariate bicycle codes, which we show do not have the pre-orientation for either type of copy-cup gate. We show that abelian weight 4 group algebra codes satisfying the non-associative 3-copy-cup gate necessarily have a code distance of 2, whereas codes that satisfy conditions for the symmetric 3-copy-cup gate can have higher distances, and in fact also satisfy conditions for the 2-copy-cup gate. Finally we find examples of quantum codes from the product of abelian group algebra codes that have inter-code constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates.
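The reduction to perfect matching mentioned in the abstract can be illustrated with a tiny stdlib Python checker. This is a generic brute-force perfect-matching test for small graphs, not the construction used in the paper; the function name and graph encoding are our own.

```python
from functools import lru_cache

def has_perfect_matching(n, edges):
    """Decide whether an n-vertex graph admits a perfect matching by
    branching on the lowest unmatched vertex (fine for small graphs)."""
    if n % 2:
        return False  # odd vertex count: no perfect matching possible
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    @lru_cache(maxsize=None)
    def solve(unmatched):
        if not unmatched:
            return True
        rest = set(unmatched)
        u = min(rest)          # lowest unmatched vertex must be matched now
        rest.discard(u)
        for v in adj[u] & rest:
            if solve(frozenset(rest - {v})):
                return True
        return False

    return solve(frozenset(range(n)))
```

A 4-cycle admits a perfect matching, while a 3-vertex path or a 4-vertex star does not.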
Hyperbolic and Semi-Hyperbolic Floquet Codes for Photonic Quantum Computing
This paper develops new quantum error correcting codes called hyperbolic and semi-hyperbolic Floquet codes that are specifically designed for photonic quantum computing systems. The codes use only simple weight-2 measurements and are tested under various noise models, showing improved performance compared to surface codes for photon-mediated quantum computing applications.
Key Contributions
- Construction of new hyperbolic Floquet codes from {10,3} and {12,3} tessellations using the LINS algorithm
- Demonstration that these codes achieve better fault-tolerant performance than surface codes in photonic quantum computing with 2.2x larger fault-tolerant area while encoding 10 logical qubits
View Full Abstract
Tailoring error correcting codes to the structure of the physical noise can reduce the overhead of fault-tolerant quantum computation. Hyperbolic Floquet codes use only weight-2 measurements and can be implemented directly on hardware with native pair measurements. We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations via the Wythoff kaleidoscopic construction with the Low-Index Normal Subgroups (LINS) algorithm. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes. We evaluate these codes under four noise models: phenomenological, ancilla Entangling Measurement (EM3), Single-step Depolarizing EM3 (SDEM3), and erasure. Under phenomenological noise, specific-logical threshold crossings occur near $p_e \approx 0.3$--$0.5\%$ for $\{8,3\}$ ($k=6$--$56$) and $0.15$--$0.2\%$ for $\{10,3\}$ ($k=12$--$146$). EM3 ancilla noise yields a threshold of ${\sim}1.5\%$ for all three families. SDEM3 is a depolarizing noise model motivated by Majorana tetron architectures; fine-grained codes achieve thresholds of ${\sim}1.0$--$1.2\%$ for all three families. The erasure model captures detected photon loss on spin-optical links; fine-grained codes achieve erasure thresholds of ${\sim}8.5$--$9\%$ for $\{8,3\}$, ${\sim}7$--$8\%$ for $\{10,3\}$, and ${\sim}6.5$--$8\%$ for $\{12,3\}$. Photon loss is the dominant error source in photon-mediated quantum computing. Under the full three-parameter SPOQC-2 noise model, the $\{8,3\}$ codes achieve a 2D fault-tolerant area $2.2\times$ that of the surface code compiled to pair measurements while encoding $k = 10$ logical qubits. In a companion paper, we evaluate the same code families in a distributed setting.
Spin-Cat Qubit with Biased Noise in an Optical Tweezer Array
This paper demonstrates the implementation of spin-cat qubits using ytterbium-173 atoms with nuclear spin 5/2 in optical tweezers, showing how these qubits exhibit biased noise that favors dephasing errors over bit-flip errors. The researchers achieved single-qubit gate operations and characterized the noise properties, demonstrating the feasibility of using these qubits for bias-tailored quantum error correction codes.
Key Contributions
- Demonstration of single-qubit controls for spin-cat qubits in ytterbium-173 with nuclear spin I=5/2
- Characterization of biased noise in spin-cat qubits showing preference for dephasing errors over bit-flip errors
- Achievement of covariant SU(2) rotations and benchmarking of gate fidelities for bias-tailored quantum error correction
View Full Abstract
Bias-tailored quantum error correcting codes (QECCs) offer a higher error threshold than standard QECCs and have the potential to achieve lower logical errors with less space overhead. The spin-cat qubit, encoded in a large nuclear spin-$F$ system, is a promising candidate for bias-tailored QECCs. Yet its feasibility is hindered by the difficulty of performing fast covariant SU(2) rotation with arbitrary rotation angles for nuclear spins and by a lack of noise characterization for gate operations in neutral atom platforms. Here we demonstrate single-qubit controls of ${}^{173}\mathrm{Yb}$ spin-cat qubits with nuclear spin $I=5/2$ in an optical tweezer array. We implement a covariant SU(2) rotation and non-linear rotations by optical beams and achieve an averaged single-Clifford gate fidelity of $0.961_{-5}^{+5}$. The measurement of the coherence time and spin relaxation time shows that the idling error becomes increasingly biased toward dephasing errors as the magnitude of the encoded sublevel $|m_F|$ increases. Furthermore, we benchmark the noise bias of rank-preserving gates on spin-cat qubits, demonstrating a finite bias of $18_{-11}^{+132}$, in contrast to the case of the two-level system in ${}^{171}\mathrm{Yb}$, which shows no bias within the experimental uncertainty. Our work demonstrates the feasibility of spin-cat qubits for realizing bias-tailored QECCs, paving the way for achieving hardware-efficient quantum error correction.
A matching decoder for bivariate bicycle codes
This paper develops a new decoding algorithm for bivariate bicycle quantum error-correcting codes using minimum-weight perfect matching, introducing a 'cylinder trick' method that leverages code symmetries to efficiently find error corrections.
Key Contributions
- Development of matching-based decoder for bivariate bicycle codes using the 'cylinder trick' method
- Demonstration of improved decoder performance through augmentation with belief propagation and over-matching strategies
View Full Abstract
The discovery of new quantum error-correcting codes that encode several logical qubits into relatively few physical qubits motivates the development of efficient and accurate methods of decoding these systems. Here, we adopt the minimum-weight perfect matching algorithm, a subroutine invaluable to decoding topological codes, to decode bivariate bicycle codes. Using the equivalence of bivariate bicycle codes to copies of the toric code, we propose a method we call the 'cylinder trick' to rapidly find a correction using matching on code symmetries. We benchmark our decoder on the gross code family, cyclic hypergraph-product codes, generalized toric codes, and recently proposed directional codes, demonstrating the general applicability of our protocol. For a subset of these codes, we find that our decoder can be significantly improved by augmenting matching with strategies including belief propagation and 'over-matching', thus achieving performance competitive with state-of-the-art approaches.
The Road to Useful Quantum Computers
This paper provides a comprehensive overview of the current state of quantum computing development, examining the gap between existing quantum computer capabilities and the goal of achieving 'quantum utility' where quantum computers solve practically important problems. The authors analyze the key scientific and engineering challenges that must be overcome to build useful quantum computers.
Key Contributions
- Comprehensive assessment of current quantum computing capabilities versus requirements for quantum utility
- Identification and analysis of key scientific and engineering challenges blocking progress toward useful quantum computers
- Framework for tracking progress from current prototypes to quantum utility applications
View Full Abstract
Building a useful quantum computer is a grand science and engineering challenge, currently pursued intensely by teams around the world. In the 1980s, Richard Feynman and Yuri Manin observed independently that computers based on quantum mechanics might enable better simulations of quantum phenomena. Their vision remained an intellectual curiosity until Peter Shor published his famous quantum algorithm for integer factoring, and shortly thereafter a proof that errors in quantum computations can be corrected. Since then, quantum computing R&D has progressed rapidly, from small-scale experiments in university physics laboratories to well-funded industrial efforts and prototypes. Hype notwithstanding, quantum computers have yet to solve scientifically or practically important problems -- a target often called quantum utility. In this article, we describe the capabilities of contemporary quantum computers, compare them to the requirements of quantum utility, and illustrate how to track progress from today to utility. We highlight key science and engineering challenges on the road to quantum utility, touching on relevant aspects of our own research.
Computing with many encoded logical qubits beyond break-even
This paper demonstrates quantum error correction codes that encode many logical qubits and actually perform better than unencoded qubits, using up to 94 logical qubits on a 98-qubit trapped-ion quantum computer. The researchers achieved 'beyond break-even' performance where error correction improves rather than degrades computation quality.
Key Contributions
- First demonstration of beyond break-even performance with high-rate quantum error correction codes using up to 94 logical qubits
- Implementation of fault-tolerant operations including state preparation, measurement, and quantum simulation on the 98-qubit Quantinuum Helios processor
- Development of new encoded operation gadgets for iceberg QED and two-level concatenated iceberg QEC codes
View Full Abstract
High-rate quantum error correcting (QEC) codes encode many logical qubits in a given number of physical qubits, making them promising candidates for quantum computation. Implementing high-rate codes at a scale that both frustrates classical computing and improves performance by encoding requires both high fidelity gates and long-range qubit connectivity -- both of which are offered by trapped-ion quantum computers. Here, we demonstrate computations that outperform their unencoded counterparts in the high-rate $[[ k+2,\, k,\, 2 ]]$ iceberg quantum error detecting (QED) and $[[ (k_2 + 2)(k_1 + 2),\, k_2k_1,\, 4 ]]$ two-level concatenated iceberg QEC codes, using the 98-qubit Quantinuum Helios trapped-ion quantum processor. Utilizing new gadgets for encoded operations, we realize this "beyond break-even" performance with reasonable postselection rates across a range of fault-tolerant (FT) and partially-fault-tolerant (pFT) component and application benchmarks with between $48$ and $94$ logical qubits. These benchmarks include FT state preparation and measurement, QEC cycle benchmarking, logical gate benchmarking, GHZ state preparation, and a pFT quantum simulation of the three-dimensional $XY$ model of quantum magnetism. Additionally, we illustrate that postselection rates can be suppressed by increasing the code distance via concatenation. Our results represent state-of-the-art logical component and state fidelities and provide evidence that high-rate QED/QEC codes are viable on contemporary quantum computers for near-term beyond-classical-scale computation.
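The iceberg code's distance-2 error detection is easy to see in the stabilizer formalism: its two stabilizers are $X^{\otimes n}$ and $Z^{\otimes n}$ on all $n = k+2$ physical qubits, and an error is flagged iff it anticommutes with at least one of them. A stdlib Python sketch (our own illustration, using the standard symplectic bit representation of Paulis):

```python
def commutes(p, q):
    """Two n-qubit Paulis, each given as (x_bits, z_bits) lists,
    commute iff their symplectic product is 0 mod 2."""
    n = len(p[0])
    sp = sum(p[0][i] * q[1][i] + p[1][i] * q[0][i] for i in range(n)) % 2
    return sp == 0

def iceberg_detects(error, n):
    """Iceberg [[n, n-2, 2]] stabilisers are X^n and Z^n; an error is
    detected iff it anticommutes with at least one of them."""
    s_x = ([1] * n, [0] * n)
    s_z = ([0] * n, [1] * n)
    return not (commutes(error, s_x) and commutes(error, s_z))

n = 6                                     # k = 4 logical qubits
x1 = ([1] + [0] * (n - 1), [0] * n)       # single X error
xx = ([1, 1] + [0] * (n - 2), [0] * n)    # X-on-two-qubits error
```

Any weight-1 Pauli is detected, while a weight-2 error such as $X \otimes X$ commutes with both stabilizers and slips through — which is why the paper's two-level concatenation is used to reach distance 4.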
Controlled jump in the Clifford hierarchy
This paper develops a systematic method for reaching higher levels of the Clifford hierarchy in quantum computing by using controlled versions of Clifford operations, establishing precise rules for how much these controlled gates can advance up the hierarchy levels. The authors prove resource bounds showing that accessing very high hierarchy levels requires exponentially many qubits, and demonstrate applications to preparing states for fractional phase gates.
Key Contributions
- Proof of controlled-jump rule showing controlled Clifford gates CU reach hierarchy level m+2 where m is the Pauli periodicity of U
- Tight upper bound on Pauli periodicity showing exponential qubit requirements for high hierarchy levels
- Construction of explicit Clifford families achieving asymptotically optimal hierarchy jumps
- Protocol for preparing logical catalyst states enabling fractional Z gates via phase kickback
View Full Abstract
We develop a simple and systematic route to higher levels of the qubit Clifford hierarchy by coherently controlling Clifford operations. Our approach is based on Pauli periodicity, defined for a Clifford unitary $U$ as the smallest integer $m\ge 1$ such that $U^{2^{m}}$ is a Pauli operator up to phase. We prove a sharp controlled-jump rule showing that the controlled gate $CU$ lies strictly in level $m+2$ of the hierarchy, and equivalently that $CU$ lies in level $k$ if $U^{2^{k-2}}$ is Pauli while no smaller positive power of $U$ is Pauli. We further quantify the resources required to realize large level jumps in the Clifford hierarchy by proving an essentially tight upper bound on Pauli periodicity as a function of the number of qubits, which implies that accessing high hierarchy levels through controlled Cliffords requires a number of target qubits that grows exponentially with the desired level. We complement this limitation with explicit infinite families of Pauli-periodic Cliffords whose controlled versions achieve asymptotically optimal jumps. As an application, we propose a protocol for preparing logical catalyst states that enable logical $Z^{1/2^k}$ phase gates via phase kickback from a single jumped Clifford.
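The abstract's key quantity, Pauli periodicity, is straightforward to compute for small Cliffords. The stdlib Python sketch below (our own illustrative code) handles single-qubit unitaries: repeatedly square $U$ and test whether the result is proportional to $I$, $X$, $Y$, or $Z$. For the phase gate $S$ we get $S^2 = Z$, hence $m = 1$, and the controlled-jump rule places $CS$ in level $m + 2 = 3$ of the hierarchy, consistent with $CS$ being a third-level gate.

```python
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
S = [[1, 0], [0, 1j]]   # phase gate, a Clifford

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def proportional(a, b):
    """True if a = c*b for some nonzero scalar c (entrywise check)."""
    ratio = None
    for i in range(2):
        for j in range(2):
            if abs(b[i][j]) < 1e-9:
                if abs(a[i][j]) > 1e-9:
                    return False
            else:
                r = a[i][j] / b[i][j]
                if ratio is None:
                    ratio = r
                elif abs(r - ratio) > 1e-9:
                    return False
    return ratio is not None

def pauli_periodicity(u, m_max=8):
    """Smallest m >= 1 with u^(2^m) proportional to a Pauli (incl. I)."""
    power = u
    for m in range(1, m_max + 1):
        power = matmul(power, power)   # power is now u^(2^m)
        if any(proportional(power, p) for p in (I2, X, Y, Z)):
            return m
    return None
```

Since $X^2 = I$, any Pauli itself also has periodicity 1 under this definition.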
Beyond Single-Shot Fidelity: Chernoff-Based Throughput Optimization in Superconducting Qubit Readout
This paper develops a new approach to optimize qubit readout in superconducting quantum computers by focusing on minimizing the total time needed to certify quantum states, rather than just maximizing single-shot measurement accuracy. The researchers show that using longer integration times than what maximizes single-shot fidelity can actually reduce overall certification time by 9-11%.
Key Contributions
- Formulated information-theoretic framework treating qubit readout as a stochastic communication channel with Chernoff information analysis
- Demonstrated that throughput-optimal integration times are longer than fidelity-optimal times, achieving 9-11% speedup in state certification
View Full Abstract
Single-shot fidelity is the standard benchmark for superconducting qubit readout, but it does not directly minimize the total wall-clock time required to certify a quantum state. We formulate an information-theoretic description of dispersive readout that treats the measurement record as a stochastic communication channel and compute the classical Chernoff information governing the multi-shot error exponent using a trajectory model that incorporates T1 relaxation with full cavity memory. We find a consistent separation between the integration time that maximizes single-shot fidelity and the time that minimizes total certification time. For representative transmon parameters and hardware overheads, the throughput-optimal integration window is longer than the fidelity-optimal one, yielding certification speedups of approximately 9-11%, with the gain saturating near 1.13x in the high-readout-power and high-overhead regime. Comparing the extracted classical information to the Gaussian Chernoff limit defines an information-extraction efficiency metric and shows that typical dispersive schemes are limited to about 45% capture at short integration times by detection efficiency, decreasing to approximately 12% at the throughput-optimal integration time of approximately 1.22 µs due to T1-induced trajectory smearing. This formulation connects readout calibration directly to the operational objective of minimizing certification time in high-throughput superconducting processors.
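For intuition about the role the Chernoff information plays here, a toy stdlib Python calculation (our own, with made-up numbers, not the paper's trajectory model): for two equal-variance Gaussian readout histograms the Chernoff information has the closed form $C = (\mu_1 - \mu_0)^2 / 8\sigma^2$, the number of shots needed to certify a state at error $\varepsilon$ scales as $\ln(1/\varepsilon)/C$, and wall-clock certification time is shots times (integration time plus per-shot overhead).

```python
import math

def chernoff_info_gaussian(mu0, mu1, sigma):
    """Chernoff information between N(mu0, sigma^2) and N(mu1, sigma^2).
    For equal variances the optimal exponent is at s = 1/2, giving
    C = (mu1 - mu0)^2 / (8 * sigma^2)."""
    return (mu1 - mu0) ** 2 / (8.0 * sigma ** 2)

def shots_to_certify(c_info, eps):
    """Multi-shot error falls off as exp(-n*C); shots for target error eps."""
    return math.ceil(math.log(1.0 / eps) / c_info)

def certification_time(t_int, t_overhead, c_info, eps):
    """Wall-clock time = (integration + reset overhead) per shot x shots."""
    return (t_int + t_overhead) * shots_to_certify(c_info, eps)
```

With separation 2, unit noise, and target error $10^{-6}$, this toy model needs 28 shots; the paper's point is that $C$ itself depends on the integration time, so the time-optimal window need not be the fidelity-optimal one.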
Analysis of the action of conventional trapped-ion entangling gates in qudit space
This paper analyzes how conventional quantum gates designed for qubits (2-level systems) behave when applied to qudits (multi-level quantum systems) in trapped-ion quantum computers. The researchers study unwanted phase accumulations that occur in higher-dimensional systems and propose methods to compensate for these phases to make qudit-based quantum processors more practical.
Key Contributions
- Theoretical analysis of phase accumulation in Mølmer-Sørensen and Light-shift gates when operating on qudits
- Methods to actively compensate for unwanted phases and enhance gate robustness in multi-level quantum systems
View Full Abstract
Qudits, or multi-level quantum information carriers, present a promising path for scaling quantum computers. However, their use introduces increased complexity in quantum logic, necessitating careful control of relative phases between different qudit levels. In trapped-ion systems, entangling operations accumulate phases on specific levels that are no longer global, unlike in qubit architectures. Furthermore, the structure of multi-level gates becomes increasingly intricate with higher-dimensional Hilbert spaces. This work explores the theory of these additional entangling and non-entangling phases, accumulated in Mølmer--Sørensen and Light-shift gates. We propose methods to actively compensate for these phases, enhance gate robustness against parameter fluctuations, and simplify native gates for more efficient circuit decomposition. Our results pave the way toward the practical and scalable implementation of qudit-based quantum processors.
Tuning Wave-Particle Duality of Quantum Light by Generalized Photon Subtraction
This paper demonstrates a technique called generalized photon subtraction to create quantum light states that can be tuned between wave-like and particle-like properties. The researchers show this method can efficiently generate special quantum states needed for fault-tolerant optical quantum computing, particularly addressing bottlenecks in creating GKP qubits.
Key Contributions
- Experimental demonstration of tunable wave-particle duality control using generalized photon subtraction
- High-rate generation of intermediate quantum states optimized for fault-tolerant quantum computing thresholds
- Pathway to efficient GKP qubit generation addressing bottlenecks in optical quantum computing
View Full Abstract
Wave--particle duality is a hallmark of quantum mechanics. For bosonic systems, there exists a continuum of intermediate states bridging wave-like Schrödinger cat states and particle-like Fock states. Such states have recently been recognized as valuable resources for enhancing fault-tolerant quantum computation (FTQC) with propagating light. Here we experimentally demonstrate tunable generation of these intermediate states by employing generalized photon subtraction (GPS). By detecting up to three photons from squeezed-light sources with a photon-number-resolving detector, we continuously control the balance between wave- and particle-like features. This approach allows us to construct a spectral family of quantum states with high generation rates, optimized according to the required fault-tolerance threshold. Our results establish GPS as a versatile toolbox for tailoring non-Gaussian resources, opening a pathway to efficient Gottesman--Kitaev--Preskill (GKP) qubit generation and addressing a central bottleneck in optical quantum computing.
Optimized ancillary drive for fast Rydberg entangling gates
This paper develops a method to speed up two-qubit quantum gates in neutral atom systems by using an optimized ancillary laser drive that enhances the coupling between ground and Rydberg states. The technique reduces gate execution time by over 30% while maintaining high fidelity above 99.54% and reducing laser power requirements.
Key Contributions
- Development of optimized ancillary drive technique to enhance two-photon Rabi frequency in Rydberg atom systems
- Demonstration of >30% reduction in CZ gate execution time while maintaining >99.54% fidelity with reduced laser power requirements
View Full Abstract
Reaching fast and robust two-qubit gates with low infidelities has been an outstanding challenge for the long-term goal of useful quantum computers. Typically, optimizing the pulse shapes can minimize the gate infidelity and improve its robustness to certain types of errors; yet it remains incapable of speeding up the gate execution time, which is fundamentally restricted by the attainable Rabi frequency in a realistic setup. In this work, we develop a fast implementation of two-qubit CZ gates using an optimized ancillary drive to enhance the two-photon Rabi frequency between the ground and Rydberg states. This ancillary drive can work in an error-robustness framework without increasing the original gate infidelity in the absence of the drive. Considering experimentally feasible parameters for $^{87}$Rb atoms, we demonstrate that the execution time required for such CZ gates can be shortened by more than 30$\%$ compared to standard two-photon protocols, while raising the gate fidelity above 0.9954 when taking into account all relevant error sources. Our results reduce the high-power laser requirement and unlock the potential for fast, high-fidelity quantum operations in large-scale quantum computation with neutral atoms.
Correcting coherent quantum errors by going with the flow
This paper shows that coherent quantum errors (correlated errors across qubits) can be effectively managed in quantum error correction by using 'passive' correction strategies that track errors virtually rather than physically correcting them immediately. The authors demonstrate that this approach prevents coherent errors from compounding over multiple correction cycles, achieving performance comparable to simpler uncorrelated error models.
Key Contributions
- Demonstrates that passive error correction with virtual Pauli frame updates prevents coherent errors from compounding in quantum error correction
- Shows through theory and simulation that correlated Hamiltonian noise can achieve similar performance to uncorrelated Pauli noise when using proper correction strategies
View Full Abstract
The performance of a given quantum error correction (QEC) code depends upon the noise model that is assumed. Independent Pauli noise, applied after each quantum operation, is a simplistic noise model that is easy to simulate and understand in the context of stabilizer codes. Although such a noise model is artificial, it is equivalent to independent, random, unbiased qubit rotations. What about spatially or temporally correlated qubit rotations? Such a noise model is applicable to global operations (e.g., NMR or ESR), common control sources (e.g., lasers), or slow drift (e.g., charge or magnetic noise) in various qubit technologies. In the worst case, such errors can combine constructively and result in a post-correction failure rate that increases with the number of error correction cycles. However, we show that this worst case does not generally arise unless taking active corrective actions while performing QEC. That is, by employing virtual Pauli frame updates ("passive" error correction) rather than physical corrections ("active" error correction), coherent errors do not compound appreciably. Starting in a random Pauli frame is also advantageous. In fact, through perturbation theory arguments and supporting numerical simulations, we show that the logical qubit performance beyond distance 3 for correlated single-qubit Hamiltonian noise models (i.e., global errant qubit rotations), when employing these "lazy" strategies, essentially matches the performance of Pauli noise model with the same process fidelity (fidelity after one application). In a more general circuit model of noise, correlations may add constructively within syndrome extraction rounds but Pauli frame randomization from passive error correction mitigates this effect across multiple rounds.
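The "virtual Pauli frame update" the authors advocate is pure bookkeeping: corrections are recorded in software and pushed through subsequent Clifford gates by the standard conjugation rules, never applied as physical pulses. A minimal stdlib Python sketch of that bookkeeping (our own illustration, covering only single-qubit H and CNOT rules):

```python
class PauliFrame:
    """Track corrections virtually: each qubit carries (x, z) bits meaning
    a pending X^x Z^z that is never applied to hardware, only recorded."""
    def __init__(self, n):
        self.x = [0] * n
        self.z = [0] * n

    def record_correction(self, qubit, pauli):
        """Fold a decoder-suggested correction into the frame in software."""
        if pauli in ("X", "Y"):
            self.x[qubit] ^= 1
        if pauli in ("Z", "Y"):
            self.z[qubit] ^= 1

    # Conjugation rules push the frame through subsequent Clifford gates.
    def through_h(self, q):
        # H swaps X and Z
        self.x[q], self.z[q] = self.z[q], self.x[q]

    def through_cnot(self, ctrl, tgt):
        # CNOT copies X forward (control -> target) and Z backward
        self.x[tgt] ^= self.x[ctrl]
        self.z[ctrl] ^= self.z[tgt]

    def flip_measurement(self, q):
        """A Z-basis outcome on qubit q is flipped iff an X is pending."""
        return self.x[q] == 1
```

Because no physical pulse is ever applied, an imperfect (coherent) correction pulse cannot compound over cycles; the frame is folded into measurement outcomes at the very end.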
Entanglement-Induced Resilience of Quantum Dynamics
This paper demonstrates that quantum entanglement naturally protects quantum systems from errors and noise without requiring additional error correction schemes. The researchers show that as entanglement grows in quantum many-body systems, it automatically confines and suppresses the impact of local perturbations and errors.
Key Contributions
- Theoretical proof that entanglement entropy growth provides intrinsic protection against quantum errors
- Demonstration of a passive error suppression mechanism that requires no additional qubits or control overhead
- Quantitative correlation between entanglement entropy and degree of error protection in quantum dynamics
View Full Abstract
Quantum many-body devices suffer from imperfections that destabilize dynamics and limit scalability. We show that the dynamical growth of entanglement can intrinsically protect generic quantum dynamics against coherent and perturbative noise. Through rigorous theoretical analysis of general quantum dynamics and numerical simulations of spin chains and fermionic lattices, we prove that entanglement-entropy growth confines the influence of local Hamiltonian perturbations, thereby suppressing dynamical errors. The degree of protection correlates quantitatively with the entanglement entropy of the subsystems on which the perturbations act, and applies broadly to both analog quantum simulators and real-time control protocols. This entanglement-induced resilience is conceptually distinct from quantum error correction or dynamical decoupling: it passively leverages native many-body correlations without additional qubits, measurements, or control overhead. Our results reveal a generic mechanism linking entanglement growth to dynamical stability and provide practical guidelines for designing noise-resilient quantum devices.
Error correction with brickwork Clifford circuits
This paper proves that random 1D Clifford brickwork circuits can form good quantum error correction codes with logarithmic depth, providing both approximate and exact error correction bounds. The research uses statistical mechanics techniques to analyze these random quantum circuits and establishes mathematical limits on the circuit depth needed for effective error correction.
Key Contributions
- Proof that random 1D Clifford brickwork circuits form good approximate quantum error correction codes in logarithmic depth
- Matching upper and lower bounds for the circuit depth required for exact error correction in random 1D Clifford brickwork circuits
View Full Abstract
We prove that random 1D Clifford brickwork circuits form (in expectation) good approximate quantum error correction codes in logarithmic depth. Our proof makes use of the statistical mechanics techniques for random circuits developed by Dalzell et al. [PRX Quantum 3, 010333], adapted extensively to our own purpose. We also consider exact error correction, where we give matching upper and lower bounds for the required depth in which random 1D Clifford brickwork circuits become error correcting.
Toward speedup without quantum coherent access
This paper proposes a hybrid classical-quantum algorithm that combines classical preprocessing of matrix data with quantum circuits to solve various computational problems. The approach aims to achieve quantum speedups for tasks like linear equation solving and data fitting while avoiding the strong input assumptions that limit many existing quantum algorithms.
Key Contributions
- Development of hybrid classical-quantum algorithm with logarithmic complexity in input dimension
- Demonstration of exponential speedups for certain matrices compared to existing methods
- End-to-end quantum data fitting application with practical prediction capabilities
- Block encoding technique that avoids strong input assumptions of previous quantum algorithms
View Full Abstract
Along with the development of quantum technology, finding useful applications of quantum computers has been a central pursuit. Although various quantum algorithms have been developed, many of them require strong input assumptions, which are demanding on hardware. In particular, recent advances in dequantization have revealed that the quantum advantage is often an artifact of strong input assumptions. In this work, we propose a variant of these algorithms, leveraging both classical and quantum resources. Provided classical knowledge of (the entries of) the matrix/vector of interest, a classical procedure is used to pre-process this information, which is then fed into a quantum circuit shown to be a block encoding of the matrix of interest. From this block encoding, we show how to tackle a wide range of problems, including principal component analysis, linear equation solving, Hamiltonian simulation, ground-state preparation, and data fitting. We also analyze our protocol, showing that both the classical and quantum procedures can achieve logarithmic complexity in the input dimension, implying its potential for near-term realization. We then discuss several implications and corollaries of our result. First, our results suggest there are certain matrices/Hamiltonians for which our method can provide exponential improvement over existing ones with respect to sparsity. Regarding dense linear systems, our method achieves exponential speedup with respect to the inverse of the error tolerance, compared to the best previously known quantum algorithm for dense systems. Last, and most importantly, regarding quantum data fitting, we show how the output of our quantum algorithms can be leveraged to predict unseen data. It thus provides an end-to-end application, which had been an open aspect of previous quantum data fitting algorithms.
Qudit stabiliser codes for $\mathbb{Z}_N$ lattice gauge theories with matter
This paper shows how lattice gauge theories with matter can be reformulated as quantum error correcting codes using qudits (quantum systems with N levels instead of just 2). The authors demonstrate that quantum error correction can reveal hidden mathematical relationships between different physical theories and show how to perform fault-tolerant quantum computations using these qudit codes.
Key Contributions
- Extended quantum error correction from qubits to qudits for lattice gauge theories with matter
- Demonstrated logical duality between different bosonic models through error correction mapping
- Showed implementation of universal fault-tolerant gates via state injection between compatible qudit codes
View Full Abstract
In this work we extend the connection between Quantum Error Correction (QEC) and Lattice Gauge Theories (LGTs) by showing that a $\mathbb{Z}_N$ gauge theory with prime dimension $N$ coupled to dynamical matter can be expressed as a qudit stabilizer code. Using the stabilizer formalism we show how to formulate an exact mapping of the encoded $\mathbb{Z}_N$ gauge theory onto two different bosonic models, uncovering a logical duality generated by error correction itself. From this perspective, quantum error correction provides a unifying language to expose dual descriptions of lattice gauge theories. In addition, we generalize earlier $\mathbb{Z}_2$ constructions on qubits to $\mathbb{Z}_N$ on $N$-level qudits and demonstrate how universal fault-tolerant gates can be implemented via state injection between compatible qudit codes.
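The qudit stabilizer formalism in the abstract above rests on the generalized Pauli (clock and shift) operators. A minimal numpy check of their defining relations for prime dimension N = 5 (a textbook fact the paper builds on, not its contribution):

```python
import numpy as np

N = 5                                    # prime qudit dimension
omega = np.exp(2j * np.pi / N)

# Generalized Paulis: shift X|j> = |j+1 mod N>, clock Z|j> = omega^j |j>
X = np.roll(np.eye(N), 1, axis=0)
Z = np.diag(omega ** np.arange(N))

# Defining relations of the qudit Pauli group
assert np.allclose(Z @ X, omega * (X @ Z))         # Z X = omega X Z
assert np.allclose(np.linalg.matrix_power(X, N), np.eye(N))
assert np.allclose(np.linalg.matrix_power(Z, N), np.eye(N))
```

Stabilizers of a $\mathbb{Z}_N$ gauge theory are products of these operators, and the commutation phase $\omega$ is what replaces the familiar qubit anticommutation sign.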
Distilling Magic States in the Bicycle Architecture
This paper develops improved magic state distillation factories using Bivariate Bicycle (BB) codes that can operate within a single code block, achieving better space-time efficiency and lower error rates compared to conventional surface code approaches that require multiple code blocks and lattice surgery.
Key Contributions
- Development of magic state distillation factories on Bivariate Bicycle codes that execute within single code blocks
- Joint optimization framework for logical qubit mapping, gate scheduling, measurement nativization, and protocol compression via qubit recycling
- Demonstration of improved space-time volume and lower target error rates compared to leading surface code distillation factories
View Full Abstract
Magic State Distillation is considered to be one of the promising methods for supplying the non-Clifford resources required to achieve universal fault tolerance. Conventional MSD protocols implemented in surface codes often require multiple code blocks and lattice surgery rounds, resulting in substantial qubit overhead, especially at low target error rates. In this work, we present practical magic state distillation factories on Bivariate Bicycle (BB) codes that execute Pauli-measurement-based Clifford circuits inside a single BB code block. We formulate distillation circuit design as a joint optimization of logical qubit mapping, gate scheduling, measurement nativization, and protocol compression via qubit recycling. Based on detailed resource analysis and simulations, our BB factories have space-time volume comparable to that of leading distillation factories while delivering lower target error at a smaller qubit footprint, and are particularly compelling as second-round distillers following magic state cultivations.
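The resource arithmetic behind multi-round distillation can be sketched with the classic 15-to-1 protocol (output error roughly $35p^3$), which is the kind of baseline the BB factories above are compared against; the paper's own factories are not reproduced here. The cubic suppression per round is why a cheap "second-round distiller" is valuable: two rounds already reach very low error at the cost of $15^2$ inputs.

```python
def rounds_to_target(p_in, p_target):
    """Rounds of 15-to-1 magic state distillation (p_out ~ 35 p^3)
    needed to reach p_target, plus the raw input magic states
    consumed per output state."""
    p, rounds = p_in, 0
    while p > p_target:
        p = 35 * p**3
        rounds += 1
    return rounds, 15**rounds, p

rounds, raw_states, p_out = rounds_to_target(1e-2, 1e-10)
# Two rounds suffice: 1e-2 -> 3.5e-5 -> ~1.5e-12, at 225 inputs per output
```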
A Unified Error Correction Code for Universal Quantum Computing with Identical Particles
This paper proposes a new quantum error correction approach for quantum computers built with identical particle qubits, showing that these systems interact differently with environmental noise than conventional qubits. The authors develop a unified framework where error correction can be implemented directly at the physical qubit level using non-unitary reversal operations.
Key Contributions
- Identification of fundamental differences between identical particle qubit-bath interactions and conventional qubit-bath interactions
- Development of a unified error correction framework using non-unitary reversal operations for fault-tolerant quantum computing
- Demonstration that dynamical decoupling and decoherence-free subspace structures remain effective in this new framework
View Full Abstract
We present a universal fault-tolerant quantum computing architecture based on identical particle qubits (IPQs), where we find that the first-order IPQ-bath interaction fundamentally differs from the conventional first-order qubit-bath interaction. This key distinction necessitates a redesign of existing strategies to fight decoherence. We propose that the simplest quantum error correction code can be realized directly within the physical qubit, provided that conventional correction and restoration are generalized beyond unitary operations to employ physically implementable reversal operations -- naturally placing logical and physical qubits on equal footing. We further demonstrate that dynamical decoupling (DD) remains effective within this unified framework, and that a decoherence-free subspace (DFS)-like structure emerges. Unlike previous approximate treatments, our analytically solvable IPQ-bath model enables rigorous testing of these strategies, with numerical simulations validating their effectiveness.
Generalized $\mathbb{Z}_p$ toric codes as qudit low-density parity-check codes
This paper develops improved quantum error correction codes by generalizing the Kitaev toric code to work with higher-dimensional quantum systems (qudits) and systematically searches for codes with better performance parameters, finding examples that achieve optimal trade-offs between code distance and information storage capacity.
Key Contributions
- Development of generalized Z_p toric codes for qudits with enhanced stabilizer structures
- Systematic search identifying optimal qudit LDPC codes with improved k*d²/n ratios
- Efficient computational method using Laurent polynomials and Gröbner basis to calculate logical dimensions
View Full Abstract
We study two-dimensional translation-invariant CSS stabilizer codes over prime-dimensional qudits on the square lattice under twisted boundary conditions, generalizing the Kitaev $\mathbb{Z}_p$ toric code by augmenting each stabilizer with two additional qudits. Using the Laurent-polynomial formalism, we adapt the Gröbner basis to compute the logical dimension $k$ efficiently, without explicitly constructing large parity-check matrices. We then perform a systematic search over various stabilizer realizations and lattice geometries for $p\in\{3,5,7,11\}$, identifying qudit low-density parity-check codes with the optimal finite-size performance. Representative examples include $[[242,10,22]]_3$ and $[[120,6,20]]_{11}$, both achieving $k d^{2}/n=20$. Across the searched regime, the best observed $k d^{2}$ at fixed $n$ increases with $p$, with an empirical relation $k d^{2} = 0.0541 \, n^{2}\ln p + 3.84 \, n$, compatible with a Bravyi--Poulin--Terhal-type tradeoff when the interaction range grows with system size.
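The logical dimension $k = n - \mathrm{rank}(H_X) - \mathrm{rank}(H_Z)$ that the paper computes efficiently via Gröbner bases can, for small instances, be checked directly by Gaussian elimination over GF(p). The sketch below does this for the plain $\mathbb{Z}_p$ toric code on a torus (without the paper's augmented stabilizers); the signed vertex/plaquette incidence matrices are the standard homological construction, and $k = 2$ for any prime $p$.

```python
import numpy as np

def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p) by Gaussian elimination."""
    M = np.array(M) % p
    rank, col = 0, 0
    rows, cols = M.shape
    while rank < rows and col < cols:
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            col += 1
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        M[rank] = (M[rank] * pow(int(M[rank, col]), p - 2, p)) % p  # Fermat inverse
        for r in range(rows):
            if r != rank:
                M[r] = (M[r] - M[r, col] * M[rank]) % p
        rank += 1
        col += 1
    return rank

def toric_checks(L):
    """Signed vertex (X) and plaquette (Z) checks of the Z_p toric code
    on an L x L torus; edges: L^2 horizontal then L^2 vertical."""
    n = 2 * L * L
    h = lambda x, y: (x % L) + (y % L) * L
    v = lambda x, y: L * L + (x % L) + (y % L) * L
    Hx = np.zeros((L * L, n), dtype=int)
    Hz = np.zeros((L * L, n), dtype=int)
    for x in range(L):
        for y in range(L):
            r = x + y * L
            Hx[r, h(x, y)] += 1; Hx[r, h(x - 1, y)] -= 1    # vertex at (x, y)
            Hx[r, v(x, y)] += 1; Hx[r, v(x, y - 1)] -= 1
            Hz[r, h(x, y)] += 1; Hz[r, v(x + 1, y)] += 1    # plaquette at (x, y)
            Hz[r, h(x, y + 1)] -= 1; Hz[r, v(x, y)] -= 1
    return Hx, Hz

p, L = 3, 3
Hx, Hz = toric_checks(L)
assert not np.any((Hz @ Hx.T) % p)     # CSS commutation condition mod p
k = 2 * L * L - rank_mod_p(Hx, p) - rank_mod_p(Hz, p)
# The torus supports k = 2 logical qudits for any prime p
```

The explicit-matrix approach scales poorly with lattice size, which is exactly why the paper's Laurent-polynomial and Gröbner-basis route is needed for the systematic search.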
Experimental characterization of coherent and non-Markovian errors using tangent space decomposition
This paper develops and experimentally validates a new method for diagnosing different types of quantum errors in single-qubit gates using tangent-space decomposition. The technique can distinguish between coherent errors, Markovian noise, and non-Markovian noise from a single measurement, and was tested on a trapped ion quantum computing platform.
Key Contributions
- Novel tangent-space decomposition method for quantum error characterization that distinguishes coherent, Markovian, and non-Markovian errors
- Experimental validation on trapped ion platform showing practical application for quantum control system diagnostics
View Full Abstract
Accurate characterization of coherent and non-Markovian errors remains a central challenge in quantum information processing, as conventional benchmarking techniques typically rely on Markovian and time-independent noise assumptions. In practice, however, quantum devices exhibit both systematic coherent miscalibrations and temporally correlated fluctuations, which complicate error diagnosis and mitigation. Here, we apply a technique based on tangent-space decomposition to characterize such errors in single-qubit quantum gates implemented on a trapped ion platform. Small imperfections in a quantum operation are treated as perturbations of the target quantum map, represented as tangent vectors in the space of quantum channels. This formulation enables a natural decomposition of the deviation into three components corresponding to coherent, Markovian, and non-Markovian processes. The relative weights of these components provide a quantitative measure of the contribution from each type of error mechanism, directly from a single tomographic snapshot. We experimentally validate this method on single-qubit gates implemented on a trapped $^{40}$Ca$^+$ ion, where control is achieved through laser-driven optical transitions. By analyzing experimentally reconstructed process matrices, expressed in the Pauli Transfer Matrix and Choi representations, we identify and quantify non-Markovian effects arising from controlled injection of slow fluctuations in the experimental environment. We also characterize deterministic coherent miscalibrations using the same technique. This approach provides a physically transparent and experimentally accessible tool for diagnosing complex error sources in quantum control systems.
CQM: Cyclic Qubit Mappings
This paper proposes Cyclic Qubit Mappings (CQM), a technique that dynamically moves logical qubits around quantum hardware during compilation to average out spatial and temporal error variations in quantum computers using surface codes and lattice surgery operations.
Key Contributions
- Dynamic remapping technique to mitigate hardware heterogeneity in quantum computers
- Method to achieve average logical error rates by moving qubits spatially using lattice surgery
View Full Abstract
Quantum computers show promise to solve select problems otherwise intractable on classical computers. However, noisy intermediate-scale quantum (NISQ) era devices are currently prone to various sources of error. Quantum error correction (QEC) shows promise as a path towards fault-tolerant quantum computing. Surface codes, in particular, have become ubiquitous throughout the literature for their efficacy as a quantum error correcting code, and can execute quantum circuits via lattice surgery operations. Lattice surgery also allows logical qubits to maneuver around the architecture, if there is space for it. Hardware used for near-term demonstrations has spatially and temporally varying error rates, which carry over into the logical qubits. By maneuvering logical qubits around the topology, an average logical error rate (LER) can be enforced. We propose cyclic qubit mappings (CQM), a dynamic remapping technique implemented during compilation to mitigate hardware heterogeneity by expanding and contracting logical qubits. In addition to LER averaging, CQM shows initial promise given its minimal execution time overhead and effective resource utilization.
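The LER-averaging effect can be sketched with a toy model (not the paper's compiler): cycling a logical qubit uniformly over heterogeneous patches replaces exposure to any single patch with exposure to the mean error rate, to first order, which in particular beats being pinned to the worst patch.

```python
import math

# Assumed per-round logical error rates of four heterogeneous patches
patch_ler = [1e-6, 5e-6, 2e-5, 8e-5]
rounds = 10_000

def survival(rates):
    """Probability of no logical error over the run, visiting each
    patch for an equal share of rounds."""
    share = rounds // len(rates)
    return math.prod((1 - r) ** share for r in rates)

static_worst = (1 - max(patch_ler)) ** rounds      # pinned to worst patch
cycled = survival(patch_ler)                       # CQM-style averaging
mean_rate = sum(patch_ler) / len(patch_ler)

# Cycling matches a uniform device at the mean rate, to first order
assert abs(cycled - (1 - mean_rate) ** rounds) < 1e-3
assert cycled > static_worst
```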
Electrical post-fabrication tuning of aluminum Josephson junctions at room temperature
This paper demonstrates a method to electrically tune aluminum Josephson junctions at room temperature using voltage pulses, allowing researchers to adjust qubit frequencies after fabrication. The technique can increase junction resistance by up to 270% while maintaining high qubit quality, providing a solution to frequency crowding problems in quantum processors.
Key Contributions
- Demonstrated controllable post-fabrication tuning of superconducting qubit frequencies while maintaining quality factors above 1 million
- Established practical protocols and limits for electrical tuning of Josephson junctions with up to 270% resistance increase
- Provided solution to frequency crowding in quantum processors through room-temperature junction modification
View Full Abstract
Josephson junctions are a key element of superconducting quantum technology, serving as the core building blocks of superconducting qubits. We present an experimental study on room-temperature electrical tuning of aluminum junctions, showing that voltage pulses can controllably increase their resistance and adjust the Josephson energy while maintaining qubit quality factors above 1 million. We find that the rate of resistance increase scales exponentially with pulse amplitude during manipulation, after which the spontaneous resistance increase scales proportionally to the amount of manipulation. We show that this spontaneous increase halts at cryogenic temperatures, and resumes again at room temperature. Using our stepwise protocol, we achieve up to a 270% increase in junction resistance, corresponding to a reduction of nearly 2 GHz of the qubit transition frequency. These results establish the achievable range, relaxation behavior, and practical limits of electrical tuning, enabling post-fabrication mitigation of frequency crowding in quantum processors.
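The resistance-to-frequency relation in the abstract follows standard transmon arithmetic: by Ambegaokar-Baratoff, $E_J \propto 1/R_n$, and $f_{01} \approx \sqrt{8 E_J E_C} - E_C$. The sketch below uses assumed illustrative parameters (`EJ0`, `EC` are not the paper's device values) to show how a 270% resistance increase translates into a downward frequency shift of a couple of GHz.

```python
import math

def f01_ghz(EJ_ghz, EC_ghz):
    """Transmon qubit frequency (GHz) in the EJ >> EC limit."""
    return math.sqrt(8 * EJ_ghz * EC_ghz) - EC_ghz

# Assumed illustrative device: EJ/h = 15 GHz, EC/h = 0.2 GHz
EJ0, EC = 15.0, 0.2
f_before = f01_ghz(EJ0, EC)

# Ambegaokar-Baratoff: EJ is inversely proportional to the normal-state
# junction resistance, so a 270% resistance increase scales EJ by 1/3.7
f_after = f01_ghz(EJ0 / 3.7, EC)
shift = f_before - f_after            # downward shift of a few GHz
```

The exact magnitude depends on the device's $E_J/E_C$ ratio, which is why the paper quotes its shift ("nearly 2 GHz") for its own qubits.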
Differentiable Maximum Likelihood Noise Estimation for Quantum Error Correction
This paper develops a new method called differentiable Maximum Likelihood Estimation (dMLE) to better estimate noise in quantum computers, which is crucial for quantum error correction. The approach uses gradient descent to optimize noise parameters and demonstrates significant improvements in reducing logical error rates compared to existing methods when tested on Google's quantum processor.
Key Contributions
- Development of differentiable Maximum Likelihood Estimation framework for quantum noise estimation
- Demonstration of up to 30.6% reduction in logical error rates for repetition codes and 8.1% for surface codes
- Integration of exact Planar solver and novel Tensor Network architecture for tractable likelihood evaluation
View Full Abstract
Accurate noise estimation is essential for fault-tolerant quantum computing, as decoding performance depends critically on the fidelity of the circuit-level noise parameters. In this work, we introduce a differentiable Maximum Likelihood Estimation (dMLE) framework that enables exact, efficient, and fully differentiable computation of syndrome log-likelihoods, allowing circuit-level noise parameters to be optimized directly via gradient descent. Leveraging the exact Planar solver for repetition codes and a novel, simplified Tensor Network (TN) architecture combined with optimized contraction path finding for surface codes, our method achieves tractable and fully differentiable likelihood evaluation even for distance 5 surface codes with up to 25 rounds. Our method recovers the underlying error probabilities with near-exact precision in simulations and reduces logical error rates by up to 30.6(3)% for repetition codes and 8.1(2)% for surface codes on experimental data from Google's processor compared to previous state-of-the-art methods: correlation analysis and Reinforcement Learning (RL) methods. Our approach yields provably optimal, decoder-independent error priors by directly maximizing the syndrome likelihood, offering a powerful noise estimation and control tool for unlocking the full potential of current and future error-corrected quantum processors.
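The core idea of likelihood-based noise estimation can be shown on a deliberately tiny stand-in (two detectors, three error mechanisms) rather than the paper's planar/tensor-network solvers: write the exact syndrome distribution as a function of the error probabilities, then ascend the log-likelihood. This toy uses finite-difference gradients where the paper uses automatic differentiation; all names and parameters here are illustrative.

```python
import itertools
import numpy as np

def syndrome_dist(p):
    """Joint distribution over detectors (d1, d2): mechanism 0 flips d1,
    mechanism 1 flips d2, mechanism 2 flips both."""
    dist = np.zeros((2, 2))
    for e in itertools.product([0, 1], repeat=3):
        prob = np.prod([p[i] if e[i] else 1 - p[i] for i in range(3)])
        dist[e[0] ^ e[2], e[1] ^ e[2]] += prob
    return dist

def fit(target, steps=4000, lr=0.02, eps=1e-6):
    """Maximize sum(target * log(model)) by finite-difference ascent."""
    ll = lambda p: np.sum(target * np.log(syndrome_dist(p)))
    p = np.full(3, 0.1)
    for _ in range(steps):
        g = np.array([(ll(p + eps * d) - ll(p - eps * d)) / (2 * eps)
                      for d in np.eye(3)])
        p = np.clip(p + lr * g, 1e-4, 0.49)       # keep probabilities valid
    return p

p_true = np.array([0.05, 0.08, 0.02])
p_est = fit(syndrome_dist(p_true))    # recovers p_true from syndrome stats
```

Note how the correlated mechanism (`p[2]`) is identified only through the joint statistics of both detectors, which is the same reason circuit-level estimation needs the full syndrome likelihood rather than per-detector marginals.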
Calderbank-Shor-Steane codes on group-valued qudits
This paper introduces a new class of quantum error-correcting codes called group-CSS codes that work on qudits (quantum systems with more than two levels) based on any finite group. The codes generalize existing CSS codes and quantum double models, providing new theoretical frameworks for quantum error correction with non-Abelian groups.
Key Contributions
- Introduction of CSS-like codes on group-valued qudits for arbitrary finite groups
- Proof that certain group-CSS codes reduce to CW quantum double models
- Construction of intrinsically non-Abelian code families with asymptotically optimal rate and distances
- Generalization of quantum double models with defects using ghost vertices
View Full Abstract
Calderbank-Shor-Steane (CSS) codes are a versatile quantum error-correcting family built out of commuting $X$- and $Z$-type checks. We introduce CSS-like codes on $G$-valued qudits for any finite group $G$ that reduce to qubit CSS codes for $G = \mathbb{Z}_2$ yet generalize the Kitaev quantum double model for general groups. The $X$-checks of our group-CSS codes correspond to left and/or right multiplication by group elements, while $Z$-checks project onto solutions to group word equations. We describe quantum-double models on oriented two-dimensional CW complexes (which need not cellulate a manifold) and prove that, when $G$ is non-Abelian and simple, every $G$-covariant group-CSS code with suitably upper-bounded $Z$-check weight and lower-bounded $Z$-distance reduces to a CW quantum double. We describe the codespace and logical operators of CW quantum doubles via the same intuition used to obtain logical structure of surface codes. We obtain distance bounds for codes on non-Abelian simple groups from the graph underlying the CW complex, and construct intrinsically non-Abelian code families with asymptotically optimal rate and distances. Adding "ghost vertices" to the CW complex generalizes quantum double models with defects and rough boundary conditions whose logical structure can be understood without reference to non-Abelian anyons or defects. Several non-invertible symmetry-protected topological states, both with ordinary and higher-form symmetries, are the unique codewords of simply-connected CW quantum doubles with a single ghost vertex.
A Fine-Grained and Efficient Reliability Analysis Framework for Noisy Quantum Circuits
This paper develops a new framework for efficiently evaluating how reliable quantum circuits are when running on noisy quantum computers. The method introduces a 'Noise Proxy Circuit' that models cumulative noise effects and provides accurate reliability estimates without actually running the circuits, achieving results comparable to more computationally expensive fidelity measurements.
Key Contributions
- Introduction of Noise Proxy Circuit (NPC) abstraction for modeling cumulative noise effects without logical operations
- Development of Proxy Fidelity metric for quantifying both qubit-level and circuit-level reliability
- Analytical algorithm for estimating reliability under multiple noise channels (depolarizing, thermal relaxation, readout error)
- Execution-free, scalable framework achieving fidelity-level accuracy with low computational cost
View Full Abstract
Evaluating the reliability of noisy quantum circuits is essential for implementing quantum algorithms on noisy quantum devices. However, current quantum hardware exhibits diverse noise mechanisms whose compounded effects make accurate and efficient reliability evaluation challenging. While state fidelity is the most faithful indicator of circuit reliability, it is experimentally and computationally prohibitive to obtain. Alternative metrics, although easier to compute, often fail to accurately reflect circuit reliability, lack universality across circuit types, or offer limited interpretability. To address these challenges, we propose a fine-grained, scalable, and interpretable framework for efficient and accurate reliability evaluation of noisy quantum circuits. Our approach performs a state-independent analysis to model how circuit reliability progressively degrades during execution. We introduce the Noise Proxy Circuit (NPC), which removes all logical operations while preserving the complete sequence of noise channels, thereby providing an abstraction of cumulative noise effects. Based on the NPC, we define Proxy Fidelity, a reliability metric that quantifies both qubit-level and circuit-level reliability. We further develop an analytical algorithm to estimate Proxy Fidelity under depolarizing, thermal relaxation, and readout error channels. The proposed framework achieves fidelity-level reliability estimation while remaining execution-free, scalable, and interpretable. Experimental results show that our method accurately estimates circuit fidelity, with an average absolute difference (AAD) ranging from 0.031 to 0.069 across diverse circuits and devices.
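The NPC idea, stripping the logical gates and keeping only the noise channels, reduces in the simplest case to multiplying per-channel success factors. The sketch below is a simplified depolarizing-plus-readout illustration of that structure, not the paper's algorithm (which also models thermal relaxation analytically); all numbers are assumptions.

```python
import math

# Noise channels experienced by each qubit, in circuit order:
# depolarizing probabilities per gate plus a final readout error.
qubit_noise = {
    "q0": {"depol": [0.001] * 10, "readout": 0.01},
    "q1": {"depol": [0.002] * 6,  "readout": 0.02},
}

def proxy_fidelity(noise):
    """Qubit-level proxy: product of per-channel success factors."""
    f = math.prod(1 - p for p in noise["depol"])
    return f * (1 - noise["readout"])

per_qubit = {q: proxy_fidelity(n) for q, n in qubit_noise.items()}
circuit_level = math.prod(per_qubit.values())   # circuit-level proxy
```

Because the estimate never touches the quantum state, it is execution-free and scales linearly in the number of noise channels, which is the property the framework above exploits.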
Universal Protection of Quantum States from Decoherence
This paper presents a universal method to protect quantum states from decoherence by temporarily moving quantum information to a protected ancillary system, without requiring prior knowledge of the quantum state. The researchers experimentally demonstrated this protection protocol using quantum optics, showing it can preserve coherence for arbitrary quantum states.
Key Contributions
- Development of a state-independent quantum decoherence protection protocol that works without prior knowledge of the quantum state
- Experimental validation of universal quantum state protection using quantum optical platform with ancillary degrees of freedom
View Full Abstract
The fragility of quantum coherence fundamentally limits the scalability of quantum technologies, as unavoidable environmental interactions induce decoherence and rapidly degrade quantum properties. The Quantum Zeno Effect offers a powerful route to suppress quantum evolution and protect coherence through frequent measurements, irrespective of the underlying dynamics. However, existing implementations require prior knowledge of the quantum state, severely restricting their applicability. Here we introduce a state- and dynamics-independent protection protocol embedding the system in a larger Hilbert space, temporarily swapping the quantum information from its original degree of freedom to a decoherence-free ancillary one. We experimentally validate the protocol on a quantum optical platform, demonstrating robust preservation of coherence and purity for arbitrary polarization qubits under decoherence, thereby enabling the universal safeguarding of unknown quantum states.
Separating Non-Interactive Classical Verification of Quantum Computation from Falsifiable Assumptions
This paper proves that it's impossible to create a non-interactive classical verification protocol for quantum computation based on standard cryptographic assumptions. The authors show a fundamental limitation where verifying quantum computations with just a single message exchange cannot rely on typical assumptions like Learning-with-Errors that most cryptographic systems use.
Key Contributions
- Proves impossibility of non-interactive classical verification of QMA problems under falsifiable assumptions using quantum black-box reductions
- Establishes fundamental limitations for quantum verification protocols in the plain model with single-message communication
View Full Abstract
Mahadev [SIAM J. Comput. 2022] introduced the first protocol for classical verification of quantum computation based on the Learning-with-Errors (LWE) assumption, achieving a 4-message interactive scheme. This breakthrough naturally raised the question of whether fewer messages are possible in the plain model. Despite its importance, this question has remained unresolved. In this work, we prove that there is no quantum black-box reduction of non-interactive classical verification of quantum computation of $\textsf{QMA}$ to any falsifiable assumption. Here, "non-interactive" means that after an instance-independent setup, the protocol consists of a single message. This constitutes a strong negative result given that falsifiable assumptions cover almost all standard assumptions used in cryptography, including LWE. Our separation holds under the existence of a $\textsf{QMA} \text{-} \textsf{QCMA}$ gap problem. Essentially, these problems require a slightly stronger assumption than $\textsf{QMA}\neq \textsf{QCMA}$. To support the existence of such problems, we present a construction relative to a quantum unitary oracle.
Distributed Hyperbolic Floquet Codes under Depolarizing and Erasure Noise
This paper develops distributed quantum error correcting codes based on hyperbolic geometry that can operate across multiple quantum processing units connected by shared entanglement. The authors test these codes under various noise conditions and demonstrate their effectiveness for scaling quantum computers beyond single-device architectures.
Key Contributions
- Introduction of new hyperbolic Floquet code families from {10,3} and {12,3} tessellations
- Demonstration of distributed quantum error correction across multiple QPUs with measurable pseudo-thresholds under realistic noise models
View Full Abstract
Distributing qubits across quantum processing units (QPUs) connected by shared entanglement enables scaling beyond monolithic architectures. Hyperbolic Floquet codes use only weight-2 measurements and are good candidates for distributed quantum error correcting codes. We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations via the Wythoff kaleidoscopic construction with the Low-Index Normal Subgroups (LINS) algorithm and distribute them across QPUs via spectral bisection. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes. We simulate these distributed codes under four noise models: depolarizing, SDEM3, correlated EM3, and erasure. With depolarizing noise ($p_{\text{local}} = 0.03\%$), fine-grained codes achieve non-local pseudo-thresholds up to 3.0\% for $\{8,3\}$, 3.0\% for $\{10,3\}$, and 1.75\% for $\{12,3\}$. Correlated EM3 yields pseudo-thresholds up to 0.75\% for $\{8,3\}$, 0.75\% for $\{10,3\}$, and 0.50\% for $\{12,3\}$; crossing-based thresholds from same-$k$ families are ${\sim}1.75$--$2.9\%$ across all tessellations. Using the SDEM3 model, fine-grained codes achieve distributed pseudo-thresholds of 1.75\% for $\{8,3\}$, 1.25\% for $\{10,3\}$, and 1.00\% for $\{12,3\}$. Under erasure noise motivated by spin-optical architectures, thresholds at 1\% local loss are 35--40\% for $\{8,3\}$, 30--35\% for $\{10,3\}$, and 25--30\% for $\{12,3\}$.
Exact quantum decision diagrams with scaling guarantees for Clifford+$T$ circuits and beyond
This paper develops exact quantum decision diagrams that avoid floating-point errors by using algebraic representations for complex numbers, specifically for analyzing Clifford+T quantum circuits. The authors prove theoretical scaling guarantees showing that their method's runtime and memory usage scale exponentially only with the number of T gates, while remaining polynomial in the number of Clifford gates and qubits.
Key Contributions
- First exact algebraic representation for quantum decision diagrams that eliminates floating-point errors
- Theoretical scaling guarantees proving runtime bounds of 2^t · poly(g,n) for quantum circuit simulation
- Connection between quantum state stabilizer nullity and decision diagram width for Clifford+T circuits
View Full Abstract
A decision diagram (DD) is a graph-like data structure for homomorphic compression of Boolean and pseudo-Boolean functions. Over the past decades, decision diagrams have been successfully applied to verification, linear algebra, stochastic reasoning, and quantum circuit analysis. Floating-point errors have, however, significantly slowed down practical implementations of real- and complex-valued decision diagrams. In the context of quantum computing, attempts to mitigate this numerical instability have thus far lacked theoretical scaling guarantees and have had only limited success in practice. Here, we focus on the analysis of quantum circuits consisting of Clifford gates and $T$ gates (a common universal gate set). We first hand-craft an algebraic representation for complex numbers, which replaces the floating-point coefficients in a decision diagram. Then, we prove that the sizes of these algebraic representations are linearly bounded in the number of $T$ gates and qubits, and constant in the number of Clifford gates. Furthermore, we prove that both the runtime and the number of nodes of decision diagrams are upper bounded as $2^t \cdot poly(g, n)$, where $t$ ($g$) is the number of $T$ gates (Clifford gates) and $n$ the number of qubits. Our proofs are based on a $T$-count dependent characterization of the density matrix entries of quantum states produced by circuits with Clifford+$T$ gates, and uncover a connection between a quantum state's stabilizer nullity and its decision diagram width. With an open source implementation, we demonstrate that our exact method resolves the inaccuracies occurring in floating-point-based counterparts and can outperform them due to lower node counts. Our contributions are, to the best of our knowledge, the first scaling guarantees on the runtime of (exact) quantum decision diagram simulation for a universal gate set.
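The algebraic representation the abstract refers to rests on a standard fact: every Clifford+$T$ amplitude lies in the ring $\mathbb{Z}[1/\sqrt{2}, e^{i\pi/4}]$, so amplitudes can be stored as $(a + b\omega + c\omega^2 + d\omega^3)/\sqrt{2}^{\,k}$ with integers $a,b,c,d,k$ and $\omega = e^{i\pi/4}$ (so $\omega^4 = -1$). The sketch below is the standard algebraic idea, not the paper's data structure; multiplication stays exact, with no floating point involved.

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Ring:
    """Exact number (a + b*w + c*w^2 + d*w^3) / sqrt(2)^k with w = e^{i pi/4}.
    All Clifford+T amplitudes lie in this ring, so composing gates
    never incurs floating-point rounding."""
    coeffs: tuple   # (a, b, c, d), integers
    k: int          # power of sqrt(2) in the denominator

    def __mul__(self, other):
        out = [0, 0, 0, 0]
        for i, x in enumerate(self.coeffs):
            for j, y in enumerate(other.coeffs):
                out[(i + j) % 4] += x * y * (-1) ** ((i + j) // 4)  # w^4 = -1
        return Ring(tuple(out), self.k + other.k)

    def to_complex(self):   # only for printing/checking, never for arithmetic
        w = complex(math.cos(math.pi / 4), math.sin(math.pi / 4))
        return sum(c * w**i for i, c in enumerate(self.coeffs)) / 2 ** (self.k / 2)

INV_SQRT2 = Ring((0, 1, 0, -1), 2)    # (w - w^3)/2 = 1/sqrt(2), exactly
T_PHASE = Ring((0, 1, 0, 0), 0)       # the T gate's e^{i pi/4} phase

half = INV_SQRT2 * INV_SQRT2          # exact: Ring((2, 0, 0, 0), 4) == 1/2
```

The paper's bound that these representations grow only linearly in the $T$-count corresponds here to the integer coefficients and the exponent `k` growing slowly as gates are composed.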
A Shadow Enhanced Greedy Quantum Eigensolver
This paper introduces SEGQE, a new quantum algorithm that efficiently finds the ground state (lowest energy state) of quantum systems by using classical shadows to evaluate many potential quantum operations in parallel, then greedily selecting the best one at each step. The method is designed to work well on early fault-tolerant quantum computers where measurements are expensive.
Key Contributions
- Development of SEGQE algorithm that uses classical shadows for measurement-efficient ground-state preparation
- Rigorous theoretical analysis providing worst-case sample complexity bounds with logarithmic scaling
- Numerical demonstration of linear scaling with system size on transverse-field Ising models and random Hamiltonians
View Full Abstract
While ground-state preparation is expected to be a primary application of quantum computers, it is also an essential subroutine for many fault-tolerant algorithms. In early fault-tolerant regimes, logical measurements remain costly, motivating adaptive, shot-frugal state-preparation strategies that efficiently utilize each measurement. We introduce the Shadow Enhanced Greedy Quantum Eigensolver (SEGQE) as a greedy, shadow-assisted framework for measurement-efficient ground-state preparation. SEGQE uses classical shadows to evaluate, in parallel and entirely in classical post-processing, the energy reduction induced by large collections of local candidate gates, greedily selecting at each step the gate with the largest estimated energy decrease. We derive rigorous worst-case per-iteration sample-complexity bounds for SEGQE, exhibiting logarithmic dependence on the number of candidate gates. Numerical benchmarks on finite transverse-field Ising models and ensembles of random local Hamiltonians demonstrate convergence in a number of iterations that scales approximately linearly with system size, while maintaining high-fidelity ground-state approximations and competitive energy estimates. Together, our empirical scaling laws and rigorous per-iteration guarantees establish SEGQE as a measurement-efficient state-preparation primitive well suited to early fault-tolerant quantum computing architectures.
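The greedy loop at the heart of SEGQE can be shown on a tiny stand-in that replaces classical-shadow estimation with exact statevector expectation values (the measurement-efficiency claim lives entirely in the shadow step, which this sketch omits). Candidate gates here are single-qubit Ry rotations on a 2-site transverse-field Ising model; all choices are illustrative, not the paper's.

```python
import numpy as np

# 2-site transverse-field Ising model: H = -Z0 Z1 - X0 - X1
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = -np.kron(Z, Z) - np.kron(X, I2) - np.kron(I2, X)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def candidates():
    """Local candidate gates: Ry rotations on either qubit."""
    for theta in (np.pi / 16, np.pi / 8, np.pi / 4, np.pi / 2):
        for sign in (1, -1):
            yield np.kron(ry(sign * theta), I2)
            yield np.kron(I2, ry(sign * theta))

def energy(psi):
    return float(np.real(psi.conj() @ H @ psi))

psi = np.array([1.0, 0.0, 0.0, 0.0])   # start in |00>
energies = [energy(psi)]
for _ in range(50):                    # greedy loop
    best = min(candidates(), key=lambda g: energy(g @ psi))
    if energy(best @ psi) >= energies[-1] - 1e-12:
        break                          # no candidate lowers the energy
    psi = best @ psi
    energies.append(energy(psi))
# Energy decreases monotonically toward the product-state optimum (-2)
```

With only single-qubit candidates the greedy search stalls at the best product state; reaching the true (entangled) ground state requires richer candidate pools, which is where the parallel shadow evaluation of large gate collections pays off.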
Fault-tolerant preparation of arbitrary logical states in the cat code
This paper presents a method for preparing arbitrary logical quantum states using a four-legged cat code that can suppress major types of quantum errors. The approach achieves high fidelity (error rates around 10^-4) and is designed to work with current superconducting quantum hardware.
Key Contributions
- Complete framework for fault-tolerant preparation of arbitrary logical states in cat codes
- Demonstration of quadratic error suppression confirming first-order error elimination
- Scalable protocol compatible with current superconducting hardware achieving 10^-4 logical infidelities
View Full Abstract
Preparing high-fidelity logical states is a central challenge in fault-tolerant quantum computing, yet existing approaches struggle to balance control complexity against resource overhead. Here, we present a complete framework for the fault-tolerant preparation of arbitrary logical states encoded in the four-legged cat code. This framework is engineered to suppress the dominant incoherent errors, including excitation decay and dephasing in both the bosonic mode and the ancilla via error detection. Numerical simulations with experimentally realistic parameters on a 3D superconducting cavity platform yield logical infidelities on the order of $10^{-4}$. A scaling analysis confirms that the logical error rate grows nearly quadratically with the physical error rate, confirming that all first-order errors are fully suppressed. Our protocol is compatible with current hardware and is scalable to multiple bosonic modes, providing a resource-efficient foundation for magic state preparation and higher-level concatenated quantum error correction.
Near-single-domain superconducting aluminum films on GaAs(111)A with exceptional crystalline quality for scalable quantum circuits
This paper demonstrates a breakthrough method for growing extremely high-quality aluminum superconducting films on semiconductor substrates using molecular beam epitaxy, achieving unprecedented crystalline uniformity that could enable more reliable and scalable superconducting quantum circuits.
Key Contributions
- Achieved record-low twin-domain ratios of 0.00005 for aluminum films on GaAs substrates
- Demonstrated exceptional crystalline quality with narrow FWHM values and atomically smooth interfaces
- Established a scalable materials platform for high-coherence superconducting qubits with critical temperatures approaching bulk values
View Full Abstract
We have reproducibly grown near-single-domain superconducting aluminum (Al) films on GaAs(111)A wafers using molecular beam epitaxy. Synchrotron X-ray diffraction revealed twin-domain ratios of 0.00005 and 0.0003 for 19.4-nm- and 9.6-nm-thick films, respectively, the lowest reported for Al on any substrate and long considered unattainable for practical device platforms. Azimuthal scans across off-normal Al{$11\bar{1}$} reflections exhibit narrow full width at half maximum (FWHM) values down to $0.55^\circ$, unmatched by epi-Al grown by any other method. Normal scans showed a well-defined (111) orientation with pronounced Pendellösung fringes, and $\theta$-rocking-curve FWHM values down to $0.018^\circ$; the former indicates abrupt film-substrate and oxide-film interfaces. Electron backscatter diffraction mapping confirms macroscopic in-plane uniformity and the absence of $\Sigma$3 twin domains. Atomic force microscopy and scanning transmission electron microscopy confirmed atomically smooth surfaces and abrupt heterointerfaces. The films exhibit critical temperatures approaching bulk values, establishing a materials platform for scalable, high-coherence superconducting qubits.
Fault-tolerant interfaces for quantum LDPC codes
This paper develops fault-tolerant interfaces for quantum LDPC codes that enable quantum state preparation with constant space overhead, improving on previous methods that required polylogarithmic overhead. The work focuses on creating efficient protocols for changing protection levels in quantum error correction codes while maintaining fault tolerance.
Key Contributions
- Development of fault-tolerant interfaces for quantum LDPC codes with constant space overhead
- Construction of decoders that can change protection levels by arbitrary amounts while preventing error accumulation
View Full Abstract
The preparation of a quantum state using a noisy quantum computer (gate noise strength $\delta$) will necessarily affect an $O(\delta)$-fraction of the qubits, no matter which protocol is used. Here, we show that fault-tolerant quantum state preparation can be achieved with constant space overhead, improving on previous constructions requiring polylogarithmic overhead. To achieve this, we add to the toolbox of fault-tolerant schemes for circuits with quantum input and output. More specifically, we construct fault-tolerant interfaces that decrease the level of protection for quantum low-density parity-check (LDPC) codes. When information is encoded in multiple code blocks, our interfaces have constant space overhead. In our decoder construction, which changes the level of protection by an arbitrary amount, we circumvent bottlenecks of error pileup and overhead by gradually lowering the level of encoding while increasing the number of blocks on which decoding is carried out simultaneously.
Adaptive Aborting Schemes for Quantum Error Correction Decoding
This paper introduces adaptive abort schemes for quantum error correction that can terminate syndrome measurements early when errors are likely, reducing computational overhead while maintaining or improving error correction performance. The methods show 5-60% efficiency improvements over standard approaches across different quantum error correcting codes.
Key Contributions
- Introduction of first adaptive abort schemes for quantum error correction (AdAbort and OSLA)
- Demonstration of 5-60% efficiency improvements in decoder performance for surface and color codes
- Real-time syndrome-based decision making framework that balances measurement costs against restart costs
View Full Abstract
Quantum error correction (QEC) is essential for realizing fault-tolerant quantum computation. Current QEC controllers execute all scheduled syndrome (parity-bit) measurement rounds before decoding, even when early syndrome data indicates that the run will result in an error. The resulting excess measurements increase the decoder's workload and system latency. To address this, we introduce an adaptive abort module that simultaneously reduces decoder overhead and suppresses logical error rates in surface codes and color codes under an existing QEC controller. The key idea is that initial syndrome information allows the controller to terminate risky shots early before additional resources are spent. An effective scheme balances the cost of further measurement against the restart cost and thus increases decoder efficiency. Adaptive abort schemes dynamically adjust the number of syndrome measurement rounds per shot using real-time syndrome information. We consider three schemes: fixed-depth (FD) decoding (the standard non-adaptive approach used in current state-of-the-art QEC controllers), and two adaptive schemes, AdAbort and One-Step Lookahead (OSLA) decoding. For surface and color codes under a realistic circuit-level depolarizing noise model, AdAbort substantially outperforms both OSLA and FD, yielding higher decoder efficiency across a broad range of code distances. Numerically, as the code distance increases from 5 to 15, AdAbort yields an improvement that increases from 5% to 35% for surface codes and from 7% to 60% for color codes. To our knowledge, these are the first adaptive abort schemes considered for QEC. Our results highlight the potential importance of abort rules for increasing efficiency as we scale to large, resource-intensive quantum architectures.
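The cost trade-off behind aborting can be illustrated with a toy threshold rule. This is not the paper's AdAbort or OSLA scheme (their decision rules are not specified here): each shot runs up to D syndrome rounds, each round produces defects at an assumed rate, and the controller aborts a shot as soon as the cumulative defect count crosses a threshold, saving the remaining rounds relative to the fixed-depth baseline.

```python
import random

def simulate(shots, depth, p_defect, threshold, rng):
    """Compare fixed-depth measurement cost against a threshold abort rule."""
    fixed_rounds = shots * depth       # baseline: always run all rounds
    adaptive_rounds = 0
    aborted = 0
    for _ in range(shots):
        defects = 0
        for _round in range(depth):
            adaptive_rounds += 1
            defects += rng.random() < p_defect
            if defects > threshold:    # risky shot: stop measuring early
                aborted += 1
                break
    return fixed_rounds, adaptive_rounds, aborted

rng = random.Random(7)
fixed, adaptive, aborted = simulate(shots=10_000, depth=15,
                                    p_defect=0.05, threshold=2, rng=rng)
print(f"fixed-depth rounds: {fixed}")
print(f"adaptive rounds:    {adaptive} ({100 * (1 - adaptive / fixed):.1f}% saved)")
print(f"aborted shots:      {aborted}")
```

An effective threshold balances the rounds saved on risky shots against the cost of restarting them, which is exactly the trade-off the paper's schemes optimize with real-time syndrome information.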
Device for MHz-rate rastering of arbitrary 2D optical potentials
This paper presents a new optical device that can rapidly manipulate neutral atom arrays by creating arbitrary 2D optical patterns at MHz refresh rates, overcoming current limitations of existing systems that can only move atoms row-by-row or column-by-column. The device enables simultaneous transport of atomic qubits in any direction with 40x40 resolution, scalable to 100x100.
Key Contributions
- Design of MHz-rate optical rastering device for arbitrary 2D patterns in neutral atom arrays
- Demonstration of enhanced qubit connectivity through simultaneous multi-directional atomic qubit transport
View Full Abstract
Current architectures for neutral-atom arrays utilize devices such as acousto-optic deflectors (AODs) and spatial light modulators (SLMs) to multiplex a single classical control line into N qubit control lines. Dynamic control is speed-limited by the response time of AODs, and geometrically constrained to respect a product structure, limiting motion to row-by-row or column-by-column moves. We propose an optical rastering device that can produce any 2D pattern, not limited to grids, at 1 MHz refresh rates. We demonstrate a design with a resolution of 40 x 40 that can be further scaled up to 100 x 100 to match existing and future neutral atom devices. The ability to simultaneously transport atomic qubits in arbitrary directions will enhance qubit connectivity, enable more efficient circuits, and may have broader applications ranging from LiDAR to fluorescence microscopy.
Hardware-Agnostic Modeling of Quantum Side-Channel Leakage via Conditional Dynamics and Learning from Full Correlation Data
This paper studies quantum side-channel attacks where an adversarial probe qubit monitors a target qubit during hidden quantum gate sequences to extract secret information. The authors develop both theoretical models and machine learning methods to predict optimal coupling strengths for such attacks and demonstrate how quantum information can leak through side channels.
Key Contributions
- Hardware-agnostic framework for modeling quantum side-channel leakage through probe qubits
- Theoretical prediction of optimal 'Goldilocks' coupling bands for side-channel attacks based on circuit depth
- Machine learning decoder that can extract gate sequences from correlation data across different coupling and noise conditions
View Full Abstract
We study a sequential coherent side-channel model in which an adversarial probe qubit interacts with a target qubit during a hidden gate sequence. Repeating the same hidden sequence for $N$ shots yields an empirical full-correlation record: the joint histogram $\widehat{P}_g(b)$ over probe bit-strings $b\in\{0,1\}^k$, which is a sufficient statistic for classical post-processing under independent and identically distributed (i.i.d.) shots but grows exponentially with circuit depth. We first describe this sequential probe framework in a coupling- and measurement-agnostic form, emphasizing the scaling of the observation space and why exact analytic distinguishability becomes intractable with circuit depth. We then specialize to a representative instantiation (a controlled-rotation probe coupling with fixed projective readout and a commuting $R_x$ gate alphabet) where we (i) derive a depth-dependent leakage envelope whose maximizer predicts a "Goldilocks" coupling band as a function of depth, and (ii) provide an operational decoder via machine learning: a single parameter-conditioned map from $\widehat{P}_g$ to Alice's per-step gate labels, generalizing across coupling and noise settings without retraining. Experiments over broad coupling and noise grids show that strict sequence recovery concentrates near the predicted coupling band and degrades predictably under decoherence and finite-shot estimation.
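The shape of the full-correlation record is easy to demonstrate in a toy model (not the paper's instantiation): assume two gate labels per step and a probe bit at step $t$ that is Bernoulli with bias 0.2 or 0.8 depending on the hidden label. The joint histogram over $k$-bit strings has up to $2^k$ cells, but under i.i.d. shots its per-step marginals already identify the labels, which this sketch exploits in place of the paper's learned decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
k, shots = 5, 4000
hidden = np.array([1, 0, 1, 1, 0])        # Alice's secret per-step gate labels
bias = np.where(hidden == 1, 0.8, 0.2)    # assumed probe-bit bias per step

# shots x k probe bits; each row is one k-bit record b
records = (rng.random((shots, k)) < bias).astype(int)

# joint histogram \hat{P}_g(b): at most 2^k cells, exponential in depth k
strings, counts = np.unique(records, axis=0, return_counts=True)
assert counts.sum() == shots and len(strings) <= 2 ** k

# decode each step from its marginal frequency (threshold at 1/2)
decoded = (records.mean(axis=0) > 0.5).astype(int)
print("hidden :", hidden)
print("decoded:", decoded)
```

The exponential growth of the histogram with depth, versus the linear growth of the marginals used here, is the gap that motivates a learned parameter-conditioned decoder in the paper.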
Self-dual Stacked Quantum Low-Density Parity-Check Codes
This paper develops a new method for constructing self-dual quantum low-density parity-check (qLDPC) codes by stacking non-self-dual codes, creating several new code families with improved parameters. The work addresses a key challenge in fault-tolerant quantum computing by enabling easier implementation of logical operations while maintaining high encoding rates and error correction capabilities.
Key Contributions
- Novel stacking method for constructing self-dual qLDPC codes from non-self-dual codes
- Development of multiple new code families including double-chain bicycle codes and double-layer bivariate bicycle codes
- Numerical demonstration of improved logical failure rates and high pseudo-thresholds under circuit-level noise
View Full Abstract
Quantum low-density parity-check (qLDPC) codes are promising candidates for fault-tolerant quantum computation due to their high encoding rates and distances. However, implementing logical operations using qLDPC codes presents significant challenges. Previous research has demonstrated that self-dual qLDPC codes facilitate the implementation of transversal Clifford gates. Here we introduce a method for constructing self-dual qLDPC codes by stacking non-self-dual qLDPC codes. Leveraging this methodology, we develop double-chain bicycle codes, double-layer bivariate bicycle (BB) codes, double-layer twisted BB codes, and double-layer reflection codes, many of which exhibit favorable code parameters. Additionally, we conduct numerical calculations to assess the performance of these codes as quantum memory under the circuit-level noise model, revealing that the logical failure rate can be significantly reduced with high pseudo-thresholds.
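As a minimal warm-up for the self-duality property (this is the classical notion, not the paper's stacking construction): a linear $[n,k]$ code is self-dual when $k = n/2$ and its generator matrix $G$ satisfies $G G^T = 0$ over $\mathbb{F}_2$. The extended [8,4,4] Hamming code is a standard self-dual example.

```python
import numpy as np

# generator matrix of the extended [8,4,4] Hamming code
G = np.array([
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
])

def is_self_dual(G):
    # rows are assumed linearly independent (true here: identity block)
    k, n = G.shape
    self_orthogonal = not np.any((G @ G.T) % 2)   # every pair of rows orthogonal
    return self_orthogonal and 2 * k == n         # dual has matching dimension

print(is_self_dual(G))  # True: the code equals its own dual
```

Quantum CSS constructions inherit analogous symmetry constraints between the X- and Z-stabilizers, which is what makes self-dual qLDPC codes friendly to transversal Clifford gates.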
Realizing a Universal Quantum Gate Set via Double-Braiding of SU(2)k Anyon Models
This paper investigates using double-braiding techniques with SU(2)k anyon models to implement universal quantum gates for topological quantum computing. The authors show that their approach can synthesize both single-qubit and two-qubit gates while requiring manipulation of fewer physical anyons than previous methods.
Key Contributions
- Derived explicit double elementary braiding matrices for SU(2)k anyon models and demonstrated universal gate synthesis
- Developed a protocol that reduces the number of physical anyons requiring manipulation in topological quantum computation
- Achieved fault-tolerant accuracy for single-qubit gates using GA-enhanced Solovay-Kitaev Algorithm with only 2-level decomposition
View Full Abstract
We systematically investigate the implementation of a universal gate set via double-braiding within SU(2)k anyon models. The explicit form of the double elementary braiding matrices (DEBMs) in these models is derived from the F-matrices and R-symbols obtained via the q-deformed representation theory of SU(2). Using these DEBMs, standard single-qubit gates are synthesized up to a global phase by a Genetic Algorithm-enhanced Solovay-Kitaev Algorithm (GA-enhanced SKA), achieving the accuracy required for fault-tolerant quantum computation with only 2-level decomposition. For two-qubit entangling gates, Genetic Algorithm (GA) yields braidwords of 30 braiding operations that approximate the local equivalence class [CNOT]. Theoretically, we demonstrate that performing double-braiding in a three-anyon (six-anyon) encoding of a single qubit (two qubits) is topologically equivalent to a protocol requiring the physical manipulation of only one (three) anyons to execute arbitrary braids. Our numerical results provide strong evidence that double-braiding in SU(2)k anyon models is capable of universal quantum computation. Moreover, the proposed protocol offers a potential new strategy for significantly reducing the number of non-Abelian anyons that need to be physically manipulated in future braiding-based topological quantum computations (TQC).
Tensor Decomposition for Non-Clifford Gate Minimization
This paper develops new algebraic methods to minimize the number of non-Clifford gates (specifically Toffoli and T gates) needed in quantum circuits by connecting the optimization problem to tensor decomposition over finite fields. The methods achieve better or equal results compared to previous approaches while being dramatically more computationally efficient.
Key Contributions
- Development of algebraic methods connecting Toffoli gate minimization to tensor decomposition over F_2
- Significant computational efficiency improvements achieving same results with single CPU vs thousands of TPUs
- Matching or improving all reported results on standard benchmarks for both Toffoli and T-count optimization
View Full Abstract
Fault-tolerant quantum computation requires minimizing non-Clifford gates, whose implementation via magic state distillation dominates the resource costs. While $T$-count minimization is well-studied, dedicated $CCZ$ factories shift the natural target to direct Toffoli minimization. We develop algebraic methods for this problem, building on a connection between Toffoli count and tensor decomposition over $\mathbb{F}_2$. On standard benchmarks, these methods match or improve all reported results for both Toffoli and $T$-count, with most circuits completing in under a minute on a single CPU instead of thousands of TPUs used by prior work.
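The Toffoli-count/tensor-rank connection the paper builds on can be seen concretely in the phase-polynomial picture: the cubic part of a CNOT+Toffoli circuit's phase polynomial defines a 3-tensor $T$ over $\mathbb{F}_2$, each Toffoli contributes one rank-one term $a \otimes b \otimes c$, and the minimal Toffoli count is the rank of $T$. The brute-force search below is a toy for tiny instances only; the paper's algebraic methods are far more sophisticated.

```python
import numpy as np
from itertools import combinations, product

def rank_one(a, b, c):
    # outer product a (x) b (x) c over F_2
    return np.einsum('i,j,k->ijk', a, b, c) % 2

def f2_tensor_rank(T, n, max_rank=4):
    """Smallest number of rank-one terms summing (mod 2) to T, by brute force."""
    if not T.any():
        return 0
    vecs = [np.array(v) for v in product([0, 1], repeat=n) if any(v)]
    terms = [rank_one(a, b, c) for a in vecs for b in vecs for c in vecs]
    for r in range(1, max_rank + 1):
        for combo in combinations(terms, r):
            if not np.any((sum(combo) - T) % 2):
                return r
    raise ValueError("rank exceeds max_rank")

# target: T = e0(x)e0(x)e1 + e1(x)e1(x)e0, i.e. two Toffoli-like terms
e0, e1 = np.array([1, 0]), np.array([0, 1])
T = (rank_one(e0, e0, e1) + rank_one(e1, e1, e0)) % 2
print(f2_tensor_rank(T, n=2))  # 2: no single rank-one term matches
```

Even this tiny case shows why the problem is hard: the number of candidate rank-one terms grows as $(2^n - 1)^3$, so exhaustive search dies quickly and structured algebraic methods take over.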
Do we have a quantum computer? Expert perspectives on current status and future prospects
This paper presents interviews with quantum computing experts about the current state of quantum computing technology, timelines for fault-tolerant systems, and realistic expectations for future quantum computer development and deployment.
Key Contributions
- Expert consensus on realistic timelines for fault-tolerant quantum computers (decade for small systems, several decades for scalable systems)
- Assessment that quantum computers will remain specialized tools in data centers rather than personal devices
- Evaluation of current NISQ-era machines as legitimate quantum computers despite limitations
View Full Abstract
The rapid growth of quantum information science and technology (QIST) in the 21st century has created both excitement and uncertainty about the field's trajectory. This qualitative study presents perspectives from leading quantum researchers, who are educators, on fundamental questions frequently posed by students, the public, and the media regarding QIST. Through in-depth interviews, we explored several issues related to QIST including the following key areas: the current state of quantum computing in the noisy intermediate-scale quantum (NISQ) era and timelines for fault-tolerant quantum computers, the feasibility of personal quantum computers in our pockets, and promising qubit architectures for future development. Our findings reveal diverse yet convergent perspectives on these issues. While experts agree that the current machines with physical qubits should be called quantum computers, most estimated that it will take a decade to build a small fault-tolerant quantum computer, and several decades to achieve scalable systems capable of running Shor's factoring algorithm with quantum advantage. Regarding carrying a quantum computer in the pocket, experts viewed quantum computers as specialized tools that will remain in central locations such as data centers and can be accessed remotely for applications for which they are particularly effective compared to classical computers. Quantum researchers suggested that multiple platforms show promise, with no clear winner emerging. These insights provide valuable guidance for educators, policymakers, and the broader community in establishing realistic expectations for developments in this exciting field, and can help educators clarify student doubts about these important yet confusing issues related to quantum technologies.
Beyond Reinforcement Learning: Fast and Scalable Quantum Circuit Synthesis
This paper presents a new method for quantum circuit synthesis that uses supervised learning to estimate the minimum description length of quantum operations and combines this with beam search to find efficient gate sequences. The approach achieves faster synthesis times and better success rates than existing methods while using a lightweight model that generalizes across different numbers of qubits.
Key Contributions
- Novel supervised learning approach for approximating minimum description length of residual unitaries
- Lightweight model with zero-shot generalization across different qubit counts
- Improved synthesis speed and success rates compared to state-of-the-art methods
View Full Abstract
Quantum unitary synthesis addresses the problem of translating abstract quantum algorithms into sequences of hardware-executable quantum gates. Solving this task exactly is infeasible in general due to the exponential growth of the underlying combinatorial search space. Existing approaches suffer from misaligned optimization objectives, substantial training costs, and limited generalization across different qubit counts. We mitigate these limitations by using supervised learning to approximate the minimum description length of residual unitaries and combining this estimate with stochastic beam search to identify near-optimal gate sequences. Our method relies on a lightweight model with zero-shot generalization, substantially reducing training overhead compared to prior baselines. Across multiple benchmarks, we achieve faster wall-clock synthesis times while exceeding state-of-the-art methods in terms of success rate for complex circuits.
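Heuristic-guided beam search over gate sequences is easy to sketch. In this toy (not the paper's method), the phase-invariant distance to the target stands in for the learned minimum-description-length estimate, and the gate set and target are illustrative single-qubit choices.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
GATES = {'H': H, 'T': T, 'S': S}

def dist(U, V):
    # distance up to global phase: 0 iff U = e^{i phi} V (2x2 unitaries)
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

def beam_search(target, width=3, depth=6, tol=1e-9):
    beam = [([], np.eye(2, dtype=complex))]      # (gate names, composed unitary)
    for _ in range(depth):
        # expand every beam entry by every gate, then keep the `width` best
        pool = [(seq + [name], g @ U)
                for seq, U in beam for name, g in GATES.items()]
        pool.sort(key=lambda item: dist(item[1], target))
        beam = pool[:width]
        if dist(beam[0][1], target) < tol:
            return beam[0][0]
    return None

target = T @ H              # apply H first, then T
print(beam_search(target))  # ['H', 'T']
```

Swapping `dist` for a learned estimate of how many gates remain is what turns this greedy-looking search into the paper's MDL-guided variant.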
Faster Optimal Decoder for Graph Codes with a Single Logical Qubit
This paper develops a more efficient decoding algorithm for quantum error-correcting codes based on graph states by exploiting structural properties to create a hierarchical decoder that runs in polynomial time while maintaining optimal performance at lower hierarchy levels.
Key Contributions
- Development of a polynomial-time hierarchical decoder for graph codes that avoids full maximum-likelihood decoding
- Demonstration that post-measurement states follow well-defined structures determined by syndrome measurements, enabling more efficient error correction
View Full Abstract
In this work, we develop an efficient decoding method for graph codes, a class of stabilizer quantum error-correcting codes constructed from graph states. While optimal decoding is generally NP-hard, we propose a faster decoder exploiting the structural properties of the underlying graph states. Although distinct error patterns may yield the same syndrome, we demonstrate that the post-measurement state follows a well-defined structure determined by the projective syndrome measurement. Building on this idea, we introduce a hierarchical decoder in which each level can be solved in polynomial time. Additionally, this decoder achieves optimal decoding performance at the lower levels of the hierarchy. This strategy avoids the need for full maximum-likelihood decoding of graph codes. Numerical results illustrate the efficiency and effectiveness of the proposed approach.
Homological origin of transversal implementability of logical diagonal gates in quantum CSS codes
This paper uses homology theory to characterize when logical diagonal gates can be implemented transversally in quantum CSS error-correcting codes. The authors prove that the solvability of implementing these gates with finer rotation angles is completely determined by mathematical structures called Bockstein homomorphisms.
Key Contributions
- Formulated the refinement problem for transversal logical diagonal gates and showed its solvability is characterized by Bockstein homomorphisms
- Proved conditions for existence of transversal implementations of logical Pauli Z rotations in general CSS codes based on X-stabilizer generator properties
- Identified canonical homological obstructions to transversal implementability in quantum error correction
View Full Abstract
Transversal Pauli Z rotations provide a natural route to fault-tolerant logical diagonal gates in quantum CSS codes, yet their capability is fundamentally constrained. In this work, we formulate the refinement problem of realizing a logical diagonal gate by a transversal implementation with a finer discrete rotation angle and show that its solvability is completely characterized by the Bockstein homomorphism in homology theory. Furthermore, we prove that the linear independence of the X-stabilizer generators together with the commutativity condition modulo a power of two ensures the existence of transversal implementations of all logical Pauli Z rotations with discrete angles in general CSS codes. Our results identify a canonical homological obstruction governing transversal implementability and provide a conceptual foundation for a formal theory of transversal structures in quantum error correction.
A hardware-native time-frequency GKP logical qubit toward fault-tolerant photonic operation
This paper demonstrates a new type of fault-tolerant quantum computing qubit called a GKP logical qubit using single photons encoded in time and frequency domains. The approach provides a hardware-compatible way to implement quantum error correction in photonic quantum computers by naturally mapping common noise sources to correctable errors.
Key Contributions
- First hardware-native implementation of time-frequency GKP logical qubits using single photons
- Demonstration that timing jitter and phase noise naturally map to correctable displacement errors
- Concrete pathway for integrating GKP error correction into photonic quantum computing architectures
View Full Abstract
We realize a hardware-native time--frequency Gottesman--Kitaev--Preskill (GKP) logical qubit encoded in the continuous phase space of single photons, establishing a propagating photonic implementation of bosonic grid encoding. Finite-energy grid states are generated deterministically using coherently driven entangled nonlinear biphoton sources that produce single-photon frequency-comb supermodes. An optical-frequency-comb reference anchors the time--frequency phase space and enforces commuting displacement stabilizers directly at the hardware level, continuously defining the logical subspace. Timing jitter, spectral drift, and phase noise map naturally onto Gaussian displacement errors within this lattice, yielding intrinsic correctability inside a stabilizer cell. Logical operations correspond to experimentally accessible phase and delay controls, enabling deterministic state preparation and manipulation. Building on the modal time--frequency GKP framework, we identify a concrete pathway toward active syndrome extraction and deterministic displacement recovery using ancillary grid states and interferometric time--frequency measurements. These primitives establish a hardware-compatible route for integrating the time--frequency GKP logical layer into erasure-aware and fusion-based fault-tolerant photonic architectures.
High-fidelity Quantum Readout Processing via an Embedded SNAIL Amplifier
This paper proposes embedding a SNAIL (Superconducting Nonlinear Asymmetric Inductive eLement) directly into quantum readout circuits to improve the fidelity of quantum state measurements while reducing hardware complexity. The approach enables on-chip signal processing and amplification, eliminating the need for bulky external components typically required in superconducting quantum processors.
Key Contributions
- Novel embedded SNAIL architecture for on-chip quantum readout processing
- Enhanced readout fidelity with reduced measurement-induced decoherence
- Simplified hardware complexity by eliminating external isolators and amplifiers
View Full Abstract
Scalable, high-fidelity quantum-state readout remains a central challenge in the development of large-scale superconducting quantum processors. Conventional dispersive readout architectures depend on bulky isolators and external amplifiers, introducing significant hardware overhead and limiting opportunities for on-chip information processing. In this work, we propose a novel approach that embeds a Superconducting Nonlinear Asymmetric Inductive eLement (SNAIL) into the readout chain, enabling coherent and directional processing of readout signals directly on-chip. This embedded SNAIL platform allows frequency-multiplexed resonators to interact through engineered couplings, forming a tunable readout-amplifier-output architecture that can manipulate quantum readout data \textit{in situ}. Through theoretical modeling and numerical optimization, we show that this platform enhances fidelity, suppresses measurement-induced decoherence, and simplifies hardware complexity. These results establish the hybridized SNAIL as a promising building block for scalable and coherent quantum-state readout in next-generation processors.
Single snapshot non-Markovianity of Pauli channels
This paper studies noise in quantum computers by analyzing Pauli channels and finds that the commonly assumed Markovian (memoryless) noise models are often invalid. The researchers show that real quantum computer noise frequently exhibits non-Markovian behavior with negative rates, and demonstrate improved noise prediction accuracy when accounting for this complexity.
Key Contributions
- Demonstrated that random Pauli channels are almost always non-Markovian with probability converging doubly exponentially to unity
- Showed that negative rates in noise generators are generic even for physically motivated Markovian noise models
- Generalized probabilistic error amplification and cancellation techniques to non-Markovian generators
- Experimentally validated on superconducting qubits that allowing negative rates improves noise model accuracy
View Full Abstract
Pauli channels are widely used to describe errors in quantum computers, particularly when noise is shaped via Pauli twirling. A common assumption is that such channels admit a Markovian generator, namely a Pauli-Lindblad model with non-negative rates, but the validity of this assumption has not been systematically examined. Here, using CP-indivisibility as our criterion for non-Markovianity, we study multi-qubit Pauli channels from a single snapshot of the dynamics. We find that while the generator always has the same structure as the standard Pauli-Lindblad model, the rates may be negative or complex. We show that random Pauli channels are almost always non-Markovian, with the probability of encountering a negative rate converging doubly exponentially to unity with the number of qubits. For physically motivated noise models shaped by Pauli twirling, including single-qubit over-rotations and two-qubit amplitude damping errors, we find that negative rates are generic, even when the underlying physical noise is Markovian. We generalize probabilistic error amplification and cancellation to non-Markovian generators, and quantify the sampling overhead introduced by negative and complex rates. Experiments on superconducting qubits confirm that allowing negative rates in the learned noise model yields more accurate predictions than restricting to non-negative rates.
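The paper's central point has a compact single-qubit illustration: a perfectly valid (CPTP) Pauli channel need not admit a Pauli-Lindblad generator with non-negative rates. For one qubit with Pauli eigenvalues $(\lambda_X, \lambda_Y, \lambda_Z)$, inverting $\log \lambda_X = -2(c_Y + c_Z)$ and its cyclic permutations gives the rates below; $c_X < 0$ whenever $\lambda_X < \lambda_Y \lambda_Z$. The example eigenvalues are ours, chosen to land in that regime while keeping all Pauli error probabilities non-negative.

```python
import numpy as np

def pauli_probs(lx, ly, lz):
    # Pauli error probabilities (p_I, p_X, p_Y, p_Z); all >= 0 for a valid channel
    return np.array([1 + lx + ly + lz, 1 + lx - ly - lz,
                     1 - lx + ly - lz, 1 - lx - ly + lz]) / 4

def lindblad_rates(lx, ly, lz):
    # invert log lambda_X = -2 (c_Y + c_Z) and cyclic permutations
    gx, gy, gz = np.log([lx, ly, lz])
    return np.array([(gx - gy - gz) / 4,
                     (gy - gx - gz) / 4,
                     (gz - gx - gy) / 4])

lam = (0.805, 0.9, 0.9)          # valid channel with lambda_X < lambda_Y * lambda_Z
p = pauli_probs(*lam)
c = lindblad_rates(*lam)
print("probabilities:", np.round(p, 5))   # all non-negative: the channel is CPTP
print("rates:        ", np.round(c, 5))   # c_X < 0: no Markovian Pauli-Lindblad generator
```

This is exactly the single-snapshot phenomenon: the generator has the Pauli-Lindblad structure, but one recovered rate is negative, so insisting on non-negative rates can only misfit the data.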
Optimized Compilation of Logical Clifford Circuits
This paper develops improved methods for compiling logical quantum circuits in fault-tolerant quantum computing by treating simulation primitives as single blocks rather than compiling gate-by-gate. The approach reduces circuit depth and error rates while maintaining compatibility with quantum error correction codes.
Key Contributions
- Development of block-based compilation methodology for logical Clifford circuits that reduces circuit depth compared to gate-by-gate approaches
- Demonstration of significant error-rate reductions in compiled circuits with improved realizations for different gate placement patterns
View Full Abstract
Fault-tolerant quantum computing hinges on efficient logical compilation, in particular, translating high-level circuits into code-compatible implementations. Gate-by-gate compilation often yields deep circuits, requiring significant overhead to ensure fault-tolerance. As an alternative, we investigate the compilation of primitives from quantum simulation as single blocks. We focus our study on the [[n,n-2,2]] code family, which allows for the exhaustive comparison of potential compilation primitives on small circuit instances. Based upon that, we then introduce a methodology that lifts these primitives into size-invariant, depth-efficient compilation strategies. This recovers known methods for circuits with moderate Hadamard counts and yields improved realizations for sparse and dense placements. Simulations show significant error-rate reductions in the compiled circuits. We envision the approach as a core component of peephole-based compilers. Its flexibility and low hand-crafting burden make it readily extensible to other circuit structures and code families.
Design and Operation of Wafer-Scale Packages Containing >500 Superconducting Qubits
This paper presents a wafer-scale packaging system that can house over 500 superconducting qubits on a single chip, demonstrating that large arrays of qubits can be operated without degrading their performance. The package is designed to work at extremely cold temperatures and shows promising qubit coherence times and readout fidelities.
Key Contributions
- Development of wafer-scale packaging architecture supporting >500 superconducting qubits
- Demonstration that large-scale integration maintains qubit performance with ~100 μs coherence times
- Validation of thermal management and RF interference suppression at millikelvin temperatures
- High-throughput metrology system for fabrication process optimization
View Full Abstract
Packages capable of supporting large arrays of high-coherence superconducting qubits are vital for the realisation of fault-tolerant quantum computers and the necessary high-throughput metrology required to optimise fabrication and manufacturing processes. We present a wafer-scale packaging architecture supporting over 500 qubits on a single 3-inch die. The package is engineered to suppress parasitic RF modes, and to mitigate material loss through simulation-informed design while managing differential thermal contraction to ensure robust operation at millikelvin temperatures. System-level heat-load calculations from a large wiring payload show this package may be operated in commercial dilution refrigerators. Measurements of the qubits loaded into the package show median $T_1$, $T_{2e} \sim 100~\mu$s ($\sim$100 qubits) alongside readout with median fidelity of 97.5% (54 qubits) and a median qubit temperature of 36 mK (54 qubits). These results validate the performance of these packages and demonstrate that large-scale integration can be achieved without compromising device performance. Finally, we highlight the utility of these packages as a tool for high throughput feedback on qubit figures of merit over large sample sizes, allowing identification of performance outliers in the tails of the coherence distribution, a critical capability for informing fabrication and manufacture of high-quality qubits and quantum processors.
Floquet implementation of a 3d fermionic toric code with full logical code space
This paper presents a 3D Floquet quantum error-correcting code that implements a fermionic toric code while preserving all logical qubits throughout the measurement process. The work identifies a specific 3D lattice geometry that enables fault-tolerant quantum computation through time-periodic measurement sequences, avoiding the information loss that typically occurs in naive sequential measurement approaches.
Key Contributions
- Development of a 3D Floquet quantum error-correcting code that preserves all three logical qubits during measurement sequences
- Identification of a novel 3D lattice geometry that generalizes the Kekulé lattice structure to avoid logical information collapse
- Design of measurement protocols that extract complete error syndrome information without disturbing the logical subspace
View Full Abstract
Floquet quantum error-correcting codes provide an operationally economical route to fault tolerance by dynamically generating stabilizer structures using only two-body Pauli measurements. But while it is well established that stabilizer codes in higher spatial dimensions gain additional levels of intrinsic robustness, higher-dimensional Floquet codes have hitherto been explored only in limited scope. Here we introduce a 3d generalization of a Floquet code whose instantaneous stabilizer group realizes a 3d fermionic toric code, while crucially preserving all three logical qubits throughout the entire measurement sequence. One central ingredient is the identification of a 3d lattice geometry that generalizes the features of the Kekulé lattice underlying the 2d Hastings-Haah code - specifically, a structure where deleting any one edge color yields a two-color subgraph that decomposes into short, closed loops rather than homologically nontrivial chains. This loop property avoids the collapse of logical information that plagues naive sequential two-color measurement schedules on many 3d lattices. Although, for our lattice geometry, a simple 3-round cycle that sequentially measures the three types of parity checks does not expose the full error syndrome set, we show that one can append a measurement sequence to extract the missing syndromes without disturbing the logical subspace. Beyond code design, 3d tricoordinated lattice geometries define a family of 3d monitored Kitaev models, in which random measurements of the non-commuting parity checks give rise to dynamically created entangled phases with nontrivial topology. In discussing the general structure of their underlying phase diagrams and, in particular, the existence of certain quantum critical points, we again make a connection to the general preservation of logical information in time-ordered Floquet protocols.
Non-Abelian Quantum Low-Density Parity Check Codes and Non-Clifford Operations from Gauging Logical Gates via Measurements
This paper develops new methods for creating non-Abelian quantum low-density parity check (qLDPC) codes by using measurement and feedback to gauge transversal Clifford gates. The work provides two different construction approaches and shows how these methods enable magic state preparation and non-Clifford operations on any qLDPC code.
Key Contributions
- Two novel construction methods for non-Abelian qLDPC codes via gauging transversal Clifford gates
- Demonstration that gauging procedures enable magic state preparation and non-Clifford operations on any qLDPC code
- Connection between gauged codes and 2D non-Abelian topological order properties
View Full Abstract
In this work, we introduce constructions for non-Abelian qLDPC codes obtained by gauging transversal Clifford gates using measurement and feedback. In particular, we identify two qualitatively different approaches to gauging qLDPC codes to obtain their non-Abelian counterparts. The first approach applies to codes that exhibit a generalized form of Poincaré duality and leads to a qLDPC non-Abelian Clifford stabilizer code, whose stabilizers are reminiscent of the action of a Type-III twisted quantum double. Our second approach applies to general qLDPC codes, and uses a graph of ancilla qubits which may be tailored to properties of the input codes to gauge a single transversal gate. For both constructions, the resulting gauged codes are shown to have properties analogous to 2D non-Abelian topological order -- e.g. the analog of a single anyon on a torus. We conclude by demonstrating that our gauging procedures enable magic state preparation via the measurement of logical Clifford gates. Consequently, our gauging constructions offer a protocol for performing non-Clifford operations on any qLDPC code.
Millisecond-Scale Calibration and Benchmarking of Superconducting Qubits
This paper develops fast calibration techniques for superconducting qubits that can adjust qubit parameters in milliseconds using FPGA-based processing, addressing the problem that qubit performance drifts on sub-second timescales. The researchers demonstrate automated recalibration methods that maintain better gate performance than initial calibration over extended periods.
Key Contributions
- Development of millisecond-scale FPGA-based calibration workflow for superconducting qubits that eliminates CPU round trips
- Demonstration of continuous automated recalibration maintaining gate fidelity over 6 hours with 74,000+ recalibrations
View Full Abstract
Superconducting qubit parameters drift on sub-second timescales, motivating calibration and benchmarking techniques that can be executed on millisecond timescales. We demonstrate an on-FPGA workflow that co-locates pulse generation, data acquisition, analysis, and feed-forward, eliminating CPU round trips. Within this workflow, we introduce sparse-sampling and on-FPGA inference tools, including computationally efficient methods for estimation of exponential and sine-like response functions, as well as on-FPGA implementations of Nelder-Mead optimization and golden-section search. These methods enable low-latency primitives for readout calibration, spectroscopy, pulse-amplitude calibration, coherence estimation, and benchmarking. We deploy this toolset to estimate $T_1$ in 10 ms, optimize readout parameters in 100 ms, optimize pulse amplitudes in 1 ms, and perform Clifford randomized gate benchmarking in 107 ms on a flux-tunable superconducting transmon qubit. Running a closed-loop on-FPGA recalibration protocol continuously for 6 hours enables more than 74,000 consecutive recalibrations and yields gate errors that consistently retain better performance than the baseline initial calibration. Correlation analysis shows that recalibration suppresses coupling of gate error to control-parameter drift while preserving a coherence-linked performance. Finally, we quantify uncertainty versus time-to-decision under our sparse sampling approaches and identify optimal parameter regimes for efficient estimation of qubit and pulse parameters.
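Golden-section search is one of the on-FPGA optimization primitives named in the abstract. A minimal host-side Python sketch of the algorithm follows; the objective function is a hypothetical stand-in for a measured response curve (e.g. gate error vs. pulse amplitude), not the paper's firmware:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] via golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618, the golden ratio
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                  # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                  # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Hypothetical stand-in for a measured response: error vs. pulse amplitude,
# minimized at amplitude 0.42 (illustrative, not from the paper).
opt = golden_section_minimize(lambda x: (x - 0.42) ** 2, 0.0, 1.0)
```

The method is derivative-free and narrows the search interval by a fixed factor per iteration, which is what makes it attractive for low-latency hardware loops. (For brevity this sketch re-evaluates both interior points each iteration; a latency-optimized variant reuses one of them.)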
Control the qubit-qubit coupling with double superconducting resonators
This paper demonstrates experimental control of coupling between superconducting qubits using a double-resonator design, showing that qubit-qubit coupling can be tuned from effectively zero to gate-operation strength by adjusting qubit frequencies by less than 50 MHz. The approach offers fabrication advantages and reduced noise for scaling up superconducting quantum processors.
Key Contributions
- Experimental demonstration of tunable qubit-qubit coupling using double-resonator architecture
- Achievement of coupling control from off to gate-operational strength with small frequency shifts
- Simplified fabrication approach with reduced flux noise for scalable quantum processors
View Full Abstract
We experimentally studied the switching-off process in a double-resonator coupler superconducting quantum circuit. In both the frequency and time domains, we observed the variation of the effective qubit-qubit coupling as the qubits' frequencies were tuned. According to the measurement results, shifting the qubits' frequencies by less than 50 MHz tunes the effective qubit-qubit coupling strength from the switching-off point to the two-qubit-gate point (effective coupling larger than 5 MHz) in the double-resonator superconducting quantum circuit. The double-resonator coupler circuit offers simple fabrication, less flux noise, and reduced occupancy of dilution-refrigerator cables, which could make it a promising platform for future large-scale superconducting quantum processors.
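A rough illustration of why two resonators allow a switch-off point, using the textbook dispersive-limit estimate rather than the paper's device model (all frequencies below are invented, in GHz): each resonator mediates a coupling whose sign depends on the qubit-resonator detuning, so placing one resonator below and one above the qubit frequencies lets the two contributions cancel.

```python
def j_eff(wq1, wq2, resonators):
    """Dispersive-limit estimate: each resonator (wr, g1, g2) mediates a
    qubit-qubit coupling J = (g1*g2/2) * (1/(wq1-wr) + 1/(wq2-wr));
    the contributions of the resonators add and can cancel."""
    total = 0.0
    for wr, g1, g2 in resonators:
        total += 0.5 * g1 * g2 * (1.0 / (wq1 - wr) + 1.0 / (wq2 - wr))
    return total

# Invented, illustrative parameters (GHz): one resonator below and one
# above the qubit frequencies, with 100 MHz qubit-resonator couplings.
resonators = [(4.0, 0.1, 0.1), (5.0, 0.1, 0.1)]
j_off = j_eff(4.50, 4.50, resonators)  # symmetric detunings: contributions cancel
j_on = j_eff(4.55, 4.55, resonators)   # a 50 MHz shift yields a few-MHz coupling
```

With these made-up numbers, a 50 MHz shift of both qubit frequencies moves the mediated coupling from zero to roughly 4 MHz in magnitude, qualitatively matching the tuning range reported in the abstract.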
Structural control of two-level defect density revealed by high-throughput correlative measurements of Josephson junctions
This paper investigates defects in superconducting Josephson junctions that interfere with quantum computer performance by analyzing over 6,000 junctions and 600 microscopy images. The researchers found that aluminum electrode thickness and grain size strongly correlate with defect density, leading to a fabrication method that reduces harmful defects by two-thirds.
Key Contributions
- Established statistical correlation between aluminum electrode microstructure and two-level system defect density in Josephson junctions
- Demonstrated fabrication parameter optimization that reduces TLS density by two-thirds
- Developed high-throughput correlative methodology combining materials characterization with quantum device performance
View Full Abstract
Materials defects in Josephson junctions (JJs), often referred to as two-level systems (TLS), couple to superconducting qubits and are a critical bottleneck for scalable quantum processors. Despite their importance, understanding the microscopic sources of TLS and how to mitigate them has remained a major challenge. Here, we demonstrate a high-throughput, correlated approach to trace the microstructural origins of strongly-coupled TLS in Josephson circuits. We assembled a massive dataset of TLS across 6,000 Al/AlOx/Al JJs and more than 600 atomic resolution transmission electron microscopy images. We statistically link fabrication, microstructure, and TLS occurrence, revealing a strong correlation between Al electrode thickness, Al grain size, and TLS density. Correspondingly, we find a two-thirds reduction in TLS prompted by a change in electrode fabrication parameters. These results demonstrate a robust, data-driven methodology to understand and control defects in quantum circuits and pave the way for significantly reducing TLS density.
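The statistical linking step can be illustrated with a Pearson correlation coefficient. The data below are invented for illustration (the actual dataset spans 6,000 junctions and 600 micrographs); the sketch only shows the kind of correlation test such an analysis rests on:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Synthetic, illustrative numbers (not from the paper): Al grain size (nm)
# vs. strongly-coupled TLS count per junction, with a positive trend.
grain = [20, 25, 30, 35, 40, 45, 50, 55]
tls   = [1, 1, 2, 2, 3, 3, 4, 5]
r = pearson_r(grain, tls)
```

A value of r near +1 on real data would support the abstract's claim of a strong correlation between grain size and TLS density.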
The Pinnacle Architecture: Reducing the cost of breaking RSA-2048 to 100 000 physical qubits using quantum LDPC codes
This paper introduces the Pinnacle Architecture using quantum low-density parity check codes to dramatically reduce the physical qubit requirements for fault-tolerant quantum computing, demonstrating that RSA-2048 can be broken with fewer than 100,000 physical qubits instead of the previously estimated million+ qubits.
Key Contributions
- Introduction of Pinnacle Architecture using quantum LDPC codes for fault-tolerant quantum computing
- Demonstration that RSA-2048 can be factored with fewer than 100,000 physical qubits, an order-of-magnitude reduction in overhead
- Development of practical low-overhead fault-tolerant architecture for utility-scale quantum computing
View Full Abstract
The realisation of utility-scale quantum computing inextricably depends on the design of practical, low-overhead fault-tolerant architectures. We introduce the *Pinnacle Architecture*, which uses quantum low-density parity check (QLDPC) codes to allow for universal, fault-tolerant quantum computation with a spacetime overhead significantly smaller than that of any competing architecture. With this architecture, we show that 2048-bit RSA integers can be factored with less than one hundred thousand physical qubits, given a physical error rate of $10^{-3}$, code cycle time of $1~μ$s and a reaction time of $10~μ$s. We thereby demonstrate the feasibility of utility-scale quantum computing with an order of magnitude fewer physical qubits than has previously been believed necessary.
Multi-ion entangling gates mediated by spectrally unresolved modes
This paper introduces a new method for creating entangling gates between trapped-ion qubits using time-dependent magnetic field gradients, where all motional modes participate simultaneously rather than addressing individual modes. This nonperturbative approach enables faster gates on larger ion strings and can implement multi-qubit gates or simultaneous two-qubit gates between arbitrary ion pairs.
Key Contributions
- Nonperturbative gate scheme using all axial motional modes simultaneously
- Time-dependent magnetic-field gradient approach for multi-ion entangling gates
- Method for simultaneous gates on multiple ion pairs in linear strings
View Full Abstract
Entangling interactions between distant qubits can be mediated via an additional degree of freedom. In conventional trapped-ion schemes, realizing a well-defined, coherent gate typically requires spectrally addressing a specific bus mode. As the ion number increases, the coupling to each individual motional mode becomes weaker, so gates on large ion strings mediated by a single mode are necessarily slow. Moreover, addressing a large number of modes demands complex driving schemes, and the fundamentally perturbative character of these approaches imposes constraints on achievable gate speed and fidelity. Here, we introduce a scheme for entangling trapped-ion qubits using a time-dependent magnetic-field gradient, in which all axial motional modes participate in mediating the interaction and the gate construction is nonperturbative. The framework can be used to implement both multi-qubit gates and two-qubit gates between arbitrary pairs in a linear ion string. Through several explicit examples, we highlight the advantages over existing magnetic-gradient schemes and show how gates on multiple ion pairs can be carried out simultaneously.
Recirculating Quantum Photonic Networks for Fast Deterministic Quantum Information Processing
This paper proposes a recirculating quantum photonic network (RQPN) architecture that processes quantum information by capturing photons, circulating them between interconnected nonlinear cavities, and releasing outputs faster than traditional approaches. The architecture demonstrates significant speedups for multi-qubit gates like the Toffoli gate and quantum error correction operations.
Key Contributions
- Novel recirculating quantum photonic network architecture that reduces processing time for quantum operations
- Demonstration of faster three-qubit Toffoli gate implementation and seven-fold speedup in quantum error correction
View Full Abstract
A fundamental challenge in photonics-based deterministic quantum information processing is to realize key transformations on time scales shorter than those of detrimental decoherence and loss mechanisms. This challenge has been addressed through device-focused approaches that aim to increase nonlinear interactions relative to decoherence rates. In this work, we adopt a complementary architecture-focused approach by proposing a recirculating quantum photonic network (RQPN) that minimizes the duration of quantum information processing tasks, thereby reducing the requirements on nonlinear interaction rates. The RQPN consists of a network of all-to-all connected nonlinear cavities with dynamically controlled waveguide couplings, and it processes information by capturing a photonic input state, recirculating photons between the cavities, and releasing a photonic output state. We demonstrate the RQPN's architectural advantage through two examples: first, we show that processing all qubits simultaneously yields faster operations than single- and two-qubit decompositions of the three-qubit Toffoli gate. Second, we demonstrate implementations of a measurement-free correction for single-photon loss, achieving up to seven-fold speedups and significantly improved hardware efficiency relative to state-of-the-art architecture proposals. Our work shows that a single hardware-efficient recirculating architecture substantially reduces the temporal overhead of multi-qubit gates and quantum error correction, thereby lowering the barrier to experimental realizations of deterministic photonic quantum information processing.
Erasure Thresholds for Hyperbolic and Semi-Hyperbolic Surface Codes
This paper develops and tests 25 new quantum error correction codes based on hyperbolic and semi-hyperbolic surface geometries, measuring their performance against different types of quantum noise. The researchers find that these codes can tolerate error rates of 5% or higher for certain noise types, with some achieving better performance than traditional surface codes.
Key Contributions
- Construction of 25 new hyperbolic and semi-hyperbolic CSS surface codes from various tessellations
- Comprehensive simulation and threshold analysis showing improved noise tolerance compared to traditional surface codes
- Demonstration that fine-grained scaling families achieve higher thresholds with erasure-to-Pauli ratios of 4.5-5.2×
View Full Abstract
We construct 14 hyperbolic CSS surface codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations and 11 semi-hyperbolic (fine-grained) codes. We simulate all 25 codes under circuit-level erasure and Pauli noise. Under circuit-level Pauli noise, pseudothresholds increase with code size within each family ($0.24$--$0.49\%$ for $\{8,3\}$, $0.11$--$0.43\%$ for $\{10,3\}$, $0.07$--$0.13\%$ for $\{12,3\}$). For erasure noise, most codes have $p^*_{\mathrm{E}} > 5\%$. Per-observable family thresholds give erasure-to-Pauli ratios of $2.7$--$3.9\times$ for the base code families. Fine-grained scaling families achieve higher thresholds in both Pauli ($0.67$--$0.68\%$) and erasure ($3.0$--$3.5\%$), with ratios of $4.5$--$5.2\times$. Under phenomenological noise, per-logical $Z$-channel thresholds are ${\sim}2\%$ for $\{8,3\}$ and ${\sim}1\%$ for $\{10,3\}$; the $\{12,3\}$ threshold lies below $0.5\%$.
Comparing and correcting robustness metrics for quantum optimal control
This paper develops improved methods for designing quantum control pulses that are robust against hardware errors and drift. The researchers compare different mathematical approaches for measuring error sensitivity and introduce corrections that make quantum control more reliable in practical implementations.
Key Contributions
- Systematic comparison of adjoint end-point and toggling-frame approaches for robustness estimation
- Introduction of discretization correction to toggling-frame robustness estimator
- Novel framework positioning robustness as first-class objective in constrained optimal control
View Full Abstract
Control pulses that nominally optimize fidelity are sensitive to routine hardware drift and modeling errors. Robust quantum optimal control seeks error-insensitive control pulses that maintain fidelity thresholds and obey hardware constraints. Distinct numerical approximations to the first-order error susceptibility include adjoint end-point and toggling-frame approaches. Although theoretically equivalent, we provide a novel, systematic study demonstrating important numerical differences between these two approaches. We also introduce a critical discretization correction to the widely-used toggling-frame robustness estimator, measurably improving its estimate of first-order error susceptibility. We accomplish our study by positioning robustness as a first-class objective within direct, constrained optimal control. Our approach uniquely handles control and fidelity constraints while cleanly isolating robustness for dedicated optimization. In both single- and two-qubit examples under realistic constraints, our approach provides an analytic edge for obtaining precise, physics-informed robustness.
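A minimal sketch of the first-order quantity both estimators approximate, in standard perturbation-theory notation (this is the generic construction, not necessarily the paper's exact definitions): for a controlled Hamiltonian $H(t) = H_0(t) + \epsilon V$ with error-free propagator $U_0(t)$,

```latex
U(T) \;\approx\; U_0(T)\left(\mathbb{1} \;-\; i\,\epsilon \int_0^T \tilde V(t)\,\mathrm{d}t\right),
\qquad
\tilde V(t) \;=\; U_0^\dagger(t)\, V\, U_0(t),
```

so the first-order error susceptibility is the norm of the accumulated toggling-frame operator, $\mathcal{R} = \big\|\int_0^T \tilde V(t)\,\mathrm{d}t\big\|$. The adjoint end-point approach differentiates the final fidelity directly, while the toggling-frame approach discretizes this integral over pulse segments — which is where a discretization correction of the kind the paper introduces enters.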
Simpler Presentations for Many Fragments of Quantum Circuits
This paper develops improved mathematical frameworks for optimizing quantum circuits by creating more efficient rule sets for proving when different quantum circuit arrangements are equivalent. The authors focus on several important quantum gate families and demonstrate that their new approach requires significantly fewer rules while maintaining completeness and often achieving minimality.
Key Contributions
- Development of a unified PROP framework for quantum circuit optimization with significantly reduced rule counts
- Proof of minimality and bounded minimality for multiple quantum gate fragments including Clifford circuits
View Full Abstract
Equational reasoning is central to quantum circuit optimisation and verification: one replaces subcircuits by provably equivalent ones using a fixed set of rewrite rules viewed as equations. We study such reasoning through finite equational theories, presenting restricted quantum gate fragments as symmetric monoidal categories (PROPs), where wire permutations are treated as structural and separated cleanly from fragment-specific gate axioms. For six widely used near-Clifford fragments: qubit Clifford, real Clifford, Clifford+T (up to two qubits), Clifford+CS (up to three qubits) and CNOT-dihedral, we transfer the completeness results of prior work into our PROP framework. Beyond completeness, we address minimality (axiom independence). Using uniform separating interpretations into simple semantic targets, we prove minimality for several fragments (including all arities for qubit Clifford, real Clifford, and CNOT-dihedral), and bounded minimality for the remaining cases. Overall, our presentations significantly reduce rule counts compared to prior work and provide a reusable categorical framework for constructing complete and often minimal rewrite systems for quantum circuit fragments.
Polycontrolled PROPs for Qudit Circuits: A Uniform Complete Equational Theory For Arbitrary Finite Dimension
This paper develops a complete mathematical framework for reasoning about quantum circuits using qudits (d-level quantum systems) of any finite dimension, providing a finite set of axioms that can prove when two circuits are equivalent. The work extends previous results for qubits to arbitrary dimensions while maintaining uniform axiom structures.
Key Contributions
- Finite schematic axiomatisation of qudit circuits uniform in every finite dimension d >= 2
- Sound and complete equational theory for unitary d-level circuits using at most three-wire axioms
- Translation between qudit circuits and LOPP calculus via d-ary Gray codes
- Extension of qubit circuit completeness results to arbitrary finite dimensions
View Full Abstract
We present a finite schematic axiomatisation of quantum circuits over d-level systems (qudits), uniform in every finite dimension d >= 2. For each d we define a PROP equipped with a family of control functors, treating control as a primitive categorical constructor. Using a translation between qudit circuits and the LOPP calculus for linear optics based on d-ary Gray codes, we obtain for each d a finite set of local axiom schemata that is sound and complete for unitary d-level circuits: two circuits denote the same unitary if and only if they are inter-derivable using axioms involving at most three wires. The generators are compatible with standard universal qudit gate families, yielding a sound equational basis for circuit rewriting and optimisation-by-rewriting. Conceptually, this extends the qubit circuit completeness results of Clément et al. to arbitrary finite dimension, and instantiates the control-as-constructor approach of Delorme and Perdrix in this setting, while keeping the axiom shapes uniform in d.
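The translation above relies on d-ary Gray codes, orderings of base-d words in which consecutive words differ in a single digit. One standard construction is the modular d-ary Gray code, sketched here in Python (this illustrates the combinatorial object only; the paper's specific encoding may differ):

```python
def dary_gray(i, d, n):
    """Modular d-ary Gray code: map integer i (0 <= i < d**n) to an n-digit
    base-d word so that consecutive integers map to words differing in
    exactly one digit. Digit k is (b_k - b_{k+1}) mod d, where b_k are the
    base-d digits of i (with b_n = 0)."""
    digits = []
    x = i
    for _ in range(n + 1):
        digits.append(x % d)
        x //= d
    return [(digits[k] - digits[k + 1]) % d for k in reversed(range(n))]

# All 9 two-digit ternary Gray codewords, in sequence.
codes = [dary_gray(i, 3, 2) for i in range(9)]
```

The map is a bijection on n-digit words, so the sequence visits every word exactly once while changing one digit per step.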
Construction of the full logical Clifford group for high-rate quantum Reed-Muller codes using only transversal and fold-transversal gates
This paper develops a method to implement the complete set of logical Clifford gates for high-rate quantum Reed-Muller error-correcting codes using only transversal and fold-transversal gates, eliminating the need for ancilla qubits. The work enables fault-tolerant quantum computation with codes that can efficiently store large amounts of quantum information.
Key Contributions
- First construction of the full logical Clifford group for high-rate quantum codes using only transversal and fold-transversal gates without ancilla qubits
- Development of fault-tolerant gate implementation for quantum Reed-Muller codes with near-linear information rate scaling
View Full Abstract
To build large-scale quantum computers while minimizing resource requirements, one may want to use high-rate quantum error-correcting codes that can efficiently encode information. However, realizing an addressable gate -- a logical gate on a subset of logical qubits within a high-rate code -- in a fault-tolerant manner can be challenging and may require ancilla qubits. Transversal and fold-transversal gates could provide a means to fault-tolerantly implement logical gates using a constant-depth circuit without ancilla qubits, but available gates of these types could be limited depending on the code and might not be addressable. In this work, we study a family of $[\![n=2^m,k={m \choose m/2}\approx n/\sqrt{\pi\log_2(n)/2},d=2^{m/2}=\sqrt{n}]\!]$ self-dual quantum Reed-Muller codes, where $m$ is a positive even number. For any code in this family, we construct a generating set of the full logical Clifford group comprising only transversal and fold-transversal gates, thus enabling the implementation of any addressable Clifford gate. To our knowledge, this is the first known construction of the full logical Clifford group for a family of codes in which $k$ grows near-linearly in $n$ up to a $1/\sqrt{\log n}$ factor that uses only transversal and fold-transversal gates without requiring ancilla qubits.
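The code parameters stated in the abstract are easy to evaluate directly. A short Python check of the $[\![n, k, d]\!]$ family and its near-linear rate (this only reproduces the stated formulas, nothing from the paper's constructions):

```python
from math import comb, log2, pi, sqrt

def qrm_params(m):
    """Parameters [[n, k, d]] of the self-dual quantum Reed-Muller family
    from the abstract: n = 2^m, k = C(m, m/2), d = 2^(m/2), for even m."""
    assert m % 2 == 0
    n = 2 ** m
    k = comb(m, m // 2)
    d = 2 ** (m // 2)
    return n, k, d

n, k, d = qrm_params(8)                    # [[256, 70, 16]]
# Compare k against the stated approximation k ~ n / sqrt(pi * log2(n) / 2):
approx = n / sqrt(pi * log2(n) / 2)
```

For m = 8 the approximation gives about 72 versus the exact k = 70, consistent with the $1/\sqrt{\log n}$-suppressed near-linear rate.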
How to Classically Verify a Quantum Cat without Killing It
This paper develops a new protocol for classically verifying quantum computations that preserves the quantum witness state instead of destroying it, solving a key problem in quantum verification where only one copy of a non-clonable quantum witness is available.
Key Contributions
- First classical verification protocol for quantum computation that preserves the witness state
- Construction of state preserving classical arguments for NP and dual-mode trapdoor functions with state recovery
View Full Abstract
Existing protocols for classical verification of quantum computation (CVQC) consume the prover's witness state, requiring a new witness state for each invocation. Because QMA witnesses are not generally clonable, destroying the input witness means that amplifying soundness and completeness via repetition requires many copies of the witness. Building CVQC with low soundness error that uses only *one* copy of the witness has remained an open problem so far. We resolve this problem by constructing a CVQC that uses a single copy of the QMA witness, has negligible completeness and soundness errors, and does *not* destroy its witness. The soundness of our CVQC is based on the post-quantum Learning With Errors (LWE) assumption. To obtain this result, we define and construct two primitives (under the post-quantum LWE assumption) for non-destructively handling superpositions of classical data, which we believe are of independent interest: - A *state preserving* classical argument for NP. - Dual-mode trapdoor functions with *state recovery*.
Coherence Protection for Mobile Spin Qubits in Silicon
This paper demonstrates techniques to preserve quantum coherence in mobile silicon spin qubits that can be moved between locations, achieving coherence times up to 32 microseconds during transport over distances exceeding 200 nanometers. The researchers used magnetic field optimization, motional narrowing through periodic shuttling, and dynamical decoupling to maintain qubit performance during movement.
Key Contributions
- Demonstration of coherence preservation during spin qubit shuttling with multiple noise mitigation strategies
- Achievement of 32 μs coherence time during transport over 200+ nm distances using dynamical decoupling
- Development of dressed-state shuttling for robust protection against low-frequency noise without pulsed control overhead
View Full Abstract
Mobile spin qubit architectures promise flexible connectivity for efficient quantum error correction and relaxed device layout constraints, but their viability rests on preserving spin coherence during transport. While shuttling transforms spatial disorder into time-dependent noise, its net impact on spin coherence remains an open question. Here we demonstrate systematic noise mitigation during spin shuttling in a linear $^{28}$Si/SiGe quantum dot device. First, by passively reducing magnetic field gradients, we minimize charge-noise coupling to the spin and double the spatially averaged dephasing time $T_2^*(x_n)$ from $4.4$ to $8.5\,μ\text{s}$. Next, we exploit motional narrowing by periodically shuttling the qubit, achieving a further enhancement in coherence time up to $T_{2}^{*,sh} = 11.5\,μ\text{s}$. Finally, we incorporate dynamical decoupling techniques while periodically shuttling over distances exceeding $200\,\text{nm}$, reaching $T_\text{2}^{H,sh}= 32\,μ\text{s}$. For the same setup, we demonstrate that dressed-state shuttling provides robust protection against low-frequency noise with a decay time $T_R^{\text{sh}} = 21\,μ\text{s}$, without the overhead of pulsed control and allowing protection during one-way spin transport. By preserving coherence over timescales exceeding typical gate and readout operations, the demonstrated strategies establish mobile spin qubits as a viable solution for scalable silicon quantum processors.
A cavity-mediated reconfigurable coupling scheme for superconducting qubits
This paper introduces a new architecture for superconducting quantum computers that uses a shared cavity to enable flexible connections between non-adjacent qubits. The system allows researchers to dynamically reconfigure which qubits can interact with each other, overcoming the typical limitation where qubits can only interact with their immediate neighbors.
Key Contributions
- Development of cavity-mediated reconfigurable coupling architecture for superconducting qubits
- Demonstration of high-fidelity two-qubit gates (iSWAP and CZ) with coherent error below 10^-4
- Extension to four-qubit systems with selective coupling and low crosstalk
View Full Abstract
Superconducting qubits have achieved remarkable progress in gate fidelity and coherence, yet their typical nearest-neighbor connectivity presents constraints for implementing complex quantum circuits. Here, we introduce a cavity-mediated coupling architecture in which a shared cavity mode, accessed through tunable qubit-cavity couplers, enables dynamically reconfigurable interactions between non-adjacent qubits. By selectively activating the couplers, we demonstrate that high-fidelity iSWAP and CZ gates can be performed within 50 ns with simulated coherent error below $10^{-4}$, while residual $ZZ$ interaction during idling remains below a few kilohertz. Extending to a four-qubit system, we also simulate gates between every qubit pair by selectively enabling the couplers with low qubit crosstalk. This approach provides a practical route toward enhanced interaction flexibility in superconducting quantum processors and may serve as a useful building block for devices that benefit from selective non-local coupling.
The equivalence of quantum deletion and insertion errors on permutation-invariant codes
This paper addresses quantum synchronisation errors that change the number of qubits in a system, establishing an equivalence between quantum deletion and insertion errors for permutation-invariant quantum error-correcting codes. The work extends classical insertion-deletion error correction theory to the quantum domain and provides conditions for when these codes can correct such errors.
Key Contributions
- Establishes quantum insertion-deletion equivalence for permutation-invariant codes
- Provides conditions for t-insertion error-correctability and (t,s)-insdel error-correctability in quantum systems
View Full Abstract
Quantum synchronisation errors are a class of quantum errors that change the number of qubits in a quantum system. The classical error correction of synchronisation errors has been well-studied, including an insertion-deletion equivalence more than a half-century ago, but little progress has been made towards the quantum counterpart since the birth of quantum error correction. We address the longstanding problem of a quantum insertion-deletion equivalence on permutation-invariant codes, detailing the conditions under which such codes are $t$-insertion error-correctable. We extend these conditions to quantum insdel errors, formulating a more restrictive set of conditions under which permutation-invariant codes are $(t,s)$-insdel error-correctable. Our work resolves many of the outstanding questions regarding the quantum error correction of synchronisation errors.
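Permutation invariance itself is straightforward to check numerically. The sketch below (illustrative only; the paper's codes and correctability conditions are far more general) builds a Dicke state, a standard building block of permutation-invariant codes, and verifies it is unchanged under a qubit transposition:

```python
import itertools
import numpy as np

def dicke(n, k):
    """Equal superposition of all n-qubit basis states with Hamming weight k."""
    v = np.zeros(2 ** n)
    for bits in itertools.combinations(range(n), k):
        v[sum(1 << b for b in bits)] = 1.0
    return v / np.linalg.norm(v)

def swap_qubits(state, n, i, j):
    """Permutation operator exchanging qubits i and j of an n-qubit state."""
    return np.swapaxes(state.reshape([2] * n), i, j).reshape(-1)

psi = dicke(4, 2)
# A permutation-invariant state is unchanged under any qubit transposition.
assert np.allclose(swap_qubits(psi, 4, 0, 3), psi)
```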
Non-Markovianity induced by Pauli-twirling
This paper studies how Pauli twirling, a technique used to simplify quantum noise into a more manageable form, can paradoxically convert well-behaved Markovian noise into non-Markovian noise that requires negative parameters to describe correctly. The authors prove that this counterintuitive effect occurs even when starting with standard Markovian quantum channels, which has important implications for quantum error correction and noise characterization.
Key Contributions
- Proved that Pauli channels are non-Markovian if and only if they have negative Pauli-Lindblad parameters
- Demonstrated that Pauli twirling can induce non-Markovianity in originally Markovian quantum channels
- Showed this effect occurs in realistic scenarios like implementing square-root-X gates under standard noise
View Full Abstract
Noise forms a central obstacle to effective quantum information processing. Recent experimental advances have enabled the tailoring of noise properties through Pauli twirling, transforming arbitrary noise channels into Pauli channels. This underpins theoretical descriptions of fault-tolerant quantum computation and forms an essential tool in noise characterization and error mitigation. Pauli-Lindblad channels have been introduced to aptly parameterize quasi-local Pauli errors across a quantum register, excluding negative Pauli-Lindblad parameters by relying on the Markovianity of the underlying noise processes. We point out that caution is required when parameterizing channels as Pauli-Lindblad channels with nonnegative parameters. For this, we study the effects of Pauli twirling on Markovianity. We use the notion of Markovianity of a channel (rather than that of an entire semigroup) and prove that a general Pauli channel is non-Markovian if and only if at least one of its Pauli-Lindblad parameters is negative. Using this, we show that Markovian quantum channels often become non-Markovian after Pauli twirling. The Pauli-twirling induced non-Markovianity necessitates the use of negative Pauli-Lindblad parameters for a correct noise description in experimentally realistic scenarios. An important example is the implementation of the $\sqrt{X}$-gate under standard Markovian noise. As such, our results have direct implications for quantum error mitigation protocols that rely on accurate noise characterization.
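The twirling operation itself is easy to see in the Pauli-transfer-matrix picture. As an illustrative sketch (not code from the paper; the amplitude-damping strength is an arbitrary choice), the snippet below twirls a single-qubit channel over the Pauli group and checks that the result is a Pauli channel, i.e. that its transfer matrix becomes diagonal:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = [I, X, Y, Z]

gamma = 0.2  # amplitude-damping strength (illustrative value)
K = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
     np.array([[0, np.sqrt(gamma)], [0, 0]])]

def apply(kraus, rho):
    return sum(k @ rho @ k.conj().T for k in kraus)

def ptm(kraus):
    """Pauli transfer matrix R_ij = Tr[P_i E(P_j)] / 2."""
    return np.array([[np.trace(Pi @ apply(kraus, Pj)).real / 2
                      for Pj in paulis] for Pi in paulis])

R = ptm(K)

# Twirling: average the channel over conjugation by all four Paulis.
K_tw = [P.conj().T @ k @ P / 2 for P in paulis for k in K]
R_tw = ptm(K_tw)

# The twirled channel is a Pauli channel: its PTM is the diagonal of the original.
assert np.allclose(R_tw, np.diag(np.diag(R)), atol=1e-12)
```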
Efficient circuit compression by multi-qudit entangling gates in linear optical quantum computation
This paper develops new multi-level control-Z gates for linear optical quantum computation that can selectively operate on subsets of qubits encoded in qudits, improving the efficiency of quantum circuits by reducing the exponential scaling of non-local gates from O(2^(r1+r2)) to O(2^r1 + 2^r2).
Key Contributions
- Development of multi-level control-Z gates for qudits in linear optical quantum computation
- Two explicit schemes with improved scaling - one state-dependent with 1/8 success probability using single non-local gate, and one state-independent reducing gate complexity from O(2^(r1+r2)) to O(2^r1 + 2^r2)
View Full Abstract
Linear optical quantum computation (LOQC) offers a promising platform for scalable quantum information processing, but its scalability is fundamentally constrained by the probabilistic nature of non-local entangling gates. Qudit circuit compression schemes mitigate this issue by encoding multiple qubits onto qudits. However, these schemes become inefficient when only a subset of the encoded qubits is required to participate in the non-local entangling gate, leading to an exponential increase in the number of non-local gates. In this Letter, we address this bottleneck by demonstrating the existence of multi-level control-Z (CZ) gates for qudits encoded in multiple spatial modes in LOQC. Unlike conventional two-level CZ gates, which act only on a single pair of modes, multi-level CZ gates impart a conditional phase shift for an arbitrarily chosen subset of the spatial modes. We present two explicit linear optical schemes that realize such operations, illustrating a fundamental trade-off between prior information about the input quantum state and the physical resources required. The first scheme is realized with a constant success probability of $1/8$ independent of the qudit dimension using a single non-local entangling gate, at the cost of state dependence, which is significantly better than the current success probability of $1/9$. Our second scheme provides a fully state independent realization reducing the number of non-local gates to $\mathcal{O}(2^{r_1}+2^{r_2})$ as compared to the existing bound of $\mathcal{O}(2^{r_1+r_2})$ where $r_1$ and $r_2$ are the number of qubits to be removed as control in the qudits. The success probability of the realization is $\frac{1}{2} \left(\frac{1}{8}\right)^{2^{r_1}+2^{r_2}}$. When combined with qudit circuit compression schemes, our results improve upon a key scalability limitation and significantly improve the efficiency of LOQC architectures.
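The logical action of a multi-level CZ is simple to state even though its linear-optical realization is the hard part. As an illustrative sketch (the dimension and subsets below are arbitrary choices, and this is not the paper's optical scheme), the gate imparts a conditional phase only when each qudit lies in its chosen subset of levels:

```python
import numpy as np

def multilevel_cz(d, s1, s2):
    """Diagonal two-qudit gate: phase -1 iff qudit 1 is in subset s1
    AND qudit 2 is in subset s2. A sketch of the gate's action only,
    not of its linear-optical implementation."""
    phases = np.ones((d, d))
    for a in s1:
        for b in s2:
            phases[a, b] = -1.0
    return np.diag(phases.reshape(-1))

U = multilevel_cz(4, {0, 1}, {2})
assert np.allclose(U @ U.conj().T, np.eye(16))  # unitary (diagonal +-1)
# |1>|2> picks up the phase; |3>|2> does not, since 3 is outside s1.
assert U[1 * 4 + 2, 1 * 4 + 2] == -1 and U[3 * 4 + 2, 3 * 4 + 2] == 1
```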
Preparing squeezed, cat and GKP states with parity measurements
This paper presents a protocol for preparing various quantum states in bosonic modes (like oscillators) using displaced parity measurements combined with auxiliary qubits. The method can generate squeezed states, cat states, and Gottesman-Kitaev-Preskill (GKP) states, which are important for quantum information processing.
Key Contributions
- Development of a displaced parity measurement protocol for preparing diverse bosonic quantum states
- Demonstration of squeezed state generation achieving ~9 dB squeezing with only three measurements
- Extension to preparation of cat states and GKP states which are crucial for quantum error correction
View Full Abstract
Bosonic modes constitute a central resource in a wide range of quantum technologies, providing long-lived degrees of freedom for the storage, processing, and transduction of quantum information. Such modes naturally arise in platforms including circuit quantum electrodynamics, quantum acoustodynamics, and trapped-ion systems. In these architectures, coherent control and high-fidelity readout of the bosonic degrees of freedom are achieved via coupling to an auxiliary qubit. When operated in the strong dispersive regime, this interaction enables parity measurements of the mode which, in combination with phase-space displacements, constitute a standard experimental tool for full Wigner-function tomography. Here, we propose a protocol based on displaced parity measurements that allows for the preparation of a variety of bosonic quantum states. As a first example, we demonstrate the generation of squeezed states, achieving up to ~9 dB of squeezing after only three parity measurements, and show that the protocol is robust against experimental imperfections. Finally, we generalize our approach to the preparation of other paradigmatic bosonic states, including cat and Gottesman-Kitaev-Preskill states.
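A displaced parity measurement is easy to model in a truncated Fock space. The sketch below (illustrative only; the displacement and cutoff are arbitrary, and this is not the paper's protocol) builds the displaced parity operator, projects the vacuum onto its +1 eigenspace, and recovers the Born probability of that outcome:

```python
import numpy as np
from scipy.linalg import expm

N = 40  # Fock-space cutoff (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)            # annihilation operator
parity = np.diag((-1.0) ** np.arange(N)).astype(complex)

def displaced_parity(alpha):
    """D(alpha) Pi D(alpha)^dagger in the truncated Fock basis."""
    D = expm(alpha * a.conj().T - np.conj(alpha) * a)
    return D @ parity @ D.conj().T

Pi = displaced_parity(0.5)
P_plus = (np.eye(N) + Pi) / 2        # projector onto the +1 outcome

vac = np.zeros(N, dtype=complex)
vac[0] = 1.0
post = P_plus @ vac
p_plus = np.linalg.norm(post) ** 2   # Born probability of the +1 outcome
post = post / np.linalg.norm(post)   # conditioned post-measurement state
assert 0 < p_plus <= 1
```

For the vacuum input, p_plus equals (1 + exp(-2|alpha|^2)) / 2, which the truncated simulation reproduces to high accuracy.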
Charge-$4e$ superconductor with parafermionic vortices: A path to universal topological quantum computation
This paper proposes a new type of superconductor that supports charge-4e pairing instead of conventional charge-2e pairing, which hosts parafermion zero modes that can naturally encode qutrits (3-level quantum systems) and enable universal quantum computation through braiding operations and interferometric measurements.
Key Contributions
- Introduction of charge-4e topological superconductors with Z3 parafermion zero modes for qutrit-based quantum computing
- Demonstration that braiding parafermion defects generates the full Clifford group and interferometric measurements enable universal quantum computation
- Proposal for realizing these systems through vortex proliferation in stacked p+ip superconductors or melted quantum Hall states
View Full Abstract
Topological superconductors (TSCs) provide a promising route to fault-tolerant quantum information processing. However, the canonical Majorana platform based on $2e$ TSCs remains computationally constrained. In this work, we find a $4e$ TSC that overcomes these constraints by combining a charge-$4e$ condensate with an Abelian chiral $\mathbb{Z}_3$ topological order in an intertwined fashion. Remarkably, this $4e$ TSC can be obtained by proliferating vortex-antivortex pairs in a stack of two $2e$ $p+ip$ TSCs, or by melting a $ν=2/3$ quantum Hall state. Specific to this TSC, the $hc/(4e)$ fluxes act as charge-conjugation defects in the topological order, whose braiding with anyons transmutes anyons into their antiparticles. This symmetry enrichment leads to $\mathbb{Z}_3$ parafermion zero modes trapped in the elementary vortex cores, which naturally encode qutrits. Braiding the parafermion defects alone generates the full many-qutrit Clifford group. We further show that a simple single-probe interferometric measurement enables topologically protected magic-state preparation, promoting Clifford operations to a universal gate set. Importantly, the non-Abelian excitations in the $4e$ TSC are confined to externally controlled defects, making them uniquely identifiable and amenable to controlled creation and motion with superconducting-circuit technology. Our results establish hierarchical electron aggregation as a complementary principle for engineering topological quantum matter with enhanced computational power.
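The qutrit structure underlying $\mathbb{Z}_3$ parafermions can be illustrated with the clock and shift matrices, which obey the Weyl commutation relation that parafermion operators generalize to spatially separated modes. A minimal check (not code from the paper):

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)
# Qutrit clock and shift matrices: the Z3 analogue of Pauli Z and X.
Zc = np.diag([1, omega, omega ** 2])
Xs = np.roll(np.eye(3), 1, axis=0)   # |k> -> |k+1 mod 3>

# Z3 Weyl commutation relation: Z X = omega X Z.
assert np.allclose(Zc @ Xs, omega * Xs @ Zc)
# Both operators have order 3, as expected for a qutrit.
assert np.allclose(np.linalg.matrix_power(Xs, 3), np.eye(3))
assert np.allclose(np.linalg.matrix_power(Zc, 3), np.eye(3))
```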
Hybrid Coupling Topology with Dynamic ZZ Suppression for Optimizing Circuit Depth during Runtime in Superconducting Quantum Processor
This paper presents a new hybrid coupling architecture for superconducting quantum processors that connects four qubits using a single tunable coupler, which can dynamically suppress unwanted ZZ interactions during operation. The design achieves higher qubit connectivity and reduces quantum circuit depth by nearly 20% compared to IBM's current architecture.
Key Contributions
- Introduction of hybrid tunable-coupling architecture connecting four transmon qubits with single coupler
- Dynamic ZZ suppression using off-resonant Stark drives
- 20% reduction in circuit depth compared to IBM Heavy-Hexagonal layout
- Improved qubit connectivity while maintaining scalability
View Full Abstract
To reduce circuit depth when executing quantum algorithms, it is necessary to maximize qubit connectivity on a near-term quantum processor. While addressing this, we also need to ensure high gate fidelity, suppression of unwanted ZZ cross-talk, a compact layout footprint, and minimal control hardware complexity to support scalability. In current superconducting quantum chips, fixed coupling is used as it is easier to scale, but it is limited by unwanted static ZZ interaction during single-qubit operations, which degrades system performance. To overcome these challenges, we have introduced a first-of-its-kind hybrid tunable-coupling architecture that connects four fixed-frequency transmon qubits using a single coupler. This hybrid coupler uses off-resonant Stark drives to tune ZZ strength between qubit pairs. Experimentally backed simulation results indicate that our proposed hybrid design maximizes the qubit connectivity while reducing control overhead. This design achieves a near 20% reduction in circuit depth compared to IBM's Heavy-Hexagonal layout, showing its potential for scalability.
Extensible universal photonic quantum computing with nonlinearity
This paper demonstrates a breakthrough photonic quantum computer that combines programmable linear optical networks with nonlinear modules to achieve universal quantum computing. The system successfully generates optical Gottesman-Kitaev-Preskill states for error correction and simulates complex quantum dynamics like the Bose-Hubbard model.
Key Contributions
- First extensible photonic quantum computer achieving universality through integrated linear and nonlinear operations
- Quasi-deterministic generation of optical Gottesman-Kitaev-Preskill states for bosonic error correction
- Demonstration of complex many-body quantum simulation on photonic hardware
View Full Abstract
Universal quantum computing requires an architecture that supports both linear circuits and, crucially, strong nonlinear resources. For quantum photonic systems, integrating such nonlinearities with scalable linear circuitry has been a major bottleneck, leaving most optical experiments without nonlinear operations and, consequently, incapable of achieving universality. Here, we report an extensible photonic computer that supports a universal gate set by seamlessly combining fully programmable, scalable linear optical networks with integrated nonlinear modules. This platform enables a broad range of quantum computing and simulation tasks. We demonstrate the quasi-deterministic generation of optical Gottesman-Kitaev-Preskill states, which are essential resources for bosonic error correction, yet had previously been realized only probabilistically. Furthermore, we simulate complex many-body quantum dynamics, exemplified by the Bose-Hubbard model. Such quantum simulation tasks have long been considered beyond the reach of photonic hardware limited to linear operations. These capabilities, enabled by our extensible architecture, establish a viable route towards photonic quantum simulation and fault-tolerant quantum computing.
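For reference, the Bose-Hubbard model mentioned above is easy to write down numerically. The sketch below (illustrative parameters and boson cutoff; not the paper's photonic implementation) builds the two-site Hamiltonian and checks hermiticity and particle-number conservation:

```python
import numpy as np

nmax = 3                 # per-site boson cutoff (illustrative)
d = nmax + 1
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator
n = a.conj().T @ a
I = np.eye(d)

J, U = 1.0, 2.0          # hopping and on-site interaction (illustrative)
# Two-site Bose-Hubbard Hamiltonian: -J (a1^dag a2 + a2^dag a1) + (U/2) sum n(n-1).
H = (-J * (np.kron(a.conj().T, a) + np.kron(a, a.conj().T))
     + U / 2 * (np.kron(n @ (n - I), I) + np.kron(I, n @ (n - I))))

assert np.allclose(H, H.conj().T)            # Hermitian
# Total particle number is conserved: [H, N_tot] = 0.
Ntot = np.kron(n, I) + np.kron(I, n)
assert np.allclose(H @ Ntot, Ntot @ H)
```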
Algebraic Reduction to Improve an Optimally Bounded Quantum State Preparation Algorithm
This paper presents an improved algorithm for preparing quantum states by using a simpler algebraic decomposition that separates preparation of the real and imaginary parts of the desired state. The new approach reduces circuit depth, gate count, and CNOT operations compared to existing optimally bounded state preparation methods when ancillary qubits are available.
Key Contributions
- Simplified algebraic decomposition for quantum state preparation that reduces circuit complexity
- Reduction in circuit depth, total gates, and CNOT count when ancillary qubits are available
- Implementation and testing using PennyLane for both dense and sparse quantum states
View Full Abstract
The preparation of $n$-qubit quantum states is a cross-cutting subroutine for many quantum algorithms, and the effort to reduce its circuit complexity is a significant challenge. In the literature, the quantum state preparation algorithm by Sun et al. is known to be optimally bounded, defining the asymptotically optimal width-depth trade-off bounds with and without ancillary qubits. In this work, a simpler algebraic decomposition is proposed to separate the preparation of the real part of the desired state from the imaginary one, resulting in a reduction in terms of circuit depth, total gates, and CNOT count when $m$ ancillary qubits are available. The reduction in complexity is due to the use of a single operator $Λ$ for each uniformly controlled gate, instead of the three in the original decomposition. Using the PennyLane library, this new algorithm for state preparation has been implemented and tested in a simulated environment for both dense and sparse quantum states, including those that are random and of physical interest. Furthermore, its performance has been compared with that of Möttönen et al.'s algorithm, which is a de facto standard for preparing quantum states in cases where no ancillary qubits are used, highlighting interesting lines of development.
Characterizing Quantum Error Correction Performance of Radiation-induced Errors
This paper develops computational models to simulate how radiation impacts affect quantum error correction performance on superconducting quantum devices, since radiation can cause correlated errors that standard error correction codes struggle with. The researchers create a holistic modeling framework that maps radiation-induced qubit errors onto quantum error channels and tests mitigation strategies for improved error correction.
Key Contributions
- Computational model linking radiation-induced quasiparticle effects to quantum error correction performance
- Performance metric for quantifying QEC code resilience to radiation impacts
- Modular framework for testing error mitigation strategies and chip designs
View Full Abstract
Radiation impacts are a current challenge for computing on superconducting quantum devices because they can lead to widespread correlated errors across the device. Such errors can be problematic for quantum error correction (QEC) codes, which are generally designed to correct independent errors. To address this, we have developed a computational model to simulate the effects of radiation impacts on QEC performance. This is achieved by building from recently developed models of quasiparticle density, mapping radiation-induced qubit error rates onto a quantum error channel, and simulating a simple surface code. We also provide a performance metric to quantify the resilience of a QEC code to radiation impacts. Additionally, we sweep various parameters of chip design to test mitigation strategies for improved QEC performance. Our modeling approach is holistic, allowing for modular performance testing of error mitigation strategies and of chip and code designs.
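A toy version of such a correlated error model is easy to sketch. The snippet below illustrates the general idea only, not the paper's quasiparticle-density model: the peak probability and decay length are invented. It assigns each qubit on a grid an error probability that decays with distance from the impact site and draws one Monte Carlo shot of correlated errors:

```python
import numpy as np

rng = np.random.default_rng(7)

def radiation_error_probs(coords, impact, p0=0.5, decay_len=2.0):
    """Toy spatially correlated error model: a radiation impact raises each
    qubit's error probability with a profile that decays with distance
    from the impact site. Parameters are hypothetical."""
    d = np.linalg.norm(coords - impact, axis=1)
    return p0 * np.exp(-d / decay_len)

# 5x5 grid of qubits, impact at the center qubit.
coords = np.array([(x, y) for x in range(5) for y in range(5)], dtype=float)
p = radiation_error_probs(coords, impact=np.array([2.0, 2.0]))
errors = rng.random(25) < p   # one Monte Carlo shot of correlated errors

assert p.max() == p[2 * 5 + 2]          # error rate peaks at the impact site
assert p.min() >= 0 and p.max() <= 0.5  # probabilities stay in range
```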
Modeling integrated frequency shifters and beam splitters
This paper develops theoretical methods for designing frequency-mode beam splitters using modulated coupled resonator arrays for photonic quantum computing. The authors create a flexible methodology based on quantum input-output network formalism to construct transfer matrices for these devices and prove limitations on certain implementations.
Key Contributions
- Development of SLH formalism-based methodology for constructing transfer matrices of frequency-mode beam splitters
- Analysis of various device configurations including two-resonator devices and Mach-Zehnder interferometers
- Formal no-go theorem on limitations of native N-mode frequency-domain beam splitters with N-resonator arrays
View Full Abstract
Photonic quantum computing is a strong contender in the race to fault-tolerance. Recent proposals using qubits encoded in frequency modes promise a large reduction in hardware footprint, and have garnered much attention. In this encoding, linear optics, i.e., beam splitters and phase shifters, is necessarily not energy-conserving, and is costly to implement. In this work, we present designs of frequency-mode beam splitters based on modulated arrays of coupled resonators. We develop a methodology to construct their effective transfer matrices based on the SLH formalism for quantum input-output networks. Our methodology is flexible and highly composable, allowing us to define $N$-mode beam splitters either natively based on arrays of $N$-resonators of arbitrary connectivity or as networks of interconnected $l$-mode beam splitters, with $l<N$. We apply our methodology to analyze a two-resonator device, a frequency-domain phase shifter and a Mach-Zehnder interferometer obtained from composing these devices, a four-resonator device, and present a formal no-go theorem on the possibility of natively generating certain $N$-mode frequency-domain beam splitters with arrays of $N$-resonators.
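The compositional idea, chaining transfer matrices of small devices into larger interferometers, can be illustrated in ordinary linear algebra (this is generic linear optics in one common convention, not the paper's SLH construction):

```python
import numpy as np

def bs(theta):
    """2-mode beam-splitter transfer matrix (one common real convention)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]], dtype=complex)

def ps(phi):
    """Phase shifter on the first of two modes."""
    return np.diag([np.exp(1j * phi), 1.0])

# Mach-Zehnder interferometer as a composition of transfer matrices:
# 50/50 splitter, relative phase, 50/50 splitter.
phi = 0.7
mzi = bs(np.pi / 4) @ ps(phi) @ bs(np.pi / 4)

assert np.allclose(mzi @ mzi.conj().T, np.eye(2))  # composition stays unitary
# Transmission into mode 0 for input in mode 0 follows the familiar fringe.
amp = mzi[0, 0]
assert np.isclose(abs(amp) ** 2, np.sin(phi / 2) ** 2)
```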
Extended Rydberg Lifetimes in a Cryogenic Atom Array
This paper demonstrates how cooling cesium atoms to 4K in optical tweezers significantly extends the lifetime of Rydberg states by reducing blackbody radiation effects. The extended lifetimes improve the coherence time of ground-Rydberg qubits, which is crucial for reducing errors in neutral-atom quantum computing systems.
Key Contributions
- Demonstration of 3.3x longer Rydberg state lifetimes in cryogenic environment
- Measurement of small differential dynamic polarizability reducing dephasing
- Advancement toward higher fidelity neutral-atom two-qubit gates
View Full Abstract
We report on the realization of a $^{133}$Cs optical tweezer array in a cryogenic blackbody radiation (BBR) environment. By enclosing the array within a 4K radiation shield, we measure long Rydberg lifetimes, up to $406 (36)\,μ$s for the $55 P_{3/2}$ Rydberg state, a factor of 3.3(3) longer than the room-temperature value. We employ single-photon coupling for coherent manipulation of the ground-Rydberg qubit. We measure a small differential dynamic polarizability of the transition, beneficial for reducing dephasing due to light intensity fluctuations. Our results pave the path for advancing neutral-atom two-qubit gate fidelities as their error budgets become increasingly dominated by $T_1$ relaxation of the ground-Rydberg qubit.
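The lifetime gain can be sanity-checked by adding decay rates, using the numbers quoted in the abstract (a back-of-the-envelope sketch; the simple two-rate model, with BBR assumed negligible at 4 K, is our assumption, not the paper's analysis):

```python
# Decay rates add: 1/tau_total = 1/tau_radiative + 1/tau_BBR.
# Numbers below are the abstract's values (406 us cryogenic, factor 3.3).
tau_cryo = 406e-6            # ~BBR-free lifetime at 4 K (seconds)
tau_room = tau_cryo / 3.3    # implied room-temperature lifetime, ~123 us

rate_bbr = 1 / tau_room - 1 / tau_cryo   # BBR-induced rate at room temperature
tau_bbr = 1 / rate_bbr                   # BBR-limited lifetime, taken alone
print(f"implied BBR-limited lifetime at room temperature: {tau_bbr * 1e6:.0f} us")
```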
Quantum Error Mitigation at the pre-processing stage
This paper introduces a new quantum error mitigation technique that corrects for noise before measurement (pre-processing) rather than after measurement (post-processing). The method finds a surrogate observable to measure on the noisy quantum state that gives the same result as measuring the target observable on the noise-free state, using tensor networks to make this computationally feasible.
Key Contributions
- Development of pre-processing quantum error mitigation approach that finds surrogate observables to compensate for noise effects
- Significant computational complexity improvement (~10^6 times) over existing Tensor Error Mitigation methods by eliminating tomographic measurements
View Full Abstract
The realization of fault-tolerant quantum computers remains a challenging endeavor, forcing state-of-the-art quantum hardware to rely heavily on noise mitigation techniques. Standard quantum error mitigation is typically based on post-processing strategies. In contrast, the present work explores a pre-processing approach, in which the effects of noise are mitigated before performing a measurement on the output state. The main idea is to find an observable $Y$ such that its expectation value on a noisy quantum state $\mathcal{E}(ρ)$ matches the expectation value of a target observable $X$ on the noiseless quantum state $ρ$. Our method requires the execution of a noisy quantum circuit, followed by the measurement of the surrogate observable $Y$. The main enablers of our method in practical scenarios are Tensor Networks. The proposed method improves over Tensor Error Mitigation (TEM) in terms of average error, circuit depth, and complexity, attaining a measurement overhead that approaches the theoretical lower bound. The improvement in terms of classical computation complexity is in the order of $\sim 10^6$ times when compared to the post-processing computational cost of TEM in practical scenarios. Such gain comes from eliminating the need to perform the set of informationally complete positive operator-valued measurements (IC-POVM) required by TEM, as well as any other tomographic strategy.
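The surrogate-observable idea can be seen in a one-qubit toy case. For single-qubit depolarizing noise and a traceless observable, the surrogate is just a rescaling (an illustrative special case, not the paper's tensor-network construction; the noise strength and test state are arbitrary):

```python
import numpy as np

p = 0.1  # depolarizing strength (illustrative)
Xobs = np.array([[0, 1], [1, 0]], dtype=complex)  # target observable (Pauli X)

def depolarize(rho):
    """Single-qubit depolarizing channel E(rho)."""
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

# Pre-processing idea in miniature: pick Y so that Tr[Y E(rho)] = Tr[X rho].
# For depolarizing noise and a traceless observable, Y = X / (1 - p).
Yobs = Xobs / (1 - p)

rho = np.array([[0.5, 0.3 - 0.1j],
                [0.3 + 0.1j, 0.5]])              # some valid test state
ideal = np.trace(Xobs @ rho).real                # noiseless target value
mitigated = np.trace(Yobs @ depolarize(rho)).real
assert np.isclose(mitigated, ideal)
```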
High-order dynamical decoupling in the weak-coupling regime
This paper develops an improved method for dynamical decoupling, which uses carefully timed quantum pulses to protect quantum systems from environmental noise. The new approach significantly reduces the number of pulses needed compared to existing methods while maintaining better error suppression, making it more practical for real quantum devices.
Key Contributions
- Development of high-order dynamical decoupling scheme with polynomial pulse scaling O(n^(k-1)K) versus exponential scaling O(exp(n)) in existing methods
- Novel mapping to continuous necklace-splitting problem to construct optimal pulse sequences
- Demonstration of asymptotically optimal pulse count with superior performance over Quadratic DD in weak-coupling regime
View Full Abstract
We introduce a high-order dynamical decoupling (DD) scheme for arbitrary system-bath interactions in the weak-coupling regime. Given any decoupling group $\mathcal G$ that averages the interaction to zero, our construction yields pulse sequences whose length scales as $\mathcal{O}(|\mathcal G| K)$, while canceling all error terms linear in the system-bath coupling strength up to order $K$ in the total evolution time. As a corollary, for an $n$-qubit system with $k$-local system-bath interactions, we obtain an $\mathcal{O}(n^{k-1}K)$-pulse sequence, a significant improvement over existing schemes with $\mathcal{O}(\exp(n))$ pulses (for $k=\mathcal{O}(1)$). The construction is obtained via a mapping to the continuous necklace-splitting problem, which asks how to cut a multi-colored interval into pieces that give each party the same share of every color. We provide explicit pulse sequences for suppressing general single-qubit decoherence, prove that the pulse count is asymptotically optimal, and verify the predicted error scaling in numerical simulations. For the same number of pulses, we observe that our sequences outperform the state-of-the-art Quadratic DD in the weak-coupling regime. We also note that the same construction extends to suppress slow, time-dependent Hamiltonian noise.
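The first-order mechanism that high-order DD builds on is the ordinary echo: a π pulse at the midpoint flips the sign of the accumulated phase, so a static detuning cancels exactly. A minimal numerical illustration (not the paper's high-order construction; the noise ensemble is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
deltas = rng.normal(0.0, 2 * np.pi * 1e4, size=2000)  # static detunings (rad/s)
T = 1e-4  # total free-evolution time (s)

# Free induction: each realization accumulates phase delta*T; averaging the
# phase factors over the ensemble gives the decayed coherence.
fid = np.abs(np.mean(np.exp(1j * deltas * T)))

# One pi pulse at T/2 flips the sign of the second half's phase, so a static
# detuning cancels exactly: delta*T/2 - delta*T/2 = 0.
echo_phase = deltas * T / 2 - deltas * T / 2
echo = np.abs(np.mean(np.exp(1j * echo_phase)))

assert echo > 0.999   # static noise is refocused
assert fid < 0.5      # unprotected coherence has decayed
```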
Digital signatures with classical shadows on near-term quantum computers
This paper introduces a quantum digital signature scheme that uses only classical communication by leveraging 'classical shadows' of quantum states produced by random circuits as public keys. The authors demonstrate improved noise tolerance and experimentally validate their approach using 32-qubit circuits on near-term quantum hardware.
Key Contributions
- Novel quantum digital signature scheme requiring only classical communication using classical shadows
- Improved state-certification primitive with higher noise tolerance and lower sample complexity
- Experimental demonstration on 32-qubit states with circuits containing ≥80 logical gates
View Full Abstract
Quantum mechanics provides cryptographic primitives whose security is grounded in hardness assumptions independent of those underlying classical cryptography. However, existing proposals require low-noise quantum communication and long-lived quantum memory, capabilities which remain challenging to realize in practice. In this work, we introduce a quantum digital signature scheme that operates with only classical communication, using the classical shadows of states produced by random circuits as public keys. We provide theoretical and numerical evidence supporting the conjectured hardness of learning the private key (the circuit) from the public key (the shadow). A key technical ingredient enabling our scheme is an improved state-certification primitive that achieves higher noise tolerance and lower sample complexity than prior methods. We realize this certification by designing a high-rate error-detecting code tailored to our random-circuit ensemble and experimentally generating shadows for 32-qubit states using circuits with $\geq 80$ logical ($\geq 582$ physical) two-qubit gates, attaining 0.90 $\pm$ 0.01 fidelity. With increased number of measurement samples, our hardware-demonstrated primitives realize a proof-of-principle quantum digital signature, demonstrating the near-term feasibility of our scheme.
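The classical-shadow primitive the scheme builds on can be illustrated in miniature. The snippet below runs textbook single-qubit random-Pauli-basis shadows to estimate the expectation of Z on |0> (illustrative only; the paper's protocol uses 32-qubit random circuits and a tailored error-detecting code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Single-qubit random-Pauli-basis classical shadows (textbook primitive).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.diag([1, -1j]).astype(complex)
bases = [H, H @ Sdg, np.eye(2, dtype=complex)]   # measure X, Y, Z

psi = np.array([1, 0], dtype=complex)            # state |0>, so <Z> = 1
Zobs = np.diag([1.0, -1.0])

est = []
for _ in range(20000):
    U = bases[rng.integers(3)]
    probs = np.abs(U @ psi) ** 2                 # Born probabilities
    b = rng.choice(2, p=probs)                   # measurement outcome
    eb = np.zeros(2)
    eb[b] = 1.0
    # Inverse of the measurement channel: snapshot = 3 U^dag |b><b| U - I.
    snapshot = 3 * U.conj().T @ np.outer(eb, eb) @ U - np.eye(2)
    est.append(np.trace(Zobs @ snapshot).real)

assert abs(np.mean(est) - 1.0) < 0.1   # shadow estimate of <Z> on |0>
```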
Review of Superconducting Qubit Devices and Their Large-Scale Integration
This paper provides a comprehensive review of superconducting qubit quantum computers, covering the fundamental physics, device engineering, and scaling challenges. It examines key technical requirements through DiVincenzo's criteria and discusses the path toward large-scale integration using electronic design automation tools.
Key Contributions
- Comprehensive review of superconducting qubit technologies and their implementation challenges
- Analysis of large-scale integration approaches for superconducting quantum computers
- Discussion of electronic design automation tools for quantum computer design
- Review of fault-tolerant quantum computing requirements and entanglement gate operations
View Full Abstract
The superconducting qubit quantum computer is one of the most promising quantum computing architectures for large-scale integration due to its maturity and close proximity to the well-established semiconductor manufacturing infrastructure. From an education perspective, it also bridges classical microwave electronics and quantum electrodynamics. In this paper, we will review the basics of quantum computers, superconductivity, and Josephson junctions. We then introduce important technologies and concepts related to DiVincenzo's criteria, which are the necessary conditions for the superconducting qubits to work as a useful quantum computer. Firstly, we will discuss various types of superconducting qubits formed with Josephson junctions, from which we will understand the trade-off across multiple design parameters, including their noise immunity. Secondly, we will discuss different schemes to achieve entanglement gate operations, which are a major bottleneck in achieving more efficient fault-tolerant quantum computing. Thirdly, we will review readout engineering, including the implementations of the Purcell filters and quantum-limited amplifiers. Finally, we will discuss the nature and review the studies of two-level system defects, which are currently the limiting factor of qubit coherence time. DiVincenzo's criteria are only the necessary conditions for a technology to be eligible for quantum computing. To have a useful quantum computer, large-scale integration is required. We will review proposals and developments for the large-scale integration of superconducting qubit devices. By comparing with the application of electronic design automation (EDA) in semiconductors, we will also review the use of EDA in superconducting qubit quantum computer design, which is necessary for its large-scale integration.
Resource-Efficient Digitized Adiabatic Quantum Factorization
This paper develops a more efficient quantum algorithm for integer factorization using digitized adiabatic quantum computing. The researchers propose encoding the solution in the kernel subspace of the problem Hamiltonian and reformulate the problem as QUBO instead of PUBO, demonstrating improved performance for factoring integers up to 8 bits with reduced circuit complexity.
Key Contributions
- Novel kernel subspace encoding approach for adiabatic quantum factorization
- Reformulation of factorization problem from PUBO to QUBO framework with reduced gate complexity
View Full Abstract
Digitized adiabatic quantum factorization is a hybrid algorithm that exploits the advantage of digitized quantum computers to implement efficient adiabatic algorithms for factorization through gate decompositions of analog evolutions. In this paper, we harness the flexibility of digitized computers to derive a digitized adiabatic algorithm able to reduce the gate-demanding costs of implementing factorization. To this end, we propose a new approach for adiabatic factorization by encoding the solution of the problem in the kernel subspace of the problem Hamiltonian, instead of using the ground-state encoding considered in the standard adiabatic factorization proposed by Peng et al. [Phys. Rev. Lett. 101, 220405 (2008)]. Our encoding enables the design of adiabatic factorization algorithms belonging to the class of Quadratic Unconstrained Binary Optimization (QUBO) methods, instead of the Polynomial Unconstrained Binary Optimization (PUBO) used by standard adiabatic factorization. We illustrate the performance of our QUBO algorithm by implementing the factorization of integers $N$ up to 8 bits. The results demonstrate a substantial improvement over the PUBO formulation, both in terms of reduced circuit complexity and increased fidelity in identifying the correct solution.
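Both the ground-state and kernel encodings rest on a cost function whose zeros are valid factorizations. A brute-force classical sketch of that cost landscape for N = 15 (illustrative only; the generic encoding below is not the paper's kernel-subspace QUBO construction):

```python
import itertools

N = 15  # integer to factor (illustrative)

# Binary encoding of odd candidate factors: p = 1 + 2*(p1 + 2*p2), same for q.
best = None
for bits in itertools.product([0, 1], repeat=4):
    p = 1 + 2 * (bits[0] + 2 * bits[1])
    q = 1 + 2 * (bits[2] + 2 * bits[3])
    cost = (N - p * q) ** 2   # unconstrained cost whose zeros are factorizations
    if best is None or cost < best[0]:
        best = (cost, p, q)

cost, p, q = best
assert cost == 0 and p * q == N   # the minimum recovers 15 = 3 * 5
```

An adiabatic (or annealing) algorithm minimizes this same kind of cost as a Hamiltonian; the brute-force loop here only exhibits the landscape.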
Qudit Twisted-Torus Codes in the Bivariate Bicycle Framework
This paper develops improved quantum error correction codes called qudit twisted-torus codes that work with quantum systems based on higher-dimensional units (qudits) rather than just qubits. The researchers show these twisted designs achieve better performance metrics than previous untwisted versions and outperform existing qubit-based codes.
Key Contributions
- Extension of twisted-torus quantum error correction codes to qudit systems over finite fields
- Demonstration that twisted-torus qudit codes achieve larger distances and better rate-distance tradeoffs than untwisted counterparts and previous qubit implementations
View Full Abstract
We study finite-length qudit quantum low-density parity-check (LDPC) codes from translation-invariant CSS constructions on two-dimensional tori with twisted boundary conditions. Recent qubit work [PRX Quantum 6, 020357 (2025)] showed that, within the bivariate-bicycle viewpoint, twisting generalized toric patterns can significantly improve finite-size performance as measured by $k d^{2}/n$. Here $n$ denotes the number of physical qudits, $k$ the number of logical qudits, and $d$ the code distance. Building on this insight, we extend the search to qudit codes over finite fields. Using algebraic methods, we compute the number of logical qudits and identify compact codes with favorable rate–distance tradeoffs. Overall, for the finite sizes explored, twisted-torus qudit constructions typically achieve larger distances than their untwisted counterparts and outperform previously reported twisted qubit instances. The best new codes are tabulated.
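The paper's twisted qudit constructions are not reproduced here, but the untwisted bivariate-bicycle skeleton they build on is easy to sketch for qubits: take two commuting shift operators on a torus, pick two polynomials A and B in them, and the CSS condition holds automatically because A and B commute. The grid size and polynomial choices below are arbitrary illustrations:

```python
import numpy as np

def shift(n):
    """n x n cyclic shift matrix over GF(2)."""
    return np.roll(np.eye(n, dtype=np.uint8), 1, axis=1)

# Untwisted torus: x and y are commuting shifts on an l x m grid.
l, m = 6, 6
x = np.kron(shift(l), np.eye(m, dtype=np.uint8))
y = np.kron(np.eye(l, dtype=np.uint8), shift(m))
I = np.eye(l * m, dtype=np.uint8)

# Any polynomials A, B in x, y give a valid CSS pair (illustrative choice,
# not a code from the paper -- twisting modifies the boundary identification).
A = (I + x + np.linalg.matrix_power(y, 2)) % 2
B = (I + y + np.linalg.matrix_power(x, 3)) % 2

Hx = np.hstack([A, B])        # X-type checks
Hz = np.hstack([B.T, A.T])    # Z-type checks
# CSS condition: Hx @ Hz.T = A B + B A = 2 A B = 0 over GF(2)
print("CSS condition holds:", not ((Hx @ Hz.T) % 2).any())
```

The qudit generalization replaces GF(2) arithmetic with a larger finite field; the commutation argument carries over unchanged.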
Approximate simulation of complex quantum circuits using sparse tensors
This paper presents a new method for simulating quantum circuits on classical computers using sparse tensor networks, which can efficiently represent and manipulate quantum states that don't have underlying symmetries. The approach provides improved runtime scaling with respect to circuit size and depth compared to traditional methods.
Key Contributions
- Novel sparse tensor data structure for quantum states without symmetry
- Efficient contraction and truncation algorithms for sparse tensor networks
- Improved runtime scaling for quantum circuit simulation
View Full Abstract
The study of quantum circuit simulation using classical computers is a key research topic that helps define the boundary of verifiable quantum advantage, solve quantum many-body problems, and inform development of quantum hardware and software. Tensor networks have become forefront mathematical tools for these tasks. Here we introduce a method to approximately simulate quantum circuits using sparsely-populated tensors. We describe a sparse tensor data structure that can represent quantum states with no underlying symmetry, and outline algorithms to efficiently contract and truncate these tensors. We show that the data structure and contraction algorithm are efficient, leading to favorable expected runtime scaling with qubit number and circuit depth. Our results motivate future research in the optimization of sparse tensor networks for quantum simulation.
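A minimal sketch of the sparse-state idea, a dictionary of nonzero amplitudes with a truncation step, is shown below; this is a simplified stand-in, since the paper works with sparse tensor-network cores rather than a flat state vector:

```python
import numpy as np

def apply_single_qubit(state, gate, qubit):
    """Apply a 2x2 gate to `qubit` of a sparse {bitstring: amplitude} state.
    Only nonzero amplitudes are ever stored or visited."""
    out = {}
    for bits, amp in state.items():
        b = (bits >> qubit) & 1
        for nb in (0, 1):
            coeff = gate[nb, b]
            if coeff == 0:
                continue
            new_bits = (bits & ~(1 << qubit)) | (nb << qubit)
            out[new_bits] = out.get(new_bits, 0) + coeff * amp
    return out

def truncate(state, tol):
    """Drop amplitudes below `tol` and renormalize (the approximation step)."""
    kept = {b: a for b, a in state.items() if abs(a) >= tol}
    norm = np.sqrt(sum(abs(a) ** 2 for a in kept.values()))
    return {b: a / norm for b, a in kept.items()}

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = {0: 1.0}                       # |000>
state = apply_single_qubit(state, H, 0)
state = apply_single_qubit(state, H, 1)
state = truncate(state, tol=0.1)
print(len(state))                      # 4 nonzero amplitudes out of 2^3
```

The cost of each step scales with the number of stored amplitudes rather than the full $2^n$ dimension, which is the property the paper's contraction and truncation algorithms exploit.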
Detailed, interpretable characterization of mid-circuit measurement on a transmon qubit
This paper develops new methods to analyze and understand mid-circuit measurements on quantum computing hardware by adapting error analysis techniques to break down measurement errors into physically meaningful components. The researchers applied their approach to a transmon qubit device to identify and quantify specific error sources like amplitude damping and readout errors.
Key Contributions
- Adapted error generator formalism to mid-circuit measurements for better physical interpretation
- Demonstrated detailed characterization of measurement errors on transmon qubits including amplitude damping and readout errors
- Showed how measurement errors vary with readout pulse parameters and validated theoretical predictions
View Full Abstract
Mid-circuit measurements (MCMs) are critical components of the quantum error correction protocols expected to enable utility-scale quantum computing. MCMs can be modeled by quantum instruments (a type of quantum operation or process), which can be characterized self-consistently using gate set tomography. However, experimentally estimated quantum instruments are often hard to interpret or relate to device physics. We address this challenge by adapting the error generator formalism -- previously used to interpret noisy quantum gates by decomposing their error processes into physically meaningful sums of "elementary errors" -- to MCMs. We deploy our new analysis on a transmon qubit device to tease out and quantify error mechanisms including amplitude damping, readout error, and imperfect collapse. We examine in detail how the magnitudes of these errors vary with the readout pulse amplitude, recover the key features of dispersive readout predicted by theory, and show that these features can be modeled parsimoniously using a reduced model with just a few parameters.
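One of the elementary error mechanisms the paper isolates, amplitude damping, has a standard two-operator Kraus representation; the toy below checks its completeness relation and population decay (this is textbook material, not the paper's error-generator decomposition):

```python
import numpy as np

def amplitude_damping_kraus(gamma):
    """Kraus operators for amplitude damping with decay probability gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return K0, K1

def apply_channel(rho, kraus):
    """rho -> sum_k K_k rho K_k^dag."""
    return sum(K @ rho @ K.conj().T for K in kraus)

gamma = 0.1
kraus = amplitude_damping_kraus(gamma)

# Completeness: sum_k K^dag K = I, so the channel is trace preserving
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))

# Excited-state population decays by a factor (1 - gamma) per application
rho = np.array([[0, 0], [0, 1.0]])    # |1><1|
rho = apply_channel(rho, kraus)
print(rho[1, 1])
```

The error-generator formalism goes further by expressing such channels as exponentials of sums of elementary error terms, which is what makes the fitted magnitudes physically interpretable.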
Accelerating qubit reset through the Mpemba effect
This paper demonstrates how to accelerate qubit reset (initialization to ground state) by up to 50% using the Mpemba effect, where a two-qubit gate converts slow-decaying single-qubit coherences into faster-decaying two-qubit coherences. The researchers implemented and validated this protocol on a superconducting quantum processor.
Key Contributions
- Novel protocol using Mpemba effect to accelerate passive qubit reset by up to 50%
- Experimental demonstration on superconducting quantum processor with analysis of robustness under realistic error conditions
View Full Abstract
Passive qubit reset is a key primitive for quantum information processing, whereby qubits are initialized by allowing them to relax to their ground state through natural dissipation, without the need for active control or feedback. However, passive reset occurs on timescales that are much longer than those of gate operations and measurements, making it a significant bottleneck for algorithmic execution. Here, we show that this limitation can be overcome by exploiting the Mpemba effect, originally indicating the faster cooling of hot systems compared to cooler ones. Focusing on the regime where coherence times exceed energy relaxation times ($T_2 > T_1$), we propose a simple protocol based on a single entangling two-qubit gate that converts local single-qubit coherences into fast-decaying global two-qubit coherences. This removes their overlap with the slowest decaying Liouvillian mode and enables a substantially faster relaxation to the ground state. For realistic parameters, we find that our protocol can reduce reset times by up to $50\%$ compared to standard passive reset. We analyze the robustness of the protocol under non-Markovian noise, imperfect coherent control and finite temperature, finding that the accelerated reset persists across a broad range of realistic error sources. Finally, we present an experimental implementation of our protocol on an IQM superconducting quantum processor. Our results demonstrate how Mpemba-like accelerated relaxation can be harnessed as a practical tool for fast and accurate qubit initialization.
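The mechanism can be caricatured with two decay modes: relaxation is limited by the state's overlap with the slowest Liouvillian mode, and the protocol's entangling gate moves that overlap into a faster mode. The rates and overlaps below are made-up illustrative numbers, not the paper's:

```python
import numpy as np

# Toy relaxation: distance to the steady state decomposed into Liouvillian
# modes with decay rates (rates and overlaps are illustrative assumptions).
rates = np.array([0.1, 0.5])      # slow mode (long-lived coherence), fast mode
c_passive = np.array([0.7, 0.7])  # initial overlaps with each mode

# "Mpemba" protocol: a pre-rotation moves the slow-mode overlap into the
# fast-decaying mode before letting the system relax.
c_protocol = np.array([0.0, np.sqrt(c_passive @ c_passive)])

def time_to_threshold(c, rates, eps=1e-3):
    """Smallest t with sum_k |c_k| exp(-rate_k t) <= eps (bisection)."""
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.sum(np.abs(c) * np.exp(-rates * mid)) <= eps:
            hi = mid
        else:
            lo = mid
    return hi

t_passive = time_to_threshold(c_passive, rates)
t_protocol = time_to_threshold(c_protocol, rates)
print(t_passive, t_protocol)   # the protocol reaches threshold much sooner
```

With these toy numbers the speedup exceeds the paper's reported 50% because the rate separation is exaggerated; the structure of the argument, removing overlap with the slowest mode, is the same.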
An Evaluation of the Remote CX Protocol under Noise in Distributed Quantum Computing
This paper evaluates how the remote CX protocol performs under noise when connecting multiple quantum processing units (QPUs) in distributed quantum computing networks. The researchers simulate different network configurations and qubit assignment strategies to understand how noise degrades performance when running quantum algorithms across distributed quantum computers.
Key Contributions
- Evaluation of remote CX protocol performance under noise in distributed quantum computing systems
- Comparison of naive versus graph partitioning strategies for qubit assignment across multiple QPUs
- Performance analysis on various quantum algorithms including Grover's algorithm in distributed settings
View Full Abstract
Quantum computers connected through classical and quantum communication channels can be combined to function as a single unit to run large quantum circuits that no single device can execute on its own. The distributed quantum computing paradigm is therefore often seen as a potential pathway to scaling quantum computing to capacities necessary for practical and large-scale applications. Whether connecting multiple quantum processing units (QPUs) in clusters or over networks, quantum communication requires entanglement to be generated and distributed over distances. Using entanglement, the remote CX protocol can be performed, which allows the application of a CX gate involving qubits located in different QPUs. In this work, we use a specialized simulation framework for a high-level evaluation of the impact of the protocol when executed under noise in various network configurations using different numbers of QPUs. We compare naive and graph partitioning qubit assignment strategies and how they affect the fidelity in experiments run on Grover, GHZ, VQC, and random circuits. The results provide insight into how QPU and network configurations or naive scheduling can degrade performance.
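The cost difference between naive and partition-aware qubit assignment comes down to how many interaction-graph edges the QPU partition cuts, since each remote CX consumes a distributed entangled pair. The toy circuit and assignments below are illustrative, not the paper's benchmarks:

```python
# Each remote CX consumes one distributed entangled pair, so a qubit
# assignment is scored by the number of interaction-graph edges it cuts.
# Toy 6-qubit circuit over two 3-qubit QPUs (example interactions only).
cx_pairs = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

def remote_cx_count(assignment, pairs):
    """Count CX gates whose control and target sit on different QPUs."""
    return sum(assignment[a] != assignment[b] for a, b in pairs)

naive = {q: q % 2 for q in range(6)}          # round-robin: qubit i -> QPU i%2
partitioned = {q: q // 3 for q in range(6)}   # cluster-aware: {0,1,2} vs {3,4,5}

print(remote_cx_count(naive, cx_pairs))        # many cut edges
print(remote_cx_count(partitioned, cx_pairs))  # only the (2,3) edge is cut
```

Since every remote CX adds noise from imperfect entanglement distribution, minimizing the cut (e.g. via graph partitioning) directly reduces the fidelity penalty the paper measures.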
Even More Efficient Soft-Output Decoding with Extra-Cluster Growth and Early Stopping
This paper develops more efficient methods for computing soft outputs in quantum error correction decoders, specifically focusing on cluster-based decoders like Union-Find. The authors introduce early-stopping techniques and new soft-output types that reduce computational overhead while maintaining hardware compatibility with existing FPGA implementations.
Key Contributions
- Introduction of bounded cluster gap and extra-cluster gap soft-output methods with early stopping
- Development of hardware-compatible soft-output computation for FPGA-implemented Union-Find decoders
- Improved computational scaling with code distance compared to previous methods
View Full Abstract
In fault-tolerant quantum computing, soft outputs from real-time decoders play a crucial role in improving decoding accuracy, post-selecting magic states, and accelerating lattice surgery. A recent paper by Meister et al. [arXiv:2405.07433 (2024)] proposed an efficient method to evaluate soft outputs for cluster-based decoders, including the Union-Find (UF) decoder. However, in parallel computing environments, its computational complexity is comparable to or even surpasses that of the UF decoder itself, resulting in a substantial overhead. Furthermore, this method requires global information about the decoding graph, making it poorly suited for existing hardware implementations of the UF decoder on Field-Programmable Gate Arrays (FPGAs). In this paper, to alleviate these issues, we develop more efficient methods for evaluating high-quality soft outputs in cluster-based decoders by introducing several early-stopping techniques. Our central idea is that the precise value of a large soft output is often unnecessary in practice. Based on this insight, we introduce two novel types of soft output: the bounded cluster gap and the extra-cluster gap. The former reduces the computational complexity of Meister's method by terminating the calculation at an early stage. Our numerical simulations show that this method achieves improved scaling with code distance $d$ compared to the original proposal. The latter, the extra-cluster gap, quantifies decoder reliability by performing a small, additional growth of the clusters obtained by the decoder. This approach offers the significant advantage of enabling soft-output computation without modifying the existing architecture of FPGA-implemented UF decoders. These techniques offer lower computational complexity and higher hardware compatibility, laying a crucial foundation for future real-time decoders with soft outputs.
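The early-stopping idea, that the precise value of a large soft output is unnecessary, can be sketched as a capped search for the lightest alternative correction. The single-parity-bit toy model below is an illustration, not the paper's cluster-based computation:

```python
import heapq

def lightest_alternative(weights, target_parity, cap):
    """Minimum total weight of a fault subset with the given parity (a
    stand-in for "an alternative correction"), abandoning the search as
    soon as the cheapest unexplored subset already weighs >= cap.
    Toy model: each fault flips one parity bit; weights are illustrative."""
    # state: (total_weight, parity, next_fault_index)
    heap = [(0.0, 0, 0)]
    while heap:
        w, parity, i = heapq.heappop(heap)
        if w >= cap:
            return cap        # early stop: everything left weighs at least cap
        if parity == target_parity and w > 0:
            return w          # exact value: the gap is small enough to matter
        if i < len(weights):
            heapq.heappush(heap, (w, parity, i + 1))                   # skip fault i
            heapq.heappush(heap, (w + weights[i], parity ^ 1, i + 1))  # use fault i
    return cap

print(lightest_alternative([2.5, 1.0, 3.0], target_parity=1, cap=2.0))  # exact: 1.0
print(lightest_alternative([5.0, 6.0], target_parity=1, cap=2.0))       # capped: 2.0
```

A post-selection rule only needs to know whether the gap clears a threshold, so capping the search trades no decision quality for a large reduction in work, which is the bounded-cluster-gap intuition.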
Device variability of Josephson junctions induced by interface roughness
This paper develops a quantitative model to predict how microscopic surface roughness at the interfaces of superconducting Josephson junctions causes device-to-device variability in their energy parameters. The researchers simulate thousands of junctions to understand how manufacturing imperfections affect the consistency of quantum processor components.
Key Contributions
- Quantitative model linking interface roughness parameters to Josephson energy variability
- Statistical characterization showing Josephson energy follows log-normal distribution with identified scaling relationships
View Full Abstract
As quantum processors scale to large qubit numbers, device-to-device variability emerges as a critical challenge. Superconducting qubits are commonly realized using Al/AlO$_{\text{x}}$/Al Josephson junctions operating in the tunneling regime, where even minor variations in device geometry can lead to substantial performance fluctuations. In this work, we develop a quantitative model for the variability of the Josephson energy $E_{J}$ induced by interface roughness at the Al/AlO$_{\text{x}}$ interfaces. The roughness is modeled as a Gaussian random field characterized by two parameters: the root-mean-square roughness amplitude $\sigma$ and the transverse correlation length $\xi$. These parameters are extracted from the literature and molecular dynamics simulations. Quantum transport is treated using the Ambegaokar–Baratoff relation combined with a local thickness approximation. Numerical simulations over 5,000 Josephson junctions show that $E_{J}$ follows a log-normal distribution. The mean value of $E_{J}$ increases with $\sigma$ and decreases slightly with $\xi$, while the variance of $E_{J}$ increases with both $\sigma$ and $\xi$. These results paint a quantitative and intuitive picture of Josephson energy variability induced by surface roughness, with direct relevance for junction design.
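The qualitative trends can be reproduced with a toy Monte Carlo under the local-thickness approximation: model the barrier thickness as Gaussian fluctuations and area-average the exponential tunneling factor. All parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Monte Carlo of Josephson-energy variability from barrier-thickness
# roughness (kappa, thickness, and sample counts are illustrative).
kappa = 0.7                          # tunneling decay constant, 1/angstrom
n_junctions, n_patches = 2000, 64    # independent roughness cells per junction

def sample_EJ(sigma):
    """E_J as the area average of exp(-2*kappa*delta_t) over a rough barrier
    (local-thickness approximation; Gaussian thickness fluctuations),
    normalized so a perfectly smooth barrier gives E_J = 1."""
    dt = sigma * rng.standard_normal((n_junctions, n_patches))
    return np.mean(np.exp(-2 * kappa * dt), axis=1)

EJ_small = sample_EJ(sigma=0.2)
EJ_large = sample_EJ(sigma=0.5)

# Rougher interfaces raise the mean (exp is convex) and widen the spread;
# each patch contribution is log-normal, driving the observed distribution.
print(EJ_small.mean(), EJ_large.mean())
print(EJ_small.std(),  EJ_large.std())
```

Increasing the correlation length in this picture corresponds to fewer independent patches per junction, which reduces the averaging and inflates the variance, matching the reported trend with $\xi$.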
Accelerating the Tesseract Decoder for Quantum Error Correction
This paper optimizes the Tesseract decoder, a quantum error correction algorithm that uses A* search to find the most likely errors in quantum codes. The researchers implemented four performance enhancements including better data structures and memory layouts, achieving 2-5x speedups across various quantum error correction code families.
Key Contributions
- Systematic optimization of Tesseract decoder achieving 2-5x performance improvements
- Implementation of four targeted optimization strategies including data structure improvements and memory layout reorganization
- Demonstration of consistent speedups across multiple quantum error correction code families including Surface Codes and Color Codes
View Full Abstract
Quantum Error Correction (QEC) is essential for building robust, fault-tolerant quantum computers; however, the decoding process often presents a significant computational bottleneck. Tesseract is a novel Most-Likely-Error (MLE) decoder for QEC that employs the A* search algorithm to explore an exponentially large graph of error hypotheses, achieving high decoding speed and accuracy. This paper presents a systematic approach to optimizing the Tesseract decoder through low-level performance enhancements. Based on extensive profiling, we implemented four targeted optimization strategies, including the replacement of inefficient data structures, reorganization of memory layouts to improve cache hit rates, and the use of hardware-accelerated bit-wise operations. We achieved significant decoding speedups across a wide range of code families and configurations, including Color Codes, Bivariate-Bicycle Codes, Surface Codes, and Transversal CNOT Protocols. Our results demonstrate consistent speedups of approximately 2x for most code families, often exceeding 2.5x. Notably, we achieved a peak performance gain of over 5x for the most computationally demanding configurations of Bivariate-Bicycle Codes. These improvements make the Tesseract decoder more efficient and scalable, serving as a practical case study that highlights the importance of high-performance software engineering in QEC and providing a strong foundation for future research.
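One of the optimization categories named above, hardware-accelerated bit-wise operations, can be illustrated by packing each fault's detector signature into an integer bitmask so that syndrome updates become single XORs. This is a generic sketch, not Tesseract's actual data layout:

```python
# Represent each fault's detector signature as an integer bitmask so
# syndrome updates are single XORs and residual-syndrome size is a
# popcount (illustrative; not Tesseract's internal representation).
fault_signatures = [0b0011, 0b0110, 0b1100, 0b1001]  # detectors flipped per fault

def residual_syndrome(syndrome, hypothesis):
    """Detectors left unexplained by a set of hypothesized faults."""
    acc = syndrome
    for f in hypothesis:
        acc ^= fault_signatures[f]    # one XOR per hypothesized fault
    return acc

syndrome = 0b1010
r = residual_syndrome(syndrome, [1, 2])   # faults 1 and 2: 0110 ^ 1100 = 1010
print(r, bin(r).count("1"))
# residual 0 -> the hypothesis explains the syndrome; the popcount of a
# nonzero residual can serve as an admissible-style A* heuristic
```

Replacing per-detector bookkeeping with word-level XOR and popcount is exactly the kind of low-level change that yields the profiling-driven speedups the paper reports.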
On the Spectral theory of Isogeny Graphs and Quantum Sampling of Hard Supersingular Elliptic curves
This paper presents a quantum algorithm for sampling random supersingular elliptic curves with unknown endomorphism rings, which is crucial for isogeny-based cryptography. The algorithm provides the first provable quantum polynomial-time solution to generate these 'hard' curves without requiring a trusted setup.
Key Contributions
- First provable quantum polynomial-time algorithm for sampling hard supersingular elliptic curves
- Proof of Quantum Unique Ergodicity conjecture for supersingular isogeny graphs
- Stronger eigenvalue separation property for isogeny graphs removing heuristic assumptions in quantum money protocols
View Full Abstract
In this paper we study the problem of sampling random supersingular elliptic curves with unknown endomorphism rings. This task has recently attracted significant attention, as the secure instantiation of many isogeny-based cryptographic protocols relies on the ability to sample such "hard" curves. Existing approaches, however, achieve this only in a trusted-setup setting. We present the first provable quantum polynomial-time algorithm that samples a random hard supersingular elliptic curve with high probability. Our algorithm runs heuristically in $\tilde{O}\!\left(\log^{4}p\right)$ quantum gate complexity and in $\tilde{O}\!\left(\log^{13} p\right)$ under the Generalized Riemann Hypothesis. As a consequence, our algorithm gives a secure instantiation of the CGL hash function and other cryptographic primitives. Our analysis relies on a new spectral delocalization result for supersingular $\ell$-isogeny graphs: we prove the Quantum Unique Ergodicity conjecture, and we further provide numerical evidence for complete eigenvector delocalization; this theoretical result may be of independent interest. Along the way, we prove a stronger $\varepsilon$-separation property for eigenvalues of isogeny graphs than that predicted in the quantum money protocol of Kane, Sharif, and Silverberg, thereby removing a key heuristic assumption in their construction.
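The role of eigenvalue separation can be illustrated on a generic sparse graph: the spectral gap controls how fast a random walk equidistributes, which is what lets walk-based sampling reach a near-uniform curve quickly. The graph below is a toy stand-in, not an actual isogeny graph:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an l-isogeny graph: a cycle plus a random matching gives
# a sparse, well-connected graph. Its spectral gap bounds how fast a random
# walk equidistributes -- the quantity the paper's eigenvalue-separation
# result pins down for actual supersingular isogeny graphs.
n = 64
A = np.zeros((n, n))
idx = np.arange(n)
A[idx, (idx + 1) % n] = 1           # cycle edges (keep the graph connected)
A[(idx + 1) % n, idx] = 1
perm = rng.permutation(n)           # random long-range edges
A[idx, perm] = 1
A[perm, idx] = 1
np.fill_diagonal(A, 0)

deg = A.sum(axis=1)
P = 0.5 * (np.eye(n) + A / deg[:, None])   # lazy walk: removes periodicity

pi = deg / deg.sum()                # stationary distribution of the walk
dist = np.zeros(n)
dist[0] = 1.0                       # start the walk at one fixed "curve"
for _ in range(40):
    dist = dist @ P
tv = 0.5 * np.abs(dist - pi).sum()  # total-variation distance to stationarity
print(tv)                           # decays geometrically with the spectral gap
```

A larger certified gap between the trivial and nontrivial eigenvalues shortens the walk length needed before the endpoint distribution is provably close to uniform, which is why the separation result removes the heuristic from earlier constructions.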