Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
Updated automatically as new papers are published. The page shows one week of arXiv publishing (Sunday to Thursday); an archive of previous weeks appears at the bottom.
Estimating the performance boundary of Gottesman-Kitaev-Preskill codes and number-phase codes
This paper compares two types of bosonic quantum error-correcting codes, GKP and number-phase codes, which encode logical information in modes of light, to determine which performs better under different noise conditions. The researchers found that the choice between codes depends critically on the ratio of photon loss to dephasing noise, with a clear crossover point when dephasing is about 100 times weaker than loss.
Key Contributions
- Established quantitative performance boundary between GKP and number-phase codes under photon loss and dephasing noise
- Developed practical methodology for benchmarking and optimizing bosonic quantum error-correcting codes
- Identified sharp crossover regime where dephasing strength is approximately two orders of magnitude smaller than loss strength
View Full Abstract
Bosonic quantum error-correcting codes encode logical information in a harmonic oscillator, with the Gottesman-Kitaev-Preskill (GKP) and number-phase (NP) codes representing two fundamentally different encoding paradigms. Although both have been extensively studied, it remains unclear under what physical noise conditions (including photon loss and dephasing) one encoding intrinsically outperforms the other. Here we estimate a quantitative performance boundary between GKP and NP codes under general photon loss-dephasing noise. By optimizing code parameters within each encoding family, we identify the noise regimes in which each code exhibits a fundamental advantage. In particular, we find that the crossover occurs when the dephasing strength is approximately two orders of magnitude smaller than the loss strength, revealing a sharp separation between operational regimes. Beyond this specific comparison, our work establishes a practical and extensible methodology for benchmarking bosonic codes and optimizing their parameters, providing concrete guidance for the experimental selection and deployment of bosonic encodings in realistic noise environments.
A frequency-agile microwave-optical interface for superconducting qubits
This paper demonstrates a frequency-agile interface that converts microwave signals from superconducting qubits to optical signals for transmission over fiber optic cables. The system overcomes bandwidth limitations by cascading a microwave-to-microwave frequency converter with a microwave-to-optical transducer, enabling quantum communication between distant superconducting quantum processors.
Key Contributions
- Development of a frequency-agile microwave-optical interface with continuous frequency coverage from 5.0 to 8.5 GHz
- Demonstration of optical readout of a superconducting qubit detuned by 1.7 GHz from the native transducer frequency
- Cascaded M2M-M2O architecture enabling heterogeneous superconducting device networking
View Full Abstract
Superconducting quantum processors operate at microwave frequencies in millikelvin environments, making it challenging to interconnect distant nodes using conventional microwave wiring. Coherent microwave-to-optical (M2O) transduction enables superconducting quantum networks by interfacing itinerant microwave photons with low-loss optical fiber. However, many state-of-the-art transducers provide efficient conversion only over a narrow frequency span, complicating deployment with heterogeneous superconducting devices that are detuned by gigahertz-scale offsets. Here we demonstrate a frequency-agile microwave-optical interface that overcomes this bandwidth mismatch by cascading an electro-optic M2O transducer with a multimode microwave-to-microwave (M2M) frequency converter, with in situ tunability of the microwave resonances in both stages. Using this architecture, we realize continuous frequency coverage from 5.0 to 8.5 GHz within a single system. As an application relevant to superconducting-qubit networking, we use the cascaded M2M-M2O interface to optically read out a superconducting qubit whose readout resonator is detuned by 1.7 GHz from the native M2O microwave resonance, demonstrating a scalable route toward fiber-linked superconducting quantum nodes.
Optimized Compilation for Distributed Quantum Computing
This paper develops a greedy algorithm to optimize quantum circuit compilation for distributed quantum computing by minimizing the use of Einstein-Podolsky-Rosen (EPR) pairs. The approach groups non-local gates to share EPR pairs and reorders commutative gates to reduce circuit depth and resource consumption.
Key Contributions
- Greedy algorithm for optimizing EPR pair usage in distributed quantum circuits
- Circuit compilation strategy that groups non-local gates and reorders commutative operations
View Full Abstract
In many practical applications, quantum algorithms require several qubits, significantly more than those available with current noisy intermediate-scale quantum processors. Distributed quantum computing (DQC) is considered a scalable approach to increasing the number of available qubits for computational tasks. In the DQC setting, a quantum compiler must find the best partitioning for the quantum algorithm and then perform smart non-local operations scheduling to optimize the consumption of Einstein-Podolsky-Rosen (EPR) pairs. In this work, the focus is on minimizing the use of EPR pairs when the circuit structure allows for multiple non-local gates to utilize a single TeleGate operation. This is achieved by using a greedy algorithm that explores the circuit and groups together the gates that could share an EPR pair while also changing the order of commutative gates when necessary. With this preliminary pass, the compiled circuits show reduced depth and EPR usage. Since the quality of each EPR pair quickly deteriorates, the number of non-local gates using the same EPR pair should also be bounded. This means that, depending on the features of the target quantum network, the user can achieve different levels of optimization. Here, it is shown that this approach brings benefits even while assuming a low EPR pair lifetime.
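The grouping idea above can be sketched as a toy greedy pass (illustrative only, not the paper's compiler; the function name, the gate representation, and the `max_reuse` bound are invented here, with the bound standing in for the finite EPR-pair lifetime):

```python
# Toy sketch of the greedy grouping pass (not the paper's implementation):
# consecutive non-local gates acting between the same pair of QPUs share
# one EPR pair, up to `max_reuse` uses to model finite EPR-pair lifetime.
def count_epr_pairs(gates, max_reuse=3):
    """gates: (qpu_a, qpu_b) pairs for each non-local gate, in circuit order,
    after commutative gates have been reordered to sit adjacently."""
    pairs = 0
    live_link = None   # QPU pair served by the currently live EPR pair
    uses = 0
    for link in gates:
        link = tuple(sorted(link))
        if link == live_link and uses < max_reuse:
            uses += 1          # reuse the live EPR pair (same TeleGate group)
        else:
            pairs += 1         # consume a fresh EPR pair
            live_link, uses = link, 1
    return pairs

# Three consecutive gates between QPUs 0 and 1 share one pair; grouping
# reduces the count from 5 (no reuse) to 3.
demo = [(0, 1), (0, 1), (0, 1), (0, 2), (0, 1)]
assert count_epr_pairs(demo, max_reuse=3) == 3
assert count_epr_pairs(demo, max_reuse=1) == 5
```

A smaller `max_reuse` models a shorter EPR-pair lifetime, matching the paper's point that the achievable level of optimization depends on the target quantum network.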
3D Integrated Embedded Filters for Superconducting Quantum Circuits
This paper presents a new design for microwave filters used in superconducting quantum computers, where the filters are embedded in printed circuit boards rather than on the qubit chip itself. This approach improves qubit isolation and enables better scaling to larger quantum processors while maintaining high qubit coherence times.
Key Contributions
- Novel off-chip PCB-embedded Purcell filter design that removes filter components from qubit substrate
- Demonstration of thousand-fold improvement in qubit isolation with multiplexed readout capability for up to 9 resonators
- Experimental validation showing compatibility with high-coherence qubits and scalability to large qubit counts
View Full Abstract
Microwave filtering for superconducting qubits is a key element of quantum computing technology, enabling high coherence and fast state detection. This work presents the design and implementation of novel microwave Purcell filters for superconducting quantum circuits, integrated within a multilayer printed circuit board (PCB). The off-chip design removes all filter components from the qubit substrate, reducing device complexity, improving layout footprint and allowing better scalability to large qubit counts. Each embedded filter can couple up to nine readout resonators, enabling efficient multiplexed readout. Electromagnetic simulations of the filter predict a thousand-fold improvement in qubit isolation from the readout port. The design was experimentally validated under cryogenic conditions in conjunction with a 35-qubit device, demonstrating compatibility of the PCB-based filter with high-coherence superconducting qubits. The comparison of the measured qubit median T1 of 84 μs with the expected radiative limit from electromagnetic simulations validated the presence of Purcell filtering in the system.
Characterization of Josephson Junction Aging and Annealing Under Different Environments
This paper studies how Josephson junctions used in quantum computers degrade over time under different storage conditions and how thermal annealing can restore their properties. The researchers found that aging follows predictable patterns and can be controlled through proper storage environments and annealing procedures.
Key Contributions
- Characterized aging behavior of Josephson junctions following logarithmic curves with fabrication-dependent amplitude and storage-dependent speed
- Demonstrated that controlled annealing can restore junction properties with environment-dependent effects on resistance
View Full Abstract
Understanding the aging behavior of Josephson junctions and the effect of annealing on junction resistances is important in building large-scale superconducting quantum processors. Here we study the effects of aging of Josephson junctions under different storage conditions, from immediately after fabrication up to 2 to 3 months. We find that the aging follows a logarithmic curve, with the aging amplitude mainly determined by fabrication conditions and the aging speed determined by storage conditions. Junctions stored at ambient laboratory conditions aged faster than junctions stored in a nitrogen atmosphere or vacuum, with the aging speed changing appreciably when the storage condition changed. We also compared the effect of thermal annealing in a nitrogen environment with annealing under ambient conditions up to 250 °C. We find that in a nitrogen environment the resistances decreased at all temperatures tested, while in an ambient environment the resistances increased at 200 °C and decreased at 250 °C. We were unable to decrease the resistance below the initial-time resistance, suggesting a lower limit on the range of resistance tuning.
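The quoted logarithmic aging can be illustrated with a hedged toy model (the abstract reports a logarithmic curve but gives no explicit equation; the functional form and every parameter value below are assumptions of ours):

```python
import math

# Hypothetical model of the reported behavior: relative resistance drift
# R(t)/R(0) = 1 + A * ln(1 + t/t0), where the amplitude A is set by
# fabrication conditions and the timescale t0 by storage conditions.
# All numbers here are made up for illustration only.
def relative_drift(t_hours, A, t0_hours):
    return 1 + A * math.log(1 + t_hours / t0_hours)

one_month = 24 * 30
ambient  = relative_drift(one_month, A=0.02, t0_hours=1.0)    # faster aging
nitrogen = relative_drift(one_month, A=0.02, t0_hours=10.0)   # slower aging
assert ambient > nitrogen                     # ambient storage drifts further
assert relative_drift(0, 0.02, 1.0) == 1.0    # no drift at t = 0
```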
Spin stiffness and resilience phase transition in a noisy toric-rotor code
This paper studies how well the toric-rotor code (a type of quantum error-correcting code) can protect quantum information from phase-shift noise. The researchers use mathematical connections to classical physics models to identify a critical noise threshold above which the code loses its protective properties.
Key Contributions
- Mapped the resilience properties of toric-rotor codes under noise to the Kosterlitz-Thouless phase transition in the classical XY model
- Developed a quantum formalism for spin stiffness that corresponds to gate fidelity in the logical subspace
- Identified a critical noise threshold (σ_c ≈ 0.89) for partial resilience in toric-rotor codes
- Provided mathematical framework using quantum partition functions for studying correctability in continuous-variable quantum codes
View Full Abstract
We use a quantum formalism for the partition function of the classical $XY$ model to identify a resilience phase transition in a noisy toric-rotor code. Specifically, we consider the toric-rotor code under phase-shift noise described by a von Mises probability distribution and show that the fidelity between the final state after noise and the initial state is proportional to the partition function of the $XY$ model. We map the temperature of the $XY$ model to the width of the noise in the toric-rotor code, such that a Kosterlitz–Thouless phase transition at a critical temperature $T_{c}$ corresponds to a mixed-state phase transition at a critical width $\sigma_c$. To characterize this phase transition, we develop a quantum formalism for the spin stiffness in the $XY$ model and show that it is mapped to the gate fidelity in the logical subspace of the toric-rotor code. In particular, we introduce a topological order parameter that characterizes the resilience of the toric-rotor code to decoherence within the logical subspace. We show that the logical subspace does not exhibit complete resilience to noise, which is a necessary condition for correctability. However, it exhibits partial resilience to noise for widths less than $\sigma_c \approx 0.89$, where the resilience order parameter takes values near $1$ and then drops to zero at $\sigma_c$. We also use our results to shed light on the correctability of toric-rotor codes in higher dimensions $d > 2$. Our work shows that the quantum formalism for partition functions provides a mathematically rigorous framework for studying correctability in continuous-variable quantum codes.
Copy-cup Gates in Tensor Products of Group Algebra Codes
This paper develops quantum error-correcting codes with built-in constant-depth quantum gates (CZ and CCZ) by analyzing when classical group algebra codes can support specific mathematical structures called copy-cup gates. The researchers connect this problem to graph theory and provide concrete conditions for constructing these enhanced quantum codes.
Key Contributions
- Established conditions for classical group algebra codes to support copy-cup gates that enable constant-depth CZ and CCZ quantum gates
- Connected the copy-cup gate problem to perfect matching in graph theory
- Fully characterized conditions for 2- and 3-copy-cup gates in weight 4 group algebra codes
- Demonstrated that bivariate bicycle codes lack pre-orientation for copy-cup gates
View Full Abstract
We determine conditions on classical group algebra codes so that they have pre-orientation for cup products and copy-cup gates. This defines quantum codes that have constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates constructed via tensor products of classical group algebra codes, including hypergraph and balanced products. We show that determining the conditions relies on solving the perfect matching problem in graph theory. Conditions are fully determined for the 2- and 3-copy-cup gates, for group algebra codes up to weight 4, including for codes with odd check weight. These include the bivariate bicycle codes, which we show do not have the pre-orientation for either type of copy-cup gate. We show that abelian weight 4 group algebra codes satisfying the non-associative 3-copy-cup gate necessarily have a code distance of 2, whereas codes that satisfy conditions for the symmetric 3-copy-cup gate can have higher distances, and in fact also satisfy conditions for the 2-copy-cup gate. Finally we find examples of quantum codes from the product of abelian group algebra codes that have inter-code constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates.
Hyperbolic and Semi-Hyperbolic Floquet Codes for Photonic Quantum Computing
This paper develops new quantum error-correcting codes, called hyperbolic and semi-hyperbolic Floquet codes, designed specifically for photonic quantum computing systems. The codes use only simple weight-2 measurements and are tested under various noise models, showing improved performance compared to surface codes for photon-mediated quantum computing applications.
Key Contributions
- Construction of new hyperbolic Floquet codes from {10,3} and {12,3} tessellations using the LINS algorithm
- Demonstration that these codes achieve better fault-tolerant performance than surface codes in photonic quantum computing with 2.2x larger fault-tolerant area while encoding 10 logical qubits
View Full Abstract
Tailoring error correcting codes to the structure of the physical noise can reduce the overhead of fault-tolerant quantum computation. Hyperbolic Floquet codes use only weight-2 measurements and can be implemented directly on hardware with native pair measurements. We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations via the Wythoff kaleidoscopic construction with the Low-Index Normal Subgroups (LINS) algorithm. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes. We evaluate these codes under four noise models: phenomenological, ancilla Entangling Measurement (EM3), Single-step Depolarizing EM3 (SDEM3), and erasure. Under phenomenological noise, specific-logical threshold crossings occur near $p_e \approx 0.3$--$0.5\%$ for $\{8,3\}$ ($k=6$--$56$) and $0.15$--$0.2\%$ for $\{10,3\}$ ($k=12$--$146$). EM3 ancilla noise yields a threshold of ${\sim}1.5\%$ for all three families. SDEM3 is a depolarizing noise model motivated by Majorana tetron architectures; fine-grained codes achieve thresholds of ${\sim}1.0$--$1.2\%$ for all three families. The erasure model captures detected photon loss on spin-optical links; fine-grained codes achieve erasure thresholds of ${\sim}8.5$--$9\%$ for $\{8,3\}$, ${\sim}7$--$8\%$ for $\{10,3\}$, and ${\sim}6.5$--$8\%$ for $\{12,3\}$. Photon loss is the dominant error source in photon-mediated quantum computing. Under the full three-parameter SPOQC-2 noise model, the $\{8,3\}$ codes achieve a 2D fault-tolerant area $2.2\times$ that of the surface code compiled to pair measurements while encoding $k = 10$ logical qubits. In a companion paper, we evaluate the same code families in a distributed setting.
Spin-Cat Qubit with Biased Noise in an Optical Tweezer Array
This paper demonstrates the implementation of spin-cat qubits using ytterbium-173 atoms with nuclear spin 5/2 in optical tweezers, showing how these qubits exhibit biased noise that favors dephasing errors over bit-flip errors. The researchers achieved single-qubit gate operations and characterized the noise properties, demonstrating the feasibility of using these qubits for bias-tailored quantum error correction codes.
Key Contributions
- Demonstration of single-qubit controls for spin-cat qubits in ytterbium-173 with nuclear spin I=5/2
- Characterization of biased noise in spin-cat qubits showing preference for dephasing errors over bit-flip errors
- Achievement of covariant SU(2) rotations and benchmarking of gate fidelities for bias-tailored quantum error correction
View Full Abstract
Bias-tailored quantum error correcting codes (QECCs) offer a higher error threshold than standard QECCs and have the potential to achieve lower logical errors with less space overhead. The spin-cat qubit, encoded in a large nuclear spin-$F$ system, is a promising candidate for bias-tailored QECCs. Yet its feasibility is hindered by the difficulty of performing fast covariant SU(2) rotation with arbitrary rotation angles for nuclear spins and by a lack of noise characterization for gate operations in neutral atom platforms. Here we demonstrate single-qubit controls of ${}^{173}\mathrm{Yb}$ spin-cat qubits with nuclear spin $I=5/2$ in an optical tweezer array. We implement a covariant SU(2) rotation and non-linear rotations by optical beams and achieve an averaged single-Clifford gate fidelity of $0.961_{-5}^{+5}$. The measurement of the coherence time and spin relaxation time shows that the idling error becomes increasingly biased toward dephasing errors as the magnitude of the encoded sublevel $|m_F|$ increases. Furthermore, we benchmark the noise bias of rank-preserving gates on spin-cat qubits, demonstrating a finite bias of $18_{-11}^{+132}$, in contrast to the case of the two-level system in ${}^{171}\mathrm{Yb}$, which shows no bias within the experimental uncertainty. Our work demonstrates the feasibility of spin-cat qubits for realizing bias-tailored QECCs, paving the way for achieving hardware-efficient quantum error correction.
A matching decoder for bivariate bicycle codes
This paper develops a new decoding algorithm for bivariate bicycle quantum error-correcting codes using minimum-weight perfect matching, introducing a 'cylinder trick' method that leverages code symmetries to efficiently find error corrections.
Key Contributions
- Development of matching-based decoder for bivariate bicycle codes using the 'cylinder trick' method
- Demonstration of improved decoder performance through augmentation with belief propagation and over-matching strategies
View Full Abstract
The discovery of new quantum error-correcting codes that encode several logical qubits into relatively few physical qubits motivates the development of efficient and accurate methods of decoding these systems. Here, we adopt the minimum-weight perfect matching algorithm, a subroutine invaluable to decoding topological codes, to decode bivariate bicycle codes. Using the equivalence of bivariate bicycle codes to copies of the toric code, we propose a method we call the 'cylinder trick' to rapidly find a correction using matching on code symmetries. We benchmark our decoder on the gross code family, cyclic hypergraph-product codes, generalized toric codes, and recently proposed directional codes, demonstrating the general applicability of our protocol. For a subset of these codes, we find that our decoder can be significantly improved by augmenting matching with strategies including belief propagation and 'over-matching', thus achieving performance competitive with state-of-the-art approaches.
The Road to Useful Quantum Computers
This paper provides a comprehensive overview of the current state of quantum computing development, examining the gap between existing quantum computer capabilities and the goal of achieving 'quantum utility' where quantum computers solve practically important problems. The authors analyze the key scientific and engineering challenges that must be overcome to build useful quantum computers.
Key Contributions
- Comprehensive assessment of current quantum computing capabilities versus requirements for quantum utility
- Identification and analysis of key scientific and engineering challenges blocking progress toward useful quantum computers
- Framework for tracking progress from current prototypes to quantum utility applications
View Full Abstract
Building a useful quantum computer is a grand science and engineering challenge, currently pursued intensely by teams around the world. In the 1980s, Richard Feynman and Yuri Manin observed independently that computers based on quantum mechanics might enable better simulations of quantum phenomena. Their vision remained an intellectual curiosity until Peter Shor published his famous quantum algorithm for integer factoring, and shortly thereafter a proof that errors in quantum computations can be corrected. Since then, quantum computing R&D has progressed rapidly, from small-scale experiments in university physics laboratories to well-funded industrial efforts and prototypes. Hype notwithstanding, quantum computers have yet to solve scientifically or practically important problems -- a target often called quantum utility. In this article, we describe the capabilities of contemporary quantum computers, compare them to the requirements of quantum utility, and illustrate how to track progress from today to utility. We highlight key science and engineering challenges on the road to quantum utility, touching on relevant aspects of our own research.
Computing with many encoded logical qubits beyond break-even
This paper demonstrates quantum error-correcting codes that encode many logical qubits and outperform their unencoded counterparts, using up to 94 logical qubits on a 98-qubit trapped-ion quantum computer. The researchers achieved 'beyond break-even' performance, where error correction improves rather than degrades computation quality.
Key Contributions
- First demonstration of beyond break-even performance with high-rate quantum error correction codes using up to 94 logical qubits
- Implementation of fault-tolerant operations including state preparation, measurement, and quantum simulation on the 98-qubit Quantinuum Helios processor
- Development of new encoded operation gadgets for iceberg QED and two-level concatenated iceberg QEC codes
View Full Abstract
High-rate quantum error correcting (QEC) codes encode many logical qubits in a given number of physical qubits, making them promising candidates for quantum computation. Implementing high-rate codes at a scale that both frustrates classical computing and improves performance by encoding requires both high fidelity gates and long-range qubit connectivity -- both of which are offered by trapped-ion quantum computers. Here, we demonstrate computations that outperform their unencoded counterparts in the high-rate $[[ k+2,\, k,\, 2 ]]$ iceberg quantum error detecting (QED) and $[[ (k_2 + 2)(k_1 + 2),\, k_2k_1,\, 4 ]]$ two-level concatenated iceberg QEC codes, using the 98-qubit Quantinuum Helios trapped-ion quantum processor. Utilizing new gadgets for encoded operations, we realize this "beyond break-even" performance with reasonable postselection rates across a range of fault-tolerant (FT) and partially-fault-tolerant (pFT) component and application benchmarks with between $48$ and $94$ logical qubits. These benchmarks include FT state preparation and measurement, QEC cycle benchmarking, logical gate benchmarking, GHZ state preparation, and a pFT quantum simulation of the three-dimensional $XY$ model of quantum magnetism. Additionally, we illustrate that postselection rates can be suppressed by increasing the code distance via concatenation. Our results represent state-of-the-art logical component and state fidelities and provide evidence that high-rate QED/QEC codes are viable on contemporary quantum computers for near-term beyond-classical-scale computation.
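The code parameters quoted in the abstract can be checked directly (a sketch of the arithmetic only, not Quantinuum's software; the particular concatenation choice in the last line is an illustrative assumption):

```python
# [[n, k, d]] parameters of the iceberg QED code and its two-level
# concatenation, as quoted in the abstract above.
def iceberg(k):
    """Iceberg quantum error-detecting code: [[k+2, k, 2]]."""
    return (k + 2, k, 2)

def concatenated_iceberg(k1, k2):
    """Two-level concatenated iceberg code: [[(k2+2)(k1+2), k2*k1, 4]]."""
    return ((k2 + 2) * (k1 + 2), k2 * k1, 4)

# 94 logical qubits need only 96 physical qubits, fitting the 98-qubit Helios.
assert iceberg(94) == (96, 94, 2)
# One illustrative concatenation choice: 36 logical qubits in 64 physical.
assert concatenated_iceberg(6, 6) == (64, 36, 4)
```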
Controlled jump in the Clifford hierarchy
This paper develops a systematic method for reaching higher levels of the Clifford hierarchy in quantum computing by using controlled versions of Clifford operations, establishing precise rules for how far these controlled gates climb the hierarchy. The authors prove resource bounds showing that accessing very high hierarchy levels requires exponentially many qubits, and demonstrate applications to preparing states for fractional phase gates.
Key Contributions
- Proof of controlled-jump rule showing controlled Clifford gates CU reach hierarchy level m+2 where m is the Pauli periodicity of U
- Tight upper bound on Pauli periodicity showing exponential qubit requirements for high hierarchy levels
- Construction of explicit Clifford families achieving asymptotically optimal hierarchy jumps
- Protocol for preparing logical catalyst states enabling fractional Z gates via phase kickback
View Full Abstract
We develop a simple and systematic route to higher levels of the qubit Clifford hierarchy by coherently controlling Clifford operations. Our approach is based on Pauli periodicity, defined for a Clifford unitary $U$ as the smallest integer $m\ge 1$ such that $U^{2^{m}}$ is a Pauli operator up to phase. We prove a sharp controlled-jump rule showing that the controlled gate $CU$ lies strictly in level $m+2$ of the hierarchy, and equivalently that $CU$ lies in level $k$ if $U^{2^{k-2}}$ is Pauli while no smaller positive power of $U$ is Pauli. We further quantify the resources required to realize large level jumps in the Clifford hierarchy by proving an essentially tight upper bound on Pauli periodicity as a function of the number of qubits, which implies that accessing high hierarchy levels through controlled Cliffords requires a number of target qubits that grows exponentially with the desired level. We complement this limitation with explicit infinite families of Pauli-periodic Cliffords whose controlled versions achieve asymptotically optimal jumps. As an application, we propose a protocol for preparing logical catalyst states that enable logical $Z^{1/2^k}$ phase gates via phase kickback from a single jumped Clifford.
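The controlled-jump rule can be illustrated numerically for a single-qubit Clifford (a minimal sketch under the paper's definitions; the helper names are ours). For the phase gate S, S² = Z is Pauli, so m = 1 and CS lands in level m + 2 = 3, matching the known placement of CS in the third level:

```python
# Sketch of Pauli periodicity (the paper's definition): the smallest m >= 1
# with U^(2^m) equal to a Pauli up to phase. The controlled-jump rule then
# places CU in hierarchy level m + 2 (when no smaller power of U is Pauli).
I2 = [[1, 0], [0, 1]]
X  = [[0, 1], [1, 0]]
Y  = [[0, -1j], [1j, 0]]
Z  = [[1, 0], [0, -1]]
S  = [[1, 0], [0, 1j]]     # single-qubit phase gate, a Clifford

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def proportional(A, B):
    """True if A == phase * B for some scalar phase."""
    ratios = [A[i][j] / B[i][j] for i in range(2) for j in range(2) if B[i][j] != 0]
    zeros = all(abs(A[i][j]) < 1e-9 for i in range(2) for j in range(2) if B[i][j] == 0)
    return zeros and all(abs(r - ratios[0]) < 1e-9 for r in ratios)

def pauli_periodicity(U, max_m=8):
    V = matmul(U, U)                       # V = U^(2^1)
    for m in range(1, max_m + 1):
        if any(proportional(V, P) for P in (I2, X, Y, Z)):
            return m
        V = matmul(V, V)                   # square: V = U^(2^(m+1))
    raise ValueError("no Pauli power found up to 2^max_m")

m = pauli_periodicity(S)
assert m == 1                              # S^2 = Z is Pauli
# Controlled-jump rule: CS lies in Clifford-hierarchy level m + 2 = 3.
```

S itself is not a Pauli, so the "no smaller Pauli power" proviso of the rule holds here.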
Beyond Single-Shot Fidelity: Chernoff-Based Throughput Optimization in Superconducting Qubit Readout
This paper develops a new approach to optimizing qubit readout in superconducting quantum computers, focusing on minimizing the total time needed to certify quantum states rather than just maximizing single-shot measurement accuracy. The researchers show that integration times longer than the fidelity-optimal value can reduce overall certification time by 9-11%.
Key Contributions
- Formulated information-theoretic framework treating qubit readout as a stochastic communication channel with Chernoff information analysis
- Demonstrated that throughput-optimal integration times are longer than fidelity-optimal times, achieving 9-11% speedup in state certification
View Full Abstract
Single-shot fidelity is the standard benchmark for superconducting qubit readout, but it does not directly minimize the total wall-clock time required to certify a quantum state. We formulate an information-theoretic description of dispersive readout that treats the measurement record as a stochastic communication channel and compute the classical Chernoff information governing the multi-shot error exponent using a trajectory model that incorporates T1 relaxation with full cavity memory. We find a consistent separation between the integration time that maximizes single-shot fidelity and the time that minimizes total certification time. For representative transmon parameters and hardware overheads, the throughput-optimal integration window is longer than the fidelity-optimal one, yielding certification speedups of approximately 9-11%, with the gain saturating near 1.13x in the high-readout-power and high-overhead regime. Comparing the extracted classical information to the Gaussian Chernoff limit defines an information-extraction efficiency metric and shows that typical dispersive schemes are limited to about 45% capture at short integration times by detection efficiency, decreasing to approximately 12% at the throughput-optimal integration time of approximately 1.22 μs due to T1-induced trajectory smearing. This formulation connects readout calibration directly to the operational objective of minimizing certification time in high-throughput superconducting processors.
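A minimal worked example of the Chernoff-information idea (equal-variance Gaussian shot distributions only; the paper's trajectory model with T1 relaxation and cavity memory is far richer):

```python
import math

# Chernoff information between two equal-variance Gaussian shot distributions
# N(mu0, sigma^2) and N(mu1, sigma^2): C = (mu1 - mu0)^2 / (8 sigma^2).
# It governs the multi-shot error exponent P_err ~ exp(-N * C), so more
# information per shot means fewer shots to certify a state.
def chernoff_gaussian(mu0, mu1, sigma):
    return (mu1 - mu0) ** 2 / (8 * sigma ** 2)

def shots_to_certify(C, p_err):
    """Smallest N with exp(-N * C) <= p_err."""
    return math.ceil(math.log(1 / p_err) / C)

C = chernoff_gaussian(0.0, 1.0, 1.0)   # C = 0.125 nats per shot
N = shots_to_certify(C, 1e-6)          # about 111 shots for 1e-6 error
assert N == 111
```

The paper's optimization then trades this per-shot information against wall-clock cost: total certification time scales as N(τ) times the integration time τ plus overhead, which is minimized at a longer τ than the single-shot-fidelity optimum.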
Analysis of the action of conventional trapped-ion entangling gates in qudit space
This paper analyzes how conventional quantum gates designed for qubits (2-level systems) behave when applied to qudits (multi-level quantum systems) in trapped-ion quantum computers. The researchers study unwanted phase accumulations that occur in higher-dimensional systems and propose methods to compensate for these phases to make qudit-based quantum processors more practical.
Key Contributions
- Theoretical analysis of phase accumulation in Mølmer-Sørensen and Light-shift gates when operating on qudits
- Methods to actively compensate for unwanted phases and enhance gate robustness in multi-level quantum systems
View Full Abstract
Qudits, or multi-level quantum information carriers, present a promising path for scaling quantum computers. However, their use introduces increased complexity in quantum logic, necessitating careful control of relative phases between different qudit levels. In trapped-ion systems, entangling operations accumulate phases on specific levels that are no longer global, unlike in qubit architectures. Furthermore, the structure of multi-level gates becomes increasingly intricate with higher-dimensional Hilbert spaces. This work explores the theory of these additional entangling and non-entangling phases, accumulated in Mølmer–Sørensen and Light-shift gates. We propose methods to actively compensate for these phases, enhance gate robustness against parameter fluctuations, and simplify native gates for more efficient circuit decomposition. Our results pave the way toward the practical and scalable implementation of qudit-based quantum processors.
Tuning Wave-Particle Duality of Quantum Light by Generalized Photon Subtraction
This paper demonstrates a technique called generalized photon subtraction to create quantum light states that can be tuned between wave-like and particle-like properties. The researchers show this method can efficiently generate special quantum states needed for fault-tolerant optical quantum computing, particularly addressing bottlenecks in creating GKP qubits.
Key Contributions
- Experimental demonstration of tunable wave-particle duality control using generalized photon subtraction
- High-rate generation of intermediate quantum states optimized for fault-tolerant quantum computing thresholds
- Pathway to efficient GKP qubit generation addressing bottlenecks in optical quantum computing
View Full Abstract
Wave--particle duality is a hallmark of quantum mechanics. For bosonic systems, there exists a continuum of intermediate states bridging wave-like Schrödinger cat states and particle-like Fock states. Such states have recently been recognized as valuable resources for enhancing fault-tolerant quantum computation (FTQC) with propagating light. Here we experimentally demonstrate tunable generation of these intermediate states by employing generalized photon subtraction (GPS). By detecting up to three photons from squeezed-light sources with a photon-number-resolving detector, we continuously control the balance between wave- and particle-like features. This approach allows us to construct a spectral family of quantum states with high generation rates, optimized according to the required fault-tolerance threshold. Our results establish GPS as a versatile toolbox for tailoring non-Gaussian resources, opening a pathway to efficient Gottesman--Kitaev--Preskill (GKP) qubit generation and addressing a central bottleneck in optical quantum computing.
Optimized ancillary drive for fast Rydberg entangling gates
This paper develops a method to speed up two-qubit quantum gates in neutral atom systems by using an optimized ancillary laser drive that enhances the coupling between ground and Rydberg states. The technique reduces gate execution time by over 30% while maintaining high fidelity above 99.54% and reducing laser power requirements.
Key Contributions
- Development of optimized ancillary drive technique to enhance two-photon Rabi frequency in Rydberg atom systems
- Demonstration of >30% reduction in CZ gate execution time while maintaining >99.54% fidelity with reduced laser power requirements
View Full Abstract
Reaching fast and robust two-qubit gates with low infidelities has been an outstanding challenge for the long-term goal of useful quantum computers. Typically, optimizing the pulse shapes can minimize the gate infidelity and improve its robustness to certain types of errors; yet it remains incapable of speeding up the gate execution time, which is fundamentally restricted by the attainable Rabi frequency in a realistic setup. In this work, we develop a fast implementation of two-qubit CZ gates using an optimized ancillary drive to enhance the two-photon Rabi frequency between the ground and Rydberg states. This ancillary drive can work in an error-robustness framework without increasing the original gate infidelity in the absence of the drive. Considering experimentally feasible parameters for $^{87}$Rb atoms, we demonstrate that the execution time required for such CZ gates can be shortened by more than 30$\%$ compared to standard two-photon protocols, raising the gate fidelity above 0.9954 while taking into account all relevant error sources. Our results reduce the high-power laser requirement and unlock the potential toward fast, high-fidelity quantum operations for large-scale quantum computation with neutral atoms.
Correcting coherent quantum errors by going with the flow
This paper shows that coherent quantum errors (correlated errors across qubits) can be effectively managed in quantum error correction by using 'passive' correction strategies that track errors virtually rather than physically correcting them immediately. The authors demonstrate that this approach prevents coherent errors from compounding over multiple correction cycles, achieving performance comparable to simpler uncorrelated error models.
Key Contributions
- Demonstrates that passive error correction with virtual Pauli frame updates prevents coherent errors from compounding in quantum error correction
- Shows through theory and simulation that correlated Hamiltonian noise can achieve similar performance to uncorrelated Pauli noise when using proper correction strategies
View Full Abstract
The performance of a given quantum error correction (QEC) code depends upon the noise model that is assumed. Independent Pauli noise, applied after each quantum operation, is a simplistic noise model that is easy to simulate and understand in the context of stabilizer codes. Although such a noise model is artificial, it is equivalent to independent, random, unbiased qubit rotations. What about spatially or temporally correlated qubit rotations? Such a noise model is applicable to global operations (e.g., NMR or ESR), common control sources (e.g., lasers), or slow drift (e.g., charge or magnetic noise) in various qubit technologies. In the worst case, such errors can combine constructively and result in a post-correction failure rate that increases with the number of error correction cycles. However, we show that this worst case does not generally arise unless active corrective actions are taken while performing QEC. That is, by employing virtual Pauli frame updates ("passive" error correction) rather than physical corrections ("active" error correction), coherent errors do not compound appreciably. Starting in a random Pauli frame is also advantageous. In fact, through perturbation theory arguments and supporting numerical simulations, we show that the logical qubit performance beyond distance 3 for correlated single-qubit Hamiltonian noise models (i.e., global errant qubit rotations), when employing these "lazy" strategies, essentially matches the performance of a Pauli noise model with the same process fidelity (fidelity after one application). In a more general circuit model of noise, correlations may add constructively within syndrome extraction rounds, but Pauli frame randomization from passive error correction mitigates this effect across multiple rounds.
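The "passive" strategy — tracking corrections in software as a Pauli frame instead of applying physical recovery gates — can be sketched for a single qubit. This is a minimal illustration of the bookkeeping, not the paper's multi-qubit machinery:

```python
# Pauli operators up to phase, as symplectic bit pairs (x, z).
PAULI = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}
LABEL = {bits: name for name, bits in PAULI.items()}

class PauliFrame:
    """Tracks the net virtual correction for one qubit instead of applying
    physical recovery gates ("passive" error correction)."""
    def __init__(self):
        self.x, self.z = 0, 0

    def update(self, correction):
        # Pauli multiplication up to global phase is XOR of the bit pairs,
        # so successive corrections fold together and can even cancel.
        cx, cz = PAULI[correction]
        self.x ^= cx
        self.z ^= cz

    def reinterpret_z_measurement(self, outcome):
        # A pending virtual X flips a Z-basis measurement outcome in software.
        return outcome ^ self.x

    @property
    def label(self):
        return LABEL[(self.x, self.z)]
```

Two decoder-suggested corrections X then Z leave the frame at Y, and a further Y cancels it entirely — no physical gate, and hence no chance of a coherent over- or under-rotation, is ever applied.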
Entanglement-Induced Resilience of Quantum Dynamics
This paper demonstrates that quantum entanglement naturally protects quantum systems from errors and noise without requiring additional error correction schemes. The researchers show that as entanglement grows in quantum many-body systems, it automatically confines and suppresses the impact of local perturbations and errors.
Key Contributions
- Theoretical proof that entanglement entropy growth provides intrinsic protection against quantum errors
- Demonstration of a passive error suppression mechanism that requires no additional qubits or control overhead
- Quantitative correlation between entanglement entropy and degree of error protection in quantum dynamics
View Full Abstract
Quantum many-body devices suffer from imperfections that destabilize dynamics and limit scalability. We show that the dynamical growth of entanglement can intrinsically protect generic quantum dynamics against coherent and perturbative noise. Through rigorous theoretical analysis of general quantum dynamics and numerical simulations of spin chains and fermionic lattices, we prove that entanglement-entropy growth confines the influence of local Hamiltonian perturbations, thereby suppressing dynamical errors. The degree of protection correlates quantitatively with the entanglement entropy of the subsystems on which the perturbations act, and applies broadly to both analog quantum simulators and real-time control protocols. This entanglement-induced resilience is conceptually distinct from quantum error correction or dynamical decoupling: it passively leverages native many-body correlations without additional qubits, measurements, or control overhead. Our results reveal a generic mechanism linking entanglement growth to dynamical stability and provide practical guidelines for designing noise-resilient quantum devices.
Error correction with brickwork Clifford circuits
This paper proves that random 1D Clifford brickwork circuits can form good quantum error correction codes with logarithmic depth, providing both approximate and exact error correction bounds. The research uses statistical mechanics techniques to analyze these random quantum circuits and establishes mathematical limits on the circuit depth needed for effective error correction.
Key Contributions
- Proof that random 1D Clifford brickwork circuits form good approximate quantum error correction codes in logarithmic depth
- Matching upper and lower bounds for the circuit depth required for exact error correction in random 1D Clifford brickwork circuits
View Full Abstract
We prove that random 1D Clifford brickwork circuits form (in expectation) good approximate quantum error correction codes in logarithmic depth. Our proof makes use of the statistical mechanics techniques for random circuits developed by Dalzell et al. [PRX Quantum 3, 010333], adapted extensively to our own purpose. We also consider exact error correction, where we give matching upper and lower bounds for the required depth in which random 1D Clifford brickwork circuits become error correcting.
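The "brickwork" layout itself is easy to picture: alternating layers of nearest-neighbor two-qubit gates, offset like rows of bricks, with total depth scaling only logarithmically in the number of qubits. A minimal sketch of the gate layout — the constant factor in the depth is invented here for illustration; the paper proves the actual depth bounds:

```python
import math

def brickwork_layers(n, depth):
    """Gate layout of a 1D brickwork circuit on n qubits: layers alternate
    between pairing (0,1),(2,3),... and (1,2),(3,4),..., like brick rows."""
    return [[(i, i + 1) for i in range(layer % 2, n - 1, 2)]
            for layer in range(depth)]

n = 16
depth = 3 * math.ceil(math.log2(n))  # O(log n) depth; constant factor invented
layout = brickwork_layers(n, depth)
```

Each layer touches every qubit at most once, so a random Clifford gate can be drawn independently for each brick in a layer.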
Toward speedup without quantum coherent access
This paper proposes a hybrid classical-quantum algorithm that combines classical preprocessing of matrix data with quantum circuits to solve various computational problems. The approach aims to achieve quantum speedups for tasks like linear equation solving and data fitting while avoiding the strong input assumptions that limit many existing quantum algorithms.
Key Contributions
- Development of hybrid classical-quantum algorithm with logarithmic complexity in input dimension
- Demonstration of exponential speedups for certain matrices compared to existing methods
- End-to-end quantum data fitting application with practical prediction capabilities
- Block encoding technique that avoids strong input assumptions of previous quantum algorithms
View Full Abstract
Along with the development of quantum technology, finding useful applications of quantum computers has been a central pursuit. Although various quantum algorithms have been developed, many of them require strong input assumptions that are demanding on hardware. In particular, recent advances in dequantization have revealed that some quantum advantages are largely an artifact of strong input assumptions. In this work, we propose a variant of these algorithms that leverages both classical and quantum resources. Given classical knowledge of the entries of the matrix/vector of interest, a classical procedure pre-processes this information, which is then fed into a quantum circuit shown to be a block encoding of the matrix of interest. From this block encoding, we show how to tackle a wide range of problems, including principal component analysis, linear equation solving, Hamiltonian simulation, ground-state preparation, and data fitting. We also analyze our protocol, showing that both the classical and quantum procedures can achieve logarithmic complexity in the input dimension, implying its potential for near-term realization. We then discuss several implications and corollaries of our result. First, our results suggest there are certain matrices/Hamiltonians for which our method can provide exponential improvement over existing ones with respect to the sparsity. Regarding dense linear systems, our method achieves exponential speed-up with respect to the inverse of the error tolerance, compared to the best previously known quantum algorithm for dense systems. Last, and most importantly, regarding quantum data fitting, we show how the output of our quantum algorithms can be leveraged to predict unseen data. It thus provides an end-to-end application, which had been an open aspect of previous quantum data fitting algorithms.
Qudit stabiliser codes for $\mathbb{Z}_N$ lattice gauge theories with matter
This paper shows how lattice gauge theories with matter can be reformulated as quantum error correcting codes using qudits (quantum systems with N levels instead of just 2). The authors demonstrate that quantum error correction can reveal hidden mathematical relationships between different physical theories and show how to perform fault-tolerant quantum computations using these qudit codes.
Key Contributions
- Extended quantum error correction from qubits to qudits for lattice gauge theories with matter
- Demonstrated logical duality between different bosonic models through error correction mapping
- Showed implementation of universal fault-tolerant gates via state injection between compatible qudit codes
View Full Abstract
In this work we extend the connection between Quantum Error Correction (QEC) and Lattice Gauge Theories (LGTs) by showing that a $\mathbb{Z}_N$ gauge theory with prime dimension $N$ coupled to dynamical matter can be expressed as a qudit stabilizer code. Using the stabilizer formalism we show how to formulate an exact mapping of the encoded $\mathbb{Z}_N$ gauge theory onto two different bosonic models, uncovering a logical duality generated by error correction itself. From this perspective, quantum error correction provides a unifying language to expose dual descriptions of lattice gauge theories. In addition, we generalize earlier $\mathbb{Z}_2$ constructions on qubits to $\mathbb{Z}_N$ on $N$-level qudits and demonstrate how universal fault-tolerant gates can be implemented via state injection between compatible qudit codes.
Distilling Magic States in the Bicycle Architecture
This paper develops improved magic state distillation factories using Bivariate Bicycle (BB) codes that can operate within a single code block, achieving better space-time efficiency and lower error rates compared to conventional surface code approaches that require multiple code blocks and lattice surgery.
Key Contributions
- Development of magic state distillation factories on Bivariate Bicycle codes that execute within single code blocks
- Joint optimization framework for logical qubit mapping, gate scheduling, measurement nativization, and protocol compression via qubit recycling
- Demonstration of improved space-time volume and lower target error rates compared to leading surface code distillation factories
View Full Abstract
Magic State Distillation is considered to be one of the promising methods for supplying the non-Clifford resources required to achieve universal fault tolerance. Conventional MSD protocols implemented in surface codes often require multiple code blocks and lattice surgery rounds, resulting in substantial qubit overhead, especially at low target error rates. In this work, we present practical magic state distillation factories on Bivariate Bicycle (BB) codes that execute Pauli-measurement-based Clifford circuits inside a single BB code block. We formulate distillation circuit design as a joint optimization of logical qubit mapping, gate scheduling, measurement nativization, and protocol compression via qubit recycling. Based on detailed resource analysis and simulations, our BB factories have space-time volume comparable to that of leading distillation factories while delivering lower target error at a smaller qubit footprint, and are particularly compelling as second-round distillers following magic state cultivations.
A Unified Error Correction Code for Universal Quantum Computing with Identical Particles
This paper proposes a new quantum error correction approach for quantum computers built with identical particle qubits, showing that these systems interact differently with environmental noise than conventional qubits. The authors develop a unified framework where error correction can be implemented directly at the physical qubit level using non-unitary reversal operations.
Key Contributions
- Identification of fundamental differences between identical particle qubit-bath interactions and conventional qubit-bath interactions
- Development of a unified error correction framework using non-unitary reversal operations for fault-tolerant quantum computing
- Demonstration that dynamical decoupling and decoherence-free subspace structures remain effective in this new framework
View Full Abstract
We present a universal fault-tolerant quantum computing architecture based on identical particle qubits (IPQs), where we find that the first-order IPQ - bath interaction fundamentally differs from the conventional first-order qubit-bath interaction. This key distinction necessitates a redesign of existing strategies to fight decoherence. We propose that the simplest quantum error correction code can be realized directly within the physical qubit, provided that conventional correction and restoration are generalized beyond unitary operations to employ physically implementable reversal operations -- naturally placing logical and physical qubits on equal footing. We further demonstrate that dynamical decoupling (DD) remains effective within this unified framework, and that a decoherence-free subspace (DFS) -- like structure emerges. Unlike previous approximate treatments, our analytically solvable IPQ-Bath model enables rigorous testing of these strategies, with numerical simulations validating their effectiveness.
Generalized $\mathbb{Z}_p$ toric codes as qudit low-density parity-check codes
This paper develops improved quantum error correction codes by generalizing the Kitaev toric code to work with higher-dimensional quantum systems (qudits) and systematically searches for codes with better performance parameters, finding examples that achieve optimal trade-offs between code distance and information storage capacity.
Key Contributions
- Development of generalized Z_p toric codes for qudits with enhanced stabilizer structures
- Systematic search identifying optimal qudit LDPC codes with improved k*d²/n ratios
- Efficient computational method using Laurent polynomials and Gröbner basis to calculate logical dimensions
View Full Abstract
We study two-dimensional translation-invariant CSS stabilizer codes over prime-dimensional qudits on the square lattice under twisted boundary conditions, generalizing the Kitaev $\mathbb{Z}_p$ toric code by augmenting each stabilizer with two additional qudits. Using the Laurent-polynomial formalism, we adapt the Gröbner basis to compute the logical dimension $k$ efficiently, without explicitly constructing large parity-check matrices. We then perform a systematic search over various stabilizer realizations and lattice geometries for $p\in\{3,5,7,11\}$, identifying qudit low-density parity-check codes with the optimal finite-size performance. Representative examples include $[[242,10,22]]_3$ and $[[120,6,20]]_{11}$, both achieving $k d^{2}/n=20$. Across the searched regime, the best observed $k d^{2}$ at fixed $n$ increases with $p$, with an empirical relation $k d^{2} = 0.0541 \, n^{2}\ln p + 3.84 \, n$, compatible with a Bravyi--Poulin--Terhal-type tradeoff when the interaction range grows with system size.
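The finite-size figure of merit kd²/n quoted for the representative codes can be checked directly from their [[n, k, d]] parameters:

```python
def merit(n, k, d):
    """Finite-size figure of merit k*d^2/n used to compare the searched codes."""
    return k * d * d / n

# Representative codes reported in the abstract, as [[n, k, d]] parameters.
codes = {"[[242,10,22]]_3": (242, 10, 22), "[[120,6,20]]_11": (120, 6, 20)}
merits = {name: merit(*params) for name, params in codes.items()}
```

Both evaluate to exactly 20, matching the abstract's kd²/n = 20 claim.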
Experimental characterization of coherent and non-Markovian errors using tangent space decomposition
This paper develops and experimentally validates a new method for diagnosing different types of quantum errors in single-qubit gates using tangent-space decomposition. The technique can distinguish between coherent errors, Markovian noise, and non-Markovian noise from a single measurement, and was tested on a trapped ion quantum computing platform.
Key Contributions
- Novel tangent-space decomposition method for quantum error characterization that distinguishes coherent, Markovian, and non-Markovian errors
- Experimental validation on trapped ion platform showing practical application for quantum control system diagnostics
View Full Abstract
Accurate characterization of coherent and non-Markovian errors remains a central challenge in quantum information processing, as conventional benchmarking techniques typically rely on Markovian and time-independent noise assumptions. In practice, however, quantum devices exhibit both systematic coherent miscalibrations and temporally correlated fluctuations, which complicate error diagnosis and mitigation. Here, we apply a technique based on tangent-space decomposition to characterize such errors in single-qubit quantum gates implemented on a trapped-ion platform. Small imperfections in a quantum operation are treated as perturbations of the target quantum map, represented as tangent vectors in the space of quantum channels. This formulation enables a natural decomposition of the deviation into three components corresponding to coherent, Markovian, and non-Markovian processes. The relative weights of these components provide a quantitative measure of the contribution from each type of error mechanism, directly from a single tomographic snapshot. We experimentally validate this method on single-qubit gates implemented on a trapped $^{40}$Ca$^+$ ion, where control is achieved through laser-driven optical transitions. By analyzing experimentally reconstructed process matrices, expressed in the Pauli Transfer Matrix and Choi representations, we identify and quantify non-Markovian effects arising from controlled injection of slow fluctuations in the experimental environment. We also characterize deterministic coherent miscalibrations using the same technique. This approach provides a physically transparent and experimentally accessible tool for diagnosing complex error sources in quantum control systems.
CQM: Cyclic Qubit Mappings
This paper proposes Cyclic Qubit Mappings (CQM), a technique that dynamically moves logical qubits around quantum hardware during compilation to average out spatial and temporal error variations in quantum computers using surface codes and lattice surgery operations.
Key Contributions
- Dynamic remapping technique to mitigate hardware heterogeneity in quantum computers
- Method to achieve average logical error rates by moving qubits spatially using lattice surgery
View Full Abstract
Quantum computers show promise to solve select problems otherwise intractable on classical computers. However, noisy intermediate-scale quantum (NISQ) era devices are currently prone to various sources of error. Quantum error correction (QEC) shows promise as a path towards fault-tolerant quantum computing. Surface codes, in particular, have become ubiquitous throughout the literature for their efficacy as a quantum error-correcting code, and can execute quantum circuits via lattice surgery operations. Lattice surgery also allows logical qubits to maneuver around the architecture, if there is space for it. Hardware used for near-term demonstrations exhibits both spatially and temporally varying error rates, which carry over to the logical qubits. By maneuvering logical qubits around the topology, an average logical error rate (LER) can be enforced. We propose cyclic qubit mappings (CQM), a dynamic remapping technique implemented during compilation to mitigate hardware heterogeneity by expanding and contracting logical qubits. In addition to LER averaging, CQM shows initial promise given its minimal execution-time overhead and effective resource utilization.
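The averaging effect of cyclic remapping can be seen in a toy model with one noisy patch. The error rates are invented, and real CQM moves logical patches via lattice surgery rather than simple relabeling:

```python
def survival_after(cycles, rates, rotate):
    """Per-logical-qubit survival probability after many QEC cycles on patches
    with heterogeneous logical error rates. With rotate=True, each logical
    qubit visits every patch in turn, mimicking cyclic remapping."""
    n = len(rates)
    survival = [1.0] * n
    for c in range(cycles):
        for q in range(n):
            patch = (q + c) % n if rotate else q
            survival[q] *= 1 - rates[patch]
    return survival

rates = [0.001, 0.001, 0.001, 0.01]   # one noisy patch; rates are invented
static = survival_after(100, rates, rotate=False)
cyclic = survival_after(100, rates, rotate=True)
```

Statically mapped, the logical qubit stuck on the noisy patch dominates failures; with rotation, every logical qubit sees the same averaged rate, raising the worst-case survival at the cost of the best-case one.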
Electrical post-fabrication tuning of aluminum Josephson junctions at room temperature
This paper demonstrates a method to electrically tune aluminum Josephson junctions at room temperature using voltage pulses, allowing researchers to adjust qubit frequencies after fabrication. The technique can increase junction resistance by up to 270% while maintaining high qubit quality, providing a solution to frequency crowding problems in quantum processors.
Key Contributions
- Demonstrated controllable post-fabrication tuning of superconducting qubit frequencies while maintaining quality factors above 1 million
- Established practical protocols and limits for electrical tuning of Josephson junctions with up to 270% resistance increase
- Provided solution to frequency crowding in quantum processors through room-temperature junction modification
View Full Abstract
Josephson junctions are a key element of superconducting quantum technology, serving as the core building blocks of superconducting qubits. We present an experimental study on room-temperature electrical tuning of aluminum junctions, showing that voltage pulses can controllably increase their resistance and adjust the Josephson energy while maintaining qubit quality factors above 1 million. We find that the rate of resistance increase scales exponentially with pulse amplitude during manipulation, after which the spontaneous resistance increase scales proportionally to the amount of manipulation. We show that this spontaneous increase halts at cryogenic temperatures, and resumes again at room temperature. Using our stepwise protocol, we achieve up to a 270% increase in junction resistance, corresponding to a reduction of nearly 2 GHz of the qubit transition frequency. These results establish the achievable range, relaxation behavior, and practical limits of electrical tuning, enabling post-fabrication mitigation of frequency crowding in quantum processors.
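The quoted reduction of nearly 2 GHz from a 270% resistance increase is roughly consistent with standard transmon scaling, since the Josephson energy obeys E_J ∝ 1/R_N (Ambegaokar-Baratoff) and the transmon frequency is approximately f01 ≈ sqrt(8·E_J·E_C) − E_C. A sketch under assumed, illustrative parameters (a 4 GHz transmon with E_C = 0.25 GHz — not values taken from the paper):

```python
import math

def tuned_frequency(f0, resistance_increase_pct, Ec=0.25):
    """Transmon frequency (GHz) after junction tuning, assuming E_J scales as
    1/R_N and the standard relation f01 = sqrt(8*E_J*E_C) - E_C."""
    Ej0 = (f0 + Ec) ** 2 / (8 * Ec)        # invert the f01 relation for E_J
    Ej = Ej0 / (1 + resistance_increase_pct / 100)
    return math.sqrt(8 * Ej * Ec) - Ec

f0 = 4.0                            # GHz; assumed starting frequency
f_tuned = tuned_frequency(f0, 270)  # after the paper's 270% resistance increase
shift = f0 - f_tuned                # comes out close to 2 GHz
```

The square-root dependence on E_J means even a near-quadrupling of resistance roughly halves the frequency rather than quartering it.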
Differentiable Maximum Likelihood Noise Estimation for Quantum Error Correction
This paper develops a new method called differentiable Maximum Likelihood Estimation (dMLE) to better estimate noise in quantum computers, which is crucial for quantum error correction. The approach uses gradient descent to optimize noise parameters and demonstrates significant improvements in reducing logical error rates compared to existing methods when tested on Google's quantum processor.
Key Contributions
- Development of differentiable Maximum Likelihood Estimation framework for quantum noise estimation
- Demonstration of up to 30.6% reduction in logical error rates for repetition codes and 8.1% for surface codes
- Integration of exact Planar solver and novel Tensor Network architecture for tractable likelihood evaluation
View Full Abstract
Accurate noise estimation is essential for fault-tolerant quantum computing, as decoding performance depends critically on the fidelity of the circuit-level noise parameters. In this work, we introduce a differentiable Maximum Likelihood Estimation (dMLE) framework that enables exact, efficient, and fully differentiable computation of syndrome log-likelihoods, allowing circuit-level noise parameters to be optimized directly via gradient descent. Leveraging the exact Planar solver for repetition codes and a novel, simplified Tensor Network (TN) architecture combined with optimized contraction path finding for surface codes, our method achieves tractable and fully differentiable likelihood evaluation even for distance 5 surface codes with up to 25 rounds. Our method recovers the underlying error probabilities with near-exact precision in simulations and reduces logical error rates by up to 30.6(3)% for repetition codes and 8.1(2)% for surface codes on experimental data from Google's processor compared to previous state-of-the-art methods: correlation analysis and Reinforcement Learning (RL) methods. Our approach yields provably optimal, decoder-independent error priors by directly maximizing the syndrome likelihood, offering a powerful noise estimation and control tool for unlocking the full potential of current and future error-corrected quantum processors.
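The core move of dMLE — tune noise parameters by gradient ascent on an exact syndrome likelihood — can be boiled down to a one-parameter toy: a Bernoulli likelihood with an analytic gradient. The paper's actual contribution is making this tractable for full circuit-level models via planar solvers and tensor networks; this sketch only shows the optimization idea.

```python
import math

def fit_error_rate(fired, total, steps=2000, lr=5.0):
    """Toy differentiable MLE: recover a single flip probability from detector
    counts by gradient ascent on the Bernoulli log-likelihood, parametrized by
    a logit so the probability stays in (0, 1)."""
    theta = 0.0
    for _ in range(steps):
        p = 1 / (1 + math.exp(-theta))      # sigmoid
        # d/dtheta of fired*log(p) + (total - fired)*log(1 - p)
        grad = fired - total * p
        theta += lr * grad / total
    return 1 / (1 + math.exp(-theta))

p_hat = fit_error_rate(fired=137, total=10_000)   # converges to 137/10000
```

In the circuit-level setting the likelihood couples many parameters through the decoder graph, which is why exact, differentiable evaluation of the syndrome likelihood is the hard part.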
Calderbank-Shor-Steane codes on group-valued qudits
This paper introduces a new class of quantum error-correcting codes called group-CSS codes that work on qudits (quantum systems with more than two levels) based on any finite group. The codes generalize existing CSS codes and quantum double models, providing new theoretical frameworks for quantum error correction with non-Abelian groups.
Key Contributions
- Introduction of CSS-like codes on group-valued qudits for arbitrary finite groups
- Proof that certain group-CSS codes reduce to CW quantum double models
- Construction of intrinsically non-Abelian code families with asymptotically optimal rate and distances
- Generalization of quantum double models with defects using ghost vertices
View Full Abstract
Calderbank-Shor-Steane (CSS) codes are a versatile quantum error-correcting family built out of commuting $X$- and $Z$-type checks. We introduce CSS-like codes on $G$-valued qudits for any finite group $G$ that reduce to qubit CSS codes for $G = \mathbb{Z}_2$ yet generalize the Kitaev quantum double model for general groups. The $X$-checks of our group-CSS codes correspond to left and/or right multiplication by group elements, while $Z$-checks project onto solutions to group word equations. We describe quantum-double models on oriented two-dimensional CW complexes (which need not cellulate a manifold) and prove that, when $G$ is non-Abelian and simple, every $G$-covariant group-CSS code with suitably upper-bounded $Z$-check weight and lower-bounded $Z$-distance reduces to a CW quantum double. We describe the codespace and logical operators of CW quantum doubles via the same intuition used to obtain logical structure of surface codes. We obtain distance bounds for codes on non-Abelian simple groups from the graph underlying the CW complex, and construct intrinsically non-Abelian code families with asymptotically optimal rate and distances. Adding "ghost vertices" to the CW complex generalizes quantum double models with defects and rough boundary conditions whose logical structure can be understood without reference to non-Abelian anyons or defects. Several non-invertible symmetry-protected topological states, both with ordinary and higher-form symmetries, are the unique codewords of simply-connected CW quantum doubles with a single ghost vertex.
A Fine-Grained and Efficient Reliability Analysis Framework for Noisy Quantum Circuits
This paper develops a new framework for efficiently evaluating how reliable quantum circuits are when running on noisy quantum computers. The method introduces a 'Noise Proxy Circuit' that models cumulative noise effects and provides accurate reliability estimates without actually running the circuits, achieving results comparable to more computationally expensive fidelity measurements.
Key Contributions
- Introduction of Noise Proxy Circuit (NPC) abstraction for modeling cumulative noise effects without logical operations
- Development of Proxy Fidelity metric for quantifying both qubit-level and circuit-level reliability
- Analytical algorithm for estimating reliability under multiple noise channels (depolarizing, thermal relaxation, readout error)
- Execution-free, scalable framework achieving fidelity-level accuracy with low computational cost
View Full Abstract
Evaluating the reliability of noisy quantum circuits is essential for implementing quantum algorithms on noisy quantum devices. However, current quantum hardware exhibits diverse noise mechanisms whose compounded effects make accurate and efficient reliability evaluation challenging. While state fidelity is the most faithful indicator of circuit reliability, it is experimentally and computationally prohibitive to obtain. Alternative metrics, although easier to compute, often fail to accurately reflect circuit reliability, lack universality across circuit types, or offer limited interpretability. To address these challenges, we propose a fine-grained, scalable, and interpretable framework for efficient and accurate reliability evaluation of noisy quantum circuits. Our approach performs a state-independent analysis to model how circuit reliability progressively degrades during execution. We introduce the Noise Proxy Circuit (NPC), which removes all logical operations while preserving the complete sequence of noise channels, thereby providing an abstraction of cumulative noise effects. Based on the NPC, we define Proxy Fidelity, a reliability metric that quantifies both qubit-level and circuit-level reliability. We further develop an analytical algorithm to estimate Proxy Fidelity under depolarizing, thermal relaxation, and readout error channels. The proposed framework achieves fidelity-level reliability estimation while remaining execution-free, scalable, and interpretable. Experimental results show that our method accurately estimates circuit fidelity, with an average absolute difference (AAD) ranging from 0.031 to 0.069 across diverse circuits and devices.
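One plausible reading of the Proxy Fidelity construction — strip the logical gates, keep the sequence of noise channels, and multiply their fidelities — can be sketched for depolarizing noise. This is an assumed simplification, not the paper's exact definition; 1 − p/2 is the standard average fidelity of a single-qubit depolarizing channel with probability p.

```python
def qubit_proxy_fidelity(depolarizing_probs):
    """Multiply per-channel fidelities along one qubit's noise-proxy sequence
    (logical operations removed, noise channels kept)."""
    f = 1.0
    for p in depolarizing_probs:
        f *= 1 - p / 2   # average fidelity of single-qubit depolarizing noise
    return f

def circuit_proxy_fidelity(per_qubit_probs):
    """Circuit-level proxy: product of the per-qubit proxy fidelities."""
    f = 1.0
    for probs in per_qubit_probs:
        f *= qubit_proxy_fidelity(probs)
    return f
```

Because the proxy ignores the logical operations entirely, it is state-independent and can be evaluated analytically, which is what makes the framework execution-free.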
Universal Protection of Quantum States from Decoherence
This paper presents a universal method to protect quantum states from decoherence by temporarily moving quantum information to a protected ancillary system, without requiring prior knowledge of the quantum state. The researchers experimentally demonstrated this protection protocol using quantum optics, showing it can preserve coherence for arbitrary quantum states.
Key Contributions
- Development of a state-independent quantum decoherence protection protocol that works without prior knowledge of the quantum state
- Experimental validation of universal quantum state protection using quantum optical platform with ancillary degrees of freedom
View Full Abstract
The fragility of quantum coherence fundamentally limits the scalability of quantum technologies, as unavoidable environmental interactions induce decoherence and rapidly degrade quantum properties. The Quantum Zeno Effect offers a powerful route to suppress quantum evolution and protect coherence through frequent measurements, irrespective of the underlying dynamics. However, existing implementations require prior knowledge of the quantum state, severely restricting their applicability. Here we introduce a state- and dynamics-independent protection protocol embedding the system in a larger Hilbert space, temporarily swapping the quantum information from its original degree of freedom to a decoherence-free ancillary one. We experimentally validate the protocol on a quantum optical platform, demonstrating robust preservation of coherence and purity for arbitrary polarization qubits under decoherence, thereby enabling the universal safeguarding of unknown quantum states.
Separating Non-Interactive Classical Verification of Quantum Computation from Falsifiable Assumptions
This paper proves that no non-interactive classical verification protocol for quantum computation can be built from falsifiable cryptographic assumptions via quantum black-box reductions. The authors show a fundamental limitation: verifying quantum computations with a single message exchange cannot rely on standard assumptions such as Learning with Errors (LWE), on which most cryptographic systems are based.
Key Contributions
- Proves impossibility of non-interactive classical verification of QMA problems under falsifiable assumptions using quantum black-box reductions
- Establishes fundamental limitations for quantum verification protocols in the plain model with single-message communication
View Full Abstract
Mahadev [SIAM J. Comput. 2022] introduced the first protocol for classical verification of quantum computation based on the Learning-with-Errors (LWE) assumption, achieving a 4-message interactive scheme. This breakthrough naturally raised the question of whether fewer messages are possible in the plain model. Despite its importance, this question has remained unresolved. In this work, we prove that there is no quantum black-box reduction of non-interactive classical verification of quantum computation of $\textsf{QMA}$ to any falsifiable assumption. Here, "non-interactive" means that after an instance-independent setup, the protocol consists of a single message. This constitutes a strong negative result given that falsifiable assumptions cover almost all standard assumptions used in cryptography, including LWE. Our separation holds under the existence of a $\textsf{QMA} \text{-} \textsf{QCMA}$ gap problem. Essentially, these problems require a slightly stronger assumption than $\textsf{QMA}\neq \textsf{QCMA}$. To support the existence of such problems, we present a construction relative to a quantum unitary oracle.
Distributed Hyperbolic Floquet Codes under Depolarizing and Erasure Noise
This paper develops distributed quantum error correcting codes based on hyperbolic geometry that can operate across multiple quantum processing units connected by shared entanglement. The authors test these codes under various noise conditions and demonstrate their effectiveness for scaling quantum computers beyond single-device architectures.
Key Contributions
- Introduction of new hyperbolic Floquet code families from {10,3} and {12,3} tessellations
- Demonstration of distributed quantum error correction across multiple QPUs with measurable pseudo-thresholds under realistic noise models
View Full Abstract
Distributing qubits across quantum processing units (QPUs) connected by shared entanglement enables scaling beyond monolithic architectures. Hyperbolic Floquet codes use only weight-2 measurements and are good candidates for distributed quantum error correcting codes. We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations via the Wythoff kaleidoscopic construction with the Low-Index Normal Subgroups (LINS) algorithm and distribute them across QPUs via spectral bisection. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes. We simulate these distributed codes under four noise models: depolarizing, SDEM3, correlated EM3, and erasure. With depolarizing noise ($p_{\text{local}} = 0.03\%$), fine-grained codes achieve non-local pseudo-thresholds up to 3.0% for $\{8,3\}$, 3.0% for $\{10,3\}$, and 1.75% for $\{12,3\}$. Correlated EM3 yields pseudo-thresholds up to 0.75% for $\{8,3\}$, 0.75% for $\{10,3\}$, and 0.50% for $\{12,3\}$; crossing-based thresholds from same-$k$ families are roughly 1.75–2.9% across all tessellations. Using the SDEM3 model, fine-grained codes achieve distributed pseudo-thresholds of 1.75% for $\{8,3\}$, 1.25% for $\{10,3\}$, and 1.00% for $\{12,3\}$. Under erasure noise motivated by spin-optical architectures, thresholds at 1% local loss are 35–40% for $\{8,3\}$, 30–35% for $\{10,3\}$, and 25–30% for $\{12,3\}$.
Exact quantum decision diagrams with scaling guarantees for Clifford+$T$ circuits and beyond
This paper develops exact quantum decision diagrams that avoid floating-point errors by using algebraic representations for complex numbers, specifically for analyzing Clifford+T quantum circuits. The authors prove theoretical scaling guarantees showing that their method's runtime and memory usage scale exponentially only with the number of T gates, while remaining polynomial in the number of Clifford gates and qubits.
Key Contributions
- First exact algebraic representation for quantum decision diagrams that eliminates floating-point errors
- Theoretical scaling guarantees proving runtime bounds of 2^t · poly(g,n) for quantum circuit simulation
- Connection between quantum state stabilizer nullity and decision diagram width for Clifford+T circuits
View Full Abstract
A decision diagram (DD) is a graph-like data structure for homomorphic compression of Boolean and pseudo-Boolean functions. Over the past decades, decision diagrams have been successfully applied to verification, linear algebra, stochastic reasoning, and quantum circuit analysis. Floating-point errors have, however, significantly slowed down practical implementations of real- and complex-valued decision diagrams. In the context of quantum computing, attempts to mitigate this numerical instability have thus far lacked theoretical scaling guarantees and have had only limited success in practice. Here, we focus on the analysis of quantum circuits consisting of Clifford gates and $T$ gates (a common universal gate set). We first hand-craft an algebraic representation for complex numbers, which replaces the floating-point coefficients in a decision diagram. Then, we prove that the sizes of these algebraic representations are linearly bounded in the number of $T$ gates and qubits, and constant in the number of Clifford gates. Furthermore, we prove that both the runtime and the number of nodes of decision diagrams are upper bounded as $2^t \cdot poly(g, n)$, where $t$ ($g$) is the number of $T$ gates (Clifford gates) and $n$ the number of qubits. Our proofs are based on a $T$-count dependent characterization of the density matrix entries of quantum states produced by circuits with Clifford+$T$ gates, and uncover a connection between a quantum state's stabilizer nullity and its decision diagram width. With an open source implementation, we demonstrate that our exact method resolves the inaccuracies occurring in floating-point-based counterparts and can outperform them due to lower node counts. Our contributions are, to the best of our knowledge, the first scaling guarantees on the runtime of (exact) quantum decision diagram simulation for a universal gate set.
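One standard way to make Clifford+$T$ amplitude arithmetic exact, in the spirit of (though not necessarily identical to) the paper's hand-crafted representation, is to work in the ring $\mathbb{Z}[\omega]$ with $\omega = e^{i\pi/4}$: up to powers of $1/\sqrt{2}$, Clifford+$T$ amplitudes live in this ring, and the relation $\omega^4 = -1$ closes multiplication without any floating-point error. A minimal sketch:

```python
# Sketch of exact amplitude arithmetic: store a + b*w + c*w^2 + d*w^3,
# with w = exp(i*pi/4), as the integer tuple (a, b, c, d). The reduction
# rule omega^4 = -1 keeps products inside the ring, exactly.
import cmath

def mul(x, y):
    """Exact product of two Z[omega] elements given as 4-tuples."""
    out = [0, 0, 0, 0]
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            k = i + j
            if k < 4:
                out[k] += xi * yj
            else:  # reduce with omega^4 = -1
                out[k - 4] -= xi * yj
    return tuple(out)

def to_complex(x):
    """Numerical value, for checking only."""
    w = cmath.exp(1j * cmath.pi / 4)
    return sum(c * w**k for k, c in enumerate(x))

# omega^2 * omega^2 = omega^4 = -1, exactly, with no drift
assert mul((0, 0, 1, 0), (0, 0, 1, 0)) == (-1, 0, 0, 0)
```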
A Shadow Enhanced Greedy Quantum Eigensolver
This paper introduces SEGQE, a new quantum algorithm that efficiently finds the ground state (lowest energy state) of quantum systems by using classical shadows to evaluate many potential quantum operations in parallel, then greedily selecting the best one at each step. The method is designed to work well on early fault-tolerant quantum computers where measurements are expensive.
Key Contributions
- Development of SEGQE algorithm that uses classical shadows for measurement-efficient ground-state preparation
- Rigorous theoretical analysis providing worst-case sample complexity bounds with logarithmic scaling
- Numerical demonstration of linear scaling with system size on transverse-field Ising models and random Hamiltonians
View Full Abstract
While ground-state preparation is expected to be a primary application of quantum computers, it is also an essential subroutine for many fault-tolerant algorithms. In early fault-tolerant regimes, logical measurements remain costly, motivating adaptive, shot-frugal state-preparation strategies that efficiently utilize each measurement. We introduce the Shadow Enhanced Greedy Quantum Eigensolver (SEGQE) as a greedy, shadow-assisted framework for measurement-efficient ground-state preparation. SEGQE uses classical shadows to evaluate, in parallel and entirely in classical post-processing, the energy reduction induced by large collections of local candidate gates, greedily selecting at each step the gate with the largest estimated energy decrease. We derive rigorous worst-case per-iteration sample-complexity bounds for SEGQE, exhibiting logarithmic dependence on the number of candidate gates. Numerical benchmarks on finite transverse-field Ising models and ensembles of random local Hamiltonians demonstrate convergence in a number of iterations that scales approximately linearly with system size, while maintaining high-fidelity ground-state approximations and competitive energy estimates. Together, our empirical scaling laws and rigorous per-iteration guarantees establish SEGQE as a measurement-efficient state-preparation primitive well suited to early fault-tolerant quantum computing architectures.
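The greedy selection step at the heart of SEGQE can be sketched very simply. Here hard-coded numbers stand in for the shadow-based energy estimates the paper computes in classical post-processing; the gate labels and values are purely illustrative.

```python
# Minimal sketch of a greedy gate-selection step: pick the candidate whose
# estimated post-application energy is lowest (illustrative stand-in for
# the paper's shadow-based estimator).

def greedy_pick(candidate_energies: dict[str, float], current_energy: float):
    """Return (gate, energy) for the largest estimated energy decrease,
    or (None, current_energy) if no candidate improves it."""
    best_gate, best_energy = None, current_energy
    for gate, energy in candidate_energies.items():
        if energy < best_energy:
            best_gate, best_energy = gate, energy
    return best_gate, best_energy

estimates = {"RX(q0)": -1.2, "RY(q1)": -1.5, "RZZ(q0,q1)": -1.4}
assert greedy_pick(estimates, current_energy=-1.0) == ("RY(q1)", -1.5)
```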
Fault-tolerant preparation of arbitrary logical states in the cat code
This paper presents a method for preparing arbitrary logical quantum states using a four-legged cat code that can suppress major types of quantum errors. The approach achieves high fidelity (error rates around 10^-4) and is designed to work with current superconducting quantum hardware.
Key Contributions
- Complete framework for fault-tolerant preparation of arbitrary logical states in cat codes
- Demonstration of quadratic error suppression confirming first-order error elimination
- Scalable protocol compatible with current superconducting hardware achieving 10^-4 logical infidelities
View Full Abstract
Preparing high-fidelity logical states is a central challenge in fault-tolerant quantum computing, yet existing approaches struggle to balance control complexity against resource overhead. Here, we present a complete framework for the fault-tolerant preparation of arbitrary logical states encoded in the four-legged cat code. This framework is engineered to suppress the dominant incoherent errors, including excitation decay and dephasing in both the bosonic mode and the ancilla via error detection. Numerical simulations with experimentally realistic parameters on a 3D superconducting cavity platform yield logical infidelities on the order of $10^{-4}$. A scaling analysis confirms that the logical error rate grows nearly quadratically with the physical error rate, confirming that all first-order errors are fully suppressed. Our protocol is compatible with current hardware and is scalable to multiple bosonic modes, providing a resource-efficient foundation for magic state preparation and higher-level concatenated quantum error correction.
Near-single-domain superconducting aluminum films on GaAs(111)A with exceptional crystalline quality for scalable quantum circuits
This paper demonstrates a breakthrough method for growing extremely high-quality aluminum superconducting films on semiconductor substrates using molecular beam epitaxy, achieving unprecedented crystalline uniformity that could enable more reliable and scalable superconducting quantum circuits.
Key Contributions
- Achieved record-low twin-domain ratios of 0.00005 for aluminum films on GaAs substrates
- Demonstrated exceptional crystalline quality with narrow FWHM values and atomically smooth interfaces
- Established a scalable materials platform for high-coherence superconducting qubits with critical temperatures approaching bulk values
View Full Abstract
We have reproducibly grown near-single-domain superconducting aluminum (Al) films on GaAs(111)A wafers using molecular beam epitaxy. Synchrotron X-ray diffraction revealed twin-domain ratios of 0.00005 and 0.0003 for 19.4-nm- and 9.6-nm-thick films, respectively, the lowest reported for Al on any substrate and long considered unattainable for practical device platforms. Azimuthal scans across off-normal Al$\{11\bar{1}\}$ reflections exhibit narrow full width at half maximum (FWHM) values down to $0.55^\circ$, unmatched by epi-Al grown by any other method. Normal scans showed a well-defined (111) orientation with pronounced Pendellösung fringes, and $\theta$-rocking-curve FWHM values down to $0.018^\circ$; the former indicates abrupt film-substrate and oxide-film interfaces. Electron backscatter diffraction mapping confirms macroscopic in-plane uniformity and the absence of $\Sigma 3$ twin domains. Atomic force microscopy and scanning transmission electron microscopy confirmed atomically smooth surfaces and abrupt heterointerfaces. The films exhibit critical temperatures approaching bulk values, establishing a materials platform for scalable, high-coherence superconducting qubits.
Fault-tolerant interfaces for quantum LDPC codes
This paper develops fault-tolerant interfaces for quantum LDPC codes that enable quantum state preparation with constant space overhead, improving on previous methods that required polylogarithmic overhead. The work focuses on creating efficient protocols for changing protection levels in quantum error correction codes while maintaining fault tolerance.
Key Contributions
- Development of fault-tolerant interfaces for quantum LDPC codes with constant space overhead
- Construction of decoders that can change protection levels by arbitrary amounts while preventing error accumulation
View Full Abstract
The preparation of a quantum state using a noisy quantum computer (with gate noise strength $\delta$) will necessarily affect an $O(\delta)$-fraction of the qubits, no matter which protocol is used. Here, we show that fault-tolerant quantum state preparation can be achieved with constant space overhead, improving on previous constructions requiring polylogarithmic overhead. To achieve this, we add to the toolbox of fault-tolerant schemes for circuits with quantum input and output. More specifically, we construct fault-tolerant interfaces that decrease the level of protection for quantum low-density parity-check (LDPC) codes. When information is encoded in multiple code blocks, our interfaces have constant space overhead. In our decoder constructions that change the level of protection by an arbitrary amount, we circumvent bottlenecks from error pileup and overhead by gradually lowering the level of encoding while simultaneously increasing the number of blocks on which decoding is carried out.
Adaptive Aborting Schemes for Quantum Error Correction Decoding
This paper introduces adaptive abort schemes for quantum error correction that can terminate syndrome measurements early when errors are likely, reducing computational overhead while maintaining or improving error correction performance. The methods show 5-60% efficiency improvements over standard approaches across different quantum error correcting codes.
Key Contributions
- Introduction of first adaptive abort schemes for quantum error correction (AdAbort and OSLA)
- Demonstration of 5-60% efficiency improvements in decoder performance for surface and color codes
- Real-time syndrome-based decision making framework that balances measurement costs against restart costs
View Full Abstract
Quantum error correction (QEC) is essential for realizing fault-tolerant quantum computation. Current QEC controllers execute all scheduled syndrome (parity-bit) measurement rounds before decoding, even when early syndrome data indicates that the run will result in an error. The resulting excess measurements increase the decoder's workload and system latency. To address this, we introduce an adaptive abort module that simultaneously reduces decoder overhead and suppresses logical error rates in surface codes and color codes under an existing QEC controller. The key idea is that initial syndrome information allows the controller to terminate risky shots early before additional resources are spent. An effective scheme balances the cost of further measurement against the restart cost and thus increases decoder efficiency. Adaptive abort schemes dynamically adjust the number of syndrome measurement rounds per shot using real-time syndrome information. We consider three schemes: fixed-depth (FD) decoding (the standard non-adaptive approach used in current state-of-the-art QEC controllers), and two adaptive schemes, AdAbort and One-Step Lookahead (OSLA) decoding. For surface and color codes under a realistic circuit-level depolarizing noise model, AdAbort substantially outperforms both OSLA and FD, yielding higher decoder efficiency across a broad range of code distances. Numerically, as the code distance increases from 5 to 15, AdAbort yields an improvement that increases from 5% to 35% for surface codes and from 7% to 60% for color codes. To our knowledge, these are the first adaptive abort schemes considered for QEC. Our results highlight the potential importance of abort rules for increasing efficiency as we scale to large, resource-intensive quantum architectures.
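The cost trade-off the abstract describes, continuing to measure versus restarting the shot, can be sketched as a toy decision rule. The paper's AdAbort and OSLA schemes use real-time syndrome information; the failure estimate and cost constants below are illustrative stand-ins, not the paper's model.

```python
# Toy cost-based abort rule: abort the shot when finishing it costs more
# in expectation (remaining measurement rounds plus a likely restart
# anyway) than restarting immediately. Costs are illustrative.

def should_abort(p_fail_est: float, rounds_left: int,
                 round_cost: float = 1.0, restart_cost: float = 10.0) -> bool:
    """Compare the expected cost of continuing against restarting now."""
    cost_continue = rounds_left * round_cost + p_fail_est * restart_cost
    return cost_continue > restart_cost

assert not should_abort(p_fail_est=0.1, rounds_left=5)  # cheap to finish
assert should_abort(p_fail_est=0.9, rounds_left=5)      # likely wasted shot
```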
Device for MHz-rate rastering of arbitrary 2D optical potentials
This paper presents a new optical device that can rapidly manipulate neutral atom arrays by creating arbitrary 2D optical patterns at MHz refresh rates, overcoming current limitations of existing systems that can only move atoms row-by-row or column-by-column. The device enables simultaneous transport of atomic qubits in any direction with 40x40 resolution, scalable to 100x100.
Key Contributions
- Design of MHz-rate optical rastering device for arbitrary 2D patterns in neutral atom arrays
- Demonstration of enhanced qubit connectivity through simultaneous multi-directional atomic qubit transport
View Full Abstract
Current architectures for neutral-atom arrays utilize devices such as acousto-optic deflectors (AODs) and spatial light modulators (SLMs) to multiplex a single classical control line into N qubit control lines. Dynamic control is speed-limited by the response time of AODs, and geometrically constrained to respect a product structure, limiting motion to row-by-row or column-by-column moves. We propose an optical rastering device that can produce any 2D pattern, not limited to grids, at 1 MHz refresh rates. We demonstrate a design with a resolution of 40 x 40 that can be further scaled up to 100 x 100 to match existing and future neutral atom devices. The ability to simultaneously transport atomic qubits in arbitrary directions will enhance qubit connectivity, enable more efficient circuits, and may have broader applications ranging from LiDAR to fluorescence microscopy.
Hardware-Agnostic Modeling of Quantum Side-Channel Leakage via Conditional Dynamics and Learning from Full Correlation Data
This paper studies quantum side-channel attacks where an adversarial probe qubit monitors a target qubit during hidden quantum gate sequences to extract secret information. The authors develop both theoretical models and machine learning methods to predict optimal coupling strengths for such attacks and demonstrate how quantum information can leak through side channels.
Key Contributions
- Hardware-agnostic framework for modeling quantum side-channel leakage through probe qubits
- Theoretical prediction of optimal 'Goldilocks' coupling bands for side-channel attacks based on circuit depth
- Machine learning decoder that can extract gate sequences from correlation data across different coupling and noise conditions
View Full Abstract
We study a sequential coherent side-channel model in which an adversarial probe qubit interacts with a target qubit during a hidden gate sequence. Repeating the same hidden sequence for $N$ shots yields an empirical full-correlation record: the joint histogram $\widehat{P}_g(b)$ over probe bit-strings $b\in\{0,1\}^k$, which is a sufficient statistic for classical post-processing under independent and identically distributed (i.i.d.) shots but grows exponentially with circuit depth. We first describe this sequential probe framework in a coupling- and measurement-agnostic form, emphasizing the scaling of the observation space and why exact analytic distinguishability becomes intractable with circuit depth. We then specialize to a representative instantiation (a controlled-rotation probe coupling with fixed projective readout and a commuting $R_x$ gate alphabet) where we (i) derive a depth-dependent leakage envelope whose maximizer predicts a "Goldilocks" coupling band as a function of depth, and (ii) provide an operational decoder via machine learning: a single parameter-conditioned map from $\widehat{P}_g$ to Alice's per-step gate labels that generalizes across coupling and noise settings without retraining. Experiments over broad coupling and noise grids show that strict sequence recovery concentrates near the predicted coupling band and degrades predictably under decoherence and finite-shot estimation.
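The empirical full-correlation record is just a normalized histogram over observed probe bit-strings, which is easy to sketch. The shot data below is synthetic; note that the observation space grows as $2^k$ with circuit depth, which is exactly the scaling issue the abstract stresses.

```python
# Sketch of forming the empirical record hat{P}_g(b): a normalized
# histogram over length-k probe bit-strings collected across N shots.
from collections import Counter

def empirical_histogram(shots: list[str]) -> dict[str, float]:
    """Normalized joint histogram over observed probe bit-strings."""
    counts = Counter(shots)
    n = len(shots)
    return {b: c / n for b, c in counts.items()}

shots = ["01", "01", "11", "00"]  # k = 2, N = 4 (synthetic)
hist = empirical_histogram(shots)
assert abs(hist["01"] - 0.5) < 1e-12
```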
Self-dual Stacked Quantum Low-Density Parity-Check Codes
This paper develops a new method for constructing self-dual quantum low-density parity-check (qLDPC) codes by stacking non-self-dual codes, creating several new code families with improved parameters. The work addresses a key challenge in fault-tolerant quantum computing by enabling easier implementation of logical operations while maintaining high encoding rates and error correction capabilities.
Key Contributions
- Novel stacking method for constructing self-dual qLDPC codes from non-self-dual codes
- Development of multiple new code families including double-chain bicycle codes and double-layer bivariate bicycle codes
- Numerical demonstration of improved logical failure rates and high pseudo-thresholds under circuit-level noise
View Full Abstract
Quantum low-density parity-check (qLDPC) codes are promising candidates for fault-tolerant quantum computation due to their high encoding rates and distances. However, implementing logical operations using qLDPC codes presents significant challenges. Previous research has demonstrated that self-dual qLDPC codes facilitate the implementation of transversal Clifford gates. Here we introduce a method for constructing self-dual qLDPC codes by stacking non-self-dual qLDPC codes. Leveraging this methodology, we develop double-chain bicycle codes, double-layer bivariate bicycle (BB) codes, double-layer twisted BB codes, and double-layer reflection codes, many of which exhibit favorable code parameters. Additionally, we conduct numerical calculations to assess the performance of these codes as quantum memory under the circuit-level noise model, revealing that the logical failure rate can be significantly reduced with high pseudo-thresholds.
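The self-dual CSS property the abstract targets can be illustrated concretely: a CSS code is valid when every X-check commutes with every Z-check ($H_X H_Z^T = 0$ mod 2), and self-dual when $H_X = H_Z$. The sketch below checks this for the Steane code built from the [7,4,3] Hamming parity-check matrix, a textbook self-dual example, not one of the paper's stacked constructions.

```python
# Verify the CSS commutation condition and self-duality for the Steane
# code: its X- and Z-checks are both the Hamming [7,4,3] parity checks.

def is_valid_css(hx, hz) -> bool:
    """Every X-check row must have even overlap with every Z-check row."""
    return all(
        sum(a * b for a, b in zip(rx, rz)) % 2 == 0
        for rx in hx for rz in hz
    )

hamming = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
assert is_valid_css(hamming, hamming)  # Steane: H_X = H_Z, self-dual CSS
```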
Realizing a Universal Quantum Gate Set via Double-Braiding of SU(2)k Anyon Models
This paper investigates using double-braiding techniques with SU(2)k anyon models to implement universal quantum gates for topological quantum computing. The authors show that their approach can synthesize both single-qubit and two-qubit gates while requiring manipulation of fewer physical anyons than previous methods.
Key Contributions
- Derived explicit double elementary braiding matrices for SU(2)k anyon models and demonstrated universal gate synthesis
- Developed a protocol that reduces the number of physical anyons requiring manipulation in topological quantum computation
- Achieved fault-tolerant accuracy for single-qubit gates using GA-enhanced Solovay-Kitaev Algorithm with only 2-level decomposition
View Full Abstract
We systematically investigate the implementation of a universal gate set via double-braiding within SU(2)k anyon models. The explicit forms of the double elementary braiding matrices (DEBMs) in these models are derived from the F-matrices and R-symbols obtained via the q-deformed representation theory of SU(2). Using these DEBMs, standard single-qubit gates are synthesized up to a global phase by a Genetic Algorithm-enhanced Solovay-Kitaev Algorithm (GA-enhanced SKA), achieving the accuracy required for fault-tolerant quantum computation with only 2-level decomposition. For two-qubit entangling gates, the Genetic Algorithm (GA) yields braidwords of 30 braiding operations that approximate the local equivalence class [CNOT]. Theoretically, we demonstrate that performing double-braiding in a three-anyon (six-anyon) encoding of a single qubit (two qubits) is topologically equivalent to a protocol requiring the physical manipulation of only one (three) anyons to execute arbitrary braids. Our numerical results provide strong evidence that double-braiding in SU(2)k anyon models is capable of universal quantum computation. Moreover, the proposed protocol offers a potential new strategy for significantly reducing the number of non-Abelian anyons that need to be physically manipulated in future braiding-based topological quantum computations (TQC).
Tensor Decomposition for Non-Clifford Gate Minimization
This paper develops new algebraic methods to minimize the number of non-Clifford gates (specifically Toffoli and T gates) needed in quantum circuits by connecting the optimization problem to tensor decomposition over finite fields. The methods achieve better or equal results compared to previous approaches while being dramatically more computationally efficient.
Key Contributions
- Development of algebraic methods connecting Toffoli gate minimization to tensor decomposition over F_2
- Significant computational efficiency improvements achieving same results with single CPU vs thousands of TPUs
- Matching or improving all reported results on standard benchmarks for both Toffoli and T-count optimization
View Full Abstract
Fault-tolerant quantum computation requires minimizing non-Clifford gates, whose implementation via magic state distillation dominates the resource costs. While $T$-count minimization is well-studied, dedicated $CCZ$ factories shift the natural target to direct Toffoli minimization. We develop algebraic methods for this problem, building on a connection between Toffoli count and tensor decomposition over $\mathbb{F}_2$. On standard benchmarks, these methods match or improve all reported results for both Toffoli and $T$-count, with most circuits completing in under a minute on a single CPU instead of thousands of TPUs used by prior work.
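The connection the abstract builds on can be shown in miniature: a CCZ applied to parities $a, b, c$ contributes the rank-one tensor $a \otimes b \otimes c$ over $\mathbb{F}_2$, and minimizing Toffoli/CCZ count amounts to finding a minimal rank-one decomposition of the circuit's 3-tensor. The toy below only verifies a candidate decomposition; it is not the paper's minimization algorithm.

```python
# Toy illustration of the tensor-decomposition viewpoint over F_2.
import itertools

def rank_one(a, b, c):
    """Rank-one 3-tensor over F_2 from three 0/1 vectors."""
    n = len(a)
    return [[[a[i] & b[j] & c[k] for k in range(n)]
             for j in range(n)] for i in range(n)]

def add_mod2(t, s):
    """Entrywise mod-2 sum of two 3-tensors."""
    n = len(t)
    return [[[t[i][j][k] ^ s[i][j][k] for k in range(n)]
             for j in range(n)] for i in range(n)]

e = lambda i, n=3: [1 if j == i else 0 for j in range(n)]
target = rank_one(e(0), e(1), e(2))  # tensor of one CCZ on qubits 0, 1, 2

# subtracting one matching rank-one term leaves the zero tensor,
# certifying a decomposition with a single CCZ
residual = add_mod2(target, rank_one(e(0), e(1), e(2)))
assert all(residual[i][j][k] == 0
           for i, j, k in itertools.product(range(3), repeat=3))
```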
Do we have a quantum computer? Expert perspectives on current status and future prospects
This paper presents interviews with quantum computing experts about the current state of quantum computing technology, timelines for fault-tolerant systems, and realistic expectations for future quantum computer development and deployment.
Key Contributions
- Expert consensus on realistic timelines for fault-tolerant quantum computers (decade for small systems, several decades for scalable systems)
- Assessment that quantum computers will remain specialized tools in data centers rather than personal devices
- Evaluation of current NISQ-era machines as legitimate quantum computers despite limitations
View Full Abstract
The rapid growth of quantum information science and technology (QIST) in the 21st century has created both excitement and uncertainty about the field's trajectory. This qualitative study presents perspectives from leading quantum researchers, who are educators, on fundamental questions frequently posed by students, the public, and the media regarding QIST. Through in-depth interviews, we explored several issues related to QIST including the following key areas: the current state of quantum computing in the noisy intermediate-scale quantum (NISQ) era and timelines for fault-tolerant quantum computers, the feasibility of personal quantum computers in our pockets, and promising qubit architectures for future development. Our findings reveal diverse yet convergent perspectives on these issues. While experts agree that the machines with physical qubits being built today should be called quantum computers, most estimated that it will take a decade to build a small fault-tolerant quantum computer, and several decades to achieve scalable systems capable of running Shor's factoring algorithm with quantum advantage. Regarding carrying a quantum computer in the pocket, experts viewed quantum computers as specialized tools that will remain in central locations such as data centers and can be accessed remotely for applications for which they are particularly effective compared to classical computers. Quantum researchers suggested that multiple platforms show promise, with no clear winner emerging. These insights provide valuable guidance for educators, policymakers, and the broader community in establishing realistic expectations for developments in this exciting field, and can help educators clarify student doubts about these important yet often confusing issues related to quantum technologies.
Beyond Reinforcement Learning: Fast and Scalable Quantum Circuit Synthesis
This paper presents a new method for quantum circuit synthesis that uses supervised learning to estimate the minimum description length of quantum operations and combines this with beam search to find efficient gate sequences. The approach achieves faster synthesis times and better success rates than existing methods while using a lightweight model that generalizes across different numbers of qubits.
Key Contributions
- Novel supervised learning approach for approximating minimum description length of residual unitaries
- Lightweight model with zero-shot generalization across different qubit counts
- Improved synthesis speed and success rates compared to state-of-the-art methods
View Full Abstract
Quantum unitary synthesis addresses the problem of translating abstract quantum algorithms into sequences of hardware-executable quantum gates. Solving this task exactly is infeasible in general due to the exponential growth of the underlying combinatorial search space. Existing approaches suffer from misaligned optimization objectives, substantial training costs and limited generalization across different qubit counts. We mitigate these limitations by using supervised learning to approximate the minimum description length of residual unitaries and combining this estimate with stochastic beam search to identify near optimal gate sequences. Our method relies on a lightweight model with zero-shot generalization, substantially reducing training overhead compared to prior baselines. Across multiple benchmarks, we achieve faster wall-clock synthesis times while exceeding state-of-the-art methods in terms of success rate for complex circuits.
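The search scaffold described above (expand candidate sequences, score them, keep the top few) can be sketched with a toy single-qubit example. The paper scores candidates with a learned estimate of minimum description length; as a stand-in we use the exact Frobenius distance to the target, so only the beam-search skeleton mirrors the described method, and the gate set and beam width are illustrative.

```python
# Toy beam search over single-qubit gate sequences from the {H, T} set,
# scored by squared Frobenius distance to the target unitary.
import math

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
T = [[1, 0], [0, complex(math.cos(math.pi / 4), math.sin(math.pi / 4))]]
I = [[1, 0], [0, 1]]
GATES = {"H": H, "T": T}

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dist(a, b):
    """Squared Frobenius distance between 2x2 matrices."""
    return sum(abs(a[i][j] - b[i][j]) ** 2 for i in range(2) for j in range(2))

def beam_search(target, depth=4, width=4):
    beam = [([], I)]
    for _ in range(depth):
        cand = [(seq + [g], matmul(m, GATES[g]))
                for seq, m in beam for g in GATES]
        cand.sort(key=lambda c: dist(c[1], target))
        beam = cand[:width]
        if dist(beam[0][1], target) < 1e-9:
            break
    return beam[0][0]

assert beam_search([[1, 0], [0, -1]]) == ["T", "T", "T", "T"]  # Z = T^4
```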
Faster Optimal Decoder for Graph Codes with a Single Logical Qubit
This paper develops a more efficient decoding algorithm for quantum error-correcting codes based on graph states by exploiting structural properties to create a hierarchical decoder that runs in polynomial time while maintaining optimal performance at lower hierarchy levels.
Key Contributions
- Development of a polynomial-time hierarchical decoder for graph codes that avoids full maximum-likelihood decoding
- Demonstration that post-measurement states follow well-defined structures determined by syndrome measurements, enabling more efficient error correction
View Full Abstract
In this work, we develop an efficient decoding method for graph codes, a class of stabilizer quantum error-correcting codes constructed from graph states. While optimal decoding is generally NP-hard, we propose a faster decoder exploiting the structural properties of the underlying graph states. Although distinct error patterns may yield the same syndrome, we demonstrate that the post-measurement state follows a well-defined structure determined by the projective syndrome measurement. Building on this idea, we introduce a hierarchical decoder in which each level can be solved in polynomial time. Additionally, this decoder achieves optimal decoding performance at the lower levels of the hierarchy. This strategy avoids the need for full maximum-likelihood decoding of graph codes. Numerical results illustrate the efficiency and effectiveness of the proposed approach.
Homological origin of transversal implementability of logical diagonal gates in quantum CSS codes
This paper uses homology theory to characterize when logical diagonal gates can be implemented transversally in quantum CSS error-correcting codes. The authors prove that the solvability of implementing these gates with finer rotation angles is completely determined by mathematical structures called Bockstein homomorphisms.
Key Contributions
- Formulated the refinement problem for transversal logical diagonal gates and showed its solvability is characterized by Bockstein homomorphisms
- Proved conditions for existence of transversal implementations of logical Pauli Z rotations in general CSS codes based on X-stabilizer generator properties
- Identified canonical homological obstructions to transversal implementability in quantum error correction
View Full Abstract
Transversal Pauli Z rotations provide a natural route to fault-tolerant logical diagonal gates in quantum CSS codes, yet their capability is fundamentally constrained. In this work, we formulate the refinement problem of realizing a logical diagonal gate by a transversal implementation with a finer discrete rotation angle and show that its solvability is completely characterized by the Bockstein homomorphism in homology theory. Furthermore, we prove that the linear independence of the X-stabilizer generators together with the commutativity condition modulo a power of two ensures the existence of transversal implementations of all logical Pauli Z rotations with discrete angles in general CSS codes. Our results identify a canonical homological obstruction governing transversal implementability and provide a conceptual foundation for a formal theory of transversal structures in quantum error correction.
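For readers unfamiliar with the obstruction the abstract invokes, a Bockstein homomorphism is the connecting map induced by a short exact sequence of coefficient groups. A standard-definition sketch (textbook notation, not necessarily the paper's):

```latex
0 \longrightarrow \mathbb{Z}_2 \xrightarrow{\;\times 2\;} \mathbb{Z}_4 \longrightarrow \mathbb{Z}_2 \longrightarrow 0
\qquad\leadsto\qquad
\beta \colon H_n(C;\mathbb{Z}_2) \longrightarrow H_{n-1}(C;\mathbb{Z}_2).
```

Roughly, a $\mathbb{Z}_2$ cycle lifts to a $\mathbb{Z}_4$ cycle precisely when its image under $\beta$ vanishes, which is the same kind of lifting question the refinement problem asks about halving the rotation angle.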
A hardware-native time-frequency GKP logical qubit toward fault-tolerant photonic operation
This paper demonstrates a Gottesman-Kitaev-Preskill (GKP) logical qubit, a building block for fault-tolerant quantum computing, using single photons encoded in the time and frequency domains. The approach provides a hardware-compatible way to implement quantum error correction in photonic quantum computers by naturally mapping common noise sources to correctable errors.
Key Contributions
- First hardware-native implementation of time-frequency GKP logical qubits using single photons
- Demonstration that timing jitter and phase noise naturally map to correctable displacement errors
- Concrete pathway for integrating GKP error correction into photonic quantum computing architectures
View Full Abstract
We realize a hardware-native time--frequency Gottesman--Kitaev--Preskill (GKP) logical qubit encoded in the continuous phase space of single photons, establishing a propagating photonic implementation of bosonic grid encoding. Finite-energy grid states are generated deterministically using coherently driven entangled nonlinear biphoton sources that produce single-photon frequency-comb supermodes. An optical-frequency-comb reference anchors the time--frequency phase space and enforces commuting displacement stabilizers directly at the hardware level, continuously defining the logical subspace. Timing jitter, spectral drift, and phase noise map naturally onto Gaussian displacement errors within this lattice, yielding intrinsic correctability inside a stabilizer cell. Logical operations correspond to experimentally accessible phase and delay controls, enabling deterministic state preparation and manipulation. Building on the modal time--frequency GKP framework, we identify a concrete pathway toward active syndrome extraction and deterministic displacement recovery using ancillary grid states and interferometric time--frequency measurements. These primitives establish a hardware-compatible route for integrating the time--frequency GKP logical layer into erasure-aware and fusion-based fault-tolerant photonic architectures.
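As background, the standard square-lattice GKP definitions (textbook material, not this paper's time-frequency notation): the code is stabilized by two commuting phase-space displacements,

```latex
\hat S_q = e^{\,i\,2\sqrt{\pi}\,\hat q}, \qquad \hat S_p = e^{-i\,2\sqrt{\pi}\,\hat p}, \qquad [\hat S_q,\hat S_p]=0,
```

and any displacement error shifting $\hat q$ or $\hat p$ by less than $\sqrt{\pi}/2$ can be diagnosed from the stabilizer measurements and undone. The mapping of timing jitter and phase noise onto such small displacements is what makes the encoding "intrinsically correctable" in the abstract's sense.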
High-fidelity Quantum Readout Processing via an Embedded SNAIL Amplifier
This paper proposes embedding a SNAIL (Superconducting Nonlinear Asymmetric Inductive eLement) directly into quantum readout circuits to improve the fidelity of quantum state measurements while reducing hardware complexity. The approach enables on-chip signal processing and amplification, eliminating the need for bulky external components typically required in superconducting quantum processors.
Key Contributions
- Novel embedded SNAIL architecture for on-chip quantum readout processing
- Enhanced readout fidelity with reduced measurement-induced decoherence
- Simplified hardware complexity by eliminating external isolators and amplifiers
View Full Abstract
Scalable, high-fidelity quantum-state readout remains a central challenge in the development of large-scale superconducting quantum processors. Conventional dispersive readout architectures depend on bulky isolators and external amplifiers, introducing significant hardware overhead and limiting opportunities for on-chip information processing. In this work, we propose a novel approach that embeds a Superconducting Nonlinear Asymmetric Inductive eLement (SNAIL) into the readout chain, enabling coherent and directional processing of readout signals directly on-chip. This embedded SNAIL platform allows frequency-multiplexed resonators to interact through engineered couplings, forming a tunable readout-amplifier-output architecture that can manipulate quantum readout data in situ. Through theoretical modeling and numerical optimization, we show that this platform enhances fidelity, suppresses measurement-induced decoherence, and simplifies hardware complexity. These results establish the hybridized SNAIL as a promising building block for scalable and coherent quantum-state readout in next-generation processors.
Single snapshot non-Markovianity of Pauli channels
This paper studies noise in quantum computers by analyzing Pauli channels and finds that the commonly assumed Markovian (memoryless) noise models are often invalid. The researchers show that real quantum computer noise frequently exhibits non-Markovian behavior with negative rates, and demonstrate improved noise prediction accuracy when accounting for this complexity.
Key Contributions
- Demonstrated that random Pauli channels are almost always non-Markovian with probability converging doubly exponentially to unity
- Showed that negative rates in noise generators are generic even for physically motivated Markovian noise models
- Generalized probabilistic error amplification and cancellation techniques to non-Markovian generators
- Experimentally validated on superconducting qubits that allowing negative rates improves noise model accuracy
View Full Abstract
Pauli channels are widely used to describe errors in quantum computers, particularly when noise is shaped via Pauli twirling. A common assumption is that such channels admit a Markovian generator, namely a Pauli-Lindblad model with non-negative rates, but the validity of this assumption has not been systematically examined. Here, using CP-indivisibility as our criterion for non-Markovianity, we study multi-qubit Pauli channels from a single snapshot of the dynamics. We find that while the generator always has the same structure as the standard Pauli-Lindblad model, the rates may be negative or complex. We show that random Pauli channels are almost always non-Markovian, with the probability of encountering a negative rate converging doubly exponentially to unity with the number of qubits. For physically motivated noise models shaped by Pauli twirling, including single-qubit over-rotations and two-qubit amplitude damping errors, we find that negative rates are generic, even when the underlying physical noise is Markovian. We generalize probabilistic error amplification and cancellation to non-Markovian generators, and quantify the sampling overhead introduced by negative and complex rates. Experiments on superconducting qubits confirm that allowing negative rates in the learned noise model yields more accurate predictions than restricting to non-negative rates.
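The negative-rate phenomenon is easy to reproduce for a single qubit. The sketch below uses generic textbook relations (not the paper's multi-qubit procedure): error probabilities determine the channel's Pauli eigenvalues, and inverting the Pauli-Lindblad model can yield a negative rate even for a perfectly valid channel:

```python
import math

def pauli_eigenvalues(p):
    """Map single-qubit Pauli error probabilities (pI, pX, pY, pZ) to the
    channel's Pauli-basis eigenvalues: lam_a flips sign for anticommuting Paulis."""
    pI, pX, pY, pZ = p
    return (pI + pX + pY + pZ,
            pI + pX - pY - pZ,
            pI - pX + pY - pZ,
            pI - pX - pY + pZ)

def lindblad_rates(lam):
    """Invert lam_a = exp(-2 * sum of rates r_b over Paulis b anticommuting with a).
    A negative output means no Markovian Pauli-Lindblad generator exists."""
    _, lX, lY, lZ = lam
    rX = (math.log(lX) - math.log(lY) - math.log(lZ)) / 4
    rY = (math.log(lY) - math.log(lX) - math.log(lZ)) / 4
    rZ = (math.log(lZ) - math.log(lX) - math.log(lY)) / 4
    return rX, rY, rZ

# A valid Pauli channel (probabilities sum to 1) whose X rate comes out negative:
rates = lindblad_rates(pauli_eigenvalues((0.8, 0.0, 0.1, 0.1)))
```

Here `lam_X = 0.6 < lam_Y * lam_Z = 0.64`, which forces `rates[0] < 0`: the channel is CPTP yet CP-indivisible in the single-snapshot sense the paper studies.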
Optimized Compilation of Logical Clifford Circuits
This paper develops improved methods for compiling logical quantum circuits in fault-tolerant quantum computing by treating simulation primitives as single blocks rather than compiling gate-by-gate. The approach reduces circuit depth and error rates while maintaining compatibility with quantum error correction codes.
Key Contributions
- Development of block-based compilation methodology for logical Clifford circuits that reduces circuit depth compared to gate-by-gate approaches
- Demonstration of significant error-rate reductions in compiled circuits with improved realizations for different gate placement patterns
View Full Abstract
Fault-tolerant quantum computing hinges on efficient logical compilation, in particular, translating high-level circuits into code-compatible implementations. Gate-by-gate compilation often yields deep circuits, requiring significant overhead to ensure fault-tolerance. As an alternative, we investigate the compilation of primitives from quantum simulation as single blocks. We focus our study on the [[n,n-2,2]] code family, which allows for the exhaustive comparison of potential compilation primitives on small circuit instances. Based upon that, we then introduce a methodology that lifts these primitives into size-invariant, depth-efficient compilation strategies. This recovers known methods for circuits with moderate Hadamard counts and yields improved realizations for sparse and dense placements. Simulations show significant error-rate reductions in the compiled circuits. We envision the approach as a core component of peephole-based compilers. Its flexibility and low hand-crafting burden make it readily extensible to other circuit structures and code families.
Design and Operation of Wafer-Scale Packages Containing >500 Superconducting Qubits
This paper presents a wafer-scale packaging system that can house over 500 superconducting qubits on a single chip, demonstrating that large arrays of qubits can be operated without degrading their performance. The package is designed to work at extremely cold temperatures and shows promising qubit coherence times and readout fidelities.
Key Contributions
- Development of wafer-scale packaging architecture supporting >500 superconducting qubits
- Demonstration that large-scale integration maintains qubit performance with ~100 μs coherence times
- Validation of thermal management and RF interference suppression at millikelvin temperatures
- High-throughput metrology system for fabrication process optimization
View Full Abstract
Packages capable of supporting large arrays of high-coherence superconducting qubits are vital for the realisation of fault-tolerant quantum computers and the necessary high-throughput metrology required to optimise fabrication and manufacturing processes. We present a wafer-scale packaging architecture supporting over 500 qubits on a single 3-inch die. The package is engineered to suppress parasitic RF modes, and to mitigate material loss through simulation-informed design while managing differential thermal contraction to ensure robust operation at millikelvin temperatures. System-level heat-load calculations from a large wiring payload show this package may be operated in commercial dilution refrigerators. Measurements of the qubits loaded into the package show median $T_1$, $T_{2e} \sim 100~\mu$s ($\sim$100 qubits) alongside readout with median fidelity of 97.5% (54 qubits) and a median qubit temperature of 36 mK (54 qubits). These results validate the performance of these packages and demonstrate that large-scale integration can be achieved without compromising device performance. Finally, we highlight the utility of these packages as a tool for high-throughput feedback on qubit figures of merit over large sample sizes, allowing identification of performance outliers in the tails of the coherence distribution, a critical capability for informing fabrication and manufacture of high-quality qubits and quantum processors.
Floquet implementation of a 3d fermionic toric code with full logical code space
This paper presents a 3D Floquet quantum error-correcting code that implements a fermionic toric code while preserving all logical qubits throughout the measurement process. The work identifies a specific 3D lattice geometry that enables fault-tolerant quantum computation through time-periodic measurement sequences, avoiding the information loss that typically occurs in naive sequential measurement approaches.
Key Contributions
- Development of a 3D Floquet quantum error-correcting code that preserves all three logical qubits during measurement sequences
- Identification of a novel 3D lattice geometry that generalizes the Kekulé lattice structure to avoid logical information collapse
- Design of measurement protocols that extract complete error syndrome information without disturbing the logical subspace
View Full Abstract
Floquet quantum error-correcting codes provide an operationally economical route to fault tolerance by dynamically generating stabilizer structures using only two-body Pauli measurements. But while it is well established that stabilizer codes in higher spatial dimensions gain additional levels of intrinsic robustness, higher-dimensional Floquet codes have hitherto been explored only in limited scope. Here we introduce a 3d generalization of a Floquet code whose instantaneous stabilizer group realizes a 3d fermionic toric code, while crucially preserving all three logical qubits throughout the entire measurement sequence. One central ingredient is the identification of a 3d lattice geometry that generalizes the features of the Kekulé lattice underlying the 2d Hastings-Haah code - specifically, a structure where deleting any one edge color yields a two-color subgraph that decomposes into short, closed loops rather than homologically nontrivial chains. This loop property avoids the collapse of logical information that plagues naive sequential two-color measurement schedules on many 3d lattices. Although, for our lattice geometry, a simple 3-round cycle that sequentially measures the three types of parity checks does not expose the full error syndrome set, we show that one can append a measurement sequence to extract the missing syndromes without disturbing the logical subspace. Beyond code design, 3d tricoordinated lattice geometries define a family of 3d monitored Kitaev models, in which random measurements of the non-commuting parity checks give rise to dynamically created entangled phases with nontrivial topology. In discussing the general structure of their underlying phase diagrams and, in particular, the existence of certain quantum critical points, we again make a connection to the general preservation of logical information in time-ordered Floquet protocols.
Non-Abelian Quantum Low-Density Parity Check Codes and Non-Clifford Operations from Gauging Logical Gates via Measurements
This paper develops new methods for creating non-Abelian quantum low-density parity check (qLDPC) codes by using measurement and feedback to gauge transversal Clifford gates. The work provides two different construction approaches and shows how these methods enable magic state preparation and non-Clifford operations on any qLDPC code.
Key Contributions
- Two novel construction methods for non-Abelian qLDPC codes via gauging transversal Clifford gates
- Demonstration that gauging procedures enable magic state preparation and non-Clifford operations on any qLDPC code
- Connection between gauged codes and 2D non-Abelian topological order properties
View Full Abstract
In this work, we introduce constructions for non-Abelian qLDPC codes obtained by gauging transversal Clifford gates using measurement and feedback. In particular, we identify two qualitatively different approaches to gauging qLDPC codes to obtain their non-Abelian counterparts. The first approach applies to codes that exhibit a generalized form of Poincaré duality and leads to a qLDPC non-Abelian Clifford stabilizer code, whose stabilizers are reminiscent of the action of a Type-III twisted quantum double. Our second approach applies to general qLDPC codes, and uses a graph of ancilla qubits which may be tailored to properties of the input codes to gauge a single transversal gate. For both constructions, the resulting gauged codes are shown to have properties analogous to 2D non-Abelian topological order -- e.g. the analog of a single anyon on a torus. We conclude by demonstrating that our gauging procedures enable magic state preparation via the measurement of logical Clifford gates. Consequently, our gauging constructions offer a protocol for performing non-Clifford operations on any qLDPC code.
Millisecond-Scale Calibration and Benchmarking of Superconducting Qubits
This paper develops fast calibration techniques for superconducting qubits that can adjust qubit parameters in milliseconds using FPGA-based processing, addressing the problem that qubit performance drifts on sub-second timescales. The researchers demonstrate automated recalibration methods that maintain better gate performance than initial calibration over extended periods.
Key Contributions
- Development of millisecond-scale FPGA-based calibration workflow for superconducting qubits that eliminates CPU round trips
- Demonstration of continuous automated recalibration maintaining gate fidelity over 6 hours with 74,000+ recalibrations
View Full Abstract
Superconducting qubit parameters drift on sub-second timescales, motivating calibration and benchmarking techniques that can be executed on millisecond timescales. We demonstrate an on-FPGA workflow that co-locates pulse generation, data acquisition, analysis, and feed-forward, eliminating CPU round trips. Within this workflow, we introduce sparse-sampling and on-FPGA inference tools, including computationally efficient methods for estimation of exponential and sine-like response functions, as well as on-FPGA implementations of Nelder-Mead optimization and golden-section search. These methods enable low-latency primitives for readout calibration, spectroscopy, pulse-amplitude calibration, coherence estimation, and benchmarking. We deploy this toolset to estimate $T_1$ in 10 ms, optimize readout parameters in 100 ms, optimize pulse amplitudes in 1 ms, and perform Clifford randomized gate benchmarking in 107 ms on a flux-tunable superconducting transmon qubit. Running a closed-loop on-FPGA recalibration protocol continuously for 6 hours enables more than 74,000 consecutive recalibrations and yields gate errors that consistently retain better performance than the baseline initial calibration. Correlation analysis shows that recalibration suppresses coupling of gate error to control-parameter drift while preserving coherence-linked performance. Finally, we quantify uncertainty versus time-to-decision under our sparse sampling approaches and identify optimal parameter regimes for efficient estimation of qubit and pulse parameters.
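The golden-section search the abstract mentions is attractive for FPGA deployment because each iteration needs only one new function evaluation. A generic host-side sketch (with a made-up Rabi-style response model, not the authors' FPGA implementation):

```python
import math

def golden_section_max(f, a, b, tol=1e-5):
    """Locate the maximum of a unimodal function f on [a, b],
    reusing one interior evaluation per iteration."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/golden ratio
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:          # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Hypothetical pulse-amplitude calibration: excited-state population
# peaks at the pi-pulse amplitude (here arbitrarily 0.42).
pi_amp = 0.42
amp = golden_section_max(lambda x: math.sin(0.5 * math.pi * x / pi_amp) ** 2,
                         0.0, 0.8)
```

On hardware, `f` would be replaced by a measured qubit response, and the fixed iteration count for a given tolerance is what makes the latency predictable enough for a millisecond budget.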
Control the qubit-qubit coupling with double superconducting resonators
This paper demonstrates experimental control of coupling between superconducting qubits using a double-resonator design, showing that qubit-qubit coupling can be tuned from effectively zero to gate-operation strength by adjusting qubit frequencies by less than 50 MHz. The approach offers fabrication advantages and reduced noise for scaling up superconducting quantum processors.
Key Contributions
- Experimental demonstration of tunable qubit-qubit coupling using double-resonator architecture
- Achievement of coupling control from off to gate-operational strength with small frequency shifts
- Simplified fabrication approach with reduced flux noise for scalable quantum processors
View Full Abstract
We experimentally studied the switching-off process in a double-resonator-coupler superconducting quantum circuit. In both the frequency and time domains, we observed the variation of the effective qubit-qubit coupling as the qubit frequencies were tuned. The measurement results show that, by shifting the qubit frequencies by less than 50 MHz, the effective qubit-qubit coupling strength can be tuned from the switching-off point to the two-qubit-gate point (effective coupling larger than 5 MHz) in the double-resonator superconducting quantum circuit. The double-resonator coupler offers simple fabrication, introduces less flux noise, and reduces the occupancy of dilution-refrigerator cables, which may make it a promising platform for future large-scale superconducting quantum processors.
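Why a small frequency shift can switch the coupling off follows from standard second-order perturbation theory for resonator-mediated exchange (a textbook dispersive estimate, not the paper's circuit model): each resonator contributes a virtual-exchange term,

```latex
J_{\mathrm{eff}} \;\approx\; \sum_{k=1}^{2} \frac{g_{1k}\,g_{2k}}{2}\left(\frac{1}{\Delta_{1k}} + \frac{1}{\Delta_{2k}}\right),
\qquad \Delta_{jk} = \omega_{q_j} - \omega_{r_k},
```

where $g_{jk}$ couples qubit $j$ to resonator $k$. Because the detunings $\Delta_{jk}$ enter with signs set by the qubit frequencies, the two resonators' contributions can cancel, and a shift of only tens of MHz moves $J_{\mathrm{eff}}$ through zero.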
Structural control of two-level defect density revealed by high-throughput correlative measurements of Josephson junctions
This paper investigates defects in superconducting Josephson junctions that interfere with quantum computer performance by analyzing over 6,000 junctions and 600 microscopy images. The researchers found that aluminum electrode thickness and grain size strongly correlate with defect density, leading to a fabrication method that reduces harmful defects by two-thirds.
Key Contributions
- Established statistical correlation between aluminum electrode microstructure and two-level system defect density in Josephson junctions
- Demonstrated fabrication parameter optimization that reduces TLS density by two-thirds
- Developed high-throughput correlative methodology combining materials characterization with quantum device performance
View Full Abstract
Materials defects in Josephson junctions (JJs), often referred to as two-level systems (TLS), couple to superconducting qubits and are a critical bottleneck for scalable quantum processors. Despite their importance, understanding the microscopic sources of TLS and how to mitigate them has remained a major challenge. Here, we demonstrate a high-throughput, correlated approach to trace the microstructural origins of strongly-coupled TLS in Josephson circuits. We assembled a massive dataset of TLS across 6,000 Al/AlOx/Al JJs and more than 600 atomic resolution transmission electron microscopy images. We statistically link fabrication, microstructure, and TLS occurrence, revealing a strong correlation between Al electrode thickness, Al grain size, and TLS density. Correspondingly, we find a two-thirds reduction in TLS prompted by a change in electrode fabrication parameters. These results demonstrate a robust, data-driven methodology to understand and control defects in quantum circuits and pave the way for significantly reducing TLS density.
The Pinnacle Architecture: Reducing the cost of breaking RSA-2048 to 100 000 physical qubits using quantum LDPC codes
This paper introduces the Pinnacle Architecture using quantum low-density parity check codes to dramatically reduce the physical qubit requirements for fault-tolerant quantum computing, demonstrating that RSA-2048 can be broken with fewer than 100,000 physical qubits instead of the previously estimated million+ qubits.
Key Contributions
- Introduction of Pinnacle Architecture using quantum LDPC codes for fault-tolerant quantum computing
- Demonstration that RSA-2048 factoring requires only ~100,000 physical qubits with order-of-magnitude reduction in overhead
- Development of practical low-overhead fault-tolerant architecture for utility-scale quantum computing
View Full Abstract
The realisation of utility-scale quantum computing inextricably depends on the design of practical, low-overhead fault-tolerant architectures. We introduce the Pinnacle Architecture, which uses quantum low-density parity check (QLDPC) codes to allow for universal, fault-tolerant quantum computation with a spacetime overhead significantly smaller than that of any competing architecture. With this architecture, we show that 2048-bit RSA integers can be factored with less than one hundred thousand physical qubits, given a physical error rate of $10^{-3}$, code cycle time of 1 μs and a reaction time of 10 μs. We thereby demonstrate the feasibility of utility-scale quantum computing with an order of magnitude fewer physical qubits than has previously been believed necessary.
Multi-ion entangling gates mediated by spectrally unresolved modes
This paper introduces a new method for creating entangling gates between trapped-ion qubits using time-dependent magnetic field gradients, where all motional modes participate simultaneously rather than addressing individual modes. This nonperturbative approach enables faster gates on larger ion strings and can implement multi-qubit gates or simultaneous two-qubit gates between arbitrary ion pairs.
Key Contributions
- Nonperturbative gate scheme using all axial motional modes simultaneously
- Time-dependent magnetic-field gradient approach for multi-ion entangling gates
- Method for simultaneous gates on multiple ion pairs in linear strings
View Full Abstract
Entangling interactions between distant qubits can be mediated via an additional degree of freedom. In conventional trapped-ion schemes, realizing a well-defined, coherent gate typically requires spectrally addressing a specific bus mode. As the ion number increases, the coupling to each individual motional mode becomes weaker, so gates on large ion strings mediated by a single mode are necessarily slow. Moreover, addressing a large number of modes demands complex driving schemes, and the fundamentally perturbative character of these approaches imposes constraints on achievable gate speed and fidelity. Here, we introduce a scheme for entangling trapped-ion qubits using a time-dependent magnetic-field gradient, in which all axial motional modes participate in mediating the interaction and the gate construction is nonperturbative. The framework can be used to implement both multi-qubit gates and two-qubit gates between arbitrary pairs in a linear ion string. Through several explicit examples, we highlight the advantages over existing magnetic-gradient schemes and show how gates on multiple ion pairs can be carried out simultaneously.
Recirculating Quantum Photonic Networks for Fast Deterministic Quantum Information Processing
This paper proposes a recirculating quantum photonic network (RQPN) architecture that processes quantum information by capturing photons, circulating them between interconnected nonlinear cavities, and releasing outputs faster than traditional approaches. The architecture demonstrates significant speedups for multi-qubit gates like the Toffoli gate and quantum error correction operations.
Key Contributions
- Novel recirculating quantum photonic network architecture that reduces processing time for quantum operations
- Demonstration of faster three-qubit Toffoli gate implementation and seven-fold speedup in quantum error correction
View Full Abstract
A fundamental challenge in photonics-based deterministic quantum information processing is to realize key transformations on time scales shorter than those of detrimental decoherence and loss mechanisms. This challenge has been addressed through device-focused approaches that aim to increase nonlinear interactions relative to decoherence rates. In this work, we adopt a complementary architecture-focused approach by proposing a recirculating quantum photonic network (RQPN) that minimizes the duration of quantum information processing tasks, thereby reducing the requirements on nonlinear interaction rates. The RQPN consists of a network of all-to-all connected nonlinear cavities with dynamically controlled waveguide couplings, and it processes information by capturing a photonic input state, recirculating photons between the cavities, and releasing a photonic output state. We demonstrate the RQPN's architectural advantage through two examples: first, we show that processing all qubits simultaneously yields faster operations than single- and two-qubit decompositions of the three-qubit Toffoli gate. Second, we demonstrate implementations of a measurement-free correction for single-photon loss, achieving up to seven-fold speedups and significantly improved hardware efficiency relative to state-of-the-art architecture proposals. Our work shows that a single hardware-efficient recirculating architecture substantially reduces the temporal overhead of multi-qubit gates and quantum error correction, thereby lowering the barrier to experimental realizations of deterministic photonic quantum information processing.
Erasure Thresholds for Hyperbolic and Semi-Hyperbolic Surface Codes
This paper develops and tests 25 new quantum error correction codes based on hyperbolic and semi-hyperbolic surface geometries, measuring their performance against different types of quantum noise. The researchers find that these codes can tolerate error rates of 5% or higher for certain noise types, with some achieving better performance than traditional surface codes.
Key Contributions
- Construction of 25 new hyperbolic and semi-hyperbolic CSS surface codes from various tessellations
- Comprehensive simulation and threshold analysis showing improved noise tolerance compared to traditional surface codes
- Demonstration that fine-grained scaling families achieve higher thresholds with erasure-to-Pauli ratios of 4.5-5.2×
View Full Abstract
We construct 14 hyperbolic CSS surface codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations and 11 semi-hyperbolic (fine-grained) codes. We simulate all 25 codes under circuit-level erasure and Pauli noise. Under circuit-level Pauli noise, pseudothresholds increase with code size within each family ($0.24$--$0.49\%$ for $\{8,3\}$, $0.11$--$0.43\%$ for $\{10,3\}$, $0.07$--$0.13\%$ for $\{12,3\}$). For erasure noise, most codes have $p^*_{\mathrm{E}} > 5\%$. Per-observable family thresholds give erasure-to-Pauli ratios of $2.7$--$3.9\times$ for the base code families. Fine-grained scaling families achieve higher thresholds in both Pauli ($0.67$--$0.68\%$) and erasure ($3.0$--$3.5\%$), with ratios of $4.5$--$5.2\times$. Under phenomenological noise, per-logical $Z$-channel thresholds are ${\sim}2\%$ for $\{8,3\}$ and ${\sim}1\%$ for $\{10,3\}$; the $\{12,3\}$ threshold lies below $0.5\%$.
Comparing and correcting robustness metrics for quantum optimal control
This paper develops improved methods for designing quantum control pulses that are robust against hardware errors and drift. The researchers compare different mathematical approaches for measuring error sensitivity and introduce corrections that make quantum control more reliable in practical implementations.
Key Contributions
- Systematic comparison of adjoint end-point and toggling-frame approaches for robustness estimation
- Introduction of discretization correction to toggling-frame robustness estimator
- Novel framework positioning robustness as first-class objective in constrained optimal control
View Full Abstract
Control pulses that nominally optimize fidelity are sensitive to routine hardware drift and modeling errors. Robust quantum optimal control seeks error-insensitive control pulses that maintain fidelity thresholds and obey hardware constraints. Distinct numerical approximations to the first-order error susceptibility include adjoint end-point and toggling-frame approaches. Although theoretically equivalent, we provide a novel, systematic study demonstrating important numerical differences between these two approaches. We also introduce a critical discretization correction to the widely-used toggling-frame robustness estimator, measurably improving its estimate of first-order error susceptibility. We accomplish our study by positioning robustness as a first-class objective within direct, constrained optimal control. Our approach uniquely handles control and fidelity constraints while cleanly isolating robustness for dedicated optimization. In both single- and two-qubit examples under realistic constraints, our approach provides an analytic edge for obtaining precise, physics-informed robustness.
Simpler Presentations for Many Fragments of Quantum Circuits
This paper develops improved mathematical frameworks for optimizing quantum circuits by creating more efficient rule sets for proving when different quantum circuit arrangements are equivalent. The authors focus on several important quantum gate families and demonstrate that their new approach requires significantly fewer rules while maintaining completeness and often achieving minimality.
Key Contributions
- Development of a unified PROP framework for quantum circuit optimization with significantly reduced rule counts
- Proof of minimality and bounded minimality for multiple quantum gate fragments including Clifford circuits
View Full Abstract
Equational reasoning is central to quantum circuit optimisation and verification: one replaces subcircuits by provably equivalent ones using a fixed set of rewrite rules viewed as equations. We study such reasoning through finite equational theories, presenting restricted quantum gate fragments as symmetric monoidal categories (PROPs), where wire permutations are treated as structural and separated cleanly from fragment-specific gate axioms. For six widely used near-Clifford fragments: qubit Clifford, real Clifford, Clifford+T (up to two qubits), Clifford+CS (up to three qubits) and CNOT-dihedral, we transfer the completeness results of prior work into our PROP framework. Beyond completeness, we address minimality (axiom independence). Using uniform separating interpretations into simple semantic targets, we prove minimality for several fragments (including all arities for qubit Clifford, real Clifford, and CNOT-dihedral), and bounded minimality for the remaining cases. Overall, our presentations significantly reduce rule counts compared to prior work and provide a reusable categorical framework for constructing complete and often minimal rewrite systems for quantum circuit fragments.
Polycontrolled PROPs for Qudit Circuits: A Uniform Complete Equational Theory For Arbitrary Finite Dimension
This paper develops a complete mathematical framework for reasoning about quantum circuits using qudits (d-level quantum systems) of any finite dimension, providing a finite set of axioms that can prove when two circuits are equivalent. The work extends previous results for qubits to arbitrary dimensions while maintaining uniform axiom structures.
Key Contributions
- Finite schematic axiomatisation of qudit circuits uniform in every finite dimension d >= 2
- Sound and complete equational theory for unitary d-level circuits using at most three-wire axioms
- Translation between qudit circuits and LOPP calculus via d-ary Gray codes
- Extension of qubit circuit completeness results to arbitrary finite dimensions
View Full Abstract
We present a finite schematic axiomatisation of quantum circuits over d-level systems (qudits), uniform in every finite dimension d >= 2. For each d we define a PROP equipped with a family of control functors, treating control as a primitive categorical constructor. Using a translation between qudit circuits and the LOPP calculus for linear optics based on d-ary Gray codes, we obtain for each d a finite set of local axiom schemata that is sound and complete for unitary d-level circuits: two circuits denote the same unitary if and only if they are inter-derivable using axioms involving at most three wires. The generators are compatible with standard universal qudit gate families, yielding a sound equational basis for circuit rewriting and optimisation-by-rewriting. Conceptually, this extends the qubit circuit completeness results of Clément et al.\ to arbitrary finite dimension, and instantiates the control-as-constructor approach of Delorme and Perdrix in this setting, while keeping the axiom shapes uniform in d.
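The abstract's translation between qudit circuits and the LOPP calculus relies on d-ary Gray codes. As an illustration only, here is a sketch of the standard "modular" d-ary Gray code, in which consecutive codewords differ in exactly one digit (by +1 mod d); whether the paper uses this particular variant is an assumption.

```python
def dary_gray(k: int, n: int, d: int) -> list[int]:
    """Return the n-digit base-d modular Gray codeword for integer k
    (most significant digit first)."""
    # base-d digits of k, most significant first
    digits = [(k // d**i) % d for i in range(n - 1, -1, -1)]
    # g_i = (b_i - b_{i+1}) mod d, with an implicit leading 0 digit
    prev = 0
    code = []
    for b in digits:
        code.append((b - prev) % d)
        prev = b
    return code
```

For d = 3, n = 2 this yields 00, 01, 02, 12, 10, 11, 21, 22, 20: all nine two-trit words, each adjacent pair differing in a single digit.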
Construction of the full logical Clifford group for high-rate quantum Reed-Muller codes using only transversal and fold-transversal gates
This paper develops a method to implement the complete set of logical Clifford gates for high-rate quantum Reed-Muller error-correcting codes using only transversal and fold-transversal gates, eliminating the need for ancilla qubits. The work enables fault-tolerant quantum computation with codes that can efficiently store large amounts of quantum information.
Key Contributions
- First construction of the full logical Clifford group for high-rate quantum codes using only transversal and fold-transversal gates without ancilla qubits
- Development of fault-tolerant gate implementation for quantum Reed-Muller codes with near-linear information rate scaling
View Full Abstract
To build large-scale quantum computers while minimizing resource requirements, one may want to use high-rate quantum error-correcting codes that can efficiently encode information. However, realizing an addressable gate (a logical gate on a subset of logical qubits within a high-rate code) in a fault-tolerant manner can be challenging and may require ancilla qubits. Transversal and fold-transversal gates could provide a means to fault-tolerantly implement logical gates using a constant-depth circuit without ancilla qubits, but available gates of these types could be limited depending on the code and might not be addressable. In this work, we study a family of $[\![n=2^m,k={m \choose m/2}\approx n/\sqrt{\pi\log_2(n)/2},d=2^{m/2}=\sqrt{n}]\!]$ self-dual quantum Reed-Muller codes, where $m$ is a positive even number. For any code in this family, we construct a generating set of the full logical Clifford group comprising only transversal and fold-transversal gates, thus enabling the implementation of any addressable Clifford gate. To our knowledge, this is the first known construction of the full logical Clifford group for a family of codes in which $k$ grows near-linearly in $n$ (up to a $1/\sqrt{\log n}$ factor) that uses only transversal and fold-transversal gates without requiring ancilla qubits.
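The code parameters quoted in the abstract are easy to tabulate. As a quick sanity check (a sketch using only the formulas stated above; the function name is ours), the $[\![n, k, d]\!]$ values for small even $m$, together with the near-linear rate approximation $k \approx n/\sqrt{\pi \log_2(n)/2}$:

```python
from math import comb, pi, log2, sqrt

def qrm_params(m: int) -> tuple[int, int, int]:
    """[[n, k, d]] of the self-dual quantum Reed-Muller family from the
    abstract, for a positive even m."""
    assert m % 2 == 0 and m > 0
    n = 2 ** m
    k = comb(m, m // 2)   # central binomial coefficient
    d = 2 ** (m // 2)     # equals sqrt(n)
    return n, k, d

for m in (4, 6, 8):
    n, k, d = qrm_params(m)
    approx = n / sqrt(pi * log2(n) / 2)  # Stirling approximation of k
    print(f"m={m}: [[{n}, {k}, {d}]], k ~ {approx:.1f}")
```

For m = 8 this gives [[256, 70, 16]], with the approximation 72.2 already within a few percent of k = 70.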
How to Classically Verify a Quantum Cat without Killing It
This paper develops a new protocol for classically verifying quantum computations that preserves the quantum witness state instead of destroying it, solving a key problem in quantum verification where only one copy of a non-clonable quantum witness is available.
Key Contributions
- First classical verification protocol for quantum computation that preserves the witness state
- Construction of two new primitives: state-preserving classical arguments for NP, and dual-mode trapdoor functions with state recovery
View Full Abstract
Existing protocols for classical verification of quantum computation (CVQC) consume the prover's witness state, requiring a new witness state for each invocation. Because QMA witnesses are not generally clonable, destroying the input witness means that amplifying soundness and completeness via repetition requires many copies of the witness. Building CVQC with low soundness error that uses only *one* copy of the witness has remained an open problem so far. We resolve this problem by constructing a CVQC that uses a single copy of the QMA witness, has negligible completeness and soundness errors, and does *not* destroy its witness. The soundness of our CVQC is based on the post-quantum Learning With Errors (LWE) assumption. To obtain this result, we define and construct two primitives (under the post-quantum LWE assumption) for non-destructively handling superpositions of classical data, which we believe are of independent interest: - A *state preserving* classical argument for NP. - Dual-mode trapdoor functions with *state recovery*.
Coherence Protection for Mobile Spin Qubits in Silicon
This paper demonstrates techniques to preserve quantum coherence in mobile silicon spin qubits that can be moved between locations, achieving coherence times up to 32 microseconds during transport over distances exceeding 200 nanometers. The researchers used magnetic field optimization, motional narrowing through periodic shuttling, and dynamical decoupling to maintain qubit performance during movement.
Key Contributions
- Demonstration of coherence preservation during spin qubit shuttling with multiple noise mitigation strategies
- Achievement of 32 μs coherence time during transport over 200+ nm distances using dynamical decoupling
- Development of dressed-state shuttling for robust protection against low-frequency noise without pulsed control overhead
View Full Abstract
Mobile spin qubit architectures promise flexible connectivity for efficient quantum error correction and relaxed device layout constraints, but their viability rests on preserving spin coherence during transport. While shuttling transforms spatial disorder into time-dependent noise, its net impact on spin coherence remains an open question. Here we demonstrate systematic noise mitigation during spin shuttling in a linear $^{28}$Si/SiGe quantum dot device. First, by passively reducing magnetic field gradients, we minimize charge-noise coupling to the spin and double the spatially averaged dephasing time $T_2^*(x_n)$ from $4.4$ to $8.5\,μ\text{s}$. Next, we exploit motional narrowing by periodically shuttling the qubit, achieving a further enhancement in coherence time up to $T_{2}^{*,sh} = 11.5\,μ\text{s}$. Finally, we incorporate dynamical decoupling techniques while periodically shuttling over distances exceeding $200\,\text{nm}$, reaching $T_\text{2}^{H,sh}= 32\,μ\text{s}$. For the same setup, we demonstrate that dressed-state shuttling provides robust protection against low-frequency noise with a decay time $T_R^{\text{sh}} = 21\,μ\text{s}$, without the overhead of pulsed control and allowing protection during one-way spin transport. By preserving coherence over timescales exceeding typical gate and readout operations, the demonstrated strategies establish mobile spin qubits as a viable solution for scalable silicon quantum processors.
A cavity-mediated reconfigurable coupling scheme for superconducting qubits
This paper introduces a new architecture for superconducting quantum computers that uses a shared cavity to enable flexible connections between non-adjacent qubits. The system allows researchers to dynamically reconfigure which qubits can interact with each other, overcoming the typical limitation where qubits can only interact with their immediate neighbors.
Key Contributions
- Development of cavity-mediated reconfigurable coupling architecture for superconducting qubits
- Demonstration of high-fidelity two-qubit gates (iSWAP and CZ) with coherent error below 10^-4
- Extension to four-qubit systems with selective coupling and low crosstalk
View Full Abstract
Superconducting qubits have achieved remarkable progress in gate fidelity and coherence, yet their typical nearest-neighbor connectivity presents constraints for implementing complex quantum circuits. Here, we introduce a cavity-mediated coupling architecture in which a shared cavity mode, accessed through tunable qubit-cavity couplers, enables dynamically reconfigurable interactions between non-adjacent qubits. By selectively activating the couplers, we demonstrate that high-fidelity iSWAP and CZ gates can be performed within 50 ns with simulated coherent error below $10^{-4}$, while residual $ZZ$ interaction during idling remains below a few kilohertz. Extending to a four-qubit system, we also simulate gates between every qubit pair by selectively enabling the couplers with low qubit crosstalk. This approach provides a practical route toward enhanced interaction flexibility in superconducting quantum processors and may serve as a useful building block for devices that benefit from selective non-local coupling.
The equivalence of quantum deletion and insertion errors on permutation-invariant codes
This paper addresses quantum synchronisation errors that change the number of qubits in a system, establishing an equivalence between quantum deletion and insertion errors for permutation-invariant quantum error-correcting codes. The work extends classical insertion-deletion error correction theory to the quantum domain and provides conditions for when these codes can correct such errors.
Key Contributions
- Establishes quantum insertion-deletion equivalence for permutation-invariant codes
- Provides conditions for t-insertion error-correctability and (t,s)-insdel error-correctability in quantum systems
View Full Abstract
Quantum synchronisation errors are a class of quantum errors that change the number of qubits in a quantum system. The classical error correction of synchronisation errors has been well-studied, including an insertion-deletion equivalence more than a half-century ago, but little progress has been made towards the quantum counterpart since the birth of quantum error correction. We address the longstanding problem of a quantum insertion-deletion equivalence on permutation-invariant codes, detailing the conditions under which such codes are $t$-insertion error-correctable. We extend these conditions to quantum insdel errors, formulating a more restrictive set of conditions under which permutation-invariant codes are $(t,s)$-insdel error-correctable. Our work resolves many of the outstanding questions regarding the quantum error correction of synchronisation errors.
Non-Markovianity induced by Pauli-twirling
This paper studies how Pauli twirling, a technique used to simplify quantum noise into a more manageable form, can paradoxically convert well-behaved Markovian noise into non-Markovian noise that requires negative parameters to describe correctly. The authors prove that this counterintuitive effect occurs even when starting with standard Markovian quantum channels, which has important implications for quantum error correction and noise characterization.
Key Contributions
- Proved that a Pauli channel is non-Markovian if and only if at least one of its Pauli-Lindblad parameters is negative
- Demonstrated that Pauli twirling can induce non-Markovianity in originally Markovian quantum channels
- Showed this effect occurs in realistic scenarios like implementing square-root-X gates under standard noise
View Full Abstract
Noise forms a central obstacle to effective quantum information processing. Recent experimental advances have enabled the tailoring of noise properties through Pauli twirling, transforming arbitrary noise channels into Pauli channels. This underpins theoretical descriptions of fault-tolerant quantum computation and forms an essential tool in noise characterization and error mitigation. Pauli-Lindblad channels have been introduced to aptly parameterize quasi-local Pauli errors across a quantum register, excluding negative Pauli-Lindblad parameters relying on the Markovianity of the underlying noise processes. We point out that caution is required when parameterizing channels as Pauli-Lindblad channels with nonnegative parameters. For this, we study the effects of Pauli twirling on Markovianity. We use the notion of Markovianity of a channel (rather than that of an entire semigroup) and prove a general Pauli channel is non-Markovian if and only if at least one of its Pauli-Lindblad parameters is negative. Using this, we show that Markovian quantum channels often become non-Markovian after Pauli twirling. The Pauli-twirling induced non-Markovianity necessitates the use of negative Pauli-Lindblad parameters for a correct noise description in experimentally realistic scenarios. An important example is the implementation of the $\sqrt{X}$-gate under standard Markovian noise. As such, our results have direct implications for quantum error mitigation protocols that rely on accurate noise characterization.
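To make the twirling operation itself concrete, here is a minimal single-qubit sketch (not from the paper; the amplitude-damping channel and its parameter are assumed for illustration) showing that Pauli twirling keeps only the diagonal of a channel's Pauli transfer matrix, i.e. turns an arbitrary channel into a Pauli channel. The paper's non-Markovianity analysis concerns families of such twirled channels and is not reproduced here.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

def amp_damp(rho, g=0.3):
    """Single-qubit amplitude damping with assumed strength g."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
    K1 = np.array([[0, np.sqrt(g)], [0, 0]])
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

def twirl(channel):
    """Pauli twirl: average the channel over conjugation by all Paulis."""
    def twirled(rho):
        # Paulis are Hermitian and self-inverse, so P = P^dagger here
        return sum(P @ channel(P @ rho @ P) @ P for P in PAULIS) / 4
    return twirled

def ptm(channel):
    """Pauli transfer matrix R_ij = Tr(P_i channel(P_j)) / 2."""
    return np.array([[np.trace(Pi @ channel(Pj)).real / 2
                      for Pj in PAULIS] for Pi in PAULIS])

R = ptm(twirl(amp_damp))
print(np.round(R, 6))  # diagonal: the twirled channel is a Pauli channel
```

Before twirling, the amplitude-damping PTM has a non-unital off-diagonal entry (g in the Z row of the identity column); twirling removes it, leaving the diagonal (1, sqrt(1-g), sqrt(1-g), 1-g).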
Efficient circuit compression by multi-qudit entangling gates in linear optical quantum computation
This paper develops new multi-level control-Z gates for linear optical quantum computation that can selectively act on subsets of qubits encoded in qudits, improving circuit efficiency by reducing the non-local gate count from O(2^(r1+r2)) to O(2^r1 + 2^r2).
Key Contributions
- Development of multi-level control-Z gates for qudits in linear optical quantum computation
- Two explicit schemes with improved scaling: a state-dependent scheme with 1/8 success probability using a single non-local gate, and a state-independent scheme that reduces the gate count from O(2^(r1+r2)) to O(2^r1 + 2^r2)
View Full Abstract
Linear optical quantum computation (LOQC) offers a promising platform for scalable quantum information processing, but its scalability is fundamentally constrained by the probabilistic nature of non-local entangling gates. Qudit circuit compression schemes mitigate this issue by encoding multiple qubits onto qudits. However, these schemes become inefficient when only a subset of the encoded qubits is required to participate in the non-local entangling gate, leading to an exponential increase in the number of non-local gates. In this Letter, we address this bottleneck by demonstrating the existence of multi-level control-Z (CZ) gates for qudits encoded in multiple spatial modes in LOQC. Unlike conventional two-level CZ gates, which act only on a single pair of modes, multi-level CZ gates impart a conditional phase shift for an arbitrarily chosen subset of the spatial modes. We present two explicit linear optical schemes that realize such operations, illustrating a fundamental trade-off between prior information about the input quantum state and the physical resources required. The first scheme is realized with a constant success probability of $1/8$ independent of the qudit dimension using a single non-local entangling gate, at the cost of state dependence, which is significantly better than the current success probability of $1/9$. Our second scheme provides a fully state independent realization reducing the number of non-local gates to $\mathcal{O}(2^{r_1}+2^{r_2})$ as compared to the existing bound of $\mathcal{O}(2^{r_1+r_2})$ where $r_1$ and $r_2$ are the number of qubits to be removed as control in the qudits. The success probability of the realization is $\frac{1}{2} \left(\frac{1}{8}\right)^{2^{r_1}+2^{r_2}}$. When combined with qudit circuit compression schemes, our results improve upon a key scalability limitation and significantly improve the efficiency of LOQC architectures.
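The scaling claims can be compared numerically. A back-of-the-envelope sketch using only the expressions quoted in the abstract (the helper names are ours, not the paper's):

```python
def gates_existing(r1: int, r2: int) -> int:
    """Existing bound on non-local entangling gates: O(2^(r1+r2))."""
    return 2 ** (r1 + r2)

def gates_proposed(r1: int, r2: int) -> int:
    """Second scheme of the paper: O(2^r1 + 2^r2)."""
    return 2 ** r1 + 2 ** r2

def success_prob_scheme2(r1: int, r2: int) -> float:
    """Success probability (1/2) * (1/8)^(2^r1 + 2^r2), as stated."""
    return 0.5 * (1 / 8) ** (2 ** r1 + 2 ** r2)

# e.g. removing r1 = r2 = 3 control qubits: 64 non-local gates vs 16
print(gates_existing(3, 3), gates_proposed(3, 3))
```

The product-to-sum change in the exponent is where the compression comes from: the gap widens exponentially as r1 and r2 grow.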
Preparing squeezed, cat and GKP states with parity measurements
This paper presents a protocol for preparing various quantum states in bosonic modes (like oscillators) using displaced parity measurements combined with auxiliary qubits. The method can generate squeezed states, cat states, and Gottesman-Kitaev-Preskill (GKP) states, which are important for quantum information processing.
Key Contributions
- Development of a displaced parity measurement protocol for preparing diverse bosonic quantum states
- Demonstration of squeezed state generation achieving ~9 dB squeezing with only three measurements
- Extension to preparation of cat states and GKP states which are crucial for quantum error correction
View Full Abstract
Bosonic modes constitute a central resource in a wide range of quantum technologies, providing long-lived degrees of freedom for the storage, processing, and transduction of quantum information. Such modes naturally arise in platforms including circuit quantum electrodynamics, quantum acoustodynamics, and trapped-ion systems. In these architectures, coherent control and high-fidelity readout of the bosonic degrees of freedom are achieved via coupling to an auxiliary qubit. When operated in the strong dispersive regime, this interaction enables parity measurements of the mode which, in combination with phase-space displacements, constitute a standard experimental tool for full Wigner-function tomography. Here, we propose a protocol based on displaced parity measurements that allows for the preparation of a variety of bosonic quantum states. As a first example, we demonstrate the generation of squeezed states, achieving up to ~9 dB of squeezing after only three parity measurements, and show that the protocol is robust against experimental imperfections. Finally, we generalize our approach to the preparation of other paradigmatic bosonic states, including cat and Gottesman-Kitaev-Preskill states.
Charge-$4e$ superconductor with parafermionic vortices: A path to universal topological quantum computation
This paper proposes a new type of superconductor that supports charge-4e pairing instead of conventional charge-2e pairing, which hosts parafermion zero modes that can naturally encode qutrits (3-level quantum systems) and enable universal quantum computation through braiding operations and interferometric measurements.
Key Contributions
- Introduction of charge-4e topological superconductors with Z3 parafermion zero modes for qutrit-based quantum computing
- Demonstration that braiding parafermion defects generates the full Clifford group and interferometric measurements enable universal quantum computation
- Proposal for realizing these systems through vortex proliferation in stacked p+ip superconductors or melted quantum Hall states
View Full Abstract
Topological superconductors (TSCs) provide a promising route to fault-tolerant quantum information processing. However, the canonical Majorana platform based on $2e$ TSCs remains computationally constrained. In this work, we find a $4e$ TSC that overcomes these constraints by combining a charge-$4e$ condensate with an Abelian chiral $\mathbb{Z}_3$ topological order in an intertwined fashion. Remarkably, this $4e$ TSC can be obtained by proliferating vortex-antivortex pairs in a stack of two $2e$ $p+ip$ TSCs, or by melting a $ν=2/3$ quantum Hall state. Specific to this TSC, the $hc/(4e)$ fluxes act as charge-conjugation defects in the topological order, whose braiding with anyons transmutes anyons into their antiparticles. This symmetry enrichment leads to $\mathbb{Z}_3$ parafermion zero modes trapped in the elementary vortex cores, which naturally encode qutrits. Braiding the parafermion defects alone generates the full many-qutrit Clifford group. We further show that a simple single-probe interferometric measurement enables topologically protected magic-state preparation, promoting Clifford operations to a universal gate set. Importantly, the non-Abelian excitations in the $4e$ TSC are confined to externally controlled defects, making them uniquely identifiable and amenable to controlled creation and motion with superconducting-circuit technology. Our results establish hierarchical electron aggregation as a complementary principle for engineering topological quantum matter with enhanced computational power.
Hybrid Coupling Topology with Dynamic ZZ Suppression for Optimizing Circuit Depth during Runtime in Superconducting Quantum Processor
This paper presents a new hybrid coupling architecture for superconducting quantum processors that connects four qubits using a single tunable coupler, which can dynamically suppress unwanted ZZ interactions during operation. The design achieves higher qubit connectivity and reduces quantum circuit depth by nearly 20% compared to IBM's current architecture.
Key Contributions
- Introduction of hybrid tunable-coupling architecture connecting four transmon qubits with single coupler
- Dynamic ZZ suppression using off-resonant Stark drives
- Near-20% reduction in circuit depth compared to IBM's Heavy-Hexagonal layout
- Improved qubit connectivity while maintaining scalability
View Full Abstract
To reduce circuit depth when executing quantum algorithms, it is necessary to maximize qubit connectivity on a near-term quantum processor. While addressing this, we also need to ensure high gate fidelity, suppression of unwanted ZZ cross-talk, a compact layout footprint, and minimal control hardware complexity to support scalability. In current superconducting quantum chips, fixed coupling is used as it is easier to scale, but it is limited by unwanted static ZZ interaction during single-qubit operations, which degrades system performance. To overcome these challenges, we have introduced a first-of-its-kind hybrid tunable-coupling architecture that connects four fixed-frequency transmon qubits using a single coupler. This hybrid coupler uses off-resonant Stark drives to tune ZZ strength between qubit pairs. Experimentally backed simulation results indicate that our proposed hybrid design maximizes the qubit connectivity while reducing control overhead. This design achieves a near 20% reduction in circuit depth compared to IBM's Heavy-Hexagonal layout, showing its potential for scalability.
Extensible universal photonic quantum computing with nonlinearity
This paper demonstrates an extensible photonic quantum computer that combines programmable linear optical networks with nonlinear modules to achieve universal quantum computing. The system generates optical Gottesman-Kitaev-Preskill states quasi-deterministically for error correction and simulates complex quantum dynamics such as the Bose-Hubbard model.
Key Contributions
- First extensible photonic quantum computer achieving universality through integrated linear and nonlinear operations
- Quasi-deterministic generation of optical Gottesman-Kitaev-Preskill states for bosonic error correction
- Demonstration of complex many-body quantum simulation on photonic hardware
View Full Abstract
Universal quantum computing requires an architecture that supports both linear circuits and, crucially, strong nonlinear resources. For quantum photonic systems, integrating such nonlinearities with scalable linear circuitry has been a major bottleneck, leaving most optical experiments without nonlinear operations and, consequently, incapable of achieving universality. Here, we report an extensible photonic computer that supports a universal gate set by seamlessly combining fully programmable, scalable linear optical networks with integrated nonlinear modules. This platform enables a broad range of quantum computing and simulation tasks. We demonstrate the quasi-deterministic generation of optical Gottesman-Kitaev-Preskill states, which are essential resources for bosonic error correction, yet had previously been realized only probabilistically. Furthermore, we simulate complex many-body quantum dynamics, exemplified by the Bose-Hubbard model. Such quantum simulation tasks have long been considered beyond the reach of photonic hardware limited to linear operations. These capabilities, enabled by our extensible architecture, establish a viable route towards photonic quantum simulation and fault-tolerant quantum computing.
Algebraic Reduction to Improve an Optimally Bounded Quantum State Preparation Algorithm
This paper presents an improved algorithm for preparing quantum states, based on a simpler algebraic decomposition that prepares the real and imaginary parts of the desired state separately. The new approach reduces circuit depth, gate count, and CNOT count compared to existing optimally bounded state preparation methods when ancillary qubits are available.
Key Contributions
- Simplified algebraic decomposition for quantum state preparation that reduces circuit complexity
- Reduction in circuit depth, total gates, and CNOT count when ancillary qubits are available
- Implementation and testing using PennyLane for both dense and sparse quantum states
View Full Abstract
The preparation of $n$-qubit quantum states is a cross-cutting subroutine for many quantum algorithms, and the effort to reduce its circuit complexity is a significant challenge. In the literature, the quantum state preparation algorithm by Sun et al. is known to be optimally bounded, defining the asymptotically optimal width-depth trade-off bounds with and without ancillary qubits. In this work, a simpler algebraic decomposition is proposed to separate the preparation of the real part of the desired state from the complex one, resulting in a reduction in terms of circuit depth, total gates, and CNOT count when $m$ ancillary qubits are available. The reduction in complexity is due to the use of a single operator $Λ$ for each uniformly controlled gate, instead of the three in the original decomposition. Using the PennyLane library, this new algorithm for state preparation has been implemented and tested in a simulated environment for both dense and sparse quantum states, including those that are random and of physical interest. Furthermore, its performance has been compared with that of Möttönen et al.'s algorithm, which is a de facto standard for preparing quantum states in cases where no ancillary qubits are used, highlighting interesting lines of development.
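As a rough illustration of the real/imaginary split the summary describes: any normalized complex state can be written as psi = a*psi_R + i*b*psi_I with psi_R, psi_I real unit vectors and a^2 + b^2 = 1. This is only the underlying algebraic identity, sketched here under that normalization assumption; it is not the paper's circuit-level construction with the single $Λ$ operator per uniformly controlled gate.

```python
import numpy as np

def split_real_imag(psi: np.ndarray):
    """Split a normalized complex state vector into normalized real and
    imaginary components: psi = a * psi_R + 1j * b * psi_I."""
    r, c = psi.real, psi.imag
    a, b = np.linalg.norm(r), np.linalg.norm(c)
    psi_R = r / a if a > 0 else r
    psi_I = c / b if b > 0 else c
    return a, b, psi_R, psi_I
```

Since psi_R and psi_I are real vectors, each could in principle be prepared by a real-amplitude subroutine, with the weights (a, b) fixing the relative contribution; the paper's actual savings come from how the corresponding uniformly controlled gates are decomposed.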
Characterizing Quantum Error Correction Performance of Radiation-induced Errors
This paper develops computational models to simulate how radiation impacts affect quantum error correction performance on superconducting quantum devices, since radiation can cause correlated errors that standard error correction codes struggle with. The researchers create a holistic modeling framework that maps radiation-induced qubit errors onto quantum error channels and tests mitigation strategies for improved error correction.
Key Contributions
- Computational model linking radiation-induced quasiparticle effects to quantum error correction performance
- Performance metric for quantifying QEC code resilience to radiation impacts
- Modular framework for testing error mitigation strategies and chip designs
View Full Abstract
Radiation impacts are a current challenge with computing on superconducting-based quantum devices because they can lead to widespread correlated errors across the device. Such errors can be problematic for quantum error correction (QEC) codes, which are generally designed to correct independent errors. To address this, we have developed a computational model to simulate the effects of radiation impacts on QEC performance. This is achieved by building from recently developed models of quasiparticle density, mapping radiation-induced qubit error rates onto a quantum error channel and simulation of a simple surface code. We also provide a performance metric to quantify the resilience of a QEC code to radiation impacts. Additionally, we sweep various parameters of chip design to test mitigation strategies for improved QEC performance. Our model approach is holistic, allowing for modular performance testing of error mitigation strategies and chip and code designs.
Modeling integrated frequency shifters and beam splitters
This paper develops theoretical methods for designing frequency-mode beam splitters using modulated coupled resonator arrays for photonic quantum computing. The authors create a flexible methodology based on quantum input-output network formalism to construct transfer matrices for these devices and prove limitations on certain implementations.
Key Contributions
- Development of SLH formalism-based methodology for constructing transfer matrices of frequency-mode beam splitters
- Analysis of various device configurations including two-resonator devices and Mach-Zehnder interferometers
- Formal no-go theorem on limitations of native N-mode frequency-domain beam splitters with N-resonator arrays
View Full Abstract
Photonic quantum computing is a strong contender in the race to fault-tolerance. Recent proposals using qubits encoded in frequency modes promise a large reduction in hardware footprint, and have garnered much attention. In this encoding, linear optics, i.e., beam splitters and phase shifters, is necessarily not energy-conserving, and is costly to implement. In this work, we present designs of frequency-mode beam splitters based on modulated arrays of coupled resonators. We develop a methodology to construct their effective transfer matrices based on the SLH formalism for quantum input-output networks. Our methodology is flexible and highly composable, allowing us to define $N$-mode beam splitters either natively based on arrays of $N$-resonators of arbitrary connectivity or as networks of interconnected $l$-mode beam splitters, with $l<N$. We apply our methodology to analyze a two-resonator device, a frequency-domain phase shifter and a Mach-Zehnder interferometer obtained from composing these devices, a four-resonator device, and present a formal no-go theorem on the possibility of natively generating certain $N$-mode frequency-domain beam splitters with arrays of $N$-resonators.
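The composability the authors emphasize — building larger devices as networks of smaller beam splitters and phase shifters — can be illustrated at the transfer-matrix level. The sketch below is textbook linear-optics composition, not the SLH derivation itself: it composes a Mach-Zehnder interferometer from two 50:50 splitters and a phase shifter and checks unitarity and the tunable splitting ratio:

```python
import numpy as np

# 50:50 beam splitter as a 2x2 transfer matrix on two (frequency) modes
B = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def phase(phi):
    """Phase shifter acting on the second mode."""
    return np.diag([1, np.exp(1j * phi)])

def mzi(phi):
    """Mach-Zehnder interferometer: splitter, phase, splitter."""
    return B @ phase(phi) @ B

phi = 0.7
M = mzi(phi)

# composition of unitary transfer matrices is unitary
assert np.allclose(M.conj().T @ M, np.eye(2))

# tunable splitting ratio: |M00|^2 = cos^2(phi / 2)
assert np.isclose(abs(M[0, 0]) ** 2, np.cos(phi / 2) ** 2)
```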
Extended Rydberg Lifetimes in a Cryogenic Atom Array
This paper demonstrates how cooling cesium atoms to 4K in optical tweezers significantly extends the lifetime of Rydberg states by reducing blackbody radiation effects. The extended lifetimes improve the coherence time of ground-Rydberg qubits, which is crucial for reducing errors in neutral-atom quantum computing systems.
Key Contributions
- Demonstration of 3.3x longer Rydberg state lifetimes in cryogenic environment
- Measurement of small differential dynamic polarizability reducing dephasing
- Advancement toward higher fidelity neutral-atom two-qubit gates
View Full Abstract
We report on the realization of a $^{133}$Cs optical tweezer array in a cryogenic blackbody radiation (BBR) environment. By enclosing the array within a 4K radiation shield, we measure long Rydberg lifetimes, up to $406(36)\,\mu\mathrm{s}$ for the $55 P_{3/2}$ Rydberg state, a factor of 3.3(3) longer than the room-temperature value. We employ single-photon coupling for coherent manipulation of the ground-Rydberg qubit. We measure a small differential dynamic polarizability of the transition, beneficial for reducing dephasing due to light intensity fluctuations. Our results pave the path for advancing neutral-atom two-qubit gate fidelities as their error budgets become increasingly dominated by $T_1$ relaxation of the ground-Rydberg qubit.
Quantum Error Mitigation at the pre-processing stage
This paper introduces a new quantum error mitigation technique that corrects for noise before measurement (pre-processing) rather than after measurement (post-processing). The method finds a surrogate observable to measure on the noisy quantum state that gives the same result as measuring the target observable on the noise-free state, using tensor networks to make this computationally feasible.
Key Contributions
- Development of pre-processing quantum error mitigation approach that finds surrogate observables to compensate for noise effects
- Significant computational complexity improvement (~10^6 times) over existing Tensor Error Mitigation methods by eliminating tomographic measurements
View Full Abstract
The realization of fault-tolerant quantum computers remains a challenging endeavor, forcing state-of-the-art quantum hardware to rely heavily on noise mitigation techniques. Standard quantum error mitigation is typically based on post-processing strategies. In contrast, the present work explores a pre-processing approach, in which the effects of noise are mitigated before performing a measurement on the output state. The main idea is to find an observable $Y$ such that its expectation value on a noisy quantum state $\mathcal{E}(\rho)$ matches the expectation value of a target observable $X$ on the noiseless quantum state $\rho$. Our method requires the execution of a noisy quantum circuit, followed by the measurement of the surrogate observable $Y$. The main enablers of our method in practical scenarios are Tensor Networks. The proposed method improves over Tensor Error Mitigation (TEM) in terms of average error, circuit depth, and complexity, attaining a measurement overhead that approaches the theoretical lower bound. The improvement in terms of classical computation complexity is on the order of $\sim 10^6$ times when compared to the post-processing computational cost of TEM in practical scenarios. Such gain comes from eliminating the need to perform the set of informationally complete positive operator-valued measurements (IC-POVM) required by TEM, as well as any other tomographic strategy.
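The surrogate-observable condition $\mathrm{Tr}[Y\,\mathcal{E}(\rho)] = \mathrm{Tr}[X\rho]$ amounts to inverting the adjoint channel, $Y = (\mathcal{E}^{\dagger})^{-1}(X)$. A minimal single-qubit sketch (the paper uses tensor networks to make this tractable for many-qubit circuits): a depolarizing channel shrinks a traceless observable like $Z$ by $1 - 4p/3$, so the surrogate is simply a rescaled $Z$:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y_p = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

p = 0.1  # depolarizing error probability (illustrative)
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p / 3) * X,
         np.sqrt(p / 3) * Y_p, np.sqrt(p / 3) * Z]

def channel(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

# random single-qubit density matrix
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

# target observable Z; depolarizing maps Z -> (1 - 4p/3) Z,
# so the surrogate observable is Z rescaled by the inverse factor
Y_sur = Z / (1 - 4 * p / 3)

noiseless = np.trace(Z @ rho).real          # what we want
mitigated = np.trace(Y_sur @ channel(rho)).real  # what we measure on the noisy state
assert np.isclose(noiseless, mitigated)
```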
High-order dynamical decoupling in the weak-coupling regime
This paper develops an improved method for dynamical decoupling, which uses carefully timed quantum pulses to protect quantum systems from environmental noise. The new approach significantly reduces the number of pulses needed compared to existing methods while maintaining better error suppression, making it more practical for real quantum devices.
Key Contributions
- Development of high-order dynamical decoupling scheme with polynomial pulse scaling O(n^(k-1)K) versus exponential scaling O(exp(n)) in existing methods
- Novel mapping to continuous necklace-splitting problem to construct optimal pulse sequences
- Demonstration of asymptotically optimal pulse count with superior performance over Quadratic DD in weak-coupling regime
View Full Abstract
We introduce a high-order dynamical decoupling (DD) scheme for arbitrary system-bath interactions in the weak-coupling regime. Given any decoupling group $\mathcal G$ that averages the interaction to zero, our construction yields pulse sequences whose length scales as $\mathcal{O}(|\mathcal G| K)$, while canceling all error terms linear in the system-bath coupling strength up to order $K$ in the total evolution time. As a corollary, for an $n$-qubit system with $k$-local system-bath interactions, we obtain an $\mathcal{O}(n^{k-1}K)$-pulse sequence, a significant improvement over existing schemes with $\mathcal{O}(\exp(n))$ pulses (for $k=\mathcal{O}(1)$). The construction is obtained via a mapping to the continuous necklace-splitting problem, which asks how to cut a multi-colored interval into pieces that give each party the same share of every color. We provide explicit pulse sequences for suppressing general single-qubit decoherence, prove that the pulse count is asymptotically optimal, and verify the predicted error scaling in numerical simulations. For the same number of pulses, we observe that our sequences outperform the state-of-the-art Quadratic DD in the weak-coupling regime. We also note that the same construction extends to suppress slow, time-dependent Hamiltonian noise.
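The starting point of any such scheme is a decoupling group $\mathcal G$ that averages the system-bath coupling to zero. A minimal check of that property for the single-qubit Pauli group (the paper's contribution — high-order sequences with few pulses — is not reproduced here):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

group = [I2, X, Y, Z]  # single-qubit Pauli decoupling group

# an arbitrary traceless system part of the system-bath coupling
H_sys = 0.3 * X + 0.5 * Y - 0.2 * Z

# twirling over the group averages the coupling to zero (first-order decoupling)
avg = sum(P @ H_sys @ P.conj().T for P in group) / len(group)
assert np.allclose(avg, 0)
```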
Digital signatures with classical shadows on near-term quantum computers
This paper introduces a quantum digital signature scheme that uses only classical communication by leveraging 'classical shadows' of quantum states produced by random circuits as public keys. The authors demonstrate improved noise tolerance and experimentally validate their approach using 32-qubit circuits on near-term quantum hardware.
Key Contributions
- Novel quantum digital signature scheme requiring only classical communication using classical shadows
- Improved state-certification primitive with higher noise tolerance and lower sample complexity
- Experimental demonstration on 32-qubit states with circuits containing ≥80 logical gates
View Full Abstract
Quantum mechanics provides cryptographic primitives whose security is grounded in hardness assumptions independent of those underlying classical cryptography. However, existing proposals require low-noise quantum communication and long-lived quantum memory, capabilities which remain challenging to realize in practice. In this work, we introduce a quantum digital signature scheme that operates with only classical communication, using the classical shadows of states produced by random circuits as public keys. We provide theoretical and numerical evidence supporting the conjectured hardness of learning the private key (the circuit) from the public key (the shadow). A key technical ingredient enabling our scheme is an improved state-certification primitive that achieves higher noise tolerance and lower sample complexity than prior methods. We realize this certification by designing a high-rate error-detecting code tailored to our random-circuit ensemble and experimentally generating shadows for 32-qubit states using circuits with $\geq 80$ logical ($\geq 582$ physical) two-qubit gates, attaining 0.90 $\pm$ 0.01 fidelity. With increased number of measurement samples, our hardware-demonstrated primitives realize a proof-of-principle quantum digital signature, demonstrating the near-term feasibility of our scheme.
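The "classical shadow" public keys rest on the standard shadow-estimation identity: for random Pauli-basis measurements, the snapshot $3U^{\dagger}|b\rangle\langle b|U - I$ averages exactly to $\rho$. The single-qubit sketch below verifies that identity deterministically by enumerating the three bases and weighting outcomes by their Born probabilities (the paper's scheme operates on 32-qubit random-circuit states, which this does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# random single-qubit state
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)

# exact average of the shadow estimator 3*P - I over the three Pauli
# bases (uniform) and outcomes (Born probabilities) reconstructs rho
recon = np.zeros((2, 2), dtype=complex)
for W in (X, Y, Z):
    for sign in (+1, -1):
        proj = (I2 + sign * W) / 2        # outcome projector in basis W
        prob = np.trace(proj @ rho).real  # Born probability of this outcome
        recon += (1 / 3) * prob * (3 * proj - I2)

assert np.allclose(recon, rho)
```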
Review of Superconducting Qubit Devices and Their Large-Scale Integration
This paper provides a comprehensive review of superconducting qubit quantum computers, covering the fundamental physics, device engineering, and scaling challenges. It examines key technical requirements through DiVincenzo's criteria and discusses the path toward large-scale integration using electronic design automation tools.
Key Contributions
- Comprehensive review of superconducting qubit technologies and their implementation challenges
- Analysis of large-scale integration approaches for superconducting quantum computers
- Discussion of electronic design automation tools for quantum computer design
- Review of fault-tolerant quantum computing requirements and entanglement gate operations
View Full Abstract
The superconducting qubit quantum computer is one of the most promising quantum computing architectures for large-scale integration due to its maturity and close proximity to the well-established semiconductor manufacturing infrastructure. From an education perspective, it also bridges classical microwave electronics and quantum electrodynamics. In this paper, we review the basics of quantum computers, superconductivity, and Josephson junctions. We then introduce important technologies and concepts related to DiVincenzo's criteria, which are the necessary conditions for the superconducting qubits to work as a useful quantum computer. Firstly, we discuss various types of superconducting qubits formed with Josephson junctions, from which we understand the trade-off across multiple design parameters, including their noise immunity. Secondly, we discuss different schemes to achieve entanglement gate operations, which are a major bottleneck in achieving more efficient fault-tolerant quantum computing. Thirdly, we review readout engineering, including the implementations of the Purcell filters and quantum-limited amplifiers. Finally, we discuss the nature and review the studies of two-level system defects, which are currently the limiting factor of qubit coherence time. DiVincenzo's criteria are only the necessary conditions for a technology to be eligible for quantum computing. To have a useful quantum computer, large-scale integration is required. We review proposals and developments for the large-scale integration of superconducting qubit devices. By comparing with the application of electronic design automation (EDA) in semiconductors, we also review the use of EDA in superconducting qubit quantum computer design, which is necessary for its large-scale integration.
Resource-Efficient Digitized Adiabatic Quantum Factorization
This paper develops a more efficient quantum algorithm for integer factorization using digitized adiabatic quantum computing. The researchers propose encoding the solution in the kernel subspace of the problem Hamiltonian and reformulate the problem as QUBO instead of PUBO, demonstrating improved performance for factoring integers up to 8 bits with reduced circuit complexity.
Key Contributions
- Novel kernel subspace encoding approach for adiabatic quantum factorization
- Reformulation of factorization problem from PUBO to QUBO framework with reduced gate complexity
View Full Abstract
Digitized adiabatic quantum factorization is a hybrid algorithm that exploits the advantage of digitized quantum computers to implement efficient adiabatic algorithms for factorization through gate decompositions of analog evolutions. In this paper, we harness the flexibility of digitized computers to derive a digitized adiabatic algorithm able to reduce the gate-demanding costs of implementing factorization. To this end, we propose a new approach for adiabatic factorization by encoding the solution of the problem in the kernel subspace of the problem Hamiltonian, instead of using the ground-state encoding considered in the standard adiabatic factorization proposed by Peng et al. [Phys. Rev. Lett. 101, 220405 (2008)]. Our encoding enables the design of adiabatic factorization algorithms belonging to the class of Quadratic Unconstrained Binary Optimization (QUBO) methods, instead of the Polynomial Unconstrained Binary Optimization (PUBO) approach used by standard adiabatic factorization. We illustrate the performance of our QUBO algorithm by implementing the factorization of integers $N$ up to 8 bits. The results demonstrate a substantial improvement over the PUBO formulation, both in terms of reduced circuit complexity and increased fidelity in identifying the correct solution.
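To see why factorization becomes a binary optimization problem in the first place, consider the standard cost $(N - pq)^2$, which vanishes exactly at a valid factorization; expanding it in the bits of $p$ and $q$ yields the PUBO that the paper's kernel encoding then reduces to QUBO (that reduction itself is not shown here). A brute-force sketch over odd 3-bit factors of $N = 15$:

```python
from itertools import product

N = 15  # integer to factor

best = None
for p1, p2, q1, q2 in product((0, 1), repeat=4):
    p = 1 + 2 * p1 + 4 * p2  # odd candidate factors: least significant bit fixed to 1
    q = 1 + 2 * q1 + 4 * q2
    cost = (N - p * q) ** 2  # zero exactly at a valid factorization
    if best is None or cost < best[0]:
        best = (cost, p, q)

cost, p, q = best
assert cost == 0      # the minimum of the cost landscape is a factorization
assert p * q == N     # recovers a factor pair of 15
```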
Qudit Twisted-Torus Codes in the Bivariate Bicycle Framework
This paper develops improved quantum error correction codes called qudit twisted-torus codes that work with quantum systems based on higher-dimensional units (qudits) rather than just qubits. The researchers show these twisted designs achieve better performance metrics than previous untwisted versions and outperform existing qubit-based codes.
Key Contributions
- Extension of twisted-torus quantum error correction codes to qudit systems over finite fields
- Demonstration that twisted-torus qudit codes achieve larger distances and better rate-distance tradeoffs than untwisted counterparts and previous qubit implementations
View Full Abstract
We study finite-length qudit quantum low-density parity-check (LDPC) codes from translation-invariant CSS constructions on two-dimensional tori with twisted boundary conditions. Recent qubit work [PRX Quantum 6, 020357 (2025)] showed that, within the bivariate-bicycle viewpoint, twisting generalized toric patterns can significantly improve finite-size performance as measured by $k d^{2}/n$. Here $n$ denotes the number of physical qudits, $k$ the number of logical qudits, and $d$ the code distance. Building on this insight, we extend the search to qudit codes over finite fields. Using algebraic methods, we compute the number of logical qudits and identify compact codes with favorable rate--distance tradeoffs. Overall, for the finite sizes explored, twisted-torus qudit constructions typically achieve larger distances than their untwisted counterparts and outperform previously reported twisted qubit instances. The best new codes are tabulated.
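The figure of merit $k d^{2}/n$ used throughout is easy to compute for any code. A small sketch comparing a distance-12 rotated surface code $[[d^2, 1, d]]$ with the $[[144, 12, 12]]$ bivariate-bicycle "gross" code of Bravyi et al. (2024) — both parameter sets taken from the literature, not from this paper:

```python
def merit(n, k, d):
    """Finite-size figure of merit k * d^2 / n for an [[n, k, d]] code."""
    return k * d * d / n

# rotated surface code with distance d = 12: [[144, 1, 12]]
assert merit(144, 1, 12) == 1.0

# the [[144, 12, 12]] bivariate-bicycle "gross" code: 12x better by this metric
assert merit(144, 12, 12) == 12.0
```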
Approximate simulation of complex quantum circuits using sparse tensors
This paper presents a new method for simulating quantum circuits on classical computers using sparse tensor networks, which can efficiently represent and manipulate quantum states that don't have underlying symmetries. The approach provides improved runtime scaling with respect to circuit size and depth compared to traditional methods.
Key Contributions
- Novel sparse tensor data structure for quantum states without symmetry
- Efficient contraction and truncation algorithms for sparse tensor networks
- Improved runtime scaling for quantum circuit simulation
View Full Abstract
The study of quantum circuit simulation using classical computers is a key research topic that helps define the boundary of verifiable quantum advantage, solve quantum many-body problems, and inform development of quantum hardware and software. Tensor networks have become forefront mathematical tools for these tasks. Here we introduce a method to approximately simulate quantum circuits using sparsely-populated tensors. We describe a sparse tensor data structure that can represent quantum states with no underlying symmetry, and outline algorithms to efficiently contract and truncate these tensors. We show that the data structure and contraction algorithm are efficient, leading to expected runtime scalings versus qubit number and circuit depth. Our results motivate future research in optimization of sparse tensor networks for quantum simulation.
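As a toy illustration of the idea — storing only non-negligible amplitudes and truncating small ones after each contraction — the sketch below represents a state as a dictionary from basis indices to amplitudes (this is a hypothetical minimal data structure for illustration, not the paper's sparse tensor format):

```python
import numpy as np

# sparse n-qubit state: basis index (int bitmask) -> amplitude, zeros omitted
def apply_1q(state, gate, qubit, tol=1e-12):
    """Apply a 2x2 gate to `qubit`, dropping amplitudes below `tol` (truncation)."""
    out = {}
    for bits, amp in state.items():
        b = (bits >> qubit) & 1
        for nb in (0, 1):
            a = gate[nb, b] * amp
            if abs(a) < tol:
                continue
            key = (bits & ~(1 << qubit)) | (nb << qubit)
            out[key] = out.get(key, 0) + a
    return {k: v for k, v in out.items() if abs(v) > tol}

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = {0: 1.0}                   # |000>
state = apply_1q(state, H, qubit=0)

assert len(state) == 2             # only two of the eight amplitudes are stored
assert np.isclose(state[1], 1 / np.sqrt(2))
```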
Detailed, interpretable characterization of mid-circuit measurement on a transmon qubit
This paper develops new methods to analyze and understand mid-circuit measurements on quantum computing hardware by adapting error analysis techniques to break down measurement errors into physically meaningful components. The researchers applied their approach to a transmon qubit device to identify and quantify specific error sources like amplitude damping and readout errors.
Key Contributions
- Adapted error generator formalism to mid-circuit measurements for better physical interpretation
- Demonstrated detailed characterization of measurement errors on transmon qubits including amplitude damping and readout errors
- Showed how measurement errors vary with readout pulse parameters and validated theoretical predictions
View Full Abstract
Mid-circuit measurements (MCMs) are critical components of the quantum error correction protocols expected to enable utility-scale quantum computing. MCMs can be modeled by quantum instruments (a type of quantum operation or process), which can be characterized self-consistently using gate set tomography. However, experimentally estimated quantum instruments are often hard to interpret or relate to device physics. We address this challenge by adapting the error generator formalism -- previously used to interpret noisy quantum gates by decomposing their error processes into physically meaningful sums of "elementary errors" -- to MCMs. We deploy our new analysis on a transmon qubit device to tease out and quantify error mechanisms including amplitude damping, readout error, and imperfect collapse. We examine in detail how the magnitudes of these errors vary with the readout pulse amplitude, recover the key features of dispersive readout predicted by theory, and show that these features can be modeled parsimoniously using a reduced model with just a few parameters.
Accelerating qubit reset through the Mpemba effect
This paper demonstrates how to accelerate qubit reset (initialization to ground state) by up to 50% using the Mpemba effect, where a two-qubit gate converts slow-decaying single-qubit coherences into faster-decaying two-qubit coherences. The researchers implemented and validated this protocol on a superconducting quantum processor.
Key Contributions
- Novel protocol using Mpemba effect to accelerate passive qubit reset by up to 50%
- Experimental demonstration on superconducting quantum processor with analysis of robustness under realistic error conditions
View Full Abstract
Passive qubit reset is a key primitive for quantum information processing, whereby qubits are initialized by allowing them to relax to their ground state through natural dissipation, without the need for active control or feedback. However, passive reset occurs on timescales that are much longer than those of gate operations and measurements, making it a significant bottleneck for algorithmic execution. Here, we show that this limitation can be overcome by exploiting the Mpemba effect, originally indicating the faster cooling of hot systems compared to cooler ones. Focusing on the regime where coherence times exceed energy relaxation times ($T_2 > T_1$), we propose a simple protocol based on a single entangling two-qubit gate that converts local single-qubit coherences into fast-decaying global two-qubit coherences. This removes their overlap with the slowest decaying Liouvillian mode and enables a substantially faster relaxation to the ground state. For realistic parameters, we find that our protocol can reduce reset times by up to $50\%$ compared to standard passive reset. We analyze the robustness of the protocol under non-Markovian noise, imperfect coherent control and finite temperature, finding that the accelerated reset persists across a broad range of realistic error sources. Finally, we present an experimental implementation of our protocol on an IQM superconducting quantum processor. Our results demonstrate how Mpemba-like accelerated relaxation can be harnessed as a practical tool for fast and accurate qubit initialization.
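The mechanism can be caricatured with a two-mode relaxation model: the excess population is a sum of a slow and a fast exponential, and the entangling gate removes the overlap with the slow mode. The time constants and overlaps below are illustrative, not the paper's Liouvillian spectrum:

```python
import numpy as np

T1, T2 = 1.0, 3.0      # T2 > T1 regime considered in the paper (illustrative units)
tau_slow = T2          # slow-decaying single-qubit coherence mode
tau_fast = T1 / 2      # fast-decaying global two-qubit coherence (illustrative)

def excess(t, a_slow, a_fast):
    """Distance from the ground state as a sum of two decaying modes."""
    return a_slow * np.exp(-t / tau_slow) + a_fast * np.exp(-t / tau_fast)

def reset_time(a_slow, a_fast, eps=1e-3):
    """First time the excess drops below the reset threshold eps."""
    t = np.linspace(0, 30, 300001)
    return t[np.argmax(excess(t, a_slow, a_fast) <= eps)]

t_passive = reset_time(a_slow=0.5, a_fast=0.5)  # overlap with the slow mode
t_mpemba = reset_time(a_slow=0.0, a_fast=1.0)   # overlap removed by the gate

assert t_mpemba < t_passive  # Mpemba-like acceleration of the reset
```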
An Evaluation of the Remote CX Protocol under Noise in Distributed Quantum Computing
This paper evaluates how the remote CX protocol performs under noise when connecting multiple quantum processing units (QPUs) in distributed quantum computing networks. The researchers simulate different network configurations and qubit assignment strategies to understand how noise degrades performance when running quantum algorithms across distributed quantum computers.
Key Contributions
- Evaluation of remote CX protocol performance under noise in distributed quantum computing systems
- Comparison of naive versus graph partitioning strategies for qubit assignment across multiple QPUs
- Performance analysis on various quantum algorithms including Grover's algorithm in distributed settings
View Full Abstract
Quantum computers connected through classical and quantum communication channels can be combined to function as a single unit to run large quantum circuits that each device is unable to execute on its own. The distributed quantum computing paradigm is therefore often seen as a potential pathway to scaling quantum computing to capacities necessary for practical and large-scale applications. Whether connecting multiple quantum processing units (QPUs) in clusters or over networks, quantum communication requires entanglement to be generated and distributed over distances. Using entanglement, the remote CX protocol can be performed, which allows the application of the CX gate involving qubits located in different QPUs. In this work, we use a specialized simulation framework for a high-level evaluation of the impact of the protocol when executed under noise in various network configurations using different numbers of QPUs. We compare naive and graph partitioning qubit assignment strategies and how they affect the fidelity in experiments run on Grover, GHZ, VQC, and random circuits. The results provide insights into how QPU and network configurations or naive scheduling can degrade performance.
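The remote CX primitive itself is the standard teleportation-based nonlocal CNOT (one local CX and one measurement per node, plus classically communicated Pauli corrections). The noiseless sketch below simulates all four measurement branches on a statevector and checks that each one reproduces a direct CX between the remote control and target; noise modeling, which is the subject of the paper, is not included:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def apply_1q(psi, U, axis):
    """Apply a 2x2 gate to one tensor axis of the state."""
    psi = np.tensordot(U, psi, axes=([1], [axis]))
    return np.moveaxis(psi, 0, axis)

def apply_cx(psi, ctrl, targ):
    """CX: flip the target axis on the control = 1 block."""
    psi = psi.copy()
    idx = [slice(None)] * psi.ndim
    idx[ctrl] = 1
    t_ax = targ - (1 if targ > ctrl else 0)  # axis shift after slicing out ctrl
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=t_ax)
    return psi

# axes: 0 = control c (node A), 1 = ancilla a (A), 2 = ancilla b (B), 3 = target t (B)
psi_in = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # state on (c, t)
psi_in /= np.linalg.norm(psi_in)

bell = np.zeros((2, 2))
bell[0, 0] = bell[1, 1] = 1 / np.sqrt(2)          # shared Bell pair on (a, b)
psi = np.einsum('ct,ab->cabt', psi_in, bell)

psi = apply_cx(psi, 0, 1)                         # local CX c -> a at node A
for m1, m2 in product((0, 1), repeat=2):          # enumerate measurement branches
    br = psi[:, m1, :, :]                         # measure a in Z basis -> m1
    br = br / np.linalg.norm(br)                  # axes now (c, b, t)
    if m1 == 1:
        br = apply_1q(br, X, 1)                   # Bob's X correction on b
    br = apply_cx(br, 1, 2)                       # local CX b -> t at node B
    br = apply_1q(br, H, 1)                       # measure b in X basis...
    br = br[:, m2, :]                             # ...with outcome m2
    br = br / np.linalg.norm(br)                  # axes now (c, t)
    if m2 == 1:
        br = apply_1q(br, Z, 0)                   # Alice's Z correction on c
    expected = apply_cx(psi_in, 0, 1)             # direct CX c -> t for comparison
    fidelity = abs(np.vdot(expected, br)) ** 2
    assert np.isclose(fidelity, 1.0)              # every branch implements CX
```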
Even More Efficient Soft-Output Decoding with Extra-Cluster Growth and Early Stopping
This paper develops more efficient methods for computing soft outputs in quantum error correction decoders, specifically focusing on cluster-based decoders like Union-Find. The authors introduce early-stopping techniques and new soft-output types that reduce computational overhead while maintaining hardware compatibility with existing FPGA implementations.
Key Contributions
- Introduction of bounded cluster gap and extra-cluster gap soft-output methods with early stopping
- Development of hardware-compatible soft-output computation for FPGA-implemented Union-Find decoders
- Improved computational scaling with code distance compared to previous methods
View Full Abstract
In fault-tolerant quantum computing, soft outputs from real-time decoders play a crucial role in improving decoding accuracy, post-selecting magic states, and accelerating lattice surgery. A recent paper by Meister et al. [arXiv:2405.07433 (2024)] proposed an efficient method to evaluate soft outputs for cluster-based decoders, including the Union-Find (UF) decoder. However, in parallel computing environments, its computational complexity is comparable to or even surpasses that of the UF decoder itself, resulting in a substantial overhead. Furthermore, this method requires global information about the decoding graph, making it poorly suited for existing hardware implementations of the UF decoder on Field-Programmable Gate Arrays (FPGAs). In this paper, to alleviate these issues, we develop more efficient methods for evaluating high-quality soft outputs in cluster-based decoders by introducing several early-stopping techniques. Our central idea is that the precise value of a large soft output is often unnecessary in practice. Based on this insight, we introduce two types of novel soft-outputs: the bounded cluster gap and the extra-cluster gap. The former reduces the computational complexity of Meister's method by terminating the calculation at an early stage. Our numerical simulations show that this method achieves improved scaling with code distance $d$ compared to the original proposal. The latter, the extra-cluster gap, quantifies decoder reliability by performing a small, additional growth of the clusters obtained by the decoder. This approach offers the significant advantage of enabling soft-output computation without modifying the existing architecture of FPGA-implemented UF decoders. These techniques offer lower computational complexity and higher hardware compatibility, laying a crucial foundation for future real-time decoders with soft outputs.
Device variability of Josephson junctions induced by interface roughness
This paper develops a quantitative model to predict how microscopic surface roughness at the interfaces of superconducting Josephson junctions causes device-to-device variability in their energy parameters. The researchers simulate thousands of junctions to understand how manufacturing imperfections affect the consistency of quantum processor components.
Key Contributions
- Quantitative model linking interface roughness parameters to Josephson energy variability
- Statistical characterization showing Josephson energy follows log-normal distribution with identified scaling relationships
View Full Abstract
As quantum processors scale to large qubit numbers, device-to-device variability emerges as a critical challenge. Superconducting qubits are commonly realized using Al/AlO$_{\text{x}}$/Al Josephson junctions operating in the tunneling regime, where even minor variations in device geometry can lead to substantial performance fluctuations. In this work, we develop a quantitative model for the variability of the Josephson energy $E_{J}$ induced by interface roughness at the Al/AlO$_{\text{x}}$ interfaces. The roughness is modeled as a Gaussian random field characterized by two parameters: the root-mean-square roughness amplitude $σ$ and the transverse correlation length $ξ$. These parameters are extracted from the literature and molecular dynamics simulations. Quantum transport is treated using the Ambegaokar--Baratoff relation combined with a local thickness approximation. Numerical simulations over $5,000$ Josephson junctions show that $E_{J}$ follows a log-normal distribution. The mean value of $E_{J}$ increases with $σ$ and decreases slightly with $ξ$, while the variance of $E_{J}$ increases with both $σ$ and $ξ$. These results paint a quantitative and intuitive picture of Josephson energy variability induced by surface roughness, with direct relevance for junction design.
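One of the paper's findings — that the mean $E_J$ grows with the roughness amplitude $\sigma$ — follows from the convexity of the exponential tunneling law (Jensen's inequality). The toy model below, with illustrative parameters rather than the paper's, averages $\exp(-2\kappa d)$ over a Gaussian spread of local thicknesses and checks this trend plus the roughly log-normal shape of the resulting distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

def ej_samples(sigma, n_junctions=2000, n_grid=64, d0=2.0, kappa=1.0):
    """Toy Josephson-energy samples for junctions with rough barrier thickness.

    Each junction's E_J is proportional to the local tunneling probability
    exp(-2*kappa*d) averaged over a grid of thicknesses d = d0 + delta,
    with delta drawn from a Gaussian of RMS amplitude sigma."""
    delta = rng.normal(0.0, sigma, size=(n_junctions, n_grid))
    return np.exp(-2 * kappa * (d0 + delta)).mean(axis=1)

ej_smooth = ej_samples(sigma=0.05)
ej_rough = ej_samples(sigma=0.30)

# Jensen's inequality: a wider thickness spread raises the mean E_J
assert ej_rough.mean() > ej_smooth.mean()

# the rough-interface distribution is close to log-normal: log(E_J) nearly symmetric
log_ej = np.log(ej_rough)
skew = ((log_ej - log_ej.mean()) ** 3).mean() / log_ej.std() ** 3
assert abs(skew) < 0.5
```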
Accelerating the Tesseract Decoder for Quantum Error Correction
This paper optimizes the Tesseract decoder, a quantum error correction algorithm that uses A* search to find the most likely errors in quantum codes. The researchers implemented four performance enhancements including better data structures and memory layouts, achieving 2-5x speedups across various quantum error correction code families.
Key Contributions
- Systematic optimization of Tesseract decoder achieving 2-5x performance improvements
- Implementation of four targeted optimization strategies including data structure improvements and memory layout reorganization
- Demonstration of consistent speedups across multiple quantum error correction code families including Surface Codes and Color Codes
View Full Abstract
Quantum Error Correction (QEC) is essential for building robust, fault-tolerant quantum computers; however, the decoding process often presents a significant computational bottleneck. Tesseract is a novel Most-Likely-Error (MLE) decoder for QEC that employs the A* search algorithm to explore an exponentially large graph of error hypotheses, achieving high decoding speed and accuracy. This paper presents a systematic approach to optimizing the Tesseract decoder through low-level performance enhancements. Based on extensive profiling, we implemented four targeted optimization strategies, including the replacement of inefficient data structures, reorganization of memory layouts to improve cache hit rates, and the use of hardware-accelerated bit-wise operations. We achieved significant decoding speedups across a wide range of code families and configurations, including Color Codes, Bivariate-Bicycle Codes, Surface Codes, and Transversal CNOT Protocols. Our results demonstrate consistent speedups of approximately 2x for most code families, often exceeding 2.5x. Notably, we achieved a peak performance gain of over 5x for the most computationally demanding configurations of Bivariate-Bicycle Codes. These improvements make the Tesseract decoder more efficient and scalable, serving as a practical case study that highlights the importance of high-performance software engineering in QEC and providing a strong foundation for future research.
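The decoder's core loop is an A* search over error hypotheses, where data-structure choices (here, a binary-heap priority queue and a dictionary of best-known costs) dominate performance. The sketch below is a generic A* on a toy grid, not Tesseract's implementation:

```python
import heapq

def astar(start, goal, neighbors, heuristic):
    """Generic A*: cost of the cheapest path from start to goal, or None."""
    frontier = [(heuristic(start), 0, start)]  # heap of (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float('inf')):
            continue                            # stale queue entry, skip
        for nxt, step in neighbors(node):
            ng = g + step
            if ng < best_g.get(nxt, float('inf')):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt))
    return None

# toy problem: shortest path on a 5x5 grid with unit-cost steps
def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

manhattan = lambda p: abs(4 - p[0]) + abs(4 - p[1])  # admissible heuristic
assert astar((0, 0), (4, 4), neighbors, manhattan) == 8
```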
On the Spectral theory of Isogeny Graphs and Quantum Sampling of Hard Supersingular Elliptic curves
This paper presents a quantum algorithm for sampling random supersingular elliptic curves with unknown endomorphism rings, which is crucial for isogeny-based cryptography. The algorithm provides the first provable quantum polynomial-time solution to generate these 'hard' curves without requiring a trusted setup.
Key Contributions
- First provable quantum polynomial-time algorithm for sampling hard supersingular elliptic curves
- Proof of Quantum Unique Ergodicity conjecture for supersingular isogeny graphs
- Stronger eigenvalue separation property for isogeny graphs removing heuristic assumptions in quantum money protocols
View Full Abstract
In this paper we study the problem of sampling random supersingular elliptic curves with unknown endomorphism rings. This task has recently attracted significant attention, as the secure instantiation of many isogeny-based cryptographic protocols relies on the ability to sample such "hard" curves. Existing approaches, however, achieve this only in a trusted-setup setting. We present the first provable quantum polynomial-time algorithm that samples a random hard supersingular elliptic curve with high probability. Our algorithm runs heuristically in $\tilde{O}\!\left(\log^{4}p\right)$ quantum gate complexity and in $\tilde{O}\!\left(\log^{13} p\right)$ under the Generalized Riemann Hypothesis. As a consequence, our algorithm gives a secure instantiation of the CGL hash function and other cryptographic primitives. Our analysis relies on a new spectral delocalization result for supersingular $\ell$-isogeny graphs: we prove the Quantum Unique Ergodicity conjecture, and we further provide numerical evidence for complete eigenvector delocalization; this theoretical result may be of independent interest. Along the way, we prove a stronger $\varepsilon$-separation property for eigenvalues of isogeny graphs than that predicted in the quantum money protocol of Kane, Sharif, and Silverberg, thereby removing a key heuristic assumption in their construction.
Real-time detection of correlated quasiparticle tunneling events in a multi-qubit superconducting device
This paper develops a method to detect quasiparticle tunneling events in real-time across multiple superconducting qubits, revealing that these error-causing events occur individually at low rates but in correlated bursts across devices about once per minute.
Key Contributions
- Real-time detection method for quasiparticle tunneling events with microsecond temporal resolution
- Discovery of correlated burst episodes across multiple qubits that increase error rates by 1000-fold
- Characterization of burst lifetimes and spatial correlation structure in superconducting quantum devices
View Full Abstract
Quasiparticle tunneling events are a source of decoherence and correlated errors in superconducting circuits. Understanding and ultimately mitigating these errors calls for real-time detection of quasiparticle tunneling events on individual devices. In this work, we simultaneously detect quasiparticle tunneling events in two co-housed, charge-sensitive transmons coupled to a common waveguide. We measure background quasiparticle tunneling rates at the single-hertz level, with temporal resolution of tens of microseconds. Using time-tagged coincidence analysis, we show that individual events are uncorrelated across devices, whereas burst episodes occur about once per minute and are largely correlated. These bursts have a characteristic lifetime of 7 ms and induce a thousand-fold increase in the quasiparticle tunneling rate across both devices. In addition, we identify a rarer subset of bursts which are accompanied by a shift in the offset charge, at approximately one event per hour. Our results establish a practical and extensible method to identify quasiparticle bursts in superconducting circuits, as well as their correlations and spatial structure, advancing routes to suppress correlated errors in superconducting quantum processors.
Numerical Error Extraction by Quantum Measurement Algorithm
This paper introduces NEEQMA, a quantum algorithm that uses quantum measurements to determine the exact convergence constants for iterative quantum gate implementations, allowing for better optimization of quantum algorithms like Quantum Signal Processing and Hamiltonian Simulation.
Key Contributions
- Introduces NEEQMA protocol for extracting convergence constants from quantum gate approximations
- Demonstrates application to Quantum Signal Processing and Hamiltonian Simulation optimization
- Provides method to minimize convergence parameters while maintaining required accuracy
View Full Abstract
Important quantum algorithm routines allow the implementation of specific quantum operations (a.k.a. gates) by combining basic quantum circuits with an iterative structure. In this structure, the number of repetitions of the basic circuit pattern is associated to convergence parameters. This iterative structure behaves similarly to function approximation by series expansion: the higher the truncation order, the better the target gate (i.e. operation) approximation. The asymptotic convergence of the gate error with respect to the number of basic pattern repetitions is known. It is referred to as the query complexity. The underlying convergence law is bounded, but not in an explicit fashion. Upper bounds are generally too pessimistic to be useful in practice. The actual convergence law contains constants that depend on the joint properties of the matrix encoded by the query and the initial state vector, which are difficult to compute classically. This paper proposes a strategy to study this convergence law and extract the associated constants from the gate (operation) approximation at different accuracy (convergence parameter) constructed directly on a Quantum Processing Unit (QPU). This protocol is called Numerical Error Extraction by Quantum Measurement Algorithm (NEEQMA). NEEQMA concepts are tested on specific instances of Quantum Signal Processing (QSP) and Hamiltonian Simulation by Trotterization. Knowing the exact convergence constants allows for selecting the smallest convergence parameters that enable reaching the required gate approximation accuracy, hence satisfying the quantum algorithm's requirements.
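As a toy version of the idea behind NEEQMA, the hidden constant in a known convergence law can be estimated from gate approximations computed at several accuracies. The sketch below is our classical illustration, not the paper's protocol (which works from QPU measurements): for first-order Trotterization of the single-qubit Hamiltonian H = X + Z, the error scales as roughly C/n in the step count n, so the product error × n estimates the constant C that worst-case upper bounds hide.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def exact(t):
    # (X + Z)/sqrt(2) squares to the identity, giving a closed form
    # for exp(-i (X + Z) t) without any matrix exponential routine.
    a = (X + Z) / np.sqrt(2)
    return np.cos(np.sqrt(2) * t) * I2 - 1j * np.sin(np.sqrt(2) * t) * a

def rot(P, theta):
    # exp(-i theta P) for any P with P^2 = I (here Pauli X or Z).
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

def trotter(t, n):
    # First-order Trotter step repeated n times.
    step = rot(X, t / n) @ rot(Z, t / n)
    return np.linalg.matrix_power(step, n)

t = 1.0
for n in [4, 8, 16, 32]:
    err = np.linalg.norm(trotter(t, n) - exact(t), 2)
    # err * n should level off at the empirical convergence constant C.
    print(n, err, err * n)
```

Doubling n roughly halves the error while err × n stabilizes, which is exactly the kind of empirically extracted constant that lets one pick the smallest n meeting a target accuracy.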
Weight-four parity checks with silicon spin qubits
This paper demonstrates a silicon spin qubit device that can transport qubits along a 'shuttling bus' to four different interaction locations, enabling dynamic connectivity for quantum operations. The researchers achieved universal control of a five-qubit processor and demonstrated quantum error correction building blocks, including a genuinely entangled five-qubit GHZ state, the largest such state yet created with gate-defined semiconductor spins.
Key Contributions
- First demonstration of coherent spin shuttling with four isolated interaction locations for quantum operations
- Achievement of universal control and surface-code stabilizer operations in a five-qubit silicon spin processor
- Creation of the largest five-qubit GHZ entangled state with gate-defined semiconductor spins
- Development of protocols for modular calibration and operation of sparse spin qubit arrays using quantum non-demolition measurements
View Full Abstract
Recent advances in coherent spin shuttling have made sparse semiconductor spin qubit arrays an appealing solid-state platform to realize quantum processors. The dynamic and long-range connectivity enabled by shuttling is also essential for many quantum error-correction (QEC) schemes. Here, we demonstrate a silicon spin-qubit device that comprises a shuttling bus for coherently transporting qubits that can interact at four isolated locations we call bus stops. We dynamically populate the array and tune all single- and two-qubit operations using shuttling and quantum non-demolition (QND) spin measurements, without access to charge sensing in most of the device. We achieve universal control of the effective five-qubit processor and select the connectivity required to form a surface-code stabilizer plaquette that supports X- and Z-type parity checks up to weight-four. We use the parity checks to generate multi-qubit entanglement between all qubit combinations in the array and report the genuine entanglement of a five-qubit Greenberger-Horne-Zeilinger (GHZ) state, constituting the largest such state ever constructed with gate-defined semiconductor spins. This work opens immediate opportunities to pursue QEC experiments with spin qubits, and the protocols developed here lay the groundwork for the modular calibration and operation of sparse spin qubit arrays.
TopoLS: Lattice Surgery Compilation via Topological Program Transformations
This paper presents TopoLS, a compiler that converts quantum circuits into lattice surgery instructions for fault-tolerant quantum computing using surface codes. The compiler uses topological optimizations and Monte Carlo tree search to reduce the space-time volume required for quantum computations by 33% compared to existing methods.
Key Contributions
- Development of TopoLS compiler that combines ZX-diagram optimizations with Monte Carlo tree search for lattice surgery compilation
- Achievement of 33% reduction in space-time volume overhead while maintaining linear compilation time scaling
View Full Abstract
Fault-tolerant quantum computing with surface codes can be achieved by compiling logical circuits into lattice-surgery instructions. To minimize space-time volume, we present TopoLS, a topological compiler that combines ZX-diagram optimizations with Monte Carlo tree search guided by different operation placements and topology-aware circuit partitioning. Our approach enables scalable exploration of lattice surgery structures and consistently reduces resource overhead. Evaluations of various benchmark algorithms across multiple architectures show that TopoLS achieves an average 33% reduction in space-time volume over prior heuristic-based compilers, while maintaining linear compilation time scaling. Compared to the SAT-solver-based compiler, which provides optimal results only for small circuits before becoming intractable, TopoLS offers an effective and scalable solution for lattice-surgery compilation.
Fast magic state preparation by gauging higher-form transversal gates in parallel
This paper presents a new method for rapidly preparing magic states needed for universal quantum computation by using a code surgery technique that measures multiple transversal logic gates in parallel. The approach achieves constant time overhead and linear qubit overhead while maintaining fault-tolerance properties.
Key Contributions
- Fast parallel code surgery procedure for fault-tolerant measurement of transversal logic gates
- Constant time overhead and linear qubit overhead protocol for magic state preparation
- Framework connecting higher-form transversal gates to efficient magic state preparation
View Full Abstract
Magic states are a foundational resource for universal quantum computation. To survive in a realistic noisy environment, magic states must be prepared fault-tolerantly and protected by a quantum error-correcting code. The recent discovery of highly efficient quantum low-density parity-check codes, together with efficient logic gates, lays the groundwork for low-overhead fault-tolerant quantum computation. This motivates the search for fast and parallel protocols for logical magic state preparation to enable universal quantum computation. Here, we introduce a fast code surgery procedure that performs a fault-tolerant measurement of many transversal logic gates in parallel. This is achieved by performing a generalized gauging measurement on a quantum code that supports a higher-form transversal gate. The time overhead of our procedure is constant, and the qubit overhead is linear. The procedure inherits fault-tolerance properties from the base code and the structure of the higher-form transversal gate. When applied to codes that support higher-form Clifford gates our procedure achieves fast and fault-tolerant preparation of many magic states in parallel. This motivates the search for good quantum low-density parity-check codes that support higher-form Clifford gates.
Orders of magnitude runtime reduction in quantum error mitigation
This paper introduces a new quantum error mitigation framework that dramatically reduces the computational overhead required to infer noise-free quantum circuit results. The approach combines virtual noise scaling with a layered architecture to achieve orders of magnitude faster processing compared to conventional methods.
Key Contributions
- Orders of magnitude reduction in quantum error mitigation runtime overhead
- Novel framework combining virtual noise scaling with layered mitigation architecture
- Compatibility with dynamic circuits and integration with quantum error correction schemes
- Extension to agnostic noise amplification-based mitigation of mid-circuit measurements
View Full Abstract
Quantum error mitigation (QEM) infers noiseless expectation values by combining outcomes from intentionally modified, noisy variants of a target quantum circuit. Unlike quantum error correction, QEM requires no additional hardware resources and is therefore routinely employed in experiments on contemporary quantum processors. A central limitation of QEM is its substantial sampling overhead, which necessitates long execution times where device noise may drift, potentially compromising the reliability of standard mitigation protocols. QEM strategies based on agnostic noise amplification (ANA) are intrinsically resilient to such noise variations, but their sampling cost remains a major practical bottleneck. Here we introduce a mitigation framework that combines virtual noise scaling with a layered mitigation architecture, yielding orders of magnitude reduction in runtime overhead compared to conventional zero-noise extrapolation post-processing. The proposed approach is compatible with dynamic circuits and can be seamlessly integrated with error detection and quantum error correction schemes. In addition, it naturally extends to ANA-based mitigation of mid-circuit measurements and preparation errors. We validate our post-processing approach by applying it to previously reported experimental data, where we observe a substantial improvement in mitigation efficiency and accuracy.
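The baseline the paper improves upon, zero-noise extrapolation, is easy to sketch. In the minimal example below the scale factors and expectation values are made-up numbers for illustration; the paper's contribution is slashing the sampling overhead of this style of post-processing, not the fit itself.

```python
import numpy as np

# Hypothetical measured expectation values of some observable at several
# noise scale factors (1.0 = native device noise); illustrative numbers only.
scale_factors = np.array([1.0, 1.5, 2.0, 3.0])
expectations = np.array([0.78, 0.70, 0.63, 0.51])

# Fit a low-degree polynomial in the scale factor and extrapolate
# to the zero-noise limit (scale factor 0).
coeffs = np.polyfit(scale_factors, expectations, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"Zero-noise extrapolated value: {zero_noise_estimate:.3f}")
```

Each point in such a fit costs many circuit shots, which is why the sampling overhead, and its sensitivity to noise drift over long runtimes, is the bottleneck the layered framework targets.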
Quantum $(r,δ)$-Locally Recoverable BCH and Homothetic-BCH Codes
This paper develops quantum error-correcting codes that can efficiently recover from multiple failures in distributed storage systems by constructing quantum locally recoverable codes from BCH and homothetic-BCH codes, achieving optimal performance bounds.
Key Contributions
- Construction of quantum (r,δ)-locally recoverable codes from BCH and homothetic-BCH codes
- Development of pure quantum (r,δ)-LRCs that achieve optimal Singleton-like bounds
View Full Abstract
Quantum $(r,δ)$-locally recoverable codes ($(r,δ)$-LRCs) are the quantum version of classical $(r,δ)$-LRCs designed to recover multiple failures in large-scale distributed and cloud storage systems. A quantum $(r,δ)$-LRC, $Q(C)$, can be constructed from an $(r,δ)$-LRC, $C$, which is Euclidean or Hermitian dual-containing. This article is devoted to studying how to get quantum $(r,δ)$-LRCs from BCH and homothetic-BCH codes. As a consequence, we give pure quantum $(r,δ)$-LRCs which are optimal for the Singleton-like bound.
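The classical locality notion underlying the quantum construction can be seen in miniature. The toy below is our illustration (unrelated to the paper's BCH codes): with (r, δ) = (2, 2), each repair group holds r = 2 data bits plus one local parity over GF(2), so any single erased data bit is rebuilt from the remaining members of its group alone.

```python
# Toy classical code with (r, delta) = (2, 2) locality: each group of
# r = 2 data bits gets one local parity, so one erasure per group is
# repairable locally (group length r + delta - 1 = 3, local distance 2).
data = [1, 0, 1, 1]
groups = [(0, 1), (2, 3)]                      # data indices of each repair group
parities = [data[i] ^ data[j] for i, j in groups]

def recover(erased):
    """Rebuild one erased data bit from its local group only."""
    for g, (i, j) in enumerate(groups):
        if erased == i:
            return data[j] ^ parities[g]
        if erased == j:
            return data[i] ^ parities[g]

print([recover(k) for k in range(4)])  # [1, 0, 1, 1] matches the data
```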
Structural Conditions for Native CCZ Magic-State Fountains in qLDPC Codes
This paper identifies theoretical conditions under which quantum error-correcting codes can efficiently prepare multiple non-Clifford quantum gates (CCZ gates) simultaneously in constant time, which is crucial for fault-tolerant quantum computing. The authors develop a mathematical framework using 'magic-friendly triples' to determine when quantum low-density parity-check codes can support these efficient magic-state preparation protocols.
Key Contributions
- Development of algebraic conditions (magic-friendly triples) for identifying when CSS qLDPC codes can support constant-depth CCZ magic-state fountains
- Proof that codes with sufficient magic-friendly triples can implement many logical CCZ gates in parallel while preserving error correction properties
- Reduction of the magic-state fountain existence problem to a concrete combinatorial counting problem for asymptotically good qLDPC families
View Full Abstract
Quantum low-density parity-check (qLDPC) codes promise constant-rate, linear-distance families with bounded-weight checks, and recent work has realized transversal or constant-depth non-Clifford gates on various (often non-LDPC) codes. However, no explicit \emph{qubit} qLDPC family is known that simultaneously has constant rate, linear distance, bounded stabilizer weight, and a native \emph{magic-state fountain} that prepares many non-Clifford resource states in constant depth. We take a structural approach and identify coding-theoretic conditions under which a CSS qLDPC family necessarily supports a constant-depth $CCZ$ magic-state fountain. The key ingredients are: (i) an algebraic notion of \emph{magic-friendly triples} of $X$-type logical operators, defined by pairwise orthogonality and a triple-overlap form controlling diagonal $CCZ$ phases, and (ii) a 3-uniform hypergraph model of physical $CCZ$ circuits combined with a packing lemma that turns large collections of such triples with bounded overlaps into bounded-degree hypergraphs. Our main theorem shows that if a CSS code family on $n$ qubits admits $\Omega(n^{1+\gamma})$ magic-friendly triples whose supports have bounded per-qubit participation, then there exists a constant-depth circuit of physical $CCZ$ gates implementing $\Omega(n^\gamma)$ logical $CCZ$ gates in parallel while preserving distance up to a constant factor. For asymptotically good qLDPC families such as quantum Tanner codes, this reduces the existence of a native $CCZ$ magic-state fountain to a concrete combinatorial problem about counting and distributing magic-friendly triples in the logical $X$ space.
Manjushri: A Tool for Equivalence Checking of Quantum Circuits
This paper introduces Manjushri, a new automated tool for checking whether two quantum circuits produce equivalent results. The tool uses a novel approach with local projections and weighted binary decision diagrams to efficiently verify quantum circuit equivalence, showing significant speed improvements over existing methods for circuits up to depth 30.
Key Contributions
- Introduction of Manjushri framework using local projections and WBDDs for quantum circuit equivalence checking
- Comprehensive experimental comparison showing 8-10x speed improvements over existing ECMC tool for circuits up to depth 30
- Demonstration of scalability to large quantum circuits with up to 128 qubits
View Full Abstract
Verifying whether two quantum circuits are equivalent is a central challenge in the compilation and optimization of quantum programs. We introduce \textsc{Manjushri}, a new automated framework for scalable quantum-circuit equivalence checking. \textsc{Manjushri} uses local projections as discriminative circuit fingerprints, implemented with weighted binary decision diagrams (WBDDs), yielding a compact and efficient symbolic representation of quantum behavior. We present an extensive experimental evaluation that, for random 1D Clifford+$T$ circuits, explores the trade-off between \textsc{Manjushri} and \textsc{ECMC}, a tool for equivalence checking based on a much different approach. \textsc{Manjushri} is much faster up to depth 30 (with the crossover point varying from 39--49, depending on the number of qubits and whether the input circuits are equivalent or inequivalent): when inputs are equivalent, \textsc{Manjushri} is about 10$\times$ faster (or more); when inputs are inequivalent, \textsc{Manjushri} is about 8$\times$ faster (or more). For both kinds of equivalence-checking outcomes, \textsc{ECMC}'s success rate out to depth 50 is impressive on 32- and 64-qubit circuits: on such circuits, \textsc{ECMC} is almost uniformly successful. However, \textsc{ECMC} struggled on 128-qubit circuits for some depths. \textsc{Manjushri} is almost uniformly successful out to about depth 38, before tailing off to about 75\% at depth 50 (falling to 0\% at depth 48 for 128-qubit circuits that are equivalent). These results establish that \textsc{Manjushri} is a practical and scalable solution for large-scale quantum-circuit verification, and would be the preferred choice unless clients need to check equivalence of circuits of depth $>$38.
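For intuition, the brute-force baseline that symbolic tools like Manjushri and ECMC improve upon is direct comparison of circuit unitaries up to a global phase, feasible only for tiny circuits since the matrices grow exponentially with qubit count. A single-qubit numpy sketch (our illustration; Manjushri's WBDD-based fingerprints avoid ever building full matrices):

```python
import numpy as np

# Standard single-qubit gates as 2x2 matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
S = T @ T

def circuit_unitary(gates):
    """Multiply a list of gates in application order (first gate first)."""
    u = np.eye(2, dtype=complex)
    for g in gates:
        u = g @ u
    return u

def equivalent(u, v, tol=1e-9):
    """Check U == V up to a global phase: U V^dagger must be phase * I."""
    w = u @ v.conj().T
    phase = w[0, 0]
    return abs(abs(phase) - 1) < tol and np.allclose(w, phase * np.eye(2), atol=tol)

# H T T H equals H S H by definition of S, but H T H differs from T H T.
print(equivalent(circuit_unitary([H, T, T, H]), circuit_unitary([H, S, H])))  # True
print(equivalent(circuit_unitary([H, T, H]), circuit_unitary([T, H, T])))     # False
```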
Quantum bootstrap product codes
This paper introduces a new method called 'quantum bootstrap product' for constructing quantum error-correcting codes that goes beyond traditional approaches by solving consistency equations rather than just combining existing codes. The method unifies different types of important quantum codes and can generate self-correcting codes that surpass previous theoretical limits.
Key Contributions
- Introduction of quantum bootstrap product framework that extends beyond homological paradigm for constructing CSS codes
- Unification of diverse code families including hypergraph product codes and fracton codes under single framework
- Development of fork complexes structure that elucidates topological structures of fracton codes
- Generation of self-correcting quantum codes that surpass code-rate upper bounds of existing methods
View Full Abstract
Product constructions constitute a powerful method for generating quantum CSS codes, yielding celebrated examples such as toric codes and asymptotically good low-density parity check (LDPC) codes. Since a CSS code is fully described by a chain complex, existing product formalisms are predominantly homological, defined via the tensor product of the underlying chain complexes of input codes, thereby establishing a natural connection between quantum codes and topology. In this Letter, we introduce the \textit{quantum bootstrap product} (QBP), an approach that extends beyond this standard homological paradigm. Specifically, a QBP code is determined by solving a consistency condition termed the ``bootstrap equation''. We find that the QBP paradigm unifies a wide range of important codes, including general hypergraph product (HGP) codes of arbitrary dimensions and fracton codes typically represented by the X-cube code. Crucially, the solutions to the bootstrap equation yield chain complexes where the chain groups and associated boundary maps consist of multiple components. We term such structures \textit{fork complexes}. This structure elucidates the underlying topological structures of fracton codes, akin to foliated fracton order theories. Beyond conceptual insights, we demonstrate that the QBP paradigm can generate self-correcting quantum codes from input codes with constant energy barriers and surpass the code-rate upper bounds inherent to HGP codes. Our work thus substantially extends the scope of quantum product codes and provides a versatile framework for designing fault-tolerant quantum memories.
Efficient learning of logical noise from syndrome data
This paper develops methods to efficiently characterize logical errors in fault-tolerant quantum computers by analyzing syndrome measurement data from error correction, rather than requiring many direct measurements of rare logical errors. The authors extend previous work to realistic circuit-level noise and demonstrate orders-of-magnitude improvements in sample efficiency.
Key Contributions
- Extended syndrome-based logical error characterization from phenomenological to realistic circuit-level noise models
- Developed efficient estimators with provable sample complexity guarantees using Fourier analysis and compressed sensing
- Demonstrated orders-of-magnitude sample-complexity savings over direct logical benchmarking on syndrome-extraction circuits
View Full Abstract
Characterizing errors in quantum circuits is essential for device calibration, yet detecting rare error events requires a large number of samples. This challenge is particularly severe in calibrating fault-tolerant, error-corrected circuits, where logical error probabilities are suppressed to higher order relative to physical noise and are therefore difficult to calibrate through direct logical measurements. Recently, Wagner et al. [PRL 130, 200601 (2023)] showed that, for phenomenological Pauli noise models, the logical channel can instead be inferred from syndrome measurement data generated during error correction. Here, we extend this framework to realistic circuit-level noise models. From a unified code-theoretic perspective and spacetime code formalism, we derive necessary and sufficient conditions for learning the logical channel from syndrome data alone and explicitly characterize the learnable degrees of freedom of circuit-level Pauli faults. Using Fourier analysis and compressed sensing, we develop efficient estimators with provable guarantees on sample complexity and computational cost. We further present an end-to-end protocol and demonstrate its performance on several syndrome-extraction circuits, achieving orders-of-magnitude sample-complexity savings over direct logical benchmarking. Our results establish syndrome-based learning as a practical approach to characterizing the logical channel in fault-tolerant quantum devices.
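The core premise, that noise parameters leave a recoverable footprint in syndrome statistics alone, can be seen in a deliberately minimal classical toy (ours; the paper handles full circuit-level Pauli noise with Fourier and compressed-sensing machinery). A single parity check over two bits, each flipping independently with probability p, fires with probability 2p(1 − p), which can be inverted to estimate p without ever observing the data.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.05          # hypothetical physical bit-flip probability
shots = 200_000

# Two data bits, one parity check: the syndrome fires when exactly one
# of the two bits has flipped, i.e. with probability 2p(1 - p).
flips = rng.random((shots, 2)) < p_true
syndrome = flips[:, 0] ^ flips[:, 1]

# Invert q = 2p(1 - p) to estimate p from syndrome data alone,
# without looking at which bit (or the encoded data) was affected.
q_hat = syndrome.mean()
p_hat = (1 - np.sqrt(1 - 2 * q_hat)) / 2
print(f"estimated p = {p_hat:.4f} (true {p_true})")
```

Even this toy shows the caveat the paper formalizes: only some combinations of error rates are identifiable from syndromes, which is why characterizing the learnable degrees of freedom matters.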
A Bravyi-König theorem for Floquet codes generated by locally conjugate instantaneous stabiliser groups
This paper extends the Bravyi-König theorem, which limits the types of logical operations possible in topological quantum error correcting codes, to a new class called Floquet codes where the codespace is dynamically generated through time-dependent measurements. The authors prove that similar fundamental limitations apply to these time-dependent codes and introduce a broader class of operations that work within these constraints.
Key Contributions
- Extension of Bravyi-König theorem to Floquet codes with locally conjugate stabilizer groups
- Introduction and characterization of generalized unitaries for Floquet codes that preserve logical operations without preserving codespace at each time step
View Full Abstract
The Bravyi-König (BK) theorem is an important no-go theorem for the dynamics of topological stabiliser quantum error correcting codes. It states that any logical operation on a $D$-dimensional topological stabiliser code that can be implemented by a short-depth circuit acts on the codespace as an element of the $D$-th level of the Clifford hierarchy. In recent years, a new type of quantum error correcting codes based on Pauli stabilisers, dubbed Floquet codes, has been introduced. In Floquet codes, syndrome measurements are arranged such that they dynamically generate a codespace at each time step. Here, we show that the BK theorem holds for a definition of Floquet codes based on locally conjugate stabiliser groups. Moreover, we introduce and define a class of generalised unitaries in Floquet codes that need not preserve the codespace at each time step, but that combined with the measurements constitute a valid logical operation. We derive a canonical form of these generalised unitaries and show that the BK theorem holds for them too.
Error-detectable Universal Control for High-Gain Bosonic Quantum Error Correction
This paper introduces error-detectable universal control for bosonic quantum error correction, in which trajectories containing ancilla relaxation events are detected and discarded to suppress operational errors. The authors achieve universal gates with 99.6% fidelity and an 8.33× QEC gain beyond break-even for binomial codes, projecting that gains beyond 10× are achievable.
Key Contributions
- Error-detectable universal control method that suppresses ancilla-induced operational errors by detecting and discarding trajectories with ancilla relaxation events
- Demonstration of 8.33× QEC gains beyond break-even with universal gates achieving 99.6% fidelity for binomial codes
View Full Abstract
Protecting quantum information through quantum error correction (QEC) is a cornerstone of future fault-tolerant quantum computation. However, current QEC-protected logical qubits have only achieved coherence times about twice those of their best physical constituents. Here, we show that the primary barrier to higher QEC gains is ancilla-induced operational errors rather than intrinsic cavity coherence. To overcome this bottleneck, we introduce error-detectable universal control of bosonic modes, wherein ancilla relaxation events are detected and the corresponding trajectories discarded, thereby suppressing operational errors on logical qubits. For binomial codes, we demonstrate universal gates with fidelities exceeding $99.6\%$ and QEC gains of $8.33\times$ beyond break-even. Our results establish that gains beyond $10\times$ are achievable with state-of-the-art devices, establishing a path toward fault-tolerant bosonic quantum computing.
Hierarchical quantum decoders
This paper introduces a new family of quantum error correction decoders that use mathematical optimization techniques to provide a controllable trade-off between decoding speed and accuracy. The approach uses the Lasserre Sum-of-Squares hierarchy to create multiple levels of decoders, where lower levels are faster but less accurate, while higher levels are slower but approach optimal performance.
Key Contributions
- Development of hierarchical quantum decoders using Sum-of-Squares optimization with tunable speed-accuracy trade-offs
- Demonstration that low levels of the hierarchy significantly outperform standard Linear Programming relaxations on surface codes and color codes
View Full Abstract
Decoders are a critical component of fault-tolerant quantum computing. They must identify errors based on syndrome measurements to correct quantum states. While finding the optimal correction is NP-hard and thus extremely difficult, approximate decoders with faster runtime often rely on uncontrolled heuristics. In this work, we propose a family of hierarchical quantum decoders with a tunable trade-off between speed and accuracy while retaining guarantees of optimality. We use the Lasserre Sum-of-Squares (SOS) hierarchy from optimization theory to relax the decoding problem. This approach creates a sequence of Semidefinite Programs (SDPs). Lower levels of the hierarchy are faster but approximate, while higher levels are slower but more accurate. We demonstrate that even low levels of this hierarchy significantly outperform standard Linear Programming relaxations. Our results on rotated surface codes and honeycomb color codes show that the SOS decoder approaches the performance of exact decoding. We find that Levels 2 and 3 of our hierarchy perform nearly as well as the exact solver. We analyze the convergence using rank-loop criteria and compare the method against other relaxation schemes. This work bridges the gap between fast heuristics and rigorous optimal decoding.
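The optimization problem being relaxed can be stated concretely: find the lowest-weight error consistent with an observed syndrome. The brute-force sketch below (our toy on a classical repetition code, not the paper's SDP construction) solves it exactly in exponential time, which is the cost that LP and SOS relaxations trade against accuracy.

```python
import itertools
import numpy as np

# Parity-check matrix of the classical 5-bit repetition code, a stand-in
# for a real quantum code: the decoding problem has the same shape.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])

def exact_decode(H, syndrome):
    """Minimum-weight error consistent with the syndrome (brute force).

    This NP-hard search over all 2^n error patterns is what
    relaxation-based decoders approximate at far lower cost.
    """
    n = H.shape[1]
    best = None
    for bits in itertools.product([0, 1], repeat=n):
        e = np.array(bits)
        if np.array_equal(H @ e % 2, syndrome):
            if best is None or e.sum() < best.sum():
                best = e
    return best

error = np.array([0, 0, 1, 0, 0])
syndrome = H @ error % 2
print(exact_decode(H, syndrome))  # recovers [0 0 1 0 0]
```

The hierarchy's promise is to interpolate between this exact (but intractable) search and fast heuristics, with the level of the SOS relaxation as the dial.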
Reinforcement Learning for Adaptive Composition of Quantum Circuit Optimisation Passes
This paper develops a reinforcement learning approach to automatically optimize the order of quantum circuit optimization passes, achieving better two-qubit gate reduction than default sequences. The RL agent learns to compose circuit optimization sequences tailored to individual quantum circuits rather than using general-purpose optimization sequences.
Key Contributions
- Development of reinforcement learning framework for adaptive quantum circuit optimization pass composition
- Demonstration of 57.7% mean two-qubit gate reduction compared to 41.8% for best default pass sequences
View Full Abstract
Many quantum software development kits provide a suite of circuit optimisation passes. These passes have been highly optimised and tested in isolation. However, the order in which they are applied is left to the user, or else defined in general-purpose default pass sequences. While general-purpose sequences miss opportunities for optimisation which are particular to individual circuits, designing pass sequences bespoke to particular circuits requires exceptional knowledge about quantum circuit design and optimisation. Here we propose and demonstrate training a reinforcement learning agent to compose optimisation-pass sequences. In particular the agent's action space consists of passes for two-qubit gate count reduction used in default PyTKET pass sequences. For the circuits in our diverse test set, the (mean, median) fraction of two-qubit gates removed by the agent is $(57.7\%, \ 56.7 \%)$, compared to $(41.8 \%, \ 50.0 \%)$ for the next best default pass sequence.
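Why pass ordering matters at all is easy to see with two toy passes (ours, not the PyTKET passes used in the paper): run in one order they eliminate every gate, in the other they leave three behind, which is exactly the kind of circuit-dependent sensitivity an RL agent can learn to exploit.

```python
# A circuit is a list of (gate, angle) pairs; angle is None for H.

def merge_rz(circ):
    """Pass A: fuse adjacent Rz rotations into one."""
    out = []
    for gate, angle in circ:
        if out and gate == "Rz" and out[-1][0] == "Rz":
            out[-1] = ("Rz", out[-1][1] + angle)
        else:
            out.append((gate, angle))
    return out

def cleanup(circ):
    """Pass B: drop zero-angle rotations, then cancel adjacent H pairs."""
    circ = [(g, a) for g, a in circ if not (g == "Rz" and a == 0.0)]
    out = []
    for gate, angle in circ:
        if out and gate == "H" and out[-1][0] == "H":
            out.pop()
        else:
            out.append((gate, angle))
    return out

circ = [("H", None), ("Rz", 0.5), ("Rz", -0.5), ("H", None)]
print(len(cleanup(merge_rz(circ))))   # 0 gates: merging first exposes the cancellations
print(len(merge_rz(cleanup(circ))))   # 3 gates: cleanup first finds nothing to remove
```

Real SDK passes interact in far subtler ways across thousands of gates, which is why a learned, per-circuit ordering can beat any fixed default sequence.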
A biased-erasure cavity qubit with hardware-efficient quantum error detection
This paper demonstrates a new type of quantum bit (qubit) called a biased-erasure qubit that can detect its own errors very efficiently. The researchers encoded quantum information in microwave cavity states and achieved over 99% error detection while maintaining good quantum coherence, representing a significant step toward fault-tolerant quantum computing.
Key Contributions
- Demonstration of hardware-efficient biased-erasure qubit with 265:1 erasure bias ratio
- Achievement of over 99.3% error detection efficiency with sub-1% logical assignment errors
- Establishment of strong error hierarchy with 6x coherence improvement beyond break-even point
- Hardware-efficient platform using single cavity with transmon ancilla for scalable error correction
View Full Abstract
Erasure qubits are beneficial for quantum error correction due to their relaxed threshold requirements. While dual-rail erasure qubits have been demonstrated with a strong error hierarchy in circuit quantum electrodynamics, biased-erasure qubits -- where erasures originate predominantly from one logical basis state -- offer further advantages. Here, we realize a hardware-efficient biased-erasure qubit encoded in the vacuum and two-photon Fock states of a single microwave cavity. The qubit exhibits an erasure bias ratio of over 265. By using a transmon ancilla for logical measurements and mid-circuit erasure detections, we achieve logical state assignment errors below 1% and convert over 99.3% leakage errors into detected erasures. After postselection against erasures, we achieve effective logical relaxation and dephasing rates of $(6.2~\mathrm{ms})^{-1}$ and $(3.1~\mathrm{ms})^{-1}$, respectively, which exceed the erasure error rate by factors of 31 and 15, establishing a strong error hierarchy within the logical subspace. These postselected error rates indicate a coherence gain of about 6.0 beyond the break-even point set by the best physical qubit encoded in the two lowest Fock states in the cavity. Moreover, randomized benchmarking with interleaved erasure detections reveals a residual logical gate error of 0.29%. This work establishes a compact and hardware-efficient platform for biased-erasure qubits, promising concatenations into outer-level stabilizer codes toward fault-tolerant quantum computation.
High-Coherence and High-Frequency Quantum Computing: The Design of a High-Frequency, High-Coherence and Scalable Quantum Computing Architecture
This paper proposes a design for a high-frequency transmon quantum computing architecture operating at 11-13.5 GHz (above the typical 4-6 GHz range), aiming to achieve longer coherence times and better scalability. The design includes an 8-qubit system with potential expansion to 72 qubits using advanced superconducting materials and manufacturing techniques.
Key Contributions
- High-frequency transmon qubit architecture operating beyond 10 GHz
- Scalable design from 8 to 72 qubits with new connection topology
- Integration of advanced superconducting materials for improved coherence times
View Full Abstract
High-coherence, fault-tolerant and scalable quantum computing architectures with unprecedentedly long coherence times, faster gates, low losses and low bit-flip errors may be one of the only ways forward to achieve true quantum advantage. In this context, high-frequency high-coherence (HCQC) qubits with new high-performance topologies could be a significant step towards efficient and high-fidelity quantum computing by facilitating compact size, higher scalability and higher-than-conventional operating temperatures. Although transmon-type qubits are routinely designed and manufactured in the range of a few gigahertz, normally from 4 to 6 GHz (and, at times, up to around 10 GHz), achieving higher-frequency operation poses challenges and entails special design and manufacturing considerations. This report presents the proposal and preliminary design of an 8-qubit transmon architecture (with a possible upgrade to up to 72 qubits on a chip) operating beyond 10 GHz, and presents a new connection topology. The current design spans roughly 11 to 13.5 GHz (with a possible full range of 9-12 GHz at present) around a central optimal operating frequency of 12.0 GHz, with the aim of achieving stable, compact and low-charge-noise operation to the extent allowed by existing fabrication techniques. The goal is to achieve average relaxation times of up to 1.9 ms with average quality factors of up to 2.75 x 10^7 after trials, exploiting recent advances in superconducting junction manufacturing using tantalum and niobium/aluminum/aluminum-oxide tri-layer structures on high-resistivity silicon substrates (carried out elsewhere by other groups and referenced in this report).
Transversal gates for quantum CSS codes
This paper develops methods to compute transversal gates for CSS quantum error-correcting codes, specifically focusing on diagonal gates and their logical actions. The authors provide explicit equations defining these gate groups and apply their approach to monomial codes, extending previous results on several important code families.
Key Contributions
- Development of explicit equations defining transversal gate groups for CSS codes
- Complete characterization of transversal gates for monomial codes including polar codes and triorthogonal codes
View Full Abstract
In this paper, we focus on the problem of computing the set of diagonal transversal gates fixing a CSS code. We determine the logical actions of the gates as well as the groups of transversal gates that induce non-trivial logical gates and logical identities. We explicitly state the set of equations defining these groups, a key advantage and differentiator of our approach. We compute the complete set of transversal stabilizers and transversal gates for any CSS code arising from monomial codes, a family that includes decreasing monomial codes and polar codes. As a consequence, we recover and extend some results in the literature on CSS-T codes, triorthogonal codes, and divisible codes.
In-situ benchmarking of fault-tolerant quantum circuits. I. Clifford circuits
This paper develops methods to benchmark and characterize both physical and logical errors in fault-tolerant quantum circuits using syndrome data collected during circuit execution, rather than requiring separate benchmarking runs. The approach can efficiently estimate error rates and predict logical fidelities even when logical errors are exponentially suppressed.
Key Contributions
- Development of in-situ benchmarking methods for fault-tolerant quantum circuits using syndrome data
- Mapping of fault-tolerant Clifford circuits to subsystem codes using spacetime formalism
- Polynomial-sample estimation scheme that provides exponential advantage over direct fidelity estimation methods
- Necessary and sufficient conditions for learnability of physical and logical noise from syndrome data
View Full Abstract
Benchmarking physical devices and verifying logical algorithms are important tasks for scalable fault-tolerant quantum computing. Numerous protocols exist for benchmarking devices before running actual algorithms. In this work, we show that both physical and logical errors of fault-tolerant circuits can even be characterized in-situ using syndrome data. To achieve this, we map general fault-tolerant Clifford circuits to subsystem codes using the spacetime code formalism and develop a scheme for estimating Pauli noise in Clifford circuits using syndrome data. We give necessary and sufficient conditions for the learnability of physical and logical noise from given syndrome data, and show that we can accurately predict logical fidelities from the same data. Importantly, our approach requires only a polynomial sample size, even when the logical error rate is exponentially suppressed by the code distance, and thus gives an exponential advantage against methods that use only logical data such as direct fidelity estimation. We demonstrate the practical applicability of our methods in various scenarios using synthetic data as well as the experimental data from a recent demonstration of fault-tolerant circuits by Bluvstein et al. [Nature 626, 7997 (2024)]. Our methods provide an efficient, in-situ way of characterizing a fault-tolerant quantum computer to help gate calibration, improve decoding accuracy, and verify logical circuits.
Quantum Memory and Autonomous Computation in Two Dimensions
This paper presents a method for quantum error correction that works passively in two dimensions, without active measurements or classical processing. Using quantum cellular automata with self-correcting properties, the scheme can maintain quantum information indefinitely and enables fault-tolerant universal quantum computation.
Key Contributions
- First scheme for passive quantum error correction in physically realistic two spatial dimensions
- Construction of a self-correcting universal quantum computer using hierarchical quantum cellular automata
- Proof of noise threshold below which logical errors are arbitrarily suppressed with increasing system size
View Full Abstract
Standard approaches to quantum error correction (QEC) require active maintenance using measurements and classical processing. The possibility of passive QEC has so far only been established in an unphysical number of spatial dimensions. In this work, we present a simple method for autonomous QEC in two spatial dimensions, formulated as a quantum cellular automaton with a fixed, local and translation-invariant update rule. The construction uses hierarchical, self-simulating control elements based on the classical schemes from the seminal results of Gács (1986, 1989) together with a measurement-free concatenated code. We analyze the system under a local noise model and prove a noise threshold below which the logical errors are suppressed arbitrarily with increasing system size and the memory lifetime diverges in the thermodynamic limit. The scheme admits a continuous-time implementation as a time-independent, translation-invariant local Lindbladian with engineered dissipative jump operators. Further, the recursive nature of our protocol allows for the fault-tolerant encoding of arbitrary quantum circuits and thus constitutes a self-correcting universal quantum computer.
Computer Science Challenges in Quantum Computing: Early Fault-Tolerance and Beyond
This paper analyzes how quantum computing progress is shifting from hardware-only challenges to computer science challenges, focusing on the systems design, software, and integration needs for early fault-tolerant quantum computers with small numbers of logical qubits.
Key Contributions
- Identifies computer science research challenges for early fault-tolerant quantum computing
- Organizes research priorities around algorithms, error correction, software, and architecture for near-term quantum systems
View Full Abstract
Quantum computing is entering a period in which progress will be shaped as much by advances in computer science as by improvements in hardware. The central thesis of this report is that early fault-tolerant quantum computing shifts many of the primary bottlenecks from device physics alone to computer-science-driven system design, integration, and evaluation. While large-scale, fully fault-tolerant quantum computers remain a long-term objective, near- and medium-term systems will support early fault-tolerant computation with small numbers of logical qubits and tight constraints on error rates, connectivity, latency, and classical control. How effectively such systems can be used will depend on advances across algorithms, error correction, software, and architecture. This report identifies key research challenges for computer scientists and organizes them around these four areas, each centered on a fundamental question.
Theory of low-weight quantum codes
This paper develops theoretical foundations for quantum low-density parity-check (qLDPC) codes with constrained check weights, proving that optimal weight calculation is NP-hard and establishing analytical bounds on code parameters. The authors provide explicit characterizations of low-weight stabilizer codes and develop linear programming methods to determine optimal parameters for practical quantum error correction.
Key Contributions
- Proved that computing optimal code weight for stabilizer codes is NP-hard
- Completely characterized stabilizer codes with weight at most 3, showing distance 2 and rate at most 1/4
- Developed linear programming scheme for exact optimal weight bounds for small systems (n≤9)
- Demonstrated practical application to IBM 127-qubit chip architecture
View Full Abstract
Low check weight is a practically crucial code property for fault-tolerant quantum computing, which underlies the strong interest in quantum low-density parity-check (qLDPC) codes. Here, we explore the theory of weight-constrained stabilizer codes from various foundational perspectives, including the complexity of computing code weight and the explicit boundary of feasible low-weight codes in both theoretical and practical settings. We first prove that calculating the optimal code weight is an $\mathsf{NP}$-hard problem, demonstrating the necessity of establishing bounds for weight that are analytical or efficiently computable. Then we systematically investigate the feasible code parameters with weight constraints. We provide various explicit analytical lower bounds and in particular completely characterize stabilizer codes with weight at most 3, showing that they have distance 2 and code rate at most 1/4. We also develop a powerful linear programming (LP) scheme for setting code parameter bounds with weight constraints, which yields exact optimal weight values for all code parameters with $n\leq 9$. We further refine this scheme from multiple perspectives by considering the generator weight distribution and overlap. In particular, we consider practical architectures and demonstrate how to apply our methods to, e.g., the IBM 127-qubit chip. Our study brings weight as a crucial parameter into coding theory and provides guidance for code design and utility in practical scenarios.
A Folded Surface Code Architecture for 2D Quantum Hardware
This paper presents a new architecture for implementing quantum error correction codes on 2D quantum hardware using qubit shuttling to create effective 3D connectivity. The approach enables faster logical gate operations and more efficient magic state distillation compared to conventional 2D surface code implementations.
Key Contributions
- Native implementation of folded surface codes on 2D hardware using qubit shuttling
- Reduction of logical Clifford gates and CNOT operations from O(d) to constant time
- Order-of-magnitude improvement in spacetime volume for magic-state distillation
- Introduction of virtual-stack layout for efficient multilayer routing on 2D devices
View Full Abstract
Qubit shuttling has become an indispensable ingredient for scaling leading quantum computing platforms, including semiconductor spin, neutral-atom, and trapped-ion qubits, enabling both crosstalk reduction and tighter integration of control hardware. Cai et al. (2023) proposed a scalable architecture that employs short-range shuttling to realize effective three-dimensional connectivity on a strictly two-dimensional device. Building on recent advances in quantum error correction, we show that this architecture enables the native implementation of folded surface codes on 2D hardware, reducing the runtime of all single-qubit logical Clifford gates and logical CNOTs within subsets of qubits from $\mathcal{O}(d)$ in conventional surface code lattice surgery to constant time. We present explicit protocols for these operations and demonstrate that access to a transversal $S$ gate reduces the spacetime volume of 8T-to-CCZ magic-state distillation by more than an order of magnitude compared with standard 2D lattice surgery approaches. Finally, we introduce a new "virtual-stack" layout that more efficiently exploits the quasi-three-dimensional structure of the architecture, enabling efficient multilayer routing on these two-dimensional devices.
Spectral Codes: A Geometric Formalism for Quantum Error Correction
This paper introduces a new mathematical framework for quantum error correction using spectral geometry, where error correcting codes are viewed as low-energy projections of geometric operators. The approach unifies different types of quantum codes under a single geometric language and provides new methods for improving error correction thresholds.
Key Contributions
- Unified geometric framework for quantum error correction using spectral triples
- Demonstration that spectral gaps control error correction performance and can be enhanced
- Recovery of diverse code types (stabilizer, topological, GKP) from single construction
View Full Abstract
We present a new geometric perspective on quantum error correction based on spectral triples in noncommutative geometry. In this approach, quantum error correcting codes are reformulated as low energy spectral projections of Dirac type operators that separate global logical degrees of freedom from local, correctable errors. Locality, code distance, and the Knill Laflamme condition acquire a unified spectral and geometric interpretation in terms of the induced metric and spectrum of the Dirac operator. Within this framework, a wide range of known error correcting codes including classical linear codes, stabilizer codes, GKP type codes, and topological codes are recovered from a single construction. This demonstrates that classical and quantum codes can be organized within a common geometric language. A central advantage of the spectral triple perspective is that the performance of error correction can be directly related to spectral properties. We show that leakage out of the code space is controlled by the spectral gap of the Dirac operator, and that code preserving internal perturbations can systematically increase this gap without altering the encoded logical subspace. This yields a geometric mechanism for enhancing error correction thresholds, which we illustrate explicitly for a stabilizer code. We further interpret Berezin Toeplitz quantization as a mixed spectral code and briefly discuss implications for holographic quantum error correction. Overall, our results suggest that quantum error correction can be viewed as a universal low energy phenomenon governed by spectral geometry.
Quantum Circuit Pre-Synthesis: Learning Local Edits to Reduce $T$-count
This paper presents Q-PreSyn, a reinforcement learning approach that optimizes quantum circuits before synthesis by learning sequences of local edits that preserve circuit equivalence but reduce the number of expensive T gates needed for fault-tolerant quantum computing. The method achieves up to 20% reduction in T-count on circuits with up to 25 qubits without introducing approximation errors.
Key Contributions
- Development of Q-PreSyn reinforcement learning framework for pre-synthesis circuit optimization
- Demonstration of up to 20% T-count reduction on circuits up to 25 qubits without approximation error
View Full Abstract
Compiling quantum circuits into Clifford+$T$ gates is a central task for fault-tolerant quantum computing using stabilizer codes. In the near term, $T$ gates will dominate the cost of fault tolerant implementations, and any reduction in the number of such expensive gates could mean the difference between being able to run a circuit or not. While exact synthesis is exponentially hard in the number of qubits, local synthesis approaches are commonly used to compile large circuits by decomposing them into substructures. However, composing local methods leads to suboptimal compilations in key metrics such as $T$-count or circuit depth, and their performance strongly depends on circuit representation. In this work, we address this challenge by proposing \textsc{Q-PreSyn}, a strategy that, given a set of local edits preserving circuit equivalence, uses a RL agent to identify effective sequences of such actions and thereby obtain circuit representations that yield a reduced $T$-count upon synthesis. Experimental results of our proposed strategy, applied on top of well-known synthesis algorithms, show up to a $20\%$ reduction in $T$-count on circuits with up to 25 qubits, without introducing any additional approximation error prior to synthesis.
Efficient Application of Tensor Network Operators to Tensor Network States
This paper introduces a new algorithm called Cholesky-based compression (CBC) that efficiently applies tree tensor network operators to tree tensor network states, achieving significant runtime improvements over existing methods. The authors demonstrate their method on quantum circuit simulation tasks and show that complex tree structures can outperform linear structures with lower errors.
Key Contributions
- Development of Cholesky-based compression (CBC) algorithm for efficient tensor network operator application with order-of-magnitude runtime improvements
- Demonstration that complex tree tensor network structures can outperform linear structures in quantum circuit simulation with lower computational errors
View Full Abstract
The performance of tensor network methods has seen constant improvements over the last few years. We add to this effort by introducing a new algorithm, inspired by the density matrix method and the Cholesky decomposition, that efficiently applies tree tensor network operators to tree tensor network states. This application procedure is a common subroutine in tensor network methods. We explicitly include the special case of tensor train structures and demonstrate how to extend methods commonly used in this context to general tree structures. We compare our newly developed method with the existing ones in a benchmark scenario with random tensor network states and operators. We find our Cholesky-based compression (CBC) performs equivalently to the current state-of-the-art method, while outperforming most established methods by at least an order of magnitude in runtime. We then apply our knowledge to perform circuit simulation of tree-like circuits, in order to test our method in a more realistic scenario. Here, we find that more complex tree structures can outperform simple linear structures and achieve lower errors than the simple structures allow. Additionally, our CBC still performs among the most successful methods, showing less dependence on the different bond dimensions of the operator.
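The density-matrix idea that inspires CBC can be illustrated on the simplest possible case. The sketch below is a generic density-matrix truncation of one bond, not the paper's tree-tensor-network algorithm; the tensor sizes and truncated bond dimension are arbitrary choices for illustration.

```python
import numpy as np

# Toy density-matrix truncation of a single bond (illustrative only).
rng = np.random.default_rng(0)
theta = rng.standard_normal((8, 8))      # two-site block: left x right indices

rho = theta @ theta.T                    # reduced density matrix of the left part
evals, evecs = np.linalg.eigh(rho)       # eigenvalues in ascending order
order = np.argsort(evals)[::-1]

chi = 4                                  # truncated bond dimension
U = evecs[:, order[:chi]]                # isometry onto the dominant eigenspace
theta_trunc = U @ (U.T @ theta)          # project the block onto the kept space

# The squared Frobenius truncation error equals the discarded eigenvalue weight.
err2 = np.linalg.norm(theta - theta_trunc) ** 2
discarded = evals[order[chi:]].sum()
assert abs(err2 - discarded) < 1e-8
```

The useful property shown by the final assertion is that the discarded density-matrix eigenvalues directly bound the truncation error, which is what makes eigenvalue-based (and, in the paper, Cholesky-based) compression controllable.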
DynQ: A Dynamic Topology-Agnostic Quantum Virtual Machine via Quality-Weighted Community Detection
This paper presents DynQ, a quantum virtual machine that enables multiple users to share quantum hardware by dynamically partitioning quantum processors into execution regions based on real-time calibration data and device quality, rather than using fixed geometric divisions.
Key Contributions
- First dynamic, topology-agnostic quantum virtual machine using quality-weighted community detection
- Enables quantum hardware virtualization and resource sharing while maintaining high fidelity execution
- Demonstrates resilience to hardware defects and calibration drift with up to 19.1% higher fidelity than existing approaches
View Full Abstract
Quantum cloud platforms remain fundamentally non-virtualised: despite rapid hardware scaling, each user program still monopolises an entire quantum processor, preventing resource sharing, economic scalability, and quality-of-service differentiation. Existing Quantum Virtual Machine (QVM) designs attempt spatial multiplexing through topology-specific or template-based partitioning, but these approaches are brittle under hardware heterogeneity, calibration drift, and transient defects, which dominate real quantum devices. We present DynQ, the first dynamic, topology-agnostic Quantum Virtual Machine that virtualises quantum hardware using quality-weighted community detection. Instead of imposing fixed geometric regions, DynQ models a quantum processor as a weighted graph derived from live calibration data and automatically discovers execution regions that maximise internal gate quality while minimising inter-region coupling. This operationalises the classical virtualisation principle of high cohesion and low coupling in a quantum-native setting, producing execution regions that are connectivity-efficient, noise-aware, and resilient to crosstalk and defects. We evaluate DynQ across five IBM Quantum backends using calibration-derived noise simulation and on two production devices, comparing against state-of-the-art QVM and standard compilation baselines. On hardware with pronounced spatial quality variation, DynQ achieves up to 19.1 percent higher fidelity and 45.1 percent lower output error. When transient hardware defects cause baseline executions to fail completely, DynQ adapts dynamically and achieves over 86 percent fidelity. By transforming calibrated device graphs into adaptive virtual hardware abstractions, DynQ decouples quantum programs from fragile physical layouts and enables reliable, high-utilisation quantum cloud services.
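The "high cohesion, low coupling" principle that DynQ operationalises can be sketched in a few lines. This toy scores every equal-size bipartition of a hypothetical 6-qubit device by internal gate quality minus inter-region coupling; the fidelity numbers are made up, and the real system uses quality-weighted community detection on live calibration data rather than exhaustive search.

```python
from itertools import combinations

# Hypothetical two-qubit gate fidelities for a 6-qubit device (edge weights).
fidelity = {
    (0, 1): 0.99, (1, 2): 0.98, (0, 2): 0.97,   # tightly coupled cluster A
    (3, 4): 0.99, (4, 5): 0.98, (3, 5): 0.97,   # tightly coupled cluster B
    (2, 3): 0.60, (1, 4): 0.55,                 # weak cross-links
}

def score(region):
    """Internal quality minus coupling for a bipartition (region vs rest)."""
    region = set(region)
    internal = sum(w for (a, b), w in fidelity.items()
                   if (a in region) == (b in region))
    cut = sum(w for (a, b), w in fidelity.items()
              if (a in region) != (b in region))
    return internal - cut

# Exhaustively pick the best 3-qubit execution region.
best = max(combinations(range(6), 3), key=score)
assert set(best) in ({0, 1, 2}, {3, 4, 5})
```

The score correctly recovers the two natural clusters; community detection replaces the exhaustive `max` so the same objective scales to real device sizes.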
Reinforcement Learning for Enhanced Advanced QEC Architecture Decoding
This paper develops reinforcement learning techniques to improve the decoding of quantum error correction codes, particularly for advanced architectures beyond surface codes. The approach uses AI agents to learn optimal decoding strategies from noisy syndrome measurements, potentially achieving better error rates and scalability than traditional methods.
Key Contributions
- Application of reinforcement learning to advanced quantum error correction code decoding
- Development of hybrid and multi-agent RL approaches for complex QEC architectures
- Demonstration of autonomous agent training for deriving decoding schemes
View Full Abstract
The advent of promising quantum error correction (QEC) codes with efficient resource utilization and high-performance fault-tolerant quantum memories signifies a critical step towards realizing practical quantum computation. While surface codes have been a dominant approach, their limitations have spurred the development of more advanced QEC architectures. These advanced codes often present increased complexity, demanding innovative decoding methodologies. This work investigates the application of reinforcement learning (RL) techniques, including hybrid and multi-agent approaches, to enhance the decoding of various advanced QEC architectures. By leveraging the ability of RL to learn optimal strategies from noisy syndrome measurements, we explore the potential for achieving improved logical error rates and scalability compared to traditional decoding methods. Our approach examines the adaptation of reinforcement learning to exploit the structural properties of these modern QEC models. We also explore the benefits of combining different RL algorithms to address the multifaceted nature of the decoding problem, considering factors such as code degeneracy and real-world noise characteristics. With our proposed method, we are able to demonstrate that an autonomously trained agent can derive decoding schemes for the complex decoding requirement of advanced QEC architectures.
Pareto-Front Engineering of Dynamical Sweet Spots in Superconducting Qubits
This paper develops a multi-objective optimization framework for operating superconducting qubits at dynamical sweet spots to simultaneously improve both energy relaxation time (T1) and dephasing time (Tφ). The method enhances coherence times by 3-5x compared to existing approaches while maintaining microsecond-scale performance, and establishes fundamental limits on achievable improvements.
Key Contributions
- Multi-objective Pareto optimization framework for dynamical sweet spots that simultaneously optimizes T1 and Tφ
- Proof of fundamental upper bounds on achievable T1 improvements despite eliminating first-order noise sensitivity
- Identification of double-DSS regions providing robust operating points insensitive to both DC and AC flux noise
- Demonstration of high-fidelity single and two-qubit gate protocols at optimized operating points
View Full Abstract
Operating superconducting qubits at dynamical sweet spots (DSSs) suppresses decoherence from low-frequency flux noise. A key open question is how long coherence can be extended under this strategy and what fundamental limits constrain it. Here we introduce a fully parameterized, multi-objective periodic-flux modulation framework that simultaneously optimizes energy relaxation $T_1$ and pure dephasing $T_\varphi$, thereby quantifying the tradeoff between them. For fluxonium qubits with realistic noise spectra, our method enhances $T_\varphi$ by a factor of 3-5 compared with existing DSS strategies while maintaining $T_1$ in the hundred-microsecond range. We further prove that, although DSSs eliminate first-order sensitivity to low-frequency noise, the relaxation rate cannot be reduced arbitrarily close to zero, establishing an upper bound on achievable $T_1$. At the optimized working points, we identify double-DSS regions that are insensitive to both DC and AC flux, providing robust operating bands for experiments. As applications, we design single- and two-qubit control protocols at these operating points and numerically demonstrate high-fidelity gate operations. These results establish a general and useful framework for Pareto-front engineering of DSSs that substantially improves coherence and gate performance in superconducting qubits.
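The Pareto-front idea itself is simple to state in code: keep only the operating points not dominated in both coherence times. The (T1, Tphi) values below are illustrative microsecond figures, not data from the paper.

```python
# Toy Pareto-front extraction over hypothetical (T1, T_phi) operating points
# in microseconds (illustrative values only).
points = [(300, 20), (250, 60), (180, 90), (120, 95), (310, 10), (200, 50)]

def pareto_front(points):
    """Keep points that no other point beats in both coordinates (maximize both)."""
    def dominated(p):
        return any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
    return sorted(p for p in points if not dominated(p))

front = pareto_front(points)
# (200, 50) is dominated by (250, 60): worse T1 AND worse T_phi.
assert front == [(120, 95), (180, 90), (250, 60), (300, 20), (310, 10)]
```

The surviving points trace the T1-versus-Tphi tradeoff curve; the paper's contribution is engineering flux-modulation parameters so this front moves outward.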
High-Performance Exact Synthesis of Two-Qubit Quantum Circuits
This paper develops an exact synthesis framework for optimally constructing two-qubit quantum circuits using Clifford+T gates, minimizing the number of T gates needed. The approach uses exhaustive search with pruning techniques and creates lookup tables that enable fast synthesis by turning the optimization problem into a simple query.
Key Contributions
- Exact synthesis framework for two-qubit circuits that guarantees optimal T-count
- Efficient algorithmic approach combining meet-in-the-middle search with algebraic canonicalization and pruning
- Reusable lookup table system that converts synthesis into fast query operations
View Full Abstract
Exact synthesis provides unconditional optimality and canonical structure, but is often limited to small, carefully scoped regimes. We present an exact synthesis framework for two-qubit circuits over the Clifford+$T$ gate set that optimizes $T$-count exactly. Our approach exhausts a bounded search space, exploits algebraic canonicalization to avoid redundancy, and constructs a lookup table of optimal implementations that turns synthesis into a query. Algorithmically, we combine meet-in-the-middle ideas with provable pruning rules and problem-specific arithmetic designed for modern hardware. The result is an exact, reusable synthesis engine with substantially improved practical performance.
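The meet-in-the-middle idea can be sketched generically: build a table of all short gate words, then split a target as a product of two table entries. This toy searches over {H, T} on one qubit up to global phase; it is a sketch of the search strategy only, not the paper's optimized two-qubit engine with algebraic canonicalization and T-count-specific pruning.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "T": T}

def key(U):
    """Hashable fingerprint of U, canonicalized to remove global phase."""
    v = U.ravel()
    mags = np.abs(v)
    pivot = v[int(np.argmax(mags >= mags.max() - 1e-9))]  # first near-max entry
    w = np.round(v / (pivot / abs(pivot)), 6)
    return tuple(w.real) + tuple(w.imag)

def build_table(max_len):
    """Map key -> (shortest word, matrix) for all words of length <= max_len."""
    table = {key(np.eye(2)): ("", np.eye(2, dtype=complex))}
    frontier = list(table.values())
    for _ in range(max_len):
        nxt = []
        for seq, U in frontier:
            for name, G in GATES.items():
                V = G @ U                      # word "seq + name": apply seq, then G
                k = key(V)
                if k not in table:
                    table[k] = (seq + name, V)
                    nxt.append((seq + name, V))
        frontier = nxt
    return table

def mitm(target, max_half):
    """Find a word of length <= 2 * max_half matching target up to phase."""
    table = build_table(max_half)
    best = None
    for seq_a, U_a in table.values():
        hit = table.get(key(target @ U_a.conj().T))  # second half V: V @ U_a ~ target
        if hit and (best is None or len(seq_a + hit[0]) < len(best)):
            best = seq_a + hit[0]
    return best

target = T @ H @ T @ H                 # matrix of the word "HTHT"
seq = mitm(target, max_half=2)
assert seq is not None and len(seq) <= 4
```

The table halves the search depth (length-2k words found from two length-k tables), which is the "synthesis as a query" structure the authors build their lookup tables around.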
Time-series based quantum state discrimination
This paper proposes using machine learning techniques, specifically LSTM neural networks, to improve quantum state readout by analyzing the full time-series data from measurements rather than just integrated signals. The approach better distinguishes between qubits that started in the ground state versus those that decayed during measurement, leading to improved readout fidelity.
Key Contributions
- Introduction of time-series machine learning methods for quantum state discrimination using raw analog signals
- Demonstration that LSTM networks outperform traditional clustering methods for qubit readout, particularly in boundary regions between quantum states
View Full Abstract
Accurate quantum state readout is crucial for error correction and algorithms, but measurement errors are detrimental. Readout fidelity is typically limited by a poor signal-to-noise ratio (SNR) and energy relaxation ($T_1$ decay), a significant problem for superconducting qubits. While most approaches classify results using clustering algorithms on integrated readout signals, these methods cannot distinguish a qubit that was initially in the ground state from one that decayed to it during measurement. We instead propose using machine learning (ML) on the raw, non-integrated analog signal. We apply time-series classification models, such as a long short-term memory (LSTM) network, to the full data trajectory. We find that our LSTM model, combined with filtering and feature engineering, consistently outperforms clustering. The largest improvements come from reclassifying points in the boundary regions between clusters. These points correspond to atypical measurement records, likely due to transient or noisy features lost during data integration. By retaining temporal information, sequence-aware models like LSTMs can better discriminate these trajectories, whereas clustering methods based on integrated values are more prone to misclassification.
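Why the raw trajectory helps can be seen without any neural network. The synthetic traces below (invented numbers, not the paper's data or its LSTM) have the decayed record's integral fall in the ambiguous boundary region, while a simple time-resolved feature still recovers the initial excited state.

```python
# Toy readout records: signal level ~1.0 while the qubit is excited, ~0.1 in ground.
ground = [0.1] * 20                     # qubit in |0> for the whole record
decayed = [1.0] * 4 + [0.1] * 16        # starts in |1>, relaxes mid-measurement

def integrated(trace):
    return sum(trace)

def early_mean(trace, window=4):
    return sum(trace[:window]) / window

# Midpoint threshold between the |0> integral (2.0) and the |1> integral (20.0).
threshold = 11.0
# Integration misclassifies the decayed record (integral 5.6) as "ground" ...
assert integrated(decayed) < threshold
# ... but an early-time feature cleanly separates the two records.
assert early_mean(decayed) > 0.5 > early_mean(ground)
```

A sequence model like an LSTM learns such temporal features automatically from many noisy records instead of relying on a hand-picked window.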
When Does Adaptation Win? Scaling Laws for Meta-Learning in Quantum Control
This paper develops mathematical scaling laws to determine when adaptive quantum controllers are worth their computational overhead compared to fixed controllers, showing that adaptation benefits increase with device-to-device variation and validating this on quantum gate calibration tasks.
Key Contributions
- Derived scaling law lower bounds for meta-learning adaptation gain that scales linearly with task variance and saturates exponentially with gradient steps
- Demonstrated >40% fidelity improvements for two-qubit gate calibration under high-noise out-of-distribution conditions
- Provided quantitative framework for optimizing per-device calibration overhead on cloud quantum processors
View Full Abstract
Quantum hardware suffers from intrinsic device heterogeneity and environmental drift, forcing practitioners to choose between suboptimal non-adaptive controllers or costly per-device recalibration. We derive a scaling law lower bound for meta-learning showing that the adaptation gain (expected fidelity improvement from task-specific gradient steps) saturates exponentially with gradient steps and scales linearly with task variance, providing a quantitative criterion for when adaptation justifies its overhead. Validation on quantum gate calibration shows negligible benefits for low-variance tasks but $>40\%$ fidelity gains on two-qubit gates under extreme out-of-distribution conditions (10$\times$ the training noise), with implications for reducing per-device calibration time on cloud quantum processors. Further validation on classical linear-quadratic control confirms these laws emerge from general optimization geometry rather than quantum-specific physics. Together, these results offer a transferable framework for decision-making in adaptive control.
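The shape of the scaling law described above can be written down directly: gain linear in task variance, saturating exponentially in gradient steps. The constants `c` and `rho` below are hypothetical placeholders; the paper derives the actual bound from the optimization geometry.

```python
import math

# Sketch of the scaling-law shape (hypothetical constants, illustrative only):
# adaptation gain ~ c * variance * (1 - exp(-rho * steps)).
def adaptation_gain(variance, steps, c=0.8, rho=1.0):
    return c * variance * (1.0 - math.exp(-rho * steps))

# Linear in task variance: doubling device-to-device spread doubles the ceiling.
assert math.isclose(adaptation_gain(0.2, 10), 2 * adaptation_gain(0.1, 10))
# Saturating in steps: most of the gain arrives within the first few updates.
assert adaptation_gain(0.1, 4) > 0.85 * adaptation_gain(0.1, 100)
```

This functional form makes the decision criterion quantitative: adaptation is worth its overhead only when the variance-set ceiling exceeds the calibration cost, and extra gradient steps beyond saturation buy almost nothing.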
Approximate level-by-level maximum-likelihood decoding based on the Chase algorithm for high-rate concatenated stabilizer codes
This paper develops an improved decoder for quantum error correction codes that combines level-by-level decoding with the Chase algorithm to better correct errors in high-rate concatenated stabilizer codes. The authors demonstrate through simulations that their decoder outperforms existing methods for correcting bit-flip errors in quantum systems.
Key Contributions
- Development of a general high-performance decoder that extends level-by-level minimum-distance decoding using the Chase algorithm
- Demonstration of superior performance compared to conventional decoders for high-rate concatenated Hamming codes under bit-flip noise
View Full Abstract
Fault-tolerant quantum computation (FTQC) is expected to address a wide range of computational problems. To realize large-scale FTQC, it is essential to encode logical qubits using quantum error-correcting codes. High-rate concatenated codes have recently attracted attention due to theoretical advances in fault-tolerant protocols with constant-space-overhead and polylogarithmic-time-overhead, as well as practical developments of high-rate many-hypercube codes equipped with a high-performance level-by-level minimum-distance decoder (LMDD). We propose a general, high-performance decoder for high-rate concatenated stabilizer codes that extends LMDD by leveraging the Chase algorithm to generate a suitable set of candidate errors. Our simulation results demonstrate that the proposed decoder outperforms conventional decoders for high-rate concatenated Hamming codes under bit-flip noise.
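For readers unfamiliar with the Chase algorithm, a generic Chase-II-style candidate generator looks like the following. This is a sketch of the classical primitive only, not the paper's level-by-level decoder; the reliability values are illustrative.

```python
from itertools import product
import numpy as np

def chase_candidates(hard_bits, reliabilities, t=2):
    """Chase-style test-pattern generation: flip every subset of the t
    least-reliable bit positions of the hard-decision word, yielding 2**t
    candidate words to hand to an inner decoder."""
    hard_bits = np.asarray(hard_bits, dtype=int)
    weak = np.argsort(reliabilities)[:t]       # t least-reliable positions
    for pattern in product([0, 1], repeat=t):  # all 2**t flip patterns
        cand = hard_bits.copy()
        cand[weak] ^= np.array(pattern)
        yield cand

cands = list(chase_candidates([1, 0, 1, 1, 0], [0.9, 0.1, 0.8, 0.2, 0.95], t=2))
print(len(cands))  # -> 4
```

The decoder then scores each candidate (e.g., by distance to the received word) and keeps the best, which is how the paper extends level-by-level minimum-distance decoding with a richer candidate set.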
Data-Driven Qubit Characterization and Optimal Control using Deep Learning
This paper develops a machine learning approach using recurrent neural networks to optimize control pulses for quantum computing gates. The method learns qubit behavior from experimental data and uses this trained model to design high-fidelity control sequences without requiring detailed physical system models.
Key Contributions
- Data-driven approach for qubit control optimization using RNNs
- Model-free gradient-based pulse optimization method for quantum gates
View Full Abstract
Quantum computing requires the optimization of control pulses to achieve high-fidelity quantum gates. We propose a machine learning-based protocol to address the challenges of evaluating gradients and modeling complex system dynamics. By training a recurrent neural network (RNN) to predict qubit behavior, our approach enables efficient gradient-based pulse optimization without the need for a detailed system model. First, we sample qubit dynamics using random control pulses with weak prior assumptions. We then train the RNN on the system's observed responses, and use the trained model to optimize high-fidelity control pulses. We demonstrate the effectiveness of this approach through simulations on a single $ST_0$ qubit.
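The three-step protocol above (sample random pulses, fit a surrogate, optimize through the surrogate) can be sketched dependency-free. A quadratic least-squares model stands in for the paper's RNN, and the hidden response function is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the qubit's response: a hidden map from a
# 4-sample pulse to a measured fidelity (unknown to the optimizer).
def measure_fidelity(pulse, target=np.array([0.3, -0.1, 0.2, 0.4])):
    return 1.0 - np.sum((pulse - target) ** 2)

# Step 1: sample dynamics with random pulses (weak prior assumptions).
P = rng.uniform(-1, 1, (500, 4))
F = np.array([measure_fidelity(p) for p in P])

# Step 2: fit a surrogate model to the observed responses.
X = np.hstack([P, P ** 2, np.ones((len(P), 1))])
w, *_ = np.linalg.lstsq(X, F, rcond=None)

# Step 3: gradient ascent on the surrogate, never querying the device.
pulse = np.zeros(4)
for _ in range(200):
    grad = w[:4] + 2 * w[4:8] * pulse  # d/dpulse of the fitted model
    pulse += 0.05 * grad
print(measure_fidelity(pulse))  # close to the optimum 1.0
```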
Bayesian Optimization for Quantum Error-Correcting Code Discovery
This paper develops a machine learning approach using Bayesian optimization to automatically discover new quantum error-correcting codes that protect quantum information from noise. The method uses neural networks to predict code performance without expensive simulations, successfully finding codes that balance encoding efficiency with error protection.
Key Contributions
- Multi-view chain-complex neural embedding for predicting logical error rates without expensive simulations
- Bayesian optimization framework for automated quantum error-correcting code discovery
- Discovery of high-performance codes including [[144,36]] and [[144,16]] that outperform existing gross codes
View Full Abstract
Quantum error-correcting codes protect fragile quantum information by encoding it redundantly, but identifying codes that perform well in practice with minimal overhead remains difficult due to the combinatorial search space and the high cost of logical error rate evaluation. We propose a Bayesian optimization framework to discover quantum error-correcting codes that improves data efficiency and scalability with respect to previous machine learning approaches to this task. Our main contribution is a multi-view chain-complex neural embedding that allows us to predict the logical error rate of quantum LDPC codes without performing expensive simulations. Using bivariate bicycle codes and code capacity noise as a testbed, our algorithm discovers a high-rate code [[144,36]] that achieves competitive per-qubit error rate compared to the gross code, as well as a low-error code [[144,16]] that outperforms the gross code in terms of error rate per qubit. These results highlight the ability of our pipeline to automatically discover codes balancing rate and noise suppression, while the generality of the framework enables application across diverse code families, decoders, and noise models.
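A minimal Bayesian-optimization loop in the spirit described, over a toy 1-D "code parameter": the error-rate function is invented, and a small Gaussian-process surrogate replaces the paper's chain-complex neural embedding, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, ls=0.5):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * ls ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    """Gaussian-process posterior mean and variance on test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 1e-12)

# Hypothetical stand-in for the expensive step: in the paper this would be a
# logical-error-rate simulation of a candidate code.
def logical_error_rate(x):
    return 0.1 + 0.05 * np.sin(3 * x) + 0.02 * x

X = rng.uniform(0, 2, (4, 1))
y = np.array([logical_error_rate(x[0]) for x in X])
grid = np.linspace(0, 2, 200)[:, None]
for _ in range(10):                    # BO loop: minimise predicted error
    mu, var = gp_posterior(X, y, grid)
    acq = mu - 1.0 * np.sqrt(var)      # lower confidence bound
    x_next = grid[np.argmin(acq)]
    X = np.vstack([X, x_next])
    y = np.append(y, logical_error_rate(x_next[0]))
print(float(y.min()))
```

The data efficiency comes from the surrogate: each loop iteration spends one expensive evaluation where the model predicts the best trade-off between low error and high uncertainty.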
Fundamentals, Recent Advances, and Challenges Regarding Cryptographic Algorithms for the Quantum Computing Era
This is a comprehensive book/reference work that provides an overview of how quantum computing impacts cryptography, covering both the threats posed by quantum algorithms like Shor's algorithm and the development of post-quantum cryptographic solutions. It serves as an educational resource progressing from basic concepts to practical implementation challenges in the transition to quantum-resistant cryptography.
Key Contributions
- Comprehensive Portuguese-language reference on quantum computing's impact on cryptography
- Progressive educational structure covering fundamentals through practical implementation of post-quantum cryptography
- Analysis of NIST standardization process and migration strategies for quantum-resistant algorithms
View Full Abstract
This book arises from the need to provide a clear and up-to-date overview of the impacts of quantum computing on cryptography. The goal is to provide a reference in Portuguese for undergraduate, master's, and doctoral students in the field of data security and cryptography. Throughout the chapters, we present fundamentals, discuss classical and post-quantum algorithms, evaluate emerging standards, and point out real-world implementation challenges. The initial objective is to serve as a guide for students, researchers, and professionals who need to understand not only the mathematics involved, but also its practical implications in security systems and policies. For more advanced professionals, the main objective is to present content and ideas so that they can assess the changes and perspectives in the era of quantum cryptographic algorithms. To that end, the text's structure was designed to be progressive: we begin with essential concepts, move on to quantum algorithms and their consequences (with emphasis on Shor's algorithm), present issues focusing on "families" of post-quantum schemes (based on lattices, codes, hash functions, multivariate polynomials, and isogenies), analyze the state of the art in standardization (highlighting the NIST process), and finally, discuss migration, interoperability, performance, and cryptographic governance. We hope that this work will assist in the formation of critical thinking and informed technical decision-making, fostering secure transition strategies for the post-quantum era.
Quantum Error Correction on Error-mitigated Physical Qubits
This paper develops a framework for applying quantum error mitigation techniques directly to the physical qubits within logical qubits, showing that this approach can effectively increase code distance by 2 and achieve similar error rates to larger codes while using significantly fewer qubits.
Key Contributions
- General framework for integrating linear quantum error mitigation with quantum error correction at the physical layer
- Demonstration that distance-3 codes with physical-level error mitigation can match distance-5 unmitigated codes while using 40-64% fewer qubits
View Full Abstract
We present a general framework for applying linear quantum error mitigation (QEM) techniques directly to physical qubits within a logical qubit to suppress logical errors. By exploiting the linearity of quantum error correction (QEC), we demonstrate that any linear QEM method, including probabilistic error cancellation (PEC), zero-noise extrapolation (ZNE), and symmetry verification, can be integrated into the physical layer without requiring modifications to the subsequent QEC decoder. Applying this framework to memory experiments using PEC, we analytically prove and numerically verify that the leading-order contribution to the logical error can be removed, increasing the effective code distance by 2. Our simulations on repetition and rotated surface codes show that a distance-3 code with physical-level PEC achieves logical error rates lower than or similar to a distance-5 unmitigated code while using 40% and 64% fewer qubits, respectively. These results establish physical-level QEM as a widely compatible and resource-efficient strategy for enhancing logical performance in early fault-tolerant architectures.
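Zero-noise extrapolation, one of the linear QEM methods named above, can be sketched in a few lines: measure an expectation value at several amplified noise levels and extrapolate to the zero-noise limit. The exponential-decay noise model here is a hypothetical toy, not the paper's error model.

```python
import numpy as np

def noisy_expectation(scale, ideal=1.0, error_per_unit=0.05):
    """Toy model: the measured expectation value decays exponentially with
    the (deliberately amplified) noise scale."""
    return ideal * (1 - error_per_unit) ** scale

scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Linear (Richardson-style) fit in the noise scale, evaluated at scale = 0.
coeffs = np.polyfit(scales, values, deg=1)
zne_estimate = np.polyval(coeffs, 0.0)
print(zne_estimate)  # closer to the ideal value 1.0 than any raw measurement
```

The "linear" in linear QEM is what the paper exploits: because the mitigated estimate is a linear combination of noisy measurements, it commutes with the linear syndrome-processing of the QEC decoder.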
Overcoming Barren Plateaus in Variational Quantum Circuits using a Two-Step Least Squares Approach
This paper proposes a two-stage optimization method to solve the barren plateau problem in variational quantum algorithms, where gradients vanish as quantum circuits scale up. The authors test their approach on quantum cryptanalysis of the BB84 quantum key distribution protocol, showing improved performance over random initialization methods.
Key Contributions
- Two-stage optimization framework with convex initialization followed by nonconvex refinement to overcome barren plateaus
- Application to quantum cryptanalysis of BB84 protocol for optimal cloning strategies
View Full Abstract
Variational Quantum Algorithms are a vital part of quantum computing. They blend quantum and classical methods to tackle tough problems in machine learning, chemistry, and combinatorial optimization. Yet as these algorithms scale up, they cannot escape the barren-plateau phenomenon: as systems grow, gradients can vanish so quickly that training deep or randomly initialized circuits becomes nearly impossible. To overcome the barren plateau problem, we introduce a two-stage optimization framework. First comes the convex initialization stage. Here, we shape the quantum energy landscape (the Hamiltonian landscape) into a smooth, low-energy basin. This step makes gradients easier to spot and keeps noise from derailing the process. Once a stable gradient flow is established, we move to the second stage: nonconvex refinement. In this phase, we allow the algorithm to explore different energy minima, thereby making the model more expressive. Finally, we use our two-stage solution to perform quantum cryptanalysis of the BB84 quantum key distribution protocol to determine the optimal cloning strategies. The simulation results show that our proposed two-stage solution outperforms its random initialization counterpart.
Quantum phase estimation with optimal confidence interval using three control qubits
This paper presents a more efficient method for quantum phase estimation by using matrix product states with bond dimension 4 to prepare optimal control states, requiring only three control qubits instead of larger registers for early fault-tolerant quantum computers.
Key Contributions
- Efficient preparation of discrete prolate spheroidal sequence states using matrix product states with bond dimension 4
- Reduction of quantum phase estimation to only three control qubits for power-of-2 dimensions
View Full Abstract
Quantum phase estimation is an important routine in many quantum algorithms, particularly for estimating the ground state energy in quantum chemistry simulations. This estimation involves applying powers of a unitary to the ground state, controlled by an auxiliary state prepared on a control register. In many applications the goal is to provide a confidence interval for the phase estimate, and optimal performance is provided by a discrete prolate spheroidal sequence. We show how to prepare the corresponding state in a far more efficient way than prior work. We find that a matrix product state representation with a bond dimension of 4 is sufficient to give a highly accurate approximation for all dimensions tested, up to $2^{24}$. This matrix product state can be efficiently prepared using a sequence of simple three-qubit operations. When the dimension is a power of 2, the phase estimation can be performed with only three qubits for the control register, making it suitable for early-generation fault-tolerant quantum computers with a limited number of logical qubits.
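The discrete prolate spheroidal sequence itself can be computed classically via Slepian's tridiagonal eigenproblem, a standard construction; the paper's actual contribution, the bond-dimension-4 MPS preparation of this state, is not reproduced here.

```python
import numpy as np

def dpss_first(M, W):
    """Order-0 discrete prolate spheroidal sequence of length M and
    half-bandwidth W, as the top eigenvector of Slepian's symmetric
    tridiagonal matrix (standard construction)."""
    i = np.arange(M)
    T = np.diag(((M - 1) / 2 - i) ** 2 * np.cos(2 * np.pi * W))
    k = np.arange(1, M)
    off = k * (M - k) / 2
    T += np.diag(off, 1) + np.diag(off, -1)
    vals, vecs = np.linalg.eigh(T)
    v = vecs[:, -1]              # eigenvector of the largest eigenvalue
    return v * np.sign(v.sum())  # fix the overall sign convention

amps = dpss_first(64, 0.05)
print(np.linalg.norm(amps))  # eigh returns a unit-norm amplitude vector
```

Used as the amplitudes of the control-register state, this sequence gives the optimal confidence interval discussed in the abstract; the bell-shaped, symmetric profile is what the bond-dimension-4 MPS approximates.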
Reducing TLS loss in tantalum CPW resonators using titanium sacrificial layers
This paper presents a method to significantly improve the quality of tantalum-based superconducting resonators by using an ultrathin titanium layer that chemically modifies the tantalum oxide surface, reducing energy loss from two-level systems and achieving over 3x improvement in device performance.
Key Contributions
- Demonstrated 3x improvement in quality factors of tantalum resonators using titanium sacrificial layers
- Identified interfacial oxide chemistry as critical factor in superconducting loss mechanisms
- Developed fabrication-compatible method for atomic-scale surface engineering to extend qubit coherence times
View Full Abstract
We demonstrate a substantial reduction in two-level system loss in tantalum coplanar waveguide resonators fabricated on high-resistivity silicon substrates through the use of an ultrathin titanium sacrificial layer. A 0.2 nm titanium film, deposited atop pre-sputtered α-tantalum, acts as a solid-state oxygen getter that chemically modifies the native Ta oxide at the metal-air interface. After device fabrication, the titanium layer is removed using buffered oxide etchant, leaving behind a chemically reduced Ta oxide surface. Subsequent high-vacuum annealing further suppresses two-level system loss. Resonators treated with this process exhibit internal quality factors $Q_i$ averaging more than 1.5 million in the single-photon regime across ten devices, over three times higher than otherwise identical devices lacking the titanium layer. These results highlight the critical role of interfacial oxide chemistry in superconducting loss and reinforce atomic-scale surface engineering as an effective approach to improving coherence in tantalum-based quantum circuits. The method is compatible with existing fabrication workflows applicable to tantalum films, offering a practical route to further extending $T_1$ lifetimes in superconducting qubits.
Calibration-Conditioned FiLM Decoders for Low-Latency Decoding of Quantum Error Correction Evaluated on IBM Repetition-Code Experiments
This paper develops a neural network decoder for quantum error correction that adapts to hardware changes by using device calibration data to condition the decoding process. The approach separates slow hardware calibration updates from fast error correction decisions, achieving better performance than traditional decoding methods on IBM quantum processors.
Key Contributions
- Hardware-conditioned neural decoder framework that adapts to device calibration drift
- FiLM-based architecture that separates calibration processing from real-time syndrome decoding
- Experimental validation showing 11.1x reduction in logical error rate compared to minimum-weight perfect matching
View Full Abstract
Real-time decoding of quantum error correction (QEC) is essential for enabling fault-tolerant quantum computation. A practical decoder must operate with high accuracy at low latency, while remaining robust to spatial and temporal variations in hardware noise. We introduce a hardware-conditioned neural decoder framework designed to exploit the natural separation of timescales in superconducting processors, where calibration drifts occur over hours while error correction requires microsecond-scale responses. By processing calibration data through a graph-based encoder and conditioning a lightweight convolutional backbone via feature-wise linear modulation (FiLM), we decouple the heavy processing of device statistics from the low-latency syndrome decoding. We evaluate this approach using the 1D repetition code as a testbed on IBM Fez, Kingston, and Pittsburgh processors, collecting over 2.7 million experimental shots spanning distances up to d = 11. We demonstrate that a single trained model generalizes to unseen qubit chains and new calibration data acquired days later without retraining. On these unseen experiments, the FiLM-conditioned decoder achieves up to an 11.1x reduction in logical error rate relative to modified minimum-weight perfect matching. We observe that by employing a network architecture that exploits the highly asynchronous nature of system calibration and decoding, hardware-conditioned neural decoding demonstrates promising, adaptive performance with negligible latency overhead relative to unconditioned baselines.
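The FiLM mechanism referenced above amounts to per-channel scale-and-shift conditioning. Here is a minimal sketch with hypothetical shapes and random weights; the point is that gamma and beta are computed once per (slow) calibration update, then applied cheaply on every (fast) decoding shot.

```python
import numpy as np

rng = np.random.default_rng(0)

def film(features, calib, W_gamma, b_gamma, W_beta, b_beta):
    """Feature-wise linear modulation: a calibration vector is mapped to a
    per-channel scale (gamma) and shift (beta) that condition the syndrome-
    processing features. All shapes and weights here are hypothetical."""
    gamma = calib @ W_gamma + b_gamma                # (channels,)
    beta = calib @ W_beta + b_beta                   # (channels,)
    return gamma[None, :] * features + beta[None, :]  # broadcast over rounds

channels, calib_dim, rounds = 8, 5, 16
features = rng.normal(size=(rounds, channels))  # e.g. conv features per round
calib = rng.normal(size=calib_dim)              # e.g. T1/T2/readout statistics
W_g = rng.normal(size=(calib_dim, channels))
W_b = rng.normal(size=(calib_dim, channels))
out = film(features, calib, W_g, np.ones(channels), W_b, np.zeros(channels))
print(out.shape)  # (16, 8)
```

This separation is why the decoder adds negligible latency: the heavy calibration encoder runs on the hours timescale, while the per-shot path only applies a multiply-and-add.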
Experimental prime factorization via a feedback quantum control
This paper demonstrates a new approach to quantum factorization using feedback quantum control that eliminates the need for classical computation during optimization. The researchers experimentally factored 551 using a 3-qubit NMR system and showed numerical scalability to larger numbers.
Key Contributions
- Novel all-quantum feedback control method for prime factorization that eliminates classical post-processing
- Experimental demonstration of factoring 551 on a 3-qubit NMR quantum processor
- Numerical scalability analysis showing potential for larger factorizations with 5 and 9 qubits
View Full Abstract
Prime factorization on quantum processors is typically implemented either via circuit-based approaches such as Shor's algorithm or through Hamiltonian optimization methods based on adiabatic, annealing, or variational techniques. While Shor's algorithm demands high-fidelity quantum gates, Hamiltonian optimization schemes, with prime factors encoded as degenerate ground states of a problem Hamiltonian, generally require substantial classical post-processing to determine control parameters. We propose an all-quantum, measurement-based feedback approach that iteratively steers a quantum system toward the target ground state, eliminating the need for classical computation of drive parameters once the problem Hamiltonian is determined and realized. As a proof of principle, we experimentally factor the biprime 551 using a three-qubit NMR quantum register and numerically analyze the robustness of the method against control-field errors. We further demonstrate scalability by numerically implementing the FALQON (Feedback-based Algorithm for Quantum Optimization) factorization of larger biprimes, 9,167 and 2,106,287, using 5 and 9 qubits, respectively.
Automated quantum circuit optimization with randomized replacements
This paper presents a new method for optimizing quantum circuits by allowing approximate transformations that use mixed quantum channels (noisy operations) instead of perfect unitary operations, achieving reduced gate counts while staying within error budgets. The approach strategically converts some gate-induced noise into engineered random noise, showing particular promise for structured circuits like the quantum Fourier transform.
Key Contributions
- Novel approximate circuit optimization using mixed quantum channels instead of pure unitaries
- Greedy replacement strategy using ZX-calculus that converts gate noise into engineered random noise
- Demonstration of substantial gate count reductions in structured circuits like quantum Fourier transform
View Full Abstract
Quantum circuit optimization - the process of transforming a quantum circuit into an equivalent one with reduced time and space requirements - is crucial for maximizing the utility of current and near-future quantum devices. While most automated optimization techniques focus on transforming circuits into equivalent ones that implement the same unitary, we show that substantial new opportunities for resource reduction can be achieved by (1) allowing approximate local transformations and (2) employing mixed quantum channels to approximate pure circuits. Our novel automated protocol for approximate circuit rewriting is a refined evolution of automated optimization techniques based on the ZX-calculus, where we add a greedy strategy that selectively replaces ZX-diagrams with small phase angles with stochastic mixtures of the identity and carefully chosen over-rotations, which are designed to reduce the overall gate count in expectation while staying within a strict error budget. This approach yields modest two-qubit gate count reduction in random quantum circuits, and achieves a substantial reduction in structured circuits such as the quantum Fourier transform. Fundamentally, our protocol converts experimental noise due to gate applications into deliberately engineered random noise, outperforming many other approximation methods on average. These results highlight the potential of mixed-channel approximations to enhance future quantum circuit performance, suggesting new directions for resource-aware automated quantum compilation beyond pure unitary channels.
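The replacement rule described above, trading a small rotation for a stochastic mixture of the identity and an over-rotation, can be sketched as follows. Matching the mean rotation angle is only a first-order criterion; in the paper's protocol the residual (higher-moment) error must also fit within the strict error budget.

```python
def mixture_for_rotation(theta, theta_over):
    """Sketch of the abstract's replacement rule: realize a small rotation
    RZ(theta) *in expectation* as a stochastic mixture of the identity
    (probability p) and an over-rotation RZ(theta_over). Choosing
    p = 1 - theta/theta_over matches the mean rotation angle."""
    assert 0 < abs(theta) < abs(theta_over)
    p_identity = 1 - theta / theta_over
    return p_identity, 1 - p_identity

p_id, p_rot = mixture_for_rotation(0.05, 0.20)
print(p_id, p_rot)  # 75% of shots delete the gate outright
# Mean angle is preserved: 0.75 * 0 + 0.25 * 0.20 == 0.05
```

The gate-count saving is the `p_id` fraction of shots in which the rotation, and any two-qubit gates it would have required after compilation, is simply not applied.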
NWQWorkflow: The Northwest Quantum Workflow
This paper presents NWQWorkflow, a comprehensive software toolkit that integrates multiple components for developing, compiling, error-correcting, simulating, and executing quantum applications on superconducting quantum hardware. The system provides an end-to-end workflow from programming through execution, designed to support the transition toward scalable quantum computing.
Key Contributions
- Comprehensive end-to-end quantum computing workflow integrating programming, compilation, error correction, and execution
- Open-source quantum software ecosystem with multiple integrated components for superconducting quantum testbeds
- Closed-loop software-hardware co-design framework for quantum application development
View Full Abstract
This whitepaper presents NWQWorkflow, an end-to-end workflow for quantum application development, compilation, error correction, benchmarking, numerical simulation, control, and execution on a prototype superconducting testbed. NWQWorkflow integrates NWQStudio (programming GUI environment), NWQASM (intermediate representation), QASMTrans (compiler), NWQEC (quantum error correction), QASMBench (benchmarking and characterization), NWQSim (HPC simulation), NWQLib (algorithm library), NWQData (data sets), NWQControl (quantum control), and NWQSC (superconducting testbed). The system enables closed-loop software-hardware co-design and reflects the past eight years of quantum computing research the author has led at PNNL (2018-2026). By releasing most software components as open source or planning their open-source availability, we aim to cultivate a collaborative quantum information science (QIS) ecosystem and support the transition toward a scalable quantum supercomputing era.
Stabilizer-Code Channel Transforms Beyond Repetition Codes for Improved Hashing Bounds
This paper develops improved methods for quantum error correction by using stabilizer codes as channel transforms to achieve better communication rates over noisy quantum channels. The authors generalize beyond simple repetition codes to arbitrary stabilizer codes and demonstrate improvements over standard quantum hashing bounds for certain types of Pauli noise channels.
Key Contributions
- Generalization of stabilizer-code channel transforms beyond repetition codes to arbitrary stabilizer codes
- Construction of symplectic tableaux and computation of induced logical error distributions for improved achievable rates via hashing bounds with decoder side information
View Full Abstract
The quantum hashing bound guarantees that rates up to $1-H(p_I, p_X, p_Y, p_Z)$ are achievable for memoryless Pauli channels, but it is not generally tight. A known way to improve achievable rates for certain asymmetric Pauli channels is to apply a small inner stabilizer code to a few channel uses, decode, and treat the resulting logical noise as an induced Pauli channel; reapplying the hashing argument to this induced channel can beat the baseline hashing bound. We generalize this induced-channel viewpoint to arbitrary stabilizer codes used purely as channel transforms. Given any $ [\![ n, k ]\!] $ stabilizer generator set, we construct a full symplectic tableau, compute the induced joint distribution of logical Pauli errors and syndromes under the physical Pauli channel, and obtain an achievable rate via a hashing bound with decoder side information. We perform a structured search over small transforms and report instances that improve the baseline hashing bound for a family of Pauli channels with skewed and independent errors studied in prior work.
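The baseline hashing bound $1 - H(p_I, p_X, p_Y, p_Z)$ is easy to evaluate directly; this small helper computes the quantity the paper's induced-channel transforms try to beat. The example noise values are chosen arbitrarily for illustration.

```python
import math

def hashing_rate(p_i, p_x, p_y, p_z):
    """Baseline hashing bound 1 - H(p_I, p_X, p_Y, p_Z) for a memoryless
    Pauli channel, with H the Shannon entropy in bits."""
    H = -sum(p * math.log2(p) for p in (p_i, p_x, p_y, p_z) if p > 0)
    return 1 - H

print(hashing_rate(1.0, 0.0, 0.0, 0.0))      # noiseless channel: rate 1
print(hashing_rate(0.97, 0.01, 0.01, 0.01))  # weak depolarizing-like noise
```

In the paper's construction, the same formula is reapplied to the induced logical Pauli channel (with syndrome side information), which is how an inner stabilizer transform can exceed this baseline.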
Check-weight-constrained quantum codes: Bounds and examples
This paper studies quantum low-density parity-check (qLDPC) codes with constraints on the weight of their error-checking operations, establishing fundamental limits on what these codes can achieve and providing explicit constructions that approach these limits for practical quantum computers.
Key Contributions
- Proved that stabilizer codes with check weight at most 3 cannot have nontrivial distance
- Established tight bounds on rate-distance tradeoffs for CSS stabilizer and subsystem codes with constrained check weights
- Derived numerical bounds using linear programming and identified explicit code constructions approaching these limits
View Full Abstract
Quantum low-density parity-check (qLDPC) codes can be implemented by measuring only low-weight checks, making them compatible with noisy quantum hardware and central to the quest to build noise-resilient quantum computers. A fundamental open question is how constraints on check weight limit the achievable parameters of qLDPC codes. Here, we study stabilizer and subsystem codes with constrained check weight, combining analytical arguments with numerical optimization to establish strong upper bounds on their parameters. We show that stabilizer codes with checks of weight at most three cannot have nontrivial distance. We also prove tight tradeoffs between rate and distance for broad families of CSS stabilizer and subsystem codes with checks of weight at most four and two, respectively. Notably, our bounds are applicable to general qLDPC codes, as they rely only on check-weight constraints without assuming geometric locality or special graph connectivity. In the finite-size regime, we derive numerical upper bounds using linear programming techniques and identify explicit code constructions that approach these limits, delineating the landscape of practically relevant qLDPC codes with tens or hundreds of physical qubits.
Combatting noise in near-term quantum data centres
This paper compares different methods for handling errors in distributed quantum computing networks, specifically examining quantum error detection codes versus entanglement distillation techniques for performing quantum operations between remote quantum computers.
Key Contributions
- Comparative analysis of quantum error detection codes vs entanglement distillation for distributed quantum computing
- Performance evaluation of specific error correction codes (three-qubit repetition and [[4,1,2]] LNCY code) in quantum data center environments
View Full Abstract
We analyse the performance of different error handling methods in the quantum data centre paradigm of distributed quantum computing. We compare the impact of quantum error detection, using the three-qubit repetition code and the [[4, 1, 2]] Leung-Nielsen-Chuang-Yamamoto code, on remote gates with that of conventional entanglement distillation techniques. Detailed classical simulation is used to obtain results for realistic near-term hardware.
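The error-detection idea behind the three-qubit repetition code can be sketched with a classical bit-flip toy: post-select on the trivial syndrome and keep only those shots. This is an illustration of the primitive, not the paper's detailed hardware simulation; the flip probability is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def syndrome(bits):
    """Parity checks Z1Z2 and Z2Z3 of the three-qubit repetition code,
    restricted here to classical bit-flip errors."""
    return (int(bits[0] ^ bits[1]), int(bits[1] ^ bits[2]))

def run_shot(p_flip):
    """Encode 0, apply independent bit flips, report the syndrome and
    whether any error occurred (detection, not correction)."""
    bits = np.zeros(3, dtype=int)
    bits ^= (rng.random(3) < p_flip).astype(int)
    return syndrome(bits), int(np.any(bits))

shots = [run_shot(0.05) for _ in range(20000)]
kept = [err for syn, err in shots if syn == (0, 0)]
print(sum(kept) / len(kept))  # undetected-error rate after post-selection
```

Only the all-zeros and all-ones patterns pass the checks, so the undetected-error rate drops from order $p$ to order $p^3$, at the cost of discarding flagged shots, the same accept/discard trade-off the paper weighs against entanglement distillation.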
Active interference suppression in frequency-division-multiplexed quantum gates via off-resonant microwave tones
This paper develops a method to improve quantum gate operations when multiple qubits share the same control cable by deliberately adding specific off-resonant microwave signals that cancel out unwanted interference. The technique reduces gate errors and makes frequency-division multiplexing more practical for scaling up quantum computers.
Key Contributions
- Active interference suppression method for frequency-division-multiplexed quantum gates using deliberate off-resonant tones
- Demonstration that gate infidelity decreases in proportion to the inverse square of the number of microwave tones
- Mitigation strategy for fast oscillation effects through optimized frequency allocation
View Full Abstract
An increase in the number of control lines between the quantum processors and the external electronics constitutes a major bottleneck in the realization of large-scale quantum computers. Frequency-division multiplexing is expected to enable multiple qubits to be controlled through a single microwave cable; however, interference from off-resonant microwave tones hinders precise qubit control. Here, we propose an active interference suppression method for frequency-division-multiplexed simultaneous gate operations. We demonstrate that deliberate incorporation of off-resonant microwave tones improves the accuracy of single-qubit gates. Specifically, we find that by incorporating off-resonant orthogonal or quasi-orthogonal microwave tones, the gate infidelity decreases proportionally to the inverse square of the number of microwave tones. Furthermore, we show that fast oscillations neglected under the rotating wave approximation degrade gate fidelity, and that this degradation can be mitigated through optimized frequency allocation. Our approach is simple yet effective for improving the performance of frequency-division-multiplexed quantum gates.
Deep Learning Approaches to Quantum Error Mitigation
This paper develops deep learning techniques, particularly sequence-to-sequence attention-based models, to reduce errors in quantum computing measurements by correcting noisy output probability distributions from quantum circuits. The researchers tested their approach on IBM quantum processors up to 5 qubits and showed it outperforms other error mitigation methods.
Key Contributions
- Development of attention-based neural network architectures for quantum error mitigation that outperform baseline techniques
- Demonstration of cross-device generalization for error mitigation models across similar IBM quantum processors without full retraining
View Full Abstract
We present a systematic investigation of deep learning methods applied to quantum error mitigation of noisy output probability distributions from measured quantum circuits. We compare different architectures, from fully connected neural networks to transformers, and we test different design/training modalities, identifying sequence-to-sequence, attention-based models as the most effective on our datasets. These models consistently produce mitigated distributions that are closer to the ideal outputs when tested on both simulated and real device data obtained from IBM superconducting quantum processing units (QPU) up to five qubits. Across several different circuit depths, our approach outperforms other baseline error mitigation techniques. We perform a series of ablation studies to examine: how different input features (circuit, device properties, noisy output statistics) affect performance; cross-dataset generalization across circuit families; and transfer learning to a different IBM QPU. We observe that generalization performance across similar devices with the same architecture works effectively, without needing to fully retrain models.
Optimal Construction of Two-Qubit Gates using the Symmetries of B Gate Equivalence Class
This paper analyzes the mathematical structure of two-qubit quantum gates, focusing on the B gate equivalence class and its unique symmetry properties that allow it to generate all possible two-qubit operations using just two gate applications. The authors identify optimal constructions for universal two-qubit quantum circuits and discuss practical implementations on superconducting quantum computers.
Key Contributions
- Identification of unique symmetry properties of B gate equivalence class that enable universal two-qubit gate generation
- Construction of parameterized universal two-qubit quantum circuits using only two nonlocal gates
- Analysis of one-parameter families of local equivalence classes for optimal gate construction
- Discussion of practical implementation strategies for superconducting quantum computers
View Full Abstract
Two applications of gates from the B gate equivalence class can generate all two-qubit gates. This local equivalence class is invariant under the mirror (multiplication with the SWAP gate) operation, inverse (Hermitian conjugate) operation, and the combined inverse and mirror operations. The last two symmetries are associated with the ability of a two-qubit gate to generate the two-qubit local gates and the SWAP gate in two applications. No single local equivalence class of two-qubit gates, except the B gate equivalence class, has these two symmetries. Only the planar regions of the Weyl chamber, describing the mirror operation, contain the local equivalence classes with either one of the two symmetries. We show that there exist one-parameter families of local equivalence classes on these planes, with and without the B gate equivalence class, such that each of them can be used to construct a parameterized universal two-qubit quantum circuit that involves only two nonlocal two-qubit gates. We also discuss the implementation of the gates from a few families of local equivalence classes on superconducting quantum computers for optimal generation of all two-qubit gates.
3D Stacked Surface-Code Architecture for Measurement-Free Fault-Tolerant Quantum Error Correction
This paper introduces a 3D stacked architecture for quantum error correction that eliminates the need for mid-circuit measurements by using vertical connections between surface code layers. The approach overcomes connectivity limitations of 2D approaches and achieves better error rates than traditional measurement-based quantum error correction.
Key Contributions
- 3D stacked surface-code architecture with vertical transversal couplers
- Measurement-free fault-tolerant quantum error correction protocol
- Elimination of SWAP overhead through constant-depth inter-layer operations
- Analytical performance model showing orders of magnitude improvement in logical error rates
View Full Abstract
Mid-circuit measurements are a major bottleneck for superconducting quantum processors because they are slower and noisier than gates. Measurement-free quantum error correction (mfec) replaces repeated measurements and classical feed-forward by coherent quantum feedback, but existing mfec protocols suffer from severe connectivity overhead when mapped to planar surface-code architectures: transversal interactions between logical patches require SWAP chains of length $O(d)$ in the code distance, which increase depth and generate hook errors. This work introduces a 3D stacked surface-code architecture for measurement-free fault-tolerant quantum error correction that removes this connectivity bottleneck. Vertical transversal couplers between aligned surface-code patches enable coherent parity mapping and feedback with zero SWAP overhead, realizing constant-depth inter-layer operations, $O(1)$ in the code distance $d$, while preserving local 2D stabilizer checks. A fault-tolerant mfec protocol for the surface code is constructed that suppresses hook errors under realistic noise. An analytical performance model shows that the 3D architecture overcomes the readout error floor and achieves logical error rates orders of magnitude below both standard measurement-based surface codes and 2D mfec variants in regimes with slow, noisy measurements, identifying 3D integration as a key enabler for scalable measurement-free fault tolerance.
Dressed-state relaxation in coupled qubits as a source of two-qubit gate errors
This paper identifies a new source of errors in two-qubit quantum gates caused by noise at specific frequencies that match the energy splitting between dressed states of coupled qubits. The researchers demonstrate both theoretically and experimentally that this frequency-selective relaxation mechanism predictably degrades gate performance and represents a universal decoherence pathway across quantum platforms.
Key Contributions
- Identification of a new frequency-selective relaxation mechanism in two-qubit gates where noise at frequency 2g (twice the coupling strength) creates a distinct error channel
- Extension of T1ρ relaxation theory to interacting qubit systems with experimental verification on superconducting qubits
View Full Abstract
Understanding error mechanisms in two-qubit gate operations is essential for building high-fidelity quantum processors. While prior studies predominantly treat dephasing noise as either Markovian or predominantly low-frequency, realistic qubit environments exhibit structured, frequency-dependent spectra. Here we demonstrate that noise at frequencies matching the dressed-state energy splitting--set by the inter-qubit coupling strength g--induces a distinct relaxation channel that degrades gate performance. Through combined theoretical analysis and experimental verification on superconducting qubits with engineered noise spectra, we show that two-qubit gate errors scale predictably with the noise power spectral density at frequency 2g, extending the concept of $T_{1ρ}$ relaxation to interacting systems. This frequency-selective relaxation mechanism, universal across platforms, enriches our understanding of decoherence pathways during gate operations. The same mechanism sets coherence limits for dual-rail or singlet-triplet encodings.
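The abstract's central scaling claim — gate error tracks the noise power spectral density at frequency 2g — can be illustrated with a golden-rule-style sketch. The Lorentzian spectrum, its parameters, and the rate prefactor below are all hypothetical choices for illustration, not the paper's model.

```python
import numpy as np

def lorentzian_psd(omega, s0, omega_c):
    """Hypothetical Lorentzian noise power spectral density S(omega)."""
    return s0 / (1.0 + (omega / omega_c) ** 2)

def dressed_relaxation_rate(g, s0, omega_c):
    """Golden-rule-style rate sampling the PSD at the dressed splitting 2g."""
    return 0.5 * lorentzian_psd(2.0 * g, s0, omega_c)

s0, omega_c = 1e3, 2 * np.pi * 20e6                   # illustrative spectrum
g_weak, g_strong = 2 * np.pi * 5e6, 2 * np.pi * 50e6  # coupling strengths (rad/s)
rate_weak = dressed_relaxation_rate(g_weak, s0, omega_c)
rate_strong = dressed_relaxation_rate(g_strong, s0, omega_c)
print(rate_weak, rate_strong)  # 2g lower in a falling spectrum -> larger rate
```

The point of the sketch: for a spectrum that falls with frequency, a smaller coupling g places the dressed splitting 2g where the noise is stronger, so the same environment produces a larger relaxation rate.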
Converting qubit relaxation into erasures with a single fluxonium
This paper demonstrates a new approach to quantum error correction using a single fluxonium qubit that converts relaxation errors into detectable erasure errors, achieving a four-fold improvement in logical qubit lifetime without requiring additional hardware overhead compared to previous dual-rail encoding methods.
Key Contributions
- Demonstrated erasure conversion in a single fluxonium qubit using 0-2 subspace encoding, eliminating hardware overhead of dual-rail approaches
- Achieved >4x improvement in logical qubit lifetime (193μs to 869μs) through erasure detection and post-selection
- Characterized measurement-induced dephasing and showed negligible error per erasure check (7.2×10^-5)
View Full Abstract
Qubits that experience predominantly erasure errors offer distinct advantages for fault-tolerant operation. Indeed, dual-rail encoded erasure qubits in superconducting cavities and transmons have demonstrated high-fidelity operations by converting physical-qubit relaxation into logical-qubit erasures, but this comes at the cost of increased hardware overhead and circuit complexity. Here, we address these limitations by realizing erasure conversion in a single fluxonium operated at zero flux, where the logical state is encoded in its 0-2 subspace. A single, carefully engineered resonator provides both mid-circuit erasure detection and end-of-line (EOL) logical measurement. Post-selection on non-erasure outcomes results in more than four-fold increase of the logical lifetime, from $193~μ$s to $869~μ$s. Finally, we characterize measurement-induced logical dephasing as a function of measurement power and frequency, and infer that each erasure check contributes a negligible error of $7.2\times 10^{-5}$. These results establish integer-fluxonium as a promising, resource-efficient platform for erasure-based error mitigation, without requiring additional hardware.
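A toy Monte Carlo (not the paper's device model) shows why post-selecting on non-erasure outcomes lengthens the effective logical lifetime: shots in which a decay was flagged as an erasure are discarded, so the surviving ensemble looks longer-lived at the cost of shot yield. The detection efficiency and observation window below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
t1 = 193.0           # bare logical lifetime from the paper (μs)
t_obs = 200.0        # observation window (μs), illustrative
eta = 0.9            # assumed erasure-detection efficiency (hypothetical)
n_shots = 200_000

decay_times = rng.exponential(t1, n_shots)
decayed = decay_times < t_obs
flagged = decayed & (rng.random(n_shots) < eta)   # detected decays raise a flag

survival_all = np.mean(~decayed)                  # no post-selection
survival_kept = np.mean(~decayed[~flagged])       # keep only unflagged shots
kept_fraction = np.mean(~flagged)                 # shot yield after post-selection
print(survival_all, survival_kept, kept_fraction)
```

Perfect detection (eta = 1) would make the post-selected survival exactly 1 in this toy; real erasure checks also add a small error per check, which the paper bounds at 7.2e-5.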
Stabilizer Code-Generic Universal Fault-Tolerant Quantum Computation
This paper presents a new method for achieving universal fault-tolerant quantum computation using any stabilizer error-correcting code by implementing logical Clifford and T gates through ancilla-mediated protocols. The approach is deterministic, doesn't consume ancilla registers, and works generically across all stabilizer codes, enabling any single code to perform universal quantum computation.
Key Contributions
- Generic universal fault-tolerant quantum gate implementation for all stabilizer codes
- Deterministic ancilla-mediated protocols for logical Clifford and T gates
- Enabling communication between heterogeneous stabilizer codes
- Method that doesn't modify underlying data codes or consume ancilla registers
View Full Abstract
Fault-tolerant quantum computation allows quantum computations to be carried out while resisting unwanted noise. Several error correcting codes have been developed to achieve this task, but none alone are capable of universal quantum computation. This universality is highly desired and often achieved using additional techniques such as code concatenation, code switching, or magic state distillation, which can be costly and only work for specific codes. This work implements logical Clifford and T gates through novel ancilla-mediated protocols to construct a universal fault-tolerant quantum gate set. Unlike traditional techniques, our implementation is deterministic, does not consume ancilla registers, does not modify the underlying data codes or registers, and is generic over all stabilizer codes. Thus, any single code becomes capable of universal quantum computation by leveraging helper codes in ancilla registers and mid-circuit measurements. Furthermore, since these logical gates are stabilizer code-generic, these implementations enable communication between heterogeneous stabilizer codes. These features collectively open the door to countless possibilities for existing and undiscovered codes as well as their scalable, heterogeneous coexistence.
Critical non-equilibrium phases from noisy topological memories
This paper studies quantum error correction in surface codes under noisy conditions, discovering a critical phase where quantum information partially survives but can only be recovered using global (not local) decoding methods. The researchers map this problem to a statistical physics model of loops to understand when and how quantum information can be preserved despite noise.
Key Contributions
- Discovery of extended non-equilibrium critical phase in surface codes with sub-exponential decay of conditional mutual information
- Introduction of punctured coherent information diagnostic to determine limits of quasi-local quantum error correction
View Full Abstract
We demonstrate the existence of an extended non-equilibrium critical phase, characterized by sub-exponential decay of conditional mutual information (CMI), in the surface code subject to heralded random Pauli measurement channels. By mapping the resulting mixed state to the ensemble of completely packed loops on a square lattice, we relate the extended phase to the Goldstone phase of the loop model. In particular, CMI is controlled by the characteristic length scale of loops, and we use analytic results of the latter to establish polylogarithmic decay of CMI in the critical phase. We find that the critical phase retains partial logical information that can be recovered by a global decoder, but not by any quasi-local decoder. To demonstrate this, we introduce a diagnostic called punctured coherent information which provides a necessary condition for quasi-local decoding.
Elevator Codes: Concatenation for resource-efficient quantum memory under biased noise
This paper introduces 'elevator codes,' a new quantum error correction scheme that uses a two-layer approach to dramatically reduce the number of qubits needed for quantum memory when noise is biased (one type of error is much more common than others). The method combines simple repetition codes for the common errors with high-rate codes for rare errors, achieving over 50% reduction in qubit overhead compared to existing approaches.
Key Contributions
- Introduction of elevator codes using concatenated classical codes with repetition phase-flip inner codes and high-rate bit-flip outer codes
- Demonstration of over 50% reduction in qubit overhead compared to rectangular surface codes and XZZX codes under biased noise conditions
View Full Abstract
Biased-noise qubits, in which one type of error (e.g. $X$- and $Y$-type errors) is significantly suppressed relative to the other (e.g. $Z$-type errors), can significantly reduce the overhead of quantum error correction. Codes such as the rectangular surface code or XZZX code substantially reduce the qubit overhead under biased noise, but they still face challenges. The rectangular surface code suffers from a relatively low threshold, while the XZZX code requires twice as many physical qubits to maintain the same code distance as the surface code. In this work, we introduce a 2D local code construction that outperforms these codes for noise biases $η\ge 7\times10^{4}$, reducing the qubit overhead by over 50% at $p_Z=10^{-3}$ and $η= 2 \times 10^6$ to achieve a logical error rate of $10^{-12}$. Our construction relies on the concatenation of two classical codes. The inner codes are repetition phase-flip codes while the outer codes are high-rate bit-flip codes enabled by their implementation at the logical level, which circumvents device connectivity constraints. These results indicate that under sufficiently biased noise, it is advantageous to address phase-flip and bit-flip errors at different layers of the coding scheme. The inner code should prioritize a high threshold for phase-flip errors, while the bit-flip outer code should optimize for encoding rate efficiency. In the strong biased-noise regime, high-rate outer codes keep the overhead for correcting residual bit-flip errors comparable to that of the repetition code itself, meaningfully lower than that required by earlier approaches.
Quantum Maxwell Erasure Decoder for qLDPC codes
This paper presents a new quantum error correction decoder called the quantum Maxwell erasure decoder for quantum low-density parity-check (qLDPC) codes. The decoder uses a 'bounded guessing' approach that can be tuned between fast linear-time decoding and optimal maximum-likelihood performance by adjusting a 'guessing budget' parameter.
Key Contributions
- Introduction of quantum Maxwell erasure decoder with tunable complexity-performance tradeoff via guessing budget
- Theoretical guarantees on asymptotic performance with demonstration on bivariate bicycle and quantum Tanner codes
View Full Abstract
We introduce a quantum Maxwell erasure decoder for CSS quantum low-density parity-check (qLDPC) codes that extends peeling with bounded guessing. Guesses are tracked symbolically and can be eliminated by restrictive checks, giving a tunable tradeoff between complexity and performance via a guessing budget: an unconstrained budget recovers Maximum-Likelihood (ML) performance, while a constant budget yields linear-time decoding and approximates ML. We provide theoretical guarantees on asymptotic performance and demonstrate strong performance on bivariate bicycle and quantum Tanner codes.
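The classical primitive the decoder extends is erasure peeling: repeatedly find a parity check that touches exactly one still-erased bit, which then fixes that bit's value. A minimal sketch on a small classical code (the bounded-guessing extension is not implemented here; the `None` return marks where the guessing budget would be spent):

```python
import numpy as np

def peel_erasures(H, syndrome, erased):
    """Classical erasure peeling: repeatedly solve any check that touches
    exactly one still-erased bit. Returns {bit: value} for the erased bits,
    or None if peeling stalls (where a guessing budget would kick in)."""
    s = syndrome.copy() % 2
    erased = set(erased)
    solved = {}
    progress = True
    while erased and progress:
        progress = False
        for r in range(H.shape[0]):
            touched = [c for c in np.flatnonzero(H[r]) if c in erased]
            if len(touched) == 1:
                c = touched[0]
                solved[c] = int(s[r])        # this check fixes the bit's value
                if solved[c]:
                    s = (s + H[:, c]) % 2    # remove its syndrome contribution
                erased.discard(c)
                progress = True
    return solved if not erased else None

# [7,4] Hamming parity checks; erase bits {0, 3}, true error only on bit 0
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
e = np.zeros(7, dtype=int); e[0] = 1
syndrome = H @ e % 2
print(peel_erasures(H, syndrome, {0, 3}))   # {0: 1, 3: 0}
```

For a CSS qLDPC code the same routine would run on each of the two classical parity-check matrices; the paper's contribution is tracking guesses symbolically when peeling stalls, with the budget trading decoding time against closeness to maximum likelihood.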
Symmetry-based Perspectives on Hamiltonian Quantum Search Algorithms and Schrodinger's Dynamics between Orthogonal States
This paper analyzes why Grover's quantum search algorithm fails when searching between orthogonal quantum states, proving that constant Hamiltonians cannot achieve time-optimal evolution in two-dimensional subspaces due to inherent symmetries. The authors demonstrate that overcoming this limitation requires either time-dependent Hamiltonians or evolution in higher-dimensional spaces.
Key Contributions
- Theoretical proof that constant Hamiltonians cannot achieve time-optimal evolution between orthogonal states in 2D subspaces
- Identification of symmetry as the fundamental cause of failure in analog quantum search with orthogonal states
View Full Abstract
It is known that the continuous-time variant of Grover's search algorithm is characterized by quantum search frameworks that are governed by stationary Hamiltonians, which result in search trajectories confined to the two-dimensional subspace of the complete Hilbert space formed by the source and target states. Specifically, the search approach is ineffective when the source and target states are orthogonal. In this paper, we employ normalization, orthogonality, and energy limitations to demonstrate that it is unfeasible to breach time-optimality between orthogonal states with constant Hamiltonians when the evolution is limited to the two-dimensional space spanned by the initial and final states. Deviations from time-optimality for unitary evolutions between orthogonal states can only occur with time-dependent Hamiltonian evolutions or, alternatively, with constant Hamiltonian evolutions in higher-dimensional subspaces of the entire Hilbert space. Ultimately, we employ our quantitative analysis to provide meaningful insights regarding the relationship between time-optimal evolutions and analog quantum search methods. We determine that the challenge of transitioning between orthogonal states with a constant Hamiltonian in a sub-optimal time is closely linked to the shortcomings of analog quantum search when the source and target states are orthogonal and not interconnected by the search Hamiltonian. In both scenarios, the fundamental cause of the failure lies in the existence of an inherent symmetry within the system.
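The two-dimensional setting can be illustrated numerically. For a constant Hamiltonian H = E·σx acting on the span of the source and target states (ħ = 1), the transition probability to the orthogonal target is sin²(Et), so the target is reached exactly at the Mandelstam-Tamm time π/(2E) and never earlier — consistent with the paper's conclusion that constant Hamiltonians cannot beat time-optimality in this subspace. The specific Hamiltonian and units are illustrative choices, not the paper's construction.

```python
import numpy as np

E = 1.0                          # energy scale (hbar = 1), illustrative
t_qsl = np.pi / (2.0 * E)        # quantum-speed-limit time to an orthogonal state

def overlap_with_target(t, E=E):
    # exp(-i E t sigma_x) = cos(Et) I - i sin(Et) sigma_x, applied to |0>
    U = np.cos(E * t) * np.eye(2) - 1j * np.sin(E * t) * np.array([[0, 1], [1, 0]])
    psi = U @ np.array([1.0, 0.0])
    return abs(psi[1]) ** 2      # probability of having reached |1>

ts = np.linspace(0, t_qsl, 200)
probs = [overlap_with_target(t) for t in ts]
# the orthogonal target is reached exactly at t_qsl, never earlier
print(probs[-1], max(probs[:-1]))
```

The paper's symmetry argument explains why no choice of constant Hamiltonian confined to this two-dimensional subspace can do better, and why escaping the bound requires time dependence or higher-dimensional evolution.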
Erasure conversion for singlet-triplet spin qubits enables high-performance shuttling-based quantum error correction
This paper develops a fault-tolerant quantum error correction framework using singlet-triplet spin qubits in semiconductor quantum dots, demonstrating how these qubits can function as erasure qubits with hardware-efficient leakage detection. The approach doubles the error correction threshold and significantly reduces logical error rates when combined with the XZZX surface code.
Key Contributions
- Hardware-efficient leakage-detection protocol for singlet-triplet qubits that projects leaked qubits back to computational subspace without measurement feedback
- Demonstration of twofold increase in error correction threshold and orders-of-magnitude reduction in logical error rates using XZZX surface code with leakage-aware decoding
View Full Abstract
Fast and high fidelity shuttling of spin qubits has been demonstrated in semiconductor quantum dot devices. Several architectures based on shuttling have been proposed; it has been suggested that singlet-triplet (dual-spin) qubits could be optimal for the highest shuttling fidelities. Here we present a fault-tolerant framework for quantum error correction based on such dual-spin qubits, establishing them as a natural realisation of erasure qubits within semiconductor architectures. We introduce a hardware-efficient leakage-detection protocol that automatically projects leaked qubits back onto the computational subspace, without the need for measurement feedback or increased classical control overheads. When combined with the XZZX surface code and leakage-aware decoding, we demonstrate a twofold increase in the error correction threshold and achieve orders-of-magnitude reductions in logical error rates. This establishes the singlet-triplet encoding as a practical route toward high-fidelity shuttling and erasure-based, fault-tolerant quantum computation in semiconductor devices.
Minimal-Energy Optimal Control of Tunable Two-Qubit Gates in Superconducting Platforms Using Continuous Dynamical Decoupling
This paper presents a method for creating high-fidelity quantum gates in superconducting quantum computers by combining continuous dynamical decoupling (to suppress noise) with variational optimization techniques to minimize energy while maximizing gate performance. The authors demonstrate their approach on key two-qubit gates like controlled-Z and controlled-X gates, achieving near-perfect fidelity under realistic experimental conditions.
Key Contributions
- Unified scheme combining continuous dynamical decoupling with variational optimal control for high-fidelity superconducting quantum gates
- Demonstration of near-unit fidelity for CZ, CX, and generic entangling gates with low-energy control fields and noise resilience
View Full Abstract
We present a unified scheme for generating high-fidelity entangling gates in superconducting platforms by continuous dynamical decoupling (CDD) combined with variational minimal-energy optimal control. During the CDD stage, we suppress residual couplings, calibration drifting, and quasistatic noise, resulting in a stable effective Hamiltonian that preserves the designed ZZ interaction intended for producing tunable couplers. In this stable $\mathrm{SU}(4)$ manifold, we calculate smooth low-energy single-qubit control functions using a variational geodesic optimization process that directly minimizes gate infidelity. We illustrate the methodology by applying it to CZ, CX, and generic entangling gates, achieving virtually unit fidelity and robustness under restricted single-qubit action, with experimentally realistic control fields. These results establish CDD-enhanced variational geometric optimal control as a practical and noise-resilient scheme for designing superconducting entangling gates.
Experimental Realization of Rabi-Driven Reset for Fast Cooling of a High-Q Cavity
This paper demonstrates a new method called Rabi-Driven Reset (RDR) for quickly cooling quantum memory devices to their ground state, achieving reset times over 100 times faster than natural decay. The technique uses engineered interactions between a superconducting qubit and cavity modes to create an efficient cooling pathway without requiring measurements.
Key Contributions
- Developed a measurement-free, hardware-efficient method for fast reset of high-Q bosonic memories that is over 100x faster than intrinsic decay
- Demonstrated engineered coupling that scales with qubit-mode dispersive interaction rather than weak intermode cross-Kerr, enabling fast cooling in weakly coupled architectures
View Full Abstract
High-Q bosonic memories are central to hardware-efficient quantum error correction, but their isolation makes fast, high-fidelity reset a persistent bottleneck. Existing approaches either rely on weak intermode cross-Kerr conversion or on measurement-based sequences with substantial latency. Here we demonstrate a hardware-efficient Rabi-Driven Reset (RDR) that implements continuous, measurement-free cooling of a superconducting cavity mode. A strong resonant Rabi drive on a transmon, together with sideband drives on the memory and readout modes detuned by the Rabi frequency, converts the dispersive interaction into an effective Jaynes-Cummings coupling between the qubit dressed states and each mode. This realizes a tunable dissipation channel from the memory to the cold readout bath. Crucially, the engineered coupling scales with the qubit-mode dispersive interaction and the drive amplitude, rather than with the intermode cross-Kerr, enabling fast cooling even in very weakly coupled architectures that deliberately suppress direct mode-mode coupling. We demonstrate RDR of a single photon with a decay time of $1.2 μs$, more than two orders of magnitude faster than the intrinsic lifetime. Furthermore, we reset about 30 thermal photons in about $80 μs$ to a steady-state average photon number of $\bar{n} = 0.045 \pm 0.025$.
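A quick consistency check on the reported numbers, under the simplifying assumption of single-exponential relaxation (the real dynamics may differ): with the engineered decay time of 1.2 μs, relaxing ~30 thermal photons for 80 μs leaves a residual far below the reported steady state of n̄ ≈ 0.045.

```python
import numpy as np

tau = 1.2        # engineered decay time (μs), from the paper
n0 = 30.0        # initial thermal photon number
n_ss = 0.045     # reported steady-state photon number
t = 80.0         # reset duration (μs)

# simple single-exponential relaxation toward the steady state (assumption)
n_t = n_ss + (n0 - n_ss) * np.exp(-t / tau)
print(n_t)       # residual above steady state is vanishingly small
```

So on this model the 80 μs reset corresponds to roughly 67 engineered decay constants, and the final photon number is set entirely by the steady-state bath occupation rather than by incomplete decay.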
Noise-Resilient Quantum Evolution in Open Systems through Error-Correcting Frameworks
This paper studies how quantum error correction codes protect quantum information in realistic noisy environments by modeling quantum systems coupled to thermal baths. The researchers compare different error correction codes (five-qubit, Steane, and toric codes) and find that the five-qubit code performs best across various temperature and noise conditions.
Key Contributions
- Developed a quantitative framework for evaluating quantum error correction codes under realistic thermal noise environments using microscopic system-bath models
- Demonstrated that the five-qubit code consistently outperforms Steane and toric codes in open-system settings across different temperature regimes
- Identified critical evolution times for entangled states where quantum error correction transitions from harmful to beneficial for state preservation
View Full Abstract
We analyze quantum state preservation in open quantum systems using quantum error-correcting (QEC) codes that are explicitly embedded into microscopic system-bath models. Instead of abstract quantum channels, we consider multi-qubit registers coupled to bosonic thermal environments, derive a second-order master equation for the reduced dynamics, and use it to benchmark the five-qubit, Steane, and toric codes under local and collective noise. We compute state fidelities for logical qubits as functions of coupling strength, bath temperature, and the number of correction cycles. In the low-temperature regime, we find that repeated error-correction with the five-qubit code strongly suppresses decoherence and relaxation, while in the high-temperature regime, thermal excitations dominate the dynamics and reduce the benefit of all codes, though the five-qubit code still outperforms the Steane and toric codes. For two-qubit Werner states, we identify a critical evolution time before which QEC does not improve fidelity, and this time increases as entanglement grows. After this critical time, QEC does improve fidelity. Comparative analysis further reveals that the five-qubit code (the smallest perfect code) offers consistently higher fidelities than topological and concatenated architectures in these open-system settings. These findings establish a quantitative framework for evaluating QEC under realistic noise environments and provide guidance for developing noise-resilient quantum architectures in near-term quantum technologies.
Generation of Large Coherent-State Superpositions in Free-Space Optical Pulses
This paper demonstrates the experimental generation of large-amplitude squeezed coherent-state superpositions (cat states) in optical pulses, achieving a record amplitude of 2.47 through controlled mixing of specific quantum states and detection techniques. These non-Gaussian quantum states are essential building blocks for continuous-variable quantum information processing.
Key Contributions
- Achievement of record-breaking amplitude (α=2.47) for squeezed cat states in free-space optical pulses
- Demonstration of protocol using controlled Fock state mixing and homodyne detection heralding
- Significant advancement toward scalable optical GKP states for fault-tolerant photonic quantum computing
View Full Abstract
The generation of non-Gaussian quantum states is a key requirement for universal continuous-variable quantum information processing. We report the experimental generation of large-amplitude squeezed coherent-state superpositions (squeezed cat states) on free-space optical pulses, reaching an amplitude of $α= 2.47$, which, to our knowledge, exceeds all previously reported values. Our protocol relies on the controlled mixing of the Fock states $|1\rangle$ and $|2\rangle$ through a tunable beam splitter, followed by heralding via homodyne detection. The resulting state displays three well-resolved negative regions in its Wigner function and achieves a fidelity of $0.53$ with the target state $\propto \hat{S}(z)(|α\rangle - |-α\rangle)$, with $α= 2.47$ and squeezing parameter $z = 0.56$. These results constitute a significant milestone for temporal breeding protocols and for the iterative generation of optical GKP states, opening new perspectives for scalable and fault-tolerant photonic quantum architectures.
Geometry- and Topology-Informed Quantum Computing: From States to Real-Time Control with FPGA Prototypes
This paper presents a comprehensive framework for quantum computing that bridges theoretical quantum mechanics with practical hardware implementation, covering everything from quantum state geometry to real-time error correction on FPGA platforms. It emphasizes the geometry and topology of quantum states while providing concrete hardware-aware approaches to quantum control systems.
Key Contributions
- Geometry-first approach to quantum computing connecting theoretical foundations to hardware implementation
- Real-time topological error correction with FPGA-based decoders and microarchitectural constraints
- Complete quantum control pipeline from quantum Fisher information geometry to low-latency streaming systems
- Integration of Shor's algorithm implementation with practical hardware considerations
View Full Abstract
This book gives a geometry-first, hardware-aware route through quantum-information workflows, with one goal: connect states, circuits, and measurement to deterministic classical pipelines that make hybrid quantum systems run. Part 1 develops the backbone (essential linear algebra, the Bloch-sphere viewpoint, differential-geometric intuition, and quantum Fisher information geometry) so evolution can be read as motion on curved spaces and measurement as statistics. Part 2 reframes circuits as dataflow graphs: measurement outcomes are parsed, aggregated, and reduced to small linear-algebra updates that schedule the next pulses, highlighting why low-latency, low-jitter streaming matters. Part 3 treats multi-qubit structure and entanglement as geometry and computation, including teleportation, superdense coding, entanglement detection, and Shor's algorithm via quantum phase estimation. Part 4 focuses on topological error correction and real-time decoding (Track A): stabilizer codes, surface-code decoding as "topology -> graph -> algorithm", and Union-Find decoders down to microarchitectural/RTL constraints, with verification, fault injection, and host/control-stack integration under product metrics (bounded latency, p99 tails, fail-closed policies, observability). Optional Track C covers quantum cryptography and streaming post-processing (BB84/E91, QBER/abort rules, privacy amplification, and zero-knowledge/post-quantum themes), emphasizing FSMs, counters, and hash pipelines. Appendices provide visualization-driven iCEstick labs (switch-to-bit conditioning, fixed-point phase arithmetic, FSM sequencing, minimal control ISAs), bridging principles to implementable systems.
Sparse quantum state preparation with improved Toffoli cost
This paper develops a more efficient method for preparing sparse quantum states (states with only a small number of non-zero components) by improving the circuit implementation of the isometry mapping step, reducing the required number of Toffoli gates by roughly a factor of log(s)/2 compared to previous methods.
Key Contributions
- Efficient algorithm for implementing isometry mappings in sparse state preparation with ~2s Toffoli gate cost
- Log(s)/2 improvement factor over state-of-the-art methods for sparse quantum state preparation
- Optimization strategies for joint dense-state preparation and isometry steps, particularly for real-coefficient states
View Full Abstract
The preparation of quantum states is one of the most fundamental tasks in quantum computing, and a key primitive in many quantum algorithms. Of particular interest to areas such as quantum simulation and linear-system solvers are sparse quantum states, which contain only a small number $s$ of non-zero computational basis states compared to a generic state. In this work, we present an approach that prepares $s$-sparse states on $n$ qubits, reducing the number of Toffoli gates required compared to prior art. We work in the established framework of first preparing a dense state on a $\lceil{\log(s)}\rceil$-qubit sub-register, and then mapping this state to the target state via an isometry, with the latter step dominating the cost of the full algorithm. The speed-up is achieved by designing an efficient algorithm for finding and implementing the isometry. The worst-case Toffoli cost of our isometry circuit, which may be viewed as a batched version of an approach by Malvetti et al., is essentially $2s$ for sufficiently large values of $n$, yielding roughly a $\log(s)/2$ improvement factor over the state-of-the-art. In numerical benchmarks on randomly chosen states, the cost is closer to $s$. With the improved isometry circuit, we examine the dense-state preparation step and present ways to optimize the joint cost of both steps, particularly in the case of target states with purely real coefficients, by outsourcing some sub-tasks from the dense-state preparation to the isometry.
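The headline improvement can be sketched as simple arithmetic: taking the abstract at its word, the worst-case isometry cost drops from roughly s·log2(s) Toffolis (prior art) to roughly 2s, an improvement factor of about log(s)/2. The leading-term-only cost models below are a deliberate simplification of the abstract's statement.

```python
import numpy as np

def improvement_factor(s):
    """Rough ratio of prior-art to new worst-case Toffoli counts.

    Prior art ~ s * log2(s); this work ~ 2 s (leading terms only,
    constants simplified from the abstract's statement)."""
    return np.log2(s) / 2.0

for s in (2**6, 2**10, 2**14):
    print(s, improvement_factor(s))   # factor grows logarithmically with sparsity
```

For a state with s = 1024 non-zero amplitudes this is about a 5x reduction in Toffoli count; the abstract further notes that on random states the observed cost is closer to s than to the 2s worst case.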
Network-Based Quantum Computing: an efficient design framework for many-small-node distributed fault-tolerant quantum computing
This paper proposes a network-based quantum computing framework for distributed fault-tolerant quantum computation using many small-scale nodes that can each hold only a few logical qubits. The approach moves computational data continuously throughout the network and demonstrates improved efficiency compared to traditional circuit-based and measurement-based quantum computing methods.
Key Contributions
- Novel network-based quantum computing framework for distributed fault-tolerant quantum computation
- Demonstration of improved execution times and node efficiency compared to existing approaches
- Architecture design methodology for exploiting redundancy in many small fault-tolerant nodes
View Full Abstract
In fault-tolerant quantum computing, a large number of physical qubits are required to construct a single logical qubit, and a single quantum node may be able to hold only a small number of logical qubits. In such a case, the idea of distributed fault-tolerant quantum computing (DFTQC) is important to demonstrate large-scale quantum computation using small-scale nodes. However, the design of distributed systems on small-scale nodes, where each node can store only one or a few logical qubits for computation, has not yet been well explored. In this paper, we propose network-based quantum computation (NBQC) to efficiently realize distributed fault-tolerant quantum computation using many small-scale nodes. A key idea of NBQC is to let computational data continuously move throughout the network while maintaining the connectivity to other nodes. We numerically show that, for practical benchmark tasks, our method achieves shorter execution times than circuit-based strategies and more node-efficient constructions than measurement-based quantum computing. Also, if we are allowed to specialize the network to the structure of quantum programs, such as peak access frequencies, the number of nodes can be significantly reduced. Thus, our methods provide a foundation for designing DFTQC architectures that exploit the redundancy of many small fault-tolerant nodes.
Many-Body Effects in Dark-State Laser Cooling
This paper develops a theoretical framework for understanding laser cooling of trapped ions, specifically how cooling efficiency changes when multiple ions are present. The research provides guidelines for optimizing the cooling process needed to prepare ions in their quantum ground state, which is essential for high-fidelity quantum operations.
Key Contributions
- Unified many-body theory explaining ion-number-dependent cooling behavior
- Analytic results for both weak and strong coupling regimes with experimental optimization guidelines
- Identification of collective dynamics that enhance cooling rates in large ion crystals
View Full Abstract
We develop a unified many-body theory of two-photon dark-state laser cooling, the workhorse for preparing trapped ions close to their motional quantum ground state. For ions with a $\Lambda$ level structure, driven by Raman lasers, we identify an ion-number-dependent crossover between weak and strong coupling where both the cooling rate and final temperature are simultaneously optimized. We obtain simple analytic results in both extremes: In the weak coupling limit, we show that a Lorentzian spin-absorption spectrum determines the cooling rate and final occupation of the motional state, which are both independent of the number of ions. We also highlight the benefit of including an additional spin-dependent force in this case. In the strong coupling regime, our theory reveals the role of collective dynamics arising from phonon exchange between dark and bright states, allowing us to explain the enhancement of the cooling rate with increasing ion number. Our analytic results agree closely with exact numerical simulations and provide experimentally accessible guidelines for optimizing cooling in large ion crystals, a key step toward scalable, high-fidelity trapped-ion quantum technologies.
Bidirectional Decoding for Concatenated Quantum Hamming Codes
This paper develops a new 'bidirectional' decoding algorithm for concatenated quantum Hamming codes that uses information from higher-level error syndromes to improve lower-level error correction decisions. The method significantly improves error correction thresholds and maintains better distance scaling compared to conventional local decoding approaches.
Key Contributions
- Introduction of bidirectional decoding strategy that leverages higher-level syndrome information to improve lower-level error correction
- Demonstration of improved error threshold from 1.56% to 4.35% for concatenated quantum Hamming codes
- Preservation of full 3^L code-distance scaling across multiple concatenation levels
View Full Abstract
High-rate concatenated quantum codes offer a promising pathway toward fault-tolerant quantum computation, yet designing efficient decoders that fully exploit their error-correction capability remains a significant challenge. In this work, we introduce a hard-decision decoder for concatenated quantum Hamming codes with time complexity polynomial in the block length. This decoder overcomes the limitations of conventional local decoding by leveraging higher-level syndrome information to revise lower-level recovery decisions -- a strategy we refer to as bidirectional decoding. For the concatenated $[[15,7,3]]$ quantum Hamming code under independent bit-flip noise, the bidirectional decoder improves the threshold from approximately $1.56\%$ to $4.35\%$ compared with standard local decoding. Moreover, the decoder empirically preserves the full $3^{L}$ code-distance scaling for at least three levels of concatenation, resulting in substantially faster logical-error suppression than the $2^{L+1}$ scaling offered by local decoders. Our results can enhance the competitiveness of concatenated-code architectures for low-overhead fault-tolerant quantum computation.
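To see why preserving the full 3^L distance matters, here is a rough sketch using the generic below-threshold suppression heuristic p_L ~ (p/p_th)^⌈d/2⌉; both the formula and its exponent are textbook illustrations, not the paper's error model:

```python
def logical_error_rate(p: float, p_th: float, d: int) -> float:
    """Generic below-threshold heuristic: p_L ~ (p/p_th)**ceil(d/2)."""
    return (p / p_th) ** ((d + 1) // 2)

p = 0.01  # physical bit-flip rate
for L in (1, 2, 3):
    d_bidir = 3 ** L        # full code distance, preserved by bidirectional decoding
    d_local = 2 ** (L + 1)  # effective distance scaling of local decoders (per the abstract)
    print(L,
          logical_error_rate(p, 0.0435, d_bidir),   # bidirectional: 4.35% threshold
          logical_error_rate(p, 0.0156, d_local))   # local: 1.56% threshold
```

Already at two levels of concatenation the combination of higher threshold and faster distance growth gives the bidirectional decoder a dramatically lower logical error rate in this toy model.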
Obfuscation of Arbitrary Quantum Circuits
This paper presents the first quantum obfuscation scheme that can hide the internal structure of arbitrary quantum circuits while preserving their functionality, extending beyond previous work that only handled specific types of quantum operations. The authors introduce a novel cryptographic primitive called subspace-preserving strong pseudorandom unitaries (spsPRU) to achieve this general obfuscation.
Key Contributions
- First quantum ideal obfuscation scheme for arbitrary quantum circuits supporting quantum inputs and outputs
- Introduction of subspace-preserving strong pseudorandom unitary (spsPRU) primitive
- Extension of obfuscation beyond unitaries to general completely positive trace-preserving maps
View Full Abstract
Program obfuscation aims to conceal a program's internal structure while preserving its functionality. A central open problem is whether an obfuscation scheme for arbitrary quantum circuits exists. Despite several efforts having been made toward this goal, prior works have succeeded only in obfuscating quantum circuits that implement either pseudo-deterministic functions or unitary transformations. Although unitary transformations already include a broad class of quantum computation, many important quantum tasks, such as state preparation and quantum error-correction, go beyond unitaries and fall within general completely positive trace-preserving maps. In this work, we construct the first quantum ideal obfuscation scheme for arbitrary quantum circuits that support quantum inputs and outputs in the classical oracle model assuming post-quantum one-way functions, thereby resolving an open problem posed in Bartusek et al. (STOC 2023), Bartusek, Brakerski, and Vaikuntanathan (STOC 2024), and Huang and Tang (FOCS 2025). At the core of our construction lies a novel primitive that we introduce, called the subspace-preserving strong pseudorandom unitary (spsPRU). An spsPRU is a family of efficient unitaries that fix every vector in a given linear subspace $S$, while acting as a Haar random unitary on the orthogonal complement $S^\perp$ under both forward and inverse oracle queries. Furthermore, by instantiating the classical oracle model with the ideal obfuscation scheme for classical circuits proposed by Jain et al. (CRYPTO 2023) and later enhanced by Bartusek et al. (arxiv:2510.05316), our obfuscation scheme can also be realized in the quantumly accessible pseudorandom oracle model.
Breaking the Orthogonality Barrier in Quantum LDPC Codes
This paper develops improved quantum low-density parity-check (LDPC) codes that overcome traditional trade-offs between orthogonality constraints and code performance. The authors construct quantum error-correcting codes with better structural properties (large girth, regular degree distributions) while maintaining the orthogonality requirements unique to quantum codes, demonstrating a specific code that achieves very low error rates under realistic noise conditions.
Key Contributions
- Breaking the conventional trade-off between orthogonality, regularity, girth, and minimum distance in quantum LDPC codes through controlled permutation matrices
- Demonstrating a concrete girth-8, (3,12)-regular quantum LDPC code with 9216 physical qubits encoding 4612 logical qubits that achieves frame error rates as low as 10^-8
View Full Abstract
Classical low-density parity-check (LDPC) codes are a widely deployed and well-established technology, forming the backbone of modern communication and storage systems. It is well known that, in this classical setting, increasing the girth of the Tanner graph while maintaining regular degree distributions leads simultaneously to good belief-propagation (BP) decoding performance and large minimum distance. In the quantum setting, however, this principle does not directly apply because quantum LDPC codes must satisfy additional orthogonality constraints between their parity-check matrices. When one enforces both orthogonality and regularity in a straightforward manner, the girth is typically reduced and the minimum distance becomes structurally upper bounded. In this work, we overcome this limitation by using permutation matrices with controlled commutativity and by restricting the orthogonality constraints to only the necessary parts of the construction, while preserving regular check-matrix structures. This design breaks the conventional trade-off between orthogonality, regularity, girth, and minimum distance, allowing us to construct quantum LDPC codes with large girth and without the usual distance upper bounds. As a concrete demonstration, we construct a girth-8, (3,12)-regular $[[9216,4612, \leq 48]]$ quantum LDPC code and show that, under BP decoding combined with a low-complexity post-processing algorithm, it achieves a frame error rate as low as $10^{-8}$ on the depolarizing channel with error probability $4 \%$.
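The CSS orthogonality constraint discussed above is easy to state concretely: the X- and Z-type parity-check matrices must satisfy H_X H_Z^T = 0 over GF(2). A minimal sketch, using the toy [[4,2,2]] code rather than the paper's dyadic construction:

```python
import numpy as np

def is_css_pair(hx: np.ndarray, hz: np.ndarray) -> bool:
    """CSS condition: every X check commutes with every Z check,
    i.e. Hx @ Hz.T == 0 (mod 2)."""
    return not np.any((hx @ hz.T) % 2)

# Toy example: the [[4,2,2]] code, whose single X and Z checks overlap on 4 qubits (even)
hx = np.array([[1, 1, 1, 1]])
hz = np.array([[1, 1, 1, 1]])
print(is_css_pair(hx, hz))  # True
```

Any pair of checks with odd overlap, e.g. `[[1,1,0,0]]` against `[[1,0,0,0]]`, fails this test, which is exactly the constraint that makes naive regular, high-girth constructions hard in the quantum setting.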
Optimal logical Bell measurements on stabilizer codes with linear optics
This paper develops optimal methods for performing Bell measurements on logical qubits encoded in photonic quantum error-correcting codes using linear optics. The authors prove that any logical Bell measurement can be mapped to a single physical Bell measurement and demonstrate schemes that achieve theoretical upper bounds for success probability across multiple stabilizer codes.
Key Contributions
- Proved that any logical Bell measurement on stabilizer codes maps to a single physical Bell measurement on any qubit pair
- Established general upper bounds on success probability for logical Bell measurements with linear optics
- Developed optimal schemes achieving theoretical bounds for multiple quantum error-correcting codes including surface codes and Steane codes
View Full Abstract
Bell measurements (BMs) are ubiquitous in quantum information and technology. They are basic elements for quantum communication, computation, and error correction. In particular, when performed on logical qubits encoded in physical photonic qubits, they allow for a read-out of stabilizer syndrome information to enhance loss tolerance in qubit-state transmission and fusion. However, even in an ideal setting without photon loss, BMs cannot be done perfectly based on the simplest experimental toolbox of linear optics. Here we demonstrate that any logical BM on stabilizer codes can always be mapped onto a single physical BM performed on any qubit pair from the two codes. As a necessary condition for the success of a logical BM, this provides a general upper bound on its success probability, especially ruling out the possibility that the stabilizer information obtainable from only partially succeeding, physical linear-optics BMs could be combined into the full logical stabilizer information. We formulate sufficient criteria to find schemes for which a single successful BM on the physical level will always allow us to obtain the full logical information by suitably adapting the subsequent physical measurements. Our approach based on stabilizer group theory is generally applicable to any stabilizer code, which we demonstrate for quantum parity, five-qubit, standard and rotated planar surface, tree, and seven-qubit Steane codes. Our schemes attain the general upper bound for all these codes, while this bound had previously only been reached for the quantum parity code.
Single-Period Floquet Control of Bosonic Codes with Quantum Lattice Gates
This paper introduces a new method for controlling bosonic quantum codes using single-period Floquet protocols, eliminating the need for slow multi-thousand period driving sequences. The approach enables efficient preparation of bosonic codes and implementation of logical gates using quantum lattice gates that exploit Josephson junction nonlinearity.
Key Contributions
- Development of single-period Floquet method for direct unitary synthesis in bosonic systems
- Demonstration of high-fidelity bosonic code preparation and logical gate implementation using quantum lattice gates
View Full Abstract
Bosonic codes constitute a promising route to fault-tolerant quantum computing. Existing Floquet protocols enable analytical construction of bosonic codes but typically rely on slow adiabatic ramps with thousands of driving periods. In this work, we circumvent this bottleneck by introducing an analytical and deterministic Floquet method that directly synthesizes arbitrary unitaries within a single period. The phase-space unitary ensembles generated by our approach reproduce the Haar-random statistics, enabling practical pseudorandom unitaries in continuous-variable systems. We prepare various prototypical bosonic codes from vacuum and implement single-qubit logical gates with high fidelities using quantum lattice gates. By harnessing the full intrinsic nonlinearity of Josephson junctions, quantum lattice gates decompose quantum circuits into primitive operations for efficient continuous-variable quantum computing.
Quantum CSS LDPC Codes based on Dyadic Matrices for Belief Propagation-based Decoding
This paper develops a new method for constructing quantum error-correcting codes called quantum CSS LDPC codes using dyadic matrices. The codes are designed to work with a specific type of decoder that can better handle problematic short cycles in the code structure by concentrating them at single points.
Key Contributions
- Algebraic construction method for quantum LDPC codes using dyadic matrices
- CSS framework extension with compatibility conditions for CAMEL-ensemble quaternary belief propagation decoder
View Full Abstract
Quantum low-density parity-check (QLDPC) codes provide a practical balance between error-correction capability and implementation complexity in quantum error correction (QEC). In this paper, we propose an algebraic construction based on dyadic matrices for designing both classical and quantum LDPC codes. The method first generates classical binary quasi-dyadic LDPC codes whose Tanner graphs have girth 6. It is then extended to the Calderbank-Shor-Steane (CSS) framework, where the two component parity-check matrices are built to satisfy the compatibility condition required by the recently introduced CAMEL-ensemble quaternary belief propagation decoder. This compatibility condition ensures that all unavoidable cycles of length 4 are assembled in a single variable node, allowing the mitigation of their detrimental effects by decimating that variable node.
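For readers unfamiliar with dyadic matrices: a 2^m x 2^m dyadic matrix is fully determined by its first row h via A[i, j] = h[i XOR j]. A minimal sketch of that definition (illustrative only; the paper's quasi-dyadic construction imposes further structure on top of this):

```python
import numpy as np

def dyadic_matrix(h):
    """Dyadic matrix of order 2^m defined by its first row h: A[i, j] = h[i XOR j]."""
    n = len(h)
    assert n & (n - 1) == 0, "length must be a power of two"
    return np.array([[h[i ^ j] for j in range(n)] for i in range(n)])

A = dyadic_matrix([0, 1, 1, 0])
print(A)
```

Because i XOR j = j XOR i, every dyadic matrix is symmetric, one of the algebraic regularities the construction exploits.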
Asymptotically good CSS codes that realize the logical transversal Clifford group fault-tolerantly
This paper develops new methods for constructing CSS quantum error-correcting codes that can perform fault-tolerant logical operations using transversal gates, specifically focusing on implementing the Clifford group of quantum operations. The work provides both theoretical frameworks and demonstrates that these codes can achieve good scaling properties while maintaining fault tolerance.
Key Contributions
- Framework for constructing CSS codes supporting fault-tolerant logical transversal Z-rotations
- Demonstration of asymptotically good CSS codes realizing the logical transversal Clifford group
- Analysis of CSS-T codes including necessary conditions and revised characterizations for logical T gate implementation
View Full Abstract
This paper introduces a framework for constructing Calderbank-Shor-Steane (CSS) codes that support fault-tolerant logical transversal $Z$-rotations. Using this framework, we obtain asymptotically good CSS codes that fault-tolerantly realize the logical transversal Clifford group. Furthermore, investigating CSS-T codes, we: (a) demonstrate asymptotically good CSS-T codes wherein the transversal $T$ realizes the logical transversal $S^{\dagger}$; (b) show that the condition $C_2 \ast C_1 \subseteq C_1^{\perp}$ is necessary but not sufficient for CSS-T codes; and (c) revise the characterizations of CSS-T codes wherein the transversal $T$ implements the logical identity and the logical transversal $T$, respectively.
Symmetry-Adapted State Preparation for Quantum Chemistry on Fault-Tolerant Quantum Computers
This paper develops efficient methods to prepare quantum states with proper symmetries for quantum chemistry calculations on fault-tolerant quantum computers. The approach uses symmetry projectors that significantly improve the success rate of quantum phase estimation while requiring orders of magnitude fewer resources than the main computation.
Key Contributions
- Development of resource-efficient symmetry projectors using linear combination of unitaries and generalized quantum signal processing for fault-tolerant quantum chemistry
- Demonstration that symmetry filtering reduces overall computational cost by 3-4 orders of magnitude compared to unfiltered approaches while substantially increasing quantum phase estimation success probability
View Full Abstract
We present systematic and resource-efficient constructions of continuous symmetry projectors, particularly $U(1)$ particle number and $SU(2)$ total spin, tailored for fault-tolerant quantum computations. Our approach employs a linear combination of unitaries (LCU) as well as generalized quantum signal processing (GQSP and GQSVT) to implement projectors. These projectors can then be coherently applied as state filters prior to quantum phase estimation (QPE). We analyze their asymptotic gate complexities for explicit circuit realizations. For the particle number and $S_z$ symmetries, GQSP offers favorable resource usage features owing to its low ancilla qubit requirements and robustness to finite precision rotation gate synthesis. For the total spin projection, the structured decomposition of $\hat{P}_{S,M_S}$ reduces the projector T gate count. Numerical simulations show that symmetry filtering substantially increases the QPE success probability, leading to a lower overall cost compared to that of unfiltered approaches across representative molecular systems. Resource estimates further indicate that the cost of symmetry filtering is $3$ to $4$ orders of magnitude lower than that of the subsequent phase estimation step. This advantage is especially relevant in large, strongly correlated systems, such as FeMoco, a standard strongly correlated open-shell benchmark. For FeMoco, the QPE cost is estimated at ${\sim}10^{10}$ T gates, while our symmetry projector requires only ${\sim}10^{6}$--$10^{7}$ T gates. These results establish continuous-symmetry projectors as practical and scalable tools for state preparation in quantum chemistry and provide a pathway toward realizing more efficient fault-tolerant quantum simulations.
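The effect of a U(1) particle-number filter can be sketched classically on a small statevector; this toy projector (assuming a simple occupation-number encoding where each qubit is an orbital) ignores the LCU/GQSP circuit machinery the paper actually develops:

```python
import numpy as np

def project_particle_number(state: np.ndarray, n_qubits: int, n_particles: int) -> np.ndarray:
    """Project a statevector onto the fixed Hamming-weight (particle-number) sector
    and renormalize; the squared norm before renormalizing is the success probability."""
    mask = np.array([bin(b).count("1") == n_particles for b in range(2 ** n_qubits)])
    projected = np.where(mask, state, 0.0)
    norm = np.linalg.norm(projected)
    return projected / norm if norm > 0 else projected

# |psi> = (|01> + |10> + |11>)/sqrt(3); filter out the 2-particle component |11>
psi = np.array([0, 1, 1, 1], dtype=complex) / np.sqrt(3)
filtered = project_particle_number(psi, 2, 1)  # (|01> + |10>)/sqrt(2)
```

Filtering away symmetry-violating components before QPE is exactly what boosts the overlap with the target eigenstate and hence the QPE success probability.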
Toolchain for shuttling trapped-ion qubits in segmented traps
This paper presents a computational toolchain for optimizing the movement of trapped-ion qubits between different zones in segmented radiofrequency traps without causing unwanted vibrations. The framework helps design voltage waveforms that enable fast, reliable qubit transport in complex trap geometries for scalable quantum computing.
Key Contributions
- Numerical toolchain for generating optimized voltage waveforms for ion shuttling in segmented traps
- Framework supporting arbitrary trap geometries including junctions and multi-zone layouts with experimental constraints
- Validation methodology comparing predicted and measured secular frequencies with performance optimization for complex architectures
View Full Abstract
Scalable trapped-ion quantum computing requires fast and reliable transport of ions through complex, segmented radiofrequency trap architectures without inducing excessive motional excitation. We present a numerical toolchain for the systematic generation of time-dependent electrode voltages enabling fast, low-excitation ion shuttling in segmented radiofrequency traps. Based on a model of the trap electrode geometry, the framework combines an electrostatic field solver, efficient unconstrained optimization, waveform postprocessing, and dynamical simulations of ion motion to compute voltage waveforms that realize prescribed transport trajectories while respecting experimental constraints such as voltage limits and bandwidth. The toolchain supports arbitrary trap geometries, including junctions and multi-zone layouts, and allows for the flexible incorporation of optimization objectives. We provide a detailed assessment of the accuracy of the framework by investigating its numerical stability and by comparing measured and predicted secular frequencies. The framework is optimized for numerical performance, enabling rapid numerical prototyping of trap architectures of increasing complexity. As application examples, we apply the framework to the transport of a potential well along a linear, uniformly segmented trap, and we compute a solution for shuttling a potential well around the corner of an X-type trap junction. The presented approach provides an extensible and highly efficient numerical foundation for designing and validating transport protocols in current and next-generation trapped-ion processors.
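A common starting point for such transport trajectories is a polynomial profile with vanishing velocity and acceleration at both endpoints; this minimal-jerk sketch is a generic illustration of a prescribed transport trajectory, not the toolchain's optimizer:

```python
import numpy as np

def transport_trajectory(x0: float, x1: float, T: float, t):
    """Smooth transport profile with zero velocity and acceleration at both endpoints
    (5th-order 'minimal-jerk' polynomial), a common ansatz for low-excitation shuttling."""
    tau = np.clip(np.asarray(t) / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (x1 - x0) * s

# Move an ion 100 um along the trap axis in 1 unit of time
t = np.linspace(0, 1.0, 5)
print(transport_trajectory(0.0, 100e-6, 1.0, t))
```

A toolchain like the one described would then solve for time-dependent electrode voltages whose potential minimum tracks such a trajectory, subject to voltage and bandwidth limits.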
A dataflow programming framework for linear optical distributed quantum computing
This paper introduces a graphical programming framework that combines linear optics, quantum computing theory, and dataflow programming to design and verify distributed quantum computing systems that use photons to connect quantum processors. The framework enables formal analysis and optimization of networked quantum architectures with classical control systems.
Key Contributions
- Development of a unified graphical framework integrating linear optics, ZX-calculus, and dataflow programming for distributed quantum computing
- Classification of entangling photonic fusion measurements with novel error correction flow structures
- Correctness proofs for repeat-until-success protocols enabling arbitrary fusions
- Construction of universal quantum computing architectures using practical optical components with deterministic operation guarantees
View Full Abstract
Photonic systems offer a promising platform for interconnecting quantum processors and enabling scalable, networked architectures. Designing and verifying such architectures requires a unified formalism that integrates linear algebraic reasoning with probabilistic and control-flow structures. In this work, we introduce a graphical framework for distributed quantum computing that brings together linear optics, the ZX-calculus, and dataflow programming. Our language supports the formal analysis and optimization of distributed protocols involving both qubits and photonic modes, with explicit interfaces for classical control and feedforward, all expressed within a synchronous dataflow model with discrete-time dynamics. Within this setting, we classify entangling photonic fusion measurements, show how their induced Pauli errors can be corrected via a novel flow structure for fusion networks, and establish correctness proofs for new repeat-until-success protocols enabling arbitrary fusions. Layer by layer, we construct qubit architectures incorporating practical optical components such as beam splitters, switches, and photon sources, with graphical proofs that they are deterministic and support universal quantum computation. Together, these results establish a foundation for verifiable compilation and automated optimization in networked quantum computing.
Extending Qubit Coherence Time via Hybrid Dynamical Decoupling
This paper presents a hybrid approach combining dynamical decoupling pulses with bath spin polarization to significantly extend qubit coherence times by 2-3 orders of magnitude. The technique is demonstrated using the central spin model applicable to GaAs quantum dots and similar quantum systems.
Key Contributions
- Development of hybrid dynamical decoupling technique combining pulsed DD with bath spin polarization
- Demonstration of 2-3 orders of magnitude improvement in qubit coherence time
- Application to practical quantum systems including GaAs/AlGaAs and silicon-based platforms
View Full Abstract
Dynamical decoupling (DD) and bath engineering are two parallel techniques employed to mitigate qubit decoherence resulting from their unavoidable coupling to the environment. Here, we present a hybrid DD approach that integrates pulsed DD with bath spin polarization to enhance qubit coherence within the central spin model. This model, which can be realized using GaAs semiconductor quantum dots or analogous quantum simulators, demonstrates a significant extension of the central spin's coherence time, by approximately 2 to 3 orders of magnitude compared with the free-induction decay time, with the dominant contribution coming from DD and a moderate improvement from spin-bath polarization. This study, which integrates uniaxial dynamical decoupling and auxiliary bath-spin engineering, paves the way for prolonging coherence times in various practical quantum systems, including GaAs/AlGaAs, silicon, and Si/SiGe platforms. This advancement holds substantial promise for applications in quantum information processing.
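For context, the pulse timing of a standard CPMG sequence, one of the textbook pulsed-DD schemes (the paper's hybrid protocol adds bath polarization on top of pulsed DD), can be computed as:

```python
import numpy as np

def cpmg_pulse_times(T: float, n: int) -> np.ndarray:
    """pi-pulse times of an n-pulse CPMG dynamical-decoupling sequence over duration T:
    t_k = T * (k - 1/2) / n, which refocuses slowly fluctuating bath noise."""
    k = np.arange(1, n + 1)
    return T * (k - 0.5) / n

print(cpmg_pulse_times(1.0, 4))  # [0.125 0.375 0.625 0.875]
```

Each pi pulse flips the qubit so that phase accumulated from a quasi-static bath before the pulse cancels the phase accumulated after it.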
Learning Better Error Correction Codes with Hybrid Quantum-Assisted Machine Learning
This paper develops a hybrid classical-quantum machine learning approach to automatically discover better quantum error correction codes. The method combines reinforcement learning with quantum device testing to find stabilizer codes optimized for specific hardware errors and photon loss.
Key Contributions
- Hybrid classical-quantum reinforcement learning algorithm for error correction code discovery
- Device-specific error correction code optimization using commercial quantum hardware
View Full Abstract
Quantum error correction is one of the fundamental building blocks of digital quantum computation. The Quantum Lego formalism has introduced a systematic way of constructing new stabilizer codes out of basic lego-like building blocks, which in previous work we have used to generate improved error correcting codes via an automated reinforcement learning process. Here, we take this a step further and show the use of a hybrid classical-quantum algorithm. We combine classical reinforcement learning with calls to two commercial quantum devices to search for a stabilizer code to correct errors specific to the device, as well as an induced photon loss error.
Mechanical Resonator-based Quantum Computing
This paper demonstrates a new quantum computing architecture that uses mechanical resonators (like acoustic wave devices) controlled by superconducting qubits to perform quantum computations. The researchers show they can implement universal quantum gates and run quantum algorithms like the quantum Fourier transform using this hybrid mechanical-superconducting system.
Key Contributions
- Demonstration of universal quantum gate set using mechanical resonators controlled by superconducting qubits
- Implementation of quantum Fourier transform and quantum period finding algorithms on mechanical modes
- New hybrid architecture combining mechanical systems with superconducting circuits for quantum computing
View Full Abstract
Hybrid quantum systems combine the unique advantages of different physical platforms with the goal of realizing more powerful and practical quantum information processing devices. Mechanical systems, such as bulk acoustic wave resonators, feature a large number of highly coherent harmonic modes in a compact footprint, which complements the strong nonlinearities and fast operation times of superconducting quantum circuits. Here, we demonstrate an architecture for mechanical resonator-based quantum computing, in which a superconducting qubit is used to perform quantum gates on a collection of mechanical modes. We show the implementation of a universal gate set, composed of single-qubit gates and controlled arbitrary-phase gates, and showcase their use in the quantum Fourier transform and quantum period finding algorithms. These results pave the way toward using mechanical systems to build crucial components for future quantum technologies, such as quantum random-access memories.
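Mathematically, the quantum Fourier transform demonstrated on the mechanical modes is the unitary below; this classical sketch verifies unitarity and shows how a period-4 state produces the spectral peaks exploited by period finding (an illustrative simulation, not the experiment):

```python
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Unitary matrix of the quantum Fourier transform on n qubits."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F @ F.conj().T, np.eye(8)))  # True: F is unitary

# A state with period 4 on 8 levels: (|0> + |4>)/sqrt(2)
psi = np.zeros(8)
psi[[0, 4]] = 1 / np.sqrt(2)
peaks = np.flatnonzero(np.abs(F @ psi) > 1e-9)
print(peaks)  # [0 2 4 6]: peaks at multiples of 8/4 = 2, revealing the period
```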
Hardware-Economic Manipulation of Dual-Type ${}^{171}$Yb$^+$ Qubits
This paper demonstrates a cost-effective method to control two different types of qubits in ytterbium ions using just one laser instead of multiple lasers. The researchers show they can perform quantum operations on both qubit types and create entanglement between them, which could make quantum computers simpler and cheaper to build.
Key Contributions
- Hardware-economic control of dual-type qubits using single 355 nm mode-locked pulsed laser
- Demonstration of direct entangling gate between two different qubit types in Yb-171 ions
- Simplification of trapped-ion quantum computer manipulation at both hardware and software levels
View Full Abstract
The dual-type qubit scheme is an emerging method to suppress crosstalk errors in scalable trapped-ion quantum computation and quantum network. Here we report a hardware-economic way to control dual-type $^{171}\mathrm{Yb}^+$ qubits using a single $355\,$nm mode-locked pulsed laser. Utilizing its broad frequency comb structure, we drive the Raman transitions of both qubit types encoded in the $S_{1/2}$ and the $F_{7/2}$ hyperfine levels, and probe their carrier transitions and the motional sidebands. We further demonstrate a direct entangling gate between the two qubit types. Our work can simplify the manipulation of the $^{171}\mathrm{Yb}^+$ qubits both at the hardware and the software level.
Fault-tolerant modular quantum computing with surface codes using single-shot emission-based hardware
This paper develops improved methods for connecting quantum computing modules in a network by generating high-quality entangled states using light-based emission protocols, eliminating the need for slow memory operations and achieving better error thresholds for fault-tolerant quantum computing.
Key Contributions
- Single-shot emission-based protocol for generating GHZ states without Bell-pair fusion
- Elimination of memory-based two-qubit gates in modular quantum computing
- Improved fault-tolerance thresholds from ~0.16% to 0.19–0.24% for surface codes
View Full Abstract
Fault-tolerant modular quantum computing requires stabilizer measurements across the modules in a quantum network. For this, entangled states of high quality and rate must be distributed. Currently, two main types of entanglement distribution protocols exist, namely emission-based and scattering-based, each with its own advantages and drawbacks. On the one hand, scattering-based protocols with cavities or waveguides are fast but demand stringent hardware such as high-efficiency integrated circulators or strong waveguide coupling. On the other hand, emission-based platforms are experimentally feasible but so far rely on Bell-pair fusion with extensive use of slow two-qubit memory gates, limiting thresholds to $\approx 0.16\%$. Here, we consider a fully distributed surface code using emission-based entanglement schemes that generate GHZ states in a single shot, i.e., without the need for Bell-pair fusions. We show that our optical setup produces Bell pairs, W states, and GHZ states, enabling both memory-based and optical protocols for distilling high-fidelity GHZ states with significantly improved success rates. Furthermore, we introduce protocols that completely eliminate the need for memory-based two-qubit gates, achieving thresholds of $\approx 0.19\%$ with modest hardware enhancements, increasing to above $\approx 0.24\%$ with photon-number-resolving detectors. These results show the feasibility of emission-based architectures for scalable fault-tolerant operation.
Quantum Error Correction and Detection for Quantum Machine Learning
This paper examines how to integrate quantum error correction and detection methods into quantum machine learning systems given current hardware limitations. The authors propose partial error correction approaches to reduce resource overhead and demonstrate quantum error detection methods for near-term QML applications.
Key Contributions
- Quantification of resource demands for fully error-corrected quantum machine learning
- Proposal of partial quantum error correction approach to reduce overhead while enabling error correction
- Demonstration and evaluation of quantum error detection methods for QML performance
View Full Abstract
At the intersection of quantum computing and machine learning, quantum machine learning (QML) is poised to revolutionize artificial intelligence. However, the vulnerability of the current generation of quantum computers to noise and computational error poses a significant barrier to this vision. Whilst quantum error correction (QEC) offers a promising solution for almost any type of hardware noise, its application requires millions of qubits to encode even a simple logical algorithm, rendering it impractical in the near term. In this chapter, we examine strategies for integrating QEC and quantum error detection (QED) into QML under realistic resource constraints. We first quantify the resource demands of fully error-corrected QML and propose a partial QEC approach that reduces overhead while enabling error correction. We then demonstrate the application of a simple QED method, evaluating its impact on QML performance and highlighting challenges we have yet to overcome before we achieve fully fault-tolerant QML.
Composable Verification in the Circuit-Model via Magic-Blindness
This paper develops new verification protocols that allow users to securely check whether their quantum computations were performed correctly, even when the quantum computer might be faulty or malicious. The approach works directly with circuit-based quantum computers using magic state injection, offering better efficiency and security guarantees than previous methods.
Key Contributions
- Introduction of magic-blindness concept for circuit-based quantum verification
- Development of noise-robust and composable verification protocols for Clifford + MSI circuits
- Reduction of quantum communication costs by requiring transmission only at magic state injection locations
- Bridge between MBQC and circuit-based verification protocols with equivalent security guarantees
View Full Abstract
As quantum computing machines move towards the utility regime, it is essential that users are able to verify their delegated quantum computations with security guarantees that are (i) robust to noise, (ii) composable with other secure protocols, and (iii) exponentially stronger as the number of resources dedicated to security increases. Previous works that achieve these guarantees and provide the modularity needed to tailor protocols to real-world hardware are most often expressed in the Measurement-Based Quantum Computation (MBQC) model. This leaves architectures based on the circuit model -- in particular those using Magic State Injection (MSI) -- with fewer options to verify their computations, or with the need to compile their circuits into MBQC, incurring overheads. This paper introduces a family of noise-robust, composable and efficient verification protocols for Clifford + MSI circuits that are secure against arbitrary malicious behavior. This family contains the verification protocol of Broadbent (ToC, 2018), extends its security guarantees while also bridging the modularity gap between MBQC and circuit-based protocols, and reduces quantum communication costs. As a result, it opens the prospect of rapid implementation for near-term quantum devices. Our technique is based on a refined notion of blindness, called magic-blindness, which hides only the injected magic states -- the sole source of non-Clifford computational power. This enables verification by randomly interleaving computation rounds with classically simulable, magic-free test rounds, leading to a trap-based framework for verification. As a result, circuit-based quantum verification attains the same level of security and robustness previously known only in MBQC. It also reduces the quantum communication cost, as transmitted qubits are required only at the locations of state injection.
Below-threshold error reduction in single photons through photon distillation
This paper demonstrates a technique called photon distillation that reduces errors in photonic quantum computers by purifying single photons to improve their indistinguishability. The method is more resource-efficient than traditional quantum error correction and shows actual error reduction even when accounting for noise from the distillation process itself.
Key Contributions
- Experimental demonstration of scalable photon distillation for error mitigation
- Achievement of unconditional error reduction below threshold with net gain under fault-tolerant conditions
- Development of intrinsically bosonic error reduction strategy more efficient than quantum error correction
View Full Abstract
Photonic quantum computers use the bosonic statistics of photons to construct, through quantum interference, the large entangled states required for measurement-based quantum computation. Therefore, any which-way information present in the photons will degrade quantum interference and introduce errors. While quantum error correction can address such errors in principle, it is highly resource-intensive and operates with a low error threshold, requiring numerous high-quality optical components. We experimentally demonstrate scalable, optimal photon distillation as a substantially more resource-efficient strategy to reduce indistinguishability errors in a way that is compatible with fault-tolerant operation. Photon distillation is an intrinsically bosonic, coherent error-mitigation technique which exploits quantum interference to project single photons into purified internal states, thereby reducing indistinguishability errors at both a higher efficiency and higher threshold than quantum error correction. We observe unconditional error reduction (i.e., below-threshold behaviour) consistent with theoretical predictions, even when accounting for noise introduced by the distillation gate, thereby achieving actual net-gain error mitigation under conditions relevant for fault-tolerant quantum computing. We anticipate photon distillation will find uses in large-scale quantum computers. We also expect this work to inspire the search for additional intrinsically bosonic error-reduction strategies, even for fault-tolerant architectures.
Block Encoding Linear Combinations of Pauli Strings Using the Stabilizer Formalism
This paper presents a new method for creating quantum circuits that encode linear combinations of Pauli strings using stabilizer formalism, which could make quantum algorithms more efficient. The approach transforms Pauli strings to make them easier to implement in quantum circuits and shows comparable or better performance than existing methods.
Key Contributions
- Novel block encoding method for linear combinations of Pauli strings using stabilizer formalism
- Demonstrated circuit complexity improvements over Linear Combination of Unitaries approach
- Scalable implementation with logarithmic ancilla register requirements
View Full Abstract
The Quantum Singular Value Transformation (QSVT) provides a powerful framework with the potential for quantum speedups across a wide range of applications. Its core input model is the block encoding framework, in which non-unitary matrices are embedded into larger unitary matrices. Because the gate complexity of the block-encoding subroutine largely determines the overall cost of QSVT-based algorithms, developing new and more efficient block encodings is crucial for achieving practical quantum advantage. In this paper, we introduce a novel method for constructing quantum circuits that block encode linear combinations of Pauli strings. Our approach relies on two key components. First, we apply a transformation that converts the Pauli strings into pairwise anti-commuting ones, making the transformed linear combination unitary and thus directly implementable as a quantum circuit. Second, we employ a correction transformation based on the stabilizer formalism which uses an ancilla register to restore the original Pauli strings. Our method can be implemented with an ancilla register whose size scales logarithmically with the number of system qubits. It can also be extended to larger ancilla registers, which can substantially reduce the overall quantum circuit complexity. We present four concrete examples and use numerical simulations to compare our method's circuit complexity with that of the Linear Combination of Unitaries (LCU) approach. We find that our method achieves circuit complexities comparable to or better than LCU, with possible advantages when the structure of the target operators can be exploited. These results suggest that our approach could enable more efficient block encodings for a range of relevant problems extending beyond the examples analyzed in this work.
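For readers unfamiliar with block encoding itself, a minimal sketch helps: a Hermitian operator $A$ with $\|A\| \le 1$, such as a normalized linear combination of Pauli strings, embeds into a one-ancilla unitary whose top-left block is $A$. The code below uses the generic single-ancilla dilation $U = \begin{pmatrix} A & \sqrt{I-A^2} \\ \sqrt{I-A^2} & -A \end{pmatrix}$, not the paper's stabilizer-formalism construction.

```python
import numpy as np

# Single-qubit Paulis
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_string(ops):
    """Kronecker product of single-qubit Paulis, e.g. [X, Z] -> X (x) Z."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def block_encode(A):
    """One-ancilla unitary dilation of a Hermitian A with ||A|| <= 1.
    Since sqrt(I - A^2) is a function of A, it commutes with A, which makes
    the 2x2 block matrix below unitary, with A sitting in its top-left block."""
    evals, evecs = np.linalg.eigh(A)
    comp = evecs @ np.diag(np.sqrt(np.clip(1 - evals**2, 0, None))) @ evecs.conj().T
    return np.block([[A, comp], [comp, -A]])

# A normalized linear combination of two anti-commuting Pauli strings on 2 qubits
A = 0.5 * pauli_string([X, Z]) + 0.5 * pauli_string([Z, I2])
U = block_encode(A)
assert np.allclose(U.conj().T @ U, np.eye(8))  # U is unitary
assert np.allclose(U[:4, :4], A)               # top-left block recovers A
```

Applying $U$ to a state with the ancilla in $|0\rangle$ and post-selecting the ancilla on $|0\rangle$ effects $A$ on the system register, which is the access model QSVT builds on; the paper's contribution is constructing such a $U$ more cheaply by exploiting stabilizer structure.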
Scalable Suppression of XY Crosstalk by Pulse-Level Control in Superconducting Quantum Processors
This paper develops a scalable control method to reduce unwanted interactions (crosstalk) between neighboring qubits in superconducting quantum processors by using frequency modulation and dynamical decoupling techniques. The approach works independently of coupling strengths and demonstrates significant error reduction in both two-qubit and five-qubit systems.
Key Contributions
- Development of a scalable pulse-level control framework using frequency modulation and dynamical decoupling to suppress XY crosstalk
- Demonstration of orders-of-magnitude fidelity improvements in superconducting quantum processors with validation up to five-qubit systems
View Full Abstract
As superconducting quantum processors continue to scale, high-performance quantum control becomes increasingly critical. In densely integrated architectures, unwanted interactions between nearby qubits give rise to crosstalk errors that limit operational performance. In particular, direct exchange-type (XY) interactions are typically minimized by designing large frequency detunings between neighboring qubits at the hardware level. However, frequency crowding in large-scale systems ultimately restricts the achievable frequency separation. While such XY coupling facilitates entangling gate operations, its residual presence poses a key challenge during single-qubit controls. Here, we propose a scalable pulse-level control framework, incorporating frequency modulation (FM) and dynamical decoupling (DD), to suppress XY crosstalk errors. This framework operates independently of coupling strengths, reducing calibration overhead and naturally supporting multi-qubit connectivity. Numerical simulations show orders-of-magnitude reductions in infidelity for both idle and single-qubit gates in a two-qubit system. We further validate scalability in a five-qubit layout, where crosstalk between a central qubit and four neighbors is simultaneously suppressed. Our crosstalk suppression framework provides a practical route toward high-fidelity operation in dense superconducting architectures.
Interacting electrons in silicon quantum interconnects
This paper studies one-dimensional electron channels in silicon quantum structures that could serve as interconnects between quantum processing units. The researchers identify different interaction regimes (Wigner and Friedel) in these channels and propose how they could enable long-range coupling between quantum dots for scalable quantum computing architectures.
Key Contributions
- Identification of Wigner-Friedel crossover in silicon quantum interconnects with distinct correlation signatures
- Demonstration that Wigner regime enables long-range capacitive coupling between quantum dots for entanglement generation
- DMRG simulations showing robustness of interaction regimes against realistic disorder levels up to 400 µeV
- Proposal for experimental signatures to detect different interaction regimes via transport and charge sensing
View Full Abstract
Coherent interconnects between gate-defined silicon quantum processing units are essential for scalable quantum computation and long-range entanglement. We argue that one-dimensional electron channels formed in the silicon quantum well of a Si/SiGe heterostructure exhibit strong Coulomb interactions and realize strongly interacting Luttinger liquid physics. At low electron densities, the system enters a Wigner regime characterized by dominant $4k_F$ correlations; increasing the electron density leads to a crossover from the Wigner regime to a Friedel regime with dominant $2k_F$ correlations. We support these results through large-scale density matrix renormalization group (DMRG) simulations of the interacting ground state under both screened and unscreened Coulomb potentials. We propose experimental signatures of the Wigner-Friedel crossover via charge transport and charge sensing in both zero- and high-magnetic field limits. We also analyze the impact of short-range correlated disorder - including random alloy fluctuations and valley splitting variations - and identify that the Wigner-Friedel crossover remains robust until disorder levels of about $400\,\mu\mathrm{eV}$. Finally, we show that the Wigner regime enables long-range capacitive coupling between quantum dots across the interconnect, suggesting a route to create long-range entanglement between solid-state qubits. Our results position silicon interconnects as a platform for studying Luttinger liquid physics and for enabling architectures supporting nonlocal quantum error correction and quantum simulation.
Unitary fault-tolerant encoding of Pauli states in surface codes
This paper presents a new method for preparing quantum states in surface codes that preserves the code's error-correction capabilities during state preparation. The approach uses only local quantum gates and significantly reduces error rates compared to traditional measurement-based methods, making it particularly valuable for quantum computing platforms where measurements are expensive.
Key Contributions
- Distance-preserving unitary encoding scheme for Pauli eigenstates in surface codes
- Scalable construction generalizable to arbitrary code distances with O(d) circuit depth
- Demonstration of up to order-of-magnitude improvement in logical error rates over standard methods
View Full Abstract
In fault-tolerant quantum computation, the preparation of logical states is a ubiquitous subroutine, yet significant challenges persist even for the simplest states required. In the present work, we present a unitary, scalable, distance-preserving encoding scheme for preparing Pauli eigenstates in surface codes. Unlike previous unitary approaches whose fault-distance remains constant with increasing code distance, our scheme ensures that the protection offered by the code is preserved during state preparation. Building on strategies discovered by reinforcement learning for the surface-17 code, we generalize the construction to arbitrary code distances and both rotated and unrotated surface codes. The proposed encoding relies only on geometrically local gates, and is therefore fully compatible with planar 2D qubit connectivity, and it achieves circuit depth scaling as $\mathcal{O}(d)$, consistent with fundamental entanglement-generation bounds. We design explicit stabilizer-expanding circuits with and without ancilla-mediated connectivity and analyze their error-propagation behavior. Numerical simulations under depolarizing noise show that our unitary encoding without ancillas outperforms standard stabilizer-measurement-based schemes, reducing logical error rates by up to an order of magnitude. These results make the scheme particularly relevant for platforms such as trapped ions and neutral atoms, where measurements are costly relative to gates and idling noise is considerably weaker than gate noise. Our work bridges the gap between measurement-based and unitary encodings of surface-code states and opens new directions for distance-preserving state preparation in fault-tolerant quantum computation.
Fast, high-fidelity Transmon readout with intrinsic Purcell protection via nonperturbative cross-Kerr coupling
This paper demonstrates a new 'junction readout' method for measuring superconducting qubits that achieves faster, more accurate measurements than traditional approaches. By coupling the qubit to its readout circuit through both capacitive and Josephson junction connections, they achieve 99.4% measurement accuracy in just 68 nanoseconds without needing expensive additional hardware components.
Key Contributions
- Development of junction readout architecture with intrinsic Purcell protection that eliminates need for external Purcell filters
- Achievement of 99.4% assignment fidelity with 68 ns integration time using bifurcation-based readout
- Demonstration of enhanced resilience to measurement-induced state transitions through nonperturbative cross-Kerr coupling
- Scalable readout solution with reduced hardware overhead compared to conventional dispersive readout
View Full Abstract
Dispersive readout of superconducting qubits relies on a transverse capacitive coupling that hybridizes the qubit with the readout resonator, subjecting the qubit to Purcell decay and measurement-induced state transitions (MIST). Despite the widespread use of Purcell filters to suppress qubit decay and near-quantum-limited amplifiers, dispersive readout often lags behind single- and two-qubit gates in both speed and fidelity. Here, we experimentally demonstrate junction readout, a simple readout architecture that realizes a strong qubit-resonator cross-Kerr interaction without relying on a transverse coupling. This interaction is achieved by coupling a transmon qubit to its readout resonator through both a capacitance and a Josephson junction. By varying the qubit frequency, we show that this hybrid coupling provides intrinsic Purcell protection and enhanced resilience to MIST, enabling readout at high photon numbers. While junction readout is compatible with conventional linear measurement, in this work we exploit the nonlinear coupling to intentionally engineer a large Kerr nonlinearity in the resonator, enabling bifurcation-based readout. Using this approach, we achieve a 99.4% assignment fidelity with a 68 ns integration time and a 98.4% QND fidelity without an external Purcell filter or a near-quantum-limited amplifier. These results establish the junction readout architecture with bifurcation-based readout as a scalable and practical alternative to dispersive readout, enabling fast, high-fidelity qubit measurement with reduced hardware overhead.
SurgeQ: A Hybrid Framework for Ultra-Fast Quantum Processor Design and Crosstalk-Aware Circuit Execution
This paper presents SurgeQ, a hybrid hardware-software approach for quantum computing that uses stronger coupling between qubits to enable faster two-qubit gates, while using smart scheduling algorithms to minimize the increased crosstalk noise that comes with stronger coupling.
Key Contributions
- Hardware-software co-design framework combining coupling-strengthened fast gates with crosstalk-aware scheduling
- Systematic evaluation pipeline for optimizing coupling strength under composite noise models
- Demonstration of million-fold fidelity improvement in large-scale quantum circuits
View Full Abstract
Executing quantum circuits on superconducting platforms requires balancing the trade-off between gate errors and crosstalk. To address this, we introduce SurgeQ, a hardware-software co-design strategy consisting of a design phase and an execution phase, to achieve accelerated circuit execution and improve overall program fidelity. SurgeQ employs coupling-strengthened, faster two-qubit gates while mitigating their increased crosstalk through a tailored scheduling strategy. With detailed consideration of composite noise models, we establish a systematic evaluation pipeline to identify the optimal coupling strength. Evaluations on a comprehensive suite of real-world benchmarks show that SurgeQ generally achieves higher fidelity than up-to-date baselines, and remains effective in combating exponential fidelity decay, achieving up to a million-fold improvement in large-scale circuits.
Holographic codes seen through ZX-calculus
This paper analyzes holographic quantum error correcting codes using ZX-calculus, a graphical language for quantum computation. The authors study the pentagon holographic code's structure through diagrams and introduce new codes based on hyperbolic tessellations, testing their error correction performance.
Key Contributions
- Diagrammatic analysis of pentagon holographic quantum error correcting code using ZX-calculus
- Introduction of new quantum error correcting codes based on dual hyperbolic tessellations with belief propagation decoding
View Full Abstract
We revisit the pentagon holographic quantum error correcting code from a ZX-calculus perspective. By expressing the underlying tensors as ZX-diagrams, we study the stabiliser structure of the code via Pauli webs. In addition, we obtain a diagrammatic understanding of its logical operators, encoding isometries, Rényi entropy and toy models of black holes/wormholes. Then, motivated by the pentagon holographic code's ZX-diagram, we introduce a family of codes constructed from ZX-diagrams on its dual hyperbolic tessellations and study their logical error rates using belief propagation decoders.
Conveyor-mode electron shuttling through a T-junction in Si/SiGe
This paper demonstrates a T-junction device that can route single electrons and spin qubits between two conveyor-belt shuttle lanes in silicon quantum dots, achieving nearly perfect transfer fidelity and enabling two-dimensional quantum computing architectures.
Key Contributions
- Demonstrated T-junction routing between independent shuttle lanes with 99.9999991% fidelity
- Showed controllable electron pattern swapping across 54 quantum dots using simple atomic pulses
- Established foundation for scalable 2D quantum computing architectures with flexible spin qubit routing
View Full Abstract
Conveyor-mode shuttling in gated Si/SiGe devices enables adiabatic transfer of single electrons, electron patterns and spin qubits confined in quantum dots across several microns with a scalable number of signal lines. To realize their full potential, linear shuttle lanes must connect into a two-dimensional grid with controllable routing. We introduce a T-junction device linking two independently driven shuttle lanes. Electron routing across the junction requires no extra control lines beyond the four channels per conveyor belt. We measure an inter-lane charge transfer fidelity of $F = 100.0000000^{+0}_{-9\times 10^{-7}}\,\%$ at an instantaneous electron velocity of $270\,\mathrm{mm}\,\mathrm{s}^{-1}$. The filling of 54 quantum dots is controlled by simple atomic pulses, allowing us to swap electron patterns, laying the groundwork for a native spin-qubit SWAP gate. This T-junction establishes a path towards scalable, two-dimensional quantum computing architectures with flexible spin qubit routing for quantum error correction.
Integration and Resource Estimation of Cryoelectronics for Superconducting Fault-Tolerant Quantum Computers
This paper analyzes the requirements and approaches for integrating cryogenic electronics into large-scale superconducting quantum computers, providing a framework for estimating resources needed to control fault-tolerant quantum systems. The authors examine how to partition control functions between room-temperature electronics, cryo-CMOS at 4K, and superconducting logic to enable scaling to cryptographically relevant quantum computers.
Key Contributions
- Development of transparent first-order accounting framework for cryoelectronics resource estimation
- Analysis of functional partitioning strategies across temperature stages for fault-tolerant quantum computer control systems
- Demonstration of scaling constraints using RSA-2048 cryptographic benchmark as reference point
View Full Abstract
Scaling superconducting quantum computers to the fault-tolerant regime calls for a commensurate scaling of the classical control and readout stack. Today's systems largely rely on room-temperature, rack-based instrumentation connected to dilution-refrigerator cryostats through many coaxial cables. Looking ahead, superconducting fault-tolerant quantum computers (FTQCs) will likely adopt a heterogeneous quantum-classical architecture that places selected electronics at cryogenic stages -- for example, cryo-CMOS at 4 K and superconducting digital logic at 4 K and/or mK stages -- to curb wiring and thermal-load overheads. This review distills key requirements, surveys representative room-temperature and cryogenic approaches, and provides a transparent first-order accounting framework for cryoelectronics. Using an RSA-2048-scale benchmark as a concrete reference point, we illustrate how scaling targets motivate constraints on multiplexing and stage-wise cryogenic power, and discuss implications for functional partitioning across room-temperature electronics, cryo-CMOS, and superconducting logic.
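The flavor of such first-order accounting can be conveyed with a toy budget: cable count scales with qubit number, divided by the multiplexing factor, and each line deposits heat at the cryogenic stage that must stay within the stage's cooling power. Every number in the sketch below (lines per qubit, heat per line, cooling budget) is an assumed placeholder for illustration, not a figure from the paper.

```python
# Illustrative first-order wiring/heat budget in the spirit of the review's
# accounting framework.  All default parameters are assumptions for the sketch.

def cryo_budget(n_qubits, lines_per_qubit=2.5, mux_factor=1,
                heat_per_line_mw=1.0, cooling_budget_w=1.5):
    """Return (lines entering the 4 K stage, heat load in W, within budget?)."""
    lines = n_qubits * lines_per_qubit / mux_factor
    heat_w = lines * heat_per_line_mw / 1000.0
    return lines, heat_w, heat_w <= cooling_budget_w

# Direct cabling for a 10,000-qubit processor exceeds a ~1.5 W stage budget...
print(cryo_budget(10_000, mux_factor=1))    # 25,000 lines, 25 W
# ...while 100x multiplexing brings the wiring heat load back within it.
print(cryo_budget(10_000, mux_factor=100))  # 250 lines, 0.25 W
```

The point of the exercise is the one the review makes: at fault-tolerant scale, multiplexing factor and per-line dissipation are not implementation details but first-order design constraints.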
Strip-Symmetric Quantum Codes for Biased Noise: Z-Decoupling in Stabilizer and Floquet Codes
This paper introduces a framework for quantum error correction codes called 'strip-symmetric biased codes' that are optimized for dephasing noise, where errors can be efficiently decoded by breaking the problem into independent one-dimensional strips rather than solving the full two-dimensional decoding problem.
Key Contributions
- Defines strip-symmetric biased codes as a unifying framework for existing bias-tailored quantum error correction codes
- Shows that Z-error decoding can be factorized across independent strips, reducing computational complexity for matching-based decoders
- Provides design tools for constructing new bias-tailored Floquet codes using synthetic detector models and domain-wise Clifford constructions
View Full Abstract
Bias-tailored codes such as the XZZX surface code and the domain wall color code achieve high dephasing-biased thresholds because, in the infinite-bias limit, their $Z$ syndromes decouple into one-dimensional repetition-like chains; the $X^3Z^3$ Floquet code shows an analogous strip-wise structure for detector events in spacetime. We capture this common mechanism by defining strip-symmetric biased codes, a class of static stabilizer and dynamical (Floquet) codes for which, under pure dephasing and perfect measurements, each elementary $Z$ fault is confined to a strip and the Z-detector–fault incidence matrix is block diagonal. For such codes the Z-detector hypergraph decomposes into independent strip components and maximum-likelihood $Z$ decoding factorizes across strips, yielding complexity savings for matching-based decoders. We characterize strip symmetry via per-strip stabilizer products, viewed as a $\mathbb{Z}_2$ 1-form symmetry, place XZZX, the domain wall color code, and $X^3Z^3$ in this framework, and introduce synthetic strip-symmetric detector models and domain-wise Clifford constructions that serve as design tools for new bias-tailored Floquet codes.
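The factorization claim can be made concrete with a toy model: in the infinite-bias limit each strip behaves as a one-dimensional repetition code, so $Z$ decoding reduces to decoding each strip's syndrome chain independently. The sketch below assumes pure dephasing and perfect measurements, and deliberately ignores the paper's full detector-hypergraph machinery.

```python
import numpy as np

def decode_chain(syndrome):
    """Minimum-weight decoding of a 1D repetition-code chain.
    syndrome[i] is the parity of data bits i and i+1.  Exactly two error
    patterns are consistent with the syndrome, differing by a global flip;
    return the lighter one."""
    bits = np.concatenate([[0], np.cumsum(syndrome) % 2])
    return bits if bits.sum() <= len(bits) - bits.sum() else 1 - bits

def decode_strips(syndromes):
    """Block-diagonal Z decoding: every strip is decoded independently,
    so total cost is the sum of per-strip costs, not a 2D matching problem."""
    return [decode_chain(np.asarray(s)) for s in syndromes]

# Two independent strips of 5 data bits each (4 adjacent-parity checks per strip).
errors = [np.array([0, 1, 0, 0, 0]), np.array([1, 1, 0, 0, 1])]
syndromes = [(e[:-1] + e[1:]) % 2 for e in errors]
corrections = decode_strips(syndromes)
for e, c in zip(errors, corrections):
    residual = (e + c) % 2
    # Correction matches the true error up to a logical (global) flip.
    assert residual.sum() in (0, len(e))
```

The complexity saving in the abstract is exactly this structure at scale: a matching-based decoder runs on many small disjoint components instead of one large graph.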
Optimizing Fault-tolerant Cat State Preparation
This paper presents an optimized method for preparing cat states (quantum superposition states) that are essential for fault-tolerant quantum computing. The approach uses two low-depth circuits combined with a transversal CNOT operation to create high-quality cat states with fewer resources than previous methods.
Key Contributions
- Novel cat state preparation scheme achieving fault distance up to 9 with reduced circuit depth and CNOT count
- Three optimization methods for transversal CNOT wiring including SMT-based and heuristic approaches
- Resource-efficient construction requiring only ⌈log₂ w⌉+1 depth and at most 3w-2 CNOTs
View Full Abstract
Cat states are an important resource for fault-tolerant quantum computing, where they serve as building blocks for a variety of fault-tolerant primitives. Consequently, the ability to prepare high-quality cat states at large fault distances is essential. While optimizations for low fault distances or small numbers of qubits exist, higher fault distances can be achieved via generalized constructions with potentially suboptimal circuit sizes. In this work, we propose a cat state preparation scheme based on preparing two cat states with low-depth circuits, followed by a transversal CNOT and measurement of one of the states. This scheme prepares $w$-qubit cat states fault-tolerantly up to fault distances of $9$ using $\lceil\log_2 w\rceil+1$ depth and at most $3w-2$ CNOTs and $2w$ qubits. We discuss that the combinatorially challenging aspect of this construction is the precise wiring of the transversal CNOT and propose three methods for finding these: two based on Satisfiability Modulo Theories (SMT) solving and one heuristic search based on a local repair strategy. Numerical evaluations show that our circuits achieve a high fault-distance while requiring fewer resources than generalized constructions.
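The $\lceil\log_2 w\rceil+1$ depth reflects a standard fan-out tree: one Hadamard layer, then CNOT layers each of which doubles the number of qubits in the superposition. The statevector sketch below shows that scaling for the plain (non-fault-tolerant) fan-out; it is an illustration of the depth bound only, not the paper's verified two-cat construction.

```python
import numpy as np

def apply_cnot(state, control, target):
    """CNOT on a statevector; qubit 0 is the least-significant bit."""
    out = state.copy()
    for idx in range(len(state)):
        if (idx >> control) & 1:
            out[idx] = state[idx ^ (1 << target)]
    return out

def cat_state(w):
    """(|0...0> + |1...1>)/sqrt(2) on w qubits via one Hadamard and a
    log-depth CNOT fan-out.  Returns the state and the circuit depth used."""
    psi = np.zeros(2**w, dtype=complex)
    psi[0] = psi[1] = 1 / np.sqrt(2)   # state after H on qubit 0 of |0...0>
    depth, filled = 1, 1               # qubits [0, filled) already hold the cat
    while filled < w:
        # One layer: copy each filled qubit onto a fresh one, doubling support.
        layer = [(c, c + filled) for c in range(min(filled, w - filled))]
        for c, t in layer:             # CNOTs in a layer act on disjoint qubits
            psi = apply_cnot(psi, c, t)
        filled += len(layer)
        depth += 1
    return psi, depth

psi, depth = cat_state(8)
assert np.isclose(abs(psi[0])**2, 0.5) and np.isclose(abs(psi[-1])**2, 0.5)
assert depth == int(np.ceil(np.log2(8))) + 1
```

A linear CNOT chain would instead need depth $w$, which is why the tree layout matters once $w$ grows; the paper's scheme keeps this depth scaling while also guaranteeing fault tolerance of the prepared state.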
FTCircuitBench: A Benchmark Suite for Fault-Tolerant Quantum Compilation and Architecture
This paper introduces FTCircuitBench, a comprehensive benchmark suite and toolkit for evaluating quantum error correction and fault-tolerant quantum computing compilation. It provides standardized algorithms, compilation pipelines, and evaluation tools to help researchers develop and optimize fault-tolerant quantum computing systems.
Key Contributions
- Created standardized benchmark suite for fault-tolerant quantum compilation with pre-compiled algorithm instances
- Developed modular end-to-end compilation pipeline supporting various fault-tolerant architectures and optimization passes
- Provided comprehensive toolkit for evaluating quantum algorithms and optimizations across the full compilation stack
View Full Abstract
Realizing large-scale quantum advantage is expected to require quantum error correction (QEC), making the compilation and optimization of logical operations a critical area of research. Logical computation imposes distinct constraints and operational paradigms that differ from those of the Noisy Intermediate-Scale Quantum (NISQ) regime, motivating the continued evolution of compilation tools. Given the complexity of this emerging stack, where factors such as gate decomposition precision and computational models must be co-designed, standardized benchmarks and toolkits are valuable for evaluating progress. To support this need, we introduce FTCircuitBench, which serves as: (1) a benchmark suite of impactful quantum algorithms, featuring pre-compiled instances in both Clifford+T and Pauli Based Computation models; (2) a modular end-to-end pipeline allowing users to compile and decompose algorithms for various fault-tolerant architectures, supporting both prebuilt and custom optimization passes; and (3) a toolkit for evaluating the impact of algorithms and optimization across the full compilation stack, providing detailed numerical analysis at each stage. FTCircuitBench is fully open-sourced and maintained on GitHub.
Energetics of Rydberg-atom Quantum Computing
This paper analyzes the energy consumption of Rydberg-atom quantum computers by examining the energetic costs of different components and algorithms. The researchers investigate the energy efficiency of executing quantum algorithms like Quantum Phase Estimation and Quantum Fourier Transform on Rydberg platforms, comparing these costs to classical supercomputers to evaluate potential quantum energy advantages.
Key Contributions
- First comprehensive analysis of energy consumption in Rydberg-atom quantum computing platforms
- Energy scaling analysis and comparison between quantum Fourier transform on Rydberg systems versus classical discrete Fourier transform on supercomputers
- Identification of energy bottlenecks and optimization opportunities in different components of Rydberg quantum computers
View Full Abstract
Quantum computing exploits the properties of Quantum Mechanics to solve problems faster than classical computers. The potential applications of this technology have been widely explored, and extensive research over the past decades has been dedicated to developing scalable quantum computers. However, the question of the energetic performance of quantum computation has only gained attention more recently, and its importance is now recognized. In fact, quantum computers can only be a viable alternative if their energy cost scales favorably, and some research has shown that there is even a potential quantum energy advantage. Rydberg atoms have emerged recently as one of the most promising platforms to implement a large-scale quantum computer, with significant advances made in recent years. This work contributes first steps toward understanding the energy efficiency of this platform, namely by investigating the energy consumption of the different elements of a Rydberg atom quantum computer. First, an experimental implementation of the Quantum Phase Estimation algorithm is analyzed, and an estimation of the energetic cost of executing this algorithm is calculated. Then, a potential scaling of the energy cost of performing the Quantum Fourier Transform with Rydberg atoms is derived. This analysis facilitates a comparison of the energy consumption of different elements within a Rydberg atom quantum computer, from the preparation of the atoms to the execution of the algorithm, and the measurement of the final state, enabling the evaluation of the energy expenditure of the Rydberg platform and the identification of potential improvements. Finally, we used the Quantum Fourier Transform as an energetic benchmark, comparing the scaling we obtained to that of the execution of the Discrete Fourier Transform on two state-of-the-art classical supercomputers.
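As a rough back-of-envelope companion to this benchmark (not the paper's energy model, which accounts for the full hardware stack), one can compare raw operation counts: the textbook QFT on $n$ qubits uses $n(n+1)/2$ gates, while a radix-2 FFT on $N = 2^n$ points performs about $(N/2)\log_2 N$ butterfly multiplications. The sketch below tabulates only these counts; per-operation energy is deliberately left out, since that is exactly what the paper estimates for each platform.

```python
def qft_gate_count(n):
    # Textbook QFT on n qubits: n Hadamards + n(n-1)/2 controlled-phase gates
    # (final swaps omitted; they can often be absorbed into qubit relabeling).
    return n * (n + 1) // 2

def fft_op_count(n):
    # Radix-2 FFT on N = 2**n points: (N/2) * log2(N) butterfly multiplications.
    N = 2 ** n
    return (N // 2) * n

# The quantum gate count grows polynomially in n, the classical op count
# exponentially -- the gap that makes the QFT a natural scaling benchmark.
for n in (10, 20, 30):
    print(f"n={n}: QFT gates {qft_gate_count(n)}, FFT ops {fft_op_count(n)}")
```

This comparison only frames the scaling question; whether it translates into an energy advantage depends on the per-gate costs the paper analyzes.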
Gradient descent reliably finds depth- and gate-optimal circuits for generic unitaries
This paper demonstrates that gradient descent optimization can efficiently find optimal quantum circuits for generic unitary operations, challenging previous assumptions that such optimization required more complex combinatorial search methods. The key insight is that avoiding parameter-deficient circuit structures allows simple gradient descent to reliably achieve both depth-optimal and gate-optimal circuits.
Key Contributions
- Showed that gradient descent can reliably find optimal quantum circuits for generic unitaries, contrary to previous beliefs requiring combinatorial search
- Identified that avoiding parameter-deficient circuit skeletons is key to successful optimization, explaining discrepancies with earlier work
View Full Abstract
When the gate set has continuous parameters, synthesizing a unitary operator as a quantum circuit is always possible using exact methods, but finding minimal circuits efficiently remains a challenging problem. The landscape is very different for compiled unitaries, which arise from programming and typically have short circuits, as compared with generic unitaries, which use all parameters and typically require circuits of maximal size. We show that simple gradient descent reliably finds depth- and gate-optimal circuits for generic unitaries, including in the presence of restricted chip connectivity. This runs counter to earlier evidence that optimal synthesis required combinatorial search, and we show that this discrepancy can be explained by avoiding the random selection of certain parameter-deficient circuit skeletons.
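The notion of a parameter-deficient skeleton can be made concrete with a standard counting argument (the Shende-Bullock-Markov bound, a known result we bring in for illustration, not something taken from this paper): an $n$-qubit special unitary has $4^n - 1$ real parameters, a skeleton starts with $3n$ one-qubit-rotation parameters, and each CNOT adds at most 4 more. A skeleton carrying fewer parameters than the target manifold's dimension cannot represent a generic unitary, no matter how the optimizer tunes it.

```python
import math

def su_dim(n):
    # Real parameter count of an n-qubit special unitary, dim SU(2**n).
    return 4 ** n - 1

def cnot_lower_bound(n):
    # Shende-Bullock-Markov counting bound: a generic n-qubit unitary needs
    # at least ceil((4**n - 3n - 1) / 4) CNOTs.
    return math.ceil((4 ** n - 3 * n - 1) / 4)

def skeleton_params(n, num_cnots):
    # Parameters of a skeleton: 3 per qubit initially, plus at most 4 per CNOT.
    return 3 * n + 4 * num_cnots

# A skeleton is parameter-deficient for generic synthesis when it carries
# fewer parameters than su_dim(n); for two qubits the boundary is 3 CNOTs.
n = 2
assert cnot_lower_bound(n) == 3
assert skeleton_params(n, 2) < su_dim(n)   # 2 CNOTs: deficient
assert skeleton_params(n, 3) >= su_dim(n)  # 3 CNOTs: enough parameters
```

On this counting view, the paper's observation is that gradient descent succeeds reliably once the randomly chosen skeletons that fall below this parameter threshold are excluded.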
Minimization of AND-XOR Expressions with Decoders for Quantum Circuits
This paper presents new methods for designing quantum circuits that perform logical operations more efficiently by using decoder-based three-level structures instead of traditional two-level approaches, aiming to reduce the quantum cost of reversible circuits through novel mathematical forms called MVI-FPRM.
Key Contributions
- Introduction of Multi-Valued Input Fixed Polarity Reed-Muller (MVI-FPRM) forms for quantum circuit synthesis
- Development of decoder-based three-level circuit architecture to reduce quantum costs compared to traditional two-level ESOP methods
- Creation of two practical algorithms (products-matching and butterfly diagrams) for three-level circuit synthesis
View Full Abstract
This paper introduces a new logic structure for reversible quantum circuit synthesis. Our synthesis method aims to minimize the quantum cost of reversible quantum circuits with decoders. In this method, multi-valued input, binary output (MVI) functions are utilized as a mathematical concept only, but the circuits are binary. We introduce the new concept of "Multi-Valued Input Fixed Polarity Reed-Muller (MVI-FPRM)" forms. Our decoder-based circuit uses three logical levels in contrast to commonly-used methods based on Exclusive-or Sum of Products (ESOP) with two levels (AND-XOR expressions), realized by Toffoli gates. In general, the high number of input qubits in the resulting Toffoli gates is a problem that greatly impacts the quantum cost. Using decoders decreases the number of input qubits in these Toffoli gates. We present two practical algorithms for three-level circuit synthesis by finding the MVI-FPRM: products-matching and the newly developed butterfly diagrams. The best MVI-FPRM forms are factorized and reduced to approximate Multi-Valued Input Generalized Reed-Muller (MVI-GRM) forms.
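To illustrate the kind of butterfly-style transform involved, here is a minimal sketch computing a fixed-polarity Reed-Muller (FPRM) expansion of an ordinary binary-input Boolean function. The paper's algorithms operate on multi-valued-input forms and are more elaborate; all function names below are ours.

```python
from itertools import product

def fprm_coeffs(truth, polarity):
    """FPRM coefficients of a Boolean function over GF(2).

    truth: list of 0/1 of length 2**n, truth[i] = f at input i (bit j of i
           is the value of variable x_j).
    polarity: n bits; polarity[j] = 1 means variable j appears only as its
              complement (negative literal) in the expansion.
    Returns coeffs c such that f = XOR over i of c[i] * AND of the chosen
    literals whose bits are set in i.
    """
    n = len(polarity)
    assert len(truth) == 1 << n
    c = list(truth)
    # Negative polarity of variable j is a relabeling x_j -> 1 - x_j.
    for j, p in enumerate(polarity):
        if p:
            c = [c[i ^ (1 << j)] for i in range(1 << n)]
    # Standard Reed-Muller (binary Moebius) butterfly transform over GF(2).
    for j in range(n):
        for i in range(1 << n):
            if i & (1 << j):
                c[i] ^= c[i ^ (1 << j)]
    return c

def eval_fprm(coeffs, polarity, x):
    """Evaluate the FPRM expansion at input bits x (list of 0/1)."""
    n = len(polarity)
    lits = [xj ^ pj for xj, pj in zip(x, polarity)]  # literal values
    out = 0
    for i, ci in enumerate(coeffs):
        if ci and all(lits[j] for j in range(n) if i & (1 << j)):
            out ^= 1
    return out

# Round-trip check on the 3-variable majority function with mixed polarity.
n = 3
truth = [1 if bin(i).count("1") >= 2 else 0 for i in range(1 << n)]
pol = [0, 1, 0]
c = fprm_coeffs(truth, pol)
for bits in product([0, 1], repeat=n):
    x = list(bits)
    i = sum(b << j for j, b in enumerate(x))
    assert eval_fprm(c, pol, x) == truth[i]
```

Searching over all $2^n$ polarity vectors for the one with the fewest nonzero coefficients is the classical FPRM minimization problem; the paper's products-matching and butterfly-diagram algorithms tackle the analogous search for the multi-valued-input case.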
Developments in superconducting erasure qubits for hardware-efficient quantum error correction
This paper reviews recent developments in superconducting erasure qubits, a specialized type of quantum bit designed to have predictable error patterns that make quantum error correction more efficient. The authors focus on dual-rail encoded implementations and discuss how these qubits can enable hardware-efficient quantum error correction by combining built-in error correction with additional outer codes.
Key Contributions
- Comprehensive review of superconducting erasure qubit implementations and their hardware-efficient quantum error correction capabilities
- Analysis of dual-rail encoding schemes and concatenated error correction approaches for fault-tolerant quantum computing
View Full Abstract
Quantum computers are inherently noisy, and a crucial challenge for achieving large-scale, fault-tolerant quantum computing is to implement quantum error correction. A promising direction that has made rapid recent progress is to design hardware that has a specific noise profile, leading to a significantly higher threshold for noise with certain quantum error correcting codes. This Perspective focuses on erasure qubits, which enable hardware-efficient quantum error correction, by concatenating an inner code built-in to the hardware with an outer code. We focus on implementations of dual-rail encoded erasure qubits using superconducting qubits, giving an overview of recent developments in theory and simulation, and hardware demonstrators. We also discuss the differences between implementations; near-term applications using quantum error detection; and the open problems for developing this approach towards early fault-tolerant quantum computers.
Flux-noise-resilient transmon qubit via a doubly-connected gradiometric design
This paper presents a new transmon qubit design called the '8-mon' that uses a doubly-connected gradiometric structure with a nano-airbridge to significantly reduce sensitivity to magnetic flux noise while maintaining full electrical tunability. The design achieves nearly threefold improvement in coherence time compared to standard X-mon qubits without requiring additional control overhead.
Key Contributions
- Novel doubly-connected gradiometric transmon qubit design that suppresses flux noise while preserving tunability
- Demonstration of threefold enhancement in Ramsey coherence time T2* reaching the same order as T1
- Development of spatially correlated flux-noise model that quantitatively reproduces experimental coherence trends
- Practical pathway toward more stable superconducting quantum processors with superior long-term frequency stability
View Full Abstract
Frequency-tunable superconducting transmon qubits are a cornerstone of scalable quantum processors, yet their performance is often degraded by sensitivity to low-frequency flux noise. Here we present a doubly-connected gradiometric transmon (the "8-mon") that incorporates a nano-airbridge to link its two loops. This design preserves full electrical tunability and remains fully compatible with standard X-mon control and readout, requiring no additional measurement overhead. The airbridge interconnect eliminates dielectric loss, which enables the 8-mon to achieve both energy relaxation times $T_{\rm 1}$ comparable to reference X-mons and, in the small flux-bias regime, a nearly threefold enhancement in Ramsey coherence time $T_{\rm 2}^*$. This improved $T_{\rm 2}^*$ reaches the same order as $T_{\rm 1}$ without employing echo decoupling. The device also exhibits superior long-term frequency stability even without any magnetic field shielding. We develop a spatially correlated flux-noise model whose simulations quantitatively reproduce the experimental coherence trends, revealing the coexistence of short- and long-correlation-length magnetic noise in the superconducting chip environment. By unifying high tunability with intrinsic flux-noise suppression through a robust geometric design, the 8-mon provides a practical pathway toward more coherent and stable superconducting quantum processors.
Parallel Quantum Gates via Scalable Subsystem-Optimized Robust Control
This paper presents a method to reduce crosstalk errors when running multiple quantum gates simultaneously by optimizing control over smaller subsystems rather than the entire quantum processor. The approach dramatically reduces computational costs while improving gate fidelities, making it practical for large-scale quantum computers with hundreds of qubits.
Key Contributions
- Scalable subsystem-based optimization that reduces crosstalk errors in parallel quantum gate operations
- Demonstration of improved noise scaling from exponential to linear for parallel single-qubit gates across multiple quantum computing platforms
- Platform-agnostic framework that works without precise crosstalk knowledge or specific connectivity assumptions
View Full Abstract
Accurate and efficient implementation of parallel quantum gates is crucial for scalable quantum information processing. However, the unavoidable crosstalk between qubits in current noisy processors impedes the achievement of high gate fidelities and renders full Hilbert-space control optimization prohibitively difficult. Here, we overcome this challenge by reducing the full-system optimization to crosstalk-robust control over constant-sized subsystems, which dramatically reduces the computational cost. Our method effectively eliminates the leading-order gate operation deviations induced by crosstalk, thereby suppressing error rates. Within this framework, we construct analytical pulse solutions for parallel single-qubit gates and numerical pulses for parallel multi-qubit operations. We validate the proposed approach numerically across multiple platforms, including coupled nitrogen-vacancy centers, a nuclear-spin processor, and superconducting-qubit arrays with up to 200 qubits. As a result, the noise scaling is reduced from exponential to linear for parallel single-qubit gates, and an order-of-magnitude reduction is achieved for parallel multi-qubit gates. Moreover, our method does not require precise knowledge of crosstalk strengths and makes no assumption about the underlying qubit connectivity or lattice geometry, thereby establishing a scalable framework for parallel quantum control in large-scale quantum architectures.
Design and Characterization of Compact Acousto-Optic-Deflector Individual Addressing System for Trapped-Ion Quantum Computing
This paper presents a compact beam-steering system using acousto-optic deflectors to individually address ions in trapped-ion quantum computers. The system achieves high precision beam control with minimal crosstalk, enabling manipulation of individual qubits in chains of up to 30 ions.
Key Contributions
- Compact AOD-based beam steering system with <1 square foot footprint for improved optical stability
- Demonstrated individual ion addressing in 30-ion chains with <9×10^-4 intensity crosstalk
- Fast beam switching capability (~240 ns) enabling high-fidelity quantum operations on long ion chains
View Full Abstract
We present a compact design for a beam-steering system based on acousto-optic-deflectors (AODs) used as an individual addressing system for trapped-ion quantum computing. The design aims to minimize the optomechanical degrees of freedom and the optical beam paths to improve optical stability, and we successfully implemented a solution with a compact footprint of less than 1 square foot. The system characterization results show that we achieve clean Gaussian beams at 355nm wavelength with a beam steering range of $\sim$50 times the beam diameter, and an intensity crosstalk of $< 9 \times 10^{-4}$ at all neighboring ions in a five-ion chain. Based on these capabilities, we experimentally demonstrate individual addressing of a 30-ion chain. We estimate the beam switching time of the AOD to be $\sim$240 ns. The compact system design is expected to provide high optical stability, offering the potential for high-fidelity trapped-ion quantum computing with long ion chains.
Neural Minimum Weight Perfect Matching for Quantum Error Codes
This paper develops a neural network-based decoder for quantum error correction that combines Graph Neural Networks and Transformers to dynamically predict edge weights for the Minimum Weight Perfect Matching algorithm. The hybrid approach aims to improve error correction performance by leveraging machine learning to better identify and correct quantum errors.
Key Contributions
- Novel hybrid architecture combining GNNs and Transformers for quantum error correction
- Proxy loss function enabling end-to-end training through non-differentiable MWPM algorithm
- Demonstrated reduction in Logical Error Rate compared to standard baselines
View Full Abstract
Realizing the full potential of quantum computation requires Quantum Error Correction (QEC). QEC reduces error rates by encoding logical information across redundant physical qubits, enabling errors to be detected and corrected. A common decoder used for this task is Minimum Weight Perfect Matching (MWPM), a graph-based algorithm that relies on edge weights to identify the most likely error chains. In this work, we propose a data-driven decoder named Neural Minimum Weight Perfect Matching (NMWPM). Our decoder utilizes a hybrid architecture that integrates Graph Neural Networks (GNNs) to extract local syndrome features and Transformers to capture long-range global dependencies, which are then used to predict dynamic edge weights for the MWPM decoder. To facilitate training through the non-differentiable MWPM algorithm, we formulate a novel proxy loss function that enables end-to-end optimization. Our findings demonstrate a significant reduction in the Logical Error Rate (LER) over standard baselines, highlighting the advantage of hybrid decoders that combine the predictive capabilities of neural networks with the algorithmic structure of classical matching.
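To see where learned weights slot into MWPM, here is a toy sketch: a brute-force minimum-weight perfect matching over four syndrome defects, with edge weights $-\log p$ supplied by a stand-in predictor. The real NMWPM derives the weights from GNN+Transformer features and uses a proper Blossom-style matcher; everything below is a simplified, hypothetical illustration.

```python
import math

def min_weight_perfect_matching(weights):
    """Brute-force MWPM on an even set of nodes.

    weights[(i, j)] with i < j gives the edge weight. Returns (cost, pairs).
    Exponential-time reference implementation; production decoders use
    Blossom-based matchers instead.
    """
    nodes = sorted({v for e in weights for v in e})
    assert len(nodes) % 2 == 0

    def solve(remaining):
        rest = sorted(remaining)
        if not rest:
            return 0.0, ()
        a = rest[0]
        best = (math.inf, ())
        for b in rest[1:]:
            cost, pairs = solve(frozenset(rest) - {a, b})
            cost += weights[(min(a, b), max(a, b))]
            if cost < best[0]:
                best = (cost, pairs + ((a, b),))
        return best

    return solve(frozenset(nodes))

def predicted_weight(u, v):
    """Stand-in for the learned edge-weight predictor (hypothetical stub).

    NMWPM would produce dynamic weights from syndrome features; here we use
    -log(p) with a toy error probability decaying in Manhattan distance.
    """
    d = abs(u[0] - v[0]) + abs(u[1] - v[1])
    p = 0.1 ** d  # toy per-unit-distance error probability
    return -math.log(p)

# Four defect positions on a toy syndrome lattice.
defects = [(0, 0), (0, 1), (3, 3), (3, 4)]
weights = {(i, j): predicted_weight(defects[i], defects[j])
           for i in range(4) for j in range(i + 1, 4)}
cost, pairs = min_weight_perfect_matching(weights)
# Nearby defects get paired with each other, as expected.
assert sorted(tuple(sorted(p)) for p in pairs) == [(0, 1), (2, 3)]
```

The decoder's output changes only through the `weights` dictionary, which is exactly the interface NMWPM targets: better-predicted edge weights lead the same matching algorithm to more likely error chains.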