Quantum Physics Paper Analysis

This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically evaluated using AI, briefly summarized, and assessed for relevance across four key areas:

  • CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
  • Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
  • Quantum Sensing – Metrology, magnetometry, and precision measurement advances
  • Quantum Networking – QKD, quantum repeaters, and entanglement distribution

Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.

This page updates automatically as new papers are published and shows one week of arXiv publishing (Sunday to Thursday). An archive of previous weeks is at the bottom.

Archive: Mar 1 - Mar 5, 2026
200 Papers This Week
535 CRQC/Y2Q Total
4717 Total Analyzed

Universal quantum computation with group surface codes

Naren Manjunath, Vieri Mattei, Apoorv Tiwari, Tyler D. Ellison

2603.05502 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper introduces group surface codes, which generalize the standard surface code used in quantum error correction. The authors show these codes can perform non-Clifford gates transversally, enabling universal quantum computation by bypassing theoretical limitations that restrict the computational power of standard topological quantum error correction schemes.

Key Contributions

  • Introduction of group surface codes as a generalization of Z2 surface codes
  • Demonstration that non-Clifford gates can be performed transversally in these codes, enabling universal quantum computation
  • Method to bypass Bravyi-König theorem restrictions on topological Pauli stabilizer models
  • Unified framework connecting various recent constructions including sliding group surface codes and magic state preparation
Keywords: surface codes, quantum error correction, universal quantum computation, non-Clifford gates, topological quantum computing
Full Abstract:

We introduce group surface codes, which are a natural generalization of the $\mathbb{Z}_2$ surface code, and equivalent to quantum double models of finite groups with specific boundary conditions. We show that group surface codes can be leveraged to perform non-Clifford gates in $\mathbb{Z}_2$ surface codes, thus enabling universal computation with well-established means of performing logical Clifford gates. Moreover, for suitably chosen groups, we demonstrate that arbitrary reversible classical gates can be implemented transversally in the group surface code. We present the logical operations in terms of a set of elementary logical operations, which include transversal logical gates, a means of transferring encoded information into and out of group surface codes, and preparation and readout. By composing these elementary operations, we implement a wide variety of logical gates and provide a unified perspective on recent constructions in the literature for sliding group surface codes and preparing magic states. We furthermore use tensor networks inspired by ZX-calculus to construct spacetime implementations of the elementary operations. This spacetime perspective also allows us to establish explicit correspondences with topological gauge theories. Our work extends recent efforts in performing universal quantum computation in topological orders without the braiding of anyons, and shows how certain group surface codes allow us to bypass the restrictions set by the Bravyi-König theorem, which limits the computational power of topological Pauli stabilizer models.

Mirror codes: High-threshold quantum LDPC codes beyond the CSS regime

Andrey Boris Khesin, Jonathan Z. Lu

2603.05496 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper introduces mirror codes, a new class of quantum error-correcting codes that go beyond traditional CSS codes and achieve high error correction thresholds. The authors demonstrate these codes can achieve error pseudothresholds around 0.2% with efficient syndrome extraction circuits, making them promising for near-term fault-tolerant quantum devices.

Key Contributions

  • Introduction of mirror codes, a flexible LDPC stabilizer code construction that generalizes beyond CSS codes
  • Development of syndrome extraction circuits with provable fault tolerance using 1-6 ancillae per check
Keywords: quantum error correction, LDPC codes, fault tolerance, stabilizer codes, syndrome extraction
Full Abstract:

The realization of quantum error correction protocols whose logical error rates are suppressed far below physical error rates relies on an intricate combination: the error-correcting code's efficiency, the syndrome extraction circuit's fault tolerance and overhead, the decoder's quality, and the device's constraints, such as physical qubit count and connectivity. This work makes two contributions towards error-corrected quantum devices. First, we introduce mirror codes, a simple yet flexible construction of LDPC stabilizer codes parameterized by a group $G$ and two subsets of $G$ whose total size bounds the check weight. These codes contain all abelian two-block group algebra codes, such as bivariate bicycle (BB) codes. At the same time, they are manifestly not CSS in general, thus deviating substantially from most prior constructions. Fixing a check weight of 6, we find $[[ 60, 4, 10 ]], [[ 36, 6, 6 ]], [[ 48, 8, 6 ]]$, and $[[ 85, 8, 9 ]]$ codes, all of which are not CSS; we also find several weight-7 codes with $kd > n$. Next, we construct syndrome extraction circuits that trade overhead for provable fault tolerance. These circuits use 1-2, 3, and 6 ancillae per check, and respectively are partially fault-tolerant (FT), provably FT on weight-6 CSS codes, and provably FT on all weight-6 stabilizer codes. Using our constructions, we perform end-to-end quantum memory experiments on several representative mirror codes under circuit-level noise. We achieve an error pseudothreshold on the order of $0.2\%$, approximately matching that of the $[[ 144, 12, 12 ]]$ BB code under the same model. These findings position mirror codes as a versatile candidate for fault-tolerant quantum memory, especially on smaller-scale devices in the near term.
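As a quick sanity check on the quoted parameters, the [[n, k, d]] figures above can be compared against the kd > n benchmark the abstract mentions for its weight-7 codes; a minimal sketch:

```python
# [[n, k, d]] notation: n physical qubits encoding k logical qubits at distance d.
# Compare k*d against n for the four non-CSS weight-6 mirror codes listed above.
codes = [(60, 4, 10), (36, 6, 6), (48, 8, 6), (85, 8, 9)]
for n, k, d in codes:
    print(f"[[{n},{k},{d}]]: k*d = {k * d}, n = {n}, k*d > n: {k * d > n}")
```

Consistent with the abstract, none of the weight-6 examples exceeds kd = n; that benchmark is only surpassed by the (unlisted) weight-7 codes.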

Improved Decoding of Quantum Tanner Codes Using Generalized Check Nodes

Olai Å. Mostad, Eirik Rosnes, Hsuan-Yin Lin

2603.05486 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: low

This paper improves the decoding of quantum Tanner codes by grouping check nodes into more powerful generalized check nodes and using enhanced iterative belief propagation decoding. The proposed method significantly outperforms standard quaternary BP decoders and other recent approaches for quantum low-density parity-check codes.

Key Contributions

  • Enhanced generalized belief propagation decoder for quantum Tanner codes that significantly outperforms existing methods
  • Greedy algorithm for combining checks in generalized BP decoding for quantum LDPC codes
  • Theoretical cycle analysis for various quantum LDPC code classes
Keywords: quantum error correction, quantum LDPC codes, belief propagation, quantum Tanner codes, fault tolerance
Full Abstract:

We study the decoding problem for quantum Tanner codes and propose to exploit the underlying local code structure by grouping check nodes into more powerful generalized check nodes for enhanced iterative belief propagation (BP) decoding by decoding the generalized checks using a maximum a posteriori (MAP) decoder as part of the check node processing of each decoding iteration. We mainly study the finite-length setting and show that the proposed enhanced generalized BP decoder for quantum Tanner codes significantly outperforms the standard quaternary BP decoder with memory effects, as well as the recently proposed Relay-BP decoder, even outperforming generalized bicycle (GB) codes with comparable parameters in some cases. For other classes of quantum low-density parity-check (qLDPC) codes, we propose a greedy algorithm to combine checks for generalized BP decoding. However, for GB codes, bivariate bicycle codes, hypergraph product codes, and lifted-product codes, there seems to be limited gain by combining simple checks into more powerful ones. To back up our findings, we also provide a theoretical cycle analysis for the considered qLDPC codes.
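The core move, replacing simple parity checks with a MAP-decoded generalized check node, can be seen in a toy example. The supports, prior, and brute-force MAP below are hypothetical illustrations, not the paper's construction:

```python
from itertools import product

# Two parity checks on overlapping supports, grouped into one generalized check node.
H = [(0, 1, 2), (1, 2, 3)]
p = 0.1  # i.i.d. bit-flip prior

def map_error(syndrome):
    # Exact MAP over the 4-bit support of the generalized check: enumerate every
    # error pattern consistent with the syndrome and keep the most probable one.
    best, best_prob = None, -1.0
    for e in product((0, 1), repeat=4):
        if tuple(sum(e[i] for i in chk) % 2 for chk in H) != syndrome:
            continue
        w = sum(e)
        prob = p ** w * (1 - p) ** (4 - w)
        if prob > best_prob:
            best, best_prob = e, prob
    return best

print(map_error((1, 0)))  # most likely error for syndrome (1, 0): (1, 0, 0, 0)
```

In enhanced generalized BP this kind of local MAP computation replaces the usual check-node update inside each iteration; here it is shown standalone for clarity.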

High-performance syndrome extraction circuits for quantum codes

Armands Strikis, Dan E. Browne, Michael E. Beverland

2603.05481 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops an improved framework for designing syndrome extraction circuits used in quantum error correction, achieving up to an order of magnitude better performance than existing designs. The authors generalize left-right circuit constructions to work with arbitrary CSS quantum codes and introduce formal tools to analyze error propagation and optimize circuit performance.

Key Contributions

  • Generalization of left-right syndrome extraction circuits to arbitrary CSS codes with optimized performance
  • Introduction of formal residual error analysis framework for quantifying circuit-level error propagation
  • Demonstration of order-of-magnitude improvements in logical performance over existing single-ancilla designs
Keywords: quantum error correction, syndrome extraction circuits, CSS codes, fault tolerance, circuit distance
Full Abstract:

We present a fast and effective framework for analysing and designing syndrome-extraction circuits (SECs). Our approach is based on left-right circuits, a general design for SECs which maintain low depth by staggering $X$ and $Z$ checks without interleaving gates. Initially proposed for specific classes of codes, we generalise this construction to arbitrary CSS codes and optimise the circuit structure to achieve low qubit idling time, large effective distance, and reduced minimum-weight failure mechanisms. A key component of our framework is the formal notion of residual errors and their associated distance metrics, which form lightweight tools for capturing error propagation and quantifying the potential harm of circuit-level errors. Applying our automated framework to diverse classes of codes, we observe consistent improvements in logical performance of up to an order of magnitude compared to existing single-ancilla SEC designs. We also use these tools to prove that no non-interleaving SEC can achieve circuit distance $12$ for the gross code, and identify an explicit circuit that we conjecture achieves distance $11$, exceeding previously known constructions.
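The residual-error idea can be seen in miniature by propagating a single ancilla fault through a CNOT schedule. This sketch (schedule and qubit labels are ours, not the paper's) shows why gate ordering determines the weight of the "hook" error left on the data:

```python
# One X-type check: the ancilla "a" controls CNOTs onto four data qubits in sequence.
schedule = [("a", "d0"), ("a", "d1"), ("a", "d2"), ("a", "d3")]

def propagate(schedule, fault_time, fault_qubit="a"):
    # Track which qubits carry an X error, starting from a single fault injected on
    # the ancilla just before time step `fault_time`.
    errs = {fault_qubit}
    for t, (ctrl, tgt) in enumerate(schedule):
        if t < fault_time:
            continue
        if ctrl in errs:
            errs.add(tgt)  # a CNOT copies an X on its control onto its target
    errs.discard("a")  # the ancilla is measured and reset; only data errors remain
    return sorted(errs)

for t in range(len(schedule) + 1):
    print(t, propagate(schedule, t))
```

A fault midway through the schedule leaves a residual weight-2 data error (e.g. fault time 2 yields ['d2', 'd3']), which is exactly the kind of circuit-level failure mechanism the residual-error distance metrics are designed to quantify.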

Low-depth amplitude estimation via statistical eigengap estimation

Po-Wei Huang, Bálint Koczor

2603.05475 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: medium • Network: none

This paper develops new algorithms for quantum amplitude estimation by reframing the problem as estimating energy gaps in effective Hamiltonians rather than using traditional phase estimation approaches. The methods achieve optimal performance while using simpler classical post-processing and offering flexible tradeoffs between query complexity and circuit depth.

Key Contributions

  • Reframes amplitude estimation as statistical eigengap estimation of effective Hamiltonians
  • Develops algorithms achieving Heisenberg-limited scaling with simplified classical post-processing
  • Establishes optimal query-depth tradeoffs for low-depth quantum circuits with theoretical guarantees
Keywords: amplitude estimation, quantum algorithms, Heisenberg limit, fault-tolerant quantum computing, eigengap estimation
Full Abstract:

Amplitude estimation, in its original form, is formulated as phase estimation upon the Grover walk operator. Since its introduction, subsequent improvements to the algorithm have removed the use of phase estimation and introduced low-depth variants that trade speedup factors for lower circuit depth. We make the key observation that amplitude estimation is equivalent to estimating the energy gap of an effective Hamiltonian, whereby discrete time evolution is generated by amplitude amplification. This enables us to develop two amplitude estimation algorithms for both Heisenberg-limited and low-depth circuit regimes, inspired by statistical phase estimation techniques developed for seemingly unrelated early fault-tolerant ground-state energy estimation. Our approach has significant technical and practical benefits, and uses simplified classical post-processing compared to prior techniques -- our theoretical and numerical results indicate that we achieve state-of-the-art performance. Furthermore, while our approach achieves Heisenberg-limited scaling, we also establish optimal query-depth tradeoffs up to polylogarithmic factors in the low-depth regime with provable theoretical guarantees. Due to its flexibility, generality, and robustness, we expect our approach to be a key enabler for a broad range of early fault-tolerant applications.
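The key observation can be checked numerically in the two-dimensional invariant subspace: for amplitude a = sin²(θ), the Grover walk acts as a rotation by 2θ, so its eigenphase gap encodes a. A minimal sketch (classical linear algebra only, not the paper's algorithm):

```python
import numpy as np

# For amplitude a = sin(theta)^2, the Grover walk restricted to the span of the
# "good" and "bad" components is a rotation by 2*theta, with eigenphases +/- 2*theta.
# Estimating that eigenphase gap therefore recovers the amplitude.
a = 0.3
theta = np.arcsin(np.sqrt(a))
G = np.array([[np.cos(2 * theta), -np.sin(2 * theta)],
              [np.sin(2 * theta),  np.cos(2 * theta)]])
phases = np.sort(np.angle(np.linalg.eigvals(G)))
gap = phases[1] - phases[0]      # equals 4*theta for 2*theta < pi
a_est = np.sin(gap / 4) ** 2
print(round(a_est, 10))
```

The paper's contribution is to estimate this gap statistically, with techniques borrowed from early fault-tolerant ground-state energy estimation, rather than via phase estimation.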

Spatiotemporal Pauli processes: Quantum combs for modelling correlated noise in quantum error correction

John F Kam, Angus Southwell, Spiro Gicev, Muhammad Usman, Kavan Modi

2603.05474 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: low • Network: low

This paper introduces Spatiotemporal Pauli Processes (SPPs), a mathematical framework that bridges the gap between simple error models used in quantum error correction and the complex, correlated noise that occurs in real quantum devices. The authors demonstrate how this framework can model realistic noise patterns and show that certain types of correlated noise can cause complete breakdown of quantum error correction codes.

Key Contributions

  • Introduction of Spatiotemporal Pauli Processes framework that maps arbitrary non-Markovian quantum dynamics to tractable multi-time Pauli processes
  • Demonstration that correlated noise can cause complete breakdown of surface code error correction through critical slowing down and macroscopic error avalanches
  • Development of efficient tensor network representations and transfer operator diagnostics for analyzing correlated quantum noise
Keywords: quantum error correction, correlated noise, surface codes, non-Markovian dynamics, process tensors
Full Abstract:

Correlated noise is a critical failure mode in quantum error correction (QEC), as temporal memory and spatial structure concentrate faults into error bursts that undermine standard threshold assumptions. Yet, a fundamental gap persists between the stochastic Pauli models ubiquitous in QEC and the microscopic, non-Markovian descriptions of physical device dynamics. We close this gap by introducing Spatiotemporal Pauli Processes (SPPs). By applying a multi-time Pauli twirl -- operationally realised by Pauli-frame randomisation -- to a general process tensor, we map arbitrary multi-time, non-Markovian dynamics to a multi-time Pauli process. This process is represented by a process-separable comb, or equivalently, a well-defined joint probability distribution over Pauli trajectories in spacetime. We show that SPPs inherit efficient tensor network representations whose bond dimensions are bounded by the environment's Liouville-space dimension. To interpret these structures, we develop transfer operator diagnostics linking spectra to correlation decay, and exact hidden Markov representations for suitable classes of SPPs. We demonstrate the framework via surface code memory and stability simulations of up to distance 19 for (i) a temporally correlated "storm" model that tunes correlation length at fixed marginal error rates, and (ii) a genuinely spatiotemporal 2D quantum cellular automaton bath that maps exactly to a nonlinear probabilistic cellular automaton under twirling. Tuning coherent bath interactions drives the system into a pseudo-critical regime, exhibiting critical slowing down and macroscopic error avalanches that cause a complete breakdown of surface code distance scaling. Together, these results justify SPPs as an operationally grounded, scalable toolkit for modelling, diagnosing, and benchmarking correlated noise in QEC.
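The single-time special case of the Pauli twirl at the heart of SPPs is easy to demonstrate: averaging a coherent error over Pauli conjugations turns it into a stochastic Pauli channel. A sketch of that standard single-qubit case (the multi-time twirl of the paper generalizes this to whole process tensors):

```python
import numpy as np

# Pauli-twirl a coherent over-rotation exp(-i*eps*X/2): the twirled channel is a
# Pauli channel applying X with probability sin^2(eps/2), identity otherwise.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1j], [1j, 0.0]])
Z = np.diag([1.0, -1.0])
eps = 0.2
U = np.cos(eps / 2) * I - 1j * np.sin(eps / 2) * X

def twirl(rho):
    # Average P . U (P rho P) U^dag . P over the Pauli group (Paulis are self-adjoint).
    return sum(P @ (U @ (P @ rho @ P) @ U.conj().T) @ P for P in (I, X, Y, Z)) / 4

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
out = twirl(rho)
# Coherences vanish; the output is diag(cos^2(eps/2), sin^2(eps/2)).
print(np.round(out.real, 6))
```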

Heuristics for Shuttling Sequence Optimization for a Linear Segmented Trapped-Ion Quantum Computer

J. Durandau, C. A. Brunet, F. Schmidt-Kaler, U. Poschinger, F. Mailhot, Y. Bérubé-Lauzière

2603.05464 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops optimization algorithms for moving trapped ions between different zones in a linear ion-trap quantum computer, focusing on minimizing the number of physical ion movements needed to execute quantum circuits. The authors present heuristics for determining optimal initial ion ordering and demonstrate improved performance for quantum Fourier transform-like circuits.

Key Contributions

  • Development of heuristic algorithms for optimizing ion shuttling sequences in trapped-ion quantum computers
  • Implementation of qubit mapping strategies to determine optimal initial ion ordering
  • Demonstration that multiple interaction zones can reduce register reordering overhead
Keywords: trapped-ion quantum computing, shuttling optimization, qubit mapping, quantum Fourier transform, ion displacement
Full Abstract:

An algorithm for the generation of shuttling sequences is necessary for the operation of a linear segmented ion-trap quantum computer. The present work provides an implementation of an algorithm that produces sequences proved to be optimal for circuits with a quantum Fourier transform-like structure. Such optimality was proved in previous work of our group. We first present an approach for qubit mapping, i.e. determining the initial ordering of the ions, termed the common ion order, and develop a heuristic algorithm for its implementation. We explain how this heuristic is integrated in the shuttling sequence generation algorithm described in the previous work. The results show the increased performance of the heuristic in terms of reducing the number of required shuttling operations. The number of ion displacements required exhibits a polynomial increase in terms of the number of qubits, such that these operations become the main contribution to the overall resource cost. Furthermore, we show that multiple zones for gate interactions can reduce the amount of qubit register reordering.
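A crude intuition for why register reordering dominates cost: if only adjacent-transposition-like moves are available on a linear register, the minimum number of such moves to reach a target ordering is the inversion count of the permutation. This is an illustrative analogy only, not the paper's shuttling model or heuristic:

```python
# Inversion count = minimum number of adjacent swaps needed to sort the register.
# A toy stand-in for the reordering overhead that the qubit-mapping heuristic and
# multiple interaction zones aim to reduce.
def inversion_count(order):
    return sum(1 for i in range(len(order)) for j in range(i + 1, len(order))
               if order[i] > order[j])

print(inversion_count([3, 1, 2, 0]))  # 5 adjacent swaps to sort
print(inversion_count([0, 1, 2, 3]))  # already sorted: 0
```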

Constant depth magic state cultivation with Clifford measurements by gauging

Bence Hetényi, Benjamin J. Brown, Dominic J. Williamson

2603.05429 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper introduces a method to improve magic state preparation for quantum error correction by using constant-depth measurements instead of depth-scaling measurements, making the approach practical for larger quantum codes. The technique uses 'gauging' to perform logical measurements on color codes with better scalability than previous cultivation methods.

Key Contributions

  • Development of constant-depth logical measurement circuits for color codes using gauging technique
  • Achievement of 10^-12 logical error rates for d=7 color codes with improved scalability over magic state cultivation
Keywords: magic states, quantum error correction, color codes, Clifford measurements, fault tolerance
Full Abstract:

Magic states are a scarce resource for two-dimensional qubit stabilizer codes. Magic state cultivation was recently proposed to reduce the cost of magic state preparation by measuring the transversal Clifford operator of the color code. Cultivation achieves $\sim 10^{-9}$ logical error rates for the $d=5$ color code, with substantially lower space-time overhead than magic state distillation. However, due to the $\mathcal{O}(d)$ depth of the Clifford measurement circuit, magic state cultivation becomes impractical for $d>5$. Here, we perform logical $XS^\dagger$ measurements on the color code by gauging a transversal Clifford gate, resulting in a constant-depth logical measurement circuit. We employ repeated gauging measurements with post-selection rather than performing error correction on the Clifford stabilizer code that emerges during the gauging protocol, thus gaining simplicity at the cost of scalability. Our protocol requires a regular square grid connectivity and yields logical error rates comparable to magic state cultivation. The $d=7$ version of our protocol gives access to the $10^{-12}$ logical error rate regime at $0.05\%$ physical error rate while retaining more than $1\%$ of the shots after the equivalent of the cultivation stage.
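The quoted retention figure translates directly into repetition overhead. The numbers come from the abstract; the back-of-envelope arithmetic is ours:

```python
# Post-selection overhead: if a fraction r of shots survives, the number of attempts
# per accepted magic state is geometrically distributed with mean 1/r. With the
# abstract's >1% retention for the d=7 protocol, that is at most ~100 attempts
# on average per kept state.
retention = 0.01
expected_attempts = 1 / retention
print(expected_attempts)
```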

Optimal Decoding with the Worm

Zac Tobias, Nikolas P. Breuckmann, Benedikt Placke

2603.05428 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper introduces a new quantum error correction decoder called the 'worm algorithm' that uses Markov-Chain Monte-Carlo methods to optimally decode errors in quantum low-density parity-check (qLDPC) codes. The decoder can handle various quantum error correction codes including surface codes and hyperbolic surface codes, and demonstrates superior performance compared to existing decoding methods.

Key Contributions

  • Novel worm algorithm decoder for optimal decoding of matchable qLDPC codes using MCMC methods
  • Rigorous analysis of mixing time guarantees and connection to defect susceptibility
  • Demonstration of superior decoding thresholds compared to minimum-weight perfect matching
  • Extension to correlated decoding schemes that work beyond independent error models
Keywords: quantum error correction, qLDPC codes, surface codes, MCMC decoding, fault tolerance
Full Abstract:

We propose a new decoder for "matchable" qLDPC codes that uses a Markov-Chain Monte-Carlo algorithm -- called the worm algorithm -- to approximately compute the probabilities of logical error classes given a syndrome. The algorithm hence performs (approximate) optimal decoding, and we expect it to be computationally efficient in certain settings. The algorithm is applicable to decoding random errors for the surface code, the honeycomb Floquet code, and hyperbolic surface codes with constant rate, in all cases with and without measurement errors. The efficiency of the decoder hinges on the mixing time of the underlying Markov chain. We give a rigorous mixing time guarantee in terms of a quantity that we call the defect susceptibility. We connect this quantity to the notion of disorder operators in statistical mechanics and use this to argue (non-rigorously) that the algorithm is efficient for typical errors in the entire decodable phase. We also demonstrate the effectiveness of the worm decoder numerically by applying it to the surface code with measurement errors as well as a family of hyperbolic surface codes. For most codes, the matchability condition restricts direct application of our decoder to noise models with independent bit-flip, phase-flip, and measurement errors. However, our decoder returns soft information which makes it useful also in heuristic "correlated decoding" schemes which work beyond this simple setting. We demonstrate this by simulating decoding of the surface code under depolarizing noise, and we find that the threshold for "correlated worm decoding" is substantially higher than for both minimum-weight perfect matching and for correlated matching.
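The idea of "optimal decoding by estimating logical-class probabilities with MCMC" can be shown on the smallest possible example. This is a toy stand-in only: a Metropolis chain over the two logical cosets of one syndrome of the 3-bit repetition code, not the worm moves themselves:

```python
import random

random.seed(1)
p = 0.1  # i.i.d. bit-flip rate

def prob(e):
    w = sum(e)
    return p ** w * (1 - p) ** (len(e) - w)

def estimate_low_weight_class(e0, steps=50000):
    # Adding the logical operator (flip all bits) preserves the syndrome and toggles
    # the logical class, so this Metropolis chain mixes over both cosets.
    e, hits = list(e0), 0
    for _ in range(steps):
        flipped = [1 - b for b in e]
        if random.random() < min(1.0, prob(flipped) / prob(e)):
            e = flipped
        hits += e == list(e0)
    return hits / steps

# Syndrome (1,0) is matched by errors (1,0,0) and (0,1,1); the exact posterior of the
# low-weight class is 0.081 / (0.081 + 0.009) = 0.9, which the chain estimates.
print(estimate_low_weight_class((1, 0, 0)))
```

Picking the class with the larger estimated probability is exactly the "(approximate) optimal decoding" step; the worm algorithm does this for much larger matchable codes via local defect moves.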

Decay Rates in Interleaved Benchmarking with Single-Qubit References

Ilya A. Simakov, Arina V. Zotova, Tatyana A. Chudakova, Alena S. Kazmina, Artyom M. Polyanskiy, Nikolay N. Abramov, Mikhail A. Tarkhov, Alexander M. M...

2603.05422 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops improved theoretical foundations for characterizing multi-qubit quantum gates using cross-entropy benchmarking with single-qubit reference sequences. The authors identify and correct systematic errors in current approaches, providing more accurate gate fidelity measurements that match standard benchmarking methods while achieving higher precision.

Key Contributions

  • Derived analytical expression for joint decay of simultaneous single-qubit reference sequences
  • Introduced refined expression for interleaved gate fidelity estimation that corrects systematic overestimation
  • Validated theory experimentally on superconducting quantum processor showing agreement with standard interleaved randomized benchmarking
Keywords: cross-entropy benchmarking, gate fidelity, interleaved randomized benchmarking, multi-qubit gates, superconducting quantum processor
Full Abstract:

Cross-entropy benchmarking (XEB) with single-qubit reference sequences is widely used to characterize multi-qubit gates in large-scale quantum processors, despite the lack of a rigorous theoretical justification. Here we show that the commonly employed additive single-qubit errors approximation underlying this approach breaks down and leads to a systematic overestimation of gate fidelities. We derive an analytical expression for the joint decay of simultaneous single-qubit reference sequences and introduce a refined expression for the interleaved gate fidelity estimation. Experiments on a superconducting quantum processor validate the theory and demonstrate that fidelities obtained using XEB with single-qubit references agree with those extracted from standard interleaved randomized benchmarking (IRB), while achieving higher precision due to reduced reference-sequence errors. Our results establish theoretical foundation for the single-qubit-based XEB and show that, with appropriate post-processing, it enables a reliable and robust approach for entangling gates benchmarking without the need for multi-qubit Clifford reference sequences.
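The arithmetic that the paper refines is the standard interleaved-benchmarking estimate: reference sequences decay as A·p_ref^m, interleaved ones as A·(p_ref·p_gate)^m, and the ratio of fitted decay rates recovers p_gate. A noise-free sketch with hypothetical numbers (the paper's point is precisely that this naive estimate becomes biased under the additive single-qubit-error approximation):

```python
import math

def fit_decay(ms, ys):
    # Log-linear least squares: log y = log A + m * log p, return exp(slope) = p.
    n, sx = len(ms), sum(ms)
    sy = sum(math.log(y) for y in ys)
    sxx = sum(m * m for m in ms)
    sxy = sum(m * math.log(y) for m, y in zip(ms, ys))
    return math.exp((n * sxy - sx * sy) / (n * sxx - sx * sx))

p_ref, p_gate = 0.995, 0.98  # hypothetical depolarizing parameters
ms = [1, 5, 10, 20, 50]
ref = [0.5 * p_ref ** m for m in ms]               # reference decay
inter = [0.5 * (p_ref * p_gate) ** m for m in ms]  # interleaved decay
print(round(fit_decay(ms, inter) / fit_decay(ms, ref), 6))  # recovers 0.98
```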

Recursive Magic State Distillation on the Surface Code

Jonathan E. Moussa

2603.05409 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops a more efficient method for preparing magic states needed for quantum computation by using recursive 15-to-1 distillation with lattice surgery on surface codes. The approach reduces the physical qubit requirements and time needed to create high-quality magic states, though it requires lower physical error rates to be effective.

Key Contributions

  • Recursive implementation of 15-to-1 magic state distillation reducing resource overhead
  • Specific resource estimates for T and CCZ magic state preparation on surface codes with lattice surgery
Keywords: magic state distillation, surface code, lattice surgery, fault-tolerant quantum computing, error correction
Full Abstract:

I reduce the cost to prepare magic states with lattice surgery operations on the surface code by using a recursive implementation of 15-to-1 magic state distillation. On a rotated surface code with distance $d$, $|T\rangle$ preparation requires a $d$-by-$3 d$ grid of data qubits for up to $15 d$ error correction cycles, and $|CCZ\rangle$ preparation requires a $3 d$-by-$2 d$ grid for up to $10.5 d$ cycles. However, a significantly lower physical error threshold than that of the underlying surface code is required to match the error probability of the output magic state with the logical error rate of the output surface code at large code distances.
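The leading-order error suppression of the 15-to-1 protocol, p_out ≈ 35·p_in³, compounds under recursion; this illustrative arithmetic shows why recursing is so effective, though it is not the paper's lattice-surgery resource analysis:

```python
# Standard 15-to-1 magic state distillation suppresses infidelity cubically at each
# level, with a combinatorial prefactor of 35 (leading order, ignoring the paper's
# point that the required physical error rate is lower than the surface code threshold).
def distill(p, rounds):
    for _ in range(rounds):
        p = 35 * p ** 3
    return p

p_in = 1e-3
print(distill(p_in, 1))  # one round: 3.5e-08
print(distill(p_in, 2))  # two rounds: ~1.5e-21
```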

Generalized matching decoders for 2D topological translationally-invariant codes

Shi Jie Samuel Tan, Ian Gill, Eric Huang, Pengyu Liu, Chen Zhao, Hossein Dehghani, Aleksander Kubica, Hengyun Zhou, Arpit Dua

2603.05402 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper develops new graph-matching decoders for 2D topological quantum error-correcting codes like bivariate bicycle codes, which work by converting the syndrome information into an equivalent toric code representation that can be efficiently decoded using graph-matching techniques.

Key Contributions

  • Development of graph-matching decoders for general translationally-invariant topological codes
  • Proof that these decoders correct errors up to a constant fraction of code distance with provable performance guarantees
  • Numerical demonstration of competitive performance with existing decoders for bivariate bicycle codes
Keywords: quantum error correction, topological codes, graph matching, fault-tolerant quantum computing, bivariate bicycle codes
Full Abstract:

Two-dimensional topological translationally-invariant (TTI) quantum codes, such as the toric code (TC) and bivariate bicycle (BB) codes, are promising candidates for fault-tolerant quantum computation. For such codes to be practically relevant, their decoders must successfully correct the most likely errors while remaining computationally efficient. For the TC, graph-matching decoders satisfy both requirements and, additionally, admit provable performance guarantees. Given the equivalence between TTI codes and (multiple copies of) the TC, one may then ask whether TTI codes also admit analogous graph-matching decoders. In this work, we develop a graph-matching approach to decoding general TTI codes. Intuitively, our approach coarse-grains the TTI code to obtain an effective description of the syndrome in terms of TC excitations, which can then be removed using graph-matching techniques. We prove that our decoders correct errors of weight up to a constant fraction of the code distance and achieve non-zero code-capacity thresholds. We further numerically study a variant optimized for practically relevant BB codes and observe performance comparable to that of the belief propagation with ordered statistics decoder. Our results indicate that graph-matching decoders are a viable approach to decoding BB codes and other TTI codes.
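Matching decoders are easiest to picture in one dimension: repetition-code syndrome defects on a line are paired so the total correction length is minimal, and for points on a line, pairing consecutive sorted defects is optimal. A toy analogue of removing toric-code excitations by graph matching; the paper's coarse-graining construction for general TTI codes is far more general:

```python
# Minimum-weight pairing of defects on a line: sort, then pair consecutive defects.
def match_defects(positions):
    pts = sorted(positions)
    pairs = [(pts[i], pts[i + 1]) for i in range(0, len(pts), 2)]
    return pairs, sum(b - a for a, b in pairs)  # pairs and total correction length

print(match_defects([7, 1, 4, 2]))  # pairs (1,2) and (4,7), total cost 4
```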

QGPU: Parallel logic in quantum LDPC codes

Boren Gu, Andy Zeyi Liu, Armanda O. Quintavalle, Qian Xu, Jens Eisert, Joschka Roffe

2603.05398 • Mar 5, 2026

CRQC/Y2Q RELEVANT • QC: high • Sensing: none • Network: none

This paper introduces clustered-cyclic codes, a new family of quantum low-density parity-check (LDPC) codes that enable highly parallel logical operations, and proposes parallel product surgery techniques to perform multiple logical measurements simultaneously with fixed overhead.

Key Contributions

  • Introduction of clustered-cyclic quantum LDPC codes with competitive parameters like [[136,8,14]] and [[198,18,10]]
  • Development of parallel product surgery protocol enabling surface-code-style maximal parallelism for logical operations
  • Proof that parallel product surgery preserves code distance and demonstration of fault-tolerant Clifford group generation
Keywords: quantum error correction, LDPC codes, fault-tolerant quantum computing, logical qubits, parallel operations
Full Abstract:

Quantum error correction is critical to the design and manufacture of scalable quantum computing systems. Recently, there has been growing interest in quantum low-density parity-check codes as a resource-efficient alternative to surface codes. Their adoption is hindered by the difficulty of compiling fault-tolerant logical operations. A key challenge is that logical qubits do not necessarily map to disjoint sets of physical qubits, which limits parallelism. We introduce clustered-cyclic codes, a quantum low-density parity-check code family with finite-size instances such as [[136,8,14]] and [[198,18,10]] that are competitive with state-of-the-art constructions. These codes admit a directly addressable logical basis, enabling highly parallel logical measurement layers. To leverage this structure, we propose parallel product surgery for quantum product codes. Using an auxiliary copy of the data patch and an engineered product-connection structure, the protocol performs many logical Pauli-product measurements in a single surgery round with small, fixed overhead. For clustered-cyclic codes, this yields surface-code-style maximal parallelism: up to k/2 disjoint Pauli-product measurements per round under explicit algebraic conditions. We prove that parallel product surgery preserves the code distance for hypergraph product codes and numerically verify distance preservation for the listed clustered-cyclic instances with k = 8. Finally, for the [[24,8,3]] clustered-cyclic code, treating half of the logical qubits as auxiliaries enables arbitrary parallel CNOTs on disjoint pairs; combined with symmetry-derived operations, these gates generate the full Clifford group fault-tolerantly.

SpiderCat: Optimal Fault-Tolerant Cat State Preparation

Andrey Boris Khesin, Sarah Meng Li, Boldizsár Poór, Benjamin Rodatz, John van de Wetering, Richie Yeung

2603.05391 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper develops optimal methods for preparing fault-tolerant CAT states (multi-qubit entangled states) needed for quantum error correction, using graph theory to find circuits that minimize the number of CNOT gates while preventing error spread. The authors provide both theoretical lower bounds and practical constructions that significantly improve upon previous resource requirements.

Key Contributions

  • Derived formal lower bounds on CNOT gate requirements for fault-tolerant n-qubit CAT state preparation using ZX-diagram analysis and graph theory
  • Provided explicit optimal circuit constructions for CAT states up to n≤100 qubits that significantly improve resource counts over previous methods
  • Developed constant-depth fault-tolerant implementations using O(n) ancilla qubits and O(n) CNOT gates
fault-tolerant quantum computing · CAT states · GHZ states · quantum error correction · CNOT optimization

The ability to fault-tolerantly prepare CAT states, also known as multi-qubit GHZ states, is an important primitive for quantum error correction. It is required for Shor-style syndrome extraction, and can also be used as a subroutine for doing fault-tolerant state preparation of CSS codewords. Existing approaches to fault-tolerant CAT state preparations have been found using computationally expensive heuristics involving SAT solving, reinforcement learning, or exhaustive analysis. In this paper, we constructively find optimal circuits for CAT states in a more scalable way. In particular, we derive formal lower bounds on the number of CNOT gates required for circuits implementing $n$-qubit CAT states that do not spread errors of weight at most $t$ for $1\leq t \leq 5$. We do this by using fault-equivalent rewrites of ZX-diagrams to reduce it to a problem of characterising certain 3-regular simple graphs. We then provide families of such optimal graphs for infinitely many values of $n$ and $t\leq5$. By encoding the construction of optimal graphs as a constraint satisfaction problem we find explicit constructions for circuits that match this lower bound on CNOT count for all $n\leq50$ and $t \leq 5$ and for nearly all pairs $(n,t)$ with $n\leq 100$ and $t\leq 5$ or $n\leq 50$ and $t\leq 7$, significantly extending the regimes that were achievable by previous methods and improving the resource counts for existing constructions. We additionally show how to trade CNOT count against depth, allowing us to construct constant-depth fault-tolerant implementations using $O(n)$ ancilla and $O(n)$ CNOT gates.
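The CNOT-count question above is easiest to see against the plain, non-fault-tolerant baseline: a Hadamard followed by a CNOT chain prepares an n-qubit CAT state with n-1 CNOTs. A minimal numpy statevector sketch (the helper names and qubit ordering are ours, and this is the naive circuit, not the paper's fault-tolerant construction):

```python
import numpy as np

def apply_h(state, q, n):
    """Hadamard on qubit q (qubit 0 = most significant bit) of an n-qubit statevector."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    s = state.reshape(2**q, 2, 2**(n - q - 1))
    return np.einsum('ab,ibj->iaj', h, s).reshape(-1)

def apply_cnot(state, c, t, n):
    """CNOT with control c and target t: permute basis amplitudes."""
    out = state.copy()
    for i in range(2**n):
        if (i >> (n - 1 - c)) & 1:                   # control bit set
            out[i ^ (1 << (n - 1 - t))] = state[i]   # flip target bit
    return out

def cat_state(n):
    """Prepare (|0...0> + |1...1>)/sqrt(2) with one H and a chain of n-1 CNOTs."""
    state = np.zeros(2**n)
    state[0] = 1.0
    state = apply_h(state, 0, n)
    for q in range(n - 1):
        state = apply_cnot(state, q, q + 1, n)
    return state

psi = cat_state(4)
print(abs(psi[0])**2 + abs(psi[-1])**2)   # all weight sits on |0000> and |1111>
```

The chain is exactly what the paper must avoid: a single X fault early in the chain propagates through every later CNOT into a high-weight error, which is why fault-tolerant constructions need the extra graph structure and ancillas discussed in the abstract.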

Achieving Thresholds via Standalone Belief Propagation on Surface Codes

Pedro Hack, Luca Menti, Francisco Lazaro, Alexandru Paler

2603.05381 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new belief propagation decoders for quantum error correction that achieve threshold performance on surface codes by operating on decoding graphs rather than traditional Tanner graphs. The approach achieves performance comparable to minimum weight perfect matching decoders while being more suitable for hardware acceleration.

Key Contributions

  • Novel belief propagation decoders that achieve thresholds on surface codes by operating on decoding graphs instead of Tanner graphs
  • Hardware-scalable decoder implementation that matches minimum weight perfect matching performance
quantum error correction · surface codes · belief propagation · decoding threshold · fault tolerance

The usual belief propagation (BP) decoders generally exchange local information on the Tanner graph of the quantum error-correcting (QEC) code and, in particular, are known not to have a threshold for the surface code. We propose novel BP decoders that exchange messages on the decoding graph and obtain code capacity thresholds via standalone BP for the surface code under depolarizing noise. Our approach, similarly to the minimum weight perfect matching (MWPM) decoder, is applicable to any graphlike QEC code. The thresholds observed with our decoders are close to those obtained by MWPM. This result opens the path towards scalable hardware-accelerated implementations of MWPM-compatible decoders.
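For contrast with the decoding-graph message passing proposed here, the conventional Tanner-graph BP pass is easy to sketch. A minimal min-sum decoder on the 3-bit repetition code (names and the tiny example are ours; this is the baseline style of BP that the abstract says lacks a surface-code threshold, not the paper's method):

```python
import numpy as np

def minsum_bp(H, llr, iters=10):
    """Min-sum belief propagation on the Tanner graph of parity-check matrix H.

    llr: channel log-likelihood ratios (positive favours bit = 0).
    Returns the hard-decision decoded bits."""
    m, n = H.shape
    msg_vc = np.tile(llr, (m, 1)) * H      # variable-to-check messages
    msg_cv = np.zeros((m, n))
    for _ in range(iters):
        # check-to-variable: product of signs times min magnitude of the others
        for c in range(m):
            nbrs = np.flatnonzero(H[c])
            for v in nbrs:
                others = nbrs[nbrs != v]
                msg_cv[c, v] = (np.prod(np.sign(msg_vc[c, others]))
                                * np.min(np.abs(msg_vc[c, others])))
        # variable-to-check: channel LLR plus the other incoming check messages
        for v in range(n):
            nbrs = np.flatnonzero(H[:, v])
            for c in nbrs:
                msg_vc[c, v] = llr[v] + sum(msg_cv[d, v] for d in nbrs if d != c)
    posterior = llr + (msg_cv * H).sum(axis=0)
    return (posterior < 0).astype(int)

H = np.array([[1, 1, 0], [0, 1, 1]])       # 3-bit repetition code
p = 0.1
llr0 = np.log((1 - p) / p)
received = np.array([1, 0, 0])             # single bit flip on bit 0
llr = np.where(received == 0, llr0, -llr0)
decoded = minsum_bp(H, llr)
print(decoded)                             # recovers the all-zero codeword
```

On quantum codes the difficulty is that short cycles and degenerate errors break this style of message passing, which motivates moving the messages onto the decoding graph as the paper does.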

Simplified circuit-level decoding using Knill error correction

Ewan Murphy, Subhayan Sahu, Michael Vasmer

2603.05320 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper investigates Knill error correction, a quantum error correction technique that uses a single round of measurements instead of repeated syndrome measurements, requiring an auxiliary logical Bell state. The authors prove its fault tolerance and show that it can use simpler decoding algorithms, potentially reducing the classical control requirements for large-scale quantum computers.

Key Contributions

  • Theoretical proof of fault tolerance for Knill error correction under circuit-level noise
  • Demonstration that Knill error correction can use simpler code-capacity decoders instead of complex circuit-level decoders
  • Numerical benchmarking of the protocol's performance on quantum low-density parity-check codes
quantum error correction · Knill error correction · fault tolerance · quantum decoding · circuit-level noise

Quantum error correction will likely be essential for building a large-scale quantum computer, but it comes with significant requirements at the level of classical control software. In particular, a quantum error-correcting code must be supplemented with a fast and accurate classical decoding algorithm. Standard techniques for measuring the parity-check operators of a quantum error-correcting code involve repeated measurements, which both increases the amount of data that needs to be processed by the decoder, and changes the nature of the decoding problem. Knill error correction is a technique that replaces repeated syndrome measurements with a single round of measurements, but requires an auxiliary logical Bell state. Here, we provide a theoretical and numerical investigation into Knill error correction from the perspective of decoding. We give a self-contained description of the protocol, prove its fault tolerance under locally decaying (circuit-level) noise, and numerically benchmark its performance for quantum low-density parity-check codes. We show analytically and numerically that the time-constrained decoding problem for Knill error correction can be solved using the same decoder used for the simpler code-capacity noise model, illustrating that Knill error correction may alleviate the stringent requirements on classical control required for building a large-scale quantum computer.
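Knill error correction teleports the data through a logical Bell state, with the Bell-measurement record fixing a Pauli correction. The Pauli-frame mechanics show up already in unencoded single-qubit teleportation; a numpy sketch (qubit ordering and helper names are ours, and this illustrates only the teleportation step, not the logical protocol):

```python
import numpy as np

HAD = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def apply_h(state, q, n):
    s = state.reshape(2**q, 2, 2**(n - q - 1))
    return np.einsum('ab,ibj->iaj', HAD, s).reshape(-1)

def apply_cnot(state, c, t, n):
    out = state.copy()
    for i in range(2**n):
        if (i >> (n - 1 - c)) & 1:
            out[i ^ (1 << (n - 1 - t))] = state[i]
    return out

def project(state, q, outcome, n):
    """Project qubit q onto `outcome` and renormalize (post-selected branch)."""
    keep = np.array([((i >> (n - 1 - q)) & 1) == outcome for i in range(2**n)])
    out = np.where(keep, state, 0.0)
    return out / np.linalg.norm(out)

alpha, beta = 0.6, 0.8                      # arbitrary real input amplitudes
data = np.array([alpha, beta])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # Bell pair on qubits 1 and 2
state0 = np.kron(data, bell)                # qubit 0 carries the data

results = []
for m0 in (0, 1):
    for m1 in (0, 1):
        s = apply_cnot(state0, 0, 1, 3)     # Bell measurement: CNOT, H, measure
        s = apply_h(s, 0, 3)
        s = project(s, 0, m0, 3)
        s = project(s, 1, m1, 3)
        base = (m0 << 2) | (m1 << 1)        # qubit-2 amplitudes for this branch
        out = s[base:base + 2]
        # Pauli-frame correction chosen by the measurement record
        corrected = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ out
        results.append(np.allclose(corrected, data))
print(all(results))
```

At the logical level, the same record also carries the error syndrome, which is why a single measurement round suffices and why the decoding problem reduces toward the code-capacity case.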

Robust and optimal control of open quantum systems

Zi-Jie Chen, Hongwei Huang, Lida Sun, Qing-Xuan Jie, Jie Zhou, Ziyue Hua, Yifang Xu, Weiting Wang, Guang-Can Guo, Chang-Ling Zou, Luyan Sun, Xu-Bo Zou

2603.05249 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: low

This paper develops an improved algorithm for controlling quantum systems that accounts for real-world imperfections like noise and parameter uncertainties. The researchers demonstrate their approach experimentally using superconducting quantum circuits, achieving an ultra-low infidelity of about 0.60%.

Key Contributions

  • Enhanced scalable algorithm for robust quantum control in open systems with noise and imperfections
  • Experimental validation achieving 0.60% infidelity in superconducting quantum circuits
quantum control · open quantum systems · decoherence · superconducting circuits · quantum error mitigation

Recent advancements in quantum technologies have highlighted the importance of mitigating system imperfections, including parameter uncertainties and decoherence effects, to improve the performance of experimental platforms. However, most of the previous efforts in quantum control are devoted to the realization of arbitrary unitary operations in a closed quantum system. Here, we improve the algorithm that suppresses system imperfections and noise, providing notably enhanced scalability for robust and optimal control of open quantum systems. Through experimental validation in a superconducting quantum circuit, we demonstrate that our approach outperforms its conventional counterpart for closed quantum systems with an ultra-low infidelity of about $0.60\%$, while the complexity of this algorithm exhibits the same scaling, with only a modest increase in the prefactor. This work represents a notable advancement in quantum optimal control techniques, paving the way for realizing quantum-enhanced technologies in practical applications.

Quantum advantages for syndrome-aware noisy logical observable estimation

Kento Tsubouchi, Hyukgun Kwon, Liang Jiang, Nobuyuki Yoshioka

2603.05145 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: none

This paper develops a theoretical framework to analyze how error syndrome information can improve the estimation of quantum observables in fault-tolerant quantum computers. The research shows that classical post-processing of syndromes provides limited improvement, but quantum protocols that adapt measurements based on syndromes can achieve exponentially better performance.

Key Contributions

  • Proves universal limitation that classical syndrome-aware protocols can improve logical error rates by at most factor of two
  • Demonstrates quantum protocols with syndrome-conditioned control can achieve exponential improvement in effective logical error rate
fault-tolerant quantum computing · error correction · quantum estimation theory · logical observables · syndrome information

Recent progress in fault-tolerant quantum computing suggests that leveraging error-syndrome information at the logical layer can substantially improve performance, including the estimation of logical observables from noisy states. In this work, based on quantum estimation theory, we develop an information-theoretic framework to quantify the utility of error syndromes for noisy logical observable estimation. We distinguish two operational regimes of such syndrome-aware protocols: classical protocols, in which the logical measurement basis is fixed and syndrome information is used only in classical post-processing, and quantum protocols, in which the logical quantum control can be tailored to depend on the observed error syndrome. For classical syndrome-aware protocols, we prove a universal limitation: on average, syndrome information can improve the effective logical error rate by at most a factor of two, implying at most a quadratic reduction in sampling overhead. In contrast, once syndrome-conditioned quantum control is permitted, we exhibit settings in which the effective logical error rate decays exponentially with the number of logical qubits. These findings provide fundamental guidance for designing future fault-tolerant architectures that actively exploit syndrome records rather than discarding them after decoding.

Parsimonious Quantum Low-Density Parity-Check Code Surgery

Andrew C. Yuan, Alexander Cowtan, Zhiyang He, Ting-Chun Lin, Dominic J. Williamson

2603.05082 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces a more efficient method for quantum code surgery in quantum Low-Density Parity-Check codes, reducing the number of ancilla qubits needed from linear to logarithmic scaling when measuring logical operators. The work improves the overhead costs of fault-tolerant quantum computing schemes that rely on measuring logical operators within error-correcting codes.

Key Contributions

  • Development of O(W log W) ancilla system construction for measuring logical Pauli operators of weight W
  • Asymptotic overhead reduction across various quantum code surgery schemes in qLDPC codes
quantum error correction · fault-tolerant quantum computing · qLDPC codes · quantum code surgery · logical operators

Quantum code surgery offers a flexible, low-overhead framework for executing logical measurements within quantum error-correcting codes. It encompasses several fault-tolerant logical computation schemes, including parallel surgery, universal adapters and fast surgery, and serves as the key primitive in extractor architectures. The efficiency of these schemes crucially depends on constructing low-overhead ancilla systems for measuring arbitrary logical operators in general quantum Low-Density Parity-Check (qLDPC) codes. In this work, we introduce a method to construct an ancilla system of qubit size $O(W \log W)$ to measure an arbitrary logical Pauli operator of weight $W$ in any qLDPC stabilizer code. This new construction immediately reduces the asymptotic overhead across various quantum code surgery schemes.

Quantum Weight Reduction with Layer Codes

Andrew C. Yuan, Nouédyn Baspin, Dominic J. Williamson

2603.04883 • Mar 5, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: low

This paper introduces a new quantum weight reduction method that makes quantum error correction codes easier to implement by replacing components of existing codes with surface code patches connected together. The method achieves lower weight checks and qubit degrees than existing approaches, making the codes more practical for modular quantum computing architectures.

Key Contributions

  • Novel quantum weight reduction procedure achieving check weight 6 and qubit degree 6
  • Introduction of Layer Codes formed by connecting surface code patches for practical implementation
quantum error correction · surface codes · Calderbank-Shor-Steane codes · weight reduction · fault tolerance

Quantum weight reduction procedures ease the implementation of quantum codes by sparsifying them, resulting in low-weight checks and low-degree qubits. However, to date, only a few quantum weight reduction methods have been explored. In this work we introduce a simple and general procedure for quantum weight reduction that achieves check weight 6 and total qubit degree 6, lower than existing procedures at the cost of a potentially larger qubit overhead. Our quantum weight reduction procedure replaces each qubit and check in an arbitrary Calderbank-Shor-Steane code with an ample patch of surface code; these patches are then joined together to form a geometrically nonlocal Layer Code. This is a quantum analog of the simple classical weight reduction procedure where each bit and check is replaced by a repetition code. Due to the simplicity of our weight reduction procedure, bounds on the weight and degree of the resulting code follow directly from the Layer Code construction and hence are easily verified by inspection. Our procedure is well suited for implementation in modular architectures that consist of surface code patches networked via long-range interconnects.
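The closing analogy (each bit and check replaced by a repetition code) has a concrete classical counterpart: any weight-w parity check can be split into weight-at-most-3 checks by chaining auxiliary bits that hold running partial parities. A sketch with hypothetical helper names, verified by brute force on a weight-4 check:

```python
from itertools import product

def split_check(bits, aux_start):
    """Split one parity check over `bits` into weight-<=3 checks, introducing
    auxiliary bits (indices starting at aux_start) that hold running parities."""
    if len(bits) <= 3:
        return [list(bits)], 0
    aux = aux_start
    checks = [[bits[0], bits[1], aux]]          # a0 = b0 + b1
    for b in bits[2:-2]:
        checks.append([aux, b, aux + 1])        # a_{i+1} = a_i + b
        aux += 1
    checks.append([aux, bits[-2], bits[-1]])    # final parity closes the chain
    return checks, aux - aux_start + 1

# Weight-4 check x0 + x1 + x2 + x3 = 0 over data bits 0..3
checks, n_aux = split_check([0, 1, 2, 3], 4)

def solutions(checks, nbits):
    return [v for v in product((0, 1), repeat=nbits)
            if all(sum(v[i] for i in chk) % 2 == 0 for chk in checks)]

sols = solutions(checks, 4 + n_aux)
data_words = {s[:4] for s in sols}
even = {v for v in product((0, 1), repeat=4) if sum(v) % 2 == 0}
print(len(checks), data_words == even)   # 2 weight-3 checks, same code on the data bits
```

The quantum version is harder precisely because both X- and Z-type checks must be sparsified simultaneously, which is what the surface-code patches in the Layer Code construction accomplish.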

HyQBench: A Benchmark Suite for Hybrid CV-DV Quantum Computing

Shubdeep Mohapatra, Yuan Liu, Eddy Z. Zhang, Huiyang Zhou

2603.04398 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper introduces HyQBench, a benchmarking framework for hybrid quantum systems that combine continuous-variable (CV) and discrete-variable (DV) quantum computing approaches. The researchers developed a simulation tool and created standardized benchmarks to evaluate the performance and capabilities of these hybrid quantum systems across various computational tasks.

Key Contributions

  • Development of HyQBench simulation and benchmarking framework for hybrid CV-DV quantum circuits using Bosonic Qiskit
  • Creation of standardized benchmark suite including cat state generation, GKP states, hybrid quantum Fourier transform, and Shor's algorithm
  • Definition of CV-DV-specific feature maps and metrics for evaluating circuit complexity, scalability, and hardware requirements
hybrid quantum computing · continuous variable · discrete variable · quantum benchmarking · Shor's algorithm

Hybrid continuous-variable (CV)-discrete-variable (DV) quantum systems present a promising direction for quantum computing by combining the high dimensional encoding capabilities of qumodes with the control offered by DV qubits on the coupled qumodes. There has been exciting recent progress on hybrid CV-DV quantum computing, including variational algorithms, error correction, compiler-level optimizations for Hamiltonian simulation, etc. However, there is a lack of a standardized CV-DV benchmark suite for assessing various emerging hardware platforms and evaluating software optimizations on hybrid CV-DV circuits. In this work, we introduce a simulation and benchmarking framework for hybrid CV-DV circuits, implemented using Bosonic Qiskit, a tool specifically designed to model CV-DV systems, along with QuTiP for functional correctness verification. We construct and characterize representative CV-DV benchmarks, including cat state generation, GKP state generation, CV-DV state transfers, hybrid quantum Fourier transform, variational quantum algorithms, Hamiltonian simulation, and Shor's algorithm. To assess circuit complexity and scalability, we define a feature map organized into two categories: general features (e.g., qubit/qumode count, gate counts) and CV-DV-specific features (e.g., Wigner negativity, energy, truncation cost). These metrics enable evaluation of both classical simulability and hardware resource requirements. Our results, including one benchmark on real hardware, demonstrate that hybrid CV-DV architectures are not only viable but well-suited for a range of computational tasks, from optimization to Hamiltonian simulation. This framework lays the groundwork for systematic evaluation and future development of hybrid quantum systems.

On Error Thresholds for Pauli Channels: Some answers with many more questions

Avantika Agarwal, Alan Bu, Amolak Ratan Kalra, Debbie Leung, Luke Schaeffer, Graeme Smith

2603.04357 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: medium

This paper analyzes error thresholds for quantum error correction codes, specifically studying how well different stabilizer codes can protect quantum information from Pauli noise channels. The researchers compute lower bounds on the error rates that quantum codes can tolerate and find that some codes, when concatenated, outperform what additivity-based estimates would predict.

Key Contributions

  • Numerical computation of lower bounds for error thresholds in Pauli channels using coset weight enumerators
  • Discovery of significant non-additivity in concatenated stabilizer codes and closed-form expressions for repetition code concatenations
  • Optimization of channel parameters for maximal non-additivity and threshold estimates for large concatenated codes
error correction · stabilizer codes · Pauli channels · error thresholds · concatenated codes

This paper focuses on error thresholds for Pauli channels. We numerically compute lower bounds for the thresholds using the analytic framework of coset weight enumerators pioneered by DiVincenzo, Shor and Smolin in 1998. In particular, we study potential non-additivity of a variety of small stabilizer codes and their concatenations, and report several new concatenated stabilizer codes of small length that show significant non-additivity. We also give a closed form expression of coset weight enumerators of concatenated phase and bit flip repetition codes. Using insights from this formalism, we estimate the threshold for concatenated repetition codes of large lengths. Finally, for several concatenations of small stabilizer codes we optimize for channels which lead to maximal non-additivity at the hashing point of the corresponding channel. We supplement these results with a discussion on the performance of various stabilizer codes from the perspective of the non-additivity and threshold problem. We report both positive and negative results, and highlight some counterintuitive observations, to support subsequent work on lower bounds for error thresholds.
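The "hashing point" referenced above is the error rate at which the hashing rate 1 − S(p) of a Pauli channel vanishes; for the depolarizing channel with Pauli distribution {1−p, p/3, p/3, p/3} it sits near p ≈ 0.1893. A quick numeric check (the bisection setup and function name are ours):

```python
import math

def hashing_rate(p):
    """1 - H(probs) for the depolarizing Pauli distribution {1-p, p/3, p/3, p/3}."""
    probs = [1 - p] + [p / 3] * 3
    entropy = -sum(q * math.log2(q) for q in probs if q > 0)
    return 1 - entropy

# Bisect for the hashing point, where the achievable hashing rate hits zero
lo, hi = 0.1, 0.25
for _ in range(60):
    mid = (lo + hi) / 2
    if hashing_rate(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 4))   # ~0.1893
```

Non-additivity, as studied in the paper, means certain codes still convey quantum information at noise levels beyond this single-letter hashing point.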

Magic state distillation with permutation-invariant codes and a two-qubit example

Heather Leitch, Yingkai Ouyang

2603.04310 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a new magic state distillation protocol that uses permutation-invariant codes as small as two qubits to create clean quantum states needed for fault-tolerant quantum computing. The protocol outperforms previous methods by allowing non-Clifford gates and flexible output states; it achieves a 0.5 error threshold and a 1/2 distillation rate and can distill magic states with arbitrary magic levels.

Key Contributions

  • Novel magic state distillation protocol using permutation-invariant codes with minimal two-qubit overhead
  • Achievement of 0.5 error threshold and 1/2 distillation rate surpassing comparable schemes
  • Flexible protocol that can distill magic states with arbitrary magic levels by varying ideal input state positions
magic state distillation · fault-tolerant quantum computation · permutation-invariant codes · non-Clifford gates · gate teleportation

Magic states, by allowing non-Clifford gates through gate teleportation, are important building blocks of fault-tolerant quantum computation. Magic state distillation protocols aim to create clean copies of magic states from many noisier copies. However, the prevailing protocols require substantial qubit overhead. We present a distillation protocol based on permutation-invariant gnu codes, as small as two qubits. The two-qubit protocol achieves a 0.5 error threshold and 1/2 distillation rate, surpassing prior schemes for comparable codes. Our protocol furthermore distils magic states with arbitrary magic by varying the position of the ideal input states on the Bloch sphere. We achieve this by departing from the usual magic state distillation formalism, allowing the use of non-Clifford gates in the distillation protocol, and allowing the form of the output state to differ from the input state. Our protocol is compatible for use in tandem with existing magic state distillation protocols to enhance their performance.

Minimum Weight Decoding in the Colour Code is NP-hard

Mark Walters, Mark L. Turner

2603.04234 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proves that exact decoding of the colour code, a promising quantum error correction scheme, is computationally intractable (NP-hard). Unlike the surface code which can be decoded efficiently, colour code decoding cannot be solved exactly in polynomial time unless P=NP.

Key Contributions

  • Proves that minimum weight decoding in the colour code is NP-hard
  • Establishes fundamental computational limitations that distinguish colour codes from surface codes
quantum error correction · colour code · topological codes · computational complexity · NP-hard

All utility-scale quantum computers will require some form of Quantum Error Correction in which logical qubits are encoded in a larger number of physical qubits. One promising encoding is known as the colour code which has broad applicability across all qubit types and can decisively reduce the overhead of certain logical operations when compared to other two-dimensional topological codes such as the surface code. However, whereas the surface code decoding problem can be solved exactly in polynomial time by finding minimum weight matchings in a graph, prior to this work, it was not known whether exact and efficient colour code decoding was possible. Optimism in this area, stemming from the colour code's significant structure and well understood similarities to the surface code, fanned this uncertainty. In this paper we resolve this, proving that exact decoding of the colour code is NP-hard -- that is, there does not exist a polynomial time algorithm unless P=NP. This highlights a notable contrast to some of the colour code's key competitors, such as the surface code, and motivates continued work in the narrower space of heuristic and approximate algorithms for fast, accurate and scalable colour code decoding.

Achieving Optimal-Distance Atom-Loss Correction via Pauli Envelope

Pengyu Liu, Shi Jie Samuel Tan, Eric Huang, Umut A. Acar, Hengyun Zhou, Chen Zhao

2603.04156 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new methods to correct atom loss errors in neutral-atom quantum computers, which account for over 40% of physical errors. The researchers propose a 'Pauli Envelope' framework with improved syndrome extraction circuits and decoders that achieve better error correction performance than existing approaches.

Key Contributions

  • Pauli Envelope framework for bounding atom loss effects with efficient computation
  • Mid-SWAP syndrome extraction circuit that reduces error propagation without additional overhead
  • Envelope-MLE decoder achieving optimal effective code distance for atom-loss errors
  • Envelope-Matching decoder providing improved performance within MWPM framework
quantum error correction · atom loss · neutral atom quantum computing · syndrome extraction · quantum decoding

Atom loss is a major error source in neutral-atom quantum computers, accounting for over 40% of the total physical errors in recent experiments. Unlike Pauli errors, atom loss poses significant challenges for both syndrome extraction and decoding due to its nonlinearity and correlated nature. Current syndrome extraction circuits either require additional physical overhead or do not provide optimal loss tolerance. On the decoding side, existing methods are either computationally inefficient, achieve suboptimal logical error rates, or rely on machine learning without provable guarantees. To address these challenges, we propose the Pauli Envelope framework. This framework constructs a Pauli envelope that bounds the effect of atom loss while remaining low weight and efficiently computable. Guided by this framework, we first design a new atom-replenishing syndrome extraction circuit, the Mid-SWAP syndrome extraction, that reduces error propagation with no additional space-time cost. We then propose an optimal decoder for Mid-SWAP syndrome extraction: the Envelope-MLE decoder formulated as an MILP that achieves optimal effective code distance d_loss ~ d for atom-loss errors. Inspired by the exclusivity constraint of the optimal decoder, we also propose an Envelope-Matching decoder to approximately enforce the exclusivity constraint within the MWPM framework. This decoder achieves d_loss ~ 2d/3, surpassing the previous best algorithmic decoder, which achieves d_loss ~ d/2 even with an MILP formulation. Circuit-level simulations demonstrate that our approach attains up to 40% higher thresholds and 30% higher effective distances compared with existing algorithmic decoders and syndrome extraction circuits in the loss-dominated regime. On recent experimental data, our Envelope-MLE decoder improves the error suppression factor of a hybrid MLE/machine-learning decoder from 2.14 to 2.24.

Efficient Time-Aware Partitioning of Quantum Circuits for Distributed Quantum Computing

Raymond P. H. Wu, Chathu Ranaweera, Sutharshan Rajasegarar, Ria Rushin Joseph, Jinho Choi, Seng W. Loke

2603.04126 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper develops a time-aware algorithm based on beam search to efficiently partition quantum circuits across multiple quantum processing units in distributed quantum computing networks. The algorithm minimizes communication costs between remote quantum processors while providing significant computational speedup over existing methods.

Key Contributions

  • Time-aware beam search heuristic for quantum circuit partitioning in distributed systems
  • Algorithm with quadratic scaling in qubits and linear scaling in circuit depth, providing computational speedup over metaheuristics
  • Demonstrated reduction in quantum communication overhead across various circuit sizes and network topologies
distributed quantum computing · quantum circuit partitioning · quantum communication · beam search · quantum teleportation

To overcome the physical limitations of scaling monolithic quantum computers, distributed quantum computing (DQC) interconnects multiple smaller-scale quantum processing units (QPUs) to form a quantum network. However, this approach introduces a critical challenge, namely the high cost of quantum communication between remote QPUs incurred by quantum state teleportation and quantum gate teleportation. To minimize this communication overhead, DQC compilers must strategically partition quantum circuits by mapping logical qubits to distributed physical QPUs. Static graph partitioning methods are fundamentally ill-equipped for this task as they ignore execution dynamics and underlying network topology, while metaheuristics require substantial computational runtime. In this work, we propose a heuristic based on beam search to solve the circuit partitioning problem. Our time-aware algorithm incrementally constructs a low-cost sequence of qubit assignments across successive time steps to minimize overall communication overhead. The time and space complexities of the proposed algorithm scale quadratically with the number of qubits and linearly with circuit depth, offering a significant computational speedup over common metaheuristics. We demonstrate that our proposed algorithm consistently achieves significantly lower communication costs than static baselines across varying circuit sizes, depths, and network topologies, providing an efficient compilation tool for near-term distributed quantum hardware.
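Beam search over qubit assignments can be sketched in a few lines for a static toy instance (two QPUs, capacity-limited, cost = number of two-qubit gates crossing the cut; the tiny circuit, beam width, and function names are all illustrative, not the paper's time-aware algorithm):

```python
def beam_partition(n_qubits, gates, n_qpus=2, capacity=2, width=4):
    """Assign qubits to QPUs one at a time, keeping the `width` cheapest
    partial assignments; cost counts gates whose endpoints differ."""
    def cost(assign):
        return sum(1 for a, b in gates
                   if a < len(assign) and b < len(assign) and assign[a] != assign[b])

    beam = [()]
    for _ in range(n_qubits):
        candidates = []
        for partial in beam:
            for qpu in range(n_qpus):
                if partial.count(qpu) < capacity:       # respect QPU capacity
                    candidates.append(partial + (qpu,))
        beam = sorted(candidates, key=cost)[:width]     # keep the cheapest states
    best = min(beam, key=cost)
    return best, cost(best)

gates = [(0, 1), (0, 1), (2, 3), (2, 3), (1, 2)]   # two tightly coupled pairs
assign, c = beam_partition(4, gates)
print(assign, c)   # pairs (0,1) and (2,3) share a QPU; only one gate crosses
```

The paper's time-aware version additionally re-evaluates assignments per time step so that teleportation costs reflect when gates execute, rather than aggregating the whole circuit as this static sketch does.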

Spectrally Corrected Polynomial Approximation for Quantum Singular Value Transformation

Krishnan Suresh

2603.03998 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper improves Quantum Singular Value Transformation (QSVT) by developing a spectral correction method that uses prior knowledge of some eigenvalues to create more efficient polynomial approximations. The approach achieves up to 5× reduction in quantum circuit depth while maintaining high fidelity, demonstrated on solving linear systems like the Poisson equation.

Key Contributions

  • Development of spectral correction method for QSVT that exploits prior eigenvalue knowledge to reduce polynomial degree
  • Demonstration of up to 5× circuit depth reduction while maintaining unit fidelity on linear system solving problems
  • Framework that is agnostic to base polynomial choice and robust to eigenvalue perturbations up to 10%
quantum singular value transformation · QSVT · polynomial approximation · linear systems · circuit depth optimization

Quantum Singular Value Transformation (QSVT) provides a unified framework for applying polynomial functions to the singular values of a block-encoded matrix. QSVT prepares a state proportional to $A^{-1}b$ with circuit depth $O(d\cdot\mathrm{polylog}(N))$, where $d$ is the polynomial degree of the $1/x$ approximation and $N$ is the size of $A$. Current polynomial approximation methods are over the continuous interval $[a,1]$, giving $d = O(\sqrt{\kappa}\log(1/\varepsilon))$, and make no use of any properties of $A$. We observe here that QSVT solution accuracy depends only on the polynomial accuracy at the eigenvalues of $A$. When all $N$ eigenvalues are known exactly, a pure spectral polynomial $p_{S}$ can interpolate $1/x$ at these eigenvalues and achieve unit fidelity at reduced degree. But its practical applicability is limited. To address this, we propose a spectral correction that exploits prior knowledge of $K$ eigenvalues of $A$. Given any base polynomial $p_0$, such as Remez, of degree $d_0$, a $K\times K$ linear system enforces exact interpolation of $1/x$ only at these $K$ eigenvalues without increasing $d_0$. The spectrally corrected polynomial $p_{SC}$ preserves the continuous error profile between eigenvalues and inherits the parity of $p_0$. QSVT experiments on the 1D Poisson equation demonstrate up to a $5\times$ reduction in circuit depth relative to the base polynomial, at unit fidelity and improved compliance error. The correction is agnostic to the choice of base polynomial and robust to eigenvalue perturbations up to $10\%$ relative error. Extension to the 2D Poisson equation suggests that correcting a small fraction of the spectrum may suffice to achieve fidelity above $0.999$.
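The $K\times K$ correction step can be sketched directly: fit a base odd polynomial to $1/x$ on $[a,1]$, then solve a small linear system so the corrected polynomial interpolates $1/x$ exactly at $K$ known eigenvalues without raising the degree. A numpy sketch (the basis choice, fitting grid, and sample eigenvalues are ours, not the paper's setup):

```python
import numpy as np

a, d0 = 0.2, 9                        # approximation interval [a, 1], odd degree d0
odd_powers = np.arange(1, d0 + 1, 2)  # 1/x is odd, so use odd monomials only

# Base polynomial: least-squares fit of 1/x on a grid over [a, 1]
x = np.linspace(a, 1.0, 400)
coef0, *_ = np.linalg.lstsq(x[:, None] ** odd_powers, 1.0 / x, rcond=None)

def evaluate(coef, t):
    return (np.asarray(t)[..., None] ** odd_powers) @ coef

# Spectral correction: enforce p(lam) = 1/lam at K known eigenvalues by
# solving a K x K system in the first K odd-monomial coefficients
lams = np.array([0.25, 0.5, 0.9])     # "known" eigenvalues (illustrative)
K = len(lams)
A = lams[:, None] ** odd_powers[:K]
resid = 1.0 / lams - evaluate(coef0, lams)
coef_sc = coef0.copy()
coef_sc[:K] += np.linalg.solve(A, resid)

print(np.max(np.abs(evaluate(coef_sc, lams) - 1.0 / lams)))  # exact at the eigenvalues
```

Because the correction only redistributes coefficients of degree at most $d_0$, the QSVT circuit depth is unchanged while the solution error at the known spectrum drops to numerical precision, which is the mechanism behind the reported depth savings.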

Overflow-Safe Polylog-Time Parallel Minimum-Weight Perfect Matching Decoder: Toward Experimental Demonstration

Ryo Mikami, Hayata Yamasaki

2603.03776 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops an improved algorithm for quantum error correction that can decode errors much faster than existing methods by solving the minimum-weight perfect matching problem in polylogarithmic time rather than polynomial time. The key innovation is a truncated polynomial ring framework that prevents numerical overflow and reduces the required arithmetic bit length by over 99.9% while maintaining the speed advantage.

Key Contributions

  • Development of overflow-safe polylog-time parallel MWPM decoder using truncated polynomial ring framework
  • Reduction of arithmetic bit length requirements by over 99.9% while preserving polylogarithmic runtime scaling
  • Hardware-friendly implementation using only bitwise XOR and shift operations
fault-tolerant quantum computation quantum error correction minimum-weight perfect matching polylogarithmic time determinant-based decoding
View Full Abstract

Fault-tolerant quantum computation (FTQC) requires fast and accurate decoding of quantum errors, which is often formulated as a minimum-weight perfect matching (MWPM) problem. A determinant-based approach has been proposed as a promising method to surpass the conventional polynomial runtime of MWPM decoding via the blossom algorithm, asymptotically achieving polylogarithmic parallel runtime. However, the existing approach requires an impractically large bit length to represent intermediate values during the computation of the matrix determinant; moreover, when implemented on a finite-bit machine, the algorithm cannot detect overflow, and therefore, the mathematical correctness of such algorithms cannot be guaranteed. In this work, we address these issues by presenting a polylog-time MWPM decoder that detects overflow in finite-bit representations by employing an algebraic framework over a truncated polynomial ring. Within this framework, all arithmetic operations are implemented using bitwise XOR and shift operations, enabling efficient and hardware-friendly implementation. Furthermore, with algorithmic optimizations tailored to the structure of the determinant-based approach, we reduce the arithmetic bit length required to represent intermediate values in the determinant computation by more than $99.9\%$, while preserving its polylogarithmic runtime scaling. These results open the possibility of a proof-of-principle demonstration of the polylog-time MWPM decoding in the early FTQC regime.
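The "bitwise XOR and shift" arithmetic mentioned in the abstract is characteristic of working in a truncated polynomial ring such as $\mathbb{F}_2[x]/(x^T)$, where addition is XOR and multiplication by $x$ is a left shift that discards terms of degree $\geq T$. A minimal sketch of multiplication in that ring (a generic illustration of the algebra, not the paper's decoder) looks like:

```python
def trunc_mul(a: int, b: int, T: int) -> int:
    """Multiply two GF(2)[x] polynomials, encoded as bitmasks
    (bit i = coefficient of x^i), modulo x^T.

    Only XOR and shifts are used: GF(2) addition is XOR, and
    multiplying by x is a left shift truncated at degree T."""
    mask = (1 << T) - 1
    acc = 0
    a &= mask
    while b and a:
        if b & 1:
            acc ^= a          # add the current shifted copy of a
        a = (a << 1) & mask   # multiply a by x, drop degree >= T
        b >>= 1
    return acc

# (1 + x) * (1 + x) = 1 + x^2 over GF(2), well below the truncation degree
print(trunc_mul(0b11, 0b11, T=8))       # prints 5, i.e. 0b101
# Terms of degree >= T vanish: x^4 * x^4 = x^8 ≡ 0 (mod x^8)
print(trunc_mul(1 << 4, 1 << 4, T=8))   # prints 0
```

The fixed truncation degree is what bounds the bit length of every intermediate value, which is the overflow-control role this algebra plays in the decoder.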

Resource-Efficient Emulation of Majorana Zero Mode Braiding on a Superconducting Trijunction

Rahul Singh, Weixin Lu, Kaelyn J. Ferris, Javad Shabani

2603.03645 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a more efficient method for simulating Majorana zero modes (exotic quantum particles) on quantum computers, specifically focusing on braiding operations that could enable fault-tolerant quantum computing. The authors develop direct braiding operators that reduce the computational overhead compared to previous simulation approaches that required very deep quantum circuits.

Key Contributions

  • Development of resource-efficient direct braiding operators for MZM simulation
  • Generalization of the method to extended trijunction architectures based on Kitaev chains
majorana zero modes topological quantum computing fault-tolerant quantum computation braiding operations superconducting qubits
View Full Abstract

Topological superconductivity could host quasiparticles that are key candidates for fault-tolerant quantum computation due to their immunity to noise as they obey non-Abelian exchange statistics. For example, in the case of Majorana Zero Modes (MZM), braiding enables two topologically protected quantum gates. While their direct manipulation in solid-state systems remains experimentally challenging, digital emulation of MZM behavior has provided insight as well as a deeper understanding of controlling these topological quantum systems. This emulation is typically accomplished by mapping the topological and trivial phases of a Majorana system to ferromagnetic and paramagnetic Hamiltonians of a spin-glass model. This approach usually relies on adiabatic evolution of superconducting Hamiltonians, which require circuits with very large depths. In this work, we present a resource-efficient method to emulate MZM braiding in a trijunction geometry using a quantum processor. We introduce direct braiding operators which simulate the evolution more efficiently, reducing the quantum gate overhead. We then further generalize this method to emulate braiding operations in extended trijunction architectures based on Kitaev chains.

Mitigating many-body quantum crosstalk with tensor-network robust control

Nguyen H. Le, Florian Mintert, Eran Ginossar

2603.03639 • Mar 4, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: low

This paper develops a method to suppress quantum crosstalk in large quantum systems by combining tensor network simulations with robust control algorithms. The approach successfully designs high-fidelity quantum operations for up to 50 qubits, achieving order-of-magnitude improvements in performance when unwanted interactions between neighboring qubits are present.

Key Contributions

  • Development of tensor-network based robust control method that overcomes exponential scaling limitations
  • Demonstration of order-of-magnitude fidelity improvements for large-scale quantum operations up to 50 qubits in the presence of crosstalk
  • Efficient random sampling technique for noise ensembles combined with GRAPE algorithm for practical implementation
quantum crosstalk robust control tensor networks GRAPE algorithm multi-qubit gates
View Full Abstract

Quantum crosstalk poses a major challenge to scaling up quantum computations as its strength is typically unknown and its effect accumulates exponentially as system size grows. Here, we show that many-body robust control can be utilized to suppress unwanted couplings during multi-qubit gate operations and state preparation. By combining tensor network simulations with the GRAPE algorithm, and leveraging an efficient random sampling over noise ensembles, our method overcomes the exponential scaling of the Hilbert space. We demonstrate its effectiveness for designing control solutions for high-fidelity implementations of parallel X gates and parallel CNOT on a chain of 50 qubits, and for realizing a 30-qubit GHZ state and the ground state of a 20-qubit Heisenberg model. In the presence of many-body quantum crosstalk due to parasitic interaction between neighboring qubits, robust control results in order-of-magnitude improvement in fidelity for large system sizes. These findings pave the way for more reliable operations on near-term quantum processors.

Quantum Lego Power-up: Designing Transversal Gates with Tensor Networks

ChunJun Cao, Brad Lackey

2603.03542 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents a new approach using tensor networks and 'quantum lego' formalism to systematically design quantum error-correcting codes that support transversal gates, which are the simplest fault-tolerant quantum gates. The method allows construction of codes with addressable non-Clifford gates like T gates and multi-qubit gates, overcoming limitations of traditional stabilizer code constructions.

Key Contributions

  • Development of tensor network framework for systematic construction of quantum error-correcting codes with transversal gates
  • Construction of new finite-rate code families supporting non-Clifford transversal gates including T, CCZ, and other complex gates
  • Demonstration of addressable transversal gates in holographic codes, reducing overhead for universal fault-tolerant computation
quantum error correction transversal gates fault-tolerant quantum computing tensor networks stabilizer codes
View Full Abstract

Transversal gates are the simplest form of fault-tolerant gates and are relatively easy to implement in practice. Yet designing codes that support useful transversal operations -- especially non-Clifford or addressable gates -- remains difficult within the stabilizer formalism or CSS constructions alone. We show that these limitations can be overcome using tensor-network frameworks such as the quantum lego formalism, where transversal gates naturally appear as global or localized symmetries. Within the quantum lego formalism, small codes carrying desirable symmetries can be "glued" into larger ones, with operator-flow rules guiding how logical symmetries are preserved. This approach enables the systematic construction of codes with addressable transversal single- and multi-qubit gates targeting specific logical qubits regardless of whether the gate is Clifford or not. As a proof of principle, we build new finite-rate code families that support strongly transversal $T$, $CCZ$, $SH$, and Gottesman's $K_3$ gates, structures that are challenging to realize with conventional methods. We further construct holographic and fractal-like codes that admit addressable transversal inter-, meso-, and intra-block $T$, $CS$, and $C^\ell Z$ gates. As a corollary, we demonstrate that the heterogeneous holographic Steane-Reed-Muller black hole code also supports fully addressable transversal inter- and intra-block $CZ$ gates, significantly lowering the overhead for universal fault-tolerant computation.

Generalised All-Optical Cat Correction

Ari John Boon, Olivier Landon-Cardinal, Nicolás Quesada

2603.03263 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: medium

This paper develops improved error correction methods for quantum cat codes using all-optical techniques, showing that higher-order cat codes can dramatically reduce the number of correction iterations needed while using more photons per correction.

Key Contributions

  • Generalized all-optical telecorrection protocol for higher-order cat codes
  • Demonstrated 70x reduction in correction iterations for third-order vs first-order cat codes
  • Introduced probabilistic scheme for correcting state deformation with basis-changing capability
cat codes quantum error correction all-optical telecorrection photonic quantum computing
View Full Abstract

We have generalised an all-optical telecorrection protocol for the higher orders of the cat code, and show that with these higher orders we can achieve target performance at substantially reduced iteration counts at the cost of a higher mean photon-number. We also introduce a probabilistic scheme for correcting deformation of the state, which highlights two interesting abilities of telecorrection: to encode new sets of transformations, and to change the basis of the code. We find that for a target channel fidelity of $99.9\%$ over a channel with $1\text{ dB}$ of loss, a third-order cat code requires $70$ times fewer telecorrection iterations than a first-order one, at a cost of a $3.6$-fold increase in mean photon-number.

Entanglement-Assisted Codes Outside the Stabilizer Framework

Jaszmine DeFranco, Andrew Nemec

2603.03182 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: high

This paper presents methods for constructing entanglement-assisted quantum error-correcting codes from arbitrary quantum codes by connecting them to erasure channel codes. The work extends beyond traditional stabilizer codes to include new types like permutation-invariant and XP-stabilizer codes.

Key Contributions

  • Novel construction method for entanglement-assisted codes from arbitrary quantum codes via erasure channel association
  • First examples of entanglement-assisted codes outside stabilizer and codeword-stabilized frameworks
  • Compression techniques for degenerate codes with analysis of error-correction trade-offs
entanglement-assisted codes quantum error correction erasure channels stabilizer codes quantum communication
View Full Abstract

We show how entanglement-assisted codes can be constructed from arbitrary quantum codes by associating them with quantum codes for erasure channels. If a subset of physical qubits is correctable for an erasure error, then it naturally forms the receiver's share of a bipartite state that can be used for entanglement-assisted communications, both in the noiseless and noisy ebit error models. In the case of degenerate codes, we show that the receiver's share of the bipartite state can sometimes be compressed, at the cost of potentially reduced error-correction ability in the noisy ebit error model. We also give examples of permutation-invariant and XP-stabilizer entanglement-assisted codes, the first outside of the stabilizer and codeword-stabilized frameworks.

Scaling of silicon spin qubits under correlated noise

Juan S. Rojas-Arias, Leon C. Camenzind, Yi-Hsien Wu, Peter Stano, Akito Noiri, Kenta Takeda, Takashi Nakajima, Takashi Kobayashi, Giordano Scappucci, ...

2603.03051 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: low Network: none

This paper studies how noise correlations between closely-packed silicon spin qubits affect quantum error correction by measuring noise in a five-qubit array. The researchers found that while magnetic field drifts create problematic correlations, charge noise correlations are manageable and compatible with fault-tolerant quantum computing.

Key Contributions

  • Quantified spatial extent of noise correlations in silicon spin qubit arrays and identified two distinct sources: global magnetic drifts and localized charge noise
  • Established that charge noise correlations are moderate and compatible with fault-tolerant quantum error correction with minimal overhead
silicon spin qubits quantum error correction correlated noise fault tolerance scalable quantum computing
View Full Abstract

The path to fault-tolerant quantum computing hinges on hardware that scales while remaining compatible with quantum error correction (QEC). Silicon spin qubits are a leading hardware candidate because they combine industrial fabrication compatibility with a nanoscale footprint that could accommodate millions of qubits on a chip. However, their suitability for QEC remains uncertain since spatially correlated noise naturally emerges from the resulting close proximity of qubits. These correlations increase the likelihood of simultaneous errors and erode the redundancy that QEC depends on. Here we quantify the spatial extent of noise correlations in a five-qubit silicon array and assess their impact on QEC. We identify two distinct sources of correlated noise: global magnetic field drifts that generate perfectly correlated fluctuations, and charge noise from two-level fluctuators that produces short-range correlations decaying within neighboring qubits. While magnetic drifts represent a critical correlated noise source that can compromise QEC, they can be mitigated. In contrast, the measured charge noise correlations are moderate, electrically tunable, and compatible with fault-tolerant operation with minimal qubit overhead. Our results establish quantitative benchmarks for correlated noise and clarify how such correlations impact the viability of quantum error correction in scalable qubit arrays.

QFlowNet: Fast, Diverse, and Efficient Unitary Synthesis with Generative Flow Networks

Inhoe Koo, Hyunho Cha, Jungwoo Lee

2603.03045 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper introduces QFlowNet, a machine learning framework that combines Generative Flow Networks with Transformers to efficiently decompose quantum unitary operations into sequences of quantum gates, achieving 99.7% success rate on 3-qubit benchmarks while generating diverse solution sets.

Key Contributions

  • Novel combination of GFlowNet and Transformers for unitary synthesis that generates diverse solutions rather than single policies
  • Achievement of 99.7% success rate on 3-qubit unitary synthesis benchmark with efficient learning from sparse reward signals
unitary synthesis quantum compilation generative flow networks quantum gates transformers
View Full Abstract

Unitary Synthesis, the decomposition of a unitary matrix into a sequence of quantum gates, is a fundamental challenge in quantum compilation. Prevailing reinforcement learning (RL) approaches are often hampered by sparse reward signals, which necessitate complex reward shaping or long training times, and typically converge to a single policy, lacking solution diversity. In this work, we propose QFlowNet, a novel framework that learns efficiently from sparse signals by pairing a Generative Flow Network (GFlowNet) with Transformers. Our approach addresses two key challenges. First, the GFlowNet framework is fundamentally designed to learn a diverse policy that samples solutions proportional to their reward, overcoming the single-solution limitation of RL while offering faster inference than other generative models like diffusion. Second, the Transformers act as a powerful encoder, capturing the non-local structure of unitary matrices and compressing a high-dimensional state into a dense latent representation for the policy network. Our agent achieves an overall success rate of 99.7% on a 3-qubit benchmark (lengths 1-12) and discovers a diverse set of compact circuits, establishing QFlowNet as an efficient and diverse paradigm for unitary synthesis.

Ultra-low loss piezo-optomechanical low-confinement silicon nitride platform for visible wavelength quantum photonic circuits

Mayank Mishra, Gwangho Choi, Wenhua He, Gina M. Talcott, Katherine Kearney, Michael Gehl, Andrew Leenheer, Daniel Dominguez, Nils T. Otterstrom, Matt ...

2603.02584 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: medium Network: high

This paper demonstrates an ultra-low loss silicon nitride photonic platform that combines excellent passive optical properties (0.026 dB/cm loss) with active control via piezo-optomechanical actuation, enabling scalable quantum photonic circuits that operate at visible wavelengths with low power consumption and fast reconfiguration.

Key Contributions

  • Achieved ultra-low propagation loss of 0.026 dB/cm at 780 nm in a low-confinement silicon nitride platform
  • Demonstrated piezo-optomechanical phase shifters with MHz bandwidth and 2.8 V·m voltage-length product
  • Combined passive and active properties to enable scalable visible-wavelength quantum photonic circuits
photonic quantum computing silicon nitride piezo-optomechanical low-loss waveguides visible wavelength
View Full Abstract

The stringent demands of photonic quantum computing protocols motivate photonic integrated circuit (PIC) platforms with passive optical properties such as extremely low losses and correspondingly large circuit depths, as well as active optical properties such as high reconfiguration rates, low power dissipation, and minimal crosstalk. At the same time, many quantum photonic resource state generators, such as single-photon sources and quantum memories, require operation in the visible wavelength range. These requirements make the passive optical properties of CMOS-fabricated, ultralow-loss, low-confinement silicon nitride waveguides especially attractive. However, the conventional active properties of these systems based on thermo-optic modulation are plagued by high levels of crosstalk, slow modulation rates, and high power dissipation. Although there have been recent demonstrations of CMOS-fabricated, visible wavelength, piezo-optomechanical PICs that solve the above challenges associated with implementing active functionality, these have made use of high-confinement waveguides with currently demonstrated losses of order $0.3$-$1~\mathrm{dB/cm}$, precluding circuit depths required for scalable quantum algorithms. Here, we demonstrate that combining piezo-optomechanical actuation with a low-confinement, ultra-low loss silicon nitride platform addresses the scalability challenge while enabling high-performance active functionality at visible wavelengths. This platform achieves a propagation loss of $0.026~\mathrm{dB/cm}$ at $780~\mathrm{nm}$, modulation bandwidths in the MHz range, a phase shifter voltage-length product ($V_\pi L$) of approximately $2.8~\mathrm{V}\cdot\mathrm{m}$, and negligible hysteresis. We further demonstrate reconfigurable Mach-Zehnder interferometers based on spiral phase shifters with 0.63 dB loss per phase shifter.

Steering paths mid-flight for fault-tolerance in measurement-based holonomic gates

Anirudh Lanka, Juan Garcia-Nila, Todd A. Brun

2603.02552 • Mar 3, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops a fault-tolerant framework for implementing holonomic quantum gates using continuous measurements and real-time feedback. The approach can correct errors mid-computation and relaxes strict timing requirements, enabling faster and more robust quantum gate operations.

Key Contributions

  • Fault-tolerant framework for measurement-based holonomic gates with real-time error correction
  • Method to suppress non-Markovian decoherence through quantum Zeno effect
  • Protocol for correcting measurement-induced errors from non-adiabatic effects
  • Relaxation of adiabaticity requirements enabling faster gate implementation
holonomic quantum computation fault tolerance continuous measurement quantum Zeno effect error correction
View Full Abstract

Continuous measurement-based holonomic quantum computation provides a route to universal logical computation in quantum error correcting codes. We introduce a fault-tolerant framework for implementing measurement-based holonomic gates that leverages continuous measurements with real-time feedback. We show that non-Markovian decoherence is intrinsically suppressed through the quantum Zeno effect, while Markovian errors are identified by the decoding of measurement records to reveal the rotated syndrome subspace populated during the evolution. This information enables steering holonomic paths mid-flight to ensure that the final evolution realizes the target logical gate. We further demonstrate that non-adiabatic effects give rise to measurement-induced errors, and we show that these can also be corrected by an analogous protocol. This approach relaxes the stringent adiabaticity requirement and enables faster implementation of holonomic gates.

Constant-Time Surgery on 2D Hypergraph Product Codes with Near-Constant Space Overhead

Kathleen Chang, Zhiyang He, Theodore J. Yoder, Guanyu Zhu, Tomas Jochym-O'Connor

2603.02157 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper develops new techniques for performing fault-tolerant quantum computations on quantum error-correcting codes that dramatically reduce the time overhead from O(d) to constant time O(1) while maintaining very low space requirements. The work focuses on improving 'code surgery' methods that allow logical operations on quantum low-density parity-check codes.

Key Contributions

  • Development of constant-time surgery gadgets for 2D hypergraph product codes that achieve O(1) time overhead
  • Demonstration that performing d surgery operations in O(d) time maintains fault tolerance through amortization
quantum error correction fault-tolerant quantum computing qLDPC codes code surgery hypergraph product codes
View Full Abstract

Generalized code surgery is a versatile and low-overhead technique for performing fault-tolerant computation on quantum low-density parity-check (qLDPC) codes. In many settings, surgery exhibits practical space overheads, while its time overhead remains a bottleneck at $O(d)$ syndrome rounds per operation. In this work, we construct surgery gadgets that perform parallel logical measurements on 2D hypergraph product codes in constant time overhead ($O(1)$) and near-constant space overhead ($\tilde{O}(1)$). The reduced time overhead is a result of amortization, as we show, following the formulation by Cowtan et al. (arXiv:2510.14895), that performing $d$ surgery operations in $O(d)$ time is fault tolerant. Our gadgets combine the strengths of different approaches to fault-tolerant logical operations: they partially retain the flexibility of surgery while achieving overheads comparable to transversal gates. Consequently, they are well-suited for near-term experimental realization and demonstrate new possibilities in the design of gadgets for fast logical computation.

Obstacles to Continuous Quantum Error Correction via Parity Measurements

Anton Halaski, Christiane P. Koch

2603.02106 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper identifies fundamental problems with continuous quantum error correction using parity measurements in circuit quantum electrodynamics platforms. The researchers show that approximating required three-body interactions with two-body couplings corrupts the logical quantum information, limiting practical implementation of continuous error correction.

Key Contributions

  • Demonstrates that common parity-measurement protocols in circuit QED corrupt logical information during continuous operation
  • Identifies that the failure mechanism stems from approximating three-body interactions with two-body couplings to meters
  • Proposes alternative approaches including native three-body interaction architectures and erasure-based encodings
quantum error correction continuous measurements circuit quantum electrodynamics parity measurements stabilizer codes
View Full Abstract

Time-continuous quantum error correction, necessary to protect quantum information under time-dependent Hamiltonians, relies on weak continuous syndrome measurements. Implementing these measurements requires a continuous coupling among at least two qubits and a meter, a demanding requirement. We show that, under continuous operation, common parity-measurement protocols in the circuit quantum electrodynamics platform corrupt the logical information. The failure arises from approximating the three-body interaction by a sum of two-body couplings to the meter, which prevents simultaneous suppression of measurement backaction on the logical and error subspaces. We argue that the same mechanism applies more generally beyond the circuit quantum electrodynamics setting. Taken together, our results impose a practical limitation on continuous stabilizer quantum error correction and point to the viable alternatives -- architectures that realize native three-body interactions, or erasure-based encodings in which the error subspace need not be protected.

No More Hooks in the Surface Code: Distance-Preserving Syndrome Extraction for Arbitrary Layouts at Minimum Depth

Yuga Hirai, Shota Ikari, Yosuke Ueno, Yasunari Suzuki

2603.01628 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper proposes a new method called ZX interleaving syndrome extraction for quantum error correction in surface codes that eliminates problematic 'hook errors' while maintaining minimum circuit depth. The technique preserves the full fault tolerance distance for any surface code layout, improving upon existing methods that either add circuit overhead or reduce error correction capability.

Key Contributions

  • ZX interleaving syndrome extraction method that preserves full fault distance d for arbitrary surface code layouts at minimum depth
  • Elimination of hook errors without additional circuit depth or simultaneous measurement/CNOT execution requirements
  • Numerical validation showing full fault distance d achievement versus d-1 for existing minimum-depth approaches
surface code quantum error correction fault tolerance syndrome extraction hook errors
View Full Abstract

Hook errors are a major challenge in implementing logical operations with the surface code, because they can reduce the fault distance below the code distance. This motivates syndrome-extraction circuits that suppress hook-error effects for the stabilizer layouts that appear during logical operations. However, the existing methods either increase circuit depth or require simultaneous execution of measurements and CNOT gates, both of which introduce additional overheads and degrade the threshold. We propose the ZX interleaving syndrome extraction, which preserves the full fault distance $d$ for any surface-code layout with regular stabilizer tiles at minimum depth, i.e., four layers of CNOT gates, without requiring additional circuit depth or simultaneous execution of measurements and CNOT gates. The key idea is to interleave the Z and X stabilizer tiles so that hook-error edges in the decoding graph are shortened and effectively eliminated. Numerical simulations under uniform depolarizing noise for memory and lattice-surgery experiments confirm that the proposed method achieves a full fault distance of $d$, whereas the best existing minimum-depth approach achieves $d-1$. Since the full fault distance is achievable for any regular tiling layout of the surface code, the proposed method may serve as an indispensable technique for practical fault-tolerant quantum computation.

Sustaining high-fidelity quantum logic in neutral-atom circuits via mid-circuit operations

Rui Lin, You Li, Le-Tian Zheng, Tai-Ran Hu, Si-Yuan Chen, Hong-Ming Wu, Yu-Chen Zhang, Hao-Wen Cheng, Yu-Hao Deng, Zhan Wu, Ming-Cheng Chen, Jun Rui, ...

2603.01612 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper demonstrates a neutral-atom quantum computing system that maintains high gate fidelities (~99.8%) across multiple operational rounds by using mid-circuit cooling and qubit reinitialization to counteract atom loss and heating. The approach enables sustained high-performance operation needed for fault-tolerant quantum error correction.

Key Contributions

  • Demonstration of 99.81% fidelity two-qubit gates with erasure detection in neutral atoms
  • In-circuit Raman sideband cooling and qubit re-initialization maintaining ~99.8% fidelity across multiple rounds
  • Hardware-efficient mid-circuit operations framework enabling sustainable deep quantum circuits
neutral atoms fault-tolerant quantum computing mid-circuit operations quantum error correction gate fidelity
View Full Abstract

The realization of fault-tolerant quantum computation hinges on the ability to execute deep quantum circuits while maintaining gate fidelities consistently above error-correction thresholds. Although neutral-atom arrays have recently demonstrated high-fidelity two-qubit gates and early-stage logical quantum processors, sustaining such high performance across deep, repetitive circuits remains a formidable challenge due to cumulative motional heating and atom loss. Here we demonstrate a sustainable neutral-atom framework that overcomes these limitations by integrating a suite of hardware-efficient mid-circuit operations. We report a two-qubit controlled logic gate with a raw fidelity of 99.60(1)%, which is further increased to a fidelity of 99.81(1)% via non-destructive erasure detection. Crucially, by implementing in-circuit Raman sideband cooling and qubit re-initialization, we demonstrate that gate fidelities can be maintained at the ~99.8% level across multiple operational rounds without observable degradation. By actively managing the internal and motional entropy of the system mid-stream, our in-situ refreshable architecture provides a critical pathway for executing the repeated syndrome-extraction cycles required for large-scale, continuous quantum error correction.

QuMeld: A Modular Framework for Benchmarking Qubit Mapping Algorithms

Gabrielius Keibas, Linas Petkevičius

2603.01578 • Mar 2, 2026

CRQC/Y2Q RELEVANT QC: high Sensing: none Network: none

This paper presents QuMeld, an open-source software framework designed to systematically evaluate and compare different algorithms for mapping logical qubits to physical qubits on quantum computers. The framework supports multiple mapping algorithms, quantum computer topologies, and evaluation metrics in a modular design that allows for future extensions.

Key Contributions

  • Development of unified benchmarking framework for qubit mapping algorithms
  • Modular design supporting six algorithms and sixteen quantum computer topologies with extensibility for future additions
qubit mapping quantum circuits benchmarking framework quantum computer topologies compilation optimization
View Full Abstract

The qubit mapping problem in quantum computing concerns assigning logical qubits to the physical qubits of a quantum computer. Due to the diversity of quantum computer topologies and circuits, numerous approaches to solving this problem exist. Finding the best solution for a specific combination of topology and circuit remains difficult, and no unified framework currently exists for systematically evaluating and comparing qubit mapping algorithms across different cases. We present QuMeld, an open-source framework designed to address this issue. The framework currently supports six qubit mapping algorithms, sixteen quantum computer topologies, and multiple evaluation metrics. The modular design of the framework allows integration of new mapping algorithms, quantum circuits, hardware topologies, and evaluation metrics, ensuring extensibility and adaptability to future developments.
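
As a sketch of the kind of metric such a benchmark can report, the toy router below counts the SWAP gates a naive strategy inserts so that every two-qubit gate acts on adjacent qubits of a path (line) topology. This is a generic illustration under assumed conventions, not QuMeld's actual API; the circuit format and function name are made up for the example.

```python
# Toy qubit-mapping benchmark on a linear (path) topology.
# Hypothetical stand-in for the kind of SWAP-count metric a mapping
# benchmark reports; not the QuMeld interface.

def route_on_line(circuit, n_qubits):
    """Count SWAPs a naive router inserts so every 2-qubit gate
    acts on adjacent physical qubits of the path 0-1-...-(n-1)."""
    # phys[q] = current physical position of logical qubit q
    phys = list(range(n_qubits))
    swaps = 0
    for a, b in circuit:
        pa, pb = phys[a], phys[b]
        dist = abs(pa - pb)
        # Move qubit a toward b one site at a time, swapping with
        # whichever logical qubit occupies the intermediate site.
        while dist > 1:
            step = 1 if pb > pa else -1
            target = pa + step
            other = phys.index(target)   # logical qubit at that site
            phys[a], phys[other] = target, pa
            pa = target
            swaps += 1
            dist -= 1
    return swaps

circuit = [(0, 1), (0, 3), (1, 2)]   # two-qubit gate pairs, logical indices
print(route_on_line(circuit, 4))     # prints 2
```

A benchmarking framework would run many such routers over many topologies and compare metrics like SWAP count, depth overhead, and runtime.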

Core-bound waves on a Gross-Pitaevskii vortex

Evan Papoutsis, Nathan Apfel, Nir Navon

2603.05505 • Mar 5, 2026

QC: low Sensing: medium Network: none

This paper studies wave excitations bound to quantum vortices in Bose-Einstein condensates, finding new types of waves (varicose and fluting) that travel along the vortex core and proposing experimental methods to detect them.

Key Contributions

  • Discovery of dispersion relations for varicose and fluting wave families bound to Gross-Pitaevskii vortices
  • Proposal of realistic spectroscopic protocol for creating and detecting varicose waves with numerical validation
Bose-Einstein condensate quantum vortex Gross-Pitaevskii equation collective excitations dispersion relations
View Full Abstract

We find the dispersion relations of two elusive families of core-bound excitations of the Gross-Pitaevskii (GP) vortex, varicose (axisymmetric) and fluting (quadrupole) waves. For wavelengths of order the healing length, these two families -- and the well-known Kelvin wave -- possess an infinite sequence of core-bound, vortex-specific branches whose energies lie below the Bogoliubov dispersion relation. In the short-wavelength limit, these excitations can be interpreted as particles radially bound to the vortex, which acts as a waveguide. In the long-wavelength limit, the fluting waves unbind from the core, the varicose waves reduce to phonons propagating along the vortex, and the fundamental Kelvin wave is the only core-bound vortex-specific excitation. Finally, we propose a realistic spectroscopic protocol for creating and detecting the varicose wave, which we test by direct numerical simulations of the GP equation.

Calculating trace distances of bosonic states in Krylov subspace

Javier Martínez-Cifuentes, Nicolás Quesada

2603.05499 • Mar 5, 2026

QC: medium Sensing: medium Network: medium

This paper develops a new computational method to efficiently calculate trace distances between Gaussian quantum states in continuous-variable systems, using a Lanczos algorithm that works with statistical moments rather than explicit matrix representations. The technique helps distinguish quantum states and provides practical tools for quantum state verification and learning.

Key Contributions

  • Efficient numerical method for computing trace distances between Gaussian states using generalized Lanczos algorithm
  • Extension to non-Gaussian states expressible as linear combinations of Gaussian states
  • Practical tool for quantum state certification and learning in continuous-variable systems
continuous-variable quantum systems Gaussian states trace distance Lanczos algorithm quantum state certification
View Full Abstract

Continuous-variable quantum systems are central to quantum technologies, with Gaussian states playing a key role due to their broad applicability and simple description via first and second moments. Distinguishing Gaussian states requires computing their trace distance, but no analytical formula exists for general states, and numerical evaluation is difficult due to the exponential cost of representing infinite-dimensional operators. We introduce an efficient numerical method to compute the trace distance between a pure and a mixed Gaussian state, based on a generalized Lanczos algorithm that avoids explicit matrix representations and uses only moment information. The technique extends to non-Gaussian states expressible as linear combinations of Gaussian states. We also show how it can yield lower bounds on the trace distance between mixed Gaussian states, offering a practical tool for state certification and learning in continuous-variable quantum systems.
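
For intuition about the quantity being computed, the snippet below evaluates the trace distance in the simplest setting of single-qubit states, where the Bloch-vector formula T(rho, sigma) = |r1 - r2| / 2 is exact. This is only a finite-dimensional illustration of the distance measure itself; it does not use the paper's moment-based Lanczos method, which is what makes the infinite-dimensional Gaussian case tractable.

```python
# Trace distance between two single-qubit states, illustrating the
# distinguishability measure the paper computes for (much larger)
# Gaussian states. The closed form below is specific to 2x2 density
# matrices rho = (I + r . sigma) / 2.
import math

def trace_distance_qubit(r1, r2):
    """T(rho, sigma) = |r1 - r2| / 2 for Bloch vectors r1, r2."""
    return 0.5 * math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))

# |0><0| has Bloch vector (0,0,1); |1><1| has (0,0,-1): orthogonal
# states are perfectly distinguishable (distance 1). The maximally
# mixed state (0,0,0) sits at distance 1/2 from either pole.
print(trace_distance_qubit((0, 0, 1), (0, 0, -1)))  # 1.0
print(trace_distance_qubit((0, 0, 1), (0, 0, 0)))   # 0.5
```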

Ansatz-Free Learning of Lindbladian Dynamics In Situ

Petr Ivashkov, Nikita Romanov, Weiyuan Gong, Andi Gu, Hong-Ye Hu, Susanne F. Yelin

2603.05492 • Mar 5, 2026

QC: high Sensing: medium Network: low

This paper develops a new method to characterize how quantum systems interact with their environment and lose quantum properties over time, without needing to know the specific types of noise or errors beforehand. The technique can identify both the quantum evolution and dissipative processes using only simple measurements, making it practical for near-term quantum devices.

Key Contributions

  • First sample-efficient protocol for learning sparse Lindbladian dynamics without assuming prior structure or locality constraints
  • Ancilla-free implementation using only product-state preparations and Pauli measurements with near-optimal time resolution
  • Systematic approach to characterizing unknown error mechanisms in open quantum systems
Lindbladian dynamics open quantum systems quantum error characterization Markovian noise quantum system identification
View Full Abstract

Characterizing the dynamics of open quantum systems at the level of microscopic interactions and error mechanisms is essential for calibrating quantum hardware, designing robust simulation protocols, and developing tailored error-correction methods. Under Markovian noise/dissipation, a natural characterization approach is to identify the full Lindbladian generator that gives rise to both coherent (Hamiltonian) and dissipative dynamics. Prior protocols for learning Lindbladians from dynamical data assumed pre-specified interaction structure, which can be restrictive when the relevant noise channels or control imperfections are not known in advance. In this paper, we present the first sample-efficient protocol for learning sparse Lindbladians without assuming any a priori structure or locality. Our protocol is ancilla-free, uses only product-state preparations and Pauli-basis measurements, and achieves near-optimal time resolution, making it compatible with near-term experimental capabilities. The final sample complexity depends on linear-system conditioning, which we find empirically to be moderate for a broad class of physically motivated models. Together, this provides a systematic route to scalable characterization of open-system quantum dynamics, especially in settings where the error mechanisms of interest are unknown.
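
A minimal sketch of the underlying idea, rate learning from Pauli expectation values, is shown below for the simplest case: a single qubit under pure dephasing with jump operator sqrt(gamma) * Z, for which the Heisenberg equation gives <X>(t) = exp(-2 * gamma * t) <X>(0). The dissipation rate can then be read off from short-time data. This toy is a stand-in illustration of the measurement primitive, not the paper's sparse-Lindbladian protocol.

```python
# Toy Lindbladian rate learning: for single-qubit dephasing with
# jump operator sqrt(gamma)*Z, the Pauli expectation obeys
# <X>(t) = exp(-2*gamma*t) * <X>(0), so two expectation values at
# different times determine gamma. (Assumed toy model, not the
# paper's general protocol.)
import math

def simulate_x_expectation(gamma, t, x0=1.0):
    """Exact <X>(t) under the pure-dephasing Lindbladian."""
    return x0 * math.exp(-2.0 * gamma * t)

def estimate_gamma(x0, xt, t):
    """Invert the decay law from two measured expectation values."""
    return -math.log(xt / x0) / (2.0 * t)

gamma_true = 0.3
x0 = simulate_x_expectation(gamma_true, 0.0)
xt = simulate_x_expectation(gamma_true, 0.1)
print(round(estimate_gamma(x0, xt, 0.1), 6))  # 0.3
```

In practice each expectation value is itself estimated from repeated Pauli measurements, which is where the sample-complexity analysis of the paper enters.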

Quantum Simulation of Coupled Harmonic Oscillators: From Theory to Implementation

Viraj Dsouza, Weronika Golletz, Dimitrios Kranas, Bakhao Dioum, Vardaan Sahgal, Eden Schirman

2603.05479 • Mar 5, 2026

QC: high Sensing: medium Network: none

This paper develops and compares three concrete implementations of a quantum algorithm for simulating coupled harmonic oscillators, bridging the gap between theoretical proposals and practical quantum computing applications. The authors demonstrate their approaches on a quantum design platform and show how the algorithm can be applied to extract normal modes and simulate energy propagation in physical systems.

Key Contributions

  • Development of three concrete implementations of quantum harmonic oscillator simulation algorithm with resource benchmarks
  • Demonstration that complex initial state preparation can be circumvented for linear-chain cases
  • Bridge between theoretical quantum algorithms and practical implementation on quantum platforms
quantum simulation harmonic oscillators Hamiltonian simulation quantum algorithms block encoding
View Full Abstract

We investigate the quantum algorithm of Babbush et al. (arXiv:2303.13012v3) for simulating coupled harmonic oscillators, which promises exponential speedups over classical methods. Focusing on linearly connected oscillator chains, we bridge the gap between theory and implementation by developing and comparing three concrete realizations of the algorithm. First, we implement a sparse initial state preparation combined with product-formula (Suzuki-Trotter) Hamiltonian simulation. Second, we implement a fully quantum, oracle-based framework in which classical data are accessed via oracles, the Hamiltonian is block-encoded, and time evolution is performed using QSVT-based Hamiltonian simulation. Third, we propose an efficient alternative that combines the sparse state-preparation routine of the first approach with the oracle and block-encoding-based simulation pipeline of the second. We provide these implementations on Classiq, a high-level quantum design platform, and report appropriate resource benchmarks. Our simulation results show that the complex initial state preparation proposed by Babbush et al. can be circumvented at least in the linear-chain case. Finally, we illustrate two physical applications, extracting normal modes and simulating coarse-grained energy propagation, demonstrating how the algorithm connects to measurable observables. Our results clarify the resource requirements of the algorithm and provide concrete pathways toward practical quantum advantage.
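
The classical target of the "normal modes" application can be stated compactly: for a chain of N unit masses with nearest-neighbor springs kappa and fixed ends, mode k has squared frequency (4 * kappa / m) * sin(k*pi / (2*(N+1)))^2 with eigenvector components v_j = sin(j*k*pi / (N+1)). The check below verifies this spectrum directly against the tridiagonal stiffness matrix; it is classical reference arithmetic, not the quantum algorithm itself.

```python
# Classical normal modes of the fixed-end linear oscillator chain
# (the system the quantum algorithm simulates). For mode k,
# eigenvalue lambda_k = 4*kappa*sin(k*pi/(2*(N+1)))^2 and
# eigenvector v_j = sin(j*k*pi/(N+1)). We verify K v = lambda v.
import math

def check_mode(N, k, kappa=1.0):
    v = [math.sin(j * k * math.pi / (N + 1)) for j in range(1, N + 1)]
    lam = 4.0 * kappa * math.sin(k * math.pi / (2 * (N + 1))) ** 2
    # Apply the tridiagonal stiffness matrix (2 on the diagonal,
    # -1 on the off-diagonals, fixed boundaries).
    for j in range(N):
        left = v[j - 1] if j > 0 else 0.0
        right = v[j + 1] if j < N - 1 else 0.0
        kv = kappa * (2.0 * v[j] - left - right)
        if abs(kv - lam * v[j]) > 1e-12:
            return False
    return True

print(all(check_mode(8, k) for k in range(1, 9)))  # True
```

The identity follows from sin((j-1)t) + sin((j+1)t) = 2 sin(jt) cos(t), so (K v)_j = 2(1 - cos t) v_j = 4 sin^2(t/2) v_j with t = k*pi/(N+1).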

Spin-resolved microscopy of $^{87}$Sr SU($N$) Fermi-Hubbard systems

Carlos Gas-Ferrer, Antonio Rubio-Abadal, Sandra Buob, Leonardo Bezzo, Jonatan Höschele, Leticia Tarruell

2603.05478 • Mar 5, 2026

QC: medium Sensing: high Network: low

This paper demonstrates the first quantum-gas microscope for strontium-87 atoms that can detect individual atoms and determine their spin states. The technique enables microscopic study of exotic magnetic behavior in systems with higher symmetries than traditional spin-1/2 systems.

Key Contributions

  • First spin-resolved quantum-gas microscope for fermionic strontium-87
  • Single-atom detection capability across all 10 spin states of the ground-state manifold
  • New platform for studying SU(N) Fermi-Hubbard models and exotic magnetism
quantum gas microscopy strontium-87 SU(N) Hubbard model spin-resolved detection quantum simulation
View Full Abstract

Quantum-gas microscopes provide direct access to the phases of the Hubbard model, bringing microscopic insight into the complex competition between interactions, SU(2) magnetism, and doping. Alkaline-earth(-like) fermions extend this spin-1/2 paradigm by realizing higher symmetries and giving access to SU(N) Hubbard models, with rich phase diagrams to be unveiled. Despite its fundamental interest, a microscopic exploration of SU(N) quantum systems has remained elusive. Here we report the realization of a quantum-gas microscope for fermionic $^{87}$Sr. Our imaging scheme, based on cooling and fluorescence on the narrow intercombination line at 689 nm, enables spin-resolved single-atom detection. By implementing a spin-selective optical pumping protocol, we determine the occupation of each of the 10 spin states in a single experimental realization, a crucial capability for probing site-resolved magnetic correlations. We benchmark our method by observing single-particle Larmor precession across the full spin-9/2 ground-state manifold. These results establish $^{87}$Sr quantum-gas microscopy as a powerful approach to study exotic magnetism in the SU(N) Fermi-Hubbard model, and provide a new detection tool for studies in quantum simulation, computation, and metrology.

Local strategies are pretty good at computing Boolean properties of quantum sequences

Tathagata Gupta, Ankith Mohan, Shayeef Murshid, Vincent Russo, Jamie Sikora, Alice Zheng

2603.05452 • Mar 5, 2026

QC: medium Sensing: low Network: low

This paper studies how to compute global properties of quantum bit sequences using only local measurements on individual qubits, without quantum memory to store intermediate results. The authors prove that simple greedy strategies work optimally for affine Boolean functions and provide performance guarantees showing these local approaches remain competitive even for general functions.

Key Contributions

  • Complete characterization showing greedy local measurement strategies are optimal if and only if the target Boolean function is affine
  • Universal performance guarantee proving local strategies achieve success probability at least the square of optimal global measurement probability
quantum measurement local strategies Boolean functions quantum memory constraints measurement optimization
View Full Abstract

Quantum memory is a scarce and costly resource, yet little is known about which learning tasks remain feasible under severe memory constraints. We study the problem of computing global properties of quantum sequences when quantum systems must be measured individually, without storing or jointly processing them. In our setting, a bit string $x \in \{0,1\}^n$ is encoded into an $n$-qubit product state $|ψ_{x_1}\rangle \otimes \cdots \otimes |ψ_{x_n}\rangle$, and the goal is to infer $f(x) \in \{0,1\}$ from measurements of this quantum encoding. We consider a simple local strategy, which we call the greedy strategy, that applies the same optimal single-system measurement independently to each subsystem and then infers $f(x)$ from the outcomes. Our main result gives a complete characterization of when the greedy strategy is optimal: it achieves the same maximum success probability as an unrestricted global measurement if and only if the target Boolean function is affine (in all but finitely many cases). We establish a universal performance guarantee for general Boolean functions, showing that the success probability of the greedy strategy is always at least the square of the optimal global success probability, in direct analogy with the Barnum-Knill bound for the pretty good measurement. These results demonstrate that even under extreme memory constraints, simple local measurement strategies can remain provably competitive for learning global properties of quantum sequences.
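
For the affine case, where the paper proves the greedy strategy optimal, the success probability has a short derivation worth making concrete. Take f = parity: measure each qubit with the optimal (Helstrom) single-qubit measurement, which succeeds with probability p, and output the parity of the outcomes; the guess is correct iff an even number of single-qubit errors occurred, giving (1 + (2p - 1)^n) / 2. The snippet checks this closed form by exact enumeration; the overlap value c is a made-up illustration parameter.

```python
# Greedy local strategy for the parity function: Helstrom-measure
# each qubit (per-qubit success p = (1 + sqrt(1 - c^2))/2 for state
# overlap c), then output the parity of the outcomes. Correct iff
# the number of single-qubit errors is even.
import math
from itertools import product

def greedy_parity_success(p, n):
    """Exact success probability by enumerating error patterns."""
    total = 0.0
    for errors in product([0, 1], repeat=n):
        if sum(errors) % 2 == 0:  # even # of flips preserves parity
            total += math.prod(p if e == 0 else 1 - p for e in errors)
    return total

c = 0.6                                   # assumed overlap <psi0|psi1>
p = 0.5 * (1.0 + math.sqrt(1.0 - c * c))  # Helstrom success, here 0.9
n = 5
exact = greedy_parity_success(p, n)
closed = 0.5 * (1.0 + (2.0 * p - 1.0) ** n)
print(abs(exact - closed) < 1e-12)  # True
```

Since parity is affine, the paper's characterization says this local value already equals the optimal global success probability, so the squared lower bound is satisfied with room to spare.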

Measurement Induced Asymmetric Entanglement in Deconfined Quantum Critical Ground State

K. G. S. H. Gunawardana

2603.05436 • Mar 5, 2026

QC: medium Sensing: medium Network: medium

This paper studies how weak quantum measurements affect the entanglement properties of a quantum spin system at a critical phase transition point. The researchers find that measurements create asymmetric changes in entanglement depending on which phase the system is in, with entanglement increasing on one side of the transition and decreasing on the other.

Key Contributions

  • Demonstration of asymmetric entanglement restructuring across quantum phase boundaries under weak measurement
  • Numerical evidence that measurement-induced effects can alter the nature of quantum phase transitions from second-order to weak first-order
quantum measurement entanglement quantum phase transitions deconfined quantum criticality matrix product states
View Full Abstract

In this work, we numerically study the effect of weak measurement on a deconfined quantum critical point (DQCP). In particular, we consider the ground state of a one-dimensional spin-$1/2$ system with long-range exchange interactions ($K$), which shows a phase transition analogous to a DQCP in the thermodynamic limit. This system is in the ferromagnetic phase below the critical exchange interaction $K_c$ and in the valence bond solid phase above $K_c$. The weak measurement is carried out by coupling a secondary ancilla system to the critical system via unitary interactions and later measuring the ancilla spins projectively. We numerically calculate the entanglement entropy, correlation length, and order parameters of the leading post-measurement states using a uniform matrix product state representation of the quantum many-body state in the thermodynamic limit. We report asymmetric restructuring of the entanglement of the post-measurement states across the phase boundary under weak measurements. In particular, the trajectory $\left(\downarrow \downarrow\right)$, describing a uniform measurement outcome with all ancilla spins initialized in the same $\left(\downarrow \right)$ state, shows anomalous entanglement as the strength of the weak measurement increases. The bipartite entanglement entropy strongly increases when $K<K_c$, whereas it weakly decreases when $K>K_c$. We argue with numerical evidence that the observed asymmetry in entanglement would lead to a weakly first-order phase boundary in the thermodynamic limit. We also discuss important aspects of the experimental observation of measurement-induced effects linked to the strength of the weak measurement and the probability of post-measurement states.

Extreme Quantum Cognition Machines for Deliberative Decision Making

Francesco Romeo, Jacopo Settino

2603.05430 • Mar 5, 2026

QC: medium Sensing: none Network: none

This paper proposes Extreme Quantum Cognition Machines, a quantum learning architecture that uses quantum dynamics to create nonlinear feature maps for decision-making tasks, with a focus on handling noisy training data and linguistic classification problems.

Key Contributions

  • Introduction of Extreme Quantum Cognition Machines architecture combining quantum extreme learning with dynamical attention mechanisms
  • Hardware-compatible quantum implementation framework for deliberative decision making with tolerance to noisy training data
quantum machine learning quantum cognition extreme learning machines quantum reservoir computing deliberative decision making
View Full Abstract

We introduce Extreme Quantum Cognition Machines, a class of quantum learning architectures for deliberative decision making that is tolerant to noisy and contradictory training data. Inspired by the quantum cognition paradigm, Extreme Quantum Cognition Machines are closely related to quantum extreme learning and quantum reservoir computing, where fixed quantum dynamics generates a nonlinear feature map and learning is confined to a linear readout. A dynamical attention mechanism, implemented through an input-dependent interaction term in the Hamiltonian, modulates the quantum evolution and biases the resulting feature embedding toward task-relevant correlations. The approach is validated on linguistic classification tasks, which serve as paradigmatic examples of deliberative inference. Hardware-compatible quantum implementations of the proposed framework are discussed, together with potential applications in symbolic inference, sequence analysis, anomaly detection, and automatic diagnosis, with direct relevance to domains such as biology, forensics, and cybersecurity.
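
The extreme-learning recipe the abstract describes, a fixed nonlinear feature map with training confined to a linear readout, can be sketched classically. Below, a deterministic monomial map stands in for the fixed quantum dynamics (a made-up substitute for the paper's quantum feature embedding), and the only trained object is the linear readout, fit by solving a small linear system. The toy task is XOR, which no linear readout on the raw inputs can solve.

```python
# Classical toy of the extreme-learning idea: a FIXED nonlinear
# feature map (stand-in for the fixed quantum evolution) plus a
# trained LINEAR readout. Task: XOR, not linearly separable in
# the raw inputs but linear in the feature space below.

def phi(x1, x2):
    """Fixed nonlinear feature map; never trained."""
    return [1.0, x1, x2, x1 * x2]

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]                      # XOR labels

# Train the readout: solve Phi w = y by Gaussian elimination
# with partial pivoting (4x4 system, stdlib only).
M = [phi(*x) + [yi] for x, yi in zip(X, y)]
n = 4
for i in range(n):
    piv = max(range(i, n), key=lambda r: abs(M[r][i]))
    M[i], M[piv] = M[piv], M[i]
    for r in range(i + 1, n):
        f = M[r][i] / M[i][i]
        for c in range(i, n + 1):
            M[r][c] -= f * M[i][c]
w = [0.0] * n
for i in reversed(range(n)):
    w[i] = (M[i][n] - sum(M[i][j] * w[j] for j in range(i + 1, n))) / M[i][i]

preds = [round(sum(wi * fi for wi, fi in zip(w, phi(*x)))) for x in X]
print(preds)  # [0, 1, 1, 0] -- the linear readout solves XOR
```

The learned readout is w = (0, 1, 1, -2), i.e. XOR(x1, x2) = x1 + x2 - 2*x1*x2; all the expressive power lives in the fixed feature map, which is the role the quantum dynamics plays in the paper's architecture.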

All You Need is Amplifier: Spectral Imposters Without Pulse Shaping

Valeriia Bilokon, Elvira Bilokon, Denys I. Bondar

2603.05417 • Mar 5, 2026

QC: medium Sensing: low Network: none

This paper develops a real-time feedback control method that uses a simple amplifier to make one quantum system mimic the behavior of another without pre-designing complex control pulses. The approach is demonstrated by making hydrogen atoms produce optical emissions like argon and making weakly interacting lattices behave like strongly correlated quantum materials.

Key Contributions

  • Development of real-time feedback control framework using proportional controllers for quantum system control
  • Demonstration of adaptive response tracking that eliminates need for predesigned control waveforms
  • Application to both atomic systems and lattice models showing broad applicability of the control paradigm
quantum control feedback control adaptive control quantum dynamics strong-field physics
View Full Abstract

Quantum tracking control encodes the desired dynamics into a tailored driving field; here, we let the system find its own way there. We propose a real-time feedback control framework in which a proportional controller continuously corrects a simple transform-limited field based on the instantaneous mismatch between two systems' responses - producing the required control on the fly, without prior waveform design. The framework is demonstrated on two distinct examples: a single-active-electron atom, where hydrogen is driven to mimic argon's strong-field optical emission, and a Fermi-Hubbard chain, where a weakly interacting lattice reproduces the transport dynamics of a Mott-insulating reference. By shifting the control paradigm from predesigned inputs to adaptive response tracking, this approach establishes closed-loop feedback as a broadly applicable route to programmable quantum dynamics.

Extending spin-lattice relaxation theory to three-phonon processes

Nilanjana Chanda, Alessandro Lunghi

2603.05393 • Mar 5, 2026

QC: medium Sensing: high Network: none

This paper extends theoretical models of spin relaxation to include three-phonon processes, testing whether the standard weak coupling approximation used for nearly a century is valid. The researchers found that for a chromium nitride complex, three-phonon effects only matter at extremely high temperatures, validating existing theory while showing such effects could become important in materials with stronger spin-phonon coupling.

Key Contributions

  • First-principles extension of spin relaxation theory to three-phonon processes
  • Experimental validation of weak spin-phonon coupling approximation in molecular spin systems
  • Demonstration of conditions where three-phonon processes could dominate spin relaxation
spin-lattice relaxation phonon processes spin-phonon coupling molecular magnets chromium nitride
View Full Abstract

Spin-lattice relaxation theory has been developed over almost a century, but some cardinal assumptions on the nature of the interactions involved have never been fully verified. This includes the weak coupling approximation, which makes it possible to describe spin dynamics perturbatively and leads to the canonical description of spin relaxation in terms of one- and two-phonon processes. Here, we extend the first-principles theory of spin relaxation to three-phonon processes and apply it to the vdW crystal of a spin-1/2 Chromium nitride complex. Results show that three-phonon contributions to spin relaxation only become relevant at temperatures inaccessible to experiments for this molecule, thus providing unprecedented evidence for the validity of the weak spin-phonon coupling assumption in spin relaxation theory. At the same time, we numerically show that a relatively small increase in spin-phonon coupling would lead to a crossover between three- and two-phonon processes' efficiency at room temperature, illustrating the possibility for three-phonon effects in molecular materials as well as paving the way to a systematic exploration of strong coupling in spin systems.

MQED-QD: An Open-Source Package for Quantum Dynamics Simulation in Complex Dielectric Environments

Guangming Liu, Siwei Wang, Hsing-Ta Chen

2603.05378 • Mar 5, 2026

QC: low Sensing: medium Network: low

This paper presents MQED-QD, an open-source software package for simulating how molecular excitons (energy-carrying quasiparticles) behave when placed near complex nanoscale structures like silver nanorods. The software combines electromagnetic field calculations with quantum dynamics to show how plasmonic nanostructures can enhance energy transport in molecular systems.

Key Contributions

  • Development of MQED-QD open-source package for simulating exciton dynamics in complex dielectric environments
  • Demonstration that silver nanorods enhance long-range dipole-dipole interactions and accelerate exciton delocalization compared to planar surfaces
molecular excitons quantum dynamics plasmonics nanophotonics open quantum systems
View Full Abstract

Simulating the dynamics of molecular excitons in complex nanophotonic environments requires integrating rigorous electromagnetic simulations with accurate treatments of open quantum system dynamics. In this work, we develop MQED-QD (Macroscopic Quantum Electrodynamics for Quantum Dynamics), a robust computational package for simulating exciton dynamics in arbitrary dielectric and plasmonic environments. Based on the MQED framework, the package offers a unified workflow for constructing the dyadic Green's functions from classical electromagnetic solvers, parametrizing quantum master equations, and propagating the time evolution to determine the molecular subsystem's dynamical properties. To demonstrate the package's capabilities, we simulate exciton transport within a one-dimensional molecular chain near a silver nanostructure, including benchmarking against planar surfaces and exploring the influence of silver nanorods. Our results reveal that surface plasmon polaritons on nanorods dramatically enhance long-range dipole-dipole interactions, accelerating exciton delocalization and yielding higher participation ratios compared to planar geometries. By elucidating accurate molecular exciton dynamics in conjunction with nanophotonics and plasmonics, MQED-QD provides a powerful, open-source package that facilitates the rational design of nanoscale architectures.

Nonreciprocal transparency windows, Fano resonance, and slow/fast light in a membrane-in-the-middle magnomechanical system induced by the Barnett effect

M. Amghar, M. Amazioug

2603.05359 • Mar 5, 2026

QC: low Sensing: medium Network: medium

This paper studies a hybrid quantum system with magnetic spheres and a mechanical membrane that can create controllable transparency windows and manipulate light propagation. The researchers show how magnetic effects can be used to create non-reciprocal behavior where light behaves differently depending on its direction of travel.

Key Contributions

  • Demonstration of five transparency windows from combined photon-phonon-magnon interactions in a hybrid magnomechanical system
  • Achievement of controllable nonreciprocal absorption and group delay through Barnett effect manipulation
  • Tunable slow/fast light control via photon-phonon coupling strength adjustment
magnomechanics nonreciprocal transparency windows Fano resonance Barnett effect
View Full Abstract

Nonreciprocal phenomena are currently a major focus of research within the fields of classical and quantum technology. In this work, we theoretically investigate the interplay among multiple magnomechanically induced transparency (MMIT) windows, Fano resonances, slow/fast light, and nonreciprocal absorption and group delay in a hybrid cavity magnomechanical system. This system is composed of two yttrium iron garnet (YIG) spheres and a membrane positioned at the center of the cavity. By analyzing the absorption spectrum of a weak probe field in the presence of a strong control field, we demonstrate the emergence of five transparency windows resulting from combined photon-phonon, photon-magnon, and phonon-magnon interactions. The photon-phonon coupling associated with the membrane plays a crucial role in enhancing and tailoring these transparency features. We further examine the impact of the Barnett effect on the absorption and dispersion characteristics, showing that it enables the controllable manipulation of transparency windows and the generation of tunable Fano resonance profiles. The influence of cavity decay and magnon dissipation rates on the spectral response is also analyzed. In addition, we demonstrate that the group delay of the transmitted probe field can be effectively tuned via the photon-phonon coupling strength and the Barnett effect, allowing for a controllable transition between slow and fast light regimes. Finally, nonreciprocal absorption and group delay are achieved through appropriate adjustment of the coupling parameters. These findings highlight the potential of the proposed hybrid system for applications in optical signal processing and quantum information technologies.

Computing Green's functions and improving ground state energy estimation on quantum computers with Liouvillian recursion

Jérôme Leblanc, Olivier Nahman-Lévesque, Julien Forget, Thomas Lepage-Lévesque, Simon Verret, Alexandre Foley

2603.05349 • Mar 5, 2026

QC: high Sensing: none Network: none

This paper presents a hybrid quantum-classical algorithm that uses Liouvillian recursion to compute Green's functions on quantum computers, demonstrated on a four-site Hubbard model. The computed Green's functions are then used to improve ground state energy estimates beyond what direct measurement provides, with results showing the method is robust to noise and imperfect ground state preparation.

Key Contributions

  • First quantum-classical hybrid implementation of Liouvillian recursion for computing many-body Green's functions on quantum hardware
  • Demonstration that Green's functions can improve ground state energy estimation beyond direct Hamiltonian expectation values
  • Empirical evidence of exponential convergence and polynomial complexity in Green's function accuracy
  • Proof of concept showing robustness to quantum noise and imperfect state preparation on near-term devices
quantum algorithms Green's functions Liouvillian recursion Hubbard model hybrid quantum-classical
View Full Abstract

We present a quantum-classical hybrid implementation of the Liouvillian recursion method to compute many-body Green's functions using a quantum computer. From an approximate ground state preparation circuit, this algorithm produces the local ($r=r'$) and inter-site ($r\neq r'$) Green's functions $G_{rr'}(ω)$ by measuring observables generated recursively. We demonstrate the approach on a superconducting quantum processor for the open-boundary four-site Hubbard model. We then use the computed Green's functions as input to the Galitskii-Migdal formula to produce a better ground-state energy estimate than the direct expectation value of the Hamiltonian in the approximate circuit. Empirical results indicate exponential convergence in the number of iterations, yielding a computational complexity polynomial in the Green's-function accuracy, as measured with the Wasserstein distance. Our results also indicate significant robustness to noise and to inaccuracies of the ground state preparation, providing evidence that Liouvillian recursion is well adapted to the constraints of near-term quantum computing.
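
The classical backbone of recursion methods for Green's functions can be shown in miniature: Lanczos tridiagonalization of a small Hamiltonian yields coefficients (a_k, b_k), and the local Green's function G(z) = <v|(z - H)^{-1}|v> becomes a continued fraction in them. The paper runs the analogous recursion at the Liouvillian (operator) level with observables measured on hardware; the snippet below is only a single-particle toy under that simplification.

```python
# Toy recursion-to-Green's-function pipeline: Lanczos coefficients
# of a small Hamiltonian, then G(z) = <v|(z-H)^{-1}|v> evaluated as
# a continued fraction. Single-particle stand-in for the paper's
# operator-level Liouvillian recursion.

def lanczos(H, v):
    """Return diagonal (a) and off-diagonal (b) Lanczos coefficients."""
    n = len(v)
    mat = lambda u: [sum(H[i][j] * u[j] for j in range(n)) for i in range(n)]
    dot = lambda x, y: sum(xi * yi for xi, yi in zip(x, y))
    a, b, prev, bk = [], [], [0.0] * n, 1.0
    for _ in range(n):
        w = mat(v)
        ak = dot(v, w)
        a.append(ak)
        w = [wi - ak * vi - bk * pi for wi, vi, pi in zip(w, v, prev)]
        bk = dot(w, w) ** 0.5
        if bk < 1e-14:
            break
        b.append(bk)
        prev, v = v, [wi / bk for wi in w]
    return a, b

def green_cf(z, a, b):
    """Continued fraction for G(z), built from the innermost level."""
    g = 0.0
    for ak, bk in zip(reversed(a), reversed([1.0] + b)):
        g = bk ** 2 / (z - ak - g)
    return g

H = [[0.0, 1.0, 0.0],          # 3-site hopping chain
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 0.0]]
z = 0.5 + 0.1j
g = green_cf(z, *lanczos(H, [1.0, 0.0, 0.0]))
exact = (z * z - 1) / (z ** 3 - 2 * z)   # (z-H)^{-1}[0][0], analytic
print(abs(g - exact) < 1e-12)  # True
```

For this chain the recursion terminates exactly after three levels (a = (0,0,0), b = (1,1)), reproducing G(z) = (z^2 - 1)/(z^3 - 2z); in the many-body setting the same convergence in the number of recursion levels is what the paper measures empirically.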

Pulse-duration-sensitive high harmonics and attosecond locally-chiral light from a chiral topological Weyl semimetal

Alba de las Heras, Ofer Neufeld, Angel Rubio

2603.05346 • Mar 5, 2026

QC: low Sensing: medium Network: low

This paper studies how laser pulse duration affects high harmonic generation in materials, with a focus on chiral Weyl semimetals like RhSi that can generate unique twisted light with attosecond timescales. The researchers show that pulse duration significantly impacts the energy range of generated harmonics and demonstrate how chiral crystal structures can create locally chiral light.

Key Contributions

  • Demonstrated that laser pulse duration can extend high harmonic generation to higher photon energies by promoting higher conduction band excitations
  • Elucidated selection rules for generating attosecond locally chiral light in chiral Weyl semimetals with pronounced circular dichroism
high harmonic generation Weyl semimetal chiral light attosecond physics topological materials
Abstract

High harmonic generation (HHG) in solids results from an interplay between intraband acceleration and electron-hole recombination driven by a high-intensity laser pulse. Here, we theoretically reveal that the driving pulse duration can play a major role in extending HHG to higher photon energies by promoting higher conduction band excitations. The effect is present in a conventional semiconductor such as Si, restricted in a large-gap insulator such as MgO, and most prominent in RhSi, a prototypical chiral Weyl semimetal presenting numerous band crossings. Further, we elucidate the HHG selection rules in RhSi required for the synthesis of attosecond locally chiral light. The chiral crystal structure enables the generation of a local 3D electric field exhibiting an asymmetric instantaneous torsion on attosecond timescales. A pronounced circular dichroism emerges when the driving helicity is either aligned with or opposite to the crystal handedness. Our findings motivate future experiments in chiral Weyl semimetals to track high-energy band crossings and in-situ locally chiral light, paving the way for chiral compact light sources and light-wave driven topological electronics.

Dynamical quantum phase transitions through the lens of mode dynamics

Akash Mitra, Shashi C. L. Srivastava

2603.05284 • Mar 5, 2026

QC: low Sensing: medium Network: none

This paper studies dynamical quantum phase transitions (DQPTs) in fermionic systems by analyzing mode dynamics during sudden quantum quenches. The authors identify that DQPTs occur when spin-flip symmetry is restored in specific zero-energy modes, providing a new framework to understand these nonequilibrium quantum phenomena.

Key Contributions

  • Novel characterization of dynamical quantum phase transitions through symmetry restoration in zero-energy modes
  • Unified framework connecting mode dynamics with traditional DQPT indicators like rate function divergence and topological order parameter jumps
dynamical quantum phase transitions • fermionic systems • quantum quench • symmetry restoration • topological order parameter
Abstract

We study the mode dynamics of a generic quadratic fermionic Hamiltonian under a sudden quench protocol in momentum space. Modes with zero energy at any given time, $t$, are referred to as dynamical critical modes. Among all zero-energy modes, spin-flip symmetry is restored in the eigenvector corresponding to selected zero-energy modes. This symmetry restoration is used to define the dynamical quantum phase transition (DQPT). This shows that the occurrence of these dynamical critical modes is necessary but not sufficient for a DQPT. We show that the conditions on the quench protocol and time for such dynamical symmetry restoration are the same as the divergence of the rate function and integer jump in the dynamical topological order parameter, which have been the traditional identifiers of a DQPT. This perspective also naturally explains when one or both of DQPT and ground-state quantum phase transitions will occur.

False traps on quantum-classical optimization landscapes

Xiaozhen Ge, Shuming Cheng, Guofeng Zhang, Re-Bing Wu

2603.05190 • Mar 5, 2026

QC: high Sensing: medium Network: low

This paper analyzes optimization landscapes in quantum-classical hybrid algorithms, showing that local optima (false traps) can persist even when there are sufficient tunable parameters. The authors develop a mathematical framework to identify and classify critical points in these optimization problems and connect the emergence of false traps to loss of quantum distinguishability.

Key Contributions

  • Complete mathematical framework for analyzing critical points in quantum-classical optimization landscapes
  • Proof that parameter sufficiency does not guarantee absence of false traps in quantum optimization
  • Connection between optimization landscape topology and quantum distinguishability
quantum optimization • variational quantum algorithms • optimization landscapes • local optima • quantum distinguishability
Abstract

Optimization is ubiquitous in quantum information science and technology; however, the corresponding optimization landscapes can contain false traps, i.e., local but not global optima, which are likely to prevent the optimizers in use from finding optimal solutions. Such traps are believed to arise from parameter insufficiency and are expected to disappear when tunable parameters are sufficiently abundant. In this work, we investigate the optimization landscapes of quantum optimization problems and, in particular, show that parameter sufficiency is not enough to ensure the absence of false traps. First, we present a complete framework for analyzing the critical features of optimization landscapes, deriving necessary and sufficient conditions to identify all critical points and to classify them as local maxima, minima, or saddles, under some assumptions. Then, we show that false traps can still emerge on landscapes even with sufficient parameters, implying that their appearance cannot be attributed solely to parameter insufficiency. Moreover, we reveal a close connection between landscape topology and quantum distinguishability: the emergence of false traps is linked to the loss of distinguishability among the states or operators in the objective function. Finally, we note implications of our results. Our work not only provides a deeper understanding of the intrinsic complexity of quantum-classical optimization, but also offers practical guidance for solving quantum-classical optimization problems, thus aiding progress toward witnessing quantum advantage in the underlying quantum information processing tasks.

Design and Analysis of an Improved Constrained Hypercube Mixer in Quantum Approximate Optimization Algorithm

Arkadiusz Wołk, Karol Capała, Katarzyna Rycerz

2603.05187 • Mar 5, 2026

QC: high Sensing: none Network: none

This paper improves the Quantum Approximate Optimization Algorithm (QAOA) for solving optimization problems with constraints by developing a more efficient hypercube mixer that uses fewer quantum gates. The improvement makes the algorithm more robust to noise in current quantum computers, bringing it closer to practical applications.

Key Contributions

  • Development of an improved hypercube mixer for QAOA that reduces circuit gate count for constrained optimization problems
  • Analytical upper bound calculation for the number of binary variables where the reduction applies
  • Demonstrated improved noise robustness through numerical experiments
QAOA • quantum optimization • constrained optimization • hypercube mixer • NISQ
Abstract

The Quantum Approximate Optimization Algorithm (QAOA) is expected to offer advantages over classical approaches when solving combinatorial optimization problems in the Noisy Intermediate-Scale Quantum (NISQ) era. In its standard formulation, however, QAOA is not suited for constrained problems. One way to incorporate certain types of constraints is to restrict the mixing operator to the feasible subspace; however, this substantially increases circuit size, thereby reducing noise robustness. In this work, we refine an existing hypercube mixer method for enforcing hard constraints in QAOA. We present a modification that generates circuits with fewer gates for a broad class of constrained problems defined by linear functions. Furthermore, we calculate an analytical upper bound on the number of binary variables for which this reduction might not apply. Additionally, we present numerical experimental results demonstrating that the proposed approach improves robustness to noise. In summary, the method proposed in this paper allows for more accurate QAOA performance in noisy settings, bringing us closer to practical, real-world NISQ-era applications.
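The abstract's central idea, restricting the mixing operator to the feasible subspace, can be illustrated generically: any mixer of the form exp(-i*beta*A), where A is the adjacency matrix of a graph defined only on feasible bitstrings, can never leave the feasible subspace. A generic numpy sketch for a fixed-Hamming-weight constraint (our illustration only, not the paper's hypercube construction):

```python
import numpy as np
from itertools import combinations

def feasible_mixer(n, k, beta):
    """Toy feasibility-preserving mixer: enumerate all length-n
    bitstrings of Hamming weight k (as sets of '1' positions),
    connect strings that differ by moving one '1' to a '0' slot,
    and exponentiate the resulting adjacency matrix A. The unitary
    exp(-i*beta*A) mixes amplitudes only among feasible strings."""
    feas = [frozenset(c) for c in combinations(range(n), k)]
    idx = {s: i for i, s in enumerate(feas)}
    A = np.zeros((len(feas), len(feas)))
    for s in feas:
        for one in s:
            for zero in set(range(n)) - s:
                A[idx[s], idx[(s - {one}) | {zero}]] = 1.0
    # A is real symmetric, so exponentiate via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * beta * w)) @ V.conj().T
```

The result is a unitary acting on the C(n, k)-dimensional feasible subspace; the paper's contribution concerns implementing such mixers with fewer gates, which this dense-matrix sketch does not address.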

Security bounds for unidimensional discrete-modulated CV-QKD: a Gaussian extremality approach

John A. Mora Rodríguez, Maron F. Anka, Leonardo J. Pereira, Micael A. Dias, Alexandre B. Tacla

2603.05178 • Mar 5, 2026

QC: none Sensing: none Network: high

This paper analyzes the security of simplified quantum key distribution protocols that use one-dimensional modulation instead of two-dimensional, finding that a common mathematical assumption (Gaussian extremality) becomes too conservative for these protocols, making secure communication appear impossible even when it should be feasible.

Key Contributions

  • Extended Gaussian extremality security analysis method to 1D discrete-modulated continuous-variable QKD protocols
  • Demonstrated fundamental limitations of Gaussian extremality assumption for 1D protocols, showing it overestimates eavesdropper information and prevents secure key extraction for constellations larger than four states
  • Identified that unlike 2D protocols, 1D protocols lack sufficient phase-space isotropy for Gaussian extremality to remain a tight approximation as constellation size increases
quantum key distribution • continuous-variable QKD • Gaussian extremality • discrete modulation • semidefinite programming
Abstract

Unidimensional (1D) Gaussian-modulated continuous-variable quantum key distribution protocols have been proposed as a way to simplify implementation and reduce costs through single-quadrature modulation, requiring only one modulator while maintaining compatibility with standard optical infrastructure. Here, we determine security bounds for the 1D discrete-modulated protocol under the Gaussian extremality assumption by extending the method of Ghorai et al. [Phys. Rev. X 9, 021059 (2019)]. We establish the appropriate symmetry arguments to extend the method to the 1D discrete-modulated case, define the physicality zone in which the protocol is allowed to operate, and prove security against collective attacks in the asymptotic regime via semidefinite programming. Our analysis for uniformly distributed coherent states reveals a fundamental limitation: the Gaussian extremality assumption systematically overestimates Eve's information with increasing constellation size, yielding bounds so conservative that secure key extraction becomes impossible for constellations larger than four states, even under ideal conditions. This overestimation worsens with excess noise and restricts viable modulation amplitudes to impractically small values. Unlike two-dimensional (2D) protocols, where Gaussian extremality improves with constellation size, 1D protocols lack the growing phase-space isotropy required for the approximation to remain tight as the constellation grows. Our results expose these limitations and highlight the necessity of alternative methods or optimized non-uniform constellation designs for this class of protocols.

Machine Learning the Strong Disorder Renormalization Group Method for Disordered Quantum Spin Chains

A. Ustyuzhanin, J. Vahedi, S. Kettemann

2603.05164 • Mar 5, 2026

QC: low Sensing: none Network: low

This paper uses machine learning, specifically graph neural networks, to learn the strong disorder renormalization group method for analyzing entanglement in disordered quantum spin chains. The ML approach successfully reproduces the entanglement structure and entropy calculations from the physics-based SDRG method.

Key Contributions

  • Development of graph neural network approach to learn SDRG decimation policy for disordered spin chains
  • Demonstration that ML can accurately reproduce entanglement entropy across different interaction exponents and subsystem sizes
  • Extension to finite-temperature entanglement properties through SDRGX framework without retraining
machine learning • entanglement entropy • disordered spin chains • renormalization group • graph neural networks
Abstract

We train machine learning algorithms to infer the entanglement structure of disordered long-range interacting quantum spin chains by learning from the strong disorder renormalisation group (SDRG) method. The system consists of $S=1/2$-quantum spins coupled by antiferromagnetic power-law interactions with decay exponent $α$ at random positions on a one-dimensional chain. Using SDRG as a physics-informed teacher, we compare a Random Forest classifier as a classical baseline with a graph neural network (GNN) that operates directly on the interaction graph and learns a bond-ranking rule mirroring the SDRG decimation policy. The GNN achieves a disorder-averaged pairing accuracy close to one and reproduces the entanglement entropy $S(\ell)$ in excellent quantitative agreement with SDRG across all subsystem sizes and interaction exponents. RG flow heat maps confirm that the GNN learns the sequential decimation hierarchy rather than merely fitting final-state observables. Finite-temperature entanglement properties are incorporated via the SDRGX framework through a two-stage strategy, using the zero-temperature GNN to generate the RG flow and sampling thermal occupations from the canonical ensemble, yielding results in agreement with both numerical SDRGX and analytical predictions without retraining.
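The decimation policy the GNN is trained to imitate can be caricatured in a few lines. For a nearest-neighbour random antiferromagnetic S=1/2 chain (a simplification of the paper's long-range model), SDRG repeatedly locks the two spins on the strongest bond into a singlet and couples their neighbours with the Ma-Dasgupta-Hu effective bond J' = J_left * J_right / (2 * J_max). A sketch (ours, not the authors' implementation):

```python
import numpy as np

def sdrg_pairings(J):
    """First-order SDRG sweep on an open chain with len(J) bonds
    and len(J)+1 sites. Returns the list of singlet pairs in
    decimation order."""
    bonds, sites = list(J), list(range(len(J) + 1))
    pairs = []
    while bonds:
        i = int(np.argmax(bonds))          # strongest remaining bond
        pairs.append((sites[i], sites[i + 1]))
        left, right = i > 0, i < len(bonds) - 1
        if left and right:                  # Ma-Dasgupta-Hu rule
            bonds[i - 1:i + 2] = [bonds[i - 1] * bonds[i + 1] / (2.0 * bonds[i])]
        elif left:                          # rightmost bond decimated
            del bonds[i - 1:i + 1]
        elif right:                         # leftmost bond decimated
            del bonds[i:i + 2]
        else:                               # final remaining bond
            del bonds[i]
        del sites[i:i + 2]
    return pairs
```

On bonds [1.0, 5.0, 1.0] the strong middle bond pairs sites 1 and 2 first, and the weak effective bond then pairs the outer sites 0 and 3, reproducing the nested-singlet structure behind the entanglement entropy S(l).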

Constant-Depth Quantum Imaginary Time Evolution Using Dynamic Fan-out Circuits

Albert Lund, Erika Magnusson, Werner Dobrautz, Laura García-Álvarez

2603.05156 • Mar 5, 2026

QC: high Sensing: none Network: none

This paper develops a new approach to Quantum Imaginary Time Evolution (QITE) that uses dynamic quantum circuits with mid-circuit measurements and classical feed-forward to prepare ground states with constant circuit depth. The method reduces the number of parameters and entangling gates needed compared to standard QITE while maintaining or improving performance on optimization problems.

Key Contributions

  • Introduction of constant-depth QITE using dynamic fan-out circuits that reduces entangling gate requirements
  • Demonstration of reduced-parameter ansatz that outperforms standard QITE on exact cover and set partitioning problems
  • Experimental implementation and comparison of unitary vs dynamic circuit variants on IBM hardware with performance benchmarks
quantum imaginary time evolution • dynamic quantum circuits • ground state preparation • quantum optimization • mid-circuit measurement
Abstract

Dynamic quantum circuits combine mid-circuit measurement with classical feed-forward, enabling circuit constructions with reduced entangling-gate depth. Here, we investigate their use in Quantum Imaginary Time Evolution (QITE), where circuit depth and parameter growth limit practical implementations of ground-state preparation. For dense classical optimization Hamiltonians, we introduce a reduced-parameter QITE ansatz that restricts entanglement generation via a small set of control qubits, enabling each QITE layer to be implemented with constant two-qubit gate depth using fan-out-based dynamic circuits. In noiseless simulations of exact cover and set partitioning instances, the reduced ansatz yields a higher success probability than standard QITE approaches. We implement unitary, dynamic fan-out, and semi-classical adaptive variants on IBM superconducting hardware. The semi-classical variant performs favorably compared with the unitary implementation, while the fully dynamic construction exposes the trade-offs between entangling-depth reduction and the measurement and feed-forward overhead associated with dynamic circuit implementations. Using a fidelity threshold of 0.5 relative to the noiseless QITE ansatz, we show that dynamic fan-out based QITE would outperform unitary implementations on current devices when the measurement and two-qubit gate errors are reduced by 65% and the feedback latency is halved.
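The principle underlying QITE, independent of any circuit realization, is that imaginary-time evolution exp(-tau*H) exponentially damps excited states, so the normalized state converges to the ground state whenever the initial state has nonzero ground-state overlap. A dense-matrix sketch of that convergence (ours; actual QITE approximates this nonunitary evolution with measured unitaries):

```python
import numpy as np

def imaginary_time_ground_state(H, psi0, dtau=0.1, steps=200):
    """Repeatedly apply exp(-dtau * H) and renormalize. For this
    sketch we build the propagator by exact diagonalization of H;
    a quantum implementation would approximate it on hardware."""
    evals, evecs = np.linalg.eigh(H)
    step = evecs @ np.diag(np.exp(-dtau * evals)) @ evecs.conj().T
    psi = psi0 / np.linalg.norm(psi0)
    for _ in range(steps):
        psi = step @ psi
        psi /= np.linalg.norm(psi)      # imaginary-time flow is nonunitary
    return psi
```

After total imaginary time tau = dtau * steps, the residual excited-state weight is suppressed like exp(-tau * gap).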

Simulating Lattice Gauge Theories with Virtual Rishons

David Rogerson, João Barata, Robert M. Konik, Raju Venugopalan, Ananda Roy

2603.05151 • Mar 5, 2026

QC: high Sensing: none Network: none

This paper develops a new computational framework called 'virtual rishons' to simulate lattice gauge theories, which are important mathematical models in particle physics. The authors test their approach on quantum field theory models in 1D and 2D spacetime, using both classical computers and quantum hardware to study fundamental physics phenomena.

Key Contributions

  • Development of a novel virtual rishon framework for enforcing gauge symmetry in quantum simulations
  • Demonstration of scalable lattice gauge theory simulation on both classical tensor networks and near-term quantum hardware
  • Benchmarking results for U(1) gauge theories including multi-flavor Schwinger models and extraction of confining string tension
lattice gauge theory • quantum simulation • tensor networks • Schwinger model • gauge symmetry
Abstract

Classical tensor network and hybrid quantum-classical algorithms are promising candidates for the investigation of real-time properties of lattice gauge theories. We develop here a novel framework which enforces gauge symmetry via a quantum-link virtual rishon representation applied at intermediate steps. Crucially, the gauge and matter degrees of freedom are dynamical variables encoded in terms of qubits, enabling analysis of gauge theories in $d+1$ spacetime dimensions. We benchmark this framework in a U(1) gauge theory with and without matter fields. For $d = 1$, the multi-flavor Schwinger model with $1\leq N_f\leq3$ flavors is analyzed for arbitrary boundary conditions and nonzero topological angle, capturing signatures of the underlying Wess-Zumino-Witten conformal field theory. For $d = 2$, we extract the confining string tension in close agreement with continuum expectations. These results establish the virtual rishon framework as a scalable and robust approach for the simulation of lattice gauge theories using both classical tensor networks as well as near-term quantum hardware.

Advantage of flexible catalysis for entanglement and quantum thermodynamics

Jingsong Ao, Aby Philip, Alexander Streltsov

2603.05146 • Mar 5, 2026

QC: medium Sensing: low Network: medium

This paper investigates flexible catalysis in quantum systems, where auxiliary systems cycle through multiple states before returning to their initial configuration, rather than remaining unchanged throughout the process. The researchers demonstrate that this flexible approach provides advantages over standard catalysis in both quantum entanglement manipulation and quantum thermodynamics applications.

Key Contributions

  • Demonstrated that flexible catalysis offers strict advantages over standard catalysis in stochastic local operations and classical communication for entanglement
  • Proved that flexible catalysis outperforms standard catalysis in deterministic quantum thermodynamic processes
  • Provided specific examples of quantum state transformations impossible with standard catalysts but achievable through flexible catalytic cycles
flexible catalysis • entanglement • quantum thermodynamics • resource theory • quantum state transformation
Abstract

Understanding the fundamental limits of state convertibility is crucial for establishing the boundaries of quantum information processing and thermodynamic efficiency. While auxiliary systems, known as catalysts, can facilitate otherwise impossible transformations, standard catalysis rigidly requires the auxiliary system to return to its exact initial state. In this work, we investigate the power of flexible catalysis, where the catalyst evolves through a cycle of states, restoring its initial configuration only after a finite number of steps. Focusing on the regime of fixed, finite dimensions, we analyze the capabilities of flexible catalysis within the resource theories of entanglement and quantum thermodynamics. In the context of entanglement, we derive conditions limiting flexible catalysts and demonstrate that they offer a strict advantage in the success probability of stochastic local operations and classical communication. Conversely, in quantum thermodynamics, we prove that flexible catalysis strictly outperforms standard catalysis even in deterministic settings. We provide an example identifying state transformations that are impossible with any standard catalyst of fixed dimension and Hamiltonian but become achievable via a flexible cycle.

Standardizing Access to Heterogeneous Quantum Backends: A Case Study on Cloud Service Integration with QDMI

Patrick Hopf, Sebastian Stern, Robert Wille, Lukas Burgholzer

2603.05138 • Mar 5, 2026

QC: medium Sensing: none Network: none

This paper presents a case study on integrating the Quantum Device Management Interface (QDMI) with Amazon Braket cloud service to create a standardized way to access different quantum computing hardware and simulators. The work focuses on software infrastructure that enables unified management of quantum computing tasks across heterogeneous quantum backends through a single interface.

Key Contributions

  • Integration of QDMI standard with Amazon Braket cloud service for unified quantum backend access
  • Engineering framework for managing complete quantum computing task lifecycle across heterogeneous hardware platforms
quantum software stack • hardware abstraction • cloud computing • quantum backends • interoperability
Abstract

With an increasingly diverse portfolio of quantum backends, the adoption of standardized interfaces has become a key prerequisite for scalable access and interoperability within quantum software stacks. The Quantum Device Management Interface (QDMI) addresses this challenge and is emerging as one of the de facto standards for hardware abstraction, enabling the unified management not only of individual Quantum Processing Units (QPUs) but also of complete full-stack cloud services. This paper presents a case study demonstrating the integration of QDMI with Amazon Braket, a quantum computing cloud service that provides a single access point to a wide range of hardware technologies. By treating the cloud service itself as a unified device, the proposed implementation enables management of the complete task lifecycle, from authentication and circuit submission to result retrieval, across Braket's heterogeneous set of simulators and hardware backends. We detail the engineering insights gained from this integration and present a hands-on example workflow, ultimately paving the way for integrated access to cloud-hosted quantum resources from QDMI-enabled software stacks.
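The task lifecycle described above (authenticate, submit, poll status, retrieve results) is the part a unified-device abstraction has to standardize. As a language-agnostic illustration only, here is a stub of such a lifecycle behind one interface; every name is hypothetical and none of this is the actual QDMI (a C interface) or Braket API:

```python
import itertools

class UnifiedDevice:
    """Hypothetical sketch of a QDMI-style unified backend:
    one object mediates the whole task lifecycle regardless of
    which simulator or QPU sits behind it."""

    def __init__(self, name):
        self.name = name
        self._authed = False
        self._tasks = {}
        self._ids = itertools.count()

    def authenticate(self, token):
        self._authed = bool(token)
        return self._authed

    def submit(self, circuit, shots):
        if not self._authed:
            raise PermissionError("authenticate first")
        task_id = next(self._ids)
        # a real backend would queue the job; this stub completes it at once
        self._tasks[task_id] = {"status": "COMPLETED",
                                "counts": {"00": shots}}
        return task_id

    def status(self, task_id):
        return self._tasks[task_id]["status"]

    def result(self, task_id):
        if self.status(task_id) != "COMPLETED":
            raise RuntimeError("task not finished")
        return self._tasks[task_id]["counts"]
```

The point of the paper is that, with such an interface, client code written against the abstraction runs unchanged whether the backend is a local simulator or a cloud-hosted QPU.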

Classical shadows for non-iid quantum sources

Leonardo Zambrano

2603.05137 • Mar 5, 2026

QC: high Sensing: medium Network: low

This paper develops a more robust version of classical shadow tomography that works even when quantum measurements aren't independent, addressing real-world experimental conditions where noise and drift create correlations between measurement rounds. The authors prove their truncated mean estimator maintains the same favorable scaling properties as standard methods while being much more practical for actual quantum experiments.

Key Contributions

  • Introduction of truncated mean estimator for robust classical shadow tomography under non-i.i.d. conditions
  • Proof that sample complexity maintains standard shadow norm scaling even with history-dependent experimental rounds
classical shadow tomography • quantum state characterization • non-iid sampling • truncated mean estimator • shadow norm
Abstract

Classical shadow tomography has emerged as a powerful framework for predicting properties of quantum many-body systems with favorable sample complexity. Standard theoretical guarantees, however, rely on the assumption that experimental rounds are independent and identically distributed (i.i.d.). This idealization is often violated in practice, where parameter drift, environmental noise, and active feedback generate history-dependent sequences of states or channels. To address this, we introduce a robust classical shadow protocol based on a truncated mean estimator. We prove that its sample complexity for predicting properties of the time-averaged state or channel matches the standard i.i.d. scaling governed by the shadow norm, even when experimental rounds depend arbitrarily on the past. Our results establish the robustness of the shadow formalism beyond the i.i.d. regime.
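The robust ingredient here is the truncated mean: instead of averaging all single-round shadow estimates, one discards the extreme values before averaging, so a few corrupted or drifted rounds cannot dominate. A minimal sketch with a symmetric trim fraction (illustrative only; the paper's truncation thresholds come from its shadow-norm analysis, and the trim parameter here is our choice):

```python
import numpy as np

def truncated_mean(estimates, trim=0.1):
    """Trimmed-mean estimator: sort the single-round estimates,
    drop the smallest and largest `trim` fraction, and average
    the remainder."""
    x = np.sort(np.asarray(estimates, dtype=float))
    k = int(trim * len(x))
    return x[k:len(x) - k].mean() if len(x) > 2 * k else x.mean()
```

With 19 rounds reporting 1.0 and one corrupted round reporting 1000.0, the plain mean is pulled to roughly 51 while the trimmed mean stays at 1.0.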

Emergence of Turbulence in a counterflow geometry of 2D Polariton Quantum Fluids

Louis Depaepe, Kayce Ouahrouche, Alberto Amo, Clement Hainaut

2603.05125 • Mar 5, 2026

QC: low Sensing: low Network: none

This paper studies quantum fluids made of exciton-polaritons (light-matter hybrids) driven by two counter-propagating laser beams, identifying four distinct flow regimes including turbulent behavior. The researchers create detailed phase diagrams showing how different experimental parameters control transitions between linear, solitonic, turbulent, and superfluid states.

Key Contributions

  • Identification and characterization of four distinct regimes in driven polariton quantum fluids
  • Construction of quantitative phase diagrams mapping transitions between linear, solitonic, turbulent, and superfluid behaviors
  • Demonstration that quantum turbulence persists in experimentally realistic parameter ranges for GaAs-based microcavity platforms
exciton-polaritons • quantum fluids • turbulence • driven-dissipative systems • Gross-Pitaevskii equation
Abstract

We numerically investigate the nonlinear dynamics of a two-dimensional exciton-polariton quantum fluid coherently driven by two counter-propagating laser beams. Using an exciton-photon coupled driven-dissipative Gross-Pitaevskii framework, we identify four distinct regimes (linear, solitonic, turbulent, and superfluid) emerging from the interplay between pump strength, laser detuning, and injected momentum, which together control the balance between kinetic and interaction energies in the quantum fluid. The different regimes are characterized through real-space and momentum-space observables, as well as through the temporal first-order coherence function. We show that turbulence occupies a well-defined and extended region of parameter space, marked by spontaneous vortex nucleation and a pronounced reduction of temporal coherence, providing a clear signature of nonstationary dynamics. By constructing quantitative phase diagrams, we delineate the transitions between the various regimes and identify multiple pathways connecting solitonic, turbulent, and superfluid behaviors. Finally, we demonstrate that the turbulent regime persists over experimentally realistic parameter ranges compatible with state-of-the-art GaAs-based microcavity platforms, establishing counter-propagating polariton flows as a robust and versatile setting for the study of driven-dissipative quantum turbulence in two dimensions.

Double-sphere enhanced optomechanical spectroscopy constrains symmetron dark energy

Jiawei Li, Ka-Di Zhu

2603.05090 • Mar 5, 2026

QC: none Sensing: high Network: none

This paper proposes using two optically levitated nanospheres in a cavity to detect symmetron dark energy fields by measuring how these hypothetical fields cause splitting in the optomechanical resonance spectrum. The technique could improve laboratory constraints on screened scalar field dark energy models by several orders of magnitude.

Key Contributions

  • Novel optomechanical spectroscopy method for detecting symmetron dark energy using two levitated nanospheres
  • Theoretical framework showing potential for orders-of-magnitude improvement in laboratory constraints on screened fifth forces
optomechanical quantum sensing • dark energy • symmetron • precision measurement
Abstract

Screened scalar fields such as the symmetron provide a viable description of dark energy yet their laboratory detection remains challenging. We propose an optomechanical scheme to constrain symmetron interactions using two optically levitated nanospheres inside a cavity. The symmetron-mediated interaction induces an effective coupling which leads to a measurable splitting in the optomechanical resonance spectrum. We forecast constraints in the regime $μ \sim 10^{-2}$ eV to $10^{-4}$ eV, and show that this approach can improve existing laboratory bounds by up to several orders of magnitude, demonstrating the sensitivity of optomechanical spectroscopy to screened fifth forces.
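The detection principle is textbook normal-mode splitting: a coupling of strength g between two degenerate oscillators of frequency omega0 splits the spectrum into omega0 - g and omega0 + g, so resolving a splitting bounds the symmetron-mediated coupling. A toy check (our illustration, not the paper's model):

```python
import numpy as np

def mode_splitting(omega0, g):
    """Eigenfrequencies of two degenerate modes coupled with
    strength g are omega0 -+ g; the observable splitting is 2g."""
    H = np.array([[omega0, g], [g, omega0]])
    w = np.linalg.eigvalsh(H)       # ascending eigenvalues
    return w[1] - w[0]
```

The smaller the resolvable splitting relative to the linewidth, the tighter the bound on g.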

Quantum field theory for classical fields

Christof Wetterich

2603.05061 • Mar 5, 2026

QC: low Sensing: medium Network: low

This paper proposes a novel approach to bridge classical and quantum field theories by treating classical fields with probabilistic initial conditions as quantum systems. The authors show how statistical observables based on fluctuating classical fields naturally lead to non-commuting operators and quantum mechanical rules.

Key Contributions

  • Development of statistical observables framework that transforms probabilistic classical field theory into quantum field theory
  • Construction of functional integral formulation for the resulting quantum field theory with application to Klein-Gordon equation
quantum field theory • statistical observables • classical-quantum correspondence • Klein-Gordon equation • functional integral
Abstract

For classical field theories with probabilistic initial conditions the classical field observables are an idealization. Their arbitrarily precise values poorly reflect the characteristic uncertainty in the presence of substantial fluctuations. We propose to describe this system by observables based on fluctuating fields. In terms of these "statistical observables" the probabilistic classical field theory becomes a quantum field theory. Non-commuting operators are associated to observables. The quantum rules follow from the laws for classical probabilities. We construct the functional integral for the quantum field theory, and discuss in detail the classical relativistic Klein-Gordon equation with interactions.
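For reference, the interacting classical Klein-Gordon equation discussed in the abstract has the textbook form (our rendering; the signature convention and the choice of interaction potential are assumptions, not taken from the paper):

```latex
\left(\partial_t^2 - \nabla^2 + m^2\right)\varphi(x)
  + \frac{\partial V_{\mathrm{int}}(\varphi)}{\partial \varphi} = 0 ,
\qquad \text{e.g.}\quad V_{\mathrm{int}}(\varphi) = \frac{\lambda}{4!}\,\varphi^4 .
```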

Interplay of internal and external coupling phases in cavity magnonics: from level repulsion to attraction

Guillaume Bourcin, Mufti Avicena, Vincent Vlaminck, Jeremy Bourhill, Vincent Castel

2603.05051 • Mar 5, 2026

QC: medium Sensing: high Network: medium

This paper develops and experimentally validates a theoretical model for cavity magnonic systems that accounts for both internal and external coupling phases, enabling precise control over quantum interference effects and the transition between level repulsion and attraction behaviors.

Key Contributions

  • Development of unified input-output model incorporating internal and external coupling phases in cavity magnonic systems
  • Experimental demonstration of phase-controlled interference effects enabling transition from level repulsion to level attraction
  • Achievement of quantitative agreement between theory and experiment across all coupling regimes for nonreciprocal transmission
cavity magnonics • quantum interference • level repulsion • level attraction • nonreciprocal transmission
Abstract

We experimentally validate a unified input-output model that incorporates internal and external coupling phases in a room-temperature cavity magnonic system. By explicitly accounting for phase effects, the model provides full control of interference-induced antiresonances and enables a clear interpretation of the transition from level repulsion to level attraction. Nonreciprocal transmission, which originates from internal phases, is accurately reproduced under specific coupling conditions. Quantitative agreement between experiments and simulations is obtained across all coupling regimes, demonstrating a practical route toward phase-controlled cavity-magnon devices.
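The repulsion-to-attraction transition can be illustrated with a standard two-mode toy model: a coherent coupling (real g) repels the real parts of the eigenfrequencies, while a dissipative coupling (g multiplied by i) makes the real parts coalesce and splits the linewidths instead. This is a generic caricature with a single coupling phase phi, not the paper's full input-output theory:

```python
import numpy as np

def eigenfrequencies(omega_c, omega_m, g, phi):
    """Complex eigenfrequencies of a cavity mode and a magnon mode
    coupled with strength g and coupling phase phi. phi = 0 gives
    level repulsion of the real parts; phi = pi/2 (g -> i*g) gives
    level attraction with split imaginary parts."""
    g_eff = g * np.exp(1j * phi)
    H = np.array([[omega_c, g_eff], [g_eff, omega_m]])
    return np.linalg.eigvals(H)
```

On resonance (omega_c = omega_m), phi = 0 yields frequencies omega +- g, while phi = pi/2 yields omega +- i*g: identical real parts, i.e. attraction.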

Uniform process tensor approach for the calculation of multi-time correlation functions of non-Markovian open systems

Matteo Garbellini, Konrad Mickiewicz, Valentin Link, Alexander Eisfeld, Walter T. Strunz

2603.04970 • Mar 5, 2026

QC: medium Sensing: medium Network: low

This paper develops an improved computational method for studying how quantum systems behave when strongly coupled to non-Markovian environments, using a time-translation invariant matrix product operator approach that enables more efficient calculation of multi-time correlation functions and spectroscopic properties.

Key Contributions

  • Development of uniform time-evolving matrix product operator (uniTEMPO) method for process tensor calculations
  • Improved numerical scaling for computing multi-dimensional spectra in non-Markovian open quantum systems
Keywords: process tensor, non-Markovian dynamics, matrix product operators, open quantum systems, multi-time correlations

Full Abstract

The process tensor framework for open quantum systems provides the most general description of multi-time correlations in non-Markovian quantum dynamics. A compressed representation of a process tensor in terms of matrix product operators (MPO) can be used for numerically exact calculations of multi-time correlation functions in systems strongly coupled to a non-Markovian reservoir. We show here that the numerical scaling for computing multi-dimensional spectra can be significantly improved using a time-translation invariant MPO representation of the process tensor obtained from the uniform time-evolving matrix product operator (uniTEMPO) method. In particular, this approach provides a spectral representation of the non-Markovian dynamics that gives direct access to correlation functions in Fourier-space, avoiding explicit real-time evolution. We calculate linear and 2D electronic spectra for an example system and discuss the performance and numerical scaling of our simulations.

A Dynamical Lie-Algebraic Framework for Hamiltonian Engineering and Quantum Control

Yanying Liang, Ruibin Xu, Mao-Sheng Li, Haozhen Situ, Zhu-Jun Zheng

2603.04916 • Mar 5, 2026

QC: high Sensing: medium Network: low

This paper develops a mathematical framework using dynamical Lie algebras to systematically engineer and control quantum system dynamics under realistic physical constraints. The work provides methods for constructing efficient Hamiltonian structures that can simulate multiple quantum subsystems in parallel while preserving controllability and enabling targeted dynamical reductions.

Key Contributions

  • Unified framework for engineering Hamiltonian-driven quantum dynamics based on dynamical Lie algebras
  • Methods for constructing qubit-efficient direct-sum Hamiltonian structures enabling parallel quantum subsystem simulation
  • Identification of Hamiltonian modifications that preserve full controllability while introducing physically motivated control terms
Keywords: quantum control, Hamiltonian engineering, dynamical Lie algebras, quantum dynamics, controllability

Full Abstract

Determining the physically accessible unitary dynamics of a quantum system under finite Hamiltonian resources is a central problem in quantum control and Hamiltonian engineering. Dynamical Lie algebras (DLAs) provide the fundamental link between available control Hamiltonians and the resulting quantum dynamics. While the structural classification of DLAs is well-established, how to systematically engineer and reshape these algebraic structures under realistic physical constraints remains largely unexplored. In this work, building upon recent results on direct sums of identical DLAs, we develop a unified framework for engineering Hamiltonian-driven quantum dynamics based on DLAs: (i) constructing qubit-efficient direct-sum Hamiltonian structures via spectral decomposition of Hermitian operators, enabling parallel simulation of multiple quantum subsystems; (ii) identifying Hamiltonian modifications that preserve full controllability, including the $\mathfrak{su}(2^N)$ algebra, even when additional physically motivated control terms are introduced; and (iii) engineering restricted Hamiltonian sets that confine quantum dynamics to target subalgebras through irreducible Lie-algebra decompositions, providing a principled approach to symmetry-based dynamical reduction. By bridging these Lie-algebraic insights with practical control objectives, our framework provides a systematic pathway for engineering expressive and resource-efficient unitary evolutions, thus unlocking greater structural flexibility of Hamiltonian-driven quantum systems.
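
To make the paper's central object concrete, here is a minimal sketch (not the authors' code; the generator set and the su(2) example are illustrative) of computing the dimension of a dynamical Lie algebra by closing a generator set under commutators:

```python
import numpy as np

def lie_closure_dim(generators, tol=1e-9, max_passes=20):
    """Dimension of the dynamical Lie algebra spanned by iH_k and their
    nested commutators: keep taking commutators until the span stops growing."""
    ops, basis = [], []  # matrices kept / orthonormal flattened basis

    def add(m):
        v = m.flatten().astype(complex)
        for b in basis:
            v = v - (b.conj() @ v) * b        # Gram-Schmidt against current span
        norm = np.linalg.norm(v)
        if norm > tol:
            basis.append(v / norm)
            ops.append(m)
            return True
        return False

    for g in generators:
        add(1j * np.asarray(g, dtype=complex))  # anti-Hermitian convention iH
    for _ in range(max_passes):
        grew = False
        snapshot = list(ops)
        for a in snapshot:
            for b in snapshot:
                grew |= add(a @ b - b @ a)
        if not grew:
            break
    return len(basis)

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
assert lie_closure_dim([X, Z]) == 3   # {X, Z} generates all of su(2)
assert lie_closure_dim([Z]) == 1      # a single Hamiltonian spans a 1D DLA
```

The same loop, run on multi-qubit generator sets, is how one checks numerically whether a proposed Hamiltonian modification preserves full controllability (dimension 4^N - 1 for su(2^N)).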

Benchmarking Quantum Computers via Protocols, Comparing IBM's Heron vs IBM's Eagle

Nitay Mayo, Tal Mor, Yossi Weinstein

2603.04377 • Mar 4, 2026

QC: high Sensing: none Network: none

This paper develops and applies a protocol-based benchmarking method to evaluate quantum computer performance by comparing IBM's newer Heron processors against older Eagle processors. The approach uses quantumness thresholds to assess whether quantum devices can demonstrate practical quantum advantage at the protocol level rather than individual gate operations.

Key Contributions

  • Development of protocol-based benchmarking methodology using quantumness thresholds for quantum processor evaluation
  • Comparative performance analysis demonstrating substantial improvements in IBM's Heron architecture over Eagle architecture
Keywords: quantum benchmarking, IBM quantum processors, protocol-based evaluation, quantumness thresholds, quantum advantage

Full Abstract

As quantum computing hardware rapidly advances, objectively evaluating the capabilities and error rates of new processors remains a critical challenge for the field. A clear and realistic understanding of current quantum performance is essential to guide research priorities and drive meaningful progress. In this work, we apply and extend a protocol-based benchmarking methodology (presented in arXiv:2505.12441) that utilizes well-defined quantumness thresholds. By evaluating performance at the protocol level rather than the gate level, this approach provides a transparent and intuitive assessment of whether specific quantum processors, or isolated sub-chips within them, can demonstrate a practical quantum advantage. To illustrate the utility of this method, we compare two generations of IBM quantum computers: the older Eagle architecture and the newer Heron architecture. Our findings reveal the genuine operational strengths and limitations of these devices, demonstrating substantial performance improvements in the newer Heron generation.

Non-Hermitian Quantum Mechanics with Applications to Gravity

Oem Trivedi, Alfredo Gurrola, Robert J. Scherrer

2603.04375 • Mar 4, 2026

QC: low Sensing: low Network: none

This paper proposes that the Hermiticity requirement in quantum mechanics should be understood as a conservation law for inner product current rather than a fundamental axiom. The authors argue that near black holes and other causal horizons, this conservation becomes obstructed, leading to effective non-Hermitian quantum dynamics that connects to thermodynamics and entropy production.

Key Contributions

  • Reinterprets Hermiticity as an emergent symmetry from inner product current conservation rather than a fundamental axiom
  • Demonstrates connection between non-Hermitian quantum dynamics near horizons and gravitational thermodynamics through entropy balance
Keywords: non-Hermitian quantum mechanics, black hole thermodynamics, causal horizons, inner product conservation, generalized second law

Full Abstract

Hermiticity is usually treated as a foundational axiom of quantum mechanics, guaranteeing real spectra and unitary time evolution. In this work we argue that Hermiticity is more naturally understood as a symmetry law arising from the global conservation of an inner product current. We show that in spacetimes admitting complete Cauchy surfaces without boundary flux this conservation reduces to the familiar Hermiticity condition of the canonical inner product. However, in the presence of causal horizons, most strikingly in black hole geometries, this conservation law becomes obstructed for restricted observers. Tracing over inaccessible degrees of freedom then inevitably yields completely positive trace preserving dynamics with an effective non-Hermitian generator. Using quantum thermodynamics and the monotonicity of relative entropy, we demonstrate that the generalized second law may be reinterpreted as an entropy balance that compensates precisely for the flux of inner product charge through the horizon. The structure of Einstein equations, through the Bianchi identity and the Raychaudhuri focusing equation, provides the geometric mechanism underlying this balance. We also show that black hole ringdown can serve as a realistic observational probe of this idea and may provide quantitative upper bounds on the strength of horizon-induced inner product flux. In this way gravity, entropy production, and effective non-Hermiticity are unified under a single structural principle, with Hermiticity emerging as the special case of globally conserved inner product symmetry.

Dynamical Behaviour of Density Correlations Across the Chaotic Phase for Interacting Bosons

Óscar Dueñas, Alberto Rodríguez

2603.04373 • Mar 4, 2026

QC: medium Sensing: medium Network: low

This paper studies how density correlations spread in a one-dimensional Bose-Hubbard quantum system, finding that while integrable systems show ballistic (fast) correlation spreading, chaotic systems exhibit slower sub-ballistic spreading due to long-range correlation tails and weakened correlation fronts.

Key Contributions

  • Demonstrated that correlation fronts maintain ballistic propagation even in chaotic regimes while overall correlation transport becomes sub-ballistic
  • Identified the physical mechanism behind chaos-induced correlation slowdown as arising from long-time distance-dependent correlation tails and enhanced decay of correlation front amplitude
Keywords: Bose-Hubbard model, correlation transport, quantum chaos, many-body dynamics, thermodynamic limit

Full Abstract

We investigate the propagation of two-point density correlations in the one-dimensional Bose-Hubbard Hamiltonian in the thermodynamic limit in terms of the correlation transport distance (CTD), an experimentally measurable quantity that characterizes the spatial spreading of correlations in time. We confirm that the integrable limits of the model exhibit CTD ballistic growth, while the onset of the chaotic phase leads to the emergence of a pronounced sub-ballistic regime, in agreement with previous results for finite systems. By a meticulous analysis of the spatio-temporal correlation profiles, we show that the correlation front nonetheless propagates ballistically for all interaction strengths, and that the chaos-induced slowdown of the CTD originates from the emergence of long-time distance-dependent correlation tails, together with an enhanced decay of the correlation front amplitude. Our results thus provide a detailed characterization of correlation transport that goes beyond a simple light-cone picture.
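
The ballistic-versus-sub-ballistic distinction comes down to reading a power-law exponent off the spreading curve. A toy illustration with synthetic data (not the paper's CTD results): spreading x(t) ~ t^alpha, with alpha = 1 ballistic and alpha < 1 sub-ballistic, is classified by the log-log slope.

```python
import numpy as np

# Synthetic spreading curves x(t) = c * t^alpha; the exponent is the
# slope of log(x) versus log(t).
t = np.linspace(1.0, 100.0, 200)
for alpha in (1.0, 0.7):          # ballistic vs sub-ballistic toy cases
    ctd = 1.5 * t ** alpha
    slope = np.polyfit(np.log(t), np.log(ctd), 1)[0]
    assert abs(slope - alpha) < 1e-8
```

On real data the fit window matters, since the paper finds the front itself stays ballistic while the overall CTD does not.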

Quantum error mitigation by hierarchy-informed sampling: chiral dynamics in the Schwinger model

Theo Saporiti, Oleg Kaikov, Vasily Sazonov, Mohamed Tamaazousti

2603.04339 • Mar 4, 2026

QC: high Sensing: none Network: none

This paper introduces a new quantum error mitigation method for noisy quantum computers that uses mathematical hierarchy equations to identify and correct errors in quantum simulations. The authors test their approach on simulations of the Schwinger model, demonstrating systematic noise reduction that improves as more hierarchy constraints are imposed, at polynomial overhead.

Key Contributions

  • Novel quantum error mitigation scheme using BBGKY hierarchy equations as sampling criterion
  • Demonstration of systematic noise reduction in Schwinger model simulations with polynomial overhead
  • Empirical validation of chiral magnetic effect recovery from noisy quantum simulations
Keywords: quantum error mitigation, NISQ, BBGKY hierarchy, Schwinger model, chiral magnetic effect

Full Abstract

Quantum simulations on current NISQ hardware are limited by its noisy nature, putting efficient quantum error mitigation methods in high demand. In this paper we introduce a novel mitigation scheme, applicable to arbitrary quantum simulations of time-dependent Hamiltonian dynamics on NISQ devices. The scheme uses a polynomial subset of extended qubit Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy equations as a sampling criterion of possible mitigated candidates for the quantum observables. We show that for favorable Hamiltonians the polynomial subset of BBGKY hierarchy equations leads to a polynomial overhead in both classical and quantum resources. We employ the method to mitigate simulations of the chiral magnetic effect (CME), a chiral feature of the Schwinger model. We empirically show the effectiveness of our scheme at recovering the real-time dynamics of the CME from noisy quantum simulations of the Schwinger model, for a range of different parameter values of the model. We numerically demonstrate a systematic reduction of quantum noise, together with an increasing noise reduction capability as the amount of BBGKY constraints grows.
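
A toy analogue of the idea of using equations of motion as a sampling criterion (a single qubit with Ehrenfest constraints standing in for the paper's BBGKY hierarchy; everything here is illustrative, not the authors' scheme): among noisy candidate trajectories, the one that best satisfies the exact equations of motion is selected.

```python
import numpy as np

# For H = (w/2) Z the Ehrenfest equations are d<X>/dt = -w<Y>, d<Y>/dt = w<X>.
# Rank noisy candidate trajectories by how badly they violate them.
w, dt = 1.0, 0.01
t = np.arange(0.0, 5.0, dt)
x_exact, y_exact = np.cos(w * t), np.sin(w * t)   # exact dynamics from |+>

def residual(x, y):
    """Summed violation of both equations of motion (finite differences)."""
    dx, dy = np.gradient(x, dt), np.gradient(y, dt)
    return np.sum(np.abs(dx + w * y)) + np.sum(np.abs(dy - w * x))

rng = np.random.default_rng(0)
noise_levels = [0.1, 0.0, 0.05]                   # candidate "mitigations"
scores = [residual(x_exact + eps * rng.normal(size=t.size),
                   y_exact + eps * rng.normal(size=t.size))
          for eps in noise_levels]
assert int(np.argmin(scores)) == 1                # least-noisy candidate wins
```

The paper's criterion plays this role for many-body observables, with a polynomial subset of hierarchy equations replacing the two single-qubit constraints above.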

Direct derivation of the modified Langevin noise formalism from the canonical quantization of macroscopic electromagnetism

Alessandro Ciattoni

2603.04336 • Mar 4, 2026

QC: low Sensing: medium Network: medium

This paper provides a rigorous mathematical derivation showing how the modified Langevin noise formalism (MLNF) emerges directly from canonical quantum electromagnetism theory. The authors derive exact analytical expressions for polariton operators and prove they diagonalize the electromagnetic Hamiltonian when interacting with lossy materials.

Key Contributions

  • Derived exact analytical expressions for polariton operators in terms of canonical electromagnetic field operators
  • Provided rigorous proof that these polariton operators are bosonic and diagonalize the macroscopic electromagnetic Hamiltonian
  • Established direct mathematical connection between MLNF formalism and canonical quantization of macroscopic electromagnetism
Keywords: quantum electromagnetism, polaritons, Langevin noise, canonical quantization, macroscopic Maxwell equations

Full Abstract

The modified Langevin noise formalism (MLNF) models the interaction of the quantized electromagnetic field with an arbitrary lossy magneto-dielectric object placed in vacuum using three types of non-interacting bosonic polaritons: scattering, electric, and magnetic. These respectively represent free-space photons scattered by the object, and photons radiated by quantized electric and magnetic dipolar sources embedded within its volume. Recently [A. Ciattoni, Phys. Rev. A 110, 013707 (2024)], this formalism was justified from the canonical quantization of macroscopic electromagnetism (CQME) [Philbin, New J. Phys. 12, 123008 (2010)] in the Heisenberg picture. This was achieved by identifying the polariton operators within the formal solution of the macroscopic Maxwell equations, assuming they obey bosonic commutation relations to retrieve the canonical ones, and showing they diagonalize the CQME Hamiltonian. However, the explicit functional dependence of these polaritons on the underlying canonical field operators remained undetermined. In this paper, we derive the exact analytical expressions for the polariton operators in terms of the canonical CQME field operators. Using these mappings, we provide a direct and rigorous derivation of the MLNF from the canonical theory in the Schrödinger picture. Our derivation is structured in three foundational steps: 1) adopting the derived analytical expressions as the constitutive definitions of the polariton operators; 2) mathematically proving that these operators are strictly bosonic as a direct consequence of the canonical commutation relations; and 3) demonstrating that they exactly diagonalize the macroscopic CQME Hamiltonian.

On the operational and algebraic quantum correlations

Shun Umekawa, Jaeha Lee

2603.04332 • Mar 4, 2026

QC: medium Sensing: medium Network: low

This paper investigates how different ways of measuring quantum correlations (through actual measurements versus theoretical calculations) relate to each other, showing that the differences are bounded by how much the measurements disturb the quantum system. The authors provide mathematical bounds on these differences and identify when different correlation measurement approaches give equivalent results.

Key Contributions

  • Established quantitative bounds on differences between operational and algebraic quantum correlations based on measurement invasiveness
  • Derived new uncertainty relations for discrepancies between operational and algebraic joint probability distributions
  • Identified equivalence conditions under which operational and algebraic correlations coincide
Keywords: quantum correlations, measurement invasiveness, Leggett-Garg inequality, weak measurements, uncertainty relations

Full Abstract

We investigate the intrinsic ambiguity in the definition of correlation functions arising from the inevitable invasiveness of quantum measurements. While algebraic correlations defined as expectation values of products of observables are widely used, their relationship to operational ones defined through actual measurement procedures remains unclear. We demonstrate that the differences among various definitions of correlation functions and those among their underlying (quasi-)joint probability distributions are bounded above by a quantitative measure of measurement invasiveness. We further obtain a lower bound on the discrepancy between operational and algebraic (quasi-)joint probability distributions, providing a new form of the uncertainty relation. In addition, we identify an equivalence condition under which operational and algebraic correlations coincide. As an application, we analyze the quantum violation of the Leggett-Garg inequality and clarify the structural origin of the equivalence among different approaches to observing the violation, including sequential projective measurements and weak measurements. Our results provide an operational foundation for the commonly used algebraic concepts of quantum theory.

Long-lived metastable states in the 4f$^{13}$5d6s configuration of Yb$^+$

Z. E. D. Ackerman, A. Cadarso Quevedo, Ilango Maran, L. P. H. Gallagher, R. J. C. Spreeuw, J. C. Berengut, R. Gerritsma

2603.04250 • Mar 4, 2026

QC: medium Sensing: high Network: low

This paper experimentally demonstrates and measures extremely long-lived metastable electronic states in trapped ytterbium ions, with lifetimes ranging from ~1 second to over 30 seconds. The researchers use optical pumping and sympathetic cooling techniques to prepare and study these states, identifying potential applications in quantum state detection and atomic clocks.

Key Contributions

  • Experimental demonstration of metastable states in Yb+ with lifetimes of 0.92s, 9.8s, and >30s
  • Development of optical pumping techniques to prepare and measure these long-lived states in trapped ions
  • Theoretical atomic structure calculations supporting the observed lifetimes and decay pathways
Keywords: trapped ions, metastable states, optical clocks, atomic spectroscopy, ytterbium

Full Abstract

We study the occurrence of long-lived metastable states in the 4f$^{13}$5d6s electron configuration of Yb$^+$. By optical pumping of a single trapped ion on the $^2F^\text{o}_{7/2}\rightarrow (7/2,0)_{7/2}$ transition at 377.5 nm, we prepare a wide range of metastable electronic states. We use a co-trapped control ion to sympathetically cool the spectroscopy ion, allowing us to accurately time its subsequent decay. We record a strong decay signal corresponding to a lifetime of 0.92(8) s, a weaker decay signal with lifetime 9.8(+2.9, -2.0) s, and find evidence for a much longer lifetime, $>$ 30 s. We identify the metastable states with these lifetimes qualitatively, and corroborate our results with atomic structure calculations that support the observed lifetimes and decay paths. These long-lived states provide new opportunities in qubit and qudit state detection and optical clocks.
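
The lifetime measurement amounts to timing many individual decays and extracting an exponential time constant. A toy sketch with simulated data (the 0.92 s value is taken from the abstract; the sample size and analysis are illustrative, not the paper's):

```python
import numpy as np

# Simulate timed decay events of a metastable state and recover its lifetime.
rng = np.random.default_rng(7)
tau_true = 0.92                                    # seconds, from the abstract
decay_times = rng.exponential(tau_true, size=5000) # simulated decay records
tau_est = decay_times.mean()                       # MLE for an exponential lifetime
assert abs(tau_est - tau_true) < 0.1               # stat. error ~ tau/sqrt(N) ~ 0.013 s
```

The real experiment must additionally separate the three overlapping decay channels (0.92 s, 9.8 s, > 30 s), which a single-exponential fit like this cannot do.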

Progress on artificial flat band systems: classifying, perturbing, applying

Carlo Danieli, Sergej Flach

2603.04248 • Mar 4, 2026

QC: low Sensing: medium Network: low

This paper reviews recent advances in artificial flat band systems, which are engineered quantum materials where certain energy levels have zero dispersion. The authors examine three main areas: the fundamental physics and classification of these flat bands, how they respond to perturbations like disorder and interactions, and their experimental realizations across different physical platforms.

Key Contributions

  • Classification and theoretical framework for flat band generators based on compact localized states
  • Review of perturbation effects on flat bands including disorder and many-body interactions
  • Survey of experimental realizations of flat band systems across multiple physical platforms
Keywords: flat bands, artificial quantum systems, localized states, many-body interactions, quantum materials

Full Abstract

We highlight recent progress in the study of artificial flat band systems with a threefold focus. First, we discuss single-particle flat band physics, which has advanced through the design of various flat band generators. These generators rely on the classification of flat bands in terms of compact localized states - their fundamental building blocks. A related development is the complete real-space description of flat band projectors. Next, we review studies on perturbations of flat bands, which provide new insights into the effects of disorder and, more importantly, the intricate interplay between many-body interactions and flat band physics. Finally, we survey the growing number of experimental realizations of flat bands across diverse physical platforms.
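
A minimal example of the flat-band physics being reviewed (the 1D diamond lattice, a textbook case rather than anything specific to this paper): the middle band is dispersionless because a compact localized state on the two symmetric sites is annihilated by the Bloch Hamiltonian at every k.

```python
import numpy as np

def diamond_bloch(k):
    """Bloch Hamiltonian of the 1D diamond (rhombic) lattice: one hub site A
    coupled equally to two symmetric sites B, C in its own and the next cell."""
    f = 1 + np.exp(-1j * k)
    return np.array([[0, f, f],
                     [np.conj(f), 0, 0],
                     [np.conj(f), 0, 0]])

ks = np.linspace(-np.pi, np.pi, 101)
bands = np.sort([np.linalg.eigvalsh(diamond_bloch(k)) for k in ks], axis=1)
assert np.allclose(bands[:, 1], 0)        # middle band is flat at E = 0

# The compact localized state lives on B and C with opposite amplitudes and
# is a zero mode at every quasimomentum -- the "building block" of the review.
cls = np.array([0, 1, -1]) / np.sqrt(2)
assert all(np.allclose(diamond_bloch(k) @ cls, 0) for k in ks)
```

The flat-band generators discussed in the review systematize exactly this construction: choose the compact localized states first, then build lattices that support them.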

Constructing Arbitrary Coherent Rearrangements in Optical Lattices

Alexander Roth, Liyang Qiu, Timon Hilker, Titus Franz, Philipp M. Preiss

2603.04210 • Mar 4, 2026

QC: high Sensing: medium Network: low

This paper develops a method to coherently rearrange ultracold atoms in optical lattices using controlled tunneling and phase shifts, enabling arbitrary unitary transformations on atomic motional states. The approach uses an analogy with linear optics and the Clements scheme to systematically construct quantum operations with atoms trapped in arrays of double wells.

Key Contributions

  • Systematic construction of arbitrary N-dimensional single-particle unitaries using optical superlattices and the Clements scheme
  • Demonstration of key quantum subroutines including Discrete Fourier Transform and non-native Hamiltonian implementation
  • Two-dimensional extension enabling all-to-all atomic rearrangement with sublinear scaling in circuit depth
Keywords: optical lattices, ultracold atoms, quantum control, unitary transformations, Clements scheme

Full Abstract

Coherent control of motional degrees of freedom of ultracold atoms in optical lattices offers a promising route towards programmable quantum dynamics with massive particles. We propose and analyze a scheme for implementing coherent rearrangement of ultracold atoms, corresponding to arbitrary unitary transformations on single-particle motional states. Exploiting an analogy between dynamics in optical superlattices and discrete linear optics, we employ the Clements scheme to systematically construct any global $N$-dimensional single-particle unitary from tunneling and phase shifts in arrays of double wells. Tunneling is controlled globally, while local operations are achieved through site-resolved potential shifts. We numerically investigate the susceptibility of the scheme to intensity noise and addressing crosstalk. We identify key subroutines enabled by this unitary construction, including the Discrete Fourier Transform and the implementation of non-native Hamiltonians. Extending the scheme to two dimensions enables all-to-all atomic rearrangement with a circuit depth that scales sublinearly with the atom number, providing a high-density and highly scalable approach to atom rearrangement.
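
A minimal sketch of the linear-optics analogy (the function name and parametrization are illustrative, not the authors' implementation): composing nearest-neighbour 2x2 "double-well" operations in a Clements-style alternating mesh yields a global N-mode unitary.

```python
import numpy as np

def double_well_op(n, m, theta, phi):
    """Embed a 2x2 tunneling/phase block on adjacent modes (m, m+1) into an
    n-mode identity; the block has standard beam-splitter form."""
    t = np.eye(n, dtype=complex)
    t[m, m] = np.exp(1j * phi) * np.cos(theta)
    t[m, m + 1] = -np.sin(theta)
    t[m + 1, m] = np.exp(1j * phi) * np.sin(theta)
    t[m + 1, m + 1] = np.cos(theta)
    return t

rng = np.random.default_rng(3)
n = 6
u = np.eye(n, dtype=complex)
for layer in range(n):                    # Clements-style alternating layers
    for m in range(layer % 2, n - 1, 2):
        u = double_well_op(n, m, rng.uniform(0, np.pi / 2),
                           rng.uniform(0, 2 * np.pi)) @ u
assert np.allclose(u.conj().T @ u, np.eye(n))   # the composition is unitary
```

In the proposed scheme the tunneling angles are set globally per layer and the phases via site-resolved potential shifts; here both are sampled randomly just to verify that any such mesh composes to a unitary. The Clements result is the converse: n layers of this form suffice to reach any target n-dimensional unitary.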

OptiQKD: A Machine Learning-Optimized Framework for Real-Time Parameter Tuning in Quantum Key Distribution

Noureldin Mohamed, Jawaher Kaldari, Saif Al-Kuwari

2603.04192 • Mar 4, 2026

QC: none Sensing: none Network: high

This paper presents OptiQKD, a machine learning framework that uses neural networks and reinforcement learning to automatically optimize parameters in quantum key distribution systems in real-time, improving secure key rates by 20-30% while reducing error rates.

Key Contributions

  • Protocol-agnostic ML framework combining TCNs and RL for real-time QKD parameter optimization
  • Demonstrated 20-30% improvement in secure key rate and reduction of QBER from 3.0% to 1.5% while maintaining security guarantees
Keywords: quantum key distribution, machine learning, temporal convolutional networks, reinforcement learning, secure key rate

Full Abstract

Despite the robust security guarantees of Quantum Key Distribution (QKD), its practical deployment is significantly challenged by the dynamic nature of quantum channels and the complexity of real-time parameter optimization. In this paper, we propose OptiQKD, a protocol-agnostic machine learning framework specifically engineered to maximize the Secure Key Rate (SKR) and minimize the Quantum Bit Error Rate (QBER) for the BB84, E91, and COW protocols. OptiQKD integrates Temporal Convolutional Networks (TCNs) for high-accuracy and short-horizon forecasting of channel-state fluctuations with a Reinforcement Learning (RL) controller for autonomous and real-time parameter selection. This optimization stack is strictly constrained by standard composable-security assumptions to ensure that performance gains do not compromise the underlying quantum security. We evaluate the framework by simulating critical environmental stressors, including depolarizing and amplitude-damping noise, under realistic device constraints, including channel loss, detector efficiency, and dark counts. Our results demonstrate substantial protocol-agnostic improvements: the median SKR increases by 20--30%, while the median QBER is reduced from 3.0% to 1.5% through predictive state optimization. These findings establish that OptiQKD provides an efficient, security-preserving mechanism for dynamic parameter tuning, paving the way for more resilient and high-throughput practical QKD deployments.
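
To put the reported QBER numbers in context, a back-of-the-envelope asymptotic BB84 estimate (the idealized formula r = 1 - 2h(Q), ignoring losses, finite-size effects, and the paper's actual SKR model):

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bb84_key_fraction(qber):
    """Asymptotic BB84 secret-key fraction r = 1 - 2 h(Q), assuming ideal
    devices and symmetric errors (a standard textbook bound, not OptiQKD)."""
    return max(0.0, 1.0 - 2.0 * h2(qber))

# The QBER improvement reported in the abstract (3.0% -> 1.5%) already
# translates into a sizeable gain in the distillable key fraction.
r_before = bb84_key_fraction(0.030)
r_after = bb84_key_fraction(0.015)
assert r_after > r_before > 0.5
```

This simple bound illustrates why the framework optimizes QBER and SKR jointly: the key fraction is strongly nonlinear in the error rate near typical operating points.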

Distributed optimization of Lindblad equations for large-scale cavity QED systems

Hui-hui Miao

2603.04187 • Mar 4, 2026

QC: medium Sensing: medium Network: medium

This paper develops a distributed computing framework to efficiently simulate large-scale cavity QED systems by solving Lindblad master equations. The approach exploits the sparsity of jump operators and the Cannon matrix-multiplication algorithm to dramatically reduce computational complexity and memory requirements when simulating open quantum systems with many atoms and dissipative channels.

Key Contributions

  • Reduced computational complexity for non-unitary terms from O(MN³) to O(MN) using sparsity of jump operators and Cannon algorithm
  • Dynamic subspace construction method that reduces Hamiltonian dimension to 5.63% of full size with only 0.32% memory footprint for 10-atom systems
  • Distributed computing framework enabling simulation of large-scale open quantum systems where dissipative channels greatly outnumber Hamiltonian dimensions
Keywords: cavity QED, Lindblad equation, open quantum systems, distributed computing, quantum simulation

Full Abstract

This paper proposes a distributed computing framework for solving the Lindblad master equation in large-dimensional cavity QED systems. By leveraging the sparsity of the jump operator and combining this approach with the Cannon algorithm, the computational complexity of non-unitary terms is reduced from $O(MN^3)$ to $O(MN)$. For unitary terms, a combination of Taylor series approximation and the Cannon algorithm enables distributed matrix exponentiation, though scalability is limited by cross-processor communication. The proposed dynamic subspace construction method further reduces the Hamiltonian dimension: when $n_{\text{at}}=10$, the dimension is reduced to $5.63\%$ of the full Hamiltonian, with a memory footprint of only $0.32\%$. Results show that this framework significantly accelerates non-unitary evolution, providing a feasible solution for simulating large-scale open quantum systems where the number of dissipative channels $M$ is much larger than the Hamiltonian dimension $N$.
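
The sparsity argument can be illustrated in a few lines (a serial sketch; the paper's distributed Cannon-algorithm implementation is not reproduced here): with a sparse jump operator, each term of the Lindblad dissipator needs only sparse-times-dense products.

```python
import numpy as np
from scipy import sparse

def sparse_dissipator(L, rho):
    """D[L]rho = L rho L† - (1/2){L†L, rho} for Hermitian rho, built from
    sparse @ dense products only: cost O(nnz(L) * N) instead of dense O(N^3)."""
    B = L @ rho                      # sparse @ dense -> dense
    LrL = L @ B.conj().T             # = L rho L†   (since rho = rho†)
    K = L.conj().T @ B               # = L†L rho
    return LrL - 0.5 * (K + K.conj().T)

N = 6
rng = np.random.default_rng(1)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho).real                            # random density matrix
L = sparse.csr_matrix(np.diag(np.ones(N - 1), k=-1)) # lowering-type jump op
D = sparse_dissipator(L, rho)
assert abs(np.trace(D)) < 1e-10                      # dissipator is traceless

# Cross-check against the textbook dense formula.
Ld = L.toarray()
dense = (Ld @ rho @ Ld.conj().T
         - 0.5 * (Ld.conj().T @ Ld @ rho + rho @ Ld.conj().T @ Ld))
assert np.allclose(D, dense)
```

Summed over M such channels this gives the O(MN)-per-step scaling for the non-unitary part; the paper's contribution is distributing these products (and the unitary term) across processors.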

Non-local nonstabiliserness in Gluon and Graviton Scattering

John Gargalionis, Nathan Moynihan, Michael L. Reichenberg Ashby, Ewan N. V. Wallace, Chris D. White, Martin J. White

2603.04148 • Mar 4, 2026

QC: medium Sensing: none Network: none

This paper analyzes a quantum information property called 'magic' or non-stabiliserness in high-energy physics processes involving gluons and gravitons. The authors develop a basis-independent framework to quantify this quantum resource and show how it behaves in particle scattering, providing theoretical insights into quantum information aspects of fundamental physics.

Key Contributions

  • Development of basis-independent non-local non-stabiliserness framework for high-energy physics
  • Demonstration that helicity basis provides natural framework for studying quantum information in gluon/graviton scattering
Keywords: non-stabiliserness, magic states, fault-tolerant quantum computing, gluon scattering, graviton scattering

Full Abstract

The property of non-stabiliserness, or "magic", is of interest in quantum computing due to its role in developing fault-tolerant quantum algorithms with genuine computational advantage over classical counterparts. There has been much interest in quantifying magic in various physical systems, in order to probe how to produce and enhance it. The production of magic has previously been quantified in gluon and graviton scattering, in the so-called helicity basis relating particle spins with momentum directions. For a basis-independent statement, one should instead use the recently developed concept of non-local non-stabiliserness, and our aim in this paper is to derive how this varies for gluon and graviton scattering processes. Our results show that, for many initial states, including those produced with polarised beams, the helicity basis coincides with a basis in which the non-local magic is manifest, providing a physical motivation for using the helicity basis to study quantum information quantities. However, this property breaks upon adding additional operators to the Yang-Mills Lagrangian, as would be the case in new physics scenarios.
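
For a concrete handle on "magic", here is the single-qubit stabilizer 2-Rényi entropy, a standard magic monotone from the literature (not this paper's non-local measure): it vanishes on stabilizer states and is positive on a T-state.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def stab_renyi_2(psi):
    """Single-qubit stabilizer 2-Renyi entropy: M2 = -log2( sum_P <P>^4 / 2 ),
    summing over the Pauli group {I, X, Y, Z}."""
    exps = [np.real(psi.conj() @ (P @ psi)) for P in (I2, X, Y, Z)]
    return -np.log2(sum(e ** 4 for e in exps) / 2)

plus = np.array([1, 1]) / np.sqrt(2)                        # stabilizer state
t_state = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)  # T|+>: has magic
assert np.isclose(stab_renyi_2(plus), 0.0)
assert np.isclose(stab_renyi_2(t_state), np.log2(4 / 3))
```

The non-local non-stabiliserness studied in the paper refines this kind of measure by minimizing over local basis changes, so that only magic shared between subsystems is counted.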

Entanglement between quantum dots transmitted via Majorana wire: Insights from the fermionic negativity, concurrence and quantum mutual information

C. Jasiukiewicz, A. Sinner, I. Weymann, T. Domański, L. Chotorlishvili

2603.04108 • Mar 4, 2026

QC: medium Sensing: none Network: high

This paper studies quantum entanglement between two quantum dots connected through a topological superconducting nanowire containing Majorana modes. The researchers analyze how entanglement varies with energy levels and coupling strengths, finding optimal conditions for entanglement transmission at different temperatures.

Key Contributions

  • Characterization of entanglement transmission through Majorana modes using fermionic negativity and thermal concurrence
  • Identification of optimal conditions for quantum dot entanglement via topological superconducting nanowires
  • Development of protocols for robust finite-temperature entanglement transmission
Keywords: quantum entanglement, Majorana fermions, quantum dots, topological superconductivity, quantum networking

Full Abstract

We study quantum entanglement in a system comprising two quantum dots interconnected through the short topological superconducting nanowire, which hosts overlapping boundary Majorana modes. Inspecting the fermionic negativity, we analyze the variation of entanglement against the position of the energy levels of quantum dots and their hybridization with the topological superconducting nanowire. In the absence of electron correlations, the optimal entanglement occurs when the energy levels coincide with the zero-energy Majorana modes, whereas upon increasing the hybridizations, the entanglement is gradually suppressed. Such monotonous behavior is no longer valid when the quantum dot levels are detuned from the zero-energy. Under these circumstances, the quantum dots become maximally entangled for a certain optimal hybridization. Moreover, we study the thermal concurrence to explore the entanglement properties at finite temperatures. We also compute the quantum mutual information and propose recipes for robust finite-temperature entanglement transmission via Majorana modes.
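
Of the three measures used, concurrence has a closed form for two qubits. A self-contained sketch of the Wootters formula (a generic illustration, not the paper's Majorana-wire calculation):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix:
    C = max(0, l1 - l2 - l3 - l4), where l_i are the square roots of the
    eigenvalues of rho (Y⊗Y) rho* (Y⊗Y) in decreasing order."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)                          # |Phi+>
assert np.isclose(concurrence(np.outer(bell, bell)), 1.0)   # maximally entangled
assert np.isclose(concurrence(np.eye(4) / 4), 0.0)          # maximally mixed
```

The thermal concurrence in the paper applies this same formula to the reduced two-dot state of a Gibbs ensemble, which is how the finite-temperature robustness is quantified.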

Spectral Bath Engineering for Quantum-Enhanced Agrivoltaics: Advancing Efficiency and Environmental Sustainability via Non-Markovian Dynamics

Steve Cabrel Teguia Kouam, Theodore Goumai Vedekoi, Jean-Pierre Tchapet Njafa, Jean-Pierre Nguenang, Serge Guy Nana Engo

2603.04097 • Mar 4, 2026

QC: none Sensing: none Network: none

This paper proposes using quantum coherence effects in photosynthetic systems to improve agrivoltaic installations, where solar panels and crops share the same land. The researchers claim that specially designed organic solar panels can filter sunlight to enhance quantum effects in plant photosynthesis, potentially increasing both crop yields and energy production.

Key Contributions

  • Application of quantum coherence modeling to photosynthetic light-harvesting complexes
  • Design of spectral filtering systems for enhancing non-Markovian quantum effects in biological systems
quantum biology photosynthesis non-Markovian dynamics spectral engineering agrivoltaics
View Full Abstract

As global demand for food and clean energy intensifies, agrivoltaic systems have emerged as a vital solution for land-use optimization. However, current designs overwhelmingly treat incident light as a classical photon flux, overlooking the quantum mechanical nature of photosynthetic energy transfer. We introduce spectral bath engineering: the strategic spectral filtering of sunlight through semi-transparent organic photovoltaic (OPV) panels to exploit non-Markovian quantum coherence in biological light-harvesting. Using Process Tensor HOPS (PT-HOPS) and Spectrally Bundled Dissipators (SBD) to simulate the Fenna-Matthews-Olson complex, we demonstrate that selective filtering at vibronic resonance wavelengths (750 nm and 820 nm) enhances the electron transport rate (ETR) by 25% relative to standard Markovian models. This quantum advantage is driven by vibronic resonance-assisted transport, which extends coherence lifetimes by 20% to 50% and nearly doubles pairwise concurrence (89%). Multi-objective Pareto optimization identifies OPV configurations reaching 18.8% power conversion efficiency while sustaining an 80.5% system ETR, potentially generating an additional USD 470 to 3000 per hectare per year in revenue. Environmental simulations across nine climate zones, including sub-Saharan Africa, confirm persistent ETR enhancements of 18% to 24%. Finally, eco-design analysis using quantum reactivity descriptors ensures that these technological gains are achieved using sustainable, biodegradable materials. By bridging quantum biology and renewable energy engineering, this work provides a quantitative blueprint for next-generation agrivoltaic materials that co-optimize agricultural productivity and energy yield.

The Steiner Tree Problem: Novel QUBO Formulation and Quantum Annealing Implementation

Dan Li, Xiang-Hui Wu, Ji-Rong Liu

2603.04089 • Mar 4, 2026

QC: high Sensing: none Network: low

This paper develops a quantum annealing algorithm to solve the Steiner Tree Problem by converting it into a quantum-suitable QUBO formulation. The authors demonstrate their approach can find high-quality solutions for moderate-scale network optimization problems with relatively low computational cost.

Key Contributions

  • Novel QUBO formulation for the Steiner Tree Problem suitable for quantum annealing
  • Quantum annealing algorithm with encoding strategy for solving NP-hard combinatorial optimization
quantum annealing QUBO Steiner tree problem combinatorial optimization quantum algorithms
View Full Abstract

The Steiner Tree Problem (STP) is a well-known NP-hard combinatorial optimization problem with wide applications in network design, integrated circuit layout, bioinformatics, and other fields. However, traditional algorithms often struggle to balance efficiency and solution quality when dealing with large-scale STP instances. In this paper, we propose a new quantum annealing-based algorithm for solving the STP: we first recast the STP into a quadratic unconstrained binary optimization (QUBO) form suitable for quantum annealing, then design a corresponding encoding strategy, and finally verify the algorithm through experimental tests. The results show that our quantum annealing-based method can obtain high-quality solutions with relatively low computational overhead for moderate-scale STP instances, providing a new feasible path for handling this intractable combinatorial optimization problem.
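The abstract does not reproduce the authors' formulation, but the general QUBO recipe it relies on (encode constraints as quadratic penalties, then minimize x^T Q x over binary vectors) can be sketched on a toy edge-selection problem. Exhaustive search stands in for the annealer, and the cost values and penalty construction below are illustrative, not the paper's:

```python
import itertools
import numpy as np

def qubo_energy(x, Q):
    """Energy x^T Q x of a binary vector under QUBO matrix Q."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

def brute_force_minimum(Q):
    """Exhaustive minimizer for tiny instances (stand-in for the annealer)."""
    n = Q.shape[0]
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_energy(x, Q))
    return np.array(best), qubo_energy(best, Q)

# Toy problem: minimize total edge cost while selecting exactly k edges,
# enforced by the penalty A*(sum_i x_i - k)^2. Expanding it (using
# x_i^2 = x_i and dropping the constant k^2) yields a QUBO matrix with
# (1 - 2k) added on the diagonal and 1 on every off-diagonal entry.
costs = np.array([1.0, 2.0, 3.0, 1.5])
A, k = 10.0, 2
n = len(costs)
Q = np.diag(costs) + A * (np.ones((n, n)) - 2 * k * np.eye(n))

x_opt, e_opt = brute_force_minimum(Q)
print(x_opt)  # selects the two cheapest edges: [1 0 0 1]
```

With a large enough penalty weight A, any solution violating the cardinality constraint costs more than any feasible one, so the minimizer picks the two cheapest edges.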

Air-stable bright entangled photon-pair source from graphene-encapsulated van der Waals ferroelectric NbOI2

Mayank Joshi, Mengting Jiang, Yu Xing, Yuerui Lu, Jie Zhao, Ping Koy Lam, Syed M Assad, Xuezhi Ma, Young-Wook Cho

2603.04082 • Mar 4, 2026

QC: medium Sensing: low Network: high

Researchers developed an air-stable source of entangled photon pairs using a van der Waals ferroelectric material (NbOI2) protected by graphene encapsulation. The graphene coating prevents degradation from air exposure and heat, enabling bright, stable generation of quantum-entangled light that could be used for quantum communication and computing applications.

Key Contributions

  • Achieved record photon-pair generation rate of 258 Hz with 19,900 Hz/(mW.mm) normalized brightness using graphene-encapsulated NbOI2
  • Demonstrated air-stable operation and environmental protection through graphene encapsulation preventing material degradation
  • Generated polarization entangled photon pairs with 94% fidelity to maximally entangled Bell states using twisted bilayer configuration
  • Established practical pathway for scalable integrated quantum light sources for on-chip quantum photonics
entangled photons spontaneous parametric down-conversion van der Waals materials ferroelectric graphene encapsulation
View Full Abstract

Van der Waals (vdW) ferroelectrics are emerging nonlinear photonic materials that combine large second-order susceptibility χ(2) with heterostructure compatibility, offering an attractive route toward miniaturized spontaneous parametric down-conversion (SPDC) sources. However, vdW SPDC sources operating under continuous irradiation in air remain limited by low brightness and poor operational stability, as oxygen and moisture exposure, together with pump-induced heating, lead to material degradation and permanent damage. Here we demonstrate an air-stable, bright SPDC source based on ferroelectric NbOI2 enabled by graphene encapsulation. Graphene provides robust environmental protection and can effectively suppress pump-induced degradation by enhancing heat dissipation. We report a record absolute photon-pair generation rate of 258 Hz and a normalized brightness of 19,900 Hz/(mW.mm). Leveraging this stabilized platform, we further generate polarization-entangled photon pairs with 94% fidelity with respect to the maximally entangled Bell states from graphene-encapsulated 90° twisted bilayer NbOI2. Our results establish a practical and air-stable vdW ferroelectric SPDC platform that overcomes key limitations of existing vdW quantum light sources and provides a viable pathway toward scalable, integrated entangled-photon sources for on-chip quantum photonics.

(Quantum) reference frames, relational observables, gauge reduction and physical interpretation

Thomas Thiemann

2603.04072 • Mar 4, 2026

QC: low Sensing: medium Network: low

This paper develops a theoretical framework for understanding how quantum reference frames work in gauge theories like General Relativity, addressing fundamental questions about how to define observables and measurements when coordinates themselves are not physically meaningful. It introduces the concept of relational reference frame transformations to handle the mathematical and conceptual challenges that arise when quantizing gauge systems.

Key Contributions

  • Development of relational reference frame transformation (RRFT) formalism for quantum gauge theories
  • Theoretical framework addressing quantization order issues in gauge systems with reference frames
  • Conceptual analysis of how gauge-dependent fields relate to relational observables in quantum contexts
quantum reference frames gauge theory relational observables general relativity quantum field theory
View Full Abstract

It is mandatory to know how to operationally define and translate a reference frame into mathematics, in order that a physical interpretation of theory calculations in terms of observational data is possible. The situation is particularly challenging for gauge systems such as General Relativity where spacetime coordinates are subject to spacetime diffeomorphisms considered as gauge transformations turning coordinates into non-observables. This motivates the idea of operationally defined (material) reference frames which specify coordinates in terms of matter or geometry reference fields leading to the concept of relational observables, relational reference frames and gauge reduction. Upon quantisation, all fields become operator valued distributions. Now new conceptual and technical questions arise such as: Should one reduce before or after quantisation and how are the reference fields quantised respectively in either route? Is a reference frame itself subject to quantisation and how are different quantum reference frames related? How does the gauge reduction fit into this, i.e. how can it be that a certain reference field is considered a non-observable in one reference frame and an observable in another which upon quantisation even displays fluctuations? How precisely are gauge dependent fields interpreted in terms of the relational observables in a given reference frame? What is the relative dynamics, e.g. how exactly are physical Hamiltonians of two relational reference frames related? The present conceptual work addresses these and related questions in a non-perturbative field theory context of sufficient generality to cover General Relativity coupled to standard matter. A central role is played by the concept of the relational reference frame transformation (RRFT) for which a general formula is derived and its properties are explored.

Deterministic Quantum Jump (DQJ) Method for Weakly Dissipative Systems

Marcus Meschede, Ludwig Mathey

2603.04066 • Mar 4, 2026

QC: high Sensing: medium Network: low

This paper introduces a deterministic quantum jump method for simulating weakly dissipative quantum systems, which improves computational efficiency by removing stochastic sampling errors when quantum jumps are rare events. The method reconstructs the density matrix evolution more accurately than standard quantum jump approaches for systems with weak environmental coupling.

Key Contributions

  • Development of deterministic quantum jump method that outperforms stochastic approaches in weakly dissipative regimes
  • Demonstration of single-jump and two-jump level reconstructions with applications to transverse-field Ising model and Kerr oscillator
quantum jump methods open quantum systems Lindblad master equation weakly dissipative dynamics density matrix simulation
View Full Abstract

Physical quantum systems are generically coupled to an environment, resulting in open system dynamics. A typical approach to simulating this dynamics is to propagate the density matrix of the system via the Lindblad master equation. This approach is numerically challenging due to the size of the density matrix, which has led to the development of quantum jump methods, which unravel the density matrix into an ensemble of state vectors. These methods utilize a stochastic sampling of the quantum jump times, which becomes inefficient for weakly dissipative dynamics, in which jumps are rare events. Here, we propose the deterministic quantum jump (DQJ) method, which we show to outperform standard quantum jump methods in the weakly dissipative regime, by removing the error of stochastic sampling. We describe the methodology at the single-jump and two-jump level, reconstructing the density matrix at the corresponding level. We demonstrate the performance of the method for two examples, the dissipative transverse-field Ising model, and the dissipative Kerr oscillator. Given that quantum technologies such as quantum computing have weakly dissipative quantum dynamics as their central focus, we propose this method to be utilized in that context, for exploring and understanding quantum technology platforms.
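For context on why stochastic sampling struggles in the weakly dissipative regime, here is a minimal sketch of the standard stochastic quantum-jump unraveling (the baseline the DQJ method improves on, not the DQJ method itself) for a single decaying two-level system; all parameters are illustrative:

```python
import numpy as np

# Stochastic quantum-jump unraveling of pure decay at rate gamma: each
# trajectory stays in |e> until a random jump to |g>. Averaging
# trajectories recovers P_e(t) = exp(-gamma*t), but only up to sampling
# noise, which is relatively large when jumps are rare events.
rng = np.random.default_rng(1)
gamma, dt, T, ntraj = 1.0, 0.01, 3.0, 20000
steps = int(T / dt)

excited = np.ones(ntraj, dtype=bool)
for _ in range(steps):
    # First-order jump probability per step (the norm loss of the
    # no-jump evolution under the effective non-Hermitian Hamiltonian).
    jumps = excited & (rng.random(ntraj) < gamma * dt)
    excited[jumps] = False

p_final = excited.mean()
print(p_final)  # close to exp(-3) ~ 0.0498, up to sampling error
```

The residual scatter around exp(-gamma*T) shrinks only as 1/sqrt(ntraj); removing this sampling error deterministically is the point of the DQJ method.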

Fermi-Dirac thermal measurements: A framework for quantum hypothesis testing and semidefinite optimization

Nana Liu, Mark M. Wilde

2603.04061 • Mar 4, 2026

QC: high Sensing: medium Network: none

This paper introduces a new framework that connects quantum measurement optimization to fermionic physics, where measurement operators are interpreted as fermionic modes following Fermi-Dirac statistics. The authors develop 'Fermi-Dirac machines' as quantum machine learning models and show how this approach can solve semidefinite optimization problems on quantum computers.

Key Contributions

  • Novel interpretation of quantum measurements through fermionic physics and Fermi-Dirac statistics
  • Development of Fermi-Dirac machines as quantum machine learning models alternative to quantum Boltzmann machines
  • New paradigm for solving semidefinite optimization problems on quantum computers using thermal measurements
quantum machine learning quantum measurements fermi-dirac statistics semidefinite optimization quantum hypothesis testing
View Full Abstract

Quantum measurements are the means by which we recover messages encoded into quantum states. They are at the forefront of quantum hypothesis testing, wherein the goal is to perform an optimal measurement for arriving at a correct conclusion. Mathematically, a measurement operator is Hermitian with eigenvalues in [0,1]. By noticing that this constraint on each eigenvalue is the same as that imposed on fermions by the Pauli exclusion principle, we interpret every eigenmode of a measurement operator as an independent effective fermionic mode. Under this perspective, various objective functions in quantum hypothesis testing can be viewed as the total expected energy associated with these fermionic occupation numbers. By instead fixing a temperature and minimizing the total expected fermionic free energy, we find that optimal measurements for these modified objective functions are Fermi-Dirac thermal measurements, whose eigenvalues are specified by Fermi-Dirac distributions. In the low-temperature limit, their performance closely approximates that of optimal measurements for quantum hypothesis testing, and we show that their parameters can be learned by classical or hybrid quantum-classical optimization algorithms. This leads to a new quantum machine-learning model, termed Fermi-Dirac machines, consisting of parameterized Fermi-Dirac thermal measurements, an alternative to quantum Boltzmann machines based on thermal states. Beyond hypothesis testing, we show how general semidefinite optimization problems can be solved using this approach, leading to a novel paradigm for semidefinite optimization on quantum computers, in which the goal is to implement thermal measurements rather than prepare thermal states. Finally, we propose quantum algorithms for implementing Fermi-Dirac thermal measurements, and we also propose second-order hybrid quantum-classical optimization algorithms.
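The central object is easy to sketch: a Hermitian measurement operator whose eigenvalues follow a Fermi-Dirac distribution, built by applying f(E) = 1/(exp((E - mu)/T) + 1) to the spectrum of an "energy" operator. The toy operator and parameters below are assumptions for illustration, not the paper's construction:

```python
import numpy as np

def fermi_dirac_measurement(H, mu, temp):
    """M = (exp((H - mu)/temp) + 1)^(-1): Hermitian, eigenvalues in [0, 1],
    so M is a valid measurement (POVM effect) operator."""
    evals, evecs = np.linalg.eigh(H)
    f = 1.0 / (np.exp((evals - mu) / temp) + 1.0)
    # Rebuild the operator with the same eigenvectors, filtered eigenvalues.
    return (evecs * f) @ evecs.conj().T

# Toy 'mode energy' operator. At low temperature, M approaches the
# projector onto modes below the chemical potential mu.
H = np.diag([-1.0, 0.5, 2.0])
M = fermi_dirac_measurement(H, mu=0.0, temp=0.05)
occ = np.linalg.eigvalsh(M)  # each occupation lies in [0, 1]
```

As temp goes to zero the eigenvalues snap to 0 or 1, recovering a projective measurement; this is the low-temperature limit in which the paper says performance approaches that of optimal hypothesis-testing measurements.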

Variance-Driven Mean Temperature Reduction in Nonuniformly Heated Radiative-Conductive Systems

Juntao Lu, Zihan Zhang, Yongjian Xiong, Jie Fu

2603.03979 • Mar 4, 2026

QC: low Sensing: low Network: none

This paper studies how temperature varies in systems that lose heat through radiation, showing that when heating is uneven, the average temperature is lower than if heating were uniform. The authors derive a mathematical formula showing this temperature reduction is directly proportional to how much the temperature varies across the system.

Key Contributions

  • Derived analytical expression linking area-averaged temperature to isothermal equilibrium temperature in radiative-conductive systems
  • Established quantitative relationship showing mean temperature reduction is linearly proportional to temperature variance
thermal radiation nonlinear systems temperature variance radiative heat transfer thermal averaging
View Full Abstract

Radiative-conductive systems are intrinsically nonlinear due to the quartic temperature dependence of thermal radiation. Under fixed total heating power, convexity arguments imply that nonuniform temperature distributions radiate more efficiently and therefore exhibit a lower mean temperature than their isothermal counterparts. However, this conclusion remains qualitative, and an explicit quantitative relation between temperature heterogeneity and mean temperature reduction has been lacking. Here we derive a variance-based analytical expression linking the area-averaged temperature to the corresponding isothermal equilibrium temperature in a nonuniformly heated radiative-conductive system. By integrating the governing equation and performing a systematic second-order expansion about the ambient temperature, we show that the decrease of the mean temperature relative to the isothermal equilibrium value is linearly proportional to the temperature variance, with a proportionality coefficient set solely by the ambient temperature. This result transforms the convexity-based inequality into a quantitative statistical relation within the perturbative regime and provides a physically transparent framework for describing nonlinear radiative averaging in thermally heterogeneous systems.
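The convexity argument can be checked numerically in a deliberately simplified limit (purely radiative balance toward zero-temperature surroundings, conduction neglected), which is a toy version of the paper's model: local balance gives T = (q/sigma)^(1/4), and since the quarter power is concave in q, spreading the same total power unevenly lowers the mean temperature.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def steady_temperature(q):
    """Local purely radiative balance SIGMA * T^4 = q (conduction and
    ambient temperature neglected for this toy check)."""
    return (np.asarray(q) / SIGMA) ** 0.25

q_total = 4000.0                                     # W m^-2, fixed total
q_uniform = np.full(4, q_total / 4)                  # even heating
q_skewed = np.array([100.0, 400.0, 1000.0, 2500.0])  # same total, uneven

T_uniform = steady_temperature(q_uniform).mean()
T_skewed = steady_temperature(q_skewed).mean()
# Concavity of q -> q^(1/4): the unevenly heated system has the lower mean.
print(T_skewed < T_uniform)
```

The paper's contribution is to make this inequality quantitative, tying the size of the reduction linearly to the temperature variance in the perturbative regime about ambient.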

Imaginary-time evolution of interacting spin systems in the truncated Wigner approximation

Tom Schlegel, Dennis Breu, Michael Fleischhauer

2603.03950 • Mar 4, 2026

QC: medium Sensing: medium Network: none

This paper develops a semiclassical method called imaginary-time truncated Wigner approximation (iTWA) to efficiently calculate ground states and thermal states of large interacting spin systems by mapping quantum evolution to stochastic differential equations.

Key Contributions

  • Extension of truncated Wigner approximation to imaginary time evolution for finding ground states
  • Demonstration of method on NP-hard frustrated antiferromagnetic Ising systems and transverse-field Ising models
spin systems truncated Wigner approximation imaginary time evolution Ising model quantum phase transitions
View Full Abstract

We present a semiclassical phase-space method to calculate thermal and ground states of large interacting spin systems. To this end, we extend the recently developed truncated Wigner approximation for spins (TWA) to imaginary time, termed iTWA. The evolution of the canonical density matrix in imaginary time is mapped to a partial differential equation for its Wigner function. Truncation at the Fokker-Planck level leads to a set of stochastic differential equations, which can be efficiently simulated. We show that the iTWA can provide very good approximations to the ground state of a random and in general frustrated anti-ferromagnetic Ising Hamiltonian on a 3-regular graph, for which finding the exact ground state, and approximations to it beyond a certain accuracy, is NP-hard. Furthermore, in order to assess the ability of the method to properly account for leading-order quantum effects, we analyze the ground-state quantum phase transition of the nearest-neighbor, transverse-field Ising model in one and two spatial dimensions, finding very good agreement with the exact behaviour. The critical behavior obtained in iTWA follows the quantum-classical correspondence.
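The principle the method rests on, though not the semiclassical iTWA machinery itself, can be shown exactly for a tiny system: imaginary-time evolution e^{-tau*H} damps excited components, so any state with nonzero ground-state overlap converges to the ground state as tau grows. A minimal sketch for a two-spin transverse-field Ising model (couplings chosen arbitrarily for illustration):

```python
import numpy as np

# Two-spin transverse-field Ising model: H = -J Z1 Z2 - h (X1 + X2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)
J, h = 1.0, 0.5
H = -J * np.kron(Z, Z) - h * (np.kron(X, I2) + np.kron(I2, X))

# Apply e^{-tau*H} via the eigendecomposition: excited components decay
# as exp(-tau*(E_n - E_0)) relative to the ground-state component.
evals, evecs = np.linalg.eigh(H)
tau = 20.0
psi0 = np.ones(4) / 2.0                       # uniform initial state
psi = evecs @ (np.exp(-tau * evals) * (evecs.T @ psi0))
psi /= np.linalg.norm(psi)
energy = psi @ H @ psi                        # ~ ground-state energy evals[0]
```

The iTWA replaces this exact (exponentially large) propagation by stochastic differential equations for phase-space variables, which is what makes large systems tractable.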

Resource State Distillation via Stabilizer Channels

Christopher Popp, Tobias C. Sutter, Beatrix C. Hiesmayr

2603.03925 • Mar 4, 2026

QC: medium Sensing: none Network: high

This paper develops a unified mathematical framework for improving degraded quantum states (like entangled states used in quantum communication) by using stabilizer-based distillation protocols that can recover high-quality resource states from multiple noisy copies. The work introduces several optimization protocols and demonstrates how to reduce computational complexity when designing quantum channels for different distillation tasks.

Key Contributions

  • Unified framework for stabilizer-based resource distillation with closed-form expressions for quantum channel outputs
  • Introduction of optimization protocols (gF-IMAX, CI-IMAX, PI-IMAX) for different distillation objectives including fidelity, coherent information, and private information
  • Identification of key invariances in resource measures that significantly reduce numerical complexity of channel optimization
quantum distillation stabilizer codes entanglement purification quantum cryptography quantum channels
View Full Abstract

Quantum technologies rely on high-quality resource states, such as maximally entangled or private states, which are indispensable for quantum communication and cryptography. In practice, however, these states are inevitably degraded by noise. Distillation protocols aim to recover high-resource states from multiple imperfect copies, and while stabilizer-based methods have demonstrated high performance in entanglement purification, they have yet to be established for broader tasks such as secret-key distillation. This work introduces a unified framework for stabilizer-based resource distillation in systems of prime local dimension. By formulating stabilizer routines as quantum channels and deriving closed-form expressions for their output, we enable the application of stabilizer operations to general input states and diverse distillation objectives. We identify key invariances in resource measures, such as coherent and private information, and demonstrate how they can be leveraged to significantly reduce the numerical complexity of channel optimization. To illustrate the framework's versatility, we introduce several protocols: gF-IMAX for general fidelity optimization, and (S)CI-IMAX and (S)PI-IMAX for optimizing (smooth) coherent and private information in both asymptotic and one-shot regimes. Our numerical results confirm that these protocols effectively tailor stabilizer channels to specific operational tasks, establishing them as a robust and flexible tool for quantum resource distillation.

Collective purification of interacting quantum networks beyond symmetry constraints

Saikat Sur, Pritam Chattopadhyay, Arnab Chakrabarti, Nikolaos E. Palaiodimopoulos, Özgur E. Müstecaplıoğlu, Amit Finkler, Durga Bhaktavatsala Rao ...

2603.03917 • Mar 4, 2026

QC: high Sensing: medium Network: medium

This paper presents a universal cooling strategy for resetting mixed states in interacting quantum spin networks to pure computational-zero states. The method uses an ancilla spin that couples to the network and dumps entropy to an ultracold bath, employing alternating non-commuting interactions to break symmetry constraints that normally prevent effective cooling.

Key Contributions

  • Universal cooling strategy for multi-spin interacting networks using collective ancilla coupling
  • Graph-based analysis method to avoid complex dynamics calculations
  • Demonstration that alternating non-commuting Hamiltonians can break symmetry constraints for purification
quantum purification spin networks ancilla cooling symmetry breaking quantum state reset
View Full Abstract

Following any quantum information processing protocol, it is essential to reset a mixed state of a many-body interacting spin network to the computational-zero pure state. This task is challenging, both theoretically and experimentally, because of the quantum correlations. There is currently no effective cooling strategy for both high and low temperatures in such networks. Here we put forth a universal cooling strategy for multi-spin interacting networks. The strategy is based on the collective coupling of the system to an ancilla spin that intermittently dumps part of its entropy into an ultracold bath. Yet this strategy should overcome the symmetry-imposed correlations that impede the cooling. To avoid the prohibitive complexity of computing the dynamics, we resort to graph analysis of the network. We show that a unique choice of alternating, non-commuting system-ancilla interaction Hamiltonians exists that breaks the symmetry constraints and allows the network to approach the desired pure state. We illustrate this universal purification strategy in diverse experimental settings.

Numerical evaluation of Casimir forces using the discontinuous Galerkin time-domain method

Carles Martí Farràs, Bettina Beverungen, Philip Trøst Kristensen, Francesco Intravaia, Kurt Busch

2603.03888 • Mar 4, 2026

QC: low Sensing: medium Network: none

This paper develops a computational method to calculate Casimir forces (quantum electromagnetic forces between objects in vacuum) using time-domain simulations. The method can handle complex geometries and materials at finite temperature, which is important for designing micro and nanoscale devices.

Key Contributions

  • Development of time-domain computational method for Casimir forces using discontinuous Galerkin finite element approach
  • Validation against known solutions and demonstration on complex cylindrical geometries where analytical solutions don't exist
Casimir forces electromagnetic Green's tensor Maxwell stress tensor discontinuous Galerkin nanoscale physics
View Full Abstract

We present a time-domain scheme for computing Casimir forces within the Maxwell stress tensor formalism, together with a specific realization using the finite-element-based discontinuous Galerkin time-domain method. The approach enables accurate evaluation of Casimir-Lifshitz interactions for a wide range of geometries and material properties at finite temperature. At the core of the method, the electromagnetic Green's tensor is expressed as the system's response to dipolar excitations, thereby recasting the Maxwell stress tensor into a set of classical scattering problems driven by electric and magnetic dipoles. We validate the approach against reference calculations of the Casimir interaction between parallel half-spaces at both zero and nonzero temperature. We further demonstrate its applicability to finite, cylindrically symmetric geometries for which closed-form solutions are unavailable, obtaining accurate agreement with asymptotic predictions based on physical considerations. These findings illustrate the method's potential for studying Casimir interactions in realistic micro- and nanoscale structures, relevant to nanodevice design and experimental settings.

Kinematic budget of quantum correlations

Maaz Khan, Subhadip Mitra

2603.03887 • Mar 4, 2026

QC: medium Sensing: low Network: medium

This paper develops a geometric framework that maps different types of quantum correlations (entanglement, steering, Bell nonlocality) onto compact 2D manifolds using second-moment statistics, revealing universal structural relationships between these quantum resources. The approach bypasses exponential scaling issues in quantum state analysis by focusing on global and marginal purities rather than full state tomography.

Key Contributions

  • Unified geometric framework mapping diverse quantum correlations onto 2D manifolds
  • Scalable approach to quantum correlation analysis that avoids exponential complexity of full state tomography
  • Universal kinematic limits connecting state purity to correlation structure and entanglement properties
quantum correlations entanglement geometric framework second moments state purity
View Full Abstract

The diversity of quantum correlations -- discord, entanglement, steering, and Bell nonlocality -- disappears at the observable second-moment kinematic level. By treating state purity as a finite resource, we introduce a local-unitary-invariant budget split of symmetrised second moments into local and nonlocal sectors that maps quantum systems onto compact, two-dimensional, hole-free manifolds. The topology of these manifolds is governed by state purity and time-reversal symmetry. This dimensional reduction reveals a deep structural link: exceeding classical capacity limits forces the activation of time-asymmetric generators, guaranteeing non-positive partial transpose entanglement. For two qubits, the geometry is analytically solvable. A single boundary elegantly isolates classical correlations, while nested regions physically dictate entanglement, steering, Bell nonlocality, and bounds on non-stabiliser magic. Beyond two qubits, dimensional capacity bottlenecks enforce these universal kinematic limits on correlation structures. Because this macroscopic representation is completely determined by global and marginal purities, it bypasses the exponential scaling of full-state tomography. By coarse-graining over gauge-like first moments, the budget geometry acts as a thermodynamic phase diagram, exposing both the static hierarchy of quantum resources and their dynamic redistribution under decoherence.
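The global and marginal purities that determine this macroscopic representation are cheap to compute, which is the scalability claim. A minimal sketch for the two-qubit Bell state, where the global state is pure but each marginal is maximally mixed (the signature of maximal entanglement):

```python
import numpy as np

def purity(rho):
    """Tr(rho^2): equals 1 for pure states, 1/d for the maximally mixed state."""
    return float(np.real(np.trace(rho @ rho)))

def reduced_A(rho):
    """Partial trace over the second qubit of a 4x4 two-qubit density matrix.
    Reshape to (a, b, a', b') and sum over the b = b' indices."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Bell state (|00> + |11>)/sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(bell, bell)

p_global = purity(rho)                 # 1.0: globally pure
p_marginal = purity(reduced_A(rho))    # 0.5: marginal is I/2
```

Unlike full-state tomography, these two scalars need only second moments of observables, which is what lets the framework bypass exponential scaling.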

Many-Body Structural Effects in Periodically Driven Quantum Batteries

Rohit Kumar Shukla, Cheng Shang

2603.03883 • Mar 4, 2026

QC: low Sensing: none Network: none

This paper studies quantum batteries (quantum energy storage devices) made from collections of spin-1/2 particles that are charged using periodic driving fields. The researchers identify key structural features like long-range interactions and non-integrability that enable these many-body quantum systems to store energy more efficiently and charge faster.

Key Contributions

  • Demonstrated that long-range interacting quantum batteries can achieve superextensive energy storage approaching fundamental limits
  • Showed that non-integrable systems with ergodic Floquet dynamics enable more efficient charging by promoting population of the full many-body energy spectrum
quantum batteries many-body systems Floquet dynamics periodic driving collective spin systems
View Full Abstract

While quantum batteries have been widely studied under static driving, their performance under periodic driving in many-body systems remains far less understood. In this Letter, we uncover structural principles showing that many-body structure fundamentally determines the charging performance of a collective spin-1/2 quantum battery driven by a periodic Ising charger. In particular, interaction range, boundary conditions, system size, and integrability -- capturing graph connectivity, geometry, even-odd effects, and many-body dynamics -- emerge as critical factors for enhancing stored energy and charging power. First, we analyze how connectivity scaling and boundary geometry shape battery performance. We show that long-range interacting chargers exhibit superextensive energy storage, approaching the fundamental upper bound over broad ranges of driving periods and system sizes. In contrast, nearest-neighbor chargers achieve optimal charging only under finely tuned commensurability conditions. Moreover, we find that open boundary conditions (OBC) enhance robustness compared to periodic boundary conditions (PBC). Second, we examine the role of integrability under periodic driving. We demonstrate that nonintegrability enhances energy storage by suppressing conserved quantities and promoting ergodic Floquet dynamics, thereby enabling efficient population of the many-body spectrum. Through systematic structural optimization across multiple parameters, we identify long-range nonintegrability as a central resource for fast, scalable, and robust charging of collective quantum batteries. Our results clarify how structural features of many-body systems, together with periodic driving, can be harnessed to achieve efficient collective charging dynamics.

Fractional topology in open systems

Xi Wu, Xiang Zhang, Fuxiang Li

2603.03854 • Mar 4, 2026

QC: medium Sensing: low Network: none

This paper studies how topological properties change in quantum systems that interact with their environment, specifically showing how integer topological invariants can become fractional in open quantum systems. The researchers demonstrate this using a chain of atoms with gain and loss, and propose experimental observation methods using photonic lattices.

Key Contributions

  • Discovery of fractional topological invariants in open quantum systems governed by Lindblad master equations
  • Demonstration that fractional topology can emerge through parameter tuning or dynamical evolution
  • Proposal for experimental detection using Bloch state tomography in photonic lattices
fractional topology open quantum systems Su-Schrieffer-Heeger chain Lindblad master equation topological invariants
View Full Abstract

We investigate the emergence of fractional topological invariants in a periodic Su-Schrieffer-Heeger chain subject to gain and loss, governed by the Gorini-Kossakowski-Sudarshan-Lindblad master equations. After preparing the symmetry condition for integer topological invariants, we investigate their transition to fractional ones in steady states, which can happen either by tuning parameters in jump operators or as a dynamical transition during time evolution. Moreover, we show that these fractional topological invariants no longer possess quantized topology in the conventional sense. However, by extending the Brillouin zone to cover multiple cycles, the total winding regains integer quantization. Finally, we show how such effects can be observed in long-range hopping photonic lattices with fractional fillings, via Bloch state tomography. Our results open a new pathway to understand fractional topology in open quantum systems.

Towards Practical Quantum Federated Learning: Enhancing Efficiency and Noise Tolerance

Suzukaze Kamei, Hideaki Kawaguchi, Takahiko Satoh

2603.03853 • Mar 4, 2026

QC: medium Sensing: none Network: high

This paper develops methods to make quantum federated learning more practical by reducing communication overhead and improving noise tolerance. The researchers propose hybrid architectures and parameter reduction techniques to minimize quantum transmissions while maintaining model performance, and analyze how quantum error correction helps in noisy environments.

Key Contributions

  • Hybrid QFL architecture that reduces quantum transmissions from 3NMP to {3t + 2(T - t)}NMP per round
  • Analysis of communication-convergence-noise trade-offs, with explicit cost formulas and a comparison of noise resilience between centralized and decentralized aggregation
quantum federated learning quantum communication parameter aggregation depolarizing noise quantum error correction
View Full Abstract

Federated Learning (FL) enables privacy-preserving distributed model training, yet remains vulnerable to gradient inversion and model leakage attacks. Quantum communication has been proposed to provide information-theoretic security for parameter aggregation. However, practical deployment is severely constrained by communication overhead and quantum channel noise. In this work, we present a systematic quantitative study of communication--convergence--noise trade-offs in Quantum Federated Learning (QFL). We introduce two complementary strategies to reduce quantum transmissions: (1) structured parameter reduction based on light-cone feature selection in parametrized quantum circuits, and (2) a Hybrid QFL architecture that dynamically switches from centralized to decentralized aggregation during training. We derive explicit communication cost formulas and show that Hybrid QFL reduces quantum transmissions from $3NMP$ per round to $\{3t + 2(T - t)\}NMP$, achieving substantial savings while preserving near-centralized convergence. We further analyze robustness under depolarizing noise and show that decentralized aggregation is more noise-resilient because it transmits fewer qubits per round. Finally, we evaluate the effectiveness of Steane code-based quantum error correction under high-noise regimes. Our results provide an integrated design framework for communication-efficient and noise-aware QFL, clarifying practical trade-offs necessary for scalable quantum-secure distributed learning.
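The communication savings claimed in the abstract follow directly from the two cost formulas it quotes: $3NMP$ transmissions per round for fully centralized aggregation versus $\{3t + 2(T - t)\}NMP$ in total for the hybrid scheme that switches to decentralized aggregation after $t$ of $T$ rounds. A minimal sketch of that arithmetic, with hypothetical values for the number of clients N, partitions M, parameters P, and rounds T (the paper does not specify these):

```python
def centralized_cost(N, M, P, T):
    """Baseline: 3*N*M*P quantum transmissions in each of T rounds."""
    return 3 * N * M * P * T

def hybrid_cost(N, M, P, T, t):
    """Hybrid QFL: centralized aggregation for the first t rounds,
    decentralized for the remaining T - t, per the abstract's formula."""
    return (3 * t + 2 * (T - t)) * N * M * P

# Hypothetical sizes: 10 clients, 4 circuit blocks, 50 parameters, 100 rounds,
# switching to decentralized aggregation after 20 rounds.
N, M, P, T = 10, 4, 50, 100
base = centralized_cost(N, M, P, T)
hyb = hybrid_cost(N, M, P, T, t=20)
savings = 1 - hyb / base
```

With t = T the hybrid formula reduces to the centralized one, so the saving comes entirely from the decentralized tail of the schedule, where each round costs 2NMP instead of 3NMP.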

Local vs global dynamics in a dissipative qubit-impurity system

Giuseppe Emanuele Chiatto, Giuliano Chiriacó, Elisabetta Paladino, Giuseppe Antonio Falci

2603.03834 • Mar 4, 2026

QC: medium Sensing: low Network: low

This paper compares two different mathematical approaches (local vs global) for describing how a quantum bit (qubit) behaves when coupled to a dissipative impurity that causes decoherence. The authors find that the local approach better captures the qubit's dynamics in experimentally relevant conditions.

Key Contributions

  • Demonstrates that local derivation schemes better capture qubit coherence dynamics than global approaches
  • Clarifies the domains of validity for different GKSL master equation approximation schemes in dissipative qubit systems
qubit decoherence master equation Born-Markov approximation dissipative systems quantum dynamics
View Full Abstract

We analyse the dynamics of a qubit coupled to a dissipative impurity by comparing local and global derivation schemes of a Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation within the Born-Markov and full secular (FS) approximations. We show that the local approach correctly captures a crossover in the dynamics of the qubit coherence, while the FS approximation restricts the validity of the global approach to regimes with well-separated energy scales. Our results clarify the domains of validity of the two approaches and show that the local scheme provides a better GKSL description of the qubit dynamics in the experimentally relevant parameter regime.

O-Sensing: Operator Sensing for Interaction Geometry and Symmetries

Meng Ye-Ming, Shi Zhe-Yu

2603.03826 • Mar 4, 2026

QC: medium Sensing: high Network: none

This paper introduces O-Sensing, a method to reverse-engineer quantum many-body systems by determining their Hamiltonian, interaction patterns, and symmetries from just a few low-energy quantum states. The approach uses sparsity optimization to find the simplest explanation for the observed states, then selects the true Hamiltonian by maximizing spectral entropy.

Key Contributions

  • Development of O-Sensing protocol for extracting Hamiltonian and symmetries from eigenstates using parsimony-driven optimization
  • Demonstration of learnability phase diagram showing when interaction geometry can be successfully reconstructed versus when 'confusion' regimes emerge
quantum many-body systems Hamiltonian reconstruction operator sensing sparsity optimization spectral entropy
View Full Abstract

We ask whether the Hamiltonian, interaction geometry, and symmetries of a quantum many-body system can be inferred from a few low-lying eigenstates without knowing which sites interact with each other. Directly solving the eigenvalue equations imposes constraints that yield a highly degenerate subspace of candidate operators, where the local Hamiltonian is hidden among an extensive family of conserved quantities, obscuring the interaction geometry. Here we introduce O-Sensing, a protocol designed to extract the Hamiltonian and symmetries directly from these states. Specifically, O-Sensing employs parsimony-driven optimization to extract a maximally sparse operator basis from the degenerate subspace. The Hamiltonian is then selected from this basis by maximizing spectral entropy (effectively minimizing degeneracy) within the sampled subspace. We validate O-Sensing on Heisenberg models on connected Erdős–Rényi graphs, where it reconstructs the interaction geometry and uncovers additional long-range conserved operators. We establish a learnability phase diagram across graph densities, featuring a pronounced "confusion" regime where parsimony favors a dual description on the complement graph. These results show that sparsity optimization can reconstruct interaction geometry as an emergent output, enabling simultaneous recovery of the Hamiltonian and its symmetries from low-energy eigenstates.

Approximate Amplitude Encoding with the Adaptive Interpolating Quantum Transform

Gekko Budiutama, Shunsuke Daimon, Xinchi Huang, Hirofumi Nishi, Yu-ichiro Matsushita

2603.03803 • Mar 4, 2026

QC: high Sensing: none Network: none

This paper introduces the Adaptive Interpolating Quantum Transform (AIQT) as an improved method for encoding classical data into quantum states. The AIQT learns data-specific patterns to encode information more efficiently than standard Fourier-based methods, achieving 40-50% lower reconstruction errors on financial and image data while maintaining similar computational costs.

Key Contributions

  • Development of the Adaptive Interpolating Quantum Transform (AIQT) that learns data-adapted bases for more efficient amplitude encoding
  • Demonstration of 40-50% reduction in reconstruction error compared to Fourier-based methods at equivalent sparsity levels
  • Creation of a training method that doesn't require quantum hardware sampling, removing a major bottleneck in data-driven quantum encoding
amplitude encoding quantum data encoding quantum Fourier transform adaptive basis sparse encoding
View Full Abstract

Amplitude encoding of real-world data on quantum computers is often the workflow bottleneck: direct amplitude encoding scales poorly with input size and can offset any speedups in subsequent processing. Fourier-based sparse amplitude encoding lowers cost by retaining only a small subset of dominant coefficients, but its fixed, non-adaptive basis leads to significant information loss. In this work, we replace the Fourier transform with the adaptive interpolating quantum transform (AIQT) in the sparse amplitude encoding workflow. The AIQT learns a data-adapted basis that concentrates information into a small number of coefficients. Consequently, at matched sparsity, the AIQT retains more information and achieves lower reconstruction error compared to the Fourier baseline. On financial time-series data, the AIQT reduces reconstruction error by 40% relative to the Fourier baseline, and on image datasets the reduction is up to 50% at the same sparsity level, with nearly identical encoding gate cost. Crucially, the approach preserves the efficiency of Fourier-based methods: the AIQT is built on the structure of the quantum Fourier transform circuit. Its gate count scales quadratically with the number of qubits, while classical evaluation can be carried out in quasilinear time. In addition, the AIQT is trained without labels and does not require sampling from quantum hardware or a simulator, removing a major bottleneck in data-driven amplitude-encoding methods.
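The Fourier baseline that the AIQT improves upon is easy to reproduce: transform the data, retain only the k largest-magnitude coefficients, invert, and measure the relative reconstruction error. A minimal sketch of that baseline (not of the AIQT itself); the test signal, qubit count, and sparsity level k are illustrative choices, not the paper's datasets:

```python
import numpy as np

def sparse_fourier_reconstruct(x, k):
    """Keep the k largest-magnitude Fourier coefficients and invert."""
    X = np.fft.fft(x)
    keep = np.argsort(np.abs(X))[-k:]   # indices of dominant coefficients
    X_sparse = np.zeros_like(X)
    X_sparse[keep] = X[keep]
    return np.fft.ifft(X_sparse).real

rng = np.random.default_rng(0)
n = 256                                  # amplitudes for an 8-qubit register
x = np.sin(2 * np.pi * 3 * np.arange(n) / n) + 0.1 * rng.standard_normal(n)
x_hat = sparse_fourier_reconstruct(x, k=8)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)  # relative error
```

The AIQT's claim is that a learned, data-adapted basis concentrates more of the signal into the same k coefficients than this fixed Fourier basis does, lowering `err` at matched sparsity.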

Variational Gibbs State Preparation on Trapped-Ion Devices

Reece Robertson, Mirko Consiglio, Josey Stevens, Emery Doucet, Tony J. G. Apollaro, Sebastian Deffner

2603.03801 • Mar 4, 2026

QC: high Sensing: low Network: none

This paper demonstrates a variational quantum algorithm to prepare thermal equilibrium states (Gibbs states) of an Ising model on IonQ quantum computers. The researchers found that hardware noise causes 'digital heating,' making the prepared quantum states appear hotter than intended, with fidelity decreasing as system size and inverse temperature increase.

Key Contributions

  • Experimental implementation of variational Gibbs state preparation on trapped-ion quantum hardware
  • Discovery that quantum hardware noise leads to digital heating effect in thermal state preparation
variational quantum algorithms Gibbs state preparation trapped-ion quantum computing transverse-field Ising model quantum state tomography
View Full Abstract

We implement a variational quantum algorithm for Gibbs state preparation of a transverse-field Ising model on IonQ's quantum computers. To this end, we train the variational parameters via classical simulation and perform state tomography on the quantum devices to evaluate the fidelity of the prepared Gibbs state. As a main result, we find that fidelity decreases (non-monotonically) as a function of the inverse temperature $\beta$ of the system. Fidelity also decreases as a function of the size of the system. Interestingly, we find that a Gibbs state prepared for a specified $\beta$ is a better representative of a Gibbs state prepared for a $\textit{lower}$ $\beta$; or in other words, thermal fluctuations in the quantum hardware lead to digital heating, that is, an increase in the temperature of the prepared Gibbs state above what was intended.
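The target of the experiment, the Gibbs state $\rho(\beta) = e^{-\beta H}/Z$ of a transverse-field Ising chain, and the fidelity used to score a prepared state can both be computed exactly for a few qubits. A minimal classical sketch; the chain length, coupling J, and field h are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

def kron_chain(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def tfim(n, J=1.0, h=0.5):
    """Open transverse-field Ising chain: H = -J sum Z_i Z_{i+1} - h sum X_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * kron_chain([Z if j in (i, i + 1) else I for j in range(n)])
    for i in range(n):
        H -= h * kron_chain([X if j == i else I for j in range(n)])
    return H

def gibbs(H, beta):
    rho = expm(-beta * H)
    return rho / np.trace(rho)

def fidelity(rho, sigma):
    """Uhlmann fidelity (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

H = tfim(3)
rho_cold, rho_hot = gibbs(H, beta=2.0), gibbs(H, beta=1.0)
```

Digital heating, in these terms, means a noisy preparation targeting $\beta$ scores a higher fidelity against `gibbs(H, beta_prime)` for some $\beta' < \beta$ than against the intended state.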

Enhancing Variational Quantum Eigensolvers for SU(2) Lattice Gauge Theory via Systematic State Preparation

Klaus Liegener, Dominik Mattern, Alexander Korobov, Lisa Krüger, Manuel Geiger, Malay Singh, Longxiang Huang, Christian Schneider, Federico Roy, Stef...

2603.03799 • Mar 4, 2026

QC: high Sensing: none Network: none

This paper develops an improved variational quantum eigensolver algorithm for simulating non-Abelian gauge theories like SU(2) Yang-Mills theory on quantum computers. The researchers create a systematic method for preparing gauge-invariant quantum states that avoids barren plateau problems and test their approach on a minimal toy model while investigating the effects of quantum device noise.

Key Contributions

  • Development of systematic state preparation ansatz for gauge-invariant excitations in variational quantum eigensolvers
  • Demonstration of scaling advantages using spin-network basis for non-Abelian gauge theory simulations
  • Investigation of noise impact on SU(2) lattice gauge theory computations for near-term quantum devices
variational quantum eigensolver lattice gauge theory SU(2) barren plateau gauge invariant
View Full Abstract

Computing the vacuum and energy spectrum in non-Abelian, interacting lattice gauge theories remains an open challenge, in part because approximating the continuum limit requires large lattices and huge Hilbert spaces. To address this difficulty with near-term quantum computing devices, we adapt the variational quantum eigensolver to non-Abelian gauge theories. We outline scaling advantages when using a spin-network basis to simulate the gauge-invariant Hilbert space and develop a systematic state preparation ansatz that creates gauge-invariant excitations while alleviating the barren plateau problem. We illustrate our method in the context of SU(2) Yang-Mills theory by testing it on a minimal toy model consisting of a single vertex in 3+1 dimensions. In this toy model, simulations allow us to investigate the impact of noise expected in current quantum devices.

Quantum anomaly for benchmarking quantum computing

Tomoya Hayata, Arata Yamamoto

2603.03697 • Mar 4, 2026

QC: high Sensing: none Network: none

This paper proposes using the axial anomaly from gauge theories as a benchmark test for quantum computers, demonstrating that current trapped-ion quantum computers can correctly simulate this fundamental physics phenomenon. The researchers successfully reproduced the theoretical anomaly coefficient on the 'Reimei' quantum computer, providing a new verification method for quantum simulations.

Key Contributions

  • Proposes axial anomaly as a systematic benchmark for verifying quantum computation correctness
  • Successfully demonstrates quantum simulation of lattice gauge theories on trapped-ion hardware
  • Shows that fundamental physics phenomena can be reproduced on current quantum computers without error mitigation
quantum simulation lattice gauge theory trapped-ion quantum computer benchmarking axial anomaly
View Full Abstract

Given the rapid advances in quantum computing hardware, establishing systematic strategies for verifying the correctness of quantum computations has become increasingly important. Exploiting the fact that the axial anomaly in gauge theories is exact to all orders in perturbation theory, we propose the axial anomaly as a nontrivial benchmark for quantum simulations of lattice gauge theories. We simulate anomalous axial-charge production in ${\mathbb Z}_N$ lattice gauge theories on the trapped-ion quantum computer ``Reimei''. After taking the U(1), infinitesimal time, and infinite volume limits, we successfully reproduce the anomaly coefficient within statistical uncertainties, even without error mitigation. Our results demonstrate that the axial anomaly can be simulated on current quantum computers and serves as a verification test of quantum computations.

Variational Quantum Transduction

Pengcheng Liao, Haowei Shi, Quntao Zhuang

2603.03642 • Mar 4, 2026

QC: medium Sensing: low Network: high

This paper introduces a variational quantum transduction framework that uses optimization techniques from quantum computing to improve quantum signal transfer between different frequency domains. The approach systematically optimizes protocols for quantum transducers, which are essential for connecting quantum devices operating at different frequencies.

Key Contributions

  • Introduction of variational quantum transduction framework using variational quantum circuits
  • Demonstration of protocols that surpass existing GKP-based and entanglement-assisted approaches
  • Analysis showing Gaussian adaptive transduction is already near-optimal
quantum transduction variational quantum circuits quantum interconnect frequency conversion quantum information
View Full Abstract

Quantum transducers are critical for quantum interconnect, enabling coherent signal transfer across disparate frequency domains. Beyond material and device advances, protocol design has become a powerful means to improve transduction. We introduce a variational quantum transduction (VQT) framework that employs variational tools from near-term quantum computing to systematically optimize protocol performance. As a variational quantum circuit framework, VQT is not plagued by known training issues such as barren plateau, because a small-scale problem is sufficient for substantial advantage and training only needs to be done once to configure a VQT system. Maximizing the quantum information rate within this framework yields protocols that surpass all known schemes in their respective classes. For non-adaptive protocols, VQT exceeds the performance envelopes of Gottesman-Kitaev-Preskill (GKP)-based and entanglement-assisted approaches. In the adaptive setting, VQT provides only a marginal improvement over Gaussian feedforward strategies, indicating that Gaussian adaptive transduction is already close to optimal. With increasingly universal quantum control, VQT provides a systematic path toward optimal quantum transduction.

Sequence and Image Transformations with Monarq: Quantum Implementations for NISQ Devices

Jan Balewski, Roel Van Beeumen, E. Wes Bethel, Talita Perciano

2603.03582 • Mar 3, 2026

QC: high Sensing: low Network: none

This paper introduces Monarq, a quantum framework that combines QCrank encoding with the EHands protocol to perform data processing tasks like image filtering and signal processing on current noisy quantum computers. The work demonstrates how to implement basic operations like convolution and Fourier transforms on NISQ devices.

Key Contributions

  • Development of Monarq unified quantum data processing framework
  • Demonstration of quantum implementations for signal and image processing on NISQ hardware
  • Integration of QCrank encoding with EHands protocol for polynomial transformations
NISQ quantum algorithms data processing QCrank encoding EHands protocol
View Full Abstract

We introduce Monarq, a unified quantum data processing framework that combines QCrank encoding with the EHands protocol for polynomial transformations, and demonstrate its implementation on noisy intermediate-scale quantum (NISQ) hardware. This framework provides fundamental quantum building blocks for signal and image processing tasks, including convolution, discrete-time Fourier transform (DFT), squared gradient computation, and edge detection, serving as a reference for a broad class of data processing applications on near-term quantum devices.

Frequency-Time Multiplexing for Near-Deterministic Generation of n-Photon Frequency-Bin States

Alex Fischer, Nathan T. Arnold, Colin P. Lualdi, Kelsey Ortiz, Michael Gehl, Paul Davids, Kai Shinbrough, Nils T. Otterstrom

2603.03576 • Mar 3, 2026

QC: medium Sensing: low Network: high

This paper presents a method to generate multiple single photons with different frequencies in the same spatial location using optical quantum memories and fiber Bragg gratings. The technique achieves near-deterministic production of n-photon states by temporally multiplexing heralded single photons, potentially generating 8-photon states at 1 kHz rates.

Key Contributions

  • Novel frequency-time multiplexing scheme using optical quantum memories and fiber Bragg gratings for n-photon state generation
  • Demonstration of feasible 8-photon state production at 1 kHz rates using commercially available hardware
photonic quantum information frequency-bin encoding optical quantum memory multiphoton states time multiplexing
View Full Abstract

One of the primary challenges of photonic quantum information processing is the on-demand preparation of multiple single-photon-level quantum states from probabilistic photon pair sources. Motivated by recent developments in frequency-bin-encoded photonic quantum information processing, here we consider active time multiplexing to generate n-photon states, where n single photons with n distinct frequencies occupy the same spatiotemporal mode. We devise an approach that uses optical quantum memories to manipulate the temporal mode of heralded single photons and an array of fiber Bragg grating reflectors to jointly manipulate the frequency and temporal modes of the photons, overlapping n photons in n separate frequency bins into a single spatiotemporal mode. We calculate multiphoton state generation rates that, accounting for loss, are realistically achievable with commercially available hardware. Using only a single free-space switchable delay loop for an optical quantum memory, this scheme could feasibly produce 8-photon states at an average rate of 1 kHz.
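The benefit of time multiplexing can be illustrated with the textbook estimate: if a heralded source succeeds with probability p per attempt and a memory buys m attempts per output cycle, one frequency bin is filled with probability 1 - (1 - p)^m, and all n bins coincide with probability (1 - (1 - p)^m)^n. This lossless sketch is an assumption for illustration, not the paper's loss-aware rate calculation, and the numbers below are made up:

```python
def n_photon_rate(p, m, n, clock_hz):
    """Coincidence rate for n frequency bins, each fed by a heralded
    source with success probability p and m multiplexed attempts."""
    p_bin = 1 - (1 - p) ** m        # one bin is filled within m tries
    return clock_hz * p_bin ** n    # all n bins filled simultaneously

# Without multiplexing (m=1) an 8-photon coincidence is vanishingly rare;
# with m=100 attempts per cycle it approaches the clock rate.
low = n_photon_rate(p=0.05, m=1, n=8, clock_hz=1e6)
high = n_photon_rate(p=0.05, m=100, n=8, clock_hz=1e6)
```

The exponential gap between `low` and `high` is why active multiplexing turns probabilistic pair sources into a near-deterministic n-photon supply.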

Star-exponential for Fermi systems and the Feynman-Kac formula

J. Berra-Montiel, H. García-Compeán, A. Kafuri, A. Molgado

2603.03558 • Mar 3, 2026

QC: low Sensing: low Network: none

This paper develops mathematical tools for studying fermionic quantum systems by extending the star-exponential formalism from bosonic to fermionic systems using Grassmann variables and coherent states. The authors derive a fermionic version of the Feynman-Kac formula and demonstrate their approach on harmonic oscillator examples.

Key Contributions

  • Extension of star-exponential formalism to fermionic systems using Grassmann variables
  • Derivation of fermionic Feynman-Kac formula for ground state energy calculations in phase space
fermionic systems deformation quantization Grassmann variables coherent states Feynman-Kac formula
View Full Abstract

Inspired by the formalism that relates the star-exponential with the quantum propagator for bosonic systems, in this work we introduce the analogous extension for the fermionic case. In particular, we analyse the problem of calculating the star-exponential (i.e., the symbol of the evolution operator) for Fermi systems within the deformation quantization program. Grassmann variables and coherent states are considered in order to obtain a closed-form expression for the fermionic star-exponential in terms of its associated propagator. As a primary application, a fermionic version of the Feynman-Kac formula is derived within this formalism, thus allowing a straightforward calculation of the ground state energy in phase space. Finally, the method is validated by successfully applying it to the simple harmonic and driven Fermi oscillators, for which the results developed here provide a powerful alternative computational tool for the study of fermionic systems.

Sleeping Beauty in One or Many Worlds: A Defense of the Halfer Position

Jiaxuan Zhang

2603.03553 • Mar 3, 2026

QC: none Sensing: none Network: none

This paper analyzes the Sleeping Beauty Problem in both classical and quantum (Many-Worlds Interpretation) contexts, arguing that the correct probability answer is 1/2 rather than the commonly accepted 1/3. The author defends the 'Halfer' position by refuting major arguments for the 'Thirder' position and claims this resolves concerns about consistency in the Many-Worlds Interpretation of quantum mechanics.

Key Contributions

  • Defense of the Halfer position (1/2 credence) in both classical and quantum versions of the Sleeping Beauty Problem
  • Refutation of four major Thirder arguments: the Proportion Argument, Elga's Variant Argument, the Technicolor Beauty Variant Argument, and decision-theoretic approaches
many-worlds interpretation probability theory sleeping beauty problem quantum foundations subjective probability
View Full Abstract

The Sleeping Beauty Problem (SBP) is a long-standing puzzle in classical probability theory and has been used to challenge the Many-Worlds Interpretation (MWI) of quantum mechanics, since both involve objective determinacy combined with subjective uncertainty about certain events. A common concern is that MWI yields a different answer to the quantum version of SBP than the widely supported Thirder position in the classical case. We argue that this concern is unwarranted. We show that in both the quantum and classical versions of SBP, the correct credence is given by the Halfer position. In the quantum (MWI) SBP, we show that if no unjustified renormalization is introduced, the correct credence is 1/2. We then extend this result to the classical SBP by refuting four major arguments for 1/3. First, we reject the Proportion Argument by distinguishing event weight from probability. Second, we rebut Elga's Variant Argument by extending an earlier critique that identifies the implicit introduction of additional information; we further clarify this point by constructing a new variant and explaining why the Principle of Indifference is inapplicable, drawing an analogy with a mistake by d'Alembert in the history of probability theory. Third, we identify a flaw in the Technicolor Beauty Variant Argument, which arises from treating overlapping events as disjoint. Finally, we argue that causal decision theory is inappropriate for SBP, rendering the Thirders vulnerable to a Dutch Book. Our results support the consistency of MWI under the challenge posed by SBP and suggest that the dominant position on SBP needs careful reconsideration.

Anomalous Klein tunnelling with magnetic barriers in strained graphene

Edgardo Marin-Colli, Tonatiuh Gómez-Ramírez, O-Excell Gutierrez, Yonatan Betancur-Ocampo, Alfredo Raya, Erik Díaz-Bautista

2603.03240 • Mar 3, 2026

QC: low Sensing: medium Network: none

This paper studies how electron transport through graphene changes when the material is mechanically strained and subjected to magnetic barriers. The researchers found that combining strain with magnetic fields creates unusual tunneling effects that can be used to control electrical conductance in graphene-based devices.

Key Contributions

  • Development of modified transfer-matrix framework for analyzing transport in strained graphene
  • Discovery of anomalous Klein tunneling effects through combined strain and magnetic field control
graphene Klein tunneling strain engineering magnetic barriers electron transport
View Full Abstract

We study electron transport in a strained graphene sheet subjected to a sequence of $N$ electrostatic and magnetic barriers. Employing a modified and improved transfer-matrix framework, we examine how the transmission and reflection coefficients evolve with variations in uniaxial strain and in the number of barriers. The interplay of mechanical deformation and external magnetic fields is found to generate an anomalous Klein tunnelling, allowing the conductance to be effectively modulated through strain and barrier configurations. These findings highlight the role of strain engineering and magnetic field modulation as powerful tools for tailoring charge transport in two-dimensional materials. More broadly, they underscore how mechanical and electromagnetic control can be used to design next-generation solid-state devices with tunable electronic properties.

Multiparty Quantum Key Agreement: Architectures, State-of-the-art, and Open Problems

Malik Mouaji, Saif Al-Kuwari

2603.03225 • Mar 3, 2026

QC: low Sensing: none Network: high

This paper provides a comprehensive review of multiparty quantum key agreement (MQKA) protocols that allow three or more mutually distrustful parties to establish shared secret keys. The authors organize MQKA as a three-dimensional design space encompassing network architecture, quantum resources, and security models, identifying patterns and open challenges for future quantum internet deployments.

Key Contributions

  • Develops a three-axis framework for understanding MQKA protocols based on network architecture, quantum resources, and security models
  • Provides comprehensive classification and analysis of existing MQKA protocols revealing design patterns and trade-offs
  • Identifies open research challenges and proposes roadmap for hybrid-resource, bosonic-code-encoded MQKA for future quantum internet
quantum key distribution multiparty protocols quantum cryptography quantum internet quantum communication
View Full Abstract

Multiparty quantum key agreement (MQKA) enables $n \geq 3$ mutually distrustful users to establish a shared secret key through collaborative quantum protocols. In this paper, we provide a comprehensive review where we argue that MQKA is best understood as a design space organized along three orthogonal but tightly coupled axes: (1) network architecture, which determines how quantum states flow between participants; (2) quantum resources, which encode the physical degrees of freedom used for implementation; and (3) security model, which defines trust assumptions about devices and infrastructure. Rather than treating MQKA as a linear sequence of isolated protocols, we develop this three-axis perspective to reveal recurrent patterns, sharp trade-offs, and unexplored design spaces. We classify MQKA protocols into structural families, map them to underlying quantum resources, and analyze how different security models shape fairness and collusion resistance. We further identify open challenges in composable security frameworks, network-native integration, and device-independent implementations, and propose a research roadmap toward hybrid-resource, bosonic-code-encoded, and fairness-aware MQKA suitable for future quantum internet deployments in the post-NISQ era.

Recovery-Induced Erasure Attack on QKD Systems

Hashir Kuniyil, Asad Ali, Syed M. Arslan, Muhammad Talha Rahim, Artur Czerwinski, Saif Al Kuwari

2603.03217 • Mar 3, 2026

QC: none Sensing: low Network: high

This paper identifies and demonstrates a new vulnerability in quantum key distribution (QKD) systems where the recovery time of single-photon detectors increases under high photon count rates, allowing attackers to reduce detection efficiency in a basis-dependent manner while keeping error rates below detection thresholds.

Key Contributions

  • Discovery and experimental characterization of count-rate-dependent detector recovery as a new attack vector against QKD systems
  • Demonstration that recovery-induced erasure attacks can reduce the quantum bit error rate below abort thresholds while increasing erasure probability, creating a stealth attack mechanism
  • Mathematical modeling of the attack as an adversarial erasure channel with conservative bounds on signal detection probability
quantum key distribution QKD security detector attacks single-photon avalanche photodiodes quantum cryptography

Detector dead time is typically treated as a fixed parameter in quantum key distribution (QKD) security analyses. In practice, however, the effective recovery time of single-photon avalanche photodiodes (SPADs) depends on the incident count rate. In this work, we demonstrate that this count-rate-dependent recovery nonlinearity constitutes a distinct attack primitive. We experimentally characterize the dead time shift of a free-running SPAD under controlled broadband loading and observe a substantial increase in effective recovery time as the detected rate rises into the high photon count regime. We show that recovery-induced availability reduction can be modeled as an adversarial erasure channel and derive a conservative bound on the signal detection probability under loading. Unlike previously studied detector-control or efficiency mismatch attacks, the proposed mechanism does not rely on deterministic blinding or timing discrimination. Instead, count-rate-dependent recovery asymmetry induces basis-dependent suppression of detection probabilities ($p_\perp<p_\parallel$), converting mismatch-induced errors into loss. In particular, we show that in active-basis BBM92 systems this effect reduces the observed quantum bit error rate (QBER) below the abort threshold while increasing erasure probability. Using experimentally measured detector recovery data, we quantify the parameter regime in which such stealth suppression is achievable. These results establish count-rate-dependent detector recovery as a security-relevant vulnerability and show that countermeasures designed for timing-based efficiency mismatch do not directly address recovery-induced erasure (RIE) attacks. Our findings underscore the need to incorporate detector recovery dynamics explicitly into practical QKD security models.
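The erasure-channel picture in the abstract can be illustrated with a toy calculation. The sketch below is not the paper's model: `e0` (the intrinsic error fraction) and `s` (the fraction of would-be-error detections suppressed into erasures) are hypothetical parameters, chosen only to show how converting errors into loss lowers the QBER seen by the legitimate parties.

```python
# Toy model of a recovery-induced erasure (RIE) style attack: basis-mismatch
# errors are converted into erasures (loss), hiding the attack from the
# QBER monitor. Illustrative sketch only, not the paper's security analysis.

def observed_qber(e0: float, s: float) -> float:
    """QBER among surviving detections when a fraction `s` of
    error-producing detections is converted into erasures."""
    correct = 1.0 - e0          # detections that yield correct bits
    errors = e0 * (1.0 - s)     # error detections that survive suppression
    return errors / (correct + errors)

def added_erasure(e0: float, s: float) -> float:
    """Extra erasure (loss) probability introduced by the suppression."""
    return e0 * s

# With s = 0 the observed QBER is just e0; as s -> 1 the QBER drops toward
# zero while loss grows by e0, and QKD protocols typically tolerate loss
# far more readily than errors.
```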

Simultaneous anti-bunched and super-bunched photons from a GaAs Quantum dot in a dielectric metasurface

Sanghyeok Park, Oleg Mitrofanov, Kusal M. Abeywickrama, Samuel Prescott, Jaeyeon Yu, Stephanie C Malek, Hyunseung Jung, Emma Renteria, Sadhvikas Addam...

2603.03186 • Mar 3, 2026

QC: low Sensing: medium Network: medium

This paper demonstrates a quantum dot embedded in a specially designed metasurface that can simultaneously produce two different types of quantum light, anti-bunched single photons and super-bunched photons, with comparable brightness, overcoming the typically weak emission of charged exciton complexes.

Key Contributions

  • Achieved simultaneous anti-bunched and super-bunched photon emission from single quantum dot with comparable count rates
  • Demonstrated order-of-magnitude enhancement of both neutral and charged exciton transitions using dielectric Mie-resonant metasurfaces
  • Proved that photonic engineering is essential for accessing weak quantum light states from charged exciton complexes
quantum dots, photon bunching, metasurfaces, excitons, single photon sources

Semiconductor quantum dots host a rich manifold of excitonic complexes, including neutral excitons that emit anti-bunched single photons and charged exciton complexes capable of producing super-bunched photons via cascade emission. Accessing both emission regimes from a single emitter would open routes to novel quantum protocols, including advanced quantum imaging. In practice, however, emission from charged exciton complexes is intrinsically weak, often orders of magnitude dimmer than neutral excitons, placing simultaneous dual-mode operation out of reach. Here, we overcome this limitation by embedding the quantum dot in a dielectric Mie-resonant metasurface that provides order-of-magnitude photoluminescence enhancement across both neutral and charged exciton transitions of a single GaAs quantum dot. Under identical non-resonant pumping conditions, the emission from the neutral exciton yields anti-bunched emission ($g^{(2)}(0) < 0.5$) and the emission from positively charged exciton complexes shows super-bunched emission ($g^{(2)}(0) > 3.5$) with comparable count rates (~12 kHz). Crucially, super-bunching emerges only when charged exciton emission spectrally overlaps with the Mie resonances and vanishes in un-patterned slabs, demonstrating that photonic engineering is essential for accessing these weak quantum light states. These results demonstrate a scalable, position-tolerant platform for harnessing the full excitonic structure of solid-state emitters.

Witnesses of non-Gaussian features as lower bounds of stellar rank

Jan Provazník, Šimon Bräuer, Vojtěch Kala, Jaromír Fiurášek, Petr Marek

2603.03185 • Mar 3, 2026

QC: medium Sensing: high Network: low

This paper establishes a theoretical connection between experimentally measurable witnesses of non-Gaussian quantum states and the abstract stellar rank measure, showing how accessible measurements can provide lower bounds on stellar rank to enable practical certification of non-Gaussian quantum resources.

Key Contributions

  • Established quantitative connection between non-Gaussian witnesses and stellar rank hierarchical measure
  • Introduced normalized expectation value and variance-based quantifiers that form consistent hierarchy thresholds corresponding to stellar rank
non-Gaussian states, stellar rank, quantum metrology, quantum witnesses, quantum resources

Quantum non-Gaussian states and operations serve as fundamental resources for universal quantum computation, error correction, and high-precision metrology, extending beyond the Gaussian limits. While the stellar rank provides a rigorous hierarchical measure of non-Gaussianity, it remains challenging to determine experimentally. Conversely, witnesses of non-Gaussian features, based on the expectation values and variances of measurable observables, offer an accessible method for certifying non-Gaussian behavior but lack a direct connection to stellar rank. In this work, we establish a quantitative connection between these witnesses and stellar rank, demonstrating that the former can provide certifiable lower bounds on stellar rank. We introduce normalized expectation value and variance-based quantifiers and show that these witnesses form a consistent hierarchy of thresholds corresponding to stellar rank. Our results bridge the gap between abstract hierarchical measures and experimentally accessible quantifiers, enabling scalable certification of non-Gaussian states.

Achieving speedup in Dark Matter search experiments with a transmon-based NISQ algorithm

Roberto Moretti, Pietro Campana, Rodolfo Carobene, Alessandro Cattaneo, Marco Gobbo, Danilo Labranca, Matteo Borghesi, Marco Faverzani, Elena Ferri, S...

2603.03157 • Mar 3, 2026

QC: medium Sensing: high Network: none

This paper presents a quantum algorithm using superconducting qubits to search for dark matter particles called hidden photons. The researchers developed a gate-based protocol that can detect these particles up to ten times faster than existing methods by monitoring quantum oscillations in the qubits.

Key Contributions

  • Development of an ancilla-assisted gate-based protocol for enhanced dark matter detection sensitivity
  • Demonstration of up to ten-fold reduction in integration time for achieving exclusion limits on hidden photon mixing parameter
quantum sensing, dark matter detection, superconducting qubits, transmon, NISQ

Coherent detection of ultralight bosonic dark matter can be achieved by monitoring slow Rabi oscillations in superconducting qubits. We introduce an ancilla-assisted, gate-based protocol that enhances sensitivity to the hidden photon kinetic mixing parameter $ε$ using a single two-qubit gate, bypassing the need to maintain long-lived multi-qubit entangled states and remaining compatible with the limitations of modern quantum hardware. We characterize the increase in sensitivity, accounting for decoherence, thermal occupation, and errors in readout and reset, and find up to a ten-fold reduction in the required integration time to reach the same exclusion limit on $ε$ achievable via Rabi-sampling experiments. Under plausible hardware assumptions and three years of data taking, the projected $95\%$ C.L. exclusion limit on the hidden photon mixing parameter reaches $ε\approx 1\times 10^{-14}$ across $2.5$-$6.0$ GHz ($10$-$25$ µeV).

Efficient Image Reconstruction Architecture for Neutral Atom Quantum Computing

Jonas Winklmann, Yian Yu, Xiaorang Guo, Korbinian Staudacher, Martin Schulz

2603.03149 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper develops a hardware accelerator on a field-programmable gate array (FPGA) that can quickly analyze images to detect atoms in neutral atom quantum computers, reducing the time needed for this critical step from milliseconds to microseconds. The system processes images roughly 35 times faster than existing methods, helping to reduce the control overhead that currently limits these quantum computers.

Key Contributions

  • FPGA-based parallel architecture for atom detection in neutral atom quantum computers achieving 34.9x speedup over CPU baseline
  • Algorithm-level optimizations combined with hardware acceleration reducing image analysis time to 115 microseconds for 256x256 pixel images
neutral atom quantum computing, FPGA acceleration, image reconstruction, atom detection, quantum control systems

In recent years, neutral atom quantum computers (NAQCs) have attracted considerable attention, primarily due to their long coherence times and good scalability. One of their main drawbacks is their comparatively time-consuming control overhead, with one of the main contributing procedures being the detection of individual atoms and measurement of their states, each occurring at least once per compute cycle and requiring fluorescence imaging and subsequent image analysis. To reduce the required time budget, we propose a highly-parallel atom-detection accelerator for tweezer-based NAQCs. Building on an existing solution, our design combines algorithm-level optimization with a field-programmable gate array (FPGA) implementation to maximize parallelism and reduce the run time of the image analysis process. Our design can analyze a 256$\times$256-pixel image representing a 10$\times$10 atom array in just 115 $μ$s on a Xilinx UltraScale+ FPGA. Compared to the original CPU baseline and our optimized CPU version, we achieve about 34.9$\times$ and 6.3$\times$ speedup of the reconstruction time, respectively. Moreover, this work also contributes to the ongoing efforts toward fully integrated FPGA-based control systems for NAQCs.
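As a sanity check on the quoted figures, the reported speedups imply the following CPU-side reconstruction times. This is simple arithmetic on the numbers given in the abstract, not additional measured data:

```python
# Back-of-the-envelope check of the latency figures quoted in the abstract.
fpga_time_s = 115e-6      # FPGA reconstruction time for a 256x256 image
speedup_vs_cpu = 34.9     # reported speedup over the original CPU baseline
speedup_vs_opt = 6.3      # reported speedup over the optimized CPU version

cpu_baseline_s = fpga_time_s * speedup_vs_cpu   # implied baseline, ~4.0 ms
cpu_optimized_s = fpga_time_s * speedup_vs_opt  # implied optimized CPU, ~0.72 ms

print(f"CPU baseline:  {cpu_baseline_s * 1e3:.2f} ms")
print(f"Optimized CPU: {cpu_optimized_s * 1e3:.2f} ms")
```

The implied ~4 ms baseline matches the summary's "milliseconds to microseconds" framing.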

Quantum-Inspired Hamiltonian Feature Extraction for ADMET Prediction: A Simulation Study

B. Maurice Benson, Kendall Byler, Anna Petroff, Shahar Keinan, William J Shipman

2603.03109 • Mar 3, 2026

QC: low Sensing: none Network: none

This paper develops a quantum-inspired method for predicting drug properties by encoding molecular fingerprints into simulated quantum Hamiltonians that capture complex correlations between molecular features. The approach achieves improved performance on drug property prediction benchmarks compared to classical methods, with quantum-derived features showing disproportionately high predictive importance.

Key Contributions

  • Novel quantum-inspired feature extraction method using parameterized Hamiltonians for molecular property prediction
  • Demonstration that quantum-derived features concentrate predictive signal despite comprising small fraction of total features
quantum-inspired algorithms, Hamiltonian simulation, molecular property prediction, ADMET, drug discovery

Predicting absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties remains a critical bottleneck in drug discovery. While molecular fingerprints effectively capture local structural features, they struggle to represent higher-order correlations among molecular substructures. We present a quantum-inspired feature extraction method that encodes molecular fingerprints into a parameterized Hamiltonian, using mutual information (MI) to guide entanglement structure. By simulating quantum evolution on GPU-accelerated backends, we extract expectation values that capture pairwise and triadic correlations among fingerprint bits. On ten Therapeutic Data Commons (TDC) ADMET benchmarks, our method achieves state-of-the-art performance on CYP3A4 substrate prediction (AUROC 0.673 ± 0.004) and improves over classical baselines on 8/10 tasks. SHAP (SHapley Additive exPlanations) analysis reveals that quantum-derived features contribute up to 33% of model importance despite comprising only 1.6% of features, demonstrating that Hamiltonian encoding concentrates predictive signal. This simulation study establishes the foundation for hardware validation on near-term quantum devices.
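The mutual-information-guided entanglement structure mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: `binary_mi` and `top_pairs` are hypothetical helpers that rank fingerprint-bit pairs by MI, the quantity the paper uses to decide which bits to couple in the Hamiltonian.

```python
import math
from itertools import combinations

def binary_mi(x, y):
    """Mutual information (in bits) between two binary sequences."""
    n = len(x)
    joint = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for a, b in zip(x, y):
        joint[(a, b)] += 1
    px = [x.count(0) / n, x.count(1) / n]
    py = [y.count(0) / n, y.count(1) / n]
    mi = 0.0
    for (a, b), c in joint.items():
        if c == 0:
            continue
        p = c / n
        mi += p * math.log2(p / (px[a] * py[b]))
    return mi

def top_pairs(fingerprints, k=2):
    """Rank bit pairs by MI; high-MI pairs would receive coupling terms."""
    n_bits = len(fingerprints[0])
    cols = [[fp[i] for fp in fingerprints] for i in range(n_bits)]
    scored = [((i, j), binary_mi(cols[i], cols[j]))
              for i, j in combinations(range(n_bits), 2)]
    return sorted(scored, key=lambda t: -t[1])[:k]
```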

Tripartite information of two-dimensional free fermions: a sine-kernel spectral constant from Fermi surface geometry

Aleksandrs Sokolovs

2603.03103 • Mar 3, 2026

QC: low Sensing: none Network: low

This paper analyzes quantum entanglement patterns in two-dimensional free fermion systems, showing that whether a quantum state appears 'holographic' or 'non-holographic' depends on the observation scale rather than being an intrinsic property of the state. The authors identify a universal critical scale that determines when quantum information becomes monogamous.

Key Contributions

  • Demonstrates that monogamy of mutual information is scale-dependent rather than an intrinsic quantum state property
  • Identifies a universal critical value z* ≈ 1.329 from sine-kernel spectral analysis that determines the transition between holographic and non-holographic behavior
tripartite information, free fermions, entanglement monogamy, Fermi surface, sine kernel

We show that monogamy of mutual information (MMI) in free-fermion ground states is a property of the observation scale, not of the quantum state. For three adjacent strips of width $w$ on a two-dimensional lattice, translation invariance decomposes the tripartite information as $I_3 = \sum_{k_y} g(k_F(k_y)\, w)$, where $g(z)$ is a universal function of the dimensionless product $z = k_F w$, determined by the spectrum of the sine-kernel integral operator (the Slepian concentration operator). We prove that $g(z)$ has a unique zero at $z^* \approx 1.329$: modes with $k_F w < z^*$ violate MMI ($g > 0$), while modes with $k_F w > z^*$ satisfy it ($g < 0$). Since $z^* / k_F w \to 0$ as $w \to \infty$, any Fermi surface eventually satisfies MMI at large $w$, while any gapless system violates it at sufficiently small $w$. The classification of states as "holographic" or "non-holographic" by the sign of $I_3$ is thus scale-dependent. We establish the properties of $g(z)$ analytically and show that $z^*$ is determined to $0.12\%$ by the cancellation of only two Slepian eigenvalue contributions. For Rényi entropies with index $α> 1$, the function $g_α(z)$ oscillates with multiple sign changes. We verify the framework on square and triangular lattices and show that interactions shift $z^*$ by $\sim 1$--$2\%$.

From Reachability to Learnability: Geometric Design Principles for Quantum Neural Networks

Vishal S. Ngairangbam, Michael Spannowsky

2603.03071 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper develops geometric design principles for quantum neural networks by analyzing how quantum circuits can adaptively deform data representations in high-dimensional quantum state spaces. The authors introduce mathematical frameworks to understand when quantum networks can effectively learn, showing that successful designs require both data encoding and trainable parameters to work together rather than separately.

Key Contributions

  • Introduction of Classical-to-Lie-algebra (CLA) maps and almost Complete Local Selectivity (aCLS) criterion for analyzing quantum neural network expressivity
  • Theoretical proof that effective quantum learning requires joint dependence on data and trainable parameters, not just state reachability
  • Demonstration that parametrized entangling operations are necessary for high-dimensional quantum state manifold control
quantum neural networks, variational quantum circuits, quantum machine learning, geometric quantum computing, quantum state manifolds

Classical deep networks are effective because depth enables adaptive geometric deformation of data representations. In quantum neural networks (QNNs), however, depth or state reachability alone does not guarantee this feature-learning capability. We study this question in the pure-state setting by viewing encoded data as an embedded manifold in $\mathbb{C}P^{2^n-1}$ and analysing infinitesimal unitary actions through Lie-algebra directions. We introduce Classical-to-Lie-algebra (CLA) maps and the criterion of almost Complete Local Selectivity (aCLS), which combines directional completeness with data-dependent local selectivity. Within this framework, we show that data-independent trainable unitaries are complete but non-selective, i.e. learnable rigid reorientations, whereas pure data encodings are selective but non-tunable, i.e. fixed deformations. Hence, geometric flexibility requires a non-trivial joint dependence on data and trainable weights. We further show that accessing high-dimensional deformations of many-qubit state manifolds requires parametrised entangling directions; fixed entanglers such as CNOT alone do not provide adaptive geometric control. Numerical examples validate that CLS-satisfying data re-uploading models outperform non-tunable schemes while requiring only a quarter of the gate operations. Thus, the resulting picture reframes QNN design from state reachability to controllable geometry of hidden quantum representations.

Exact stabilizer scars in two-dimensional $U(1)$ lattice gauge theory

Sabhyata Gupta, Piotr Sierant, Luis Santos, Paolo Stornati

2603.03062 • Mar 3, 2026

QC: medium Sensing: none Network: none

This paper studies special quantum states called 'sublattice scars' in a two-dimensional lattice gauge theory model, showing that these highly excited states have a hidden stabilizer structure that makes them efficiently simulable on classical computers despite arising from a complex many-body quantum system.

Key Contributions

  • Discovery of exact stabilizer states within the scarred eigenspectrum of the Rokhsar-Kivelson model
  • Construction of explicit Clifford circuits to prepare sublattice scar states in 2D lattice gauge theory
  • Demonstration that scarred subspace forms an intrinsic stabilizer manifold with vanishing stabilizer Rényi entropy
quantum many-body scarring, stabilizer states, lattice gauge theory, Clifford circuits, classical simulation

The complexity of highly excited eigenstates is a central theme in nonequilibrium many-body physics, underpinning questions of thermalization, classical simulability, and quantum information structure. In this work, considering the paradigmatic Rokhsar-Kivelson model, we connect quantum many-body scarring in Abelian lattice gauge theories to an emergent stabilizer structure. We identify a distinct class of scarred eigenstates, termed sublattice scars, originating from gauge-invariant zero modes that form exact stabilizer states. Remarkably, although the underlying Hamiltonian is not a stabilizer Hamiltonian, its eigenspectrum intrinsically hosts exact stabilizer eigenstates. These sublattice scars exhibit vanishing stabilizer Rényi entropy together with finite, highly structured entanglement, enabling efficient classical simulation. Exploiting their stabilizer structure, we construct explicit Clifford circuits that prepare these states in a two-dimensional lattice gauge model. Our results demonstrate that the scarred subspace of the Rokhsar-Kivelson spectrum forms an intrinsic stabilizer manifold, revealing a direct connection between stabilizer quantum information, lattice gauge constraints, and quantum many-body scarring.

Simulating a quantum sensor: quantum state tomography of NV-spin systems

Alberto López-García, Aikaterini Vasilakou, Javier Cerrillo

2603.03049 • Mar 3, 2026

QC: medium Sensing: high Network: none

This paper uses a quantum computer with two transmon qubits to simulate nitrogen-vacancy (NV) centers in diamond and their interaction with spin impurities, studying how these interactions affect the performance of quantum sensors through quantum state tomography.

Key Contributions

  • Demonstration of quantum computer simulation of NV-center quantum sensors using transmon qubits
  • Analysis of spin-sensor coupling effects on coherence and sensitivity optimization
  • Investigation of entanglement generation in sensor-impurity systems using Peres-Horodecki criterion
nitrogen-vacancy centers, quantum sensing, quantum state tomography, transmon qubits, spin impurities

We employ a quantum computer to simulate the effect of spin impurities on nitrogen-vacancy (NV) centers in diamond. As these defects operate as nanoscale quantum sensors, modeling quantum noise is crucial to identify limitations in precision. The analysis is performed by means of quantum state tomography on two transmon qubits, representing respectively the NV center and a single spin impurity, modeling either a nuclear spin or an additional NV center. We demonstrate a versatile platform to simulate benchmark protocols such as Ramsey or Hahn-echo. Although we focus on a two-spin system, the same approach opens the door to using quantum processors as scalable simulators of many-spin environments, intractable in classical simulation due to the rapid exponential growth of the Hilbert space. The results reveal the effect different spin-sensor coupling regimes have on coherence, helping to identify detection schemes that maximize the sensitivity under the effect of impurities. Moreover, the role of entanglement generation is analyzed using the Peres-Horodecki criterion and CHSH inequalities. Although no violation of the latter is observed, the presence of entanglement is confirmed.
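The Peres-Horodecki criterion used in the abstract is straightforward to evaluate for two qubits, where a negative eigenvalue of the partially transposed density matrix is both necessary and sufficient for entanglement. A minimal sketch (not the authors' analysis code):

```python
import numpy as np

def partial_transpose(rho, sys=1):
    """Partial transpose of a two-qubit density matrix on subsystem `sys`."""
    r = rho.reshape(2, 2, 2, 2)      # indices (a, b, a', b')
    if sys == 1:
        r = r.transpose(0, 3, 2, 1)  # swap b <-> b' (transpose second qubit)
    else:
        r = r.transpose(2, 1, 0, 3)  # swap a <-> a' (transpose first qubit)
    return r.reshape(4, 4)

def is_entangled_ppt(rho, tol=1e-10):
    """Peres-Horodecki test: negative partial-transpose eigenvalue
    certifies entanglement (and is also sufficient for 2x2 systems)."""
    eigs = np.linalg.eigvalsh(partial_transpose(rho))
    return bool(eigs.min() < -tol)

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): maximally entangled
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)

# Maximally mixed two-qubit state: separable
mixed = np.eye(4) / 4
```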

Motion-induced directionality of collective emission in a non-chiral waveguide

Yoan Spahn, Jens Hartmann, Benedikt Saalfrank, Michael Fleischhauer, Thomas Halfmann, Thorsten Peters

2603.03028 • Mar 3, 2026

QC: low Sensing: medium Network: medium

This paper demonstrates how atoms moving in a non-chiral waveguide can emit light preferentially in one direction due to their thermal motion, achieving up to 89% directionality. The researchers use Raman-induced transitions and study both the directional emission and the coherence properties of the emitted light.

Key Contributions

  • Demonstration of motion-induced directionality in collective atomic emission within non-chiral waveguides
  • Achievement of high directionality (up to 0.89) through controllable thermal motion effects
  • Theoretical modeling using Truncated Wigner Approximation that explains the directional emission mechanism
collective emission, directional emission, waveguide, superfluorescence, atomic motion

We report on the observation of motion-induced directionality in the collective emission of atoms confined within a hollow-core waveguide. Unlike in chiral waveguides, the atom-field coupling is here isotropic in the forward and backward direction. However, Raman-induced effective two-level emitters with spatially oscillating phases of the transition dipole enable thermally induced, but controllable directionality of the collective emission. By tuning the characteristic rate of collective decay we achieve a directionality of up to 0.89(1). We furthermore study the correlations of the emitted light close to and well above the threshold to collective emission, showing a buildup of coherence in the superfluorescent bursts while exhibiting thermal statistics below the threshold. To understand the underlying mechanism we employ numerical simulations based on the Truncated Wigner Approximation for spins and find good agreement. Additionally we present a simple model capable of reproducing the observed directionality via location blurring induced by the thermal motion of the atoms during collective emission. Our results will enable studies of collective, nonreciprocal interactions in non-chiral systems.

Analytical Quantum Full-Wave Analysis of Few-Photon Transport Through a Superconducting Cavity Qubit

Soomin Moon, Thomas E. Roth

2603.03015 • Mar 3, 2026

QC: high Sensing: none Network: high

This paper develops analytical solutions for modeling how one- and two-photon states travel through superconducting quantum devices connected by waveguides. The work provides mathematical tools to validate computer simulations used for designing quantum interconnects that could link different quantum computers together.

Key Contributions

  • First analytical quantum full-wave solutions for one- and two-photon transport through superconducting cavity qubits with coaxial ports
  • Theoretical framework using quantum input-output theory to analyze nonlinear quantum scattering effects in cavity quantum electrodynamics
superconducting qubits, quantum interconnects, photon transport, cavity quantum electrodynamics, quantum input-output theory

A promising way to scale up superconducting quantum computers is to link different devices together using propagating photons. Correspondingly, accurately modeling the quantum information transfer in such quantum interconnects is critical to advancing this emerging technology. To accomplish this, a full-wave quantum numerical model is essential for describing the few-photon transport characteristics of various components. Unfortunately, validating the accuracy of such numerical models remains a difficult challenge due to the lack of appropriate analytical solutions for standard component types. Recently, progress has been made on creating the first-ever analytical quantum full-wave solutions for a superconducting circuit quantum device. These efforts considered the case of two-photon transport through an empty rectangular waveguide cavity and the interactions of photons inside a closed rectangular waveguide cavity with a transmon qubit formed by a Josephson junction connected across the terminals of a small wire dipole antenna. Here, we advance these efforts by considering the one- and two-photon transport properties through a rectangular waveguide cavity containing a qubit in this form when the cavity is interfaced via two coaxial ports. Such devices can be used in various ways for quantum interconnects, such as to form parts of a quantum memory or a photon source. We perform this analysis leveraging a quantum input-output theory formalism to derive the relevant single- and two-photon transport characteristics of interest. We then examine the signatures of the nonlinear quantum scattering effects in the good and bad cavity regimes of cavity quantum electrodynamics. In the future, these analytical results can be used to validate numerical full-wave quantum solvers for modeling quantum interconnects.

QAOA-Predictor: Forecasting Success Probabilities and Minimal Depths for Efficient Fixed-Parameter Optimization

Rodrigo Coelho, Georg Kruse, Jeanette Miriam Lorenz

2603.02990 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper develops a machine learning approach using Graph Neural Networks to predict how well the Quantum Approximate Optimization Algorithm (QAOA) will perform on different combinatorial optimization problems without having to run expensive parameter optimization. The GNN can forecast success probabilities and determine optimal layer depths across various problem types.

Key Contributions

  • Development of a GNN-based predictor for QAOA performance that is accurate within a 10% margin of the true values
  • Demonstration of strong generalization across unseen problem classes and larger problem sizes
  • Method to identify optimal parameter initialization and minimal layer depth without costly optimization
QAOA, quantum optimization, graph neural networks, combinatorial optimization, quantum algorithms

Quantum Computing promises to solve complex combinatorial optimization problems more efficiently than classical methods, with the Quantum Approximate Optimization Algorithm (QAOA) being a leading candidate. Recent fixed-parameter variations of QAOA eliminate costly run-time optimization, but determining their optimal initialization as well as the number of required layers (p) for a target solution remains a critical, unsolved challenge. In this work, we propose a novel approach using a Graph Neural Network (GNN) to predict QAOA performance: Based on a graph representation of the problem, the GNN forecasts the probability of the optimal solution in the resulting distribution across different parameter initializations and layer depths for a wide variety of combinatorial optimization problems. We demonstrate that the GNN accurately predicts QAOA performance within a 10% margin of the true values. Furthermore, the model exhibits strong generalization capabilities across unseen problem classes, larger problem sizes, and higher layer counts. Our approach allows us to identify viable problem instances for QAOA and to select an adequate parameter initialization strategy with minimal layer depth, without the need of costly parameter optimization.

Nuclear interference by electronic de-orthogonalisation

Matisse Wei-Yuan Tu, Angel Rubio, E. K. U. Gross

2603.02966 • Mar 3, 2026

QC: low Sensing: medium Network: none

This paper shows that in coupled electron-nuclear quantum systems, interference patterns can spontaneously appear in the nuclear density due to correlations with electrons, even when no such interference existed initially. The researchers demonstrate that non-adiabatic interactions cause initially orthogonal electronic states to become non-orthogonal, creating new interference effects that reveal the composite nature of the quantum system.

Key Contributions

  • Demonstration that nuclear density interference can emerge dynamically from electron-nuclear correlations
  • Identification of electronic de-orthogonalisation as the mechanism generating interference in composite quantum systems
interference, superposition, electron-nuclear dynamics, Born-Oppenheimer, non-adiabatic

Interference is a universal consequence of superposition, yet in composite quantum systems it can encode correlations between subsystems. We show that in coupled electron-nuclear dynamics, interference in the nuclear density can arise dynamically even when it is initially absent. Starting from a superposition of orthogonal Born-Oppenheimer electronic states, we demonstrate within the exact factorisation framework that genuine non-adiabatic electron-nuclear correlations induce de-orthogonalisation of the electronic factors, thereby generating interference terms in the nuclear density. Such interference has no counterpart in adiabatic evolution. Unlike conventional nuclear wave-packet interference or interference that merely reflects electronic coherence in a chosen basis, the effect identified here is a manifestation of the compositeness of the full electron-nuclear state. Nuclear density interference thus emerges as a direct dynamical signature of correlated quantum motion in composite systems.

Layer-wise QUBO-Based Training of CNN Classifiers for Quantum Annealing

Mostafa Atallah, Rebekah Herrman

2603.02958 • Mar 3, 2026

QC: medium Sensing: none Network: none

This paper proposes a method to train convolutional neural networks for image classification using quantum annealing instead of traditional gradient-based optimization. The approach converts the training problem into multiple smaller binary optimization problems that can be solved on quantum annealing hardware like D-Wave systems.

Key Contributions

  • Development of QUBO-based framework for CNN training that avoids barren plateau problems in variational quantum circuits
  • Demonstration that the method scales with feature dimension and bit precision rather than dataset size, making it more practical for large datasets
quantum annealing QUBO machine learning convolutional neural networks D-Wave
View Full Abstract

Variational quantum circuits for image classification suffer from barren plateaus, while quantum kernel methods scale quadratically with dataset size. We propose an iterative framework based on Quadratic Unconstrained Binary Optimization (QUBO) for training the classifier head of convolutional neural networks (CNNs) via quantum annealing, entirely avoiding gradient-based circuit optimization. Following the Extreme Learning Machine paradigm, convolutional filters are randomly initialized and frozen, and only the fully connected layer is optimized. At each iteration, a convex quadratic surrogate derived from the feature Gram matrix replaces the non-quadratic cross-entropy loss, yielding an iteration-stable curvature proxy. A per-output decomposition splits the $C$-class problem into $C$ independent QUBOs, each with $(d+1)K$ binary variables, where $d$ is the feature dimension and $K$ is the bit precision, so that problem size depends on the image resolution and bit precision, not on the number of training samples. We evaluate the method on six image-classification benchmarks (sklearn digits, MNIST, Fashion-MNIST, CIFAR-10, EMNIST, KMNIST). A precision study shows that accuracy improves monotonically with bit resolution, with 10 bits representing a practical minimum for effective optimization; the 15-bit formulation remains within the qubit and coupler limits of current D-Wave Advantage hardware. The 20-bit formulation matches or exceeds classical stochastic gradient descent on MNIST, Fashion-MNIST, and EMNIST, while remaining competitive on CIFAR-10 and KMNIST. All experiments use simulated annealing, establishing a baseline for direct deployment on quantum annealing hardware.
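As a concrete sketch of the QUBO construction, the toy below encodes the weights of a least-squares classifier head in K-bit fixed-point binary variables and folds the linear term into the QUBO diagonal. It is an illustration under assumptions, not the authors' code: the bit values, the plain squared loss (in place of the paper's convex surrogate), and the brute-force solver (standing in for an annealer) are all invented here, and the paper's bias bit is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: d features, K bits per weight (assumed encoding).
d, K, n = 3, 2, 8
X = rng.normal(size=(n, d))          # frozen random features
y = rng.normal(size=n)               # regression targets

# Fixed-point encoding: w_j = sum_k c_k * b_{j,k}, with bit values c_k = 2^{-k}.
coeffs = 2.0 ** -np.arange(K)
# Encoding matrix S maps d*K binary variables to d real weights.
S = np.kron(np.eye(d), coeffs)       # shape (d, d*K)

# Least squares: ||X w - y||^2 = b^T Q b + const for binary b.
A = X @ S                            # (n, d*K)
Q = A.T @ A                          # quadratic couplings
np.fill_diagonal(Q, np.diag(Q) - 2 * (A.T @ y))  # fold linear term (b^2 = b)

def qubo_energy(b):
    return b @ Q @ b

# Brute-force the small QUBO (an annealer would sample this instead).
best_b, best_e = None, np.inf
for idx in range(2 ** (d * K)):
    b = np.array([(idx >> i) & 1 for i in range(d * K)], dtype=float)
    e = qubo_energy(b)
    if e < best_e:
        best_b, best_e = b, e

w = S @ best_b
print("best weights:", w)
# The QUBO energy plus ||y||^2 equals the squared residual of the decoded weights.
assert np.isclose(best_e + y @ y, np.sum((X @ w - y) ** 2))
```

Note how the QUBO size is d*K regardless of n, which mirrors the paper's point that problem size depends on feature dimension and bit precision, not on the number of training samples.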

Improved Grid-Based Simulation of Coulombic Dynamics

Xiaoning Feng, Hans Hon Sang Chan, David P. Tew

2603.02954 • Mar 3, 2026

QC: medium Sensing: low Network: none

This paper develops two computational correction schemes to improve the accuracy of quantum simulations of hydrogen-like atoms, addressing challenges with the Coulomb potential's mathematical singularity. The methods work on both classical computers and quantum computers, with detailed resource estimates showing the approach could be implemented on future quantum computing platforms.

Key Contributions

  • Two correction schemes for grid-based Coulombic quantum dynamics that improve energy accuracy and time evolution fidelity
  • Quantum computing implementation framework using Walsh and Fourier series expansions with detailed resource analysis
quantum dynamics Coulomb potential grid-based simulation Trotter steps quantum algorithms
View Full Abstract

Accurate time-dependent quantum dynamics of Coulombic systems on grid-based representations remains computationally demanding due to the singularity of the Coulomb potential, which necessitates extremely fine spatial grids to mitigate discretisation errors. We propose two complementary correction schemes that, under identical resource budgets, consistently outperform their uncorrected counterparts. The first scheme modifies the potential operator to incorporate grid-basis structure into its representation, while the second introduces a corrected initial wavefunction inspired by analytical solutions of softened Coulomb potentials. Applied to hydrogenic systems, these corrections deliver improved energy accuracy and time fidelity across long evolutions. Beyond classical simulations, the proposed framework aligns naturally with quantum computing architectures, where the corrected operators and states can be encoded through truncated Walsh and Fourier series expansions. A resource analysis for the representative 2D hydrogen system yields a circuit depth of $1.5\times10^{8}$ gates over 6,000 Trotter steps. This study thus establishes practical strategies toward high-accuracy Coulombic dynamics on both classical and emerging quantum platforms.
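The role of softening can be sketched with a 1D soft-core hydrogen model, $V(x) = -1/\sqrt{x^2 + a^2}$, diagonalised by second-order finite differences. This is a minimal toy under assumed grid parameters, not the paper's correction schemes:

```python
import numpy as np

# Soft-core Coulomb potential on a uniform grid (illustrative toy).
N, L, a = 400, 40.0, 1.0          # grid points, box size, softening length
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = -1.0 / np.sqrt(x**2 + a**2)   # regularised 1/|x| singularity

# Second-order finite-difference kinetic energy, -0.5 d^2/dx^2 (atomic units).
T = (np.diag(np.full(N, 1.0 / dx**2))
     - np.diag(np.full(N - 1, 0.5 / dx**2), 1)
     - np.diag(np.full(N - 1, 0.5 / dx**2), -1))
H = T + np.diag(V)

E0 = np.linalg.eigvalsh(H)[0]
print(f"ground-state energy: {E0:.4f} Ha")
```

Shrinking the softening length `a` toward the true Coulomb potential forces the ground state to sharpen at the origin, which is exactly the regime where finite grids lose accuracy and correction schemes like the paper's become necessary.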

Fingerprint Recognition of Partial Discharge Signals in Deep Learning Enhanced Rydberg Atomic Sensors

Yi-Ming Yin, Qi-Feng Wang, Yu Ma, Tian-Yu Han, Jia-Dou Nan, Zheng-Yuan Zhang, Han-Chao Chen, Xin Liu, Shi-Yao Shao, Jun Zhang, Qing Li, Ya-Jun Wang, D...

2603.02925 • Mar 3, 2026

QC: none Sensing: high Network: none

This paper uses Rydberg atomic sensors to detect electrical discharge signals from deteriorating high-voltage equipment, then applies deep learning to classify different types of discharge patterns. The approach achieves 94% accuracy in identifying discharge types even when signals are weak, offering a new method for monitoring electrical infrastructure.

Key Contributions

  • Development of Rydberg atomic sensors for broadband partial discharge detection
  • Integration of deep learning (1D ResNet) with quantum sensing for automated pattern recognition without manual feature extraction
Rydberg atoms quantum sensing partial discharge detection deep learning electrical diagnostics
View Full Abstract

Partial discharge originates from microscopic insulation imperfections in high-voltage apparatus and is widely considered a critical marker of incipient deterioration. Conventional partial discharge detection methods are typically constrained by limited bandwidth and often rely on predefined feature extraction, which impedes reliable recognition of broadband transient signals. In this work, we employ a Rydberg atomic sensor to directly capture time-domain responses of partial discharge emissions and construct distinctive spectral fingerprints for different types. A 1D ResNet deep learning model is then applied to recognize these fingerprints from time-domain signals without manual feature engineering. Under increased source-antenna distances, where spectral features are significantly attenuated, the model attains a recognition accuracy of approximately 94\% across four partial discharge categories, demonstrating robustness to attenuation and noise. We further validate the approach in a simulated early-warning scenario, where partial discharge signals mixed with noise are analyzed and the model successfully generates predictive alarms. These results underscore the potential of integrating Rydberg-based broadband sensing with data-driven analysis for non-invasive, high-sensitivity diagnostics of electrical insulation systems.

Toward multi-purpose quantum communication networks: from theory to protocol implementation

Lucas Hanouz, Marc Kaplan, Jean-Sébastien Kersaint Tournebize, Chin-te Liao, Anne Marin

2603.02923 • Mar 3, 2026

QC: low Sensing: none Network: high

This paper demonstrates how to implement multiple quantum communication protocols (quantum oblivious transfer and quantum tokens) on the same quantum key distribution hardware, moving beyond single-purpose networks. The researchers developed a full-stack software framework with both simulation and real hardware backends to simplify deployment and assessment of multi-purpose quantum communication networks.

Key Contributions

  • Development of methodology to implement multiple quantum communication protocols on single QKD hardware platform
  • Creation of full-stack development framework with simulation backend that reproduces real hardware behavior
  • Open-source implementation enabling reproducible research in multi-purpose quantum communication networks
quantum key distribution quantum oblivious transfer quantum tokens quantum communication networks protocol implementation
View Full Abstract

Most quantum communication networks around the world are used for a single task: quantum key distribution. To initiate the transition to multi-purpose quantum communication networks, we demonstrate the implementation of two different tasks on the same quantum key distribution hardware. Specifically, we focus on quantum oblivious transfer and quantum tokens. Our main contribution is to establish a methodology that greatly simplifies the expertise required to achieve the deployment, assess its performance, and evaluate its feasibility at a large scale. The implementation that we present is full-stack. It is based on a development framework that allows running user-defined applications with either a simulated or a real quantum communication backend. The hardware used for the implementation is VeriQloud's Qline. The simulation backend reproduces exactly the inputs and outputs of the real hardware, as well as its losses and errors. It can therefore be used to validate the implementation before running it on the real hardware. The sources of the software that we use are fully open, making our research reproducible. The security of the implementations on real hardware is discussed with respect to security bounds previously known in the literature. We also discuss the engineering choices that we made to make the implementations feasible. By establishing a methodology to evaluate the performance and security of quantum communication protocols, we take a significant step towards industrializing and deploying large-scale, multi-purpose quantum communication networks.

Learning Hamiltonians for solid-state quantum simulators

Jarosław Pawłowski, Mateusz Krawczyk

2603.02889 • Mar 3, 2026

QC: high Sensing: medium Network: low

This paper develops a machine learning framework that uses physics-informed neural networks to automatically determine the effective Hamiltonians of solid-state quantum systems from experimental transport measurements. The method incorporates physical constraints directly into the model and is demonstrated on triple quantum dot chains, showing it can characterize quantum simulators even with noisy data.

Key Contributions

  • Development of physics-informed neural network architecture for automated Hamiltonian identification from experimental data
  • Demonstration of robust characterization method for programmable solid-state quantum simulators that works with noisy measurements
Hamiltonian learning quantum dots physics-informed neural networks quantum simulators transport measurements
View Full Abstract

We introduce a generalizable framework for learning to identify effective Hamiltonians directly from experimental data in solid-state quantum systems. Our approach is based on a physics-informed neural network architecture that embeds physical constraints directly into the model structure. Unlike purely data-driven supervised schemes, the proposed unsupervised autoencoder-based method incorporates the governing physics (here, the S-matrix formalism) within the decoder network, ensuring that the learned representations remain physically meaningful. Through numerical learning experiments, we demonstrate automated characterization of programmable solid-state simulators from transport measurements, exemplified by a triple quantum dot chain. The trained model generalizes beyond the training domain and accurately infers Hamiltonian parameters from transport data. While the model has finite capacity -- leading to degraded performance when the parameter space becomes excessively large or structurally diverse -- we identify regimes in which robust generalization is maintained. We further show how to train the model to handle noisy measurements, reflecting realistic experimental conditions.

Discrete-modulation continuous-variable quantum key distribution with probabilistic amplitude shaping over a linear quantum channel

Emanuele Parente, Michele N. Notarnicola, Stefano Olivares, Enrico Forestieri, Luca Potì, Marco Secondini

2603.02870 • Mar 3, 2026

QC: none Sensing: none Network: high

This paper develops a new quantum key distribution protocol that uses discrete modulation and probabilistic amplitude shaping instead of Gaussian modulation, making it easier to implement with existing telecom equipment while maintaining security and performance comparable to the standard GG02 protocol.

Key Contributions

  • Development of a discrete-modulation continuous-variable QKD protocol using probabilistic amplitude shaping that is implementable with current telecom technologies
  • Demonstration that the new protocol achieves performance comparable to GG02 benchmark while maintaining unconditional security
quantum key distribution continuous variable discrete modulation probabilistic amplitude shaping quadrature amplitude modulation
View Full Abstract

The practical implementation difficulties arising from the Gaussian modulation of the GG02 protocol lead us to investigate the possibilities offered by combining the probabilistic amplitude shaping technique with quadrature amplitude modulation formats in the context of continuous-variable quantum key distribution systems. Our interest comes from the fact that quadrature amplitude modulation and probabilistic shaping can be implemented with current technologies and are widely used in classical telecom equipment. In this treatment, we work in the scenario of a linear quantum channel and analyze the maximum achievable secure key rates, maximum reachable distances, and the resilience to noise of our discrete-modulation-based protocol with respect to GG02, which is taken as a benchmark. In particular, we deal with the infinite-key-size regime, consider a homodyne detection scheme, and analyze what happens for different cardinalities of the input alphabet at different distances, in the case of collective attacks and in the reverse reconciliation picture. We find that our protocol, beyond being easily reproducible in the laboratory, provides a way to closely approach the theoretical performance offered by GG02 while preserving the ability to ensure an unconditional security level.
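Probabilistic amplitude shaping can be sketched with the Maxwell-Boltzmann family commonly used over QAM alphabets, $p(x) \propto e^{-\nu |x|^2}$. The 16-QAM alphabet and shaping form below are assumptions for illustration; the paper's exact distribution and cardinalities may differ:

```python
import numpy as np
from itertools import product

# 16-QAM constellation: I and Q levels in {-3, -1, 1, 3}.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
constellation = np.array([complex(i, q) for i, q in product(levels, levels)])

def shaped_distribution(nu):
    """Maxwell-Boltzmann shaping: low-energy symbols become more likely."""
    w = np.exp(-nu * np.abs(constellation) ** 2)
    return w / w.sum()

for nu in (0.0, 0.05, 0.2):
    p = shaped_distribution(nu)
    energy = np.sum(p * np.abs(constellation) ** 2)
    entropy = -np.sum(p * np.log2(p))
    print(f"nu={nu:<4}  mean energy={energy:6.3f}  entropy={entropy:5.3f} bits")
```

Increasing the shaping parameter `nu` trades entropy (raw bits per symbol) for lower mean energy, which is the knob such discrete-modulation protocols tune to mimic a Gaussian input while remaining implementable with standard telecom transmitters.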

An Extensible Quantum Network Simulator Built on ns-3: Q2NS Design and Evaluation

Adam Pearson, Francesco Mazza, Marcello Caleffi, Angela Sara Cacciapuoti

2603.02857 • Mar 3, 2026

QC: low Sensing: none Network: high

This paper presents Q2NS, a quantum network simulator built on the classical ns-3 networking platform that can simulate both quantum operations and classical communications together. The simulator supports multiple quantum state representations and includes visualization tools to help researchers design and test quantum networking protocols.

Key Contributions

  • Development of Q2NS, a modular quantum network simulator that integrates quantum and classical networking protocols
  • Support for multiple quantum state representations (state-vector, density-matrix, stabilizer) through unified interface
  • Comprehensive benchmarking showing superior computational efficiency compared to existing quantum network simulators
  • Visualization tool for entanglement dynamics and quantum network connectivity
quantum networking network simulation entanglement distribution quantum internet hybrid classical-quantum protocols
View Full Abstract

As quantum networking hardware remains costly and not yet widely accessible, simulation tools are essential for the design and evaluation of quantum network architectures and protocols. However, designing a scalable and computationally efficient quantum network simulator is intrinsically challenging: i) quantum dynamics must be emulated on classical computing platforms while capturing the stateful and non-local nature of entanglement, a quantum resource without any classical networking analog; ii) quantum networking is inherently hybrid, as protocol execution also fundamentally depends on classical signaling. This makes a tight and faithful co-simulation of quantum operations and classical message exchanges a core requirement. In this light, we present Q2NS, a modular and extensible quantum network simulator, built on top of ns-3, designed to seamlessly integrate quantum-network primitives with ns-3's established classical protocol stack. Q2NS adopts a modular architecture that decouples protocol control logic from node- and channel-level operations, enabling rapid prototyping and adaptation across heterogeneous and evolving Quantum Internet scenarios. Q2NS natively supports multiple quantum state representations through a unified interface, allowing interchangeable state-vector, density-matrix, and stabilizer backends. We validate Q2NS through realistic use-case studies and comprehensive benchmarks, demonstrating superior computational efficiency over representative state-of-the-art alternatives, while preserving modeling flexibility. Finally, we provide a dedicated visualization tool that jointly captures physical and entanglement-enabled connectivity and supports entangled-state manipulations, facilitating an intuitive interpretation of entanglement dynamics and protocol behavior. Q2NS offers a flexible, open, and scalable simulation platform for advancing Quantum Internet research.

Identification of quantum generative circuits with parallel quantum neural network

Zheping Wu, Xiaopeng Huang, Hengyue Jia, Haobin Shi, Wei-Wei Zhang

2603.02834 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper develops ParaQuanNet, a parallel quantum neural network designed to identify and classify different quantum generative circuits that produce similar outputs. The authors demonstrate their approach by successfully distinguishing between eight different quantum circuits that all generate W-like states with 99.5% accuracy.

Key Contributions

  • Development of ParaQuanNet, a parallel quantum embedding neural network for quantum circuit identification
  • Novel parallel quantum embedding unit (PQEU) design that enables efficient parallel processing of quantum data
  • Demonstration of 99.5% accuracy in classifying eight different quantum generative circuits producing similar W-like states
quantum neural networks quantum generative circuits quantum machine learning parallel quantum processing quantum circuit identification
View Full Abstract

The rapid emergence of quantum technology has raised new challenges in distinguishing quantum circuits with similar functions. In this work, we propose a parallel quantum embedding neural network (ParaQuanNet) for the efficient identification of quantum generative circuits via classification of the corresponding output data. Specifically, we generate W-like states with eight generative quantum circuits realizing generative quantum denoising diffusion probabilistic models (QDDPM). Our ParaQuanNet classifies these eight classes of generated quantum data with an accuracy of $99.5\%$, even though all of them are trained to generate the same types of quantum data. With a novel parallel quantum embedding unit (PQEU) in our neural networks, ParaQuanNet enables the quantum kernel circuit to process all receptive fields of the quantum data in parallel, which improves quantum data processing efficiency. We also integrate mutually unbiased measurements into ParaQuanNet to further improve its performance. We apply ParaQuanNet to the classification of classical data sets and demonstrate the good performance of quantum neural networks on these tasks. Our approach is robust to noisy data and circuit-level noise, with a Python realization on a classical GPU. Our results highlight ParaQuanNet as a scalable and effective framework for quantum circuit identification, contributing to the broader development of quantum machine intelligence.

Charging power enhancement at the phase transition of a non-integrable quantum battery

D. Farina, M. Sassetti, V. Cataudella, D. Ferraro, N. Traverso Ziani

2603.02819 • Mar 3, 2026

QC: medium Sensing: low Network: none

This paper studies quantum batteries based on a non-integrable one-dimensional Ising model, finding that quantum phase transitions can significantly enhance charging power compared to previous studies on integrable systems. The work demonstrates how many-body interactions and critical phenomena can improve quantum battery performance using numerical simulations of realistic quantum systems.

Key Contributions

  • Demonstration of charging power enhancement in non-integrable quantum batteries at phase transitions
  • Numerical characterization of quantum battery performance in realistic many-body systems amenable to experimental verification
quantum batteries quantum phase transitions many-body systems non-integrable models charging power
View Full Abstract

Exploiting many-body interactions and critical phenomena to improve the performance of quantum batteries is an emerging and promising line of research. A central question in this direction is whether quantum phase transitions can enhance the charging energy or power. While preliminary works have addressed this problem in fine-tuned integrable models, its characterization in non-integrable systems remains limited due to the demanding numerical requirements. Here, we investigate a one-dimensional Axial Next-Nearest-Neighbor Ising model as an example of a non-integrable quantum battery charged via a quantum-quench protocol. In contrast to integrable cases, we find that criticality in this setting can lead to a pronounced enhancement of the charging power. Our findings inform quantum-battery design of many-qubit systems and are amenable to experimental verification on current quantum-simulation platforms, including neutral-atom arrays.
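The quench-charging protocol can be sketched by exact diagonalisation of a small Ising chain: prepare the ground state of the battery Hamiltonian $H_0$, quench on a charging field $H_1$, and track the stored energy $E(t) = \langle H_0 \rangle_t - E_{gs}$. The toy below uses a plain ferromagnetic chain with a transverse charging field, illustrative stand-ins for the paper's ANNNI model and protocol:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, n):
    """Embed a single-qubit operator at `site` of an n-qubit chain."""
    out = single if site == 0 else I2
    for k in range(1, n):
        out = np.kron(out, single if k == site else I2)
    return out

n = 6
H0 = sum(-op(sz, i, n) @ op(sz, i + 1, n) for i in range(n - 1))  # battery
H1 = sum(-op(sx, i, n) for i in range(n))                          # charging field

# Start in the ground state of H0, then quench on the charging field.
w0, v0 = np.linalg.eigh(H0)
psi0, E_gs = v0[:, 0], w0[0]

w, v = np.linalg.eigh(H0 + H1)
def evolve(t):
    """Exact evolution under the quenched Hamiltonian H0 + H1."""
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))

for t in (0.2, 0.5, 1.0):
    psi = evolve(t)
    E = (psi.conj() @ H0 @ psi).real - E_gs   # stored energy, >= 0
    print(f"t={t}: stored energy={E:.3f}, average power={E / t:.3f}")
```

The paper's question is how the average power behaves as the model's couplings are tuned across a quantum phase transition; the toy above only fixes the bookkeeping of the protocol.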

Merged amplitude encoding for Chebyshev quantum Kolmogorov--Arnold networks: trading qubits for circuit executions

Hikaru Wakaura

2603.02818 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper develops a technique called merged amplitude encoding for quantum neural networks that reduces the number of quantum circuit executions needed for computation by packing multiple calculations into a single quantum state. The authors show through experiments that this optimization preserves the network's ability to learn while using fewer quantum resources.

Key Contributions

  • Introduction of merged amplitude encoding technique that reduces circuit executions by factor of n while adding only 1-2 qubits
  • Empirical validation showing merged circuits maintain comparable trainability and performance to original circuits across multiple test conditions
quantum machine learning quantum neural networks amplitude encoding circuit optimization NISQ algorithms
View Full Abstract

Quantum Kolmogorov--Arnold networks based on Chebyshev polynomials (CCQKAN) evaluate each edge activation function as a quantum inner product, creating a trade-off between qubit count and the number of circuit executions per forward pass. We introduce merged amplitude encoding, a technique that packs the element-wise products of all $n$ input-edge vectors for a given output node into a single amplitude state, reducing circuit executions by a factor of $n$ at a cost of only 1--2 additional qubits relative to the sequential baseline. The merged and original circuits compute the same mathematical quantity exactly; the open question is whether they remain equally trainable within a gradient-based optimization loop. We address this question through numerical experiments on 10 network configurations under ideal, finite-shot, and noisy simulation conditions, comparing original, parameter-transferred, and independently initialized merged circuits over 16 random seeds. Wilcoxon signed-rank tests show no significant difference between the independently initialized merged circuit and the original ($p > 0.05$ in 28 of 30 comparisons), while parameter transfer yields significantly lower loss under ideal conditions ($p < 0.001$ in 9 of 10 configurations). On 10-class digit classification with the $8\times8$ MNIST dataset using a one-vs-all strategy, original and merged circuits achieve comparable test accuracies of 53--78\% with no significant difference in any configuration. These results provide empirical evidence that merged amplitude encoding preserves trainability under the simulation conditions tested.
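The equivalence of merged and sequential evaluation is a statement of linear algebra that can be checked classically. The sketch below assumes simple element-wise-product inner products and omits the Chebyshev feature maps and actual circuits; zero-padding of the merged vector to a power-of-two dimension is also left out:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 3, 4                       # edges per output node, feature dimension

a = rng.normal(size=(n, d))       # input vectors (one per edge)
b = rng.normal(size=(n, d))       # feature vectors (illustrative stand-ins)

# Sequential baseline: n separate inner products, one circuit execution each.
seq = np.array([a[i] @ b[i] for i in range(n)])

# Merged encoding: concatenate the n element-wise products into one vector,
# normalise it to a valid amplitude state, and keep the norm as classical data.
merged = (a * b).reshape(-1)       # n*d amplitudes (padding to 2^m omitted)
norm = np.linalg.norm(merged)
state = merged / norm              # single amplitude-encoded state

# Each edge's inner product is recovered as a block sum, rescaled by the norm.
recovered = norm * state.reshape(n, d).sum(axis=1)
assert np.allclose(recovered, seq)
```

This shows why one execution can replace n: all n products live in disjoint amplitude blocks of a single state, at the cost of the extra qubits needed to index the blocks.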

Fast and memory-efficient classical simulation of quantum machine learning via forward and backward gate fusion

Yoshiaki Kawase

2603.02804 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper develops a method to speed up classical computer simulations of quantum machine learning by optimizing how quantum gates are processed, achieving 20-30x performance improvements and enabling training of large quantum neural networks with thousands of parameters.

Key Contributions

  • Gate fusion technique for forward and backward passes that reduces memory access bottlenecks
  • Demonstration of training large-scale quantum machine learning models (20 qubits, 60,000 parameters) on consumer hardware
quantum machine learning classical simulation variational quantum algorithms gate fusion gradient computation
View Full Abstract

While real quantum devices have been increasingly used in research aimed at quantum advantage or quantum utility in recent years, executing deep quantum circuits or performing quantum machine learning with large-scale data on current noisy intermediate-scale quantum devices remains challenging, making classical simulation essential for quantum machine learning research. However, classical simulation often suffers from the cost of gradient calculations, requiring enormous memory or computational time. In this paper, to address these problems, we propose a method that fuses multiple consecutive gates in each of the forward and backward passes to improve throughput by minimizing global memory accesses. As a result, we achieve approximately $20$ times throughput improvement for a Hardware-Efficient Ansatz with $12$ or more qubits, reaching over $30$ times improvement on a mid-range consumer GPU with limited memory bandwidth. By combining our proposed method with gradient checkpointing, we drastically reduce memory usage, making it possible to train a large-scale quantum machine learning model, a $20$-qubit, $1,000$-layer model with $60,000$ parameters, using $1,000$ samples in approximately $20$ minutes. This implies that we can train the model on large datasets consisting of tens of thousands of samples, such as MNIST or CIFAR-10, within a realistic time frame (e.g., $20$ hours per epoch). In this way, our proposed method drastically accelerates classical simulation of quantum machine learning, contributing to quantum machine learning research and variational quantum algorithms, for example by verifying algorithms on large datasets or investigating learning theories of deep quantum circuits, such as barren plateaus.
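The core idea can be sketched with a NumPy state-vector simulator: multiplying $k$ consecutive 2x2 gate matrices first replaces $k$ full sweeps over the state vector with one, which is where the memory-access savings come from. This is an illustration only, not the paper's GPU kernels or backward-pass fusion:

```python
import numpy as np
from functools import reduce

def rx(t):
    """Single-qubit X rotation."""
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def apply_1q(state, gate, target, n_qubits):
    """Apply a 2x2 gate to `target` of an n-qubit state vector (one sweep)."""
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(psi, target, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, target).reshape(-1)

n = 10
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0
angles = [0.1, 0.2, 0.3, 0.4]

# Unfused: one sweep over the full state per gate.
s1 = state.copy()
for t in angles:
    s1 = apply_1q(s1, rx(t), target=3, n_qubits=n)

# Fused: multiply the 2x2 matrices (right-to-left), then a single sweep.
fused = reduce(lambda g, h: h @ g, (rx(t) for t in angles))
s2 = apply_1q(state, fused, target=3, n_qubits=n)

assert np.allclose(s1, s2)
```

The 2x2 products are negligible work; the savings come entirely from touching the exponentially large state vector once instead of four times.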

Generation of 12 dB squeezed light from a waveguide optical parametric amplifier using a machine-learning-controlled spatial light modulator

Gyeongmin Ha, Kazuki Hirota, Takahiro Kashiwazaki, Takumi Suzuki, Akito Kawasaki, Warit Asavanant, Mamoru Endo, Akira Furusawa

2603.02744 • Mar 3, 2026

QC: low Sensing: high Network: medium

This paper demonstrates the generation of 12.1 dB squeezed light using a waveguide optical parametric amplifier, overcoming previous limitations by employing a machine-learning-controlled spatial light modulator to minimize losses from spatial mode mismatch between the squeezed light and local oscillator.

Key Contributions

  • Achievement of 12.1 dB squeezed light generation, surpassing previous ~10 dB limitation
  • Implementation of machine-learning-optimized spatial light modulator with double-reflection configuration to minimize spatial mode mismatch losses
squeezed light optical parametric amplifier PPLN waveguide spatial light modulator machine learning optimization
View Full Abstract

We demonstrate the generation of $12.1 \pm 0.2$ dB squeezed light from a periodically poled lithium niobate (PPLN) waveguide optical parametric amplifier (OPA). While single-pass OPAs offer squeezed light with THz-order bandwidths, loss from spatial mode mismatch between the squeezed light and the local oscillator (LO) previously capped the squeezing level at $\sim$10 dB [K. Hirota et al., Opt. Express 34, 7958 (2026)]. In this work, we minimize this loss by introducing a machine-learning-optimized spatial light modulator (SLM) in the path of the LO. Specifically, we employed a double-reflection configuration to increase the spatial degrees of freedom, and directly used the measured squeezing level as the optimization's objective function.
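Why mode-mismatch loss caps measurable squeezing follows from the standard loss model: in shot-noise units, a pure squeezed variance $V = 10^{-S/10}$ observed through total efficiency $\eta$ becomes $\eta V + (1-\eta)$. The sketch below applies this textbook relation (phase noise neglected; this is not the authors' analysis, and the efficiencies are invented for illustration):

```python
import numpy as np

def observed_squeezing_db(S_db, eta):
    """Squeezing level (dB) seen through total detection efficiency eta."""
    V = 10 ** (-S_db / 10)              # pure squeezed variance, shot-noise units
    return -10 * np.log10(eta * V + (1 - eta))

# Even a strongly squeezed source is capped by the anti-squeezed vacuum
# admixed by loss, so small efficiency gains matter a lot near 12 dB.
for eta in (0.90, 0.95, 0.99):
    print(f"eta={eta}: a 20 dB source measures as "
          f"{observed_squeezing_db(20.0, eta):.1f} dB")
```

This is why shaping the local oscillator with an SLM to reduce mode mismatch, as in this work, directly raises the achievable squeezing level.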

Geometric mechanisms enabling spin- and enantio-sensitive observables in one photon ionization of chiral molecules

Philip Caesar M. Flores, Stefanos Carlström, Serguei Patchkovskii, Misha Ivanov, Andres F. Ordonez, Olga Smirnova

2603.02735 • Mar 3, 2026

QC: low Sensing: medium Network: none

This paper studies how spin-polarized electrons are produced when circularly polarized light ionizes chiral molecules, identifying three fundamental geometric mechanisms that control these spin- and chirality-sensitive effects. The work provides a unified theoretical framework that reduces complex photoionization parameters to simple geometric properties described by three pseudovectors.

Key Contributions

  • Identification of three fundamental geometric mechanisms (two intrinsic, one extrinsic) that govern spin- and enantio-sensitive observables in chiral molecule photoionization
  • Reduction of ten independent Cherepkov parameters to moments of three pseudovectors, providing compact expressions and intuitive understanding of chirality-induced spin asymmetries
chiral molecules photoionization spin polarization circular dichroism pseudovectors
View Full Abstract

We examine spin-resolved photoionization of randomly oriented chiral molecules via circularly polarized light, and revisit earlier predictions of Cherepkov (J. Phys. B: Atom. Mol. Phys. 16, 1543, 1983). We show that the dynamical origin of spin- and enantio-sensitive observables arises from two intrinsic mechanisms, quantified by two pseudovectors stemming from the geometric properties of the photoionization dipoles in spin space and in real space, and an extrinsic mechanism, a directional bias introduced by the well-defined direction of light polarization. These mechanisms arise solely from electric dipole interactions. Consequently, the ten independent parameters that Cherepkov earlier predicted to fully describe spin-resolved photoionization of chiral molecules can be reduced to moments of these three pseudovectors. We also find that the molecular pseudoscalars describing the spin- and enantio-sensitive components of the yield can be described by the flux of these pseudovectors through the energy shell, which changes sign upon switching enantiomers. Our results provide compact expressions for these observables and an intuitive picture of what determines their strength. The approach can be readily generalized to photoexcitation, multiphoton processes, and arbitrary field polarizations. Regardless of the specific driving conditions, the resulting spin- and enantio-sensitive observables are controlled by the same three pseudovectors, underscoring their universal role as the primary generators of chirality-induced spin asymmetries and their fundamental geometric origin.

Non-commutative integration method and generalized coherent states

A. I. Breev, D. M. Gitman

2603.02722 • Mar 3, 2026

QC: low Sensing: medium Network: low

This paper investigates the mathematical relationship between quantum states derived using non-commutative integration methods for solving the Schrödinger equation on Lie groups and generalized coherent states. The authors prove that these solutions are equivalent to generalized coherent states under specific conditions involving real λ-representations.

Key Contributions

  • Establishes connection between non-commutative integration solutions and generalized coherent states
  • Provides mathematical proof that solutions belong to generalized coherent state class when λ-representation is real
coherent states, non-commutative integration, Lie groups, Schrödinger equation, quantum states
View Full Abstract

The relationship between states obtained by the non-commutative integration method of the Schrödinger equation on Lie groups and generalized coherent states is investigated. It is shown that such solutions belong to the class of generalized coherent states when the corresponding λ-representation is real.

Correction scheme for total energy obtained on fault-tolerant quantum computer via quantum dominant orbital selection and subspace dynamical correlation methods

Nobuki Inoue, Hisao Nakamura

2603.02715 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper proposes a hybrid quantum-classical method for calculating molecular energies by using quantum computers to identify important molecular orbitals and then applying classical methods to correct for missing electron correlation effects. The approach aims to make quantum chemistry calculations more practical by reducing the quantum computing requirements while maintaining accuracy.

Key Contributions

  • Development of quantum dominant orbital selection (QDOS) method to extract relevant active orbitals from quantum computations
  • Introduction of subspace dynamical correlation (SDC) method to classically correct quantum-computed molecular energies
  • Demonstration of hybrid quantum-classical approach that reduces quantum data readout requirements
quantum chemistry, fault-tolerant quantum computing, molecular energy calculation, hybrid quantum-classical algorithms, active space methods
View Full Abstract

We propose a practical method for accurately evaluating molecular energies using a hybrid approach that integrates fault-tolerant quantum computers with classical computing. Our scheme comprises two complementary methods: quantum dominant orbital selection (QDOS) and subspace dynamical correlation (SDC). The QDOS method extracts only the relevant active orbitals from the complete active space (CAS) configuration interaction (CI) state on a quantum computer, thereby defining a more compact active space suitable for subsequent classical CASCI calculations. The SDC method evaluates a dynamical-correlation correction to the quantum-computed CASCI energy using the compact CASCI state, which can be handled by classical computing. To demonstrate that the CAS energy resulting from the quantum computation is post-corrected by the SDC method, we examine two frameworks for the SDC method: multi-reference perturbation theory and tailored coupled-cluster theory. Our scheme does not require massive quantum data readout and demonstrates the potential to efficiently compute large, complex molecular systems by leveraging quantum-classical hybrid computation with reasonable computational resources.

Neural quantum support vector data description for one-class classification

Changjae Im, Hyeondo Oh, Daniel K. Park

2603.02700 • Mar 3, 2026

QC: medium Sensing: none Network: none

This paper presents NQSVDD, a hybrid classical-quantum machine learning framework that combines neural networks with quantum circuits for one-class classification tasks like anomaly detection. The approach uses quantum measurements to create compact representations of normal data that can be enclosed in a minimum-volume hypersphere for classification.

Key Contributions

  • Novel hybrid classical-quantum framework for one-class classification combining neural networks with variational quantum circuits
  • End-to-end optimization approach that jointly learns feature embeddings and quantum latent representations for anomaly detection
variational quantum circuits, quantum machine learning, hybrid classical-quantum, one-class classification, anomaly detection
View Full Abstract

One-class classification (OCC) is a fundamental problem in machine learning with numerous applications, such as anomaly detection and quality control. With the increasing complexity and dimensionality of modern datasets, there is a growing demand for advanced OCC techniques with better expressivity and efficiency. We introduce Neural Quantum Support Vector Data Description (NQSVDD), a classical-quantum hybrid framework for OCC that performs end-to-end optimized hierarchical representation learning. NQSVDD integrates a classical neural network with trainable quantum data encoding and a variational quantum circuit, enabling the model to learn nonlinear feature transformations tailored to the OCC objective. The hybrid architecture maps input data into an intermediate high-dimensional feature space and subsequently projects it into a compact latent space defined through quantum measurements. Importantly, both the feature embedding and the latent representation are jointly optimized such that normal data form a compact cluster, for which a minimum-volume enclosing hypersphere provides an effective decision boundary. Experimental evaluations on benchmark datasets demonstrate that NQSVDD achieves competitive or superior AUC performance compared to classical Deep SVDD and quantum baselines, while maintaining parameter efficiency and robustness under realistic noise conditions.
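The minimum-volume enclosing hypersphere at the heart of SVDD-style one-class classifiers is easy to illustrate classically. The sketch below is not the paper's NQSVDD model (no neural network or quantum circuit); it is a minimal stand-in that takes the mean of the normal data as the sphere center and a distance quantile as the radius, and the `svdd_fit`/`svdd_score` names are ours.

```python
import math
import random

def svdd_fit(points, quantile=0.95):
    """Naive hypersphere fit: center = mean of the normal data,
    radius = the given quantile of distances to the center.
    (Illustrative stand-in, not the paper's learned embedding.)"""
    dim = len(points[0])
    center = [sum(p[i] for p in points) / len(points) for i in range(dim)]
    dists = sorted(math.dist(p, center) for p in points)
    radius = dists[min(int(quantile * len(dists)), len(dists) - 1)]
    return center, radius

def svdd_score(x, center, radius):
    """Positive score => outside the sphere => flagged anomalous."""
    return math.dist(x, center) - radius

random.seed(0)
normal = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
center, radius = svdd_fit(normal)
print(svdd_score((0.1, -0.2), center, radius) < 0)  # in-cluster: True
print(svdd_score((8.0, 8.0), center, radius) > 0)   # far out: True
```

In NQSVDD the latent space in which this sphere lives is produced by a jointly trained neural network and variational quantum circuit; the decision rule itself is the same distance-versus-radius test.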

Qudit Designs and Where to Find Them

Namit Anand, Jeffrey Marshall, Jason Saied, Eleanor Rieffel, Andrea Morello

2603.02659 • Mar 3, 2026

QC: high Sensing: medium Network: low

This paper develops new mathematical tools called weighted state t-designs for quantum systems with more than two levels (qudits), overcoming fundamental limitations that prevent standard qubit techniques from working in higher dimensions. The authors provide methods for benchmarking qudit systems and analyze the quantum circuit complexity needed to generate approximate designs from hardware-native gates.

Key Contributions

  • General technique to construct weighted state t-designs for arbitrary qudit dimensions, extending shadow tomography from qubits to qudits
  • Introduction of Clifford character randomized benchmarking for qudit systems in any dimension
  • Bounds on quantum circuit complexity for generating approximate unitary designs from native gates in qudit hardware
qudits, unitary designs, randomized benchmarking, shadow tomography, Clifford group
View Full Abstract

Unitary t-designs are some of the most versatile tools in quantum information theory. Their applications range from randomized benchmarking and shadow tomography, to more fundamental ones such as emulating quantum chaos and establishing exponential separations between classical and quantum query complexity. While unitary designs originating from a group structure, such as the Clifford group, have proven to be incredibly useful for qubit systems, unfortunately, this is no longer true for qudits. In fact, the classification of finite-group representations rules out the existence of unitary 2-designs for arbitrary qudit dimensions. This severely limits the applicability of standard quantum information primitives when it comes to qudit systems. We overcome these limitations with a three-fold contribution. First, we introduce a general technique to construct families of weighted state t-designs in arbitrary qudit dimensions. These weighted state designs generalize the classical shadow tomography protocol from qubits to qudits. Second, we introduce a Clifford character randomized benchmarking (RB) protocol that allows us to benchmark the qudit Clifford group in any dimension, including non-prime-power dimensions. And third, we establish bounds on the quantum circuit complexity of generating approximate unitary designs from native gates in existing quantum hardware such as high-spin and cavity-QED qudits. Our work further highlights the analogy between spin and optical coherent states by proving that spin-GKP codewords form a state 2-design while spin coherent states do not; in direct analogy with the optical case. This work is structured as a pedagogical and self-contained introduction to unitary designs and their applications to qudit systems.

Quantum Algorithms for Approximate Graph Isomorphism Testing

Prateek P. Kulkarni

2603.02656 • Mar 3, 2026

QC: high Sensing: none Network: none

This paper develops quantum algorithms for determining when two graphs are approximately the same structure, allowing for small differences. The quantum approach achieves a polynomial speedup over classical methods by using quantum walk techniques to search for similar vertex matchings between graphs.

Key Contributions

  • Novel quantum algorithm for approximate graph isomorphism with O(n^{3/2} log n/ε) query complexity
  • Proof of polynomial quantum speedup over classical algorithms with Ω(n^2) lower bound
  • Extension to spectral similarity measures and demonstration on near-term quantum devices
quantum algorithms, graph isomorphism, quantum walk, MNRS, Grover search
View Full Abstract

The graph isomorphism problem asks whether two graphs are identical up to vertex relabeling. While the exact problem admits quasi-polynomial-time classical algorithms, many applications in molecular comparison, noisy network analysis, and pattern recognition require a flexible notion of structural similarity. We study the quantum query complexity of approximate graph isomorphism testing, where two graphs on $n$ vertices drawn from the Erdős--Rényi distribution $\mathcal{G}(n,1/2)$ are considered approximately isomorphic if they can be made isomorphic by at most $k$ edge edits. We present a quantum algorithm based on MNRS quantum walk search over the product graph $Γ(G,H)$ of the two input graphs. When the graphs are approximately isomorphic, the quantum walk search detects vertex pairs belonging to a dense near-isomorphic matching set; candidate pairings are then reconstructed via local consistency propagation and verified via a Grover-accelerated consistency check. We prove that this approach achieves query complexity $\mathcal{O}(n^{3/2} \log n/\varepsilon)$, where $\varepsilon$ parameterizes the approximation threshold. We complement this with an $Ω(n^2)$ classical lower bound for constant approximation, establishing a genuine polynomial quantum speedup in the query model. We extend the framework to spectral similarity measures based on graph Laplacian eigenvalues, as well as weighted and attributed graphs. Small-scale simulation results on quantum simulators for graphs with up to twenty vertices demonstrate compatibility with near-term quantum devices.
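The quantum walk itself is beyond a short snippet, but the "at most k edge edits" notion of approximate isomorphism is easy to pin down with a classical reference implementation for tiny graphs. The function below (name ours, exhaustive over relabelings and therefore exponential in n) is purely an illustration of the quantity the quantum algorithm estimates faster.

```python
from itertools import permutations

def edit_distance_to_isomorphism(adj_g, adj_h):
    """Minimum number of edge edits making G isomorphic to H, found by
    exhaustive search over vertex relabelings (tiny graphs only;
    a classical reference point, not the paper's algorithm)."""
    n = len(adj_g)
    best = n * n
    for perm in permutations(range(n)):
        # Count edge disagreements between G and the relabeled H.
        edits = sum(
            adj_g[i][j] != adj_h[perm[i]][perm[j]]
            for i in range(n) for j in range(i + 1, n)
        )
        best = min(best, edits)
    return best

# C4 (4-cycle) vs itself, and vs P4 (path), which is one edge edit away.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
P4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(edit_distance_to_isomorphism(C4, C4))  # 0
print(edit_distance_to_isomorphism(C4, P4))  # 1
```

Two graphs are then "approximately isomorphic" at threshold k when this distance is at most k.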

Rate-Fidelity Tradeoffs in All-Photonic and Memory-Equipped Quantum Switches

Panagiotis Promponas, Leonardo Bacciottini, Paul Polakos, Gayane Vardoyan, Don Towsley, Leandros Tassiulas

2603.02610 • Mar 3, 2026

QC: low Sensing: none Network: high

This paper compares two quantum switch architectures for early quantum networks: one using only photons with repeated Bell-state measurements, and another with quantum memories that buffer entanglement for more efficient operations. The authors develop a framework to analyze the rate-fidelity tradeoffs between these approaches and identify optimal operating conditions for different applications.

Key Contributions

  • Formal comparison framework for all-photonic versus memory-equipped quantum switch architectures
  • Characterization of achievable rate-fidelity regions for both designs
  • Benchmarking methodology that maps hardware parameters to network-level performance metrics
quantum entanglement switches, quantum networks, Bell-state measurements, quantum memory, rate-fidelity tradeoffs
View Full Abstract

Quantum entanglement switches are a key building block for early quantum networks, and a central design question is whether near-term devices should use only flying photons or also incorporate quantum memories. We compare two architectures: an all-photonic entanglement generation switch (EGS) that repeatedly attempts Bell-state measurements (BSM) without storing qubits, and a quantum memory-equipped switch that buffers entanglement and triggers measurements only when heralded connectivity is available (herald-then-swap control). These two designs trade off simple, memoryless operation that avoids decoherence and memory-induced latency against heralding-based control that buffers entanglement to use BSMs more efficiently. We formalize both models under a common hardware abstraction and characterize their achievable rate-fidelity regions, yielding a benchmarking methodology that translates hardware and protocol parameters into network-level performance. Numerical evaluation quantifies the rate-fidelity tradeoffs of both models, identifies operating regions in which each architecture dominates, and shows how hardware and protocol knobs can be tuned to meet application-specific targets.

Measurement of a quantum system using spin-mechanical conversion

A. A. Wood, D. S. Rice, T. Xie, F. H. Cassells, R. M. Goldblatt, T. Delord, G. Hétet, A. M. Martin

2603.02507 • Mar 3, 2026

QC: low Sensing: high Network: none

This paper demonstrates a novel quantum sensing technique where the spin states of nitrogen-vacancy centers in a levitated diamond particle are converted into macroscopic mechanical rotation. The researchers achieve over 70% spin readout contrast by measuring how quantum spin flips create tiny torques that deflect laser beams.

Key Contributions

  • Demonstration of spin-mechanical conversion for quantum measurement with >70% readout contrast
  • Direct measurement of attonewton-scale torques from quantum spin flips with temporal resolution
  • Pulsed mechanical detection of quantum coherent phenomena including Rabi oscillations and spin-echo interferometry
nitrogen-vacancy centers, levitated optomechanics, spin-mechanical coupling, quantum sensing, precision measurement
View Full Abstract

Levitated macroscopic particles exhibiting quantum mechanical effects are garnering increased attention as a means for precision sensing and testing quantum mechanics. Defects in diamond, such as the nitrogen-vacancy (NV) centre, possess optically-addressable spins with long coherence times at room temperature and offer an intriguing system to examine quantum spin dynamics coupled to a macroscopic classical particle. In this work, we convert the outcome of a quantum measurement on an ensemble of spins into a macroscopic rotation of the host particle via spin-mechanical coupling. Following a sequence of green laser and microwave control pulses, spin-mechanical coupling between the final qubit spin state and the host particle -- an electrically-levitated diamond -- exerts a torque on the particle that deflects a weak near-infrared laser beam. We measure spin readout contrast in excess of 70%, and demonstrate pulsed mechanical detection of coherent Rabi oscillations, spin-echo interferometry and $T_1$-induced relaxation. We directly measure with temporal resolution the particle reorientation from a 60 attonewton-metre spin torque induced by flipping the spins. Our results open up interesting new opportunities for levitated spin-mechanical systems using pulsed control, from improved sensing to the prospect of realising macroscopic quantum superposition states.

High-Stress Si3N4 Reflective Membranes Monolithically Integrated with Cavity Bragg Mirrors

Megha Khokhar, Lucas Norder, Paolo M. Sberna, Richard A. Norte

2603.02490 • Mar 3, 2026

QC: low Sensing: high Network: medium

This paper develops a new method to integrate high-quality silicon nitride membranes with optical mirrors on a single chip, creating tiny optical cavities that can be precisely controlled by mechanical vibrations. The approach enables mass production of devices that combine excellent optical and mechanical properties for quantum applications.

Key Contributions

  • Monolithic wafer-level integration of high-stress Si3N4 membranes with distributed Bragg reflectors
  • Demonstration of high-finesse optical cavities (finesse >800) with high mechanical quality factors (Q >10^5)
  • Scalable fabrication process that preserves both optical and mechanical coherence properties
optomechanics, silicon nitride, cavity optomechanics, distributed Bragg reflector, precision sensing
View Full Abstract

High-stress silicon nitride (Si3N4) membranes represent the state-of-the-art for cavity optomechanics, combining ultralow dissipation, optical transparency, and full compatibility with wafer-scale nanofabrication. Yet their integration into high-finesse optical cavities has remained difficult, typically requiring bonding or alignment-sensitive assembly that limits scalability and long-term stability. Here, we introduce a monolithic, wafer-level integration strategy that directly suspends high-stress Si3N4 photonic-crystal membranes above thermally compatible SiN/SiO2 distributed Bragg reflectors (DBRs) capable of withstanding the high temperatures required for stoichiometric Si3N4 growth. A defect-free amorphous-silicon sacrificial layer and stiction-free plasma undercut yield vertically coupled cavities with sub-micron spacing, forming self-aligned resonators within seconds of release. Owing to the intrinsic tensile stress, the suspended membranes exhibit atomic-scale sagging, ensuring near-ideal cavity parallelism and long-term stability. Optical reflectivity measurements reveal cavity finesse exceeding 800 with nanoscale gaps between mirrors. Mechanical ringdown measurements show Q > 10^5, indicating that DBR integration preserves the low-dissipation character of high-stress Si3N4. This demonstrates that the integration process preserves the material's exceptional dissipation dilution, supporting straightforward extension to high-Q nanomechanical architectures reported in the literature. The resulting Si3N4-DBR platform unites optical and mechanical coherence with high fabrication yield and design flexibility, enabling scalable optomechanical devices for precision sensing and quantum photonics.

Optimizing Orbital Parameters of Satellites for a Global Quantum Network

Athul Ashok, Owen DePoint, Jackson MacDonald, Albert Williams, Don Towsley

2603.02480 • Mar 3, 2026

QC: low Sensing: none Network: high

This paper optimizes satellite constellation designs for global quantum networks by using Bayesian optimization and genetic algorithms to determine the best orbital parameters and satellite positions. The goal is to maximize entanglement generation rates between satellites and ground stations worldwide.

Key Contributions

  • Comparison of Bayesian optimization and genetic algorithm approaches for satellite constellation design in quantum networks
  • Optimization framework for maximizing entanglement generation rates between satellites and distributed ground stations
quantum networks, satellite constellations, entanglement distribution, Bayesian optimization, genetic algorithms
View Full Abstract

Due to fundamental limitations on terrestrial quantum links, satellites have received considerable attention for their potential as entanglement generation sources in a global quantum internet. In this work, we focus on the problem of designing a constellation of satellites for such a quantum network. We find satellite inclination angles and satellite cluster allocations to achieve maximal entanglement generation rates to fixed sets of globally distributed ground stations. Exploring two black-box optimization frameworks, a Bayesian Optimization (BO) approach and a Genetic Algorithm (GA) approach, we find comparable results, indicating their effectiveness for this optimization task. While GA and BO often perform remarkably similarly, BO tends to converge more efficiently, whereas the continued late-stage improvement observed in the GA indicates less susceptibility to local maxima. In either case, they offer substantial improvements over naive approaches that maximize coverage with respect to ground station placement.
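Neither the paper's entanglement-rate model nor its optimizers are reproduced here; the sketch below only illustrates the black-box setup with a toy one-parameter objective (a made-up coverage function peaked at a hypothetical ground-station latitude) and a bare-bones genetic algorithm. All names and numbers are illustrative.

```python
import math
import random

def coverage(inclination_deg):
    """Toy stand-in for an entanglement-rate objective (NOT the paper's
    model): peaks when the inclination matches a hypothetical
    ground-station latitude of 53 degrees."""
    return math.exp(-((inclination_deg - 53.0) / 10.0) ** 2)

def genetic_search(objective, lo, hi, pop_size=30, generations=40, seed=1):
    """Bare-bones GA: keep the top third each generation (elitism), breed
    the rest by averaging two elite parents plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective, reverse=True)
        elite = pop[: pop_size // 3]                 # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2 + rng.gauss(0, 1.0)  # crossover + mutation
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return max(pop, key=objective)

best = genetic_search(coverage, 0.0, 90.0)
print(f"best inclination ~ {best:.1f} deg")
```

In the paper's setting the objective is a simulated network-wide entanglement rate over many satellites and ground stations, and BO is run against the same black box for comparison.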

Collapse and transition of a superposition of states under a delta-function pulse in a two-level system

Ariel Edery

2603.02407 • Mar 2, 2026

QC: medium Sensing: low Network: none

This paper analyzes how a superposition of quantum states in a two-level system transitions to definite eigenstates when subjected to a delta-function pulse perturbation. The authors derive analytical expressions showing that under specific pulse strengths, the system can 'collapse' from superposition to a definite state with unit probability, resembling measurement-induced collapse but occurring through Schrödinger evolution.

Key Contributions

  • Derived exact analytical expressions for transition probabilities from superposition states to eigenstates under delta-function pulses
  • Demonstrated that specific pulse strengths can deterministically collapse superposition states to definite eigenstates with unit probability
two-level system, superposition collapse, delta-function pulse, transition probability, eigenstate transitions
View Full Abstract

Under a time-dependent perturbation it is common to calculate the transition probability in going from one eigenstate to another eigenstate of a quantum system. In this work we study the transition from a \textit{linear superposition of eigenstates} to an eigenstate under a delta-function pulse (which acts at $t=0$). We consider a two-level system with energy levels $E_1$ and $E_2$ and solve the coupled set of first-order equations to obtain exact analytical expressions for the coefficients $c_1(t>0)$ and $c_2(t>0)$ of the final state. The expressions for the final coefficients are general in the sense that they are functions of the interaction strength $β$ and the coefficients $α_1$ and $α_2$ of the initial superposition state, which are free parameters constrained only by $|α_1|^2+ |α_2|^2=1$. This opens up new possibilities and, in particular, allows for a "collapse" scenario. We obtain a general analytical expression for the transition probability $P_{α_1,α_2 \to 2}$ in going from an initial superposition state to the second eigenstate. Armed with this general expression, we study some interesting special cases. With a delta-function pulse, the transitions are abrupt/instantaneous, and we show that they do not depend on the energy gap $E_2-E_1$ and hence on the relative phase between the two eigenstates. For specific values of the interaction strength $β$, we show that the system ends up in a definite eigenstate, i.e. with probability unity. Such a transition can be viewed as a "collapse" since a superposition of states transitions abruptly to a definite eigenstate. The collapse of the wavefunction is familiar in the context of a measurement; here it occurs via a delta-function pulse in Schrödinger's equation. We discuss how this differs from a collapse due to a measurement.
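A hedged numerical sketch of this setup, assuming the delta pulse couples the two levels off-diagonally so that the kick acts as the instantaneous unitary U = exp(-i beta sigma_x); the paper's exact coefficients are more general, and the `kick` function below is our illustrative model, not the author's.

```python
import math

def kick(alpha1, alpha2, beta):
    """Delta-function pulse modeled as the instantaneous unitary
    U = exp(-i*beta*sigma_x) acting on alpha1|1> + alpha2|2>
    (off-diagonal coupling assumed; this model is illustrative)."""
    c1 = math.cos(beta) * alpha1 - 1j * math.sin(beta) * alpha2
    c2 = -1j * math.sin(beta) * alpha1 + math.cos(beta) * alpha2
    return c1, c2

# Norm is preserved for any superposition and pulse strength beta.
c1, c2 = kick(0.6, 0.8j, 1.234)
print(abs(abs(c1)**2 + abs(c2)**2 - 1) < 1e-12)  # True

# A "collapse": the superposition (|1> - i|2>)/sqrt(2) ends up
# entirely in |2> after a beta = pi/4 kick.
c1, c2 = kick(1 / math.sqrt(2), -1j / math.sqrt(2), math.pi / 4)
print(abs(abs(c2)**2 - 1) < 1e-12)  # True
```

The second example shows the advertised behavior in miniature: for a matched pulse strength and initial relative phase, the superposition transitions to a definite eigenstate with unit probability.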

Barenco gate implementation using driven two- and three-qubit spin chains

Rafael Vieira, Edgard P. M. Amorim

2603.02387 • Mar 2, 2026

QC: high Sensing: none Network: none

This paper presents an analytical method for implementing multi-qubit quantum gates (like CNOT and Toffoli) using driven spin chains with Ising interactions. The authors derive effective Hamiltonians and provide explicit conditions for high-fidelity gate operations in two- and three-qubit systems.

Key Contributions

  • Fully analytical protocol for implementing Barenco gates using driven spin chains
  • Explicit conditions and closed-form expressions for time-evolution operators in decoupled subspaces
  • Demonstration of high-fidelity CNOT and Toffoli gate implementation with robust parameter ranges
quantum gates, spin chains, CNOT, Toffoli, Barenco gates
View Full Abstract

We propose a protocol for implementing Barenco-type multi-qubit controlled gates using short driven spin chains. Starting from an Ising interaction with a transverse drive on the last spin, we construct an effective two-qubit Hamiltonian whose time evolution implements the Barenco gate $V_2(\varphi,ω,φ)$ and, in particular, a CNOT gate. We then embed this construction into a three-qubit $XXZ$ chain to realize the three-qubit Barenco gate $V_3(\varphi,ω,φ)$, which includes the Toffoli gate as a special case. The derivation is fully analytical: we perform a sequence of unitary transformations, identify decoupled subspaces, and apply a rotating-wave approximation to obtain simple effective Hamiltonians. We derive explicit conditions on the coupling strengths and driving parameters, provide closed-form expressions for the time-evolution operators in each relevant subspace, and characterize the quality of the implementation using the operator fidelity. Numerical simulations show that the protocol achieves high fidelities over broad parameter ranges, demonstrating its robustness and suitability for quantum information processing in spin-chain platforms.

EAQKD: Entanglement-Based Authenticated Quantum Key Distribution

Noureldin Mohamed, Saif Al-Kuwari

2603.02375 • Mar 2, 2026

QC: none Sensing: none Network: high

This paper presents EAQKD, a new quantum key distribution protocol that combines entanglement-based communication with information-theoretic authentication to provide unconditionally secure key exchange. The researchers demonstrate through simulation that their protocol can maintain secure key rates over distances up to 200 km and potentially beyond 500 km when combined with quantum repeaters.

Key Contributions

  • Novel EAQKD protocol integrating entanglement distribution with information-theoretic authentication
  • Comprehensive simulation framework demonstrating practical performance with QBER below 11% security threshold
  • Extension of secure communication range beyond 500 km using quantum repeater integration
quantum key distribution, entanglement, authentication, quantum communication, quantum cryptography
View Full Abstract

The promise of unconditional security in Quantum Key Distribution (QKD) depends on the availability of an authenticated classical channel. However, practical implementations often overlook this requirement or rely on computational assumptions that compromise long-term security. To overcome these challenges, this paper presents Entanglement-Based Authenticated Quantum Key Distribution (EAQKD), a novel protocol that addresses critical security and practical limitations in quantum cryptographic key exchange. Our approach integrates quantum entanglement distribution with information-theoretic authentication. We evaluate EAQKD's performance through a comprehensive discrete-event simulation framework modeled on realistic channel characteristics and experimental device parameters. Our modeling incorporates parameters from practical quantum optics setups, including SPDC entanglement sources, superconducting nanowire detectors, and fiber channel imperfections. Our results show quantum bit error rates consistently below the 11% security threshold (ranging from 1.86% at 10 km to 9.27% at 200 km), with secure key rates achieving $1.12 \times 10^5$ bits/s at short distances and maintaining practical rates of 9.8 bits/s at 200 km. When integrated with quantum repeater architectures, our analysis projects that EAQKD can extend secure communication beyond 500 km while providing information-theoretic security guarantees. Comparative analysis against the BB84, E91, and Twin-Field QKD protocols demonstrates EAQKD's superior balance of security, practical performance, and implementation robustness. This work advances quantum cryptography by providing a rigorously analyzed engineering reference for secure key distribution in future quantum communication networks.
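For context, the 11% QBER security threshold quoted above coincides with the well-known asymptotic BB84 secret-key bound r = 1 - 2*h2(Q) (the Shor-Preskill bound). The check below is a generic sanity check of that threshold, not the paper's EAQKD analysis.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def key_fraction(qber):
    """Asymptotic BB84 secret-key fraction (Shor-Preskill bound):
    r = 1 - 2*h2(Q); positive only while Q is below roughly 11%."""
    return 1 - 2 * h2(qber)

# The QBER range reported in the abstract stays on the secure side...
print(key_fraction(0.0186) > 0, key_fraction(0.0927) > 0)  # True True
# ...while above ~11% no secret key remains.
print(key_fraction(0.12) < 0)  # True
```

This is why keeping the measured QBER between 1.86% and 9.27% across the 10-200 km range leaves a positive, if shrinking, key fraction.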

Solution of Quantum Quartic Potential Problems with Airy Fredholm Operators

Ori J. Ganor

2603.02374 • Mar 2, 2026

QC: low Sensing: low Network: none

This paper introduces new mathematical operators called Airy Fredholm operators that can solve quantum mechanical problems involving quartic (fourth-power) potentials, including anharmonic oscillators and quantum field theories. The operators commute with the system Hamiltonians and have exponentially decaying eigenvalues, potentially enabling more accurate numerical calculations.

Key Contributions

  • Introduction of Airy Fredholm integral operators that commute with quartic potential Hamiltonians
  • Development of dual chain representations for quantum systems with quartic potentials
  • Extension to multivariable and higher-dimensional quantum systems including field theories
Fredholm operators, quartic potentials, anharmonic oscillator, Airy function, quantum field theory
View Full Abstract

Fredholm integral operators that commute with the Hamiltonians of certain quantum mechanical problems with quartic potentials are introduced. The operators are expressed in terms of an Airy function, and their eigenvalues fall off exponentially fast. They may help with high-accuracy numerical analysis, and their existence leads to dual descriptions in terms of infinite one-dimensional chains with variables on nodes, and weights on nodes and links. The systems discussed include the anharmonic quartic oscillator as well as multivariable potentials and higher dimensional systems, including certain quantum field theories with nonlocal interactions.

Enhancing entanglement asymmetry in fragmented quantum systems

Lorenzo Gotta, Filiberto Ares, Sara Murciano

2603.02338 • Mar 2, 2026

QC: medium Sensing: low Network: low

This paper studies entanglement asymmetry, a measure of symmetry breaking in quantum many-body systems, focusing on how it behaves in systems with fragmented Hilbert spaces. The authors derive bounds on asymmetry values and show that fragmented systems can exhibit extensively scaling asymmetry, providing a way to distinguish between classical and quantum fragmentation.

Key Contributions

  • Generalization of entanglement asymmetry to fragmented quantum systems using commutant algebra formalism
  • Derivation of universal bounds on asymmetry for both conventional and fragmented symmetries
  • Demonstration that asymmetry can scale extensively in fragmented systems, distinguishing quantum from classical fragmentation
entanglement asymmetry, Hilbert space fragmentation, many-body quantum systems, U(1) symmetries, random matrix product states
View Full Abstract

Entanglement asymmetry provides a quantitative measure of symmetry breaking in many-body quantum states. Focusing on inhomogeneous $U(1)$ charges, such as dipole and multipole moments, we show that the typical asymmetry is bounded by a specific fraction of its maximal value, and verify this behavior in several settings, including random matrix product states. Within the latter ensemble, by identifying the bond dimension with an effective time, we qualitatively reproduce recent findings on the entanglement asymmetry dynamics in random quantum circuits, thereby suggesting a universal dynamical structure of the asymmetry of $U(1)$ charges in local ergodic systems. Multipole charges naturally arise in systems with Hilbert-space fragmentation, where the dynamics splits into exponentially many disconnected sectors. Using the commutant algebra formalism, we generalize entanglement asymmetry to account for fragmentation. We derive general upper bounds for both conventional and fragmented symmetries and identify states that saturate them. While the asymmetry grows logarithmically for conventional symmetries, it can scale extensively in fragmented systems, providing a probe that distinguishes classical from genuinely quantum fragmentation.

Thirty-six quantum officers are entangled

Simeon Ball, Robin Simoens

2603.02334 • Mar 2, 2026

QC: low Sensing: none Network: low

This paper proves that classical orthogonal Latin squares of order 6 (Euler's thirty-six officers problem) have no solution, but demonstrates that quantum versions using entangled states do exist. The authors show that without entanglement, even quantum Latin squares of order 6 cannot be mutually orthogonal.

Key Contributions

  • Proof that mutually orthogonal quantum Latin squares of order 6 require entanglement
  • Evidence that entanglement is the essential resource behind quantum solutions to combinatorial problems with no classical solution
quantum entanglement Latin squares orthogonality combinatorics Euler problem
View Full Abstract

There exist pairs of orthogonal Latin squares of any order n except if n=2 or n=6 [Bose, Shrikhande and Parker, 1960]. In particular, the problem of Euler's thirty-six officers does not have a solution. However, it has a "quantum solution": there exist so-called entangled quantum Latin squares of order six [Rather et al., 2022]. We prove that mutually orthogonal quantum Latin squares of order six do not exist if entanglement is not allowed.
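For readers unfamiliar with the combinatorics: two Latin squares are orthogonal when superimposing them produces every ordered symbol pair exactly once. Such pairs exist for order 3 but, as the abstract notes, not for order 6. A small illustrative check (not code from the paper):

```python
# Two order-3 Latin squares: each row and column contains 0, 1, 2 once.
# For order 6, no orthogonal pair exists (Euler's thirty-six officers).
A = [[0, 1, 2],
     [1, 2, 0],
     [2, 0, 1]]
B = [[0, 1, 2],
     [2, 0, 1],
     [1, 2, 0]]

def is_latin(sq):
    # Every row and every column is a permutation of 0..n-1.
    n = len(sq)
    cols = list(zip(*sq))
    return (all(sorted(row) == list(range(n)) for row in sq)
            and all(sorted(col) == list(range(n)) for col in cols))

def are_orthogonal(a, b):
    # Orthogonal <=> superimposing yields each ordered pair exactly once.
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

print(is_latin(A), is_latin(B), are_orthogonal(A, B))  # True True True
```

A square superimposed with itself fails the test (only the diagonal pairs appear), which is why orthogonality is a nontrivial joint property of the pair.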

Single-photon emitters and spin-photon interfaces in silicon

Kilian Sandholzer, Ian Berkman, Peter Deák, Carlos Errando-Herranz, Petros Filippatos, Adam Gali, Andreas Gritsch, Andreas Reiserer

2603.02201 • Mar 2, 2026

QC: medium Sensing: low Network: high

This paper reviews silicon-based single-photon emitters and spin-photon interfaces that can generate individual photons and store quantum information using electron spins. The work focuses on using silicon's advanced manufacturing capabilities and long spin coherence times to create practical quantum networking hardware.

Key Contributions

  • Comprehensive review of silicon-based single-photon emitters for quantum applications
  • Analysis of spin-photon interfaces in silicon for quantum memory and networking
  • Assessment of silicon's advantages including advanced nanofabrication and long spin coherence times
single-photon emitters spin-photon interfaces silicon photonics quantum networks color centers
View Full Abstract

Single photons enable the distribution of quantum information over large distances and thus play a major role in quantum technologies such as communication and computing. Solid-state emitters are practical and efficient sources of single photons that can be manufactured in large numbers. When combined with a spin, the resulting spin-photon interfaces can store quantum states for extended periods and serve as the basis for quantum networks and repeaters. Among the many host materials explored over the past few decades, silicon stands out for its advanced nanofabrication, the maturity of its integrated photonics and microelectronics, and its high isotopic purity, which leads to exceptionally long spin coherence. These properties position silicon single-photon emitters and spin-photon interfaces among the most promising hardware platforms for implementing quantum networks and distributed quantum information processors. This review summarizes the current state of the art and open challenges towards coherent single-photon sources and scalable spin-photon interfaces based on color centers and erbium dopants in nanophotonic silicon structures.

Quantum algorithm for the lattice Boltzmann method with applications on real quantum devices

Antonio Bastida-Zamora, Ljubomir Budinski, Oskari Kerppo, Valtteri Lahtinen, Ossi Niemimäki, William Steadman, Roberto Zamora-Zamora, Pierre Sagaut, ...

2603.02127 • Mar 2, 2026

QC: high Sensing: none Network: none

This paper develops a new quantum algorithm for solving fluid dynamics problems using the lattice Boltzmann method, which simulates fluid flow by tracking particle distributions on a grid. The researchers demonstrated their algorithm on real IBM quantum computers for both linear acoustic problems and nonlinear fluid flow scenarios.

Key Contributions

  • Novel quantum algorithm for lattice Boltzmann method with improved flexibility for modeling different physics
  • Successful implementation and testing on real IBM quantum hardware for both linear and nonlinear fluid dynamics problems
quantum algorithm lattice Boltzmann method computational fluid dynamics quantum simulation hybrid quantum computing
View Full Abstract

We introduce a novel quantum algorithm for the lattice Boltzmann method (LBM) based on the one-step simplified LBM. The structure of the algorithm allows for more flexibility in modelling different physics in contrast to earlier quantum algorithms for the LBM, while retaining computational efficiency in terms of the gate and qubit complexity. The new algorithm has potential for full end-to-end quantum utility especially for linear problems. We discuss the implementation of examples in linear acoustics, as well as a nonlinear Navier-Stokes problem that was solved on an IBM QPU in a hybrid simulation loop.
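The classical lattice Boltzmann method that the quantum algorithm builds on alternates a streaming step (populations move to neighboring sites) with a collision step (relaxation toward a local equilibrium). A minimal classical D1Q2 diffusion sketch of that stream-and-collide structure — illustrative only, and not the paper's quantum algorithm or its one-step simplified LBM:

```python
import numpy as np

# Minimal classical D1Q2 lattice Boltzmann solver for 1D diffusion.
# Grid size, relaxation time, and step count are illustrative choices.
N, tau, steps = 64, 1.0, 200
x = np.arange(N)
rho = np.exp(-0.5 * ((x - N / 2) / 4) ** 2)  # initial Gaussian density
mass0 = rho.sum()                            # total mass, should be conserved
f = np.stack([rho / 2, rho / 2])             # right- and left-moving populations

for _ in range(steps):
    f[0] = np.roll(f[0], 1)                  # stream right-movers (periodic)
    f[1] = np.roll(f[1], -1)                 # stream left-movers
    rho = f.sum(axis=0)                      # local density
    feq = np.stack([rho / 2, rho / 2])       # zero-drift equilibrium
    f += (feq - f) / tau                     # BGK collision (relaxation)

# Mass is conserved while the Gaussian spreads diffusively.
```

Quantum LBM proposals encode the populations in amplitudes and implement the streaming and collision maps as (block-encoded) unitaries; the classical loop above is just the structure being ported.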

Transmitting Correlation for Data Transmission over the Bosonic Arbitrarily Varying Channel

Janis Nötzel, Florian Seitz

2603.02078 • Mar 2, 2026

QC: none Sensing: low Network: high

This paper develops methods for quantum communication systems to resist jamming attacks by using shared randomness distributed through either classically correlated thermal light or quantum entangled states. The work focuses on optical communication channels where both sender and receiver use homodyne detection to counter energy-limited jammers.

Key Contributions

  • Demonstrates how classically correlated thermal light and entangled two-mode squeezed states can be used to establish shared randomness against jamming attacks
  • Provides a practical framework for jam-resistant quantum communication using standard homodyne detection with power constraints
quantum communication bosonic channels entanglement homodyne detection jamming resistance
View Full Abstract

Shared randomness is the central ingredient for stabilizing symmetrizable communication systems against arbitrarily varying jammers. Given the presence of the jammer, however, the question arises how this precious resource could have been distributed. Several works discuss the use of external sources for this task. In this work, we show, based on the most standard optical communication model, how the sender and receiver can employ either classically correlated thermal light or entangled two-mode squeezed states created at and transmitted by the sender to counter the jamming attack of an energy-limited jammer during the distribution phase. Both sender and receiver are only allowed to use homodyne detection in our model, and the sender has to obey a power limit as well.

Levitated Ferromagnetic Torsional Oscillators for High-Precision Magnetometry and Probing Exotic Interactions

Ren Yichong, Wu Lielie, Broer Wijnand, Xue Fei, Huang Pu, Du JiangFeng

2603.02074 • Mar 2, 2026

QC: low Sensing: high Network: none

This paper demonstrates a levitated ferromagnetic torsion oscillator that can detect extremely weak magnetic fields with sensitivity of 391 femtotesla per square root hertz. The device operates at room temperature in a compact volume and uses mechanical isolation to minimize noise, potentially enabling searches for new physics beyond the Standard Model.

Key Contributions

  • Achieved exceptional magnetic field sensitivity of 391±59 fT·Hz^{-1/2} using levitated ferromagnetic torsion oscillator
  • Demonstrated compact room-temperature magnetometer with potential for probing exotic interactions beyond Standard Model physics
magnetometry levitated systems torsion oscillator precision measurement ferromagnetic sensing
View Full Abstract

Levitated ferromagnetic systems are expected to have significant potential in precision magnetic field sensing by leveraging mechanical isolation to minimize mechanical contact and associated noise. Here, we report the implementation of a high-sensitivity magnetometer based on a levitated ferromagnetic torsion oscillator, incorporating a centroid tracking method for superior measurement resolution and noise reduction. The device, featuring a compact sensor volume of $(2.5 \, \rm{mm})^3$ and operating at room temperature, attains a remarkable magnetic sensitivity of {$391\pm 59 \, \rm{fT\cdot Hz^{-1/2}}$}. This capability enables precise detection of weak magnetic fields and provides a novel platform for exploring exotic interactions beyond the Standard Model. These results demonstrate that the levitated torsion oscillator system not only serves as a powerful tool for high-precision magnetic sensing but also holds promise for advancing breakthroughs in fundamental physics.
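To put the quoted 391 fT·Hz^{-1/2} figure in perspective: for white-noise-limited operation, a sensitivity S translates into a minimum detectable field of roughly B_min ≈ S/√T after averaging for time T. A quick back-of-envelope calculation (the averaging times are assumed examples, not values from the paper):

```python
import math

# Reported sensitivity of the levitated torsion-oscillator magnetometer.
S = 391e-15  # T / sqrt(Hz)

# White-noise-limited minimum detectable field after averaging time T.
for T in (1, 100, 10_000):  # seconds (illustrative choices)
    B_min = S / math.sqrt(T)
    print(f"T = {T:6d} s  ->  B_min ~ {B_min:.3g} T")
```

So averaging for about 100 s would, under this idealized scaling, resolve fields near 40 fT; in practice drift and 1/f noise limit how far the √T improvement extends.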

Anisotropic two-dimensional magnetoexciton with exact center-of-mass separation

Dang-Khoa D. Le, Hoang-Viet Le, Dai-Nam Le, Duy-Anh P. Nguyen, Thanh-Son Nguyen, Ngoc-Tram D. Hoang, Van-Hoang Le

2603.02051 • Mar 2, 2026

QC: low Sensing: medium Network: none

This paper develops an exact mathematical framework for analyzing excitons (electron-hole pairs) in anisotropic 2D materials under magnetic fields, improving upon previous approximate methods. The researchers apply their method to specific materials like black phosphorus to calculate energy levels and magnetic responses more accurately.

Key Contributions

  • Exact analytical framework for center-of-mass and relative motion separation in anisotropic 2D magnetoexcitons without approximations
  • Non-perturbative solutions using Feranchuk-Komarov operator method and Levi-Civita transformation for systematically convergent results
magnetoexcitons anisotropic 2D materials magnetic field effects black phosphorus magneto-optical phenomena
View Full Abstract

Excitons in anisotropic two-dimensional (2D) materials, defined by direction-dependent effective masses, are of pronounced interest for their roles in excitonic and magneto-optical phenomena. A perpendicular magnetic field complicates the separation of center-of-mass (c.m.) and relative motions, especially when electron and hole masses are comparable. Conventional theories often employ an approximate c.m. separation using factorized wave functions, modifying magnetic Hamiltonian terms and possibly introducing inaccuracies in magnetoexciton energy predictions. This work develops an exact analytical framework for c.m. and relative motion separation in anisotropic 2D magnetoexcitons, without resorting to the stationary-c.m. approximation. Starting from the full electron-hole Hamiltonian in a homogeneous magnetic field, the formalism uses the conserved pseudomomentum to derive a relative-motion Hamiltonian, revealing new anisotropy-dependent couplings and magnetic coefficients absent in approximate models. The resulting Schrödinger equation is treated via the Feranchuk-Komarov operator method and Levi-Civita transformation, allowing non-perturbative, systematically convergent solutions. Application to monolayer black phosphorus and titanium trisulfide, both freestanding and encapsulated in hexagonal boron nitride, yields magnetoexciton energies, diamagnetic coefficients, and probability densities for the ten lowest states across considerable magnetic-field ranges. The results demonstrate the significant influence of anisotropy-dependent coupling on magnetic response in systems with strong mass anisotropy. This formalism is generalizable to other anisotropic 2D semiconductors, establishing a foundation for advanced magneto-optical studies.

Using anti-squeezed Schrödinger cat states for detection of a given phase shift

V. L. Gorshenin, K. D. Dyadkin, S. D. Chikalkin

2603.02038 • Mar 2, 2026

QC: low Sensing: high Network: low

This paper proposes using anti-squeezed Schrödinger cat states (special quantum light states) to improve the detection of small phase shifts in optical interferometers. The researchers show that anti-squeezed states make the measurement system more robust against optical losses compared to traditional squeezed light states.

Key Contributions

  • Demonstration that anti-squeezed Schrödinger cat states provide enhanced robustness to optical losses in phase detection
  • Optimization of anti-squeezing parameters for experimentally achievable conditions and comparison with Gaussian squeezed states
quantum metrology Schrödinger cat states anti-squeezing optical interferometry phase detection
View Full Abstract

We propose to use the antisqueezing-enhanced non-Gaussian Schrödinger cat quantum states of the probing light for the task of detection of a given phase shift in optical interferometers. We show that the antisqueezing allows one to increase the robustness of the setup to optical losses. We find the optimal degrees of the antisqueezing for experimentally achievable values of the Schrödinger cat amplitude and the optical losses and compare the resulting sensitivity with the one provided by the Gaussian squeezed states.

Decoherence and entropy production due to quantum fluctuations of spacetime

Thiago H. Moreira

2603.02034 • Mar 2, 2026

QC: low Sensing: medium Network: low

This paper studies how gravitational effects cause quantum systems to lose their quantum properties (decoherence) and examines the fundamental irreversibility that occurs when quantum systems interact with fluctuations in spacetime itself.

Key Contributions

  • Demonstrates that graviton interactions cause decoherence of spatial superpositions in microscopic systems over long time scales
  • Shows that entropy production arises from quantum fluctuations of spacetime when external agents drive quantum systems through graviton baths
decoherence gravitons open quantum systems entropy production spacetime fluctuations
View Full Abstract

The intersection between quantum mechanics and gravitational physics has been providing challenging puzzles for decades. In this thesis, we study the dynamics of an open quantum system coupled with a bath of gravitons, the quanta of the gravitational field in the linear limit of general relativity. We focus on two main aspects. First, we analyze the decoherence induced by gravitons when we consider the open system to be described by both external and internal degrees of freedom. Since gravity is universal, the internal variables also interact with the gravitons, and here we show that this interaction leads to the decoherence of spatial superpositions of microscopic systems in the long-time regime, even when the graviton bath alone does not. We then proceed to the second main aspect, which is the entropy production that arises when an external agent drives a quantum system through the graviton bath. This irreversibility comes from quantum fluctuations of spacetime itself and, as such, has a fundamentally universal aspect.

Tensor-network methodology for super-moiré excitons beyond one billion sites

Anouar Moustaj, Yitao Sun, Tiago V. C. Antão, Lumen Eek, Jose L. Lado

2603.02011 • Mar 2, 2026

QC: none Sensing: none Network: none
View Full Abstract

Computing excitonic spectra in quasicrystal and super-moiré systems constitutes a formidable challenge due to the exceptional size of the excitonic Hilbert space. Here, we demonstrate a tensor-network method for the real-space Bethe-Salpeter Hamiltonian, allowing us to access the spectra of an excitonic $10^{18}$-dimensional Hamiltonian, and enabling the direct computation of bound-exciton spectral functions for systems exceeding one billion lattice sites, several orders of magnitude beyond the capabilities of conventional approaches. Our method combines a tensor-network encoding of the real-space Bethe-Salpeter Hamiltonian with a Chebyshev tensor network algorithm. This strategy bypasses explicit storage of the Hamiltonian while preserving full real-space resolution across widely different length scales. We demonstrate our methodology for one- and two-dimensional super-moiré systems, achieving the simultaneous resolution of atomistic and mesoscopic structures in the excitonic spectra in billion-size systems, showing exciton miniband formation and moiré-induced spatial confinement. Our results establish a real-space methodology enabling the simulation of excitonic physics in large-scale quasicrystal and super-moiré quantum matter.

Cavity-enhanced optical readout and control of nuclear spin qubits

Alexander Ulanowski, Johannes Früh, Fabian Salamon, Adrian Holzäpfel, Andreas Reiserer

2603.01987 • Mar 2, 2026

QC: medium Sensing: low Network: high

This paper demonstrates all-optical control and readout of individual nuclear spin qubits using erbium atoms in a crystal placed inside a high-quality optical cavity. The system achieves exceptionally long coherence times of 0.2 seconds and 91% readout fidelity, making it suitable for quantum memory applications in fiber-based quantum networks.

Key Contributions

  • Achieved all-optical initialization, control, and readout of individual nuclear spin qubits with 91% fidelity
  • Demonstrated nuclear spin coherence times exceeding 0.2 seconds using magnetic field stabilization
  • Established 167-Er in cavities as a platform for telecommunications-compatible quantum networks
nuclear spin qubits quantum memory optical cavity quantum networks erbium
View Full Abstract

Their exceptional coherence makes nuclear spins in solids a prime candidate for quantum memories in quantum networks and repeaters. Still, the direct all-optical initialization, coherent control, and readout of individual nuclear spin qubits have been an outstanding challenge. Here, this is achieved by embedding 167-Er dopants in yttrium orthosilicate in a cryogenic Fabry-Perot cavity, whose linewidth of 65 MHz is much smaller than the 0.9 GHz separation of neighboring hyperfine levels. Frequency-selective emission enhancement thus enables a single-shot readout fidelity of 91(2)%. Furthermore, a large magnetic field freezes paramagnetic impurities, leading to coherence times exceeding 0.2 s. The combination of nuclear-spin qubits with frequency-multiplexed addressing and lifetime-limited photon emission in the minimal-loss telecommunications C-band establishes 167-Er as a leading platform for long-range, fiber-based quantum networks.

Quantum Network Simulation and Emulation: A Roadmap for Quantum Internet Design

Brian Doolittle, Michael Cubeddu

2603.01980 • Mar 2, 2026

QC: medium Sensing: none Network: high

This paper reviews the current state of quantum network simulation and emulation tools, identifies bottlenecks in classical approaches, and proposes a roadmap for developing quantum-enhanced simulation methods to support quantum internet design and deployment.

Key Contributions

  • Comprehensive review of existing quantum network simulation and emulation tools
  • Identification of scalability bottlenecks in classical simulation methods
  • Roadmap for quantum-enhanced simulation approaches integrated with quantum network testbeds
quantum networks quantum internet network simulation quantum emulation quantum testbeds
View Full Abstract

Quantum networks are advancing the information technology infrastructure of society. Simulation and emulation software tools have emerged to support the design, development, and deployment of quantum networks; however, classical simulation and emulation methods have major bottlenecks in the error, latency, and cost that they can achieve at scale. In this work, we review quantum network simulation and emulation tools, including foundational principles, state-of-the-art tools, and bottlenecks. We then discuss how quantum technologies can address these challenges, and we construct a roadmap for the adoption of quantum simulation and emulation tools, emphasizing codesign with quantum network testbeds.

Minimal-backaction work statistics of coherent engines

Milton Aguilar, Franklin L. S. Rodrigues, Eric Lutz

2603.01962 • Mar 2, 2026

QC: medium Sensing: medium Network: none

This paper develops a new measurement technique using dynamic Bayesian networks to study work statistics in quantum engines without disrupting their quantum coherence. The method minimally disturbs the engine's operation, unlike standard measurement approaches that can interfere so much they prevent coherent quantum engines from functioning properly.

Key Contributions

  • Development of minimal-backaction measurement scheme using dynamic Bayesian networks for quantum engines
  • Demonstration that standard two-point measurements can disrupt coherent quantum engines so severely they cease functioning
  • Finding that universal fluctuation bounds may not apply to coherent quantum machines
quantum engines measurement backaction dynamic Bayesian networks quantum coherence work statistics
View Full Abstract

Determining the work statistics of quantum engines is challenging due to measurement backaction. We here show that a dynamic Bayesian network-based measurement scheme, which preserves quantum coherence within an engine cycle, is minimally invasive, in the sense that the averaged measured state over one cycle exactly coincides with the unmeasured state. It therefore provides a general framework to investigate energy exchange statistics in quantum machines. This stands in contrast to the standard two-point measurement protocol, whose backaction can be so strong that it generally fails to reproduce the average work output of a coherent motor. It may even alter its mode of operation, causing it to cease functioning as an engine under observation. We further demonstrate that recently proposed universal fluctuation bounds do not necessarily apply to coherent machines.

Theory of the Uhlmann Phase in Quasi-Hermitian Quantum Systems

Xu-Yang Hou, Xin Wang, Hao Guo

2603.01908 • Mar 2, 2026

QC: medium Sensing: high Network: low

This paper develops a mathematical theory for understanding geometric phases in quantum systems that are quasi-Hermitian (a type of non-standard quantum system), particularly when mixed with thermal noise at finite temperatures. The authors show how these geometric phases can be experimentally measured and reveal new topological phase transitions driven by temperature changes.

Key Contributions

  • Development of comprehensive theory of Uhlmann phase for quasi-Hermitian quantum systems with parameter-dependent metrics
  • Discovery of rich finite-temperature topological phase diagrams in two-level models with thermal-driven phase transitions
  • Extension of interferometric protocols to enable experimental measurement of geometric phases via Loschmidt fidelity
geometric phases Uhlmann phase quasi-Hermitian systems topological phases finite temperature
View Full Abstract

Geometric phases play a fundamental role in understanding quantum topology, yet extending the Uhlmann phase to non-Hermitian systems poses significant challenges due to parameter-dependent inner product structures. In this work, we develop a comprehensive theory of the Uhlmann phase for quasi-Hermitian systems, where the physical Hilbert space metric varies with external parameters. By constructing a generalized purification that respects the quasi-Hermitian inner product, we derive the corresponding parallel transport condition and Uhlmann connection. Our analysis reveals that the dynamic metric induces emergent geometric features absent in the standard Hermitian theory. Applying this formalism to solvable two-level models, we uncover rich finite-temperature topological phase diagrams, including multiple transitions between trivial and nontrivial phases driven by thermal fluctuations. Crucially, the quasi-Hermitian parameters are shown to profoundly influence the stability of topological regimes against temperature, enabling nontrivial phases to persist within finite-temperature windows. Furthermore, by extending established interferometric protocols originally developed for Hermitian systems, the geometric amplitude can be recast as a measurable Loschmidt fidelity between purified states, providing a practical and experimentally accessible pathway to investigate quasi-Hermitian mixed-state geometric phases and their finite-temperature transitions. This work establishes a unified framework for understanding mixed-state geometric phases in non-Hermitian quantum systems and opens a practical avenue for their experimental investigation.

Configurational control of photon emission from a molecular dimer

Maximilian Kögler, Nicolas Néel, Jörg Kröger

2603.01897 • Mar 2, 2026

QC: low Sensing: medium Network: low

This paper studies how tin-phthalocyanine molecules emit light when excited by electrical current in a scanning tunneling microscope, finding that pairs of molecules (dimers) can have significantly enhanced or reduced light emission compared to single molecules depending on their configuration.

Key Contributions

  • Demonstrated configurational control of photon emission from molecular dimers with significant enhancement or reduction compared to monomers
  • Characterized the one-electron excitation process underlying neutral-exciton luminescence in tin-phthalocyanine molecules
electrofluorescence molecular photonics exciton coupling scanning tunneling microscopy dipole coupling
View Full Abstract

Tin-phthalocyanine molecules adsorbed on a NaCl ultrathin film on Au(111) exhibit electrofluorescence excited by a current across a scanning tunneling microscope junction. Exploring the dependence of the molecular monomer photon yield on the injected current evidences the one-electron excitation process underlying the neutral-exciton luminescence. Photon spectra of the monomer exhibit vibrational progression and hot luminescence, while the dimer electrofluorescence spectroscopic fine structure results from the coupling of the adjacent optical transition dipoles. The photon yield of the dimer is significantly altered upon changing the configurational state of one of the two molecules. In one of the bistable configurations light emission is amplified compared to the monomer, and it is reduced in the other.

Local approach to entropy production in the nonequilibrium dynamics of open quantum systems

Irene Ada Picatoste, Alessandra Colla, Heinz-Peter Breuer

2603.01861 • Mar 2, 2026

QC: medium Sensing: medium Network: low

This paper studies how entropy changes in open quantum systems that are not in equilibrium, establishing relationships between entropy production, memory effects, and whether the quantum dynamics are Markovian or non-Markovian. The authors prove that negative entropy production is a sufficient but not necessary signature of non-Markovian quantum dynamics.

Key Contributions

  • Proved that positivity of the entropy production rate for all initial states implies that the eigenvalues of the time-local generator have negative real parts
  • Demonstrated that Markovian dynamics implies positive entropy production but showed the converse is not true
  • Established that negative entropy production is sufficient but not necessary for identifying non-Markovian quantum dynamics
  • Proved equivalence between map-based entropy production positivity and Markovianity for finite-dimensional systems
entropy production open quantum systems non-Markovian dynamics quantum master equation nonequilibrium thermodynamics
View Full Abstract

We discuss fundamental features of the local expression for the entropy production rate of the nonequilibrium quantum dynamics of open systems and its relations to memory effects and the spectrum of the generator of the dynamics. Defining the entropy production rate as negative rate of change of the relative entropy with respect to an instantaneous fixed point, it is shown that positivity of the entropy production rate for all possible initial states implies that the real parts of the eigenvalues of the time-local generator for the quantum master equation are always negative. It is further demonstrated that Markovian dynamics, identified as P-divisibility of the quantum dynamical map, implies positivity of entropy production rate, thus providing a kind of generalized second law in the nonequilibrium regime. We also prove by means of the counterexample of a phase covariant quantum master equation that the converse of this statement is not true, i.e., there are non-Markovian dynamics for which the entropy production rate is always positive. Thus, we conclude that the emergence of negative entropy production rates is a sufficient but not necessary condition for non-Markovianity of the quantum dynamics. Finally, we also consider a recently introduced map-based notion of entropy production and show the equivalence between its positivity and Markovianity for general finite-dimensional systems.
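The claim that Markovian (P-divisible) dynamics yields a non-negative entropy production rate can be illustrated numerically: for a simple depolarizing qubit semigroup, the relative entropy to the fixed point decreases monotonically in time, so its negative rate of change stays non-negative. A sketch of that check — the model and numbers are illustrative, not taken from the paper:

```python
import numpy as np

def relative_entropy(rho, sigma):
    # S(rho || sigma) = Tr[rho (log rho - log sigma)], via eigendecompositions.
    def logm(m):
        w, v = np.linalg.eigh(m)
        return (v * np.log(np.clip(w, 1e-300, None))) @ v.conj().T
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

# Depolarizing semigroup: rho_t = e^{-t} rho_0 + (1 - e^{-t}) I/2.
# This dynamics is P-divisible and its instantaneous fixed point is I/2.
rho0 = np.array([[0.9, 0.3], [0.3, 0.1]], dtype=complex)  # a pure state
fixed = np.eye(2) / 2

D = [relative_entropy(np.exp(-t) * rho0 + (1 - np.exp(-t)) * fixed, fixed)
     for t in np.linspace(0.0, 3.0, 31)]
# D(rho_t || fixed) decreases monotonically, so the entropy production
# rate -dD/dt is non-negative at all times, as the paper's general
# result for Markovian dynamics requires.
```

The paper's counterexample goes the other way: a phase covariant non-Markovian master equation whose entropy production rate nevertheless stays positive, which is why positivity alone cannot certify Markovianity.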

Local integrals of motion encoded in a few eigenstates

J. Pawłowski, P. Łydżba, M. Mierzejewski

2603.01859 • Mar 2, 2026

QC: low Sensing: none Network: none

This paper shows that the local integrals of motion that characterize quantum integrable systems can be determined from just a few eigenstates of the system's Hamiltonian, with fewer eigenstates needed as system size increases. The authors demonstrate this using the XXZ model and contrast it with Hilbert space fragmentation scenarios where most eigenstates are required.

Key Contributions

  • Demonstrated that local integrals of motion in integrable quantum systems can be extracted from a vanishingly small fraction of eigenstates in the thermodynamic limit
  • Identified a fundamental difference between quantum integrability and Hilbert space fragmentation in terms of eigenstate requirements for determining conserved quantities
quantum integrability XXZ model local integrals of motion eigenstates Hilbert space fragmentation
View Full Abstract

Many properties of a quantum system can be obtained from just a single eigenstate of its Hamiltonian. For example, a single eigenstate can be used to determine whether a system is integrable or chaotic and, in the latter case, to establish its thermal properties. Focusing on the XXZ model, we show that the local integrals of motion, which lie at the heart of integrability, can also be estimated from a small number of eigenstates. Moreover, as the system size increases, fewer eigenstates are required, so that in the thermodynamic limit, the integrals of motion can be obtained from a vanishingly small fraction of all eigenstates. Interestingly, this property does not extend to integrals of motion arising solely from Hilbert space fragmentation, as found in the folded XXZ model, where the majority of eigenstates has to be used. This represents one of the few fundamental differences known between integrability and Hilbert space fragmentation.

Mapping g-factors and complex intervalley coupling in Si/SiGe by conveyor-mode shuttling

Mats Volmer, Tom Struck, Arnau Sala, Jhih-Sian Tu, Stefan Trellenkamp, Davide Degli Esposti, Giordano Scappucci, Łukasz Cywiński, Hendrik Bluhm, Lar...

2603.01844 • Mar 2, 2026

QC: high Sensing: medium Network: low

This paper develops methods to precisely map electron g-factors and valley coupling in silicon quantum dots with nanometer resolution, using conveyor-belt shuttling of entangled electron pairs. The work provides crucial characterization tools for understanding and optimizing silicon-based quantum dot materials for large-scale quantum computing chips.

Key Contributions

  • Development of 2D mapping technique for electron g-factors in Si/SiGe quantum dots with sub-milliunit precision and nanometer resolution
  • Demonstration of conveyor-belt shuttling of entangled electron spin pairs to characterize intervalley coupling parameters
  • Extraction of complex intervalley coupling parameters by combining g-factor and valley splitting measurements on the same device
silicon quantum dots g-factor mapping valley coupling spin qubits conveyor shuttling
View Full Abstract

As silicon spin qubit chips are increasing in qubit number and area, methods for the screening of qubit related material parameters become vital. Here we demonstrate the two-dimensional mapping of small variations of the electron g-factor of quantum dots formed in planar Si/SiGe quantum wells with precision better than $10^{-3}$ and with nanometer lateral resolution. We scan the electron g-factor across a 40 nm $\times$ 400 nm area and observe two g-factors per QD site which obey a striking symmetry and bimodal distribution across the area. These two g-factors relate to valley states of the electron in the quantum dot in agreement with a recent theoretical model. Using conveyor-belt shuttling of entangled electron spin pairs, complementary to the mapping of the local valley-splitting, we map the g-factor. We compare g-factor and valley splitting maps measured on the same device, and extract the complex intervalley coupling parameter along the shuttle trajectories applying a theoretical model of g-factor dependence on intervalley coupling. These maps will allow unprecedented insights into the spin-valley dynamics during qubit manipulation, readout and shuttling and serve as a benchmark for the engineering of Si/SiGe heterostructures for large-scale quantum chips.
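As a rough illustration of why sub-milliunit g-factor precision matters for spin qubits: a fractional g-factor variation δg shifts the electron Larmor frequency f = g·μ_B·B/h by δg·μ_B·B/h. The field value B = 0.1 T below is an assumed example, not a number from the paper; the constants are standard:

```python
# Illustrative arithmetic (example field, not from the paper): translate a
# g-factor variation of 1e-3 into a Larmor frequency shift.
mu_B = 9.2740100783e-24  # Bohr magneton, J/T
h = 6.62607015e-34       # Planck constant, J*s
g, delta_g, B = 2.0, 1e-3, 0.1   # g-factor, mapped variation, assumed field

f_larmor = g * mu_B * B / h       # qubit drive frequency, ~2.8 GHz here
delta_f = delta_g * mu_B * B / h  # shift caused by the g-factor variation
print(f"f = {f_larmor / 1e9:.2f} GHz, shift = {delta_f / 1e6:.2f} MHz")
```

A shift of order 1 MHz against a few-GHz carrier is far larger than typical qubit linewidths, so nanometer-scale g-factor maps of this precision directly predict dephasing and addressing errors during shuttling.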

Time-dependent adiabatic elimination in matter-wave optics

Samuel Böhringer, Alexander Bott, Eric P. Glasbrenner

2603.01826 • Mar 2, 2026

QC: low Sensing: medium Network: low

This paper develops a mathematical formalism for adiabatic elimination in quantum systems where different subsystems are coupled with explicit time dependence, specifically applied to matter-wave optics where atomic center-of-mass motion must be considered. The method allows separation of specific quantum state dynamics from the total system without assuming the Hamiltonian elements commute.

Key Contributions

  • Development of time-dependent projector-based formalism for adiabatic elimination
  • Extension to non-commuting Hamiltonian elements relevant for matter-wave systems
adiabatic elimination matter-wave optics time-dependent Hamiltonians projector formalism quantum dynamics
View Full Abstract

We show how the dynamics of a specific subset of states can be separated from the dynamics of the total quantum state via a time-dependent projector-based formalism of adiabatic elimination. Within our formalism, we assume explicit time dependence in the coupling between both subsystems. Additionally, we do not assume that the elements of the Hamiltonian commute, as in matter-wave optics this is not given in general: there, the center-of-mass degrees of freedom frequently need to be taken into account. Our formalism allows us to perform the adiabatic elimination in such a setting.

Nature abhors macroscopic superpositions

Filippus S. Roux

2603.01811 • Mar 2, 2026

QC: low Sensing: medium Network: none

This paper investigates why large-scale quantum superpositions of massive objects rarely occur in nature, proposing that spacetime geometry creates an energy barrier that opposes the formation of macroscopic superposition states. The authors model this using Schrödinger cat states and suggest this mechanism could help explain the quantum measurement problem.

Key Contributions

  • Theoretical model explaining natural suppression of macroscopic quantum superpositions through spacetime-matter entanglement
  • Energy-based mechanism that creates opposing forces preventing large-scale superposition formation
macroscopic superpositions Schrödinger cat states spacetime entanglement measurement problem decoherence
View Full Abstract

Superpositions of mass distributions can potentially lead to entanglement with the geometry of spacetime. Here we show that there exists a natural reluctance for macroscopic mass distributions to form such superpositions. The macroscopic superposition is modeled as a Schrödinger cat state. The reluctance manifests as a dip in the total energy of the Schrödinger cat state as a function of the separation distance between the terms in the superposition. The dip in the energy provides an opposing force preventing the formation of the superposition. A generalization of this phenomenon addressing the measurement problem is also discussed.

Finite-Depth, Finite-Shot Guarantees for Constrained Quantum Optimization via Fejér Filtering

Chinonso Onah, Kristel Michielsen

2603.01809 • Mar 2, 2026

QC: high Sensing: none Network: none

This paper analyzes a quantum optimization algorithm called CE-QAOA that works with constrained problems, providing mathematical guarantees on how well it can find optimal solutions with limited circuit depth and measurement shots. The authors show that by restricting certain parameters to specific values (harmonic lattice), they can guarantee minimum success probabilities for finding optimal solutions.

Key Contributions

  • Provides finite-depth and finite-shot theoretical guarantees for constrained quantum optimization using CE-QAOA
  • Introduces Fejér filtering analysis framework for quantum approximate optimization algorithms with dimension-free bounds
QAOA quantum optimization constrained optimization finite-depth circuits spectral filtering
View Full Abstract

We study finite-layer alternations of the Constraint-Enhanced Quantum Approximate Optimization Algorithm (CE-QAOA), a constraint-aware ansatz that operates natively on block one-hot manifolds. Our focus is on feasibility and optimality guarantees. We show that restricting cost angles to a harmonic lattice exposes a positive Fejér filter acting on the cost-phase unitary $U_C(γ)=e^{-iγH_C}$ in a cost-dephased reference model (used only for analysis). Under a wrapped phase-separation condition, this yields dimension-free finite-depth and finite-shot lower bounds on the success probability of sampling an optimal solution. In particular, we obtain a ratio-form guarantee $q_0 \ge x/(1+x)$ with $x = (p+1)^2 \sin^2(δ/2)\, C_β$, where $q_0$ is the single-shot success probability, $C_β$ is the mixer-envelope mass on the optimal set, $δ$ is a phase-gap proxy, and $p$ is the number of layers. Riemann-Lebesgue averaging extends the discussion beyond exact lattice normalization. We conclude by outlining coherent realizations of hardware-efficient positive spectral filters as a main open direction.
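
The ratio-form guarantee quoted above is easy to evaluate numerically. A minimal sketch of the bound $q_0 \ge x/(1+x)$, using purely hypothetical values for the depth $p$, phase-gap proxy $δ$, and envelope mass $C_β$ (none of these numbers come from the paper):

```python
import math

def ceqaoa_lower_bound(p, delta, c_beta):
    """Ratio-form lower bound q0 >= x/(1+x) from the abstract,
    with x = (p+1)^2 * sin^2(delta/2) * C_beta."""
    x = (p + 1) ** 2 * math.sin(delta / 2) ** 2 * c_beta
    return x / (1 + x)

# Illustrative (hypothetical) parameters: the bound tightens with depth p.
for p in (1, 3, 7):
    print(p, ceqaoa_lower_bound(p, delta=0.3, c_beta=0.05))
```

Note the quadratic dependence on $(p+1)$: the guaranteed single-shot success probability grows with circuit depth, as expected of a Fejér-type filter.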

Experimental realization and self-testing of semisymmetric informationally complete measurements via a one-dimensional photonic quantum walk

Xu Xu, Han-Yu Cheng, Meng-Yun Ma, Chao-Jie Sun, Yan Wang, Li-Jiong Shen, Zhe Sun, Qi-Ping Su, Chui-Ping Yang, Yong-Nan Sun

2603.01802 • Mar 2, 2026

QC: medium Sensing: medium Network: medium

This paper experimentally demonstrates a new type of quantum measurement called semisymmetric informationally complete POVMs (semi-SIC POVMs) using photonic quantum walks. The researchers also perform self-testing of these measurements in a semi-device-independent manner, which could improve quantum certification protocols.

Key Contributions

  • Experimental realization of semi-SIC POVMs using one-dimensional photonic quantum walks
  • Demonstration of semi-device-independent self-testing of these generalized quantum measurements
POVM quantum measurements photonic quantum walk self-testing semi-device-independent
View Full Abstract

Generalized quantum measurements play a crucial role in quantum mechanics, and symmetric informationally complete positive operator-valued measures (SIC POVMs) provide a powerful and flexible framework for extracting information from quantum systems. However, the existence of SIC POVMs in every finite dimension remains an open question, which has stimulated extensive research into alternative classes of POVMs. Recently, Geng et al. [Phys. Rev. Lett. 126, 100401 (2021)] proposed a broader class of measurements, called semisymmetric informationally complete POVMs (semi-SIC POVMs), which extends beyond SIC POVMs. In this work, we focus on four-outcome POVMs and experimentally realize semi-SIC POVMs using a one-dimensional discrete-time quantum walk. Additionally, employing single photons and linear optics, we perform an experimental self-testing of semi-SIC POVMs in a semi-device-independent manner. Our results pave the way for exploring quantum certification with generalized quantum measurements.
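
For context, the defining properties of a standard SIC POVM can be verified numerically in the qubit case, where the four elements are built from a regular tetrahedron on the Bloch sphere. This sketch illustrates only the fully symmetric case, not the paper's semi-SIC construction:

```python
import numpy as np

# Bloch vectors of a regular tetrahedron define a qubit SIC POVM:
# E_k = (1/4) (I + n_k . sigma).
ns = np.array([[ 1,  1,  1],
               [ 1, -1, -1],
               [-1,  1, -1],
               [-1, -1,  1]]) / np.sqrt(3)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

E = [(I2 + n[0] * sx + n[1] * sy + n[2] * sz) / 4 for n in ns]

# POVM completeness: the elements sum to the identity.
print(np.allclose(sum(E), I2))
# Symmetry: Tr(E_j E_k) takes a single constant value (1/12 in d=2) for j != k.
overlaps = {round(np.trace(E[j] @ E[k]).real, 6)
            for j in range(4) for k in range(4) if j != k}
print(overlaps)
```

A semi-SIC POVM relaxes exactly this last condition, allowing the pairwise overlaps to split into more than one value while remaining informationally complete.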

Distinguishing thermal and pseudothermal light by testing the Siegert relation

Xi Jie Yeo, Justin Yu Xiang Peh, Darren Ming Zhi Koh, Christian Kurtsiefer, Peng Kian Tan

2603.01764 • Mar 2, 2026

QC: none Sensing: low Network: low

This paper develops a method to distinguish between genuine thermal light (from sources like gas discharge lamps) and pseudothermal light (from laser light scattered by rotating ground glass) by testing the Siegert relation, even though both types of light exhibit photon bunching behavior.

Key Contributions

  • Experimental method to test the Siegert relation for distinguishing thermal vs pseudothermal light sources
  • Demonstration that photon bunching alone is insufficient to characterize thermal light behavior
thermal light pseudothermal light photon bunching Siegert relation quantum optics
View Full Abstract

Thermal light, including blackbody radiation and spontaneous emission, exhibits photon bunching. Thermal light sources, however, typically yield low spectral densities, limiting their practical utility. Pseudothermal light sources with higher brightness and longer coherence time are often employed instead. While pseudothermal light also exhibits photon bunching, this property may not suffice to fully replicate the behavior of genuine thermal light. Here we demonstrate a method to directly test the Siegert relation for two sources of photon-bunched light, laser light scattered from a rotating ground glass and spontaneously emitted light from a gas discharge lamp, probing a fundamental criterion expected of thermal light.
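
The Siegert relation being tested is $g^{(2)}(τ) = 1 + |g^{(1)}(τ)|^2$. As a toy numerical illustration (not the paper's experiment), a chaotic field with complex-Gaussian amplitude statistics should give $g^{(2)}(0) \approx 2$, the photon-bunching signature:

```python
import numpy as np

rng = np.random.default_rng(1)

# A chaotic (thermal-like) field has complex-Gaussian amplitude statistics.
n = 200_000
field = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
intensity = np.abs(field) ** 2

# Zero-delay degree of second-order coherence: g2(0) = <I^2> / <I>^2.
g2_0 = (intensity ** 2).mean() / intensity.mean() ** 2
# Siegert relation at tau = 0: g2(0) = 1 + |g1(0)|^2 = 2 for thermal light.
print(g2_0)
```

A coherent (laser) field under the same estimator gives $g^{(2)}(0) = 1$; the point of the paper is that matching $g^{(2)}$ alone does not guarantee a source obeys the full Siegert relation.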

High-Performance Quantum Frequency Conversion from Ultraviolet to Telecom Band

Yi Yang, Bin Wang, Ji-Chao Lin, Yang Gao, Xin Li, Jiu-Peng Chen, Lei Hou, Ye Wang, Yong Wan, Xiu-Ping Xie, Ming-Yang Zheng, Qiang Zhang, Jian-Wei Pan

2603.01745 • Mar 2, 2026

QC: medium Sensing: none Network: high

This paper demonstrates high-performance quantum frequency conversion that translates ultraviolet photons to telecom wavelengths using lithium niobate waveguides, achieving record efficiency and low noise. The technology enables connecting quantum systems operating at UV wavelengths to fiber-optic communication networks.

Key Contributions

  • Record-high 28.8% external conversion efficiency with ultra-low noise for UV-to-telecom frequency conversion
  • Theoretical model correlating conversion efficiency with domain defects and robust noise suppression strategy
  • Demonstration of quantum frequency conversion enabling long-lived remote ion-ion entanglement for scalable quantum networks
quantum frequency conversion lithium niobate telecom wavelength quantum networks ion entanglement
View Full Abstract

Quantum frequency conversion (QFC) is essential for bridging the spectral gap between stationary qubits and low-loss optical communication channels. In this work, we demonstrate a short-wavelength-pumping QFC with a first-order quasi-phase-matching period of 3.07 μm on thin-film lithium niobate, converting ultraviolet photons to the telecom C-band. By constructing a theoretical model that correlates the normalized conversion efficiency with domain defects in the short-period phase-matched waveguide, we found the critical tolerance of domain defects along the waveguide should be $\le 2$ (excluding the ends). Based on this, we achieved a theoretical-limit normalized conversion efficiency of 839%/(W·cm²) for the fundamental guided mode through fabrication optimization. Furthermore, we propose a robust noise suppression strategy for short-wavelength pumping by utilizing the counter-tuning behaviors of difference-frequency generation and spontaneous parametric down-conversion. By combining these advances with ultra-narrowband filtering, we achieve a record-high external efficiency of 28.8% and an ultra-low noise of 35 counts per second. This high-performance QFC connecting ultraviolet and telecom bands satisfies the stringent requirements for long-lived remote ion-ion entanglement in scalable quantum networks [W.-Z. Liu et al., Nature (2026)].
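
To relate the quoted normalized efficiency to pump power, one can use the textbook lossless quasi-phase-matched conversion model $η = \sin^2(\sqrt{η_{\rm norm} P}\, L)$. A sketch with a hypothetical 1 cm interaction length (the reported 28.8% external efficiency additionally includes coupling and filtering losses, which this model ignores):

```python
import math

def internal_conversion(eta_norm, power_w, length_cm):
    """Lossless quasi-phase-matched frequency-conversion model:
    eta = sin^2( sqrt(eta_norm * P) * L )."""
    return math.sin(math.sqrt(eta_norm * power_w) * length_cm) ** 2

ETA_NORM = 8.39   # 839 %/(W*cm^2), as quoted in the abstract
L = 1.0           # hypothetical 1 cm waveguide length

# Pump power at which internal conversion peaks: sqrt(eta_norm * P) * L = pi/2.
p_opt = (math.pi / 2) ** 2 / (ETA_NORM * L ** 2)
print(round(p_opt, 3), internal_conversion(ETA_NORM, p_opt, L))
```

Under these assumptions, full internal conversion would require only a few hundred milliwatts of pump power, which is why high normalized efficiency matters for low-noise operation.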

Gain-induced spectral non-degeneracy in type-II parametric down-conversion

Behnood Taheri, Denis Kopylov, Manfred Hammer, Torsten Meier, Jens Förstner, Polina Sharapova

2603.01656 • Mar 2, 2026

QC: low Sensing: medium Network: high

This paper studies how increasing the gain in type-II parametric down-conversion causes the generated photon pairs to shift from having the same frequency (degenerate) to different frequencies (non-degenerate), making the photons more distinguishable from each other. The researchers developed a rigorous theoretical model to predict this effect, which previous simplified models failed to capture.

Key Contributions

  • Discovery of gain-induced spectral shifts causing transition from degenerate to non-degenerate PDC
  • Development of rigorous theoretical model based on coupled integro-differential equations that captures effects missed by spatially-averaged approximations
parametric down-conversion photon pairs entanglement generation nonlinear optics quantum optics
View Full Abstract

We demonstrate the novel effect of gain-induced spectral shifts in the type-II parametric down-conversion (PDC) process, which results in a transition from degenerate to non-degenerate PDC with increasing parametric gain. This effect, originating from the second-order dispersion terms, significantly alters the properties of PDC in the high-gain regime, where it leads to increased distinguishability of the generated photon pairs. The effect is established by evaluating a rigorous theoretical model, which is based on solving a system of coupled integro-differential equations for monochromatic operators. The widely used spatially-averaged approximate model fails to reproduce this important effect.

Shaping frequency-tunable single photons for quantum networking in waveguide QED

Álvaro Pernas, Álvaro Gómez-León, Ricardo Puebla

2603.01649 • Mar 2, 2026

QC: medium Sensing: none Network: high

This paper develops theoretical methods to control and shape single photons at different frequencies in superconducting waveguide networks, enabling quantum information exchange between network nodes that operate at mismatched frequencies. The work provides protocols for quantum state transfer and entanglement generation between non-resonant nodes in quantum networks.

Key Contributions

  • Theoretical framework for shaping frequency-tunable single photons in waveguide QED systems
  • Protocols for quantum state transfer and entanglement generation between frequency-detuned network nodes
  • Analysis of control requirements and operation regimes for experimental implementation
waveguide QED quantum networking single photons frequency control quantum state transfer
View Full Abstract

The exchange of quantum information among nodes in a quantum network is one of the main challenges in modern technologies. Superconducting waveguide QED networks hold great potential for realizing distributed quantum computation, where distinct nodes communicate via itinerant single photons. Yet, different frequencies among the nodes restrict their applicability and limit scalability. Here we derive the controls required to shape single photons arbitrarily detuned with respect to their natural frequency, thus allowing for on-demand and deterministic exchange of quantum information among frequency-detuned nodes. We provide a theoretical framework, analyzing the properties of the controls for typical photon shapes and identifying operation regimes amenable to experimental realization. We then show how these controls enable frequency-selective quantum state transfer among non-resonant and distant nodes of a realistic network. In addition, we also provide a simple extension for remote entanglement generation between these nodes. The suitability and high fidelity of these protocols are supported by numerical simulations, highlighting the novel networking possibilities unlocked when shaping frequency-tunable single photons.

Single-ion phonon laser in the quantum regime

Dong Yuanzhang, He Siwen, Deng Zhijiao, Li Peidong, Chen Liang, Feng Mang

2603.01585 • Mar 2, 2026

QC: medium Sensing: high Network: low

This paper demonstrates how a single trapped ion can generate quantum phonon laser states using a three-level atomic model with bichromatic laser driving. The researchers propose an experimental scheme with a trapped 40Ca+ ion and develop methods for precise quantum state tomography of the resulting vibrational states.

Key Contributions

  • Development of single-ion quantum phonon laser using three-level model instead of previous two-ion systems
  • Experimental scheme for 40Ca+ ion with bichromatic sideband lasers and quantum state tomography methods
trapped ions phonon laser quantum state tomography vibrational states sideband cooling
View Full Abstract

The quantum phonon laser state is a vibrational state generated by phonon coherent amplification based on quantum mechanics. Its core is the coherent excitation and manipulation of phonon quantum states by controlling phonon dynamics. This technology breaks the classical limits of traditional phonon lasers, offering new methods for quantum information. Previous research on quantum phonon lasers focused on quantum van der Pol oscillators. As typical nonlinear quantum systems, they show significant value in trapped-ion systems. These breakthroughs extend nonlinear dynamics into the quantum domain and provide platforms for exploring quantum nonlinear phenomena. Although such states have been realized in two-ion systems, practical applications remain challenging. This paper explores how a single trapped ion can generate quantum phonon laser states using a three-level model. By solving the quantum master equation numerically, steady-state characteristics are analyzed, focusing on quantum statistics including the Wigner function and the second-order correlation function. An experimental scheme is proposed based on a single trapped 40Ca+ ion, using bichromatic blue-sideband and red-sideband lasers to generate quantum phonon laser states. By introducing the characteristic function of motional states, precise quantum state tomography is achieved. Additionally, the phonon laser threshold effect is discussed within a two-level model; however, the three-level model shows significantly different thresholds and more accurately describes the physical mechanisms of the quantum phonon laser.
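
The second-order correlation function mentioned in the abstract is what separates laser-like from thermal phonon statistics. A generic sketch (not the paper's three-level model) computing $g^{(2)}(0) = (\langle n^2\rangle - \langle n\rangle)/\langle n\rangle^2$ from a phonon-number distribution:

```python
import math
import numpy as np

def g2_zero(pn):
    """g2(0) = (<n^2> - <n>) / <n>^2 from a phonon-number distribution pn."""
    n = np.arange(len(pn))
    mean = (pn * n).sum()
    mean_sq = (pn * n ** 2).sum()
    return (mean_sq - mean) / mean ** 2

nbar, cutoff = 2.0, 60
# Poissonian statistics (coherent / laser-like state) vs thermal statistics.
coherent = np.array([math.exp(-nbar) * nbar ** k / math.factorial(k)
                     for k in range(cutoff)])
thermal = np.array([nbar ** k / (nbar + 1) ** (k + 1) for k in range(cutoff)])

print(g2_zero(coherent), g2_zero(thermal))  # → 1.0 (coherent) vs 2.0 (thermal)
```

A phonon laser above threshold is expected to approach the Poissonian value $g^{(2)}(0) \to 1$, which is one of the steady-state signatures the paper analyzes.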

Intersubjectivity as a principle determining physical observables and non-classicality

Shun Umekawa, Koki Ono, Hayato Arai

2603.01575 • Mar 2, 2026

QC: medium Sensing: medium Network: low

This paper establishes a fundamental principle called intersubjectivity that determines when quantum measurements correspond to traditional physical observables, proving that projection-valued measurements are characterized by inter-observer agreement and that classical systems are distinguished by preserving this agreement under all measurement conditions.

Key Contributions

  • Proves equivalence between projection-valued measures and intersubjective measurements under coarse-graining
  • Establishes complete characterization of classical systems through intersubjectivity preservation
  • Demonstrates operational significance for quantum state tomography and discrimination tasks
quantum measurement theory projection-valued measures POVMs intersubjectivity generalized probabilistic theories
View Full Abstract

We identify an operational principle that singles out Projection-Valued Measures (PVMs) among general Positive Operator-Valued Measures (POVMs), bridging the modern quantum measurement theory and the traditional formulation based on projective measurements of physical observables. We reformulate Ozawa's intersubjectivity condition, which requires inter-observer agreement of the measurement outcomes, in a quantitative manner within the framework of generalized probabilistic theories. We prove that (i) a POVM is a PVM if and only if its every coarse-graining is intersubjective, and (ii) a system is classical if and only if intersubjectivity is preserved under any coarse-graining, establishing a complete characterization of the physical observables and the classical theory. Furthermore, measurements with intersubjectivity are sufficiently rich for the informational tasks of state tomography and state discrimination, testifying to its operational significance in quantum and beyond information processing.
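
The PVM-vs-POVM distinction at the heart of the result can be stated concretely: PVM elements are mutually orthogonal projectors, while general POVM elements need only be positive and sum to the identity. A minimal numerical check (a generic illustration, not the paper's intersubjectivity criterion):

```python
import numpy as np

def is_pvm(elements, tol=1e-9):
    """A POVM {E_k} is a PVM iff each E_k is a projector (E_k^2 = E_k)
    and distinct elements are mutually orthogonal (E_j E_k = 0)."""
    for j, Ej in enumerate(elements):
        if not np.allclose(Ej @ Ej, Ej, atol=tol):
            return False
        for Ek in elements[j + 1:]:
            if not np.allclose(Ej @ Ek, 0, atol=tol):
                return False
    return True

z0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
z1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|
# A valid POVM that is not a PVM: positive elements summing to the identity.
noisy = [0.5 * np.eye(2), 0.25 * np.eye(2), 0.25 * np.eye(2)]

print(is_pvm([z0, z1]), is_pvm(noisy))  # → True False
```

The paper's contribution is an operational (rather than algebraic) version of this distinction: a POVM is a PVM exactly when every coarse-graining of it yields inter-observer agreement.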

Quantum Thermal Machines Improved by Internal Coupling: From Equilibrium to Non-equilibrium Limit Cycles

Jingyi Gao, Naomichi Hatano

2603.01567 • Mar 2, 2026

QC: low Sensing: none Network: none

This paper studies quantum Otto cycles (quantum heat engines) and shows how internal coupling between system components can improve their performance as engines or refrigerators. The research demonstrates that coupling allows these quantum thermal machines to operate in parameter regimes where uncoupled systems would fail, and can enhance efficiency beyond standard limits while remaining below the fundamental Carnot bound.

Key Contributions

  • Demonstrates that internal coupling significantly broadens the operational regime of quantum Otto cycles, enabling function as engines or refrigerators in parameter ranges where uncoupled systems fail
  • Shows that internal coupling can enhance efficiency and coefficient of performance beyond standard Otto bounds while respecting the Carnot limit, and validates theoretical approaches using GKSL master equation
quantum Otto cycle quantum thermal machines internal coupling thermodynamics limit cycles
View Full Abstract

We investigate how internal coupling influences the operation and performance of a quantum Otto cycle operating as the Gibbs-state limit cycle (GSLC), equilibrating limit cycle (ELC), and non-equilibrating limit cycle (NELC). We show that the internal coupling significantly broadens the operational regime of the cycle. In particular, in parameter regimes where the uncoupled Otto cycle fails to operate as any thermal machine, the coupled system can function as an engine or a refrigerator. For the GSLC, in which we assume that the system quickly equilibrates during the isochoric processes, the internal coupling not only shifts and enlarges the operational regime but also enhances the efficiency and the coefficient of performance (COP), allowing the performance to exceed the standard Otto bounds while remaining below the Carnot limit. For ELC and NELC, we validate the global approach of the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation by comparison with the GSLC, and examine the NELC for finite interaction time and the ELC for infinite interaction time. Although the efficiency and COP of NELC are lower than those of ELC, shorter interaction times yield higher power output, consistent with the power-efficiency trade-off.
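
The benchmarks the coupled cycle is compared against are the standard Otto and Carnot efficiencies. A sketch with hypothetical stroke frequencies and bath temperatures (natural units), showing the ordering $η_{\rm Otto} < η_{\rm Carnot}$ that the paper's coupled machines can partially close:

```python
def otto_efficiency(omega_cold, omega_hot):
    # Standard quantum Otto bound for an uncoupled working medium:
    # eta = 1 - omega_c / omega_h (frequencies of the two isochoric strokes).
    return 1 - omega_cold / omega_hot

def carnot_efficiency(t_cold, t_hot):
    # Fundamental upper bound: eta = 1 - T_c / T_h.
    return 1 - t_cold / t_hot

# Hypothetical parameters for illustration only.
eta_otto = otto_efficiency(1.0, 2.0)
eta_carnot = carnot_efficiency(1.0, 4.0)
print(eta_otto, eta_carnot, eta_otto < eta_carnot)  # → 0.5 0.75 True
```

The paper's claim is that internal coupling lets the machine exceed the first number while never exceeding the second.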

Theory of anomalous Landau-Zener tunneling induced by nonlinear coupling

Wen-Yuan Wang, Hong-Juan Meng

2603.01523 • Mar 2, 2026

QC: medium Sensing: low Network: none

This paper develops a new theory for quantum tunneling in two-level systems with nonlinear coupling, showing that strong nonlinear interactions create a 'black-hole-like' attractor that completely changes how quantum states evolve and breaks down standard tunneling predictions. The work reveals a universal power-law behavior that replaces the usual exponential formula for quantum tunneling probability.

Key Contributions

  • Discovery of black-hole-like fixed point behavior in nonlinear quantum systems that erases initial state memory
  • Derivation of exact analytical expression showing power-law dependence replacing exponential Landau-Zener formula
  • Establishment of universal framework for nonlinear-coupling-induced adiabaticity breaking in driven quantum systems
Landau-Zener tunneling nonlinear coupling two-level systems adiabatic dynamics quantum control
View Full Abstract

We develop a general theory of Landau-Zener (LZ) tunneling in a two-level system with amplitude-dependent, sign-reversible nonlinear coupling, distinguishing it fundamentally from conventional on-site nonlinearity. Through a combination of analytical and phase-space analysis, we show that beyond a critical interaction strength, the nonlinear coupling fundamentally reshapes the adiabatic energy landscape, introducing a topological twisted and knotted structure. This structure leads to a complete breakdown of the standard exponential LZ formula, even in the adiabatic limit. Central to this anomalous behavior is the emergence of a black-hole-like fixed point, which acts as a universal attractor: upon traversing the critical region, all quantum trajectories converge to this fixed point, irreversibly erasing any memory of the initial state. From this fixed-point picture, we derive an exact analytical expression for the adiabatic tunneling probability, revealing a characteristic power-law dependence on both linear and nonlinear coupling strength. Our work establishes a paradigmatic framework for nonlinear-coupling-induced anomalous adiabaticity breaking and offers a universal mechanism for state control in driven quantum and wave systems.
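
For reference, the exponential Landau-Zener formula whose breakdown the paper establishes reads, in one common convention (for $H(t) = (vt/2)\,σ_z + Δ\,σ_x$, the probability of remaining in the initial diabatic state is $P = e^{-2πΔ^2/(\hbar v)}$). A sketch of this standard linear-coupling result, which the paper replaces with a power law in the nonlinear regime:

```python
import math

HBAR = 1.0  # natural units

def lz_diabatic_probability(delta, v):
    """Standard Landau-Zener formula for H(t) = (v t / 2) sigma_z + delta sigma_x:
    probability of staying in the initial diabatic state after the sweep,
    P = exp(-2 pi delta^2 / (hbar v))."""
    return math.exp(-2 * math.pi * delta ** 2 / (HBAR * v))

# Slower sweeps (smaller v) suppress diabatic transitions exponentially:
# this is the adiabatic limit that the nonlinear coupling destroys.
for v in (10.0, 1.0, 0.1):
    print(v, lz_diabatic_probability(0.2, v))
```

The central claim of the paper is that beyond a critical nonlinear coupling strength this exponential suppression fails even as $v \to 0$, replaced by a power-law dependence on the coupling strengths.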

Efficient Learning Algorithms for Noisy Quantum State and Process Tomography

Chenyang Li, Shengxin Zhuang, Yukun Zhang, Jingbo B. Wang, Xiao Yuan, Yusen Wu, Chuan Wang

2603.01521 • Mar 2, 2026

QC: high Sensing: medium Network: low

This paper develops new algorithms for efficiently learning and characterizing large quantum states and processes from noisy quantum circuits, requiring only polynomial (rather than exponential) resources. The methods work for arbitrary noise levels and don't require specific assumptions about input states, making them practically useful for characterizing real quantum devices.

Key Contributions

  • Development of polynomial-time algorithms for quantum state tomography under arbitrary noise
  • Extension to quantum process tomography with unified protocol for both unital and non-unital channels
  • Structure-agnostic framework that works without assumptions about input distributions
  • Demonstration of scalability for large-scale noisy quantum device characterization
quantum tomography noisy quantum circuits quantum state learning quantum process learning polynomial algorithms
View Full Abstract

Efficiently characterizing large quantum states and processes is a central yet notoriously challenging task in quantum information science, as conventional tomography methods typically require resources that grow exponentially with system size. Here, we introduce a provably efficient and structure-agnostic learning framework for noisy $n$-qubit quantum circuits under generic noise with arbitrary noise strength. We first develop a sample-efficient learning algorithm for unital noisy quantum states. Building on this result, we extend the framework to quantum process tomography, obtaining a unified protocol applicable to both unital and non-unital channels. The resulting approach is input-agnostic and does not rely on assumptions about specific input distributions. Our theoretical analysis shows that both state and process learning require only polynomially many samples and polynomial classical post-processing in the number of qubits, while achieving near-unit success probability over ensembles generated by local random circuits. Numerical simulations of two-dimensional Hamiltonian dynamics further demonstrate the accuracy and robustness of the approach, including for structured circuits beyond the random-circuit setting assumed in the theoretical analysis. These results provide a scalable and practically relevant route toward characterizing large-scale noisy quantum devices, addressing a key bottleneck in the development of quantum technologies.
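
The exponential scaling the paper circumvents is already visible in textbook linear-inversion tomography, which needs $4^n - 1$ Pauli expectation values for $n$ qubits. A single-qubit sketch of that baseline method (generic, not the paper's learning algorithm):

```python
import numpy as np

paulis = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}
I2 = np.eye(2)

def reconstruct(expectations):
    """Linear-inversion tomography: rho = (I + sum_k <sigma_k> sigma_k) / 2.
    For n qubits this generalizes to 4^n - 1 expectation values, hence the
    exponential resource scaling that efficient learning approaches avoid."""
    rho = I2.copy().astype(complex)
    for name, val in expectations.items():
        rho += val * paulis[name]
    return rho / 2

rho = reconstruct({"X": 1.0, "Y": 0.0, "Z": 0.0})  # ideal data for |+><+|
print(np.allclose(rho, np.array([[0.5, 0.5], [0.5, 0.5]])))  # → True
```

The paper's point is that for noisy states and processes generated by local random circuits, polynomially many samples suffice instead.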

Single impurity-induced localization transitions in electronic systems

Niaz Ali Khan, Munsif Jan, Muzamil Shah, Muhammad Sajid, Muhammad Mateen, Mushtaq Ali

2603.01497 • Mar 2, 2026

QC: low Sensing: medium Network: none

This paper studies how a single impurity in a quantum lattice can create localized bound states that transition from extended to localized behavior as the impurity strength increases, while the rest of the electronic system remains unaffected. The work focuses on understanding localization at the individual eigenstate level rather than system-wide Anderson localization.

Key Contributions

  • Demonstration of single-impurity-induced localization transitions in bound states without global system localization
  • Characterization of two distinct spatial decay profiles (symmetric and exponential) for impurity-induced bound states
Anderson localization bound states tight-binding systems impurity physics localization transition
View Full Abstract

Anderson localization is a fundamental phenomenon in disordered quantum systems, where transport is suppressed by wave interference from extensive randomness. Moving beyond traditional multi-impurity scenarios, we investigate impurity-induced localization phenomena in low-dimensional tight-binding systems by focusing on the properties of impurity-generated bound states. By introducing a single on-site impurity into an otherwise extended lattice, we demonstrate that the impurity can host a bound state whose spatial character undergoes a transition from extended to localized as the impurity strength surpasses a critical value. This transition pertains solely to the impurity state, while the bulk states of the host system remain extended. We characterize the localization behavior by analyzing two distinct spatial profiles of the bound states: one with symmetric decay and another with exponential decay from the impurity site. Our results highlight how a local perturbation can induce nontrivial localization behavior at the level of individual eigenstates, without implying a global localization transition of the underlying electronic system.

Violation of Quantum Bilocal Inequalities on Mutually-Commuting von Neumann Algebra Models

Bingke Zheng, Shuyuan Yang, Jinchuan Hou, Kan He

2603.01466 • Mar 2, 2026

QC: low Sensing: none Network: medium

This paper studies quantum entanglement networks using three mutually-commuting von Neumann algebras and establishes Bell-like inequalities called bilocal inequalities. The authors investigate when these inequalities are violated and use this violation to infer structural properties of the underlying quantum mathematical frameworks.

Key Contributions

  • Establishment of bilocal inequalities for entanglement swapping networks using von Neumann algebraic framework
  • Identification of algebraic structural conditions for violation of bilocal inequalities that reveal properties of quantum field theory observables
bilocal inequalities von Neumann algebras entanglement swapping Bell inequalities quantum field theory
View Full Abstract

Unlike in non-relativistic quantum mechanics, the violation of Bell inequalities in quantum field theory depends more on the structure of the observable algebras (typically type III von Neumann algebras) than on the choice of specific quantum states. Therefore, studying the violation of Bell inequalities within the von Neumann algebraic framework often reveals information about the algebraic structure. In this paper, we employ three mutually-commuting von Neumann algebras to characterize quantum entanglement swapping networks, and establish Bell-like inequalities thereon, commonly referred to as bilocal inequalities. We investigate the algebraic structural conditions under which bilocal inequalities are satisfied or violated on the algebra generated by these three von Neumann algebras. Furthermore, the conditions for maximal violation of the inequalities can be utilized to infer structural information about the von Neumann algebras in reverse. Our results not only utilize the violation of bilocal inequalities to reveal the structural properties of von Neumann algebras, but can also be applied to quantum mechanics and quantum field theory.

Exact bounds on quantum partial search algorithm and improving the parallel search

Yan-Bo Jiang, Xiao-Hui Wang, Kun Zhang, Vladimir Korepin

2603.01462 • Mar 2, 2026

QC: high Sensing: none Network: none

This paper analyzes Grover's quantum search algorithm in the partial search setting, where accuracy is traded for fewer oracle queries. The authors provide strong evidence that the Grover-Radhakrishnan-Korepin (GRK) algorithm is optimal for partial search and show how to improve parallel quantum search by combining partial and full search strategies.

Key Contributions

  • Provided strong evidence for the strict optimality of the GRK algorithm for quantum partial search, with tight bounds on success probability
  • Demonstrated improved parallel quantum search efficiency using hybrid partial/full search strategy
Grover algorithm quantum search partial search GRK algorithm parallel quantum computing
View Full Abstract

Grover's algorithm provides a quadratic speedup over classical algorithms for searching unstructured databases and is known to be strictly optimal in oracle query complexity, with tight bounds on its success probability. Although the standard Grover search cannot be further accelerated in the full-search setting, a trade-off between accuracy and query complexity gives rise to the partial search problem. The Grover-Radhakrishnan-Korepin (GRK) algorithm is widely regarded as the optimal protocol for this task. In this work, we provide strong evidence for the strict optimality of the GRK operator sequence among all admissible compositions of global and local Grover operators. By exhaustively examining all operator sequences with a fixed number of oracle queries, we show that the GRK structure universally maximizes the success probability. Building on this result, we derive an asymptotically tight upper bound on the maximal success probability for partial search and establish a matching lower bound on the minimal expected number of oracle queries. Furthermore, we investigate parallel quantum search within the partial-search framework. While a direct GRK-based parallelization does not outperform established parallel Grover schemes, we demonstrate that a hybrid strategy combining partial and full search protocols achieves a strictly improved parallel efficiency. Our results clarify the fundamental limits of quantum partial search and its role in optimizing parallel quantum search algorithms.
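The quadratic speedup and near-unit success probability the abstract refers to are easy to see in a direct state-vector simulation. The sketch below (plain NumPy, standard full Grover search only, not the GRK partial-search protocol) applies the oracle and diffusion reflections and checks the success probability after roughly (π/4)√N oracle queries:

```python
import numpy as np

def grover_success_prob(n_qubits, marked, n_iters):
    """Simulate standard Grover search by explicit state-vector evolution."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))   # uniform superposition
    for _ in range(n_iters):
        state[marked] *= -1              # oracle: flip sign of the marked item
        state = 2 * state.mean() - state # diffusion: reflect about the mean
    return abs(state[marked]) ** 2

n_qubits = 10  # N = 1024 items
optimal = int(round(np.pi / 4 * np.sqrt(2 ** n_qubits)))  # ~25 queries
p = grover_success_prob(n_qubits, marked=3, n_iters=optimal)
print(f"{optimal} oracle queries, success probability {p:.4f}")
```

Running more iterations past the optimum actually lowers the success probability again, which is exactly the kind of accuracy/query trade-off the partial-search setting exploits.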

Generalized quantum master equation from memory kernel coupling theory

Rui-Hao Bi, Wei Liu, Wenjie Dou

2603.01458 • Mar 2, 2026

QC: medium Sensing: medium Network: low

This paper develops an improved mathematical method called tensorial Memory Kernel Coupling Theory (MKCT) for accurately simulating how quantum systems behave when they interact with their environment over time. The researchers demonstrate their method works well on several important test cases including light-harvesting complexes and charge transport systems.

Key Contributions

  • Extension of Memory Kernel Coupling Theory from scalar to tensorial framework for calculating general expectation values and cross-correlation functions
  • Demonstration of numerical accuracy and efficiency across benchmark systems including spin-boson model, Fenna-Matthews-Olson complex, and charge mobility simulations
open quantum systems non-Markovian dynamics memory kernel quantum master equation decoherence
View Full Abstract

The generalized quantum master equation provides a powerful framework for non-Markovian dynamics of open quantum systems. However, the accurate and efficient evaluation of the memory kernel remains a challenge. In this work, we introduce a comprehensive tensorial extension to the Memory Kernel Coupling Theory (MKCT) to overcome this bottleneck. By elevating the original scalar formalism to a tensorial framework, the extended MKCT enables the calculation of general expectation values and cross-correlation functions. We demonstrate the numerical accuracy and efficiency of this method across multiple benchmark systems: capturing transient populations and coherences in the spin-boson model, resolving the excitonic absorption spectrum of the Fenna-Matthews-Olson complex, and simulating charge mobility in one-dimensional lattice models. These successful applications establish the tensorial MKCT as a highly efficient tool for investigating complex dynamics in open quantum systems.

Applicability and Limitations of Quantum Circuit Cutting in Classical State-Vector Simulation

Mitsuhiro Matsumoto, Shinichiro Sanji, Takahiko Satoh

2603.01443 • Mar 2, 2026

QC: high Sensing: none Network: none

This paper analyzes quantum circuit cutting, a technique that breaks large quantum circuits into smaller pieces that can be simulated independently and then combined classically. The authors determine when this approach provides computational speedups and show it can extend feasible quantum circuit simulation by 4-6 qubits under practical time constraints.

Key Contributions

  • Derived mathematical threshold conditions for when circuit cutting provides speedup benefits over direct simulation
  • Experimentally validated speedup predictions up to 24 qubits and identified computational bottleneck crossovers at 18 and 22 qubits
  • Demonstrated that two-way circuit cutting can extend maximum simulatable qubit count by 4-6 qubits under realistic time budgets
quantum circuit cutting classical simulation state-vector simulation quantum circuit optimization computational complexity
View Full Abstract

Circuit cutting partitions a large quantum circuit into smaller subcircuits that can be executed independently and recombined by classical post-processing. In classical state-vector simulation with full-state reconstruction, the runtime is governed by a trade-off between reduced subcircuit size and the overheads of exponentially many subcircuits and full-state reconstruction. For equal partitioning, we derive threshold conditions on the number of cuts below which cutting reduces the wall-clock time. State-vector experiments validate the predicted speedup boundary up to 24 qubits, and a runtime breakdown up to 30 qubits identifies crossovers at $q \approx 18$ and $q \approx 22$ where merging overtakes first preprocessing and then subcircuit simulation. As a practical guideline, we show that under a 10-minute wall-clock budget, two-way cutting extends the maximum feasible qubit count by 4 to 6 qubits relative to simulation without cutting.
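The threshold behavior described above can be illustrated with a deliberately simplified cost model (the 4^k variant overhead and the constants below are generic assumptions for illustration, not the thresholds derived in the paper): with k cuts splitting q qubits into two halves, each of roughly 4^k subcircuit variants runs on about q/2 qubits, so cutting can only pay off while 4^k · 2^(q/2) stays below the direct cost 2^q, i.e. while k < q/4:

```python
def max_beneficial_cuts(q):
    """Largest number of cuts k for which the toy subcircuit cost
    4**k * 2**(q/2) stays strictly below the direct cost 2**q."""
    k = 0
    while 4 ** (k + 1) * 2 ** (q / 2) < 2 ** q:
        k += 1
    return k

for q in (16, 20, 24, 28):
    print(f"q = {q:2d} qubits: cutting helps for up to {max_beneficial_cuts(q)} cuts")
```

Even this crude model reproduces the qualitative message: the admissible number of cuts grows only linearly in q, so cutting buys a few extra qubits rather than an unbounded extension.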

Nonreciprocal entanglement in exciton optomechanics with an optical parametric amplifier

Zhen-Sen Lin, Rui Zhang, Zi-Wei Jiang, Wen-Quan Yang, Ya-Feng Jiao, Hui Jing, Le-Man Kuang

2603.01397 • Mar 2, 2026

QC: low Sensing: medium Network: medium

This paper studies nonreciprocal quantum entanglement in a spinning system containing photons, excitons (bound electron-hole pairs), and phonons (vibrations), enhanced by an optical parametric amplifier. The researchers demonstrate that this system can create and control directional quantum entanglement that works at room temperature and is robust against decoherence.

Key Contributions

  • Demonstration of nonreciprocal tripartite entanglement between photons, excitons, and phonons using optical parametric amplification
  • Achievement of room-temperature nonreciprocal entanglement with high robustness to cavity dissipation
nonreciprocal entanglement exciton-optomechanics optical parametric amplifier tripartite entanglement Sagnac effect
View Full Abstract

We study nonreciprocal bipartite and tripartite entanglement in a spinning exciton-optomechanical system (EOMS) with an optical parametric amplifier (OPA). We demonstrate that nonreciprocal entanglement among photons, excitons, and phonons can be achieved under experimentally feasible parameters. We find that the nonreciprocal entanglement induced by Sagnac effects can be regulated through the OPA. In particular, we show that the OPA significantly enhances photon-exciton entanglement and tripartite entanglement but weakens photon-phonon and exciton-phonon entanglement. Moreover, we find that the photon-exciton nonreciprocal entanglement not only can be generated at room temperature and even higher temperatures but also exhibits high robustness to cavity dissipation. Our work opens a way to manipulate room-temperature nonreciprocal entanglement, which may be useful for developing nonreciprocal quantum technologies.

Quantum framework for parameterizing partial differential equations via diagonal block-encoding

Hiroshi Yano, Yuki Sato

2603.01358 • Mar 2, 2026

QC: high Sensing: none Network: none

This paper develops a quantum algorithmic framework for solving partial differential equations (PDEs) with spatially varying parameters using diagonal block-encoding techniques. The method enables efficient quantum simulation of PDEs and extends to optimization problems where the goal is to find optimal design parameters, demonstrated through simulations of the 2D wave equation.

Key Contributions

  • Development of diagonal block-encoding framework for parameterized PDEs
  • Extension to PDE-constrained optimization problems
  • Numerical demonstration on 2D wave equation with Gaussian parameter profiles
quantum algorithms partial differential equations block-encoding quantum simulation PDE-constrained optimization
View Full Abstract

We study a quantum-algorithmic framework for parameterizing partial differential equations (PDEs). For a broad class of problems in which the discretized parameter field admits a diagonal representation, block-encodings of diagonal matrices, or diagonal block-encodings, can be used to represent spatially varying coefficients with structured, potentially complicated profiles. This encoding enables efficient quantum simulation of forward PDEs and extends naturally to parameter-dependent settings. Such simulations are a key primitive for quantum algorithms for PDE-constrained optimization, where the goal is to identify optimal design parameters. We illustrate the framework numerically through forward simulation and parameter design for the two-dimensional wave equation with a Gaussian parameter profile.
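The core primitive here, a diagonal block-encoding, can be written down directly for a sub-normalized real coefficient field: pad the diagonal matrix D with a "garbage" block so the whole matrix becomes unitary. A minimal NumPy sketch of this generic construction (the wave-speed values are hypothetical, and the paper's actual circuits are not reproduced):

```python
import numpy as np

def diagonal_block_encoding(coeffs):
    """Embed D = diag(coeffs), with |c_i| <= 1, as the top-left
    block of a unitary: U = [[D, S], [S, -D]] with S = sqrt(I - D^2)."""
    d = np.asarray(coeffs, dtype=float)
    assert np.all(np.abs(d) <= 1), "coefficients must be sub-normalized"
    D = np.diag(d)
    S = np.diag(np.sqrt(1 - d ** 2))  # completes the unitary
    return np.block([[D, S], [S, -D]])

# Spatially varying wave-speed profile on a 4-point grid (hypothetical values)
c = np.array([0.2, 0.5, 0.9, 0.4])
U = diagonal_block_encoding(c)
assert np.allclose(U @ U.T, np.eye(8))     # U is unitary (orthogonal here)
assert np.allclose(U[:4, :4], np.diag(c))  # top-left block recovers D
```

Because D and S are diagonal they commute, so U Uᵀ collapses to the identity blockwise; applying U to a state with the ancilla in |0⟩ and post-selecting |0⟩ applies D up to normalization.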

A moment-based approach to the injective norm of random tensors

Stephane Dartois, Benjamin McKenna

2603.01342 • Mar 2, 2026

QC: medium Sensing: none Network: low

This paper develops a new mathematical method to calculate upper bounds on the injective norm of random tensors, which is simpler than previous approaches and works for both Gaussian and non-Gaussian models. The results have applications to quantum information theory, specifically for understanding the geometric entanglement of random quantum states.

Key Contributions

  • Development of a moment-based method for bounding injective norms of random tensors that is simpler and more general than existing techniques
  • Rigorous bounds on geometric entanglement of random bosonic states and multipartite Schmidt rank states with applications to quantum information theory
random tensors injective norm geometric entanglement quantum states statistical physics
View Full Abstract

In this paper, we present a technically simple method to establish upper bounds on the expected injective norm of real and complex random tensors. Our approach is somewhat analogous to the moment method in random matrix theory, and is based on a deterministic upper bound on the injective norm of a tensor which might be of independent interest. Compared to previous approaches to these problems (spin-glass methods, epsilon-net techniques, Sudakov-Fernique arguments, and PAC-Bayesian proofs), our method has the benefit of being nonasymptotic, relatively elementary, and applicable to non-Gaussian models. We illustrate our approach on various models of random tensors, recovering some previously known (and conjecturally tight) bounds with simpler arguments, and presenting new bounds, some of which are provably tight. From the perspective of statistical physics, our results yield rigorous estimates on the ground-state energy of real and complex, possibly non-Gaussian, spin glass models. From the perspective of quantum information, they establish bounds on the geometric entanglement of random bosonic states and of random states with bounded multipartite Schmidt rank, both in the thermodynamic limit and in the regime of large local dimensions.
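The quantity being bounded is the injective norm, for an order-3 tensor the maximum of |⟨T, x⊗y⊗z⟩| over unit vectors x, y, z. While the paper derives upper bounds via moments, a quick numerical lower bound comes from alternating maximization over the three factors; a sketch for a small real Gaussian tensor (an illustration, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def injective_norm_lower_bound(T, iters=200):
    """Alternating maximization: holding two unit vectors fixed, the
    optimal third factor is the normalized contraction of T against them.
    Returns a lower bound on max |T(x, y, z)| over unit vectors."""
    x, y, z = (rng.standard_normal(k) for k in T.shape)
    for _ in range(iters):
        x = np.einsum("ijk,j,k->i", T, y, z); x /= np.linalg.norm(x)
        y = np.einsum("ijk,i,k->j", T, x, z); y /= np.linalg.norm(y)
        z = np.einsum("ijk,i,j->k", T, x, y); z /= np.linalg.norm(z)
    return abs(np.einsum("ijk,i,j,k->", T, x, y, z))

T = rng.standard_normal((5, 5, 5))  # real Gaussian random tensor
val = injective_norm_lower_bound(T)
# Sanity check: the injective norm never exceeds the Frobenius norm
assert 0 < val <= np.linalg.norm(T)
```

For random tensors this heuristic typically lands well below the Frobenius norm, which is the gap the paper's upper bounds make quantitative.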

Remote state preparation of single-partite high-dimensional states in complex Hilbert spaces

Jun-Hai Zhao, Si-Qi Du, Wen-Qiang Liu, Dong-Hong Zhao, Hai-Rui Wei

2603.01323 • Mar 1, 2026

QC: medium Sensing: low Network: high

This paper develops practical schemes for remotely preparing quantum states with 4 and 8 levels in high-dimensional quantum systems. The researchers identify specific measurement bases that enable exact state preparation while minimizing resource requirements, and show these schemes could work with current technology.

Key Contributions

  • Development of minimal-resource schemes for remote state preparation of 4- and 8-level quantum states
  • Identification of orthogonal measurement bases for exact state preparation in high-dimensional systems
  • Demonstration that collection operations can be avoided by encoding computational basis in spatial modes of single-photon systems
remote state preparation high-dimensional quantum systems entangled states quantum communication single-photon systems
View Full Abstract

High-dimensional quantum systems offer a new playground for quantum information applications due to their remarkable advantages such as higher capacity and noise resistance. We propose potentially practical schemes for remotely preparing four- and eight-level equatorial states in complex Hilbert spaces exactly by identifying a set of orthogonal measurement bases. In these minimal-resource-consuming schemes, both pre-shared maximally and non-maximally entangled states are taken into account. The three-, five-, six-, and seven-level equatorial states in complex Hilbert spaces can also be obtained by adjusting the parameters of the desired states. The evaluations indicate that our high-dimensional RSP schemes might be possible with current technology. The collection operations, necessary for our high-dimensional RSP schemes via partially entangled channels, can be avoided by encoding the computational basis in the spatial modes of single-photon systems.

Intermodal entanglement in a quantum optical model of HHG due to the back-action on the driving field

Ákos Gombkötő, Péter Ádám, David Theidel, Tamás Kiss

2603.01315 • Mar 1, 2026

QC: low Sensing: medium Network: medium

This paper theoretically investigates how high-harmonic generation (HHG) can produce quantum entanglement between different light frequencies through back-action effects on the driving laser field. The researchers develop a simplified quantum optical model that explains experimentally observed nonclassical correlations between harmonics, suggesting this entanglement is a universal feature of HHG rather than material-specific.

Key Contributions

  • Development of a general quantum optical model for HHG that explains intermodal entanglement through back-action effects
  • Theoretical demonstration that entanglement between harmonics is a universal phenomenon in HHG rather than material-dependent
entanglement high-harmonic generation nonclassical light quantum optics intermodal correlations
View Full Abstract

Preparation of nonclassical light with special quantum properties is essential for quantum technologies. High-harmonic generation (HHG) is a process which not only enables the creation of attosecond pulses but also has the potential to generate light with intricate quantum properties. In a recent experiment [1], nonclassical inter-harmonic correlations have been measured from a HHG source. In this work, we theoretically investigate entanglement between different harmonics within an effective quantum optical model. This model implements a significant degree of simplification regarding the processes within the target material, treating the material through susceptibilities, as is usual in quantum optics. Such an approach yields a general description of HHG, permitting the implications derived within it to hold broadly. We find that entanglement is produced as a result of the often-neglected back-action. We can qualitatively reproduce experimentally measured nonclassicalities, which suggests that intermodal entanglement can, to an extent, be considered a universal phenomenon associated with HHG, rather than a result of using specific material targets.

Bounding the classical cost of simulating quantum behaviors in the prepare-and-measure scenario

Sebastian Schlösser, Matthias Kleinmann

2603.01255 • Mar 1, 2026

QC: medium Sensing: none Network: high

This paper studies how to efficiently simulate quantum prepare-and-measure scenarios using classical communication, showing that the communication cost can be reduced from 2 bits to an average of 1.89 bits per qubit transmission. The authors analyze various restrictions on quantum states and develop methods to establish lower bounds on classical communication requirements for simulating quantum behaviors.

Key Contributions

  • Reduced classical communication cost for simulating qubit transmission from 2 bits to average 1.89 bits
  • Identified minimal quantum scenarios requiring specific communication costs with only 6 state preparations and 5 measurements
  • Developed general method to lower bound classical communication cost based solely on quantum state sets
prepare-and-measure classical simulation quantum communication communication complexity quantum states
View Full Abstract

We study the prepare-and-measure scenario in which Alice transmits a quantum system to Bob, who then performs a quantum measurement. The quantum state of the system is unknown to Bob, and the measurement is unknown to Alice. It has recently been shown that shared randomness and two bits of classical communication are necessary and sufficient to simulate the transmission of a qubit. We show that the communication cost can be reduced to an average of $1.89$ bits. We then study restricted sets of state preparations: First, for a restriction to real-valued qubit states, we show that if communication of a classical trit is sufficient, the corresponding protocol must have a convoluted form. We then reduce the smallest qubit scenario requiring two bits of classical communication to only $6$ state preparations and $5$ measurements. For a qutrit, it is not known whether the communication cost is finite; we identify a scenario that requires at least $5$ classical messages, already for the simulation of the real qutrit. Finally, we develop a method for restricted sets of states that allows us to lower-bound the classical communication cost based solely on the set of quantum states.

Continuum limit of a qubit-regularized SU(3) lattice gauge theory with glueballs

Rui Xian Siew, Shailesh Chandrasekharan, Tanmoy Bhattacharya

2603.01215 • Mar 1, 2026

QC: medium Sensing: none Network: none

This paper studies a simplified quantum model of strong nuclear forces using qubits to represent gauge theory on a chain of plaquettes. The researchers show this model can describe glueball particles (bound states of the strong force) and calculate their masses in a continuum limit.

Key Contributions

  • Demonstrated continuum limit of qubit-regularized SU(3) lattice gauge theory with glueball excitations
  • Mapped plaquette-chain Hamiltonian to three-state quantum clock model and identified Z3 parafermion CFT as UV fixed point
  • Calculated glueball mass ratios and string tension relationships in the continuum theory
lattice gauge theory qubit regularization glueballs quantum simulation SU(3)
View Full Abstract

We show that a simple qubit-regularized $\mathrm{SU}(3)$ lattice gauge theory (LGT) on a plaquette chain admits a continuum limit with massive glueball excitations, providing a minimal toy model of strong interactions without quarks. By mapping the plaquette-chain Hamiltonian to the three-state quantum clock model in a magnetic field, we demonstrate that the theory can be tuned to a continuum limit governed at short distances by the $\mathbb{Z}_3$ parafermion conformal field theory (CFT), which serves as the ultraviolet (UV) fixed point. A small relevant magnetic perturbation then drives the system to a massive continuum quantum field theory in the infrared (IR). The resulting relativistic massive particles can be interpreted as quasi one-dimensional analogues of glueballs. In the continuum theory we compute the ratio of the lowest glueball masses with opposite charge conjugation to be $m^{-}/m^{+} = 1.459(2)$ and find $\sqrt{\sigma}/m^{+} = 0.2648(2)$, where $\sigma$ is the string tension between a static quark and antiquark.

On Utility-optimal Entanglement Routing in Quantum Networks

Sounak Kar, Arpan Mukhopadhyay

2603.01197 • Mar 1, 2026

QC: low Sensing: none Network: high

This paper develops optimization methods for routing quantum entanglement through quantum networks to maximize overall network utility. The authors formulate the problem as a Mixed-Integer Convex Program and propose computational heuristics to find optimal paths for distributing entangled quantum states across network users.

Key Contributions

  • Formulation of utility-optimal entanglement routing as a Mixed-Integer Convex Program with high accuracy
  • Development of randomized rounding heuristics and min-congestion routing alternatives for computational tractability
quantum networks entanglement routing quantum internet network optimization entanglement distribution
View Full Abstract

Quantum networks are envisioned to enable reliable distribution and manipulation of quantum information across distances, forming the foundation of a future quantum internet. The fair and efficient allocation of communication resources in such networks has been addressed through the quantum network utility maximization (QNUM) framework, which optimizes network utility under the assumption of predetermined routes for competing user demands. In this work, we relax this assumption and aim to identify optimal routes that correspond to the maximum achievable network utility. Specifically, we formulate the single-path utility-based entanglement routing problem as a Mixed-Integer Convex Program (MICP). The formulation is exact when negativity is chosen as the entanglement measure for utility quantification or the network supports sufficiently high entanglement generation rates across demands. For the other entanglement measures considered, the formulation approximates the problem with over 99.99% accuracy on the evaluated real-world examples. To improve computational tractability, we propose a randomized rounding-based heuristic and an upper bound via the relaxation of the MICP. Furthermore, based on min-congestion routing, we introduce an alternative randomized heuristic and upper bound. This heuristic is computationally faster, while both the heuristic and the upper bound often outperform their counterparts on the considered real-world networks. Our work provides the framework for extending classical flow-based and quality of service-aware routing concepts to quantum networks.
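For intuition, the single-path version of the problem can be brute-forced on a toy network: enumerate simple paths, score each with a concave utility, and keep the best. The sketch below uses a bottleneck-rate model and a log utility; both are modeling assumptions for illustration, not the paper's negativity-based utility or its MICP formulation:

```python
import math

# Toy four-node network: undirected links with entanglement
# generation rates (hypothetical values)
link_rates = {("A", "B"): 10.0, ("B", "D"): 4.0,
              ("A", "C"): 6.0,  ("C", "D"): 6.0}

rates, graph = {}, {}
for (u, v), r in link_rates.items():
    rates[(u, v)] = rates[(v, u)] = r
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

def simple_paths(src, dst, path=None):
    """Depth-first enumeration of all simple src-dst paths."""
    path = path or [src]
    if path[-1] == dst:
        yield path
        return
    for nxt in graph[path[-1]]:
        if nxt not in path:
            yield from simple_paths(src, dst, path + [nxt])

def utility(path):
    # End-to-end rate modeled as the bottleneck link rate (an assumption)
    rate = min(rates[(u, v)] for u, v in zip(path, path[1:]))
    return math.log(rate)

best = max(simple_paths("A", "D"), key=utility)
print("best route:", " -> ".join(best), "utility:", round(utility(best), 3))
```

Here the two-hop route through C beats the higher-capacity first hop through B because the bottleneck link B-D dominates; exhaustive enumeration like this scales exponentially, which is why the paper resorts to an MICP formulation and rounding heuristics.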

A high-performance quantum memory for quantum interconnects

H. -X Luo, C. Li, J. -L. Ren, Y. Yuan, Y. -L. Wen, J. -F. Li, Y. -F. Wang, S. -C. Zhang, H. Yan, S. -L. Zhu

2603.01156 • Mar 1, 2026

QC: medium Sensing: none Network: high

This paper demonstrates a high-performance quantum memory system that can store quantum information in 11-dimensional spatial modes with over 80% efficiency and 99% fidelity. The researchers show this memory could enable distribution of 3.56 bits of quantum information over 1000-km distances in one minute, representing a significant advance toward practical quantum networks.

Key Contributions

  • Introduction of quantum interconnect rate as a comprehensive metric for benchmarking quantum memories
  • Demonstration of high-performance multimode quantum memory with simultaneous optimization of capacity, efficiency, and fidelity
  • Achievement of >80% efficiency and >99% fidelity across 11-dimensional spatial modes
  • Practical pathway demonstration for scalable quantum repeaters over 1000-km distances
quantum memory quantum repeaters quantum networks multimode storage spatial modes
View Full Abstract

Single photons are the flying qubits of choice for distributing entanglement in a quantum internet. Quantum memories embedded in quantum repeaters are crucial to overcome transmission loss and enhance the rate of quantum communication. A multimode memory can further boost the channel capacity. However, benchmarking and building a practical quantum memory that simultaneously optimizes multiple performance metrics poses two key challenges. Here, we introduce quantum interconnect rate to comprehensively quantify quantum memories, and further demonstrate a high-performance quantum memory that simultaneously integrates three essential criteria at once: large multimode capacity, high efficiency, and high fidelity. Operating on 11-dimensional spatial modes, our memory achieves a uniform efficiency exceeding 80% and qubit storage fidelities above 99%, enabling the efficient storage of high-dimensional qudits. Based on these capabilities, we estimate a distribution of 3.56 bits of quantum information over a 1000-km repeater link in one minute, highlighting a practical pathway toward scalable quantum interconnects and quantum networks.

Ergotropy from Geometric Phases in a Dephasing Qubit

Fernando C. Lombardo, Paula I. Villar

2603.01129 • Mar 1, 2026

QC: medium Sensing: high Network: low

This paper studies how quantum phases (geometric and dynamic) relate to ergotropy (extractable work) in qubits experiencing decoherence from environmental coupling. The authors show that geometric phases can serve as indirect measurements of a quantum system's energy extraction capabilities.

Key Contributions

  • Established connection between geometric phases and ergotropy in open quantum systems
  • Demonstrated that geometric phases can probe energetic resources and work extraction capabilities
  • Provided method for indirect ergotropy measurement in superconducting circuits via geometric phase detection
geometric phase ergotropy decoherence open quantum systems superconducting circuits
View Full Abstract

We analyze the geometric phase and dynamic phase acquired by a qubit coupled to an environment through pure dephasing, establishing a direct connection between phase accumulation and ergotropy. We show that the dynamic phase depends solely on the incoherent ergotropy, reflecting its purely energetic origin. In contrast, the geometric phase exhibits a nontrivial dependence on both the coherent and incoherent contributions to the total ergotropy, encoding the interplay between coherence, dissipation, and energy extraction. By performing a perturbative expansion in the qubit-environment coupling strength, we demonstrate that, in the weak-coupling and long-time regime, the geometric phase becomes determined exclusively by the incoherent ergotropy, which coincides with the asymptotic value of the total ergotropy reached under decoherence. These results provide a clear physical distinction between dynamic and geometric phases in open quantum systems and establish geometric phases as sensitive probes of energetic resources. Furthermore, in superconducting circuit implementations, our findings suggest that the ergotropy of a two-level system could be inferred indirectly from geometric-phase measurements using standard techniques such as quantum state tomography.

Multipartite parity bounds and total correlation

James Tian

2603.01105 • Mar 1, 2026

QC: medium Sensing: medium Network: medium

This paper develops mathematical bounds for multipartite quantum observables by analyzing their parity structure and connecting these bounds to total correlation measures. It shows how quantum correlations beyond classical product states can be quantified and demonstrates how these correlations decay under local noise.

Key Contributions

  • Derivation of norm bounds for multipartite observables based on parity structure and commutator/anticommutator decomposition
  • Establishment of explicit lower bounds on total correlation that exceed product state thresholds
  • Analysis of correlation decay mechanisms under local depolarizing noise
multipartite entanglement quantum correlations total correlation tensor product spaces local noise
View Full Abstract

This paper studies multipartite observables formed from sums of local self-adjoint contractions on tensor product Hilbert spaces. The square of such a sum has a parity structure: after decomposing each local product into commutator and anticommutator parts, the odd parity terms cancel and only even parity contributions remain. This yields a norm bound in terms of a family of pairwise defect weights built from local commutator and anticommutator norms. These defect weights also control an information theoretic estimate. The excess of the observable expectation above the product state threshold is shown to necessarily carry a definite amount of total correlation. Under a natural $\ell^{2}$-type bound on each local family, this product state threshold becomes explicit, which leads to a fully explicit lower bound on total correlation. A simple depolarizing example illustrates the resulting decay mechanism under local noise.