Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically evaluated using AI, briefly summarized, and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
Updated automatically as new papers are published. The page shows one week of arXiv publishing (Sunday to Thursday); an archive of previous weeks is at the bottom.
g-tensor Optimization in Ge/SiGe Quantum Dots
This paper develops an optimization framework for engineering g-tensor properties in germanium/silicon-germanium quantum dots to improve hole-spin qubit performance. The researchers demonstrate how to reshape quantum well potentials by adjusting silicon concentration to suppress unwanted g-tensor components and achieve better qubit control.
Key Contributions
- Development of flexible optimization framework for g-tensor engineering in Ge/SiGe quantum dots
- Demonstration of heterostructure engineering approach to suppress in-plane g-tensor components for improved qubit reliability
View Full Abstract
Planar germanium heterostructures hosting hole-spin qubits are among the leading platforms for scalable semiconductor-based quantum computing. Yet, device performance is hindered by significant quantum dot variability, which leads to uncertainty in qubit energy levels and random orientations of the spin quantization axis. Tailored control of the g-tensor offers a strategy to overcome these limitations and achieve more reliable qubit operations. Here, we introduce a flexible optimization framework for engineering g-tensor properties. As a benchmark, we numerically obtain the optimal reshaping of the out-of-plane potential in a SiGe-Ge-SiGe quantum well to suppress the in-plane g-tensor components and realize the recently proposed gapless single-spin qubit encoding. This reshaping is achieved through heterostructure engineering, specifically by adjusting the silicon concentration within the quantum well, though the framework remains readily adaptable to alternative design objectives. Our results provide practical design principles for improving the tunability of the spin response, paving the way towards large-scale germanium-based quantum computers.
Branch-Resolved Characterization of Feed-Forward Error in Dynamic Teleportation via Classical Choi Shadows
This paper develops a new method to analyze errors in quantum teleportation circuits that use mid-circuit measurements and classical feed-forward operations. The researchers test different error correction strategies on superconducting quantum processors and show how their branch-resolved analysis reveals error patterns that traditional averaged measurements miss.
Key Contributions
- Framework for characterizing feed-forward error in dynamic circuit teleportation without losing branch-specific information
- Experimental validation of branch Choi operator reconstruction via entangled reference qubits
- Comparative analysis of three error mitigation approaches showing performance depends on measurement readout error rates
View Full Abstract
Mid-circuit measurement and classical feed-forward are essential primitives for dynamic-circuit teleportation on superconducting quantum processors. However, the error associated with measurement-conditioned corrective operations remains poorly understood when evaluated with respect to individual measurement branches. In this paper, we present a framework for characterizing feed-forward error in dynamic circuit teleportation without losing valuable information related to its behavior across separate branches. We analyze three approaches to applying measurement-conditioned corrections: (i) physical application, (ii) post-processing adjustments, and (iii) a mitigated physical application which utilizes Bit-Flip Averaging (BFA)-based Probabilistic Readout Error Mitigation (PROM). We experimentally reconstruct branch Choi operators via an entangled reference qubit, and validate our physical-application and post-processing Choi-shadow estimators against full tomography of the branch Choi operators. We perform experiments on two physical qubit layouts which differ greatly in mid-circuit measurement readout error, and observe a reversal in the relative order in branch qualities obtained from the post-processing and PROM mitigation strategies. In one physical layout with higher measurement readout error, the operational feed-forward penalty is relatively modest (approximately 0.02-0.03) and PROM produces higher branch qualities than post-processing for every branch. In a separate layout with lower readout error, the operational feed-forward penalty increases to roughly 0.07, and post-processing exceeds PROM for all branch qualities. Our characterization framework can reveal branch-specific error structure and mitigation behavior that state-of-the-art outcome-averaged analyses fail to expose.
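For context, a minimal sketch (not the authors' code) of the dynamic-circuit primitive being characterized: one-qubit teleportation with mid-circuit measurement and classical feed-forward, written here with Qiskit's `if_test`. The two measurement bits label the four branches whose Choi operators the paper reconstructs; strategy (ii) would drop the conditional gates and account for them branch by branch in classical post-processing instead.

```python
# Minimal sketch of one-qubit dynamic-circuit teleportation with classical
# feed-forward (the primitive whose branch-resolved errors the paper studies).
# Not the authors' code; qubit indices and the use of Qiskit's if_test are
# illustrative assumptions.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

q = QuantumRegister(3, "q")          # q0: state to teleport, q1-q2: Bell pair
m = ClassicalRegister(2, "m")        # mid-circuit measurement bits (branch label)
qc = QuantumCircuit(q, m)

qc.h(q[1]); qc.cx(q[1], q[2])        # prepare Bell pair on q1, q2
qc.cx(q[0], q[1]); qc.h(q[0])        # Bell-basis rotation on q0, q1
qc.measure(q[0], m[0])               # mid-circuit measurements
qc.measure(q[1], m[1])

# Strategy (i): physically applied, measurement-conditioned corrections.
with qc.if_test((m[1], 1)):          # bit from q1 controls the X correction
    qc.x(q[2])
with qc.if_test((m[0], 1)):          # bit from q0 controls the Z correction
    qc.z(q[2])
# Strategy (ii) would instead skip these gates and reinterpret later
# measurement outcomes in classical post-processing, branch by branch.
```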
High-Girth Regular Quantum LDPC Codes from Square-Base Hypergraph Products via CPM Lifts
This paper develops new quantum error-correcting codes called CSS LDPC codes with improved properties for protecting quantum information from errors. The researchers create codes with better structure (high girth, regularity) and demonstrate a specific code that successfully corrected errors in hundreds of millions of test cases.
Key Contributions
- Development of checkable conditions for designing regular high-girth quantum LDPC codes from hypergraph products
- Construction of specific girth-8 CSS-LDPC codes with demonstrated high performance in error correction simulations
- Analysis of CPM lifting techniques and identification of fundamental limits on achievable Tanner girth
View Full Abstract
We study square-base Calderbank-Shor-Steane (CSS) hypergraph-product codes as a finite-length class for regular high-girth quantum low-density parity-check (LDPC) design. For base matrices of small column weight, we give checkable conditions for regularity, rank deficiency, and short-cycle exclusion, and we present explicit column-weight-three and column-weight-four examples with Tanner girth 6 and 8. We also analyze circulant permutation matrix (CPM) lifts of this class. Using the standard voltage-sum criterion, we identify orthogonality-forced Tanner 8-cycles and show that CPM lifting cannot raise the Tanner girth beyond 8 when these cycles are present. As a representative finite-length instance, a randomized CPM lift of the girth-8 base construction gives a $[[28800,62]]$ girth-8 $(3,6)$-regular CSS-LDPC code. Under degeneracy-aware belief-propagation decoding with optional ordered-statistics-decoding-lite post-processing, this code produced zero decoding failures in $2.993\times 10^8$ independent trials at depolarizing probability $p=0.1402$; the Wilson 95% upper confidence bound is $1.28\times 10^{-8}$.
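For readers unfamiliar with the construction, the hypergraph product takes a classical parity-check matrix H and produces CSS check matrices that commute by construction. The sketch below uses a generic square toy matrix (not one of the paper's base matrices) to show the layout, the orthogonality check, and the resulting code parameters.

```python
# Sketch of the (square-base) hypergraph-product construction: from one classical
# parity-check matrix H, build CSS check matrices HX, HZ, verify HX HZ^T = 0 (mod 2),
# and count logical qubits. The toy base matrix is the 7x7 circulant of 1 + x + x^3
# (column weight 3), chosen for illustration; it is not one of the paper's matrices.
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = (M % 2).astype(np.uint8).copy()
    rank = 0
    for c in range(M.shape[1]):
        piv = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

size = 7
first_row = np.zeros(size, dtype=np.uint8)
first_row[[0, 1, 3]] = 1                                    # polynomial 1 + x + x^3
H = np.array([np.roll(first_row, s) for s in range(size)])  # circulant base matrix
m, n = H.shape
In, Im = np.eye(n, dtype=np.uint8), np.eye(m, dtype=np.uint8)

# Square-base hypergraph product: both factors use the same H.
HX = np.hstack([np.kron(H, In), np.kron(Im, H.T)]) % 2
HZ = np.hstack([np.kron(In, H), np.kron(H.T, Im)]) % 2

assert not ((HX.astype(int) @ HZ.T.astype(int)) % 2).any()  # CSS orthogonality
n_phys = HX.shape[1]                                        # n^2 + m^2 physical qubits
k = n_phys - gf2_rank(HX) - gf2_rank(HZ)                    # number of logical qubits
print(f"[[{n_phys}, {k}]] CSS code from the toy base matrix")
```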
Parametrically Driven iSWAP Gate Using a Capacitively Shunted Double-Transmon Coupler at the Zero-Flux Sweet Spot
This paper demonstrates a new type of quantum gate (iSWAP) for superconducting quantum computers using a parametrically driven approach with a capacitively shunted double-transmon coupler. The researchers achieved 99.92% gate fidelity in 112 nanoseconds without requiring complex pulse corrections, representing an improvement in quantum gate implementation for scalable quantum computing.
Key Contributions
- Demonstrated parametrically driven iSWAP gate with 99.92% fidelity using capacitively shunted double-transmon coupler
- Eliminated need for static flux biasing while maintaining high gate fidelity through zero-flux operation
- Validated theoretical models for both spectral and time-domain gate dynamics in superconducting quantum systems
View Full Abstract
A double-transmon coupler (DTC) enables a fast, high-fidelity CZ gate between two highly detuned, fixed-frequency transmon qubits. Moreover, a recently proposed capacitively shunted DTC (CSDTC) realizes a small residual ZZ interaction over a wide flux-bias range around zero flux, eliminating the necessity of static flux biasing while maintaining high CZ-gate fidelity. However, CZ gates with the DTC and CSDTC require baseband flux pulses with large amplitudes, which are vulnerable to pulse distortion and decoherence due to large qubit-coupler hybridization. To address these issues, we experimentally demonstrate a parametrically driven iSWAP gate operated at zero flux bias between highly detuned, fixed-frequency transmon qubits coupled through a CSDTC. Using a simple flux-drive waveform without predistortion, we realize an average gate fidelity of 99.92(2)% at a total gate time of 112 ns. The observed high-fidelity performance is consistent with small qubit-coupler hybridization and small effective ZZ interaction during the gate. Our numerical simulations reproduce the experimentally observed iSWAP interaction rate and effective ZZ interaction, demonstrating the applicability of the theoretical model not only to spectral information but also to time-domain dynamics such as gate operations. These results boost further progress in the research of superconducting quantum computers.
An Analytical Approach to Design Space Exploration for Cavity-Mediated Quantum State Transfer in Multi-core Architectures
This paper develops exact mathematical formulas to analyze how quantum information moves between qubits in multi-core quantum computers through waveguide connections. The analytical approach is much faster than computer simulations and reveals why some configurations have poor performance due to destructive interference effects.
Key Contributions
- Derived exact analytical expressions for quantum state transfer dynamics in waveguide-coupled qubit systems using Jaynes-Cummings Hamiltonian
- Identified and explained systematic low-fidelity regions caused by destructive interference between internal oscillations and detuning-induced envelopes
- Developed computational speedup method for large-scale parameter optimization in multi-core quantum processor design
View Full Abstract
In multi-core quantum computing architectures, waveguide-mediated interconnects are essential for facilitating fast, high-fidelity quantum state transfer between qubits located in different chips. However, optimizing these systems typically relies on computationally expensive numerical simulations that offer limited physical insight. In this work, we derive exact analytical expressions for the state transfer dynamics of a two-qubit system coupled via a waveguide, modeled through a Jaynes-Cummings Hamiltonian and the Lindblad master equation. We apply the Monte Carlo wave-function method and obtain a closed-form solution for qubit occupation probabilities that accounts for both detuning and dissipative losses. Our analytical framework provides a significant computational speedup compared to standard numerical solvers, enabling large-scale parameter sweeps while maintaining high precision in both fidelity and latency predictions. Furthermore, the model reveals and explains systematic low-fidelity regions arising from destructive interference between internal oscillations and detuning-induced envelopes, which are phenomena that are difficult to characterize through numerical means alone. Finally, we propose a simplified latency model and an efficiency-based function to enable rapid identification of optimal operating points. This analytical approach provides a robust foundation for the design and optimization of interconnects in multi-core quantum processors.
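To make the setting concrete, the model is two qubits exchanging an excitation through a shared waveguide/cavity mode under a Jaynes-Cummings Hamiltonian with photon loss, solved with the Monte Carlo wave-function method. The QuTiP sketch below is one such numerical baseline with illustrative parameters; the paper's closed-form expressions are meant to replace exactly this kind of sweep.

```python
# Numerical baseline for the setup treated analytically in the paper: two qubits
# exchanging an excitation through a shared waveguide/cavity mode under a
# Jaynes-Cummings Hamiltonian with photon loss, integrated with the Monte Carlo
# wave-function method. All parameter values below are illustrative assumptions.
import numpy as np
from qutip import basis, destroy, qeye, tensor, mcsolve

N = 5                                          # photon-number cutoff for the mode
a   = tensor(destroy(N), qeye(2), qeye(2))     # waveguide/cavity mode
sm1 = tensor(qeye(N), destroy(2), qeye(2))     # qubit-1 lowering operator
sm2 = tensor(qeye(N), qeye(2), destroy(2))     # qubit-2 lowering operator

wc, w1, w2 = 2*np.pi*6.00, 2*np.pi*6.00, 2*np.pi*6.02   # angular frequencies (illustrative)
g1, g2, kappa = 2*np.pi*0.05, 2*np.pi*0.05, 2*np.pi*0.002

H = (wc*a.dag()*a + w1*sm1.dag()*sm1 + w2*sm2.dag()*sm2
     + g1*(a.dag()*sm1 + a*sm1.dag()) + g2*(a.dag()*sm2 + a*sm2.dag()))

psi0 = tensor(basis(N, 0), basis(2, 1), basis(2, 0))     # excitation starts on qubit 1
tlist = np.linspace(0.0, 200.0, 400)
result = mcsolve(H, psi0, tlist, c_ops=[np.sqrt(kappa) * a],
                 e_ops=[sm1.dag()*sm1, sm2.dag()*sm2], ntraj=200)
p1, p2 = result.expect                                   # occupation probabilities vs time
```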
Magnonic Gottesman-Kitaev-Preskill states
This paper presents the first protocol for creating Gottesman-Kitaev-Preskill (GKP) quantum error correction states using magnons (collective spin excitations) in magnetic crystals coupled to superconducting qubits. The approach uses the natural geometric properties of ellipsoidal magnetic crystals and cavity-mediated qubit control to generate these error-protected quantum states.
Key Contributions
- First protocol for preparing magnonic Gottesman-Kitaev-Preskill states using hybrid magnon-qubit systems
- Demonstration of logical qubit gate operations (Pauli, Hadamard, phase gates) for the approximate GKP code
- Novel use of geometric anisotropy in magnetic crystals for intrinsic magnon mode squeezing
View Full Abstract
Bosonic quantum error correction encodes a logical qubit in an oscillator, avoiding the hardware overhead of large qubit arrays. Among such encodings, Gottesman-Kitaev-Preskill (GKP) states are particularly powerful because their phase-space grid structure protects against small displacement errors simultaneously in both conjugate quadratures. Here we provide the first protocol for preparing magnonic GKP states, which involves an ellipsoidal magnetic crystal effectively coupled to a superconducting qubit via a microwave cavity. The geometric anisotropy intrinsically squeezes the magnon mode, while the cavity-mediated qubit control realizes an effective conditional-displacement interaction. We show that two rounds of a conditional-displacement interaction and a qubit projective measurement yield three- and four-component magnonic GKP-like states. We also show how to realize single logical qubit gate operations, such as Pauli, Hadamard and phase gates, completing the logical Pauli basis of the approximate GKP code. Our results establish hybrid magnon-qubit systems as a promising platform for preparing bosonic code states, with applications in magnonic fault-tolerant quantum computation and quantum sensing.
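For reference, the textbook forms of the two ingredients (conventions may differ from the paper's): the ideal square-lattice GKP codewords, and the conditional displacement that, interleaved with qubit projective measurements, builds approximate grid states out of the magnon mode $\hat{m}$:

$$|0_{\mathrm{GKP}}\rangle \propto \sum_{s\in\mathbb{Z}} |\hat{q}=2s\sqrt{\pi}\rangle, \qquad |1_{\mathrm{GKP}}\rangle \propto \sum_{s\in\mathbb{Z}} |\hat{q}=(2s+1)\sqrt{\pi}\rangle,$$

$$\mathrm{CD}(\beta) = \exp\!\left[\tfrac{1}{2}\left(\beta\,\hat{m}^{\dagger}-\beta^{*}\hat{m}\right)\otimes\hat{\sigma}_{z}\right] = D(\beta/2)\otimes|e\rangle\langle e| + D(-\beta/2)\otimes|g\rangle\langle g|,$$

where $D(\alpha)$ is the displacement operator. Each round of CD followed by a qubit measurement adds grid components, which is how the two-round protocol reaches three- and four-component GKP-like states.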
Demonstration of Exponential Quantum Speedup with Constant-Depth Compiled Circuits for Simon's Problem
This paper demonstrates exponential quantum speedup for Simon's problem on current superconducting quantum processors by developing hardware-aware compilation techniques that create constant-depth circuits. The researchers achieved this speedup on IBM's 156-qubit and 120-qubit processors without requiring error correction, showing that careful circuit design can make quantum advantages experimentally accessible in the NISQ era.
Key Contributions
- Hardware-aware compilation strategy that produces constant-depth circuits for Simon's problem
- Experimental demonstration of exponential quantum speedup on current NISQ devices without error suppression
- Circuit designs with linear connectivity that map directly to common quantum device layouts
View Full Abstract
We demonstrate exponential quantum speedup for a restricted-Hamming-weight version of Simon's problem on present-day superconducting quantum processors by introducing a hardware-aware compilation strategy that compiles the quantum part of each Simon query circuit to constant depth. The resulting compiled circuits have $O(1)$ depth and linear connectivity, map directly onto common device layouts, and avoid additional routing and SWAP overhead. Implemented on IBM's $156$-qubit Boston and $120$-qubit Miami processors, the resulting circuits achieve sufficiently high fidelity to exhibit algorithmic quantum speedup without error suppression. Using the number-of-queries-to-solution metric, we observe exponential speedup over the classical lower bound across the full Hamming-weight range studied on Boston and across low-to-intermediate Hamming weights on Miami; at higher Hamming weights on Miami, we still observe polynomial speedup. The same construction also reaches a regime where the original Simon problem is recovered for the problem sizes studied. These results show that careful hardware-aware compilation can make exponential quantum speedup experimentally accessible for a canonical hidden-subgroup problem in the NISQ regime.
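A generic, uncompiled Simon query circuit for a toy hidden string illustrates the structure that the hardware-aware compiler flattens: Hadamards, an oracle for a 2-to-1 function with f(x) = f(x XOR s), Hadamards, then measurement of a string y with y·s = 0 (mod 2). This is the textbook construction, not the paper's constant-depth compilation.

```python
# Generic, uncompiled Simon query circuit for a toy hidden string s = "110".
# This is the textbook construction; the paper's contribution is a hardware-aware
# compilation of such query circuits to O(1) depth on linear connectivity.
from qiskit import QuantumCircuit

s = "110"                          # hidden string (illustrative), little-endian below
n = len(s)
qc = QuantumCircuit(2 * n, n)

qc.h(range(n))                     # superposition over all inputs x
for i in range(n):                 # oracle, step 1: copy x into the output register
    qc.cx(i, n + i)
j = s[::-1].index("1")             # pick one input bit where s is 1
for i, bit in enumerate(s[::-1]):  # oracle, step 2: XOR x_j * s into the output,
    if bit == "1":                 # so f(x) = f(x XOR s) (a 2-to-1 function)
        qc.cx(j, n + i)
qc.h(range(n))                     # interfere; every measured y obeys y . s = 0 (mod 2)
qc.measure(range(n), range(n))
```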
MCMit: Mid-Circuit Measurement Error Mitigation
This paper presents MCMit, a hardware-software system to reduce errors in quantum circuits that use mid-circuit measurements and classical feedback. The approach combines faster hardware control instructions with machine learning-based measurement accuracy improvements and software error mitigation techniques.
Key Contributions
- Hardware-software co-design with constant-latency multi-control branch instruction reducing feedback latency by up to 70%
- Machine learning discriminators (transformer and CNN) achieving 37-73% higher accuracy for short measurement durations
- Software mitigation techniques including static MCM elimination and stochastic branching improving fidelity by 18-30%
View Full Abstract
Distributed Quantum Computing (DQC) and Quantum Error Correction (QEC) rely on dynamic circuits that include Mid-Circuit Measurements (MCMs) and classical feedback. These operations present a major bottleneck: MCMs suffer from high error rates that lead to real-time branching errors, while MCM and classical feedback latencies amplify decoherence errors. Current hardware controllers, qubit-state discriminators, and software error mitigation techniques fail to address these challenges holistically. We propose MCMit, a hardware-software co-design to mitigate branching and latency-induced errors. MCMit introduces a scalable, constant-latency multi-control branch instruction for faster classical feedback and two qubit-state discriminators (a transformer and a CNN) with high accuracy even under short measurement durations. On the software side, static MCM elimination and stochastic branching complement the hardware by mitigating residual branching errors that persist despite hardware improvements. We implement MCMit on Qubic and evaluate it using experimentally extracted QPU readout traces. Our branch instruction reduces feedback latency by up to 70%, improving circuit depths by up to $7\times$ over Qubic. Our CNN discriminator achieves 37-73% higher accuracy for short measurement durations than the baselines, leading to up to 80% lower logical error rates in QEC. Last, our software mitigation improves fidelity by 18-30% over baseline methods.
Minimum Toffoli depth for the multi-controlled Toffoli gate via teleportation
This paper presents a new method to implement multi-controlled Toffoli gates using quantum teleportation that achieves constant depth regardless of the number of control qubits, trading off additional ancilla qubits and entanglement distribution for improved circuit depth performance.
Key Contributions
- Novel teleportation-based decomposition achieving unit Toffoli depth for MCT gates independent of control count
- Demonstration of improved performance for quantum algorithms including adders, quantum RAM, quantum neurons, and decision trees
View Full Abstract
The decomposition of complex quantum operations into experimentally feasible gate sets has been a central challenge since the early development of quantum computing. The multi-controlled Toffoli (MCT) gate is a key example, with applications across a wide range of quantum algorithms, whose decomposition into smaller gates, however, typically leads to deep circuits. In this work, we introduce a teleportation-based decomposition that implements an arbitrary MCT gate with unit Toffoli depth, independent of the number of controls, while maintaining a relatively low Toffoli count compared to existing approaches. This is achieved at the cost of a linear overhead in ancilla qubits and the ability to distribute entangled pairs across distant qubits, a capability already available in several quantum computing platforms. We further demonstrate the advantages of this implementation in circuits that rely on MCT gates, such as the adder operator, quantum read-only memory, quantum neurons, and quantum decision trees.
Proof of the Error Scaling for Universally Robust Dynamical Decoupling Sequences
This paper provides a rigorous mathematical proof that universally robust dynamical decoupling (URn) sequences achieve nth-order error suppression while using only a linear number of pulses. The work establishes the theoretical foundation for these quantum control techniques that compensate for pulse imperfections in quantum systems.
Key Contributions
- Rigorous mathematical proof of error scaling for URn dynamical decoupling sequences
- Derivation of necessary and sufficient conditions for high-order error cancellation in quantum control
View Full Abstract
Universally robust dynamical decoupling (UR$n$) sequences were proposed to compensate pulse imperfections arising from arbitrary experimental parameters while achieving high-order error suppression with only a linear increase in the number of pulses. Although their performance was supported by analytical arguments, numerical simulations, and experiments, a complete mathematical proof of the claimed order of error compensation has been absent. In this work, we present a rigorous proof for UR$n$ DD sequences with even $n$. Using a series expansion of a quantity whose modulus is the fidelity $F$, we derive necessary and sufficient conditions for the cancellation of its coefficients up to, but not including, order $n$. The UR$n$ phase prescription satisfies these conditions, and therefore $1-F=O(\epsilon^n)$. Our results establish the UR$n$ construction on firm analytical grounds and clarify the structure responsible for its high-order robustness.
The mixed-dimensional quantum MacWilliams identity: bounds for codes and absolutely maximally entangled states in heterogeneous systems
This paper develops a mathematical framework for quantum error correction and entangled states in mixed-dimensional quantum systems that combine different types of quantum systems (like qubits and qudits). The authors derive new theoretical bounds and identities that characterize how well these heterogeneous quantum networks can protect and distribute quantum information.
Key Contributions
- Introduction of dimension multisets framework for characterizing quantum error-correcting codes in mixed-dimensional Hilbert spaces
- Derivation of the mixed-dimensional quantum MacWilliams identity establishing algebraic relationships between error correction enumerators
- Formulation of generalized quantum bounds (Hamming, Singleton, Scott) for mixed-dimensional systems
- Development of combinatorial methods for constructing mixed-dimensional absolutely maximally entangled states
View Full Abstract
As emerging quantum architectures evolve into heterogeneous networks combining different physical substrates, such as qubits for logic and higher-dimensional qudits for robust communication, the traditional scalar metrics of quantum error correction become insufficient. To address this, we introduce a mathematical framework based on dimension multisets to characterize quantum error-correcting codes (QECC) and absolutely maximally entangled (AME) states in mixed-dimensional Hilbert spaces. By replacing scalar weights with multisets, we accurately capture the exact physical composition of error supports across these diverse systems. Our central result is the mixed-dimensional quantum MacWilliams identity, which establishes the formal algebraic relationship between Shor-Laflamme enumerators and unitary weight enumerators. From this foundation, we deduce the mixed-dimensional shadow identity and derive rigorous, generalized constraints on code parameters, explicitly formulating the mixed-dimensional quantum Hamming, Singleton and Scott bounds, and developing a linear program to systematically evaluate code viability. For the Singleton bound, a tighter bound that has no homogeneous analogue is derived for pure mixed-dimensional codes. Finally, we deploy this enumerator machinery to thoroughly analyze AME states, utilizing shadow inequalities to constrain their existence and introducing a combinatorial grid method for the explicit construction of mixed-dimensional tripartite AME states.
Quantum Error Correction Exploiting Quantum Spatial Distribution and Gauge Symmetry
This paper presents a novel quantum error correction scheme that combines quantum spatial distribution (superposition of spin and position states) with gauge symmetry within stabilizer formalism, using a 5-particle system arranged on nested squares where 3 particles encode Shor's nine-qubit code and 2 particles detect errors.
Key Contributions
- Development of unified noise model covering spin decoherence, position decoherence, and dephasing with proven correctability under gauge symmetry protection
- Demonstration of scalable quantum error correction architecture with nearest-neighbor interactions enabling implementation of logical Hadamard, Toffoli gates and quantum adder
View Full Abstract
We explore what the integrated use of quantum spatial distribution (QSD), or more specifically, superposition of both spin and position states of particles, and gauge symmetry (GS) within stabilizer formalism provides for quantum error correction. The exploration employs $3+2$ particles on nested squares proposed in the companion letter (arXiv:2504.07941), where three of them encode Shor's nine-qubit code and the remaining two detect errors in this code through their spin state measurements (unlike the letter's quantum walk model, each particle evolves by gate operations acting exclusively on either its spin or position state). The first result is that the GS offers resilience against three types of noise acting on a particle: arbitrary decoherence of its spin or position state, and dephasing of both states, which partly or completely destroys its QSD. To show that, we formulate a noise model unifying the above noise and prove the correctability of this unified model under our error-correcting scheme. The second result is that QSD provides architectural flexibility allowing us to stack the error-correcting systems both vertically and horizontally. Indeed, we show implementations of the error detection (stabilizer measurement), logical Hadamard and Toffoli gates, and a quantum adder with the required interactions only between nearest-neighbor and next-nearest-neighbor particles.
Defect-Adaptive Lattice Surgery on Irregular Boundary Surface-Code Patches
This paper develops methods for performing logical operations (specifically lattice surgery) on quantum error-correcting surface codes when the hardware has defects or irregular boundaries. The authors create a mathematical framework to adapt fault-tolerant quantum operations to work on imperfect, non-uniform quantum computing hardware.
Key Contributions
- Development of defect-adaptive lattice surgery methods for irregular surface-code patches
- Introduction of certified parity synthesis as a compilation layer for fault-tolerant operations on imperfect hardware
- Mathematical framework for reconstructing logical operations from seam-related measurements on deformed quantum error correction codes
View Full Abstract
Defect-adaptive surface-code methods have substantially advanced the construction of valid logical patches on imperfect hardware, but fault-tolerant computation also requires executable logical operations on the resulting irregular geometries. We formulate the seam-boundary defect problem: how to perform a lattice-surgery merge when the intended seam intersects deformed boundaries, disabled checks, and gauge-inferred super-stabilizers. We introduce a defect-adaptive lattice-surgery method that reconstructs the target joint logical parity from the seam-related measurements available on the irregular merged patch, together with constraints inherited from the separated pre-merge code space. The reconstruction is expressed as a compact GF(2) binary-support synthesis problem. If the requested parity is realizable, the solution gives an executable parity-extraction rule over raw, schedule-tagged gauge outcomes; otherwise, it certifies a parity-synthesis failure rather than conflating it with patch invalidity. The framework accommodates boundary data-qubit defects, seam-check ancilla defects, and gauge-inferred seam super-checks within a single synthesis layer. Circuit-level samples of the synthesized merge operation show improved compile yield, preserved effective distance, and only modest success-conditioned logical-error overhead relative to the defect-free merge reference; an explicit ZZ-merge sampling check confirms the expected transposed-geometry behavior under the same success-conditioned observable construction. More broadly, the results identify certified parity synthesis as a compilation layer between defect-adaptive patch construction and executable fault-tolerant logical operations on imperfect surface-code hardware.
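The GF(2) step can be pictured with a toy feasibility check: decide whether a requested joint parity lies in the span of the parities actually measurable on the deformed patch, return the combination of measurements if it does, and certify failure if it does not. The sketch below is generic linear algebra over GF(2) with placeholder data, not the paper's stabilizer bookkeeping.

```python
# Toy GF(2) synthesis check: is a requested logical parity (row vector `target`)
# in the row span of the available seam/gauge measurement supports `rows`?
# If yes, return which measurements to multiply together; if not, report a
# certified synthesis failure. The data below are placeholders, not the paper's
# stabilizer structure.
import numpy as np

def gf2_synthesize(rows: np.ndarray, target: np.ndarray):
    """Solve x^T rows = target over GF(2); return measurement indices or None."""
    A = np.concatenate([rows.T % 2, (target % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    n_meas = rows.shape[0]
    pivots, r = [], 0
    for c in range(n_meas):                      # Gaussian elimination over GF(2)
        pivot = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        pivots.append(c)
        r += 1
    if any(A[i, -1] for i in range(r, A.shape[0])):
        return None                              # certified parity-synthesis failure
    return [c for i, c in enumerate(pivots) if A[i, -1]]

rows   = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])  # available parities
target = np.array([1, 0, 0, 1])                                 # requested joint parity
print(gf2_synthesize(rows, target))   # -> [0, 1, 2]: multiply all three outcomes
```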
Simon's Algorithm for the Even-Mansour Cipher on Quantum Hardware
This paper demonstrates a practical implementation of Simon's algorithm to break the Even-Mansour cipher on IBM quantum hardware, successfully recovering secret keys for small bit lengths (N=3 and N=4). The researchers identified scaling limitations in classical preprocessing tools that prevent attacks on larger key sizes.
Key Contributions
- First practical demonstration of Simon's algorithm attacking Even-Mansour cipher on NISQ hardware
- Identification of classical preprocessing bottlenecks that limit scalability to larger key sizes
- Proof-of-concept quantum cryptanalysis results on IBM quantum processors
View Full Abstract
Simon's algorithm is a polynomial period-finding algorithm that has been used to exploit the algebraic structure of specific symmetric ciphers, showing that exponential speedups in their cryptanalysis are theoretically possible. While the theoretical framework for an attack using Simon's algorithm on the Even-Mansour cipher is well-established, practical implementations on noisy intermediate-scale quantum (NISQ) hardware remain limited. This paper presents a proof-of-concept quantum cryptanalysis of the Even-Mansour cipher using Simon's period-finding algorithm on NISQ hardware. For N = 3 and N = 4, we successfully demonstrate secret key recovery for N-bit constructions on the ibm_miami processor. Our experiments also identify a scaling limitation in the classical pre-processing stage: the DORCIS circuit optimization tool encountered a memory bottleneck at N = 5, preventing the generation of optimized circuits for larger key lengths. Our results suggest, firstly, that Simon's algorithm is effective for the Even-Mansour cipher for short bit lengths on current quantum hardware. Secondly, while DORCIS is effective for the small-scale S-boxes for which it was designed, there remains a need for the investigation of more scalable and efficient synthesis tools capable of handling larger and more general permutations in the context of Even-Mansour ciphers.
Quantum-Accelerated Gowers $U_2$ Norm for Bent Boolean Functions
This paper develops a quantum-classical hybrid genetic algorithm that uses quantum circuits to efficiently evaluate the Gowers U2 norm for finding bent Boolean functions. The quantum approach requires only polynomial resources compared to exponential classical computation, providing a significant speedup for problems with more than 25 variables.
Key Contributions
- Quantum circuit for efficient Gowers U2 norm evaluation requiring only 3n qubits and O(n²) gates
- Demonstration of exponential quantum speedup over classical methods for bent Boolean function construction
View Full Abstract
Bent Boolean functions, extremal objects that maximally resist affine approximation, are notoriously hard to construct for large numbers of variables. We propose a hybrid quantum-classical genetic algorithm (GA) that uses a quantum circuit to evaluate the Gowers $U_2$ norm as the evolutionary fitness function. Our central contribution is a complexity-theoretic separation: the quantum evaluation circuit requires only $3n$ qubits and $O(n^2)$ two-qubit gates per function query, whereas the classical computation of the exact Gowers $U_2$ norm demands $O(2^{2n})$ arithmetic operations, an exponential overhead that renders it infeasible for $n \gtrsim 25$. We validate the framework on $n=6$ and $n=8$ variable systems. For $n=8$, our classical GA run extended to 1000 generations achieves best fitness $U_2 = 0.250000$, exactly the theoretical bent threshold $2^{-n/4}$, with average fitness $0.257267$, confirming that the Gowers $U_2$ norm is a superior fitness criterion over Walsh-Hadamard spectral flatness. Quantum-assisted evaluation faithfully reproduces the classical trajectory up to finite-sampling noise, and our complexity analysis demonstrates that for $n > 25$ the quantum evaluator provides a decisive computational advantage on fault-tolerant hardware.
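Classically (and still at exponential cost in $n$), the Gowers $U_2$ norm of $F=(-1)^f$ can be evaluated from the Walsh spectrum as $\|F\|_{U_2} = \big(\sum_w |\hat{F}(w)|^4\big)^{1/4}$, and a bent function attains exactly the threshold $2^{-n/4}$ quoted above. A small self-contained check on a 4-variable bent function, independent of the paper's quantum evaluator:

```python
# Classical check, exponential in n (unlike the paper's quantum evaluator): the
# Gowers U2 norm of F = (-1)^f equals (sum_w |F_hat(w)|^4)^(1/4), and a bent
# function attains the threshold 2^(-n/4). Toy case: the n = 4 bent function
# f(x) = x0*x1 XOR x2*x3.
import numpy as np

n = 4
xs = np.arange(2**n)
bits = (xs[:, None] >> np.arange(n)) & 1                # bit i of every input x
f = (bits[:, 0] & bits[:, 1]) ^ (bits[:, 2] & bits[:, 3])
F = (-1.0) ** f                                          # the +/-1 signal (-1)^f

# Walsh-Hadamard spectrum: F_hat(w) = 2^-n * sum_x F(x) * (-1)^(w.x)
signs = np.array([[(-1.0) ** bin(x & w).count("1") for x in xs] for w in xs])
F_hat = signs @ F / 2**n

u2 = np.sum(F_hat**4) ** 0.25
print(u2, 2 ** (-n / 4))                                 # both 0.5 for this bent function
```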
A graph-aware bounded distance decoder for all stabilizer codes
This paper develops a new error correction decoder for quantum computers that works with all types of stabilizer codes by representing quantum error syndromes as graphs. The decoder can correct quantum errors up to a specified weight limit and includes optimizations to reduce computational complexity through strategic graph pruning.
Key Contributions
- Universal bounded distance decoder applicable to all stabilizer codes using graph-based representation
- Strategic pruning algorithm with feed-forward network structure to reduce decoder runtime
- Open-source QGDecoder library for implementing graph-aware decoding of arbitrary stabilizer codes
View Full Abstract
We formulate a bounded distance decoding strategy applicable to all stabilizer codes including both CSS and non-CSS code-families. The framework emerges out of the local Clifford equivalence between arbitrary stabilizer states and graph states. Using the graphical representation of the stabilizers and the syndromes, we constitute the bounded distance decoding as an adaptable generalization of maximum likelihood decoding, ensuring correction of all errors with weights upper bounded by a target weight. We show that strategic pruning associated with a feed-forward network structure of the graph can reduce the search space and subsequently the runtime of the designed decoder. We demonstrate satisfactory performance of the bounded distance decoder in the case of the optimal non-CSS codes up to distance $d=11$ subjected to the depolarizing error on all qubits, and near-optimal decoding for the color and the surface codes, both belonging to the CSS family, under the bit-flip errors on the qubits. We also develop an open-source library, QGDecoder, enabling the graph-aware bounded distance decoding of arbitrary stabilizer codes.
Sign Embedding Quantum Algorithms for Matrix Equations and Matrix Functions
This paper develops quantum algorithms for solving matrix equations and computing matrix functions using a novel 'sign embedding' approach. The method embeds target matrices into larger augmented matrices whose matrix sign function contains the desired solution, achieving efficient quantum computation for various linear algebra problems including Sylvester equations and matrix square roots.
Key Contributions
- Development of systematic sign-embedding framework for quantum matrix algorithms
- Logarithmic-sinc approximation method for half-plane sign operators with structure-aware multiplexing
- Linear query complexity algorithms for Sylvester equations under non-normal matrix conditions
- Extension to multiple matrix problems including Lyapunov equations, matrix square roots, and Riccati equations
View Full Abstract
We develop a systematic sign-embedding framework of operator-output quantum algorithms for matrix equations and matrix functions. Differing from the contour-integral treatment, we start with the matrix-sign embedding route: an augmented matrix $M$ whose half-plane matrix sign compresses the target operator either as a block of $\text{sign}(M)$ or, in projector form, through $(I-\text{sign}(M))/2$; we then construct a logarithmic-sinc approximation for the half-plane sign operator and combine it with structure-aware scaled multiplexing and nodewise rebalancing of shifted inverse families. For ordinary Sylvester equations, we offer an explicit block-encoding of the target matrix solution with query complexity linear in the inverse-conditioning parameters and logarithmic in the target error tolerance, under non-normal and non-diagonalizable settings given a field-of-values (FoV) gap or strip-resolvent hypotheses. These algorithms propagate the same overlap-based normalization bookkeeping to ordinary and generalized Sylvester equations, generalized Lyapunov equations, principal square roots and inverse square roots, matrix geometric means, and continuous-time algebraic Riccati equations (CARE). These results identify matrix-sign embeddings and nodewise rebalancing as reusable design principles for structured operator-output quantum linear algebra.
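The classical identity behind the embedding route is Roberts' construction: for a Sylvester equation $AX + XB = C$ with the spectra of $A$ and $B$ in the open right half-plane,

$$\mathrm{sign}\!\begin{pmatrix} A & C \\ 0 & -B \end{pmatrix} = \begin{pmatrix} I & 2X \\ 0 & -I \end{pmatrix},$$

so the solution sits in an off-diagonal block of the matrix sign. The SciPy check below uses generic random matrices; the paper's contribution is realizing such embeddings with block-encodings and a logarithmic-sinc approximation on a quantum computer, not this classical computation.

```python
# Classical sanity check of the sign-embedding identity the paper builds on:
# for AX + XB = C with spec(A), spec(B) in the open right half-plane,
#   sign([[A, C], [0, -B]]) = [[I, 2X], [0, -I]].
# Random well-conditioned matrices here, purely for illustration.
import numpy as np
from scipy.linalg import signm, solve_sylvester

rng = np.random.default_rng(7)
k = 4
A = np.eye(k) * 3 + 0.2 * rng.standard_normal((k, k))    # spectrum near +3
B = np.eye(k) * 2 + 0.2 * rng.standard_normal((k, k))    # spectrum near +2
C = rng.standard_normal((k, k))

M = np.block([[A, C], [np.zeros((k, k)), -B]])            # augmented matrix
S = signm(M)
X_from_sign = S[:k, k:] / 2                               # top-right block of sign(M)

X_direct = solve_sylvester(A, B, C)                       # classical reference solution
print(np.allclose(X_from_sign, X_direct, atol=1e-6))      # True
```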
Millikelvin digital-to-analog converter for superconducting quantum processors
This paper demonstrates a superconducting digital-to-analog converter that operates at millikelvin temperatures and can be integrated directly with quantum processors. The device allows for precise control of qubit parameters without requiring individual room-temperature control lines, potentially solving major scaling challenges for large quantum computers.
Key Contributions
- Demonstration of millikelvin-temperature superconducting DACs integrated with high-coherence fluxonium qubits
- SFQ-programmable digital interface that eliminates need for individual room-temperature DC bias lines
- Multi-chip module architecture enabling scalable qubit parameter control without coherence degradation
View Full Abstract
Scaling superconducting quantum processors is increasingly constrained by the wiring, heat load, and calibration overhead associated with delivering high-resolution analog signals from room temperature to qubits at millikelvin temperature. Here we demonstrate a superconducting digital-to-analog converter (DAC) integrated with high-coherence fluxonium qubits in a multi-chip module architecture. The DACs generate persistent analog flux signals for tuning qubit parameters and are programmed deterministically using single-flux-quantum (SFQ) pulses, providing a digital interface compatible with established SFQ routing and demultiplexing technologies. Operating at millikelvin temperature, the DACs enable in-situ tuning of fluxonium qubits without measurable degradation of qubit coherence. The presented device provides a static control primitive for flux-tunable qubits, enabling parameter homogenization and eliminating the need for individual room-temperature DC bias lines. These results establish SFQ-programmable millikelvin DACs as a building block for digitally controlled superconducting quantum processors.
CAbLECAR: efficiently scheduling QLDPC codes on a tileable spin qubit chip with shuttling
This paper develops an efficient scheduling algorithm called CAbLECAR for implementing quantum low-density parity check (QLDPC) error correction codes on spin qubit processors that can shuttle qubits around the chip. The researchers show their approach can dramatically improve error rates and encoding efficiency compared to traditional surface codes by enabling long-range qubit interactions through optimized shuttling.
Key Contributions
- Development of CAbLECAR coordinated shuttle scheduling algorithm that extends feasible shuttling range by 5-10x
- Demonstration that optimized QLDPC codes on shuttling architectures can improve upon surface codes by orders of magnitude in encoding efficiency and logical error rates
View Full Abstract
Semiconductor spin qubits are a promising platform for large-scale quantum computing, but have yet to take full advantage of the broad class of quantum low-density parity check (QLDPC) codes, which promise high encoding rates and efficient logic but require nonlocal connectivity between physical qubits. In this work, we investigate the implementation of QLDPC codes on a tileable, shuttling-based spin qubit architecture. By tailoring syndrome extraction circuits to the shuttling noise model, we significantly improve on previous surface code proposals and extend the feasible shuttling range of the architecture by 5-10x, enabling the implementation of more complex codes with long-range interactions. Taking inspiration from the field of robotics, we develop a coordinated shuttle scheduling algorithm that supports arbitrary codes and use it to benchmark the logical performance of a variety of promising code families. We find that the optimized schedules are up to 86% faster than hand-optimized schedules for certain code families. Through detailed circuit-level simulations, we identify specific QLDPC codes that improve upon prior surface code implementations by orders of magnitude, increasing encoding efficiency and reducing logical error rates. This work demonstrates the potential of shuttling-based spin qubit hardware platforms for scalable and efficient fault-tolerant quantum computation.
DiffQEC: A versatile diffusion model for quantum error correction
This paper presents DiffQEC, a new quantum error correction decoder that uses diffusion models to generate multiple error correction hypotheses instead of just one. The approach improves error correction performance by 5-10% compared to existing methods and provides confidence estimates for the corrections.
Key Contributions
- Introduction of diffusion models for quantum error correction decoding
- Demonstrated 5-10% improvement in logical error rates over existing decoders
- Generative approach that provides confidence estimates and reveals error structure
- Validation on experimental data from Google's superconducting quantum processor
View Full Abstract
Quantum computers could solve problems beyond the reach of classical devices, but this potential depends on quantum error correction (QEC) to protect fragile quantum states from noise. A central challenge in QEC is decoding: inferring likely physical errors from syndrome patterns generated by repeated stabilizer measurements. Existing decoders, including graph-based and neural approaches, typically return a single correction hypothesis and therefore discard the richer posterior structure of the error distribution conditioned on the observed syndrome. Here we recast QEC decoding as posterior inference using discrete denoising diffusion, exploiting the analogy between stochastic error accumulation and the forward diffusion process. We introduce DiffQEC, a generative decoder that combines a syndrome processor for multi-round spatial-temporal syndrome histories with syndrome feature modulation to condition denoising on the observed syndrome throughout inference. On experimental data from Google's superconducting quantum processor, DiffQEC reduces logical error rates by up to 10.2% relative to minimum-weight perfect matching and by about 5% relative to tensor-network decoding. These improvements persist for larger code distances up to 17 under depolarizing noise and for logical circuits of increasing depth. Beyond accuracy, the learned posterior provides confidence estimates for post-selection and reveals physically meaningful error structure, establishing posterior generative decoding as a practical framework for QEC.
GSC-QEMit: A Telemetry-Driven Hierarchical Forecast-and-Bandit Framework for Adaptive Quantum Error Mitigation
This paper presents GSC-QEMit, an adaptive framework that uses machine learning to automatically adjust quantum error mitigation strategies in real-time based on changing noise conditions in quantum devices. The system combines telemetry monitoring, noise forecasting, and intelligent decision-making to optimize the trade-off between quantum computation accuracy and computational overhead.
Key Contributions
- Novel adaptive quantum error mitigation framework that dynamically adjusts mitigation strategies based on real-time noise telemetry
- Integration of hierarchical clustering, Gaussian process forecasting, and contextual bandits for intelligent mitigation policy selection
- Demonstration of 9.0% fidelity improvement while reducing computational overhead through selective heavy intervention deployment
View Full Abstract
Quantum error mitigation (QEM) is essential for extracting reliable results from near-term quantum devices, yet practical deployments must balance mitigation strength against runtime overhead under time-varying noise. We introduce GSC-QEMit, a telemetry-driven, context-forecast-bandit framework for adaptive mitigation that switches between lightweight suppression and heavier intervention as drift evolves. GSC-QEMit composes three coupled modules: (G) a Growing Hierarchical Self-Organizing Map (GHSOM) that clusters streaming telemetry into operating contexts; (S) an uncertainty-aware subsampled Gaussian-process forecaster that predicts short-horizon fidelity degradation; and (C) a cost-aware contextual multi-armed bandit (CMAB) that selects mitigation actions via Thompson sampling with explicit intervention cost. We evaluate GSC-QEMit on benchmark circuit families (GHZ, Quantum Fourier Transform, and Grover search) under nonstationary noise regimes simulated in Qiskit Aer, using an instrumented testbed where action labels correspond to graded mitigation intensity. Across Clifford, non-Clifford, and structured workloads, GSC-QEMit improves average logical fidelity by +9.0% relative to unmitigated execution while reducing unnecessary heavy interventions by reserving them for inferred noise spikes. The resulting policies exhibit a favorable fidelity-cost trade-off and transfer across the evaluated workloads without circuit-specific tuning.
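A minimal sketch of the decision layer (module C) as described: cost-aware Thompson sampling over graded mitigation arms. The Beta reward model, the cost values, the way cost enters the arm score, and the simulated fidelities below are invented placeholders, not GSC-QEMit's implementation, and the context conditioning provided by the GHSOM and forecaster modules is omitted.

```python
# Minimal cost-aware Thompson-sampling loop over graded mitigation "arms"
# (e.g., none / light / heavy), in the spirit of GSC-QEMit's (C) module.
# The Beta reward model, the cost values, and the simulated fidelities are
# illustrative placeholders, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
arms = ["no-mitigation", "light", "heavy"]
cost = np.array([0.0, 0.05, 0.25])            # relative runtime overhead (assumed)
alpha = np.ones(len(arms))                    # Beta posterior: successes + 1
beta = np.ones(len(arms))                     # Beta posterior: failures + 1

def run_circuit(arm: int, noise: float) -> float:
    """Stand-in for executing the workload; returns a fidelity in [0, 1]."""
    gain = [0.0, 0.05, 0.15][arm]             # stronger mitigation recovers more fidelity
    return float(np.clip(1.0 - noise + gain + 0.02 * rng.standard_normal(), 0.0, 1.0))

for step in range(200):
    noise = 0.1 + 0.1 * (step > 120)          # a simulated noise spike after step 120
    samples = rng.beta(alpha, beta) - cost    # Thompson draw, penalized by cost
    arm = int(np.argmax(samples))
    fidelity = run_circuit(arm, noise)
    reward = fidelity > 0.88                  # threshold success (placeholder criterion)
    alpha[arm] += reward
    beta[arm] += 1 - reward
```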
Adaptive Tensor Network Sampling for Quantum Optimal Control
This paper introduces a new gradient-free optimization method for quantum optimal control that uses tensor networks (matrix product states) to efficiently search for high-quality control sequences. The method iteratively refines a probability distribution over control parameters to find optimal quantum operations like gates and state transfers.
Key Contributions
- Novel tensor network sampling approach for gradient-free quantum optimal control
- Demonstrated competitive performance on benchmark quantum control tasks including gate synthesis and state transfer
View Full Abstract
Quantum optimal control (QOC) provides a systematic framework for achieving high-fidelity operations in quantum systems and plays a central role in tasks such as gate synthesis, state transfer, and pulse design. Existing QOC methods broadly fall into two categories: gradient-based and gradient-free algorithms. The associated optimization landscape is often high-dimensional, non-convex, and populated by numerous local minima, making efficient gradient-free search strategies essential. To address this, we introduce a gradient-free matrix product state/tensor train (MPS/TT) sampling heuristic for discrete quantum optimal control. In our approach, the MPS defines a score function over the space of discrete control parameters, which in turn induces a sampling distribution over candidate control sequences. This distribution is iteratively refined through selection of better performing sequences and local tensor updates to bias the search toward high-performing sequences. We evaluate the method on a range of benchmark problems, including single-qubit state transfer, Bell-pair preparation, qutrit gate implementation, and open-system population transfer. Across these tasks, the method exhibits stable convergence behavior and competitive empirical performance relative to established gradient-free baselines. These results suggest that tensor network sampling offers a viable heuristic framework for discrete quantum control.
Noise-aware selection of circuit cutting strategies under hardware noise non-uniformity
This paper develops a method for cutting large quantum circuits into smaller pieces that can run on today's noisy quantum computers by strategically avoiding the noisiest parts of the hardware. The approach reduces the computational overhead of circuit cutting by 5-54x while maintaining low noise, making it practical to run larger quantum algorithms on current devices.
Key Contributions
- Hardware-noise-aware circuit cutting framework that exploits spatial non-uniformity of noise in quantum devices
- Demonstration of 5-54x reduction in execution overhead for 20-qubit circuits and tractable cutting for 50-qubit circuits
- Unified gate- and wire-cutting formulation with systematic device-constraint selection methodology
View Full Abstract
Noise in contemporary quantum hardware is highly non-uniform across qubits and couplers, giving rise to localized low-noise "islands" within otherwise noisy device topologies. As quantum workloads scale, executions are increasingly forced to traverse high-noise regions, degrading algorithmic fidelity. Circuit cutting provides a route to circumvent such regions by decomposing large circuits into smaller subcircuits, but its practicality is limited by exponential sampling overhead and the lack of systematic guidance on how cut strategies should align with heterogeneous hardware noise. In this work, we present a hardware-noise-aware circuit cutting framework that explicitly exploits the spatial non-uniformity of noise in quantum devices. Rather than proposing a new cut-finding algorithm, we formalize the problem of device-constraint selection under realistic hardware noise and show that this choice critically determines both execution overhead and effective noise. Using a unified gate- and wire-cutting formulation, we demonstrate that small, hardware-informed relaxations in the device constraint yield exponential reductions in execution overhead while preserving alignment with low-noise hardware regions. Across representative workloads, our method achieves an average reduction in the number of circuit executions ranging from 5-54x for 20-qubit circuits, and enables tractable circuit cutting for 50-qubit circuits and application-level benchmarks where conventional strategies incur prohibitive overhead. These results establish noise-aware device-constraint selection as a necessary ingredient for making circuit cutting resource-efficient and practically deployable on contemporary quantum hardware.
Beyond Monolithic Scaling: Modularity and Heterogeneity as an Architectural Imperative for Utility-Scale Quantum Computing
This paper addresses a fundamental scaling problem in quantum computing where classical control systems become too slow to manage large quantum systems before quantum states lose coherence. The authors propose a modular architecture using distributed control protocols to overcome this bottleneck for utility-scale quantum computers.
Key Contributions
- Identification of a fundamental scaling law that limits monolithic quantum computer architectures due to classical control latency
- Introduction of a time-aware Reserve-Commit protocol for modular quantum system coordination
- Projection of crossover scale at 10^5-10^6 physical qubits where modular architecture becomes necessary
View Full Abstract
Scalable quantum computing is fundamentally bottlenecked not by qubit count or fabrication yield, but by a rigid temporal mismatch: macroscopic classical coordination latency ($\tau_c$) inevitably grows with system diameter, while microscopic quantum coherence ($\tau_q$) remains strictly bounded. Beyond a critical scale, this mismatch breaches the classical control light cone, triggering a superlinear geometric penalty ($\epsilon > 0$) that renders monolithic synchronization physically impossible. We formalize the resulting structural phase transition through a governing scaling law, $1 + \epsilon > \gamma$, which mandates modular decomposition and a shift from global unitaries to Local Operations and Classical Communication (LOCC). To manage the resulting resource contention under strict coherence budgets, we introduce a layered semantic architecture and a time-aware Reserve-Commit protocol. By embedding predictive temporal pre-validation, the protocol acts as an architectural semantic classifier: it preemptively aborts transactions that exceed the causal horizon and explicitly converts scheduling-induced failures into location-known erasure metadata, directly relaxing hardware fidelity thresholds for downstream QEC decoders. Under near-term transduction targets ($\eta_{\mathrm{trans}} \sim 0.1$), we project a crossover scale at $N_c \sim 10^5$-$10^6$ physical qubits. This threshold marks a profound architectural convergence: the footprint required for modularity aligns precisely with early fault-tolerant utility, establishing time-aware distributed orchestration, rather than monolithic expansion or centralized classical control, as the physical imperative for utility-scale quantum computing.
Observation of Vinen turbulence during far-from-equilibrium Bose-Einstein condensation
This paper studies quantum turbulence in Bose-Einstein condensates by observing how tangled vortex lines decay as the system relaxes from a far-from-equilibrium state. The researchers used imaging techniques to measure vortex line density and found the decay follows predictions for Vinen ultraquantum turbulence, similar to superfluid helium.
Key Contributions
- First observation of Vinen turbulence decay in atomic Bose-Einstein condensates
- Demonstration that weakly interacting compressible quantum gases exhibit incompressible fluid dynamics at large scales
View Full Abstract
Relaxation of far-from-equilibrium quantum fluids, intimately related to the emergence of long-range order, is theoretically associated with the decay of a turbulent isotropic tangle of vortex lines. We observe and study such decaying quantum turbulence in a homogeneous 3D atomic Bose gas. Using matter-wave techniques to magnify the gas density distribution, and then imaging a thin slice of the magnified cloud, we observe imprints of randomly oriented vortex lines and measure the vortex line-length density $\mathcal{L}$. The observed decay of $\mathcal{L}$ agrees with the prediction for Vinen 'ultraquantum' turbulence. Although our weakly interacting gases are highly compressible, their large-scale dynamics are consistent with the behavior of an incompressible hydrodynamic fluid, with the decay of $\mathcal{L}$ not depending on the strength of the interatomic interactions and being similar to that in the strongly interacting superfluid helium.
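For reference, the Vinen ('ultraquantum') regime corresponds to a decay of the vortex line-length density of the form (dimensionless prefactor conventions vary)

$$\frac{d\mathcal{L}}{dt} = -\chi_2\,\frac{\hbar}{m}\,\mathcal{L}^{2} \quad\Longrightarrow\quad \mathcal{L}(t) \simeq \frac{m}{\chi_2\,\hbar\,(t-t_0)},$$

with $m$ the particle mass, so the hallmark the experiment tests is the late-time $\mathcal{L}\propto 1/t$ decay, with no dependence on the interaction strength.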
Defending Quantum Classifiers against Adversarial Perturbations through Quantum Autoencoders
This paper proposes using quantum autoencoders to defend quantum machine learning classifiers against adversarial attacks by reconstructing and purifying adversarially manipulated input data. The method provides an alternative to adversarial training and includes a confidence metric to identify samples that cannot be effectively purified.
Key Contributions
- Novel quantum autoencoder-based defense framework for adversarial attacks on quantum classifiers
- Training-free defense method with confidence metric for identifying unpurifiable adversarial samples
View Full Abstract
Machine learning models can learn from data samples to carry out various tasks efficiently. When data samples are adversarially manipulated, such as by insertion of carefully crafted noise, it can cause the model to make mistakes. Quantum machine learning models are also vulnerable to such adversarial attacks, especially in image classification using variational quantum classifiers. While there are promising defenses against these adversarial perturbations, such as training with adversarial samples, they face practical limitations. For example, they are not applicable in scenarios where training with adversarial samples is either not possible or can overfit the models on one type of attack. In this paper, we propose an adversarial training-free defense framework that utilizes a quantum autoencoder to purify the adversarial samples through reconstruction. Moreover, our defense framework provides a confidence metric to identify potentially adversarial samples that cannot be purified by the quantum autoencoder. Extensive evaluation demonstrates that our defense framework can significantly outperform the state of the art in prediction accuracy (up to 68%) under adversarial attacks.
Reorganizing Quantum Measurement Records Improves Time-Series Prediction
This paper introduces a new method called 'split-ensemble training' for quantum machine learning that reorganizes measurement data from quantum circuits to improve time-series prediction. Instead of averaging all measurement shots into a single feature vector, the method splits shots into groups to create multiple training examples, improving prediction accuracy without requiring additional quantum hardware resources.
Key Contributions
- Introduction of split-ensemble training method for quantum reservoir computing
- Demonstration that reorganizing measurement shots improves machine learning performance without additional quantum hardware cost
View Full Abstract
Near-term quantum computers are accessed through repeated circuit executions, which produce finite measurement records rather than exact deterministic outputs. In quantum reservoir computing, these records are converted to feature vectors for a classical readout. The standard expectation-value approach averages all shots from one labeled time step into a single feature vector. This reduces finite-shot noise, but it also gives the readout only one training example from many circuit executions. We introduce split-ensemble training: the same shots are split into groups, and each group average is used as a separate, partially denoised feature vector for the same target. The quantum circuit, task, and measurement budget remain unchanged. Across simulated forecasting benchmarks and real hardware experiments, this simple reorganization improves prediction when full averaging leaves the readout with too few training examples, with the strongest gains observed on hardware. Our results establish shot-record organization as a simple, broadly applicable algorithmic lever for improving near-term quantum learning without additional quantum hardware cost.
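A minimal sketch of the shot-reorganization step described above, assuming each labeled time step's measurement record is available as an array of single-shot outcomes; the array shapes and group count are illustrative, not the paper's settings.

```python
import numpy as np

def split_ensemble_features(shots, n_groups):
    """Turn one time step's shot record into several partially denoised feature vectors.

    shots: array of shape (n_shots, n_observables), one row per circuit execution.
    Standard approach: average all rows into a single feature vector.
    Split-ensemble: split rows into n_groups groups and average within each group,
    giving n_groups training examples that all share the same target label.
    """
    groups = np.array_split(shots, n_groups, axis=0)
    return np.stack([g.mean(axis=0) for g in groups])

# Toy usage: 1024 shots of 4 measured observables at one labeled time step.
rng = np.random.default_rng(0)
shots = rng.choice([-1.0, 1.0], size=(1024, 4))               # single-shot +/-1 outcomes
full_average = shots.mean(axis=0)                             # standard: 1 feature vector
split_features = split_ensemble_features(shots, n_groups=8)   # split-ensemble: 8 vectors
print(full_average.shape, split_features.shape)               # (4,) (8, 4)
```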
Optimal current-based sensing of phonon temperature using a finite reservoir
This paper develops optimal current-based methods for measuring phonon temperature in nanoscale quantum dots connected to finite-capacity reservoirs. The researchers compare three measurement strategies and demonstrate that monitoring quantum exchanges between the dot and reservoir achieves the best precision.
Key Contributions
- Development of three current-based strategies for temperature sensing in quantum dot systems with finite reservoirs
- Demonstration that monitoring quanta exchanged between quantum dot and finite reservoir achieves optimal precision
- Fisher information analysis showing common factors for finite reservoir contributions across all strategies
- Optimization framework for maximizing precision through gate voltage tuning
View Full Abstract
In realistic nanoscale transport set-ups, electron-phonon coupling leads to the exchange of heat between phonon baths and electronic reservoirs with finite heat capacities. Such exchange affects the finite reservoir's temperature. However, this sensitivity of the finite reservoir's temperature to the exchanged heat has remained unexplored for thermometry. Here, we fill this gap by combining current metrology techniques with a thermodynamic framework encompassing finite reservoirs. We focus on an experimentally realizable set-up with a quantum dot coupled to a finite reservoir and consider two distinct current-based strategies in the long-time limit, namely monitoring quanta exchanged between the quantum dot and finite reservoir and the measurement of the total current flowing from the quantum dot into an infinite reservoir. A third strategy involves measurements of the quantum dot occupation. For a large but finite reservoir, we show that the Fisher information for all three strategies captures the finite reservoir's contribution to sensitivity through common factors. We also demonstrate that monitoring quanta exchanged between the system and finite reservoir in the long-time limit achieves optimal precision. Finally, we provide an optimization analysis that explores how maximal precision can be achieved within each of the current-based strategies by tuning the gate voltage.
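For context, the precision benchmark behind the comparison of the three strategies is the classical Fisher information of the monitored record, entering through the Cramér-Rao bound $\mathrm{Var}(\hat{T}) \geq 1/\big(M\,F(T)\big)$ with $F(T) = \sum_x \big(\partial_T p(x\mid T)\big)^2 / p(x\mid T)$, where $p(x\mid T)$ is the probability of observing record $x$ (e.g., a given net number of exchanged quanta) at temperature $T$ and $M$ is the number of independent repetitions. This generic form is standard background rather than the paper's specific finite-reservoir expression; the "optimal" strategy is the one whose record maximizes $F(T)$.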
Domain-wall melting in all-to-all QSSEP from random-matrix theory
This paper studies how domain walls dissolve in a quantum many-body system with long-range interactions using random matrix theory techniques. The authors derive exact formulas for entanglement entropy dynamics and charge statistics, showing that quantum and classical versions produce identical statistical behavior in large systems.
Key Contributions
- Connection between quantum exclusion process dynamics and Jacobi random matrix processes
- Exact analytical formula for von Neumann entanglement entropy evolution in thermodynamic limit
- Proof that quantum and classical full-counting statistics converge in large systems
View Full Abstract
We study the melting of a domain wall in the quantum simple exclusion process with all-to-all hoppings (a.k.a. the charged SYK$_2$ model). We show that the real-time dynamics of physical quantities of interest can be obtained exploiting spectral results in random matrix theory. We first show that the eigenvalues of the correlation matrix corresponding to the initially charged subsystem evolve according to a Jacobi process, which is defined in terms of a closed system of stochastic differential equations. In turn, this observation allows us to obtain the real-time dynamics of all the eigenvalue moments. We present two physical applications. First, we study the dynamics of the averaged von Neumann entanglement entropy, arriving at a fully explicit expression in the thermodynamic limit. Second, we compute analytically the full-counting statistics of the charge. Our formula allows us to perform a thorough comparison with the full-counting statistics of the classical simple exclusion process. Notably, we show that, in the thermodynamic limit, the quantum and classical full-counting statistics coincide, with no finite-time corrections.
Weak-to-Strong Measurement Transition with Thermal Instabilities
This paper develops a theoretical framework for understanding how quantum measurements transition from weak to strong regimes when thermal noise and environmental effects are present. The researchers show that temperature and thermal instabilities significantly modify measurement statistics and the conditions under which weak values emerge.
Key Contributions
- Development of a general framework analyzing weak-to-strong measurement transitions under thermal noise and decoherence
- Demonstration that thermal effects significantly modify weak-value conditions and projective measurement emergence
View Full Abstract
Quantum measurement is physically realized through a finite dynamical interaction between a system and a measuring apparatus, giving rise to a continuous transition from weak to strong regimes. While this crossover is well understood under ideal conditions, the combined role of thermal instabilities and pre- and post-selection open dynamics has not been systematically addressed. Here, we develop a general framework to analyze the weak-to-strong measurement transition in the simultaneous presence of environmental decoherence and thermal noise. We model the probe as a thermal Gaussian state, explicitly incorporating temperature-dependent fluctuations in the measuring device, and include open-system evolution of the measured system prior to post-selection. By deriving the apparatus's final state, we show that the measurement statistics are modified in a nontrivial, highly sensitive manner by the temperature regime of the system's thermal instabilities, the probe's thermal properties, and the particular choice of pre- and post-selection. This approach allows us to characterize how thermal effects reshape the weak-value condition and influence the emergence of projective behavior across the full measurement crossover.
Nodal algebraic curves and entropy diagnostics in degenerate two-dimensional harmonic-oscillator shells
This paper studies how quantum wave functions in 2D harmonic oscillators can have different nodal patterns (zeros) at the same energy level, and uses mathematical tools including entropy measures to characterize these pattern changes.
Key Contributions
- Development of algebraic framework for understanding nodal geometry changes in degenerate quantum eigenspaces
- Introduction of entropy diagnostics to quantify probability redistribution and correlations in quantum states
View Full Abstract
Degenerate quantum eigenspaces can support substantial changes in nodal geometry at fixed energy. We show that, for the two-dimensional isotropic harmonic oscillator, this restructuring is organized by the Hermite-constrained algebraic curve $P_N(x,y)=0$ appearing in every real shell state, $\psi_N=e^{-\alpha r^2/2}P_N(x,y)$. Finite singularities, $P_N=\nabla P_N=0$, and projective degeneracies of the leading homogeneous part identify the strata where topology-changing events can occur. We combine these criteria with entropy diagnostics: the nodal-domain entropy $S_{\mathrm{dom}}$, Cartesian mutual information $I(x;y)$, and the entropic uncertainty sum $S_r+S_p$. The first three shells reveal a hierarchy: $N=1$ only rotates a nodal line; $N=2$ has a conic transition at $b^2=2ac$, sharply detected by $S_{\mathrm{dom}}$ but not by global entropies; and $N=3$ supports cubic close-branch regimes organized by the projective discriminant, with enhanced responses in $S_{\mathrm{dom}}$ and $I(x;y)$. Thus algebraic stratification, rather than spectral ordering, organizes nodal geometry inside a degenerate eigenspace, while entropy diagnostics quantify probability redistribution and correlation. The framework suggests experimentally reconstructible signatures for real-phase Hermite--Gaussian structured light and approximately isotropic trapped motional systems.
Quantum Lattice Boltzmann Solutions for Transport under 3D Spatially Varying Advection on Trapped Ion Hardware
This paper develops quantum algorithms for fluid dynamics simulations using the Quantum Lattice Boltzmann Method (QLBM) to solve transport problems with non-uniform velocity fields. The researchers implemented and tested their approach on IonQ trapped-ion quantum computers, including 64-qubit systems, and introduced new methods for handling boundary conditions.
Key Contributions
- First implementation of QLBM for transport under non-uniform velocity fields on quantum hardware
- Development of MPS shadow tomography for efficient density readout scaling
- Introduction of novel wall boundary implementation methods for advection-diffusion in QLBM
View Full Abstract
The Quantum Lattice Boltzmann Method (QLBM) has emerged as one of the most promising quantum computing approaches for the numerical simulation of problems in computational fluid dynamics (CFD). The dynamics is formulated in terms of mesoscopic particle distribution functions governed by a discrete Boltzmann transport equation, comprising local streaming and collision operations. In this work, the resulting macroscopic behavior corresponds to the advection-diffusion equation, which we adopt as a canonical model problem for transport phenomena. Building upon recent progress in QLBM implementations, we advance towards more realistic problem settings that better reflect conventional CFD requirements. We address, for the first time, transport under the action of non-uniform velocity fields on quantum hardware. We implement our demonstration using IonQ's trapped-ion systems, including Forte-generation systems and a 64-qubit barium development system similar to the forthcoming IonQ Tempo line. We identify the density readout and subsequent reloading of the fluid density as a potential bottleneck of the current algorithm and discuss several approaches to mitigate this bottleneck. We identify the use of MPS shadow tomography as a promising method to efficiently scale the readout to large systems with complex density distributions. Lastly, we introduce and simulate a novel method to implement wall boundaries for advection-diffusion in QLBM, and discuss the prospects of scaling to higher-complexity problems.
Source-independent quantum key distribution without pre-sending entanglement
This paper proposes a new quantum key distribution protocol that protects against all source-side security attacks without requiring pre-shared entangled photons. The method doubles transmission distance compared to existing approaches while maintaining security even with imperfect quantum light sources.
Key Contributions
- Source-independent QKD protocol that eliminates all known and unknown source-side attacks
- Doubles transmission distance while maintaining robustness against source imperfections
- Demonstrates practical security advantages of non-classical light sources over conventional lasers
View Full Abstract
Quantum key distribution (QKD) theoretically offers information-theoretic security. The prevailing approach is the prepare-and-measure BB84 protocol, which implements QKD with a conventional laser rather than a single-photon source via the decoy-state method. However, side-channel attacks targeting sources severely threaten system security. Despite extensive efforts, including the fully passive scheme, this vulnerability persists even with a perfect single-photon source. Here, we propose a source-independent (SI) QKD protocol that resolves all known and unknown source-side attacks without pre-sending an entanglement source. Aligning with advances in quantum light sources, our protocol simultaneously doubles the transmission distance while remaining robust against source imperfections. Theoretical analysis shows that non-classical light sources provide practical security advantages unattainable with conventional lasers.
A No-Cloning Trade-off Between Black Hole No-Hair and Horizon Smoothness
This paper proves a fundamental trade-off in black hole physics between the no-hair theorem (which says black holes can't have distinguishable quantum states on the outside) and smooth horizons using quantum information theory. It shows that any observable quantum 'hair' on a black hole's exterior must violate the equivalence principle at the horizon by a measurable amount.
Key Contributions
- Establishes quantitative trade-off between black hole no-hair theorem and horizon smoothness using quantum information measures
- Proves that exterior quantum hair requires violation of the equivalence principle by an amount ε ≥ D_max²/8
View Full Abstract
The black hole no-hair theorem is traditionally derived from the uniqueness theorems of general relativity. We show that a quantitative form follows from unitarity together with the standard semiclassical assumptions of horizon causality and interior accessibility. For a semiclassical black hole, we prove that the trace distance between exterior states corresponding to two same-charge infalling states is bounded by $2\sqrt{2\varepsilon}$, where $\varepsilon$ quantifies the diamond norm departure of the interior channel from a perfect isometry which is a quantitative measure of horizon-smoothness violation that upper-bounds $1 - F_I$, where $F_I$ is the interior fidelity capturing how faithfully the infalling state is retained. Inverting this relation yields a trade-off inequality, $\varepsilon \geq D_{\max}^2/8$, between the maximum exterior distinguishability $D_{\max}$ and the degree of horizon smoothness. This establishes that observable exterior quantum hair is quantitatively incompatible with exact horizon smoothness under unitary evolution: any model predicting nonzero exterior hair must violate the equivalence principle at the horizon by a quantifiable amount. Pre-existing entanglement with the infalling system is the only channel for quantum hair compatible with both unitarity and horizon smoothness.
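Written out, the quoted trade-off is just the inversion of the distinguishability bound stated above: from $D_{\max} \leq 2\sqrt{2\varepsilon}$ one gets $D_{\max}^2 \leq 8\varepsilon$, i.e. $\varepsilon \geq D_{\max}^2/8$, so any nonzero exterior distinguishability forces a nonzero horizon-smoothness violation.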
Deep Strong light-matter Coupling in 3D Kane Fermions
This paper demonstrates deep strong light-matter coupling in Kane fermions using mercury cadmium telluride in a cavity, achieving record coupling strength above room temperature. The work resolves a controversy about whether superradiant quantum phase transitions can occur in relativistic-like matter systems, showing that a diamagnetic term prevents such transitions.
Key Contributions
- Achieved record normalized coupling ratio exceeding 1.6 in Kane fermions above room temperature
- Resolved controversy about superradiant phase transitions in relativistic-like matter by showing diamagnetic A² term prevents such transitions
- Demonstrated continuous tuning from weak to deep-strong coupling regime using thermally tunable carrier density
View Full Abstract
Deep strong light-matter coupling represents an extreme non-perturbative regime of quantum electrodynamics, in which the interaction strength exceeds the bare frequencies of the uncoupled systems. The ground state features strong quantum correlations between photons and matter excitations, and new cavity-driven phase transitions are expected to occur. Whether a superradiant quantum phase transition, marked by spontaneous dipole ordering and photon condensation, is possible has remained a long-standing and controversial question. Such phenomena have been proposed to arise in exotic electronic systems hosting Dirac and Kane fermions, owing to the formal absence of an $A^2$ term in their low-energy Hamiltonian. Here we exploit the ultralow effective mass of Kane fermions to realise Landau polaritons in a bulk mercury cadmium telluride layer coupled to a Fabry-Perot resonator. Using thermally tunable carrier density, we continuously tune the coupling from the weak to the deep-strong regime, achieving a record normalised coupling ratio exceeding 1.6 above room temperature. The measured polariton spectra are in excellent agreement with a rigorous, gauge-invariant microscopic theory. Despite the nonlinear Landau level structure of relativistic Kane fermions, we show that a diamagnetic $A^2$ term naturally emerges and precludes a superradiant phase transition. These results resolve the long-standing controversy surrounding cavity quantum electrodynamics of relativistic-like matter systems, extend deep-strong-coupling physics to Kane fermions, and open new opportunities for polaritonic semiconductor devices operating in extreme light-matter coupling regimes.
Learning quantum disentanglement scheduling from reduced states via modular hybrid policies
This paper develops a hybrid quantum-classical machine learning approach for controlling quantum systems when only partial information is available, specifically focusing on the task of scheduling which pairs of qubits to disentangle when the controller can only observe two-qubit reduced density matrices rather than full quantum state information.
Key Contributions
- Development of modular hybrid quantum-classical policy framework for quantum control under partial state information
- Identification that classical preprocessing dominates performance while quantum circuits provide compact conditional representations
- Discovery of performance-efficiency trade-offs showing circuit width is more beneficial than depth for this application
View Full Abstract
Quantum control with restricted state access is central to near-term quantum devices, where full wave-function information is unavailable. We study this problem through multiqubit disentanglement scheduling from partial observations, where a controller receives only two-qubit reduced density matrices and selects which qubit pair to disentangle at each step. We introduce a modular hybrid quantum--classical policy framework consisting of classical preprocessing, a parameterized quantum circuit as a compact nonlinear latent block, and classical postprocessing for pair-selection probabilities. Benchmarking 4-, 5-, and 6-qubit tasks, we find that preprocessing is the dominant factor governing performance under reduced-state observations, while the quantum module provides a conditional compact representation whose utility depends on the input features and model budget. We further identify a performance--efficiency trade-off across policy families and find that increasing circuit width is generally more useful than increasing depth. These results provide practical design principles for hybrid policies in reduced-information quantum control.
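A minimal PennyLane sketch of the modular pipeline described above (classical preprocessing of two-qubit reduced density matrices, a parameterized circuit as the latent block, and a classical softmax readout over candidate pairs). The specific feature map, ansatz, and layer sizes are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np
import pennylane as qml

n_wires, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def quantum_latent_block(features, weights):
    """Parameterized quantum circuit acting as a compact nonlinear latent block."""
    qml.AngleEmbedding(features, wires=range(n_wires))
    qml.StronglyEntanglingLayers(weights, wires=range(n_wires))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_wires)]

def preprocess(two_qubit_rdms):
    """Classical preprocessing: map each two-qubit reduced density matrix to scalars
    (here, purity and the ZZ correlator) and scale them into rotation angles."""
    feats, zz = [], np.diag([1, -1, -1, 1]).astype(complex)
    for rho in two_qubit_rdms:
        feats.append(np.real(np.trace(rho @ rho)))     # purity
        feats.append(np.real(np.trace(rho @ zz)))      # <Z Z>
    return np.pi * np.array(feats)[:n_wires]

def postprocess(latent, readout_weights):
    """Classical postprocessing: linear map + softmax over candidate qubit pairs."""
    logits = readout_weights @ np.array(latent)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy usage: two maximally mixed two-qubit RDMs, random trainable parameters.
rdms = [np.eye(4) / 4, np.eye(4) / 4]
weights = np.random.uniform(0, 2 * np.pi, qml.StronglyEntanglingLayers.shape(n_layers, n_wires))
readout = np.random.randn(6, n_wires)                  # 6 candidate pairs for 4 qubits
probs = postprocess(quantum_latent_block(preprocess(rdms), weights), readout)
print(probs)                                           # pair-selection probabilities
```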
Adaptable Continuous Variable Quantum Network with Finite Size Security
This paper demonstrates an experimental quantum key distribution network that allows one sender to securely share encryption keys with four users simultaneously over 11km fiber optic channels. The researchers achieved practical secret key generation rates while proving the system's security even with finite-sized data samples, making it compatible with existing telecommunications infrastructure.
Key Contributions
- Experimental demonstration of 1:4 multi-user continuous-variable quantum key distribution network
- Finite-size security analysis for practical CV-QKD implementation
- Adaptable protocols allowing different security and key rate requirements for individual network users
View Full Abstract
In recent years, continuous-variable quantum key distribution (CV-QKD) has become a promising paradigm for enabling secure communication among multiple end users sharing the same telecommunication backbone. CV-QKD with reverse reconciliation naturally enables scalability from conventional point-to-point links to quantum access networks based on passive quantum broadcasting channels. Here, we report an experimental demonstration of an active $1:4$ multi-user CV quantum network (QN) in the finite-size regime. With $1.25\cdot10^9$ coherent states exchanged on each $11\,\text{km}$ quantum channel, we achieve the highest performance for secret key generation, totaling $1.9\cdot10^{-1}$ bits per channel use. Furthermore, we investigate adaptable CV-QN protocols that comprehensively allow network operation under the various security and key-rate requirements of individual users. The results establish the practical security of CV-QNs compatible with existing telecommunication infrastructure for broad deployment, while allowing an additional degree of freedom for connected end users.
Unentangled stoquastic Merlin-Arthur proof systems: the power of unentanglement without destructive interference
This paper studies StoqMA(2), a new complexity class that combines stoquastic (sign-problem-free) quantum systems with unentangled proofs, establishing its computational power lies between classical and fully quantum proof systems. The authors prove inclusion relationships with other complexity classes and show that this framework captures significant computational power despite avoiding quantum interference effects.
Key Contributions
- Introduction and systematic study of StoqMA(2) complexity class combining stoquasticity and unentanglement
- Establishment of complexity-theoretic bounds showing NP ⊆ StoqMA(2) ⊆ EXP and StoqMA(2)₁ ⊆ PSPACE
- Development of rectangular closure testing framework for analyzing nearly perfect completeness cases
- Proof that multiple unentangled stoquastic proofs collapse to two proofs under negligible completeness error
View Full Abstract
Stoquasticity, originating in sign-problem-free physical systems, gives rise to $\sf StoqMA$, introduced by Bravyi, Bessen, and Terhal (2006), a quantum-inspired intermediate class between $\sf MA$ and $\sf AM$. Unentanglement similarly gives rise to ${\sf QMA}(2)$, introduced by Kobayashi, Matsumoto, and Yamakami (CJTCS 2009), which generalizes $\sf QMA$ to two unentangled proofs and still has only the trivial $\sf NEXP$ upper bound. In this work, we initiate a systematic study of the power of unentanglement without destructive interference via ${\sf StoqMA}(2)$, the class of unentangled stoquastic Merlin-Arthur proof systems. Although $\sf StoqMA$ is semi-quantum and may collapse to $\sf MA$, ${\sf StoqMA}(2)$ turns out to be surprisingly powerful. We establish the following results: - ${\sf NP} \subseteq {\sf StoqMA}(2)$ with $\widetilde{O}(\sqrt{n})$-qubit proofs and completeness error $2^{-{\rm polylog}(n)}$. Conversely, ${\sf StoqMA}(2) \subseteq {\sf EXP}$ via the Sum-of-Squares algorithm of Barak, Kelner, and Steurer (STOC 2014); with our lower bound, our refined analysis yields the optimality of this algorithm under ETH. - ${\sf StoqMA}(2)_1 \subseteq {\sf PSPACE}$, and the containment holds with completeness error $2^{-2^{{\rm poly}(n)}}$. - ${\sf PreciseStoqMA}(2)$, a variant of ${\sf StoqMA}(2)$ with exponentially small promise gap, cannot achieve perfect completeness unless ${\sf EXP}={\sf NEXP}$. In contrast, ${\sf PreciseStoqMA}$ achieves perfect completeness, since ${\sf PSPACE} \subseteq {\sf PreciseStoqMA}_1$. - When the completeness error is negligible, ${\sf StoqMA}(k) = {\sf StoqMA}(2)$ for $k\geq 2$. Our lower bounds are obtained by stoquastizing the short-proof ${\sf QMA}(2)$ protocols via distribution testing techniques. Our upper bounds for the nearly perfect completeness case are proved via our new rectangular closure testing framework.
Geometric complexity in thermodynamics
This paper proves a fundamental limit on how precisely physical operations can be performed, showing that achieving perfect accuracy in resetting quantum or classical systems to a specific state requires infinite resources like time, energy, or control precision. The work establishes a universal geometric bound that applies to both classical and quantum thermodynamic processes.
Key Contributions
- Derived universal trade-off relation between geometric complexity and execution error for quantum channels and classical stochastic maps
- Proved that perfect state-reset operations require divergent geometric complexity, establishing fundamental limits on thermodynamic control
View Full Abstract
The third law of thermodynamics forbids cooling a physical system to absolute zero in a finite number of operational steps. Although this unattainability principle has been quantified for specific state-to-state transitions, a universal, dynamics-independent bound for implementing a state-agnostic reset map remains elusive. In this work, we unveil the fundamental limits of physical map implementation by deriving a trade-off relation based on geometric complexity. By analyzing continuous paths of maps on a geometric manifold, we prove that the geometric complexity of any classical stochastic map or quantum channel is bounded from below by its execution error. As a consequence, we show that achieving zero error in a state-reset operation requires a divergent geometric complexity -- a unified measure that naturally incorporates disparate physical resources, including infinite time, energetic cost, or control bandwidth. This unattainability principle holds universally across both classical and quantum regimes, establishing a strict geometric limit on the physical realization of reset operations in thermodynamic control and quantum computation.
Wavelet-based multiresolution analysis of quantum fractals in confined dynamics
This paper develops a new wavelet-based mathematical method to analyze fractal patterns that naturally emerge in quantum systems confined in boxes or wells. The approach provides a more robust and assumption-free way to measure the fractal dimensions of quantum probability distributions compared to previous methods.
Key Contributions
- Development of wavelet-based multiresolution framework for quantifying quantum fractality without prior assumptions
- Unified characterization method for space, time, and space-time quantum fractals in confined systems
View Full Abstract
Fractal structures naturally emerge in quantum systems whose initial states exhibit spatial discontinuities, a phenomenon first identified by Berry in the paradigmatic case of a particle confined in an infinite potential well. While previous analyses of quantum fractals have mainly relied on spectral decompositions and geometric scaling arguments, their quantitative characterization often depends on scale choices and truncation effects. Here we present a wavelet-based multiresolution framework that enables a direct and assumption-free quantification of quantum fractality. Fractal dimensions are extracted from the scale-dependent distribution of wavelet energies, without invoking prior power-law hypotheses. The method is applied to space and time quantum fractals arising in confined dynamics, as well as to dynamical curves generated by the associated quantum probability flux. These flux-driven trajectories provide a natural space--time parametrization of the underlying fractal structure and yield scaling properties fully consistent with Berry's predictions for space--time fractals. The resulting fractal dimensions are shown to be robust with respect to the choice of wavelet family, numerical cutoffs, and system parameters. Beyond validating earlier conjectures, the present framework offers a unified and computationally efficient tool for the multiscale analysis of quantum fractality in confined and interference-driven quantum dynamics. That is, it provides an operational, scale-adaptive criterion that unifies the characterization of space, time, and space--time quantum fractals within a single, hypothesis-free approach.
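A short sketch, using PyWavelets, of the basic ingredient of such a multiresolution analysis: the scale-dependent distribution of wavelet energies and a scaling fit across levels. The test signal, wavelet family, and fitted exponent are illustrative assumptions; how the slope maps to a specific fractal dimension follows conventions discussed in the paper and is not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_scaling(signal, wavelet="db4", max_level=8):
    """Decompose the signal and return (levels, log2 of detail-coefficient energies).

    The scale-dependent energy distribution is the raw ingredient of the
    multiresolution analysis; only the scaling fit across levels is shown here.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=max_level)
    detail_energies = [np.sum(c**2) for c in coeffs[1:]]   # skip approximation coeffs
    levels = np.arange(1, len(detail_energies) + 1)
    return levels, np.log2(np.array(detail_energies) + 1e-30)

# Toy usage: a synthetic rough (Brownian-like) curve stands in for a cross-section
# of |psi(x,t)|^2 from confined dynamics.
rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(4096))
levels, log_energy = wavelet_energy_scaling(signal)
slope = np.polyfit(levels, log_energy, 1)[0]               # scaling exponent across levels
print(f"energy-vs-scale slope: {slope:.2f}")
```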
Heisenberg-limited Hamiltonian learning without short-time control
This paper develops new algorithms for learning the properties of quantum systems (Hamiltonian learning) that don't require ultra-short control pulses, making the methods more experimentally feasible while maintaining optimal efficiency. The work shows that high-bandwidth, ultra-short pulses are not fundamentally necessary for optimal quantum system characterization.
Key Contributions
- Demonstrated Heisenberg-limited Hamiltonian learning without requiring short-time control pulses
- Developed a framework for emulating continuous quantum control using only lower-bounded evolution times
- Achieved information-theoretically optimal scaling for sparse Hamiltonians with arbitrary minimum evolution time constraints
View Full Abstract
Characterizing quantum systems by learning their underlying Hamiltonians is a central task in quantum information science. While recent algorithmic advances have achieved near-optimal efficiency in this task, they critically rely on accessing arbitrarily short-time dynamics. This reliance poses severe experimental challenges due to finite control bandwidth and transient pulse errors. In this work, we demonstrate that Heisenberg-limited Hamiltonian learning can be achieved without short-time control. We introduce a framework in which every query to the unknown dynamics has duration at least a prescribed minimum time $T$, and show that this restriction does not preclude Heisenberg-limited scaling. The key ingredient is a method for emulating the continuous quantum control required by iterative learning algorithms using only such lower-bounded evolution times. This reduces the learning task to sparse pure-state tomography. Notably, for logarithmically sparse Hamiltonians, our algorithm achieves the information-theoretically optimal $1/\varepsilon$ scaling in total evolution time for any arbitrary constant minimum evolution time $T$. For many-body (polynomially sparse) systems, we uncover a rigorous quantitative tradeoff, showing that the minimum required evolution time can be significantly relaxed from the standard limit at a polynomial cost in total evolution time. Our results affirmatively resolve a prominent open problem in the field and reveal that high-bandwidth, ultra-short pulses are not fundamentally necessary for optimal quantum learning.
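For orientation, the scaling at stake is the standard Heisenberg limit for Hamiltonian learning: the total evolution time $T_{\mathrm{total}} = \sum_i t_i$ needed to reach precision $\varepsilon$ scales as $O(1/\varepsilon)$, compared with the $O(1/\varepsilon^2)$ shot-noise scaling of repeated short, independent evolutions. The paper's point is that this $1/\varepsilon$ scaling survives even when every individual query time is constrained to $t_i \geq T$; the comparison itself is standard background rather than a result of the paper.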
Towards High Performance Quantum Computing (HPQ): Parallelisation of the Hamiltonian Auto Decomposition Optimisation Framework (HADOF)
This paper presents HADOF (Hamiltonian Auto Decomposition Optimisation Framework), a method that breaks down large quantum optimization problems into smaller pieces that can be solved in parallel across multiple quantum processors. The researchers demonstrated up to 4x speedup using IBM quantum computers while maintaining solution quality, and validated the approach on real-world genome assembly problems.
Key Contributions
- Development of parallelized HADOF framework for quantum optimization that enables scalability beyond single QPU limits
- Demonstration of 3-4x speedup on IBM quantum hardware using multi-QPU parallel execution while maintaining solution quality
- Validation on real-world genome assembly problems showing practical applicability of the parallel quantum optimization approach
View Full Abstract
Practical applicability of quantum optimisation on near-term devices is constrained by limited qubit counts and hardware noise, which restricts the scalability of quantum optimisation algorithms for combinatorial problems. The simulation of large quantum circuits is also difficult and constrained by memory requirements. The Hamiltonian Auto Decomposition Optimisation Framework (HADOF) addresses this by decomposing large QUBOs into smaller subproblems that can be solved iteratively on quantum or classical backends. This allows the scalability of quantum QUBO algorithms beyond device limits, as well as their simulation on classical devices. In this research, we extend the evaluation of HADOF by benchmarking on real IBM QPUs across sequential, single-QPU parallel, and multi-QPU parallel execution modes, advancing toward High Performance Quantum (HPQ) computing for combinatorial optimisation problems. Experimental results on IBM quantum hardware demonstrate up to a 3-4x reduction in wall-clock time when utilising four QPUs compared to a sequential execution baseline, while maintaining comparable solution quality. Notably, even single-QPU execution benefits from parallelised job orchestration and execution, yielding up to 3x speedup. Simulated results predict over 5x speed-up in parallel execution mode. We further validate the practical applicability of the approach on real-world genome assembly instances, showing that both sequential and parallel HADOF variants achieve competitive accuracy while significantly improving time to solution. These results highlight the importance of parallelism at both the algorithmic and system levels, positioning HADOF as a viable pathway toward scalable quantum optimisation.
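A loose, classical-only sketch of the decompose-and-solve-in-parallel pattern: split a QUBO into sub-QUBOs, solve them concurrently, and stitch the results. The contiguous block partitioning, brute-force sub-solver, and process pool are illustrative stand-ins; they are not the HADOF decomposition heuristics or its QPU job orchestration, and cross-block couplings are simply ignored here rather than handled iteratively.

```python
import itertools
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_sub_qubo(args):
    """Brute-force one small sub-QUBO (stand-in for a QPU or simulator call)."""
    Q_sub, indices = args
    best_bits, best_energy = None, np.inf
    for bits in itertools.product([0, 1], repeat=len(indices)):
        x = np.array(bits)
        energy = x @ Q_sub @ x
        if energy < best_energy:
            best_bits, best_energy = bits, energy
    return indices, best_bits

def decompose_and_solve(Q, block_size=8, workers=4):
    """Partition variables into contiguous blocks, solve each block's sub-QUBO in
    parallel, and stitch the block solutions into one candidate assignment."""
    n = Q.shape[0]
    blocks = [list(range(i, min(i + block_size, n))) for i in range(0, n, block_size)]
    tasks = [(Q[np.ix_(b, b)], b) for b in blocks]
    x = np.zeros(n, dtype=int)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for indices, bits in pool.map(solve_sub_qubo, tasks):
            x[indices] = bits
    return x, x @ Q @ x

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    Q = rng.normal(size=(24, 24))
    Q = (Q + Q.T) / 2                      # symmetric toy QUBO matrix
    assignment, energy = decompose_and_solve(Q)
    print(assignment, energy)
```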
Hypergeometric Functions of Nilpotent Operators: Functional Collapse and Structural Depth at Exceptional Points
This paper studies mathematical properties of hypergeometric functions applied to nilpotent operators, which are relevant to exceptional points in non-Hermitian quantum systems. The authors prove that these functions collapse to finite polynomials and establish bounds on how function properties affect the algebraic structure of quantum Hamiltonians.
Key Contributions
- Proof that hypergeometric functions of nilpotent operators reduce to finite polynomials without convergence requirements
- Nilpotent depth criterion relating function coefficients to Jordan structure reduction in exceptional point Hamiltonians
View Full Abstract
We study hypergeometric functions of nilpotent operators in finite-dimensional settings, motivated by the algebraic structure of exceptional points in non-Hermitian quantum mechanics. Our starting point is the following exact result: if N is a nilpotent operator of index m+1 in an associative algebra over C, then every generalized hypergeometric function pFq evaluated at N reduces to a finite polynomial in N of degree at most m, without any analytic convergence requirement. This "functional collapse" is distinct from the classical parameter-termination mechanism and arises purely from the nilpotent structure of the argument. The main result is a "nilpotent depth criterion" (Theorem 2): if the first non-constant coefficient of a formal series F appears in degree r >= 1, then the nilpotent part F(N) - F(0)I has nilpotency index bounded above by ceil((m+1)/r). We apply this criterion to Hamiltonians at exceptional points, where H = lambda I + N with N^{m+1} = 0. Theorem 3 establishes that a function F analytic at lambda reduces the Jordan depth of the exceptional point from m+1 to at most ceil((m+1)/r), where r is the contact order of F at lambda. As consequences: the time evolution operator e^{tH} preserves the full Jordan depth for all t != 0; a function with a zero of order m+1 at lambda annihilates the entire Jordan structure; and the order of the pole of the modified resolvent is reduced from m+1 to at most m+1-r. Results are illustrated with explicit 3x3 Jordan block computations for 1F1, 2F1, and the time evolution operator, confirming sharpness of the bounds.
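A minimal numerical check of the functional-collapse statement, using the exponential series (the simplest hypergeometric case, $_0F_0$) on a 3x3 Jordan nilpotent block; it illustrates the general claim rather than reproducing the paper's 1F1/2F1 computations.

```python
import numpy as np
from math import factorial

# Nilpotent Jordan block N of index m+1 = 3, so N^3 = 0.
N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

def series_at_nilpotent(coeffs, N):
    """Evaluate a formal power series sum_k c_k N^k at a nilpotent matrix N.
    Because N^(m+1) = 0, only the first m+1 terms can contribute."""
    result = np.zeros_like(N)
    power = np.eye(N.shape[0])
    for c in coeffs:
        result = result + c * power
        power = power @ N
        if not power.any():              # N^k has vanished: the series has collapsed
            break
    return result

# Example: the exponential series, with many coefficients supplied...
exp_coeffs = [1 / factorial(k) for k in range(20)]
exp_N = series_at_nilpotent(exp_coeffs, N)            # ...but only k = 0, 1, 2 survive
print(np.allclose(exp_N, np.eye(3) + N + N @ N / 2))  # True: a degree-2 polynomial in N
```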
Entanglement of multi-qubit quantum graph states and studies structural properties of tripartite graphs with quantum programming
This paper develops methods for creating multi-qubit entangled quantum states that represent tripartite graphs and establishes relationships between quantum entanglement properties and structural features of these graphs. The authors demonstrate their approach through quantum simulations and show how quantum programming can be used to study graph properties.
Key Contributions
- Development of method for constructing multi-qubit entangled states representing weighted tripartite graphs
- Derivation of entanglement distance expressions for multi-qubit states in tripartite graph structures
- Establishment of relationship between quantum entanglement properties and structural graph properties
View Full Abstract
We propose a method for constructing multi-qubit entangled quantum states representing weighted tripartite graphs. An expression for the entanglement distance for multi-qubit states corresponding to arbitrary tripartite graph structures is obtained. The entanglement of a qubit with the rest of the system in a quantum graph state is determined by the weights of the edges in the closed neighborhood of the corresponding vertex and by its degree with respect to other sets. We also calculate quantum correlators in the general case of tripartite quantum graph states. We establish a relationship between these quantum properties and the structural properties of the corresponding tripartite graphs, including the number of non-overlapping neighbors, the number of common neighbors of the corresponding vertices, and the number of 4-cycles. As an illustrative example, we consider a tripartite graph forming a triangle and compute the entanglement distance using quantum simulations on the AerSimulator with noise models. The numerical results are consistent with the theoretical predictions. The obtained results demonstrate that quantum graph states provide an effective framework for studying structural properties of tripartite graphs. They open up the possibility of investigating such properties using quantum programming. It is worth highlighting that tripartite graphs have applications in solving practical problems such as resource allocation, scheduling, and database and hypergraph modeling.
Compressed Sensing for Efficient Fidelity Estimation of GHZ States
This paper develops a compressed sensing technique to efficiently measure how well quantum computers can create GHZ entangled states, which are important multi-particle quantum states. The method reduces the number of measurements needed while maintaining accuracy, and was tested on both simulators and real quantum hardware.
Key Contributions
- Development of compressed sensing protocol for efficient GHZ state fidelity estimation
- Experimental validation on trapped-ion quantum hardware with error detection
View Full Abstract
Accurately characterizing multipartite entangled states is a critical challenge in quantum information processing. In this work, we focus on applying compressed sensing techniques to efficiently estimate the fidelity of Greenberger-Horne-Zeilinger (GHZ) states. By exploiting the inherent sparsity of these states, our compressed sensing protocol drastically reduces the measurement overhead traditionally required for state verification while maintaining high accuracy. To evaluate the practical performance of this approach, we test the protocol on GHZ states using both quantum simulators and Quantinuum's trapped-ion hardware. Furthermore, we implement error detection techniques during our hardware evaluations, demonstrating the robustness and viability of compressed sensing for fidelity estimation in noisy experimental environments.
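For context, the fidelity being estimated admits the textbook decomposition $F = \langle \mathrm{GHZ}_n|\rho|\mathrm{GHZ}_n\rangle = \tfrac{1}{2}\big(\rho_{0\cdots0,\,0\cdots0} + \rho_{1\cdots1,\,1\cdots1}\big) + \mathrm{Re}\,\rho_{0\cdots0,\,1\cdots1}$, so only two populations and one many-body coherence of $\rho$ (the latter accessible from parity-oscillation measurements) carry the signal. This is the kind of sparsity such protocols exploit; the specific compressed-sensing estimator and its measurement settings are described in the paper itself.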
Explicit Quantum Search Algorithm for the Densest k-Subgraph Problem
This paper develops quantum algorithms to find the densest k-vertex subgraph in a graph, which is useful for analyzing social networks and detecting fraud. The authors propose using Grover's quantum search algorithm with a specialized oracle circuit to achieve quadratic speedup over classical brute-force methods.
Key Contributions
- Novel quantum oracle design using Dicke states and Quantum Fourier Transform for edge counting
- Demonstration of quadratic speedup for densest k-subgraph problem using Grover's algorithm
View Full Abstract
This paper addresses the problem of finding the densest $k$-vertex subgraph in an arbitrary graph. This problem is NP-hard and has important applications in social network analysis, fraud detection, recommendation systems, and bioinformatics. We propose two quantum approaches to solve this problem: a reduction to Quadratic Unconstrained Binary Optimization (QUBO) and the use of Grover's quantum search algorithm. For the latter approach, we present an explicit gate-based oracle circuit utilizing Dicke states and the Quantum Fourier Transform for edge counting. Numerical simulations demonstrate a quadratic speedup over classical brute-force search.
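A back-of-the-envelope sketch of where the quadratic speedup comes from: Grover search over the $\binom{n}{k}$ candidate vertex subsets needs roughly $\frac{\pi}{4}\sqrt{\binom{n}{k}/M}$ oracle calls when $M$ subsets pass the density threshold, versus about $\binom{n}{k}/M$ classical checks on average. The helper below counts both for a toy graph; it is a classical query-count estimate under these assumptions, not the gate-level oracle construction from the paper.

```python
import itertools
import math

def density_threshold_counts(edges, n, k, min_edges):
    """Count how many k-vertex subsets contain at least min_edges internal edges."""
    edge_set = {frozenset(e) for e in edges}
    marked = 0
    for subset in itertools.combinations(range(n), k):
        internal = sum(1 for u, v in itertools.combinations(subset, 2)
                       if frozenset((u, v)) in edge_set)
        marked += internal >= min_edges
    return marked

# Toy graph: a 5-clique hidden in a 12-vertex sparse graph.
n, k = 12, 5
clique_edges = list(itertools.combinations(range(5), 2))
extra_edges = [(5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11), (0, 6), (3, 9)]
edges = clique_edges + extra_edges

search_space = math.comb(n, k)
marked = density_threshold_counts(edges, n, k, min_edges=10)    # 10 edges = a 5-clique
grover_queries = math.ceil((math.pi / 4) * math.sqrt(search_space / max(marked, 1)))
classical_queries = search_space // max(marked, 1)              # expected brute-force cost
print(search_space, marked, grover_queries, classical_queries)  # 792 1 23 792
```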
Macroscopic photon counting beating the Poisson noise limit
This paper demonstrates a highly precise photon counting system that can accurately count up to 9000+ photons per pulse while beating the fundamental Poisson noise limit. The researchers achieved this by combining eight superconducting nanowire detectors across 128 time slots and performing detailed quantum detector tomography to characterize the entire system.
Key Contributions
- Demonstrated photon counting from 0 to over 9000 photons beating Poisson noise limit by 4.1 dB
- Achieved sub-single-photon precision up to 276 photons per pulse using multiplexed superconducting nanowire detectors
- Performed comprehensive quantum detector tomography reconstructing 138 million POVM matrix elements
- Bridged single-photon measurements to high-sensitivity optical power metrology at 71 pW optical power
View Full Abstract
Photon counting is a cornerstone of quantum optics. Here, we demonstrate precisely counting from 0 to over 9000 photons, beating the Poisson noise limit by at least $4.1~\mathrm{dB}$ across this range. We achieve sub-single-photon precision up to 276 photons per pulse. To do so, we multiplex eight intrinsically photon-number-resolving superconducting nanowire single-photon detectors across 128 temporal modes. We use a model-informed characterization of each of the 1024 detection bins, for optimal precision. We perform quantum detector tomography to reconstruct the positive operator valued measures (POVMs) of the complete device, which consists of $1.38\cdot10^8$ matrix elements. At the repetition rate of our experiment of $80~\mathrm{kHz}$, we can precisely count photons corresponding to an optical power of approximately $71~\mathrm{pW}$, bridging the gap from single-photon measurements to high-sensitivity optical power meters. A photon-number-resolving detector of this size, and the tools used to analyze it, will become increasingly important to characterize large quantum states, as well as tasks in precision metrology and optical power standards.
Timescales for Deep and Full Thermalization
This paper studies how isolated quantum systems reach thermal equilibrium, comparing two different approaches beyond the standard Eigenstate Thermalization Hypothesis: 'deep thermalization' involving quantum measurements and 'full thermalization' involving higher-order correlations. The researchers find that both processes follow exponential relaxation but at different rates, with full thermalization being faster at higher orders.
Key Contributions
- Comparative analysis of deep vs full thermalization timescales in chaotic quantum systems
- Discovery that full thermalization occurs faster than deep thermalization for higher-order correlations
View Full Abstract
Isolated quantum systems typically approach thermal equilibrium as described by the Eigenstate Thermalization Hypothesis (ETH). Going beyond this involves either higher order correlators (full thermalization) or the formation of state designs, i.e., the approach of moments of state ensembles after a projective measurement towards thermal equilibrium (deep thermalization). We compare these two extensions of ETH using extensive numerical studies within a paradigmatic model for chaotic many-body quantum dynamics. For this we find exponential relaxation for both extensions: For deep thermalization all moments relax with the same rate, which approximately equals the relaxation rate of the autocorrelation function captured by ETH. In contrast, higher order correlation functions in full thermalization approach equilibrium faster. This means that at higher orders full thermalization is faster than deep thermalization.
OAM-mode sorting with a wavefront twister
This paper proposes a new optical device called a 'wavefront twister' that can sort orbital angular momentum (OAM) modes of light by mapping each mode to distinct ring-shaped patterns. The device works by applying radially-varying rotation to light wavefronts, allowing different OAM modes to be separated spatially for practical applications.
Key Contributions
- Introduction of the wavefront twister concept as a generalization of conventional wavefront rotators
- Demonstration of scalable high-dimensional OAM mode sorting with minimal inter-modal overlap
View Full Abstract
We propose an OAM sorter based on a novel optical element that we refer to as a wavefront twister. It is a generalization of the conventional wavefront rotators such as the Dove prism. However, unlike a Dove prism, which simply rotates a wavefront, the rotation generated by a wavefront twister varies linearly with radial position, resulting in the twisting of the wavefront. We demonstrate that the wavefront twister, followed by a lens, maps each OAM mode to an annulus of distinct radius at the back focal plane of the lens with negligible inter-modal overlap and preserves the circular symmetry. Thus, the proposed wavefront twister offers a scalable scheme for high-dimensional OAM mode sorting, with important consequences for the practical realization of OAM-based applications.
Size-Limited Room Temperature Single-Photon Emission from Sidewall-Treated Fractional Dimension InGaN Quantum Dots: Determined by Density-of-States-Corrected Ultrafast Carrier Dynamics and Improved Signal-to-Noise Ratio
View Full Abstract
Room-temperature single-photon emission (SPE) resulting from a biexciton-exciton cascaded decay is demonstrated for the first time from chemically and photoelectrochemically etched site-controlled In0.14Ga0.86N quantum dots (QDs) embedded in vertical GaN nanowires. Diameter-dependent biexciton-exciton dynamics are analysed to determine the eligibility of a QD as a single-photon emitter. The signal-to-noise ratio degrades with increasing QD diameter. Background noise photons pose a bottleneck to achieving SPE. This is also explained from a carrier dynamics perspective. Surface recombination contributes to inhomogeneous broadening at QD diameters larger than 35 nm. Below 35 nm, density-of-states-corrected Auger recombination gradually becomes the principal biexciton-decay route with further reduction in QD diameter, thereby quenching the possibility of thermal broadening and setting a threshold for SPE. Below 9 nm, the Auger recombination rate becomes many times larger than the other decay rates, causing multi-photon suppression via single Auger decay to form an exciton. The surface recombination probability of this exciton is minimized while the biexciton state-filling probability is maximized by reducing sidewall surface states through wet treatment. These improve biexciton state preparation and enhance the single-photon purity of the exciton towards the exciton Bohr radius (3 nm) regime. Far away from this regime, higher-order autocorrelations to characterize quantum emission involving multi-photon events are discussed. This study establishes a generalized physical framework for predetermining SPE probability as a function of QD surface and geometry down to the exciton Bohr radius regime, with practical implementations. This work shows the pathway to design and develop next-generation semiconductor QDs for high-purity room-temperature SPE.
Observation of attractor transitions in active magnon-polaritons under microwatt drives
This paper demonstrates controlled transitions between different nonlinear states in magnon-polariton systems using very low power (microwatt) drives, achieving complex dynamics like chaos and multiple frequency states that were previously difficult to observe in passive systems.
Key Contributions
- First experimental observation of explosive bistability growth and attractor transitions in active magnon-polaritons at microwatt power levels
- Demonstration of magnetic-field-triggered switching between nonlinear states with spectral amplification 162 times larger than bare gyromagnetic response
View Full Abstract
Magnon-polaritons provide a room-temperature platform for investigating nonlinear cavity quantum electrodynamics in the microwave domain, but experimentally observing controlled transitions among distinct nonlinear attractors remains challenging in conventional passive systems, where strong external driving is usually required. Here we report the observation of attractor transitions in an active magnon-polariton formed by a self-oscillating microwave cavity coupled to a yttrium iron garnet (YIG) sphere. The feedback loop supplies an internal microwave drive, while Kerr frequency pulling and Suhl-mediated magnon-magnon scattering produce an enhanced effective nonlinearity. Stability analysis using experimentally calibrated parameters reveals a rich fixed-point (FP) landscape with multiple unstable-FP phases and a triple-point region. By tuning gain across these phases, we observe the first experimental evidence of explosive growth of bistability, followed by transitions to multifrequency limit cycles, comb-like/fractal spectra, and broadband chaotic dynamics at microwatt powers. Near a critical point, magnetic-field-triggered switching between nonlinear emission states produces spectral shifts up to 162 times the bare gyromagnetic response. By enabling low-power attractor transitions and attractor-switching-amplified spectral response, active magnon-polaritons open opportunities for nonlinear microwave signal generation, high-precision sensing, and neuromorphic computing.
Effective Noise Mitigation via Quantum Circuit Learning in Quantum Simulation of Integrable Spin Chains
This paper develops a quantum circuit learning method to reduce noise in quantum simulations of spin chains by training shallow circuits to replicate the behavior of deeper, noisier circuits while preserving important physical properties. The approach leverages the conserved quantities in integrable systems to create more robust quantum simulations on near-term quantum devices.
Key Contributions
- Novel noise mitigation strategy using quantum circuit learning for quantum simulation
- Physics-informed approach that preserves conserved quantities in integrable spin chain simulations
- Demonstration of shorter, more robust circuits without exponential sampling overhead
View Full Abstract
We propose a noise-mitigation quantum simulation strategy for near-term quantum devices based on Quantum Circuit Learning (QCL), which is in particular effective for integrable quantum spin chains. The method trains a shallow variational circuit to approximate a deeper time-evolution circuit by learning the conserved charges and only a small amount of dynamical information in the system. Under realistic noise models, the learned circuit maintains both conserved quantities and dynamical observables significantly closer to their true values than the noisy simulation of the original circuit. This demonstrates QCL as an effective, physics-informed error mitigation strategy, producing shorter, more robust circuits without exponential sampling overhead.
Hyperfine-resolved laser excitation and detection of nuclear isomer in trapped $^{229}$Th$^{3+}$ ions
This paper presents theoretical methods for using lasers to excite and detect a special nuclear state in thorium-229 ions trapped in laboratory conditions. The research focuses on developing practical techniques for nuclear clock applications by analyzing how laser parameters affect the efficiency of exciting this nuclear transition.
Key Contributions
- Development of hyperfine-resolved detection schemes for nuclear isomer states with quantified photon detection rates
- Theoretical framework using quantum master equations to optimize laser excitation parameters for nuclear transitions
View Full Abstract
We present a comprehensive theoretical investigation of hyperfine-resolved excitation and detection of the low-energy isomeric state of $^{229}$Th in trapped $^{229}\mathrm{Th}^{3+}$ ions. Using a quantum master equation approach, we analyze the dependence of the isomeric population on laser linewidth, detuning, and irradiation time, showing that their proper matching is essential for efficient excitation. We further propose two nuclear-state detection schemes based on three hyperfine-resolved electronic fluorescence channels at 690, 984, and 1088 nm. Our analysis shows that the 690-nm and 984-nm scheme yields detectable photon rates on the order of $10^4~\mathrm{s}^{-1}$ per ion for each wavelength, whereas the 1088-nm scheme achieves a higher rate on the order of $10^5~\mathrm{s}^{-1}$ per ion. By quantifying the trade-off between irradiation time and scan-step size, we show that the nuclear transition can be located within one month for a 100-MHz uncertainty using currently available vacuum-ultraviolet laser technology. These results provide practical guidance for trapped-ion $^{229}\mathrm{Th}$ spectroscopy and the development of nuclear clocks.
Quantum Magnetometry with Orientation beyond Steady-State Limits in Cavity-Magnon Systems
This paper develops a new quantum sensing method using cavity-magnon systems that can measure magnetic fields in three dimensions with enhanced precision. The approach uses transient quantum dynamics and engineered initial states to overcome limitations of steady-state sensing protocols.
Key Contributions
- Development of transient quantum sensing framework that surpasses steady-state sensing limits
- Demonstration of crosstalk-free 3D magnetic field reconstruction using cavity-magnon systems
- Discovery of resonance condition for quantum noise cancellation without strong coupling
- Scalable architecture using YIG sphere arrays with 1/N noise scaling
View Full Abstract
We present a transient quantum sensing framework for cavity-magnon systems that circumvents the inevitable loss of initial-state quantum properties plaguing conventional steady-state protocols. Explicitly incorporating finite-time dynamics and adopting an engineered steady state as the initial condition, we derive the exact transient noise spectrum. We show that residual initial quantum correlations alone can drastically enhance the short-time signal-to-noise ratio (SNR) beyond that achievable with unsqueezed steady-state schemes. Through analysis of the transient spectral density and joint measurements of orthogonal cavity quadratures, we realize crosstalk-free reconstruction of all three magnetic field components, enabling orientation of magnetic signals. In the long-time limit, our theory yields a closed-form stationary noise spectrum and uncovers a resonance condition $g_{am}=\sqrt{\kappa_a\kappa_m}/2$, where cavity field quantum noise is fully canceled without requiring strong coherent coupling. Away from this resonance, injected squeezing further suppresses cavity-induced noise and broadens the detection bandwidth. Extending the framework to an array of $N$ yttrium iron garnet (YIG) spheres generates a collective bright mode, with magnon-probe noise scaling as $1/N$. Our results establish a unified route to scalable, high-precision, multidimensional quantum magnetometry using cavity-magnon platforms.
Observation of Universal Spectral Moments and the Dynamic Dispersive-to-Proliferative Transition
This paper studies non-Hermitian quantum systems and demonstrates that spectral moments remain stable even when the actual energy spectra change dramatically due to boundary effects. The researchers used acoustic platforms to show that bulk physics can behave predictably despite boundary-sensitive spectral properties, revealing a new type of transition between different dynamical regimes.
Key Contributions
- Experimental demonstration that spectral moments provide boundary-robust bulk observables in finite non-Hermitian lattices
- Development of loop-counting theory explaining finite-size deviations and predicting scaling laws
- Discovery of dispersive-to-proliferative bulk transition governed by moment structure rather than spectral sensitivity
- Establishment of spectral moments as practical descriptors for finite non-Hermitian systems
View Full Abstract
In non-Hermitian systems, spectra can be maximally boundary-sensitive, yet bulk physics need not be. Here we experimentally show that spectral moments provide boundary-robust bulk observables in finite non-Hermitian lattices, even when the spectra undergo dramatic geometry-dependent reshaping due to the skin effect. Using a unified acoustic platform with full spectral reconstruction and time-domain access, we probe one-, two- and three-dimensional lattices and demonstrate that spectral moments remain nearly invariant across distinct boundary geometries while the corresponding complex spectra differ strongly. To connect the thermodynamic theorem to realistic finite systems, we develop a loop-counting theory that identifies the physical origin of finite-size deviations in terms of missing boundary loops, quantitatively captures the corrections, and predicts a scaling law, which we verify experimentally. Beyond acoustic spectroscopy, we reveal a counterintuitive dynamical consequence of moment invariance: a dispersive-to-proliferative bulk transition governed by bulk moment structure rather than spectral boundary sensitivity. As a result, local bulk dynamics can remain stable (dispersive) even in a $\mathcal{PT}$-broken spectral regime, challenging the conventional expectation that $\mathcal{PT}$ breaking necessarily implies feedback-induced dynamical instability (proliferation) through exponentially amplifying spectral components. These results establish spectral moments as practical bulk descriptors for finite non-Hermitian matter and open a route to extracting and controlling intrinsic bulk behavior in realistic wave-based non-Hermitian devices.
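To make the boundary-robustness of spectral moments concrete, here is a minimal Python sketch that uses a Hatano-Nelson chain as a stand-in for the experimental lattices (the model, chain length, and hopping values are illustrative assumptions, not taken from the paper): the periodic- and open-boundary spectra differ strongly because of the skin effect, yet the normalized moments Tr(H^n)/L agree up to small corrections from missing boundary loops.

```python
import numpy as np

# Hatano-Nelson chain as an illustrative stand-in: spectra are boundary-sensitive,
# normalized spectral moments Tr(H^n)/L are not (up to boundary-loop corrections).
L, tR, tL = 60, 1.0, 0.4   # hypothetical size and asymmetric hoppings

def hatano_nelson(periodic):
    H = np.zeros((L, L), dtype=complex)
    for i in range(L - 1):
        H[i + 1, i] = tR          # hop to the right
        H[i, i + 1] = tL          # hop to the left
    if periodic:
        H[0, L - 1] = tR
        H[L - 1, 0] = tL
    return H

H_pbc, H_obc = hatano_nelson(True), hatano_nelson(False)
print("spectral radius PBC vs OBC:",
      np.abs(np.linalg.eigvals(H_pbc)).max(), np.abs(np.linalg.eigvals(H_obc)).max())
for n in (2, 4, 6):
    m_pbc = np.trace(np.linalg.matrix_power(H_pbc, n)).real / L
    m_obc = np.trace(np.linalg.matrix_power(H_obc, n)).real / L
    print(f"n = {n}:  Tr(H^n)/L   PBC = {m_pbc:.4f}   OBC = {m_obc:.4f}")
```

The low-order moments track closed hopping loops, which are a bulk quantity; only loops that would cross the boundary are lost under open boundaries, mirroring the loop-counting picture described in the abstract.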
Experimental detection of entanglement in multimode Gaussian states from high-order intensity correlation moments
This paper demonstrates an experimental method to detect quantum entanglement in multimode Gaussian states by measuring high-order intensity correlations using superconducting nanowire detectors, without requiring coherent local oscillators.
Key Contributions
- Experimental detection of entanglement using high-order intensity correlation moments up to sixth-order
- Demonstration of entanglement characterization method for multimode Gaussian states using pseudo-photon-number-resolving detectors
View Full Abstract
Quantum universal invariants of a Gaussian state's covariance matrix, which can be derived from intensity correlation moments, have been adopted to characterize the entanglement properties of Gaussian states via the positive partial transpose criterion, also known as the Peres-Horodecki separability criterion. Such intensity correlation moments enable the extraction of information about the covariance matrix without the need for a coherent local oscillator. Here, we experimentally detect the entanglement properties of multimode Gaussian states using high-order (up to sixth-order) intensity correlation moments. These multimode Gaussian states are prepared via spontaneous and cascaded parametric down-conversion pumped by a high-peak-energy pulsed laser. Their intensity correlation moments are measured using a pseudo-photon-number-resolving detector constructed through spatial multiplexing of 32 threshold superconducting nanowire single-photon detectors. This method is successfully demonstrated for two-mode and three-mode Gaussian states and can be extended to $N$-mode Gaussian states with $N>3$.
Pauli equation in spaces of constant curvature and extended Nikiforov-Uvarov method
This paper applies the extended Nikiforov-Uvarov mathematical method to solve the Pauli equation (non-relativistic limit of the Dirac equation) for a particle in a Coulomb potential within curved spacetime. The authors find that while the method yields energy spectra, the necessary conditions for polynomial solutions cannot be satisfied, leading them to conclude the method has limited utility for such quantum mechanical problems.
Key Contributions
- Demonstrates non-commutativity of non-relativistic limit with squaring of Dirac equation in curved spaces
- Shows limitations of extended Nikiforov-Uvarov method for quantum mechanical problems in curved spacetime
View Full Abstract
We apply the extended Nikiforov-Uvarov method to the non-relativistic limit of the Dirac equation with a Coulomb potential in spaces of constant curvature. In this case, the radial equation reduces to the Heun equation, and the extended Nikiforov-Uvarov method easily yields a quantization condition, which leads to a necessary condition under which the resulting Heun equation can have polynomial solutions. The energy spectrum implied by the quantization condition is virtually identical to the spectrum of a spinless particle obtained using the Schrödinger equation, except for the absence of the "geometric potential", confirming the non-commutativity of the naive non-relativistic limit with the "squaring" of the Dirac equation, first discovered on curved surfaces. However, the necessary conditions for the existence of polynomial solutions cannot be met, and this fact undermines the reliability of the results obtained. This circumstance forces us to conclude that the extended Nikiforov-Uvarov method has limited, if any, value when considering similar problems in quantum mechanics.
Observation of antibunching with classical light in a linear interferometer
This paper demonstrates that antibunching effects, typically considered a quantum phenomenon, can be observed using classical thermal light in a Hanbury Brown-Twiss interferometer when photon-number projection measurements are performed. The research explores the boundary between classical and quantum optical phenomena by showing how measurement techniques can reveal quantum-like correlations in classical light sources.
Key Contributions
- Demonstration of antibunching effects using classical thermal light through photon-number projection measurements
- Analysis of the classical vs nonclassical nature of observed correlations and the role of measurement technique in revealing quantum-like effects
View Full Abstract
Understanding the boundary between classical and nonclassical phenomena is important for both fundamental research in quantum optics and applications in quantum information. One of the most interesting research directions in this field is exploring nonclassical effects with classical light. In this paper, we show that it is possible to observe antibunching with thermal light in a Hanbury Brown-Twiss interferometer by treating single-photon detectors as photon-number-resolving detectors to perform photon-number projection measurements. Both temporal and spatial antibunching are observed via the correlation of two detectors detecting one and zero photons, respectively. By comparing the measured results of thermal and laser light, it is found that the observed antibunching arises from the combined effect of the photon statistics of thermal light and the photon-number projection measurement. The classical and nonclassical nature of the observed antibunching is analyzed. The results are helpful for understanding the connection between classical and nonclassical correlations and may find applications in multiphoton interference and quantum imaging.
Finite Imaginary-Time Evolution for Polynomial Unconstrained Binary Optimization
This paper develops FinITE (Finite Imaginary-Time Evolution), a quantum algorithm that uses the linear-combination-of-unitaries framework to prepare ground states for optimization problems like MaxCut. The method provides exact relationships between success probability and solution quality, with theoretical guarantees and amplitude amplification for improved performance.
Key Contributions
- Development of FinITE algorithm for quantum ground-state preparation using linear-combination-of-unitaries without product-formula errors
- Exact mathematical relationship between LCU success probability and ground-subspace fidelity with closed-form threshold conditions
- Integration of fixed-point amplitude amplification with explicit query-complexity bounds for optimization problems
View Full Abstract
Imaginary-time evolution is a standard primitive for ground-state preparation but is nonunitary, precluding direct quantum implementation. We develop Finite Imaginary-Time Evolution (FinITE), a finite-beta construction for diagonal Pauli-Z cost Hamiltonians arising from polynomial unconstrained binary optimization (PUBO) instances, including QUBO and HUBO cases. FinITE uses the linear-combination-of-unitaries (LCU) framework to implement a scaled imaginary-time propagator. The commuting Pauli-Z structure makes termwise block-encodings compose without product-formula error, and higher-order Pauli-Z terms are handled directly without quadratization. The structure yields an exact finite-beta identity between the LCU success probability and the ground-subspace fidelity. Combined with a gap-based fidelity lower bound, the identity yields a closed-form sufficient imaginary-time threshold beta-star for a chosen target fidelity. The threshold depends on estimates of the spectral gap and the initial ground-subspace overlap. Because the LCU success event is flagged by a known ancilla outcome, we integrate fixed-point amplitude amplification with an explicit query-complexity bound. Statevector simulations verify the identity on a five-vertex MaxCut (QUBO) and an eight-qubit cubic HUBO instance, and shot-based simulations on the MaxCut instance illustrate the predicted finite-beta threshold and amplification procedure.
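As a rough illustration of the finite-beta mechanism (not the authors' FinITE circuits), the sketch below applies a normalized imaginary-time weight exp(-beta(H - E0)) to the uniform superposition for a hypothetical five-vertex MaxCut ring and tracks a simple proxy for the LCU success probability together with the ground-subspace fidelity after post-selection; the exact identity, normalization, and threshold derived in the paper are not reproduced here.

```python
import numpy as np

# Illustrative sketch only: diagonal Pauli-Z MaxCut cost, imaginary-time reweighting
# of the uniform superposition, and a crude success-probability proxy. The 5-vertex
# ring instance is an assumption for demonstration.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

def maxcut_energies():
    """Diagonal of the cost Hamiltonian H = sum_{(i,j)} Z_i Z_j."""
    E = np.zeros(2**n)
    for state in range(2**n):
        bits = [(state >> k) & 1 for k in range(n)]
        E[state] = sum(1.0 if bits[i] == bits[j] else -1.0 for i, j in edges)
    return E

E = maxcut_energies()
psi0 = np.full(2**n, 2**(-n / 2))                 # uniform superposition |+>^n
ground = (E == E.min())                           # ground-subspace indicator

for beta in (0.0, 0.5, 1.0, 2.0):
    amp = np.exp(-beta * (E - E.min())) * psi0    # unnormalized e^{-beta(H - E0)} |psi0>
    p_succ = np.sum(amp**2)                       # proxy for the LCU success probability
    fidelity = np.sum(amp[ground]**2) / p_succ    # ground-subspace fidelity if successful
    print(f"beta = {beta:3.1f}   success ~ {p_succ:.3f}   ground fidelity = {fidelity:.3f}")
```

Larger beta trades a smaller success probability for higher post-selected fidelity, which is the trade-off the paper's threshold beta-star and fixed-point amplitude amplification address.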
Galilean boost invariance does not survive the trace: symmetry breaking in open quantum systems
This paper shows that when a quantum system interacts with its environment, fundamental symmetries like Galilean boost invariance (how physics looks the same from different moving reference frames) can be broken in the reduced dynamics of the system, even when the full system-environment setup preserves these symmetries. The authors identify exactly where this symmetry breaking occurs and propose parametric driving as a potential way to suppress it.
Key Contributions
- Demonstrated that Galilean boost invariance is broken in open quantum systems through the dissipative anticommutator term in the master equation
- Established fundamental incompatibility between Galilean invariance, fluctuation-dissipation theorem, and reduced boost covariance for bilinear-coupled systems
- Identified parametric driving as a mechanism to suppress boost-breaking while protecting quantum entanglement
View Full Abstract
Tracing out a Galilean-invariant Caldeira-Leggett environment breaks Galilean boost covariance of the reduced dynamics, while spatial translations and rotations survive intact. An operator-level analysis of the exact Hu-Paz-Zhang master equation localizes the violation entirely in the dissipative anticommutator term, scaling with the damping coefficient $Γ(t)f(t)$. The fluctuation-dissipation theorem ties this coefficient to the absorptive bath response that drives equilibrium momentum diffusion, so for any non-trivial bath spectral density, bilinear-coupled Galilean invariance, the fluctuation-dissipation theorem, and reduced boost covariance cannot hold simultaneously. The stochastic decomposition of the influence functional extends the mechanism beyond the quadratic regime. The dimensionless ratio $\hbarγ/k_\mathrm{B} T$ delineates the crossover: cold atoms in dissipative optical lattices and ultracold molecules sit at its edge. Parametric driving offers a one-directional escape: the squeezing rate that protects nonequilibrium entanglement above the standard quantum limit also suppresses boost-breaking over a driving cycle.
Nonadiabatic Renormalization Group for Strongly Coupled Multiscale Quantum Systems
This paper introduces a new mathematical technique called 'nonadiabatic renormalization group' for studying complex quantum systems with multiple energy scales. Instead of eliminating high-energy components, the method suppresses them iteratively, creating new types of quantum state representations that can better capture complex entanglement patterns.
Key Contributions
- Novel nonadiabatic renormalization group method for multiscale quantum systems
- New tensor network state formalism with shared physical legs that goes beyond matrix product states
- Applications to interacting boson models and ab initio quantum chemistry
View Full Abstract
Complex quantum systems are often multiscale in nature, with strong interactions between different scales. We present a novel idea: iteratively suppressing, rather than tracing out, the fast, high-energy degrees of freedom in strongly correlated quantum systems with multiple energy scales in a non-perturbative way, termed the nonadiabatic renormalization group. This leads to a quantum geometric structure of a nested fiber bundle, in which each fiber of a layer is itself a fiber bundle of the next layer. The nonadiabatic renormalization group brings a new type of tensor network state that shares physical legs among "sites" and encodes quantum entanglement beyond conventional matrix product states. We demonstrate how to apply the nonadiabatic renormalization group to different types of problems, including an interacting boson model and ab initio quantum chemistry with interacting electrons.
Constructing Bulk Topological Orders via Layered Gauging
This paper presents a new method called 'layered gauging' to construct higher-dimensional topological quantum phases by stacking lower-dimensional quantum systems with symmetries and systematically gauging interactions between adjacent layers. The approach provides a physically intuitive way to generate complex topological orders including fracton phases from various types of quantum symmetries.
Key Contributions
- Development of the layered gauging construction method for systematically generating topological orders from quantum symmetries
- Demonstration of the method across multiple examples including derivation of X-cube model from 2D plaquette Ising subsystem symmetry and construction of double semion topological order from anomalous Z2 symmetry
View Full Abstract
Understanding quantum phases and phase transitions in the presence of symmetries is a central objective of quantum many-body physics. A powerful modern paradigm for investigating this problem is topological holography, which relates symmetries in $k$ dimensions to "bulk" topological orders in $(k+1)$ dimensions. While conceptually profound, most existing bulk construction methods rely on sophisticated mathematical formalisms and can be difficult to apply to certain symmetry types. In this work, we propose a physically intuitive and versatile method, termed the layered gauging construction, to systematically generate $(k+1)$-dimensional (liquid or fracton) topological orders from $k$-dimensional generalized symmetries. Roughly speaking, the prescription is to stack many layers of $k$-dimensional quantum systems with certain symmetries into a $(k+1)$-dimensional pile, and then sequentially gauge a diagonal symmetry acting on each nearest-neighbor pair of layers. The detailed procedure depends on the specific symmetry types. We have successfully implemented the method in a number of examples in different spatial dimensions, with symmetries that are conventional, higher-form, subsystem, anomalous, nonabelian, or noninvertible. We hence conjecture the method to be very general. For example, from the subsystem symmetry of the $2d$ plaquette Ising model, we derive the X-cube model and also an anisotropic fracton topological order. Additionally, starting from an anomalous $\mathbb Z_2$ symmetry in $1d$, we construct a new square lattice model realizing the double semion topological order.
Fixed-PVM Born Rule Uniqueness from Fisher Non-Expansion and Operational Calibration
This paper provides a mathematical proof that the Born rule (quantum mechanics' fundamental probability rule) is uniquely determined by three geometric and information-theoretic conditions when measuring quantum states with a fixed measurement apparatus. The work establishes rigorous foundations for why quantum probabilities must follow the specific form they do.
Key Contributions
- Proves uniqueness of Born rule from Fisher information geometry and operational constraints
- Establishes rigidity theorem for Fisher-non-expanding maps on probability simplices
View Full Abstract
Fix a finite dimension $d \geq 2$ and a fixed rank-1 PVM $M=\{|e_1\rangle\langle e_1|,\ldots,|e_d\rangle\langle e_d|\}$ on ${\bf C}^d$. Let $P_M:\mathbb{CP}^{d-1}\toΔ^{d-1}$ be a readout map on pure states. We prove that three primitives force the Born rule for this fixed measurement: (i) square-root regularity of $R_M=\sqrt{P_M}$ along Fubini-Study geodesics, (ii) the universal readout Cramer-Rao bound $F_{\rm cl}\leq F_Q$ on smooth pure-state curves, and (iii) operational calibration on basis preparations $P_M([e_i])=δ_i$. The geometric core is a rigidity theorem for Fisher-non-expanding self-maps of the probability simplex: after conjugation by the square-root chart, such maps become round-metric 1-Lipschitz self-maps of the positive spherical orthant, and vertex fixing forces the identity. The main readout theorem is dimensionwise, fixed-PVM, and pure-state only. Escort-class Born uniqueness and the Markov/coarse-graining routes appear as corollaries or alternative routes.
Semiclassical Ehrenfest paths in open quantum systems
This paper studies how quantum systems transition to classical behavior when they interact with their environment, using mathematical tools called Ehrenfest trajectories and Fokker-Planck equations. The researchers show how to separate the quantum coherent effects from the irreversible classical effects in this transition process.
Key Contributions
- Derived explicit Fokker-Planck equation for Gaussian mixture evolution in open quantum systems
- Provided phase-space interpretation of quantum-to-classical transition using generalized Ehrenfest theorem
- Demonstrated microscopic separation of coherent and irreversible contributions in open quantum dynamics
View Full Abstract
We study the semiclassical Ehrenfest trajectories in open quantum systems. We first derive in explicit form the Fokker-Planck equation that governs the time evolution of the mixing measure for a Gaussian mixture. Then, we embed the generalized Ehrenfest theorem recently obtained for open quantum systems into this phase-space picture to study the time evolution of the expectations of observables with respect to the Gaussian mixture. We show how the coherent and irreversible contributions are microscopically separated. Our work provides a transparent phase-space interpretation of the emergence of classical trajectories in open quantum dynamics.
High-key-rate Fully-Passive Quantum Access Network with Thermal Source
This paper demonstrates a new quantum access network protocol that achieves record-breaking key generation rates of 19.48 Mbps per network unit using passive state preparation. The system is fully compatible with existing optical communication infrastructure and extends quantum key distribution from point-to-point to point-to-multipoint networks.
Key Contributions
- Demonstrated record-breaking key generation rate of 19.48 Mbps per quantum network unit using passive state preparation
- Extended passive continuous variable quantum key distribution from point-to-point to point-to-multipoint network architecture
- Achieved full compatibility with existing classical optical communication infrastructure without requiring modifications
View Full Abstract
To accommodate classical communication systems with progressively increasing transmission rates, quantum access networks (QAN) have undergone systematic and protocol-level optimizations in recent years, where quantum passive optical network (QPON) architectures are gaining significant attention due to their simple structure. It is challenging for previous QANs based on active protocols or Stokes-operator coding protocols to achieve high-speed linear modulation with high extinction ratio and stability under practical conditions. In this work, we propose and experimentally demonstrate a downstream fully passive quantum access network protocol using passive state preparation (PSP) with free-space and single-mode fiber hybrid channels, and the final key generation rate is up to a record-breaking 19.48 Mbps per quantum network unit. The proposed PSP-QPON scheme extends the scope of PSP-CVQKD from point-to-point to point-to-multipoint networks, which enables high-key-rate, high-stability, and low-resource-consumption implementation. Moreover, the network channel in this experiment is fully compatible with access networks in classical optical communications, allowing integration with existing optical infrastructure without additional modifications and providing a promising solution for local-area quantum access networks at home or at a mobile terminal.
Q3SAT-GPT: A Generative Model for Discovering Quantum Circuits for the 3-SAT Problem
This paper introduces Q3SAT-GPT, a generative AI model that learns to design quantum circuits for solving the 3-SAT optimization problem by training on high-quality circuit examples from an improved QAOA algorithm. The approach bypasses expensive optimization during inference by having the model directly generate effective quantum circuits.
Key Contributions
- Introduction of Q3SAT-GPT, a generative model for quantum circuit discovery that eliminates costly variational optimization at inference
- Development of Mosaic Adaptive QAOA (MosaicADAPT-QAOA) for constructing high-quality low-depth QAOA circuits as training data
View Full Abstract
This work introduces Q3SAT-GPT, a generative model for discovering quantum circuits for the Max-E3-SAT problem. Our method learns from high-performing QAOA-style ansätze to directly generate candidate circuits. To create high-quality supervision, we also introduce Mosaic Adaptive QAOA (MosaicADAPT-QAOA), an adaptive strategy for constructing low-depth QAOA circuits by selecting subsets of mixer operators in each step, rather than inserting operators sequentially. The resulting circuits serve as training data for the generative model, allowing it to learn effective circuit design patterns while eliminating the need for costly variational optimization at inference time. Experiments show that our framework attains strong solution quality with shallow circuits and scales significantly better than both our adaptive construction procedure and conventional variational baselines. Our results establish generative modeling as a high-performance route toward the scalable discovery of quantum optimization circuits, demonstrating that these models can effectively internalize circuit logic while providing a foundation for future, instance-aware inductive biases. Reproducibility: The source code is available at https://github.com/pratimugale/Q3SAT-GPT.
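For readers unfamiliar with the cost functions involved, the following sketch shows how a Max-E3-SAT instance maps to a cubic spin (Pauli-Z) polynomial of the kind these QAOA-style circuits target; the clauses below are hypothetical and the brute-force search is purely classical, intended only to make the structure concrete.

```python
import numpy as np
from itertools import product

# Toy encoding of Max-E3-SAT as a cubic spin polynomial (hypothetical instance).
# Each clause is three signed literals: +i means x_i, -i means NOT x_i (1-indexed).
clauses = [(+1, +2, -3), (-1, +3, +4), (+2, -4, +5), (-2, -3, -5)]
n = 5

def clause_penalty(clause, z):
    """1 if the clause is violated, 0 otherwise, written as a product of
    single-spin projectors; expanding it gives 1-, 2-, and 3-body Z terms."""
    p = 1.0
    for lit in clause:
        zi = z[abs(lit) - 1]                      # spin value +/-1 of that variable
        p *= (1 - zi) / 2 if lit > 0 else (1 + zi) / 2
    return p

def cost(z):
    return sum(clause_penalty(c, z) for c in clauses)  # number of violated clauses

best = min(product([+1, -1], repeat=n), key=cost)
print("violated clauses at optimum:", cost(best),
      " assignment (True/False):", [zi > 0 for zi in best])
```

The generative model in the paper produces circuits for cost functions of exactly this diagonal, higher-order form, so no quadratization into pairwise terms is required.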
High-Rate Free-Space Continuous-Variable QKD with Self-Referenced Passive State Preparation
This paper develops an improved quantum key distribution system that uses a new self-referenced method to prepare quantum states, achieving record-high secure communication rates of 10.34 Mbps over free-space channels with significant loss. The work addresses key practical challenges in quantum cryptography by improving signal quality and system stability.
Key Contributions
- First implementation of local local oscillator CVQKD system using self-referenced passive state preparation
- Theoretical proof of equivalence between PSP and GMCS protocols using temporal-mode theory
- Record-high asymptotic secret key rate of 10.34 Mbps over 23.5 dB loss free-space channel
- Novel self-referenced pilot scheme for high-precision frequency and phase compensation
View Full Abstract
Continuous-variable quantum key distribution (CVQKD) using passive state preparation (PSP) offers low-cost, high-rate secure communication. However, the existing PSP-CVQKD scheme with a transmitted local oscillator has high photon-leakage noise and poor stability, making it unsuitable for high-loss transmission. In this work, for the first time, we propose and implement a local local oscillator (LLO) CVQKD system using a self-referenced (SR) PSP scheme, and give a theoretical proof of the equivalence of the PSP and GMCS protocols using temporal-mode theory. By employing the novel self-referenced pilot scheme to achieve high-precision time-varying frequency and phase compensation algorithms, we significantly improve the system's signal-to-noise ratio and stability. The system achieves a record-high asymptotic secret key rate of 10.34 Mbps over a free-space channel with up to 23.5 dB loss, while maintaining low excess noise and robust performance under turbulent conditions. This work establishes the feasibility of SR-LLO CVQKD, providing a practical pathway toward secure, high-rate quantum communication in realistic environments.
Large quantum dot energy level shifts in anomalous photon-assisted tunneling
This paper studies quantum dots in germanium/silicon heterostructures and finds that energy level splittings important for hole spin qubits change significantly when gate voltages are adjusted, contrary to previous assumptions. The researchers used two measurement techniques to characterize these unexpected energy changes and developed a model to explain the linear dependence on gate voltage.
Key Contributions
- Discovery of strong gate-voltage dependence of singlet-triplet splittings in quantum dots, contradicting previous assumptions
- Development of a model combining photon-assisted tunneling and pulsed-gate spectroscopy data to explain linear gate-voltage dependence
View Full Abstract
Orbital energy splittings are important quantum dot parameters for the operation of hole spin qubits. They are known to depend on the lateral confinement of the quantum dots. However, when changing top, plunger gate voltages, which are the typical control parameters for qubit applications, such energy-splitting changes are typically negligible, both as measured in experiment and as assumed in effective theories. Here, we study the singlet-triplet (ST) splittings, which depend on the orbital splittings, of a double quantum dot (DQD) in a Ge/SiGe heterostructure using photon-assisted tunneling (PAT) and pulsed-gate spectroscopy. We find that the ST splittings have a surprising, strong dependence on the top gate voltages, leading to anomalous PAT measurements. We combine data from both measurements in a model that well describes the linear gate-voltage dependence of the ST splittings. Finally, we show that the ST splittings of the two dots exhibit similar linear gate-voltage dependences when the device is retuned such that their ratio is significantly different.
Simulating dynamics of RLC circuits with a quantum differential-algebraic equations solver
This paper presents a quantum algorithm for simulating RLC electrical circuits that achieves exponential speedup over classical methods. The algorithm solves differential-algebraic equations to encode circuit voltages and currents in quantum states, running in polylogarithmic time compared to polynomial time for classical approaches.
Key Contributions
- Development of quantum differential-algebraic equation solver for linear DAE systems
- Exponential speedup quantum algorithm for RLC circuit simulation
- Proof that energy estimation in RLC circuits is BQP-hard
- Modified nodal analysis framework compatible with quantum algorithms
View Full Abstract
We introduce a quantum algorithm for simulating the dynamics of electrical circuits consisting of resistors, inductors and capacitors (aka RLC circuits) along with power sources. Given oracle access to the connectivity of the circuit and values of the electrical elements, our algorithm prepares a quantum state that encodes voltages and current values either at a specified time or the history of their evolution over a time interval. For an RLC circuit with $N$ components, our algorithm runs in time $\textsf{polylog}(N)$ under mild assumptions on the connectivity of the circuit and values of its components. This provides an exponential speed-up over classical algorithms that take $\textsf{poly}(N)$ time in the worst case. Our algorithm can be used to estimate energy across a set of components or dissipated power in $\textsf{polylog}(N)$ time, a problem that we prove is BQP-hard and therefore unlikely to be efficiently solved by classical algorithms. The main challenge in simulating the dynamics of RLC circuits is that they are governed by differential-algebraic equations (DAEs), a coupled system of differential equations with hidden algebraic constraints. Consequently, existing quantum algorithms for ordinary differential equations cannot be directly utilized. We therefore develop a quantum DAE solver for simulating the time-evolution of linear DAEs. For RLC circuits, we employ modified nodal analysis to create a system of DAEs compatible with our quantum algorithm. We establish BQP-hardness by demonstrating that any network of classical harmonic oscillators, for which an energy-estimation problem is known to be BQP-hard, is a special case of an LC circuit. Our work gives theoretical evidence of quantum advantage in simulating RLC circuits and we expect that our quantum DAE solver will find broader use in the simulation of dynamical systems.
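To illustrate what "differential-algebraic equations from modified nodal analysis" means here, the following purely classical Python sketch assembles the singular-mass-matrix system E dx/dt = A x + b(t) for a series RLC circuit driven by a voltage source and integrates it with implicit Euler; the component values and the integrator are illustrative assumptions and are not part of the quantum algorithm.

```python
import numpy as np

# Modified nodal analysis of a series RLC circuit: Vs -> R -> L -> C -> ground.
# E is singular (the KCL and source rows carry no derivatives), so this is a DAE.
R, L, C = 1.0, 1e-3, 1e-6            # ohms, henries, farads (hypothetical values)
Vs = lambda t: 1.0                   # unit-step voltage source

# State x = [v1, v2, v3, iL, iV]: node voltages, inductor current, source current.
E = np.zeros((5, 5)); A = np.zeros((5, 5))
A[0] = [-1/R,  1/R, 0,  0, -1]       # KCL at node 1 (resistor + source branch)
A[1] = [ 1/R, -1/R, 0, -1,  0]       # KCL at node 2 (resistor + inductor)
E[2, 2] = C;  A[2] = [0, 0, 0, 1, 0] # KCL at node 3: C dv3/dt = iL
E[3, 3] = L;  A[3] = [0, 1, -1, 0, 0]# inductor branch: L diL/dt = v2 - v3
A[4] = [-1, 0, 0, 0, 0]              # algebraic constraint: v1 = Vs(t)

def b(t):
    return np.array([0.0, 0.0, 0.0, 0.0, Vs(t)])

h, steps = 1e-6, 2000
x = np.zeros(5)
for k in range(steps):
    t = (k + 1) * h
    # Implicit Euler: (E - h A) x_{k+1} = E x_k + h b(t_{k+1})
    x = np.linalg.solve(E - h * A, E @ x + h * b(t))

print("capacitor voltage v3 after", steps * h, "s:", x[2])
```

The hidden algebraic rows (zero rows of E) are exactly what prevents off-the-shelf quantum ODE solvers from being applied directly, which is the gap the quantum DAE solver targets.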
Schwinger-Keldysh Path Integral for Gauge theories
This paper develops a mathematical framework called the Schwinger-Keldysh path integral for studying non-equilibrium quantum gauge field theories, focusing on how to properly handle gauge symmetries and quantum states that evolve away from thermal equilibrium. The work provides theoretical tools for analyzing open quantum systems with gauge fields using advanced field theory techniques.
Key Contributions
- Development of Schwinger-Keldysh path integral formalism for non-Abelian gauge theories with BRST gauge fixing
- Construction of manifestly BRST-invariant framework for non-equilibrium processes with arbitrary initial states
- Derivation of Ward-Takahashi-Slavnov-Taylor identities and analysis of Keldysh BRST symmetry in Open EFT
View Full Abstract
We develop the Schwinger-Keldysh path-integral formalism for open non-Abelian gauge theories that are gauge-fixed via the BRST method in covariant gauges. We focus on generic initial states, pure and mixed, specified at finite times suitable for non-equilibrium processes. We pay particular attention to the handling of the indefinite Hilbert space, the construction of BRST-invariant Schrödinger-picture wavefunctionals, density matrices and inner product, the implementation of the Hata-Kugo prescription, and the role of boundary terms at both the initial and final times. We highlight the advantages of the Nakanishi-Lautrup field representation in dealing with initial/final conditions. The resulting Schwinger-Keldysh path integral is manifestly invariant under a diagonal (retarded) BRST symmetry for arbitrary physical initial states, whether pure or mixed. From this, we obtain the corresponding Ward-Takahashi-Slavnov-Taylor identities, valid perturbatively. Non-perturbatively the Gribov ambiguity is expected to break or modify the BRST symmetry. The naive advanced BRST symmetry is shown to be explicitly violated by the in-in boundary conditions. We show that the Feynman-Vernon influence functional derived by integrating out charged matter and/or hard gluon modes remains (perturbatively) BRST invariant. When the Open EFT action is expanded to second order in advanced fields it exhibits an exact symmetry under a contraction of the original BRST symmetry. This Keldysh BRST symmetry is equivalent to the BRST associated with the retarded gauge transformations together with a linearly realized BRST transformation of the advanced fields. These govern the structure of the leading terms in an Open EFT. We illustrate this with the explicit example of Hard Thermal Loop Effective Theory, and construct the general form of the Open EFT in a Higgs phase when all gauge symmetries are spontaneously broken.
The most discriminable quantum states in the multicopy regime
This paper investigates which quantum states can be distinguished most accurately when multiple identical copies are available, proving that certain symmetric designs achieve optimal discrimination for pure states and that mixed states can sometimes outperform pure states. The authors establish fundamental limits on quantum state discrimination and show quantum systems provide quadratic advantages over classical probability distributions in this task.
Key Contributions
- Proof that state k-designs achieve optimal discrimination when N supports such designs
- Demonstration that mixed states can outperform pure states when N exceeds k-design requirements
- Establishment of quadratic quantum advantage over classical discrimination problems
- Connection between quantum state discrimination and multiplicative Bayes capacity of classical channels
View Full Abstract
This work investigates which sets of quantum states give rise to the highest achievable success probability in minimum-error state discrimination if multiple copies of the unknown state are given. Specifically, we consider uniformly distributed ensembles of the form $\left\{\frac{1}{N},ρ_i^{\otimes k}\right\}_{i=1}^N$, where $N$ states in dimension $d$ are provided in $k$ identical copies, and derive universal limits in this scenario. For pure state ensembles, we prove that whenever $N$ is large enough to support a state $k$-design, these designs will exactly give rise to the maximally discriminable sets. We further show that when $N$ exceeds the size required for a $k$-design, mixed states can outperform all pure state ensembles. We also analyse the analogue classical discrimination problems, in which states are replaced by probability distributions. We recognise that the problem of most discriminable classical states in the multi-copy regime is in one-to-one correspondence to the concept of the multiplicative Bayes capacity of independent uses of classical channels, a concept that emerges naturally in the context of classical information leakage. This connection allows us to completely solve the classical analogue of our problem when $N\geq \binom{d + k - 1}{k}$, and to prove that quantum systems offer a quadratic advantage (in number of copies $k$) over classical ones. Curiously, we also show that this quantum advantage is strongly reduced when one is restricted to real quantum states. Finally, we introduce computational techniques to find sets of most discriminable ensembles, and to obtain rigorous universal upper bounds on the maximal success probability for multi-copy state discrimination in cases that are analytically intractable.
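As a minimal point of reference for the multi-copy setting (far simpler than the N-state, design-based analysis in the paper), the sketch below evaluates the Helstrom success probability for discriminating two equiprobable pure qubit states given k identical copies; the only change with k is that the overlap is raised to the k-th power. The states chosen here are arbitrary examples.

```python
import numpy as np

# Helstrom bound for two equiprobable pure states with k identical copies:
# P_success = 1/2 * (1 + sqrt(1 - |<psi1|psi2>|^(2k))).
def helstrom_k_copies(psi1, psi2, k):
    overlap_2k = abs(np.vdot(psi1, psi2)) ** (2 * k)
    return 0.5 * (1 + np.sqrt(1 - overlap_2k))

theta = np.pi / 8                                  # example pair of qubit states
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])
for k in (1, 2, 5, 10):
    print(f"k = {k:2d}   P_success = {helstrom_k_copies(psi1, psi2, k):.4f}")
```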
En Route to a Standard QMA1 vs. QCMA Oracle Separation
This paper studies the theoretical separation between quantum complexity classes QMA1 and QCMA by constructing oracle separations that show quantum witnesses can be more powerful than classical witnesses in certain computational scenarios. The work provides theoretical foundations for understanding the relative power of quantum versus classical proof systems.
Key Contributions
- Construction of classical oracle separation between QMA1 and QCMA with restricted adaptive rounds
- Derandomization of existing permutation-oracle separation results
- Analysis of complexity separations with exponentially small gaps
- Implications for approximate ground-state preparation algorithms
View Full Abstract
We study the power of quantum witnesses under perfect completeness. We construct a classical oracle relative to which a language lies in $\mathsf{QMA}_1$ but not in $\mathsf{QCMA}$ when the $\mathsf{QCMA}$ verifier is only allowed polynomially many adaptive rounds and exponentially many parallel queries per round. Additionally, we derandomize the permutation-oracle separation of Fefferman and Kimmel, obtaining an in-place oracle separation between $\mathsf{QMA}_1$ and $\mathsf{QCMA}$. Furthermore, we focus on $\mathsf{QCMA}$ and $\mathsf{QMA}$ with an exponentially small gap, where we show a separation assuming the gap is fixed, but not when it may be arbitrarily small. Finally, we derive consequences for approximate ground-state preparation from sparse Hamiltonian oracle access, including a bounded-adaptivity frustration-free variant.
Rethinking Nonlocality: Locality, Counterfactuals, and the EPR-Bell Argument
This paper challenges the common interpretation that Bell inequality violations prove nature is nonlocal, arguing instead that these violations demonstrate contextuality - the impossibility of assigning definite values to quantum measurements across incompatible contexts. The authors contend that Bell inequalities arise from combining locality with counterfactual reasoning about unperformed measurements, not from locality alone.
Key Contributions
- Reinterpretation of Bell inequality violations as evidence of contextuality rather than nonlocality
- Analysis of the role of counterfactual reasoning in deriving Bell inequalities
- Connection to sheaf-theoretic approaches to quantum contextuality
View Full Abstract
The widespread claim that violations of Bell inequalities establish the nonlocality of nature is critically reexamined. It is argued that this conclusion is not logically compelled by either the Einstein-Podolsky-Rosen (EPR) argument or Bell's theorem. The analysis highlights the central role of counterfactual reasoning, the assumption that outcomes of unperformed measurements possess definite values, in deriving Bell inequalities. It is shown that these inequalities follow not from locality alone, but from the conjunction of locality with a global assignment of values across incompatible measurement contexts. Their experimental violation therefore signals the impossibility of such a global assignment, i.e., contextuality, rather than necessarily implying nonlocal causation. This interpretation aligns with Bohr's emphasis on the contextual character of physical quantities and is naturally formulated within modern sheaf-theoretic approaches to contextuality.
Digital Simulation of Non-Hermitian Knotted Bands on Quantum Hardware
This paper demonstrates how to use quantum computers to simulate and characterize complex mathematical structures called knots and links in non-Hermitian quantum systems. The researchers developed a method to extract topological information about these braided structures without requiring computationally expensive optimization procedures.
Key Contributions
- Development of non-variational protocol for characterizing complex braided band structures on quantum hardware
- Experimental demonstration of knot and link reconstruction including Hopf chains and Solomon's knot on superconducting quantum processors
- Introduction of efficient measurement strategy that extracts topological invariants without full spectral tomography
View Full Abstract
Knots and links represent a fundamental motif of non-local connectivity that permeates the physical sciences from string theory to protein folds. While spectral braiding has been explored in two-band non-Hermitian models across various platforms, its direct simulation and characterization on programmable quantum hardware, particularly beyond two strands, remains a formidable challenge due to the limitations of variational optimization in these systems. Here, we introduce a family of non-Hermitian multi-band twister models and implement a non-variational protocol to characterize their complex braided band structures on a programmable superconducting quantum processor. By mapping the winding of eigenstates to the spectral topology, we devise an efficient measurement strategy that extracts braid information, including braid words and knot invariants like the Alexander and Jones polynomials, without requiring full spectral tomography or repeated optimization. We experimentally demonstrate the reconstruction of complicated knots and links such as the Hopf chain and Solomon's knot. Our approach provides a general framework for investigating exotic non-Hermitian topology on near-term quantum devices, opening a route to simulate more sophisticated topological structures in knot theory.
Cavity-mediated coherence protection and one-axis twisting for spins in solids
This paper demonstrates long-range quantum interactions between spins in a solid-state crystal coupled to a microwave cavity, achieving collective effects like superradiance and spin squeezing while dramatically extending coherence times from microseconds to milliseconds.
Key Contributions
- First demonstration of coherent cavity-mediated all-to-all interactions in a solid-state spin ensemble
- Achievement of one-axis twisting dynamics enabling spin squeezing for enhanced quantum metrology
- Discovery of many-body energy gap protection extending coherence times by orders of magnitude without decoupling pulses
- Establishment of solid-state platform for collective many-body quantum physics with technological applications
View Full Abstract
Long-range interactions between emitters give rise to collective phenomena, including superradiance, spin squeezing, and coherence protection, that are important to both fundamental physics and quantum technologies. Despite progress in cold atoms, coherent cavity-mediated all-to-all interactions have not yet been realized in a solid-state ensemble. Here we demonstrate such interactions in a $^{171}$Yb$^{3+}$:CaWO$_4$ crystal coupled to a microwave resonator, observing superradiant emission on resonance and unitary one-axis twisting dynamics in the dispersive regime. The same interaction also opens a many-body energy gap that suppresses inhomogeneous dephasing, extending the ensemble Ramsey coherence time from tens of microseconds to milliseconds without decoupling pulses. These results establish a solid-state platform for collective many-body physics with direct implications for quantum technologies. Specifically, the observed one-axis twisting dynamics opens a path towards spin squeezing for entanglement-enhanced quantum metrology, and the extended coherence due to gap-protection is relevant for both microwave photon storage and precision measurement.
Strict Hierarchy for Quantum Channel Certification to Unitary
This paper develops optimal quantum algorithms for testing whether an unknown quantum channel is equal to a target unitary operation or significantly different from it. The researchers establish tight bounds on how many queries are needed under three different access models, showing a clear hierarchy where more powerful access models require exponentially fewer queries.
Key Contributions
- Established optimal query complexities for quantum channel certification across three access models with tight upper and lower bounds
- Demonstrated strict hierarchy showing coherent access requires quadratically fewer queries than incoherent access, and source-code access requires exponentially fewer queries than coherent access
View Full Abstract
We consider the problem of quantum channel certification to unitary, where one is given access to an unknown $d$-dimensional channel $\mathcal{E}$, and wants to test whether $\mathcal{E}$ is equal to a target unitary channel or is $\varepsilon$-far from it in the diamond norm. We present optimal quantum algorithms for this problem, settling the query complexities in three access models of increasing power. Specifically, we show that: (i) $Θ(d/\varepsilon^2)$ queries suffice for the incoherent access model, matching the lower bound due to Fawzi, Flammarion, Garivier, and Oufkir (COLT 2023). (ii) $Θ(d/\varepsilon)$ queries suffice for the coherent access model, matching the lower bound due to Regev and Schiff (ICALP 2008). (iii) $Θ(\sqrt{d}/\varepsilon)$ queries suffice for the source-code access model, matching the lower bound due to Jeon and Oh (npj Quantum Inf. 2026). This demonstrates a strict hierarchy of complexities for quantum channel certification to unitary across various access models.
A Gaussian asymmetry measure
This paper introduces a new way to measure quantum asymmetry that stays within the mathematical framework of Gaussian states, making calculations much easier while still capturing important physical phenomena like the quantum Mpemba effect and symmetry restoration in fermionic systems.
Key Contributions
- Introduction of a Gaussian asymmetry measure that enables exact analytical calculations using correlation matrix techniques
- Demonstration that the measure captures key dynamical signatures like the Mpemba effect and symmetry restoration while remaining computationally tractable
View Full Abstract
The study of Entanglement Asymmetry has emerged in recent years as a powerful tool to characterise the symmetry properties of quantum states in relation to a given charge operator through the lens of entanglement. While extremely powerful and general, the standard definition of asymmetry introduces significant non-Gaussian features in free-fermionic systems, leading to certain analytical limitations. In this work, we introduce an asymmetry measure that remains strictly within the Gaussian manifold and analyse its properties. In particular, we show that it quantifies the minimal distance between a Gaussian state and the manifold of symmetric Gaussian states. We further demonstrate that this measure captures the established dynamical signatures of entanglement asymmetry, such as the Mpemba effect, symmetry restoration, and the lack thereof. The Gaussian structure allows this novel asymmetry measure to be computed exactly using correlation matrix techniques, and to be described asymptotically through the quasiparticle picture. We also comment on the possibility of using charge fluctuations to characterise the asymmetry of a Gaussian state.
Convex combinations of bosonic pure-loss channels
This paper analyzes fading channels in quantum communication, where the transmissivity fluctuates randomly, showing that entanglement distribution and quantum key distribution can achieve positive rates even in very noisy conditions. The authors prove that non-Gaussian quantum states can significantly outperform Gaussian states for communication over these realistic noisy channels.
Key Contributions
- Proved that entanglement distribution and quantum key distribution always achieve strictly positive rates over fading channels unless completely noisy
- Demonstrated that non-Gaussian Fock-diagonal states strictly outperform Gaussian thermal states for entanglement-assisted classical capacity of fading channels
- Identified regimes where thermal inputs have zero coherent information while optimized non-Gaussian states achieve positive values, activating quantum communication
- Developed iterative variational algorithm to optimize coherent and mutual information for general fading distributions
View Full Abstract
The pure-loss channel is a fundamental model for describing noise in bosonic quantum platforms. It is characterised by a single parameter, the transmissivity, which quantifies the fraction of the input energy that reaches the output of the channel. In realistic scenarios, however, such as free-space quantum communication, the transmissivity is not fixed but fluctuates from one channel use to another. In this setting, the overall channel is effectively described as a convex combination of pure-loss channels, known as a fading channel. Despite its practical relevance, the quantum Shannon theory of the fading channel has remained largely unexplored. Here, we address this gap, specifically investigating degradability, anti-degradability, entanglement breakingness, and capacities of the fading channel. Of particular relevance to practical quantum-internet applications, we prove that entanglement distribution and quantum key distribution can always be achieved at a strictly positive rate over any fading channel, no matter how noisy it is or how strongly the transmissivity fluctuates, provided the channel is not completely noisy. Moreover, we prove that thermal states, which are optimal for a broad class of static bosonic Gaussian channels, fail to achieve the entanglement-assisted classical capacity of fading channels: non-Gaussian Fock-diagonal states strictly outperform all Gaussian encodings. Most strikingly, we identify regimes where the coherent information of thermal inputs vanishes, while optimized non-Gaussian states achieve strictly positive values, thereby activating the channel for quantum communication. For a paradigmatic binary fading model we establish this result analytically, deriving the exact capacity-achieving state in closed form. For general fading distributions, we design an iterative variational algorithm to optimize the coherent and mutual information.
MLMC-qDRIFT: Multilevel Variance Reduction for Randomized Quantum Hamiltonian Simulation
This paper develops MLMC-qDRIFT, a multilevel Monte Carlo method that improves the efficiency of quantum Hamiltonian simulation by reducing the number of circuit samples needed for accurate observable estimation from O(ε^-3) to O(ε^-2 log²(1/ε)) gate complexity.
Key Contributions
- Development of multilevel Monte Carlo framework for qDRIFT quantum simulation
- Theoretical proof of improved gate complexity scaling from O(ε^-3) to O(ε^-2 log²(1/ε))
- Demonstration of practical gate-count savings through numerical experiments on spin-chain dynamics
View Full Abstract
Simulating quantum dynamics is one of the central applications of quantum computing. For Hamiltonians written as a sum of many terms, deterministic Trotter-Suzuki product formulas can require applying a large number of term-wise evolutions at each time step, leading to high circuit costs for large or dense systems. Randomized methods such as qDRIFT offer an alternative: each step samples only one Hamiltonian term, giving a circuit depth with no explicit dependence on the number of terms. However, when qDRIFT is used for observable estimation, high precision requires many independent random circuit realizations, resulting in a total gate complexity that scales as $\mathcal{O}(\varepsilon^{-3})$. We introduce a multilevel Monte Carlo framework for qDRIFT that reduces this sampling overhead. The method constructs a hierarchy of qDRIFT estimators with increasing circuit depths and couples adjacent levels by sharing their random Hamiltonian-term samples. This coupling makes the variance of the level differences decay with depth, allowing most samples to be taken on cheaper, coarse circuits and only a few on expensive, fine circuits. We prove that the resulting MLMC-qDRIFT estimator reduces the total gate complexity for fixed-precision observable estimation from the standard qDRIFT scaling $\mathcal{O}(\varepsilon^{-3})$ to $\mathcal{O}(\varepsilon^{-2}\log^2(1/\varepsilon))$, while preserving qDRIFT's lack of explicit dependence on the number of Hamiltonian terms. Numerical experiments for spin-chain dynamics confirm the predicted variance decay and demonstrate the practical gate-count savings of the multilevel construction.
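The variance-reduction idea can be illustrated with a classical multilevel Monte Carlo toy (an SDE estimator, not the qDRIFT construction): adjacent levels share their random increments, so the level differences have small variance and most samples can be taken on cheap coarse levels. The level geometry and sample counts below are arbitrary choices for illustration.

```python
import numpy as np

# Classical MLMC analogue of the coupling trick: fine and coarse levels reuse the
# same random increments, so E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] can be
# estimated with few samples on the expensive fine levels.
rng = np.random.default_rng(0)
T, mu, sigma, x0 = 1.0, 0.05, 0.2, 1.0

def level_difference(l, n_samples):
    """Coupled estimator P_l - P_{l-1} (P_0 alone for l = 0) for E[X_T]."""
    n_fine = 2**l
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
    xf = np.full(n_samples, x0)                    # fine path, n_fine Euler steps
    for k in range(n_fine):
        xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
    if l == 0:
        return xf
    dWc = dW[:, 0::2] + dW[:, 1::2]                # coarse path reuses the same noise
    xc = np.full(n_samples, x0)
    for k in range(n_fine // 2):
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc[:, k]
    return xf - xc

samples_per_level = [40000, 10000, 2500, 600]      # most samples on the cheap levels
estimate = sum(level_difference(l, n).mean() for l, n in enumerate(samples_per_level))
print("MLMC estimate of E[X_T]:", estimate, "  exact:", x0 * np.exp(mu * T))
```

In MLMC-qDRIFT the role of the shared Brownian increments is played by shared random Hamiltonian-term samples between circuits of adjacent depth, as described in the abstract.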
Non-local Tunneling Spectroscopy of Inelastic Quasiparticle Relaxation in Superconducting 1-D Wires
This paper studies how quasiparticles (excitations above the superconducting gap) transport and relax in superconducting nanowires using non-local tunneling spectroscopy. The researchers use three-terminal devices with dual-bias schemes to probe energy imbalance effects and compare results with theoretical simulations to extract inelastic scattering times.
Key Contributions
- Development of dual-bias non-local tunneling spectroscopy technique to probe quasiparticle energy imbalance in superconducting wires
- Observation of anti-symmetric non-local conductance features with sharp onset at 3Δ energy due to pair-breaking effects
- Extraction of energy-dependent inelastic scattering times through comparison with quasiclassical simulations
View Full Abstract
Non-local conductance experiments using tunnel junctions can provide valuable spectroscopic information on both the transport and relaxation of quasiparticles in superconductors, as these techniques directly probe the quasiparticle charge and energy imbalance even at mK temperatures. In this work, we employ mesoscopic three-terminal Cu and Al NIS devices to study non-local quasiparticle transport over length scales on the order of the superconducting coherence length in this regime. Via a dual-bias scheme, which utilizes detector biases both above and below the superconducting gap, we are able to extract the effect of quasiparticle energy imbalance via its impact on the self-consistent pair potential by symmetry considerations. We observe non-local conductance features due to pair-breaking which are anti-symmetric with respect to the polarity of the voltage bias, with a sharp onset during single-electron tunneling at energies around $3Δ$. We compare these findings with quasiclassical simulations including inelastic effects to obtain estimates of the energy-dependent inelastic scattering time. In addition, we demonstrate kinetic effects due to a large applied supercurrent, which can also be captured in this formalism and decomposed with respect to particle-hole symmetry and supercurrent direction, and discuss further opportunities for the advancement of this method.
Protein folding on a 64 qubit trapped-ion hardware via counterdiabatic quantum optimization
This paper demonstrates the largest trapped-ion quantum computer application to protein folding optimization to date, using 64 qubits to fold peptides with 14-16 amino acids. The researchers developed a new quantum optimization algorithm called bias-field digitized counterdiabatic quantum optimization (BF-DCQO) that outperformed random sampling and achieved classical reference energies for some protein folding instances.
Key Contributions
- Largest trapped-ion demonstration of protein folding optimization using 64 qubits
- Development of BF-DCQO algorithm with bias-feedback mechanism for quantum optimization
- Hybrid quantum-classical workflow combining quantum sampling with classical post-processing for protein folding
View Full Abstract
We report the largest trapped-ion hardware demonstration of lattice protein-folding optimization to date, using bias-field digitized counterdiabatic quantum optimization (BF-DCQO) on a fully connected 64-qubit Barium development system similar to the forthcoming IonQ Tempo line. Six peptide sequences with 14-16 amino-acid residues are encoded using a coarse-grained tetrahedral lattice model, yielding higher-order spin-glass Hamiltonians with long-range interactions involving up to five-body terms and mapped to 46-61 qubits. The resulting instances are demanding for near-term quantum hardware because low-energy configurations must satisfy backbone-geometry constraints while optimizing dense residue-contact interactions. BF-DCQO uses a non-variational bias-feedback mechanism, where low-energy samples from each round define longitudinal fields that guide subsequent quantum evolutions. Across the studied instances, BF-DCQO shifts raw sampled energy distributions toward lower energies than uniform random sampling, with the strongest improvements appearing in residue-contact variables. To preserve this signal, we introduce a consensus-based post-processing pipeline that combines quantum-learned contact information with feasible backbone geometries. The resulting hybrid workflow reaches the classical reference energy in multiple instances and improves over the corresponding random-seeded pipeline. These results show that BF-DCQO can generate structured samples for dense protein-folding Hamiltonians at previously unexplored trapped-ion scales.
Fluctuations of path-dependent thermodynamic quantities in open quantum systems via two-point system-only measurements
This paper develops a new method to measure thermodynamic fluctuations (like heat and work) in open quantum systems by performing two measurements on the system itself, without needing to measure the environment. The authors show how this approach can correct known thermodynamic relations like Jarzynski's equality and demonstrate it works for both weakly and strongly coupled quantum systems.
Key Contributions
- Development of a two-point measurement scheme to evaluate thermodynamic fluctuations in open quantum systems using only system observables
- Derivation of exact correction factors to Jarzynski's equality for open quantum systems
- Demonstration that pure decoherence dynamics preserve Jarzynski's equality exactly at any coupling strength
- Extension of the framework to strongly coupled non-Markovian regimes with explicit analysis of qubit phase covariant dynamics
View Full Abstract
We propose a method to evaluate general thermodynamic fluctuations in open quantum systems, based on performing a two-point measurement scheme on the system using dynamics-dependent thermodynamic observables. Our approach allows one to obtain exact equalities for fluctuations of path-dependent thermodynamic quantities such as work and heat, and to isolate correction factors to Jarzynski's equality, requiring only access to the system degrees of freedom. This framework is flexible and can be applied to the limiting case of closed systems, recovering previous, yet seemingly contradictory, results from the literature. Moreover, the formalism admits a straightforward extension to strongly coupled open quantum systems. We investigate the effect of specific dynamical classes on the fluctuation relations, and show that the pure decoherence case is particularly special, as it deterministically does not contain any heat contribution and thus constitutes a class of open system dynamics for which the Jarzynski equality for work fluctuations is identically true at any coupling strength. Finally, we look explicitly at the shape and size of the correction factors to Jarzynski's equality for a qubit undergoing phase covariant dynamics, both in the weakly-coupled regime and in the deep non-Markovian regime.
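For reference, the closed-system fluctuation relation that the paper's correction factors modify is Jarzynski's equality, $\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}$, where $W$ is the work obtained from a two-point energy measurement, $\beta$ the inverse temperature, and $\Delta F$ the equilibrium free-energy difference. The paper derives correction factors to this relation for open-system dynamics and shows that for pure decoherence the equality holds exactly at any coupling strength.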
Quantum Feature Selection with Higher-Order Binary Optimization on Trapped-Ion Hardware
This paper develops a quantum algorithm for selecting the most important features in machine learning datasets by using higher-order optimization that captures complex relationships between features. The researchers tested their approach on trapped-ion quantum hardware and showed it can compete with classical methods while finding compact, informative feature sets.
Key Contributions
- Development of higher-order unconstrained binary optimization (HUBO) formulation for quantum feature selection that captures multivariate dependencies beyond quadratic terms
- Experimental demonstration of digitized counterdiabatic quantum optimization on IonQ Forte trapped-ion hardware for machine learning preprocessing tasks
View Full Abstract
We present a quantum feature-selection framework based on a higher-order unconstrained binary optimization (HUBO) formulation that explicitly incorporates multivariate dependencies beyond standard quadratic encodings. In contrast to QUBO-based approaches, the proposed model includes one-, two-, and three-body interaction terms derived from mutual-information measures, enabling the objective function to capture feature relevance, pairwise redundancy, and higher-order statistical structure within a unified energy model. To suppress trivial all-selected solutions, we further include structured linear penalties that promote sparsity while preserving informative variables. The resulting HUBO instances are optimized with digitized counterdiabatic quantum optimization on IonQ Forte and compared against noiseless quantum simulation as well as two classical dimensionality-reduction baselines: SelectKBest based on mutual information and principal component analysis (PCA). We evaluate the proposed workflow on two benchmark classification datasets, namely the Gallstone dataset and the Spambase dataset, and analyze both predictive performance and selected-subset structure. The results show good qualitative agreement between hardware executions and noiseless simulations, supporting the feasibility of implementing higher-order feature-selection Hamiltonians on current trapped-ion processors. In addition, the quantum approach yields competitive classification performance while producing compact and informative feature subsets, highlighting the potential of higher-order quantum optimization for machine-learning preprocessing tasks.
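A minimal sketch of how such a HUBO objective can be assembled from mutual-information estimates is given below; the toy binary data, the penalty weight, and in particular the three-body term are illustrative placeholders rather than the paper's construction, and the solving step (digitized counterdiabatic optimization on IonQ Forte) is not shown.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(300, 6))                    # toy binary features (real data: Gallstone / Spambase)
y = (X[:, 0] ^ X[:, 2]) ^ rng.integers(0, 2, size=300)   # toy labels

lam = 0.3                      # sparsity penalty weight (hypothetical value)
hubo = {}                      # maps tuples of feature indices -> coefficient
for i in range(X.shape[1]):
    # one-body: reward relevance to the label, penalise selecting a feature at all
    hubo[(i,)] = -mutual_info_score(X[:, i], y) + lam
for i, j in combinations(range(X.shape[1]), 2):
    # two-body: penalise pairwise redundancy between selected features
    hubo[(i, j)] = mutual_info_score(X[:, i], X[:, j])
for i, j, k in combinations(range(X.shape[1]), 3):
    # three-body: placeholder higher-order redundancy term (the paper's exact
    # mutual-information construction is not reproduced here)
    hubo[(i, j, k)] = 0.5 * mutual_info_score(X[:, i] * 2 + X[:, j], X[:, k])

def cost(x):
    # HUBO objective for a binary selection vector x
    return sum(c * np.prod(x[list(t)]) for t, c in hubo.items())

print(cost(np.array([1, 0, 1, 0, 0, 0])))
```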
Gouy phase engineering of self-splitting quantum correlations
This paper demonstrates how to engineer quantum correlations between photon pairs to create self-splitting and recombining beams that behave like an interferometer. The researchers use structured pump beams in spontaneous parametric down conversion to transfer spatial patterns to quantum correlations, enabling novel interference effects.
Key Contributions
- Demonstration of Gouy phase engineering to create self-splitting quantum correlations
- Implementation of Mach-Zehnder-like interferometer using structured quantum correlations
- Observation of heralded single-photon interference and NOON state interference in this system
View Full Abstract
In this work, we demonstrate the effect of self-splitting spatial quantum correlations induced by Gouy phase engineering. In the process of spontaneous parametric down conversion the pump beam is structured with a mode superposition that produces a dynamical splitting and recombination of the light beam. This structure is transferred to the quantum correlations between signal and idler photons. As a result the joint two-photon probability distribution propagates like a self-splitting and recombining light beam, implementing a Mach-Zehnder-like interferometer. We observe heralded single-photon interference and two-photon NOON state interference. These results open new avenues for applications in quantum metrology.
Classical simulation of free-fermionic dynamics and quantum chemistry with magic input
This paper identifies a specific class of quantum fermionic systems that can be efficiently simulated on classical computers despite containing 'magic' non-Gaussian inputs that typically make quantum systems hard to simulate classically. The authors show that certain paired fermionic states can be reduced to classical calculations involving Pfaffian polynomials, providing practical benchmarks for quantum simulation experiments.
Key Contributions
- Identification of a tractable intermediate regime between classical and quantum advantage for fermionic quantum simulation
- Development of efficient classical algorithms for computing quantum observables in paired non-Gaussian fermionic states using Pfaffian polynomial reductions
- Provision of rigorous classical benchmarks for quantum chemistry and trapped-ion quantum simulation experiments
View Full Abstract
Establishing the precise computational boundary between classically tractable fermionic systems and those capable of genuine quantum advantage is a central challenge in quantum simulation. While injecting non-Gaussian "magic" inputs into free-fermion circuits is widely expected to generate intractable complexity, we identify a physically motivated intermediate regime. Supported by rigorous bounds and numerical evidence, we show that for a class of paired non-Gaussian fermionic states, essential quantum simulation primitives -- transition amplitudes, overlaps, and arbitrary-weight number correlators -- can be efficiently approximated to additive error under free-fermionic dynamics. This tractability stems from an algebraic reduction that compresses exponentially large multiparticle interference into a single coefficient of a multivariate Pfaffian polynomial. Because these classical estimators match the intrinsic $O(1/\sqrt{K})$ statistical uncertainty of quantum hardware utilizing $K$ measurement shots, they constitute a practical benchmark. Building on this foundation, we construct an additive-error estimator for high-weight Wilson observables in the noninteracting quench of recent trapped-ion experiments, providing a rigorous classical benchmark. Extending this to quantum chemistry, we demonstrate that core overlap-based subroutines for antisymmetrized products of strongly orthogonal geminals admit exact Pfaffian reductions. Ultimately, these results sharpen the boundary of quantum advantage, establishing that the paired-electron scaffold is effectively dequantized and clarifying exactly where quantum resources are indispensable.
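The Pfaffian structure invoked above rests on the standard identity $\mathrm{pf}(A)^2 = \det(A)$ for an antisymmetric matrix $A$, which underlies Gaussian (free-fermion) overlap formulas. A small numerical check using a naive recursive expansion (not the paper's polynomial-coefficient algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def pfaffian(A):
    # Recursive expansion along the first row; fine for the small matrices used here.
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        idx = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(A[np.ix_(idx, idx)])
    return total

M = rng.normal(size=(6, 6))
A = M - M.T                                 # random antisymmetric matrix
print(pfaffian(A) ** 2, np.linalg.det(A))   # pf(A)^2 == det(A) up to numerical error
```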
HyPulse: A Pulse Synthesis Framework for Hybrid Qubit-Oscillator Gates on Trapped-Ion Platform
This paper presents HyPulse, a software framework that translates hybrid qubit-oscillator quantum operations into control pulses for trapped-ion quantum computers. The system uses a two-phase architecture with offline pulse optimization and caching, plus online assembly of pulse sequences for specific quantum circuits.
Key Contributions
- Two-phase pulse synthesis architecture separating optimization from circuit assembly
- Content-addressed caching system for parametrized quantum gate pulses
- Hardware-aware compilation framework for hybrid qubit-oscillator operations on trapped-ion platforms
View Full Abstract
As hybrid qubit-oscillator algorithm development and trapped-ion hardware demonstrations advance in parallel, there is a lack of a compilation layer connecting the two at the pulse level in the vertical software stack. While qubit gate control and pulse synthesis are well-established, the translation of hybrid qubit-oscillator primitives to the pulse level has not been systematically addressed. This gap is further compounded by the inherently continuous parametric nature of such gates. Each distinct parameter value defines a physically unique operation requiring independent pulse optimization, making static pre-compilation strategies inapplicable. To fill this gap, we present HyPulse, a hardware-aware pulse synthesis and generation framework, which contributes a two-phase architecture decoupling pulse discovery from circuit assembly. An offline optimization engine populates a content-addressed cache of high-fidelity primitives: If a pulse for a given gate, parameter, and device specification already exists in the library, it is retrieved instantly; otherwise the optimizer synthesizes, hashes, and caches it automatically. An online assembler then constructs circuit-specific pulse programs ready to drive trapped-ion hardware control systems via DAX/ARTIQ (Duke) and JaqalPaw/QSCOUT (Sandia), trapped-ion pulse execution backends.
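The retrieve-or-synthesize pattern behind the two-phase architecture can be sketched as follows; the gate names, device specification, and optimizer stub are hypothetical, and the DAX/ARTIQ and JaqalPaw/QSCOUT backends are not modeled here.

```python
import hashlib, json

pulse_cache = {}                                    # offline library: content hash -> pulse samples

def pulse_key(gate, params, device_spec):
    # Content address: deterministic hash of gate name, parameter values, and device spec.
    blob = json.dumps({"gate": gate, "params": params, "device": device_spec}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def optimize_pulse(gate, params, device_spec):
    # Placeholder for the offline optimization engine (the real optimizer is not reproduced).
    return [0.0, 0.5, 1.0, 0.5, 0.0]

def get_pulse(gate, params, device_spec):
    key = pulse_key(gate, params, device_spec)
    if key not in pulse_cache:                      # cache miss: synthesize, hash, and store
        pulse_cache[key] = optimize_pulse(gate, params, device_spec)
    return pulse_cache[key]

# Online assembly: look up (or synthesize) a pulse for each parametrized hybrid gate in a circuit.
circuit = [("conditional_displacement", {"alpha": 0.37}), ("rabi", {"theta": 1.57})]
program = [get_pulse(g, p, {"trap": "toy-device"}) for g, p in circuit]
```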
Fault-Tolerant Resource Comparison of Qudit and Qubit Encodings for Diagonal Quadratic Operators
This paper compares the resource costs of implementing quadratic diagonal operators using qudit encodings (d-level quantum systems) versus qubit encodings for fault-tolerant quantum simulations of lattice field theories. The authors analyze non-Clifford gate costs and find that while qubits are generally more efficient asymptotically, qudits can provide constant-factor advantages in specific low-dimensional regimes.
Key Contributions
- Derived explicit finite-d break-even conditions for qudit vs qubit synthesis costs in fault-tolerant implementations
- Compared resource requirements for product-formula simulation and LCU/block encoding approaches using non-Clifford gate metrics
- Identified specific low-dimensional regions where qudit encodings can outperform qubit baselines for quantum field theory simulations
View Full Abstract
Finite local Hilbert-space truncations arise naturally in quantum simulations of lattice field theories and motivate qudit encodings, but their fault-tolerant advantage over qubit encodings remains unclear. We compare the non-Clifford cost of implementing quadratic diagonal evolutions, exemplified by $U=e^{-itφ_x^2}$ in a uniform field-amplitude discretization of a real scalar field, using either one logical $d$-level qudit or $n_b=\lceil \log_2 d\rceil$ logical qubits. We analyze two standard settings: product-formula simulation and LCU/block encoding, taking the resource metric to be the number of non-Clifford gates after synthesis into a discrete logical gate set. Because tight synthesis bounds for general single-qudit rotations are not known, we express the qudit constructions in terms of embedded two-level $SU(2)$ rotations and derive explicit finite-$d$ break-even conditions for their synthesis cost; these serve as compiler targets for when qudit encodings can outperform the qubit baseline. Within the constructive models studied here, product-formula implementations would require an exponentially stronger per-primitive synthesis advantage for qudits to win asymptotically, while in the LCU setting the qubit encoding is asymptotically cheaper in $d$. Nevertheless, the finite-$d$ threshold analysis identifies low dimensional regions in which qudits can yield meaningful constant-factor savings, particularly for LCU-based implementations. As a secondary analysis of the LCU construction, we use an idealized negligible-overhead qubit-qudit code-switching model to give an absolute $T$-count comparison, and reinterpret the savings as an allowable per-switch overhead budget.
Chip-to-chip entanglement distribution over 80-km multicore fiber link
This paper demonstrates the distribution of quantum entanglement between silicon photonic chips over an 80-kilometer fiber optic link, achieving secure quantum key distribution with high fidelity. The researchers successfully generated entangled photon pairs on-chip and transmitted them through a dual-core fiber while maintaining quantum coherence sufficient for cryptographic applications.
Key Contributions
- First demonstration of chip-to-chip path-encoded entanglement distribution over 80 km using silicon photonic circuits
- Achievement of 85.7% Bell state fidelity and 2.03 bit/s secure key rate using the BBM92 quantum key distribution protocol
- Integration of on-chip entangled photon pair generation via spontaneous four-wave mixing with long-distance fiber transmission
View Full Abstract
Long-range quantum entanglement is essential for building large-scale quantum networks and unconditionally secure cryptographic systems based on quantum key distribution (QKD). While photonic integrated circuits offer a highly scalable platform, the fragility of phase coherence between spatial modes has prevented the distribution of path-encoded entanglement over long distances. Here, we report chip-to-chip distribution of path-encoded entangled states over 80 km between fully integrated silicon photonic transmitter and receiver chips. Telecom-band entangled photon pairs are generated via spontaneous four-wave mixing in on-chip spiral waveguides and distributed between chips over a dual-core, actively stabilized fiber link. Upon distribution, we measure a Bell state fidelity of $85.7 \pm 0.2 \%$. Implementing the BBM92 protocol with the same source, we obtain a secure key rate of 2.03 bit/s in the infinite-key regime. These results establish silicon photonic chips as a viable platform for long-distance path-encoded entanglement-based quantum key distribution, paving the way toward scalable, device-independent quantum networks.
Optical squeezing mediated by levitated oscillators at their quantum ground state
This paper demonstrates the generation of squeezed light (light with reduced quantum noise) by coupling an optical cavity field to two center-of-mass modes of a levitated nanoparticle cooled to near their quantum ground states. The researchers achieved a 2% reduction below vacuum noise in the 70-95 kHz band, establishing levitated optomechanics as a platform for multimode quantum interactions.
Key Contributions
- First demonstration of optical squeezing mediated by multiple mechanical oscillators in their quantum ground state
- Establishment of levitated optomechanics as a viable platform for multimode quantum interactions with squeezed light generation
View Full Abstract
We demonstrate optical squeezing below the shot-noise level generated through the interaction of an optical cavity field with two center-of-mass modes of a levitated nanoparticle, simultaneously cooled to occupation numbers well below unity. By analyzing the quadrature fluctuations of the cavity output through heterodyne detection, we resolve the full spectral covariance matrix of the optical field and map regions of sub-shot-noise squeezing as a function of detection phase and frequency. Operating in the resolved sideband and strong coupling regime where mechanical modes hybridize with the optical mode, we observe consistent squeezing in the band 70-95 kHz with a lowest variance of 0.98 (2$\%$ below vacuum fluctuations). We thus demonstrate optical squeezing mediated by multiple mechanical oscillators in their quantum ground state, bridging mechanical quantum control with non-classical light and establishing levitated optomechanics as a platform for multimode quantum interactions.
A Semantic Quantum Circuit Cache for Scalable and Distributed Quantum-Classical Workflows
This paper introduces a Quantum Circuit Cache system that detects when quantum circuits are semantically equivalent (perform the same operations despite different syntax) and reuses previously computed results to eliminate redundant calculations in hybrid quantum-classical workflows. The system combines ZX-calculus reduction with Weisfeiler-Leman graph hashing to identify equivalent circuits and demonstrates significant speedups in distributed computing environments and on real quantum hardware.
Key Contributions
- Development of a content-addressable quantum circuit cache using ZX-calculus reduction and Weisfeiler-Leman graph hashing for semantic equivalence detection
- Demonstration of significant performance improvements (up to 11.2x speedup on real QPU hardware) by eliminating redundant circuit computations in hybrid quantum-classical workflows
- Implementation of a scalable, backend-agnostic caching system that works across CPU, GPU, and QPU environments with both local and distributed deployment options
View Full Abstract
Hybrid quantum--classical workflows often execute large ensembles of circuits that differ syntactically but implement identical operations, leading to substantial redundant computation. To address this, we introduce the Quantum Circuit Cache, a content-addressable system that detects semantic equivalence and reuses previously computed results across executions, backends, and workflow stages. Our approach combines ZX-calculus reduction with isomorphism-invariant Weisfeiler--Leman graph hashing to generate deterministic circuit identifiers, enabling constant-time lookup in distributed caches supporting both lightweight LMDB and scalable Redis deployments. The system integrates transparently into hybrid HPC workflows and remains backend-agnostic across CPU, GPU, and QPU environments. We evaluate the system on MareNostrum 5 with two representative workloads: distributed wire cutting and Differential Evolution-based QAOA optimization. For wire cutting, caching eliminates up to 91.98% of redundant subcircuit simulations, yielding speedups up to 7.0 times on a single node and maintaining advantages at scale, with Redis-based caching achieving up to 1.6 times speedups under high parallelism. Validation on a 35-qubit superconducting QPU confirms these benefits, achieving an 11.2 times speedup on real hardware. In distributed QAOA optimization, equivalence-aware caching avoids up to 27.6% of circuit evaluations and consistently reduces execution cost without altering the optimization algorithm. In both cases, reuse grows with concurrency and circuit structure, highlighting redundancy as a major systems bottleneck and demonstrating the effectiveness of our Quantum Circuit Cache.
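A toy illustration of content-addressed semantic caching is shown below: two syntactically different circuits normalize to the same key, so the second execution becomes a cache hit. The crude gate-cancellation-and-sort canonicalization is only a stand-in for the ZX-calculus reduction and Weisfeiler-Leman hashing the actual system uses.

```python
import hashlib

def canonical_key(circuit):
    # Toy canonicalization: cancel adjacent self-inverse gates on the same qubits and
    # sort the remaining (here disjoint-qubit, hence commuting) gates. The real system
    # instead reduces the circuit with ZX-calculus and hashes the resulting graph.
    reduced = []
    for gate in circuit:
        if reduced and reduced[-1] == gate and gate[0] in {"h", "x", "z", "cx"}:
            reduced.pop()                              # e.g. H H = identity
        else:
            reduced.append(gate)
    reduced.sort(key=lambda g: (min(g[1]), g))
    return hashlib.sha256(repr(reduced).encode()).hexdigest()

# Two syntactically different programs that implement the same operation share a cache key.
c1 = [("h", (0,)), ("h", (0,)), ("cx", (0, 1)), ("rz", (2,), 0.3)]
c2 = [("rz", (2,), 0.3), ("cx", (0, 1))]
assert canonical_key(c1) == canonical_key(c2)

results_cache = {}                                     # content-addressed result store
results_cache[canonical_key(c1)] = {"counts": {"00": 512, "11": 512}}
print(results_cache.get(canonical_key(c2)))            # cache hit: reuse the previous execution
```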
Fast, powerful, low-noise optical pumping of an atomic vapor with semiconductor optical amplifiers
This paper compares three methods for generating pulsed optical pumping in rubidium atomic vapors used for magnetometers, finding that semiconductor optical amplifiers can provide superior performance with lower noise and higher power output.
Key Contributions
- Demonstration that semiconductor optical amplifiers introduce negligible additional noise compared to conventional pumping methods
- Achievement of environment-limited sensitivity of 80 fT/√Hz at 600 Hz, representing 1-2 orders of magnitude improvement over other pumping methods
View Full Abstract
We use a $^{87}\text{Rb}$ atomic vapor, suitable for an optically-pumped magnetometer (OPM) in Earth-field conditions, to study the noise properties of three strategies for generating pulsed optical pumping. We compare a frequency-modulated (FM) laser, amplitude modulation (AM) via an acousto-optic modulator (AOM), and amplitude modulation via a semiconductor optical amplifier (SOA). Pumping the ensemble to operate as a Bell-Bloom OPM, and with an equal degree of spin polarization, the three methods give nearly identical sensitivity, showing that the SOA, despite being an active device, can introduce negligible additional noise. Pumping the ensemble to operate as a free-induction-decay OPM, we observe longer unpumped coherence times with the SOA-AM method than with the FM method. Finally, using the higher power available from the SOA, we demonstrate an environment-limited sensitivity of $80\text{fT}/\sqrt{\text{Hz}}$ at $600\text{Hz}$ and $200\text{fT}/\sqrt{\text{Hz}}$ at $4\text{kHz}$, one to two orders of magnitude beyond what was achievable with the other pumping methods.
Delayed Choice Phenomena in the Projection Evolution Model
This paper proposes a new theoretical framework called the projection evolution model where time is treated as a quantum observable rather than just a parameter, allowing wave functions to be defined in both space and time. The authors use this approach to explain delayed-choice experiments in Mach-Zehnder interferometers through temporal overlap between photons and measurement devices.
Key Contributions
- Introduction of projection evolution model treating time as quantum observable
- New explanation of delayed-choice experiments via temporal wave function overlap
View Full Abstract
In the Schrödinger evolution of a quantum state, time enters as a real parameter representing the time coordinate. In a more consistent approach time should be defined as a quantum observable, with the evolution taking place in a four-dimensional spacetime. This is possible in the projection evolution model in which the wave function is defined in both space and time. This allows one to construct the time operator and to discuss the temporal structure of quantum processes. In this paper we discuss a photon travelling through a Mach-Zehnder interferometer, focusing the description on the temporal profile of the wave function. We show that in this approach the delayed-choice experiments can be explained by the temporal overlap of the photon and the devices in the interferometer.
Observation of Non-Markovian Evolution of Tripartite Quantum Steering
This paper experimentally demonstrates how quantum steering between three particles evolves in noisy environments, showing that quantum correlations can die and revive due to memory effects. The researchers observed unique asymmetric steering patterns that only occur in multipartite systems, not in simple two-particle systems.
Key Contributions
- First experimental observation of non-Markovian tripartite quantum steering evolution including death and revival processes
- Demonstration of asymmetric steering structures unique to multipartite systems that enable directional quantum information processing
View Full Abstract
The memory effects in open quantum systems can induce information backflow and revive quantum correlations, thereby providing a powerful way to protect and recover useful quantum resources in realistic noisy environments. However, such dynamics remains experimentally unexplored in multipartite quantum steering. Here we observe different non-Markovian evolution of tripartite quantum steering using Greenberger-Horne-Zeilinger-type mixed states, covering both death and revival processes. In particular, we experimentally demonstrate the more intricate asymmetric steering structure of tripartite quantum steering through different bipartitions, which do not arise in bipartite systems. Our results provide foundational insights into the hierarchical and directional structures in multipartite quantum steering, and highlight its potential as a useful resource for asymmetric quantum information processing.
Towards Quantum Optimised Malware Containment
This paper proposes using quantum algorithms (Quantum Amplitude Estimation and Grover's algorithm) to accelerate malware containment in computer networks by more efficiently solving the underlying network optimization problem: choosing which connections to disable to limit the spread of an infection.
Key Contributions
- Hybrid quantum approach combining QAE and Grover Minimum Finding for network influence minimization
- Quadratic speedup in both estimation (O(1/ε) vs O(1/ε²)) and optimization (O(√|E_C|) vs O(|E_C|)) components
View Full Abstract
The containment of malware in computing networks may be naturally formulated as a network influence minimisation problem, in which one seeks to limit the expected spread of an infection while balancing the operational cost of disabling network connections. Classical approaches often rely on Monte Carlo simulation of stochastic diffusion processes and greedy optimisation over candidate edge removals, resulting in significant computational overhead due to repeated influence evaluations. In this work, we propose a hybrid quantum approach which combines Quantum Amplitude Estimation (QAE) and Grover Minimum Finding (GMF) to provide quadratic improvements in both the estimation and optimisation components of the problem. Specifically, QAE replaces classical Monte Carlo simulation, reducing the sampling complexity of influence estimation from $O(1/\varepsilon^2)$ to $O(1/\varepsilon)$ for a target additive error $\varepsilon \ll 1$, while GMF reduces the number of candidate evaluations required to identify optimal edge removals from $O(|E_C|)$ to $O(\sqrt{|E_C|})$. We present a formal problem definition, describe the construction of the corresponding quantum oracles, and analyse the resulting complexity improvements under standard oracle assumptions. Preliminary experiments, including classical simulation of QAE and small-scale execution of Grover search on real quantum hardware, support the expected theoretical scaling. While practical implementation at scale requires fault-tolerant quantum devices, our results demonstrate that quantum algorithms offer a promising long-term direction for accelerating stochastic network optimisation problems such as malware containment.
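To see where the quadratic factors enter, the classical baseline can be sketched directly: estimating the expected spread to additive error $\varepsilon$ by Monte Carlo requires on the order of $1/\varepsilon^2$ diffusion runs per candidate edge removal, and a greedy search evaluates every candidate. The toy graph and transmission probability below are illustrative, not taken from the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]   # toy network (hypothetical)
p_transmit, seed = 0.4, 0

def spread(removed_edges):
    # One stochastic diffusion run: each surviving edge transmits independently with p_transmit.
    infected, frontier = {seed}, {seed}
    while frontier:
        nxt = set()
        for u, v in edges:
            if (u, v) in removed_edges:
                continue
            for a, b in ((u, v), (v, u)):
                if a in frontier and b not in infected and rng.random() < p_transmit:
                    nxt.add(b)
        infected |= nxt
        frontier = nxt
    return len(infected)

eps = 0.05
shots = math.ceil(1 / eps**2)             # classical Monte Carlo: O(1/eps^2) samples
influence = np.mean([spread({(1, 3)}) for _ in range(shots)])
print(f"{shots} runs -> estimated influence {influence:.2f}")
# QAE targets the same additive error with O(1/eps) oracle calls; Grover minimum finding
# then scans the |E_C| candidate edge removals in O(sqrt(|E_C|)) evaluations.
```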
Parameterized Quantum Circuits as Feature Maps: Representation Quality and Readout Effects in Multispectral Land-Cover Classification
This paper investigates using variational quantum circuits as feature extractors for classifying land-cover types from satellite imagery, finding that quantum-generated features work better when combined with classical decision-making algorithms rather than used in end-to-end quantum classifiers. The study systematically compares quantum and classical approaches on the EuroSAT dataset and reveals that the choice of readout method significantly affects quantum classifier performance.
Key Contributions
- Systematic evaluation of variational quantum classifiers for satellite image classification using controlled experimental protocols
- Demonstration that quantum feature maps perform better when combined with classical kernel-based decision mechanisms rather than linear readouts
- Analysis of qubit scaling effects showing saturation due to the mismatch between the exponentially growing Hilbert-space dimension and the linearly growing parameter count
View Full Abstract
We investigate variational quantum classifiers (VQCs) for land-cover classification from multispectral satellite imagery, adopting a feature-map perspective in which the quantum circuit defines a nonlinear data embedding while the readout determines how this representation is exploited. Using the EuroSAT-MS dataset, we perform a systematic one-vs-one evaluation across all class pairs under a controlled experimental protocol, comparing classical baselines (logistic regression, SVMs, neural networks) with VQCs employing both linear readout and quantum-kernel SVM strategies. Our results show that, while VQCs with linear readout do not outperform strong classical baselines such as RBF-SVM, the same trained quantum feature map can significantly improve performance when reused within a kernel-based decision framework. A qubit-count sweep further reveals saturation effects consistent with the mismatch between exponential Hilbert space dimension and linear parameter scaling. Overall, our findings highlight that the effectiveness of quantum models depends critically on the interplay between representation and readout, and that meaningful gains may arise from combining learned quantum feature maps with classical decision mechanisms rather than seeking direct replacement of classical models.
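The "reuse the feature map as a kernel" idea can be sketched with a statevector simulation: kernel entries $K(x,x') = |\langle \phi(x')|\phi(x)\rangle|^2$ are precomputed and handed to a classical SVM. The plain angle-encoding map and toy data below are placeholders; the paper's circuits additionally contain entangling layers and trained parameters, and operate on EuroSAT-MS imagery.

```python
import numpy as np
from functools import reduce
from sklearn.svm import SVC

def feature_state(x):
    # Plain angle-encoding feature map |phi(x)> = (RY(x_1) ⊗ ... ⊗ RY(x_n)) |0...0>.
    qubits = [np.array([np.cos(xi / 2), np.sin(xi / 2)]) for xi in x]
    return reduce(np.kron, qubits)

def quantum_kernel(XA, XB):
    # Kernel entry K(x, x') = |<phi(x')|phi(x)>|^2, evaluated by statevector overlap.
    SA = np.array([feature_state(x) for x in XA])
    SB = np.array([feature_state(x) for x in XB])
    return np.abs(SA @ SB.T) ** 2

rng = np.random.default_rng(4)
X = rng.uniform(0, np.pi, size=(60, 4))          # toy 4-band "pixels" (not EuroSAT data)
y = (X.sum(axis=1) > 2 * np.pi).astype(int)      # toy binary land-cover labels

clf = SVC(kernel="precomputed").fit(quantum_kernel(X[:40], X[:40]), y[:40])
print("toy accuracy:", clf.score(quantum_kernel(X[40:], X[:40]), y[40:]))
```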
Hardware-Efficient Hamiltonian Simulation via Trotter-Initialized Variational Optimization with Native Placement
This paper develops a new method for compiling quantum simulations of physical systems (like spin models) into efficient circuits for current noisy quantum computers. The approach uses structure-aware compilation that leverages the mathematical properties of Hamiltonian evolution rather than treating it as a generic quantum operation, achieving much shorter circuits with high fidelity.
Key Contributions
- Structure-aware compilation framework that preserves Hamiltonian structure during circuit synthesis
- Demonstration that shorter approximate circuits can outperform longer exact circuits on NISQ hardware
- Native placement algorithm for mapping Hamiltonian terms to hardware topology with greedy Trotter block selection
View Full Abstract
Compiling time-evolution operators of the form $U(t)=e^{-iHt}$ into hardware-native gate sequences is a central bottleneck for digital quantum simulation on noisy intermediate-scale quantum (NISQ) devices. Generic transpilation treats $U(t)$ as an arbitrary unitary, discarding the structure of Hamiltonian dynamics and producing circuits whose depth exceeds hardware coherence limits. We introduce a structure-aware compilation framework that treats product-formula decompositions as synthesis primitives rather than simulation approximations. The method combines (i) native placement of Hamiltonian terms onto the hardware coupling map, (ii) adaptive selection of Trotter blocks via a greedy discretization procedure, and (iii) variational refinement using a Trotter-initialized ansatz. Across Heisenberg, Ising, and XY models with $n=3$--$8$ qubits, the compiled circuits achieve fidelities $F>0.996$ with approximately linear scaling in the number of entangling gates, while generic synthesis produces circuits that are orders of magnitude deeper. On IBM Torino hardware, we observe a regime in which shorter approximate circuits outperform deeper exact decompositions: a 27-CX circuit achieves higher hardware fidelity ($F_{\mathrm{hw}}=0.987$) than a 187-CX exact circuit. These results demonstrate that, in the NISQ regime, structure-aware approximate compilation can outperform exact structure-agnostic synthesis, providing a practical pathway for executing Hamiltonian dynamics without requiring pulse-level control.
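The underlying trade-off, a product-formula approximation versus the exact evolution, can be checked numerically on a small Heisenberg chain; the parameters below are generic and this is only a sketch, not the paper's compiler or placement algorithm.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
I2 = np.eye(2)

def two_site(op, i, n):
    # op ⊗ op acting on sites (i, i+1) of an n-qubit chain
    mats = [I2] * n
    mats[i], mats[i + 1] = op, op
    return reduce(np.kron, mats)

n, t, steps = 4, 1.0, 4
terms = [two_site(P, i, n) for i in range(n - 1) for P in (X, Y, Z)]   # Heisenberg couplings
H = sum(terms)

U_exact = expm(-1j * t * H)
dt = t / steps
U_trotter = np.linalg.matrix_power(
    reduce(np.matmul, [expm(-1j * dt * T) for T in terms]), steps)     # first-order product formula

# Process-level fidelity proxy: normalized |Tr(U_exact^† U_trotter)|
F = abs(np.trace(U_exact.conj().T @ U_trotter)) / 2**n
print(f"{steps} Trotter steps: |Tr(U†V)|/2^n = {F:.4f}")
```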
Nonclassical traits in multi-copy state discrimination
This paper studies quantum state discrimination when multiple copies of quantum states are available, comparing different measurement strategies and finding that certain theoretical frameworks can outperform quantum mechanics in distinguishing between states. The work identifies cases where nonlocal correlations can be achieved without quantum entanglement.
Key Contributions
- Demonstration of a qubit strategy that outperforms all classical bit strategies in multi-copy state discrimination
- Discovery of nonlocality without entanglement in state discrimination tasks
- Establishment of general bounds for bit-like operational theories in discrimination protocols
View Full Abstract
Quantum state discrimination is a fundamental information processing task that serves as a building block for numerous applications and has implications at the foundational level. In this work, we consider minimum error discrimination of multi-copy states, where instead of preparing a single system we assume that multiple instances of the same state are prepared. The discrimination then allows measurements by multiple parties, with strategies ranging from a global measurement to ones restricted to different forms of local operations and classical communication. By comparing the average success probabilities in quantum and classical cases, we find a qubit strategy that outperforms all the bit strategies. However, we find that there are other bit-like operational theories which can outperform the best qubit strategies even with a classical measurement strategy, and we are able to identify instances of different theories where different measurement strategies are optimal. In this way, we are able to find instances of nonlocality without entanglement as well as provide general bounds for bit-like operational theories.
Least constraint approach to non-relativistic quantum mechanics
This paper develops a new mathematical formulation of quantum mechanics based on a variational principle inspired by classical mechanics, where quantum evolution is described by minimizing a 'constraint functional' that treats the quantum potential as an intrinsic constraint on particle motion. The approach provides an alternative framework that may be particularly useful for handling geometric constraints and dissipative forces in quantum systems.
Key Contributions
- Novel variational principle formulation of quantum mechanics using least constraint approach
- Unified treatment of geometric constraints and velocity-dependent dissipative forces in quantum systems
- Instantaneous differential characterization of quantum evolution equivalent to Schrödinger equation
View Full Abstract
We formulate a variational principle for non-relativistic quantum mechanics inspired by Gauss's principle of least constraint. We define a quantum constraint functional as the probability-weighted square deviation between the actual motion and the unconstrained motion that would arise from external forces alone. In this functional, the quantum potential plays the role of an intrinsic constraint that modifies the acceleration. Minimizing this quantum constraint functional with respect to the acceleration field yields the quantum Euler equations, which together with the continuity equation are equivalent to the Schrödinger equation. The principle is instantaneous and provides a differential characterization of quantum evolution. We demonstrate that this formulation is not a mere rewriting of existing dynamics: it provides a unified and technically economical treatment of geometric constraints and velocity-dependent dissipative forces, neither of which admits a straightforward global variational formulation. Potential applications to a broad range of quantum phenomena are also indicated.
The temperature dependent geometric phase
This paper develops a theoretical framework for understanding how temperature affects geometric phases in quantum systems when there is adiabatic evolution between a system and its environment. The authors use the Born-Oppenheimer approximation to introduce temperature dependence and demonstrate their theory with the H2+ ion system.
Key Contributions
- Development of temperature-dependent geometric phase theory using Born-Oppenheimer approximation
- Derivation of temperature-dependent effective potential from Abelian gauge fields
- Demonstration of the theoretical framework using the H2+ ion system
View Full Abstract
A quantum state acquires a geometric phase during the adiabatic evolution of the system. If the adiabatic procedure involves the system and the environment interacting with it, in a manner similar to the Born-Oppenheimer (BO) approximation, we can introduce a temperature for the environment, which can be regarded as being in an equilibrium state. A temperature-dependent geometric phase is then obtained for the system, originating from the Abelian gauge potential induced by the BO approximation. This gauge potential contributes to the effective potential of the system, which is temperature dependent as well. Finally, we demonstrate these results using the example of the H$_2^+$ ion system.
Emergence of $π$ from Equatorial Quantum Localization
This paper shows how the mathematical constant π emerges naturally from quantum mechanical systems where particles become localized near the equator of a sphere, connecting the famous Wallis product formula for π to fundamental quantum behavior on spherical surfaces.
Key Contributions
- Demonstrates how π emerges from equatorial quantum localization using spherical harmonics
- Connects Wallis product formula to quantum mechanical observables through geometric rigidity index
- Shows semiclassical correspondence between quantum probability distributions and classical geometric structures
View Full Abstract
We present a genuinely non-radial quantum-mechanical route by which $π$ emerges from equatorial localization on the sphere. For the highest-weight branch of spherical harmonics, this localization is captured by a natural geometric rigidity index, whose exact finite-quantum-number value is a Wallis partial product. The mechanism is realized in two settings: the standard rigid rotor and the surface sector of a thin spherical shell, where radial freezing reduces the dynamics to the same angular problem. In the large-quantum-number limit, the probability cloud collapses toward the equator, the rigidity index approaches its classical value, and the Wallis formula is recovered through the correspondence principle. The result shows that Wallis-type structures in quantum mechanics can arise as exact signatures of semiclassical localization encoded by a simple geometric observable.
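For reference, the Wallis product whose partial products the rigidity index reproduces is $\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{2n}{2n-1}\cdot\frac{2n}{2n+1} = \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdots$, so truncating it at finite $n$ gives the finite-quantum-number values referred to in the abstract.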
Tikhonov-regularised projected gradient flow for equality-constrained bilinear quantum control
This paper develops a mathematically rigorous approach to quantum control optimization by introducing Tikhonov regularization to stabilize gradient-based algorithms used for designing quantum operations. The work addresses numerical instability issues in existing quantum control methods and validates the approach on Bell-state preparation.
Key Contributions
- Rigorous mathematical framework for stabilizing quantum control optimization algorithms using Tikhonov regularization
- Theoretical bounds on convergence rates and constraint drift with computable error estimates
- Validation on three-level quantum system for Bell-state preparation showing order-of-magnitude improvement in numerical stability
View Full Abstract
We study a projection-type gradient flow for equality-constrained maximisation of a smooth bilinear control objective on $\mathcal{H}=L^2(0,T;\mathbb{R})$, eliminating Lagrange multipliers through an $(M{+}1)\times(M{+}1)$ moving Gram matrix $Γ(s)_{\ell\ell'}=\int_0^T S(t)\,c_\ell(s,t)\,c_{\ell'}(s,t)\,\mathrm{d}t$. The flow generates monotonic ascent in continuous time but becomes unstable on discretisation; existing implementations rely on heuristic step-size safeguards lacking rigorous justification. We close this gap by replacing $Γ$ with $Γ_{\varepsilon}:=Γ+\varepsilon^{2}I$ and prove: (i) an exact spectral identity giving $κ(Γ_{\varepsilon})=(σ_{\max}^{2}+\varepsilon^{2})/(σ_{\min}^{2}+\varepsilon^{2})$; (ii) objective monotonicity $\mathrm{d}J/\mathrm{d}s\ge 0$ for all $\varepsilon\ge 0$; (iii) constraint drift $|h_{m}-C_{m}|=\mathcal{O}(\varepsilon^{2})$ with a computable prefactor; (iv) convergence of the regularised trajectory to the unregularised one in $L^{2}(0,T)$ at rate $\mathcal{O}(\varepsilon^{2})$ under uniform invertibility of $Γ$; and (v) a discrete CFL criterion $Δs\,G\,\|Γ_{\varepsilon}^{-1}\|\le α<2$ guaranteeing objective monotonicity of the forward-Euler scheme up to $\mathcal{O}(Δs^{2})$ local truncation error. The theory is validated on a three-level bilinear benchmark for all-optical Bell-state preparation, where $κ(Γ)\in[10^{9},10^{11}]$, the predicted $\varepsilon^{2}$ rate is confirmed over eight decades, and moderate regularisation eliminates step rejections and reduces constraint drift by more than an order of magnitude at unchanged final fidelity.
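The spectral identity quoted above is easy to verify numerically, with a random Gram matrix standing in for $Γ$ (this toy check is not the paper's benchmark):

```python
import numpy as np

rng = np.random.default_rng(5)
C = rng.normal(size=(8, 40))                  # toy moment functions sampled on a time grid
Gamma = C @ C.T                               # Gram matrix (symmetric positive semidefinite)

eps = 1e-3
Gamma_eps = Gamma + eps**2 * np.eye(8)

lam = np.linalg.eigvalsh(Gamma)               # eigenvalues of Gamma, i.e. sigma_k^2 in the paper's notation
kappa_formula = (lam.max() + eps**2) / (lam.min() + eps**2)
kappa_direct = np.linalg.cond(Gamma_eps)
print(kappa_formula, kappa_direct)            # the two agree: kappa(Gamma_eps) = (s_max^2+eps^2)/(s_min^2+eps^2)
```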
Probabilistic Condition, Decision and Path Coverage of Circuit-based Quantum Programs
This paper develops testing coverage criteria specifically for quantum programs, creating six new metrics adapted from classical software testing. The researchers built a tool called QaCoCo to evaluate these metrics on 540 quantum circuits and found that while basic coverage is often high, complex quantum behaviors like multi-controlled gates create testing challenges.
Key Contributions
- Introduction of six quantum-tailored coverage criteria (condition, decision, path coverage and their probabilistic variants) for testing quantum programs
- Development of QaCoCo tool for computing coverage metrics on circuit-based quantum programs
- Empirical evaluation on 540 quantum circuits revealing coverage challenges with multi-controlled gates and weak correlation between structural coverage and fault detection
View Full Abstract
Coverage criteria play a central role in assessing test adequacy in classical software, yet their effectiveness for quantum programs remains poorly understood and largely unexplored. In this paper, we propose six quantum-tailored criteria - condition, decision, and path coverage, and their probabilistic variants - adapted from their classical counterparts. We present QaCoCo, a tool that computes these criteria for circuit-based quantum programs. We empirically evaluate these criteria on a large and diverse set of 540 circuits and analyze the coverage achieved. Our results show that while circuits frequently achieve high condition and decision coverage (97.56% and 97.63%, on average), path coverage remains limited (71.84%), particularly in the presence of multi-controlled gates, which induce extreme path explosion and coverage imbalance. Moreover, to account for the probabilistic nature of quantum circuits, we introduce probabilistic coverage, which augments structural coverage with a confidence measure (88.87%, 88.65%, and 37.18% for condition, decision, and path coverage, respectively, on average). Finally, through mutation testing, we find weak or no correlation between fault detection and structural coverage, consistent with observations in classical computing.
All pure entangled states can lead to fully nonlocal correlations
This paper investigates quantum nonlocality and proves that non-maximally entangled states can exhibit 'full nonlocality' - the strongest form of quantum correlations that cannot be reproduced by any classical model. The authors show that all pure entangled states can be activated to display full nonlocality when multiple copies are used together.
Key Contributions
- Established connection between full nonlocality and quantum state antidistinguishability
- Proved that non-maximally entangled states can exhibit full nonlocality in bipartite systems
- Derived sufficient conditions for full nonlocality based on Schmidt coefficients
- Showed all pure entangled states can be activated for full nonlocality in multi-copy scenarios
View Full Abstract
It is a well-established fact that some quantum correlations can be nonlocal, meaning that they cannot be described by a local hidden variable model. Certain quantum correlations have a form of nonlocality so strong that they cannot be reproduced even by models having an arbitrarily small local hidden variable component. These correlations are called fully nonlocal and lead to Bell inequalities in which the maximum quantum value saturates the non-signaling bound. A well-known example of this effect, which is also referred to as quantum pseudo-telepathy or all-versus-nothing proofs of nonlocality, is the quantum distribution fulfilling the Peres-Mermin square, in which the underlying state is a $4\times4$ dimensional maximally entangled state. Other examples of full nonlocality are known but, so far, all of them are for maximally entangled states and it is an open question whether maximal entanglement is necessary for full nonlocality. In this work, we first establish a link between full nonlocality and the concept of antidistinguishability of quantum states. We use this connection to show that in every bipartite $d\times d$ Hilbert space, with $d\geq3$, there are non-maximally entangled states that are fully nonlocal. In fact, we derive simple sufficient conditions for full nonlocality that are only based on the smallest and largest Schmidt coefficients. We also show that in every dimension there exist pure entangled states that do not exhibit full nonlocality. Finally, we show that all pure entangled states can be activated to show full nonlocality in the many-copy scenario.
Addressable Rydberg excitation in arrays of single neutral atoms with a strongly focused flat-top beam
This paper develops a method to create laser beams with flat intensity profiles for precisely controlling individual neutral rubidium atoms in optical trap arrays. The researchers demonstrate improved spatial selectivity when addressing specific atoms for Rydberg excitation, which is important for quantum operations.
Key Contributions
- Development of flat-top beam synthesis using superposition of Hermite-Gaussian or Laguerre-Gaussian modes
- Demonstration of improved spatial selectivity for addressing individual atoms in neutral atom quantum computing platforms
View Full Abstract
We present a method for generating a laser beam with flat intensity and phase profiles in the focal region where the beam interacts with neutral $^{87}$Rb atoms in an array of optical dipole traps. We synthesize the beam as a superposition of Hermite--Gaussian or Laguerre--Gaussian modes. Then we give analytical expressions for the coefficients of such a superposition, an analysis of beam propagation along the $z$ axis in the vicinity of the waist, and several other related theoretical issues. Rydberg two-qubit dynamics driven by this flat-top profile are analyzed through numerical solutions of the Lindblad master equation using our in-house Julia package. Beam preparation is demonstrated on a neutral-atom experimental platform. Measurements reveal a difference in the visibility of Rabi oscillations for addressed atoms compared with neighboring ones, confirming the effective spatial selectivity provided by the flat-top beam profile.
Reservoir-mediated spin entanglement in the mean-force Gibbs state
This paper studies how two qubits can become entangled through their shared connection to a thermal reservoir, even without directly interacting with each other. The researchers derive analytic expressions for this reservoir-mediated entanglement and show that it is strongest at low temperatures, varies non-monotonically with the system-reservoir coupling strength, and can be enhanced by broadening the reservoir spectral density.
Key Contributions
- Derived analytical expressions for the mean-force Gibbs state describing reservoir-mediated two-qubit entanglement
- Characterized how entanglement varies non-monotonically with system-reservoir coupling strength and improves with broader reservoir spectral density
View Full Abstract
Two qubits strongly coupled to a common bosonic reservoir can become entangled with each other, despite having no direct interaction. In equilibrium, such coupling-induced coherences can be described by the mean-force Gibbs state. Here we derive approximate, analytic expressions for the two-qubit mean-force Gibbs state, and use these to characterize equilibrium qubit-qubit entanglement mediated by a thermal reservoir. Entanglement, which is highest at lowest temperatures, is a non-monotonic function of the system-reservoir coupling strength. Moreover, we find that broadening the reservoir spectral density beyond a single mode, as is realistic for typical baths, can enhance the qubit entanglement. Our results provide a comprehensive understanding of reservoir-mediated two-qubit entanglement in thermal equilibrium and provide a benchmark to compare with numerical methods, as well as demonstrating the utility of strong system-reservoir coupling as a resource.
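For context, the mean-force Gibbs state referred to above is the reduced state of the global thermal state, $\tau_{\mathrm{MF}} = \mathrm{Tr}_R[e^{-\beta(H_S+H_R+H_I)}]/\mathrm{Tr}_{SR}[e^{-\beta(H_S+H_R+H_I)}]$, which reduces to the ordinary Gibbs state $e^{-\beta H_S}/Z_S$ only at weak coupling; the coupling-induced coherences it retains are what mediate the two-qubit entanglement studied here.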
Random Number Generators in Advanced Optical Experiments: A Comparative Analysis of Semiclassical, Quantum, and Hybrid Architectures
This paper compares different optical methods for generating random numbers, including attenuated lasers and single-photon sources, and proposes a hybrid approach that combines both to achieve high-quality random number sequences at faster generation rates.
Key Contributions
- Comparative analysis of semiclassical, quantum, and hybrid optical random number generation architectures
- Novel hybrid architecture combining attenuated laser and heralded single-photon sources for enhanced rate and quality
View Full Abstract
Random number sequences (RNSs) play a vital role in various scientific and engineering applications. They are critical to the integrity of classical and quantum cryptography, the accuracy of mathematical modeling and Monte Carlo simulations, and the core mechanics of applications in fields as diverse as gambling and statistical sampling. While the primary criteria for RNS sources are their quality and generation rate, their integration into experimental designs is equally significant for many fundamental physical tests and applications. This work presents a comparative analysis of optical random number generation architectures, which can be seamlessly integrated into various advanced classical and quantum optical experimental schemes. In particular, we evaluate the trade-off between the high generation rate of an attenuated laser (a quasi-single-photon source) and the superior statistical quality of a heralded single-photon source operating at a much lower frequency. To overcome the limitations of each individual source, we propose and examine a novel hybrid architecture that utilizes their mixed radiation, enabling the generation of high-quality RNSs at an enhanced rate. Furthermore, we demonstrate that the raw sequences generated by such a source can not only match but, in some cases, even surpass the degree of randomness achieved by sequences processed through powerful randomness extractors.
Neural and Tensor Networks in the Study of Quantum Annealing Processors
This paper develops a comprehensive benchmarking framework for D-Wave quantum annealing processors, introducing a tensor-network heuristic called SpinGlassPEPS.jl for optimization problems and analyzing quantum annealers as thermal machines to understand their thermodynamic costs and performance trade-offs.
Key Contributions
- Development of SpinGlassPEPS.jl tensor-network heuristic for topology-aware optimization on Pegasus/Zephyr graphs
- Thermodynamic analysis framework treating quantum annealers as thermal machines to relate performance to physical costs
View Full Abstract
Quantum annealing targets low-energy solutions of Ising/QUBO problems, but reliable assessment requires more than best-energy comparisons. This dissertation develops a benchmarking framework for D-Wave quantum annealers that combines strong classical baselines, sampling and diversity metrics, and thermodynamic cost. Its first contribution, SpinGlassPEPS.jl, is a topology-aware tensor-network heuristic for optimization and sampling on Pegasus/Zephyr-like graphs. It maps Ising instances to local Potts clusters, represents the partition function with PEPS, and performs branch-and-bound search in probability space. Benchmarks show that it is a physically interpretable reference solver, though approximate contractions limit its competitiveness on the largest instances. The second contribution treats quantum annealers as effective thermal machines, relating success probability and solution quality to dissipation, entropy production, and effective temperature. Carefully placed pauses can improve performance while reducing thermodynamic cost, although longitudinal fields may become harmful in paused schedules. The thesis also introduces reinforcement-learning post-processing to improve returned samples and exact small-system simulations to probe annealing dynamics. Overall, it argues for quantum-annealing benchmarks that jointly measure algorithmic performance and physical expenditure.
Fundamental Physics, Existential Risks and Human Futures
This paper discusses 25 years of research into fundamental quantum physics questions, including the quantum reality problem, the relationship between quantum theory and gravity, and connections between consciousness and physical law. The author suggests this work points toward physics beyond current quantum theory that could transform information processing and the development of AI.
Key Contributions
- Exploration of physics beyond quantum theory with new evolution laws
- Investigation of quantum-consciousness connections with potential AI implications
View Full Abstract
Over the past 25 years, I have been involved in some intriguing developments in the foundations of physics, exploring the quantum reality problem, the relationship between quantum theory and gravity and the interplay between consciousness and physical laws. These investigations make it plausible that we will find physics beyond quantum theory, potentially including both new evolution laws and new types of measurement. There is also a significant chance they could have potentially transformative impact on information processing and on the development of and our future with AI.
Over forty years of research towards the understanding of Quantum Brownian Motion -- the contributions of A. O. Caldeira
This paper reviews Amir Caldeira's four decades of contributions to quantum Brownian motion theory, particularly focusing on the Caldeira-Leggett model that describes how quantum systems interact with dissipative environments. The work covers his research on quantum tunneling in the presence of dissipation and his influence on quantum decoherence and quantum thermodynamics.
Key Contributions
- Historical review of Caldeira-Leggett model for quantum Brownian motion
- Analysis of dissipation effects on quantum tunneling rates
- Overview of Caldeira's influence on quantum decoherence theory
- Summary of contributions to quantum thermodynamics
View Full Abstract
This article presents a brief account of Amir O. Caldeira's contributions to the theory of quantum Brownian motion. Motivated by its importance, we outline the description of Brownian motion in the quantum regime following Caldeira's first works. In this context, we particularly highlight the effect of dissipation on the tunneling rate out of a metastable state. We then journey along the alternative ways to approach quantum Brownian motion developed by Caldeira during his career, which go beyond the so-called Caldeira-Leggett model. We conclude by summarizing some of Caldeira's contributions to contemporary fields such as the theory of quantum decoherence and quantum thermodynamics, that were strongly inspired by his eponymous approach to quantum Brownian motion.
Multi-Objective Optimization by Quantum-Annealing-Inspired Algorithms
This paper compares quantum annealing approaches to classical methods for solving multi-objective optimization problems, finding that GPU-based quantum-annealing-inspired algorithms outperform both actual quantum processors and classical solvers when accounting for all processing overheads.
Key Contributions
- Comprehensive benchmarking that includes pre- and post-processing overheads for quantum vs classical optimization approaches
- Demonstration that GPU-based quantum-annealing-inspired algorithms can outperform both quantum processors and classical solvers by ~2 orders of magnitude in speed
View Full Abstract
Combinatorial optimization is widely regarded as a primary application for near-term quantum processors, although a definitive demonstration of the practical quantum advantage remains elusive. Recent studies have reported that both gate-based quantum circuits and quantum annealers can outperform state-of-the-art classical heuristics on multi-objective optimization (MO-MaxCut) problems. However, these studies did not fully account for the substantial pre- and post-processing overheads intrinsic to quantum solvers, leading to incomplete comparisons between quantum and classical approaches. In this work, we re-examine the same benchmark suite using GPU-based quantum-annealing-inspired algorithms (QAIAs), which, analogously to quantum processors, generate probabilistic samples and thus serve as formidable classical contenders. Our results show that QAIAs can sample candidate solutions approximately two orders of magnitude faster than previously studied quantum processors. In terms of end-to-end runtime, QAIAs also surpass industry-leading classical solvers, thereby establishing themselves as the superior performers among the quantum and classical solvers evaluated thus far for the MO-MaxCut instances.
The Fock-Darwin-Darboux system: eigenstates, information entropies and dispersion-like measures
This paper studies charged particles in magnetic fields and oscillator potentials on both flat and curved surfaces, calculating various information-theoretic measures like Shannon and Rényi entropies. The work extends the well-known Fock-Darwin system to curved Darboux surfaces and shows that curvature eliminates the infinite degeneracy found in flat-space Landau levels.
Key Contributions
- Analytical expressions for information-theoretic measures in Fock-Darwin systems with effective frequency modification
- Demonstration that curved Darboux surfaces eliminate infinite degeneracy of Landau levels
View Full Abstract
The Fock-Darwin (FD) quantum system describes the motion on the plane of a charged particle under the action of an isotropic oscillator potential together with a perpendicular constant magnetic field. When the isotropic oscillator is suppressed, the FD system leads to the Landau Hamiltonian with infinitely degenerate Landau levels. The Fock-Darwin-Darboux (FDD) system is the generalisation of the FD system to a particle moving on the Darboux III space, which is a conformally flat surface with non-constant negative curvature. We present a systematic study of some information-theoretic entropy and dispersion-like measures for these quantum systems. Since both systems are exactly solvable, analytical expressions for Shannon, Rényi and Tsallis entropies, among others, can be obtained. We show that for the FD system, its information-theoretic measures are formally the same as the ones for the harmonic oscillator, provided a modified effective frequency depending on the magnetic field is introduced. In the FDD case, the nonlinear nature of the underlying manifold precludes the existence of a simple closed form for the wave-function on momentum space, which is numerically analysed. We compare the numerical behaviour of the different entropy measures and we analyse the interplay arising in the FDD system between the curvature parameter and the magnetic field. In particular, it is shown that the Landau system on the Darboux III space has no infinitely degenerate Landau levels.
Solve Crude Oil Scheduling Problems by Using Quantum-Classical Hybrid Algorithms
This paper develops a hybrid quantum-classical algorithm to solve complex crude oil refinery scheduling problems by using quantum computing to handle the discrete optimization parts while classical computers handle continuous flow constraints. The approach combines Benders Decomposition with quantum optimization to achieve significant cost reductions and computational speedups compared to traditional methods.
Key Contributions
- Novel hybrid quantum-classical framework using Benders Decomposition for crude oil scheduling
- QUBO reformulation of discrete logistics problems for quantum solver implementation
- Demonstration of 73-80% cost reduction compared to traditional metaheuristics on industrial-scale problems
View Full Abstract
The optimization of front-end crude oil scheduling is a critical determinant of refinery profitability and operational stability. However, the coupling of discrete logistics events (e.g., vessel berthing) with continuous material flows (e.g., pipeline transfers) renders this problem an NP-hard Mixed-Integer Linear Programming (MILP) challenge, often intractable for classical solvers at industrial scales. This study proposes a novel hybrid quantum-classical framework to address these computational bottlenecks. We employ Benders Decomposition to decouple the monolithic model into a discrete Master Problem (MP) and a continuous Subproblem (SP). To exploit the search capabilities of quantum computing, the MP is reformulated as a Quadratic Unconstrained Binary Optimization (QUBO) model and solved via a hybrid quantum solver, while the SP enforces mass balance and quality constraints through iterative optimality and feasibility cuts. Extensive experiments on 15 multi-scale instances demonstrate that the proposed framework significantly outperforms traditional metaheuristics (e.g., Genetic Algorithms, Tabu Search), reducing total operating costs by approximately 73--80% and achieving computational speeds comparable to state-of-the-art commercial solvers (Gurobi). By effectively leveraging global optimality cuts, the method overcomes the tendency of heuristic approaches to become trapped in local optima, providing a robust and scalable solution for complex refinery logistics.
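To illustrate the QUBO reformulation step for the discrete master problem, here is a minimal sketch that turns a toy vessel-to-berthing-slot assignment into a QUBO with one-hot penalty terms and solves it by brute force as a stand-in for the hybrid quantum solver. The instance, costs, and penalty weight are illustrative; the Benders cuts and the continuous subproblem are not modelled.

```python
# Minimal sketch (not the paper's code): converting a small discrete master
# problem into a QUBO, as one would before handing it to a hybrid quantum solver.
import itertools
import numpy as np

n_vessels, n_slots = 2, 3          # toy instance: assign each vessel to one berthing slot
cost = np.array([[3.0, 1.0, 4.0],  # cost[v, s] = cost of vessel v using slot s
                 [2.0, 5.0, 1.0]])
P = 10.0                           # penalty weight enforcing "exactly one slot per vessel"

nvars = n_vessels * n_slots        # binary variable x[v, s], flattened to index v*n_slots + s
Q = np.zeros((nvars, nvars))

for v in range(n_vessels):
    idx = [v * n_slots + s for s in range(n_slots)]
    for s, i in enumerate(idx):
        Q[i, i] += cost[v, s]      # linear costs on the diagonal
    # penalty P * (sum_s x[v,s] - 1)^2 expanded into QUBO terms (constant P dropped)
    for i in idx:
        Q[i, i] += -P
    for i, j in itertools.combinations(idx, 2):
        Q[i, j] += 2 * P

def qubo_energy(x):
    return float(x @ Q @ x)

# Exhaustive search stands in for the hybrid quantum QUBO solver.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=nvars)),
           key=qubo_energy)
print(best.reshape(n_vessels, n_slots), qubo_energy(best))
```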
A Multi-Level Integrity Evaluation Framework for Quantum Circuits under Controlled Anomaly Injection
This paper develops a three-layer framework to validate the integrity of quantum circuits by combining structural, behavioral, and interaction-based metrics. The authors demonstrate through controlled testing that no single metric is sufficient to detect all types of circuit anomalies or modifications.
Key Contributions
- Development of a multi-layer integrity evaluation framework combining three complementary metrics (SIS, OIS, IGS)
- Demonstration that structural analysis alone is insufficient for circuit validation, with behavioral metrics detecting 93.85% of anomalies missed by structural analysis
View Full Abstract
Ensuring the integrity of quantum circuits is a significant challenge in the Noisy Intermediate-Scale Quantum (NISQ) era, where circuits are subject to compilation transformations, hardware constraints, and potential adversarial modifications. Existing validation approaches typically rely on either structural analysis or behavioral evaluation, leading to incomplete assessment of circuit correctness. In this work, we investigate the relationship between structural, interaction-level, and behavioral perspectives of circuit integrity, demonstrating that no single perspective is sufficient to guarantee circuit integrity; in particular, structural similarity alone does not ensure behavioral equivalence. To address this problem, we use a three-layer metric framework that combines the Structural Integrity Score (SIS), the Operational Integrity Score (OIS), and the Interaction Graph Semantic-Logical Score (IGS). SIS captures global structural properties, OIS quantifies behavioral divergence using Jensen-Shannon distance, and IGS models interaction patterns and dependencies in a pre-execution setting. Through controlled anomaly injection on benchmark quantum circuits, we demonstrate that each metric captures a different aspect of circuit deviation. In particular, structural blind-spot cases (SIS >= 0.95) reveal a clear limitation of structural analysis, where OIS detects anomalies in 93.85% of instances, while IGS detects 72.58%. These results highlight that the metrics provide complementary insights and that a single metric is insufficient for reliable circuit validation.
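As a concrete reading of how a behavioural score of this kind can be computed, the sketch below compares the output distributions of a reference and a test circuit using the Jensen-Shannon distance, in the spirit of the paper's Operational Integrity Score; the exact normalisation and sampling procedure used by the authors may differ, and the counts shown are invented.

```python
# Illustrative sketch only: scoring behavioural divergence between a reference
# circuit and a possibly modified one via the Jensen-Shannon distance over
# output distributions, in the spirit of the paper's OIS metric.
from collections import Counter
from scipy.spatial.distance import jensenshannon

def output_distribution(counts, n_qubits):
    """Normalise raw bitstring counts into a probability vector over all basis states."""
    total = sum(counts.values())
    return [counts.get(format(i, f"0{n_qubits}b"), 0) / total
            for i in range(2 ** n_qubits)]

def operational_integrity_score(counts_ref, counts_test, n_qubits):
    p = output_distribution(counts_ref, n_qubits)
    q = output_distribution(counts_test, n_qubits)
    # jensenshannon returns the JS distance (sqrt of divergence), in [0, 1] for base 2
    return 1.0 - jensenshannon(p, q, base=2)

# toy example: a Bell-state circuit vs. a tampered copy with a bit flip
ref  = Counter({"00": 498, "11": 502})
test = Counter({"01": 490, "10": 510})
print(operational_integrity_score(ref, test, n_qubits=2))  # close to 0 -> strong deviation
```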
Large-Scale Quantum Circuit Simulation on an Exascale System for QPU Benchmarking
This paper benchmarks a 98-qubit trapped-ion quantum processor using large-scale classical simulations on an exascale supercomputer to determine when quantum device outputs become unreliable due to noise. The researchers found the quantum processor performs coherently up to 93 qubits before outputs become indistinguishable from random sampling.
Key Contributions
- Demonstrated a methodology for benchmarking quantum processors using exascale classical simulation as validation
- Quantified the noise-tolerant operating regime of a 98-qubit trapped-ion QPU, identifying coherent performance up to 93 qubits
View Full Abstract
Recent advances in quantum computing have enabled the development of quantum processors with hundreds of qubits. However, noise continues to limit the amount of useful information that can be extracted from these systems, making it essential to identify the regime in which experimental outputs remain reliable. In this work, we benchmark Quantinuum Helios-1, a 98-qubit trapped-ion quantum processing unit, using the linear ramp quantum approximate optimization algorithm (LR-QAOA). To this end, we perform large-scale noiseless simulations on JUPITER, Europe's first exascale supercomputer, for circuits of up to 48 qubits and 3,384 two-qubit gates. These simulations, executed on 4,096 nodes equipped with 16,384 GH200 superchips and high-bandwidth CPU-GPU interconnects, provide a reference for validating experimental results at the edge of classical tractability. We find that, up to 48 qubits, Helios-1 remains in a noise-tolerant region, i.e., its samples cannot be clearly distinguished from those coming from a noiseless simulation. We then extend the analysis to larger system sizes using experimental data only, and apply a mean-of-means resampling procedure with a 3$σ$ threshold to determine whether the QPU output is statistically distinguishable from random sampling. This analysis identifies a regime of coherent performance up to 93 qubits (12,834 two-qubit gates), beyond which, at 95 qubits, the outputs become statistically indistinguishable from random sampling. These results demonstrate how exascale classical simulation can be used to validate quantum processors, and provide a quantitative boundary between noise-tolerant and random regimes in quantum processors.
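The statistical test described in the abstract can be pictured with a small resampling sketch: group the samples, compare means of group means, and flag distinguishability at three combined standard deviations. The cost values below are synthetic placeholders, not Helios-1 or JUPITER data, and the grouping parameters are illustrative.

```python
# Hedged sketch of a mean-of-means resampling test with a 3-sigma threshold:
# decide whether QPU samples are statistically distinguishable from random sampling.
import numpy as np

rng = np.random.default_rng(0)

def mean_of_means(samples, n_groups=50, group_size=100, rng=rng):
    """Resample means of random subgroups and return their mean and spread."""
    means = [rng.choice(samples, size=group_size, replace=True).mean()
             for _ in range(n_groups)]
    return np.mean(means), np.std(means)

# stand-ins: costs of bitstrings returned by the QPU vs. uniformly random bitstrings
qpu_costs    = rng.normal(loc=-35.0, scale=8.0, size=5000)
random_costs = rng.normal(loc=-30.0, scale=8.0, size=5000)

m_qpu, s_qpu = mean_of_means(qpu_costs)
m_rnd, s_rnd = mean_of_means(random_costs)

# distinguishable if the gap exceeds 3 combined standard deviations
gap = abs(m_qpu - m_rnd)
threshold = 3 * np.hypot(s_qpu, s_rnd)
print("distinguishable from random sampling:", gap > threshold)
```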
Quantum Gatekeeper: Multi-Factor Context-Bound Image Steganography with VQC Based Key Derivation on Quantum Hardware
This paper presents a quantum-enhanced image steganography system that uses variational quantum circuits to derive keys for hiding messages in images. The system requires multiple authentication factors (password, secret, context string, and reference image) to successfully extract hidden payloads, with quantum hardware used to evaluate the statistical behavior of the key derivation process.
Key Contributions
- Integration of variational quantum circuits for cryptographic key derivation in steganography applications
- Multi-factor context-bound authentication system that silently rejects incorrect extraction attempts
- Dual-region image layout design to resolve nonce bootstrapping dependencies in quantum-enhanced steganography
View Full Abstract
This paper presents Quantum Gatekeeper, a context-bound image steganography framework where successful payload recovery depends on both cryptographic decryption and the reconstruction of a precise extraction path. The system integrates lossless least significant bit (LSB) embedding with a deterministic variational quantum circuit (VQC)-derived gate key, multi-factor contextual binding, and authenticated encryption. Payload extraction is contingent upon four requisite factors: a password, a shared secret, a user-supplied context string, and a reference image signature. Any deviation in these factors causes the system to read from an incorrect pixel sequence or fail authentication, resulting in silent rejection rather than partial disclosure. The proposed method derives a gate-controlled extraction key from a seed-conditioned variational circuit, with parameters generated via cryptographic hash expansion and context-dependent image features. To ensure encode/decode consistency, the cryptographic key path is generated via exact statevector simulation; concurrently, IBM superconducting quantum hardware is utilized to evaluate the statistical behavior of the circuit family under physical noise. We introduce a dual-region image layout to resolve the nonce bootstrapping dependency, separating header recovery from payload recovery through independently derived keys. Experimental results confirm successful end-to-end message embedding and recovery on PNG images, demonstrating deterministic success under correct conditions and failure otherwise. The framework supports both text and image payloads; in the image-in-image configuration, a secret image is resized to a fixed resolution prior to embedding, enabling exact pixel-level recovery under correct contextual reconstruction.
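A stripped-down sketch of the multi-factor, context-bound idea: all factors are hashed into a seed that fixes the pixel traversal order for LSB embedding, so any wrong factor leads to reading an unrelated pixel sequence. The VQC-derived gate key and authenticated encryption of the actual system are replaced here by a plain hash expansion; names and parameters are illustrative.

```python
# Conceptual sketch only: mirrors the multi-factor, context-bound idea but
# substitutes a plain hash expansion for the paper's VQC-derived gate key.
import hashlib
import numpy as np

def derive_seed(password: str, secret: str, context: str, image_signature: bytes) -> int:
    material = b"|".join([password.encode(), secret.encode(),
                          context.encode(), image_signature])
    return int.from_bytes(hashlib.sha256(material).digest()[:8], "big")

def embed_lsb(pixels: np.ndarray, payload_bits: list, seed: int) -> np.ndarray:
    """Write payload bits into the least significant bits of a seed-defined pixel order."""
    flat = pixels.flatten().copy()
    order = np.random.default_rng(seed).permutation(flat.size)[:len(payload_bits)]
    flat[order] = (flat[order] & 0xFE) | np.array(payload_bits, dtype=flat.dtype)
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bits: int, seed: int) -> list:
    flat = pixels.flatten()
    order = np.random.default_rng(seed).permutation(flat.size)[:n_bits]
    return [int(b) for b in flat[order] & 1]

cover = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint8)
seed  = derive_seed("pw", "shared-secret", "context-2024", b"ref-image-hash")
bits  = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, bits, seed)
assert extract_lsb(stego, len(bits), seed) == bits      # correct factors recover the payload

# any wrong factor changes the seed, hence the pixel order (almost surely a garbled read)
wrong_seed = derive_seed("pw", "wrong-secret", "context-2024", b"ref-image-hash")
print("wrong factors recover payload:", extract_lsb(stego, len(bits), wrong_seed) == bits)
```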
Imaginarity-generating power of unitaries: A resource-theoretic approach
This paper develops a theoretical framework for quantifying how well quantum operations can generate 'imaginarity' (complex numbers) from real quantum states, treating imaginarity as a fundamental quantum resource. The authors derive mathematical expressions for this imaginarity-generating power and show that typical high-dimensional quantum operations are very effective at producing imaginarity.
Key Contributions
- Introduction of imaginarity-generating power (IGP) as a quantifiable measure within dynamical resource theories
- Exact mathematical expression for purity-constrained IGP in arbitrary dimensions
- Proof that IGP concentrates near maximum values for high-dimensional Haar-random unitaries
- Characterization of unitaries that maximize IGP and their corresponding bounds
View Full Abstract
Imaginarity, stemming from the complex structure of quantum mechanics, has recently emerged as a fundamental resource, yet its dynamical generation remains largely unexplored. In this work, we introduce the notion of imaginarity-generating power (IGP) of unitary dynamics, which quantifies the ability of unitary operations to produce imaginarity from initially real quantum states. To quantify imaginarity, we employ a measure based on the Hilbert--Schmidt norm, which we show to be monotone under real unital operations. Within the framework of dynamical resource theories, we derive an exact expression for the purity-constrained IGP in arbitrary dimensions and show that, for pure real input states, it depends solely on intrinsic and experimentally accessible properties of the unitary. We further analyze its average behavior over ensembles of states with varying purity under both uniform and Hilbert--Schmidt distributions. We prove that it satisfies the essential properties of a valid resource monotone within the dynamical resource theory of imaginarity. We also characterize the unitaries that maximize the IGP and determine the corresponding bounds. Moreover, for Haar-random unitaries, we show that the IGP concentrates near its maximal value in high dimensions with small fluctuations, indicating that typical high-dimensional quantum dynamics are highly effective at generating imaginarity.
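A rough numerical picture of imaginarity generation is easy to set up: measure the Hilbert-Schmidt norm of the imaginary part of the output density matrix when a unitary acts on random real pure states. The sketch below follows the general definitions in the abstract, not necessarily the paper's exact normalisation or its purity-constrained average.

```python
# Rough numerical sketch of imaginarity-generating power: how much
# Hilbert-Schmidt-norm imaginarity a unitary creates on real pure inputs.
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(0)

def hs_imaginarity(rho):
    """Hilbert-Schmidt (Frobenius) norm of the imaginary part of a density matrix."""
    return np.linalg.norm(rho.imag)

def random_real_pure_state(d, rng):
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def igp_estimate(U, n_samples=1000, rng=rng):
    """Monte-Carlo estimate of the imaginarity generated over random real pure inputs."""
    vals = []
    for _ in range(n_samples):
        psi = U @ random_real_pure_state(U.shape[0], rng)
        vals.append(hs_imaginarity(np.outer(psi, psi.conj())))
    return np.mean(vals), np.max(vals)

for d in (2, 4, 8, 16):
    U = unitary_group.rvs(d, random_state=rng)   # Haar-random unitary
    mean_igp, max_igp = igp_estimate(U)
    print(f"d={d}: mean imaginarity {mean_igp:.3f}, max over samples {max_igp:.3f}")
```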
Genuine tripartite entanglement in Bhabha scattering with an entangled spectator particle
This paper studies how quantum electrodynamics scattering between an electron and positron can create three-way quantum entanglement when one of the particles is initially entangled with a third spectator particle. The researchers analyze how this tripartite entanglement depends on scattering momentum and initial entanglement, and examine constraints on sharing quantum correlations among the three particles.
Key Contributions
- Demonstration that QED scattering can generate genuine tripartite entanglement in electron-positron systems
- Systematic characterization of tripartite entanglement using four canonical metrics and analysis of monogamy constraints
- Discovery that monogamy constraints are relaxed in non-relativistic regimes enabling enhanced quantum correlation shareability
View Full Abstract
From the perspective of quantum information science, we investigate tree-level Bhabha scattering between an incident electron $A$ and a positron $B$, where $B$ is initially entangled with a spectator electron $C$, which does not participate in the scattering interaction. We find that the quantum electrodynamics (QED) scattering between $A$ and $B$ can drive the global $ABC$ system into a genuine tripartite entangled (GTE) state. Using four canonical tripartite entanglement metrics, we systematically characterize and quantify the GTE of the composite system, and demonstrate that the scattering momentum of the $A$-$B$ pair and the initial $B$-$C$ entanglement are the key resources governing GTE generation. We further analyze the monogamy of quantum correlations, which imposes fundamental constraints on the shareability of quantum resources in multipartite systems. Specifically, we systematically study the monogamy relations for the squared entanglement of formation and squared quantum discord in our scattering model, and find that monogamy constraints are markedly relaxed in the non-relativistic regime, enabling enhanced shareability of quantum correlations across the three particles. This work uncovers novel quantum correlation properties of fundamental QED scattering processes, and provides direct theoretical guidance for the development of QED-based quantum information processing protocols.
Heralding probability optimization for nonclassical light generated by photon counting measurements on multimode Gaussian states
This paper develops mathematical methods to optimize the probability of successfully generating non-classical quantum states of light by measuring photon numbers in multimode squeezed light systems. The researchers formulate the optimization as a system of polynomial equations, making it more efficient to find optimal experimental configurations for quantum state preparation.
Key Contributions
- Formulated heralding probability optimization as a polynomial equation system for efficient solution finding
- Developed methodology that incorporates experimental constraints like quadrature squeezing bounds
- Extended approach to generation of both Fock state superpositions and squeezed superpositions with well-defined photon number parity
View Full Abstract
Generation of highly non-classical quantum states of light is essential for optical quantum information processing and quantum metrology. Given the lack of sufficiently strong nonlinear interactions between optical fields, the commonly employed optical quantum-state preparation schemes are conditional, based on nonlinearity induced by heralding photon number measurement on a part of a multimode squeezed Gaussian state. Development and optimization of such probabilistic quantum-state engineering schemes represents one of the central challenges in current quantum optics. As technology advances and experiments progress to detection of higher numbers of photons, the maximization of the heralding probability becomes essential to ensure sufficiently high state-preparation rates. Here, we show that for the conditional quantum state preparation schemes based on Gaussian states and photon number measurements the maximization of the heralding probability can be formulated as finding solution to a system of polynomial equations, which offers an efficient way to find the optimal configuration and allows us to apply techniques dedicated specifically to solving such systems of equations. Our approach can seamlessly incorporate bounds on the available single-mode quadrature squeezing, which is highly experimentally relevant. We mainly consider generation of finite superpositions of Fock states but show that the approach can be straightforwardly extended to generation of squeezed superpositions of Fock states. We focus on Gaussian states with vanishing coherent displacements, hence the conditionally generated states have well defined photon number parity. We illustrate our general methodology on examples of generation of single-mode and two-mode states with two heralding modes.
Testing a continuous-variable Bell-like inequality with a hybrid-encoded system
This paper demonstrates a violation of Bell-like inequalities using continuous-variable quantum systems by employing sequential measurements on single photons encoded in the Gottesman-Kitaev-Preskill code space, showing that such systems can exhibit non-classical behavior contrary to previous assumptions.
Key Contributions
- Demonstrated Bell inequality violation in continuous-variable systems using sequential measurements
- Experimental implementation using Gottesman-Kitaev-Preskill encoded single photons from quantum dots
- Addressed conceptual loopholes in previous continuous-variable noncontextuality studies
View Full Abstract
Continuous-variable quantum systems are promising candidates for quantum computing and quantum information processing. It is widely known that quadrature measurements on Gaussian continuous-variable systems can be described by a noncontextual hidden-variable model and cannot violate a Bell inequality. Here, we demonstrate that the observation fails when sequential measurements are involved. Our experiment is realized by mapping the spatial modes of a single photon, deterministically generated from an InAs/GaAs quantum emitter, to the logical operations in the Gottesman--Kitaev--Preskill code space. Employing a black-box-style approach, we observe a violation of the Bell-like noncontextual hidden-variable inequality by 380 standard deviations. Our results address the conceptual loopholes in previous works and open up new possibilities for studying fundamental quantum physics using photonic-encoded continuous-variable systems.
QCalEval: Benchmarking Vision-Language Models for Quantum Calibration Plot Understanding
This paper introduces QCalEval, a benchmark for evaluating how well vision-language models can interpret quantum calibration plots used in quantum computing experiments. The researchers tested various AI models on understanding experimental data visualizations from superconducting qubits and neutral atoms, finding that the best models reach mean scores of roughly 72-75.
Key Contributions
- First benchmark dataset for evaluating vision-language models on quantum calibration plot interpretation
- Comprehensive evaluation of multiple VLMs across quantum computing experimental scenarios with release of NVIDIA Ising Calibration 1 model
View Full Abstract
Quantum computing calibration depends on interpreting experimental data, and calibration plots provide the most universal human-readable representation for this task, yet no systematic evaluation exists of how well vision-language models (VLMs) interpret them. We introduce QCalEval, the first VLM benchmark for quantum calibration plots: 243 samples across 87 scenario types from 22 experiment families, spanning superconducting qubits and neutral atoms, evaluated on six question types in both zero-shot and in-context learning settings. The best general-purpose zero-shot model reaches a mean score of 72.3, and many open-weight models degrade under multi-image in-context learning, whereas frontier closed models improve substantially. A supervised fine-tuning ablation at the 9-billion-parameter scale shows that SFT improves zero-shot performance but cannot close the multimodal in-context learning gap. As a reference case study, we release NVIDIA Ising Calibration 1, an open-weight model based on Qwen3.5-35B-A3B that reaches 74.7 zero-shot average score.
Optimized thermal control of a dual-wavelength-resonant nonlinear cavity
This paper presents a new thermal control method for optical resonators using a bimetallic heat sink to create temperature gradients in nonlinear crystals. The technique enables precise control of dispersion to achieve simultaneous resonance of multiple wavelengths, improving the efficiency of nonlinear optical processes like second harmonic generation and squeezed light production.
Key Contributions
- Novel bimetallic heat sink design for thermal dispersion control in nonlinear optical resonators
- Method to achieve simultaneous multi-wavelength resonance while minimizing mechanical and thermal stress in crystals
View Full Abstract
Optical resonator-enhanced nonlinear interactions are of great importance for the efficient generation of continuous-wave second harmonic generation, optical parametric oscillation, frequency mixing, and the generation of squeezed light. In order to maximize these interactions within the intra-cavity nonlinear material, high intensities, optimal phase matching, and simultaneous resonance of all interacting fields are required. However, the dispersion of the optical resonator often prevents the co-resonance of multiple wavelengths. Here, we present a novel implementation using a monolithic bimetallic heat sink for controlling the resonator dispersion based on a shallow temperature gradient directly applied to a section of the nonlinear crystal. This method enables precise dispersion control and is designed to minimize mechanical and thermal stresses in the nonlinear crystal, thus providing an additional method for designing highly efficient and reliable resonator-enhanced nonlinear devices for demanding applications such as gravitational wave detection, quantum optics, and frequency conversion.
Quantum limit cycles with continuous symmetries from coherent parametric driving: exact solutions and many-body extensions
This paper introduces a new class of quantum systems that can maintain stable oscillating behavior (limit cycles) while preserving continuous symmetries, achieved through coherent parametric driving of multi-mode bosonic systems. The authors provide exact mathematical solutions for these systems and demonstrate they can exhibit quantum entanglement and reduced noise properties.
Key Contributions
- Exact solutions for quantum limit cycles with O(N) continuous symmetry under coherent parametric driving
- Demonstration of steady-state entanglement and reduced phase diffusion in these systems
- Unified theoretical framework for understanding symmetry-enriched limit cycles in bosonic systems
View Full Abstract
There is widespread interest in many-body quantum systems that exhibit limit-cycle or time-crystalline behaviour. An ideal quantum limit cycle would be realized using fully coherent driving (to minimize noise) and also have a continuous internal symmetry (to ensure generation of monochromatic radiation). While these two requirements may seem incompatible, we introduce in this work a large class of multi-mode bosonic limit cycle models based on coherent parametric driving which possess an O(N) continuous symmetry. Surprisingly, the full quantum dissipative steady state of these models can be found exactly. They exhibit rich physics, including steady state entanglement, reduced phase diffusion and the possibility of realizing quantum limit tori. The basic mechanism we identify provides a unified way to understand how coherent parametric driving can yield symmetry-enriched limit cycles, and also helps us understand related models where the relevant symmetries are weakly broken. The models we study are compatible with a range of different experimental platforms, including quantum optical setups and superconducting quantum circuits.
Quantum channels preserving sigma-additivity and Ulam measurable cardinals
This paper studies mathematical properties of quantum states on infinite-dimensional Hilbert spaces, focusing on states that have unusual measure-theoretic properties related to large cardinal numbers. The authors develop a representation theory for these states and construct quantum channels that can transform normal quantum states into these exotic singular states.
Key Contributions
- Proved representation theorem for σ-additive states on diagonal algebras as Pettis integrals over singular measures
- Constructed quantum channels using σ-complete ultrafilters that map normal states to singular σ-additive states
View Full Abstract
This paper investigates the interplay between the properties of quantum states on the Hilbert space $\ell_2(\kappa)$ and the set-theoretic nature of the cardinal $\kappa$. We focus on the existence of singular $\sigma$-additive states -- functionals whose induced measures are $\sigma$-additive yet vanish on singletons. While the existence of such states is known to be equivalent to the Ulam measurability of $\kappa$, their structural and dynamical properties remain largely unexplored. We prove that any $\sigma$-additive state on the diagonal algebra is representable as a Pettis integral over a singular $\sigma$-additive measure, extending the classical representation theory to the non-normal sector. Furthermore, we construct a class of quantum channels using $\sigma$-complete ultrafilters that map normal states to singular $\sigma$-additive states, effectively "archiving" information into the singular part of the state space.
A Quantum Spectral Framework for Solving PDEs
This paper presents a quantum algorithm for solving partial differential equations (PDEs) using quantum block encoding and Fourier transforms. The approach exploits structural properties in Fourier space to potentially overcome the computational limitations of classical methods when dealing with high-dimensional PDEs.
Key Contributions
- Novel quantum subroutine for second-order linear PDEs using Quantum Block Encoding
- Alternative to standard quantum matrix inversion by exploiting structural properties in Fourier space
- Framework for extending to quantum group Fourier transforms and equivariant quantum neural networks
View Full Abstract
Partial differential equations (PDEs) are fundamental across numerous scientific fields. As these problems scale to high dimensions, classical numerical schemes introduce severe computational bottlenecks, known as the curse of dimensionality. Attempts to solve this problem typically rely on either classical sparsity and low-rank decompositions, or neural network surrogate models. On the other hand, Quantum Computing offers a promising alternative, as it allows us to operate in significantly larger spaces while demanding far fewer resources. In this work, we present a quantum subroutine to solve second-order linear PDEs by exploiting the structural properties of the filter in Fourier space using Quantum Block Encoding (QBE) with quantum reversible arithmetic. This approach serves as a specialized alternative to standard quantum matrix inversion, which typically relies solely on Quantum Singular Value Transformation (QSVT) without exploiting the inherent structural properties of the matrix. We validate the proposed methodology against its classical counterpart to prove its correctness. This framework provides a foundation for extending these methods toward quantum group Fourier transforms, wavelet-based analysis, and equivariant quantum neural networks (EQNNs), offering a promising path toward solving broader classes of problems, including nonlinear PDEs.
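The Fourier-space "filter" picture has a familiar classical counterpart, which is also the kind of baseline the authors validate against. The sketch below solves a 1D periodic Poisson problem by applying a diagonal filter in Fourier space; the grid, right-hand side, and boundary conditions are illustrative and say nothing about the quantum block-encoding itself.

```python
# Classical reference sketch: solving u'' = f on a periodic 1D grid by
# multiplying with a diagonal "filter" in Fourier space.
import numpy as np

N = 256
L = 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
f = np.sin(3 * x)                            # right-hand side; exact solution is -sin(3x)/9

k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi   # angular wavenumbers
f_hat = np.fft.fft(f)

# diagonal filter in Fourier space: divide by -k^2, fixing the zero mode to zero
filt = np.zeros_like(k)
nonzero = k != 0
filt[nonzero] = -1.0 / k[nonzero] ** 2

u = np.fft.ifft(filt * f_hat).real
print(np.max(np.abs(u - (-np.sin(3 * x) / 9))))   # small residual vs. the exact solution
```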
A unified quantum random walk model for internal crystal effects in dynamical diffraction
This paper develops a quantum random walk model to better simulate how neutrons and X-rays diffract through imperfect crystals that have defects, temperature variations, and surface irregularities. The model provides a more comprehensive framework for designing precision crystal interferometers used in quantum experiments.
Key Contributions
- Unified quantum random walk model for dynamical diffraction that accounts for crystal imperfections
- Comprehensive framework for modeling internal crystal effects including temperature gradients and crystal miscuts
- Improved design methodology for next-generation neutron interferometers and optical components
View Full Abstract
The theory of dynamical diffraction (DD) in perfect crystals is the backbone of high-precision neutron and X-ray diffraction experiments, enabling accurate determination of crystal structure factors and the realization of perfect crystal interferometers. In practice, however, real crystals exhibit deformations and imperfections, including surface roughness, defects, temperature gradients, angled crystal faces, and curvature, that degrade interferometer performance and are difficult to model using conventional DD theory, particularly in complex geometries. To address these challenges, a quantum information (QI) model for DD has been under development, with demonstrated experimental agreement for both ideal crystals and in the presence of some imperfections such as surface roughness and defects. Here, we present a unified quantum random walk model that is now suitable for reproducing all established DD effects. We demonstrate this by incorporating a broad range of internal crystal effects influencing DD intensity distributions, including linear temperature gradients, the DD Talbot effect, and angled or miscut crystals. These results establish the QI model as a comprehensive and flexible framework for experimental analysis, as well as for the design of next-generation perfect crystal neutron interferometers and neutron optical components, such as condensing monochromators.
Pseudo-Hermiticity of the Nakajima-Zwanzig Projected Liouvillian in the Jaynes-Cummings Model
This paper explains why a mathematical operator in the Jaynes-Cummings model of quantum optics has an unexpectedly real spectrum despite being non-Hermitian. The authors show this occurs because the operator is pseudo-Hermitian, meaning there exists a special metric that makes it behave like a Hermitian operator.
Key Contributions
- Resolved long-standing anomaly of real spectrum in non-Hermitian Nakajima-Zwanzig projected Liouvillian by proving pseudo-Hermiticity
- Demonstrated existence of positive-definite metric that preserves spectral reality across different truncation limits and identified exceptional-point boundaries in extended Rabi model
View Full Abstract
The Nakajima-Zwanzig projected Liouvillian $QLQ$, the generator of the exact memory kernel in open quantum dynamics, is manifestly non-Hermitian yet has been reported to possess a purely real spectrum in the Jaynes-Cummings model -- an anomaly unexplained since its observation. We resolve this anomaly by showing that $QLQ$ is pseudo-Hermitian in the Mostafazadeh sense: a positive-definite metric $\eta > 0$ exists such that $(QLQ)^\dagger \eta = \eta \, (QLQ)$, forcing the spectrum to be real. The pseudo-Hermiticity is genuine: the $\Delta N = 0$ and $\Delta N = \pm 2$ sectors are individually non-Hermitian (residuals 1.70 and 5.06, respectively), yet the global spectrum is protected by $\eta$. The metric survives the bath-truncation limit ($N_{\max} = 3$--$20$, matrix dimension up to $1764 \times 1764$) with intertwining residual $< 10^{-11}$. A continuous deformation to the full Rabi model reveals a re-entrant pseudo-Hermitian phase with two exceptional-point boundaries, at which the metric condition number diverges. The result supplies a structural reason for Hardy-space analyticity of the memory kernel in this canonical quantum-optical model.
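The pseudo-Hermiticity condition is easy to check numerically once a candidate metric is in hand. The toy example below builds a matrix that is pseudo-Hermitian by construction (it is not the projected Liouvillian of the Jaynes-Cummings model) and verifies the intertwining residual, the non-Hermiticity, and the reality of the spectrum.

```python
# Small numerical illustration of the pseudo-Hermiticity condition
# A^dagger eta = eta A with eta > 0, on a toy matrix built to satisfy it.
import numpy as np

rng = np.random.default_rng(7)
d = 6

H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (H + H.conj().T) / 2                                   # Hermitian seed: real spectrum
S = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) # generic invertible similarity map

A   = np.linalg.inv(S) @ H @ S                             # manifestly non-Hermitian
eta = S.conj().T @ S                                       # positive-definite metric

residual = np.linalg.norm(A.conj().T @ eta - eta @ A)
eigs = np.linalg.eigvals(A)

print("intertwining residual:", residual)                  # ~1e-13: pseudo-Hermitian
print("non-Hermiticity:", np.linalg.norm(A - A.conj().T))  # order 1: A itself is not Hermitian
print("max |Im eigenvalue|:", np.max(np.abs(eigs.imag)))   # ~1e-13: spectrum is real
```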
Pulse Quality Optimisation in Quantum Optimal Control
This paper introduces GECKO, a method for improving quantum control pulses after achieving high fidelity by using mathematical techniques to optimize pulse properties like smoothness and robustness while maintaining the same quantum operation. The researchers demonstrate their approach on common quantum gates (CZ and CNOT) and show significant improvements in practical pulse characteristics.
Key Contributions
- Introduction of GECKO method using Riemannian geometry to optimize pulse quality while preserving fidelity
- Demonstration of improved control pulse properties including spectral filtering, smoothness, robustness, and duration for quantum gates
View Full Abstract
Quantum optimal control methods are widely used to design experimental control pulses, such as laser amplitudes, phases, or detunings, that implement a target unitary evolution. In practice, what makes a pulse "good" depends not only on its fidelity, but also on the experimental setting and the relevant hardware constraints. Here, we introduce geometric quantum control with kernel optimisation (GECKO), a model-agnostic method for improving control pulses after a high-fidelity solution has been found. GECKO uses the Riemannian geometry of the special unitary group to identify directions in pulse space that leave the implemented unitary unchanged to first order, allowing one to traverse level sets of the control landscape while optimising a chosen differentiable pulse-quality function. We demonstrate GECKO on a transverse-field Ising Hamiltonian implementing CZ and CNOT gates, optimising pulse properties including spectral filtering, smoothness, robustness to parameter deviations, and pulse duration. In all cases, GECKO finds substantially improved pulse solutions.
Beyond Single Trajectories: Optimal Control and Jordan-Lie Algebra in Hybrid Quantum Walks for Combinatorial Optimization
This paper introduces a hybrid quantum walk (HQW) algorithm that improves upon QAOA by coherently superposing multiple evolution paths within each circuit layer using a dynamical coin operator. The authors use optimal control theory and algebraic analysis to show that HQW has enhanced expressivity and demonstrates better performance on combinatorial optimization problems.
Key Contributions
- Development of hybrid quantum walk ansatz that generalizes QAOA by superposing multiple Hamiltonian-driven paths
- Derivation of optimal coin operator form using Pontryagin's minimum principle
- Algebraic proof that HQW generates larger Jordan-Lie algebra providing theoretical foundation for enhanced expressivity
- Demonstration of improved performance over QAOA on Max-Cut and Maximum Independent Set problems
View Full Abstract
The Quantum Approximate Optimization Algorithm (QAOA) follows a single, fixed evolution path, overlooking the potential computational advantage of coherently superposing multiple trajectories. Here we overcome this limitation with a hybrid quantum walk (HQW) ansatz that superposes multiple Hamiltonian-driven paths coherently within each circuit layer via a dynamical coin operator. QAOA emerges as a special case of this framework with a static Pauli-X coin. Using Pontryagin's minimum principle, we derive the optimal form of the coin operator, demonstrating that it generally differs from a constant gate. A dynamical Lie algebra (DLA) analysis reveals that HQW generates a strictly larger Jordan-Lie algebra, providing an algebraic foundation for its enhanced expressivity. In particular, we reveal the connection between the unique Jordan product negativity in HQW's DLA and its performance advantages. Numerical experiments on Max-Cut and Maximum Independent Set problems show that HQW systematically outperforms QAOA in convergence speed, solution accuracy, and robustness. Our work establishes a path-superposition paradigm for quantum optimization, combining optimal control theory with algebraic structure to guide the design of advanced quantum algorithms.
Quantum-Inspired Robust and Scalable SAR Object Classification
This paper investigates using tensor networks (quantum-inspired mathematical structures) to classify objects in synthetic aperture radar (SAR) images. The research focuses on making these classification models both robust against data poisoning attacks and efficient enough to run on edge devices like drones and military aircraft.
Key Contributions
- Evaluation of tensor networks for SAR object classification robustness against data poisoning
- Demonstration of tensor networks' ability to balance model efficiency and classification accuracy for edge device deployment
View Full Abstract
SAR image classification must contend with substantial noise and a high dynamic range, which particularly demands robust classification models. Additionally, the deployment of these models on edge devices, such as drones and military aircraft, requires a careful balance between model size and classification accuracy. This study explores the potential of tensor networks to meet these robustness requirements, specifically evaluating their resilience to data poisoning. Unlike previous works that concentrated on conventional neural networks for SAR object detection, this research focuses on the robustness and model reduction capabilities of tensor networks in object classification. Our findings indicate that tensor networks are adept at addressing both the challenges of robustness and the need for model efficiency, thereby contributing valuable insights to the ongoing discourse in radar applications and deep learning methodologies in general.
Numerically-Exact Quantum-Simulation Approach for Two-Dimensional Spectroscopy of Open Quantum Systems
This paper develops a quantum simulation method using bath-engineering techniques to accurately model two-dimensional spectroscopy experiments, which probe how molecules and quantum systems interact with their environments. The authors demonstrate their approach on molecular systems including a chiral detection system and a rhodium compound in chloroform.
Key Contributions
- Development of numerically-exact quantum simulation approach for 2D spectroscopy using bath-engineering technique
- Demonstration of the method on chiral enantiodetection and molecular systems with validation against experimental data
View Full Abstract
Two-dimensional spectroscopy (2DS) is a powerful ultrafast technique for probing electronic and vibrational dynamics in complex microscopic systems. Extracting detailed information on system dynamics and system-bath interactions from 2DS experiments requires precise theoretical simulations for comparison, which motivates the development of numerically-exact and computationally-efficient simulation approaches. Here, we propose a quantum-simulation approach for 2DS based on the bath-engineering technique (BET), which has been successfully employed in quantum simulations of open quantum dynamics. To demonstrate our approach, we first simulate the 2DS of a driven four-level system in chiral enantiodetection, where we also assess the applicability of the center-line slope (CLS) method for extracting time correlation functions (TCFs) from the 2DS. We further apply our approach to the 2DS of ${\rm Rh(CO)_2C_5H_7O_2}$ (RDC) dissolved in chloroform, where the results reproduce the main spectral patterns observed in experiments. Our work provides a numerically-exact and efficient framework for simulating 2DS, and can offer additional insight into the dynamics of open quantum systems.
Quantum sensing-enabled deuterium NMR spectroscopy with nanoscale sensitivity at low magnetic fields
This paper demonstrates a new quantum sensing technique that uses nitrogen vacancy (NV) centers in diamond to perform deuterium NMR spectroscopy at the nanoscale. The method achieves sensitivity improvements of 6-8 orders of magnitude over conventional NMR while operating at much lower magnetic fields, enabling molecular dynamics studies at unprecedented scales.
Key Contributions
- Demonstration of nanoscale deuterium NMR spectroscopy using NV centers in diamond
- Achievement of 6-8 orders of magnitude sensitivity enhancement over conventional inductive NMR detection
- Operation at magnetic fields two orders of magnitude lower than conventional NMR
- Temperature-dependent measurements revealing molecular dynamics and phase transitions at nanoscale
View Full Abstract
Nuclear magnetic resonance (NMR) spectroscopy provides unparalleled access to molecular structure and dynamics but is traditionally limited by weak signal strength, requiring large sample volumes and high magnetic fields. Here, we demonstrate nanoscale deuterium ($^2$H) NMR spectroscopy using nitrogen-vacancy (NV) centers in diamond, reproducing the characteristic quadrupolar powder line shapes that are present in the conventional bulk NMR spectra. By detecting statistical spin fluctuations from nanometer scale detection volumes, our approach delivers a sensitivity enhancement of six to eight orders of magnitude over inductive detection while operating at magnetic fields two orders of magnitude lower than those used in conventional NMR. Temperature-dependent measurements of a deuterated polymer and molecular solid reveal distinct motional averaging and phase transitions with nanoscale sensitivity. Powder-like NV-detected $^2$H NMR establishes a powerful tool for probing molecular dynamics on the nanoscale and, in the ultimate limit, at the single molecule level - capabilities beyond those of most existing spectroscopic techniques.
Ground-state energies of Ising models calculated using the samples from a quantum computer that simulates short-time evolution
This paper demonstrates the use of a quantum computer with up to 63 qubits to calculate ground-state energies of Ising models using the Cascaded Variational Quantum Eigensolver algorithm. The researchers study both homogeneous and random-coupling models on a heavy-hex lattice architecture and analyze the boundary of acceptable quantum errors.
Key Contributions
- Demonstration of CVQE algorithm with Guided-Sampling Ansatz on up to 63 qubits
- Analysis of quantum error boundaries as a function of qubit number and coupling strength
- Evidence that Ising models are well-suited for near-term quantum computing applications
View Full Abstract
We find the ground-state energy of the Ising model using the Cascaded Variational Quantum Eigensolver (CVQE) algorithm with the Guided-Sampling Ansatz (GSA) using up to 63 qubits on a quantum computer. We study a heavy-hex lattice to match the qubit architecture, allowing us to perform calculations in the quantum utility regime. We study both a homogeneous and random-coupling model. We locate the boundary of acceptable quantum errors as a function of the number of qubits and coupling strength. An entropic analysis is performed giving insights into the quantum computing performance. A subspace analysis is performed that suggests that the Ising model is especially suited for near-term quantum computing.
Polynomial Resource Classification of Quantum Circuit Families via Classical Shadows
This paper compares different measurement strategies for classifying quantum circuit families (IQP, Clifford, and Clifford+T) and finds that simple Z-basis measurements outperform more complex approaches like classical shadows. The study shows that, under a quadratic shot budget, classification becomes unreliable above roughly 12 qubits.
Key Contributions
- Demonstrates that Z-basis measurements outperform classical shadows for quantum circuit classification
- Provides theoretical framework showing circuits with high diagonal fraction concentrate correlator structure in the dominant basis
- Establishes fundamental limits on polynomial-resource classification of quantum circuit families
View Full Abstract
We compare four polynomial-resource measurement strategies, (I) $Z$-basis-only, (II) nearest-neighbor $ZZ$ (NN), (III) multi-basis ($Z$, $X$, $Y$), and (IV) classical shadows, for classifying three quantum circuit families: IQP, Clifford, and Clifford$+T$. We find $Z$-only measurements outperform multi-basis and classical shadows across all qubit counts and all four classifiers evaluated, and the $O(n)$-feature NN strategy (with $n$ the number of qubits) matches $Z$-only to within $0.02$ in Random Forest accuracy. The best result is a Random Forest accuracy of $0.91$ at 4--5 qubits under $Z$-only ($0.89$ for NN, $0.85$ for multi-basis, $0.67$ for shadows). All four strategies collapse to near-chance accuracy ($\approx 0.33$) above approximately 12 qubits under the quadratic shot budget of $16n^2$ shots. These findings indicate that the discriminative signal between these circuit families is concentrated in local, nearest-neighbor $Z$-basis correlations, consistent with the diagonal gate structure of IQP circuits, and that additional Pauli correlator types or long-range correlations carry no compensating discriminative power for this task. We provide a formal theoretical framework showing that circuits with high diagonal fraction in a given basis concentrate their correlator structure in that basis, and that any deviation from the dominant basis incurs a provably higher estimator variance. These results establish that a quadratic shot budget is insufficient for reliable classification above approximately 12 qubits, but do not rule out the existence of a subquadratic or otherwise more efficient polynomial-resource strategy; whether any polynomial measurement protocol can classify these families at large qubit counts remains an open question.
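For readers who want to see the shape of such a pipeline, the sketch below estimates nearest-neighbour $\langle Z_i Z_{i+1}\rangle$ correlators from measured bitstrings and feeds them to a Random Forest. The bitstring sampler is a synthetic placeholder with family-dependent correlation strength, not actual IQP/Clifford/Clifford+T circuit output, so it illustrates only the feature construction and the classifier interface.

```python
# Hedged sketch of a nearest-neighbour Z-basis feature pipeline: estimate
# <Z_i Z_{i+1}> from bitstring samples, then train a Random Forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_qubits = 6
shots = 16 * n_qubits ** 2                  # the quadratic shot budget 16 n^2

def nn_zz_features(bitstrings):
    """Estimate <Z_i Z_{i+1}> for each neighbouring pair from 0/1 samples."""
    z = 1 - 2 * bitstrings                  # map bit 0 -> +1, bit 1 -> -1
    return np.mean(z[:, :-1] * z[:, 1:], axis=0)

def synthetic_circuit_samples(family, rng):
    """Placeholder sampler: each 'family' gets a different neighbour-correlation strength."""
    bias = {"IQP": 0.8, "Clifford": 0.5, "CliffordT": 0.2}[family]
    cols = [rng.integers(0, 2, size=(shots, 1))]
    for _ in range(n_qubits - 1):
        copy_prev = rng.random((shots, 1)) < bias
        cols.append(np.where(copy_prev, cols[-1], rng.integers(0, 2, size=(shots, 1))))
    return np.hstack(cols)

X, y = [], []
for family in ("IQP", "Clifford", "CliffordT"):
    for _ in range(200):
        X.append(nn_zz_features(synthetic_circuit_samples(family, rng)))
        y.append(family)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```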
Universal Characterization of Classical Qubit Noise
This paper presents a new method to characterize noise affecting qubits using simple Ramsey interferometry measurements instead of complex pulse sequences. The technique can directly measure how noise correlates over time, which is crucial for understanding and mitigating errors in quantum systems.
Key Contributions
- Novel method for characterizing classical qubit noise using simple Ramsey interferometry instead of complex dynamical decoupling pulses
- Direct detection of arbitrary-order correlation functions of noise processes through correlation analysis of measurement outcomes
- Demonstration that the method is robust against qubit decoherence and measurement errors, making it universally applicable across quantum platforms
View Full Abstract
We propose a general method to fully characterize a classical stochastic noise process causing qubit dephasing through repetitive Ramsey interferometry measurements (RIMs) on the qubit. Compared to filter-function-based spectroscopy, our method does not require complicated dynamical decoupling pulses and can directly detect arbitrary-order correlation functions of such noise processes. We show that each RIM with a short evolution time and suitably chosen control pulses can perform a direct sampling of the noise field and that the $n$-point correlations of the RIM outcomes are proportional to the $n$-point correlation functions of the noise processes. Then we numerically demonstrate this method for characterizing two typical examples of classical noises, including the Ornstein-Uhlenbeck processes producing Gaussian noises and an ensemble of two-level fluctuators (TLFs) producing non-Gaussian noises. Our method is independent of qubit lifetime and robust against qubit decoherence and measurement errors, thus offering a universal and efficient protocol for qubit noise spectroscopy across diverse platforms.
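One way to picture the core claim (my reading of the abstract, not the authors' code) is a small simulation: each short Ramsey measurement yields a +/-1 outcome whose mean tracks the instantaneous noise, and two-point correlations of the outcomes reproduce the two-point correlation of the noise itself. The sketch below does this for Ornstein-Uhlenbeck noise with illustrative parameters.

```python
# Numerical sketch: short Ramsey measurements sample an OU noise field, and
# outcome correlations track the noise correlation function (up to t_ramsey^2).
import numpy as np

rng = np.random.default_rng(3)
n_traj, n_steps = 30000, 120
dt, tau_c, sigma = 0.05, 1.0, 0.5      # OU time step, correlation time, strength
t_ramsey = 0.5                         # short Ramsey evolution time (phase ~ b * t_ramsey)

# simulate Ornstein-Uhlenbeck noise trajectories b(t)
b = np.zeros((n_traj, n_steps))
for k in range(1, n_steps):
    b[:, k] = (b[:, k - 1] * (1 - dt / tau_c)
               + sigma * np.sqrt(2 * dt / tau_c) * rng.normal(size=n_traj))

# each RIM outcome is +/-1 with <m(t)> = sin(b(t) * t_ramsey) ~ b(t) * t_ramsey
p_plus = 0.5 * (1 + np.sin(b * t_ramsey))
m = np.where(rng.random(b.shape) < p_plus, 1, -1)

# two-point correlation of outcomes vs. two-point correlation of the noise itself
lag = 20                               # one correlation time
starts = np.arange(40, n_steps - lag)  # skip the initial transient
c_outcome = np.mean(m[:, starts] * m[:, starts + lag])
c_noise = np.mean(b[:, starts] * b[:, starts + lag])
print(c_outcome / t_ramsey**2, c_noise)   # approximately equal for small phases
```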
Quantum memory and scrambling from the perspective of a classical neural network
This paper extends the concept of quantum memory to time-dependent systems and compares it with out-of-time-ordered correlators (OTOC) for studying quantum information scrambling. The authors use neural networks to analyze these quantum phenomena in atomic spin chains and find that quantum memory shows more sensitivity to symmetry breaking than OTOC.
Key Contributions
- Extended quantum memory concept to time-dependent systems and demonstrated its application to realistic atomic spin chains
- Showed that quantum memory exhibits faster oscillations and higher sensitivity to symmetry breaking compared to OTOC, using neural network analysis
View Full Abstract
Entropic uncertainty relations are universal quantifiers of fundamental uncertainties of quantum measurements and are widely discussed in the quantum metrology literature. Quantum memory is a phenomenon related to the specific type of quantum correlations that allows for reducing fundamental uncertainties of quantum measurements. In the present work, the modified concept of quantum memory for time-dependent problems is proposed. We compare the time-dependent formulation of quantum memory with the out-of-time-ordered correlator (OTOC). Quantum memory is a rigorous mathematical concept that requires demanding calculations. Thus, until now, quantum memory has been discussed mainly for simple model systems and stationary problems. In the present work, we demonstrate that quantum memory can also be studied for realistic and physically relevant systems, e.g., the atomic helical spin chain, as well as the emergence and propagation of quantum correlations in time. We found that quantum memory manifests faster oscillations in time than OTOC and does not equilibrate. Furthermore, an artificial neural network is trained and asked to predict results for OTOC and quantum memory. These results show that quantum memory is more sensitive than OTOC in terms of broken inversion symmetry and the nonreciprocal effect.
Nanoscale Sensing of Solid-State Samples with High Frequency Resolution
This paper presents a quantum control protocol using nitrogen-vacancy (NV) centers for high-resolution nanoscale chemical analysis of solid-state samples. The method combines synchronized rotating magnetic fields with RF and microwave control to overcome environmental challenges and detect chemical shifts with nanometer precision.
Key Contributions
- Novel quantum control protocol for NV centers that mitigates solid-state environmental noise
- Analytical framework linking measured spectra to control parameters for nanoscale chemical characterization
View Full Abstract
To meet the growing demand for nanoscale surface analysis, nitrogen-vacancy (NV) centers offer a high-sensitivity alternative by leveraging their ability to operate in immediate proximity to the sample. In this work, we propose a quantum control protocol designed to overcome the inherent challenges of solid-state environments, specifically by mitigating anisotropy and strong dipole-dipole interactions to enable the detection of isotropic chemical shifts at the nanoscale. To achieve this, our scheme synchronizes a slowly rotating magnetic field with tailored RF decoupling and MW control of the NV sensors. We provide an analytical mapping that explicitly links the measured spectrum to the control sequence features and the underlying system parameters, enabling a straightforward characterization of the sample.
Efficient Complex-Valued State Preparation on Bucket Brigade QRAM
This paper improves quantum state preparation algorithms for loading classical data into quantum computers by optimizing the Bucket Brigade QRAM architecture. The work enables efficient encoding of complex-valued matrices into quantum states with logarithmic query complexity and extends previous methods to handle both real and complex data.
Key Contributions
- Eliminates the U2CR subroutine by precomputing rotation angles classically and storing them directly in QRAM cells
- Extends quantum state preparation to complex-valued matrices using a two-step magnitude-then-phase procedure with stored leaf phases
View Full Abstract
Efficient quantum state preparation is a critical component in quantum algorithms that process large classical data, and it is fundamental to realizing quantum advantage in domains such as machine learning, quantum linear algebra, and quantum finance. Building on the framework of [berti2025efficient], which integrates Bucket Brigade QRAM (BBQRAM) with a segment tree to achieve amplitude encoding in polylogarithmic query time, we present two improvements within the same architecture-aware framework. First, we remove the $U_{2\mathrm{CR}}$ subroutine by classically precomputing the rotation angles determined by the segment tree and storing these angles directly in the BBQRAM cells. The tradeoff is that the classically loaded QRAM stores precomputed fixed-point angles rather than raw subtree weights. Second, we extend the construction to complex-valued matrices $A \in \mathbb{C}^{M \times N}$ by storing a leaf phase alongside each precomputed rotation angle and using a two-step magnitude-then-phase procedure; the real signed case is naturally subsumed as a one-bit phase specialization. At unchanged $\mathcal{O}(\log_2^2(MN))$ BBQRAM query complexity, the QPU procedure reduces to BBQRAM retrievals and controlled-rotation cascades, with $\mathcal{O}(MN)$ memory cells per matrix and no reversible arithmetic on the QPU.
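The classical preprocessing described here, computing rotation angles from the segment tree of subtree weights plus leaf phases for the magnitude-then-phase step, can be sketched directly. The code below is a reconstruction for illustration, not the authors' implementation, and uses the standard amplitude-encoding angle convention.

```python
# Sketch of the classical preprocessing step (my reconstruction): build a
# segment tree of squared magnitudes over the target amplitudes and precompute
# the controlled-rotation angles and leaf phases to be stored in BBQRAM cells.
import numpy as np

def precompute_angles_and_phases(amplitudes):
    """Return per-level rotation angles and leaf phases for amplitude encoding."""
    a = np.asarray(amplitudes, dtype=complex)
    a = a / np.linalg.norm(a)
    n = int(np.log2(a.size))

    # segment tree of subtree weights: level 0 is the root, level n the leaves
    weights = [np.abs(a) ** 2]
    for _ in range(n):
        w = weights[0]
        weights.insert(0, w[0::2] + w[1::2])

    # rotation angle at each internal node: theta = 2*arccos(sqrt(w_left / w_parent))
    angles = []
    for level in range(n):
        parent, child = weights[level], weights[level + 1]
        ratio = np.divide(child[0::2], parent, out=np.zeros_like(parent), where=parent > 0)
        angles.append(2 * np.arccos(np.sqrt(np.clip(ratio, 0.0, 1.0))))

    phases = np.angle(a)     # leaf phases handle the complex (magnitude-then-phase) step
    return angles, phases

# toy complex-valued target state on 3 qubits
target = np.array([1, 1j, 0.5, 0, 0.5j, 1, 0, 0.25], dtype=complex)
angles, phases = precompute_angles_and_phases(target)
for level, a_lvl in enumerate(angles):
    print(f"level {level}: angles {np.round(a_lvl, 3)}")
print("leaf phases:", np.round(phases, 3))
```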
Continuous Reset-Induced Phase Transition in Measurement-Free Random Quantum Circuits
This paper studies quantum circuits that use only reset operations (no measurements) and discovers a continuous phase transition in quantum entanglement properties. The researchers find that this transition behaves differently than classical predictions, particularly showing that the dimension of the quantum system (number of levels per qudit) critically affects the transition properties.
Key Contributions
- Discovery of reset-induced continuous entanglement phase transitions in measurement-free quantum circuits
- Demonstration that qudit dimension critically affects phase transition properties, contradicting large-d classical statistical predictions
View Full Abstract
We study a random unitary quantum circuit with only reset channels, which is highly feasible on real quantum devices. In particular, we investigate its many-body statistical properties -- "reset-induced" entanglement phase transitions -- and compare them with the classical statistical picture in the large-$d$ limit of qudits. The qudit dimension $d$ is essential to the properties of the reset-induced phase transition; that is, the transition properties induced by the reset channel depend significantly on $d$. We numerically elucidate this statement employing efficient stabilizer circuit simulations for $d=2$. Specifically, large fluctuations are observed near the critical point, indicating that the reset-induced phase transition is continuous. We obtain clear data collapses, consistent with a second-order mixed phase transition. This behavior differs from expectations based on the classical statistical mapping in the large-$d$ limit.
Local tensor-train surrogates for quantum learning models
This paper develops a method to create fast classical approximations of quantum machine learning models using tensor-train decomposition and Taylor polynomials, enabling efficient inference without repeated quantum circuit evaluations. The approach provides provable accuracy guarantees while reducing computational complexity from exponential to polynomial scaling.
Key Contributions
- Framework for constructing tensor-train classical surrogates of quantum ML models with provable accuracy
- Polynomial scaling in parameter count versus naive exponential scaling while maintaining controlled approximation errors
- Three-part error analysis separating Taylor truncation, tensor-train approximation, and statistical estimation errors
View Full Abstract
A key bottleneck in quantum machine learning is the computational cost of repeated quantum circuit evaluations during the inference phase. To address this, we present a framework for constructing fast, cheap, provably accurate classical tensor-train surrogates of fully trained quantum machine learning models within local patches of their input data space. The approach combines Taylor polynomial approximation with a tensor-train (TT) representation and embeds it in a statistical learning paradigm via empirical risk minimization. In our analysis, the Taylor-TT construction serves as a deterministic error certificate proving that the TT hypothesis class contains a good approximation; empirical risk minimization then provably recovers a surrogate with controlled generalization error and explicit bounds. This translates into three independently controllable error sources: (i) Taylor truncation error controlled by the patch radius $r$ and polynomial degree $p$, (ii) TT approximation error controlled by the bond dimension $χ$, and (iii) statistical estimation error. While the parameter count scales polynomially in the number of data dimensions $N$, i.e., $d_{\mathrm{eff}} = N(p+1)χ^2$ rather than the naive $(p+1)^N$, the worst-case constants inherit an exponential factor through the tensor-product feature norm during Taylor polynomial embedding onto TT. This cleanly separates representation complexity from feature-induced constants. Our risk bounds and sample complexity depend explicitly on the local patch radius $r$.
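A quick back-of-the-envelope comparison of the parameter counts quoted in the abstract, $d_{\mathrm{eff}} = N(p+1)χ^2$ versus the naive $(p+1)^N$, shows why the TT surrogate stays tractable as the input dimension grows. The degree and bond dimension below are illustrative choices, not values from the paper.

```python
# Effective parameter count of the TT surrogate vs. a naive dense Taylor model.
for N in (4, 8, 16):               # number of input dimensions
    p, chi = 3, 8                  # polynomial degree and TT bond dimension (illustrative)
    d_tt = N * (p + 1) * chi ** 2
    d_dense = (p + 1) ** N
    print(f"N={N:2d}: TT params = {d_tt:6d}, dense Taylor params = {d_dense:,}")
```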
Robustness of fiber-optic attenuators to 1061-nm sub-nanosecond pulsed laser radiation in quantum key distribution systems
This paper experimentally tests how different types of fiber-optic attenuators used in quantum key distribution systems respond to attacks using high-power pulsed lasers at 1061 nm. The researchers found that some attenuators can be permanently damaged or have their performance degraded, potentially creating security vulnerabilities that could allow eavesdropping.
Key Contributions
- Experimental demonstration that pulsed laser attacks at 1061 nm can permanently damage or degrade fiber-optic attenuators in QKD systems
- Discovery that initial pulsed irradiation significantly lowers the damage threshold for subsequent continuous-wave attacks
- Identification of specific vulnerabilities in MEMS-based and absorption-element attenuators that could enable hidden eavesdropping channels
View Full Abstract
The security of quantum key distribution (QKD) systems relies on the physical integrity of their components. While laser-damage attacks (LDAs) using high-power continuous-wave (cw) lasers have been well studied, the threat posed by pulsed lasers at alternative wavelengths remains underestimated. Here, we experimentally investigated the stability of four types of fiber-optic attenuators under exposure to sub-picosecond pulses at 1061 nm with average power reaching 1 W. Mechanical variable attenuators with blocking elements and fixed air-gap attenuators show resistance to this attack. MEMS-based variable attenuators exhibit increased attenuation or irreversible damage that causes a permanent reduction in attenuation of approximately 3.8 dB. For fixed attenuators with an absorption element, we demonstrate that initial pulsed irradiation significantly lowers the optical damage threshold of the components compared to direct cw attacks. The attenuation reduction achieved is up to 7 dB at a 1 W cw laser at 1550 nm. These results highlight the possibility of establishing a hidden side-channel for eavesdropping attacks and underscore the insufficiency of existing countermeasures against sophisticated LDA scenarios.
Near-identical photons from distant quantum dot-cavity devices
This paper demonstrates the creation of nearly identical single photons from separate quantum dot-cavity devices located far apart from each other. The researchers achieved 88% indistinguishability between photons from distant sources, which is a crucial step for building large-scale quantum technologies that need many identical photons.
Key Contributions
- Achieved 88% two-photon indistinguishability between distant quantum dot-cavity sources
- Developed nanofabrication techniques for ultra-low spectral noise and wavelength dispersion
- Demonstrated precise wavelength matching using dual tuning mechanisms for scalable photonic quantum technologies
View Full Abstract
Scalable optical quantum technologies require interference between large numbers of indistinguishable single photons emitted by independent sources. Semiconductor quantum dots are known to be excellent on-demand sources of single photons. They show record efficiency when inserted into optical cavities to control their spontaneous emission and generate trains of near-identical photons over microsecond timescales. However, generating perfectly identical photons from distant cavity-based sources has remained a long-standing challenge. It requires precise matching of the emission wavelengths and emission dynamics, while simultaneously minimizing spectral noise across all time scales for distant emitters in uncorrelated environments. Here, we report on the nanofabrication of a large number of quantum dot-cavity sources with ultra-low spectral noise and wavelength dispersion. The high source efficiency and the use of two tuning mechanisms enable precise optimization of the spectral overlap between distant sources. With this approach, we demonstrate a two-photon indistinguishability of $88\pm1$ % between photons emitted from two distant sources. Remarkably, this value reaches the upper bound set by the intrinsic indistinguishability of photons emitted successively by each source. These results represent a key milestone for scaling photon-based quantum technologies.
One Coordinate at a Time: Convergence Guarantees for Rotosolve in Variational Quantum Algorithms
This paper provides the first rigorous mathematical proof that Rotosolve, a popular optimization algorithm used to train quantum circuits, actually converges to good solutions. The authors establish formal convergence guarantees and rates for this previously heuristic method, comparing it against other optimization approaches in quantum machine learning applications.
Key Contributions
- First rigorous convergence analysis and guarantees for the Rotosolve optimization algorithm in variational quantum algorithms
- Derivation of explicit worst-case convergence rates in finite quantum measurement regimes and comparison with randomized coordinate descent methods
View Full Abstract
In this paper, we resolve an open question in the field of optimization algorithms for training parametrized quantum circuits: Does the popular Rotosolve algorithm converge? Until now, interpolation-based coordinate descent methods such as Rotosolve have mostly been treated as heuristics, lacking any formal convergence guarantees. We rigorously analyze Rotosolve, and show that it converges to $\varepsilon$-stationary points if the optimization landscape is non-convex and smooth; and to $\varepsilon$-suboptimal points if the objective function additionally obeys the Polyak-Lojasiewicz (PL) condition. Further, we derive explicit worst-case rates of convergence in the finite quantum measurement regime. These rates are contrasted against those from a similar coordinate-based method: Randomized Coordinate Descent (RCD). Although in the worst case their rates are, prima facie, equivalent, we present arguments for a more nuanced comparison between the two. We highlight that Rotosolve is hyperparameter-free, and implicitly uses first and second derivatives in its updates. Finally, we supplement our theoretical findings with numerical experiments from Quantum Machine Learning; and compare the performance of Rotosolve against RCD, Stochastic Gradient Descent, Simultaneous Perturbation Stochastic Approximation, and Randomized Stochastic Gradient Free methods.
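For readers unfamiliar with the algorithm being analyzed, the sketch below shows the standard Rotosolve coordinate update: each angle is set to the closed-form minimizer of the cost restricted to that coordinate, obtained from three evaluations. The toy cost function is a stand-in for a hardware-estimated expectation value and is not from the paper.

```python
import numpy as np

def rotosolve_sweep(cost, theta):
    """One Rotosolve sweep: update each parameter to the exact minimizer of the cost
    restricted to that coordinate, assuming the usual sinusoidal form A*sin(t+B)+C
    per coordinate. `cost` maps a parameter vector to a scalar."""
    theta = np.array(theta, dtype=float)
    for d in range(len(theta)):
        shifted = theta.copy()
        f0 = cost(shifted)
        shifted[d] = theta[d] + np.pi / 2; f_plus = cost(shifted)
        shifted[d] = theta[d] - np.pi / 2; f_minus = cost(shifted)
        theta[d] = theta[d] - np.pi / 2 - np.arctan2(
            2 * f0 - f_plus - f_minus, f_plus - f_minus)
        theta[d] = (theta[d] + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return theta

# Toy separable landscape with the assumed per-coordinate sinusoidal structure,
# so a single sweep already reaches the global minimum of -1.5.
cost = lambda t: np.sin(t[0] + 0.3) + 0.5 * np.sin(t[1] - 1.1)
theta = rotosolve_sweep(cost, [0.0, 0.0])
print(theta, cost(theta))
```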
Optimizing ground state preparation protocols with autoresearch
This paper develops an automated approach using AI coding agents to optimize quantum ground-state preparation protocols by automatically tuning hyperparameters for methods like VQE, DMRG, and AFQMC. The AI agents iteratively modify and improve quantum algorithms based on energy convergence metrics, demonstrating effectiveness on spin models and molecular systems.
Key Contributions
- Development of autoresearch methodology for automated optimization of quantum ground-state preparation protocols
- Demonstration of AI-guided hyperparameter tuning for VQE, DMRG, and AFQMC algorithms with improved energy convergence
View Full Abstract
Artificial-intelligence language-model-based coding agents have significantly changed the way we interact with computers day to day, as it is common to use them to create, improve, and run programming scripts using only natural language. Agent code updates can be better guided when such programs can be executed and scored automatically rather than judged by human preference. In quantum computing and classical quantum simulation settings, ground-state preparation has a parallel structure: candidate protocols can be ranked by estimated energies and other proxies indicating proper quantum-state convergence. In this work, we study how autoresearch, a code optimization strategy based on coding agents, can be used to optimize hyperparameter choices of different ground-state preparation and sampling protocols, including the variational quantum eigensolver (VQE), density matrix renormalization group (DMRG), and auxiliary-field quantum Monte Carlo (AFQMC). We validate the viability and capacity of this method on simple spin models and molecular Hamiltonians. Across all three settings, the agent mutates simple baselines into complex protocols with improved energy proxies while operating under constrained space-time computational budgets. We conclude with discussions of other quantum routines that support executable scalar scoring, enabling evolutionary coding agents to automate a substantial portion of the protocol-tuning work that would otherwise be required manually.
Quantum annealing inspired algorithms for the NISQ Era
This paper develops quantum annealing-inspired algorithms designed for near-term quantum devices (NISQ era), including approximate quantum annealing (AQA) and evolving Hamiltonian quantum optimization (EHQO). The algorithms aim to solve optimization problems more efficiently than random initialization approaches by using reduced resources and guiding optimization through intermediate steps.
Key Contributions
- Development of approximate quantum annealing (AQA) with discretized ansatz for NISQ devices
- Introduction of evolving Hamiltonian quantum optimization (EHQO) as a multistep variational scheme
- Demonstration of improved QAOA performance using annealing-inspired parameter initialization
View Full Abstract
We study algorithms inspired by quantum annealing that are suited for the NISQ era. First, we analyze approximate quantum annealing (AQA), which employs a discretized annealing ansatz in which the time step and the number of layers are allowed to deviate from a faithful implementation of quantum annealing. Parameter scans identify regimes that reproduce annealing-like behavior with reduced resources, making them more suitable for NISQ devices. The resulting parameters can then be used as an effective warm start for the quantum approximate optimization algorithm (QAOA), improving its performance compared to random initializations. We also introduce evolving Hamiltonian quantum optimization (EHQO), a multistep variational scheme that guides the optimization process through intermediate Hamiltonians derived from the standard annealing Hamiltonian. Numerical simulations on sets of hard 2-SAT instances suggest that quantum annealing-inspired algorithms provide practical strategies for enhancing variational quantum optimization.
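The warm-start idea can be illustrated in a few lines: discretize a linear annealing schedule into p layers and use the resulting angles as QAOA initial parameters. This is a generic linear-ramp initialization in the spirit of AQA; the paper's actual parameters come from scans over the time step and layer count, which are not reproduced here.

```python
import numpy as np

def annealing_warm_start(p, dt):
    """Annealing-inspired QAOA initialization: discretize the linear schedule s(t) = t/T
    into p Trotter-like layers of time step dt. Returns (gammas, betas) to seed QAOA.
    Illustrative values; the paper selects (dt, p) via parameter scans."""
    s = (np.arange(1, p + 1) - 0.5) / p      # midpoint values of the annealing schedule
    gammas = s * dt                           # cost-Hamiltonian angles grow along the ramp
    betas = (1.0 - s) * dt                    # mixer angles shrink along the ramp
    return gammas, betas

gammas, betas = annealing_warm_start(p=6, dt=0.8)
print(np.round(gammas, 3), np.round(betas, 3))
```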
Bond-dimension scaling of a local-refinement advantage over hyperoptimized tensor-network contraction on Sycamore like topologies
This paper improves tensor network contraction methods for simulating quantum circuits on Sycamore-like quantum processor topologies by adding a local refinement step that significantly reduces computational costs, with improvements that scale with bond dimension.
Key Contributions
- Identified and implemented a missing local-refinement stage in tensor network contraction pipelines that provides significant computational advantages
- Demonstrated that the refinement advantage scales monotonically with bond dimension specifically for Sycamore-like topologies, with improvements from 15 bits at χ=2 to 116 bits at χ=16
View Full Abstract
We identify a missing local-refinement stage in the cotengra tensor-network contraction pipeline and show that its impact grows monotonically with bond dimension on the connectivity graph of Sycamore-like topologies. Appending a nearest-neighbor interchange (NNI) search to the cotengra output at matched 8-s wallclock yields a median predicted cost-model gap at $n=500$ that grows monotonically and approximately linearly in $χ$, from about 15 bits at $χ=2$ to about 116 bits at $χ=16$, with the refiner winning on 25/25 seeds at every tested $χ$. Two control families, random 3-regular and QAOA $p=2$ interaction graphs, show a median gap magnitude of at most 0.71 bits at every $χ$, with the refiner win rate falling toward chance as $χ$ grows; the signal is topology-specific, not a generic refinement-budget effect. An ablation establishes that refinement itself, not the four-axis Pareto acceptance rule, drives the gain (a gap below roughly 0.1 bits between the scalar and Pareto arms at $χ=2$). The Sycamore-circuit envelope reports the corresponding refinement on actual random circuits at depths $m \in \{4, 6, 8, 10, 12\}$, where the refiner wins on 5/5 instances at every depth. The advantage is therefore largest precisely in the bond-dimension regime relevant to physical contraction.
Quantum Optimization Methods for the Generalized Traveling Salesman Problem
This paper develops quantum optimization methods for solving the Generalized Traveling Salesman Problem (GTSP), comparing quantum annealing and gate-based QAOA approaches against classical solvers. The authors find that quantum methods can be competitive on smaller problem instances but face significant scalability and feasibility challenges as problem size increases.
Key Contributions
- Novel QUBO formulation for GTSP optimized for quantum annealing
- Hardware-executable QAOA pipeline with XY-mixer for constrained optimization
- Comparative evaluation of quantum vs classical optimization on GTSPLIB benchmarks
- Preprocessing methods to create NISQ-friendly problem instances
View Full Abstract
This paper studies quantum optimization baselines for the Generalized Traveling Salesman Problem (GTSP), a clustered routing problem that naturally models variant selection and sequencing problems under discrete alternatives. We propose a novel GTSP QUBO formulation focused on maintaining feasible solutions for quantum annealing, as well as a hardware-executable gate-based pipeline utilizing the Quantum Approximate Optimization Algorithm (QAOA). We implement a constrained QAOA variant using an XY-mixer, which preserves the stepwise Hamming weight in the ideal circuit model, while feasibility with respect to the full GTSP constraints is tracked explicitly during post-processing. We compare the two quantum optimization paradigms on problem instances from GTSPLIB, an established benchmark dataset, and validate against classical state-of-the-art solvers. To mitigate current quantum hardware size limitations, we further extend a preprocessing method to reduce the node count in instance clusters, constructing new NISQ-friendly instances from reduced subsets. Across all tested instances, quantum solvers often produce competitive solution quality when tested on smaller graphs, but exhibit higher runtimes and a sharp degradation in feasibility and scalability as instance size grows. Our evaluation highlights where quantum optimizers can already succeed and which algorithmic bottlenecks, like sampling rates, runtime issues, and other practical failure modes, remain as open problems.
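As a small illustration of how feasibility constraints enter a GTSP-style QUBO, the sketch below builds only the "select exactly one node per cluster" penalty term. The paper's full formulation, which also encodes visiting order and tour cost, is not reproduced here, and the penalty weight is an arbitrary illustrative value.

```python
from collections import defaultdict

def cluster_onehot_qubo(clusters, penalty=10.0):
    """QUBO coefficients for the feasibility term P * sum_c (sum_{v in c} x_v - 1)^2,
    i.e. 'select exactly one node from each cluster', returned as {(i, j): coeff}
    with i <= j (the constant offset P per cluster is dropped). Only one ingredient
    of a GTSP QUBO; tour ordering and edge costs are omitted."""
    Q = defaultdict(float)
    for cluster in clusters:
        for a, i in enumerate(cluster):
            Q[(i, i)] += -penalty                  # x_i^2 - 2*x_i = -x_i for binary x_i
            for j in cluster[a + 1:]:
                Q[(i, j)] += 2.0 * penalty         # pairwise cross terms 2 x_i x_j
    return dict(Q)

# Toy instance: six candidate nodes grouped into three clusters.
Q = cluster_onehot_qubo([[0, 1], [2, 3], [4, 5]], penalty=4.0)
print(Q)
# A feasible selection such as x = (1,0,1,0,0,1) reaches the minimum of this term
# (-penalty per satisfied cluster), while picking zero or two nodes in a cluster costs more.
```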
Sector-dominant graph-local drivers for path-window barrier Hamiltonians on the Boolean hypercube
This paper studies methods for preparing quantum states on Boolean hypercubes using adiabatic quantum computing, specifically developing new graph-local drivers based on sector and path coordinates. The authors find that hybrid drivers combining multiple components can improve ground-state preparation fidelity for certain barrier problems, though the improvement depends on the specific target problem class.
Key Contributions
- Development of sector-dominant graph-local drivers for adiabatic quantum state preparation on Boolean hypercubes
- Demonstration that hybrid drivers combining sector, path-window, and transverse-field components improve ground-state fidelity for centered barrier instances
View Full Abstract
We study finite-size adiabatic state preparation on Boolean hypercubes using graph-local drivers built from sector/path coordinates related to monotone Gray-code representatives. The construction is not presented as a new all-$n$ Gray-code existence theorem; rather, it provides finite representatives, explicitly checked through the cases used in the numerical experiments, for testing problem-dependent graph-local drivers. For ordinary diagonal-cost transverse-field annealing, the ordering does not yield a robust advantage, and we include this negative result as a baseline. For non-diagonal target Hamiltonians whose geometry is expressed in the same sector/path coordinates, hybrid drivers combining sector, path-window, and small transverse-field components can substantially improve the final ground-state fidelity in centered barrier instances. Reproduction runs from the accompanying code confirm a representative centered original-window barrier value of approximately $0.9799$ for the fixed-control hybrid parameters $(w,α,ε)=(8,0.50,0.15)$, while also showing that the improvement is target-class dependent. Randomized and ablation controls indicate that the dominant contribution is the sector-preserving skeleton, with strict one-bit completion acting as a secondary refinement. We provide code, finite certificates, CSV files, validation logs, and reproduction scripts to make the finite-size claims traceable.
Quantum Compressed Sensing Enables Image Classification with a Single Photon
This paper presents a method for image classification using quantum compressed sensing that can achieve classification with a single photon detection event. The approach uses quantum superposition to encode complete spatial image information in a single photon and employs a diffractive neural network to create optimal measurement bases for classification.
Key Contributions
- Development of quantum compressed sensing for image classification that reduces measurement requirements from O(Klog(N/K)) to constant order
- Demonstration of single-photon image classification with 69% accuracy using quantum superposition encoding and diffractive neural networks
View Full Abstract
Image classification is a core task of intelligent sensing that conventionally follows a sequential imaging-then-processing pipeline. However, redundant high-dimensional image reconstruction is inherently inefficient, especially in photon-limited scenarios. Here we report a photon-level image classification method using quantum compressed sensing, which reformulates the classification task as a sparse signal measurement problem directly oriented toward class labels. By exploiting the parallelism of photonic quantum superposition states, a single photon can encode the complete spatial information of a high-dimensional image. Through a diffractive deep neural network, we physically construct a dedicated measurement basis aligned with the class space, enabling signal-dependent adaptive compressive measurement. Ideally, our method can extract class information via a single quantum projective measurement, reducing the required number of measurements from the logarithmic scaling O(Klog(N/K)) of classical compressed sensing to the constant-order information-theoretic limit M = K = 1. Experimental results show that a classification accuracy of 69.0% can be achieved by using a single-photon detection event as the decision criterion, while it increases to 95.0% with four-photon detection events. This work demonstrates image classification at the energy efficiency limit and introduces a measurement-as-decision framework. It provides a foundation for intelligent sensing systems that operate under extreme photon budgets and in harsh environments.
Polarization-preserving wavefront rotator
This paper demonstrates a method to rotate optical wavefronts using K-mirrors while preserving polarization by adding synchronized half-wave plates, achieving ~1% polarization error and enabling applications in quantum optics that require precise wavefront control without unwanted polarization changes.
Key Contributions
- Theoretical solution showing synchronized half-wave plates can eliminate polarization-dependent errors in wavefront rotation
- Experimental demonstration achieving ~1% polarization error with 30° base angle K-mirror configuration
View Full Abstract
A K-mirror rotates the wavefront of an incident optical field. However, the rotation always introduces polarization changes in the transmitted field. This is a serious concern for applications ranging from astronomical image derotation to orbital angular momentum spectrum characterization in photonic quantum technology. Recent efforts have shown that the polarization change can be minimized significantly, but these require either a very small base angle that limits the field of view, or mirrors with a customized refractive index. Making the transmitted polarization state completely independent of the rotation angle has remained an open problem. In this work, we show that placing half-wave plates before and after a K-mirror and rotating them synchronously at half the K-mirror rotation angle makes the polarization change in the transmitted field exactly independent of the rotation angle. This works for any wavefront rotator, any base angle, any mirror refractive index, and any input state of polarization. We experimentally demonstrate the approach using a K-mirror with a base angle of $30^{\circ}$, which gives the largest field of view among practical designs, and find a mean polarization error of ~1%, limited only by the retardance imperfection of commercially available half-wave plates. This has significant practical implications for applications that require precise wavefront rotation without polarization change.
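The cancellation mechanism can be checked with a few lines of Jones-matrix algebra: modelling the rotated K-mirror as R(θ) J0 R(−θ) and sandwiching it between ideal half-wave plates co-rotating at θ/2 makes the combined matrix independent of θ. The numerical entries of J0 below are arbitrary placeholders, not measured mirror parameters.

```python
import numpy as np

def R(a):
    """Rotation of the transverse plane by angle a."""
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Jones matrix of the K-mirror at zero rotation angle (illustrative diattenuation
# and retardance values standing in for three metallic reflections).
J0 = np.diag([0.95 * np.exp(1j * 0.3), 0.90 * np.exp(-1j * 0.2)])
HWP0 = np.diag([1.0, -1.0])              # ideal half-wave plate, fast axis horizontal

def k_mirror(theta):                     # K-mirror assembly rotated by theta
    return R(theta) @ J0 @ R(-theta)

def hwp(alpha):                          # half-wave plate with fast axis at angle alpha
    return R(alpha) @ HWP0 @ R(-alpha)

# Sandwich the rotating K-mirror between HWPs co-rotating at half its angle.
outputs = [hwp(t / 2) @ k_mirror(t) @ hwp(t / 2) for t in np.linspace(0, np.pi, 7)]
# The combined Jones matrix equals HWP0 @ J0 @ HWP0 for every rotation angle,
# so the transmitted polarization no longer depends on the wavefront rotation.
print(np.allclose(outputs, outputs[0]))   # True
```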
Ember: An Extensible Benchmark Suite for Quantum Annealing Embedding Algorithms
This paper introduces Ember, a standardized benchmarking framework for evaluating quantum annealing embedding algorithms that map problem graphs onto quantum hardware topologies. The framework includes a large library of test problems and evaluation tools, revealing that no single embedding algorithm performs best across all problem types.
Key Contributions
- Development of a standardized, reproducible benchmarking framework for quantum annealing embedding algorithms
- Creation of a comprehensive graph library with 24,016 instances spanning multiple problem types and hardware topologies
- Systematic evaluation revealing that embedding algorithm performance varies significantly with problem structure and no universal best algorithm exists
View Full Abstract
Minor embedding is a required compilation step for quantum annealing, mapping logical problem graphs onto sparse hardware topologies. Despite its central role in determining solution quality, no standardized benchmark exists for comparing embedding algorithms: prior studies use incompatible graph libraries, inconsistent metrics, and non-reproducible experimental setups, making cross-algorithm comparisons unreliable. We present Ember (Embedding Minor Benchmark for Evaluative Reproducibility), an open-source benchmarking framework addressing this gap. Ember provides a standardized algorithm interface with seeded, reproducible execution infrastructure; a diverse graph library of 24,016 instances spanning structured, random, and physics-motivated problem types not previously used in embedding benchmarks; and a unified analysis pipeline supporting all three current D-Wave hardware topologies (Chimera, Pegasus, Zephyr). We evaluate five algorithms across the full library on Chimera and find that no algorithm dominates universally: rankings vary systematically with graph structure, and the best algorithm depends on the family being embedded. We also examine the effects of hardware topology (including Pegasus and Zephyr), qubit error rates, and evaluate a reinforcement-learning approach (CHARME) within a narrower test set. Ember is available at https://github.com/zachmacsmith/ember and is installable via pip install ember-qc.
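For context on the compilation step Ember benchmarks, here is a minimal minor-embedding example using D-Wave's minorminer heuristic and a Chimera target graph. Note that this calls minorminer directly as one of the algorithms such a benchmark would compare; it does not use Ember's own interface.

```python
import networkx as nx
import dwave_networkx as dnx
import minorminer

source = nx.complete_graph(8)            # logical problem graph K8
target = dnx.chimera_graph(4)            # 4x4 Chimera hardware graph (128 qubits)

# Heuristic minor embedding: each logical node is mapped to a chain of physical qubits.
embedding = minorminer.find_embedding(source.edges(), target.edges(), random_seed=1)
chain_lengths = [len(chain) for chain in embedding.values()]
print(f"qubits used: {sum(chain_lengths)}, longest chain: {max(chain_lengths)}")
```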
Deterministic Realization of Classical Dissipation on Quantum Computers
This paper develops a method to implement classical fluid dynamics simulations (Lattice Boltzmann Method) on quantum computers by solving the problem of representing dissipative processes using unitary quantum gates. The authors create a block-encoding-free construction that achieves perfect success probability for the dissipative collision step.
Key Contributions
- Development of a block-encoding-free quantum algorithm for implementing dissipative Lattice Boltzmann collision steps with unit success probability
- Introduction of a signed two-rail population encoding scheme that exactly reproduces classical MRT relaxation behavior on quantum hardware
View Full Abstract
Lattice Boltzmann (LB) on quantum devices must reconcile unitary gate evolution with the dissipative collision step. In the multiple-relaxation-time (MRT) class, we work in the common setting of modewise diagonal moment relaxation, $δm_r'=λ_r\,δm_r$ with $λ_r\in[-1,1]$ (overrelaxation if $λ_r<0$). Embedding that contraction in a unitary by block encoding or a linear combination of unitaries (LCU) typically yields subunitary success probability that decays multiplicatively across modes, sites, and time, a key bottleneck for quantum LB. For the dissipative MRT block alone we give a block-encoding-free construction: a signed two-rail population encoding, then a completely positive trace-preserving (CPTP) map (per-rail amplitude damping with survival $|λ_r|$ and, if $λ_r<0$, a rail SWAP) so that, after the decode, the map agrees with classical MRT relaxation exactly (expectations of the rail number operators, common encoding/decode scale). Trace preservation gives success probability 1 for that substage. The main result is the dissipative MRT block; construction of the equilibrium moment vector $m^{\mathrm{eq}}=Mf^{\mathrm{eq}}$ (for a prescribed $f^{\mathrm{eq}}$ and host moment matrix $M$), moment transforms, streaming, and boundaries are composed with it as in a standard host pipeline and lie outside the scope of the formal theorem. Hybrid and fully coherent encodings, adaptive scales, Carleman-based context, and a one-rail no-go in the same nonnegative population framework are in the main text. Audits of the open-channel map on a long LBM collide-stream simulation and on stencil-free inputs both match the target to machine precision.
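A toy, one-qubit-per-rail version of the signed two-rail idea shows the mechanism: damping each rail with survival |λ| and swapping the rails when λ < 0 reproduces δm' = λ δm at the level of rail-number expectation values. This is a simplified illustration under those assumptions, not the paper's full multi-mode construction.

```python
import numpy as np

def amplitude_damp(rho, survival):
    """Single-qubit amplitude-damping channel keeping the |1> population with
    probability `survival`."""
    K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(survival)]])
    K1 = np.array([[0.0, np.sqrt(1.0 - survival)], [0.0, 0.0]])
    return K0 @ rho @ K0.T + K1 @ rho @ K1.T

def relax_signed_moment(n_plus, n_minus, lam):
    """Toy one-qubit-per-rail relaxation: damp each rail with survival |lam|,
    swap the rails when lam < 0, and decode the signed moment as <n_+> - <n_->."""
    rails = [np.diag([1.0 - n, n]) for n in (n_plus, n_minus)]   # diagonal rail states
    rails = [amplitude_damp(r, abs(lam)) for r in rails]
    if lam < 0:
        rails.reverse()                                          # rail SWAP for overrelaxation
    return rails[0][1, 1] - rails[1][1, 1]                       # decoded delta_m'

n_plus, n_minus = 0.6, 0.2            # rail populations encoding delta_m = 0.4
for lam in (0.5, -0.7):
    print(lam * (n_plus - n_minus), relax_signed_moment(n_plus, n_minus, lam))
# Both columns agree: the CPTP map reproduces delta_m' = lam * delta_m exactly.
```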
Elucidating mechanism of optical cavities in superconducting strip single photon detectors using transmission line and impedance models
This paper develops analytical models using transmission line and impedance theory to understand how optical cavities enhance light absorption in superconducting single photon detectors (SSPDs). The researchers showed that maximum absorption occurs when the detector's input impedance matches the input medium, and validated their models against numerical simulations.
Key Contributions
- Derived analytical formulae for SSPD absorptance using transmission line models
- Demonstrated impedance matching principle for optimal photon detection efficiency
- Provided design framework applicable to multiple superconducting detector types
View Full Abstract
We clarified the physical mechanism of superconducting strip single photon detectors (SSPDs) with optical cavities by using transmission line and impedance models. By introducing the transmission line model, we derived the analytical formulae for the absorptance of SSPDs with optical cavities. We compared the absorptance obtained from the analytical formulae for SSPDs with single-side, double-side, and dielectric multi-layer optical cavities against the results of numerical simulations. The comparison showed that the results were nearly identical. By introducing the impedance model, we clearly showed that SSPDs with optical cavities achieve maximum absorptance when their input impedance matches the impedance of the input medium. The design concepts proposed in this study are applicable to other superconducting detectors, such as microwave kinetic inductance detectors and transition-edge sensors.
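The impedance-matching statement can be illustrated with the textbook transmission-line relation for a stack backed by a reflector, where the absorptance is A = 1 − |Γ|² with Γ = (Z_in − Z_0)/(Z_in + Z_0). The impedance values below are illustrative only and are not derived from actual SSPD cavity layers.

```python
import numpy as np

Z0 = 377.0                                                             # impedance of the input medium
Z_in = np.array([50.0, 150.0, 250.0, 377.0, 500.0, 800.0, 1500.0])     # candidate stack input impedances
gamma = (Z_in - Z0) / (Z_in + Z0)          # reflection coefficient seen from the input medium
absorptance = 1.0 - np.abs(gamma) ** 2     # no transmitted wave behind the back reflector
for z, a in zip(Z_in, absorptance):
    print(f"Z_in = {z:7.1f} ohm -> A = {a:.3f}")   # A = 1 exactly at the matched point Z_in = Z0
```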
Entanglement Dynamics in a Two Transmon Qubit System under Continuous Measurement and Postselection
This paper studies how continuous measurement and postselection affect entanglement between two transmon qubits coupled through a cavity. The researchers show that postselection can slow down entanglement decay and identify special quantum phases that influence the system's behavior.
Key Contributions
- Demonstrated that postselection significantly slows entanglement decay in transmon systems compared to unmonitored cases
- Identified exceptional points and PT-symmetric phases in the system dynamics that influence entanglement behavior
View Full Abstract
We investigate the role of continuous measurement and postselection in the dynamics and entanglement of a transmon-cavity-transmon coupled system. In the dispersive regime, characterized by a large detuning between the transmons and the cavity, the two transmons interact via virtual excitation of the cavity, giving rise to an effective transmon-transmon coupling. In addition to this coherent interaction, each transmon undergoes spontaneous emission, which is continuously monitored through independent detection channels. By incorporating realistic detector inefficiencies, we analyze both efficient and imperfect monitoring scenarios and demonstrate that postselection significantly slows down the decay of entanglement compared to the unmonitored case. We formulate the stochastic master equation for the coupled system, derive the corresponding postselected master equation, and investigate the dynamics through the Liouvillian superoperator spectrum. In the interaction frame, we identify the emergence of an exceptional point and characterize the associated broken and unbroken PT-symmetric phases. We show how these phases influence the system dynamics and the corresponding entanglement behavior. Our results provide insight into how continuous measurement and postselection affect entanglement in dissipative quantum systems, with potential applications in quantum information processing.
Nonlinearity-enhanced Quantum Sensing in Discrete Time Crystal Probes
This paper demonstrates that discrete time crystals can be used as quantum sensors, and shows that adding nonlinear interactions significantly enhances their sensing precision. The authors find that nonlinearity increases the quantum Fisher information scaling with system size and propose implementing this sensing protocol using superconducting qubits.
Key Contributions
- Extended discrete time crystal sensing to nonlinear interactions showing enhanced precision scaling
- Demonstrated that nonlinearity increases quantum Fisher information system-size scaling approximately linearly
- Showed that pulse imperfections can enhance rather than suppress encoded information
- Proposed digital implementation using superconducting qubits
View Full Abstract
Discrete time crystals are non-equilibrium phases of matter in periodically driven systems, characterized by robust subharmonic oscillations and broken discrete time-translation symmetry. Their long-lived coherent dynamics and resilience to imperfections make them promising resources for quantum sensing. A disorder-free discrete-time crystal probe can provide the quantum-enhanced estimation of the coupling parameter. Here, we extend this sensing mechanism to nonlinear interactions and show that this nonlinear profile strongly enhances the sensing precision by increasing the system-size scaling exponent of the quantum Fisher information. Our analytical discussion separates a rigorous seminorm upper bound from the physically relevant scaling realized by product-state probes in the time crystal regime. Numerically, we find that the quantum Fisher information retains its quadratic long-time growth with the number of Floquet cycles, while its system-size exponent increases approximately linearly with the nonlinearity exponent, identifying nonlinearity as a resource for quantum-enhanced sensitivity. We further show that stronger nonlinearities shrink the time crystal stability window, making the probe more sensitive to small deviations from the resonant condition. We also analyze the effect of imperfect pulses and show that such imperfections can enhance, rather than suppress, the information encoded in the evolved state. Finally, we discuss a digital implementation of the nonlinear DTC sensing protocol using superconducting qubits.
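As background for the system-size scaling discussed above, the sketch below computes the quantum Fisher information F_Q = 4 Var(G) for a collective phase generator on product and GHZ probes, reproducing the familiar N versus N² scalings. This is a generic illustration of how such exponents are extracted, not the paper's nonlinear DTC model.

```python
import numpy as np
from functools import reduce

sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def collective_z(N):
    """Generator G = sum_i S_z^(i) on N qubits."""
    return sum(reduce(np.kron, [sz if j == i else I2 for j in range(N)]) for i in range(N))

def qfi(state, G):
    """Quantum Fisher information of a pure state for exp(-i*theta*G): F_Q = 4 Var(G)."""
    mean = state.conj() @ G @ state
    return float(4 * (state.conj() @ G @ G @ state - mean ** 2).real)

for N in range(2, 7):
    plus = reduce(np.kron, [np.array([1.0, 1.0]) / np.sqrt(2)] * N)   # product probe
    ghz = np.zeros(2 ** N); ghz[0] = ghz[-1] = 1 / np.sqrt(2)         # entangled probe
    print(N, round(qfi(plus, collective_z(N)), 3), round(qfi(ghz, collective_z(N)), 3))
# Prints F_Q = N for the product state (standard quantum limit) and N^2 for GHZ (Heisenberg limit).
```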
Graph-Conditioned Meta-Optimizer for QAOA Parameter Generation on Multiple Problem Classes
This paper develops a machine learning approach to improve the Quantum Approximate Optimization Algorithm (QAOA) by training a graph-conditioned meta-optimizer that can generate better starting parameters for solving combinatorial optimization problems. The method learns from one type of optimization problem and can transfer that knowledge to solve different types of graph-based optimization problems more efficiently.
Key Contributions
- Development of a graph-conditioned meta-optimizer that improves QAOA parameter initialization across different problem classes
- Demonstration of transferable learning between different combinatorial optimization problems (MaxCut, Maximum Independent Set, Maximum Clique, Minimum Vertex Cover)
View Full Abstract
We study parameter transferability for the Quantum Approximate Optimization Algorithm (QAOA) across multiple combinatorial optimization problem classes from a parameter generation perspective. Specifically, a meta-optimizer is trained on one problem class and deployed on another during test time. Prior work employs a Long Short-Term Memory network to emulate QAOA optimization trajectories, but the learned dynamics usually collapse to near-identical paths, limiting cross-problem transfer efficiency. In this paper, we present a problem-aware graph-conditioned meta-optimizer for QAOA that learns to generate parameter trajectories over a fixed horizon, providing strong initializations with only a few steps. The optimizer is conditioned on compact graph embeddings and trained end-to-end using differentiable feedback from the QAOA objective, avoiding the need for ground-truth angles. We evaluate across multiple graph problem classes, including MaxCut, Maximum Independent Set, Maximum Clique, and Minimum Vertex Cover. We report both solution quality and feasibility-aware metrics where constraints apply. Results across a comprehensive empirical study consisting of 64 settings show that the learned optimizer can reduce optimization effort and improve performance over standard initialization, while exhibiting transferable behavior across graph families and problem types.
Anomalous Mixed-State Floquet Topology in One-Dimensional Open Quantum Systems
This paper studies how topological properties survive in quantum systems that are both periodically driven and coupled to thermal environments, extending Floquet topology concepts from isolated systems to realistic open systems with dissipation. The researchers use a microscopic theory to show that certain protected quantum states can exist even when the system loses energy to its surroundings.
Key Contributions
- Extension of Floquet topological concepts to open quantum systems with thermal dissipation using Floquet-Born-Markov theory
- Identification of topological invariants for mixed-state systems that preserve Z×Z classification from isolated Floquet systems
View Full Abstract
We investigate the non-equilibrium topology of a periodically driven, dissipative Su-Schrieffer-Heeger chain using the ensemble geometric phase (EGP) $φ_{\mathrm{EGP}}$, a generalisation of the Zak phase to open quantum systems. In contrast to earlier work, we use Floquet-Born-Markov theory to describe the coupling to thermal reservoirs microscopically. We show that the steady state can be characterised by a Hermitian purity spectrum, providing a direct analogue of band topology for mixed states. The periodic drive induces nontrivial winding and a quasienergy spectrum with distinct $0$ and $π$ band gaps, with protected edge modes in each gap. We identify a pair of topological invariants $(φ^{0}_{\mathrm{EGP}}, Δφ^π_{\mathrm{EGP}})$, revealing a structure consistent with a $\mathbb{Z}\times\mathbb{Z}$ classification known from isolated Floquet SSH systems, and show how it extends to a dissipative, finite-temperature setting in regimes where the steady-state structure remains well defined. Our results demonstrate when and how known Floquet topology survives in a driven-dissipative Gaussian steady state and establish Floquet topology as a robust concept beyond isolated zero-temperature systems. The underlying formalism provides a general framework for quadratic fermionic systems with linear bath couplings.
Contracting Tensor Networks with Generalized Belief Propagation
This paper develops a generalized belief propagation algorithm for efficiently contracting tensor networks, which are mathematical structures used to represent quantum many-body systems. The method extends traditional belief propagation by using hierarchical overlapping regions and is demonstrated on various physical problems including frustrated magnets and quantum states.
Key Contributions
- Extension of belief propagation to generalized belief propagation for tensor network contraction
- Implementation and testing on multiple physical systems including frustrated Ising models and quantum states
- Development of both numerical and analytical solutions for fixed point equations in the algorithm
View Full Abstract
Recent years have seen a growing interest in the use of belief propagation - an algorithm originally introduced for performing statistical inference on graphical models - for approximate, but highly efficient, tensor network contraction. Here, we detail how to apply generalized belief propagation (GBP) - where messages are passed within a hierarchy of overlapping regions of the tensor network - to approximately contract tensor networks and obtain accurate results. The original belief propagation algorithm is a corner case of this approach, corresponding to a particularly simple choice of regions of the tensor network. We implement the GBP algorithm for a number of different region choices on a range of two- and three-dimensional, infinite and finite tensor networks, solving the corresponding fixed point equations both numerically and, in certain tractable cases, analytically. Our examples include calculating the partition function of the fully frustrated Ising model, computing the ground state degeneracy of three-dimensional ice models, measuring observables on the deformed AKLT quantum state and evaluating the norm of randomly generated tensor network states.
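The belief-propagation corner case mentioned above reduces, on a tree, to exact message passing. The sketch below runs unnormalized BP messages on a small tree-shaped Ising network and recovers the exact partition function; the graph and couplings are arbitrary toy choices, not examples from the paper.

```python
import numpy as np
import itertools

# Small tree: a 4-spin star with spin 0 as the hub.
edges = [(0, 1), (0, 2), (0, 3)]
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
beta, J = 0.7, 1.0
spins = [-1, +1]

def pair_factor(si, sj):
    return np.exp(beta * J * si * sj)

# Unnormalized BP messages m[(i, j)][s_j]; on a tree one upward and one downward
# sweep already give the exact fixed point, here we simply iterate a few times.
msgs = {(i, j): np.ones(2) for i in neighbors for j in neighbors[i]}
for _ in range(10):
    new = {}
    for (i, j) in msgs:
        out = np.zeros(2)
        for b, sj in enumerate(spins):
            for a, si in enumerate(spins):
                prod = pair_factor(si, sj)
                for k in neighbors[i]:
                    if k != j:
                        prod *= msgs[(k, i)][a]
                out[b] += prod
        new[(i, j)] = out
    msgs = new

# Partition function from the belief at any root node (exact on a tree).
root = 0
Z_bp = sum(np.prod([msgs[(k, root)][a] for k in neighbors[root]]) for a in range(2))

# Brute-force check over all spin configurations.
Z_exact = sum(np.prod([pair_factor(s[i], s[j]) for i, j in edges])
              for s in itertools.product(spins, repeat=4))
print(Z_bp, Z_exact)   # the two values agree on a tree
```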
Semiclassical phases of charged spin-$1/2$ matter-wave interferometers in gravitational wave backgrounds
This paper develops a theoretical framework for analyzing how gravitational waves affect matter-wave interferometers using charged spin-1/2 particles. The work shows that gravitational wave detection can be enhanced by exploiting three distinct quantum phase contributions: particle motion, spin, and electromagnetic effects.
Key Contributions
- Unified semiclassical framework for charged spin-1/2 matter-wave interferometry in curved spacetime
- Identification of three distinct phase channels (dynamical, spin, and electromagnetic) for gravitational wave detection
- Analysis of frequency-dependent responses and geometric filtering effects in Mach-Zehnder interferometers
View Full Abstract
A matter wave propagating through curved spacetime accumulates phase that encodes both geometry and gauge structure. We develop a semiclassical framework for charged spin-$1/2$ matter-wave interferometers based on a WKB expansion of the covariant Dirac equation, in which the phase decomposes into dynamical, spin, and electromagnetic Aharonov-Bohm (AB) contributions. In a freely falling detector frame, all three channels are governed by local tidal fields. In a weak gravitational-wave (GW) background, the dynamical and spin phases probe the gravitoelectric and gravitomagnetic sectors of curvature, while the AB phase arises from curvature-induced electromagnetic fields obtained from Maxwell's equations in curved spacetime. For a Mach-Zehnder interferometer (MZI), all three responses are determined by the same tidal scale, $\ddot{h}_A \sim Ω^2_{gw}h_0$, and filtered by a common geometric kernel, while entering through distinct physical couplings. In particular, the AB contribution depends not only on the enclosed flux but also on spatial variations of the induced fields and exhibits an intrinsic frequency dependence set by the traversal time. These results provide a unified description of matter-wave interferometric phases in time-dependent GW backgrounds and identify complementary dynamical, spin, and electromagnetic pathways through which spacetime curvature imprints itself on quantum interference.
Dynamical preparation of U(1) quantum spin liquids in an analogue quantum simulator
Researchers used ultracold atoms in optical lattices to create and study exotic quantum states called U(1) quantum spin liquids across thousands of lattice sites. They developed new techniques to directly observe quantum coherence between many-body states and demonstrated protocols for preparing these highly-entangled states through non-equilibrium dynamics.
Key Contributions
- Large-scale experimental realization of U(1) lattice gauge theory with >3,000 sites using ultracold atoms
- Development of microscopy techniques for detecting doubly occupied sites and measuring many-body coherence
- Demonstration of non-equilibrium protocols for preparing quantum spin liquid states with coherence extending over ~100 lattice sites
- Direct experimental verification of Gauss's law and observation of characteristic correlations in quantum spin liquids
View Full Abstract
Locally constrained gauge theories underpin our understanding of fundamental interactions in particle physics and the emergent behaviour of quantum materials. In strongly correlated systems, they can give rise to quantum spin liquids that lack conventional order and are defined by coherent superpositions of an extensive number of many-body configurations. Realising and probing such exotic states experimentally is an outstanding challenge both in solid-state and synthetic quantum systems, not least due to the difficulty of detecting the fragile coherences between many-body states. Here, we report a large-scale (>3,000 sites) realisation of a two-dimensional U(1) lattice gauge theory with ultracold atoms in a square optical superlattice and demonstrate non-equilibrium preparation of extended regions of U(1) quantum spin liquids. We demonstrate Gauss's law validity in a quench experiment, enabled by a new microscopy technique for detecting doubly occupied sites. We observe characteristic real-space correlations and momentum-space pinch points, hallmarks of the emergent U(1) gauge structure. Using round-trip interferometric protocols, we directly observe large-scale coherence between many-body configurations, providing strong evidence for quantum spin liquid regions extending over ~100 lattice sites. Our results establish non-equilibrium quantum simulation protocols as a powerful route for accessing and probing exotic, highly-entangled states beyond those hosted by the engineered Hamiltonian in thermal equilibrium.
Application of a Quantum Amplitude Redistribution Algorithm to the Data Filtering Problem
This paper explores using a quantum amplitude redistribution algorithm for data filtering applications and compares its performance to traditional median filtering methods through computational modeling.
Key Contributions
- Development of quantum amplitude redistribution algorithm for data filtering
- Comparative analysis between quantum filtering approach and classical median filtering
View Full Abstract
This paper presents an analysis of the applicability of a quantum amplitude redistribution algorithm to the data filtering problem and the results of modeling the algorithm's operation in comparison with a median filter.
Experimental high-dimensional multi-qubit Bell non-locality on a superconducting quantum processor
This paper demonstrates Bell non-locality violations using 12 qubits on a superconducting quantum processor, showing that quantum correlations in high-dimensional systems (up to 64 dimensions) exhibit stronger non-local behavior than lower-dimensional systems. The researchers prove that all qubits collectively contribute to these quantum correlations, advancing our understanding of many-body quantum mechanics.
Key Contributions
- First experimental demonstration of simultaneous high-dimensional and many-body Bell non-locality on superconducting quantum hardware
- Proof that genuinely collective quantum correlations can be achieved where all qubits contribute while pairwise correlations remain Bell-local
- Benchmarking method for quantum processor performance using fundamental quantum mechanical effects
View Full Abstract
Combining recent advances in superconducting quantum hardware, we explore quantum correlations in a previously inaccessible regime by observing \emph{simultaneously} high-dimensional and many-body Bell non-locality. We report a high-confidence Bell violation in the correlations between two $d=64$-dimensional systems encoded in twelve qubits. For system sizes up to $d=32$, the strength of the observed nonlocal correlations exceeds the quantum upper bound for $d=2$ systems, providing direct evidence of high-dimensional nonlocality. Furthermore, we demonstrate that the observed violation is genuinely collective: all qubits contribute to the nonlocal correlations, while most pairwise correlations across the bipartition remain Bell-local. Our work illustrates how present-day quantum processors enable the exploration of fundamental predictions of quantum mechanics in previously inaccessible regimes and, in turn, how fundamental quantum effects can be used to benchmark their performance.
How Quantum Contextuality disappears in the Classical Limit
This paper investigates how quantum contextuality, a fundamental signature of quantum behavior, disappears as quantum systems interact with their environment and become classical. The authors analyze specific quantum measurement scenarios under noise to show how decoherence suppresses the quantum correlations that demonstrate contextuality.
Key Contributions
- Resolves the apparent paradox of how state-independent contextuality can disappear despite persisting in maximally mixed states
- Demonstrates how depolarizing channels and measurement noise suppress contextuality in KCBS and Peres-Mermin scenarios through sequential measurement analysis
View Full Abstract
The emergence of classicality is fundamentally driven by the interaction between a quantum system and its environment. Foundational open-system approaches, notably the Caldeira-Leggett model, successfully captured how these interactions lead to macroscopic effects like quantum dissipation and decoherence. However, these approaches often leave the precise definitions of classicality and quantumness ambiguous. In quantum information theory, this boundary is a heavily scrutinized question, and Kochen-Specker contextuality emerges as a hallmark of nonclassicality. It is therefore natural to investigate whether decoherence can actually suppress this property. Taking this path creates an apparent conundrum, since there exist two distinct manifestations of quantum contextuality: state-dependent and state-independent ones. While state-dependent contextuality naturally vanishes under state degradation, state-independent contextuality could persist for any quantum state, since it shows up even for the maximally mixed state! In this paper, we resolve this apparent paradox by analyzing sequential measurement implementations of the paradigmatic Klyachko, Can, Binicioğlu, and Shumovsky (KCBS) and Peres-Mermin prepare-and-measure scenarios under the influence of depolarizing channels. By introducing noise both prior to and in between measurements, and by analyzing the resulting sequential correlators in both the Schrödinger and Heisenberg pictures, we show how open-system dynamics suppress the correlations required to witness contextuality, leading to classicalization.
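A minimal, state-dependent version of the classicalization mechanism can be computed directly: evaluating the KCBS witness on a depolarized optimal qutrit state shows the violation of the classical bound 2 disappearing as the noise grows. This single-shot estimate is only an illustration; the paper's sequential-measurement analysis of state-independent contextuality is considerably richer.

```python
import numpy as np

# Standard KCBS construction: five unit vectors with adjacent pairs orthogonal.
cos2 = np.cos(np.pi / 5) / (1 + np.cos(np.pi / 5))        # fixes adjacent orthogonality
theta = np.arccos(np.sqrt(cos2))
vecs = np.array([[np.cos(theta),
                  np.sin(theta) * np.cos(4 * np.pi * j / 5),
                  np.sin(theta) * np.sin(4 * np.pi * j / 5)] for j in range(5)])
assert np.allclose([vecs[j] @ vecs[(j + 1) % 5] for j in range(5)], 0)   # exclusive contexts

psi = np.array([1.0, 0.0, 0.0])                            # optimal qutrit state
projectors = [np.outer(v, v) for v in vecs]

for p in (0.0, 0.2, 0.415, 0.6, 1.0):                      # depolarizing strength
    rho = (1 - p) * np.outer(psi, psi) + p * np.eye(3) / 3
    kcbs = sum(np.trace(rho @ P).real for P in projectors)
    print(f"p = {p:.3f}: KCBS value = {kcbs:.3f}  (classical bound 2)")
# The violation (sqrt(5) ~ 2.236 at p = 0) drops below the classical bound for p >~ 0.41.
```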
Operating a contextual Stern-Gerlach apparatus
This paper proposes a quantum cavity/circuit QED version of the Stern-Gerlach experiment where a two-level atom's pseudo-spin is measured using resonant field driving and continuous cavity field monitoring. The researchers study how the measurement context affects the stability of quantum superposition states and explores bistability regimes in the coupled atom-cavity system.
Key Contributions
- Development of a contextual cavity QED analogue of the Stern-Gerlach experiment using continuous measurement
- Investigation of dressed-state polarization and bistability in strongly coupled atom-cavity systems under continuous monitoring
View Full Abstract
We propose a contextual cavity/circuit QED analogue and extension of the Stern-Gerlach experiment, where the pseudo-spin of a two-state 'atomic' transition plays the role of the "spin", while the resonant field driving the transition stands for the "magnetic field". A phase-sensitive continuous detection of the cavity field coupled to the induced 'atomic' dipole affects the stability of the two distinct outcomes. The dressed states comprising the latter give way to a self-consistent spontaneous dressed-state polarization as the driving strength is lowered. The associated evolution again proves highly contextual, underpinned by a persistent production of coherent-state superpositions for a particular setting of the monitoring device. Finally, when bistability is absent, we employ the photoelectron 'atomic' emission statistics as a diagnostic tool of the cavity field fluctuations.
Gate-dependent offset charge shifts and anharmonicity in gatemon qubits in the weak tunneling regime
This paper analyzes gatemon qubits, which are superconducting qubits that can be electrically tuned using gate voltages. The researchers study how gate voltage affects the qubit's energy levels and propose methods to experimentally detect predicted charge offset effects in these tunable quantum devices.
Key Contributions
- Quantified observable effects of gate-dependent charge offsets on gatemon qubit energy spectrum and anharmonicity
- Proposed experimental protocol to detect predicted charge offsets in gatemon qubits with tunneling asymmetry
View Full Abstract
Gatemon qubits are based on a superconductor-quantum dot-superconductor (S-QD-S) junction which enables in situ electrostatic tuning via a gate electrode. For a single-channel QD this structure gives rise to two subgap Andreev bound states (ABSs), and generally leads to a richer quantum phase dynamics as compared to conventional transmons. In a recent work [Phys. Rev. B 111, 214503 (2025)] we derived the quantum phase dynamics from a many-body treatment which leads to an effective gate voltage-dependent Hamiltonian that self-consistently incorporates the phase quantization. It predicts (i) a renormalization of the junction's effective capacitance and (ii) the presence of gate voltage and occupation-dependent charge offsets in junctions with tunneling asymmetry. Here, we quantify the observable impact of these effects on the qubit's energy spectrum and anharmonicity, by studying the interplay of the two Andreev branches as a function of dot-gate voltages and junction transparencies. We show the relation of these predictions to simplified gatemon models and propose a protocol to experimentally detect the predicted charge offsets.
Encoding strategies for quantum enhanced fluid simulations: opportunities and challenges
This paper reviews different strategies for encoding fluid dynamics information into quantum computers for computational fluid dynamics (CFD) simulations. The authors analyze the trade-offs between various encoding approaches and argue that the choice of encoding method should be considered a primary design variable that depends on the specific fluid problem and quantum hardware constraints.
Key Contributions
- Comprehensive architecture-agnostic assessment of encoding strategies for quantum CFD applications
- Analysis of fundamental trade-offs between compact vs. less compact encodings in terms of state preparation, measurement, and nonlinear processing
View Full Abstract
Quantum computing has emerged as a powerful potential accelerator for computational fluid dynamics (CFD), but whether this promise can be realized in practice depends on how fluid information is encoded on quantum hardware. This review provides an architecture-agnostic assessment of encoding strategies for quantum-enhanced fluid simulation, focusing on the trade-offs they impose on state preparation, measurement, boundary treatment, nonlinear dynamics, and temporal evolution. We examine the principal encoding paradigms used in the literature and relate them to representative quantum algorithms for fluid simulation. Through these examples, we show that encoding choices fundamentally shape both the algorithm itself and also the practical feasibility of quantum CFD. For example, highly compact encodings can offer attractive asymptotic advantages but might introduce severe bottlenecks in readout, state preparation, and nonlinear processing, whereas less compact representations may simplify interactions and improve compatibility with analog and near-term hardware. No single encoding is universally optimal, rather the most suitable choice depends strongly on the structure of the fluid problem, the computational objective and the constraints of the target quantum platform. We therefore argue that encoding should be treated as a primary design variable in quantum CFD and revisited iteratively throughout the design pipeline, as different algorithmic components interact and influence one another.
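The compactness trade-off can be made tangible with a qubit-count comparison between two encoding extremes for a uniform 3-D grid; the grid size below is an arbitrary illustrative choice.

```python
import math

# Back-of-the-envelope qubit counts for two encoding extremes of a fluid field.
nx = ny = nz = 128                                   # grid points per dimension (illustrative)
n_points = nx * ny * nz
q_amplitude = math.ceil(math.log2(n_points))         # amplitude encoding: one amplitude per point
q_per_point = n_points                               # one qubit per grid point (least compact extreme)
print(f"{n_points:,} grid points -> amplitude encoding: {q_amplitude} qubits, "
      f"one qubit per point: {q_per_point:,} qubits")
```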
Singlet-triplet oscillations in multivalley Si double quantum dots
This paper studies how electron spins behave in silicon quantum dots, specifically analyzing the oscillations between singlet and triplet states when electrons are separated between two quantum dots. The research focuses on understanding how valley states (related to silicon's crystal structure) interact with electron spins and affect the dynamics of these quantum states.
Key Contributions
- Theoretical description of singlet-triplet mixing in Si/SiGe double quantum dots accounting for multiple valley occupation patterns
- Detailed analysis of spin-valley coupling effects near resonance conditions and comparison with experimental measurements
- Investigation of valley-dependent g-factors and dephasing mechanisms due to electric field noise
View Full Abstract
Charge separation from the $(4,0)$ to the $(3,1)$ state in a Si/SiGe double quantum dot is commonly used for initialization of spin qubits and Pauli-spin-blockade readout. It was used in recent experiments involving the creation of the $(3,1)$ singlet and the subsequent shuttling of one of the electrons. We present a theoretical description of the process of charge separation and singlet-triplet mixing, arriving at expressions for the singlet return probability that take into account experimentally observed finite probabilities of the creation of singlets with various patterns of valley occupations. In our analysis we focus on magnetic fields for which the electron spin Zeeman splitting is close to the valley splitting in one of the dots, when the spin-valley coupling causes a strong renormalization of the frequency of oscillations of singlet return probability. The latter effect has recently been used to perform valley splitting mapping by shuttling of one quantum dot to various locations with respect to the other. We give a detailed description of singlet-triplet dynamics near these spin-valley resonances and compare the results of calculations with measurements on double quantum dots in two distinct Si/SiGe heterostructures. Comparison of theory with experiments in which the presence of a few valley occupation patterns is visible gives insight into the valley dependence of $g$-factors in these structures, providing support for a recently proposed theoretical model of this dependence. We also discuss how dephasing of singlet return probability oscillations near the spin-valley resonances is affected by valley splitting fluctuations caused by electric field noise.
Optical depth dictates universal bounds on many-body decay in atomic ensembles
This paper derives universal scaling laws for cooperative photon emission in atomic ensembles, showing that the maximum emission rate scales with the product of atom number and optical depth regardless of whether atoms are ordered or randomly distributed. The work establishes optical depth as the key parameter governing many-body cooperative emission and reveals different scaling behaviors depending on detector geometry.
Key Contributions
- Derived universal scaling law unifying cooperative emission in ordered arrays and disordered atomic clouds based on optical depth
- Established scaling laws for directional detection showing detector aperture determines whether quadratic Dicke scaling or universal bound is observed
View Full Abstract
Cooperative emission is well understood for idealized symmetric systems, but its limits in spatially extended, free-space ensembles remain an open question. Here, we derive a universal law for the scaling of the maximum photon emission rate with system size that unifies both ordered arrays and disordered atomic clouds in arbitrary dimensions at fixed density. We demonstrate that, for a fixed atomic density, the maximum emission rate scales universally as the product of the atom number and the system's optical depth, with the latter encoding the dimensional scaling across all regimes from independent emission to the Dicke limit. Furthermore, we establish a scaling law for directional detection, revealing that the observed rate depends on the detector's numerical aperture: small apertures yield Dicke-like quadratic scaling, whereas large apertures recover our integrated universal bound. Our results establish optical depth as the parameter governing many-body cooperative emission in both ordered and disordered ensembles, and reveal that directional and total-emission scalings must be carefully distinguished in experimental settings.
Optimization Using Locally-Quantum Decoders
This paper develops a quantum decoding algorithm for classical LDPC codes that can handle quantum superpositions of errors, applied to solving optimization problems like max-k-XORSAT. The quantum decoder outperforms classical belief propagation in many cases but falls short of achieving quantum advantage due to improvements in classical algorithms.
Key Contributions
- Development of intrinsically quantum decoder for classical LDPC codes that handles coherent superpositions of errors
- Demonstration that quantum decoder outperforms classical belief propagation for average-case D-regular max-k-XORSAT instances
View Full Abstract
It was pointed out in [JSW+25] that widely-studied optimization problems such as D-regular max-k-XORSAT can be reduced to decoding of LDPC codes, using quantum algorithms related to Regev's reduction. LDPC codes have very good decoders, such as Belief Propagation (BP), and this therefore makes D-regular max-k-XORSAT an enticing target for this class of quantum algorithms. However, BP was found insufficient to achieve quantum advantage. Here, we develop an intrinsically quantum decoding technique, which decodes classical LDPC codes subject to coherent superpositions of bit flip errors. For average-case instances of D-regular max-k-XORSAT drawn from Gallager's ensemble, this quantum decoder strongly outperforms classical belief propagation at many values of k and D. For some (k,D) the approximate optima achievable using this decoder surpass both Prange's algorithm and simulated annealing. However, we stop short of achieving quantum advantage because we identify an enhancement to Prange's algorithm that recovers a precise tie, much as a precise tie was observed between the standard version of Prange's algorithm and a more limited version of locally-quantum decoding in [CT24].
Quantum vs. Classical Spin: A Comparative Study of Dipolar Spin Dynamics and the Onset of Chaos
This paper compares quantum and classical descriptions of spin dynamics in dipole-coupled systems, finding significant differences between the two approaches despite qualitative similarities. The researchers use Free Induction Decay measurements to benchmark the differences and trace them to fundamental distinctions between quantum and classical spin descriptions.
Key Contributions
- Direct comparison between quantum and classical spin dynamics in dipole-coupled systems
- Demonstration of significant quantitative differences between quantum and classical descriptions despite qualitative similarities
View Full Abstract
We investigate the spin dynamics of a dipole-coupled system by comparing a direct solution of the Schrödinger equation for quantum spins with simulations of classical spins. Although classical spins have long been used in microscopic spin dynamics simulations, we demonstrate that their results differ significantly from those of quantum spins. Using Free Induction Decay as a benchmark, we find that while the overall patterns are qualitatively similar, significant discrepancies emerge at both short and long timescales. We trace these differences to fundamental distinctions in the two descriptions.
Quantum Kernel Advantage over Classical Collapse in Medical Foundation Model Embeddings
This paper demonstrates that quantum support vector machines (QSVMs) outperform classical linear SVMs for binary insurance classification on medical chest X-ray data, showing quantum kernel advantage when using embeddings from medical foundation models. The quantum approach maintains better performance on minority class prediction while classical methods collapse to majority-class predictions.
Key Contributions
- Demonstration of quantum kernel advantage in medical image classification using foundation model embeddings
- Introduction of a two-tier fair comparison framework for evaluating quantum vs classical machine learning performance
- Evidence that quantum kernels maintain higher effective rank and avoid classifier collapse compared to linear kernels
View Full Abstract
We provide evidence of quantum kernel advantage under noiseless simulation in binary insurance classification on MIMIC-CXR chest radiographs using quantum support vector machines (QSVM) with frozen embeddings from three medical foundation models (MedSigLIP-448, RAD-DINO, ViT-patch32). We propose a two-tier fair comparison framework in which both classifiers receive identical PCA-q features. At Tier 1 (untuned QSVM vs. untuned linear SVM, C = 1 both sides), QSVM wins minority-class F1 in all 18 tested configurations (17 at p < 0.001, 1 at p < 0.01). The classical linear kernel collapses to majority-class prediction on 90-100% of seeds at every qubit count, while QSVM maintains non-trivial recall. At q = 11 (MedSigLIP-448 plateau center), QSVM achieves mean F1 = 0.343 vs. classical F1 = 0.050 (F1 gain = +0.293, p < 0.001) without hyperparameter tuning. Under Tier 2 (untuned QSVM vs. C-tuned RBF SVM), QSVM wins all seven tested configurations (mean gain +0.068, max +0.112). Eigenspectrum analysis reveals quantum kernel effective rank reaches 69.80 at q = 11, far exceeding linear kernel rank, while classical collapse remains C-invariant. A full qubit sweep reveals architecture-dependent concentration onset across models. Code: https://github.com/sebasmos/qml-medimage
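As a rough illustration of the pipeline described above, the sketch below builds a toy fidelity kernel on PCA-reduced features and feeds it to a precomputed-kernel SVM. The data are random stand-ins for the frozen foundation-model embeddings, and the product-state angle-encoding kernel is a simplified, classically simulated substitute for the paper's quantum feature map.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for frozen foundation-model embeddings (the paper uses
# MedSigLIP-448 / RAD-DINO / ViT-patch32 features; these are random vectors).
X = rng.normal(size=(200, 64))
y = (rng.random(200) < 0.2).astype(int)   # imbalanced binary labels

q = 8                                      # number of PCA features ~ qubits
Z = PCA(n_components=q).fit_transform(X)
Z = np.pi * (Z - Z.min(0)) / (Z.max(0) - Z.min(0) + 1e-12)   # rescale to [0, pi]

def fidelity_kernel(A, B):
    """Overlap of product states with single-qubit angle encoding:
    |<phi(a)|phi(b)>|^2 = prod_i cos^2((a_i - b_i) / 2)."""
    diff = A[:, None, :] - B[None, :, :]
    return np.prod(np.cos(diff / 2.0) ** 2, axis=-1)

K = fidelity_kernel(Z, Z)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("train accuracy:", clf.score(K, y))
```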
Isotopically enriched epitaxial CaWO$_{4}$ thin films for Er$^{3+}$ spin-photon quantum interfaces
This paper develops improved thin film materials for quantum interfaces by creating isotopically purified calcium tungstate films doped with erbium ions. The researchers reduced nuclear spin interference by using tungsten-186 instead of the naturally occurring tungsten-183, achieving 10 times lower concentrations of the spin-carrying isotope and demonstrating single ion photoluminescence.
Key Contributions
- Demonstrated isotopically purified CaWO4 thin films with 10x reduction in 183W nuclear spin abundance
- Achieved single ion photoluminescence detection in integrated nanophotonic devices
- Established a materials platform for improved spin-photon quantum interfaces with extended coherence times
View Full Abstract
Rare earth ion (REI)-doped oxide thin films are attractive for the application of quantum interconnects due to their stable optical levels and scalability$^{1-3}$. Among them, Er$^{3+}$ doped CaWO$_{4}$ is promising because it possesses narrow optical linewidth transitions and a long spin coherence time$^{4-6}$. The electron spin coherence is limited at high temperatures by paramagnetic impurities and by the presence of the 14.3% $^{183}$W nuclear spin. To further increase the spin coherence time at millikelvin temperatures, where the paramagnetic impurities are frozen out, our approach is to synthesize chemically and isotopically purified thin films as a host material. We first grow non-isotopically enriched Er$^{3+}$ doped CaWO$_{4}$ thin films, which exhibit a 214(13) MHz photoluminescence (PL) inhomogeneous linewidth, indicating the thin film has high crystalline quality. We then grow isotopically enriched CaWO$_{4}$ thin films using an isotopically purified $^{186}$WO$_{3}$ source. Time of flight secondary ion mass spectrometry (ToF-SIMS) was used to measure the relative concentration of W isotopes. $^{183}$W, the only W isotope that has a net nuclear spin and is the major cause of spin decoherence, was at a relative abundance of 1.2%, a factor of 10 lower than natural abundance. We also observed PL emission from single ions after integrating nano-photonic devices with the thin film. These results establish isotopically engineered CaWO$_{4}$ thin films as a promising platform for future studies of nuclear-spin-limited coherence and for scalable rare-earth-ion-based quantum nanophotonic devices.
A Spectral Gap Informed Parameter Schedule for QAOA
This paper improves the Quantum Approximate Optimization Algorithm (QAOA) by using spectral gap information to create better parameter schedules that avoid the difficult optimization problem of finding good variational parameters. The new approach called SGIR-QAOA shows performance improvements over existing methods on both Grover's problem and the Maximum Independent Set problem.
Key Contributions
- Development of Spectral Gap Informed Ramps (SGIR-QAOA) that uses adiabatic Hamiltonian spectral gap information to create parameter schedules
- Demonstration of performance improvements over Linear Ramp QAOA on Grover's problem and Maximum Independent Set optimization
- Method for extrapolating spectral gap information to enable scalability to larger problem sizes
View Full Abstract
A challenge with the Quantum Approximate Optimisation Algorithm (QAOA), and variational algorithms in general, is finding good variational parameters, a task which in itself can be NP-hard. Recent work has sought to de-variationalise QAOA by picking well-informed guesses for the variational parameters. The Linear Ramp QAOA (LR-QAOA) achieves this by using parameter schedules inspired by the quantum adiabatic algorithm. We go a step further and use spectral gap information from an adiabatic Hamiltonian, with the QAOA mixer Hamiltonian as our initial Hamiltonian, to make smooth ramps which we call Spectral Gap Informed Ramps (SGIR-QAOA). SGIR-QAOA schedules perform slow evolution where the spectral gap of the adiabatic Hamiltonian is small. We show that SGIR-QAOA has performance improvements over LR-QAOA on Grover's problem at constant depth and that SGIR-QAOA requires shorter depths to achieve the same optimal solution probability. We then show that these performance benefits extend to a problem with potential practical applications -- the Maximum Independent Set (MIS) problem. Finally, we demonstrate the scalability of the SGIR-QAOA method using extrapolated spectral gap information for scales that the spectral gap cannot be exactly evaluated, and show that the advantage appears to persist under mild depolarising noise.
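The following numpy sketch illustrates the general idea of a gap-informed ramp on a toy 3-qubit instance: the gap of H(s) = (1-s)H_mixer + s H_problem is computed by exact diagonalization and the schedule advances slowly where the gap is small, before being discretized into per-layer QAOA angles. The specific ds/dt proportional-to-gap-squared rule, the toy cost values, and the total time T are illustrative assumptions, not the paper's SGIR construction.

```python
import numpy as np
from functools import reduce

# Small 3-qubit toy problem: diagonal cost Hamiltonian and transverse-field mixer.
X = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)

def kron_all(ops):
    return reduce(np.kron, ops)

n = 3
Hm = -sum(kron_all([X if j == i else I2 for j in range(n)]) for i in range(n))
costs = np.array([0, 1, 1, 2, 1, 2, 2, 3], dtype=float)   # toy cost per bitstring
Hp = np.diag(costs)

def gap(s):
    """Gap between the two lowest eigenvalues of H(s) = (1-s) Hm + s Hp."""
    evals = np.linalg.eigvalsh((1 - s) * Hm + s * Hp)
    return evals[1] - evals[0]

# Gap-informed schedule: advance s slowly where the gap is small
# (here ds/dt proportional to gap(s)^2, a local-adiabatic-style heuristic).
s_grid = np.linspace(0, 1, 201)
g = np.array([gap(s) for s in s_grid])
t_of_s = np.cumsum(1.0 / g**2)
t_of_s = (t_of_s - t_of_s[0]) / (t_of_s[-1] - t_of_s[0])   # normalize to [0, 1]

# Discretize into p QAOA layers: per-layer (gamma, beta) from the warped schedule.
p, T = 10, 5.0
layer_s = np.interp((np.arange(p) + 0.5) / p, t_of_s, s_grid)
gammas = (T / p) * layer_s          # cost-Hamiltonian angles
betas = (T / p) * (1 - layer_s)     # mixer angles
print(np.round(layer_s, 3))
```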
Gauge-covariant projected entangled paired states for interacting systems in a magnetic field
This paper develops a new mathematical method for simulating quantum many-body systems in magnetic fields using tensor networks called PEPS, which maintains physical symmetries regardless of how the magnetic field is mathematically described. The approach allows efficient computation of ground states for interacting particles on 2D lattices in uniform magnetic fields.
Key Contributions
- Development of gauge-covariant PEPS ansatz that preserves translation invariance for systems in magnetic fields
- Method for simulating many-body quantum systems in magnetic fields without dependence on gauge choice or extended unit cells
View Full Abstract
The Hamiltonian for a system of itinerant particles on a two-dimensional lattice in a uniform magnetic field reduces the translational symmetry to a magnetic translation group, because of the need to choose a particular gauge for the vector potential. Nonetheless, in many situations all physical observables of the ground state remain entirely translation invariant. In this work, we introduce a projected entangled-pair state (PEPS) wavefunction with a pattern of virtual flux tensors, for which all physical expectation values are translation invariant by construction, possibly within an enlarged unit cell reflecting any symmetry breaking in the target state. Moreover, we show that the usual contraction and optimization methods for translation-invariant PEPS can be used, with the magnetic flux per plaquette only entering as a continuous parameter in the tensor network contractions. Therefore, our approach provides a method for simulating an interacting many-body system in a uniform magnetic field independently of the gauge choice for the vector potential and bypassing the need to consider extended magnetic unit cells.
Optimization of two-photon excitation by indistinguishable photons in a three-level atom
This paper studies how to optimally excite a three-level atom using pairs of indistinguishable photons, finding that perfect excitation is possible with pulses that are time-reversed versions of spontaneously emitted photon pairs. The work compares this ideal strategy with experimentally realistic pulse shapes and analyzes how quantum interference affects the optimal excitation conditions.
Key Contributions
- Analytical determination of optimal two-photon states for maximizing atomic excitation using time-reversed spontaneous emission states
- Comparison of ideal excitation strategies with experimentally accessible pulse shapes including Gaussian and coherent states
- Analysis of how quantum interference in indistinguishable photons modifies optimal excitation conditions
View Full Abstract
We investigate the excitation of a three-level ladder-type atom by a unidirectional field with a pair of indistinguishable photons. Starting from an analytical expression for the two-photon absorption probability, we determine the two-photon state that maximizes the population of the upper atomic state at a chosen time and show that, in the limit of an infinitely long pulse, perfect excitation is possible. The optimal state is identified as the time-reversed counterpart of the two-photon state emitted in spontaneous cascade decay. We then compare this ideal excitation strategy with experimentally accessible families of states, including symmetrized Gaussian product states, temporally correlated Gaussian states, and coherent pulses. We analyze how the optimal excitation conditions depend on the ratio of atomic decay rates and on the separation of the atomic transition frequencies. For indistinguishable photons described by Gaussian pulses, quantum interference may shift the maxima of the marginal spectral distribution away from the atomic resonances and qualitatively modify the optimal excitation strategy. Our results clarify the role of indistinguishability and correlations in two-photon absorption and provide guidance for designing realistic excitation schemes in quantum-optical light-matter interfaces.
Entropy Signatures of Collective Modes and Vortex Dynamics in Rotating Two-Dimensional Bose-Einstein Condensates
This paper studies rotating two-dimensional Bose-Einstein condensates with vortices, examining how they respond to sudden changes in interactions and trap geometry. The researchers use information theory measures to characterize the complex dynamics and correlations that emerge, particularly when giant vortices undergo chaotic splitting behavior.
Key Contributions
- Demonstrates extreme sensitivity of giant vortices to excitation protocols and their chaotic splitting dynamics
- Establishes information-theoretic measures as effective tools for quantifying correlations and complexity in rotating quantum gases
View Full Abstract
We investigate the nonequilibrium dynamics of a two-dimensional rotating Bose gas confined in a symmetric anharmonic trap, employing the multiconfigurational time-dependent Hartree method for bosons (MCTDHB). We study states ranging from vortex-free configurations to multicharged (giant) vortices, prepared by tuning the rotation frequency, and analyze their response to sudden interaction and trap quenches. In vortex-free states, interaction quenches induce regular breathing--like dynamics, whereas in the presence of giant vortices they lead to symmetry-breaking surface excitations. In contrast, trap deformations that excite quadrupole-like modes produce stable oscillations in vortex-free condensates but trigger rapid, irregular, and effectively chaotic splitting dynamics in multicharged vortices. To characterize these processes beyond conventional density and phase observables, we employ information-theoretic measures, including marginal and joint entropies, mutual information, and Kullback-Leibler (KL) divergence, supplemented by an angular-resolved KL measure that captures symmetry breaking and azimuthal localization. We find that chaotic splitting is accompanied by a pronounced growth of information-theoretic indicators, signaling the buildup of many-body correlations and increasing complexity in the system dynamics. Our results demonstrate the extreme sensitivity of giant vortices to excitation protocols and establish information-theoretic measures as a powerful framework to quantify correlations and complexity in rotating quantum gases.
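For readers unfamiliar with the diagnostics used here, the short numpy sketch below computes marginal and joint entropies, mutual information, and the KL divergence from discretized 2D densities. The Gaussian test densities are placeholders, and the paper's angular-resolved KL measure is not included.

```python
import numpy as np

def info_measures(P, Q):
    """Entropies, mutual information, and KL divergence for discretized
    2D probability distributions P and Q (e.g. |psi(x, y)|^2 on a grid)."""
    P = P / P.sum()
    Q = Q / Q.sum()
    px, py = P.sum(axis=1), P.sum(axis=0)
    eps = 1e-300
    S_joint = -np.sum(P * np.log(P + eps))
    S_x = -np.sum(px * np.log(px + eps))
    S_y = -np.sum(py * np.log(py + eps))
    mutual_info = S_x + S_y - S_joint
    kl = np.sum(P * np.log((P + eps) / (Q + eps)))
    return S_x, S_y, S_joint, mutual_info, kl

# Toy densities on a 64x64 grid (a Gaussian and a displaced copy).
x = np.linspace(-3, 3, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
P = np.exp(-(X**2 + Y**2))
Q = np.exp(-((X - 0.5)**2 + Y**2))
print(info_measures(P, Q))
```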
Balancing Quantum Memories in Asymmetric Repeaters for High-Fidelity Entanglement Distribution
This paper addresses the problem of optimally allocating quantum memories in asymmetric quantum repeaters to improve entanglement distribution. The authors develop a dynamic memory allocation strategy that balances the trade-off between entanglement generation rate and fidelity by reducing mismatches between left and right side entanglement generation.
Key Contributions
- Dynamic optimal memory allocation strategy for asymmetric quantum repeaters
- Statistical lower bounds on achievable rate and fidelity under optimal allocation
View Full Abstract
At the core of the quantum Internet lie quantum repeaters that enable remote end-to-end entanglement generation. Fundamentally, the entanglement generation rate and fidelity of quantum repeaters constitute the bottleneck for end-to-end performance. To achieve high rates, quantum repeaters employ quantum memory multiplexing. In a high-rate standard repeater, each memory sequentially generates an entanglement with its neighboring nodes and then applies entanglement swapping. This, however, results in low fidelity due to decoherence of the first-formed entanglement in the sequential generation process. By allocating different numbers of memories to simultaneously form entanglements with the left and right adjacent nodes, quantum repeaters reduce high waiting times and achieve high fidelity. In such a repeater, a mismatch problem arises due to the difference between the probabilistic number of generated entanglements on both sides. Consequently, some entanglements remain stored until opposite entanglements are available. The mismatch problem reduces the repeater rate and particularly the entanglement fidelity. In this paper, we consider the mismatch problem in an asymmetric repeater with different distances to its adjacent nodes. To mitigate the mismatch problem, we derive a dynamic optimal memory allocation. Under the optimal allocation, we derive statistical lower bounds on the achievable rate and fidelity. We demonstrate that the optimal allocation significantly improves the fidelity while maintaining a comparable rate to the standard repeater. In contrast, our results show that fixed memory allocation may be detrimental to the fidelity.
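A simple Monte Carlo model (an illustrative simplification, not the paper's analytic allocation rule) shows why the left/right memory split matters: with asymmetric link success probabilities, both the number of matched pairs and the leftover mismatch depend strongly on how the m memories are divided.

```python
import numpy as np

rng = np.random.default_rng(1)

def swaps_and_mismatch(k, m, p_left, p_right, trials=20000):
    """Monte Carlo estimate of matched pairs and leftover (mismatched)
    entanglements when k of m memories face the left link and m-k the right.
    One attempt per memory per round; toy model, no decoherence included."""
    left = rng.binomial(k, p_left, size=trials)
    right = rng.binomial(m - k, p_right, size=trials)
    return np.mean(np.minimum(left, right)), np.mean(np.abs(left - right))

m, p_left, p_right = 10, 0.8, 0.3   # asymmetric link success probabilities (toy values)
for k in range(1, m):
    swaps, mismatch = swaps_and_mismatch(k, m, p_left, p_right)
    print(f"k_left={k}: matched={swaps:.2f}, mismatch={mismatch:.2f}")
```

Balancing the expected successes on the two sides (roughly k times p_left against (m-k) times p_right) keeps the mismatch small, which is the intuition behind allocating memories dynamically in the asymmetric case.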
Witnessing entanglement between photon and matter due to graviton exchange
This paper proposes a method to detect quantum entanglement between photons and matter that arises from the exchange of gravitons, providing a potential experimental test for the quantum nature of gravity. The researchers develop a witness criterion using Stokes parameters to identify this entanglement, which could serve as a laboratory signature of quantum gravity effects.
Key Contributions
- Development of a PPT witness criterion for detecting gravity-mediated entanglement between photons and spin qubits
- Theoretical framework using Stokes parameters to quantify entanglement arising from quantum gravitational interactions
View Full Abstract
The paper presents a scheme to detect entanglement arising from the quantum nature of gravity between a spin qubit and photons, using Stokes parameters. One of the crucial tests of the general theory of relativity is the bending of light due to the curvature. Recently, a quantum counterpart of this experiment to test the quantum nature of the gravitational interaction has been proposed, in which the spin-2, massless graviton yields entanglement between matter and a photon sector. Hence, it provides one of the most crucial experimental signatures for testing the quantum nature of gravity in a lab, since only spin-2-induced entanglement can yield the correct deflection of light due to matter. Here, we propose a positive partial-transpose (PPT) witness criterion for witnessing such an entanglement. We scan the entangled states in this context by studying the overlap of the final state, which is proportional to the entanglement phase. We exploit the Stokes observables to measure the photon state and the spins in the matter sector, thereby constructing a witness for the quantum nature of gravity in this setup. To quantify this entanglement, we will couple the photon to a local oscillator, whose phase need to be controlled to probe the orthogonal components of the macroscopic interference in the laser beam. We have shown that for a non-maximally entangled state mediated by the quantum nature of gravity, the witness attains a maximal negativity of $-0.052$. Our findings indicate that this witness effectively detects entanglement within the range $0.71 \leq |γ| < 1$, where $γ$ is the overlap between the two coherent states of the photon, providing a clear signature of quantum correlations.
Improving Zero-Noise Extrapolation via Physically Bounded Models
This paper improves zero-noise extrapolation (ZNE), a quantum error mitigation technique, by adding physical constraints to extrapolation models to prevent unphysical predictions. The authors test their bounded models on 180,000 synthetic circuits and real quantum hardware, showing improved reliability and stability compared to unconstrained approaches.
Key Contributions
- Introduction of physically bounded variants of polynomial, exponential, and polynomial-exponential extrapolation models for ZNE
- Large-scale benchmarking on 180,000 circuits showing improved stability and reduced unphysical predictions
- Demonstration that physical constraints can be incorporated into existing ZNE workflows with minimal modification
View Full Abstract
Zero-noise extrapolation (ZNE) mitigates errors in near-term quantum devices by extrapolating measurements obtained at amplified noise levels to estimate noise-free expectation values. In practice, commonly used extrapolation models are fitted without enforcing physical constraints, which can yield predictions outside the valid range of quantum observables. In this work, we introduce physically bounded variants of polynomial, exponential, and polynomial--exponential extrapolation models by explicitly parameterizing the zero-noise estimate and constraining it during optimization. We evaluate the approach using a large synthetic benchmark comprising 180,000 circuits and approximately 3.6 million ZNE experiments generated under realistic device noise models derived from IBM quantum backends. We also perform preliminary validation on real quantum hardware using GHZ and W-state circuits. Across the synthetic benchmark, bounded extrapolation substantially reduces unphysical predictions and improves the stability of exponential- and polynomial--exponential-family models, whereas polynomial models show little difference between bounded and unbounded variants. Hardware experiments show similar qualitative behaviour: bounded models generally avoid pathological extrapolations and often provide a more reliable balance between accuracy and usable coverage. At the same time, the results highlight practical limitations of current devices, including stronger-than-expected noise effects and variability not fully captured by simulation models. These results suggest that enforcing physical constraints during extrapolation improves the reliability of ZNE and that this approach can be incorporated into existing workflows with minimal modification.
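As a sketch of the bounded-fit idea, the snippet below parameterizes an exponential extrapolation model directly by its zero-noise value and constrains that value to the physical range of a Pauli expectation during fitting with scipy. The measured values and the specific parameterization are illustrative assumptions, not the paper's exact model families.

```python
import numpy as np
from scipy.optimize import curve_fit

# Measured expectation values at amplified noise levels (illustrative numbers,
# e.g. a Pauli observable whose true value must lie in [-1, 1]).
noise_factors = np.array([1.0, 1.5, 2.0, 3.0])
measured = np.array([0.62, 0.48, 0.39, 0.26])

def exp_model(lam, e0, floor, rate):
    """Exponential decay parameterized directly by the zero-noise value e0:
    f(0) = e0 and f(lambda) -> floor as lambda grows."""
    return floor + (e0 - floor) * np.exp(-rate * lam)

# Bounded fit: the zero-noise estimate e0 is constrained to the physical range
# [-1, 1]; rate is kept non-negative so the model is a genuine decay.
popt, _ = curve_fit(
    exp_model, noise_factors, measured,
    p0=[0.8, 0.0, 0.5],
    bounds=([-1.0, -1.0, 0.0], [1.0, 1.0, 10.0]),
)
print("bounded zero-noise estimate:", popt[0])
```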
Noise-robust 1-copy distillation protocol for all distillable Bell-diagonal qutrits
This paper develops a method to purify noisy entangled qutrit pairs (3-level quantum systems) into cleaner entangled states using only local operations and classical communication. The researchers solve the distillability problem for a specific class of qutrit states and create a protocol that works well even when the quantum states are corrupted by noise.
Key Contributions
- Solved the distillability problem for Bell-diagonal qutrits with Weyl structure, proving PPT violation is necessary and sufficient for 1-distillability
- Developed a noise-robust entanglement distillation protocol that constructs Schmidt rank 2 eigenvectors from negative eigenvalues of partially transposed density matrices
View Full Abstract
Entanglement distillation is the process of converting noisy entangled states into maximally entangled pure states via local operations and classical communication. A long-standing, unresolved question is which entangled states are amenable to distillation, known as the distillability problem. We solve this for Bell-diagonal qutrits with Weyl structure, and present a noise-robust scheme for entanglement distillation. In particular, we find that violating the positive partial transposition (PPT) criterion is necessary and sufficient for the 1-distillability of these states. For this, we construct a Schmidt rank 2 eigenvector of the partially transposed density matrix associated with its unique, three-fold degenerate negative eigenvalue. This feature makes the derived entanglement distillation protocol resilient to white-noise effects on the quantum states. Our results thus make noisy entangled qutrit pairs more accessible for future quantum technologies.
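A compact numpy sketch of the PPT test that underlies the distillability result: it builds the qutrit Bell basis from Weyl operators, forms a Bell-diagonal state mixed with white noise (toy weights), and checks the minimum eigenvalue of the partial transpose. The distillation protocol itself, built from the Schmidt-rank-2 eigenvector, is not implemented here.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)

def weyl(k, l):
    """Weyl operator W_{kl} = sum_j omega^{jk} |j+l mod d><j|."""
    W = np.zeros((d, d), dtype=complex)
    for j in range(d):
        W[(j + l) % d, j] = omega ** (j * k)
    return W

# Maximally entangled reference state and the 9 qutrit Bell states.
phi0 = np.eye(d).reshape(d * d) / np.sqrt(d)
bell = [np.kron(weyl(k, l), np.eye(d)) @ phi0 for k in range(d) for l in range(d)]

# A Bell-diagonal state: mostly |Omega_00> mixed with white noise (toy weights).
p = np.full(d * d, 0.05)
p[0] = 1 - 0.05 * (d * d - 1)
rho = sum(pi * np.outer(v, v.conj()) for pi, v in zip(p, bell))

# Partial transpose on the second qutrit and its minimum eigenvalue.
rho_pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
min_eig = np.linalg.eigvalsh(rho_pt).min()
print("min eigenvalue of rho^{T_B}:", min_eig)   # negative => NPT => 1-distillable
```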
Dualistic operational characterization of device-dependent correlation sets via convex analysis in the $(2,m,2)$ Bell scenario
This paper analyzes correlation patterns in Bell experiments with two-qubit systems, developing mathematical tools to distinguish between separable states, standard quantum states, and hypothetical beyond-quantum states. The authors derive explicit formulas for detecting entanglement and characterizing these different types of quantum correlations using convex analysis.
Key Contributions
- Derived explicit support and gauge functions for correlation sets in the (2,m,2) Bell scenario that provide optimal entanglement witnesses and robustness quantification
- Showed that extremal quantum correlations are realized by maximally entangled states and identified fundamental limits governed by linearly independent measurement directions
View Full Abstract
We analyze device-dependent correlation sets generated by fixed local dichotomic measurements for two-qubit systems in the $(2,m,2)$ Bell scenario. We consider three fundamental state spaces for the composite system: the separable state space, the standard quantum state space, and the maximal tensor-product state space, which contains beyond-quantum states compatible with local quantum measurements. We formulate the corresponding correlation sets for general fixed dichotomic measurements and, in the traceless case, derive particularly simple explicit formulae for their support and gauge functions. These functions furnish dual operational characterizations of the three correlation sets: the support functions give optimal witnesses for entanglement and beyond-quantum states, whereas the gauge functions quantify the robustness of these detections against depolarizing noise. We further derive convex-hull representations that elucidate the extremal structures of the correlation sets and the physical states realizing them, showing in particular that extremal quantum correlations are realized by maximally entangled states. The fundamental limits of these dual operational tasks are governed solely by the smaller of the numbers of linearly independent measurement directions available to Alice and Bob. When both parties have three linearly independent measurement directions, our entanglement criterion detects Werner states up to the optimal PPT threshold $p_{\mathrm{crit}}=2/3$. For beyond-quantum-state detection, a nontrivial separation from the quantum set occurs only under the same measurement condition; in that case, the same optimal noise threshold is attained for an extremal state in the maximal tensor-product state space.
Impact of thermal and dissipative effects in a periodically-kicked quantum battery
This paper studies quantum batteries using a kicked-Ising model to understand how thermal effects and environmental noise impact their energy storage and extraction performance. The researchers develop both analytical and numerical methods to characterize quantum battery performance under realistic conditions.
Key Contributions
- Systematic framework for analyzing quantum battery performance under thermal and dissipative effects
- Analytical and numerical characterization of energy injection and extraction using ergotropy as a figure of merit
- Identification of operating regimes where quantum battery charging remains robust despite environmental decoherence
View Full Abstract
Quantum batteries (QBs) have emerged as a promising route for fast energy storage and on-chip power supply in quantum devices. Given the limited analytical understanding of open Floquet QBs, we employ the kicked-Ising model as a tractable platform to systematically study its performance under realistic conditions, including finite temperature effects and environmental dissipation. Starting from Gibbs states of the transverse-field Ising model, we incorporate thermal and decoherence effects along the evolution, using both analytical and numerical approaches. Taking ergotropy as a central figure of merit, we characterize the injected and extractable energy, and identify regimes where charging remains robust despite environmental effects. Our results provide a systematic framework for assessing QB performance under thermal and dissipative effects.
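Ergotropy, the paper's central figure of merit, has a simple closed form for any state and Hamiltonian; the sketch below implements that generic definition. The kicked-Ising dynamics and the dissipation are not simulated here, and the single-qubit example values are arbitrary.

```python
import numpy as np

def ergotropy(rho, H):
    """Maximum work extractable by unitaries:
    W = Tr(rho H) - sum_i r_i eps_i, with r_i the eigenvalues of rho sorted
    in decreasing order and eps_i the eigenvalues of H sorted in increasing order."""
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]
    eps = np.sort(np.linalg.eigvalsh(H))
    return np.real(np.trace(rho @ H)) - np.dot(r, eps)

# Toy single-qubit example: a partially charged state under H = diag(0, 1).
H = np.diag([0.0, 1.0])
rho = np.array([[0.3, 0.2], [0.2, 0.7]])
print("ergotropy:", ergotropy(rho, H))
```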
Few-Shot Cross-Device Transfer for Quantum Noise Modeling on Real Hardware
This paper develops a machine learning approach to transfer quantum noise models between different quantum devices using only a small amount of calibration data. The researchers train neural networks to correct for device-specific noise patterns and show that models can be adapted to new quantum hardware with just 20 fine-tuning samples.
Key Contributions
- Demonstration of cross-device transfer learning for quantum noise modeling with minimal data requirements
- Identification that CX gate errors are the primary source of device-to-device noise variation
- Development of a practical framework for quantum error mitigation that can adapt to different quantum hardware platforms
View Full Abstract
In the noisy intermediate-scale quantum (NISQ) regime, quantum devices contain hardware-specific noise sources which restrict device-invariant error mitigation strategies. We explore transfer learning approaches to apply noise models learned on one quantum device to a different device with the help of a small amount of data. We create a real-hardware dataset from two IBM quantum devices, ibm_fez (source) and ibm_marrakesh (target), comprising 170 noisy and ideal circuit output distributions, with device calibration features added. We train a residual neural network on the source device to map noisy to ideal outcomes. The zero-shot transfer test shows a KL divergence of 1.6706 (up from 0.3014), establishing device specificity. With K = 20 fine-tuning samples, KL drops to 1.1924 (28.6% improvement over zero-shot), recovering 34.9% of the gap between zero-shot and in-domain KL. Ablation studies reveal that the major cause of mismatches across devices is CX gate error, followed by readout error. The results show quantum noise can be learned and fine-tuned with minimal samples, and provide a plausible approach to cross-device quantum error mitigation.
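A minimal PyTorch sketch of the fine-tuning stage, under the assumption of a small residual network that maps noisy output distributions to ideal ones: the architecture, the synthetic calibration data, and the hyperparameters are placeholders, not the paper's trained model or the IBM datasets.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_bitstrings = 16   # output distribution size for a small circuit (toy)

class ResidualDenoiser(nn.Module):
    """Residual MLP mapping a noisy output distribution to an ideal one
    (a simplified stand-in for the paper's residual network)."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, p_noisy):
        logits = torch.log(p_noisy + 1e-9) + self.body(p_noisy)   # residual correction
        return torch.softmax(logits, dim=-1)

def kl(p_ideal, p_pred):
    return torch.sum(
        p_ideal * (torch.log(p_ideal + 1e-9) - torch.log(p_pred + 1e-9)), dim=-1
    ).mean()

# Pretend the model was already pretrained on the source device (ibm_fez in the
# paper); here it is just a freshly initialized network for illustration.
model = ResidualDenoiser(n_bitstrings)

# K = 20 calibration samples from the target device (synthetic placeholders).
K = 20
noisy = torch.softmax(torch.randn(K, n_bitstrings), dim=-1)
ideal = torch.softmax(torch.randn(K, n_bitstrings) * 2, dim=-1)

# Few-shot fine-tuning: a handful of low-learning-rate epochs on the K samples.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    loss = kl(ideal, model(noisy))
    loss.backward()
    opt.step()
print("fine-tuned KL on calibration set:", float(loss))
```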
Enhancing Phase Retrievability of Quantum Channels via Interferometric Coupling
This paper studies how to reconstruct quantum states from measurements of quantum channels, introducing a new interferometric method that combines two channels coherently to improve phase retrieval capabilities. The work connects quantum information theory with frame theory and shows that interference effects can enhance state reconstruction even when individual channels cannot achieve it.
Key Contributions
- Established equivalence between quantum channel phase retrievability and complementary channel pure-state informational completeness
- Demonstrated that interferometric coupling can enhance phase retrievability through coherent interference effects
View Full Abstract
Phase retrievability of a quantum channel asks whether pure states can be reconstructed from suitable measurements. In this paper, we study this problem from three complementary viewpoints: quantum information theory, operator-valued frames, and the physical realization through quantum interferometry. We first show that a quantum channel is phase retrievable if and only if its complementary channel is pure-state informationally complete. This structural characterization leads to several consequences for phase retrievability, including criteria involving the dimension of the complementary operator system, Choi-rank type bounds, and specific results for entanglement breaking channels and twirling channels. We then introduce an interferometric coupling in which two arm channels are coherently recombined through port operators $M_i(θ)=A_i+e^{iθ}B_i$. Unlike classical mixing, this construction produces interference cross terms that can enlarge the complementary operator system and thereby enhance phase retrievability. From the frame theory viewpoint, the interferometer realizes a coherent coupling of operator-valued frames. To quantify this effect, we introduce injectivity indices for completely positive maps. The examples in Section 5 show that coherent interference can significantly improve phase retrieval behavior even when the arm channels are individually not phase retrievable.
Practical lower bounds for hybrid quantum interior point methods in linear programming
This paper evaluates whether hybrid quantum interior point methods for solving linear programming problems can offer practical advantages over classical solvers, finding that quantum approaches will not provide practical benefits for realistic problem instances due to inherent runtime limitations.
Key Contributions
- Rigorous benchmarking methodology comparing hybrid quantum interior point methods against classical linear programming solvers across diverse problem instances
- Demonstration that quantum linear solver approaches fail to achieve practical advantage over classical methods like HiGHS solver under realistic assumptions
View Full Abstract
Quantum interior point methods (QIPMs) promise polynomial speed-ups over classical solvers for linear programming by outsourcing the solution of Newton linear systems to quantum linear solvers (QLSAs). However, asymptotic speed-ups do not necessarily translate to practical advantages on realistic problem instances. In this work, I evaluate whether practical advantage of a standard hybrid QIPM pipeline can already be excluded relative to the classical open-source solver HiGHS on a broad and diverse collection of LP instances spanning eight problem families, including public benchmark libraries, such as MIPlib, and relaxations of combinatorial optimisation problems. Following the hybrid benchmarking paradigm initiated by Cade et al., I derive rigorous lower bounds on the quantum runtime under a series of highly benevolent assumptions and compare them against classical runtimes. I equip the QIPMs with the best-performing functional QLSA, the Chebyshev-based method, as identified by Lefterovici et al., and evaluate two Newton system formulations proposed by Mohammadisiahroudi et al.: the modified normal equation system and the orthogonal subspace system. The exclusion analysis yields a consistent negative picture: across all instances and for any realistic quantum cycle duration, the quantum runtime lower bounds already exceed the classical runtimes, establishing that these hybrid QIPMs will offer no practical advantage over good classical solvers for realistic linear programming instances.
New non-Euclidean neural quantum states from additional types of hyperbolic recurrent neural networks
This paper develops new types of neural quantum states using non-Euclidean (hyperbolic) recurrent neural networks to better simulate quantum many-body systems. The researchers test four different hyperbolic neural network variants on Heisenberg spin models and find they consistently outperform traditional Euclidean approaches in variational Monte Carlo simulations.
Key Contributions
- Extension of non-Euclidean neural quantum states to include Poincaré RNN, Lorentz RNN, and Lorentz GRU variants
- Demonstration that hyperbolic neural network architectures consistently outperform Euclidean counterparts in quantum many-body simulations
- Identification of Lorentz RNN as the most efficient hyperbolic variant, achieving superior performance with up to three times fewer parameters
View Full Abstract
In this work, we extend the class of previously introduced non-Euclidean neural quantum states (NQS), which consists only of the Poincaré hyperbolic GRU, to new variants including Poincaré RNN as well as Lorentz RNN and Lorentz GRU. In addition to constructing and introducing the new non-Euclidean hyperbolic NQS ansatzes, we generalized the results of our earlier work regarding the definitive outperformances delivered by hyperbolic Poincaré GRU NQS ansatzes when benchmarked against their Euclidean counterparts in the Variational Monte Carlo (VMC) experiments involving the quantum many-body settings of the Heisenberg $J_1J_2$ and $J_1J_2J_3$ models, which exhibit hierarchical structures in the forms of the different degrees of nearest-neighbor interactions. Here, in particular, using larger systems consisting of 100 spins, we found that all four hyperbolic RNN/GRU NQS variants always outperformed their respective Euclidean counterparts. Specifically, for all $J_2$ and $(J_2,J_3)$ couplings considered, including $J_2=0.0$, Lorentz RNN NQS and Poincaré RNN NQS always outperformed Euclidean RNN NQS, while Lorentz/Poincaré GRU NQS always outperformed Euclidean GRU NQS, with a single exception when $J_2=0.0$ for Poincaré GRU NQS. Furthermore, among the four hyperbolic NQS ansatzes, depending on the specific $J_2$ or $(J_2, J_3)$ couplings, on four out of eight experiment settings, Lorentz GRU and Poincaré GRU took turns to be the top performing variant among all Euclidean and hyperbolic NQS ansatzes considered, while Lorentz RNN, with up to three times fewer parameters, was capable of not only surpassing the Euclidean GRU eight out of eight times but also outperforming both Lorentz GRU and Poincaré GRU four out of eight times, to emerge as the best overall hyperbolic NQS ansatz.
Exhaustive and feasible parametrisation with applications to the travelling salesperson problem
This paper develops a new approach for quantum optimization algorithms that can guarantee finding optimal solutions to constrained problems like the traveling salesperson problem. The method uses group theory to construct quantum circuits that can reach any valid solution while avoiding invalid ones, unlike current approaches that only work asymptotically.
Key Contributions
- Introduction of exhaustively parametrised, feasibility-respecting quantum circuits for constrained combinatorial optimization
- Development of abstract pipeline using group theory and generating sequences to construct such circuits
- Demonstration on traveling salesperson problem with numerical validation up to 9 cities
View Full Abstract
This paper introduces the concept of exhaustively parametrised, feasibility-respecting quantum circuits for constrained combinatorial optimisation problems. Such circuits can reach, given the right parameter values, every feasible solution with certainty -- including the optimum -- with a fixed number of parameters, while avoiding infeasible solutions altogether. This is in sharp contrast to conventional quantum alternating operator ansatz schemes, which are merely guaranteed to reach the optimum asymptotically. We introduce an abstract pipeline for constructing exhaustively parametrised, feasibility-respecting circuits from a transitive group action on a problem's feasible set. Our constructions rely on the simple combination of the group action with group representation and the novel notion of generating sequences: group elements in fixed order, possibly with repetitions, that generate the entire group. That is, we trace expressivity of parametrised quantum circuits back to the most fundamental concepts of group theory. We apply this pipeline to two concrete examples for the travelling salesperson problem, thus showing that exhaustively parametrised, feasibility-respecting circuits are not an empty definition. Furthermore, we provide numerical proof-of-principles on instances with up to nine cities, comparing the suitability of our constructions for parameter optimisation purposes against established mixers.
Catalytic Enhancement of Coherence Fraction in Noisy Quantum Channels and Characterization of Strictly Incoherent Operations
This paper investigates how to use catalytic preprocessing to enhance the coherence fraction of quantum states after they pass through noisy channels, and provides theoretical characterization of strictly incoherent operations that preserve quantum coherence.
Key Contributions
- Development of catalytic methods to enhance coherence fraction in noisy quantum channels
- Necessary and sufficient conditions for characterizing Strictly Incoherent Operations (SIO)
- Practical application to phase discrimination tasks with numerical examples
View Full Abstract
In realistic quantum information processing tasks, quantum states are inevitably affected by environmental noise, leading to decoherence and degradation of useful quantum resources. The coherence fraction, which serves as an important figure of merit for several quantum protocols, may decrease significantly after the action of a noisy channel. Such degradation can result in unsatisfactory performance in real-world applications. In this work, we investigate whether catalysis can be used to pre-process the input state to enhance the coherence fraction of an output state from a quantum channel. Specifically, we study whether using a processed state $ρ_s'$ as the input to a quantum channel $Λ$, instead of the original state $ρ_s$, can yield an output state $Λ(ρ_s')$ whose coherence fraction exceeds that of $Λ(ρ_s)$. We analyze the conditions under which such an improvement is possible. We also provide a practical application of our setup for the phase discrimination task. Furthermore, we establish a necessary and sufficient condition for an incoherent-state-preserving CPTP (Completely Positive Trace Preserving) map $\mathcal{E}$ to be a particular type of Strictly Incoherent Operation (SIO). This characterization provides a new structural understanding of SIO and clarifies its role in coherence manipulation. Our results offer practical insights into coherence preservation and enhancement in noisy quantum processes and may be useful for optimizing quantum information protocols under realistic conditions. We also provide numerical examples to support our claims.
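As a point of reference for the quantity being enhanced, the sketch below computes one common form of the coherence fraction, the overlap with the maximally coherent state, and shows how a partially dephasing channel degrades it. The catalytic pre-processing studied in the paper is not modeled here, and the channel and dimension are toy choices.

```python
import numpy as np

def coherence_fraction(rho):
    """Overlap of rho with the maximally coherent state
    |+> = (1/sqrt(d)) sum_i |i> (one common form of the coherence fraction)."""
    d = rho.shape[0]
    plus = np.full(d, 1.0 / np.sqrt(d))
    return np.real(plus @ rho @ plus)

def dephasing(rho, p):
    """Partially dephasing channel: keeps rho with prob 1-p, fully dephases with prob p."""
    return (1 - p) * rho + p * np.diag(np.diag(rho))

d = 3
plus = np.full(d, 1.0 / np.sqrt(d))
rho = np.outer(plus, plus)                     # maximally coherent input
print(coherence_fraction(rho))                 # 1.0
print(coherence_fraction(dephasing(rho, 0.6))) # degraded by the noisy channel
```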
On the complexity of quantum numerical integration: an angle-structure characterization
This paper studies quantum numerical integration using quantum amplitude estimation, introducing a hierarchy of function classes based on angle-structure complexity. The authors show that for certain classes of functions with limited encoding complexity, quantum methods can achieve better scaling than classical approaches while accounting for the full cost of quantum state preparation.
Key Contributions
- Introduction of grid function class hierarchy G_n^(d) based on multilinear angle maps with polynomial-time classical membership testing
- Proof of quantum advantage for numerical integration including state preparation costs, achieving O(ε^-1 log(1/ε)) scaling for d=1 case
- Unconditional separation result showing quantum oracle advantage for functions with Sobolev regularity s<1/2
- Experimental validation on real quantum hardware (SpinQ Triangulum and IBM Kingston) demonstrating the theoretical hierarchy predictions
View Full Abstract
We study numerical integration on $[0,1]$ by quantum amplitude estimation (QAE), focusing on the cost of constructing the amplitude oracle. Although QAE improves the statistical component of the integration error, this advantage is relevant only when the integrand has low encoding complexity. We introduce a hierarchy of grid function classes $\mathcal{G}_n^{(d)}$, defined by requiring the angle map $Θ_g:\{0,1\}^n\to[0,π]$ to be multilinear of degree at most $d$. Membership is classically checkable in $O(n2^n)$ time by the Walsh--Hadamard transform. For $g\in\mathcal{G}_n^{(d)}$, the encoding operator factorises into $\sum_{k=0}^d\binom{n}{k}$ multi-controlled $R_Y$ gates, interpolating between an affine $O(n)$ regime and the generic exponential regime. Combining this structure with classical discretisation estimates for $g\in C^α[0,1]$, we obtain a depth-versus-accuracy trade-off: gate count $O((\log(1/\varepsilon))^d\varepsilon^{-1})$ suffices to achieve $\varepsilon$-accuracy with constant probability. For $d=1$ this becomes $O(\varepsilon^{-1}\log(1/\varepsilon))$, improving over classical Monte Carlo for every $α\ge1$. We also prove an unconditional separation: $\mathcal{G}_n^{(1)}$ contains functions of Sobolev regularity $s<1/2$ for which the quantum oracle cost is $O(1/\varepsilon)$, whereas classical deterministic or randomised quadrature requires $Ω(\varepsilon^{-1/s})$ evaluations. These results identify explicit integrand classes for which the full cost of QAE-based integration, including state preparation, is asymptotically better than classical methods. Experiments on SpinQ Triangulum and IBM Kingston illustrate the hierarchy at $n=2$: circuits inside $\mathcal{G}_n^{(d)}$ run successfully, while those exceeding the Triangulum coherence budget fail as predicted.
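The membership test mentioned in the abstract can be sketched classically: tabulate the angle map over all 2^n bitstrings, take a fast Walsh-Hadamard transform, and read off the largest Hamming weight that carries a non-negligible coefficient. The snippet below is a simplified illustration of that idea with toy angle maps, not the paper's implementation.

```python
import numpy as np

def walsh_hadamard(v):
    """Fast Walsh-Hadamard transform (unnormalized) of a length-2^n array."""
    v = v.astype(float).copy()
    size = v.size
    h = 1
    while h < size:
        for i in range(0, size, 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

def max_degree(angles, tol=1e-9):
    """Largest Hamming weight of an index with a non-negligible Walsh coefficient;
    angles is the table of Theta_g(x) over all 2^n bitstrings."""
    coeffs = walsh_hadamard(np.asarray(angles)) / len(angles)
    weights = [bin(i).count("1") for i in range(len(angles))]
    nonzero = [w for w, c in zip(weights, coeffs) if abs(c) > tol]
    return max(nonzero) if nonzero else 0

n = 3
x = np.array([[(i >> b) & 1 for b in range(n)] for i in range(2 ** n)], float)
theta_affine = 0.3 + 0.5 * x[:, 0] + 0.2 * x[:, 2]              # degree-1 angle map
theta_cubic = theta_affine + 0.4 * x[:, 0] * x[:, 1] * x[:, 2]  # adds a degree-3 term
print(max_degree(theta_affine), max_degree(theta_cubic))        # 1, 3
```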
AutoQResearch: LLM-Guided Closed-Loop Policy Search for Adaptive Variational Quantum Optimization
This paper presents AutoQResearch, a framework that uses large language models to automatically discover and optimize variational quantum algorithms for solving combinatorial optimization problems. The system searches for adaptive policies that can adjust quantum solver configurations based on problem characteristics and performance feedback, demonstrated on routing and graph problems.
Key Contributions
- Development of an LLM-guided closed-loop framework for autonomous discovery of variational quantum algorithm configurations
- Demonstration of adaptive quantum solver policies that outperform static configurations on combinatorial optimization problems
- Introduction of staged confirmation methodology to avoid proxy overfitting in quantum algorithm evaluation
View Full Abstract
Configuring variational quantum algorithms for combinatorial optimization remains a difficult, expert-driven process requiring coordinated choices over solver family, ansatz, objective, and optimizer. We present AutoQResearch, an LLM-guided closed-loop experimentation framework that casts this task as sequential policy search over a curated design space. Instead of a single static configuration, the framework searches for adaptive solver-control policies that condition future decisions on diagnostics such as feasibility, optimality gap, and convergence stagnation. The system operates through a structured workflow: an LLM agent edits a small policy surface under a fixed evaluation harness, candidate policies are screened using cheap scout evaluations, and only the strongest candidates are promoted to full confirmation. This enables controlled autonomous exploration while guarding against proxy overfitting and unstable selection. We evaluate the framework on Maximum Independent Set (MIS) and the Capacitated Vehicle Routing Problem (CVRP). On MIS instances (16--64 vertices), discovered policies substantially outperform static baselines and reveal scale-dependent behavior: CVaR objectives are effective at small scale, while QRAO-based qubit compression provides the most effective explored scaling path. On CVRP curricula (8--12 customers) and a held-out E-n13-k4 benchmark, the framework discovers adaptations involving sampling budget, penalty design, and hybrid repair protocols, yielding high-quality solutions. Methodologically, we find that staged confirmation is essential: cheap proxy evaluations can materially misestimate policy quality and even invert candidate rankings. Overall, the paper positions AutoQResearch as a benchmarked quantum--GenAI co-design workflow for autonomous solver discovery in variational quantum optimization.
On Realization of Back-Action-Evading Measurements and Quantum Non-Demolition Variables via Linear Systems Engineering
This paper develops a theoretical framework for performing back-action-evading measurements and creating quantum non-demolition variables in linear quantum systems by engineering specific Hamiltonian and coupling conditions. When systems don't naturally meet these conditions, the authors propose using coherent feedback control to artificially create the required properties for precise quantum measurements.
Key Contributions
- Established theoretical framework linking purely imaginary Hamiltonians with real/imaginary coupling operators to enable back-action-evading measurements
- Developed coherent feedback engineering approach to create BAE measurement capabilities in non-compliant quantum systems
- Demonstrated that QND interaction conditions simultaneously enable both BAE measurements and promote coupling operators to QND observables
View Full Abstract
We establish a framework for realizing back-action-evading (BAE) measurements and quantum non-demolition (QND) variables in linear quantum systems. The key condition, a purely imaginary Hamiltonian with a real or imaginary coupling operator, enables BAE measurements of conjugate observables. Symmetric coupling further yields QND variables. For non-compliant systems, coherent feedback is designed to engineer BAE measurements. Crucially, the QND interaction condition simultaneously ensures BAE measurements and promotes the coupling operator to a QND observable.
Dynamical generation of stable optical-microwave squeezing in structured reservoirs
This paper develops a theoretical framework for creating stable entangled optical-microwave quantum states using a hybrid system where a mechanical oscillator couples optical and microwave modes. The work shows that non-Markovian (memory-containing) noise environments can actually enhance the quantum entanglement compared to standard memoryless noise.
Key Contributions
- Demonstration that non-Markovian noise can enhance two-mode squeezing compared to Markovian environments
- Development of effective Hamiltonian model for optical-microwave squeezing in hybrid electro-optomechanical systems
- Analysis showing two-mode squeezing can persist without external driving under non-Markovian conditions
View Full Abstract
Two-mode squeezed states as paradigmatic entangled resources have broad applications in quantum information processing. Here, we study the generation of stable optical-microwave squeezing in structured environments within a hybrid electro-optomechanical system, where a mechanical oscillator is simultaneously coupled to an optical cavity mode and a microwave mode of an LC resonator. Specifically, an effective Hamiltonian that captures the optical-microwave squeezing interaction is constructed by combining strongly modulated driving fields applied to both photonic modes with a mechanical parametric amplifier. Based on this effective model, the dynamical evolution of two-mode squeezing in structured environments is analyzed. Remarkably, it is shown that non-Markovian noise can substantially enhance the squeezing level in comparison to the Markovian case, and that two-mode squeezing can persist even in the absence of external driving fields under non-Markovian conditions, thereby mitigating the detrimental effects of anti-squeezing. Furthermore, the persistence of the two-mode squeezed state is enhanced when the environmental spectral densities of the microwave and optical modes are identical. Our work provides a theoretical framework for generating and preserving two-mode squeezing in structured environments.
Quantum Prediction of Transport Dynamics in Discretized State Spaces
This paper develops a quantum algorithm that uses quantum computers to solve the Fokker-Planck equation for tracking probability distributions in physics simulations. The method encodes probability densities in quantum amplitudes and uses quantum Fourier transforms to efficiently simulate how these distributions evolve over time.
Key Contributions
- Gate-based quantum algorithm for Bayesian state estimation using Fokker-Planck equation
- Unitary surrogate method using Wick rotation to handle diffusion terms in quantum implementation
- Exponential scaling advantage for high-dimensional probability density representation
View Full Abstract
We propose a gate-based quantum algorithm for the prediction step of Bayesian state estimation based on the Fokker-Planck equation on a discretized position-velocity state space. The probability density is encoded in the amplitudes of a quantum state, enabling a compact representation of high-dimensional distributions. Exploiting the circulant structure of finite-difference operators, the evolution is realized in the spectral domain using quantum Fourier transforms and phase rotations. A key result is that the drift component can be implemented exactly in amplitude space, leading to an accurate reproduction of the classical transport dynamics. In contrast, the diffusion term does not admit a linear representation in amplitude space due to the nonlinear relation between probability density and wave function. To enable a quantum implementation, we introduce a unitary surrogate based on a Wick rotation, transforming diffusion into a dispersive phase evolution. This yields a fully unitary propagation that can be implemented efficiently on a gate-based quantum computer. The proposed method is evaluated numerically for different scenarios and shows strong agreement with the exact solution of the Fokker-Planck equation. The approach demonstrates the potential of quantum computing for Bayesian state estimation, as the representable state space grows exponentially with the number of qubits. This allows the efficient representation and propagation of probability densities that would otherwise require complex tensor decompositions on classical hardware, making the method a promising candidate for high-dimensional filtering problems.
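A classical FFT analogue can make the spectral-domain structure concrete. The sketch below (grid size, drift velocity, and diffusion constant are illustrative; this is not the quantum circuit) shows that the drift acts as a pure phase in Fourier space, while true diffusion is a non-unitary decay that the Wick-rotated surrogate replaces by a dispersive phase.

```python
import numpy as np

# Advection-diffusion  dp/dt = -v dp/dx + D d^2p/dx^2  on a periodic grid,
# propagated in Fourier space: drift = phase rotation (unitary on amplitudes),
# diffusion = real decay (non-unitary), Wick-rotated surrogate = dispersive phase.
N, L = 256, 20.0
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

v, D, t = 1.5, 0.2, 2.0                      # drift velocity, diffusion constant, time
p0 = np.exp(-(x - 5.0) ** 2)                 # initial density
p0 /= p0.sum()

P0 = np.fft.fft(p0)
drift_phase = np.exp(-1j * v * k * t)        # exact drift: pure phase rotation
diffusion_decay = np.exp(-D * k ** 2 * t)    # true diffusion: non-unitary damping
wick_phase = np.exp(-1j * D * k ** 2 * t)    # Wick-rotated surrogate: dispersive phase

p_exact = np.fft.ifft(P0 * drift_phase * diffusion_decay).real
p_surrogate = np.fft.ifft(P0 * drift_phase * wick_phase)

print("exact evolution conserves probability mass:", np.isclose(p_exact.sum(), 1.0))
print("surrogate is norm-preserving (unitary):",
      np.isclose(np.linalg.norm(p_surrogate), np.linalg.norm(p0)))
```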
A Novel Hierarchy of Quantum Kernel Networks on Smoothed Particle Hydrodynamics
This paper proposes combining quantum machine learning with smoothed particle hydrodynamics (SPH), a computational method for simulating fluid dynamics. The authors develop quantum neural networks that can be applied to particle-based simulations, though they acknowledge current limitations in computational efficiency.
Key Contributions
- Integration of quantum kernel networks with smoothed particle hydrodynamics
- Development of hybrid quantum-classical framework using Pauli-Z expectation values
- Novel quantum intelligent SPH paradigm combining smoothing kernels with quantum learning
View Full Abstract
Currently, quantum computing and artificial intelligence are driving revolutionary advancements in computational science. This study pioneers the integration of quantum kernel networks on smoothed particle hydrodynamics (SPH). SPH has matured into a highly versatile meshfree/particle method, exceptionally suited for tracking spatiotemporal trajectories and dynamic modeling phenomena. We developed a hierarchy of Lagrangian quantum network models built upon an improved quantum multilayer perceptron (QMLP). Specifically, a sequential hybrid quantum-classical framework is constructed, utilizing Pauli-Z expectation values over traditional probability outputs to ensure robust gradient-based optimization and mitigate barren plateaus. It combines smoothing kernels with quantum learning, establishing a novel quantum intelligent SPH paradigm. The framework is validated through some continuous benchmarks on eurypalynous quantum neural networks, static multi-level nebula vortex interference reconstructions and transient scalar field advectional tests. Numerical results demonstrate that while pure elementary quantum circuits struggle with parameter-specific generalization in unstructured domains, the proposed hybrid crossed-QMLP seamlessly matches the fitting accuracy of classical SPH in quantum optimized space. Although this approach currently faces limitations in computational efficiency and hardware implementation, it nonetheless paves the way for a novel investigation into quantum SPH, by mapping unstructured Lagrangian particle topologies into integrated quantum circuits.
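A toy sketch of the two ingredients being combined, an SPH smoothing kernel and a Pauli-Z expectation readout, is shown below. The Gaussian kernel and the single-qubit feature map are illustrative assumptions, not the paper's hierarchical QMLP architecture.

```python
import numpy as np

def gaussian_kernel(r, h):
    """1D Gaussian SPH smoothing kernel with smoothing length h (normalized)."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

def pauli_z_expectation(theta):
    """<Z> after an Ry(theta) rotation of |0>, computed analytically (= cos theta)."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    Z = np.diag([1.0, -1.0])
    return psi @ Z @ psi

# Particles carrying positions, masses, densities, and a sampled field value
rng = np.random.default_rng(0)
xj = rng.uniform(0, 1, 50)          # particle positions
mj = np.full(50, 1.0 / 50)          # particle masses
rho = np.full(50, 1.0)              # particle densities (assumed uniform)
fj = np.sin(2 * np.pi * xj)         # field values at particle positions

h = 0.1
x_eval = 0.4

# Standard SPH interpolation: f(x) ~ sum_j (m_j / rho_j) f_j W(|x - x_j|, h)
f_sph = np.sum(mj / rho * fj * gaussian_kernel(np.abs(x_eval - xj), h))

# Expectation-value readout of the kind the hybrid framework uses: encode the
# SPH estimate as a rotation angle and read out <Z> as a "quantum" feature.
feature = pauli_z_expectation(np.pi * f_sph)

print(f"SPH estimate at x={x_eval}: {f_sph:.3f} (true value {np.sin(2 * np.pi * x_eval):.3f})")
print(f"Pauli-Z expectation feature: {feature:.3f}")
```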
Quantum algorithm for solving high-dimensional linear stochastic differential equations via amplitude encoding of the noise term
This paper develops quantum algorithms to solve high-dimensional stochastic differential equations by encoding solutions in quantum state amplitudes rather than binary representations. The authors propose two methods using quantum linear systems solvers that achieve polylogarithmic scaling in dimension, potentially providing exponential speedup for high-dimensional financial and scientific modeling problems.
Key Contributions
- Novel amplitude encoding approach for stochastic differential equations using quantum circuits
- Two quantum algorithms (Dyson series and Euler-Maruyama methods) achieving polylogarithmic scaling in dimension
- Quantum circuit implementation for pseudorandom number generation in SDE solving
- Methods for estimating expectation values of functions using the prepared quantum states
View Full Abstract
This work studies quantum algorithms to solve high-dimensional stochastic differential equations (SDEs) $\mathrm{d} \mathbf{X}_t = A(t) \mathbf{X}_t \mathrm{d} t + B(t) \mathrm{d} \mathbf{W}_t$. Aiming for a speed-up in the dimension $N$ of $\mathbf{X}_t$, we generate quantum states that encode $\mathbf{X}_t$ in the amplitudes, while most of the existing quantum methods for SDEs employ binary encoding. A key challenge is the amplitude encoding of the noise term, and we address this by utilizing the quantum circuit implementation of a pseudorandom number generator (PRNG). We propose two methods: the Dyson series-based method and the Euler-Maruyama (EM)-based method. In the former, we express the noise term via the Dyson series approximation of the time evolution operator, while in the latter, it is approximated using the EM time discretization. Both methods use the quantum linear systems solver to generate the amplitude-encoding state of $\mathbf{X}_t$, making only ${\rm polylog}(N)$ queries to the PRNG circuit and the block-encodings of $A$ and $B$. Additionally, going beyond state preparation, we present methods to estimate expectations of functions of $\mathbf{X}_t$ using the state.
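For comparison, here is the classical Euler-Maruyama baseline for the linear SDE above, with an illustrative constant A and B. The paper's contribution is encoding this update in quantum amplitudes via a quantum linear systems solver; this snippet is only the classical reference discretization.

```python
import numpy as np

# Classical Euler-Maruyama integration of  dX_t = A X_t dt + B dW_t  (one sample path).
rng = np.random.default_rng(1)

N = 4                                                        # dimension of X_t
A = -0.5 * np.eye(N) + 0.1 * rng.standard_normal((N, N))     # drift matrix (illustrative)
B = 0.2 * np.eye(N)                                          # noise matrix (illustrative)

T, steps = 1.0, 1000
dt = T / steps

X = np.ones(N)                                               # initial condition X_0
for _ in range(steps):
    dW = rng.standard_normal(N) * np.sqrt(dt)                # Wiener increments ~ N(0, dt)
    X = X + A @ X * dt + B @ dW                              # Euler-Maruyama update

print("X_T (one sample path):", np.round(X, 4))
```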
Natural-orbital locking reveals hidden steady-state skin order in Gaussian open fermion chains
This paper develops a theoretical framework for analyzing open quantum fermion systems, showing how 'natural orbital locking' can reveal hidden ordering patterns in steady states that aren't visible from particle density measurements alone. The authors demonstrate that correlation matrices provide better diagnostics than density profiles for detecting skin modes in non-reciprocal quantum chains.
Key Contributions
- Development of exact steady-state theory for number-conserving Gaussian fermion chains with biorthogonal decomposition
- Discovery of natural-orbital locking as a diagnostic tool for hidden skin order in open quantum systems
View Full Abstract
Nonreciprocal relaxation matrices can have skin-localized right eigenmodes, but their imprint on a mixed steady state is not fixed by the density profile alone. We develop an exact steady-state theory for number-conserving Gaussian fermion chains and show that the dominant natural orbital of the correlation matrix provides a mode-resolved diagnostic of hidden skin order. The steady-state correlator admits a biorthogonal decomposition in terms of the left and right eigenmodes of the relaxation matrix $X$ and the source matrix $Y$. This formula separates three ingredients: slow rapidity denominators, source loading by left eigenmodes, and real-space geometry from right eigenmodes. For a local pump, the pump position is read by the left modes, whereas the selected profile is drawn by the right modes. In a single-slow-mode regime, the dominant natural orbital locks to the Euclidean-normalized slow right mode. The density can follow the same boundary trend, but it is a less selective incoherent sum over occupied natural orbitals. We verify this selection law in a nonreciprocal Hatano--Nelson chain and show that, in a nonreciprocal SSH chain, the selected natural orbital crosses over from a topological edge candidate to a slow bulk-skin candidate. These results identify natural-orbital locking as a steady-state diagnostic of nonreciprocal localization in Gaussian open fermion chains.
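To make the right/left-eigenmode language concrete, here is a minimal numpy sketch of an open-boundary Hatano-Nelson chain with illustrative hopping values. It reproduces only the skin localization of right versus left eigenmodes, not the paper's steady-state correlator or natural-orbital analysis.

```python
import numpy as np

# Non-Hermitian skin effect: with asymmetric hoppings (t_right > t_left) under open
# boundaries, right eigenvectors pile up at one edge, left eigenvectors at the other.
L = 40
t_right, t_left = 1.0, 0.5

H = np.zeros((L, L))
for n in range(L - 1):
    H[n + 1, n] = t_right            # hop to the right
    H[n, n + 1] = t_left             # hop to the left

evals, right = np.linalg.eig(H)      # columns of `right` are right eigenvectors
left = np.linalg.inv(right).conj().T # biorthogonal partners: columns are left eigenvectors

site = np.arange(L)
def mean_position(v):
    w = np.abs(v) ** 2
    return float(site @ w / w.sum())

# Average center of mass over all modes: right modes sit near the right edge,
# left modes near the left edge.
print("mean position of right eigenmodes:",
      np.mean([mean_position(right[:, k]) for k in range(L)]).round(2))
print("mean position of left  eigenmodes:",
      np.mean([mean_position(left[:, k]) for k in range(L)]).round(2))
```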
Single-copy stabilizer learning: average case and worst case
This paper studies how to efficiently learn stabilizer groups (which describe quantum error-correcting codes and many-body quantum states) from single copies of quantum states. It shows that shallow quantum circuits can learn most stabilizer groups efficiently, but proves fundamental limits requiring exponentially many measurements in worst-case scenarios.
Key Contributions
- Demonstrates that logarithmic-depth local Clifford circuits can efficiently learn almost all stabilizer groups of dimension n-t with t = O(log n), improving on previous approaches that required linear-depth measurements
- Proves a fundamental lower bound: in the worst case, any adaptive single-copy measurement scheme needs a number of samples that scales exponentially in t
View Full Abstract
We study single-copy stabilizer learning, the problem of identifying a stabilizer group of dimension $n-t$ from an $n$-qubit quantum state $ρ$. We obtain two complementary results. First, in the average case, logarithmic-depth local Clifford circuits suffice to efficiently learn almost all stabilizer groups with $t=O(\log n)$, instead of the linear-depth measurements required in previous approaches. We support this result with numerical simulations for systems of up to 100 qubits. Second, we show that, in the worst case, any adaptive single-copy measurement scheme requires a number of samples that scales exponentially in $t$. Together with existing results on two-copy learning, our findings suggest that, for large $t$, identifying Pauli symmetries of a quantum system exhibits a quantum advantage in the learning setting.
Third Quantization for Order Parameters (II): Local Field Quantization in Superconducting Quantum Circuits
This paper derives the quantum behavior of superconducting circuit elements from first principles, starting with microscopic BCS theory and showing how macroscopic variables like current and voltage naturally obey quantum commutation relations through 'third quantization' of the superconducting order parameter. It provides a unified microscopic foundation for understanding why superconducting transmission line resonators behave quantum mechanically.
Key Contributions
- Derives quantum behavior of superconducting circuit elements from microscopic BCS theory rather than phenomenological assumptions
- Extends third quantization framework to spatially local superconducting phases in transmission line resonators
- Establishes quantitative relations between macroscopic observables and microscopic parameters in circuit-QED architectures
- Provides unified microscopic foundation for capacitive and inductive superconducting circuit elements
View Full Abstract
The quantization of superconducting transmission-line resonators is usually introduced phenomenologically by modeling the resonator as an effective LC circuit and imposing canonical commutation relations on macroscopic variables such as charge and flux. Although this approach is highly successful, it leaves open why these macroscopic variables should obey quantum commutation relations and how this behavior emerges from the superconducting state. In this work, starting from the microscopic pairing Hamiltonian underlying BCS superconductivity, we derive the low-energy effective Hamiltonian of a circuit-QED architecture containing a superconducting transmission line with distributed capacitive and inductive elements. We establish quantitative relations between macroscopic observables, including current and voltage, and the spatially local superconducting phase, as well as the microscopic parameters of the electron-phonon system. We then extend the third quantization of the superconducting order parameter, introduced in Paper (I) for the global phase, to the spatially local case. This gives a macroscopic field quantization of the superconducting phase. We show that, after restriction to the low-energy excitation subspace, the local superconducting phase becomes a genuine quantum dynamical variable. Thus, the quantum behavior of transmission-line resonators need not be postulated at the macroscopic level, but follows from the third quantization of the superconducting order parameter. These results suggest that capacitive and inductive superconducting circuit elements share the same microscopic origin, providing a unified framework for superconducting circuit quantization.
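For context, the phenomenological LC quantization that the paper re-derives from the microscopic pairing Hamiltonian is the standard one (conventions: $Z_0=\sqrt{L/C}$):

```latex
% Standard phenomenological LC quantization, i.e. the starting point that the
% paper replaces with a microscopic BCS derivation.
\begin{align}
  H &= \frac{\hat{Q}^2}{2C} + \frac{\hat{\Phi}^2}{2L},
  \qquad [\hat{\Phi}, \hat{Q}] = i\hbar, \\
  \hat{\Phi} &= \sqrt{\tfrac{\hbar Z_0}{2}}\,(\hat{a} + \hat{a}^\dagger),
  \qquad
  \hat{Q} = -i\sqrt{\tfrac{\hbar}{2 Z_0}}\,(\hat{a} - \hat{a}^\dagger), \\
  H &= \hbar\omega\left(\hat{a}^\dagger \hat{a} + \tfrac{1}{2}\right),
  \qquad \omega = \frac{1}{\sqrt{LC}} .
\end{align}
```

The paper's point is that the commutator in the first line need not be postulated for macroscopic charge and flux; it emerges from third quantization of the superconducting order parameter.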
Electrically detected magnetic resonance of $^{75}$As magnetic clock transitions in silicon
This paper demonstrates the observation of magnetic clock transitions in arsenic-75 donor atoms in silicon using electrically detected magnetic resonance, which could help reduce decoherence in quantum systems by making them less sensitive to magnetic field noise.
Key Contributions
- Observation of magnetic clock transitions in $^{75}$As donor spins using low-field EDMR
- Demonstration that linewidth broadening behavior is consistent with donor Hamiltonian models near clock transition conditions
View Full Abstract
Magnetic clock transitions (CTs), defined by vanishing first-order sensitivity of the transition frequency to magnetic field fluctuations, provide a powerful route to suppress decoherence in donor spin systems. Here, we present the observation of magnetic field CTs from an ensemble of near-surface $^{75}$As ($I = 3/2$) spins in silicon using low-field ($< 10$~mT) continuous-wave electrically detected magnetic resonance (EDMR). As the CT condition is approached, pronounced linewidth broadening is observed, consistent with a donor Hamiltonian informed linewidth model. These results establish low-field EDMR as a sensitive probe of CTs in near-surface donor systems relevant to silicon-based quantum devices.
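A rough numerical sketch of the clock-transition condition df/dB = 0 for an electron spin S = 1/2 hyperfine-coupled to an I = 3/2 nucleus is given below. The hyperfine constant, gyromagnetic ratios, and sign conventions are textbook-style assumptions for illustration, not the calibrated model used in the paper.

```python
import numpy as np

def spin_ops(s):
    """Return (Sx, Sy, Sz) for spin quantum number s, basis ordered m = s ... -s."""
    dim = int(2 * s + 1)
    m = s - np.arange(dim)
    Sz = np.diag(m)
    Sp = np.zeros((dim, dim))
    for i in range(1, dim):                       # <m+1|S+|m> = sqrt(s(s+1) - m(m+1))
        Sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))
    Sm = Sp.T
    return (Sp + Sm) / 2, (Sp - Sm) / 2j, Sz

A = 198.35e6        # 75As hyperfine constant, Hz (assumed literature value)
gamma_e = 27.97e9   # electron gyromagnetic ratio, Hz/T (assumed g ~ 2)
gamma_n = 7.3e6     # 75As nuclear gyromagnetic ratio, Hz/T (assumed)

Sx, Sy, Sz = spin_ops(0.5)
Ix, Iy, Iz = spin_ops(1.5)
I2, I4 = np.eye(2), np.eye(4)

def levels(B):
    """Eigenfrequencies (Hz) of H/h = A S.I + B (gamma_e Sz - gamma_n Iz)."""
    H = A * (np.kron(Sx, Ix) + np.kron(Sy, Iy) + np.kron(Sz, Iz)) \
        + B * (gamma_e * np.kron(Sz, I4) - gamma_n * np.kron(I2, Iz))
    return np.linalg.eigvalsh(H)

# Scan the field and find the transition whose frequency is flattest in B,
# i.e. closest to first-order insensitivity to magnetic-field fluctuations.
B_grid = np.linspace(1e-4, 10e-3, 400)             # 0.1 mT ... 10 mT
E = np.array([levels(B) for B in B_grid])          # shape (400, 8), sorted levels

best = None
for a in range(8):
    for b in range(a + 1, 8):
        dfdB = np.gradient(E[:, b] - E[:, a], B_grid)
        i = np.argmin(np.abs(dfdB))
        if best is None or abs(dfdB[i]) < best[0]:
            best = (abs(dfdB[i]), B_grid[i], E[i, b] - E[i, a], (a, b))

slope, B_ct, f_ct, pair = best
print(f"flattest transition (levels {pair}): B = {B_ct * 1e3:.2f} mT, "
      f"f = {f_ct / 1e6:.1f} MHz, |df/dB| = {slope / 1e6:.3f} MHz/T")
```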
Lobe Dynamics, Phase-Space Transport, and Non-Adiabatic Leakage Thresholds in the Nonautonomous Kerr-Cat Qubit
This paper analyzes the dynamics of Kerr-cat qubits during state preparation and gate operations, moving beyond simplified static models to examine how time-dependent microwave pulses create complex phase-space transport and leakage mechanisms that affect qubit performance.
Key Contributions
- Development of nonautonomous analysis methods for Kerr-cat qubit state preparation that identify symmetric post-threshold moving branches organizing state formation
- Application of Melnikov's method to derive transport criteria for gate-pulse-induced leakage, providing geometric indicators for non-adiabatic errors
View Full Abstract
The Kerr-nonlinear parametric oscillator (KPO) provides a foundational semiclassical model for cat-state quantum hardware. Standard analyses of the KPO typically rely on autonomous, frozen-time approximations to describe the stabilization of macroscopic coherent states. However, state preparation and gate manipulation are driven by explicitly time-dependent microwave pulses, so the operational dynamics are inherently nonautonomous. In this paper, we show that static algebraic equilibrium pictures are incomplete for describing both state formation and gate-induced transport in the Kerr-cat qubit. For nonautonomous state preparation, we analyze the ramped resonant model by combining a linear nonautonomous stability analysis with a local invariant-graph reduction near the vacuum trajectory. This yields a quintic reduced normal form in the critical direction and identifies two symmetric post-threshold moving branches that organize the local state-formation dynamics. The associated diagnostics separate the reduced branch dynamics from the full two-dimensional phase-twist relaxation observed in the hardware coordinates. For gate execution, we model a fast pulse as a weak aperiodic perturbation of the conservative resonant figure-eight separatrix and apply Melnikov's method to derive a leading-order transport criterion. In this framework, transient lobe dynamics emerge as a semiclassical mechanism for non-adiabatic leakage, and the resulting amplitude-width threshold curve provides a leading-order geometric indicator for the onset of gate-pulse-induced transport.
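For intuition about the nonautonomous ramp, here is a semiclassical mean-field sketch of Kerr-cat state preparation: ramping the two-photon drive from zero carries the vacuum onto one of the two lobes at roughly plus or minus sqrt(eps2/K). Conventions, parameter values, and the small symmetry-breaking seed are illustrative assumptions; this is only the backdrop for the paper's invariant-graph and Melnikov analysis, not a reproduction of it.

```python
import numpy as np

# Mean-field KPO with a ramped two-photon drive eps2(t) and weak single-photon loss.
K = 1.0                  # Kerr nonlinearity (rotating frame)
eps_max = 4.0            # final two-photon drive amplitude
kappa = 0.2              # weak damping so the trajectory settles onto a lobe
T_ramp, T_total, dt = 20.0, 40.0, 1e-3

def eps2(t):
    return eps_max * min(t / T_ramp, 1.0)        # linear ramp, then hold

def rhs(t, a):
    # mean-field equation for H/hbar = -K a^dag^2 a^2 + eps2(t) (a^dag^2 + a^2)
    return 2j * K * abs(a) ** 2 * a - 2j * eps2(t) * np.conj(a) - 0.5 * kappa * a

a = 1e-3 + 0j            # tiny seed breaking the +/- lobe symmetry
t = 0.0
while t < T_total:       # fixed-step RK4 on the complex amplitude
    k1 = rhs(t, a)
    k2 = rhs(t + dt / 2, a + dt / 2 * k1)
    k3 = rhs(t + dt / 2, a + dt / 2 * k2)
    k4 = rhs(t + dt, a + dt * k3)
    a += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

print(f"final alpha: {a.real:+.3f} {a.imag:+.3f}i")
print(f"|alpha|^2 = {abs(a) ** 2:.3f}  (lobe target eps2/K = {eps_max / K:.1f})")
```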
Universal Complex Quantum-Like Bits from Hermitian Weighted Graphs
This paper studies how to construct complex quantum-like bit states using weighted graph structures, proving that Hermitian coupling between graph blocks can realize any target qubit state exactly while keeping the two-level spectrum real, whereas symmetric coupling schemes face fundamental phase obstructions.
Key Contributions
- Proves that Hermitian weighted graph couplings can universally realize arbitrary complex qubit amplitudes with real spectra
- Shows fundamental phase constraints for symmetric coupling approaches that prevent universal qubit state realization
View Full Abstract
We study when block-coupled regular graphs can realize prescribed complex quantum-like bit states as exact synchronized eigenstates. Two regular subgraphs $G_A$ and $G_B$ supply normalized all-ones eigenvectors $V_A$ and $V_B$, and algebraically regular bipartite couplings reduce the full graph-supported operator exactly to a $2\times 2$ effective block on $\mathcal S=\operatorname{span} \{ \lvert 0\rangle, \lvert 1\rangle \}$. Within this reduction we prove that two natural symmetric complexifications are not universal under a real-spectrum requirement: complex symmetric coupling with real diagonal regularities forces the target computational basis amplitude ratio $r=ω_2/ω_1$, for $\lvert ψ\rangle = ω_1\lvert 0\rangle + ω_2\lvert 1\rangle$, to satisfy $r^2\in\mathbb{R}$, while real symmetric coupling with complex diagonal regularities forces $r+1/r\in\mathbb{R}$. Replacing complex symmetry by Hermitian coupling removes this phase obstruction. For any nonbasis target state, any prescribed real eigenvalue, and any prescribed nonzero signed spectral gap, a Hermitian weighted coupling realizes the target exactly. Additionally, an independently tuned directed-coupling model gives a second universality mechanism. We then pass from continuous effective parameters to finite weighted graphs with entries in $\{0, \pm1, \pm i\}$ (the fourth roots of unity and zero), characterize the balanced discrete coupling lattice by perfect matchings, and show that exact discrete Hermitian realizations are dense in the synchronized pure-state space. These results give a universality taxonomy for complex QL-bits and identify Hermitian conjugate pairing as the robust structural mechanism that supports arbitrary complex amplitudes with real two-level spectra.
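At the level of the 2x2 effective block, the Hermitian construction can be sketched directly. The snippet below is a spectral-decomposition illustration of why hermiticity removes the phase obstruction; it does not model the underlying weighted-graph couplings or the discrete weights in {0, +-1, +-i}.

```python
import numpy as np

# Given a target qubit state with arbitrary complex amplitudes, a prescribed real
# eigenvalue, and a signed spectral gap, build a Hermitian 2x2 block that has the
# target as an exact eigenvector with a real spectrum.
omega = np.array([0.6, 0.8j * np.exp(1j * 0.7)])        # target amplitudes (arbitrary phases)
omega = omega / np.linalg.norm(omega)

lam, gap = 1.5, -2.0                                     # prescribed real eigenvalue and signed gap

psi = omega
perp = np.array([-np.conj(psi[1]), np.conj(psi[0])])     # state orthogonal to the target

H = lam * np.outer(psi, psi.conj()) + (lam + gap) * np.outer(perp, perp.conj())

# Checks: H is Hermitian, its spectrum is real, and the target is an exact eigenvector.
print("Hermitian:", np.allclose(H, H.conj().T))
print("eigenvalues:", np.round(np.linalg.eigvalsh(H), 6))
print("H|psi> = lam|psi>:", np.allclose(H @ psi, lam * psi))
```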