Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
This page updates automatically as new papers are published and covers one week of arXiv publishing (Sunday to Thursday). An archive of previous weeks is at the bottom.
Scalable Neural Decoders for Practical Fault-Tolerant Quantum Computation
This paper introduces a neural network-based decoder for quantum error correction that significantly improves the accuracy and speed of correcting errors in quantum computers. The decoder achieves much lower logical error rates and higher throughput than existing methods, making fault-tolerant quantum computing more practical.
Key Contributions
- Development of convolutional neural network decoder for quantum error correction codes
- Demonstration of up to 17x lower logical error rates with 3-5 orders of magnitude higher throughput
- Discovery of waterfall regime showing practical fault-tolerant quantum computing achievable with modest code sizes
View Full Abstract
Quantum error correction (QEC) is essential for scalable quantum computing. However, it requires classical decoders that are fast and accurate enough to keep pace with quantum hardware. While quantum low-density parity-check codes have recently emerged as a promising route to efficient fault tolerance, current decoding algorithms do not allow one to realize the full potential of these codes in practical settings. Here, we introduce a convolutional neural network decoder that exploits the geometric structure of QEC codes, and use it to probe a novel "waterfall" regime of error suppression, demonstrating that the logical error rates required for large-scale fault-tolerant algorithms are attainable with modest code sizes at current physical error rates, and with latencies within the real-time budgets of several leading hardware platforms. For example, for the $[144, 12, 12]$ Gross code, the decoder achieves logical error rates up to $\sim 17$x below existing decoders - reaching logical error rates $\sim 10^{-10}$ at physical error $p=0.1\%$ - with 3-5 orders of magnitude higher throughput. This decoder also produces well-calibrated confidence estimates that can significantly reduce the time overhead of repeat-until-success protocols. Taken together, these results suggest that the space-time costs associated with fault-tolerant quantum computation may be significantly lower than previously anticipated.
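The claimed benefit for repeat-until-success protocols can be illustrated with a toy post-selection model (a sketch with invented numbers, not the paper's decoder or data): gating on a calibrated confidence score trades a small retry overhead for a much lower residual logical error rate.

```python
import random

random.seed(0)

def decode_shot():
    """Toy model: a decoding shot returns (correct?, confidence).
    Correct decodes tend to come with higher confidence. All
    distributions and rates here are illustrative assumptions."""
    correct = random.random() > 0.01          # 1% raw logical error rate
    conf = random.betavariate(8, 2) if correct else random.betavariate(2, 4)
    return correct, conf

def repeat_until_success(threshold, shots=50_000):
    attempts = accepted = errors = 0
    for _ in range(shots):
        attempts += 1
        correct, conf = decode_shot()
        if conf >= threshold:                 # accept only confident shots
            accepted += 1
            if not correct:
                errors += 1
    # mean attempts per accepted shot, and post-selected error rate
    return attempts / accepted, errors / accepted

overhead, err = repeat_until_success(threshold=0.5)
print(f"attempts per accepted shot: {overhead:.3f}, residual error: {err:.2e}")
```

With these toy numbers the retry overhead stays a few percent while the post-selected error rate drops well below the raw 1%, which is the qualitative mechanism the abstract describes.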
Optimized Gottesman-Kitaev-Preskill Error Correction via Tunable Preprocessing
This paper proposes an improved error correction scheme for Gottesman-Kitaev-Preskill (GKP) quantum error correction codes by introducing a tunable preprocessing stage with squeezing parameters. The new P-Steane scheme can outperform existing methods by actively reshaping noise propagation patterns in bosonic quantum systems.
Key Contributions
- Introduction of tunable preprocessing stage with squeezing parameters for GKP error correction
- Unified framework that encompasses existing ME-Steane and teleportation-based schemes as special cases
- Demonstration of improved performance over ME-Steane scheme under specific conditions with optimized parameter selection
View Full Abstract
The Gottesman-Kitaev-Preskill (GKP) code is a promising bosonic candidate for realizing fault-tolerant quantum computation. Among existing error-correction protocols for the GKP code, the Steane-type scheme is a canonical and widely adopted paradigm, yet its intrinsic noise propagation pattern limits further performance improvement. In this work, we propose a preprocessing-based Steane-type (P-Steane) scheme, which introduces a tunable preprocessing stage with squeezing parameters $a$ and $b$ to actively reshape noise propagation, thereby constituting a parametrized framework. This framework spans a spectrum of protocols beyond existing methods, reproducing the performance of both the ME-Steane scheme ($a=1$, $b=1$) and the teleportation-based scheme ($a=1/\sqrt{2}$, $b=\sqrt{2}$) as special cases. Crucially, in the small-noise regime and when the data qubit is noisier than the ancilla qubits, the P-Steane scheme achieves the minimum product of position- and momentum-quadrature output noise variances when $2a = b$, and consistently outperforms the ME-Steane scheme within a specific squeezing-parameter range under this condition.
Belief Propagation Convergence Prediction for Bivariate Bicycle Quantum Error Correction Codes
This paper develops a simple predictor of whether Belief Propagation decoding will converge for quantum error correction codes: check whether the syndrome defect count is divisible by the code's column weight. The test achieves 95%+ accuracy and could reduce computational overhead in quantum error correction.
Key Contributions
- Development of a modulo-based convergence predictor for Belief Propagation decoding with AUC = 0.995
- Identification of structural mechanism linking syndrome defect count divisibility to BP convergence probability
- Validation across multiple Bivariate Bicycle codes including IBM's targeted Gross codes for 2026-2028 deployment
View Full Abstract
Decoding Bivariate Bicycle (BB) quantum error correction codes typically requires Belief Propagation (BP) followed by Ordered Statistics Decoding (OSD) post-processing when BP fails to converge. Whether BP will converge on a given syndrome is currently determined only after running BP to completion. We show that convergence can be predicted in advance by a single modulo operation: if the syndrome defect count is divisible by the code's column weight w, BP converges with high probability (100% at p <= 0.001, degrading to 87% at p = 0.01); otherwise, BP fails with probability >= 90%. The mechanism is structural: each physical data error activates exactly w stabilizers, so a defect count not divisible by w implies the presence of measurement errors outside BP's model space. Validated on five BB codes with column weights w = 2, 3, and 4, mod-w achieves AUC = 0.995 as a convergence classifier at p = 0.001 under phenomenological noise, dominating all other syndrome features (next best: AUC = 0.52). The false positive rate scales empirically as O(p^2.05) (R^2 = 0.98), confirming the analytical bound from Proposition 2. Among BP failures on mod-w = 0 syndromes, 82% contain weight-2 data error clusters, directly confirming the dominant failure mechanism. The prediction is invariant under BP scheduling strategy and decoder variant, including Relay-BP - the strongest known BP enhancement for quantum LDPC codes. These results apply directly to IBM's Gross code [[144, 12, 12]] and Two-Gross code [[288, 12, 18]], targeted for deployment in 2026-2028.
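The predictor itself fits in a few lines. The sketch below assumes a syndrome given as a 0/1 defect list; it illustrates the paper's mod-w rule, not its evaluation pipeline.

```python
def predict_bp_convergence(syndrome, w):
    """Predict whether BP will converge on this syndrome.
    Each physical data error activates exactly w stabilizers (the code's
    column weight), so a defect count not divisible by w signals
    measurement errors outside BP's model space (the paper's structural
    argument). Returns True when BP is predicted to converge."""
    defects = sum(syndrome)        # syndrome given as a 0/1 defect list
    return defects % w == 0

# For a code with column weight w = 3:
print(predict_bp_convergence([1, 1, 1, 0, 0], w=3))  # 3 defects -> True
print(predict_bp_convergence([1, 1, 0, 0, 0], w=3))  # 2 defects -> False
```

In the workflow the paper targets, a `False` prediction would let a decoder skip straight to OSD post-processing instead of running BP to completion first.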
A Review of Variational Quantum Algorithms: Insights into Fault-Tolerant Quantum Computing
This paper reviews variational quantum algorithms (VQAs), which combine quantum circuits with classical optimization to work on current noisy quantum computers, and analyzes how these algorithms might evolve as quantum computers become fault-tolerant. The review examines current challenges like barren plateaus and explores applications across physics, chemistry, and machine learning.
Key Contributions
- Systematic analysis of VQA evolution from NISQ to fault-tolerant quantum computing regimes
- Comprehensive review of training bottlenecks like barren plateaus and mitigation strategies
- Theoretical roadmap for adapting variational algorithms to error-corrected quantum systems
View Full Abstract
Variational quantum algorithms (VQAs) have established themselves as a central computational paradigm in the Noisy Intermediate-Scale Quantum (NISQ) era. By coupling parameterized quantum circuits (PQCs) with classical optimization, they operate effectively under strict hardware limitations. However, as quantum architectures transition toward early fault-tolerant (EFT) and ultimate fault-tolerant (FT) regimes, the foundational principles and long-term viability of VQAs require systematic reassessment. This review offers an insightful analysis of VQAs and their progression toward the fault-tolerant regime. We deconstruct the core algorithmic framework by examining ansatz design and classical optimization strategies, including cost function formulation, gradient computation, and optimizer selection. Concurrently, we evaluate critical training bottlenecks, notably barren plateaus (BPs), alongside established mitigation strategies. The discussion then explores the EFT phase, detailing how the integration of quantum error mitigation and partial error correction can sustain algorithmic performance. Addressing the FT phase, we analyze the inherent challenges confronting current hybrid VQA models. Furthermore, we synthesize recent VQA applications across diverse domains, including many-body physics, quantum chemistry, machine learning, and mathematical optimization. Ultimately, this review outlines a theoretical roadmap for adapting quantum algorithms to future hardware generations, elucidating how variational principles can be systematically refined to maintain their relevance and efficiency within an error-corrected computational environment.
Fast and Coherent Transfer of Atomic Qubits in Optical Tweezers using Fiber Array Architecture
This paper demonstrates a new fiber array architecture for neutral-atom quantum computers that enables fast, coherent transfer of atomic qubits between optical trap sites with extremely low heating and high fidelity. The technique allows qubits to be moved between different locations in the quantum processor while maintaining their quantum states, which is crucial for implementing quantum algorithms that require connectivity between distant qubits.
Key Contributions
- Demonstrated ultrafast qubit transfer (10 μs) with extremely high fidelity (0.99992 per cycle) and ultralow motional heating
- Developed fiber array architecture with site-resolved trap depth control enabling smooth amplitude exchange between static and moving traps
- Established theoretical model connecting array inhomogeneity to transfer heating rates through parallel transfer experiments
View Full Abstract
Programmable neutral-atom arrays offer a promising route toward scalable quantum computing, where coherent qubit transfer enables non-local connectivity and reduces resource overhead. However, transfer speed and motional heating remain key bottlenecks for fast and deep quantum circuits. Here, we employ a fiber array neutral-atom quantum computing architecture with site-resolved control of trap depths to realize smooth amplitude exchange between static and moving traps, thereby enabling fast and coherent qubit transfer with ultralow motional heating. With a 10 $\mu$s in situ transfer between static and moving traps, we obtain a per-cycle heating rate of 0.156(9) $\mu$K, sustain over 500 cycles with negligible atom loss, and achieve a quantum state fidelity of 0.99992(5) per cycle. For inter-site transfer between two separated static traps, the operation takes 120 $\mu$s with 0.783(17) $\mu$K heating per transfer, and atom loss remains negligible for up to 100 repeated cycles with a fidelity of 0.9998(1) per transfer. Furthermore, through experimental studies of parallel transfer, we establish a model that elucidates the relationship between array inhomogeneity and the transfer heating rate. This fast, low-heating coherent transfer capability provides a practical route for improving both speed and fidelity in atom-shuttling based quantum computing.
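As a quick consistency check on the reported figures, assuming per-cycle fidelities multiply and heating adds independently across cycles (an idealization, not the paper's error model):

```python
# Numbers taken from the abstract (in-situ transfer case).
per_cycle_fidelity = 0.99992   # quantum state fidelity per cycle
per_cycle_heating_uK = 0.156   # motional heating per cycle, microkelvin

n_cycles = 500
total_fidelity = per_cycle_fidelity ** n_cycles      # multiplicative model
total_heating_uK = per_cycle_heating_uK * n_cycles   # additive model

print(f"after {n_cycles} cycles: fidelity ~ {total_fidelity:.3f}, "
      f"heating ~ {total_heating_uK:.0f} uK")
```

Under this idealization, 500 cycles still leave roughly 96% state fidelity and under 80 uK of accumulated heating, consistent with the abstract's claim of sustaining over 500 cycles.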
Trotterization with Many-body Coulomb Interactions: Convergence for General Initial Conditions and State-Dependent Improvements
This paper establishes rigorous error bounds for Trotter formulas when simulating many-body quantum systems with Coulomb interactions, showing that second-order Trotter achieves polynomial scaling in particle number despite the challenging mathematical properties of Coulomb potentials. The work identifies conditions under which convergence rates can be improved and connects these to physically meaningful quantum states.
Key Contributions
- Rigorous proof that second-order Trotter formulas achieve 1/4 convergence rate with polynomial particle number dependence for Coulomb systems
- Identification of physically meaningful initial state conditions that improve convergence rates to first and second order
View Full Abstract
Efficiently simulating many-body quantum systems with Coulomb interactions is a fundamental question in quantum physics, quantum chemistry, and quantum computing, yet it presents unique challenges: the Hamiltonian is an unbounded operator (both kinetic and potential parts are unbounded); its Hilbert space dimension grows exponentially with particle number; and the Coulomb potential is singular, long-ranged, non-smooth, and unbounded, violating the regularity assumptions of many prior state-of-the-art many-body simulation analyses. In this work, we establish rigorous error bounds for Trotter formulas applied to many-body quantum systems with Coulomb interactions. Our first main result shows that for general initial conditions in the domain of the Hamiltonian, second-order Trotter achieves a sharp $1/4$ convergence rate with explicit polynomial dependence of the error prefactor on the particle number. The polynomial dependence on system size suggests that the algorithm remains quantumly efficient, even without introducing any regularization of the Coulomb singularity. Notably, although the result under general conditions constitutes a worst-case bound, this rate has been observed in prior work for the hydrogen ground state, demonstrating its relevance to physically and practically important initial conditions. Our second main result identifies a set of physically meaningful conditions on the initial state under which the convergence rate improves to first and second order. For hydrogenic systems, these conditions are connected to excited states with sufficiently high angular momentum. Our theoretical findings are consistent with prior numerical observations.
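The second-order formula analyzed here is the standard Strang splitting $e^{-iA\,dt/2}\,e^{-iB\,dt}\,e^{-iA\,dt/2}$. A minimal numerical sketch with bounded $2\times 2$ matrices shows the textbook one-step error scaling $O(dt^3)$; the paper's point is precisely that the unbounded, singular Coulomb case escapes this textbook analysis and must be bounded separately.

```python
import numpy as np

def expmi(H, t):
    """exp(-i H t) for a Hermitian matrix H, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(-1j * vals * t)) @ vecs.conj().T

# Toy bounded, non-commuting split H = A + B (Pauli X and Z).
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 0], [0, -1]], dtype=complex)

def strang_error(dt):
    """Spectral-norm error of one second-order (Strang) Trotter step."""
    exact = expmi(A + B, dt)
    trotter2 = expmi(A, dt / 2) @ expmi(B, dt) @ expmi(A, dt / 2)
    return np.linalg.norm(exact - trotter2, 2)

# Halving dt should cut the one-step error by ~2**3 = 8 (local O(dt^3)).
e1, e2 = strang_error(0.1), strang_error(0.05)
print(e1 / e2)
```

For bounded operators the ratio sits near 8; the paper's result is that even without such boundedness, the Coulomb Hamiltonian still admits a provable (if slower, order $1/4$) convergence rate with only polynomial particle-number dependence.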
Defect-free arrays at the thousand-atom scale in a 4-K cryogenic environment
This paper demonstrates a cryogenic system operating at 4 K that can create and maintain arrays of up to 1024 individual atoms trapped by laser tweezers, achieving extremely long trapping times of around 5000 seconds. The system is designed to be compatible with Rydberg-state manipulation, enabling large-scale quantum computing applications.
Key Contributions
- Development of 4K cryogenic platform with high numerical aperture optics for thousand-atom scale arrays
- Achievement of 5000-second trapping lifetimes enabling extended experimental time
- Demonstration of defect-free arrays up to 1024 atoms using dual-wavelength trapping
View Full Abstract
We report on a cryogenic platform at 4 K incorporating high numerical aperture optics for the generation of large-scale tweezer arrays, and compatible with Rydberg-state manipulation. We achieve trapping lifetimes of around 5000 s, significantly extending the available experimental time for the preparation of large-scale arrays. By combining two trapping lasers at different wavelengths and by minimizing other atom losses during the rearrangement and imaging processes, we demonstrate the preparation of defect-free arrays with up to 1024 atoms. Our cryogenic design opens exciting prospects for analog and digital quantum computing.
Coherence and entanglement dynamics in Shor's algorithm
This paper analyzes how quantum coherence and entanglement change during the execution of Shor's algorithm for factoring large numbers. The researchers show that Shor's algorithm generally decreases coherence while increasing entanglement, and they establish relationships between these quantum resources throughout the algorithm's steps.
Key Contributions
- Analysis of coherence and entanglement dynamics throughout Shor's algorithm execution
- Demonstration that Shor's algorithm depletes coherence while producing entanglement
- Establishment of relationships between geometric coherence and geometric entanglement in quantum algorithms
View Full Abstract
Shor's algorithm outperforms its classical counterpart in efficient prime factorization. We explore the coherence and entanglement dynamics of the evolved states within Shor's algorithm, showing that the coherence in each step depends on the dimension of the register or on the order, and discuss the relations between geometric coherence and geometric entanglement. We investigate how unitary operators induce variations in coherence and entanglement, and analyze the variations of coherence and entanglement within the entire algorithm, demonstrating that the overall effect of Shor's algorithm tends to deplete coherence and produce entanglement. Our research not only deepens the understanding of this algorithm but also provides methodological references for studying resource dynamics in other quantum algorithms.
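The paper works with geometric coherence; as a simpler illustration of the kind of quantity being tracked, the widely used $l_1$-norm of coherence can be computed directly from a density matrix (a generic sketch, not the paper's measure):

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm of coherence: sum of the magnitudes of the off-diagonal
    entries of rho in the computational basis. Zero for incoherent
    (diagonal) states, d - 1 for a maximally coherent pure state."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

d = 4
plus = np.full(d, 1 / np.sqrt(d))        # uniform superposition, as after
rho_plus = np.outer(plus, plus.conj())   # the Hadamard layer in Shor's algorithm
print(l1_coherence(rho_plus))            # maximal: d - 1 = 3

rho_mixed = np.eye(d) / d                # fully dephased (incoherent) state
print(l1_coherence(rho_mixed))           # 0
```

The contrast between these two extremes mirrors the paper's observation: the algorithm starts from a highly coherent superposition and its later stages (entangling operations and measurement) deplete that coherence.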
Quantifying magic via quantum $(α,β)$ Jensen-Shannon divergence
This paper develops new mathematical tools to measure 'magic' in quantum states, which refers to how much a quantum state differs from easily simulatable stabilizer states. The authors propose quantum Jensen-Shannon divergence-based measures that can efficiently quantify this magic property, which is crucial for fault-tolerant quantum computing.
Key Contributions
- Introduction of two new magic quantifiers based on quantum (α,β) Jensen-Shannon divergence
- Demonstration that these quantifiers are efficiently computable in low-dimensional systems and have desirable mathematical properties
- Analysis of how initial nonstabilizerness can enhance magic generation for specific quantum gates
View Full Abstract
Magic states play an important role in fault-tolerant quantum computation, and so the quantification of magic for quantum states is of great significance. In this work, we propose two new magic quantifiers by introducing two versions of quantum $(α,β)$ Jensen-Shannon divergence based on the quantum $(α,β)$ entropy and the quantum $(α,β)$-relative entropy, respectively. We derive many desirable properties for our magic quantifiers, and find that they are efficiently computable in low-dimensional Hilbert spaces. We also show that the initial nonstabilizerness in the input state can boost the magic generating power for our magic quantifiers with appropriate parameter ranges for a certain class of quantum gates. Our magic quantifiers may provide new tools for addressing some specific problems in magic resource theory.
Database Reordering for Compact Grover Oracles with ESOP Minimization
This paper proposes optimizing Grover's quantum search algorithm by reordering database entries and using ESOP minimization to reduce the gate count and circuit depth of the quantum oracle circuit. The researchers demonstrate that strategic database reordering combined with simulated annealing can reduce circuit size by approximately 30% compared to unoptimized approaches.
Key Contributions
- Demonstrated that database reordering can reduce Grover oracle circuit size by up to a factor of two
- Developed a proxy metric for estimating circuit size without full compilation and combined it with simulated annealing for efficient optimization
- Showed 30% circuit size reduction compared to ESOP minimization without reordering through experimental validation
View Full Abstract
Grover's algorithm searches for data satisfying a desired condition in an unstructured database. This algorithm can search a space of size $N$ in $\sqrt{N}$ queries, thereby achieving a quadratic speedup. However, within the Grover oracle circuit that is repeatedly applied, the quantum state preparation circuit -- which embeds database information into quantum states -- suffers from a large gate count and circuit depth. To address this problem, we propose reducing the quantum state preparation circuit by reordering the database. Specifically, we consider a Quantum Read-Only Memory (QROM), where data are assigned to addresses, and assume that the address assignment of data can be freely permuted. By applying Exclusive Sum-of-Products (ESOP) minimization to the resulting truth table, we reduce the quantum circuit. Although the resulting circuit logic differs from the original, the state preparation remains correct in the sense that every desired datum is encoded at some address. Furthermore, we propose a proxy metric that estimates circuit size without compilation, and combine it with simulated annealing to efficiently find a near-optimal data ordering. In our experiments, an exhaustive search over all orderings for databases of size $N=8$ reveals that circuit size varies by up to approximately a factor of two depending on the ordering, demonstrating the utility of reordering. Compared with applying ESOP minimization without reordering, simulated annealing reduces the circuit size by approximately 30\% and yields circuits close to optimal. For $N=64$ and $128$, simulated annealing is shown to discover smaller circuits compared with random search.
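A sketch of the reordering search described above, using simulated annealing over address permutations. The `proxy_cost` here is a hypothetical stand-in (the paper's actual proxy metric is not reproduced on this page): it sums Hamming distances between data words at adjacent addresses, on the intuition that similar neighbors tend to share product terms after ESOP minimization.

```python
import math
import random

random.seed(1)

def proxy_cost(order, data):
    """Hypothetical proxy for post-minimization circuit size: total
    Hamming distance between data words at adjacent addresses."""
    return sum(bin(data[order[i]] ^ data[order[i + 1]]).count("1")
               for i in range(len(order) - 1))

def anneal(data, steps=5000, t0=2.0):
    """Simulated annealing over address permutations with swap moves."""
    order = list(range(len(data)))
    cost = proxy_cost(order, data)
    best_order, best_cost = order[:], cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling schedule
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]   # propose a swap
        new = proxy_cost(order, data)
        if new <= cost or random.random() < math.exp((cost - new) / t):
            cost = new                            # accept the move
            if cost < best_cost:
                best_order, best_cost = order[:], cost
        else:
            order[i], order[j] = order[j], order[i]  # revert
    return best_order, best_cost

data = [random.randrange(256) for _ in range(8)]  # toy N = 8 database
identity_cost = proxy_cost(list(range(8)), data)
best_order, best_cost = anneal(data)
print(identity_cost, best_cost)
```

As in the paper's pipeline, the cheap proxy is evaluated thousands of times inside the annealer, and only the final ordering would be handed to ESOP minimization and full compilation.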
Discrete-variable assisted error correction of continuous-variable quantum information
This paper presents a new quantum error correction method for continuous-variable quantum systems that uses discrete-variable ancilla qubits instead of the difficult-to-prepare GKP states. The approach can suppress infidelity by over 20% and offers a more practical path to implementing error correction in hybrid quantum systems.
Key Contributions
- Novel CV quantum error correction scheme using DV ancilla instead of GKP states
- Demonstration of >20% infidelity suppression with single-qubit ancilla
- New oscillator-in-oscillator code architecture without GKP states
- Practical implementation pathway for CV QEC on realistic platforms
View Full Abstract
Robust continuous-variable (CV) quantum information processing requires correcting realistic errors in bosonic systems, but all existing schemes rely on auxiliary Gottesman-Kitaev-Preskill (GKP) states, whose preparation and operation are demanding on many platforms. In this work, we propose a novel CV quantum error correction (QEC) scheme that utilizes a broadly accessible resource: a discrete-variable (DV) ancilla. Our scheme extracts information about the CV displacement into the DV ancilla; measuring the ancilla allows one to counteract the unwanted displacement error. We show that a simple single-qubit ancilla can already suppress CV infidelity by more than 20%. By concatenating with DV QEC codes, our scheme is robust against physical errors in hybrid CV-DV systems and yields a new class of oscillator-in-oscillator code that does not involve GKP states. Our work facilitates the implementation of CV QEC on realistic platforms.
Error Correction in Lattice Quantum Electrodynamics with Quantum Reference Frames
This paper explores how gauge symmetries in lattice quantum electrodynamics can be understood as quantum error-correcting codes, showing that gauge redundancy serves as a resource for protecting quantum information. The authors construct explicit error recovery operations using quantum reference frames and demonstrate two QECC structures within lattice QED.
Key Contributions
- Established lattice QED as a quantum error-correcting code beyond stabilizer codes
- Constructed explicit recovery operations using quantum reference frames for both gauge and fermionic sectors
- Demonstrated how gauge symmetry provides encoding structure that supports quantum error correction
View Full Abstract
Is gauge symmetry merely a redundancy in our description, or does it carry a deeper information-theoretic significance? Quantum error-correcting codes (QECCs) show that redundancy can serve as a resource for protecting information against noise. In this work, we ask whether gauge theories can be understood in similar terms, and make this idea concrete in lattice quantum electrodynamics (QED), building on and extending earlier works that established a bridge between gauge systems, stabilizer codes, and quantum reference frames (QRFs). For Abelian gauge groups, we show that explicit recovery operations can be constructed using group-theoretical methods for error sets determined by both ideal and non-ideal QRFs. Applied to lattice QED, this yields two QECC structures: one in the pure-gauge sector and one including fermions. We construct a gauge-field QRF based on spanning trees of the lattice and a fermionic field QRF from the matter field, thereby making explicit how physical information is encoded. While the syndromes of gauge-violating errors associated with constraint measurements are generically degenerate, QRFs resolve this degeneracy and single out families of correctable errors. This establishes lattice QED as a QECC beyond the stabilizer setting and shows concretely how gauge symmetry provides an encoding structure that supports error correction.
Gauss law codes and vacuum codes from lattice gauge theories
This paper develops a framework for creating quantum error correcting codes from lattice gauge theories, showing how gauge symmetries can be used to protect quantum information. The work demonstrates connections between quantum error correction and gauge theory physics, with potential applications for simulating gauge theories on noisy quantum computers.
Key Contributions
- Comprehensive framework for constructing QECCs from Abelian lattice gauge theories using quantum reference frames
- Development of two classes of codes: Gauss law codes and vacuum codes with detailed characterization of their algebraic structures
- Demonstration of unitary equivalence between vacuum codes and pure gauge theory codes under specific conditions
View Full Abstract
We develop a comprehensive framework for constructing quantum error correcting codes (QECCs) from Abelian lattice gauge theories (LGTs) using quantum reference frames (QRFs) as a unifying formalism. We consider LGTs with arbitrary compact Abelian gauge groups supported on lattices in arbitrary numbers of spatial dimensions, and we work with both pure gauge theories and theories with couplings to bosonic and fermionic matter. The codes that we construct fall into two classes: First, Gauss law codes identify the code subspace with the full gauge-invariant sector of the theory. In models with matter coupled to gauge fields, these codes inherit a natural subsystem structure in which gauge-invariant Wilson loops and dressed matter excitations factorize the code space. Second, vacuum codes restrict the code subspace to the matter vacuum sector within the gauge-invariant subspace, yielding codes where errors correspond to gauge-invariant charge excitations rather than to violations of the Gauss law. Despite their distinct setup, we show that when the gauge group is finite, vacuum codes are unitarily equivalent to pure gauge theory Gauss law codes, and that when the group is continuous, this is only true upon a charge coarse-graining of the vacuum code. In all cases, QRFs provide a systematic apparatus for fully characterizing the codes' algebraic structures and correctable error sets. For clarity, we illustrate our general results in $\mathbb{Z}_2$-gauge theory, as well as in scalar and fermionic QED. These findings offer fundamental insights into the parallelism between quantum error correction and gauge theory and point toward practical advantages for simulating LGTs on noisy quantum devices.
Adaptive Deformation of Color Code in Square Lattices with Defects
This paper develops methods to adapt color code quantum error correction to work on hardware with defective qubits, proposing a universal scheme that handles both data and ancilla qubit defects while maintaining low error rates and supporting fault-tolerant operations.
Key Contributions
- Universal superstabilizer scheme for handling data qubit defects in arbitrary stabilizer codes
- Concrete repair methods for isolated defects in color codes on square lattices
- Two optimization schemes for ancilla qubit defects that avoid resource waste
- Comprehensive defect adaptive architecture supporting transversal Clifford gates and lattice surgery
View Full Abstract
Quantum error correction is a crucial technology for fault-tolerant quantum computing. On superconducting platforms, hardware defects in large-scale quantum processors can disrupt the regular lattice structure of topological codes and impair their error correction capabilities. Although defect-adaptive methods for surface codes have been extensively studied, other topological codes such as color codes still lack a systematic framework for handling defects. To address this issue, we propose a universal superstabilizer scheme applicable to data qubit defects in arbitrary stabilizer codes. Based on this scheme, we develop concrete repair methods for isolated defects of both internal data qubits and ancilla qubits in color codes defined on square lattices. Furthermore, for ancilla qubit defects, we present two optimization schemes. One scheme reuses neighboring ancilla qubits, and the other employs iSWAP gates. Unlike conventional approaches that directly disable neighboring data qubits and thus cause resource waste, both of our schemes avoid such waste and consequently achieve a lower logical error rate. Integrating the above techniques, we construct a comprehensive defect-adaptive architecture for color codes to handle various defect clusters. We also show that our scheme supports a full transversal Clifford gate set and lattice surgery operations. These results provide a systematic theoretical pathway for deploying robust and low-overhead color codes on defective quantum hardware.
Dynamical decoupling and quantum error correction with SU(d) symmetries
This paper develops a general framework for dynamical decoupling in qudit (multi-level quantum) systems using Lie group theory, extending beyond the typical qubit case. The authors show how to systematically identify decoupling sequences for higher-dimensional quantum systems and demonstrate that the same mathematical framework unifies dynamical decoupling with quantum error correction.
Key Contributions
- General framework for dynamical decoupling in qudit systems based on SU(d) symmetries and Lie group theory
- Unification of dynamical decoupling and quantum error correction through symmetry-based approach
- Construction of new pulse sequences for qutrit systems and spin-1 systems with practical experimental considerations
View Full Abstract
Dynamical decoupling is a long-established and effective way to suppress unwanted interactions in qubit systems, enabling advances in fields ranging from quantum metrology to quantum computing. For general qudit systems, however, comparable protocols remain rare, mainly because Hamiltonian engineering in higher dimensions lacks the geometric intuition available for qubits. Here we present a general framework for dynamical decoupling in qudit systems, based on Lie group representation theory. By extending the group theory approach to dynamical decoupling, we show how decoupling groups can be systematically identified among the finite subgroups of SU(d) by analyzing their access to the irreducible components of the operator space. As an application, we construct new pulse sequences for interacting qutrit systems based on finite subgroups of SU(3), and show how subgroup factorizations and group orientations can be exploited to obtain shorter and more experimentally practical protocols for spin-1 systems with large zero-field splitting. We further show that the same symmetry-based framework yields quantum error-correcting codes: whenever a finite subgroup of SU(d) acts as a decoupling group for the relevant error algebra, the associated one-dimensional symmetry sectors define codespaces satisfying the Knill-Laflamme conditions, thereby unifying dynamical decoupling and quantum error correction in multi-level quantum systems.
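The decoupling condition can be checked numerically in the simplest (qubit, i.e., SU(2)) case, where the single-qubit Pauli group acts as a decoupling group: its twirl projects any error Hamiltonian onto a multiple of the identity, a standard fact shown here as a sketch of the group-averaging idea the paper generalizes to SU(d).

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def twirl(H, group):
    """First-order average Hamiltonian under a decoupling group:
    (1/|G|) * sum_g  g H g^dagger."""
    return sum(g @ H @ g.conj().T for g in group) / len(group)

# Arbitrary Hermitian error Hamiltonian.
H = np.array([[0.5, 0.7 - 0.2j],
              [0.7 + 0.2j, -0.1]])
H_avg = twirl(H, [I, X, Y, Z])

# The Pauli twirl kills every traceless component; only (tr H / 2) * I
# survives, so the error Hamiltonian is decoupled to first order.
print(np.allclose(H_avg, np.trace(H) / 2 * I))
```

The paper's contribution is to carry out this same program for finite subgroups of SU(d), where identifying which irreducible components of operator space a subgroup can average away is the hard part.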
Fault-Tolerant One-Shot Entanglement Generation with Constant-Sized Quantum Devices in the Plane
This paper presents a fault-tolerant protocol that can generate high-fidelity entangled Bell pairs between distant qubits on a 2D grid in constant time, even in the presence of noise. The protocol works with constant-sized quantum devices and requires only a grid that scales linearly with distance in one dimension and polylogarithmically in the other.
Key Contributions
- First one-shot fault-tolerant entanglement generation protocol for 2D grids with constant-sized devices
- Demonstration of long-range localizable entanglement in short-range entangled 2D states robust to local Pauli noise
- Construction of 2D-local stabilizer Hamiltonian with long-range entanglement at finite temperature
View Full Abstract
Consider a rectangular grid of qubits in 2D with single-qubit and nearest-neighbor two-qubit operations subject to local stochastic Pauli noise. At different length scales, this setup describes both a single quantum computing device with geometrically limited connectivity between qubits arranged on a disc, and planar networks composed of quantum repeater stations of constant size. We give a protocol which robustly generates entanglement between distant qubits in this setup. For noise below a constant threshold error strength, it generates a constant-fidelity Bell pair between qubits separated by an arbitrarily large distance $R$. To generate distance-$R$ entanglement, a rectangular grid of qubits of dimensions $\Theta(R)\times \Theta(\mathsf{poly}(\log R))$ suffices. Our protocol applies quantum operations in one shot, establishing a Bell state in constant time up to a known Pauli correction. In contrast, existing entanglement generation protocols either require local quantum devices controlling a number of qubits growing with the targeted distance, or are not single-shot, i.e., have a distance-dependent execution time. The protocol leverages many-body entanglement in networks and provides the first example of a short-range entangled state in 2D with long-range localizable entanglement robust to local stochastic Pauli noise. As an immediate corollary, we construct a 2D-local stabilizer Hamiltonian whose Gibbs states possess long-range localizable entanglement at constant positive temperature.
A plug-and-play superconducting quantum controller at millikelvin temperatures enables exceeding 99.9% average gate fidelity
This paper presents a superconducting quantum controller that operates at millikelvin temperatures and can directly connect to quantum bits, achieving over 99.9% gate fidelity with very low power consumption. The controller addresses a major bottleneck in scaling up superconducting quantum computers by enabling high-precision control operations at the same ultra-cold temperatures where the qubits operate.
Key Contributions
- Development of a plug-and-play superconducting quantum controller operating at 10 mK with direct chip-to-chip qubit interconnection
- Achievement of 99.9% average Clifford gate fidelity with ultralow power consumption of 0.121 fJ per gate operation
- Demonstration of solution to control bottleneck in large-scale superconducting quantum computing
View Full Abstract
The development of large-scale superconducting quantum computing requires efficient in-situ control methods that allow high-fidelity operations at millikelvin temperatures. Superconducting circuits based on Josephson junctions offer a promising solution due to their high speed, low power dissipation, and cryogenic nature. Here, we report a superconducting quantum controller that enables direct chip-to-chip interconnection with qubits at 10 mK and high-fidelity, all-digital manipulation. Randomized benchmarking reveals a uniformly high average Clifford fidelity of 99.9% with leakage to high energy levels on the order of $10^{-4}$, and an estimated average gate operation energy of 0.121 fJ, demonstrating the potential to resolve the control bottleneck in superconducting quantum computing.
PQC-Enhanced QKD Networks: A Layered Approach
This paper presents a hybrid network security architecture that combines Quantum Key Distribution (QKD) with Post-Quantum Cryptography (PQC) to create secure communication networks. The approach uses a layered design where QKD provides hop-by-hop security between trusted nodes, while PQC enables end-to-end encryption across the entire network.
Key Contributions
- Layered network architecture combining QKD and PQC for scalable quantum-safe security
- Practical implementation using open-source components with validation in simulated and lab environments
- Compositional security analysis preserving individual component security properties
View Full Abstract
We present a layered and modular network architecture that combines Quantum Key Distribution (QKD) and Post-Quantum Cryptography (PQC) to provide scalable end-to-end security across long distance multi-hop, trusted-node quantum networks. To ensure interoperability and efficient practical deployment, hop-wise tunnels between physically secured nodes are protected by WireGuard with periodically rotated pre-shared keys sourced via the ETSI GS QKD 014 interface. On top, Rosenpass performs a PQC key exchange to establish an end-to-end data channel without modifying deployed QKD devices or network protocols. This dual-layer composition yields post-quantum forward secrecy and authenticity under practical assumptions. We implement the design using open-source components and validate and evaluate it in simulated and lab test-beds. Experiments show uninterrupted operation over multi-hop paths, low resource footprint and fail-safe mechanisms. We further discuss the design's compositional security, wherein the security of each individual component is preserved under their combination and outline migration paths for operators integrating QKD-aware overlays in existing infrastructures.
Phase-Fidelity-Aware Truncated Quantum Fourier Transform for Scalable Phase Estimation on NISQ Hardware
This paper introduces an optimized quantum Fourier transform algorithm called PFA-TQFT that reduces the number of gates needed for quantum phase estimation from O(m²) to O(m log m) by intelligently truncating low-fidelity operations. The method maintains estimation accuracy while making quantum phase estimation more practical on current noisy quantum computers.
Key Contributions
- Development of Phase-Fidelity-Aware Truncated QFT algorithm that reduces gate complexity from O(m²) to O(m log m)
- Theoretical bound showing estimation error grows by at most O(2^-d) while achieving significant gate count reduction
- Hardware-calibrated truncation strategy that adapts to native gate fidelities of specific quantum devices
- Demonstration of noise-truncation synergy where the truncated algorithm outperforms full QFT under realistic NISQ noise conditions
View Full Abstract
Quantum phase estimation (QPE) is central to numerous quantum algorithms, yet its standard implementation demands an $\mathcal{O}(m^{2})$-gate quantum Fourier transform (QFT) on $m$ control qubits - a prohibitive overhead on near-term noisy intermediate-scale quantum (NISQ) devices. We introduce the Phase-Fidelity-Aware Truncated QFT (PFA-TQFT), a family of approximate QFT circuits parameterised by a truncation depth $d$ that omits controlled-phase rotations below a hardware-calibrated fidelity threshold $\epsilon$. Our central result establishes $\mathrm{TV}(P_{\varphi},P_{\varphi}^{d})\leq \pi(m-d)/2^{d}$, showing that for $d=\mathcal{O}(\log m)$ circuit size collapses from $\mathcal{O}(m^{2})$ to $\mathcal{O}(m\log m)$ while estimation error grows by at most $\mathcal{O}(2^{-d})$. We characterise $d^{\star}=\lfloor\log_{2}(2\pi/\epsilon_{2q})\rfloor$ directly from native gate fidelities, demonstrating a 31.3-43.7% gate-count reduction at $m=30$ on IBM Eagle/Heron and IonQ Aria with negligible accuracy loss. Numerical experiments on the transverse-field Ising model confirm all theoretical predictions and reveal a "noise-truncation synergy": PFA-TQFT outperforms full QFT under NISQ noise $\epsilon_{2q}\gtrsim 2\times10^{-3}$.
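The truncation idea can be sketched without the paper's fidelity calibration: build the textbook QFT circuit and simply drop the controlled-phase rotations $R_k$ with $k$ above a cutoff $d$, since their angles $2\pi/2^k$ are tiny. A self-contained numpy simulation (illustrative only; the hardware-aware choice of $d^{\star}$ from native gate fidelities is not reproduced):

```python
import numpy as np

def apply_h(state, q, m):
    """Hadamard on qubit q (big-endian) of an m-qubit state vector."""
    s = np.moveaxis(state.reshape([2] * m).copy(), q, 0)
    a, b = s[0].copy(), s[1].copy()
    s[0] = (a + b) / np.sqrt(2)
    s[1] = (a - b) / np.sqrt(2)
    return np.moveaxis(s, 0, q).reshape(-1)

def apply_cphase(state, a, b, theta, m):
    """Controlled-phase: multiply amplitudes with bits a and b set by e^{i theta}."""
    idx = np.arange(2 ** m)
    both = ((idx >> (m - 1 - a)) & 1) & ((idx >> (m - 1 - b)) & 1)
    return state * np.where(both == 1, np.exp(1j * theta), 1.0)

def qft(state, m, d=None):
    """Textbook QFT (bit-reversed output); if d is set, drop rotations R_k, k > d."""
    gates = 0
    for i in range(m):
        state = apply_h(state, i, m)
        gates += 1
        for k in range(2, m - i + 1):
            if d is not None and k > d:
                continue  # truncation: skip the small-angle rotations
            state = apply_cphase(state, i, i + k - 1, 2 * np.pi / 2 ** k, m)
            gates += 1
    return state, gates

m = 10
rng = np.random.default_rng(0)
psi = rng.normal(size=2 ** m) + 1j * rng.normal(size=2 ** m)
psi /= np.linalg.norm(psi)

full, g_full = qft(psi, m)          # m + m(m-1)/2 gates
trunc, g_trunc = qft(psi, m, d=6)   # only O(m d) gates
overlap = abs(np.vdot(full, trunc))
print(g_full, g_trunc, overlap)     # fewer gates at a tiny fidelity cost
assert g_trunc < g_full and overlap > 0.95
```

Each dropped rotation perturbs the output by at most its angle, so the total deviation is bounded by a geometric tail in $2^{-d}$, which is the mechanism behind the paper's $\mathcal{O}(2^{-d})$ error bound.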
Phase-Stable Hologram Updates for Large-Scale Neutral-Atom Array Reconfiguration
This paper introduces a new algorithm called weighted-projective Gerchberg-Saxton (WPGS) that improves how large arrays of neutral atoms are assembled and reconfigured for quantum computing by maintaining phase stability when updating holographic optical tweezers, preventing atom loss during transitions.
Key Contributions
- Development of the WPGS algorithm that enforces inter-frame trap-phase continuity to prevent transient trap loss during hologram updates
- Demonstration of scalable neutral-atom array reconfiguration with over 1000 traps including 2D/3D configurations and multilayer assembly
View Full Abstract
Assembling large-scale, defect-free Rydberg atom arrays is a key technology for neutral-atom quantum computation. Dynamic holographic optical tweezers enable the assembly and reconfiguration of such arrays, but phase mismatches between successive holograms can induce destructive interference and transient trap loss during spatial-light-modulator refresh. In this work, we introduce the weighted-projective Gerchberg--Saxton (WPGS) algorithm, a phase-stable approach to dynamic hologram updates for large-scale Rydberg atom-array reconfiguration. By enforcing inter-frame trap-phase continuity while retaining weighted intensity equalization, WPGS suppresses refresh-induced transient degradation. The phase-difference distribution between consecutive holograms further provides a simple diagnostic of transient robustness. Moreover, enforcing the phase constraint reduces the number of iterations required at each update step, thereby accelerating hologram generation. Numerical simulations of 2D and 3D reconfiguration with more than $10^3$ traps, including multilayer assembly and interlayer transport, show robust transient intensities and significantly faster updates than conventional methods. These results establish inter-frame phase continuity as a practical design principle for dynamic holographic control and scalable neutral-atom array reconfiguration.
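The baseline that WPGS builds on can be sketched in a few lines: plain Gerchberg-Saxton alternates between the hologram plane (phase-only constraint) and the trap plane (target-amplitude constraint), modeled here with a 2D FFT. This is only the unweighted baseline with a warm start standing in for the paper's inter-frame phase-continuity constraint; the trap grid, positions, and iteration count are illustrative:

```python
import numpy as np

def gerchberg_saxton(target_amp, phi0, iters=30):
    """Plain Gerchberg-Saxton: find a phase-only hologram whose far field
    (modeled as a 2D FFT) reproduces the target trap amplitudes."""
    phi = phi0.copy()
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phi))            # propagate SLM -> trap plane
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitudes
        near = np.fft.ifft2(far)                       # propagate back
        phi = np.angle(near)                           # phase-only SLM constraint
    return phi

N = 32
rng = np.random.default_rng(0)
traps = [(5, 7), (16, 20), (25, 10)]        # desired tweezer positions
target = np.zeros((N, N))
for t in traps:
    target[t] = 1.0

phi = gerchberg_saxton(target, rng.uniform(0, 2 * np.pi, (N, N)))

# Fraction of optical power landing in the trap pixels
far = np.abs(np.fft.fft2(np.exp(1j * phi))) ** 2
eff = sum(far[t] for t in traps) / far.sum()
assert eff > 0.5   # most of the light is steered into the traps

# Warm-starting the next frame from the current phase (the idea behind
# inter-frame phase continuity) keeps consecutive holograms close
phi_next = gerchberg_saxton(target, phi, iters=5)
assert np.abs(np.angle(np.exp(1j * (phi_next - phi)))).mean() < 1.0
```

In the uncorrelated case, successive holograms carry unrelated trap phases, and the interference during the SLM refresh is what causes the transient trap loss the paper targets.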
Digital-Analog Quantum Simulation and Computing: A Perspective on Past and Future Developments
This perspective paper reviews the emerging digital-analog quantum computing paradigm, which combines large analog quantum operations (from native platform interactions) with digital quantum gates to achieve both scalability and universality. The author provides an overview of the field's evolution over the past decade and discusses future possibilities for this hybrid approach.
Key Contributions
- Comprehensive review of digital-analog quantum computing paradigm evolution
- Analysis of how hybrid approaches can overcome limitations of purely digital or analog quantum computing
- Perspective on future developments combining scalability with universality
View Full Abstract
Quantum simulation and computing have traditionally been based on two main paradigms, namely digital and analog. In the digital paradigm, usually single and two-qubit gates (where "qubit" is short for "quantum bit") are employed as building blocks for scalable, universal quantum computing, although errors add up fast and error correction will ultimately be needed for scaling up. In the analog paradigm, large analog blocks are normally employed for a unitary dynamics that carries out the computation, enabling quantum operations on many qubits with reduced errors, but with the drawback of a limited choice of evolutions and lack of universality. In the past decade, a new paradigm has emerged, showing interesting possibilities for quantum simulation and computing in the near and mid term. This is the paradigm of digital-analog quantum technologies, which proposes to combine the best of both paradigms: large analog blocks, provided by native interactions of the employed quantum platform, enabling scalability, combined with digital gates, allowing for more versatility and, ultimately, universality. In this Perspective, I give an overview of the evolution of the field over the past decade, and an outlook for its future possibilities.
Noise tolerance via reinforcement in the quantum search problem
This paper demonstrates that reinforcement techniques can exponentially improve quantum search algorithms, reducing computation time from √D to ln D steps and significantly increasing noise tolerance. The researchers use numerical simulations to show that reinforced quantum search maintains higher success probability in noisy environments compared to standard quantum search algorithms.
Key Contributions
- Exponential speedup of quantum search from √D to ln D complexity through reinforcement
- Demonstrated exponentially larger noise threshold for reinforced quantum search algorithms
- Numerical characterization of noise tolerance for both coherent and incoherent noise in multi-qubit and qudit systems
View Full Abstract
We find that reinforcement exponentially reduces computation time of the quantum search problem from $\sqrt{D}$ to $\ln D$ in a $D$-dimensional system. Therefore, a reinforced quantum search is expected to exhibit an exponentially larger noise threshold compared to a standard search algorithm in a noisy environment. We use numerical simulations to characterize the level of noise tolerance via reinforcement in the presence of both coherent and incoherent noise, considering a system of $N$ qubits and a single $D$-level (qudit) system. Our results show that reinforcement significantly enhances the algorithm's success probability and improves the scaling of its computation time with system size. These findings indicate that reinforcement offers a promising strategy for error mitigation, especially when a precise noise model is unavailable.
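The $\sqrt{D}$ baseline that the reinforced scheme improves upon is standard Grover search, which is easy to simulate directly on the amplitude vector. A minimal numpy sketch of that baseline (the paper's reinforcement procedure and noise models are not reproduced here):

```python
import numpy as np

D, marked = 256, 137               # search-space size and the marked item
state = np.ones(D) / np.sqrt(D)    # uniform superposition over D items

# Standard (unreinforced) Grover search: ~ (pi/4) sqrt(D) iterations
k = int(np.floor(np.pi / 4 * np.sqrt(D)))
for _ in range(k):
    state[marked] *= -1                  # oracle: flip the marked amplitude
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

p_success = state[marked] ** 2
print(k, p_success)   # 12 iterations, success probability near 1
assert p_success > 0.9
```

The $\mathcal{O}(\sqrt{D})$ iteration count is exactly the cost that the reinforcement technique is claimed to reduce to $\mathcal{O}(\ln D)$, which is also why it can tolerate exponentially more noise before the success probability degrades.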
Microstructural Topology as a Prescriptor for Quantum Coherence: Towards A Unified Framework for Decoherence in Superconducting Qubits
This paper develops a theoretical framework to separate different causes of decoherence in superconducting quantum bits (qubits) by distinguishing between material microstructure effects and device geometry effects. The authors propose a way to independently measure and control these factors to better engineer quantum devices with longer coherence times.
Key Contributions
- Introduction of separable framework distinguishing classical and quantum microstructure effects from geometry-dependent coupling in superconducting qubits
- Development of channel-specific prescriptor methodology for independent optimization of decoherence loss pathways
- Establishment of perturbative separability criterion and falsifiable experimental protocol for validating the theoretical framework
View Full Abstract
In superconducting quantum circuits, decoherence improvements are frequently obtained through process interventions that simultaneously modify surface chemistry, microstructural topology, and device geometry, leaving mechanistic attribution structurally underdetermined. Predictive materials engineering requires measurable structural statistics to be separated from geometry-dependent coupling coefficients into independently testable factors. We introduce the concept of classical and quantum microstructure. In that context, we formulate a channel-wise separable framework for decoherence in superconducting transmon qubits in which each loss channel is described by a reduced prescriptor. Here, a channel-specific microstructural state variable is determined independently of device geometry, and a geometry-dependent coupling functional is computable from field solutions without reference to surface chemistry. We derive this product form from a spatially resolved kernel representation and establish a perturbative separability criterion that defines the regime where independent variation of the variables is valid. The framework specifies five prescriptor classes for dominant loss pathways in transmon-class devices. Falsifiability is operationalized through a pre-committed 2x2 experimental protocol in which the variables must satisfy independent ratio checks within propagated uncertainty. A Minimum-Dataset Specification standardizes reporting for cross-laboratory inference. Part I establishes the conceptual and mathematical architecture; coordinated experimental validation is reserved for Part II.
Measurement-induced state transitions across the fluxonium qubit landscape
This paper theoretically studies measurement-induced state transitions in fluxonium qubits across different parameter ranges, finding that lighter fluxoniums are less susceptible to unwanted transitions during readout than heavier ones. The research aims to improve qubit readout fidelity by understanding how measurement drives can cause population transfer from computational states to higher-energy states.
Key Contributions
- Systematic theoretical analysis of measurement-induced state transitions in fluxonium qubits across experimentally relevant parameter ranges
- Identification that lighter fluxoniums exhibit reduced susceptibility to unwanted state transitions during readout due to lower multi-photon resonance density and more harmonic charge operator structure
- Investigation of superinductor array mode effects on measurement fidelity
View Full Abstract
Understanding the mechanisms that limit high-fidelity readout in circuit quantum electrodynamics is essential for its optimization. Multi-photon resonances are understood to be a limiting factor, causing population transfer from the computational states to higher-energy states under drive. This effect, known as measurement-induced state transitions, has been extensively studied for the transmon qubit. While this exploration has begun for the fluxonium qubit, a systematic study of this effect is lacking. Here, we bridge this gap by theoretically studying measurement-induced state transitions in the fluxonium qubit over a wide range of parameters, comprising essentially all experimentally explored ranges. We find that lighter fluxoniums are less susceptible to these state transitions when compared to their heavier counterparts. We attribute this effect to the combination of lower density of multi-photon resonances, a smaller requisite coupling for a given dispersive shift, and a more harmonic-like structure of the charge operator. We confirm the validity of our analysis by performing time-dependent readout simulations. Finally, we consider the impact of the superinductor's array modes on measurement-induced state transitions over a large range of parameters.
Accelerating Quantum Tensor Network Simulations with Unified Path Variations and Non-Degenerate Batched Sampling
This paper develops improved computational methods for simulating noisy quantum systems using tensor networks, achieving speedups of over 100 million times compared to traditional methods. The work focuses on optimizing quantum trajectory simulations through better path calculations, parallel sampling techniques, and flexible contraction frameworks.
Key Contributions
- Development of error-independent unified path variation for tensor network contractions
- Implementation of non-degenerate batched sampling achieving >10^8x speedup
- Creation of flexible and optimized contraction framework for quantum trajectory simulations
View Full Abstract
Quantum trajectory methods reduce the computational overhead of simulating noisy quantum systems, approximating them with $m$ stochastically sampled $2^n$-entry quantum statevectors rather than exact $2^{2n}$-entry density matrices. Recently, Pre-Trajectory Sampling with Batched Execution (PTSBE) has dramatically increased the data collection rate of these methods. While statevector PTSBE has demonstrated data collection speedups of over $10^6 \times$, tensor network implementations only achieved $\sim 15 \times$ speedup. This comparatively modest tensor network advantage stemmed from 1) contraction path recalculations, 2) sequential tensor network sampling, and 3) inflexible/unoptimized contraction hyperparameters. In this manuscript, we increase PTSBE's tensor network data collection rate to more than $10^8\times$ that of traditional trajectory methods by developing 1) error-independent unified path variation, 2) non-degenerate tensor network sampling, and 3) a flexible/optimized contraction framework. While our methods are particularly powerful for accelerating non-proportional sampling, we also demonstrate a more than $1000\times$ speedup for more general quantum simulations.
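The trade-off the abstract starts from (pure-state trajectories instead of density matrices) can be shown on a single qubit: a stochastic Pauli channel is simulated either exactly on the density matrix or by averaging sampled pure-state trajectories. A minimal numpy sketch (the paper's tensor-network machinery and PTSBE batching are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = np.diag([1.0, -1.0]).astype(complex)

# |+> state subject to a phase-flip (dephasing) channel with probability p
psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
p = 0.2

# Exact density-matrix description: rho -> (1-p) rho + p Z rho Z
rho0 = np.outer(psi0, psi0.conj())
rho_exact = (1 - p) * rho0 + p * (Z @ rho0 @ Z)

# Trajectory description: sample m pure states (2-entry vectors) and
# average their projectors, instead of storing the 4-entry density matrix
m = 50_000
rho_traj = np.zeros((2, 2), dtype=complex)
for _ in range(m):
    psi = Z @ psi0 if rng.random() < p else psi0
    rho_traj += np.outer(psi, psi.conj())
rho_traj /= m

# Monte Carlo error shrinks as 1/sqrt(m)
assert np.abs(rho_traj - rho_exact).max() < 0.02
```

For $n$ qubits the gap is $2^n$ entries per trajectory versus $2^{2n}$ for the density matrix, which is why accelerating trajectory sampling, as PTSBE does, pays off so heavily.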
Time evolution of impurity models and their universality for quantum computation
This paper proves that time-independent impurity Hamiltonians (quantum systems with a few strongly interacting fermionic modes coupled to many bath modes) can perform universal quantum computation when starting from product states of fermions. The authors establish computational universality and determine that the impurity size scales as O(S log S) for computations of depth S.
Key Contributions
- Proof that time-independent impurity Hamiltonians achieve computational universality on N qubits with fermionic product state inputs
- Established scaling relationship showing impurity size grows as O(S log S) for computation depth S
View Full Abstract
Impurity Hamiltonians are systems of $N$ fermionic modes where $O(1)$ of them interact among themselves via quartic (or higher order) fermion terms, while coupling quadratically with $O(N)$ bath modes. Without the quartic interactions, these systems are classically simulable with $O(N^3)$ resources. It was proved that the time-dependent evolution of these systems can perform universal quantum computation. The question of whether or not this remains true for time-independent evolution remains open. Here, we prove that the time evolution of generic time-independent impurity Hamiltonians on $O(N)$ qubits is universal on $N$ qubits if the input state is a product state of fermions in any single particle basis. In our proof we find that for a computation of depth $S$, the size of the impurity scales as $O(S\log S)$.
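The $O(N^3)$ classical simulability claim for the quartic-free case rests on a standard fact: a quadratic fermion Hamiltonian evolves the $N \times N$ correlation matrix by unitary conjugation in mode space, never touching the $2^N$-dimensional many-body space. A numpy sketch under that standard free-fermion formalism (index/dagger conventions vary by source; the paper's impurity construction is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40                         # number of fermionic modes (impurity + bath)

# Quadratic (impurity-free) Hamiltonian H = sum_ij h_ij c_i^dag c_j:
# fully described by an N x N Hermitian matrix, not a 2^N x 2^N one
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
h = (M + M.conj().T) / 2

# Fermionic product state: first N//2 single-particle modes occupied,
# encoded in the correlation matrix C_ij = <c_i^dag c_j>
C0 = np.diag([1.0] * (N // 2) + [0.0] * (N - N // 2)).astype(complex)

# Time evolution is unitary conjugation by exp(-i h t) on mode space,
# an O(N^3) dense-matrix operation
t = 1.3
E, V = np.linalg.eigh(h)
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
Ct = U @ C0 @ U.conj().T

assert np.isclose(np.trace(Ct).real, N // 2)          # particle number conserved
occ = np.linalg.eigvalsh(Ct)
assert occ.min() > -1e-9 and occ.max() < 1 + 1e-9     # valid mode occupations
```

The paper's point is that adding even an $O(1)$-sized quartic impurity to this efficiently simulable system, with a fermionic product state as input, already suffices for universal quantum computation.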
Multivariate quantum reservoir computing with discrete and continuous variable systems
This paper develops methods for quantum reservoir computing to process multidimensional time series data instead of just single-variable data. The researchers propose three different encoding schemes and test them on both discrete and continuous quantum systems, finding that the best approach depends on the specific task and that quantum effects enhance performance.
Key Contributions
- Framework for multivariate data processing in quantum reservoir computing with three encoding schemes
- Introduction of mixing capacity metric to evaluate reservoir effectiveness in combining independent data streams
- Demonstration that peak computational performance correlates with non-classical quantum effects
View Full Abstract
Quantum reservoir computing is a promising paradigm for processing temporal data. So far, the primary focus has been on univariate time series. However, the most relevant and complex real-world data is multidimensional. In this paper, we establish an extensive framework for multivariate data processing in quantum reservoir computing. We propose and evaluate three multivariate encoding schemes and introduce the mixing capacity as a novel metric to evaluate the effectiveness with which a reservoir combines independent data streams. The computational performance of these proposed schemes is systematically assessed using this metric, as well as on the chaotic Lorenz-63 system prediction task, for two quantum reservoirs based on discrete and continuous-variable quantum systems. Furthermore, we relate the computational performance on these tasks to the underlying quantum properties of the reservoir. Our findings reveal that the optimal encoding method is highly dependent on the reservoir system and the specific task, underlining the importance of a task-specific input design. Moreover, we observe that peak computational performance coincides with the presence of non-classical effects, which indicates that quantum resources play a role in processing multivariate data.
Rapid mixing for high-temperature Gibbs states with arbitrary external fields
This paper studies quantum Gibbs states (thermal equilibrium states) with external fields, showing how these fields create a crossover between separable and entangled states at high temperatures. The authors develop an efficient quantum algorithm for preparing these states and prove that sampling from them can be classically hard, suggesting potential quantum computational advantages.
Key Contributions
- Identified crossover scale for entanglement in high-temperature Gibbs states with external fields
- Developed efficient quantum Gibbs sampler using quasi-local Lindbladian with O(log(n/ε)) mixing time
- Proved classical hardness of sampling from computational basis distribution under certain conditions
- Established quantum advantage framework for state preparation tasks
View Full Abstract
Gibbs states are a natural model of quantum matter at thermal equilibrium. We investigate the role of external fields in shaping the entanglement structure and computational complexity of high-temperature Gibbs states. External fields can induce entanglement in states that are otherwise provably separable, and the crossover scale is $h\asymp \beta^{-1} \log(1/\beta)$, where $h$ is an upper bound on any on-site potential and $\beta$ is the inverse temperature. We introduce a quasi-local Lindbladian that satisfies detailed balance and rapidly mixes to the Gibbs state in $\mathcal{O}(\log(n/\epsilon))$ time, even in the presence of an arbitrary on-site external field. Additionally, we prove that for any $\beta<1$, there exist local Hamiltonians for which sampling from the computational-basis distribution of the corresponding Gibbs state with a sufficiently large external field is classically hard, under standard complexity-theoretic assumptions. Therefore, high-temperature Gibbs states with external fields are natural physical models that can exhibit entanglement and classical hardness while also admitting efficient quantum Gibbs samplers, making them suitable candidates for quantum advantage via state preparation.
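The object under study is easy to construct explicitly for a toy system: a Gibbs state $\rho = e^{-\beta H}/Z$ for a local Hamiltonian with an on-site external field. A numpy sketch via exact diagonalization (the model, $\beta$, and field strength are illustrative; the paper's quasi-local Lindbladian sampler is not reproduced, and exact diagonalization only works for tiny systems, which is exactly why efficient samplers matter):

```python
import numpy as np

def kron(*ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Two-qubit toy model: ZZ coupling plus a transverse on-site field of strength h
beta, h = 0.3, 2.0                       # high temperature, strong field
H = kron(Z, Z) + h * (kron(X, I2) + kron(I2, X))

# Gibbs state rho = e^{-beta H} / Z via exact diagonalization
E, V = np.linalg.eigh(H)
w = np.exp(-beta * E)
w /= w.sum()
rho = V @ np.diag(w) @ V.conj().T

assert np.isclose(np.trace(rho).real, 1.0)    # normalized
assert np.linalg.eigvalsh(rho).min() > 0      # full-rank thermal state
```

The paper's crossover statement concerns how large the field strength $h$ must be, relative to $\beta^{-1}\log(1/\beta)$, before such states can carry entanglement at high temperature.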
Sufficiency and Petz recovery for positive maps
This paper studies how quantum states can be transformed using positive trace-preserving maps, developing mathematical tools based on Jordan algebras to characterize when such transformations are possible. The work extends existing theory about quantum state discrimination and provides conditions for when quantum information processing inequalities become equalities.
Key Contributions
- Generalization of Koashi-Imoto decomposition to positive trace-preserving maps using Jordan algebras
- Proof that equality in data-processing inequality for quantum relative entropy implies existence of recovery maps
- Characterization of when quantum dichotomies can be interconverted by PTP maps
View Full Abstract
We study the interconversion of families of quantum states ("statistical experiments") via positive, trace-preserving (PTP) maps and clarify its mathematical structure in terms of minimal sufficient Jordan algebras, which can be seen to generalize the Koashi-Imoto decomposition to the PTP setting. In particular, we show that Neyman-Pearson tests generate the minimal sufficient Jordan algebra, and hence also the minimal sufficient *-algebra corresponding to the Koashi-Imoto decomposition. As applications, we show that a) equality in the data-processing inequality for the relative entropy or the $\alpha$-$z$ quantum Rényi divergence implies the existence of a recovery map also in the PTP case and b) that two dichotomies can be interconverted by PTP maps if and only if they can be interconverted by decomposable, trace-preserving maps. We thoroughly review the necessary mathematical background on Jordan algebras. As a step beyond the finite-dimensional case, we also prove Frenkel's formula for approximately finite-dimensional von Neumann algebras.
Per-Shot Evaluation of QAOA on Max-Cut: A Black-Box Implementation Comparison with Goemans-Williamson
This paper evaluates the Quantum Approximate Optimization Algorithm (QAOA) for solving Max-Cut problems using default settings without optimization, comparing it against the classical Goemans-Williamson algorithm. The study uses a per-shot statistical framework to assess when QAOA outperforms classical methods under realistic usage conditions.
Key Contributions
- Black-box evaluation methodology for QAOA using default parameters without fine-tuning
- Per-shot statistical framework for comparing quantum and classical algorithm performance
- Realistic benchmark using well-known graph generation models for Max-Cut problems
View Full Abstract
The Quantum Approximate Optimization Algorithm (QAOA) has emerged as a promising approach for addressing combinatorial optimization problems on near-term quantum hardware. In this work, we conduct an empirical evaluation of QAOA on the Max-Cut problem, using the Goemans-Williamson (GW) algorithm as a classical baseline for comparison. Unlike many prior studies, our methodology treats QAOA implementations as black-box optimizers, relying solely on default parameter settings without manual fine-tuning. We evaluate specific off-the-shelf QAOA implementations under default settings, not the algorithmic potential of QAOA with optimized parameters. This reflects a more realistic use case for end users who may lack the resources or expertise for instance-specific optimization. To facilitate fair and informative evaluation, we construct benchmark instances using well-known graph generation models that emulate practical graph structures, avoiding synthetic constructions tailored to either quantum or classical algorithms. A central component of our analysis is a per-shot statistical framework, which tracks the quality of QAOA outputs as a function of the number of circuit executions. This enables probabilistic comparisons with the GW algorithm by examining when and how frequently QAOA surpasses classical performance baselines such as the GW expectation and lower bound. Our results provide insight into the practical applicability of QAOA for Max-Cut and highlight its current limitations, offering a framework that can guide the assessment and development of future QAOA implementations.
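The per-shot framework can be sketched on a toy instance: each circuit execution yields one bitstring, its cut value is compared to a classical baseline, and the per-shot success probability determines how many shots are needed to beat it. In this hypothetical numpy sketch a uniform random sampler stands in for the QAOA circuit, and the 0.878 factor is the Goemans-Williamson approximation-ratio guarantee used as a baseline (the paper's graph models and implementations are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Max-Cut instance: a 4-cycle with one chord
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def cut_value(bits):
    return sum(bits[u] != bits[v] for u, v in edges)

# Brute-force optimum (only feasible for tiny graphs)
opt = max(cut_value([(x >> i) & 1 for i in range(n)]) for x in range(2 ** n))

# Per-shot evaluation: sample bitstrings one "shot" at a time.
# A real QAOA sampler would go here; uniform sampling is a stand-in.
shots = np.array([cut_value(rng.integers(0, 2, n)) for _ in range(2000)])

gw_baseline = 0.878 * opt                       # GW worst-case guarantee
p_beat = np.mean(shots >= gw_baseline)          # per-shot success probability
best_of = lambda k: 1 - (1 - p_beat) ** k       # chance that k shots suffice

print(opt, p_beat, best_of(20))
assert opt == 4
assert 0.0 < p_beat < 1.0
```

The quantity `best_of(k)` is the kind of probabilistic statement the per-shot framework enables: not "what is the expected cut", but "how often, after k circuit executions, has the quantum sampler already matched the classical baseline".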
Thermal Time and Irreversibility from Non-Commuting Observables in Accelerated Quantum Systems
This paper studies how thermal effects in relativistic quantum systems create operationally meaningful time ordering when detectors interact with quantum fields through non-commuting observables. The authors show that uniformly accelerated detectors in vacuum experience thermal responses characterized by Unruh temperature, and that the order of sequential measurements becomes physically distinguishable due to the KMS thermal condition.
Key Contributions
- Demonstrates that temporal ordering becomes operationally meaningful in relativistic quantum systems when detectors couple through non-commuting observables under KMS thermal conditions
- Provides closed-form expressions for order-dependent detector states in terms of dimensionless temperature and energy scale parameters
View Full Abstract
We investigate when temporal ordering becomes operationally meaningful in relativistic quantum field theory using localized detector models. A time parameter alone does not ensure that different sequences of operations are physically distinguishable. We show that distinguishability arises when the state satisfies the Kubo--Martin--Schwinger (KMS) condition and the detector couples through non-commuting observables. We consider uniformly accelerated two-level detectors interacting with a quantum field in the Minkowski vacuum. The restriction of the vacuum to the detector trajectory induces a thermal response characterized by the Unruh temperature and the Tolman profile. For sequential couplings through distinct observables, the reduced detector state depends on the ordering of interactions already at second order, with a dependence controlled by the KMS parameter. This asymmetry is quantified using quantum relative entropy. In a minimal model, the relevant states form a family of non-commuting Gibbs states with identical spectra and different generators, yielding a closed-form expression depending only on the dimensionless combination of temperature and detector energy scale.
Kirkwood-Dirac distributions in classical optics
This paper analyzes Kirkwood-Dirac distributions in classical optics, showing they can be interpreted as generalized mutual coherence functions that connect different optical bases. The work provides a unified explanation for anomalous (complex and negative) values in these distributions as manifestations of optical coherence, with proposed experimental methods for measuring them.
Key Contributions
- Unified interpretation of Kirkwood-Dirac distributions as generalized mutual coherence functions in classical optics
- Explanation of anomalous complex and negative values as direct manifestations of optical coherence
- Proposed experimental methods for determining these distributions using interference techniques
View Full Abstract
We develop a comprehensive analysis of the Kirkwood-Dirac distributions in classical optics, revealing their deep connection with optical coherence as a fundamental concept in optics. From their very definition, the Kirkwood-Dirac distributions emerge as generalized mutual coherence functions involving two different bases instead of just one. This perspective provides a unified interpretation of the so-called anomalous values, that is, complex and negative values, as direct manifestations of coherence. We show that this interpretation consistently applies across all field variables considered in this work, including polarization, interference and wave propagation. Furthermore, we propose diverse methods of experimental determination of these distributions based on interference, in full agreement with their coherence-based interpretation.
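The Kirkwood-Dirac construction described in the abstract is easy to check numerically: for a state rho and two bases, each entry is an overlap product, and coherence between the bases shows up as complex entries. A minimal single-qubit sketch (the state and basis choices here are illustrative, not taken from the paper):

```python
import numpy as np

# Qubit state |psi> = cos(t)|0> + e^{i phi} sin(t)|1>  (illustrative choice)
t, phi = 0.3, 1.1
psi = np.array([np.cos(t), np.exp(1j * phi) * np.sin(t)])
rho = np.outer(psi, psi.conj())

# Two bases: computational (Z) and Hadamard (X), stored as columns
A = np.eye(2, dtype=complex)
B = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Kirkwood-Dirac quasi-distribution Q_ij = <b_j|a_i><a_i|rho|b_j>
Q = np.array([[(B[:, j].conj() @ A[:, i]) * (A[:, i].conj() @ rho @ B[:, j])
               for j in range(2)] for i in range(2)])

print(Q.sum())                          # normalises to Tr(rho) = 1
print(np.any(np.abs(Q.imag) > 1e-12))   # complex entries signal coherence
```

With the relative phase phi removed, the imaginary parts vanish, which is the coherence-based reading of "anomalous" values the abstract describes.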
A Model Context Protocol Server for Quantum Execution in Hybrid Quantum-HPC Environments
This paper presents an AI-driven framework that uses a Model Context Protocol (MCP) server to enable large language models to automatically execute quantum computing workflows on hybrid quantum-HPC systems. The system allows AI agents to process natural language instructions and autonomously run quantum algorithms by managing complex quantum hardware resources.
Key Contributions
- Development of MCP server architecture for quantum computing execution
- Pipeline for interpreting OpenQASM code with automated CUDA-Q workflows
- Asynchronous execution system for remote quantum hardware via Quantinuum emulator
- AI agent framework for abstracting quantum hardware complexities
View Full Abstract
The integration of large language models (LLMs) into scientific research is accelerating the realization of autonomous "AI Scientists." While recent advancements have empowered AI to formulate hypotheses and design experiments, a critical gap remains in the execution of these tasks, particularly in the domain of quantum computing (QC). Executing quantum algorithms requires not only generating code but also managing complex computational resources such as QPUs and high-performance computing (HPC) clusters. In this paper, we propose an AI-driven framework specifically designed to bridge this execution gap through the implementation of a Model Context Protocol (MCP) server. Our system enables an LLM agent to process natural language prompts submitted as part of a job, autonomously executing quantum computing workflows by invoking our tools via the MCP. We demonstrate the framework's capability by performing essential quantum algorithmic primitives, including sampling and computation of expectation values. Key technical contributions include the development of an MCP server for quantum execution, a pipeline for interpreting OpenQASM code, an automated workflow with CUDA-Q for the ABCI-Q hybrid platform, and an asynchronous execution pipeline for remote quantum hardware using the Quantinuum emulator via CUDA-Q. This work validates that AI agents can effectively abstract the complexities of hardware interaction through an MCP-based architecture, thereby facilitating the automation of practical quantum research.
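The core pattern here, an agent calling a named tool with structured arguments over a JSON-RPC-style request, can be sketched in a few lines. This is a generic illustration of the tool-dispatch shape, not the paper's actual server or the official MCP SDK; the tool name `run_qasm` and its fields are hypothetical:

```python
import json

# Hypothetical tool registry for an MCP-style server. A real server would
# submit args["qasm"] to a QPU or emulator backend; here we only echo status.
def run_qasm(args):
    return {"status": "queued", "shots": args.get("shots", 1000)}

TOOLS = {"run_qasm": run_qasm}

def handle_request(raw):
    """Dispatch a JSON-RPC-like tool-call request to a registered tool."""
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"]["arguments"])
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

resp = handle_request(json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "params": {"name": "run_qasm",
               "arguments": {"qasm": "OPENQASM 3; qubit q; h q;", "shots": 500}},
}))
print(resp["result"])
```

The point of the indirection is exactly what the abstract claims: the agent only needs tool names and argument schemas, never the hardware details behind them.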
Asynchronous Quantum Distributed Computing: Causality, Snapshots, and Global Operations
This paper develops algorithms for quantum distributed computing systems, specifically creating a quantum version of classical snapshot algorithms that can coordinate global operations across multiple quantum components while preserving causality relationships despite quantum entanglement.
Key Contributions
- Formal model for asynchronous quantum distributed computing systems
- QGO Algorithm for implementing decomposable global quantum operations
- Demonstration that Lamport's causality principles extend to quantum distributed systems
View Full Abstract
We initiate the study of asynchronous quantum distributed systems, focusing on the case of implementing atomic quantum global operations that can be decomposed into a collection of local operations on the components of the system. A simple example of such an operation is a quantum snapshot in which the whole system is instantaneously measured. Based on the classical snapshot algorithm of Chandy and Lamport, we design a quantum distributed algorithm to implement such decomposable global operations, which we call the QGO Algorithm. The analysis of our algorithm shows that arguments based on Lamport's computational causality remain valid in the quantum world, even though, due to entanglement, causality is not manifest from the standard description of the system in terms of a (global) quantum state. Our other contributions include a formal model of quantum distributed computing, and a formal specification for the desired behavior of a global operation, which may be of interest even in classical settings (such as in the setting of randomized algorithms).
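The classical marker rule of Chandy and Lamport, which the QGO Algorithm generalises, can be illustrated on a single FIFO channel: messages sent before the marker belong to the snapshot, messages sent after it do not. A minimal two-process sketch (the setup and values are ours, for illustration only):

```python
from collections import deque

chan = deque()               # FIFO channel P0 -> P1

p0_state = 3                 # P0 records its own state, then emits a marker
p0_snap = p0_state
chan.append("m1")            # already in flight before the marker
chan.append("MARKER")        # separates pre- and post-snapshot traffic
chan.append("m2")            # sent after the marker: outside this snapshot

p1_state = 0
p1_snap = None
while chan:
    m = chan.popleft()
    if m == "MARKER":
        p1_snap = p1_state   # first marker: record local state; channel
        break                # P0 -> P1 is recorded as empty
    p1_state += 1            # pre-marker messages are applied normally

print(p0_snap, p1_snap)
```

The recorded pair (p0_snap, p1_snap) forms a consistent cut: the effect of "m1" is included on both sides, while "m2" is left for the next epoch. The paper's contribution is showing that this causality argument survives when the local operations act on entangled quantum components.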
QARIMA: A Quantum Approach To Classical Time Series Analysis
This paper presents QARIMA, a quantum-inspired approach to time series analysis that uses variational quantum circuits and quantum-assisted methods for parameter estimation in ARIMA models. The method integrates quantum autocorrelation functions and swap-test-driven lag discovery with fixed-configuration quantum circuits for improved forecasting performance.
Key Contributions
- Development of quantum-assisted lag discovery using swap-test-driven quantum autocorrelation and partial autocorrelation functions
- Integration of fixed-configuration variational quantum circuits for ARIMA parameter estimation with reduced meta-optimization overhead
- Systematic framework identifying seven specific quantum contributions to classical time series analysis
View Full Abstract
We present a quantum-inspired ARIMA methodology that integrates quantum-assisted lag discovery with fixed-configuration variational quantum circuits (VQCs) for parameter estimation and weak-lag refinement. Differencing and candidate lags are identified via swap-test-driven quantum autocorrelation (QACF) and quantum partial autocorrelation (QPACF), with a delayed-matrix construction that aligns quantum projections to time-domain regressors, followed by standard information-criterion parsimony. Given the screened orders $(p,d,q)$, we retain a fixed VQC ansatz, optimizer, and training budget, preventing hyperparameter leakage, and deploy the circuit in two estimation roles: VQC-AR for autoregressive coefficients and VQC-MA for moving-average coefficients. Between screening and estimation, a lightweight VQC weak-lag refinement re-weights or prunes screened AR lags without altering $(p,d,q)$. Across environmental and industrial datasets, we perform rolling-origin evaluations against automated classical ARIMA, reporting out-of-sample mean squared error (MSE), mean absolute percentage error (MAPE), and Diebold-Mariano tests on MSE and MAE. Empirically, the seven quantum contributions, (1) differencing selection, (2) QACF, (3) QPACF, (4) swap-test primitives with delayed-matrix construction, (5) VQC-AR, (6) VQC weak-lag refinement, and (7) VQC-MA, collectively reduce meta-optimization overhead and make explicit where quantum effects enter order discovery, lag refinement, and AR/MA parameter estimation.
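The classical pipeline that QARIMA's quantum steps replace is worth seeing concretely: estimate autocorrelations, then pick an AR order by an information criterion. The sketch below is the classical analogue only (the paper estimates the overlaps with swap tests); the toy AR(2) series and least-squares AIC fit are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = np.zeros(n)
for t in range(2, n):                      # toy AR(2): x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

def acf(x, lag):
    """Sample autocorrelation at a given lag (the quantity QACF estimates)."""
    x = x - x.mean()
    return (x[:-lag] @ x[lag:]) / (x @ x)

def aic_ar(x, p):
    """Fit AR(p) by least squares and return its AIC."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return len(y) * np.log(resid @ resid / len(y)) + 2 * p

best_p = min(range(1, 6), key=lambda p: aic_ar(x, p))
print([round(acf(x, k), 2) for k in (1, 2, 3)], "best p =", best_p)
```

QARIMA's claim is that the screening stage (acf plus the criterion) can be driven by quantum overlap estimates while the downstream model selection logic stays exactly this shape.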
Evaluating the performance of a weak-field homodyne receiver in quadrature phase-shift keying optical communication
This paper demonstrates a weak-field homodyne receiver for quantum communication that combines wave-like and particle-like detection features. The researchers test their receiver using quaternary phase-shift keying with coherent states, showing promising results for mutual information transfer and secret key generation rates in quantum communication protocols.
Key Contributions
- Development of weak-field homodyne receiver combining wave-like and particle-like detection
- Demonstration of quaternary phase-shift keying for quantum communication with improved mutual information and secret key rates
View Full Abstract
Quantum communication protocols require efficient detection schemes to maximize the information transfer rate between the sender and the receiver. To this aim, we have demonstrated that weak-field receivers, merging wave-like and particle-like features, can be considered as a valid alternative to already existing receivers, such as optical homodyne detection. To better emphasize the potential of our receiver, in this work we consider a proof of concept for quaternary communication based on coherent states with the same amplitude and different phase values. The encoding in phase requires fine control of phase noise, obtained through a feedback system. The results achieved in terms of mutual information and secret key generation rate encourage a further increase of the alphabet size towards approximately continuous phase modulation.
Charging Quantum Batteries via Dissipative Quenches
This paper studies quantum batteries made of interacting spin chains coupled to engineered environments, investigating how different types of noise (dissipative vs dephasing) affect the ability to extract work from these quantum systems. The researchers find that purely dissipative environments can actually enable work extraction from thermal states, while dephasing suppresses this capability.
Key Contributions
- Demonstration that dissipative dynamics can activate ergotropy from passive thermal states enabling work extraction
- Characterization of how environmental structure affects quantum battery performance through interpolation between parallel and collective noise channels
View Full Abstract
We investigate work extraction in open quantum batteries composed of interacting spin chains weakly coupled to engineered environments. Focusing on two- and four-qubit XX models initially prepared in thermal Gibbs states, we analyze how dissipation and dephasing, acting either locally or collectively, can generate and shape ergotropy during both transient and steady-state dynamics. By introducing a continuous interpolation between parallel and collective noise channels, we systematically characterize the impact of environmental structure on work extractability. We show that purely dissipative dynamics can activate finite ergotropy from completely passive thermal states, giving rise to temperature-dependent transient regimes where hotter initial states temporarily outperform colder ones in an ergotropic Mpemba-like fashion. In contrast, collective dissipation leads to steady states whose passivity crucially depends on the initial temperature and system size, a behavior we trace back to the emergence of non-trivial dark subspaces. Finally, we demonstrate that dephasing channels suppress both transient advantages and steady-state work extraction, highlighting the qualitative difference between dissipative and dephasing environments.
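Ergotropy, the central quantity in this battery analysis, is the gap between a state's energy and the energy of its passive counterpart, obtained by pairing the largest populations with the lowest energy levels. A minimal single-qubit illustration (the Hamiltonian and states are our toy choices, not the paper's XX chains):

```python
import numpy as np

def ergotropy(rho, H):
    """W(rho) = Tr(rho H) - min_U Tr(U rho U^+ H)."""
    energies = np.sort(np.linalg.eigvalsh(H))        # ascending
    pops = np.sort(np.linalg.eigvalsh(rho))[::-1]    # descending
    passive_energy = pops @ energies
    return np.real(np.trace(rho @ H)) - passive_energy

H = np.diag([0.0, 1.0])                              # toy qubit Hamiltonian

# A thermal (Gibbs) state is completely passive: zero ergotropy.
beta = 1.0
gibbs = np.diag(np.exp(-beta * np.diag(H)))
gibbs /= np.trace(gibbs)
print(ergotropy(gibbs, H))

# Population inversion, as suitable dissipative dynamics can produce,
# activates finite ergotropy from which work can be extracted.
inverted = np.diag([0.3, 0.7])
print(ergotropy(inverted, H))
```

The paper's result is precisely that purely dissipative quenches can move a chain from the first situation (passive, zero ergotropy) to the second, while dephasing cannot.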
Photon pairs, squeezed light and the quantum wave mixing effect in a cascaded qubit system
This paper studies quantum wave mixing in a system of two connected superconducting qubits, where one qubit acts as a source and the other as a probe. The researchers show that when certain conditions are met, the system behaves as if the probe qubit is being driven by squeezed light, and they identify specific patterns in the output spectrum that reveal the quantum nature of the light field.
Key Contributions
- Theoretical description of quantum wave mixing in cascaded superconducting qubit systems
- Identification of selection rules for QWM spectrum peaks that suppress odd-photon processes
- Demonstration that spectral analysis can probe photon statistics in nonclassical fields
View Full Abstract
We develop a theoretical description of quantum wave mixing (QWM) in a cascaded waveguide-QED system of two superconducting qubits, where the probe is driven by an external coherent tone and by the resonance fluorescence of a strongly driven source qubit. Starting from the field correlation functions of the source emission, we derive an effective master-equation treatment for the probe and identify the regime in which the incident fluorescence is characterized by anomalous correlations. When the coherent Rayleigh component of the source spectrum is suppressed, the probe equations of motion become equivalent to those for a qubit driven by a coherent tone and broadband squeezed light. This equivalence implies a selection rule for the peaks of the QWM spectrum, with a strong suppression of sidebands associated with processes involving an odd number of photons taken from the source field. Numerical simulations of the full cascaded two-qubit model for different ratios of radiative decay rates unambiguously confirm the participation of correlated photon pairs in QWM processes. The current research illustrates that the analysis of peak amplitudes can be used to probe photon statistics in the incident nonclassical field.
Divide et impera: hybrid multinomial classifiers from quantum binary models
This paper investigates methods for combining multiple quantum binary classification models into a single multinomial classifier that can distinguish between more than two classes. The researchers compare different hybrid approaches and find that a binary decision tree method provides the best balance of accuracy and computational efficiency.
Key Contributions
- Development of hybrid methods to extend quantum binary classifiers to multinomial classification
- Demonstration that binary decision tree approach offers logarithmic computational overhead while maintaining accuracy compared to other methods
View Full Abstract
We investigate how to combine a collection of quantum binary models into a multinomial classifier. We employ a hybrid approach, adopting strategies like one-vs-one, one-vs-rest and a binary decision tree. We benchmark each method, emphasizing its computational overhead and its impact on the quantum advantage. By comparison against a classical binary model (generalized using the same approach), we show that the decision tree represents a cost-effective solution, achieving similar accuracies to other methods with an overhead at most logarithmic in the total number of classes.
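The overhead comparison behind the "at most logarithmic" claim is simple combinatorics: for K classes, count how many binary models each strategy trains and how many it must evaluate per prediction. The counts below are standard for these reduction schemes, not figures reported by the paper:

```python
import math

def models_trained(K):
    """Binary models each strategy needs to train for K classes."""
    return {"one-vs-one": K * (K - 1) // 2,
            "one-vs-rest": K,
            "decision-tree": K - 1}

def evals_per_prediction(K):
    """Binary models evaluated to classify one sample (balanced tree assumed)."""
    return {"one-vs-one": K * (K - 1) // 2,
            "one-vs-rest": K,
            "decision-tree": math.ceil(math.log2(K))}

for K in (4, 16):
    print(K, models_trained(K), evals_per_prediction(K))
```

Per prediction, the tree walks a single root-to-leaf path, so only ceil(log2 K) of the K-1 trained models are ever queried, which is where the cost advantage for quantum (i.e. expensive-to-evaluate) binary models comes from.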
Orthogonalised Self-Guided Quantum Tomography: Insights from Single-Pixel Imaging
This paper introduces an orthogonalized version of self-guided quantum tomography (SGQT) that improves the accuracy and speed of quantum state reconstruction. By drawing connections between quantum tomography and classical single-pixel imaging techniques, the authors achieve better experimental performance with no additional measurement overhead.
Key Contributions
- Introduction of orthogonalized self-guided quantum tomography with improved convergence
- Mathematical connection established between self-guided imaging and single-pixel imaging techniques
- Experimental demonstration of improved fidelity from 92.1% to 95.3% with no additional overhead
View Full Abstract
We introduce the concept of self-guided imaging (SGI) as a linear analogue of self-guided quantum tomography (SGQT). We show that SGI is mathematically equivalent to single-pixel imaging (SPI). Taking inspiration from orthogonalised ghost imaging, a recent advance in SPI, we introduce orthogonalised SGQT. This requires no additional experimental overhead and leads to faster and more accurate final convergence, as we demonstrate numerically (fidelity $95.2\% \rightarrow 99.17\%$) and experimentally (fidelity $92.1\% \rightarrow 95.3\%$). This work suggests that further routines from SPI and SGQT can be interchanged to optimise measurements and convergence.
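The benefit of orthogonalisation that this work imports from single-pixel imaging can be seen in a linear toy model: with orthonormal measurement patterns, each "bucket" signal is directly an expansion coefficient of the object, so reconstruction is exact. A numpy sketch under our own simplified assumptions (a 1-D object, QR in place of the paper's specific orthogonalisation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
obj = rng.normal(size=n)                 # the unknown "image" vector

raw = rng.normal(size=(n, n))            # random, non-orthogonal patterns
Q, _ = np.linalg.qr(raw)                 # orthonormalise (Gram-Schmidt / QR)

signals = Q.T @ obj                      # one bucket measurement per pattern
recon = Q @ signals                      # exact reconstruction from n signals

print(np.max(np.abs(recon - obj)))
```

With non-orthogonal patterns the same naive synthesis `raw @ signals` would mix coefficients, which is the convergence penalty orthogonalised SGQT removes at no extra experimental cost.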
Local Marking of Locally Implementable Unitary Operations
This paper studies how spatially separated parties can identify unknown quantum operations from a known set using only local operations and classical communication, introducing the concept of 'local marking' which is distinct from quantum state discrimination. The authors show that some sets of quantum operations that can be distinguished globally cannot be locally marked, demonstrating a form of nonlocality without entanglement.
Key Contributions
- Introduction of local marking as a new task distinct from quantum state discrimination
- Demonstration that globally distinguishable product unitaries can exist that cannot be locally marked
- Analysis of the hierarchy between entangled and product probes for local marking tasks
View Full Abstract
We investigate the task of local marking for locally implementable unitary operations. In this setting, multipartite quantum unitary channels, chosen randomly from a known set, are distributed among spatially separated parties without revealing their identities. The objective is to correctly identify (mark) the applied process using only local operations supplemented with classical communication (LOCC). While local distinguishability implies local marking, local marking does not guarantee either local or even global distinguishability of a set of unitaries. Thus the task of marking is not equivalent to the task of discrimination. We demonstrate a stronger manifestation of nonlocality without entanglement by constructing a set of globally distinguishable tripartite product unitaries that cannot be locally marked. In contrast to state marking, we find that marking a subset of product unitaries does not imply the ability to mark a larger subset. Finally, we explore the hierarchy of probes, entangled and product, in the context of local marking with respect to the standard discrimination scenario.
Fixing semi-classical physics from first principles: how to derive effective classical-quantum dynamics from open quantum theory
This paper develops improved semi-classical approximation methods by incorporating environmental decoherence effects into classical-quantum hybrid dynamics. The authors show how these enhanced semi-classical theories can provide exact descriptions of quantum systems by treating them as effective descriptions of open quantum systems.
Key Contributions
- Development of improved semi-classical approximation methods that incorporate environmental decoherence
- Demonstration that consistent classical-quantum dynamics can emerge as effective descriptions of open quantum systems
View Full Abstract
Semi-classical approaches approximate fully quantum descriptions with partially classical ones. Here we use a toy model to highlight the failings of the standard mean-field semi-classical approach, and show how including environmental decoherence can lead to improved semi-classical theories that are exact descriptions of the original quantum dynamics. In doing so, we show how consistent models of classical-quantum dynamics can arise as effective descriptions of open quantum systems.
Harnessing dark states: coherent control in coupled cavity-Rydberg-atom systems
This paper studies dark states in coupled cavity-Rydberg-atom systems, where multiple atoms interact through dipole-dipole forces and couple to a cavity field. The researchers use mathematical methods to characterize these dark states and propose experimental ways to detect them.
Key Contributions
- Development of arrowhead-matrix method to analyze dark states in cavity-Rydberg systems
- Theoretical characterization of dark state numbers and forms for 2-4 atom cases and general N-atom single-excitation subspace
- Proposal for experimental detection methods of dark states through population measurements
View Full Abstract
The dark-state effect, caused by destructive interference, is not only an important fundamental research topic in atomic physics and quantum optics, but also has wide potential application in quantum physics and quantum information science. Using the arrowhead-matrix method, here we study the dark-state effect in a coupled cavity-Rydberg-atom system, in which $N$ Rydberg atoms with the dipole-dipole interactions are coupled to a single-mode cavity field. We obtain the numbers and form of the dark states in certain excitation-number subspaces for the two-, three-, and four-atom cases, as well as in the single-excitation subspace for a general $N$-atom case. We also suggest characterizing the dark states by inspecting the populations of some specific quantum states, which can be detected in experiments. Furthermore, we analyze the dark-state effect in a realistic case, where both the atomic dipole-dipole interaction strengths and the atom-cavity-field coupling strengths depend on the position of the atoms. Our findings pave the way for studying dark-state physics and applications in the cavity-Rydberg-atom platform.
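The arrowhead structure is easy to exhibit in the simplest regime: in the single-excitation subspace with identical couplings and no dipole-dipole terms (a deliberate simplification of the paper's model), the Hamiltonian is an arrowhead matrix and exactly N-1 eigenstates decouple from the cavity:

```python
import numpy as np

# Basis: (cavity, atom_1, ..., atom_N), single excitation shared among them.
# Dipole-dipole interactions are omitted in this minimal sketch.
N, g, delta = 4, 0.1, 0.0
H = np.zeros((N + 1, N + 1))
H[0, 1:] = H[1:, 0] = g          # atom-cavity couplings: the "arrowhead"
H[0, 0] = delta                  # cavity detuning

vals, vecs = np.linalg.eigh(H)
# Dark states are eigenstates with (numerically) zero cavity amplitude.
dark = [k for k in range(N + 1) if abs(vecs[0, k]) < 1e-10]
print(len(dark))                 # N - 1 for identical couplings
```

The two remaining eigenstates are the bright polaritons at energies of magnitude g times sqrt(N); the paper's arrowhead-matrix analysis extends this counting to interacting, position-dependent couplings where the simple symmetry argument no longer applies.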
Leading low-temperature correction to the Heisenberg-Euler Lagrangian
This paper presents an efficient method to calculate temperature corrections to the Heisenberg-Euler Lagrangian in electromagnetic fields using real-time quantum field theory formalism. The authors show how to extract low-temperature corrections from zero-temperature calculations and extend this to higher-loop contributions through resummation techniques.
Key Contributions
- Efficient method to extract low-temperature corrections to Heisenberg-Euler Lagrangian from zero-temperature calculations
- Resummation of higher-loop contributions and extraction of leading strong-field behavior at arbitrary loop orders
View Full Abstract
In this note, we show that the well-known leading low-temperature correction to the Heisenberg-Euler Lagrangian in a constant electromagnetic field arising at two loops can be efficiently extracted from its one-loop zero-temperature analogue. Resorting to the real-time formalism of equilibrium quantum field theory that explicitly separates out the zero-temperature contribution from the finite-temperature corrections, the determination becomes essentially trivial. In essence, it only requires taking derivatives of the one-loop, zero-temperature Heisenberg-Euler Lagrangian with respect to the field strength. As a bonus, we then effectively dress the low-temperature contribution at two loops by one-particle reducible tadpole structures. This generates a subset of higher-loop contributions to the Heisenberg-Euler Lagrangian in the limit of low temperatures. We extract their leading strong-field behavior at a given loop order, and finally resum these to all loop orders.
Simultaneous ground-state cooling of six mechanical modes of two levitated nanoparticles
This paper demonstrates a method to simultaneously cool six mechanical motion modes of two levitated nanoparticles to their quantum ground state using a controllable optical cavity system. By adjusting the polarization angle of the cavity field relative to optical tweezers, researchers can control coupling channels and avoid problematic dark modes that would prevent effective cooling.
Key Contributions
- Demonstration of simultaneous ground-state cooling of six mechanical modes across two levitated nanoparticles
- Method for controlling coupling channels through polarization angle tuning to avoid dark modes
- Theoretical framework for multi-particle levitated optomechanical systems enabling collective quantum effects
View Full Abstract
Ground-state cooling is a prerequisite for exploring macroscopic quantum effects in mechanical motion of massive objects. Here we construct a polarization-angle-controllable coupled cavity-levitated-nanoparticle system in which two nanoparticles trapped by individual tweezers are coupled to a single-mode field in a cavity. We also study the simultaneous ground-state cooling of six mechanical displacement modes of the two levitated nanoparticles through the coherent scattering mechanism. By deriving the Hamiltonian of the system and performing the linearization, we obtain a linearized seven-mode Hamiltonian, which can exhibit the coupling structure and cooling mechanism. We confirm the physical condition for the appearance of dark modes, which will suppress the simultaneous ground-state cooling of these mechanical modes. We also find that, by properly tuning the polarization angle $\theta$ between the cavity field and the optical tweezer fields, the coupling channels can be controlled on demand and simultaneous ground-state cooling of these six motional modes of the two nanoparticles can be realized. Our work paves the way for generation and manipulation of collective macroscopic quantum effects in multiple levitated nanoparticles.
Quantum Property Testing for Bounded-Degree Directed Graphs
This paper develops quantum algorithms for testing properties of directed graphs where vertices have limited connections, showing that quantum methods can examine graph properties almost quadratically faster than classical methods when only outgoing connections are accessible. The work establishes both upper and lower bounds for this quantum advantage in graph property testing.
Key Contributions
- Proves that any property testable with constantly many classical queries in the bidirectional model can be tested with n^(1/2-Ω(1)) quantum queries in the unidirectional model, an almost quadratic speedup
- Establishes near-optimal lower bounds showing the quantum speedup is almost tight
- Demonstrates quantum algorithm for approximating subgraph occurrences with o(√n) queries
View Full Abstract
We study quantum property testing for directed graphs with maximum in-degree and out-degree bounded by some universal constant $d$. For a proximity parameter $\varepsilon$, we show that any property that can be tested with $O_{\varepsilon,d}(1)$ queries in the classical bidirectional model, where both incoming and outgoing edges are accessible, can also be tested in the quantum unidirectional model, where only outgoing edges are accessible, using $n^{1/2 - \Omega_{\varepsilon,d}(1)}$ queries. This yields an almost quadratic quantum speedup over the best known classical algorithms in the unidirectional model. Moreover, we prove that our transformation is almost tight by giving an explicit property $P_\varepsilon$ that is $\varepsilon$-testable within $O_\varepsilon(1)$ classical queries in the bidirectional model, but requires $\widetilde{\Omega}(n^{1/2-f'(\varepsilon)})$ quantum queries in the unidirectional model, where $f'(\varepsilon)$ is a function that approaches $0$ as $\varepsilon$ approaches $0$. As a byproduct, we show that in the unidirectional model, the number of occurrences of any constant-size subgraph $H$ can be approximated up to additive error $\delta n$ using $o(\sqrt{n})$ quantum queries.
Investigation of Automated Design of Quantum Circuits for Imaginary Time Evolution Methods Using Deep Reinforcement Learning
This paper presents a machine learning approach using Deep Reinforcement Learning to automatically design more efficient quantum circuits for finding ground states of quantum systems. The method reduces circuit complexity by 37-43% compared to standard designs while maintaining accuracy for optimization and chemistry problems.
Key Contributions
- Automated quantum circuit design framework using Double Deep-Q Networks for VITE algorithms
- Demonstrated 37% gate reduction and 43% depth reduction compared to standard ansatz while maintaining accuracy
- Multi-objective optimization approach balancing energy minimization with circuit complexity for NISQ devices
View Full Abstract
Efficient ground state search is fundamental to advancing combinatorial optimization problems and quantum chemistry. While the Variational Imaginary Time Evolution (VITE) method offers a useful alternative to Variational Quantum Eigensolver (VQE), and Quantum Approximate Optimization Algorithm (QAOA), its implementation on Noisy Intermediate-Scale Quantum (NISQ) devices is severely limited by the gate counts and depth of manually designed ansatz. Here, we present an automated framework for VITE circuit design using Double Deep-Q Networks (DDQN). Our approach treats circuit construction as a multi-objective optimization problem, simultaneously minimizing energy expectation values and optimizing circuit complexity. By introducing adaptive thresholds, we demonstrate significant hardware overhead reductions. In Max-Cut problems, our agent autonomously discovered circuits with approximately 37\% fewer gates and 43\% less depth than standard hardware-efficient ansatz on average. For molecular hydrogen ($H_2$), the DDQN also achieved the Full-CI limit, while maintaining a significantly shallower circuit. These results suggest that deep reinforcement learning can help find non-intuitive, optimal circuit structures, providing a pathway toward efficient, hardware-aware quantum algorithm design.
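The multi-objective framing, rewarding low energy while penalising gates and depth, can be sketched as a scalarised reward of the kind commonly used to steer circuit-design agents. The weights, target, and bonus below are our illustrative choices, not the paper's hyperparameters:

```python
def reward(energy, gates, depth, e_target=-1.0,
           w_e=1.0, w_g=0.01, w_d=0.02):
    """Scalarised multi-objective reward: energy accuracy vs. hardware cost."""
    r = -w_e * (energy - e_target)        # lower energy -> higher reward
    r -= w_g * gates + w_d * depth        # penalise hardware overhead
    if energy <= e_target + 1e-3:         # bonus once target accuracy reached
        r += 1.0
    return r

# Two circuits reaching the same energy: the shallower one scores higher,
# which is exactly the pressure that yields the reported gate/depth savings.
deep = reward(-1.0, gates=40, depth=20)
shallow = reward(-1.0, gates=25, depth=12)
print(deep, shallow)
```

Under such a reward, an agent that can only improve its score by shedding gates once the energy target is met will converge toward the compact ansatz structures the paper reports.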
Informational Mpemba Effect for Fast State Purification in Non-Hermitian System
This paper demonstrates a method to rapidly purify quantum states (remove unwanted noise) using engineered dissipation in non-Hermitian quantum systems. The researchers show an 'informational Mpemba effect' where more mixed (noisier) initial states can be purified faster than less noisy ones.
Key Contributions
- Demonstration of rapid quantum state purification using collective reservoir engineering in non-Hermitian systems
- Discovery of informational Mpemba effect where more mixed initial states purify faster
- Showing that efficient purification is governed by collective subradiant mode degeneracy rather than exceptional points
View Full Abstract
Quantum systems are inherently fragile to environmental fluctuations or decoherence, limiting their advantages in applications of quantum information and quantum computation. State purification offers a route to recover the purity of a system under noisy conditions. Here, we demonstrate a rapid purification of initially mixed states by harnessing collective reservoir engineering in driven non-Hermitian qubit systems, together with multipartite entanglement generation in larger systems. We show that the onset of efficient purification-assisted entanglement generation is dictated by the degeneracy of collective subradiant modes, rather than by exceptional points. Moreover, the system dynamics manifests an informational Mpemba effect, i.e., a more mixed initial state reaches its steady state with unit purity at a faster rate, resembling the conventional Mpemba effect where a hotter system cools more rapidly. These results reveal a unique advantage of driven non-Hermitian quantum systems with engineered collective dissipation, enabling enhanced purification efficiency and offering new opportunities for quantum engineering.
Non-variational supervised quantum kernel methods: a review
This paper reviews quantum kernel methods for machine learning, which use fixed quantum circuits to encode data into high-dimensional quantum feature spaces for classification tasks. The review examines when these methods might offer advantages over classical approaches and identifies key challenges including exponential concentration and dequantization issues.
Key Contributions
- Comprehensive review of non-variational quantum kernel methods and their theoretical foundations
- Analysis of quantum advantage frameworks including generalization bounds and separation conditions from classical models
- Examination of key challenges like exponential concentration and dequantization via tensor networks
View Full Abstract
Quantum kernel methods (QKMs) have emerged as a prominent framework for supervised quantum machine learning. Unlike variational quantum algorithms, which rely on gradient-based optimisation and may suffer from issues such as barren plateaus, non-variational QKMs employ fixed quantum feature maps, with model selection performed classically via convex optimisation and cross-validation. This separation of quantum feature embedding from classical training ensures stable optimisation while leveraging quantum circuits to encode data in high-dimensional Hilbert spaces. In this review, we provide a thorough analysis of non-variational supervised QKMs, covering their foundations in classical kernel theory, constructions of fidelity and projected quantum kernels, and methods for their estimation in practice. We examine frameworks for assessing quantum advantage, including generalisation bounds and necessary conditions for separation from classical models, and analyse key challenges such as exponential concentration, dequantisation via tensor-network methods, and the spectral properties of kernel integral operators. We further discuss structured problem classes that may enable advantage, and synthesise insights from comparative and hardware studies. Overall, this review aims to clarify the regimes in which QKMs may offer genuine advantages, and to delineate the conceptual, methodological, and technical obstacles that must be overcome for practical quantum-enhanced learning.
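The fidelity kernel at the heart of non-variational QKMs is k(x, y) = |<phi(x)|phi(y)>|^2, which can be computed exactly from statevectors for small examples. The single-qubit angle-encoding feature map below is an illustrative choice, not a circuit from any specific paper; the resulting Gram matrix has the properties (unit diagonal, symmetry, positive semidefiniteness) that let a classical convex solver take over:

```python
import numpy as np

def feature_state(x):
    """Toy single-qubit feature map |phi(x)> for a scalar input x."""
    return np.array([np.cos(x / 2), np.sin(x / 2) * np.exp(1j * x)])

def fidelity_kernel(X):
    """Gram matrix K_ij = |<phi(x_i)|phi(x_j)>|^2."""
    states = [feature_state(x) for x in X]
    return np.array([[abs(np.vdot(a, b)) ** 2 for b in states]
                     for a in states])

X = np.array([0.1, 0.8, 2.0])
K = fidelity_kernel(X)
print(np.round(K, 3))
```

On hardware, each entry would instead be estimated from overlap measurements, which is where the exponential-concentration issue the review discusses enters: for expressive feature maps the off-diagonal entries shrink toward a constant, and estimating them needs exponentially many shots.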
Quantum Thermal Field Effect Transistor
This paper proposes a quantum thermal field-effect transistor (qtFET) that uses quantum systems (qubits and a qutrit) to control thermal currents, analogous to how conventional transistors control electrical currents. The device could serve as a building block for quantum thermal management and amplification systems.
Key Contributions
- Design of a quantum thermal transistor using qubit-qutrit-qubit architecture
- Demonstration of thermal current modulation in quantum systems analogous to electronic FET operation
View Full Abstract
We propose and analyse a quantum thermal field-effect transistor (qtFET) composed of left-qubit, middle-qutrit, and right-qubit subsystems. In this architecture, the left qubit is coupled to the middle qutrit, which in turn interacts with the right qubit. Each subsystem interacts independently with its respective bath. The middle subsystem serves as a modulator. We show that the qtFET exhibits functionality analogous to that of a conventional electronic field-effect transistor (eFET). The left, right, and middle subsystems of the qtFET correspond to the drain, source, and gate of an eFET in a common-gate configuration, respectively. Our results show that the qtFET can precisely modulate thermal currents, highlighting its potential as a fundamental building block for quantum thermal devices and amplifiers in emerging quantum technologies.
Hybrid Quantum-Classical k-Means Clustering via Quantum Feature Maps
This paper develops a hybrid quantum-classical k-means clustering algorithm that uses quantum feature maps to embed classical data into higher-dimensional quantum states, replacing traditional Euclidean distance with quantum kernel-based similarity metrics. The approach demonstrates improved clustering performance on standard datasets like Iris and breast cancer, achieving better stability and accuracy compared to classical k-means.
Key Contributions
- Introduction of quantum kernel-based similarity metrics for k-means clustering
- Demonstration of improved clustering performance using quantum feature maps on NISQ-feasible circuits
View Full Abstract
Clustering is one of the most fundamental tasks in machine learning, and the k-means algorithm is perhaps the most widely used clustering method. However, it suffers from several limitations, such as sensitivity to centroid initialization, difficulty capturing non-linear structure, and poor performance in high-dimensional spaces. Recent work has proposed improved initialization strategies and quantum-assisted distance computation, but the similarity metric itself has largely remained classical. In this study, we propose a quantum-enhanced variant of k-means that replaces the Euclidean distance with a quantum kernel derived from the inner product between feature-mapped quantum states. We apply multiple quantum feature maps, including entangled SU2 and ZZ circuits, to embed classical data into a higher-dimensional Hilbert space where cluster structures become more separable, and evaluate the approach on the Iris and breast cancer datasets. Similarity between data points is computed through the inner product between two states. Our results show that this approach achieves improved clustering stability and competitive accuracy compared to the classical algorithm, with the SU2 feature map yielding an accuracy of 88.6% on the Iris dataset and 91.0% on the breast cancer dataset, despite operating on NISQ-feasible shallow circuits. These findings suggest that quantum kernels provide a richer similarity landscape than traditional distance metrics, offering a promising path toward more robust unsupervised learning in the NISQ era.
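The distance substitution the paper describes amounts to kernel k-means with a quantum kernel plugged in. Below is a hedged sketch of one assignment step; the product angle-encoding kernel stands in for the paper's SU2/ZZ feature maps, and all function names are our own:

```python
import numpy as np

def qkernel(x, y):
    # Fidelity kernel of a product angle-encoding map (illustrative stand-in
    # for the paper's circuits): K(x, y) = prod_i cos^2((x_i - y_i)/2).
    return float(np.prod(np.cos((x - y) / 2) ** 2))

def kernel_distance_sq(x, cluster):
    """Squared feature-space distance ||phi(x) - mu||^2 to the cluster
    centroid mu = mean of phi(c), expressed purely via kernel evaluations."""
    m = len(cluster)
    k_xx = qkernel(x, x)  # = 1 for this normalized map
    k_xc = sum(qkernel(x, c) for c in cluster) / m
    k_cc = sum(qkernel(c, d) for c in cluster for d in cluster) / m ** 2
    return k_xx - 2 * k_xc + k_cc

# One assignment step: each point goes to the cluster with the smallest
# kernel-induced distance, replacing the Euclidean rule of classical k-means.
clusters = [[np.array([0.1, 0.2]), np.array([0.2, 0.1])],
            [np.array([2.0, 2.1]), np.array([2.2, 1.9])]]
x = np.array([0.15, 0.12])
label = min(range(2), key=lambda j: kernel_distance_sq(x, clusters[j]))
print(label)  # 0: x sits near the first cluster in feature space
```

The update step can likewise be kept implicit in kernel evaluations, so the quantum device is only ever queried for pairwise kernel entries.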
Hardware-Aware Quantum Support Vector Machines
This paper develops a hardware-aware approach to automatically design quantum machine learning circuits that can run directly on IBM quantum processors without modification. The method uses genetic algorithms to evolve quantum feature maps for Support Vector Machines that are constrained to use only native quantum gates, achieving competitive accuracy while eliminating transpilation overhead.
Key Contributions
- Hardware-aware Neural Architecture Search for quantum circuits constrained to native gate sets
- Demonstration that automated circuit design can achieve competitive QSVM performance while guaranteeing hardware compatibility
- 27 percentage point improvement over hand-crafted quantum feature maps using exclusively IBM native gates
View Full Abstract
Deploying quantum machine learning algorithms on near-term quantum hardware requires circuits that respect device-specific gate sets, connectivity constraints, and noise characteristics. We present a hardware-aware Neural Architecture Search (NAS) approach for designing quantum feature maps that are natively executable on IBM quantum processors without transpilation overhead. Using genetic algorithms to evolve circuit architectures constrained to IBM Torino native gates (ECR, RZ, SX, X), we demonstrate that automated architecture search can discover quantum Support Vector Machine (QSVM) feature maps achieving competitive performance while guaranteeing hardware compatibility. Evaluated on the UCI Breast Cancer Wisconsin dataset, our hardware-aware NAS discovers a 12-gate circuit using exclusively IBM native gates (6 ECR, 3 SX, 3 RZ) that achieves 91.23% accuracy on 10 qubits, matching unconstrained gate search while requiring zero transpilation. This represents a 27 percentage point improvement over hand-crafted quantum feature maps (64% accuracy) and approaches the classical RBF SVM baseline (93%). We show that removing architectural constraints (fixed RZ placement) within hardware-aware search yields 3.5 percentage point gains, and that 100% native gate usage eliminates decomposition errors that plague universal gate compilations. Our work demonstrates that hardware-aware NAS makes quantum kernel methods practically deployable on current noisy intermediate-scale quantum (NISQ) devices, with circuit architectures ready for immediate execution without modification.
Analysis of State Teleportation using Noisy Quantum Gates
This paper analyzes how different types of noise (depolarization, bit flip, and phase flip) affect the quantum teleportation protocol by measuring the fidelity between ideal and noisy teleported states. The study finds that fidelity decreases polynomially with noise strength in general, but only linearly in low-noise conditions, suggesting quantum teleportation has some inherent robustness to small amounts of noise.
Key Contributions
- Analytical study of noise effects on quantum teleportation protocol with specific noise models
- Characterization of fidelity degradation showing polynomial decay with noise strength but linear decay in low-noise regime
View Full Abstract
Noise is a major challenge in quantum computing, affecting the reliability of quantum protocols. In this work, we analytically study the impact of various noise processes, such as depolarization, bit flip, and phase flip, on the quantum state teleportation protocol. Each noise process is modeled as a quantum channel and is applied individually to all qubits after the corresponding unitary operations to simulate realistic conditions. We evaluate the fidelity between the ideal and noisy teleported states to quantify the effect of noise. Our analysis shows that the fidelity decreases polynomially, in general, as the noise strength increases for all noise types, highlighting the sensitivity of state teleportation to different noise mechanisms. However, in the low noise regime, the fidelity decreases only linearly, indicating the robustness of the teleportation protocol. These results provide insight into error characterization and can inform strategies for noise mitigation in practical quantum computing applications.
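The linear low-noise behaviour reported here can be seen in a toy model: a single depolarizing channel rho -> (1 - p) rho + p I/2 applied to an otherwise ideal teleported qubit gives fidelity F = 1 - p/2 exactly. A minimal numpy sketch, simplifying the paper's per-qubit noise model to one output channel:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)

def depolarize(rho, p):
    """Depolarizing channel: rho -> (1 - p) rho + p I/2."""
    return (1 - p) * rho + p * I2 / 2

def fidelity(psi, rho):
    """Fidelity <psi|rho|psi> between a pure state and a density matrix."""
    return float(np.real(psi.conj() @ rho @ psi))

# Arbitrary pure single-qubit state standing in for the teleported state.
psi = np.array([np.cos(0.3), np.exp(0.7j) * np.sin(0.3)])
rho = np.outer(psi, psi.conj())

for p in (0.0, 0.01, 0.1):
    print(p, fidelity(psi, depolarize(rho, p)))  # F = 1 - p/2, linear in p
```

The paper's polynomial decay arises when such channels act on every qubit, so higher-order terms in p accumulate; the single-channel case isolates the linear leading order.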
Quantum Simulation of Hyperbolic Equations and the Nonexistence of a Dirac Path Measure
This paper examines why there is no well-defined probability measure for representing the Dirac equation (describing relativistic fermions) as a classical path integral in spacetime. The authors unify two existing explanations - one based on mathematical properties of the Dirac propagator and another based on the geometry of spacetime - showing they are different aspects of the same fundamental mathematical obstruction.
Key Contributions
- Unified two complementary explanations for the nonexistence of Dirac path measures from a measure-theoretical perspective
- Clarified the mathematical obstructions to stochastic representations of relativistic first-order equations
View Full Abstract
We revisit the longstanding issue of why no well-defined probability measure exists corresponding to a classical (Kolmogorov) path-integral representation of the Dirac equation in Minkowski space. Two complementary perspectives are compared: (i) Zastawniak's observation that the distributional character of the Dirac propagator (the presence of derivatives of the delta distribution) obstructs the construction of a nonnegative transition kernel, and (ii) the indefinite signature of the Minkowski metric, which prevents positivity of the action and yields oscillatory integrals. We show how these viewpoints can be unified as different manifestations of a single mathematical obstruction from a measure-theoretic point of view, and we discuss consequences for stochastic representations of relativistic first-order equations.
Optimal noisy quantum phase estimation with finite-dimensional states
This paper investigates optimal quantum states for phase estimation in interferometry when particle loss noise is present, finding that previously identified optimal states may no longer be optimal under realistic noisy conditions. The authors develop numerical methods to find true optimal states under noise and propose a two-step measurement strategy to achieve ultimate precision limits in practice.
Key Contributions
- Identification of optimal finite-dimensional probe states for quantum phase estimation under particle loss noise
- Development of a two-step measurement strategy to achieve ultimate precision limits in noisy quantum interferometry
View Full Abstract
Phase estimation in quantum interferometry is a major scenario in which the quantum advantage is significantly revealed. Recently, the optimal finite-dimensional probe states (OFPSs) for phase estimation in two-mode quantum interferometry were derived in the absence of noise [J.-F. Qin et al., Phys. Rev. A 112, 052428 (2025)]. However, noise is inevitable in practice, and the previously obtained OFPSs may no longer be optimal. Hence, the forms of the true OFPSs in the presence of various noises remain open questions. Here, particle-loss noise is studied, and the true OFPSs under this noise are investigated with the numerical algorithm known as constrained optimization by linear approximation. Furthermore, a two-step measurement strategy is proposed to realize the ultimate precision limit in practice. The validity of this strategy is confirmed by numerical simulation of practical experiments.
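For context on the precision limits at stake (a textbook illustration, not code from the paper): for a pure probe evolving under exp(-i phi G), the quantum Fisher information is F_Q = 4 Var(G), and the Cramer-Rao bound reads dphi >= 1/sqrt(nu F_Q). A N00N state reaches Heisenberg scaling F_Q = N^2, versus F_Q = N for N uncorrelated photons:

```python
import numpy as np

def qfi_pure(probs, g_values):
    """QFI of a pure probe |psi> = sum_k sqrt(p_k)|g_k> under exp(-i phi G),
    with G diagonal in the |g_k> basis: F_Q = 4 Var(G)."""
    probs = np.asarray(probs, dtype=float)
    g = np.asarray(g_values, dtype=float)
    mean = probs @ g
    return 4 * (probs @ g ** 2 - mean ** 2)

N = 10
# N00N state (|N,0> + |0,N>)/sqrt(2): generator eigenvalues N and 0.
print(qfi_pure([0.5, 0.5], [N, 0]))      # 100.0 -> Heisenberg scaling N^2
# N independent single photons: QFI of 1 each, adding up to N.
print(N * qfi_pure([0.5, 0.5], [1, 0]))  # 10.0 -> standard quantum limit
```

Particle loss degrades these ideal scalings, which is exactly why the lossless OFPSs of the cited work may cease to be optimal.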
Complexity phase transition for continuous-variable cluster state
This paper investigates how the level of squeezing in continuous-variable cluster states affects their computational power for measurement-based quantum computing. The researchers identify specific squeezing thresholds that determine when these quantum states can be efficiently simulated classically versus when they become computationally intractable, revealing a phase transition between classical and quantum regimes.
Key Contributions
- Development of explicit measurement-based linear optics framework for CV cluster states
- Identification of squeezing-level thresholds that delineate classical tractability from quantum advantage
- Demonstration of squeezing-driven complexity phase transition in quantum computational systems
View Full Abstract
Continuous-variable (CV) cluster states offer a promising platform for large-scale measurement-based quantum computations (MBQC). However, finite squeezing inevitably introduces Gaussian noise during MBQC. While fault-tolerant MBQC schemes exist in principle, they require the scalable incorporation of non-Gaussian resources, such as GKP states, which remain experimentally challenging. Consequently, a central question at this stage is how finite squeezing fundamentally constrains the intrinsic computational power of CV cluster states themselves. In this work, we address this question by analyzing the classical complexity of measurement-based linear optics (MBLO) implemented with such states, motivated by its near-term feasibility and recent experimental progress. We develop an explicit MBLO framework and examine how the squeezing level governs the complexity of the classical simulation of the resulting output states. Specifically, we identify squeezing-level thresholds that delineate classically tractable and intractable regimes, thereby revealing a squeezing-driven complexity phase transition. These findings advance our understanding of the squeezing resources necessary for meaningful quantum computation in current experimental regimes. Furthermore, they underscore the critical need to either scale the squeezing level or integrate error-correction schemes to achieve reliable, large-scale quantum computation with CV cluster states.
Inverse Laplace and Mellin integral transforms modified for use in quantum communications
This paper proposes modifications to inverse Laplace and Mellin integral transforms, claiming these modifications could be applied to security protocols for quantum computers. The work draws connections between mathematical transforms used in signal processing and techniques from quantum field theory.
Key Contributions
- Modified inverse Laplace and Mellin transforms for extended domains
- Proposed application of modified transforms to quantum computer security protocols
View Full Abstract
Integral transformations are a useful mathematical tool for analysing signals and wave packets in electronic devices, and they may be used in software protocols. The necessary background comes from quantum field theory, in particular from quantum chromodynamics, where the optical theorem and the renormalization group equation can be solved by a single contour integral written in two different "dual" ways, related to each other by a complex map in the complex plane of the Mellin variable. The inverse integral transformation must be modified to apply to these contour-integral solutions, and the modified inverse transformations may be used in security protocols for quantum computers. Here we give a brief review of the basic integral transforms and propose their modification for extended domains.
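For orientation, these are the standard transform pairs the review builds on (textbook definitions, not taken from the paper). The Laplace and Mellin transforms and their inverse (Bromwich-type) contour integrals are

```latex
F(s) = \int_0^{\infty} f(t)\, e^{-st}\, dt, \qquad
f(t) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(s)\, e^{st}\, ds,

\tilde f(s) = \int_0^{\infty} f(x)\, x^{s-1}\, dx, \qquad
f(x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \tilde f(s)\, x^{-s}\, ds,
```

where each inverse runs along a vertical contour Re s = c inside the strip of analyticity; the modifications proposed in the paper deform these contours to accommodate the "dual" contour-integral solutions described above.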
Ghost imaging with zero photons
This paper demonstrates a counterintuitive form of ghost imaging where an image can be reconstructed using only time bins where zero photons interacted with the object, with all photons that actually hit the object being discarded. The technique relies on photon-number projection measurements and the statistical properties of thermal light to extract spatial information without direct photon-object interactions.
Key Contributions
- Demonstration of ghost imaging using only zero-photon events while discarding all photons that interact with the object
- Clarification of the role of photon statistics and measurement projections in ghost imaging, contributing to understanding quantum vs classical correlations
View Full Abstract
Ghost imaging was first demonstrated with entangled photon pairs and is well known for its peculiar properties. The signal beam that illuminates the object possesses no spatial resolution, whereas the reference beam, which never interacts with the object, is spatially resolved. Neither beam alone can retrieve the image, which can be obtained only when the signal and reference beams are correlated. Here we report a ghost imaging experiment with even more peculiar properties, in which the image can be reconstructed when no photon interacts with the object, or even when there is no photon in either the signal or the reference beam. All photons that interacted with the object are discarded; only the time bins with zero photons are employed to retrieve the image, a process referred to as "ghost imaging with zero photons" hereafter. That a ghost image can be retrieved with zero photons is jointly determined by the photon-number projection measurement and the photon statistics of thermal light. The results are helpful for resolving the debate on the physics of ghost imaging and for understanding the relation between quantum and classical correlations.
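The statistical mechanism can be sketched in a toy Monte Carlo model (our own simplification of thermal light plus a bucket detector, not the paper's experiment): since P(n = 0 | I) = exp(-eta I), zero-photon events preferentially tag dim speckle realizations, so the zero-photon indicator is anticorrelated with the instantaneous intensity, and correlating it with the reference arm yields a (negative) ghost image:

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots = 200_000
mean_intensity = 1.0
eta = 1.0  # detection efficiency (illustrative value)

# Thermal (speckle) intensity fluctuates exponentially from shot to shot;
# bucket-detector photon counts are Poissonian given the intensity.
intensity = rng.exponential(mean_intensity, n_shots)
counts = rng.poisson(eta * intensity)

zero = (counts == 0).astype(float)
# Covariance between "zero photons detected" and the intensity: negative,
# because P(n=0 | I) = exp(-eta I) decreases with I (analytically -1/4 here).
cov = np.mean(zero * intensity) - np.mean(zero) * np.mean(intensity)
print(cov)  # < 0: zero-photon events select the dim speckle realizations
```

Replacing the scalar intensity with a speckle pattern and an object transmission function turns this anticorrelation into a spatially resolved (inverted) image.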
Critical Entanglement Dynamics at Dynamical Quantum Phase Transitions
This paper studies how entanglement entropy behaves during dynamical quantum phase transitions in various quantum materials models, finding that momentum-space entanglement provides a time-independent way to identify these critical transitions when measured in the right mathematical basis.
Key Contributions
- Establishes momentum-space entanglement entropy as a robust diagnostic for dynamical quantum phase transitions
- Demonstrates that the choice of measurement basis critically affects entanglement behavior during phase transitions
- Provides unified geometric perspective linking entanglement, topology, and non-equilibrium criticality
View Full Abstract
We investigate the critical behavior of momentum-space entanglement entropy at dynamical quantum phase transitions (DQPTs) in translationally invariant two-band insulators and superconductors. By analyzing the Su-Schrieffer-Heeger model, the quantum XY chain, and the Haldane model, we establish that the geometric DQPT condition $\hat{\textbf{d}}_{\textbf{k}}^{i} \cdot \hat{\textbf{d}}_{\textbf{k}}^{f} = 0$ manifests as exact degeneracy $p_{\textbf{k}^{*}}=1/2$ in the entanglement spectrum defined with respect to the post-quench eigenbasis, yielding a maximal momentum-space entropy of $\ln 2$. In one dimension, critical momenta appear as isolated points, whereas in two dimensions they form continuous one-dimensional manifolds, reflecting the dimensional dependence of the underlying critical structure. Importantly, alternative bipartitions such as the sublattice basis produce qualitatively different behavior: the entropy becomes explicitly time-dependent and attains a minimum at DQPT critical times, underscoring the essential role of basis selection. Our results establish that momentum-space entanglement entropy, when evaluated in the appropriate eigenbasis, provides a robust, time-independent diagnostic of DQPTs and offers a unified geometric perspective linking entanglement, topology, and non-equilibrium criticality.
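The geometric DQPT condition can be checked numerically in a toy SSH quench. This is a hedged sketch using the standard SSH Bloch vector; the per-mode occupation p_k = (1 + d_i·d_f)/2 (unit Bloch vectors, sign convention aside) is the usual sudden-quench overlap, so p_k = 1/2 and entropy ln 2 occur exactly where the vectors are orthogonal:

```python
import numpy as np

def bloch(k, j1, j2):
    """Normalized SSH Bloch vector d(k) = (j1 + j2 cos k, j2 sin k, 0)."""
    d = np.array([j1 + j2 * np.cos(k), j2 * np.sin(k), 0.0])
    return d / np.linalg.norm(d)

def mode_entropy(k, quench):
    """Per-mode entanglement entropy after a sudden quench (j1i,j2i)->(j1f,j2f)."""
    (j1i, j2i), (j1f, j2f) = quench
    cos_theta = bloch(k, j1i, j2i) @ bloch(k, j1f, j2f)
    p = (1 + cos_theta) / 2  # entanglement-spectrum eigenvalue for mode k
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

# Quench across the SSH critical point: (j1, j2) = (0.5, 1) -> (2, 1).
quench = ((0.5, 1.0), (2.0, 1.0))
ks = np.linspace(-np.pi, np.pi, 20001)
S = np.array([mode_entropy(k, quench) for k in ks])
print(S.max())  # approaches ln 2 ~ 0.6931 at the critical momentum k*
```

For this quench the orthogonality condition reduces to 2 + 2.5 cos k = 0, so the critical momenta are the isolated points cos k* = -0.8, matching the one-dimensional picture in the abstract.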
Control-centric quantum noise spectroscopy of time-ordered polyspectra
This paper develops improved methods for characterizing environmental noise in quantum systems by introducing a control-centric approach that focuses on time-ordered polyspectra. The work enables better noise spectroscopy protocols that can work under realistic experimental constraints without requiring special control symmetries.
Key Contributions
- Introduction of control-centric quantum noise spectroscopy framework using time-ordered polyspectra
- Generalization of frequency-comb QNS protocols to arbitrary control scenarios without additional symmetry requirements
View Full Abstract
Precise environmental-noise characterisation in open quantum systems is a key step toward high-fidelity quantum control and targeted decoherence suppression in computing and sensing applications. Non-parametric quantum noise spectroscopy (QNS) provides a general-purpose, model-agnostic framework for estimating the spectral properties of an environment. The ability to perform such protocols under realistic constraints is key to their practical applicability. Notably, it is important to account for control constraints and understand how they limit the ability to learn about noise correlations as experiment-agnostic objects. We show how adopting a control-centric point of view allows one to recast the noise spectroscopy problem in such a way that (i) the central objects are now the time-ordered polyspectra, (ii) control filter functions are no longer encumbered by time-ordering. In particular, we show that this approach enables the seamless generalisation of frequency-comb QNS protocols to arbitrary control scenarios without introducing additional control symmetries that effectively remove time-ordering from filter functions, improving estimation in typically pathological scenarios. We demonstrate the targeted reconstruction of the time-ordered polyspectra across classical Gaussian and quantum non-Gaussian environments via simulations.
A Thermodynamic SU(1,1) Witness Framework for Double-Quantum NMR Signals in Neural Tissue
This paper develops a theoretical framework to distinguish between classical and quantum effects in double-quantum NMR signals from neural tissue by establishing thermodynamic bounds on classical fluctuations. The authors show that classical effects are limited to very small amplitudes, making larger observed signals potentially indicative of quantum phenomena.
Key Contributions
- Development of thermodynamic witness framework for SU(1,1) entanglement detection in biological systems
- Theoretical bounds on classical vs quantum contributions to double-quantum NMR signals in neural tissue
View Full Abstract
Entanglement criteria based on variances or Fisher information are well developed for compact collective spin algebras, but their extension to non-compact dynamical sectors is less straightforward. In particular, double-quantum (DQ) observables associated with effective SU(1,1) structures can lead to formally unbounded classical fluctuation estimates unless additional physical constraints are imposed. In this note, we develop a thermodynamic witness framework in which the classically accessible fluctuation sector is strictly bounded by finite-temperature detailed-balance conditions and motionally narrowed sequence-transfer limits. By analyzing the quantum dynamical semigroup of the spin-bath interaction, we demonstrate that spontaneous transient pair correlations generated by a stationary incoherent bath are contractively capped near an amplitude of \(10^{-9}\). Furthermore, classical coherent sequence amplification is empirically bounded to \(\mathcal{O}(10^{-2})\) in motionally narrowed tissue. The resulting functional provides a concrete, theoretically derived bounding framework against which macroscopic DQ anomalies (e.g., fractional amplitudes on the order of \(10\%\) to \(15\%\)) can be rigorously classified as classically inexplicable, provided macro-scale structural stability (constant \(T_2^*\)) is empirically verified.
Exponential quantum advantage in processing massive classical data
This paper proves that small quantum computers can achieve exponential advantages over classical computers in processing massive classical datasets for machine learning tasks like classification and dimensionality reduction. The authors demonstrate that quantum machines with only polylogarithmic size can match the performance of exponentially larger classical machines, and validate this with real-world applications using fewer than 60 logical qubits.
Key Contributions
- Theoretical proof of exponential quantum advantage in classical data processing and machine learning
- Quantum oracle sketching algorithm that enables quantum superposition access to classical data
- Real-world validation showing 4-6 orders of magnitude size reduction with <60 logical qubits
- Demonstration that quantum advantages persist even under strong classical assumptions like BPP=BQP
View Full Abstract
Broadly applicable quantum advantage, particularly in classical data processing and machine learning, has been a fundamental open problem. In this work, we prove that a small quantum computer of polylogarithmic size can perform large-scale classification and dimension reduction on massive classical data by processing samples on the fly, whereas any classical machine achieving the same prediction performance requires exponentially larger size. Furthermore, classical machines that are exponentially larger yet below the required size need superpolynomially more samples and time. We validate these quantum advantages in real-world applications, including single-cell RNA sequencing and movie review sentiment analysis, demonstrating four to six orders of magnitude reduction in size with fewer than 60 logical qubits. These quantum advantages are enabled by quantum oracle sketching, an algorithm for accessing the classical world in quantum superposition using only random classical data samples. Combined with classical shadows, our algorithm circumvents the data loading and readout bottleneck to construct succinct classical models from massive classical data, a task provably impossible for any classical machine that is not exponentially larger than the quantum machine. These quantum advantages persist even when classical machines are granted unlimited time or if BPP=BQP, and rely only on the correctness of quantum mechanics. Together, our results establish machine learning on classical data as a broad and natural domain of quantum advantage and a fundamental test of quantum mechanics at the complexity frontier.
Fermionic entanglement and quantum correlation measures in molecules
This paper analyzes different measures of quantum entanglement and correlation in the water molecule's electronic structure, examining how these quantum correlations change as the molecule stretches or compresses. The researchers use various mathematical tools to characterize how entangled the electrons are in different molecular configurations.
Key Contributions
- Comprehensive analysis of fermionic entanglement measures in molecular systems using full configuration interaction
- Introduction of new quantum correlation measures including up-down two-body mutual information and two-body negativities
- Characterization of electronic entanglement as a function of internuclear distance in water molecules
View Full Abstract
We analyze fermionic entanglement and correlation measures in the ground and low-temperature thermal states of the water molecule as a function of the internuclear distance, in the context of the full configuration interaction approach. The aim is to obtain a general entanglement-based characterization of the electronic eigenstates. We first consider the spin-up/spin-down partition and the associated Schmidt decomposition, examining the total up-down entanglement of the electronic wave function. We then consider the one- and two-body entanglement derived from the one- and two-body reduced density matrices (DMs), which measure both the deviation of the state from a Slater determinant (SD) and the up-down correlation at the two-body level. All blocks of these DMs are examined. We also introduce and analyze new measures such as the up-down two-body mutual information and two types of two-body negativities, the latter measuring the "inner" entanglement of the reduced two-body DMs, i.e., their deviation from a convex mixture of SDs. Finally, the dissociation limit is analyzed, considering both the exact ground state (GS) and the thermal state in the zero-temperature limit, which represents the projector onto the "GS band" of almost degenerate lowest-lying eigenstates.
Hybrid-2D Excitonic Metasurfaces for Complex Amplitude Modulation
This paper demonstrates a new type of metasurface that uses electrically tunable 2D materials (monolayer WS2) to independently control both the amplitude and phase of visible light, enabling applications like reconfigurable beam steering and holographic displays.
Key Contributions
- Development of hybrid-2D excitonic metasurfaces for independent amplitude and phase control
- Demonstration of reconfigurable beam-steering metadevice using electrically tunable monolayer WS2
View Full Abstract
Dynamic control of visible light is crucial for technologies such as holographic displays and adaptive optics. Passive metasurfaces can shape wavefronts at the subwavelength scale, and active metasurfaces promise to extend this functionality into the temporal domain. However, existing metasurfaces for dynamic phase manipulation typically cannot deliver phase modulation across a broad range without causing variations in the scattering amplitude. Here, we use an inverse-design pipeline to numerically demonstrate a hybrid-2D excitonic metasurface platform offering independent amplitude and phase control in the visible regime. Harnessing the gate-tunable excitonic response of monolayer WS2 retrieved from experiments, we design a π-phase modulator with a uniform amplitude profile. Adding a second tunable monolayer, we achieve independent control of the amplitude and phase over the full 0 to 2π phase range, which we leverage for a reconfigurable beam-steering metadevice. Our results demonstrate how hybrid-2D excitonic metasurfaces enable electrically tunable wavefront shaping in the visible regime.
On Lorentzian symmetries of quantum information
This paper explores how Lorentzian symmetries (the mathematical structure underlying special relativity) naturally emerge from quantum information theory without reference to spacetime, showing that certain quantum information measures are invariant under Lorentz transformations. The work supports the 'It from Qubit' paradigm by demonstrating how spacetime structure can arise from fundamental quantum information properties.
Key Contributions
- Derives natural action of Lorentz group on qubit degrees of freedom from preservation of linear entropy
- Shows that n-partite quantum mutual information is SL(2,C)^n invariant
- Demonstrates emergence of Minkowski metric from qubit correlation functions in singlet states
View Full Abstract
A foundational result in relativistic quantum information theory due to Peres, Scudo, and Terno, is that von Neumann entropy is not Lorentz invariant. Motivated by the "It from Qubit" paradigm, here we show that Lorentzian symmetries of quantum information emerge naturally in a pre-spacetime setting, without any reference to external variables such as position or momentum. In particular, we derive the natural action of the restricted Lorentz group $\text{SO}^+(1,3)$ on the internal degrees of freedom of a single qubit from a simple, information-theoretic principle we refer to as preservation of linear entropy. It is then shown that the Lorentz invariance of the linear entropy of a relativistic qubit is a special case of a much more general phenomenon, namely, that any spectral invariant of an operator we term the '$W$-matrix' is an $\text{SL}(2,\mathbb C)^{\otimes n}$ invariant scalar. Consequently, the linear $n$-partite quantum mutual information is shown to be an $\text{SL}(2,\mathbb C)^{\otimes n}$ invariant for all $n$-qubit states. Finally, we show that the correlation function associated with a pair of qubits in the singlet state yields the Minkowski metric on the space of qubit observables, whose symmetry group is the full Lorentz group $\text{SO}(1,3)$. In accordance with the "It from Qubit" paradigm, our results thus establish the natural emergence of relativistic spacetime structure from intrinsic properties of quantum information.
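The closing claim, that the space of qubit observables carries a Minkowski metric, rests on a standard fact about the spinor map from SL(2, C) to the restricted Lorentz group: writing a Hermitian matrix as rho = (t I + x σx + y σy + z σz)/2, the action rho -> A rho A† with det A = 1 preserves det rho and hence the Minkowski norm t² - x² - y² - z². A generic numerical check (an illustration of the underlying mathematics, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def minkowski_norm(rho):
    """Expand rho = (t I + x sx + y sy + z sz)/2 and return t^2 - x^2 - y^2 - z^2.
    Since det rho = (t^2 - |r|^2)/4, this equals 4 det rho."""
    t, x, y, z = (np.real(np.trace(rho @ s)) for s in sigma)
    return t ** 2 - x ** 2 - y ** 2 - z ** 2

# Random Hermitian rho and a random A in SL(2, C) (rescaled so det A = 1).
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = (M + M.conj().T) / 2
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = A / np.sqrt(np.linalg.det(A))

before = minkowski_norm(rho)
after = minkowski_norm(A @ rho @ A.conj().T)
print(before, after)  # equal: rho -> A rho A† preserves the Minkowski norm
```

This is the unnormalized action; the paper's point is to derive it, and its information-theoretic invariants, from within quantum information alone.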
Optimal Quantum State Testing Even with Limited Entanglement
This paper develops new algorithms for quantum state certification that can determine whether an unknown quantum state matches a target state with optimal sample complexity while performing joint measurements on fewer copies at a time. The key innovation is achieving near-optimal performance with joint measurements on only d² copies instead of the full O(d/ε²) copies, which is particularly valuable for high-precision testing scenarios.
Key Contributions
- Novel algorithms for quantum state certification achieving near-optimal sample complexity with limited entanglement measurements on t = d² copies
- Smooth upper and lower bounds demonstrating the fundamental tradeoffs between measurement entanglement and sample efficiency in quantum state testing
- Extension of techniques to mixedness testing and purity estimation with optimal rates using limited joint measurements
View Full Abstract
In this work, we consider the fundamental task of quantum state certification: given copies of an unknown quantum state $\rho$, test whether it matches some target state $\sigma$ or is $\varepsilon$-far from it. For certifying $d$-dimensional states, $\Theta(d/\varepsilon^2)$ copies of $\rho$ are known to be necessary and sufficient. However, the algorithm achieving this complexity makes fully entangled measurements over all $O(d/\varepsilon^2)$ copies of $\rho$. Often, one is interested in certifying states to a high precision; this makes such joint measurements intractable even for low-dimensional states. Thus, we study whether one can obtain optimal rates for quantum state certification and related testing problems while only performing measurements on $t$ copies at once, for some $1 < t \ll d/\varepsilon^2$. While it is well-understood how to use intermediate entanglement to achieve optimal quantum state learning, the only protocol known to achieve optimal testing is the one using fully entangled measurements. Our main result is a smooth copy complexity upper bound for state certification as a function of $t$, which achieves a near-optimal rate at $t = d^2$. In the high-precision regime, i.e., for $\varepsilon < \frac{1}{\sqrt{d}}$, this is a strict improvement over the entanglement used by the aforementioned optimal protocol. We also extend our techniques to develop new algorithms for the related tasks of mixedness testing and purity estimation, and show tradeoffs achieving the optimal rates for these problems at $t = d^2$ as well. Our algorithms are based on novel reductions from testing to learning and leverage recent advances in quantum state tomography in a non-black-box fashion. We complement our upper bounds with smooth lower bounds that imply joint measurements on $t \geq d^{\Omega(1)}$ copies are necessary to achieve optimal rates for certification in the high-precision regime.
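To make the entanglement savings concrete, here is a back-of-the-envelope comparison for illustrative values of $d$ and $\varepsilon$ in the high-precision regime (the specific numbers are hypothetical, not from the paper):

```python
d = 100        # dimension of the state being certified
eps = 1e-3     # target precision; eps < 1/sqrt(d) = 0.1, the high-precision regime

total_copies = d / eps**2       # Theta(d / eps^2): optimal overall sample complexity
joint_full = int(total_copies)  # the fully entangled protocol measures all of these at once
joint_limited = d**2            # the new protocol only needs joint measurements on d^2 copies

print(f"copies needed overall: {total_copies:.0e}")
print(f"joint measurement size, fully entangled protocol: {joint_full:.0e}")
print(f"joint measurement size, limited-entanglement protocol: {joint_limited}")
```

At these values the joint-measurement size drops from $10^8$ copies to $10^4$, which is the sense in which $t = d^2$ is a strict improvement whenever $\varepsilon < 1/\sqrt{d}$.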
Quantum Simulation of Collective Neutrino Oscillations using Dicke States
This paper develops new quantum computing algorithms to simulate how neutrinos change flavor in dense environments like supernovae, where the neutrinos become quantum entangled with each other. The researchers create more efficient algorithms that use fewer qubits by exploiting the mathematical symmetries of these neutrino systems.
Key Contributions
- Development of qubit-efficient quantum algorithms for simulating collective neutrino oscillations
- Novel use of Dicke states and su(2) spin algebra to exploit system symmetries in quantum simulation
View Full Abstract
In dense neutrino gases, which exist for instance in supernovae, the flavour states of different neutrinos may become entangled with one another. The theoretical description of such systems may therefore call for simulations on a quantum computer. Existing quantum simulations of simple toy systems are not optimal in the sense that they do not fully exploit the symmetries of the system. Here, we propose a new class of qubit-efficient algorithms based on Dicke states and the $su(2)$ spin algebra. We demonstrate the excellent performance of these algorithms both on classical and on quantum hardware.
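The symmetry the authors exploit is that collective flavour dynamics live in permutation-symmetric subspaces spanned by Dicke states, which carry the maximal total-spin irrep of the collective $su(2)$ algebra. A small NumPy sketch (generic Dicke-state construction, not the paper's algorithm) verifies this for four qubits:

```python
import numpy as np
from itertools import combinations

n, k = 4, 2       # n qubits, k excitations
dim = 2**n

# Dicke state: equal superposition of all n-qubit basis states with k ones
psi = np.zeros(dim)
for ones in combinations(range(n), k):
    idx = sum(1 << (n - 1 - q) for q in ones)
    psi[idx] = 1.0
psi /= np.linalg.norm(psi)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def collective(op):
    """Collective su(2) generator S_a = (1/2) sum_i sigma_a^(i)."""
    S = np.zeros((dim, dim), dtype=complex)
    for i in range(n):
        mats = [np.eye(2, dtype=complex)] * n
        mats[i] = op
        term = mats[0]
        for m in mats[1:]:
            term = np.kron(term, m)
        S += 0.5 * term
    return S

Sx, Sy, Sz = collective(X), collective(Y), collective(Z)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz

s = n / 2
assert np.allclose(S2 @ psi, s * (s + 1) * psi)   # maximal total spin s = n/2
assert np.allclose(Sz @ psi, (n / 2 - k) * psi)   # S_z eigenvalue n/2 - k
print("Dicke state lies in the maximal su(2) irrep")
```

Because the dynamics never leave this $(n+1)$-dimensional symmetric subspace, a simulation can encode it in $O(\log n)$ qubits instead of $n$, which is the qubit saving the paper targets.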
Interaction-Mediated Non-Reciprocal Dynamics in Open Quantum Systems: From an Exactly Solvable Model to Generic Behavior
This paper studies how particle interactions can transfer non-reciprocal (one-way) dynamics between different parts of quantum systems connected to engineered reservoirs. The researchers develop an exactly solvable model showing that interactions can cause directional movement of excitations even in components not directly coupled to the reservoir.
Key Contributions
- Demonstration of exact solvability for interaction-mediated non-reciprocal Lindbladian dynamics
- Demonstration that density-density interactions can transfer bath-induced non-reciprocity between different degrees of freedom
View Full Abstract
Reservoir engineering has emerged as a powerful paradigm to realize non-reciprocal dynamics in open quantum many-body systems. Here, we show that density-density interactions can transfer bath-induced non-reciprocity between different degrees of freedom. Specifically, we investigate a one-dimensional lattice of spin-$\frac{1}{2}$ fermions with all-to-all Hatsugai-Kohmoto interactions in the presence of an engineered reservoir. We establish the exact solvability of the Lindbladian dynamics and show that the interplay between non-reciprocity and interactions qualitatively reshapes the dynamics of excitations. Remarkably, interactions induce directional drift even in spin sectors that are not directly coupled to the reservoir. By analyzing a driven-dissipative Fermi-Hubbard chain, we show that the same mechanism persists for local interactions. The Hatsugai-Kohmoto model thus emerges as a minimal, exactly solvable platform for interaction-mediated non-reciprocal many-body dynamics.
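The bath-induced non-reciprocity the paper builds on can be illustrated with the textbook cascaded-systems construction (a generic two-qubit sketch, not the paper's Hatsugai-Kohmoto model): a collective jump operator plus a matched coherent coupling lets an excitation flow from qubit 1 to qubit 2 but never back.

```python
import numpy as np

# Two qubits coupled to a unidirectional (chiral) reservoir.
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator |g><e|
I2 = np.eye(2, dtype=complex)
s1, s2 = np.kron(sm, I2), np.kron(I2, sm)

L = s1 + s2                                       # collective jump operator (gamma = 1)
H = 0.5j * (s1.conj().T @ s2 - s2.conj().T @ s1)  # cascade Hamiltonian, direction 1 -> 2

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return comm + diss

def evolve(rho, t, steps=4000):
    """RK4 integration of the Lindblad master equation."""
    dt = t / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2)
        k4 = lindblad_rhs(rho + dt * k3)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

n1, n2 = s1.conj().T @ s1, s2.conj().T @ s2       # excitation number of each qubit
e1 = np.kron([0, 1], [1, 0]).astype(complex)      # |e, g>: qubit 1 excited
e2 = np.kron([1, 0], [0, 1]).astype(complex)      # |g, e>: qubit 2 excited

# Excitation placed on qubit 1 flows to qubit 2 ...
p2 = np.real(np.trace(n2 @ evolve(np.outer(e1, e1.conj()), t=2.0)))
# ... but an excitation on qubit 2 never reaches qubit 1
p1 = np.real(np.trace(n1 @ evolve(np.outer(e2, e2.conj()), t=2.0)))
print(round(p2, 3), "forward transfer;", round(p1, 9), "backward transfer")
```

The forward transfer follows the analytic result $p_2(t) = t^2 e^{-t}$ (about 0.541 at $t = 2$), while the backward population stays at zero; the paper's contribution is that interactions can pass this directionality on to sectors not coupled to the bath at all.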
Rotation of the Transition Dipole in Single hBN Quantum Emitters via Vibronic Coupling
This paper studies quantum emitters in hexagonal boron nitride and discovers that their emission polarization can rotate by up to 40 degrees due to vibrations in the crystal lattice. The researchers show this rotation is caused by phonon-induced changes to the electronic wavefunctions and is suppressed at very low temperatures.
Key Contributions
- Discovery of vibronic-induced transition dipole rotation up to 40° in hBN quantum emitters
- Demonstration that phonon coupling fundamentally perturbs electronic wavefunctions causing polarization changes
- Identification of temperature dependence showing suppression at cryogenic conditions
- First-principles theoretical framework explaining the microscopic origin of dipole reorientation
View Full Abstract
The design of polarization-encoded quantum interfaces relies on the assumption that solid-state emitters possess static transition dipoles defined by the host lattice symmetry. Here, we demonstrate the vibronic breakdown of this static dipole approximation in hexagonal boron nitride quantum emitters. Through high-resolution energy-resolved spectroscopy, we reveal a continuous, spectral rotation of the emission dipole orientation reaching up to 40$^{\circ}$, driven by coupling to the phonon bath. This spectral gradient is significantly suppressed at cryogenic temperatures (6 K), identifying thermally activated lattice vibrations as the primary driver of the dipole reorientation. First-principles calculations on two representative defect types indicate the microscopic origin of this phenomenon as a coordinate-dependent transition dipole, where phonon-induced atomic displacements fundamentally perturb the electronic wavefunctions. By comparing the distinct defect environments, we demonstrate that the magnitude of the polarization rotation scales with the strength of the vibronic coupling. Our results not only identify a fundamental limit for polarization fidelity in solid-state quantum networks but also suggest a new class of strain-tunable quantum photonic devices based on vibronic dipole reorientation.
Groenewold-Moyal twists, integrable spin-chains and AdS/CFT
This paper studies quantum spin-chains deformed by mathematical twists in the context of AdS/CFT correspondence, a theoretical framework connecting gravity and quantum field theory. The authors use integrability methods to analyze the spectral properties of these deformed systems and match results between the field theory side (spin-chains) and gravity side (string theory).
Key Contributions
- Development of integrable methods for analyzing Groenewold-Moyal twisted spin-chains in AdS/CFT
- Construction of deformed BMN classical string solutions and matching with spin-chain ground state energies
- Discovery that conserved charges in the deformed string sigma-model are non-local and don't correspond to standard isometries
View Full Abstract
We take the first steps to address via integrability the spectral problem of AdS/CFT dual pairs deformed by Groenewold-Moyal twists. In particular, we start by considering a twisted spin-chain that couples, through a Groenewold-Moyal twist deformation, two $\mathfrak{sl}(2)$-invariant spin-chains. We interpret this deformed spin-chain as a deformation of a subsector of the $AdS_3/CFT_2$ spin-chain, but the construction shares qualitative features also with the corresponding deformation of the $AdS_5/CFT_4$ spin-chain, for example. As in similar types of deformations, we show that there exists a certain basis in which the spin-chain Hamiltonian takes a Jordan-block form. At the same time, by working in the basis of eigenstates of the generators used to construct the Groenewold-Moyal twist, the Hamiltonian appears to be diagonalisable and with a deformed spectrum. Employing the method of the Baxter equation, we write down the energy of the ground state and of excited states in a perturbation of the deformation parameter. We then consider the string-theory side of the duality, where the twist is realised as a deformation of AdS of the type of Maldacena-Russo-Hashimoto-Itzhaki. We construct a deformation of the usual BMN classical solution, and in the large-$J$ limit we match the leading $\mathcal O(J^{-3})$ term of the energy of the spin-chain ground state with a conserved charge of the string classical solution. Differently from the undeformed setup as well as similar kinds of deformations, we find that the general expression of this charge of the string sigma-model is non-local, and that it does not correspond to a standard isometry. Nevertheless, it can be computed from the monodromy matrix and it is part of the tower of conserved charges provided by integrability.
Physics-Informed Discrete-Event Simulation of Polarization-Encoded Quantum Networks
This paper extends a quantum network simulator to include realistic physics models for polarization-based quantum communication systems. The simulator incorporates optical components and fiber effects to predict how well quantum entanglement can be distributed through real-world quantum networks.
Key Contributions
- Extension of SeQUeNCe simulator with physics-based polarization models
- Integration of realistic fiber effects including dispersion and noise from classical traffic
- Validation against experimental data for quantum state distribution
View Full Abstract
We extend the SeQUeNCe discrete-event simulator with physics-based models for polarization-encoded photonic quantum networks. Our framework integrates Jones-calculus optical components, including an SPDC Bell-state source, wave plates, and polarizing beam splitters, together with a multi-section fiber model capturing polarization mode dispersion, chromatic dispersion, and Raman noise from coexisting classical traffic. We validate the simulator by reproducing experimentally reported spectra, polarization correlations, quantum state tomography, and dispersion- and Raman-induced noise. The resulting platform enables hardware-parameterized prediction of entanglement distribution performance under realistic deployment conditions.
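The Jones-calculus building blocks the simulator composes can be sketched in a few lines of standalone NumPy (this illustrates the formalism only; it does not use SeQUeNCe's actual API). A half-wave plate at 22.5° rotates horizontal polarization to diagonal, which an ideal polarizing beam splitter then splits 50/50:

```python
import numpy as np

H = np.array([1, 0], dtype=complex)   # horizontal Jones vector
V = np.array([0, 1], dtype=complex)   # vertical Jones vector

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def waveplate(retardance, theta):
    """Jones matrix of a wave plate with the given retardance, fast axis at angle theta."""
    J = np.diag([1, np.exp(1j * retardance)])
    return rot(theta) @ J @ rot(-theta)

hwp = waveplate(np.pi, np.pi / 8)     # half-wave plate at 22.5 degrees
out = hwp @ H                          # H -> diagonal polarization (up to a global phase)

# Ideal polarizing beam splitter: transmits H, reflects V
p_transmit = abs(H.conj() @ out) ** 2
p_reflect = abs(V.conj() @ out) ** 2
print(round(p_transmit, 3), round(p_reflect, 3))  # 0.5 0.5
```

Fiber impairments such as polarization mode dispersion enter the same formalism as additional (frequency-dependent) Jones matrices applied between components, which is how the extended simulator parameterizes realistic links.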
Two-dimensional shelving spectroscopy of ultraviolet ground state transitions in dysprosium
This paper investigates ultraviolet transitions in dysprosium atoms using a new spectroscopy technique, mapping out the atom's energy level structure and properties. The research enables better control of these magnetic atoms for precision measurements and could support development of optical atomic clocks.
Key Contributions
- Development of two-dimensional shelving spectroscopy technique for improved detection sensitivity
- Detailed characterization of UV ground state transitions in dysprosium including isotope shifts and hyperfine structure
- Identification of strong decay pathways to long-lived excited states useful for optical population control
View Full Abstract
The open inner-shell electronic structure of lanthanides with large magnetic moments gives rise to a rich spectrum of transitions available for laser cooling, trapping, and coherent control. Despite this, the large number of ultraviolet (UV) transitions below 400 nm have so far been rarely utilized in dipolar atom experiments. Here, we investigate multiple UV ground state transitions in dysprosium. Several of these UV excited states have the largest decay strengths to the ultralong-lived, low-lying first excited state which are comparable to the most commonly used strongest transitions found in dipolar atoms. Using two-dimensional shelving spectroscopy which improves detection sensitivity and provides a straightforward way to determine the hyperfine-isotope structure and excited state total angular momentum $J$, we measure isotope shifts, hyperfine coefficients, and create King plots to determine their electronic nature. Such knowledge of these UV transitions which analogously exist in other magnetic atoms is important for optically populating the first excited state and can be used towards creating an optical clock, high resolution imaging in quantum gas microscopy, and probing lanthanide nuclei with enhanced Schiff moments in search of physics beyond the standard model.
Quantifying and detecting quantum-state texture
This paper develops new mathematical tools for quantifying and detecting 'quantum-state texture,' a quantum resource that measures how unevenly a quantum state's components are distributed in the computational basis. The authors introduce a new texture measure based on Rényi relative entropy and develop 'texture witnesses' for detecting this resource in quantum states.
Key Contributions
- Construction of a new texture measure based on α-z Rényi relative entropy with analysis of its mathematical properties
- Introduction of texture witnesses as detection tools for quantum-state texture with systematic examples
View Full Abstract
Quantum-state texture is a recently proposed quantum resource that characterizes the inhomogeneity of a quantum state's matrix element distribution in the computational basis, enriching our understanding of quantum state structure. To expand its quantification toolkit and establish detection methods, in this article, we investigate the resource theory of texture from both quantitative and detection perspectives. First, we construct a texture measure $\mathcal{T}^{\text{GR}}_{\alpha,z}(\rho)$ based on the $\alpha$-$z$ Rényi relative entropy and present some of its inherent properties. Second, we analyze the mathematical relationships between several existing texture measures, revealing connections among different quantifiers. Finally, drawing on the witness concept from other resource theories, we systematically introduce texture witnesses into the texture theory and provide examples of texture witnesses with special properties.
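The divergence underlying the new measure is the $\alpha$-$z$ Rényi relative entropy, $D_{\alpha,z}(\rho\|\sigma) = \frac{1}{\alpha-1}\log \mathrm{Tr}\big[\big(\sigma^{\frac{1-\alpha}{2z}}\rho^{\frac{\alpha}{z}}\sigma^{\frac{1-\alpha}{2z}}\big)^{z}\big]$. The sketch below implements this generic quantity for full-rank states (the paper's specific texture measure $\mathcal{T}^{\text{GR}}_{\alpha,z}$ built on top of it is not reproduced here):

```python
import numpy as np

def mpow(A, p):
    """Fractional power of a positive definite Hermitian matrix via eigendecomposition."""
    w, U = np.linalg.eigh(A)
    return (U * w**p) @ U.conj().T

def renyi_az(rho, sigma, alpha, z):
    """alpha-z Renyi relative entropy D_{alpha,z}(rho||sigma), base-2 log, alpha != 1."""
    S = mpow(sigma, (1 - alpha) / (2 * z))
    M = mpow(S @ mpow(rho, alpha / z) @ S, z)
    return np.log2(np.real(np.trace(M))) / (alpha - 1)

rho = np.diag([0.7, 0.3]).astype(complex)
sigma = np.diag([0.5, 0.5]).astype(complex)

d = renyi_az(rho, sigma, alpha=2.0, z=1.0)
# Sanity checks: for commuting states this reduces to the classical Renyi divergence,
# and the divergence of a state from itself vanishes.
classical = np.log2(0.7**2 / 0.5 + 0.3**2 / 0.5)
print(np.isclose(d, classical), np.isclose(renyi_az(rho, rho, 2.0, 1.0), 0.0))
```

Varying $\alpha$ and $z$ interpolates between familiar divergences (e.g. $z=1$ gives the Petz family, $z=\alpha$ the sandwiched family), which is what gives the resulting texture measure its tunable properties.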
Fock State Generation and SWAP using a Rabi-Driven Qubit
This paper demonstrates a method to generate specific quantum states (Fock states) and transfer quantum information between different modes using a weakly coupled qubit that is driven by external fields. The approach allows for precise quantum operations while maintaining the isolation needed for high-quality quantum storage and computation.
Key Contributions
- Demonstration of tunable qubit-cavity coupling that preserves cavity isolation while enabling strong interactions on demand
- Deterministic generation of Fock states up to n=5 with microsecond operation times
- Implementation of single-photon SWAP operations and dual-rail Bell state generation using weakly coupled systems
View Full Abstract
The deterministic generation and SWAP of Fock states in isolated high-Q modes form a core foundation for architectures in bosonic quantum computing. Conventionally, these operations necessitate strong coupling to a qubit, which inherently compromises the required cavity isolation. To address this trade-off, we introduce a tunable mechanism wherein a weakly coupled qubit, which preserves mode isolation, is driven to induce a strong interaction on demand. By leveraging a Rabi-driven, qubit-mediated sideband interaction, we realize on-demand Jaynes-Cummings coupling between a transmon and a long-lived cavity mode. Using a superconducting flute cavity with two high-Q modes, we deterministically demonstrate Fock state preparation up to n=5 at operation times of less than 2 microseconds per photon. We also demonstrate and characterize single-photon SWAP in approximately 2 microseconds. Finally, we adapt our SWAP method to generate a dual-rail Bell state. While current performance is constrained by baseline coherence rather than fundamental methodological limits, the protocol scales inherently to accommodate higher photon numbers and faster operational regimes. By enabling complex operations on modes that remain strictly weakly coupled to qubits, this approach establishes a robust pathway for advancing scalable bosonic quantum computing.
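The ladder-climbing logic behind deterministic Fock-state preparation can be reproduced with an ideal Jaynes-Cummings model (a sketch of the standard excite-then-swap sequence, not the paper's Rabi-driven sideband implementation): exciting the qubit and letting it swap for a time $\pi/(2g\sqrt{n+1})$ adds one photon per round.

```python
import numpy as np

N = 8                    # Fock-space truncation
g = 1.0                  # Jaynes-Cummings coupling (arbitrary units)

a = np.diag(np.sqrt(np.arange(1, N)), 1)          # photon annihilation operator
sm = np.array([[0, 1], [0, 0]], dtype=complex)    # qubit lowering |g><e|
Xq = np.array([[0, 1], [1, 0]], dtype=complex)

# Resonant JC interaction H = g (sigma+ a + sigma- a^dagger)
H = g * (np.kron(sm.conj().T, a) + np.kron(sm, a.conj().T))
w, V = np.linalg.eigh(H)

def evolve(psi, t):
    """Exact evolution under H via its eigendecomposition."""
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

def fock(n):
    v = np.zeros(N, dtype=complex); v[n] = 1.0
    return v

ket_g = np.array([1, 0], dtype=complex)
pi_pulse = np.kron(Xq, np.eye(N))                 # instantaneous qubit pi-pulse

state = np.kron(ket_g, fock(0))
for n in range(3):                                # climb the Fock ladder to |g, 3>
    state = pi_pulse @ state                      # excite the qubit
    t_swap = np.pi / (2 * g * np.sqrt(n + 1))     # sqrt(n+1)-scaled swap time
    state = evolve(state, t_swap)                 # |e, n> -> |g, n+1>

p3 = abs(np.vdot(np.kron(ket_g, fock(3)), state)) ** 2
print(round(p3, 6))  # 1.0
```

The $\sqrt{n+1}$ scaling of the swap time is the same bookkeeping that sets the "less than 2 microseconds per photon" budget quoted in the abstract; the paper's contribution is switching this coupling on and off via a Rabi drive so the cavity stays isolated otherwise.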
Improving Feasibility in Quantum Approximate Optimization Algorithm for Vehicle Routing via Constraint-Aware Initialization and Hybrid XY-X Mixing
This paper improves the Quantum Approximate Optimization Algorithm (QAOA) for solving vehicle routing problems by introducing constraint-aware initialization and a hybrid mixer that preserves important structural constraints while exploring solutions. The approach consistently outperforms standard QAOA in simulations, though the advantage diminishes under realistic noise conditions.
Key Contributions
- Constraint-aware initialization strategy that encodes local one-hot constraints into initial quantum states
- Hybrid XY-X mixer design that preserves constraint structure while maintaining exploration capability
- Comprehensive evaluation showing improved feasible solution ratios across ideal, finite-shot, and noisy simulation regimes
View Full Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is a leading framework for quantum combinatorial optimization. The Vehicle Routing Problem (VRP), a core problem in logistics and transportation, is a natural application target, but it poses a major feasibility challenge for standard QAOA because feasible solutions occupy only a tiny fraction of the search space, and the conventional Pauli-$X$ mixer can disrupt partial solution structures that satisfy key local constraints. To address this issue, we propose a constraint-aware QAOA framework with two complementary components. First, we design a lightweight initialization strategy that encodes a selected subset of simple yet informative local one-hot constraints into the initial state, thereby reducing the initial superposition space and increasing the probability mass on states with important local structure. Second, we introduce a hybrid XY-$X$ mixer that preserves the constraint structure imposed at initialization while retaining exploratory flexibility over the remaining unconstrained degrees of freedom during QAOA evolution. We evaluate the proposed framework against standard QAOA under three progressively more realistic regimes: ideal statevector simulation, finite-shot sampling, and noisy finite-shot sampling. Across all regimes, the proposed method consistently achieves lower average energy and higher feasible-solution ratios than standard QAOA, indicating more effective guidance toward structurally valid, lower-cost VRP solutions. However, the performance gap narrows in the noisy regime. Because this setting adopts a hardware-inspired error model based on near-best-reported laboratory-level qubit gate and readout fidelities, the observed attenuation suggests that the practical advantage of the more structured mixer is likely to grow as quantum hardware improves and error rates decline.
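The key structural fact the hybrid mixer relies on is that an XY mixer conserves Hamming weight, so a state initialized in a one-hot (weight-1) subspace stays feasible, whereas the standard Pauli-$X$ mixer leaks out of it. A minimal three-qubit check (generic construction, not the paper's full VRP encoding):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

def op(pauli_by_qubit, n=3):
    """Tensor product with the given Paulis on selected qubits, identity elsewhere."""
    mats = [pauli_by_qubit.get(q, I2) for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def expm_h(Hm, theta):
    w, V = np.linalg.eigh(Hm)
    return (V * np.exp(-1j * theta * w)) @ V.conj().T

n = 3
edges = [(0, 1), (1, 2), (0, 2)]
# XY mixer: sum over edges of (XX + YY)/2 -- a hopping term that conserves Hamming weight
H_xy = sum(0.5 * (op({i: X, j: X}) + op({i: Y, j: Y})) for i, j in edges)
H_x = sum(op({i: X}) for i in range(n))   # standard Pauli-X mixer

onehot = [0b100, 0b010, 0b001]
psi = np.zeros(2**n, dtype=complex); psi[0b100] = 1   # feasible one-hot state

def onehot_weight(state):
    return sum(abs(state[i])**2 for i in onehot)

w_xy = onehot_weight(expm_h(H_xy, 0.7) @ psi)
w_x = onehot_weight(expm_h(H_x, 0.7) @ psi)
print(round(w_xy, 6), round(w_x, 3))  # 1.0 (stays feasible) vs. < 1 (leaks out)
```

The paper's hybrid design applies XY mixing on the qubit groups whose one-hot constraints were encoded at initialization and plain $X$ mixing elsewhere, trading feasibility preservation against exploration.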
Quantum Gibbs sampling through the detectability lemma
This paper develops improved methods for preparing quantum Gibbs states (thermal equilibrium states) using a mathematical tool called the detectability lemma, avoiding the need to simulate complex quantum evolution processes. The approach achieves significant speedups, including quadratic improvement in dependence on spectral gaps and linear improvement by avoiding simulation overhead.
Key Contributions
- New Gibbs state preparation methods that avoid Lindbladian evolution simulation, reducing cost by factor O(M) for M-term local Lindbladians
- Combination of detectability lemma with quantum singular value transformation for ground state projection with quadratic speedup in spectral gap dependence
- Quadratically improved dependence on Lindbladian spectral gap for local commuting Hamiltonians
View Full Abstract
Gibbs state preparation is an important subroutine in quantum computing. In this work we use the detectability lemma to improve Gibbs state preparation. Specifically, we design new Gibbs state preparation methods that do not rely on simulating Lindbladian evolution, thus avoiding the overhead from it. For local Lindbladians consisting of $M$ terms, this approach reduces the cost by a factor of $O(M)$. We also combine the detectability lemma operator and quantum singular value transformation to implement ground state projection operators of frustration-free Hamiltonians, resulting in a quadratic speedup in the spectral gap dependence. Applying this method to Lindbladians for the Gibbs state of local commuting Hamiltonians, we achieve quadratically better dependence on the Lindbladian spectral gap.
On the Computational Complexity of Geometrically Local QAC0 circuits
This paper studies the computational complexity of quantum circuits with geometric constraints, specifically showing that geometrically local quantum circuits (where gates only act on nearby qubits) have the same computational power as unrestricted quantum circuits, but with different efficiency trade-offs. The authors prove lower bounds on how much depth is needed for one-dimensional quantum circuits to compute certain functions like Parity.
Key Contributions
- Proved that 2D geometrically local QAC0 circuits are equivalent in power to unrestricted QAC0 circuits with quadratic size overhead
- Established nearly logarithmic depth lower bounds for 1D quantum circuits computing Parity function
- Connected geometric locality constraints to the open problem of whether Parity is computable in QAC0
View Full Abstract
The computational complexity of $\mathsf{QAC}^0$, which are constant-depth, polynomial-size quantum circuit families consisting of arbitrary single-qubit unitaries and $n$-qubit generalized Toffoli gates, has gained tremendous focus recently. In this work, we initiate the study of the computational complexity of geometrically local $\mathsf{QAC}^0$ circuits, where all the generalized Toffoli gates act on nearest neighbor qubits. We show that any $\mathsf{QAC}^0$ circuit can be exactly simulated by a two-dimensional geometrically local $\mathsf{QAC}^0$ circuit, i.e., a $\mathsf{2D\text{-}QAC}^{0}$ circuit, with a quadratic size blow-up. This implies that $\mathsf{QAC}^0 = \mathsf{2D\text{-}QAC}^{0}$. We further show that if there existed a $\mathsf{QAC}^0$ circuit that computes Parity with a bounded constant error, then for any $\varepsilon > 0$, there would exist a $\mathsf{2D\text{-}QAC}^{0}$ circuit that exactly computes Parity, with a very "thin" width $n^\varepsilon$. We further study the computational power of $\mathsf{1D\text{-}QAC}^{0} $ circuits, i.e., one-dimensional $\mathsf{QAC}^0$ circuits, which are the "thinnest" $\mathsf{2D\text{-}QAC}^{0}$ circuits. We prove a nearly logarithmic depth lower bound on $\mathsf{1D\text{-}QAC}^{0} $ circuits to compute the Parity function, even if allowing an unlimited number of ancilla. Furthermore, if the inputs are encoded in contiguous qubits, we prove that it requires a nearly linear depth $\mathsf{1D\text{-}QAC}^{0} $ circuit to compute the Parity function. This lower bound is almost tight. The results are proved via the combination of the restriction argument and the light-cone argument. These results may provide a new angle for studying the computational power of $\mathsf{QAC}^0$ circuits and for resolving the long-standing open problem of whether Parity is in $\mathsf{QAC}^0$.
Robust and High-Fidelity Controlled Two-Qubit Gates via Asymmetric Parallel Resonant Excitation
This paper presents a new method for implementing high-fidelity controlled two-qubit gates in rare-earth-ion quantum systems using asymmetric parallel resonant excitation. The approach addresses challenges with spectral inhomogeneity and weak coupling, achieving over 99% gate fidelity across a wide frequency range.
Key Contributions
- Novel asymmetric parallel resonant excitation scheme for controlled two-qubit gates
- Demonstration of 99% gate fidelity with 170 kHz detuning tolerance in rare-earth-ion systems
- Robust pulse engineering approach that reduces susceptibility to frequency errors and AC Stark shifts
View Full Abstract
Implementing high-fidelity controlled two-qubit gates in dipole-dipole interacting systems, such as rare-earth-ion crystals, is hindered by spectral inhomogeneity and weak coupling. Existing methods often rely on detuned pulses, making them susceptible to frequency errors and AC Stark shifts. We propose a robust resonant scheme for arbitrary controlled two-qubit gates that utilizes asymmetric excitation and pulse engineering to achieve decoupled, parallel qubit control. Simulations on rare-earth-ion ensemble qubits demonstrate gate fidelities exceeding 99% within a 170 kHz detuning range with off-resonant excitation below 0.2%. This approach offers a robust, scalable route for quantum computing in spectrally crowded systems.
Overlapped groupings for quantum energy estimation: Maximal variance reduction and deterministic algorithms for reducing variance
This paper develops improved methods for measuring quantum system energies by allowing measurement groups to overlap, rather than being completely separate. The authors prove this overlapping approach can significantly reduce measurement uncertainty and demonstrate it scales well to large quantum systems with hundreds of thousands of terms.
Key Contributions
- Theoretical proof that overlapped grouping can achieve maximal variance reduction linear in the number of Hamiltonian terms
- Introduction of a 'repacking' algorithm that transforms disjoint groups into overlapped groups with iterative variance reduction
- Numerical demonstration on systems up to 44 qubits and 575,000 terms showing linear scaling benefits
View Full Abstract
Grouping-based measurement strategies are widely used to reduce measurement complexity in near-term quantum algorithms. While these schemes have typically produced disjoint groups, recently this has been relaxed in what is known as overlapped grouping or coefficient splitting where operators may appear in more than one compatible group. In recent work, it has been numerically shown that this strategy can reduce the variance of energy estimates on small benchmark problems, motivating both the application and further analysis of the method. Here we prove that overlapped grouping for energy estimation can lead to a maximal variance reduction that is linear in the number of Hamiltonian terms. We introduce a new algorithm which we call repacking to transform existing groups into overlapped groups, and we show this repacking procedure iteratively reduces variance under mild assumptions. We also perform numerical simulations with Hamiltonians up to $44$ qubits and $575 \cdot 10^{3}$ terms, assessing overlapped grouping at scale on problems of practical importance. Our numerics show that the variance reduction relative to state-of-the-art (disjoint) grouping increases linearly with the problem size, suggesting that overlapped grouping methods can be a powerful strategy for quantum energy estimation at the scale of Megaquop computers and beyond.
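The core idea, coefficient splitting, can be seen in a two-qubit toy problem: in $H = ZZ + ZX + ZI$, the term $ZI$ is compatible with both the $(Z,Z)$ and $(Z,X)$ measurement bases, so its coefficient can be shared between the two groups with a weight $w$. The sketch below (an illustration of coefficient splitting, not the paper's repacking algorithm) compares estimator variances exactly:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

ZZ, ZX, ZI = np.kron(Z, Z), np.kron(Z, X), np.kron(Z, I2)

rng = np.random.default_rng(1)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
psi = v / np.linalg.norm(v)           # random test state

def var(O):
    """Exact single-shot variance of a Hermitian observable in state psi."""
    m = np.real(np.vdot(psi, O @ psi))
    m2 = np.real(np.vdot(psi, O @ O @ psi))
    return m2 - m**2

# Group 1 measures in the (Z, Z) basis, group 2 in (Z, X); ZI is compatible
# with both, so its unit coefficient is split with weight w across the groups.
def total_var(w):
    return var(ZZ + w * ZI) + var(ZX + (1 - w) * ZI)

disjoint = min(total_var(0.0), total_var(1.0))    # ZI assigned wholly to one group
overlapped = min(total_var(w) for w in np.linspace(0, 1, 101))
print(overlapped <= disjoint)   # True: splitting never hurts, and generically helps
```

Because the total variance is quadratic in $w$, its minimum typically lies strictly inside $(0,1)$; the paper proves how large this gain can get (linear in the number of Hamiltonian terms) and gives a deterministic procedure for finding good splittings at scale.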
Continuous-variable two-dimensional cluster states in the microwave domain
This paper demonstrates the experimental creation of large-scale two-dimensional quantum cluster states using 191 microwave frequency modes through a Josephson Parametric Amplifier. The researchers engineered honeycomb and square lattice structures by carefully controlling multiple pump tones and verified the quantum properties through nullifier measurements.
Key Contributions
- First experimental demonstration of large-scale 2D continuous-variable cluster states in microwave domain with 191 modes
- Development of engineering technique using multiple pump tones to create honeycomb and square lattice cluster state geometries
- Demonstration of nullifier squeezing up to -1.2 dB and analysis of hidden entanglement properties
View Full Abstract
We demonstrate the experimental realization of two-dimensional, continuous variable (CV) cluster states between 191 microwave frequency modes. This result is obtained by exposing vacuum fluctuations to the input of a Josephson Parametric Amplifier, parametrically pumped by a sum of coherent tones around twice its resonant frequency. By carefully tuning pump frequencies, amplitudes, and phases we engineer the interference between mixing products and realize honeycomb and square lattice CV cluster states with three and four pump tones respectively. We prove the presence of the cluster states with a suitable nullifier test, reaching up to $-1.2$ dB of squeezing of the cluster state's nullifiers. We study hidden entanglement (HE) and show no hidden entanglement up to $\sim -1$ dB of squeezing and negligible HE at optimal squeezing.
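The nullifier test used here has a compact Gaussian-state description: an ideal CV cluster state is obtained by applying CZ gates ($p \to p + A x$, with $A$ the lattice adjacency matrix) to $p$-squeezed vacua, and its nullifiers $\delta_i = p_i - \sum_j A_{ij} x_j$ inherit the input squeezing. A covariance-matrix sketch for a four-mode plaquette (idealized model, not the JPA experiment):

```python
import numpy as np

# 4-mode cycle (single square-lattice plaquette): adjacency matrix A
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n = A.shape[0]
r = 0.15   # modest squeezing parameter, chosen to land near the experimental scale

# Input: p-squeezed vacua; covariance ordered as (x_1..x_n, p_1..p_n), vacuum variance 1/2
V_in = np.block([[np.exp(2 * r) * np.eye(n) / 2, np.zeros((n, n))],
                 [np.zeros((n, n)), np.exp(-2 * r) * np.eye(n) / 2]])

# CZ network: x -> x, p -> p + A x (a symplectic transformation)
S = np.block([[np.eye(n), np.zeros((n, n))], [A, np.eye(n)]])
V = S @ V_in @ S.T

# Nullifiers delta_i = p_i - sum_j A_ij x_j, as rows acting on (x, p)
Nmat = np.hstack([-A, np.eye(n)])
V_null = Nmat @ V @ Nmat.T

db = 10 * np.log10(np.diag(V_null) / 0.5)   # nullifier squeezing relative to vacuum
print(np.round(db, 2))  # each nullifier squeezed by ~ -1.3 dB for r = 0.15
```

Each nullifier variance collapses back to the input squeezed variance $e^{-2r}/2$, so the measured dB figure directly reports the effective squeezing distributed across the lattice, which is what the experiment's $-1.2$ dB benchmark quantifies.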
Quantum coherent transceivers toward Holevo-limited communications
This paper demonstrates an integrated quantum-limited receiver that approaches the theoretical Holevo limit for quantum communication channels. The researchers built a 32-channel receiver array and showed it can achieve communication rates that surpass classical Shannon limits by using squeezed light states.
Key Contributions
- Demonstration of integrated photonic-electronic quantum-limited coherent receiver with 14.0 dB shot noise clearance
- Scaling to 32-channel receiver array with automatic common-mode rejection ratio correction
- Experimental measurement of 0.15 dB squeezing below shot noise limit using fiber-optic communication setup
View Full Abstract
The Holevo limit bounds the channel capacity of a communication channel in which information is encoded in quantum states in a Hilbert space at the transmitter and decoded using quantum measurements at the receiver. Saturating the Holevo limit requires quantum-limited transceivers that either generate quantum states of light or employ quantum-limited measurements. Here, we demonstrate an integrated photonic-electronic quantum-limited coherent receiver (QRX) achieving 14.0 dB shot noise clearance (SNC), 520 $\mu$W knee power, 2.57 GHz 3-dB bandwidth, 3.50 GHz shot-noise-limited bandwidth, and 90.2 dB common-mode rejection ratio ($\mathrm{CMRR}$). We scale this design to a 32-channel QRX array with median 26.6 dB $\mathrm{SNC}$, and automatic $\mathrm{CMRR}$ correction yielding a median 76.8 dB $\mathrm{CMRR}$ at minimum. Using the integrated QRX and fiber-optic transmitter, we measure $0.15\pm0.01$ dB of squeezing below the shot noise limit, limited by off-chip losses. We propose a squeezed light communication scheme that can surpass the Shannon limit, with a path toward the Holevo limit.
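The gap between the Holevo limit and classical coherent receivers can be quantified with standard capacity formulas for an ideal lossless bosonic channel with mean photon number $\bar n$ per mode (textbook expressions, not figures from this paper): the Holevo capacity is $g(\bar n) = (\bar n + 1)\log_2(\bar n + 1) - \bar n \log_2 \bar n$, versus $\log_2(1 + \bar n)$ for heterodyne and $\tfrac12\log_2(1 + 4\bar n)$ for homodyne detection of coherent states.

```python
import numpy as np

def g(nbar):
    """Holevo capacity (bits/mode) of an ideal lossless bosonic channel."""
    return (nbar + 1) * np.log2(nbar + 1) - nbar * np.log2(nbar)

def heterodyne(nbar):
    """Shannon capacity of coherent states with ideal heterodyne detection."""
    return np.log2(1 + nbar)

def homodyne(nbar):
    """Shannon capacity of coherent states with ideal homodyne detection."""
    return 0.5 * np.log2(1 + 4 * nbar)

for nbar in [0.1, 1.0, 10.0]:
    print(nbar, round(g(nbar), 3), round(heterodyne(nbar), 3), round(homodyne(nbar), 3))
# The Holevo limit exceeds both classical-receiver capacities at every photon number
```

The advantage is largest at low photon number (at $\bar n = 0.1$, $g$ is roughly 3.5x the heterodyne capacity), which is why quantum-limited receivers and squeezed-light encodings matter most in the power-starved regime targeted by this work.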
Postquantum steering in scenarios with multiple characterised parties
This paper extends the study of postquantum steering - quantum correlations stronger than what standard quantum theory allows - to scenarios involving multiple quantum parties, rather than just one. The authors develop mathematical tools to identify and characterize these stronger-than-quantum phenomena and show they could exist in alternative theories beyond quantum mechanics.
Key Contributions
- Extended postquantum steering theory to multi-party scenarios
- Developed algorithms and semidefinite programming methods to certify postquantum correlations
- Showed fundamental differences between steering and Bell nonlocality in theory-independent descriptions
View Full Abstract
The study of stronger-than-quantum phenomena (i.e., postquantum) has enabled a deeper understanding of the scope of quantum theory. Much is known about the case of correlations in Bell scenarios, where the device-independent framework allowed us to explore its possibilities independently of the formalism of quantum theory. However, less is known about the phenomenon of Einstein-Podolsky-Rosen steering. Here, the 'characterised parties' are assumed to describe their systems locally through the quantum formalism, which complicates a theory-independent description. In addition, a theorem by Gisin and by Hughston, Jozsa and Wootters further hindered the discovery of the phenomenon. The study of postquantum steering, initiated about a decade ago, has been quite fruitful, including: the development of mathematical formalisms that frame the effect, resource theories that quantify it as a resource, and activation protocols that relate it to Bell correlations. However, all these results have a limitation in common: they apply to scenarios with only one quantum party. Here we articulate the concept of postquantum steering for scenarios with multiple quantum parties, bringing in the missing piece to the puzzle. We provide an algorithm to certify postquantumness, which in some cases also certifies quantumness. We also define a hierarchy of semidefinite programs that bounds the set of quantum assemblages from the outside. Moreover, we show that the study of postquantum steering is fundamentally relevant since it is not just a mere mathematical curiosity allowed by the no-signalling principle, but it may arise within compositional theories beyond quantum theory. Our work further discovers a peculiarity of steering: its theory-independent description fundamentally prevents a direct connection with Bell nonlocality -- e.g., nonclassical Bell correlations do not imply nonclassical steering.
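The "assemblages" in the abstract are the sets of unnormalized states steered onto the characterised party, and every quantum assemblage obeys a no-signalling consistency condition. A minimal numpy illustration with a Bell state and two measurement settings (a toy consistency check, not the paper's SDP hierarchy):

```python
import numpy as np

# Two-qubit Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# Alice's projective measurements: setting x=0 -> Z basis, x=1 -> X basis
Z0, Z1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
X0 = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])
X1 = 0.5 * np.array([[1.0, -1.0], [-1.0, 1.0]])
measurements = {0: [Z0, Z1], 1: [X0, X1]}

def assemblage_element(rho, proj):
    """Unnormalized conditional state on Bob: sigma_{a|x} = Tr_A[(P_{a|x} x I) rho]."""
    full = np.kron(proj, np.eye(2)) @ rho
    # partial trace over Alice (the first qubit)
    return full.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

# No-signalling: sum_a sigma_{a|x} must equal Bob's reduced state for every x
rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
for x, projs in measurements.items():
    total = sum(assemblage_element(rho, P) for P in projs)
    assert np.allclose(total, rho_B)
print("assemblage is consistent (no-signalling) for both of Alice's settings")
```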
Complete coherent control of spin qubits in self-assembled InAs quantum dots under oblique magnetic fields
This paper demonstrates complete coherent control of spin qubits in quantum dots using oblique magnetic fields, showing that precise quantum control doesn't require the conventional perpendicular (Voigt) field geometry. The researchers achieved full single-qubit operations including Rabi oscillations and arbitrary rotations, providing more flexibility in quantum device design.
Key Contributions
- Demonstrated complete coherent control of spin qubits under oblique magnetic fields as alternative to Voigt geometry
- Showed tunable spin mixing provides additional degree of freedom for engineering spin basis and optical couplings
- Relaxed constraints on device and field alignment for quantum dot-based quantum information processing
View Full Abstract
We demonstrate complete coherent control of a single spin qubit confined in a self-assembled InAs negatively charged quantum dot subjected to an oblique magnetic field, and directly compare this regime with the conventional Voigt geometry. In the oblique-field configuration, the ground-state spin eigenstates are found to be unequal superpositions of the bare electron spin, with their composition tunable via the orientation of the applied field. This tunable spin mixing provides an additional degree of freedom to engineer the spin basis and associated optical couplings in the charged quantum dot system. Although this geometry has a distinct structure with important implications, it provides a regime in which we can fully and coherently control the tailored spin qubit. We observe Rabi oscillations and Ramsey fringes, and demonstrate arbitrary single-qubit rotations, enabling a direct comparison with the Voigt case. Our results establish that spin-qubit control does not necessarily require a pure Voigt geometry and can instead be achieved under oblique magnetic fields. This relaxes constraints on device and field alignment and offers a versatile route to design and optimize quantum information processing architectures in semiconductor quantum dots.
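The tunable spin mixing can be sketched with a two-level Zeeman Hamiltonian: the field orientation sets the weights of the bare spin states in the eigenbasis. A minimal numpy illustration (field components and units are arbitrary here, not the paper's parameters):

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def spin_mixing(Bx, Bz):
    """Ground-state weights of the bare spin-up/down states for a Zeeman
    Hamiltonian H ~ Bx*sx + Bz*sz (oblique field; g-factors absorbed)."""
    H = Bx * sx + Bz * sz
    vals, vecs = np.linalg.eigh(H)   # ascending order: vecs[:, 0] is the ground state
    return np.abs(vecs[:, 0]) ** 2   # |<up|g>|^2, |<down|g>|^2

# Transverse (Voigt-like) field: equal superposition of the bare spins
print(spin_mixing(1.0, 0.0))   # ~[0.5, 0.5]
# Oblique field: unequal superposition, tunable via the field orientation
print(spin_mixing(1.0, 1.0))
```

Analytically the up-spin weight of the ground state is $(1 - B_z/|B|)/2$, so tilting the field away from the transverse plane continuously detunes the mixing away from 50/50.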
QNAS: A Neural Architecture Search Framework for Accurate and Efficient Quantum Neural Networks
This paper introduces QNAS, a framework that automatically designs quantum neural networks by balancing accuracy, computational efficiency, and hardware constraints for near-term quantum devices. The system discovers optimal quantum circuit architectures across different datasets while considering practical deployment limitations like limited qubit counts.
Key Contributions
- Unified framework for multi-objective optimization of quantum neural networks considering hardware constraints
- Automated discovery of design principles for quantum circuit architectures across different data types
- Integration of circuit cutting overhead awareness into quantum architecture search
View Full Abstract
Designing quantum neural networks (QNNs) that are both accurate and deployable on NISQ hardware is challenging. Handcrafted ansatze must balance expressivity, trainability, and resource use, while limited qubits often necessitate circuit cutting. Existing quantum architecture search methods primarily optimize accuracy, only heuristically control quantum resource use, and mostly ignore the exponential overhead of circuit cutting. We introduce QNAS, a neural architecture search framework that unifies hardware-aware evaluation, multi-objective optimization, and cutting-overhead awareness for hybrid quantum-classical neural networks (HQNNs). QNAS trains a shared-parameter SuperCircuit and uses NSGA-II to optimize three objectives jointly: (i) validation error, (ii) a runtime cost proxy measuring wall-clock evaluation time, and (iii) the estimated number of subcircuits under a target qubit budget. QNAS evaluates candidate HQNNs under a few epochs of training and discovers clear Pareto fronts that reveal tradeoffs between accuracy, efficiency, and cutting overhead. Across MNIST, Fashion-MNIST, and Iris benchmarks, we observe that embedding type and CNOT mode selection significantly impact both accuracy and efficiency, with angle-y embedding and sparse entangling patterns outperforming other configurations on image datasets, and amplitude embedding excelling on tabular data (Iris). On MNIST, the best architecture achieves 97.16% test accuracy with a compact 8-qubit, 2-layer circuit; on the more challenging Fashion-MNIST, 87.38% with a 5-qubit, 2-layer circuit; and on Iris, 100% validation accuracy with a 4-qubit, 2-layer circuit. QNAS surfaces these design insights automatically during search, guiding practitioners toward architectures that balance accuracy, resource efficiency, and practical deployability on current hardware.
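The NSGA-II search above is driven by Pareto dominance over the three objectives. A minimal sketch of the dominance test and front extraction (the candidate values are hypothetical, not QNAS results):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of candidate objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidates: (validation error, runtime proxy, #subcircuits)
candidates = [
    (0.03, 5.0, 4),   # accurate but slow, heavy cutting
    (0.05, 2.0, 2),   # balanced
    (0.05, 3.0, 2),   # dominated by the candidate above
    (0.12, 1.0, 1),   # fast and cheap but inaccurate
]
print(pareto_front(candidates))   # three non-dominated candidates remain
```

NSGA-II additionally ranks by successive fronts and crowding distance; the dominance relation above is the core primitive.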
A Simple and Robust Balanced Homodyne Detector for High-Repetition-Rate Pulsed Sources
This paper develops an improved balanced homodyne detector specifically designed for high-speed pulsed laser sources operating at 100 MHz repetition rates. The detector avoids problems with conventional designs by directly amplifying photocurrent without feedback loops, achieving excellent linearity and shot-noise-limited performance.
Key Contributions
- Novel balanced homodyne detector architecture without transimpedance amplifiers that avoids nonlinearities with ultrashort pulses
- Theoretical model predicting detector response, noise characteristics, and pulse-to-pulse correlations for high-repetition-rate sources
- Experimental demonstration of shot-noise-limited performance with 14 dB SNR and negligible inter-pulse correlations at 100 MHz
View Full Abstract
We design and experimentally characterize a balanced homodyne detector optimized for high-repetition-rate (100 MHz) pulsed optical sources. Unlike conventional transimpedance-amplifier architectures, which suffer from nonlinearities and dynamic instabilities with ultrashort pulses, our approach directly amplifies the photocurrent extracted at the common photodiode node, without feedback loops. A theoretical model describing the detector response, noise, and pulse-to-pulse correlations is developed, providing quantitative predictions for the signal variance, signal-to-noise ratio (SNR), and inter-pulse correlations. Implemented with two matched InGaAs photodiodes illuminated by a 1030 nm mode-locked laser at 100 MHz, the detector exhibits excellent linearity and shot-noise-limited scaling of the signal variance with optical power. Optimizing the temporal integration window yields a maximum SNR of about 14 dB, while correlation measurements confirm negligible inter-pulse correlations. These results demonstrate that the proposed architecture offers a robust and simple solution for high-speed pulsed homodyne detection, suitable for quantum optics and continuous-variable quantum information applications.
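Shot-noise-limited operation means the signal variance scales linearly with optical power, which is the signature the characterization checks. A hedged simulation with Poissonian pulse statistics (the photon numbers per pulse are illustrative, not the experiment's values):

```python
import numpy as np

rng = np.random.default_rng(0)

def shot_noise_variance(mean_photons, pulses=200_000):
    """Sample per-pulse photon counts and return their variance.
    For Poissonian (shot-noise-limited) light, variance equals mean."""
    counts = rng.poisson(mean_photons, pulses)
    return counts.var()

v1 = shot_noise_variance(1_000)   # ~1000
v2 = shot_noise_variance(4_000)   # ~4000: 4x the power gives 4x the variance
print(v1, v2, v2 / v1)            # ratio ~4, the linear-scaling signature
```

Deviations from this linear law (e.g. a quadratic component) would indicate excess classical noise rather than shot-noise-limited detection.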
Scalable on-chip integration of diamond color centers for cryogenic quantum photonics
This paper demonstrates the integration of diamond nitrogen-vacancy centers with photonic crystal cavities on a chip that operates at cryogenic temperatures. The researchers successfully showed enhanced photon emission from the quantum emitters and coupled the system to optical fibers, creating a platform for quantum communication applications.
Key Contributions
- Successful chip-scale integration of diamond NV centers with photonic crystal cavities at cryogenic temperatures
- Demonstration of Purcell enhancement in the integrated system with fiber coupling for scalable quantum photonic platforms
View Full Abstract
Chip integration of quantum emitters is a crucial milestone for scalable quantum photonic information processing. Among optically active defect centers for quantum photonics, diamond color centers are promising because of their long spin coherence times and high photon emission rates. However, for coherent photon emission, they typically require a cryogenic environment to protect optical coherence from thermal phonons, which makes chip integration challenging. In this paper, we develop a chip-integrated diamond photonic crystal cavity embedding an ensemble of nitrogen-vacancy (NV) centers. We confirm cryogenic operation by observing Purcell enhancement of NV-center emission via an edge-coupled optical fiber. This result demonstrates successful integration of diamond color centers, a photonic crystal cavity, and an optical waveguide-fiber package, representing a key step toward scalable diamond-based quantum communication platforms.
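Purcell enhancement follows the standard formula $F_P = \frac{3}{4\pi^2}\left(\frac{\lambda}{n}\right)^3 \frac{Q}{V}$ for a resonant emitter at the cavity field maximum. A worked example with illustrative cavity numbers (the abstract does not report $Q$ or the mode volume; the values below are assumptions):

```python
import math

def purcell_factor(q, mode_volume, wavelength, n):
    """F_P = (3 / 4pi^2) (lambda/n)^3 (Q / V): on-resonance enhancement of
    the spontaneous emission rate for an ideally placed emitter."""
    return (3 / (4 * math.pi ** 2)) * (wavelength / n) ** 3 * q / mode_volume

# Illustrative numbers: diamond (n ~ 2.4), NV zero-phonon line at 637 nm,
# Q = 10^4, and a near-diffraction-limited mode volume V = (lambda/n)^3
lam, n = 637e-9, 2.4
V = (lam / n) ** 3
print(f"F_P ~ {purcell_factor(1e4, V, lam, n):.0f}")   # ~760
```

In practice, spatial and spectral misalignment of the ensemble reduces the observed enhancement well below this ideal figure.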
Tensor-network simulation of quantum transport in many-quantum-dot systems
This paper develops a new tensor-network computational method to simulate electron transport through large arrays of quantum dots, overcoming previous computational limitations. The approach can model systems with up to 50 quantum dots, far exceeding what was previously possible with traditional density-matrix methods.
Key Contributions
- Extended tensor-network solver with jump-counting estimator for computing steady-state electron currents
- Demonstrated orders of magnitude reduction in computational requirements compared to classical approaches
- Enabled simulation of quantum transport in arrays up to 50 quantum dots
View Full Abstract
Transport through correlated nanoscale systems underpins the operation of quantum-dot and molecular-scale devices, yet accurate simulations of large open quantum systems remain computationally challenging as system size increases. Tensor-network methods offer a promising route past this scaling barrier by efficiently compressing quantum states. Here we extend a tensor-based solver with a jump-counting estimator that enables direct computation of steady-state electron currents from lead-induced tunneling events. We benchmark the resulting currents against the state-of-the-art master-equation solver QmeQ across a range of lead-dot and inter-dot coupling parameters and find quantitative agreement in the tractable regime. Compared with classical approaches, the tensor-jump method (TJM) reduces memory requirements and wall-clock time by orders of magnitude, enabling simulations of interacting quantum-dot arrays far beyond the range accessible to density-matrix-based transport solvers and systematic studies of size-dependent nonequilibrium transport in larger arrays. Our approach allows us to model quantum transport in arrays of up to 50 quantum dots.
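The jump-counting idea, estimating the steady-state current from counted lead-induced tunneling events, can be seen in miniature for a single resonant level in the sequential-tunneling limit. A toy Monte Carlo sketch in plain Python (illustrating the estimator only, not the paper's tensor-network solver; rates and units are arbitrary):

```python
import numpy as np

def jump_counting_current(gamma_L, gamma_R, t_max=20_000.0, seed=1):
    """Count electron jumps into the right lead for a single dot with
    in-rate gamma_L (when empty) and out-rate gamma_R (when occupied).
    The steady-state current (with e = 1) is jumps / time."""
    rng = np.random.default_rng(seed)
    t, occupied, jumps = 0.0, False, 0
    while t < t_max:
        if occupied:
            t += rng.exponential(1 / gamma_R)   # wait for tunneling out
            jumps += 1
        else:
            t += rng.exponential(1 / gamma_L)   # wait for tunneling in
        occupied = not occupied
    return jumps / t

gL, gR = 1.0, 2.0
estimate = jump_counting_current(gL, gR)
exact = gL * gR / (gL + gR)        # rate-equation steady-state current
print(estimate, exact)             # the jump estimate tracks the analytic value
```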
Quantum MIMO Channel Modeling in Turbulent Free-Space Optical Links
This paper develops a theoretical model for quantum communication channels that use multiple spatial light beams (MIMO) in free-space optical links affected by atmospheric turbulence. The work shows how turbulence creates correlated errors and photon loss that can be modeled as erasure channels, providing a framework for understanding quantum communication through turbulent air.
Key Contributions
- First-principles model for Quantum MIMO channels in atmospheric turbulence accounting for intermodal crosstalk and finite detection apertures
- Introduction of erasure-extended encoding that maps turbulence-induced effects to flagged erasure states, creating a completely positive trace-preserving channel description
View Full Abstract
Free-space optical (FSO) links supporting spatial multiplexing provide a natural physical realization of Quantum MIMO channels. We develop a first-principles model for Quantum MIMO channels derived directly from wave-optical propagation through three-dimensional atmospheric turbulence. The framework explicitly accounts for intermodal crosstalk, finite detection apertures, and the system-bath separation induced by spatial-mode projection. We distinguish between distinguishable and indistinguishable photon regimes, showing that indistinguishability leads to intrinsically many-body interference effects described by matrix permanents. To obtain a completely positive and trace-preserving logical description, we introduce an erasure-extended encoding in which turbulence-induced leakage and photon loss are mapped to flagged erasure states. The resulting Quantum MIMO channel naturally reduces to a correlated n-qubit erasure channel, with correlations arising from the shared turbulent medium. Limiting regimes in which correlated Pauli channels emerge as effective approximations are also identified.
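The flagged-erasure construction can be made concrete for a single qubit: the channel embeds the qubit in a qutrit whose third level flags loss. A minimal Kraus-operator sketch with a trace-preservation check (a one-qubit illustration, not the paper's full correlated MIMO model; the erasure probability is arbitrary):

```python
import numpy as np

def erasure_kraus(p):
    """Kraus operators mapping a qubit into a qutrit: with probability 1-p
    the state is transmitted, with probability p it is replaced by the
    flagged erasure state |2>."""
    keep = np.zeros((3, 2))
    keep[0, 0] = keep[1, 1] = 1.0              # embed |0>,|1> unchanged
    K0 = np.sqrt(1 - p) * keep
    K1 = np.sqrt(p) * np.outer(np.eye(3)[2], [1.0, 0.0])   # |2><0|
    K2 = np.sqrt(p) * np.outer(np.eye(3)[2], [0.0, 1.0])   # |2><1|
    return [K0, K1, K2]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

p = 0.3
kraus = erasure_kraus(p)
# Completeness: sum_k K^dag K equals the identity on the input qubit (CPTP)
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))
rho_out = apply_channel(kraus, np.array([[0.5, 0.5], [0.5, 0.5]]))  # input |+>
print(rho_out[2, 2])   # erasure-flag population = p = 0.3
```

In the paper's setting, the erasure events on different spatial modes are correlated because they share one turbulent medium; the single-mode channel above is the uncorrelated building block.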
A Practical Introduction to Tensor Network Renormalization with TNRKit.jl
This paper introduces TNRKit.jl, a Julia software package for performing tensor network renormalization calculations on classical statistical mechanics models and lattice field theories. The package can extract thermodynamic properties and universal conformal field theory data from these systems using various tensor renormalization algorithms.
Key Contributions
- Development of TNRKit.jl, an open-source Julia package for tensor network renormalization
- Implementation of multiple TNR algorithms (TRG, HOTRG, LoopTNR) with symmetry-aware framework
- Capability to extract universal conformal data including scaling dimensions and central charge from fixed-point tensors
View Full Abstract
We present TNRKit.jl, an open-source Julia package for Tensor Network Renormalization (TNR) of two- and three-dimensional classical statistical models and Euclidean lattice field theories. Built on top of TensorKit.jl, it provides a symmetry-aware framework for constructing tensor-network representations of partition functions and coarse-graining them using methods such as TRG, HOTRG, and LoopTNR. Beyond thermodynamic quantities, the package enables the extraction of universal conformal data -- including scaling dimensions and the central charge -- directly from fixed-point tensors. TNRKit.jl is designed with both usability and extensibility in mind, offering a practical platform for applying, benchmarking, and developing modern tensor renormalization algorithms. This paper also serves as a self-contained introduction to the TNR framework.
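The starting point of any TNR calculation is a tensor-network representation of the partition function. A plain-numpy sketch for the 2D Ising model, validated against brute force on a 2x2 torus (this illustrates the construction that TNRKit.jl coarse-grains; it does not use the package's API, and the inverse temperature is arbitrary):

```python
import itertools
import numpy as np

beta = 0.4

# Bond Boltzmann matrix M[s, s'] = exp(beta * s * s'), split as M = W @ W.T
M = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
vals, U = np.linalg.eigh(M)          # both eigenvalues positive for beta > 0
W = U @ np.diag(np.sqrt(vals))

# Local tensor T[l, r, u, d] = sum_s W[s, l] W[s, r] W[s, u] W[s, d]
T = np.einsum('sl,sr,su,sd->lrud', W, W, W, W)

# Contract four tensors on a 2x2 torus (periodic in both directions)
Z_tn = np.einsum('bafe,abhg,dcef,cdgh->', T, T, T, T)

# Brute force over the 4 spins: each neighbor pair is linked by two bonds
# (the direct bond plus its periodic wrap-around partner)
Z_bf = sum(np.exp(2 * beta * (sA * sB + sC * sD + sA * sC + sB * sD))
           for sA, sB, sC, sD in itertools.product([1, -1], repeat=4))
print(Z_tn, Z_bf)   # identical up to floating-point error
```

TRG then proceeds by SVD-splitting `T` and recombining the pieces into a coarse-grained tensor, halving the number of sites per step.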
Quantum Relative-alpha-Entropies: A Structural and Geometric Perspective
This paper introduces a new mathematical tool called quantum relative-alpha-entropy that measures how distinguishable quantum states are from each other. Unlike existing methods, this approach captures unique geometric properties of quantum systems and shows how quantum distinguishability relates to classical information theory.
Key Contributions
- Introduction of quantum relative-alpha-entropy extending Umegaki's relative entropy outside f-divergence framework
- Proof of nonlinear convexity property and generalized convexity result for Petz-Renyi divergence
- Establishment of exact correspondence with classical relative-alpha-entropy using Nussbaum-Szkola distributions
View Full Abstract
Most quantum divergences derive their structure from classical f-divergences or Renyi-type constructions, a dependence that obscures several quantum geometric effects. We introduce a quantum relative-alpha-entropy that extends Umegaki's relative entropy while falling outside the f-divergence class. The proposed divergence exhibits a nonlinear convexity property, which yields a generalized convexity result for the Petz-Renyi divergence for alpha greater than one, complementing the known convexity for alpha less than one. It is additive under tensor products, invariant under unitary transformations, and depends only on the relative geometry of quantum states rather than their absolute magnitudes. Using Nussbaum-Szkola-type distributions, we also establish an exact correspondence of this divergence with classical relative-alpha-entropy. This reveals relative-alpha-entropy as a fundamentally geometric notion of quantum distinguishability not captured by existing divergence frameworks.
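The Nussbaum-Szkola correspondence can be checked numerically for the familiar Petz-Renyi divergence, which equals the classical Renyi divergence of the Nussbaum-Szkola distributions. A numpy verification of that known identity (the paper's new relative-alpha-entropy itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_density(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def mat_power(rho, a):
    vals, vecs = np.linalg.eigh(rho)
    return (vecs * vals ** a) @ vecs.conj().T

def petz_renyi(rho, sigma, a):
    """Petz-Renyi divergence: (1/(a-1)) log Tr[rho^a sigma^(1-a)]."""
    return np.log(np.trace(mat_power(rho, a) @ mat_power(sigma, 1 - a)).real) / (a - 1)

def ns_renyi(rho, sigma, a):
    """Classical Renyi divergence of the Nussbaum-Szkola distributions
    P(i,j) = r_i |<e_i|f_j>|^2 and Q(i,j) = s_j |<e_i|f_j>|^2."""
    r, E = np.linalg.eigh(rho)
    s, F = np.linalg.eigh(sigma)
    overlap = np.abs(E.conj().T @ F) ** 2
    P = r[:, None] * overlap
    Q = s[None, :] * overlap
    return np.log(np.sum(P ** a * Q ** (1 - a))) / (a - 1)

rho, sigma = random_density(3), random_density(3)
print(petz_renyi(rho, sigma, 0.5), ns_renyi(rho, sigma, 0.5))   # coincide
```

The identity holds because $\sum_{ij} r_i^\alpha s_j^{1-\alpha} |\langle e_i|f_j\rangle|^2 = \mathrm{Tr}[\rho^\alpha \sigma^{1-\alpha}]$ term by term.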
Emergence of Non-Markovian Classical-Quantum Dynamics from Decoherence
This paper shows that classical-quantum dynamics (where quantum matter interacts with a classical mediator like gravity) can emerge naturally from fully quantum systems through decoherence effects. The authors derive conditions for when this classical-quantum interpretation is valid and demonstrate that experimental agreement with classical-quantum models doesn't prove the mediator is fundamentally classical.
Key Contributions
- Derivation of non-Markovian classical-quantum dynamics from decoherence of fully quantum systems
- Identification of positivity criterion for semi-Wigner operator as validity condition for classical-quantum interpretation
View Full Abstract
The quantum nature of gravity remains experimentally unverified, despite recent proposals to probe it using tabletop experiments such as gravity-mediated entanglement schemes. In parallel, consistent formulations of classical-quantum dynamics have been developed as alternative descriptions of gravity, in which quantum matter interacts with a classical mediator assumed to be fundamentally classical. In this work, we show that classical-quantum dynamics arise generically as an effective description of fully quantum systems under decoherence, providing a bridge between fully quantum and classical-quantum dynamics. We derive the reduced dynamics, which are generically non-Markovian, using an explicit hidden model in which the mediator is coupled to unobserved environmental degrees of freedom. We identify a concrete criterion for when a classical-quantum interpretation is valid: the semi-Wigner operator associated with the mediator sector must remain positive semidefinite, which can be expressed as a positivity condition on nonlocal kernels governing the evolution. In the short-memory limit, the reduced evolution reproduces Markovian classical-quantum dynamics of Oppenheim and collaborators. Our results imply that a classical mediator can arise effectively from decohered quantum dynamics, so that experimental agreement with classical-quantum models does not uniquely determine whether the mediator is fundamentally classical.
Millisecond spin relaxation times of distinct electron and hole subensembles in MA$_x$FA$_{1-x}$PbI$_3$ perovskite crystals
This paper studies the spin properties of electrons and holes in mixed-cation perovskite crystals, measuring exceptionally long spin relaxation times up to 2 milliseconds at cryogenic temperatures. The researchers identify multiple distinct spin subensembles and characterize their relaxation mechanisms, establishing these materials as promising candidates for quantum information applications.
Key Contributions
- Discovery of millisecond-long spin relaxation times in mixed-cation perovskite crystals
- Identification and characterization of multiple distinct electron and hole spin subensembles with different g-factors
- Detailed analysis of spin relaxation mechanisms dominated by nuclear Overhauser fields
View Full Abstract
The unique combination of outstanding optical quality and attractive spin properties opens new avenues for optical spin control in hybrid organic-inorganic perovskite semiconductors. Using the optically detected magnetic resonance technique, we study the spins of electrons and holes in mixed-cation MA$_x$FA$_{1-x}$PbI$_3$ single crystals with $x = 0.4$ and 0.8. Multiple distinct spin subensembles with $g$-factors spanning from 2.9 to 3.6 for electrons and from 0.5 to 1.2 for holes are resolved, revealing diverse localization environments. We measure the longitudinal spin relaxation times, $T_1$, reaching 2 ms and remaining in the $\mu$s range even for weakly localized carriers at the cryogenic temperature of 1.6 K. The magnetic-field dependence of $T_1$ is dominated by the random nuclear (Overhauser) fields with strengths of $\sim 0.4-0.8$ mT for electrons and $\sim 4-12$ mT for holes, corresponding to $\mu$s-long correlation times of the hyperfine field determined by carrier hopping between shallow localization sites. The temperature dependence of $T_1$ reveals a weak localization potential of the charge carriers and shows a correlation between $T_1$ and the inhomogeneity of the spin ensemble. These results establish mixed-A-site perovskite single crystals as a promising solid-state platform with long-lived spin states for quantum information applications.
Telecom C-band single-photon sources with a semiconductor-dielectric microresonator
This paper demonstrates an improved single-photon source for quantum communication using a hybrid semiconductor-dielectric microresonator design. The device generates C-band photons suitable for fiber-optic quantum key distribution with a record 11% end-to-end efficiency.
Key Contributions
- Novel hybrid semiconductor-dielectric micropillar design combining GaAs/AlGaAs and Si/SiO2 Bragg reflectors
- Record-breaking 11% end-to-end efficiency for C-band single-photon generation
- Demonstration of resonant π-pulse excitation for polarized photon generation
View Full Abstract
Secure communications with quantum key distribution over fiber-optic links is one of the few recognized applications of quantum physics at the level of individual quanta -- single C-band photons. Currently, the widely used sources of such photons are highly attenuated laser pulses, characterized by a low probability of single-photon occurrence. Here, we present an efficient source with an InAs/GaAs quantum dot on a metamorphic buffer layer inside a micropillar-shaped microcavity. The key innovation is the use of different semiconductor and dielectric materials to form the lower (GaAs/AlGaAs) and upper (Si/SiO$_2$) Bragg reflectors. Compatibility of these materials in a monolithic source is achieved by depositing a small number of Si/SiO$_2$ pairs on an incomplete micropillar made from a coherent heterostructure grown by molecular beam epitaxy. This design enables resonant excitation with $\pi$-pulses and generation of polarized photons with a record-breaking end-to-end efficiency of 11%.
A hardware efficient quantum residual neural network without post-selection
This paper develops a new quantum neural network architecture that uses residual connections (similar to ResNets in classical deep learning) but implements them more efficiently without requiring post-selection measurements. The approach achieves comparable accuracy to existing quantum machine learning models while using 10 times fewer quantum gates, making it more practical for near-term quantum devices.
Key Contributions
- Hardware-efficient quantum residual neural network that avoids post-selection
- 10x reduction in gate count while maintaining comparable accuracy
- Mitigation of barren plateau training problems in variational quantum circuits
- Demonstration of adversarial robustness in quantum machine learning models
View Full Abstract
We propose a hardware efficient quantum residual neural network which implements residual connections through a deterministic linear combination of identity and variational unitaries, enabling fully differentiable training. In contrast to previous implementations of residual connections, our architecture avoids post-selection while preserving residual learning. Furthermore, we establish trainability of our model, mitigating barren plateaus, which are considered a major limitation of variational quantum learning models. To demonstrate our model, we report its application to image classification tasks by training it on the MNIST, CIFAR, and SARFish datasets, achieving accuracies of 99% and 80% for binary and multi-class classification, respectively. These accuracies are comparable to those previously achieved with standard variational models; however, our model requires 10x fewer gates, making it better suited for resource-constrained near-term quantum processors. In addition to high accuracies, the proposed architecture also demonstrates adversarial robustness, which is another desirable property for quantum machine learning models. Overall our architecture offers a new pathway for developing accurate, robust, trainable and hardware efficient quantum machine learning models.
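At the state level, a residual connection amounts to the map $|\psi\rangle \to (I + U(\theta))|\psi\rangle$ followed by normalization. A toy single-qubit numpy sketch (illustrating the skip connection only; the paper's deterministic, post-selection-free circuit-level implementation is more involved):

```python
import numpy as np

def residual_layer(psi, theta):
    """Toy residual map |psi> -> normalize((I + U(theta))|psi>), with
    U(theta) a single-qubit Y rotation. Illustrates the skip connection,
    not the paper's hardware-efficient circuit construction."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    U = np.array([[c, -s], [s, c]])          # RY(theta) rotation
    out = psi + U @ psi                      # identity branch + unitary branch
    return out / np.linalg.norm(out)

psi = np.array([1.0, 0.0])
print(residual_layer(psi, 0.0))     # theta=0: the layer acts as the identity
print(residual_layer(psi, np.pi))   # the residual term rotates part of the amplitude
```

Note that the naive linear combination $(I + U)$ is non-unitary, which is exactly why circuit implementations usually resort to post-selection; the paper's contribution is realizing it deterministically.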
Perturbative hydrogenic Lamb shifts and radiative decay rates -- an so(4,2)-based algebraic approach
This paper develops algebraic techniques based on the Lie algebra so(4,2) to calculate Lamb shifts and radiative decay rates in hydrogen-like atoms. The approach exploits the symmetry of the hydrogenic Hamiltonian to provide a unified framework for computing these quantum electrodynamic effects beyond the dipole approximation.
Key Contributions
- Development of so(4,2) algebraic approach for calculating Lamb shifts and radiative decay rates
- Derivation of integral representations for complex energy shifts that go beyond dipole approximation
- Unified framework for evaluating both Lamb shifts and decay rates using symmetry-based methods
View Full Abstract
It is shown that algebraic techniques based on the Lie algebra so(4,2) provide efficient tools for evaluating Lamb shifts and radiative decay rates for hydrogenic energy eigenstates, as they systematically exploit the intrinsic symmetry of the hydrogenic Hamiltonian. As a main result, in lowest-order perturbation theory with respect to the fine-structure constant, integral representations are derived for the complex-valued energy shifts of hydrogen-like ions, from which Lamb shifts and radiative decay rates can be evaluated in a unified way, thus generalizing a recently discussed algebraic approach of Maclay. To exemplify the usefulness of this algebraic approach, numerical results are presented for Lamb shifts and radiative decay rates which transcend the dipole approximation and contain the dipole approximation as a limiting case.
Towards National Quantum Communication in Europe: Planning and Sizing Terrestrial QKD Networks
This paper develops a methodology for planning and sizing national quantum key distribution (QKD) networks across Europe as part of the EuroQCI initiative. Using Austria as a case study with Monte Carlo simulations, the authors create scaling rules to estimate infrastructure requirements for other EU member states based on population and geography.
Key Contributions
- Development of a reproducible methodology for planning national QKD network infrastructure
- Creation of scaling rules for estimating QKD network requirements across EU member states based on the Austrian case study
View Full Abstract
The European Union is developing the European Quantum Communication Infrastructure (EuroQCI) as a pan-European network to provide secure communication capabilities across Member States, including governmental and critical-infrastructure domains. While the strategic objective is defined at EU level, the required scale and structure of national quantum key distribution (QKD) networks remain largely unspecified. This work addresses the question of how to plan and size national terrestrial QKD networks to support critical infrastructure and public authorities. We propose a reproducible planning methodology that estimates network size, total fiber length, and the number of required QKD components based on a small set of explicit assumptions. The approach is demonstrated for Austria, where a synthetic but structured network model is constructed and evaluated using Monte Carlo simulation. The model focuses on terrestrial QKD infrastructure and explicitly excludes space-based segments. It estimates endpoint counts, trusted repeater node requirements, and hop-length distributions under realistic operational constraints. The Austrian case is then used as a baseline to derive scaling rules for other EU Member States based on population and geographic extent. The results provide first-order planning estimates for national QKD backbone sizes across Europe. These estimates are not intended as deployment designs but as planning-level references that support early-stage cost assessment and infrastructure dimensioning under the EuroQCI framework.
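The first-order sizing logic can be sketched directly: trusted-node counts follow from link lengths and a maximum hop distance, and national estimates scale with population and geographic extent. A toy Python sketch with invented parameters (the 80 km hop, the baseline figures, and the sqrt(n*A)-style fiber scaling rule are assumptions for illustration, not the paper's Austrian numbers):

```python
import math

def trusted_nodes_for_link(link_km, max_hop_km):
    """Intermediate trusted repeater nodes needed so that no hop on the
    link exceeds the maximum trusted-node spacing."""
    return max(0, math.ceil(link_km / max_hop_km) - 1)

def scale_estimate(base_endpoints, base_fiber_km, pop_ratio, area_ratio):
    """One plausible scaling rule: endpoints grow with population, while
    total fiber follows the sqrt(n * A) growth of minimum-length networks."""
    return (round(base_endpoints * pop_ratio),
            round(base_fiber_km * math.sqrt(pop_ratio * area_ratio)))

# Hypothetical 400 km backbone link with an 80 km maximum trusted-node hop
print(trusted_nodes_for_link(400, 80))   # 4 intermediate nodes
# Hypothetical country with 2x the population and 1.5x the area of the baseline
print(scale_estimate(100, 2000, 2.0, 1.5))
```

The paper's Monte Carlo approach replaces such closed-form rules with sampled hop-length distributions, but the planning quantities (endpoints, fiber length, trusted-node counts) are the same.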
Quantum simulation of baryon scattering in SU(2) lattice gauge theory
This paper uses tensor-network computational techniques to simulate particle collisions in a simplified quantum field theory model, studying how mesons and baryons (composite particles built from quarks) scatter off each other in different scenarios. The researchers found that mixed particle collisions show unique quantum entanglement behavior where particles become quantum mechanically linked during the collision process.
Key Contributions
- First real-time tensor-network simulation of baryon scattering in SU(2) lattice gauge theory
- Discovery of novel entanglement dynamics in mixed baryon number scattering where particles become spatially delocalized during collisions
View Full Abstract
We present a first real-time study of hadronic scattering in a $(1+1)$-dimensional SU(2) lattice gauge theory with fundamental fermions using tensor-network techniques. Working in the gaugeless Hamiltonian formulation, we investigate scattering processes across sectors of fixed global baryon number $B = 0, 1, 2$, corresponding respectively to meson-meson, meson-baryon, and baryon-baryon collisions. At strong coupling, the $B = 0$ and $B = 2$ channels exhibit predominantly elastic dynamics closely resembling the U(1) Schwinger model. The mixed $B = 1$ sector displays qualitatively new behavior: meson and baryon wavepackets become entangled during the collision, with the slower state becoming spatially delocalized while the faster one propagates ballistically. We characterize these processes through local observables, entanglement entropy, and the information lattice.
Broken Quantum: A Systematic Formal Verification Study of Security Vulnerabilities Across the Open-Source Quantum Computing Simulator Ecosystem
This paper presents the first comprehensive security audit of open-source quantum computing simulators, analyzing 45 frameworks and identifying 547 security vulnerabilities including a novel quantum-specific attack called QASM injection. The study uses formal verification methods to validate vulnerability patterns and reveals security issues that could compromise quantum algorithm research infrastructure.
Key Contributions
- First comprehensive formal security audit of quantum computing simulator ecosystem
- Discovery of QASM injection as a novel quantum-specific vulnerability with no classical analog
- Formal verification of 13 vulnerability patterns using Z3 SMT solver
- Identification of vulnerability transfer from commercial frameworks to national laboratory infrastructure
View Full Abstract
Quantum computing simulators form the classical software foundation on which virtually all quantum algorithm research depends. We present Broken Quantum, the first comprehensive formal security audit of the open-source quantum computing simulator ecosystem. Applying COBALT QAI -- a four-module static analysis engine backed by the Z3 SMT solver -- we analyze 45 open-source quantum simulation frameworks from 22 organizations spanning 12 countries. We identify 547 security findings (40 CRITICAL, 492 HIGH, 15 MEDIUM) across four vulnerability classes: CWE-125/190 (C++ memory corruption), CWE-400 (Python resource exhaustion), CWE-502/94 (unsafe deserialization and code injection), and CWE-77/22 (QASM injection -- a novel, quantum-specific attack vector with no classical analog). All 13 vulnerability patterns are formally verified via Z3 satisfiability proofs (13/13 SAT). The 32-qubit boundary emerges as a consistent formal threshold in both C++ and Python vulnerability chains. Supply chain analysis identifies the first documented case of vulnerability transfer from a commercial quantum framework into US national laboratory infrastructure (IBM Qiskit Aer to XACC/Oak Ridge National Laboratory). Nine frameworks score 100/100 under all four scanners; Qiskit Aer, Cirq, tequila, PennyLane, and 5 others score 0/100.
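The recurring "32-qubit boundary" has a simple arithmetic origin, sketched below: a state-vector simulator holds 2^n complex amplitudes, so at n = 31 the dimension exceeds a signed 32-bit index, at n = 32 it exceeds even an unsigned one, and the memory footprint reaches 64 GiB. This toy calculation only illustrates why the threshold is natural; it is not the paper's COBALT QAI analysis.

```python
# Toy arithmetic behind the "32-qubit boundary" (not the paper's engine):
# a state-vector simulator stores 2**n complex amplitudes.
INT32_MAX = 2**31 - 1           # largest signed 32-bit index (CWE-190 territory)
UINT32_MAX = 2**32 - 1          # largest unsigned 32-bit index
BYTES_PER_AMPLITUDE = 16        # complex128

for n in (30, 31, 32, 33):
    dim = 2**n
    mem_gib = dim * BYTES_PER_AMPLITUDE // 2**30   # CWE-400 territory
    print(f"{n} qubits: {mem_gib:4d} GiB, "
          f"breaks int32: {dim > INT32_MAX}, breaks uint32: {dim > UINT32_MAX}")
```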
Attosecond quantum spectroscopy with entangled photon pairs
This paper demonstrates a new technique called attosecond quantum spectroscopy that uses entangled photon pairs to drive high-harmonic generation in solids, successfully transferring quantum correlations from infrared light into the extreme ultraviolet frequency range. The work opens possibilities for quantum-enhanced spectroscopy of ultrafast dynamics in materials.
Key Contributions
- First demonstration of transferring quantum photon correlations into the XUV domain via high-harmonic generation
- Development of attosecond quantum optical spectroscopy technique using entangled photons to probe ultrafast dynamics in solids
View Full Abstract
Bright squeezed light from parametric down-conversion in the infrared (IR) frequency range has triggered the emergence of attosecond quantum optics -- a new research field at the interface of quantum optics, strong-field physics, and attosecond technology. Two challenges arise at this interface: transferring quantum features of the IR light sources to the ultraviolet (UV) and extreme ultraviolet (XUV) frequency range via strong-field nonlinearities, and exploiting quantum optical properties of the nonlinear optical response as a new probe in ultrafast dynamics. Here, we address both by driving high-harmonic generation (HHG) in solids with entangled photon pairs either in degenerate or non-degenerate frequency modes. In the degenerate mode, single-shot measurements of harmonics up to the 10th order reveal strong photon bunching whose $g^{(2)}$ first grows and then decreases with the harmonic order. We show that this behavior tracks different microscopic mechanisms responsible for harmonic emission, demonstrating the potential of attosecond quantum optical spectroscopy. In the non-degenerate case, the harmonics retain quantum-induced correlations, verified by wavelength-resolved second-order cross-correlation maps. Our findings demonstrate transfer of quantum photon correlations into the XUV domain and open a pathway toward quantum-enhanced attosecond spectroscopy and control of ultrafast dynamics in solids.
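The bunching statistic reported here is the normalized second-order correlation g^(2)(0) = ⟨n(n−1)⟩/⟨n⟩². A hedged sketch of how such a quantity is estimated from per-shot photon counts, using simulated Poissonian and thermal records rather than the experiment's data:

```python
import numpy as np

def g2_zero(counts):
    """Estimate g2(0) = <n(n-1)> / <n>^2 from per-shot photon counts."""
    n = np.asarray(counts, dtype=float)
    return float(np.mean(n * (n - 1)) / np.mean(n) ** 2)

rng = np.random.default_rng(1)
nbar, shots = 5.0, 200_000

poissonian = rng.poisson(nbar, shots)            # coherent-like light
thermal = rng.geometric(1 / (nbar + 1), shots) - 1  # Bose-Einstein statistics

print(g2_zero(poissonian))  # ≈ 1 (no bunching)
print(g2_zero(thermal))     # ≈ 2 (thermal bunching)
```

g2 = 1 marks Poissonian statistics and g2 = 2 full thermal bunching; the paper's observation is that the harmonics' g2 first grows and then falls with harmonic order.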
Strong-field ionization of atoms with bright squeezed vacuum light
This paper demonstrates strong-field ionization of xenon atoms using bright squeezed vacuum light, showing that quantum light fluctuations can selectively enhance certain patterns in electron emission and act as a coherence filter that protects quantum trajectories from noise.
Key Contributions
- First experimental demonstration of strong-field ionization driven by bright squeezed vacuum light
- Discovery of quantum-fluctuation-induced coherence protection mechanism in ultrafast atomic processes
- Development of quantum-light-corrected quantum-trajectory Monte Carlo model for nonclassical light-matter interactions
View Full Abstract
Strong-field ionization is the cornerstone of attosecond physics, which has been extensively studied under coherent-state driving. Recently, the interface between attosecond physics and quantum optics has emerged as a new frontier. Yet, owing to experimental limitations, the role of the quantum nature of light in atomic strong-field ionization has remained unexplored. Here, we demonstrate strong-field ionization of xenon atoms driven by bright squeezed vacuum (BSV) with average pulse energy up to 10 μJ. We show that, as a nonclassical state with zero mean field and strong intensity fluctuations, BSV selectively enhances the spider-like holographic structures in the photoelectron momentum distributions. Using a quantum-light-corrected quantum-trajectory Monte Carlo (q-QTMC) model, we attribute this effect to the intrinsic coherence of trajectory pairs emitted within the same subcycle field fluctuation. These dynamically correlated paths exhibit enhanced phase stability and remain robust against dephasing, whereas asynchronous paths are filtered out by field noise. Our results reveal a quantum-fluctuation-induced mechanism for coherence protection in strong-field processes, positioning BSV as an effective coherence filter and establishing a new regime of quantum-enabled noise-resilient ultrafast dynamics.
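For a single-mode caricature of BSV (the experiment uses bright multimode squeezed vacuum), the squeezed-vacuum photon-number distribution illustrates the two properties the abstract leans on: only even photon numbers appear (zero mean field), and the variance 2n̄(n̄+1) is strongly super-Poissonian (large intensity fluctuations). A sketch from the standard textbook formula:

```python
import math

def squeezed_vacuum_pn(r, nmax):
    """P(n) for single-mode squeezed vacuum with squeezing parameter r:
    P(2m) = C(2m, m) * (tanh(r)^2 / 4)^m / cosh(r), and P(odd) = 0."""
    p = [0.0] * (nmax + 1)
    t2 = math.tanh(r) ** 2
    for m in range(nmax // 2 + 1):
        p[2 * m] = math.comb(2 * m, m) * (t2 / 4) ** m / math.cosh(r)
    return p

r = 1.0
p = squeezed_vacuum_pn(r, 400)
mean = sum(n * pn for n, pn in enumerate(p))
var = sum(n * n * pn for n, pn in enumerate(p)) - mean**2

print(sum(p))                       # ≈ 1 (normalized)
print(mean, math.sinh(r) ** 2)      # mean photon number = sinh^2(r)
print(var, 2 * mean * (mean + 1))   # super-Poissonian: var = 2*nbar*(nbar+1)
```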
Magnon harmonic generation in antiferromagnets: Dynamical symmetry enriched by symmetry breaking
This paper studies how intense THz lasers can drive nonlinear spin dynamics in antiferromagnetic materials, creating harmonic generation patterns that reveal information about magnetic ordering and phase transitions. The researchers investigate how different magnetic phases produce distinct harmonic generation spectra with specific symmetries and selection rules.
Key Contributions
- Theoretical and numerical analysis of magnon harmonic generation in different antiferromagnetic phases (Néel, canted, weak ferromagnetic)
- Discovery of dynamical symmetries and selection rules in harmonic generation spectra that can reveal magnetic order and symmetry breaking
View Full Abstract
In recent years, intense THz laser techniques have enabled the experimental observation of nonlinear spin dynamics in antiferromagnets, since elementary excitations such as magnons lie in the GHz to THz range and can therefore be excited directly. We numerically and theoretically investigate THz-laser- or GHz-wave-driven harmonic generation in the typical ordered phases of antiferromagnets: the Néel, canted, and weak ferromagnetic phases. The radiated harmonics are created by the incident-wave-driven magnon dynamics. We point out that magnetic order and phase transitions can change the harmonic-generation spectra, in contrast to metallic, semiconductor, or atomic-gas systems without (spontaneous) symmetry breaking. We consider magnon harmonic generation driven both by a standard single-color laser and by a two-color laser, and find several dynamical symmetries and the corresponding selection rules for the harmonic generation. These results indicate that magnon harmonic generation spectra provide new information about the symmetry or symmetry breaking of antiferromagnets.
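The selection rules discussed here follow from dynamical symmetry. A minimal classical caricature (a driven anharmonic oscillator, not the paper's spin model): with an inversion-symmetric potential only odd harmonics are radiated, and adding a symmetry-breaking quadratic term switches the even harmonics on. All parameter values below are illustrative.

```python
import math
import numpy as np

def steady_harmonics(alpha, beta=0.5, gamma=0.2, E=0.5, Om=0.4,
                     n_transient=80, n_measure=40, steps=400):
    """RK4-integrate  x'' + gamma*x' + x + alpha*x**2 + beta*x**3 = E*cos(Om*t)
    and return the steady-state Fourier amplitudes |c_k| at harmonics k*Om.
    alpha = 0 keeps the potential inversion-symmetric; alpha != 0 breaks it."""
    dt = 2 * math.pi / Om / steps

    def acc(t, x, v):
        return -gamma * v - x - alpha * x * x - beta * x**3 + E * math.cos(Om * t)

    x = v = t = 0.0
    xs, ts = [], []
    for i in range((n_transient + n_measure) * steps):
        if i >= n_transient * steps:          # record only after transients decay
            xs.append(x); ts.append(t)
        k1x, k1v = v, acc(t, x, v)
        k2x, k2v = v + dt/2*k1v, acc(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = v + dt/2*k2v, acc(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    xs, ts = np.array(xs), np.array(ts)
    return {k: 2 * abs(np.mean(xs * np.exp(-1j * k * Om * ts))) for k in (1, 2, 3)}

sym = steady_harmonics(alpha=0.0)    # inversion symmetry: even harmonics forbidden
asym = steady_harmonics(alpha=0.5)   # broken symmetry: second harmonic appears
print(sym)
print(asym)
```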
Environment-Assisted Decoherence Suppression of Optical Non-Gaussian States
This paper demonstrates a method to reduce quantum information loss in optical systems by injecting squeezed light into the environment and using feedforward control, which helps preserve quantum states that would otherwise degrade due to photon loss. The technique uses only Gaussian operations, making it experimentally simpler than previous approaches requiring complex non-Gaussian operations.
Key Contributions
- Demonstration of Gaussian-only decoherence suppression scheme for optical quantum states
- Experimental validation showing improved fidelity and Wigner negativity preservation under loss conditions
- Programmable loop-based optical circuit implementation compatible with other loss-suppression techniques
View Full Abstract
Optical loss is a common bottleneck in photonic quantum information processing, undermining the quantum advantage over classical approaches. Although several countermeasures, such as quantum distillation and error correction, have been proposed, they typically require experimentally demanding non-Gaussian operations. Here, we demonstrate a Gaussian-only scheme that suppresses loss-induced decoherence for general, unknown optical quantum states. By injecting a squeezed vacuum state into the environment of the loss channel and performing feedforward based on environmental monitoring, the scheme effectively suppresses loss-induced noise. Our programmable loop-based optical circuit allows us to implement the scheme for several types of loss-sensitive non-Gaussian states under various loss conditions for up to five steps, and directly compare the results with the unsuppressed case. Our results show that the scheme consistently mitigates state degradation, preserving higher fidelity and Wigner negativity than without suppression. This approach can be applied to mitigating a broad class of errors in optical systems and extending quantum memory lifetimes. Moreover, it is compatible with other loss-suppression techniques and extendable to physical platforms beyond optics, offering a promising route toward reducing the overhead required for fault-tolerant quantum information processing.
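The Gaussian bookkeeping behind environment injection can be sketched in one line of covariance algebra: a pure-loss channel with transmissivity η maps the covariance matrix V to ηV + (1−η)V_env, so replacing the vacuum environment with one squeezed along the fragile quadrature reduces the noise added there. This sketch covers only the injection step, not the paper's feedforward based on environmental monitoring; vacuum variance is normalized to 1 and all numbers are illustrative.

```python
import numpy as np

def loss_channel(V, eta, V_env):
    """Covariance map of a beamsplitter-loss channel with a chosen
    environment state: V -> eta*V + (1 - eta)*V_env (vacuum = identity)."""
    return eta * V + (1 - eta) * V_env

eta = 0.7                            # 30% loss
V_in = np.diag([0.25, 4.0])          # input squeezed in x (6 dB)
vacuum = np.eye(2)                   # standard loss: vacuum environment
squeezed_env = np.diag([0.1, 10.0])  # injected environment, squeezed along x

V_vac = loss_channel(V_in, eta, vacuum)
V_sq = loss_channel(V_in, eta, squeezed_env)
print(V_vac[0, 0])  # x-variance after loss with a vacuum environment
print(V_sq[0, 0])   # smaller: less decoherence of the squeezed quadrature
```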
Steady-State Statistical Modeling of Digitally Stabilized Laser Frequency with Markov-State Feedback
This paper develops a mathematical framework using Markov chains to model how digitally controlled laser frequency stabilization systems behave, accounting for the discrete sampling and noise effects that occur in real digital implementations rather than idealized continuous models.
Key Contributions
- Development of discrete-time Markov-state framework for modeling digitally stabilized laser frequency locks
- Analytical solution for steady-state actuator and frequency distributions without time-domain simulations
- Characterization of sampling correlation effects and colored noise impact on system performance
View Full Abstract
Laser frequency stabilization is conventionally analyzed using continuous-time control theory, which accurately models analog feedback but is insufficient for digital implementations where quantization, sampling, and stochastic noise shape the dynamics. In modern digital laser systems, such as Photonic Integrated Circuit (PIC)-based lasers, finite discriminator and actuator resolution, sampling delays, and measurement noise introduce stochastic behavior that deterministic models do not capture. We present a discrete-time Markov-state framework that models the evolution of the quantized actuator in a digital laser frequency lock, with state-transition probabilities determined by the frequency discriminator response, noise statistics, and implemented digital control logic. The steady-state actuator and locked-laser frequency distributions are obtained directly from the unit-eigenvalue solution of the transition matrix, providing immediate access to key stability metrics without long time-domain simulations. For white frequency noise, we show that the Markov formulation is exact under decorrelated sampling and update schemes, while correlated discriminator sampling introduces a predictable inflation of actuator variance without shifting the operating point. In the presence of colored noise, long-range temporal correlations induce sampling-dependent deviations in both actuator mean and variance, defining the regime of validity of the memoryless Markov description. This framework provides a compact and physically transparent tool for analyzing and optimizing digitally stabilized lasers in integrated photonic systems.
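The central computation, extracting the steady state from the unit-eigenvalue eigenvector of the transition matrix, can be sketched on a toy quantized-actuator chain. The transition probabilities below are illustrative, not taken from the paper:

```python
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector of P at eigenvalue 1."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmax(np.real(w))])
    v = np.abs(v)
    return v / v.sum()

# Toy 5-state quantized actuator with the lock point at state 2: the
# discriminator pushes toward the lock point with prob 0.6, noise pushes
# away with 0.1 (illustrative numbers only).
N, LOCK = 5, 2
P = np.zeros((N, N))
for s in range(N):
    if s < LOCK:   p_up, p_down = 0.6, 0.1
    elif s > LOCK: p_up, p_down = 0.1, 0.6
    else:          p_up = p_down = 0.1    # residual noise at the lock point
    P[s, min(s + 1, N - 1)] += p_up
    P[s, max(s - 1, 0)] += p_down
    P[s, s] += 1 - p_up - p_down

pi = stationary(P)
print(np.round(pi, 3))   # probability mass concentrated at the lock point
```

From this stationary distribution, stability metrics like the actuator variance follow directly, with no time-domain simulation.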
Quantum target ranging with Hetero-Homodyne detection
This paper proposes a new quantum radar system called hetero-homodyne detection that uses entangled photons to measure target distances more accurately than classical methods. The key innovation is a practical receiver design that requires only simple local measurements instead of complex quantum memories, making quantum radar systems more feasible to build.
Key Contributions
- Development of hetero-homodyne receiver architecture that achieves quantum advantage using only local measurements
- Elimination of impractical quantum memory requirements in quantum radar systems making them experimentally feasible
View Full Abstract
Quantum target ranging, which estimates a target position using entangled photon pairs, is known to offer an error-probability advantage over classical ranging strategies. Yet, realizing this advantage in practice remains challenging, as an existing receiver design relies on collective measurements and requires an impractically large number of quantum memories and linear passive components. In this work, we propose the hetero-homodyne receiver, a practically implementable architecture that achieves quantum advantage in target ranging using only local measurements. The receiver requires only one heterodyne setup, a single homodyne setup, and a delay line, making the implementation scalable and experimentally feasible. Our results establish a realistic framework for demonstrating quantum advantage in target ranging and contribute toward practical quantum radar systems.
Enhanced Precision in Entangled Quantum Clocks with Phase Estimation Algorithm
This paper develops an improved quantum clock system that uses entangled quantum states and phase estimation algorithms to measure time differences with extremely high precision. The method achieves better accuracy than traditional approaches by using multiple quantum clocks working together in an entangled state.
Key Contributions
- Enhanced quantum phase estimation algorithm for proper-time measurement
- Multi-clock entangled states that surpass standard quantum limits
- Systematic framework for high-precision relativistic time comparison
View Full Abstract
We present an enhanced entangled quantum clock protocol that incorporates a quantum phase estimation algorithm to directly estimate proper-time differences as an unknown phase. By employing highly entangled multi-clock states, the achievable uncertainty scales inversely with the total number of quantum clocks, surpassing the standard projection-noise limit. This approach extends the original entangled quantum clock (EQC) framework and provides a systematic method for high-precision relativistic time comparison.
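The scaling claim can be stated in two lines: N independent clocks average down as 1/√N (the standard projection-noise limit), while an entangled N-clock register accumulates phase N times faster, giving 1/N. A minimal numeric comparison, assuming ideal noiseless clocks:

```python
import math

def phase_uncertainty(n_clocks, entangled):
    """Single-interrogation phase resolution: independent clocks average N
    one-clock signals (standard quantum limit, 1/sqrt(N)); a GHZ-entangled
    register accumulates phase N*phi (Heisenberg scaling, 1/N)."""
    return 1 / n_clocks if entangled else 1 / math.sqrt(n_clocks)

for n in (1, 10, 100):
    print(n, phase_uncertainty(n, False), phase_uncertainty(n, True))
```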
Deterministic linear-optical computing with symmetry-based qubits
This paper presents a new approach to quantum computing using linear optics, where photons encoded in spatial symmetry states can be processed through 'Grover four-port' devices to create deterministic quantum gates without requiring post-selection or ancilla measurements. The method enables compact implementation of multi-qubit gates including Fredkin and Toffoli gates using standard optical components.
Key Contributions
- Novel symmetry-based qubit encoding for linear optical quantum computing
- Deterministic implementation of controlled-NOT gates using Grover four-ports without post-selection
- Programmable optical devices capable of implementing multiple quantum gates including Fredkin and Toffoli gates
View Full Abstract
A particular type of linear optical multiport, the Grover four-port, has previously been shown to couple the spatial symmetry of a photon to its direction of travel. It is shown here that use of a nonstandard choice of qubit, based on symmetry, allows Grover four-ports to act as compact, low-resource deterministic linear optical controlled-NOT gates, with no post-selection or ancilla measurements required. This approach allows programmable devices, made from Grover multiports in combination with other standard optical components, that can implement multiple different one-, two-, and three-qubit gates, including the Fredkin and Toffoli gates.
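The symmetry-to-direction coupling is easy to verify directly. The Grover four-port coin is the reflection G = 2|s⟩⟨s| − I about the uniform port superposition; a spatially symmetric photon on one port pair is transmitted to the other pair, while the antisymmetric combination is reflected. A numpy check (the port labeling is an assumed convention for illustration):

```python
import numpy as np

# Grover four-port coin: G = 2|s><s| - I with |s> the uniform superposition
# over the four ports (entries: -1/2 on the diagonal, +1/2 elsewhere).
G = 0.5 * np.ones((4, 4)) - np.eye(4)
assert np.allclose(G @ G.T, np.eye(4))  # unitary: a reflection about |s>

# Symmetry-based qubit on input ports 0 and 1:
sym = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)    # spatially symmetric
anti = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)  # spatially antisymmetric

print(G @ sym)   # exits on ports 2 and 3: symmetric photon is transmitted
print(G @ anti)  # returns on ports 0 and 1 (sign flip): antisymmetric reflects
```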
Coherent feedback $H^\infty$ control of quantum linear systems
This paper develops a simplified method for controlling quantum optical systems using coherent feedback H-infinity control theory. The approach reduces the computational complexity from solving coupled algebraic Riccati equations to solving at most four simpler Lyapunov equations, making it more practical for designing robust controllers for quantum optical and optomechanical devices.
Key Contributions
- Simplified design methodology that reduces computational complexity from coupled Riccati equations to at most four Lyapunov equations
- Necessary and sufficient conditions for passive quantum systems using uncoupled pairs of Lyapunov equations
- Demonstration of effectiveness on quantum optical devices including optical cavities and parametric amplifiers
View Full Abstract
The purpose of this paper is to investigate the coherent feedback $H^\infty$ control problem for linear quantum systems. A key contribution is a simplified design methodology that guarantees closed-loop stability and a prescribed level of disturbance attenuation. It is shown that for general linear quantum systems, a physically realizable quantum controller can be obtained by solving at most four Lyapunov equations. In the passive case, a necessary and sufficient condition is provided in terms of two uncoupled pairs of Lyapunov equations. These results represent a significant simplification over the standard approach, which requires solving two coupled algebraic Riccati equations. The effectiveness of the proposed method is demonstrated through two typical quantum optical devices: an empty optical cavity and a degenerate parametric amplifier. These results provide a computationally efficient procedure for the robust and optimal control of quantum optical and optomechanical systems.
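The practical point is that Lyapunov equations are linear and therefore cheap compared with coupled Riccati equations: AX + XAᵀ + Q = 0 can be solved with a single linear system via vectorization. A self-contained sketch with illustrative plant matrices, not the paper's cavity example:

```python
import numpy as np

def solve_lyapunov(A, Q):
    """Solve A X + X A^T + Q = 0 by vectorization:
    (I kron A + A kron I) vec(X) = -vec(Q), column-stacking convention."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(M, -Q.flatten(order="F"))
    return x.reshape(n, n, order="F")

# Toy stable "plant": a damped rotating mode (illustrative numbers only)
A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
Q = np.eye(2)
X = solve_lyapunov(A, Q)
print(X)
print(np.allclose(A @ X + X @ A.T + Q, 0))  # residual check
```

For this particular A (damping −0.5, rotation ±1) the solution is X = I, which the residual check confirms.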
Quantum-Inspired Tensor Network Autoencoders for Anomaly Detection: A MERA-Based Approach
This paper develops a quantum-inspired autoencoder architecture based on MERA (Multiscale Entanglement Renormalization Ansatz) tensor networks to detect anomalous particle jets in high-energy physics experiments. The authors show that hierarchical compression preserving local correlations provides a useful approach for identifying unusual particle collision signatures.
Key Contributions
- First application of MERA-inspired autoencoder architecture to particle physics anomaly detection
- Demonstration that locality-aware hierarchical compression matches the natural structure of particle jets
- Empirical validation that MERA disentangling layers provide benefits under strong compression conditions
View Full Abstract
We investigate whether a multiscale tensor-network architecture can provide a useful inductive bias for reconstruction-based anomaly detection in collider jets. Jets are produced by a branching cascade, so their internal structure is naturally organised across angular and momentum scales. This motivates an autoencoder that compresses information hierarchically and can reorganise short-range correlations before coarse-graining. Guided by this picture, we formulate a MERA-inspired autoencoder acting directly on ordered jet constituents. To the best of our knowledge, a MERA-inspired autoencoder has not previously been proposed, and this architecture has not been explored in collider anomaly detection. We compare this architecture to a dense autoencoder, the corresponding tree-tensor-network limit, and standard classical baselines within a common background-only reconstruction framework. The paper is organised around two main questions: whether locality-aware hierarchical compression is genuinely supported by the data, and whether the disentangling layers of MERA contribute beyond a simpler tree hierarchy. To address these questions, we combine benchmark comparisons with a training-free local-compressibility diagnostic and a direct identity-disentangler ablation. The resulting picture is that the locality-preserving multiscale structure is well matched to jet data, and that the MERA disentanglers become beneficial precisely when the compression bottleneck is strongest. Overall, the study supports locality-aware hierarchical compression as a useful inductive bias for jet anomaly detection.
High-Dimensional Quantum Photonics: Roadmap
This paper provides a comprehensive roadmap for high-dimensional quantum photonics, which uses multiple properties of light (spatial, temporal, spectral) to encode quantum information in multi-level states rather than simple qubits. The authors survey current experimental techniques and theoretical tools across different photonic approaches, identify shared challenges, and outline future research directions for integrating these technologies into quantum applications.
Key Contributions
- Comprehensive survey of high-dimensional quantum photonic approaches across different degrees of freedom
- Identification of shared challenges and opportunities across photonic encoding methods
- Roadmap for integrating high-dimensional photonic systems into quantum technology platforms
View Full Abstract
The field of high-dimensional quantum photonics involves the use of multimode photonic degrees-of-freedom such as the spatial, temporal, or spectral structure of light to encode multi-level quantum states. Recent years have seen rapid progress in the development of methods to generate, manipulate, and distribute such quantum states of light and their use in a range of quantum technology applications that offer practical advantages over conventional qubit-based approaches. High-dimensional quantum states of light encoded in photonic time-bins, frequency-bins, transverse-spatial modes, waveguide paths, and temporal modes have enabled noise-robust fundamental tests of quantum mechanics, error-resilient and high-capacity quantum communication protocols, as well as efficient approaches for quantum information processing, to name just a few examples. However, research in this field has progressed fairly independently, with little exchange across different photonic degrees-of-freedom or between experiment and theory, and no comprehensive comparison between degrees-of-freedom. This roadmap aims to bridge this gap by surveying progress in each area and identifying shared challenges and opportunities that cut across two or more photonic degrees-of-freedom. We review early work and state-of-the-art experimental techniques under development for high-dimensional quantum states encoded in single and entangled photons, as well as theoretical tools for their measurement and certification. We outline the main outstanding challenges for theory and each experimental degree-of-freedom, identifying promising future directions of research that may enable these to be overcome. We end by discussing interconnections and shared challenges centered around their distribution, measurement, and manipulation, with a view towards their integration into next-generation quantum technology platforms and applications.
Soft-Quantum Algorithms
This paper proposes a faster training method for quantum machine learning circuits by directly optimizing unitary matrices with regularization constraints, then converting them back to gate-based circuits. The approach achieves significantly faster training times compared to traditional variational quantum circuit optimization.
Key Contributions
- Novel soft-unitary training method that maintains unitarity through regularization rather than gate constraints
- Circuit alignment technique to recover gate-based architectures from trained matrix representations
- Demonstration of order-of-magnitude speedup in quantum circuit training (4 minutes vs 2+ hours)
View Full Abstract
Quantum operations on pure states can be fully represented by unitary matrices. Variational quantum circuits, also known as quantum neural networks, embed data and trainable parameters into gate-based operations and optimize the parameters via gradient descent. The high cost of training and low fidelity of current quantum devices, however, restricts much of quantum machine learning to classical simulation. For few-qubit problems with large datasets, training the matrix elements directly, as is done with weight matrices in classical neural networks, can be faster than decomposing data and parameters into gates. We propose a method that trains matrices directly while maintaining unitarity through a single regularization term added to the loss function. A second training step, circuit alignment, then recovers a gate-based architecture from the resulting soft-unitary. On a five-qubit supervised classification task with 1000 datapoints, this two-step process produces a trained variational circuit in under four minutes, compared to over two hours for direct circuit training, while achieving lower binary cross-entropy loss. In a second experiment, soft-unitaries are embedded in a hybrid quantum-classical network for a reinforcement learning cartpole task, where the hybrid agent outperforms a purely classical baseline of comparable size.
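The soft-unitarity idea can be sketched in a few lines: add a penalty R(U) = ‖U†U − I‖²_F to the loss, so that gradient descent keeps the trained matrix near the unitary manifold. The sketch below shows the penalty alone pulling a random real matrix onto the orthogonal group; the learning rate and step count are illustrative, not the paper's settings:

```python
import numpy as np

def unitarity_penalty(U):
    """R(U) = ||U^T U - I||_F^2 (real case; the complex case uses U^dagger)."""
    D = U.T @ U - np.eye(U.shape[0])
    return float(np.sum(D * D))

def penalty_grad(U):
    """Gradient of the penalty: dR/dU = 4 U (U^T U - I)."""
    return 4 * U @ (U.T @ U - np.eye(U.shape[0]))

rng = np.random.default_rng(0)
U = rng.normal(scale=0.3, size=(4, 4))   # generic non-unitary starting matrix

print(unitarity_penalty(U))              # large: far from unitary
for _ in range(1000):
    U -= 0.02 * penalty_grad(U)          # descend on the penalty alone
print(unitarity_penalty(U))              # ~ 0: pulled onto the orthogonal group
```

In the paper this term is added to the task loss during training, and a second "circuit alignment" step then recovers a gate decomposition from the trained soft-unitary.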
One-to-one correspondence between Hierarchical Equations of Motion and Pseudomodes for Open Quantum System Dynamics
This paper proves a mathematical equivalence between two major methods for simulating how quantum systems interact with their environment: the Hierarchical Equations of Motion (HEOM) and the pseudomode method. The authors provide explicit formulas to convert between these approaches and show how this unification leads to computational improvements.
Key Contributions
- Proved one-to-one correspondence between HEOM and pseudomode methods for open quantum system dynamics
- Provided constructive proofs with explicit transformation formulas between the two approaches
- Derived elegant connections to stochastic pure state methods (HOPS and nuHOPS)
- Opened pathways for computational optimization of non-Markovian quantum dynamics simulations
View Full Abstract
We unite two of the most widely used approaches for strongly damped, non-Markovian open quantum dynamics, the Hierarchical Equations of Motion (HEOM) and the pseudomode method, by proving two statements: First, every physical bath correlation function (BCF) that can be written as a sum of $N$ exponential terms can be obtained from a physical model with $N$ interacting pseudomodes which are damped in Lindblad form. Second, for every such BCF there exists a non-unitary, linear transformation which mirrors the evolution of the system-pseudomode state onto the HEOM hierarchy, and vice versa. Our proofs are constructive and we give explicit expressions for the mirror transformation as well as for the pseudomode Lindbladian corresponding to a given exponential BCF. This approach also gives insight and provides elegant derivations of the corresponding Hierarchy of stochastic Pure States (HOPS) method and its nearly-unitary version, nuHOPS. Our result opens several avenues for further optimization of non-Markovian open quantum system dynamics methods.
Quantum Fragmentation
This paper introduces a systematic method for constructing quantum fragmented Hamiltonians where the mathematical structure can only be understood using entangled quantum states, unlike classical fragmented models that work with simple product states. The authors develop a protocol that transforms classically fragmented models into quantum fragmented ones and demonstrate how to identify and count the resulting quantum sectors.
Key Contributions
- Systematic protocol for constructing quantum Hilbert-space-fragmented Hamiltonians using Rokhsar-Kivelson type construction
- Method for labeling and counting Krylov sectors in quantum fragmented models and experimental verification protocols
- Extension of quantum fragmentation construction to higher dimensions with explicit 2D examples
View Full Abstract
We introduce a systematic protocol for constructing quantum Hilbert-space-fragmented Hamiltonians, whose Krylov-sector structure, unlike in classically fragmented models, can be fully resolved only in an entangled basis. The protocol takes as input a classically fragmented model and uses a Rokhsar-Kivelson type construction to promote it to a quantum fragmented model. Notably, the procedure also works with non-fragmented inputs (such as Ising models). We explain how the Krylov sectors of the resulting quantum fragmented model may be labeled and counted in one dimension, and outline experimentally accessible verification of quantum fragmentation, assuming the ability to prepare specific initial states and perform tomography on reduced density matrices. We further analyze the entanglement structure of the entangled basis underlying quantum fragmentation, which sharply distinguishes it from both classical fragmentation and the trivial "fragmentation" of generic Hamiltonians in their eigenbasis. We also extend the construction to higher dimensions, with an explicit proof of principle example in two dimensions. We expect these results to open a new route to the systematic exploration of quantum fragmentation.
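Counting Krylov sectors of a classically fragmented input model reduces to a reachability computation: states connected by allowed local moves belong to one sector. A toy example; the move 110 ↔ 011 (which conserves particle number but freezes isolated particles) is an illustrative classical input, not the paper's quantum construction:

```python
from itertools import product

def krylov_sectors(L, moves=(("110", "011"),)):
    """Count Krylov sectors of a toy classically fragmented chain:
    states are bitstrings, dynamics is local pattern substitution,
    sectors are connected components under the allowed moves."""
    states = ["".join(bits) for bits in product("01", repeat=L)]
    parent = {s: s for s in states}

    def find(s):                      # union-find with path compression
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    for s in states:
        for a, b in moves:            # union states related by one move
            for i in range(L - len(a) + 1):
                if s[i:i + len(a)] == a:
                    t = s[:i] + b + s[i + len(a):]
                    parent[find(s)] = find(t)
    return len({find(s) for s in states})

L = 4
print(krylov_sectors(L))  # 12 sectors for L = 4 ...
print(L + 1)              # ... versus only 5 particle-number symmetry sectors
```

The excess of sectors over symmetry sectors (12 versus 5 here) is the hallmark of fragmentation; the paper's protocol promotes such a classical input to a model whose sectors can only be resolved in an entangled basis.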
From generating functions to the geometric Binder cumulant
This paper develops mathematical tools using generating functions to study quantum phase transitions and material properties. The authors extend geometric phase formalism to handle degeneracy points and introduce 'geometric Binder cumulants' that can identify metal-insulator transitions and localization phenomena in quantum systems.
Key Contributions
- Extension of geometric phase formalism to quasiadiabatic cycles with degeneracy points using generalized Bargmann invariants
- Introduction of geometric Binder cumulants as tools for identifying quantum phase transitions and localization
View Full Abstract
We present an overview of the role of generating functions in quantum mechanical contexts, mainly in the modern theory of polarization and in the study of quantum phase transitions. Generating functions enable the derivation of moments and cumulants, quantities which characterize the fluctuations of an underlying probability distribution. In all of the cases we review, the fluctuations are those of a quantum system. We show that the original formalism for geometric phases, in which a quantum system is taken around an adiabatic cycle, can be extended to the case when degeneracy points are encountered along the cycle (quasiadiabatic cycles). The essential tool for this extension is a generalized Bargmann invariant which plays the role of a generating function. From the cumulants generated this way one can form ratios according to the Binder cumulant scheme in statistical mechanics. Such geometric Binder cumulants are sensitive to gap closure; as such, they are useful in identifying metal-insulator transitions, localization, and quantum phase transitions. We present example calculations on simple model systems, whose localization properties are well known, to validate the approach. We also complement our geometric Binder cumulant calculations with results for the fidelity susceptibility, a quantity directly related to the quantum geometry of the parameter space.
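The statistical-mechanics ancestor of the geometric Binder cumulant is the moment ratio U = 1 − ⟨m⁴⟩/(3⟨m²⟩²), which distinguishes a single-peaked (disordered) distribution from a two-peaked (ordered) one. A sketch of that statistical version, with the caveat that the paper's geometric variant builds the cumulants from generalized Bargmann invariants rather than from samples:

```python
import numpy as np

def binder_cumulant(m):
    """U = 1 - <m^4> / (3 <m^2>^2) for samples of an order parameter m."""
    m = np.asarray(m, dtype=float)
    return float(1 - np.mean(m**4) / (3 * np.mean(m**2) ** 2))

rng = np.random.default_rng(7)
disordered = rng.normal(size=500_000)             # Gaussian fluctuations: U -> 0
ordered = rng.choice([-1.0, 1.0], size=500_000)   # two-peak distribution: U -> 2/3

print(binder_cumulant(disordered))  # ≈ 0
print(binder_cumulant(ordered))     # ≈ 2/3
```

The crossing of such ratios between the two limits is what makes them sharp detectors of transitions, the role the geometric version plays for gap closure.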
Light-Induced Quantum Self-Trapping of Vibrational Excitons in an Optical Cavity
This paper investigates how optical cavities can control energy flow in quantum systems by promoting energy localization rather than the typical delocalization. The researchers show that coupling vibrational excitons to cavity photons can create quantum self-trapping states where energy becomes localized, offering a new method to control quantum energy transport.
Key Contributions
- Demonstration of cavity-induced quantum self-trapping of vibrational excitons using generalized Tavis-Cummings model
- Identification of critical coupling regimes that separate cavity-enhanced self-trapping from cavity-assisted energy transfer
- Discovery of stabilized light-induced quantum self-trapping at specific coupling strengths
View Full Abstract
In an optical cavity, strong light-matter coupling between excitons and photons has been widely reported as a way to enhance energy delocalization through spatially extended polaritonic states. In contrast, leveraging cavity-mediated light-matter effects to promote the reciprocal phenomenon, namely energy localization, remains largely underexplored. In the present work, we address this question by focusing on a special form of energy localization arising from nonlinear matter interactions: Quantum Self-Trapping (QST). We employ a generalized Tavis-Cummings model to investigate the transport of vibrational excitons, i.e., vibrons, between two anharmonic vibrational modes and examine their interplay with cavity photons. In the absence of a cavity, the emergence of true and complete QST, i.e., an infinite-lifetime localization, is not possible due to the symmetry of the system. The energy transfer between the two modes still occurs, slowed down by the many-body interactions. Coupling the system to a single-mode cavity strongly alters this behavior, with two emerging regimes. First, at weak light-matter coupling, destructive interference between newly opened transition pathways suppresses energy exchange, leading to cavity-enhanced self-trapping. As the coupling strength increases, these interference effects evolve, leading to cavity-assisted energy transfer, where we observe an acceleration of the vibrational energy flow. Most notably, we identify critical coupling strengths separating the two regimes at which the dynamics almost totally freeze, suggesting the emergence of a "stabilized" light-induced QST of many-vibron bound states. These results suggest that optical cavities can not only enhance transport but could also stabilize energy localization phenomena, providing a new route to control energy flow in quantum systems.
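The cavity-free starting point, interaction-slowed (but never complete) self-trapping in a symmetric anharmonic dimer, can be previewed with a two-site Bose-Hubbard toy model. This is an illustrative sketch only; the paper's generalized Tavis-Cummings model with a cavity mode is not reproduced here, and the parameters are arbitrary.

```python
import numpy as np

def dimer_imbalance(N=6, J=1.0, U=0.0, times=np.linspace(0, 20, 400)):
    """Population imbalance (n1 - n2)/N for N bosons on a symmetric
    two-site Bose-Hubbard dimer, starting with all bosons on site 1."""
    dim = N + 1                       # basis |n, N-n>: n bosons on site 1
    n = np.arange(dim)
    # On-site anharmonicity (U/2) n(n-1) on each site
    H = np.diag(0.5 * U * (n * (n - 1) + (N - n) * (N - n - 1)))
    # Hopping -J (a1^dag a2 + h.c.): <n+1| a1^dag a2 |n> = sqrt((n+1)(N-n))
    hop = -J * np.sqrt((n[:-1] + 1) * (N - n[:-1]))
    H += np.diag(hop, 1) + np.diag(hop, -1)
    w, V = np.linalg.eigh(H)
    psi0 = np.zeros(dim)
    psi0[N] = 1.0                     # all N bosons on site 1
    c = V.conj().T @ psi0
    imb = []
    for t in times:
        psi = V @ (np.exp(-1j * w * t) * c)
        imb.append(np.sum((2 * n - N) / N * np.abs(psi) ** 2))
    return np.array(imb)

free = dimer_imbalance(U=0.0)   # no interactions: full transfer, imbalance hits -1
inter = dimer_imbalance(U=4.0)  # strong anharmonicity: transfer dramatically slowed
```

With $U = 0$ the imbalance oscillates as $\cos(2Jt)$ and fully reverses sign; with strong $U$ the initial state stays almost frozen over the same window, which is the (finite-time) self-trapping the cavity is then used to stabilize.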
Shot-Based Quantum Encoding: A Data-Loading Paradigm for Quantum Neural Networks
This paper introduces Shot-Based Quantum Encoding (SBQE), a new method for loading classical data into quantum neural networks that distributes quantum measurement shots across multiple initial states rather than using encoding gates. The approach achieves better performance than existing encoding methods on image classification tasks while avoiding the circuit depth problems of current quantum machine learning schemes.
Key Contributions
- Novel shot-based data encoding paradigm that avoids encoding gates and reduces circuit depth requirements
- Demonstration of quantum neural network performance matching classical networks on benchmark datasets
View Full Abstract
Efficient data loading remains a bottleneck for near-term quantum machine learning. Existing schemes (angle, amplitude, and basis encoding) either underuse the exponential Hilbert-space capacity or require circuit depths that exceed the coherence budgets of noisy intermediate-scale quantum hardware. We introduce Shot-Based Quantum Encoding (SBQE), a data embedding strategy that distributes the hardware's native resource, shots, according to a data-dependent classical distribution over multiple initial quantum states. By treating the shot counts as a learnable degree of freedom, SBQE produces a mixed-state representation whose expectation values are linear in the classical probabilities and can therefore be composed with non-linear activation functions. We show that SBQE is structurally equivalent to a multilayer perceptron whose weights are realised by quantum circuits, and we describe a hardware-compatible implementation protocol. Benchmarks on Fashion MNIST and Semeion handwritten digits, with ten independent initialisations per model, show that SBQE achieves 89.1% +/- 0.9% test accuracy on Semeion (reducing error by 5.3% relative to amplitude encoding and matching a width-matched classical network) and 80.95% +/- 0.10% on Fashion MNIST (exceeding amplitude encoding by +2.0% and a linear multilayer perceptron by +1.3%), all without any data-encoding gates.
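The algebraic fact the abstract relies on, that expectation values of a shot-weighted mixture are linear in the classical probabilities, can be checked directly: for $\rho = \sum_k p_k |\psi_k\rangle\langle\psi_k|$, $\mathrm{Tr}(\rho M) = \sum_k p_k \langle\psi_k|M|\psi_k\rangle$. The sketch below uses random states and an arbitrary observable; it is not the paper's SBQE protocol, just the linearity it exploits.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 4, 8                       # 4 initial states in an 8-dimensional space

# Random orthonormal initial states (columns of Q) and a Hermitian observable M
Q, _ = np.linalg.qr(rng.normal(size=(d, K)) + 1j * rng.normal(size=(d, K)))
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M = (A + A.conj().T) / 2

# Data-dependent probabilities playing the role of normalized shot counts
p = rng.dirichlet(np.ones(K))
rho = sum(p[k] * np.outer(Q[:, k], Q[:, k].conj()) for k in range(K))

mixture_ev = np.real(np.trace(rho @ M))
per_state_ev = np.real([Q[:, k].conj() @ M @ Q[:, k] for k in range(K)])
linear_ev = float(p @ per_state_ev)   # linear in the probabilities p
```

On hardware the mixture would be realised operationally by allocating shots across the initial states in proportion to `p`, rather than by preparing `rho` explicitly.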
QAFE$^2$: Quantum Accelerated Multiscale Finite Element Analysis
This paper presents QAFE² (Quantum Accelerated Multiscale Finite Element Analysis), a quantum-classical computational framework that uses quantum algorithms to solve multiple microscopic problems simultaneously in materials engineering simulations. The approach leverages quantum superposition and entanglement to achieve exponential speedups over classical methods when analyzing how materials behave across different length scales.
Key Contributions
- Novel quantum algorithm achieving polylogarithmic complexity for single RVE problems with exponential speedup over classical solvers
- Quantum superposition-based method to solve entire ensembles of RVE problems simultaneously across all macroscopic quadrature points
View Full Abstract
The computational cost of concurrent multiscale finite element methods is dominated by the repeated solution of microscopic representative volume element (RVE) problems at macroscopic quadrature points. In this work, we introduce a quantum-classical framework for multiscale finite element analysis (QAFE$^2$) that leverages quantum parallelism to fundamentally alter the scaling of RVE-based homogenisation. At the single-RVE level, the proposed quantum solver attains polylogarithmic complexity with respect to the microscopic discretisation size, yielding an exponential asymptotic speedup over the best available classical solvers. More importantly, QAFE$^2$ exploits quantum superposition and entanglement to evaluate, in a single quantum execution, the entire ensemble of RVE problems associated with all macroscopic quadrature points. This capability is a form of intrinsic quantum concurrency with no classical analogue. Numerical experiments on one- and two-dimensional model problems with known analytical solutions confirm the accuracy of the proposed formulation and verify the theoretical computational scaling and parallel performance.
Necessary and sufficient conditions for the N-representability of functionals of the one-electron reduced density matrix
This paper establishes mathematical conditions that quantum mechanical functionals must satisfy to guarantee they provide upper bounds on the true energy of electron systems. The authors prove that many existing approximation methods, including Hartree-Fock theory, violate these fundamental constraints and can therefore underestimate energies in certain cases.
Key Contributions
- Derivation of necessary and sufficient conditions for N-representability of one-electron reduced density matrix functionals
- Mathematical proof that existing functionals including Hartree-Fock violate these fundamental constraints
- Framework to guide development of improved quantum mechanical approximation methods
View Full Abstract
We establish necessary and sufficient conditions for the N-representability of the universal one-electron reduced density matrix functional. Functionals satisfying these conditions are guaranteed to yield variational upper bounds on the true energy in one-electron reduced density matrix functional theory, regardless of the strength of the interparticle repulsion. Conversely, any functional violating these conditions will necessarily underestimate the true energy for certain systems. These exact constraints impose a stringent restriction on density matrix functional approximations, as many existing functionals, including the Hartree-Fock functional, appear to violate them. This mathematical formalism can therefore guide the development of new approximate functionals and numerical algorithms.
Nonvariational quantum optimisation approaches to pangenome-guided sequence assembly
This paper develops quantum optimization algorithms to solve genome assembly problems, where DNA sequences must be reconstructed from short reads by finding optimal paths through population-level genetic variation graphs. The authors use quantum approximate optimization algorithms (QAOA) on both current problem formulations and a new more efficient encoding to tackle this computationally hard biological problem.
Key Contributions
- Development of a new HUBO (higher-order binary optimization) formulation that reduces variables from O(N²) to O(N log N) for genome assembly problems
- Implementation of Iterative-QAOA framework with custom circuit compilation achieving up to 67% reduction in gate overhead
- Demonstration of quantum optimization on real biological problems using IBM quantum hardware with noise mitigation strategies
View Full Abstract
Assembling genomes from short-read sequencing data remains difficult in repetitive regions, where reference bias and combinatorial complexity limit existing methods. Pangenome-guided sequence assembly (PGSA) mitigates reference bias by reconstructing an individual genome as a walk through a population-level graph. The associated problem, identifying a walk whose node visits match read-derived copy numbers, is NP-hard and already challenges classical solvers at a moderate scale. We develop near-term quantum optimisation approaches for this computational bottleneck. We consider two problem encodings: an established quadratic unconstrained binary optimisation and a new higher-order binary optimisation (HUBO) formulation. The latter reduces the number of variables from $O(N^2)$ to $O(N\log N)$ and places moderate-sized instances within the qubit budget of current devices. We solve both using the Iterative-QAOA framework, which combines a fixed linear-ramp QAOA schedule with iterative warm-start bias updates, avoiding the overhead of full variational parameter optimisation. A custom circuit compilation strategy reduces hardware gate overhead by up to 67% compared with standard tools. In noiseless simulations of QUBO problems, Iterative-QAOA reliably identifies optimal assemblies from as few as $10^{-17}\%$ of all candidate solutions, and IBM quantum hardware closely reproduces relevant results with sufficient sampling via CVaR-style post-selection. For HUBO, the variable reduction comes at the cost of deeper compiled circuits and greater noise sensitivity: an expected qubit-depth trade-off. Our findings establish pangenome assembly as a concrete, biologically motivated problem class at the scale where quantum optimisation may first provide practical value.
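For readers unfamiliar with the QUBO formulation the abstract refers to, the task is minimizing $x^T Q x$ over bitstrings $x \in \{0,1\}^n$, a space of $2^n$ candidates that quickly dwarfs any enumeration (the paper's figure of $10^{-17}\%$ optimal candidates refers to that space). A brute-force sketch on a tiny hand-made instance, not a pangenome one:

```python
import itertools
import numpy as np

def solve_qubo_brute_force(Q):
    """Minimize x^T Q x over x in {0,1}^n by enumeration (tiny n only)."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy instance: diagonal terms reward setting variables, off-diagonal
# penalties discourage adjacent pairs -- the optimum is x = (1, 0, 1, 0).
Q = np.array([[-2.0, 3.0, 0.0, 0.0],
              [3.0, -1.0, 3.0, 0.0],
              [0.0, 3.0, -2.0, 3.0],
              [0.0, 0.0, 3.0, -1.0]])
x_opt, e_opt = solve_qubo_brute_force(Q)
```

QAOA-style approaches sample from this same cost landscape on quantum hardware instead of enumerating it; the HUBO variant allows cost terms of degree higher than two, at the price of deeper circuits.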
Pixel-Translation-Equivariant Quantum Convolutional Neural Networks via Fourier Multiplexers
This paper develops quantum convolutional neural networks that properly handle translation symmetry by using quantum Fourier transforms to create layers that are equivariant under pixel shifts. The authors prove that all translation-equivariant quantum layers can be constructed as Fourier multiplexers and show these networks avoid certain training problems.
Key Contributions
- Constructive characterization of all pixel-cyclic-shift equivariant unitaries using quantum Fourier transforms
- Design of deep quantum convolutional neural networks with provable trainability properties that avoid barren plateaus in certain scaling regimes
View Full Abstract
Convolutional neural networks owe much of their success to hard-coding translation equivariance. Quantum convolutional neural networks (QCNNs) have been proposed as near-term quantum analogues, but the relevant notion of translation depends on the data encoding. For address/amplitude encodings such as FRQI, a pixel shift acts as modular addition on an index register, whereas many MERA-inspired QCNNs are equivariant only under cyclic permutations of physical qubits. We formalize this mismatch and construct QCNN layers that commute exactly with the pixel cyclic shift (PCS) symmetry induced by the encoding. Our main technical result is a constructive characterization of all PCS-equivariant unitaries: conjugation by the quantum Fourier transform (QFT) diagonalizes translations, so any PCS-equivariant layer is a Fourier-mode multiplexer followed by an inverse QFT (IQFT). Building on this characterization, we introduce a deep PCS-QCNN with measurement-induced pooling, deferred conditioning, and inter-layer QFT cancellation. We also analyze trainability at random initialization and prove a lower bound on the expected squared gradient norm that remains constant in a depth-scaling regime, ruling out a depth-induced barren plateau in that sense.
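The structural fact at the heart of the characterization, that conjugation by the Fourier transform diagonalizes the cyclic shift on the index register, can be verified numerically. Below, `S` is the pixel cyclic shift $|j\rangle \mapsto |j{+}1 \bmod d\rangle$ and `F` the DFT matrix (the QFT on the index register); `F.conj().T @ S @ F` comes out diagonal with phases $e^{-2\pi i k/d}$. Illustrative check only, not the paper's layer construction.

```python
import numpy as np

d = 8
# Cyclic pixel shift on the index register: |j> -> |j+1 mod d>
S = np.zeros((d, d))
S[(np.arange(d) + 1) % d, np.arange(d)] = 1.0

# Discrete Fourier transform (the QFT matrix on the index register)
j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
F = np.exp(2j * np.pi * j * k / d) / np.sqrt(d)

D = F.conj().T @ S @ F                 # diagonal in the Fourier basis
off_diag = D - np.diag(np.diag(D))
```

Any operator commuting with `S` is then simultaneously block-diagonal in this basis, which is why every equivariant layer factors as a Fourier-mode multiplexer sandwiched between a QFT and an inverse QFT.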
Simulating Thermal Properties of Bose-Hubbard Models on a Quantum Computer
This paper develops the first rigorous mathematical framework for preparing thermal quantum states (Gibbs states) of infinite-dimensional bosonic systems on quantum computers, specifically demonstrating efficient algorithms for Bose-Hubbard models that could enable quantum simulation of many-body thermal properties.
Key Contributions
- First general rigorous Gibbs sampling framework for bosonic many-body systems with proven spectral gap guarantees
- Quantum algorithm for efficient preparation of thermal states in infinite-dimensional systems on qubit hardware
- Mathematical proof of exponential convergence to thermal states for Bose-Hubbard models using finite-rank reduction techniques
View Full Abstract
While recent advances have established efficient quantum algorithms for preparing Gibbs states of finite-dimensional systems, comparable complexity results for bosonic and other infinite-dimensional models remain unexplored. We introduce the first general rigorous Gibbs sampling framework for bosonic many-body systems, showing that physically relevant bosonic models admit gapped dissipative generators, enabling efficient preparation of thermal states. Although our results hold for broad classes of models, we illustrate them using Bose-Hubbard Hamiltonians, both within and beyond the mean-field regime. In both cases, we show that the associated dissipative generators maintain a positive spectral gap, thereby implying exponential convergence to the thermal state. Our argument in the multi-mode case is based on a finite-rank reduction of the dissipative dynamics, which allows us to control the generator via compact perturbations and deduce the discreteness of the spectrum and the stability of the gap. We apply our results to provide efficient preparation of the corresponding Gibbs state on qubit hardware, and by that a quantum algorithm to compute thermal properties of the associated model. This provides the first mathematically controlled route to Gibbs sampling in infinite-dimensional systems, with implications for quantum simulation, thermalization, and many-body complexity, where quantum advantages may arise.
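The target quantities, thermal expectation values of a Gibbs state $e^{-\beta H}/Z$, can be previewed classically for a single Bose-Hubbard site with a Fock-space cutoff. This finite-rank truncation of the infinite-dimensional mode echoes the paper's finite-rank reduction in spirit only; the dissipative Gibbs sampler itself is not sketched here, and the Hamiltonian parameters are arbitrary.

```python
import numpy as np

def thermal_occupation(beta, U=1.0, mu=0.5, cutoff=40):
    """<n> in the Gibbs state of one Bose-Hubbard site,
    H = (U/2) n(n-1) - mu*n, truncated to `cutoff` Fock states."""
    n = np.arange(cutoff)
    E = 0.5 * U * n * (n - 1) - mu * n
    w = np.exp(-beta * (E - E.min()))   # shift energies for numerical stability
    return float((n * w).sum() / w.sum())

occ_cold = thermal_occupation(beta=10.0)   # low T: pinned near the ground state n=1
occ_hot = thermal_occupation(beta=0.5)     # high T: occupation spreads upward
```

Because the energies grow quadratically in $n$, the truncated sums converge rapidly in the cutoff, which is the classical analogue of why a hard cutoff can be controlled in the quantum algorithm.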
Late Breaking Results: Hardware-Efficient Quantum Reservoir Computing via Quantized Readout
This paper presents a quantum reservoir computing framework for short-term electrical load forecasting that uses a fixed quantum circuit without requiring quantum backpropagation. The researchers demonstrate that quantizing the classical readout layer to 8-bit or 6-bit precision maintains forecasting accuracy while significantly reducing memory requirements, making the approach more practical for deployment on resource-constrained devices.
Key Contributions
- Hardware-efficient quantum reservoir computing framework with fixed untrained quantum circuits
- Demonstration that post-training quantization of classical readout maintains accuracy while reducing memory by up to 81%
View Full Abstract
Due to rising electricity demand, accurate short-term load forecasting is increasingly important for grid stability and efficient energy management, particularly in resource-constrained edge settings. We present a hardware-efficient Quantum Reservoir Computing (QRC) framework based on a fixed, untrained quantum circuit with Chebyshev feature encoding, brickwork entanglement, and single- and two-qubit Pauli measurements, avoiding quantum backpropagation entirely. Using the Tetouan City Power Consumption dataset, we examine the effect of post-training fixed-point quantization on the classical readout layer, with the reservoir architecture selected through a genetic search over 18 candidate configurations. Under finite-shot evaluation, 8-bit and 6-bit quantization maintain forecasting accuracy within 1% of the FP32 baseline while reducing readout memory by 75% and 81%, respectively. These results suggest that quantized readout can improve the hardware efficiency and deployment practicality of QRC for memory-constrained energy forecasting.
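The quantization step is purely classical and independent of the quantum reservoir. A minimal sketch of post-training symmetric fixed-point quantization of a linear readout, with synthetic weights and features (the dataset and reservoir are not reproduced); the 75% memory figure is just the 8-bit vs 32-bit storage ratio from the abstract.

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Post-training symmetric fixed-point quantization of readout weights."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale).astype(np.int32) * scale

rng = np.random.default_rng(0)
w_fp32 = rng.normal(size=(64, 1)).astype(np.float32)   # trained readout weights
features = rng.normal(size=(100, 64))                  # reservoir output features

y_fp32 = features @ w_fp32
y_int8 = features @ quantize_symmetric(w_fp32, 8)

rel_err = np.linalg.norm(y_int8 - y_fp32) / np.linalg.norm(y_fp32)
mem_saving = 1 - 8 / 32        # 8-bit vs FP32 storage: 75% reduction
```

Only the integer codes and one scale factor need to be stored at deployment time, which is where the memory saving comes from.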
A multigraph approach to confusability in quantum channels
This paper develops a new mathematical framework called quantum confusability multigraphs to analyze quantum channels by incorporating output information into graphical structures. The authors extend quantum graph theory to multigraphs and provide conditions for characterizing when such structures arise from quantum channels.
Key Contributions
- Introduction of quantum confusability multigraphs that incorporate output information into channel analysis
- Development of quantum multigraph theory from quantum relations perspective
- Characterization theorem for quantum multigraphs arising from confusability structures
View Full Abstract
We introduce a new approach to confusability in a quantum channel, namely the quantum confusability multigraph, which incorporates the output information into the graphical structure. By "counting" the edges between two vertices of this confusability multigraph, one recovers the traditional "single-edged" confusability graph of the channel. With this physical motivation, we therefore develop a theory of quantum multigraphs from Weaver's quantum relations point of view and explore its quantum graph theoretic properties. Finally, we provide a necessary and sufficient condition characterizing those quantum multigraphs that arise as quantum confusability multigraphs.
Exploring bosonic bound states with parallel reaction coordinates
This paper studies quantum bound states that can survive in systems strongly coupled to reservoirs with energy band gaps, using an exactly solvable bosonic model and a weak-coupling approach with parallel reaction coordinates to represent different energy intervals of the reservoir.
Key Contributions
- Development of parallel reaction coordinate method for analyzing bosonic bound states in open quantum systems
- Demonstration that bound state lifetimes can be extended by increasing system-reservoir coupling strength
View Full Abstract
Bound states are dissipation-resilient states that may emerge when quantum systems are strongly coupled to reservoirs with band gaps. We analyze an exactly solvable bosonic model for the existence of bound states and reproduce these results by a weak-coupling treatment of a supersystem composed of the original system and multiple reaction coordinates, each of which represents a small energy interval of the reservoir spectral function. Within the perturbative supersystem treatment, the bound state's stability results from its energy lying inside the band gap. We discuss cases of multiple band gaps and also show that already in the presence of weak interactions the bound state's lifetime is finite, but can be increased by raising the system-reservoir coupling strength.
Scaling Laws for Hybrid Quantum Neural Networks: Depth, Width, and Quantum-Centric Diagnostics
This paper studies how hybrid quantum neural networks perform as you increase the number of quantum layers or qubits, measuring both standard machine learning metrics and quantum-specific properties. The researchers tested different configurations across multiple datasets to understand scaling patterns and provide guidance for choosing optimal quantum circuit parameters.
Key Contributions
- Systematic scaling analysis of hybrid quantum neural networks along depth and width dimensions
- Introduction of quantum-specific diagnostic metrics (QCE, EEE, QGN) for characterizing quantum behavior in machine learning applications
- Establishment of evaluation protocols and practical guidance for parameter selection in hybrid QNN classifiers
View Full Abstract
Hybrid quantum neural networks are increasingly explored for classification, yet it remains unclear how their performance and quantum behavior scale with circuit depth and qubit count. We present a controlled scaling study of hybrid quantum-classical classifiers along two axes: (1) increasing the number of quantum layers L at fixed qubits Q, and (2) increasing the number of qubits Q at fixed depth L. Across multiple datasets, we evaluate predictive performance using Accuracy, PR-AUC, Precision, Recall, and F1, and track quantum-specific metrics (QCE, EEE, QGN) to characterize how quantum properties evolve under scaling. Our results summarize scaling trends, saturation regimes, and dataset-dependent sensitivity, and further analyze how quantum metrics relate to predictive performance. This study provides practical guidance for selecting (Q,L) in hybrid QNN classifiers and establishes a consistent evaluation protocol for scaling analysis.
Quantum Machine Learning for particle scattering entanglement classification
This paper explores using quantum convolutional neural networks (QCNNs) to classify entanglement levels in particle scattering by analyzing fermion density profiles, which are easier to measure than entanglement directly. The researchers found that compact 4-qubit QCNNs outperformed classical CNNs and larger quantum models in this classification task.
Key Contributions
- Demonstrates QCNNs can effectively classify entanglement from accessible observables like density profiles
- Shows that compact quantum models outperform larger ones, emphasizing trainability over scale
- Provides quantum machine learning approach for extracting quantum information from particle scattering data
View Full Abstract
Entanglement is a key quantity for characterizing quantum correlations in particle scattering processes, but its direct evaluation is computationally demanding on quantum hardware. In this work, we investigate whether fermion density profiles, which are easier to access, can serve as proxies for entanglement by framing the problem as a classification task across multiple entanglement thresholds. Using fermion scattering in the Thirring model as a test bed, we compare Quantum Convolutional Neural Networks (QCNNs) with classical CNNs of comparable parameter counts, and find that QCNNs achieve consistently competitive or superior accuracy with faster convergence and lower variance. Notably, we observe that increasing the model size does not improve performance within the architectures studied here, and larger models appear to be more sensitive to the choice of encoding. Instead, a compact 4-qubit QCNN provides the best results, suggesting the importance of trainability and encoding choices over model scaling. These findings demonstrate the potential of quantum and quantum-inspired machine learning models for extracting nontrivial quantum information from accessible observables, with implications for high-energy physics and quantum many-body systems.
Distributions of Noisy Expectation Values over Sets of Measurement Operators
This paper studies how measurement outcomes are distributed when quantum circuits experience noise, developing mathematical models to predict these distributions. The researchers compare theoretical predictions with simulations of noisy quantum circuits and find that different types of measurements produce different distribution patterns.
Key Contributions
- Generalized mathematical framework for expectation value distributions in noisy quantum systems with mixed states
- Effective global depolarizing model that captures behavior of local noise in quantum circuits
- Discovery that symmetric vs non-symmetric measurement operators produce uni-modal vs multi-modal distributions
View Full Abstract
Expectation values of measurement operators, interpreted as measurement probabilities, arise frequently throughout quantum algorithms. When quantum states are randomly distributed, their expectation values are also randomly distributed. In this work, with the goal of understanding non-unitary dynamics, we generalize previous derivations for distributions of expectation values (Campos Venuti and Zanardi, Physics Letters A (377), 2013) to the case of sets of measurement operators and random mixed quantum states within variable-sized environments. Using combinatorial approaches, we derive expressions for their moments. We proceed to construct empirical distributions of simulated Haar-random brickwork quantum circuits with local depolarizing noise, and compare their form to a proposed effective global-depolarizing-like model with variable effective noise scales and environment dimensions. The fitted effective distributions reproduce peak behaviour across circuit depths, noise scales, and system sizes, while deviations in the distribution tails arise from local noise effects. The fitted effective model parameters are also shown to vary smoothly and consistently with circuit depth and noise scale. Finally, sets of non-symmetric measurement operators are shown to exhibit distinct multi-modal distributions, relative to uni-modal distributions for symmetric measurement operators, opening up questions about their simulability.
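The effective model the empirical distributions are fitted against has a simple closed form: under global depolarizing noise of strength $\lambda$, the state contracts toward the maximally mixed state, so every expectation value obeys $\langle M\rangle_{\text{noisy}} = (1-\lambda)\langle M\rangle_{\text{ideal}} + \lambda\,\mathrm{Tr}(M)/d$. A direct numerical check with a random state and observable (illustrative, not the paper's fitting procedure):

```python
import numpy as np

rng = np.random.default_rng(2)
d, lam = 16, 0.3

# Random pure state and a Hermitian measurement operator M
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M = (A + A.conj().T) / 2

rho = np.outer(psi, psi.conj())
rho_noisy = (1 - lam) * rho + lam * np.eye(d) / d   # global depolarizing channel

ev_ideal = np.real(np.trace(rho @ M))
ev_noisy = np.real(np.trace(rho_noisy @ M))
ev_model = (1 - lam) * ev_ideal + lam * np.real(np.trace(M)) / d
```

Because the map is affine in the state, the same contraction applies to every moment of the expectation-value distribution, which is what makes the effective model easy to fit.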
Distributed Quantum Property Testing with Communication Constraints
This paper develops a theoretical framework for distributed quantum inference where multiple nodes each receive copies of an unknown quantum state and must communicate through limited channels to a central node that determines properties of the state. The work focuses on quantum state certification - deciding whether an unknown state matches a known reference state - and establishes sample complexity bounds when quantum communication is constrained.
Key Contributions
- Establishes the first theoretical framework for distributed quantum inference with communication constraints
- Proves tight upper and lower bounds for quantum state certification sample complexity under limited quantum communication channels
View Full Abstract
We introduce a framework for distributed quantum inference under communication constraints. In our model, $m$ distributed nodes each receive one copy of an unknown $d$-dimensional quantum state $\rho$, before communicating via a constrained one-way communication channel with a central node, which aims to infer some property of $\rho$. This framework generalizes the classical distributed inference framework introduced by Acharya, Canonne, and Tyagi [COLT2019], by allowing quantum resources such as quantum communication and shared entanglement. Within this setting, we focus on the fundamental problem of quantum state certification: Given a complete description of some state $\sigma$, decide whether $\rho=\sigma$ or $\|\rho-\sigma\|_1\geq \varepsilon$. Additionally, we focus on the case of limited quantum communication between distributed nodes and the central node. We show that when each communication channel is limited to only $n_q\leq \log d$ qubits, then the sample complexity of distributed state certification is $\mathcal{O}(\frac{d^2}{2^{n_q}\varepsilon^2})$ when public randomness is available to all nodes. Moreover, under the assumption that the channels used by the distributed nodes are mixedness-preserving, we prove a matching lower bound. We further demonstrate that shared randomness is necessary to achieve the above complexity, by proving an $\Omega(\frac{d^3}{4^{n_q} \varepsilon^2})$ lower bound in the private-coin setting under the same assumption as above. Our lower bounds leverage a recently introduced quantum analogue of the celebrated Ingster-Suslina method and generalize arguments from the classical setting. Together, our work provides the first characterization of distributed quantum state certification in the regime of limited quantum communication and establishes a general framework for distributed quantum inference with communication constraints.
Quantum phases in the interacting generalized Su-Schrieffer-Heeger model
This paper studies quantum phases in an extended Su-Schrieffer-Heeger model with particle interactions, discovering how topological phases evolve when interactions are added and identifying new gapless phases that emerge from the competition between interaction effects and particle hopping.
Key Contributions
- Characterization of how symmetry-protected topological phases evolve under interactions
- Discovery of gapless symmetry-protected topological phase with current-carrying edge states
- Identification of charge-density-wave and Luttinger liquid phases from attractive interactions
View Full Abstract
We investigate the quantum phases of a half-filled generalized interacting Su-Schrieffer-Heeger model with intracell, nearest-neighbor, and next-nearest-neighbor intercell hoppings, together with an on-site inter-sublattice interaction. In the noninteracting limit, the model hosts one topologically trivial phase and two symmetry-protected topological (SPT) phases, distinguished under periodic boundary conditions by different winding numbers and under open boundary conditions by two-fold and four-fold entanglement-spectrum degeneracies, respectively. When interactions are introduced, these free-fermion SPT phases evolve into distinct interacting topological phases that retain characteristic signatures such as entanglement-spectrum degeneracy structures, boundary modes, and nonzero string order parameters. For strong repulsive interactions, a symmetry-breaking phase with unequal but spatially uniform sublattice densities appears between the trivial and topological regimes. For strong attractive interactions, period-2 and period-4 charge-density-wave phases emerge from particle clustering. At intermediate attractive interactions, the competition between interaction-induced localization and hopping-induced delocalization gives rise to a Luttinger liquid phase, a paired Luttinger liquid phase, and a gapless symmetry-protected topological (gSPT) phase. The gSPT phase is characterized by a gapless charge mode together with symmetry-protected current-carrying edge states. We further characterize the gapless phases and the associated quantum phase transitions through central charges and critical exponents.
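The winding numbers that label the noninteracting SPT phases can be computed directly from the Bloch Hamiltonian. The sketch below handles the plain two-hopping SSH chain with off-diagonal element $h(k) = t_1 + t_2 e^{ik}$ (intracell $t_1$, intercell $t_2$); it omits the paper's next-nearest-neighbor term and interactions, and is illustrative only.

```python
import numpy as np

def ssh_winding(t1, t2, n_k=2000):
    """Winding number of h(k) = t1 + t2*exp(i k) around the origin,
    accumulated from wrapped phase steps over the Brillouin zone."""
    k = np.linspace(0, 2 * np.pi, n_k, endpoint=False)
    h = t1 + t2 * np.exp(1j * k)
    dphi = np.angle(h[(np.arange(n_k) + 1) % n_k] / h)   # principal-value steps
    return int(round(dphi.sum() / (2 * np.pi)))
```

Geometrically, $h(k)$ traces a circle of radius $t_2$ centered at $t_1$: it encircles the origin (winding 1, topological) when $t_2 > t_1$ and misses it (winding 0, trivial) when $t_2 < t_1$; adding longer-range hoppings lets the curve wind more than once, giving the second SPT phase.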
Quantum advantage in transfer of quantum states
This paper proves that quantum systems can transfer excitations faster than classical systems by exploiting quantum superposition, where particles can simultaneously explore multiple paths through a lattice. The authors demonstrate a clear quantum advantage in state transfer speed compared to any single classical trajectory.
Key Contributions
- Proof of quantum advantage in time-optimal state transfer through lattices
- Demonstration that quantum superposition of multiple trajectories speeds up excitation transfer compared to classical single-path propagation
View Full Abstract
Quantum advantage, broadly understood as the ability of quantum systems to significantly outperform their classical counterparts, underpins current interest in quantum technologies and is a topic of active investigation. In many situations, its existence is subject to debate, and the areas of supremacy of large-scale quantum systems are not well defined. Here, we uncover a novel niche where quantum advantage can be clearly defined and proven. We study the time-optimal transfer of excitations in a lattice involving both nearest-neighbor and longer-range couplings. We prove that the quantum-mechanical ability of a particle to propagate along several trajectories simultaneously speeds up the transfer process, which takes a shorter time than propagation along any particular trajectory, and thus provides a clear example of quantum advantage.
Hybrid Quantum-Classical Algorithm for Hamiltonian Simulation
This paper presents a hybrid classical-quantum algorithm for simulating quantum systems described by Hamiltonians that can be decomposed into tensor products. The method classically diagonalizes component operators and feeds this information into quantum procedures to create block-encodings for time evolution simulation.
Key Contributions
- Novel hybrid classical-quantum algorithm for Hamiltonian simulation using tensor product decomposition
- Extension to time-dependent coefficients for commuting Hamiltonians
- Application of randomized truncation techniques to quantum state preparation
View Full Abstract
We introduce a hybrid classical-quantum algorithm for simulating a Hamiltonian of the form $H= \sum_{i=1}^K H_i = \sum_{i=1}^K H_{i_1} \otimes H_{i_2} \otimes \cdots \otimes H_{i_M}$. Given that the entries of all $\{ H_{i_1}, H_{i_2} , \cdots , H_{i_M}\}$ (for all $i$) are classically known, we present a procedure (with three variants) in which these operators are classically diagonalized, and then this information is fed into three possible quantum procedures to obtain the block-encoding of $H$. The evolution operator $\exp(-iHt)$ is then obtained using the standard block-encoding/quantum singular value transformation framework. In the case where $\{H_i\}_{i=1}^K$ commute pairwise, our method can be trivially extended to the case with time-dependent coefficients. We provide a detailed discussion of the efficient regime of our hybrid framework and compare it with existing quantum simulation algorithms. Our algorithm can serve as a useful complement to existing quantum simulation algorithms, thereby expanding the reach of quantum computers for practically simulating physical systems. As a side contribution, we will show how the recent technique called "randomized truncation to a quantum state" developed by Harrow, Lowe, and Witteveen [arXiv preprint arXiv:2510.08518, 2025] can be applied to the context of quantum simulation and particularly quantum state preparation, for which the latter can be of independent interest.
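To make the tensor-product structure concrete, here is a minimal NumPy sketch of the classical side of the setup: a toy Hamiltonian $H = \sum_i H_{i_1} \otimes H_{i_2}$ is assembled from Kronecker products, each factor is diagonalized classically (the spectral data that would feed the paper's block-encoding step), and the exact evolution $\exp(-iHt)$ is computed as a classical reference. The specific matrices ($Z$, $X$) and the value of $t$ are illustrative choices, not taken from the paper.

```python
import numpy as np
from functools import reduce

def kron_chain(mats):
    """Kronecker product H_{i_1} ⊗ H_{i_2} ⊗ ... ⊗ H_{i_M}."""
    return reduce(np.kron, mats)

# Hypothetical 2-qubit example with K=2 terms, M=2 factors: H = Z⊗Z + X⊗X.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
terms = [[Z, Z], [X, X]]
H = sum(kron_chain(t) for t in terms)

# Classical preprocessing as described in the abstract: diagonalize each
# factor H_{i_m} = U D U†; this spectral data is what the quantum
# procedures would consume to build a block-encoding of H.
factor_spectra = [[np.linalg.eigh(Hm) for Hm in factors] for factors in terms]

# Exact classical evolution exp(-iHt), the object the quantum routine
# approximates via QSVT on the block-encoding.
t = 0.7
evals, V = np.linalg.eigh(H)
U_t = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
```

The two terms here do not commute, so $\exp(-iHt)$ does not factor into $\exp(-iH_1t)\exp(-iH_2t)$; the pairwise-commuting case mentioned in the abstract is the special situation where such a factorization (and the time-dependent extension) becomes trivial.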
Exact WKB analysis of inverted triple-well: resonance, PT-symmetry breaking, and resurgence
This paper analyzes quantum mechanics in non-Hermitian systems using an inverted triple-well potential, employing exact WKB methods to study different boundary conditions that create PT-symmetric, resonance, and anti-resonance quantum systems. The work develops mathematical tools to predict when these systems exhibit real versus complex energy spectra and identifies critical transition points called exceptional points.
Key Contributions
- Development of exact WKB quantization conditions for non-Hermitian triple-well systems with different boundary conditions
- Derivation of exact algebraic relations for PT-symmetry breaking exceptional points using bounce and bion action parameters
- Unified framework connecting resurgent trans-series analysis with semi-classical path integral methods for non-Hermitian quantum mechanics
View Full Abstract
We study non-Hermitian quantum mechanics of an inverted triple-well potential within the exact WKB framework. For a single classical potential, different Siegert boundary conditions define three distinct quantum problems: the PT-symmetric, resonance, and anti-resonance systems. For each case, we derive the exact quantization condition and construct the associated trans-series solution. By identifying the resurgent structures and cancellations in these non-Hermitian setups, we obtain the median-summed series, clarifying when the spectra are real or complex in accordance with the physical properties of each system. Establishing explicit links to the semi-classical path integral formalism, we elucidate the roles of bounce and bion configurations in these non-Hermitian systems. This analysis predicts PT-symmetry breaking, which we also verify numerically. Using the median quantization conditions, we prove the existence of this symmetry breaking and establish an exact equation for the exceptional point, which emerges as a remarkably simple algebraic relation between the bounce and bion actions. We further show that the median-summed non-perturbative correction to the spectrum vanishes at the exceptional point, while the resurgent structure survives through a universal minimal trans-series. For the resonance and anti-resonance systems, we find that the exact median-summed spectra are related by complex conjugation, representing time reversal in this setting, are necessarily complex, and do not exhibit an exceptional point. Although their spectra differ significantly from the PT-symmetric case, they share the same minimal trans-series. By maintaining explicit links with the path integral saddles and the formal theory of resurgence, our analysis provides a unified and general perspective on the quantization of non-Hermitian theories.
Quantum optomechanics of lossy bodies: general approach and structured squeezed vacuum effects
This paper develops a theoretical framework for understanding how quantum light fields can exert forces on macroscopic objects through purely quantum mechanical effects, without requiring classical light beams. The researchers show that specially prepared squeezed vacuum states can create directional forces on lossy materials by engineering quantum fluctuations with broken rotational symmetry.
Key Contributions
- Development of Modified Langevin Noise Formalism for calculating optomechanical forces in non-equilibrium quantum scenarios
- Demonstration that anisotropic multimode squeezed vacuum states can generate purely quantum mechanical forces without mean electromagnetic fields
- General formalism for macroscopic quantum optomechanics beyond thermal equilibrium that preserves macroscopic quantum coherence
View Full Abstract
We investigate the overall optomechanical force experienced by a macroscopic lossy object in free space under external quantum illumination. To this end, utilizing the Modified Langevin Noise Formalism (MLNF), we derive the time-averaged expectation value of the Maxwell stress tensor for a non-equilibrium scenario in which the incoming scattering field is prepared in an arbitrary mixed quantum state, while the medium-assisted field is maintained in local thermal equilibrium. In the limit of full radiation-matter thermal equilibrium, our expression exactly recovers the well-known fluctuation-dissipation relation governing the Casimir effect, and, under coherent illumination, it yields the standard classical radiation pressure. We demonstrate that by driving the scattering field with an anisotropic, multimode squeezed vacuum state, the spatial profile of the electromagnetic quantum fluctuations can be engineered to exhibit broken rotational symmetry, thereby inducing a purely quantum mechanical force acting on the object. Such mechanical interaction is generated in the strict absence of a mean field, $\langle\hat{\mathbf{E}}\rangle=0$, and its non-classical nature is evidenced by its reliance on second-order field correlations $\langle\hat{\mathbf{E}}^2\rangle$, unlike classical optical radiation pressure governed by the squared mean field $\langle\hat{\mathbf{E}}\rangle^2$. Applying this exact formulation to a homogeneous lossy sphere, we demonstrate the experimental feasibility of the effect using realistic material parameters and optical estimations. Ultimately, we establish a general formalism for macroscopic quantum optomechanics that operates beyond the constraints of thermal equilibrium, enabling the prediction of regimes where the purely quantum force circumvents classical mean fields and shot noise while preserving the object's macroscopic quantum coherence.
Generalized hydrodynamics of free fermions under extensive-charge monitoring
This paper studies transport in free fermions whose particle number is continuously monitored over half of the system. The researchers develop a mathematical framework using generalized hydrodynamics to understand how this monitoring affects particle flow and creates discontinuities in the system's properties.
Key Contributions
- Development of a generalized hydrodynamics framework for analyzing transport dynamics under extensive-charge monitoring
- Demonstration that monitoring creates discontinuities in charge and current profiles that become more pronounced with increased monitoring rates
View Full Abstract
We study transport dynamics of free fermions subject to the external monitoring of a conserved charge over an extensive region. Focusing on bipartition protocols, we consider monitoring the total particle number over half of the system, and study the profiles of local charges and currents at hydrodynamic scales. While the Lindbladian describing the averaged dynamics is non-local, we show that the profiles can be understood in terms of localized impurities. We present a general framework based on the generalized hydrodynamics (GHD) picture, allowing for a hybrid numerical-analytic solution of the quench dynamics at hydrodynamic scales. We illustrate our approach for domain-wall initial states, showing that monitoring leads to discontinuities in the profiles that become more pronounced as the rate increases and that lead to the absence of transport in the Zeno limit of infinite monitoring rates. Our GHD framework could be naturally extended to interacting systems, paving the way for a systematic study of transport of integrable models subject to extensive-charge measurements.
Deviations from thermal light statistics in ensembles of independent two-level emitters
This paper studies when collections of independent two-level atoms produce thermal light with Gaussian statistics, identifying specific conditions on atom number and coherent/incoherent emission ratios. The research helps understand how thermal light is generated by non-interacting atomic emitters.
Key Contributions
- Derived conditions for thermal light statistics from independent two-level atoms based on atom number and emission ratios
- Characterized deviations from Gaussian Moment Theorem in atomic ensembles for both pure and mixed states
View Full Abstract
We investigate the light statistics of an ensemble of independent motionless two-level atoms in a product state. We identify the conditions under which the cold atomic ensemble emits thermal light statistics characterized by the Gaussian Moment Theorem. For the theorem to hold, we derive for each correlation order two conditions on the atom number and the ratio of coherent to incoherent light emission. We further discuss their validity for atoms either in a pure or mixed state. Our results contribute to the understanding of the generation of thermal light by two-level atoms without interactions among the emitters.
Probing the Factorized Island Branch with the Capacity of Entanglement in JT Gravity
This paper studies black hole physics using JT gravity theory, showing that the capacity of entanglement can reveal additional information about quantum black hole states that the standard von Neumann entropy measure cannot detect. The work demonstrates that replica geometries in semiclassical gravity contain observable information beyond what appears in the entropy limit.
Key Contributions
- Demonstrated that capacity of entanglement can detect structure in black hole island physics that von Neumann entropy cannot reveal
- Showed that replica saddle geometries contain physically meaningful finite-n information beyond the n=1 entropy limit
View Full Abstract
Black hole islands are usually diagnosed through the von Neumann entropy, but the full replica saddle contains more information than survives in the limit $n \to 1$. In this paper we show that the capacity of entanglement can detect that extra structure already within the controlled factorized island branch of JT gravity coupled to a large-$c$ bath. In the late-time high-temperature regime, the entropy plateau remains unchanged at the first nontrivial order, while the capacity acquires a definite correction. This provides a sharp semiclassical example in which nearby replica data are physically meaningful even when the entropy itself appears rigid. Our result shows that the factorized island saddle already carries finite-$n$ information beyond the entropy, and that the capacity is a natural observable for exposing it. More broadly, it highlights that the physics of island saddles is not exhausted by the $n=1$ limit: the surrounding replica geometry can contain additional, and observable, information about how the semiclassical saddle is assembled.
Quantum-Boosted Nonlinear Tunneling Driven by a Bright Squeezed Vacuum
This paper demonstrates the first experimental use of bright squeezed vacuum (quantum light) to dramatically enhance nonlinear tunneling ionization in atoms, achieving the same ionization effect with 24 times less energy than classical light. The researchers showed that quantum light's unique noise properties can boost nonlinear optical processes, with applications for efficient frequency conversion and quantum-controlled reactions.
Key Contributions
- First experimental demonstration of quantum light boosting nonlinear tunneling ionization with 24x efficiency improvement
- Showed precise control of tunneling ionization through phase-squeezing parameters while maintaining constant average energy
View Full Abstract
Nonlinear processes, mediated by multiphoton interactions rather than single-photon response, drive numerous fundamental phenomena and momentous applications in modern physics. Among these processes, tunneling ionization plays a pivotal role as it drives high-harmonic generation, forming the basis of attosecond science and enabling the visualization and control of electron motion at its natural time scale. Quantum light, with its unique capacity for quantum noise redistribution, offers a transformative solution to boost nonlinear responses. Here, we report the first experiment of nonlinear tunneling ionization of the most fundamental system of atoms boosted by a quantum light -- bright squeezed vacuum (BSV). Remarkably, the tunneling ionization of a single sodium atom induced by a 300 nJ BSV beam matches that achieved with a 7.1 μJ coherent light source, demonstrating a dramatic boost in nonlinear efficiency from phase-squeezed quantum light. Moreover, the effective intensity of the BSV light and thus the boosted tunneling ionization can be precisely controlled by tuning the degree of phase squeezing while maintaining the average pulse energy. These findings provide fundamental insights into quantum-boosted nonlinear effect and pave the way for efficient frequency conversion and quantum-controlled molecular reactions using tailored quantum light sources.
A Nested Amplitude Amplification Protocol for the Binary Knapsack Problem
This paper proposes a nested amplitude amplification protocol that improves quantum optimization for the binary knapsack problem by splitting the search process into partial and global phases, reducing circuit depth requirements compared to standard Grover Adaptive Search while maintaining quantum speedup.
Key Contributions
- Development of nested amplitude amplification protocol that reduces circuit depth for combinatorial optimization
- Introduction of Inner Iteration Finder for optimal rotation count selection in partial amplification
- Demonstration of improved performance over baseline Grover Adaptive Search for specific knapsack problem instances
View Full Abstract
Amplitude Amplification offers a provable speedup for search problems, which is leveraged in combinatorial optimization by Grover Adaptive Search (GAS). The protocol demands deep circuits that are challenging with regards to NISQ capabilities. We propose a nested Amplitude Amplification protocol for the binary knapsack problem that splits the decision tree at a tunable depth, performing a partial amplification on the first variables before executing a global GAS on the full search space. The partial amplification is implemented by an Inner Iteration Finder that selects the rotation count maximizing marked-subspace amplitude. The resulting biased superposition serves as the initial state for the outer Amplitude Amplification. Using the Quantum Tree Generator for feasible-state preparation and an efficient classical amplitude-tracking scheme, we simulate the protocol on knapsack instances of sizes intractable by statevector simulation. Our results show that the nested approach reduces the cost of improving an incumbent solution compared to baseline GAS, particularly for a specific subset of knapsack instances. As combinatorial problems in domains such as semiconductor supply-chain planning grow in scale, methods that reduce circuit cost are an important step toward eventual quantum advantage for such applications.
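The role the abstract assigns to the Inner Iteration Finder — choosing the rotation count that maximizes amplitude in the marked subspace — can be illustrated with standard amplitude-amplification arithmetic: after $k$ Grover iterations the marked-subspace probability is $\sin^2((2k+1)\theta)$ with $\theta = \arcsin\sqrt{p}$. The sketch below is a generic illustration of that selection rule under an assumed marked fraction, not the paper's implementation; the function name and `k_max` cutoff are hypothetical.

```python
import math

def best_rotation_count(p_marked, k_max=10):
    """Pick the rotation count k in [0, k_max] maximizing the
    marked-subspace probability sin^2((2k+1)θ), θ = asin(sqrt(p_marked)).
    This is the kind of choice the paper's Inner Iteration Finder makes
    for the partial amplification stage."""
    theta = math.asin(math.sqrt(p_marked))
    best_k = max(range(k_max + 1),
                 key=lambda k: math.sin((2 * k + 1) * theta) ** 2)
    return best_k, math.sin((2 * best_k + 1) * theta) ** 2

# Example: one marked state among 64 basis states.
k, p = best_rotation_count(1 / 64, k_max=10)
```

The biased superposition this partial stage produces then seeds the outer amplification over the full search space, which is where the circuit-depth saving relative to plain GAS comes from.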
Spectrum-Generating Algebra in Higher Dimensional Gauge Theories
This paper studies quantum many-body physics in gauge theories by demonstrating that certain quantum spin models have special mathematical structures called spectrum-generating algebras that create unusual quantum states called Quantum Many-Body Scars. The work proposes ways to identify and study these phenomena using quantum simulators.
Key Contributions
- Demonstration of approximate spectrum-generating algebra in spin-1 Quantum Link Models
- Prediction and verification of Quantum Many-Body Scars in gauge theories
- Proposal of observables for diagnosing spectrum-generating algebras in quantum simulators
View Full Abstract
Non-equilibrium properties of strongly interacting gauge theories are often intractable with classical simulation methods. Due to recent developments of quantum simulations, studies of their properties in two spatial dimensions are becoming accessible. By demonstrating the existence of an approximate spectrum-generating algebra for a pure gauge plaquette ladder, we predict and verify the existence of Quantum Many-Body Scars in spin-1 Quantum Link Models. The analysis of the model is facilitated by a dualization process that maps the original gauge theory to a constrained spin chain. Were it not for the constraint, the system would have an exact spectrum-generating algebra. We propose a set of observables for diagnosing an approximate spectrum-generating algebra, which is expected to guide quantum simulators toward interesting physical regimes.
Kinetic Uncertainty Relation in Collective Dissipative Quantum Many-Body Systems
This paper derives fundamental precision limits for quantum many-body systems with collective dissipation, showing that interactions between particles can enhance measurement precision beyond what's possible with single particles. The work establishes theoretical bounds for how accurately these systems can perform measurements and validates the theory across different quantum phases.
Key Contributions
- First derivation of kinetic uncertainty relations for collective dissipative quantum many-body systems
- Discovery of cooperative enhancement mechanism where precision scales with particle number
- Theoretical framework validated across stationary, critical, and boundary time crystal phases
View Full Abstract
Attaining the ultimate precision remains a central objective in the engineering of nanoscale systems and the investigation of nonequilibrium processes. While thermodynamic and kinetic uncertainty relations establish fundamental precision bounds, prior derivations in the quantum regime have remained confined to single-body systems. Consequently, the ultimate precision limits for interacting many-body systems have been unknown. In this Letter, we analytically formulate a kinetic uncertainty relation for a many-body system undergoing collective dissipation, a paradigmatic model of boundary time crystals. By applying a mean-field approximation, we derive lower bounds for relative fluctuations expressed in terms of clear physical quantities. Our analysis identifies a cooperative enhancement mechanism, demonstrating that collective interactions allow the precision to scale with the number of particles. We validate these findings through numerical simulations across the stationary, critical, and boundary time crystal phases. Our work presents the first theoretical description of precision bounds in collective dissipative quantum many-body systems for an arbitrary particle number $N$, providing a solid foundation for designing future quantum technologies that exploit many-body phenomena.
Mirror Dual Symmetry in Physics
This paper proposes a 'mirror dual symmetry' principle for quantum systems, arguing that imposing zero total energy constraints on the quantum Rabi model and Dirac equation could resolve fundamental physics problems like negative energy states, dark matter, and quantum gravity renormalization.
Key Contributions
- Proposes mirror dual symmetry principle with zero total energy constraint
- Suggests alternative interpretation to avoid Dirac sea construction
- Claims potential resolution of dark matter and quantum gravity issues
View Full Abstract
The quantum Rabi model has been a useful and pedagogical quantum model in the past decades, sufficiently simple to be solved analytically and intuitively understood, while sufficiently complex as to provide highly non-trivial eigenstates and a practical description of quantum optical platforms for quantum technologies. The Dirac equation, especially when restricted to 1+1 dimensions, is a simple toy model as well, but its easy diagonalization enabled historically to connect the electron spin to the fermionic statistics, among others. Both models share a symmetry at the purely mathematical level, namely, the spectra of each one has a dual equivalent under energy sign change, that I name a mirror dual symmetry. Usually, one quantizes these equations by assuming a ground state energy for the bosonic mode. But there is another option for the interpretation of the Hamiltonian, as I will argue, that is to assume a total symmetry principle, namely, that the total energy is zero at all times, for either the quantum Rabi model or the Dirac equation, and impose the constraint that every positive energy excitation has a mirror excitation of negative energy. This possibility, which was, apparently, ignored in the times when Paul Dirac was studying the implications of his equation, would avoid the worries in the scientific community that the negative energy solutions would decay until minus infinity, thus obviating the necessity to build a highly artificial Dirac sea, and instead impose what has always been successful in Physics, which is the enforcement of symmetry principles. Assuming a total symmetry principle, many of the problems of current Physics, such as renormalization of quantum gravity, dark matter, and dark energy, may possibly be automatically solved. One obvious result would be the automatic cancellation of the zero point energy.
The final version of a recent approach towards quantum foundation
This paper presents a simplified mathematical foundation for quantum mechanics based on the concept of complementary variables, deriving the Hilbert space formalism from the assumption that two different maximal accessible variables exist in a given context. The author removes previous assumptions about inaccessible variables to create a more streamlined theoretical framework.
Key Contributions
- Simplified foundational approach to quantum mechanics by removing inaccessible variable assumptions
- Derivation of Hilbert space formalism from complementary variable postulates
View Full Abstract
In several articles, this author has advocated an alternative approach towards quantum foundation based upon a set of postulates, and based upon the notions of theoretical variables and of accessible theoretical variables. It is shown in this article that this basis can be considerably simplified. In particular, the assumption that there exists an inaccessible variable $\varphi$ such that all the accessible ones can be seen as functions of $\varphi$, can be dropped. This assumption has been difficult to motivate in the previous articles. From this, I get a simple basis for the main theorems. The essential assumption is that there in the given context exist two different maximal accessible variables, what Niels Bohr would have called two complementary variables. From this, the whole Hilbert space formalism may be derived. It is also discussed in some detail how this Hilbert space should be chosen. The resulting theory is a purely mathematical theory, but it leads to quantum mechanics by letting the variables be physical variables. Other applications of the main theory are also considered. The mathematical proofs are mostly deferred to the Appendix.
A Global Model Structure for $\mathbb{K}$-Linear $\infty$-Local Systems
This paper develops new mathematical foundations for organizing quantum systems using advanced category theory and homotopy theory. The authors create a 'global model structure' for mathematical objects called K-linear infinity-local systems, which provides better theoretical tools for studying parameterized quantum systems than previous approaches.
Key Contributions
- First dedicated global model structure for K-linear infinity-local systems
- Monoidal model structure for base 1-types with respect to external tensor product
- Candidate target semantics for multiplicative fragment of Linear Homotopy Type Theory
View Full Abstract
Parameterized stable homotopy theory organizes local systems of spectra over homotopy types, governed by a "yoga" of six functors. To provide semantics for the recently developed Linear Homotopy Type Theory (LHoTT), good model categories of these spectra are required, preferably monoidal with respect to the external smash product. We focus on the case of parameterized $H\mathbb{K}$-module spectra ($\infty$-local systems), motivated by recent applications of parameterized homotopy to topological quantum computing. While traditionally treated via dg-categories, we leverage combinatorial model structures on simplicial chain complexes to construct the first dedicated global model structure for $\mathbb{K}$-linear $\infty$-local systems, which offers better control than existing models for general parameterized spectra. In particular, when restricted to base 1-types, our model structure is monoidal with respect to the external tensor product, making it a candidate target semantics for the multiplicative fragment of LHoTT.
Coherence and Imaginarity as Resources in Quantum Circuit Complexity
This paper develops new mathematical tools to understand the minimum cost (number of gates) required to build quantum circuits by analyzing two quantum resources: coherence and imaginarity. The authors show that imaginarity can provide better bounds on circuit complexity than coherence alone, particularly for certain gates like the T gate where coherence-based methods fail.
Key Contributions
- Established tighter lower bounds on quantum circuit complexity using Tsallis relative α entropy of cohering power
- Demonstrated that imaginarity resources can provide non-trivial circuit cost constraints when coherence-based bounds fail, particularly for T gates
- Derived explicit relationships between circuit cost and quantum resource generating power using skew information and relative entropy measures
View Full Abstract
Quantum circuit complexity quantifies the minimal number of gates needed to realize a unitary transformation and plays a central role in quantum computation. In this work, we investigate the complexity of quantum circuits through coherence and imaginarity resources. We establish a lower bound on the circuit cost by the Tsallis relative $\alpha$ entropy of cohering power, which is shown to be tighter than the one presented by Bu et al. [Communications in Mathematical Physics 405, no. 7 (2024): 161] under restrictive conditions. As a consequence, we obtain the relationships between the circuit cost and the coherence generating power via probabilistic average in terms of skew information/relative entropy, and present explicit bounds of the circuit cost for typical quantum gates. Moreover, we derive lower bounds on the circuit cost via the imaginaring power of the circuit, induced by the Tsallis relative $\alpha$ entropy and relative entropy. We demonstrate that imaginarity can yield nontrivial constraints on the circuit cost even when coherence-based lower bounds are zero (e.g., for the $T$ gate), which implies that imaginarity may provide advantages under certain circumstances compared with coherence. Our results may help better understand the connections between quantum resources and circuit complexity.
Quantum Learning of Classical Correlations with continuous-domain Pauli Correlation Encoding
This paper develops quantum machine learning methods to estimate classical covariance matrices using parameterized quantum circuits. The authors propose two quantum estimators (C-Estimator and E-Estimator) that can learn statistical correlations in classical data, with analysis of their computational trade-offs and strategies to avoid training difficulties.
Key Contributions
- Introduction of two quantum covariance estimators using Pauli-Correlation-Encoding paradigm
- Analysis of barren plateau mitigation strategies for variational quantum circuit training
- Demonstration of quantum advantage for high-dimensional statistical estimation problems
View Full Abstract
We propose a quantum machine learning framework for estimating classical covariance matrices using parameterized quantum circuits within the Pauli-Correlation-Encoding (PCE) paradigm. We introduce two quantum covariance estimators: the C-Estimator, which constructs the covariance matrix through a Cholesky factorization to enforce positive (semi)definiteness, and a computationally efficient E-Estimator, which directly estimates covariance entries from observable expectation values. We analyze the trade-offs between the two estimators in terms of qubit requirements and learning complexity, and derive sufficient conditions on regularization parameters to ensure positive (semi)definiteness of the estimators. Furthermore, we show that the barren plateau phenomenon in training the variational quantum circuit for E-estimator can be mitigated by appropriately choosing the regularization parameters in the loss function for HEA ansatz. The proposed framework is evaluated through numerical simulations using randomly generated covariance matrices. We examine the convergence behavior of the estimators, their sensitivity to low-rank assumptions, and their performance in covariance completion with partially observed matrices. The results indicate that the proposed estimators provide a robust approach for learning covariance matrices and offer a promising direction for applying quantum machine learning techniques to high-dimensional statistical estimation problems.
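The positivity trick behind the C-Estimator — building the covariance through a Cholesky-style factor so the result is positive (semi)definite by construction — can be sketched classically in a few lines. This is a generic illustration of that construction, not the paper's circuit: here the factor entries come from a random vector, whereas in the PCE framework they would be assembled from Pauli-correlation expectation values of the parameterized circuit; the function name and the regularizer $\lambda$ are illustrative.

```python
import numpy as np

def cholesky_covariance(params, d, lam=1e-6):
    """Assemble Σ = L Lᵀ + λ I from a flat parameter vector filling the
    lower triangle of L. Σ is positive definite by construction, which is
    the guarantee the C-Estimator's Cholesky factorization provides."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = params
    return L @ L.T + lam * np.eye(d)

rng = np.random.default_rng(0)
d = 4
# d(d+1)/2 free parameters for the lower-triangular factor.
Sigma = cholesky_covariance(rng.standard_normal(d * (d + 1) // 2), d)
```

The E-Estimator, by contrast, reads covariance entries off expectation values directly; it is cheaper but needs the regularization conditions derived in the paper to certify positive (semi)definiteness.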
Symmetry-resolved Krylov Complexity and the Uncoloured Tensor Model
This paper studies symmetry-resolved Krylov complexity, a measure of quantum chaos in systems with symmetries, focusing on when complexity in charge subspaces equals that of the full operator. The authors analyze the Uncoloured Tensor Model (related to the SYK model) and find cases where equipartition holds or fails, with subspace-averaged complexity bounded by full-space complexity.
Key Contributions
- Establishes conditions for when symmetry-resolved Krylov complexity in charge subspaces equals full operator complexity
- Demonstrates cases where equipartition holds and fails in the Uncoloured Tensor Model, with bounds on subspace-averaged complexity
View Full Abstract
The symmetry-resolved Krylov complexity is a useful tool in studying chaotic properties of systems that are endowed with symmetries. We investigate the conditions under which an invariant operator would have the symmetry-resolved Krylov complexity in a charge subspace identical to the Krylov complexity of the full operator. Further, we study the Krylov complexity of the Uncoloured Tensor Model, a disorder-free kin of the SYK Model which has a plethora of symmetries. We find charge subspaces of the same operator in which the equipartition holds as well as where it doesn't. We also find that within the computational limits, the Krylov complexity averaged over the symmetry subspace is bounded above by that of the operator in the full space.
Estimation of trace distance between two arbitrary quantum states
This paper presents a quantum algorithm to calculate the trace distance between two quantum states, which is important for distinguishing quantum states in quantum information processing. The algorithm uses matrix exponentiation and improved quantum phase estimation with O(N^8) time complexity and is demonstrated on IBM quantum computers.
Key Contributions
- Novel quantum algorithm for trace distance estimation using matrix exponentiation and improved quantum phase estimation
- Experimental validation on IBM quantum hardware demonstrating near-term feasibility
View Full Abstract
When it comes to discriminating between two quantum states, trace distance is one of the well-known metrics used in quantum computation and quantum information theory. While there are several quantum algorithms for calculating the trace distance between two quantum states, computing it for any two general density matrices remains computationally demanding. In this paper, we propose a quantum algorithm based on the exponentiation of the density matrix and the improved quantum phase estimation (IQPE) to determine the trace distance for both pure and mixed states, with a time complexity of $O(N^8)$, where $N$ is the number of qubits of the given states. We demonstrate its ability to predict the quantity with proof-of-principle simulations and quantum hardware computations on IBM quantum computers, confirming its promise for near-term quantum devices.
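For reference, the quantity being estimated has a direct classical definition: the trace distance is half the trace norm of the difference of the two density matrices. A minimal numerical check (purely classical, not the paper's quantum algorithm):

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance T = (1/2) ||rho - sigma||_1. Since the difference of two
    density matrices is Hermitian, the trace norm is the sum of the absolute
    values of its eigenvalues."""
    evals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(evals))

# Orthogonal pure states are perfectly distinguishable: T = 1.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.array([[0, 0], [0, 1]], dtype=complex)
print(trace_distance(rho, sigma))   # -> 1.0
```

The classical cost is dominated by the eigendecomposition of an exponentially large matrix, which is the bottleneck the quantum algorithm aims to sidestep.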
Loss-aware state space geometry for quantum variational algorithms
This paper develops improved optimization methods for quantum variational algorithms by incorporating the geometry of both the quantum state space and the loss function space. The authors propose 'loss-aware' modifications to natural gradient descent that can provide better convergence in certain scenarios while maintaining robustness.
Key Contributions
- Introduction of loss-aware natural gradient descent that incorporates geometry of outcome spaces
- Development of conformal variants that rescale step sizes while preserving descent directions
- Benchmarking on variational quantum circuits showing improved best-case convergence
View Full Abstract
Natural gradient descent is an efficient optimisation protocol for broad classes of classical and quantum systems that accounts for the underlying geometry of the parameter manifold by using either the Fisher information metric of the classical probability distribution or the Fubini-Study tensor of the associated parametrised quantum states in the update rules. Although the procedure exploits the geometry of the space of probabilities or states, it is insensitive to the notion of distance on the space of possible outcomes when the optimisation target is the expectation value of a classical or quantum observable with respect to the probability distribution or the quantum state. In this work, we introduce a generic optimising principle in which the intrinsic geometry of the space of outcomes is taken into account, either through an ambient-space construction in which the loss hypersurface is embedded over a base statistical manifold carrying the usual Fisher information metric (or the Fubini-Study tensor), or through a first-principles construction from the overlap of nearby quantum states on the projective Hilbert space. This construction, together with a family of conformal variants, yields loss-aware natural gradient updates that rescale the effective step size while preserving the descent direction. We benchmark the resulting optimisers on variational quantum circuit examples and on a classical neural network task, finding that, while the standard natural gradient remains the most robust on average, the proposed conformal schemes can improve best-case convergence in favourable regimes.
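The baseline update the paper modifies can be sketched on a toy quadratic loss. This is a minimal illustration under assumptions (explicit Fisher metric, damped inverse); the paper's conformal variants would only rescale the learning rate as a function of the loss, leaving the direction unchanged.

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.5, damping=1e-6):
    """One natural-gradient update: precondition the Euclidean gradient with
    the (damped) inverse metric, theta <- theta - lr * F^{-1} grad.
    A conformal loss-aware variant, as described in the abstract, would
    multiply lr by a scalar function of the loss, preserving the direction."""
    F = fisher + damping * np.eye(len(theta))
    return theta - lr * np.linalg.solve(F, grad)

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, with metric F = A:
A = np.array([[2.0, 0.0], [0.0, 0.5]])
theta = np.array([1.0, 1.0])
for _ in range(50):
    theta = natural_gradient_step(theta, A @ theta, A)
print(theta)  # converges toward the minimum at the origin
```

With the metric matched to the loss curvature, both coordinates contract at the same rate regardless of their very different curvatures, which is the usual motivation for preconditioning by the Fisher/Fubini-Study metric.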
A solid-state quantum memory based on a continuous optoacoustic system
This paper proposes a new type of quantum memory that stores optical quantum states by converting light into sound waves (phonons) in a special waveguide material. The system can store and retrieve quantum information on demand using controlled light pulses, potentially enabling faster and more scalable quantum communication systems.
Key Contributions
- Novel quantum memory protocol using photon-phonon transduction in Brillouin-active waveguides
- Demonstration of broadband quantum state storage without discrete cavity modes
- High-fidelity storage and retrieval of squeezed and entangled states with hundreds of MHz bandwidth
View Full Abstract
Quantum memories for optical states are essential resources for quantum communication and information processing. We propose a quantum memory protocol based on coherent photon-phonon transduction in a Brillouin-active optical waveguide supporting traveling acoustic modes. A pulsed pump drives an effective beam-splitter interaction between optical and acoustic fields, enabling the mapping of a propagating optical quantum state onto a traveling phononic excitation and its subsequent retrieval on demand. Using a continuum optoacoustic model, we show that the protocol enables broadband quantum state storage in a distributed medium without relying on discrete cavity modes. Analytical and numerical results demonstrate high-fidelity storage and retrieval of squeezed and entangled states under experimentally realistic parameters. The memory bandwidth is set by the Brillouin interaction and can reach hundreds of MHz. Our results identify continuum Brillouin optomechanical systems as a scalable platform for broadband quantum memories and multimode quantum signal processing.
Resource Implications of Different Encodings for Quantum Computational Fluid Dynamics
This paper analyzes the computational costs of different encoding schemes for quantum algorithms applied to computational fluid dynamics, particularly examining the resource requirements for amplitude encoding including initialization and measurement overhead. The authors derive theoretical bounds for circuit depth and measurement runs needed, and propose a new encoding approach specifically for quantum lattice Boltzmann methods.
Key Contributions
- Quantified circuit depth for amplitude encoding initialization procedures and derived upper bounds for measurement runs needed to extract encoded values
- Proposed new encoding approach specifically optimized for quantum lattice Boltzmann method applications
View Full Abstract
For quantum algorithms addressing problems in which the task is to compute an entire field of values, e.g. computational fluid dynamics (CFD), amplitude encoding with respect to multiple qubits is often proposed; however, the effort it implies for initialization and read-out is rarely addressed. This work is devoted specifically to this issue: it reviews different encoding schemes in quantum computing, discussing their computational costs for initialization and read-out, as well as the resulting implications for their usage, via minimal examples. The considerations in previous literature on the computational resources required for amplitude encoding with respect to multiple qubits are extended in the presented quantification by explicitly deducing the circuit depth that results from the decomposed initialization procedure of V. V. Shende et al. [1, 2], and by deriving an upper bound on the number of executions of a quantum algorithm necessary to extract the encoded values with a specified accuracy. For these two results, an empirical verification is given using IBM's quantum computing simulation framework $\textit{Qiskit}$ [3]. In the study of the number of runs required to achieve a desired accuracy, however, it is found that the derived upper bound, scaling as $\tilde{n}^2 \ln(\tilde{n})$ with the number of encoded values $\tilde{n}$, is too conservative for precise estimations. A corresponding study of the required runs for the reference distribution of equal probabilities over all basis states is therefore carried out, which suggests $\tilde{n} \ln(\tilde{n})$ as an empirical scaling law. With a view toward CFD applications, the insights from this work lead to a new encoding approach, proposed specifically for a quantum algorithm for the lattice Boltzmann method.
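The read-out overhead can be made concrete with a small classical simulation: measuring an amplitude-encoded register samples each index with probability equal to the squared amplitude, so the encoded values must be re-estimated from finite-shot frequencies. This sketch (illustrative assumptions, not the paper's procedure) shows how the estimation error shrinks with the number of runs for the uniform reference distribution the paper studies.

```python
import numpy as np

def amplitude_readout_error(values, shots, rng):
    """Simulate read-out of amplitude-encoded values: computational-basis
    measurement returns index i with probability |a_i|^2, so the magnitudes
    |a_i| are re-estimated from sampled frequencies (signs/phases are lost)."""
    amps = values / np.linalg.norm(values)
    counts = rng.multinomial(shots, amps ** 2)
    est = np.sqrt(counts / shots)
    return np.max(np.abs(est - np.abs(amps)))

rng = np.random.default_rng(1)
values = np.ones(64)                 # uniform reference distribution
for shots in (100, 10_000):
    print(shots, amplitude_readout_error(values, shots, rng))
```

More encoded values spread the same shot budget over more basis states, which is the intuition behind run counts growing with the number of encoded values in both the conservative bound and the empirical scaling law.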
Quantum state determinability from local marginals is universally robust
This paper proves that quantum states which can be uniquely determined from their local measurements remain determinable even when those measurements contain experimental errors, with the error propagation bounded by power laws. The authors provide a classification system for multiparticle quantum states based on their robustness to measurement imperfections and demonstrate practical applications including entanglement witnesses.
Key Contributions
- Proof that quantum state determinability from local marginals is robust to experimental errors with power-law bounded error propagation
- Classification of multipartite quantum states by robustness exponents and semidefinite programming certification method
- Complete robustness analysis for stabilizer states and Dicke states, plus scalable entanglement witness construction
View Full Abstract
A fundamental problem in quantum physics is to establish whether a multiparticle quantum state can be uniquely determined from its local marginals. In theory, this problem has been addressed in the exact case where the marginals are perfectly known. In practice, however, experiments only have access to finite statistics and therefore can only determine the marginals of a quantum state up to an error. In this Letter, we prove that unique determinability universally survives such local imperfections: specifically, for every uniquely determined state, we show that deviations of local marginals propagate to global states strictly bounded by a power law with exponent $\alpha \in (0,1]$. This result induces a classification of multipartite quantum states by their power-law exponents, with linear scaling $\alpha = 1$ as the most favorable regime. We derive a necessary and sufficient criterion for linear robustness and translate it into an executable semidefinite-programming certification. Applying our theory, we prove that stabilizer states are inherently square-root robust and provide a complete robustness classification for the Dicke family. Finally, we exploit these results to construct a scalable two-local genuine multipartite entanglement witness, demonstrating the viability of this framework for broad practical applications.
Qurator: Scheduling Hybrid Quantum-Classical Workflows Across Heterogeneous Cloud Providers
This paper presents Qurator, a scheduling system that optimizes the execution of quantum-classical hybrid workflows across multiple quantum cloud providers by jointly minimizing queue waiting times and maximizing circuit fidelity. The system handles quantum-specific constraints like entanglement dependencies and uses a unified scoring method to compare different quantum hardware platforms.
Key Contributions
- Architecture-agnostic quantum-classical task scheduler that jointly optimizes queue time and circuit fidelity across heterogeneous cloud providers
- Unified logarithmic success score that reconciles incompatible calibration data from multiple quantum hardware providers into canonical performance metrics
View Full Abstract
As quantum computing moves from isolated experiments toward integration with large-scale workflows, the integration of quantum devices into HPC systems has gained much interest. Quantum cloud providers expose shared devices through first-come first-serve queues where a circuit that executes in 3 seconds can spend minutes to an entire day waiting. Minimizing this overhead while maintaining execution fidelity is the central challenge of quantum cloud scheduling, and existing approaches treat the two as separate concerns. We present Qurator, an architecture-agnostic quantum-classical task scheduler that jointly optimizes queue time and circuit fidelity across heterogeneous providers. Qurator models hybrid workloads as dynamic DAGs with explicit quantum semantics, including entanglement dependencies, synchronization barriers, no-cloning constraints, and circuit cutting and merging decisions, all of which render classical scheduling techniques ineffective. Fidelity is estimated through a unified logarithmic success score that reconciles incompatible calibration data from IBM, IonQ, IQM, Rigetti, AQT, and QuEra into a canonical set of gate error, readout fidelity, and decoherence terms. We evaluate Qurator on a simulator driven by four months of real queue data using circuits from the Munich Quantum Toolkit benchmark suite. Across load conditions from 5 to 35,000 quantum tasks, Qurator stays within 1% of the highest-fidelity baseline at low load while achieving 30-75% queue time reduction at high load, at a fidelity cost bounded by a user-specified target.
Mass generation in graphs
This paper develops a theoretical framework where graph structures can generate massive particle-like excitations using a mechanism inspired by the Higgs mechanism from quantum field theory. The authors show how the connectivity patterns in graphs can create emergent massive particles that localize differently based on their mass properties.
Key Contributions
- Development of a Higgs-like mechanism for mass generation in discrete graph structures
- Demonstration that massive excitations localize differently based on graph density and vertex degree properties
View Full Abstract
We demonstrate a mechanism for the production of massive excitations in graphs. We treat the number of neighbors at each vertex of the graph (its degree) as a scalar field. We then introduce a mechanism, inspired by the Higgs mechanism in quantum field theory (QFT), that couples the degree field to a vector-like field introduced via the graph edges and represented mathematically by the incidence matrices of the graph. The coupling between the two fields produces a massless ground state and massive excitations, separated by a mass gap. The excitations can be treated as emergent massive particles propagating inside the graph. We study how the size of the graph and its density, represented by the ratio of edges to vertices, affect the mass gap and the localization properties of the massive excitations. We show that the most massive excitations, corresponding to the heaviest emergent particles, localize on regions of the graph with high density, consisting of vertices of large degree. The least massive excitations, corresponding to the lightest emergent particles, localize on a few vertices of smaller degree. Excitations with intermediate masses are less localized, spreading over more vertices instead. Our study shows that the emergence of matter-like structures with various mass properties is possible in discrete physical models, relying only on a few fundamental properties such as the connectivity of the model.
Non-Markovian exceptional points in waveguide quantum electrodynamics
This paper studies how atoms coupled to waveguides at multiple points exhibit exceptional points (EPs) - special conditions where the system's dynamics transition from exponential decay to oscillatory behavior. The research focuses on non-Markovian dynamics where photons can be reabsorbed, leading to complex interference effects in quantum electrodynamics.
Key Contributions
- Demonstration of exceptional points in non-Markovian waveguide quantum electrodynamics systems
- Analysis of giant atoms with multiple coupling points and their spontaneous emission dynamics
- Identification of waveguide-QED platforms as experimentally accessible systems for studying non-Markovian exceptional point physics
View Full Abstract
Spontaneous emission of a quantum emitter, such as an excited atom, is a fundamental process in quantum electrodynamics (QED), typically associated with exponential decay to the ground state accompanied by irreversible photon emission. This simple Markovian picture, however, is profoundly modified in the presence of time-delayed feedback, structured continua, or cooperative emission, as occurs when an emitter radiates in front of a mirror, when several emitters radiate collectively, or in the case of a giant atom. In such regimes, strong non-Markovian dynamics arise from photon reabsorption and interference effects, leading to pronounced deviations from exponential decay. Here we demonstrate the emergence of exceptional points (EPs) in these highly non-Markovian waveguide-QED environments, i.e., non-Markovian EPs. These EPs appear directly in the relaxation dynamics as sharp transitions to oscillatory behavior, manifested by the appearance of real zeros in the excited-state amplitude. We analyze in detail the spontaneous emission of giant atoms with two or more coupling points, highlighting the mechanisms leading to non-Markovian EPs, and show that similar phenomena arise in other waveguide-QED settings, such as the collective spontaneous emission of spatially separated point-like emitters. Our results reveal waveguide-QED systems as experimentally accessible platforms for realizing and exploring non-Markovian EP physics.
Another Triumph of Locality: Colliding Histories Skew Handshakes
This paper argues that Bell's theorem, commonly interpreted as disproving local reality in quantum mechanics, can actually be explained through strictly local mechanisms when viewed through the Heisenberg picture of unitary quantum mechanics. The author challenges conventional interpretations by proposing that quantum mechanics provides the fundamental explanation for classical physics, rather than the reverse.
Key Contributions
- Proposes a local explanation for Bell's theorem using the Heisenberg picture
- Argues for quantum-first rather than classical-first interpretation of physical reality
- Challenges the widespread interpretation that Bell's theorem eliminates local hidden variable theories
View Full Abstract
From gravity to electromagnetism, apparent action at a distance has always been resolved by deeper, local explanations. Yet today, Bell's theorem is widely interpreted as the death knell for local reality. In this chapter, I present the theorem in accessible terms, examine the three main strategies that attempt to preserve hidden variables, and argue that they share a common defect: the attempt to explain the quantum from the classical rather than the other way around. In unitary quantum mechanics, classicality itself is given a quantum account, and, when the Bell scenario is formulated in the Heisenberg picture, a strictly local explanation emerges. This chapter serves as a non-technical front-end to 'Explaining Bell Locally' (Proc. R. Soc. A).
Connection between the contextuality breaking and incompatibility breaking qubit channels
This paper investigates the relationship between quantum contextuality and measurement incompatibility by studying how different quantum channels affect these nonclassical properties. The authors use Bell inequalities to establish connections between channels that break contextuality and those that break measurement incompatibility, finding that contextuality-breaking channels also break nonlocality but not vice versa.
Key Contributions
- Established connection between contextuality-breaking and incompatibility-breaking quantum channels using Elegant Bell inequality
- Showed that channels breaking EBI contextuality also break CHSH nonlocality but reverse does not hold
- Demonstrated that depolarizing channels breaking N-wise incompatibility can break certain forms of contextuality
View Full Abstract
Contextuality and measurement incompatibility are two fundamental aspects of nonclassicality, and their manifestations in observed quantum correlations are often deeply interconnected. Recently, measurement incompatibility has been studied in connection with nonlocality, particularly in terms of their robustness under various quantum channels. This line of investigation helps establish a connection between the channels that break nonlocality and those that break incompatibility. In this study, we focus on an asymmetric bipartite Bell scenario involving three and four inputs on Alice's and Bob's sides, respectively, with each input having dichotomous outcomes. Under the assumption of locality, the observed statistics in this asymmetric scenario obey the Elegant Bell inequality (EBI). Here, we use a different version of the EBI that relies on the assumption of preparation noncontextuality. Taking the violation of this noncontextual version of the EBI as a witness of preparation contextuality, we establish a connection between the channels that break contextuality and the channels that break triple-wise measurement incompatibility. Our results suggest that any channel which breaks EBI contextuality will also break Clauser-Horne-Shimony-Holt (CHSH) nonlocality; however, the reverse does not hold. We also show that a depolarising channel that breaks N-wise incompatibility can also break a certain form of contextuality, witnessed by a generalised inequality involving N measurements on one wing of a bipartite Bell scenario.
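The effect of a noise channel on a Bell violation can be illustrated numerically. The sketch below uses the standard CHSH setup (not the paper's EBI scenario) and a depolarizing channel with visibility p; the violation disappears once p drops below 1/√2.

```python
import numpy as np

# Pauli matrices for the optimal CHSH settings on a Bell state
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def chsh_value(rho):
    """CHSH value S = <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1> with the
    measurement settings that are optimal for the |Phi+> Bell state."""
    A0, A1 = Z, X
    B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)
    E = lambda A, B: np.real(np.trace(rho @ np.kron(A, B)))
    return E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1)

# |Phi+> mixed with white noise (isotropic depolarizing, visibility p)
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
bell = np.outer(phi, phi)
for p in (1.0, 0.8, 0.7):
    rho = p * bell + (1 - p) * np.eye(4) / 4
    print(p, chsh_value(rho))   # violation (S > 2) survives only for p > 1/sqrt(2)
```

The pure Bell state reaches the Tsirelson bound 2√2, and the noisy values scale linearly with p; the analogous question for EBI contextuality versus CHSH nonlocality is exactly the comparison the paper makes channel by channel.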
Cloning Encrypted Quantum States in Arbitrary Dimensions
This paper extends a recently discovered protocol for cloning encrypted quantum bits (qubits) to higher-dimensional quantum systems called qudits. The authors develop new mathematical operators that work for these multi-level quantum systems and show that the computational overhead grows linearly with the system size.
Key Contributions
- Generalization of encrypted quantum state cloning from qubits to arbitrary-dimensional qudits
- Introduction of new unitary operators for encryption in multi-level quantum systems
- Demonstration that circuit overhead scales linearly with qudit dimension
View Full Abstract
Recently, Yamaguchi and Kempf [Phys. Rev. Lett. 136:010801, arXiv:2501.02757] proved that encrypted qubits can be cloned. In this work, we generalize the encrypted cloning protocol and prove that it also applies to higher-order quantum systems. Given that a straightforward generalization of the protocol using the exponential of the shift and phase operators fails to satisfy the unitary requirement for a quantum gate, we propose a different approach. We introduce a new operator to be used in the encryption process and show that it is unitary. We adapt the decryption operator from the reference paper to fit in the framework of multi-level quantum systems. We analyze the circuit implementation of the proposed operators and show that the overhead imposed by larger dimensions scales linearly with qudit dimension.
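The shift and phase (clock) operators mentioned in the abstract are standard generalized Pauli operators; a quick numerical check of their unitarity and commutation relation (an illustration of the building blocks, not the paper's new encryption operator):

```python
import numpy as np

def shift_and_clock(d):
    """Generalized Pauli operators for a d-level system (qudit):
    shift  X|j> = |j+1 mod d>,  clock  Z|j> = w^j |j>,  w = exp(2*pi*i/d).
    Each is unitary on its own; naive exponentials of their combinations need
    not be, which is the obstruction the paper's construction works around."""
    w = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)    # cyclic shift of basis states
    Z = np.diag(w ** np.arange(d))       # phase ("clock") operator
    return X, Z

d = 5
X, Z = shift_and_clock(d)
for U in (X, Z):
    assert np.allclose(U.conj().T @ U, np.eye(d))        # unitarity
assert np.allclose(Z @ X, np.exp(2j * np.pi / d) * X @ Z)  # Weyl relation ZX = wXZ
```

For d = 2 these reduce to the familiar Pauli X and Z, which is why the qubit protocol is the special case the paper generalizes from.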
Boltzmann-Loschmidt dispute reloaded quantum 150 years later
This paper investigates the famous 19th-century Boltzmann-Loschmidt dispute about time reversibility from a quantum perspective, showing that quantum systems with cold atoms in optical lattices can achieve near-perfect time reversal unlike classical chaotic systems. The work demonstrates both analytically and numerically that quantum chaos can be inverted with up to 100% efficiency, contrasting sharply with classical systems where tiny errors prevent time reversibility.
Key Contributions
- Demonstrates near-perfect quantum time reversal in cold atom systems with harmonic traps and pulsed optical lattices
- Provides both analytical and numerical evidence contrasting quantum vs classical time reversibility in chaotic systems
View Full Abstract
The Boltzmann-Loschmidt dispute of 1876 questioned whether an irreversible statistical description could arise from the time-reversible classical equations of motion of atoms. Here we show analytically and numerically that the quantum chaotic diffusion of cold atoms, or ions, in a harmonic trap and a pulsed optical lattice can be inverted back in time with up to 100% efficiency. This is in sharp contrast to classical evolution, where exponentially small errors break time reversibility. We argue that existing experimental capabilities allow the Boltzmann-Loschmidt dispute to be revisited from a quantum perspective.
Driving Quantum Heat Engines Beyond Classical Limits through Multilevel Coherence
This paper develops a theoretical framework for quantum heat engines that uses quantum coherence in multilevel atomic systems to control engine temperature beyond classical limits. The authors derive analytical expressions showing how coherence can tune engine efficiency and identify rubidium atoms as a promising experimental platform.
Key Contributions
- Unified analytical framework connecting ground-state and excited-state coherence effects in quantum heat engines
- Demonstration of enhanced temperature tunability through N-level quantum coherence enabling switching between heating, cooling, and cancellation regimes
View Full Abstract
Quantum coherence provides a controllable thermodynamic resource that can raise or lower the effective temperature of a cavity mode, enabling efficiency tuning in quantum heat engines. Here, we derive analytic expressions for the effective engine temperature, demonstrating the enhanced temperature tunability achievable via $N$-level ground-state coherence. We further unify ground- and excited-state coherence within a single analytic framework, revealing their interplay as a mechanism for thermodynamic control. Such quantum resources serve as tunable parameters that enable switching between heating, cooling, and cancellation regimes, driving the effective temperature from near-zero to divergence. Ultimately, our framework connects and generalizes previous models of quantum heat engines, and we identify rubidium atoms as a promising candidate for experimentally realizing these coherence-assisted effects.
Modeling the non-Markovian Brownian motion of an optomechanical resonator
This paper develops a theoretical model for non-Markovian (memory-containing) effects in optomechanical resonators, where light and mechanical motion interact. The authors create a mathematically consistent way to describe how these systems behave when they have strong memory effects from their environment, and show how optical measurements can probe these effects.
Key Contributions
- Development of a globally-admissible phenomenological spectral density that avoids mathematical divergences while capturing observed non-Ohmic behavior
- Framework for reconstructing mechanical susceptibility through optical readout and homodyne detection, enabling complete characterization of dissipative and dispersive bath contributions
View Full Abstract
We propose a globally-admissible phenomenological spectral density of the bath for the non-Markovian Brownian motion of an optomechanical resonator, motivated by the near-resonance experimental observation of a non-Ohmic spectrum in [Nat. Commun. 6, 7606 (2015)]. To avoid divergences arising from a naive global extrapolation, we construct this phenomenological bath spectral density that reproduces the observed local-power-law behavior near the mechanical resonance while remaining well defined globally, ensuring the finiteness of the bath-induced renormalizations and quadrature fluctuations of the resonator. The corresponding model of the structured environment produces a nonlocal mechanical susceptibility whose analytic pole structure encodes the observed linewidth. The resulting dissipation kernel exhibits a power-law-modulated exponential decay with transient negativity, signaling strong memory effects. In the weak-coupling regime, the optical readout based on homodyne detection enables near-resonance spectroscopy and, with a calibrated drive on the resonator, permits, in principle, the reconstruction of the full mechanical susceptibility, thereby providing access to both the dissipative and dispersive bath contributions. Our results provide a consistent route from locally-inferred spectral properties to globally-admissible open-system descriptions and establish a framework for probing structured environments in cavity optomechanics.
Toward Quantum Simulation of SU(2) Gauge Theory using Non-Compact Variables
This paper develops improved methods for simulating SU(2) gauge theories on quantum computers using non-compact variables, reducing the number of qubits needed and circuit depth required. The researchers present new simplified Hamiltonians and encoding techniques that make quantum simulation of these fundamental physics theories more practical.
Key Contributions
- Development of two new simplified Hamiltonians for SU(2) gauge theory simulation
- New encoding method that reduces qubit requirements for SU(2) theory
- Reduction in scalar mass requirements to reach Kogut-Susskind limit through additional Hamiltonian term
View Full Abstract
Simulating lattice gauge theories on quantum computers presents unique challenges that drive the development of novel theoretical frameworks. The orbifold lattice approach offers a scalable method for simulating SU($N$) gauge theories in arbitrary dimensions. In this work, we present three improvements: (i) two new simplified Hamiltonians, (ii) an encoding of the SU(2) theory with a smaller number of qubits, and (iii) a reduction in the requirement for large scalar masses to reach the Kogut-Susskind limit, achieved via the inclusion of an additional term in the Hamiltonian. These advancements significantly reduce circuit depth and qubit requirements for quantum simulations. We benchmarked these improvements using Monte Carlo simulations of SU(2) in (2+1) dimensions. Preliminary results demonstrate the effectiveness of these developments and further validate the use of noncompact variables as a promising framework for scalable quantum simulations of gauge theories.
Hybrid Fourier Neural Operator for Surrogate Modeling of Laser Processing with a Quantum-Circuit Mixer
This paper develops HQ-LP-FNO, a hybrid quantum-classical neural network that uses variational quantum circuits to improve surrogate modeling of laser processing simulations. The quantum components replace some dense spectral mixing operations in Fourier Neural Operators, reducing parameters by 15.6% while improving accuracy in modeling complex physics like heat transfer and phase changes.
Key Contributions
- Introduction of hybrid quantum-classical Fourier Neural Operator with VQC-based spectral mixing
- Demonstration of parameter reduction and accuracy improvement in 3D multiphysics surrogate modeling
- Establishment of controlled evaluation protocol for hybrid quantum machine learning applications
View Full Abstract
Data-driven surrogates can replace expensive multiphysics solvers for parametric PDEs, yet building compact, accurate neural operators for three-dimensional problems remains challenging: in Fourier Neural Operators, dense mode-wise spectral channel mixing scales linearly with the number of retained Fourier modes, inflating parameter counts and limiting real-time deployability. We introduce HQ-LP-FNO, a hybrid quantum-classical FNO that replaces a configurable fraction of these dense spectral blocks with a compact, mode-shared variational quantum circuit mixer whose parameter count is independent of the Fourier mode budget. A parameter-matched classical bottleneck control is co-designed to provide a rigorous evaluation framework. Evaluated on three-dimensional surrogate modeling of high-energy laser processing, coupling heat transfer, melt-pool convection, free-surface deformation, and phase change, HQ-LP-FNO reduces trainable parameters by 15.6% relative to a classical baseline while lowering phase-fraction mean absolute error by 26% and relative temperature MAE from 2.89% to 2.56%. A sweep over the quantum-channel budget reveals that a moderate VQC allocation yields the best temperature metrics across all tested configurations, including the fully classical baseline, pointing toward an optimal classical-quantum partitioning. The ablation confirms that mode-shared mixing, naturally implemented by the VQC through its compact circuit structure, is the dominant contributor to these improvements. A noisy-simulator study under backend-calibrated noise from ibm-torino confirms numerical stability of the quantum mixer across the tested shot range. These results demonstrate that VQC-based parameter-efficient spectral mixing can improve neural operator surrogates for complex multiphysics problems and establish a controlled evaluation protocol for hybrid quantum operator learning in practice.
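The scaling argument behind the mode-shared mixer can be illustrated with simple parameter counting. The shapes and ansatz below are illustrative assumptions, not the paper's exact architecture:

```python
def dense_spectral_params(n_modes, channels):
    """Classical FNO spectral mixing: one dense complex channel-mixing matrix
    per retained Fourier mode, so the count grows linearly with the mode
    budget (factor 2 for real and imaginary parts)."""
    return n_modes * channels * channels * 2

def shared_mixer_params(n_layers, n_qubits):
    """Mode-shared variational-circuit mixer: the parameter count is fixed by
    the circuit ansatz alone, independent of how many modes it is applied to.
    (Illustrative ansatz: 3 rotation angles per qubit per layer.)"""
    return n_layers * n_qubits * 3

for modes in (8, 64, 512):
    print(modes, dense_spectral_params(modes, 32), shared_mixer_params(4, 8))
```

The dense count grows from thousands to hundreds of thousands of parameters as the mode budget increases, while the shared mixer stays constant, which is the structural source of the parameter reduction the abstract reports.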
Coexistence of CHSH Nonlocality and KCBS Contextuality in a Single Quantum State
This paper studies how two fundamental quantum phenomena - contextuality and nonlocality - can coexist in a single quantum state made of an entangled qubit-qutrit system. The researchers derive mathematical expressions showing these phenomena are controlled by different physical parameters and identify narrow parameter regimes where both can exist simultaneously.
Key Contributions
- Analytical closed-form expressions for CHSH and KCBS inequalities in hybrid qubit-qutrit systems
- Identification of distinct physical resources governing contextuality versus nonlocality - population parameter p2 for contextuality and coherence parameters for nonlocality
View Full Abstract
Contextuality and nonlocality are distinct manifestations at the foundation of quantum mechanics, yet their coexistence within a single quantum state remains subtle. In a hybrid CHSH-KCBS scenario involving the entanglement of a qubit and a qutrit, the qutrit supports the KCBS contextuality test, and the CHSH nonlocality arises from correlations between the qubit and qutrit. Here, we derive the analytical closed-form expressions for both inequalities and also simulate this physics on a quantum circuit. We show that contextuality is governed solely by a population parameter $p_2$, associated with the occupation of the qutrit subsystem in the $|2\rangle$ level, which plays a distinguished role in the KCBS structure. In contrast, nonlocality depends irreducibly on coherence, involving both amplitudes and phases encoded in parameters $(X_i, Y_i)$. This separation of physical resources reveals parameter regimes that optimize KCBS violation while suppressing CHSH violation, and vice versa. As a result, the optimal regions do not overlap, and coexistence is restricted to a narrow intermediate regime in parameter space.
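As background for the KCBS side, the standard single-qutrit pentagram construction is easy to check numerically. A minimal pure-Python sketch (this is the textbook KCBS setting, not this paper's hybrid qubit-qutrit parametrization):

```python
import math

def kcbs_vectors():
    # Pentagram construction: five unit vectors, adjacent pairs orthogonal.
    c2 = math.cos(math.pi / 5) / (1 + math.cos(math.pi / 5))  # cos^2(theta)
    ct, st = math.sqrt(c2), math.sqrt(1 - c2)
    return [(st * math.cos(4 * math.pi * k / 5),
             st * math.sin(4 * math.pi * k / 5),
             ct) for k in range(5)]

psi = (0.0, 0.0, 1.0)  # state along the pentagram symmetry axis
vs = kcbs_vectors()
# KCBS value: sum over the five projector expectations |<psi|v_k>|^2.
kcbs_value = sum(sum(p * v for p, v in zip(psi, vk)) ** 2 for vk in vs)
# Noncontextual bound is 2; the quantum maximum is sqrt(5) ~ 2.236.
```

The value √5 > 2 is what a KCBS violation means; the paper's analysis tracks how the population parameter p₂ tunes this violation in the hybrid state.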
Quadrature-Symmetric PulsePol for Robust Quantum Control Beyond the Ideal Pulse Approximation
This paper improves a quantum control technique called PulsePol that transfers polarization between electron and nuclear spins in nitrogen-vacancy centers. The researchers identified why the original method fails with realistic (non-ideal) microwave pulses and developed Q-PulsePol, a modified version that restores proper symmetry and works reliably under practical conditions.
Key Contributions
- Identified symmetry-breaking mechanism causing PulsePol degradation under finite-pulse conditions using bimodal Floquet theory
- Developed Q-PulsePol with phase adjustments to restore quadrature symmetry and improve robustness to realistic pulse constraints
View Full Abstract
PulsePol is an elegantly designed pulse-sequence-based quantum control scheme that enables polarization transfer between electron and nuclear spins, for example, in nitrogen-vacancy (NV) centers. However, previous analyses of PulsePol assumed very strong, near-ideal, instantaneous microwave pulses, which is rarely achievable at higher magnetic fields. We revisit the PulsePol scheme under finite-pulse constraints and show that its performance significantly degrades due to finite-pulse effects. Using bimodal Floquet theory, we identify the symmetry-breaking mechanism responsible for this deterioration in fidelity. By phase adjustment, we reestablish the proper symmetry of the interaction-frame spin Hamiltonian, leading to a sequence called Q-PulsePol, where "Q" reflects the restored quadrature symmetry. Our results demonstrate robustness to finite-pulse effects and improved polarization transfer efficiency, establishing Q-PulsePol as a practical and reliable scheme for bulk hyperpolarization of nuclear spins in solids using a single-mode (zero-quantum or double-quantum) transfer. This work bridges idealized quantum control with realistic pulse engineering, establishing design rules for spin-based quantum control protocols.
A Quantum Search Approach to Magic Square Constraint Problems with Classical Benchmarking
This paper applies Grover's quantum search algorithm to solve magic square constraint satisfaction problems, using classical preprocessing to generate candidate domains and quantum search to find valid solutions. The authors implement and benchmark their quantum approach against classical methods, demonstrating the theoretical quadratic speedup on small grid instances using Qiskit simulations.
Key Contributions
- Novel application of Grover's algorithm to constraint satisfaction problems through magic square generation
- Implementation of quantum oracle design with multi-register modular arithmetic circuits for combinatorial optimization
View Full Abstract
This paper presents a quantum search approach to combinatorial constraint satisfaction problems, demonstrated through the generation of magic squares. We reformulate magic square construction as a quantum search problem in which a reversible, constraint-sensitive oracle marks valid configurations for amplitude amplification via Grover's algorithm. Classical pre-processing using the Siamese construction and partial constraint checks generates a compact candidate domain before quantum encoding. Rather than integrating classical and quantum solvers in an iterative loop, this work uses the classical component for structured initialisation and the quantum component for search, and benchmarks the quantum approach against classical brute-force enumeration and backtracking. Our Qiskit implementation demonstrates the design of multi-register modular arithmetic circuits, oracle logic, and diffusion operators. Experiments are conducted on small grid instances, as larger grids are intractable on classical statevector simulators due to exponential memory growth. The results validate the correctness of the proposed quantum search pipeline and confirm the theoretical quadratic query advantage over classical search.
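For a sense of the numbers, the oracle predicate and the Grover iteration count can be reproduced classically in a few lines. A pure-Python sketch for the 3x3 case (classical enumeration stands in for the quantum search; the paper's Siamese-construction preprocessing and Qiskit circuits are not reproduced here):

```python
import math
from itertools import permutations

def is_magic(sq):
    # Oracle predicate: sq is a flat row-major 3x3 grid of the digits 1..9.
    s = sum(sq[0:3])
    rows = all(sum(sq[3 * i:3 * i + 3]) == s for i in range(3))
    cols = all(sq[c] + sq[c + 3] + sq[c + 6] == s for c in range(3))
    diag = sq[0] + sq[4] + sq[8] == s and sq[2] + sq[4] + sq[6] == s
    return rows and cols and diag

domain = permutations(range(1, 10))                 # N = 9! = 362880 candidates
solutions = [sq for sq in domain if is_magic(sq)]   # the 8 Lo Shu variants
N, M = math.factorial(9), len(solutions)
# Grover needs ~(pi/4)*sqrt(N/M) oracle queries vs ~N/M expected classical checks.
grover_iters = math.floor((math.pi / 4) * math.sqrt(N / M))
```

This is the quadratic query advantage the paper benchmarks: roughly 167 Grover iterations against an expected ~45,360 classical oracle calls, before preprocessing shrinks the domain.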
Canonical Uncertainty Relations for Madelung Variables in Curved Spacetime
This paper develops uncertainty relations for quantum field variables (density and phase) in curved spacetime using the Madelung representation. The work shows how gravitational fields affect quantum fluctuations and provides theoretical constraints for dark matter models and quantum gravity theories.
Key Contributions
- Derivation of uncertainty relations for Madelung variables in curved spacetime
- Demonstration of how gravitational fields modulate quantum fluctuations
- First-principles constraints for scalar field dark matter and quantum gravity models
View Full Abstract
We establish fundamental uncertainty relations for the hydrodynamic variables arising from the Madelung representation of quantum fields in curved spacetime. Through canonical quantization of the density $n$ and phase $θ$ variables and their conjugate momenta, we derive exact uncertainty principles that depend on spacetime geometry through the lapse function $N$ and spatial metric $γ_{ij}$. These relations reveal how gravitational fields modulate quantum fluctuations and provide first-principles constraints for scalar field dark matter models and stochastic quantum gravity.
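For orientation, the flat-space starting point follows from density and phase being canonically conjugate in the Madelung representation. The schematic form below is standard background (ħ = 1, and smeared operators are needed to make both sides finite); the curved-spacetime factors of the lapse $N$ and metric $γ_{ij}$ are derived in the paper and not reproduced here:

```latex
% Madelung variables: \psi = \sqrt{n}\,e^{i\theta}, with (n,\theta) a conjugate pair
[\hat{n}(\mathbf{x}),\,\hat{\theta}(\mathbf{y})] = i\,\delta^{(3)}(\mathbf{x}-\mathbf{y})
\;\;\Longrightarrow\;\;
\Delta n\,\Delta\theta \,\ge\, \tfrac{1}{2}\,\big|\langle[\hat{n},\hat{\theta}]\rangle\big|
```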
QCommute: a tool for symbolic computation of nested commutators in quantum many-body spin-1/2 systems
This paper presents QCommute, a C++ software tool that symbolically computes nested commutators between Hamiltonians and local observables in quantum many-body spin-1/2 systems. The tool works directly in the thermodynamic limit with symbolic parameters, enabling investigation of quantum dynamics in strongly correlated systems that cannot be studied with perturbative methods.
Key Contributions
- Development of QCommute software for symbolic computation of nested commutators in quantum many-body systems
- Algebraic computation directly in thermodynamic limit with symbolic Hamiltonian parameters
- Parallelized implementation enabling investigation of strongly correlated quantum dynamics beyond perturbative regimes
View Full Abstract
We present QCommute, a software tool implemented in C++ for symbolic computation of nested commutators between a Hamiltonian and local observables in quantum many-body spin-1/2 systems on one-, two-, and three-dimensional hypercubic lattices. The computation is performed algebraically directly in the thermodynamic limit, and the Hamiltonian parameters are kept symbolic. Importantly, the entire parameter space is thus covered in a single run. The implementation supports extensive parallelization to achieve high computational performance. QCommute enables the investigation of quantum dynamics in strongly correlated regimes that are inaccessible to perturbative approaches, either through direct Taylor expansion in time or via advanced techniques such as the recursion method.
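The algebra a tool like this automates reduces, for spin-1/2, to a per-site Pauli multiplication table. A minimal numeric Python sketch of one commutator step (QCommute itself is C++ and keeps Hamiltonian coefficients symbolic; none of its API is reproduced here):

```python
# Single-qubit Pauli products as (phase, result) pairs.
MUL = {
    ('I', 'I'): (1, 'I'), ('I', 'X'): (1, 'X'), ('I', 'Y'): (1, 'Y'), ('I', 'Z'): (1, 'Z'),
    ('X', 'I'): (1, 'X'), ('X', 'X'): (1, 'I'), ('X', 'Y'): (1j, 'Z'), ('X', 'Z'): (-1j, 'Y'),
    ('Y', 'I'): (1, 'Y'), ('Y', 'X'): (-1j, 'Z'), ('Y', 'Y'): (1, 'I'), ('Y', 'Z'): (1j, 'X'),
    ('Z', 'I'): (1, 'Z'), ('Z', 'X'): (1j, 'Y'), ('Z', 'Y'): (-1j, 'X'), ('Z', 'Z'): (1, 'I'),
}

def mul(a, b):
    # Multiply two Pauli strings site by site, tracking the overall phase.
    phase, out = 1, []
    for pa, pb in zip(a, b):
        ph, p = MUL[(pa, pb)]
        phase *= ph
        out.append(p)
    return phase, ''.join(out)

def commutator(a, b):
    # [A, B] = AB - BA; two Pauli strings either commute or anticommute,
    # so the result is a single string with coefficient (ph_ab - ph_ba).
    ph_ab, ab = mul(a, b)
    ph_ba, _ = mul(b, a)
    if ph_ab == ph_ba:
        return 0, None
    return ph_ab - ph_ba, ab
```

Nested commutators such as [H, [H, O]] just iterate this step over the Hamiltonian's terms, accumulating coefficients; keeping those coefficients symbolic rather than numeric is the part QCommute adds.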
What quantum computer to buy?
This paper provides a practical framework for institutions deciding how to procure quantum computing capabilities, comparing different quantum platforms and access models rather than focusing on specific hardware choices. It recommends starting with minimal capability that builds expertise while preserving strategic flexibility.
Key Contributions
- Develops a five-layer quantum capability procurement framework
- Compares commercial quantum platforms through institutional fit and access models
- Provides guidance on quantum computer acquisition strategy for organizations
View Full Abstract
The phrase "buy a quantum computer" hides several different procurement problems. An institution may be seeking cloud access for teaching, reserved capacity for research, a local instrument for hardware training, an optimization appliance, or a strategic installation that reshapes facilities, staffing, and budgets. Because these choices differ in purpose, operating burden, and useful lifetime, the decision should be framed as acquisition of quantum capability rather than selection of a presumed hardware winner. This manuscript develops a practical procurement framework that distinguishes five capability layers, separates peer-reviewed results from commercial offerings, pricing anchors, and public roadmaps, and compares the main commercial platform families (superconducting circuits, trapped ions, neutral atoms, quantum annealing, and photonics) through the lens of institutional fit, access model, and refresh pressure. The main conclusion is that most institutions should begin with the smallest layer of capability that produces repeatable near-term value, builds internal expertise, and preserves strategic flexibility. Large on-premises systems are justified only when mission requirements, site readiness, staffing, governance, and upgrade paths are already clear.
Physical currents for stochastic Einstein-Podolsky-Rosen quantum trajectories
This paper investigates quantum measurement theory by simulating Einstein-Podolsky-Rosen correlations using stochastic Schrödinger equations, finding that Stratonovich noise better matches experimental results than Ito noise. The work proposes a modern version of Schrödinger's thought experiment for simultaneous position and momentum measurements.
Key Contributions
- Determined correct stochastic formulation for homodyne current measurements in broad-band limit
- Proposed modern implementation of Schrödinger's gedanken experiment for simultaneous position-momentum measurement
View Full Abstract
Theories of the measured homodyne current generated by a stochastic Schrödinger equation (SSE) can be tested in a simulation of the Einstein-Podolsky-Rosen (EPR) correlations for a two-mode squeezed state. We carry out such a simulation, and determine the correct stochastic term for the measured current in the broad-band limit. Stratonovich rather than Ito stochastic noise agrees with experiment. We show that this is relevant to measurement noise and errors in quantum technologies. By analyzing the SSE trajectories as measurement settings are changed, we propose a modern version of Schrödinger's gedanken experiment, where one measures position and momenta simultaneously, "one by direct, the other by indirect measurement".
Interaction-free measurement of multiple objects using a universal integrated photonic processor
This paper demonstrates interaction-free measurement (IFM), which can detect absorbing objects without the probe photon being absorbed, extended here to the detection of up to 5 objects with a single photon on a cloud-based photonic processor. The work scales the original single-object IFM technique up to sequential detection of multiple objects.
Key Contributions
- Experimental demonstration of sequential IFM for up to 5 objects using a single photon probe
- Implementation on a cloud-based integrated photonic processor with error mitigation
- Scaling of interaction-free measurement beyond single-object detection
View Full Abstract
The phenomenon of interaction-free measurement (IFM) enables the probabilistic detection of an absorbing object with reduced photon absorption. We report the experimental implementation of a simultaneous IFM of multiple objects using a single quantum probe on Quandela's cloud-based Ascella photonic processor. We demonstrate sequential IFM of up to 5 objects using a single photon, significantly extending the original IFM scheme for a single object. The experimental error-mitigated results confirm the theoretical predictions for this sequential IFM setup, and demonstrate a practical approach to scaling IFM to more complex quantum interrogation tasks.
Unsharp Measurement with Adaptive Gaussian POVMs for Quantum-Inspired Image Processing
This paper develops a quantum measurement-inspired framework for processing grayscale images by embedding pixel intensities in a Hilbert space and using adaptive Gaussian-based measurement operators. The method allows continuous control between smooth and sharp image transformations through adjustable parameters.
Key Contributions
- Novel quantum measurement-based framework for image processing using adaptive POVMs
- Introduction of nonlinear sharpening parameter for controlling measurement localization and smoothing trade-offs
View Full Abstract
We propose a quantum measurement-based framework for probabilistic transformation of grayscale images using adaptive positive operator-valued measures (POVMs). In contrast to existing approaches that are largely centered around segmentation or thresholding, the transformation is formulated here as a measurement-induced process acting directly on pixel intensities. The intensity values are embedded in a finite-dimensional Hilbert space, which allows the construction of data-adaptive measurement operators derived from Gaussian models of the image histogram. These operators naturally define an unsharp measurement of the intensity observable, with the reconstructed image obtained through expectation values of the measurement outcomes. To control the degree of measurement localization, we introduce a nonlinear sharpening transformation with a sharpening parameter, $γ$, that induces a continuous transition from unsharp measurements to projective measurements. This transition reflects an inherent trade-off between probabilistic smoothing and localization of intensity structures. In addition to the nonlinear sharpening parameter, we introduce another parameter $k$ (the number of Gaussian centers) which controls the resolution of the image during the transformation. Experimental results on standard benchmark images show that the proposed method gives effective data-adaptive transformations while preserving structural information.
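The measurement model can be sketched concretely. A minimal pure-Python version of a Gaussian POVM with a sharpening exponent (the variable names and the two-center histogram are illustrative choices, not the paper's implementation):

```python
import math

def gaussian_povm(levels, centers, sigma, gamma=1.0):
    # Gaussian response of each center to each intensity, sharpened by gamma.
    raw = [[math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) ** gamma
            for c in centers] for x in levels]
    # Normalize per intensity level so the effects sum to one (POVM completeness).
    return [[r / sum(row) for r in row] for row in raw]

def transform(centers, povm_row):
    # Reconstructed intensity = expectation of the centers under outcome probabilities.
    return sum(p * c for p, c in zip(povm_row, centers))

levels = list(range(256))
centers = [64, 192]                                  # k = 2 Gaussian centers
povm = gaussian_povm(levels, centers, sigma=40.0)    # unsharp (gamma = 1)
smoothed = transform(centers, povm[100])             # pixel 100 blends both centers
```

As gamma grows, each effect concentrates on its nearest center, so the unsharp measurement approaches a projective one; that is the trade-off the paper's γ parameter controls.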
Quantum-inspired Ising machine using sparsified spin connectivity
This paper presents E-MVL, a quantum-inspired algorithm that mimics thermal spin dynamics to solve difficult optimization problems by strategically controlling which spins interact with each other. The algorithm outperforms traditional simulated annealing, solving problems with up to 1600 spins compared to 400 for the best baseline, and runs 6 times faster on FPGA hardware.
Key Contributions
- Development of E-MVL algorithm that uses sparsified spin connectivity to efficiently solve combinatorial optimization problems
- Demonstration of superior performance over simulated annealing with exact solutions up to 1600 spins
- FPGA implementation achieving 6-fold speed improvement over traditional methods
View Full Abstract
Combinatorial optimization problems become computationally intractable as these NP-hard problems scale. We previously proposed extraction-type majority voting logic (E-MVL), a quantum-inspired algorithm using digital logic circuits. E-MVL mimics the thermal spin dynamics of simulated annealing (SA) through controlled sparsification of spin interactions for efficient ground-state search. This study investigates the performance potential of E-MVL through systematic optimization and comprehensive benchmarking against SA. The target problem is the Sherrington-Kirkpatrick (SK) model with bimodal and Gaussian coupling distributions. Through equilibrium state analysis, we demonstrate that the sparsity control mechanism provides a consistent search of the solution space regardless of the problem's coupling distribution (bimodal, Gaussian) or size. E-MVL not only achieves the best performance among all tested algorithms, finding exact solutions for problems of up to 1600 spins where the best SA baseline is limited to 400 spins, but also provides insights that significantly improve SA's own temperature scheduling. These results establish E-MVL's dual contribution as both an efficient optimizer and a practical methodology for enhancing SA performance. Moreover, FPGA implementation achieved an approximately 6-fold faster solution speed than SA.
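The sparsification idea can be caricatured in a few lines: update each spin by a vote over a randomly thinned set of its couplings. A toy pure-Python sketch (the actual E-MVL update rule, scheduling, and FPGA pipeline are described in the paper; `keep_prob` here is a hypothetical stand-in for its sparsity control):

```python
import random

def sk_energy(spins, J):
    # Sherrington-Kirkpatrick energy: E = -sum_{i<j} J_ij * s_i * s_j
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def sparsified_vote_step(spins, J, keep_prob, rng):
    # Sequentially align each spin with the field from a random
    # subset of its couplings (majority-vote-style update).
    n = len(spins)
    for i in range(n):
        field = sum(J[i][j] * spins[j] for j in range(n)
                    if j != i and rng.random() < keep_prob)
        if field != 0:
            spins[i] = 1 if field > 0 else -1
    return spins

rng = random.Random(0)
n = 32
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = rng.choice([-1.0, 1.0])   # bimodal couplings
spins = [rng.choice([-1, 1]) for _ in range(n)]
for _ in range(50):
    spins = sparsified_vote_step(spins, J, keep_prob=0.5, rng=rng)
energy = sk_energy(spins, J)
```

With `keep_prob = 1.0` each sequential update is greedy and the energy is non-increasing; thinning the couplings injects the stochasticity that lets the search escape shallow minima.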
Breaking the Entanglement-Structure Trade-off: Many-Body Localization Protects Emergent Holographic Geometry in Random Tensor Networks
This paper investigates how many-body localization (MBL) can preserve holographic geometry in random tensor networks, preventing it from being destroyed by thermalization. The researchers demonstrate that disorder-induced MBL allows quantum entanglement structures to maintain their spatial organization indefinitely, breaking the typical trade-off between entanglement amount and geometric structure.
Key Contributions
- Discovery that many-body localization protects emergent holographic geometry from thermalization in random tensor networks
- Identification of optimal parameter regimes (disorder strength and anisotropy) for preserving entanglement geometry
- Demonstration that MBL breaks the entanglement-structure trade-off by preserving spatial entanglement patterns rather than total entanglement
View Full Abstract
We present a systematic numerical investigation of the "entanglement-geometry-gravity" chain in random tensor networks (RTN) established by the ER=EPR conjecture and Jacobson's thermodynamic derivation. First, we verify the kinematic foundation: the entanglement first law $δ\langle K\rangle=δS$ (slope=1.000), the encoding of geometry by mutual information (correlation=0.92), and the locality of holographic perturbations (3.3x). We also confirm that gravitational dynamics (JT gravity) does not emerge, identifying a sharp kinematics-dynamics boundary. Second, and more importantly, we discover that many-body localization (MBL) is the mechanism that protects emergent holographic geometry from thermalization. Replacing Haar-random evolution (geometry lifetime $t\sim6$) with an XXZ Hamiltonian plus on-site disorder, we observe a finite-size crossover at disorder strength $W_c\approx10-12$ above which mutual-information-lattice correlations persist indefinitely ($r>0.5$ for $t>50$). We map the full parameter space: the optimal regime is a near-Ising anisotropy $Δ\approx50$ with $W=30$ yielding $r=0.779\pm0.002$ (confirmed by a fine scan over $Δ\in[30,70]$); only holographic (RTN) initial states sustain geometry, while product, Néel, and Bell-pair states do not. MBL preserves the spatial structure of entanglement (adjacent/non-adjacent MI ratio ~2.6-4.2x vs. 1.0x in the thermal phase), rather than its total amount. A comparison with classical cellular automata reveals that MBL uniquely breaks the entanglement-structure trade-off imposed by quantum monogamy: classical systems achieve spatial structure only at the cost of negligible mutual information, while MBL sustains both.
Optimal, Qubit-Efficient Quantum Vehicle Routing via Colored-Permutations
This paper develops a new quantum algorithm encoding for the vehicle routing problem that uses colored-permutation matrices to represent routing decisions more efficiently. The approach reduces the number of qubits needed compared to previous methods by eliminating the need for explicit capacity tracking variables.
Key Contributions
- Novel colored-permutation encoding that reduces qubit requirements for vehicle routing problems
- Integration with Constraint-Enhanced QAOA framework for improved quantum optimization
- Demonstrated recovery of optimal solutions on standard benchmarks with qubit-efficient representation
View Full Abstract
We formulate a global-position colored-permutation encoding for the capacitated vehicle routing problem. Each of the $K$ vehicles selects a disjoint partial permutation, and the sum of these $K$ color layers forms a full $n\times n$ permutation matrix that assigns every customer to exactly one visit position. This representation uses $n^2K$ binary decision variables arranged as $K$ color layers over a common permutation structure, while vehicle capacities are enforced by weighted sums over the entries of each color class, requiring no explicit load register and hence no extra logical qubits beyond the routing variables. In contrast, many prior quantum encodings introduce an explicit capacity or load representation with additional qubits. Our construction is designed to exploit the Constraint-Enhanced QAOA framework together with its encoded-manifold analyses. Building on a requirements-based view of quantum utility in CVRP, we develop a routing optimization formulation that directly targets one of the main near-term bottlenecks, namely the additional logical-qubit cost of vehicle labels and explicit capacity constraints. Our proposal shows strong algorithmic performance in addition to qubit efficiency. On a standard benchmark suite, our end-to-end pipeline recovers the independently verified optima. The feasibility oracle may also be of independent interest as a reusable polynomial-time decoding and certification primitive for quantum and quantum-inspired routing pipelines.
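The feasibility check behind this encoding is easy to state in code. A minimal Python sketch (the variable layout `layers[c][i][p]` is my reading of the encoding; the paper's actual oracle also certifies route structure, which is omitted here):

```python
def feasible(layers, demands, capacities):
    """layers[c][i][p] = 1 iff vehicle c serves customer i at global position p."""
    K, n = len(layers), len(layers[0])
    # Summed color layers must form an n x n permutation matrix:
    total = [[sum(layers[c][i][p] for c in range(K)) for p in range(n)]
             for i in range(n)]
    rows_ok = all(sum(row) == 1 for row in total)            # one position per customer
    cols_ok = all(sum(total[i][p] for i in range(n)) == 1    # one customer per position
                  for p in range(n))
    # Capacity enforced as a weighted sum over each color class (no load register):
    loads = [sum(demands[i] * layers[c][i][p]
                 for i in range(n) for p in range(n)) for c in range(K)]
    cap_ok = all(load <= cap for load, cap in zip(loads, capacities))
    return rows_ok and cols_ok and cap_ok

layers = [[[1, 0], [0, 0]],   # vehicle 0: customer 0 at position 0
          [[0, 0], [0, 1]]]   # vehicle 1: customer 1 at position 1
ok = feasible(layers, demands=[3, 4], capacities=[3, 4])
```

Nothing beyond the $n^2K$ routing variables appears here, which is the point of the encoding: capacity is a linear check over each color class rather than an extra qubit register.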
A Demon that remembers: An agential approach towards quantum thermodynamics of temporal correlations
This paper develops a theoretical framework for extracting thermodynamic work from quantum systems with temporal correlations using a classical agent that makes adaptive decisions. The work introduces novel concepts like Time-Ordered Free Energy and demonstrates how machine learning algorithms can optimize work extraction from unknown quantum states.
Key Contributions
- Introduction of Time-Ordered Free Energy (TOFE) as a new upper bound for adaptive quantum thermodynamic operations
- Development of reinforcement learning approaches for work extraction from unknown quantum states with polylogarithmic dissipation
View Full Abstract
This thesis develops a decision-theoretic framework for extracting thermodynamic work from temporal correlations in quantum systems. We model a classical agent -- lacking quantum memory -- performing adaptive work extraction through continuous inference and decision-making under uncertainty. By introducing $ρ^*$-ideal protocols, we demonstrate that exploiting memory effects allows adaptive strategies to surpass non-adaptive bounds. We formalize this via the Time-Ordered Free Energy (TOFE), a novel upper bound for causal, adaptive operations that reveals a thermodynamic gap linked to adaptive ordered discord. Additionally, we tackle work extraction from unknown sources using reinforcement learning. By adapting multi-armed bandit algorithms, we show an agent can simultaneously learn an unknown i.i.d. quantum state and extract work, achieving polylogarithmic cumulative dissipation that significantly outperforms standard tomography. Overall, this work lays the foundation for predictive and learning-based quantum thermodynamics.
Efficient direct quantum state tomography using fan-out couplings
This paper presents a new method for quantum state tomography that uses a fan-out coupling architecture to characterize quantum states more efficiently than conventional approaches. The method enables constant circuit depth regardless of system size and was experimentally validated on IBM quantum processors up to 20 qubits.
Key Contributions
- Introduction of fan-out coupling architecture for direct quantum state tomography with constant circuit depth
- Experimental validation on IBM quantum processors demonstrating scalable state reconstruction up to 20 qubits with error mitigation
View Full Abstract
Characterizing quantum states is essential for validating quantum devices, yet conventional quantum state tomography becomes prohibitively expensive as system size grows. Direct tomography offers a distinct route by enabling selective access to individual complex density-matrix elements, with a particular advantage for sparse target states and some verification tasks. Here we introduce a direct quantum state tomography scheme combining strong-measurement estimation with a fan-out coupling architecture. It enables mutually commuting interactions between system qubits and a single meter qubit, thereby achieving constant circuit depth, independent of system size. Notably, the involutory fan-out coupling reduces to the identity under repetition, enabling straightforward noise scaling for quantum error mitigation. We experimentally validate the scheme on a superconducting quantum processor via the IBM Quantum Platform, demonstrating four-qubit state reconstruction and single-circuit GHZ-state fidelity estimation up to 20 qubits with error mitigation. Consistent results with standard tomography and improved efficiency establish our scheme as a promising approach to reconstructing full quantum states and scalable verification tasks.
Quantum Clock Synchronization Networks: A Survey
This survey paper reviews quantum clock synchronization (QCS) methods that use quantum phenomena like entanglement and interference to establish precise shared timing between distant locations. It categorizes different QCS protocols and examines their potential advantages over classical synchronization methods for quantum networks and navigation systems.
Key Contributions
- Comprehensive survey and categorization of quantum clock synchronization protocols
- Analysis of quantum resources needed for QCS including entangled photons and quantum memories
- Review of precision advantages and security benefits over classical time synchronization methods
View Full Abstract
Quantum clock synchronization (QCS) aims to establish a shared temporal reference between distant nodes by exploiting uniquely quantum phenomena such as entanglement, single-photon interference, and quantum correlations. In contrast to classical synchronization and time-transfer techniques, which are limited by signal propagation delays, atmospheric disturbances, and oscillator drift, QCS protocols offer the potential to surpass classical precision bounds and enhance resilience against adversarial manipulations. As precise and secure time synchronization underpins distributed quantum networks, navigation systems, and emerging quantum Internet infrastructures, understanding QCS principles, capabilities, and implementation challenges has become increasingly important. This survey provides a unified and critical overview of the rapidly growing QCS research landscape, highlighting fundamentals, protocol types, enabling resources, performance constraints, security considerations, and practical implementations of QCS. We first introduce the theoretical underpinnings of QCS, including entanglement-assisted time transfer, Hong-Ou-Mandel interference-based synchronization, and quantum slow-clock transport. We then categorize the main QCS protocols, ranging from ticking-qubit and entanglement-based schemes to time-of-arrival correlation methods, conveyor-belt synchronization, and quantum-enhanced two-way time transfer. This organization clarifies the relationships between protocol families and their achievable precision advantages over classical methods. Key quantum resources such as spontaneous parametric down-conversion-based entangled photon pairs, Greenberger-Horne-Zeilinger and W multipartite states, squeezed and frequency-entangled light, quantum frequency combs, and quantum memories are reviewed in the context of scalability and robustness.
Eliminating Vendor Lock-In in Quantum Machine Learning via Framework-Agnostic Neural Networks
This paper presents a framework-agnostic quantum neural network architecture that allows quantum machine learning models to work across different software platforms and hardware backends without vendor lock-in. The system provides a unified interface that can export models to various quantum computing frameworks while maintaining performance parity with native implementations.
Key Contributions
- Framework-agnostic QNN architecture with hardware abstraction layer
- Multi-framework export pipeline supporting cross-platform model deployment
- Benchmarking showing performance parity across different quantum ML frameworks
View Full Abstract
Quantum machine learning (QML) stands at the intersection of quantum computing and artificial intelligence, offering the potential to solve problems that remain intractable for classical methods. However, the current landscape of QML software frameworks suffers from severe fragmentation: models developed in TensorFlow Quantum cannot execute on PennyLane backends, circuits authored in Qiskit Machine Learning cannot be deployed to Amazon Braket hardware, and researchers who invest in one ecosystem face prohibitive switching costs when migrating to another. This vendor lock-in impedes reproducibility, limits hardware access, and slows the pace of scientific discovery. In this paper, we present a framework-agnostic quantum neural network (QNN) architecture that abstracts away vendor-specific interfaces through a unified computational graph, a hardware abstraction layer (HAL), and a multi-framework export pipeline. The core architecture supports simultaneous integration with TensorFlow, PyTorch, and JAX as classical co-processors, while the HAL provides transparent access to IBM Quantum, Amazon Braket, Azure Quantum, IonQ, and Rigetti backends through a single application programming interface (API). We introduce three pluggable data encoding strategies (amplitude, angle, and instantaneous quantum polynomial encoding) that are compatible with all supported backends. An export module leveraging Open Neural Network Exchange (ONNX) metadata enables lossless circuit translation across Qiskit, Cirq, PennyLane, and Braket representations. We benchmark our framework on the Iris, Wine, and MNIST-4 classification tasks, demonstrating training time parity (within 8% overhead) compared to native framework implementations, while achieving identical classification accuracy.
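The abstraction pattern is the familiar one of a thin backend interface behind the model. A minimal Python illustration (all class names and the circuit dictionary format are hypothetical, not the paper's API):

```python
from abc import ABC, abstractmethod

class QuantumBackend(ABC):
    """Hardware abstraction layer: one interface, many providers."""
    @abstractmethod
    def run(self, circuit, shots):
        ...

class StubSimulator(QuantumBackend):
    # Trivial stand-in backend: always returns the all-zeros bitstring.
    def run(self, circuit, shots):
        return {"counts": {"0" * circuit["n_qubits"]: shots}}

def angle_encode(x):
    # Angle encoding: one RY rotation per input feature.
    return {"n_qubits": len(x), "ops": [("ry", i, v) for i, v in enumerate(x)]}

class QNN:
    def __init__(self, encoder, backend):
        self.encoder, self.backend = encoder, backend
    def predict(self, x, shots=1024):
        return self.backend.run(self.encoder(x), shots)

qnn = QNN(angle_encode, StubSimulator())
result = qnn.predict([0.1, 0.2])
```

Swapping providers then means supplying a different `QuantumBackend` implementation; the model code never touches a vendor SDK directly, which is the lock-in the paper targets.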
Measurement-enhanced entanglement in a monitored superconducting chain
This paper studies how continuous measurements affect entanglement in a quantum fermionic chain with superconducting pairing, finding that measurements can counterintuitively enhance entanglement by suppressing pairing correlations that would otherwise limit entanglement growth. The effect occurs in finite systems but disappears in the thermodynamic limit.
Key Contributions
- Discovery of measurement-enhanced entanglement phenomenon where continuous measurements increase rather than decrease entanglement in superconducting fermionic chains
- Theoretical analysis showing the steady-state entanglement scales as ln²L, so the enhancement does not persist in the thermodynamic limit, placing fundamental limits on measurement-enhanced entanglement
View Full Abstract
A common view in monitored quantum dynamics is that local measurements suppress entanglement growth. We show that this intuition can fail in a one-dimensional spinful fermionic chain governed by a BCS Hamiltonian with pairing strength $\Delta$ and subject to continuous, on-site, spin-resolved charge measurements at rate $\gamma$. Using free-fermion simulations and quasiparticle analysis, we show that pairing suppresses entanglement growth, while measurements suppress pairing. Their competition yields measurement-enhanced entanglement: for $\Delta>0$, the steady-state entanglement $S_s$ increases with $\gamma$ over a finite interval $0<\gamma<\gamma_{\rm peak}$. This occurs because stronger measurements suppress pairing correlations, which would otherwise suppress entanglement growth. Using a nonlinear sigma-model calculation and free-fermion simulations, we provide evidence that for $\Delta>0$ and small but finite $\gamma$, the steady-state entanglement scales as $S_s\sim \ln^2 L$. This implies that, in this setting, measurement-enhanced entanglement does not persist in the thermodynamic limit.
Refining Quantum Phase Estimation Precision Conditions on Unitaries for Many-Electron Systems
This paper develops improved mathematical conditions for quantum phase estimation (QPE) when applied to many-electron quantum systems, providing tighter bounds on both energy estimation precision and state projection precision. The work focuses on theoretical refinements with numerical validation using the H2 molecule.
Key Contributions
- Derivation of tighter bounds on energy estimation precision for quantum phase estimation
- Introduction of novel conditions to control state projection precision in many-electron systems
- Application of refined conditions to Trotterization with improved bounds
View Full Abstract
Beyond ground state energy estimation, quantum phase estimation (QPE) applied to many-electron systems has the potential to output a projection on the ground state, which would enable the evaluation of observables other than the energy. In this article, after recalling the role of QPE free parameters, we detail the derivation of first-order and unified conditions on unitaries that allow us to control the energy estimation precision and lead to tighter bounds than in previous works. We then introduce a novel condition that allows us to also control the state projection precision. We apply these conditions to a Trotterization case, leading to tighter bounds than the previous ones. The main results in this article are formal, with a first numerical illustration on the H2 molecule that allows us to derive useful insights.
Circuit Harmonic Matrices: A Spectral Framework for Quantum Machine Learning
This paper introduces a mathematical framework that can predict how different quantum circuit architectures will perform in machine learning tasks without needing to actually run experiments or see training data. The approach creates 'architecture matrices' that capture how circuit design choices affect the model's learning capabilities and training difficulty.
Key Contributions
- Development of data-agnostic framework for analyzing parametrized quantum circuits through architecture matrices
- Mathematical connection between circuit structure, feature correlations, and training kernel geometry
- Method to predict circuit performance characteristics from design alone without requiring datasets or optimization
View Full Abstract
Parametrised quantum circuits are a central framework for near-term quantum machine learning. However, it remains challenging to determine in advance how architectural choices, such as encoding strategies, gate placement, and entangling structure, influence both the expressive capacity of the model and its trainability during optimisation. We introduce a data-agnostic framework, one requiring no knowledge of a training dataset or optimisation trajectory, that maps a broad family of circuits into a single architecture matrix built over learnable features and parameters. We show that this framework provides an explicit link between circuit structure, the correlations among learnable features, and the geometry of training kernels through the factorisation of each of these objects as quadratic forms in terms of these matrices. We show how correlations between learnable features arise from shared parameter-induced harmonics generated by non-commuting gate-observable interactions during Heisenberg back-propagation, and how these correlations are encoded directly in the architecture matrix. From this perspective, kernel structure and coefficient statistics can be reconstructed analytically from circuit design alone, without reference to a dataset or optimisation trajectory. The resulting framework makes circuit-induced structure explicit, separating architectural effects from data-dependent ones, and provides a principled foundation for analysing and comparing parametrised quantum circuits based on intrinsic, design-level signatures.
Three Hamiltonians are Sufficient for Unitary $k$-Design in Temporal Ensemble
This paper shows that three sequential Hamiltonian evolution steps with random timing can generate unitary k-designs (quantum circuits that mimic random quantum evolution), while two steps cannot, providing a more efficient method for creating pseudorandom quantum dynamics.
Key Contributions
- Proved that three-step protocol (3SP) can generate arbitrary unitary k-designs while two-step protocol (2SP) cannot
- Demonstrated that temporal ensemble approach requires fewer independent Hamiltonian realizations than standard methods
- Showed that, under imperfect time averaging, 3SP matches the accuracy of 2SP with a parametrically narrower time window
View Full Abstract
Unitary $k$-designs are central to quantum information and quantum many-body physics as efficient proxies for Haar-random dynamics. We study how chaotic Hamiltonian evolution can generate unitary $k$-designs. Standard approaches typically rely on many independent Hamiltonian realizations or fine-tuning evolution times. Here we show that unitary designs can instead arise from a quenched temporal ensemble, where Hamiltonians are sampled once and held fixed, while randomness enters only through the evolution times. We analyze a two-step protocol (2SP), applying $H_1$ for time $t_1$ and $H_2$ for time $t_2$, and a three-step protocol (3SP) with an additional quench, with all times randomly drawn from a prescribed distribution. Time averaging imposes energy-index matching in the frame potential (FP), which quantifies the distance to Haar random. Analytically and numerically, we show that 2SP cannot realize a general unitary $k$-design, whereas 3SP can do so for arbitrary $k$. The advantage of 3SP is that the additional random phases impose stronger constraints, eliminating independent permutation degrees of freedom in the FP. For Gaussian unitary ensemble Hamiltonians, we prove these results rigorously and show that under imperfect time averaging, 3SP achieves the same accuracy as 2SP with a parametrically narrower time window.
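The frame potential invoked above is $F_k = \mathbb{E}_{U,V}\,|\mathrm{tr}(U^\dagger V)|^{2k}$, which for a unitary $k$-design equals the Haar value $k!$ (for $k \le d$); in particular $F_1 = 1$. A quick Monte-Carlo check of that $k=1$ Haar value, as background arithmetic rather than the paper's 2SP/3SP temporal-ensemble construction:

```python
# Monte-Carlo estimate of the first frame potential F_1 = E|tr U|^2 over
# Haar-random unitaries; for any 1-design (and for Haar itself) F_1 = 1.
import numpy as np

def haar_unitary(d, rng):
    # QR of a complex Ginibre matrix, with the standard phase fix,
    # samples from the Haar measure on U(d).
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diag(r) / np.abs(np.diag(r))
    return q * ph                      # multiply column j by phase ph[j]

rng = np.random.default_rng(0)
d, samples = 4, 4000
F1 = np.mean([abs(np.trace(haar_unitary(d, rng)))**2 for _ in range(samples)])
# F1 should be statistically close to the Haar value 1
```

The paper's question is, in effect, how few quenched Hamiltonians and how little time-randomness suffice for the temporal ensemble's $F_k$ to reach these Haar values.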
Spatial Localization of Relativistic Quantum Systems: The Commutativity Requirement and the Locality Principle. Part II: A Model from Local QFT
This paper develops a rigorous quantum field theory framework for measuring the spatial location of relativistic quantum particles, creating mathematical tools that respect causality by ensuring measurement outcomes in separated regions don't influence each other faster than light. The work addresses fundamental questions about how to properly define position measurements in relativistic quantum mechanics.
Key Contributions
- Construction of positive operator-valued measures (POVMs) for relativistic spatial localization using stress-energy-momentum tensor
- Development of conditional localization observables that satisfy causality constraints and commute for spacelike-separated regions
- Derivation of quantum energy inequalities and regularization methods to handle non-positive operators in the full Fock space
View Full Abstract
This paper completes a previous work by constructing a class of positive-energy relativistic spatial localization observables in Minkowski spacetime within quantum field theory, using the stress-energy-momentum tensor smeared with suitable test functions. For each timelike direction, the construction yields a family of positive operator-valued measures (POVMs) on spacelike hypersurfaces, well defined on every n-particle sector and satisfying a natural relativistic causality condition excluding superluminal propagation of detection probabilities. These observables arise from local or quasi-local field-theoretic quantities and provide a rigorous version of earlier heuristic proposals. In the one-particle sector, the construction reduces to the observable introduced previously, and its first moment reproduces the Newton-Wigner position operator under suitable normalization conditions. Because the normally ordered stress-energy-momentum tensor is not positive on the full Fock space, as implied by the Reeh-Schlieder theorem, we study quantum energy inequalities and derive lower bounds controlling deviations from positivity. This leads to regularized families of positive operators approximating the localization effects. We also construct conditional localization observables for finite laboratories using modified local energy operators and their Friedrichs self-adjoint extensions. Using Haag duality and Kadison's result on affiliation, we show that the resulting conditional POVMs belong to local von Neumann algebras and therefore commute for causally separated regions, in agreement with the Araki-Haag-Kastler framework. These results support the view that commutativity of localization observables is recovered at the level of conditional measurements in finite spacetime regions.
Generalized Numerical Construction of MUBs: A Group Theoretical Investigation
This paper develops a new numerical method for constructing Mutually Unbiased Bases (MUBs) in quantum systems without relying on traditional group theory approaches, focusing on the challenging problem of finding complete MUB sets in non-prime-power dimensions where analytical solutions are unknown.
Key Contributions
- Development of a generalized numerical construction method for MUBs that bypasses traditional group theoretical constraints
- Formulation of the MUB construction problem as a phase space optimization using Gram matrix constraints and Bargmann invariants
- Demonstration that numerically constructed MUBs in dimensions 3-5 are mutually isomorphic with automorphism groups matching the Clifford group
View Full Abstract
Mutually Unbiased Bases (MUBs) constitute a fundamental geometric structure in quantum theory, known for providing an optimal measurement scheme for quantum state tomography. In prime and prime-power dimensions, analytical constructions of maximal sets of MUBs are well known, and the standard construction relies on the Weyl-Heisenberg (WH) group and finite fields. In non-prime-power dimensions, on the other hand, the existence of such maximal sets remains an open question. We present a generalized numerical method of constructing MUBs without any reliance on a priori group structure or specific algebraic frameworks. Formulating the problem at the level of the Gram matrix, we reduce the search for complete sets of $d+1$ MUBs in dimension $d$ to a phase space optimisation problem. We use the fact that the MUB Gram matrix is a projection matrix, and that the third- and fourth-order trace constraints are necessary and sufficient conditions for a valid projection matrix. We further develop a classification framework based on third-order Bargmann invariants and automorphism groups, allowing us to probe the underlying algebraic and geometric structure of the resulting configurations. Numerical applications of this method in dimensions $3$, $4$, and $5$ demonstrate that all numerically constructed solutions are mutually isomorphic, are isolated points in phase space, and possess automorphism groups that coincide exactly with the Clifford group, the normalizer of the WH group. Though the scope of the search was limited, in dimension $d = 6$ our numerical search yielded no MUBs within the explored parameter space.
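The defining condition the search targets is $|\langle e_j|f_k\rangle|^2 = 1/d$ for every pair of vectors drawn from two different bases. A minimal check for the textbook MUB pair, the computational basis and the discrete Fourier basis (the standard construction, not the paper's group-free numerical search):

```python
# Verify mutual unbiasedness of the computational and Fourier bases in
# dimension d: every cross overlap has modulus squared exactly 1/d.
import numpy as np

d = 5
F = np.array([[np.exp(2j * np.pi * j * k / d) / np.sqrt(d)
               for k in range(d)] for j in range(d)])

assert np.allclose(F.conj().T @ F, np.eye(d))   # Fourier basis is orthonormal
overlaps = np.abs(F)**2                         # |<e_j|f_k>|^2 = |F[j, k]|^2
assert np.allclose(overlaps, 1 / d)             # mutual unbiasedness
```

The hard open problem is whether $d+1$ pairwise-unbiased bases of this kind exist at all in non-prime-power dimensions such as $d=6$, which is precisely where the paper's search found none.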
Entanglement Rate Maximization for Dual-Connectivity Wireless Quantum Networks
This paper develops optimization methods for wireless quantum networks where quantum users can connect to multiple quantum base stations simultaneously to maximize entanglement distribution rates. The authors propose a dual-connectivity architecture and an alternating optimization algorithm to efficiently allocate entanglement resources while meeting practical constraints like minimum rate and fidelity requirements.
Key Contributions
- Dual-connectivity architecture for wireless quantum networks allowing quantum users to associate with up to two quantum base stations
- Alternating optimization algorithm for joint quantum base station association and entanglement rate allocation that achieves near-optimal performance with reduced computational complexity
View Full Abstract
The development of quantum networks (QNs) relies on efficient mechanisms for distributing entanglement among multiple quantum users (QUs) under practical system constraints. This paper investigates the problem of entanglement rate maximization in a dual-connectivity (DC) wireless quantum network comprising multiple quantum base stations (QBSs). Under the DC architecture, each QU can associate with up to two QBSs, thereby enhancing resource utilization compared to conventional single-connectivity (SC) schemes. The joint QBS-QU association and entanglement generation rate allocation problem is formulated as a mixed-integer nonlinear programming problem that incorporates practical constraints, including limited QBS entanglement generation capacity as well as heterogeneous minimum entanglement rate demands and fidelity requirements for QUs. To efficiently solve this challenging problem, an alternating optimization (AO) algorithm is developed, which decomposes the original formulation into entanglement rate allocation and association subproblems. Simulation results demonstrate that the proposed DC architecture significantly outperforms SC schemes, while the AO algorithm achieves near-optimal performance with substantially reduced computational complexity.
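A toy version of the two alternating subproblems (association, then rate allocation) on an invented instance; the paper's actual formulation is a mixed-integer nonlinear program with fidelity constraints, solved far more carefully than this greedy sketch:

```python
# Hypothetical AO-style sketch: capacities, demands, and the greedy rules
# below are all made-up illustrative values, not the paper's algorithm.
capacities = [10.0, 8.0, 6.0]      # entanglement-generation capacity per QBS
min_rate   = [3.0, 3.0, 4.0, 2.0]  # minimum entanglement-rate demand per QU

def associate(capacities, n_users, k=2):
    # Association step (dual connectivity): each QU picks the k
    # highest-capacity QBSs.
    order = sorted(range(len(capacities)), key=lambda b: -capacities[b])
    return [order[:k] for _ in range(n_users)]

def allocate(assoc, capacities, min_rate):
    # Rate-allocation step: split each QU's minimum demand evenly across
    # its associated QBSs, tracking remaining QBS capacity.
    remaining = list(capacities)
    rates = [0.0] * len(min_rate)
    for u, bss in enumerate(assoc):
        share = min_rate[u] / len(bss)
        if all(remaining[b] >= share for b in bss):
            for b in bss:
                remaining[b] -= share
            rates[u] = min_rate[u]
    return rates, remaining

assoc = associate(capacities, len(min_rate))
rates, remaining = allocate(assoc, capacities, min_rate)
```

Alternating these two steps until neither changes is the shape of the AO decomposition; the benefit of dual connectivity is visible even here, since splitting a demand across two QBSs relaxes the per-station capacity pressure.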
Quantization of Lagrangian Descriptors
This paper extends classical Lagrangian descriptors (tools for analyzing particle transport) into the quantum regime using path integral methods. The approach shows how quantum fluctuations broaden the sharp transport barriers found in classical systems, providing a geometric framework for understanding quantum tunneling.
Key Contributions
- Formulation of quantum Lagrangian descriptors using path integral framework
- Demonstration that quantum fluctuations broaden classical invariant manifolds
- Geometric interpretation of quantum tunneling as barrier delocalization
View Full Abstract
We formulate Lagrangian descriptors (LDs) in the path integral framework. Averaging the classical LD over fluctuations about extremal trajectories defines a quantum LD that incorporates quantum effects. Invariant manifolds, which sharply organize classical transport, become finite-width phase space structures under quantum fluctuations, and their overlap provides a geometric mechanism consistent with tunneling as fluctuation-induced delocalization of transport barriers. We demonstrate this approach for the Hamiltonian saddle, where path integral sampling reveals manifold broadening and barrier penetration. This establishes a geometric framework for studying phase space transport and tunneling beyond the classical regime, while also providing a natural route toward the application of LDs to field theory.
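The classical object being quantized can be stated compactly: the arclength Lagrangian descriptor $M(x_0,p_0) = \int_{-\tau}^{\tau} \|\dot{\mathbf{x}}(t)\|\,dt$ accumulated along the trajectory through $(x_0,p_0)$. A minimal sketch for the Hamiltonian saddle $H=(p^2-x^2)/2$, computing only the classical LD (the paper's quantum version additionally averages this over path-integral fluctuations):

```python
# Classical arclength LD for the saddle x' = p, p' = x, integrated forward
# and backward with a simple Euler scheme (illustrative, not the paper's code).
import numpy as np

def ld(x0, p0, tau=2.0, dt=1e-3):
    total = 0.0
    for sign in (+1, -1):                 # forward and backward branches
        x, p = x0, p0
        for _ in range(int(tau / dt)):
            vx, vp = p, x                 # phase-space velocity at (x, p)
            total += np.hypot(vx, vp) * dt
            x += sign * vx * dt
            p += sign * vp * dt
    return total

# The saddle flow is odd under (x, p) -> (-x, -p), so the LD field
# inherits that symmetry exactly.
a, b = ld(0.5, 0.2), ld(-0.5, -0.2)
```

On a grid of initial conditions, the classical LD is singular (non-smooth) exactly along the invariant manifolds $x = \pm p$; the paper's point is that quantum fluctuations smear these sharp ridges into finite-width structures.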
Spin-based magnetic detection of optically trapped single cell in microfluidic channel
This paper develops a new method for detecting single cells by combining optical tweezers with quantum magnetometry using nitrogen-vacancy centers, allowing magnetic detection of cells labeled with magnetic nanoparticles in microfluidic channels. The approach sidesteps the limitations of fluorescence-based detection and achieved detection of an 89 μT magnetic signal from an individual labeled cell.
Key Contributions
- Integration of nitrogen-vacancy quantum magnetometry with optical tweezers for single-cell detection
- Demonstration of magnetic detection of an individual magnetically labeled cell, with an 89 μT signal against a 3.9 μT noise floor, in a microfluidic environment
View Full Abstract
Combining optical tweezers with fluorescence microscopy is a powerful tool for single-cell analysis, playing a pivotal role in disease diagnosis, cell sorting, and the investigation of cellular dynamics. However, fluorescence detection faces challenges such as blinking, photobleaching and autofluorescence in biotissues. To address these limitations, we developed a magnetic detection strategy by integrating quantum magnetometry using nitrogen-vacancy centers into optical tweezers, demonstrating precise trapping and manipulation of individual cells in microfluidic environment. We detected a magnetic signal of 89 μT from a single cell labeled with magnetic nanoparticles, compared to a noise floor of 3.9 μT observed in unlabeled cells. This platform provides a promising approach for high-precision single-cell analysis and holds significant potential for probing cellular activities within biological microenvironments.
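For scale, a back-of-envelope conversion using the textbook NV gyromagnetic ratio of roughly 28 GHz/T (a standard value, not a number taken from the paper): the reported 89 μT single-cell signal corresponds to an ODMR resonance shift of about 2.5 MHz, well within reach of conventional ODMR spectroscopy.

```python
# Zeeman shift of the NV ODMR resonance for the reported single-cell field.
# GAMMA_NV is the standard NV gyromagnetic ratio (~28 GHz/T), assumed here.
GAMMA_NV = 28.0e9      # Hz per tesla
B = 89e-6              # tesla, the single-cell signal reported in the abstract
shift_hz = GAMMA_NV * B   # ≈ 2.49 MHz
```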
Co-Authoring with AI: How I Wrote a Physics Paper About AI, Using AI
This paper examines how AI language models are changing scientific writing by analyzing the author's experience co-writing a computational physics paper with AI. The author argues that humans must shift from writing text to acting as supervisors who guide AI reasoning, and proposes requiring publication of AI interaction transcripts to maintain scientific integrity.
Key Contributions
- Proposes Human-in-the-Loop framework for AI-assisted scientific writing where humans act as Principal Investigators mentoring AI
- Advocates for mandatory publication of unedited AI interaction transcripts as supplementary material to ensure scientific accountability
View Full Abstract
The rapid integration of Large Language Models (LLMs) into scientific writing fundamentally challenges traditional definitions of authorship, responsibility, and scientific integrity. As researchers transition from using computers as deterministic tools to managing them as ``virtual collaborators,'' the nature of human contribution must be re-evaluated. Using the drafting process of a recent computational physics manuscript as a case study, this essay explores the indispensable role of the Human-in-the-Loop (HITL). We demonstrate that while AI excels at structural organization and syntax generation, the human author bears the ultimate responsibility for enforcing rigorous physical logic, maintaining academic diplomacy, and anticipating peer-review critiques. In this paradigm, the human contribution shifts from writing boilerplate text to acting as a Principal Investigator who actively mentors and steers the AI's reasoning. To ensure accountability and preserve the integrity of the scientific record in this new era, I argue that the community must mandate the publication of full, unedited AI interaction transcripts as standard supplementary material.
The physical basis of information flow in neural matter: a thermocoherent perspective on cognitive dynamics
This paper proposes a theoretical framework linking quantum-level correlations and coherence effects in neural tissue to brain function and cognition. The authors argue that quantum entanglement, discord, and other correlations in biological neural matter could serve as physical resources that influence neural transport processes and cross-scale coordination.
Key Contributions
- Development of a multiscale resource-theoretical framework connecting quantum correlations to neural information processing
- Identification of specific biological substrates (ion channels, proton networks, π-electron systems) where quantum effects might influence neural dynamics
View Full Abstract
Information flow is central to contemporary accounts of cognition, yet its physical basis in living neural matter remains poorly specified. Here, we develop a multiscale resource-theoretical framework motivated by the ``thermocoherent effect'', where heat flow is reciprocally coupled to a delocalized information flow carried by shared coherence and not reducible to local subsystem variables. Extending this line of work in light of recent results on correlation-enabled Mpemba-type thermal relaxation, we argue that the operational relevance of correlations depends less on their taxonomy than on their dynamical accessibility under the underlying interaction geometry. Relational structure encoded in the state of a single composite system -- including quantum entanglement, quantum discord, and classical correlations -- may therefore act as a usable physical resource that remains hidden from local subsystem descriptions. We propose that electrical, chemical, ionic, and thermal transport processes in neural matter may, under suitable microscopic conditions, generate or transduce partially hidden relational resources whose mutual coupling can progressively build larger-scale thermocoherent organization across spatial or spatiotemporal partitions in neural tissue. Ion-channel interfaces, hydrogen-bonded proton networks, aromatic $\pi$-electron architectures, and phosphate-rich motifs emerge as plausible substrate classes in which such resources may arise, become transiently accessible under environmental coupling, and leave coarse-grained signatures in neural dynamics. The resulting picture is neither a claim of macroscopic quantum cognition nor a reduction of cognition to abstract coding, but a falsifiable framework in which microscopic relational resources can bias transport, relaxation, signaling, and cross-scale neural coordination.
Dismagicker: Unitary Gate for Non-Stabilizerness Reduction
This paper introduces 'dismagickers' - special quantum gates designed to reduce the 'magic' (non-stabilizerness) in quantum states, making them easier to simulate classically. The authors develop methods to construct these gates and show they can improve both classical simulation of quantum systems and preparation of quantum states on real devices.
Key Contributions
- Introduction of dismagicker gates as a new tool for reducing non-stabilizerness in quantum states
- Development of optimization methods for constructing dismagickers within Matrix Product States framework
- Demonstration that combining non-stabilizerness and entanglement reduction improves classical simulation accuracy and quantum state preparation
View Full Abstract
We introduce the notion of a dismagicker: a non-Clifford unitary gate designed to reduce the non-stabilizerness (also called magic) of quantum many-body states. Although both entanglement and non-stabilizerness are fundamental quantum resources, they require distinct control strategies. While disentanglers (unitary operations that lower entanglement) are well established in tensor network methods, an analogous concept for non-stabilizerness suppression has been largely missing. In this work, we define a dismagicker as a non-Clifford unitary operation that actively suppresses non-stabilizerness, steering states toward classically simulatable stabilizer states. We develop an optimization method for constructing dismagickers within the Matrix Product States framework. Our numerical results show that the non-stabilizerness reduction procedure, when combined with entanglement reduction steps using Clifford circuits, significantly improves the accuracy of both classical simulation of many-body systems and quantum state preparation on quantum devices. The dismagicker enriches our toolkit for the manipulation of many-body states by unifying non-stabilizerness and entanglement reduction.
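To make "reducing non-stabilizerness" concrete, here is the stabilizer 2-Rényi entropy, a standard magic monotone, evaluated for a single qubit: it vanishes on the stabilizer state $|0\rangle$ and is strictly positive on the magic state $T|+\rangle$. This is background material illustrating the quantity a dismagicker would drive down, not the paper's MPS-based construction:

```python
# Stabilizer 2-Renyi entropy M2 = -log2( sum_P <P>^4 / 2 ) for one qubit,
# where the sum runs over the Pauli group {I, X, Y, Z}. M2 = 0 exactly on
# stabilizer states and M2 > 0 on magic states.
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])

def m2(psi):
    exp4 = sum(np.real(psi.conj() @ P @ psi)**4 for P in (I, X, Y, Z))
    return -np.log2(exp4 / 2)

zero = np.array([1, 0], dtype=complex)                        # stabilizer state
t_state = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)  # T|+>, has magic
```

Running `m2` on the two states gives $0$ for $|0\rangle$ and $\log_2(4/3)\approx 0.415$ for $T|+\rangle$; a dismagicker, in this language, is a unitary tuned to lower such a measure across a many-body state.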
Tighter entropic uncertainty relations in the presence of quantum memories for complete sets of mutually unbiased bases
This paper develops improved mathematical bounds for entropic uncertainty relations when quantum memories are present, specifically for complete sets of mutually unbiased bases in systems with multiple quantum particles. The work provides tighter theoretical limits on how precisely we can simultaneously measure complementary quantum observables.
Key Contributions
- Development of tighter lower and upper bounds for quantum-memory-assisted entropic uncertainty relations in multipartite systems
- Demonstration that the new bounds outperform previously existing uncertainty relation bounds for complete sets of mutually unbiased bases
View Full Abstract
Entropic uncertainty relations provide an information-theoretic framework for quantifying the fundamental indeterminacy inherent in quantum mechanics. We propose more stringent quantum-memory-assisted entropic uncertainty relations for complete sets of mutually unbiased bases in multipartite scenarios. We present lower and upper bounds of the quantum uncertainties based on the complementarity of the observables, the purity of the measured state, the (conditional) von-Neumann entropies, the Holevo quantities and mutual information. The results are illustrated by several representative cases, showing that our bounds are tighter than and outperform previously existing bounds.
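For context, the simplest memoryless relation these bounds generalize is the Maassen-Uffink inequality $H(X)+H(Z) \ge \log_2(1/c)$ with $c = \max_{j,k}|\langle x_j|z_k\rangle|^2$, which for the qubit MUB pair $Z$, $X$ gives a bound of 1 bit. A quick numerical check (background relation only; the paper's memory-assisted multipartite bounds go beyond this):

```python
# Maassen-Uffink check for a single qubit measured in the Z and X bases
# (c = 1/2, so the entropy sum must be at least log2(2) = 1 bit).
import numpy as np

def shannon(p):
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

psi = np.array([np.cos(0.3), np.sin(0.3)])      # an arbitrary pure qubit state
pz = np.abs(psi)**2                             # Z-basis outcome probabilities
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard: rotates to X basis
px = np.abs(H @ psi)**2                         # X-basis outcome probabilities

entropy_sum = shannon(px) + shannon(pz)         # must be >= 1 bit
```

Quantum-memory-assisted relations replace the Shannon entropies with conditional von Neumann entropies, which is what allows the bounds to tighten (or even turn negative) in the presence of entanglement with the memory.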
Statistics of Matrix Elements of Operators in a Disorder-Free SYK model
This paper studies the statistical distribution of matrix elements in a disorder-free Sachdev-Ye-Kitaev (SYK) model, which involves 4-body interactions of Majorana fermions with fixed coupling strengths. The authors find that off-diagonal matrix elements of multi-fermion operators follow a generalized inverse Gaussian distribution rather than the Fréchet distributions found in other models, contributing to understanding of the Eigenstate Thermalization Hypothesis.
Key Contributions
- Discovery that matrix elements in disorder-free SYK model follow generalized inverse Gaussian distribution instead of Fréchet distributions
- Extension of Eigenstate Thermalization Hypothesis studies to a new solvable quantum many-body model
View Full Abstract
Recently, studies have explored the statistics of matrix elements of local operators in the Lieb-Liniger model. It was found that the probability distribution function for off-diagonal matrix elements $\langle \boldsymbol{\mu}|\mathcal{O}|\boldsymbol{\lambda} \rangle$ within the same macro-state is well described by the Fréchet distributions. This represents a significant development for the Eigenstate Thermalization Hypothesis (ETH). In this paper, we investigate a similar phenomenon in another solvable model: the disorder-free Sachdev-Ye-Kitaev (SYK) model. The Hamiltonian of this model consists of 4-body interactions of Majorana fermions. Unlike the conventional SYK model, the coupling strengths in this model are fixed to a constant, earning it the name ``disorder-free.'' We evaluate the matrix elements of operators constructed from products of $n$ Majorana fermions: $\mathcal{O} = \chi_{a_1}\chi_{a_2}\ldots \chi_{a_n}$. For a general choice of indices and $n \geq 4$, we find that the statistics of the off-diagonal matrix elements are well-fitted by a generalized inverse Gaussian distribution rather than Fréchet distributions.
Adaptive Tensor Network Simulation via Entropy-Feedback PID Control and GPU-Accelerated SVD
This paper develops an adaptive method for simulating quantum many-body systems using tensor networks, where a PID controller automatically adjusts computational resources based on quantum entanglement measurements. The approach uses GPU acceleration and achieves significant speedups while maintaining accuracy for ground-state calculations of quantum spin systems.
Key Contributions
- Adaptive bond dimension management using PID control with entropy feedback for tensor network simulations
- GPU-accelerated implementation achieving 2.7x speedup in DMRG calculations with maintained accuracy
View Full Abstract
Tensor network methods, particularly those based on Matrix Product States (MPS), provide a powerful framework for simulating quantum many-body systems. A persistent computational challenge in these methods is the selection of the bond dimension chi, which controls the trade-off between accuracy and computational cost. Fixed bond dimension strategies either waste resources in low-entanglement regions or lose fidelity in high-entanglement regions. This work introduces an adaptive bond dimension management framework that uses von Neumann entropy feedback coupled with a Proportional-Integral-Derivative (PID) controller to dynamically adjust chi at each bond during simulation. An Exponential Moving Average (EMA) filter stabilizes entropy measurements against transient fluctuations, and a predictive scheduling module anticipates future bond dimension requirements from entropy trends. The per-bond granularity of the allocation ensures that computational resources concentrate where entanglement is largest. The framework integrates GPU-accelerated Singular Value Decomposition (SVD) via CuPy and the cuSOLVER backend, achieving individual SVD speedups of 4.1x at chi=256 and 7.1x at chi=2048 relative to CPU-based NumPy for isolated matrix factorisations (measured on an NVIDIA A100-SXM4-40GB GPU with CuPy 13.4.1 and CUDA 12.8). At the system level, benchmarks on the spin-1/2 antiferromagnetic Heisenberg chain demonstrate a 2.7x reduction in total DMRG wall time compared to fixed-chi simulations, with energy accuracy within 0.1% of the Bethe ansatz solution. Integration with the Density Matrix Renormalization Group (DMRG) algorithm yields ground-state energies per site converging to E/N = -0.4432 for the isotropic Heisenberg model at chi = 128. Validation against Amazon Web Services (AWS) Braket SV1 statevector simulator confirms agreement within 2-5% for small systems.
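A hypothetical sketch of the control loop described above, with an EMA-filtered entropy signal feeding a PID update of the bond dimension. The gains, the entropy-to-chi target map, and the bounds are invented for illustration; the paper's actual controller and filter details may differ:

```python
# Toy PID-on-entropy bond-dimension controller. All numeric constants
# (gains, alpha, the 8 * 2**S target map, chi bounds) are made up.
class ChiController:
    """PID loop with EMA-filtered entanglement entropy driving chi."""
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, alpha=0.3,
                 chi_min=16, chi_max=512):
        self.kp, self.ki, self.kd, self.alpha = kp, ki, kd, alpha
        self.chi_min, self.chi_max = chi_min, chi_max
        self.ema = None
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, chi, entropy):
        # EMA filter damps transient entropy fluctuations before control.
        self.ema = entropy if self.ema is None else \
            self.alpha * entropy + (1 - self.alpha) * self.ema
        target = 8 * 2 ** self.ema          # toy map: chi grows like exp(S)
        err = target - chi
        self.integral += err
        delta = (self.kp * err + self.ki * self.integral
                 + self.kd * (err - self.prev_err))
        self.prev_err = err
        return int(min(self.chi_max, max(self.chi_min, round(chi + delta))))

ctrl, chi = ChiController(), 16
for s in [1.0, 2.0, 3.0, 4.0]:   # entanglement rising during a sweep
    chi = ctrl.update(chi, s)     # controller grows chi in response
```

The per-bond version in the paper simply runs one such loop per bond of the MPS, so that chi concentrates where the local entanglement is largest.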
Theory of the Collective Many-body Subradiance in Waveguide QED
This paper develops a theoretical framework for understanding subradiant states in arrays of quantum emitters coupled to waveguides, showing how these collective quantum states can have extremely narrow linewidths and large energy shifts. The work provides analytical expressions for how these properties scale with the number of emitters and could enable applications in quantum sensing and spectroscopy.
Key Contributions
- Analytical theory showing universal N^-3 scaling of subradiant state linewidths in waveguide-coupled emitter arrays
- Unified framework connecting Bragg interference, finite-size effects, and dipole interactions in subradiant systems
- Demonstration of potential applications in subradiant spectroscopy and waveguide-QED-based sensing
View Full Abstract
We present an analytical theory for the most subradiant modes in a finite one-dimensional emitter array coupled to either an ideal or a nonideal waveguide. Using an effective non-Hermitian Hamiltonian together with a Bragg-edge open-boundary ansatz, we derive compact eigenvalue expressions showing that the linewidths of the most subradiant states exhibit a universal N^{-3} scaling in both cases. However, in the deep-subwavelength regime, the decay rates display even-odd oscillations due to boundary interference. Furthermore, we demonstrate that the collective energy shift of the most subradiant state approaches a constant value that depends on the atomic separation, with the leading finite-size correction scaling as N^{-2}. These results unify the roles of Bragg-edge interference, finite-size effects, and near-field dipole-dipole interactions in shaping ultranarrow, strongly shifted subradiant resonances, providing a transparent framework beyond the ideal-waveguide limit and opening potential applications in subradiant spectroscopy and waveguide-QED-based sensing.
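The scaling claim can be probed numerically from the standard single-excitation effective Hamiltonian for an ideal waveguide, $H_{nm} = -\tfrac{i}{2}\gamma_0\, e^{i\phi|n-m|}$ with $\phi = k d$ the phase per lattice spacing; the decay rate of each collective mode is $-2\,\mathrm{Im}\,\lambda$. A brute-force diagonalization sketch (illustrative, not the paper's analytic Bragg-edge ansatz):

```python
# Most-subradiant decay rate of N emitters on an ideal 1D waveguide,
# from the standard non-Hermitian effective Hamiltonian; phi = 1.0 is an
# arbitrary generic spacing chosen for illustration.
import numpy as np

def min_decay(N, phi=1.0, gamma0=1.0):
    n = np.arange(N)
    H = -0.5j * gamma0 * np.exp(1j * phi * np.abs(n[:, None] - n[None, :]))
    rates = -2 * np.linalg.eigvals(H).imag   # decay rate of each eigenmode
    return rates.min()

decays = [min_decay(N) for N in (8, 16, 32)]
# the smallest rate falls steeply with N (asymptotically the N^-3 law
# derived analytically in the paper)
```

Since the dissipator $\Gamma_{nm} = \gamma_0\cos(\phi(n-m))$ is positive semidefinite, all rates are non-negative; the paper's contribution is the closed-form $N^{-3}$ law, the even-odd oscillations, and the extension to nonideal waveguides that this crude numerical check cannot reach.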