Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically evaluated using AI, briefly summarized, and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
Updated automatically as new papers are published. The page shows one week of arXiv publishing (Sunday through Thursday); an archive of previous weeks is at the bottom.
Characterizing Quantum Error Correction Performance of Radiation-induced Errors
This paper develops computational models to simulate how radiation impacts affect quantum error correction performance on superconducting quantum devices, since radiation can cause correlated errors that standard error correction codes struggle with. The researchers create a holistic modeling framework that maps radiation-induced qubit errors onto quantum error channels and tests mitigation strategies for improved error correction.
Key Contributions
- Computational model linking radiation-induced quasiparticle effects to quantum error correction performance
- Performance metric for quantifying QEC code resilience to radiation impacts
- Modular framework for testing error mitigation strategies and chip designs
View Full Abstract
Radiation impacts are a current challenge with computing on superconducting-based quantum devices because they can lead to widespread correlated errors across the device. Such errors can be problematic for quantum error correction (QEC) codes, which are generally designed to correct independent errors. To address this, we have developed a computational model to simulate the effects of radiation impacts on QEC performance. This is achieved by building from recently developed models of quasiparticle density, mapping radiation-induced qubit error rates onto a quantum error channel and simulation of a simple surface code. We also provide a performance metric to quantify the resilience of a QEC code to radiation impacts. Additionally, we sweep various parameters of chip design to test mitigation strategies for improved QEC performance. Our model approach is holistic, allowing for modular performance testing of error mitigation strategies and chip and code designs.
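A minimal sketch of the general idea, assuming a simple exponential quasiparticle-burst model and an off-the-shelf surface-code memory simulation (stim + pymatching). The burst amplitude, decay constant, and noise model below are illustrative assumptions, not the paper's calibrated quasiparticle-density model:

```python
# Illustrative only: map a time-dependent, radiation-elevated physical error
# rate onto a depolarizing channel and estimate distance-3 surface-code
# logical error rates with stim + pymatching.
import numpy as np
import stim
import pymatching

def physical_error_rate(t_us, p_base=1e-3, burst_amp=3e-2, tau_us=200.0):
    """Per-round Pauli error probability: baseline plus a decaying
    quasiparticle-burst contribution (assumed exponential recovery)."""
    return p_base + burst_amp * np.exp(-t_us / tau_us)

def logical_error_rate(p, distance=3, rounds=3, shots=20_000):
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance, rounds=rounds,
        after_clifford_depolarization=p,
        before_measure_flip_probability=p,
        after_reset_flip_probability=p,
    )
    dem = circuit.detector_error_model(decompose_errors=True)
    matcher = pymatching.Matching.from_detector_error_model(dem)
    dets, obs = circuit.compile_detector_sampler().sample(
        shots, separate_observables=True)
    predictions = matcher.decode_batch(dets)
    return np.mean(predictions[:, 0] != obs[:, 0])

for t in [0.0, 100.0, 500.0, 2000.0]:   # microseconds after an impact
    p = physical_error_rate(t)
    print(f"t={t:7.1f} us  p_phys={p:.4f}  p_log~{logical_error_rate(p):.4f}")
```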
Modeling integrated frequency shifters and beam splitters
This paper develops theoretical methods for designing frequency-mode beam splitters using modulated coupled resonator arrays for photonic quantum computing. The authors create a flexible methodology based on quantum input-output network formalism to construct transfer matrices for these devices and prove limitations on certain implementations.
Key Contributions
- Development of SLH formalism-based methodology for constructing transfer matrices of frequency-mode beam splitters
- Analysis of various device configurations including two-resonator devices and Mach-Zehnder interferometers
- Formal no-go theorem on limitations of native N-mode frequency-domain beam splitters with N-resonator arrays
View Full Abstract
Photonic quantum computing is a strong contender in the race to fault-tolerance. Recent proposals using qubits encoded in frequency modes promise a large reduction in hardware footprint, and have garnered much attention. In this encoding, linear optics, i.e., beam splitters and phase shifters, is necessarily not energy-conserving, and is costly to implement. In this work, we present designs of frequency-mode beam splitters based on modulated arrays of coupled resonators. We develop a methodology to construct their effective transfer matrices based on the SLH formalism for quantum input-output networks. Our methodology is flexible and highly composable, allowing us to define $N$-mode beam splitters either natively based on arrays of $N$-resonators of arbitrary connectivity or as networks of interconnected $l$-mode beam splitters, with $l<N$. We apply our methodology to analyze a two-resonator device, a frequency-domain phase shifter and a Mach-Zehnder interferometer obtained from composing these devices, a four-resonator device, and present a formal no-go theorem on the possibility of natively generating certain $N$-mode frequency-domain beam splitters with arrays of $N$-resonators.
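For orientation, a frequency-mode beam splitter aims to realize between two frequency bins the same unitary transfer matrix as a spatial beam splitter; a common parametrization (illustrative, not the paper's specific SLH-derived form) is

$$
T(\theta,\varphi) =
\begin{pmatrix}
\cos\theta & -e^{i\varphi}\sin\theta \\
e^{-i\varphi}\sin\theta & \cos\theta
\end{pmatrix},
\qquad
T\!\left(\tfrac{\pi}{4},0\right) = \frac{1}{\sqrt{2}}
\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}
\;\;\text{(50:50 splitter)} .
$$

The paper's methodology constructs such effective transfer matrices for modulated resonator arrays and for networks built by composing smaller devices.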
Extended Rydberg Lifetimes in a Cryogenic Atom Array
This paper demonstrates how cooling cesium atoms to 4K in optical tweezers significantly extends the lifetime of Rydberg states by reducing blackbody radiation effects. The extended lifetimes improve the coherence time of ground-Rydberg qubits, which is crucial for reducing errors in neutral-atom quantum computing systems.
Key Contributions
- Demonstration of 3.3x longer Rydberg state lifetimes in cryogenic environment
- Measurement of small differential dynamic polarizability reducing dephasing
- Advancement toward higher fidelity neutral-atom two-qubit gates
View Full Abstract
We report on the realization of a $^{133}$Cs optical tweezer array in a cryogenic blackbody radiation (BBR) environment. By enclosing the array within a 4K radiation shield, we measure long Rydberg lifetimes, up to $406(36)\,\mu$s for the $55 P_{3/2}$ Rydberg state, a factor of 3.3(3) longer than the room-temperature value. We employ single-photon coupling for coherent manipulation of the ground-Rydberg qubit. We measure a small differential dynamic polarizability of the transition, beneficial for reducing dephasing due to light intensity fluctuations. Our results pave the path for advancing neutral-atom two-qubit gate fidelities as their error budgets become increasingly dominated by $T_1$ relaxation of the ground-Rydberg qubit.
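The factor of 3.3(3) can be read as decay channels adding as inverse lifetimes. A rough back-of-the-envelope split, using only the numbers quoted in the abstract and treating the 4 K lifetime as approximately free of BBR-induced decay, reads

$$
\frac{1}{\tau_{300\,\mathrm{K}}} = \frac{1}{\tau_{\mathrm{rad}}} + \frac{1}{\tau_{\mathrm{BBR}}},\qquad
\tau_{300\,\mathrm{K}} \approx \frac{406\,\mu\mathrm{s}}{3.3} \approx 123\,\mu\mathrm{s},\qquad
\frac{1}{\tau_{\mathrm{BBR}}} \approx \frac{1}{123\,\mu\mathrm{s}} - \frac{1}{406\,\mu\mathrm{s}}
\;\Rightarrow\; \tau_{\mathrm{BBR}} \approx 176\,\mu\mathrm{s},
$$

i.e., at room temperature the blackbody-induced channel would dominate the decay budget of this state.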
Quantum Error Mitigation at the pre-processing stage
This paper introduces a new quantum error mitigation technique that corrects for noise before measurement (pre-processing) rather than after measurement (post-processing). The method finds a surrogate observable to measure on the noisy quantum state that gives the same result as measuring the target observable on the noise-free state, using tensor networks to make this computationally feasible.
Key Contributions
- Development of pre-processing quantum error mitigation approach that finds surrogate observables to compensate for noise effects
- Significant computational complexity improvement (~10^6 times) over existing Tensor Error Mitigation methods by eliminating tomographic measurements
View Full Abstract
The realization of fault-tolerant quantum computers remains a challenging endeavor, forcing state-of-the-art quantum hardware to rely heavily on noise mitigation techniques. Standard quantum error mitigation is typically based on post-processing strategies. In contrast, the present work explores a pre-processing approach, in which the effects of noise are mitigated before performing a measurement on the output state. The main idea is to find an observable $Y$ such that its expectation value on a noisy quantum state $\mathcal{E}(\rho)$ matches the expectation value of a target observable $X$ on the noiseless quantum state $\rho$. Our method requires the execution of a noisy quantum circuit, followed by the measurement of the surrogate observable $Y$. The main enablers of our method in practical scenarios are Tensor Networks. The proposed method improves over Tensor Error Mitigation (TEM) in terms of average error, circuit depth, and complexity, attaining a measurement overhead that approaches the theoretical lower bound. The improvement in terms of classical computation complexity is in the order of $\sim 10^6$ times when compared to the post-processing computational cost of TEM in practical scenarios. Such gain comes from eliminating the need to perform the set of informationally complete positive operator-valued measurements (IC-POVM) required by TEM, as well as any other tomographic strategy.
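A minimal single-qubit sketch of the surrogate-observable idea, assuming a depolarizing noise channel and a traceless target observable (not the tensor-network machinery the paper uses to make this tractable at scale): the defining condition $\mathrm{Tr}[Y\,\mathcal{E}(\rho)] = \mathrm{Tr}[X\rho]$ is solved by inverting the adjoint (Heisenberg-picture) channel.

```python
# For depolarizing noise and traceless X, the adjoint map sends X -> (1-p) X,
# so the surrogate observable is simply Y = X / (1 - p).
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
p = 0.2                                       # assumed depolarizing strength

def depolarize(rho, p):
    return (1 - p) * rho + p * np.trace(rho) * I2 / 2

X = Z
Y = X / (1 - p)                               # surrogate observable

# Random pure state rho = |psi><psi|
psi = np.random.randn(2) + 1j * np.random.randn(2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

ideal = np.trace(X @ rho).real                     # <X> on the noiseless state
mitigated = np.trace(Y @ depolarize(rho, p)).real  # <Y> on the noisy state
print(ideal, mitigated)                            # agree up to float precision
```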
High-order dynamical decoupling in the weak-coupling regime
This paper develops an improved method for dynamical decoupling, which uses carefully timed quantum pulses to protect quantum systems from environmental noise. The new approach significantly reduces the number of pulses needed compared to existing methods while maintaining better error suppression, making it more practical for real quantum devices.
Key Contributions
- Development of high-order dynamical decoupling scheme with polynomial pulse scaling O(n^(k-1)K) versus exponential scaling O(exp(n)) in existing methods
- Novel mapping to continuous necklace-splitting problem to construct optimal pulse sequences
- Demonstration of asymptotically optimal pulse count with superior performance over Quadratic DD in weak-coupling regime
View Full Abstract
We introduce a high-order dynamical decoupling (DD) scheme for arbitrary system-bath interactions in the weak-coupling regime. Given any decoupling group $\mathcal G$ that averages the interaction to zero, our construction yields pulse sequences whose length scales as $\mathcal{O}(|\mathcal G| K)$, while canceling all error terms linear in the system-bath coupling strength up to order $K$ in the total evolution time. As a corollary, for an $n$-qubit system with $k$-local system-bath interactions, we obtain an $\mathcal{O}(n^{k-1}K)$-pulse sequence, a significant improvement over existing schemes with $\mathcal{O}(\exp(n))$ pulses (for $k=\mathcal{O}(1)$). The construction is obtained via a mapping to the continuous necklace-splitting problem, which asks how to cut a multi-colored interval into pieces that give each party the same share of every color. We provide explicit pulse sequences for suppressing general single-qubit decoherence, prove that the pulse count is asymptotically optimal, and verify the predicted error scaling in numerical simulations. For the same number of pulses, we observe that our sequences outperform the state-of-the-art Quadratic DD in the weak-coupling regime. We also note that the same construction extends to suppress slow, time-dependent Hamiltonian noise.
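The basic mechanism behind any such scheme is group averaging of the system-bath coupling. Writing $H = H_S + H_B + H_{SB}$, a decoupling group $\mathcal G$ acting on the system satisfies the standard first-order condition (textbook DD, not specific to this paper)

$$
\frac{1}{|\mathcal G|}\sum_{g\in\mathcal G} g^{\dagger} H_{SB}\, g = 0 ,
$$

so that cycling through the group elements cancels $H_{SB}$ in the leading term of the average Hamiltonian. The paper's contribution is arranging the pulse timings so that all terms linear in the coupling strength cancel up to order $K$ in the total evolution time while using only $\mathcal{O}(|\mathcal G| K)$ pulses.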
Digital signatures with classical shadows on near-term quantum computers
This paper introduces a quantum digital signature scheme that uses only classical communication by leveraging 'classical shadows' of quantum states produced by random circuits as public keys. The authors demonstrate improved noise tolerance and experimentally validate their approach using 32-qubit circuits on near-term quantum hardware.
Key Contributions
- Novel quantum digital signature scheme requiring only classical communication using classical shadows
- Improved state-certification primitive with higher noise tolerance and lower sample complexity
- Experimental demonstration on 32-qubit states with circuits containing ≥80 logical gates
View Full Abstract
Quantum mechanics provides cryptographic primitives whose security is grounded in hardness assumptions independent of those underlying classical cryptography. However, existing proposals require low-noise quantum communication and long-lived quantum memory, capabilities which remain challenging to realize in practice. In this work, we introduce a quantum digital signature scheme that operates with only classical communication, using the classical shadows of states produced by random circuits as public keys. We provide theoretical and numerical evidence supporting the conjectured hardness of learning the private key (the circuit) from the public key (the shadow). A key technical ingredient enabling our scheme is an improved state-certification primitive that achieves higher noise tolerance and lower sample complexity than prior methods. We realize this certification by designing a high-rate error-detecting code tailored to our random-circuit ensemble and experimentally generating shadows for 32-qubit states using circuits with $\geq 80$ logical ($\geq 582$ physical) two-qubit gates, attaining 0.90 $\pm$ 0.01 fidelity. With increased number of measurement samples, our hardware-demonstrated primitives realize a proof-of-principle quantum digital signature, demonstrating the near-term feasibility of our scheme.
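The shadow primitive itself can be illustrated with the standard random-Pauli construction on a single qubit; the paper's scheme instead uses shadows of 32-qubit random-circuit states and a tailored error-detecting code, so the sketch below only shows what a classical shadow is, not the signature protocol:

```python
# Toy single-qubit classical-shadow estimator (random Pauli measurements).
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # rotates X eigenbasis to Z
Sdg = np.diag([1.0, -1j])                          # phase-dagger gate
BASIS_ROT = {"X": H, "Y": H @ Sdg, "Z": I2}        # measure U rho U^dag in Z basis

psi = np.array([np.cos(0.3), np.exp(1j * 0.7) * np.sin(0.3)])
rho = np.outer(psi, psi.conj())

def one_snapshot(rho):
    b = rng.choice(list(BASIS_ROT))
    U = BASIS_ROT[b]
    probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
    outcome = rng.choice(2, p=probs / probs.sum())
    ket = np.zeros(2); ket[outcome] = 1.0
    # Inverse of the single-qubit Pauli measurement channel: 3 U^dag |s><s| U - I
    return 3 * U.conj().T @ np.outer(ket, ket) @ U - I2

shadows = [one_snapshot(rho) for _ in range(20_000)]
rho_hat = np.mean(shadows, axis=0)                 # unbiased estimate of rho
Z = np.diag([1.0, -1.0])
print(np.trace(Z @ rho).real, np.trace(Z @ rho_hat).real)   # should be close
```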
Review of Superconducting Qubit Devices and Their Large-Scale Integration
This paper provides a comprehensive review of superconducting qubit quantum computers, covering the fundamental physics, device engineering, and scaling challenges. It examines key technical requirements through DiVincenzo's criteria and discusses the path toward large-scale integration using electronic design automation tools.
Key Contributions
- Comprehensive review of superconducting qubit technologies and their implementation challenges
- Analysis of large-scale integration approaches for superconducting quantum computers
- Discussion of electronic design automation tools for quantum computer design
- Review of fault-tolerant quantum computing requirements and entanglement gate operations
View Full Abstract
The superconducting qubit quantum computer is one of the most promising quantum computing architectures for large-scale integration due to its maturity and close proximity to the well-established semiconductor manufacturing infrastructure. From an education perspective, it also bridges classical microwave electronics and quantum electrodynamics. In this paper, we will review the basics of quantum computers, superconductivity, and Josephson junctions. We then introduce important technologies and concepts related to DiVincenzo's criteria, which are the necessary conditions for the superconducting qubits to work as a useful quantum computer. Firstly, we will discuss various types of superconducting qubits formed with Josephson junctions, from which we will understand the trade-off across multiple design parameters, including their noise immunity. Secondly, we will discuss different schemes to achieve entanglement gate operations, which are a major bottleneck in achieving more efficient fault-tolerant quantum computing. Thirdly, we will review readout engineering, including the implementations of the Purcell filters and quantum-limited amplifiers. Finally, we will discuss the nature and review the studies of two-level system defects, which are currently the limiting factor of qubit coherence time. DiVincenzo's criteria are only the necessary conditions for a technology to be eligible for quantum computing. To have a useful quantum computer, large-scale integration is required. We will review proposals and developments for the large-scale integration of superconducting qubit devices. By comparing with the application of electronic design automation (EDA) in semiconductors, we will also review the use of EDA in superconducting qubit quantum computer design, which is necessary for its large-scale integration.
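As a reference point for the qubit-design trade-offs discussed in the review, the standard single-junction qubit Hamiltonian (charge/transmon family) is

$$
H = 4E_C\,(\hat n - n_g)^2 - E_J\cos\hat\varphi ,
$$

where the ratio $E_J/E_C$ sets the familiar trade-off: larger $E_J/E_C$ (transmon regime) suppresses sensitivity to charge noise but shrinks the anharmonicity available for fast selective gates.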
Resource-Efficient Digitized Adiabatic Quantum Factorization
This paper develops a more efficient quantum algorithm for integer factorization using digitized adiabatic quantum computing. The researchers propose encoding the solution in the kernel subspace of the problem Hamiltonian and reformulate the problem as QUBO instead of PUBO, demonstrating improved performance for factoring integers up to 8 bits with reduced circuit complexity.
Key Contributions
- Novel kernel subspace encoding approach for adiabatic quantum factorization
- Reformulation of factorization problem from PUBO to QUBO framework with reduced gate complexity
View Full Abstract
Digitized adiabatic quantum factorization is a hybrid algorithm that exploits the advantage of digitized quantum computers to implement efficient adiabatic algorithms for factorization through gate decompositions of analog evolutions. In this paper, we harness the flexibility of digitized computers to derive a digitized adiabatic algorithm able to reduce the gate-demanding costs of implementing factorization. To this end, we propose a new approach for adiabatic factorization by encoding the solution of the problem in the kernel subspace of the problem Hamiltonian, instead of using ground-state encoding considered in the standard adiabatic factorization proposed by Peng et al. [Phys. Rev. Lett. 101, 220405 (2008)]. Our encoding enables the design of adiabatic factorization algorithms belonging to the class of Quadratic Unconstrained Binary Optimization (QUBO) methods, instead of the Polynomial Unconstrained Binary Optimization (PUBO) used by standard adiabatic factorization. We illustrate the performance of our QUBO algorithm by implementing the factorization of integers $N$ up to 8 bits. The results demonstrate a substantial improvement over the PUBO formulation, both in terms of reduced circuit complexity and increased fidelity in identifying the correct solution.
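The kernel-encoding idea can be illustrated with a brute-force toy: the nonnegative cost $H(p,q) = (N - pq)^2$ vanishes exactly on factor pairs, so solutions span the kernel of $H$ viewed as a diagonal operator. Note this raw cost is quartic in the binary variables (PUBO); the paper's actual contribution, recasting it into QUBO form, is not reproduced here, and the example integer and bit widths are arbitrary choices.

```python
# Enumerate the zero-energy (kernel) states of H(p, q) = (N - p*q)^2.
import itertools

N = 143                     # 11 * 13, an assumed small example
n_bits = 4                  # odd factors encoded as 1 + 2 * (n_bits-bit integer)

def factor_from_bits(bits):
    return 1 + 2 * sum(b << i for i, b in enumerate(bits))

kernel_states = []
for pb in itertools.product((0, 1), repeat=n_bits):
    for qb in itertools.product((0, 1), repeat=n_bits):
        p, q = factor_from_bits(pb), factor_from_bits(qb)
        if (N - p * q) ** 2 == 0:           # zero energy = kernel state
            kernel_states.append((p, q))

print(kernel_states)        # [(11, 13), (13, 11)]
```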
Qudit Twisted-Torus Codes in the Bivariate Bicycle Framework
This paper develops improved quantum error correction codes called qudit twisted-torus codes that work with quantum systems based on higher-dimensional units (qudits) rather than just qubits. The researchers show these twisted designs achieve better performance metrics than previous untwisted versions and outperform existing qubit-based codes.
Key Contributions
- Extension of twisted-torus quantum error correction codes to qudit systems over finite fields
- Demonstration that twisted-torus qudit codes achieve larger distances and better rate-distance tradeoffs than untwisted counterparts and previous qubit implementations
View Full Abstract
We study finite-length qudit quantum low-density parity-check (LDPC) codes from translation-invariant CSS constructions on two-dimensional tori with twisted boundary conditions. Recent qubit work [PRX Quantum 6, 020357 (2025)] showed that, within the bivariate-bicycle viewpoint, twisting generalized toric patterns can significantly improve finite-size performance as measured by $k d^{2}/n$. Here $n$ denotes the number of physical qudits, $k$ the number of logical qudits, and $d$ the code distance. Building on this insight, we extend the search to qudit codes over finite fields. Using algebraic methods, we compute the number of logical qudits and identify compact codes with favorable rate--distance tradeoffs. Overall, for the finite sizes explored, twisted-torus qudit constructions typically achieve larger distances than their untwisted counterparts and outperform previously reported twisted qubit instances. The best new codes are tabulated.
Approximate simulation of complex quantum circuits using sparse tensors
This paper presents a new method for simulating quantum circuits on classical computers using sparse tensor networks, which can efficiently represent and manipulate quantum states that don't have underlying symmetries. The approach provides improved runtime scaling with respect to circuit size and depth compared to traditional methods.
Key Contributions
- Novel sparse tensor data structure for quantum states without symmetry
- Efficient contraction and truncation algorithms for sparse tensor networks
- Improved runtime scaling for quantum circuit simulation
View Full Abstract
The study of quantum circuit simulation using classical computers is a key research topic that helps define the boundary of verifiable quantum advantage, solve quantum many-body problems, and inform development of quantum hardware and software. Tensor networks have become forefront mathematical tools for these tasks. Here we introduce a method to approximately simulate quantum circuits using sparsely-populated tensors. We describe a sparse tensor data structure that can represent quantum states with no underlying symmetry, and outline algorithms to efficiently contract and truncate these tensors. We show that the data structure and contraction algorithm are efficient, leading to expected runtime scalings versus qubit number and circuit depth. Our results motivate future research in optimization of sparse tensor networks for quantum simulation.
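A stripped-down illustration of the "keep only the significant entries" idea, using a dictionary of basis-state amplitudes rather than the paper's sparse tensor-network structure; the gate application and truncation threshold below are illustrative assumptions:

```python
# Sparse amplitude store: {basis_index: amplitude}. Apply a 1-qubit gate to the
# stored entries only, then truncate amplitudes below a threshold (the
# approximation step) and renormalize.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_1q(state, gate, qubit, threshold=1e-8):
    new = {}
    for idx, amp in state.items():
        bit = (idx >> qubit) & 1
        for new_bit in (0, 1):
            a = gate[new_bit, bit] * amp
            if a == 0:
                continue
            new_idx = (idx & ~(1 << qubit)) | (new_bit << qubit)
            new[new_idx] = new.get(new_idx, 0.0) + a
    new = {k: v for k, v in new.items() if abs(v) > threshold}
    norm = np.sqrt(sum(abs(v) ** 2 for v in new.values()))
    return {k: v / norm for k, v in new.items()}

state = {0: 1.0 + 0j}                   # |00...0> on an arbitrary number of qubits
state = apply_1q(state, H, qubit=3)     # only 2 nonzero amplitudes are stored
print(state)
```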
Detailed, interpretable characterization of mid-circuit measurement on a transmon qubit
This paper develops new methods to analyze and understand mid-circuit measurements on quantum computing hardware by adapting error analysis techniques to break down measurement errors into physically meaningful components. The researchers applied their approach to a transmon qubit device to identify and quantify specific error sources like amplitude damping and readout errors.
Key Contributions
- Adapted error generator formalism to mid-circuit measurements for better physical interpretation
- Demonstrated detailed characterization of measurement errors on transmon qubits including amplitude damping and readout errors
- Showed how measurement errors vary with readout pulse parameters and validated theoretical predictions
View Full Abstract
Mid-circuit measurements (MCMs) are critical components of the quantum error correction protocols expected to enable utility-scale quantum computing. MCMs can be modeled by quantum instruments (a type of quantum operation or process), which can be characterized self-consistently using gate set tomography. However, experimentally estimated quantum instruments are often hard to interpret or relate to device physics. We address this challenge by adapting the error generator formalism -- previously used to interpret noisy quantum gates by decomposing their error processes into physically meaningful sums of "elementary errors" -- to MCMs. We deploy our new analysis on a transmon qubit device to tease out and quantify error mechanisms including amplitude damping, readout error, and imperfect collapse. We examine in detail how the magnitudes of these errors vary with the readout pulse amplitude, recover the key features of dispersive readout predicted by theory, and show that these features can be modeled parsimoniously using a reduced model with just a few parameters.
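The error generator picture referred to here writes an estimated noisy operation $\tilde G$ relative to its ideal counterpart $G_0$ as (standard form of the formalism; the MCM-specific extension is the paper's contribution)

$$
\tilde G = e^{\mathbb{L}}\, G_0,\qquad
\mathbb{L} = \sum_{P} h_P\, H_P + \sum_{P} s_P\, S_P + \cdots ,
$$

where $H_P$ are Hamiltonian (coherent) and $S_P$ stochastic Pauli elementary generators, and the expansion coefficients are the physically interpretable error rates (e.g., amplitude damping, readout error) that the analysis extracts.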
Accelerating qubit reset through the Mpemba effect
This paper demonstrates how to accelerate qubit reset (initialization to ground state) by up to 50% using the Mpemba effect, where a two-qubit gate converts slow-decaying single-qubit coherences into faster-decaying two-qubit coherences. The researchers implemented and validated this protocol on a superconducting quantum processor.
Key Contributions
- Novel protocol using Mpemba effect to accelerate passive qubit reset by up to 50%
- Experimental demonstration on superconducting quantum processor with analysis of robustness under realistic error conditions
View Full Abstract
Passive qubit reset is a key primitive for quantum information processing, whereby qubits are initialized by allowing them to relax to their ground state through natural dissipation, without the need for active control or feedback. However, passive reset occurs on timescales that are much longer than those of gate operations and measurements, making it a significant bottleneck for algorithmic execution. Here, we show that this limitation can be overcome by exploiting the Mpemba effect, originally indicating the faster cooling of hot systems compared to cooler ones. Focusing on the regime where coherence times exceed energy relaxation times ($T_2 > T_1$), we propose a simple protocol based on a single entangling two-qubit gate that converts local single-qubit coherences into fast-decaying global two-qubit coherences. This removes their overlap with the slowest decaying Liouvillian mode and enables a substantially faster relaxation to the ground state. For realistic parameters, we find that our protocol can reduce reset times by up to $50\%$ compared to standard passive reset. We analyze the robustness of the protocol under non-Markovian noise, imperfect coherent control and finite temperature, finding that the accelerated reset persists across a broad range of realistic error sources. Finally, we present an experimental implementation of our protocol on an IQM superconducting quantum processor. Our results demonstrate how Mpemba-like accelerated relaxation can be harnessed as a practical tool for fast and accurate qubit initialization.
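The mechanism can be stated compactly via the Liouvillian spectral decomposition (generic open-system form, not specific to the IQM device):

$$
\rho(t) = \rho_{\mathrm{ss}} + \sum_{k\ge 1} c_k\, e^{\lambda_k t} R_k,\qquad
0 > \mathrm{Re}\,\lambda_1 \ge \mathrm{Re}\,\lambda_2 \ge \cdots ,
$$

where $R_k$ are right eigenmodes of the Liouvillian and $c_k$ the overlaps of the initial state with the corresponding left eigenmodes. The entangling gate is chosen so that $c_1 \approx 0$ for the slowest mode (here, the long-lived single-qubit coherences when $T_2 > T_1$), so relaxation proceeds at the faster rate $|\mathrm{Re}\,\lambda_2|$.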
An Evaluation of the Remote CX Protocol under Noise in Distributed Quantum Computing
This paper evaluates how the remote CX protocol performs under noise when connecting multiple quantum processing units (QPUs) in distributed quantum computing networks. The researchers simulate different network configurations and qubit assignment strategies to understand how noise degrades performance when running quantum algorithms across distributed quantum computers.
Key Contributions
- Evaluation of remote CX protocol performance under noise in distributed quantum computing systems
- Comparison of naive versus graph partitioning strategies for qubit assignment across multiple QPUs
- Performance analysis on various quantum algorithms including Grover's algorithm in distributed settings
View Full Abstract
Quantum computers connected through classical and quantum communication channels can be combined to function as a single unit to run large quantum circuits that each device is unable to execute on its own. The distributed quantum computing paradigm is therefore often seen as a potential pathway to scaling quantum computing to capacities necessary for practical and large-scale applications. Whether connecting multiple quantum processing units (QPUs) in clusters or over networks, quantum communication requires entanglement to be generated and distributed over distances. Using entanglement, the remote CX protocol can be performed, which allows the application of the CX gate involving qubits located in different QPUs. In this work, we use a specialized simulation framework for a high-level evaluation of the impact of the protocol when executed under noise in various network configurations using different numbers of QPUs. We compare naive and graph partitioning qubit assignment strategies and how they affect the fidelity in experiments run on Grover, GHZ, VQC, and random circuits. The results provide insights on how QPU and network configurations or naive scheduling can degrade performance.
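A minimal statevector check of the standard one-ebit remote-CNOT construction, written here in its deferred-measurement form (the classically conditioned X/Z corrections of the LOCC protocol appear as controlled gates); the paper evaluates the measured, noise-afflicted version of this protocol across QPUs:

```python
# Remote CNOT between a control on QPU A and a target on QPU B, consuming one
# shared Bell pair on communication qubits eA, eB.
import numpy as np

H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def apply(op, qubits, psi, n):
    """Apply a 1- or 2-qubit gate to the listed qubit axes of an n-qubit state."""
    k = len(qubits)
    psi = psi.reshape([2] * n)
    op = op.reshape([2] * (2 * k))
    psi = np.tensordot(op, psi, axes=(list(range(k, 2 * k)), list(qubits)))
    psi = np.moveaxis(psi, list(range(k)), list(qubits))
    return psi.reshape(-1)

rng = np.random.default_rng(1)
ctrl = rng.normal(size=2) + 1j * rng.normal(size=2); ctrl /= np.linalg.norm(ctrl)
targ = rng.normal(size=2) + 1j * rng.normal(size=2); targ /= np.linalg.norm(targ)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |Phi+> shared by eA, eB

# Qubit order: 0 = control (QPU A), 1 = eA, 2 = eB, 3 = target (QPU B)
psi = np.kron(np.kron(ctrl, bell), targ)
psi = apply(CX, (0, 1), psi, 4)   # local CNOT: control -> eA
psi = apply(CX, (1, 2), psi, 4)   # deferred X correction (classically: measure eA in Z)
psi = apply(CX, (2, 3), psi, 4)   # local CNOT: eB -> target
psi = apply(H,  (2,),   psi, 4)   # basis change for the X measurement of eB
psi = apply(CZ, (2, 0), psi, 4)   # deferred Z correction on the control

# Both communication qubits end in |+>; projecting them out leaves CNOT(ctrl -> targ).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
data = np.einsum('a,b,iabj->ij', plus, plus, psi.reshape(2, 2, 2, 2)).reshape(-1)
expected = CX @ np.kron(ctrl, targ)
print(abs(np.vdot(expected, data)) ** 2)            # ~1.0
```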
Even More Efficient Soft-Output Decoding with Extra-Cluster Growth and Early Stopping
This paper develops more efficient methods for computing soft outputs in quantum error correction decoders, specifically focusing on cluster-based decoders like Union-Find. The authors introduce early-stopping techniques and new soft-output types that reduce computational overhead while maintaining hardware compatibility with existing FPGA implementations.
Key Contributions
- Introduction of bounded cluster gap and extra-cluster gap soft-output methods with early stopping
- Development of hardware-compatible soft-output computation for FPGA-implemented Union-Find decoders
- Improved computational scaling with code distance compared to previous methods
View Full Abstract
In fault-tolerant quantum computing, soft outputs from real-time decoders play a crucial role in improving decoding accuracy, post-selecting magic states, and accelerating lattice surgery. A recent paper by Meister et al. [arXiv:2405.07433 (2024)] proposed an efficient method to evaluate soft outputs for cluster-based decoders, including the Union-Find (UF) decoder. However, in parallel computing environments, its computational complexity is comparable to or even surpasses that of the UF decoder itself, resulting in a substantial overhead. Furthermore, this method requires global information about the decoding graph, making it poorly suited for existing hardware implementations of the UF decoder on Field-Programmable Gate Arrays (FPGAs). In this paper, to alleviate these issues, we develop more efficient methods for evaluating high-quality soft outputs in cluster-based decoders by introducing several early-stopping techniques. Our central idea is that the precise value of a large soft output is often unnecessary in practice. Based on this insight, we introduce two types of novel soft-outputs: the bounded cluster gap and the extra-cluster gap. The former reduces the computational complexity of Meister's method by terminating the calculation at an early stage. Our numerical simulations show that this method achieves improved scaling with code distance $d$ compared to the original proposal. The latter, the extra-cluster gap, quantifies decoder reliability by performing a small, additional growth of the clusters obtained by the decoder. This approach offers the significant advantage of enabling soft-output computation without modifying the existing architecture of FPGA-implemented UF decoders. These techniques offer lower computational complexity and higher hardware compatibility, laying a crucial foundation for future real-time decoders with soft outputs.
Device variability of Josephson junctions induced by interface roughness
This paper develops a quantitative model to predict how microscopic surface roughness at the interfaces of superconducting Josephson junctions causes device-to-device variability in their energy parameters. The researchers simulate thousands of junctions to understand how manufacturing imperfections affect the consistency of quantum processor components.
Key Contributions
- Quantitative model linking interface roughness parameters to Josephson energy variability
- Statistical characterization showing Josephson energy follows log-normal distribution with identified scaling relationships
View Full Abstract
As quantum processors scale to large qubit numbers, device-to-device variability emerges as a critical challenge. Superconducting qubits are commonly realized using Al/AlO$_{\text{x}}$/Al Josephson junctions operating in the tunneling regime, where even minor variations in device geometry can lead to substantial performance fluctuations. In this work, we develop a quantitative model for the variability of the Josephson energy $E_{J}$ induced by interface roughness at the Al/AlO$_{\text{x}}$ interfaces. The roughness is modeled as a Gaussian random field characterized by two parameters: the root-mean-square roughness amplitude $\sigma$ and the transverse correlation length $\xi$. These parameters are extracted from the literature and molecular dynamics simulations. Quantum transport is treated using the Ambegaokar--Baratoff relation combined with a local thickness approximation. Numerical simulations over $5,000$ Josephson junctions show that $E_{J}$ follows a log-normal distribution. The mean value of $E_{J}$ increases with $\sigma$ and decreases slightly with $\xi$, while the variance of $E_{J}$ increases with both $\sigma$ and $\xi$. These results paint a quantitative and intuitive picture of Josephson energy variability induced by surface roughness, with direct relevance for junction design.
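A toy version of the pipeline (the Gaussian-field roughness and exponential local-thickness transparency follow the abstract, but every constant below is an illustrative assumption, not a fitted value):

```python
# Sample many junctions: correlated roughness field -> local barrier thickness
# -> area-averaged transparency (local thickness approximation, proportional to
# E_J via Ambegaokar-Baratoff) -> distribution of E_J across devices.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
L, sigma_nm, xi_px, kappa = 64, 0.2, 4, 1.2   # grid, rms roughness, corr. length, decay const.

def sample_EJ():
    field = gaussian_filter(rng.normal(size=(L, L)), xi_px)   # correlated Gaussian field
    field *= sigma_nm / field.std()                           # set rms amplitude
    thickness = 1.8 + field                                   # nm, around a nominal barrier
    return np.mean(np.exp(-2 * kappa * thickness))            # mean local transparency

EJ = np.array([sample_EJ() for _ in range(5000)])
logEJ = np.log(EJ)
print("relative spread:", EJ.std() / EJ.mean())
print("skewness of log(E_J) (near 0 if log-normal):",
      np.mean((logEJ - logEJ.mean()) ** 3) / logEJ.std() ** 3)
```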
Accelerating the Tesseract Decoder for Quantum Error Correction
This paper optimizes the Tesseract decoder, a quantum error correction algorithm that uses A* search to find the most likely errors in quantum codes. The researchers implemented four performance enhancements including better data structures and memory layouts, achieving 2-5x speedups across various quantum error correction code families.
Key Contributions
- Systematic optimization of Tesseract decoder achieving 2-5x performance improvements
- Implementation of four targeted optimization strategies including data structure improvements and memory layout reorganization
- Demonstration of consistent speedups across multiple quantum error correction code families including Surface Codes and Color Codes
View Full Abstract
Quantum Error Correction (QEC) is essential for building robust, fault-tolerant quantum computers; however, the decoding process often presents a significant computational bottleneck. Tesseract is a novel Most-Likely-Error (MLE) decoder for QEC that employs the A* search algorithm to explore an exponentially large graph of error hypotheses, achieving high decoding speed and accuracy. This paper presents a systematic approach to optimizing the Tesseract decoder through low-level performance enhancements. Based on extensive profiling, we implemented four targeted optimization strategies, including the replacement of inefficient data structures, reorganization of memory layouts to improve cache hit rates, and the use of hardware-accelerated bit-wise operations. We achieved significant decoding speedups across a wide range of code families and configurations, including Color Codes, Bivariate-Bicycle Codes, Surface Codes, and Transversal CNOT Protocols. Our results demonstrate consistent speedups of approximately 2x for most code families, often exceeding 2.5x. Notably, we achieved a peak performance gain of over 5x for the most computationally demanding configurations of Bivariate-Bicycle Codes. These improvements make the Tesseract decoder more efficient and scalable, serving as a practical case study that highlights the importance of high-performance software engineering in QEC and providing a strong foundation for future research.
On the Spectral theory of Isogeny Graphs and Quantum Sampling of Hard Supersingular Elliptic curves
This paper presents a quantum algorithm for sampling random supersingular elliptic curves with unknown endomorphism rings, which is crucial for isogeny-based cryptography. The algorithm provides the first provable quantum polynomial-time solution to generate these 'hard' curves without requiring a trusted setup.
Key Contributions
- First provable quantum polynomial-time algorithm for sampling hard supersingular elliptic curves
- Proof of Quantum Unique Ergodicity conjecture for supersingular isogeny graphs
- Stronger eigenvalue separation property for isogeny graphs removing heuristic assumptions in quantum money protocols
View Full Abstract
In this paper we study the problem of sampling random supersingular elliptic curves with unknown endomorphism rings. This task has recently attracted significant attention, as the secure instantiation of many isogeny-based cryptographic protocols relies on the ability to sample such ``hard'' curves. Existing approaches, however, achieve this only in a trusted-setup setting. We present the first provable quantum polynomial-time algorithm that samples a random hard supersingular elliptic curve with high probability. Our algorithm runs heuristically in $\tilde{O}\!\left(\log^{4}p\right)$ quantum gate complexity and in $\tilde{O}\!\left(\log^{13} p\right)$ under the Generalized Riemann Hypothesis. As a consequence, our algorithm gives a secure instantiation of the CGL hash function and other cryptographic primitives. Our analysis relies on a new spectral delocalization result for supersingular $\ell$-isogeny graphs: we prove the Quantum Unique Ergodicity conjecture, and we further provide numerical evidence for complete eigenvector delocalization; this theoretical result may be of independent interest. Along the way, we prove a stronger $\varepsilon$-separation property for eigenvalues of isogeny graphs than that predicted in the quantum money protocol of Kane, Sharif, and Silverberg, thereby removing a key heuristic assumption in their construction.
Real-time detection of correlated quasiparticle tunneling events in a multi-qubit superconducting device
This paper develops a method to detect quasiparticle tunneling events in real-time across multiple superconducting qubits, revealing that these error-causing events occur individually at low rates but in correlated bursts across devices about once per minute.
Key Contributions
- Real-time detection method for quasiparticle tunneling events with microsecond temporal resolution
- Discovery of correlated burst episodes across multiple qubits that increase quasiparticle tunneling rates a thousand-fold
- Characterization of burst lifetimes and spatial correlation structure in superconducting quantum devices
View Full Abstract
Quasiparticle tunneling events are a source of decoherence and correlated errors in superconducting circuits. Understanding and ultimately mitigating these errors calls for real-time detection of quasiparticle tunneling events on individual devices. In this work, we simultaneously detect quasiparticle tunneling events in two co-housed, charge-sensitive transmons coupled to a common waveguide. We measure background quasiparticle tunneling rates at the single-hertz level, with temporal resolution of tens of microseconds. Using time-tagged coincidence analysis, we show that individual events are uncorrelated across devices, whereas burst episodes occur about once per minute and are largely correlated. These bursts have a characteristic lifetime of 7 ms and induce a thousand-fold increase in the quasiparticle tunneling rate across both devices. In addition, we identify a rarer subset of bursts which are accompanied by a shift in the offset charge, at approximately one event per hour. Our results establish a practical and extensible method to identify quasiparticle bursts in superconducting circuits, as well as their correlations and spatial structure, advancing routes to suppress correlated errors in superconducting quantum processors.
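A sketch of the time-tagged coincidence idea: given per-qubit lists of event times, count pairs falling within a coincidence window and compare against the accidental rate expected for independent Poisson processes. The rates, window, and injected burst below are illustrative, not the measured device values:

```python
import numpy as np

rng = np.random.default_rng(0)
T, r1, r2, window = 3600.0, 1.0, 1.0, 1e-3          # seconds; ~1 Hz background per qubit

t1 = np.sort(rng.uniform(0, T, rng.poisson(r1 * T)))  # qubit-1 event times
t2 = np.sort(rng.uniform(0, T, rng.poisson(r2 * T)))  # qubit-2 event times

# Inject a correlated "burst": both qubits fire rapidly within a ~7 ms episode
burst = 1000.0 + np.sort(rng.uniform(0, 7e-3, 50))
t1, t2 = np.sort(np.concatenate([t1, burst])), np.sort(np.concatenate([t2, burst]))

# For each qubit-1 event, is the nearest qubit-2 event within the window?
idx = np.searchsorted(t2, t1)
left = np.abs(t1 - t2[np.clip(idx - 1, 0, len(t2) - 1)])
right = np.abs(t2[np.clip(idx, 0, len(t2) - 1)] - t1)
coincidences = np.sum(np.minimum(left, right) < window)

expected_accidental = 2 * r1 * r2 * window * T        # independent-process estimate
print(coincidences, "vs accidental ~", expected_accidental)
```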
Numerical Error Extraction by Quantum Measurement Algorithm
This paper introduces NEEQMA, a quantum algorithm that uses quantum measurements to determine the exact convergence constants for iterative quantum gate implementations, allowing for better optimization of quantum algorithms like Quantum Signal Processing and Hamiltonian Simulation.
Key Contributions
- Introduces NEEQMA protocol for extracting convergence constants from quantum gate approximations
- Demonstrates application to Quantum Signal Processing and Hamiltonian Simulation optimization
- Provides method to minimize convergence parameters while maintaining required accuracy
View Full Abstract
Important quantum algorithm routines allow the implementation of specific quantum operations (a.k.a. gates) by combining basic quantum circuits with an iterative structure. In this structure, the number of repetitions of the basic circuit pattern is associated to convergence parameters. This iterative structure behaves similarly to function approximation by series expansion: the higher the truncation order, the better the target gate (i.e. operation) approximation. The asymptotic convergence of the gate error with respect to the number of basic pattern repetitions is known. It is referred to as the query complexity. The underlying convergence law is bounded, but not in an explicit fashion. Upper bounds are generally too pessimistic to be useful in practice. The actual convergence law contains constants that depend on the joint properties of the matrix encoded by the query and the initial state vector, which are difficult to compute classically. This paper proposes a strategy to study this convergence law and extract the associated constants from the gate (operation) approximation at different accuracy (convergence parameter) constructed directly on a Quantum Processing Unit (QPU). This protocol is called Numerical Error Extraction by Quantum Measurement Algorithm (NEEQMA). NEEQMA concepts are tested on specific instances of Quantum Signal Processing (QSP) and Hamiltonian Simulation by Trotterization. Knowing the exact convergence constants allows for selecting the smallest convergence parameters that enable reaching the required gate approximation accuracy, hence satisfying the quantum algorithm's requirements.
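The extraction step itself is ordinary curve fitting once the gate error has been estimated on the QPU at several convergence parameters; a schematic version (the power-law model and "measured" values below are placeholders, not NEEQMA's actual convergence law or data):

```python
# Fit error(K) = C * K**(-alpha) to gate-approximation errors at several
# truncation orders K, then pick the smallest K meeting a target accuracy.
import numpy as np
from scipy.optimize import curve_fit

K = np.array([2, 4, 8, 16, 32], dtype=float)
measured_error = np.array([0.21, 0.052, 0.013, 0.0033, 0.00081])   # placeholder data

model = lambda K, C, alpha: C * K ** (-alpha)
(C, alpha), _ = curve_fit(model, K, measured_error, p0=(1.0, 2.0))

target = 1e-3
K_min = int(np.ceil((C / target) ** (1.0 / alpha)))
print(f"fitted C={C:.3f}, alpha={alpha:.2f}; smallest K for error<{target}: {K_min}")
```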
Sawtooth wave adiabatic passage in a grating magneto-optical trap
This paper demonstrates a new cooling technique called sawtooth wave adiabatic passage (SWAP) in a grating magneto-optical trap for strontium atoms. The technique achieves better atom cooling and trapping efficiency compared to conventional methods, resulting in colder temperatures and improved transfer between different atomic states.
Key Contributions
- Demonstration of SWAP technique in grating MOT with complex polarization environment
- Factor of two improvement in transfer efficiency between atomic states
- Achievement of ultra-cold temperatures (4.9 μK) with large atom numbers for sensing applications
View Full Abstract
We demonstrate sawtooth wave adiabatic passage (SWAP) in a grating magneto-optical trap (MOT) operating on the $^1$S$_0$ $\rightarrow$ $^3$P$_1$ transition of neutral $^{88}$Sr. From numerical simulations of SWAP using our laser beam geometry, we find that SWAP provides greater cooling than triangle wave frequency modulation despite the complex polarization environment of a grating MOT. The simulation is confirmed by our experimental results, where we demonstrate a factor of two improvement in transfer efficiency between our $^1$S$_0$ $\rightarrow$ $^1$P$_1$ grating MOT and our $^1$S$_0$ $\rightarrow$ $^3$P$_1$ grating MOT. We trap up to $3\times10^6$ $^{88}$Sr atoms in the $^1$S$_0$ $\rightarrow$ $^3$P$_1$ grating MOT, at an average temperature of 4.9 $μ$K with a lifetime of approximately 0.7 s. Our results show that SWAP is effective in non-orthogonal laser beam geometries, allowing greater duty cycles or higher atom number in sensors based on narrow-line grating MOTs.
Hybrid Quantum Image Preparation via JPEG Compression
This paper develops a hybrid classical-quantum method for loading images into quantum computers more efficiently by using JPEG compression techniques. The approach reduces the number of quantum gates needed while maintaining image quality comparable to classical JPEG compression.
Key Contributions
- Development of JPEG-assisted quantum pixel information encoding (JQPIE) that reduces CX gate count and circuit depth
- Introduction of quantization-free variant (QF-JQPIE) that avoids probabilistic block-encoded quantization while maintaining efficiency
View Full Abstract
We present a hybrid classical-quantum image preparation scheme that reduces the quantum implementation cost of image loading for quantum pixel information encoding (QPIE). The proposed method, termed JPEG-assisted QPIE (JQPIE), loads only the quantized JPEG coefficients into a quantum register, leading to substantial reductions in CX gate count and circuit depth while preserving reconstruction quality comparable to classical JPEG compression. We develop two variants of the hybrid strategy. The first realizes the complete JPEG decompression pipeline coherently by implementing inverse quantization via a block-encoded unitary operator. The second, referred to as quantization-free JQPIE (QF-JQPIE), omits quantization altogether, thereby avoiding the probabilistic nature of block-encoded quantization. Numerical simulations on standard benchmark image datasets (USC--SIPI and Kodak) demonstrate that both variants achieve significant constant-factor reductions in CX gate count and circuit depth relative to direct QPIE loading, while maintaining high reconstruction quality as measured by PSNR and SSIM.
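A classical sketch of the compression half of the pipeline (DCT, uniform quantization, and counting the surviving coefficients that would need to be amplitude-encoded); the block size, quantization step, and the link "fewer nonzero coefficients means cheaper state preparation" are illustrative assumptions here, not the paper's gate-count analysis or the JPEG standard tables:

```python
import numpy as np
from scipy.fft import dctn, idctn

img = np.add.outer(np.arange(8.0), np.arange(8.0)) * 12.0 + 40.0   # smooth 8x8 test block

coeffs = dctn(img - 128.0, norm="ortho")          # JPEG-style level shift + 2D DCT
q_step = 24.0                                     # single uniform step (not the JPEG table)
quantized = np.round(coeffs / q_step)

nonzero = np.count_nonzero(quantized)
print(f"coefficients to load: {nonzero} of {img.size}")   # far fewer amplitudes to prepare

amps = (quantized * q_step).flatten()             # what a QPIE-style register would carry
amps = amps / np.linalg.norm(amps)

recon = idctn(quantized * q_step, norm="ortho") + 128.0   # classical reconstruction check
print("max abs reconstruction error:", np.abs(recon - img).max())
```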
Experimental Quantum Bernoulli Factories via Bell-Basis Measurements
This paper demonstrates a quantum Bernoulli factory using Bell-basis measurements on IBM quantum hardware to generate random bits with specific probability distributions. The experiment shows quantum advantages in randomness processing by implementing classically impossible functions like probability doubling using only quantum measurement outcomes.
Key Contributions
- Experimental demonstration of entanglement-assisted quantum Bernoulli factory on superconducting hardware
- Implementation of classically inconstructible randomness processing functions including probability doubling f(p)=2p
- Benchmarking of quantum-to-classical randomness conversion with analysis of device noise effects
View Full Abstract
Randomness processing in the Bernoulli factory framework provides a concrete setting in which quantum resources can outperform classical ones. We experimentally demonstrate an entanglement-assisted quantum Bernoulli factory based on Bell-basis measurements of two identical input quoins prepared on IBM superconducting hardware. Using only the measurement outcomes (and no external classical randomness source), we realize the classically inconstructible Bernoulli doubling primitive $f(p)=2p$ and, as intermediate outputs from the same Bell-measurement statistics, an exact fair coin $f(p)=1/2$ and the classically inconstructible function $f(p)=4p(1-p)$. We benchmark the measured output biases against ideal predictions and discuss the impact of device noise. Our results establish a simple, resource-efficient experimental primitive for quantum-to-classical randomness processing and support the viability of quantum Bernoulli factories for quantum-enhanced stochastic simulation and sampling tasks.
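The Bell-measurement statistics quoted above can be checked with a small Monte Carlo on ideal, noiseless quoins; this reproduces the exact fair coin and the $4p(1-p)$ output, but not the further processing used for the doubling primitive or the hardware noise analysis:

```python
# Two "quoins" |p> = sqrt(1-p)|0> + sqrt(p)|1> measured in the Bell basis:
# Phi+ occurs with probability exactly 1/2 (a fair coin for any p), and,
# conditioned on "not Phi+", Psi+ occurs with probability 4p(1-p).
import numpy as np

rng = np.random.default_rng(0)
p = 0.3
quoin = np.array([np.sqrt(1 - p), np.sqrt(p)])
state = np.kron(quoin, quoin)                     # two independent quoins

bell = {
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}
probs = {k: abs(v @ state) ** 2 for k, v in bell.items()}

shots = 200_000
outcomes = rng.choice(list(probs), size=shots, p=list(probs.values()))
fair_coin = np.mean(outcomes == "Phi+")                 # -> 1/2
cond = outcomes[outcomes != "Phi+"]
four_p_one_minus_p = np.mean(cond == "Psi+")            # -> 4p(1-p)
print(fair_coin, four_p_one_minus_p, 4 * p * (1 - p))
```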
Quantum-enhanced Markov Chain Monte Carlo for Combinatorial Optimization
This paper presents a quantum-enhanced Markov Chain Monte Carlo algorithm for solving combinatorial optimization problems, specifically demonstrating success on Maximum Independent Set problems with up to 117 variables using IBM quantum hardware. The approach combines quantum sampling with classical optimization techniques like warm-starting and parallel tempering to find global optima.
Key Contributions
- Development of quantum-enhanced MCMC algorithm combining quantum sampling with classical optimization techniques
- Empirical demonstration of finding global optima for Maximum Independent Set problems up to 117 qubits on IBM quantum hardware
- Evidence of scaling advantage compared to classical methods for the tested problem instances
View Full Abstract
Quantum computing offers an alternative paradigm for addressing combinatorial optimization problems compared to classical computing. Despite recent hardware improvements, the execution of empirical quantum optimization experiments at scales known to be hard for state-of-the-art classical solvers is not yet in reach. In this work, we offer a different way to approach combinatorial optimization with near-term quantum computing. Motivated by the promising results observed in using quantum-enhanced Markov chain Monte Carlo (QeMCMC) for approximating complicated probability distributions, we combine ideas of sampling from the device with QeMCMC together with warm-starting and parallel tempering, in the context of combinatorial optimization. We demonstrate empirically that our algorithm recovers the global optima for instances of the Maximum Independent Set problem (MIS) up to 117 decision variables using 117 qubits on IBM quantum hardware. We show early evidence of a scaling advantage of our algorithm compared to similar classical methods for the chosen instances of MIS. MIS is practically relevant across domains like financial services and molecular biology, and, in some cases, already difficult to solve to optimality classically with only a few hundred decision variables.
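For reference, the classical scaffold into which the quantum proposals plug is an ordinary Metropolis chain over penalized bitstrings. The sketch below uses a plain single-bit-flip proposal where the paper would draw proposals from the quantum device, and omits the warm-starting and parallel-tempering layers; the graph and parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random graph; MIS energy = -(set size) + penalty * (# violated edges)
n, edges = 12, set()
while len(edges) < 18:
    i, j = rng.choice(n, 2, replace=False)
    edges.add((min(i, j), max(i, j)))
edges = list(edges)

def energy(x, penalty=2.0):
    return -x.sum() + penalty * sum(x[i] * x[j] for i, j in edges)

def metropolis(steps=20_000, beta=2.0):
    x = np.zeros(n, dtype=int)                 # (a warm start would go here)
    e = energy(x)
    best, best_e = x.copy(), e
    for _ in range(steps):
        y = x.copy()
        y[rng.integers(n)] ^= 1                # placeholder proposal (paper: quantum-sampled move)
        de = energy(y) - e
        if de <= 0 or rng.random() < np.exp(-beta * de):
            x, e = y, e + de
            if e < best_e:
                best, best_e = x.copy(), e
    return best, best_e

best, best_e = metropolis()
print("independent set found:", np.flatnonzero(best), "energy:", best_e)
```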
Determining the ensemble N-representability of Reduced Density Matrices
This paper develops a quantum algorithm to determine whether a given density matrix can represent a real N-electron quantum system, addressing a fundamental problem in electronic structure theory. The method uses variational quantum circuits to test if target density matrices are physically valid and can correct defective ones.
Key Contributions
- Development of a variational quantum algorithm for testing N-representability of ensemble reduced density matrices
- Introduction of a purification strategy to embed ensemble states into pure states on extended Hilbert spaces
- Demonstration of quantum-based error correction for defective density matrices with validation on molecular systems
View Full Abstract
The N-representability problem for reduced density matrices remains a fundamental challenge in electronic structure theory. Following our previous work that employs a unitary-evolution algorithm based on an adaptive derivative-assembled pseudo-Trotter variational quantum algorithm to probe pure-state N-representability of reduced density matrices [J. Chem. Theory Comput. 2024, 20, 9968], in this work we propose a practical framework for determining the ensemble N-representability of a p-body matrix. This is accomplished using a purification strategy consisting of embedding an ensemble state into a pure state defined on an extended Hilbert space, such that the reduced density matrices of the purified state reproduce those of the original ensemble. By iteratively applying variational unitaries to an initial purified state, the proposed algorithm minimizes the Hilbert-Schmidt distance between its p-body reduced density matrix and a specified target p-body matrix, which serves as a measure of the N-representability of the target. This methodology facilitates both error correction of defective ensemble reduced density matrices, and quantum-state reconstruction on a quantum computer, offering a route for density-matrix refinement. We validate the algorithm with numerical simulations on systems of two, three, and four electrons in both, simple models as well as molecular systems at finite temperature, demonstrating its robustness.
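In compact form, the test optimizes over purifications on an extended (system plus ancilla) register and uses the residual Hilbert-Schmidt distance as the representability measure; schematically (with ${}^p D$ denoting a p-body reduced density matrix, and this expression being a paraphrase of the abstract rather than the paper's exact cost):

$$
f\!\left({}^p D_{\mathrm{target}}\right)
= \min_{\theta}\;\Big\| {}^p D\!\big[\mathrm{Tr}_{\mathrm{anc}}\, U(\theta)\,|\psi_0\rangle\langle\psi_0|\,U(\theta)^{\dagger}\big] - {}^p D_{\mathrm{target}} \Big\|_{\mathrm{HS}}^{2},
$$

with a vanishing residual indicating that the target matrix is ensemble N-representable, and a nonzero residual both flagging a defective matrix and pointing to a nearby representable correction.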
Krylov Distribution
This paper introduces the Krylov distribution, a new mathematical tool that characterizes how quantum systems respond to energy perturbations by analyzing the resolvent operator in Krylov space. The authors identify three universal regimes of behavior and show connections to quantum geometric properties like fidelity susceptibility.
Key Contributions
- Introduction of Krylov distribution as a diagnostic tool for quantum response functions
- Identification of three universal scaling regimes in quantum systems
- Connection between Krylov-space analysis and quantum geometric tensor decompositions
View Full Abstract
We introduce the Krylov distribution $\mathcal{D}(\xi)$, a static Krylov-space diagnostic that characterizes how inverse-energy response is organized in Hilbert space. The central object is the resolvent-dressed state $(H-\xi)^{-1}|\psi_0\rangle$, whose decomposition in the Krylov basis generated from a reference state defines a normalized distribution over Krylov levels. Unlike conventional spectral functions, which resolve response solely along the energy axis, the Krylov distribution captures how the resolvent explores the dynamically accessible subspace as the spectral parameter $\xi$ is varied. Using asymptotic analysis, exact results in solvable models, and numerical studies of an interacting spin chain, we identify three universal regimes: saturation outside the spectral support, extensive growth within continuous spectra, and sublinear or logarithmic scaling near spectral edges and quantum critical points. We further show that fidelity susceptibility and the quantum geometric tensor admit natural decompositions in terms of Krylov-resolved resolvent amplitudes.
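A small numerical sketch of the object being defined: build a Krylov basis from a reference state, form the resolvent-dressed state $(H-\xi)^{-1}|\psi_0\rangle$, and read off its normalized weights over Krylov levels. The Hamiltonian and choice of $\xi$ below are arbitrary illustrations, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 40
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2                                # random real symmetric "Hamiltonian"
psi0 = rng.normal(size=dim); psi0 /= np.linalg.norm(psi0)

# Krylov basis from |psi0>: Gram-Schmidt on {psi0, H psi0, H^2 psi0, ...}
basis = [psi0]
for _ in range(dim - 1):
    v = H @ basis[-1]
    for b in basis:
        v -= (b @ v) * b
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        break
    basis.append(v / norm)
K = np.array(basis)                              # rows are Krylov vectors

xi = 1.5 * np.abs(np.linalg.eigvalsh(H)).max()   # spectral parameter outside the spectrum
dressed = np.linalg.solve(H - xi * np.eye(dim), psi0)   # (H - xi)^(-1) |psi0>

weights = np.abs(K @ dressed) ** 2
D = weights / weights.sum()                      # normalized distribution over Krylov levels
print(D[:10])
```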
Theory of direct measurement of the quantum pseudo-distribution via its characteristic function
This paper proposes a method to directly measure quantum pseudo-distributions (specifically the Kirkwood-Dirac pseudo-distribution) through their characteristic functions using weak measurements. The approach uses Vandermonde matrices and momentum translations to extract pseudo-distributions in a theory-agnostic way, providing a new experimental framework for probing quantum mechanical properties.
Key Contributions
- Development of characteristic function approach for direct measurement of quantum pseudo-distributions
- Theoretical framework connecting weak measurements to Kirkwood-Dirac pseudo-distributions via Vandermonde matrices
- Method for directly probing canonical commutation relations experimentally
View Full Abstract
We propose a method for directly measuring the quantum mechanical pseudo-distribution of observable properties via its characteristic function. Vandermonde matrices of the eigenvalues play a central role in the theory. This proposal directly finds the pseudo-distribution using weak measurements of the generator of position moments (momentum translations). While the pseudo-distribution can be extracted from the data in a theory-agnostic way, it is shown that under quantum-mechanical formalism, the predicted pseudo-distribution is identified with the Kirkwood-Dirac pseudo-distribution. We discuss the construction of both the joint pseudo-distribution and a conditional pseudo-distribution, which is closely connected to weak-value physics. By permuting position and momentum measurements, we give a prescription to directly probe the canonical commutation relation and verify it for any quantum state. This work establishes the theory of a characteristic function approach to pseudo-distributions, as well as providing a constructive approach to measuring them directly.
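For concreteness, the Kirkwood-Dirac pseudo-distribution that the scheme targets has the standard form $Q(a,b) = \mathrm{Tr}[\Pi_b \Pi_a \rho]$ for the eigenprojectors of two observables. A qubit example (Z and X eigenbases, an arbitrary pure state; not the paper's weak-measurement scheme) shows complex entries, including one with a negative real part, while the marginals remain the ordinary Born probabilities:

```python
import numpy as np

theta, phi = 1.1, 0.6
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
rho = np.outer(psi, psi.conj())
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

# Kirkwood-Dirac pseudo-distribution Q(a, b) = <b|a><a|rho|b> = Tr[P_b P_a rho]
Q = np.array([[(b.conj() @ a) * (a.conj() @ rho @ b) for b in x_basis] for a in z_basis])
print(Q)
print("Z-marginals:", Q.sum(axis=1).real, " X-marginals:", Q.sum(axis=0).real)
print("sums to 1:", np.isclose(Q.sum(), 1.0))
```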
Highly-Indistinguishable Single-Photons at 1550 nm from a Two-photon Resonantly Excited Purcell-enhanced Quantum Dot
This paper demonstrates an advanced quantum dot single-photon source operating at telecom wavelengths (1550 nm) with record-low decay times and high photon indistinguishability. The researchers achieve 90% two-photon interference visibility using cavity enhancement and resonant excitation techniques, which is crucial for quantum communication applications.
Key Contributions
- Record-low biexciton decay time of 67.4 ps in telecom C-band quantum dot source
- Achievement of 90% two-photon interference visibility, reaching the theoretical limit and well above the ~60% expected for standard XX-X cascades
- Demonstration of stimulated two-photon excitation improving photon indistinguishability
View Full Abstract
In this work we present a cavity-enhanced InAs/$\mathrm{In_{0.53}Al_{0.23}Ga_{0.24}As}$ quantum dot (QD) single-photon source in the telecom C-band with a record-low biexciton emitter decay time of 67.4(2) ps under resonant two-photon excitation (TPE). We observe strong multiphoton suppression associated with $g^{(2)}_\mathrm{X}(0) = 0.006(1)$ and $g^{(2)}_\mathrm{XX}(0) = 0.007(1)$ for the exciton (X) and biexciton (XX) emission, respectively. Due to an asymmetric Purcell enhancement of the XX-X cascade, the two-photon interference (TPI) visibility of XX photons under $\pi$-pulse excitation of $V_{\rm{TPI}} = 90(3)\%$ reaches the theoretical limit and clearly exceeds the $\sim60\%$ expected for standard XX-X cascades without photonic engineering. Furthermore, adding a second timed laser pulse coinciding with XX emission energy, we demonstrate stimulated TPE in the telecom C-band. The result is an improved TPI visibility of the X photons of $V_{\rm{TPI}}=0.69(3)$ compared to TPE with $V_{\rm{TPI}}=0.61(4)$, with both being reduced compared to the theoretical values due to present dephasing effects. The advances presented in this work hold important promises for the implementation of advanced schemes of quantum communication using deterministic quantum light sources.
Warm Starts, Cold States: Exploiting Adiabaticity for Variational Ground-States
This paper introduces a hybrid approach that combines Variational Quantum Eigensolver (VQE) with adiabatic principles to find ground states of quantum many-body systems. The method uses a stepwise deformation of the Hamiltonian to gradually transform from an easy-to-solve system to the target system, helping avoid local minima and barren plateaus that plague traditional variational methods.
Key Contributions
- Novel hybrid VQE-adiabatic algorithm that uses stepwise Hamiltonian deformation to avoid optimization pitfalls
- Theoretical proof of lower bound on loss variance showing trainability throughout the deformation process
- Numerical validation including shot noise effects demonstrating consistent convergence to target ground states
View Full Abstract
Reliable preparation of many-body ground states is an essential task in quantum computing, with applications spanning areas from chemistry and materials modeling to quantum optimization and benchmarking. A variety of approaches have been proposed to tackle this problem, including variational methods. However, variational training often struggles to navigate complex energy landscapes, frequently encountering suboptimal local minima or suffering from barren plateaus. In this work, we introduce an iterative strategy for ground-state preparation based on a stepwise (discretized) Hamiltonian deformation. By complementing the Variational Quantum Eigensolver (VQE) with adiabatic principles, we demonstrate that solving a sequence of intermediate problems facilitates tracking the ground-state manifold toward the target system, even as we scale the system size. We provide a rigorous theoretical foundation for this approach, proving a lower bound on the loss variance that suggests trainability throughout the deformation, provided the system remains away from gap closings. Numerical simulations, including the effects of shot noise, confirm that this path-dependent tracking consistently converges to the target ground state.
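A toy version of the stepwise deformation on two qubits (exact statevector energies, a two-parameter hardware-efficient ansatz, a scipy optimizer warm-started at each step); the specific Hamiltonians, ansatz, and step count are illustrative stand-ins for the paper's setting:

```python
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
CX = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)

def RY(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)], [np.sin(t / 2), np.cos(t / 2)]])

def ansatz(theta):
    return CX @ np.kron(RY(theta[0]), RY(theta[1])) @ np.array([1.0, 0, 0, 0])

H_easy   = -(np.kron(X, I2) + np.kron(I2, X))            # ground state |++>, easy to prepare
H_target = -np.kron(Z, Z) - 0.5 * (np.kron(X, I2) + np.kron(I2, X))

def energy(theta, H):
    psi = ansatz(theta)
    return float(psi @ H @ psi)

theta = np.array([np.pi / 2, np.pi / 2])                 # exact optimum for H_easy
for s in np.linspace(0.0, 1.0, 11):                      # stepwise deformation H(s)
    Hs = (1 - s) * H_easy + s * H_target
    theta = minimize(energy, theta, args=(Hs,), method="BFGS").x   # warm start from last step
print("VQE energy:", energy(theta, H_target),
      "exact:", np.linalg.eigvalsh(H_target)[0])
```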
A Nonequilibrium Equation of State for a Turbulent 2D Bose Gas
This paper experimentally studies a turbulent two-dimensional Bose gas that is driven out of equilibrium, developing a nonequilibrium equation of state that describes how energy cascades through the system. The researchers find universal scaling laws that connect the energy flow with the system's properties using the Gross-Pitaevskii model.
Key Contributions
- Development of nonequilibrium equation of state for turbulent 2D Bose gas
- Experimental demonstration of universal scaling laws for energy cascade in quantum turbulence
View Full Abstract
Nonequilibrium equations of state can provide an effective thermodynamic-like description of far-from-equilibrium systems. We experimentally construct such an equation for a direct energy cascade in a turbulent two-dimensional Bose gas. Our homogeneous gas is continuously driven on a large length scale and, with matching dissipation on a small length scale, exhibits a nonthermal but stationary power-law momentum distribution. Our equation of state links the cascade amplitude with the underlying scale-invariant energy flux, and can, for different drive strengths, gas densities, and interaction strengths, be recast into a universal power-law form using scalings consistent with the Gross-Pitaevskii model.
U(1) lattice gauge theory and string roughening on a triangular Rydberg array
This paper demonstrates how Rydberg atom arrays can simulate lattice gauge theories to study string roughening, where quantum fluctuations cause flux strings connecting particles to develop increasing width with distance. The researchers map a triangular Rydberg array to a U(1) gauge theory and observe signatures of string roughening including logarithmic width growth and universal corrections to the confining potential.
Key Contributions
- Mapping triangular Rydberg arrays to (2+1)D U(1) lattice gauge theory with natural plaquette interactions
- Experimental demonstration of string roughening signatures including logarithmic transverse width growth and Lüscher corrections
- Real-time observation of string dynamics including fluctuations and breaking via particle-pair creation
View Full Abstract
Lattice gauge theories (LGTs) describe fundamental interactions in particle physics. A central phenomenon in these theories is confinement, which binds quarks and antiquarks into hadrons through the formation of string-like flux tubes of gauge fields. Simulating confinement dynamics is a challenging task, but recent advances in quantum simulation are enabling the exploration of LGTs in regimes beyond the reach of classical computation. For analog devices, a major difficulty is the realization of strong plaquette interactions, which generate string fluctuations that can drive a roughening transition. Understanding string roughening -- where strong transverse fluctuations lead to an effective restoration of translational symmetry at long distances -- is of central importance in the study of confinement. In this work, we show that string roughening emerges naturally in an analog Rydberg quantum simulator. We first map a triangular Rydberg array onto a (2+1)D U(1) LGT where plaquette terms appear as first-order processes. We study flux strings connecting static charges and demonstrate that, near a deconfined quantum critical point, the string exhibits logarithmic growth of its transverse width as the separation between charges increases, along with the universal Lüscher correction to the confining potential -- both signatures of string roughening. Finally, we investigate the real-time dynamics of an initially rigid string, observing large fluctuations after quenching into the roughening regime, as well as string breaking via particle-pair creation. Our results indicate that rough strings can be realized in experimentally accessible quantum simulators, opening the door to detailed studies of how strong fluctuations influence string-breaking dynamics.
Quantum simulation of the Dicke model in a two-dimensional ion crystal: chaos, quantum thermalization, and revivals
This paper demonstrates quantum simulation of the Dicke model using a 2D crystal of ~100 trapped ions, observing dynamical phase transitions, quantum chaos, and entanglement growth in a many-body system. The experiment shows how ion crystals can serve as controllable platforms for studying non-equilibrium quantum dynamics and information scrambling.
Key Contributions
- First experimental realization of the Dicke model in a large-scale 2D ion crystal with ~100 ions
- Observation of quantum chaos signatures including exponential entanglement growth and erratic phase-space trajectories
- Demonstration of spin-phonon squeezing 2.6 dB below standard quantum limit with vacuum Rabi revivals
- Establishment of ion crystals as scalable analog quantum simulators for many-body dynamics
View Full Abstract
Quantum many-body systems driven far from equilibrium can exhibit chaos, entanglement, and non-classical correlations, yet directly observing these phenomena in large, closed quantum systems remains challenging. Here we realize the Dicke model -- a fundamental description of light-matter interactions -- in a two-dimensional crystal of approximately 100 trapped ions. The ions' internal state is optically coupled to the center-of-mass vibrational mode via an optical spin-dependent force, enabling unitary many-body dynamics beyond the mean-field and few-body limits. In the integrable regime, where the phonons can be adiabatically eliminated, we observe a dynamical phase transition between ferromagnetic and paramagnetic spin phases. In contrast, when the spins and phonons are strongly coupled, we observe clear signatures of non-integrable chaotic dynamics, including erratic phase-space trajectories and the exponential growth of excitations and entanglement quantified by the one-body Rényi entropy. By quenching from an unstable fixed point in the near-integrable regime, quantum noise can generate correlated spin-phonon excitations. Our numerical calculations, in clear agreement with experiment, reveal the generation of two-mode spin-phonon squeezing, 2.6 dB below the standard quantum limit (4.6 dB relative to the initial thermal state), followed by generalized vacuum Rabi collapses and revivals. Our results establish large ion crystals as scalable analog quantum simulators of non-equilibrium light-matter dynamics and provide a controlled platform for experimental studies of information scrambling and entanglement in closed many-body systems.
Private and interpretable clinical prediction with quantum-inspired tensor train models
This paper addresses privacy vulnerabilities in clinical machine learning models by proposing a quantum-inspired defense mechanism that uses tensor train decomposition to obfuscate model parameters while maintaining predictive accuracy and interpretability.
Key Contributions
- Empirical demonstration of privacy vulnerabilities in clinical ML models including logistic regression and neural networks
- Introduction of quantum-inspired tensor train models as a defense mechanism that preserves accuracy while obfuscating parameters
- Enhancement of model interpretability through efficient computation of marginal and conditional distributions
View Full Abstract
Machine learning in clinical settings must balance predictive accuracy, interpretability, and privacy. Models such as logistic regression (LR) offer transparency, while neural networks (NNs) provide greater predictive power; yet both remain vulnerable to privacy attacks. We empirically assess these risks by designing attacks that identify which public datasets were used to train a model under varying levels of adversarial access, applying them to LORIS, a publicly available LR model for immunotherapy response prediction, as well as to additional shallow NN models trained for the same task. Our results show that both models leak significant training-set information, with LRs proving particularly vulnerable in white-box scenarios. Moreover, we observe that common practices such as cross-validation in LRs exacerbate these risks. To mitigate these vulnerabilities, we propose a quantum-inspired defense based on tensorizing discretized models into tensor trains (TTs), which fully obfuscates parameters while preserving accuracy, reducing white-box attacks to random guessing and degrading black-box attacks comparably to Differential Privacy. TT models retain LR interpretability and extend it through efficient computation of marginal and conditional distributions, while also enabling this higher level of interpretability for NNs. Our results demonstrate that tensorization is widely applicable and establishes a practical foundation for private, interpretable, and effective clinical prediction.
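
For readers unfamiliar with the tensor-train (TT) format the defense builds on, the sketch below is a standard TT-SVD-style factorization of a dense tensor via sequential truncated SVDs. The tensor shape, rank cap, and tolerance are illustrative, and this is a generic construction rather than the paper's clinical pipeline.

```python
# Generic TT-SVD: factor a dense tensor into a train of 3-way cores (illustrative).
import numpy as np

def tt_svd(tensor, max_rank=8, tol=1e-10):
    """Return TT cores G_k of shape (r_{k-1}, n_k, r_k), with boundary ranks equal to 1."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(1, -1)
    for n in dims[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, int(np.sum(s > tol)))          # truncated TT rank at this cut
        cores.append(U[:, :r].reshape(r_prev, n, r))
        mat = s[:r, None] * Vt[:r]                       # carry the remainder to the next core
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
T = rng.normal(size=(2, 3, 4, 5))                        # stand-in for a discretized model's weight tensor
cores = tt_svd(T, max_rank=20)
print("core shapes:", [G.shape for G in cores])
print("max reconstruction error:", np.abs(tt_reconstruct(cores) - T).max())
```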
Deforming the Double-Scaled SYK & Reaching the Stretched Horizon From Finite Cutoff Holography
This paper studies deformations of the double-scaled SYK model using finite cutoff holography, exploring how these deformations affect thermodynamic properties, correlation functions, and entanglement entropy. The work connects these theoretical constructs to holographic descriptions of gravity and realizes Susskind's stretched horizon proposal in de Sitter space.
Key Contributions
- Development of chord Hamiltonian deformations for the double-scaled SYK model based on finite cutoff holography
- Concrete realization of Susskind's cosmological stretched horizon proposal in de Sitter holography through sequential deformations
View Full Abstract
We study the properties of the double-scaled SYK (DSSYK) model under chord Hamiltonian deformations based on finite cutoff holography for general dilaton gravity theories with Dirichlet boundaries. The formalism immediately incorporates a lower-dimensional analog of $\text{T}\bar{\text{T}}(+Λ_2)$ deformations, denoted $T^2(+Λ_1)$, as special cases. In general, the deformation mixes the chord basis of the Hilbert space in the seed theory, which we order through a modification of the Lanczos algorithm. The resulting chord number in the ordered basis represents a wormhole length at a finite cutoff in the bulk. We study the thermodynamic properties of the deformed theory; the evolution of $n$-point correlation functions with matter chords; the growth of complexity of the Hartle-Hawking state; and the entanglement entropy between the double-scaled algebras for a given chord state. The latter, in the triple-scaling limit, manifests as the minimal codimension-two area in the bulk following the Ryu-Takayanagi formula. By performing a sequence of $T^2$ and $T^2+Λ_1$ deformations in the upper tail of the energy spectrum in the deformed DSSYK, we concretely realize the cosmological stretched horizon proposal in de Sitter holography by Susskind. We discuss other extensions with sine dilaton gravity, end-of-the-world branes, and the Almheiri-Goel-Hu model.
Quantum-controlled synthetic materials
This paper demonstrates a hybrid quantum platform that combines analog quantum simulators with digital quantum control by entangling a synthetic quantum material (Bose-Hubbard circuit) with an ancilla qubit. The researchers show they can create novel quantum states where different phases of matter coexist and use this control to enhance coherence for potential sensing applications.
Key Contributions
- Development of hybrid analog-digital quantum control platform combining quantum simulators with digital quantum computers
- Demonstration of Hamiltonian-level control creating superposition states of different quantum phases of matter
- Implementation of many-body echo technique to enhance coherence of entangled cat states for sensing applications
View Full Abstract
Analog quantum simulators and digital quantum computers are two distinct paradigms driving near-term applications in modern quantum science, from probing many-body phenomena to identifying computational advantage over classical systems. A transformative opportunity on the horizon is merging the high-fidelity many-body evolution in analog simulators with the robust control and measurement of digital machines. Such a hybrid platform would unlock new capabilities in state preparation, characterization and dynamical control. Here, we embed digital quantum control in the analog evolution of a synthetic quantum material by entangling the lattice potential landscape of a Bose-Hubbard circuit with an ancilla qubit. This Hamiltonian-level control induces dynamics under a superposition of different lattice configurations and guides the many-body system to novel strongly-correlated states where different phases of matter coexist -- ordering photons into superpositions of solid and fluid eigenstates. Leveraging hybrid control modalities, we adiabatically introduce disorder to localize the photons into an entangled cat state and enhance its coherence using a many-body echo technique. This work illustrates the potential for entangling quantum computers with quantum matter -- synthetic and solid-state -- for advantage in sensing and materials characterization.
Dissipative Dicke Time Quasicrystals
This paper studies time quasicrystals that emerge in quantum systems with dissipation when driven by quasi-periodic forces, showing these exotic time-ordered phases can exist even in small quantum systems with just two qubits. The researchers demonstrate that these temporal patterns persist longer in larger quantum systems and represent a new type of non-equilibrium quantum matter.
Key Contributions
- Demonstration of time quasicrystal formation in open quantum systems under quasi-periodic driving
- Proof that time quasicrystal behavior persists in the deep quantum regime with only two qubits
- Systematic study showing time quasicrystal lifetime increases with system size
View Full Abstract
We investigate the emergence of time quasicrystals (TQCs) in the open Dicke model, subjected to a quasi-periodic Fibonacci drive. TQCs are characterized by a robust sub-harmonic quasi-periodic response that is qualitatively distinct from the external drive. By directly analyzing the dynamics of the system in the thermodynamic limit, we establish the existence of TQC order in this system for a wide parameter regime. Remarkably, we demonstrate that this behavior persists even in the deep quantum regime with only two qubits. We systematically study the dependence of the TQC lifetime, $τ^{\ast}$, on the number of qubits and demonstrate that $τ^{\ast}$ increases monotonically with the system size. Our work demonstrates that quasi-periodically driven dissipative quantum systems can serve as a powerful platform for realizing novel non-equilibrium phases of matter.
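
The quasi-periodic Fibonacci drive can be generated from the standard substitution rule A → AB, B → A; a minimal generator is sketched below. The two drive amplitudes and their mapping onto the Dicke-model parameters are illustrative assumptions, not values from the paper.

```python
# Quasi-periodic Fibonacci pulse sequence via the substitution A -> AB, B -> A (illustrative).
def fibonacci_word(generations):
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if ch == "A" else "A" for ch in word)
    return word

# Map the two letters to two illustrative drive values (e.g. two coupling strengths).
drive_values = {"A": 1.0, "B": 0.6}
word = fibonacci_word(8)
pulse_sequence = [drive_values[ch] for ch in word]

print("sequence length:", len(word))          # lengths follow the Fibonacci numbers
print("first 20 letters:", word[:20])
```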
Quantum noise scaling in continuously operating multiparameter sensors
This paper experimentally studies the fundamental quantum noise limits in multiparameter quantum magnetometers that operate continuously. The researchers measured how different types of quantum noise scale with laser power and identified optimal operating conditions for these precision magnetic field sensors.
Key Contributions
- Experimental mapping of quantum noise scaling laws in continuously operating multiparameter quantum sensors
- Quantitative validation of stochastic Bloch-equation model predictions for noise mechanisms
- Identification of fundamental resource-dependent trade-offs for optimal multiparameter sensor operation
View Full Abstract
We experimentally investigate the quantum noise mechanisms that limit continuously operating multiparameter quantum sensors. Using a hybrid rf-dc optically pumped magnetometer, we map the photon shot noise, spin projection noise, and measurement back-action noise over an order of magnitude in probe power and a factor of three in pump power while remaining quantum-noise-limited. We observe linear, quadratic, and cubic scaling of the respective total noise powers with probe photon flux, together with a quadratic dependence of back-action on pump photon flux, in quantitative agreement with a stochastic Bloch-equation model. At higher probe powers, additional probe-induced relaxation modifies the spin-noise spectrum while preserving the integrated noise scaling. Our results reveal fundamental, resource-dependent trade-offs unique to continuously monitored multiparameter sensors and establish experimentally the quantum limits governing their optimal operation.
Efficient net-gain integrated optical parametric amplifier in the quantum regime
This paper demonstrates a highly efficient integrated optical parametric amplifier that achieves significant signal amplification with quantum-limited noise performance using thin-film lithium niobate waveguides. The device shows over 10x improvement in pump efficiency compared to previous systems and provides net gain up to 10 dB with broad bandwidth covering telecommunications bands.
Key Contributions
- Demonstrated integrated optical parametric amplifier with 23.5 dB phase-sensitive gain using only 110 mW pump power
- Achieved quantum-limited noise performance with output field fluctuations below the classical limit
- Showed net gain up to 10 dB with 120 nm bandwidth covering S-, C-, and L-bands for telecommunications
- Improved pump efficiency by over one order of magnitude compared to previous integrated OPAs
View Full Abstract
Optical parametric amplifiers (OPAs) are promising to overcome the wavelength coverage and noise limitations in conventional optical amplifiers based on rare-earth doping and semiconductor gain. However, the high power requirement remains a major obstacle to the widespread use of OPAs. Integrated OPAs can in principle improve the pump efficiency with tight mode confinement; however, challenges associated with propagation loss, limited nonlinearity, and susceptibility to nanoscale fabrication imperfections prevent them from competing with conventional bulk and fiber-based OPAs. Here, we demonstrate a highly efficient integrated OPA with continuous-wave net gain. The pump efficiency is improved by over one order of magnitude. Phase-sensitive gain of 23.5 dB is demonstrated, significantly exceeding previous integrated OPAs, using only 110 mW pump power and no cavity enhancement. This is achieved with parametric down-conversion in thin-film lithium niobate waveguides using the adapted poling technique to maintain the coherence of nonlinear interactions. Moreover, the high parametric gain exceeds fibre-chip-fibre losses, leading to appreciable net gain up to 10 dB. The 3 dB bandwidth is approximately 120 nm, covering the telecommunication S-, C-, and L-bands. Quantum-limited noise performance is confirmed through the measurement of output field fluctuations below the classical limit. We further demonstrate that the signal-to-noise ratio in noisy optical communications can be increased by leveraging this efficient integrated OPA. Our work marks a significant step towards ideal optical amplifiers with strong amplification, high efficiency, quantum-limited noise, large bandwidth, and continuous-wave operation, unlocking new possibilities for next-generation photonic information processing systems.
Improving Ground State Accuracy of Variational Quantum Eigensolvers with Soft-coded Orthogonal Subspace Representations
This paper proposes a new approach for Variational Quantum Eigensolver (VQE) algorithms that uses 'soft-coded' orthogonality constraints in the cost function rather than hard-coded constraints at the circuit level. This method allows for shallower quantum circuits while maintaining high accuracy in finding ground state energies of quantum systems.
Key Contributions
- Introduction of soft-coded orthogonality constraints via penalty terms in VQE cost functions
- Demonstration that this approach enables shallower quantum circuits while maintaining high fidelity compared to existing multi-state VQE methods
View Full Abstract
We propose a new approach to improve the accuracy of ground state estimates in Variational Quantum Eigensolver (VQE) algorithms by employing subspace representations with soft-coded orthogonality constraints. As in other subspace-based VQE methods, such as the Subspace-Search VQE (SSVQE) and Multistate Contracted VQE (MCVQE), once the parameters are optimized to maximize the subspace overlap with the low-energy sector of the Hamiltonian, one diagonalizes the Hamiltonian restricted to the subspace. Unlike these methods, where \emph{hard-coded} orthogonality constraints are enforced at the circuit level among the states spanning the subspace, we consider a subspace representation where orthogonality is \emph{soft-coded} via penalty terms in the cost function. We show that this representation allows for shallower quantum circuits while maintaining high fidelity when compared to single-state (standard VQE) and multi-state (SSVQE or MCVQE) representations, on two benchmark cases: a $3\times 3$ transverse-field Ising model and random realizations of the Edwards--Anderson spin-glass model on a $4\times 4$ lattice.
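
Schematically, the soft-coded constraint adds overlap penalties to the subspace cost, e.g. $C = \sum_i \langle\psi_i|H|\psi_i\rangle + \beta \sum_{i<j} |\langle\psi_i|\psi_j\rangle|^2$. A minimal statevector-level sketch of such a cost is below; the penalty weight $\beta$, the toy Hamiltonian, and the absence of per-state weights are assumptions rather than the paper's exact cost function.

```python
# Subspace cost with soft-coded orthogonality penalties (illustrative, statevector level).
import numpy as np

def soft_orthogonality_cost(states, H, beta=10.0):
    """states: list of normalized state vectors spanning the trial subspace."""
    energy = sum(np.real(np.vdot(psi, H @ psi)) for psi in states)
    penalty = sum(abs(np.vdot(states[i], states[j])) ** 2
                  for i in range(len(states)) for j in range(i + 1, len(states)))
    return energy + beta * penalty

# Toy 2-qubit Hamiltonian and two (non-orthogonal) trial states.
X, Z, I2 = np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.]), np.eye(2)
H = -np.kron(Z, Z) - 0.5 * (np.kron(X, I2) + np.kron(I2, X))
psi0 = np.array([1., 0., 0., 0.])
psi1 = np.array([1., 1., 0., 0.]) / np.sqrt(2)      # overlaps with psi0, so it gets penalized
print("cost:", soft_orthogonality_cost([psi0, psi1], H))
```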
Improved Rodeo Algorithm Performance for Spectral Functions and State Preparation
This paper improves the Rodeo Algorithm, a quantum computing method for finding energy levels and preparing quantum states, by using a geometric series of time samples that provides near-optimal performance across different quantum systems without requiring system-specific adjustments.
Key Contributions
- Demonstrates that geometric series time sampling provides near-optimal performance for the Rodeo Algorithm
- Shows the sampling strategy works robustly across various physical Hamiltonians without model-specific fine-tuning
View Full Abstract
The Rodeo Algorithm is a quantum computing method for computing the energy spectrum of a Hamiltonian and preparing its energy eigenstates. We discuss how to improve the performance of the rodeo algorithm for each of these two applications. In particular, we demonstrate that using a geometric series of time samples offers a near-optimal optimization space for a given total runtime by studying the Rodeo Algorithm performance on a model Hamiltonian representative of gapped many-body quantum systems. Analytics explain the performance of this time sampling and the conditions for it to maintain the established exponential performance of the Rodeo Algorithm. We finally demonstrate this sampling protocol on various physical Hamiltonians, showing its practical applicability. Our results suggest that geometric series of times provide a practical, near-optimal, and robust time-sampling strategy for quantum state preparation with the Rodeo Algorithm across varied Hamiltonians without requiring model-specific fine-tuning.
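
The Rodeo Algorithm acts as a spectral filter: for an input eigenstate of energy $E$ and target energy $E_{\rm obj}$, the probability that every ancilla measurement succeeds is $\prod_n \cos^2\big((E - E_{\rm obj})\,t_n/2\big)$. The sketch below compares this filter for a geometric series of times against uniform random times; the ratio, cycle count, and time scale are illustrative choices, not the paper's settings.

```python
# Rodeo-algorithm spectral filter for two time-sampling strategies (illustrative).
import numpy as np

def rodeo_filter(delta_E, times):
    """Probability that all ancilla measurements succeed, versus energy offset delta_E."""
    return np.prod(np.cos(0.5 * np.outer(delta_E, times)) ** 2, axis=1)

n_cycles, t0, ratio = 8, 0.1, 2.0
geometric_times = t0 * ratio ** np.arange(n_cycles)               # t_n = t0 * r^n
rng = np.random.default_rng(1)
random_times = rng.uniform(0.0, geometric_times.max(), n_cycles)  # uniform random times, for comparison

delta_E = np.linspace(-5, 5, 1001)                                # energy offset from the target
for label, times in [("geometric", geometric_times), ("uniform random", random_times)]:
    p = rodeo_filter(delta_E, times)
    spread = np.ptp(delta_E[p > 0.5])                             # spread of offsets the filter still passes
    print(f"{label:14s}: offsets with P > 0.5 span {spread:.3f}, total evolution time {times.sum():.2f}")
```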
Does Cosmology require Hermiticity in Quantum Mechanics?
This paper investigates whether quantum mechanics in cosmology requires Hermitian operators by extending the Wheeler-DeWitt equation to include non-Hermitian terms. The authors find that observational constraints from the early universe and structure formation severely limit non-Hermitian contributions, suggesting that Hermiticity emerges naturally in cosmological quantum mechanics.
Key Contributions
- Extension of Wheeler-DeWitt framework to non-Hermitian quantum mechanics
- Derivation of observational constraints on non-Hermiticity from cosmological data
- Demonstration that Hermiticity may emerge dynamically in semiclassical cosmology
View Full Abstract
We explore the consequences of allowing non-Hermitian structures in quantum cosmology by extending the Wheeler-DeWitt framework beyond strictly Hermitian dynamics. Using a controlled semiclassical reduction, we show how anti-Hermitian contributions propagate into both early-universe primordial fluctuations and late-time structure growth as effective damping or gain terms. Confronting this framework with inflationary observables, growth of structure, and the observed near-flatness of the universe, we derive strong infrared constraints that suppress non-Hermiticity across cosmic history. We demonstrate that these bounds are mutually consistent between early and late-time probes and can be partially relaxed in theories beyond General Relativity. Our results establish cosmology as a novel arena for testing foundational aspects of quantum mechanics and suggest that Hermiticity may emerge dynamically along the semiclassical branch describing our universe.
One-Way Quantum Secure Direct Communication with Choice of Measurement Basis as the Secret
This paper proposes a new quantum secure direct communication protocol where secret bits are encoded by choosing between two different measurement bases (computational or Hadamard) rather than by applying quantum operations. The authors analyze the security and bit rates of this measurement-based approach against eavesdropping attacks.
Key Contributions
- Novel quantum secure direct communication protocol based on measurement basis selection rather than unitary operations
- Security analysis using quantum wiretap channel theory against BB84-symmetric attacks
- Demonstration of suitability for star network configurations without requiring local unitary operations at receiver
View Full Abstract
Motivated by the question of the distinguishability of ensembles described by the same compressed density operator, we propose a model for one-way quantum secure direct communication using finite ensembles of shared EPR pairs per bit and a public authenticated classical channel, where the local choice of one of two mutually-unbiased measurement bases is the secret bit. In this model, both the encoding and decoding of classical information in quantum systems are implemented by measurements in either the computational or the Hadamard basis. Using the quantum wiretap channel theory, we study the secure net bit rates and certify information-theoretic security of different implementations of our model when the quantum channel is subjected to BB84-symmetric attacks. Since no local unitary operations need to be performed by the receiver, the proposed model is suitable for real-life implementation of secure direct communication in star network configurations.
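
The physical ingredient behind encoding the secret in the basis choice is that measurements of an EPR pair in the same mutually unbiased basis are perfectly correlated, while mismatched bases give uncorrelated outcomes. A minimal check for $|\Phi^+\rangle$ is sketched below; the protocol bookkeeping, authentication, and wiretap-channel analysis from the paper are omitted.

```python
# Joint measurement statistics of an EPR pair for matching vs. mismatched bases (illustrative).
import numpy as np

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
I2 = np.eye(2)
phi_plus = np.array([1., 0., 0., 1.]) / np.sqrt(2)           # (|00> + |11>)/sqrt(2)

def joint_probs(basis_a, basis_b):
    """Outcome probabilities p(a, b) when the two halves are measured in Z or X."""
    Ua = H if basis_a == "X" else I2                         # measuring X = Hadamard, then Z
    Ub = H if basis_b == "X" else I2
    amps = np.kron(Ua, Ub) @ phi_plus
    return (np.abs(amps) ** 2).reshape(2, 2)

for ba, bb in [("Z", "Z"), ("X", "X"), ("Z", "X")]:
    p = joint_probs(ba, bb)
    agree = p[0, 0] + p[1, 1]
    print(f"Alice {ba} / Bob {bb}: P(outcomes agree) = {agree:.2f}")
```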
Quantum Approximate Optimization of Integer Graph Problems and Surpassing Semidefinite Programming for Max-k-Cut
This paper applies the Quantum Approximate Optimization Algorithm (QAOA) to integer optimization problems on graphs, moving beyond traditional binary problems by encoding variables in qudits. The researchers develop theoretical formulas for QAOA performance and demonstrate that for certain Max-k-Cut problems, QAOA can outperform classical semidefinite programming algorithms.
Key Contributions
- Derived general iterative formula for depth-p QAOA expectation on high-girth d-regular graphs with cost exponential in depth but independent of graph size
- Identified parameter regimes where QAOA outperforms classical Frieze-Jerrum semidefinite programming algorithm for Max-k-Cut problems
- Extended QAOA from binary to integer optimization problems using qudit encoding
- Introduced new degree-of-saturation heuristic algorithm as improved classical baseline
View Full Abstract
Quantum algorithms for binary optimization problems have been the subject of extensive study. However, the application of quantum algorithms to integer optimization problems remains comparatively unexplored. In this paper, we study the Quantum Approximate Optimization Algorithm (QAOA) applied to integer problems on graphs, with each integer variable encoded in a qudit. We derive a general iterative formula for the depth-$p$ QAOA expectation on high-girth $d$-regular graphs of arbitrary size. The cost of evaluating the formula is exponential in the QAOA depth $p$ but does not depend on the graph size. Evaluating this formula for the Max-$k$-Cut problem for $p\leq 4$, we identify parameter regimes ($k=3$ with degree $d \leq 10$ and $k=4$ with $d \leq 40$) in which QAOA outperforms the Frieze-Jerrum semi-definite programming (SDP) algorithm, which provides the best worst-case guarantee on the approximation ratio. To strengthen the classical baseline, we introduce a new heuristic algorithm, based on the degree-of-saturation, that empirically outperforms both the Frieze-Jerrum algorithm and shallow-depth QAOA. Nevertheless, we provide numerical evidence that QAOA may overtake this heuristic at depth $p\leq 20$. Our results show that moving beyond binary to integer optimization problems can open up new avenues for quantum advantage.
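
For reference, Max-$k$-Cut asks for an assignment of one of $k$ colors to each vertex that maximizes the number of edges whose endpoints receive different colors; this is the objective the qudit-encoded QAOA targets. The brute-force and random-assignment baseline below is purely illustrative (a made-up 4-vertex graph), not the paper's QAOA, the Frieze-Jerrum SDP, or the saturation heuristic.

```python
# Max-k-Cut objective with brute force and a random-assignment baseline on a tiny graph (illustrative).
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]    # small 4-vertex example graph
n, k = 4, 3

def cut_value(coloring):
    """Number of edges whose endpoints receive different colors."""
    return sum(coloring[u] != coloring[v] for u, v in edges)

best = max(itertools.product(range(k), repeat=n), key=cut_value)
rng = np.random.default_rng(0)
random_avg = np.mean([cut_value(rng.integers(k, size=n)) for _ in range(2000)])

print("optimal Max-k-Cut value:", cut_value(best), "with coloring", best)
print("random-assignment average:", round(random_avg, 2),
      "(expected (1 - 1/k) * |E| =", round((1 - 1 / k) * len(edges), 2), ")")
```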
On the Efimov Effect for Four Particles in Dimension Two
This paper proves that four particles in two dimensions can form infinitely many bound states when interacting through short-range three-body forces, extending the famous Efimov effect to four-particle systems in 2D. The result requires each three-particle subsystem to have a virtual energy level at zero.
Key Contributions
- Proves existence of infinitely many bound states for four-particle systems in 2D with three-body interactions
- Establishes analog of Efimov effect for four particles in two dimensions
View Full Abstract
We prove that the Schrödinger operator describing four particles in two dimensions, interacting solely through short-range three-body forces, can possess infinitely many bound states. This holds under the assumption that each three-body subsystem has a virtual level at zero energy. Our result establishes an analog of the Efimov effect for such four-particle systems in two dimensions.
Quantum Simulation of Bound and Resonant Doubly-Bottom Tetraquark
This paper uses quantum simulation with a 16-qubit register and variational quantum eigensolver to study exotic four-quark particles called doubly-bottom tetraquarks. The researchers map a QCD-inspired model onto quantum hardware to identify bound states and resonances that are difficult to study with classical computational methods.
Key Contributions
- First quantum simulation study of doubly-bottom tetraquark states using variational quantum eigensolver
- Demonstration of 16-qubit quantum register encoding for four-quark systems with color, spin, and spatial degrees of freedom
- Validation that quantum simulation can study exotic multiquark states beyond conventional computational methods
View Full Abstract
We present the first quantum-simulation study of bound and resonant doubly-bottom tetraquark states within a QCD-inspired chiral quark model. An effective four-quark Hamiltonian is mapped onto a 16-qubit register, encoding color, spin, and spatial degrees of freedom, and incorporating both meson-meson and diquark-antidiquark configurations with complete color bases. Using a variational quantum eigensolver, we identify bound and resonance states in the low-lying $S$-wave sector. Deeply bound states are found exclusively in the isoscalar $I(J^{P})=0(1^{+})$ channel, dominated by color-singlet meson-meson components with non-negligible hidden-color contributions. The resulting masses and binding energies are consistent with classical chiral quark model predictions, establishing quantum simulation as a viable framework for studying exotic multiquark states beyond the reach of conventional methods.
"It from Bit": The Hartle-Hawking state and quantum mechanics for de Sitter observers
This paper examines the quantum mechanics experienced by observers inside de Sitter space and resolves an apparent paradox about how single-state closed universes can be compatible with quantum mechanics. The authors distinguish between two different mathematical spaces and show that quantum mechanics emerges from classical probability theory in this cosmological context.
Key Contributions
- Resolves tension between one-state property of closed universes and finite-dimensional quantum mechanics for de Sitter observers
- Demonstrates that baby-universe Hilbert space encodes classical probability rather than quantum mechanics
- Provides concrete realization of Wheeler's 'It from Bit' concept through a solved topological toy model
View Full Abstract
The one-state statement for closed universes has sparked considerable discussion. In this paper we examine its physical meaning in the context of the Hartle-Hawking state and de Sitter space. We argue that the one-state property of closed universes is fully compatible with the finite-dimensional quantum mechanics experienced by observers inside de Sitter space, and that this compatibility requires neither mixing of alpha sectors nor any modification of the rules of the gravitational path integral. The apparent tension is resolved by sharply distinguishing the baby-universe Hilbert space, namely the space of closed universes viewed from the outside, from the bulk Hilbert space that governs quantum mechanics for an observer inside a single de Sitter universe. The baby-universe Hilbert space, together with its commutative operator algebra, is not a quantum-mechanical Hilbert space: it is merely a mathematical repackaging of classical probability theory and does not carry any quantum-mechanical structure at all, a direct consequence of the one-state property of closed universes. Accordingly, attempting to formulate quantum mechanics directly on the baby-universe Hilbert space conflates classical ensemble data with the quantum mechanics experienced by bulk observers and leads to physically incorrect conclusions. By contrast, the quantum mechanics experienced by an observer inside de Sitter space emerges from the classical statistics encoded in the baby-universe Hilbert space, providing a concrete realization of Wheeler's idea of "It from Bit". We demonstrate these features by completely solving a topological toy model of one-dimensional de Sitter spacetime. Along the way we clarify the physical meaning of de Sitter entropy, showing that it corresponds to the coarse-grained entropy of the underlying state.
Measurement-Induced Dynamics of Particles and Quasiparticles in a Bose-Einstein-condensate array
This paper studies how measurement processes like phase contrast imaging affect Bose-Einstein condensate arrays, showing that measurement parameters can control what is observed and how the measurement itself creates and influences quasiparticles in the quantum system. The work reveals how to selectively measure different types of particle dynamics and control measurement-induced effects.
Key Contributions
- Demonstrates how measurement parameters in phase contrast imaging can selectively probe either bare particle or quasiparticle dynamics in BEC arrays
- Shows how to control measurement-induced creation and diffusion of quasiparticles into different momentum states
View Full Abstract
Measurement plays a crucial role in a quantum system beyond just learning about the system state: it changes the post-measurement state and hence influences the subsequent time evolution; further, measurement can even create entanglement in the post-measurement conditional state. In this work, we study how a careful choice of parameters for a typical measurement process on cold-atom systems -- phase contrast imaging -- has a strong impact both on what the experimentalist observes and on the backaction the measurement has on the system, including the creation and diffusion of quasiparticles emerging from the quantum many-body dynamics. We focus on the case of a Bose-Einstein-condensate array, in the low-temperature and low-momentum limit. Our theoretical investigation reveals regimes where the imaging light probes either the bare particle or quasiparticle dynamics. Moreover, we find a path to selectively measuring quasiparticle modes directly, as well as to controlling the measurement-induced creation and diffusion of quasiparticles into different momentum states. This lays a foundation for understanding both the effects of experimental approaches for probing many-body systems and more speculative directions, such as observable consequences of `spontaneous collapse' predictions from novel models of quantum gravity on aspects of the Standard Model.
Local measurements and the entanglement transition in quantum spin chains
This paper studies how local measurements on quantum spin chains can induce a transition from short-range entangled states to long-range entangled states. The authors show that measuring local charges on increasingly long intervals transforms initially short-range entangled states into states with long-range correlations that cannot be uniformly short-range entangled.
Key Contributions
- Demonstrates that local measurements can induce entanglement transitions from short-range to long-range entangled states in quantum spin chains
- Constructs infinite-volume post-measurement states for systems derived from quantum cellular automata and identifies maximally correlated almost local observables
View Full Abstract
We consider the transition between short-range entangled (SRE) and long-range ordered (and therefore long-range entangled) states of infinite quantum spin chains, which is induced by local measurements. Specifically, we assume that the initial state is in a non-trivial symmetry-protected topological phase with local symmetry group $\mathcal{G} = G \times H$, where $G$ is an Abelian subgroup. We show that the on-site measurements of the local $G$-charge on intervals of increasing lengths transform the initial SRE state into a family of states with increasingly long-range correlations. In particular, the post-measurement states cannot be uniformly short-range entangled. In the case where the initial state is obtained from a product state using a quantum cellular automaton, we construct the infinite-volume post-measurement state and exhibit almost local observables that are maximally correlated.
Thermal-Drift Sampling: Generating Random Thermal Ensembles for Quantum Chaos Diagnostics
This paper introduces a new quantum algorithm called thermal-drift sampling that efficiently generates random thermal states of many-body quantum systems along with their corresponding Hamiltonian parameters. The method uses measurement-based operations to sample thermal ensembles and is demonstrated to scale favorably for studying quantum chaos and thermalization on near-term quantum hardware.
Key Contributions
- Introduction of the thermal-drift channel for measurement-based thermal state preparation
- Proof of favorable scaling (cubic in system size, quadratic in inverse temperature) for the sampling algorithm
- Demonstration of quantum chaos diagnostics using level-spacing statistics from thermal states
View Full Abstract
Random thermal states of many-body Hamiltonians underpin studies of thermalization, chaos, and quantum phase transitions, yet their generation remains costly when each Hamiltonian must be prepared individually. We introduce the thermal-drift channel, a measurement-based operation that implements a tunable nonunitary drift along a chosen Pauli term. Based on this channel, we present a measurement-controlled sampling algorithm that generates thermal states together with their Hamiltonian "labels" for general physical models. We prove that the total gate count of our algorithm scales cubically with system size, quadratically with inverse temperature, and as the inverse error tolerance to the two-thirds power, with logarithmic dependence on the allowed failure probability. We also show that the induced label distribution approaches a normal distribution reweighted by the thermal partition function, which makes an explicit trade-off between accuracy and effective range. Numerical simulations for a 2D Heisenberg model validate the predicted scaling and distribution. As an application, we compute unfolding-free level-spacing ratio statistics from sampled thermal states of a 2D transverse-field Ising model and observe a crossover toward the Wigner--Dyson prediction, demonstrating a practical and scalable route to chaos diagnostics and random matrix universality studies on near-term quantum hardware.
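
The unfolding-free diagnostic mentioned at the end is the level-spacing ratio $r_n = \min(s_n, s_{n+1})/\max(s_n, s_{n+1})$ with $s_n = E_{n+1} - E_n$, whose mean is about 0.39 for Poisson (integrable-like) spectra and about 0.53 for the GOE Wigner-Dyson ensemble. The sketch below evaluates it on synthetic spectra rather than on thermal-drift samples.

```python
# Unfolding-free level-spacing ratio statistics on synthetic spectra (illustrative).
import numpy as np

def mean_r(energies):
    s = np.diff(np.sort(energies))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(0)
poisson_levels = np.cumsum(rng.exponential(size=20000))    # uncorrelated (integrable-like) spectrum
A = rng.normal(size=(1000, 1000))
goe_levels = np.linalg.eigvalsh((A + A.T) / np.sqrt(2))    # GOE random-matrix spectrum

print("Poisson <r> ~", round(mean_r(poisson_levels), 3), "(expected ~ 0.386)")
print("GOE     <r> ~", round(mean_r(goe_levels), 3), "(expected ~ 0.53)")
```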
Spontaneous Parity Breaking in Quantum Antiferromagnets on the Triangular Lattice
This paper investigates how parity symmetry breaking affects quantum magnetic phases in frustrated triangular lattice systems. The researchers use advanced tensor network calculations to show that parity breaking systematically determines when exotic phases like supersolids emerge in these quantum many-body systems.
Key Contributions
- Identification of spontaneous parity breaking as a systematic organizing principle for frustrated quantum magnetic phases
- Development of improved tensor network contraction techniques for large-scale quantum many-body calculations
- Theoretical framework connecting spin, symmetry, and frustration effects in triangular lattice systems
View Full Abstract
Frustration on the triangular lattice has long been a source of intriguing and often debated phases in many-body systems. Although symmetry analysis has been employed, the role of the seemingly trivial parity symmetry has received little attention. In this work, we show that phases induced by frustration are systematically shaped by an implicit rule of thumb associated with spontaneous parity breaking. This principle enables us to anticipate and rationalize the regimes and conditions under which nontrivial phases emerge. For the spin-$S$ antiferromagnetic XXZ model, we demonstrate that a controversial parity-broken phase appears only at intermediate values of $S$. In bilayer systems, enhanced frustration leads to additional phases, such as supersolids, whose properties can be classified by their characteristic parity features. Benefiting from our improved tensor network contraction techniques, we confirm these results through large-scale tensor-network calculations. This study offers an alternative viewpoint and a systematic approach for examining the interplay between spin, symmetry, and frustration in many-body systems.
Reducing the Computational Cost Scaling of Tensor Network Algorithms via Field-Programmable Gate Array Parallelism
This paper proposes using field-programmable gate arrays (FPGAs) to dramatically improve the computational efficiency of tensor network algorithms used in quantum many-body calculations. The approach reduces computational scaling from cubic to linear for iTEBD and from sixth-power to quadratic for HOTRG algorithms through parallelized hardware implementation.
Key Contributions
- Development of quad-tile partitioning strategy for mapping tensor operations onto FPGA hardware
- Significant reduction in computational scaling complexity for iTEBD and HOTRG tensor network algorithms
View Full Abstract
Improving the computational efficiency of quantum many-body calculations from a hardware perspective remains a critical challenge. Although field-programmable gate arrays (FPGAs) have recently been exploited to improve the computational scaling of algorithms such as Monte Carlo methods, their application to tensor network algorithms is still at an early stage. In this work, we propose a fine-grained parallel tensor network design based on FPGAs to substantially enhance the computational efficiency of two representative tensor network algorithms: the infinite time-evolving block decimation (iTEBD) and the higher-order tensor renormalization group (HOTRG). By employing a quad-tile partitioning strategy to decompose tensor elements and map them onto hardware circuits, our approach effectively translates algorithmic computational complexity into scalable hardware resource utilization, enabling an extremely high degree of parallelism on FPGAs. Compared with conventional CPU-based implementations, our scheme exhibits superior scalability in computation time, reducing the bond-dimension scaling of the computational cost from $O(D_b^3)$ to $O(D_b)$ for iTEBD and from $O(D_b^6)$ to $O(D_b^2)$ for HOTRG. This work provides a theoretical foundation for future hardware implementations of large-scale tensor network computations.
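
To make the quoted $O(D_b^3)$ CPU scaling concrete: a single iTEBD two-site update contracts two MPS tensors, applies a gate, and truncates back with an SVD whose cost grows with the cube of the bond dimension, which is representative of the kernel the FPGA design parallelizes. A schematic numpy version of one such update (ignoring the singular-value weights carried by the full canonical-form algorithm) is shown below with illustrative dimensions.

```python
# One schematic iTEBD-style two-site update; the SVD is the cubically scaling kernel (illustrative).
import numpy as np

D, d = 16, 2                                          # bond dimension, physical dimension
rng = np.random.default_rng(0)
A = rng.normal(size=(D, d, D))                        # left MPS tensor (left bond, physical, right bond)
B = rng.normal(size=(D, d, D))                        # right MPS tensor
gate = np.linalg.qr(rng.normal(size=(d * d, d * d)))[0]   # stand-in two-site gate (orthogonal matrix)

theta = np.tensordot(A, B, axes=([2], [0]))           # (D, d, d, D) two-site block
theta = theta.reshape(D, d * d, D)
theta = np.einsum("ij,ajb->aib", gate, theta)         # apply the gate to the physical pair
theta = theta.reshape(D * d, d * D)

U, s, Vt = np.linalg.svd(theta, full_matrices=False)  # dominant cost, roughly O(d^3 D^3)
s = s[:D] / np.linalg.norm(s[:D])                     # truncate back to bond dimension D
A_new = U[:, :D].reshape(D, d, D)
B_new = (s[:, None] * Vt[:D]).reshape(D, d, D)

print("updated tensor shapes:", A_new.shape, B_new.shape, " kept singular values:", len(s))
```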
Giant bubbles of Fisher zeros in the quantum XY chain
This paper introduces a novel method using complex-valued temperature and Fisher zeros to study quantum phase transitions in the quantum XY chain, revealing unexpected energy scales and oscillatory behaviors that contradict standard low-energy theories.
Key Contributions
- Development of thermofield dynamics approach to characterize quantum phases through Fisher zeros
- Discovery of giant bubbles of Fisher zeros that reveal characteristic energy scales contradicting Luttinger liquid theory
View Full Abstract
We demonstrate an alternative approach based on complex-valued inverse temperature and partition function to probe quantum phases of matter with nontrivial spectra and dynamics. It leverages thermofield dynamics (TFD) to quantitatively characterize quantum and thermal fluctuations, and exploits the correspondence between low-energy excitations and Fisher zeros. Using the quantum XY chain in an external field as a testbed, we show that the oscillatory gap behavior manifests as oscillations in the long-time dynamics of the TFD spectral form factor. We also identify giant bubbles, i.e., large-scale closed lines of Fisher zeros, near the gapless XX limit. They provide a characteristic energy scale that seems to contradict the predictions of the low-energy theory of a featureless Luttinger liquid. We identify this energy scale and relate the motion of these giant bubbles with varying external field to the transfer of spectral weight from high to low energies. The deep connection between Fisher zeros, dynamics, and excitations opens up promising avenues for understanding the unconventional gap behaviors in strongly correlated many-body systems.
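
Fisher zeros are zeros of the partition function at complex inverse temperature, $Z(\beta_R + i\tau) = \sum_n e^{-(\beta_R + i\tau)E_n}$, and $|Z(\beta_R + i\tau)|^2$ normalized by $Z(\beta_R)^2$ is the TFD spectral form factor discussed above. The toy scan below locates the smallest $|Z|$ on a grid for a synthetic spectrum; it does not reconstruct the XY-chain spectrum or the giant bubbles themselves.

```python
# Scan |Z(beta_R + i*tau)| over the complex-temperature plane for a toy spectrum (illustrative).
import numpy as np

rng = np.random.default_rng(0)
energies = np.sort(rng.uniform(-1.0, 1.0, size=100))       # synthetic many-body spectrum

beta_R = np.linspace(0.1, 5.0, 100)
tau = np.linspace(0.0, 60.0, 300)
BR, TAU = np.meshgrid(beta_R, tau, indexing="ij")
Z = np.exp(-(BR[..., None] + 1j * TAU[..., None]) * energies).sum(axis=-1)

absZ = np.abs(Z)
i, j = np.unravel_index(absZ.argmin(), absZ.shape)
print("smallest |Z| on the grid:", absZ.min())
print("located near beta_R =", round(float(BR[i, j]), 2), ", tau =", round(float(TAU[i, j]), 2))
```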
Entropy Bounds via Hypothesis Testing and Its Applications to Two-Way Key Distillation in Quantum Cryptography
This paper develops improved methods for analyzing quantum key distribution (QKD) security by connecting two-way key distillation rates to quantum hypothesis testing theory. The work provides better bounds on key generation rates for finite block lengths and closes gaps between known conditions for secure key generation in the asymptotic limit.
Key Contributions
- Establishes rigorous connection between two-way key distillation rates and quantum asymptotic hypothesis testing via integral representation of relative entropy
- Improves key rate bounds for small to intermediate blocklengths compared to existing fidelity-based methods
- Closes the gap between sufficient and necessary conditions for asymptotic key generation
View Full Abstract
Quantum key distribution (QKD) achieves information-theoretic security, without relying on computational assumptions, by distributing quantum states. To establish secret bits, two honest parties exploit key distillation protocols over measurement outcomes resulting after the distribution of quantum states. In this work, we establish a rigorous connection between the key rate achievable by applying two-way key distillation, such as advantage distillation, and quantum asymptotic hypothesis testing, via an integral representation of the relative entropy. This connection improves key rates at small to intermediate blocklengths relative to existing fidelity-based bounds and enables the computation of entropy bounds for intermediate to large blocklengths. Moreover, this connection allows one to close the gap between known sufficient and conjectured necessary conditions for key generation in the asymptotic regime, while the precise finite-blocklength conditions remain open. More broadly, our work shows how advances in quantum multiple hypothesis testing can directly sharpen the security analyses of QKD.
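
As background on the two-way step being bounded: advantage distillation encodes one secret bit into a block of $B$ raw key bits, and the receiver keeps a block only if his noisy copy is internally consistent, driving the conditional error rate from $\epsilon$ down to roughly $\epsilon^B/(\epsilon^B + (1-\epsilon)^B)$ at the cost of rate. The sketch below simulates that standard classical step over a binary symmetric channel; it is not the paper's hypothesis-testing bound, and the error rate and block size are arbitrary.

```python
# Advantage distillation over a binary symmetric channel (illustrative classical post-processing step).
import numpy as np

rng = np.random.default_rng(0)
eps, B, n_blocks = 0.15, 4, 200_000                     # raw error rate, block size, number of blocks

a = rng.integers(0, 2, size=(n_blocks, B))              # Alice's raw key bits
b = (a + (rng.random((n_blocks, B)) < eps)) % 2         # Bob's copy after the noisy channel
c = rng.integers(0, 2, size=(n_blocks, 1))              # Alice's secret bit per block
m = (a + c) % 2                                         # public message: block XOR secret bit

decoded = (b + m) % 2                                   # equals c XOR (channel errors), bitwise
accepted = np.all(decoded == decoded[:, :1], axis=1)    # Bob keeps only internally consistent blocks
err = np.mean(decoded[accepted, 0] != c[accepted, 0])

predicted = eps**B / (eps**B + (1 - eps)**B)
print(f"accepted fraction : {accepted.mean():.3f}")
print(f"error on accepted : {err:.4f}  (predicted {predicted:.4f})")
```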
Simulation of Adjoints and Petz Recovery Maps for Unknown Quantum Channels
This paper investigates how to physically implement mathematical transformations (transpose, complex conjugate, adjoint) of unknown quantum channels using quantum supermaps. The authors establish which transformations are possible, develop protocols for some cases, and apply these results to improve methods for estimating Petz recovery map properties.
Key Contributions
- Established hierarchy of physical realizability for channel transformations with exact transpose protocol and no-go theorems for complex conjugate and adjoint
- Developed virtual protocol for complex conjugate using quasi-probability decomposition and improved query complexity for Petz recovery map estimation
View Full Abstract
Transformations of quantum channels, such as the transpose, complex conjugate, and adjoint, are fundamental to quantum information theory. Given access to an unknown channel, a central problem is whether these transformations can be implemented physically with quantum supermaps. While such supermaps are known for unitary operations, the situation for general quantum channels is fundamentally different. In this work, we establish a strict hierarchy of physical realizability for the transposition, complex conjugation, and adjoint transformation of an unknown quantum channel. We present a probabilistic protocol that exactly implements the transpose with a single query. In contrast, we prove no-go theorems showing that neither the complex conjugate nor the adjoint can be implemented by any completely positive supermap, even probabilistically. We then overcome this impossibility by designing a virtual protocol for the complex conjugate based on quasi-probability decomposition, and show its optimality in terms of the diamond norm. As a key application, we propose a protocol to estimate the expectation values resulting from the Petz recovery map of an unknown channel, achieving an improved query complexity compared to existing methods.
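
For reference, the Petz recovery map of a channel $\mathcal{N}$ with respect to a state $\sigma$ acts as $\mathcal{R}(X) = \sigma^{1/2}\,\mathcal{N}^\dagger\big(\mathcal{N}(\sigma)^{-1/2} X\, \mathcal{N}(\sigma)^{-1/2}\big)\,\sigma^{1/2}$ and satisfies $\mathcal{R}(\mathcal{N}(\sigma)) = \sigma$. The sketch below evaluates this textbook formula for a known amplitude-damping channel; the paper's contribution, estimating such expectation values when the channel is unknown, is not reproduced here.

```python
# Petz recovery map for a known channel, checked on the reference state (illustrative).
import numpy as np
from scipy.linalg import sqrtm, inv

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def apply_adjoint(kraus, Y):
    return sum(K.conj().T @ Y @ K for K in kraus)

def petz_recovery(kraus, sigma, X):
    Ns_inv_sqrt = inv(sqrtm(apply_channel(kraus, sigma)))
    inner = apply_adjoint(kraus, Ns_inv_sqrt @ X @ Ns_inv_sqrt)
    s_sqrt = sqrtm(sigma)
    return s_sqrt @ inner @ s_sqrt

gamma = 0.3                                                    # amplitude-damping channel
kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
         np.array([[0, np.sqrt(gamma)], [0, 0]])]
sigma = np.array([[0.6, 0.2], [0.2, 0.4]])                     # full-rank reference state

recovered = petz_recovery(kraus, sigma, apply_channel(kraus, sigma))
print("max |R(N(sigma)) - sigma| =", np.abs(recovered - sigma).max())   # ~ 0 up to numerics
```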
Quantum statistical functions
This paper develops a comprehensive mathematical framework for quantum versions of classical statistical functions (like moment-generating functions) by using expectation values with respect to purified states. The framework unifies quantum statistics, quasiprobability distributions, and weak values, providing a cohesive structure for understanding quantum statistical measures.
Key Contributions
- Established a comprehensive framework for quantum statistical functions that overcomes operator noncommutativity limitations
- Unified disparate concepts including quantum statistics, quasiprobability distributions, and weak values under a single mathematical structure
- Defined conditional quantum statistical functions that yield weak values and weak variance through pre- and post-selection
View Full Abstract
Statistical functions such as the moment-generating function, characteristic function, cumulant-generating function, and second characteristic function are cornerstone tools in classical statistics and probability theory. They provide a powerful means to analyze the statistical properties of a system and find applications in diverse fields, including statistical physics and field theory. While these functions are ubiquitous in classical theory, a quantum counterpart has remained elusive due to the fundamental hurdle of noncommutativity of operators. The lack of such a framework has obscured the deep connections between standard statistical measures and the non-classical features of quantum mechanics. Here, we establish a comprehensive framework for quantum statistical functions that transcends these limitations, naturally unifying the disparate languages of standard quantum statistics, quasiprobability distributions, and weak values. We show that these functions, defined as expectation values with respect to the purified state, naturally reproduce fundamental quantum statistical quantities like expectation values, variance, and covariance upon differentiation. Crucially, by extending this framework to include the concepts of pre- and post-selection, we define conditional quantum statistical functions that uniquely yield weak values and weak variance. We further demonstrate that multivariable quantum statistical functions, when defined with specific operator orderings, correspond to well-known quasiprobability distributions. Our framework provides a cohesive mathematical structure that not only reproduces standard quantum statistical measures but also incorporates nonclassical features of quantum mechanics, thus laying the foundation for a deeper understanding of quantum statistics.
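
As orientation for the classical objects being generalized: a characteristic function $\chi(t) = \mathrm{Tr}[\rho\, e^{itA}]$ reproduces moments by differentiation, $\langle A\rangle = -i\,\chi'(0)$, while pre- and post-selection defines the weak value $A_w = \langle\phi|A|\psi\rangle/\langle\phi|\psi\rangle$. The sketch below only checks these two standard relations numerically; it does not implement the purified-state construction introduced in the paper, and the operator and states are arbitrary.

```python
# Characteristic-function moments and a weak value, checked numerically (illustrative).
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 0.5], [0.5, -1.0]])                  # Hermitian observable
rho = np.array([[0.7, 0.1], [0.1, 0.3]])                 # a qubit state

def chi(t):                                              # characteristic function Tr[rho e^{i t A}]
    return np.trace(rho @ expm(1j * t * A))

dt = 1e-5
first_moment = (-1j * (chi(dt) - chi(-dt)) / (2 * dt)).real
print("-i chi'(0) =", round(first_moment, 6), " vs Tr[rho A] =", round(np.trace(rho @ A).real, 6))

psi = np.array([1.0, 1.0]) / np.sqrt(2)                  # pre-selected state
phi = np.array([1.0, 0.2]); phi = phi / np.linalg.norm(phi)   # post-selected state
weak_value = (phi.conj() @ A @ psi) / (phi.conj() @ psi)
print("weak value A_w =", weak_value)
```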
Assessing the Sensitivity of Niobium- and Tantalum-Based Superconducting Qubits to Infrared Radiation
This paper compares how niobium and tantalum-based superconducting qubits respond to infrared radiation, finding that tantalum qubits are more sensitive to infrared-induced decoherence but can be improved with proper filtering. The research shows that material choice affects qubit performance and highlights the importance of controlling infrared radiation in quantum computing setups.
Key Contributions
- Comparative analysis of infrared radiation sensitivity between niobium and tantalum superconducting qubits
- Demonstration that infrared filtering can reduce quasiparticle tunneling rates to 100-300 Hz
- Identification of time-dependent decoherence effects from slowly cooling experimental components
View Full Abstract
The use of tantalum films for superconducting qubits has recently extended qubit coherence times significantly, primarily due to reduced dielectric losses at the metal-air interface. However, the choice of base material also influences the sensitivity to quasiparticle-induced decoherence. In this study, we investigate quasiparticle tunneling rates in niobium and tantalum-based offset-charge-sensitive qubits. Using a source of thermal radiation, we characterize the sensitivity of either material to infrared radiation and explore the impact of the infrared background through the targeted use of in-line filters in the wiring and ambient infrared absorbers. We identify both radiation channels as significant contributions to decoherence for tantalum but not for niobium qubits and achieve tunneling rates of 100 Hz and 300 Hz for niobium and tantalum respectively upon installation of infrared filters. Additionally, we find a time-dependence in the observed tunneling rates on the scale of days, which we interpret as evidence of slowly cooling, thermally radiating components in the experimental setup. Our findings indicate that continued improvements in coherence times may require renewed attention to radiative backgrounds and experimental setup design, especially when introducing new material platforms.
The Quantum Message Complexity of Distributed Wake-Up with Advice
This paper studies the quantum message complexity of distributed wake-up problems in networks, where some nodes need to wake up all sleeping nodes efficiently using quantum communication. The authors provide both upper and lower bounds for this problem, showing quantum advantages over classical approaches in certain scenarios.
Key Contributions
- First quantum upper and lower bounds for distributed wake-up problem with advice
- Quantum algorithm that breaks classical barriers in dense graphs with O(√(n³/2^α) · log n) message complexity
- Lower bound showing Ω(n^(3/2)) quantum message complexity without advice
View Full Abstract
We consider the distributed wake-up problem with advice, where nodes are equipped with initial knowledge about the network at large. After the adversary awakens a subset of nodes, an oracle computes a bit string (``the advice'') for each node, and the goal is to wake up all sleeping nodes efficiently. We present the first upper and lower bounds on the message complexity for wake-up in the quantum routing model, introduced by Dufoulon, Magniez, and Pandurangan (PODC 2025). In more detail, we give a distributed advising scheme that, given $α$ bits of advice per node, wakes up all nodes with a message complexity of $O\left( \sqrt{\frac{n^3}{2^{\max\{\lfloor (α-1)/2 \rfloor,\, 0\}}}}\cdot\log n \right)$ with high probability. Our result breaks the $Ω( \frac{n^2}{2^α} )$ barrier known for the classical port numbering model in sufficiently dense graphs. To complement our algorithm, we give a lower bound on the message complexity for distributed quantum algorithms: By leveraging a lower bound result for the single-bit descriptor problem in the query complexity model, we show that wake-up has a quantum message complexity of $Ω( n^{3/2} )$ without advice, which holds independently of how much time we allow. In the setting where an adversary decides which nodes start the algorithm, most graph problems of interest implicitly require solving wake-up, and thus the same lower bound also holds for other fundamental problems such as single-source broadcast and spanning tree construction.
Advanced Quantum Communication and Quantum Networks -- From basic research to future applications
This review paper provides an overview of quantum communication networks and their fundamental properties, covering the interfaces between classical and quantum systems, methods for transmitting quantum information, and potential future applications of a quantum internet.
Key Contributions
- Comprehensive review of quantum information networks fundamentals
- Analysis of classical-quantum interfaces and transmission methods
- Overview of future quantum internet applications and interconnected quantum devices
View Full Abstract
Classical communication is the basis for many of our current and future technologies, such as mobile phones, video conferences, autonomous vehicles and particularly the internet. In contrast, quantum communication is governed by the laws of quantum mechanics. Due to this fundamental difference, it might offer enormous benefits for security applications, more precise measurements, faster computations, and many other fields of application by interconnecting different quantum devices, such as quantum sensors, quantum computers, or quantum memories. This review provides an overview of the specific properties of quantum information networks. This includes the interfaces between the classical and the quantum regime, the transmission of the quantum information by physical implementations, and potential future applications of quantum networks. We aim to provide a starting point based on fundamental concepts of quantum information processing for further research on a future quantum internet.
Efficient implementation of arbitrary Hermitian-preserving and trace-preserving maps
This paper presents a new efficient method for implementing Hermitian-preserving and trace-preserving (HPTP) quantum maps, which are important for quantum error correction, simulation, and machine learning. The approach converts HPTP maps into executable quantum operations with significantly reduced resource requirements compared to existing methods.
Key Contributions
- Efficient constructive method for implementing arbitrary HPTP maps with reduced Kraus rank
- Single CPTP map compilation approach that avoids decomposition into multiple maps or large Hilbert space approximations
- Demonstrated resource reductions for quantum error mitigation applications including bosonic photon loss channels
View Full Abstract
Quantum control has been a cornerstone of quantum information science, driving major advances in quantum computing, quantum communication, and quantum sensing. Over the years, it has enabled the implementation of arbitrary completely positive and trace-preserving (CPTP) maps; an important next step is to extend control to Hermitian-preserving and trace-preserving (HPTP) maps, which underpin applications such as entanglement detection, quantum error mitigation, quantum simulation, and quantum machine learning. Here we present an efficient and fully constructive method for implementing arbitrary HPTP maps. Unlike existing methods that decompose an HPTP map into multiple CPTP maps or approximate it using bipartite Hamiltonians with large Hilbert spaces, our approach compiles a target HPTP map into a single executable CPTP map whose Kraus rank is guaranteed to be no larger than the intrinsic rank of the target HPTP map plus one, followed by simple classical post-processing. Numerical results for inverse noise channels used in quantum error mitigation, including bosonic photon loss, confirm substantial reductions in resources and highlight scalability in higher-dimensional settings. Together with our numerical benchmarks, these results validate the efficiency and versatility of the proposed framework, opening a route to broader quantum-information applications enabled by HPTP processing.
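For orientation, the "multiple CPTP maps" baseline that this construction improves upon is the familiar quasi-probability decomposition: write the HPTP map as a signed mixture of CPTP maps and push the signs into classical post-processing, at the cost of a sampling overhead γ. Below is a minimal single-qubit sketch of that baseline, not the paper's single-CPTP-map compilation; the channel, test state, and observable are arbitrary illustrative choices.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def depolarizing(rho, p):
    """CPTP depolarizing channel E_p(rho) = (1-p) rho + p Tr(rho) I/2."""
    return (1 - p) * rho + p * np.trace(rho) * I2 / 2

# Illustrative HPTP target: the (non-CP) inverse of a depolarizing channel,
# written as a signed combination of two CPTP maps:
#   E_p^{-1} = q1 * Identity + q2 * Depol(1),   q1 = 1/(1-p),  q2 = -p/(1-p)
p = 0.2
q = np.array([1 / (1 - p), -p / (1 - p)])
gamma = np.abs(q).sum()                          # sampling overhead of the decomposition

rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)   # arbitrary test state
noisy = depolarizing(rho, p)
exact = np.trace(X @ rho).real                   # target: Tr[X * E_p^{-1}(E_p(rho))] = Tr[X rho]

# Monte-Carlo estimate: sample a CPTP branch with probability |q_i|/gamma,
# apply it, and reweight the branch's expectation value by gamma * sign(q_i).
rng = np.random.default_rng(0)
branches = [lambda r: r, lambda r: depolarizing(r, 1.0)]
shots, est = 20000, 0.0
for _ in range(shots):
    i = rng.choice(2, p=np.abs(q) / gamma)
    est += gamma * np.sign(q[i]) * np.trace(X @ branches[i](noisy)).real
est /= shots

print(f"exact {exact:.4f}   quasi-probability estimate {est:.4f}   overhead gamma = {gamma:.3f}")
```

The per-shot branch sampling and the overhead γ are exactly the costs that compiling the HPTP map into a single executable CPTP map plus simple post-processing is meant to reduce.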
Color Centers and Hyperbolic Phonon Polaritons in Hexagonal Boron Nitride: A New Platform for Quantum Optics
This paper develops a theoretical framework for using quantum light sources (color centers) in hexagonal boron nitride to generate and control hyperbolic phonon polaritons, creating a new platform that combines quantum optics with mid-infrared light manipulation at subwavelength scales.
Key Contributions
- Establishes cavity-QED framework connecting hBN color centers with hyperbolic phonon polaritons
- Develops two HPP generation schemes: spontaneous emission and stimulated Raman process
- Proposes two-emitter correlation measurement to verify single-polariton character
View Full Abstract
Hyperbolic phonon polaritons (HPPs) in hexagonal boron nitride (hBN) confine mid-infrared light to deep-subwavelength scales and may offer a powerful route to strong light-matter interactions. Generation and control of HPPs are typically accessed using classical near-field probes, which limits experiments at the quantum level. A complementary frontier in hBN research focuses on color centers: bright, stable, atomically localized emitters that have rapidly emerged as a promising platform for solid-state quantum optics. Here we establish a key connection between these two directions by developing a cavity-QED framework in which a single hBN color center serves as a quantum source of HPPs. We quantify the emitter-HPP interaction and analyze two generation schemes. The first is spontaneous emission into the phonon sideband, which can produce single-HPP events and, in ultrathin slabs, becomes single-mode with an enhanced decay rate. The second is a stimulated Raman process that provides frequency selectivity, tunable conversion rate, and narrowband excitation. This drive launches spatially confined, ray-like HPPs that propagate over micrometer distances. We also outline a two-emitter correlation measurement that can directly test the single-polariton character of these emissions. By connecting color-center quantum optics with hyperbolic polaritonics, our approach enables quantum emitters to act as on-chip quantum sources and controls for HPPs, while HPPs provide long-range channels that couple spatially separated emitters. Together, these capabilities point to a new direction for mid-infrared photonic experiments that unite strong coupling, spectral selectivity, and spatial reach within a single material system.
Numerical approaches to entangling dynamics from variational principles
This paper develops numerical methods to identify and track entanglement in quantum systems over time by restricting evolution to separable (non-entangled) states and comparing with unrestricted dynamics. The authors compare different discretization approaches and find that applying restrictions before discretization provides better numerical stability than the reverse order.
Key Contributions
- Development of numerical tools for detecting dynamical entanglement using variational principles and restricted evolution
- Demonstration that 'first-restrict-then-discretize' approaches are more numerically stable than 'first-discretize-then-restrict' methods
View Full Abstract
In this work, we address the numerical identification of entanglement in dynamical scenarios. To this end, we consider different programs based on the restriction of the evolution to the set of separable (i.e., non-entangled) states, together with the discretization of the space of variables for numerical computations. As a first approach, we apply linear splitting methods to the restricted, continuous equations of motion derived from variational principles. We utilize an exchange interaction Hamiltonian to confirm that the numerical and analytical solutions coincide in the limit of small time steps. The application to different Hamiltonians shows the wide applicability of the method to detect dynamical entanglement. To avoid the derivation of analytical solutions for complex dynamics, we consider variational, numerical integration schemes, introducing a variational discretization for Lagrangians linear in velocities. Here, we examine and compare two approaches: one in which the system is discretized before the restriction is applied, and another in which the restriction precedes the discretization. We find that the "first-discretize-then-restrict" method becomes numerically unstable, already for the example of an exchange-interaction Hamiltonian, which can be an important consideration for the numerical analysis of constrained quantum dynamics. In this way, we establish broadly applicable numerical tools, and identify their limitations, for studying entanglement over time and for assessing the entangling power of processes used in quantum information theory.
Investigations on Quantum Correlations and Open Quantum System Dynamics Through Nuclear Spins
This paper uses nuclear spins as a platform to experimentally investigate quantum correlations and open quantum system dynamics. The research includes studies of temporal quantum correlations through Leggett-Garg inequality violations, Lee-Yang zeros in thermodynamic systems, the quantum Mpemba effect, and entanglement dynamics in multi-qubit systems.
Key Contributions
- Experimental demonstration of Leggett-Garg inequality violations exceeding classical bounds using superposed unitary operators on nuclear spin qubits
- Novel method to determine Lee-Yang zeros of asymmetric Ising models using a single quantum probe
- Experimental verification of the quantum Mpemba effect in nuclear spin relaxation systems
- Investigation of entanglement localization/delocalization and apparent violations of quantum data processing inequality
View Full Abstract
Nuclear spins provide an ideal platform for studying quantum correlations and open quantum system dynamics across diverse areas, including quantum information, quantum foundations, and many-body physics. This is enabled by their long longitudinal (T1) and transverse (T2) coherence times and precise control using radio frequency pulses. In this thesis, I present my work using nuclear spins to explore these themes. First, I study temporal quantum correlations quantified by the Leggett-Garg inequality (LGI) for a qubit evolving under a superposition of unitary operators. Using a three-qubit quantum register, we experimentally realized superposed unitaries and observed LGI violations exceeding the maximal quantum bound of 1.5, indicating enhanced non-classicality. Notably, this superposed unitary dynamics also showed improved robustness against decoherence. Next, I investigate Lee-Yang zeros, which are zeros of the partition function in the complex plane that reveal thermodynamic behavior near criticality. We proposed and experimentally demonstrated a method to determine the full set of Lee-Yang zeros of an asymmetric Ising model using a single quantum probe in a three-qubit nuclear spin register. We further showed that the mutual information between the probe and system peaks at times corresponding to these zeros. I then report our study of the quantum Mpemba effect in nuclear spin relaxation, where systems farther from equilibrium can relax faster than those closer to steady state, verified both theoretically and experimentally using NMR. Finally, I discuss our work on entanglement localization and delocalization induced by local interactions, leading to an apparent violation of the quantum data processing inequality. We showed that this violation is only apparent by constructing a completely positive and trace preserving map describing the dynamics.
A Comparative Study of Correlation and Relativistic Effects on Atomic Ionization Energy
This paper studies how relativistic effects and electron correlation effects interact when calculating ionization energies of heavy atoms (gold through radon). The researchers found that these two quantum mechanical effects don't simply add together but interact in complex, non-linear ways that must be treated simultaneously for accurate predictions.
Key Contributions
- Demonstrated non-additivity of relativistic and correlation effects in heavy atom ionization energies
- Showed that accurate computational predictions require simultaneous treatment of both effects rather than independent contributions
View Full Abstract
This study investigates the interplay between relativistic effects and electron correlation effects on the first ionization energies of heavy atoms (Au through Rn, Z = 79-86). We perform two complementary analyses: (1) comparing relativistic corrections computed at both the Hartree-Fock (HF) and coupled cluster CCSD(T) levels to assess how correlation influences the magnitude of relativistic corrections, and (2) comparing correlation corrections computed within both non-relativistic and relativistic frameworks to determine how relativity influences the magnitude of correlation corrections. Our results reveal a striking non-linear relationship between these two effects. Specifically, the combined effect of relativity and correlation on ionization energy does not equal the sum of their individual contributions. This non-additivity indicates that relativistic and correlation effects are not independent; they interact in complex ways that depend on the atomic system. We find that for some atoms, the two effects enhance each other, while for others they partially cancel. Moreover, the order in which one may add "separate" effects also counts, in that adding "pure" relativistic effects to the remaining outcome (including correlation) would give a different result than when adding "pure" correlation effects to the remaining outcome (including relativity). These findings demonstrate that relativistic and correlation effects are inherently non-additive, reflecting the non-linearity of the quantum many-body problem. Accurate computational predictions of ionization energies in heavy-element systems thus require simultaneous treatment of both effects rather than treating them as independent contributions.
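The non-additivity claim can be written compactly. With IE denoting the first ionization energy evaluated at the four combinations of method (HF vs. CCSD(T)) and framework (non-relativistic vs. relativistic), the cross term below is what the study finds to be nonzero (a schematic decomposition; the notation is ours, not the paper's):

```latex
\Delta_{\mathrm{rel}} = \mathrm{IE}^{\mathrm{rel}}_{\mathrm{HF}} - \mathrm{IE}^{\mathrm{NR}}_{\mathrm{HF}}, \qquad
\Delta_{\mathrm{corr}} = \mathrm{IE}^{\mathrm{NR}}_{\mathrm{CCSD(T)}} - \mathrm{IE}^{\mathrm{NR}}_{\mathrm{HF}}, \\[4pt]
\delta_{\mathrm{cross}} = \bigl(\mathrm{IE}^{\mathrm{rel}}_{\mathrm{CCSD(T)}} - \mathrm{IE}^{\mathrm{NR}}_{\mathrm{HF}}\bigr) - \Delta_{\mathrm{rel}} - \Delta_{\mathrm{corr}} \;\neq\; 0 \quad \text{(non-additivity)}.
```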
Dispersion in nonlinear interferometry: implications for optical coherence tomography with undetected photons
This paper analyzes dispersion problems in quantum interferometers used for optical coherence tomography with undetected photons, where correlated photons at different frequencies cause imaging degradation. The authors propose a novel numerical compensation method that extracts phase information from time-domain measurements and applies it to mid-infrared spectral-domain OCT signals, achieving a 2.2-fold improvement in axial resolution.
Key Contributions
- Analysis of group velocity dispersion effects in SU(1,1) quantum interferometers for OCT imaging
- Development of empirical numerical compensation method using phase extraction from time-domain to spectral-domain signals
View Full Abstract
Nonlinear SU(1,1) quantum interferometers based on non-degenerate optical parametric down-conversion exhibit strong unbalanced group velocity dispersion (GVD). This feature is intrinsic to this type of interferometer as correlated photons of vastly different frequencies propagate through a dispersive nonlinear crystal; consequently, the dispersion arises from the source itself. The resulting GVD degrades the axial point-spread function (PSF) in optical coherence tomography (OCT) with undetected photons, and physical compensation is less straightforward, in particular for non-degenerate broadband regimes due to the limited number of suitable materials. In this contribution, we analyze dispersion in bulk nonlinear interferometry and describe its implications for OCT imaging. Aspects of hardware compensation are addressed, and a novel empirical numerical method of compensation is proposed. The approach is based on the extraction of the phase component directly from the time-domain modality (high precision linearized quantum Fourier transform infrared spectrometer) and its injection into the mid-IR spectral-domain OCT signals (central wavelength of around 3770 nm) before the Fourier transform. The proposed method is compared with an alternative numerical technique. The results demonstrate a 2.2-fold improvement in axial resolution and outperform the alternative correction method in overall imaging performance.
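The last step described above, injecting an extracted spectral phase into the spectral-domain signal before the Fourier transform, is the standard numerical dispersion correction used in spectral-domain OCT. The sketch below applies it to a synthetic interferogram with an assumed quadratic (GVD) phase; extracting that phase from the time-domain FTIR modality is the paper's contribution and is not reproduced here.

```python
import numpy as np

# Synthetic spectral-domain interferogram of a single reflector at depth z0,
# degraded by an assumed quadratic GVD phase phi(nu) (illustrative values only).
N = 4096
nu = np.linspace(-1.0, 1.0, N)              # normalized optical frequency detuning
z0 = 120.0                                  # reflector position (arbitrary units)
phi_gvd = 800.0 * nu**2                     # unbalanced GVD phase
spectrum = np.exp(-(nu / 0.4) ** 2)         # source envelope

interferogram = spectrum * np.cos(2 * np.pi * z0 * nu + phi_gvd)

# Compensation: remove the known/extracted phase from the complex signal, then transform.
# (Here the analytic signal is built directly; in practice it would come from the data.)
analytic = spectrum * np.exp(1j * (2 * np.pi * z0 * nu + phi_gvd))
compensated = (analytic * np.exp(-1j * phi_gvd)).real

def axial_profile(signal):
    """A-scan: magnitude of the Fourier transform along the frequency axis."""
    return np.abs(np.fft.fftshift(np.fft.fft(signal)))

def peak_width(profile):
    """Rough axial PSF width: number of bins above half the peak value."""
    return int(np.count_nonzero(profile > profile.max() / 2))

print("axial PSF width (bins): dispersed =", peak_width(axial_profile(interferogram)),
      " compensated =", peak_width(axial_profile(compensated)))
```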
Hamiltonian Benchmark of a Solid-State Spin-Photon Interface for Computation
This paper analyzes the exact quantum dynamics of solid-state spin-photon interfaces without approximations, benchmarking three key protocols for quantum information processing. The study reveals that realistic imperfections severely limit photon-photon gates but have minimal impact on generating photon-number superpositions and linear photonic cluster states.
Key Contributions
- Exact Hamiltonian analysis of spin-photon interfaces without single-mode or open-system approximations
- Identification of fundamental performance limits for photon-photon gates and photonic cluster state generation
- Quantitative assessment showing photon-number superpositions are robust to realistic imperfections
View Full Abstract
Light-matter interfaces are pivotal for quantum computation and communication. While typically analyzed using single-mode or open-quantum-system approximations, these models often neglect multi-mode field states and light-matter entanglement, hindering exact protocol modeling. Here, we solve the full Hamiltonian dynamics of a solid-state spin-photon interface for three key protocols: the generation of photon-number superpositions, a controlled photon-photon gate, and the production of photonic cluster states. By deriving exact fidelities, we identify fundamental performance limits. Our results reveal that while realistic imperfections severely limit photon-photon gates, they only slightly affect linear photonic clusters and are nearly harmless for photon-number state superpositions.
Simultaneous reconstruction of quantum process and noise via corrupted sensing
This paper develops a new method for quantum process tomography that can simultaneously characterize quantum gates/channels and identify corrupted noise in quantum systems. The approach uses mathematical frameworks (Choi-state and process-matrix representations) to efficiently reconstruct quantum processes even when noise is present, requiring fewer experimental measurements.
Key Contributions
- Framework for simultaneous reconstruction of quantum processes and corrupted noise using quantum process tomography
- Demonstration of significant reduction in required experimental configurations for process characterization under noisy conditions
View Full Abstract
Quantum processes, including quantum gates and channels, are integral to various quantum information tasks, making the efficient characterization of these processes and their underlying noise critically important. Here, we propose a framework for quantum process tomography in the presence of corrupted noise that is able to simultaneously reconstruct the process and corrupted noise. Firstly, within the Choi-state representation, we derive the corresponding generalized restricted isometry property and demonstrate the simultaneous reconstruction of various quantum gates under sparse noise. Moreover, in comparison with the Choi-state scheme, the process-matrix representation is employed to simultaneously reconstruct sparse noise and a broader range of target quantum gates. Our results demonstrate that significant reduction in experimental configurations is achievable even under corrupted noise.
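For readers unfamiliar with the term, "corrupted sensing" problems of this kind are usually posed as a joint convex recovery of a structured signal and a sparse corruption. A generic formulation, not necessarily the exact program used in the paper, with X the Choi (or process) matrix, S the sparse noise term, A the measurement map built from the chosen experimental configurations, and y the observed data:

```latex
\min_{X,\,S} \; \|X\|_{*} \;+\; \lambda\,\|S\|_{1}
\quad \text{subject to} \quad
\bigl\|\mathcal{A}(X) + S - y\bigr\|_{2} \;\le\; \varepsilon,
\qquad X \succeq 0 \;\;(\text{plus the physicality constraints on } X).
```

The nuclear norm promotes a low-rank (near-unitary) process, the l1 term promotes sparse corruption, and a restricted isometry property of the kind derived in the abstract is the usual tool for guaranteeing that both terms can be recovered from few experimental configurations.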
Adaptive controllable architecture of analog Ising machine
This paper develops a theoretical framework for analog Ising machines (classical optimization hardware inspired by quantum systems) and proposes an improved controllable version (CAIM) that achieves better speed and accuracy for solving optimization problems like MaxCut.
Key Contributions
- Unified mathematical formulation and analytical framework for analog Ising machines using Lagrange multipliers and Lyapunov analysis
- Development of controllable analog Ising machine (CAIM) with adaptive sampling-feedback control that surpasses conventional performance limits
View Full Abstract
As a quantum-inspired, non-traditional analog solver architecture, the analog Ising machine (AIM) has emerged as a distinctive computational paradigm to address the rapidly growing demand for computational power. However, the mathematical understanding of its principles, as well as the optimization of its solution speed and accuracy, remain unclear. In this work, we systematically discuss, for the first time, multiple implementations of AIM and establish a unified mathematical formulation. On this basis, by treating the binarization constraint of AIM (such as injection locking) as a Lagrange multiplier in optimization theory and combining it with a Lyapunov analysis from dynamical systems theory, we construct an analytical framework for evaluating solution speed and accuracy, and further demonstrate that conventional AIMs possess a theoretical performance upper bound. Subsequently, by elevating the binarization constraint to a control variable, we propose the controllable analog Ising machine (CAIM), which integrates control Lyapunov functions and momentum-based optimization algorithms to realize adaptive sampling-feedback control, thereby surpassing the performance limits of conventional AIMs. In a proof-of-concept CAIM demonstration implemented using an FPGA-controlled LC-oscillator Ising machine, CAIM achieves a twofold speedup and a 7\% improvement in accuracy over AIM on a 50-node all-to-all weighted MaxCut problem, validating both the effectiveness and interpretability of the proposed theoretical framework.
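For readers unfamiliar with the architecture, a conventional (uncontrolled) analog Ising machine can be caricatured by soft-spin amplitude dynamics in which a cubic term plays the role of the binarization constraint. The sketch below uses such a generic soft-spin model on a tiny weighted MaxCut instance; it illustrates the AIM idea only, not the paper's CAIM controller or its FPGA/LC-oscillator implementation.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Tiny all-to-all weighted MaxCut instance: symmetric coupling matrix J, zero diagonal.
n = 8
J = rng.normal(size=(n, n)); J = np.triu(J, 1); J = J + J.T

def cut_value(s, J):
    """Weighted MaxCut objective for spins s in {-1,+1}."""
    return 0.25 * np.sum(J * (1 - np.outer(s, s)))

# Generic soft-spin AIM dynamics (illustrative, not CAIM):
#   dx_i/dt = (a - x_i^2) x_i - eps * sum_j J_ij x_j
# The cubic term enforces binarization; the minus sign on the coupling makes the
# saturated amplitudes minimize s^T J s, i.e. maximize the cut. Readout: s = sign(x).
x = 0.01 * rng.normal(size=n)
a, eps, dt = 0.5, 0.3, 0.02
for _ in range(4000):
    x += dt * ((a - x**2) * x - eps * (J @ x))

s_aim = np.sign(x)
best = max(cut_value(np.array(s), J) for s in product([-1, 1], repeat=n))
print(f"AIM cut = {cut_value(s_aim, J):.3f}   brute-force optimum = {best:.3f}")
```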
Simulation of boson sampling with optical feedback
This paper proposes a modified version of boson sampling where photons from some output ports are fed back into input ports, creating interference between photons from different time periods. The researchers develop mathematical methods to analyze this system and show it reaches a steady state, defining a new computational problem called Stationary Distribution Boson Sampling.
Key Contributions
- Introduction of optical feedback loops in boson sampling systems
- Development of Kraus-operator and correlation-tensor methods for analyzing feedback systems
- Definition of Stationary Distribution Boson Sampling as a new computational complexity problem
View Full Abstract
This work presents a theoretical model of boson sampling with optical feedback, in which a subset of the interferometer's output modes is looped back into the input modes. If the bosons are injected periodically into the input modes of the interferometer and the optical feedback lines' lengths match the period of injection, interference arises between bosons injected at consecutive time iterations. We propose several methods for computing the output photon distributions in both spatial and temporal output modes, including not only the standard spatiotemporal mode-unfolding technique, but also a Kraus-operator formalism and a correlation-tensor-based approach. The latter two approaches reveal that, for random interferometers, this system evolves to a unique stationary state over time. Because of the existence of the stationary state, we introduce a new computational problem, \textit{Stationary Distribution Boson Sampling}, which appears to be harder than the conventional boson sampling problem and contains it as a special case when there are no optical feedback lines.
Reducing the Complexity of Matrix Multiplication to $O(N^2\log_2 N)$ by an Asymptotically Optimal Quantum Algorithm
This paper presents a quantum algorithm for matrix multiplication that claims to achieve O(N²log₂N) complexity, which would be faster than the best known classical algorithms. The authors test their quantum kernel-based matrix multiplication (QKMM) algorithm through simulations on both ideal and noisy quantum systems.
Key Contributions
- Novel quantum kernel-based matrix multiplication algorithm with claimed O(N²log₂N) complexity
- Demonstration of quantum advantage over classical matrix multiplication algorithms through simulation experiments
View Full Abstract
Matrix multiplication is a fundamental classical computing operation whose efficiency becomes a major challenge at scale, especially for machine learning applications. Quantum computing, with its inherent parallelism and exponential storage capacity, offers a potential solution to these limitations. This work presents a quantum kernel-based matrix multiplication algorithm (QKMM) that achieves an asymptotically optimal computational complexity of $ O(N^2 \log_2 N) $, outperforming the classical optimal complexity of $ O(N^{2.371552}) $, where $N$ denotes the matrix dimension. Through noiseless and noisy quantum simulation experiments, we demonstrate that the proposed algorithm not only exhibits superior theoretical efficiency but also shows practical advantages in runtime performance and stability.
Arithmetic Reconciliation for CVQKD: Challenges and Feasibility
This paper evaluates Arithmetic Reconciliation, a protocol used in continuous variable quantum key distribution (CVQKD) to help two parties establish shared secret keys. The authors assess the protocol's efficiency in realistic scenarios and demonstrate its feasibility for quantum communication applications.
Key Contributions
- Evaluation of Arithmetic Reconciliation protocol efficiency in realistic CVQKD scenarios
- Demonstration of the protocol's feasibility and promise for continuous variable quantum key distribution applications
View Full Abstract
Continuous variable quantum key distribution allows two legitimate parties to share a common secret key and encompasses reconciliation protocols. A relatively new reconciliation protocol, Arithmetic Reconciliation, has low complexity and its reconciliation efficiency increases at lower SNRs. In this paper, we obtain reconciliation efficiencies for this protocol in realistic scenarios by means of mutual information estimation, and we also present the rates at which Alice's and Bob's secret-key sequences match. Results show that this technique is feasible and promising for continuous variable quantum key distribution applications.
Single shot distinguishability of noisy quantum channels
This paper studies how to optimally distinguish between different types of noisy quantum channels using either single quantum systems or entangled probes. The researchers find that the best strategy depends on the specific type of noise, with entanglement providing advantages for some channels but not others.
Key Contributions
- Proved that maximally entangled probes are optimal for discriminating depolarizing channels while single-system probes are optimal for dephasing channels
- Identified noise-dependent regimes for amplitude-damping channels where different probe strategies are optimal
- Demonstrated examples where non-maximally entangled probes outperform both single-system and maximally entangled probes
View Full Abstract
Among the intriguing features of quantum theory, the problem of distinguishing quantum channels is of fundamental interest. In this paper, we focus on the single-shot discrimination of two noisy quantum channels using two distinct classes of probes: single-system (product) probes and entangled probes. Our aim is to identify the optimal probing state for specific discrimination tasks and to analyze the necessity and role of entanglement in enhancing channel distinguishability. We show that maximally entangled probes are optimal for discriminating two qubit depolarizing channels, with any nonzero entanglement providing an advantage over single-system probes. In contrast, for dephasing channels in arbitrary dimensions, we prove that a single-system probe can be optimal and that entanglement offers no improvement, even when the dephasing unitary is generalized. For qubit amplitude-damping channels, we identify distinct noise-dependent regimes in which single-system probes outperform maximally entangled probes, and vice versa. Moreover, we demonstrate that non-maximally entangled probes can be optimal for this task when the noise parameters are restricted to certain values. We also present examples of noisy unitary channels for which discrimination is possible using a non-maximally entangled probe, while both single-system and maximally entangled probes fail. We introduce another class of noisy unitary channels for which perfect discrimination is achievable with a single system, while maximally entangled probes are insufficient. Finally, we show that two erasure channels can be optimally discriminated using any pure single-system probe, with no advantage gained from entanglement.
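The single-shot figure of merit behind these results is the Helstrom bound: with equal priors, the optimal success probability of telling the two channel outputs apart is 1/2 + ||ρ₁ − ρ₂||₁/4. The quick numerical check below compares a single-system probe with a maximally entangled one for two qubit depolarizing channels; it illustrates the quantity being optimized, not the paper's proofs.

```python
import numpy as np

I2 = np.eye(2)

def trace_norm(A):
    """Trace norm of a Hermitian matrix = sum of |eigenvalues|."""
    return np.abs(np.linalg.eigvalsh((A + A.conj().T) / 2)).sum()

def helstrom(rho1, rho2):
    """Optimal single-shot success probability for equal priors."""
    return 0.5 + 0.25 * trace_norm(rho1 - rho2)

def depolarize_first_qubit(rho, p, dim_rest=2):
    """(E_p x id)(rho) for a qubit (x) rest state, E_p(rho) = (1-p) rho + p I/2."""
    r = rho.reshape(2, dim_rest, 2, dim_rest)
    rho_rest = np.einsum('ijik->jk', r)          # partial trace over the first qubit
    return (1 - p) * rho + p * np.kron(I2 / 2, rho_rest)

p1, p2 = 0.1, 0.4                                # the two depolarizing strengths

# Single-system probe |0> (by symmetry any pure qubit state gives the same value).
ket0 = np.array([[1.0], [0.0]])
rho_prod = ket0 @ ket0.T
out1 = (1 - p1) * rho_prod + p1 * I2 / 2
out2 = (1 - p2) * rho_prod + p2 * I2 / 2
print("single-system probe :", round(helstrom(out1, out2), 4))

# Maximally entangled probe |Phi+>; the channel acts on the first qubit only.
ket1 = ket0[::-1]
phi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_ent = phi @ phi.T
print("maximally entangled :", round(helstrom(depolarize_first_qubit(rho_ent, p1),
                                               depolarize_first_qubit(rho_ent, p2)), 4))
```

For these parameters the entangled probe wins (0.6125 vs 0.575), consistent with the depolarizing-channel result stated above.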
Bell and EPR experiments with signalling data
This paper develops new theoretical frameworks for analyzing Bell inequality and quantum steering experiments when there is some degree of signalling present due to experimental imperfections. The authors create modified tests that can still detect quantum non-classicality even with bounded amounts of signalling, and demonstrate these methods using data from IBM quantum processors and inefficient detector experiments.
Key Contributions
- Extended local hidden variable and local hidden state theories that accommodate bounded signalling
- Development of non-classicality tests robust to experimental imperfections including exact methods and corrected Bell/steering inequalities
View Full Abstract
The no-signalling principle is a fundamental assumption in Bell-inequality and quantum-steering experiments. Nonetheless, experimental imperfections can lead to apparent violations beyond those expected from finite-sample statistics. Here, we propose extensions of local hidden variable and local hidden state theories that allow for bounded, operationally quantifiable, amounts of signalling. We show how non-classicality tests can be developed for these models, both through exact methods based on the full set of observed statistics and through corrections to the standard Bell and steering inequalities. We demonstrate the applicability of these methods via two scenarios that feature apparent signalling: an IBM quantum processor and post-selected data from inefficient detectors.
Spatiotemporal Topological Phase Transition in non-Hermitian Photonic System
This paper demonstrates a method to control light propagation by creating a photonic crystal system that combines spatial and temporal topological effects using non-Hermitian physics, allowing researchers to continuously tune between different topological phases by adjusting loss and coupling parameters.
Key Contributions
- Introduction of a waveguide-assisted non-Hermitian SSH model that unifies spatial and temporal topological phases
- Experimental demonstration of real-time control over topological phase transitions through spatial translation across a graded photonic crystal
View Full Abstract
While energy band topology in spatial photonic crystals (PCs) and momentum-band topology in temporal crystals have each served as powerful probes of topological phases in their respective domains, their unification in a static platform remains unexplored. In this Letter, we bridge this gap by introducing a waveguide assisted non-Hermitian SSH model, in which controlled tuning of loss and coupling drives PT-symmetry breaking and enables a continuous transition between energy- and momentum-gap regimes. This allows us to construct a complete spatiotemporal topological phase diagram in a unified parameter space. By mapping this phase diagram onto a spatially graded PC, we experimentally observe multiple Bloch momentum-band gaps and a continuous spatiotemporal topological transition via translating across the static sample, enabling real-time control over the evolution pathway of the band topology. Our work creates a versatile, bias-free platform for exploring synthetic spacetime physics and opens new avenues for controlling light via non-Hermitian band engineering.
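A minimal numerical illustration of the underlying mechanism, using the textbook lossy SSH Bloch Hamiltonian rather than the waveguide-assisted model introduced in the Letter: placing loss −iγ on one sublattice drives PT-symmetry breaking, and tuning γ against the couplings changes the fraction of the Brillouin zone in the broken phase.

```python
import numpy as np

def ssh_bands(k, v, w, gamma):
    """Complex bands of a lossy SSH Bloch Hamiltonian with loss -i*gamma on one sublattice.
    This is the standard two-band toy model, used here purely as an illustration."""
    H = np.array([[-1j * gamma, v + w * np.exp(-1j * k)],
                  [v + w * np.exp(1j * k), 0.0]])
    return np.linalg.eigvals(H)

v, w = 1.0, 1.0
ks = np.linspace(-np.pi, np.pi, 401)
for gamma in (0.5, 3.0):
    E = np.array([ssh_bands(k, v, w, gamma) for k in ks])
    # In the PT-unbroken region both eigenvalues share the same imaginary part (-gamma/2);
    # a nonzero splitting of the imaginary parts signals the PT-broken phase.
    split_im = np.abs(E[:, 0].imag - E[:, 1].imag)
    frac_broken = np.mean(split_im > 1e-6)
    print(f"gamma = {gamma}: PT-broken fraction of the Brillouin zone = {frac_broken:.2f}")
```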
Quantum-Enhanced Deterministic Inference of $k$-Independent Set Instances on Neutral Atom Arrays
This paper develops a method called deterministic error mitigation (DEM) to better evaluate the performance of noisy quantum annealing experiments on Rydberg atom arrays by accounting for measurement errors and enabling fair comparisons between quantum devices and classical computers.
Key Contributions
- Introduction of deterministic error mitigation (DEM) for shot-level inference from noisy quantum measurements
- Development of entropy-controlled classical postprocessing framework that enables direct cost-based comparison between quantum experiments and classical algorithms
View Full Abstract
Noisy quantum annealing experiments on Rydberg atom arrays produce measurement outcomes that deviate from ideal distributions, complicating performance evaluation. To enable a data-driven benchmarking methodology for quantum devices that accounts for both solution quality and the classical computational cost of inference from noisy measurements, we introduce deterministic error mitigation (DEM), a shot-level inference procedure informed by experimentally characterized noise. We demonstrate this approach using the decision version of the $k$-independent set problem. Within a Hamming-shell framework, the DEM candidate volume is governed by the binary entropy of the bit-flip error rate, yielding an entropy-controlled classical postprocessing cost. Using experimental measurement data, DEM reduces postprocessing overhead relative to classical inference baselines. Numerical simulations and experimental results from neutral atom devices validate the predicted scaling with system size and error rate. These scalings indicate that one hour of classical computation on an Intel i9 processor corresponds to neutral atom experiments with up to $N=250-450$ atoms at effective error rates, enabling a direct, cost-based comparison between noisy quantum experiments and classical algorithms.
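The "entropy-controlled" cost statement can be sanity-checked directly: the number of candidate bitstrings within Hamming distance r = pN of a measured shot grows as roughly 2^(N·H₂(p)), with H₂ the binary entropy. A small sketch (the error rate below is an illustrative value; the actual DEM procedure additionally uses the experimentally characterized noise model):

```python
from math import comb, log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def hamming_ball_volume(N, r):
    """Number of bitstrings within Hamming distance r of a given shot."""
    return sum(comb(N, j) for j in range(r + 1))

p = 0.02                        # effective bit-flip error rate (illustrative)
for N in (100, 250, 450):
    r = max(1, round(p * N))    # shell radius matched to the expected number of flips
    vol = hamming_ball_volume(N, r)
    print(f"N={N:3d}  r={r:2d}  log2(candidates) = {log2(vol):6.1f}   N*H2(p) = {N * h2(p):6.1f}")
```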
Matchgate synthesis via Clifford matchgates and $T$ gates
This paper develops a more efficient method for compiling matchgate quantum circuits (related to non-interacting fermions) by using only matchgate gates instead of the standard Clifford+T approach. The method reduces the compilation problem from exponentially large matrices to smaller 2n×2n matrices and provides both approximate and exact synthesis algorithms.
Key Contributions
- Proves that matchgate-Clifford gates plus T-bar gate are universal for the matchgate group
- Develops efficient synthesis algorithm that reduces compilation complexity from 2^n×2^n to 2n×2n matrices
- Provides exact synthesis method for specific matchgate unitaries with entries in Z[1/√2,i] ring
- Maps exact matchgate synthesis to Boolean satisfiability problem
View Full Abstract
Matchgate unitaries are ubiquitous in quantum computation due to their relation to non-interacting fermions and because they can be used to benchmark quantum computers. Implementing such unitaries on fault-tolerant devices requires first compiling them into a discrete universal gate set, typically Clifford$+T$. Here, we propose a different approach for their synthesis: compile matchgate unitaries using only matchgate gates. To this end, we first show that the matchgate-Clifford group (the intersection of the matchgate and Clifford groups) plus the $\overline{T}$ gate (a $T$ unitary up to a phase) is universal for the matchgate group. Our approach leverages the connection between $n$-qubit matchgate circuits and the standard representation of $\mathbb{SO}(2n)$, which reduces the compilation from $2^n\times 2^n$ unitaries to $2n\times2n$ ones, thus reducing exponentially the size of the target matrix. Moreover, we rigorously show that this scheme is efficient, as an approximation error $\varepsilon_{\mathbb{SO}(2n)}$ incurred in this smaller-dimensional representation translates at most into an $O(n \,\varepsilon_{\mathbb{SO}(2n)})$ error in the exponentially large unitary. In addition, we study the exact version of the matchgate synthesis problem, and we prove that all matchgate unitaries $U$ such that $U\otimes U^*$ has entries in the ring $\mathbb{Z}\big[1/\sqrt 2,i\big]$ can be exactly synthesized by a finite sequence of gates from the matchgate-Clifford$+\overline{T}$ set, without ancillas. We then use this insight to map optimal exact matchgate synthesis to Boolean satisfiability, and compile the circuits that diagonalize the free-fermionic $XX$ Hamiltonian on $n=4,\,8$ qubits.
Hybrid Quantum-Classical Optimization for Multi-Objective Supply Chain Logistics
This paper develops hybrid quantum-classical algorithms to solve real-world supply chain logistics problems by formulating them as quantum optimization problems and testing the approach on IonQ's quantum hardware. The work combines quantum computing subroutines with classical methods to find optimal solutions that minimize cost, emissions, and delivery time simultaneously.
Key Contributions
- Development of hybrid quantum-classical optimization algorithms for multi-objective supply chain problems
- Experimental demonstration on IonQ Aria-1 quantum hardware showing practical application of quantum optimization
View Full Abstract
A multi-objective logistics optimization problem from a real-world supply chain is formulated as a Quadratic Unconstrained Binary Optimization Problem (QUBO) that minimizes cost, emissions, and delivery time, while maintaining target distributions of supplier workshare. The model incorporates realistic constraints, including part dependencies, double sourcing, and multimodal transport. Two hybrid quantum-classical solvers are proposed: a structure-aware informed tree search (IQTS) and a modular bilevel framework (HBS), combining quantum subroutines with classical heuristics. Experimental results on IonQ's Aria-1 hardware demonstrate a methodology to map real-world logistics problems onto emerging combinatorial optimization-specialized hardware, yielding high-quality, Pareto-optimal solutions.
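To make the QUBO step concrete, here is a deliberately tiny, hypothetical instance: one part, three candidate supplier/route options, a weighted sum of cost, emissions, and delivery-time objectives, and a one-hot penalty enforcing that exactly one option is chosen. It illustrates only the modelling step, not the paper's IQTS/HBS solvers or their data; all numbers are made up.

```python
import numpy as np
from itertools import product

# One part, three candidate options; binary variable x_i = "option i is selected".
cost      = np.array([4.0, 6.0, 5.0])
emissions = np.array([3.0, 1.0, 2.0])
time      = np.array([2.0, 2.5, 1.0])
w_cost, w_em, w_time = 1.0, 0.5, 0.8        # objective weights (assumed)
penalty = 20.0                              # one-hot constraint weight

n = 3
Q = np.zeros((n, n))
# Linear objective terms sit on the diagonal of the QUBO matrix (x_i^2 = x_i).
np.fill_diagonal(Q, w_cost * cost + w_em * emissions + w_time * time)
# One-hot constraint (sum_i x_i - 1)^2 expands to x^T (11^T - 2 I) x plus a constant.
Q += penalty * (np.ones((n, n)) - 2 * np.eye(n))

def qubo_energy(x, Q):
    return float(x @ Q @ x)

best = min((qubo_energy(np.array(x), Q), x) for x in product([0, 1], repeat=n))
print("best assignment:", best[1], "  energy:", round(best[0], 2))
```

On quantum hardware the same matrix Q is what gets mapped onto the cost Hamiltonian; here a brute-force scan over the 2³ assignments stands in for the solver.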
Entanglement-enhanced quantum metrology via alternating in-phase and quadrature modulation
This paper introduces a new technique called alternating in-phase and quadrature modulation (AIQM) that improves quantum sensors by cleverly managing when nonlinear interactions help versus hurt measurement precision. The method uses quantum entanglement to achieve better measurement accuracy than conventional approaches, especially when nonlinear effects are strong.
Key Contributions
- Development of AIQM scheme that eliminates detrimental nonlinear interaction effects during signal accumulation while preserving entanglement benefits
- Demonstration of improved metrological performance beyond standard quantum limit without requiring active control of nonlinear interactions
View Full Abstract
Quantum metrology harnesses quantum entanglement to improve measurement precision beyond standard quantum limit. Although nonlinear interaction is essential for generating entanglement, during signal accumulation, it becomes detrimental and therefore must be suppressed. To address this challenge, we propose an alternating in-phase and quadrature modulation (AIQM) scheme, designed to operate under a fixed nonlinear interaction. During signal accumulation, our time-interleaved approach sequentially applies the in-phase and quadrature driving fields, thereby eliminating the effects of nonlinear interaction on signal accumulation. Our AIQM scheme achieves better metrological performance than conventional schemes, particularly under strong nonlinear interaction and prolonged signal accumulation, with pronounced robustness against parameter variations. By selectively eliminating and utilizing nonlinear interactions via AIQM, our work enables high-precision and high-accuracy entanglement-enhanced sensing without the need for active control of the nonlinear interaction.
More on OTOCs and Chaos in Quantum Mechanics -- Magnetic Fields
This paper studies quantum chaos in magnetic billiards by computing thermal out-of-time-order correlators (OTOCs) to characterize how quantum information scrambles over time. The researchers examine how temperature and magnetic field strength affect the transition between chaotic and ordered quantum dynamics in stadium-shaped billiards.
Key Contributions
- Mapping Lyapunov-like exponents as functions of temperature and magnetic field strength
- Demonstrating crossover from quantum chaos to magnetic rigidity using OTOCs
- Introducing guiding-center operator OTOCs that show qualitatively different dynamics
View Full Abstract
We revisit thermal out-of-time-order correlators (OTOCs) in single-particle quantum systems, focusing on magnetic billiards. Using the stadium billiard as a testbed, we compute the thermal OTOC $C_T(t) = -\langle [x(t), p]^2 \rangle_β$ and extract Lyapunov-like exponents $λ_L$ that quantify early-time growth. We map out $λ_L(T, B)$, revealing a crossover from quantum chaos to magnetic rigidity. In parallel, we compute an alternative OTOC built from guiding-center operators, which exhibits qualitatively distinct dynamics and no exponential growth. Our results offer a controlled framework for probing scrambling, temperature dependence, and the interplay of geometry and magnetic fields in quantum systems.
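For readers who want to see how such a thermal OTOC is evaluated in practice, the sketch below computes C_T(t) = −Tr(ρ_β [x(t), p]²) by exact diagonalization in a truncated oscillator basis, using a quartic oscillator as a stand-in Hamiltonian; the magnetic stadium billiard studied in the paper requires a 2D grid discretization that is beyond a short snippet.

```python
import numpy as np

# Truncated harmonic-oscillator basis (hbar = 1).
N = 120
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation operator
x = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

# Stand-in Hamiltonian (assumption): weakly quartic oscillator H = p^2/2 + x^2/2 + g x^4.
g = 0.2
H = p @ p / 2 + x @ x / 2 + g * np.linalg.matrix_power(x, 4)
E, V = np.linalg.eigh(H)
E, V = E[:N // 2], V[:, :N // 2]                # keep low-lying states to limit truncation artifacts

beta = 1.0
w = np.exp(-beta * (E - E[0]))
rho = np.diag(w / w.sum())                      # thermal state, diagonal in the energy basis

xE = V.conj().T @ x @ V                         # operators projected into the energy eigenbasis
pE = V.conj().T @ p @ V

def otoc(t):
    phase = np.exp(1j * E * t)
    x_t = (phase[:, None] * xE) * np.conj(phase)[None, :]   # x(t)_{mn} = e^{i(E_m-E_n)t} x_{mn}
    comm = x_t @ pE - pE @ x_t
    return float(-np.real(np.trace(rho @ comm @ comm)))

for t in (0.0, 1.0, 2.0, 4.0):
    print(f"t = {t:3.1f}   C_T(t) = {otoc(t):.4f}")   # C_T(0) ~ 1, up to truncation error
```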
Gradient Analysis of Barren Plateau in Parameterized Quantum Circuits with multi-qubit gates
This paper develops a theoretical framework to analyze how gradient variance behaves in quantum machine learning circuits that use multi-qubit gates, addressing the barren plateau problem where gradients become too small to enable effective training. The research provides mathematical tools to understand how circuit depth, number of qubits, and gate complexity affect trainability.
Key Contributions
- General theoretical framework for analyzing gradient properties in parameterized quantum circuits with multi-qubit gates
- Analytical results quantifying how gradient variance depends on gate size, qubit count, layer depth, and parameter count
View Full Abstract
The emergence of the Barren Plateau phenomenon poses a significant challenge to quantum machine learning. While most Barren Plateau analyses focus on single-qubit rotation gates, the gradient behavior of Parameterized Quantum Circuits built from multi-qubit gates remains largely unexplored. In this work, we present a general theoretical framework for analyzing the gradient properties of Parameterized Quantum Circuits with multi-qubit gates. Our method generalizes the direct computation framework, bypassing the Haar random assumption on parameters and enabling the calculation of the gradient expectation and variance. We apply this framework to single-layer and deep-layer circuits, deriving analytical results that quantify how gradient variance is co-determined by the size of the multi-qubit gate and the number of qubits, layers, and effective parameters. Numerical simulations validate our findings. Our study provides a refined framework for analyzing and optimizing Parameterized Quantum Circuits with complex multi-qubit gates.
Quantum Dots as Solid-State Sources of Entangled Photon Pairs
This review paper surveys recent advances in using semiconductor quantum dots to generate pairs of entangled photons, examining both traditional and emerging approaches as well as technological improvements needed for practical quantum applications.
Key Contributions
- Comprehensive review of quantum dot-based entangled photon pair generation techniques
- Analysis of transition from biexciton-exciton cascade to spontaneous two-photon emission paradigms
- Assessment of nanophotonic architectures and coherent control strategies for improving source performance
- Discussion of challenges for scaling from laboratory to practical deployment
View Full Abstract
Semiconductor quantum dots (QDs) have emerged as a premier solid-state platform for the deterministic generation of nonclassical light, offering a compelling pathway toward scalable quantum photonic systems. While single-photon emission from QDs has reached a high level of maturity, the realization of high-fidelity entangled photon-pair sources remains an active and rapidly evolving frontier. In this review, we survey the recent progress in QD-based entangled photon sources, highlighting the conceptual evolution from the established biexciton-exciton cascade to the emerging paradigm of spontaneous two-photon emission. We further examine how advances in nanophotonic architectures and coherent control strategies are redefining fundamental performance limits, enabling concurrent improvements in source brightness, coherence, and entanglement fidelity. Finally, we discuss the key physical and technological challenges that must be addressed to bridge the gap between laboratory demonstrations and large-scale deployment. We conclude by outlining the future opportunities for integrating QD-based entangled photon sources into practical quantum communication, computation, and sensing platforms.
Quantum scattering in helically twisted geometries: Coulomb-like interaction and Aharonov-Bohm effect
This paper studies how charged particles scatter in curved, helically twisted space, showing that the problem can be mathematically mapped to a flat 2D Coulomb scattering problem with an additional magnetic flux. The authors derive exact solutions for scattering amplitudes and cross-sections in this twisted geometry.
Key Contributions
- Mathematical mapping of helically twisted geometry scattering to 2D Coulomb+Aharonov-Bohm problem
- Exact analytical solutions for scattering amplitudes and cross-sections in twisted geometries
- Demonstration of consistency between S-matrix pole structure and bound-state quantization
View Full Abstract
We investigate the scattering of a charged quantum particle in a helically twisted background that induces an effective Coulomb-like interaction, in the presence of an Aharonov-Bohm (AB) flux. Starting from the nonrelativistic Schrödinger equation in the twisted metric, we derive the radial equation and show that, after including the AB potential, it can be mapped onto the same Kummer-type differential equation that governs the planar $2D$ Coulomb $+$ AB problem, with a geometry-induced Coulomb strength and the azimuthal quantum number shifted as $m\to m-λ$. We construct the exact scattering solutions, obtain closed expressions for the partial-wave $S$ matrix and phase shifts, and derive the corresponding scattering amplitude, differential cross section, and total cross section. We also show that the pole structure of the $S$ matrix is consistent with the bound-state quantization previously obtained for the helically twisted Coulomb-like problem.
Thermal State Simulation with Pauli and Majorana Propagation
This paper develops a new method for simulating quantum thermal states by evolving from high-temperature states using Pauli and Majorana operator representations in imaginary time. The approach exploits the sparsity of high-temperature thermal states in these bases to efficiently compute thermal properties of quantum many-body systems.
Key Contributions
- Novel propagation-based thermal state simulation using Pauli and Majorana bases
- Analytic error bounds for truncation strategies in imaginary-time evolution
- Numerical validation on complex many-body systems including J1-J2 and Hubbard models
View Full Abstract
We introduce a propagation-based approach to thermal state simulation by adapting Pauli and Majorana propagation to imaginary-time evolution in the Schrödinger picture. Our key observation is that high-temperature states can be sparse in the Pauli or Majorana bases, approaching the identity at infinite temperature. By formulating imaginary-time evolution directly in these operator bases and evolving from the maximally mixed state, we access a continuum of temperatures where the state remains efficiently representable. We provide analytic guarantees for small-coefficient truncation and Pauli-weight (Majorana-length) truncation strategies by quantifying the error growth and the impact of backflow. Large-scale numerics on the 1D J1-J2 model (energies) and the triangular-lattice Hubbard model (static correlations) validate efficiency at high temperatures.
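The evolution being propagated can be stated compactly: start from the (unnormalized) infinite-temperature state and integrate the imaginary-time equation of motion directly in the Pauli basis (shown schematically below; the Majorana case is analogous), truncating coefficients that fall below a threshold or Pauli strings that exceed a weight cut-off:

```latex
\rho(0) = \mathbb{1}, \qquad
\frac{d\rho}{d\beta} = -\tfrac{1}{2}\{H,\rho\}
\;\;\Longrightarrow\;\; \rho(\beta) = e^{-\beta H}, \\[6pt]
\rho(\beta) = \sum_{P \in \{\mathbb{1},X,Y,Z\}^{\otimes n}} c_P(\beta)\, P,
\qquad
\frac{d c_P}{d\beta} = -\tfrac{1}{2} \sum_{Q} c_Q(\beta)\, \bigl[\{H,Q\}\bigr]_{P},
```

where [·]_P denotes the coefficient of the Pauli string P. The truncation steps are where the analytic error bounds mentioned above enter.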
Requirements for Teleportation in an Intercity Quantum Network
This paper analyzes the hardware requirements for quantum teleportation across intercity-scale quantum networks, developing analytical expressions to determine what improvements beyond current technology are needed to achieve reliable quantum communication over metropolitan and long-distance connections.
Key Contributions
- Derived closed-form analytical expressions for teleportation fidelity and rate in heterogeneous quantum networks with memory cut-offs
- Formulated hardware requirements as optimization problems to identify minimal improvements needed beyond state-of-the-art for intercity quantum teleportation
- Demonstrated that metropolitan-scale teleportation is achievable with current hardware while intercity scales require plausible but additional improvements
View Full Abstract
We investigate the hardware requirements for quantum teleportation in an intercity-scale network topology consisting of two metropolitan-scale networks connected via a long-distance backbone link. Specifically, we identify the minimal improvements required beyond the state-of-the-art to achieve an end-to-end expected teleportation fidelity of $2/3$, which represents the classical limit. To this end, we formulate the hardware requirements computation as optimisation problems, where the hardware parameters representing the underlying device capabilities serve as decision variables. Assuming a simplified noise model, we derive closed-form analytical expressions for the teleportation fidelity and rate when the network is realised using heterogeneous quantum hardware, including a quantum repeater chain with a memory cut-off. Our derivations are based on events defined by the order statistics of link generation durations in both the metropolitan networks and the backbone, and the resulting expressions are validated through simulations on the NetSquid platform. The analytical expressions facilitate efficient exploration of the optimisation parameter space without resorting to computationally intensive simulations. We then apply this framework to a representative realisation in which the metropolitan nodes are based on trapped-ion processors and the backbone is composed of ensemble-based quantum memories. Our results suggest that teleportation across metropolitan distances is already achievable with state-of-the-art hardware when the data qubit is prepared after end-to-end entanglement has already been established, whereas extending teleportation to intercity scales requires additional, though plausibly achievable, improvements in hardware performance.
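A useful rule of thumb behind the 2/3 target: if the end-to-end pair behaves like a Werner state with singlet fidelity F_link, the average teleportation fidelity obeys the standard relation below (valid for depolarizing-type noise; the paper's closed-form expressions additionally account for memory decoherence, cut-offs, and link-generation order statistics):

```latex
F_{\mathrm{tel}} \;=\; \frac{2\,F_{\mathrm{link}} + 1}{3},
\qquad
F_{\mathrm{tel}} > \tfrac{2}{3} \;\Longleftrightarrow\; F_{\mathrm{link}} > \tfrac{1}{2}.
```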
Real and momentum space analysis of topological phases in 2D d-wave altermagnets
This paper studies a new type of magnetic material called altermagnets that combine properties of ferromagnets and antiferromagnets, analyzing their topological phases and proposing their use in spin-based electronic devices. The researchers demonstrate how these materials can exhibit controllable spin-dependent transport properties and propose a novel transistor design.
Key Contributions
- Comprehensive theoretical analysis of topological phases in 2D d-wave altermagnets using real and momentum space methods
- Proposal of a topological altermagnetic field-effect transistor with electrically controllable spin-polarized transport
- Development of information-theoretic framework using fidelity-susceptibility and inverse participation ratio for analyzing edge state topology
View Full Abstract
Altermagnetism has recently emerged as a third fundamental branch of magnetism, combining the vanishing net magnetization of antiferromagnets with the high-momentum-dependent spin splitting of ferromagnets. This study provides a comprehensive real- and momentum-space analysis of topological phases in two-dimensional d-wave altermagnets. By employing a tight-binding Hamiltonian, we characterize the topological phase transition occurring at a critical intra-sublattice hopping strength ($t_a^C$). We examine the emergence of Dirac nodal points and the resulting Berry curvature singularities, supported by a visual analysis of pseudospin texture winding. Crucially, we analyze the spin splitting and effective altermagnetic strength, and investigate the transport implications of these phases, uncovering giant conductivity anisotropy and spin-dependent "steering" effects driven by group velocity distribution across the Fermi surface. Beyond bulk properties, we analyze the edge state topology in ribbon geometries through the lens of information-theoretic markers like fidelity-susceptibility and inverse participation ratio, offering an alternative to traditional Chern number calculations. Our results demonstrate that the hybridization of edge states in ultra-narrow nanoribbons opens a controllable energy gap, a feature we exploit to propose a novel topological altermagnetic field-effect transistor design where ballistic and spatially spin-polarized transport can be electrostatically gated. This work establishes a theoretical and information-theoretic framework for "edgetronics" in altermagnetic materials, paving the way for next-generation, high-speed spintronic and "spin-splitter" logic devices and architectures.
Correspondence between classical and quantum resonances
This paper studies how classical resonances in molecular systems correspond to quantum mechanical effects by analyzing the CN-Li↔Li-CN isomerization system. The researchers use correlation diagrams plotting energy versus Planck's constant to identify quantum resonances that manifest as avoided crossings, showing how these connect to classical bifurcation energies in the semiclassical limit.
Key Contributions
- Demonstration of correspondence between classical resonance bifurcations and quantum avoided crossings in molecular systems
- Development of semiclassical theory connecting classical and quantum resonances through correlation diagrams versus Planck constant
View Full Abstract
Bifurcations take place in molecular Hamiltonian nonlinear systems as the excitation energy increases, thus leading to the appearance of different classical resonances. In this paper, we study the quantum manifestations of these classical resonances in the isomerizing system CN-Li$\leftrightarrows$Li-CN. By using a correlation diagram of eigenenergies versus Planck constant, we show the existence of different series of avoided crossings, leading to the corresponding series of quantum resonances, which represent the quantum manifestations of the classical resonances. Moreover, the extrapolation of these series to $\hbar=0$ unveils the correspondence between the bifurcation energy of classical resonances and the energy of the series of quantum resonances in the semiclassical limit $\hbar\to0$. Additionally, we develop a semiclassical theory in order to obtain analytical expressions for our results.
Dynamical Quantum Phase Transitions in Boundary Time Crystals
This paper studies dynamical quantum phase transitions in boundary time crystals, where quantum systems exhibit time-periodic behavior in their steady states. The researchers analyze how these systems behave when driven across phase transitions using different protocols and find characteristic signatures in the quantum fidelity that distinguish different phases.
Key Contributions
- Demonstration of dynamical quantum phase transitions in boundary time crystal systems
- Analysis of fidelity-based diagnostics for detecting phase transitions in dissipative quantum systems
- Finite-size scaling analysis showing distinct power-law behaviors for different driving protocols
View Full Abstract
We demonstrate the existence of a dynamical quantum phase transition (DQPT) in a dissipative collective-spin model that exhibits the boundary time crystal (BTC) phase. We initialize the system in the ground state of the Hamiltonian in either the BTC or the non-BTC phase, and drive it across the BTC transition. The driving is done by an abrupt quench or by a finite-time linear ramp of a Hamiltonian control parameter under Markovian Lindblad dynamics. We diagnose DQPTs through zeros of the fidelity-based Loschmidt echo between the initial state and the evolving mixed state, which induce nonanalytic cusp-like features in the associated rate function. For quenches into the BTC phase, the Loschmidt echo exhibits repeated zeros due to the emergent time-periodic steady state, whereas for quenches into the non-BTC phase, the overlap vanishes and remains zero once the dynamics relaxes to a stationary state. We further show that the DQPT persists under the ramp protocol followed by unitary evolution with the final Hamiltonian. Finally, we analyze the finite-size scaling of the first critical time and find convergence to a constant in the thermodynamic limit, with distinct power-law approaches for the quench and the ramp protocols.
Quantifying the Operational Cost of Multipartite Entanglement
This paper develops a new method to quantify multipartite entanglement in quantum systems by measuring the maximum bipartite entanglement within subsystems of different sizes. The approach connects the theoretical structure of entangled states to their experimental creation cost, showing that k-partite entangled states require at least k-1 two-particle entangling gates to create.
Key Contributions
- Introduces a generic method to quantify k-partite entanglement by maximizing bipartite entanglement measures within subsystems
- Establishes connection between entanglement structure and experimental cost, proving k-partite states require at least k-1 two-particle gates
- Analytically calculates k-partite entanglement of formation for important state classes including W states
View Full Abstract
Multipartite entanglement determines the strength and range of interactions in many-body quantum systems. Yet, it is hard to evaluate, due to the complex structures of quantum states. Here, we introduce a generic method to quantify the k ≤ N-partite entanglement of an N-particle system, by maximizing an arbitrary bipartite entanglement measure within subsystems of size up to k. The resulting classification of multipartite states captures their experimental cost: creating a k-partite entangled state requires at least k-1 two-particle entangling gates. Further, we analytically calculate the newly defined k-partite entanglement of formation, which generalizes an important bipartite entanglement measure, in several classes of states, including the W states of any dimension.
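To make the bipartite building block concrete, here is a minimal NumPy calculation of the entanglement entropy of the three-qubit W state across the cut separating qubit 0 from qubits 1 and 2; this is only the kind of bipartite quantity the paper's k-partite measure maximizes over subsystems, not its entanglement-of-formation result.

```python
# Entanglement entropy of the 3-qubit W state across the cut (qubit 0 | qubits 1,2).
import numpy as np

w = np.zeros(8)
for basis_state in (0b001, 0b010, 0b100):    # |001>, |010>, |100>
    w[basis_state] = 1.0 / np.sqrt(3.0)

psi = w.reshape(2, 4)                        # rows: qubit 0; columns: qubits 1,2
rho_A = psi @ psi.conj().T                   # reduced state of qubit 0
evals = np.linalg.eigvalsh(rho_A)
evals = evals[evals > 1e-12]
entropy = -np.sum(evals * np.log2(evals))
print(f"S(qubit 0) = {entropy:.3f} bits")    # ~0.918 bits for the W state
```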
Generalized quantum theory for accessing nonlinear systems: the case of Levinson-Smith equations
This paper explores connections between a generalized quantum mechanics framework and nonlinear differential equation systems, specifically the Levinson-Smith and Liénard equations. The authors analyze stability properties, derive solutions involving Jacobi elliptic functions, and identify soliton-like solutions emerging from these quantum-nonlinear system connections.
Key Contributions
- Connection between generalized quantum mechanics and Levinson-Smith nonlinear differential equations
- Analysis of stability conditions and closed-form solutions for Liénard equations in quantum context
- Identification of solitonic-like solutions from level surface conditions
View Full Abstract
Motivated by a recently developed generalized scheme of quantum mechanics, we touch upon connections with Levinson-Smith classes of nonlinear systems that contain as a particular case the Liénard family of differential equations. The latter, which has coefficients of even and odd symmetry, admits a closed form solution when converted to the Abel form. Analysis of the governing condition shows that one of the nontrivial equilibrium points is stable in character. Other classes of differential equations that we encounter speak of solutions involving Jacobi elliptic functions for a certain combination of underlying parameters, while, for a different set, relevance to position-dependent mass systems is shown. In addition, an interesting off-shoot of our results is the emergence of solitonic-like solutions from the condition of the level surface in the system.
Ising-Induced Spectral Broadening Resolves the Relaxation Bottleneck in Superradiant Masers
This paper explains why superradiant masers exhibit unexpectedly slow relaxation times by showing that strong Ising interactions create spectral broadening that suppresses energy transport mechanisms. The authors develop an analytical theory that matches experimental observations of microsecond dynamics in dense spin systems.
Key Contributions
- Identified Ising-induced spectral broadening as the cause of relaxation bottlenecks in superradiant masers
- Developed parameter-free analytical theory that quantitatively reproduces experimental microsecond dynamics
View Full Abstract
The recent observation of self-induced superradiant masing [W. Kersten et al., Nat. Phys. 22, 158 (2026)] revealed a collective relaxation timescale significantly slower than predicted by standard coherent transport models. Here, we elucidate the microscopic origin of this "relaxation bottleneck." We show that in the high-density regime relevant to the experiment, diagonal Ising interactions -- often treated as perturbative -- generate profound inhomogeneous broadening that exceeds the intrinsic single-particle dephasing. This intense diagonal disorder suppresses resonant flip-flop exchange, effectively renormalizing the density of states available for spectral diffusion. Our parameter-free analytic theory quantitatively reproduces the experimentally observed microsecond dynamics, identifying Ising-induced broadening as the governing mechanism for energy transport in dense solid-state spin ensembles.
Enabling large-scale digital quantum simulations with superconducting qubits
This doctoral thesis explores methods to improve quantum simulation using superconducting qubits by developing hardware innovations, noise modeling techniques, error mitigation strategies, and algorithmic improvements to overcome current device limitations.
Key Contributions
- Hardware-level innovations for superconducting qubit quantum simulators
- Refined noise modeling and error mitigation techniques
- Algorithmic improvements through efficient measurement processing
View Full Abstract
Quantum computing promises to revolutionize several scientific and technological domains through fundamentally new ways of processing information. Among its most compelling applications is digital quantum simulation, where quantum computers are used to replicate the behavior of other quantum systems. This could enable the study of problems that are otherwise intractable on classical computers, transforming fields such as quantum chemistry, condensed matter physics, and materials science. Despite this potential, realizations of practical quantum advantage for relevant problems are hindered by imperfections of current devices. This also affects quantum hardware based on superconducting circuits which is among the most advanced and scalable platforms. The envisaged long-term solution of fault-tolerant quantum computers that correct their own errors remains out of reach mainly due to the associated qubit number overhead. As a result, the field has developed strategies that combine quantum and classical resources, exploit hardware-native operations, and employ error mitigation techniques to extract meaningful results from noisy data. This doctoral thesis contributes to this broader effort by exploring methods for advancing quantum simulation across the full computational stack, including hardware-level innovations, refined techniques for noise modeling and error mitigation, and algorithmic improvements enabled by efficient measurement processing.
Quantum Advantage in Decision Trees: A Weighted Graph and $L_1$ Norm Approach
This paper develops a new mathematical framework for analyzing single-query quantum algorithms by representing them as weighted graphs, which helps determine when quantum algorithms can outperform classical ones. The authors use this approach to identify functions where quantum algorithms provide exponential speedups and establish conditions for quantum advantage.
Key Contributions
- Novel weighted graph formulation for single-query quantum decision trees
- Mathematical framework using L1 spectral norm to analyze quantum advantage
- Identification of functions with exponential quantum speedup
- Necessary conditions linking quantum advantage to measurement projector dimensions
View Full Abstract
The analysis of the computational power of single-query quantum algorithms is important because they must extract maximal information from one oracle call, revealing fundamental limits of quantum advantage and enabling optimal, resource-efficient quantum computation. This paper proposes a formulation of single-query quantum decision trees as weighted graphs. This formulation has the advantage that it facilitates the analysis of the $L_1$ spectral norm of the algorithm output. This advantage is based on the fact that a high $L_1$ spectral norm of the output of a quantum decision tree is a necessary condition to outperform its classical counterpart. We propose heuristics for maximizing the $L_{1}$ spectral norm, show how to combine weighted graphs to generate sequences with strictly increasing norm, and present functions exhibiting exponential quantum advantage. Finally, we establish a necessary condition linking single-query quantum advantage to the asymptotic growth of measurement projector dimensions.
Pre-optimization of quantum circuits, barren plateaus and classical simulability: tensor networks to unlock the variational quantum eigensolver
This paper develops a method using 2D tensor networks to pre-optimize quantum circuits for finding ground states of quantum systems, specifically addressing the barren plateau problem that makes variational quantum algorithms difficult to train. The approach shows promise for identifying when quantum hardware might outperform classical simulation methods.
Key Contributions
- Development of tensor network pre-optimization method to mitigate barren plateaus in variational quantum eigensolvers
- Identification of scaling regimes where quantum hardware offers advantages over classical tensor network simulations
View Full Abstract
Variational quantum algorithms are practical approaches to prepare ground states, but their potential for quantum advantage remains unclear. Here, we use differentiable 2D tensor networks (TN) to optimize parameterized quantum circuits that prepare the ground state of the transverse field Ising model (TFIM). Our method enables the preparation of states with high energy accuracy, even for large systems beyond 1D. We show that TN pre-optimization can mitigate the barren plateau issue by giving access to enhanced gradient zones that do not shrink exponentially with system size. We evaluate the classical simulation cost of evaluating energies at these warm-starts, and identify regimes where quantum hardware offers better scaling than TN simulations.
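For context on what such a warm-started optimization is benchmarked against, the snippet below computes the exact ground-state energy of a small open-boundary 1D TFIM chain by dense diagonalization; the paper itself targets larger systems beyond 1D via 2D tensor networks, so this is only an illustrative reference calculation.

```python
# Exact ground-state energy of a small 1D transverse-field Ising chain,
# H = -J sum Z_i Z_{i+1} - h sum X_i (open boundaries). Illustration only.
import numpy as np
from functools import reduce

def kron_all(ops):
    return reduce(np.kron, ops)

def tfim_hamiltonian(n, J=1.0, h=1.0):
    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):                   # nearest-neighbour ZZ couplings
        ops = [I] * n; ops[i] = Z; ops[i + 1] = Z
        H -= J * kron_all(ops)
    for i in range(n):                       # transverse field
        ops = [I] * n; ops[i] = X
        H -= h * kron_all(ops)
    return H

n = 8
e0 = np.linalg.eigvalsh(tfim_hamiltonian(n))[0]
print(f"exact E0 per site (n={n}): {e0 / n:.6f}")
```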
Pure narrowband photon-pair generation in a monolithic cavity
This paper presents a quantum light source that generates very pure single photons using a specialized cavity design. The system can produce single photons with high efficiency (84%) and very low contamination, which is crucial for quantum technologies that need reliable single-photon sources.
Key Contributions
- Demonstrated 84% heralding efficiency with less than 3% multi-photon contamination
- Achieved 96.2% spectral purity in a monolithic cavity design
- Produced narrowband photons at telecom wavelength (1540 nm) with 168 MHz bandwidth
View Full Abstract
Photonic quantum technologies require efficient sources of pure single photons. Here we present a heralded SPDC single-photon source in a monolithic cavity optimized for high spectral and spatial purity. The source heralds single photons at a wavelength of 1540 nm and a spectral bandwidth of 168 MHz with a maximum heralding efficiency of 84%, while keeping the multi-photon contamination below 3%. The cavity enhancement generates photons mainly in the central cavity mode with 96.2% spectral purity.
On the emergence of classical stochasticity
This paper investigates how classical random behavior emerges from quantum mechanical systems by analyzing Pauli-type master equations. The authors examine the logical requirements for classical stochastic reasoning and demonstrate this emergence through examples of particles in the ultradecoherence limit.
Key Contributions
- Clarifies the logical structure required for classical stochastic reasoning to emerge from quantum systems
- Demonstrates quantum-to-classical transition through examples in the ultradecoherence limit
View Full Abstract
We examine the logical structure of the emergence of classical stochasticity for a quantum system governed by a Pauli-type master equation. It is well-known that while such equations describe the evolution of probabilities, they do not automatically justify classical reasoning based on the assumption that the system exists in a definite state at intermediate times. On the other hand, we show that this assumption is crucial for the standard calculation of stochastic times such as the persistent time and the time of first arrivals. We then consider examples of single particles, bosons, and fermions in the so-called ultradecoherence limit to illustrate how classical stochasticity may emerge from quantum mechanics.
Dicke Superradiance in Extended 2D Quantum Arrays Coupled to Metasurface Bound States in the Continuum
This paper proposes using specially designed optical surfaces called metasurfaces to make groups of quantum emitters (like atoms) work together more effectively over long distances, enhancing their collective light emission through a phenomenon called Dicke superradiance.
Key Contributions
- Demonstration that bound-states-in-the-continuum can mediate superradiant interactions over multi-wavelength distances
- Analysis of conditions for reaching the idealized Dicke limit in extended 2D quantum emitter arrays
View Full Abstract
Dicke superradiance is a collective phenomenon where the emission from ensembles of quantum emitters is coherently enhanced beyond the sum of each emitter's independent emission. Here, we propose a platform that exploits the delocalised nature of a high-Q, non-local mode supported by a dielectric metasurface (a so-called bound-state-in-the-continuum or BIC) to induce superradiant behaviour within an extended two-dimensional array of distant quantum emitters. We show that these BIC-mediated emitter interactions can span several wavelengths, thus overcoming the traditional subwavelength separation between emitters required in free space. We further show that reaching the idealised Dicke limit is possible in this system, provided that the emitters are coupled to the BIC mode efficiently enough, as quantified through the $β$-factor. Moreover, we demonstrate its experimental viability by analysing its robustness to realistic experimental imperfections. This work puts forward optical metasurfaces supporting BICs as a physically viable platform for realising the upper limits of cooperative emission in physically extended quantum emitter arrays.
Entanglement improves coordination in distributed systems
This paper demonstrates that quantum entanglement can improve coordination in distributed computing systems by enabling better task scheduling between two servers without communication delays. The authors show that entangled quantum states allow for superior performance compared to classical strategies when processing baseline tasks and customer requests.
Key Contributions
- Rigorous analytical proof that entanglement-assisted routing achieves Pareto-superior performance over classical communication-free strategies for convex baseline task throughput functions
- Novel application of quantum entanglement to distributed system coordination and scheduling as a practical use case for quantum networks
View Full Abstract
Coordination in distributed systems is often hampered by communication latency, which degrades performance. Quantum entanglement offers fundamentally stronger correlations than classically achievable without communication. Crucially, these correlations manifest instantaneously upon measurement, irrespective of the physical distance separating the systems. We investigate the application of shared entanglement to a dual-work optimization problem in a distributed system comprising two servers. The system must process both a continuously available, preemptible baseline task and incoming customer requests arriving in pairs. System performance is characterized by the trade-off between baseline task throughput and customer waiting time. We present a rigorous analytical model demonstrating that when the baseline task throughput function is strictly convex, rewarding longer uninterrupted processing periods, entanglement-assisted routing strategies achieve Pareto-superior performance compared to optimal communication-free classical strategies. We prove this advantage through queueing-theoretic analysis, non-local game formulation, and computational certification of classical bounds. Our results identify distributed scheduling and coordination as a novel application domain for near-term entanglement-based quantum networks.
Restoring Landauer's Principle for Unitarily Transformed Thermal Reservoirs
This paper resolves an apparent violation of Landauer's principle (which relates information erasure to energy dissipation) when thermal reservoirs are replaced with squeezed thermal states. The authors develop a generalized framework that extends the principle to these unitarily transformed states and demonstrate it using a moving quantum detector coupled to a quantum field.
Key Contributions
- Generalized Landauer inequality for squeezed thermal states with rigorous mathematical framework
- Resolution of apparent thermodynamic principle violations in non-equilibrium quantum systems
View Full Abstract
Landauer's principle, a cornerstone of quantum information and thermodynamics, appears to be violated when the thermal reservoir is replaced by a squeezed thermal state (STS). We introduce a formal extension of the principle to such unitarily transformed thermal states. By defining an effective Hamiltonian, we rigorously establish a generalized Landauer inequality, which naturally reduces to the standard case for an ordinary thermal reservoir as a special instance. The framework further yields a consistent definition of entropy production and a proof of its non-negativity. We illustrate its utility by studying an arbitrarily moving Unruh-DeWitt detector coupled to a quantum field initially prepared in the STS. Using perturbation theory, we compute the entropy production explicitly, confirming its positivity. As a result of the symmetry breaking induced by the unitary transformation, it depends on both the proper time interval and the absolute spacetime position. Our work resolves the apparent violation of Landauer's principle with STSs. It also provides a robust tool for analyzing quantum thermodynamics in non-equilibrium and relativistic settings, with potential implications for quantum thermal machines and information protocols.
Locally Gentle State Certification for High Dimensional Quantum Systems
This paper investigates quantum state certification using 'gentle' measurements that minimally disturb the quantum state, allowing samples to be reused. The authors derive fundamental limits showing that maintaining low disturbance requires significantly more samples, with the penalty scaling as d/α² where d is the Hilbert space dimension and α is the allowed disturbance level.
Key Contributions
- Derived minimax sample complexity for locally-gentle quantum state certification with explicit measurement operators
- Established fundamental trade-off between measurement gentleness and sample efficiency, showing sample penalty scales as d/α² rather than d²/α²
View Full Abstract
Standard approaches to quantum statistical inference rely on measurements that induce a collapse of the wave function, effectively consuming the quantum state to extract information. In this work, we investigate the fundamental limits of \emph{locally-gentle} quantum state certification, where the learning algorithm is constrained to perturb the state by at most $α$ in trace norm, thereby allowing for the reuse of samples. We analyze the hypothesis testing problem of distinguishing whether an unknown state $ρ$ is equal to a reference $ρ_0$ or $ε$-far from it. We derive the minimax sample complexity for this problem, quantifying the information-theoretic price of non-destructive measurements. Specifically, by constructing explicit measurement operators, we show that the constraint of $α$-gentleness imposes a sample size penalty of $\frac{d}{α^2}$, yielding a total sample complexity of $n = Θ(\frac{d^3}{ε^2 α^2})$. Our results clarify the trade-off between information extraction and state disturbance, and highlight deep connections between physical measurement constraints and privacy mechanisms in quantum learning. Crucially, we find that the sample size penalty incurred by enforcing $α$-gentleness scales linearly with the Hilbert-space dimension $d$ rather than the number of parameters $d^2-1$ typical for high-dimensional private estimation.
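Since $Θ(\cdot)$ hides constant factors, the stated complexity is best read as a scaling law. The snippet below plugs illustrative numbers into $n \sim d^3/(ε^2 α^2)$ to show how quickly the gentleness penalty $d/α^2$ grows as the allowed disturbance shrinks; only ratios between rows are meaningful.

```python
# Relative scaling of the locally-gentle certification sample complexity
# n ~ d**3 / (eps**2 * alpha**2); Theta(.) hides constants, so compare rows only.
d, eps = 16, 0.1                       # Hilbert-space dimension, distinguishing radius
for alpha in (1.0, 0.5, 0.1, 0.01):    # allowed trace-norm disturbance per sample
    n_scale = d**3 / (eps**2 * alpha**2)
    print(f"alpha = {alpha:5.2f}  ->  n scales like {n_scale:.2e}")
```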
Effect of initial intrasystem entanglement on entropy growth in generalized Jaynes-Cummings models
This paper studies how initial quantum entanglement between atomic subsystems affects entropy production when these systems interact with photonic environments in extended Jaynes-Cummings models. The researchers find a consistent positive correlation between initial entanglement and entropy growth across various system configurations and initial state ensembles.
Key Contributions
- Demonstrated positive correlation between initial intrasystem entanglement and entropy growth in atom-photon systems
- Characterized entropy dynamics across multiple ensemble types including Haar-random states and fixed energy/mixedness conditions
View Full Abstract
We investigate how initial intrasystem entanglement influences the entropy generated in atomic systems interacting with a photonic environment in several generalizations of the Jaynes-Cummings model with two or more subsystems. Since the initial entanglement does not uniquely determine the final entropy, we focus on ensemble-averaged behavior. We consider ensembles of initial system states including pure and mixed Haar-random states, ensembles with fixed average energy or fixed mixedness, and varying initial photon numbers in the environment. In all cases, we observe a positive correlation between the initial entanglement and the entropy growth, although the fractional contribution of the initial entanglement varies. Our results emphasize the role of intrasystem correlations as a factor contributing to entropy growth in quantum informational processes.
Thermodynamic Cost of Regeneration in a Quantum Stirling Cycle
This paper analyzes quantum Stirling heat engines and shows that previously reported super-Carnot efficiencies disappear when accounting for the thermodynamic cost of the regeneration process. The authors demonstrate that including regeneration costs keeps efficiency below the Carnot bound while still outperforming conventional Stirling cycles.
Key Contributions
- Corrects previous claims of super-Carnot efficiency in regenerative quantum Stirling cycles by accounting for regeneration costs
- Provides rigorous thermodynamic bounds for quantum heat engines using quantum relative entropy
View Full Abstract
We study the regenerative quantum Stirling heat engine cycle within the standard weak-coupling, Markovian open quantum system framework. We point out that the regeneration process is not thermodynamically free in a reduced open-system description, and we treat the required work input as an explicit regeneration cost by modifying the cycle efficiency accordingly. We consider two working substances--a single spin-$1/2$ and a pair of interacting spin-$1/2$ particles--and investigate the cycle performance by taking the regeneration cost at its minimum value set by the Carnot heat-pump limit. For comparison, we also analyze the conventional Stirling cycle without regeneration under the same conditions. The super-Carnot efficiencies reported under the cost-free regeneration assumption disappear once the regeneration cost is included: the modified efficiency stays below the Carnot bound, while still remaining higher than the efficiency of the conventional Stirling cycle. For the conventional Stirling cycle, we provide a rigorous Carnot bound using quantum relative entropy, whereas for the regenerative cycle we derive a sufficient lower bound on the regeneration cost that guarantees thermodynamic consistency. Finally, we suggest three candidate quantum regenerator models for future work.
A simple means for deriving quantum mechanics
This paper presents a new interpretation of quantum mechanics where particles follow continuous, deterministic paths with classical properties like definite position and momentum, yet reproduces all observable predictions of standard quantum mechanics including entanglement and spin. The authors claim this provides an intuitive framework that connects various quantum mechanical interpretations and extends to relativistic mechanics.
Key Contributions
- Novel interpretation of quantum mechanics with deterministic particle trajectories
- Unified framework connecting Bohmian mechanics, many-worlds, and other quantum interpretations
- Extension to relativistic quantum mechanics
View Full Abstract
A type of mechanics will be presented that possesses some distinctive properties. On the one hand, its physical description and rules of operation are readily comprehensible and intuitively clear. On the other, it fully satisfies all observable predictions of non-relativistic quantum mechanics. Within it, particles exist at points in space, follow continuous, piecewise differentiable paths, and their linear momentum is equal to their mass times their velocity along their path. Yet the probabilities for position and momentum, conditioned on the state of the particle's environment, follow the rules of quantum theory. Indeed, all observable consequences of quantum theory are satisfied; particles can be entangled, have intrinsic spin, this spin is not local to the particle, particle identity can affect probabilities, and so forth. All the rules of quantum mechanics are obeyed, and all arise in a straightforward fashion. After this is established, connections will be drawn out between this type of mechanics and other types of quantum worlds; those that obey Bohmian mechanics, stochastic mechanics, the many worlds interpretation, and physical collapse. In the final section, a relativistic version of the mechanics will be presented.
Squeezing-Enhanced Rotational Doppler Metrology
This paper develops a quantum protocol for measuring the angular velocity of rotating surfaces using the rotational Doppler effect. The method uses squeezed Laguerre-Gaussian light beams to achieve better precision than classical approaches, demonstrating quantum-enhanced metrology for rotation sensing.
Key Contributions
- Development of squeezing-enhanced quantum protocol for rotational Doppler metrology achieving Heisenberg scaling
- Demonstration that optimized quantum strategy outperforms classical methods even in the presence of noise
View Full Abstract
A rotating surface can induce a frequency shift in incident light by changing its angular momentum, a phenomenon known as the rotational Doppler effect. This effect provides a means to estimate the angular velocity of the rotating surface. In this work, we develop a continuous-variable quantum protocol for estimating the angular velocity of a rotating surface via the rotational Doppler effect. Our approach exploits squeezed and displaced Laguerre-Gaussian modes as quantum resources, which interact with a rotating metallic disc with surface roughness. The frequency shift induced by the rotational Doppler effect is then measured using a homodyne detection scheme. By analyzing the Fisher information, we demonstrate that the proposed squeezing-enhanced protocol achieves Heisenberg scaling in the ideal noiseless regime. Furthermore, we investigate the influence of noise and consider different surface models to assess their impact on the protocol's performance. While Heisenberg scaling is degraded in the presence of noise, we show that optimizing the energy allocation ratio between displacement and squeezing of the probe ensures that the quantum strategy consistently outperforms its classical counterpart.
PhoQuPy: A Python framework for Automation of Quantum Optics experiments
This paper presents PhoQuPy, a Python-based automation framework for quantum optics experiments that identifies and characterizes single-photon emitters in quantum materials using confocal photoluminescence scanning, lifetime measurements, and autocorrelation analysis. The system also automates other quantum optics setups including non-linear interferometry for quantum imaging and Fourier transform imaging spectroscopy.
Key Contributions
- Development of automated Python framework for single-photon emitter characterization
- Integration of multiple quantum optics experimental setups including HBT measurements and quantum imaging
- Implementation of galvo-mirror scanning for cryogenic temperature measurements
View Full Abstract
We present the automation of a confocal photoluminescence (PL) scanning system for the identification and characterization of single-photon emitters (SPEs) in quantum materials. The setup excites the sample with a laser and acquires a spectrum at each spatial coordinate in a raster scan pattern. A double-acquisition method is used to remove cosmic ray artifacts by comparing subsequent measurements at the same spatial coordinate. Once identified, the emitter is further characterized via an HBT setup, performing lifetime as well as second-order autocorrelation g(2) measurements to confirm single-photon emission. The system integrates Python-based hardware control for motorized stages, spectrometer acquisition, and post-processing, with a migration to a galvo-mirror scanning approach so it can be used with a cryostat for low-temperature measurements. Our results demonstrate spatially resolved PL maps and temperature-dependent spectra, highlighting the capability of the setup to efficiently benchmark SPE performance. We further automated other experiments such as a Non-Linear Interferometry setup for Quantum Imaging with Undetected Light and a Fourier Transform Imaging Spectroscopy setup using a common path birefringence Interferometer to obtain hyperspectral images of our samples.
From Florence to Fermions: a historical reconstruction of the origins of Fermi's statistics one hundred years later
This paper provides a historical analysis of how Enrico Fermi developed his quantum statistics for fermions, tracing his intellectual journey from early interests in entropy to applying Pauli's exclusion principle to non-interacting particle systems. It examines the conceptual steps that led to Fermi-Dirac statistics one hundred years after its formulation.
Key Contributions
- Historical reconstruction of Fermi's development of quantum statistics
- Analysis of the conceptual leap from atomic electron dynamics to general fermion systems
View Full Abstract
The aim of this paper is to retrace the path that led the young Enrico Fermi to write his paper on the statistics of an ideal monatomic gas. This discovery originated in his interest, which he had shown since his formative years, in the absolute entropy constant and in the problems he highlighted in Sommerfeld's quantization in the case of identical particle systems. The fundamental step taken by Fermi in writing his work on statistics was to apply the Exclusion Principle, formulated for electrons in an atom and which could therefore have been a pure effect due to dynamics, to a system of non-interacting particles.
Optimal Control Design Guided by Adam Algorithm and LSTM-Predicted Open Quantum System Dynamics
This paper develops a machine learning approach using LSTM neural networks to predict quantum system behavior and optimize control strategies for quantum systems operating in noisy environments. The method combines LSTM prediction with Adam optimization algorithm to design better control pulses that maintain high fidelity while suppressing noise.
Key Contributions
- Novel framework combining LSTM neural networks with optimal control theory for open quantum systems
- Two-step optimization procedure for adiabatic speedup that improves fidelity in non-Markovian environments
View Full Abstract
The realization of high-fidelity quantum control is crucial for quantum information processing, particularly in noisy environments where control strategies must simultaneously achieve precise manipulation and effective noise suppression. Conventional optimal control designs typically require numerical calculations of the system dynamics. Recent studies have demonstrated that long short-term memory neural networks (LSTM-NNs) can accurately predict the time evolution of open quantum systems. Based on LSTM-NN predicted dynamics, we propose an optimal control framework for rapid and efficient optimal control design in open quantum systems. As an illustrative example, we apply our scheme to design an optimal control for the adiabatic speedup in a two-level system under a non-Markovian environment. Our optimization procedure entails two steps: driving trajectory optimization and zero-area pulse optimization. Fidelity improvements have been obtained for both steps, showing the effectiveness of the scheme. Our optimal control design scheme utilizes predicted dynamics to generate optimized controls, offering broad application potential in quantum computing, communication, and sensing.
Influence of environment on quantum correlations in two-spin systems with dipole-dipole interactions
This paper studies how environmental noise affects quantum correlations (entanglement and quantum discord) in a two-spin system connected by dipole-dipole interactions. The researchers use theoretical modeling to compare how dephasing noise differently impacts these two types of quantum correlations.
Key Contributions
- Theoretical analysis of environmental dephasing effects on quantum correlations in dipole-coupled spin systems
- Comparative study of how entanglement and quantum discord degrade differently under environmental noise
View Full Abstract
The influence of the environment on quantum correlations (entanglement and quantum discord) is studied in a two-spin-1/2 system with dipole-dipole interactions on the basis of the Lindblad equation. We consider the simplest case when the environment causes only dephasing of system spins. The dependencies of entanglement and the quantum discord on the relaxation rate are obtained. We compare the influence of the environment on entanglement and quantum discord.
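As a toy companion to this kind of calculation, the sketch below applies local pure dephasing to a Bell state and evaluates the Wootters concurrence as a function of time; the dipole-dipole Hamiltonian and the quantum discord comparison from the paper are not included, and the decay-rate convention is arbitrary.

```python
# Decay of two-qubit concurrence under local pure dephasing (toy model).
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0, None)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)
rho0 = np.outer(phi, phi)

gamma = 0.5                                    # effective dephasing rate (arbitrary units)
for t in np.linspace(0.0, 4.0, 5):
    rho_t = rho0.copy()
    decay = np.exp(-2.0 * gamma * t)           # joint |00><11| coherence decays
    rho_t[0, 3] *= decay
    rho_t[3, 0] *= decay
    print(f"t = {t:.1f}   C = {concurrence(rho_t):.3f}")   # C(t) = exp(-2*gamma*t)
```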
UGA-SSMRPT2 -- A Multireference Perturbation Theory Predicting Accurate Electronic Excitation Energies in Diverse Molecular Systems
This paper presents UGA-SSMRPT2, a new computational method for accurately calculating electronic excitation energies in molecules. The method achieves high accuracy comparable to expensive established techniques while being computationally cheaper and avoiding common technical problems like intruder states.
Key Contributions
- Demonstrates UGA-SSMRPT2 achieves near-chemical accuracy for diverse excited states within 0.20 eV of benchmark methods
- Shows the method outperforms popular approaches like NEVPT2 and CASPT2 while requiring smaller active spaces
- Eliminates intruder-state problems and need for empirical parameters through state-specific formulation
View Full Abstract
UGA-SSMRPT2, the spin-free perturbative analogue of Mukherjee's State-Specific Multireference Coupled Cluster Theory (MkMRCC), is known to be successful for size-extensive and intruder-free construction of dissociation curves. This work demonstrates that UGA-SSMRPT2 is also an accurate and computationally inexpensive framework for computing excitation energies. The method achieves near-chemical accuracy for the vast majority of $π\to π^*$, $n \to π^*$, charge-transfer, valence-Rydberg and Rydberg excited states commonly used for benchmarking electronic structure theories for excited states. Our results demonstrate that UGA-SSMRPT2 excitation energies lie within 0.20 eV of EOM-CCSD and/or well-established theoretical best estimates, often surpassing the popular MRPT2 approaches like NEVPT2, CASPT2, and MCQDPT while typically requiring smaller active spaces. Its state-specific formulation circumvents the well-known intruder-state problem and eliminates the need for empirical parameters such as IPEA shifts in CASPT2. This work proposes UGA-SSMRPT2 as a robust and scalable approach for modeling challenging electronic excited states.
Squeezing Enhanced Sagnac Sensing based on SU(1,1) Quantum Interference
This paper presents an enhanced Sagnac interferometer design that uses quantum squeezing and SU(1,1) interference to achieve rotation sensing beyond classical precision limits. The approach places an optical parametric amplifier inside the loop to automatically squeeze light in both directions, improving phase detection sensitivity.
Key Contributions
- Novel SU(1,1) quantum interference design for Sagnac interferometers with automatic bidirectional squeezing
- Demonstration of super-classical sensitivity for rotational sensing under realistic loss and detector inefficiency conditions
View Full Abstract
We present a simple and robust design for a squeezing-enhanced Sagnac interferometer that employs the concept of SU(1,1) interference to significantly surpass the classical sensitivity limit (shot-noise limit - SNL) in rotational sensing. By strategically placing an optical parametric amplifier (OPA) inside the Sagnac loop, light is automatically squeezed in both forward and backward directions of the loop, which enhances the detectability of a small phase. For measuring the squeezed quadrature, we explore two approaches: Direct detection of the output intensity, which is simple, but requires a high-efficiency photo-detector; and parametric homodyne with an additional OPA, which accepts practical detectors with no efficiency limitation, but is technically more complex. Our analysis demonstrates super-classical sensitivity under most realistic conditions of loss and detector inefficiency, thereby leveraging the resources of squeezing and the principles of SU(1,1) interference, while maintaining compatibility with standard Sagnac configurations.
Limitations of an approximative phase-space description in strong-field quantum optics
This paper examines a commonly used approximation method for modeling strong-field quantum optics processes like high-harmonic generation, finding that while it accurately predicts harmonic spectra, it significantly mischaracterizes quantum optical properties by incorrectly representing nonclassical light states as incoherent mixtures of classical states.
Key Contributions
- Demonstrates that approximative phase-space descriptions fail to capture quantum optical observables like sub-Poissonian photon statistics and quadrature squeezing
- Provides exact analytical benchmark using one-band model to quantify errors in the approximative method, showing errors scale with pulse duration and emitter density
View Full Abstract
In recent years, strong-field processes such as high-order harmonic generation (HHG) and above-threshold ionization driven by nonclassical states of light have become an increasingly popular field of study. The theoretical modeling of these processes often applies an approximate phase-space expansion of the nonclassical driving field in terms of coherent states, which has been shown to accurately predict the harmonic spectrum. However, its accuracy for the computation of quantum optical observables like the degree of squeezing and photon statistics has not been thoroughly considered. In this work, we introduce this approximative phase-space description and discuss its accuracy, and we find that it mischaracterizes the quantum optical properties of the driving laser by making it an incoherent mixture of classical states. We further show that this error in the driving field description maps onto the light emitted from HHG, as neither sub-Poissonian photon statistics nor quadrature squeezing below vacuum fluctuations can be captured by the approximative phase-space description. Lastly, to benchmark the approximative phase-space description, we consider the quantum HHG from a one-band model, which yields an exact analytical solution. Using the approximative phase-space representation with this specific model, we find a small quantitative error in the quadrature variance of the emitted field that scales with pulse duration and emitter density. Our results show that using this approximative phase-space description can mischaracterize quantum optical observables. Attributing physical meaning to such results should therefore be accompanied by a quantitative analysis of the error.
Low resource entanglement classification from neural network interpretability
This paper develops machine learning methods to classify quantum entanglement in two- and three-qubit systems using neural networks trained on incomplete measurement data. The researchers use Shapley values to interpret which measurements are most important and demonstrate how to reduce the number of measurements needed for reliable entanglement classification.
Key Contributions
- Unified interpretable framework for SLOCC entanglement classification of both pure and mixed two- and three-qubit states
- Shapley value analysis to identify important measurements and enable measurement reduction schemes
- Systematic comparison of dense vs convolutional neural networks for entanglement classification with design guidelines
View Full Abstract
Entanglement is a central resource in quantum information and quantum technologies, yet its characterization remains challenging due to both theoretical complexity and measurement requirements. Machine learning has emerged as a promising alternative, enabling entanglement characterization from incomplete measurement data; however, model interpretability remains a challenge. In this work, we introduce a unified and interpretable framework for SLOCC entanglement classification of two- and three-qubit states, encompassing both pure and mixed states. We train dense and convolutional neural networks on Pauli-measurement outcomes, provide design guidelines for each architecture, and systematically compare their performance across types of states. To interpret the models, we compute Shapley values to quantify the contribution of each measurement, analyze measurement-importance patterns across different systems, and use these insights to guide a measurement-reduction scheme. Accuracy-versus-measurement curves and comparisons with analytical entanglement criteria demonstrate the minimal resources required for reliable classification and highlight both the capabilities and limitations of Shapley-based interpretability when using machine learning models for entanglement detection and classification.
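For readers unfamiliar with the attribution tool, the following sketch estimates Shapley values by Monte-Carlo sampling of feature orderings for a toy linear model and checks the estimate against the closed form; the model, inputs, and baseline are stand-ins, not the paper's networks or Pauli-measurement features.

```python
# Monte-Carlo (permutation) Shapley values for a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, -1.2, 2.0, 0.3])          # toy model weights
x = np.array([1.0, 0.4, -0.7, 2.0])          # instance to explain
baseline = np.zeros_like(x)                  # reference input for "absent" features

def f(z):
    return float(w @ z)

def shapley_mc(f, x, baseline, n_perm=2000):
    d, phi = len(x), np.zeros(len(x))
    for _ in range(n_perm):
        order = rng.permutation(d)
        z, prev = baseline.copy(), f(baseline)
        for i in order:
            z[i] = x[i]                      # add feature i to the coalition
            cur = f(z)
            phi[i] += cur - prev             # marginal contribution of feature i
            prev = cur
    return phi / n_perm

print("MC estimate :", np.round(shapley_mc(f, x, baseline), 3))
print("closed form :", np.round(w * (x - baseline), 3))   # exact for linear models
```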
Vistas of Algebraic Probability: Quantum Computation and Information
This paper presents an algebraic framework for probability theory that can handle both classical and quantum-like behaviors, focusing on how non-commutativity in algebraic structures produces quantum effects. The authors restrict to finite-dimensional algebras to avoid analytical complexities while maintaining applicability to quantum computation and quantum-like models.
Key Contributions
- Development of algebraic probability framework that unifies classical and quantum-like behaviors
- Mathematical foundation showing how non-commutativity produces quantum effects
- Finite-dimensional algebraic approach applicable to quantum computation models
View Full Abstract
Kolmogorov's foundation of probability takes measure spaces, $σ$-algebras, and probability measures as basic objects. It is, however, widely recognized that this classical framework is inadequate for random phenomena involving quantum effects, and more generally for \emph{quantum-like} situations. A broader formulation is provided by an algebraic viewpoint: one starts from an algebra of random variables equipped with a distinguished linear functional -- the \emph{state} -- interpreted as expectation. In this sense, the approach can also be viewed as a modern reading of ideas already implicit in early probability (e.g., the Bernoullis), while its contemporary form has been developed and used extensively in quantum physics. The algebraic framework accommodates both classical and quantum-like behaviours, yet it remains underused in classical probability and uncertainty quantification, where it can nevertheless open new perspectives and clarify structural features. Although the language carries a physics flavor, the subject is purely probabilistic. The key distinction between classical and quantum-like behaviour is \emph{commutativity}: its failure produces the characteristic effects of quantum-like situations. The rise of quantum computing is a prominent setting in which such behaviour may become relevant even for practitioners in computational science. Here we focus on the purely algebraic core of the approach. By restricting attention to finite-dimensional algebras, we avoid many analytical subtleties while retaining the main ideas, their classical limit, and their applicability to quantum-like models and quantum computation.
Quantum-Assisted Design of Space-Terrestrial Integrated Networks
This paper uses quantum computing algorithms to optimize the design of integrated satellite-terrestrial networks for global connectivity. The researchers map network optimization problems onto quantum processors and compare quantum solutions against classical approaches across 165 test cases.
Key Contributions
- Demonstration of quantum adiabatic algorithm for satellite network optimization problems
- Comparative analysis showing quantum methods can match classical solvers for network design
View Full Abstract
Achieving ubiquitous global connectivity requires integrating satellite and terrestrial networks, particularly to serve remote and underserved regions. In this work, we investigate the design and optimization of Space-Terrestrial Integrated Networks (STINs) using a hybrid quantum-classical approach. We formalize three key combinatorial optimization problems: the Satellite Selection Problem (SSP), the Gateway Selection Problem (GSP), and the Spectrum Assignment Problem (SAP), each capturing critical aspects of network deployment and operation. Leveraging neutral-atom quantum processors, we map the SSP onto a Maximum Weight Independent Set problem, embedding it onto the Aquila platform and solving it via the Quantum Adiabatic Algorithm (QAA). Postprocessing ensures feasible solutions that guide downstream GSP and SAP optimization. Benchmarking across 165 realistic remote regions shows that QAA solutions closely match classical exact solvers and outperform greedy heuristics, while subsequent GSP and SAP outcomes remain largely robust to differences in initial satellite selection. These results demonstrate that quantum optimization achieves performance broadly comparable to classical approaches for end-to-end STIN design, with rare instances where it can even surpass state-of-the-art solvers. This suggests that, while not yet consistently superior, quantum methods may offer competitive advantages for larger or more complex instances of the underlying combinatorial subproblems.
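To make the mapping concrete, here is a brute-force Maximum Weight Independent Set calculation on a toy conflict graph: nodes are candidate satellites with coverage weights, and edges mark pairs that cannot be selected together. This is only the classical combinatorial object the SSP is mapped onto; the QUBO formulation and the neutral-atom embedding are not shown.

```python
# Brute-force Maximum Weight Independent Set (MWIS) on a toy conflict graph.
from itertools import combinations

weights = {0: 3.0, 1: 2.0, 2: 2.5, 3: 1.5, 4: 2.0}   # candidate "satellites"
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}     # conflicting pairs (a 5-cycle)

def is_independent(subset):
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(subset, 2))

best = max((s for r in range(len(weights) + 1)
            for s in combinations(weights, r) if is_independent(s)),
           key=lambda s: sum(weights[v] for v in s))
print("MWIS:", best, "total weight:", sum(weights[v] for v in best))
```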
Does the entropy of systems with larger internal entanglement grow stronger?
This paper investigates whether quantum systems with higher internal entanglement experience faster entropy growth when interacting with their environment. Using a model of qubits coupled to quantum harmonic oscillators, the authors find that on average systems with more internal entanglement do show stronger entropy growth, though specific state choices can reverse this trend.
Key Contributions
- Demonstrates that systems with larger internal entanglement generally exhibit stronger entropy growth when coupled to environments
- Shows that entanglement depth contributes to entropy growth dynamics
- Reveals that specific state selection can reverse the typical entanglement-entropy growth relationship
View Full Abstract
It is known that when a system interacts with its environment, the entanglement contained in the system is redistributed since parts of the system entangle with the environment. On the other hand, the entanglement of a system with its environment is closely related to the entropy of the system. However, does this imply that the entropy of systems with larger internal entanglement will grow stronger? We study the issue using the simplest model as an example: a system of qubits interacts with the environment described by the quantum harmonic oscillator. The answer to the posed question is ambiguous. However, the study of the situation on average (using the simulation of a set of random states) reveals certain patterns, and we can say that the answer is affirmative. At the same time, choosing states that satisfy certain conditions can in some cases reverse this dependence. Additionally, we show that the entanglement depth also makes a small contribution to entropy growth.
Canonical Quantization of Cylindrical Waveguides: A Gauge-Based Approach
This paper develops a theoretical framework for quantizing electromagnetic modes in cylindrical waveguides by extending previous gauge-based approaches to handle TEM, TM, and TE modes. The work creates a unified mathematical treatment that connects quantum field operators to measurable electrical quantities like voltage and current in waveguide systems.
Key Contributions
- Unified canonical quantization framework for all electromagnetic mode types in cylindrical waveguides
- Direct connection between quantum field operators and measurable electrical quantities through mode-specific capacitance and inductance
- Extension of gauge-based formalism from Cartesian to cylindrical geometries with future applicability to on-chip coplanar waveguides
View Full Abstract
We present a canonical quantization of electromagnetic modes in cylindrical waveguides, extending a gauge-based formalism previously developed for Cartesian geometries [1]. By introducing the two field quadratures $X,Y$ of TEM (transverse electromagnetic), but also of TM (transverse magnetic) and TE (transverse electric) traveling modes, we identify for each a characteristic one-dimensional scalar field (a generalized flux $\varphi$) governed by a Klein-Gordon type equation. The associated Hamiltonian is derived explicitly from Maxwell's equations, allowing the construction of bosonic ladder operators. The generalized flux is directly deduced from the electromagnetic potentials $A,V$ by a proper gauge choice, generalizing Devoret's approach [2]. Our analysis unifies the treatment of cylindrical and Cartesian guided modes under a consistent and generic framework, ensuring both theoretical insight and experimental relevance. We derive mode-specific capacitance and inductance from the field profiles and express voltage and current in terms of the canonical field variables. Measurable quantities are therefore properly defined from the mode quantum operators, especially for the non-trivial TM and TE ones. The formalism will be extended in future work to other types of waveguides, especially on-chip coplanar geometries particularly relevant to quantum technologies.
A Quantum Computing Framework for VLBI Data Correlation
This paper develops a quantum computing approach for processing Very Long Baseline Interferometry (VLBI) data, which is used in radio astronomy. The authors show how classical radio telescope data can be encoded into quantum states and processed using quantum algorithms to potentially achieve computational speedups.
Key Contributions
- Development of amplitude encoding method to represent classical VLBI data in quantum superposition states
- Construction of quantum algorithms for VLBI operations including fringe rotation, Fourier transforms, and cross-correlation
- Demonstration of complete quantum processing pipeline with validation against classical methods
View Full Abstract
We present a quantum computing framework for VLBI data correlation. We point out that classical baseband time-series data of length $N$ can be embedded into a quantum superposition state using amplitude encoding with only $\log_2 N$ qubits. The basic VLBI correlation and fringe fitting operations, including fringe rotation, Fourier transform, delay compensation, and cross correlation, can be implemented via quantum algorithms with significantly reduced computational complexity. We construct a full quantum processing pipeline and validate its feasibility and accuracy through direct comparison with a classical VLBI pipeline. We recognize that amplitude encoding of large data volumes remains the primary bottleneck in quantum computing; however, the quantized nature of VLBI raw data helps reduce the state-preparation complexity. Our investigation demonstrates that quantum computation offers a promising paradigm for VLBI data correlation and is likely to play a role in future VLBI systems.
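Classically, the two ingredients highlighted in the abstract are easy to demonstrate: amplitude encoding is just normalization of the length-$N$ series into a unit vector that would occupy $\log_2 N$ qubits, and the core correlation step is an FFT-based cross-correlation. The sketch below shows both on synthetic data; it is a classical illustration, not the paper's quantum circuits.

```python
# Classical illustration of amplitude encoding and FFT cross-correlation for VLBI-like data.
import numpy as np

rng = np.random.default_rng(1)
N = 1024                                           # length of the baseband series (power of two)
x = rng.integers(-2, 3, size=N).astype(float)      # toy coarsely quantized stream (station 1)
y = np.roll(x, 7) + 0.1 * rng.standard_normal(N)   # delayed, noisy copy (station 2)

# "Amplitude encoding": a unit vector of length N, i.e. log2(N) qubits on hardware.
state_x = x / np.linalg.norm(x)
print("qubits needed for amplitude encoding:", int(np.log2(N)))

# Circular cross-correlation via FFT -- the core correlation operation.
xcorr = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x)))
lag = int(np.argmax(np.abs(xcorr)))
print("estimated delay of y relative to x (samples):", lag if lag <= N // 2 else lag - N)
```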
Constructing Compact ADAPT Unitary Coupled-Cluster Ansatz with Parameter-Based Criterion
This paper presents Param-ADAPT-VQE, an improved quantum algorithm for calculating molecular ground state energies that uses a parameter-based criterion instead of gradient-based selection to choose excitation operators. The method reduces computational costs and measurement requirements while maintaining accuracy compared to the original ADAPT-VQE algorithm.
Key Contributions
- Novel parameter-based criterion for selecting excitation operators in ADAPT-VQE
- Sub-Hamiltonian technique with hot-start VQE optimization to reduce measurement costs
- Demonstrated improvements in computational accuracy, ansatz size, and measurement costs for molecular systems
View Full Abstract
The adaptive derivative-assembled pseudo-trotter variational quantum eigensolver (ADAPT-VQE) is a promising hybrid quantum-classical algorithm for molecular ground state energy calculation, yet its practical scalability is hampered by redundant excitation operators and excessive measurement costs. To address these challenges, we propose Param-ADAPT-VQE, a novel improved algorithm that selects excitation operators based on a parameter-based criterion instead of the traditional gradient-based metric. This strategy effectively avoids redundant operators. We further develop a sub-Hamiltonian technique and integrate a hot-start VQE optimization strategy, achieving a significant reduction in measurement costs. Numerical experiments on typical molecular systems demonstrate that Param-ADAPT-VQE outperforms the original ADAPT-VQE in computational accuracy, ansatz size, and measurement costs. Furthermore, our scheme retains the fundamental framework of ADAPT-VQE and is thus fully compatible with its various modified versions, enabling further performance improvements in specific aspects. This work presents an efficient and scalable enhancement to ADAPT-VQE, mitigating the core obstacles that impede its practical implementation in the field of molecular quantum chemistry.
Benchmarking Quantum and Classical Algorithms for the 1D Burgers Equation: QTN, HSE, and PINN
This paper compares different computational approaches for solving the 1D Burgers equation, including Quantum Tensor Networks (QTN), classical methods, and neural networks. The study finds that while quantum methods show some advantages in accuracy and scaling for low-resolution problems, they don't yet provide computational advantages over classical solvers.
Key Contributions
- Comprehensive benchmark comparing quantum tensor networks with classical methods for fluid dynamics simulation
- Demonstration that quantum methods achieve superior precision but lack computational advantage without fault tolerance
View Full Abstract
We present a comparative benchmark of Quantum Tensor Networks (QTN), the Hydrodynamic Schrödinger Equation (HSE), and Physics-Informed Neural Networks (PINN) for simulating the 1D Burgers' equation. Evaluating these emerging paradigms against classical GMRES and Spectral baselines, we analyse solution accuracy, runtime scaling, and resource overhead across grid resolutions ranging from $N=4$ to $N=128$. Our results reveal a distinct performance hierarchy. The QTN solver achieves superior precision ($L_2 \sim 10^{-7}$) with remarkable near-constant runtime scaling, effectively leveraging entanglement compression to capture shock fronts. In contrast, while the Finite-Difference HSE implementation remains robust, the Spectral HSE method suffers catastrophic numerical instability at high resolutions, diverging significantly at $N=128$. PINNs demonstrate flexibility as mesh-free solvers but stall at lower accuracy tiers ($L_2 \sim 10^{-1}$), limited by spectral bias compared to grid-based methods. Ultimately, while quantum methods offer novel representational advantages for low-resolution fluid dynamics, this study confirms they currently yield no computational advantage over classical solvers without fault tolerance or significant algorithmic breakthroughs in handling non-linear feedback.
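For orientation, a minimal classical baseline of the kind such benchmarks compare against is an explicit finite-difference integration of the viscous Burgers equation $u_t + u u_x = \nu u_{xx}$. The sketch below is only illustrative: the scheme, grid, and viscosity are arbitrary and are not the GMRES or spectral baselines used in the paper.

```python
# Minimal explicit finite-difference solver for the viscous 1D Burgers equation
#   u_t + u * u_x = nu * u_xx  (periodic domain). Illustrative classical baseline only.
import numpy as np

N, nu = 128, 0.01
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(2.0 * np.pi * x)                    # smooth initial condition
dt = 0.2 * min(dx / np.abs(u).max(), dx**2 / (2.0 * nu))   # stability-limited step

t, T = 0.0, 0.3
while t < T:
    u_x = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)        # centered first derivative
    u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # second derivative
    u = u + dt * (-u * u_x + nu * u_xx)
    t += dt

print("max |u| at t = 0.3:", float(np.abs(u).max()))
```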
Quantum fields from real-time ensemble dynamics
This paper reformulates quantum field theory using a real-time Schrödinger picture where quantum fields evolve as probability ensembles on field configurations, making the causal dynamics more explicit than traditional operator-based formulations. The approach shows how standard QFT results like scattering amplitudes and correlation functions emerge from a single underlying ensemble evolution framework.
Key Contributions
- Real-time Schrödinger-picture formulation of quantum field theory using wavefunctionals and ensemble dynamics
- Unified framework showing how standard QFT tools (operators, path integrals, scattering amplitudes) emerge from single underlying dynamics
- Explicit distinction between fundamental dynamical structure and computational representations in QFT
View Full Abstract
Relativistic quantum field theory (QFT) is commonly formulated in terms of operators, asymptotic states, and covariant amplitudes, a perspective that tends to obscure the real-time origin of field dynamics and correlations. Here we formulate quantum fields in a real-time Schrödinger-picture framework, in which fields evolve as probability ensembles on the space of field configurations. Within this formulation, the wavefunctional $Ψ[φ,t]$ encodes a first-order, causal ensemble dynamics on configuration space. Interactions appear as couplings between configuration-space directions, while propagators arise as derived correlation structures rather than as fundamental postulates. Entanglement, scattering amplitudes, and conformal field theory correlators emerge as distinct projections of the same underlying ensemble evolution, corresponding to equal-time, asymptotic, and symmetry-organized observables. Standard operator, diagrammatic, and path-integral formulations are recovered as computational representations of this single real-time dynamics. This organization makes explicit the distinction between fundamental dynamical structure and representational tools in QFT, and clarifies the scope within which ensemble-averaged correlators account for quantum fluctuations, while also delineating the level at which questions associated with individual realizations and randomness would arise beyond the correlator-based field-theoretic description.
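For orientation, the free-field functional Schrödinger equation below (units $\hbar = c = 1$) is the standard starting point of any wavefunctional picture; the paper's specific first-order ensemble dynamics should be taken from the text itself.

$$
i\,\partial_t \Psi[\varphi,t] = \int d^3x\left[-\frac{1}{2}\frac{\delta^2}{\delta\varphi(\mathbf{x})^2} + \frac{1}{2}\big(\nabla\varphi(\mathbf{x})\big)^2 + \frac{1}{2}m^2\varphi(\mathbf{x})^2 + V\big(\varphi(\mathbf{x})\big)\right]\Psi[\varphi,t]
$$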
Influence of Noninertial Dynamics on Static Quantum Resource Theories
This paper investigates how noninertial motion (like acceleration) affects quantum resource theories by showing that such effects can be modeled as quantum channels. The authors analyze how the Unruh effect, which causes accelerated observers to see thermal radiation, impacts the fundamental components of quantum resource theories including free states, operations, and resource measures.
Key Contributions
- Established equivalence between noninertial effects and CPTP maps
- Showed Unruh effect can be modeled as a bosonic amplifier channel
- Analyzed impact of noninertial motion on core components of quantum resource theories
View Full Abstract
The effect of noninertial dynamics on static quantum resource theories is investigated. To this end, we first show the equivalence between noninertial effects and a completely positive, trace-preserving (CPTP) map. In this formulation, the Unruh effect is equivalent to a bosonic amplifier channel. The effect of this map on a generic quantum resource is investigated by studying the role of the CPTP map on the three core ingredients of a resource theory, namely, the free states, the free operations and the resource quantifiers. We show several general statements can be made about these three components of a resource theory in the presence of noninertial motion.
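Two textbook ingredients help place this result: the Unruh temperature associated with proper acceleration $a$, and the action of a phase-insensitive bosonic amplifier channel of gain $G$ on a mode operator. How the gain is tied to the acceleration in this setting is worked out in the paper; the relations below are only the standard background, with $\hat b$ an ancillary environment mode.

$$
T_U = \frac{\hbar a}{2\pi c k_B}, \qquad \hat a \;\mapsto\; \sqrt{G}\,\hat a + \sqrt{G-1}\,\hat b^{\dagger}, \quad G \ge 1.
$$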
Symmetric joint measurement as a complement to the elegant joint measurement
This paper introduces a new type of two-qubit joint measurement called the symmetric joint measurement (SJM) that complements existing elegant joint measurements by covering a different range of quantum entanglement values (0 to 1/2 concurrence). The authors demonstrate that this measurement exhibits rotational symmetry properties and can be extended to multi-qubit systems.
Key Contributions
- Development of symmetric joint measurement complementary to elegant joint measurement with concurrence range 0 to 1/2
- Demonstration of rotational symmetry in reduction vectors and permutation symmetry in triangular networks
- Generalization of the symmetric joint measurement framework to multi-qubit systems with even numbers of qubits
View Full Abstract
Traditional Bell state measurement (BSM) and product basis measurements (PBM) have been integral to nearly the entire development of quantum computing. Unlike the BSM and the PBM, a recently proposed two-qubit joint measurement called the elegant joint measurement (EJM) exhibits novel tetrahedral symmetry in its single-qubit reduced states. In [Phys.Rev.Lett.126:220401], a parameterized two-qubit iso-entangled basis was proposed, with concurrence between 1/2 and 1, perfectly spanning the original EJM and conventional BSM. We present a two-qubit symmetric joint measurement having concurrence from 0 to 1/2, which is complementary to [Phys.Rev.Lett.126:220401] and contains the PBM and the original EJM. We investigate the symmetry of the current structure and its application in triangular networks. The results indicate that the reduction vectors of the current basis states exhibit rotational symmetry, rather than the aforementioned mirror symmetry; moreover, the output probability distributions of three parties in the network explicitly demonstrate the expected permutation symmetry. Furthermore, we generalize the two-qubit symmetric joint measurement to the multiqubit systems with an even number of qubits.
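The concurrence ranges quoted above are easy to check numerically for pure two-qubit states using Wootters' formula $C(\psi) = |\langle\psi|\sigma_y\otimes\sigma_y|\psi^*\rangle|$. The sketch below evaluates it for a product state, a partially entangled state, and a Bell state; the specific SJM basis states are defined in the paper and are not reproduced here.

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
YY = np.kron(Y, Y)

def concurrence(psi):
    """Wootters concurrence of a pure two-qubit state |psi> (4-component vector)."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return abs(psi.conj() @ YY @ psi.conj())

ket = lambda *amps: np.array(amps, dtype=complex)

product = ket(1, 0, 0, 0)                         # |00>
bell = ket(1, 0, 0, 1) / np.sqrt(2)               # (|00> + |11>)/sqrt(2)
theta = np.pi / 12
partial = ket(np.cos(theta), 0, 0, np.sin(theta))  # cos(t)|00> + sin(t)|11>

print(concurrence(product))   # 0.0, product-basis-like (unentangled)
print(concurrence(partial))   # sin(2*theta) = 0.5, top of the SJM range
print(concurrence(bell))      # 1.0, Bell-basis-like (maximally entangled)
```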
Data Verification is the Future of Quantum Computing Copilots
This paper argues that quantum computing copilots and AI systems need built-in verification mechanisms rather than relying on large language models alone: in the authors' early experiments, LLMs without data verification reached at most 79% accuracy in quantum circuit optimization, and hallucinations are argued to be mathematically inevitable. The authors propose that verification should be an architectural foundation rather than an afterthought for AI tools working with quantum programs.
Key Contributions
- Identification of fundamental limitations of LLMs for quantum program generation due to statistical reasoning vs precision requirements
- Proposal for verification-first architecture in quantum computing AI tools rather than post-generation filtering
View Full Abstract
Quantum program generation demands a level of precision that may not be compatible with the statistical reasoning carried out in the inference of large language models (LLMs). Hallucinations are mathematically inevitable and not addressable by scaling, which leads to infeasible solutions. We argue that architectures prioritizing verification are necessary for quantum copilots and AI automation in domains governed by constraints. Our position rests on three key points: verified training data enables models to internalize precise constraints as learned structures rather than statistical approximations; verification must constrain generation rather than filter outputs, as valid designs occupy exponentially shrinking subspaces; and domains where physical laws impose correctness criteria require verification embedded as architectural primitives. Early experiments showed LLMs without data verification could only achieve a maximum accuracy of 79% in circuit optimization. Our positions are formulated as quantum computing and AI4Research community imperatives, calling for elevating verification from afterthought to architectural foundation in AI4Research.
Correlation-Enabled Beatings in Two-Dimensional Electronic Spectroscopy
This paper explains long-lived oscillations observed in two-dimensional electronic spectroscopy experiments by proposing a new mechanism where correlations between the quantum system and its environment, combined with ultrafast laser pulses, can sustain quantum coherence much longer than traditional models predict.
Key Contributions
- Proposes correlation-driven mechanism for persistent quantum beatings in 2DES that goes beyond standard excitonic models
- Demonstrates how ultrafast pulse sequences can dress bath-memory contributions and enable nonsecular population-coherence transfer
- Reframes long-lived coherence as a protocol-level dynamical effect rather than intrinsic system property
View Full Abstract
Long-lived beatings in two-dimensional electronic spectroscopy (2DES) remain difficult to interpret within standard excitonic open-system models, which typically assume factorized initialization and predict rapid coherence decay. We show that persistent beatings can arise from a correlation-driven mechanism that requires both slow bath memory and ultrafast pulse sequences that propagate system-bath correlations across optical interactions. In this regime, the pulse sequence unitarily dresses the bath-memory contribution and activates nonsecular population-coherence transfer during field-free evolution, sustaining coherence signatures far beyond factorized or weak-memory descriptions. Rather than addressing what is oscillating (excitonic versus vibronic) or quantum-versus-classical semantics, this work reframes long-lived beatings as a protocol-level dynamical effect: correlation-mediated retrieval under ultrafast control.
A Unified Categorical Description of Quantum Hall Hierarchy and Anyon Superconductivity
This paper develops a mathematical framework using category theory to unify the understanding of two types of quantum phases: quantum Hall hierarchy states and anyon superconductors. The authors show how both phases can arise from the same underlying process of stacking and condensing topological phases, depending on whether certain symmetries are preserved or broken.
Key Contributions
- Unified category-theoretic framework connecting quantum Hall hierarchy states and anyon superconductors through modular tensor categories
- Prediction of novel anyon superconductor phases including charge-2e superconductors from Laughlin states and charge-ke superconductors from bosonic Read-Rezayi states
- Mathematical formalism showing how U(1) symmetry preservation or breaking determines phase transitions between hierarchy states and superconducting phases
View Full Abstract
We present a unified category-theoretic framework for quantum Hall hierarchy constructions and anyon superconductivity based on modular tensor categories over $\mathrm{Rep}(\mathrm{U}(1))$ and $\mathrm{sRep}(\mathrm{U}(1)^f)$. Our approach explicitly incorporates conserved $\mathrm{U}(1)$ charge and formulates doping via a generalized stack-and-condense procedure, in which an auxiliary topological order is stacked onto the parent phase, and the quasiparticles created by doping subsequently condense. Depending on whether this condensation preserves or breaks the $\mathrm{U}(1)$ symmetry, the system undergoes a transition to a quantum Hall hierarchy state or to an anyon superconductor. For anyon superconductors, the condensate charge is determined unambiguously by the charged local bosons contained in the condensable algebra. Our framework reproduces all known anyon superconductors obtained from field-theoretic analyses and further predicts novel phases, including a charge-$2e$ anyon superconductor derived from the Laughlin state and charge-$ke$ anyon superconductors arising from bosonic $\mathbb{Z}_k$ Read-Rezayi states. By placing hierarchy transitions and anyon superconductivity within a single mathematical formalism, our work provides a unified understanding of competing and proximate phases near experimentally realizable fractional quantum Hall states.
Classical Benchmarks of a Symmetry-Adapted Variational Quantum Eigensolver for Real-Time Green's Functions in Dynamical Mean-Field Theory
This paper develops a variational quantum eigensolver (VQE) approach to solve quantum many-body problems in materials science, specifically using quantum computers to simulate strongly correlated electron systems that are difficult for classical computers. The researchers show that while VQE can accurately find ground state energies, extracting dynamic properties like conductivity remains challenging, especially for metallic materials.
Key Contributions
- Demonstrates VQE with symmetry constraints can accurately solve Anderson Impurity Models beyond minimal approximations while staying within near-term quantum hardware limits
- Shows that accurate ground state energies from VQE do not guarantee accurate dynamical properties, revealing limitations for simulating metallic phases in strongly correlated materials
View Full Abstract
We present a variational quantum eigensolver (VQE) approach for solving the Anderson Impurity Model (AIM) arising in Dynamical Mean-Field Theory (DMFT). Recognizing that the minimal two-site approximation often fails to resolve essential spectral features, we investigate the efficacy of VQE for larger bath discretizations while adhering to near-term hardware constraints. We employ a symmetry-adapted ansatz enforcing conservation of particle number $(N)$, spin projection $(S_z=0)$, and total spin $(S^2=0)$ symmetry, benchmarking the performance against exact diagonalization across different interaction strengths using bath parameters extracted from the DMFT self-consistency loop. For a four-site model, the relative error in the ground state energy remains well below $0.01\%$ with a compact parameter set $(N_p \le 30)$. Crucially, we demonstrate that the single-particle Green's function, the central quantity for DMFT, can be accurately extracted from VQE-prepared ground states via real-time evolution in the intermediate to strong interaction regimes. However, in the weak interaction regime, the Green's function exhibits noticeable deviations from the exact benchmark, particularly in resolving low-energy spectral features, despite the ground state energy showing excellent agreement. These findings demonstrate that VQE combined with real-time evolution can effectively extend quantum-classical hybrid DMFT beyond the two-site approximation, particularly for describing insulating phases. While this approach offers a viable pathway for simulating strongly correlated materials on near-term devices, the observation that accurate ground state energy does not guarantee accurate dynamical properties highlights a key challenge for applying such approaches to correlated metals.
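To make the "Green's function from real-time evolution" step concrete, the sketch below computes the greater Green's function $G^>(t) = -i\langle 0|c_1(t)\,c_1^\dagger(0)|0\rangle$ for a toy two-site spinless tight-binding model by exact time evolution of the ground state. It is a stand-in for the VQE-prepared Anderson-impurity ground states and larger bath discretizations used in the paper; the hopping amplitude and time grid are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# Jordan-Wigner fermion operators on two sites (4-dimensional Fock space).
I2 = np.eye(2, dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # |0><1| lowering operator
c1 = np.kron(sm, I2)
c2 = np.kron(sz, sm)

# Toy Hamiltonian: nearest-neighbour hopping, H = -t (c1^dag c2 + c2^dag c1).
t_hop = 1.0
H = -t_hop * (c1.conj().T @ c2 + c2.conj().T @ c1)

# Ground state over the full Fock space.
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]

def greater_gf(time):
    """G^>(t) = -i <0| e^{iHt} c1 e^{-iHt} c1^dag |0>."""
    U = expm(-1j * H * time)
    return -1j * (gs.conj() @ U.conj().T @ c1 @ U @ c1.conj().T @ gs)

for tt in np.linspace(0.0, 10.0, 6):
    g = greater_gf(tt)
    print(f"t = {tt:4.1f}   G^>(t) = {g.real:+.4f} {g.imag:+.4f}i")
```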
Temperature driven false vacuum decay in coherently coupled Bose superfluids
This paper studies how temperature affects the transition of quantum fields from unstable to stable states (false vacuum decay) using a two-dimensional system of coupled Bose superfluids. The researchers use computer simulations to show that decay rates follow exponential temperature dependence, confirming theoretical predictions about quantum tunneling processes.
Key Contributions
- Demonstrated temperature-dependent false vacuum decay rates in 2D Bose-Bose mixtures following instanton theory
- Validated SGPE as effective simulation tool for studying coupled magnetization and phase dynamics in ultracold quantum gases
View Full Abstract
The relaxation of a quantum field from a metastable state (false vacuum) to a stable one (true vacuum), also known as false vacuum decay, is a fundamental problem in quantum field theory and cosmology. We study this phenomenon using a two-dimensional interacting and coherently coupled Bose-Bose mixture, a platform that has already been employed experimentally to investigate false vacuum decay in one dimension. In such a mixture, it is possible to define an effective magnetization that acts as a quantum field variable. Using the Stochastic Gross-Pitaevskii equation (SGPE), we prepare thermal equilibrium states in the false vacuum and extract decay rates from the magnetization dynamics. The decay rates show an exponential dependence on temperature, in line with the thermal theory of instantons. Since the SGPE is based on complex scalar fields, it also allows us to explore the behavior of the phase, which turns out to become dynamic during decay. Our results confirm the SGPE as an effective tool for studying coupled magnetization and phase dynamics and the associated instanton physics in ultracold quantum gases.
Structures and proximity effects of inhomogeneous population-imbalanced Fermi gases with pairing interactions
This paper studies ultracold atomic Fermi gases with spatially varying properties, analyzing how different quantum phases (BCS superfluid, FFLO, and normal phases) can coexist in different regions of space. The researchers use theoretical calculations to understand how these phases interface with each other and affect nearby regions through proximity effects.
Key Contributions
- Theoretical analysis of spatially inhomogeneous multi-phase structures in population-imbalanced Fermi gases using Bogoliubov-de Gennes equations
- Characterization of proximity effects and interfacial properties between different quantum phases (BCS, FFLO, normal) including emergence of buffer FFLO phases
View Full Abstract
By introducing spatially varying profiles of pairing interaction or spin polarization to quasi one-dimensional two-component atomic Fermi gases confined in box potentials, we analyze the ground state structures and properties when multiple phases coexist in real space by implementing the Bogoliubov-de Gennes equation suitable for describing inhomogeneous fermion systems. While the BCS, Fulde-Ferrell-Larkin-Ovchinnikov (FFLO), and normal phases occupy different regions on the phase diagram when the parameters are uniform, a spatial change of pairing strength or spin polarization can drive the system from the FFLO phase to a normal gas or from a BCS superfluid to the FFLO phase in real space. The FFLO phase exhibits its signature modulating order parameter at the FFLO momentum due to population imbalance, and the pair correlation penetrates the polarized normal phase and exhibits proximity effects. Meanwhile, the BCS phase tends to repel population imbalance and maintain a plateau of pairing. Interestingly, a buffer FFLO phase emerges when the spatial change attempts to join the BCS and normal phase in the presence of spin polarization. By analyzing the pairing correlations, interfacial properties, and momentum-space spectra of the inhomogeneous structures, the relevant length and momentum scales and their interplay are characterized. We also briefly discuss implications of inhomogeneous multi-phase atomic Fermi gases with population imbalance.
Stochastic Thermodynamics of Quantum-Induced Stochastic Dynamics
This paper develops a thermodynamic framework for systems where classical components interact with quantum environments that can exchange both heat and work. The authors derive modified thermodynamic laws that account for quantum effects like squeezing and demonstrate their approach using an optomechanical system.
Key Contributions
- Development of thermodynamic framework for quantum-induced stochastic dynamics
- Derivation of modified Second Law accounting for non-equilibrium quantum features
- Application to optomechanical systems with characterization of non-stationary noise thermodynamics
View Full Abstract
Quantum-Induced Stochastic Dynamics arises from the coupling between a classical system and a quantum environment. Unlike standard thermal reservoirs, this environment acts as a dynamic bath, capable of simultaneously exchanging heat and performing work. We formulate a thermodynamic framework for this semi-classical regime, defining heat, work, and entropy production. We derive a modified Second Law that accounts for non-equilibrium quantum features, such as squeezing. The framework is exemplified by an optomechanical setup, where we characterize the thermodynamics of the non-stationary noise induced by the cavity field.
Quantum speed limit time for bipartite entanglement in neutrino oscillations in matter with non-standard interactions
This paper studies how quantum entanglement between neutrino flavors evolves during neutrino oscillations through matter, particularly investigating how fast this entanglement can change (quantum speed limit) in the presence of non-standard particle interactions. The research uses theoretical frameworks to analyze entanglement measures that could potentially be observed in current and future neutrino experiments like T2K, NOvA, and DUNE.
Key Contributions
- Quantification of bipartite entanglement measures in three-flavor neutrino oscillations with non-standard interactions
- Analysis of quantum speed limits for entanglement evolution in neutrino systems under different mass ordering scenarios
View Full Abstract
In the three-flavor neutrino oscillation framework, we investigate the transition probabilities of an initial muon neutrino flavor state in the presence of non-standard interactions (NSIs) characterized by complex off-diagonal ($|\varepsilon_{\alpha\beta}|e^{i\varphi_{\alpha\beta}}$) and diagonal parameters ($|\varepsilon_{\alpha\alpha}-\varepsilon_{\beta\beta}|$), including a CP-violating phase and a constant matter potential, under both normal (NO) and inverted mass ordering (IO) scenarios. Within these scenarios and through the lens of mode entanglement, bipartite entanglement measures such as entanglement entropy and capacity of entanglement are quantified in terms of the transition probabilities, which can be measured in neutrino oscillation experiments. Using these two bipartite entanglement measures, we further explore the quantum speed limit (QSL) time, which describes how rapidly bipartite entanglement evolves during neutrino oscillations. We illustrate our results using the baseline lengths and energies corresponding to ongoing long-baseline accelerator neutrino experiments, such as T2K, NO$\nu$A, and the upcoming DUNE experiment. In the presence of a CP-violating phase and a constant matter potential, both with and without NSI effects, we compare the QSL time behavior for bipartite entanglement in neutrino oscillations for NO and IO. The most pronounced discrepancies in the QSL time for bipartite entanglement arise from the off-diagonal NSI parameter $\varepsilon_{\mu\tau}$ across both the NO and IO scenarios. We emphasize that among all the experiments considered, NO$\nu$A and DUNE exhibit a rapid suppression of bipartite entanglement in neutrino oscillations in the standard oscillation scenario with NO at the end of their baseline lengths for the corresponding best-fit value of the CP-violating phase. Our results hint at a possible imprint of new physics in neutrino oscillations.
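To see where the transition probabilities come from, the sketch below computes a standard three-flavor $P(\nu_\mu\to\nu_e)$ in constant-density matter by matrix exponentiation, without the NSI terms that are the subject of the paper. The mixing angles, mass splittings, and the matter-potential conversion factor are approximate, global-fit-style values used purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

def pmns(th12, th13, th23, delta):
    """Standard-parameterization PMNS matrix U = R23 * U13(delta) * R12."""
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    c23, s23 = np.cos(th23), np.sin(th23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    U13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ U13 @ R12

def prob_mu_to_e(L_km, E_GeV, rho_gcc=2.8, Ye=0.5, delta=-np.pi / 2):
    # Approximate oscillation parameters (normal ordering), illustration only.
    th12, th13, th23 = np.radians(33.4), np.radians(8.6), np.radians(49.0)
    dm21, dm31 = 7.4e-5, 2.5e-3                       # eV^2
    U = pmns(th12, th13, th23, delta)
    # Vacuum Hamiltonian in the flavor basis, expressed in km^-1:
    # (m^2 / 2E) * L  ->  2 * 1.267 * m^2[eV^2] * L[km] / E[GeV].
    M2 = np.diag([0.0, dm21, dm31]).astype(complex)
    H = (2 * 1.267 / E_GeV) * (U @ M2 @ U.conj().T)
    # Charged-current matter potential sqrt(2) G_F N_e, roughly 3.8e-4 km^-1
    # per unit Y_e * rho[g/cm^3] (approximate conversion factor).
    H[0, 0] += 3.8e-4 * Ye * rho_gcc
    amp = expm(-1j * H * L_km)
    return abs(amp[0, 1]) ** 2     # <nu_e| exp(-iHL) |nu_mu>, flavor order (e, mu, tau)

# DUNE-like and NOvA-like baselines, purely illustrative numbers.
print("P(mu->e), L=1300 km, E=2.5 GeV:", round(prob_mu_to_e(1300, 2.5), 4))
print("P(mu->e), L=810 km,  E=2.0 GeV:", round(prob_mu_to_e(810, 2.0), 4))
```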
Detecting quantum noise of a solid-state spin ensemble with dispersive measurement
This paper develops theoretical protocols for measuring spin polarization in solid-state spin ensembles using microwave resonator-mediated dispersive readout instead of optical methods. The authors analyze noise sources and propose methods to detect spin squeezing, which could enable quantum-enhanced precision measurements.
Key Contributions
- Derived analytic conditions for achieving fundamental spin-projection noise limited measurements using dispersive readout
- Proposed experimental protocol for directly detecting spin squeezing in solid-state spin ensembles for quantum-enhanced metrology
View Full Abstract
We theoretically explore protocols for measuring the spin polarization of an ensemble of solid-state spins, with precision at or below the standard quantum limit. Such measurements in the solid-state are challenging, as standard approaches based on optical fluorescence are often limited by poor readout fidelity. Indirect microwave resonator-mediated measurements provide an attractive alternative, though a full analysis of relevant sources of measurement noise is lacking. In this work we study dispersive readout of an inhomogeneously broadened spin ensemble via coupling to a driven resonator measured via homodyne detection. We derive generic analytic conditions for when the homodyne measurement can be limited by the fundamental spin-projection noise, as opposed to microwave-drive shot noise or resonator phase noise. By studying fluctuations of the measurement record in detail, we also propose an experimental protocol for directly detecting spin squeezing, i.e. a reduction of the spin ensemble's intrinsic projection noise from entanglement. Our protocol provides a method for benchmarking entangled states for quantum-enhanced metrology.
Distributed Phase-Insensitive Displacement Sensing
This paper develops theoretical foundations for quantum sensing using multiple sensors without requiring a shared phase reference, showing that distributed quantum sensors can still achieve precision enhancements beyond classical limits by leveraging quantum correlations between sensor modes.
Key Contributions
- Derives analytical bounds for precision in phase-insensitive distributed quantum sensing
- Identifies multimode states with definite joint parity that achieve optimal sensing precision
- Analyzes decoherence effects and identifies optimal sensing strategies for different noise channels
View Full Abstract
Distributed quantum sensing leverages quantum correlations among multiple sensors to enhance the precision of parameter estimation beyond classical limits. Most existing approaches target phase estimation and rely on a shared phase reference between the signal and the probe, yet many relevant scenarios deal with regimes where such a reference is absent, making the estimation of force or field amplitudes the main task. We study this phase-insensitive regime for bosonic sensors that undergo identical displacements with common phases randomly varying between experimental runs. We derive analytical bounds on the achievable precision and show that it is determined by first-order normal correlations between modes in the probe state, constrained by their average excitations. These correlations yield a collective sensitivity enhancement over the standard quantum limit, with a gain that grows linearly in the total excitation number, revealing a distributed quantum advantage even without a global phase reference. We identify families of multimode states with definite joint parity that saturate this limit and can be probed efficiently via local parity measurements already demonstrated or emerging in several quantum platforms. We further demonstrate that experimentally relevant decoherence channels favor two distinct sensing strategies: splitting of a single-mode nonclassical state among the modes, which is robust to loss and heating, and separable probes, which are instead resilient to dephasing and phase jitter. Our results are relevant to multimode continuous platforms, including trapped-ion, solid-state mechanical, optomechanical, superconducting, and photonic systems.
Quantum Speedups for Derivative Pricing Beyond Black-Scholes
This paper develops quantum algorithms for pricing financial derivatives beyond the simple Black-Scholes model, extending quantum Monte Carlo methods to more realistic models like Cox-Ingersoll-Ross and Heston stochastic volatility models. The authors demonstrate quadratic speedups over classical methods and introduce new quantum sampling techniques for multi-dimensional stochastic processes used in quantitative finance.
Key Contributions
- Extension of quantum Monte Carlo speedups to practical financial models (CIR and Heston) using fast-forwardability property
- Introduction of quantum Milstein sampler based on novel Lévy area sampling for multi-dimensional stochastic processes
- Improved analysis reducing resource requirements for quantum derivative pricing algorithms
- Theoretical analysis showing barriers to quantum speedup in PDE-based approaches for derivative pricing
View Full Abstract
This paper explores advancements in quantum algorithms for derivative pricing of exotics, a computational pipeline of fundamental importance in quantitative finance. For such cases, the classical Monte Carlo integration procedure provides the state-of-the-art provable, asymptotic performance: polynomial in problem dimension and quadratic in inverse-precision. While quantum algorithms are known to offer quadratic speedups over classical Monte Carlo methods, end-to-end speedups have been proven only in the simplified setting over the Black-Scholes geometric Brownian motion (GBM) model. This paper extends existing frameworks to demonstrate novel quadratic speedups for more practical models, such as the Cox-Ingersoll-Ross (CIR) model and a variant of Heston's stochastic volatility model, utilizing a characteristic of the underlying SDEs which we term fast-forwardability. Additionally, for general models that do not possess the fast-forwardable property, we introduce a quantum Milstein sampler, based on a novel quantum algorithm for sampling Lévy areas, which enables quantum multi-level Monte Carlo to achieve quadratic speedups for multi-dimensional stochastic processes exhibiting certain correlation types. We also present an improved analysis of numerical integration for derivative pricing, leading to substantial reductions in the resource requirements for pricing GBM and CIR models. Furthermore, we investigate the potential for additional reductions using arithmetic-free quantum procedures. Finally, we critique quantum partial differential equation (PDE) solvers as a method for derivative pricing based on amplitude estimation, identifying theoretical barriers that obstruct achieving a quantum speedup through this approach. Our findings significantly advance the understanding of quantum algorithms in derivative pricing, addressing key challenges and open questions in the field.
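Classical path simulation is the baseline the quantum samplers are compared against. Below is a minimal classical Milstein discretization of the CIR process $dX = \kappa(\theta - X)\,dt + \sigma\sqrt{X}\,dW$ together with a crude Monte Carlo price of a European call on $X_T$; the paper's quantum Milstein sampler and Lévy-area construction are not reproduced here, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def cir_milstein_paths(x0, kappa, theta, sigma, T, n_steps, n_paths):
    """Milstein scheme for dX = kappa*(theta - X) dt + sigma*sqrt(X) dW.

    For b(x) = sigma*sqrt(x) the Milstein correction is
    0.5*b*b'*(dW^2 - dt) = 0.25*sigma^2*(dW^2 - dt).
    """
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = (x + kappa * (theta - x) * dt
             + sigma * np.sqrt(np.maximum(x, 0.0)) * dw
             + 0.25 * sigma**2 * (dw**2 - dt))
        x = np.maximum(x, 0.0)   # crude positivity fix near the boundary
    return x

# Arbitrary illustrative parameters; r is a flat discount rate, K a strike on X_T.
x0, kappa, theta, sigma, T = 0.04, 1.5, 0.04, 0.3, 1.0
r, K = 0.01, 0.04
xT = cir_milstein_paths(x0, kappa, theta, sigma, T, n_steps=200, n_paths=200_000)
payoff = np.maximum(xT - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(len(payoff))
print(f"MC price estimate: {price:.6f} +/- {stderr:.6f}")
```

The quadratic-in-inverse-precision cost of this classical estimator (the standard error shrinks as the square root of the path count) is exactly the scaling the quantum amplitude-estimation approach improves upon.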
Thermodynamics of the Heisenberg XXX chain with negative spin
This paper studies the thermodynamic properties of a quantum spin chain model with negative spin values, which is mathematically equivalent to certain quantum field theories. The researchers use advanced theoretical techniques to analyze how this unusual system behaves at different temperatures and identify unique quantum phases.
Key Contributions
- Thermodynamic analysis of negative spin XXX chain using Bethe Ansatz methods
- Identification of quantum phase transition and unconventional low-temperature behavior
- Connection to conformal field theory and Luttinger liquid physics in the continuum limit
View Full Abstract
We study the thermodynamics of the isotropic Heisenberg XXX spin chain with negative spin, focusing on the case $s=-1$. The model is equivalent to the quantum lattice nonlinear Schrödinger (NLS) model and appears as an effective theory in deep inelastic scattering in high-energy quantum chromodynamics. Owing to its integrability, it admits a consistent Bethe Ansatz description and a well-defined thermodynamic limit. Using the thermodynamic Bethe Ansatz, we analyze the ground state, elementary excitations, and finite-temperature properties. In contrast to the conventional positive spin XXX chain, the negative spin model exhibits a distinct vacuum structure and excitation spectrum, leading to modified TBA equations and unconventional low-temperature behavior. Although the integral equations resemble those of the Lieb-Liniger Bose gas, the thermodynamics and scaling properties are qualitatively different and cannot be continuously connected. We derive the free energy, entropy, and specific heat, and identify a quantum phase transition separating different thermodynamic regimes. At zero temperature, the excitation spectrum becomes linear in the continuum limit and can be described by a conformal field theory. The low-temperature regime realizes a Luttinger-liquid like phase with features unique to the negative spin XXX chain.
Quantum Computing for Electronic Circular Dichroism Spectrum Prediction of Chiral Molecules
This paper develops a quantum computing framework to predict electronic circular dichroism (ECD) spectra of chiral drug molecules, which are used to assign their absolute configuration. The method uses variational quantum algorithms on circuits of roughly 20-24 qubits and is benchmarked against classical wavefunction-based reference calculations (CCSD and CASCI).
Key Contributions
- Development of variational quantum framework for ECD spectrum prediction using quantum equation of motion formalism
- Demonstration on 12 clinically relevant chiral drug molecules with 20-24 qubit circuits, achieving near-quantitative agreement with classical reference calculations
View Full Abstract
Electronic circular dichroism (ECD) spectroscopy captures the chiroptical response of molecules, enabling absolute configuration assignment that is vital for enantioselective synthesis and drug design. The practical use of ECD spectra in predictive modeling remains restricted, as existing approaches offer limited confidence for chiral discrimination. By contrast, theoretical ECD calculations demand substantial computational effort rooted in electronic structure theory, which constrains their scalability to larger chemically diverse molecules. These limitations underscore the need for computational approaches that retain first-principles physical rigor while enabling efficient and scalable prediction. Motivated by recent advances in quantum algorithms for chemistry, we introduce a variational quantum framework combined with the quantum equation of motion formalism to compute molecular properties and predict ECD spectra, implemented within a multi-GPU- or QPU-accelerated hybrid quantum-classical workflow. We demonstrate its efficient applicability on 12 clinically relevant chiral drug molecules, accessing expanded active spaces. The proposed framework is assessed by comparison with established classical wavefunction-based methods, employing Coupled Cluster Singles and Doubles (CCSD) for ground-state energy benchmarks and Complete Active Space Configuration Interaction (CASCI) as the reference method for excited state energies and chiroptical properties within the same active orbital space. Notably, the quantum computed ECD spectra, obtained from chemically relevant active spaces mapped onto quantum circuits of approximately 20 to 24 qubits, exhibit near quantitative agreement with classical reference calculations, accurately reproducing spectral line shapes, Cotton effect signs, and relative peak intensities.
Universal Characterization of Quantum Vacuum Measurement Engines
This paper develops a theoretical framework for quantum engines that are powered by measurements rather than heat, introducing a mathematical tool called the quantum vacuum bending function (QVBF) that characterizes how interactions lower ground-state energy. The authors show that this single function determines all thermodynamic properties of these measurement-powered engines, regardless of the specific physical implementation.
Key Contributions
- Introduction of the quantum vacuum bending function (QVBF) as a universal characterization tool for measurement-powered quantum engines
- Demonstration that all thermodynamic observables are determined solely by the QVBF shape, independent of microscopic details
- Derivation of a generalized quantum fluctuation relation connecting quantum Fisher information to ground-state energy landscapes
View Full Abstract
Quantum measurements can inject energy into quantum systems, enabling engines whose operation is powered entirely by measurements. We develop a general theory of quantum vacuum measurement engines by introducing the quantum vacuum bending function (QVBF), a quantity that characterizes the lowering of the ground-state energy due to interactions. We show that all thermodynamic observables, including work and efficiency, are governed solely by the shape of the ground-state energy landscape encoded in the QVBF, regardless of microscopic details. We further demonstrate that work fluctuations are defined by the curvature of QVBF modulated by a model-dependent quantity, and are constrained by a generalized quantum fluctuation relation that involves the interplay between quantum Fisher information and the ground-state energy landscape. Exactly solvable models and numerical simulations of single and many-body systems confirm the theory and illustrate how the QVBF alone determines the performance of quantum vacuum measurement engines.
Anti-Critical Quantum Metrology
This paper introduces anti-critical metrology, a quantum sensing technique that achieves enhanced measurement precision while the energy gap increases, avoiding the critical slowing down that plagues conventional critical quantum metrology near phase transitions.
Key Contributions
- Introduction of anti-critical metrology scheme that avoids critical slowing down while maintaining enhanced precision
- Demonstration that quantum Fisher information growth is not necessarily required for improved metrological performance when accounting for evolution time
View Full Abstract
Critical quantum metrology exploits the dramatic growth of the quantum Fisher information near quantum phase transitions to enhance the precision of parameter estimation. Traditionally, this enhancement is associated with a closing energy gap, which causes the characteristic timescales for adiabatic preparation or relaxation to diverge with increasing system size. Consequently, the apparent growth of the quantum Fisher information largely reflects the increasing evolution time induced by critical slowing down, rather than a genuine gain in metrological performance, thereby severely limiting the practical usefulness of such protocols. Here we show that the relationship between energy-gap variations, quantum Fisher information, and achievable precision is far more subtle in interacting quantum systems: enhanced sensitivity does not require a vanishing gap, and, perhaps more surprisingly, a decreasing quantum Fisher information does not necessarily imply reduced precision once the time is properly taken into account. Building on this insight, we introduce an anti-critical metrology scheme that achieves enhanced precision while the energy gap increases. We illustrate this mechanism using the quantum Rabi model, thereby identifying a route to metrological advantage that avoids the critical slowing down associated with conventional criticality.
Optimal Effective Hamiltonian for Quantum Computing and Simulation
This paper develops a new mathematical framework called the Least Action Unitary Transformation (LAUT) for creating more accurate effective Hamiltonians in quantum systems. The method addresses fundamental problems with existing approaches by minimizing geometric action and preserving symmetries, with experimental validation on superconducting quantum processors showing improved accuracy for quantum gates.
Key Contributions
- Established Least Action Unitary Transformation (LAUT) as a fundamental principle for constructing optimal effective Hamiltonians
- Demonstrated experimental validation on superconducting quantum processors showing quantitative reproduction of interaction rates in driven entangling gates
- Identified Bloch-Brandow formalism as the natural perturbative counterpart that preserves symmetries to high order
View Full Abstract
The effective Hamiltonian serves as the conceptual pivot of quantum engineering, transforming physical complexity into programmable logic; yet, its construction remains compromised by the mathematical non-uniqueness of block diagonalization, which introduces an intrinsic "gauge freedom" that standard methods fail to resolve. We address this by establishing the Least Action Unitary Transformation (LAUT) as the fundamental principle for effective models. By minimizing geometric action, LAUT guarantees dynamical fidelity and inherently enforces the preservation of symmetries, properties frequently violated by conventional Schrieffer-Wolff and Givens rotation techniques. We identify the Bloch-Brandow formalism as the natural perturbative counterpart to this principle, yielding analytic expansions that preserve symmetries to high order. We validate this framework against experimental data from superconducting quantum processors, demonstrating that LAUT quantitatively reproduces interaction rates in driven entangling gates where standard approximations diverge. Furthermore, in tunable coupler architectures, we demonstrate that the LAUT approach captures essential non-rotating-wave contributions that standard models neglect; this inclusion is critical for quantitatively reproducing interaction rates and revealing physical multi-body interactions such as $XZX+YZY$, which are verified to be physical rather than gauge artifacts. By reconciling variational optimality with analytical tractability, this work provides a systematic, experimentally validated route for high-precision system learning and Hamiltonian engineering.
Lee-Yang tensors and Hamiltonian complexity
This paper studies Lee-Yang tensors in quantum systems, which are mathematical objects corresponding to polynomials that don't vanish in certain regions. The authors show that quantum states and operators with Lee-Yang radius greater than 1 have special properties, including efficient preparation protocols and unique ground states, and propose applications to quantum optimization algorithms.
Key Contributions
- Established that quantum states with Lee-Yang radius r > 1 can be prepared by quasipolynomial-sized circuits
- Proved that Hermitian operators with Lee-Yang radius r > 1 have unique principal eigenvectors
- Proposed efficient quantum adiabatic algorithm for quantum Max-Cut problem on bipartite graphs based on Lee-Yang properties
View Full Abstract
A complex tensor with $n$ binary indices can be identified with a multilinear polynomial in $n$ complex variables. We say it is a Lee-Yang tensor with radius $r$ if the polynomial is nonzero whenever all variables lie in the open disk of radius $r$. In this work we study quantum states and observables which are Lee-Yang tensors when expressed in the computational basis. We first review their basic properties, including closure under tensor contraction and certain quantum operations. We show that quantum states with Lee-Yang radius $r > 1$ can be prepared by quasipolynomial-sized circuits. We also show that every Hermitian operator with Lee-Yang radius $r > 1$ has a unique principal eigenvector. These results suggest that $r = 1$ is a key threshold for quantum states and observables. Finally, we consider a family of two-local Hamiltonians where every interaction term energetically favors a deformed EPR state $|00\rangle + s|11\rangle$ for some $0 \leq s \leq 1$. We numerically investigate this model and find that on all graphs considered the Lee-Yang radius of the ground state is at least $r = 1/\sqrt{s}$ while the spectral gap between the two smallest eigenvalues is at least $1-s^2$. We conjecture that these lower bounds hold more generally; in particular, this would provide an efficient quantum adiabatic algorithm for the quantum Max-Cut problem on uniformly weighted bipartite graphs.
Fluctuations of the inverted magnetic state and how to sense them
This paper studies the fluctuations in magnetically inverted states created by injecting spin current into ferromagnets, where the magnetic moment aligns opposite to an applied field. The authors theoretically analyze these fluctuations and propose using qubits to experimentally probe the unique quantum properties of these unstable but dynamically stabilized states.
Key Contributions
- Theoretical characterization of fluctuations in inverted magnetic states with antimagnons
- Proposal to use qubits as probes for detecting quantum fluctuations in these non-equilibrium magnetic systems
- Demonstration that inverted states exhibit enhanced fluctuations compared to equilibrium states in the quantum regime
View Full Abstract
Magnons are the low-energy excitations of magnetically ordered materials. While the magnetic moment of a ferromagnet aligns with an applied magnetic field, it has been experimentally shown that the magnetic order can be inverted by injecting spin current into the magnet. This results in an energetically unstable but dynamically stabilized state where the magnetic moment aligns antiparallel to an applied magnetic field, called the inverted magnetic state. The excitations on top of such a state have negative energy and are called antimagnons. The inverted state is subject to fluctuations, in particular, as shot noise in the spin current, which are different from fluctuations in equilibrium, especially at low temperatures. Here, we theoretically study the fluctuations of the inverted magnetic state and their signatures in experimental setups. We find that the fluctuations from the injection of spin current play a large role. In the quantum regime, the inverted magnetic state exhibits larger fluctuations compared to the equilibrium position, which can be probed using a qubit. Our results advance the understanding of the fundamental properties of antimagnons and their experimental controllability, and they pave the way for applications in spintronics and magnonics, such as spin wave amplification and entanglement.
Microscopic derivation of a completely positive master equation for the description of Open Quantum Brownian Motion of a particle in a potential
This paper develops a theoretical framework for Open Quantum Brownian Motion by deriving master equations that describe how a quantum particle behaves when confined in a potential and coupled to a thermal environment. The work provides both the microscopic derivation and numerical solutions showing how initially non-Gaussian quantum states evolve over time.
Key Contributions
- Microscopic derivation of completely positive master equation for Open Quantum Brownian Motion in harmonic potential
- Development of hybrid quantum-classical master equation through adiabatic elimination
- Numerical analysis showing non-Gaussian intrinsic dynamics despite Gaussian limiting behavior
View Full Abstract
Open Quantum Brownian Motion (OQBM) was introduced as a scaling limit of discrete-time open quantum walks. This limit defines a new class of quantum Brownian motion, which incorporates both the external and internal degrees of freedom of the Brownian particle. We consider a weakly driven Brownian particle confined in a harmonic potential and dissipatively coupled to a thermal bath. Applying the rotating wave approximation (RWA) to the system-bath interaction Hamiltonian, we derive a completely positive Born-Markov master equation for the reduced dynamics. We express the resulting master equation in the coordinate representation and, utilizing the adiabatic elimination of fast variables, derive a completely positive hybrid quantum-classical master equation that defines OQBM. We illustrate the resulting dynamics using examples of initial Gaussian and non-Gaussian distributions of the OQBM walker. Both examples reveal the emergence of Gaussian distributions in the limiting behavior of the OQBM dynamics, which closely matches that of the standard OQBM. With the help of the obtained OQBM master equation, we derive the equations for the $n$-th moments and the cumulants of the position distribution of the open Brownian walker. We subsequently solve these equations numerically for Gaussian initial distributions across various parameter regimes. Notably, we find that the third-order cumulant is nonzero, indicating that the dynamics' intrinsic generator is non-Gaussian.
QRC-Lab: An Educational Toolbox for Quantum Reservoir Computing
This paper introduces QRC-Lab, an open-source Python framework for Quantum Reservoir Computing that helps researchers and students explore quantum machine learning techniques for processing temporal data. The toolbox provides educational tools and case studies to bridge theoretical quantum dynamics with practical machine learning applications.
Key Contributions
- Development of QRC-Lab open-source Python framework for quantum reservoir computing
- Educational case studies including short-term memory reconstruction, temporal parity, and NARMA10 forecasting
- Formalization of reservoir mapping and comparison of physical vs gate-based approaches
- Generalization-gap analysis for quantum feature maps capacity control
View Full Abstract
Quantum Reservoir Computing (QRC) has emerged as a strong paradigm for Noisy Intermediate-Scale Quantum (NISQ) machine learning, enabling the processing of temporal data with minimal training overhead by exploiting the high-dimensional dynamics of quantum states. This paper introduces QRC-Lab, an open-source, modular Python framework designed to bridge the gap between theoretical quantum dynamics and applied machine learning workflows. We provide a rigorous definition of QRC, contrast physical and gate-based approaches, and formalize the reservoir mapping used in the toolbox. QRC-Lab instantiates a configurable gate-based laboratory for studying input encoding, reservoir connectivity, and measurement strategies, and validates these concepts through three educational case studies: short-term memory reconstruction, temporal parity (XOR), and NARMA10 forecasting as a deliberate stress test. In addition, we include a learning-theory motivated generalization-gap scan to build intuition about capacity control in quantum feature maps. The full source code, experiment scripts, and reproducibility assets are publicly available at: https://doi.org/10.5281/zenodo.18469026.
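The sketch below is not QRC-Lab's API (that is documented in the repository); it is a generic gate-based-reservoir illustration of the short-term memory task: a random input sequence drives single-qubit rotations, a fixed random "reservoir" unitary mixes the qubits, Z expectation values are collected as features, and a linear ridge readout is trained to recall the input from a few steps earlier. All sizes and hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits, dim = 4, 2 ** 4

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def kron_all(mats):
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

# Fixed random reservoir unitary (QR decomposition of a random complex matrix).
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
W, _ = np.linalg.qr(A)

I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
z_ops = [kron_all([Z if i == j else I2 for i in range(n_qubits)]) for j in range(n_qubits)]

def run_reservoir(inputs):
    """Encode each input into RY rotations on all qubits, apply W, record <Z_j>."""
    state = np.zeros(dim, dtype=complex); state[0] = 1.0
    feats = []
    for val in inputs:
        enc = kron_all([ry(np.pi * val) for _ in range(n_qubits)])
        state = W @ (enc @ state)
        feats.append([np.real(state.conj() @ z @ state) for z in z_ops])
    return np.array(feats)

# Short-term-memory task: predict the input from `delay` steps ago.
T, delay, ridge = 600, 2, 1e-6
u = rng.uniform(0, 1, size=T)
X = run_reservoir(u)[delay:]
y = u[:-delay]
X = np.hstack([X, np.ones((len(X), 1))])          # bias column
w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
pred = X @ w
print("in-sample correlation with delayed input:", np.corrcoef(pred, y)[0, 1])
```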
Evaluating Quantum Wire Cutting for QAOA: Performance Benchmarks in Ideal and Noisy Environments
This paper evaluates quantum circuit cutting techniques that allow large quantum algorithms to run on smaller quantum computers by decomposing circuits into sub-circuits. The researchers test different cutting strategies on QAOA algorithms and find that while Randomized Clifford measurements work best in ideal conditions, circuit cutting struggles to provide accurate results in noisy quantum environments.
Key Contributions
- Benchmarking comparison showing Randomized Clifford measurements outperform Pauli and random unitary measurements for circuit cutting
- Demonstration that quantum circuit cutting performance degrades significantly in noisy environments, especially with increasing number of sub-circuits
View Full Abstract
Current quantum computers suffer from a limited number of qubits and high error rates, limiting practical applicability. Different techniques exist to mitigate these effects and run larger algorithms. In this work, we analyze one of these techniques called quantum circuit cutting. With circuit cutting, a quantum circuit is decomposed into smaller sub-circuits, each of which can be run on smaller quantum hardware. We compare the performance of quantum circuit cutting with different cutting strategies, and then apply circuit cutting to a QAOA algorithm. Using simulations, we first show that Randomized Clifford measurements outperform both Pauli and random unitary measurements. Second, we show that circuit cutting has trouble providing correct answers in noisy settings, especially as the number of circuits increases.
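At the heart of wire cutting is a decomposition of the identity channel on the cut wire into measure-and-prepare terms, e.g. via the Pauli expansion $\rho = \tfrac{1}{2}\sum_{P\in\{I,X,Y,Z\}}\mathrm{Tr}(P\rho)\,P$: the upstream sub-circuit estimates the Pauli expectation values, and the downstream sub-circuit is run on the corresponding eigenstate preparations. The sketch below only verifies this single-qubit identity numerically; the Pauli, random-unitary, and Randomized Clifford measurement strategies compared in the paper correspond to different ways of realizing such decompositions.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

def random_density_matrix(rng):
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(3)
rho = random_density_matrix(rng)

# Wire-cut identity: rho = 1/2 * sum_P Tr(P rho) P.  In a cut circuit,
# Tr(P rho) is estimated by the upstream fragment and P is re-expanded into
# eigenstate preparations fed to the downstream fragment.
reconstructed = 0.5 * sum(np.trace(P @ rho) * P for P in paulis)
print("max reconstruction error:", np.abs(reconstructed - rho).max())
```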
Quantum Circuit Generation via test-time learning with large language models
This paper uses large language models to generate quantum circuits through an iterative optimization process, where the LLM proposes circuit modifications and a quantum simulator evaluates the resulting states using entanglement measures. The authors develop a test-time learning approach that incorporates feedback and memory to improve circuit synthesis for creating highly entangled quantum states.
Key Contributions
- Novel application of large language models for quantum circuit synthesis with closed-loop optimization
- Development of test-time learning recipe with memory trace, score-difference feedback, and restart-from-best sampling for circuit optimization
- Demonstration of automated generation of highly entangled quantum states on 20-25 qubit systems using Meyer-Wallach entanglement measure
View Full Abstract
Large language models (LLMs) can generate structured artifacts, but using them as dependable optimizers for scientific design requires a mechanism for iterative improvement under black-box evaluation. Here, we cast quantum circuit synthesis as a closed-loop, test-time optimization problem: an LLM proposes edits to a fixed-length gate list, and an external simulator evaluates the resulting state with the Meyer-Wallach (MW) global entanglement measure. We introduce a lightweight test-time learning recipe that reuses prior high-performing candidates as an explicit memory trace, augments prompts with score-difference feedback, and applies restart-from-best sampling to escape potential plateaus. Across fixed 20-qubit settings, even without feedback and restart-from-best sampling, the loop improves random initial circuits over a range of gate budgets. To further improve performance and success rate, we use the full learning strategy. For 25 qubits, it mitigates a pronounced performance plateau that appears when naive querying is used. Beyond raw scores, we analyze the structure of synthesized states and find that high MW solutions can correspond to stabilizer or graph-state-like constructions, but full connectivity is not guaranteed due to the metric property and prompt design. These results illustrate both the promise and the pitfalls of memory evaluator-guided LLM optimization for circuit synthesis, highlighting the critical role of prior human-derived theoretical results in optimally designing a custom tool in support of research.
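The Meyer-Wallach measure used as the optimization target is straightforward to evaluate from a simulated state vector: $Q = 2\left(1 - \tfrac{1}{n}\sum_k \mathrm{Tr}\,\rho_k^2\right)$, where $\rho_k$ is the reduced state of qubit $k$. A minimal evaluator, independent of any particular LLM loop, is sketched below.

```python
import numpy as np

def meyer_wallach(state):
    """Q = 2 * (1 - mean_k Tr[rho_k^2]) for an n-qubit pure state vector."""
    state = np.asarray(state, dtype=complex)
    n = int(round(np.log2(state.size)))
    state = state / np.linalg.norm(state)
    psi = state.reshape([2] * n)
    purities = []
    for k in range(n):
        # Reduced density matrix of qubit k: move its axis to the front,
        # flatten the rest, then contract.
        m = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho_k = m @ m.conj().T
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    return 2.0 * (1.0 - np.mean(purities))

# Product state -> Q = 0; n-qubit GHZ state -> Q = 1.
n = 4
product = np.zeros(2 ** n); product[0] = 1.0
ghz = np.zeros(2 ** n); ghz[0] = ghz[-1] = 1.0 / np.sqrt(2)
print(meyer_wallach(product), meyer_wallach(ghz))
```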
Stationary entanglement of a levitated oscillator with an optical field
This paper demonstrates quantum entanglement between a levitated nanosphere and optical light at room temperature, showing that quantum correlations can be distributed beyond the interaction region through coherent scattering in an optical cavity.
Key Contributions
- Demonstration of stationary entanglement between a levitated mechanical oscillator and a propagating optical field
- Room temperature operation with robust performance across broad detuning range
- Experimental violation of separability bounds proving nonclassical correlations
View Full Abstract
We report the generation of quantum entanglement between the center-of-mass motion of a levitated nanosphere, coupled by coherent scattering to an optical cavity mode, and the electromagnetic field. Using heterodyne detection, we reconstruct the full set of optical-mechanical correlations and observe a violation of separability bounds between the mechanical degrees of freedom and the propagating optical mode. Thus, we demonstrate the ability to distribute nonclassical correlations beyond the interaction region. Our results are obtained at room temperature and are robust over a broad range of detunings set by the cavity linewidth. These findings establish levitated optomechanical systems as a promising platform for macroscopic quantum optics and for future tests of fundamental physics.
Enhancing Quantum Diffusion Models for Complex Image Generation
This paper develops a hybrid quantum-classical neural network architecture that combines quantum computing with classical U-Net models to generate images, tested on MNIST digit generation. The approach uses quantum circuits to process data in a compressed latent space while classical components handle the main image generation tasks.
Key Contributions
- Hybrid Quantum-Classical U-Net architecture for image generation
- Adaptive Non-Local Observables (ANO) for extracting quantum features
- Demonstration of quantum-enhanced generative models on full MNIST dataset
View Full Abstract
Quantum generative models offer a novel approach to exploring high-dimensional Hilbert spaces but face significant challenges in scalability and expressibility when applied to multi-modal distributions. In this study, we explore a Hybrid Quantum-Classical U-Net architecture integrated with Adaptive Non-Local Observables (ANO) as a potential solution to these hurdles. By compressing classical data into a dense quantum latent space and utilizing trainable observables, our model aims to extract non-local features that complement classical processing. We also investigate the role of Skip Connections in preserving semantic information during the reverse diffusion process. Experimental results on the full MNIST dataset (digits 0-9) demonstrate that the proposed architecture is capable of generating structurally coherent and recognizable images for all digit classes. While hardware constraints still impose limitations on resolution, our findings suggest that hybrid architectures with adaptive measurements provide a feasible pathway for mitigating mode collapse and enhancing generative capabilities in the NISQ era.
Zak phase and bulk-boundary correspondence in a generalized Dirac-Kronig-Penney model
This paper studies a one-dimensional quantum model based on the Dirac equation to understand topological phases of matter. The researchers analyze how different symmetry classes affect the topological properties and find that the Zak phase, a standard tool for characterizing topology, behaves unexpectedly in certain cases.
Key Contributions
- Analytical characterization of spectral properties in a generalized Dirac-Kronig-Penney model across five symmetry classes
- Discovery that Zak phase is non-quantized in symmetry class D, challenging its role as a topological marker
- Analysis of bulk-boundary correspondence showing sensitivity to truncation parameters in classes AIII and BDI
View Full Abstract
We investigate the topological properties of a generalized Dirac-Kronig-Penney model, a continuum one-dimensional model for a relativistic quantum chain. By tuning the coupling parameters this model can accommodate five Altland-Zirnbauer-Cartan symmetry classes, three of which (AIII, BDI and D) support non-trivial topological phases in dimension one. We characterize analytically the spectral properties of the Hamiltonian in terms of a spectral function, and numerically compute the Zak phase to probe the bulk topological content of the insulating phases. Our findings reveal that, while the Zak phase is quantized in classes AIII and BDI, it exhibits non-quantized values in class D, challenging its traditional role as a topological marker in continuum settings. We also discuss the bulk-boundary correspondence for a truncated version of the chain, analyzing how the emergence of edge states depends on both the truncation position and the boundary conditions. In classes AIII and BDI, we find that the Zak phase effectively detects edge states as a relative boundary topological index, although the correspondence is highly sensitive to the parameters characterizing the truncation.
Decoherence-protected entangling gates in a silicon carbide quantum node
This paper demonstrates a functional quantum node using silicon carbide where electron spins process information and nuclear spins store it as memory. The researchers developed special pulse sequences that protect quantum operations from noise, achieving 90% fidelity in creating entangled states between processor and memory qubits.
Key Contributions
- Demonstration of decoherence-protected universal gate operations between electron spin processors and nuclear spin memory qubits in silicon carbide
- Achievement of 90% fidelity entangled state preparation exceeding fault-tolerance thresholds for quantum network architectures
- Design of pulse sequences combining dynamical decoupling with hyperfine interactions for noise-resistant quantum operations
View Full Abstract
Solid-state color centers are promising candidates for nodes in quantum network architectures. However, realizing scalable and fully functional quantum nodes, comprising both processor and memory qubits with high-fidelity universal gate operations, remains a central challenge in this field. Here, we demonstrate a fully functional quantum node in silicon carbide, where electron spins act as quantum processors and nuclear spins serve as quantum memory. Specifically, we design a pulse sequence that combines dynamical decoupling with hyperfine interactions to realize decoherence-protected universal gate operations between the processor and memory qubits. Leveraging this gate, we deterministically prepare entangled states within the quantum node, achieving a fidelity of 90%, which exceeds the fault-tolerance threshold of certain quantum network architectures. These results open a pathway toward scalable and fully functional quantum nodes based on silicon carbide.
Comprehensive Numerical Studies of Barren Plateau and Overparametrization in Variational Quantum Algorithm
This paper conducts comprehensive numerical studies of two key challenges in variational quantum algorithms: barren plateaus (vanishing gradients) and overparametrization effects. The researchers use quantum Ising models to quantitatively evaluate how these phenomena interact and impact optimization performance in variational quantum circuits.
Key Contributions
- Comprehensive numerical analysis of barren plateau and overparametrization effects in the same VQA setting
- Quantitative evaluation of the interplay between barren plateaus and overparametrization using concrete quantum Ising model implementations
- Framework for guiding VQA algorithm and ansatz design with theoretical support for parameter optimization
View Full Abstract
The variational quantum algorithm (VQA) with a parametrized quantum circuit is widely applicable to near-term quantum computing, but fundamental issues that limit its optimization performance have been reported in the literature. For example, VQA optimization often suffers from vanishing gradients, known as the barren plateau (BP), and from the presence of local minima in the landscape of the cost function. Numerical studies have shown that trapping in local minima is significantly reduced when the circuit is overparametrized (OP), i.e., when the number of parameters exceeds a certain threshold. Theoretical understanding of the BP and OP phenomena has advanced over the past years; however, comprehensive studies of both effects in the same setting are not fully covered in the literature. In this paper, we perform a comprehensive numerical study in VQA, quantitatively evaluating the impacts of BP and OP and their interplay on the optimization of a variational quantum circuit, using concrete implementations of the one-dimensional transverse- and longitudinal-field quantum Ising model. The numerical results are compared with theoretical diagnostics of the BP and OP phenomena. The framework presented in this paper provides a guiding principle for designing VQA algorithms and ansatzes, with theoretical support for the behavior of parameter optimization in practical settings.
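The barren-plateau diagnostic referenced here is usually the variance of cost-function gradients over random parameter draws, which shrinks rapidly with qubit number for sufficiently deep, sufficiently random circuits. As a rough orientation (not the paper's code or its Ising-model setup), the sketch below estimates that variance for a generic hardware-efficient RY-plus-CZ ansatz with a plain NumPy statevector simulator and the parameter-shift rule; the ansatz, cost observable, depth, and sample counts are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): estimate Var[dC/dtheta] over random
# parameters for a hardware-efficient RY + CZ ansatz, a common barren-plateau probe.
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2 (diagonal, so only sign flips)."""
    idx = np.arange(2 ** n)
    b1 = (idx >> (n - 1 - q1)) & 1
    b2 = (idx >> (n - 1 - q2)) & 1
    state = state.copy()
    state[(b1 & b2) == 1] *= -1
    return state

def cost(params, n, layers):
    """C = <Z on qubit 0> after `layers` layers of RY rotations + nearest-neighbour CZs."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    params = params.reshape(layers, n)
    for layer in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(params[layer, q]), q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    z0 = 1 - 2 * ((np.arange(2 ** n) >> (n - 1)) & 1)   # Z eigenvalues of qubit 0
    return float(np.sum(z0 * np.abs(state) ** 2))

for n in range(2, 9):
    layers = 2 * n                      # depth growing with width, as in BP studies
    grads = []
    for _ in range(50):                 # random parameter draws
        p = rng.uniform(0, 2 * np.pi, size=n * layers)
        shift = np.zeros_like(p)
        shift[0] = np.pi / 2
        # parameter-shift gradient with respect to the first parameter
        grads.append(0.5 * (cost(p + shift, n, layers) - cost(p - shift, n, layers)))
    print(f"n = {n}: Var[dC/dtheta_0] = {np.var(grads):.3e}")
```

On a toy ansatz like this, the printed variance should typically fall by orders of magnitude as n grows, which is the qualitative signature the BP diagnostics in the paper quantify more carefully.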
Thermodynamic state variables from a minimal set of quantum constituents
This paper demonstrates how fundamental thermodynamic quantities like pressure, temperature, and entropy can be derived from the quantum mechanical properties of just one or two particles confined to two-dimensional spaces. The work connects quantum chaos and statistical mechanics to provide a microscopic foundation for thermodynamic laws.
Key Contributions
- Derivation of macroscopic thermodynamic variables from minimal quantum systems
- Microscopic foundation for first and second laws of thermodynamics
- Demonstration of eigenstate thermalization hypothesis in simple quantum systems
View Full Abstract
We show how the macroscopic state variables pressure, entropy and temperature of equilibrium thermodynamics can be consistently derived from the (quantum) chaotic spectral structure of one or two particles in two-dimensional domains. This provides a definition of work and heat from first principles, a microscopic underpinning of the first and second law of thermodynamics, and a transparent illustration of the "eigenstate thermalization hypothesis".
Liouvillian Gap in Dissipative Haar-Doped Clifford Circuits
This paper studies quantum chaos in dissipative systems by examining how adding small amounts of random quantum gates to otherwise structured Clifford circuits affects the system's relaxation properties. The researchers find two distinct regimes depending on the density of random gates, with different scaling behaviors for the Liouvillian gap that characterizes intrinsic relaxation.
Key Contributions
- Identified two distinct scaling regimes for the Liouvillian gap in Haar-doped Clifford circuits based on doping density
- Provided analytical treatment showing gap scaling depends only on spatial doping structure, independent of temporal resampling of random gates
- Established rigorous bounds on the gap in strongly dissipative regimes and supported extension to weak dissipation with numerical evidence
View Full Abstract
Quantum chaos is commonly assessed through probe-dependent signatures such as spectral statistics, OTOCs, and entanglement growth, which need not coincide. Recently, a dissipative diagnostic of chaos has been proposed, in which an infinitesimal coupling to a bath yields a finite Liouvillian gap in chaotic systems, marking the onset of intrinsic relaxation. This raises a conceptual question: what is the minimal departure from Clifford dynamics needed for this intrinsically relaxing behavior to emerge? In this work, we investigate the dynamics under the Floquet two-qubit Clifford circuit interleaved with a finite density of Haar-random single-site gates, followed by a depolarizing channel with strength $γ$. For Floquet Clifford circuits built from an iSWAP-class two-qubit gate, our analysis identifies two distinct regimes for the Liouvillian gap in the thermodynamic limit, exemplified by the undoped and fully doped extreme cases. In both regimes, the dissipative diagnostic signals chaotic behavior, differing only in how the gap scales with system size. In the undoped circuit, the gap scales as $Δ\sim γN$, whereas in the fully doped circuit it remains finite as $N\to\infty$. We find that the doping density $p_h$ governs the crossover: as $p_h\to 0$, any spatial structure remains undoped-like, whereas for finite $p_h$ certain structures can enter a finite-gap regime. These results are analytically established in the strongly dissipative regime $γ\gg 1$ by deriving lower bounds on the gap as a function of $p_h$ and explicit finite-gap constructions, and their extension toward $γ\to 0$ is supported by numerics. Importantly, our analytic treatment depends only on the spatial doping structure, so the same gap scaling persists even when the Haar rotations are independently resampled each Floquet period.
Tuning interactions between static-field-shielded polar molecules with microwaves
This paper proposes a method to control interactions between ultracold polar molecules that are protected from destructive collisions using static electric fields, by adding microwave radiation to tune the strength of molecular interactions while maintaining collision protection.
Key Contributions
- Development of a general method to tune interactions in static-field-shielded polar molecules using microwave fields
- Demonstration through coupled-channel scattering calculations that both s-wave scattering length and dipole length can be widely tuned while suppressing lossy collisions
View Full Abstract
The ability to tune interparticle interactions is one of the main advantages of using ultracold quantum gases for quantum simulation of many-body physics. Current experiments with ultracold polar molecules employ shielding with microwave or static electric fields to prevent destructive collisional losses. The interaction potential of microwave-shielded molecules can be tuned by using microwaves of two different polarisations, while for static-field-shielded molecules the tunability of interactions is more limited and depends on the particular species. In this work, we propose a general method to tune the interactions between static-field-shielded molecules by applying a microwave field. We carry out coupled-channel scattering calculations in a field-dressed basis set to determine loss rate coefficients and scattering lengths. We find that both the s-wave scattering length and the dipole length can be widely tuned by changing the parameters of the microwave field, while maintaining strong suppression of lossy collisions.
A Tunable, Modeless, and Hybridization-free Cross-Kerr Coupler for Miniaturized Superconducting Qubits
This paper proposes a new coupling method for superconducting qubits using SQUID (superconducting quantum interference device) couplers that provide tunable interactions without mode hybridization, enabling faster quantum gates and more compact quantum processor designs. The approach uses cross-Kerr interactions controlled by external magnetic flux to implement high-fidelity controlled-Z gates while maintaining qubit coherence and allowing for denser qubit packing.
Key Contributions
- Novel SQUID-based coupler architecture that eliminates mode hybridization issues in tunable qubit interactions
- Demonstration of fast, high-fidelity controlled-Z gates using cross-Kerr interactions
- Scalable tiling strategy for miniaturized superconducting quantum processors with improved qubit density
View Full Abstract
Superconducting quantum circuits typically use capacitive charge-based linear coupling schemes to control interactions between elements such as qubits. While simple and effective, this coupling scheme makes it difficult to satisfy competing circuit design requirements such as maintaining large qubit anharmonicity and coherence along with a high degree of qubit connectivity and packing density. Moreover, tunable interactions using linear coupling elements produce dynamical variations in mode hybridization, which can induce non-adiabatic transitions, resulting in leakage errors and limiting gate speeds. In this work we attempt to address these challenges by proposing a junction-based coupling architecture based on SQUID (superconducting quantum interference device) couplers with relatively small Josephson energies. SQUID couplers provide intrinsic cross-Kerr interactions that can be controlled by external fluxes and that do not rely on mode hybridization. The small Josephson energies of the coupler maintain the interaction at a perturbative scale, which limits undesired higher-order mixing between coupled elements while achieving a sufficiently strong cross-Kerr interaction originating from diagonal coupling elements. Based on these properties, we show that a SQUID coupler can be used to implement a fast, adiabatic, and high-fidelity controlled-Z gate without introducing extra modes, and the operation is robust against junction asymmetry for high-frequency qubits. Although unconventional crosstalk may arise due to junction asymmetries and parasitic hybridization with spectator qubits, we show that these effects are sufficiently small for realistic circuit parameters. As an example of the utility of such junction-based coupling schemes, we present a scalable tiling strategy for a miniaturized superconducting quantum processor based on merged-element transmon qubits.
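For intuition on why a cross-Kerr term gives a controlled-Z without hybridization: within the computational subspace the interaction is diagonal, so the gate is just a conditional phase accumulated by |11⟩. The toy check below (a generic two-level illustration with an assumed coupling strength, not the paper's SQUID circuit model) evolves H = χ|11⟩⟨11| for t = π/χ and compares the result with CZ.

```python
# Toy check that a diagonal cross-Kerr term implements a controlled-Z gate.
# Two-level illustration of the general mechanism, not the paper's SQUID circuit model.
import numpy as np
from scipy.linalg import expm

chi = 2 * np.pi * 5e6                 # assumed cross-Kerr strength (rad/s)
# Restricted to the computational subspace, H = chi * n1 * n2 only shifts |11>.
H = chi * np.diag([0.0, 0.0, 0.0, 1.0])

t_gate = np.pi / chi                  # wait until |11> has accumulated a pi phase
U = expm(-1j * H * t_gate)

CZ = np.diag([1, 1, 1, -1])
print("max deviation from CZ:", np.max(np.abs(U - CZ)))
```

Because the interaction is diagonal, no population is exchanged during the gate, which is the mode-hybridization-free property the coupler is designed to exploit.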
Surpassing the currently achievable distance of quantum key distribution based on sending-or-not-sending approach
This paper proposes a new quantum key distribution protocol called SNS-PM-QKD that combines sending-or-not-sending principles with phase matching to achieve longer transmission distances than existing QKD protocols. The researchers demonstrate that their approach can surpass the current distance record of 1002 km by improving tolerance to phase mismatches in the quantum communication system.
Key Contributions
- Development of SNS-PM-QKD protocol that improves phase mismatch tolerance
- Security analysis proving the protocol's viability under collective attacks
- Demonstration of superior transmission distances compared to existing QKD protocols including experimental SNS-TF-QKD
View Full Abstract
Protocols based on the sending-or-not-sending (SNS) principle have been intensively studied in recent years and have been shown to enable the longest transmission distances in quantum key distribution (QKD). In this work, we propose a sending-or-not-sending phase-matching QKD protocol (SNS-PM-QKD) that improves tolerance to phase mismatch, thereby extending the achievable transmission distance. We present a security analysis of SNS-PM-QKD in the asymptotic (infinite-key) regime under collective attacks. The performance of the proposed protocol is compared with that of standard phase-matching QKD, theoretical SNS-type twin-field QKD protocols (SNS-TF-QKD), and an experimental SNS-TF-QKD operated over transmission distances of up to 1002 km. Our results show that SNS-PM-QKD achieves greater transmission distances than these existing protocols, highlighting its potential for long-distance quantum communication.
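For context on why SNS/twin-field-type protocols reach such distances: their key rate scales roughly with the square root of the channel transmittance rather than linearly with it, so the rate falls off at half the dB slope of point-to-point QKD. The snippet below compares only these idealized scalings under an assumed 0.2 dB/km fibre loss; it is not the paper's key-rate formula.

```python
# Back-of-the-envelope comparison of rate-vs-distance scaling: point-to-point QKD
# (R ~ eta) versus twin-field/SNS-type protocols (R ~ sqrt(eta)). Idealized scalings
# with an assumed 0.2 dB/km fibre loss; not the paper's key-rate formula.
import numpy as np

alpha_db_per_km = 0.2
for L in [100, 500, 1000]:
    eta = 10 ** (-alpha_db_per_km * L / 10)    # overall channel transmittance
    print(f"L = {L:4d} km: eta = {eta:.2e}, "
          f"R_point_to_point ~ {eta:.2e}, R_TF_type ~ {np.sqrt(eta):.2e}")
```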
Validating a Koopman-Quantum Hybrid Paradigm for Diagnostic Denoising of Fusion Devices
This paper develops a hybrid system that combines classical Koopman operators with quantum neural networks to clean noisy data from nuclear fusion reactors. The approach uses the Koopman operator to compress complex waveform data into simpler features that quantum processors can handle more efficiently.
Key Contributions
- Established theoretical connection between Koopman operators and quantum evolution for data processing
- Developed NISQ-friendly hybrid pipeline achieving 97% accuracy on fusion diagnostic data with fewer parameters than classical methods
View Full Abstract
The potential of Quantum Machine Learning (QML) in data-intensive science is strictly bottlenecked by the difficulty of interfacing high-dimensional, chaotic classical data into resource-limited, noisy quantum processors. To bridge this gap, we introduce a physics-informed Koopman-Quantum hybrid framework, theoretically grounded in a representation-level structural isomorphism we establish between the Koopman operator, which linearizes nonlinear dynamics, and quantum evolution. Based on this theoretical foundation, we design a realizable NISQ-friendly pipeline: the Koopman operator functions as a physics-aware "data distiller," compressing waveforms into compact, "quantum-ready" features, which are subsequently processed by a modular, parallel quantum neural network. We validated this framework on 4,763 labeled channel sequences from 433 discharges of the tokamak system. The results demonstrate that our model achieves 97.0% accuracy in screening corrupted diagnostic data, matching the performance of state-of-the-art deep classical CNNs while using orders-of-magnitude fewer trainable parameters. This work establishes a practical, physics-grounded paradigm for leveraging quantum processing in constrained environments, offering a scalable path for quantum-enhanced edge computing.
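The "data distiller" role of the Koopman operator is, in practice, close to a dynamic-mode-decomposition-style regression on snapshot pairs, whose leading modes give a compact feature set. The sketch below shows that generic idea on synthetic data (a standard DMD fit; it is not the authors' pipeline, and the dynamics are made up for illustration).

```python
# Minimal DMD-style estimate of a finite-dimensional Koopman approximation from
# snapshot pairs (x_t, x_{t+1}). Generic sketch on synthetic data, not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic linear dynamics standing in for diagnostic waveform snapshots
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])
X = [rng.normal(size=2)]
for _ in range(200):
    X.append(A_true @ X[-1] + 1e-3 * rng.normal(size=2))
X = np.array(X).T                      # shape (dim, T+1)

X0, X1 = X[:, :-1], X[:, 1:]           # snapshot pairs
K = X1 @ np.linalg.pinv(X0)            # least-squares Koopman/DMD operator

eigvals, eigvecs = np.linalg.eig(K)
print("estimated Koopman eigenvalues:", np.round(eigvals, 3))

# Projections onto the Koopman modes can serve as compact "distilled" features
features = np.linalg.pinv(eigvecs) @ X0
print("feature matrix shape:", features.shape)
```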
Quantum Annealing for Combinatorial Optimization: Foundations, Architectures, Benchmarks, and Emerging Directions
This paper provides a comprehensive review of quantum annealing approaches for solving combinatorial optimization problems, analyzing current hardware architectures, algorithms, and implementation challenges. The authors identify that embedding overhead is the primary bottleneck limiting scalability, with minor embeddings requiring 5-12 physical qubits per logical variable and reducing effective problem capacity by 80-92%.
Key Contributions
- Unified framework connecting adiabatic quantum dynamics, Ising/QUBO models, and modern quantum annealer topologies
- Quantitative analysis showing embedding overhead as the primary scalability bottleneck with 80-92% capacity reduction
- Comprehensive benchmarking protocols and comparison of quantum annealing with gate-based and classical approaches
View Full Abstract
Critical decision-making issues in science, engineering, and industry are based on combinatorial optimization; however, its application is inherently limited by the NP-hard nature of the problem. A specialized paradigm of analogue quantum computing, quantum annealing (QA), has been proposed to solve these problems by encoding optimization problems into physical energy landscapes and solving them via quantum tunnelling through systematic exploration of the solution space. This is a critical review that summarizes the current applications of quantum annealing to combinatorial optimization and includes a theoretical background, hardware designs, algorithm implementation strategies, encoding and embedding schemes, protocols to benchmark quantum annealing, areas of implementation, and links with quantum algorithm implementations on gate-based hardware and with classical solvers. We develop a unified framework relating adiabatic quantum dynamics, Ising and QUBO models, stoquastic and non-stoquastic Hamiltonians, and diabatic transitions to modern flux-qubit annealers (Chimera, Pegasus, Zephyr topologies), emergent architectures (Lechner-Hauke-Zoller systems, Rydberg atom platforms), and hybrids of quantum and classical computation. Through our analysis, we find that embedding and encoding overhead is the largest determinant of scalability and performance (not just the number of qubits). Minor embeddings typically require between 5 and 12 physical qubits per logical variable, which limits effective problem capacity by 80-92% and, due to chain-breaking errors, compromises solution quality.
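For readers mapping their own problems onto an annealer, the QUBO-to-Ising step the review relies on is a simple change of variables x_i = (1 + s_i)/2. A textbook version is sketched below (generic mapping; the example matrix is arbitrary).

```python
# Textbook QUBO -> Ising conversion via x_i = (1 + s_i)/2, s_i in {-1, +1}.
# Generic mapping used when submitting problems to an annealer; the example is arbitrary.
import numpy as np

def qubo_to_ising(Q):
    """Map sum_{i,j} Q_ij x_i x_j (x_i in {0,1}) to (h, J, offset) in spin variables."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    h = np.zeros(n)
    J = np.zeros((n, n))
    offset = 0.0
    for i in range(n):
        h[i] += Q[i, i] / 2          # diagonal terms are linear since x_i^2 = x_i
        offset += Q[i, i] / 2
        for j in range(i + 1, n):
            q = Q[i, j] + Q[j, i]    # total coupling between variables i and j
            J[i, j] += q / 4
            h[i] += q / 4
            h[j] += q / 4
            offset += q / 4
    return h, J, offset

# Small example QUBO on 3 binary variables
Q = np.array([[-1,  2,  0],
              [ 0, -1,  2],
              [ 0,  0, -1]])
h, J, offset = qubo_to_ising(Q)
print("h =", h, "\nJ =\n", J, "\noffset =", offset)
```

Minor embedding then maps each logical spin to a chain of physical qubits on the hardware graph, which is where the 5-12x qubit overhead discussed in the review comes from.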
Resource-efficient quantum simulation of transport phenomena via Hamiltonian embedding
This paper develops a new framework for simulating transport phenomena (like fluid flow and diffusion) on quantum computers using a technique called Hamiltonian embedding. The authors demonstrate that their approach can reduce quantum circuit depth by an order of magnitude (e.g., 42×) for favorable problem structures compared to existing methods, and they provide the first experimental demonstration of solving a 2D advection equation on a trapped-ion quantum computer.
Key Contributions
- Development of Hamiltonian embedding technique for resource-efficient quantum simulation with rigorous theoretical guarantees
- Experimental demonstration of 2D advection equation solving on trapped-ion quantum hardware
- Order-of-magnitude reduction in circuit depth requirements for transport equation simulation
View Full Abstract
Transport phenomena play a key role in a variety of application domains, and efficient simulation of these dynamics remains an outstanding challenge. While quantum computers offer potential for significant speedups, existing algorithms either lack rigorous theoretical guarantees or demand substantial quantum resources, preventing scalable and efficient validation on realistic quantum hardware. To address this gap, we develop a comprehensive framework for simulating classes of transport equations, offering both rigorous theoretical guarantees (including exponential speedups in specific cases) and a systematic, hardware-efficient implementation. Central to our approach is the Hamiltonian embedding technique, a white-box approach for end-to-end simulation of sparse Hamiltonians that avoids abstract query models and retains near-optimal asymptotic complexity. Empirical resource estimates indicate that our approach can yield an order-of-magnitude (e.g., $42\times$) reduction in circuit depth given favorable problem structures. We then apply our framework to solve linear and nonlinear transport PDEs, including the first experimental demonstration of a 2D advection equation on a trapped-ion quantum computer.
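A useful way to see why advection is a natural Hamiltonian-simulation target: a centred finite-difference discretization of ∂_t u = -c ∂_x u with periodic boundaries has a skew-symmetric generator, so the discrete evolution is exactly unitary and can, in principle, be handed to a Hamiltonian simulator. The short check below makes that concrete with a generic discretization; it is not the paper's Hamiltonian embedding construction.

```python
# Check that a centred-difference discretization of the 1D advection equation
# du/dt = -c du/dx (periodic boundary) has a skew-symmetric generator, so the
# discrete-time evolution is unitary. Generic discretization, not the paper's embedding.
import numpy as np
from scipy.linalg import expm

N, c, dx = 64, 1.0, 1.0 / 64
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = -c / (2 * dx)    # forward neighbour
    A[i, (i - 1) % N] = +c / (2 * dx)    # backward neighbour

print("skew-symmetric:", np.allclose(A, -A.T))       # A = -A^T, i.e. iA is Hermitian
U = expm(A * 0.01)                                   # one time step of du/dt = A u
print("unitary step:", np.allclose(U.conj().T @ U, np.eye(N)))

u0 = np.exp(-((np.arange(N) * dx - 0.5) ** 2) / 0.01)   # Gaussian pulse
u0 /= np.linalg.norm(u0)
print("norm preserved:", np.isclose(np.linalg.norm(U @ u0), 1.0))
```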
Quantum spin-heat engine with trapped ions
This paper proposes using trapped ions to implement a novel quantum spin-heat engine that operates between energy and spin thermal reservoirs, rather than conventional two energy reservoirs. The engine extracts work by converting heat to optical work via Raman transitions, then resets using a spin reservoir at no energy cost.
Key Contributions
- Proposes ion-trap implementation of spin-heat engine operating beyond conventional two-reservoir paradigm
- Demonstrates mechanism for harnessing quantum coherence in conserved quantities other than energy
View Full Abstract
We propose an ion-trap implementation of the Vaccaro, Barnett and Wright et al. spin-heat engine (SHE); a hypothetical engine that operates between energy and spin thermal reservoirs rather than two energy reservoirs. The SHE operates in two steps: first, in the work extraction stage, heat from a thermal energy reservoir is converted into optical work via a two photon Raman transition resonant with close-to energy degenerate spin states; second, the internal spin states are brought back to their initial state via non-energetic information erasure using a spin reservoir. The latter incurs no energy cost, but rather the reset occurs at the cost of angular momentum from a spin bath that acts as the thermal spin reservoir. The SHE represents an important first step toward demonstrating heat engines that operate beyond the conventional paradigm of requiring two thermal reservoirs, paving the way to harness quantum coherence in arbitrary conserved quantities via similar machines.
Physics-inspired transformer quantum states via latent imaginary-time evolution
This paper develops a new approach to neural quantum states by reinterpreting transformer architectures as simulating imaginary-time evolution in quantum systems. The authors create physics-inspired transformer quantum states (PITQS) that use fewer parameters while achieving comparable accuracy to existing methods for modeling frustrated quantum spin systems.
Key Contributions
- Physics-inspired reinterpretation of transformer neural networks as latent imaginary-time evolution for quantum states
- PITQS architecture that achieves comparable accuracy to state-of-the-art methods while using substantially fewer variational parameters
View Full Abstract
Neural quantum states (NQS) are powerful ansätze in the variational Monte Carlo framework, yet their architectures are often treated as black boxes. We propose a physically transparent framework in which NQS are treated as neural approximations to latent imaginary-time evolution. This viewpoint suggests that standard Transformer-based NQS (TQS) architectures correspond to physically unmotivated effective Hamiltonians dependent on imaginary time in a latent space. Building on this interpretation, we introduce physics-inspired transformer quantum states (PITQS), which enforce a static effective Hamiltonian by sharing weights across layers and improve propagation accuracy via Trotter-Suzuki decompositions without increasing the number of variational parameters. For the frustrated $J_1$-$J_2$ Heisenberg model, our ansätze achieve accuracies comparable to or exceeding state-of-the-art TQS while using substantially fewer variational parameters. This study demonstrates that reinterpreting the deep network structure as a latent cooling process enables a more physically grounded, systematic, and compact design, thereby bridging the gap between black-box expressivity and physically transparent construction.
Asymptotically Optimal Quantum Universal Quickest Change Detection
This paper develops a method for detecting when quantum states change unexpectedly, particularly when the new state after the change is unknown. The authors prove that a two-stage approach combining quantum measurements with classical change detection algorithms achieves optimal performance for identifying these quantum state changes as quickly as possible.
Key Contributions
- Established asymptotic optimality of two-stage approach for universal quantum quickest change detection
- Extended classical windowed-CUSUM algorithm to quantum setting with theoretical guarantees
View Full Abstract
This paper investigates the quickest change detection of quantum states in a universal setting: specifically, where the post-change quantum state is not known a priori. We establish the asymptotic optimality of a two-stage approach in terms of worst average delay to detection. The first stage employs block POVMs with classical outputs that preserve quantum relative entropy to arbitrary precision. The second stage leverages a recently proposed windowed-CUSUM algorithm that is known to be asymptotically optimal for quickest change detection with an unknown post-change distribution in the classical setting.
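For orientation, the classical ingredient in the second stage is a CUSUM-type statistic on measurement outcomes. The sketch below implements the plain CUSUM recursion for a known post-change distribution; the paper's windowed variant additionally handles an unknown post-change distribution, which this toy version does not attempt.

```python
# Plain CUSUM recursion W_t = max(0, W_{t-1} + log L(x_t)) for a known post-change
# distribution; an orientation sketch only, not the paper's windowed-CUSUM.
import numpy as np

rng = np.random.default_rng(2)

def gaussian_loglik_ratio(x, mu0, mu1, sigma):
    """log[p1(x) / p0(x)] for two Gaussians with common variance sigma^2."""
    return ((mu1 - mu0) * x + (mu0 ** 2 - mu1 ** 2) / 2) / sigma ** 2

mu0, mu1, sigma, threshold = 0.0, 0.7, 1.0, 8.0
change_point = 300

# pre-change samples ~ N(mu0, sigma^2), post-change ~ N(mu1, sigma^2)
xs = np.concatenate([rng.normal(mu0, sigma, change_point),
                     rng.normal(mu1, sigma, 400)])

W, alarm = 0.0, None
for t, x in enumerate(xs):
    W = max(0.0, W + gaussian_loglik_ratio(x, mu0, mu1, sigma))
    if W > threshold:
        alarm = t
        break

print(f"change at t = {change_point}, alarm raised at t = {alarm}")
```

In the quantum setting of the paper, the classical samples are the outcomes of the block POVMs chosen in the first stage, which is what preserves the relative-entropy rate that governs the detection delay.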
Efficient Three-Dimensional Sub-Doppler Cooling of $^{40}$Ca$^+$ in a Penning Trap
This paper demonstrates a method to cool a trapped calcium ion to very low temperatures in all three dimensions using laser light, achieving much colder temperatures than standard Doppler cooling. The technique uses the same laser setup as conventional cooling but detunes the frequencies to create a quantum interference effect that enables more efficient cooling.
Key Contributions
- Demonstrated efficient sub-Doppler cooling of all three motional modes of a trapped Ca+ ion using two-photon dark resonance
- Reduced the mean thermal occupation from 72 to 1.5 in 800 μs, with a parametric drive enabling 3D cooling
- Validated semiclassical model combining Lindblad master equation with classical harmonic oscillator dynamics
View Full Abstract
We demonstrate efficient sub-Doppler laser cooling of the three eigenmodes of a $^{40}$Ca$^+$ ion confined in a compact Penning trap operating with a magnetic field of 0.91 T. Using the same set of laser beams as required for the initial Doppler laser cooling operation, we detune the laser frequencies to produce a narrow two-photon dark resonance. The process achieves a 1/e cooling time constant of 108(8) $μ$s, ultimately reducing the mean thermal axial mode occupation from 72(23) to 1.5(3) in 800 $μ$s as measured by resonantly probing an electric quadrupole transition near 729 nm. A parametric drive is applied to the trap electrodes which coherently exchanges the axial mode occupation with that of each radial mode, allowing for three-dimensional sub-Doppler cooling using only the axially-propagating laser beams. This sub-Doppler cooling is achieved for an axial oscillation frequency of $ω_z = 2π \times 221$ kHz, which places the motion well outside of the Lamb-Dicke confinement regime at the Doppler laser cooling limit. Our measured cooling rate and final mode occupation are in good agreement with a semiclassical model which combines a Lindblad master equation solution for ion-photon interactions with classical harmonic oscillator motion of the trapped ion.
Quantum phase transition in transverse-field Ising model on Sierpiński gasket lattice
This paper studies how quantum materials undergo phase transitions when arranged in a fractal geometry called the Sierpiński gasket, finding that the fractal structure changes the critical behavior compared to regular one-dimensional or higher-dimensional systems. The researchers use computational methods to determine the critical point and exponents that characterize this quantum phase transition.
Key Contributions
- Determination of quantum phase transition critical point and exponents for transverse-field Ising model on fractal Sierpiński gasket geometry
- Demonstration that fractal geometry leads to distinct critical behavior with lower dynamical exponent z≈0.84 compared to conventional lattices
View Full Abstract
We study quantum phase transition in the transverse-field Ising model on the Sierpiński gasket. By applying finite-size scaling and numerical renormalization group methods, we determine the critical coupling and the exponents that describe this transition. We first checked our finite-size scaling and the renormalization methods on the exactly solvable one-dimensional chain, where we recovered proper values of critical couplings and exponents. Then, we applied the method to the Sierpiński gasket with 11 and 15 spins. We found a quantum critical point at $λ_c \approx 2.72$ to $2.93$, with critical exponents $z\approx0.84$, $ν\approx 1.12 $, $β\approx 0.30$, and $γ\approx 2.54$. The lower dynamical exponent $z$ indicates that quantum fluctuations slow down due to fractal geometry, yielding an effective critical dimension of about 2.43. The numerical renormalization group method yielded similar results $λ_c = 2.765$, $β= 0.306$, supporting our findings. These exponents differ from those in both the one-dimensional and mean-field cases.
Quantum Information Flow in Microtubule Tryptophan Networks
This paper studies how quantum information flows through networks of tryptophan amino acids in microtubules (cellular structures) using advanced quantum mechanical modeling. The researchers investigate how quantum correlations are created, transported, and lost in these biological networks under different conditions.
Key Contributions
- Development of Lindblad master equation framework for modeling quantum information flow in biological chromophore networks
- Identification of structural and dynamical conditions that preserve quantum correlations in microtubule tryptophan networks
View Full Abstract
Networks of aromatic amino acid residues within microtubules, particularly those formed by tryptophan, may serve as pathways for optical information flow. Ultraviolet excitation dynamics in these networks are typically modeled with effective non-Hermitian Hamiltonians. By extending this approach to a Lindblad master equation that incorporates explicit site geometries and dipole orientations, we track how correlations are generated, routed, and dissipated, while capturing both energy dissipation and information propagation among coupled chromophores. We compare localized injections, fully delocalized preparations, and eigenmode-based initial states. To quantify the emerging quantum-informational structure, we evaluate the $L_1$ norm of coherence, the correlated coherence, and the logarithmic negativity within and between selected chromophore sub-networks. The results reveal a strong dependence of both the direction and persistence of information flow on the type of initial preparation. Superradiant components drive the rapid export of correlations to the environment, whereas subradiant components retain them and slow their leakage. Embedding single tubulin units into larger dimers and spirals reshapes pairwise correlation maps and enables site-selective routing. Scaling to larger ordered lattices strengthens both export and retention channels, whereas static energetic and structural disorder suppresses long-range transport and reduces overall correlation transfer. These findings provide a Lindbladian picture of information flow in cytoskeletal chromophore networks and identify structural and dynamical conditions that transiently preserve nonclassical correlations in microtubules.
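The workhorse here is a Lindblad master equation for a small network of coupled two-level chromophores. A minimal two-site version can be integrated directly, for example with QuTiP as sketched below; the Hamiltonian form is the standard excitonic dimer, and all parameters are generic placeholders rather than the paper's tryptophan geometries or rates.

```python
# Minimal two-site excitonic dimer under a Lindblad master equation, with local
# decay and dephasing. All parameters are generic placeholders, not the paper's
# tryptophan geometry or rates.
import numpy as np
from qutip import basis, tensor, qeye, sigmaz, sigmam, mesolve

eps1, eps2, J = 1.0, 1.1, 0.2          # site energies and dipole coupling (arb. units)
gamma_decay, gamma_deph = 0.05, 0.1    # local emission and dephasing rates

sz1, sz2 = tensor(sigmaz(), qeye(2)), tensor(qeye(2), sigmaz())
sm1, sm2 = tensor(sigmam(), qeye(2)), tensor(qeye(2), sigmam())

H = 0.5 * eps1 * sz1 + 0.5 * eps2 * sz2 + J * (sm1.dag() * sm2 + sm2.dag() * sm1)
c_ops = [np.sqrt(gamma_decay) * sm1, np.sqrt(gamma_decay) * sm2,
         np.sqrt(gamma_deph) * sz1, np.sqrt(gamma_deph) * sz2]

# Localized injection: excitation on site 1 only. In QuTiP's convention sigmam()
# lowers basis(2, 0) -> basis(2, 1), so basis(2, 0) plays the role of the excited state.
psi0 = tensor(basis(2, 0), basis(2, 1))
tlist = np.linspace(0, 50, 500)

result = mesolve(H, psi0, tlist, c_ops, e_ops=[sm1.dag() * sm1, sm2.dag() * sm2])
print("final site populations:", result.expect[0][-1], result.expect[1][-1])
```

The paper's coherence and entanglement measures would then be evaluated on the full density matrix returned when no e_ops are supplied.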
Wave packet description of Majorana neutrino oscillations in a magnetic field
This paper studies how Majorana neutrinos oscillate between different types when traveling through magnetic fields, using a wave packet approach that accounts for quantum decoherence effects. The researchers derive analytical solutions showing how the coherence length depends on the relative strength of vacuum oscillation frequency versus magnetic field frequency.
Key Contributions
- Analytical solution of modified Dirac equation for Majorana neutrinos in magnetic fields using wave packet formalism
- Derivation of coherence length expressions showing different scaling regimes depending on vacuum vs magnetic frequency dominance
View Full Abstract
Majorana neutrino oscillations in a magnetic field are considered using the wave packet formalism. The modified Dirac equation for Majorana neutrinos with non-zero transition magnetic moments propagating in a magnetic field is solved analytically in the two flavour case. The expressions for the oscillation probabilities are derived, accounting for the decoherence effect emerging at distances exceeding the coherence length. It is shown that for Majorana neutrinos propagating in a magnetic field the coherence length coincides with the coherence length for neutrino oscillations in vacuum when the vacuum frequency is much greater than the magnetic frequency ($ω_{vac} \gg ω_B$), while it is proportional to the cube of the average neutrino momentum if $ω_{vac} \ll ω_B$. We show that the decoherence effect may appear during neutrino propagation in the magnetic field of a supernova.
Experimental Quantification of Spin-Phonon Coupling in Molecular Qubits using Inelastic Neutron Scattering
This paper develops an experimental method to measure how molecular vibrations affect spin coherence in molecular qubits by combining neutron scattering and electron spin resonance techniques. The researchers identify which vibrational modes most strongly couple to electron spins and show how molecular structure can be tuned to reduce this coupling and improve coherence times.
Key Contributions
- Development of fully experimental method to quantify spin-phonon coupling coefficients using inelastic neutron scattering and EPR
- Discovery that structural distortions can redistribute vibrational energy away from spin centers to reduce decoherence and enable room-temperature spin coherence
View Full Abstract
Electronic spin superposition states enable nanoscale sensing through their sensitivity to the local environment, yet their sensitivity to vibrational motion also limits their coherence times. In molecular spin systems, chemical tunability and atomic-scale resolution are accompanied by a dense, thermally accessible phonon spectrum that introduces efficient spin relaxation pathways. Despite extensive theoretical work, there is little experimental consensus on which vibrational energies dominate spin relaxation or how molecular structure controls spin-phonon coupling (SPC). We present a fully experimental method to quantify SPC coefficients by combining temperature-dependent vibrational spectra from inelastic neutron scattering with spin relaxation rates measured by electron paramagnetic resonance. We apply this framework to two model S = 1/2 systems, copper(II) phthalocyanine (CuPc) and copper(II) octaethylporphyrin (CuOEP). Two distinct relaxation regimes emerge: below 40 K, weakly coupled lattice modes below $50~\mathrm{cm}^{-1}$ dominate, whereas above 40 K, optical phonons above ~$185~\mathrm{cm}^{-1}$ become thermally populated and drive relaxation with SPC coefficients nearly three orders of magnitude larger. Structural distortions in CuOEP that break planar symmetry soften the crystal lattice and enhance anharmonic scattering, but also raise the energy of stretching modes at the molecular core where the spins reside. This redistributes vibrational energy toward the molecular periphery and out of plane, ultimately reducing SPC relative to CuPc and enabling room-temperature spin coherence in CuOEP. Although our method does not provide mode-specific SPC coefficients, it quantifies contributions from distinct spectral regions and establishes a broadly applicable, fully experimental link between crystal structure, lattice dynamics, and spin relaxation.
Inducing, and enhancing, many-body quantum chaos by continuous monitoring
This paper studies how continuous monitoring affects quantum chaos in a many-body system, finding that monitoring can surprisingly enhance rather than suppress chaotic behavior under certain conditions. The researchers use the Sachdev-Ye-Kitaev model coupled to a thermal bath and discover that monitoring can increase the Lyapunov exponent, indicating stronger quantum scrambling.
Key Contributions
- Demonstration that continuous monitoring can enhance rather than suppress many-body quantum chaos
- Discovery of re-entrant behavior in Lyapunov exponent with respect to monitoring strength
- Analytical treatment of Green's function decay in monitored quantum chaotic systems
View Full Abstract
It is intuitively expected, and supported by earlier studies, that many-body quantum chaos is suppressed, or even destroyed, by dissipative effects induced by continuous monitoring. We show here that this is not always the case. For this purpose, we study the quenched dynamics of a continuously monitored Sachdev-Ye-Kitaev (SYK) model, described by the Lindblad formalism, coupled to a thermal environment modeled by another SYK maintained at constant temperature. We find that the combined effect of monitoring and the thermal bath drives the system toward a non-thermal steady state independently of the initial conditions. The corresponding retarded Green's function exhibits two stages of exponential decay, with rates that depend non-monotonously on the thermal bath coupling and the monitoring strength. In the limit of weak coupling, the late time decay of the Green's function, computed analytically, is closely related to that of the thermal bath. Strikingly, we identify a range of parameters in which continuous monitoring, despite being a source of decoherence, induces or enhances quantum chaotic dynamics suppressed by the thermal bath. For instance, in the limit of weak coupling to the thermal bath, the Lyapunov exponent increases sharply when monitoring is turned on. For intermediate values of the thermal bath coupling, the Lyapunov exponent exhibits re-entrant behavior: it vanishes at zero or sufficiently weak monitoring strength, and becomes positive again as the monitoring strength is increased. Our results offer intriguing insights on the mechanisms leading to quantum scrambling which paves the way to its experimental control and consequently to a performance enhancement of quantum information devices.
Dynamic Simulations of Strongly Coupled Spin Ensembles for Inferring Nature of Electronic Correlations from Nuclear Magnetic Resonance
This paper develops a simulation package to model nuclear magnetic resonance (NMR) experiments in strongly correlated electronic materials. The researchers use mean-field models to understand how electronic spin correlations affect nuclear spin dynamics, identifying temporal asymmetries and pulse-dependent spectral shifts that can reveal information about electronic interactions in exotic materials.
Key Contributions
- Development of efficient simulation package for NMR spin echo experiments in strongly correlated materials
- Classification of temporal asymmetries and pulse-dependent spectral shifts as signatures of electronic correlations
- Novel methodology for extracting information about anisotropy and range of electronic interactions from NMR measurements
View Full Abstract
We develop an efficient package for the simulation of nuclear magnetic resonance spin echo experiments to study the effects of strong electronic spin correlations on the dynamics of the nuclear spin ensemble. A mean-field model is used to study correlated electronic phases through their hyperfine interaction with nuclear spins. We explore the dynamics of the interacting nuclear ensemble and discuss the key behaviors of the system. In particular, we classify the types of temporal asymmetry that the interaction induces in the system as well as a pulse-dependent shift in the spectral domain. Using these results, we discuss how careful measurement of the pulse-dependent shift can be used to extract information about the anisotropy of the electronic interaction and how these results represent a novel tool for the examination of exotic NMR signatures in strongly correlated materials. Finally, we review specific aspects of the simulation package developed for our exploration and give explicit examples where the package can be used to infer the range and anisotropy of electronic correlations. In particular, we discuss its structure, accuracy, and the technical merits of the various approximations used to model the nuclear spin ensemble.
Quantum Tomography of Fermion Pairs in $e^+e^-$ Collisions: Longitudinal Beam Polarization Effects
This paper studies quantum properties of particle pairs produced in high-energy electron-positron collisions, showing how beam polarization can control quantum entanglement, Bell nonlocality, and 'magic' in the resulting fermion pairs. The researchers demonstrate that these quantum effects can be measured with high statistical significance at future particle accelerators.
Key Contributions
- Demonstrated controllable quantum entanglement and Bell nonlocality in fermion pairs from high-energy collisions using beam polarization
- Showed that quantum 'magic' (non-stabilizerness) can be measured and controlled in particle physics experiments with 5σ significance
View Full Abstract
We present a quantum tomography study of fermion pair production at future $e^+e^-$ colliders, emphasizing how longitudinal beam polarization controls the two-qubit spin density matrix. We study the processes $e^+ e^- \to t\bar{t},\ e^+e^-\to μ^+μ^-$ and Bhabha scattering $e^+e^-\to e^+e^-$, representing the mass threshold behavior, the $Z$ pole resonance and the $s/t$-channel interplay. We choose to focus on three key concepts: quantum entanglement via the concurrence $\mathcal{C}$, Bell nonlocality via the optimal Clauser-Horne-Shimony-Holt (CHSH) parameter $\mathcal{B}$, and non-stabilizerness ("magic") via the second stabilizer Rényi entropy $\mathcal{M}_2$. For the $s$-channel-dominated channels, longitudinal polarization mainly reshapes single-spin polarizations while leaving the spin-correlation matrix largely unchanged, rendering $\mathcal{C}$ and $\mathcal{B}$ comparatively robust, but inducing a pronounced variation of $\mathcal{M}_2$. In contrast, in Bhabha scattering, polarization modifies the relative contributions of the $s$-channel and $t$-channel and can strongly affect all three observables. The observability of entanglement, Bell nonlocality, and magic exceeds the $5σ$ level when both statistical and systematic uncertainties are included, establishing the fermion pair systems as ideal laboratories for quantum-information studies in high energy leptonic collisions. With optimized beam polarization, future $e^+e^-$ colliders will provide a unique opportunity to experimentally explore and influence quantum resources in particle interactions.
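For a reconstructed two-qubit spin density matrix, the first two quantities quoted here have closed forms: Wootters' concurrence from the eigenvalues of ρ(σ_y⊗σ_y)ρ*(σ_y⊗σ_y), and the optimal CHSH value from the two largest eigenvalues of TᵀT, where T is the spin-correlation matrix (Horodecki criterion). The helper below applies these standard formulas to a toy Werner state rather than collider data; the stabilizer Rényi entropy ("magic") is not included.

```python
# Standard closed-form two-qubit diagnostics: Wootters concurrence and the optimal
# CHSH value (Horodecki criterion). Applied to a toy Werner state, not collider data.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def concurrence(rho):
    """Wootters concurrence from the spectrum of rho (sy x sy) rho^* (sy x sy)."""
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy)))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def chsh_optimal(rho):
    """Optimal CHSH value 2*sqrt(m1 + m2), m1 >= m2 the largest eigenvalues of T^T T."""
    T = np.array([[np.real(np.trace(rho @ np.kron(si, sj))) for sj in paulis]
                  for si in paulis])
    m = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
    return 2 * np.sqrt(m[0] + m[1])

# Werner state: p |Phi+><Phi+| + (1 - p) I/4
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
p = 0.9
rho = p * np.outer(phi_plus, phi_plus.conj()) + (1 - p) * np.eye(4) / 4

print("concurrence:", round(concurrence(rho), 4))    # (3p - 1)/2 = 0.85 for this p
print("optimal CHSH:", round(chsh_optimal(rho), 4))  # 2*sqrt(2)*p ~ 2.546, above 2
```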
Compiling Quantum Regular Language States
This paper develops a compiler for preparing quantum states that represent regular languages (patterns described by regular expressions or finite automata). The compiler translates simple pattern descriptions into efficient quantum circuits, offering predictable resource costs and enabling preparation of both the pattern states and their complements with the same efficiency.
Key Contributions
- Novel quantum state preparation compiler that accepts structure-aware specifications via regular expressions, finite automata, or bitstring sets
- Two hardware-aware compilation backends: linear-depth SeqRLSP for nearest-neighbor architectures and logarithmic-depth TreeRLSP for all-to-all connectivity
- Efficient representation using deterministic finite automata and matrix product states as intermediate representation
- Theoretical bounds on circuit depth and gate counts with explicit compile-time complexity analysis
View Full Abstract
State preparation compilers for quantum computers typically sit at two extremes: general-purpose routines that treat the target as an opaque amplitude vector, and bespoke constructions for a handful of well-known state families. We ask whether a compiler can instead accept simple, structure-aware specifications while providing predictable resource guarantees. We answer this by designing and implementing a quantum state-preparation compiler for regular language states (RLS): uniform superpositions over bitstrings accepted by a regular description, and their complements. Users describe the target state via (i) a finite set of bitstrings, (ii) a regular expression, or (iii) a deterministic finite automaton (DFA), optionally with a complement flag. By translating the input to a DFA, minimizing it, and mapping it to an optimal matrix product state (MPS), the compiler obtains an intermediate representation (IR) that exposes and compresses hidden structure. The efficient DFA representation and minimization offload expensive linear-algebra computation in exchange for simpler automata manipulations. The combination of the regular-language frontend and this IR gives concise specifications not only for RLS but also for their complements that might otherwise require exponentially large state descriptions. This enables state preparation of an RLS or its complement with the same asymptotic resources and compile time. We outline two hardware-aware backends: SeqRLSP, which yields linear-depth, ancilla-free circuits for linear nearest-neighbor architectures via sequential generation, and TreeRLSP, which achieves logarithmic depth on all-to-all connectivity via a tree tensor network. We prove depth and gate-count bounds scaling with the system size and the state's maximal Schmidt rank, and we give explicit compile-time bounds that expose the benefit of our approach. We implement and evaluate the pipeline.
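The target object itself is easy to state classically: for n qubits, the RLS of a regular expression is the uniform superposition over the n-bit strings it accepts (or rejects, for the complement). The brute-force reference below builds that amplitude vector with Python's re module for small n; it is a sanity-check construction, not the compiler's DFA/MPS pipeline, and the example pattern is illustrative.

```python
# Classical reference for a regular-language state: the uniform superposition over
# the n-bit strings matched by a regular expression. Brute force for small n only;
# this is a sanity check, not the compiler's DFA/MPS pipeline.
import re
from itertools import product
import numpy as np

def rls_statevector(pattern, n, complement=False):
    regex = re.compile(pattern)
    amps = np.zeros(2 ** n)
    for idx, bits in enumerate(product("01", repeat=n)):
        accepted = regex.fullmatch("".join(bits)) is not None
        if accepted != complement:          # complement flag flips acceptance
            amps[idx] = 1.0
    norm = np.linalg.norm(amps)
    if norm == 0:
        raise ValueError("no n-bit string matches the specification")
    return amps / norm

# Example: bitstrings with no two adjacent 1s (a classic DFA-recognizable language)
psi = rls_statevector(r"(0|10)*1?", n=4)
print("support size:", int(np.count_nonzero(psi)), "of", len(psi))
print(np.round(psi, 3))
```

The compiler's contribution is to produce such states (and their complements) as circuits whose cost scales with the DFA/MPS structure rather than with the exponential support size this brute-force reference enumerates.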
Resolving problems with the continuum limit in coherent-state path integrals
This paper solves mathematical problems in quantum path integrals by proving that symmetric (Weyl) ordering of operators gives the correct continuum limit for bosonic thermal coherent-state path integrals. The authors provide rigorous mathematical justification using renormalization procedures and demonstrate their approach on the harmonic oscillator.
Key Contributions
- Rigorous proof that Weyl ordering is correct for all Hamiltonians in coherent-state path integrals
- Simplified construction using only creation and annihilation operators without position and momentum operators
- Renormalization procedure in imaginary time frequency domain for deriving continuum path integrals
View Full Abstract
The paper solves the problem of the continuum limit in bosonic thermal coherent-state path integrals. For this purpose, exact discrete versions of the path integral are constructed for three different orderings of the Hamiltonian: normal, anti-normal and symmetric (Weyl) ordering. Subsequently, their different continuum versions are checked on the harmonic oscillator, singling out the symmetric ordering as the candidate correct choice for all Hamiltonians. The mathematical subtleties spotted in this simple case serve as a clue to the general solution. Finally, a general justification for the symmetric ordering is provided by deriving the continuum path integral from the exact discrete case via a renormalization procedure in the imaginary-time frequency domain. While the role of Weyl ordering has been identified previously, the paper provides the missing proof of its suitability for every Hamiltonian and simplifies the previously established construction by referring only to creation and annihilation operators (without position and momentum operators).
Nonlinear light cone spreading of correlations in a triangular quantum magnet: a hard quantum simulation target
This paper studies how correlations spread through time and space in a triangular quantum magnet material using neutron spectroscopy, finding unusual transport behavior that current theoretical methods cannot reproduce. The researchers propose this system as a challenging benchmark for testing future quantum simulators.
Key Contributions
- Discovery of nonlinear sub-ballistic transport in triangular antiferromagnet KYbSe2 that defies current theoretical predictions
- Establishment of a challenging benchmark system for testing quantum simulation capabilities
View Full Abstract
Dynamical correlations of quantum many-body systems are typically analyzed in the momentum space and frequency basis. However, quantum simulators operate more naturally in real space, real time settings. Here we analyze the real-space time-dependent van Hove spin correlations $G(r,t)$ of the 2D triangular antiferromagnet KYbSe$_2$ as obtained from high-resolution Fourier-transformed neutron spectroscopy. We compare this to $G(r,t)$ from five theoretical simulations of the well-established spin Hamiltonian. Our analysis reveals non-linear sub-ballistic low-temperature transport in KYbSe$_2$ which none of the current state-of-the-art numerical or field-theoretical methods reproduce. Our observation signals an emergent collective hydrodynamics, perhaps associated with the quantum critical phase of a quantum spin liquid, and provides an ideal benchmark for future quantum simulations.
Guaranteeing Privacy in Hybrid Quantum Learning through Theoretical Mechanisms
This paper proposes HYPER-Q, a hybrid mechanism that combines classical and quantum noise to provide privacy protection in quantum machine learning models. The authors demonstrate that this approach can achieve formal differential privacy guarantees while potentially improving model utility and adversarial robustness compared to purely classical noise-based methods.
Key Contributions
- Development of HYPER-Q hybrid classical-quantum noise mechanism for privacy in QML
- Theoretical analysis providing differential privacy guarantees and utility bounds for the proposed method
- Empirical demonstration of improved adversarial robustness compared to classical noise-based approaches
View Full Abstract
Quantum Machine Learning (QML) is becoming increasingly prevalent due to its potential to enhance classical machine learning (ML) tasks, such as classification. Although quantum noise is often viewed as a major challenge in quantum computing, it also offers a unique opportunity to enhance privacy. In particular, intrinsic quantum noise provides a natural stochastic resource that, when rigorously analyzed within the differential privacy (DP) framework and composed with classical mechanisms, can satisfy formal $(\varepsilon, δ)$-DP guarantees. This enables a reduction in the required classical perturbation without compromising the privacy budget, potentially improving model utility. However, the integration of classical and quantum noise for privacy preservation remains unexplored. In this work, we propose a hybrid noise-added mechanism, HYPER-Q, that combines classical and quantum noise to protect the privacy of QML models. We provide a comprehensive analysis of its privacy guarantees and establish theoretical bounds on its utility. Empirically, we demonstrate that HYPER-Q outperforms existing classical noise-based mechanisms in terms of adversarial robustness across multiple real-world datasets.
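The classical side of such a hybrid scheme is typically a Gaussian (or Laplace) mechanism calibrated to a target (ε, δ). The sketch below shows the standard Gaussian-mechanism calibration and a purely hypothetical split between an assumed "quantum" noise contribution and a classical top-up; the split is a placeholder for illustration, not the paper's HYPER-Q accounting.

```python
# Standard Gaussian-mechanism calibration for (eps, delta)-DP. The "quantum noise"
# term below is a purely hypothetical placeholder, not the paper's HYPER-Q accounting.
import numpy as np

def gaussian_sigma(sensitivity, eps, delta):
    """Classical Gaussian mechanism: sigma >= sqrt(2 ln(1.25/delta)) * Delta / eps (eps < 1)."""
    return np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / eps

sensitivity, eps, delta = 1.0, 0.5, 1e-5
sigma_required = gaussian_sigma(sensitivity, eps, delta)

# Hypothetical: if intrinsic hardware noise already acts like Gaussian noise of scale
# sigma_q on the released quantity, only the remainder needs to be added classically.
sigma_q = 0.5 * sigma_required
sigma_classical = np.sqrt(max(sigma_required ** 2 - sigma_q ** 2, 0.0))

print(f"required sigma:   {sigma_required:.3f}")
print(f"classical top-up: {sigma_classical:.3f} (assuming a quantum part of {sigma_q:.3f})")

rng = np.random.default_rng(3)
true_value = 0.42
released = true_value + rng.normal(0, sigma_q) + rng.normal(0, sigma_classical)
print("privatized release:", round(released, 3))
```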
Large Nc Truncations for SU(Nc) Lattice Yang-Mills Theory with Fermions
This paper develops truncation methods to simulate quantum chromodynamics (QCD) with fermions on quantum computers by reducing the infinite-dimensional gauge theory to finite dimensions through various approximation schemes. The authors provide explicit truncated Hamiltonians for 1+1D and 2+1D lattices and demonstrate numerical simulations of string-breaking dynamics.
Key Contributions
- Development of systematic truncation schemes for lattice QCD on quantum computers including local Krylov truncation and large Nc scaling limits
- Explicit construction of truncated Hamiltonians for 1+1D and 2+1D lattice gauge theories with fermions
- Numerical demonstration of string-breaking dynamics using the proposed truncation methods
View Full Abstract
Quantum simulations of quantum chromodynamics (QCD) require a representation of gauge fields and fermions on the finitely many degrees of freedom available on a quantum computer. We introduce a truncation of lattice QCD coupled to staggered fermions that includes (i) a local Krylov truncation that generates allowed basis states; (ii) a maximum allowed electric energy per link; (iii) a limit on the number of fermions per site; and (iv) a truncation in the large $N_c$ scaling of Hamiltonian matrix elements. Explicit truncated Hamiltonians for 1+1D and 2+1D lattices are given, and numerical simulations of string-breaking dynamics are performed.
Optimal enhancement of the Overhauser and Solid Effects within a unified framework
This paper develops a unified theoretical framework to understand and optimize two techniques (Overhauser effect and Solid effect) used to enhance nuclear spin polarization in liquids and solids. The framework predicts optimal microwave drive conditions that maximize the enhancement effects.
Key Contributions
- Unified quantum master equation framework for both Overhauser and Solid effects
- Prediction of optimal microwave drive amplitudes for maximum enhancement
- Identification of optimal electron-nuclear coupling regimes
View Full Abstract
The Overhauser effect (OE) and the Solid effect (SE) are two Dynamic Nuclear Polarization techniques. These two-spin techniques are widely used to create nonequilibrium nuclear spin states with polarization far beyond its equilibrium value. OE is commonly encountered in liquids, while SE is a solid-state technique. Here, we report a single framework, based on a recently proposed quantum master equation, that explains both OE and SE. To this end, we use a fluctuation-regularized quantum master equation that predicts dipolar relaxation and drive-induced dissipation, in addition to the standard environmental dissipation channels. Importantly, this unified approach predicts the existence of optimal microwave drive amplitudes that maximize the OE and SE enhancements. We also identify the optimal electron-nuclear coupling regime for maximal enhancement.
Non-Perturbative SDiff Covariance of Fractional Quantum Hall Excitations
This paper investigates the geometric nature of collective excitations in Fractional Quantum Hall liquids, arguing that current perturbative analyses using w-infinity Lie algebra are insufficient. The authors propose a non-perturbative construction of Maxwell-Chern-Simons quantum field theory with unitary area-preserving diffeomorphism equivariance, finding it to be non-differentiable and suggesting physics beyond the standard algebraic framework.
Key Contributions
- Non-perturbative construction of Maxwell-Chern-Simons theory with unitary SDiff equivariance
- Identification of non-differentiable structure suggesting FQH phenomenology beyond w-infinity algebra
View Full Abstract
Collective excitations of Fractional Quantum Hall (FQH) liquids at long wavelengths are thought to be of a generally covariant geometric nature, governed by area-preserving diffeomorphisms ($\mathrm{SDiff}$). But current analyses rely solely on the corresponding perturbative $w_\infty$ Lie algebra. We argue this is insufficient: We identify a non-perturbative construction of the effective Maxwell-Chern-Simons quantum field theory which carries unitary $\mathrm{SDiff}$ equivariance. But this turns out to be non-differentiable, suggesting FQH excitation phenomenology beyond the $w_\infty$ algebra.
Energy-Transfer-Enhanced Emission and Quantum Sensing of VB- Defects in hBN-PbI2 Heterostructures
This paper demonstrates how placing a lead iodide layer next to hexagonal boron nitride creates a heterostructure that makes quantum defects 5-45 times brighter through energy transfer, while preserving their magnetic sensing capabilities. The enhanced brightness makes these quantum sensors more practical for detecting magnetic fields and other environmental changes.
Key Contributions
- Demonstrated 5-45x enhancement of VB- defect photoluminescence through van der Waals heterostructure engineering
- Maintained ODMR contrast and magnetic sensing capabilities while significantly improving signal brightness
- Established energy transfer mechanism via type-I band alignment and fluorescence resonance energy transfer
View Full Abstract
Spin defects in two-dimensional materials hold significant potential for quantum information technologies and sensing applications. The negatively charged boron vacancy (VB-) in hexagonal boron nitride (hBN) has attracted considerable attention as a quantum sensor due to its demonstrated sensitivity to temperature, magnetic fields, and pressure. However, its applications have thus far been limited by inherently dim photoluminescence (PL). By fabricating a van der Waals heterostructure with a sensitizing donor layer, lead iodide (PbI2), we effectively enhance the PL intensity from the VB- by 5-45x, while maintaining compatibility with other heterostructures and vdW optoelectronic platforms. The type-I band alignment at the heterojunction enables efficient exciton migration while suppressing back-electron transfer, and the strong spectral overlap between the PbI2 emission and defect absorption supports efficient fluorescence resonance energy transfer. Ab initio density functional theory (DFT) predicts a photon-ratcheting mechanism that boosts absorption and emission while maintaining optically detected magnetic resonance (ODMR) contrast through minimal hybridization. Experimentally, the heterostructure exhibits enhanced continuous-wave ODMR sensitivity and functions as a precise probe of external magnetic fields. This work establishes a proof-of-concept for amplifying weak defect signals in nanomaterials, highlighting a new strategy for engineering their optical and magnetic responses.
Observing weakly broken conservation laws in a dipolar Rydberg quantum spin chain
This paper experimentally demonstrates how tiny perturbations can break conservation laws in quantum spin chains made of Rydberg atoms, showing that non-local observables like magnetization fluctuations are highly sensitive probes for detecting when these fundamental symmetries are weakly violated. The researchers used chains of just 14 atoms to observe clear signatures of this fragile integrability breaking.
Key Contributions
- Experimental demonstration that non-local observables serve as sensitive probes for detecting fragile conservation law breaking in quantum many-body systems
- Establishing Rydberg atom arrays as an effective platform for testing perturbative descriptions of weakly broken quantum integrability with as few as 14 atoms
View Full Abstract
Integrable quantum many-body systems host families of extensive conservation laws, some of which are fragile: even infinitesimal perturbations can qualitatively alter their dynamical constraints. Here we show that this fragility leaves a clear experimental fingerprint in a one-dimensional quantum spin chain of as few as 14 Rydberg atoms. Weak integrability breaking from interatomic dipolar couplings is directly detectable within experimentally accessible times in the dynamics of non-local observables. In particular, magnetization fluctuations are highly sensitive to the breaking of fragile conservation laws and exhibit anomalous growth, which we observe experimentally; similar signatures appear in a semilocal string observable. Numerical simulations on substantially longer chains and a simplified classical stochastic model reproduce those features. We establish non-local observables as a sensitive probe of fragile conservation laws in quantum spin chains and Rydberg-atom arrays as a platform to test perturbative descriptions of quantum many-body dynamics with weak integrability breaking.
Sampling two-dimensional isometric tensor network states
This paper develops two new algorithms for sampling probability distributions from two-dimensional quantum states represented as tensor networks. The algorithms extend existing one-dimensional methods to handle more complex 2D quantum systems, with one providing single random samples and another finding multiple high-probability configurations.
Key Contributions
- Development of independent sampling algorithm for 2D isometric tensor network states
- Introduction of greedy search algorithm to identify K high-probability configurations in 2D tensor networks
View Full Abstract
Sampling a quantum system's underlying probability distribution is an important computational task, e.g., for quantum advantage experiments and quantum Monte Carlo algorithms. Tensor networks are an invaluable tool for efficiently representing states of large quantum systems with limited entanglement. Algorithms for sampling one-dimensional (1D) tensor networks are well-established and utilized in several 1D tensor network methods. In this paper, we introduce two novel sampling algorithms for two-dimensional (2D) isometric tensor network states (isoTNS) that can be viewed as extensions of algorithms for 1D tensor networks. The first algorithm we propose performs independent sampling and yields a single configuration together with its associated probability. The second algorithm employs a greedy search strategy to identify K high-probability configurations and their corresponding probabilities. Numerical results demonstrate the effectiveness of these algorithms across quantum states with varying entanglement and system size.
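To make the chain-rule structure behind such sampling concrete, here is a minimal sketch of sequential, site-by-site sampling from conditional probabilities, the same organizing idea used in 1D tensor-network sampling. It operates on a dense state vector rather than an actual isoTNS, and all function names are illustrative, not taken from the paper.

```python
import numpy as np

def sample_configuration(psi, n_sites, rng=None):
    """Draw one configuration and its probability from |psi|^2, one site
    at a time via conditional probabilities (the same chain-rule structure
    used in 1D tensor-network sampling)."""
    if rng is None:
        rng = np.random.default_rng()
    amps = psi.reshape([2] * n_sites)
    config, prob = [], 1.0
    for _ in range(n_sites):
        # Marginal probability of this site, conditioned on the bits fixed
        # so far: sum |amplitude|^2 over all remaining (unsampled) sites.
        weights = np.sum(np.abs(amps) ** 2, axis=tuple(range(1, amps.ndim)))
        weights = weights / weights.sum()
        outcome = int(rng.choice(2, p=weights))
        config.append(outcome)
        prob *= weights[outcome]
        amps = amps[outcome]  # condition on the sampled outcome
    return config, prob

# Example: a 4-qubit GHZ-like state yields 0000 or 1111, each with p = 0.5
n = 4
psi = np.zeros(2 ** n)
psi[0] = psi[-1] = 1 / np.sqrt(2)
print(sample_configuration(psi, n))
```

The paper's second algorithm would replace the single random draw at each site with a beam-search-like step that keeps the K largest partial probabilities; that extension is not shown here.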
The trouble with recording devices
This paper identifies a fundamental problem in quantum theory regarding how recording devices are described when measuring quantum systems, particularly in predicting probabilities of past and future states. The authors propose a modification to the Born rule to resolve this issue and clarify quantum theory's application to measurement scenarios.
Key Contributions
- Identification of a fundamental issue with quantum theory's description of recording devices in measurement scenarios
- Proposed amendment to the Born rule to correctly predict probabilities of both past and future quantum states
- Clarification of quantum theory application to continuous measurements and closed observer systems
View Full Abstract
Quantum theory encounters a difficulty when attempting to describe recording devices. If the recording is of events in which quantum uncertainty plays a role, such as an experiment on a quantum system, quantum theory is unable to correctly predict the probabilities of both future and past states of the recording. The nature of this difficulty will be laid out at the outset. A resolution then will be presented, in which the Born rule will be lightly amended so as to correctly predict all probabilities. The resolution will have the further benefit of clarifying how quantum theory applies to an array of situations in which the theory can be ambiguous, such as the descriptions of continuous measurements, and of closed systems containing all observers.
AQER: a scalable and efficient data loader for digital quantum computers
This paper presents AQER, a new method for efficiently loading classical and quantum data into quantum circuits by reducing entanglement in target states. The authors develop a unified theoretical framework for approximate quantum loaders and demonstrate that AQER outperforms existing methods in both accuracy and gate efficiency across various datasets.
Key Contributions
- Unified theoretical framework for approximate quantum loaders with information-theoretic bounds
- AQER algorithm that systematically reduces entanglement for efficient data loading
- Demonstration of superior performance across synthetic, classical, and quantum datasets up to 50 qubits
View Full Abstract
Digital quantum computing promises computational capabilities beyond the reach of classical systems, yet these capabilities are often constrained by scarce quantum resources. A critical bottleneck in this context is how to load classical or quantum data into quantum circuits efficiently. Approximate quantum loaders (AQLs) provide a viable solution to this problem by balancing fidelity and circuit complexity. However, most existing AQL methods are either heuristic or provide guarantees only for specific input types, and a general theoretical framework is still lacking. To address this gap, here we reformulate most AQL methods into a unified framework and establish information-theoretic bounds on their approximation error. Our analysis reveals that the achievable infidelity between the prepared state and target state scales linearly with the total entanglement entropy across subsystems when the loading circuit is applied to the target state. In light of this, we develop AQER, a scalable AQL method that constructs the loading circuit by systematically reducing entanglement in target states. We conduct systematic experiments to evaluate the effectiveness of AQER, using synthetic datasets, classical image and language datasets, and a quantum many-body state dataset with up to 50 qubits. The results show that AQER consistently outperforms existing methods in both accuracy and gate efficiency. Our work paves the way for scalable quantum data processing and real-world quantum computing applications.
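The stated bound ties the achievable infidelity to entanglement entropy across subsystem cuts. As a small illustration of the quantity being reduced (not of AQER itself), the snippet below computes the bipartite entanglement entropy of a dense target state across one cut via an SVD; names and conventions are ours.

```python
import numpy as np

def bipartite_entropy(psi, n_left, n_right):
    """Von Neumann entanglement entropy (in nats) of a pure state across
    the cut between the first n_left and the last n_right qubits."""
    m = psi.reshape(2 ** n_left, 2 ** n_right)
    s = np.linalg.svd(m, compute_uv=False)   # Schmidt coefficients
    p = s ** 2
    p = p[p > 1e-12]                         # drop numerical zeros
    return float(-np.sum(p * np.log(p)))

# Example: a Bell pair shared across the cut has entropy ln 2 ~ 0.693
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(bipartite_entropy(bell, 1, 1))
```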
Towards Ultimate Accuracy in Quantum Multi-Class Classification: A Trace-Distance Binary Tree AdaBoost Classifier
This paper develops a new quantum machine learning method called Trace-distance binary Tree AdaBoost (TTA) that improves multi-class classification by organizing quantum classifiers in a hierarchical tree structure and combining many shallow quantum circuits instead of using one deep circuit. The approach achieves high accuracy while being implementable on near-term quantum computers.
Key Contributions
- Development of TTA algorithm that combines hierarchical binary trees with AdaBoost ensemble learning for quantum multi-class classification
- Demonstration of a practical approach to avoid barren plateau problems in quantum machine learning by using many shallow circuits instead of deep ones
View Full Abstract
We propose a Trace-distance binary Tree AdaBoost (TTA) multi-class quantum classifier, a practical pipeline for quantum multi-class classification that combines quantum-aware reductions with ensemble learning to improve trainability and resource efficiency. TTA builds a hierarchical binary tree by choosing, at each internal node, the bipartition that maximizes the trace distance between average quantum states; each node trains a binary AdaBoost ensemble of shallow variational quantum base learners. By confining intrinsically hard, small-trace-distance distinctions to small node-specific datasets and combining weak shallow learners via AdaBoost, TTA distributes capacity across many small submodels rather than one deep circuit, mitigating barren-plateau and optimization failures without sacrificing generalization. Empirically, TTA achieves top test accuracy ($\approx$100\%) among quantum and classical baselines, is robust to common quantum errors, and realizes aggregate systems with 10000 cumulative layers and 0.2M parameters, implemented as many shallow circuits. Our results are empirical and implementable on near-term platforms, providing a resource-efficient route to scalable multi-class quantum machine learning.
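A minimal sketch of the node-splitting criterion described above, assuming average class states are available as density matrices: it scores every class bipartition by the trace distance between the two groups' average states and keeps the best. This is illustrative only; the AdaBoost training of shallow variational circuits at each node is not shown, and all names are ours.

```python
import numpy as np
from itertools import combinations

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * sum of singular values of (rho - sigma)."""
    return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

def best_bipartition(class_states):
    """Pick the class bipartition whose average states are farthest apart
    in trace distance (illustrative stand-in for the paper's node rule)."""
    labels = list(class_states)
    best, best_split = -1.0, None
    for r in range(1, len(labels) // 2 + 1):
        for group in combinations(labels, r):
            a = [class_states[l] for l in group]
            b = [class_states[l] for l in labels if l not in group]
            d = trace_distance(sum(a) / len(a), sum(b) / len(b))
            if d > best:
                best, best_split = d, (set(group), set(labels) - set(group))
    return best_split, best

# Example with three single-qubit "classes"
z0 = np.array([[1, 0], [0, 0]], dtype=complex)
z1 = np.array([[0, 0], [0, 1]], dtype=complex)
x0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
print(best_bipartition({"A": z0, "B": z1, "C": x0}))
```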
A Schwinger-Keldysh Formulation of Semiclassical Operator Dynamics
This paper develops a mathematical framework using Schwinger-Keldysh theory to study how quantum operators evolve and become complex over time in closed quantum systems. The approach reveals new ways to understand and measure the transition between ordered and chaotic quantum dynamics through fluctuation patterns.
Key Contributions
- Development of Schwinger-Keldysh formulation for Krylov complexity as an in-in observable
- Identification of new fluctuation diagnostics for distinguishing integrable and chaotic quantum systems
- Field-theoretic framework for understanding operator growth and complexity in closed quantum systems
View Full Abstract
In this work we develop a real-time Schwinger-Keldysh formulation of Krylov dynamics that treats Krylov complexity as an in-in observable generated by a closed time contour path integral. The resulting generating functional exposes an emergent phase-space description in which the Lanczos coefficients define an effective Hamiltonian governing operator motion along the Krylov chain. In the semiclassical limit, exponential complexity growth arises from hyperbolic trajectories, and asymptotically linear Lanczos growth appears as a universal chaotic fixed point, with sub-leading deformations classified as irrelevant, marginal or relevant. Going beyond the saddle, the Schwinger-Keldysh framework provides controlled access to fluctuations and large deviations of Krylov complexity, revealing sharp signatures of integrability-chaos crossovers that are invisible at the level of the mean. This formulation reorganises Krylov complexity into a dynamical field-theoretic framework and identifies new fluctuation diagnostics of operator growth in closed quantum systems.
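For reference, the standard Krylov-space objects the paper builds on (textbook definitions, not the paper's Schwinger-Keldysh construction) are the Lanczos recursion for the Liouvillian $\mathcal{L} = [H,\cdot\,]$ and the Krylov complexity of the Heisenberg operator expanded in that basis:

$$
\mathcal{L}\,|O_n) = b_{n+1}\,|O_{n+1}) + b_n\,|O_{n-1}), \qquad
O(t) = \sum_n i^n \varphi_n(t)\,|O_n), \qquad
K(t) = \sum_n n\,|\varphi_n(t)|^2,
$$

where $b_n$ are the Lanczos coefficients and $|O_n)$ is the orthonormal Krylov basis; asymptotically linear growth of $b_n$ is the usual operator-growth signature of chaos, which the paper recasts as a universal fixed point.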
Microscopic simulations of the coupled dynamics of cavity photons, excitons, and biexcitons
This paper studies how light and matter interact in semiconductor nanostructures using computer simulations that account for quantum effects and particle interactions. The research shows that the quantum behavior depends strongly on the frequency of light in a cavity and how strongly light couples to the material.
Key Contributions
- Development of fully quantized microscopic approach incorporating many-body Coulomb correlations
- Demonstration of biexciton continuum states influence on quantum dynamics
View Full Abstract
The coherent interaction between quantum light and material excitations in semiconductor nanostructures is investigated using a fully quantized microscopic approach that incorporates many-body Coulomb correlations. The simulations demonstrate that the quantum dynamics is influenced by biexciton continuum states and is highly sensitive to both the frequency of the cavity mode and the strength of the light-matter coupling.
Quantum clock and Newtonian time
This paper proposes replacing the standard deterministic time parameter in quantum mechanics with a 'quantum clock' that ticks randomly but maintains correct average time. The authors show this leads to modifications of the von Neumann equation and derive bounds on clock parameters using atomic clock precision limits.
Key Contributions
- Introduction of quantum clock model as extension to standard quantum mechanics
- Derivation of modified evolution equations beyond von Neumann and Lindblad forms
- Connection of theoretical model parameters to atomic clock precision limits
View Full Abstract
An extension of standard quantum mechanics is proposed in which the Newtonian time parameter appearing in the unitary evolution operator is replaced with the time shown by a 'quantum clock'. A quantum clock is defined by the following properties: (a) the time that the clock shows is non-decreasing, (b) the clock ticks at random with random tick sizes, and (c) on average the clock shows the Newtonian time. We show that the leading term in the evolution equation for the density matrix associated with any quantum clock model gives the von Neumann equation. Modifications to the von Neumann equation are worked out in detail in a parametric family of examples for which the tick sizes have a gamma distribution. The leading correction to the von Neumann equation is given by the Lindblad equation generated by the Hamiltonian, but there are higher-order terms that generalize the von Neumann equation and the Lindblad equation. Lower bounds on the parameters of these quantum clock models are derived by use of the precision limit of an atomic clock.
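Schematically, and with a coefficient that is our assumption rather than the paper's exact result, the structure described in the abstract is (setting $\hbar = 1$):

$$
\frac{d\rho}{dt} = -i[H,\rho] \;-\; \frac{\kappa}{2}\,[H,[H,\rho]] \;+\; \dots,
$$

where the first term is the von Neumann equation, the double commutator is the Lindblad dissipator generated by $H$ itself (pure dephasing in the energy basis), and $\kappa$ would be set by the statistics of the clock's tick sizes; the higher-order terms, worked out in the paper for gamma-distributed ticks, generalize both equations.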
The soliton nature of the super-Klein tunneling effect
This paper establishes a connection between integrable soliton systems and quantum tunneling phenomena in Dirac systems, showing how breather solutions from the Davey-Stewartson II equation can be mapped to planar Dirac Hamiltonians that exhibit super-Klein tunneling effects. The work explores how different parameter regimes yield various symmetry properties including Hermitian, PT-symmetric, and time-reversal breaking systems.
Key Contributions
- Establishing connection between DS II integrable soliton systems and quasi-exactly solvable Dirac Hamiltonians with super-Klein tunneling
- Construction of three-parameter family of breather solutions using Darboux transformations that map to various symmetry classes of Dirac systems
- Identification of quasi-symmetry transformations that preserve SKT subspace while not commuting with full Hamiltonian
View Full Abstract
We establish a relationship between the Davey--Stewartson II (DS II) integrable system in $(2{+}1)$ dimensions and quasi-exactly solvable planar interacting Dirac Hamiltonians that exhibit the super-Klein tunneling (SKT) effect. The Dirac interactions are constructed from the real and imaginary parts of breather solutions of the DS II system. In this framework, the SKT effect arises when the energy is tuned to match the constant background of the soliton, while the resulting Dirac Hamiltonians simultaneously support bound states embedded in the continuum. By imposing the SKT boundary conditions, we employ Darboux transformations to construct a general three-parameter family of DS II breather solutions that can be mapped to Dirac Hamiltonians. At the initial soliton time, the corresponding Dirac systems form a massless two-parameter family of Hermitian models with nontrivial electrostatic potentials. As the soliton time evolves, the systems become $\mathcal{PT}$-symmetric and develop a nontrivial imaginary mass term. Finally, when the soliton time is taken to be imaginary, the construction yields Hermitian Dirac systems that lack time-reversal symmetry. In all cases, we identify the emergence of quasi-symmetry transformations that preserve the SKT subspace of states while not commuting with the full Hamiltonian.
On Quantum Learning Advantage Under Symmetries
This paper investigates whether quantum learning algorithms can outperform classical ones when learning functions with symmetric structures. The authors compare quantum and classical statistical query models, finding that quantum learners can achieve exponential advantages in some cases and better noise tolerance, but often match classical performance bounds.
Key Contributions
- Demonstrated exponential separation between quantum and classical statistical query learning on permutation-invariant function classes
- Established that quantum statistical query complexity lower bounds generally match classical bounds for most common symmetries
- Identified tolerance-based quantum advantage where quantum learners succeed at noise levels that defeat classical algorithms
View Full Abstract
Symmetry underlies many of the most effective classical and quantum learning algorithms, yet whether quantum learners can gain a fundamental advantage under symmetry-imposed structures remains an open question. Based on evidence that classical statistical query ($\mathrm{SQ}$) frameworks have revealed exponential query complexity in learning symmetric function classes, we ask: can quantum learning algorithms exploit the problem symmetry better? In this work, we investigate the potential benefits of symmetry within the quantum statistical query ($\mathrm{QSQ}$) model, which is a natural quantum analog of classical $\mathrm{SQ}$. Our results uncover three distinct phenomena: (i) we obtain an exponential separation between $\mathrm{QSQ}$ and $\mathrm{SQ}$ on a permutation-invariant function class; (ii) we establish query complexity lower bounds for $\mathrm{QSQ}$ learning that match, up to constant factors, the corresponding classical $\mathrm{SQ}$ lower bounds for most commonly studied symmetries, although potential advantages may occur under highly skewed orbit distributions; and (iii) we further identify a tolerance-based separation, where quantum learners succeed at noise levels that render classical $\mathrm{SQ}$ algorithms ineffective. Together, these findings provide insight into when symmetry can enable quantum advantage in learning.
Position: The Need for Ultrafast Training
This paper argues for developing FPGA-based systems that can perform both machine learning training and inference in real-time with sub-microsecond latency, rather than just inference, to enable adaptive control of fast physical processes.
Key Contributions
- Advocates for ultrafast on-chip learning in FPGAs beyond static inference
- Proposes real-time adaptive systems for controlling high-frequency physical processes
View Full Abstract
Domain-specialized FPGAs have delivered unprecedented performance for low-latency inference across scientific and industrial workloads, yet nearly all existing accelerators assume static models trained offline, relegating learning and adaptation to slower CPUs or GPUs. This separation fundamentally limits systems that must operate in non-stationary, high-frequency environments, where model updates must occur at the timescale of the underlying physics. In this paper, I argue for a shift from inference-only accelerators to ultrafast on-chip learning, in which both inference and training execute directly within the FPGA fabric under deterministic, sub-microsecond latency constraints. Bringing learning into the same real-time datapath as inference would enable closed-loop systems that adapt as fast as the physical processes they control, with applications spanning quantum error correction, cryogenic qubit calibration, plasma and fusion control, accelerator tuning, and autonomous scientific experiments. Enabling such regimes requires rethinking algorithms, architectures, and toolflows jointly, but promises to transform FPGAs from static inference engines into real-time learning machines.
Scalable Quantum-Classical DFT Embedding for NISQ Molecular Simulation
This paper develops a quantum-classical hybrid method that combines quantum computing with density functional theory (DFT) to simulate molecular systems on near-term quantum computers. The approach embeds a small number of electrons in a quantum calculation while treating the rest classically, achieving 60-68% correlation energy recovery across various molecules using only 10 qubits.
Key Contributions
- Development of scalable quantum-classical DFT embedding method for NISQ devices
- Demonstration of systematic correlation energy recovery across diverse molecular systems
- Practical guidelines showing 60% correlation recovery with 10 qubits in (4e,6o) active space
View Full Abstract
Scalable quantum-classical embedding is essential for chemically meaningful simulations on near-term NISQ hardware. Using QDFT, we show systematic recovery of correlation energy relative to the DFT baseline, benchmarked against CCSD in a fixed six-orbital active space across molecules ranging from water to naphthalene. By varying the number of embedded electrons from 2 to 8, aromatic systems saturate near 63-64 percent, while linear molecules such as carbon dioxide reach 68 percent. All systems converge within two embedding iterations under relaxed self-consistency thresholds, highlighting the robustness of the approach. A (4e,6o) active space recovers approximately 60 percent correlation using 10 qubits, providing practical guidelines for NISQ-era simulations.
Universal scaling of finite-temperature quantum adiabaticity in driven many-body systems
This paper develops mathematical criteria to determine when quantum systems at finite temperature can be driven slowly enough to remain in thermal equilibrium, extending previous work that only applied to systems at absolute zero temperature. The researchers derive universal scaling laws that show how the required driving speed depends on both system size and temperature.
Key Contributions
- Derived rigorous bounds on fidelity for mixed quantum states at finite temperature using Liouville space formulation
- Established universal scaling laws showing threshold driving rates factorize into system-size and temperature-dependent factors
- Provided practical model-independent criteria for finite-temperature adiabaticity in many-body quantum systems
View Full Abstract
Establishing quantitative adiabaticity criteria at finite temperature remains substantially less developed than in the pure-state setting, despite the fact that realistic quantum systems are never at absolute zero. Here we derive rigorous bounds on the Hilbert-Schmidt fidelity between mixed states by combining a mixed-state quantum speed limit with mixed-state fidelity susceptibility within the Liouville space formulation of quantum mechanics. Applied to protocols that drive an initial Gibbs state toward a quasi-Gibbs target, these bounds yield an explicit threshold driving rate for the onset of nonadiabaticity. For a broad class of local Hamiltonians in gapped phases, we show that, in the thermodynamic limit, the threshold factorizes into two factors: a system-size contribution that recovers the zero-temperature scaling and a universal temperature-dependent factor. The latter is exponentially close to unity at low temperature, whereas at high temperature it increases linearly with temperature. We verify the predicted scaling in several spin-1/2 chains by obtaining closed-form expressions for the threshold driving rate. Our results provide practical and largely model-independent criteria for finite-temperature adiabaticity in closed many-body systems.
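In schematic form, the factorization stated in the abstract reads (our notation; the precise functional forms are the paper's results, and the low-temperature correction scale is our assumption):

$$
v_{\mathrm{th}}(N,T) \;\simeq\; v_{\mathrm{th}}(N,0)\, g(T), \qquad
g(T) \approx \begin{cases} 1 + O\!\big(e^{-\Delta/T}\big), & T \ll \Delta, \\ c\,T, & T \gg \Delta, \end{cases}
$$

where $v_{\mathrm{th}}$ is the threshold driving rate for the onset of nonadiabaticity, the first factor carries the zero-temperature system-size scaling, and $\Delta$ is the spectral gap.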
Exceptional phase transition in a single Kerr-cat qubit
This paper investigates quantum phase transitions in a Kerr-cat qubit system at exceptional points where the system transitions from oscillatory to overdamped behavior. The research demonstrates how these non-Hermitian quantum systems exhibit unique quantum coherence properties that can be observed through Wigner function negativity.
Key Contributions
- Demonstration of Liouvillian exceptional point quantum phase transitions in continuous-variable Kerr-cat qubits
- Introduction of phase difference between Liouvillian eigenmatrix off-diagonal elements as a transition quantification parameter
- Identification of Wigner function negativity as a signature of genuine quantum coherence in non-Hermitian systems
View Full Abstract
Exceptional points in non-Hermitian quantum systems give rise to novel genuine quantum phenomena. Recent explorations of exceptional-point-induced quantum phase transitions have extended from discrete-variable to continuous-variable-encoded quantum systems. However, quantum phase transitions driven by Liouvillian exceptional points (LEPs) in continuous-variable platforms remain largely unexplored. Here, we construct and investigate a Liouvillian exceptional structure based on a driven-dissipative Kerr-cat qubit. Through numerical simulations, we reveal a quantum phase transition occurring at the LEP, characterized by a sudden change in dynamical behavior from underdamped oscillations to overdamped relaxation, as visualized via Wigner functions and Bloch-sphere trajectories. Notably, the negativity of the Wigner function serves as a direct signature of genuine quantum coherence unattainable in conventional single-qubit non-Hermitian systems. Furthermore, we introduce the phase difference between the off-diagonal elements of the Liouvillian eigenmatrices as a novel parameter to quantify the transition. Our results establish the Kerr-cat qubit as a novel continuous-variable setting for exploring dissipative quantum criticality and intrinsic non-Hermitian physics.
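For context, the textbook Kerr-cat construction (not necessarily the paper's exact parameter regime) is a two-photon-driven Kerr resonator,

$$
H = -K\,a^{\dagger 2}a^{2} + \epsilon_2\big(a^{\dagger 2} + a^{2}\big),
$$

whose degenerate ground manifold is spanned by the cat states $|\mathcal{C}_\alpha^{\pm}\rangle \propto |\alpha\rangle \pm |{-\alpha}\rangle$ with $\alpha = \sqrt{\epsilon_2/K}$; adding single-photon loss turns the dynamics into a Liouvillian, whose spectrum is where exceptional points and the reported phase transition appear.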
Quantum vortex channels as Josephson junctions
This paper demonstrates how quantum vortices in rotating binary condensates can create self-induced weak links that act as Josephson junctions, allowing controlled superflow between phase-separated domains. The researchers show these vortex channels can be tuned from hydrodynamic transport to quantum tunneling regimes by adjusting interspecies interactions.
Key Contributions
- Discovery of self-induced weak links formed by quantum vortices in binary condensates
- Demonstration of tunable crossover from hydrodynamic transport to Josephson tunneling regime
- Development of circuit models that quantitatively describe dc current-phase relations in vortex-based junctions
View Full Abstract
In quantum gases, weak links are typically realized with externally imposed optical potentials. We show that, in rotating binary condensates, quantized vortices in one component form hollow channels that act as self-induced weak links for the other, enabling superflow through otherwise impenetrable, phase-separated domains. This introduces a novel barrier mechanism: quantum pressure creates an effective barrier inside the vortex channel, set by the constriction width, which controls the superflow. Tuning the interspecies interaction strength drives a crossover from the hydrodynamic transport to Josephson tunneling regime. Long-range dipolar interactions further tune the weak-link properties, enabling both short links and two coupled junctions in series. Circuit models quantitatively capture the dc current-phase relations for both configurations. These results establish vortices as reconfigurable, interaction-controlled Josephson elements in superfluids.
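For orientation, the dc current-phase relation that such circuit models are benchmarked against is, in its ideal tunneling form (textbook relation; how closely the vortex channels follow it versus a skewed variant is a result of the paper, not reproduced here):

$$
I(\phi) = I_c \sin\phi,
$$

with $I_c$ the critical current set by the weak-link barrier and $\phi$ the condensate phase difference across the junction; hydrodynamic transport generally deviates from this sinusoidal form, which is what makes the reported crossover visible.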
Spin-orbit-dependent lifetimes of long-range Rydberg molecules
This paper studies long-range Rydberg molecules formed when highly excited electrons interact with ground-state atoms, focusing on how spin-orbit coupling affects their lifetimes and decay processes. The researchers combined theoretical modeling with experimental measurements using ultracold cesium gas to understand how these exotic molecular states form and decay.
Key Contributions
- Combined theoretical and experimental study of spin-orbit effects on Rydberg molecule lifetimes
- Identification of two families of molecular wells with different lifetime characteristics
- Demonstration that spin-orbit interactions strongly control inner-well binding and reduce lifetimes
View Full Abstract
Long-range Rydberg molecules (LRMs) form when a highly excited Rydberg electron scatters from ground-state atoms inside its orbit, creating oscillatory, long-range potentials. We present a combined theoretical and experimental study of caesium dimers correlated to $40\,^2P_{3/2}$ Rydberg states, with an emphasis on decay via autoionisation (associative ionisation). Our model includes a relativistic treatment of electron-atom scattering with spin-orbit coupling, the perturber's hyperfine structure, and coupling of vibrational levels to a continuum of short-range decay channels. Calculated potential-energy curves predict two families of wells: outer wells near the classical outer turning point supporting long-lived states, and inner wells at shorter range whose lifetimes are limited by tunneling and subsequent vibronic decay. Using photoassociation in an ultracold Cs gas and an analysis of pulsed-field-ionisation signals which are highly selective for the detection of molecules, we assign resonances by binding energy and measure lifetimes. The measured lifetimes of inner-well states increase systematically with increasing detuning and agree with calculated lifetimes; detection of Cs$_2^+$ product ions supports autoionisation as a dominant channel. We show that the lifetimes are strongly reduced by spin-orbit interactions in the transient Cs-collision complex, which lift the near-degeneracy in $\Omega$ observed for states in the outer well and control the inner-well binding. The identified states also provide promising pathways to create ultracold molecules in ion-pair states.
Quantum Circuit Representation of Bosonic Matrix Functions
This paper establishes mathematical connections between bosonic quantum systems (like photon networks) and quantum spin models by showing how their computational problems can be expressed through the same matrix functions (permanent, hafnian, loop-hafnian). The work extends previous results to more general interaction networks and demonstrates how to construct quantum circuits for these computationally hard problems.
Key Contributions
- Extended Ising model construction to arbitrary interaction networks showing transition amplitudes are proportional to hafnian and loop-hafnian functions
- Established unified framework connecting bosonic networks, quantum spin dynamics, and matrix functions with corresponding quantum circuit designs
View Full Abstract
Bosonic counting problems can be framed as estimation tasks of matrix functions such as the permanent, hafnian, and loop-hafnian, depending on the underlying bosonic network. Remarkably, the same functions also arise in spin models, including the Ising and Heisenberg models, where distinct interaction structures correspond to different matrix functions. This correspondence has been used to establish the classical hardness of simulating interacting spin systems by relating their output distributions to #P-hard quantities. Previous works, however, have largely been restricted to bipartite spin interactions, where transition amplitudes, which provide the leading-order contribution to the output probabilities, are proportional to the permanent. In this work, we extend the Ising model construction to arbitrary interaction networks and show that transition amplitudes of the Ising Hamiltonian are proportional to the hafnian and the loop-hafnian. The loop-hafnian generalizes both the permanent and hafnian, but unlike these cases, loop-hafnian-based states require Dicke-like superpositions, making the design of corresponding quantum circuits non-trivial. Our results establish a unified framework linking bosonic networks of single photons and Gaussian states with quantum spin dynamics and matrix functions. This unification not only broadens the theoretical foundation of quantum circuit models but also highlights new, diverse, and classically intractable applications.
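The three matrix functions in play have the following standard definitions (independent of this paper):

$$
\mathrm{perm}(A) = \sum_{\sigma \in S_n}\prod_{i=1}^{n} A_{i,\sigma(i)}, \qquad
\mathrm{haf}(B) = \sum_{M \in \mathrm{PM}(2n)}\;\prod_{(i,j)\in M} B_{ij},
$$

where $\mathrm{PM}(2n)$ is the set of perfect matchings of $2n$ vertices; the loop-hafnian is the same sum taken over matchings that may also include self-pairings $(i,i)$, thereby picking up the diagonal entries $B_{ii}$. In the bosonic setting, the permanent governs single-photon interferometry, the hafnian Gaussian states, and the loop-hafnian Gaussian states with displacement, which is the correspondence the paper carries over to spin transition amplitudes.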
Finite-Size Scaling of the Full Eigenstate Thermalization in Quantum Spin Chains
This paper studies how quantum many-body systems thermalize by examining the eigenstate thermalization hypothesis (ETH) in quantum spin chains. The researchers use exact diagonalization to analyze how finite-size corrections to thermalization decay with system size, identifying two distinct sources of corrections with different scaling behaviors.
Key Contributions
- Identified two distinct sources of finite-size corrections to ETH with polynomial and exponential decay rates
- Resolved anomalous finite-size scaling behavior in chaotic quantum systems
- Provided systematic methodology for validating full ETH in quantum many-body systems
View Full Abstract
Despite the unitary evolution of closed quantum systems, long-time expectation values of local observables are well described by thermal ensembles, providing the foundation of quantum statistical mechanics. A promising route to understanding this quantum thermalization is the eigenstate thermalization hypothesis (ETH), which posits that individual energy eigenstates already appear locally thermal. Subsequent studies have extended this concept to the full ETH, which captures higher-order correlations among matrix elements through nontrivial relations. In this work, we perform a detailed exact-diagonalization study of finite-size corrections to these relations in the canonical ensemble. We distinguish two distinct sources of corrections: those arising from energy fluctuations, which decay polynomially with system size, and those originating from fluctuations within each energy window, which decay exponentially with system size. In particular, our analysis resolves the puzzle that, for certain observables, finite-size corrections exhibit anomalous growth with increasing system size even in chaotic systems. Our results provide a systematic and practical methodology for validating the full ETH in quantum many-body systems.
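For reference, the leading-order ETH ansatz that the full ETH refines is (standard form):

$$
O_{ij} = \bar{O}(\bar{E})\,\delta_{ij} + e^{-S(\bar{E})/2} f_O(\bar{E},\omega)\,R_{ij}, \qquad \bar{E} = \tfrac{E_i + E_j}{2}, \quad \omega = E_i - E_j,
$$

where $S$ is the thermodynamic entropy, $\bar{O}$ and $f_O$ are smooth functions, and $R_{ij}$ are pseudo-random numbers of zero mean and unit variance; the full ETH additionally constrains higher-order correlations among the $R_{ij}$, and it is the finite-size corrections to those constraints that the paper analyzes.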
Semidefinite programming for understanding limitations of Lindblad equations
This paper develops a mathematical framework using semidefinite programming to determine when Lindblad equations (commonly used to describe quantum systems interacting with their environment) can accurately describe both quantum populations and coherences. The authors find that in most cases, these popular equations fail to simultaneously capture both aspects accurately, even for weak system-environment coupling.
Key Contributions
- Formulated the problem of determining Lindblad equation validity as a semidefinite program for both equilibrium and non-equilibrium steady states
- Demonstrated rigorous no-go results showing that accurate Markovian descriptions are fundamentally impossible in most parameter regimes for XXZ-type models
View Full Abstract
Lindbladian quantum master equations (LEs) are the most popular descriptions for quantum systems weakly coupled to baths. However, recent works have established that in many situations such Markovian descriptions are fundamentally limited: they cannot simultaneously capture populations and coherences even to leading order in the system-bath couplings. This can cause violation of fundamental properties like thermalization and continuity equations associated with local conservation laws, even when such properties are expected in the actual setting. This raises the question: given a physical situation, how do we know if there exists an LE that describes it to a desired accuracy? Here we show that, for both equilibrium and non-equilibrium steady states (NESS), this question can be succinctly formulated as a semidefinite program (SDP), a convex optimization technique. If a solution to the SDP can be found to a desired accuracy, then an LE description is possible for the chosen setting. If not, no LE description is fundamentally attainable, showing that a consistent Markovian treatment is impossible even at weak system-bath coupling for that particular setting. Considering few-qubit isotropic XXZ-type models coupled to multiple baths, we find that in most parameter regimes an LE description giving accurate populations and coherences to leading order is unattainable, leading to rigorous no-go results. However, an LE description having correct populations but inaccurate coherences, while satisfying local conservation laws, is possible over some of the parameter regimes. Our work highlights the power of semidefinite programming in the analysis of physically consistent LEs, and thereby in understanding the limits of Markovian descriptions at weak system-bath couplings.
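For readers unfamiliar with the tool, a semidefinite program in standard form is the convex optimization

$$
\min_{X \succeq 0} \ \langle C, X\rangle \quad \text{s.t.} \quad \langle A_k, X\rangle = b_k,\ \ k = 1,\dots,m, \qquad \langle A, B\rangle = \mathrm{Tr}\big(A^{\dagger}B\big),
$$

which interior-point solvers can solve, or certify infeasible, to a prescribed accuracy; that certificate of infeasibility is what turns "no consistent Lindblad description exists for this setting" into a rigorous no-go statement. The specific constraint matrices encoding populations, coherences, and conservation laws are the paper's construction and are not reproduced here.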
Relativistic Position Verification with Coherent States
This paper demonstrates a quantum-based position verification protocol using weak coherent light states and relativity principles. Two verifiers separated by 2 km successfully authenticate a prover's location within 75 meters, providing the first practical implementation of secure quantum position verification.
Key Contributions
- First experimental demonstration of secure quantum position verification protocol
- Achieved 75-meter location accuracy using phase-randomized weak coherent states over 2 km distance
View Full Abstract
Determining the position of an entity is a fundamental prerequisite for nearly all activities. Classical means, however, have been proven incapable of providing secure position verification, meaning that a prover can mislead verifiers about its actual position. In this work, we propose and experimentally realize a secure position-verification protocol that leverages quantum optics and relativity within an information-theoretic framework. Using phase-randomized weak coherent states, two verifiers separated by 2 km securely verify the prover's position with an accuracy better than 75 meters. These results establish secure position-based authentication as a practical possibility, paving the way for applications in financial transactions, disaster response, and authenticated secure communications.
Gravitational effects on a dissipative two-level atom in the weak-field regime
This paper studies how weak gravitational fields affect the spontaneous emission rate of a two-level atom interacting with a scalar field. The researchers derive a quantum master equation and find that gravity can either enhance or suppress the atom's emission rate depending on various parameters like the atom's position and dipole orientation.
Key Contributions
- Derivation of quantum master equation for two-level atom in weak gravitational field using Feynman-Vernon formalism
- Identification of parameter regimes where gravitational field enhances or suppresses spontaneous emission rate
View Full Abstract
We investigate the dissipative dynamics of a two-level atom in a weak gravitational field. Using the Feynman--Vernon influence functional formalism, we derive a quantum master equation describing the two-level atom interacting with a scalar field in a Newtonian gravitational field, and compute the energy dissipation rate of the atom. We find that the spontaneous emission rate (the dissipation rate in vacuum) is modified by the gravitational field. Specifically, this modification depends on the atom's dipole, the position of the atom relative to the source of the gravitational field, and the frequency of the scalar radiation emitted by the atom. Furthermore, we identify the parameter regimes in which the spontaneous emission rate is enhanced or suppressed by gravity. We also discuss how the modification arises from time dilation and dipole radiation in a weak gravitational field. These findings provide a theoretical basis for exploring gravitational effects in open quantum systems.
Dual channel multi-product formulas
This paper introduces a dual-channel multi-product formula method for quantum simulation that reduces the number of quantum circuit operations needed to achieve a target precision by approximately half compared to existing approaches. The method improves the scaling of Trotter errors in product-formula based quantum simulation, making it more practical for near-term quantum computers with limited performance.
Key Contributions
- Introduction of dual-channel multi-product formula achieving two-fold improvement in Trotter error scaling
- Demonstration of reduced circuit depth requirements for quantum simulation with lower physical error mitigation overhead
View Full Abstract
Product-formula (PF) based quantum simulation is a promising approach for simulating quantum systems on near-term quantum computers. Achieving a desired simulation precision typically requires a polynomially increasing number of Trotter steps, which remains challenging due to the limited performance of current quantum hardware. To alleviate this issue, post-processing techniques such as the multi-product formula (MPF) have been introduced to suppress algorithmic errors within restricted hardware resources. In this work, we propose a dual-channel multi-product formula that achieves a two-fold improvement in Trotter error scaling. As a result, our method enables the target simulation precision to be reached with approximately half the circuit depth compared to conventional MPF schemes. Importantly, the reduced circuit depth directly translates into lower physical error mitigation overhead when implemented on real quantum hardware. We demonstrate that, for a fixed CNOT count as a measure of quantum circuit cost, our proposal yields significantly smaller algorithmic errors, while the sampling error remains essentially unchanged.
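For context, a conventional multi-product formula combines several Trotterizations with different step counts using signed weights chosen to cancel low-order error terms, schematically

$$
U_{\mathrm{MPF}}(t) = \sum_j c_j\,\Big[S_2\!\big(\tfrac{t}{k_j}\big)\Big]^{k_j}, \qquad \sum_j c_j = 1,
$$

where $S_2$ is a second-order product formula and the pairs $(c_j, k_j)$ are chosen so that leading Trotter errors cancel. This is the standard construction, not the paper's dual-channel variant, which modifies it to gain the additional factor of two in error scaling.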
N-dimensional Coulomb-Sturmians with noninteger quantum numbers
This paper extends Coulomb-Sturmian functions to allow non-integer quantum numbers through Bagci-Hoggan exponential-type orbitals, deriving differential equations for N-dimensional cases. The work shows that traditional Coulomb-Sturmian functions are special cases of this generalized framework and clarifies the relationship between different orbital basis sets.
Key Contributions
- Derivation of differential equations for N-dimensional Bagci-Hoggan orbitals with fractional quantum numbers
- Mathematical unification showing Coulomb-Sturmian functions as special cases of the generalized framework
- Clarification that Guseinov's Psi-alpha-ETOs are N-dimensional Coulomb-Sturmians with shifted dimensional parameters
View Full Abstract
Coulomb-Sturmian functions are complete, orthonormal, and include the full spectrum of continuum states. They are restricted to integer values of quantum numbers, as imposed by boundary and orthonormality conditions. Bagci-Hoggan exponential-type orbitals remove this restriction through a generalization to quantum numbers of fractional order. The differential equations for N-dimensional Bagci-Hoggan orbitals are derived. It is demonstrated that Coulomb-Sturmian functions satisfy a particular case of these equations. Additionally, Guseinov's Psi-alpha-ETOs are identified as N-dimensional Coulomb-Sturmians with a shifted dimensional parameter alpha, rather than as an independent complete orthonormal basis set in a weighted Hilbert space.
Optimal Control to Minimize Dissipation and Fluctuations in Open Quantum Systems Beyond Slow and Rapid Regimes
This paper develops a new optimal control method for quantum systems that works at intermediate timescales, not just slow or fast driving regimes. The method minimizes both energy dissipation and fluctuations in open quantum systems by converting complex history-dependent calculations into simpler time-local integrals.
Key Contributions
- Development of optimal control framework for intermediate timescales in open quantum systems
- Mathematical technique to convert history-dependent work variance into time-local integrals for efficient optimization
View Full Abstract
Optimal control is a central problem in quantum thermodynamics. While control theories in the rapid-driving and slow-driving limits have been developed, to the best of our knowledge there is no general optimization method applicable to intermediate timescales. We introduce an optimal-control framework to minimize dissipated work and work variance, defined via the two-point measurement scheme, in open quantum systems governed by time-dependent Lindblad master equations. By introducing an auxiliary operator, we convert the history-dependent work variance into a time-local integral, enabling efficient gradient-based optimization beyond slow or rapid driving regimes. Applying our method, we find that in the coherent spin-boson model the optimized protocol can switch discontinuously between distinct locally optimal solutions as the relative weight between dissipation and fluctuations is varied. Moreover, for a single-level quantum dot coupled to a fermionic reservoir, the optimized fluctuation-minimizing protocol develops a qualitatively different multi-step structure that is not captured by approaches based on slow- or rapid-driving limits.
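The two-point-measurement (TPM) work statistics referred to here have the standard form (not specific to this paper):

$$
P(W) = \sum_{n,m} p^{\,0}_{n}\; p^{\,\tau}_{m|n}\; \delta\!\big(W - (\varepsilon^{\tau}_{m} - \varepsilon^{0}_{n})\big),
$$

where $\varepsilon^{0}_{n}$ and $\varepsilon^{\tau}_{m}$ are eigenvalues of the initial and final Hamiltonians, $p^{0}_{n}$ is the probability of the first measurement outcome, and $p^{\tau}_{m|n}$ is the conditional probability of the second; dissipated work and work variance are moments of this distribution, and the protocol dependence of $p^{\tau}_{m|n}$ is the history dependence that the paper's auxiliary operator renders time-local.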
Quantum Jacobi-Davidson Method
This paper develops two new quantum algorithms (QJD and SBQJD) for finding energy eigenstates of quantum systems, which is crucial for understanding electronic structures in materials and molecules. The methods show faster convergence and require fewer measurements compared to existing quantum approaches when tested on various quantum systems including molecular Hamiltonians.
Key Contributions
- Development of Quantum Jacobi-Davidson (QJD) and Sample-Based Quantum Jacobi-Davidson (SBQJD) algorithms for eigenvalue problems
- Demonstration of faster convergence and reduced Pauli measurements compared to existing Quantum Davidson method
- Validation on multiple quantum systems including Ising models and molecular Hamiltonians up to 12 qubits
View Full Abstract
Computing electronic structures of quantum systems is a key task underpinning many applications in photonics, solid-state physics, and quantum technologies. This task is typically performed through iterative algorithms to find the energy eigenstates of a Hamiltonian, which are usually computationally expensive and suffer from convergence issues. In this work, we develop and implement the Quantum Jacobi-Davidson (QJD) method and its quantum diagonalization variant, the Sample-Based Quantum Jacobi-Davidson (SBQJD) method, and demonstrate their fast convergence for ground state energy estimation. We assess the intrinsic algorithmic performance of our methods through exact numerical simulations on a variety of quantum systems, including 8-qubit diagonally dominant matrices, 12-qubit one-dimensional Ising models, and a 10-qubit water molecule (H$_2$O) Hamiltonian. Our results show that both QJD and SBQJD achieve significantly faster convergence and require fewer Pauli measurements compared to the recently reported Quantum Davidson method, with SBQJD further benefiting from optimized reference state preparation. These findings establish the QJD framework as an efficient general-purpose subspace-based technique for solving quantum eigenvalue problems, providing a promising foundation for sparse Hamiltonian calculations on future fault-tolerant quantum hardware.
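For orientation, classical Jacobi-Davidson grows its search subspace by approximately solving a projected correction equation at each step; the quantum variants adapt this structure (the equation below is the textbook classical form, not the paper's circuit-level construction):

$$
(I - u u^{\dagger})\,(A - \theta I)\,(I - u u^{\dagger})\, t = -r, \qquad t \perp u, \qquad r = A u - \theta u,
$$

where $(\theta, u)$ is the current Ritz pair and the correction $t$ is orthogonalized against the subspace and appended to it.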
Unified entropy production in finite quantum systems
This paper develops a unified framework for defining entropy production in finite-dimensional quantum systems by using quantum relative entropy with respect to reference states having effective temperatures. The authors show this definition decomposes into classical Clausius-type entropy production plus corrections from time-dependent effective temperature, and derive conditions for when entropy production remains non-negative.
Key Contributions
- Unified definition of entropy production in finite quantum systems using quantum relative entropy
- Decomposition into Clausius-type entropy production plus effective temperature corrections
- Derivation of lower bounds and non-negativity conditions for entropy production
View Full Abstract
In finite-dimensional quantum systems, temperature cannot be uniquely defined. This, in turn, implies that there are several ways to define entropy production in finite-dimensional quantum systems, because the classical entropy production depends on temperature. We propose a unified definition of entropy production based on the difference in quantum relative entropy with respect to reference states characterized by effective temperatures. We demonstrate that the proposed definition naturally decomposes into a Clausius-type entropy production and an additional contribution arising from the time dependence of the effective temperature. Furthermore, we show that requiring the entropy production rate to take the conventional form as the sum of the entropy change and the heat flow constrains the effective temperature to be either constant or equal to a specific energy-matching effective temperature. For general initial states, entropy production can become negative, in which case we derive lower bounds on entropy production and establish sufficient conditions for its non-negativity using the trace distance.
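Schematically, and in our notation rather than the paper's, the objects involved are the quantum relative entropy and the stated decomposition:

$$
S(\rho\|\sigma) = \mathrm{Tr}\big[\rho\ln\rho - \rho\ln\sigma\big], \qquad
\Sigma = \Delta S_{\mathrm{sys}} - \beta_{\mathrm{eff}}\,Q + \Sigma_{T_{\mathrm{eff}}},
$$

where the first two terms of $\Sigma$ form the Clausius-type entropy production at inverse effective temperature $\beta_{\mathrm{eff}}$ (sign conventions for the heat $Q$ vary) and $\Sigma_{T_{\mathrm{eff}}}$ collects the extra contribution from the time dependence of the effective temperature.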
Steady-state skin effect in bosonic topological edge states under parametric driving
This paper proposes a method to create a quantum version of the non-Hermitian skin effect in bosonic systems by using parametric driving of topological edge states, resulting in particle accumulation at corners without energy dissipation. The work demonstrates how to bridge non-Hermitian physics with practical quantum systems using Bogoliubov-de Gennes theory.
Key Contributions
- Demonstration of steady-state skin effect in quantum bosonic systems without dissipation
- Novel approach using parametric driving of topological edge states to realize non-Hermitian quantum effects
- Bridge between non-Hermitian mathematical theory and practical quantum condensed matter systems
View Full Abstract
Non-Hermitian systems have attracted significant theoretical interest due to their extreme properties. However, realizations have mostly been limited to classical applications or artificial setups. In this study, we focus on the quantum nature inherent in bosonic Bogoliubov-de Gennes (BdG) systems, which from the perspective of spectral theory corresponds to non-Hermiticity. Based on this insight, we propose a steady-state skin effect in quantum condensed matter utilizing such BdG non-Hermiticity. Specifically, we introduce BdG quantum terms arising from parametric pumping to the edge states of an underlying bosonic Hermitian Chern insulator, thereby realizing non-Hermiticity without dissipation. This system design has the advantage of being largely independent of microscopic model details. Through analysis using non-equilibrium Green's functions, we find that under open boundary conditions, a steady state exhibiting the non-Hermitian skin effect is realized. The pronounced corner particle accumulation observed in this steady state shows quadrature anisotropy, which manifests the bosonic quantum nature. Our results bridge the gap between the fascinating mathematics of non-Hermitian matrices and practical quantum physical systems.