Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
This page updates automatically as new papers are published and covers one week of arXiv publishing (Sunday to Thursday). An archive of previous weeks is at the bottom.
Many-body contextuality and self-testing quantum matter via nonlocal games
This paper studies quantum contextuality in many-body systems using multiplayer nonlocal games that can be won perfectly by measuring quantum error-correcting code states. The authors develop methods to calculate classical success probabilities for these games and show how certain games can be used to verify specific quantum states through self-testing.
Key Contributions
- Development of nonlocal games based on CSS error-correcting codes that demonstrate quantum contextuality
- Introduction of efficient methods to calculate classical success probabilities using Walsh-Hadamard spectra and hypergraph symmetries
- Demonstration of self-testing protocols for quantum error-correcting codes like the 2D toric code
View Full Abstract
Contextuality is arguably the fundamental property that makes quantum mechanics different from classical physics. It is responsible for quantum computational speedups in both magic-state-injection-based and measurement-based models of computation, and can be directly probed in a many-body setting by multiplayer nonlocal quantum games. Here, we discuss a family of games that can be won with certainty when performing single-site Pauli measurements on a state that is a codeword of a Calderbank-Shor-Steane (CSS) error-correcting quantum code. We show that these games require deterministic computation of a code-dependent Boolean function, and that the classical probability of success is upper bounded by a generalized notion of nonlinearity/nonquadraticity. This success probability quantifies the state's contextuality, and is computed via the function's (generalized) Walsh-Hadamard spectrum. To calculate this, we introduce an efficient, many-body-physics-inspired method that involves identifying the symmetries of an auxiliary hypergraph state. We compute the classical probability of success for several paradigmatic CSS codes and relate it to both classical statistical mechanics models and to strange correlators of symmetry-protected topological states. We also consider CSS submeasurement games, which can only be won with certainty by sharing the appropriate codeword up to local isometries. These games therefore enable self-testing, which we illustrate explicitly for the 2D toric code. We also discuss how submeasurement games enable an extensive notion of contextuality in many-body states.
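As a concrete point of reference, the sketch below brute-forces the classical value of the three-player GHZ game, the textbook example of a parity game that is won perfectly by Pauli measurements on a stabilizer state. It is shown only to illustrate the kind of classical success probability the paper bounds and is not necessarily one of the paper's CSS-code games.

```python
import itertools

# Brute-force the classical value of the three-player GHZ game.
INPUTS = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]          # promise: x ^ y ^ z = 0

def wins(question, answers):
    x, y, z = question
    return (answers[0] ^ answers[1] ^ answers[2]) == (x | y | z)

single_player = list(itertools.product((0, 1), repeat=2))       # deterministic strategy: input bit -> output bit
best = 0.0
for strat in itertools.product(single_player, repeat=3):
    score = sum(wins(q, [s[b] for s, b in zip(strat, q)]) for q in INPUTS)
    best = max(best, score / len(INPUTS))
print(best)   # 0.75 -- classically the game cannot be won with certainty,
              # whereas Pauli measurements on the GHZ state win every round
```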
A magic criterion (almost) as nice as PPT, with applications in distillation and detection
This paper introduces the Triangle Criterion, a new method for detecting and characterizing 'magic' in quantum states (a resource needed for quantum advantage), which works similarly to how the PPT criterion detects entanglement. The authors prove that multi-qubit magic distillation protocols are fundamentally more powerful than single-qubit approaches and identify fundamental limitations in detecting certain types of magic states.
Key Contributions
- Introduction of the Triangle Criterion for magic state detection with geometric interpretation and operational significance
- Proof that multi-qubit magic distillation protocols are strictly more powerful than single-qubit schemes
- Derivation of upper bounds on minimal purity of magic states and prediction of unfaithful magic states
- Discovery of fundamental limitations in single-copy magic detection schemes
View Full Abstract
We introduce a mixed-state magic criterion, the Triangle Criterion, which plays a role for magic analogous to the Positive Partial Transposition (PPT) criterion for entanglement: it combines strong detection capability, a clear geometric interpretation, and an operational link to magic distillation. Using this criterion, we uncover several new features of multi-qubit magic distillation and detection. We prove that genuinely multi-qubit magic distillation protocols are strictly more powerful than all single-qubit schemes by showing that the Triangle Criterion is not stable under tensor products, in sharp contrast to the PPT criterion. Moreover, we show that, with overwhelming probability, multi-qubit magic states with relatively low rank cannot be distilled by any single-qubit distillation protocol. We derive an upper bound on the minimal purity of magic states, which is conjectured to be tight with both numerical and constructive evidence. Using this minimal-purity result, we predict the existence of unfaithful magic states, namely states that cannot be detected by any fidelity-based magic witness, and reveal fundamental limitations of mixed-state magic detection in any single-copy scheme.
Topological magic response in quantum spin chains
This paper introduces the concept of 'topological magic response' - how quantum spin chain systems spread over stabilizer space when perturbed by non-Clifford quantum operations. The authors show that symmetry-protected topological phases exhibit this response while trivial phases do not, providing new insights into the role of magic states in topological quantum matter.
Key Contributions
- Introduction of topological magic response as a new characterization tool for quantum phases
- Demonstration that SPT phases exhibit robust magic response while trivial phases remain featureless
- Development of algorithmic techniques using matrix product states to compute stabilizer Rényi entropies
- Connection between nonstabilizerness (magic) and topological quantum phases
View Full Abstract
Topological matter provides natural platforms for robust, non-local information storage, central to quantum error correction. Yet, while the relation between entanglement and topology is well established, little is known about the role of nonstabilizerness (or magic), a pivotal concept in fault-tolerant quantum computation, in topological phases. We introduce the concept of topological magic response, the ability of a state to spread over stabilizer space when perturbed by finite-depth non-Clifford circuits. Unlike a topological invariant or order parameter, this response function probes how a phase reacts to non-Clifford perturbations, revealing the presence of non-local quantum correlations. In Ising-type spin chains, we show that symmetry-broken and paramagnetic phases lack such a response, whereas symmetry-protected topological (SPT) phases always display it. To capture this, we utilize a combination of stabilizer Rényi entropies that, in analogy with topological entanglement entropy, isolates non-locally stored information. Using exact analytic computations and matrix product states simulations based on an algorithmic technique we introduce, we show that SPT phases doped with $T$ gates support robust topological magic response, while trivial phases remain featureless.
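For readers unfamiliar with the stabilizer Rényi entropy that underlies the magic response, the minimal dense-vector sketch below computes M_2 for single-qubit states; the combinations of entropies and the matrix-product-state algorithm used in the paper go well beyond this toy calculation.

```python
import numpy as np
from functools import reduce
from itertools import product

# Stabilizer 2-Renyi entropy M_2 = -log2( sum_P <psi|P|psi>^4 / 2^n ),
# the basic quantity entering the combinations discussed in the abstract.
I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
PAULIS = [I, X, Y, Z]

def stabilizer_renyi_2(psi):
    n = int(np.log2(len(psi)))
    total = 0.0
    for ops in product(PAULIS, repeat=n):
        P = reduce(np.kron, ops)
        total += np.vdot(psi, P @ psi).real ** 4
    return -np.log2(total / 2**n)

plus = np.array([1, 1]) / np.sqrt(2)
t_state = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)     # T|+>
print(stabilizer_renyi_2(plus))      # 0.0 for any stabilizer state
print(stabilizer_renyi_2(t_state))   # ~0.415, a nonzero amount of "magic"
```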
Fast Native Three-Qubit Gates and Fault-Tolerant Quantum Error Correction with Trapped Rydberg Ions
This paper develops a fast three-qubit quantum gate using trapped ions excited to Rydberg states, achieving 97% fidelity in 2 microseconds. The researchers demonstrate how this native three-qubit gate can enable fault-tolerant quantum error correction using the nine-qubit Bacon-Shor code.
Key Contributions
- First implementation scheme for native controlled-controlled-Z gate with microwave-dressed Rydberg ions achieving 97% fidelity
- Demonstration of fault-tolerant quantum error correction using nine-qubit Bacon-Shor code on linear Rydberg-ion chains
View Full Abstract
Trapped ions are one of the most promising quantum-information-processing platforms, yet conventional entangling gates mediated by collective motion remain slow and difficult to scale. Exciting trapped ions to high-lying electronic Rydberg states provides a promising route to overcome these limitations by enabling strong, long-range dipole-dipole interactions that support much faster multi-qubit operations. Here, we introduce the first scheme for implementing a native controlled-controlled-Z gate with microwave-dressed Rydberg ions by optimizing a single-pulse protocol that accounts for the finite Rydberg-state lifetime. The resulting gate outperforms standard decompositions into one- and two-qubit gates by achieving fidelities above 97% under realistic conditions, with execution times of about 2 microseconds at cryogenic temperatures. To explore the potential of trapped Rydberg ions for fault-tolerant quantum error correction, and to illustrate the utility of three-qubit Rydberg-ion gates in this context, we develop and analyze a proposal for fault-tolerant, measurement-free quantum error correction using the nine-qubit Bacon-Shor code. Our simulations confirm that quantum error correction can be performed in a fully fault-tolerant manner on a linear Rydberg-ion chain despite its limited qubit connectivity. These results establish native multiqubit Rydberg-ion gates as a valuable resource for fast, high-fidelity quantum computing and highlight their potential for fault-tolerant quantum error correction.
Prefix Sums via Kronecker Products
This paper develops new mathematical techniques using linear algebra and Kronecker products to create more efficient prefix sum circuits, which are then applied to design improved quantum adders with better depth and gate complexity than existing constructions.
Key Contributions
- Novel decomposition of triangular matrices using Kronecker products for prefix sum algorithms
- Quantum adder circuits with improved 1.893log(n) + O(1) Toffoli depth and O(n) gate complexity
View Full Abstract
In this work, we revisit prefix sums through the lens of linear algebra. We describe an identity that decomposes triangular all-ones matrices as a sum of two Kronecker products, and apply it to design recursive prefix sum algorithms and circuits. Notably, the proposed family of circuits is the first one that achieves the following three properties simultaneously: (i) zero-deficiency, (ii) constant fan-out per-level, and (iii) depth that is asymptotically strictly smaller than $2\log(n)$ for input length n. As an application, we show how to use these circuits to design quantum adders with $1.893\log(n) + O(1)$ Toffoli depth, $O(n)$ Toffoli gates, and $O(n)$ additional qubits, improving the Toffoli depth and/or Toffoli size of existing constructions.
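The following numerical sketch checks one decomposition of this flavor (an illustration, not necessarily the paper's exact identity): the lower-triangular all-ones prefix-sum matrix of size 2n splits into two Kronecker products, which is the kind of recursion such circuits exploit.

```python
import numpy as np

def L(n):                         # prefix-sum operator: (L x)_k = x_0 + ... + x_k
    return np.tril(np.ones((n, n), dtype=int))

n = 4
E21 = np.array([[0, 0], [1, 0]])  # routes the total of the first half into the second half
J = np.ones((n, n), dtype=int)    # all-ones matrix
# L_{2n} = I_2 (x) L_n  +  E21 (x) J_n : "do prefix sums in each half,
# then add the first half's total to every entry of the second half".
assert np.array_equal(L(2 * n), np.kron(np.eye(2, dtype=int), L(n)) + np.kron(E21, J))
```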
Error mitigation for logical circuits using decoder confidence
This paper develops methods to assess and improve the reliability of quantum error correction by using decoder confidence scores to identify when error correction is likely to fail. The researchers show that by rejecting quantum computations with low confidence scores, they can dramatically reduce logical error rates in fault-tolerant quantum circuits.
Key Contributions
- Introduction of swim distance decoder confidence score for quantum error correction assessment
- Demonstration that rejecting 0.1% of low-confidence decoding results improves logical error probability by 5+ orders of magnitude
- Development of maximum likelihood inference methods using decoder confidence for large-scale quantum algorithms
View Full Abstract
Fault-tolerant quantum computers use decoders to monitor for errors and find a plausible correction. A decoder may provide a decoder confidence score (DCS) to gauge its success. We adopt a swim distance DCS, computed from the shortest path between syndrome clusters. By contracting tensor networks, we compare its performance to the well-known complementary gap and find that both reliably estimate the logical error probability (LEP) in a decoding window. We explore ways to use this to mitigate the LEP in entire circuits. For shallow circuits, we just abort if any decoding window produces an exceptionally low DCS: for a distance-13 surface code, rejecting a mere 0.1% of possible DCS values improves the entire circuit's LEP by more than 5 orders of magnitude. For larger algorithms comprising up to trillions of windows, DCS-based rejection remains effective for enhancing observable estimation. Moreover, one can use DCS to assign each circuit's output a unique LEP, and use it as a basis for maximum likelihood inference. This can reduce the effects of noise by an order of magnitude at no quantum cost; methods can be combined for further improvements.
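As a rough sketch of the abort rule described above, the toy function below post-selects shots on a minimum decoder-confidence threshold. The function name, the synthetic data, and the quantile-based threshold are illustrative assumptions, not the paper's calibrated swim-distance procedure.

```python
import numpy as np

def postselect_by_dcs(dcs_per_window, logical_outcomes, reject_fraction=1e-3):
    """Abort-rule sketch for shallow circuits: discard any shot whose minimum
    window DCS falls below a global threshold chosen to reject a fixed fraction
    of shots, then estimate the observable from the surviving shots."""
    min_dcs = dcs_per_window.min(axis=1)            # worst decoding window of each shot
    threshold = np.quantile(min_dcs, reject_fraction)
    keep = min_dcs > threshold
    return logical_outcomes[keep].mean(), 1.0 - keep.mean()

# Usage with synthetic stand-in data: 10_000 shots, 50 decoding windows per shot.
rng = np.random.default_rng(1)
dcs = rng.gamma(5.0, 1.0, size=(10_000, 50))        # stand-in confidence scores
outcomes = rng.integers(0, 2, size=10_000)          # stand-in logical measurement bits
estimate, abort_rate = postselect_by_dcs(dcs, outcomes)
print(estimate, abort_rate)
```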
A Quantum Bluestein's Algorithm for Arbitrary-Size Quantum Fourier Transform
This paper presents a quantum version of Bluestein's algorithm that can perform quantum Fourier transforms on arbitrary-sized inputs (not just powers of 2). The algorithm achieves the same computational efficiency as standard power-of-two QFTs while working with any input size N.
Key Contributions
- Development of quantum Bluestein's algorithm for arbitrary-size QFT
- Achieving O((log N)^2) gate complexity and O(log N) qubit usage for any input size N
- Experimental validation through Qiskit implementation and classical simulation
View Full Abstract
We propose a quantum analogue of Bluestein's algorithm (QBA) that implements an exact $N$-point Quantum Fourier Transform (QFT) for arbitrary $N$. Our construction factors the $N$-dimensional QFT unitary into three diagonal quadratic-phase gates and two standard radix-2 QFT subcircuits of size $M = 2^m$ (with $M \ge 2N - 1$). This achieves asymptotic gate complexity $O((\log N)^2)$ and uses $O(\log N)$ qubits, matching the performance of a power-of-two QFT on $m$ qubits while avoiding the need to embed into a larger Hilbert space. We validate the correctness of the algorithm through a concrete implementation in Qiskit and classical simulation, confirming that QBA produces the exact $N$-point discrete Fourier transform on arbitrary-length inputs.
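A classical implementation of Bluestein's trick makes the structure of the quantum construction easy to see: two power-of-two FFTs of size M >= 2N-1 sandwiched between quadratic-phase (chirp) multiplications. The sketch below is the classical analogue only, not the quantum circuit itself.

```python
import numpy as np

def bluestein_dft(x):
    """N-point DFT via Bluestein's chirp trick, using only power-of-two FFTs:
    X_k = e^{-i pi k^2/N} * sum_n (x_n e^{-i pi n^2/N}) e^{+i pi (k-n)^2/N}."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    M = 1 << int(np.ceil(np.log2(2 * N - 1)))    # power-of-two FFT size, M >= 2N-1
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * n**2 / N)       # quadratic-phase "diagonal" factor
    a = np.zeros(M, dtype=complex)
    a[:N] = x * chirp
    b = np.zeros(M, dtype=complex)
    b[:N] = np.conj(chirp)                       # e^{+i pi n^2 / N}
    b[M - N + 1:] = np.conj(chirp[1:][::-1])     # wrap negative indices for the circular convolution
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
    return chirp * conv[:N]

x = np.random.randn(12) + 1j * np.random.randn(12)   # N = 12 is not a power of two
assert np.allclose(bluestein_dft(x), np.fft.fft(x))
```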
Practical Challenges in Executing Shor's Algorithm on Existing Quantum Platforms
This paper experimentally tests Shor's algorithm on current cloud-based quantum computers to determine what encryption key sizes can actually be factored today. The results show a large gap between current quantum hardware capabilities and the requirements needed to break real-world cryptographic systems.
Key Contributions
- Experimental evaluation of Shor's algorithm on current quantum hardware platforms
- Quantification of the gap between theoretical resource estimates and practical implementation capabilities
- Analysis of specific limitations including circuit construction requirements and machine fidelity issues
View Full Abstract
Quantum computers pose a fundamental threat to widely deployed public-key cryptosystems, such as RSA and ECC, by enabling efficient integer factorization using Shor's algorithm. Theoretical resource estimates suggest that 2048-bit RSA keys could be broken using Shor's algorithm with fewer than a million noisy qubits. Although such machines do not yet exist, the availability of smaller, cloud-accessible quantum processors and open-source implementations of Shor's algorithm raises the question of what key sizes can realistically be factored with today's platforms. In this work, we experimentally investigate Shor's algorithm on several cloud-based quantum computers using publicly available implementations. Our results reveal a substantial gap between the capabilities of current quantum hardware and the requirements for factoring cryptographically relevant integers. In particular, we observe that circuit constructions still need to be highly specific for each modulus, and that machine fidelities are unstable, with high and fluctuating error rates.
Bosonic quantum computing with near-term devices and beyond
This thesis develops new quantum error correction methods that work with both continuous-variable (bosonic) and discrete quantum systems, including novel codes, decoding algorithms, and fault-tolerant architectures designed to make quantum computers more reliable and scalable.
Key Contributions
- Development of dissipatively stabilized squeezed cat qubits with enhanced error suppression
- Introduction of localized statistics decoding for quantum LDPC codes
- Creation of quantum radial codes - single-shot LDPC codes with low overhead
- Development of fault complexes framework for analyzing dynamic quantum error correction
View Full Abstract
(Abridged.) This thesis investigates scalable fault-tolerant quantum computation through the development of bosonic quantum codes, quantum LDPC codes, and decoding protocols that connect continuous-variable and discrete-variable error correction. We investigate superconducting microwave implementations of continuous-variable quantum computing, including the deterministic generation of cubic phase states, and introduce the dissipatively stabilized squeezed cat qubit, a noise-biased bosonic encoding with enhanced error suppression and faster gates. The performance of rotation-symmetric and GKP codes is analyzed under realistic noise and measurement models, revealing key trade-offs in measurement-based schemes. To integrate bosonic codes into larger architectures, we develop decoding methods that exploit analog syndrome information, enabling quasi-single-shot decoding in concatenated systems. On the discrete-variable side, we introduce localized statistics decoding, a highly parallelizable decoder for quantum LDPC codes, and propose quantum radial codes, a new family of single-shot LDPC codes with low overhead and strong circuit-level performance. Finally, we present fault complexes, a homological framework for analyzing faults in dynamic quantum error correction protocols. Extending the role of homology in static CSS codes, fault complexes provide a general language for the design and analysis of fault-tolerant schemes.
Enabling Technologies for Scalable Superconducting Quantum Computing
This paper discusses the critical engineering and technological advances needed to scale up superconducting quantum computers from current small-scale demonstrations to large-scale, fault-tolerant quantum computing systems. The authors identify key areas for development in quantum system infrastructure, particularly focusing on handling quantum information within cryogenic environments.
Key Contributions
- Identification of critical technological bottlenecks for scaling superconducting quantum computers
- Framework for quantum system and ecosystem development requirements
- Analysis of cryogenic quantum information handling challenges
View Full Abstract
Experiments with superconducting quantum processors have successfully demonstrated the basic functions needed for quantum computation and evidence of utility, albeit without a sizable array of error-corrected qubits. The realization of the full potential of quantum computing centers on achieving large scale fault-tolerant quantum computers. Science, engineering and industry advances are needed to robustly generate, sustain, and efficiently manipulate an exponentially large computational (Hilbert) space as well as supply the number and quality components needed for such a scaled system. In this article, we suggest critical areas of quantum system and ecosystem development, with respect to the handling and transmission of quantum information within and out of a cryogenic environment, that would accelerate the development of quantum computers based on superconducting circuits.
Physics-Informed Neural Networks with Adaptive Constraints for Multi-Qubit Quantum Tomography
This paper develops a physics-informed neural network approach for quantum state tomography that incorporates quantum mechanical constraints to reduce measurement requirements from O(4^n) to O(2^n) while maintaining high fidelity in reconstructing multi-qubit quantum states.
Key Contributions
- Physics-informed neural network framework with adaptive quantum mechanical constraints for state tomography
- Reduction of measurement complexity from O(4^n) to O(2^n) while maintaining high reconstruction fidelity
- Theoretical analysis showing improved generalization bounds and dimensional scalability through constraint-induced complexity reduction
View Full Abstract
Quantum state tomography (QST) faces exponential measurement requirements and noise sensitivity in multi-qubit systems, bottlenecking practical quantum technologies. We present a physics-informed neural network (PINN) framework integrating quantum mechanical constraints via adaptive weighting, a residual-and-attention-enhanced architecture, and differentiable Cholesky parameterization for physical validity. Evaluations on 2--5 qubit systems and arbitrary-dimensional states show PINN consistently outperforms traditional neural networks (TNNs), achieving highest fidelity across all dimensions. PINN outperforms baselines, with marked improvements in moderately high-dimensional systems, superior noise robustness (slower performance degradation), and consistent dimensional robustness. Theoretical analysis shows physical constraints reduce Rademacher complexity and mitigate the curse of dimensionality via constraint-induced dimension and sample complexity reduction, effective regardless of qubit number. While experiments are limited to 5-qubit systems due to computational constraints, our theoretical framework (convergence guarantees, generalization bounds, scalability theorems) justifies PINN's advantages will persist and strengthen in larger systems (6+ qubits), where constraint-induced dimension reduction benefits grow with system size. Practically, this advances quantum error correction and gate calibration by reducing measurement requirements from O(4^n) to O(2^n) while maintaining high fidelity, enabling faster error correction cycles and accelerated calibration critical for scalable quantum computing.
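The Cholesky parameterization mentioned in the abstract is what guarantees physical validity: any unconstrained parameter vector maps to a positive, unit-trace density matrix. A minimal NumPy sketch of that mapping follows; the paper's exact parameterization and network architecture may differ.

```python
import numpy as np

def rho_from_params(theta, d):
    """Map an unconstrained real vector theta to a valid d x d density matrix
    via rho = L L^dag / tr(L L^dag), with L lower triangular (Cholesky-style).
    Positivity and unit trace hold by construction, so a network can output
    theta freely. Illustrative sketch only."""
    assert len(theta) == d * d          # d diagonal + d(d-1) off-diagonal real parameters
    L = np.zeros((d, d), dtype=complex)
    L[np.diag_indices(d)] = theta[:d]
    rows, cols = np.tril_indices(d, k=-1)
    re = theta[d:d + len(rows)]
    im = theta[d + len(rows):]
    L[rows, cols] = re + 1j * im
    rho = L @ L.conj().T
    return rho / np.trace(rho)

rho = rho_from_params(np.random.randn(16), d=4)     # a random 2-qubit state
assert np.allclose(rho, rho.conj().T) and np.isclose(np.trace(rho), 1)
assert np.all(np.linalg.eigvalsh(rho) > -1e-12)     # positive semidefinite
```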
Information-efficient decoding of surface codes
This paper presents two new decoding algorithms for surface codes that reduce the amount of syndrome information needed for real-time error correction. Instead of requiring syndrome data that scales with the area of the surface code, these decoders only need data that scales with the width, significantly reducing communication requirements between quantum and classical processors.
Key Contributions
- Development of information-efficient surface code decoders that reduce syndrome data requirements from area-scaling to width-scaling
- Solution to the exponential backlog problem for real-time decoding needed for fault-tolerant T-gates in surface codes
View Full Abstract
Surface codes are a popular error-correction route to fault-tolerant quantum computation. The so-called exponential backlog problem that can arise when one has to do logical $T$-gates within the surface code demands real-time decoding of the syndrome information to diagnose the appropriate Pauli frame in which to do the gate. This in turn puts a minimum requirement on the communication rate between the quantum processing unit, where the syndrome information is collected, and the classical processor, where the decoding algorithm is run. This minimum communication rate can be difficult to achieve while preserving the quality of the quantum processor. Here, we present two decoders that make use of a reduced syndrome information volume, relying on a number of syndrome bits that scale only as the width -- and not the usual area -- of the surface-code patch. This eases the communication requirements necessary for real-time decoding.
Coherence-Sensitive Readout Models for Quantum Devices: Beyond the Classical Assignment Matrix
This paper develops a more comprehensive model for quantum measurement errors that accounts for quantum coherences, extending beyond classical readout error models that only consider population mixing. The new framework introduces a coherence-response matrix that captures interference effects between computational basis states during measurement.
Key Contributions
- Derives general expression z = Ax + Cy for measurement probabilities that includes coherence effects through matrix C
- Provides framework for characterizing coherence-sensitive readout errors that are invisible to classical assignment matrix models
View Full Abstract
Readout error models for noisy quantum devices almost universally assume that measurement noise is classical: the measurement statistics are obtained from the ideal computational-basis populations by a column-stochastic assignment matrix $A$. This description is equivalent to assuming that the effective positive-operator-valued measurement (POVM) is diagonal in the measurement basis, and therefore completely insensitive to quantum coherences. We relax this assumption and derive a fully general expression for the observed measurement probabilities under arbitrary completely positive trace-preserving (CPTP) noise preceding a computational-basis measurement. Writing the ideal post-circuit state $\tilde{\rho}$ in terms of its populations $x$ and coherences $y$, we show that the observed probability vector $z$ satisfies $z = A x + C y$, where $A$ is the familiar classical assignment matrix and $C$ is a coherence-response matrix constructed from the off-diagonal matrix elements of the effective POVM in the computational basis. The classical model $z = A x$ arises if and only if all POVM elements are diagonal; in this sense $C$ quantifies accessible information about coherent readout distortions and interference between computational-basis states, all of which are invisible to models that retain only $A$. This work therefore provides a natural, fully general framework for coherence-sensitive readout modeling on current and future quantum devices.
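The decomposition z = Ax + Cy follows directly from splitting tr(E_i ρ) into diagonal and off-diagonal contributions of the effective POVM. The toy single-qubit check below illustrates this split; the rotation-based noise model is an assumption for illustration, and the paper's construction of C may be organized differently.

```python
import numpy as np

# Toy check of z = A x + C y for a single qubit: a small coherent rotation
# before an ideal Z-basis measurement gives a POVM with off-diagonal elements.
theta = 0.1
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
E = [U.conj().T @ np.diag(e) @ U for e in np.eye(2)]    # effective POVM elements

rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])                     # a state with coherences

A = np.array([[E[i][j, j].real for j in range(2)] for i in range(2)])  # classical assignment matrix
x = np.diag(rho).real                                    # populations
coh = rho - np.diag(np.diag(rho))                        # off-diagonal (coherence) part of rho
z_coh = np.array([np.trace(E[i] @ coh) for i in range(2)]).real        # the "C y" contribution
z = A @ x + z_coh

z_exact = np.array([np.trace(E[i] @ rho) for i in range(2)]).real
assert np.allclose(z, z_exact)                           # classical model alone (A x) would miss z_coh
```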
Pontryagin Maximum Principle for Rydberg-blockaded state-to-state transfers: A semi-analytic approach
This paper develops optimal control methods for neutral-atom quantum computers using Rydberg blockade, applying mathematical optimization theory to find the fastest way to perform quantum operations between qubits. The researchers create a semi-analytical approach that combines theoretical insights with numerical methods to achieve high-fidelity, time-optimal quantum gate operations.
Key Contributions
- Development of semi-analytic Pontryagin Maximum Principle approach for optimal control of Rydberg-blockaded quantum systems
- Classification of normal and abnormal extremals for two-qubit operations with correspondence to classical particle motion in quartic potential
- General formalism for N-qubit time-optimal state-to-state transfers in neutral-atom quantum processors
View Full Abstract
We study time-optimal state-to-state control for two- and multi-qubit operations motivated by neutral-atom quantum processors within the Rydberg blockade regime. Block-diagonalization of the Hamiltonian simplifies the dynamics and enables the application of a semi-analytic approach to the Pontryagin Maximum Principle to derive optimal laser controls. We provide a general formalism for $N$ qubits. For $N=2$ qubits, we classify normal and abnormal extremals, showcasing examples where abnormal solutions are either absent or suboptimal. For normal extremals, we establish a correspondence between the laser detuning from atomic transitions and the motion of a classical particle in a quartic potential, yielding a reduced, semi-analytic formulation of the control problem. Combining PMP-based insights with numerical optimization, our approach bridges analytic and computational methods for high-fidelity, time-optimal control.
Decoding 3D color codes with boundaries
This paper develops an improved decoding algorithm for three-dimensional color codes, a type of quantum error correction code, achieving nearly double the error threshold compared to previous methods and demonstrating these codes could be more practical for fault-tolerant quantum computing.
Key Contributions
- Extended 2D color code decoders to 3D with boundary conditions
- Achieved 1.55% error threshold, nearly double previous results
- Developed qCodePlot3D visualization package for 2D/3D color codes
View Full Abstract
Practical large-scale quantum computation requires both efficient error correction and robust implementation of logical operations. Three-dimensional (3D) color codes are a promising candidate for fault-tolerant quantum computation due to their transversal non-Clifford gates, but efficient decoding remains challenging. In this work, we extend previous decoders for two-dimensional color codes [1], which are based on the restriction of the decoding problem to a subset of the qubit lattice, to three dimensions. Including boundaries of 3D color codes, we demonstrate that the 3D restriction decoder achieves optimal scaling of the logical error rate and a threshold value of 1.55(6)% for code-capacity bit- and phase-flip noise, which is almost a factor of two higher than previously reported for this family of codes [2, 3]. We furthermore present qCodePlot3D, a Python package for visualizing 2D and 3D color codes, error configurations, and decoding paths, which supports the development and analysis of such decoders. These advancements contribute to making 3D color codes a more practical option for exploring fault-tolerant quantum computation.
Fault-tolerant multi-qubit gates in Parity Codes
This paper presents new methods for implementing fault-tolerant multi-qubit gates in quantum error correction codes using parity qubits. The work shows how to efficiently perform high-weight rotation gates and multi-qubit CNOT operations without requiring complex lattice surgery or routing procedures.
Key Contributions
- Efficient implementation of fault-tolerant high-weight rotation gates of arbitrary angle on stabilizer codes
- Transversal CNOT gates for logical parity-controlled-NOT operations between multiple logical qubits without lattice surgery
View Full Abstract
We present a set of efficiently implementable logical multi-qubit gates in concatenated quantum error correction codes using parity qubits. In particular, we show how fault-tolerant high-weight rotation gates of arbitrary angle can be implemented on single physical qubits of a classical stabilizer code, or on localized regions of full quantum error correction codes. Similarly, we show how transversal CNOT gates can implement logical parity-controlled-NOT operations between arbitrarily many logical qubits. Both operation types can be implemented and in many cases parallelized without the use of lattice surgery or the need for complicated routing operations.
Fighting non-locality with non-locality: microcausality and boundary conditions in QED
This paper addresses fundamental locality problems in quantum electrodynamics (QED) by showing how non-local boundary conditions can restore local behavior for charged observables while maintaining microcausality. The authors demonstrate that globally charged quantities can be treated as local to bulk regions through careful treatment of gauge theory boundaries and relational localization concepts.
Key Contributions
- Development of non-local boundary conditions that allow charged observables to be treated as local while preserving microcausality
- Construction of a consistent local net of algebras that includes charged observables in bulk regions through relational localization
- Demonstration that gauge theory locality properties depend on choice of dynamical reference frame in boundary conditions
View Full Abstract
In gauge theories, globally charged observables necessarily depend non-locally on the kinematical fields, with this dependence extending to the asymptotic boundary of spacetime. Despite this, we show that a subset of such observables can be consistently regarded as local to the bulk, in a manner that respects microcausality and leaves locality properties of uncharged observables untouched. A sufficient condition for this is to impose kinematically non-local boundary conditions on the large gauge sector of the theory, and to invoke a relational notion of localisation for observables. This reveals a relatively underappreciated link between boundary conditions, and different notions of microcausality and locality. We develop this point through a detailed case study in scalar QED, describing non-local boundary conditions that allow a large family of observables on a codimension-1 bulk surface to be viewed as local to that surface, despite being dressed by asymptotic Wilson lines. We show that this property continues to hold within a perturbative quantisation of the theory, and we argue that this leads to a consistent local net of algebras that includes these charged observables in bulk algebras. We explain how this setup may be understood in terms of a preferred dynamical reference frame for small gauge transformations appearing in the boundary conditions. Many features of the theory (such as microcausality, the vacuum state, and the net of algebras of observables) depend on the choice of this frame, and we briefly discuss some repercussions of this for algebraic formulations of QFT. While our analysis is performed in QED, we expect our results to carry over qualitatively to more complicated theories including gravity.
Advantage of Warm Starts for Electron-Phonon Systems on Quantum Computers
This paper develops a better initial state preparation method for quantum computers simulating electron-phonon interactions in materials. By using physically-motivated starting states instead of simple guesses, they achieve exponential reductions in the computational resources needed for quantum phase estimation algorithms.
Key Contributions
- Development of physically-motivated initial state ansatz for electron-phonon systems
- Demonstration of exponential reduction in quantum circuit costs through improved state preparation
View Full Abstract
Simulating electron-phonon interactions on quantum computers remains challenging, with most algorithmic effort focused on Hamiltonian simulation and circuit optimization. In this work, we study the single-electron Holstein model and propose an initial-state ansatz that substantially enhances ground state overlap in the strong coupling regime, thereby reducing the number of iterations required in standard quantum phase estimation. We further show that this ansatz can be implemented efficiently and yields an exponential reduction in overall circuit costs relative to conventional initial guesses. Our results highlight the practical value of incorporating physical intuition into initial state preparation for electron-phonon coupled systems.
Random purification channel for passive Gaussian bosons
This paper develops a quantum channel that takes multiple copies of an unknown mixed quantum state and creates multiple copies of a randomly chosen pure state that could have generated the original mixed state. The authors specifically focus on bosonic Gaussian states (quantum states of light) and ensure the purified states have exactly twice the photon number of the original states.
Key Contributions
- Construction of a Gaussian version of the random purification channel for passive Gaussian bosonic states
- Mathematical characterization using representation theory of dual reductive pairs of unitary groups
- Ensures purified states have exactly twice the mean photon number of the initial state
View Full Abstract
The random purification channel, which, given $n$ copies of an unknown mixed state $ρ$, prepares $n$ copies of an associated random purification, has proved to be an extremely valuable tool in quantum information theory. In this work, we construct a Gaussian version of this channel that, given $n$ copies of a bosonic passive Gaussian state, prepares $n$ copies of one of its randomly chosen Gaussian purifications. The construction has the additional advantage that each purification has a mean photon number which is exactly twice that of the initial state. Our construction relies on the characterisation of the commutant of passive Gaussian unitaries via the representation theory of dual reductive pairs of unitary groups.
Electric field diagnostics in a continuous rf plasma using Rydberg-EIT
This paper demonstrates a new method to measure electric fields in plasma using Rydberg atoms and electromagnetically induced transparency (EIT). The technique exploits the extreme sensitivity of highly excited Rydberg atoms to electric fields to non-invasively diagnose plasma properties like density and field distributions.
Key Contributions
- Development of non-invasive Rydberg-EIT technique for electric field measurement in plasma
- Demonstration of plasma density and microfield distribution characterization using Rydberg Stark shifts
- Demonstration that RF modulation sidebands vanish due to plasma screening effects
View Full Abstract
We present a non-invasive spectroscopic technique to measure electric fields in plasma, leveraging large polarizabilities and Stark shifts of Rydberg atoms. Rydberg Stark shifts are measured with high precision using narrow-linewidth lasers via Electromagnetically Induced Transparency (EIT) of rubidium vapor seeded into a continuous, inductively coupled radio-frequency (rf) plasma in a few mTorr of argon gas. Without plasma, the Rydberg-EIT spectra exhibit rf modulation sidebands caused by electric- and magnetic-dipole transitions in the rf drive coil. With the plasma present, the rf modulation sidebands vanish due to screening of the rf drive field from the plasma interior. The lineshapes of the EIT spectra in the plasma reflect the plasma's Holtsmark microfield distribution, allowing us to determine plasma density and collisional line broadening over a range of pressures and rf drive powers. The work is expected to have applications in non-invasive spatio-temporal electric-field diagnostics of low-pressure plasma, plasma sheaths, process plasma and dusty plasma.
Nonstabilizerness in Stark many-body localization
This paper studies how quantum many-body systems can become localized (non-transporting) without disorder by using a strong electric field, while still building up quantum computational resources called 'magic' or nonstabilizerness. The researchers show that magic serves as a useful probe to distinguish between different phases of quantum many-body dynamics.
Key Contributions
- Demonstrates that nonstabilizerness (magic) serves as a practical probe for disorder-free ergodicity breaking and constrained localization
- Shows that Stark many-body localization can suppress transport while maintaining computationally useful non-Clifford quantum resources
View Full Abstract
Quantum many-body disorder-free localization can suppress transport while still allowing the buildup of computationally costly non-Clifford resources. In a transverse-field Ising chain realizing disorder-free Stark many-body localization, we show that the stabilizer Rényi entropy remains nonzero and grows slowly to a finite plateau deep in the strong Stark-field regime, with strong initial-state selectivity. As the Stark field strength increases, long-time magic and entanglement consistently signal a crossover from ergodic to constrained localized dynamics. These results establish nonstabilizerness ("magic") as a practical complexity probe for disorder-free ergodicity breaking and constrained localization, with direct relevance to benchmarking and designing near-term quantum simulators, and fill a gap in the understanding of nonstabilizerness in disorder-free many-body localization.
Signatures of real-space geometry, topology, and metric tensor in quantum transport in periodically corrugated spaces
This paper studies how quantum particles move on curved, periodically corrugated 2D surfaces, showing that the geometry and topology of the surface creates effective forces and band structures that affect quantum transport properties like conductance.
Key Contributions
- Demonstrates band formation in quantum transport on periodically modulated curved manifolds
- Derives conductance calculations using S-matrix approach for non-trivial geometries
- Shows how topology and geometry create effective potentials affecting quantum transport
View Full Abstract
The motion of a quantum particle constrained to a two-dimensional non-compact Riemannian manifold with non-trivial metric can be described by a flat-space Schroedinger-type equation at the cost of introducing a local mass and metric and a geometry-induced effective potential with no classical counterpart. For a metric tensor periodically modulated along one dimension, the formation of bands is demonstrated and transport-related quantities are derived. Using an S-matrix approach, the quantum conductance along the manifold is calculated and contrasted with conventional quantum transport methods in flat spaces. The topology, e.g., whether the manifold is simply connected, compact or non-compact, shows up in global, non-local properties such as the Aharonov-Bohm phase. The results vividly demonstrate emergent phenomena due to the interplay of reduced dimensionality, the particle's quantum nature, geometry, and topology.
Revival Dynamics from Equilibrium States: Scars from Chords in SYK
This paper develops a theoretical framework for creating quantum many-body scar states in bipartite systems that exhibit perfect revivals when initialized from equilibrium states. The authors demonstrate this framework using the SYK (Sachdev-Ye-Kitaev) model and show that certain special states can undergo periodic quantum dynamics rather than thermalizing.
Key Contributions
- Development of a novel Krylov construction framework for building quantum many-body scar states with perfect correlations in bipartite systems
- Analytical demonstration of revival dynamics in SYK model chord states with numerical verification showing excellent agreement
View Full Abstract
We develop a novel framework to build quantum many-body scar states in bipartite systems characterized by perfect correlation between the Hamiltonians governing the two sides. By means of a Krylov construction, we build an interaction term which supports a tower of equally-spaced energy eigenstates. This gives rise to finite-time revivals whenever the system is initialized in a purification of a generic equilibrium state. The dynamics is universally characterized, and is largely independent of the specific details of the Hamiltonians defining the individual partitions. By considering the two-sided chord states of the double-scaled SYK model, we find an approximate realization of this framework. We analytically study the revival dynamics, finding rigid motion for wavepackets localized on the spectrum of a single SYK copy. These findings are tested numerically for systems of finite size, showing excellent agreement with the analytical predictions.
Numerically exact open quantum system work statistics with process tensors
This paper develops a new computational method using process tensors to calculate the exact energy costs and work statistics of quantum operations in complex, non-equilibrium environments. The researchers demonstrate their approach on quantum memory erasure, revealing quantum effects that previous approximate methods missed and showing how these effects impact the performance of quantum protocols.
Key Contributions
- Development of a process tensor framework for numerically exact computation of quantum work statistics in driven open quantum systems
- Demonstration that quantum signatures in work probability distributions significantly impact erasure fidelity beyond what low-order moments reveal
- Non-perturbative method for characterizing energy-exchange fluctuations in operating regimes of contemporary quantum devices
View Full Abstract
Accurately quantifying the thermodynamic work costs of quantum operations is essential for the continued development and optimisation of emerging quantum technologies. This presents a significant challenge in regimes of rapid control within complex, non-equilibrium environments - conditions under which many contemporary quantum devices operate and conventional approximations break down. Here, we introduce a process tensor framework that enables the computation of the full numerically exact quantum work statistics of driven open quantum systems. We demonstrate the utility of our approach by applying it to a Landauer erasure protocol operating beyond the weak-coupling, Markovian, and slow-driving limits. The resulting work probability distributions reveal distinct quantum signatures that are missed by low-order moments yet significantly impact the erasure fidelity of the protocol. Our framework delivers non-perturbative accuracy and detail in characterising energy-exchange fluctuations in driven open quantum systems, establishing a powerful and versatile tool for exploring thermodynamics and control in the operating regimes of both near-term and future quantum devices.
Non-Linear Strong Data-Processing for Quantum Hockey-Stick Divergences
This paper develops improved mathematical tools for measuring how quantum information degrades when transmitted through noisy quantum channels, establishing tighter bounds than previous linear methods. The work introduces non-linear strong data-processing inequalities for quantum hockey-stick divergences and demonstrates applications to quantum privacy and mixing time analysis.
Key Contributions
- Established non-linear strong data-processing inequalities for quantum hockey-stick divergences that are tighter than existing linear bounds
- Developed applications to quantum local differential privacy with stronger privacy guarantees for sequential quantum channel composition
View Full Abstract
Data-processing is a desired property of classical and quantum divergences and information measures. In information theory, the contraction coefficient measures how much the distinguishability of quantum states decreases when they are transmitted through a quantum channel, establishing linear strong data-processing inequalities (SDPI). However, these linear SDPI are not always tight and can be improved in most of the cases. In this work, we establish non-linear SDPI for quantum hockey-stick divergence for noisy channels that satisfy a certain noise criterion. We also note that our results improve upon existing linear SDPI for quantum hockey-stick divergences and also non-linear SDPI for classical hockey-stick divergence. We define $F_γ$ curves generalizing Dobrushin curves for the quantum setting while characterizing SDPI for the sequential composition of heterogeneous channels. In addition, we derive reverse-Pinsker type inequalities for $f$-divergences with additional constraints on hockey-stick divergences. We show that these non-linear SDPI can establish tighter finite mixing times that cannot be achieved through linear SDPI. Furthermore, we find applications of these in establishing stronger privacy guarantees for the composition of sequential private quantum channels when privacy is quantified by quantum local differential privacy.
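For orientation, the quantum hockey-stick divergence is E_γ(ρ||σ) = tr[(ρ - γσ)_+], the sum of positive eigenvalues of ρ - γσ. The short sketch below evaluates it and checks ordinary (linear) data processing under a depolarizing channel; the paper's non-linear SDPI bounds are not reproduced here.

```python
import numpy as np

def hockey_stick(rho, sigma, gamma):
    """E_gamma(rho||sigma) = tr[(rho - gamma*sigma)_+], the sum of positive
    eigenvalues of rho - gamma*sigma; for gamma = 1 it reduces to the trace
    distance (with the 1/2 convention). Standard definition, shown only to fix notation."""
    eig = np.linalg.eigvalsh(rho - gamma * sigma)
    return float(np.sum(eig[eig > 0]))

def depolarize(rho, p):
    return (1 - p) * rho + p * np.eye(2) / 2

# Data processing: passing both states through the same channel cannot increase E_gamma.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.eye(2) / 2
g = 1.2
assert hockey_stick(depolarize(rho, 0.3), depolarize(sigma, 0.3), g) <= hockey_stick(rho, sigma, g) + 1e-12
```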
Reconstruction of Quantum Fields
This paper develops a new mathematical framework for transitioning from first to second quantization by taking quotients of distinguishable particle state spaces to create indistinguishable particle spaces. The authors derive generalized creation-annihilation algebras that reproduce partition functions for transtatistics, which are maximal generalizations of bosonic and fermionic statistics.
Key Contributions
- Novel quotient-based approach to deriving indistinguishable particle spaces from distinguishable ones
- Derivation of generalized creation-annihilation algebras that encompass transtatistics beyond standard bosons and fermions
View Full Abstract
One of the traditional ways of introducing bosons and fermions is through creation and annihilation algebras. Historically, these have been associated with emission and absorption processes at the quantum level and are characteristic of the language of second quantization. In this work, we formulate the transition from first to second quantization by taking quotients of the state spaces of distinguishable particles, so that the resulting equivalence classes identify states that contain no information capable of distinguishing between particles, thereby generalising the usual symmetrisation procedure. Assuming that the resulting indistinguishable-particle space (i) admits an ordered basis compatible with how an observer may label the accessible modes, (ii) is invariant under unitary transformations of those modes, and (iii) supports particle counting as a mode-wise local operation, we derive a new class of creation-annihilation algebras. These algebras reproduce the partition functions of transtatistics, the maximal generalisations of bosons and fermions consistent with these operational principles.
Model-Based Real-Time Synthesis of Acousto-Optically Generated Laser-Beam Patterns and Tweezer Arrays
This paper develops a real-time control system for acousto-optic deflectors that can create and precisely control arrays of laser beams (optical tweezers) in two dimensions. The system uses a GPU-based model to control beam intensity and position with microsecond latency, enabling arrays of up to 2,500 individually controlled laser spots.
Key Contributions
- Development of a compute-efficient GPU-based model for real-time acousto-optic beam control with coupled-wave theory
- Demonstration of programmable 2D laser arrays up to 50x50 tweezers with sub-microsecond control latency and precise intensity matching
View Full Abstract
Acousto-optic deflectors (AOD) enable spatiotemporal control of laser beams through diffraction at an ultrasonic grating that is controllable by radio-frequency (rf) waveforms. These devices are a widely used tool for high-bandwidth random-access scanning applications, such as optical tweezers in quantum technology. A single AOD can generate multiple optical tweezers by multitone rf input in one dimension. Two-dimensional (2D) patterns can be realized with two perpendicularly oriented AODs. As the acousto-optical response depends nonlinearly on the applied frequency components, phases, and amplitudes, and in addition experiences dimensional coupling in 2D setups, intensity regulation becomes a unique challenge. Guided by coupled-wave theory and experimental observations, we derive a compute-efficient model which we implement on a graphics processing unit. Only one-time sampling of single-tone laser-power calibration is needed for model parameter determination, allowing for straight-forward integration into optical instruments. We implement and experimentally validate an open-loop diffraction efficiency control system that enables programmable 2D multibeam trajectories with intensity control applied at every time step during digital signal generation, overcoming the limited flexibility, pattern-size constraints, and bandwidth limitations of methods using precalculation and precalibration of a predefined pattern set or closed-loop feedback. The system is capable of stable real-time waveform streaming of arrays with up to 50 x 50 tweezers with minimal time resolution of 1.4 ns (700 MS/s) and a peak latency below 257 microseconds for execution of newly requested patterns. Reactive, real-time 2D multibeam laser patterning and scanning with strict intensity matching will substantially benefit parallelization and increasing data rates in materials processing, microscopy, and optical tweezers.
QuantumSavory: Write Symbolically, Run on Any Backend -- A Unified Simulation Toolkit for Quantum Computing and Networking
QuantumSavory is an open-source software toolkit that provides a unified framework for simulating quantum computing and networking systems. It separates symbolic programming from backend execution, allowing researchers to write quantum protocols once and run them on different simulation engines while providing tools for modeling complex quantum network interactions.
Key Contributions
- Backend-agnostic symbolic language for quantum protocols that can run on multiple simulation engines
- Tag/query system for coordinating classical-quantum interactions in distributed quantum networks
- Unified toolkit supporting arbitrary quantum systems and multipartite entanglement beyond just qubits
View Full Abstract
Progress in quantum computing and networking depends on codesign across abstraction layers: device-level noise and heterogeneous hardware, algorithmic structure, and distributed classical control. We present QuantumSavory, an open-source toolkit built to make such end-to-end studies practical by cleanly separating a symbolic computer-algebra frontend from interchangeable numerical simulation backends. States, operations, measurements, and protocol logic are expressed in a backend-agnostic symbolic language; the same model can be executed across multiple backends (e.g., stabilizer, wavefunction, phase-space), enabling rapid exploration of accuracy-performance tradeoffs without rewriting the model. Furthermore, new custom backends can be added via a small, well-defined interface that immediately reuses existing models and protocols. QuantumSavory also addresses the classical-quantum interaction inherent to LOCC protocols via discrete-event execution and a tag/query system for coordination. Tags attach structured classical metadata to quantum registers and message buffers, and queries retrieve, filter, or wait on matching metadata by wildcards or arbitrary predicates. This yields a data-driven control plane where protocol components coordinate by publishing and consuming semantic facts (e.g., resource availability, pairing relationships, protocol outcomes) rather than by maintaining rigid object graphs or bespoke message plumbing, improving composability and reuse as models grow. Our toolkit is also not limited to qubits and Bell pairs; rather, any networking dynamics of any quantum system under any type of multipartite entanglement can be tackled. Lastly, QuantumSavory ships reusable libraries of standard states, circuits, and protocol building blocks with consistent interfaces, enabling full-stack examples to be assembled, modified, and compared with minimal glue code.
Propagators of singular anharmonic oscillators with quasi-equidistant spectra
This paper studies mathematical propagators (functions describing quantum system evolution) for modified harmonic oscillators with unusual potential energy shapes including two-well and three-well configurations. The authors use advanced mathematical techniques called Darboux transformations to derive analytical expressions and connect these to magnetic field systems.
Key Contributions
- Analytical expressions for propagators of singular anharmonic oscillators using Darboux transformations
- Development of two-well and three-well potential families with corresponding propagators
- Identification of axially symmetric magnetic field configurations corresponding to these potentials
View Full Abstract
Darboux transformations of the singular harmonic oscillator are considered. Analytical expressions for the propagators are obtained, using the image method applied to formal singular propagators. Two-well and three-well families of potentials and the corresponding propagators are presented. Axially symmetric magnetic field configurations corresponding to these potentials have been identified.
On the Dynamics of Local Hidden-Variable Models
This paper investigates whether quantum systems that appear local at each moment in time can be described by evolving hidden variables over time. The authors prove that even when correlations are always local, the dynamics cannot always be captured by local hidden-variable models, revealing a new type of nonlocality based on time evolution rather than static measurements.
Key Contributions
- Introduction of dynamical local hidden-variable models as a framework for understanding temporal quantum correlations
- Rigorous no-go theorem proving that local hidden-variable dynamics cannot always reproduce quantum time evolution even when instantaneous correlations are local
- Discovery of a new form of nonlocality based on temporal evolution rather than static Bell-type correlations
View Full Abstract
Bell nonlocality is an intriguing property of quantum mechanics with far-reaching consequences for information processing, philosophy and our fundamental understanding of nature. However, nonlocality is a statement about static correlations only. It does not take into account dynamics, i.e., the time evolution of those correlations. Consider a dynamic situation where the correlations remain local for all times. Then at each moment in time there exists a local hidden-variable (LHV) model reproducing the momentary correlations. Can the time evolution of the correlations then be captured by evolving the hidden variables? In this light, we define dynamical LHV models and motivate and discuss potential additional physical and mathematical assumptions. Based on a simple counterexample we conjecture that such LHV dynamics does not always exist. This is further substantiated by a rigorous no-go theorem. Our results suggest a new type of nonlocality that can be deduced from the observed time evolution of measurement statistics and which generically occurs in interacting quantum systems.
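For concreteness, a static LHV model for bipartite correlations has the standard form
$$
P(a,b\mid x,y) = \int d\lambda\; q(\lambda)\, P_{A}(a\mid x,\lambda)\, P_{B}(b\mid y,\lambda),
$$
and one natural schematic reading of a dynamical LHV model (the paper's precise definition and additional assumptions may differ) is that the observed time dependence should be carried by the hidden variables alone,
$$
P_{t}(a,b\mid x,y) = \int d\lambda\; q_{t}(\lambda)\, P_{A}(a\mid x,\lambda)\, P_{B}(b\mid y,\lambda), \qquad q_{t} = \Lambda_{t}[q_{0}],
$$
with the local response functions $P_{A}, P_{B}$ held fixed in time.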
Symbolic Pauli Propagation for Gradient-Enabled Pre-Training of Quantum Circuits
This paper introduces a method called symbolic Pauli propagation that allows quantum machine learning circuits to be trained classically before deployment on quantum hardware. The technique creates mathematical representations of quantum observables that can be optimized using traditional gradient-based methods, making quantum circuit training more efficient and scalable.
Key Contributions
- Development of symbolic Pauli propagation method for analytical gradient computation in quantum circuits
- Demonstration of classical pre-training approach that reduces expensive on-chip quantum training requirements
- Scalable framework extending beyond classical simulation limits for larger quantum systems
View Full Abstract
Quantum Machine Learning models typically require expensive on-chip training procedures and often lack efficient gradient estimation methods. By employing Pauli propagation, it is possible to derive a symbolic representation of observables as analytic functions of a circuit's parameters. Although the number of terms in such functional representations grows rapidly with circuit depth, suitable choices of ansatz and controlled truncations on Pauli weights and frequency components yield accurate yet tractable estimators of the target observables. With the right ansatz design, this approach can be extended to system sizes beyond the reach of classical simulation, enabling scalable training for larger quantum systems. This also enables a form of classical pre-training through gradient-based optimization prior to deployment on quantum hardware. The proposed approach is demonstrated on the Variational Quantum Eigensolver for obtaining the ground state of a spin model, showing that accurate results can be achieved with a scalable and computationally efficient procedure.
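To make the idea concrete, here is a minimal, self-contained sketch of symbolic Pauli back-propagation for a single qubit and a circuit of Pauli rotations, using sympy for the symbolic parameters. It illustrates the general technique only (no truncation, no multi-qubit ansatz) and is not the paper's implementation.

```python
# Minimal symbolic Pauli propagation: single qubit, circuit of Pauli rotations.
# Illustrative sketch of the general technique, not the paper's implementation.
import sympy as sp

# Products of distinct single-qubit Paulis: P*Q -> (phase, result).
PROD = {
    ("X", "Y"): (sp.I, "Z"), ("Y", "X"): (-sp.I, "Z"),
    ("Y", "Z"): (sp.I, "X"), ("Z", "Y"): (-sp.I, "X"),
    ("Z", "X"): (sp.I, "Y"), ("X", "Z"): (-sp.I, "Y"),
}

def conjugate_by_rotation(observable, gate_pauli, theta):
    """Return {pauli: coeff} for exp(+i*theta*P/2) O exp(-i*theta*P/2)."""
    new = {}
    for pauli, coeff in observable.items():
        if pauli in ("I", gate_pauli):                     # commutes: unchanged
            new[pauli] = new.get(pauli, 0) + coeff
        else:                                              # anticommutes: rotates
            phase, prod = PROD[(gate_pauli, pauli)]
            new[pauli] = new.get(pauli, 0) + sp.cos(theta) * coeff
            new[prod] = new.get(prod, 0) + sp.I * phase * sp.sin(theta) * coeff
    return new

# Circuit (applied in this order): Ry(a), then Rx(b); measure Z at the end.
a, b = sp.symbols("a b", real=True)
circuit = [("Y", a), ("X", b)]
obs = {"Z": sp.Integer(1)}

# Heisenberg picture: push the observable backwards through the gates.
for gate_pauli, theta in reversed(circuit):
    obs = conjugate_by_rotation(obs, gate_pauli, theta)

# Expectation in |0>: only I and Z contribute (<0|X|0> = <0|Y|0> = 0).
expectation = sp.simplify(obs.get("I", 0) + obs.get("Z", 0))
print(expectation)              # analytic function of (a, b): cos(a)*cos(b)
print(sp.diff(expectation, a))  # exact gradient, usable for classical pre-training
```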
Field Quantisations in Schwarzschild Spacetime: Theory versus Low-Energy Experiments
This paper investigates how quantum particles behave in strong gravitational fields by studying particle propagation near black holes using quantum field theory in curved spacetime. The authors find discrepancies between different theoretical approaches when describing quantum particles in Schwarzschild spacetime compared to the simpler methods that work well for weak gravity experiments on Earth.
Key Contributions
- Computation of Hawking particle propagators in far-horizon region of Schwarzschild spacetime
- Identification of discrepancies between quantum field theory in curved spacetime and path-integral formalism for gravitational quantum effects
View Full Abstract
Non-relativistic quantum particles in the Earth's gravitational field are successfully described by the Schrödinger equation with Newton's gravitational potential. In particular, quantum mechanics agrees with experiments such as free fall and quantum interference induced by gravity. However, quantum mechanics is a low-energy approximation to quantum field theory. The latter successfully describes high-energy experiments. Gravity is embedded in quantum field theory through the general-covariance principle. This framework is known in the literature as quantum field theory in curved spacetime, where, however, the concept of a quantum particle is ambiguous. In this article, we study in this framework how a Hawking particle moves in the far-horizon region of Schwarzschild spacetime by computing its propagator. We find that this propagator differs from the one that follows from the path-integral formalism, the formalism which adequately describes both free fall and quantum interference induced by gravity.
Explicit finite-time illustration of improper unitary evolution for the Klein--Gordon field in de Sitter space
This paper examines quantum field theory in curved spacetime, specifically studying how a free scalar field behaves in de Sitter space. The authors demonstrate that vacuum states at different times cannot be connected by proper unitary evolution, showing this breakdown occurs even for infinitesimally small time steps.
Key Contributions
- Explicit demonstration that vacuum states in de Sitter space are unitarily inequivalent across different time slices
- Proof that improper unitary evolution occurs even for infinitesimal time steps, not just asymptotic limits
View Full Abstract
It is known that quantum field theories in curved spacetime suffer from a number of pathologies, including the inability to relate states on different spatial slices by proper unitary time-evolution operators. In this article, we illustrate this issue by describing the canonical quantisation of a free scalar field in de Sitter space and explicitly demonstrating that the vacuum at a given time slice is unitarily inequivalent to that at any other time. In particular, we find that, if both background and Hamiltonian dynamics are taken into account, this inequivalence holds even for infinitesimally small time steps and not only in the asymptotic time limits.
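For context, the general criterion behind such statements is the unitary implementability of the Bogoliubov transformation relating the two quantizations (a standard result, not the paper's specific de Sitter computation):
$$
b_{\mathbf{k}} = \alpha_{\mathbf{k}}\, a_{\mathbf{k}} + \beta_{\mathbf{k}}^{*}\, a_{-\mathbf{k}}^{\dagger}, \qquad |\alpha_{\mathbf{k}}|^{2} - |\beta_{\mathbf{k}}|^{2} = 1,
$$
with the two Fock representations unitarily equivalent only if the mixing coefficients are Hilbert-Schmidt,
$$
\sum_{\mathbf{k}} |\beta_{\mathbf{k}}|^{2} < \infty .
$$
A divergent sum signals unitarily inequivalent vacua, which is the situation the paper exhibits explicitly for neighbouring time slices in de Sitter space.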
Shaping Dynamics Through Memory: A Study of Reservoir Profiles in Open Quantum Systems
This paper studies how different types of environmental memory (Lorentzian, Gaussian, and Uniform profiles) affect the behavior of quantum systems coupled to their surroundings. The researchers analyze how these memory effects influence signal transmission and create non-Markovian dynamics where the system's future depends on its past interactions with the environment.
Key Contributions
- Comparative analysis of three different reservoir memory kernels and their effects on quantum system dynamics
- Development of quantitative measures for non-Markovianity based on information backflow in structured environments
View Full Abstract
In this work, we investigate how different reservoir memory profiles influence the dynamical evolution of a single waveguide coupled to an external environment. We compare three representative memory kernels: Lorentzian, Gaussian and Uniform, highlighting their distinct spatial correlations and their impact on system behavior. We compute the transmission amplitude, transparency properties, and long-time behavior of the system under each memory model. To quantify deviations from Markovian dynamics, we employ a non-Markovianity measure based on information backflow, allowing a direct comparison between the structured reservoirs and the Markovian limit. Our results reveal clear signatures of memory-induced modifications in the transmission spectrum and demonstrate how specific reservoir profiles enhance or suppress non-Markovian effects.
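For reference, the three kernel shapes and the information-backflow (BLP-type) non-Markovianity measure invoked above have the following generic forms; here $s$ stands for the relevant spatial or temporal separation and the normalizations are purely illustrative:
$$
K_{\mathrm{Lor}}(s) \propto \frac{\gamma}{s^{2} + \gamma^{2}}, \qquad
K_{\mathrm{Gauss}}(s) \propto e^{-s^{2}/2\sigma^{2}}, \qquad
K_{\mathrm{Uni}}(s) \propto \Theta(\ell - |s|),
$$
$$
\mathcal{N} = \max_{\rho_{1}(0),\,\rho_{2}(0)} \int_{\dot{D} > 0} \frac{d}{dt}\, D\big(\rho_{1}(t), \rho_{2}(t)\big)\, dt ,
$$
with $D$ the trace distance; $\mathcal{N} > 0$ signals information flowing back from the reservoir to the system.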
Scalable tests of quantum contextuality from stabilizer-testing nonlocal games
This paper develops new methods to analyze quantum contextuality in stabilizer states by studying nonlocal games, proving that certain quantum states maintain their quantum advantage even when measured with low fidelity. The work establishes theoretical bounds on classical performance in these games and demonstrates that exponentially small fidelities can still witness quantum contextuality in cyclic cluster states.
Key Contributions
- Proved general coding-theory bound showing classical strategies cannot asymptotically match quantum performance in stabilizer-testing games
- Established asymptotically tight upper bounds for cyclic cluster states using transfer-matrix methods, showing exponentially small fidelity suffices to witness contextuality
View Full Abstract
Soon after the dawn of quantum error correction, DiVincenzo and Peres observed that stabilizer codewords could give rise to simple proofs of quantumness via contextuality. This discovery can be recast in the language of nonlocal games: every $n$-qubit stabilizer state defines a specific "stabilizer-testing" $n$-player nonlocal game, which quantum players can win with probability one. If quantum players can moreover outperform all possible classical players, then the state is contextual. However, the classical values of stabilizer-testing games are largely unknown for scalable examples beyond the $n$-qubit GHZ state. We introduce several new methods for upper-bounding the classical values of these games. We first prove a general coding-theory bound for all stabilizer-testing games: if the classical value $p_{\mathrm{cl}}^* < 1$, then $p_{\mathrm{cl}}^* \leq 7/8$, i.e., there is no classical strategy that can perform as well as the optimal quantum strategy even in an asymptotic sense. We then show how to tighten this bound for the most common scalable examples, namely GHZ, toric-code and cyclic cluster states. In particular, we establish an asymptotically tight upper bound for cyclic cluster states using transfer-matrix methods. This leads to the striking conclusion that measuring an exponentially small fidelity to the cyclic cluster state will suffice to witness its contextuality.
Condensation of slow $γ$-quanta in strong magnetic fields
This paper investigates how extremely strong magnetic fields (stronger than those typically studied in quantum electrodynamics) affect blackbody radiation by causing vacuum birefringence. The authors show this creates a novel condensate-like state of slow-moving photons that could exist in neutron star cores and affect stellar stability.
Key Contributions
- Discovery of anisotropic Planck radiation law in strong magnetic fields due to vacuum birefringence
- Identification of a novel condensate-like state of slow photons at high temperatures in extreme magnetic environments
View Full Abstract
The implications of the root singularity of the vacuum polarization tensor near the first pair creation threshold on blackbody radiation are investigated for magnetic fields above the characteristic scale of quantum electrodynamics. We show that the vacuum birefringence in such a strong background leads to an anisotropic behavior of the Planck radiation law. The thermal spectrum is characterized by a resonance that competes with the Wien maximum, causing a crossover in the low $γ$-spectrum of the heat radiation. A light state resembling a many-body condensate with slow motion is linked to the high-temperature phase. This novel state of radiation may coexist with nuclear or quark matter in a neutron star's core, increasing its compactness and influencing its stability.
Indistinguishable photons from a two-photon cascade
This paper demonstrates high-quality indistinguishable photons from semiconductor quantum dots using a biexciton cascade process. The researchers achieved excellent two-photon interference visibility by using Purcell enhancement in low-noise devices, showing that photon coherence can be controlled by tuning the lifetime ratio of the biexciton and exciton transitions.
Key Contributions
- Demonstrated high two-photon interference visibility (94% for XX photons, 82% for X photons) using Purcell-enhanced biexciton transitions
- Showed controllable photon coherence by tuning the XX:X lifetime ratio over two orders of magnitude in low-noise quantum dot devices
View Full Abstract
Decay of a four-level diamond scheme via a cascade is a potential source of entangled photon pairs. A solid-state implementation is the biexciton cascade in a semiconductor quantum dot. While high entanglement fidelities have been demonstrated, the two photons, XX and X, are temporally correlated, typically resulting in poor photon coherence. Here, we demonstrate a high two-photon interference visibility (a measure of the photon coherence) for both XX (V=94$\pm$2%) and X (V=82$\pm$6%) photons. This is achieved by Purcell-enhancing the biexciton transition in a low-noise device. We find that the photon coherence follows the well-known quantum optics result upon tuning the XX:X lifetime ratio over two orders of magnitude.
Giant-atom quantum acoustodynamics in hybrid superconducting-phononic integrated circuits
This paper demonstrates a 'giant atom' by connecting a superconducting qubit to a phononic waveguide at two distant points, creating unique quantum dynamics with controllable decay rates. The researchers show this setup can prepare high-purity quantum states and provides a new platform for advanced quantum device control.
Key Contributions
- First demonstration of giant-atom physics in hybrid superconducting-phononic circuits, with coupling points separated by about 600 acoustic wavelengths
- Achievement of highly tunable frequency-dependent decay rates with Purcell factors exceeding 40
- Demonstration of high-purity quantum superposition state preparation using controllable non-Markovian dynamics
View Full Abstract
We demonstrate a giant atom by coupling a superconducting transmon qubit to a lithium niobate phononic waveguide at two points separated by about 600 acoustic wavelengths, with a propagation delay of 125 ns. The giant atom yields non-Markovian relaxation dynamics characterized by phonon backflow and a frequency-dependent effective decay rate varying four-fold over merely 4 MHz, corresponding to a Purcell factor exceeding 40. Exploiting this frequency-dependent dissipation, we prepare quantum superposition states with high purity. Our results establish phononic integrated circuits as a versatile platform for giant-atom physics, providing highly tunable quantum devices for advanced quantum information processing.
The measured speed in the evanescent regime reflects the spatial decay of the wavefunction, not particle motion
This paper critiques an experimental interpretation of photon behavior in coupled waveguides, arguing that a measured energy-dependent parameter reflects the spatial decay of quantum wavefunctions rather than actual particle motion. The authors defend Bohmian mechanics against claims that the experiment contradicts its predictions about particle velocity in classically forbidden regions.
Key Contributions
- Clarifies the interpretation of spatial decay measurements in evanescent wave regimes
- Defends the ontological framework of Bohmian mechanics against experimental challenges
View Full Abstract
The recent paper by Sharoglazova et al. reports an energy-dependent parameter $ν$ extracted from the spatial distribution of photons in a coupled-waveguide experiment. The authors interpret $ν$ as the speed of quantum particles, even in the classically forbidden regime, and claim that its finite value contradicts the Bohmian mechanics prediction of zero particle velocity. This challenge arises from a fundamental misunderstanding of the operational meaning of $ν$ within the Bohmian ontological framework. We demonstrate that $ν$ quantifies the spatial gradient of the wavefunction's amplitude, a geometric property of the guiding field, not the kinematical velocity of point-like particles. The experiment therefore does not challenge but rather illustrates the clean ontological separation between the wave and particle aspects inherent to Bohmian mechanics.
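The distinction at stake can be summarized compactly. In polar form the Bohmian velocity depends only on the phase of the wavefunction, whereas an evanescent (real, exponentially decaying) wavefunction has a nonzero amplitude gradient but carries no current:
$$
\psi = R\, e^{iS/\hbar}, \qquad
v_{\mathrm{Bohm}} = \frac{\nabla S}{m} = \frac{j}{|\psi|^{2}}, \qquad
j = \frac{\hbar}{m}\,\mathrm{Im}\big(\psi^{*}\nabla\psi\big),
$$
$$
\psi_{\mathrm{evan}}(x) \propto e^{-\kappa x} \;\Rightarrow\; S = \mathrm{const},\; v_{\mathrm{Bohm}} = 0, \quad\text{while}\quad -\partial_{x}\ln|\psi| = \kappa \neq 0,
$$
so a parameter extracted from the spatial decay tracks $\kappa$, a property of the guiding field's amplitude, not a particle velocity.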
Wichmann-Kroll vacuum polarization density in a finite Gaussian basis set
This paper develops improved computational methods for calculating quantum electrodynamics (QED) effects in atoms, specifically focusing on vacuum polarization corrections to hydrogen-like ions using Gaussian basis sets. The work aims to achieve higher numerical precision in computing energy shifts caused by virtual electron-positron pairs in the quantum vacuum.
Key Contributions
- Analytic expression for linear vacuum polarization density using Riesz projectors
- Error analysis and convergence study of finite Gaussian basis methods for QED calculations
- Strategy using even-tempered basis sets for extrapolation to complete basis set limit
View Full Abstract
This work further develops the calculation of QED effects in a finite Gaussian basis. We focus on the non-linear $α(Zα)^{n\ge 3}$ contribution to the vacuum polarization density, computing the energy shift of 1s$_{1/2}$ states of hydrogen-like ions. Our goal is to improve the numerical computations to achieve a precision comparable to that of Green's function methods reported in the literature. To do so, an analytic expression for the linear contribution to the vacuum polarization density is derived using Riesz projectors. Alternative formulations of the vacuum polarization density and their relation are discussed. The convergence of the finite Gaussian basis scheme is investigated, and the numerical difficulties that arise are characterized. In particular, an error analysis is performed to assess the method's robustness to numerical noise. We then report a strategy for computing the energy shift with sufficient precision to enable a sensible extrapolation of the partial-wave expansion. A key feature of the procedure is the use of even-tempered basis sets, allowing for an extrapolation towards the complete basis set limit.
Landscape Analysis of Excited States Calculation over Quantum Computers
This paper analyzes three variational quantum eigensolver (VQE) models for calculating excited states on quantum computers, focusing on methods that embed orthogonality constraints to avoid variational collapse to ground states. The authors provide theoretical analysis showing these models have favorable optimization landscapes where local minima are global minima.
Key Contributions
- Rigorous landscape analysis of three VQE models with embedded orthogonality constraints for excited state calculations
- Theoretical proof that these models have favorable optimization properties where local minima are global minima
- Comprehensive comparison of quantum resource requirements and classical optimization complexity across the three models
View Full Abstract
The variational quantum eigensolver (VQE) is one of the most promising algorithms for low-lying eigenstate calculations on Noisy Intermediate-Scale Quantum (NISQ) computers. Specifically, VQE has achieved great success for ground state calculations of a Hamiltonian. However, excited state calculations arising in quantum chemistry and condensed matter often require solving more challenging problems than the ground state, as these states are generally further away from a mean-field description and involve less straightforward optimization to avoid the variational collapse to the ground state. Maintaining orthogonality between low-lying eigenstates is a key algorithmic hurdle. In this work, we analyze three VQE models that embed orthogonality constraints through specially designed cost functions, avoiding the need for external enforcement of orthogonality between states. Notably, these formulations possess the desirable property that any local minimum is also a global minimum, helping address optimization difficulties. We conduct rigorous landscape analyses of the models' stationary points and local minimizers, theoretically guaranteeing their favorable properties and providing analytical tools applicable to broader VQE methods. A comprehensive comparison between the three models is also provided, considering their quantum resource requirements and classical optimization complexity.
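The abstract does not spell out the three cost functions, but a representative example of embedding orthogonality directly into the objective is the deflation-style cost used in variational quantum deflation (VQD), given here only to indicate the class of models being analyzed (the paper's three formulations may differ):
$$
C_{k}(\theta) = \langle \psi(\theta) | H | \psi(\theta) \rangle + \sum_{j<k} \beta_{j}\, \big| \langle \psi(\theta) | \psi_{j} \rangle \big|^{2},
$$
where the $|\psi_{j}\rangle$ are previously optimized lower-lying states and the $\beta_{j}$ are sufficiently large penalty weights, so that minimizing $C_{k}$ targets the $k$-th excited state without any externally enforced orthogonality.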
Replica Keldysh field theory of quantum-jump processes: General formalism and application to imbalanced and inefficient fermion counting
This paper develops a theoretical framework called replica Keldysh field theory to study quantum systems under imperfect measurements, where detectors may miss some quantum events or have imbalanced detection rates. The researchers apply this theory to fermion counting in lattice systems and find that inefficient detection creates distinct phases with different entanglement properties.
Key Contributions
- Development of replica Keldysh field theory for general quantum-jump processes with inefficient detection
- Unified theoretical framework connecting measurement-induced phase transitions and open quantum system dynamics
- Analysis of imbalanced and inefficient fermion counting showing distinct entanglement scaling laws
View Full Abstract
Measurement-induced phase transitions have largely been explored for projective or continuous measurements of Hermitian observables, assuming perfect detection without information loss. Yet such transitions also arise in more general settings, including quantum-jump processes with non-Hermitian jump operators, and under inefficient detection. A theoretical framework for treating these broader scenarios has been missing. Here we develop a comprehensive replica Keldysh field theory for general quantum-jump processes in both bosonic and fermionic systems. Our formalism provides a unified description of pure-state quantum trajectories under efficient detection and mixed-state dynamics emerging from inefficient monitoring, with deterministic Lindbladian evolution appearing as a limiting case. It thus establishes a direct connection between phase transitions in nonequilibrium steady states of driven open quantum matter and in measurement-induced dynamics. As an application, we study imbalanced and inefficient fermion counting in a one-dimensional lattice system: monitored gain and loss of fermions occurring at different rates, with a fraction of gain and loss jumps undetected. For imbalanced but efficient counting, we recover the qualitative picture of the balanced case: entanglement obeys an area law for any nonzero jump rate, with an extended quantum-critical regime emerging between two parametrically separated length scales. Inefficient detection introduces a finite correlation length beyond which entanglement, as quantified by the fermionic logarithmic negativity, obeys an area law, while the subsystem entropy shows volume-law scaling. Numerical simulations support our analytical findings. Our results offer a general and versatile theoretical foundation for studying measurement-induced phenomena across a wide class of monitored and open quantum systems.
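For orientation, the limiting case mentioned above, fully inefficient detection (all measurement records averaged over), is ordinary deterministic Lindbladian evolution,
$$
\dot{\rho} = -i[H, \rho] + \sum_{a} \gamma_{a}\Big( L_{a}\rho L_{a}^{\dagger} - \tfrac{1}{2}\big\{ L_{a}^{\dagger}L_{a}, \rho \big\} \Big),
$$
while perfect detection of every jump yields pure-state quantum-jump trajectories; intermediate detection efficiencies give the mixed-state conditional dynamics that the replica Keldysh construction is built to describe. (Schematic orientation only; the paper's field-theoretic formulation is considerably more general.)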
Rationally-extended radial harmonic oscillator in a position-dependent mass background
This paper solves the radial harmonic oscillator problem with position-dependent mass using mathematical transformations, showing how it can be mapped to a known potential problem and extended using exceptional orthogonal polynomials. The work develops new exactly-solvable quantum mechanical systems with deformed supersymmetric properties.
Key Contributions
- Exact solution of radial harmonic oscillator with position-dependent mass using point canonical transformations
- Development of rational extensions using exceptional orthogonal polynomials with deformed shape invariance properties
View Full Abstract
We show that the radial harmonic oscillator problem in the position-dependent mass background of the type $m(α;r) = (1+αr^2)^{-2}$, $α>0$, can be solved by using a point canonical transformation mapping the corresponding Schrödinger equation onto that of the Pöschl-Teller I potential with constant mass. The radial harmonic oscillator problem with position-dependent mass is shown to exhibit a deformed shape invariance property in a deformed supersymmetric framework. The inverse point canonical transformation then provides some exactly-solvable rational extensions of the radial harmonic oscillator with position-dependent mass associated with $X_m$-Jacobi exceptional orthogonal polynomials of type I, II, or III. The extended potentials of type I and II are proved to display deformed shape invariance. The spectrum and wavefunctions of the radial harmonic oscillator potential and its extensions are shown to go over to well-known results when the deforming parameter $α$ goes to zero.
Classical and quantum electromagnetic momentum in anisotropic optical waveguides
This paper develops a theoretical framework for understanding how light waves in optical waveguides carry momentum, bridging classical electromagnetic theory with quantum descriptions of photons in waveguides. The work provides a rigorous method for quantizing electromagnetic fields in integrated photonic circuits.
Key Contributions
- Proves orthogonality condition relating guided modes to electromagnetic momentum
- Provides rigorous quantization procedure for broadband guided fields in photonic circuits
- Bridges theoretical gap between classical Maxwell equations and photon understanding in waveguides
View Full Abstract
The guided modes supported by dielectric channel waveguides act as individual carriers of momentum. We show this by proving that the modes satisfy an orthogonality condition which relates to the momentum of the optical electromagnetic field, with a link to the more familiar power (energy) orthogonality. This result forms the basis for a rigorous, self-consistent procedure for the quantization of broadband guided electromagnetic fields in the typical channels used in integrated photonic circuits. Our work removes the existing theoretical gap between the classical solution of the Maxwell equations for guided fields and the intuitive understanding of photons in waveguides. The presented approach is valid for straight, lossless, and potentially anisotropic, dielectric waveguides of general shape, in the linear regime, and including material dispersion. Examples for the hybrid modes of a thin film lithium niobate strip waveguide are briefly discussed.
Quantum-Inspired Ising Machines for Quantum Chemistry Calculations
This paper demonstrates that quantum-inspired classical algorithms (coherent Ising machines and simulated bifurcation) can solve quantum chemistry problems faster than gate-based quantum computers, accurately calculating molecular energy profiles for H₂ and H₂O molecules while avoiding the noise issues of current quantum hardware.
Key Contributions
- Demonstrated quantum-inspired classical algorithms outperform gate-based quantum computing for molecular energy calculations
- Achieved computation times of 1.2 s (H₂) and 2.4 s (H₂O) for full energy profiles, versus at least 6 s per single molecular geometry for gate-based quantum approaches of comparable accuracy
View Full Abstract
Four decades after Richard Feynman's famous remark, we have reached a stage at which nature can be simulated quantum mechanically. Quantum simulation is among the most promising applications of quantum computing. However, like many quantum algorithms, it is severely constrained by noise in near-term hardware. Quantum-inspired algorithms provide an attractive alternative by avoiding the need for error-prone quantum devices. In this study, we demonstrate that the coherent Ising machine and simulated bifurcation algorithms can accurately reproduce the electronic energy profiles of H_2 and H_2O, capturing their essential energetic features. Notably, we obtain computational times of 1.2 s and 2.4 s for the H_2 and H_2O profiles, respectively, representing a substantial speed-up compared to gate-based quantum computing approaches, which typically require at least 6 s to compute a single molecular geometry with comparable accuracy. These results highlight the potential of quantum-inspired approaches for scaling to larger molecular systems and for future applications in chemistry and materials science.
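The general workflow is to map the electronic-structure problem onto an Ising or QUBO cost function and then minimize it with a heuristic classical solver. The sketch below substitutes a plain simulated-annealing loop for the coherent-Ising-machine and simulated-bifurcation solvers used in the paper, and uses random toy couplings rather than couplings derived from a molecular Hamiltonian.

```python
# Generic classical Ising-minimization sketch: a simulated-annealing stand-in for
# the CIM / simulated-bifurcation solvers; the couplings are random toy values,
# not a molecular Hamiltonian.
import numpy as np

rng = np.random.default_rng(0)
n = 12
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)

def energy(s):
    """Ising energy E(s) = -1/2 s^T J s - h^T s, with s_i in {-1, +1}."""
    return -0.5 * s @ J @ s - h @ s

s = rng.choice([-1, 1], size=n)
best_s, best_E = s.copy(), energy(s)
n_steps = 20000
for step in range(n_steps):
    T = 2.0 * (1 - step / n_steps) + 1e-3         # linear cooling schedule
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s + h[i])             # energy change from flipping spin i
    if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
        s[i] = -s[i]
        E = energy(s)
        if E < best_E:
            best_s, best_E = s.copy(), E

print("best Ising energy found:", round(float(best_E), 3))
```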
Photoacoustic model for laser-induced acoustic desorption of nanoparticles
This paper develops a theoretical model for laser-induced acoustic desorption (LIAD) to optimize the process of loading nanoparticles into optical traps for quantum optomechanics experiments. The model identifies key scaling relationships and enables design of compact laser systems that can achieve performance comparable to large laboratory setups.
Key Contributions
- Development of photoacoustic wave equation framework for modeling LIAD process
- Identification of scaling relationships for surface acceleration and optimal spot size
- Demonstration that compact sub-nanosecond laser systems can match laboratory-scale performance
View Full Abstract
Laser-induced acoustic desorption (LIAD) enables loading nanoparticles into optical traps under vacuum for levitated optomechanics experiments. Current LIAD systems rely on empirical optimization using available laboratory lasers rather than systematic theoretical design, resulting in large systems incompatible with portable or space-based applications. We develop a theoretical framework using the photoacoustic wave equation to model acoustic wave generation and propagation in metal substrates, enabling systematic optimization of laser parameters. The model identifies key scaling relationships: surface acceleration scales as $τ^{-2}$ with pulse duration, while acoustic diffraction sets fundamental limits on optimal spot size $w \gtrsim \sqrt{vτd}$. Material figures of merit combine thermal expansion and optical absorption properties, suggesting alternatives to traditional aluminum substrates. The framework validates well against experimental data and demonstrates that compact laser systems with sub-nanosecond pulse durations can achieve performance competitive with existing laboratory-scale implementations despite orders-of-magnitude lower pulse energies. This enables rational design of minimal LIAD systems for practical applications.
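The two scaling relations quoted in the abstract can be turned directly into a small design calculator. The material numbers below (sound speed, substrate thickness, pulse durations) are illustrative placeholders, not values from the paper.

```python
# Quick design estimates from the LIAD scalings quoted in the abstract:
#   surface acceleration ~ tau^-2,  optimal spot size  w >~ sqrt(v * tau * d).
# The numerical values below are illustrative placeholders, not the paper's.
import math

def min_spot_size(v_sound, tau, d):
    """Diffraction-set lower bound on the laser spot size, w >~ sqrt(v*tau*d), in metres."""
    return math.sqrt(v_sound * tau * d)

def relative_acceleration(tau, tau_ref):
    """Surface acceleration relative to a reference pulse duration (a ~ tau^-2)."""
    return (tau_ref / tau) ** 2

v_sound = 6.4e3   # m/s, longitudinal sound speed, order of magnitude for aluminium
d = 100e-6        # m, assumed substrate thickness
for tau in (0.5e-9, 5e-9, 50e-9):     # sub-ns versus longer pulses
    w = min_spot_size(v_sound, tau, d)
    gain = relative_acceleration(tau, tau_ref=5e-9)
    print(f"tau = {tau:.1e} s -> w_min ~ {w * 1e6:6.1f} um, acceleration x{gain:7.1f} vs 5 ns")
```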
Coined Quantum Walks on Complex Networks for Quantum Computers
This paper develops quantum circuits for implementing coined quantum walks on complex networks like social networks or the internet, using a dual-register approach to handle varying node connections more efficiently. The researchers test their method on different network types and demonstrate it on IBM quantum hardware, showing the approach scales polynomially and could work on larger quantum computers in the future.
Key Contributions
- Novel dual-register encoding for quantum walks on complex networks with varying node degrees
- Polynomial scaling quantum circuit design with O(N^1.9) depth demonstrated across multiple network topologies
- Experimental validation on IBM quantum hardware showing feasibility for early fault-tolerant quantum computing
View Full Abstract
We propose a quantum circuit design for implementing coined quantum walks on complex networks. In complex networks, the coin and shift operators depend on the varying degrees of the nodes, which makes circuit construction more challenging than for regular graphs. To address this issue, we use a dual-register encoding. This approach enables a simplified shift operator and reduces the resource overhead compared to previous methods. We implement the circuit using Qmod, a high-level quantum programming language, and evaluate the performance through numerical simulations on Erdős-Rényi, Watts-Strogatz, and Barabási-Albert models. The results show that the circuit depth scales as approximately $N^{1.9}$ regardless of the network topology. Furthermore, we execute the proposed circuits on the ibm\_torino superconducting quantum processor for Watts-Strogatz models with $N=4$ and $N=8$. The experiments show that hardware-aware optimization slightly improved the $L_1$ distance for the larger graph, whereas connectivity constraints imposed overhead for the smaller one. These results indicate that while current NISQ devices are limited to small-scale validations, the polynomial scaling of our framework makes it suitable for larger-scale implementations in the early fault-tolerant quantum computing era.
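As a point of reference for what such a circuit is meant to implement, below is a direct classical (state-vector) simulation of a coined quantum walk on an arbitrary graph, in the standard arc-space formulation with Grover coins and a flip-flop shift. It reproduces the walk dynamics only and is not the paper's dual-register circuit construction.

```python
# Classical reference simulation of a coined quantum walk on a graph: the state
# lives on directed arcs (u, v); Grover coin at each node, flip-flop shift.
# Reference dynamics only; not the paper's dual-register circuit construction.
import numpy as np
import networkx as nx

G = nx.connected_watts_strogatz_graph(8, 4, 0.3, seed=1)
arcs = [(u, v) for u in G for v in G.neighbors(u)]
index = {a: i for i, a in enumerate(arcs)}
dim = len(arcs)

# Grover coin: acts within the block of arcs leaving each node u (degree-dependent).
C = np.zeros((dim, dim))
for u in G:
    out = [index[(u, v)] for v in G.neighbors(u)]
    d = len(out)
    C[np.ix_(out, out)] = 2.0 / d * np.ones((d, d)) - np.eye(d)

# Flip-flop shift: (u, v) -> (v, u).
S = np.zeros((dim, dim))
for (u, v), i in index.items():
    S[index[(v, u)], i] = 1.0

U = S @ C                                  # one step of the walk
psi = np.zeros(dim)
psi[0] = 1.0                               # start on a single arc
for _ in range(20):
    psi = U @ psi

# Probability of finding the walker at each node (summed over its outgoing arcs).
prob = {u: sum(abs(psi[index[(u, v)]]) ** 2 for v in G.neighbors(u)) for u in G}
print({u: round(p, 3) for u, p in prob.items()}, "total =", round(sum(prob.values()), 3))
```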
Instantaneous velocity during quantum tunnelling
This paper analyzes the dynamics of quantum tunnelling, showing that particle velocity inside a barrier starts large and relaxes toward zero as the system reaches steady state, while probability density gradually builds up within the barrier. The work resolves paradoxes about particle motion in tunnelling and provides a clearer theoretical framework for understanding time-resolved tunnelling phenomena.
Key Contributions
- Showed that particle velocity in tunnelling barriers continuously relaxes from large initial values toward zero in steady state
- Derived explicit relation between particle velocity and barrier width, resolving paradoxes about vanishing velocity coexisting with finite particle density
- Established theoretical foundation for time-resolved tunnelling dynamics and critiqued spurious speed definitions based on probability density
View Full Abstract
Quantum tunnelling, a hallmark phenomenon of quantum mechanics, allows particles to pass through the classically forbidden region. It underpins fundamental processes ranging from nuclear fusion and photosynthesis to the operation of superconducting qubits. Yet the underlying dynamics of particle motion during tunnelling remain subtle and are still the subject of active debate. Here, by analyzing the temporal evolution of the tunnelling process, we show that the particle velocity inside the barrier continuously relaxes from a large initial value toward a smaller one, and may even approach zero in the evanescent regime. Meanwhile, the probability density within the barrier gradually builds up before reaching its stationary profile, rather than being present from the outset. In addition, starting from the steady-state equations, we derive an explicit relation between the particle velocity and the barrier width, and show that the velocity in evanescent states approaches zero when the barrier is sufficiently wide. These findings resolve the apparent paradox of a vanishing steady-state velocity coexisting with a finite particle density. We point out that defining an effective speed from the probability density, rather than from the probability current, can lead to spuriously nonzero "stationary speed," as appears to be the case in Ref. [Nature 643, 67 (2025)]. Our work establishes a clear dynamical picture for the formation of tunnelling flow and provides a theoretical foundation for testing time-resolved tunnelling phenomena.
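The definitional point emphasized above, velocity from the probability current rather than from the density, can be stated explicitly for a stationary state in one dimension:
$$
j(x) = \frac{\hbar}{m}\,\mathrm{Im}\big(\psi^{*}\partial_{x}\psi\big), \qquad v(x) = \frac{j(x)}{|\psi(x)|^{2}},
$$
so that for a wide barrier the transmitted current, and with it $v$, becomes exponentially small even though the density $|\psi|^{2} \propto e^{-2\kappa x}$ inside the barrier remains finite; a "speed" defined instead from the spatial decay of the density is set by the decay constant $\kappa$ and need not vanish, which is the spurious quantity the abstract warns against.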
Feedback Cooling and Thermometry of a Single Trapped Ion Using a Knife Edge
This paper demonstrates a new method to cool a single trapped ion to temperatures 9 times below the normal Doppler cooling limit using real-time feedback control. The technique monitors the ion's motion by detecting fluorescence light changes at a knife edge detector and applies electronic feedback to reduce the ion's thermal motion.
Key Contributions
- First demonstration of feedback cooling a single trapped ion below the Doppler limit
- Novel application of knife-edge detection method for real-time ion motion monitoring and temperature measurement
View Full Abstract
We report on the first feedback cooling of a single trapped ion below the Doppler limit of $\hbarΓ/(2 k_\mathrm{B})$. The motion of a single ion is monitored in real time and cooled to a temperature up to 9 times below the Doppler cooling limit by applying electronic feedback. Real-time motion detection is implemented by imaging the fluorescence photons emitted by the ion onto a knife edge and detecting the transmitted light, a method previously used to cool trapped nanoparticles. The intensity modulation of the fluorescence resulting from the ion motion is used to generate and apply the feedback signal and also to determine the ion temperature. The method benefits from a high rate of detected scattered photons; achieving this rate can be a challenge, which we address by using a parabolic mirror to collect the fluorescence.
Entropy of Schwinger pair production in time-dependent Sauter pulse electric field
This paper studies different types of entropy (entanglement, thermal, and chemical potential-corrected) that arise when electron-positron pairs are created from vacuum by strong time-varying electric fields. The authors compare how these entropy measures behave differently for short versus long electric pulses and examine their relationships.
Key Contributions
- Comparative analysis of three different entropy measures in Schwinger pair production under time-dependent electric fields
- Discovery that entanglement entropy dominates for short pulses while thermal entropy dominates for long pulses
- Introduction of an Unruh-temperature-based thermal entropy calculation for the full-momentum case
View Full Abstract
We investigate the entropy of electron-positron pair production in a time-dependent Sauter pulse electric field. Both the longitudinal-momentum-only and the full-momentum cases are examined. We further examine three types of entropy: the usual entanglement entropy $S_{\text{E}}$, the thermal distribution entropy $S_{\text{Th}}$, and its chemical-potential-corrected counterpart $S_{\text{Th,CP}}$. For short pulses, $S_{\text{E}}$ is higher than $S_{\text{Th}}$, and vice versa for long pulses. The chemical potential causes the single-particle average thermal distribution entropy $\frac{S_{\text{Th,CP}}}{N}$ to exhibit non-monotonic behavior, similar to the single-particle average entanglement entropy $\frac{S_{\text{E}}}{N}$ in the short-pulse range. In the full momentum case, we calculate the thermal distribution entropy $S_{\text{Th, U}}$ by introducing the Unruh temperature as the local effective temperature. We find that both $S_{\text{Th, U}}$ and $S_{\text{E}}$ saturate asymptotically to constant values, with the former having the larger asymptotic value. The results presented in this study reveal delicate relationships among the different entropies.
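For reference, the mode-wise entanglement entropy commonly used for fermionic pair production, expressed through the created-pair occupation numbers $n_{\mathbf{p}}$, takes the standard form
$$
S_{\mathrm{E}} = -\sum_{\mathbf{p}} \Big[ n_{\mathbf{p}} \ln n_{\mathbf{p}} + (1 - n_{\mathbf{p}}) \ln(1 - n_{\mathbf{p}}) \Big];
$$
the thermal-distribution entropies $S_{\text{Th}}$ and $S_{\text{Th,CP}}$ studied in the paper are distinct constructions built on effective thermal occupations and are not reproduced here.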
Neutrino Propagation in Quantum Field Theory at Short and Long Baselines
This paper develops a quantum field theory approach to neutrino oscillations using wave packet descriptions, finding that quantum corrections violate the classical inverse-square law and reduce neutrino detection rates. The authors suggest this framework might partially explain the reactor antineutrino anomaly observed in experiments.
Key Contributions
- Development of wave packet modified neutrino propagator with asymptotic series expansions
- Identification of quantum field theory corrections that violate inverse-square law in neutrino propagation
- Potential explanation for reactor antineutrino anomaly through quantum field effects
View Full Abstract
In a quantum field approach to neutrino oscillations, the neutrino is treated as a propagator, while the external initial and final particle states are described by covariant wave packets. For the asymptotic behavior on short and long macroscopic baselines, the wave packet modified neutrino propagator is expressed through asymptotic series in powers of dimensionless Lorentz and rotation invariant variables. In both regimes, leading-order corrections violate the classical inverse-square law and lead to a decrease in the neutrino-induced event rate. The possibility that the so-called reactor antineutrino anomaly can, at least partially, be explained within this approach is discussed.
Quantum Readiness in Latin American High Schools: Curriculum Compatibility and Enabling Conditions
This paper analyzes the readiness of Latin American high schools to integrate quantum computing education by evaluating curriculum compatibility and institutional capacity across six countries. The study proposes a framework for assessing educational preparedness and provides a roadmap for implementing quantum education in secondary schools.
Key Contributions
- Development of a qualitative framework for assessing quantum education readiness in secondary schools
- Cross-country comparative analysis of Latin American educational systems' preparedness for quantum curriculum integration
View Full Abstract
The accelerating global development of quantum technologies strengthens the case for introducing quantum computing concepts before university. Yet in Latin America, there is no consolidated, region-wide integration of quantum computing into secondary education, and the feasibility conditions for doing so remain largely unexamined. This paper proposes a qualitative, comparative framework to assess academic readiness for quantum education across six countries - Peru, Bolivia, Chile, Argentina, Brazil, and Colombia - grounded in the relationship between curriculum compatibility and enabling conditions spanning institutional capacity, teacher preparation, infrastructure, and equity. Using official curricula, policy documents, national statistics, and educational reports, we apply structured qualitative coding and a 1-5 ordinal scoring system to generate a cross-country diagnosis. The findings reveal substantial regional asymmetries: among the six countries studied, Chile emerges as the most institutionally prepared for progressive quantum education integration, while the remaining countries exhibit varying combinations of curricular gaps and fragmented but promising enabling conditions. Building on this diagnosis, we propose a country-sensitive, regionally coordinated roadmap for staged implementation, beginning with teacher development and pilot centers, leveraging open-source platforms and local-language resources, and scaling toward gradual curricular integration. This work establishes a baseline for future quantitative and mixed-methods studies evaluating learning outcomes, motivation, and scalable models for quantum education in Latin America.
Self-testing GHZ state via a Hardy-type paradox
This paper develops a method to verify that quantum devices are actually producing genuine three-particle GHZ entangled states by using a generalized version of Hardy's quantum paradox, without needing to trust or know the internal workings of the devices. The authors prove this verification method is mathematically robust and show it connects to existing Bell inequality tests.
Key Contributions
- Development of a self-testing protocol for GHZ states based on Hardy's nonlocality paradox
- Proof that the Hardy correlation achieving maximal success probability is an exposed extremal point of the quantum correlation set
- Demonstration of equivalence between Hardy-type paradox violations and Mermin inequality violations for the same correlations
- Robust analysis framework for handling experimental imperfections in self-testing
View Full Abstract
Self-testing is a correlation-based framework that enables the certification of both the underlying quantum state and the implemented measurements without imposing any assumptions on the internal structure of the devices. In this work, we introduce a self-testing protocol for the Greenberger-Horne-Zeilinger (GHZ) state based on a natural generalization of Hardy's nonlocality argument. Within this framework, we prove that the correlation achieving the maximal Hardy success probability constitutes an extremal point of the quantum correlation set and, moreover, that this point is \emph{exposed}. To address experimentally relevant imperfections, we further develop a robust self-testing analysis tailored to the Hardy construction. Additionally, we show that, in this scenario, the quantum correlation that attains the maximal violation of the Hardy-type paradox coincides with the correlation that yields the maximal violation of the Mermin inequality. This establishes a unified perspective in which the same multipartite correlation admits both a logical-paradox interpretation and a Bell-inequality-based characterization. Collectively, our results pave the way for investigating whether the correlations that maximally violate the generalized $N$-party Hardy paradox remain exposed in higher-party regimes.
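Since the result ties the Hardy-type argument to the Mermin inequality, it is useful to recall the standard objects involved (textbook definitions, not results of the paper):
$$
|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\big(|000\rangle + |111\rangle\big), \qquad
\mathcal{M} = XXX - XYY - YXY - YYX,
$$
where any local hidden-variable model obeys $\langle \mathcal{M} \rangle \le 2$, while the GHZ state attains the algebraic maximum $\langle \mathcal{M} \rangle = 4$.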
Amplifying Decoherence-Free Many-Body Interactions with Giant Atoms Coupled to Parametric Waveguide
This paper presents a new quantum platform that combines giant atoms (atoms coupled to waveguides at multiple points) with parametric waveguides to create enhanced many-body quantum interactions that are protected from noise-induced decoherence. The system enables both exchange and pairing interactions between atoms while using destructive interference to maintain quantum coherence.
Key Contributions
- Novel architecture combining giant atoms with traveling-wave parametric waveguides for decoherence-free quantum interactions
- Demonstration of tunable exchange and pairing interactions suitable for many-body quantum simulation
- Method to enhance quantum interactions while maintaining immunity to squeezed noise through engineered interference
View Full Abstract
Parametric amplification offers a powerful means to enhance quantum interactions through field squeezing, yet it typically introduces additional noise which accelerates quantum decoherence, a major obstacle for scalable quantum information processing. Conventionally, the squeezing field is implemented in cavities rather than in continuous waveguides, which limits its scalability for applications in quantum simulation. Giant atoms, which couple to waveguides at multiple points, provide a promising route to mitigate dissipation via engineered interference, enabling decoherence-free interactions. We extend the squeezing-amplified interaction to a novel quantum platform combining giant atoms with traveling-wave parametric waveguides based on $χ^{(2)}$ nonlinearity. By exploiting destructive interference between different coupling points, the interaction between giant atoms is not only significantly enhanced but also becomes immune to squeezed noise. Unlike conventional waveguide quantum electrodynamics without a squeezing pump, the giant emitters exhibit both exchange and pairing interactions, making this platform particularly suitable for simulating many-body quantum physics. More intriguingly, the strengths of these interactions can be smoothly tuned by adjusting the squeezing and coupling parameters. Our architecture thus provides a versatile and scalable platform for quantum simulation of strongly correlated physics and paves the way toward robust quantum control in many-body regimes.
Near-Infrared Quantum Emission from Oxygen-Related Defects in hBN
This paper demonstrates a method to create oxygen-related quantum defects in hexagonal boron nitride that emit single photons in the near-infrared spectrum (700-960 nm) with excellent properties including room-temperature operation, high brightness, and narrow linewidths. The defects are created using a simple oxygen plasma treatment and show promise for quantum networking applications.
Key Contributions
- Development of scalable oxygen plasma process to create stable single-photon emitters in hBN
- Demonstration of NIR quantum emitters with ultra-sharp linewidths and weak electron-phonon coupling
- First-principles identification of oxygen-related defect configurations responsible for emission
View Full Abstract
Color centers hosted in hexagonal boron nitride (hBN) have emerged as a promising platform for single-photon emission and coherent spin-photon interfaces that underpin quantum communication and quantum networking technologies. As a wide-bandgap van der Waals material, hBN can host individual optically active quantum defects emitting across the ultraviolet to visible spectrum, but existing color centers often show broad phonon sidebands (PSBs), unstable emission, or inconvenient wavelengths. Here, we show a simple, scalable oxygen-plasma process that reproducibly creates oxygen-related single quantum emitters in hBN with blinking-free zero-phonon lines spanning the near-infrared (NIR) spectrum from 700-960 nanometers. These emitters demonstrate room-temperature operation, high brightness, and ultra-sharp cryogenic linewidths in the few-gigahertz range under non-resonant excitation. Analysis of the PSBs shows weak electron-phonon coupling and predominant zero-phonon-line emission, while first-principles calculations identify plausible oxygen-related defect configurations. These emitters provide a promising platform for indistinguishable NIR single photons towards free-space quantum networking.
Optimizing Quantum Data Embeddings for Ligand-Based Virtual Screening
This paper develops quantum-classical hybrid approaches that combine neural networks with parameterized quantum circuits to create better molecular representations for drug discovery. The researchers show that these quantum embedding methods outperform classical approaches, especially when training data is limited.
Key Contributions
- Development of quantum-classical hybrid embedding methods for molecular representation
- Demonstration that quantum embeddings outperform classical baselines in data-limited drug screening scenarios
View Full Abstract
Effective molecular representations are essential for ligand-based virtual screening. We investigate how quantum data embedding strategies can improve this task by developing and evaluating a family of quantum-classical hybrid embedding approaches. These approaches combine classical neural networks with parameterized quantum circuits in different ways to generate expressive molecular representations and are assessed across two benchmark datasets of different sizes: the LIT-PCBA and COVID-19 collections. Across multiple biological targets and class-imbalance settings, several quantum and hybrid embedding variants consistently outperform classical baselines, especially in limited-data regimes. These results highlight the potential of optimized quantum data embeddings as data-efficient tools for ligand-based virtual screening.
Tunneling in double-well potentials within stochastic quantization: Application to ammonia inversion
This paper uses stochastic quantization to study quantum tunneling in double-well potentials, treating quantum mechanics as a diffusion process to calculate tunneling times and their probability distributions. The researchers derive analytical expressions, validate them through simulations, and apply their method to analyze ammonia molecule inversion dynamics, achieving good agreement with experimental data.
Key Contributions
- Development of first-passage time theory within stochastic quantization framework for tunneling analysis
- Derivation of direct relation between stochastic-mechanical and quantum-mechanical tunneling times in high-barrier limit
- Successful application to ammonia inversion dynamics yielding experimentally consistent results
View Full Abstract
Stochastic quantization - introduced by Nelson in 1966 - describes quantum behavior as a conservative diffusion process in which a particle undergoes Brownian-like motion with a fluctuation amplitude set by Planck's constant. While it fully reproduces conventional quantum mechanics, this approach provides an alternative framework that enables the study of dynamical quantities not easily defined within the standard formulation. In the present work, stochastic quantization is employed to investigate tunneling-time statistics for bound states in double-well potentials. Using first-passage time theory within the stochastic quantization framework, both the mean tunneling time, $\barτ$, and the full probability distribution, $p(τ)$, are computed, and the theoretical predictions are validated through extensive numerical simulations of stochastic trajectories for the two potentials considered as representative cases. For the square double-well potential, analytical expressions for $\barτ$ are derived and show excellent agreement with simulations. In the high-barrier limit, the results reveal a direct relation between the stochastic-mechanical and quantum-mechanical tunneling times, expressed as $τ_{\mathrm{QM}} = (π/2)\barτ$, where $τ_{\mathrm{QM}}$ corresponds to half the oscillation period of the probability of finding the particle in either well. This relation is further confirmed for generic double-well systems through a WKB analysis. As a concrete application, the inversion dynamics of the ammonia molecule is analyzed, yielding an inversion frequency of approximately $24$ GHz, in close agreement with experimental observations. These results highlight the potential of stochastic quantization as a powerful and physically insightful framework for analyzing tunneling phenomena in quantum systems.
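To fix notation for readers unfamiliar with Nelson's framework: the quantum state is represented by a conservative diffusion whose noise strength is set by Planck's constant, and the tunnelling time becomes a first-passage time of that diffusion. Schematically (the paper's precise drift conventions may differ),
$$
dX_{t} = b(X_{t}, t)\, dt + \sqrt{\frac{\hbar}{m}}\; dW_{t}, \qquad
\bar{\tau} = \mathbb{E}\big[\, \inf\{ t > 0 : X_{t} \in \text{opposite well} \} \,\big],
$$
and in the high-barrier limit the abstract reports $\tau_{\mathrm{QM}} = (\pi/2)\,\bar{\tau}$, with $\tau_{\mathrm{QM}}$ half the oscillation period of the probability of finding the particle in either well.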
Antisymmetrization of composite fermionic states for quantum simulations of nuclear reactions in first-quantization mapping
This paper presents a quantum algorithm for preparing antisymmetric fermionic states needed to simulate nuclear reactions by efficiently combining two separate groups of identical fermions (target and projectile nuclei). The method uses auxiliary qubits and requires only O(N_T N_p) single-particle exchanges to create the correct quantum mechanical wavefunction for the combined system.
Key Contributions
- Deterministic first-quantization algorithm for antisymmetrizing composite fermionic systems with O(N_T N_p) complexity
- Scalable quantum protocol for preparing antisymmetric states in nuclear reaction simulations using Dicke-state ancilla registers
View Full Abstract
I present a first-quantization deterministic algorithm for antisymmetrizing a spatially separated target-projectile system containing $N_T$ and $N_p$ identical fermions, respectively. The method constructs a fully antisymmetric wavefunction from the product of two independently antisymmetrized many-body states, each of which may be a superposition of Slater determinants. The algorithm uses a Dicke-state ancilla register that coherently encodes all one-particle exchange channels between the two subsystems, and, crucially, requires only single-particle swaps to generate the full antisymmetric structure. A total of $O(N_T N_p)$ single-particle exchanges are needed, with up to $N_p$ of them implemented in parallel, if an additional $N_p$ ancillae are used. The correct fermionic phase is incorporated through application of $Z$ gates on $N_T$ ancillae, after which the ancilla register is efficiently uncomputed using a compact sequence of controlled operations. This construction provides a nontrivial and scalable protocol for preparing fully antisymmetric states in reaction and scattering simulations, significantly expanding the range of systems that can be addressed with first-quantized quantum algorithms.
Multi-messenger tracking of coherence loss during bond breaking
This paper uses advanced coincidence measurement techniques to track how chemical bonds break in bromine molecules, revealing that electrons reorganize before the atomic nuclei physically separate. The work demonstrates how molecular bond breaking can be viewed as a quantum interference process between atomic centers.
Key Contributions
- Development of multi-messenger coincidence technique for real-time tracking of bond breaking dynamics
- Discovery that electronic rearrangement precedes nuclear separation during molecular dissociation
- Demonstration of molecular bond breaking as a two-center quantum interferometer with measurable coherence loss
View Full Abstract
Coupled electronic and nuclear motions govern chemical reactions, yet disentangling their interplay during bond rupture remains challenging. Here we follow the light-induced fragmentation of Br$_2$ using a coincidence-based multi-messenger approach. A UV pulse prepares the dissociative state, and strong-field ionization probes the evolving system. Coincident measurement of three-dimensional photoion and photoelectron momenta provides real-time access to both the instantaneous internuclear separation and the accompanying reorganization of the electronic structure, allowing us to determine the timescale of bond breaking. We find that electronic rearrangement concludes well before the nuclei reach the bond-breaking distance, revealing a hierarchy imposed by electron-nuclear coupling. Supported by semiclassical modelling, the results show that the stretched Br$_2$ molecule behaves as a two-centre interferometer in which the loss of coherence between atomic centres encodes the coupled evolution of electrons and nuclei. Our work establishes a general framework for imaging ultrafast electron-nuclear dynamics in molecules.
Analyzing the performance of CV-MDI QKD under continuous-mode scenarios
This paper analyzes how continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) performs when real-world high-speed systems cause spectrum broadening and mode mismatches. The researchers found that mismatches between users' transmission modes and Bell measurement modes significantly reduce transmission distances and secret key rates, with Bob's side showing more severe degradation than Alice's side.
Key Contributions
- Introduction of temporal modes analysis for CV-MDI QKD under continuous-mode scenarios
- Demonstration that mode mismatches on Bob's side cause more severe performance degradation than on Alice's side
- Quantification of how mode mismatches drastically reduce transmission distances and secret key rates in practical systems
View Full Abstract
Continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) can address vulnerabilities on the detection side of a QKD system. The core of this protocol involves continuous-variable Bell measurements performed by an untrusted third party. However, in high-speed systems, spectrum broadening causes Bell measurements to deviate from the ideal single-mode scenario, resulting in mode mismatches, reduced performance, and compromised security. Here, we introduce temporal modes (TMs) to analyze the performance of CV-MDI QKD under continuous-mode scenarios. The mismatch between Bob's transmitting mode and Bell measurement mode has a more significant effect on system performance compared to that on Alice's side. When the Bell receiver is close to Bob and the mismatch is set to just 5%, the transmission distance drastically decreases from 87.96 km to 18.50 km. In comparison, the same mismatch for Alice reduces the distance to 86.83 km. This greater degradation on Bob's side can be attributed to the asymmetry in the data modification step. Furthermore, the mismatch in TM characteristics leads to a significant reduction in the secret key rate by 83% when the transmission distance is set to 15 km, which severely limits the practical usability of the protocol over specific distances. These results indicate that in scenarios involving continuous-mode interference, such as large-scale MDI network setups, careful consideration of each user's TM characteristics is crucial. Rigorous pre-calibration of these modes is essential to ensure the system's reliability and efficiency.
Discrete time crystals enhanced by Stark potentials in Rydberg atom arrays
This paper proposes a new way to create discrete time crystals (exotic quantum phases that break discrete time-translation symmetry) using arrays of Rydberg atoms with a linear Stark potential, avoiding the need for disorder that previous methods required. The authors show numerically that this approach makes the time crystals more robust and longer-lasting.
Key Contributions
- Demonstrated disorder-free discrete time crystal realization in Rydberg atom arrays using Stark potentials
- Showed enhanced robustness and extended lifetime of DTCs without requiring many-body localization or special initial state preparation
View Full Abstract
Discrete time crystals (DTCs) are non-equilibrium phases in periodically driven systems that exhibit spontaneous breaking of discrete time-translation symmetry. The stabilization of most DTC phases is achieved via disorder-induced many-body localization. In this work, we propose an experimental scheme to realize disorder-free DTCs in a periodically driven Rydberg atom array. Our scheme utilizes a linear potential in the atomic detuning to enhance the DTC order, without being tied to (Stark) many-body localization. We numerically demonstrate that the Stark potential enhances the robustness of the DTC against flip imperfections and extends its lifetime, independently of the initial state. Thus, our scheme provides a promising way to explore DTCs in Rydberg atom arrays without disorder averaging or special state preparation.
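To make the ingredients concrete, here is a minimal numerical sketch of a kicked Ising-type chain with a linear Stark field and slightly imperfect π pulses; the model, chain length, and parameter values are illustrative stand-ins, not the paper's Rydberg Hamiltonian.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Kicked Ising chain with a linear "Stark" field (illustrative parameters only).
# The staggered magnetization of a Neel-like product state should flip sign
# approximately every Floquet period, i.e. respond at period 2T.
I2 = np.eye(2)
X  = np.array([[0., 1.], [1., 0.]])
Z  = np.diag([1., -1.])

def embed(op, site, L):
    """Single-site operator `op` acting on `site` of an L-site chain."""
    return reduce(np.kron, [op if i == site else I2 for i in range(L)])

L = 8
ZZ    = sum(embed(Z, i, L) @ embed(Z, i + 1, L) for i in range(L - 1))
stark = sum(i * embed(Z, i, L) for i in range(L))        # linear potential
Xsum  = sum(embed(X, i, L) for i in range(L))

J, gamma, eps, T = 1.0, 0.5, 0.05, 1.0                   # eps = pulse imperfection
U_int  = expm(-1j * T * (J * ZZ + gamma * stark))        # Ising + Stark evolution
U_kick = expm(-1j * (np.pi / 2) * (1 - eps) * Xsum)      # imperfect global pi pulse
U_F    = U_kick @ U_int                                  # one Floquet period

psi = np.zeros(2 ** L, dtype=complex)
psi[int('01' * (L // 2), 2)] = 1.0                       # Neel-like initial state
stag = sum((-1) ** i * embed(Z, i, L) for i in range(L))

for n in range(8):
    m = np.real(psi.conj() @ (stag @ psi)) / L
    print(f"period {n}: staggered magnetization = {m:+.3f}")
    psi = U_F @ psi
```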
Large Isolated Stripes on Short 18-leg $t$-$J$ Cylinders
This paper studies stripe formation in high-temperature superconductors using advanced computational methods on unusually wide 18-leg cylindrical geometries. The researchers identify two distinct regimes of stripe formation and connect microscopic dopant behavior to macroscopic striped phases.
Key Contributions
- Demonstrated stripe formation on wider 18-leg cylindrical geometries using DMRG methods
- Identified two distinct regimes of stripe formation - high-filling and low-filling with different microscopic mechanisms
- Connected single stripe physics to filling fraction spreads observed across different studies
View Full Abstract
Spin-charge stripes belong to the most prominent low-temperature orders besides superconductivity in high-temperature superconductors. This phase is particularly challenging to study numerically due to finite-size effects. By investigating the formation of long, isolated stripes, we offer a perspective complementary to typical finite-doping phase diagrams. We use the density-matrix renormalization group algorithm to extract the ground states of an 18-leg cylindrical strip geometry, making the diameter significantly wider than in previous works. This approach allows us to map out the range of possible stripe filling fractions on the electron versus hole-doped side. We find good agreement with established results, suggesting that the spread of filling fractions observed in the literature is governed by the physics of a single stripe. Taking a microscopic look at stripe formation, we reveal two separate regimes - a high-filling regime captured by a simplified squeezed-space model and a low-filling regime characterized by the structure of individual pairs of dopants. Thereby, we trace back the phenomenology of the striped phase to its microscopic constituents and highlight the different challenges for observing the two regimes in quantum simulation experiments.
An introduction to nonlinear fiber optics and optical analogues to gravitational phenomena
This paper provides a comprehensive introduction to nonlinear fiber optics and demonstrates how optical fibers can serve as analogues to gravitational phenomena like black holes. The authors derive the nonlinear Schrödinger equation for light propagation in fibers and show how intense light creates effective spacetime geometries that can simulate black hole physics, including optical horizons and Hawking radiation.
Key Contributions
- Self-contained derivation of nonlinear Schrödinger equation for fiber optics with minimal assumptions
- Demonstration of optical analogues to black hole physics including optical horizons and Hawking effect
- Framework for analogue gravity experiments using accessible fiber optic setups
View Full Abstract
The optical fiber is a revolutionary technology of the past century. It enables us to manipulate single modes in nonlinear interactions with precision at the quantum level without involved setups. This setting is useful in the field of analogue gravity (AG), where gravitational phenomena are investigated in accessible analogue lab setups. These lecture notes provide an account of this AG framework and applications. Although light in nonlinear dielectrics is discussed in textbooks, the involved modelling often includes many assumptions that are directed at optical communications, some of which are rarely detailed. Here, we provide a self-contained and sufficiently detailed description of the propagation of light in fibers, with a minimal set of assumptions, which is relevant in the context of AG. Starting with the structure of a step-index fiber, we derive linear-optics propagating modes and show that the transverse electric field of the fundamental mode is well approximated as linearly polarized and of a Gaussian profile. We then incorporate a cubic nonlinearity and derive a general wave envelope propagation equation. With further simplifying assumptions, we arrive at the famous nonlinear Schrödinger equation, which governs fundamental effects in nonlinear fibers, such as solitons. As a first application in AG, we show how intense light in the medium creates an effective background spacetime for probe light akin to the propagation of a scalar field in a black hole spacetime. We introduce optical horizons and particle production in this effective spacetime, giving rise to the optical Hawking effect. Furthermore, we discuss two related light emission mechanisms. Finally, we present a second optical analogue model for the oscillations of black holes, the quasinormal modes, which are important in the program of black hole spectroscopy.
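As a hands-on companion to the nonlinear Schrödinger equation discussed above, the following split-step Fourier sketch propagates the fundamental soliton of the dimensionless focusing NLSE; the normalization and step sizes are illustrative and not tied to any particular fiber.

```python
import numpy as np

# Dimensionless focusing NLSE:  i u_z + (1/2) u_tt + |u|^2 u = 0.
# The fundamental soliton u(0,t) = sech(t) should keep its shape.
N, Tmax = 1024, 20.0
t = np.linspace(-Tmax, Tmax, N, endpoint=False)
dt = t[1] - t[0]
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)

u = 1 / np.cosh(t)                    # fundamental soliton at z = 0
dz, steps = 0.01, 1000                # propagate to z = 10

for _ in range(steps):
    # half step of dispersion in Fourier space
    u = np.fft.ifft(np.exp(-1j * omega**2 * dz / 4) * np.fft.fft(u))
    # full step of the Kerr nonlinearity in the time domain (|u|^2 is constant there)
    u = u * np.exp(1j * np.abs(u)**2 * dz)
    # second half step of dispersion
    u = np.fft.ifft(np.exp(-1j * omega**2 * dz / 4) * np.fft.fft(u))

print("peak |u| after z = 10:", round(np.abs(u).max(), 4))   # ~1.0 for a soliton
```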
A random purification channel for arbitrary symmetries with applications to fermions and bosons
This paper develops a generalized quantum channel that creates random purifications of mixed quantum states while preserving arbitrary symmetries, with specific applications to fermionic and bosonic systems. The work provides improved methods for characterizing and testing Gaussian quantum states, particularly achieving optimal scaling for fermionic state tomography.
Key Contributions
- Generalization of random purification channels to arbitrary group symmetries
- First optimally-scaling tomography protocol for fermionic Gaussian states
- Improved property testing methods for Gaussian quantum states
View Full Abstract
The random purification channel maps n copies of any mixed quantum state to n copies of a random purification of the state. We generalize this construction to arbitrary symmetries: for any group G of unitaries, we construct a quantum channel that maps states contained in the algebra generated by G to random purifications obtained by twirling over G. In addition to giving a surprisingly concise proof of the original random purification theorem, our result implies the existence of fermionic and bosonic Gaussian purification channels. As applications, we obtain the first tomography protocol for fermionic Gaussian states that scales optimally with the number of modes and the error, as well as an improved property test for this class of states.
Combinatorial structures in quantum correlation: A new perspective
This paper introduces a new class of quantum states called A_α-graph states that are constructed from graphs by combining their degree and adjacency matrices with a tunable parameter. The authors develop methods to detect entanglement in these states using graph-theoretic properties and experimentally accessible moment-based measurements.
Key Contributions
- Introduction of A_α-graph states as a new class of quantum states with tunable entanglement properties
- Development of graph-theoretic formulation for moments-based entanglement detection using experimentally accessible measurements
View Full Abstract
Graph-theoretic structures play a central role in the description and analysis of quantum systems. In this work, we introduce a new class of quantum states, called $A_α$-graph states, which are constructed from either unweighted or weighted graphs by taking the normalised convex combination of the degree matrix $D$ and the adjacency matrix $A_G$ of a graph $G$. The constructed states are different from the standard graph states arising from the stabiliser formalism. Our approach is also different from the approach used by Braunstein et al. This class of states depends on a tunable mixing parameter $α\in (0,1]$. We first establish the conditions under which the associated operator $ρ_α^{A_G}$ is positive semidefinite and hence represents a valid quantum state. We then derive a positive partial transposition (PPT) condition for $A_α$-graph states in terms of graph parameters. This PPT condition involves only the Frobenius norm of the adjacency matrix of the graph, the degrees of the vertices and the total number of vertices. For simple graphs, we obtain the range of the parameter $α$ for which the $A_α$-graph states represent a class of entangled states. We then develop a graph-theoretic formulation of a moments-based entanglement detection criterion, focusing on the recently proposed $p_3$-PPT criterion, which relies on the second and third moments of the partial transposition. Since the estimation of these moments is experimentally accessible via randomised measurements, swap operations, and machine-learning-based protocols, our approach provides a physically relevant framework for detecting entanglement in structured quantum states derived from graphs. This work bridges graph theory and moments-based entanglement detection, offering a new perspective on the role of combinatorial structures in quantum correlations.
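For intuition, the sketch below builds a candidate $A_α$-graph state as the trace-normalized matrix $αD + (1-α)A_G$ for the 4-vertex path graph, checks positivity, and applies the partial transpose across a $2\otimes2$ split. The normalization and the example graph are assumptions for illustration; the paper's exact construction and PPT condition may differ in detail.

```python
import numpy as np

# Illustrative "A_alpha graph state": rho_alpha proportional to alpha*D + (1-alpha)*A_G,
# here for the path graph P4, viewing C^4 as C^2 (x) C^2 to test PPT.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # adjacency matrix of P4
D = np.diag(A.sum(axis=1))                    # degree matrix

def rho_alpha(alpha):
    M = alpha * D + (1 - alpha) * A
    return M / np.trace(M)

def partial_transpose(rho):
    """Partial transpose on the second qubit of a 4x4 matrix."""
    r = rho.reshape(2, 2, 2, 2)               # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

for alpha in (0.3, 0.6, 0.9):
    r = rho_alpha(alpha)
    ev, evT = np.linalg.eigvalsh(r), np.linalg.eigvalsh(partial_transpose(r))
    print(f"alpha={alpha}: positive semidefinite={ev.min() >= -1e-12}, "
          f"min PT eigenvalue={evT.min():+.3f}")
```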
Prospects for quantum advantage in machine learning from the representability of functions
This paper develops a theoretical framework to analyze when quantum machine learning models can actually provide advantages over classical approaches by examining the mathematical structure of functions that quantum circuits can learn. The authors identify key circuit properties that determine whether quantum models can be efficiently simulated classically or remain robustly quantum.
Key Contributions
- Introduces framework connecting parametrized quantum circuit structure to learnable function properties
- Identifies circuit depth and non-Clifford gate count as key factors determining classical simulability
- Provides conceptual map distinguishing fully simulatable, classically tractable, and robustly quantum models
View Full Abstract
Demonstrating quantum advantage in machine learning tasks requires navigating a complex landscape of proposed models and algorithms. To bring clarity to this search, we introduce a framework that connects the structure of parametrized quantum circuits to the mathematical nature of the functions they can actually learn. Within this framework, we show how fundamental properties, like circuit depth and non-Clifford gate count, directly determine whether a model's output leads to efficient classical simulation or surrogation. We argue that this analysis uncovers common pathways to dequantization that underlie many existing simulation methods. More importantly, it reveals critical distinctions between models that are fully simulatable, those whose function space is classically tractable, and those that remain robustly quantum. This perspective provides a conceptual map of this landscape, clarifying how different models relate to classical simulability and pointing to where opportunities for quantum advantage may lie.
All Entangled States are Nonlocal and Self-Testable in the Broadcast Scenario
This paper proves that all entangled quantum states exhibit nonlocal behavior when extended to a broadcast scenario, closing a known gap where some entangled states appeared local under standard Bell tests. The authors also demonstrate that all multipartite quantum states can be self-tested in this broadcast framework.
Key Contributions
- Proved that all entangled states exhibit broadcast nonlocality in quantum theory
- Showed that all multipartite states can be broadcast-self-tested
View Full Abstract
Entanglement and Bell nonlocality are known to be inequivalent: there exist entangled states that admit a local hidden-variable model for all local measurements. Here we show that this gap disappears in a minimal broadcast extension of the Bell scenario. Assuming only the validity of quantum theory, we prove that for every entangled state $ρ_{AB}$ there exist local broadcasting maps and local measurements such that the resulting four-partite correlations cannot be reproduced by any broadcast network whose source is separable across the $A|B$ cut. Thus, all entangled states are broadcast nonlocal in quantum theory. In addition, we show that all (also mixed) multipartite states can be broadcast-self-tested, according to a natural operational definition.
Characterization of Generalized Coherent States through Intensity-Field Correlations
This paper develops a method to detect quantum properties in generalized coherent states of light by measuring intensity-field correlations. The authors show that deviations from unity in these normalized correlations can reveal nonclassical behavior that standard intensity measurements cannot detect.
Key Contributions
- Demonstrated that intensity-field correlation function serves as a witness for nonclassicality in generalized coherent states
- Derived analytical results for Kerr-generated states and extended analysis to statistical mixtures
- Proposed a practical, real-time method for detecting quantum signatures in non-Gaussian states
View Full Abstract
Non-Gaussian quantum states of light are essential resources for quantum information processing and precision metrology. Among them, generalized coherent states (GCS), which naturally arise from the evolution of a coherent state with a nonlinear medium, exhibit useful quantum features such as Wigner negativity and metrological advantages [Phys. Rev. Res. 5, 013165 (2023)]. Because these states remain coherent to all orders, their nonclassical character cannot be revealed through standard intensity-intensity correlation measurements. Here, we demonstrate that the intensity-field correlation function alone provides a simple and experimentally accessible witness of nonclassicality. For GCSs, any deviation of this normalized correlation from unity signals nonclassical behavior. We derive analytical results for Kerr-generated states and extend the analysis to statistical mixtures of GCSs. The proposed approach enables real-time, low-complexity detection of quantum signatures in non-Gaussian states, offering a practical tool for experiments across a broad range of nonlinear regimes.
Implementation of the Quantum Fourier Transform on a molecular qudit with full refocusing and state tomography
This paper demonstrates the implementation of the Quantum Fourier Transform algorithm on a molecular spin qudit system based on ytterbium complexes, using advanced pulse sequences to maintain quantum coherence and performing complete state verification through tomography.
Key Contributions
- First experimental implementation of Quantum Fourier Transform on molecular spin qudits
- Development of full-refocusing protocol to mitigate decoherence in multi-level quantum systems
- Demonstration of high-fidelity quantum algorithm execution with complete state tomography validation
View Full Abstract
Molecular spin qudits based on lanthanide complexes offer a promising platform for quantum technologies, combining chemical tunability with multi-level encoding. However, experimental demonstrations of their envisaged capabilities remain scarce, posing the difficulty of achieving precise control over coherences between qudit states in long pulse sequences. Here, we implement in a 173Yb(trensal) qudit the Quantum Fourier Transform (QFT), a core component of numerous quantum algorithms, storing quantum information in the phases of coherences. QFT provides an ideal benchmark for coherence manipulation and an unprecedented challenge for molecular spin qudits. We address this challenge by embedding a full-refocusing protocol for spin qudits in our algorithm, mitigating inhomogeneous broadening and enabling a high-fidelity recovery of the state. Complete state tomography demonstrates the performance of the algorithm, while simulations provide insight into the physical mechanisms behind inhomogeneous broadening. This work shows the feasibility of quantum logic on molecular spin qudits and highlights their potential.
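For reference, the single-qudit QFT is simply the $d$-dimensional discrete Fourier transform. The sketch below constructs it for an illustrative dimension (the qudit dimension used in the experiment is not asserted here) and shows how a basis state is mapped onto a uniform superposition whose information lives in the phases of the coherences, the quantity the refocusing protocol is designed to protect.

```python
import numpy as np

def qft_matrix(d):
    """QFT / discrete Fourier transform unitary on a single d-level qudit."""
    omega = np.exp(2j * np.pi / d)
    j, k = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return omega ** (j * k) / np.sqrt(d)

d = 6                                   # illustrative dimension only
F = qft_matrix(d)
print("unitary:", np.allclose(F @ F.conj().T, np.eye(d)))

psi = np.zeros(d); psi[2] = 1.0         # basis state |2>
print(np.round(F @ psi, 3))             # uniform amplitudes, phase-encoded index
```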
Quadratic power enhancement in extended Dicke quantum battery
This paper demonstrates a quantum battery design using two-level systems coupled to two cavity modes that achieves quadratic (N²) power enhancement, outperforming traditional Dicke batteries through improved quantum correlations and evolution speed while maintaining energy efficiency and experimental feasibility.
Key Contributions
- Demonstration of N² power scaling in extended Dicke quantum battery architecture
- Show that power enhancement comes from both quantum correlations and speed scaling
- Prove energy efficiency is maintained while achieving superior performance over standard Dicke batteries
View Full Abstract
We demonstrate a quadratic enhancement of power in a battery consisting of $N$ two-level systems or spins interacting with two photonic cavity modes, where one of the modes is in the dispersive regime. In contrast to Dicke batteries, the power enhancement arises from an $N^2$ scaling of both quantum correlations and speed of evolution, thus highlighting genuine quantum advantage. Moreover, this hybrid setup is experimentally realizable and ensures that power enhancement is not achieved at significant cost to energy efficiency, while allowing for greater tunability and stable operation in the presence of noise.
First-principles simulation of spin diffusion in static solids using dynamic mean-field theory
This paper develops and validates a computational method called spin dynamic mean-field theory (spinDMFT) to efficiently simulate how nuclear spins interact and diffuse in solid materials, particularly for nuclear magnetic resonance applications. The method overcomes computational limitations of traditional approaches by using mean-field approximations while maintaining high accuracy compared to experimental data.
Key Contributions
- Development of spinDMFT as an efficient computational method for simulating spin diffusion in static solids
- Demonstration that spinDMFT can simulate zero-quantum line shapes that previously eluded efficient quantitative simulation
View Full Abstract
The dynamics of disordered nuclear spin ensembles are the subject of nuclear magnetic resonance studies. Due to the through-space long-range dipolar interaction, many spins are generically involved in the time evolution, so that exact brute-force calculations are impossible. The recently established spin dynamic mean-field theory (spinDMFT) represents an efficient and unbiased alternative to overcome this challenge. The approach only requires the dipolar couplings as input and the only prerequisite for its applicability is that each spin interacts with a large number of other spins. In this article, we show that spinDMFT can be used to describe spectral spin diffusion in static samples and to simulate zero-quantum line shapes which, to the best of our knowledge, have so far eluded efficient quantitative simulation. We perform benchmarks for two test substances that establish an excellent match with published experimental data. As spinDMFT combines low computational effort with high accuracy, we strongly suggest using it for large-scale simulations of spin diffusion, which are important in various areas of magnetic resonance.
Photonics-Enhanced Graph Convolutional Networks
This paper combines photonic hardware with graph neural networks by using light propagation patterns in synthetic frequency lattices to create positional embeddings that improve the performance of graph convolutional networks on molecular datasets. The approach leverages optical processing to generate features that provide better structural information than traditional methods.
Key Contributions
- Novel photonics-based positional embeddings for graph neural networks derived from light propagation on synthetic frequency lattices
- Demonstration of 6.3% improvement in regression tasks and 2.3% improvement in classification tasks over baseline methods on molecular datasets
View Full Abstract
Photonics can offer a hardware-native route for machine learning (ML). However, efficient deployment of photonics-enhanced ML requires hybrid workflows that integrate optical processing with conventional CPU/GPU based neural network architectures. Here, we propose such a workflow that combines photonic positional embeddings (PEs) with advanced graph ML models. We introduce a photonics-based method that augments graph convolutional networks (GCNs) with PEs derived from light propagation on synthetic frequency lattices whose couplings match the input graph. We simulate propagation and readout to obtain internode intensity correlation matrices, which are used as PEs in GCNs to provide global structural information. Evaluated on Long Range Graph Benchmark molecular datasets, the method outperforms baseline GCNs with Laplacian based PEs, achieving $6.3\%$ lower mean absolute error for regression and $2.3\%$ higher average precision for classification tasks using a two-layer GCN as a baseline. When implemented in high repetition rate photonic hardware, correlation measurements can enable fast feature generation by bypassing digital simulation of PEs. Our results show that photonic PEs improve GCN performance and support optical acceleration of graph ML.
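The idea can be mimicked with a purely classical toy surrogate: propagate coupled-mode amplitudes on a lattice whose couplings equal the graph adjacency matrix, record node intensities over time, and use the inter-node intensity correlation matrix as positional embeddings. The sketch below is that surrogate under stated assumptions; it is not the paper's photonic simulation, hardware, or GCN benchmark.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # small example graph
n = A.shape[0]
times = np.linspace(0.1, 5.0, 40)

# Launch an excitation at node j and record |amplitude|^2 at every node and time.
intensity = np.zeros((n, len(times) * n))
for j in range(n):
    a0 = np.zeros(n); a0[j] = 1.0
    traces = [np.abs(expm(-1j * A * t) @ a0) ** 2 for t in times]
    intensity[j] = np.concatenate(traces)

# Inter-node intensity correlation matrix; its rows can serve as node PEs.
pe = np.corrcoef(intensity)
print(np.round(pe, 2))
```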
Anomalous Dynamical Scaling at Topological Quantum Criticality
This paper investigates how quantum systems behave when driven through topological quantum critical points, discovering that topological edge modes create unusual scaling behavior that differs from standard theoretical predictions. The researchers found that while bulk properties follow expected patterns, boundary dynamics show anomalous scaling unique to topological systems.
Key Contributions
- Discovery of anomalous dynamical scaling at topological quantum critical points that goes beyond Kibble-Zurek mechanism
- Demonstration that topological edge modes create unique boundary dynamics distinct from bulk behavior during driven quantum phase transitions
View Full Abstract
We study the nonequilibrium driven dynamics at topologically nontrivial quantum critical points (QCPs), and find that topological edge modes at criticality give rise to anomalous universal dynamical scaling behavior. By analyzing the driven dynamics of bulk and boundary order parameters at topologically distinct Ising QCPs, we demonstrate that, while the bulk dynamics remain indistinguishable and follow standard Kibble-Zurek (KZ) scaling, the anomalous boundary dynamics is unique to topological criticality, and its explanation goes beyond the traditional KZ mechanism. To elucidate the unified origin of this anomaly, we further study the dynamics of defect production at topologically distinct QCPs in free-fermion models and demonstrate similar anomalous universal scaling exclusive to topological criticality. These findings establish the existence of anomalous dynamical scaling arising from the interplay between topology and driven dynamics, challenging standard paradigms of quantum critical dynamics.
Decoherence dynamics across sub-Planckian to arbitrary scales using kitten states
This paper studies how quantum compass states and their variants lose their quantum properties when exposed to environmental noise, finding that states with finer sub-Planck scale features are more fragile to decoherence. The research reveals a fundamental tradeoff between precision in phase space and resistance to environmental interference.
Key Contributions
- Demonstrated fundamental tradeoff between sub-Planck precision and decoherence resistance in compass states
- Provided theoretical framework for analyzing decoherence dynamics across different phase-space scales using established techniques
View Full Abstract
Environmental decoherence occurs when a quantum system interacts with its surroundings, progressively reducing quantum interference and coherence, complicating the preservation of critical quantum properties over time, especially during experimental implementation. The effect of decoherence varies depending on the phase-space features of quantum states, which are theoretically characterized by the Wigner phase space and appear at different scales. We explore the compass state and its photon-added and photon-subtracted variants, each of which exhibits phase-space features with dimensions beyond the Planck scale, making them suitable for quantum sensing applications. We investigate the interaction of these states with a heat reservoir by employing a range of well-established theoretical techniques, revealing a clear tradeoff between the degree of fineness in the smallest features, such as the sub-Planck structure, and the extent of decoherence. Specifically, increasing the parameters enhances sub-Planck precision in phase space, concomitantly amplifying the fragility of these compass states to undesired decoherence. Our general illustration, validated through these compass states, also applies to any pure quantum state interacting with the considered heat reservoir, exhibiting enhanced sustainability of features at larger phase-space extensions.
Lower Bounding the Secret Key Capacity of Bosonic Gaussian Channels via Optimal Gaussian Measurements
This paper develops improved methods for secure quantum communication over bosonic channels by finding optimal Gaussian measurement protocols that maximize the rate of secret key distribution. The work provides better lower bounds on secret communication capacity for certain types of quantum channels, particularly improving results for added noise channels.
Key Contributions
- Derived maximum achievable rate for private communication using optimal single-mode Gaussian measurements on phase-insensitive Gaussian channels
- Provided simplified formulas for evaluating performance of thermal-loss and thermal amplification channel protocols
- Established improved lower bounds on secret key capacity for added noise channels
View Full Abstract
We find the maximum rate achievable in the private communication over a bosonic quantum channel with a fully Gaussian protocol based on optimal single-mode Gaussian measurements. This rate establishes a lower bound on the secret rate capacity of the channel. We focus on the class of phase-insensitive Gaussian channels. For the thermal-loss and thermal amplification channels, our results demonstrate the optimality, within the constraints of our analysis, of previously proposed protocols, while also providing a significantly simplified formula for their performance evaluation. For the added noise channel, our rate provides a better lower bound than any previously known.
QuantGraph: A Receding-Horizon Quantum Graph Solver
This paper presents QuantGraph, a quantum-enhanced framework for solving graph optimization problems by using Grover's algorithm to search over trajectory spaces in two stages (local and global), combined with classical model-predictive control for improved scalability and robustness.
Key Contributions
- Two-stage quantum graph solver using Grover-adaptive-search with local pruning and global refinement
- Integration of quantum search with receding-horizon model-predictive control for improved scalability
- Demonstration of 2x increase in control-discretization precision while maintaining quadratic speedup
View Full Abstract
Dynamic programming is a cornerstone of graph-based optimization. While effective, it scales unfavorably with problem size. In this work, we present QuantGraph, a two-stage quantum-enhanced framework that casts local and global graph-optimization problems as quantum searches over discrete trajectory spaces. The solver is designed to operate efficiently by first finding a sequence of locally optimal transitions in the graph (local stage), without considering full trajectories. The accumulated cost of these transitions acts as a threshold that prunes the search space (up to 60% reduction for certain examples). The subsequent global stage, based on this threshold, refines the solution. Both stages utilize variants of the Grover-adaptive-search algorithm. To achieve scalability and robustness, we draw on principles from control theory and embed QuantGraph's global stage within a receding-horizon model-predictive-control scheme. This classical layer stabilizes and guides the quantum search, improving precision and reducing computational burden. In practice, the resulting closed-loop system exhibits robust behavior and lower overall complexity. Notably, for a fixed query budget, QuantGraph attains a 2x increase in control-discretization precision while still benefiting from Grover-search's inherent quadratic speedup compared to classical methods.
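The threshold-tightening logic of Grover adaptive search can be illustrated with a purely classical stand-in: sample any candidate whose cost lies below the current threshold (the role the Grover oracle plays), tighten the threshold, and repeat until nothing cheaper remains. The function name, cost table, and parameters below are hypothetical; the quantum speedup and the model-predictive-control layer are of course not captured.

```python
import random

def grover_adaptive_search_mock(costs, max_rounds=50, rng=random.Random(0)):
    """Classical mock of Grover adaptive search for the minimum of `costs`."""
    threshold, best = float("inf"), None
    for _ in range(max_rounds):
        marked = [i for i, c in enumerate(costs) if c < threshold]
        if not marked:                      # no item below threshold: optimum reached
            break
        i = rng.choice(marked)              # a Grover search would return a marked index
        threshold, best = costs[i], i       # tighten the threshold and repeat
    return best, threshold

costs = [7.2, 3.1, 5.5, 2.8, 6.0, 4.4]      # costs of candidate trajectories (made up)
print(grover_adaptive_search_mock(costs))   # -> (3, 2.8)
```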
Energy Inference of Black-Box Quantum Computers Using Quantum Speed Limit
This paper presents a method to estimate the energy scales of quantum gate operations in cloud-based quantum computers by using quantum speed limits and measuring job execution times, without needing access to hardware details. The researchers applied this technique to IBM's quantum processors and found that current gate operations appear to operate near fundamental quantum speed limits.
Key Contributions
- Novel method to infer energy scales of gate Hamiltonians in black-box quantum processors using quantum speed limits
- Gate-time amplification technique to extract nanosecond-scale gate times from second-scale execution measurements
- Experimental validation on IBM superconducting quantum processors showing gate operations approach quantum speed limits
View Full Abstract
Cloud-based quantum computers do not provide users with access to hardware-level information such as the underlying Hamiltonians, which obstructs the characterization of their physical properties. We propose a method to infer the energy scales of gate Hamiltonians in such black-box quantum processors using only user-accessible data, by exploiting quantum speed limits. Specifically, we reinterpret the Margolus-Levitin and Mandelstam-Tamm bounds as estimators of the energy expectation value and variance, respectively, and relate them to the shortest time for the processor to orthogonalize a quantum state. This shortest gate time, expected to lie on the nanosecond scale, is inferred from job execution times measured in seconds by employing gate-time amplification. We apply the method to IBM's superconducting quantum processor and estimate the energy scales associated with single-, two-, and three-qubit gates. The order of estimated energy is consistent with typical drive energies in superconducting qubit systems, suggesting that current gate operations approach the quantum speed limit. Our results demonstrate that fundamental energetic properties of black-box quantum computers can be quantitatively accessed through operational time measurements, reflecting the conjugate relationship between time and energy imposed by the uncertainty principle.
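The arithmetic behind the inference is compact: reading the Margolus-Levitin bound $τ \ge π\hbar/(2\langle E\rangle)$ in reverse turns an orthogonalization time into a lower bound on the mean energy above the ground state. The snippet below plugs in an assumed nanosecond-scale gate time purely for illustration; it is not the paper's measured value.

```python
import numpy as np

hbar = 1.054571817e-34      # J*s
h    = 6.62607015e-34       # J*s

tau = 35e-9                 # assumed effective gate (orthogonalization) time, seconds
E_min = np.pi * hbar / (2 * tau)   # Margolus-Levitin: <E> >= pi*hbar/(2*tau)
print(f"<E> >= {E_min:.2e} J  =  {E_min / h / 1e6:.1f} MHz in units of h")
```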
Benchmarking Atomic Ionization Driven by Strong Quantum Light
This paper develops new theoretical methods to accurately model how atoms interact with intense quantum light pulses by solving the full quantum mechanical equations. The researchers found that current widely-used approximation methods fail to capture important quantum entanglement effects between electrons and photons, and propose a better approach using Feynman path integrals.
Key Contributions
- Established rigorous benchmark by solving fully quantized time-dependent Schrödinger equation for atom-bright squeezed light interactions
- Developed general theoretical framework based on Feynman path integrals that properly incorporates electron-photon quantum entanglement
View Full Abstract
The recently available high-intensity quantum light pulses provide novel tools for controlling light-matter interactions. However, the rigor of the theoretical frameworks currently used to describe the interaction of strong quantum light with atoms and molecules remains unverified. Here, we establish a rigorous benchmark by solving the fully quantized time-dependent Schrödinger equation for an atom exposed to bright squeezed vacuum light. Our \textit{ab initio} simulations reveal a critical limitation of the widely used $Q$-representation: although it accurately reproduces the total photoelectron spectrum after tracing over photon states, it completely fails to capture the electron-photon joint energy spectrum. To overcome this limitation, we develop a general theoretical framework based on the Feynman path integral that properly incorporates the electron-photon quantum entanglement. Our results provide both quantitative benchmarks and fundamental theoretical insights for the emerging field of strong-field quantum optics.
The inverse parametric problem
This paper develops methods to calculate pump waveform frequencies that drive parametric oscillators to achieve desired frequency mixing between modes. The researchers demonstrate control over complex scattering processes and dynamic signal routing between multiple modes for manipulating continuous variable quantum information.
Key Contributions
- Development of inverse parametric problem solution for calculating pump frequencies to achieve desired mode mixing
- Demonstration of dynamic control methods for routing quantum signals between modes
- Experimental validation of complex multi-mode scattering processes including non-reciprocal circulation
View Full Abstract
We present a method to calculate the frequency components of a pump waveform driving a parametric oscillator, which realizes a desired frequency mixing or scattering between modes. The method is validated by numerical analysis and we study its sensitivity to added Gaussian noise. A series of experiments apply the method and demonstrate its ability to realize complex scattering processes involving many modes at microwave frequencies, including non-reciprocal mode circulation. We also present an approximate method to dynamically control mode scattering, capable of rapidly routing signals between modes in a prescribed manner. These methods are useful tools for encoding and manipulating continuous variable quantum information with multi-modal Gaussian states.
Characterizing Fisher information of quantum measurement
This paper establishes a mathematical connection between informationally complete quantum measurements (used for quantum state reconstruction) and quantum parameter estimation by analyzing Fisher information bounds. The work reveals fundamental tradeoffs between the completeness of quantum measurements and their effectiveness for estimating specific parameters.
Key Contributions
- Established general link between informationally complete measurements and quantum parameter estimation using operator frame theory
- Derived bounds on the ratio between classical and quantum Fisher information in terms of frame operator spectral decomposition
- Connected Fisher information bounds to optimal and least optimal directions for parameter encoding
View Full Abstract
Informationally complete measurements form the foundation of universal quantum state reconstruction, while quantum parameter estimation is based on the local structure of the manifold of quantum states. Here we establish a general link between these two aspects, in the context of a single informationally complete measurement, by employing a suitably adapted operator frame theory. In particular, we bound the ratio between the classical and quantum Fisher information in terms of the spectral decomposition of the associated frame operator, and connect these bounds to the optimal and least optimal directions for parameter encoding. The geometric and operational characterization of information extraction thus obtained reveals the fundamental tradeoff imposed by informational completeness on local quantum parameter estimation.
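A minimal qubit illustration of the classical-versus-quantum Fisher information comparison: a symmetric, informationally complete four-outcome POVM measured on a state picking up a phase about $z$. The POVM, state, and numbers are a toy example, not the paper's operator-frame bounds.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)

# Tetrahedral Bloch vectors -> informationally complete 4-outcome POVM.
vecs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [(I2 + v[0] * sx + v[1] * sy + v[2] * sz) / 4 for v in vecs]

def rho(theta):
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    U = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])  # exp(-i theta sz/2)
    psi = U @ plus
    return np.outer(psi, psi.conj())

theta, eps = 0.3, 1e-6
cfi = 0.0
for E in povm:
    p  = np.real(np.trace(E @ rho(theta)))
    dp = np.real(np.trace(E @ (rho(theta + eps) - rho(theta - eps)))) / (2 * eps)
    cfi += dp ** 2 / p

qfi = 1.0   # pure state |+>, generator sz/2: QFI = 4*Var(sz/2) = 1
print(f"CFI = {cfi:.3f},  QFI = {qfi:.3f},  ratio = {cfi / qfi:.3f}")
```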
A short history of Quantum Illumination
This paper provides a historical overview of quantum illumination, a quantum sensing protocol that uses entangled photons to detect objects in noisy environments. The authors review the development of this quantum technology, emphasizing its practical applications and unusual robustness against noise and losses compared to other quantum protocols.
Key Contributions
- Historical survey of quantum illumination protocol development
- Overview of noise-robust quantum sensing technology
View Full Abstract
Quantum illumination represents one of the most interesting examples of quantum technologies. On the one hand, it can find significant applications; on the other hand, it is one of the few quantum protocols robust against noise and losses. Here we present a short summary of the history of this quantum protocol.
Hamiltonian and double-bracket flow formulations of quantum measurements
This paper develops a mathematical framework that reinterprets quantum measurement as either stochastic Hamiltonian dynamics or gradient flows that minimize measurement uncertainty. The authors show how this unified approach can be used to design feedback processes for quantum state preparation and ground state preparation.
Key Contributions
- Unified framework connecting quantum measurement dynamics with Hamiltonian and gradient flow formulations
- Development of feedback processes for deterministic state preparation and ground state preparation using double-bracket flows
View Full Abstract
We introduce a framework that unifies quantum measurement dynamics, Hamiltonian dynamics, and double-bracket gradient flows. We do so by providing explicit expressions for stochastic Hamiltonians that produce state dynamics identical to those that happen during continuous quantum measurements. When such dynamical processes are integrated over sufficiently long time intervals, they yield the same results and statistics as during wavefunction collapse. That is, wavefunction collapse can be interpreted as coarse-grained (stochastic) Hamiltonian dynamics. Alternatively, wavefunction collapse can be interpreted as double-bracket gradient flows determined by derivatives of (stochastic) potentials defined in terms of observables with direct physical interpretations. The gradient flows minimize the variance of the monitored observable. Our derivations hold for general monitoring described by non-Hermitian jump processes. We show that such reinterpretations of measurement dynamics facilitate the design of feedback processes. In particular, we introduce feedback processes that yield deterministic double-bracket flow equations, which prepare ground states of a target Hamiltonian, and feedback processes for state preparation. We conclude by re-interpreting feedback processes as gradient flows with tilted fixed points.
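For context, the classic Brockett double-bracket flow $\dot H = [H,[H,N]]$ already shows the mechanism such flows exploit: with $N$ diagonal and nondegenerate it monotonically increases $\mathrm{Tr}(HN)$ and drives $H$ toward a diagonal matrix. The forward-Euler sketch below is that textbook flow, not the paper's measurement-derived or feedback flows.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
M = rng.normal(size=(d, d))
H = (M + M.T) / 2                        # random Hermitian "Hamiltonian"
N = np.diag(np.arange(d, dtype=float))   # fixed diagonal reference matrix

def comm(a, b):
    return a @ b - b @ a

dt, steps = 0.005, 8000                  # crude forward-Euler integration
for k in range(steps + 1):
    if k % 2000 == 0:
        off = np.linalg.norm(H - np.diag(np.diag(H)))
        print(f"step {k:5d}: off-diagonal norm = {off:.2e}")
    H = H + dt * comm(H, comm(H, N))     # dH/dt = [H, [H, N]]
```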
Quditto: Emulating and Orchestrating Distributed QKD Network Deployments
This paper presents Quditto, an open-access software platform that emulates quantum key distribution (QKD) networks, allowing researchers to test and experiment with quantum cryptographic protocols without requiring expensive physical hardware. The platform provides realistic modeling of quantum channels and standardized interfaces that mirror real QKD systems.
Key Contributions
- Development of Quditto, an automated emulation platform for QKD network deployments
- Creation of standardized APIs that enable interaction with emulated QKD networks identical to real hardware
- Validation through proof-of-concept scenarios including eavesdropper attacks and heterogeneous channel modeling
View Full Abstract
Quantum Key Distribution (QKD) offers information-theoretic security by leveraging quantum mechanics, yet the cost and complexity of dedicated hardware and fiber infrastructure have so far limited large-scale deployment and experimentation. In this paper, we introduce Quditto, an automated open-access emulation platform that combines high-fidelity quantum-channel modeling with a standardized key-delivery API, enabling users to interact with the emulated network exactly as they would with real QKD hardware. Quditto's modular design supports pluggable protocol implementations, complex key management schemes and detailed channel models, including variable attenuation and decoherence. We validate Quditto by deploying networks of various sizes and demonstrate its flexibility through two proof-of-concept scenarios featuring eavesdropper attacks and heterogeneous channel conditions.
Spontaneous wave function collapse from non-local gravitational self-energy
This paper proposes that gravity causes quantum wave functions to spontaneously collapse by incorporating non-local gravitational self-energy into quantum mechanics. The authors show that the fundamental principles of quantum superposition and general relativity create an inherent instability that leads to wave function collapse, with a collapse time inversely proportional to the system's mass.
Key Contributions
- Incorporation of non-local gravitational self-energy into the Schrödinger-Newton equation
- Demonstration that quantum superposition becomes unstable when gravity is included
- Derivation of gravitationally induced phase shifts between inertial and freely falling reference frames
- Prediction of mass-dependent spontaneous wave function collapse times
View Full Abstract
We incorporate non-local gravitational self-energy, motivated by string-inspired T-duality, into the Schrödinger-Newton equation. In this framework spacetime has an intrinsic non-locality, rendering the standard linear superposition principle only an approximation valid in the absence of gravitational effects. We then invert the logic by assuming the validity of linear superposition and demonstrate that such superpositions inevitably become unstable once gravity is included. The resulting wave-function collapse arises from a fundamental tension between the equivalence principle and the quantum superposition principle in a semiclassical spacetime background. We further show that wave functions computed in inertial and freely falling frames differ by a gravitationally induced phase shift containing linear and cubic time contributions along with a constant global term. These corrections produce a global phase change and lead to a spontaneous, model-independent collapse time inversely proportional to the mass of the system.
Consecutive-gap ratio distribution for crossover ensembles
This paper studies the statistical properties of energy level spacing in quantum many-body systems, specifically developing a mathematical framework to describe the transition between chaotic and localized behavior in disordered spin chains. The authors propose new statistical measures and stochastic models to characterize many-body localization transitions.
Key Contributions
- Development of a two-parameter surmise expression for consecutive-gap ratio distribution describing GOE to Poisson crossover
- Introduction of flow pattern analysis and stochastic differential equation framework for characterizing many-body localization transitions
View Full Abstract
The study of spectrum statistics, such as the consecutive-gap ratio distribution, has revealed many interesting properties of many-body complex systems. Here we propose a two-parameter surmise expression for such distribution to describe the crossover between the Gaussian orthogonal ensemble (GOE) and Poisson statistics. This crossover is observed in the isotropic Heisenberg spin-$1/2$ chain with disordered local field, exhibiting the Many-Body Localization (MBL) transition. Inspired by the analysis of stability in dynamical systems, this crossover is presented as a flow pattern in the parameter space, with the Poisson statistics being the fixed point of the system, which represents the MBL phase. We also analyze an isotropic Heisenberg spin-$1/2$ chain with disordered local exchange coupling and a zero magnetic field. In this case, the system never achieves the MBL phase because of the spin rotation symmetry. This case is more sensitive to finite-size effects than the previous one, and thus the flow pattern resembles a two-dimensional random walk close to its fixed point. We propose a system of linearized stochastic differential equations to estimate this fixed point. We study the continuous-state Markov process that governs the probability of finding the system close to this fixed point as the disorder strength increases. In addition, we discuss the conditions under which the stationary probability distribution is given by a bivariate normal distribution.
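The consecutive-gap ratio itself is easy to reproduce: with $r_n = \min(s_n, s_{n+1})/\max(s_n, s_{n+1})$, the mean is about 0.39 for Poisson spectra and about 0.53 for GOE matrices. The sketch below checks both limits of the crossover numerically; it does not implement the paper's two-parameter surmise.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_gap_ratio(levels):
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Poisson limit: independent uniformly distributed levels.
poisson = rng.uniform(0, 1, size=(200, 2000))
print("Poisson <r> ~", round(np.mean([mean_gap_ratio(x) for x in poisson]), 3))  # ~0.386

# GOE limit: eigenvalues of random real symmetric matrices.
vals = []
for _ in range(200):
    M = rng.normal(size=(200, 200))
    vals.append(mean_gap_ratio(np.linalg.eigvalsh((M + M.T) / 2)))
print("GOE <r>     ~", round(np.mean(vals), 3))                                  # ~0.53
```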
Amplitude-amplified coherence detection and estimation
This paper develops improved protocols for detecting and measuring quantum coherence in unknown quantum states. The authors use amplitude amplification techniques to achieve quadratically better sample complexity compared to traditional coherence witnesses, requiring fewer experimental measurements to detect coherence with high confidence.
Key Contributions
- Proof that sample complexity for coherence detection scales as Θ(c(|ψ⟩)^(-1)) for any experimental procedure
- Amplitude-amplified protocol achieving quadratic improvement to O(c(|ψ⟩)^(-1/2)) sample complexity
- Phase estimation protocol for coherence quantification with O(1/ε) scaling versus O(1/ε²) for Monte Carlo methods
- New operational interpretation of geometric measure of coherence through average sample requirements
View Full Abstract
The detection and characterization of quantum coherence is of fundamental importance both in the foundations of quantum theory as well as for the rapidly developing field of quantum technologies, where coherence has been linked to quantum advantage. Typical approaches for detecting coherence employ coherence witnesses - observable quantities whose expectation value can be used to certify the presence of coherence. By design, coherence witnesses are only able to detect coherence for some, but not all, possible states of a quantum system. In this work we construct protocols capable of detecting the presence of coherence in an unknown pure quantum state $|ψ\rangle$. Having access to $m$ copies of an unknown pure state $|ψ\rangle$ we show that the sample complexity of any experimental procedure for detecting coherence with constant probability of success $\ge 2/3$ is $Θ(c(|ψ\rangle)^{-1})$, where $c(|ψ\rangle)$ is the geometric measure of coherence of $|ψ\rangle$. However, assuming access to the unitary $U_ψ$ which prepares the unknown state $|ψ\rangle$, and its inverse $U_ψ^\dagger$, we devise a coherence detecting protocol that employs amplitude amplification à la Grover, and uses a quadratically smaller number $O(c(|ψ\rangle)^{-1/2})$ of samples. Furthermore, by augmenting amplitude amplification with phase estimation we obtain an experimental estimation of upper bounds on the geometric measure of coherence within additive error $\varepsilon$ with a sample complexity that scales as $O(1/\varepsilon)$ as compared to the $O(1/\varepsilon^2)$ sample complexity of Monte Carlo estimation methods. The average number of samples needed in our amplitude estimation protocol provides a new operational interpretation for the geometric measure of coherence. Finally, we also derive bounds on the amount of noise our protocols are able to tolerate.
Integrated on-chip quantum light sources on a van der Waals platform
This paper demonstrates integrated quantum light sources on a chip using van der Waals materials, specifically engineered bilayer WSe2 quantum emitters coupled to WS2 waveguides. The system produces high-purity single photons that can be efficiently guided on-chip, representing a significant step toward scalable photonic quantum technologies.
Key Contributions
- First demonstration of integrated single-photon sources using van der Waals materials with waveguide coupling
- Achievement of high-purity single-photon emission with g(2)(0) = 0.003 off-chip and efficient on-chip coupling rates of 1.7 MHz
- Development of scalable platform combining quantum emitters, waveguides, and couplers on single chip using 2D materials
View Full Abstract
Scalable photonic quantum information technologies require a platform combining quantum light sources, waveguides, and detectors on a single chip. Here, we introduce a van der Waals platform comprising strain-engineered bilayer WSe$_2$ quantum emitters, integrated on multimode WS$_2$ waveguides with optimized grating couplers, enabling efficient on-chip quantum light sources. The emitters exhibit bright, highly polarized emission that couples efficiently into WS$_2$ waveguides. Under resonant p-shell excitation, we observe high-purity, waveguide-coupled single-photon emission, measured using both an off-chip Hanbury Brown-Twiss configuration ($g^{(2)}(0) = 0.003^{+0.030}_{-0.003}$) and an on-chip configuration ($g^{(2)}(0) = 0.076\pm0.023$). For a single output, the out-coupled single-photon count rate at the first lens reaches approximately 320 kHz under continuous-wave p-shell excitation, corresponding to an estimated waveguide-coupled rate of 1.7 MHz. These results demonstrate an efficient, integrated single-photon source and establish a pathway toward scalable photonic quantum information processing centered around nanoengineered van der Waals materials.
Wave-packet dynamics in pseudo-Hermitian lattices: Coexistence of Hermitian and non-Hermitian wavefronts
This paper studies how wave packets move through non-Hermitian quantum lattice systems and discovers that two different wavefronts can travel simultaneously at different speeds. The research uses theoretical models to predict unusual quantum transport effects like sudden wave packet shifts and reflections.
Key Contributions
- Discovery of dual-front wave propagation in pseudo-Hermitian lattices with coexisting Hermitian and non-Hermitian velocities
- Theoretical explanation of unconventional transport phenomena including non-Hermitian reflections and disorder-induced wave packet emergence
View Full Abstract
This paper investigates wave-packet dynamics in non-Hermitian lattice systems and reveals a surprising phenomenon: the simultaneous propagation of two distinct wavefronts, one traveling at the non-Hermitian velocity and the other at the Hermitian velocity. We show that this dual-front behavior arises naturally in systems governed by a pseudo-Hermitian Hamiltonian. Using the paradigmatic Hatano-Nelson model as our primary example, we demonstrate that this coexistence is essential for understanding a wide array of unconventional dynamical effects, including abrupt "non-Hermitian reflections", sudden shifts of Gaussian wave-packets, and disorder-induced emergent packets seeded by the small initial tails. We present analytic predictions that closely match numerical simulations. These results may offer new insight into the topology of non-Hermitian systems and point toward measurable experimental consequences.
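A minimal propagation sketch for the Hatano-Nelson model, assuming open boundaries and illustrative parameters: asymmetric hoppings $Je^{\pm g}$ make the evolution non-unitary, so the wave packet is renormalized at each readout. The dual-front analysis and reflection effects are the paper's; this only sets up the basic dynamics.

```python
import numpy as np
from scipy.linalg import expm

L, J, g = 300, 1.0, 0.2
H = np.zeros((L, L), dtype=complex)
for n in range(L - 1):
    H[n + 1, n] = J * np.exp(+g)    # hopping to the right (amplified)
    H[n, n + 1] = J * np.exp(-g)    # hopping to the left (attenuated)

x = np.arange(L)
psi = np.exp(-(x - L // 2) ** 2 / (2 * 10 ** 2)).astype(complex)   # Gaussian packet
psi /= np.linalg.norm(psi)

for t in (0, 20, 40):
    phi = expm(-1j * H * t) @ psi
    phi /= np.linalg.norm(phi)       # renormalize the non-unitary evolution
    print(f"t = {t:2d}:  centre of mass = {np.sum(x * np.abs(phi)**2):.1f}")
```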
Continuous-mode analysis for practical continuous-variable quantum key distribution
This paper develops improved theoretical models for continuous-variable quantum key distribution (CV-QKD) that better account for real-world device imperfections by using temporal modes and continuous-mode analysis. The researchers demonstrate that optimizing pulse shaping and digital signal processing can improve secure key rates by approximately 50% in practical fiber-based quantum communication systems.
Key Contributions
- Introduction of temporal modes and continuous-mode analysis to better model device nonidealities in CV-QKD systems
- Development of linear weighted-reconstruction digital signal processing method that improves secret key rates by ~50% without additional hardware
- Demonstration that pulse-shaping optimization can significantly enhance performance under detector bandwidth limitations
View Full Abstract
Continuous-variable quantum key distribution (CV-QKD) enables two remote parties to establish information-theoretically secure keys and offers high practical feasibility due to its compatibility with mature coherent optical communication technologies. However, as CV-QKD systems progress toward digital implementations, device nonidealities drive the optical field from a single-mode to a continuous-mode region, thereby underscoring the mismatch between theoretical models and practical systems. Here, we introduce temporal modes to construct an entanglement-based scheme that more accurately captures device nonidealities and develop a corresponding secret key rate calculation method applicable to continuous-mode scenarios. We demonstrate that optimizing the pulse-shaping format can significantly improve performance under detector-bandwidth-limited conditions. Experimental results also confirm that the proposed model effectively describes the impact of sampling-time deviations. We further analyze a linear weighted-reconstruction digital signal processing method, which improves the secret key rate by approximately 50% in a 30-km fiber experiment without requiring additional hardware, demonstrating a substantial performance enhancement at metropolitan distances. The proposed theoretical framework accommodates a broader range of experimental conditions and can guide the optimization of digital CV-QKD systems.
Decoherence in the Pure Dephasing Spin-Boson Model with Hermitian or Non-Hermitian Bath
This paper studies how quantum bits (qubits) lose their quantum properties when coupled to different types of environments, comparing conventional Hermitian baths with non-Hermitian ones. The researchers find that non-Hermitian environments can actually help protect qubits from decoherence, which could be useful for engineering better quantum systems.
Key Contributions
- Analytical establishment of similarity between non-equilibrium and equilibrium correlation functions in the pure dephasing spin-boson model
- Discovery that non-Hermitian baths suppress qubit decoherence across all coupling strengths and bath exponents, contradicting previous conclusions
View Full Abstract
In this paper, we investigate the decoherence of a qubit due to its coupling to a Hermitian or a non-Hermitian bath within the pure dephasing spin-boson model. First, using this model, we analytically establish the previously anticipated similarity between the non-equilibrium and the equilibrium correlation functions $P_x(t)$ and $C_x(t)$. Then, in the short/long time asymptotic behaviors of $P_x(t)$, we find singular dependence on $A$ (coupling strength) and $s$ (bath exponent) at their integer values. Finally, we find that the non-Hermitian bath tends to suppress the decoherence of the qubit for all values of $A$ and $s$, in contrast to the conclusion of Dey et al. Our results show the potential of non-Hermitian environment engineering in suppressing the decoherence of the qubit.
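For orientation, the sketch below evaluates the textbook zero-temperature pure-dephasing decoherence function for a Hermitian bath with spectral density $J(\omega) \propto A\,\omega^s$, showing how the coupling strength $A$ and bath exponent $s$ enter; the conventions and prefactors are assumptions, and the paper's non-Hermitian-bath results are not reproduced here.

```python
# Generic zero-temperature pure-dephasing sketch (Hermitian bath only): the
# qubit coherence decays as exp(-Gamma(t)) with
#   Gamma(t) = int_0^inf dw J(w) (1 - cos(w t)) / w^2,
#   J(w) = 2 A w^s wc^(1-s) exp(-w/wc).
# Conventions and prefactors vary; this is only meant to show how A (coupling)
# and s (bath exponent) enter, not to reproduce the paper's results.
import numpy as np

omega_c = 1.0
omega = np.linspace(1e-6, 50 * omega_c, 200_000)
domega = omega[1] - omega[0]

def gamma(t, A, s):
    J = 2 * A * omega ** s * omega_c ** (1 - s) * np.exp(-omega / omega_c)
    return np.sum(J * (1 - np.cos(omega * t)) / omega ** 2) * domega

for (A, s) in [(0.1, 0.5), (0.1, 1.0), (0.1, 2.0)]:
    coh = [np.exp(-gamma(t, A, s)) for t in (1.0, 5.0, 20.0)]
    print(f"A={A}, s={s}:  |coherence| at t=1,5,20 ->",
          " ".join(f"{c:.3f}" for c in coh))
```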
Two-Body Kapitza-Dirac Scattering of One-Dimensional Ultracold Atoms
This paper studies how ultracold atoms scatter when hit by standing light waves (Kapitza-Dirac scattering), specifically examining what happens when two atoms interact strongly with each other. The researchers developed exact numerical methods to predict how interaction strength and light parameters affect the scattering patterns, providing benchmarks for experimental studies.
Key Contributions
- Developed numerically-exact two-body description of Kapitza-Dirac scattering for contact-interacting atoms
- Mapped parameter regimes where sudden-approximation descriptions succeed or fail, particularly identifying failure at strong attraction and small lattice wavenumber
View Full Abstract
Kapitza-Dirac scattering, the diffraction of matter waves from a standing light field, is widely utilized in ultracold gases, but its behavior in the strongly interacting regime is an open question. Here we develop a numerically exact two-body description of Kapitza-Dirac scattering for two contact-interacting atoms in a one-dimensional harmonic trap subjected to a pulsed optical lattice, giving us access to the full interacting dynamics. We map how interaction strength, lattice depth, lattice wavenumber, and pulse duration reshape the diffraction pattern, leading to an interaction-dependent population redistribution in real and momentum space. By comparing the exact dynamics to an impulsive sudden-approximation description, we delineate the parameter regimes where it remains accurate and those, notably at strong attraction and small lattice wavenumber, where it fails. Our results provide a controlled few-body benchmark for interacting Kapitza-Dirac scattering and quantitative guidance for Kapitza-Dirac-based probes of ultracold atomic systems.
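The standard single-particle sudden-approximation (Raman-Nath) result gives a useful point of reference: for a pulsed standing wave $V(x) = V_0\cos^2(kx)$, the population of the $2n\hbar k$ diffraction order is $|J_n(\theta)|^2$ with pulse area $\theta = V_0\tau/2\hbar$. The sketch below evaluates this for an arbitrary illustrative pulse area (not a value from the paper, and without the interactions that are the paper's focus).

```python
# Sudden-approximation (Raman-Nath) baseline for single-particle Kapitza-Dirac
# scattering: a pulsed standing wave V(x) = V0 cos^2(kx) applied for a time tau
# imprints the phase exp(-i theta cos(2kx)), theta = V0*tau/(2*hbar), so the
# population of the 2n*hbar*k diffraction order is |J_n(theta)|^2.
# The value of theta below is an arbitrary illustration.
import numpy as np
from scipy.special import jv

theta = 1.5                       # pulse area V0*tau/(2*hbar), assumed value
orders = np.arange(-5, 6)
populations = jv(orders, theta) ** 2

for n, p in zip(orders, populations):
    print(f"order {n:+d} (momentum {2*n:+d} hbar k):  population {p:.4f}")
print("sum over shown orders:", populations.sum().round(4))
```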
Quantum Mpemba effect in Local Gauge Symmetry Restoration
This paper studies the quantum Mpemba effect (a counterintuitive relaxation phenomenon where certain systems reach equilibrium faster when starting from a more excited state) in gauge theories with local symmetries, specifically using the lattice Schwinger model. The researchers demonstrate conditions under which gauge symmetry can be dynamically restored and construct families of initial states that exhibit this quantum Mpemba effect.
Key Contributions
- Demonstrated quantum Mpemba effect in gauge theories with local symmetries for the first time
- Identified conditions for gauge symmetry restoration and constructed families of initial states exhibiting QME
- Proposed experimentally accessible order parameters and validated results in quantum link models relevant to current quantum simulation experiments
View Full Abstract
Understanding relaxation in isolated quantum many-body systems remains a central challenge. Recently, the quantum Mpemba effect (QME), a counterintuitive relaxation phenomenon, has attracted considerable attention and has been extensively studied in systems with global symmetries. Here, we study the QME in gauge theories with massive local gauge symmetries. In the lattice Schwinger model, we demonstrate that the gauge structure of the reduced density matrix of a subsystem is entirely determined by the initial state and remains unchanged during the time evolution. We then investigate whether gauge symmetry can be dynamically restored following a symmetric quench. Analytical and numerical results show that when the Maxwell term is zero, gauge symmetry restoration fails due to the emergence of a peculiar conservation law. However, for any finite Maxwell term, subsystem gauge symmetry is restored in the thermodynamic limit. Based on these results, we systematically construct families of initial states exhibiting the QME. We further explore the QME in the quantum link model, a truncated lattice Schwinger model that has been realized in experiments. Moreover, we propose an experimentally accessible order parameter that correctly captures the QME. Our work demonstrates the generality of the quantum Mpemba effect even in theories with local gauge symmetries, and our results are directly relevant to ongoing quantum simulation experiments of gauge theories.
Quantum Dynamics of a Nanorotor Driven by a Magnetic Field
This paper proposes that nanoscale molecular rotors could explain how weak magnetic fields affect biological systems. The researchers show that these tiny rotors can maintain quantum properties and are surprisingly sensitive to weak magnetic fields while being less affected by strong ones.
Key Contributions
- Theoretical model for quantum coherent molecular rotors in biological systems
- Demonstration of selective sensitivity to weak vs strong magnetic fields in nanoscale quantum systems
View Full Abstract
A molecular rotor mechanism is proposed to explain weak magnetic field effects in biology. Despite being nanoscale (1 nm), this rotor exhibits quantum superposition and interference. Analytical modeling shows its quantum dynamics are highly sensitive to weak, but not strong, magnetic fields. Due to its enhanced moment of inertia, the rotor maintains quantum coherence relatively long, even in a noisy cellular environment. Operating at the mesoscopic boundary between quantum and classical behavior, such a rotor embedded in cyclical biological processes could exert significant and observable biological influence.
Sharing quantum indistinguishability with multiple parties
This paper presents a sequential quantum state discrimination scheme that allows multiple parties to share and extract quantum uncertainty from a single quantum system using weak measurements. The approach enables resource sharing where multiple observers can perform state discrimination sequentially on the same quantum state without completely destroying the quantum information.
Key Contributions
- Sequential state-discrimination protocol enabling multiple parties to share quantum uncertainty
- Application of maximum-confidence measurements and weak measurements for quantum resource sharing
View Full Abstract
Quantum indistinguishability of non-orthogonal quantum states is a valuable resource in quantum information applications such as cryptography and randomness generation. In this article, we present a sequential state-discrimination scheme that enables multiple parties to share quantum uncertainty, in terms of the max relative entropy, generated by a single party. Our scheme is based upon maximum-confidence measurements and takes advantage of weak measurements to allow a number of parties to perform state discrimination on a single quantum system. We review known sequential state-discrimination schemes and show how our scheme would work through a number of examples where ensembles may or may not contain symmetries. Our results will have a role to play in understanding the ultimate limits of sequential information extraction and guide the development of quantum resource sharing in sequential settings.
Composite N-Q-S: Serial/Parallel Instrument Axioms, Bipartite Order-Effect Bounds, and a Monitored Lindblad Limit
This paper develops a mathematical framework for analyzing sequential quantum measurements, providing bounds on how the order of measurements affects outcomes and connecting discrete measurement sequences to continuous quantum dynamics. The work focuses on theoretical tools for quantifying and controlling measurement order effects in quantum systems.
Key Contributions
- Tight bipartite order-effect bounds for sequential quantum measurements with explicit equality conditions
- Diamond-norm commutator bounds quantifying how measurement sequence rearrangements affect observable outcomes
- Monitored Lindblad limit connecting discrete measurement loops to continuous-time quantum dynamics
- Data-driven exponential mixing rates with finite-sample certificates for operational parameters
View Full Abstract
We develop a composite operational architecture for sequential quantum measurements that (i) gives a tight bipartite order-effect bound with an explicit equality set characterized on the Halmos two-subspace block, (ii) upgrades Doeblin-type minorization to composite instruments and proves a product lower bound for the operational Doeblin constants, yielding data-driven exponential mixing rates, (iii) derives a diamond-norm commutator bound that quantifies how serial and parallel rearrangements influence observable deviations, and (iv) establishes a monitored Lindblad limit that links discrete look-return loops to continuous-time GKLS dynamics under transparent assumptions. Building on the GKLS framework of Gorini, Kossakowski, Sudarshan, Lindblad, Davies, Spohn, and later work of Fagnola-Rebolledo and Lami et al., we go beyond asymptotic statements by providing finite-sample certificates for the minorization parameter via exact binomial intervals and propagating them to rigorous bounds on the number of interaction steps required to attain a prescribed accuracy. A minimal qubit toy model and CSV-based scripts are supplied for full reproducibility. Our results position order-effect control and operational mixing on a single quantitative axis, from equality windows for pairs of projections to certified network mixing under monitoring. The framework targets readers in quantum information and quantum foundations who need explicit constants that are estimable from data and transferable to device-level guarantees.
Defect-Driven Nonlinear and Nonlocal Perturbations in Quantum Chains
This paper develops an analytical framework to study how single defects in quantum lattice systems affect particle transport and localization. The research shows that even minimal defects can create surprising nonlinear effects and enhance localization at distant sites, providing new insights into quantum transport mechanisms.
Key Contributions
- Development of exact analytical framework for defect-driven transport in quantum chains
- Discovery that single defects induce nonlinear and nonlocal effects including enhanced localization at distant sites
- Demonstration of strong sensitivity to initial particle position and non-monotonic transport suppression
View Full Abstract
Transport and localization in isolated quantum systems are typically attributed to spatially-extended disorder, leaving the influence of a few controllable defects largely unexplored despite their relevance to engineered quantum platforms. We introduce an analytic framework showing how a single defect profoundly reshapes wave-function spreading on a finite isolated and periodic tight-binding lattice. Adapting the defect technique from classical random-walk studies, we obtain exact time-resolved site-occupation probabilities and several observables of interest. Even one defect induces striking nonlinear and nonlocal effects, including non-monotonic suppression of transport, enhanced localization at distant sites, and strong sensitivity to the initial particle position at long times. These results demonstrate that minimal perturbations can generate unexpected long-time transport signatures, establishing a microscopic defect-driven mechanism of quantum localization.
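A minimal numerical counterpart of this setting (not the authors' analytic defect technique) is easy to set up: one on-site defect on a periodic tight-binding ring, with exact time-resolved site occupations obtained by diagonalization. The lattice size, hopping, defect strength, initial site, and times below are illustrative choices.

```python
# Minimal numerical counterpart of the setting (not the paper's analytic defect
# technique): one on-site defect on a periodic tight-binding ring, with exact
# time-resolved site occupations obtained by diagonalization.
import numpy as np

L, J = 40, 1.0                      # ring size and hopping
defect_site, defect_eps = 0, 4.0    # defect position and on-site energy
start_site = 10                     # initial particle position

H = np.zeros((L, L))
for n in range(L):
    H[n, (n + 1) % L] = H[(n + 1) % L, n] = -J   # periodic hopping
H[defect_site, defect_site] = defect_eps

evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(L); psi0[start_site] = 1.0
c = evecs.T @ psi0                                # expansion coefficients

for t in (5.0, 20.0, 80.0):
    psi_t = evecs @ (np.exp(-1j * evals * t) * c)
    prob = np.abs(psi_t) ** 2
    print(f"t={t:5.1f}  P(start)={prob[start_site]:.3f}  "
          f"P(defect)={prob[defect_site]:.3f}  "
          f"P(antipodal)={prob[(defect_site + L // 2) % L]:.3f}")
```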
Universal Blind Quantum Computation with Recursive Rotation Gates
This paper presents a new protocol for blind quantum computation that allows a client to securely delegate quantum computations to a remote server without revealing sensitive data. The approach uses recursive rotation gates instead of highly entangled states, making it more practical for current quantum hardware and reducing communication requirements.
Key Contributions
- Novel blind quantum computation protocol using recursive rotation gates that avoids highly entangled resource states
- Reduced communication rounds making the protocol more practical for NISQ-era implementations and hybrid quantum-classical systems
View Full Abstract
Blind Quantum Computation lets a limited-capability client delegate its complex computation to a remote server without revealing its data or computation. Several such protocols have been proposed under varied quantum computing models. However, these protocols either rely on highly entangled resource states (in measurement-based models) or are based on non-parametric resource sets (in circuit-based models). These restrictions hinder the practical applicability of such an algorithm in the NISQ era, especially concerning the hybrid quantum-classical infrastructure, which depends on parametric gates. We present a protocol for universal blind quantum computation based on recursive decryption of parametric rotation gates, which does not require a highly entangled state at the server side and substantially reduces the communication rounds required for practical prototyping of secure variational algorithms.
Exponential convergence dynamics in Grover's search algorithm
This paper modifies Grover's quantum search algorithm to eliminate its oscillatory behavior by converting solution states into a reservoir using ancilla qubits, creating exponential convergence dynamics instead of the typical oscillations while maintaining the quadratic speedup.
Key Contributions
- Modified Grover's algorithm with exponential convergence dynamics that eliminates the 'soufflé problem'
- Quantum circuit implementation via Trotterization that maintains quadratic speedup while removing oscillatory behavior
View Full Abstract
Grover's search algorithm is the cornerstone of many applications of quantum computing, providing a quadratic speed-up over classical methods. One limitation of the algorithm is that it requires knowledge of the number of solutions to obtain an optimal success probability, due to the oscillatory dynamics between the initial and solution states (the "soufflé problem"). While various methods have been proposed to solve this problem, each has its drawbacks in terms of inefficiency or sensitivity to control errors. Here, we modify Grover's algorithm to eliminate the oscillatory dynamics, such that the search proceeds as an exponential decay into the solution states. The basic idea is to convert the solution states into a reservoir by using ancilla qubits such that the initial state is nonreflectively absorbed. Trotterizing the continuous algorithm yields a quantum circuit that gives equivalent performance, with the same quadratic quantum speedup as the original algorithm.
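The "soufflé problem" being removed can be seen in a toy comparison (which is not the paper's reservoir construction): the standard Grover success probability $\sin^2((2k+1)\theta)$ with $\sin\theta = \sqrt{M/N}$ oscillates, so overshooting the optimal iteration count is harmful, whereas a monotone exponential approach to the solution subspace is not. The exponential time scale below is an ad hoc choice matched to the Grover scale.

```python
# Toy comparison (not the paper's construction): the standard Grover success
# probability oscillates as sin^2((2k+1)theta), so overshooting the optimal
# iteration count hurts, whereas a monotone exponential approach does not.
# The exponential time scale is chosen ad hoc to match the sqrt(N/M) scale.
import numpy as np

N, M = 2 ** 12, 4                       # search-space size, number of solutions
theta = np.arcsin(np.sqrt(M / N))
k_opt = int(np.round(np.pi / (4 * theta)))

for k in (k_opt // 2, k_opt, 2 * k_opt, 4 * k_opt):
    p_grover = np.sin((2 * k + 1) * theta) ** 2      # oscillatory
    p_decay = 1 - np.exp(-k / k_opt)                 # monotone toy model
    print(f"k={k:4d}  Grover P={p_grover:.3f}   exponential-model P={p_decay:.3f}")
```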
Quantum data hiding with two-qubit separable states
This paper develops a quantum data-hiding scheme using two-qubit separable states, showing how to hide information that cannot be accessed through local measurements but can be retrieved through global measurements. The work provides theoretical bounds for local discrimination of quantum states and demonstrates the scheme using orthogonal separable states.
Key Contributions
- Established bound on optimal local discrimination of two-party quantum states
- Developed practical quantum data-hiding scheme using minimal two-qubit separable states
View Full Abstract
We consider the discrimination of two-party quantum states and provide a quantum data-hiding scheme using two-qubit separable states. We first provide a bound on the optimal local discrimination of two-party quantum states, and establish a sufficient condition under which a two-party quantum state ensemble can be used to construct a data-hiding scheme. We illustrate this condition with examples of two-qubit state ensembles consisting of two orthogonal separable states. As our data-hiding scheme can be implemented with separable states of the lowest possible dimension, its practical realization becomes significantly more attainable.
Coherent transfer via parametric control of normal-mode splitting in a superconducting multimode resonator
This paper demonstrates a new method for storing and retrieving microwave signals in superconducting quantum circuits using parametric control to create tunable mode splitting. The researchers show they can store microwave pulses on-demand by controlling the coupling between different resonator modes, creating a quantum memory system.
Key Contributions
- Demonstration of parametric normal-mode splitting for controllable microwave storage
- On-demand storage and retrieval mechanism in superconducting multimode resonators
- Alternative approach to quantum memory using coherent energy exchange between modes
View Full Abstract
Microwave storage and retrieval are essential capabilities for superconducting quantum circuits. Here, we demonstrate an on-chip multimode resonator in which strong parametric modulation induces a large and tunable normal-mode splitting that enables microwave storage. When the spectral bandwidth of a short microwave pulse covers the two dressed-state absorption peaks, part of the pulse is absorbed and undergoes coherent energy exchange between the modes, producing a clear time-domain beating signal. By switching off the modulation before the beating arrives, we realize on-demand storage and retrieval, demonstrating an alternative approach to microwave photonic quantum memory. This parametric-normal-mode-splitting protocol offers a practical route toward a controllable quantum-memory mechanism in superconducting circuits.
Quantum Batteries in Coherent Ising Machine
This paper proposes a quantum battery design using degenerate optical parametric oscillators (DOPO) to store and transfer energy through quantum effects. The researchers demonstrate that coherent energy storage components decay more slowly than incoherent ones and identify optimal timing for energy extraction by coupling the battery to a two-level system.
Key Contributions
- Proposed practical quantum battery architecture using mature optical DOPO technology
- Identified optimal charging/discharging timing where coherent ergotropy and charging power peak simultaneously
- Demonstrated efficient energy transfer from quantum battery to two-level system load
View Full Abstract
With the intensive study of quantum thermodynamics, quantum batteries (QBs) have been proposed to store and transfer energy via quantum effects. Despite many theoretical models, decoherence remains a severe challenge and practical platforms are still rare. Here we propose a QB based on the degenerate optical parametric oscillator (DOPO), using the signal field as the energy-storage unit. We carefully separate the ergotropy into coherent and incoherent components and find that the coherent part decays at roughly half the rate of the incoherent part. More importantly, the coherent ergotropy and the average charging power reach their respective maxima at essentially the same moment, i.e., $γ_s t \approx 10$. This coincidence defines the optimal instant to switch off the pump. Finally, coupling the QB to a two-level system (TLS) as the load, we demonstrate an efficient discharge process of the QB. Our work establishes a realistic and immediately implementable QB architecture on a mature optical platform.
Microscopic model for a spatial multimode generation based on Multi-pump Four Wave Mixing in hot vapours
This paper develops a theoretical microscopic model to describe how multipartite entanglement can be generated using four wave mixing with two laser pumps in hot alkali atom vapors. The model predicts how quantum correlations form between different spatial modes, which could be useful for quantum information applications.
Key Contributions
- First microscopic theoretical description of multimode entanglement generation using two-pump four wave mixing in dense atomic media
- Development of Floquet expansion method to analyze multimode gain amplification and quantum noise properties
View Full Abstract
Multipartite entanglement is an important resource for quantum information processing. It has been shown that it is possible to employ alkali atoms to implement single device multipartite entanglement by using nonlinear processes with spatial modes. This work presents the first microscopic description of such multi-mode generation with two-pump four wave mixing (4WM) in dense atomic media. We implement an extension of a double $Λ$ model for a single pump 4WM in order to describe the multi-mode generation with a two-pump configuration. We propose a Floquet expansion to solve the multimode gain amplification and noise properties. The model describes the angle and the two-photon dependency of the multimode generation and the quantum correlations among the modes. We investigate the entanglement properties of the system, describing the main properties of previous experimental observations. Such a microscopic description can be used to predict the gain distribution of modes and the quantum correlation within a typical range of experimental parameters.
Graph-theoretical search for integrable multistate Landau-Zener models
This paper develops a systematic computational method to search for exactly solvable multistate Landau-Zener models using graph theory. The researchers create an algorithm to identify candidate graphs that could host these quantum models and test it on systems up to 13 vertices, confirming existing conjectures about which graph families allow exact solutions.
Key Contributions
- Development of an efficient algorithm to systematically search for integrable multistate Landau-Zener models using graph-theoretical methods
- Computational verification of existing conjectures about host graph families (hypercubes, fans, Cartesian products) for systems up to 11 vertices
- Identification of '(0,2)-graph descendants' as promising candidates for larger systems that may extend beyond known solvable families
View Full Abstract
The search for exactly solvable models is an evergreen topic in theoretical physics. In the context of multistate Landau-Zener models -- $N$-state quantum systems with linearly time-dependent Hamiltonians -- the theory of integrability provides a framework for identifying new solvable cases. In particular, it was proved that the integrability of a specific class known as the multitime Landau-Zener (MTLZ) models guarantees their exact solvability. A key finding was that an $N$-state MTLZ model can be represented by data defined on an $N$-vertex graph. While known host graphs for MTLZ models include hypercubes, fans, and their Cartesian products, no other families have been discovered, leading to the conjecture that these are the only possibilities. In this work, we conduct a systematic graph-theoretical search for integrable models within the MTLZ class. By first identifying minimal structures that a graph must contain to host an MTLZ model, we formulate an efficient algorithm to systematically search for candidate graphs for MTLZ models. Implementing this algorithm using computational software, we enumerate all candidate graphs with up to $N = 13$ vertices and perform an in-depth analysis of those with $N \le 11$. Our results corroborate the aforementioned conjecture for graphs up to $11$ vertices. For even larger graphs, we propose a specific family, termed descendants of ``$(0,2)$-graphs'', as promising candidates that may violate the conjecture above. Our work can serve as a guideline to identify new exactly solvable multistate Landau-Zener models in the future.
Characterizing entanglement shareability and distribution in $N$-partite systems
This paper studies how entanglement can be shared and distributed across multiple quantum particles, developing improved mathematical measures called G_q-concurrence that better characterize entanglement in complex quantum systems with more than two components.
Key Contributions
- Demonstrated hierarchical monogamy relations for squared G_q-concurrence in N-qubit systems
- Proved superiority of G_q-concurrence over standard concurrence for characterizing entanglement in multilevel quantum systems
- Developed hierarchical indicators with enhanced entanglement witnessing capabilities
View Full Abstract
Exploring the shareability and distribution of entanglement is of fundamental significance in quantum information tasks. In this paper, we demonstrate that the square of the bipartite entanglement measure $G_q$-concurrence, a generalization of concurrence, follows a set of hierarchical monogamy relations for any $N$-qubit quantum state. On the basis of these monogamy inequalities, we present two kinds of hierarchical indicators that exhibit clear advantages in witnessing entanglement. Moreover, we show an analytical relation between $G_q$-concurrence and concurrence in $2\otimes d$ systems. Furthermore, we rigorously prove that the monogamy property of squared $G_q$-concurrence is superior to that of squared concurrence in $2\otimes d_2\otimes d_3\otimes\cdots\otimes d_N$ systems. In addition, several concrete examples are provided to illustrate that, for multilevel systems, the squared $G_q$-concurrence satisfies the monogamy relation even if the squared concurrence does not. These results better reveal the intriguing characteristics of multilevel entanglement and provide critical insights into the entanglement distribution within multipartite quantum systems.
Trade-off relations and enhancement protocol of quantum battery capacities in multipartite systems
This paper investigates quantum batteries in multi-qubit systems, discovering trade-off relationships between subsystem and total battery capacities, and develops protocols using incoherent unitary operations to enhance energy storage performance in quantum systems.
Key Contributions
- Established trade-off relations between subsystem and total battery capacities in two-qubit and three-qubit systems for various Hamiltonian models
- Developed enhancement protocol using incoherent unitary operations to improve subsystem battery capacity with sufficient conditions for capacity gain
- Defined residual battery capacity and coherent/incoherent components of subsystem battery capacity for general quantum states
View Full Abstract
First, we investigate the trade-off relations of quantum battery capacities in the two-qubit system. We find that the sum of the subsystem battery capacities is governed by the total system capacity, with this trade-off relation persisting for a class of Hamiltonians including the Ising, XX, XXZ and XXX models. Then, building on this relation, we define the residual battery capacity for general quantum states and establish coherent/incoherent components of the subsystem battery capacity. Furthermore, we introduce a protocol to guide the selection of appropriate incoherent unitary operations for enhancing subsystem battery capacity in specific scenarios, along with a sufficient condition for achieving a subsystem capacity gain through a unitary operation. Numerical examples validate the feasibility of the incoherent operation protocol. Additionally, for the three-qubit system, we establish a set of results parallel to those for the two-qubit case. Finally, we determine the minimum time required to enhance subsystem battery capacity via a single incoherent operation in our protocol. Our findings contribute to the development of quantum battery theory and quantum energy storage systems.
High efficiency controlled quantum secure direct communication with 4D qudits and Grover search algorithm
This paper proposes a new quantum secure direct communication protocol that uses 4-dimensional quantum particles and a three-party decoding mechanism to achieve high efficiency (66.7% qudit efficiency) while maintaining security. The protocol eliminates the need for classical computation in the decoding process by using a controller-authorized sequence of quantum operations.
Key Contributions
- Novel three-party controlled QSDC protocol using 4D qudits with 66.7% efficiency
- Collaborative unitary sequence decoding paradigm that eliminates classical computation requirements
- Multi-layer defense mechanism incorporating decoy photon authentication
View Full Abstract
Currently, the progress of quantum secure direct communication (QSDC) is impeded by a fundamental trade-off among control efficiency, security, and scalability. This study proposes an innovative controlled QSDC protocol based on a collaborative unitary sequence decoding paradigm to break this deadlock. Leveraging four-dimensional single-particle states, the protocol's core innovation lies in its three-party decoding mechanism. The controller's authorization unlocks a specific unitary operation sequence, enabling the receiver to decode directly and exclusively via quantum operations, eliminating the need for the classical computational algorithms used in conventional protocols. This tailored sequence underpins its high efficiency. The protocol also seamlessly incorporates decoy photon authentication, creating a multi-layer defense against both external and internal attacks. Consequently, it achieves a remarkable qudit efficiency of 66.7%, offering a significant performance improvement over existing schemes and an efficient, highly secure solution for future quantum networks.
Testing electron-photon exchange-correlation functional performance for many-electron systems under weak and strong light-matter coupling
This paper develops and tests a new computational method called pxcLDA within quantum electrodynamics density functional theory (QEDFT) to efficiently calculate how electrons behave in atoms and molecules when they interact with light in optical cavities. The method uses a renormalization factor to account for electron-photon correlations and shows good agreement with more computationally expensive reference methods across various molecular systems.
Key Contributions
- Extension of pxcLDA functional from one-electron to many-electron systems under light-matter coupling
- Development of renormalization factor approach to capture electron-photon correlations in weak-coupling regime
- Validation against quantum electrodynamics coupled-cluster methods showing good agreement for cavity-modified electron densities
View Full Abstract
We present results of a photon-free exchange-correlation functional within the local density approximation (pxcLDA) for quantum electrodynamics density functional theory (QEDFT) that efficiently describes the electron density of many-electron systems across weak to strong light-matter coupling. Building on previous work [I-Te. Lu et al., Phys. Rev. A 109, 052823 (2024)] that captured electron-photon correlations via an exchange-correlation functional derived from the nonrelativistic Pauli-Fierz Hamiltonian and tested on one-electron systems, we use a simple procedure to compute a renormalization factor describing electron-photon correlations and inhomogeneity in the weak-coupling regime by comparing it with quantum electrodynamics coupled-cluster, and previous QEDFT optimized effective potential methods. Across various atoms and molecules, pxcLDA reproduces cavity-modified densities in close agreement with these references. The renormalization factor approaches unity as the system size or collective coupling increases, reflecting an electron-photon exchange-dominated behavior and improved accuracy for larger systems. This approach now offers a practical route to applying QEDFT functionals based on electron density to realistic electron systems.
Improved Lower Bounds for QAC0
This paper establishes new theoretical lower bounds showing that constant-depth quantum circuits (QAC0) have fundamental limitations in computing certain Boolean functions like PARITY and MAJORITY, suggesting quantum circuits may not always outperform classical circuits for decision problems.
Key Contributions
- Proved depth 3 QAC0 circuits cannot compute PARITY and need exponential gates for MAJORITY
- Showed depth 2 circuits cannot approximate high-influence Boolean functions with non-negligible advantage
- Demonstrated depth 2 QAC0 circuits cannot synthesize n-target nekomata states
View Full Abstract
In this work, we establish the strongest known lower bounds against QAC$^0$, while allowing its full power of polynomially many ancillae and gates. Our two main results show that: (1) Depth 3 QAC$^0$ circuits cannot compute PARITY regardless of size, and require at least $Ω(\exp(\sqrt{n}))$ many gates to compute MAJORITY. (2) Depth 2 circuits cannot approximate high-influence Boolean functions (e.g., PARITY) with non-negligible advantage, regardless of size. We present new techniques for simulating certain QAC$^0$ circuits classically in AC$^0$ to obtain our depth $3$ lower bounds. In these results, we relax the output requirement of the quantum circuit to a single bit (i.e., no restrictions on input preservation/reversible computation), making our depth $2$ approximation bound stronger than the previous best bound of Rosenthal (2021). This also enables us to draw natural comparisons with classical AC$^0$ circuits, which can compute PARITY exactly in depth $2$ using exponential size. Our proof techniques further suggest that, for inherently classical decision problems, constant-depth quantum circuits do not necessarily provide more power than their classical counterparts. Our third result shows that depth $2$ QAC$^0$ circuits, regardless of size, cannot exactly synthesize an $n$-target nekomata state (a state whose synthesis is directly related to the computation of PARITY). This complements the depth $2$ exponential-size upper bound of Rosenthal (2021) for approximating nekomatas (which is used as a sub-circuit in the only known constant-depth PARITY upper bound).
Sequential realization of Quantum Instruments
This paper presents a mathematical framework for implementing quantum instruments (general quantum operations) using adaptive sequences where classical measurement results determine subsequent quantum gates. The work shows how to minimize the trade-off between the number of measurement steps and ancillary qubits needed, with surprising results about reusing qubits multiple times.
Key Contributions
- Mathematical framework for adaptive sequence of instruments (ASI) that decomposes any quantum instrument into sequential implementations
- Proof of achievable lower bound on the N·n_A trade-off between number of steps and ancillary qubits
- Counter-intuitive result showing quantum instruments expanding n to m qubits can be implemented with only (m-n) ancillary qubits through strategic remeasurement
View Full Abstract
In adaptive quantum circuits, the classical results of mid-circuit measurements determine the upcoming gates. This allows POVMs, quantum channels, or more generally quantum instruments to be implemented sequentially, so that fewer qubits need to be used at each of the $N$ measurement steps. In this paper, we mathematically describe these problems via adaptive sequences of instruments (ASI) and show how any instrument can be decomposed into one. The number of steps $N$ and the number of ancillary qubits $n_A$ needed for an actual implementation are crucial parameters of any such ASI. We show an achievable lower bound on the product $N \cdot n_A$ and determine in which situations this trade-off is likely to be optimal. Contrary to common intuition, we show that for quantum instruments which transform $n$ into $m$ ($>n$) qubits, there exist $N$-step ASIs implementing them with just $(m-n)$ ancillary qubits, which are remeasured $(N-1)$ times and finally used as output qubits.
Microwave control of photonic spin Hall effect in atomic system
This paper investigates how microwave fields can control the photonic Spin Hall Effect in atomic systems, demonstrating that both the magnitude and angular position of light polarization shifts can be precisely controlled by adjusting microwave parameters and optical field phases.
Key Contributions
- Demonstration of microwave control over photonic Spin Hall Effect magnitude and angular position in atomic systems
- Discovery of microwave-controlled switching mechanism for photonic SHE through phase tuning
- Identification of optimal conditions where maximum SHE occurs at unit refractive index when susceptibility components vanish
View Full Abstract
The photonic Spin Hall Effect (SHE) causes a polarization-dependent transverse shift of light at an interface. There is significant research interest in controlling and enhancing the photonic SHE. In this paper, we theoretically investigate the microwave-field control of the photonic SHE in a closed-loop $Λ$-type atomic system. We demonstrate that both the magnitude and angular position of the photonic SHE can be controlled by varying the relative phase $φ$ between the driving optical fields and the strength of the microwave coupling $Ω_μ$. At zero probe field detuning ($Δ_p = 0$) and $φ=0,π$, the photonic SHE magnitude reaches its upper limit, equal to half of the incident beam waist, and remains largely unaffected by the microwave strength $Ω_μ$, but its angular position shifts linearly with increasing $Ω_μ$. At intermediate phases, especially at $φ= π/2$, the magnitude of the photonic SHE decreases exponentially with increasing $Ω_μ$. Interestingly, we observe microwave-controlled switching of the photonic SHE by tuning the relative phase $φ$ at optimized values of $Ω_μ$ and $Ω_{c}$. In contrast, at $Δ_p = \pm Ω_c$, a maximum photonic SHE equal to half of the incident beam waist occurs at $φ\leq π$ and $Ω_μ \geq Ω_p$, where both the real and imaginary parts of the susceptibility vanish, yielding a unit refractive index. Our results may have potential applications in microwave quantum sensing and quantum optical switches based on the photonic SHE.
Reading Qubits with Sequential Weak Measurements: Limits of Information Extraction
This paper analyzes how to optimally read qubit states using sequential weak measurements, studying the fundamental limits of information extraction when measurements are noisy and the qubit undergoes intrinsic dynamics. The researchers develop theoretical models to determine optimal measurement duration and strength for maximizing information recovery about initial qubit states.
Key Contributions
- Development of information-theoretic framework using mutual information to quantify optimal qubit readout performance under weak measurements
- Analytical bounds and asymptotic expansions for information extraction limits in realistic measurement scenarios with intrinsic dynamics
View Full Abstract
Quantum information processing and computation require high-accuracy qubit configuration readout. In many practical schemes, the initial qubit configuration has to be inferred from a readout record consisting of time-dependent weak measurements. However, a combination of the measurement scheme and intrinsic dynamics can end up scrambling the initial state and losing information irretrievably. Here, we study the information physics of quantum trajectories based on weak measurements in order to address the optimal achievable performance in qubit configuration readout for two realistic models of single-qubit readout: (i) Model I is informationally complete, but without intrinsic dynamics; (ii) Model II is informationally incomplete weak measurements with intrinsic dynamics. We first use mutual information to characterize how much intrinsic information about the initial state is encoded in the measurement record. Using a fixed discrete time-step formulation, we compute the mutual information while varying the measurement strength, the duration of the measurement record, and the relative strength of intrinsic dynamics in our measurement schemes. We also exploit the emergence of continuum scaling and the Stochastic Master Equation in the weak measurement limit. We develop an asymptotic expansion in the measurement efficiency parameter to calculate mutual information, which captures qualitative and quantitative features of the numerical data. The bounds on information extraction are manifested as plateaux in the mutual information; our analysis obtains these bounds as well as the optimal measurement duration required to saturate them. Our results should be useful for quantum device operation and optimization and, possibly, for improving the performance of recent machine learning approaches for qubit and multiqubit configuration readout in current Noisy Intermediate-Scale Quantum (NISQ) experiment regimes.
Exploiting Reset Operations in Cloud-based Quantum Computers to Run Quantum Circuits for Free
This paper identifies a security vulnerability in cloud-based quantum computing services where users can exploit mid-circuit reset operations to run multiple quantum circuits within a single charged 'shot', effectively getting free computation by bundling circuits together and paying only once.
Key Contributions
- Identification of economic exploitation vulnerability in per-shot pricing models for cloud quantum computers
- Demonstration of methods to bundle multiple circuits using reset operations to reduce costs by up to 900%
- Proposal for improved pricing approaches to address this vulnerability while maintaining functionality
View Full Abstract
This work presents the first thorough exploration of how reset operations in cloud-based quantum computers could be exploited to run quantum circuits for free. This forms a new type of attack on the economics of cloud-based quantum computers. All major quantum computing companies today offer access to their hardware through some type of cloud-based service. Due to the noisy nature of quantum computers, a quantum circuit is run many times to collect the output statistics, and each run is called a shot. The fees users pay for access to the machines typically depend on the number of these shots of a quantum circuit that are executed. Per-shot pricing is a clean and straightforward approach as users are charged a small fee for each shot of their circuit. This work demonstrates that per-shot pricing can be exploited to get circuits to run for free when users abuse recently implemented mid-circuit qubit measurement and reset operations. Through evaluation on real, cloud-based quantum computers, this work shows how multiple circuits can be executed together within a shot, by separating each user circuit with a set of reset operations and submitting all the circuits, and reset operations, as one larger circuit. As a result, the user is charged per-shot pricing, even though inside each shot are multiple circuits. The total per-shot cost to run certain circuits could be reduced by up to 900% using the methods proposed in this work, leading to significant financial losses to quantum computing companies. To address this novel finding, this work proposes a clear approach for how users should be charged for their execution, while maintaining the flexibility and usability of the mid-circuit measurement and reset operations.
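A conceptual Qiskit sketch of the bundling mechanism described in the abstract is given below: two unrelated user circuits are concatenated into one submitted circuit, separated by mid-circuit measurement and reset of every qubit, with separate classical registers keeping the two sets of results apart. The circuit contents and register sizes are placeholders.

```python
# Conceptual sketch of the bundling mechanism described above: two unrelated
# user circuits are concatenated into a single submitted circuit, separated by
# mid-circuit measurement and reset of every qubit. Circuit contents and
# register sizes are placeholders.
from qiskit import QuantumCircuit

n_qubits = 2

# "User circuit A": a Bell pair
circ_a = QuantumCircuit(n_qubits)
circ_a.h(0)
circ_a.cx(0, 1)

# "User circuit B": single-qubit rotations
circ_b = QuantumCircuit(n_qubits)
circ_b.rx(0.3, 0)
circ_b.ry(1.1, 1)

# One bundled job: separate classical bits keep the two results apart
bundled = QuantumCircuit(n_qubits, 2 * n_qubits)
bundled.compose(circ_a, qubits=list(range(n_qubits)), inplace=True)
bundled.measure(list(range(n_qubits)), list(range(n_qubits)))        # results of A
for q in range(n_qubits):
    bundled.reset(q)                                                 # reuse the qubits
bundled.compose(circ_b, qubits=list(range(n_qubits)), inplace=True)
bundled.measure(list(range(n_qubits)), list(range(n_qubits, 2 * n_qubits)))  # results of B

print(bundled.draw(output="text"))
```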
Entanglement measure for the W-class states
This paper investigates how to properly measure entanglement in W-class quantum states (a specific type of multi-qubit entangled state) and shows that existing measures like π-tangle fail for large systems, proposing better alternatives like the sum of two-tangles. The work establishes conditions linking pairwise entanglement to global separability and introduces new criteria for effective entanglement quantification.
Key Contributions
- Established rigorous condition linking pairwise entanglement to global separability in W-class states
- Identified sum of two-tangles as effective entanglement quantifier that works in large-n limit
- Introduced new condition for entanglement measures to address limitations of existing π-tangle approach
View Full Abstract
The structure and quantification of entanglement in the W-class states are investigated under physically motivated transformations that induce mixed-state dynamics. A rigorous condition is established linking global separability to the behavior of pairwise entanglement, showing that the absence of pairwise entanglement is sufficient to guarantee complete separability of the system, provided the Hilbert-space basis is preserved. This result motivates the identification of the sum of two-tangles as a natural and effective entanglement quantifier for the W-class states. Furthermore, the commonly used $π$-tangle becomes ineffective for the maximally entangled $n$-qubit W state as the system size increases, vanishing in the large-$n$ limit. To address this limitation, the sum of $π$-tangles is introduced, which, like the sum of two-tangles, successfully quantifies the entanglement of the maximally entangled $n$-qubit W state in the large-$n$ limit. In addition, a new condition for entanglement measures is introduced, which facilitates the formulation of a well-behaved and physically meaningful entanglement measure.
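The pairwise structure discussed above can be checked numerically for the standard $n$-qubit W state: every two-qubit reduced state has Wootters concurrence $2/n$, so the sum of two-tangles over all pairs equals $2(n-1)/n$ and remains finite as $n$ grows. The sketch below (a generic illustration, not code from the paper) verifies this.

```python
# Numerical check of the pairwise entanglement structure discussed above:
# for the n-qubit W state, the two-qubit reduced state of any pair has
# Wootters concurrence 2/n, so the sum of two-tangles over all pairs is
# n(n-1)/2 * (2/n)^2 = 2(n-1)/n. (Generic illustration, not code from the paper.)
import numpy as np
from itertools import combinations

def w_state(n):
    psi = np.zeros(2 ** n)
    for k in range(n):
        psi[1 << k] = 1.0
    return psi / np.sqrt(n)

def reduced_two_qubit(psi, n, i, j):
    """Trace out all qubits except i and j (qubit 0 = least significant bit)."""
    psi = psi.reshape([2] * n)          # axis 0 is the most significant qubit
    axes = [n - 1 - i, n - 1 - j]       # convert bit index to axis index
    keep = axes + [a for a in range(n) if a not in axes]
    psi = np.transpose(psi, keep).reshape(4, -1)
    return psi @ psi.conj().T

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(R).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

for n in (3, 4, 6, 8):
    psi = w_state(n)
    tau_sum = sum(concurrence(reduced_two_qubit(psi, n, i, j)) ** 2
                  for i, j in combinations(range(n), 2))
    print(f"n={n}: pair concurrence={concurrence(reduced_two_qubit(psi, n, 0, 1)):.4f} "
          f"(expected {2/n:.4f}),  sum of two-tangles={tau_sum:.4f} "
          f"(expected {2*(n-1)/n:.4f})")
```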
Fair sampling of ground-state configurations using hybrid quantum-classical MCMC algorithms
This paper develops hybrid quantum-classical algorithms that combine quantum optimization methods (like QAOA) with classical Monte Carlo techniques to fairly sample all possible solutions to optimization problems, rather than being biased toward certain solutions. The authors demonstrate that their approach can uniformly sample ground states in problems like satisfiability (SAT) where many equivalent optimal solutions exist.
Key Contributions
- Development of hybrid quantum-classical MCMC algorithms that correct sampling bias inherent in pure quantum optimization methods
- Demonstration of fair sampling capabilities on random k-SAT problems near the satisfiability threshold, including cases where classical methods fail
View Full Abstract
We study the fair sampling properties of hybrid quantum-classical Markov chain Monte Carlo (MCMC) algorithms for combinatorial optimization problems with degenerate ground states. While quantum optimization heuristics such as quantum annealing and the quantum approximate optimization algorithm (QAOA) are known to induce biased sampling, hybrid quantum-classical MCMC incorporates quantum dynamics only as a proposal transition and enforces detailed balance through classical acceptance steps. Using small Ising models, we show that MCMC post-processing corrects the sampling bias of quantum dynamics and restores near-uniform sampling over degenerate ground states. We then apply the method to random $k$-SAT problems near the satisfiability threshold. For random 2-SAT, a hybrid MCMC combining QAOA-assisted neural proposals with single spin-flip updates achieves fairness comparable to that of PT-ICM. For random 3-SAT, where such classical methods are no longer applicable, the hybrid MCMC still attains approximately uniform sampling. We also examine solution counting and find that the required number of transitions is comparable to that of WalkSAT. These results indicate that hybrid quantum-classical MCMC provides a viable framework for fair sampling and solution enumeration.
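The core mechanism, quantum proposals filtered through a classical acceptance step that enforces detailed balance, can be illustrated with a purely classical stand-in in which the quantum sampler is mocked by a deliberately biased proposal distribution; all model choices below (an antiferromagnetic 3-spin triangle, the bias, the temperature) are illustrative assumptions, not the paper's setup.

```python
# Minimal classical stand-in for the hybrid scheme (the quantum device is
# mocked by a *biased* proposal distribution): Metropolis-Hastings with an
# independence proposal still targets the Boltzmann distribution, so the six
# degenerate ground states of an antiferromagnetic 3-spin triangle end up
# sampled nearly uniformly even though the proposal strongly prefers two of
# them. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
states = [tuple(int(b) for b in format(x, "03b")) for x in range(8)]

def energy(s):
    spins = [2 * b - 1 for b in s]
    return sum(spins[i] * spins[j] for i, j in ((0, 1), (1, 2), (0, 2)))  # AFM triangle

beta = 5.0
p = np.array([np.exp(-beta * energy(s)) for s in states]); p /= p.sum()

# Mock "quantum" proposal: heavily biased towards two particular ground states
q = np.full(8, 0.02); q[1] = q[6] = 0.44; q /= q.sum()

x = 0
counts = np.zeros(8)
for _ in range(200_000):
    y = rng.choice(8, p=q)
    accept = min(1.0, (p[y] * q[x]) / (p[x] * q[y]))   # independence-sampler MH rule
    if rng.random() < accept:
        x = y
    counts[x] += 1

e_min = min(energy(s) for s in states)
ground = [i for i, s in enumerate(states) if energy(states[i]) == e_min]
freq = counts[ground] / counts[ground].sum()
print("ground states:", [states[i] for i in ground])
print("sampled frequencies (ideally ~1/6 each):", np.round(freq, 3))
```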
A Graph-Based Forensic Framework for Inferring Hardware Noise of Cloud Quantum Backend
This paper develops a machine learning framework using Graph Neural Networks to predict error rates of quantum computing hardware from limited observable data, addressing transparency issues in cloud quantum platforms where users cannot verify if their circuits ran on the hardware they paid for.
Key Contributions
- Development of GNN-based forensic framework for inferring quantum backend error characteristics without direct calibration access
- Creation of methodology to predict per-qubit and per-qubit link error rates using only topology and transpiled circuit features
View Full Abstract
Cloud quantum platforms give users access to many backends with different qubit technologies, coupling layouts, and noise levels. The execution of a circuit, however, depends on internal allocation and routing policies that are not observable to the user. A provider may redirect jobs to more error-prone regions to conserve resources, balance load or for other opaque reasons, causing degradation in fidelity while still presenting stale or averaged calibration data. This lack of transparency creates a security gap: users cannot verify whether their circuits were executed on the hardware for which they were charged. Forensic methods that infer backend behavior from user-visible artifacts are therefore becoming essential. In this work, we introduce a Graph Neural Network (GNN)-based forensic framework that predicts per-qubit and per-qubit link error rates of an unseen backend using only topology information and aggregated features extracted from transpiled circuits. We construct a dataset from several IBM 27-qubit devices, merge static calibration features with dynamic transpilation features and train separate GNN regressors for one- and two-qubit errors. At inference time, the model operates without access to calibration data from the target backend and reconstructs a complete error map from the features available to the user. Our results on the target backend show accurate recovery of backend error rate, with an average mismatch of approximately 22% for single-qubit errors and 18% for qubit-link errors. The model also exhibits strong ranking agreement, with the ordering induced by predicted error values closely matching that of the actual calibration errors, as reflected by high Spearman correlation. The framework consistently identifies weak links and high-noise qubits and remains robust under realistic temporal noise drift.
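A minimal message-passing regressor in plain PyTorch (not the authors' architecture, feature set, or data) conveys the shape of the learning problem: per-qubit features are propagated over the backend coupling graph and mapped to predicted error rates. Layer sizes, feature dimensions, and the random toy data below are placeholders.

```python
# Minimal message-passing regressor sketch in plain PyTorch (not the authors'
# architecture or feature set): per-qubit error rates are predicted from node
# features propagated over the backend coupling graph.
import torch
import torch.nn as nn

class CouplingGraphRegressor(nn.Module):
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)     # per-node (per-qubit) error rate

    def forward(self, x, adj):
        # symmetric normalization A_hat = D^-1/2 (A + I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1)
        a_hat = a / torch.sqrt(d[:, None] * d[None, :])
        h = torch.relu(self.lin1(a_hat @ x))
        h = torch.relu(self.lin2(a_hat @ h))
        return torch.sigmoid(self.head(h)).squeeze(-1)   # rates in (0, 1)

# Toy data: 27 qubits on a random coupling graph, 6 features per qubit
torch.manual_seed(0)
n, feat = 27, 6
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.T) > 0).float()
adj.fill_diagonal_(0)
x = torch.rand(n, feat)
target = torch.rand(n) * 0.05                 # stand-in calibration error rates

model = CouplingGraphRegressor(feat)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x, adj), target)
    loss.backward()
    opt.step()
print("final MSE on the toy data:", float(loss))
```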
Continuous Accumulation of Cold Atoms in an Optical Cavity
This paper demonstrates a method to continuously trap and cool millions of rubidium atoms inside an optical cavity using light-shift manipulation, maintaining them at ultra-cold temperatures below 10 microkelvin in steady state. This enables continuously operating atom-light interfaces that could be used for quantum sensors and processors without requiring time-sequenced operation.
Key Contributions
- Demonstrated continuous accumulation and cooling of atoms in an optical cavity without time-sequenced operation
- Achieved steady-state ensemble of millions of atoms at sub-10 microkelvin temperatures with collective cavity coupling
- Developed light-shift manipulation technique creating spatially varying cooling parameters for efficient atom capture
View Full Abstract
Continuously operating atom-light interfaces represent a key prerequisite for steady-state quantum sensors and efficient quantum processors. Here, we demonstrate continuous accumulation of sub-Doppler-cooled atoms in a shallow intracavity dipole trap, realizing this regime. The key ingredient is a light-shift manipulation that creates spatially varying cooling parameters, enabling efficient capture and accumulation of atoms within a cavity mode. Demonstrated with rubidium atoms, a continuous flux from a source cell is funneled through the magneto-optical trap into the cavity mode, where the atoms are cooled and maintained below $10~μ\text{K}$ in steady state without time-sequenced operation. We characterize the resulting continuously maintained ensemble of millions of atoms and its collective coupling to the cavity field, establishing a route toward continuously operated cavity-QED systems and long-duration atomic and hybrid quantum sensors.
Finite-Time Protocols Stabilize Charging in Noisy Ising Quantum Batteries
This paper studies quantum batteries based on interacting spin chains, showing that gradual charging protocols are more stable than sudden ones. The research reveals that noise can either help or hurt battery performance depending on whether the charging protocol weakly or strongly excites the quantum system.
Key Contributions
- Demonstrates that finite-time charging protocols provide more stable energy storage than sudden charging in noisy quantum batteries
- Shows that noise effects depend critically on charging protocol - weak excitation protocols gain energy but lose efficiency under noise while strong excitation protocols show opposite behavior
View Full Abstract
Reliable charging protocols are crucial for advancing quantum batteries toward practical use. We investigate a transverse-field Ising chain as a quantum battery, focusing on the combined role of qubit interactions in the battery model and finite charging time. This interplay yields smoother and more controllable charging compared to sudden protocols or non-interacting batteries. Introducing stochastic noise reveals a strong dependence on the charging trajectory. Protocols that weakly excite the system gain energy under noise but lose extractable work. In contrast, protocols that strongly excite many modes show the opposite trend: noise reduces stored energy yet improves efficiency, defined as the ratio of ergotropy to stored energy. These findings demonstrate that finite-time ramps stabilize charging and highlight that noise can either hinder or enhance quantum-battery performance depending on the protocol.
Large circuit execution for NMR spectroscopy simulation on NISQ quantum hardware
This paper demonstrates the use of quantum computers to simulate nuclear magnetic resonance (NMR) spectra for molecular systems with up to 34 spins, using advanced error mitigation techniques to run deep quantum circuits on current noisy quantum hardware from IBM and IonQ.
Key Contributions
- Demonstrated quantum simulation of 1D NMR spectra for systems up to 34 spins using NISQ hardware
- Achieved 22x improvement in mean square error through advanced error mitigation and suppression techniques
- Successfully executed deep quantum circuits beyond the classical Liouville limit (32 spins) for NMR simulation
View Full Abstract
With the latest advances in quantum computing technology, we are gradually moving from the noisy intermediate-scale quantum (NISQ) era, characterized by hardware limited in the number of qubits and plagued with quantum noise, to the age of quantum utility, where both the newest hardware and software methods allow for tackling problems which have been deemed difficult or intractable with conventional classical methods. One of these difficult problems is the simulation of one-dimensional (1D) nuclear magnetic resonance (NMR) spectra, a major tool to learn about the structure of molecules, helping the design of new materials or drugs. Using advanced error mitigation and error suppression techniques from Q-CTRL together with the latest commercially available superconducting-qubit quantum computer from IBM and trapped-ion quantum computer from IonQ, we present the quantum Hamiltonian simulation of liquid-state 1D NMR spectra in the high-field regime for spin systems up to 34 spins. Our pipeline has a major impact on the ability to execute deep quantum circuits with the reduction of quantum noise, improving the mean square error by a factor of 22. It allows for the execution of deep quantum circuits and obtaining salient features of the 1D NMR spectra for both 16-spin and 22-spin systems, as well as a 34-spin system, which lies beyond the regime where unrestricted full Liouville-space simulations are practical (32 spins, the Liouville limit). Our work is a step toward near-term quantum utility in NMR spectroscopy.
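For context, the target observable can be reproduced classically for very small spin systems; the sketch below (a classical reference calculation, not the quantum-hardware pipeline described above) computes the exact free-induction decay of a made-up 3-spin liquid-state Hamiltonian with chemical shifts and scalar J-couplings and Fourier transforms it into a 1D spectrum.

```python
# Small classical reference calculation (not the hardware pipeline described
# above): exact free-induction decay of a 3-spin liquid-state NMR Hamiltonian
# with chemical shifts and scalar J-couplings, Fourier transformed to a 1D
# spectrum. All frequencies and couplings are made-up illustrative values.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def op(single, site, n):
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 3
shifts = [120.0, 45.0, -80.0]                 # chemical shifts in Hz (assumed)
J = {(0, 1): 7.0, (1, 2): 12.0, (0, 2): 2.0}  # scalar couplings in Hz (assumed)

H = sum(2 * np.pi * shifts[i] * op(sz, i, n) for i in range(n))
for (i, j), Jij in J.items():
    H += 2 * np.pi * Jij * sum(op(a, i, n) @ op(a, j, n) for a in (sx, sy, sz))

Ix = sum(op(sx, i, n) for i in range(n))
Iplus = sum(op(sx, i, n) + 1j * op(sy, i, n) for i in range(n))

evals, evecs = np.linalg.eigh(H)
dt, npts, t2 = 1e-3, 4096, 0.3                # dwell time (s), points, decay time (s)
times = np.arange(npts) * dt
rho0 = Ix / np.trace(Ix @ Ix).real            # high-temperature deviation density matrix

fid = np.empty(npts, dtype=complex)
for k, t in enumerate(times):
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    fid[k] = np.trace(U @ rho0 @ U.conj().T @ Iplus) * np.exp(-t / t2)

spectrum = np.abs(np.fft.fftshift(np.fft.fft(fid)))
freqs = np.fft.fftshift(np.fft.fftfreq(npts, d=dt))
peaks = freqs[np.argsort(spectrum)[-6:]]
print("six largest spectrum bins at frequencies (Hz):", np.sort(np.round(peaks, 1)))
```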
From few- to many-body physics: Strongly dipolar molecular Bose-Einstein condensates and quantum fluids
This paper explores the unique properties of Bose-Einstein condensates made from molecules with strong dipolar interactions, analyzing which molecular species and experimental conditions could enable the study of exotic quantum many-body phenomena. The authors assess how existing theoretical frameworks can be extended to handle these strongly interacting dipolar systems.
Key Contributions
- Analysis of parameter regimes achievable with current molecular cooling techniques for strongly dipolar BECs
- Extension of beyond mean-field theories from weakly to strongly dipolar systems
- Identification of molecular species best suited for exploring exotic quantum many-body states
View Full Abstract
Recent advances in molecular cooling have enabled the realization of strongly dipolar Bose-Einstein condensates (BECs) of molecules, and BECs of many different molecular species may become experimentally accessible in the near future. Here, we explore the unique properties of such BECs and the new insights they may offer into dipolar quantum fluids and many-body physics. We explore which parameter regimes can realistically be achieved using currently available experimental techniques, discuss how to implement these techniques, and outline which molecular species are particularly well suited to explore exotic new states of matter. We further determine how state-of-the-art beyond mean-field theories, originally developed for weakly dipolar magnetic gases, can be pushed to their limits and beyond, and what other long-standing questions in the field of dipolar physics may realistically come within reach using molecular systems.
Nonlocal contributions to ergotropy: A thermodynamic perspective
This paper investigates how quantum nonlocality contributes to ergotropy (the maximum extractable work from quantum systems) in bipartite systems. The authors develop mathematical tools to quantify nonlocal contributions to work extraction and show that nonlocality always enhances extractable work in non-interacting systems, while its effect in interacting systems depends on the specific system structure.
Key Contributions
- Introduction of a quantifier for nonlocal contributions to extractable work in bipartite quantum systems
- Derivation of closed-form expressions in terms of Schmidt coefficients and establishment of direct relationship between ergotropy and correlations for non-interacting Hamiltonians
View Full Abstract
Nonlocality is a defining feature of quantum mechanics and has long served as a key indicator of quantum resources since the formulation of Bell's inequalities. Identifying the contribution of nonlocality to extractable work remains a central problem in quantum thermodynamics. We address this by introducing a quantifier of nonlocal contributions to extractable work in bipartite systems. We show that closed-form expressions for this quantity can be calculated in terms of the Schmidt coefficients. Further, for strictly non-interacting Hamiltonians, a direct relationship between ergotropy and correlations is established. Our results reveal that nonlocal resources invariably enhance extractable work under non-interacting Hamiltonians, while in the presence of interactions, their contribution can either increase or diminish depending on the structure of the state and the Hamiltonian.
Multimode Jahn-Teller Effect in Negatively Charged Nitrogen-Vacancy Center in Diamond
This paper studies how vibrations in diamond's nitrogen-vacancy (NV) centers affect their electronic properties when excited, using quantum mechanical calculations. The research identifies which vibrational patterns are most important and how they couple with the electronic states of these quantum defects.
Key Contributions
- Identified dominant vibrational modes contributing to Jahn-Teller distortions in NV centers using first-principles DFT calculations
- Provided theoretical understanding of vibronic coupling mechanisms that affect dephasing and relaxation processes critical for quantum applications
View Full Abstract
The multimode Jahn-Teller (JT) effect in a negatively charged nitrogen-vacancy (NV) center in its excited state is studied by first-principles calculations based on density functional theory (DFT). The activation pathways of the JT distortions are analyzed to elucidate and quantify the contributions of different vibrational modes. The results show that the dominant vibrational modes in the JT distortions are closely related to the phonon sideband observed in two-dimensional electronic spectroscopy (2DES), consistent with ab initio molecular dynamics (AIMD) simulation results. Our calculations provide a new way to understand the origin and the mechanism of the vibronic coupling of the system. The obtained dominant vibrational modes coupled to the NV center and their interactions with electronic states provide new insights into dephasing, relaxation and optically driven quantum effects, and are critical for applications in quantum information, magnetometry and sensing.
Graphene-Insulator-Superconductor junctions as thermoelectric bolometers
This paper develops a new type of bolometer (sensitive detector) using a graphene-insulator-superconductor junction that converts thermal radiation directly into voltage without needing external power. The device shows promising performance for detecting very weak signals in cosmological experiments.
Key Contributions
- Design of passive thermoelectric bolometer using graphene-insulator-superconductor tunnel junction
- Development of novel noise expressions accounting for temperature differences across junction sides
- Demonstration of ultra-low noise equivalent power performance suitable for large-array cosmological applications
View Full Abstract
We design a superconducting thermoelectric bolometer made of a Graphene-Insulator-Superconductor tunnel junction. Our detector has the advantage of being passive, as it directly transduces input power to a voltage without the need to modulate an external bias. We characterize the device via numerical simulation of the full nonlinear thermal dynamical model of the junction, considering heating of both sides of the junction. While estimating noise contributions, we derive novel expressions that account for the temperatures of both sides differing from the bath temperature. Numerical simulations show a Noise Equivalent Power ${\rm NEP}\sim 4\times 10^{-17}\,{\rm W}/\sqrt{\rm Hz}$ for an input power of $\sim10^{-16}\,{\rm W}$, a response time of $\tau_{\rm th}\sim 200\,{\rm ns}$, and an integration time to obtain a Signal-to-Noise Ratio ${\rm SNR}=1$ of $\tau_{\rm SNR=1}\sim 100\,\mu{\rm s}$ for an input power of $\sim 10^{-13}\,{\rm W}$. Therefore, the device shows promise for large-array cosmological experiment applications, also considering its advantages for fabrication and heat budget.
A Compact Incubation Platform for Long-Term Cultivation of Biological Samples for Nitrogen-Vacancy Center Widefield Microscopy
This paper presents a specialized incubation system that allows biological cells to be grown and studied for extended periods using nitrogen-vacancy center quantum sensors in diamond. The platform maintains proper cell growth conditions while enabling long-term magnetic field imaging of living cells.
Key Contributions
- Development of a compact incubation platform specifically designed for NV center widefield magnetometry
- Demonstration of 90-hour continuous cell cultivation with successful magnetic field imaging of immunomagnetically labeled cells
View Full Abstract
Nitrogen-vacancy (NV) centers in diamond provide a versatile quantum sensing platform for biological imaging through magnetic field detection, offering unlimited photostability and the ability to perform long-term observations without photobleaching or phototoxicity. However, conventional stage-top incubators are incompatible with the unique requirements for NV widefield magnetometry to study cellular dynamics. Here, we present a purpose-built compact incubation platform that maintains precise environmental control of temperature, CO$_2$ atmosphere, and humidity while accommodating the complex constraints of NV widefield microscopy. The system employs a 3D-printed biocompatible chamber with integrated heating elements, temperature control, and humidified gas flow to create a stable physiological environment directly on the diamond sensing surface. We demonstrate sustained viability and proliferation of HT29 colorectal cancer cells over 90 hours of continuous incubation, with successful magnetic field imaging of immunomagnetically labeled cells after extended cultivation periods. This incubation platform enables long-term cultivation and real-time monitoring of biological samples on NV widefield magnetometry platforms, opening new possibilities for studying dynamic cellular processes using quantum sensing technologies.
Super-Heisenberg-limited Sensing via Collective Subradiance in Waveguide QED
This paper demonstrates how arrays of closely-spaced quantum emitters coupled to nanophotonic waveguides can create extremely narrow optical resonances that enable ultra-precise sensing. The technique achieves sensitivity that scales much better than traditional quantum sensing methods, potentially enabling detection of minute changes in atomic positions.
Key Contributions
- Derived universal N^-3 scaling law for subradiant decay rates in waveguide QED systems
- Demonstrated N^6 scaling of quantum Fisher information for sensing applications
- Showed robustness of super-Heisenberg sensing under realistic disorder conditions
View Full Abstract
We explore the quantum-metrological potential of subwavelength-spaced emitter arrays coupled to a one-dimensional nanophotonic waveguide. In this system, strong dipole--dipole interactions profoundly modify the collective optical response, leading to the emergence of ultranarrow subradiant resonances. Through an eigenmode analysis of the effective non-Hermitian Hamiltonian, we derive a universal scaling law for the decay rate of the most subradiant state, which exhibits an $ N^{-3} $ scaling with even-odd oscillatory behavior in the deep-subwavelength regime. This scaling is directly observable in the single-photon scattering spectrum, enabling the detection of minute changes in atomic separation with a figure of merit that scales as $ N^3 $. The quantum Fisher information (QFI) scales as $N^6$ and can be closely approached by measuring spectral shifts near the steepest slope of the most subradiant resonance. These enhancements remain robust under realistic positional disorder, confirming that dipole--dipole-engineered subradiance provides a viable resource for quantum metrology. Our work bridges many-body waveguide quantum electrodynamics and high-precision sensing, opening a route toward scalable quantum sensors on integrated nanophotonic platforms.
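The headline scalings quoted in the abstract can be collected in one place. The constant prefactors are dropped, and the symbols $\Gamma_0$ (single-emitter decay rate into the waveguide), $d$ (interatomic separation), and $M$ (number of measurement repetitions) are our notation rather than the paper's; the final line is simply the standard quantum Cramer-Rao bound applied to the quoted Fisher-information scaling.

```latex
% Scaling relations quoted in the abstract (prefactors omitted; notation ours).
\begin{align}
  \Gamma_{\min} &\propto \Gamma_0\, N^{-3}
    && \text{(most subradiant decay rate)}, \\
  F_Q(d) &\propto N^{6}
    && \text{(quantum Fisher information for the separation } d\text{)}, \\
  \delta d &\geq \frac{1}{\sqrt{M\, F_Q(d)}} \propto \frac{N^{-3}}{\sqrt{M}}
    && \text{(Cramer-Rao bound on the estimation error)}.
\end{align}
```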
Adiabatic-Inspired Hybrid Quantum-Classical Methods for Molecular Ground State Preparation
This paper develops and benchmarks hybrid quantum-classical algorithms for finding molecular ground states in quantum chemistry, introducing a novel G-AQC-PQC method that combines adiabatic quantum computing principles with variational approaches. The researchers test these methods on beryllium hydride molecules and show improved performance over conventional Variational Quantum Eigensolver approaches.
Key Contributions
- Unified framework for adiabatically-inspired quantum algorithms
- Novel G-AQC-PQC hybrid method with improved performance over conventional VQE
- Comprehensive benchmarking of quantum chemistry algorithms on BeH2 molecular system
View Full Abstract
Quantum computing promises to efficiently and accurately solve many important problems in quantum chemistry which elude classical solvers, such as the electronic structure problem of highly correlated materials. Two leading methods in solving the ground state problem are the Variational Quantum Eigensolver (VQE) and Adiabatic Quantum Computing (AQC) algorithms. VQE often struggles with convergence due to the energy landscape being highly non-convex and the existence of barren plateaux, and implementing AQC is beyond the capabilities of current quantum devices as it requires deep circuits. Adiabatically-inspired algorithms aim to fill this gap. In this paper, we first present a unifying framework for these algorithms and then benchmark the following methods: the Adiabatically Assisted VQE (AAVQE) (Garcia-Saez and Latorre (2018)), the Variational Adiabatic Quantum Computing (VAQC) (Harwood et al (2022)), and the Adiabatic Quantum Computing with Parametrized Quantum Circuits (AQC-PQC) (Kolotouros et al (2025)) algorithms. Second, we introduce a novel hybrid approach termed G-AQC-PQC, which generalizes the AQC-PQC method, and combines adiabatic-inspired initialization with the low-memory BFGS optimizer, reducing the quantum computational cost of the method. Third, we compare the accuracy of the methods for chemistry applications using the beryllium hydride molecule (BeH$_2$). We compare the approaches across a number of different choices (ansätze types, depth, discretization steps, initial Hamiltonian, adiabatic schedules and method used). Our results show that the G-AQC-PQC outperforms conventional VQE. We further discuss limitations such as the zero-gradient problem and identify regimes where adiabatically-inspired methods offer a tangible advantage for near-term quantum chemistry applications.
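To make the shared "adiabatically assisted" idea concrete, the sketch below interpolates between an easy Hamiltonian and a target Hamiltonian, $H(s) = (1-s)H_0 + sH_1$, and warm-starts a variational minimization at each step of the schedule. This is not the paper's G-AQC-PQC algorithm: the single-qubit Hamiltonians, the one-parameter ansatz, and the 11-step schedule are purely illustrative choices.

```python
# Illustrative AAVQE-style loop (not the paper's G-AQC-PQC): minimize the energy of
# H(s) = (1-s)*H0 + s*H1 along a discretized schedule, warm-starting each step with
# the optimal parameter from the previous one.  Single-qubit toy problem.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = -X                      # easy initial Hamiltonian; ground state is |+>
H1 = -0.8 * Z + 0.4 * X      # illustrative target Hamiltonian

def ansatz(theta):
    """|psi(theta)> = RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta, H):
    psi = ansatz(theta[0])
    return np.real(psi.conj() @ H @ psi)

theta = np.array([np.pi / 2])            # exact ground state of H0
for s in np.linspace(0.0, 1.0, 11):      # discretized adiabatic schedule
    Hs = (1 - s) * H0 + s * H1
    res = minimize(energy, theta, args=(Hs,), method="BFGS")
    theta = res.x                        # warm start for the next step

exact = np.min(np.linalg.eigvalsh(H1))
print(f"variational energy: {res.fun:.6f}, exact ground energy: {exact:.6f}")
```

The warm start along the schedule is what distinguishes this family of methods from a cold-started VQE run on $H_1$ alone.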
Ground State Energy via Adiabatic Evolution and Phase Measurement for a Molecular Hamiltonian on an Ion-Trap Quantum Computer
This paper demonstrates using an ion-trap quantum computer to find the ground state energy of the H3+ molecule by preparing quantum states through adiabatic evolution and measuring energies via quantum phase estimation. The researchers identified leakage errors as the primary obstacle to achieving chemical accuracy, showing this limits performance more than other types of quantum noise.
Key Contributions
- Demonstrated end-to-end molecular ground state energy estimation on ion-trap hardware without classical computational off-loading
- Identified leakage errors as the dominant noise source limiting chemical accuracy in quantum chemistry applications
- Showed that coherent and incoherent noise have minimal impact compared to leakage errors in this molecular simulation context
View Full Abstract
Estimating molecular ground-state energies is a central application of quantum computing, requiring both the preparation of accurate quantum states and efficient energy readout. Understanding the effect of hardware noise on these experiments is crucial to distinguish errors that have low impact, errors that can be mitigated, and errors that must be reduced at the hardware level. We ran a state preparation and energy measurement protocol on an ion-trap quantum computer, without any non-scalable off-loading of computational tasks to classical computers, and show that leakage errors are the main obstacle to chemical accuracy. More specifically, we apply adiabatic state preparation to prepare the ground state of a six-qubit encoding of the H$_3^+$ molecule and extract its energy using a noise-resilient variant of iterative quantum phase estimation. Our results improve upon the classical Hartree-Fock energy. Analyzing the effect of hardware noise on the result, we find that while coherent and incoherent noise have little influence, the hardware results are mainly impacted by leakage errors. Absent leakage errors, noisy numerical simulations show that with our experimental settings we would have achieved close to chemical accuracy, even shot noise included. These insights highlight the importance of targeting leakage suppression in future algorithm and hardware development.
Capacity and SKR tradeoff in coexisting classical and CV-QKD metropolitan-reach optical links
This paper optimizes quantum key distribution (QKD) and classical data transmission coexistence in metropolitan fiber networks by studying how channel placement and power levels affect performance tradeoffs. The researchers show that placing quantum channels at the edge of frequency bands with optimized guardbands significantly improves quantum communication rates while minimizing impact on classical data capacity.
Key Contributions
- Power-regime-dependent guardband optimization for quantum-classical channel coexistence
- Demonstration that band-edge quantum channel placement achieves 108% SKR improvement with reduced classical capacity penalty compared to band-center placement
View Full Abstract
We demonstrate power-regime-dependent guardband optimization for quantum-classical coexistence in metropolitan DWDM. Quantum channel at band-edge with 100-150 GHz guardbands achieves 108% SKR improvement at -1.5 dBm/ch, incurring 3.4% capacity loss versus 6.8% for band-center.
Hybrid acousto-optical spin control in quantum dots
This paper proposes a new method to control electron spins in quantum dots using a combination of acoustic waves and optical fields, overcoming the typically weak coupling between sound and spins in semiconductors. The technique enables high-fidelity spin rotation by using optical fields to break spin conservation rules, allowing acoustic waves to drive spin transitions.
Key Contributions
- Novel hybrid acousto-optical method for controlling quantum dot spins that overcomes weak phonon-spin coupling
- Demonstration of high-fidelity spin rotation (99.9%) using feasible experimental parameters
- Integration-ready approach for connecting acoustic, optical, and microwave quantum systems on-chip
View Full Abstract
Mechanical degrees of freedom very weakly couple to spins in semiconductors. The inefficient coupling between phonons and single electron spins in semiconductor quantum dots (QDs) hinders their integration into on-chip acoustically coupled quantum hybrid systems. We propose a hybrid acousto-optical spin control method that circumvents this problem and effectively introduces acoustic spin rotation to QDs, complementing their rich couplings with external fields and quantum registers. We show that combining continuous-wave detuned optical coupling to a trion state and acoustic modulation results in spin rotation around an axis defined by the acoustic field. The optical field breaks spin conservation, allowing phonons to drive transitions between disrupted spin states when at resonance with the Zeeman frequency. Our method is compatible with pulse sequences that mitigate quasi-static noise effects, which makes trion recombination the primary limitation to gate fidelity under cooled nuclear-spin conditions. Numerical simulations indicate that spin rotation fidelity can be very high, if the trion lifetime is long and Zeeman splitting is sufficiently large, with a currently feasible 50~ns lifetime and 44~GHz splitting giving 99.9\% fidelity. Applying our advancement could enable acoustic QD spin state transfer to diverse solid-state systems and transduction between acoustic, optical, and microwave domains, all within an on-chip integration-ready setting.
Geometric quantum thermodynamics: A fibre bundle approach
This paper develops a mathematical framework using fiber bundle geometry to describe quantum thermodynamics, treating thermodynamic variables as gauge theories similar to fundamental physics theories. The authors construct geometric structures that unify information theory with quantum thermal properties through gauge transformations.
Key Contributions
- Construction of principal fiber bundle for quantum thermodynamics
- Identification of two distinct geometric structures in thermodynamic gauge theory
- Mathematical unification of thermodynamics with fundamental physics theories using geometric language
View Full Abstract
Classical thermodynamics is a theory based on coarse-graining, meaning that the thermodynamic variables arise from discarding information related to the microscopic features of the system at hand. In quantum mechanics, however, where one has a high degree of control over microscopic systems, information theory plays an important role in describing the thermal properties of quantum systems. Recently, a new approach has been proposed in the form of a quantum thermodynamic gauge theory, where the notion of redundant information arises from a group of physically motivated gauge transformations called the thermodynamic group. In this work, we explore the geometrical structure of quantum thermodynamics. Particularly, we do so by explicitly constructing the relevant principal fibre bundle. We then show that there are two distinct (albeit related) geometric structures associated with the gauge theory of quantum thermodynamics. In this way, we express thermodynamics in the same mathematical (geometric) language as the fundamental theories of physics. Finally, we discuss how the geometric and topological properties of these structures may help explain fundamental properties of thermodynamics.
Steering Alternative Realities through Local Quantum Memory Operations
This paper proposes a theoretical protocol called 'reality steering' where an observer could potentially switch between different quantum measurement outcomes by erasing memory information locally, without affecting the environment. The authors show this would be unverifiable within standard quantum mechanics but might be possible with nonlinear quantum operations.
Key Contributions
- Introduction of reality steering protocol for accessing alternative quantum measurement outcomes
- Demonstration that such transitions would be unverifiable within standard quantum mechanics
- Analysis of constraints and requirements for multi-reality navigation using quantum information theory
View Full Abstract
Quantum measurement resolves a superposition into a definite outcome by correlating it with an observer's memory -- a reality register. While the global quantum state remains coherent, the observer's local reality becomes singular and definite. This work introduces reality steering, a protocol that allows an observer to probabilistically access a different reality already supported by the initial quantum state, without reversing decoherence on the environment. The mechanism relies on locally erasing the 'which-outcome' information stored in the observer's brain. Here, 'local' means operations confined to the observer's memory, excluding the environment, which may be cosmically large. Reality steering nevertheless faces intrinsic constraints: successful navigation requires coherent participation from the observer's counterparts across the relevant branches, and any transition is operationally indistinguishable from non-transition. After arriving in a new reality, all memory records are perfectly consistent with that reality, leaving no internal evidence that a switch occurred. This makes conscious confirmation impossible within standard quantum mechanics. We show that nonlinear operations beyond the standard theory could, in principle, enable verifiable and deliberate navigation. Our results shift multi-reality exploration from philosophical speculation toward a concrete -- though fundamentally constrained -- quantum-informational framework.
Another 100 Years of Quantum Interpretation?
This paper argues that instead of focusing solely on interpretations of quantum mechanics, we should evaluate how different interpretations might help us discover more fundamental theories like quantum gravity. The author proposes assessing quantum interpretations based on their potential to guide us toward unifying quantum mechanics with other fundamental physics.
Key Contributions
- Proposes evaluating quantum interpretations by their heuristic value for finding more fundamental theories
- Questions the traditional separation between interpreting quantum mechanics versus deriving it from quantum gravity theories
View Full Abstract
Interpretation is not the only way to explain a theory's success, form and features, nor is it the only way to solve problems we see with a theory. This can also be done by giving a reductive explanation of the theory, by reference to a newer, more accurate, and/or more fundamental theory. We are seeking a theory of quantum gravity, a more fundamental theory than both quantum mechanics and general relativity; yet, while this theory is supposed to explain general relativity, it has not typically been thought necessary, or able, to explain quantum mechanics, a task instead assigned to interpretation. Here, I question why this is. I also present a new way of assessing the various interpretations of quantum mechanics, in terms of their heuristic and unificatory potential in helping us find a more fundamental theory.
Universal Structure of Nonlocal Operators for Deterministic Navigation and Geometric Locking
This paper develops a geometric framework that simplifies the search for optimal nonlocal quantum operators by reducing the problem to two angular parameters, and identifies two distinct types of quantum phase transitions based on whether optimal measurement configurations change drastically or remain locked during transitions.
Key Contributions
- Universal geometric framework reducing nonlocal operator optimization to two angular parameters
- Classification of quantum phase transitions into geometric criticality vs geometric locking regimes
- Deterministic navigation method for Bell experiment optimization
View Full Abstract
We establish a universal geometric framework that transforms the search for optimal nonlocal operators from a combinatorial black box into a deterministic predict-verify operation. We discover that the principal eigenvalue governing nonlocality is rigorously dictated by a low-dimensional manifold parameterized by merely two fundamental angular variables, $\theta$ and $\varphi$, whose symmetry leads to further simplification. This geometric distillation establishes a precise mapping connecting external control parameters directly to optimal measurement configurations. Crucially, a comparative analysis of the geometric angles against the principal eigenvalue spectrum, including its magnitude, susceptibility, and nonlocal gap, reveals a fundamental dichotomy in quantum criticality. While transitions involving symmetry sector rotation manifest as geometric criticality with drastic operator reorientation, transitions dominated by strong anisotropy exhibit geometric locking, where the optimal basis remains robust despite clear signatures of phase transitions in the spectral indicators. This distinction offers a novel structural classification of quantum phase transitions and provides a precision navigation chart for Bell experiments.
General Quantum Instruction for Communication via Maximally Entangled $n$-Qubit States
This paper develops a generalized quantum communication protocol that can transmit n classical bits of information using entangled n-qubit quantum systems, extending the concept of superdense coding to arbitrary message lengths. The researchers tested their protocol on IBM quantum hardware with messages up to 10 bits long, finding that performance decreases with longer messages due to hardware limitations.
Key Contributions
- First explicit and scalable quantum circuit construction for n-bit superdense coding
- Experimental validation on real quantum hardware (IBM-Torino) for message lengths up to 10 bits
View Full Abstract
This study presents a generalized $n$-bit superdense coding protocol that enables the transmission of $n$ classical bits of information using an entangled $n$-qubit quantum system and the transmission of $n-1$ qubits. The protocol involves creating a maximally entangled $n$-qubit state, encoding the classical message with Pauli-Z and Pauli-X gates, and then transmitting and decoding the message via quantum communication, quantum operations, and measurements. The key novelty of this work lies in the proposed $n$-bit encoding routine, which, to the best of our knowledge, is the first explicit and scalable recipe for constructing quantum circuits for $n$-bit superdense coding, minimizing errors through a simple circuit design. The protocol was tested on real quantum hardware using Qiskit 2.0 and the IBM-Torino quantum computer for message lengths of 4, 6, 8, and 10 bits. Results show that success rates decrease as message length, circuit depth, and gate count increase, largely due to increased Pauli-X gate usage for messages with more "1" bits. Strategies to improve performance include sending messages in shorter segments and advances in qubit coherence and gate fidelity. This work offers a practical and easily scalable quantum communication instruction with potential applications in quantum networks and communication systems.
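The paper's explicit $n$-bit encoding routine is not reproduced here, but the textbook $n=2$ case conveys the structure: prepare a shared Bell pair, encode two classical bits with Pauli gates on the sender's qubit alone, and decode in the Bell basis. The sketch below uses standard Qiskit primitives; the bit-to-gate assignment is one common convention, not necessarily the one used in the paper.

```python
# Textbook two-bit superdense coding as a baseline for the generalized n-bit protocol
# discussed above (the paper's n-bit encoding routine is not reproduced here).
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def superdense_coding(bits: str) -> QuantumCircuit:
    """Encode a 2-bit message; measurements are omitted so the state can be inspected."""
    qc = QuantumCircuit(2)
    qc.h(0)             # shared Bell pair between sender (q0) and receiver (q1)
    qc.cx(0, 1)
    if bits[0] == "1":  # encoding acts only on the sender's qubit
        qc.x(0)
    if bits[1] == "1":
        qc.z(0)
    qc.cx(0, 1)         # receiver's Bell-basis decoding
    qc.h(0)
    return qc

for message in ["00", "01", "10", "11"]:
    state = Statevector.from_instruction(superdense_coding(message))
    # the only nonzero outcome (printed as q1 q0) reproduces the two-bit message
    print(message, state.probabilities_dict())
```

On hardware one would append measurements on both qubits; the decrease in success rate with message length reported in the abstract comes from repeating this encode/decode structure across many more qubits and gates.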
Engineering Anisotropic Rabi Model in Circuit QED
This paper demonstrates a new way to control quantum interactions in circuit quantum electrodynamics by engineering an anisotropic Rabi model, allowing researchers to tune between different types of qubit-resonator interactions and enabling new quantum measurement capabilities.
Key Contributions
- Implementation of tunable anisotropic Rabi model in circuit QED with geometric control
- Development of novel quantum measurement capabilities including dispersive shift cancellation and Purcell-suppressed readout
View Full Abstract
The anisotropic Rabi model (ARM), which features tunable Jaynes-Cummings (JC) and anti-Jaynes-Cummings (AJC) interactions, has remained challenging to realize fully. We present a circuit QED implementation that provides static control over the ARM parameters. By simultaneously coupling a qubit to a resonator's voltage and current antinodes, we geometrically tune the interaction from pure JC to pure AJC. This control enables novel quantum measurement capabilities, including dispersive shift cancellation and Purcell-suppressed readout. Our work establishes a direct platform for exploring the ARM's full parameter space and its applications in quantum information processing.
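For reference, the anisotropic Rabi model discussed above is conventionally written with independent rotating (JC) and counter-rotating (AJC) couplings. The form below is the standard textbook Hamiltonian with $\hbar = 1$, in our notation rather than the paper's specific circuit-QED parameterization.

```latex
% Standard form of the anisotropic Rabi model (conventional notation, not the paper's):
% omega_r: resonator frequency, omega_q: qubit frequency,
% g_JC: rotating (Jaynes-Cummings) coupling, g_AJC: counter-rotating coupling.
\begin{equation}
  H_{\mathrm{ARM}} = \omega_r\, a^{\dagger} a
    + \frac{\omega_q}{2}\, \sigma_z
    + g_{\mathrm{JC}} \left( a\, \sigma_+ + a^{\dagger} \sigma_- \right)
    + g_{\mathrm{AJC}} \left( a\, \sigma_- + a^{\dagger} \sigma_+ \right).
\end{equation}
% g_AJC = 0 recovers the Jaynes-Cummings limit, g_JC = 0 the pure anti-JC limit,
% and g_JC = g_AJC the isotropic quantum Rabi model.
```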
Dipolar quantum gases: from 3D to Low dimensions
This paper reviews dipolar quantum gases, which are atoms and molecules with strong magnetic or electric dipole moments that interact through long-range forces. The authors compare how these systems behave in 3D versus 2D configurations, exploring exotic quantum phases like supersolids and quantum droplets.
Key Contributions
- Comprehensive review of dipolar quantum gas behavior across different dimensions
- Analysis of novel 2D phenomena including angle-dependent phase transitions and potential supersolidity
- Identification of key challenges for future experimental work on strongly dipolar 2D systems
View Full Abstract
Dipolar quantum gases, encompassing atoms and molecules with significant dipole moments, exhibit unique long-range and anisotropic dipole-dipole interactions (DDI), distinguishing them from systems dominated by short-range contact interactions. This review explores their behavior across dimensions, focusing on magnetic atoms in quasi-2D in comparison to 3D. In 3D, strong DDI leads to phenomena like anisotropic superfluidity, quantum droplets stabilized by Lee-Huang-Yang corrections, and supersolid states with density modulations. In 2D, we discuss a new scenario where DDI induces angle-dependent Berezinskii-Kosterlitz-Thouless transitions and potential supersolidity, as suggested by recent experimental realizations of strongly dipolar systems in quasi-2D geometries. We identify key challenges for future experimental and theoretical work on strongly dipolar 2D systems. The review concludes by highlighting how these unique 2D dipolar systems could advance fundamental research as well as simulate novel physical phenomena.
Analogue gravity with Bose-Einstein condensates
This paper provides a pedagogical introduction to analogue gravity using Bose-Einstein condensates, where sound waves in these quantum fluids can mimic the behavior of fields in curved spacetimes, including phenomena like black hole physics and Hawking radiation.
Key Contributions
- Pedagogical framework for understanding analogue gravity in Bose-Einstein condensates
- Theoretical description of quantum field behavior in curved spacetime analogues
- Survey of black-hole superradiance and analogue Hawking radiation with numerical methods
View Full Abstract
Analogue gravity explores how collective excitations in condensed matter systems can reproduce the behavior of fields in curved spacetimes. An important example is the acoustic black holes that can occur for sound in a moving fluid. In these lecture notes, we focus on atomic Bose-Einstein condensates (BECs), quantum fluids that provide an interesting platform for analogue gravity studies thanks to their accurate theoretical description, remarkable experimental control, and ultralow temperatures that allow the quantum nature of sound to emerge. We give a pedagogical introduction to analogue black holes and the theoretical description of BECs and their elementary excitations, which behave as quantum fields in curved spacetimes. We then apply these tools to survey the current understanding of black-hole superradiance and analogue Hawking radiation, including explicit examples and numerical methods.
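The key ingredient behind the analogy is the Bogoliubov dispersion of excitations on a condensate: at long wavelengths it is linear, so phonons behave like a massless field on an effective acoustic geometry. The standard expressions are reproduced below in notation that may differ from the lecture notes.

```latex
% Bogoliubov dispersion of excitations on a uniform condensate (standard result):
% c = sqrt(g n / m) is the sound speed and xi = hbar/(m c) the healing length used here.
\begin{equation}
  \hbar\omega(k)
  = \sqrt{\,\hbar^2 c^2 k^2 + \left(\frac{\hbar^2 k^2}{2m}\right)^{2}}
  = \hbar c k \sqrt{1 + \frac{k^2 \xi^2}{4}}
  \;\approx\; \hbar c k \qquad (k\xi \ll 1),
\end{equation}
% so long-wavelength phonons propagate like a massless field on an effective acoustic
% metric fixed by the condensate density and flow velocity; an acoustic horizon forms
% where the flow speed exceeds c.
```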
Quantum Machine Learning for Climate Modelling
This paper demonstrates using quantum neural networks to predict cloud cover in climate models, showing performance similar to classical neural networks but with more consistent learning patterns. The work explores quantum machine learning applications for improving Earth system models used in climate change prediction.
Key Contributions
- First application of quantum neural networks to climate modeling and cloud cover parameterization
- Demonstration that QNNs achieve comparable performance to classical NNs with same parameter count but with more consistent learning
View Full Abstract
Quantum machine learning (QML) is making rapid progress, and QML-based models hold the promise of quantum advantages such as potentially higher expressivity and generalizability than their classical counterparts. Here, we present work on using a quantum neural net (QNN) to develop a parameterization of cloud cover for an Earth system model (ESM). ESMs are needed for predicting and projecting climate change, and can be improved in hybrid models incorporating both traditional physics-based components as well as machine learning (ML) models. We show that a QNN can predict cloud cover with a performance similar to a classical NN with the same number of free parameters and significantly better than the traditional scheme. We also analyse the learning capability of the QNN in comparison to the classical NN and show that, at least for our example, QNNs learn more consistent relationships than classical NNs.
Cosmic Lockdown: When Decoherence Saves the Universe from Tunneling
This paper studies how quantum decoherence affects the behavior of quantum fields in cosmological settings, particularly how environmental interactions can prevent quantum tunneling between different vacuum states. The authors find that once decoherence occurs, quantum fields become 'locked' into their current state through a quantum Zeno effect, preventing transitions that could destabilize the universe.
Key Contributions
- Derivation of Markovian and non-Markovian master equations for cosmological quantum field dynamics
- Discovery of 'cosmic lockdown' mechanism where decoherence suppresses vacuum tunneling via quantum Zeno effect
View Full Abstract
We investigate how quantum decoherence influences the tunneling dynamics of quantum fields in cosmological spacetimes. Specifically, we study a scalar field in an asymmetric double well potential during inflation, coupled to environmental degrees of freedom provided either by heavy spectator fields or by short-wavelength modes as they cross outside the Hubble scale. This setup enables a systematic derivation of both Markovian and non-Markovian master equations, along with their stochastic unravelings, which we solve numerically. We find that, while decoherence is essential for suppressing quantum interference between vacua, its impact on the relative vacuum populations is limited. Fields heavier than the Hubble scale relax adiabatically toward the true vacuum with high probability, while lighter fields exhibit non-adiabatic enhancements of false-vacuum occupation. Once the system has decohered, quantum tunneling between vacua becomes strongly suppressed, effectively locking the system into the stochastically selected local minimum. This "cosmic lockdown" mechanism is a manifestation of the quantum Zeno effect: environmental monitoring stabilizes enhanced false-vacuum occupation for light fields by preventing them from tunneling.
Acoustic horizons and the Hawking effect in polariton fluids of light
This paper develops polariton fluids of light as quantum simulators to study curved spacetime physics, specifically acoustic horizons and Hawking radiation effects. The work provides both theoretical frameworks and experimental toolkits for investigating quantum field theory phenomena in these controllable quantum fluid systems.
Key Contributions
- Theoretical mapping of polariton fluids to relativistic field theories for simulating curved spacetime physics
- Experimental toolkit including phase-imprinted flows and coherent pump-probe spectroscopy for studying Hawking effects
- Framework for extracting quantum correlations, entanglement, and squeezing from acoustic horizon measurements
View Full Abstract
These lecture notes develop polariton fluids of light as programmable simulators of quantum fields on tailored curved spacetimes, with emphasis on acoustic horizons and the Hawking effect. After introducing exciton-polariton physics in semiconductor microcavities, we detail the theoretical tools to study the mean field and the quantum hydrodynamics of this driven-dissipative quantum system. We derive the mapping to relativistic field theories and cast horizon physics as a pseudounitary stationary scattering problem. We present the Gaussian optics circuit that describes observables and fixes detection weights for the horizon modes in near- and far-field measurements. We provide a practical experimental toolkit (phase-imprinted flows, coherent pump-probe spectroscopy, balanced and homodyne detection) and a step-by-step workflow to extract amplification, quadrature squeezing, and entanglement among correlations. Finally, we discuss the potential of this platform to investigate open questions in quantum field theory in curved spacetime, such as near horizon effects and quasinormal modes, as well as other phenomena universal to rotating geometries, from rotational superradiance to dynamical instabilities. We further outline the interplay between rotational superradiance and the Hawking effect, proposing to spatially resolve measurements as a roadmap for `dumb hole spectroscopy' and the study of entanglement dynamics in curved spacetimes.
Discrete time crystals enabled by Floquet strong Hilbert space fragmentation
This paper studies discrete time crystals (DTCs) in periodically driven quantum spin chains, showing how these exotic phases of matter that break time-translation symmetry can be stabilized without disorder through a mechanism called Floquet strong Hilbert space fragmentation. The researchers demonstrate that the DTC order exhibits robust subharmonic responses and can persist for exponentially long times in larger systems.
Key Contributions
- Demonstrated disorder-free mechanism for stabilizing discrete time crystals using Floquet strong Hilbert space fragmentation
- Revealed exponential scaling of DTC lifetime with system size and identified approximate conservation laws in the Floquet operator
- Uncovered multiple-period response with beating dynamics from coherent interplay of π-pairs in small systems
View Full Abstract
Discrete time crystals (DTCs) are non-equilibrium phases of matter that break discrete time-translation symmetry and are characterized by a robust subharmonic response in periodically driven quantum systems. Here, we explore the DTC in a disorder-free, periodically kicked XXZ spin chain, which is stabilized by Floquet strong Hilbert space fragmentation. We numerically show the period-doubling response of the conventional DTC order, and uncover a multiple-period response with beating dynamics due to the coherent interplay of multiple $\pi$-pairs in the Floquet spectrum of small-size systems. The lifetime of the DTC order is independent of the driving frequency and shows a power-law dependence on the ZZ interaction strength. It also grows exponentially with the system size, as a hallmark of the strong fragmentation inherent to the Floquet model. We analytically reveal the approximate conservation of the magnetization and domain-wall number in the Floquet operator for the emergent strong fragmentation, which is consistent with numerical results for the dimensionality ratio of symmetry subspaces. The rigidity and phase regime of the DTC order are identified through finite-size scaling of the Floquet-spectrum-averaged mutual information, as well as via dynamical probes. Our work establishes Floquet Hilbert space fragmentation as a disorder-free mechanism for sustaining nontrivial temporal orders in out-of-equilibrium quantum many-body systems.
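The standard numerical diagnostic of DTC order, a stroboscopic magnetization that flips sign every driving period, can be illustrated on a small kicked XXZ chain by exact evolution. The sketch below uses generic, illustrative parameters and does not implement the paper's specific fragmentation-stabilized model; it only shows the subharmonic readout one would look for.

```python
# Illustrative period-doubling check for a periodically kicked XXZ chain
# (small system, dense exact evolution; parameters are illustrative, not the paper's).
import numpy as np
from scipy.linalg import expm

L = 8                      # number of spins, kept small for dense linear algebra
Jz, Jxy = 1.0, 0.2         # ZZ and XX+YY couplings (illustrative)
eps = 0.03                 # deviation from a perfect global pi-pulse
T = 1.0                    # driving period
n_periods = 40

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def op(single, site):
    """Embed a single-site operator at `site` in the L-spin Hilbert space."""
    mats = [single if k == site else I2 for k in range(L)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

SX = [op(sx, j) for j in range(L)]
SY = [op(sy, j) for j in range(L)]
SZ = [op(sz, j) for j in range(L)]

# XXZ Hamiltonian with open boundaries
H = sum(Jz * SZ[j] @ SZ[j + 1] + Jxy * (SX[j] @ SX[j + 1] + SY[j] @ SY[j + 1])
        for j in range(L - 1))

# One Floquet period: free XXZ evolution followed by an imperfect global pi-pulse about x
U_free = expm(-1j * H * T)
U_kick = expm(-1j * np.pi * (1 - eps) * sum(SX))
U_F = U_kick @ U_free

# Start from the Neel state |0101...>
psi = np.zeros(2 ** L, dtype=complex)
psi[int("01" * (L // 2), 2)] = 1.0

for n in range(n_periods):
    psi = U_F @ psi
    m1 = np.real(psi.conj() @ SZ[0] @ psi)
    print(f"period {n + 1:3d}:  <S^z_1> = {m1:+.3f}")   # sign alternating each period signals period doubling
```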
Towards Explainable Quantum AI: Informing the Encoder Selection of Quantum Neural Networks via Visualization
This paper introduces XQAI-Eyes, a visualization tool that helps developers choose better encoders for Quantum Neural Networks by allowing them to compare how classical data features are represented in quantum states. The tool addresses the current trial-and-error approach to encoder selection by providing insights into how different encoders affect the ability to distinguish between data classes.
Key Contributions
- Development of XQAI-Eyes visualization tool for quantum encoder analysis
- Establishment of systematic principles for quantum encoder selection based on pattern preservation and feature mapping
View Full Abstract
Quantum Neural Networks (QNNs) represent a promising fusion of quantum computing and neural network architectures, offering speed-ups and efficient processing of high-dimensional, entangled data. A crucial component of QNNs is the encoder, which maps classical input data into quantum states. However, choosing suitable encoders remains a significant challenge, largely due to the lack of systematic guidance and the trial-and-error nature of current approaches. This process is further impeded by two key challenges: (1) the difficulty in evaluating encoded quantum states prior to training, and (2) the lack of intuitive methods for analyzing an encoder's ability to effectively distinguish data features. To address these issues, we introduce a novel visualization tool, XQAI-Eyes, which enables QNN developers to compare classical data features with their corresponding encoded quantum states and to examine the mixed quantum states across different classes. By bridging classical and quantum perspectives, XQAI-Eyes facilitates a deeper understanding of how encoders influence QNN performance. Evaluations across diverse datasets and encoder designs demonstrate XQAI-Eyes's potential to support the exploration of the relationship between encoder design and QNN effectiveness, offering a holistic and transparent approach to optimizing quantum encoders. Moreover, domain experts used XQAI-Eyes to derive two key practices for quantum encoder selection, grounded in the principles of pattern preservation and feature mapping.
High-Order Harmonic Generation with Beyond-Semiclassical Emitter Dynamics: A Strong-Field Quantum Optical Heisenberg Picture Approach
This paper develops a new theoretical approach using the Heisenberg picture to study high-order harmonic generation (HHG) in strong laser fields, going beyond standard semiclassical descriptions to include quantum effects from field fluctuations. The work shows how quantum properties like light squeezing scale with the number of emitting atoms and provides a more accurate framework for understanding quantum optical effects in HHG experiments.
Key Contributions
- Development of a Heisenberg picture approach for quantum-optical HHG that captures beyond-semiclassical corrections
- Discovery that squeezing increases with emitter number while photon statistics become classical in the many-emitter limit
- Demonstration that quantum fluctuations significantly enhance light squeezing in HHG
View Full Abstract
Quantum-optical descriptions of strong-field processes have attracted significant attention in recent years. Typically, the theoretical modeling has been conducted in the Schrödinger picture, where results are only obtainable under certain approximations, while, in contrast, the Heisenberg picture has remained relatively unexplored. In this work, we develop an accurately controlled perturbative expansion of the time-evolution operator in the Heisenberg picture and derive beyond-semiclassical corrections to the emitter dynamics due to the coupling to the quantized electromagnetic field, capturing effects of the quantum fluctuations present in the latter. We focus on high-order harmonic generation (HHG), where the approach is accurate in parameter regimes of current interest and it gives closed-form expressions for key observables. This formulation not only simplifies numerical calculations compared to the Schrödinger-picture approach but also provides a clear correspondence between nonclassical features of the emitted light and the underlying induced dynamics of the generating medium including quantum fluctuations. Moreover, the Heisenberg framework naturally yields scaling relations with the number of independent emitters, enabling us to assess whether nonclassical behavior should persist under typical experimental conditions involving large emitter ensembles. Interestingly, we find that the degree of squeezing increases with the number of emitters, whereas the photon statistics approaches a classical Poissonian distribution in the many-emitter limit. We also find that the beyond-semiclassical emitter dynamics significantly enhances the degree of squeezing of the emitted light. Our work advances the theoretical understanding of quantum-optical HHG and introduces an accessible and well-controlled framework to describe realistic experiments.
Quantum-Inspired Approach to Analyzing Complex System Dynamics
This paper develops a method that uses mathematical tools from quantum information theory (like density matrices and fidelity measures) to analyze complex systems with multiple interacting variables over time. The authors apply this quantum-inspired framework to study climate data, specifically tracking how global temperature patterns have changed compared to historical baselines.
Key Contributions
- Development of quantum information-inspired framework for multivariate time series analysis using density matrices
- Application to climate data analysis showing quantification of temperature anomaly patterns relative to historical baselines
View Full Abstract
We present a quantum information-inspired framework for analyzing complex systems through multivariate time series. In this approach the system's state is encoded into a density matrix, providing a compact representation of higher-order correlations and dependencies. This formulation enables precise quantification of the relative influence among time series, tracking of their response to external perturbations and also the definition of a recovery timescale without need for dimensional reduction. By leveraging tools such as fidelity from quantum information theory, our method naturally captures higher-order co-fluctuations beyond pairwise statistics, offering a holistic characterization of resilience and similarity in high-dimensional dynamics. We validate this approach on synthetic data generated by a 9-dimensional modified Lorenz-96 model and demonstrate its utility on real-world climate data, analyzing global temperature anomalies across nine regions, quantifying the dissimilarity of each 288-month time window up to July 2025 relative to the 1850-1874 baseline period.
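The abstract does not spell out the encoding, so the sketch below shows one simple realization of the idea: map each time window of a multivariate series to a unit-trace positive semidefinite matrix (here, its normalized covariance) and compare windows with the Uhlmann fidelity. The encoding choice, the synthetic data, and the variable names are ours and may differ from the paper's construction.

```python
# One simple realization of the density-matrix encoding sketched in the abstract
# (the paper's exact encoding may differ): each window of a multivariate time series
# becomes a normalized covariance matrix, and windows are compared with the Uhlmann
# fidelity  F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2.
import numpy as np
from scipy.linalg import sqrtm

def window_to_density_matrix(window):
    """window: (timesteps, variables) array -> unit-trace positive semidefinite matrix."""
    x = window - window.mean(axis=0)
    C = x.T @ x                      # positive semidefinite covariance-like matrix
    return C / np.trace(C)

def fidelity(rho, sigma):
    sr = sqrtm(rho)
    inner = sqrtm(sr @ sigma @ sr)
    return float(np.real(np.trace(inner)) ** 2)

rng = np.random.default_rng(0)
baseline = rng.normal(size=(288, 9))     # stand-in for a 288-month, 9-region baseline window
later = rng.normal(size=(288, 9))
later[:, 0] *= 3.0                       # toy perturbation: inflate one region's variance

rho = window_to_density_matrix(baseline)
sigma = window_to_density_matrix(later)
print("fidelity(baseline, later) =", fidelity(rho, sigma))   # 1 would mean identical second-order statistics
```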
Quantum Fisher Information Measure in a Strongly Confined Harmonic Paul Trap Lattice System
This paper studies how information-theoretic measures like Fisher information and Shannon entropy change when a single ion is trapped in a Paul trap with an added optical lattice. The researchers show that these measures track the curvature of the effective potential, providing a framework for precision quantum control of trapped ions.
Key Contributions
- Demonstrates that information-theoretic measures track the effective potential curvature in modified Paul traps
- Provides a framework for precision quantum control using trap frequency and optical lattice parameters as independent tuning mechanisms
View Full Abstract
In this work, we examine how the informational and structural properties of a single ion respond to controlled changes of the effective potential in a Paul trap modified by an optical lattice. We consider the ground state of the system, where confinement is strongest, and by treating the trap frequency $\omega$ and lattice parameter $\kappa$ as independent tuning parameters, we show that Fisher information, Shannon entropy, and Fisher-Shannon complexity track the curvature of the effective potential $\omega_{\mathrm{eff}}=\omega^2\,\sqrt{1-\kappa}$. The $\omega$ and $\kappa$ sweeps confirm that the curvature, and not the choice of control parameter, determines the behaviour of the system. This gives the trapped-ion platform a clear advantage: the curvature can be engineered without altering the harmonic characteristics of the system. The interplay between $\omega$ and $\kappa$ thus provides a practical route to precision quantum control and offers an information-theoretic framework for experiments that probe confinement, quantization scale, and information flow in engineered ion traps.
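For a harmonic ground state the quoted information measures have simple closed forms that make the dependence on confinement explicit. The expressions below are textbook results for a 1D Gaussian ground state, written with an effective oscillation frequency $\Omega$ in our own notation (related to, but not necessarily identical with, the curvature quantity $\omega_{\mathrm{eff}}$ defined in the abstract).

```latex
% Textbook information measures for the 1D harmonic-oscillator ground state of a
% particle of mass m oscillating at an effective frequency Omega (our notation).
\begin{align}
  |\psi_0(x)|^2 &= \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/2\sigma^2},
  \qquad \sigma^2 = \frac{\hbar}{2 m \Omega},\\
  I_x &= \frac{1}{\sigma^2} = \frac{2 m \Omega}{\hbar}
  && \text{(position-space Fisher information)},\\
  S_x &= \tfrac{1}{2}\ln\!\left(2\pi e\, \sigma^2\right)
  && \text{(position-space Shannon entropy)},
\end{align}
% so tighter confinement (larger Omega) increases the Fisher information and lowers
% the Shannon entropy, which is the qualitative trend the abstract describes.
```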
A sine-square deformation approach to quantum critical points in one-dimensional systems
This paper develops a method using sine-square deformation (SSD) to identify quantum phase transitions in one-dimensional systems by analyzing when local observables become translationally symmetric. The authors test this approach on antiferromagnetic Ising chains and propose experimental implementation using Rydberg atom arrays.
Key Contributions
- Development of SSD-based method for determining quantum critical points through translational symmetry analysis
- Demonstration that quantum phase boundaries can be accurately identified using smaller system sizes than traditional methods
- Proposal for experimental implementation using Rydberg atom arrays in optical tweezers
View Full Abstract
We propose a method to determine the quantum phase boundaries of one-dimensional systems using sine-square deformation (SSD). Based on the proposition, supported by several exactly solved cases though not proven in full generality, that ``if a one-dimensional system is gapless, then the expectation value of any local observable in the ground state of the Hamiltonian with SSD exhibits translational symmetry in the thermodynamic limit," we determine the quantum critical point as the location where a local observable becomes site-independent, identified through finite-size scaling analysis. As case studies, we consider two models: the antiferromagnetic Ising chain in mixed transverse and longitudinal magnetic fields with nearest-neighbor and long-range interactions. We calculate the ground state of these Hamiltonians with SSD using the density-matrix renormalization-group algorithm and evaluate the local transverse magnetization. For the nearest-neighbor model, we show that the quantum critical point can be accurately estimated by our procedure with systems of up to 84 sites, or even smaller, in good agreement with results from the literature. For the long-range model, we find that the phase boundary between the antiferromagnetic and paramagnetic phases is slightly shifted relative to the nearest-neighbor case, leading to a reduced region of antiferromagnetic order. Moreover, we propose an experimental procedure to implement the antiferromagnetic $J_1$-$J_2$ Ising couplings with SSD using Rydberg atom arrays in optical tweezers, which can be achieved within a very good approximation. Because multiple independent scaling conditions naturally emerge, our approach enables precise determination of quantum critical points and possibly even the extraction of additional critical phenomena, such as critical exponents, from relatively small system sizes.
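Schematically, sine-square deformation rescales each local term of an open chain by a smooth sine-squared envelope; the generic form is shown below, with the caveat that the exact argument offset varies between conventions and may differ from the one used in the paper.

```latex
% Schematic form of sine-square deformation for a chain of N sites with local terms h_j
% (conventions for the argument offset vary between references):
\begin{equation}
  H_{\mathrm{SSD}} \;=\; \sum_{j} f_j\, h_j ,
  \qquad
  f_j \;=\; \sin^{2}\!\left[\frac{\pi}{N}\left(j - \tfrac{1}{2}\right)\right],
\end{equation}
% so the couplings vanish smoothly at the chain ends and are maximal at the center;
% for gapless systems the proposition above asserts that local expectation values
% of the deformed ground state become translationally uniform in the thermodynamic limit.
```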
QBism, Polishing Some Points
This paper explores QBism (Quantum Bayesianism), an interpretation of quantum mechanics that treats quantum states and probabilities as subjective beliefs of individual agents rather than objective properties of nature. The authors clarify three key tenets of QBism and use them to contrast their interpretation with other quantum mechanical interpretations like Bohr's Copenhagen interpretation and the Many Worlds interpretation.
Key Contributions
- Clarifies three fundamental tenets of QBism regarding the subjective nature of quantum probabilities and measurement outcomes
- Provides detailed comparisons between QBism and other quantum interpretations including responses to Bell inequality violations and Wigner's argument
View Full Abstract
QBism pursues the real by first eliminating the elements of quantum theory too fragile to be ontologies on their own. Thereafter, it seeks an "ontological lesson" from whatever remains. Here, we explore this program by highlighting three tenets of QBism. First, the Born Rule is a normative statement. It is about the decision-making behavior any individual agent should strive for, not a descriptive "law of nature." Second, all probabilities, including all quantum probabilities, are so subjective they never tell nature what to do. This includes probability-1 assignments. Quantum states thus have no "ontic hold" on the world, which implies a more radical kind of indeterminism in quantum theory than other interpretations understand. Third, quantum measurement outcomes just are personal experiences for the agent gambling upon them. Thus all quantum measurement outcomes are local in the sense of the agent enacting them. Through these tenets, we explain four points better than previously: 1) how QBism contrasts with Bohr's concern over unambiguous language, 2) how QBism contrasts with the Everett interpretation, 3) how QBism understands the meaning of Bell inequality violations, and 4) how QBism responds to Wigner's "suspended animation" argument. Finally, we consider the ontological lesson of the tenets and ask what it might mean for the next one hundred years of quantum theory and humankind more generally.
Group Theory and Representation Theory for Identical Particles
This paper provides a comprehensive mathematical foundation covering group theory and representation theory for identical particles in quantum systems. It serves as educational material bridging condensed matter physics, quantum chemistry, and quantum computing by developing the mathematical framework for describing identical particle systems in both first and second quantization.
Key Contributions
- Comprehensive development of group theory and representation theory for identical particles
- Mathematical framework connecting condensed matter, quantum chemistry, and quantum computing
- Full treatment of first and second quantization schemes for identical particle systems
View Full Abstract
Few, if any, applications of quantum technology are as widely known as the quantum simulation of quantum matter. Consequently, many interesting questions have been sparked at the intersection of condensed matter, quantum chemistry, and quantum computing. Given the common mathematical foundation of these subjects, we walk through the necessary group theory and representation theory serving as background in all of these fields. Our discussion will include a full development of the mathematics of identical particles and the mechanics of describing systems of identical particles in both first and second quantization schemes. This chapter is an offshoot of a larger work that provides a graduate-level introduction to quantum information science. This chapter is being released separately because it is not explicitly focused on quantum information. It has grown beyond a short digression into a full-fledged journey into the symmetries and representations of identical particles that we invite you, the reader, to join.
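As a pointer to the kind of material the chapter covers, the defining algebra of second quantization for identical particles consists of the standard (anti)commutation relations, reproduced here in conventional notation.

```latex
% Standard second-quantization algebra for identical particles (textbook relations):
\begin{align}
  \text{bosons:}   &\quad [\,b_i, b_j^{\dagger}\,] = \delta_{ij}, \quad [\,b_i, b_j\,] = 0,\\
  \text{fermions:} &\quad \{\,a_i, a_j^{\dagger}\,\} = \delta_{ij}, \quad \{\,a_i, a_j\,\} = 0,
\end{align}
% encoding the symmetry (bosons) or antisymmetry (fermions) of many-particle states
% under exchange that the permutation-group treatment develops in first quantization.
```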
Integrability Breaking and Coherent Dynamics in Hermitian and Non-Hermitian Spin Chains with Long-Range Coupling
This paper studies one-dimensional quantum spin chains with long-range interactions, investigating how these systems transition from predictable (integrable) to chaotic behavior. The researchers discover that even in chaotic regimes, some special quantum states called 'many-body scars' resist thermalization and maintain quantum coherence.
Key Contributions
- Demonstration that long-range coupling acts as universal control parameter for integrability-to-chaos transition
- Discovery of robust quantum many-body scars that survive strong non-Hermitian perturbations
- Universal mechanism connecting long-range and non-Hermitian effects in quantum ergodicity
View Full Abstract
Unraveling the mechanisms of ergodicity breaking in complex quantum systems is a central pursuit in nonequilibrium physics. In this work, we investigate a one-dimensional spin model featuring a tunable long-range hopping term, $H_{n}$, which introduces nonlocal interactions and bridges the gap between Hermitian and non-Hermitian regimes. Through a systematic analysis of level-spacing statistics, Krylov complexity, and entanglement entropy, we demonstrate that $H_{n}$ acts as a universal control parameter driving the transition from integrability to quantum chaos. Specifically, increasing the strength of $H_{n}$ induces a crossover from Poissonian to Gaussian Orthogonal Ensemble statistics in the Hermitian limit, and similarly triggers chaotic dynamics in the non-Hermitian case. Most remarkably, despite the onset of global chaos, we identify a tower of exact nonthermal eigenstates that evade thermalization. These states survive as robust quantum many-body scars, retaining low entanglement and coherent dynamics even under strong non-Hermitian perturbations. Our findings reveal a universal mechanism by which long-range and non-Hermitian effects reshape quantum ergodicity, offering new pathways for preserving quantum coherence in complex many-body systems.
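The level-spacing diagnostic mentioned in the abstract is commonly summarized by the mean ratio of consecutive spacings, with reference values of roughly 0.386 for Poisson (integrable) and 0.53 for GOE (chaotic) statistics. The sketch below computes this ratio for generic surrogates (uncorrelated levels and a random real symmetric matrix); it does not reproduce the paper's spin-chain spectra.

```python
# Standard level-spacing-ratio diagnostic used to distinguish integrable (Poisson)
# from chaotic (GOE) spectra.  The surrogates below are illustrative, not the paper's
# Hamiltonian.  Reference values: <r> ~ 0.386 (Poisson), <r> ~ 0.53 (GOE).
import numpy as np

def mean_r(levels):
    """Mean ratio of consecutive spacings, r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1})."""
    s = np.diff(np.sort(levels))
    s = s[s > 0]
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(1)

# Poisson surrogate: levels with uncorrelated exponential spacings
poisson_levels = np.cumsum(rng.exponential(size=4000))

# GOE surrogate: eigenvalues of a real symmetric random matrix
A = rng.normal(size=(1500, 1500))
goe_levels = np.linalg.eigvalsh((A + A.T) / 2)

print("Poisson-like <r> =", round(mean_r(poisson_levels), 3))   # close to 0.386
print("GOE-like     <r> =", round(mean_r(goe_levels), 3))       # close to 0.53
```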
Frozen Gaussian sampling algorithms for simulating Markovian open quantum systems in the semiclassical regime
This paper develops a new computational algorithm called Frozen Gaussian Sampling (FGS) to efficiently simulate open quantum systems that interact with their environment. The method overcomes major computational challenges in the semiclassical regime and provides new insights into the long-term behavior of quantum systems in complex potential landscapes.
Key Contributions
- Development of an FGS algorithm whose sampling error is independent of the semiclassical parameter, avoiding the prohibitive scaling of grid-based methods
- Numerical evidence for steady states in strongly non-harmonic potentials, where rigorous analytical results are lacking
View Full Abstract
Simulating Markovian open quantum systems in the semiclassical regime poses a grand challenge for computational physics, as the highly oscillatory nature of the dynamics imposes prohibitive resolution requirements on traditional grid-based methods. To overcome this barrier, this paper introduces an efficient Frozen Gaussian Sampling (FGS) algorithm based on the Wigner-Fokker-Planck phase-space formulation. The proposed algorithm exhibits two transformative advantages. First, for the computation of physical observables, its sampling error is independent of the semiclassical parameter $\varepsilon$, thus fundamentally breaking the prohibitive computational scaling faced by grid methods in the semiclassical limit. Second, its mesh-free nature entirely eliminates the boundary-induced instabilities that constrain long-time grid-based simulations. Leveraging these capabilities, the FGS algorithm serves as a powerful investigatory tool for exploring the long-time behavior of open quantum systems. Specifically, we provide compelling numerical evidence for the existence of steady states in strongly non-harmonic potentials, a regime where rigorous analytical results are currently lacking.
Quantifying electron-nuclear spin entanglement dynamics in central-spin systems using one-tangles
This paper develops methods to quantify and optimize entanglement between electron spins and nuclear spins in solid-state quantum systems like quantum dots and color centers. The researchers use a mathematical tool called 'one-tangling power' to understand how these systems can be controlled to maximize entanglement and maintain quantum coherence.
Key Contributions
- Generalized the one-tangling power metric to central-spin systems with arbitrary nuclear spin values beyond spin-1/2
- Developed procedures to identify parameter regimes for maximal electron-nuclear entanglement in quantum dots
- Provided exact methods for computing electron spin dephasing times and identifying coherence-sustaining conditions
View Full Abstract
Optically-active solid-state systems such as self-assembled quantum dots, rare-earth ions, and color centers in diamond and SiC are promising candidates for quantum network, computing, and sensing applications. Although the nuclei in these systems naturally lead to electron spin decoherence, they can be repurposed, if they are controllable, as long-lived quantum memories. Prior work showed that a metric known as the one-tangling power can be used to quantify the entanglement dynamics of sparse systems of spin-1/2 nuclei coupled to color centers in diamond and SiC. Here, we generalize these findings to a wide range of electron-nuclear central-spin systems, including those with spin > 1/2 nuclei, such as in III-V quantum dots (QDs), rare-earth ions, and some color centers. Focusing on the example of an (In)GaAs QD, we offer a procedure for pinpointing physically realistic parameter regimes that yield maximal entanglement between the central electron and surrounding nuclei. We further harness knowledge of naturally-occurring degeneracies and the tunability of the system to generate maximal entanglement between target subsets of spins when the QD electron is subject to dynamical decoupling. We also leverage the one-tangling power as an exact and immediate method for computing QD electron spin dephasing times with and without the application of spin echo sequences, and use our analysis to identify coherence-sustaining conditions within the system.
A non-linear quantum neural network framework for entanglement engineering
This paper develops a quantum neural network architecture that uses non-linear activation functions to efficiently generate highly entangled quantum states across multiple qubits. The researchers demonstrate that their approach outperforms traditional linear quantum circuits for creating entanglement in both ideal and noisy quantum devices up to 20 qubits.
Key Contributions
- Novel quantum neural network architecture with non-linear activation functions that outperforms linear variational circuits for entanglement generation
- Scalable variational framework for engineering multipartite entanglement on near-term quantum devices with demonstrated advantages up to 20 qubits
View Full Abstract
Multipartite entanglement is a key resource for quantum technologies, yet its scalable generation in noisy quantum devices remains challenging. Here, we propose a low-depth quantum neural network architecture with linear scaling, inspired by memory-enabled photonic components, for variational entanglement engineering. The network incorporates physically motivated non-linear activation functions, enhancing expressivity beyond linear variational circuits at fixed depth. By Monte Carlo sampling over circuit topologies, we identify architectures that efficiently generate highly entangled pure states, approaching the GHZ limit, and demonstrate a clear advantage of non-linear networks up to 20 qubits. For the noisy scenario, we employ the experimentally accessible Meyer-Wallach global entanglement as a surrogate optimization cost and certify entanglement using bipartite negativity. For mixed states of up to ten qubits, the optimized circuits generate substantial entanglement across both symmetric and asymmetric bipartitions. These results establish an experimentally motivated and scalable variational framework for engineering multipartite entanglement on near-term quantum devices, highlighting the combined role of non-linearity and circuit topology.
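The Meyer-Wallach global entanglement used as the surrogate cost above has a closed form for an $n$-qubit pure state, $Q = 2\,(1 - \frac{1}{n}\sum_k \mathrm{Tr}\,\rho_k^2)$, where $\rho_k$ is the reduced state of qubit $k$. A minimal NumPy sketch (not the authors' code) evaluating $Q$ for a pure state vector:

```python
import numpy as np

def meyer_wallach_q(psi):
    """Meyer-Wallach global entanglement Q = 2 * (1 - mean single-qubit purity)."""
    psi = np.asarray(psi, dtype=complex).ravel()
    n = int(np.log2(psi.size))
    purities = []
    for k in range(n):
        # Move qubit k to the front and flatten the rest: psi -> 2 x 2**(n-1) matrix
        m = np.moveaxis(psi.reshape([2] * n), k, 0).reshape(2, -1)
        rho_k = m @ m.conj().T                   # single-qubit reduced density matrix
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    return 2.0 * (1.0 - np.mean(purities))

# 4-qubit GHZ state: every single-qubit marginal is maximally mixed, so Q = 1
ghz = np.zeros(16); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print(meyer_wallach_q(ghz))            # ~1.0
print(meyer_wallach_q(np.eye(16)[0]))  # product state |0000>, Q = 0
```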
Q-IRIS: The Evolution of the IRIS Task-Based Runtime to Enable Classical-Quantum Workflows
This paper presents Q-IRIS, a hybrid runtime system that coordinates execution of both classical and quantum computing tasks across different hardware backends. The system integrates existing quantum programming frameworks to enable asynchronous scheduling and execution of quantum workloads, demonstrated through quantum circuit cutting techniques that break large circuits into smaller, more manageable pieces.
Key Contributions
- Integration of IRIS task-based runtime with quantum programming frameworks for hybrid classical-quantum workflows
- Demonstration of quantum circuit cutting for improved simulator throughput and reduced queueing in heterogeneous systems
View Full Abstract
Emerging HPC systems are increasingly heterogeneous and are starting to include quantum accelerators, motivating runtimes that can coordinate between classical and quantum workloads. We present a proof-of-concept hybrid execution framework integrating the IRIS asynchronous task-based runtime with the XACC quantum programming framework via the Quantum Intermediate Representation Execution Engine (QIR-EE). IRIS orchestrates multiple programs written in the quantum intermediate representation (QIR) across heterogeneous backends (including multiple quantum simulators), enabling concurrent execution of classical and quantum tasks. Although not a performance study, we report measurable outcomes through the successful asynchronous scheduling and execution of multiple quantum workloads. To illustrate practical runtime implications, we decompose a four-qubit circuit into smaller subcircuits through a process known as quantum circuit cutting, reducing per-task quantum simulation load and demonstrating how task granularity can improve simulator throughput and reduce queueing behavior -- effects directly relevant to early quantum hardware environments. We conclude by outlining key challenges for scaling hybrid runtimes, including coordinated scheduling, classical-quantum interaction management, and support for diverse backend resources in heterogeneous systems.
Capturing reduced-order quantum many-body dynamics out of equilibrium via neural ordinary differential equations
This paper uses neural networks to study quantum many-body systems out of equilibrium, showing that neural ODEs can reproduce the reduced dynamics when the two- and three-particle cumulants are strongly correlated, but fail when that correlation is weak or absent. The work identifies when simplified approaches to modeling complex quantum systems suffice versus when more sophisticated memory-dependent methods are needed.
Key Contributions
- Demonstrated that neural ODEs can reproduce quantum many-body dynamics only when the two- and three-particle cumulants are strongly correlated
- Identified correlation magnitude as a predictor for when cumulant expansion methods succeed or fail
- Provided a diagnostic tool for mapping applicability regimes of quantum simulation methods
View Full Abstract
Out-of-equilibrium quantum many-body systems exhibit rapid correlation buildup that underlies many emerging phenomena. Exact wave-function methods that can describe this buildup scale exponentially with particle number; simpler mean-field approaches neglect essential two-particle correlations. The time-dependent two-particle reduced density matrix (TD2RDM) formalism offers a middle ground by propagating the two-particle reduced density matrix (2RDM) and closing the BBGKY hierarchy with a reconstruction of the three-particle cumulant. But the validity and existence of time-local reconstruction functionals ignoring memory effects remain unclear across different dynamical regimes. We show that a neural ODE model trained on exact 2RDM data (no dimensionality reduction) can reproduce its dynamics without any explicit three-particle information -- but only in parameter regions where the Pearson correlation between the two- and three-particle cumulants is large. In the anti-correlated or uncorrelated regime, the neural ODE fails, indicating that no simple time-local functional of the instantaneous two-particle cumulant can capture the evolution. The magnitude of the time-averaged three-particle-correlation buildup appears to be the primary predictor of success: For a moderate correlation buildup, both neural ODE predictions and existing TD2RDM reconstructions are accurate, whereas stronger values lead to systematic breakdowns. These findings pinpoint the need for memory-dependent kernels in the three-particle cumulant reconstruction for the latter regime. Our results place the neural ODE as a model-agnostic diagnostic tool that maps the regime of applicability of cumulant expansion methods and guides the development of non-local closure schemes. More broadly, the ability to learn high-dimensional RDM dynamics from limited data opens a pathway to fast, data-driven simulation of correlated quantum matter.
Quantum oracles give an advantage for identifying classical counterfactuals
This paper demonstrates that quantum oracles can identify classical causal relationships and answer counterfactual questions that are impossible to determine using classical oracles, even with unlimited queries. The authors show this advantage exists for determining causal parameters in structural causal models when classical variables are encoded in quantum systems.
Key Contributions
- Proved quantum oracles can identify all causal parameters in structural causal models while classical oracles cannot
- Demonstrated a quantum advantage for identifying two-way joint counterfactuals p(Y_x=y, Y_{x'}=y'), which is impossible with any number of classical oracle queries
- Extended quantum oracle theory beyond traditional problems like Deutsch-Jozsa to causal inference applications
- Showed the advantage exists even in some classical theories like Spekkens' toy theory, questioning whether non-classical features are necessary
View Full Abstract
We show that quantum oracles provide an advantage over classical oracles for answering classical counterfactual questions in causal models, or equivalently, for identifying unknown causal parameters such as distributions over functional dependences. In structural causal models with discrete classical variables, observational data and even ideal interventions generally fail to answer all counterfactual questions, since different causal parameters can reproduce the same observational and interventional data while disagreeing on counterfactuals. Using a simple binary example, we demonstrate that if the classical variables of interest are encoded in quantum systems and the causal dependence among them is encoded in a quantum oracle, coherently querying the oracle enables the identification of all causal parameters -- hence all classical counterfactuals. We generalize this to arbitrary finite cardinalities and prove that coherent probing 1) allows the identification of all two-way joint counterfactuals p(Y_x=y, Y_{x'}=y'), which is not possible with any number of queries to a classical oracle, and 2) provides tighter bounds on higher-order multi-way counterfactuals than with a classical oracle. This work can also be viewed as an extension to traditional quantum oracle problems such as Deutsch--Jozsa to identifying more causal parameters beyond just, e.g., whether a function is constant or balanced. Finally, we raise the question of whether this quantum advantage relies on uniquely non-classical features like contextuality. We provide some evidence against this by showing that in the binary case, oracles in some classically-explainable theories like Spekkens' toy theory also give rise to a counterfactual identifiability advantage over strictly classical oracles.
Matter-Mediated Entanglement in Classical Gravity: Suppression by Binding Potentials and Localization
This paper critiques a recent claim that spatially separated masses can become entangled through classical gravity, arguing instead that any observed entanglement comes from quantum matter exchange channels. The authors show that realistic binding potentials in macroscopic objects exponentially suppress this matter-mediated entanglement, making it negligible at relevant distances.
Key Contributions
- Demonstrates that matter-mediated entanglement between separated masses is suppressed by binding potentials in realistic systems
- Clarifies that observed entanglement indicates matter exchange rather than quantum vs classical nature of gravity
View Full Abstract
Aziz and Howl [Nature 646 (2025)] argue that two spatially separated masses can become entangled even when gravity is treated as a classical field, by invoking higher-order "virtual-matter" processes in a QFT description of matter, which is non-LOCC (local operations and classical communication). We point out that the relevant mechanism is not intrinsically field-theoretic, but is essentially a quantum tunneling/evanescent matter channel, which is already captured within ordinary quantum mechanics. More importantly, the microscopic constituents of realistic macroscopic objects are bound and localized by strong potentials, introducing a large internal energy scale that suppresses coherent propagation between distant bodies. Including such binding/localization generically yields an exponential suppression, rendering the matter-mediated contribution negligible at the macroscopic separations relevant to gravitational-entanglement proposals. Consequently, the entanglement identified by AH diagnoses the presence of a coherent matter-exchange channel rather than the classical or quantum nature of gravity, and it does not undermine LOCC-based witness arguments in realistic bound-matter platforms.
Matrix Product State Simulation of Reacting Shear Flows
This paper develops a new computational method using matrix product states (a quantum physics technique) to simulate turbulent reactive flows in combustion systems. The approach achieves significant memory reductions (30-99.99%) compared to traditional direct numerical simulation while maintaining accuracy.
Key Contributions
- Adaptation of matrix product state tensor networks from quantum physics to computational fluid dynamics
- Demonstration of 30% memory reduction with potential for up to 99.99% compression in turbulent reactive flow simulations
View Full Abstract
Direct numerical simulation (DNS) of turbulent reactive flows has been the subject of significant research interest for several decades. Accurate prediction of the effects of turbulence on the rate of reactant conversion, and the subsequent influence of chemistry on hydrodynamics remain a challenge in combustion modeling. The key issue in DNS is to account for the wide range of temporal and spatial physical scales that are caused by complex interactions of turbulence and chemistry. In this work, a new computational methodology is developed that is shown to provide a viable alternative to DNS. The framework is the matrix product state (MPS), a form of tensor network (TN) as used in computational many body physics. The MPS is a well-established ansatz for efficiently representing many types of quantum states in condensed matter systems, allowing for an exponential compression of the required memory compared to exact diagonalization methods. Due to the success of MPS in quantum physics, the ansatz has been adapted to problems outside its historical domain, notably computational fluid dynamics. Here, the MPS is used for computational simulation of a shear flow under non-reacting and nonpremixed chemically reacting conditions. Reductions of 30% in memory are demonstrated for all transport variables, accompanied by excellent agreement with DNS. The ansatz accurately captures all pertinent flow physics such as reduced mixing due to exothermicity and compressibility, and the formation of eddy shocklets at high Mach numbers. A priori analysis of DNS data at higher Reynolds numbers shows compressions as large as 99.99% for some of the transport variables. This level of compression is encouraging and promotes the use of MPS for simulations of complex turbulent combustion systems.
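To give a concrete sense of the compression being described, the sketch below encodes a field sampled on $2^n$ grid points as an MPS (tensor train) by sequential truncated SVDs, one standard route from flow variables to MPS form. It is a toy illustration under assumed parameters (the `chi_max` truncation and the sample profile), not the paper's solver:

```python
import numpy as np

def field_to_mps(field, chi_max=16, tol=1e-10):
    """Compress a field on 2**n grid points into MPS cores via sequential truncated SVD."""
    n = int(np.log2(field.size))
    cores, mat = [], np.asarray(field, dtype=float).reshape(1, -1)
    for _ in range(n - 1):
        rows, cols = mat.shape
        u, s, vt = np.linalg.svd(mat.reshape(rows * 2, cols // 2), full_matrices=False)
        keep = min(chi_max, int(np.sum(s > tol * s[0])))   # discard negligible singular values
        cores.append(u[:, :keep].reshape(rows, 2, keep))
        mat = s[:keep, None] * vt[:keep]
    cores.append(mat.reshape(mat.shape[0], 2, 1))
    return cores

x = np.linspace(0, 1, 2**12, endpoint=False)
field = np.tanh(20 * (x - 0.5)) + 0.1 * np.sin(40 * np.pi * x)   # shear-layer-like toy profile
cores = field_to_mps(field, chi_max=8)
mps_params = sum(c.size for c in cores)
print(f"grid points: {field.size}, MPS parameters: {mps_params}, "
      f"compression: {100 * (1 - mps_params / field.size):.1f}%")
```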
Towards Quantum Advantage in Chemistry
This paper demonstrates a quantum algorithm called iterative qubit coupled-cluster (iQCC) for molecular simulations by running it on classical processors at unprecedented scale, showing it can outperform classical methods for computing excited states of organometallic compounds. The study establishes that quantum advantage in chemistry may emerge around 200 logical qubits.
Key Contributions
- Demonstrated iQCC algorithm at unprecedented scale requiring hundreds of logical qubits and millions of entangling gates
- Achieved highest accuracy (0.05 eV error, R² = 0.94) for excited state energies of organometallic compounds compared to classical methods
- Established ~200 logical qubits as the threshold where quantum advantage in computational chemistry may emerge
View Full Abstract
Molecular simulations are widely regarded as leading candidates to demonstrate quantum advantage--defined as the point at which quantum methods surpass classical approaches in either accuracy or scale. Yet the qubit counts and error rates required to realize such an advantage remain uncertain; resource estimates for ground-state electronic structure span orders of magnitude, and no quantum-native method has been validated at a commercially relevant scale. Here we address this uncertainty by executing the iterative qubit coupled-cluster (iQCC) algorithm, designed for fault-tolerant quantum hardware, at unprecedented scale using a quantum solver on classical processors, enabling simulations of transition organo-metallic complexes requiring hundreds of logical qubits and millions of entangling gates. Using this approach, we compute the lowest triplet excited state (T$_1$) energies of Ir(III) and Pt(II) phosphorescent organometallic compounds and show that iQCC achieves the lowest mean absolute error (0.05 eV) and highest R$^2$ (0.94) relative to experiment, outperforming leading classical methods. We find these systems remain classically tractable up to $\sim$200 logical qubits, establishing the threshold at which quantum advantage in computational chemistry may emerge and clarifying resource requirements for future quantum computers.
Schrödinger Symmetry in Spherically-symmetric Static Mini-superspaces with Matter Fields
This paper studies how Schrödinger symmetry emerges in simplified quantum gravity models with matter fields, examining two specific cases: spacetime with electromagnetic fields and cosmological constant, and spacetime with massless scalar fields. The authors develop mathematical methods to analyze these symmetries and provide physical interpretations for how they relate to known spacetime solutions like de Sitter and generalized Janis-Newman-Winicour spacetimes.
Key Contributions
- Development of canonical transformation methods to analyze Schrödinger symmetry in mini-superspace models with matter fields
- Demonstration that different dimensional Schrödinger symmetries emerge in Maxwell field and scalar field cases
- Physical interpretation of symmetry generators and their relationship to Hamiltonian constraints in quantum gravity
View Full Abstract
Schrödinger symmetry emerged in a "fluid limit" from a full superspace to several mini-superspace models. We consider two spherically-symmetric static mini-superspace models with matter fields and verify the robustness of this emergent symmetry at the classical level: (i) Maxwell field with cosmological constant and (ii) $n$ massless scalar fields. We develop a method based on canonical transformations and show that: for model (i), 3D Schrödinger symmetry emerges, and the solution is the (anti-) de Sitter Reissner-Nordström spacetime, and for model (ii), $(2+n)$D Schrödinger symmetry appears, and the solution is a generalized Janis-Newman-Winicour spacetime and its "interior", a Kantowski-Sachs type closed universe. In the matter decoupling limit, both cases lead to 2D Schrödinger symmetry in different lapse functions and mini-superspace coordinates, which implies the covariance of Schrödinger symmetry. Finally, we propose a physical interpretation of the symmetry under Hamiltonian constraints $H$ and explain it with examples: Symmetry generators commuting with $H$ map a solution to another one, while those non-commuting with $H$ generate a new theory with the Schrödinger symmetry and the transformed configuration is a solution to the new theory. These support the robustness of the emergence of Schrödinger symmetry and open new possibilities for exploring quantum dynamics of matter and gravity based on the symmetry.
Quadratic and cubic scrambling in the estimation of two successive phase-shifts
This paper investigates how nonlinear scrambling operations can improve the precision of quantum sensors that need to estimate multiple phase-shift parameters simultaneously. The researchers show that adding quadratic or cubic nonlinear operations between measurements helps overcome fundamental limitations when parameters are incompatible or when the quantum probe is insensitive to individual parameters.
Key Contributions
- Demonstrates that nonlinear scrambling operations mitigate sloppiness and improve parameter compatibility in multiparameter quantum estimation
- Shows third-order nonlinearity outperforms second-order scrambling for both squeezed vacuum and coherent probe states
- Establishes threshold conditions for when joint estimation with nonlinear coupling outperforms stepwise estimation strategies
View Full Abstract
Multiparameter quantum estimation becomes challenging when the parameters are incompatible, i.e., when their respective symmetric logarithmic derivatives do not commute, or when the model is sloppy, meaning that the quantum probe depends only on combinations of parameters leading to a degenerate or ill-conditioned Fisher information matrix. In this work, we explore the use of scrambling operations between parameter encoding to overcome sloppiness. We consider a bosonic model with two phase-shift parameters and analyze the performance of second- and third-order nonlinear scrambling using two classes of probe states: squeezed vacuum states and coherent states. Our results demonstrate that nonlinear scrambling mitigates sloppiness, increases compatibility, and improves overall estimation precision. We find third-order nonlinearity to be more effective than second-order under both fixed-probe and fixed-energy constraints. Furthermore, by comparing joint estimation to a stepwise estimation strategy, we show that a threshold for nonlinear coupling exists. For coherent probes, joint estimation outperforms the stepwise strategy if the nonlinearity is sufficiently large, while for squeezed probes, this advantage is observed specifically with third-order nonlinearity.
Certified-Everlasting Quantum NIZK Proofs
This paper develops new quantum cryptographic protocols called certified-everlasting non-interactive zero-knowledge proofs, where a verifier can provably delete quantum proof information in a way that can be verified by the prover. The authors construct these protocols using existing cryptographic assumptions like Learning with Errors (LWE) and identify barriers to certain construction approaches.
Key Contributions
- Identified barriers to constructing certified-everlasting NIZKs in the CRS model from known interactive proofs
- Constructed CE-NIZKs for NP based on LWE assumptions in both CRS and shared EPR models
View Full Abstract
We study non-interactive zero-knowledge proofs (NIZKs) for NP satisfying: 1) statistical soundness, 2) computational zero-knowledge and 3) certified-everlasting zero-knowledge (CE-ZK). The CE-ZK property allows a verifier of a quantum proof to revoke the proof in a way that can be checked (certified) by the prover. Conditioned on successful certification, the verifier's state can be efficiently simulated with only the statement, in a statistically indistinguishable way. Our contributions regarding these certified-everlasting NIZKs (CE-NIZKs) are as follows: - We identify a barrier to obtaining CE-NIZKs in the CRS model via generalizations of known interactive proofs that satisfy CE-ZK. - We circumvent this by constructing CE-NIZK from black-box use of NIZK for NP satisfying certain properties, along with OWFs. As a result, we obtain CE-NIZKs for NP in the CRS model, based on polynomial hardness of the learning with errors (LWE) assumption. - In addition, we observe that the aforementioned barrier does not apply to the shared EPR model. Consequently, we present a CE-NIZK for NP in this model based on any statistical binding hidden-bits generator, which can be based on LWE. The only quantum computation in this protocol involves single-qubit measurements of the shared EPR pairs.
Quantum Integrability of Hamiltonians with Time-Dependent Interaction Strengths and the Renormalization Group Flow
This paper studies quantum systems where interaction strengths change over time and shows that the mathematical conditions needed for these systems to be exactly solvable are identical to renormalization group flow equations from static systems. Using the time-dependent Kondo model as an example, the authors demonstrate a fundamental connection between quantum integrability and renormalization group theory.
Key Contributions
- Establishes correspondence between integrability constraints in time-dependent quantum systems and renormalization group flow equations
- Provides exact solution to time-dependent anisotropic Kondo model using generalized Bethe ansatz framework
View Full Abstract
In this paper we consider quantum Hamiltonians with time-dependent interaction strengths, and following the recently formulated generalized Bethe ansatz framework [P. R. Pasnoori, Phys. Rev. B 112, L060409 (2025)], we show that constraints imposed by integrability take the same form as the renormalization group flow equations corresponding to the respective Hamiltonians with constant interaction strengths. As a concrete example, we consider the anisotropic time-dependent Kondo model characterized by the time-dependent interaction strengths $J_{\parallel}(t)$ and $J_{\perp}(t)$. We construct an exact solution to the time-dependent Schrodinger equation and by applying appropriate boundary conditions on the fermion fields we obtain a set of matrix difference equations called the quantum Knizhnik-Zamolodchikov (qKZ) equations corresponding to the XXZ R-matrix. The consistency of these equations imposes constraints on the time-dependent interaction strengths $J_{\parallel}(t)$ and $J_{\perp}(t)$, such that the system is integrable. Remarkably, the resulting temporal trajectories of the couplings are shown to coincide exactly with the RG flow trajectories of the static Kondo model, establishing a direct and universal correspondence between integrability and renormalization-group flow in time-dependent quantum systems.
Quantum channel tomography and estimation by local test
This paper develops efficient methods for determining the properties of unknown quantum channels (operations that transform quantum states) by querying them a limited number of times. The authors prove that local testing approaches can achieve the same efficiency as more complex methods, and establish optimal query complexity bounds for reconstructing quantum channels with different error tolerances.
Key Contributions
- Proved equivalence between local testing and dilation-based approaches for quantum channel estimation
- Established optimal query complexity bounds for quantum channel tomography with diamond norm error scaling
- Achieved Heisenberg scaling O(1/ε) for specific channel conditions where Kraus rank times output dimension equals input dimension
- Provided efficient algorithms for mixed state tomography with improved query complexity
View Full Abstract
We study the estimation of an unknown quantum channel $\mathcal{E}$ with input dimension $d_1$, output dimension $d_2$ and Kraus rank at most $r$. We establish a connection between the query complexities in two models: (i) access to $\mathcal{E}$, and (ii) access to a random dilation of $\mathcal{E}$. Specifically, we show that for parallel (possibly coherent) testers, access to dilations does not help. This is proved by constructing a local tester that uses $n$ queries to $\mathcal{E}$ yet faithfully simulates the tester with $n$ queries to a random dilation. As application, we show that: - $O(rd_1d_2/\varepsilon^2)$ queries to $\mathcal{E}$ suffice for channel tomography to within diamond norm error $\varepsilon$. Moreover, when $rd_2=d_1$, we show that the Heisenberg scaling $O(1/\varepsilon)$ can be achieved, even if $\mathcal{E}$ is not a unitary channel: - $O(\min\{d_1^{2.5}/\varepsilon,d_1^2/\varepsilon^2\})$ queries to $\mathcal{E}$ suffice for channel tomography to within diamond norm error $\varepsilon$, and $O(d_1^2/\varepsilon)$ queries suffice for the case of Choi state trace norm error $\varepsilon$. - $O(\min\{d_1^{1.5}/\varepsilon,d_1/\varepsilon^2\})$ queries to $\mathcal{E}$ suffice for tomography of the mixed state $\mathcal{E}(|0\rangle\langle 0|)$ to within trace norm error $\varepsilon$.
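For intuition about the quoted tomography bounds, $d_1^{2.5}/\varepsilon$ beats $d_1^{2}/\varepsilon^{2}$ only once $\varepsilon < 1/\sqrt{d_1}$, so the Heisenberg-type term in the min dominates at high precision. The helper below simply evaluates the stated scalings with unit prefactors, an assumption made purely for illustration and not part of the paper:

```python
import numpy as np

def tomography_queries(d1, d2, r, eps):
    """Evaluate the quoted query-count scalings with unit prefactors (illustration only)."""
    general = r * d1 * d2 / eps**2                        # O(r d1 d2 / eps^2), diamond norm
    heisenberg_case = min(d1**2.5 / eps, d1**2 / eps**2)  # r*d2 = d1 case, diamond norm
    choi = d1**2 / eps                                    # Choi-state trace norm
    return general, heisenberg_case, choi

d1 = d2 = 16
for eps in (0.1, 0.01):
    g, h, c = tomography_queries(d1, d2, r=1, eps=eps)
    print(f"eps={eps}: general ~{g:.0f}, r*d2=d1 case ~{h:.0f}, Choi ~{c:.0f}")

print("Heisenberg-type term wins for eps <", 1 / np.sqrt(d1))
```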
Optimised Fermion-Qubit Encodings for Quantum Simulation with Reduced Transpiled Circuit Depth
This paper develops a new method to optimize how fermionic quantum systems (like molecules) are encoded onto quantum computers, specifically improving ternary tree encodings to reduce the depth of quantum circuits needed for simulation. The authors demonstrate their approach reduces circuit depths by about 27% when simulating water molecules compared to standard encoding methods like Jordan-Wigner.
Key Contributions
- Development of a deterministic optimization method for ternary tree fermion-qubit encodings that reduces Pauli-weight without requiring ancilla qubits or additional swap gates
- Demonstration of 27.7% and 26.0% average reductions in qDRIFT circuit depths for untranspiled and transpiled circuits respectively across various encoding methods
View Full Abstract
Simulation of fermionic Hamiltonians with gate-based quantum computers requires the selection of an encoding from fermionic operators to quantum gates, the most widely used being the Jordan-Wigner transform. Many alternative encodings exist, with quantum circuits and simulation results being sensitive to choice of encoding, device connectivity and Hamiltonian characteristics. Non-stochastic optimisation of the ternary tree class of encodings to date has targeted either the device or Hamiltonian. We develop a deterministic method which optimises ternary tree encodings without changing the underlying tree structure. This enables reduction in Pauli-weight without ancillae or additional swap-gate overhead. We demonstrate this method for a variety of encodings, including those which are derived from the qubit connectivity graph of a quantum computer. Across a suite of standard encoding methods applied to water in STO-3G basis, including Jordan-Wigner, our method reduces qDRIFT circuit depths on average by $27.7\%$ and $26.0\%$ for untranspiled and transpiled circuits respectively.
Three-qubit entangling gates with simultaneous exchange controls in spin qubit systems
This paper develops new three-qubit gates that drive the exchange couplings of multiple spin-qubit pairs simultaneously, rather than operating on pairs one at a time. The authors show this approach can create important quantum states and gates more efficiently, with fewer operations.
Key Contributions
- Development of simultaneous multi-qubit exchange gates for spin qubit systems
- Analytical expressions for three-qubit entangling operations in linear and triangular configurations
- Demonstration of more efficient quantum circuits with reduced gate count for standard operations like GHZ states, W states, and Toffoli gates
View Full Abstract
Pairwise exchange couplings have long been the standard mechanism for entangling spin qubits in semiconductor systems. However, implementing quantum circuits based on pairwise exchange gates often requires a lengthy sequence of elementary gate operations. In this work, we present an alternative approach: multi-qubit entangling gate operations that simultaneously drive the exchange couplings between multiple pairs of spin qubits. We explore three spin qubit systems in linear or triangular configurations. We derive analytical expressions for these multi-exchange entangling operations and demonstrate how to use the resulting three-qubit gates to construct quantum circuits capable of generating standard entangled states such as GHZ and W states, and the Toffoli gate, by optimizing control parameters. Our results show that this multi-qubit strategy significantly reduces the number of required operations, offering a pathway to more efficient, shallower, and more coherent circuits for spin-qubit processors.
Tensor Network Formulation of Dequantized Algorithms for Ground State Energy Estimation
This paper develops a new classical algorithm framework using tensor networks to efficiently estimate ground state energies of quantum systems, eliminating the need for computationally expensive sampling procedures. The method provides a practical tool for determining when quantum computers would have genuine advantages over classical computers for energy estimation problems.
Key Contributions
- Development of tensor network-based dequantization framework that eliminates sampling overhead while preserving asymptotic complexity
- Demonstration of practical algorithm capable of handling systems up to 100 qubits with high-degree polynomials
- Identification of crossover regimes between classical tractability and quantum advantage for ground state energy estimation
View Full Abstract
Verifying quantum advantage for practical problems, particularly the ground state energy estimation (GSEE) problem, is one of the central challenges in quantum computing theory. For that purpose, dequantization algorithms play a central role in providing a clear theoretical framework to separate the complexity of quantum and classical algorithms. However, existing dequantized algorithms typically rely on sampling procedures, leading to prohibitively large computational overheads and hindering their practical implementation on classical computers. In this work, we propose a tensor network-based dequantization framework for GSEE that eliminates the sampling process while preserving the asymptotic complexity of prior dequantized algorithms. In our formulation, the overhead arising from sampling is replaced by the growth of the bond dimension required to represent Chebyshev vectors as tensor network states. Consequently, physical structure, such as entanglement and locality, is naturally reflected in the computational cost. By combining this approach with tensor network approximations, such as Matrix Product States (MPS), we construct a practical dequantization algorithm that is executable within realistic computational resources. Numerical simulations demonstrate that our method can efficiently construct high-degree polynomials up to $d=10^4$ for Hamiltonians with up to $100$ qubits, explicitly revealing the crossover between classically tractable and quantum advantaged regimes. These results indicate that tensor network-based dequantization provides a crucial tool toward the rigorous, quantitative verification of quantum advantage in realistic many-body systems.
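The cost model above rests on applying Chebyshev polynomials of a rescaled Hamiltonian to a reference vector via the three-term recurrence $T_{k+1}(H)\,|v\rangle = 2H\,T_k(H)\,|v\rangle - T_{k-1}(H)\,|v\rangle$. The dense-vector version of that recurrence is sketched below for reference; in the paper the vectors would instead be kept as MPS with bounded bond dimension, which is where the claimed savings enter. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def chebyshev_vectors(h, v0, degree):
    """Yield T_k(h) v0 for k = 0..degree via the three-term recurrence.

    Assumes the spectrum of `h` has been rescaled into [-1, 1]."""
    t_prev, t_curr = v0, h @ v0            # T_0 v = v, T_1 v = H v
    yield t_prev
    if degree >= 1:
        yield t_curr
    for _ in range(2, degree + 1):
        t_prev, t_curr = t_curr, 2 * (h @ t_curr) - t_prev
        yield t_curr

# Toy example: random Hermitian "Hamiltonian" on 8 qubits (dense, 256 x 256)
rng = np.random.default_rng(1)
a = rng.normal(size=(256, 256)); h = (a + a.T) / 2
h /= 1.05 * np.max(np.abs(np.linalg.eigvalsh(h)))   # rescale spectrum into (-1, 1)
v0 = rng.normal(size=256); v0 /= np.linalg.norm(v0)

vecs = list(chebyshev_vectors(h, v0, degree=50))
print(len(vecs), np.linalg.norm(vecs[-1]))
```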
Multi-Photon Lasing Phenomena in Quantum Dot-Cavity QED
This paper studies multi-photon lasing in quantum dot-cavity systems, where multiple photons are emitted simultaneously in a coherent process rather than single photons. The research explores various configurations of quantum dots coupled to optical cavities and develops theoretical models to understand and predict these nonclassical light emission phenomena.
Key Contributions
- Development of polaron-transformed master equation approach for modeling exciton-phonon interactions in quantum dot-cavity systems
- Derivation of Scully-Lamb laser rate equations for multi-photon processes without mean-field approximations
- Investigation of various multi-photon lasing regimes including cooperative two-photon lasing and hyperradiant lasing
View Full Abstract
Multi-photon lasing has been realized in systems with strong nonlinear interactions between emitters and cavity modes, where single-photon processes are suppressed. Coherence between the internal states of a quantum emitter, or among multiple emitters, plays a key role. Such continuous nonclassical sources of light can find applications in quantum computation, quantum sensing, quantum metrology, and quantum communication. This thesis explores the multi-photon lasing phenomena in various quantum dot-photonic crystal cavity quantum electrodynamic (QED) setups. Exciton-phonon interactions are inevitable in such systems and are incorporated using the polaron-transformed master equation. The Born-Markov approximation is employed to obtain the reduced density matrix rate equation. Using quantum laser theory, we derived the Scully-Lamb laser rate equations and evaluated the single- and multi-photon excess emission rates defined as the difference between emission and absorption rates into the cavity mode without mean-field approximations. We investigated cooperative two-photon lasing, correlated emission lasing, hyperradiant lasing, non-degenerate two-mode two-photon lasing, and continuous variable entanglement in open quantum systems with single or multiple semiconductor quantum dots (two-level, three-level, and four-level) driven coherently/incoherently and coupled to single/bimodal cavities.
Unraveling the Quantum Mpemba Effect on Markovian Open Quantum Systems
This paper studies the quantum Mpemba effect, where quantum systems that start further from equilibrium can sometimes reach equilibrium faster than systems starting closer to it. The researchers propose physical mechanisms for this counterintuitive phenomenon and show it can be exponentially enhanced in larger quantum systems.
Key Contributions
- Proposes physical mechanism for quantum Mpemba effect based on decoherence-free subspaces
- Demonstrates exponential enhancement of decay rates toward equilibrium scaling with system size
- Studies strong Mpemba effect through unravelings of Davies maps and proposes microscopic model for bath dynamics
View Full Abstract
In recent years, the quantum Mpemba effect (QME), which occurs when an out-of-equilibrium system reaches equilibrium faster than another that is closer to equilibrium, has attracted significant attention from the scientific community as an intriguing and counterintuitive phenomenon. It generalizes its classical counterpart by extending the concept beyond temperature equilibration. This paper approaches the QME in Markovian open quantum systems from different perspectives. First, we propose a physical mechanism based on decoherence-free subspaces. Second, we show that an exponential enhancement of the decay rate toward equilibrium, scaling with system size, can be obtained, leading to an extreme version of the phenomenon in Markovian open quantum systems. Third, we study the strong Mpemba effect through the unravelings of Davies maps, revealing subtleties in the choice of figures of merit used to identify the QME. Finally, we propose a microscopic model to gain deeper insight into bath dynamics in this context.
Arrival Time -- Classical Parameter or Quantum Operator?
This paper investigates how to measure arrival times in quantum mechanics for multi-particle entangled systems, comparing two approaches: treating time as a classical parameter versus a quantum operator. The researchers propose experiments that could distinguish between these approaches and have implications for quantum technologies using temporal entanglement.
Key Contributions
- Extended arrival-time analysis from single particles to multi-particle entangled systems
- Proposed experimental methods to distinguish between time-parameter and time-operator approaches in quantum mechanics
View Full Abstract
The question of how to interpret and compute arrival-time distributions in quantum mechanics remains unsettled, reflecting the longstanding tension between treating time as a quantum observable or as a classical parameter. Most previous studies have focused on the single-particle case in the far-field regime, where both approaches yield very similar arrival-time distributions and a semi-classical analysis typically suffices. Recent advances in atom-optics technologies now make it possible to experimentally investigate arrival-time distributions for entangled multi-particle systems in the near-field regime, where a deeper analysis beyond semi-classical approximations is required. Even in the far-field regime, due to quantum non-locality, the semi-classical approximation cannot generally hold in multi-particle systems. Therefore, in this work, two fundamental approaches to the arrival-time problem -- namely, the time-parameter and time-operator approaches -- are extended to multi-particle systems. Using these extensions, we propose a feasible two-particle arrival-time experiment and numerically evaluate the corresponding joint distributions. Our results reveal regimes in which the two approaches yield inequivalent predictions, highlighting conditions under which experiments could shed new light on distinguishing between competing accounts of time in quantum mechanics. Our findings also provide important insights for the development of quantum technologies that use entanglement in the time domain, including non-local temporal interferometry, temporal ghost imaging, and temporal state tomography in multi-particle systems.
Wigner function negativity in a classical model of quantum light
This paper demonstrates that classical models of squeezed light can reproduce quantum phenomena typically considered nonclassical, specifically showing that classical systems with post-selection can generate Wigner functions with negative values similar to those seen in single-photon added coherent states.
Key Contributions
- Demonstration of classical model reproducing quantum Wigner function negativity
- Application of post-selection techniques to classical squeezed light systems
View Full Abstract
The presence of negative values in the Wigner quasiprobability distribution is deemed one of the hallmarks of nonclassical phenomena in quantum systems. Here we demonstrate a classical model of squeezed light that, when combined with post-selection on amplitude threshold-crossing detection events, is capable of reproducing the observed behavior of single-photon added coherent states. In particular, a classical model of balanced homodyne detection and standard tomographic techniques are used to infer the density matrix in the Fock basis. The resulting Wigner functions exhibit negativity for photon-added vacuum and weak coherent states.
Gibbs state postulate from dynamical stability -- Redundancy of the zeroth law
This paper proves that Gibbs states (which describe thermal equilibrium in quantum systems) can be uniquely characterized by requiring only that quantum systems be dynamically stable when isolated and when weakly coupled to simple harmonic oscillator environments. The authors show that a previously assumed 'zeroth law' condition involving three-system stability is actually unnecessary.
Key Contributions
- Proved that the zeroth law assumption in the Frigerio-Gorini-Verri derivation of Gibbs states is redundant
- Demonstrated that harmonic oscillator environments alone are sufficient to uniquely determine Gibbs states through dynamical stability requirements
View Full Abstract
Gibbs states play a central role in quantum statistical mechanics as the standard description of thermal equilibrium. Traditionally, their use is justified either by a heuristic, a posteriori reasoning, or by derivations based on notions of typicality or passivity. In this work, we show that Gibbs states are completely characterized by assuming dynamical stability of the system itself and of the system in weak contact with an arbitrary environment. This builds on and strengthens a result by Frigerio, Gorini, and Verri (1986), who derived Gibbs states from dynamical stability using an additional assumption that they referred to as the "zeroth law of thermodynamics", as it concerns a nested dynamical stability of a triple of systems. We prove that this zeroth law is redundant and that an environment consisting solely of harmonic oscillators is sufficient to single out Gibbs states as the only dynamically stable states.
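For reference, the object being singled out is the Gibbs state $\rho_\beta = e^{-\beta H}/\mathrm{Tr}\,e^{-\beta H}$. A minimal, purely illustrative construction via eigendecomposition:

```python
import numpy as np

def gibbs_state(hamiltonian, beta):
    """Return rho = exp(-beta H) / Tr exp(-beta H) for a Hermitian matrix H."""
    evals, evecs = np.linalg.eigh(hamiltonian)
    weights = np.exp(-beta * (evals - evals.min()))   # shift for numerical stability
    weights /= weights.sum()
    return (evecs * weights) @ evecs.conj().T

# Single qubit with H = (omega/2) * sigma_z, omega = 1
sz = np.diag([1.0, -1.0])
rho = gibbs_state(0.5 * sz, beta=2.0)
print(np.round(rho, 4))        # thermal populations; coherences vanish in the energy basis
print("trace:", np.trace(rho))
```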
Phase Space Electronic Structure Theory: From Diatomic Lambda-Doubling to Macroscopic Einstein-de Haas
This paper develops a phase space electronic structure theory that goes beyond the Born-Oppenheimer approximation by including both nuclear position and momentum in the electronic Hamiltonian. The authors demonstrate that this approach can accurately predict Lambda-doubling energy splitting in diatomic molecules like NO, connecting microscopic quantum effects to macroscopic phenomena like the Einstein-de Haas effect.
Key Contributions
- Development of phase space potential energy surfaces E_PS(X,P) that depend on both nuclear position and momentum
- Quantitative recovery of Lambda-doubling energy splitting in NO molecule using the new theoretical framework
- Demonstration that proper angular momentum conservation is essential for capturing electron-rotation coupling effects
View Full Abstract
$Λ$-doubling of diatomic molecules is a subtle microscopic phenomenon that has long attracted the attention of experimental groups, insofar as rotation of molecular $\textit{nuclei}$ induces small energetic changes in the (degenerate) $\textit{electronic}$ state. A direct description of such a phenomenon clearly requires going beyond the Born-Oppenheimer approximation. Here we show that a phase space theory previously developed to capture electronic momentum and model vibrational circular dichroism -- and which we have postulated should also describe the Einstein-de Haas effect, a macroscopic manifestation of angular momentum conservation -- is also able to recover the $Λ$-doubling energy splitting (or $Λ$-splitting) of the NO molecule nearly quantitatively. The key observation is that, by parameterizing the electronic Hamiltonian in terms of both nuclear position ($\mathbf{X}$) and nuclear momentum ($\mathbf{P}$), a phase space method yields potential energy surfaces that explicitly include the electron-rotation coupling and correctly conserve angular momentum (which we show is essential to capture $Λ-$doubling). The data presented in this manuscript offers another small glimpse into the rich physics that one can learn from investigating phase space potential energy surfaces $E_{PS}(\mathbf{X},\mathbf{P})$ as a function of both nuclear position and momentum, all at a computational cost comparable to standard Born-Oppenheimer electronic structure calculations.
Gate-Tunable Giant Negative Magnetoresistance in Tellurene Driven by Quantum Geometry
This paper reports the discovery of a giant negative magnetoresistance effect in tellurene films where electrical resistance drops by 90% in magnetic fields, proposing that quantum geometric effects in the material's electronic structure enhance electron diffusion through novel spin-orbit coupling mechanisms.
Key Contributions
- Discovery of record-breaking 90% negative magnetoresistance in tellurene films
- Identification of quantum geometric mechanisms enhancing electron diffusion
- Demonstration of gate-tunable magnetoelectric transport effects
- Establishment of non-Markovian memory effects in topological material transport
View Full Abstract
Negative magnetoresistance in conventional two-dimensional electron gases is a well-known phenomenon, but its origin in complex and topological materials, especially those endowed with quantum geometry, remains largely elusive. Here, we report the discovery of a giant negative magnetoresistance, reaching a remarkable $- 90\%$ of the resistance at zero magnetic field, $R_0$, in $n$-type tellurene films. This record-breaking effect persists over a wide magnetic field range (measured up to $35$ T) at cryogenic temperatures and is suppressed when the chemical potential shifts away from the Weyl node in the conduction band, strongly suggesting a quantum geometric origin. We propose two novel mechanisms for this phenomenon: a quantum geometric enhancement of diffusion and a magnetoelectric spin interaction that locks the spin of a Weyl fermion, in cyclotron motion under crossed electric $\boldsymbol{\cal E}$ and magnetic ${\bf B}$ fields, to its guiding-center drift, $(\boldsymbol{\cal E}\times{\bf B})\cdot σ$. We show that the time integral of the velocity auto-correlations promoted by the quantum metric between the spin-split conduction bands enhances diffusion, thereby reducing the resistance. This mechanism is experimentally confirmed by its unique magnetoelectric dependence, $ΔR_{zz}(\boldsymbol{\cal E},{\bf B})/R_0=-β_{g}(\boldsymbol{\cal E}\times{\bf B})^2$, with $β_{g}$ determined by the quantum metric. Our findings establish a new, quantum geometric and non-Markovian memory effect in magnetotransport, paving the way for controlling electronic transport in complex and topological matter.
Riemannian gradient descent-based quantum algorithms for ground state preparation with guarantees
This paper develops quantum algorithms based on Riemannian gradient descent for finding the ground state (lowest energy configuration) of quantum systems. The researchers provide theoretical guarantees for convergence and test their algorithms on IBM quantum computers, showing different scaling behaviors for different types of quantum spin systems.
Key Contributions
- Theoretical upper bounds for Riemannian gradient descent steps needed for ground state preparation based on Hamiltonian spectral properties
- Efficient quantum implementations using Trotterization and random projection techniques with convergence guarantees
- Experimental demonstration on IBM quantum devices showing linear scaling for 1D Ising chains and quadratic scaling for all-to-all coupled systems
View Full Abstract
We investigate Riemannian gradient flows for preparing ground states of a desired Hamiltonian on a quantum device. We show that the number of steps of the corresponding Riemannian gradient descent (RGD) algorithm that prepares a ground state to a given precision depends on the structure of the Hamiltonian. Specifically, we develop an upper bound for the number of RGD steps that depends on the spectral gap of the Hamiltonian, the overlap between ground and initial state, and the target precision. In numerical experiments we study examples where we observe for a 1D Ising chain with nearest-neighbor interactions that the RGD steps needed to prepare a ground state scales linearly with the number of spins. For all-to-all couplings a quadratic scaling is obtained. To achieve efficient implementations while keeping convergence guarantees, we develop RGD approximations by randomly projecting the Riemannian gradient into polynomial-sized subspaces. We find that the speed of convergence of the randomly projected RGD critically depends on the size of the subspace the gradient is projected into. Finally, we develop efficient quantum device implementations based on Trotterization and a quantum stochastic drift-inspired protocol. We implement the resulting quantum algorithms on IBM's quantum devices and provide data for small-scale problems.
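In the simplest state-vector picture, the Riemannian gradient of $E(\psi) = \langle\psi|H|\psi\rangle$ on the unit sphere is the tangent-space projection $H|\psi\rangle - E|\psi\rangle$, and descending along it with renormalization flows toward the ground state. The toy NumPy sketch below illustrates only that flow; it is not the Trotterized, randomly projected quantum-circuit implementation studied in the paper, and the step size `eta` and the small test Hamiltonian are assumptions:

```python
import numpy as np

def riemannian_gd_ground_state(h, psi0, eta=0.1, steps=500):
    """Minimize <psi|H|psi> over the unit sphere by projected (Riemannian) gradient descent."""
    psi = psi0 / np.linalg.norm(psi0)
    for _ in range(steps):
        hpsi = h @ psi
        energy = np.real(np.vdot(psi, hpsi))
        grad = hpsi - energy * psi            # Riemannian gradient (tangent to the sphere)
        psi = psi - eta * grad
        psi /= np.linalg.norm(psi)            # retract back onto the sphere
    return np.real(np.vdot(psi, h @ psi)), psi

def ising_chain(n=3, g=1.0):
    """Toy transverse-field Ising chain H = -sum Z_k Z_{k+1} - g sum X_k (dense matrix)."""
    x = np.array([[0.0, 1.0], [1.0, 0.0]]); z = np.diag([1.0, -1.0]); eye = np.eye(2)
    def op(single, site):
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, single if k == site else eye)
        return out
    return (-sum(op(z, k) @ op(z, k + 1) for k in range(n - 1))
            - g * sum(op(x, k) for k in range(n)))

h = ising_chain()
rng = np.random.default_rng(2)
e_rgd, _ = riemannian_gd_ground_state(h, rng.normal(size=8) + 0j)
print("RGD energy:", e_rgd, " exact ground energy:", np.linalg.eigvalsh(h)[0])
```

For a generic initial state with nonzero ground-state overlap and a sufficiently small step size, each iteration lowers the energy, consistent with the gap-dependent step counts discussed above.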
Quantum Chaos as an Essential Resource for Full Quantum State Controllability
This paper demonstrates how quantum chaotic systems can be fully controlled using weak perturbations, leveraging quantum analogs of classical chaos properties like sensitivity to perturbations and statistical behavior described by random matrix theory. The authors show that unlike integrable systems, quantum chaotic dynamics allow steering any initial quantum state to any target state within a time that scales logarithmically with system size.
Key Contributions
- Establishes theoretical framework connecting quantum chaos properties to full quantum state controllability
- Demonstrates that quantum chaotic systems enable universal state control with weak perturbations in logarithmic time scaling
- Shows practical examples using quantum kicked rotor including revival generation and entangled state preparation
View Full Abstract
Using the key properties of chaos, i.e., ergodicity and exponential instability, as a resource to control classical dynamics has a long and considerable history. However, in the context of controlling "chaotic" quantum unitary dynamics, the situation is far more tenuous. The classical concepts of exponential sensitivity to trajectory initial conditions and ergodicity do not directly translate into quantum unitary evolution. Nevertheless, properties inherent to quantum chaos can take on those roles: i) the dynamical sensitivity to weak perturbations, measured by the fidelity decay, serves a similar purpose as the classical sensitivity to initial conditions; and ii) paired with the fact that quantum chaotic systems are conjectured to be statistically described by random matrix theory, implies a method to translate the ergodic feature into the control of quantum dynamics. With those two properties, it can be argued that quantum chaotic dynamical systems, in principle, allow for full controllability beyond a characteristic time that scales only logarithmically with system size and $\hbar^{-1}$. In the spirit of classical targeting, it implies that it is possible to fine tune the immense quantum interference with weak perturbations and steer the system from any initial state into any desired target state, subject to constraints imposed by conserved quantities. In contrast, integrable dynamics possess neither ergodicity nor exponential instability, and thus the weak perturbations apparently must break the integrability for control purposes. The main ideas are illustrated with the quantum kicked rotor. The production of revivals, cat-like entangled states, and the transition from any random state to any other random state is possible as demonstrated.
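For concreteness, one common convention for the quantum kicked rotor (the paper's illustrative system) evolves a wavefunction on the circle by one Floquet period as a kick in the angle basis followed by free rotation in the angular-momentum basis. The split-step sketch below uses assumed parameter values and only shows the bare dynamics, not the control protocol:

```python
import numpy as np

def kicked_rotor_step(psi_theta, K=5.0, tau=1.0):
    """One Floquet period: kick in the angle basis, then free rotation in momentum space."""
    n = psi_theta.size
    theta = 2 * np.pi * np.arange(n) / n
    m = np.fft.fftfreq(n, d=1.0 / n)                 # integer angular momenta
    kicked = np.exp(-1j * K * np.cos(theta)) * psi_theta
    return np.fft.ifft(np.exp(-1j * tau * m**2 / 2) * np.fft.fft(kicked))

n = 2**10
psi = np.full(n, 1 / np.sqrt(n), dtype=complex)      # m = 0 angular-momentum eigenstate
for _ in range(200):
    psi = kicked_rotor_step(psi)

m = np.fft.fftfreq(n, d=1.0 / n)
prob_m = np.abs(np.fft.fft(psi))**2
prob_m /= prob_m.sum()
print("<p^2> after 200 kicks:", np.sum(prob_m * m**2))
```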
Fundamental bound on entanglement generation between interacting Rydberg atoms
This paper derives a fundamental theoretical limit on how well quantum entanglement can be created between two Rydberg atoms, accounting for unavoidable losses from spontaneous decay. The researchers also demonstrate laser pulse sequences that achieve entanglement preparation with errors only 1% above this fundamental limit.
Key Contributions
- Analytical derivation of fundamental lower bound for Bell state preparation fidelity in Rydberg atom systems
- Demonstration of quantum optimal control methods achieving near-optimal entanglement generation within 1% of theoretical limit
View Full Abstract
We analytically derive the fundamental lower bound for the preparation fidelity of a maximally-entangled (Bell) state of two atoms involving Rydberg-state interactions. This bound represents the minimum achievable error $E \geq ( 1 + π/2 ) Γ/B$ due to spontaneous decay $Γ$ of the Rydberg states and their finite interaction strength $B$. Using quantum optimal control methods, we identify laser pulses for preparing a maximally-entangled state of a pair of atomic qubits with an error only $1\%$ above the derived fundamental bound.
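For a feel of the scale set by this bound, here is a tiny numeric check with placeholder parameters; the decay rate Γ and interaction strength B below are assumed values chosen only for illustration, not figures from the paper.

```python
import math

# Bell-state preparation error bound E >= (1 + pi/2) * Gamma / B (from the abstract).
# Gamma: Rydberg-state decay rate, B: interaction strength.
# The numbers below are hypothetical placeholders.

Gamma = 2 * math.pi * 1e3       # decay rate in rad/s (assumed)
B     = 2 * math.pi * 10e6      # interaction strength in rad/s (assumed)

E_min = (1 + math.pi / 2) * Gamma / B
print(f"fundamental error bound:      E >= {E_min:.2e}")
print(f"pulse 1% above the bound:     E  ~ {1.01 * E_min:.2e}")
```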
Impact of Information on Quantum Heat Engines
This paper develops a theoretical framework for quantum heat engines that use information as a resource through Maxwell's demon-like feedback control. The researchers show how a quantum engine can be controlled by measurements and feedback from a classical memory system, and surprisingly find that having more information doesn't always improve the engine's performance.
Key Contributions
- General framework for feedback-controlled quantum heat engines with N thermal baths and Maxwell's demon
- Demonstration that more information does not necessarily lead to better thermodynamic performance in quantum engines
View Full Abstract
The emerging field of quantum thermodynamics is beginning to reveal the intriguing role that information can play in quantum thermal engines. Information enters as a resource when considering feedback-controlled thermal machines. While both a general theory of quantum feedback control and specific examples of quantum feedback-controlled engines have been presented, a general framework for such machines is still lacking. Here, we present a framework for a generic, two-stroke quantum heat engine interacting with $N$ thermal baths and Maxwell's demon. The demon performs projective measurements on the engine working substance, the outcome of which is recorded in a classical memory, embedded in its own thermal bath. To perform feedback control, the demon enacts unitary operations on the working substance, conditioned on the recorded outcome. By considering the compound machine-memory as a hybrid (classical-quantum) standard thermal machine interacting with $N+1$ thermal baths, our framework puts the working substance and memory on equal footing, thereby enabling a comprehensible resolution to Maxwell's paradox. We illustrate the application of our framework with a two-qubit engine. A remarkable observation is that more information does not necessarily result in better thermodynamic performance: sometimes knowing less is better.
Achievable Trade-Off in Network Nonlocality Sharing
This paper develops methods to efficiently share quantum nonlocality across network branches, establishing thresholds for when unlimited sharing is possible and deriving trade-offs between the number of branches and sharing rounds when resources are limited. The work provides practical protocols for recycling quantum correlations in networks, including analysis under realistic noise conditions.
Key Contributions
- Established entanglement thresholds required for unbounded nonlocality sharing across quantum networks
- Derived achievable trade-offs between number of sharable branches and sequential sharing rounds when resources are limited
- Developed probabilistic projective measurement protocols for nonlocality recycling with noise model analysis
View Full Abstract
Quantum networks are essential for advancing scalable quantum information processing. Quantum nonlocality sharing provides a crucial strategy for the resource-efficient recycling of quantum correlations, offering a promising pathway toward scaling quantum networks. Despite its potential, the limited availability of resources introduces a fundamental trade-off between the number of sharable network branches and the achievable sequential sharing rounds. The relationship between available entanglement and the sharing capacity remains largely unexplored, which constrains the efficient design and scalability of quantum networks. Here, we establish the entanglement threshold required to support unbounded sharing across an entire network by introducing a protocol based on probabilistic projective measurements. When resources fall below this threshold, we derive an achievable trade-off between the number of sharable branches and sharing rounds. To assess practical feasibility, we compare the detectability of our protocol with weak-measurement schemes and extend the sharing protocol to realistic noise models, providing a robust framework for nonlocality recycling in quantum networks.
Eigenstate Typicality as the Dynamical Bridge to the Eigenstate Thermalization Hypothesis: A Derivation from Entropy, Geometry, and Locality
This paper provides a theoretical framework explaining why isolated quantum many-body systems thermalize by deriving the eigenstate thermalization hypothesis (ETH) from four fundamental principles: maximum entropy, high-dimensional geometry, locality, and eigenstate typicality. The work clarifies the physical foundations of quantum thermalization without relying on random matrix theory assumptions.
Key Contributions
- Unified theoretical framework deriving ETH from entropy, geometry, locality and eigenstate typicality principle
- Explanation of quantum thermalization without random matrix theory assumptions
- Clarification of the scope and physical foundations of the eigenstate thermalization hypothesis
View Full Abstract
The eigenstate thermalization hypothesis (ETH) provides a powerful framework for understanding thermalization in isolated quantum many-body systems, yet its physical foundations and minimal underlying assumptions remain actively debated. In this work, we develop a unified framework that clarifies the origin of ETH by separating kinematic typicality from dynamical input. We show that the characteristic ETH structure of local operator matrix elements follows from four ingredients: the maximum entropy principle, the geometry of high-dimensional Hilbert space, the locality of physical observables, and a minimal dynamical principle, which we term the eigenstate typicality principle (ETP). ETP asserts that in quantum-chaotic systems, energy eigenstates are statistically indistinguishable from typical states within a narrow microcanonical shell with respect to local measurements. Within this framework, diagonal ETH emerges from measure concentration, while the universal exponential suppression and smooth energy-frequency dependence of off-diagonal matrix elements arise from entropic scaling and local dynamical correlations, without invoking random-matrix assumptions. Our results establish ETH as a consequence of entropy, geometry, and chaos-induced typicality, and clarify its scope, thereby deepening our understanding of quantum thermalization and the emergence of statistical mechanics from unitary many-body dynamics.
Projected Optimal Sensors from Operator Orbits
This paper develops a unified theoretical framework for different types of quantum sensors using operator algebra, and demonstrates how to design new quantum sensors that achieve better-than-classical precision in measurements. The work shows how the mathematical structure of quantum operations determines the sensitivity scaling and proposes novel sensor designs with improved performance even in the presence of noise and particle loss.
Key Contributions
- Unified theoretical framework connecting Ramsey, twist-untwist, and random quantum sensors through operator algebra
- Novel quantum sensor designs using projected state ensembles that achieve beyond-shot-noise sensitivity
- Demonstration of favorable Fisher information scaling under decoherence and particle loss conditions
View Full Abstract
We unify Ramsey, twist-untwist, and random quantum sensors using operator algebra and account for the Fisher scaling of various sensor designs. We illustrate how the operator orbits associated with state preparation inform the scaling of the sensitivity with the number of subsystems. Using our unified model, we design a novel set of sensors in which a projected ensemble of quantum states exhibits beyond-shot-noise metrological performance. We also show favorable scaling of Fisher information with decoherence models and loss of particles.
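As background for the sensitivity-scaling discussion, here is a standard textbook check (not the paper's projected-ensemble construction): for a pure probe state and phase generator $J_z$, the quantum Fisher information is $F_Q = 4\,\mathrm{Var}(J_z)$, giving shot-noise scaling $N$ for a product state and Heisenberg scaling $N^2$ for a GHZ state.

```python
import numpy as np

# Textbook check (not the paper's construction): for a pure probe state and
# generator Jz = (1/2) sum Z_i, the quantum Fisher information is
# F_Q = 4 Var(Jz). Product states give F_Q = N, GHZ states give F_Q = N^2.

Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def jz(n):
    dim = 2 ** n
    J = np.zeros((dim, dim), dtype=complex)
    for i in range(n):
        ops = [I2] * n
        ops[i] = Z
        term = ops[0]
        for op in ops[1:]:
            term = np.kron(term, op)
        J += 0.5 * term
    return J

def qfi(psi, J):
    mean = np.real(psi.conj() @ J @ psi)
    mean_sq = np.real(psi.conj() @ J @ J @ psi)
    return 4 * (mean_sq - mean ** 2)

for n in range(2, 7):
    dim = 2 ** n
    product = np.ones(dim, dtype=complex) / np.sqrt(dim)   # |+>^n
    ghz = np.zeros(dim, dtype=complex)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)                      # (|0...0> + |1...1>)/sqrt(2)
    J = jz(n)
    print(f"N={n}: product F_Q = {qfi(product, J):.2f}, GHZ F_Q = {qfi(ghz, J):.2f}")
```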
Coherent feedback-enhanced asymmetry of thermal process in open quantum systems: Cavity optomechanics
This paper studies how coherent feedback loops can enhance entropy production and irreversibility in open quantum systems, specifically using cavity optomechanics as an example. The researchers show that coherent feedback drives systems far from thermal equilibrium and that entropy production correlates with quantum mutual information.
Key Contributions
- Demonstrated that coherent feedback loops can enhance entropy production in quantum systems
- Showed correlation between entropy production rate and quantum mutual information in small-coupling limit
- Applied theoretical framework to optomechanical cavity systems showing improved heating/cooling control
View Full Abstract
Entropy production is a fundamental concept in nonequilibrium thermodynamics, providing a direct measure of the irreversibility inherent in any physical process. In this work, we investigate the steady-state enhancement of irreversibility achieved by employing a coherent feedback loop. We evaluate the steady-state entropy production rate and quantum correlations by applying the quantum phase space formulation to calculate the entropy change. Our study reveals the essential contribution of coherent feedback in the thermal bath's input-noise operators, resulting in the system being driven far from thermal equilibrium. Our analysis shows that in the small-coupling limit, the entropy production rate is proportional to the quantum mutual information. As an application, we consider the optomechanical system of a Fabry-Pérot cavity and show that the peaks of the entropy production corresponding to the heating/cooling of the movable mirror are enhanced. Therefore, we conclude that irreversibility and quantum correlations are not independent and must be analyzed jointly. The results demonstrate the possibility of enhancing entropy production and pave the way for promising quantum thermal applications through coherent feedback loops.
Dual-Qubit Hierarchical Fuzzy Neural Network for Image Classification: Enabling Relational Learning via Quantum Entanglement
This paper proposes a dual-qubit hierarchical fuzzy neural network (DQ-HFNN) that uses quantum entanglement to model relationships between feature pairs for image classification. The approach encodes feature pairs onto entangled qubits rather than single qubits, enabling the network to learn correlations between features and achieve better classification accuracy than classical methods.
Key Contributions
- Novel dual-qubit encoding scheme for feature pairs using quantum entanglement
- Demonstration that entanglement enables relational learning between features rather than just increased expressivity
- Parameter-efficient quantum neural network with improved classification performance and noise robustness
View Full Abstract
Classical deep neural network models struggle to represent data uncertainty and capture dependencies between features simultaneously, especially under fuzzy or noisy conditions. Although a quantum-assisted hierarchical fuzzy neural network (QA-HFNN) was proposed to learn fuzzy membership for each feature, it cannot model dependencies between features due to its single-qubit encoding. To address this, this paper proposes a dual-qubit hierarchical fuzzy neural network (DQ-HFNN), encoding feature pairs onto a pair of entangled qubits, which extends the single-feature fuzzy model to a joint fuzzy representation. By introducing quantum entanglement, the dual-qubit circuit can encode non-classical correlations, enabling the model to directly learn relationship patterns between feature pairs. Experiments on benchmarks show that DQ-HFNN demonstrates higher classification accuracy than QA-HFNN, as well as classical deep learning baselines. Furthermore, ablation studies after controlling for circuit depth and parameter counts show that the performance gain mainly stems from the relational modeling capability enabled by entanglement rather than enhanced expressivity. The proposed DQ-HFNN model exhibits high parameter efficiency and fast inference speed. Experiments under noisy conditions suggest that it is robust against noise and has the potential to be implemented on noisy intermediate-scale quantum devices.
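A minimal sketch of the dual-qubit idea: a feature pair is loaded by single-qubit rotations and then entangled so that the measurement outcomes depend on the pair jointly. The specific gate choice below (RY rotations followed by a CNOT) is an assumption for illustration, not necessarily the DQ-HFNN circuit.

```python
import numpy as np

# Sketch of dual-qubit feature-pair encoding (assumed gate choice, not
# necessarily the DQ-HFNN circuit): each feature x in [0, 1] is mapped to an
# RY(pi * x) rotation, and a CNOT entangles the pair so that measurement
# probabilities depend jointly on (x1, x2).

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def encode_pair(x1, x2):
    """Return the 2-qubit state encoding the feature pair (x1, x2), starting from |00>."""
    state = np.kron(ry(np.pi * x1), ry(np.pi * x2)) @ np.array([1, 0, 0, 0], dtype=complex)
    return CNOT @ state

def joint_memberships(x1, x2):
    """Probabilities over |00>, |01>, |10>, |11>, read as joint fuzzy memberships."""
    psi = encode_pair(x1, x2)
    return np.abs(psi) ** 2

print(joint_memberships(0.2, 0.8))   # depends on the pair jointly, not on each feature alone
```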
Slowing and Storing Microwaves in a Single Superconducting Fluxonium Artificial Atom
This paper demonstrates electromagnetically induced transparency and quantum memory effects in a single superconducting fluxonium artificial atom, achieving 217 nanoseconds of light delay and photon storage capabilities in the microwave frequency range.
Key Contributions
- First demonstration of EIT in a single fluxonium qubit without additional coupling elements
- Achievement of microwave photon storage and 217 ns delay time
- Development of potential quantum memory architecture for superconducting circuits
View Full Abstract
Three-level Lambda systems provide a versatile platform for quantum optical phenomena such as Electromagnetically Induced Transparency (EIT), slow light, and quantum memory. Such Lambda systems have been realized in several quantum hardware platforms including atomic systems, superconducting artificial atoms, and meta-structures. Previous experiments involving superconducting artificial atoms incorporated coupling to additional degrees of freedom, such as resonators or other superconducting atoms. In this work, we performed an EIT experiment in the microwave frequency range utilizing a single Fluxonium qubit within a microwave waveguide. The Lambda system consists of two plasmon transitions in combination with one metastable state originating from the fluxon transition. In this configuration, the controlling and probing transitions are strongly coupled to the transmission line, safeguarding the transition between the 0 and 1 states, and ensuring the Fluxonium qubit is close to the sweet spot. Our observations include the manifestation of EIT, a slowdown of light with a delay time of 217 ns, and photon storage. These results highlight the potential of this system as a phase shifter or quantum memory for quantum communication in superconducting circuits.
Distillation of continuous variable qudits from single photon sources: A cascaded approach
This paper presents a method to create high-quality quantum states of light using only single photon sources and detectors arranged in a cascaded beam splitter setup. The approach can generate various important quantum states including Schrödinger cat states and displaced photon states with very high fidelity (98-99%) without requiring complex nonlinear optics.
Key Contributions
- Linear optical method for generating high-fidelity continuous variable quantum states using only single photon sources and detectors
- Demonstration of >98% fidelity generation of Schrödinger cat states and GKP resource states in a single cascaded setup
- Framework using displaced qudits for efficient optimization of input parameters to generate target quantum states
View Full Abstract
Creation of high fidelity photonic quantum states in the continuous variable regime is indispensable for the implementation of quantum technologies universally. However, this is a challenging task as it requires higher nonlinearity or larger Fock states. In this article, we surmount this necessity by using a linear optical setup with a cascaded arrangement of beam splitters that relies solely on single photon sources and single photon detectors to tailor desired single mode nonclassical states. To show the utility of this setup, we demonstrate the generation of displaced higher photon states with unit fidelity and the family of Schrödinger cat states with above $98\%$ fidelity. In addition, we demonstrate the generation of GKP resource states, such as ON states and weak cubic phase states, with $99\%$ fidelity. Creating such a variety of important states in this single setup is made feasible by stating the output in the form of displaced qudits. This figure of merit facilitates efficient identification and optimization of the input parameters required to generate the target single mode quantum states. We also account for experimental imperfections by incorporating detector inefficiencies and single photon sources with non-unit efficiency. This cascaded setup will assist experimentalists in exploring the feasible creation of target states using currently available resources, such as single photon sources and single photon detectors.
High-purity frequency-degenerate photon pair generation via cascaded SFG/SPDC in thin film lithium niobate
This paper demonstrates a new method for generating pairs of photons with identical frequencies using integrated photonic devices made from thin film lithium niobate. The approach uses two pump lasers in a cascaded process to produce high-quality photon pairs while suppressing unwanted background noise by 40 dB.
Key Contributions
- Novel dual-pump cascaded SFG/SPDC scheme for frequency-degenerate photon pair generation
- 40 dB suppression of parasitic single-pump processes while maintaining high brightness
- Demonstration in scalable thin film lithium niobate integrated photonic platform
View Full Abstract
Frequency-degenerate photon pairs generated using nonlinear photonic integrated devices are a crucial resource for scalable quantum information processing and metrology. However, their realization is hindered by unwanted parametric processes occurring within the same phase matching band, which degrade the signal-to-noise ratio and reduce the purity of the associated quantum states. Here, we propose a dual-pump scheme to produce frequency-degenerate photon pairs, based on cascaded sum-frequency generation and spontaneous parametric down-conversion occurring within a single waveguide, while strongly suppressing parasitic photon pair generation from single-pump processes. This approach significantly simplifies the design compared to microresonator-based methods and enables both pumping and collection of photon pairs entirely in the telecom band. We experimentally validate the concept in a layer-poled thin film lithium niobate waveguide, achieving frequency-degenerate photon pair generation with a brightness of $1.0(3)\times 10^{5}$ Hz nm$^{-1}$ mW$^{-2}$ and a 40 dB suppression of unwanted single-pump processes.
A Conjecture on Almost Flat SIC-POVMs
This paper investigates SIC-POVMs (maximal sets of complex equiangular lines) with anti-unitary symmetry and examines whether mathematical identities expressing overlaps as squares of fiducial vector components can uniquely determine associated Stark units. The authors find that these identities are insufficient for unique determination, though the failure may be limited.
Key Contributions
- Investigation of whether overlap identities can uniquely determine Stark units in SIC-POVMs
- Demonstration that the mathematical identities are insufficient for unique determination, with analysis of the limitations
View Full Abstract
A well-supported conjecture states that SIC-POVMs -- maximal sets of complex equiangular lines -- with anti-unitary symmetry give rise to an identity expressing some of their overlaps as squares of the (rescaled) components of a suitably chosen fiducial vector. In number theoretical terms the identity essentially expresses Stark units as sums of products of pairs of square roots of Stark units. We investigate whether the identity is enough to determine these Stark units. The answer is no, but the failure might be quite mild.
Investigation of a Bit-Sequence Reconciliation Protocol Based on Neural TPM Networks in Secure Quantum Communications
This paper proposes using Tree Parity Machine neural networks for key reconciliation in quantum key distribution systems, where quantum key material is converted into neural network weights. The authors study how quantum bit error rates and weight ranges affect synchronization performance and information leakage.
Key Contributions
- Novel application of Tree Parity Machine neural networks for QKD key reconciliation
- Experimental analysis of relationship between QBER and synchronization iterations in neural cryptographic protocols
- Demonstration that larger weight ranges reduce information leakage while increasing synchronization time
View Full Abstract
The article discusses a key reconciliation protocol for quantum key distribution (QKD) systems based on Tree Parity Machines (TPM). The idea of transforming key material into neural network weights is presented. Two experiments were conducted to study how the number of synchronization iterations and the amount of leaked information depend on the quantum bit error rate (QBER) and the range of neural network weights. The results show a direct relationship between the average number of synchronization iterations and QBER, an increase in iterations when the weight range is expanded, and a reduction in leaked information as the weight range increases. Based on these results, conclusions are drawn regarding the applicability of the protocol and the prospects for further research on neural cryptographic methods in the context of key reconciliation.
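To make the protocol concrete, here is a compact sketch of two Tree Parity Machines synchronizing via the Hebbian rule, with the second machine's weights perturbed to mimic QBER-induced disagreements in the raw key material. The sizes K and N, the weight range L, and the key-to-weight mapping are illustrative assumptions, not the article's exact settings.

```python
import numpy as np

# Sketch of two Tree Parity Machines (TPMs) synchronizing via the Hebbian rule.
# K hidden units, N inputs per unit, integer weights bounded to [-L, L].
# The second machine starts as a perturbed copy of the first, mimicking
# QBER-induced disagreements in the raw key material (assumed mapping).

rng = np.random.default_rng(0)
K, N, L = 3, 10, 4

def tpm_output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1                  # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    # Update only the hidden units whose output matches the total output.
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)

wA = rng.integers(-L, L + 1, size=(K, N))            # Alice's weights (from her raw key)
wB = wA.copy()                                       # Bob's copy, disturbed to mimic QBER
qber_mask = rng.random((K, N)) < 0.05
wB[qber_mask] = rng.integers(-L, L + 1, size=int(qber_mask.sum()))

iterations = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], size=(K, N))             # publicly shared random inputs
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                                 # learn only when outputs agree
        hebbian_update(wA, x, sA, tauA)
        hebbian_update(wB, x, sB, tauB)
    iterations += 1

print("weights synchronized after", iterations, "exchanged inputs")
```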
Tales of Hoffman: from a distance
This paper extends the classical Hoffman bound on graph chromatic numbers to distance-k colorings and quantum distance coloring parameters. The authors develop new eigenvalue-based bounds using polynomial optimization techniques and linear programming methods.
Key Contributions
- Extension of Hoffman's eigenvalue bound to distance-k graph coloring settings
- Development of linear programming methods to optimize polynomial-based bounds for quantum distance chromatic numbers
View Full Abstract
Hoffman proved that a graph $G$ with adjacency eigenvalues $λ_1\geq \cdots \geq λ_n$ and chromatic number $χ(G)$ satisfies $χ(G)\geq 1+κ,$ where $κ$ is the smallest integer such that $$λ_1+\sum_{i=1}^κλ_{n+1-i}\leq 0.$$ We extend this eigenvalue bound to the distance-$k$ setting, and also show a strengthening of it by proving that it also lower bounds the corresponding quantum distance coloring graph parameter. The new bound depends on a degree-$k$ polynomial which can be chosen freely, so one needs to make a good choice of the polynomial to obtain as strong a bound as possible. We thus propose linear programming methods to optimize it. We also investigate the implications of the new bound for the quantum distance chromatic number, showing that it is sharp for some classes of graphs. Finally, we extend the Hoffman bound to the distance setting of the vector chromatic number. Our results extend and unify several previous bounds in the literature.
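As a quick sanity check of the classical Hoffman bound quoted above (not the paper's distance-$k$ or quantum extensions), here is the computation for the Petersen graph, whose spectrum is $\{3, 1^{(5)}, (-2)^{(4)}\}$ and whose chromatic number is 3.

```python
import numpy as np

# Classical Hoffman bound: chi(G) >= 1 + kappa, where kappa is the smallest
# integer such that lambda_1 + (sum of the kappa smallest eigenvalues) <= 0.
# Checked on the Petersen graph (chi = 3); the paper's distance-k and quantum
# extensions are not implemented here.

def petersen_adjacency():
    A = np.zeros((10, 10))
    edges = [(i, (i + 1) % 5) for i in range(5)]             # outer 5-cycle
    edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]    # inner pentagram
    edges += [(i, 5 + i) for i in range(5)]                  # spokes
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    return A

eigs = np.sort(np.linalg.eigvalsh(petersen_adjacency()))     # ascending order
lam1 = eigs[-1]

kappa, total = 0, lam1
while total > 1e-9:                                          # add smallest eigenvalues until <= 0
    total += eigs[kappa]
    kappa += 1

print("spectrum:", np.round(eigs, 3))
print("Hoffman bound: chi >=", 1 + kappa)                    # prints 3, tight for the Petersen graph
```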
Practical Homodyne Shadow Estimation
This paper develops a practical method for estimating quantum states in continuous-variable systems using discretized homodyne detection with limited phase settings and measurement bins. The work bridges theoretical shadow estimation techniques with realistic experimental constraints, providing improved scaling bounds and enabling more efficient quantum state characterization.
Key Contributions
- Development of practical shadow estimation protocol for continuous-variable systems with finite measurement resources
- Improved variance scaling bounds from O(n_max^13/3) to O(n_max^4)
- Establishment of sufficient and necessary conditions for informational completeness in truncated Fock spaces
View Full Abstract
Shadow estimation provides an efficient framework for estimating observable expectation values using randomized measurements. While originally developed for discrete-variable systems, its recent extensions to continuous-variable (CV) quantum systems face practical limitations due to idealized assumptions of continuous phase modulation and infinite measurement resolution. In this work, we develop a practical shadow estimation protocol for CV systems using discretized homodyne detection with a finite number of phase settings and quadrature bins. We construct an unbiased estimator for the quantum state and establish both sufficient conditions and necessary conditions for informational completeness within a truncated Fock space up to $n_{\mathrm{max}}$ photons. We further provide a comprehensive variance analysis, showing that the shadow norm scales as $\mathcal{O}(n_{\mathrm{max}}^4)$, improving upon previous $\mathcal{O}(n_{\mathrm{max}}^{13/3})$ bounds. Our work bridges the gap between theoretical shadow estimation and experimental implementations, enabling robust and scalable quantum state characterization in realistic CV systems.
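For intuition about the measurement primitive (not the shadow estimator itself), here is a small sketch of sampling homodyne quadratures of a coherent state at a finite set of phase settings and binning the outcomes; the state, the number of phases, and the bin width are assumptions chosen for illustration.

```python
import numpy as np

# Illustration of discretized homodyne data collection (not the paper's
# shadow estimator): for a coherent state |alpha>, the quadrature
# x_theta = (a e^{-i theta} + a^dag e^{i theta}) / sqrt(2) is Gaussian with
# mean sqrt(2)*|alpha|*cos(theta - phi) and variance 1/2 (hbar = 1 convention).

rng = np.random.default_rng(1)

alpha = 1.5 * np.exp(1j * 0.3)          # assumed coherent-state amplitude
n_phases = 8                            # finite number of phase settings
n_shots = 2000                          # shots per phase
bin_width = 0.25                        # quadrature bin size

phases = np.linspace(0, np.pi, n_phases, endpoint=False)
data = {}
for theta in phases:
    mean = np.sqrt(2) * np.abs(alpha) * np.cos(theta - np.angle(alpha))
    samples = rng.normal(mean, np.sqrt(0.5), size=n_shots)
    data[theta] = np.round(samples / bin_width) * bin_width   # discretize outcomes

for theta in phases:
    print(f"theta = {theta:.2f}:  <x_theta> ~ {data[theta].mean():+.3f}")
```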
Quantum critical dynamics and emergent universality in decoherent digital quantum processors
This paper studies how noise affects quantum phase transitions in large-scale quantum processors, using IBM's superconducting quantum computers with 80-120 qubits to measure how decoherence modifies universal scaling behaviors. The researchers discovered that noise creates new universal scaling patterns different from ideal theoretical predictions, suggesting these scaling laws could serve as high-level benchmarks for quantum hardware performance.
Key Contributions
- Demonstrated noise-influenced universal scaling in large-scale quantum processors with 80-120 qubits
- Proposed using universal dynamical scaling as a high-level quantum hardware performance metric
- Showed theoretical and experimental evidence that decoherence creates distinct universality regimes rather than simply suppressing quantum critical behavior
View Full Abstract
Understanding how noise influences nonequilibrium quantum critical dynamics is essential for both fundamental physics and the development of practical quantum technologies. While the quantum Kibble-Zurek (QKZ) mechanism predicts universal scaling during quenches across a critical point, real quantum systems exhibit complex decoherence that can substantially modify these behaviors, ranging from altering critical scaling to completely suppressing it. By considering a specific case of nondemolishing noise, we first show how decoherence can reshape universal scaling and verify these theoretical predictions using numerical simulations of spin chains across a wide range of noise strengths. Then, we study linear quenches in the transverse-field Ising model on IBM superconducting processors where the noise model is unknown. Using large system sizes of 80-120 qubits, we measure equal-time connected correlations, defect densities, and excess energies across various quench times. Surprisingly, unlike earlier observations where noise-induced defect production masked universal behavior at long times, we observe clear scaling relations, pointing towards persistent universal structure shaped by decoherence. The extracted scaling exponents differ from both ideal QKZ predictions and analytic results for simplified noise models, suggesting the emergence of a distinct noise-influenced universality regime. Our results, therefore, point toward the possibility of using universal dynamical scaling as a high-level descriptor of quantum hardware, complementary to conventional gate-level performance metrics.
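Below is a hedged Qiskit-style sketch of the kind of circuit involved: a Trotterized linear quench of the transverse-field Ising chain. The ramp schedule, couplings, and step count are placeholders, and the actual circuits and error mitigation used on the IBM devices are not reproduced.

```python
import numpy as np
from qiskit import QuantumCircuit

# Sketch of a Trotterized linear quench of the transverse-field Ising model,
# H(t) = -J sum Z_i Z_{i+1} - g(t) sum X_i, with g ramped linearly across the
# critical point. Schedule, couplings, and step count are illustrative only.

def linear_quench_circuit(n_qubits, n_steps, dt=0.1, J=1.0, g_start=2.0, g_end=0.0):
    qc = QuantumCircuit(n_qubits)
    qc.h(range(n_qubits))                      # start near the g >> J ground state |+...+>
    for step in range(n_steps):
        g = g_start + (g_end - g_start) * step / max(n_steps - 1, 1)
        for i in range(n_qubits - 1):          # ZZ coupling layer
            qc.rzz(-2 * J * dt, i, i + 1)
        for i in range(n_qubits):              # transverse-field layer
            qc.rx(-2 * g * dt, i)
    qc.measure_all()
    return qc

qc = linear_quench_circuit(n_qubits=12, n_steps=20)
print("circuit depth for a 12-qubit toy quench:", qc.depth())
```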
Intense-Laser Nondipole-Induced Symmetry Breaking in Solids
This paper studies how intense laser pulses generate high-frequency light in solid materials, focusing on effects beyond the standard approximations. The researchers show that including more complete physics reveals new properties of the generated light that depend on whether the material has special topological characteristics.
Key Contributions
- Demonstrates that nondipole effects in high-harmonic generation break dipole selection rules and enable new polarizations
- Shows that helicity generation depends on the topological phase of the material, providing a potential probe for topological properties
View Full Abstract
High-harmonic spectroscopy in solids gives insight into the inner workings of solids, such as reconstructing band structures or probing the topological phase of materials. High-harmonic generation (HHG) is a highly non-linear phenomenon, and simulations guide the interpretation of experimental results. These simulations often rely on the electric dipole approximation, even though the driving fields enter regimes that challenge its accuracy. Here, we investigate the effects of including nondipole terms in the light-matter coupling in simulations of HHG in materials with both topologically trivial and non-trivial phases. We show how the inclusion of nondipole terms breaks dipole selection rules, allowing for new polarizations of the generated light. Specifically, we find that helicity, completely absent in the dipole approximation, is induced by the nondipole extension, and that this helicity is dependent on the topological phase of the material.
A Joint Quantum Computing, Neural Network and Embedding Theory Approach for the Derivation of the Universal Functional
This paper proposes a hybrid approach combining quantum computing algorithms with neural networks to find a universal functional for quantum chemistry simulations. The method uses density matrix embedding theory to make the functional reusable across different molecular systems without requiring additional quantum resources.
Key Contributions
- Novel integration of quantum algorithms with neural networks for universal functional derivation
- Use of density matrix embedding theory to expand applicability without additional quantum resources
- Demonstration of potential cumulative quantum advantage for quantum chemistry applications
View Full Abstract
We introduce a novel approach that exploits the intersection of quantum computing, machine learning and reduced density matrix functional theory to leverage the potential of quantum computing to improve simulations of interacting quantum particles. Our method focuses on obtaining the universal functional using a deep neural network trained with quantum algorithms. We also use fragment-bath systems defined by density matrix embedding theory to strengthen our approach by substantially expanding the space of Hamiltonians for which the obtained functional can be applied without the need for additional quantum resources. Given the fact that once obtained, the same universal functional can be reused for any system where the interactions within the embedded fragment are identical, our work demonstrates a way to potentially achieve a cumulative quantum advantage within quantum computing applications for quantum chemistry and condensed matter physics.
Genuine Tripartite Strong Coupling in a Superconducting-Spin Hybrid Quantum System
This paper demonstrates strong quantum coupling between three different components: a superconducting qubit, a microwave resonator, and diamond spin defects (NV centers). The researchers show that quantum excitations can be coherently shared across all three subsystems simultaneously, creating a hybrid quantum platform.
Key Contributions
- First demonstration of genuine tripartite strong coupling in a superconducting-spin hybrid system
- Observation of three-mode avoided crossings indicating coherent excitation sharing across all subsystems
- Discovery of nonlinear effects and nuclear spin interactions at higher excitation levels
View Full Abstract
We demonstrate genuine tripartite strong coupling in a solid-state hybrid quantum system comprising a superconducting transmon qubit, a fixed-frequency coplanar-waveguide resonator, and an ensemble of NV$^-$ centers in diamond. Frequency-domain spectroscopy reveals a characteristic three-mode avoided crossing, indicating that single excitations are coherently shared across all three subsystems. At higher probe powers, we observe nonlinear features including multiphoton transitions and signatures of transmon-${}^{14}\mathrm{N}$ nuclear-spin interactions, highlighting the accessibility of higher-excitation manifolds in this architecture. These results establish a new regime of hybrid cavity QED that integrates superconducting and spin degrees of freedom, providing a platform for exploring complex multicomponent dynamics and developing hybrid quantum interfaces.
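A minimal single-excitation model of the three coupled modes (qubit, resonator, spin ensemble) shows how a three-branch avoided crossing arises as the qubit is tuned through resonance; the frequencies and coupling strengths below are placeholder values, not the measured ones.

```python
import numpy as np

# Single-excitation model of qubit-resonator-spin-ensemble coupling:
# H = [[w_q, g1, 0], [g1, w_r, g2], [0, g2, w_s]] in the basis
# {qubit, resonator, spins}. All numbers are illustrative placeholders.

w_r = 3.0      # resonator frequency (GHz, assumed)
w_s = 3.0      # spin-ensemble frequency (GHz, assumed)
g1 = 0.05      # qubit-resonator coupling (GHz, assumed)
g2 = 0.02      # resonator-spins collective coupling (GHz, assumed)

for w_q in np.linspace(2.9, 3.1, 5):
    H = np.array([[w_q, g1, 0.0],
                  [g1, w_r, g2],
                  [0.0, g2, w_s]])
    branches = np.linalg.eigvalsh(H)
    print(f"w_q = {w_q:.3f} GHz -> branches {np.round(branches, 4)}")
# At w_q = w_r = w_s the spectrum splits into three branches at once,
# a three-mode avoided crossing rather than two independent anticrossings.
```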
Neural quantum states for entanglement depth certification from randomized Pauli measurements
This paper introduces a machine learning approach using neural quantum states to certify how many qubits in a quantum system share genuine multipartite entanglement, based only on simple randomized measurements rather than complex full state reconstruction. The method trains different neural networks with built-in entanglement constraints and uses statistical model comparison to determine the minimum entanglement depth present in the quantum state.
Key Contributions
- Novel likelihood-based approach for entanglement depth certification using neural quantum states with architectural constraints
- Scalable method that avoids full quantum state tomography by working directly with measurement statistics from randomized Pauli measurements
- Demonstration of robustness for mixed states under noise and interpretability diagnostics for understanding entanglement patterns
View Full Abstract
Entanglement depth quantifies how many qubits share genuine multipartite entanglement, but certification typically relies on tailored witnesses or full tomography, both of which scale poorly with system size. We recast entanglement-depth and non-$k$-separability certification as likelihood-based model selection among neural quantum states whose architecture enforces a chosen entanglement constraint. A hierarchy of separable neural quantum states is trained on finite-shot local Pauli outcomes and compared against an unconstrained reference model trained on the same data. When all constrained models are statistically disfavored, the data certify entanglement beyond the imposed limit directly from measurement statistics, without reconstructing the density matrix. We validate the method on simulated six- and ten-qubit datasets targeting GHZ, Dicke, and Bell-pair states, and demonstrate robustness for mixed states under local noise. Finally, we discuss lightweight interpretability diagnostics derived from trained parameters that expose coarse entanglement patterns and qubit groupings directly from bitstring statistics.
The emergence of long-range entanglement and odd-even effect in periodic generalized cluster models
This paper studies how entanglement spreads in one-dimensional quantum spin systems with periodic boundaries, finding that long-range entanglement emerges specifically when both the system size and interaction range are odd numbers. The researchers use quantum information measures to characterize this entanglement and show it persists even when external magnetic fields are applied.
Key Contributions
- Discovery of odd-even effect where long-range entanglement emerges only when both system size N and interaction range m are odd
- Demonstration that four-part quantum conditional mutual information entropy serves as a direct signature of long-range entanglement in these systems
View Full Abstract
We investigate the entanglement properties in a generalized cluster model under periodic boundary conditions. By evaluating the entanglement entropy and the quantum conditional mutual information entropy under three- or four-subsystem partitions, we identify clear signatures of long-range entanglement. Specifically, when both the system size $N$ and the interaction range $m$ are odd, the system exhibits nonzero four-part quantum conditional mutual information entropies. This non-vanishing four-part quantum conditional mutual information entropy directly signals the presence of long-range entanglement. In contrast, all other combinations of $N$ and $m$ yield vanishing four-part quantum conditional mutual information entropy. Remarkably, in the case of odd $N$ and $m$, these long-range entangled features persist even in the presence of a finite transverse field, demonstrating their robustness against quantum fluctuations. These results demonstrate how the interplay between system size and interaction range governs the emergence of long-range entanglement in one-dimensional spin systems.
Unraveling real-time chemical shifts in the ultrafast regime
This paper demonstrates using ultrafast x-ray photoelectron spectroscopy to track chemical bond breaking in real-time during molecular dissociation. The researchers studied fluoromethane molecules breaking apart and showed they could distinguish between different bond-breaking pathways using femtosecond x-ray pulses.
Key Contributions
- Extended x-ray photoelectron spectroscopy to out-of-equilibrium ultrafast molecular dynamics
- Demonstrated real-time tracking of multiple bond dissociation pathways in polyatomic molecules using femtosecond x-ray probes
View Full Abstract
Traditional x-ray photoelectron spectroscopy (XPS) relies upon a direct mapping between the photoelectron binding energies and the local chemical environment, which is well-characterized by an electrostatic partial charges model for systems in equilibrium. However, the extension of this technique to out-of-equilibrium systems has been hampered by the lack of x-ray sources capable of accessing multiple atomic sites with high spectral and temporal resolution, as well as the lack of simple theoretical procedures to interpret the observed signals. In this work we employ multi-site XPS with a narrowband femtosecond x-ray probe to unravel different ultrafast dissociation processes of a polyatomic molecule, fluoromethane (CH$_{3}$F). We show that XPS can follow the cleavage of both the C-F and C-H bonds in real time, despite these channels lying close in binding energy. Additionally, we apply the partial charges model to describe these dynamics, and verify this extension with both advanced ab-initio calculations and experimental data. These results enable the application of this technique to out-of-equilibrium systems of higher complexity, by correlating real-time information from multiple atomic sites and interpreting the measurements through a viable theoretical modelling.
Quantum simulation of strong Charge-Parity violation and Peccei-Quinn mechanism
This paper uses quantum simulation to study a fundamental physics problem: why charge-parity (CP) symmetry violation appears to be absent in quantum chromodynamics despite theoretical predictions. The researchers simulate a simplified version of QCD using qubits and demonstrate how introducing an axion field can dynamically drive the system toward CP conservation, implementing the Peccei-Quinn mechanism on quantum hardware.
Key Contributions
- Development of a qubit-based quantum simulation of the Schwinger model with topological theta-terms that preserve CP-violating physics
- Demonstration of the Peccei-Quinn mechanism through quantum simulation, showing how dynamical axion fields drive the system toward theta=0
View Full Abstract
Quantum Chromodynamics (QCD) admits a topological θ-term that violates Charge-Parity (CP) symmetry, yet experimental results indicate that θ is nearly zero. To investigate this discrepancy in a controlled setting, we derive the Hamiltonian representation of the QCD Lagrangian and construct its (1+1)-dimensional Schwinger-model analogue. By encoding fermionic and gauge degrees of freedom into qubits using the Jordan-Wigner and quantum-link schemes, we obtain a compact Pauli Hamiltonian that retains the relevant topological vacuum structure. Ground states are prepared using a feedback-based quantum optimization protocol, enabling numerical evaluation of the vacuum energy $E_0(θ)$ on a few-qubit simulator. Our results show a displaced vacuum at nonzero θ, in agreement with strong-interaction expectations, and demonstrate that introducing a dynamical axion field drives the system toward θ = 0, thereby realizing the Peccei-Quinn mechanism within a minimal quantum simulation. These results illustrate how quantum hardware can examine symmetry violation and its dynamical resolution in gauge theories.
Measurement-Induced Perturbations of Hausdorff Dimension in Quantum Paths
This paper studies how quantum measurements affect the fractal geometry of particle paths, extending previous theoretical work by Abbott et al. to include realistic measurement effects. The authors show that actual quantum measurements change the roughness and Hausdorff dimension of quantum trajectories compared to idealized calculations.
Key Contributions
- Incorporation of realistic measurement effects on quantum path geometry using Gaussian wave packet models
- Demonstration that measurements shift Hausdorff dimension toward lower values and affect trajectory roughness
- Connection between theoretical quantum fractality and practical measurement physics
View Full Abstract
In a seminal paper, Abbott et al. analyzed the relationship between a particle's trajectory and the resolution of position measurements performed by an observer at fixed time intervals. They predicted that quantum paths exhibit a universal Hausdorff dimension that transitions from $d=2$ to $d=1$ as the momentum of the particle increases. However, although measurements were assumed to occur at intervals of time, the calculations only involved evaluating the expectation value of operators for the free evolution of wave function within a single interval, with no actual physical measurements performed. In this work we investigate how quantum measurements alter the fractal geometry of quantum particle paths. By modelling sequential measurements using Gaussian wave packets for both the particle and the apparatus, we reveal that the dynamics of the measurement change the roughness of the path and shift the emergent Hausdorff dimension towards a lower value in nonselective evolution. For selective evolution, feedback control forces must be introduced to counteract stochastic wave function collapse, stabilising trajectories and enabling dimensionality to be tuned. When the contribution of the measurement approaches zero, our result reduces to that of Abbott et al. Our work can thus be regarded as a more realistic formulation of their approach, and it connects theoretical quantum fractality with measurement physics, quantifying how detectors reshape spacetime statistics at quantum scales.
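To make the notion of a path's Hausdorff dimension concrete, here is a generic length-scaling estimate for a Brownian-like sampled path, which recovers the $d = 2$ limit the paper starts from; this is a standard illustration, not the paper's measurement model.

```python
import numpy as np

# Generic illustration (not the paper's measurement model): estimate the
# fractal dimension D of a Brownian-like path from the scaling of its measured
# length with spatial resolution, L(eps) ~ eps^(1 - D). Brownian paths give
# D ~ 2, matching the low-momentum limit of Abbott et al.

rng = np.random.default_rng(42)
n_steps, dt = 2 ** 18, 1e-3
path = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n_steps))

log_eps, log_len = [], []
for k in [2 ** j for j in range(1, 9)]:        # coarse-grain in time by a factor k
    coarse = path[::k]
    increments = np.abs(np.diff(coarse))
    log_eps.append(np.log(increments.mean()))  # effective spatial resolution
    log_len.append(np.log(increments.sum()))   # measured path length at that resolution

slope = np.polyfit(log_eps, log_len, 1)[0]     # fit L ~ eps^(1 - D)
print("estimated Hausdorff dimension D ~", round(1 - slope, 2))
```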
Imaginary-time-enhanced feedback-based quantum algorithms for universal ground-state preparation
This paper develops an improved quantum algorithm called ITE-FALQON that combines feedback-based quantum optimization with imaginary-time evolution to reliably find ground states of strongly correlated quantum systems, solving problems where the original FALQON algorithm fails due to spectral degeneracies.
Key Contributions
- Development of ITE-FALQON hybrid algorithm that combines feedback-based optimization with imaginary-time evolution
- Demonstration that the method overcomes spectral degeneracy failures in ground-state preparation for strongly correlated quantum systems
View Full Abstract
Preparing ground states of strongly correlated quantum systems is a central goal in quantum simulation and optimization. The feedback-based quantum algorithm (FALQON) provides an attractive alternative to variational methods with a fully quantum feedback rule, but it fails in the presence of spectral degeneracies, where the feedback signal collapses and the evolution cannot reach the ground state. Using the Fermi-Hubbard model on lattices up to 3x3, we show that this breakdown appears at half-filling on the 2x2 lattice and extends to both half-filled and doped configurations on the 3x3 lattice. We then introduce an imaginary-time-enhanced FALQON (ITE-FALQON) scheme, which inserts short imaginary-time evolution steps into the feedback loop. The hybrid method suppresses excited-state components, escapes degenerate subspaces, and restores monotonic energy descent. ITE-FALQON achieves reliable ground-state convergence across all fillings, providing a practical route to scalable ground-state preparation in strongly correlated quantum systems.
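Below is a toy exact-simulation sketch of the feedback loop with interleaved imaginary-time steps. The feedback law $\beta = -\langle i[H_d, H_p]\rangle$ is the standard FALQON rule; the two-qubit problem Hamiltonian (chosen to have a degenerate ground space) and the placement and length of the imaginary-time segments are assumptions, not the paper's Fermi-Hubbard schedule.

```python
import numpy as np
from scipy.linalg import expm

# Toy sketch of an ITE-FALQON-style loop: the standard FALQON feedback rule
# beta = -<psi| i[Hd, Hp] |psi>, with short imaginary-time steps interleaved.
# The two-qubit problem Hamiltonian (with a degenerate ground space) and the
# schedule are illustrative assumptions, not the paper's Fermi-Hubbard setup.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Hp = np.kron(Z, Z) + 0.5 * (np.kron(Z, I2) + np.kron(I2, Z))    # problem Hamiltonian
Hd = np.kron(X, I2) + np.kron(I2, X)                            # driver Hamiltonian

psi = np.ones(4, dtype=complex) / 2.0        # |++> initial state
dt, tau = 0.1, 0.05                          # real- and imaginary-time step sizes

for step in range(200):
    comm = 1j * (Hd @ Hp - Hp @ Hd)                      # i[Hd, Hp] is Hermitian
    beta = -np.real(psi.conj() @ comm @ psi)             # FALQON feedback law
    psi = expm(-1j * dt * (Hp + beta * Hd)) @ psi        # one feedback-controlled step
    if step % 10 == 0:                                   # interleaved imaginary-time step
        psi = expm(-tau * Hp) @ psi
        psi /= np.linalg.norm(psi)                       # renormalize after non-unitary step

print("final energy:", round(float(np.real(psi.conj() @ Hp @ psi)), 4))
print("exact ground energy:", round(float(np.min(np.linalg.eigvalsh(Hp))), 4))
```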
Universal Quantum Random Access Memory: A Data-Independent Unitary Construction
This paper presents a new method for building Quantum Random Access Memory (QRAM) using a single, data-independent unitary operator that encodes data in memory qubits acting as control signals. The approach simplifies QRAM implementation by separating the fixed circuit structure from variable data encoding, requiring log₂N + K + NK qubits for N addresses with K-bit data words.
Key Contributions
- Novel data-independent unitary construction for QRAM that separates circuit structure from data encoding
- Specific resource requirements and decomposition into NK multi-controlled gates with verification across multiple configurations
View Full Abstract
We present a construction for Quantum Random Access Memory (QRAM) that achieves a single, data-independent unitary operator. Unlike routing-based approaches or circuit methods that yield data-dependent unitaries, our Universal QRAM encodes data in memory qubits that act as quantum control signals within a block-diagonal permutation structure. The key insight is that memory qubits serve as control signals, enabling coherent lookup when addresses are in superposition. For N addresses with K-bit data words, the construction requires $\log_2 N + K + NK$ qubits and decomposes into exactly $NK$ multi-controlled gates. We verify the construction for $N \in \{2, 4, 8, 16\}$ and $K \in \{1, 2, 3, 4\}$, confirming that the resulting unitary is a pure permutation matrix with zero error across all data configurations. This approach simplifies QRAM implementation by separating fixed circuit structure from variable data encoding.
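A tiny helper tabulating the resource counts quoted in the abstract ($\log_2 N + K + NK$ qubits and $NK$ multi-controlled gates) for the verified configurations; it only evaluates the formula and does not construct the permutation unitary itself.

```python
import math

# Resource counts from the abstract: a Universal QRAM over N addresses with
# K-bit data words uses log2(N) + K + NK qubits and decomposes into NK
# multi-controlled gates. This only tabulates the formula for the verified
# configurations; it does not build the block-diagonal permutation unitary.

def qram_resources(N, K):
    address = int(math.log2(N))      # address register
    bus = K                          # data bus register
    memory = N * K                   # memory qubits acting as control signals
    return address + bus + memory, N * K

for N in (2, 4, 8, 16):
    for K in (1, 2, 3, 4):
        qubits, gates = qram_resources(N, K)
        print(f"N={N:2d}, K={K}: {qubits:3d} qubits, {gates:2d} multi-controlled gates")
```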
Quantum Coherence in Reflected and Refracted Beams: A Van Cittert-Zernike Approach
This paper develops a quantum extension of the van Cittert-Zernike theorem to describe how quantum coherence and photon statistics change when light beams undergo reflection and refraction at interfaces. The work shows that these basic optical processes can modify quantum statistics of light without complex light-matter interactions, potentially enabling thermal light to exhibit sub-Poissonian statistics.
Key Contributions
- Development of quantum van Cittert-Zernike theorem for reflection/refraction processes
- Demonstration that thermal light can exhibit sub-Poissonian statistics through post-selected measurements at interfaces
- Discovery of scaling law linking beam collimation to far-field thermalization for quantum statistical control
View Full Abstract
Recent advances in quantum optics have highlighted the critical role of spatial propagation in controlling the quantum coherence of light beams. However, the evolution of quantum coherence for light beams undergoing fundamental optical processes at dielectric interfaces remains unexplored. Furthermore, manipulating multiphoton correlations typically requires complex interactions that challenge few-photon level implementation. Here, we introduce a quantum van Cittert-Zernike theorem for light beams, describing how their coherence-polarization properties are influenced by reflection and refraction, as well as how these properties evolve upon subsequent propagation. Our work demonstrates that the quantum statistics of photonic systems can be controllably modified through the inherent polarization coupling arising from reflection and refraction at an interface, without relying on conventional light-matter interactions. Our approach reveals regimes where thermal light can exhibit sub-Poissonian statistics with fluctuations below the shot-noise level through post-selected measurements, and this statistical property can be tuned by the incident angle. Remarkably, this quantum statistical modification is governed by a scaling law linking beam collimation to far-field thermalization. Our work establishes a robust, decoherence-avoiding mechanism for quantum state control, advancing the fundamental understanding of coherence in quantum optics and opening new avenues for applications in quantum information and metrology.