Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically evaluated using AI, briefly summarized, and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
The page updates automatically as new papers are published and shows one week of arXiv publishing (Sunday to Thursday). An archive of previous weeks is at the bottom.
Scalable Suppression of XY Crosstalk by Pulse-Level Control in Superconducting Quantum Processors
This paper develops a scalable control method to reduce unwanted interactions (crosstalk) between neighboring qubits in superconducting quantum processors by using frequency modulation and dynamical decoupling techniques. The approach works independently of coupling strengths and shows orders-of-magnitude error reductions in simulations of two-qubit and five-qubit systems.
Key Contributions
- Development of a scalable pulse-level control framework using frequency modulation and dynamical decoupling to suppress XY crosstalk
- Numerical demonstration of orders-of-magnitude infidelity reductions for idle and single-qubit gates, with scalability validated in a five-qubit layout
View Full Abstract
As superconducting quantum processors continue to scale, high-performance quantum control becomes increasingly critical. In densely integrated architectures, unwanted interactions between nearby qubits give rise to crosstalk errors that limit operational performance. In particular, direct exchange-type (XY) interactions are typically minimized by designing large frequency detunings between neighboring qubits at the hardware level. However, frequency crowding in large-scale systems ultimately restricts the achievable frequency separation. While such XY coupling facilitates entangling gate operations, its residual presence poses a key challenge during single-qubit controls. Here, we propose a scalable pulse-level control framework, incorporating frequency modulation (FM) and dynamical decoupling (DD), to suppress XY crosstalk errors. This framework operates independently of coupling strengths, reducing calibration overhead and naturally supporting multi-qubit connectivity. Numerical simulations show orders-of-magnitude reductions in infidelity for both idle and single-qubit gates in a two-qubit system. We further validate scalability in a five-qubit layout, where crosstalk between a central qubit and four neighbors is simultaneously suppressed. Our crosstalk suppression framework provides a practical route toward high-fidelity operation in dense superconducting architectures.
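To get a feel for why dynamical decoupling helps here, the toy Python sketch below evolves two qubits under a static residual XY coupling and compares an idle window with and without a single Z-echo pulse on one qubit. It is a deliberately simplified model with illustrative coupling strength and idle time, not the paper's FM+DD protocol.

```python
# Toy model (not the paper's FM+DD scheme): a single Z-echo pulse on one qubit
# refocuses a static residual XY coupling during an idle window.
# The coupling strength and idle time below are illustrative values.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

J = 2 * np.pi * 0.5e6                     # residual XY coupling, 0.5 MHz (assumed)
t_idle = 100e-9                           # 100 ns idle window (assumed)
H_xy = 0.5 * J * (np.kron(X, X) + np.kron(Y, Y))

U_free = expm(-1j * H_xy * t_idle)        # idle with no decoupling
U_half = expm(-1j * H_xy * t_idle / 2)
Pz = np.kron(Z, I2)                       # instantaneous Z pulse on the first qubit
U_echo = Pz @ U_half @ Pz @ U_half        # sequence in time: wait, Z, wait, Z

def idle_infidelity(U, d=4):
    # 1 minus the average gate fidelity with respect to the identity
    f_pro = abs(np.trace(U)) ** 2 / d**2
    return 1 - (d * f_pro + 1) / (d + 1)

print("idle infidelity without DD:", idle_infidelity(U_free))
print("idle infidelity with Z echo:", idle_infidelity(U_echo))
```

Because the Z pulse anticommutes with both XX and YY, the second half of the evolution exactly undoes the first, so the echoed idle infidelity drops to numerical zero in this static toy model.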
Interacting electrons in silicon quantum interconnects
This paper studies one-dimensional electron channels in silicon quantum structures that could serve as interconnects between quantum processing units. The researchers identify different interaction regimes (Wigner and Friedel) in these channels and propose how they could enable long-range coupling between quantum dots for scalable quantum computing architectures.
Key Contributions
- Identification of Wigner-Friedel crossover in silicon quantum interconnects with distinct correlation signatures
- Demonstration that Wigner regime enables long-range capacitive coupling between quantum dots for entanglement generation
- DMRG simulations showing robustness of the interaction regimes against realistic disorder levels up to about 400 μeV
- Proposal for experimental signatures to detect different interaction regimes via transport and charge sensing
View Full Abstract
Coherent interconnects between gate-defined silicon quantum processing units are essential for scalable quantum computation and long-range entanglement. We argue that one-dimensional electron channels formed in the silicon quantum well of a Si/SiGe heterostructure exhibit strong Coulomb interactions and realize strongly interacting Luttinger liquid physics. At low electron densities, the system enters a Wigner regime characterized by dominant $4k_F$ correlations; increasing the electron density leads to a crossover from the Wigner regime to a Friedel regime with dominant $2k_F$ correlations. We support these results through large-scale density matrix renormalization group (DMRG) simulations of the interacting ground state under both screened and unscreened Coulomb potentials. We propose experimental signatures of the Wigner-Friedel crossover via charge transport and charge sensing in both zero- and high-magnetic field limits. We also analyze the impact of short-range correlated disorder - including random alloy fluctuations and valley splitting variations - and identify that the Wigner-Friedel crossover remains robust until disorder levels of about 400 μeV. Finally, we show that the Wigner regime enables long-range capacitive coupling between quantum dots across the interconnect, suggesting a route to create long-range entanglement between solid-state qubits. Our results position silicon interconnects as a platform for studying Luttinger liquid physics and for enabling architectures supporting nonlocal quantum error correction and quantum simulation.
Unitary fault-tolerant encoding of Pauli states in surface codes
This paper presents a new method for preparing quantum states in surface codes that preserves the code's error-correction capabilities during state preparation. The approach uses only local quantum gates and significantly reduces error rates compared to traditional measurement-based methods, making it particularly valuable for quantum computing platforms where measurements are expensive.
Key Contributions
- Distance-preserving unitary encoding scheme for Pauli eigenstates in surface codes
- Scalable construction generalizable to arbitrary code distances with O(d) circuit depth
- Demonstration of up to order-of-magnitude improvement in logical error rates over standard methods
View Full Abstract
In fault-tolerant quantum computation, the preparation of logical states is a ubiquitous subroutine, yet significant challenges persist even for the simplest states required. In the present work, we present a unitary, scalable, distance-preserving encoding scheme for preparing Pauli eigenstates in surface codes. Unlike previous unitary approaches whose fault-distance remains constant with increasing code distance, our scheme ensures that the protection offered by the code is preserved during state preparation. Building on strategies discovered by reinforcement learning for the surface-17 code, we generalize the construction to arbitrary code distances and both rotated and unrotated surface codes. The proposed encoding relies only on geometrically local gates, and is therefore fully compatible with planar 2D qubit connectivity, and it achieves circuit depth scaling as $\mathcal{O}(d)$, consistent with fundamental entanglement-generation bounds. We design explicit stabilizer-expanding circuits with and without ancilla-mediated connectivity and analyze their error-propagation behavior. Numerical simulations under depolarizing noise show that our unitary encoding without ancillas outperforms standard stabilizer-measurement-based schemes, reducing logical error rates by up to an order of magnitude. These results make the scheme particularly relevant for platforms such as trapped ions and neutral atoms, where measurements are costly relative to gates and idling noise is considerably weaker than gate noise. Our work bridges the gap between measurement-based and unitary encodings of surface-code states and opens new directions for distance-preserving state preparation in fault-tolerant quantum computation.
Fast, high-fidelity Transmon readout with intrinsic Purcell protection via nonperturbative cross-Kerr coupling
This paper demonstrates a new 'junction readout' method for measuring superconducting qubits that achieves faster, more accurate measurements than traditional approaches. By coupling the qubit to its readout circuit through both capacitive and Josephson junction connections, they achieve 99.4% measurement accuracy in just 68 nanoseconds without needing expensive additional hardware components.
Key Contributions
- Development of junction readout architecture with intrinsic Purcell protection that eliminates need for external Purcell filters
- Achievement of 99.4% assignment fidelity with 68 ns integration time using bifurcation-based readout
- Demonstration of enhanced resilience to measurement-induced state transitions through nonperturbative cross-Kerr coupling
- Scalable readout solution with reduced hardware overhead compared to conventional dispersive readout
View Full Abstract
Dispersive readout of superconducting qubits relies on a transverse capacitive coupling that hybridizes the qubit with the readout resonator, subjecting the qubit to Purcell decay and measurement-induced state transitions (MIST). Despite the widespread use of Purcell filters to suppress qubit decay and near-quantum-limited amplifiers, dispersive readout often lags behind single- and two-qubit gates in both speed and fidelity. Here, we experimentally demonstrate junction readout, a simple readout architecture that realizes a strong qubit-resonator cross-Kerr interaction without relying on a transverse coupling. This interaction is achieved by coupling a transmon qubit to its readout resonator through both a capacitance and a Josephson junction. By varying the qubit frequency, we show that this hybrid coupling provides intrinsic Purcell protection and enhanced resilience to MIST, enabling readout at high photon numbers. While junction readout is compatible with conventional linear measurement, in this work we exploit the nonlinear coupling to intentionally engineer a large Kerr nonlinearity in the resonator, enabling bifurcation-based readout. Using this approach, we achieve a 99.4 % assignment fidelity with a 68 ns integration time and a 98.4 % QND fidelity without an external Purcell filter or a near-quantum-limited amplifier. These results establish the junction readout architecture with bifurcation-based readout as a scalable and practical alternative to dispersive readout, enabling fast, high-fidelity qubit measurement with reduced hardware overhead.
SurgeQ: A Hybrid Framework for Ultra-Fast Quantum Processor Design and Crosstalk-Aware Circuit Execution
This paper presents SurgeQ, a hybrid hardware-software approach for quantum computing that uses stronger coupling between qubits to enable faster two-qubit gates, while using smart scheduling algorithms to minimize the increased crosstalk noise that comes with stronger coupling.
Key Contributions
- Hardware-software co-design framework combining coupling-strengthened fast gates with crosstalk-aware scheduling
- Systematic evaluation pipeline for optimizing coupling strength under composite noise models
- Demonstration of up to a million-fold fidelity improvement in large-scale quantum circuits
View Full Abstract
Executing quantum circuits on superconducting platforms requires balancing the trade-off between gate errors and crosstalk. To address this, we introduce SurgeQ, a hardware-software co-design strategy consisting of a design phase and an execution phase, to achieve accelerated circuit execution and improve overall program fidelity. SurgeQ employs coupling-strengthened, faster two-qubit gates while mitigating their increased crosstalk through a tailored scheduling strategy. With detailed consideration of composite noise models, we establish a systematic evaluation pipeline to identify the optimal coupling strength. Evaluations on a comprehensive suite of real-world benchmarks show that SurgeQ generally achieves higher fidelity than up-to-date baselines, and remains effective in combating exponential fidelity decay, achieving up to a million-fold improvement in large-scale circuits.
Holographic codes seen through ZX-calculus
This paper analyzes holographic quantum error correcting codes using ZX-calculus, a graphical language for quantum computation. The authors study the pentagon holographic code's structure through diagrams and introduce new codes based on hyperbolic tessellations, testing their error correction performance.
Key Contributions
- Diagrammatic analysis of pentagon holographic quantum error correcting code using ZX-calculus
- Introduction of new quantum error correcting codes based on dual hyperbolic tessellations with belief propagation decoding
View Full Abstract
We re-visit the pentagon holographic quantum error correcting code from a ZX-calculus perspective. By expressing the underlying tensors as ZX-diagrams, we study the stabiliser structure of the code via Pauli webs. In addition, we obtain a diagrammatic understanding of its logical operators, encoding isometries, Rényi entropy and toy models of black holes/wormholes. Then, motivated by the pentagon holographic code's ZX-diagram, we introduce a family of codes constructed from ZX-diagrams on its dual hyperbolic tessellations and study their logical error rates using belief propagation decoders.
Conveyor-mode electron shuttling through a T-junction in Si/SiGe
This paper demonstrates a T-junction device that can route single electrons and spin qubits between two conveyor-belt shuttle lanes in silicon quantum dots, achieving nearly perfect transfer fidelity and enabling two-dimensional quantum computing architectures.
Key Contributions
- Demonstrated T-junction routing between independent shuttle lanes with inter-lane charge-transfer fidelity consistent with 100% (lower bound 99.9999991%)
- Showed controllable electron pattern swapping across 54 quantum dots using simple atomic pulses
- Established foundation for scalable 2D quantum computing architectures with flexible spin qubit routing
View Full Abstract
Conveyor-mode shuttling in gated Si/SiGe devices enables adiabatic transfer of single electrons, electron patterns and spin qubits confined in quantum dots across several microns with a scalable number of signal lines. To realize their full potential, linear shuttle lanes must connect into a two-dimensional grid with controllable routing. We introduce a T-junction device linking two independently driven shuttle lanes. Electron routing across the junction requires no extra control lines beyond the four channels per conveyor belt. We measure an inter-lane charge transfer fidelity of $F = 100.0000000^{+0}_{-9\times 10^{-7}}\,\%$ at an instantaneous electron velocity of $270\,\mathrm{mm}\,\mathrm{s}^{-1}$. The filling of 54 quantum dots is controlled by simple atomic pulses, allowing us to swap electron patterns, laying the groundwork for a native spin-qubit SWAP gate. This T-junction establishes a path towards scalable, two-dimensional quantum computing architectures with flexible spin qubit routing for quantum error correction.
Integration and Resource Estimation of Cryoelectronics for Superconducting Fault-Tolerant Quantum Computers
This paper analyzes the requirements and approaches for integrating cryogenic electronics into large-scale superconducting quantum computers, providing a framework for estimating resources needed to control fault-tolerant quantum systems. The authors examine how to partition control functions between room-temperature electronics, cryo-CMOS at 4K, and superconducting logic to enable scaling to cryptographically relevant quantum computers.
Key Contributions
- Development of transparent first-order accounting framework for cryoelectronics resource estimation
- Analysis of functional partitioning strategies across temperature stages for fault-tolerant quantum computer control systems
- Demonstration of scaling constraints using RSA-2048 cryptographic benchmark as reference point
View Full Abstract
Scaling superconducting quantum computers to the fault-tolerant regime calls for a commensurate scaling of the classical control and readout stack. Today's systems largely rely on room-temperature, rack-based instrumentation connected to dilution-refrigerator cryostats through many coaxial cables. Looking ahead, superconducting fault-tolerant quantum computers (FTQCs) will likely adopt a heterogeneous quantum-classical architecture that places selected electronics at cryogenic stages (for example, cryo-CMOS at 4 K and superconducting digital logic at 4 K and/or mK stages) to curb wiring and thermal-load overheads. This review distills key requirements, surveys representative room-temperature and cryogenic approaches, and provides a transparent first-order accounting framework for cryoelectronics. Using an RSA-2048-scale benchmark as a concrete reference point, we illustrate how scaling targets motivate constraints on multiplexing and stage-wise cryogenic power, and discuss implications for functional partitioning across room-temperature electronics, cryo-CMOS, and superconducting logic.
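The flavor of such first-order accounting can be captured in a few lines: pick a qubit count, a wiring model, and a multiplexing factor, then check the dissipated power against an assumed cooling budget. Every number below is a placeholder (the 20-million-qubit figure echoes published RSA-2048 factoring estimates), not a value taken from the paper.

```python
# Hedged first-order accounting sketch in the spirit of the review's framework.
# All numbers are placeholder assumptions, not values from the paper.
def stage_power(n_physical_qubits, lines_per_qubit, mux_factor, power_per_line_w):
    """Dissipated power at a cryogenic stage for a given multiplexing factor."""
    n_lines = n_physical_qubits * lines_per_qubit / mux_factor
    return n_lines * power_per_line_w

n_qubits = 20_000_000        # physical qubits for an RSA-2048-scale machine (illustrative)
lines_per_qubit = 2          # e.g. one drive and one readout channel per qubit (assumed)
p_per_line_4k = 1e-3         # W dissipated per active line at the 4 K stage (assumed)
cooling_budget_4k = 2.0e3    # W of available cooling power at 4 K (assumed)

for mux in (1, 10, 100, 1000):
    p = stage_power(n_qubits, lines_per_qubit, mux, p_per_line_4k)
    ok = "within" if p <= cooling_budget_4k else "exceeds"
    print(f"mux x{mux:4d}: {p/1e3:8.1f} kW at 4 K ({ok} the assumed budget)")
```

Even with made-up inputs, the exercise shows how a fixed cooling budget turns directly into a required multiplexing factor, which is the kind of constraint the review derives systematically.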
Strip-Symmetric Quantum Codes for Biased Noise: Z-Decoupling in Stabilizer and Floquet Codes
This paper introduces a framework for quantum error correction codes called 'strip-symmetric biased codes' that are optimized for dephasing noise, where errors can be efficiently decoded by breaking the problem into independent one-dimensional strips rather than solving the full two-dimensional decoding problem.
Key Contributions
- Defines strip-symmetric biased codes as a unifying framework for existing bias-tailored quantum error correction codes
- Shows that Z-error decoding can be factorized across independent strips, reducing computational complexity for matching-based decoders
- Provides design tools for constructing new bias-tailored Floquet codes using synthetic detector models and domain-wise Clifford constructions
View Full Abstract
Bias-tailored codes such as the XZZX surface code and the domain wall color code achieve high dephasing-biased thresholds because, in the infinite-bias limit, their $Z$ syndromes decouple into one-dimensional repetition-like chains; the $X^3Z^3$ Floquet code shows an analogous strip-wise structure for detector events in spacetime. We capture this common mechanism by defining strip-symmetric biased codes, a class of static stabilizer and dynamical (Floquet) codes for which, under pure dephasing and perfect measurements, each elementary $Z$ fault is confined to a strip and the Z-detector--fault incidence matrix is block diagonal. For such codes the Z-detector hypergraph decomposes into independent strip components and maximum-likelihood $Z$ decoding factorizes across strips, yielding complexity savings for matching-based decoders. We characterize strip symmetry via per-strip stabilizer products, viewed as a $\mathbb{Z}_2$ 1-form symmetry, place XZZX, the domain wall color code, and $X^3Z^3$ in this framework, and introduce synthetic strip-symmetric detector models and domain-wise Clifford constructions that serve as design tools for new bias-tailored Floquet codes.
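The practical payoff of strip symmetry is that the Z syndrome splits into pieces that can be decoded independently. The sketch below illustrates that idea on a toy block-diagonal detector-fault incidence matrix, using connected components to find the strips and a brute-force minimum-weight search within each strip; real decoders would use matching, and the matrix here is invented purely for illustration.

```python
# Toy illustration of strip-wise decoding for a block-diagonal Z-detector-fault
# incidence matrix (the matrix below is made up; real codes would use matching).
import numpy as np
from itertools import combinations

def strip_components(H):
    """Group detector rows and fault columns of H (over GF(2)) into connected components."""
    n_det, n_fault = H.shape
    parent = list(range(n_det + n_fault))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for i in range(n_det):
        for j in range(n_fault):
            if H[i, j]:
                union(i, n_det + j)
    comps = {}
    for i in range(n_det):
        comps.setdefault(find(i), [[], []])[0].append(i)
    for j in range(n_fault):
        comps.setdefault(find(n_det + j), [[], []])[1].append(j)
    return list(comps.values())

def decode_strip(H_strip, syndrome_strip):
    """Minimum-weight brute-force decoding inside a single strip."""
    n = H_strip.shape[1]
    for w in range(n + 1):
        for faults in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(faults)] = 1
            if np.array_equal(H_strip @ e % 2, syndrome_strip):
                return e
    return None

# Two decoupled repetition-like strips stacked into one incidence matrix
H = np.array([[1, 1, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1, 1]])
true_error = np.array([0, 1, 0, 1, 0, 0])
syndrome = H @ true_error % 2

correction = np.zeros(H.shape[1], dtype=int)
for det_rows, fault_cols in strip_components(H):
    correction[fault_cols] = decode_strip(H[np.ix_(det_rows, fault_cols)], syndrome[det_rows])
print("recovered error:", correction)
```

The global decoding problem never has to be solved at once: each strip is a small, one-dimensional instance, which is where the complexity savings claimed in the abstract come from.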
Optimizing Fault-tolerant Cat State Preparation
This paper presents an optimized method for preparing cat states (quantum superposition states) that are essential for fault-tolerant quantum computing. The approach uses two low-depth circuits combined with a transversal CNOT operation to create high-quality cat states with fewer resources than previous methods.
Key Contributions
- Novel cat state preparation scheme achieving fault distance up to 9 with reduced circuit depth and CNOT count
- Three optimization methods for the transversal CNOT wiring: two based on Satisfiability Modulo Theories (SMT) solving and one heuristic local-repair search
- Resource-efficient construction requiring only ⌈log₂ w⌉+1 depth and at most 3w-2 CNOTs
View Full Abstract
Cat states are an important resource for fault-tolerant quantum computing, where they serve as building blocks for a variety of fault-tolerant primitives. Consequently, the ability to prepare high-quality cat states at large fault distances is essential. While optimizations for low fault distances or small numbers of qubits exist, higher fault distances can be achieved via generalized constructions with potentially suboptimal circuit sizes. In this work, we propose a cat state preparation scheme based on preparing two cat states with low-depth circuits, followed by a transversal CNOT and measurement of one of the states. This scheme prepares $w$-qubit cat states fault-tolerantly up to fault distances of $9$ using $\lceil\log_2 w\rceil+1$ depth and at most $3w-2$ CNOTs and $2w$ qubits. We discuss that the combinatorially challenging aspect of this construction is the precise wiring of the transversal CNOT and propose three methods for finding these: two based on Satisfiability Modulo Theory solving and one heuristic search based on a local repair strategy. Numerical evaluations show that our circuits achieve a high fault-distance while requiring fewer resources as generalized constructions.
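The ⌈log₂ w⌉+1 depth in the resource claim comes from preparing each cat state with a fan-out CNOT tree. The sketch below only reproduces that standard bookkeeping (layers and CNOT counts of one tree); the fault-tolerant part of the paper, i.e. the wiring of the transversal CNOT between the two trees and the measurement, is not modeled here.

```python
# Depth/CNOT bookkeeping for a standard fan-out (GHZ) tree on w qubits.
# This is generic background, not the paper's fault-tolerant construction.
from math import ceil, log2

def cat_state_layers(w):
    """Return CNOT layers (lists of (control, target) pairs) of a fan-out tree on w qubits."""
    prepared = [0]          # qubit 0 starts in |+> after one Hadamard
    layers = []
    while len(prepared) < w:
        layer, targets = [], []
        for c in prepared:
            t = len(prepared) + len(targets)
            if t >= w:
                break
            layer.append((c, t))
            targets.append(t)
        prepared += targets
        layers.append(layer)
    return layers

for w in (4, 8, 13, 32):
    layers = cat_state_layers(w)
    n_cnots = sum(len(layer) for layer in layers)
    print(f"w={w:3d}: depth={1 + len(layers)} (bound {ceil(log2(w)) + 1}), CNOTs={n_cnots}")
```

Each tree uses w-1 CNOTs, so two trees use 2(w-1) and the transversal CNOT between them adds another w, reproducing the 3w-2 CNOT bound quoted in the contributions above.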
FTCircuitBench: A Benchmark Suite for Fault-Tolerant Quantum Compilation and Architecture
This paper introduces FTCircuitBench, a comprehensive benchmark suite and toolkit for evaluating quantum error correction and fault-tolerant quantum computing compilation. It provides standardized algorithms, compilation pipelines, and evaluation tools to help researchers develop and optimize fault-tolerant quantum computing systems.
Key Contributions
- Created standardized benchmark suite for fault-tolerant quantum compilation with pre-compiled algorithm instances
- Developed modular end-to-end compilation pipeline supporting various fault-tolerant architectures and optimization passes
- Provided comprehensive toolkit for evaluating quantum algorithms and optimizations across the full compilation stack
View Full Abstract
Realizing large-scale quantum advantage is expected to require quantum error correction (QEC), making the compilation and optimization of logical operations a critical area of research. Logical computation imposes distinct constraints and operational paradigms that differ from those of the Noisy Intermediate-Scale Quantum (NISQ) regime, motivating the continued evolution of compilation tools. Given the complexity of this emerging stack, where factors such as gate decomposition precision and computational models must be co-designed, standardized benchmarks and toolkits are valuable for evaluating progress. To support this need, we introduce FTCircuitBench, which serves as: (1) a benchmark suite of impactful quantum algorithms, featuring pre-compiled instances in both Clifford+T and Pauli Based Computation models; (2) a modular end-to-end pipeline allowing users to compile and decompose algorithms for various fault-tolerant architectures, supporting both prebuilt and custom optimization passes; and (3) a toolkit for evaluating the impact of algorithms and optimization across the full compilation stack, providing detailed numerical analysis at each stage. FTCircuitBench is fully open-sourced and maintained on Github.
Energetics of Rydberg-atom Quantum Computing
This paper analyzes the energy consumption of Rydberg-atom quantum computers by examining the energetic costs of different components and algorithms. The researchers investigate the energy efficiency of executing quantum algorithms like Quantum Phase Estimation and Quantum Fourier Transform on Rydberg platforms, comparing these costs to classical supercomputers to evaluate potential quantum energy advantages.
Key Contributions
- First-steps analysis of the energy consumption of a Rydberg-atom quantum computing platform, from atom preparation to measurement
- Energy scaling analysis and comparison between quantum Fourier transform on Rydberg systems versus classical discrete Fourier transform on supercomputers
- Identification of energy bottlenecks and optimization opportunities in different components of Rydberg quantum computers
View Full Abstract
Quantum computing exploits the properties of Quantum Mechanics to solve problems faster than classical computers. The potential applications of this technology have been widely explored, and extensive research over the past decades has been dedicated to developing scalable quantum computers. However, the question of the energetic performance of quantum computation has only gained attention more recently, and its importance is now recognized. In fact, quantum computers can only be a viable alternative if their energy cost scales favorably, and some research has shown that there is even a potential quantum energy advantage. Rydberg atoms have emerged recently as one of the most promising platforms to implement a large-scale quantum computer, with significant advances made in recent years. This work aims at contributing first steps to understand the energy efficiency of this platform, namely by investigating the energy consumption of the different elements of a Rydberg atom quantum computer. First, an experimental implementation of the Quantum Phase Estimation algorithm is analyzed, and an estimation of the energetic cost of executing this algorithm is calculated. Then, a potential scaling of the energy cost of performing the Quantum Fourier Transform with Rydberg atoms is derived. This analysis facilitates a comparison of the energy consumption of different elements within a Rydberg atom quantum computer, from the preparation of the atoms to the execution of the algorithm, and the measurement of the final state, enabling the evaluation of the energy expenditure of the Rydberg platform and the identification of potential improvements. Finally, we used the Quantum Fourier Transform as an energetic benchmark, comparing the scaling we obtained to that of the execution of the Discrete Fourier Transform in two state-of-the-art classical supercomputers.
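The paper's benchmark idea (comparing the QFT on quantum hardware to the DFT on a supercomputer) can be sketched at the level of operation counts. Only the n(n+1)/2 gate count of the textbook QFT and the ~5N log₂ N flop estimate for a radix-2 FFT are standard facts; the per-operation energies below are placeholders to be replaced with measured platform values, not numbers from the paper.

```python
# Scaling-only sketch of a QFT-vs-FFT energy benchmark. The energy constants are
# placeholders (assumptions), not the paper's measured or derived values.
import math

def qft_gate_count(n_qubits):
    """Textbook QFT circuit: n Hadamards plus n(n-1)/2 controlled phases."""
    return n_qubits * (n_qubits + 1) // 2

def fft_flop_count(n_points):
    """Standard radix-2 FFT estimate, about 5 N log2 N floating-point operations."""
    return 5 * n_points * math.log2(n_points)

E_GATE = 1e-3    # J per quantum gate cycle, full-system placeholder (assumed)
E_FLOP = 1e-11   # J per classical flop, placeholder (~100 GFLOPS/W class hardware, assumed)

for n in (20, 40, 60):
    N = 2 ** n
    eq = qft_gate_count(n) * E_GATE
    ec = fft_flop_count(N) * E_FLOP
    print(f"n={n:2d} qubits (N=2^{n} points): QFT ~{eq:.2e} J vs FFT ~{ec:.2e} J (placeholder energies)")
```

Because the quantum gate count grows only quadratically in n while the classical flop count grows linearly in N = 2^n, any fixed pair of per-operation energies eventually produces a crossover, which is the scaling argument the benchmark is built around.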
Gradient descent reliably finds depth- and gate-optimal circuits for generic unitaries
This paper demonstrates that gradient descent optimization can efficiently find optimal quantum circuits for generic unitary operations, challenging previous assumptions that such optimization required more complex combinatorial search methods. The key insight is that avoiding parameter-deficient circuit structures allows simple gradient descent to reliably achieve both depth-optimal and gate-optimal circuits.
Key Contributions
- Showed that gradient descent can reliably find optimal quantum circuits for generic unitaries, contrary to previous beliefs requiring combinatorial search
- Identified that avoiding parameter-deficient circuit skeletons is key to successful optimization, explaining discrepancies with earlier work
View Full Abstract
When the gate set has continuous parameters, synthesizing a unitary operator as a quantum circuit is always possible using exact methods, but finding minimal circuits efficiently remains a challenging problem. The landscape is very different for compiled unitaries, which arise from programming and typically have short circuits, as compared with generic unitaries, which use all parameters and typically require circuits of maximal size. We show that simple gradient descent reliably finds depth- and gate-optimal circuits for generic unitaries, including in the presence of restricted chip connectivity. This runs counter to earlier evidence that optimal synthesis required combinatorial search, and we show that this discrepancy can be explained by avoiding the random selection of certain parameter-deficient circuit skeletons.
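A toy version of the paper's optimization setup is easy to run: parameterize a small circuit, define the infidelity to a random target, and follow the gradient. The single-qubit sketch below (an Rz-Ry-Rz template matched to a random SU(2) target by finite-difference gradient descent) only illustrates the workflow; the paper's results concern multi-qubit circuits, restricted connectivity, and depth/gate optimality, none of which appear here.

```python
# Minimal single-qubit illustration: gradient descent on an Rz-Ry-Rz template
# typically drives the synthesis infidelity of a random target to numerical zero.
# This is a workflow sketch, not the paper's multi-qubit benchmark.
import numpy as np

rng = np.random.default_rng(7)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def circuit(p):
    return rz(p[2]) @ ry(p[1]) @ rz(p[0])

def infidelity(p, target):
    # Vanishes iff circuit(p) equals the target up to a global phase
    return 1 - abs(np.trace(target.conj().T @ circuit(p))) ** 2 / 4

target = circuit(rng.uniform(0, 2 * np.pi, 3))   # a "generic" single-qubit unitary

def descend(p, target, lr=0.3, eps=1e-6, steps=5000):
    for _ in range(steps):
        grad = np.array([(infidelity(p + eps * np.eye(3)[k], target)
                          - infidelity(p - eps * np.eye(3)[k], target)) / (2 * eps)
                         for k in range(3)])
        p = p - lr * grad
    return p

p_opt = descend(rng.uniform(0, 2 * np.pi, 3), target)
print("final infidelity:", infidelity(p_opt, target))
```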
Minimization of AND-XOR Expressions with Decoders for Quantum Circuits
This paper presents new methods for designing quantum circuits that perform logical operations more efficiently by using decoder-based three-level structures instead of traditional two-level approaches, aiming to reduce the quantum cost of reversible circuits through novel mathematical forms called MVI-FPRM.
Key Contributions
- Introduction of Multi-Valued Input Fixed Polarity Reed-Muller (MVI-FPRM) forms for quantum circuit synthesis
- Development of decoder-based three-level circuit architecture to reduce quantum costs compared to traditional two-level ESOP methods
- Creation of two practical algorithms (products-matching and butterfly diagrams) for three-level circuit synthesis
View Full Abstract
This paper introduces a new logic structure for reversible quantum circuit synthesis. Our synthesis method aims to minimize the quantum cost of reversible quantum circuits with decoders. In this method, multi-valued input, binary output (MVI) functions are utilized as a mathematical concept only, but the circuits are binary. We introduce the new concept of "Multi-Valued Input Fixed Polarity Reed-Muller (MVI-FPRM)" forms. Our decoder-based circuit uses three logical levels in contrast to commonly-used methods based on Exclusive-or Sum of Products (ESOP) with two levels (AND-XOR expressions), realized by Toffoli gates. In general, the high number of input qubits in the resulting Toffoli gates is a problem that greatly impacts the quantum cost. Using decoders decreases the number of input qubits in these Toffoli gates. We present two practical algorithms for three-level circuit synthesis by finding the MVI-FPRM: products-matching and the newly developed butterfly diagrams. The best MVI-FPRM forms are factorized and reduced to approximate Multi-Valued Input Generalized Reed-Muller (MVI-GRM) forms.
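For readers unfamiliar with fixed-polarity Reed-Muller (FPRM) forms, the butterfly transform the abstract alludes to is easiest to see in the ordinary two-valued case. The sketch below computes the FPRM spectrum of a Boolean function for a chosen polarity via the standard GF(2) butterfly; the paper's multi-valued-input (MVI-FPRM) generalization and decoder-based circuit mapping are not reproduced here.

```python
# Standard two-valued FPRM butterfly (background only, not the paper's MVI-FPRM method).
def fprm_coefficients(truth_table, polarity):
    """Fixed-polarity Reed-Muller spectrum of a Boolean function over GF(2).

    truth_table: 0/1 values of length 2**n, where bit i of the index is the value of x_i.
    polarity:    n-bit list; polarity[i] = 1 means variable x_i appears complemented.
    Returns the 2**n FPRM coefficients (1 = that product term appears in the AND-XOR form).
    """
    n = len(polarity)
    assert len(truth_table) == 1 << n
    # Complement selected inputs by reordering the table: XOR the index with the polarity mask.
    mask = sum(bit << i for i, bit in enumerate(polarity))
    f = [truth_table[i ^ mask] for i in range(1 << n)]
    # Butterfly (binary Reed-Muller / Moebius) transform, one stage per variable.
    for i in range(n):
        step = 1 << i
        for j in range(1 << n):
            if j & step:
                f[j] ^= f[j ^ step]
    return f

# Example: 3-variable majority function MAJ(x0, x1, x2)
tt = [1 if bin(i).count("1") >= 2 else 0 for i in range(8)]
print("positive polarity:", fprm_coefficients(tt, [0, 0, 0]))
print("x2 complemented:  ", fprm_coefficients(tt, [0, 0, 1]))
```

For the positive polarity the nonzero coefficients land on the three two-variable products, recovering the familiar expansion MAJ = x0x1 ⊕ x0x2 ⊕ x1x2.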
Developments in superconducting erasure qubits for hardware-efficient quantum error correction
This paper reviews recent developments in superconducting erasure qubits, a specialized type of quantum bit designed to have predictable error patterns that make quantum error correction more efficient. The authors focus on dual-rail encoded implementations and discuss how these qubits can enable hardware-efficient quantum error correction by combining built-in error correction with additional outer codes.
Key Contributions
- Comprehensive review of superconducting erasure qubit implementations and their hardware-efficient quantum error correction capabilities
- Analysis of dual-rail encoding schemes and concatenated error correction approaches for fault-tolerant quantum computing
View Full Abstract
Quantum computers are inherently noisy, and a crucial challenge for achieving large-scale, fault-tolerant quantum computing is to implement quantum error correction. A promising direction that has made rapid recent progress is to design hardware that has a specific noise profile, leading to a significantly higher threshold for noise with certain quantum error correcting codes. This Perspective focuses on erasure qubits, which enable hardware-efficient quantum error correction, by concatenating an inner code built-in to the hardware with an outer code. We focus on implementations of dual-rail encoded erasure qubits using superconducting qubits, giving an overview of recent developments in theory and simulation, and hardware demonstrators. We also discuss the differences between implementations; near-term applications using quantum error detection; and the open problems for developing this approach towards early fault-tolerant quantum computers.
Flux-noise-resilient transmon qubit via a doubly-connected gradiometric design
This paper presents a new transmon qubit design called the '8-mon' that uses a doubly-connected gradiometric structure with a nano-airbridge to significantly reduce sensitivity to magnetic flux noise while maintaining full electrical tunability. The design achieves nearly threefold improvement in coherence time compared to standard X-mon qubits without requiring additional control overhead.
Key Contributions
- Novel doubly-connected gradiometric transmon qubit design that suppresses flux noise while preserving tunability
- Demonstration of a nearly threefold enhancement in Ramsey coherence time T2* in the small flux-bias regime, reaching the same order as T1
- Development of spatially correlated flux-noise model that quantitatively reproduces experimental coherence trends
- Practical pathway toward more stable superconducting quantum processors with superior long-term frequency stability
View Full Abstract
Frequency-tunable superconducting transmon qubits are a cornerstone of scalable quantum processors, yet their performance is often degraded by sensitivity to low-frequency flux noise. Here we present a doubly-connected gradiometric transmon (the "8-mon") that incorporates a nano-airbridge to link its two loops. This design preserves full electrical tunability and remains fully compatible with standard X-mon control and readout, requiring no additional measurement overhead. The airbridge interconnect eliminates dielectric loss, which enables the 8-mon to achieve both energy relaxation times $T_{\rm 1}$ comparable to reference X-mons and, in the small flux-bias regime, a nearly threefold enhancement in Ramsey coherence time $T_{\rm 2}^*$. This improved $T_{\rm 2}^*$ reaches the same order as $T_{\rm 1}$ without employing echo decoupling. The device also exhibits superior long-term frequency stability even without any magnetic field shielding. We develop a spatially correlated flux-noise model whose simulations quantitatively reproduce the experimental coherence trends, revealing the coexistence of short- and long-correlation-length magnetic noise in the superconducting chip environment. By unifying high tunability with intrinsic flux-noise suppression through a robust geometric design, the 8-mon provides a practical pathway toward more coherent and stable superconducting quantum processors.
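The benefit of a gradiometric loop can be seen with a few lines of statistics: the qubit responds to the flux difference between its two loops, so noise that is correlated across both loops (long correlation length) cancels, while uncorrelated local noise does not. The snippet below is a generic toy model with arbitrary numbers, not a simulation of the 8-mon or of the paper's spatially correlated noise model.

```python
# Generic toy model of common-mode flux-noise rejection by a gradiometric loop.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

def correlated_pair(rho):
    """Two unit-variance flux-noise records with correlation coefficient rho."""
    shared = rng.standard_normal(n_samples)
    a = rng.standard_normal(n_samples)
    b = rng.standard_normal(n_samples)
    phi1 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * a
    phi2 = np.sqrt(rho) * shared + np.sqrt(1 - rho) * b
    return phi1, phi2

for rho in (0.99, 0.5, 0.0):   # long-, intermediate-, short-correlation-length noise
    phi1, phi2 = correlated_pair(rho)
    single = np.std(phi1)              # flux seen by a conventional single loop
    gradio = np.std(phi1 - phi2)       # flux difference seen by the gradiometric loop
    print(f"correlation {rho:.2f}: single-loop rms {single:.2f}, gradiometer rms {gradio:.2f}")
```

The output also shows the honest trade-off: fully uncorrelated noise is slightly amplified (by a factor of about √2), which is why the paper's spatially correlated noise model matters for predicting the net coherence gain.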
Parallel Quantum Gates via Scalable Subsystem-Optimized Robust Control
This paper presents a method to reduce crosstalk errors when running multiple quantum gates simultaneously by optimizing control over smaller subsystems rather than the entire quantum processor. The approach dramatically reduces computational costs while improving gate fidelities, making it practical for large-scale quantum computers with hundreds of qubits.
Key Contributions
- Scalable subsystem-based optimization that reduces crosstalk errors in parallel quantum gate operations
- Demonstration of improved noise scaling from exponential to linear for parallel single-qubit gates across multiple quantum computing platforms
- Platform-agnostic framework that works without precise crosstalk knowledge or specific connectivity assumptions
View Full Abstract
Accurate and efficient implementation of parallel quantum gates is crucial for scalable quantum information processing. However, the unavoidable crosstalk between qubits in current noisy processors impedes the achievement of high gate fidelities and renders full Hilbert-space control optimization prohibitively difficult. Here, we overcome this challenge by reducing the full-system optimization to crosstalk-robust control over constant-sized subsystems, which dramatically reduces the computational cost. Our method effectively eliminates the leading-order gate operation deviations induced by crosstalk, thereby suppressing error rates. Within this framework, we construct analytical pulse solutions for parallel single-qubit gates and numerical pulses for parallel multi-qubit operations. We validate the proposed approach numerically across multiple platforms, including coupled nitrogen-vacancy centers, a nuclear-spin processor, and superconducting-qubit arrays with up to 200 qubits. As a result, the noise scaling is reduced from exponential to linear for parallel single-qubit gates, and an order-of-magnitude reduction is achieved for parallel multi-qubit gates. Moreover, our method does not require precise knowledge of crosstalk strengths and makes no assumption about the underlying qubit connectivity or lattice geometry, thereby establishing a scalable framework for parallel quantum control in large-scale quantum architectures.
Design and Characterization of Compact Acousto-Optic-Deflector Individual Addressing System for Trapped-Ion Quantum Computing
This paper presents a compact beam-steering system using acousto-optic deflectors to individually address ions in trapped-ion quantum computers. The system achieves high precision beam control with minimal crosstalk, enabling manipulation of individual qubits in chains of up to 30 ions.
Key Contributions
- Compact AOD-based beam steering system with <1 square foot footprint for improved optical stability
- Demonstrated individual addressing of a 30-ion chain, with intensity crosstalk < 9×10^-4 measured at neighboring ions in a five-ion chain
- Fast beam switching capability (~240 ns) enabling high-fidelity quantum operations on long ion chains
View Full Abstract
We present a compact design for a beam-steering system based on acousto-optic-deflectors (AODs) used as an individual addressing system for trapped-ion quantum computing. The design targets to minimize the optomechanical degrees of freedom and the optical beam paths to improve optical stability, and we successfully implemented a solution with a compact footprint of less than 1 square foot. The system characterization results show that we achieve clean Gaussian beams at 355nm wavelength with a beam steering range of $\sim$50 times the beam diameter, and an intensity crosstalk of $< 9 \times 10^{-4}$ at all neighboring ions in a five-ion chain. Based on these capabilities, we experimentally demonstrate individual addressing of a 30-ion chain. We estimate the beam switching time of the AOD to be $\sim$240 ns. The compact system design is expected to provide high optical stability, providing the potential for high-fidelity trapped-ion quantum computing with long ion chains.
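As a sanity check on the crosstalk figure, one can ask what an ideal Gaussian addressing beam would give: the relative intensity at a neighboring ion a distance d from the beam center is exp(-2d^2/w0^2). The numbers below are a back-of-envelope estimate under that idealization; the measured crosstalk also reflects diffraction, aberrations, and AOD imperfections.

```python
# Back-of-envelope crosstalk of an ideal Gaussian beam (idealization, not the measured system).
import numpy as np

def gaussian_crosstalk(ion_spacing, beam_waist):
    """Relative intensity exp(-2 d^2 / w0^2) at a neighbor a distance d from the beam center."""
    return np.exp(-2 * (ion_spacing / beam_waist) ** 2)

target = 9e-4
ratio = np.sqrt(-np.log(target) / 2)
print(f"d/w0 >= {ratio:.2f} needed for crosstalk below {target} with a purely Gaussian beam")
print("crosstalk at d = 2 w0:", gaussian_crosstalk(2.0, 1.0))
```

So an ion spacing of roughly two beam-waist radii already puts a purely Gaussian beam at the reported crosstalk level, which makes the measured value a reasonable figure for a well-corrected optical system.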
Analytical Solutions to Asymmetric Two-Photon Rabi Model
This paper develops analytical solutions for a generalized quantum Rabi model that includes two-photon interactions and asymmetric terms using the Segal-Bargmann representation and Bethe ansatz approach. The work provides exact mathematical solutions to a fourth-order differential equation describing light-matter interactions in quantum systems.
Key Contributions
- Development of nearly exact analytical solutions for asymmetric two-photon Rabi model using Bethe ansatz
- Application of Segal-Bargmann representation to solve fourth-order differential equations in quantum optics
View Full Abstract
Within the Segal-Bargmann representation, a generalized Rabi model is considered that includes both two-photon and asymmetric terms. It is shown that, through a suitable transformation, nearly exact solutions can be obtained using the Bethe ansatz approach. Applying this approach to the meromorphic structure of the resulting differential equation, solutions in exact analytical form of the fourth-order problem are presented for both an arbitrary state and for the restriction between the parameters.
Emission Dynamics of Rydberg Excitons in $\mathbf{\mathrm{Cu_2O}}$: Distinguishing Second Harmonic Generation from Secondary Emission
This paper studies Rydberg excitons in copper oxide crystals and develops methods to distinguish between two different optical responses when excited with laser light: coherent second-harmonic generation and secondary emission from excited states. The researchers use time-resolved measurements to separate these processes and understand how they depend on various experimental conditions.
Key Contributions
- Development of time-resolved methods to cleanly separate second-harmonic generation from secondary emission in Rydberg excitons
- Systematic mapping of how both optical processes depend on quantum number n, temperature, excitation power, and crystal quality
- Establishment of practical criteria for identifying different emission channels in nonlinear optical experiments
View Full Abstract
Rydberg excitons in $\mathrm{Cu_2O}$ simultaneously give rise to two very different optical responses under resonant two-photon excitation: a coherent second-harmonic generation (SHG) signal mediated by the excitonic second order susceptibility tensor $\chi^{(2)}$, and a secondary emission (SE) originating from the radiative decay of real exciton populations. Distinguishing these two channels is essential for interpreting nonlinear and quantum-optical experiments based on high-$n$ states, yet their temporal, spectral, and power-dependent signatures often overlap. Here we use time-resolved resonant two-photon excitation to cleanly separate SHG and SE and to map how each depends on $n$, temperature, excitation power, and crystal quality. This approach reveals the markedly different sensitivities of the two processes to phonons, defects, and many-body effects, and establishes practical criteria for identifying SE and SHG in a wide range of experimental conditions. Our results provide a unified framework for interpreting emission from Rydberg excitons and offer guidelines for future studies aiming to exploit their nonlinear response and long-range interactions.
From compatibility of measurements to exploring Quantum Darwinism on NISQ
This paper studies how Quantum Darwinism (which explains how classical reality emerges from quantum mechanics) breaks down in specific models and connects this to non-classical measurement statistics. The researchers use this connection to develop benchmarking tools for testing quantum hardware characteristics on NISQ devices from IonQ and IBM.
Key Contributions
- Connected breakdown of Quantum Darwinism to non-classical measurement statistics
- Developed benchmarking tools for testing genuine quantum characteristics of NISQ hardware
View Full Abstract
Quantum Darwinism explains how tenets of classical reality, such as objectivity and repeatability, emerge within a quantum universe. As a mathematical framework, Quantum Darwinism also provides guiding principles that determine what physical models support emergent classical behavior, what specific observables obey classical laws, and much more. For instance, in a recent work we elucidated that the limit under which Kirkwood-Dirac quasiprobability distributions become effectively classical coincides with the regime where the underlying physical model obeys the rules of Quantum Darwinism. In the present work, we study the breaking of Quantum Darwinism in a specific model and how that translates to non-classical measurement statistics. Interestingly, this provides effective tools for benchmarking the genuine quantum characteristics of NISQ hardware, which we demonstrate with IonQ's trapped-ion and IBM's superconducting quantum computing platforms.
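The non-classicality marker referenced here, the Kirkwood-Dirac (KD) quasiprobability, is simple to compute: Q_ij = Tr(Π_i^A Π_j^B ρ), which can take negative or complex values when the state and the two measurement bases are mutually incompatible. The snippet below evaluates it for a generic single-qubit example (state |+⟩ with the Z and Y bases), not for the model studied in the paper.

```python
# Kirkwood-Dirac quasiprobability Q[i, j] = Tr(|a_i><a_i| |b_j><b_j| rho).
# Generic single-qubit example (not the paper's model); complex entries flag
# non-classical measurement statistics.
import numpy as np

def kd_distribution(rho, basis_a, basis_b):
    d = rho.shape[0]
    Q = np.empty((d, d), dtype=complex)
    for i in range(d):
        Pa = np.outer(basis_a[:, i], basis_a[:, i].conj())
        for j in range(d):
            Pb = np.outer(basis_b[:, j], basis_b[:, j].conj())
            Q[i, j] = np.trace(Pa @ Pb @ rho)
    return Q

Z_basis = np.eye(2, dtype=complex)                                   # |0>, |1>
Y_basis = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)  # |+i>, |-i>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

Q = kd_distribution(rho, Z_basis, Y_basis)
print(np.round(Q, 3))                        # complex entries: non-classical statistics
print("normalized:", np.isclose(Q.sum(), 1.0))
```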
Remarkable Dates and Place: One Hundred Years Ago
This paper appears to be a historical commentary commemorating the centennial of Schrödinger's development of wave quantum mechanics during his vacation in Arosa, Switzerland in December 1925. It discusses the historical significance and location of this foundational breakthrough in quantum physics.
Key Contributions
- Historical commemoration of Schrödinger's wave equation discovery
- Documentation of the geographical and temporal context of a foundational quantum mechanics breakthrough
View Full Abstract
Exactly a century ago, wave quantum mechanics was born in Arosa, Switzerland. Erwin Schrödinger was vacationing in this classic Swiss Alps town at Christmas 1925 when he made his breakthrough discovery of the wave equation \cite{SchrQMI}.
Fundamental Limitations on the Reliabilities of Power and Work in Quantum Batteries
This paper studies quantum batteries (microscopic energy storage devices for quantum technologies) and discovers fundamental limits on their reliability due to noise and fluctuations. The researchers find that there's an unavoidable trade-off between achieving high power output and maintaining reliable performance, suggesting that hybrid charging schemes work best.
Key Contributions
- Established fundamental lower bounds on noise-to-signal ratios for quantum battery work and power
- Discovered a quantum uncertainty relation that prevents simultaneous suppression of work and power fluctuations
- Demonstrated that hybrid charging schemes optimize the trade-off between high power and reliability
View Full Abstract
Quantum batteries, microscopic devices designed to address energy demands in quantum technologies, promise high power during charging and discharging processes. Yet their practical usefulness and performance depend critically on reliability, quantified by the noise-to-signal ratios (NSRs), i.e., normalized fluctuations of work and power, where reliability decreases inversely with increasing NSR. We establish fundamental limits to this reliability: both work and power NSRs are universally bounded from below by a function of charging speed, imposing a reliability limit inherent to any quantum battery. More strikingly, we find that a quantum mechanical uncertainty relation forbids the simultaneous suppression of work and power fluctuations, revealing a fundamental trade-off that also limits the reliability of quantum batteries. We analyze the trade-off and limits, as well as their scaling behavior, across parallel (local), collective (fully non-local), and hybrid (semi-local) charging schemes for many-body quantum batteries, finding that increasing power by exploiting stronger entanglement comes at the cost of diminished reliability of power. Similar trends are also observed in the charging of quantum batteries utilizing transverse Ising-like interactions. These suggest that achieving both high power and reliability requires neither parallel nor collective charging, but a hybrid charging scheme with an intermediate range of interactions. Therefore, our analysis shapes the practical and efficient design of reliable and high-performance quantum batteries.
When and why non-Hermitian eigenvalues miss eigenstates in topological physics
This paper analyzes non-Hermitian quantum systems where the traditional eigenvalue spectrum fails to detect all eigenstates, particularly in systems with topological properties. The authors show that certain eigenstates remain completely hidden from eigenvalue analysis and explain apparent failures in bulk-edge correspondence through this eigenvalue-eigenstate mismatch.
Key Contributions
- Demonstrates that non-Hermitian systems can have eigenstates completely undetected by eigenvalue analysis
- Explains bulk-edge correspondence failures in non-Hermitian topological systems through eigenvalue spectrum limitations
- Provides exact analytical solutions showing hidden modes and exceptional points in the Hatano-Nelson model
View Full Abstract
Non-Hermitian systems exhibit a fundamental spectral dichotomy absent in Hermitian physics: the eigenvalue spectrum and the eigenstate spectrum can deviate significantly in the thermodynamic limit. We explain how non-Hermitian Hamiltonians can support eigenstates completely undetected by eigenvalues, with the unidirectional Hatano-Nelson model serving as both a minimal realization and universal paradigm for this phenomenon. Through exact analytical solutions, we show that this model contains not only hidden modes but multiple macroscopic hidden exceptional points that appear more generally in all systems with a non-trivial bulk winding. Our framework explains how the apparent bulk-edge correspondence failures in models like the non-Hermitian SSH chain instead reflect the systematic inability of the eigenvalue spectrum to detect certain eigenstates in systems with a skin-effect. These results establish the limitation of the eigenvalue spectrum and suggest how the eigenstate approach can lead to improved characterization of non-Hermitian topology.
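The eigenvalue-eigenstate mismatch is easy to reproduce numerically for the unidirectional Hatano-Nelson chain used as the paper's minimal example: with open boundaries the hopping matrix is a single Jordan block whose eigenvalues all sit at zero, while periodic boundaries spread them over the unit circle, and the only genuine open-chain eigenvector is pinned to one edge. The snippet below checks these statements with plain NumPy; it is a minimal illustration, not the paper's full analysis of hidden exceptional points.

```python
# Unidirectional Hatano-Nelson chain: open vs periodic boundary spectra, plus the
# edge-localized eigenvector (skin effect). Minimal illustration only.
import numpy as np

N = 60
H_obc = np.diag(np.ones(N - 1), k=1)   # hopping to the right only, open boundaries
H_pbc = H_obc.copy()
H_pbc[-1, 0] = 1.0                     # close the chain into a ring

ev_obc = np.linalg.eigvals(H_obc)      # a single Jordan block: all eigenvalues are 0
ev_pbc = np.linalg.eigvals(H_pbc)      # N-th roots of unity on the unit circle

print("OBC: max |eigenvalue| =", np.max(np.abs(ev_obc)))
print("PBC: distinct |eigenvalues| =", np.unique(np.round(np.abs(ev_pbc), 6)))

# The only genuine OBC eigenstate is pinned to the first site (maximal skin effect):
e_edge = np.zeros(N); e_edge[0] = 1.0
print("H_obc @ e_edge = 0:", np.allclose(H_obc @ e_edge, 0.0))
```

The open-boundary eigenvalues carry no trace of the winding that the periodic spectrum and the edge-pinned state clearly display, which is the "eigenvalues miss eigenstates" dichotomy in its simplest form.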
Fast convergence of Majorana Propagation for weakly interacting fermions
This paper introduces and analyzes Majorana Propagation, an algorithm that efficiently simulates the time evolution of quantum systems with weakly interacting fermions. The authors prove that this algorithm can efficiently approximate the dynamics of sparse quartic Hamiltonians for time periods that become arbitrarily long as interaction strength approaches zero.
Key Contributions
- First provable guarantee for Majorana Propagation algorithm in Hamiltonian evolution
- Efficient classical simulation of weakly interacting fermionic systems, with runtime N^{O(log(t/ε))} (quasi-polynomial in general, polynomial for fixed time and accuracy)
View Full Abstract
Simulating the time dynamics of an observable under Hamiltonian evolution is one of the most promising candidates for quantum advantage as we do not expect efficient classical algorithms for this problem except in restricted settings. Here, we introduce such a setting by showing that Majorana Propagation, a simple algorithm combining Trotter steps and truncations, efficiently finds a low-degree approximation of the time-evolved observable as soon as such an approximation exists. This provides the first provable guarantee about Majorana Propagation for Hamiltonian evolution. As an application of this result, we prove that Majorana Propagation can efficiently simulate the time dynamics of any sparse quartic Hamiltonian up to time $t_{\text{max}}(u)$ depending on the interaction strength $u$. For a time horizon $t \leq t_{\text{max}}(u)$, the runtime of the algorithm is $N^{O(\log(t/\varepsilon))}$ where $N$ is the number of Majorana modes and $\varepsilon$ is the error measured in the normalized Frobenius norm. Importantly, in the limit of small $u$, $t_{\text{max}}(u)$ goes to $+\infty$, formalizing the intuition that the algorithm is accurate at all times when the Hamiltonian is quadratic.
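The propagate-and-truncate idea behind Majorana Propagation is easiest to see in its qubit (Pauli-string) analogue: conjugate the observable through each Trotter rotation, let every anticommuting term branch in two, and discard strings that become too long or too small. The sketch below implements that loop for a small spin chain; it shows the generic Pauli-propagation pattern, not the paper's Majorana-operator formulation or its error analysis.

```python
# Qubit (Pauli-string) analogue of propagate-and-truncate; not the paper's Majorana version.
import math

PROD = {  # single-qubit Pauli products: (phase, result)
    ('I', 'I'): (1, 'I'), ('I', 'X'): (1, 'X'), ('I', 'Y'): (1, 'Y'), ('I', 'Z'): (1, 'Z'),
    ('X', 'I'): (1, 'X'), ('X', 'X'): (1, 'I'), ('X', 'Y'): (1j, 'Z'), ('X', 'Z'): (-1j, 'Y'),
    ('Y', 'I'): (1, 'Y'), ('Y', 'X'): (-1j, 'Z'), ('Y', 'Y'): (1, 'I'), ('Y', 'Z'): (1j, 'X'),
    ('Z', 'I'): (1, 'Z'), ('Z', 'X'): (1j, 'Y'), ('Z', 'Y'): (-1j, 'X'), ('Z', 'Z'): (1, 'I'),
}

def pauli_mul(a, b):
    """Product of two Pauli strings as (phase, string)."""
    phase, out = 1 + 0j, []
    for pa, pb in zip(a, b):
        ph, pc = PROD[(pa, pb)]
        phase *= ph
        out.append(pc)
    return phase, tuple(out)

def commutes(a, b):
    """Pauli strings commute iff they differ on an even number of non-identity sites."""
    return sum(pa != 'I' and pb != 'I' and pa != pb for pa, pb in zip(a, b)) % 2 == 0

def rotate(obs, pauli, theta):
    """Heisenberg update O -> exp(+i*theta*P/2) O exp(-i*theta*P/2), term by term."""
    new = {}
    for q, c in obs.items():
        if commutes(q, pauli):
            new[q] = new.get(q, 0) + c
        else:                                   # O -> cos(theta) O - i sin(theta) O*P
            ph, qp = pauli_mul(q, pauli)
            new[q] = new.get(q, 0) + c * math.cos(theta)
            new[qp] = new.get(qp, 0) + c * (-1j) * ph * math.sin(theta)
    return new

def truncate(obs, max_weight, tol=1e-8):
    """The step that keeps the propagation tractable: drop long or tiny strings."""
    return {q: c for q, c in obs.items()
            if abs(c) > tol and sum(p != 'I' for p in q) <= max_weight}

# Example: one Trotter step of a 4-site transverse-field Ising chain acting on Z_0.
n, dt = 4, 0.1
site = lambda p, i: tuple(p if k == i else 'I' for k in range(n))
bond = lambda i: tuple('Z' if k in (i, i + 1) else 'I' for k in range(n))
hamiltonian = [(bond(i), 1.0) for i in range(n - 1)] + [(site('X', i), 0.8) for i in range(n)]

obs = {site('Z', 0): 1.0}
for term, coeff in hamiltonian:            # exp(-i*coeff*P*dt) means theta = 2*coeff*dt
    obs = rotate(obs, term, 2 * coeff * dt)
obs = truncate(obs, max_weight=2)
print(len(obs), "Pauli strings kept after one truncated Trotter step")
```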
Cat states and violation of the Bell-CHSH inequality in relativistic Quantum Field Theory
This paper studies quantum cat states (superpositions of coherent states) in relativistic quantum field theory and shows they can violate the Bell-CHSH inequality. The authors derive analytical expressions for Bell correlations using bounded field operators and demonstrate how interference effects in cat states enable non-local quantum correlations in relativistic settings.
Key Contributions
- Analytical derivation of Bell-CHSH correlator for cat states in relativistic quantum field theory
- Explicit demonstration of Bell inequality violation using bounded field operators in Rindler spacetime
- Concrete realization of theoretical Summers-Werner results on non-locality in quantum field theory
View Full Abstract
A cat state localized in the right Rindler wedge is employed to study the violation of the Bell-CHSH inequality in a relativistic scalar free Quantum Field Theory. By means of the bounded Hermitian operator $sign(\varphi(f))$, where $\varphi(f)$ stands for the smeared scalar field, it turns out that the Bell-CHSH correlator can be evaluated in closed analytic form in terms of the imaginary error function. Being the superposition of two coherent states, cat states allow for the existence of interference terms which give rise to a violation of the Bell-CHSH inequality. As such, the present setup can be considered as an explicit realization of the results obtained by Summers-Werner.
Chiral Graviton Modes in Fermionic Fractional Chern Insulators
This paper studies chiral graviton modes (collective excitations) in lattice-based quantum materials called Fractional Chern Insulators, showing these exotic modes survive even when continuous symmetries are broken by the discrete lattice structure. The researchers use advanced computational methods to demonstrate these modes exist and are long-lived despite lattice effects.
Key Contributions
- Derivation of lattice stress tensor operator for fermionic Harper-Hofstadter model that captures graviton modes in flat band limit
- Demonstration of adiabatic connection between Fractional Quantum Hall and Fractional Chern Insulator chiral graviton modes through computational modeling
- Evidence that chiral graviton modes remain long-lived in lattice systems despite broken continuous symmetries and scattering effects
View Full Abstract
Chiral graviton modes are hallmark collective excitations of Fractional Quantum Hall (FQH) liquids. However, their existence on the lattice, where continuum symmetries that protect them from decay are lost, is still an open and urgent question, especially considering the recent advances in the realization of Fractional Chern Insulators (FCI) in transition metal dichalcogenides and rhombohedral pentalayer graphene. Here we present a comprehensive theoretical and numerical study of graviton modes in fermionic FCI, and thoroughly demonstrate their existence. We first derive a lattice stress tensor operator in the context of the fermionic Harper-Hofstadter (HH) model which captures the graviton in the flat band limit. Importantly, we discover that such lattice stress-tensor operators are deeply connected to lattice quadrupolar density correlators, readily generalizable to generic Chern bands. We then explicitly show the adiabatic connection between FQH and FCI chiral graviton modes by interpolating from a low flux HH model to a Checkerboard lattice model that hosts a topological flat band. In particular, using state-of-the-art matrix product state and exact diagonalization simulations, we provide strong evidence that chiral graviton modes are long-lived excitations in FCIs despite the lack of continuous symmetries and the scattering with a two-magnetoroton continuum. By means of a careful finite-size analysis, we show that the lattice generates a finite but small intrinsic decay rate for the graviton mode. We discuss the relevance of our results for the exploration of graviton modes in FCI phases realized in solid state settings, as well as cold atom experiments.
Beyond the imbalance: site-resolved dynamics probing resonances in many-body localization
This paper investigates many-body localization (MBL) in quantum systems by showing that traditional measurement approaches miss important microscopic details. The researchers demonstrate that examining individual sites rather than averaged properties reveals complex local dynamics and resonant structures that provide a more complete picture of how quantum systems fail to thermalize.
Key Contributions
- Demonstrated that site-resolved measurements reveal microscopic features of MBL that are hidden by spatially averaged observables
- Identified resonant structures and local instabilities within the MBL phase using numerical simulations and analytical toy models
View Full Abstract
We explore the limitations of using imbalance dynamics as a diagnostic tool for many-body localization (MBL) and show that spatial averaging can mask important microscopic features. Focusing on the strongly disordered regime of the random-field XXZ chain, we use state-of-the-art numerical techniques (Krylov time evolution and full diagonalization) to demonstrate that site-resolved spin autocorrelators reveal a rich and complex dynamical behavior that is obscured by the imbalance observable. By analyzing the time evolution and infinite-time limits of these local probes, we reveal resonant structures and rare local instabilities within the MBL phase. These numerical findings are supported by an analytical, few-site toy model that captures the emergence of a multiple-peak structure in local magnetization histograms, which is a hallmark of local resonances. These few-body local effects provide a more detailed understanding of ergodicity-breaking dynamics, and also allow us to explain the finite-size effects of long-time imbalance, and its sensitivity to the initial conditions in quench protocols. Overall, our experimentally testable predictions highlight the necessity of a refined, site-resolved approach to fully understand the complexities of MBL and its connection to rare-region effects.
Quantum Elastic Network Models and their Application to Graphene
This paper introduces Quantum Elastic Network Models (QENMs) to simulate molecular vibrations in materials like graphene using quantum computers. The authors demonstrate that quantum algorithms could simulate atomic-scale properties of centimeter-sized graphene sheets using only ~160 logical qubits, which would be computationally prohibitive on classical computers.
Key Contributions
- Introduction of Quantum Elastic Network Models (QENMs) for materials simulation
- Application of a quantum algorithm for coupled oscillators that offers an exponential advantage, under specific conditions and assumptions, to materials simulation
- Resource estimation showing centimeter-scale graphene simulation requires only ~160 logical qubits
View Full Abstract
Molecular dynamics simulations are a central computational methodology in materials design for relating atomic composition to mechanical properties. However, simulating materials with atomic-level resolution on a macroscopic scale is infeasible on current classical hardware, even when using the simplest elastic network models (ENMs) that represent molecular vibrations as a network of coupled oscillators. To address this issue, we introduce Quantum Elastic Network Models (QENMs) and utilize the quantum algorithm of Babbush et al. (PRX, 2023), which offers an exponential advantage when simulating systems of coupled oscillators under some specific conditions and assumptions. Here, we demonstrate how our method enables the efficient simulation of planar materials. As an example, we apply our algorithm to the task of simulating a 2D graphene sheet. We analyze the exact complexity for initial-state preparation, Hamiltonian simulation, and measurement of this material, and provide two real-world applications: heat transfer and the out-of-plane rippling effect. We estimate that an atomistic simulation of a graphene sheet on the centimeter scale, classically requiring hundreds of petabytes of memory and prohibitive runtimes, could be encoded and simulated with as few as $\sim 160$ logical qubits.
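The headline qubit count follows from logarithmic amplitude encoding and can be re-derived in a few lines. The graphene areal density is a textbook value (~3.8×10^15 atoms/cm²); the rest is an order-of-magnitude sketch that ignores the algorithm's ancilla and precision registers, which is why the bare register below is smaller than the paper's ~160-logical-qubit estimate.

```python
# Order-of-magnitude sketch of the qubit-count scaling (assumptions flagged inline).
import math

atoms_per_cm2 = 3.8e15          # graphene areal density, ~3.8e15 atoms per cm^2 (textbook value)
area_cm2 = 1.0                  # centimetre-scale sheet (assumed)
dof_per_atom = 3                # x, y, z displacements per atom

n_modes = dof_per_atom * atoms_per_cm2 * area_cm2
register_qubits = math.ceil(math.log2(2 * n_modes))    # position + momentum amplitudes
state_vector_bytes = 2 * n_modes * 16                   # one complex double per amplitude

print(f"coupled oscillator modes: {n_modes:.2e}")
print(f"bare amplitude-encoding register: {register_qubits} qubits (excl. algorithmic overhead)")
print(f"classical state vector: ~{state_vector_bytes / 1e15:.0f} PB")
```

The last line lands in the hundreds of petabytes, consistent with the classical memory figure quoted in the abstract, while the quantum register grows only logarithmically with the number of oscillators.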
Composable simultaneous purification: when all communication scenarios reduce to spatial correlations
This paper investigates how Bell non-locality concepts extend to communication scenarios, proving that any composition of non-signalling quantum operations still produces correlations within the standard spatial Bell correlations set. The work establishes fundamental limits on what types of quantum correlations can be achieved through complex multipartite communication schemes.
Key Contributions
- Extension of simultaneous purification results from states to instruments and super-instruments for composable quantum structures
- Proof that arbitrary compositions of non-signalling assemblages cannot exceed standard spatial quantum Bell correlations
View Full Abstract
Bell non-locality is a powerful framework to distinguish classical, quantum and post-quantum resources, which relies on non-communicating players. Under which restriction can we have the same separations, if we allow for communication? Non-signalling state assemblages, and the fact that they can always be simultaneously purified, turned out to be the key element to restrict the simplest bipartite communication scenario, the prepare-and-measure scenario, to the standard bipartite Bell scenario. Yet, many distinctive features of quantum theory are genuinely multipartite and cannot be reduced to two-party behaviour. In this work we are interested in extending this simultaneous-purification-inspired result to all multipartite communication schemes. As a first step, we unify and extend the simultaneous purification result from states to instruments and super-instruments, which are composable structures, and open up the possibility to explore more complex communication scenarios. Our main contribution is to establish that arbitrary compositions of non-signalling assemblages cannot escape the standard spatial quantum Bell correlations set. As a consequence, any interactive quantum realization of correlations outside of this set must involve at least one signalling assemblage of quantum operations, even when the resulting correlations are non-signalling.
Low-loss Material for Infrared Protection of Cryogenic Quantum Applications
This paper develops a new filter material made of sapphire spheres in epoxy resin that blocks infrared radiation (which can damage quantum states) while allowing low-frequency gigahertz signals to pass through. The material uses Mie scattering to selectively filter wavelengths and is designed to protect cryogenic quantum devices.
Key Contributions
- Development of a Mie-scattering based filter material that blocks infrared while transmitting gigahertz frequencies
- Demonstration of low insertion loss (<0.4 dB below 10 GHz) at millikelvin temperatures for protecting quantum devices
View Full Abstract
The fragile quantum states of low-temperature quantum applications require protection from infrared radiation caused by higher-temperature stages or other sources. We propose a material system that can efficiently block radiation up to the optical range while transmitting photons at low gigahertz frequencies. It is based on the effect that incident photons are strongly scattered when their wavelength is comparable to the size of particles embedded in a weakly absorbing medium (Mie scattering). The goal of this work is to tailor the absorption and transmission spectrum of a non-magnetic epoxy resin containing sapphire spheres by simulating its dependence on the size distribution. Additionally, we fabricate several material compositions and characterize them, as well as other materials, at optical, infrared, and gigahertz frequencies. In the infrared region (stop band) the attenuation of the Mie-scattering optimized material is high and comparable to that of other commonly used filter materials. At gigahertz frequencies (pass band), the prototype filter exhibits a high transmission at millikelvin temperatures, with an insertion loss of less than $0.4\,$dB below $10\,$GHz.
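The selectivity rests on the Mie size parameter $x = 2\pi r/\lambda$: scattering is strong once $x \gtrsim 1$ and negligible in the Rayleigh regime $x \ll 1$. A back-of-the-envelope sketch with an assumed sphere radius and host refractive index (illustrative values, not the paper's optimized composition):

```python
import numpy as np

c = 2.998e8            # speed of light, m/s
r = 50e-6              # assumed sphere radius: 50 micrometres
n_medium = 1.6         # assumed refractive index of the epoxy host

for label, freq in [("10 GHz (pass band)", 10e9),
                    ("1 THz", 1e12),
                    ("30 THz (infrared stop band)", 30e12)]:
    lam = c / (freq * n_medium)            # wavelength inside the medium
    x = 2 * np.pi * r / lam                # Mie size parameter
    print(f"{label:30s} size parameter x = {x:8.3f}")
```

With these numbers, gigahertz signals sit deep in the $x \ll 1$ regime (transmitted), while infrared radiation reaches $x \gtrsim 1$ and is strongly scattered.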
Simulation of noisy quantum circuits using frame representations
This paper develops a unified mathematical framework based on frame theory for classically simulating noisy quantum circuits. The framework provides a common way to measure computational costs across different simulation approaches and enables the discovery of new, more efficient classical simulation algorithms.
Key Contributions
- Unified framework for classical simulation of noisy quantum circuits using frame theory
- Novel simulation algorithm based on generalized Pauli frame with improved performance
- Common quantitative measure for computational costs across different simulation approaches
View Full Abstract
One of the core research questions in the theory of quantum computing is to find out to what precise extent the classical simulation of noisy quantum circuits is possible and where potential quantum advantages can set in. In this work, we introduce a unified framework for the classical simulation of quantum circuits based on frame theory, encompassing and generalizing a broad class of existing simulation strategies. Within this framework, the computational cost of a simulation algorithm is determined by the one-norm of an associated quasi-probability distribution, providing a common quantitative measure across different simulation approaches. This enables a comprehensive perspective on common methods for the simulation of noisy circuits based on different quantum resources, such as entanglement or non-stabilizerness. It further provides a clear scheme for generating novel classical simulation algorithms. Indeed, by exploring different choices of frames within this formalism and resorting to tools of convex optimization, we are able not only to obtain new insights and improved bounds for existing methods -- such as stabilizer state simulation or Pauli back-propagation -- but also to discover a new approach with an improved performance based on a generalization of the Pauli frame. We thereby show that classical simulation techniques can directly benefit from a perspective -- that of frames -- that goes beyond the traditional classification of quantum resources.
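To make the cost measure concrete, the sketch below computes the minimal one-norm of a quasi-probability decomposition of a single-qubit magic state over the six stabilizer states with a small linear program. This is only the simplest instance of a frame-based decomposition, not the paper's generalized Pauli frame:

```python
import numpy as np
from scipy.optimize import linprog

def bloch(rho):
    """Bloch vector (<X>, <Y>, <Z>) of a single-qubit density matrix."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.real([np.trace(rho @ P) for P in (X, Y, Z)])

# Six single-qubit stabilizer states as Bloch vectors (the "frame")
frame = np.array([[1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 1], [0, 0, -1]], dtype=float)

# Target: the magic state |T> = (|0> + e^{i pi/4}|1>)/sqrt(2)
psi = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
b = bloch(np.outer(psi, psi.conj()))

# Split q = q_plus - q_minus (both >= 0) and minimize sum(q_plus + q_minus),
# subject to reproducing the Bloch vector and sum(q) = 1.
n = len(frame)
c = np.ones(2 * n)
A_eq = np.vstack([np.hstack([frame.T, -frame.T]),
                  np.hstack([np.ones(n), -np.ones(n)])])
b_eq = np.concatenate([b, [1.0]])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * n), method="highs")
print("minimal one-norm (simulation-cost measure):", round(res.fun, 4))
```

A stabilizer state would give one-norm 1 (free to simulate); the excess above 1 for the magic state is the kind of quantity that governs the simulation overhead in this framework.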
Scalable Generation of Macroscopic Fock States Exceeding 10,000 Photons
This paper introduces a novel protocol for generating extremely large Fock states (quantum states with well-defined photon numbers) containing over 10,000 photons in a single optical mode. The method uses engineered Kerr nonlinearity with optimized phase and displacement operations to achieve high fidelities while being robust against photon loss.
Key Contributions
- Demonstration of scalable protocol for generating macroscopic Fock states exceeding 10,000 photons with execution time scaling as N^(-1/2)
- Achievement of >73% fidelity for photon numbers up to 100,000 using Kerr-engineered multi-lens approach with robustness against photon loss
- Enabling exploration of quantum-to-classical transitions in giant Fock states for advanced quantum metrology applications
View Full Abstract
The scalable preparation of bosonic quantum states with macroscopic excitations poses a fundamental challenge in quantum technologies, limited by control complexity and photon-loss rates that severely constrain prior theoretical and experimental efforts to merely dozens of excitations per mode. Here, based on the duality of the quantum state evolution in Fock state space and the optical wave-function propagation in a waveguide array, we introduce a Kerr-engineered multi-lens protocol in a single bosonic mode to deterministically generate Fock states exceeding $10,000$ photons. By optimizing phase and displacement operations across lens groups, our approach compensates for non-paraxial aberrations, achieving fidelities above $73\%$ in numerical simulations for photon numbers up to $N=100,000$. Counterintuitively, the protocol's execution time scales as $N^{-1/2}$ with the target photon number $N$, exhibiting robustness against photon loss. Our framework enables exploration of quantum-to-classical transitions of giant Fock states, paving the way for advanced quantum metrology with significant quantum gains, and error-corrected quantum information processing in high-dimensional Hilbert spaces.
Preconditioned Multivariate Quantum Solution Extraction
This paper presents a quantum algorithm for extracting solutions to partial differential equations that have been encoded in quantum state amplitudes. The method uses preconditioning, Chebyshev polynomial fitting, and cumulative distribution sampling to achieve better scaling and handle higher-dimensional functions compared to previous approaches.
Key Contributions
- Achieves Heisenberg limit scaling for quantum state amplitude extraction
- Extends method to higher dimensional functions with reduced quantum complexity
- Removes dependency on function minimum through preconditioning technique
View Full Abstract
Numerically solving partial differential equations is a ubiquitous computational task with broad applications in many fields of science. Quantum computers can potentially provide high-degree polynomial speed-ups for solving PDEs; however, many algorithms simply end with preparing the quantum state encoding the solution in its amplitudes. Trying to access explicit properties of the solution naively with quantum amplitude estimation can subsequently diminish the potential speed-up. In this work, we present a technique for extracting a smooth positive function encoded in the amplitudes of a quantum state, which achieves the Heisenberg limit scaling. We improve upon previous methods by allowing higher dimensional functions, by significantly reducing the quantum complexity with respect to the number of qubits encoding the function, and by removing the dependency on the minimum of the function using preconditioning. Our technique works by sampling the cumulative distribution of the given function, fitting it with Chebyshev polynomials, and subsequently extracting a representation of the whole encoded function. Finally, we trial our method by carrying out small-scale numerical simulations.
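A purely classical cartoon of the post-processing step (sample a cumulative distribution, fit it with Chebyshev polynomials, differentiate to recover the encoded function) is sketched below; the target function, degrees, and sample counts are arbitrary assumptions, and the quantum sampling itself is not modeled:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)

# Assumed smooth, positive target function on [-1, 1], discretized on a grid
grid = np.linspace(-1.0, 1.0, 2001)
f = 1.2 + np.sin(3 * grid) * np.exp(-grid**2)
prob = f / f.sum()                                  # discretized distribution

# Stand-in for the quantum sampling step: draw from the distribution directly
samples = np.sort(rng.choice(grid, size=20000, p=prob))

# Empirical CDF on a coarse set of points, fitted with a Chebyshev series
xs = np.linspace(-1.0, 1.0, 64)
ecdf = np.searchsorted(samples, xs, side="right") / samples.size
coef = C.chebfit(xs, ecdf, deg=12)

# The derivative of the fitted CDF approximates the (normalized) target function
recovered = C.chebval(grid, C.chebder(coef))
target = f / (f.sum() * (grid[1] - grid[0]))        # normalized to unit integral
print("max pointwise deviation:", round(float(np.max(np.abs(recovered - target))), 4))
```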
Anomaly to Resource: The Mpemba Effect in Quantum Thermometry
This paper demonstrates how the Mpemba effect (where hotter systems can cool faster than colder ones) can be exploited as a resource for quantum thermometry, showing that non-equilibrium probe states can achieve better temperature measurement precision than traditional equilibrium approaches. The authors prove this theoretically and demonstrate it with specific quantum systems, establishing a new design principle for ultrafast quantum sensing.
Key Contributions
- Theoretical proof that Mpemba-type inversions enhance quantum Fisher information for temperature estimation
- Demonstration of metrological Mpemba effect in two-level and Lambda-level quantum probes
- Establishment of anomalous relaxation as a design principle for nonequilibrium quantum thermometry
View Full Abstract
Quantum thermometry provides a key capability for nanoscale devices and quantum technologies, but most existing strategies rely on probes initialized near equilibrium. This equilibrium paradigm imposes intrinsic limitations: sensitivity is tied to long-time thermalization and often cannot be improved in fast, noisy, or nonstationary settings. In contrast, the Mpemba effect, the counterintuitive phenomenon where hotter states relax faster than colder ones, has mostly been viewed as a thermodynamic anomaly. Here, we bridge this gap by proving that Mpemba-type inversions generically yield a finite-time enhancement of the quantum Fisher information (QFI) for temperature estimation, thereby converting an anomalous relaxation effect into a concrete metrological resource. Through explicit analyses of two-level and $\Lambda$-level probes coupled to bosonic baths, we show that nonequilibrium initializations can transiently outperform both equilibrium strategies and colder states, realizing a metrological Mpemba effect. Our results establish anomalous relaxation as a general design principle for nonequilibrium quantum thermometry, enabling ultrafast and nanoscale sensing protocols that exploit, rather than avoid, transient dynamics.
Exponential capacity scaling of classical GANs compared to hybrid latent style-based quantum GANs
This paper compares classical generative adversarial networks (GANs) with hybrid quantum GANs for image generation, finding that quantum generators can achieve similar performance with exponentially fewer trainable parameters than their classical counterparts. The researchers tested this on satellite image generation and demonstrated what they claim is a quantum advantage in computational efficiency.
Key Contributions
- First comprehensive experimental analysis showing exponential capacity scaling advantage of quantum GANs over classical GANs
- Demonstration of quantum advantage in generative modeling through reduced parameter requirements while maintaining performance
View Full Abstract
Quantum generative modeling is a very active area of research in the search for practical advantage in data analysis. Quantum generative adversarial networks (QGANs) are leading candidates for quantum generative modeling and have been applied to diverse areas, from high-energy physics to image generation. The latent style-based QGAN, which relies on a classical variational autoencoder to encode the input data into a latent space and then uses a style-based QGAN for data generation, has proven efficient for image generation and drug design, hinting at the use of far fewer trainable parameters than its classical counterpart to achieve comparable performance; however, this advantage has never been systematically studied. In this work we present the first comprehensive experimental analysis of this advantage for QGANs applied to SAT4 image generation, obtaining an exponential advantage in capacity scaling for a quantum generator in the hybrid latent style-based QGAN architecture. Careful tuning of the autoencoder is crucial to obtain stable, reliable results. Once this tuning is performed, and defining training optimality as the regime where training is stable and the FID score is low and stable as well, the optimal capacity (number of trainable parameters) of the classical discriminator scales exponentially with the capacity of the quantum generator, and the same is true for the capacity of the classical generator. This hints toward a type of quantum advantage for quantum generative modeling.
Landau Zener Interaction Enhanced Quantum Sensing in Spin Defects of Hexagonal Boron Nitride
This paper demonstrates enhanced quantum sensing using negatively charged boron vacancies in hexagonal boron nitride by employing frequency-ramped microwave pulses instead of conventional resonant excitation. The technique achieves 4-fold better spin-state population transfer and 16-fold shorter measurement times, making quantum sensing more practical in noisy environments.
Key Contributions
- Development of frequency-ramped microwave pulse technique for improved spin-state population transfer in hBN defects
- Demonstration of 16-fold reduction in measurement time for quantum sensing applications
- Theoretical modeling using Landau-Zener dynamics to explain the enhanced performance
View Full Abstract
Negatively charged boron vacancies (V$_{\text{B}}^{-}$) in hexagonal boron nitride (hBN) comprise a promising quantum sensing platform, optically addressable at room temperature and transferable onto samples. However, broad hyperfine-split spin transitions of the ensemble pose challenges for quantum sensing with conventional resonant excitation due to limited spectral coverage. While isotopically enriched hBN using $^{10}$B and $^{15}$N isotopes (h$^{10}$B$^{15}$N) exhibits sharper spectral features, significant inhomogeneous broadening persists. We demonstrate that a frequency-ramped microwave pulse, implemented via frequency modulation on an FPGA, achieves around 4-fold greater $|0\rangle\rightarrow|-1\rangle$ spin-state population transfer, and thus contrast, than resonant microwave excitation, which translates into a 16-fold shorter measurement time for spin-relaxation-based quantum sensing. Quantum dynamics simulations reveal that an effective two-state Landau-Zener model captures the complex relationship between population inversion and pulse length with relaxations incorporated. Our approach is robust and valuable for quantum relaxometry with spin defects in hBN in noisy environments.
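The two-state Landau-Zener picture mentioned in the abstract has a standard closed form: sweeping the drive detuning through resonance at rate $\alpha$ with Rabi frequency $\Omega$ transfers population with probability $P = 1 - e^{-\pi\Omega^2/(2\alpha)}$. A small sketch with assumed, illustrative numbers (not fitted hBN parameters, and without the relaxation terms of the paper's model):

```python
import numpy as np

def lz_transfer(omega_rabi, chirp_rate):
    """Landau-Zener transfer probability for a linearly chirped drive.
    omega_rabi: Rabi frequency (rad/s); chirp_rate: detuning sweep rate (rad/s^2)."""
    return 1.0 - np.exp(-np.pi * omega_rabi**2 / (2.0 * chirp_rate))

omega_rabi = 2 * np.pi * 5e6          # assumed 5 MHz Rabi frequency
for ramp_mhz_per_us in (1, 10, 100, 1000):
    alpha = 2 * np.pi * ramp_mhz_per_us * 1e6 / 1e-6   # rad/s^2
    print(f"ramp {ramp_mhz_per_us:5d} MHz/us -> transfer probability "
          f"{lz_transfer(omega_rabi, alpha):.3f}")
```

Slower ramps approach full adiabatic inversion; faster ramps trade transfer efficiency for broader spectral coverage per unit time, which is the trade-off the frequency-ramped pulse exploits.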
Encoding complex-balanced thermalization in quantum circuits
This paper develops a method to create specific thermal states in quantum systems using engineered quantum circuits with reservoir qubits, enabling controlled heating and cooling processes that violate time-reversibility. The approach allows creation of out-of-equilibrium quantum states at desired temperatures with applications to synchronized quantum emission and protected quantum synchronization.
Key Contributions
- Novel quantum circuit protocol for complex-balanced thermalization using engineered reservoir qubits
- Demonstration of applications including temporally correlated dichromatic emission and Liouvillian exceptional-point-protected quantum synchronization at finite temperatures
View Full Abstract
We propose a protocol for effectively implementing complex-balanced thermalization via Markovian processes on a quantum-circuit platform that couples the system with engineered reservoir qubits. The non-orthogonality of qubit eigenstates facilitates non-uniform heating through a modified Kubo-Martin-Schwinger relation, while simultaneously supporting amplification-dissipation dynamics by violating microscopic time-reversibility. This offers a new approach to realizing out-of-equilibrium states at given temperatures. We show two applications of this platform: temporally correlated dichromatic emission and Liouvillian exceptional-point-protected quantum synchronization at finite temperatures, both of which are challenging to achieve with conventional thermal reservoirs.
Entanglement negativity for a free scalar chiral current
This paper studies entanglement negativity in a two-dimensional quantum field theory model, deriving analytical expressions for how quantum entanglement behaves between different spatial regions. The work focuses on theoretical properties of entanglement measures in systems with specific symmetries and topological features.
Key Contributions
- Analytical expressions for entanglement negativity in chiral current systems
- Verification of theoretical predictions with numerical lattice model calculations
- Analysis of topological contributions to entanglement structure in systems with symmetries
View Full Abstract
We study the entanglement negativity for the free, scalar chiral current in two spacetime dimensions, which is a simple model violating the Haag duality in regions with nontrivial topology. For the ground state of the system, both on the line and on the circle, we consider the setups given by two intervals, either adjacent or disjoint. We find analytic expressions for the moments of the partial transpose of the reduced density matrix and the logarithmic negativity. In the limit of small separation distance, this expression yields the same subleading topological contribution occurring in the mutual information. In the limit of large separation distance between the two intervals, the exponential decay of the logarithmic negativity is obtained from its analytic expression. The analytic formulas are checked against exact numerical results from a bosonic lattice model, finding perfect agreement. We observe that, since the chiral current generates the neutral subalgebra of the full chiral Dirac fermion theory, this analysis highlights how symmetries produce nontrivial features in the entanglement structure that are analogous to those already observed in the mutual information for regions with nontrivial topology.
Quantum Neural Network Training and Inference with Low Resolution Control Electronics
This paper investigates how low-resolution control electronics affect quantum neural network performance, finding that pre-trained QNNs work well with 6-bit control systems but training requires at least 12-bit resolution unless special stochastic techniques are used. The researchers develop a method to enable successful QNN training even with 4-10 bit control electronics, which could significantly reduce power and hardware requirements for quantum computers.
Key Contributions
- Demonstrated that quantum neural networks can achieve near-optimal performance with low-resolution (6-bit) control electronics during inference
- Developed temperature-controlled stochastic training methods that overcome gradient deadlock problems in low-resolution quantum systems
View Full Abstract
Scaling quantum computers requires tight integration of cryogenic control electronics with quantum processors, where Digital-to-Analog Converters (DACs) face severe power and area constraints. We investigate quantum neural network (QNN) training and inference across a range of finite DAC resolutions. Pre-trained QNNs achieve accuracy nearly indistinguishable from infinite-precision baselines when deployed on quantum systems with 6-bit DAC control electronics, exhibiting an elbow curve with diminishing returns beyond 4 bits. However, training under quantization reveals gradient deadlock below 12-bit resolution as gradient magnitudes fall below quantization step sizes. We introduce temperature-controlled stochasticity that overcomes this through probabilistic parameter updates, enabling successful training at 4-10 bit resolutions that remarkably matches or exceeds infinite-precision baseline performance. Our findings demonstrate that low-resolution control electronics need not compromise QML performance, enabling significant power and area reduction in cryogenic control systems for practical deployment as quantum hardware scales.
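The deadlock-and-escape mechanism can be illustrated with a generic stochastic-rounding-style update, where the sub-step part of a gradient update is applied probabilistically so that it survives quantization on average; the paper's temperature-controlled rule may differ in detail, and all parameters below are arbitrary:

```python
import numpy as np

def quantize(theta, n_bits, lo=-np.pi, hi=np.pi):
    """Snap parameters to the 2**n_bits levels a low-resolution DAC can realize."""
    step = (hi - lo) / (2**n_bits - 1)
    return lo + np.round((theta - lo) / step) * step, step

def stochastic_update(theta, grad, lr, n_bits, rng):
    """Apply -lr*grad on the quantized grid; the sub-step remainder is taken
    probabilistically, so the update is unbiased instead of being rounded away."""
    q, step = quantize(theta, n_bits)
    delta = -lr * grad
    whole = np.floor(np.abs(delta) / step)
    frac = np.abs(delta) / step - whole
    whole = whole + (rng.random(theta.shape) < frac)
    return quantize(q + np.sign(delta) * whole * step, n_bits)[0]

rng = np.random.default_rng(0)
theta = np.array([0.3, -1.2])
grad = np.array([3e-3, -2e-3])                     # gradients far below one step
lr, n_bits = 0.1, 6

# Deterministic update: rounded back to the same grid point -> "gradient deadlock"
deterministic = quantize(theta - lr * grad, n_bits)[0]
print("deterministic update lost:", bool(np.allclose(deterministic, quantize(theta, n_bits)[0])))

# Stochastic updates move only occasionally, but approach the ideal step on average
mean_move, reps = np.zeros_like(theta), 50000
for _ in range(reps):
    mean_move += stochastic_update(theta, grad, lr, n_bits, rng) - quantize(theta, n_bits)[0]
print("mean stochastic update  :", mean_move / reps)
print("ideal unquantized update:", -lr * grad)
```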
Signatures of Spin Coherence in Chiral Coupled Quantum Dots
This paper investigates quantum spin coherence effects in chiral quantum dot assemblies, showing that circularly polarized light excitation produces magnetic field-dependent photoluminescence lifetimes due to spin precession. The work demonstrates room-temperature quantum coherent behavior in chiral-induced spin selectivity systems.
Key Contributions
- Observation of room-temperature signatures of quantum coherent spin dynamics in chiral quantum dot assemblies
- Development of magnetic field-dependent spin coherence measurement technique using circularly polarized photoluminescence
View Full Abstract
Chiral-induced spin selectivity (CISS) enables spin selectivity of charge carriers in chiral molecular systems without magnetic materials. While spin selectivity has been widely investigated, its quantum coherence has not yet been explored. Here, we investigate spin-dependent photoluminescence (PL) dynamics in multilayer quantum-dot (QD) assemblies coupled by chiral linkers. Using circularly polarized excitation in the presence of an external magnetic field, we observe a pronounced modulation of the PL lifetime that depends on the magnetic field magnitude and geometry. The lifetime difference between left- and right-circularly polarized excitations exhibits a field-angle dependence, consistent with spin precession driven by the transverse magnetic-field component relative to the chiral axis. A model incorporating coupled spin precession and decay processes reproduces the experimental trends. These results establish chiral QD assemblies as a room-temperature platform for probing quantum coherent manifestations of the CISS effect, with implications for spintronic and quantum technologies.
Machine learning-aided direct estimation of coherence and entanglement for unknown states
This paper develops a machine learning approach using support vector regression to efficiently estimate quantum coherence and entanglement in unknown quantum states. The method requires only minimal experimental measurements (diagonal density matrix entries and matrix traces) rather than full quantum state tomography, making it much more resource-efficient while maintaining high accuracy.
Key Contributions
- Development of SVR-based method for direct estimation of coherence and entanglement using minimal experimental resources
- Introduction of support vector quantile regression with pinball loss to provide conservative lower bounds and prevent overestimation
- Demonstration of scalable approach that avoids full quantum state tomography while maintaining over 95% accuracy
View Full Abstract
Quantum coherence and entanglement are fundamental resources in quantum technologies, yet their efficient estimation for unknown states by employing minimal resources in experimental settings remains challenging, particularly in high-dimensional systems. We present a machine learning approach based on support vector regression (SVR) that directly estimates the coherence measures and the geometric measure of quantum entanglement using minimal experimental resources. Our method requires only the diagonal entries of the density matrix, along with the traces of the squared and cubed density matrices for quantum coherence, and additionally the traces of the squared and cubed reduced density matrix for estimating quantum entanglement. These quantities can be obtained through random measurements or a hybrid quantum-classical framework. This approach significantly reduces the resource overhead compared to quantum state tomography while maintaining high accuracy. Furthermore, the support vector quantile regression (SVQR) with pinball loss is employed to prevent SVR overestimation. This model not only ensures that over 95% of predictions are conservative lower bounds in most cases, but also maintains this lower-bound reliability for over 93% of predictions, despite 2% perturbations in the input features. The proposed technique provides a practical and scalable tool for characterizing quantum resources across computation, communication, and metrology applications.
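A minimal sketch of the learning setup under assumed details (random single-qutrit states, the $l_1$-norm of coherence as the target, and scikit-learn's SVR standing in for whatever implementation the authors use):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

def random_density_matrix(d, rng):
    """Random d-dimensional density matrix from a Ginibre matrix."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.real(np.trace(rho))

rng = np.random.default_rng(7)
d, n = 3, 4000
X, y = [], []
for _ in range(n):
    rho = random_density_matrix(d, rng)
    # Features: diagonal of rho plus Tr(rho^2) and Tr(rho^3), as in the abstract
    feats = np.concatenate([np.real(np.diag(rho)),
                            [np.real(np.trace(rho @ rho)),
                             np.real(np.trace(rho @ rho @ rho))]])
    # Target: l1-norm of coherence (sum of absolute off-diagonal entries)
    coherence = np.sum(np.abs(rho - np.diag(np.diag(rho))))
    X.append(feats)
    y.append(coherence)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("test R^2:", round(model.score(X_te, y_te), 3))
```

The quantile-regression variant with pinball loss described in the abstract would replace the symmetric loss here to bias predictions toward conservative lower bounds.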
High-Rate Free-Running Reference-Frame-Independent Measurement-Device-Independent Quantum Key Distribution with Classified Distillation
This paper presents an improved quantum key distribution protocol that works reliably in mobile environments with rapidly changing reference frames, such as satellite communications. The new method uses a classification-distillation technique to achieve nine times higher key rates than previous approaches while tolerating much higher signal losses.
Key Contributions
- Free-running RFI-MDI-QKD protocol that maintains performance under rapid reference-frame variations
- Classification-distillation method that achieves 9x higher key rates than previous schemes
- Tolerance for channel losses exceeding 24 dB enabling mobile quantum communication platforms
View Full Abstract
Reference-frame-independent measurement-device-independent quantum key distribution (RFI-MDI-QKD) eliminates detector side-channel attacks and avoids reference-frame calibration. While its feasibility has been widely demonstrated, existing implementations typically assume fixed or slowly drifting reference-frame misalignment, conditions rarely satisfied outside the laboratory. In realistic environments, rapid and free-running reference-frame variations can severely degrade both the key rate and transmission distance of conventional RFI-MDI-QKD. Here we propose a free-running RFI-MDI-QKD protocol that maintains high-rate key generation under rapid reference-frame variations. By introducing a classification-distillation method that reclassifies the total detection events, the protocol extracts secure keys without modifying the experimental setup. Our protocol achieves a key rate more than nine times higher than the best previous RFI-MDI-QKD scheme and tolerates channel losses exceeding 24 dB, where earlier approaches fail. These results enable practical quantum key distribution on mobile platforms, including satellite-to-ground links and airborne nodes.
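As background, reference-frame-independent QKD relies on a combination of X/Y correlators that is invariant under a rotation of one party's measurement frame about the shared Z axis. The check below verifies this invariance for a Bell state; it illustrates the standard RFI quantity, not the paper's classification-distillation step:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi_plus, phi_plus.conj())

def corr(A, B):
    """Two-qubit correlator <A (x) B>."""
    return np.real(np.trace(rho @ np.kron(A, B)))

for beta in (0.0, 0.4, 1.1):                       # misalignment of Bob's X-Y plane
    Xb = np.cos(beta) * X + np.sin(beta) * Y       # Bob's rotated measurement axes
    Yb = -np.sin(beta) * X + np.cos(beta) * Y
    C = sum(corr(A, B) ** 2 for A in (X, Y) for B in (Xb, Yb))
    print(f"beta = {beta:.1f}:  <X Xb> = {corr(X, Xb):+.3f},  C = {C:.3f}")
```

The individual correlators rotate with the frame while the combined quantity C stays fixed, which is what lets the protocol dispense with frame calibration.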
Long-lived state of a helium-like magnesium donor in silicon
This paper investigates the relaxation behavior of long-lived quantum states in magnesium-doped silicon, finding that certain spin-triplet states can persist for about 20 milliseconds with relaxation governed by thermal activation processes.
Key Contributions
- Discovery of 20 ms lifetime spin-triplet states in Mg-doped silicon
- Identification of Orbach relaxation mechanism with 13 meV activation energy
View Full Abstract
The relaxation of ortho states of a helium-like Mg donor in silicon is investigated by measuring the modulation of background radiation transmission through impurity centers under pulsed photoexcitation. Long-lived states of the spin-triplet 1s(3T2) group with a lifetime of about 20 ms are observed. The temperature dependence indicates that the relaxation is governed by the Orbach mechanism with an activation energy ~13 meV, which is close to the exchange splitting energy of the excited 1s states of the Mg donor.
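Extracting the activation energy from a temperature series follows the usual Orbach form $1/T_1 = A + B\,e^{-\Delta/k_B T}$. A fitting sketch on synthetic data with an assumed 13 meV gap (the measured rates of the paper are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5                                     # Boltzmann constant, eV/K

def orbach(T, A, B, delta_ev):
    """Orbach relaxation rate: constant term plus thermally activated term."""
    return A + B * np.exp(-delta_ev / (kB * T))

rng = np.random.default_rng(3)
T = np.linspace(5, 40, 12)                        # assumed temperature points, K
rate = orbach(T, A=50.0, B=2.0e6, delta_ev=0.013) * rng.normal(1, 0.03, T.size)

popt, _ = curve_fit(orbach, T, rate, p0=(10.0, 1e6, 0.01))
print(f"fitted activation energy: {popt[2] * 1e3:.1f} meV")
```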
Virtual temperatures as a key quantifier for passive states in quantum thermodynamic processes
This paper introduces virtual temperatures as a tool to analyze passive quantum states in thermodynamic processes, using majorization theory to characterize heat flow and optimize quantum thermal machines like Otto engines. The work connects quantum passivity concepts to classical thermodynamics through these virtual temperature parameters.
Key Contributions
- Definition of virtual temperatures for passive quantum states using majorization theory
- Derivation of efficiency bounds for quantum Otto engines in terms of min-max virtual temperatures
- Connection between quantum thermodynamic processes and classical counterparts through virtual temperature framework
View Full Abstract
We analyze the role of virtual temperatures for passive quantum states through the lens of majorization theory. A mean temperature over the virtual temperatures of adjacent energy levels is defined to compare the passive states of the system resulting from isoenergetic and isoentropic transformations. The role of the minimum and the maximum (min-max) values of the virtual temperatures in determining the direction of heat flow between the system and the environment is argued based on majorization relations. We characterize the intermediate passive states in a quantum Otto engine using these virtual temperatures and derive an upper bound for the Otto efficiency that can be expressed in terms of the min-max virtual temperatures of the working medium. An explicit example of the coupled-spins system is worked out. Moreover, virtual temperatures serve to draw interesting parallels between the quantum thermodynamic processes and their classical counterparts. Thus, virtual temperature emerges as a key operational quantity linking passivity and majorization to the optimal performance of quantum thermal machines.
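For a concrete handle on the central quantity: the virtual temperature of adjacent levels of a passive state is $T_i = (E_{i+1}-E_i)/\ln(p_i/p_{i+1})$ with $k_B = 1$, and a thermal state has all $T_i$ equal. A tiny sketch with assumed levels and populations:

```python
import numpy as np

E = np.array([0.0, 1.0, 1.5, 3.0])               # assumed energy levels
p = np.array([0.5, 0.25, 0.15, 0.10])            # passive: populations non-increasing in E

# Virtual temperature of each pair of adjacent levels (k_B = 1)
T_virtual = (E[1:] - E[:-1]) / np.log(p[:-1] / p[1:])
print("virtual temperatures:", np.round(T_virtual, 3))
print("min / max:", round(T_virtual.min(), 3), "/", round(T_virtual.max(), 3))
```

The spread between the minimum and maximum values is what enters the heat-flow arguments and the Otto-efficiency bound discussed above; a genuinely thermal state would give a single value.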
Quantum Logical Systems and Tensor Product Spaces (Quantenlogische Systeme und Tensorproduktraeume)
This paper provides a detailed mathematical proof showing that composed quantum mechanical systems must be described using tensor product spaces, building on foundational work by Mackey, Aerts, and Daubechies. The authors use lattice theory and c-morphism theory to rigorously establish the quantum logical framework for combining multiple quantum systems.
Key Contributions
- Rigorous mathematical proof that quantum composite systems require tensor product description
- Detailed exposition of quantum logical axiomatic systems using lattice and c-morphism theory
View Full Abstract
In this work we present an intuitive construction of the quantum logical axiomatic system provided by George Mackey. The goal of this work is a detailed discussion of the results from the paper 'Physical justification for using the tensor product to describe two quantum systems as one joint system' [1] published by Diederik Aerts and Ingrid Daubechies. This means that we want to show how certain composed physical systems from classical and quantum mechanics should be described logically. To reach this goal, we will, like in [1], discuss a special class of axiomatically defined composed physical systems. With the help of certain results from lattice and c-morphism theory (see [2] and [23]), we will present a detailed proof of the statement that, in the quantum mechanical case, a composed physical system must be described via a tensor product space.
Distinguishing Coherent and Incoherent Errors in Multi-Round Time-Reversed Dynamics via Scramblons
This paper studies how different types of errors (coherent vs incoherent) affect quantum systems when time evolution is reversed multiple times. The researchers use theoretical analysis and the Sachdev-Ye-Kitaev model to show that incoherent errors accumulate linearly while coherent errors show a quadratic-to-linear crossover behavior.
Key Contributions
- Derived closed-form expressions showing distinct accumulation patterns for coherent vs incoherent errors in multi-round time-reversed dynamics
- Provided theoretical framework using scramblon theory to characterize and distinguish different error types in quantum chaotic systems
View Full Abstract
Despite the rapid development of quantum science and technology, errors are inevitable and play a crucial role in quantum simulation and quantum computation. In quantum chaotic systems, coherent errors arising from imperfect Hamiltonian control and incoherent errors induced by coupling to the environment are both exponentially amplified during time evolution due to information scrambling. A fundamental question is how these two classes of errors imprint distinct signatures on the emergent irreversibility of many-body dynamics. In this Letter, we address this question by investigating multi-round time-reversed dynamics in the presence of both coherent and incoherent errors. By applying scramblon theory, we obtain closed-form expressions for the Loschmidt echo over different rounds of time-reversed evolution. For incoherent errors, the error accumulates linearly with the number of rounds, whereas coherent errors exhibit a crossover from quadratic to linear accumulation. These predictions are explicitly verified using the solvable Sachdev-Ye-Kitaev model. Our results provide a theoretical foundation for characterizing and calibrating coherent and incoherent errors in reversed dynamics, with particular relevance to nuclear magnetic resonance systems.
Unconditionally teleported quantum gates between remote solid-state qubit registers
This paper demonstrates quantum teleportation of logic gates between remote solid-state quantum processors based on diamond NV centers and carbon-13 nuclear spins. The researchers successfully performed an unconditional CNOT gate between distant qubits and created entangled states across multiple network nodes without requiring post-selection.
Key Contributions
- First demonstration of unconditional remote quantum gates between solid-state qubit registers
- Implementation of distributed quantum logic without post-selection using real-time feed-forward
- Creation of genuine 4-partite entanglement across network nodes
- Demonstration of key capabilities for modular quantum computing architectures
View Full Abstract
Quantum networks connecting quantum processing nodes via photonic links enable distributed and modular quantum computation. In this framework, quantum gates between remote qubits can be realized using quantum teleportation protocols. The essential requirements for such non-local gates are remote entanglement, local quantum logic within each processor, and classical communication between nodes to perform operations based on measurement outcomes. Here, we demonstrate an unconditional Controlled-NOT quantum gate between remote diamond-based qubit devices. The control and target qubits are Carbon-13 nuclear spins, while NV electron spins enable local logic, readout, and remote entanglement generation. We benchmark the system by creating a Greenberger-Horne-Zeilinger state, showing genuine 4-partite entanglement shared between nodes. Using deterministic logic, single-shot readout, and real-time feed-forward, we implement non-local gates without post-selection. These results demonstrate a key capability for solid-state quantum networks, enabling exploration of distributed quantum computing and testing of complex network protocols on fully integrated systems.
Floquet-driven tunneling control in monolayer MoS$_2$
This paper studies how laser fields control electron transmission through molybdenum disulfide (MoS2) barriers using Floquet theory. The researchers found that laser intensity can be tuned to selectively filter and channel different transmission bands, enabling controllable quantum transport in this 2D material.
Key Contributions
- Demonstration of laser-controlled transmission filtering in MoS2 using Floquet theory
- Discovery of spin-dependent oscillation periods in transmission probability under laser driving
View Full Abstract
We study how fermions in molybdenum disulfide (MoS$_2$) interact with a laser field and a static potential barrier, focusing on the transmission probability. Our aim is to understand and control photon-assisted quantum transport in this two-dimensional material under external driving. We use the Floquet approximation to describe the wave functions in the three regions of the system. By applying continuity conditions at the boundaries, we obtain a set of equations involving an infinite number of Floquet modes. We explicitly determine transmissions involving the central band $E$ and the first sidebands $E \pm \hbar\omega$. As for higher-order bands, we use the transfer matrix approach together with current density to compute the associated transmissions. Our results reveal that the transmission probability oscillates for both spin-up and spin-down electrons. The oscillations of spin-down electrons occur over nearly twice the period of spin-up electrons. Among all bands, the central one consistently shows the highest transmission. We also find that stronger laser fields and wider barriers both lead to reduced transmission. Moreover, laser irradiation enables controllable channeling and filtering of transmission bands by tuning the laser intensity and system parameters. This highlights the potential of laser-driven MoS$_2$ structures for highly sensitive electromagnetic sensors and advanced optoelectronic devices.
Noise tailoring for error mitigation and for diagnosing digital quantum computers
This paper introduces Noise Tailoring (NT), a method to modify the structure of noise in two-qubit quantum gates to make error mitigation techniques more effective. The authors show through classical simulations that combining NT with error mitigation can improve accuracy up to 5 times compared to error mitigation alone, though real quantum hardware presents additional challenges.
Key Contributions
- Introduction of Noise Tailoring method to modify two-qubit gate noise structure for improved error mitigation
- Demonstration of up to 5x accuracy improvement in classical simulations when combining NT with error mitigation
- Proposal to use NT as a diagnostic tool for characterizing error sources in quantum hardware
View Full Abstract
Error mitigation (EM) methods are crucial for obtaining reliable results in the realm of noisy intermediate-scale quantum (NISQ) computers, where noise significantly impacts output accuracy. Some EM protocols are particularly efficient for specific types of noise. Yet the noise in the actual hardware may not align with that. In this article, we introduce Noise Tailoring (NT) -- an innovative strategy designed to modify the structure of the noise associated with two-qubit gates through statistical sampling. We perform classical emulation of the protocol behavior and find that the NT+EM results can be up to 5 times more accurate than the results of EM alone for realistic Pauli noise acting on two-qubit gates. At the same time, on actual IBM quantum computers, the NT method falls victim to various small error sources beyond Markovian Pauli noise. We propose to use the NT method for characterizing such error sources on quantum computers in order to inform hardware development.
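One standard way to tailor noise into Pauli form is Pauli twirling (as used in randomized compiling): averaging $P\,\Lambda(P\rho P)\,P$ over the Pauli group keeps only the diagonal of the channel's Pauli transfer matrix. The single-qubit sketch below shows that structural effect; the paper's NT protocol targets two-qubit gates and may differ in detail:

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def ptm(channel):
    """Pauli transfer matrix R_ij = (1/2) Tr[P_i channel(P_j)]."""
    return np.array([[0.5 * np.real(np.trace(Pi @ channel(Pj))) for Pj in paulis]
                     for Pi in paulis])

# A coherent error: small over-rotation about an oblique axis (illustrative)
n_axis = np.array([1.0, 0.0, 0.5])
n_axis /= np.linalg.norm(n_axis)
U = expm(-0.5j * 0.3 * (n_axis[0] * X + n_axis[1] * Y + n_axis[2] * Z))
noise = lambda rho: U @ rho @ U.conj().T

# Pauli twirl: average P . noise(P rho P) . P over the single-qubit Pauli group
twirled = lambda rho: sum(P @ noise(P @ rho @ P) @ P for P in paulis) / 4

print("PTM of the coherent error:\n", np.round(ptm(noise), 3))
print("PTM after twirling (diagonal, i.e. a Pauli channel):\n", np.round(ptm(twirled), 3))
```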
PACOX: A FPGA-based Pauli Composer Accelerator for Pauli String Computation
This paper presents PACOX, a specialized FPGA-based accelerator designed to speed up Pauli string computations, which are essential building blocks in hybrid quantum-classical algorithms. The accelerator achieves up to 100x speedup compared to CPU methods while being much more energy efficient.
Key Contributions
- First dedicated FPGA-based accelerator for Pauli string computation
- Novel parallel and pipelined processing element architecture with XOR-based encoding
- Demonstrated 100x speedup over CPU methods with superior energy efficiency
View Full Abstract
Pauli strings are a fundamental computational primitive in hybrid quantum-classical algorithms. However, classical computation of Pauli strings suffers from exponential complexity and quickly becomes a performance bottleneck as the number of qubits increases. To address this challenge, this paper proposes the Pauli Composer Accelerator (PACOX), the first dedicated FPGA-based accelerator for Pauli string computation. PACOX employs a compact binary encoding with XOR-based index permutation and phase accumulation. Based on this formulation, we design a parallel and pipelined processing element (PE) cluster architecture that efficiently exploits data-level parallelism on FPGA. Experimental results on a Xilinx ZCU102 FPGA show that PACOX operates at 250 MHz with a dynamic power consumption of 0.33 W, using 8,052 LUTs, 10,934 FFs, and 324 BRAMs. For Pauli strings of up to 19 qubits, PACOX achieves speedups of up to 100 times compared with state-of-the-art CPU-based methods, while requiring significantly less memory and achieving a much lower power-delay product. These results demonstrate that PACOX delivers high computational speed with superior energy efficiency for Pauli-based workloads in hybrid quantum-classical systems.
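The software formulation behind such accelerators exploits the fact that an $n$-qubit Pauli string has exactly one nonzero entry per row, reachable by an XOR with the X/Y bit mask and a phase set by a parity (popcount) against the Z/Y mask. A reference sketch of that encoding (the FPGA data path in the paper is, of course, a different implementation):

```python
import numpy as np
from functools import reduce

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]]),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_composer(label):
    """Dense matrix of a Pauli string via XOR index permutation + phase accumulation."""
    n = len(label)
    x_mask = z_mask = n_y = 0
    for k, ch in enumerate(label):           # qubit 0 = most significant bit
        bit = 1 << (n - 1 - k)
        if ch in "XY":
            x_mask |= bit
        if ch in "ZY":
            z_mask |= bit
        n_y += ch == "Y"
    dim = 1 << n
    rows = np.arange(dim)
    cols = rows ^ x_mask                     # the single nonzero column per row
    phases = (-1j) ** n_y * (-1.0) ** np.array(
        [bin(int(r) & z_mask).count("1") for r in rows])
    M = np.zeros((dim, dim), dtype=complex)
    M[rows, cols] = phases
    return M

label = "XYZI"
dense = reduce(np.kron, [PAULI[c] for c in label])
print("matches explicit Kronecker product:", np.allclose(pauli_composer(label), dense))
```

Because only the column index and the phase need to be computed per row, the per-element work is a handful of bit operations, which is what makes the computation amenable to a parallel, pipelined hardware design.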
Quantum Wiener architecture for quantum reservoir computing
This paper develops quantum Wiener architectures for quantum reservoir computing, which use quantum linear dynamics with continuous measurements and classical readouts to process information. The authors prove these systems maintain key computational properties and demonstrate superior performance compared to classical reservoir computing on standard benchmarks.
Key Contributions
- First rigorous proof that quantum Wiener systems retain fading-memory property and universality despite quantum constraints
- Kernel-theoretic interpretation showing quantum Wiener reservoirs naturally induce deep kernels
- Empirical demonstration of performance gains over classical and quantum reservoir computing models
View Full Abstract
This work focuses on quantum reservoir computing and, in particular, on quantum Wiener architectures (qWiener), consisting of quantum linear dynamic networks with weak continuous measurements and classical nonlinear static readouts. We provide the first rigorous proof that qWiener systems retain the fading-memory property and universality of classical Wiener architectures, despite quantum constraints on linear dynamics and measurement back-action. Furthermore, we develop a kernel-theoretic interpretation showing that qWiener reservoirs naturally induce deep kernels, providing a principled framework for analysing their expressiveness. We further characterise the simplest qWiener instantiation, consisting of concatenated quantum harmonic oscillators, and show the difference with respect to the classical case. Finally, we empirically evaluate the architecture on standard reservoir computing benchmarks, demonstrating systematic performance gains over prior classical and quantum reservoir computing models.
Fast thermal state preparation beyond native interactions
This paper presents a method to prepare thermal quantum states for Hamiltonians with non-native interactions using only unitary dynamics, making it suitable for current quantum devices. The approach can find control sequences for system sizes beyond what classical simulation methods can handle, with experimental resource requirements that don't depend on temperature or criticality.
Key Contributions
- Framework for thermal state preparation with non-native interactions using unitary dynamics
- Scalable classical method to find control sequences for large quantum systems beyond density matrix simulation limits
View Full Abstract
While questions on quantum simulation of ground state physics are mostly focussed on the realization of effective interactions, most work on quantum simulation of thermal physics explores the realization of dynamics towards a thermal mixed state under native interactions. Many open questions that could be answered with quantum simulations, however, involve thermal states with respect to synthetic interactions. We present a framework based solely on unitary dynamics to design quantum simulations for thermal states with respect to Hamiltonians that include non-native interactions, suitable for both present-day digital and analogue devices. By classical means, our method finds the control sequence to reach a target thermal state for system sizes well out of reach of state-vector or density-matrix control methods, even though quantum hardware is required to explicitly simulate the thermal state dynamics. With the illustrative example of the cluster Ising model that includes non-native three-body interactions, we find that required experimental resources, such as the total evolution time, are independent of temperature and criticality.
Bound state solutions with a linear combination of Yukawa plus four-parameter diatomic potentials using a path integral approach: Thermodynamic properties
This paper uses path integral methods to solve for bound states in diatomic molecules with combined Yukawa and four-parameter potentials, then calculates thermodynamic properties from the energy solutions. It's theoretical work on molecular quantum mechanics rather than quantum information science.
Key Contributions
- Analytical bound state solutions for combined Yukawa and four-parameter diatomic potentials using path integral formalism
- Derivation of thermodynamic properties from the compact energy equation for the molecular system
View Full Abstract
In this paper, we investigate the approximate analytical bound states for a linear combination of two diatomic molecular potentials, the Yukawa and four-parameter potentials, within the framework of the path integral formalism. With the help of an appropriate approximation to evaluate the centrifugal term, the energy spectrum and the normalized wave functions of the bound states are derived from the poles of the Green's function and its residues. The partition function and other thermodynamic properties are then obtained using the compact form of the energy equation.
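Once an energy spectrum $E_n$ is in hand, the thermodynamic quantities follow from the partition function $Z = \sum_n e^{-\beta E_n}$. A generic sketch with an assumed spectrum, not the paper's energy equation:

```python
import numpy as np

kB = 1.0
# Assumed anharmonic-oscillator-like spectrum (arbitrary units), as a placeholder
E = np.array([0.5 + n - 0.01 * n**2 for n in range(40)])

def thermodynamics(T):
    """Partition function and basic thermodynamic quantities at temperature T."""
    beta = 1.0 / (kB * T)
    w = np.exp(-beta * E)
    Z = w.sum()
    U = (E * w).sum() / Z                    # mean energy
    F = -kB * T * np.log(Z)                  # Helmholtz free energy
    S = (U - F) / T                          # entropy
    return Z, U, F, S

for T in (0.5, 1.0, 2.0):
    Z, U, F, S = thermodynamics(T)
    print(f"T={T}:  Z={Z:7.3f}  U={U:6.3f}  F={F:7.3f}  S={S:6.3f}")
```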
Topological sensing of superfluid rotation using non-Hermitian optical dimers
This paper proposes a quantum sensing method that uses non-Hermitian optical systems coupled to ring-trapped Bose-Einstein condensates to detect superfluid rotation. The approach exploits exceptional points in the optical system to create a robust, non-destructive way to measure the topological winding number of persistent currents in the superfluid.
Key Contributions
- Development of a non-destructive topological sensing scheme using exceptional points in non-Hermitian optical dimers
- Demonstration of noise-resilient digital sensing based on eigenmode permutation rather than fragile eigenvalue splittings
- Exact theoretical framework using Schur-complement reduction to describe light-matter dynamics in the optical-BEC system
View Full Abstract
We theoretically investigate a non-Hermitian optical dimer whose parameters are renormalized by dispersive and dissipative backaction from the coupling of the passive cavity with a ring-trapped Bose-Einstein condensate. The passive cavity is driven by a two-tone control laser, where each tone is in a coherent superposition of Laguerre-Gaussian beams carrying orbital angular momenta $\pm \ell \hbar$. This imprints an optical lattice on the ring trap, leading to Bragg-diffracted sidemode excitations. Using an exact Schur-complement reduction of the full light-matter dynamics, we derive a frequency-dependent self-energy and identify a static regime in which the atomic response produces a complex shift of the passive optical mode. This renormalized dimer supports a tunable exceptional point, enabling spectroscopic signatures in the optical transmission due to a probe field, which can in turn be utilized for estimating the winding number of the persistent current. Exploiting the associated half-integer topological charge, we propose a digital exceptional-point-based sensing scheme based on eigenmode permutation, providing a noise-resilient method to sense superfluid rotation without relying on fragile eigenvalue splittings. Importantly, the sensing proposals are intrinsically non-destructive, preserving the coherence of the atomic superfluid.
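The permutation-based, "digital" signature can already be seen in the simplest non-Hermitian dimer: encircling the exceptional point swaps the two eigenvalue branches. A minimal numerical sketch with illustrative parameters (not the cavity-BEC values of the paper):

```python
import numpy as np

# Dimer H = [[w - i*gamma, g], [g, w + i*gamma]] has eigenvalues w +/- sqrt(g^2 - gamma^2),
# so an exceptional point sits at g = gamma.
gamma, w = 1.0, 0.0
phis = np.linspace(0, 2 * np.pi, 400)
g_loop = gamma + 0.3 * np.exp(1j * phis)          # loop in the complex g-plane around the EP

branch, prev = [], None
for g in g_loop:
    H = np.array([[w - 1j * gamma, g], [g, w + 1j * gamma]])
    ev = np.linalg.eigvals(H)
    if prev is None:
        ev = ev[np.argsort(ev.real)]
    elif abs(ev[0] - prev[0]) > abs(ev[1] - prev[0]):
        ev = ev[::-1]                              # follow each branch by continuity
    branch.append(ev)
    prev = ev

start, end = branch[0], branch[-1]
print("eigenvalues swapped after one loop:", bool(np.allclose(start, end[::-1], atol=1e-6)))
```

The swap is a yes/no (permutation) outcome rather than a small frequency shift, which is why a readout based on it can be more robust to noise than one based on eigenvalue splittings.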
A scalable gallium-phosphide-on-diamond spin-photon interface
This paper demonstrates the first high-cooperativity coupling between quantum defects (silicon-vacancy centers in diamond) and a scalable planar photonic platform using gallium phosphide nanophotonic cavities integrated on diamond substrates. The researchers achieved spin-dependent optical switching and single-shot spin readout, establishing a promising architecture for quantum networking applications.
Key Contributions
- First demonstration of high-cooperativity coupling between SiV centers and hybrid-integrated nanophotonics in a scalable planar platform
- Integration of over 600 gallium phosphide nanophotonic cavities on diamond substrate with controllable spin-photon interfaces
- Achievement of spin-dependent transmission switching and quantum jump detection via single-shot readout of SiV spin states
View Full Abstract
The efficient interfacing of quantum emitters and photons is fundamental to quantum networking. Quantum defects embedded in integrated nanophotonic circuits are promising for such applications due to the deterministic light-matter interactions of high-cooperativity ($C>1$) cavity quantum electrodynamics and potential for scalable integration with active photonic processing. Silicon-vacancy (SiV) centers embedded in diamond nanophotonic cavities are a leading approach due to their excellent optical and spin coherence; however, their long-term scalability is limited by the diamond itself, as its suspended geometry and weak nonlinearity necessitate coupling to a second processing chip. Here we realize the first high-cooperativity coupling of quantum defects to hybrid-integrated nanophotonics in a scalable, planar platform. We integrate more than 600 gallium phosphide (GaP) nanophotonic cavities on a diamond substrate with near-surface SiV centers. We examine a particular device with two strongly coupled SiV centers in detail, confirming above-unity cooperativity via multiple independent measurements. Application of an external magnetic field via a permanent magnet enables optical resolution of the SiV spin transitions from which we determine a spin-relaxation time $T_1>0.4$ ms at 4 K. We utilize the high cooperativity coupling to observe spin-dependent transmission switching and the quantum jumps of the SiV spin via single-shot readout. These results, coupled with GaP's strong nonlinear properties, establish GaP-on-diamond as a scalable planar platform for quantum network applications.
The Role of Quantum in Hybrid Quantum-Classical Neural Networks: A Realistic Assessment
This paper systematically evaluates hybrid quantum-classical neural networks to determine whether quantum components actually improve performance compared to purely classical models. The researchers tested these hybrid models on medical signals and image data, finding that quantum components typically either provide no benefit or actually worsen performance compared to classical approaches.
Key Contributions
- Rigorous statistical comparison of hybrid quantum-classical neural networks versus classical counterparts across multiple data types
- Systematic analysis of quantum components' impact including encoding schemes, entanglement, and circuit size on model performance
View Full Abstract
Quantum machine learning has emerged as a promising application domain for near-term quantum hardware, particularly through hybrid quantum-classical models that leverage both classical and quantum processing. Although numerous hybrid architectures have been proposed and demonstrated successfully on benchmark tasks, a significant open question remains regarding the specific contribution of quantum components to the overall performance of these models. In this work, we aim to shed light on the impact of quantum processing within hybrid quantum-classical neural network architectures through a rigorous statistical study. We systematically assess common hybrid models on medical signal data as well as planar and volumetric images, examining the influence attributable to classical and quantum aspects such as encoding schemes, entanglement, and circuit size. We find that in best-case scenarios, hybrid models show performance comparable to their classical counterparts; however, in most cases, performance metrics deteriorate under the influence of quantum components. Our multi-modal analysis provides realistic insights into the contributions of quantum components and advocates for cautious claims and design choices for hybrid models in near-term applications.
Regularization from Superpositions of Time Evolutions
This paper presents a novel approach to regularizing quantum field theory calculations by using superpositions of time evolution operators with postselection. The method naturally generates smooth filters that suppress problematic high-energy contributions in path integrals, providing an alternative to traditional regularization techniques like cutoffs or lattice discretizations.
Key Contributions
- Development of a regularization method based on coherently controlled superpositions of time evolutions with postselection
- Demonstration that Gaussian superpositions produce natural energy filters that stabilize path integral calculations for singular potentials
- Extension to scalar quantum field theory showing how local coupling smearing provides large-field stabilization
View Full Abstract
Short-time approximations and path integrals can be dominated by high-energy or large-field contributions, especially in the presence of singular interactions, motivating regulators that are suppressive yet removable. Standard regulators typically impose such suppressions by hand (e.g. cutoffs, higher-derivative terms, heat-kernel smearing, lattice discretizations), while here we show that closely related smooth filters can arise as the conditional map produced by interference in a coherently controlled, postselected superposition of evolutions. A successful postselection implements a single heralded operator that is a coherent linear combination of time-evolution operators. For a Gaussian superposition of time translations in quantum mechanics, the postselected step is $V_{\sigma,\Delta t}=e^{-iH\Delta t}\,e^{-\frac12\sigma^2\Delta t^2H^2}$, i.e. the desired unitary step multiplied by a Gaussian energy filter suppressing energies above order $1/(\sigma\Delta t)$. This renders short-time kernels in time-sliced path-integral approximations well behaved for singular potentials, while the target unitary dynamics is recovered as $\sigma\to0$ and (for fixed $\sigma$) also as $\Delta t\to0$ at fixed $t$. In scalar QFT, a local Gaussian smearing of the quartic coupling induces a positive $(\sigma^2/2)\varphi^8$ term in the Euclidean action, providing a symmetry-compatible large-field stabilizer; it is naturally viewed as an irrelevant operator whose effects can be renormalized at fixed $\sigma$ (together with a conventional UV regulator) and removed by taking $\sigma\to0$. We give short-time error bounds and analyze multi-step success probabilities.
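The central operator identity quoted above, that a Gaussian-weighted superposition of time translations equals the ideal step times a Gaussian energy filter, is straightforward to check numerically for a random small Hamiltonian (matrix size and parameters below are arbitrary assumptions):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                          # random Hermitian "Hamiltonian"

dt, sigma = 0.7, 0.4
s = sigma * dt                                    # width of the time superposition

# Gaussian-weighted superposition of e^{-iH(dt + tau)} over a quadrature grid
taus = np.linspace(-8 * s, 8 * s, 2001)
weights = np.exp(-taus**2 / (2 * s**2))
weights /= weights.sum()
V_superposed = sum(wt * expm(-1j * H * (dt + tau)) for wt, tau in zip(weights, taus))

# The heralded operator from the abstract: ideal step times a Gaussian energy filter
V_target = expm(-1j * H * dt) @ expm(-0.5 * s**2 * (H @ H))

print("max deviation:", float(np.max(np.abs(V_superposed - V_target))))
```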
Hardy nonlocality for entangled pairs in a four-particle system
This paper investigates Hardy's paradox (a form of quantum nonlocality) in a four-particle system with cyclic entanglement structure, where particles are entangled only with their neighbors. The researchers find this configuration provides more ways to demonstrate nonlocality than fully entangled systems and test their theoretical predictions using quantum circuits on IBM quantum hardware.
Key Contributions
- Demonstrated that cyclic entanglement structures in four-particle systems provide enhanced Hardy nonlocality compared to fully entangled systems
- Implemented and tested quantum circuits for Hardy paradox verification on IBM quantum hardware, revealing significant deviations between simulation and experimental results
View Full Abstract
Nonlocality can be studied through different approaches, such as Bell's inequalities, and it can be found in numerous quantum states, including GHZ states and graph states. Hardy's paradox, or Hardy-type nonlocality, provides a way to investigate nonlocality for entangled states of particles without using inequalities. Previous studies of Hardy's nonlocality have mostly focused on fully entangled systems, while other entanglement configurations remain less explored. In this work, the system under investigation consists of four particles arranged in a cyclic entanglement configuration, where each particle forms entangled pairs with two neighbors, while non-neighboring particles remain unentangled. We find that this entanglement structure offers a larger set of conditions that lead to a contradiction with the local-hidden-variable (LHV) model, compared to fully entangled systems. This enhancement can be attributed to the presence of multiple excluded states and correlations in which the measurement result of a particle only influences the result of its paired partners. We implement quantum circuits compatible with the cyclic entanglement structure, and through simulation, the correlation patterns and the states of interest are identified. We further execute the proposed circuits on IBM Brisbane, a practical backend; however, the results show considerable deviations from the simulation counterparts.
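For readers unfamiliar with Hardy-type arguments, the following NumPy check works through the standard two-particle version (using the state (|00⟩+|01⟩+|10⟩)/√3 with Z and X measurements, an illustrative choice rather than the paper's four-particle cyclic construction): three joint outcomes have zero probability, yet a fourth is nonzero, which is impossible for any LHV assignment.

```python
import numpy as np

# Two-particle Hardy check with |psi> = (|00> + |01> + |10>)/sqrt(3),
# measuring each qubit either in the Z (computational) or X (+/-) basis.
psi = np.array([1, 1, 1, 0], dtype=complex) / np.sqrt(3)

z = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]  # |0>, |1>
x = [np.array([1, 1], dtype=complex) / np.sqrt(2),                       # |+>
     np.array([1, -1], dtype=complex) / np.sqrt(2)]                      # |->

def prob(a, b):
    """Joint probability of projecting qubit 1 onto a and qubit 2 onto b."""
    return abs(np.kron(a, b).conj() @ psi) ** 2

print("P(Z1=1, Z2=1) =", prob(z[1], z[1]))   # 0: excluded event
print("P(X1=-, Z2=0) =", prob(x[1], z[0]))   # 0: X1=- forces Z2=1
print("P(Z1=0, X2=-) =", prob(z[0], x[1]))   # 0: X2=- forces Z1=1
print("P(X1=-, X2=-) =", prob(x[1], x[1]))   # 1/12 > 0: the Hardy contradiction
```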
Classical solution of the FeMo-cofactor model to chemical accuracy and its implications
This paper demonstrates that the electronic structure of the FeMo-cofactor (a complex biological catalyst with 8 transition metal ions) can be computed to chemical accuracy using classical computers, challenging previous assumptions that quantum computers would be necessary for this task.
Key Contributions
- Developed classical computational protocols that achieve chemical accuracy for the 76-orbital FeMo-cofactor model previously thought to require quantum computing
- Provided a benchmarking result that informs resource estimates and expectations for quantum chemistry algorithms on quantum computers
View Full Abstract
The main source of reduced nitrogen for living things is nitrogenase, which converts N2 to NH3 at the FeMo-cofactor (FeMo-co). Because of its role in supporting life, the uncertainty surrounding the catalytic cycle, and its compositional richness with eight transition metal ions, FeMo-co has fascinated scientists for decades. After much effort, the complete atomic structure was resolved. However, its electronic structure, central to reactivity, remains under intense debate. FeMo-co's complexity, arising from many unpaired electrons, has led to suggestions that it lies beyond the reach of classical computing. Consequently, there has been much interest in the potential of quantum algorithms to compute its electronic structure. Estimating the cost to compute the ground state to chemical accuracy (~1 kcal/mol) within one or more FeMo-co models is a common benchmark of quantum algorithms in quantum chemistry, with numerous resource estimates in the literature. Here we address how to perform the same task using classical computation. We use a 76 orbital/152 qubit resting state model, the subject of most quantum resource estimates. Based on insight into the multiple configuration nature of the states, we devise classical protocols that yield rigorous or empirical upper bounds to the ground-state energy. Extrapolating these, we predict the ground-state energy with an estimated uncertainty on the order of chemical accuracy. Having performed this long-discussed computational task, we next consider implications beyond the model. We distill a simpler computational procedure which we apply to reveal the electronic landscape in realistic representations of the cofactor. We thus illustrate a path to a precise computational understanding of FeMo-co electronic structure.
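The extrapolation step can be pictured with a deliberately simple sketch: fit a sequence of variational upper bounds against a convergence proxy and read off the intercept. The numbers below are synthetic placeholders, not the paper's data, and the linear fitting form is an assumption.

```python
import numpy as np

# Synthetic variational upper bounds E(x) at decreasing values of a
# convergence proxy x (e.g., a truncation-error-like quantity). A linear
# fit and its x -> 0 intercept give the extrapolated energy estimate;
# the fit residuals give a rough uncertainty.
x = np.array([8e-4, 4e-4, 2e-4, 1e-4])          # convergence proxy (synthetic)
E = np.array([-105.2, -105.6, -105.8, -105.9])  # variational energies (synthetic)

slope, intercept = np.polyfit(x, E, 1)
residual = np.max(np.abs(np.polyval([slope, intercept], x) - E))
print(f"extrapolated energy ~ {intercept:.2f}  (spread ~ {residual:.2f})")
```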
Path Integral Lindblad Dynamics in Presence of Time-Dependent Fields
This paper presents an improved method for modeling quantum systems that interact with thermal environments and are subject to time-dependent external fields. The new formulation overcomes limitations of the previous Path Integral Lindblad Dynamics method by eliminating the need to directly calculate complex memory kernels, making it applicable to time-varying systems.
Key Contributions
- Extended PILD method to handle time-dependent external fields
- Simplified formulation that avoids direct evaluation of non-Markovian memory kernels
- Enabled application to Floquet systems with periodic time dependence
View Full Abstract
The path integral Lindblad dynamics (PILD) method [A. Bose, J. Phys. Chem. Lett. 15(12), 3363-3368 (2024)] was introduced as a way of incorporating the impact of certain empirical processes, such as pumps and drains, on the dynamics of quantum systems interacting with thermal environments. Because the method was based on the time-translational invariance of the Nakajima-Zwanzig memory kernel, however, it could not account for time-dependent external fields. In this communication, we give an alternate, simpler formulation of PILD that allows us to go beyond this limitation. It does not require direct evaluation of the non-Markovian memory kernel and can consequently be applied to Floquet systems as well.
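For orientation, the sketch below integrates an ordinary Lindblad master equation for a single qubit with a time-dependent drive, which is the kind of dynamics the extended method targets; it is a direct numerical propagation, not the PILD path-integral formulation itself, and all parameters are illustrative.

```python
import numpy as np

# Single qubit with a pulsed drive and decay rate gamma; basis ordering is
# (|e>, |g>), so the decay (lowering) operator is |g><e|. Values are assumed.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
L  = np.array([[0, 0], [1, 0]], dtype=complex)   # |g><e|
gamma, delta = 0.2, 0.0

def H(t):
    """Rotating-frame Hamiltonian: detuning plus a pulsed resonant drive."""
    omega = 1.0 * np.exp(-((t - 5.0) / 1.5) ** 2)   # Gaussian pulse envelope
    return 0.5 * delta * sz + 0.5 * omega * sx

def lindblad_rhs(t, rho):
    """d(rho)/dt for the Lindblad master equation with one decay channel."""
    comm = -1j * (H(t) @ rho - rho @ H(t))
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

# Fourth-order Runge-Kutta propagation of the density matrix.
rho = np.array([[0, 0], [0, 1]], dtype=complex)   # start in |g>
dt, steps = 0.01, 1000
for n in range(steps):
    t = n * dt
    k1 = lindblad_rhs(t, rho)
    k2 = lindblad_rhs(t + dt / 2, rho + dt / 2 * k1)
    k3 = lindblad_rhs(t + dt / 2, rho + dt / 2 * k2)
    k4 = lindblad_rhs(t + dt, rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("excited-state population after the pulse:", rho[0, 0].real)
```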
Multimode Fock-State Measurements using Dispersive Shifts in a Trapped Ion
This paper demonstrates new techniques for measuring quantum states in trapped ion systems by using dispersive shifts to map information about multiple vibrational modes onto a single spin qubit. The researchers show they can measure complex quantum states and perform filtering operations using only one qubit to probe multiple motional modes.
Key Contributions
- Development of single-spin multimode measurement primitive using dispersive shifts
- Implementation of selective-decoupling scheme to cancel unwanted phase shifts while preserving phonon-dependent phases
- Demonstration of two-mode Fock-state distribution extraction and parity-based filtering
- Realization of nondestructive single-shot Fock state measurement via repeated filtering
View Full Abstract
Trapped ions naturally host multiple motional modes alongside long-lived spin qubits, providing a scalable multimode bosonic register. Efficiently characterizing such bosonic registers requires the ability to access many motional modes with limited spin resources. Here we introduce a single-spin, multimode measurement primitive using dispersive shifts in the far-detuned multimode Jaynes-Cummings interaction. We implement a Ramsey sequence that maps phonon-number-dependent phases onto the spin, thereby realizing a multimode spin-dependent rotation (SDR). We also introduce a selective-decoupling scheme that cancels the phase induced by the carrier AC-Stark shift while preserving the phonon-number-dependent phase induced by the dispersive shift. Using this SDR-based Ramsey sequence on a single trapped ion, we experimentally extract two-mode Fock-state distributions, perform parity-based filtering of two-mode motional states, and realize a nondestructive single-shot measurement of a single-mode Fock state via repeated filtering steps.
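A toy model of the underlying signal, with assumed dispersive shifts and thermal occupations (not the experimental values): the Ramsey phase accumulated by the spin depends linearly on the phonon numbers of both modes, so the signal versus probe time carries the two-mode Fock-state weights in its frequency components.

```python
import numpy as np

# Illustrative dispersive shifts for two motional modes and thermal Fock
# distributions with mean occupations nbar1, nbar2 (all values assumed).
chi = np.array([2 * np.pi * 3.0e3, 2 * np.pi * 7.0e3])   # rad/s per phonon
nmax = 30
n = np.arange(nmax)

def thermal(nb):
    p = (nb / (1 + nb)) ** n / (1 + nb)
    return p / p.sum()

p1, p2 = thermal(0.5), thermal(1.2)

def ramsey_signal(tau):
    """Spin-up probability: phonon-number-dependent phase averaged over the
    joint (product) Fock distribution of the two modes."""
    phase = chi[0] * n[:, None] * tau + chi[1] * n[None, :] * tau
    joint = p1[:, None] * p2[None, :]
    return np.sum(joint * 0.5 * (1 + np.cos(phase)))

taus = np.linspace(0, 1e-3, 400)
signal = np.array([ramsey_signal(t) for t in taus])
# Fourier components of signal vs tau sit at (chi1*n1 + chi2*n2)/2pi, so the
# two-mode Fock-state weights can in principle be read off from the spectrum.
print(signal[:5])
```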
Observation of ΔJ=0 Rotational Excitation in Dense Hydrogens
This paper reports Raman spectroscopy measurements of dense hydrogen isotopes that reveal a previously unobserved rotational excitation mode (ΔJ=0) with unique properties - it has zero energy shift in gas/fluid phases but gains energy in solid phases, and shows isotope-independent behavior unlike normal rotational or vibrational modes.
Key Contributions
- Discovery of ΔJ=0 rotational excitation mode in dense hydrogen isotopes with crystal field effects
- Demonstration of isotope-independent quantum excitation that differs from standard harmonic oscillator and quantum rotor behavior
View Full Abstract
Raman measurements performed on dense H2, D2 and H2+D2 in a wide pressure-temperature range reveal the presence of the ΔJ=0 rotational excitation. In the gas/fluid state this excitation has zero Raman shift, but in the solid the crystal field drives it away from zero, e.g. to 75 cm⁻¹ at around 50 GPa and 10 K for both isotopes and their mixture. In the case of deuterium, the ΔJ=0 mode splits upon entering phase II, suggesting a very complex molecular environment in the broken symmetry phase (BSP). In the fluid state and in phases I and II, the frequencies (energies) of the ΔJ=0 transition for H2 and D2 scale neither as rotational modes (by a factor of 2) nor as vibrational modes (by a factor of √2) and appear to be completely isotope independent. This independence of mass marks the transition as unique and as a fundamentally different type of excitation from the commonly considered harmonic oscillator and quantum rotor.
Increasing the secret key rates and point-to-multipoint extension for experimental coherent-one-way quantum key distribution protocol
This paper demonstrates experimental improvements to quantum key distribution (QKD) by combining information from multiple detectors to increase secret key generation rates and extending the protocol to support one transmitter communicating securely with two receivers simultaneously.
Key Contributions
- Experimental demonstration of increased secret key rates by combining time-bin information from two detectors in COW QKD protocol
- Implementation of point-to-multipoint COW QKD protocol enabling secure key sharing between one transmitter and two receivers
View Full Abstract
Using quantum key distribution (QKD) protocols, a secret key is created between two distant users (transmitter and receiver) at a particular key rate. Quantum technology can facilitate secure communication for cryptographic applications, combining QKD with one-time-pad (OTP) encryption. In order to ensure the continuous operation of QKD in real-world networks, efforts have been concentrated on optimizing the use of components and effective QKD protocols to improve secret key rates and increase the transmission between multiple users. Generally, in experimental implementations, the secret key rates are limited by single-photon detectors, which are used at the receivers of QKD and create a bottleneck due to their limited detection rates (detectors with low detection efficiency and high detector dead-time). We experimentally show that secret key rates can be increased by combining the time-bin information of two such detectors on the data line of the receiver for the coherent-one-way (COW) QKD protocol with a minimal increase in quantum bit error rate (QBER, the proportion of erroneous bits). Further, we implement a point-to-multipoint COW QKD protocol, introducing an additional receiver module. The three users (one transmitter and two receivers) share the secret key in post-processing, relying on OTP encryption. Typically, the dual-receiver extension can improve the combined secret key rates of the system; however, one has to optimise the experimental parameters to achieve this within security margins. These methods are general and can be applied to any implementation of the COW protocol.
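The detector bottleneck and the benefit of combining two detectors can be illustrated with a standard non-paralyzable dead-time model; the dead time and rates below are assumptions for illustration, not the values of the experiment.

```python
import numpy as np

# Non-paralyzable dead-time model: a detector hit at rate R registers
# R / (1 + R * tau_dead) counts per second. Splitting the incoming light
# 50/50 over two detectors relaxes this saturation. Numbers are illustrative.
tau_dead = 50e-9              # 50 ns dead time (assumed)
R = np.logspace(5, 8, 7)      # incident photon rate at the data line (Hz)

single = R / (1 + R * tau_dead)
dual   = 2 * (R / 2) / (1 + (R / 2) * tau_dead)

for r, s, d in zip(R, single, dual):
    print(f"incident {r:9.2e} Hz  one detector {s:9.2e}  two detectors {d:9.2e}")
```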
Momentum-Space Entanglement Entropy as a Universal Signature of Dynamical Quantum Phase Transitions
This paper introduces a new way to detect dynamical quantum phase transitions by measuring entanglement between different momentum modes after a quantum system is suddenly changed. The authors prove that at critical points where these transitions occur, the entanglement reaches its maximum possible value, providing a universal signature for identifying such transitions.
Key Contributions
- Introduction of momentum-space entanglement entropy as a universal probe for dynamical quantum phase transitions
- Analytical proof that critical momenta in DQPT saturate entanglement entropy to maximum value ln(d)
- Establishment of direct connection between entanglement saturation and vanishing Loschmidt echo at DQPT
View Full Abstract
We introduce a momentum-space entanglement entropy to quantify quantum correlations between distinct momentum modes following a quench. We prove analytically in the transverse-field Ising (TFI) model and the Su-Schrieffer-Heeger (SSH) chain that every critical momentum $k^{*}$ associated with a dynamical quantum phase transition (DQPT) saturates its entanglement entropy to the maximal value $\ln{d}$ ($d=2$ in TFI and SSH models), coinciding with the vanishing of the Loschmidt echo. This saturation of mode entanglement thus provides a universal, direct signature of DQPTs. Our work thus establishes a unified, entanglement-based perspective on dynamical quantum phase transitions.
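For the transverse-field Ising quench, the saturation can be checked directly: the sketch below computes the per-mode excitation probability from the Bogoliubov angles, the corresponding mode entropy, and the per-mode Loschmidt echo, using the standard TFI conventions (an illustration consistent with the abstract, not the paper's code).

```python
import numpy as np

# Transverse-field Ising chain quenched across the critical point g = 1.
g0, g1 = 0.5, 2.0
k = np.linspace(1e-4, np.pi - 1e-4, 2000)

def theta(g):
    """Bogoliubov angle of the TFI model (standard convention)."""
    return 0.5 * np.arctan2(np.sin(k), g - np.cos(k))

def eps(g):
    """Quasiparticle energy of the post-quench Hamiltonian."""
    return 2.0 * np.sqrt(1 + g**2 - 2 * g * np.cos(k))

dtheta = theta(g1) - theta(g0)
p = np.clip(np.sin(dtheta) ** 2, 1e-12, 1 - 1e-12)   # excitation probability
S = -p * np.log(p) - (1 - p) * np.log(1 - p)          # mode entanglement entropy

i_star = np.argmax(S)                                  # critical momentum k*
k_star_exact = np.arccos((1 + g0 * g1) / (g0 + g1))
print("k* (numerical) =", k[i_star], " k* (analytic) =", k_star_exact)
print("S(k*) =", S[i_star], " ln 2 =", np.log(2))

# Per-mode Loschmidt echo |G_k(t)|^2 = 1 - sin^2(2*dtheta) sin^2(eps_k t):
# it vanishes at k* for t = pi / (2 eps_{k*}), exactly where S saturates.
t_star = np.pi / (2 * eps(g1)[i_star])
echo = 1 - np.sin(2 * dtheta[i_star]) ** 2 * np.sin(eps(g1)[i_star] * t_star) ** 2
print("Loschmidt echo at (k*, t*):", echo)
```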
Pauli Measurements Are Near-Optimal for Pure State Tomography
This paper presents an improved algorithm for quantum state tomography that can reconstruct an unknown n-qubit pure state using fewer copies than previous methods. The algorithm uses only simple Pauli measurements and achieves near-optimal efficiency, reducing the required number of state copies from O(3^n/ε) to O(2^n/ε).
Key Contributions
- Improved copy complexity for pure state tomography from O(3^n/ε) to O(2^n/ε)
- Algorithm using only nonadaptive Pauli measurements with polynomial runtime
View Full Abstract
We give an algorithm for pure state tomography with near-optimal copy complexity using single-qubit measurements. Specifically, given $\widetilde{O}(2^n/ε)$ copies of an unknown pure $n$-qubit state $\lvert ψ \rangle$, the algorithm performs only nonadaptive Pauli measurements, runs in time $\mathrm{poly}(2^n,1/ε)$, and outputs $\lvert \widehat{ψ} \rangle$ that has fidelity $1-ε$ with $\lvert ψ \rangle$ with high probability. This improves upon the previous best copy complexity bound of $\widetilde{O}(3^n/ε)$.
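As a minimal illustration of tomography from Pauli measurements (the n = 1 case only, not the paper's algorithm), the sketch below estimates the Bloch vector from sampled X, Y, Z expectation values and reconstructs the pure state.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unknown single-qubit pure state (the simplest instance of tomography
# from nonadaptive Pauli measurements).
alpha, beta = np.cos(0.4), np.sin(0.4) * np.exp(1j * 1.1)
psi = np.array([alpha, beta])

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def sample_expectation(P, shots=20000):
    """Estimate <P> from projective measurements in the eigenbasis of P."""
    vals, vecs = np.linalg.eigh(P)
    probs = np.abs(vecs.conj().T @ psi) ** 2
    probs = probs / probs.sum()
    return rng.choice(vals, size=shots, p=probs).mean()

r = np.array([sample_expectation(P) for P in (X, Y, Z)])   # Bloch vector
r = r / np.linalg.norm(r)                                   # project onto pure states

# Rebuild the state from rho = (I + r.sigma)/2 and take its top eigenvector.
rho = 0.5 * (np.eye(2) + r[0] * X + r[1] * Y + r[2] * Z)
est = np.linalg.eigh(rho)[1][:, -1]
print("fidelity of reconstruction:", np.abs(est.conj() @ psi) ** 2)
```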
A Broadband Nanowire Quantum Dot Cavity Design for the Efficient Extraction of Entangled Photons
This paper proposes a new nanowire cavity design that uses quantum dots to generate entangled photons more efficiently. The design achieves high light extraction efficiency and better photon quality, which could improve quantum communication networks.
Key Contributions
- Novel nanowire cavity design based on quasi-bound states in the continuum
- Demonstration of 17x Purcell enhancement with 74% light extraction efficiency
- Enhanced single-photon indistinguishability for entangled photon sources
View Full Abstract
A bright source of on-demand entangled photons is needed for quantum networks. A single quantum dot in a site-selected nanowire waveguide is a promising candidate for realizing such sources. However, such sources are associated with poor single-photon indistinguishability, limiting their applicability in quantum networks. A common approach for enhancing the single-photon indistinguishability in quantum dot-based entangled photon sources is to implement a broadband optical cavity. Achieving a high-Purcell cavity while retaining the advantages of the nanowire, such as directional emission, a broad operational bandwidth, and high light extraction efficiency, has been a significant challenge. Here, we propose a nanowire cavity based on quasi-bound states in the continuum formed by the strong coupling of two resonant optical modes. We numerically predict this design to support a cavity mode with 4 nm bandwidth and a Purcell enhancement of $\sim$17. This cavity mode enables a directional far-field emission profile (88% overlap with a Gaussian) with a light extraction efficiency of $\sim$74%. Our solution opens up a route for generating entangled photon pairs with enhanced extraction efficiency and single-photon indistinguishability for the practical realization of quantum networks.
Solving nonlinear differential equations on noisy $156$-qubit quantum computers
This paper demonstrates the use of IBM's 156-qubit quantum computers to solve nonlinear differential equations using a hybrid classical-quantum algorithm called H-DES. The researchers successfully applied their approach to solve a material deformation problem and the inviscid Burgers' equation on current noisy quantum hardware.
Key Contributions
- Development and demonstration of H-DES hybrid algorithm for solving nonlinear differential equations on NISQ devices
- Successful implementation on IBM's 156-qubit quantum computers for physically relevant problems
View Full Abstract
In this paper, we report on the resolution of nonlinear differential equations using IBM's quantum platform. More specifically, we demonstrate that the hybrid classical-quantum algorithm H-DES successfully solves a one-dimensional material deformation problem and the inviscid Burgers' equation on IBM's 156-qubit quantum computers. These results constitute a step toward performing physically relevant simulations on present-day Noisy Intermediate-Scale Quantum (NISQ) devices.
Improved Lower Bounds for Learning Quantum Channels in Diamond Distance
This paper proves improved theoretical lower bounds on the number of queries needed to learn unknown quantum channels within a specified accuracy (diamond distance). The work establishes that learning requires more queries than previously thought, with the bound now explicitly depending on the desired accuracy level.
Key Contributions
- Improved lower bound for quantum channel learning with explicit epsilon-dependence
- Construction of channel ensembles that are well-separated in diamond norm but have close Stinespring isometries
View Full Abstract
We prove that learning an unknown quantum channel with input dimension $d_A$, output dimension $d_B$, and Choi rank $r$ to diamond distance $\varepsilon$ requires $ Ω\!\left( \frac{d_A d_B r}{\varepsilon \log(d_B r / \varepsilon)} \right)$ queries. This improves the best previous $Ω(d_A d_B r)$ bound by introducing explicit $\varepsilon$-dependence, with a scaling in $\varepsilon$ that is near-optimal when $d_A=rd_B$ but not tight in general. The proof constructs an ensemble of channels that are well-separated in diamond norm yet admit Stinespring isometries that are close in operator norm.
Below-shot-noise capacity in phase estimation using nonlinear interferometers
This paper compares three different quantum interferometer designs for phase measurement, finding that while some can theoretically achieve better-than-classical precision, the Mandel-type interferometer with differential detection provides the most robust performance under realistic conditions with losses.
Key Contributions
- Comparative analysis of three nonlinear interferometer configurations for phase estimation under realistic conditions
- Demonstration that Mandel interferometer with differential detection provides most robust quantum-enhanced sensing performance in presence of loss
View Full Abstract
Over the past decade, several schemes for imaging and sensing based on nonlinear interferometers have been proposed and demonstrated experimentally. These interferometers exhibit two main advantages. First, they enable probing a sample at a chosen wavelength while detecting light at a different wavelength with high efficiency (bicolor quantum imaging and sensing with undetected light). Second, they can show quantum-enhanced sensitivities below the shot-noise limit, potentially reaching Heisenberg-limited precision in parameter estimation. Here, we compare three quantum-imaging configurations using only easily accessible intensity-based measurements for phase estimation: a Yurke-type SU(1,1) interferometer, a Mandel-type induced-coherence interferometer, and a hybrid scheme that continuously interpolates between them. While an ideal Yurke interferometer can exhibit Heisenberg scaling, this advantage is known to be fragile under realistic detection constraints and in the presence of loss. We demonstrate that differential intensity detection in the Mandel interferometer provides the highest and most robust phase sensitivity among the considered schemes, reaching but not surpassing the shot-noise limit, even in the presence of loss. Intensity measurements in a Yurke-type configuration can achieve genuine sub-shot-noise sensitivity under balanced losses and moderate gain; however, their performance degrades in realistic high-gain regimes. Consequently, in this regime, the Mandel configuration with differential detection outperforms the Yurke-type setup and constitutes the most robust approach for phase estimation.
Bridging the Linear-Quadratic Gap: A Quantum-Classical Hybrid Approach to Robust Supply Chain Design
This paper applies quantum-inspired optimization algorithms to supply chain network design in urban logistics, comparing their performance to classical greedy algorithms on a simulated Delhi road network. The quantum-inspired approach achieves better spatial distribution of facilities and reduced operational overlap while maintaining similar demand satisfaction levels.
Key Contributions
- Introduction of quantum-inspired optimization for supply chain design that addresses the Linear-Quadratic Gap in facility placement
- Demonstration that quantum-inspired methods achieve more spatially balanced facility distribution with 35.8% improvement in overlap penalty compared to greedy algorithms
View Full Abstract
The design of supply chain networks in densely populated urban logistics systems faces a timely dilemma: traditional optimisation approaches are effective at maximising demand satisfaction, but they tolerate large costs from overlapping facilities and market cannibalisation. When tested on a high-fidelity digital twin of the Delhi NCR road network with thirty candidate sites, we establish that a classical Greedy algorithm captures the theoretical maximum demand (473 units) because its objective ignores overlap, but it incurs a prohibitive overlap penalty (5.08). In comparison, the quantum-inspired solution loses only 3.2% of demand (450 compared to 465 units relative to the optimal solution), yet carries 21.8% less operational overlap risk (3.26 compared to 4.17), which can be viewed as a 35.8% improvement over the Greedy solution. Geospatial analysis attributes this to a shift in strategy: in contrast to the classical approach, which concentrates facilities in high-density central areas (North/Central Delhi), the quantum-inspired solver autonomously chooses a diversified north-south network topology that penetrates underserved peripheral growth markets. This spatially balanced arrangement is congruent with the polycentric structure of modern megacities and displays better stability against demand volatility. We show that quantum-inspired optimisation methods can close the so-called Linear-Quadratic Gap, i.e. the systematic inability of greedy methods to capture the quadratic interactions between facilities, and offer a computational pathway to operationally robust and risk-optimised supply chain networks in dense urban conditions.
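The Linear-Quadratic Gap can be made concrete with a toy facility-selection objective: a linear demand term minus a quadratic overlap penalty. In the sketch below (synthetic data, with exhaustive search standing in for the quantum-inspired solver), the greedy choice maximizes demand but pays heavily in overlap.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy instance: choose K facilities out of N candidate sites to maximize
# covered demand (linear term) minus pairwise overlap penalties (quadratic
# term). Demands and overlaps are synthetic.
N, K = 10, 4
demand = rng.uniform(20, 100, size=N)
overlap = np.triu(rng.uniform(0, 15, size=(N, N)), 1)

def score(sites):
    lin = demand[list(sites)].sum()
    quad = sum(overlap[i, j] for i, j in combinations(sorted(sites), 2))
    return lin - quad, lin, quad

# Greedy: pick the K highest-demand sites, ignoring the quadratic term.
greedy = tuple(np.argsort(demand)[-K:])

# Exhaustive search over all size-K subsets (stands in here for the
# quantum-inspired/QUBO solver on this tiny instance).
best = max(combinations(range(N), K), key=lambda s: score(s)[0])

for name, sites in [("greedy", greedy), ("full QUBO optimum", best)]:
    total, lin, quad = score(sites)
    print(f"{name:18s} demand={lin:6.1f} overlap={quad:6.1f} net={total:6.1f}")
```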
Extracting scattering phase shift in quantum mechanics on quantum computers
This paper explores using quantum computers to calculate scattering phase shifts in quantum mechanics by implementing integrated correlation functions on quantum circuits. The researchers test their approach on IBM quantum hardware, finding success with two qubits but failure with three qubits due to gate errors and decoherence.
Key Contributions
- Development of quantum circuits to compute scattering phase shifts using integrated correlation functions
- Demonstration of quantum algorithm performance limitations due to two-qubit gate errors and thermal relaxation on NISQ devices
View Full Abstract
We investigate the feasibility of extracting the infinite volume scattering phase shift on quantum computers in a simple one-dimensional quantum mechanical model, using the formalism established in Ref. [Guo:2023ecc] that relates the integrated correlation functions (ICF) for a trapped system to the infinite volume scattering phase shifts through a weighted integral. The system is first discretized in a finite box with periodic boundary conditions, and the formalism in real time is verified by employing a contact interaction potential with exact solutions. Quantum circuits are then designed and constructed to implement the formalism on current quantum computing architectures. To overcome the fast oscillatory behavior of the integrated correlation functions in real-time simulation, different methods of post-data analysis are proposed and discussed. Test results on IBM hardware show that good agreement can be achieved with two qubits, but complete failure ensues with three qubits due to two-qubit gate operation errors and thermal relaxation errors.
Surface Optimization of Aluminum Resonators for Robust Quantum Device Fabrication
This paper investigates surface treatment methods to improve aluminum-based superconducting quantum resonators, focusing on reducing dielectric losses when devices are exposed to ambient conditions for extended periods before testing. The researchers developed chemical treatment processes that achieved very high quality factors, making aluminum resonators more suitable for industrial-scale quantum device manufacturing.
Key Contributions
- Developed surface passivation and selective etching techniques for aluminum resonators that maintain low dielectric losses after extended ambient exposure
- Achieved ultra-low dielectric losses of 5.2×10^-7 with quality factors approaching 1.9 million through sequential HF vapor and phosphoric acid treatments
View Full Abstract
Aluminum remains the central material for superconducting qubits, and considerable effort has been devoted to optimizing its deposition and patterning for quantum devices. However, while post-processing of Nb- and Ta-based resonators has been widely explored, primarily focusing on oxide removal using buffered oxide etch (BOE), post-treatment strategies for Al resonators remain underdeveloped. This challenge becomes particularly relevant for industry-scale fabrication with multichip bonding, where delays between sample preparation and cooldown require surface treatments that preserve low dielectric loss during extended exposure to ambient conditions. In this work, we investigate surface modification approaches for Al resonators subjected to a 24-hour delay prior to cryogenic measurement. Passivation using self-limiting oxygen and fluorine chemistries was evaluated utilizing different plasma processes. Remote oxygen plasma treatment reduced dielectric losses, in contrast to direct plasma, likely due to additional ashing of residual resist despite the formation of a thicker oxide layer on both Si and Al surfaces. A fluorine-based plasma process was developed that passivated the Al surface with fluorine for subsequent BOE treatment. However, increasing fluorine incorporation in the aluminum oxide correlated with higher loss, identifying fluorine as an unsuitable passivation material for Al resonators. Finally, selective oxide removal using HF vapor and phosphoric acid was assessed for surface preparation. HF vapor selectively etched SiO2 while preserving Al2O3, whereas phosphoric acid exhibited the opposite selectivity. Sequential application of both etches yielded dielectric losses as low as $δ_\mathrm{LP} = 5.2 \times 10^{-7}$ ($Q_\mathrm{i} \approx 1.9\,\mathrm{M}$) in the single photon regime, demonstrating a promising pathway for robust Al-based resonator fabrication.
Quantum computing for multidimensional option pricing: End-to-end pipeline
This paper develops a framework for pricing financial options on multiple assets by combining traditional financial modeling with quantum computing acceleration. The authors use quantum algorithms to speed up the complex calculations needed to price these financial derivatives, achieving significant computational improvements over classical methods.
Key Contributions
- Development of end-to-end quantum-accelerated option pricing pipeline
- Demonstration of 10-100x reduction in computational queries using Quantum Amplitude Estimation
- Integration of market-consistent risk modeling with quantum Monte Carlo methods
View Full Abstract
This work introduces an end-to-end framework for multi-asset option pricing that combines market-consistent risk-neutral density recovery with quantum-accelerated numerical integration. We first calibrate arbitrage-free marginal distributions from European option quotes using the Normal Inverse Gaussian (NIG) model, leveraging its analytical tractability and ability to capture skewness and fat tails. Marginals are coupled via a Gaussian copula to construct joint distributions. To address the computational bottleneck of the high-dimensional integration required to solve the option pricing formula, we employ Quantum Accelerated Monte Carlo (QAMC) techniques based on Quantum Amplitude Estimation (QAE), achieving quadratic convergence improvements over classical Monte Carlo (CMC) methods. Theoretical results establish accuracy bounds and query complexity for both marginal density estimation (via cosine-series expansions) and multidimensional pricing. Empirical tests on liquid equity entities (Credit Agricole, AXA, Michelin) confirm high calibration accuracy and demonstrate that QAMC requires 10-100 times fewer queries than classical methods for comparable precision. This study provides a practical route to integrate arbitrage-aware modelling with quantum computing, highlighting implications for scalability and future extensions to complex derivatives.
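For context, a classical Monte Carlo baseline for a two-asset basket option with a Gaussian copula is sketched below; lognormal marginals replace the paper's NIG calibration for simplicity, and all parameters are illustrative. The closing comment notes where amplitude estimation changes the error scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical Monte Carlo baseline for a two-asset basket call. Marginals are
# taken lognormal here (the paper calibrates NIG marginals); the Gaussian
# copula then amounts to correlating the underlying normals. Parameters are
# illustrative, not calibrated to market quotes.
S0 = np.array([100.0, 100.0]); sigma = np.array([0.2, 0.3])
r, T, K, corr = 0.02, 1.0, 105.0, 0.5

chol = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))

def price(n_paths):
    z = rng.standard_normal((n_paths, 2)) @ chol.T          # Gaussian copula
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST.mean(axis=1) - K, 0.0)
    disc = np.exp(-r * T)
    return disc * payoff.mean(), disc * payoff.std() / np.sqrt(n_paths)

for n in (10_000, 100_000, 1_000_000):
    p, err = price(n)
    print(f"paths={n:8d}  price={p:7.4f}  MC error~{err:.4f}")
# Classical MC error falls as 1/sqrt(N); amplitude-estimation-based QAMC
# targets a 1/N scaling in the number of oracle queries, which is the source
# of the 10-100x query reduction reported in the paper.
```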
Phase-Randomized Laser Pulse Generation at 10 GHz for Quantum Photonic Applications
This paper presents a method to generate laser pulses with truly random phases at very high rates (10 GHz) by using an external source of spontaneous emission to eliminate phase correlations between consecutive pulses that normally occur at high repetition rates.
Key Contributions
- Development of a method to overcome phase diffusion limitations in gain-switched laser diodes at high repetition rates
- Demonstration of phase-randomized pulse generation at 10 GHz using external spontaneous emission sources
View Full Abstract
Gain switching of laser diodes is a well-established technique for generating optical pulses with random phases, where the quantum randomness arises naturally from spontaneous emission. However, the maximum switching rate is limited by phase diffusion: at high repetition rates, residual photons in the cavity seed subsequent pulses, leading to phase correlations, which degrade randomness. We present a method to overcome this limitation by employing an external source of spontaneous emission in conjunction with the laser. Our results show that this approach effectively removes interpulse phase correlations and restores phase randomization at repetition rates as high as 10 GHz. This technique opens new opportunities for high-rate quantum key distribution and quantum random number generation.
Limitations for adaptive quantum state tomography in the presence of detector noise
This paper investigates how detector noise affects adaptive quantum state tomography, finding that any readout noise eliminates the theoretical quadratic advantage of adaptive measurement strategies, though practical benefits may still exist in well-calibrated experiments.
Key Contributions
- Proved that any nonzero readout noise eliminates the asymptotic quadratic scaling advantage of adaptive quantum state tomography
- Demonstrated through numerical simulations that adaptive strategies still provide constant-factor improvements in reconstruction accuracy for realistic noise levels
- Analyzed the impact of limited detector calibration on quantum state reconstruction bias and accuracy
View Full Abstract
Assumption-free reconstruction of quantum states from measurements is essential for benchmarking and certifying quantum devices, but it remains difficult due to the extensive measurement statistics and experimental resources it demands. An approach to alleviating these demands is provided by adaptive measurement strategies, which can yield up to a quadratic improvement in reconstruction accuracy for pure states by dynamically optimizing measurement settings during data acquisition. A key open question is whether these asymptotic advantages remain in realistic experiments, where readout is inevitably noisy. In this work, we analyze the impact of readout noise on adaptive quantum state tomography with readout-error mitigation, focusing on the challenging regime of reconstructing pure states using mixed-state estimators. Using analytical arguments based on Fisher information optimization and extensive numerical simulations using Bayesian inference, we show that any nonzero readout noise eliminates the asymptotic quadratic scaling advantage of adaptive strategies. We numerically investigate the behavior for finite measurement statistics for single- and two-qubit systems with exact readout-error mitigation and find a gradual transition from ideal to sub-optimal scaling. We furthermore investigate realistic scenarios where detector tomography is performed with a limited number of state copies for calibration, showing that insufficient detector characterization leads to estimator bias and limited reconstruction accuracy. Although our result imposes an upper bound on the reconstruction accuracy that can be achieved with adaptive strategies, we nevertheless observe numerically a constant-factor gain in reconstruction accuracy, which becomes larger as the readout noise decreases. This indicates potential practical benefits in using adaptive measurement strategies in well-calibrated experiments.
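The readout-error-mitigation step can be illustrated for a single qubit: calibrate a confusion matrix, then apply its inverse to the measured distribution. The noise levels below are assumptions, and the example also hints at why finite calibration data leads to bias and amplified statistical noise.

```python
import numpy as np

# Single-qubit readout-error mitigation: the measured distribution is
# p_meas = A @ p_true, where A is the detector's confusion matrix obtained
# from calibration; mitigation applies A^{-1}. Values are illustrative.
eps0, eps1 = 0.02, 0.05                 # P(read 1 | prepared 0), P(read 0 | prepared 1)
A = np.array([[1 - eps0, eps1],
              [eps0, 1 - eps1]])

p_true = np.array([0.9, 0.1])           # ideal outcome distribution
rng = np.random.default_rng(3)
shots = 5000
counts = rng.multinomial(shots, A @ p_true)
p_meas = counts / shots

p_mitigated = np.linalg.solve(A, p_meas)
print("measured :", p_meas)
print("mitigated:", p_mitigated)
# With finite shots, and with A itself estimated from limited calibration
# data, the inversion amplifies statistical noise and can bias the
# reconstruction, which is the regime analyzed in the paper.
```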
An SU(2n)-valued nonlinear Fourier transform
This paper develops a mathematical framework called a nonlinear Fourier transform that converts sequences of matrix data into special unitary group functions, with applications to quantum signal processing techniques used in quantum computing algorithms.
Key Contributions
- Development of SU(2n)-valued nonlinear Fourier transform with characterized image sets
- Connection to quantum signal processing over U(2n) and multivariate quantum signal processing
View Full Abstract
We define a nonlinear Fourier transform which maps sequences of contractive $n \times n$ matrices to $SU(2n)$-valued functions on the circle $\mathbb{T}$. We characterize the image of finitely supported sequences and square-summable sequences on the half-line, and construct an inverse for $SU(2n)$-valued functions whose diagonal $n \times n$ blocks are outer matrix functions. As an application, we relate this nonlinear Fourier transform with quantum signal processing over $U(2n)$ and multivariate quantum signal processing.
Cavity-Driven Multispectral Gain for High-Sensitivity NV Center Magnetometers
This paper demonstrates a highly sensitive magnetic field sensor using nitrogen-vacancy (NV) centers in diamond coupled to a dielectric cavity, achieving exceptional sensitivity of 12 pT/√Hz through cavity-enhanced multispectral detection. The researchers show how the cavity splits NV hyperfine levels and creates 'doubly dressed states' that maintain quantum coherence, projecting future sensitivities near fundamental noise limits.
Key Contributions
- Demonstrated 12 pT/√Hz magnetic field sensitivity with NV-cavity system and threefold gain from multispectral features
- Established frequency multiplexing paradigm for quantum metrology with projections toward 100 fT/√Hz sensitivity approaching Johnson-Nyquist limit
View Full Abstract
We report a cavity-enabled solid-state magnetometer based on an NV ensemble coupled with a dielectric cavity, achieving 12 pT/$\sqrt{\rm{Hz}}$ sensitivity and a nearly threefold gain from multispectral features. The features originate from cavity-induced splitting of the NV hyperfine levels and leverage robust quantum coherence in the doubly dressed states of the system to achieve high sensitivity. We project simulated near-term sensitivities approaching 100 fT/$\sqrt{\rm{Hz}}$, close to the Johnson-Nyquist limit. Our results establish frequency multiplexing as a new operational paradigm, offering a robust and scalable quantum resource for metrology under ambient conditions.
Quantum Monte Carlo Simulations for predicting electron-positron pair production via the linear Breit-Wheeler process
This paper demonstrates how quantum Monte Carlo simulations can predict electron-positron pair production when photon beams collide, showing that quantum computing methods can achieve high accuracy (up to 90%) on current quantum hardware for high-energy physics calculations.
Key Contributions
- Demonstration of quantum Monte Carlo integration for predicting electron-positron pair production via Breit-Wheeler process
- Implementation and validation on current quantum hardware achieving 90% accuracy
- Proposal for hybrid quantum-classical simulation integration pathways
View Full Abstract
Quantum computing (QC) has the potential to revolutionise the future of scientific simulations. To harness the capabilities that QC offers, we can integrate it into hybrid quantum-classical simulations, which can boost the capabilities of supercomputing by leveraging quantum modules that offer speedups over classical counterparts. One example is quantum Monte Carlo integration, which is theorised to achieve a quadratic speedup over classical Monte Carlo, making it suitable for high-energy physics, strong-field QED, and multiple scientific and industrial applications. In this paper, we demonstrate that quantum Monte Carlo can be used to predict the number of pairs created when two photon beams collide head-on, a problem relevant to high-energy physics and intense laser-matter interactions. The results from the quantum simulations demonstrate high accuracy relative to theoretical predictions. The accuracy of the simulations is only constrained by the approximations required to embed polynomials and to initialise the quantum state. We also demonstrate that our algorithm can be used in current quantum hardware, providing up to 90 % accuracy relative to theoretical predictions. Furthermore, we propose pathways towards integrations with classical simulation codes.
In-plane ferromagnetism-driven topological nodal-point superconductivity with tilted Weyl cones
This paper demonstrates a new type of topological superconducting phase created by combining a one-atom-thick ferromagnetic layer with a conventional superconductor. Using scanning tunneling microscopy, the researchers observed unique electronic properties that indicate the formation of tilted Weyl cones, which are exotic quantum states of matter.
Key Contributions
- Experimental demonstration of topological nodal-point superconducting phase in magnet-superconductor heterostructures
- Discovery of tilted Weyl cones in two-dimensional hybrid quantum materials using scanning tunneling spectroscopy
View Full Abstract
The potential application of topological superconductivity in quantum transport and quantum information has fueled an intense investigation of hybrid materials with emergent electronic properties, including magnet-superconductor heterostructures. Here, we report evidence of a topological nodal-point superconducting phase in a one-atom-thick in-plane ferromagnet in direct proximity to a conventional $s$-wave superconductor. Low-temperature scanning tunneling spectroscopy data reveal the presence of a double-peak low-energy feature in the local density of states of the hybrid system, which is rationalized via model calculations to be an emergent topological nodal-point superconducting phase with tilted Weyl cones. Our results further establish the combination of in-plane ferromagnetism and conventional superconductivity as a route to design two-dimensional topological quantum phases.
MPM-QIR: Measurement-Probability Matching for Quantum Image Representation and Compression via Variational Quantum Circuit
This paper presents MPM-QIR, a method that uses variational quantum circuits to compress and represent classical images by matching quantum measurement probabilities to pixel intensities. The approach achieves good image reconstruction quality while using fewer parameters than traditional methods across standard image datasets like MNIST and CIFAR-10.
Key Contributions
- Novel variational quantum circuit framework for classical image compression that matches measurement probabilities to pixel intensities
- Bidirectional convolutional quantum architecture that captures global image correlations with improved parameter efficiency
- Demonstration of improved parameter-compression ratios while maintaining reconstruction quality above 30 dB PSNR across multiple benchmark datasets
View Full Abstract
We present MPM-QIR, a variational-quantum-circuit (VQC) framework for classical image compression and representation whose core objective is to achieve equal or better reconstruction quality at a lower Parameter Compression Ratio (PCR). The method aligns a generative VQC's measurement-probability distribution with normalized pixel intensities and learns positional information implicitly via an ordered mapping to the flattened pixel array, thus eliminating explicit coordinate qubits and tying compression efficiency directly to circuit (ansatz) complexity. A bidirectional convolutional architecture induces long-range entanglement at shallow depth, capturing global image correlations with fewer parameters. Under a unified protocol, the approach attains PSNR $\geq$ 30 dB with lower PCR across benchmarks: MNIST 31.80 dB / SSIM 0.81 at PCR 0.69, Fashion-MNIST 31.30 dB / 0.91 at PCR 0.83, and CIFAR-10 31.56 dB / 0.97 at PCR 0.84. Overall, this compression-first design improves parameter efficiency, validates VQCs as direct and effective generative models for classical image compression, and is amenable to two-stage pipelines with classical codecs and to extensions beyond 2D imagery.
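The core objective can be sketched without a quantum simulator: normalize the pixel intensities into a probability vector and minimize a divergence to a parameterized "measurement" distribution. Here the amplitudes themselves are the parameters, which is only a stand-in for a real VQC and its gradients; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a small grayscale "image" (synthetic here), flattened and
# normalized so the pixel intensities form a probability distribution
# over the 2^n computational basis states.
image = rng.uniform(0.0, 1.0, size=(4, 4))
target = image.flatten()
target = target / target.sum()

# Stand-in for the VQC: a vector of real amplitudes whose squared,
# normalized entries play the role of measurement probabilities. A real
# implementation would obtain these from a parameterized circuit.
params = rng.standard_normal(target.size)

def probs(theta):
    amp = theta / np.linalg.norm(theta)
    return amp ** 2

def loss(theta):
    # Cross-entropy between the pixel distribution and the "measurement"
    # distribution: the measurement-probability matching objective.
    return -np.sum(target * np.log(probs(theta) + 1e-12))

# Crude finite-difference gradient descent (a real VQC would use analytic
# or parameter-shift gradients instead).
lr, fd = 0.05, 1e-5
basis = np.eye(params.size)
for _ in range(400):
    grad = np.array([(loss(params + fd * e) - loss(params - fd * e)) / (2 * fd)
                     for e in basis])
    params -= lr * grad

print("final cross-entropy:", loss(params))
print("reconstruction MSE :", np.mean((probs(params) - target) ** 2))
```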
Phases of the $q$-deformed $\mathrm{SU}(N)$ Yang-Mills theory at large $N$
This paper studies a quantum field theory called q-deformed SU(N) Yang-Mills theory using mathematical techniques to understand different phases of matter, particularly focusing on confinement and topological order properties. The authors use variational mean-field analysis to map out the phase structure and find that topologically ordered phases persist even when the number of colors (N) becomes very large.
Key Contributions
- Determination of large-N phase structure for q-deformed SU(N) Yang-Mills theory using variational mean-field analysis
- Discovery that topologically ordered phases remain robust at large N under appropriate parameter scalings
View Full Abstract
We investigate the $(2+1)$-dimensional $q$-deformed $\mathrm{SU}(N)_k$ Yang-Mills theory in the lattice Hamiltonian formalism, which is characterized by three parameters: the number of colors $N$, the coupling constant $g$, and the level $k$. By treating these as tunable parameters, we explore how key properties of the theory, such as confinement and topological order, emerge in different regimes. Employing a variational mean-field analysis that interpolates between the strong- and weak-coupling regimes, we determine the large-$N$ phase structure in terms of the 't Hooft coupling $λ_\mathrm{tH}=g^2N$ and the ratio $k/N$. We find that the topologically ordered phase remains robust at large $N$ under appropriate scalings of these parameters. This result indicates that the continuum limit of large-$N$ gauge theory may be more intricate than naively expected, and motivates studies beyond the mean-field theory, both to achieve a further understanding of confinement in gauge theories and to guide quantum simulations of large-$N$ gauge theories.
Iterative Matrix Product State Simulation for Scalable Grover's Algorithm
This paper develops an efficient classical simulation method for Grover's quantum search algorithm using matrix product states (MPS), achieving 15x speedup over conventional simulation approaches and enabling simulation of larger quantum circuits up to 29 qubits.
Key Contributions
- Development of iterative MPS framework for efficient Grover's algorithm simulation
- Demonstration of 15x speedup over non-iterative approaches and scalability to 29 qubits
- Discovery that single-shot measurements provide reliable results for large qubit numbers, reducing measurement overhead
View Full Abstract
Grover's algorithm is a cornerstone of quantum search, offering a quadratic speedup for unstructured problems. However, limited qubit counts and noise in today's noisy intermediate-scale quantum (NISQ) devices hinder large-scale hardware validation, making efficient classical simulation essential for algorithm development and hardware assessment. We present an iterative Grover simulation framework based on matrix product states (MPS) to efficiently simulate large-scale Grover's algorithm. Within the NVIDIA CUDA-Q environment, we compare iterative and common (non-iterative) Grover's circuits across statevector and MPS backends. On the MPS backend at 29 qubits, the iterative Grover's circuit runs about 15x faster than the common (non-iterative) Grover's circuit, and about 3-4x faster than the statevector backend. In sampling experiments, Grover's circuits demonstrate strong low-shot stability: as the qubit number increases beyond 13, a single-shot measurement still closely mirrors the results from 4,096 shots, indicating reliable estimates with minimal sampling and significant potential to cut measurement costs. Overall, an iterative MPS design delivers speed and scalability for Grover's circuit simulation, enabling practical large-scale implementations.
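For reference, plain Grover search is easy to simulate at small scale with a dense statevector; the sketch below (ordinary NumPy, not the authors' MPS or CUDA-Q framework) shows the roughly (π/4)√N iteration count and the near-unit success probability.

```python
import numpy as np

def grover_statevector(n_qubits, marked):
    """Plain statevector simulation of Grover search for one marked item."""
    N = 2 ** n_qubits
    psi = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        psi[marked] *= -1                        # oracle: phase-flip the target
        psi = 2 * psi.mean() - psi               # diffusion: inversion about mean
    return psi, iters

for n in (8, 12, 16):
    psi, iters = grover_statevector(n, marked=3)
    print(f"{n:2d} qubits: {iters:4d} iterations, "
          f"success probability {abs(psi[3])**2:.4f}")
```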
Finite-size security of QKD: comparison of three proof techniques
This paper compares three mathematical proof techniques for analyzing the security of quantum key distribution (QKD) protocols when using finite-size data blocks, focusing on how well each method performs with practically relevant block sizes. The study finds that different techniques work better in different scenarios, with implications for improving secure quantum communication systems.
Key Contributions
- Comparative analysis of three finite-size security proof techniques (EUR, AEP, FME) for QKD protocols
- Demonstration that EUR-based bounds provide best performance across parameter ranges while AEP becomes pessimistic at small block sizes
- Recommendation for using FME-type analyses for continuous-variable protocols where EUR-based bounds are unavailable
View Full Abstract
We compare three proof techniques for composable finite-size security of quantum key distribution under collective attacks, with emphasis on how the resulting secret-key rates behave at practically relevant block lengths. As a benchmark, we consider the BB84 protocol and evaluate finite-size key-rate estimates obtained from entropic uncertainty relations (EUR), from the asymptotic equipartition property (AEP), and from a direct finite-block analysis based on the conditional min-entropy, which we refer to as the finite-size min-entropy (FME) approach. For BB84 we show that the EUR-based bound provides the most favorable performance across the considered parameter range, while the AEP bound is asymptotically tight but can become overly pessimistic at moderate and small block sizes, where it may fail to certify a positive key. The FME approach remains effective in this small-block regime, yielding nonzero rates in situations where the AEP estimate vanishes, although it is not asymptotically optimal for BB84. These results motivate the use of FME-type analyses for continuous-variable protocols in settings where tight EUR-based bounds are unavailable, notably for coherent-state schemes where current finite-size analyses typically rely on AEP-style corrections.
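The generic mechanism behind finite-size penalties can be sketched with a schematic BB84-style rate in which the observed error rate is inflated by a statistical fluctuation term of order sqrt(ln(1/ε)/n); this is a simplified illustration, not any of the three bounds compared in the paper.

```python
import numpy as np

# Schematic finite-size BB84 key-rate estimate (not the exact EUR/AEP/FME
# bounds of the paper): asymptotic rate 1 - h(Q) - f*h(Q) with the error
# rate inflated by a fluctuation term mu ~ sqrt(ln(1/eps)/(2n)), which is
# the generic mechanism suppressing keys at small block sizes.
def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

Q, f_ec, eps = 0.02, 1.16, 1e-10       # QBER, EC inefficiency, security parameter (assumed)
for n in (10**2, 10**3, 10**4, 10**6):
    mu = np.sqrt(np.log(1 / eps) / (2 * n))
    rate = max(0.0, 1 - h(min(Q + mu, 0.5)) - f_ec * h(Q))
    print(f"block size {n:8d}: fluctuation mu = {mu:.4f}, key rate ~ {rate:.4f}")
```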
Topological Sensing in the Dynamics of Quantum Walks with Defects
This paper develops a quantum sensing protocol that uses quantum walks with defects to precisely measure parameters, leveraging topological properties to achieve high precision approaching the Heisenberg limit while maintaining robustness against noise.
Key Contributions
- Novel topological quantum sensing protocol using quantum walks with defects
- Demonstration of Heisenberg-limit precision in parameter estimation
- Proof of robustness against disorder through Bayesian estimation methods
View Full Abstract
Topological quantum sensing leverages unique topological features to suppress noise and improve the precision of parameter estimation, emerging as a promising tool in both fundamental research and practical application. In this Letter, we propose a sensing protocol that exploits the dynamics of topological quantum walks incorporating localized defects. Unlike conventional schemes that rely on topological protection to suppress disorder and defects, our protocol harnesses the evolution time as a resource to enable precise estimation of the defect parameter. By utilizing topologically nontrivial properties of the quantum walks, the sensing precision can approach the Heisenberg limit. We further demonstrate the performance and robustness of the protocol through Bayesian estimation. Our results show that this approach maintains high precision over a broad range of parameters and exhibits strong robustness against disorder, offering a practical pathway for topologically enhanced quantum metrology.
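A coined quantum walk with a localized phase defect is easy to prototype; the sketch below (a generic coined walk with an assumed defect phase, not the paper's specific protocol) shows how the probability of staying near the defect depends on the defect parameter to be estimated.

```python
import numpy as np

# Discrete-time coined quantum walk on a line with a phase defect at the
# origin; the probability of remaining near the defect depends on the
# defect phase phi, the kind of parameter a walk-based sensor estimates.
def walk(phi, steps=60, size=201):
    c = size // 2
    psi = np.zeros((size, 2), dtype=complex)
    psi[c, 0] = 1 / np.sqrt(2); psi[c, 1] = 1j / np.sqrt(2)   # symmetric coin state
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                      # coin toss on the internal state
        psi[c] *= np.exp(1j * phi)           # localized phase defect
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]         # coin |0> component moves left
        shifted[1:, 1] = psi[:-1, 1]         # coin |1> component moves right
        psi = shifted
    return np.sum(np.abs(psi[c - 2:c + 3]) ** 2)   # probability near the defect

for phi in (0.0, 0.5, 1.0, 1.5):
    print(f"phi = {phi:.1f}: probability near defect = {walk(phi):.4f}")
```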
Detection-loophole-free nonlocality in the simplest scenario
This paper demonstrates a simplified quantum steering experiment that closes the detection loophole with minimal complexity, requiring only one detector on the untrusted side with 51.6% efficiency. The work establishes fundamental efficiency thresholds for proving quantum nonlocality using two-qubit entangled states in the simplest possible experimental setup.
Key Contributions
- Identified fundamental efficiency thresholds for quantum steering with minimal detector requirements
- Demonstrated detection-loophole-free quantum nonlocality with unprecedented experimental simplicity using only 51.6% detector efficiency
View Full Abstract
Loophole-free quantum nonlocality often demands experiments with high complexity (defined by all parties' settings and outcomes) and multiple efficient detectors. Here, we identify the fundamental efficiency and complexity thresholds for quantum steering using two-qubit entangled states. Remarkably, it requires only one photon detector on the untrusted side, with efficiency $ε> 1/X$, where $X \geq 2$ is the number of settings on that side. This threshold applies to all pure entangled states, in contrast to analogous Bell-nonlocality tests, which require almost unentangled states to be loss-tolerant. We confirm these predictions in a minimal-complexity ($X = 2$ for the untrusted party and a single three-outcome measurement for the trusted party), detection-loophole-free photonic experiment with $ε= (51.6 \pm 0.4)\% $.
Quantum vs. Classical Machine Learning: A Benchmark Study for Financial Prediction
This paper compares quantum machine learning (QML) models against classical machine learning methods for financial prediction tasks, including stock return prediction, live trading simulation, and volatility forecasting. The study finds that quantum approaches can outperform classical methods in specific scenarios when the quantum circuit design aligns well with the data structure.
Key Contributions
- Development of a standardized benchmarking framework for comparing QML and classical ML methods in finance
- Demonstration that hybrid quantum neural networks and quantum LSTMs can achieve performance gains over classical counterparts in specific financial prediction tasks
View Full Abstract
In this paper, we present a reproducible benchmarking framework that systematically compares QML models with architecture-matched classical counterparts across three financial tasks: (i) directional return prediction on U.S. and Turkish equities, (ii) live-trading simulation with Quantum LSTMs versus classical LSTMs on the S&P 500, and (iii) realized volatility forecasting using Quantum Support Vector Regression. By standardizing data splits, features, and evaluation metrics, our study provides a fair assessment of when current-generation QML models can match or exceed classical methods. Our results reveal that quantum approaches show performance gains when data structure and circuit design are well aligned. In directional classification, hybrid quantum neural networks surpass the parameter-matched ANN by +3.8 AUC and +3.4 accuracy points on AAPL stock and by +4.9 AUC and +3.6 accuracy points on Turkish stock KCHOL. In live trading, the QLSTM achieves higher risk-adjusted returns in two of four S&P 500 regimes. For volatility forecasting, an angle-encoded QSVR attains the lowest QLIKE on KCHOL and remains within $\sim$0.02-0.04 QLIKE of the best classical kernels on S&P 500 and AAPL. Our benchmarking framework clearly identifies the scenarios where current QML architectures offer tangible improvements and where established classical methods continue to dominate.
Reshaping and quantifying inter- and intramolecular exchange in signal amplification by reversible exchange of pyruvate
This paper investigates SABRE, a technique that uses parahydrogen to enhance NMR signals by transferring nuclear spin polarization to target molecules like pyruvate. The study reveals new insights into the molecular binding mechanisms and exchange processes that occur during this hyperpolarization method.
Key Contributions
- Discovery of intramolecular hydrogen exchange occurring faster than substrate loss in SABRE complexes
- Identification of a novel stable H2-IrImes-DMSO2-pyruvate complex and the crucial role of counterions in binding
- Development of parahydrogen-enhanced spin-selective NMR methods combined with exchange-model fitting for studying molecular dynamics
View Full Abstract
Signal amplification by reversible exchange (SABRE) is a nuclear spin hyperpolarization technique in which the transient interaction of parahydrogen (pH2) and a target substrate with an iridium complex leads to polarization transfer to the substrate. Here, we use a parahydrogen-enhanced, spin-selective NMR method to investigate pyruvate binding, which is combined with exchange-model fitting and DFT calculations. Our study reveals several key findings that reshape the current understanding of SABRE: (a) intramolecular hydrogen exchange of the hydrides, occurring faster than pyruvate or H2 loss; (b) the discovery of a novel stable H2-IrImes-DMSO2-pyruvate complex; and (c) the crucial role of counterions, here Na+, in Ir-pyruvate binding. Previously unknown insights into complex kinetics and distributions as a function of temperature, [DMSO], [pyruvate], and hydrogen pressure are presented. The methods demonstrated here, exemplified by SABRE, provide a framework that is expected to guide future research in the field.
Computational hardness of estimating quantum entropies via binary entropy bounds
This paper studies the computational difficulty of estimating quantum entropy measures (Rényi and Tsallis entropies) for quantum states, proving that these problems are as hard as the hardest problems a quantum computer can solve efficiently. The authors establish new complexity results by developing novel mathematical inequalities that relate different types of binary entropies.
Key Contributions
- Proves BQP-hardness for rank-2 variants of quantum Rényi and Tsallis entropy approximation problems for all positive real orders
- Establishes BQP-completeness results for low-rank versions of these entropy estimation problems across different parameter ranges
- Develops new inequalities relating α-Rényi and q-Tsallis binary entropies of different orders as the basis for novel reduction techniques
View Full Abstract
We investigate the computational hardness of estimating the quantum $α$-Rényi entropy ${\rm S}^{\tt R}_α(ρ) = \frac{\ln {\rm Tr}(ρ^α)}{1-α}$ and the quantum $q$-Tsallis entropy ${\rm S}^{\tt T}_q(ρ) = \frac{1-{\rm Tr}(ρ^q)}{q-1}$, both converging to the von Neumann entropy as the order approaches $1$. The promise problems Quantum $α$-Rényi Entropy Approximation (RényiQEA$_α$) and Quantum $q$-Tsallis Entropy Approximation (TsallisQEA$_q$) ask whether ${\rm S}^{\tt R}_α(ρ)$ or ${\rm S}^{\tt T}_q(ρ)$, respectively, is at least $τ_{\tt Y}$ or at most $τ_{\tt N}$, where $τ_{\tt Y} - τ_{\tt N}$ is typically a positive constant. Previous hardness results cover only the von Neumann entropy (order $1$) and some cases of the quantum $q$-Tsallis entropy, while existing approaches do not readily extend to other orders. We establish that for all positive real orders, the rank-$2$ variants Rank2RényiQEA$_α$ and Rank2TsallisQEA$_q$ are ${\sf BQP}$-hard. Combined with prior (rank-dependent) quantum query algorithms in Wang, Guan, Liu, Zhang, and Ying (TIT 2024), Wang, Zhang, and Li (TIT 2024), and Liu and Wang (SODA 2025), our results imply:
- For all real orders $α > 0$ and $0 < q \leq 1$, LowRankRényiQEA$_α$ and LowRankTsallisQEA$_q$ are ${\sf BQP}$-complete, where both are restricted versions of RényiQEA$_α$ and TsallisQEA$_q$ with $ρ$ of polynomial rank.
- For all real orders $q > 1$, TsallisQEA$_q$ is ${\sf BQP}$-complete.
Our hardness results stem from reductions based on new inequalities relating the $α$-Rényi or $q$-Tsallis binary entropies of different orders, where the reductions differ substantially from previous approaches, and the inequalities are also of independent interest.
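The quantities themselves are straightforward to evaluate once the spectrum is known; the sketch below computes the Rényi and Tsallis entropies of a random rank-2 state (rank 2 being the setting of the hardness results) for a few orders.

```python
import numpy as np

# Direct evaluation of the quantum Renyi and Tsallis entropies from the
# spectrum of a density matrix (here a random low-rank mixed state), the
# quantities whose approximation problems are classified in the paper.
rng = np.random.default_rng(5)

def random_density_matrix(dim, rank):
    A = rng.standard_normal((dim, rank)) + 1j * rng.standard_normal((dim, rank))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def spectrum(rho):
    lam = np.linalg.eigvalsh(rho)
    return lam[lam > 1e-12]

def renyi(rho, alpha):
    lam = spectrum(rho)
    return np.log(np.sum(lam ** alpha)) / (1 - alpha)

def tsallis(rho, q):
    lam = spectrum(rho)
    return (1 - np.sum(lam ** q)) / (q - 1)

def von_neumann(rho):
    lam = spectrum(rho)
    return -np.sum(lam * np.log(lam))

rho = random_density_matrix(dim=16, rank=2)   # rank-2 state, as in the hardness results
for order in (0.5, 0.99, 2.0):
    print(f"order {order}: Renyi = {renyi(rho, order):.4f}, Tsallis = {tsallis(rho, order):.4f}")
print("von Neumann (order -> 1 limit):", von_neumann(rho))
```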
Scalar vacuum densities on Beltrami pseudosphere
This paper studies quantum field vacuum properties in curved spacetime, specifically examining how spatial curvature and topology affect vacuum expectation values of scalar fields on a Beltrami pseudosphere geometry. The work analyzes both finite and divergent contributions to vacuum energy and stress under different compactification conditions.
Key Contributions
- Analysis of vacuum expectation values for scalar fields in curved spacetime with non-trivial topology
- Characterization of finite topological contributions versus divergent geometric contributions to vacuum properties
View Full Abstract
We investigate the combined effects of spatial curvature and topology on the properties of the vacuum state for a charged scalar field localized on the (2+1)-dimensional Beltrami pseudosphere, assuming that the field obeys a quasiperiodicity condition with constant phase. As important local characteristics of the vacuum state, the vacuum expectation values (VEVs) of the field squared and of the energy-momentum tensor are evaluated. The contributions to the VEVs coming from the geometry with an uncompactified azimuthal coordinate are divergent, whereas the compact counterparts are finite and are analysed both numerically and asymptotically. For small values of the proper radius of the compactified dimension, the leading terms of the topological contributions are independent of the field mass and curvature coupling parameter and grow as a power law. In the opposite limit, the VEVs generically decay as a power law. In the special case of a conformally coupled massless field, the behavior is different: unlike the VEV of the field squared and the vacuum energy density, the radial and azimuthal stresses grow in absolute value. As a consequence, the effects of nontrivial topology are strong for the stresses in this case at small values of the radial coordinate.
Interference-Induced Suppression of Doublon Transport and Prethermalization in the Extended Bose-Hubbard Model
This paper studies how to control the movement of particle pairs (doublons) in quantum lattice systems by using interference effects to suppress their transport. The researchers show how adding specific interaction terms can dramatically slow down doublon mobility and create long-lived ordered states that resist thermalization.
Key Contributions
- Development of disorder-free mechanism to suppress doublon transport using destructive interference from nearest-neighbor pair-hopping terms
- Analytical derivation of optimal conditions using third-order Schrieffer-Wolff transformation with lattice geometry corrections
- Demonstration of prethermal plateau formation in many-body systems with separated timescales
View Full Abstract
The coherent mobility of doublons, arising from second-order virtual dissociation-recombination processes, fundamentally limits their use as information carriers in the strongly interacting Bose-Hubbard model. We propose a disorder-free suppression mechanism by introducing an optimized nearest-neighbor pair-hopping term that destructively interferes with the dominant virtual hopping channel. Using the third-order Schrieffer-Wolff transformation, we derive an analytical optimal condition that accounts for lattice geometry corrections. Exact numerical simulations demonstrate that this optimized scheme achieves near-complete dynamical arrest and entanglement preservation in one-dimensional chains, while in two-dimensional square lattices, it significantly suppresses ballistic spreading yet permits a slow residual expansion. Furthermore, in the many-body regime, finite-size scaling analysis identifies the observed long-lived density-wave order as a prethermal plateau emerging from the dramatic separation of microscopic and thermalization timescales.
Double interval entanglement in quasiparticle excited states
This paper studies how quantum entanglement behaves between separated regions when quasiparticles (particle-like excitations) are present in quantum systems. The authors develop computational methods to measure entanglement and discover that at large momentum differences, the entanglement from different quasiparticles simply adds together, with quantum systems approaching classical behavior in this limit.
Key Contributions
- Development of an efficient algorithm for calculating entanglement measures from density matrices in non-orthonormal basis
- Discovery of universal additivity property for entanglement measures at large momentum differences
- Demonstration that bosonic and fermionic systems converge to classical behavior when momentum differences are large
View Full Abstract
We investigate double-interval entanglement measures, specifically reflected entropy, mutual information, and logarithmic negativity, in quasiparticle excited states for classical, bosonic, and fermionic systems. We develop an algorithm that efficiently calculates these measures from density matrices expressed in a non-orthonormal basis, enabling straightforward numerical implementation. We find a universal additivity property that emerges at large momentum differences, where the entanglement measures for states with distinct quasiparticle sets equal the sum of their individual contributions. The classical limit arises as a special case of this additivity, with both bosonic and fermionic results converging to classical behavior when all momentum differences are large.
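For orientation, one of the measures studied here, the mutual information, is I(A:B) = S_A + S_B - S_AB, computable from reduced density matrices. The sketch below is a generic baseline on a 4-qubit GHZ state with two single-qubit "intervals"; it assumes nothing from the paper, whose algorithm additionally handles the non-orthonormal quasiparticle basis and the other measures (reflected entropy, logarithmic negativity).

```python
import numpy as np

def entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

def reduced(psi, keep, n):
    """Reduced density matrix of the qubits in `keep` for an n-qubit pure state."""
    psi = psi.reshape([2] * n)
    traced = [i for i in range(n) if i not in keep]
    psi = np.transpose(psi, list(keep) + traced).reshape(2 ** len(keep), -1)
    return psi @ psi.conj().T

# 4-qubit GHZ state; disjoint regions A = qubit 0, B = qubit 2
n = 4
psi = np.zeros(2 ** n); psi[0] = psi[-1] = 1 / np.sqrt(2)
SA = entropy(reduced(psi, [0], n))
SB = entropy(reduced(psi, [2], n))
SAB = entropy(reduced(psi, [0, 2], n))
print(SA + SB - SAB)   # ~0.693 = ln 2: mutual information I(A:B) for GHZ
```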
Koopman Nonlinear Non-Hermitian Skin Effect
This paper extends the concept of non-Hermitian skin effects (where eigenstates localize at the boundaries) to nonlinear systems by using the Koopman operator framework. Instead of looking at physical states, the authors characterize localization through Koopman eigenfunctions in a higher-dimensional space of observables, revealing boundary effects unique to nonlinear non-Hermitian systems.
Key Contributions
- Introduces Koopman operator framework to characterize skin effects in nonlinear non-Hermitian systems where traditional eigenstate analysis fails
- Demonstrates that nonlinear skin effects manifest as localization in higher-order observable spaces rather than physical states, with distinct dynamical signatures
View Full Abstract
Non-Hermitian skin effects are conventionally manifested as boundary localization of eigenstates in linear systems. In nonlinear settings, however, where eigenstates are no longer well defined, it becomes unclear how skin effects should be faithfully characterized. Here, we propose a Koopman-based characterization of nonlinear skin effects, in which localization is defined in terms of Koopman eigenfunctions in a lifted observable space, rather than physical states. Using a minimal nonlinear extension of the Hatano-Nelson model, we show that dominant Koopman eigenfunctions localize sharply on higher-order observables, in stark contrast to linear skin effects confined to linear observables. This lifted-space localization governs the sensitivity to boundary amplitude perturbations, providing a distinct dynamical signature of the nonlinear skin effect. Our results establish the Koopman framework as a natural setting in which skin effects unique to nonlinear non-Hermitian systems can be identified.
Transmutation based Quantum Simulation for Non-unitary Dynamics
This paper presents a quantum algorithm for simulating non-unitary quantum dynamics, specifically diffusion processes described by positive semidefinite operators. The method uses the Kannai transform to represent diffusion processes as weighted combinations of unitary operations, achieving improved computational complexity compared to existing approaches.
Key Contributions
- Novel quantum algorithm for simulating non-unitary dissipative dynamics using the Kannai transform
- Improved query complexity scaling for diffusion equation simulation and quantum linear system solving
View Full Abstract
We present a quantum algorithm for simulating dissipative diffusion dynamics generated by positive semidefinite operators of the form $A=L^\dagger L$, a structure that arises naturally in standard discretizations of elliptic operators. Our main tool is the Kannai transform, which represents the diffusion semigroup $e^{-TA}$ as a Gaussian-weighted superposition of unitary wave propagators. This representation leads to a linear-combination-of-unitaries implementation with a Gaussian tail and yields query complexity $\tilde{\mathcal{O}}(\sqrt{\|A\| T \log(1/\varepsilon)})$, up to standard dependence on state-preparation and output norms, improving the scaling in $\|A\|, T$ and $\varepsilon$ compared with generic Hamiltonian-simulation-based methods. We instantiate the method for the heat equation and biharmonic diffusion under non-periodic physical boundary conditions, and we further use it as a subroutine for constant-coefficient linear parabolic surrogates arising in entropy-penalization schemes for viscous Hamilton--Jacobi equations. In the long-time regime, the same framework yields a structured quantum linear solver for $A\mathbf{x}=\mathbf{b}$ with $A=L^\dagger L$, achieving $\tilde{\mathcal{O}}(κ^{3/2}\log^2(1/\varepsilon))$ queries and improving the condition-number dependence over standard quantum linear-system algorithms in this factorized setting.
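The core representation here writes the diffusion semigroup as a Gaussian-weighted superposition of unitaries. Below is a minimal numerical check of the underlying Gaussian subordination identity $e^{-TA} = (4\pi T)^{-1/2}\int e^{-s^2/(4T)}\, e^{is\sqrt{A}}\, ds$ for $A \succeq 0$; this is only the classical identity behind such linear-combination-of-unitaries constructions, not the paper's quantum algorithm or its boundary-condition handling.

```python
import numpy as np
from scipy.linalg import expm, sqrtm

rng = np.random.default_rng(0)
L = rng.normal(size=(4, 4))
A = L.T @ L            # positive semidefinite, A = L^T L
B = sqrtm(A).real      # A = B^2, so exp(i s B) is unitary
T = 0.3

# exp(-T A) = (4*pi*T)^(-1/2) * integral of exp(-s^2/(4T)) exp(i s B) ds
s = np.linspace(-12, 12, 2401)
weights = np.exp(-s**2 / (4 * T)) / np.sqrt(4 * np.pi * T)
approx = sum(w * expm(1j * si * B) for si, w in zip(s, weights)) * (s[1] - s[0])

print(np.max(np.abs(approx - expm(-T * A))))   # close to zero: the weighted unitaries reproduce exp(-T A)
```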
Unitary Transformation of Two-Dimensional Spin-Orbit Coupled Models
This paper demonstrates that different spin-orbit coupling models (Rashba, Dresselhaus, and Weyl) used in condensed matter physics can be mathematically connected through unitary transformations, revealing they are not separate frameworks but unified descriptions of the same physics. The authors introduce a new combined model that could enable better control of spin textures in materials.
Key Contributions
- Demonstration of unitary transformations connecting Rashba, Dresselhaus, and Weyl spin-orbit coupling models
- Introduction of unified MKM Hamiltonian model combining foundational spintronic models with relaxed constraints
View Full Abstract
The Rashba, Dresselhaus, and Weyl Hamiltonians form a foundational framework for modeling spin-orbit interactions across condensed matter systems. Although they describe distinct material classes and produce seemingly different spin textures, they are conventionally treated as separate, unrelated theoretical frameworks. Here, this work demonstrates that the linear 2D Rashba and Weyl models are connected by a specific unitary transformation that maps one Hamiltonian exactly onto the other. The same unitary can be applied to map the linear Dresselhaus-1 model onto the Dresselhaus-2 models and vice versa. Such hidden correspondence establishes a unified theoretical foundation for spin-orbit interactions, deepening our conceptual understanding of spin-orbit coupling and opening new avenues for exploring complex spin textures. To illustrate the application, this work introduces a unique, improved, and more realistic model Hamiltonian H_MKM combining all known foundational spintronic models, where the stringent condition of equal spin-orbit coupling strength of Rashba and Dresselhaus may not be required to observe persistent spin texture under MKM transformation.
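The abstract does not spell out the unitary in question, but for one common convention of the linear 2D Rashba term $α(k_y σ_x - k_x σ_y)$ and the 2D Weyl term $α(k_x σ_x + k_y σ_y)$, a spin rotation by $π/2$ about $z$ maps one onto the other. The snippet below numerically checks that assumed correspondence; it is not a reproduction of the paper's construction.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

alpha, kx, ky = 0.7, 0.3, -1.1               # arbitrary test values
H_rashba = alpha * (ky * sx - kx * sy)        # linear 2D Rashba form (assumed convention)
H_weyl   = alpha * (kx * sx + ky * sy)        # linear 2D Weyl form (assumed convention)

U = expm(-1j * (np.pi / 4) * sz)              # spin rotation by pi/2 about z
print(np.allclose(U @ H_rashba @ U.conj().T, H_weyl))   # True
```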
Local Scale Invariance in Quantum Theory: A Non-Hermitian Pilot-Wave Formulation
This paper develops a new formulation of quantum mechanics that combines Weyl's local scale invariance with pilot-wave theory by complexifying electromagnetic gauge coupling. The authors modify the probability density from |ψ|² to a trajectory-dependent scale-invariant form and apply this framework to various quantum equations including Schrödinger, Pauli, and Dirac.
Key Contributions
- Development of non-Hermitian pilot-wave formulation incorporating local scale invariance
- Modification of probability density to trajectory-dependent scale-invariant form
- Extension of the framework to multiple quantum equations and quantum field theory
View Full Abstract
We show that Weyl's abandoned idea of local scale invariance has a natural realization at the quantum level in pilot-wave (deBroglie-Bohm) theory. We obtain the Weyl covariant derivative by complexifying the electromagnetic gauge coupling parameter. The resultant non-hermiticity has a natural interpretation in terms of local scale invariance of the quantum state in pilot-wave theory. The conserved current density is modified from $|ψ|^2$ to the local scale invariant, trajectory-dependent ratio $|ψ|^2/ \mathbf{1}^2[\mathcal{C}]$, where $\mathbf 1[\mathcal C]$ is a scale factor that depends on the pilot-wave trajectory $\mathcal C$ in configuration space. Our approach is general, and we implement it for the Schrödinger, Pauli, and Dirac equations coupled to an external electromagnetic field. We also implement it in quantum field theory for the case of a quantized axion field interacting with a quantized electromagnetic field. We discuss the equilibrium probability density and show that the corresponding trajectories are unique.
Tailoring Dynamical Quantum Phase Transitions via Double-Mode Squeezing Manipulation
This paper develops a method to control quantum phase transitions in many-body systems by applying specific quantum squeezing operations to initial states. The researchers discover a universal regime where the system's behavior becomes independent of how the quantum system is driven, linked to maximum entanglement between particle modes.
Key Contributions
- Development of double-mode squeezing protocol to control dynamical quantum phase transitions
- Discovery of universal DQPT behavior at specific squeezing strength r=π/4 with path-independent dynamics
- Establishment of direct connection between entanglement saturation and universal nonanalytic quantum dynamics
View Full Abstract
We propose a protocol to tailor dynamical quantum phase transitions (DQPTs) by double-mode squeezing onto the initial state in the XY chain. The effect of squeezing depends critically on the system's symmetry and parameters. When the squeezing operator breaks particle-hole symmetry (PHS), DQPTs become highly tunable, allowing one to either induce transitions within a single phase or suppress them. Remarkably, when PHS is preserved and the squeezing strength reaches $r=π/4$, a universal class of DQPTs emerges, independent of the quench path. This universality is characterized by two key features: (i) the collapse of all Fisher zeros onto the real-time axis, and (ii) the saturation of intermode entanglement to its maximum in each $(k,-k)$ modes. Moreover, the critical momenta governing the DQPTs coincide exactly with the modes attaining the maximal entanglement. At this universal point, the dynamical phase vanishes, leading to a purely geometric evolution marked by $π$-jumps in the Pancharatnam geometric phase. Our work establishes initial-state squeezing as a versatile tool for tailoring far-from-equilibrium criticality and reveals a direct link between entanglement saturation and universal nonanalytic dynamics.
Many-body Quantum Score: a scalable benchmark for digital and analog quantum processors and first test on a commercial neutral atom device
This paper introduces the Many-body Quantum Score (MBQS), a new benchmarking protocol that measures how well quantum processors can simulate many-body quantum physics by testing their ability to reproduce correlation functions in the transverse-field Ising model. The authors demonstrate this benchmark on a commercial neutral atom quantum processor called Ruby.
Key Contributions
- Development of MBQS as a scalable benchmark protocol for evaluating quantum processing units on many-body physics simulations
- First experimental validation of the MBQS protocol on a commercial neutral atom quantum processor (Ruby by Pasqal)
View Full Abstract
We propose the Many-body Quantum Score (MBQS), a practical and scalable application-level benchmark protocol designed to evaluate the capabilities of quantum processing units (QPUs)--both gate-based and analog--for simulating many-body quantum dynamics. MBQS quantifies performance by identifying the maximum number of qubits with which a QPU can reliably reproduce correlation functions of the transverse-field Ising model following a specific quantum quench. This paper presents the MBQS protocol and highlights its design principles, supported by analytical insights, classical simulations, and experimental data. It also displays results obtained with Ruby, an analog QPU based on Rydberg atoms developed by the Pasqal company. These findings demonstrate MBQS's potential as a robust and informative tool for benchmarking near-term quantum devices for many-body physics.
Optical Spectroscopy of Waveguide coupled Er$^{3+}$ ensembles in CaWO$_4$ and YVO$_4$
This paper studies how erbium ions in two different crystal materials behave when coupled to optical waveguides, finding that surface effects and polarization of light significantly affect the optical properties in one material but not the other. The research reveals that surface charges cause broadening of spectral lines, which could impact the performance of quantum devices based on these materials.
Key Contributions
- Demonstrated polarization-dependent surface effects in Er3+:CaWO4 waveguides with significant spectral broadening
- Identified surface charges as the dominant decoherence mechanism affecting rare-earth ion ensembles in non-charge-neutral hosts
View Full Abstract
We present an optical study of near-surface Er$^{3+}$ ensembles in waveguide-integrated CaWO$_4$ and YVO$_4$, investigating how nanophotonic coupling modifies rare-earth spectroscopy. In particular, we compare bulk excitation with evanescently coupled TE and TM waveguide modes. In Er$^{3+}$:CaWO$_4$, we observe a pronounced polarization-dependent surface effect. TE-coupled spectra closely reproduce bulk behavior. In contrast, TM coupling induces strong inhomogeneous broadening and an asymmetric low-energy shoulder of the site S1 Y1Z1 transition, with linewidths exceeding those of the bulk by more than a factor of four. Temperature-dependent measurements and surface termination studies indicate that surface charges are the dominant mechanism. Er$^{3+}$:YVO$_4$ remains largely unaffected by mode polarization, and surface termination leads only to minor spectral shifts. These observations suggest that non-charge-neutral rare-earth systems are more susceptible to surface-induced decoherence sources than charge-neutral hosts.
Testing measurement-based computational phases of quantum matter on a quantum processor
This paper experimentally tests theoretical predictions about measurement-based quantum computation using quantum phases of matter on IBM superconducting quantum hardware. The researchers verify how symmetric imperfections in resource states affect computational performance and demonstrate the operational stability of measurement-based quantum computing approaches.
Key Contributions
- Experimental verification of four theoretical predictions for computational phases of quantum matter on IBM quantum hardware
- Comprehensive investigation of how symmetric imperfections translate to logical decoherence and mitigation strategies
- Testing scaling laws that govern uniformity of computational power in measurement-based quantum computation
- Analysis of correlated measurement regimes and validation of densest packing efficiency for quantum algorithms
View Full Abstract
Many symmetry-protected or symmetry-enriched phases of quantum matter have the property that every ground state in a given such phase endows measurement-based quantum computation with the same computational power. Such phases are called computational phases of quantum matter. Here, we experimentally verify four theoretical predictions for them on an IBM superconducting quantum device. We comprehensively investigate how symmetric imperfections of the resource states translate into logical decoherence, and how this decoherence is mitigated. In particular, the central experiment probes the scaling law from which the uniformity of computational power follows. We also analyze the correlated regime, where local measurements give rise to logical operations collectively. We test the prediction that the densest packing of measurement-based algorithms remains the most efficient, in spite of the correlations. Our experiments corroborate the operational stability of measurement-based quantum computation in quantum phases of matter with symmetry.
Hybrid non-degenerate parametric amplifier for a microwave cavity mode and an NV ensemble
This paper demonstrates a hybrid parametric amplifier that combines microwave cavity modes with nitrogen-vacancy (NV) spin ensembles to achieve signal amplification and quantum squeezing. The system works by modulating the spin ensemble frequency, creating amplification in both the microwave and spin components without requiring traditional oscillator modulation.
Key Contributions
- Novel hybrid parametric amplifier design combining microwave cavities with NV spin ensembles
- Demonstration of 18 dB microwave amplification and 5 dB squeezing through spin ensemble frequency modulation
- Analysis of experimental requirements for room temperature and cryogenic operation
View Full Abstract
We introduce an implementation of a non-degenerate parametric amplifier in which the signal and idler modes, respectively, a microwave mode and an ensemble of spins (e.g., nitrogen-vacancy centers in diamond), are operated in their linear regime. This paramp, which amplifies signals in both parts at room and cryogenic temperatures, can be used to generate both the two-mode and single-mode squeezing of either system. It requires merely modulating the frequency of the spin ensemble at the sum of the cavity and spin frequencies (providing the classical pump) with the two systems sufficiently detuned. This effect is remarkable given that modulating a spin ensemble by itself produces neither amplification nor squeezing, unlike modulating an oscillator, and that an off-resonant perturbative analysis would suggest that modulating the spin ensemble merely parametrically drives the cavity mode. With typical cavity parameters including a cavity quality factor $Q=10^4$ and a 1 GHz modulation amplitude, the microwave signal can be amplified by approximately 18 dB in 1.7 μs, with a resonant bandwidth of about 0.5 MHz. At 10 mK with the same modulation amplitude and a cavity and spin $Q=5\times 10^4$, it generates approximately 5 dB of squeezing. We also examine the experimental requirements for implementation.
Multiphoton Interference with a symmetric SU(N) beam splitter and the generalization of the extended Hong-Ou-Mandel effect
This paper studies multiphoton interference effects in symmetric SU(N) beam splitters, generalizing the Hong-Ou-Mandel effect to N-port devices. The authors develop analytical methods to predict when destructive interference leads to zero probability for certain output states where photons are equally distributed across all ports.
Key Contributions
- Generalization of Hong-Ou-Mandel effect to arbitrary N-port symmetric beam splitters
- Development of analytical constraint equations for permanent calculations that determine zero-amplitude conditions
- Extension of central nodal line properties to even-N systems with specific parity conditions
View Full Abstract
We examine multiphoton interference with a symmetric $SU(N)$ beam splitter $S_N$, an extension of features of the $SU(2)$ 50/50 beam splitter extended Hong-Ou-Mandel (eHOM) effect, whereby one obtains a zero amplitude (probability) for the output coincidence state (defined by an equal number of photons $n/N$ in each output port), when a total number $n$ of photons impinges on the $N$-port device. These are transitions of the form $|n_1,n_2,\ldots,n_N\rangle\overset{S_N}{\to}|n/N\rangle^{\otimes N}$, where $n=\sum_{i=1}^N n_i$, which generalize the Hong-Ou-Mandel (HOM) effect $|1,1\rangle \overset{S_2}{\to}|1,1\rangle$, the eHOM effect $|n_1,n_2\rangle \overset{S_2}{\to}|\tfrac{n_1+n_2}{2},\tfrac{n_1+n_2}{2}\rangle$, and the generalized HOM effect (gHOM) $|1\rangle^{\otimes N}\overset{S_N}{\to}|1\rangle^{\otimes N}$, which have previously been studied in the literature. The emphasis of this work is on illuminating how the overall destructive interference occurs in separate groups of destructive interferences of sub-amplitudes of the total zero amplitude. We develop symmetry properties for the generalized eHOM effect (geHOM) $|n_1,n_2,\ldots,n_N\rangle\overset{S_N}{\to}|n/N\rangle^{\otimes N}$ involving a zero amplitude governed by Perm$(Λ) = 0$, for an appropriately constructed matrix $Λ(S_N)$ built from the matrix elements of $S_N$. We develop an analytical constraint equation for Perm$(Λ)$ for arbitrary $N$ that allows us to determine when it is zero. We generalize the SU(2) beam splitter feature of the central nodal line (CNL), which has a zero diagonal along the output probability distribution when one of the input states is of odd parity (containing only an odd number of photons), to the general case of $N = 2N'$ where $N'$ is odd.
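Since these amplitudes are governed by matrix permanents, a small numerical check is easy to run. The sketch below uses the standard permanent rule for Fock-state transition amplitudes (rows and columns of the beam-splitter matrix repeated according to the occupation numbers, normalized by the square root of the factorials); it reproduces the HOM zero, an eHOM zero for the odd-odd input $|1,3\rangle$, and a non-zero even-even case for contrast. The function names and specific examples are editorial, not from the paper.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permanent(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def amplitude(S, n_in, n_out):
    """Fock transition amplitude <n_out|U(S)|n_in>: repeat column j of S n_in[j] times
    and row i n_out[i] times, take the permanent, and normalize by the factorials."""
    rows = np.repeat(np.arange(S.shape[0]), n_out)
    cols = np.repeat(np.arange(S.shape[1]), n_in)
    Lam = S[np.ix_(rows, cols)]
    norm = np.sqrt(np.prod([factorial(k) for k in list(n_in) + list(n_out)], dtype=float))
    return permanent(Lam) / norm

S2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50/50 beam splitter
print(amplitude(S2, (1, 1), (1, 1)))   # ~0   Hong-Ou-Mandel suppression
print(amplitude(S2, (1, 3), (2, 2)))   # ~0   extended-HOM suppression (odd-odd input)
print(amplitude(S2, (2, 2), (2, 2)))   # -0.5 no suppression for this even-even input
```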
Non-Markovian dynamics of the giant atom beyond the rotating-wave approximation
This paper studies 'giant atoms' - superconducting qubits with spatially separated coupling points that exhibit long-lived quantum memory effects. The researchers use advanced mathematical methods to analyze these systems beyond previous limitations, finding enhanced non-Markovian dynamics and bound-state formation in strong coupling regimes.
Key Contributions
- Extension of giant atom analysis beyond rotating-wave approximation using hierarchical equations of motion
- Demonstration of enhanced non-Markovian effects and bound-state formation in strong-coupling regime
- Establishment of giant atoms as platform for non-Markovian quantum dynamics applications
View Full Abstract
Superconducting qubits coupled to meandering transmission lines or surface acoustic waves may realize giant artificial atoms, whose spatially separated coupling points give rise to long-lived non-Markovian dynamics. Previous studies were limited to the zero-temperature, weak-coupling regime, where the rotating-wave approximation applies and only single-phonon processes contribute. Here we go beyond these limits using the hierarchical equations of motion (HEOM). We show that HEOM accurately captures the exact dynamics at zero temperature and weak coupling, whereas perturbative Redfield theory fails due to long bath memory times. The non-Markovian effects persist at finite temperatures. In the strong-coupling regime, they are further enhanced, and we observe bound-state formation at zero temperature with only two coupling points. These results establish giant atoms as a powerful platform for exploring non-Markovian open quantum dynamics and their applications in quantum information and thermodynamics.
Enhancing Small Dataset Classification Using Projected Quantum Kernels with Convolutional Neural Networks
This paper proposes using projected quantum kernels (PQK) to improve convolutional neural networks for image classification when training data is limited. The authors claim their quantum-enhanced CNN achieves much higher accuracy than classical CNNs on small datasets from MNIST and CIFAR-10.
Key Contributions
- Introduction of projected quantum kernels for enhancing CNN feature extraction
- Demonstration of improved classification performance on small datasets using quantum-enhanced CNNs
View Full Abstract
Convolutional Neural Networks (CNNs) have shown promising results in efficiency and accuracy in image classification. However, their efficacy often relies on large, labeled datasets, posing challenges for applications with limited data availability. Our research addresses these challenges by introducing an innovative approach that leverages projected quantum kernels (PQK) to enhance feature extraction for CNNs, specifically tailored for small datasets. Projected quantum kernels, derived from quantum computing principles, offer a promising avenue for capturing complex patterns and intricate data structures that traditional CNNs might miss. By incorporating these kernels into the feature extraction process, we improved the representational ability of CNNs. Our experiments demonstrated that, with 1000 training samples, the PQK-enhanced CNN achieved 95% accuracy on the MNIST dataset and 90% on the CIFAR-10 dataset, significantly outperforming the classical CNN, which achieved only 60% and 12% accuracy on the respective datasets. This research reveals the potential of quantum computing in overcoming data scarcity issues in machine learning and paves the way for future exploration of quantum-assisted neural networks, suggesting that projected quantum kernels can serve as a powerful approach for enhancing CNN-based classification in data-constrained environments.
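For context, a projected quantum kernel of the kind this work builds on is commonly defined by feeding single-qubit reduced density matrices of an encoded state into a Gaussian kernel. The toy sketch below (two qubits, an angle-encoding circuit chosen purely for illustration, plain NumPy) shows that construction; the paper's actual circuits, dataset pipeline, and CNN integration are not reproduced here.

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def feature_state(x):
    """Two-qubit encoding: RY(x0) and RY(x1) on |00>, followed by a CNOT."""
    psi = np.kron(ry(x[0]) @ [1, 0], ry(x[1]) @ [1, 0])
    return CNOT @ psi

def rdm(psi, qubit):
    """Single-qubit reduced density matrix of a 2-qubit pure state."""
    m = psi.reshape(2, 2)
    return m @ m.conj().T if qubit == 0 else m.T @ m.conj()

def pqk(x, y, gamma=1.0):
    """Projected quantum kernel: Gaussian kernel on the 1-qubit reduced density matrices."""
    d = sum(np.linalg.norm(rdm(feature_state(x), q) - rdm(feature_state(y), q), 'fro') ** 2
            for q in (0, 1))
    return np.exp(-gamma * d)

print(pqk([0.3, 1.2], [0.3, 1.2]))    # 1.0 for identical inputs
print(pqk([0.3, 1.2], [2.0, -0.5]))   # < 1 for distinct inputs
```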
Time-Dependent Dunkl-Pauli Oscillator in the Presence of the Aharonov-Bohm Effect
This paper derives exact solutions for a quantum mechanical oscillator system that combines Dunkl operators (which encode reflection symmetries) with the Aharonov-Bohm effect (a topological quantum phenomenon) in a time-dependent framework. The work reveals how topological phases constrain the symmetry parameters and modify the quantum energy spectrum.
Key Contributions
- Exact time-dependent solution for Dunkl-Pauli oscillator with Aharonov-Bohm flux using Lewis-Riesenfeld invariant method
- Discovery of symmetry constraints linking Aharonov-Bohm flux to Dunkl parameters that modify energy spectra and wavefunctions
View Full Abstract
We present an exact, time-dependent solution for a two-dimensional Pauli oscillator deformed by Dunkl operators in the presence of an Aharonov--Bohm (AB) flux. By replacing conventional momenta with Dunkl momenta and allowing arbitrary time dependence in both, mass and frequency, we derive a deformed Pauli Hamiltonian that encodes reflection symmetries and topological gauge phases. Employing the Lewis-Riesenfeld invariant method, we derive exact expressions for the eigenvalues and spinor eigenfunctions of the system. Crucially, the AB flux imposes symmetry constraints on the Dunkl parameters of the form $ν_1 = \mp ν_2 $, linking the reflection symmetry ($ε= \pm 1 $) to the quantization of angular momentum. These constraints modify the energy spectrum and wavefunctions of the angular operator and the invariant operator. Our framework reveals novel spectral characteristics arising from the interplay between topology and Dunkl symmetry, with potential implications for quantum simulation in engineered systems such as cold atoms and quantum dots.
Grand-Canonical Typicality
This paper provides theoretical foundations for how macroscopic quantum systems naturally evolve toward grand-canonical statistical distributions, extending beyond energy conservation to include particle number conservation in systems with chemical reactions or particle exchange. The work establishes mathematical justification for why typical quantum wave functions in large systems produce density matrices that match grand-canonical ensemble predictions.
Key Contributions
- Extension of canonical typicality theory to grand-canonical ensembles with particle number conservation
- Mathematical proof that typical wave functions in generalized microcanonical subspaces yield grand-canonical density matrices
- Foundation for understanding statistical mechanics in quantum systems with chemical reactions and particle exchange
View Full Abstract
We study how the grand-canonical density matrix arises in macroscopic quantum systems. ``Canonical typicality'' is the known statement that for a typical wave function $Ψ$ from a micro-canonical energy shell of a quantum system $S$ weakly coupled to a large but finite quantum system $B$, the reduced density matrix $\hatρ^S_Ψ=\mathrm{tr}^B |Ψ\rangle\langle Ψ|$ is approximately equal to the canonical density matrix $\hatρ_\mathrm{can}=Z^{-1}_\mathrm{can} \exp(-β\hat{H}^S)$. Here, we discuss the analogous statement and related questions for the \emph{grand-canonical} density matrix $\hatρ_\mathrm{gc}=Z^{-1}_\mathrm{gc} \exp(-β(\hat{H}^S-μ_1 \hat{N}_{1}^S-\ldots-μ_r\hat{N}_{r}^S))$ with $\hat{N}_{i}^S$ the number operator for molecules of type $i$ in the system $S$. This includes (i) the case of chemical reactions and (ii) that of systems $S$ defined by a spatial region which particles may enter or leave. It includes the statements (a) that the density matrix of the appropriate (generalized micro-canonical) Hilbert subspace $H_\mathrm{gmc} \subset H^S \otimes H^B$ (defined by a micro-canonical interval of total energy and suitable particle number sectors), after tracing out $B$, yields $\hatρ_\mathrm{gc}$; (b) that typical $Ψ$ from $H_\mathrm{gmc}$ have reduced density matrix $\hatρ^S_Ψ$ close to $\hatρ_\mathrm{gc}$; and (c) that the conditional wave function $ψ^S$ of $S$ has probability distribution $\mathrm{GAP}_{\hatρ_\mathrm{gc}}$ if a typical orthonormal basis of $H^B$ is used. That is, we discuss the foundation and justification of both the density matrix and the distribution of the wave function in the grand-canonical case. We also extend these considerations to the so-called generalized Gibbs ensembles, which apply to systems for which some macroscopic observables are conserved.
$\mathsf{QAC}^0$ Contains $\mathsf{TC}^0$ (with Many Copies of the Input)
This paper proves that quantum constant-depth circuits (QAC^0) are significantly more powerful than classical constant-depth circuits, showing they can compute complex Boolean functions like those in TC^0 when given multiple copies of the input. The work resolves fundamental questions about the computational power of shallow quantum circuits compared to classical ones.
Key Contributions
- Proves the unconditional separation QAC^0 ⊄ AC^0[p], showing constant-depth quantum circuits can compute functions beyond the reach of their classical AC^0[p] counterparts
- Demonstrates TC^0 ⊆ QAC^0 ∘ NC^0, showing quantum constant-depth circuits can compute threshold functions with multiple input copies
- Introduces amplitude amplification technique for making approximate constant-depth quantum constructions exact
View Full Abstract
$\mathsf{QAC}^0$ is the class of constant-depth polynomial-size quantum circuits constructed from arbitrary single-qubit gates and generalized Toffoli gates. It is arguably the smallest natural class of constant-depth quantum computation which has not been shown useful for computing any non-trivial Boolean function. Despite this, many attempts to port classical $\mathsf{AC}^0$ lower bounds to $\mathsf{QAC}^0$ have failed. We give one possible explanation of this: $\mathsf{QAC}^0$ circuits are significantly more powerful than their classical counterparts. We show the unconditional separation $\mathsf{QAC}^0\not\subset\mathsf{AC}^0[p]$ for decision problems, which also resolves for the first time whether $\mathsf{AC}^0$ could be more powerful than $\mathsf{QAC}^0$. Moreover, we prove that $\mathsf{QAC}^0$ circuits can compute a wide range of Boolean functions if given multiple copies of the input: $\mathsf{TC}^0 \subseteq \mathsf{QAC}^0 \circ \mathsf{NC}^0$. Along the way, we introduce an amplitude amplification technique that makes several approximate constant-depth constructions exact.
Shallow-circuit Supervised Learning on a Quantum Processor
This paper demonstrates a quantum machine learning approach that uses shallow quantum circuits to encode classical data as ground states of local Hamiltonians, overcoming key obstacles like data loading costs and poor trainability. The researchers successfully tested their method on benchmark datasets using up to 50 qubits on an IBM quantum processor.
Key Contributions
- Development of a linear Hamiltonian-based quantum machine learning method that provides compact quantum representation of classical data
- Demonstration of scalable quantum machine learning on real quantum hardware using up to 50 qubits with the Krylov quantum diagonalization method
View Full Abstract
Quantum computing has long promised transformative advances in data analysis, yet practical quantum machine learning has remained elusive due to fundamental obstacles such as a steep quantum cost for the loading of classical data and poor trainability of many quantum machine learning algorithms designed for near-term quantum hardware. In this work, we show that one can overcome these obstacles by using a linear Hamiltonian-based machine learning method which provides a compact quantum representation of classical data via ground state problems for k-local Hamiltonians. We use the recent sample-based Krylov quantum diagonalization method to compute low-energy states of the data Hamiltonians, whose parameters are trained to express classical datasets through local gradients. We demonstrate the efficacy and scalability of the methods by performing experiments on benchmark datasets using up to 50 qubits of an IBM Heron quantum processor.
Restoring Bloch's Theorem for Cavity Exciton Polaron-Polaritons
This paper develops a new mathematical framework that restores Bloch's theorem for quantum systems where light, electrons, and phonons are strongly coupled in crystalline materials. The work enables better theoretical predictions of material properties under strong light-matter coupling without computational approximations.
Key Contributions
- Development of symmetry-informed representation for hybrid photon-exciton-phonon Hamiltonians
- Restoration of translational symmetry in strongly coupled cavity systems enabling exact calculations
View Full Abstract
We introduce a symmetry-informed representation for hybrid photon--exciton--phonon quantum electrodynamics Hamiltonians to restore Bloch's theorem. The interchange of momenta between fermions and bosons breaks crystalline excitons' translational symmetry under strong coupling. Restoring said symmetry, we efficiently compute experimentally accessible observables without introducing approximations to the Hamiltonian, enabling investigations that elucidate material properties in strong coupling with applications enhancing coherent transport and unlocking symmetry-forbidden matter transitions.
When does entanglement through gravity imply gravitons?
This paper critically examines claims that detecting quantum entanglement through gravitational interactions would prove the existence of gravitons. The authors analyze thought experiments involving complementarity and causality, concluding that such entanglement detection alone does not necessarily support graviton existence unless retardation effects are also observed.
Key Contributions
- Clarifies the distinction between Newtonian action-at-a-distance and quantum no-signalling in causality violations
- Demonstrates that entanglement through gravity does not necessarily imply graviton existence without additional evidence of retardation effects
View Full Abstract
Detection of entanglement through the Newtonian potential has been claimed to support the existence of gravitons, by extrapolating to a thought experiment which demonstrates that complementarity and causality would be in conflict unless quantum fluctuations exist. We critically assess this consistency argument using scalar field models. We show that whether complementarity or no-signalling is violated when quantum fluctuations are neglected, depends on how this approximation is taken, while in both cases entanglement is generated locally in spacetime. We clarify that the correct reading of the paradox requires making a clear distinction between two notions of causality violation: Newtonian action-at-a-distance and the quantum mechanical no-signalling; the latter is pertinent while the former is not. We conclude that the thought experiment (a) does not add to the epistemological relevance of entanglement through Newtonian potentials (b) lends support for the existence of gravitons, if retardation effects are detected in entanglement through gravity.
A Unified Frequency Principle for Quantum and Classical Machine Learning
This paper develops a theoretical framework showing that quantum neural networks, like classical ones, preferentially learn low-frequency components of target functions first during training. The authors prove that noise in quantum circuits exponentially suppresses high-frequency learning while preserving low-frequency structure, and demonstrate that such noisy circuits can be efficiently simulated classically.
Key Contributions
- Unified theoretical framework proving quantum neural networks exhibit spectral bias toward low-frequency learning similar to classical networks
- Mathematical analysis showing how Pauli noise exponentially suppresses high-frequency Fourier components while preserving low-frequency learnability
- Proof that noisy quantum circuits with specific noise models admit efficient classical simulation
View Full Abstract
Quantum neural networks constitute a key class of near-term quantum learning models, yet their training dynamics remain not fully understood. Here, we present a unified theoretical framework for the frequency principle (F-principle) that characterizes the training dynamics of both classical and quantum neural networks. Within this framework, we prove that quantum neural networks exhibit a spectral bias toward learning low-frequency components of target functions, mirroring the behavior observed in classical deep networks. We further analyze the impact of noise and show that, when single-qubit noise is applied after encoding-layer rotations and modeled as a Pauli channel aligned with the rotation axis, the Fourier component labeled by $\boldsymbolω$ is suppressed by a factor $(1-2γ)^{\|\boldsymbolω\|_1}$. This leads to exponential attenuation of high-frequency terms while preserving the learnability of low-frequency structure. In the same setting, we establish that the resulting noisy circuits admit efficient classical simulation up to average-case error. Numerical experiments corroborate our theoretical predictions: Quantum neural networks primarily learn low-frequency features during early optimization and maintain robustness against dephasing and depolarizing noise acting on the encoding layer. Our results provide a frequency-domain lens that unifies classical and quantum learning dynamics, clarifies the role of noise in shaping trainability, and guides the design of noise-resilient quantum neural networks.
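To make the quoted suppression concrete: with the abstract's factor $(1-2γ)^{\|ω\|_1}$ and an illustrative noise rate $γ = 0.05$ (a value chosen here, not taken from the paper), high-frequency Fourier components fade quickly while low-frequency ones survive, as the short tabulation below shows.

```python
gamma = 0.05                       # illustrative single-qubit Pauli-noise rate
for w1 in (1, 2, 5, 10, 20):       # ||omega||_1 of a Fourier component
    print(w1, round((1 - 2 * gamma) ** w1, 3))
# 1: 0.9 | 2: 0.81 | 5: 0.59 | 10: 0.349 | 20: 0.122
```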
Higher-Dimensional Anyons via Higher Cohomotopy
This paper explores mathematical connections between integer Heisenberg groups and topological quantum phenomena, specifically showing how these algebraic structures relate to anyons in fractional quantum Hall systems. The authors generalize previous results using homotopy theory to predict the existence of higher-dimensional analogs of anyons in theoretical physics frameworks like 11D supergravity.
Key Contributions
- Streamlined proof that integer Heisenberg groups at level 2 correspond to quantum observables of abelian anyons in fractional quantum Hall systems
- Generalization using homotopy theory showing non-torsion parts of certain mapping spaces are integer Heisenberg groups for specific dimensional cases
- Prediction of higher-dimensional anyon analogs in cohomotopical completion of 11D supergravity
View Full Abstract
We highlight that integer Heisenberg groups at level 2 underlie topological quantum phenomena: their group algebras coincide with the algebras of quantum observables of abelian anyons in fractional quantum Hall (FQH) systems on closed surfaces. Decades ago, these groups were shown to arise as the fundamental groups of the space of maps from the surface to the 2-sphere -- which has recently been understood as reflecting an effective FQH flux quantization in 2-Cohomotopy. Here we streamline and generalize this theorem using the homotopy theory of H-groups, showing that for $k \in \{1,2,4\}$, the non-torsion part of $π_1 \mathrm{Map}\big({(S^{2k-1})^2, S^{2k}}\big)$ is an integer Heisenberg group of level 2, where we identify this level with 2 divided by the Hopf invariant of the generator of $π_{4k-1}(S^{2k})$. This result implies the existence of higher-dimensional analogs of FQH anyons in the cohomotopical completion of 11D supergravity ("Hypothesis H").
Collective light-matter interaction in plasmonic waveguide quantum electrodynamics
This paper investigates a new regime of quantum electrodynamics where multiple quantum emitters collectively interact with collective light modes in plasmonic waveguides, creating hybrid plasmon-polariton states. The researchers identify distinct coupling regimes and decay dynamics, including non-Markovian evolution effects similar to those seen in cavity quantum electrodynamics.
Key Contributions
- Discovery of collective-light-collective-matter interaction regime in waveguide QED with timed-Dicke states
- Identification of three distinct decay regimes and weak/strong coupling transitions in hybrid plasmon-polaritons
- Demonstration of anticrossing behavior and non-Markovian dynamics in plasmonic waveguide systems
View Full Abstract
Rabi oscillations characterize light-matter hybridization in the waveguide quantum electrodynamics (WQED) framework, with their associated decay rates reflecting excitation damping, yet their behavior remains unresolved when collective emitters are coupled to a collective waveguide mode. This scenario reveals a conceptually novel collective-light-collective-matter interaction, realizable when a timed-Dicke state (TDS) of subwavelength emitters couples to a slow, delocalized surface-plasmon mode, forming a hybridized plasmon-polariton (HPP). The HPP acquires its directionality from the TDS via momentum matching. It also exhibits plasmonic characteristics, with excitation frequencies following the surface-plasmon dispersion relation. We obtain a Rabi oscillation and a long-time decay that describe the HPP and use them to reveal weak- and strong-coupling regimes through the emergence of normal-mode splitting. By performing a finite-time Lyapunov-exponent analysis, we show that the HPP also exhibits instantaneous decay and identify three distinct decay regimes: early-time rapid, transient-time oscillatory, and long-time classical. Finally, by analyzing the emission spectrum, we observe an anticrossing of the peak doublets (a feature also seen in cavity QED setups) which originates from quantum vacuum effects and the resulting non-Markovian HPP evolution in our WQED.
Operational modes of a Raman-coupled two-qubit quantum thermal machine
This paper studies a quantum thermal machine made of two qubits connected by Raman coupling, analyzing how it can operate as different types of heat engines, refrigerators, or heaters under various thermodynamic cycles. The researchers map out the conditions and parameter ranges where each operational mode occurs and find that Stirling cycles with regenerators achieve the best performance.
Key Contributions
- Comprehensive analysis of operational modes for Raman-coupled two-qubit thermal machines across multiple thermodynamic cycles
- Identification of parameter space boundaries and efficiency maps for different operational regimes using frequency ratios and coupling strengths
- Demonstration that Stirling cycles with regeneration achieve near-ideal efficiencies in quantum thermal machines
View Full Abstract
We investigate a quantum thermal machine composed of two qubits coupled through a Raman-induced exchange interaction and driven by inhomogeneous transition frequencies. The system is analyzed within Carnot, Otto, and Stirling thermodynamic cycles, including the Stirling cycle with and without regeneration. We identify the conditions under which the device operates as a heat engine, refrigerator, thermal accelerator, or heater. Efficiency maps and operational-mode diagrams reveal well-defined boundaries in parameter space, governed by the frequency ratio $r=\barω/ω$, the coupling strength $g$, and the thermal gradient between reservoirs. The Carnot cycle exhibits sharp transitions between engine and refrigerator regimes, while the Otto cycle displays a richer structure with the coexistence of all operational modes. The Stirling cycle shows enhanced versatility and performance, particularly when assisted by a regenerator, where near-ideal efficiencies are achieved. Overall, the Raman-type interaction introduces a controllable left-right asymmetry that enables nontrivial manipulation of thermodynamic behavior through frequency tuning.
Collective dynamics versus entanglement in quantum battery performance
This paper investigates quantum batteries to understand whether enhanced charging performance comes from genuine quantum entanglement or from classical coherent collective dynamics. The researchers find that peak charging power occurs before strong quantum correlations develop, suggesting coherent transport dominates early charging while entanglement builds later.
Key Contributions
- Demonstrated that peak charging power in quantum batteries occurs before strong quantum correlations develop, indicating coherent transport dominates over entanglement effects
- Established that fully collective interactions provide genuine advantages because all particles participate coherently, while partial interactions don't guarantee improved efficiency
View Full Abstract
Identifying the physical origin of enhanced charging performance in many-body quantum batteries is a key challenge in quantum thermodynamics. We investigate whether improvements in stored energy and instantaneous charging power arise from genuine quantum correlations or from coherent collective dynamics that are not intrinsically quantum. We compare the time evolution of energetic quantities with a hierarchy of information-theoretic measures probing bipartite, tripartite, and further-partite correlations. Across different battery-charger configurations, we find a consistent temporal ordering in which the instantaneous power peaks before the buildup of strong quantum correlations, indicating that peak charging is dominated by coherent transport, while entanglement and scrambling develop at later times. Furthermore, charging protocols based on $k$-local interactions are examined under both unconstrained and norm-constrained (fair) settings, enabling a clear distinction between classical scaling effects and genuine collective enhancements. Increasing the interaction order or the participation number does not automatically translate into higher charging power. Instead, the performance is primarily dictated by how many particles actually become mutually correlated and contribute to entanglement. Fully collective interactions provide a genuine advantage because all particles participate coherently, whereas partially extended interaction schemes fail to monotonically increase the number of effectively interacting particles, and therefore do not guarantee improved charging efficiency.
Multipartite Non-local Magic and SYK Model
This paper develops new mathematical tools to measure quantum 'magic' (non-stabilizerness) in complex quantum systems, specifically studying how computational complexity is distributed across different parts of interacting disordered systems. The authors apply these tools to the Sachdev-Ye-Kitaev model, revealing hidden complexity in black hole microstates that disappears in thermal averages.
Key Contributions
- Introduction of multipartite non-local magic functional to quantify distribution of quantum magic across different scales
- Demonstration that thermal pure quantum states contain hidden computational complexity not visible in thermal density matrices
- Analysis of stabilizer complexity patterns in supersymmetric SYK models revealing sector-dependent global correlations
View Full Abstract
We investigate the structure of quantum magic in interacting disordered fermionic systems, quantifying non-stabilizerness via the fermionic stabilizer Rényi entropy (SRE). To resolve the distribution of magic across different scales, we introduce a multipartite non-local magic functional, constructed from an inclusion-exclusion combination of subsystem contributions. This measure serves as a fine-grained diagnostic, isolating genuinely global contributions and revealing nontrivial interactions between local and collective supports of magic. We illustrate the measure on paradigmatic multipartite states and apply these diagnostics to the Sachdev-Ye-Kitaev model and its variants. Crucially, for thermal/typical ensembles, we observe a marked disparity between Thermal Pure Quantum (TPQ) states and the thermal density matrix. This reveals a concealed complexity: the immense computational hardness characterizing the unitary evolution is encoded in the specific microstructure of the black hole microstates, while being washed out in the coarse-grained thermodynamic description. Furthermore, in $\mathcal N=2$ supersymmetric SYK, we show that while fortuitous BPS states exhibit intermediate stabilizer complexity, the multipartite measure unveils a rich, sector-dependent pattern of global correlations, distinguishing them from generic chaotic states.
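As background for the magic measure used here: for qubit systems the stabilizer 2-Rényi entropy of a pure state is $M_2 = -\log_2\big(2^{-n}\sum_P \langle ψ|P|ψ\rangle^4\big)$, summed over Pauli strings. The brute-force sketch below evaluates this qubit baseline on one qubit (a stabilizer state and a T state); the paper works with the fermionic SRE over Majorana strings and a multipartite non-local combination, which this toy example does not implement.

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
PAULIS = [I2, X, Y, Z]

def sre2(psi, n):
    """Qubit stabilizer 2-Renyi entropy: M2 = -log2( sum_P <psi|P|psi>^4 / 2^n )."""
    total = 0.0
    for combo in product(PAULIS, repeat=n):
        P = combo[0]
        for p in combo[1:]:
            P = np.kron(P, p)
        total += np.real(psi.conj() @ P @ psi) ** 4
    return -np.log2(total / 2 ** n)

T = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
print(sre2(np.array([1, 0], dtype=complex), 1))  # 0.0    (stabilizer state has no magic)
print(sre2(T, 1))                                # ~0.415 (magic of the T state)
```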
Egorov-Type Semiclassical Limits for Open Quantum Systems with a Bi-Lindblad Structure
This paper develops mathematical connections between classical bi-Hamiltonian systems and quantum open systems described by the Lindblad formalism. It shows how certain classical dynamical systems with dissipation can be consistently quantized while preserving their integrable structure and semiclassical behavior.
Key Contributions
- Establishes mathematical bridge between bi-Hamiltonian Poisson-Lie structures and GKSL quantum open system formalism
- Develops contact-compatible Lindblad generators that preserve semiclassical limits and integrability
- Provides explicit Euler-top example demonstrating bi-Lindblad structure with semiclassical behavior
View Full Abstract
This paper develops a bridge between bi-Hamiltonian structures of Poisson-Lie type, contact Hamiltonian dynamics, and the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) formalism for quantum open systems. On the classical side, we consider bi-Hamiltonian systems defined by a Poisson pencil with non-trivial invariants. Using an exact symplectic realization, these invariants are lifted and projected onto a contact manifold, yielding a completely integrable contact Hamiltonian system in terms of dissipated quantities and a Jacobi-commutative algebra of observables. On the quantum side, we introduce a class of contact-compatible Lindblad generators: GKSL evolutions whose dissipative part preserves a commutative $C^\ast$-subalgebra generated by the quantizations of the classical dissipated quantities, and whose Hamiltonian part admits an Egorov-type semiclassical limit to the contact dynamics. This construction provides a mathematical mechanism compatible with the semiclassical limit for pure dephasing, compatible with integrability and contact dissipation. An explicit Euler-top-type Poisson-Lie pencil, inspired by deformed Euler top models, is developed as a fully worked-out example illustrating the resulting bi-Lindblad structure and its semiclassical behavior.
Who can compete with quantum computers? Lecture notes on quantum inspired tensor networks computational techniques
This paper presents lecture notes on tensor network computational techniques, particularly Matrix Product States (MPS) and Matrix Product Operators (MPO), as alternatives to quantum computing for solving exponentially large linear algebra problems. It covers algorithms like DMRG, tensor cross interpolation, and applications including quantum computer simulation and solving partial differential equations.
Key Contributions
- Comprehensive treatment of tensor network algorithms as classical alternatives to quantum computing
- Detailed construction of MPOs for mathematical operations like differentiation, integration, and quantum Fourier transform
- Applications to quantum computer simulation and PDE solving using quantics representation
View Full Abstract
This is a set of lectures on tensor networks with a strong emphasis on the core algorithms involving Matrix Product States (MPS) and Matrix Product Operators (MPO). Compared to other presentations, particular care has been given to disentangle aspects of tensor networks from the quantum many-body problem: MPO/MPS algorithms are presented as a way to deal with linear algebra on extremely (exponentially) large matrices and vectors, regardless of any particular application. The lectures include well-known algorithms to find eigenvectors of MPOs (the celebrated DMRG), solve linear problems, and recent learning algorithms that allow one to map a known function into an MPS (the Tensor Cross Interpolation, or TCI, algorithm). The lectures end with a discussion of how to represent functions and perform calculus with tensor networks using the "quantics" representation. They include the detailed analytical construction of important MPOs such as those for differentiation, indefinite integration, convolution, and the quantum Fourier transform. Three concrete applications are discussed in detail: the simulation of a quantum computer (either exactly or with compression), the simulation of a quantum annealer, and techniques to solve partial differential equations (e.g. Poisson, diffusion, or Gross-Pitaevskii) within the "quantics" representation. The lectures have been designed to be accessible to a first-year PhD student and include detailed proofs of all statements.
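As a flavor of the MPS machinery these notes cover, the sketch below compresses a state vector into a left-canonical MPS by sequential SVDs, which is the textbook construction; it is an editorial illustration, not code taken from the lecture notes.

```python
import numpy as np

def to_mps(psi, n, chi_max=None, tol=1e-12):
    """Split a length-2**n state vector into a left-canonical MPS by sequential SVDs.
    Each tensor has shape (left bond, physical=2, right bond); chi_max truncates."""
    tensors, chi = [], 1
    mat = psi.reshape(1, -1)
    for _ in range(n - 1):
        mat = mat.reshape(chi * 2, -1)
        U, S, Vh = np.linalg.svd(mat, full_matrices=False)
        keep = S > tol                      # drop numerically zero singular values
        if chi_max is not None:
            keep[chi_max:] = False          # optional bond-dimension truncation
        U, S, Vh = U[:, keep], S[keep], Vh[keep]
        tensors.append(U.reshape(chi, 2, -1))
        chi = S.size
        mat = S[:, None] * Vh               # carry the remainder of the state to the right
    tensors.append(mat.reshape(chi, 2, 1))
    return tensors

# 4-qubit GHZ state: bond dimension 2 is enough
n = 4
ghz = np.zeros(2 ** n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
print([t.shape for t in to_mps(ghz, n)])   # [(1, 2, 2), (2, 2, 2), (2, 2, 2), (2, 2, 1)]
```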
Entanglement signatures of quantum criticality in Floquet non-Hermitian topological systems
This paper uses entanglement entropy as a diagnostic tool to study topological phase transitions in one-dimensional quantum systems that are periodically driven (Floquet systems). The researchers map out phase diagrams and show that entanglement signatures remain reliable even in non-Hermitian systems, demonstrating the robustness of this measurement technique.
Key Contributions
- Demonstrated entanglement entropy as robust diagnostic for topological phase transitions in Floquet systems
- Constructed phase diagrams using entanglement signatures and showed their reliability in non-Hermitian regimes
- Identified characteristic entanglement spectrum splittings that reveal hybridization between different topological modes
View Full Abstract
The entanglement entropy can be an effective diagnostic tool for probing topological phase transitions. In one-dimensional single particle systems, the periodic driving generates a variety of topological phases and edge modes. In this work, we investigate the topological phase transition of the one-dimensional Floquet Su-Schrieffer-Heeger model using entanglement entropy, and construct the phase diagram based on entanglement entropy. The entanglement entropy exhibits pronounced peaks and follows the logarithmic scaling law at the phase transition points, from which we extract the central charge $c=1$. We further investigate the entanglement spectrum to accurately distinguish the different topological phases. In addition, the coupling between zero and $π$ modes leads to characteristic splittings in the entanglement spectrum, signaling their hybridization under periodic driving. These results remain robust in non-Hermitian regimes and in the presence of next-nearest-neighbor hopping, demonstrating the reliability and universality of entanglement entropy as a diagnostic for topological phase transitions.
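For readers who want to reproduce the basic diagnostic, the sketch below computes the entanglement entropy of the static (undriven, Hermitian) SSH chain from the subsystem correlation matrix, a standard free-fermion technique; the Floquet and non-Hermitian cases studied in the paper require the corresponding Floquet states instead, and the parameters here are illustrative only.

```python
import numpy as np

def ssh_hamiltonian(n_cells, v, w):
    """Single-particle SSH Hamiltonian on an open chain of 2*n_cells sites."""
    N = 2 * n_cells
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = v if i % 2 == 0 else w   # alternating hoppings
    return H

def entanglement_entropy(H, sub_sites):
    """Half-filling ground-state entropy from the subsystem correlation matrix."""
    eps, U = np.linalg.eigh(H)
    occ = U[:, eps < 0]                       # occupied single-particle modes
    C = occ @ occ.conj().T                    # correlation matrix <c_i^dag c_j>
    zeta = np.linalg.eigvalsh(C[np.ix_(sub_sites, sub_sites)])
    zeta = zeta[(zeta > 1e-12) & (zeta < 1 - 1e-12)]
    return float(-np.sum(zeta * np.log(zeta) + (1 - zeta) * np.log(1 - zeta)))

H = ssh_hamiltonian(50, v=0.5, w=1.0)         # |v| < |w|: topological parameters
print(entanglement_entropy(H, list(range(50))))   # left half of the chain
```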
Nonseparability as Time-Averaged Dynamic States
This paper proposes a new theoretical framework that explains quantum nonseparability (entanglement) as arising from time-averaged oscillatory processes with auxiliary angular frequencies. The approach offers both an alternative theoretical perspective on entanglement mechanisms and practical methods for simulating quantum entanglement in classical wave systems.
Key Contributions
- Novel theoretical framework explaining nonseparability through time-averaged oscillatory dynamics
- Practical approach for simulating multipartite entanglement in classical wave systems
View Full Abstract
Nonseparability - multipartite states that cannot be factorized - is one of the most striking features of quantum mechanics, as it gives rise to entanglement and non-causal correlations. In quantum computing, it also contributes directly to the computational advantage of quantum computers over their digital counterparts. In this work, we introduce a simple mechanism that frames nonseparability as a time-averaged manifestation of an underlying oscillatory process within state space. The central idea is the inclusion of auxiliary angular frequencies that modulate the temporal evolution of composite states. These additional dynamical degrees of freedom act as coherence channels through which nonseparability is mediated. While the proposed formalism could eventually serve as an alternative theoretical handle on the mechanisms of quantum entanglement, its greater significance lies in opening practical routes for simulating multipartite entanglement in controlled classical wave systems.
Does relativistic motion really freeze initially maximal entanglement?
This paper investigates how relativistic motion affects quantum entanglement in four-qubit cluster states, discovering that certain types of entanglement can remain completely frozen (unchanged) even under extreme acceleration, contrary to the usual expectation that relativistic effects degrade entanglement.
Key Contributions
- Discovery of complete freezing of maximal entanglement under relativistic acceleration in four-qubit cluster states
- Demonstration that the Unruh effect does not universally degrade all forms of maximal entanglement
View Full Abstract
We investigate the relativistic dynamics of quantum entanglement in a four-qubit cluster ($CL_4$) state using a fully operational Unruh-DeWitt detector framework. Contrary to the widely held expectation that the Unruh effect inevitably degrades initially maximal entanglement, we demonstrate that the 1-3 bipartite entanglement of the $CL_4$ state remains strictly maximal for all accelerations, including the infinite-acceleration limit. This result uncovers a previously unexplored phenomenon, namely the ``complete freezing of initially maximal entanglement'' under relativistic motion. To the best of our knowledge, this is the first identification and systematic characterization of such a phenomenon within a relativistic framework. These findings overturn the conventional view that acceleration universally diminishes maximal entanglement and establish the $CL_4$ state as a promising resource for quantum information processing in non-inertial or curved-spacetime settings.
Violation of Bell Monogamy Relations
This paper demonstrates that Bell monogamy relations, which describe how quantum nonlocality is shared between subsystems in multipartite entangled states, can be violated through local filtering operations. The researchers use W states as examples to show that these fundamental constraints on nonlocality sharing can be circumvented.
Key Contributions
- Demonstration that Bell monogamy relations can be violated using local filtering operations
- Analysis of permutation-symmetric multipartite pure states, particularly W states, to show monogamy relation violations
View Full Abstract
Entangled multipartite systems, especially in pure states, exhibit the phenomenon of entanglement monogamy. Such systems also display the phenomenon of Bell nonlocality. Like entanglement monogamy relations, there are Bell monogamy relations. These relations suggest a sharing of nonlocality across the subsystems. The nonlocality of one subsystem, as characterized by Bell inequalities, limits the nonlocality exhibited by another subsystem. We show that the Bell monogamy relations can be violated by using local filtering operations. We consider permutation-symmetric multipartite pure states, in particular $W$ states, to demonstrate the violation.
Trading symmetry for Hilbert-space dimension in Bell-inequality violation
This paper investigates how the requirement for symmetry between parties in Bell inequality tests affects the quantum strategies needed to achieve maximum violations. The authors find that some symmetric Bell inequalities can only achieve maximum violation using asymmetric quantum strategies with minimal dimensional Hilbert spaces.
Key Contributions
- Demonstrates that symmetric Bell inequalities sometimes require asymmetric quantum strategies for maximal violation in minimal dimensions
- Provides counterexamples showing asymmetric Bell inequalities can be maximally violated by symmetric correlations
- Analyzes the trade-off between symmetry and Hilbert space dimension in Bell test optimization
View Full Abstract
In quantum information, asymmetry, i.e., the lack of symmetry, is a resource allowing one to accomplish certain tasks that are otherwise impossible. Similarly, in a Bell test using any given Bell inequality, the maximum violation achievable using quantum strategies respecting or disregarding a certain symmetry can be different. In this work, we focus on the symmetry involved in the exchange of parties and explore when we have to trade this symmetry for a lower-dimensional quantum strategy in achieving the maximal violation of given Bell inequalities. For the family of symmetric Collins-Gisin-Linden-Massar-Popescu inequalities, we provide evidence showing that there is no such trade-off. However, for several other Bell inequalities with a small number of dichotomic measurement settings, we show that symmetric quantum strategies in the minimal Hilbert space dimension can only lead to a suboptimal Bell violation. In other words, there exist symmetric Bell inequalities that can only be maximally violated by asymmetric quantum strategies of minimal dimension. In contrast, one can also find examples of asymmetric Bell inequalities that are maximally violated by symmetric correlations. The implications of these findings on the geometry of the set of quantum correlations and the possibility of performing self-testing therefrom are briefly discussed.
Entanglement Entropy for Screened Interactions via Dimensional Mapping to Harmonic Oscillators
This paper develops a mathematical method for calculating entanglement entropy in quantum systems by converting screened Coulomb-like interactions into harmonic oscillator problems that can be solved analytically. The authors show how weak screening effects systematically modify quantum entanglement properties through controlled perturbative corrections.
Key Contributions
- Novel dimensional mapping technique converting Yukawa interactions to harmonic oscillators for analytical treatment
- Systematic perturbative framework for computing entanglement entropy corrections in weakly interacting systems
View Full Abstract
We investigate interaction-induced corrections to entanglement entropy by mapping a screened Yukawa-type interaction to an effective harmonic oscillator system with controlled anharmonic perturbations. Starting from a one-dimensional interaction $V(x) = -g^2 e^{-αm x}/x$, we reformulate the problem in terms of a four-dimensional radial oscillator, where the finite screening length generates a systematic hierarchy of polynomial interactions in the radial coordinate. This mapping enables a controlled Rayleigh-Schrodinger perturbative treatment of the ground-state wavefunction and an explicit spectral analysis of the reduced density matrix. Working in the weak-screening regime, we compute the leading non-Gaussian correction arising from the quartic interaction $ρ^4$, which appears at order $α^2$ in the expansion of the Yukawa-like potential. We obtain closed analytic expressions for the resulting small eigenvalues of the reduced density matrix and evaluate their contribution to the von Neumann entanglement entropy. We show that the entropy receives analytic corrections at order $α^2$, originating both from explicit anharmonic state-mixing effects and from the implicit $α$ dependence of the Gaussian width parameter. Our results clarify the distinct roles of harmonic renormalization and genuinely non-Gaussian interactions in generating entanglement, establish a systematic power-counting and normalization scheme for higher-order $ρ^{2n}$ perturbations, and provide a transparent oscillator-based framework for computing entanglement entropy in weakly interacting low-dimensional and field-theoretic systems.
Quantum-Enhanced Neural Contextual Bandit Algorithms
This paper introduces a quantum-enhanced algorithm for online decision-making that uses frozen quantum neural networks as kernels for contextual bandit problems. The approach avoids training instabilities while achieving better theoretical scaling than classical neural network methods.
Key Contributions
- Novel QNTK-UCB algorithm that uses frozen quantum neural networks as kernels
- Improved theoretical parameter-count scaling, from Ω((TK)^8) for classical NeuralUCB to Ω((TK)^3) for QNTK-UCB
- Demonstration of quantum advantage in online learning through implicit regularization
View Full Abstract
Stochastic contextual bandits are fundamental for sequential decision-making but pose significant challenges for existing neural network-based algorithms, particularly when scaling to quantum neural networks (QNNs) due to issues such as massive over-parameterization, computational instability, and the barren plateau phenomenon. This paper introduces the Quantum Neural Tangent Kernel-Upper Confidence Bound (QNTK-UCB) algorithm, a novel algorithm that leverages the Quantum Neural Tangent Kernel (QNTK) to address these limitations. By freezing the QNN at a random initialization and utilizing its static QNTK as a kernel for ridge regression, QNTK-UCB bypasses the unstable training dynamics inherent in explicit parameterized quantum circuit training while fully exploiting the unique quantum inductive bias. For a time horizon $T$ and $K$ actions, our theoretical analysis reveals a significantly improved parameter scaling of $Ω((TK)^3)$ for QNTK-UCB, a substantial reduction compared to $Ω((TK)^8)$ required by classical NeuralUCB algorithms for similar regret guarantees. Empirical evaluations on non-linear synthetic benchmarks and quantum-native variational quantum eigensolver tasks demonstrate QNTK-UCB's superior sample efficiency in low-data regimes. This work highlights how the inherent properties of QNTK provide implicit regularization and a sharper spectral decay, paving the way for achieving ``quantum advantage'' in online learning.
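The sketch below illustrates the kernel-UCB structure the algorithm is built on, with a frozen random feature map standing in for the quantum neural tangent kernel; the actual QNTK, circuit ansatz, confidence parameter, and reward model of the paper are not reproduced here, so treat `features`, `beta`, and the toy bandit loop as illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen QNTK: the kernel induced by a randomly initialized,
# never-trained feature map (the paper's kernel comes from a frozen QNN instead).
W = rng.normal(size=(64, 4))
def features(x):
    return np.tanh(W @ x)

def kernel(x1, x2):
    return features(x1) @ features(x2)

def kernel_ucb_round(contexts, X_hist, y_hist, lam=1.0, beta=1.0):
    """Pick an arm by kernel ridge regression plus an upper-confidence bonus."""
    if not X_hist:
        return int(rng.integers(len(contexts)))
    reg = kernel_matrix = np.array([[kernel(a, b) for b in X_hist] for a in X_hist])
    reg = kernel_matrix + lam * np.eye(len(X_hist))
    alpha = np.linalg.solve(reg, np.array(y_hist))
    reg_inv = np.linalg.inv(reg)
    scores = []
    for x in contexts:
        kx = np.array([kernel(x, b) for b in X_hist])
        mu = kx @ alpha                                  # ridge-regression estimate
        var = max(kernel(x, x) - kx @ reg_inv @ kx, 0.0) # predictive uncertainty
        scores.append(mu + beta * np.sqrt(var))
    return int(np.argmax(scores))

# Toy bandit loop: the reward depends nonlinearly on the chosen context.
X_hist, y_hist = [], []
for t in range(50):
    contexts = rng.normal(size=(5, 4))                   # K = 5 candidate arms
    a = kernel_ucb_round(contexts, X_hist, y_hist)
    reward = np.sin(contexts[a].sum()) + 0.1 * rng.normal()
    X_hist.append(contexts[a])
    y_hist.append(reward)
```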
Quantum key distribution without authentication and information leakage
This paper proposes a new quantum key distribution (QKD) protocol that eliminates the need for separate authentication mechanisms and removes information leakage from public classical processing steps. The approach uses two additional protocol keys and avoids public classical steps entirely, achieving higher security and key rates than conventional QKD.
Key Contributions
- Eliminates need for separate authentication in QKD protocols
- Removes information leakage from public classical post-processing
- Achieves higher key rates with reusable protocol keys
View Full Abstract
Quantum key distribution (QKD) is the most widely studied quantum cryptographic model that exploits quantum effects to achieve information-theoretically secure key establishment. Conventional QKD contains public classical post-processing steps that require authentication to prevent impersonation and maintain security. However, a major limitation of QKD is that it cannot perform authentication by itself, and thus requires a separate authentication mechanism. In addition, these public classical steps also leak information, which subjects QKD to additional attack strategies and reduces the final key rate. In this work, we propose a new QKD variant that removes the need for a separate authentication mechanism, eliminates information leakage, and achieves a substantially higher key rate. By having two more protocol keys than conventional QKD and no public classical steps, our design achieves (almost) perfect information-theoretic security while keeping the protocol keys reusable.
Quantum-enhanced long short-term memory with attention for spatial permeability prediction in oilfield reservoirs
This paper develops a quantum-enhanced machine learning model that combines quantum circuits with traditional neural networks to predict oil reservoir properties like permeability. The quantum-classical hybrid approach shows improved accuracy compared to purely classical methods when tested on geological data.
Key Contributions
- First application of quantum-enhanced LSTM networks to subsurface spatial prediction
- Development of two quantum circuit architectures (QLSTMA-SG and QLSTMA-IG) showing 19-20% improvement in prediction accuracy over classical methods
View Full Abstract
Spatial prediction of reservoir parameters, especially permeability, is crucial for oil and gas exploration and development. However, the wide range and high variability of permeability prevent existing methods from providing reliable predictions. For the first time in subsurface spatial prediction, this study presents a quantum-enhanced long short-term memory with attention (QLSTMA) model that incorporates variational quantum circuits (VQCs) into the recurrent cell. Using quantum entanglement and superposition principles, the QLSTMA significantly improves the ability to predict complex geological parameters such as permeability. Two quantization structures, QLSTMA with Shared Gates (QLSTMA-SG) and with Independent Gates (QLSTMA-IG), are designed to investigate and evaluate the effects of quantum structure configurations and the number of qubits on model performance. Experimental results demonstrate that the 8-qubit QLSTMA-IG model significantly outperforms the traditional long short-term memory with attention (LSTMA), reducing Mean Absolute Error (MAE) by 19% and Root Mean Squared Error (RMSE) by 20%, with particularly strong performance in regions featuring complex well-logging data. These findings validate the potential of quantum-classical hybrid neural networks for reservoir prediction, indicating that increasing the number of qubits yields further accuracy gains despite the reliance on classical simulations. This study establishes a foundational framework for the eventual deployment of such models on real quantum hardware and their extension to broader applications in petroleum engineering and geoscience.
Q-based, objective-field model for wave-function collapse: Analyzing measurement on a macroscopic superposition state
This paper proposes a Q-based objective-field model to explain quantum wave-function collapse during measurement by analyzing how microscopic quantum superpositions coupled to macroscopic meters evolve through forward-backward stochastic differential equations. The authors argue that measurement outcomes are determined when the coupling is complete and describe collapse as a two-stage amplification process that explains Born's rule while maintaining consistency with macroscopic realism.
Key Contributions
- Proposes a Q-based objective-field model that provides a two-stage explanation for wave-function collapse during quantum measurements
- Demonstrates how the model maintains consistency with macroscopic realism while explaining Born's rule through amplification processes
View Full Abstract
The measurement problem remains unaddressed in modern physics, with an array of proposed solutions but as of yet no agreed resolution. In this paper, we examine measurement using the Q-based, objective-field model for quantum mechanics. Schrodinger considered a microscopic system prepared in a superposition of states which is then coupled to a macroscopic meter. We analyze the entangled meter and system, and measurements on it, by solving forward-backward stochastic differential equations for real amplitudes $x(t)$ and $p(t)$ that correspond to the phase-space variables of the Q function of the system at a time $t$. We model the system and meter as single-mode fields, and measurement of $\hat{x}$ by amplification of the amplitude $x(t)$. Our conclusion is that the outcome for the measurement is determined at (or by) the time $t_{m}$, when the coupling to the meter is complete, the meter states being macroscopically distinguishable. There is consistency with macroscopic realism. By evaluating the distribution of the amplitudes $x$ and $p$ postselected on a given outcome of the meter, we show how the $Q$-based model represents a more complete description of quantum mechanics: The variances associated with amplitudes $x$ and $p$ are too narrow to comply with the uncertainty principle, ruling out that the distribution represents a quantum state. We conclude that the collapse of the wavefunction occurs as a two-stage process: First there is an amplification that creates branches of amplitudes $x(t)$ of the meter, associated with distinct eigenvalues. The outcome of measurement is determined by $x(t)$ once amplified, explaining Born's rule. Second, the distribution that determines the final collapse is the state inferred for the system conditioned on the outcome of the meter: information is lost about the meter, in particular, about the complementary variable $p$.
Multiparameter quantum estimation with a uniformly accelerated Unruh-DeWitt detector
This paper studies how to simultaneously estimate multiple parameters using a quantum detector moving with constant acceleration in vacuum, finding that standard quantum estimation bounds are too loose and that tighter bounds like the Nagaoka bound provide better accuracy limits. The research shows that adding boundaries to the system improves estimation precision.
Key Contributions
- Demonstrated that quantum Cramér-Rao bound fails to provide tight error bounds for multiparameter estimation with Unruh-DeWitt detectors
- Showed that Nagaoka bound provides the tightest achievable error bounds among considered bounds and that boundary conditions systematically improve estimation precision
View Full Abstract
The uniformly accelerated Unruh-DeWitt detector serves as a fundamental model in relativistic quantum metrology. While previous studies have mainly concentrated on single-parameter estimation via the quantum Cramér-Rao bound, the multi-parameter case remains significantly underexplored. In this paper, we investigate the multiparameter estimation for a uniformly accelerated Unruh-DeWitt detector coupled to a vacuum scalar field in both bounded and unbounded Minkowski vacuum. Our analysis reveals that the quantum Cramér-Rao bound fails to provide a tight error bound for the two-parameter estimation involving the initial phase and weight parameters. For this reason, we numerically compute two tighter error bounds, the Holevo Cramér-Rao bound and the Nagaoka bound, based on a semidefinite program. Notably, our results demonstrate that the Nagaoka bound yields the tightest error bound among all the considered error bounds, consistent with the general hierarchy of multiparameter quantum estimation. In the case with a boundary, we observe that the introduction of a boundary systematically reduces the values of both the Holevo Cramér-Rao bound and the Nagaoka bound, indicating an improvement in the attainable estimation precision. These results offer valuable insights on and practical guidance for advancing multiparameter estimation in relativistic contexts.
Stable boundary modes for fragile topology from spontaneous PT-symmetry breaking
This paper shows how non-Hermitian effects (loss and gain) in parity-time symmetric systems can create stable topological edge modes in materials where such modes are normally unstable due to fragile topology. The authors demonstrate that spontaneous PT-symmetry breaking can convert fragile topological phenomena into robust topological states with protected boundary modes.
Key Contributions
- Demonstration that PT-symmetry breaking can stabilize edge modes in fragile topological systems
- Extension of anomaly cancellation concepts to non-Hermitian systems for protecting topological modes
View Full Abstract
Two-dimensional topological insulators protected by nonlocal symmetries or with fragile topology usually do not admit robust in-gap edge modes due to the incompatibility between the symmetry and the boundary. Here, we show that in a parity-time (PT) symmetric system robust in-gap topological edge modes can be stably induced by non-Hermitian couplings that spontaneously break the PT symmetry of the eigenstates. The topological edge modes traverse the imaginary spectral gap between a pair of fragile topological bands, which is opened by the presence of the non-Hermitian perturbation. We demonstrate that the net number of resulting in-gap modes is protected by an operator version of anomaly cancellation that extends beyond the Hermitian limit. The results imply that loss and gain can in principle drive fragile topological phenomena to stable topological phenomena.
Localization of joint quantum measurements on $\mathbb{C}^d \otimes \mathbb{C}^d$ by entangled resources with Schmidt number at most $d$
This paper studies quantum measurements that can be performed using only local operations and shared entanglement, providing mathematical characterizations of which measurements are possible under these constraints. The work resolves theoretical questions about the limitations of non-adaptive quantum measurement protocols compared to adaptive ones.
Key Contributions
- Complete characterization of two-qubit rank-1 PVMs that can be localized with two-qubit entanglement, resolving a conjecture by Gisin and Del Santo
- Protocol-independent characterization showing that rank-1 PVMs with maximal Schmidt rank can be localized using entanglement of Schmidt number at most d if and only if they form maximally entangled bases corresponding to nice unitary error bases
View Full Abstract
Localizable measurements are joint quantum measurements that can be implemented using only non-adaptive local operations and shared entanglement. We provide a protocol-independent characterization of localizable projection-valued measures (PVMs) by exploiting algebraic structures that any such measurement must satisfy. We first show that a rank-1 PVM on $\mathbb{C}^d\otimes\mathbb{C}^d$ containing an element with the maximal Schmidt rank can be localized using entanglement of a Schmidt number at most $d$ if and only if it forms a maximally entangled basis corresponding to a nice unitary error basis. This reveals strong limitations imposed by non-adaptive local operations, in contrast to the adaptive setting where any joint measurement is implementable. We then completely characterize two-qubit rank-1 PVMs that can be localized with two-qubit entanglement, resolving a conjecture of Gisin and Del Santo, and finally extend our characterization to ideal two-qudit measurements, strengthening earlier results.
Further Improving the Decoy State Quantum Key Distribution Protocol with Advantage Distillation
This paper improves the security analysis of quantum key distribution protocols by developing better bounds on Eve's information when vacuum states are used in classical advantage distillation, leading to increased transmission distances and noise tolerance for practical quantum cryptography.
Key Contributions
- Derived improved security proof for classical advantage distillation applied to decoy-state BB84 protocol
- Developed tighter bounds on quantum entropy for vacuum photon events
- Demonstrated increased maximum distances and noise tolerances for practical QKD systems
View Full Abstract
In this paper, we revisit the application of classical advantage distillation (CAD) to the decoy-state BB84 protocol. Prior work has shown that CAD can greatly improve maximal distances and noise tolerances of the practical decoy state protocol. However, past work in deriving key-rate bounds for this protocol with CAD has assumed a trivial bound on the quantum entropy whenever Alice sends a vacuum state in a CAD block (i.e., the entropy of such blocks is taken to be zero). Since such rounds contribute negatively to the error-correction leakage, this results in a correct, though sub-optimal, bound. Here, we derive a new proof of security for CAD applied to the decoy state BB84 protocol, computing a bound on Eve's uncertainty in all possible single and vacuum photon events. We use this to derive a new asymptotic key-rate bound which, we show, outperforms prior work, allowing for increased distances and noise tolerances.
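As background, classical advantage distillation in its standard repetition-code form can be simulated in a few lines. The sketch below is generic CAD, not the paper's security analysis or its vacuum-event bookkeeping; for i.i.d. errors the accepted-block error rate falls roughly as Q^b / (Q^b + (1-Q)^b) with block size b, which the Monte Carlo reproduces.

```python
import numpy as np

rng = np.random.default_rng(1)

def cad_round(qber, block_size, n_blocks):
    """Monte Carlo of repetition-code classical advantage distillation (CAD)."""
    alice = rng.integers(0, 2, (n_blocks, block_size))
    flips = (rng.random((n_blocks, block_size)) < qber).astype(int)
    bob = alice ^ flips                              # Bob's noisy copy of the block
    c = rng.integers(0, 2, n_blocks)                 # Alice's secret bit per block
    msg = alice ^ c[:, None]                         # public announcement
    test = msg ^ bob                                 # equals c XOR per-bit error
    accept = np.all(test == test[:, :1], axis=1)     # keep only constant blocks
    bob_bit = test[:, 0]                             # Bob's inferred bit
    err = np.mean(bob_bit[accept] != c[accept]) if accept.any() else float("nan")
    return accept.mean(), err

for b in (1, 2, 4):
    acc, err = cad_round(qber=0.10, block_size=b, n_blocks=200_000)
    print(f"block size {b}: acceptance {acc:.3f}, post-CAD error rate {err:.4f}")
```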
Deep learning parameter estimation and quantum control of single molecule
This paper develops machine learning methods to estimate physical parameters of single molecules at room temperature using optical signals, with the goal of improving coherent quantum control of molecular systems. The researchers compare optimization-based and neural network approaches for parameter estimation using two-photon absorption measurements.
Key Contributions
- Development of deep learning approach for quantum parameter estimation in single molecules
- Comparison of optimization vs neural network methods for coherent control parameter inference
- Demonstration of robust parameter estimation for quantum control at room temperature
View Full Abstract
Coherent control, a central concept in physics and chemistry, has sparked significant interest due to its ability to fine-tune interference effects in atoms and individual molecules for applications ranging from light-harvesting complexes to molecular qubits. However, precise characterization of the system's dissipative dynamics is required for its implementation, especially at high temperature. In a quantum control experiment, this means learning system-bath parameters and driving coupling strengths. Here, we demonstrate how to infer key physical parameters of a single molecule driven by spectrally modulated pulses at room temperature. We develop and compare two computational approaches based on two-photon absorption photoluminescence signals: an optimization-based minimization scheme and a feed-forward neural network. The robustness of our approach highlights the importance of reliable parameter estimation in designing effective coherent control protocols. Our results have direct applications in ultrafast spectroscopy, quantum materials and technology.
Compressed Qubit Noise Spectroscopy: Piecewise-Linear Modeling and Rademacher Measurements
This paper develops improved methods for characterizing noise in quantum systems using random pulse sequences. It introduces two advances: a new mathematical approach that can reconstruct more realistic piecewise-linear noise patterns, and a simplified experimental technique using pseudorandom sequences that are easier to implement.
Key Contributions
- Extended noise spectroscopy method using total generalized variation (TGV) norm regularizer to reconstruct piecewise-linear noise spectra with finer resolution
- Introduction of Rademacher measurements using pseudorandom pulse sequences for simplified experimental implementation while maintaining reconstruction accuracy
View Full Abstract
Random pulse sequences are a powerful method for qubit noise spectroscopy, enabling efficient reconstruction of sparse noise spectra. Here, we advance this method in two complementary directions. First, we extend the method using a regularizer based on the total generalized variation (TGV) norm, in order to reconstruct a larger class of noise spectra, namely piecewise-linear noise spectra, which more realistically model many physical systems. We show through numerical simulations that the new method resolves finer spectral features, while maintaining an order-of-magnitude speedup over conventional approaches to noise spectroscopy. Second, we simplify the experimental implementation of the method, by introducing Rademacher measurements for reconstructing sparse noise spectra. These measurements use pseudorandom pulse sequences that can be generated in real time from a short random seed, reducing experimental complexity without compromising reconstruction accuracy. Together, these developments broaden the reach of random pulse sequences for accurate and efficient noise characterization in realistic quantum systems.
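A minimal sketch of the Rademacher-measurement idea, under the assumption of a simple dephasing filter-function convention (normalizations differ across the literature): a pseudorandom ±1 switching function is generated from a short seed, and its filter function determines which spectral components of the noise a given measurement probes.

```python
import numpy as np

def rademacher_switching(seed, n_segments):
    """Pseudorandom +/-1 switching function reproducible from a short seed."""
    rng = np.random.default_rng(seed)
    return 2 * rng.integers(0, 2, n_segments) - 1

def filter_function(y, dt, omegas):
    """|Fourier transform of the switching function|^2 (one common convention)."""
    t = (np.arange(len(y)) + 0.5) * dt
    ft = np.array([np.sum(y * np.exp(1j * w * t)) * dt for w in omegas])
    return np.abs(ft) ** 2

y = rademacher_switching(seed=42, n_segments=128)    # regenerated from the seed alone
omegas = np.linspace(0.1, 50.0, 400)
F = filter_function(y, dt=0.05, omegas=omegas)
# Up to convention, the measured decoherence is an overlap integral of the noise
# spectrum S(omega) with F(omega), so many random sequences give linear equations
# relating measurement outcomes to samples of S(omega).
```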
Superextensive charging speeds in a correlated quantum charger
This paper introduces quantum chargers - interacting quantum systems that can transfer energy between two drives with enhanced efficiency. The researchers show that long-range interactions enable superlinear scaling of charging power with system size, outperforming non-interacting parallel units.
Key Contributions
- Demonstration of superextensive charging speeds in correlated quantum systems that scale superlinearly with system size
- Theoretical analysis using driven Lipkin-Meshkov-Glick model and power-law interacting spin chains showing collective steady-state charging modes
- Identification of critical system size limits and proposal for experimental verification in trapped-ion systems
View Full Abstract
We define a quantum charger as an interacting quantum system that transfers energy between two drives. The key figure of merit characterizing a charger is its charging power. Remarkably, the presence of long-range interactions within the charger can induce a collective steady-state charging mode that depends superlinearly on the size of the charger, exceeding the performance of noninteracting, parallel units. Using the driven Lipkin-Meshkov-Glick model and power-law interacting spin chains, we show that this effect persists up to a critical system size set by the breakdown of the high-frequency regime. We discuss optimal work output as well as experimentally accessible initial states. The superlinear charging effect can be probed in trapped-ion experiments, and positions interacting Floquet systems as promising platforms for enhanced energy conversion.
Probing Dark Matter-Electron Interactions with Superconducting Qubits
This paper uses superconducting transmon qubits as dark matter detectors by measuring unexplained changes in their decoherence times. The researchers propose that interactions between dark matter particles and electrons could cause these decoherence effects, allowing them to set new constraints on dark matter properties.
Key Contributions
- Novel application of transmon qubits as dark matter detectors
- Most stringent laboratory-based constraints on dark matter-electron scattering at keV scale
View Full Abstract
Quantum device measurements are powerful tools to probe dark matter interactions. Among these, transmon qubits stand out for their ability to suppress external noise while remaining highly sensitive to tiny energy deposits. Ambient galactic halo dark matter interacting with electrons can deposit energy in the qubit, leading to changes in its decoherence time. Recent measurements of transmons have consistently found, in various experimental setups, a residual contribution to the decoherence time unexplained by thermal noise or known external sources. We use such measurements to set the most stringent laboratory-based constraints to date on dark matter-electron scattering at the keV scale and competitive constraints on dark photon absorption.
Gravitational time dilation in quantum clock interferometry with entangled multi-photon states and quantum memories
This paper demonstrates a quantum clock interferometer that uses entangled photons stored in quantum memories at different heights to measure gravitational time dilation effects. The researchers show that using multi-photon entangled states can amplify the gravitational phase shifts by a factor N, making these relativistic effects observable in laboratory settings with height differences of just a few meters.
Key Contributions
- Demonstration that N-photon entangled states provide N-fold enhancement in sensitivity to gravitational time dilation effects
- Identification of practical experimental parameters using Rb/Cs and rare-earth quantum memories that enable gravitational time dilation measurements at meter-scale height differences
View Full Abstract
Gravitational time dilation implies that clocks held at different heights accumulate different proper times. We analyze a memory-assisted quantum clock interferometer in which a frequency-bin photonic clock is stored in two vertically separated quantum memories for a controllable duration, such that the joint state evolves in a quantum superposition of two proper times. After retrieval, the photonic modes interfere in a Hong-Ou-Mandel (HOM) interferometer, for which we derive analytic expressions for the resulting multiphoton detection statistics. Extending this HOM-based scheme from entangled photon pairs to frequency-entangled 2N-photon inputs, we show that the proper-time dependent phase is amplified by a factor N, leading to an N-times faster collapse and revival of the interference signal compared with the two-photon case. Incorporating finite memory efficiency and lifetime, we identify regimes where this modulation remains observable. For parameters compatible with demonstrated Rb and Cs memories and achievable optical frequency separations, the first collapse occurs for height differences on the order of 10-100 m with subsecond to few-second storage times, while suitable rare-earth ion and alkali memory combinations can reduce the required height to the few-metre scale. These results establish near-term laboratory conditions for observing entanglement dynamics driven by gravitational time dilation in a photonic platform.
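For orientation, the two-photon version of the signal described above reduces (up to visibility factors set by memory efficiency and lifetime) to the textbook Hong-Ou-Mandel coincidence probability for a frequency-bin entangled pair, with the proper-time difference set by gravitational time dilation during storage; this is a standard result restated here, not the paper's full multiphoton expression.

```latex
P_{\mathrm{coinc}}(\Delta\tau) \;=\; \tfrac{1}{2}\left[\,1-\cos\!\big((\omega_1-\omega_2)\,\Delta\tau\big)\right],
\qquad
\Delta\tau \;\approx\; \frac{g\,\Delta h}{c^{2}}\,T_{\mathrm{storage}}
```

For the frequency-entangled 2N-photon input, the abstract's N-fold amplification replaces the cosine argument by $N(\omega_1-\omega_2)\Delta\tau$, giving the N-times faster collapse and revival.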
Renormalization Group is the principle behind the Holographic Entropy Cone
This paper establishes a fundamental connection between holographic entropy inequalities and the renormalization group by showing that these inequalities can be understood as constraints on how deep different entanglement wedges penetrate into the bulk spacetime. The work demonstrates that bulk depth geometrically represents CFT energy scales, linking holographic entanglement structure to renormalization group flow.
Key Contributions
- Reformulation of holographic entropy inequalities in terms of entanglement wedge depth comparisons
- Establishment of the connection between bulk geometry depth and CFT renormalization group scales
View Full Abstract
We show that every holographic entropy inequality can be recast in the form: `some entanglement wedges reach deeper in the bulk than some other entanglement wedges.' When the inequality is saturated, the two sets of wedges reach equally deep. Because bulk depth geometrizes CFT scales, the inequalities enforce and protect the holographic Renormalization Group.
Gaussian time-translation covariant operations: structure, implementation, and thermodynamics
This paper provides a rigorous mathematical classification of Gaussian quantum operations that remain unchanged under time translations, revealing that these continuous-variable systems behave very differently from their discrete-variable counterparts in terms of physical implementation and thermodynamic properties.
Key Contributions
- Complete mathematical classification of Gaussian time-translation covariant operations for continuous-variable quantum systems
- Discovery that key results from discrete-variable covariant operations do not hold in Gaussian optical settings, revealing fundamental differences in thermodynamic implementation and asymmetry properties
- Development of comprehensive mathematical and operational toolkits including novel non-extensive asymmetry measures for Gaussian covariant operations
View Full Abstract
Time-translation symmetry strongly constrains physical dynamics, yet systematic characterization for continuous-variable systems lags behind its discrete-variable counterpart. We close this gap by providing a rigorous classification of Gaussian quantum operations that are covariant under time translations, termed Gaussian covariant operations. We show that several key results known for discrete-variable covariant operations break down in the Gaussian optical setting: discrepancies arise in physical and thermodynamic implementation, in the extensivity of asymmetry, and in catalytic advantages. Our results provide comprehensive mathematical and operational toolkits for Gaussian covariant operations, including a peculiar pair of asymmetry measures that are completely non-extensive. Our findings also reveal surprising consequences of the interplay among symmetry, Gaussianity, and thermodynamic constraints, suggesting that real-world scenarios with multiple constraints have a rich structure not accessible from examining individual constraints separately.
Asymptotic freedom, lost: Complex conformal field theory in the two-dimensional $O(N>2)$ nonlinear sigma model and its realization in the spin-1 Heisenberg chain
This paper discovers that two-dimensional O(N) nonlinear sigma models for N>2 have nontrivial fixed points in the complex coupling plane, described by complex conformal field theories. The authors demonstrate this theoretically and confirm it numerically using a non-Hermitian spin-1 Heisenberg chain, showing how dissipative dynamics can prepare long-range entangled quantum states.
Key Contributions
- Discovery of complex conformal field theory fixed points in O(N>2) nonlinear sigma models through analytic continuation
- Numerical verification of CCFT predictions in non-Hermitian spin-1 Heisenberg chains with excellent agreement
- Construction of realistic Lindbladian dynamics for preparing long-range entangled CFT states through engineered dissipation
View Full Abstract
The two-dimensional $O(N)$ nonlinear sigma model (NLSM) is asymptotically free for $N>2$: it exhibits neither a nontrivial fixed point nor spontaneous symmetry-breaking. Here we show that a nontrivial fixed point generically does exist in the $\textit{complex}$ coupling plane and is described by a complex conformal field theory (CCFT). This CCFT fixed point is generic in the sense that it has a single relevant singlet operator, and is thus expected to arise in any non-Hermitian model with $O(N)$ symmetry upon tuning a single complex parameter. We confirm this prediction numerically by locating the CCFT at $N = 3$ in a non-Hermitian spin-1 antiferromagnetic Heisenberg chain, finding good agreement between the complex central charge and scaling dimensions and those obtained by analytic continuation of real fixed points from $N\leq 2$. We further construct a realistic Lindbladian for a spin-1 chain whose no-click dynamics are governed by the non-Hermitian Hamiltonian realizing the CCFT. Since the CCFT vacuum is the eigenstate with the smallest decay rate, the system naturally relaxes under dissipative dynamics toward a CFT state, thus providing a route to preparing long-range entangled states through engineered dissipation.
On the temperature of the quantum black hole
This paper addresses a theoretical discrepancy in quantum black hole physics, specifically a factor-of-two mismatch in Hawking temperature calculations that arises when considering the parallel universe structure of black holes. The authors propose that this temperature discrepancy can be resolved by using a generalized thermofield double structure for the quantum state describing the black hole's density matrix.
Key Contributions
- Identifies a factor-of-two discrepancy in Hawking temperature calculations related to black hole horizon physics
- Proposes that generalized thermofield double states can resolve the temperature mismatch in quantum black hole thermodynamics
View Full Abstract
A nontrivial peculiarity of general relativity is that when the horizon region of black holes is rendered harmless, the exterior doubles, resulting in a causally disconnected parallel universe. This intricacy plays a central role in 't Hooft's unitarity arguments, emphasising an exact identification between the physical universe and its duplicate on the other side of the horizon. However, it leads to another tension in the form of a factor of two correction in Hawking's temperature. This discrepancy is concerning because the Rindler temperature is universal and complies with the Bekenstein-Hawking entropy. We demonstrate that the mismatch in the Boltzmann factor gets fixed if the state that forms the corresponding density matrix adopts a generalised thermofield double structure. That leaves room for some interesting discussion.
Binarisation-loophole-free observation of high-dimensional quantum nonlocality
This paper demonstrates quantum nonlocality using four-dimensional photonic entanglement while closing a technical loophole that could allow classical explanations. The researchers use true multi-outcome measurements rather than collections of binary measurements to provide stronger evidence for genuinely high-dimensional quantum entanglement.
Key Contributions
- Closed the binarisation loophole in high-dimensional Bell inequality tests using true multi-outcome measurements
- Demonstrated genuinely high-dimensional quantum nonlocality with four-dimensional photonic path-mode entanglement
- Provided experimental violations strong enough to rule out lower-dimensional entanglement explanations
View Full Abstract
Bell inequality tests based on high-dimensional entanglement usually require measurements that can resolve multiple possible outcomes. However, the implementation of high-dimensional multi-outcome measurements is often only emulated via a collection of ``click or no-click'' measurements. This reduction of multi-outcome measurements to binary-outcome measurements opens a loophole in high-dimensional tests of Bell inequalities, which can be exploited by local hidden variable models [Tavakoli et al., Phys. Rev. A 111, 042433 (2025)]. Here, we close this loophole by using four-dimensional photonic path-mode entanglement and multi-outcome detection. We test both the well-known Collins-Gisin-Linden-Massar-Popescu inequality and a related Bell inequality tailored for maximally entangled states in high dimensions. We observe violations that are large enough to also rule out any quantum model based on entanglement of lower dimension, thereby demonstrating genuinely high-dimensional nonlocality free of the binarisation loophole.
A Length-Gauge Origin-Invariant Approach to Vibrational Circular Dichroism Spectra without Gauge-Including Atomic Orbitals
This paper develops a new computational method for calculating vibrational circular dichroism (VCD) spectra that avoids the need for gauge-including atomic orbitals while maintaining accuracy. The authors benchmark their length-gauge origin-invariant approach against traditional methods using several chiral molecules and find it produces reliable results with large enough basis sets.
Key Contributions
- Extension of length-gauge origin-invariant approach to vibrational circular dichroism calculations
- Comprehensive benchmarking against GIAO and other methods showing comparable accuracy for large basis sets
View Full Abstract
We have extended the origin-invariant length gauge (LG(OI)) approach -- originally developed by Caricato and co-workers for optical rotation (OR) and electronic circular dichroism (ECD) -- to vibrational circular dichroism (VCD). This approach avoids the need for gauge-including atomic orbitals (GIAOs), which are typically required to circumvent the unphysical dependence of the CD rotatory strengths on the arbitrary choice of coordinate origin for length gauge (LG) computations. Benchmark VCD spectra are presented for (P)-hydrogen peroxide, (S)-methyloxirane, (1R, 5R)-α-pinene, and (1R, 4R)-camphor using Hartree-Fock (HF) theory and density functional theory (DFT) methods across a range of basis sets and compared to those obtained from LG, velocity-gauge (VG), and GIAO computations. These analyses show that for VCD the LG(OI) approach does not converge to the basis-set limit as rapidly as the GIAO approach, but does yield similar quality spectra as GIAO for all major VCD peaks for quadruple-zeta-quality basis sets. The LG(OI) and VG VCD spectra are less reliable compared to GIAOs for smaller basis sets.
Mechanisms and Opportunities for Tunable High-Purity Single Photon Emitters: A Review of Hybrid Perovskites and Prospects for Bright Squeezed Vacuum
This paper reviews single-photon emitters for quantum technologies, focusing on hybrid perovskite quantum dots as tunable sources that can operate at room temperature. The authors propose a new classification framework and explore bright squeezed vacuum states as a promising approach for generating high-purity photons for quantum applications.
Key Contributions
- Mechanism-based classification framework for single-photon emitters
- Comparative analysis of hybrid perovskite quantum dots as tunable SPE platforms
- Introduction of bright squeezed vacuum states for scalable photon generation
- Performance framework to guide development of deterministic single-photon sources
View Full Abstract
Single-photon emitters (SPEs) are central to quantum communication, computing, and metrology, yet their development remains constrained by trade-offs in purity, indistinguishability, and tunability. This review presents a mechanism-based classification of SPEs, offering a physics-oriented framework to clarify the performance limitations of conventional sources, including quantum emitters and nonlinear optical processes. Particular attention is given to hybrid organic-inorganic perovskite quantum dots (HOIP QDs), which provide size- and composition-tunable emission with narrow linewidths and room-temperature operation. Through comparative analysis of physical mechanisms and performance metrics, we show how HOIP QDs may address key limitations of established SPE platforms. Recognizing the constraints of current deterministic sources, we introduce a performance framework to guide the development of scalable SPEs, and examine the theoretical potential of bright squeezed vacuum (BSV) states, discussing how BSV mechanisms could serve as a promising avenue for multiplexable, high-purity photon generation beyond conventional heralded schemes. The review concludes by outlining future directions for integrating HOIP- and BSV-based concepts into scalable quantum photonic architectures.
Schwarz maps with symmetry
This paper studies quantum information maps (CPTP, PPT, and Schwarz maps) through the lens of symmetry theory, classifying their structure when they respect certain group symmetries like unitary transformations. The research provides explicit characterizations of when these maps satisfy important quantum information properties and proves several conjectures about positive partial transpose operations.
Key Contributions
- Complete classification of U(n)-equivariant Schwarz maps on matrix algebras with explicit conditions for complete positivity
- Proof that several families of symmetric maps satisfy the PPT² conjecture using direct symmetry arguments
- Systematic framework for analyzing equivariant quantum maps between C*-algebras using group theory
View Full Abstract
The theory of symmetry of quantum mechanical systems is applied to study the structure and properties of several classes of relevant maps in quantum information theory: CPTP, PPT and Schwarz maps. First, we develop the general structure that equivariant maps $Φ:\mathcal A \to \mathcal B$ between $C^\ast$-algebras satisfy. Then, we undertake a systematic study of unital, Hermiticity-preserving maps that are equivariant under natural unitary group actions. Schwarz maps satisfy Kadison's inequality $Φ(X^\ast X) \geq Φ(X)^\ast Φ(X)$ and form an intermediate class between positive and completely positive maps. We completely classify $U(n)$-equivariant maps on $M_n(\mathbb C)$ and determine those that are completely positive and Schwarz. Partial classifications are then obtained for the weaker $DU(n)$-equivariance (diagonal unitary symmetry) and for tensor-product symmetries $U(n_1) \otimes U(n_2)$. In each case, the parameter regions where $Φ$ is Schwarz or completely positive are described by explicit algebraic inequalities, and their geometry is illustrated. Finally, we further show that the $U(n)$-equivariant family satisfies $\mathrm{PPT} \iff \mathrm{EB}$, while the $DU(2)$, symmetric $DU(3)$, $U(2) \otimes U(2)$, and $U(2) \otimes U(3)$ families obey the $\mathrm{PPT}^2$ conjecture through a direct symmetry argument. These results reveal how group symmetry controls the structure of non-completely positive maps and provide new concrete examples where the $\mathrm{PPT}^2$ property holds.
Exact Mobility Edges in a Disorder-Free Dimerized Stark Lattice with Effective Unbounded Hopping
This paper proposes a theoretical model for a disorder-free quantum system that exhibits mobility edges - sharp boundaries separating extended and localized electronic states. The authors achieve this by applying a linear potential to only one sublattice of a dimerized chain, creating effective unbounded hopping that circumvents theoretical no-go theorems.
Key Contributions
- Theoretical demonstration of exact mobility edges in disorder-free systems using unbounded hopping
- Analytical derivation of bulk spectrum and identification of sharp mobility edge separating extended and localized states
- Proposal for experimental realization using photonic frequency synthetic dimensions with robustness analysis
View Full Abstract
We propose a disorder-free one-dimensional single-particle Hamiltonian hosting an exact mobility edge (ME), placing the system outside the assumptions of no-go theorems regarding unbounded potentials. By applying a linear Stark potential selectively to one sublattice of a dimerized chain, we generate an effective Hamiltonian with unbounded, staggered hopping amplitudes. The unbounded nature of the hopping places the model outside the scope of the Simon-Spencer theorem, while the staggered scaling allows it to evade broader constraints on Jacobi matrices. We analytically derive the bulk spectrum in reciprocal space, identifying a sharp ME where the energy magnitude equals the inter-cell hopping strength. This edge separates a continuum of extended states from two distinct localized branches: a standard unbounded Wannier-Stark ladder and an anomalous bounded branch accumulating at the ME. The existence of extended states is supported by finite-size scaling of the inverse participation ratio up to system sizes $L \sim 10^9$. Furthermore, we propose an experimental realization using photonic frequency synthetic dimensions. Our numerical results indicate that the ME is robust against potential experimental imperfections, including frequency detuning errors and photon loss, establishing a practical path for observing MEs in disorder-free systems.
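A minimal numerical sketch of the model class described above (a dimerized chain with a linear Stark potential applied to one sublattice only; the hoppings, field strength, and system size below are arbitrary choices, not the paper's) shows how the inverse participation ratio can be scanned against energy to look for a mobility edge.

```python
import numpy as np

def stark_dimer_chain(n_cells, t1, t2, F):
    """Dimerized chain with a linear Stark potential on one sublattice only."""
    N = 2 * n_cells
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2   # intra/inter-cell hopping
    for cell in range(n_cells):
        H[2 * cell + 1, 2 * cell + 1] = F * cell               # Stark tilt on B sites only
    return H

H = stark_dimer_chain(n_cells=400, t1=1.0, t2=0.5, F=0.05)      # illustrative parameters
E, V = np.linalg.eigh(H)
ipr = np.sum(np.abs(V) ** 4, axis=0)      # inverse participation ratio per eigenstate
# Extended states have IPR ~ 1/L while localized states have IPR = O(1); scanning
# ipr against E is the kind of diagnostic used to locate a mobility edge, although
# this naive finite-size sketch does not by itself reproduce the paper's exact result.
```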
Topological Obstructions for Quantum Adiabatic Algorithms: Evidence from MaxCut Instances
This paper analyzes quantum adiabatic optimization algorithms by studying the global behavior of quantum states throughout the computation, rather than just looking at energy gaps. The authors show that even when these algorithms work well, the quantum states must undergo complex rearrangements that create unavoidable computational constraints, using MaxCut optimization problems as examples.
Key Contributions
- Identification of topological obstructions in quantum adiabatic algorithms based on global spectral flow rather than local energy gaps
- Quantitative analysis of spectral congestion and band permutations in degenerate optimization problems using MaxCut instances
View Full Abstract
Quantum adiabatic algorithms are commonly analyzed through local spectral properties of an interpolating Hamiltonian, most notably the minimum energy gap. While this perspective captures an important constraint on adiabatic runtimes, it does not fully describe the global structure of spectral evolution in optimization problems with degenerate solution manifolds. In this work, we show that degeneracy alone imposes unavoidable global constraints on spectral flow, even in instances where adiabatic algorithms succeed with high probability. Focusing on digitized quantum adiabatic evolutions, we analyze the eigenphases of the cumulative unitary operator generated along the interpolation path. By explicitly tracking eigenphase trajectories, we demonstrate that multiple spectral bands are forced to interact, braid, and permute before coalescing into a degenerate manifold at the end of the evolution. This global reordering manifests as persistent spectral congestion and nontrivial band permutations that cannot be removed by increasing evolution time or refining the digitization. Using MaxCut instances with controlled degeneracy as a concrete setting, we extract quantitative diagnostics of spectral congestion and explicitly compute the induced band permutations. Our results show that successful adiabatic optimization can coexist with complex and constrained spectral flow, revealing a form of topological obstruction rooted in the global connectivity of eigenstates rather than in local gap closures. These findings highlight intrinsic limitations of gap-based analyses and motivate spectral-flow-based diagnostics for understanding adiabatic algorithms in degenerate optimization landscapes.
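The diagnostic described above can be prototyped directly: the sketch below digitizes an adiabatic interpolation for a toy 3-vertex MaxCut instance (a triangle, chosen here only because its solutions are degenerate) and records the eigenphases of the cumulative unitary at each step. The step count, step size, and instance are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from functools import reduce

n = 3
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op(single, site):
    """Embed a single-qubit operator at a given site of an n-qubit register."""
    return reduce(np.kron, [single if k == site else I2 for k in range(n)])

H_mixer = -sum(op(X, k) for k in range(n))
edges = [(0, 1), (1, 2), (0, 2)]                      # triangle graph
H_problem = -sum(0.5 * (np.eye(2**n) - op(Z, i) @ op(Z, j)) for i, j in edges)
# Ground states of H_problem are the (degenerate) MaxCut bitstrings.

steps, dt = 100, 0.1
U = np.eye(2**n, dtype=complex)
phase_history = []
for k in range(1, steps + 1):
    s = k / steps
    H = (1 - s) * H_mixer + s * H_problem
    w, V = np.linalg.eigh(H)
    U = (V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T) @ U   # exact step at s_k
    phase_history.append(np.sort(np.angle(np.linalg.eigvals(U))))
# Plotting phase_history against s exposes how eigenphase bands approach, interact,
# and merge into the degenerate manifold at the end of the digitized evolution.
```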
A General Class of Functionals for Certifying Quantum Incompatibility
This paper develops a mathematical framework for detecting quantum incompatibility - situations where quantum measurements or states cannot be described by classical probability theory. The authors create optimization-free methods to certify when quantum systems exhibit fundamentally non-classical behavior across steering, measurement incompatibility, and entanglement.
Key Contributions
- Development of optimization-free nonlinear witnesses for quantum incompatibility based on convex functionals
- Proof that witnesses are nontrivial when functionals are non-affine on extremal points
- Extension of framework to measurement and instrument incompatibility with genuine incompatibility monotones
- Demonstration using Wigner-Yanase skew information and ℓ₂-type coherence functionals
View Full Abstract
Quantum steering, measurement incompatibility, and instrument incompatibility have recently been recognized as unified manifestations of quantum incompatibility. Building on this perspective, we develop a general framework for constructing optimization-free, nonlinear incompatibility witnesses based on convex functionals, valid in arbitrary dimensions. We prove that these witnesses are nontrivial precisely when the underlying functional is non-affine on extremal points (e.g., pure states for ensembles). For pure bipartite states, the witnesses yield lower bounds on entanglement measures, thereby outperforming most linear steering inequalities in the pure-state regime. Moreover, the construction extends in full generality to certify measurement and instrument incompatibility, where the witnesses act as genuine incompatibility monotones. We demonstrate the versatility of our approach with two operationally relevant functionals: the Wigner-Yanase skew information and an $\ell_{2}$-type coherence functional.
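One of the two functionals named above, the Wigner-Yanase skew information $I(\rho,K) = -\tfrac{1}{2}\,\mathrm{Tr}\,[\sqrt{\rho},K]^2$, is straightforward to evaluate numerically. The sketch below only computes the functional itself and is not the witness construction of the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def skew_information(rho, K):
    """Wigner-Yanase skew information I(rho, K) = -1/2 Tr([sqrt(rho), K]^2)."""
    s = sqrtm(rho)
    comm = s @ K - K @ s
    return float(np.real(-0.5 * np.trace(comm @ comm)))

Z = np.diag([1.0, -1.0])
plus = np.array([[0.5, 0.5], [0.5, 0.5]])      # |+><+|, maximally coherent in the Z basis
mixed = np.diag([0.5, 0.5])                    # diagonal in the Z basis
print(skew_information(plus, Z))    # 1.0: for pure states it equals the variance of Z
print(skew_information(mixed, Z))   # 0.0: no coherence with respect to Z
```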
Quantum AI for Cybersecurity: A hybrid Quantum-Classical models for attack path analysis
This paper explores using hybrid quantum-classical machine learning models for cybersecurity, specifically for detecting network intrusions. The researchers compare classical machine learning approaches with quantum-enhanced feature representations on cybersecurity data, finding that quantum methods show promise when training data is limited.
Key Contributions
- Demonstration of hybrid quantum-classical models for cybersecurity intrusion detection
- Evidence that quantum feature embeddings improve performance when training data is scarce
- Reproducible framework for evaluating quantum advantages in cybersecurity applications
View Full Abstract
Modern cyberattacks are increasingly complex, posing significant challenges to classical machine learning methods, particularly when labeled data is limited and feature interactions are highly non-linear. In this study, we investigate the potential of hybrid quantum-classical learning to enhance feature representations for intrusion detection and to explore possible quantum advantages in cybersecurity analytics. Using the UNSW-NB15 dataset, network traffic is transformed into structured feature vectors through classical preprocessing and normalization. Classical models, including Logistic Regression and Support Vector Machines with linear and RBF kernels, are evaluated on the full dataset to establish baseline performance under large-sample conditions. Simultaneously, a quantum-enhanced pipeline maps classical features into variational quantum circuits via angle encoding and entangling layers, executed on a CPU-based quantum simulator, with the resulting quantum embeddings classified using a classical SVM. Experiments show that while classical models achieve higher overall accuracy with large datasets, quantum-enhanced representations demonstrate superior attack recall and improved class separability when data is scarce, suggesting that quantum feature spaces capture complex correlations inaccessible to shallow classical models. These results highlight the potential of quantum embeddings to improve generalization and representation quality in cybersecurity tasks and provide a reproducible framework for evaluating quantum advantages as quantum hardware and simulators continue to advance.
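As a rough illustration of the embedding pipeline described above, the following self-contained Python sketch angle-encodes normalized features into a small statevector, applies one entangling layer, uses single-qubit Z expectation values as classical features, and trains an SVM on top. The circuit layout, synthetic data, and all parameters are hypothetical stand-ins, not the authors' implementation or the UNSW-NB15 pipeline.

```python
# Minimal sketch of a quantum-embedding pipeline (not the paper's code).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

n_qubits = 4

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def apply_single(state, gate, q):
    # contract a 2x2 gate onto axis q of the statevector
    psi = state.reshape([2] * n_qubits)
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def apply_cnot(state, control, target):
    psi = state.reshape([2] * n_qubits).copy()
    idx = [slice(None)] * n_qubits
    idx[control] = 1
    block = psi[tuple(idx)]
    # X on the target within the control=1 subspace
    psi[tuple(idx)] = np.flip(block, axis=target if target < control else target - 1)
    return psi.reshape(-1)

def embed(x):
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0
    for q in range(n_qubits):
        state = apply_single(state, ry(np.pi * x[q]), q)   # angle encoding
    for q in range(n_qubits):
        state = apply_cnot(state, q, (q + 1) % n_qubits)   # entangling ring
    probs = np.abs(state.reshape([2] * n_qubits)) ** 2
    # <Z_q> values as the classical embedding
    return np.array([probs.sum(axis=tuple(a for a in range(n_qubits) if a != q)) @ [1, -1]
                     for q in range(n_qubits)])

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, n_qubits))
y = (X[:, 0] * X[:, 1] > X[:, 2] * X[:, 3]).astype(int)    # synthetic labels, not intrusion data
Z = np.array([embed(x) for x in X])
Xtr, Xte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```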
PauliEngine: High-Performant Symbolic Arithmetic for Quantum Operations
This paper presents PauliEngine, a high-performance C++ software framework designed to efficiently perform mathematical operations on Pauli strings, which are fundamental building blocks in quantum computing. The tool provides fast symbolic arithmetic for quantum operations and includes a Python interface, making it useful for quantum software development and simulations.
Key Contributions
- High-performance C++ framework for Pauli string operations with binary symplectic representation
- Efficient symbolic arithmetic primitives including multiplication, commutators, and phase tracking
- Scalable backend infrastructure for quantum software tools with demonstrated performance improvements
View Full Abstract
Quantum computation is inherently hybrid, and fast classical manipulation of qubit operators is necessary to ensure scalability in quantum software. We introduce PauliEngine, a high-performance C++ framework that provides efficient primitives for Pauli string multiplication, commutators, symbolic phase tracking, and structural transformations. Built on a binary symplectic representation and optimized bit-wise operations, PauliEngine supports both numerical and symbolic coefficients and is accessible through a Python interface. Runtime benchmarks demonstrate substantial speedups over state-of-the-art implementations. PauliEngine provides a scalable backend for operator-based quantum software tools and simulations.
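To make the binary symplectic representation mentioned above concrete, the following short Python sketch stores a Pauli string as X/Z bit vectors with a power-of-i phase and implements multiplication and a commutation check. It is an illustrative toy, not PauliEngine's C++ implementation or its Python API.

```python
# Minimal sketch of binary-symplectic Pauli arithmetic (not PauliEngine's API):
# a Pauli string on n qubits is stored as bit vectors (x, z) and a phase
# exponent, representing P = i^phase * prod_k X_k^{x_k} Z_k^{z_k}.
import numpy as np

class PauliString:
    def __init__(self, x, z, phase=0):
        self.x = np.asarray(x, dtype=np.uint8)   # X part
        self.z = np.asarray(z, dtype=np.uint8)   # Z part
        self.phase = phase % 4                   # power of i

    def __mul__(self, other):
        # moving Z^z past X^x' picks up (-1)^{z.x'} = i^{2(z.x')}
        extra = (2 * int(np.dot(self.z, other.x))) % 4
        return PauliString(self.x ^ other.x,
                           self.z ^ other.z,
                           self.phase + other.phase + extra)

    def commutes_with(self, other):
        # symplectic form: 0 means commuting, 1 means anticommuting
        return (int(np.dot(self.x, other.z)) + int(np.dot(self.z, other.x))) % 2 == 0

    def __repr__(self):
        label = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
        s = "".join(label[(int(a), int(b))] for a, b in zip(self.x, self.z))
        n_y = int(np.sum(self.x & self.z))       # each XZ factor equals i^3 * Y
        return f"i^{(self.phase + 3 * n_y) % 4} * {s}"

# X * Z on one qubit equals -iY in this convention, and X anticommutes with Z
P = PauliString([1], [0]) * PauliString([0], [1])
print(P, P.commutes_with(PauliString([0], [1])))   # -> i^3 * Y False
```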
Topological States Enabled by Non-local Nonlinearity in Synthetic Dimensions
This paper studies how nonlinear interactions affect topological properties in synthetic quantum systems, specifically examining a Su-Schrieffer-Heeger lattice with long-range interactions. The researchers find that nonlinearity can induce new topological phases and fractional windings even in systems that are topologically trivial without interactions.
Key Contributions
- Development of Bogoliubov nonlinear adiabatic theory for topological systems with nonlocal interactions
- Discovery that nonlocal nonlinearity can induce emergent topological phases with fractional windings in trivial systems
- Identification of nonlinearity-driven topological transitions with characteristic swallowtail band structures
View Full Abstract
The interplay between topology and nonlinearity represents a central challenge in modern physics. Here, we investigate this interplay by considering a synthetic Su-Schrieffer-Heeger lattice with all-to-all nonlocal interactions. We find that the distinctive nonlinearity maintains an effective chiral symmetry and leads to a quantized nonlinear winding and Berry phase, as corroborated by the developed Bogoliubov nonlinear adiabatic theory. Increasing nonlinearity drives a sequence of topological transitions signaled by the appearance of characteristic swallowtail band structures at intermediate interaction strengths and band swapping in the strong nonlinear regime. The band swapping results in quantized fractional windings and double-period Bloch oscillations that are closely related to discrete time crystals. Remarkably, even starting from a topologically trivial linear system, nonlocal nonlinearity can induce an emergent topological phase with fractional windings. Experimentally, our model can be realized using photons in a degenerate optical cavity with Rydberg-mediated interactions. Our results establish a rigorous framework and pave the way for exploring nonlinear topological phenomena and their applications in synthetic quantum platforms.
Simulating Non-Markovian Dynamics in Open Quantum Systems
This paper provides a comprehensive review and unified framework for understanding different computational methods used to simulate quantum systems that interact with their environment, particularly when these interactions have long-lasting memory effects that make the dynamics more complex than standard approximations can handle.
Key Contributions
- Provides a unified theoretical framework connecting previously disparate simulation methods for open quantum systems
- Systematically compares strengths and limitations of different non-Markovian simulation approaches including hierarchical equations of motion, chain-mapping, and stochastic methods
View Full Abstract
Recent advances in quantum technologies and related experiments have created a need for highly accurate, versatile, and computationally efficient simulation techniques for the dynamics of open quantum systems. Long-lived correlation effects (non-Markovianity), system-environment hybridization, and the necessity for accuracy beyond the Born-Markov approximation form particular challenges. Approaches to meet these challenges have been introduced, originating from different fields, such as hierarchical equations of motion, Lindblad-pseudomode formulas, chain-mapping approaches, quantum Brownian motion master equations, stochastic unravelings, and refined quantum master equations. This diversity, while indicative of the field's relevance, has inadvertently led to a fragmentation that hinders cohesive advances and their effective cross-community application to current problems for complex systems. How are different approaches related to each other? What are their strengths and limitations? Here we give a systematic overview and concise discussion addressing these questions. We make use of a unified framework which conveniently allows different schemes to be linked and, in this way, may also catalyze further progress. In line with the state of the art, this framework is formulated not in a fully reduced space of the system but in an extended state space which, in a minimal fashion, includes effective reservoir modes. This in turn offers a comprehensive understanding of existing methods, elucidating their physical interpretations, interconnections, and applicability.
Optical nonlinearity of cold atomic ensemble driven by strong coherent field in a saturation regime
This paper analyzes how cold atoms respond to strong laser fields and weak probe beams, showing that optical nonlinearity can be enhanced in dense atomic media. The work has implications for parametric processes that create entangled photons used in quantum communication.
Key Contributions
- Microscopic analysis of dielectric susceptibility in vector two-level atomic systems under strong coherent driving
- Demonstration that optical nonlinearity can be significantly enhanced by controlling pump strength and atomic density
- Analysis of limitations for quantum communication protocols using parametrically generated entangled photons
View Full Abstract
We present a microscopic analysis and evaluation of the dielectric susceptibility of a dielectric medium consisting of vector-type two-energy-level atoms responding to a weak probe mode when the atoms are driven by a strong coherent field. Each atom, in an environment of others, exists as a quasiparticle further structuring a bulk medium. In the limit of a dilute atomic gas, the dynamics of each atom follows the Mollow-type nonlinear excitation regime, and the medium susceptibility collectivizes the individual atomic responses to the probe mode. We outline how the collective dynamics can be interpolated up to a dense medium, and we argue on general grounds that in such a medium the optical nonlinearity and, in particular, its parametric part could be significantly magnified by manipulating both the coherent pump and the sample density. This indicates certain limitations on the potential capabilities of quantum communication protocols that utilize entangled photons, created by a parametric process, as their main resource of quantum correlations.
Quantum Extreme Reservoir Computing for Phase Classification of Polymer Alloy Microstructures
This paper applies quantum extreme reservoir computing (QERC) to classify microstructure images of polymer alloys, demonstrating how quantum machine learning can be used for materials science applications. The researchers examine how various quantum computing parameters affect classification performance and create phase diagrams showing material behavior transitions.
Key Contributions
- First application of quantum extreme reservoir computing to realistic materials science data
- Systematic analysis of quantum computing parameters' effects on classification performance
- Development of quantum-based phase diagrams for polymer alloy microstructures
View Full Abstract
Quantum machine learning (QML) is expected to offer new opportunities to process high-dimensional data efficiently by exploiting the exponentially large state space of quantum systems. In this work, we apply quantum extreme reservoir computing (QERC) to the classification of microstructure images of polymer alloys generated using self-consistent field theory (SCFT). While previous QML efforts have primarily focused on benchmark datasets such as MNIST, our work demonstrates the applicability of QERC to engineering data with direct materials relevance. Through numerical experiments, we examine the influence of key computational parameters, including the number of qubits, the sampling cost (number of measurement shots), and the reservoir configuration, on classification performance. The resulting phase classifications are depicted as phase diagrams that illustrate the phase transitions in polymer morphology, establishing an understandable connection between quantum model outputs and material behavior. These results illustrate QERC performance on realistic materials datasets and suggest practical guidelines for quantum encoder design and model generalization. This work establishes a foundation for integrating quantum learning techniques into materials informatics.
O Nature, Where Art Thou?
This paper discusses the conceptual foundations of quantum mechanics, specifically addressing where quantum events occur and proposing that Feynman's Sum Over Histories approach provides a better framework than standard textbook quantum mechanics for understanding quantum gravity. The work focuses on unifying quantum theory with general relativity through shared concepts of events and histories.
Key Contributions
- Critique of standard quantum mechanical formulation regarding event localization
- Proposal that Sum Over Histories approach is better suited for quantum gravity applications
View Full Abstract
Where does what happens happen in a quantum system? The standard textbook formulation of quantum mechanics provides a strange, imprecise and yet successful-in-practice answer to this question. In the struggle to unify our understanding of gravity with quantum theory, though, the textbook answer no longer suffices, and an alternative approach is needed. The Feynman Sum Over Histories approach provides an alternative that is particularly suited to quantum gravity because the Sum Over Histories and General Relativity are built on the same fundamental concepts of `event' and `history'.
Efficient Calculation of the Maximal Rényi Divergence for a Matrix Product State via Generalized Eigenvalue Density Matrix Renormalization Group
This paper develops an efficient computational method to calculate the maximal Rényi divergence for quantum systems represented as matrix product states, offering an alternative to the computationally expensive von Neumann entropy-based quantum mutual information. The authors create a generalized eigenvalue version of the density matrix renormalization group algorithm and demonstrate it on the XXZ spin chain.
Key Contributions
- Development of a generalized eigenvalue density matrix renormalization group algorithm for computing maximal Rényi divergence
- Demonstration that maximal Rényi divergence can exhibit different behavior than von Neumann mutual information in quantum many-body systems
View Full Abstract
The study of quantum and classical correlations between subsystems is fundamental to understanding many-body physics. In quantum information theory, the quantum mutual information, $I(A;B)$, is a measure of correlation between the subsystems $A,B$ in a quantum state, and is defined by means of the von Neumann entropy: $I\left(A;B\right)=S\left(ρ_{A}\right)+S\left(ρ_{B}\right)-S\left(ρ_{AB}\right)$. However, such a computation requires an exponential amount of resources; this is a defining feature of quantum systems, the infamous "curse of dimensionality". Other measures, based on Rényi divergences instead of the von Neumann entropy, were suggested as alternatives in a recent paper showing them to possess important theoretical features, making them leading candidates as mutual information measures. In this work, we concentrate on the maximal Rényi divergence. This measure can be shown to be the solution of a generalized eigenvalue problem. To calculate it efficiently for a 1D state represented as a matrix product state, we develop a generalized eigenvalue version of the density matrix renormalization group algorithm. We benchmark our method for the paradigmatic XXZ chain, and show that the maximal Rényi divergence may exhibit different trends than the von Neumann mutual information.
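As a concrete reference point for the formula quoted above, here is a short numpy example that evaluates I(A;B) for a two-qubit Bell state by partial tracing. It illustrates only the baseline von Neumann quantity, not the paper's generalized-eigenvalue DMRG method for the maximal Rényi divergence.

```python
# Worked baseline example (not the paper's DMRG method): the von Neumann
# mutual information I(A;B) = S(rho_A) + S(rho_B) - S(rho_AB) for a Bell state.
import numpy as np

def entropy(rho):
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-np.sum(vals * np.log2(vals)))

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_AB = np.outer(bell, bell.conj())
rho4 = rho_AB.reshape(2, 2, 2, 2)          # indices: A, B, A', B'
rho_A = np.trace(rho4, axis1=1, axis2=3)   # trace out B
rho_B = np.trace(rho4, axis1=0, axis2=2)   # trace out A

I_AB = entropy(rho_A) + entropy(rho_B) - entropy(rho_AB)
print("I(A;B) =", I_AB)   # 2 bits for a maximally entangled pure state
```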
Addressing intramolecular vibrational redistribution in a single molecule through pump and probe surface-enhanced vibrational spectroscopy
This paper develops a theoretical framework to study how vibrational energy redistributes within single molecules using surface-enhanced Raman spectroscopy (SERS). The researchers propose pump-and-probe techniques that could detect signatures of energy transfer between molecular vibration modes at the single-molecule level.
Key Contributions
- Development of quantum mechanical framework based on molecular optomechanics to model intramolecular vibrational redistribution
- Demonstration of clear spectroscopic signatures for detecting vibrational energy transfer between coupled molecular modes using pump-and-probe SERS
View Full Abstract
The development of accurate tools to characterize Intramolecular Vibrational Redistribution (IVR) is of major interest in chemistry. In this context, surface-enhanced vibrational spectroscopies stand out as well-established techniques to study molecular vibrational lines and populations with a sensitivity that can reach the single-molecule level. However, to date, this possibility has not been fully developed to address IVR. Here, we establish a quantum mechanical framework based on molecular optomechanics that accounts for IVR, and adopt it to analyze strategies to optimize IVR characterization by vibrational spectroscopy. In particular, we model two different pump-and-probe configurations where the vibrational pumping is provided either by infrared laser illumination or by Stokes SERS. For both pumping configurations, we show the existence of clear signatures in the anti-Stokes SERS spectra of population transfer between coupled vibrational modes in a molecule. Our calculations adopt realistic molecular and SERS parameters, suggesting that these signatures of IVR are accessible at the single-molecule level with current experimental platforms.
Magnetically Induced Transparency-Absorption and Normal-Anomalous Dispersion Characteristics of ${}^{87}\text{Rb}$ Medium or Any J-Type Configuration Atomic Vapors Subject to a Vector Magnetic Field and a Weak Resonant Pump
This paper develops a theoretical framework for controlling light transmission and dispersion in rubidium vapor using magnetic fields, showing how to create alternating transparent and absorbing regions at different frequencies. The work demonstrates potential applications in magnetic field sensing and optical signal processing through controlled manipulation of atomic vapor properties.
Key Contributions
- Analytical framework for magnetically induced transparency-absorption (MITA) and normal-anomalous dispersion (MINAD) in atomic vapors
- Closed-form solutions for atomic populations and coherences under vector magnetic fields with identification of bifurcation dynamics
- Theoretical basis for precision magnetometry applications using frequency-dependent transparency/absorption switching
View Full Abstract
We have developed an analytical framework for magnetically induced transparency-absorption (MITA) and normal-anomalous dispersion (MINAD) in a weakly driven ${}^{87}\text{Rb}$ vapor, or any J-type three-level system, under a vector magnetic field. By solving the Bloch equations in the stationary, quasi-stationary, and short-pulse regimes, we obtained closed-form expressions for the atomic populations and coherences and identified a bifurcation in the oscillatory dynamics at zero longitudinal Zeeman splitting. The Fourier-domain analysis reveals alternating transparency/absorption and normal/anomalous dispersion with frequency-dependent sign reversals, enabling spectrally selective filtering and group-delay effects. Slow oscillatory behavior in the radio-frequency range makes the system suitable for weak magnetic-field sensing, while fast oscillations at optical frequencies suggest applications in spectral filtering and frequency-comb-like signal shaping. The results provide a theoretical basis for experimental observation of MITA/MINAD and for optimizing atomic-vapor platforms for precision magnetometry and related photonic functionalities.
Adaptive Framework for Failure-Aware Protocols in Fusion-Based Graph-State Generation
This paper develops adaptive protocols for generating large photonic graph states by connecting smaller clusters through fusion measurements, using graph theory to optimize the building process. The approach treats the generation process as a Markov chain to minimize the expected number of fusion attempts needed, achieving orders of magnitude improvement over simple retry strategies.
Key Contributions
- Development of failure-adaptive fusion protocols that reuse leftover graph states instead of discarding them
- Markov chain framework for analyzing and optimizing fusion measurement sequences to minimize expected completion time
View Full Abstract
We consider the generation of photonic graph states in a linear optics setting where sequential non-deterministic fusion measurements are used to build large graph states out of small linear clusters and develop a framework to optimize the building process using graph theoretic characterizations of fusion networks. We present graph state generation protocols for linear cluster resource states and Type-I/Type-II fusions which are adaptive to fusion failure, that is, they reuse leftover graph states in the remaining building process. To estimate hardware costs, we interpret our protocols as finite Markov processes. This viewpoint allows us to cast the expected number of fusion measurements until success as a first passage problem. We then deploy a pipeline of polynomial algorithms to optimize arbitrary graph states, extract fusion networks and find beneficial orderings of fusions with the goal of lowering the corresponding mean first passage times. We evaluate our pipeline for different initial resource states and fusion mechanisms with varying success probabilities. Results show that our strategies can reduce the fusion overhead by several orders of magnitude when compared to simple repeat-until-success protocols, especially for realistic fusion success probabilities between 50% and 75%.
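The mean-first-passage viewpoint above can be illustrated with a small absorbing Markov chain. The sketch below uses a hypothetical one-dimensional model in which a failed fusion knocks the build back by a fixed number of links; it is not the paper's fusion-network pipeline, only a demonstration of how the fundamental-matrix formula yields expected fusion counts.

```python
# Minimal sketch (not the paper's pipeline): expected number of fusion
# attempts as a mean first passage time of an absorbing Markov chain.
# States count successfully fused links (N needed); a failure falls back
# by `penalty` links, mimicking loss of previously built structure.
import numpy as np

def expected_attempts(N, p_success, penalty):
    # transient states 0..N-1 = links completed so far; state N is absorbing
    Q = np.zeros((N, N))
    for k in range(N):
        if k + 1 < N:
            Q[k, k + 1] = p_success            # success: one more link done
        Q[k, max(k - penalty, 0)] += 1 - p_success  # failure: fall back
    # expected steps before absorption: t = (I - Q)^{-1} 1
    t = np.linalg.solve(np.eye(N) - Q, np.ones(N))
    return t[0]                                 # starting from scratch

for p in (0.5, 0.75):
    restart = expected_attempts(8, p, penalty=8)    # restart-from-scratch on failure
    adaptive = expected_attempts(8, p, penalty=1)   # reuse most of what survives
    print(f"p={p}: restart {restart:.1f} vs adaptive {adaptive:.1f} expected attempts")
```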
Optimization of modulation transfer protocol for Rydberg RF receivers
This paper develops and optimizes a new protocol for quantum RF receivers using hot Rydberg atoms, where phase modulation of a coupling beam is converted to amplitude modulation of a probe beam through atomic nonlinear response. The optimized protocol shows improved sensitivity for detecting RF signals detuned by more than a few MHz compared to conventional methods.
Key Contributions
- Development of theoretical model to optimize modulation frequency and amplitude in Rydberg RF receivers
- Demonstration that modulation transfer protocol outperforms conventional methods for detuned RF signals beyond a few MHz
View Full Abstract
We explore theoretically and experimentally the recently demonstrated modulation transfer protocol [D.-A. Trinh, K. V. Adwaith, M. Branco, A. Rouxel, S. Welinski, P. Berger, F. Goldfarb, and F. Bretenaker, Applied Physics Letters 125, 154001 (2024)] aiming at extending the bandwidth of quantum RF receivers based on hot Rydberg atoms. This protocol is based on a phase modulation of the coupling beam, which is transformed by the nonlinear response of the atoms into an amplitude modulation of the probe beam. We develop a theoretical model to optimize both the modulation frequency and the modulation amplitude of the coupling beam, thereby maximizing the atomic response. Once optimized, the sensitivity to detuned RF fields of this modulation transfer protocol is compared with that of the conventional protocol. This comparison shows that the new protocol outperforms the usual one as soon as the RF signal to be measured is detuned by more than a few MHz and offers a complementary approach to increase the detection bandwidth. In all cases, the experimental results are in good agreement with the simulations.
Cutting Quantum Circuits Beyond Qubits
This paper extends quantum circuit cutting techniques to work with mixed-dimensional quantum systems (qudits) rather than just qubits, allowing large quantum circuits to be broken down and run on smaller, disconnected hardware pieces. The method uses generalized Gell-Mann matrices to decompose interactions and demonstrates significant memory savings in simulating high-dimensional quantum systems.
Key Contributions
- Extension of quantum circuit cutting from qubits to heterogeneous qudit registers
- Demonstration of significant memory reduction (128 MB to 64 KB) for high-dimensional quantum circuit simulation
- Validation of exact state reconstruction for qubit-qutrit hybrid systems
View Full Abstract
We extend quantum circuit cutting to heterogeneous registers comprising mixed-dimensional qudits. By decomposing non-local interactions into tensor products of local generalised Gell-Mann matrices, we enable the simulation and execution of high-dimensional circuits on disconnected hardware fragments. We validate this framework on qubit--qutrit ($2$--$3$) interfaces, achieving exact state reconstruction with a Total Variation Distance of 0 within single-precision floating-point tolerance. Furthermore, we demonstrate the memory advantage in an 8-particle, dimension-8 system, reducing memory usage from 128 MB to 64 KB per circuit.
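The Gell-Mann decomposition at the heart of the cutting scheme can be sketched numerically: build an orthonormal generalized Gell-Mann basis for dimension d and expand an operator in it via Hilbert-Schmidt inner products. This is an illustrative basis construction only, not the authors' cutting or reconstruction code.

```python
# Minimal sketch (not the paper's implementation): generalized Gell-Mann basis
# for dimension d and the Hilbert-Schmidt expansion used to split operators
# into local tensor factors when cutting across a qudit interface.
import numpy as np

def gell_mann_basis(d):
    basis = [np.eye(d, dtype=complex) / np.sqrt(d)]       # normalized identity
    for j in range(d):
        for k in range(j + 1, d):
            sym = np.zeros((d, d), dtype=complex)
            sym[j, k] = sym[k, j] = 1 / np.sqrt(2)        # symmetric element
            basis.append(sym)
            anti = np.zeros((d, d), dtype=complex)
            anti[j, k], anti[k, j] = -1j / np.sqrt(2), 1j / np.sqrt(2)  # antisymmetric element
            basis.append(anti)
    for l in range(1, d):
        diag = np.zeros(d, dtype=complex)
        diag[:l] = 1
        diag[l] = -l
        basis.append(np.diag(diag) / np.sqrt(l * (l + 1)))  # diagonal element
    return basis   # d^2 Hermitian matrices, orthonormal under Hilbert-Schmidt

d = 3
basis = gell_mann_basis(d)
rng = np.random.default_rng(1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = A + A.conj().T                                        # random Hermitian qutrit operator

coeffs = np.array([np.trace(B.conj().T @ A) for B in basis])
A_rebuilt = sum(c * B for c, B in zip(coeffs, basis))
print("reconstruction error:", np.linalg.norm(A - A_rebuilt))  # ~0
```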
Integrating Quantum Software Tools with(in) MLIR
This paper provides a practical guide for quantum software engineers to integrate quantum computing tools using MLIR (Multi-Level Intermediate Representation), addressing the current lack of interoperability between quantum software frameworks. The authors demonstrate their approach through a case study connecting PennyLane and the Munich Quantum Toolkit to create more unified quantum software stacks.
Key Contributions
- Practical guide for integrating quantum software tools using MLIR framework
- Case study demonstrating integration between PennyLane and Munich Quantum Toolkit
- Best practices and insights for overcoming MLIR's learning curve in quantum computing context
View Full Abstract
Compilers transform code into action. They convert high-level programs into executable hardware instructions - a crucial step in enabling reliable and scalable quantum computation. However, quantum compilation is still in its infancy, and many existing solutions are ad hoc, often developed independently and from scratch. The resulting lack of interoperability leads to significant missed potential, as quantum software tools remain isolated and cannot be seamlessly integrated into cohesive toolchains. The Multi-Level Intermediate Representation (MLIR) has addressed analogous challenges in the classical domain. It was developed within the LLVM project, which has long powered robust software stacks and enabled compilation across diverse software and hardware components, with particular importance in high-performance computing environments. However, MLIR's steep learning curve poses a significant barrier to entry, particularly in quantum computing, where much of the software stack is still predominantly built by experimentalists out of necessity rather than by experienced software engineers. This paper provides a practical and hands-on guide for quantum software engineers to overcome this steep learning curve. Through a concrete case study linking Xanadu's PennyLane framework with the Munich Quantum Toolkit (MQT), we outline actionable integration steps, highlight best practices, and share hard-earned insights from real-world development. This work aims to support quantum tool developers in navigating MLIR's complexities and to foster its adoption as a unifying bridge across a rapidly growing ecosystem of quantum software tools, ultimately guiding the development of more modular, interoperable, and integrated quantum software stacks.
Absolutely Maximal Contextual Correlations
This paper investigates maximally contextual quantum correlations using sheaf theory, defining a new measure called contextual fraction and introducing absolutely maximal contextual correlations (AMCC) as an analog to maximally entangled states. The authors construct families of such correlations and demonstrate their applications in secret sharing and randomness extraction.
Key Contributions
- Introduction of contextual fraction metric and absolutely maximal contextual correlations (AMCC) framework
- Construction of infinite families of AMCC using parity check methods and constraint satisfiability problems
- Demonstration of applications to secret sharing and randomness extraction protocols
View Full Abstract
The foundational work by Bell led to an interest in understanding non-local correlations that arise from entangled states shared between distinct, spacelike-separated parties, which formed a foundation for the theory of quantum information processing. We investigate the question of maximal correlations analogous to the maximally entangled states defined in the entanglement theory of multipartite systems. To formalize this, we employ the sheaf-theoretic framework for contextuality, which generalizes non-locality. This provides a new metric for correlations called the contextual fraction (CF), which ranges from 0 (non-contextual) to 1 (maximally contextual). Using this, we define absolutely maximal contextual correlations (AMCC), which are maximally contextual and have maximal marginals, capturing the notion of absolutely maximally entangled (AME) states. The Popescu-Rohrlich (PR) box serves as the bipartite example, and we construct various extensions of such correlations in the tripartite case. An infinite family of various forms of AMCC is constructed using the parity check method and the constraint satisfiability problem (CSP) scheme. We also demonstrate the existence of maximally contextual correlations which do not exhibit maximal marginals, and refer to them as non-AMCC. The results are further applied to secret sharing and randomness extraction using AMCC correlations.
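The PR box cited above as the bipartite example is easy to write down explicitly. The short script below tabulates its conditional probabilities p(ab|xy) with a XOR b = x AND y, checks that the marginals are uniform, and computes its CHSH value of 4, the algebraic maximum (classical bound 2, quantum Tsirelson bound 2*sqrt(2)). This is standard textbook material offered as an illustration, not a computation of the contextual fraction from the paper.

```python
# Minimal illustration (not from the paper): the Popescu-Rohrlich box and its
# CHSH value, the behaviour usually cited as maximally contextual/non-local.
import itertools

def pr_box(a, b, x, y):
    # p(a, b | x, y) with the PR-box rule a XOR b = x AND y
    return 0.5 if (a ^ b) == (x & y) else 0.0

chsh = 0.0
for x, y in itertools.product((0, 1), repeat=2):
    corr = sum((-1) ** (a ^ b) * pr_box(a, b, x, y)
               for a, b in itertools.product((0, 1), repeat=2))
    chsh += -corr if (x, y) == (1, 1) else corr

print("PR-box CHSH value:", chsh)   # 4.0, the algebraic maximum
print("marginals uniform:", all(
    abs(sum(pr_box(a, b, x, y) for b in (0, 1)) - 0.5) < 1e-12
    for a in (0, 1) for x in (0, 1) for y in (0, 1)))
```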
Continuous Unitary Designs for Universally Robust Quantum Control
This paper introduces continuous unitary designs - continuous paths of quantum operations that mimic random quantum transformations - extending previous work on discrete sets. The authors develop mathematical frameworks to construct these continuous paths and demonstrate their application to robust quantum control that can handle unknown noise.
Key Contributions
- First systematic study of continuous unitary designs extending discrete unitary design theory
- Construction frameworks based on topological bundle theory and Heisenberg-Weyl group for arbitrary dimensions
- Analytical solutions for universally robust quantum control that outperform conventional pulse techniques
- Explicit construction of single-qubit unitary 1-design paths using spherical 2-design curves and Hopf fibration
View Full Abstract
Unitary designs are unitary ensembles that emulate Haar-random unitary statistics. They provide a vital tool for studying quantum randomness and have found broad applications in quantum technologies. However, existing research has focused on discrete ensembles, even though many physical processes, such as those in quantum chaos, thermalization, and control, naturally involve continuous ensembles generated by continuous time evolution. Here we initiate the study of continuous unitary designs, addressing fundamental questions about their construction and practical utility. For single-qubit systems, we construct explicit unitary 1-design paths from spherical 2-design curves and Hopf fibration theory. For arbitrary dimensions, we develop two systematic construction frameworks, one based on topological bundle theory of the unitary group and the other based on the Heisenberg-Weyl group. On the practical front, our unitary design paths provide analytical solutions to universally robust quantum control. Simulations show they outperform conventional pulse techniques in mitigating arbitrary unknown static noise, demonstrating immediate utility for quantum engineering. Extending unitary designs to the continuous domain not only introduces powerful geometric and topological tools that complement conventional combinatorial and group-theoretic methods, but also enhances experimental feasibility over discrete counterparts, which usually involve instantaneous pulses. As an outlook, we anticipate that this work will pave the way for using continuous unitary designs to explore complex quantum dynamics and devise quantum information protocols.
Experimental realization of quantum Zeno dynamics for robust quantum metrology
This paper demonstrates how quantum Zeno dynamics can protect quantum sensors from noise while maintaining their measurement precision. The researchers used nuclear magnetic resonance to experimentally show that strong particle interactions during measurement encoding can achieve near-optimal sensitivity even in noisy environments.
Key Contributions
- Experimental demonstration of quantum Zeno dynamics for noise-resilient quantum metrology using NMR platform
- Development of approach using strong inter-particle interactions during parameter encoding to overcome previous QZD limitations
- Achievement of near-optimal precision scaling under amplitude damping noise in both parallel and sequential measurement settings
View Full Abstract
Quantum Zeno dynamics (QZD), which restricts the system's evolution to a protected subspace, provides a promising approach for protecting quantum information from noise. Here, we explore a practical approach to harnessing QZD for robust quantum metrology. By introducing strong inter-particle interactions during the parameter encoding stage, we overcome the typical limitations of previous QZD studies, which have largely focused on single-particle systems and faced challenges where QZD could interfere with the encoding process. We experimentally validate the proposed scheme on a nuclear magnetic resonance platform, achieving near-optimal precision scaling under amplitude damping in both parallel and sequential settings. Numerical simulations further demonstrate the scalability of the approach and its compatibility with other control techniques for suppressing more general types of noise. These findings highlight QZD as a powerful strategy for noise-resilient quantum metrology.
Discrete symmetries in classical and quantum oscillators
This paper reinterprets quantum harmonic oscillator wave functions as classical coordinates on conical spaces, proposing that quantum superposition arises from incomplete knowledge of initial conditions rather than fundamental quantum behavior. The authors use complex Bargmann-Fock-Segal representation to connect quantum eigenfunctions to classical phase space geometry.
Key Contributions
- Reinterpretation of quantum harmonic oscillator eigenfunctions as classical coordinates on conical spaces
- Novel geometric connection between quantum superposition and incomplete classical information using discrete symmetry groups
View Full Abstract
We consider the nature of the wave function using the example of a harmonic oscillator. We show that the eigenfunctions $ψ_n{=}z^n$ of the quantum Hamiltonian in the complex Bargmann-Fock-Segal representation with $z\in\mathbb C$ are the coordinates of a classical oscillator with energy $E_n=\hbarωn$, $n=0,1,2,...\,$. They are defined on conical spaces ${\mathbb C}/{\mathbb Z}_n$ with cone angles $2π/n$, which are embedded as subspaces in the phase space $\mathbb C$ of the classical oscillator. Here ${\mathbb Z}_n$ is the finite cyclic group of rotations of the space $\mathbb C$ by an angle $2π/n$. The superposition $ψ=\sum_n c_nψ_n$ of the eigenfunctions $ψ_n$ arises only with incomplete knowledge of the initial data for solving the Schrödinger equation, when the conditions of invariance with respect to the discrete groups ${\mathbb Z}_n$ are not imposed and the general solution takes into account all possible initial data parametrized by the numbers $n\in\mathbb N$.
On the homogeneity of the quantum transition probability
This paper investigates the mathematical properties of quantum transition probabilities using Jordan algebra theory, showing that these probabilities achieve maximum homogeneity in simple Euclidean Jordan algebras that include standard quantum mechanics. The authors connect geometric results about homogeneous spaces to the structure of quantum state spaces and transition probabilities.
Key Contributions
- Establishes connection between Jordan algebra theory and quantum transition probability homogeneity
- Shows that atomic parts of Jordan algebras can be characterized topologically rather than through entire state space analysis
View Full Abstract
In the years 1952 and 1965, H.-C. Wang and U. Hirzebruch showed that the two-point homogeneous compact spaces with convex metrics are isometric to the spheres, the real, complex, and octonion projective spaces, and the Moufang plane, as well as to the sets of the minimal idempotents or pure states in the simple Euclidean Jordan algebras. Here we reveal the physical meaning of these mathematical achievements for the quantum mechanical transition probability. We show that this transition probability features a maximum degree of homogeneity in all simple Euclidean Jordan algebras, which includes common finite-dimensional Hilbert space quantum theory. The atomic parts of these algebras or, equivalently, the extreme boundaries of their state spaces can be characterized by purely topological means. This is an important difference from many other recent approaches that aim to distinguish the entire state spaces among the convex compact sets. An interesting case with non-homogeneous transition probability arises when the $E_6$-symmetric bioctonionic projective plane is used as the quantum logic.
High-Resolution Spectroscopy of the X-A Transition of the Carbon Monoxide Dication CO$^{2+}$
This paper presents high-resolution spectroscopic measurements of doubly-charged carbon monoxide molecules (CO2+), using a technique that detects molecular breakup after laser excitation. The work provides detailed measurements of the electronic and vibrational energy levels of this rare type of doubly-charged molecular ion.
Key Contributions
- First high-resolution spectroscopic study of CO2+ rovibronic transitions with ~5 cm-1 resolution
- Resolution of spin-orbit splittings in the ground vibronic state of CO2+ guided by ab initio calculations
View Full Abstract
We report rovibronic spectra of the A $^3Σ^+$($v'=0-2$) - X $^3Π_Ω(v=0)$ rovibronic transitions ($|Ω|=0, 1$ and 2) of the CO$^{2+}$ doubly-charged molecular ion. Spectra were recorded at high resolution ($\sim 5$ cm$^{-1}$) in a fast beam of CO$^{2+}$ molecules by detecting the Coulomb explosion of the molecules upon excitation to the A state. Measurements were guided by ab initio calculations which then assisted the assignment of the observed spectral features. Our results resolve the spin-orbit splittings of the ground vibronic state X $^3Π_Ω(v=0)$, but not the rotational structure of the bands due to spectral congestion, and provide spectroscopic information on CO$^{2+}$ with unprecedented resolution. In doing so they expand our knowledge of this benchmark doubly charged molecular ion and expand the short list of doubly charged molecules studied at high resolution.
Self-Supervised Learning with Noisy Dataset for Rydberg Microwave Sensors Denoising
This paper develops a self-supervised deep learning method to reduce noise in Rydberg atom-based quantum sensors, achieving the same accuracy as averaging 10,000 measurements but from a single measurement. The approach trains neural networks on noisy data without requiring clean reference signals, making it practical for real quantum sensing applications.
Key Contributions
- Self-supervised denoising framework that works without clean reference data
- Achieves 10,000x measurement averaging performance from single shots
- Comparative analysis of U-Net vs Transformer architectures for quantum sensor denoising
- Three orders of magnitude reduction in computation time compared to traditional methods
View Full Abstract
We report a self-supervised deep learning framework for Rydberg sensors that enables single-shot noise suppression matching the accuracy of multi-measurement averaging. The framework eliminates the need for clean reference signals (which are rarely available in quantum sensing) by training on two sets of noisy signals with identical statistical distributions. When evaluated on Rydberg sensing datasets, the framework outperforms wavelet transform and Kalman filtering, achieving a denoising effect equivalent to 10,000-set averaging while reducing computation time by three orders of magnitude. We further validate performance across diverse noise profiles and quantify the complexity-performance trade-off of U-Net and Transformer architectures, providing actionable guidance for optimizing deep learning-based denoising in Rydberg sensor systems.
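The training idea of matching one noisy copy of a signal to an independent noisy copy can be sketched in a few lines of PyTorch. The tiny 1D convolutional network, the synthetic spectral-line data, and all hyperparameters below are hypothetical placeholders, not the U-Net or Transformer architectures studied in the paper and not Rydberg sensor data.

```python
# Minimal Noise2Noise-style sketch (not the paper's architecture or data):
# the network's output on one noisy copy is regressed onto an independent
# noisy copy of the same underlying signal; no clean target is ever used.
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.linspace(-1, 1, 256)
clean = torch.exp(-((t - 0.1) / 0.05) ** 2) - 0.6 * torch.exp(-((t + 0.3) / 0.08) ** 2)

def noisy_batch(n):
    base = clean.expand(n, 1, -1)
    return base + 0.3 * torch.randn(n, 1, 256), base + 0.3 * torch.randn(n, 1, 256)

net = nn.Sequential(
    nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, 9, padding=4),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(500):
    noisy_in, noisy_target = noisy_batch(32)   # two independent noisy copies
    opt.zero_grad()
    loss = loss_fn(net(noisy_in), noisy_target)
    loss.backward()
    opt.step()

with torch.no_grad():
    single_shot, _ = noisy_batch(1)
    residual = torch.mean((net(single_shot)[0, 0] - clean) ** 2).item()
    print("mean-square residual of single-shot denoised output vs clean:", residual)
```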
Interpretation of Unfair Sampling in Quantum Annealing by Node Centrality
This paper analyzes why quantum annealing tends to unfairly sample certain optimal solutions over others when multiple solutions exist, finding that eigenvector centrality in the graph of ground states predicts sampling probabilities. The researchers use degenerate perturbation theory to explain this bias and propose methods to achieve more balanced sampling.
Key Contributions
- Theoretical explanation of sampling bias in quantum annealing using eigenvector centrality and degenerate perturbation theory
- Practical methods for achieving fairer sampling by promoting ground state connectivity and reducing centrality heterogeneity
View Full Abstract
In applications where multiple optimal solutions are needed, transverse-field quantum annealing (QA) is known to sample degenerate ground states in a strongly biased manner. Despite extensive empirical observations, it remains unclear which degenerate ground states are preferentially sampled by QA, and why. Here we analyze the final states using degenerate perturbation theory to characterize the preference among them. In this analysis, the adjacency matrix of the graph composed of the ground states naturally emerges, and we predict that the eigenvector centralities (one of the standard node centralities) are related to the sampling probabilities of these states. We verify this prediction on toy models where degeneracy is lifted at first and second order, and we show that second-order weights encode local barrier information, relating sampling fairness to the flatness of the local energy landscape. Finally, this perspective suggests two practical routes toward fair sampling, namely promoting connectivity of the graph and reducing the heterogeneity of centralities, and we illustrate consistency with higher-order drivers and minor-embedding transformations.
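The ground-state graph and its eigenvector centralities are easy to compute for a toy problem. In the sketch below, the cost function, its degenerate ground manifold, and the single-bit-flip adjacency rule are all hypothetical illustrations, not the paper's instances; the point is only to show the centrality quantity that the paper relates to sampling probabilities.

```python
# Minimal sketch (not the paper's models): connect degenerate ground states of
# a toy cost function by single bit flips and compute eigenvector centralities.
import itertools
import numpy as np

n = 3
def cost(b):
    # hypothetical objective: penalize a 1 immediately followed by a 0,
    # so the ground states are the monotone strings 000, 001, 011, 111
    return sum(b[i] * (1 - b[i + 1]) for i in range(n - 1))

states = list(itertools.product((0, 1), repeat=n))
costs = np.array([cost(b) for b in states])
ground = [np.array(b) for b, c in zip(states, costs) if c == costs.min()]

# adjacency of the ground-state graph: edges between single-bit-flip pairs
m = len(ground)
A = np.zeros((m, m))
for i, j in itertools.combinations(range(m), 2):
    if np.sum(ground[i] != ground[j]) == 1:
        A[i, j] = A[j, i] = 1.0

vals, vecs = np.linalg.eigh(A)
centrality = np.abs(vecs[:, -1])          # principal eigenvector
centrality /= centrality.sum()
for s, c in zip(ground, centrality):
    print("".join(map(str, s)), round(float(c), 3))   # endpoint states get less weight
```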
Pervasive Vulnerability Analysis and Defense for QKD-based Quantum Private Query
This paper identifies critical security vulnerabilities in Quantum Private Query protocols that are based on Quantum Key Distribution, showing how attackers can extract hidden database information even without complex quantum resources. The authors propose a multi-encryption defense scheme to protect against these newly identified attack methods.
Key Contributions
- Identification of security vulnerabilities in QKD-based QPQ protocols including direct observation and minimum error discrimination attacks
- Development of a multi-encryption defense scheme compatible with existing QPQ protocols
View Full Abstract
Quantum Private Query (QPQ) based on Quantum Key Distribution (QKD) is among the most practically viable quantum communication protocols, with application value second only to QKD itself. However, prevalent security vulnerabilities in the post-processing stages of most existing QKD-based QPQ protocols have been severely overlooked. This study focuses on hidden information extraction under undetermined signal bits, revealing that most such QPQ protocols face severe security threats even without complex quantum resources. Specifically, the direct observation attack causes incremental information leakage, while the minimum error discrimination attack efficiently steals additional database information. To address these critical flaws, a multi-encryption defense scheme is proposed that is compatible with existing QPQ protocols. The study demonstrates the necessity of the multi-encryption strategy for the security of databases in QPQ, providing key theoretical and technical support for constructing practical QPQ protocols resistant to real-world attacks.
Random-Matrix-Induced Simplicity Bias in Over-parameterized Variational Quantum Circuits
This paper explains why over-parameterized variational quantum circuits (VQCs) perform poorly by showing they collapse to simple, near-constant functions due to random matrix effects. The authors demonstrate that structured tensor-based circuit architectures can avoid this problem and maintain good performance even when over-parameterized.
Key Contributions
- Theoretical explanation of barren plateaus and poor trainability in over-parameterized VQCs through random matrix universality
- Demonstration that tensor-structured VQCs avoid the simplicity bias and maintain expressivity in over-parameterized regimes
- Unification of barren plateaus, expressivity limits, and generalization collapse under a single theoretical framework
View Full Abstract
Over-parameterization is commonly used to increase the expressivity of variational quantum circuits (VQCs), yet deeper and more highly parameterized circuits often exhibit poor trainability and limited generalization. In this work, we provide a theoretical explanation for this phenomenon from a function-class perspective. We show that sufficiently expressive, unstructured variational ansatze enter a Haar-like universality class in which both observable expectation values and parameter gradients concentrate exponentially with system size. As a consequence, the hypothesis class induced by such circuits collapses with high probability to a narrow family of near-constant functions, a phenomenon we term simplicity bias, with barren plateaus arising as a consequence rather than the root cause. Using tools from random matrix theory and concentration of measure, we rigorously characterize this universality class and establish uniform hypothesis-class collapse over finite datasets. We further show that this collapse is not unavoidable: tensor-structured VQCs, including tensor-network-based and tensor-hypernetwork parameterizations, lie outside the Haar-like universality class. By restricting the accessible unitary ensemble through bounded tensor rank or bond dimension, these architectures prevent concentration of measure, preserve output variability for local observables, and retain non-degenerate gradient signals even in over-parameterized regimes. Together, our results unify barren plateaus, expressivity limits, and generalization collapse under a single structural mechanism rooted in random-matrix universality, highlighting the central role of architectural inductive bias in variational quantum algorithms.
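The concentration effect behind the simplicity bias described above can be checked numerically: for Haar-random unitaries, the expectation value of a single-qubit Z observable on U|0...0> has variance 1/(2^n + 1), shrinking exponentially with the number of qubits. The sampling sizes below are arbitrary illustration choices, not the paper's experiments.

```python
# Minimal numerical illustration (not from the paper): concentration of
# <0|U^dag Z_1 U|0> for Haar-random U, with variance ~ 1/(2^n + 1).
import numpy as np

def haar_unitary(dim, rng):
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phases of R's diagonal

rng = np.random.default_rng(0)
for n in (2, 4, 6):
    dim = 2 ** n
    z1 = np.kron(np.diag([1.0, -1.0]), np.eye(dim // 2))   # Z on the first qubit
    vals = []
    for _ in range(300):
        U = haar_unitary(dim, rng)
        psi = U[:, 0]                                       # U|0...0>
        vals.append(np.real(psi.conj() @ (z1 @ psi)))
    print(f"n={n}: var of <Z_1> = {np.var(vals):.2e}  (1/(2^n+1) = {1 / (dim + 1):.2e})")
```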
High-Order Epistasis Detection Using Factorization Machine with Quadratic Optimization Annealing and MDR-Based Evaluation
This paper develops a new computational method for detecting complex genetic interactions (epistasis) in disease studies by combining factorization machines with quantum-inspired optimization annealing techniques. The approach aims to efficiently search through massive combinations of genetic loci to find meaningful disease associations without exhaustive testing.
Key Contributions
- Novel application of factorization machine with quadratic optimization annealing (FMQA) to epistasis detection
- Efficient black-box optimization approach that avoids computationally expensive exhaustive searches for high-order genetic interactions
View Full Abstract
Detecting high-order epistasis is a fundamental challenge in genetic association studies due to the combinatorial explosion of candidate locus combinations. Although multifactor dimensionality reduction (MDR) is a widely used method for evaluating epistasis, exhaustive MDR-based searches become computationally infeasible as the number of loci or the interaction order increases. In this paper, we define the epistasis detection problem as a black-box optimization problem and solve it with a factorization machine with quadratic optimization annealing (FMQA). We propose an efficient epistasis detection method based on FMQA, in which the classification error rate (CER) computed by MDR is used as a black-box objective function. Experimental evaluations were conducted using simulated case-control datasets with predefined high-order epistasis. The results demonstrate that the proposed method successfully identified ground-truth epistasis across various interaction orders and the numbers of genetic loci within a limited number of iterations. These results indicate that the proposed method is effective and computationally efficient for high-order epistasis detection.
A Survey of Bargmann Invariants: Geometric Foundations and Applications
This survey paper reviews Bargmann invariants, which are mathematical quantities that describe the geometric structure of quantum states. The paper focuses on how these invariants can be used to characterize quantum systems and detect entanglement without requiring complete measurement of the quantum state.
Key Contributions
- Comprehensive survey of Bargmann invariants and their geometric foundations
- Framework for entanglement detection without full state tomography
- Methods for characterizing local unitary equivalence of quantum states
View Full Abstract
Bargmann invariants, a class of gauge-invariant quantities arising from the overlaps of quantum state vectors, provide a profound and unifying framework for understanding the geometric structure of quantum mechanics. This survey offers a comprehensive overview of Bargmann invariants, with a particular focus on their role in shaping the informational geometry of the state space. The core of this review demonstrates how these invariants serve as a powerful tool for characterizing the intrinsic geometry of the space of quantum states, leading to applications in determining local unitary equivalence and constructing a complete set of polynomial invariants for mixed states. Furthermore, we explore their pivotal role in modern quantum information science, specifically in developing operational methods for entanglement detection without the need for full state tomography. By synthesizing historical context with recent advances, this survey aims to highlight Bargmann invariants not merely as mathematical curiosities, but as essential instruments for probing the relational and geometric features of quantum systems.
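The gauge invariance that makes these overlap products useful is quick to verify numerically. The snippet below computes a third-order Bargmann invariant for random states and checks that it is unchanged when each vector is multiplied by an arbitrary phase, unlike the individual overlaps; the random states are illustrative only.

```python
# Minimal illustration (not from the survey): the third-order Bargmann
# invariant <psi1|psi2><psi2|psi3><psi3|psi1> and its phase (gauge) invariance.
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def bargmann(states):
    # Delta(psi_1, ..., psi_n) = <psi_1|psi_2><psi_2|psi_3>...<psi_n|psi_1>
    out = 1.0 + 0.0j
    for a, b in zip(states, states[1:] + states[:1]):
        out *= np.vdot(a, b)
    return out

psis = [random_state(3) for _ in range(3)]
delta = bargmann(psis)

phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=3))
delta_gauged = bargmann([ph * psi for ph, psi in zip(phases, psis)])

print("Delta_3         :", np.round(delta, 6))
print("after rephasing :", np.round(delta_gauged, 6))   # identical
```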
Quantum Interaction Between Free Electrons and Light Involving First-order and Second-order Process
This paper develops a quantum theory for interactions between free electrons and light that involves two-photon processes, extending beyond the typical single-photon interactions. The work connects different electron-photon scattering phenomena and shows how manipulating optical near fields can enhance two-photon absorption/emission by electrons.
Key Contributions
- Development of full quantum theory for electron-photon interactions including two-photon processes
- Demonstration that two-photon absorption/emission can be enhanced by manipulating optical near field electric components
- Analytical unification of PINEM, Kapitza-Dirac effect, and nonlinear Compton scattering under the two-photon process framework
View Full Abstract
The Photon-induced Near-field Electron Microscopy (PINEM) effect has revealed the quantum interaction between free electrons and the optical near field, demonstrating a wealth of novel phenomena for manipulating free-electron wave packets and detecting/shaping quantum photonic states. However, free electrons generally only absorb/emit one photon at a time, and the physical mechanism and phenomena of free-electron two-photon interaction have not been studied yet. Moreover, the relationship between PINEM, the Kapitza-Dirac (KD) effect, and nonlinear Compton scattering is still unclear. Here we develop the full quantum theory of electron-photon interaction considering the two-photon process. It is revealed that the emission/absorption of two photons by electrons can be greatly enhanced by manipulating the electric field component of the optical near field, and that quantum interference between single-photon and two-photon processes can occur in some circumstances, which affects the photon number state, the electron energy states, and electron-photon entanglement. Meanwhile, it is found that the KD effect (elastic electron-photon scattering) and nonlinear Compton scattering (inelastic electron-photon scattering) are also a kind of two-photon process, and the distribution of electrons can be deduced analytically from the full quantum theory. Our work uncovers the rich phenomena that can arise when free electrons interact with two photons and paves the way for more in-depth studies of nonlinear processes in electron-photon quantum interactions.
Global Parametric Gates for Multi-qubit Entanglement
This paper demonstrates a new method to create entangled quantum states among multiple qubits simultaneously using a single microwave pulse applied to a common auxiliary qubit, achieving high fidelities for 2-4 qubit entanglement and showing potential for scaling to larger systems.
Key Contributions
- Development of global parametric gate for single-step multi-qubit entanglement generation
- Experimental demonstration achieving >90% fidelity for 2-4 qubit entangled states
- Scalable approach compatible with fixed-frequency qubits using only microwave control
View Full Abstract
We propose and experimentally demonstrate a global parametric gate that generates multi-qubit entangled states in a single step. By applying a parametric drive to a common qubit at precise detunings relative to computational qubits, we directly produce two-, three-, and four-qubit entanglement with state fidelities of 99.4% ± 0.2%, 93.4% ± 0.3%, and 91.4% ± 0.3%, respectively. This scheme enables efficient, reconfigurable control using only microwave drives and is compatible with fixed-frequency qubits. Error analyses indicate that infidelity stems primarily from decoherence and coherent control errors, with negligible contributions from static ZZ coupling and flux noise. Furthermore, simulations with state-of-the-art parameters predict this global gate can generate high-fidelity (99.70%) entanglement in systems of up to six qubits.
Quantum information of optical magnetometry: Semiclassical Cramer-Rao bound violation and Heisenberg scaling
This paper investigates optical magnetometers that use laser light rotation to measure magnetic fields, comparing semiclassical and quantum models. The researchers find that quantum effects enable better-than-classical precision scaling and could provide a test of quantum mechanics in large atomic systems.
Key Contributions
- Demonstrates violation of semiclassical Cramer-Rao bound in optical magnetometry under certain conditions
- Shows Heisenberg scaling emerges from measurement-induced quantum correlations in non-interacting systems
- Proposes experimental test of quantum mechanics foundations using macroscopic atomic ensembles
View Full Abstract
Optical magnetometers use the rotation of linearly polarized laser light induced by the Faraday effect for high precision magnetic field measurements. Here, we carry out an in-depth quantum information investigation, deploying two distinct models: The first, semiclassical model can violate the quantum Cramer-Rao bound by several orders of magnitude for weak dissipation and large atom numbers, invalidating the semiclassical approach in this parameter regime. The second model, describing the atoms as a collective spin, respects the Cramer-Rao bound for all parameters. Interestingly, the collective model also predicts Heisenberg scaling for the quantum Fisher information. The comparison of both models shows that Heisenberg scaling is a result of measurement-induced quantum correlation in an otherwise non-interacting quantum system. As the Heisenberg scaling appears in a stationary state of a macroscopic quantum system, it can be thus viewed as a new paradigm in quantum sensing. Intriguingly, the comparison of both models with experimental data can constitute a test for the foundations of quantum mechanics in a macroscopic ensemble of atoms.
Photon blockade effect from synergistic optical parametric amplification and driving force in Kerr-medium single-mode cavity
This paper investigates photon blockade in a quantum optical system combining a Kerr-nonlinear cavity with an optical parametric amplifier. The research demonstrates how to control single-photon emission through destructive quantum interference and provides both analytical and numerical methods to optimize the blockade effect.
Key Contributions
- Analytical solution for photon blockade in hybrid Kerr-OPA cavity system
- Demonstration of driving phase control over optimal blockade parameters
- Physical explanation of blockade mechanism through destructive quantum interference
View Full Abstract
This work investigates photon blockade control in a hybrid quantum system containing a Kerr-nonlinear cavity coupled to an optical parametric amplifier (OPA). The dynamics are governed by a master equation derived from an effective Hamiltonian that includes cavity decay. To obtain analytical solutions, the system's quantum state is expanded in the Fock basis up to the two-photon level. Solving the steady-state Schrodinger equation yields probability amplitudes and the analytical conditions for optimal photon blockade. Results confirm that photon blockade is achievable with suitable parameters. Excellent agreement is found between the analytical solutions and numerical simulations for the steady-state, equal-time second-order correlation function, validating both the analytical method and the blockade effect. Numerically, the average intracavity photon number increases significantly under resonance, providing a theoretical pathway for enhancing single-photon source brightness. Furthermore, the driving phase is shown to regulate the optimal blockade region: it shifts the parabolic region within the two-dimensional parameter space of driving strength and OPA nonlinearity and can even reverse its opening direction. The influence of Kerr nonlinearity is also examined. Photon blockade remains robust across a wide range of Kerr strengths. Physical analysis attributes the effect to destructive quantum interference between two distinct excitation pathways that suppress two-photon states. While Kerr nonlinearity shifts the system's energy levels, it does not disrupt this interference mechanism, explaining the effect's stability over a broad parameter range.
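A minimal numerical sketch of the diagnostic discussed above: a driven Kerr cavity with an OPA (two-photon) term, whose steady-state equal-time correlation $g^{(2)}(0) \ll 1$ signals photon blockade. The Hamiltonian structure and every parameter value below are illustrative assumptions rather than the paper's, and the snippet assumes QuTiP is available:

```python
import numpy as np
import qutip as qt

N = 12                      # Fock-space truncation
a = qt.destroy(N)

# Illustrative parameters (units of the cavity decay rate kappa); not from the paper.
delta, E, U, G, theta, kappa = 0.0, 0.1, 1.0, 0.05, 0.0, 1.0

H = (delta * a.dag() * a
     + U * a.dag() * a.dag() * a * a                     # Kerr nonlinearity
     + E * (a + a.dag())                                  # coherent driving force
     + G * (np.exp(1j * theta) * a.dag() ** 2
            + np.exp(-1j * theta) * a ** 2))              # OPA (two-photon) term

rho_ss = qt.steadystate(H, [np.sqrt(kappa) * a])          # Lindblad steady state

n_avg = qt.expect(a.dag() * a, rho_ss)
g2 = qt.expect(a.dag() * a.dag() * a * a, rho_ss) / n_avg ** 2
print(f"<n> = {n_avg:.4f}, g2(0) = {g2:.4f}")             # g2(0) << 1 indicates blockade
```

Sweeping `E`, `G`, and `theta` in such a model is one way to visualize how the driving phase moves the optimal-blockade region described in the abstract.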
Physically natural metric-measure Lindbladian ensembles and their learning hardness
This paper studies how difficult it is to learn the structure of noise and dissipation in open quantum systems from measurement data, and develops cryptographic protocols based on this learning hardness. The authors prove that randomly generated quantum noise processes require exponentially many measurements to characterize, making them suitable for quantum cryptographic applications.
Key Contributions
- Proved exponential lower bounds on learning random Lindbladian dynamics using statistical query frameworks
- Developed physically unclonable function protocols based on random open quantum systems
View Full Abstract
In open quantum systems, a basic question at the interface of quantum information, statistical physics, and many-body dynamics is how well one can infer the structure of noise and dissipation generators from finite-time measurement statistics alone. Motivated by this question, we study the learnability and cryptographic applications of random open-system dynamics generated by Lindblad-Gorini-Kossakowski-Sudarshan (GKSL) master equations. Working in the affine hull of the GKSL cone, we introduce physically motivated ensembles of random local Lindbladians via a linear parametrisation around a reference generator. On top of this geometric structure, we extend statistical query (SQ) and quantum-process statistical query (QPStat) frameworks to the open-system setting and prove exponential (in the parameter dimension $M$) lower bounds on the number of queries required to learn random Lindbladian dynamics. In particular, we establish average-case SQ-hardness for learning output distributions in total variation distance and average-case QPStat-hardness for learning Lindbladian channels in diamond norm. To support these results physically, we derive a linear-response expression for the ensemble-averaged total variation distance and verify the required nonvanishing scaling in a random local amplitude-damping chain. Finally, we design two Lindbladian physically unclonable function (Lindbladian-PUF) protocols based on random Lindbladian ensembles with distribution-level and tomography-based verification, thereby providing open-system examples where learning hardness can be translated into cryptographic security guarantees.
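For reference, the GKSL (Lindblad) generator whose random ensembles are studied here has the standard form (textbook expression; the paper's specific parametrisation is the one sketched in the abstract):

```latex
\dot{\rho} \;=\; \mathcal{L}(\rho) \;=\; -\frac{i}{\hbar}\,[H,\rho]
\;+\; \sum_{k} \gamma_k \Big( L_k\, \rho\, L_k^{\dagger} \;-\; \tfrac{1}{2}\,\big\{ L_k^{\dagger} L_k,\, \rho \big\} \Big)
```

Here $H$ is the Hamiltonian part, the $L_k$ are jump operators, and $\gamma_k \ge 0$ are rates; a linear parametrisation around a reference generator amounts to drawing the coefficients of such a generator at random within the affine hull of the GKSL cone.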
A Survey on Applications of Quantum Computing for Unit Commitment
This paper surveys how quantum computing methods can be applied to solve the Unit Commitment problem in power systems, which involves optimizing when to turn electrical generators on/off and how much power they should produce. The authors review different quantum approaches including quantum annealing and hybrid quantum-classical algorithms that could potentially solve these complex scheduling problems more efficiently than traditional methods.
Key Contributions
- Comprehensive survey of quantum computing applications to Unit Commitment optimization problems
- Categorization of quantum approaches including annealing-based, variational hybrid, and quantum machine learning methods
- Analysis of modeling strategies, hardware implementations, and computational trade-offs for quantum-enabled power system optimization
View Full Abstract
Unit Commitment (UC) is a core optimization problem in power system operation and electricity market scheduling. It determines the optimal on/off status and dispatch of generating units while satisfying system, operational, and market constraints. Traditionally, UC has been solved using mixed-integer programming, dynamic programming, or metaheuristic methods, all of which face scalability challenges as systems grow in size and uncertainty. Recent advances in quantum computing, spanning quantum annealing, variational algorithms, and hybrid quantum-classical optimization, have opened new opportunities to accelerate UC solution processes by exploiting quantum parallelism and entanglement. This paper presents a comprehensive survey of existing research on the applications of quantum computing for solving the UC problem. The reviewed works are categorized based on the employed quantum paradigms, including annealing-based, variational hybrid, quantum machine learning, and quantum-inspired methods. Key modeling strategies, hardware implementations, and computational trade-offs are discussed, highlighting the current progress, limitations, and potential future directions for large-scale quantum-enabled UC.
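To make the optimization concrete, a schematic single-period UC formulation and the penalty-based reformulation used by annealing approaches might look as follows (a deliberately simplified illustration, not a formulation taken from the surveyed works):

```latex
\min_{u_i \in \{0,1\},\; p_i} \; \sum_i \big( c_i(p_i) + s_i \big)\, u_i
\quad \text{s.t.} \quad
\sum_i u_i\, p_i = D, \qquad
u_i\, P_i^{\min} \;\le\; p_i \;\le\; u_i\, P_i^{\max}
```

Here $u_i$ is the on/off decision, $p_i$ the dispatch, $c_i$ and $s_i$ generation and start-up costs, $P_i^{\min/\max}$ the unit limits, and $D$ the demand. Annealing-based methods typically discretize $p_i$ into binary variables and fold the demand constraint into the objective as a quadratic penalty $\lambda\big(\sum_i p_i - D\big)^2$, producing a QUBO that maps onto an Ising Hamiltonian.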
Demonstration of Discrete-Time Quantum Walks and Observation of Topological Edge States in a Superconducting Qutrit Chain
This paper demonstrates discrete-time quantum walks using superconducting qutrits (three-level quantum systems) instead of traditional qubits, showing improved hardware efficiency and the first observation of topological edge states in superconducting circuits. The researchers used qutrit chains to encode both walker position and coin states, enabling scalable quantum walk implementations with potential applications in quantum computing and simulation.
Key Contributions
- First experimental demonstration of discrete-time quantum walks using superconducting qutrits with improved hardware efficiency
- First observation of particle-hole-symmetry-protected topological edge states in superconducting quantum circuits
- Demonstration of scalable quantum walk implementation that encodes both walker position and coin degree of freedom in qutrit systems
View Full Abstract
Quantum walks serve as a versatile tool for universal quantum computing and algorithmic research. However, the implementation of discrete-time quantum walks (DTQWs) with superconducting circuits is still constrained by limitations such as operation precision, circuit depth, and connectivity. With improved hardware efficiency from using superconducting qutrits (three-level systems), we experimentally demonstrate a scalable DTQW in a superconducting circuit, observing the ballistic spreading of the quantum walk in a qutrit chain. The use of qutrits in our implementation allows hardware-efficient encoding of the walker position and the coin degree of freedom. By exploiting the flexibility and intrinsic symmetries of qutrit-based DTQWs, we successfully prepare two topological phases in the chain. For the first time, particle-hole-symmetry-protected edge states, bound at the interface between these two topological phases, are observed on the superconducting platform. Measured parameter dependencies further validate the properties of the edge states. The scalability and gate-control compatibility of the demonstrated DTQWs provide a versatile tool for superconducting quantum computing and quantum simulation.
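The ballistic spreading mentioned above is easy to reproduce with a generic coined DTQW on a line. The sketch below uses a standard two-level Hadamard coin purely as an illustration; it does not reproduce the paper's qutrit encoding or its topological-phase construction:

```python
import numpy as np

# Generic discrete-time quantum walk on a line with a Hadamard coin.
n_sites, steps = 201, 60
origin = n_sites // 2
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard coin operator

# state[x, c]: amplitude at site x with coin state c
state = np.zeros((n_sites, 2), dtype=complex)
state[origin] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin state

for _ in range(steps):
    state = state @ H.T                            # toss the coin on every site
    shifted = np.zeros_like(state)
    shifted[1:, 0] = state[:-1, 0]                 # coin |0>: step right
    shifted[:-1, 1] = state[1:, 1]                 # coin |1>: step left
    state = shifted

prob = np.sum(np.abs(state) ** 2, axis=1)          # position distribution
x = np.arange(n_sites) - origin
spread = np.sqrt(np.sum(prob * x ** 2))
print(f"standard deviation after {steps} steps: {spread:.1f}")  # grows linearly (ballistic)
```

A classical random walk would instead give a standard deviation growing only as the square root of the number of steps.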
Two-Qubit Module Based on Phonon-Coupled Ge Hole-Spin Qubits: Design, Fabrication, and Readout at 1-4 K
This paper presents a complete design for a two-qubit quantum computing module using germanium hole-spin qubits that communicate through engineered sound waves (phonons) at relatively high operating temperatures of 1-4 K. The work provides detailed fabrication instructions and readout methods for creating entangled qubit pairs that could serve as building blocks for larger quantum computers.
Key Contributions
- Complete device-level design integrating germanium hole-spin qubits with phononic crystal cavities for two-qubit operations
- Detailed nanofabrication process flow and RF readout architecture compatible with 1-4 K operation
- Scalable template for phonon-mediated coupling of spin qubits enabling future entangling gates and Bell state generation
View Full Abstract
We present a device-level design for a two-qubit module based on phonon-coupled germanium (Ge) hole-spin qubits operating at $1$-$4~\mathrm{K}$. Building on prior work on phonon-engineered Ge qubits and phononic-crystal (PnC) cavities, we specify a lithography-ready layout that integrates two gate-defined hole-spin qubits in a strained Ge quantum well with a GHz PnC defect mode that mediates a coherent phonon-based interaction. We detail the SiGe/Ge heterostructure, PnC cavity design, and a compatible nanofabrication process flow, including the gate stack, membrane patterning and release, and RF/DC wiring. We further develop a readout architecture combining spin-to-charge conversion with RF reflectometry on a proximal charge sensor, supported by a cryogenic RF chain optimized for operation at $1$-$4~\mathrm{K}$. Finally, we outline the cryogenic measurement environment, tuning procedures, and a stepwise benchmarking program targeting single-qubit control, phonon-bandgap suppression of relaxation channels, and resolvable phonon-mediated two-qubit coupling. The resulting module provides a scalable template for medium-range coupling of Ge hole-spin qubits and connects materials and phonon engineering with circuit-level readout, enabling future experimental demonstrations of entangling gates, Bell-state generation, and phonon-enabled quantum sensing.
A Geometric Approach to Strongly Correlated Bosons: From $N$-Representability to the Generalized BEC Force
This paper develops a mathematical framework using geometry to describe strongly interacting bosonic particles on lattices, deriving exact functional forms for ground states and discovering a generalized force that appears at the boundary of physically allowed states.
Key Contributions
- Development of geometric framework for strongly correlated lattice bosons using reduced density matrix theory
- Discovery of generalized BEC force at N-representability boundary with explicit mathematical expression
- Establishment of systematic hierarchy for functional approximations in many-body boson systems
View Full Abstract
Building on recent advances in reduced density matrix theory, we develop a geometric framework for describing strongly correlated lattice bosons. We first establish that translational symmetry, together with a fixed pair interaction, enables an exact functional formulation expressed solely in terms of momentum occupation numbers. Employing the constrained-search formalism and exploiting a geometric correspondence between $N$-boson configuration states and their one-particle reduced density matrices, we derive the general form of the ground-state functional. Its structure highlights the omnipresent significance of one-body $N$-representability: (i) the domain is exactly determined by the $N$-representability conditions; (ii) at its boundary, the gradient of the functional diverges repulsively, thereby generalizing the recently discovered Bose-Einstein condensate (BEC) force; and (iii) an explicit expression for this boundary force follows directly from geometric arguments. These key results are demonstrated analytically for few-site lattice systems, and we illustrate the broader significance of our functional form in defining a systematic hierarchy of functional approximations.
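In generic notation (not the paper's), the constrained-search construction referenced above defines the universal part of the functional as a minimization over all $N$-boson states compatible with a given set of momentum occupation numbers:

```latex
\mathcal{F}(\boldsymbol{n}) \;=\; \min_{\Gamma \,\mapsto\, \boldsymbol{n}} \; \operatorname{Tr}\!\big[ \hat{W}\, \Gamma \big]
```

where $\hat{W}$ is the fixed pair interaction and $\Gamma$ ranges over $N$-boson density operators whose one-particle reduced density matrix carries the occupations $\boldsymbol{n} = (n_{\boldsymbol{k}})$; the domain of $\mathcal{F}$ is then exactly the set of one-body $N$-representable occupation vectors, which is the geometric fact the boundary (BEC) force statement builds on.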
Variation on the theme of Jarzynski's inequality
This paper extends Jarzynski's inequality, a fundamental relationship in non-equilibrium statistical thermodynamics that connects work and free energy, to quantum field theory systems and chemical systems. The authors use mathematical techniques to generalize this inequality beyond its standard derivation and explore connections to fluctuation theories.
Key Contributions
- Extension of Jarzynski's inequality to many-body quantum field theory systems using functional-integral techniques
- Analysis of the inequality for chemical systems in both linear-response and non-linear thermodynamic regimes
View Full Abstract
The Jarzynski equality, which relates the equilibrium free-energy difference to an average of non-equilibrium work, plays a central role in modern non-equilibrium statistical thermodynamics. In this paper, we study a weaker consequence of this relation, known as Jarzynski's inequality, which can be formally obtained from the Jarzynski equality via Jensen's inequality. We identify and analyze several extensions of Jarzynski's inequality that go beyond its direct derivation from the Jarzynski equality. In particular, we consider chemical systems both in the linear-response regime and away from linear thermodynamics. Furthermore, by employing functional-integral techniques, we extend Jarzynski's inequality to many-body statistical systems described by quantum field theory. Salient issues, such as connections of the Jarzynski inequality with the maximum work theorem and the Landau--Lifshitz theory of fluctuations, are also discussed.
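For convenience, the relation in question and the Jensen step that yields the inequality (standard background restated here, not the paper's extensions):

```latex
\big\langle e^{-\beta W} \big\rangle \;=\; e^{-\beta\, \Delta F}
\quad \Longrightarrow \quad
\langle W \rangle \;\ge\; \Delta F
```

with $\beta = 1/k_B T$, $W$ the work performed in a single realization of the non-equilibrium protocol, and $\Delta F$ the equilibrium free-energy difference; the implication uses Jensen's inequality, $\langle e^{-\beta W} \rangle \ge e^{-\beta \langle W \rangle}$.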
Quantum simulation with Rydberg ions in a Penning trap
This paper proposes a new quantum simulation platform using Rydberg ions in Penning traps to study many-body spin systems with dramatically stronger interactions than conventional ion trap quantum simulators. The approach uses strong dipolar interactions between Rydberg electronic states to achieve MHz-scale spin-spin coupling strengths, enabling exploration of long-timescale quantum phenomena in frustrated spin systems.
Key Contributions
- Novel quantum simulation platform combining Rydberg ions with Penning trap confinement for enhanced spin-spin interactions
- Demonstration that MHz-scale interaction strengths are achievable under realistic experimental conditions
- Analysis of how strong electric and magnetic fields in Penning traps affect Rydberg state properties
View Full Abstract
Quantum simulation of interacting many-body spin systems is routinely performed with cold trapped ions, and systems with hundreds of spins have been studied in one and two dimensions. In the most common realizations of these platforms, spin degrees of freedom are encoded in low-lying electronic levels, and interactions among the spins are mediated through crystal vibrations. Here we propose a new approach which enables the quantum simulation of two-dimensional spin systems with interaction strengths that are increased by orders of magnitude. This, together with the unprecedented longevity of trapped ions, opens an avenue for the exploration of phenomena that take place on long timescales, e.g., slow and collective relaxation in frustrated and kinetically constrained systems. Our platform makes use of the strong dipolar interactions among electronic Rydberg states and planar confinement provided by a Penning trap. We investigate how the strong electric and magnetic fields that form this trap affect the properties of the Rydberg states and show that spin-spin interaction strengths on the order of MHz are achievable under experimentally realistic conditions. As a brief illustration of the capabilities of this quantum simulator, we study the entanglement in a frustrated spin system realized by three ions.
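A quick order-of-magnitude check of the MHz-scale claim; the transition dipole and ion spacing below are generic assumptions for Rydberg ions in a planar crystal, not values from the paper:

```python
import scipy.constants as const

# Dipole-dipole interaction V = d^2 / (4*pi*eps0*r^3) between two Rydberg ions.
d_au = 1000                  # assumed transition dipole in units of e*a0 (typical for high n)
r = 10e-6                    # assumed ion spacing in the crystal: 10 micrometres

a0 = const.physical_constants["Bohr radius"][0]
d = d_au * const.e * a0                                   # dipole moment in C*m
V = d ** 2 / (4 * const.pi * const.epsilon_0 * r ** 3)    # interaction energy in joules

print(f"V/h ≈ {V / const.h / 1e6:.2f} MHz")               # ~1 MHz, consistent with the abstract
```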
Scattering Cross Section Formula Derived From Macroscopic Model of Detectors
This paper derives and justifies the scattering cross section formula used in quantum mechanics by modeling realistic detection processes. The authors provide two different macroscopic models of particle detectors and show they both yield the same asymptotic probability distribution for detection events.
Key Contributions
- Provides rigorous justification for commonly used scattering cross section formula through macroscopic detector models
- Develops two independent derivation methods: negative imaginary potential approach and repeated measurement approach
- Extends results to non-spherical surfaces, multi-particle systems, time-dependent surfaces, and relativistic Dirac equation
View Full Abstract
We are concerned with the justification of the statement, commonly (explicitly or implicitly) used in quantum scattering theory, that for a free non-relativistic quantum particle with initial wave function $\Psi_0(\boldsymbol{x})$, surrounded by detectors along a sphere of large radius $R$, the probability distribution of the detection time and place has asymptotic density (i.e., scattering cross section) $\sigma(\boldsymbol{x},t)= m^3 \hbar^{-3} R\, t^{-4}\, |\widehat{\Psi}_0(m\boldsymbol{x}/\hbar t)|^2$ with $\widehat{\Psi}_0$ the Fourier transform of $\Psi_0$. We give two derivations of this formula, based on different macroscopic models of the detection process. The first one consists of a negative imaginary potential of strength $\lambda>0$ in the detector volume (i.e., outside the sphere of radius $R$) in the limit $R\to\infty$, $\lambda\to 0$, $R\lambda\to \infty$. The second one consists of repeated nearly-projective measurements of (approximately) the observable $1_{|\boldsymbol{x}|>R}$ at times $\mathscr{T},2\mathscr{T},3\mathscr{T},\ldots$ in the limit $R\to\infty$, $\mathscr{T}\to\infty$, $\mathscr{T}/R\to 0$; this setup is similar to that of the quantum Zeno effect, except that there one considers $\mathscr{T}\to 0$ instead of $\mathscr{T}\to\infty$. We also provide a comparison to Bohmian mechanics: while in the absence of detectors, the arrival times and places of the Bohmian trajectories on the sphere of radius $R$ have asymptotic distribution density given by the same formula as $\sigma$, their deviation from the detection times and places is not necessarily small, although it is small compared to $R$, so the effect of the presence of detectors on the particle can be neglected in the far-field regime. We also cover the generalization to surfaces with non-spherical shape, to the case of $N$ non-interacting particles, to time-dependent surfaces, and to the Dirac equation.
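As a consistency check on the quoted formula for $\sigma(\boldsymbol{x},t)$, one can verify numerically that it integrates to unit total detection probability over the sphere and over all times for a simple isotropic Gaussian packet (units with $\hbar = m = 1$; this is an illustrative check, not a computation from the paper):

```python
import numpy as np
from scipy.integrate import quad

# Units with hbar = m = 1. Isotropic Gaussian momentum-space wave function,
# normalized so that the integral of |Psi0_hat(k)|^2 over all k equals 1.
sigma_k = 1.0

def psi0_hat_sq(k):
    return (np.pi * sigma_k ** 2) ** -1.5 * np.exp(-(k / sigma_k) ** 2)

R = 50.0  # radius of the detecting sphere (far field: R >> initial packet width)

def sigma_density(t):
    # sigma(x, t) = m^3 hbar^-3 R t^-4 |Psi0_hat(m x / (hbar t))|^2 with |x| = R
    return R * t ** -4 * psi0_hat_sq(R / t)

# The density is isotropic here, so the surface integral over the sphere
# contributes the factor 4*pi*R^2; then integrate over all detection times.
total, _ = quad(lambda t: 4 * np.pi * R ** 2 * sigma_density(t), 0.0, np.inf)
print(f"total detection probability ≈ {total:.6f}")  # should come out close to 1
```

Analytically, the substitution $k = mR/\hbar t$ reduces the same integral to $\int |\widehat{\Psi}_0(\boldsymbol{k})|^2\, d^3k = 1$, independent of $R$.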