Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
This page updates automatically as new papers are published and covers one week of arXiv publishing (Sunday to Thursday). An archive of previous weeks is at the bottom.
Manjushri: A Tool for Equivalence Checking of Quantum Circuits
This paper introduces Manjushri, a new automated tool for checking whether two quantum circuits produce equivalent results. The tool uses local projections and weighted binary decision diagrams to verify quantum circuit equivalence efficiently, showing significant speed improvements over the existing ECMC tool for circuits up to depth 30.
Key Contributions
- Introduction of Manjushri framework using local projections and WBDDs for quantum circuit equivalence checking
- Comprehensive experimental comparison showing 8-10x speed improvements over the existing ECMC tool for circuits up to depth 30
- Demonstration of scalability to large quantum circuits with up to 128 qubits
View Full Abstract
Verifying whether two quantum circuits are equivalent is a central challenge in the compilation and optimization of quantum programs. We introduce Manjushri, a new automated framework for scalable quantum-circuit equivalence checking. Manjushri uses local projections as discriminative circuit fingerprints, implemented with weighted binary decision diagrams (WBDDs), yielding a compact and efficient symbolic representation of quantum behavior. We present an extensive experimental evaluation that, for random 1D Clifford+$T$ circuits, explores the trade-off between Manjushri and ECMC, a tool for equivalence checking based on a much different approach. Manjushri is much faster up to depth 30 (with the crossover point varying from 39-49, depending on the number of qubits and whether the input circuits are equivalent or inequivalent): when inputs are equivalent, Manjushri is about 10$\times$ faster (or more); when inputs are inequivalent, Manjushri is about 8$\times$ faster (or more). For both kinds of equivalence-checking outcomes, ECMC's success rate out to depth 50 is impressive on 32- and 64-qubit circuits: on such circuits, ECMC is almost uniformly successful. However, ECMC struggled on 128-qubit circuits for some depths. Manjushri is almost uniformly successful out to about depth 38, before tailing off to about 75% at depth 50 (falling to 0% at depth 48 for 128-qubit circuits that are equivalent). These results establish that Manjushri is a practical and scalable solution for large-scale quantum-circuit verification, and would be the preferred choice unless clients need to check equivalence of circuits of depth $>$38.
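As a point of reference (and not the paper's method), the brute-force baseline for equivalence checking is to build both circuits' unitaries and compare them up to a global phase; this needs $2^n \times 2^n$ matrices and is exactly the exponential cost that symbolic representations such as WBDDs are designed to avoid. A minimal single-qubit sketch:

```python
# Brute-force equivalence check: build each circuit's unitary and compare
# up to a global phase. This is only the exponential-cost baseline that
# WBDD-based tools such as Manjushri are built to avoid, shown for one qubit.
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def circuit_unitary(gates):
    """gates: list of single-qubit matrices, applied left to right."""
    U = np.eye(2, dtype=complex)
    for g in gates:
        U = g @ U
    return U

def equivalent_up_to_phase(U, V, tol=1e-9):
    """True if U = exp(i*phi) * V for some global phase phi."""
    M = U.conj().T @ V
    if abs(M[0, 0]) < tol:
        return False
    phase = M[0, 0] / abs(M[0, 0])
    return np.allclose(M, phase * np.eye(M.shape[0]), atol=tol)

print(equivalent_up_to_phase(circuit_unitary([T, T]), S))  # True: T;T = S
print(equivalent_up_to_phase(circuit_unitary([T, H]), S))  # False
```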
Quantum bootstrap product codes
This paper introduces a new method called 'quantum bootstrap product' for constructing quantum error-correcting codes that goes beyond traditional approaches by solving consistency equations rather than just combining existing codes. The method unifies different types of important quantum codes and can generate self-correcting codes that surpass previous theoretical limits.
Key Contributions
- Introduction of quantum bootstrap product framework that extends beyond homological paradigm for constructing CSS codes
- Unification of diverse code families including hypergraph product codes and fracton codes under single framework
- Development of fork complexes structure that elucidates topological structures of fracton codes
- Generation of self-correcting quantum codes that surpass code-rate upper bounds of existing methods
View Full Abstract
Product constructions constitute a powerful method for generating quantum CSS codes, yielding celebrated examples such as toric codes and asymptotically good low-density parity check (LDPC) codes. Since a CSS code is fully described by a chain complex, existing product formalisms are predominantly homological, defined via the tensor product of the underlying chain complexes of input codes, thereby establishing a natural connection between quantum codes and topology. In this Letter, we introduce the quantum bootstrap product (QBP), an approach that extends beyond this standard homological paradigm. Specifically, a QBP code is determined by solving a consistency condition termed the "bootstrap equation". We find that the QBP paradigm unifies a wide range of important codes, including general hypergraph product (HGP) codes of arbitrary dimensions and fracton codes typically represented by the X-cube code. Crucially, the solutions to the bootstrap equation yield chain complexes where the chain groups and associated boundary maps consist of multiple components. We term such structures fork complexes. This structure elucidates the underlying topological structures of fracton codes, akin to foliated fracton order theories. Beyond conceptual insights, we demonstrate that the QBP paradigm can generate self-correcting quantum codes from input codes with constant energy barriers and surpass the code-rate upper bounds inherent to HGP codes. Our work thus substantially extends the scope of quantum product codes and provides a versatile framework for designing fault-tolerant quantum memories.
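For orientation, the standard homological product that the QBP generalizes is the tensor product of two length-one chain complexes over $\mathbb{F}_2$ (this is textbook background, not the paper's bootstrap equation): given $\partial^A: A_1 \to A_0$ and $\partial^B: B_1 \to B_0$, the product complex is

$$A_1\otimes B_1 \xrightarrow{\ \partial_2\ } (A_0\otimes B_1)\oplus(A_1\otimes B_0) \xrightarrow{\ \partial_1\ } A_0\otimes B_0, \qquad \partial_2 = \begin{pmatrix}\partial^A\otimes I\\ I\otimes\partial^B\end{pmatrix}, \quad \partial_1 = \begin{pmatrix} I\otimes\partial^B & \partial^A\otimes I\end{pmatrix},$$

so that $\partial_1\partial_2 = 2\,(\partial^A\otimes\partial^B) = 0$ over $\mathbb{F}_2$, which is the CSS condition $H_X H_Z^{\mathsf T}=0$ of the hypergraph product code. The bootstrap equation replaces this fixed tensor-product recipe with a consistency condition whose solutions include the fork complexes described above.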
Efficient learning of logical noise from syndrome data
This paper develops methods to efficiently characterize logical errors in fault-tolerant quantum computers by analyzing syndrome measurement data from error correction, rather than requiring many direct measurements of rare logical errors. The authors extend previous work to realistic circuit-level noise and demonstrate orders-of-magnitude improvements in sample efficiency.
Key Contributions
- Extended syndrome-based logical error characterization from phenomenological to realistic circuit-level noise models
- Developed efficient estimators with provable sample complexity guarantees using Fourier analysis and compressed sensing
- Demonstrated orders-of-magnitude sample-complexity savings over direct logical benchmarking on syndrome-extraction circuits
View Full Abstract
Characterizing errors in quantum circuits is essential for device calibration, yet detecting rare error events requires a large number of samples. This challenge is particularly severe in calibrating fault-tolerant, error-corrected circuits, where logical error probabilities are suppressed to higher order relative to physical noise and are therefore difficult to calibrate through direct logical measurements. Recently, Wagner et al. [PRL 130, 200601 (2023)] showed that, for phenomenological Pauli noise models, the logical channel can instead be inferred from syndrome measurement data generated during error correction. Here, we extend this framework to realistic circuit-level noise models. From a unified code-theoretic perspective and spacetime code formalism, we derive necessary and sufficient conditions for learning the logical channel from syndrome data alone and explicitly characterize the learnable degrees of freedom of circuit-level Pauli faults. Using Fourier analysis and compressed sensing, we develop efficient estimators with provable guarantees on sample complexity and computational cost. We further present an end-to-end protocol and demonstrate its performance on several syndrome-extraction circuits, achieving orders-of-magnitude sample-complexity savings over direct logical benchmarking. Our results establish syndrome-based learning as a practical approach to characterizing the logical channel in fault-tolerant quantum devices.
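A deliberately simplified, phenomenological cartoon of the underlying idea (closer to the Wagner et al. setting than to this paper's circuit-level extension) is to estimate a physical error rate purely from how often syndrome bits fire:

```python
# Cartoon of learning a noise rate from syndrome data alone: a 3-qubit
# bit-flip repetition code with i.i.d. X errors at unknown rate p. Each
# syndrome bit s_i = e_i XOR e_{i+1} fires with probability 2p(1-p), so p
# is recovered by inverting that relation -- no direct logical measurement
# required. The paper handles full circuit-level Pauli noise with
# Fourier/compressed-sensing estimators, which this toy does not attempt.
import numpy as np

rng = np.random.default_rng(0)
p_true, shots = 0.03, 200_000

errors = rng.random((shots, 3)) < p_true      # i.i.d. X errors per shot
syndromes = errors[:, :2] ^ errors[:, 1:]     # s_1 = e_1^e_2, s_2 = e_2^e_3
f = syndromes.mean()                          # empirical firing frequency

p_hat = 0.5 * (1 - np.sqrt(1 - 2 * f))        # smaller root of 2p(1-p) = f
print(f"true p = {p_true}, estimated p = {p_hat:.4f}")
```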
A Bravyi-König theorem for Floquet codes generated by locally conjugate instantaneous stabiliser groups
This paper extends the Bravyi-König theorem, which limits the types of logical operations possible in topological quantum error correcting codes, to a new class called Floquet codes where the codespace is dynamically generated through time-dependent measurements. The authors prove that similar fundamental limitations apply to these time-dependent codes and introduce a broader class of operations that work within these constraints.
Key Contributions
- Extension of Bravyi-König theorem to Floquet codes with locally conjugate stabilizer groups
- Introduction and characterization of generalized unitaries for Floquet codes that preserve logical operations without preserving codespace at each time step
View Full Abstract
The Bravyi-König (BK) theorem is an important no-go theorem for the dynamics of topological stabiliser quantum error correcting codes. It states that any logical operation on a $D$-dimensional topological stabiliser code that can be implemented by a short-depth circuit acts on the codespace as an element of the $D$-th level of the Clifford hierarchy. In recent years, a new type of quantum error correcting codes based on Pauli stabilisers, dubbed Floquet codes, has been introduced. In Floquet codes, syndrome measurements are arranged such that they dynamically generate a codespace at each time step. Here, we show that the BK theorem holds for a definition of Floquet codes based on locally conjugate stabiliser groups. Moreover, we introduce and define a class of generalised unitaries in Floquet codes that need not preserve the codespace at each time step, but that combined with the measurements constitute a valid logical operation. We derive a canonical form of these generalised unitaries and show that the BK theorem holds for them too.
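For readers unfamiliar with the Clifford hierarchy referenced by the theorem, the standard definition (background, not a result of this paper) is

$$\mathcal{C}_1 = \text{the Pauli group}, \qquad \mathcal{C}_k = \{\,U : U P U^\dagger \in \mathcal{C}_{k-1}\ \text{for all } P \in \mathcal{C}_1\,\}, \quad k \ge 2,$$

so $\mathcal{C}_2$ is the Clifford group and the $T$ gate sits in $\mathcal{C}_3$. The BK theorem bounds any short-depth logical operation on a $D$-dimensional topological stabiliser code to act within $\mathcal{C}_D$; this paper shows that the same bound applies to Floquet codes defined via locally conjugate instantaneous stabiliser groups, including their generalised unitaries.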
Error-detectable Universal Control for High-Gain Bosonic Quantum Error Correction
This paper introduces error-detectable universal control for bosonic quantum error correction, where ancilla relaxation events are detected and the corresponding trajectories discarded to suppress operational errors. The authors achieve universal gates with 99.6% fidelity and demonstrate 8.33× QEC gains beyond break-even for binomial codes, with projections showing that 10× gains are achievable.
Key Contributions
- Error-detectable universal control method that suppresses ancilla-induced operational errors by detecting and discarding trajectories with ancilla relaxation events
- Demonstration of 8.33× QEC gains beyond break-even with universal gates achieving 99.6% fidelity for binomial codes
View Full Abstract
Protecting quantum information through quantum error correction (QEC) is a cornerstone of future fault-tolerant quantum computation. However, current QEC-protected logical qubits have only achieved coherence times about twice those of their best physical constituents. Here, we show that the primary barrier to higher QEC gains is ancilla-induced operational errors rather than intrinsic cavity coherence. To overcome this bottleneck, we introduce error-detectable universal control of bosonic modes, wherein ancilla relaxation events are detected and the corresponding trajectories discarded, thereby suppressing operational errors on logical qubits. For binomial codes, we demonstrate universal gates with fidelities exceeding $99.6\%$ and QEC gains of $8.33\times$ beyond break-even. Our results establish that gains beyond $10\times$ are achievable with state-of-the-art devices, establishing a path toward fault-tolerant bosonic quantum computing.
Hierarchical quantum decoders
This paper introduces a new family of quantum error correction decoders that use mathematical optimization techniques to provide a controllable trade-off between decoding speed and accuracy. The approach uses the Lasserre Sum-of-Squares hierarchy to create multiple levels of decoders, where lower levels are faster but less accurate, while higher levels are slower but approach optimal performance.
Key Contributions
- Development of hierarchical quantum decoders using Sum-of-Squares optimization with tunable speed-accuracy trade-offs
- Demonstration that low levels of the hierarchy significantly outperform standard Linear Programming relaxations on surface codes and color codes
View Full Abstract
Decoders are a critical component of fault-tolerant quantum computing. They must identify errors based on syndrome measurements to correct quantum states. While finding the optimal correction is NP-hard and thus extremely difficult, approximate decoders with faster runtime often rely on uncontrolled heuristics. In this work, we propose a family of hierarchical quantum decoders with a tunable trade-off between speed and accuracy while retaining guarantees of optimality. We use the Lasserre Sum-of-Squares (SOS) hierarchy from optimization theory to relax the decoding problem. This approach creates a sequence of Semidefinite Programs (SDPs). Lower levels of the hierarchy are faster but approximate, while higher levels are slower but more accurate. We demonstrate that even low levels of this hierarchy significantly outperform standard Linear Programming relaxations. Our results on rotated surface codes and honeycomb color codes show that the SOS decoder approaches the performance of exact decoding. We find that Levels 2 and 3 of our hierarchy perform nearly as well as the exact solver. We analyze the convergence using rank-loop criteria and compare the method against other relaxation schemes. This work bridges the gap between fast heuristics and rigorous optimal decoding.
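Schematically (the paper's exact formulation may differ, for instance in how degeneracy is handled), the optimization being relaxed is minimum-weight decoding:

$$\hat e = \arg\min_{e\in\{0,1\}^n} \sum_i w_i e_i \quad \text{subject to} \quad He = s \pmod 2,$$

which is NP-hard in general. Dropping the integrality constraint gives the standard LP relaxation; the Lasserre/SOS hierarchy instead constrains moment matrices of the binary variables to be positive semidefinite, producing a sequence of SDPs whose level trades runtime for tightness.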
Reinforcement Learning for Adaptive Composition of Quantum Circuit Optimisation Passes
This paper develops a reinforcement learning approach to automatically optimize the order of quantum circuit optimization passes, achieving better two-qubit gate reduction than default sequences. The RL agent learns to compose circuit optimization sequences tailored to individual quantum circuits rather than using general-purpose optimization sequences.
Key Contributions
- Development of reinforcement learning framework for adaptive quantum circuit optimization pass composition
- Demonstration of 57.7% mean two-qubit gate reduction, compared to 41.8% for the best default pass sequence
View Full Abstract
Many quantum software development kits provide a suite of circuit optimisation passes. These passes have been highly optimised and tested in isolation. However, the order in which they are applied is left to the user, or else defined in general-purpose default pass sequences. While general-purpose sequences miss opportunities for optimisation which are particular to individual circuits, designing pass sequences bespoke to particular circuits requires exceptional knowledge about quantum circuit design and optimisation. Here we propose and demonstrate training a reinforcement learning agent to compose optimisation-pass sequences. In particular the agent's action space consists of passes for two-qubit gate count reduction used in default PyTKET pass sequences. For the circuits in our diverse test set, the (mean, median) fraction of two-qubit gates removed by the agent is $(57.7\%, \ 56.7 \%)$, compared to $(41.8 \%, \ 50.0 \%)$ for the next best default pass sequence.
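To make the setup concrete, here is a bare-bones sketch of such an environment. The gate-list circuit, the two placeholder "passes", and the random rollout are invented for illustration only; they are not PyTKET passes and not the authors' training loop.

```python
# Generic sketch of a pass-composition environment: actions are optimisation
# passes, reward is the number of two-qubit gates removed. Everything here is
# a placeholder standing in for the real compiler passes and RL agent.
import random

def remove_adjacent_inverse_cx(circ):
    """Toy pass: cancel back-to-back identical CX gates (CX is self-inverse)."""
    out = []
    for gate in circ:
        if out and gate[0] == "CX" and out[-1] == gate:
            out.pop()
        else:
            out.append(gate)
    return out

def identity_pass(circ):
    return list(circ)

PASSES = [remove_adjacent_inverse_cx, identity_pass]   # the action space

def two_qubit_count(circ):
    return sum(1 for g in circ if g[0] == "CX")

class PassEnv:
    def __init__(self, circ, max_steps=5):
        self.start, self.max_steps = list(circ), max_steps

    def reset(self):
        self.circ, self.steps = list(self.start), 0
        return self.circ

    def step(self, action):
        before = two_qubit_count(self.circ)
        self.circ = PASSES[action](self.circ)
        self.steps += 1
        reward = before - two_qubit_count(self.circ)   # gates removed this step
        done = self.steps >= self.max_steps
        return self.circ, reward, done

# Random-policy rollout as a stand-in for the trained agent.
env = PassEnv([("H", 0), ("CX", 0, 1), ("CX", 0, 1), ("CX", 1, 2)])
env.reset()
done = False
while not done:
    _, _, done = env.step(random.randrange(len(PASSES)))
print("remaining CX gates:", two_qubit_count(env.circ))
```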
A biased-erasure cavity qubit with hardware-efficient quantum error detection
This paper demonstrates a new type of quantum bit (qubit) called a biased-erasure qubit that can detect its own errors very efficiently. The researchers encoded quantum information in microwave cavity states and achieved over 99% error detection while maintaining good quantum coherence, representing a significant step toward fault-tolerant quantum computing.
Key Contributions
- Demonstration of hardware-efficient biased-erasure qubit with 265:1 erasure bias ratio
- Achievement of over 99.3% error detection efficiency with sub-1% logical assignment errors
- Establishment of strong error hierarchy with 6x coherence improvement beyond break-even point
- Hardware-efficient platform using single cavity with transmon ancilla for scalable error correction
View Full Abstract
Erasure qubits are beneficial for quantum error correction due to their relaxed threshold requirements. While dual-rail erasure qubits have been demonstrated with a strong error hierarchy in circuit quantum electrodynamics, biased-erasure qubits -- where erasures originate predominantly from one logical basis state -- offer further advantages. Here, we realize a hardware-efficient biased-erasure qubit encoded in the vacuum and two-photon Fock states of a single microwave cavity. The qubit exhibits an erasure bias ratio of over 265. By using a transmon ancilla for logical measurements and mid-circuit erasure detections, we achieve logical state assignment errors below 1% and convert over 99.3% leakage errors into detected erasures. After postselection against erasures, we achieve effective logical relaxation and dephasing rates of $(6.2~\mathrm{ms})^{-1}$ and $(3.1~\mathrm{ms})^{-1}$, respectively, which exceed the erasure error rate by factors of 31 and 15, establishing a strong error hierarchy within the logical subspace. These postselected error rates indicate a coherence gain of about 6.0 beyond the break-even point set by the best physical qubit encoded in the two lowest Fock states in the cavity. Moreover, randomized benchmarking with interleaved erasure detections reveals a residual logical gate error of 0.29%. This work establishes a compact and hardware-efficient platform for biased-erasure qubits, promising concatenations into outer-level stabilizer codes toward fault-tolerant quantum computation.
High-Coherence and High-frequency Quantum Computing: The Design of a High-Frequency, High-Coherence and Scalable Quantum Computing Architecture
This paper proposes a design for high-frequency transmon quantum computing architecture operating at 11-13.5 GHz (above the typical 4-6 GHz range), aiming to achieve longer coherence times and better scalability. The design includes an 8-qubit system with potential expansion to 72 qubits using advanced superconducting materials and manufacturing techniques.
Key Contributions
- High-frequency transmon qubit architecture operating beyond 10 GHz
- Scalable design from 8 to 72 qubits with new connection topology
- Integration of advanced superconducting materials for improved coherence times
View Full Abstract
High-coherence, fault-tolerant and scalable quantum computing architectures with unprecedented long coherence times, faster gates, low losses and low bit-flip errors may be one of the only ways forward to achieve the true quantum advantage. In this context, high-frequency high-coherence (HCQC) qubits with new high-performance topologies could be a significant step towards efficient and high-fidelity quantum computing by facilitating compact size, higher scalability and higher than conventional operating temperatures. Although transmon type qubits are designed and manufactured routinely in the range of a few Giga-Hertz, normally from 4 to 6 GHz (and, at times, up to around 10GHz), achieving higher-frequency operation has challenges and entails special design and manufacturing considerations. This report presents the proposal and preliminary design of an 8-qubit transmon (with possible upgrade to up to 72 qubits on a chip) architecture working beyond an operation frequency of 10GHz, as well as presents a new connection topology. The current design spans a range of around 11 to 13.5 GHz (with a possible full range of 9-12GHz at the moment), with a central optimal operating frequency of 12.0 GHz, with the aim to possibly achieve a stable, compact and low-charge-noise operation, as lowest as possible as per the existing fabrication techniques. The aim is to achieve average relaxation times of up to 1.9ms with average quality factors of up to 2.75 x 10^7 after trials, while exploiting the new advances in superconducting junction manufacturing using tantalum and niobium/aluminum/aluminum oxide tri-layer structures on high-resistivity silicon substrates (carried out elsewhere by other groups and referred in this report).
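For a sense of scale, the textbook transmon approximation $f_{01} \approx \sqrt{8 E_J E_C} - E_C$ (energies in frequency units, anharmonicity $\approx -E_C$) shows what pushing $f_{01}$ toward 12 GHz implies for the junction parameters. The $E_J$, $E_C$ values below are illustrative guesses, not taken from the paper's design.

```python
# Back-of-the-envelope transmon numbers using the standard approximation
# f01 ~ sqrt(8*EJ*EC) - EC with energies in GHz (i.e. E/h). The EJ/EC pairs
# are made up for illustration and are not the paper's design values.
import numpy as np

def transmon_f01(EJ_GHz, EC_GHz):
    return np.sqrt(8 * EJ_GHz * EC_GHz) - EC_GHz

for EJ, EC in [(12.0, 0.25), (50.0, 0.25), (75.0, 0.3)]:
    print(f"EJ={EJ:5.1f} GHz, EC={EC:.2f} GHz -> "
          f"f01 ~ {transmon_f01(EJ, EC):5.2f} GHz, EJ/EC = {EJ/EC:.0f}")
```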
Transversal gates for quantum CSS codes
This paper develops methods to compute transversal gates for CSS quantum error-correcting codes, specifically focusing on diagonal gates and their logical actions. The authors provide explicit equations defining these gate groups and apply their approach to monomial codes, extending previous results on several important code families.
Key Contributions
- Development of explicit equations defining transversal gate groups for CSS codes
- Complete characterization of transversal gates for monomial codes including polar codes and triorthogonal codes
View Full Abstract
In this paper, we focus on the problem of computing the set of diagonal transversal gates fixing a CSS code. We determine the logical actions of the gates as well as the groups of transversal gates that induce non-trivial logical gates and logical identities. We explicitly declare the set of equations defining the groups, a key advantage and differentiator of our approach. We compute the complete set of transversal stabilizers and transversal gates for any CSS code arising from monomial codes, a family that includes decreasing monomial codes and polar codes. As a consequence, we recover and extend some results in the literature on CSS-T codes, triorthogonal codes, and divisible codes.
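One classical condition behind several of the code families mentioned above is Bravyi-Haah triorthogonality; the checker below is standard background added for illustration, not the explicit equation set derived in the paper.

```python
# Check the Bravyi-Haah triorthogonality conditions for a binary matrix G:
# every pair of rows and every triple of rows must have even component-wise
# overlap. This underlies transversal-T constructions such as triorthogonal
# codes; it is background, not the paper's group-defining equations.
import numpy as np
from itertools import combinations

def is_triorthogonal(G):
    G = np.asarray(G) % 2
    rows = range(G.shape[0])
    pairs_ok = all((G[i] * G[j]).sum() % 2 == 0
                   for i, j in combinations(rows, 2))
    triples_ok = all((G[i] * G[j] * G[k]).sum() % 2 == 0
                     for i, j, k in combinations(rows, 3))
    return pairs_ok and triples_ok

# Trivially triorthogonal example: rows with disjoint supports.
G = np.array([[1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1]])
print(is_triorthogonal(G))   # True: all pairwise/triple overlaps are empty
```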
In-situ benchmarking of fault-tolerant quantum circuits. I. Clifford circuits
This paper develops methods to benchmark and characterize both physical and logical errors in fault-tolerant quantum circuits using syndrome data collected during circuit execution, rather than requiring separate benchmarking runs. The approach can efficiently estimate error rates and predict logical fidelities even when logical errors are exponentially suppressed.
Key Contributions
- Development of in-situ benchmarking methods for fault-tolerant quantum circuits using syndrome data
- Mapping of fault-tolerant Clifford circuits to subsystem codes using spacetime formalism
- Polynomial-time estimation scheme that provides exponential advantage over direct fidelity estimation methods
- Necessary and sufficient conditions for learnability of physical and logical noise from syndrome data
View Full Abstract
Benchmarking physical devices and verifying logical algorithms are important tasks for scalable fault-tolerant quantum computing. Numerous protocols exist for benchmarking devices before running actual algorithms. In this work, we show that both physical and logical errors of fault-tolerant circuits can even be characterized in-situ using syndrome data. To achieve this, we map general fault-tolerant Clifford circuits to subsystem codes using the spacetime code formalism and develop a scheme for estimating Pauli noise in Clifford circuits using syndrome data. We give necessary and sufficient conditions for the learnability of physical and logical noise from given syndrome data, and show that we can accurately predict logical fidelities from the same data. Importantly, our approach requires only a polynomial sample size, even when the logical error rate is exponentially suppressed by the code distance, and thus gives an exponential advantage against methods that use only logical data such as direct fidelity estimation. We demonstrate the practical applicability of our methods in various scenarios using synthetic data as well as the experimental data from a recent demonstration of fault-tolerant circuits by Bluvstein et al. [Nature 626, 7997 (2024)]. Our methods provide an efficient, in-situ way of characterizing a fault-tolerant quantum computer to help gate calibration, improve decoding accuracy, and verify logical circuits.
Quantum Memory and Autonomous Computation in Two Dimensions
This paper presents a method for quantum error correction that works passively in two dimensions, without active measurements or classical processing. The construction is a quantum cellular automaton with self-correcting properties whose memory lifetime diverges in the thermodynamic limit, and it supports fault-tolerant universal quantum computation.
Key Contributions
- First construction achieving passive quantum error correction in physically realistic two spatial dimensions
- Construction of a self-correcting universal quantum computer using hierarchical quantum cellular automata
- Proof of noise threshold below which logical errors are arbitrarily suppressed with increasing system size
View Full Abstract
Standard approaches to quantum error correction (QEC) require active maintenance using measurements and classical processing. The possibility of passive QEC has so far only been established in an unphysical number of spatial dimensions. In this work, we present a simple method for autonomous QEC in two spatial dimensions, formulated as a quantum cellular automaton with a fixed, local and translation-invariant update rule. The construction uses hierarchical, self-simulating control elements based on the classical schemes from the seminal results of Gács (1986, 1989) together with a measurement-free concatenated code. We analyze the system under a local noise model and prove a noise threshold below which the logical errors are suppressed arbitrarily with increasing system size and the memory lifetime diverges in the thermodynamic limit. The scheme admits a continuous-time implementation as a time-independent, translation-invariant local Lindbladian with engineered dissipative jump operators. Further, the recursive nature of our protocol allows for the fault-tolerant encoding of arbitrary quantum circuits and thus constitutes a self-correcting universal quantum computer.
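As a purely classical point of comparison (and emphatically not the paper's quantum construction), Toom's north-east-center majority rule is the textbook example of a bit memory stabilised in 2D by a fixed, local, translation-invariant update rule:

```python
# Classical toy analogue of self-correction by a local rule: Toom's NEC rule,
# where each cell becomes the majority of itself, its north neighbour and its
# east neighbour. Only an illustration of "fixed, local, translation-invariant
# update rule"; the paper's hierarchical quantum cellular automaton is far
# more involved.
import numpy as np

rng = np.random.default_rng(1)
L, steps, noise = 64, 50, 0.05

state = np.ones((L, L), dtype=int)          # encode classical "1" everywhere

for _ in range(steps):
    flips = rng.random((L, L)) < noise      # sparse random bit flips
    state ^= flips.astype(int)
    north = np.roll(state, -1, axis=0)
    east = np.roll(state, -1, axis=1)
    state = ((state + north + east) >= 2).astype(int)   # NEC majority vote

print("fraction of cells still 1:", state.mean())
```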
Computer Science Challenges in Quantum Computing: Early Fault-Tolerance and Beyond
This paper analyzes how quantum computing progress is shifting from hardware-only challenges to computer science challenges, focusing on the systems design, software, and integration needs for early fault-tolerant quantum computers with small numbers of logical qubits.
Key Contributions
- Identifies computer science research challenges for early fault-tolerant quantum computing
- Organizes research priorities around algorithms, error correction, software, and architecture for near-term quantum systems
View Full Abstract
Quantum computing is entering a period in which progress will be shaped as much by advances in computer science as by improvements in hardware. The central thesis of this report is that early fault-tolerant quantum computing shifts many of the primary bottlenecks from device physics alone to computer-science-driven system design, integration, and evaluation. While large-scale, fully fault-tolerant quantum computers remain a long-term objective, near- and medium-term systems will support early fault-tolerant computation with small numbers of logical qubits and tight constraints on error rates, connectivity, latency, and classical control. How effectively such systems can be used will depend on advances across algorithms, error correction, software, and architecture. This report identifies key research challenges for computer scientists and organizes them around these four areas, each centered on a fundamental question.
Theory of low-weight quantum codes
This paper develops theoretical foundations for quantum low-density parity-check (qLDPC) codes with constrained check weights, proving that optimal weight calculation is NP-hard and establishing analytical bounds on code parameters. The authors provide explicit characterizations of low-weight stabilizer codes and develop linear programming methods to determine optimal parameters for practical quantum error correction.
Key Contributions
- Proved that computing optimal code weight for stabilizer codes is NP-hard
- Completely characterized stabilizer codes with weight at most 3, showing distance 2 and rate at most 1/4
- Developed linear programming scheme for exact optimal weight bounds for small systems (n≤9)
- Demonstrated practical application to IBM 127-qubit chip architecture
View Full Abstract
Low check weight is a practically crucial code property for fault-tolerant quantum computing, which underlies the strong interest in quantum low-density parity-check (qLDPC) codes. Here, we explore the theory of weight-constrained stabilizer codes from various foundational perspectives, including the complexity of computing code weight and the explicit boundary of feasible low-weight codes in both theoretical and practical settings. We first prove that calculating the optimal code weight is an $\mathsf{NP}$-hard problem, demonstrating the necessity of establishing bounds for weight that are analytical or efficiently computable. Then we systematically investigate the feasible code parameters with weight constraints. We provide various explicit analytical lower bounds and in particular completely characterize stabilizer codes with weight at most 3, showing that they have distance 2 and code rate at most 1/4. We also develop a powerful linear programming (LP) scheme for setting code parameter bounds with weight constraints, which yields exact optimal weight values for all code parameters with $n\leq 9$. We further refine this constraint from multiple perspectives by considering the generator weight distribution and overlap. In particular, we consider practical architectures and demonstrate how to apply our methods to, e.g., the IBM 127-qubit chip. Our study brings weight as a crucial parameter into coding theory and provides guidance for code design and utility in practical scenarios.
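For concreteness, the "weight" in question is simply the largest qubit support among the stabilizer generators, read off the symplectic (X|Z) check matrix. The helper below illustrates that bookkeeping on the 5-qubit code; it does not attempt the paper's harder task of optimising the weight over equivalent generating sets.

```python
# Read check weights off a stabilizer code's symplectic representation:
# a generator touches qubit j if its X part or Z part is nonzero there, and
# the code weight is the maximum such support. Illustration only -- the
# paper's NP-hardness and LP results concern optimising this over all
# equivalent generating sets, which this snippet does not do.
import numpy as np

def generator_weights(X, Z):
    """X, Z: (m x n) binary matrices describing m stabilizer generators."""
    support = (np.asarray(X) % 2) | (np.asarray(Z) % 2)
    return support.sum(axis=1)

# Example: the 5-qubit code's generators XZZXI, IXZZX, XIXZZ, ZXIXZ.
X = np.array([[1,0,0,1,0], [0,1,0,0,1], [1,0,1,0,0], [0,1,0,1,0]])
Z = np.array([[0,1,1,0,0], [0,0,1,1,0], [0,0,0,1,1], [1,0,0,0,1]])
w = generator_weights(X, Z)
print("generator weights:", w, "-> code weight", w.max())   # all weight 4
```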
A Folded Surface Code Architecture for 2D Quantum Hardware
This paper presents a new architecture for implementing quantum error correction codes on 2D quantum hardware using qubit shuttling to create effective 3D connectivity. The approach enables faster logical gate operations and more efficient magic state distillation compared to conventional 2D surface code implementations.
Key Contributions
- Native implementation of folded surface codes on 2D hardware using qubit shuttling
- Reduction of logical Clifford gate and CNOT runtimes from O(d) to constant time
- Order-of-magnitude improvement in spacetime volume for magic-state distillation
- Introduction of virtual-stack layout for efficient multilayer routing on 2D devices
View Full Abstract
Qubit shuttling has become an indispensable ingredient for scaling leading quantum computing platforms, including semiconductor spin, neutral-atom, and trapped-ion qubits, enabling both crosstalk reduction and tighter integration of control hardware. Cai et al. (2023) proposed a scalable architecture that employs short-range shuttling to realize effective three-dimensional connectivity on a strictly two-dimensional device. Building on recent advances in quantum error correction, we show that this architecture enables the native implementation of folded surface codes on 2D hardware, reducing the runtime of all single-qubit logical Clifford gates and logical CNOTs within subsets of qubits from $\mathcal{O}(d)$ in conventional surface code lattice surgery to constant time. We present explicit protocols for these operations and demonstrate that access to a transversal $S$ gate reduces the spacetime volume of 8T-to-CCZ magic-state distillation by more than an order of magnitude compared with standard 2D lattice surgery approaches. Finally, we introduce a new "virtual-stack" layout that more efficiently exploits the quasi-three-dimensional structure of the architecture, enabling efficient multilayer routing on these two-dimensional devices.
Spectral Codes: A Geometric Formalism for Quantum Error Correction
This paper introduces a new mathematical framework for quantum error correction using spectral geometry, where error correcting codes are viewed as low-energy projections of geometric operators. The approach unifies different types of quantum codes under a single geometric language and provides new methods for improving error correction thresholds.
Key Contributions
- Unified geometric framework for quantum error correction using spectral triples
- Demonstration that spectral gaps control error correction performance and can be enhanced
- Recovery of diverse code types (stabilizer, topological, GKP) from single construction
View Full Abstract
We present a new geometric perspective on quantum error correction based on spectral triples in noncommutative geometry. In this approach, quantum error correcting codes are reformulated as low-energy spectral projections of Dirac-type operators that separate global logical degrees of freedom from local, correctable errors. Locality, code distance, and the Knill-Laflamme condition acquire a unified spectral and geometric interpretation in terms of the induced metric and spectrum of the Dirac operator. Within this framework, a wide range of known error correcting codes including classical linear codes, stabilizer codes, GKP-type codes, and topological codes are recovered from a single construction. This demonstrates that classical and quantum codes can be organized within a common geometric language. A central advantage of the spectral triple perspective is that the performance of error correction can be directly related to spectral properties. We show that leakage out of the code space is controlled by the spectral gap of the Dirac operator, and that code-preserving internal perturbations can systematically increase this gap without altering the encoded logical subspace. This yields a geometric mechanism for enhancing error correction thresholds, which we illustrate explicitly for a stabilizer code. We further interpret Berezin-Toeplitz quantization as a mixed spectral code and briefly discuss implications for holographic quantum error correction. Overall, our results suggest that quantum error correction can be viewed as a universal low-energy phenomenon governed by spectral geometry.
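For reference, the Knill-Laflamme condition the abstract refers to is the standard statement (background, not the paper's spectral reformulation)

$$P\, E_i^\dagger E_j\, P \;=\; c_{ij}\, P \qquad \text{for all } E_i, E_j \in \mathcal{E},$$

where $P$ projects onto the code space, $\mathcal{E}$ is the set of correctable errors, and $(c_{ij})$ is a Hermitian matrix independent of the encoded state. The paper's claim is that this condition, together with locality and distance, acquires a geometric reading when $P$ is realized as a low-energy spectral projection of a Dirac-type operator.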
Quantum Circuit Pre-Synthesis: Learning Local Edits to Reduce $T$-count
This paper presents Q-PreSyn, a reinforcement learning approach that optimizes quantum circuits before synthesis by learning sequences of local edits that preserve circuit equivalence but reduce the number of expensive T gates needed for fault-tolerant quantum computing. The method achieves up to 20% reduction in T-count on circuits with up to 25 qubits without introducing approximation errors.
Key Contributions
- Development of Q-PreSyn reinforcement learning framework for pre-synthesis circuit optimization
- Demonstration of up to 20% T-count reduction on circuits up to 25 qubits without approximation error
View Full Abstract
Compiling quantum circuits into Clifford+$T$ gates is a central task for fault-tolerant quantum computing using stabilizer codes. In the near term, $T$ gates will dominate the cost of fault tolerant implementations, and any reduction in the number of such expensive gates could mean the difference between being able to run a circuit or not. While exact synthesis is exponentially hard in the number of qubits, local synthesis approaches are commonly used to compile large circuits by decomposing them into substructures. However, composing local methods leads to suboptimal compilations in key metrics such as $T$-count or circuit depth, and their performance strongly depends on circuit representation. In this work, we address this challenge by proposing Q-PreSyn, a strategy that, given a set of local edits preserving circuit equivalence, uses an RL agent to identify effective sequences of such actions and thereby obtain circuit representations that yield a reduced $T$-count upon synthesis. Experimental results of our proposed strategy, applied on top of well-known synthesis algorithms, show up to a $20\%$ reduction in $T$-count on circuits with up to 25 qubits, without introducing any additional approximation error prior to synthesis.
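To make "equivalence-preserving local edits that reduce $T$-count" concrete, here is one toy rewrite (merging adjacent $T$ gates on the same qubit into an $S$ gate, since $T^2 = S$ and $S$ is Clifford). It is not Q-PreSyn's actual edit set, only an example of the kind of action an agent would sequence.

```python
# Toy equivalence-preserving local edit that lowers T-count: two adjacent
# T gates on the same qubit equal one S gate (T^2 = S), so the pair merges
# into a Clifford. Q-PreSyn's action set and RL policy are more general;
# this only illustrates the flavour of rewrite being composed.
def merge_adjacent_t(circ):
    """circ: list of (gate_name, qubit) tuples, applied left to right."""
    out = []
    for gate in circ:
        if gate[0] == "T" and out and out[-1] == ("T", gate[1]):
            out[-1] = ("S", gate[1])      # T;T -> S on the same qubit
        else:
            out.append(gate)
    return out

def t_count(circ):
    return sum(1 for g in circ if g[0] == "T")

circ = [("T", 0), ("T", 0), ("H", 1), ("T", 1), ("T", 1), ("T", 0)]
print(t_count(circ), "->", t_count(merge_adjacent_t(circ)))   # 5 -> 1
```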
Efficient Application of Tensor Network Operators to Tensor Network States
This paper introduces a new algorithm called Cholesky-based compression (CBC) that efficiently applies tree tensor network operators to tree tensor network states, achieving significant runtime improvements over existing methods. The authors demonstrate their method on quantum circuit simulation tasks and show that complex tree structures can outperform linear structures with lower errors.
Key Contributions
- Development of Cholesky-based compression (CBC) algorithm for efficient tensor network operator application with order-of-magnitude runtime improvements
- Demonstration that complex tree tensor network structures can outperform linear structures in quantum circuit simulation with lower computational errors
View Full Abstract
The performance of tensor network methods has seen constant improvements over the last few years. We add to this effort by introducing a new algorithm that efficiently applies tree tensor network operators to tree tensor network states inspired by the density matrix method and the Cholesky decomposition. This application procedure is a common subroutine in tensor network methods. We explicitly include the special case of tensor train structures and demonstrate how to extend methods commonly used in this context to general tree structures. We compare our newly developed method with the existing ones in a benchmark scenario with random tensor network states and operators. We find our Cholesky-based compression (CBC) performs equivalently to the current state-of-the-art method, while outperforming most established methods by at least an order of magnitude in runtime. We then apply our knowledge to perform circuit simulation of tree-like circuits, in order to test our method in a more realistic scenario. Here, we find that more complex tree structures can outperform simple linear structures and achieve lower errors than those possible with the simple structures. Additionally, our CBC still performs among the most successful methods, showing less dependence on the different bond dimensions of the operator.
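The core compression primitive such methods accelerate is bond truncation. Below is the textbook SVD truncation of a two-site tensor, shown only as the standard baseline; it is not the paper's Cholesky-based compression.

```python
# Standard SVD bond-truncation step that tensor-network compression schemes
# (including the paper's CBC) aim to make cheaper: split a two-site block and
# keep only the largest singular values. Textbook baseline, not CBC itself.
import numpy as np

rng = np.random.default_rng(0)
chi_l, d, chi_r, chi_max = 8, 2, 8, 4

theta = rng.normal(size=(chi_l, d, d, chi_r))        # two-site block
mat = theta.reshape(chi_l * d, d * chi_r)

U, s, Vh = np.linalg.svd(mat, full_matrices=False)
keep = min(chi_max, len(s))
truncation_error = np.sqrt((s[keep:] ** 2).sum())    # discarded singular weight

A = U[:, :keep].reshape(chi_l, d, keep)              # left site tensor
B = (np.diag(s[:keep]) @ Vh[:keep]).reshape(keep, d, chi_r)

print("new bond dimension:", keep, "truncation error:", truncation_error)
```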
DynQ: A Dynamic Topology-Agnostic Quantum Virtual Machine via Quality-Weighted Community Detection
This paper presents DynQ, a quantum virtual machine that enables multiple users to share quantum hardware by dynamically partitioning quantum processors into execution regions based on real-time calibration data and device quality, rather than using fixed geometric divisions.
Key Contributions
- First dynamic, topology-agnostic quantum virtual machine using quality-weighted community detection
- Enables quantum hardware virtualization and resource sharing while maintaining high fidelity execution
- Demonstrates resilience to hardware defects and calibration drift with up to 19.1% higher fidelity than existing approaches
View Full Abstract
Quantum cloud platforms remain fundamentally non-virtualised: despite rapid hardware scaling, each user program still monopolises an entire quantum processor, preventing resource sharing, economic scalability, and quality-of-service differentiation. Existing Quantum Virtual Machine (QVM) designs attempt spatial multiplexing through topology-specific or template-based partitioning, but these approaches are brittle under hardware heterogeneity, calibration drift, and transient defects, which dominate real quantum devices. We present DynQ, the first dynamic, topology-agnostic Quantum Virtual Machine that virtualises quantum hardware using quality-weighted community detection. Instead of imposing fixed geometric regions, DynQ models a quantum processor as a weighted graph derived from live calibration data and automatically discovers execution regions that maximise internal gate quality while minimising inter-region coupling. This operationalises the classical virtualisation principle of high cohesion and low coupling in a quantum-native setting, producing execution regions that are connectivity-efficient, noise-aware, and resilient to crosstalk and defects. We evaluate DynQ across five IBM Quantum backends using calibration-derived noise simulation and on two production devices, comparing against state-of-the-art QVM and standard compilation baselines. On hardware with pronounced spatial quality variation, DynQ achieves up to 19.1 percent higher fidelity and 45.1 percent lower output error. When transient hardware defects cause baseline executions to fail completely, DynQ adapts dynamically and achieves over 86 percent fidelity. By transforming calibrated device graphs into adaptive virtual hardware abstractions, DynQ decouples quantum programs from fragile physical layouts and enables reliable, high-utilisation quantum cloud services.
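A rough sketch of the partitioning idea: model the device as a graph whose edge weights reflect two-qubit gate quality, then find communities that are internally high-quality and weakly coupled. The fidelities below are invented, and greedy modularity maximisation is only a stand-in for DynQ's quality-weighted community detection on live calibration data.

```python
# Sketch of calibration-aware region discovery: weighted graph of (made-up)
# two-qubit gate fidelities, partitioned with greedy modularity communities.
# Illustrative only -- not DynQ's objective or its calibration pipeline.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [  # (qubit_a, qubit_b, two-qubit gate fidelity) -- invented numbers
    (0, 1, 0.993), (1, 2, 0.991), (2, 3, 0.990), (0, 3, 0.992),   # block A
    (4, 5, 0.994), (5, 6, 0.992), (6, 7, 0.993), (4, 7, 0.991),   # block B
    (3, 4, 0.930),                                                # weak link
]

G = nx.Graph()
G.add_weighted_edges_from(edges)   # fidelity stored under the 'weight' key

for i, region in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"execution region {i}: qubits {sorted(region)}")
```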
Reinforcement Learning for Enhanced Advanced QEC Architecture Decoding
This paper develops reinforcement learning techniques to improve the decoding of quantum error correction codes, particularly for advanced architectures beyond surface codes. The approach uses AI agents to learn optimal decoding strategies from noisy syndrome measurements, potentially achieving better error rates and scalability than traditional methods.
Key Contributions
- Application of reinforcement learning to advanced quantum error correction code decoding
- Development of hybrid and multi-agent RL approaches for complex QEC architectures
- Demonstration of autonomous agent training for deriving decoding schemes
View Full Abstract
The advent of promising quantum error correction (QEC) codes with efficient resource utilization and high-performance fault-tolerant quantum memories signifies a critical step towards realizing practical quantum computation. While surface codes have been a dominant approach, their limitations have spurred the development of more advanced QEC architectures. These advanced codes often present increased complexity, demanding innovative decoding methodologies. This work investigates the application of reinforcement learning (RL) techniques, including hybrid and multi-agent approaches, to enhance the decoding of various advanced QEC architectures. By leveraging the ability of RL to learn optimal strategies from noisy syndrome measurements, we explore the potential for achieving improved logical error rates and scalability compared to traditional decoding methods. Our approach examines the adaptation of reinforcement learning to exploit the structural properties of these modern QEC models. We also explore the benefits of combining different RL algorithms to address the multifaceted nature of the decoding problem, considering factors such as code degeneracy and real-world noise characteristics. With our proposed method, we are able to demonstrate that an autonomously trained agent can derive decoding schemes for the complex decoding requirement of advanced QEC architectures.
Pareto-Front Engineering of Dynamical Sweet Spots in Superconducting Qubits
This paper develops a multi-objective optimization framework for operating superconducting qubits at dynamical sweet spots, quantifying the trade-off between energy relaxation time (T1) and dephasing time (Tφ). For fluxonium qubits, the method enhances Tφ by a factor of 3-5 compared with existing dynamical-sweet-spot strategies while keeping T1 in the hundred-microsecond range, and it establishes fundamental limits on achievable T1 improvements.
Key Contributions
- Multi-objective Pareto optimization framework for dynamical sweet spots that simultaneously optimizes T1 and Tφ
- Proof of fundamental upper bounds on achievable T1 improvements despite eliminating first-order noise sensitivity
- Identification of double-DSS regions providing robust operating points insensitive to both DC and AC flux noise
- Demonstration of high-fidelity single and two-qubit gate protocols at optimized operating points
View Full Abstract
Operating superconducting qubits at dynamical sweet spots (DSSs) suppresses decoherence from low-frequency flux noise. A key open question is how long coherence can be extended under this strategy and what fundamental limits constrain it. Here we introduce a fully parameterized, multi-objective periodic-flux modulation framework that simultaneously optimizes energy relaxation $T_1$ and pure dephasing $T_\varphi$, thereby quantifying the tradeoff between them. For fluxonium qubits with realistic noise spectra, our method enhances $T_\varphi$ by a factor of 3-5 compared with existing DSS strategies while maintaining $T_1$ in the hundred-microsecond range. We further prove that, although DSSs eliminate first-order sensitivity to low-frequency noise, the relaxation rate cannot be reduced arbitrarily close to zero, establishing an upper bound on achievable $T_1$. At the optimized working points, we identify double-DSS regions that are insensitive to both DC and AC flux, providing robust operating bands for experiments. As applications, we design single- and two-qubit control protocols at these operating points and numerically demonstrate high-fidelity gate operations. These results establish a general and useful framework for Pareto-front engineering of DSSs that substantially improves coherence and gate performance in superconducting qubits.
High-Performance Exact Synthesis of Two-Qubit Quantum Circuits
This paper develops an exact synthesis framework for optimally constructing two-qubit quantum circuits using Clifford+T gates, minimizing the number of T gates needed. The approach uses exhaustive search with pruning techniques and creates lookup tables that enable fast synthesis by turning the optimization problem into a simple query.
Key Contributions
- Exact synthesis framework for two-qubit circuits that guarantees optimal T-count
- Efficient algorithmic approach combining meet-in-the-middle search with algebraic canonicalization and pruning
- Reusable lookup table system that converts synthesis into fast query operations
View Full Abstract
Exact synthesis provides unconditional optimality and canonical structure, but is often limited to small, carefully scoped regimes. We present an exact synthesis framework for two-qubit circuits over the Clifford+$T$ gate set that optimizes $T$-count exactly. Our approach exhausts a bounded search space, exploits algebraic canonicalization to avoid redundancy, and constructs a lookup table of optimal implementations that turns synthesis into a query. Algorithmically, we combine meet-in-the-middle ideas with provable pruning rules and problem-specific arithmetic designed for modern hardware. The result is an exact, reusable synthesis engine with substantially improved practical performance.
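To illustrate the meet-in-the-middle idea (in a far simpler setting than the paper's two-qubit engine, and without its canonicalization and pruning machinery), here is a single-qubit Clifford+$T$ sketch that hashes all left halves and then looks up a matching right half:

```python
# Generic meet-in-the-middle sketch for exact synthesis over {H, T} on one
# qubit: enumerate all left halves into a table keyed by a phase-fixed,
# rounded matrix, then enumerate right halves and look up the missing left
# factor. A toy, not the paper's two-qubit, T-count-optimal engine.
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "T": T}

def key(U, digits=6):
    """Hashable fingerprint of U up to global phase."""
    i, j = np.unravel_index(np.argmax(np.abs(U)), U.shape)
    U = U * np.exp(-1j * np.angle(U[i, j]))
    return tuple(np.round(U, digits).ravel().tolist())

def seq_unitary(seq):
    U = np.eye(2, dtype=complex)
    for name in seq:                      # gates applied left to right
        U = GATES[name] @ U
    return U

def mitm_synthesize(target, half_len=4):
    """Return a sequence of length <= 2*half_len equal to target up to phase."""
    table = {}
    for n in range(half_len + 1):
        for seq in product(GATES, repeat=n):
            table.setdefault(key(seq_unitary(seq)), seq)
    for n in range(half_len + 1):
        for seq in product(GATES, repeat=n):
            # target = U(second) @ U(first)  =>  U(first) = U(second)^dag @ target
            needed = seq_unitary(seq).conj().T @ target
            if key(needed) in table:
                return table[key(needed)] + seq
    return None

S = np.diag([1, 1j])
print(mitm_synthesize(S))   # e.g. ('T', 'T'), since T;T = S up to phase
```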
Time-series based quantum state discrimination
This paper proposes using machine learning techniques, specifically LSTM neural networks, to improve quantum state readout by analyzing the full time-series data from measurements rather than just integrated signals. The approach better distinguishes between qubits that started in the ground state versus those that decayed during measurement, leading to improved readout fidelity.
Key Contributions
- Introduction of time-series machine learning methods for quantum state discrimination using raw analog signals
- Demonstration that LSTM networks outperform traditional clustering methods for qubit readout, particularly in boundary regions between quantum states
View Full Abstract
Accurate quantum state readout is crucial for error correction and algorithms, but measurement errors are detrimental. Readout fidelity is typically limited by a poor signal-to-noise ratio (SNR) and energy relaxation ($T_1$ decay), a significant problem for superconducting qubits. While most approaches classify results using clustering algorithms on integrated readout signals, these methods cannot distinguish a qubit that was initially in the ground state from one that decayed to it during measurement. We instead propose using machine learning (ML) on the raw, non-integrated analog signal. We apply time-series classification models, such as a long short-term memory (LSTM) network, to the full data trajectory. We find that our LSTM model, combined with filtering and feature engineering, consistently outperforms clustering. The largest improvements come from reclassifying points in the boundary regions between clusters. These points correspond to atypical measurement records, likely due to transient or noisy features lost during data integration. By retaining temporal information, sequence-aware models like LSTMs can better discriminate these trajectories, whereas clustering methods based on integrated values are more prone to misclassification.
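A minimal sketch of what a sequence classifier for readout traces can look like is shown below. The architecture, sizes, and random data are placeholders, not the authors' model, filtering, or feature engineering.

```python
# Minimal LSTM sequence classifier over raw (I, Q) readout traces: the last
# hidden state feeds a linear layer that outputs state logits. Placeholder
# architecture and synthetic data, not the authors' model or measurements.
import torch
import torch.nn as nn

class ReadoutLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=32, n_states=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, x):                 # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])           # logits from the final hidden state

model = ReadoutLSTM()
traces = torch.randn(64, 200, 2)          # fake batch: 64 traces, 200 samples
labels = torch.randint(0, 2, (64,))
loss = nn.CrossEntropyLoss()(model(traces), labels)
loss.backward()
print("loss on random data:", float(loss))
```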
When Does Adaptation Win? Scaling Laws for Meta-Learning in Quantum Control
This paper develops mathematical scaling laws to determine when adaptive quantum controllers are worth their computational overhead compared to fixed controllers, showing that adaptation benefits increase with device-to-device variation and validating this on quantum gate calibration tasks.
Key Contributions
- Derived scaling law lower bounds for meta-learning adaptation gain that scales linearly with task variance and saturates exponentially with gradient steps
- Demonstrated >40% fidelity improvements for two-qubit gate calibration under high-noise out-of-distribution conditions
- Provided quantitative framework for optimizing per-device calibration overhead on cloud quantum processors
View Full Abstract
Quantum hardware suffers from intrinsic device heterogeneity and environmental drift, forcing practitioners to choose between suboptimal non-adaptive controllers or costly per-device recalibration. We derive a scaling law lower bound for meta-learning showing that the adaptation gain (expected fidelity improvement from task-specific gradient steps) saturates exponentially with gradient steps and scales linearly with task variance, providing a quantitative criterion for when adaptation justifies its overhead. Validation on quantum gate calibration shows negligible benefits for low-variance tasks but $>40\%$ fidelity gains on two-qubit gates under extreme out-of-distribution conditions (10$\times$ the training noise), with implications for reducing per-device calibration time on cloud quantum processors. Further validation on classical linear-quadratic control confirms these laws emerge from general optimization geometry rather than quantum-specific physics. Together, these results offer a transferable framework for decision-making in adaptive control.
Approximate level-by-level maximum-likelihood decoding based on the Chase algorithm for high-rate concatenated stabilizer codes
This paper develops an improved decoder for quantum error correction codes that combines level-by-level decoding with the Chase algorithm to better correct errors in high-rate concatenated stabilizer codes. The authors demonstrate through simulations that their decoder outperforms existing methods for correcting bit-flip errors in quantum systems.
Key Contributions
- Development of a general high-performance decoder that extends level-by-level minimum-distance decoding using the Chase algorithm
- Demonstration of superior performance compared to conventional decoders for high-rate concatenated Hamming codes under bit-flip noise
View Full Abstract
Fault-tolerant quantum computation (FTQC) is expected to address a wide range of computational problems. To realize large-scale FTQC, it is essential to encode logical qubits using quantum error-correcting codes. High-rate concatenated codes have recently attracted attention due to theoretical advances in fault-tolerant protocols with constant-space-overhead and polylogarithmic-time-overhead, as well as practical developments of high-rate many-hypercube codes equipped with a high-performance level-by-level minimum-distance decoder (LMDD). We propose a general, high-performance decoder for high-rate concatenated stabilizer codes that extends LMDD by leveraging the Chase algorithm to generate a suitable set of candidate errors. Our simulation results demonstrate that the proposed decoder outperforms conventional decoders for high-rate concatenated Hamming codes under bit-flip noise.
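The candidate-generation idea is easiest to see in the classical Chase-2 setting: flip every combination of the least-reliable bits, hard-decode each candidate, and keep the codeword with the best soft metric. The sketch below does this for a [7,4] Hamming code only as an illustration; the paper plugs Chase-style candidate sets into level-by-level decoding of concatenated stabilizer codes.

```python
# Classical Chase-2 illustration on the [7,4] Hamming code: take the hard
# decision, flip every pattern on the t least-reliable positions, run the
# cheap syndrome decoder on each candidate, and keep the codeword that best
# matches the received soft values. Illustration of candidate generation
# only, not the paper's concatenated-stabilizer decoder.
import numpy as np
from itertools import product

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])          # column j is j in binary

def hard_decode(word):
    """Single-error syndrome decoding for the Hamming code."""
    s = (H @ word) % 2
    pos = s[0] + 2 * s[1] + 4 * s[2]            # syndrome value = error position
    fixed = word.copy()
    if pos:
        fixed[pos - 1] ^= 1
    return fixed

def chase2(llr, t=2):
    hard = (llr < 0).astype(int)                 # hard decision from LLRs
    weak = np.argsort(np.abs(llr))[:t]           # least reliable positions
    best, best_metric = None, np.inf
    for flips in product([0, 1], repeat=t):
        cand = hard.copy()
        cand[weak] ^= np.array(flips)
        cw = hard_decode(cand)
        metric = np.abs(llr[cw != hard]).sum()   # cost of overruling hard bits
        if metric < best_metric:
            best, best_metric = cw, metric
    return best

llr = np.array([+4.1, +3.8, -0.2, +2.9, +0.4, +3.5, +3.0])  # received LLRs
print(chase2(llr))
```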
Data-Driven Qubit Characterization and Optimal Control using Deep Learning
This paper develops a machine learning approach using recurrent neural networks to optimize control pulses for quantum computing gates. The method learns qubit behavior from experimental data and uses this trained model to design high-fidelity control sequences without requiring detailed physical system models.
Key Contributions
- Data-driven approach for qubit control optimization using RNNs
- Model-free gradient-based pulse optimization method for quantum gates
View Full Abstract
Quantum computing requires the optimization of control pulses to achieve high-fidelity quantum gates. We propose a machine learning-based protocol to address the challenges of evaluating gradients and modeling complex system dynamics. By training a recurrent neural network (RNN) to predict qubit behavior, our approach enables efficient gradient-based pulse optimization without the need for a detailed system model. First, we sample qubit dynamics using random control pulses with weak prior assumptions. We then train the RNN on the system's observed responses, and use the trained model to optimize high-fidelity control pulses. We demonstrate the effectiveness of this approach through simulations on a single $ST_0$ qubit.
Bayesian Optimization for Quantum Error-Correcting Code Discovery
This paper develops a machine learning approach using Bayesian optimization to automatically discover new quantum error-correcting codes that protect quantum information from noise. The method uses neural networks to predict code performance without expensive simulations, successfully finding codes that balance encoding efficiency with error protection.
Key Contributions
- Multi-view chain-complex neural embedding for predicting logical error rates without expensive simulations
- Bayesian optimization framework for automated quantum error-correcting code discovery
- Discovery of a high-rate [[144,36]] code competitive with the gross code and a [[144,16]] code that outperforms it in error rate per qubit
View Full Abstract
Quantum error-correcting codes protect fragile quantum information by encoding it redundantly, but identifying codes that perform well in practice with minimal overhead remains difficult due to the combinatorial search space and the high cost of logical error rate evaluation. We propose a Bayesian optimization framework to discover quantum error-correcting codes that improves data efficiency and scalability with respect to previous machine learning approaches to this task. Our main contribution is a multi-view chain-complex neural embedding that allows us to predict the logical error rate of quantum LDPC codes without performing expensive simulations. Using bivariate bicycle codes and code capacity noise as a testbed, our algorithm discovers a high-rate code [[144,36]] that achieves competitive per-qubit error rate compared to the gross code, as well as a low-error code [[144,16]] that outperforms the gross code in terms of error rate per qubit. These results highlight the ability of our pipeline to automatically discover codes balancing rate and noise suppression, while the generality of the framework enables application across diverse code families, decoders, and noise models.
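For readers unfamiliar with the outer loop, here is a generic Bayesian-optimisation skeleton (Gaussian-process surrogate plus expected improvement) minimising a synthetic stand-in objective. The paper's surrogate is a learned chain-complex embedding predicting logical error rates of LDPC codes; the objective, kernel, and candidate set below are placeholders.

```python
# Generic Bayesian optimisation loop: fit a GP surrogate, pick the candidate
# with the largest expected improvement, evaluate, repeat. The 1D synthetic
# objective stands in for an expensive logical-error-rate evaluation.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(x):
    """Pretend 'logical error rate' as a function of one design parameter."""
    return np.sin(3 * x) * 0.1 + 0.2 + 0.01 * rng.normal()

candidates = np.linspace(0, 2, 200).reshape(-1, 1)
X = rng.uniform(0, 2, size=(4, 1))                    # initial designs
y = np.array([objective(float(x[0])) for x in X])

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(float(x_next[0])))

print("best observed value:", y.min(), "at x =", X[np.argmin(y), 0])
```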
Fundamentals, Recent Advances, and Challenges Regarding Cryptographic Algorithms for the Quantum Computing Era
This is a comprehensive book/reference work that provides an overview of how quantum computing impacts cryptography, covering both the threats posed by quantum algorithms like Shor's algorithm and the development of post-quantum cryptographic solutions. It serves as an educational resource progressing from basic concepts to practical implementation challenges in the transition to quantum-resistant cryptography.
Key Contributions
- Comprehensive Portuguese-language reference on quantum computing's impact on cryptography
- Progressive educational structure covering fundamentals through practical implementation of post-quantum cryptography
- Analysis of NIST standardization process and migration strategies for quantum-resistant algorithms
View Full Abstract
This book arises from the need to provide a clear and up-to-date overview of the impacts of quantum computing on cryptography. The goal is to provide a reference in Portuguese for undergraduate, master's, and doctoral students in the field of data security and cryptography. Throughout the chapters, we present fundamentals, discuss classical and post-quantum algorithms, evaluate emerging standards, and point out real-world implementation challenges. The initial objective is to serve as a guide for students, researchers, and professionals who need to understand not only the mathematics involved, but also its practical implications in security systems and policies. For more advanced professionals, the main objective is to present content and ideas so that they can assess the changes and perspectives in the era of quantum cryptographic algorithms. To that end, the text's structure was designed to be progressive: we begin with essential concepts, move on to quantum algorithms and their consequences (with emphasis on Shor's algorithm), present the main "families" of post-quantum schemes (based on lattices, codes, hash functions, multivariate polynomials, and isogenies), analyze the state of the art in standardization (highlighting the NIST process), and finally discuss migration, interoperability, performance, and cryptographic governance. We hope that this work will assist in the formation of critical thinking and informed technical decision-making, fostering secure transition strategies for the post-quantum era.
Quantum Error Correction on Error-mitigated Physical Qubits
This paper develops a framework for applying quantum error mitigation techniques directly to the physical qubits within logical qubits, showing that this approach can increase the effective code distance by 2 and achieve error rates similar to those of larger codes while using significantly fewer qubits.
Key Contributions
- General framework for integrating linear quantum error mitigation with quantum error correction at the physical layer
- Demonstration that distance-3 codes with physical-level error mitigation can match distance-5 unmitigated codes while using 40-64% fewer qubits
View Full Abstract
We present a general framework for applying linear quantum error mitigation (QEM) techniques directly to physical qubits within a logical qubit to suppress logical errors. By exploiting the linearity of quantum error correction (QEC), we demonstrate that any linear QEM method$\unicode{x2014}$including probabilistic error cancellation (PEC), zero-noise extrapolation (ZNE), and symmetry verification$\unicode{x2014}$can be integrated into the physical layer without requiring modifications to the subsequent QEC decoder. Applying this framework to memory experiments using PEC, we analytically prove and numerically verify that the leading-order contribution to the logical error can be removed, increasing the effective code distance by 2. Our simulations on repetition and rotated surface codes show that a distance-3 code with physical-level PEC achieves logical error rates lower than or similar to a distance-5 unmitigated code while using 40% and 64% fewer qubits, respectively. These results establish physical-level QEM as a widely compatible and resource-efficient strategy for enhancing logical performance in early fault-tolerant architectures.
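As a reminder of what a linear QEM primitive looks like on its own, the toy sketch below applies zero-noise extrapolation, one of the mitigation techniques the framework covers: an expectation value measured at artificially amplified noise levels is extrapolated to the zero-noise limit. The numbers are assumed, and this is not the paper's physical-layer construction.

```python
# Zero-noise extrapolation (ZNE), a linear QEM technique: measure an expectation
# value at artificially amplified noise levels and extrapolate to zero noise.
# Toy model: <O>(lambda) decays exponentially with the noise scale lambda.
import numpy as np

noise_scales = np.array([1.0, 1.5, 2.0, 3.0])   # noise amplification factors
true_value = 1.0
decay = 0.12
measured = true_value * np.exp(-decay * noise_scales)  # simulated noisy data

# Richardson-style extrapolation via a low-order polynomial fit in lambda.
coeffs = np.polyfit(noise_scales, measured, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

print(f"raw (lambda=1): {measured[0]:.4f}")
print(f"ZNE estimate  : {zne_estimate:.4f}  (ideal: {true_value:.4f})")
```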
Overcoming Barren Plateaus in Variational Quantum Circuits using a Two-Step Least Squares Approach
This paper proposes a two-stage optimization method to solve the barren plateau problem in variational quantum algorithms, where gradients vanish as quantum circuits scale up. The authors test their approach on quantum cryptanalysis of the BB84 quantum key distribution protocol, showing improved performance over random initialization methods.
Key Contributions
- Two-stage optimization framework with convex initialization followed by nonconvex refinement to overcome barren plateaus
- Application to quantum cryptanalysis of BB84 protocol for optimal cloning strategies
View Full Abstract
Variational Quantum Algorithms are a vital part of quantum computing. They blend quantum and classical methods for tackling tough problems in machine learning, chemistry, and combinatorial optimization. Yet as these algorithms scale up, they cannot escape the barren-plateau phenomenon: as systems grow, gradients can vanish so quickly that training deep or randomly initialized circuits becomes nearly impossible. To overcome the barren plateau problem, we introduce a two-stage optimization framework. First comes the convex initialization stage, in which we shape the quantum energy (Hamiltonian) landscape into a smooth, low-energy basin. This step makes gradients easier to detect and keeps noise from derailing the process. Once a stable gradient flow is established, we move to the second stage: nonconvex refinement. In this phase, we allow the algorithm to explore different energy minima, thereby making the model more expressive. Finally, we use our two-stage solution to perform quantum cryptanalysis of the BB84 quantum key distribution protocol to determine optimal cloning strategies. Simulation results show that the proposed two-stage solution outperforms its random-initialization counterpart.
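Stripped of the quantum details, the two-stage pattern can be mimicked classically: a convex least-squares solve supplies a well-conditioned initial point, which is then refined on a nonconvex cost. The sketch below illustrates only this optimization pattern on an assumed toy problem, not the paper's circuits or its BB84 cryptanalysis.

```python
# Two-stage optimization pattern: convex least-squares initialization followed
# by nonconvex refinement (illustrative toy problem, not the paper's circuits).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 6))
b = rng.normal(size=30)

# Stage 1 (convex): linear least squares gives a stable initial parameter vector.
theta0, *_ = np.linalg.lstsq(A, b, rcond=None)

# Stage 2 (nonconvex): refine on a cost containing oscillatory terms, standing in
# for a more expressive variational landscape.
def cost(theta):
    residual = A @ theta - b
    return residual @ residual + 0.1 * np.sum(np.sin(3.0 * theta) ** 2)

result = minimize(cost, theta0, method="BFGS")
print("stage-1 cost:", cost(theta0))
print("stage-2 cost:", result.fun)
```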
Spectral Filtering for Learning Quantum Dynamics
This paper develops a method called Quantum Spectral Filtering to efficiently learn the dynamics of high-dimensional quantum systems by focusing on their spectral properties rather than reconstructing full system matrices. The approach leverages the concentration properties of the Slepian basis to prove that learning complexity depends only on an effective quantum dimension rather than the full system size.
Key Contributions
- Formulation of quantum evolution prediction as Complex-Valued Linear Dynamical System learning with sector-bounded eigenvalues
- Proof that learning complexity depends on effective quantum dimension k* rather than full Hilbert space dimension
View Full Abstract
Learning high-dimensional quantum systems is a fundamental challenge that notoriously suffers from the curse of dimensionality. We formulate the task of predicting quantum evolution in the linear response regime as a specific instance of learning a Complex-Valued Linear Dynamical System (CLDS) with sector-bounded eigenvalues -- a setting that also encompasses modern Structured State Space Models (SSMs). While traditional system identification attempts to reconstruct full system matrices (incurring exponential cost in the Hilbert dimension), we propose Quantum Spectral Filtering, a method that shifts the goal to improper dynamic learning. Leveraging the optimal concentration properties of the Slepian basis, we prove that the learnability of such systems is governed strictly by an effective quantum dimension $k^*$, determined by the spectral bandwidth and memory horizon. This result establishes that complex-valued LDSs can be learned with sample and computational complexity independent of the ambient state dimension, provided their spectrum is bounded.
Non-Equilibrium Quantum Many-Body Physics with Quantum Circuits
This paper presents pedagogical notes on using brickwork quantum circuits to study non-equilibrium quantum many-body physics. It demonstrates that these circuits can model quantum correlations similarly to local Hamiltonians and provides examples where exact calculations of dynamical properties are possible despite non-trivial interactions.
Key Contributions
- Pedagogical framework for studying quantum many-body dynamics using brickwork quantum circuits
- Demonstration of exact solvability for certain non-equilibrium quantum systems with interactions
View Full Abstract
These are the notes for the 4.5-hour course with the same title that I delivered in August 2025 at the Les Houches summer school ``Exact Solvability and Quantum Information''. In these notes I pedagogically introduce the setting of brickwork quantum circuits and show that it provides a useful framework to study non-equilibrium quantum many-body dynamics in the presence of local interactions. I first show that brickwork quantum circuits evolve quantum correlations in a way that is fundamentally similar to local Hamiltonians, and then present examples of brickwork quantum circuits where, surprisingly, one can compute exactly several relevant dynamical and spectral properties in the presence of non-trivial interactions.
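For readers new to the setting, the sketch below simulates a small brickwork circuit with plain numpy: layers of Haar-random two-qubit gates are applied alternately to even and odd bonds of a qubit chain. The exactly solvable families discussed in the notes impose extra structure on the gates; generic random gates are used here only to show the circuit geometry.

```python
# Brickwork quantum circuit on n qubits: alternating layers of Haar-random
# two-qubit gates on (even, even+1) and (odd, odd+1) bonds (open boundaries).
import numpy as np

rng = np.random.default_rng(42)

def haar_unitary(dim):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_two_qubit_gate(state, gate, i, n):
    """Apply a 4x4 gate to neighbouring qubits (i, i+1) of an n-qubit state."""
    psi = state.reshape(2**i, 4, 2**(n - i - 2))
    psi = np.einsum("ab,xbz->xaz", gate, psi)
    return psi.reshape(-1)

n_qubits, depth = 6, 8
state = np.zeros(2**n_qubits, dtype=complex)
state[0] = 1.0                                   # |000...0>

for layer in range(depth):
    start = 0 if layer % 2 == 0 else 1           # even / odd "bricks"
    for i in range(start, n_qubits - 1, 2):
        state = apply_two_qubit_gate(state, haar_unitary(4), i, n_qubits)

print("norm after evolution:", np.linalg.norm(state))   # should remain 1
```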
Quantum-Inspired Reinforcement Learning for Secure and Sustainable AIoT-Driven Supply Chain Systems
This paper proposes a quantum-inspired machine learning approach for optimizing supply chain management that simultaneously considers environmental sustainability, security, and efficiency. The method uses reinforcement learning with quantum-inspired algorithms to balance carbon footprint reduction, inventory management, and security measures in AI-enabled Internet of Things supply chain systems.
Key Contributions
- Integration of quantum-inspired reinforcement learning for multi-objective supply chain optimization
- Framework combining sustainability metrics with security and efficiency in AIoT systems
View Full Abstract
Modern supply chains must balance high-speed logistics with environmental impact and security constraints, prompting a surge of interest in AI-enabled Internet of Things (AIoT) solutions for global commerce. However, conventional supply chain optimization models often overlook crucial sustainability goals and cyber vulnerabilities, leaving systems susceptible to both ecological harm and malicious attacks. To tackle these challenges simultaneously, this work integrates a quantum-inspired reinforcement learning framework that unifies carbon footprint reduction, inventory management, and cryptographic-like security measures. We design a quantum-inspired reinforcement learning framework that couples a controllable spin-chain analogy with real-time AIoT signals and optimizes a multi-objective reward unifying fidelity, security, and carbon costs. The approach learns robust policies with stabilized training via value-based and ensemble updates, supported by window-normalized reward components to ensure commensurate scaling. In simulation, the method exhibits smooth convergence, strong late-episode performance, and graceful degradation under representative noise channels, outperforming standard learned and model-based references, highlighting its robust handling of real-time sustainability and risk demands. These findings reinforce the potential for quantum-inspired AIoT frameworks to drive secure, eco-conscious supply chain operations at scale, laying the groundwork for globally connected infrastructures that responsibly meet both consumer and environmental needs.
Quaternionic Perfect Sequences and Hadamard Matrices
This paper studies mathematical structures called quaternionic perfect sequences and their relationship to Hadamard matrices, developing faster algorithms to enumerate these matrices up to order 21 and proving new properties about their structure.
Key Contributions
- Developed significantly faster enumeration algorithm for quaternion-type Hadamard matrices, extending exhaustive enumeration from order 13 to order 21
- Proved that circulant blocks in quaternion-type Hadamard matrices must be pairwise amicable, dramatically improving algorithm efficiency
- Constructed new quaternionic Hadamard matrices for quantum communication applications and proved their non-equivalence to existing constructions
View Full Abstract
A finite sequence of numbers is perfect if it has zero periodic autocorrelation after a nontrivial cyclic shift. In this work, we study quaternionic perfect sequences having a one-to-one correspondence with the binary sequences arising in Williamson's construction of quaternion-type Hadamard matrices. Using this correspondence, we devise an enumeration algorithm that is significantly faster than previously used algorithms and does not require the sequences to be symmetric. We implement our algorithm and use it to enumerate all circulant and possibly non-symmetric Williamson-type matrices of orders up to 21; previously, the largest order exhaustively enumerated was 13. We prove that when the blocks of a quaternion-type Hadamard matrix are circulant, the blocks are necessarily pairwise amicable. This dramatically improves the filtering power of our algorithm: in order 20, the number of block pairs needing consideration is reduced by a factor of over 25,000. We use our results to construct quaternionic Hadamard matrices of interest in quantum communication and prove they are not equivalent to those constructed by other means. We also study the properties of quaternionic Hadamard matrices analytically, and demonstrate the feasibility of characterizing quaternionic Hadamard matrices with a fixed pattern of entries. These results indicate a richer set of properties and suggest an abundance of quaternionic Hadamard matrices for sufficiently large orders.
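The defining property is easy to test numerically: a sequence is perfect when its periodic autocorrelation vanishes at every nontrivial cyclic shift. A minimal check (independent of the paper's quaternionic machinery) is sketched below.

```python
# Check whether a finite sequence is "perfect", i.e. its periodic autocorrelation
# is zero at every nontrivial cyclic shift.
import numpy as np

def periodic_autocorrelation(seq, shift):
    seq = np.asarray(seq, dtype=complex)
    return np.vdot(seq, np.roll(seq, -shift))      # sum_k conj(s_k) * s_{k+shift}

def is_perfect(seq, tol=1e-9):
    n = len(seq)
    return all(abs(periodic_autocorrelation(seq, s)) < tol for s in range(1, n))

print(is_perfect([1, 1, 1, -1]))    # True: the classic perfect binary sequence
print(is_perfect([1, 1, -1, -1]))   # False: autocorrelation at shift 2 is -4
```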
Photon-graviton polarization entanglement induced by a classical electromagnetic wave
This paper studies how classical electromagnetic waves can generate entangled pairs of photons and gravitons, including Bell states between their polarizations. The work combines classical electromagnetic fields with quantum gravitational fields to analyze quantum state evolution and discuss potential observation scenarios.
Key Contributions
- Demonstrates photon-graviton entanglement generation via classical EM waves
- Shows Bell state formation in photon-graviton polarization basis
- Provides theoretical framework mixing classical EM fields with quantized gravity
View Full Abstract
We study the photon-graviton pair production induced by the propagation of a classical electromagnetic (EM) wave in a Minkowskian spacetime. In our model, the gravitational field is described in terms of the quantized graviton field, whereas the electromagnetic field is split into a classical drive (a linearly or circularly polarized electromagnetic wave) and a quantum fluctuation field. We analyze the time evolution of the quantum state showing that, among other outcomes, the propagation of the EM wave can generate Bell states in the photon-graviton polarization basis. We finally discuss the possibility to observe entangled photons in artificial and natural scenarios.
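For orientation, a Bell state in the photon-graviton polarization basis has the schematic form below, pairing the photon's linear polarizations with the graviton's plus and cross polarizations; which superposition is actually generated depends on the drive and is derived in the paper.

```latex
% Schematic photon-graviton polarization Bell state (illustrative form only)
|\Phi^{+}\rangle \;=\; \frac{1}{\sqrt{2}}\Big( |H\rangle_{\gamma}\,|+\rangle_{g}
\;+\; |V\rangle_{\gamma}\,|\times\rangle_{g} \Big)
```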
Local-oscillator-agnostic squeezing detection
This paper develops new methods to detect quantum nonclassicality in light systems without needing a perfect classical reference laser. The authors create detection criteria that can identify quantum properties even when the measurement apparatus itself might be quantum rather than classical.
Key Contributions
- Development of partial normal ordering criteria for nonclassicality detection without classical reference states
- Framework for balanced homodyne detection using arbitrary local oscillator states while isolating signal quantumness
View Full Abstract
We address the problem of measuring nonclassicality in continuous-variable bosonic systems without having access to a known reference signal. To this end, we construct broader classes of criteria for nonclassicality which allow us to investigate quantum phenomena regardless of the quantumness of selected subsystems. Such witnesses are based on the notion of partial normal ordering. This approach is applied to balanced homodyne detection using arbitrary, potentially nonclassical local oscillator states, yet only revealing the probed signal's quantumness. Our framework is compared to standard techniques, and the robustness and advanced sensitivity of our approach are demonstrated. Therefore, a widely applicable framework, well-suited for applications in quantum metrology and quantum information, is derived to assess the quantum features of a photonic system when a well-defined coherent laser reference state is not available in the physical domain under study.
Three-dimensional squeezing of optically levitated nanospheres
This paper proposes a method to create quantum squeezed states in optically trapped nanoparticles by rapidly changing the trap frequency, enabling ultra-sensitive force measurements that surpass classical limits. The researchers predict they can achieve 10 dB of squeezing with current technology, making it possible to detect extremely weak impulses with quantum-enhanced precision.
Key Contributions
- Protocol for three-dimensional squeezing of optically levitated nanospheres via harmonic potential frequency jumps
- Quantitative analysis of decoherence limits and prediction of ~10 dB squeezing achievable with current technology
- Demonstration of quantum-enhanced impulse detection beyond the standard quantum limit
View Full Abstract
We propose a protocol to measure impulses beyond the standard quantum limit. The protocol reduces noise in all three spatial dimensions and consists of squeezing a mechanical system's state via a series of jumps in the frequency of the harmonic potential. We quantify how decoherence in a realistic system of an optically levitated, dielectric nanoparticle limits the ultimate sensitivity. We predict that $\sim$10 dB of squeezing is achievable with current technology, enabling quantum-enhanced detection of weak impulses.
Some properties of coherent states with singular complex matrix argument
This paper introduces a new type of coherent quantum states defined using singular 2x2 matrices with specific structure, and proves these states satisfy the mathematical requirements for coherent states. The authors explore connections to qubits and quantum entropy measures.
Key Contributions
- Introduction of coherent states with singular complex matrix arguments
- Mathematical proof that these states satisfy coherent state conditions
- Analysis of connections to qubits and von Neumann entropy
View Full Abstract
In this paper we study the properties of a new version of coherent states whose argument is a linear combination of two special singular 2 x 2 matrices, each having a single nonzero element equal to 1, with two labeling complex variables as expansion coefficients. We show that this new version of coherent states satisfies all the conditions imposed on coherent states, both for pure states and for mixed (thermal) states characterized by the density operator. As applications, we examine the connection between these coherent states and the notions of qubits and von Neumann entropy.
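As background for the last sentence, the von Neumann entropy of a qubit density operator with eigenvalues $\lambda$ and $1-\lambda$ reduces to the binary entropy:

```latex
S(\rho) \;=\; -\,\mathrm{Tr}\,\rho\ln\rho
        \;=\; -\,\lambda\ln\lambda \;-\; (1-\lambda)\ln(1-\lambda),
\qquad 0 \le \lambda \le 1,
```

which vanishes for pure states ($\lambda \in \{0,1\}$) and reaches its maximum $\ln 2$ for the maximally mixed qubit ($\lambda = 1/2$).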
Entanglement and discord classification via deep learning
This paper develops a deep learning approach using convolutional autoencoders to classify quantum entanglement and discord in bipartite quantum systems. The method can distinguish between different types of entanglement and generate rare bound entangled states that are difficult to construct analytically.
Key Contributions
- Deep learning framework for automated classification of quantum entanglement and discord
- Method for generating bound entangled states using learned representations from neural networks
View Full Abstract
In this work, we propose a deep learning-based approach for quantum entanglement and discord classification using convolutional autoencoders. We train models to distinguish entangled from separable bipartite states for $d \times d$ systems with local dimension $d$ ranging from two to seven, which enables identification of bound and free entanglement. Through extensive numerical simulations across various quantum state families, we demonstrate that our model achieves high classification accuracy. Furthermore, we leverage the learned representations to generate samples of bound entangled states, the rarest form of entanglement and notoriously difficult to construct analytically. We separately train the same convolutional autoencoder architecture for detecting the presence of quantum discord and show that the model also exhibits high accuracy while requiring significantly less training time.
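A useful point of comparison for such learned classifiers is the partial-transpose (PPT) test: it detects free entanglement but is blind to bound entanglement, which is exactly what makes the generated bound entangled states hard to certify. A minimal negativity check (not from the paper) is sketched below.

```python
# PPT / negativity check for a bipartite density matrix of local dimensions (dA, dB).
# Bound entangled states are PPT, so this test misses them; that is what makes
# learned detectors (as in the paper) interesting.
import numpy as np

def partial_transpose(rho, dA, dB):
    """Transpose subsystem B of a (dA*dB x dA*dB) density matrix."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def negativity(rho, dA, dB):
    eigvals = np.linalg.eigvalsh(partial_transpose(rho, dA, dB))
    return float(np.sum(np.abs(eigvals[eigvals < 0])))

# Werner state on 2x2: p*|Psi-><Psi-| + (1-p)*I/4, entangled for p > 1/3.
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
for p in (0.2, 0.5, 0.9):
    rho = p * np.outer(psi_minus, psi_minus.conj()) + (1 - p) * np.eye(4) / 4
    print(p, negativity(rho, 2, 2))   # zero for p <= 1/3, positive above
```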
The metaplectic semigroup and its applications to time-frequency analysis and evolution operators
This paper develops a comprehensive mathematical theory for the metaplectic semigroup associated with complex symplectic matrices, extending classical metaplectic group theory beyond unitary operators. The authors use this framework to analyze time-frequency representations, study parabolic equations with complex quadratic Hamiltonians, and investigate the propagation of Wigner function singularities.
Key Contributions
- Extension of metaplectic group theory to complex symplectic matrices forming a semigroup structure
- Operator-theoretic approach to time-frequency analysis using metaplectic semigroup techniques
- Analysis of parabolic evolution equations with complex quadratic Hamiltonians and their propagators
- Study of Wigner distribution intertwining relations and singularity propagation
View Full Abstract
We develop a systematic analysis of the metaplectic semigroup $\mathrm{Mp}_+(d,\mathbb{C})$ associated with positive complex symplectic matrices, a notion introduced almost simultaneously and independently by Hörmander, Brunet, Kramer, and Howe, thereby extending the classical metaplectic theory beyond the unitary setting. While the existing literature has largely focused on propagators of quadratic evolution equations, for which results are typically obtained via Mehler formulas, our approach is operator-theoretic and symplectic in spirit and adapts techniques from the standard metaplectic group $\mathrm{Mp}(d,\mathbb{R})$ to a substantially broader framework that is not driven by differential problems or particular propagators. This point of view provides deeper insight into the structure of the metaplectic semigroup, and allows us to investigate its generators, polar decomposition, and intertwining relations with complex conjugation and with the Wigner distribution. We then exploit these structural results to characterize, from a metaplectic perspective, classes of time-frequency representations satisfying prescribed structural properties. We also discuss further implications for parabolic equations with complex quadratic Hamiltonians, study the boundedness of their propagators on modulation spaces, and obtain estimates in time of their operator norms. Finally, we apply our theory to the study of propagation of Wigner singularities.
The Photonic Foundation of Temperature: Mechanisms of Thermal Equilibrium and Entropy Production
This paper proposes that photons are the fundamental agents that establish and maintain temperature in matter, deriving the Boltzmann distribution from first principles and showing that thermal equilibrium requires continuous photon exchange with specific energy characteristics. The authors argue this provides the missing microscopic foundation for classical thermodynamics through quantum electrodynamics.
Key Contributions
- Derivation of Boltzmann distribution from minimal differential scaling postulate
- Quantification of photon exchange requirements for thermal equilibrium maintenance
- Physical criteria for distinguishing genuine thermal equilibrium from formal temperature assignments
View Full Abstract
I examine the physical foundations of temperature and thermal equilibrium by identifying photons as the fundamental agents that establish and maintain the characteristic energy scale $E_c = k_B T$ in ordinary matter. While classical thermodynamics successfully describes equilibrium phenomenologically, the realization of thermal distributions requires concrete microscopic mechanisms provided by quantum electrodynamics. We derive the Boltzmann distribution from a minimal differential scaling postulate and show that sustaining thermal equilibrium demands continuous photon exchange with average energy $\langle h\nu\rangle = 2.701\,E_c$, quantifying the energetic throughput necessary to counter radiative losses. Entropy production is shown to arise naturally from inelastic photon scattering that converts high-energy photons into many lower-energy quanta, thereby increasing accessible microstates and driving irreversible evolution toward equilibrium. We establish physical criteria distinguishing genuine thermal equilibrium from purely formal temperature assignments and demonstrate that the classical notion of an infinite thermal reservoir emerges as an effective idealization within a hierarchy of dynamically maintained photon baths. This photonic framework complements phenomenological thermodynamics by providing its microscopic foundation and clarifies the physical meaning of temperature as an emergent collective property of photon-mediated energy exchange.
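The quoted factor 2.701 is the standard mean photon energy of blackbody radiation; assuming a Planck spectrum, it is the ratio of two Bose-Einstein integrals:

```latex
\langle h\nu \rangle \;=\;
\frac{\int_0^\infty x^{3}\,(e^{x}-1)^{-1}\,dx}
     {\int_0^\infty x^{2}\,(e^{x}-1)^{-1}\,dx}\;k_B T
\;=\; \frac{\pi^{4}/15}{2\,\zeta(3)}\;k_B T
\;\approx\; 2.701\,k_B T .
```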
Andreev spin qubits based on the helical edge states of magnetically doped two-dimensional topological insulators
This paper proposes a new type of spin qubit using Andreev states in superconducting junctions built on magnetically doped topological insulators. The authors show these qubits can be controlled with microwave pulses and demonstrate basic quantum logic gates through numerical simulations.
Key Contributions
- Novel Andreev spin qubit architecture using topological insulator edge states
- Demonstration of optical control via microwave radiation for quantum gate operations
- Numerical simulation of NOT and Hadamard gates in realistic device parameters
View Full Abstract
We show that Andreev spin qubits can be realized in a Josephson junction based on the helical edge states of a two-dimensional topological insulator (quantum spin Hall system) proximized by superconducting films, in the presence of magnetic doping. We demonstrate that the electrical dipole transitions between the Andreev spin states induced by the magnetic doping can be harnessed to optically manipulate the Andreev spin qubit by microwave radiation pulses. We numerically simulate the realization of NOT and Hadamard quantum logic gates, and discuss implementations in realistic setups.
Quantum fluctuations in hydrodynamics and quantum long-time tails
This paper develops a quantum field theory framework for studying how quantum fluctuations affect the hydrodynamic behavior of conserved quantities like particle density. The authors compute quantum corrections to correlation functions that lead to modified long-time behavior, extending classical hydrodynamic predictions to include quantum effects.
Key Contributions
- Construction of quantum Schwinger-Keldysh effective field theory for diffusive hydrodynamics with KMS symmetry constraints
- Computation of one-loop quantum corrections to density-density correlation functions showing quantum long-time tails
View Full Abstract
We construct a quantum Schwinger-Keldysh (SK) effective field theory for the diffusive hydrodynamics of a conserved scalar field. Quantum corrections within the SK framework are guided by fluctuation-dissipation relations, enforced via a dynamical Kubo-Martin-Schwinger (KMS) symmetry. We find that the KMS symmetry necessarily generates fluctuation contributions in the SK effective action at all orders in the noise field, thereby giving rise to intrinsically non-Gaussian noise. We use our results to compute one-loop quantum corrections to the two-point density-density retarded correlation function, leading to a quantum generalization of hydrodynamic long-time tails. Our results apply at arbitrarily high orders in $\hbar$. The one-loop results for retarded correlation functions have been expressed in terms of a family of polynomials. We also provide a closed-form expression for the one-loop results at leading order in the wavevector expansion.
Designing quantum technologies with a quantum computer
This paper develops a quantum computing framework to simulate and design solid-state spin-based quantum technologies like quantum sensors and processors. The researchers use advanced quantum algorithms to model interacting spin systems over long timescales, demonstrating their approach with nitrogen vacancy centers in diamond.
Key Contributions
- Development of quantum-computer-aided framework combining Gray-encoded qudit-to-qubit mappings and multi-reference selected quantum Krylov fast-forwarding algorithm
- Achievement of 18-30% reductions in gate counts and circuit depth for time-evolution circuits while accessing long-time dynamics up to ~100 ns
View Full Abstract
Interacting spin systems in solids underpin a wide range of quantum technologies, from quantum sensors and single-photon sources to spin-defect-based quantum registers and processors. We develop a quantum-computer-aided framework for simulating such devices using a general electron spin resonance Hamiltonian incorporating zero-field splitting, the Zeeman effect, hyperfine interactions, dipole-dipole spin-spin terms, and electron-phonon decoherence. Within this model, we combine Gray-encoded qudit-to-qubit mappings, qubit-wise commuting aggregation, and a multi-reference selected quantum Krylov fast-forwarding (sQKFF) hybrid algorithm to access long-time dynamics while remaining compatible with NISQ and early fault-tolerant hardware constraints. Numerical simulations demonstrate the computation of autocorrelation functions up to $\sim100$ ns, together with microwave absorption spectra and the $\ell_1$-norm of coherence, achieving 18-30$\%$ reductions in gate counts and circuit depth for Trotterized time-evolution circuits compared to unoptimized implementations. Using the nitrogen vacancy center in diamond as a testbed, we benchmark the framework against classical simulations and identify the reference-state selection in sQKFF as the primary factor governing accuracy at fixed hardware cost. This methodology provides a flexible blueprint for using quantum computers to design, compare, and optimize solid-state spin-qubit technologies under experimentally realistic conditions.
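Gray-encoded qudit-to-qubit mappings are a standard trick: adjacent qudit levels are assigned bitstrings that differ in a single bit, which tends to cheapen level transitions once the qudit is embedded in qubits. A minimal encoder (a generic illustration, not the paper's full pipeline) is shown below.

```python
# Gray-code mapping of qudit levels to qubit bitstrings: consecutive levels
# differ in exactly one bit, which tends to reduce the cost of level transitions
# when the qudit is embedded in qubits.
import math

def gray_code(i: int) -> int:
    return i ^ (i >> 1)

def qudit_level_to_bits(level: int, d: int) -> str:
    n_qubits = max(1, math.ceil(math.log2(d)))
    return format(gray_code(level), f"0{n_qubits}b")

# Example: a spin-1 (d = 3) electron spin, as in the NV-centre ground state.
for level in range(3):
    print(level, "->", qudit_level_to_bits(level, 3))
# 0 -> 00, 1 -> 01, 2 -> 11   (each step flips a single bit)
```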
Thermodynamics of linear open quantum walks
This paper studies the thermodynamic properties of linear Open Quantum Walks, which are quantum systems that evolve through interaction with their environment rather than unitary evolution. The researchers develop a theoretical framework to understand temperature, entropy, and energy in these systems and examine how thermodynamic laws apply to this quantum walking process.
Key Contributions
- Development of statistical mechanics framework for linear Open Quantum Walks including equilibrium temperature definition and thermalization analysis
- Analysis of nonequilibrium thermodynamics and validation of second and third laws of thermodynamics in OQW systems
- Application to dissipative quantum computation within the OQW framework
View Full Abstract
Open quantum systems interact with their environment, leading to nonunitary dynamics. We investigate the thermodynamics of linear Open Quantum Walks (OQWs), a class of quantum walks whose dynamics is entirely driven by the environment. We define an equilibrium temperature, identify a population inversion near a finite critical value of a control parameter, analyze the thermalization process, and develop the statistical mechanics needed to describe the thermodynamical properties of linear OQWs. We also study nonequilibrium thermodynamics by analyzing the time evolution of entropy, energy, and temperature, while providing analytical tools to understand the system's evolution as it converges to the thermalized state. We examine the validity of the second and third laws of thermodynamics in this setting. Finally, we employ these developments to shed light on dissipative quantum computation within the OQW framework.
Photonic Links for Spin-Based Quantum Sensors
This paper develops a method to control quantum sensors based on nitrogen-vacancy centers in diamond using fiber-optic cables instead of direct microwave delivery. The approach reduces thermal noise and electromagnetic interference while enabling better integration with distributed quantum systems.
Key Contributions
- Development of RF-over-fiber control system for optically accessible spin qubits
- Demonstration of thermally isolated and cryo-compatible ODMR spectroscopy platform
- Framework bridging spin-based quantum sensors with distributed quantum technologies
View Full Abstract
A growing variety of optically accessible spin qubits have emerged in recent years as key components for quantum sensors, qubits, and quantum memories. However, the scalability of conventional spin-based quantum architectures remains limited by direct microwave delivery, which introduces thermal noise, electromagnetic cross-talk, and design constraints for cryogenic, high-field, and distributed systems. In this work, we present a unified framework for RF-over-fiber (RFoF) control of optically accessible spins through RFoF optically detected magnetic resonance (ODMR) spectroscopy of nitrogen-vacancy (NV) centers in diamond. The RFoF platform relies on an electro-optically modulated telecom-band laser that transmits microwave signals over fiber and a high-speed photodiode that recovers the RF waveform to drive NV center spin transitions. We obtain an RFoF efficiency of 1.81\% at 2.90~GHz, corresponding to $P_{\mathrm{RF,out}}=-0.7$~dBm. The RFoF architecture provides a path toward low-noise, thermally isolated, and cryo-compatible ODMR systems bridging conventional spin-based quantum sensing protocols with emerging distributed quantum technologies.
Machine learning with minimal use of quantum computers: Provable advantages in Learning Under Quantum Privileged Information (LUQPI)
This paper proposes a quantum machine learning framework where quantum computers are used minimally - only to extract features from individual training data points, not during actual deployment. The authors prove that even this limited quantum involvement can provide exponential advantages over classical methods and demonstrate the approach on many-body quantum systems.
Key Contributions
- Introduction of Learning Under Quantum Privileged Information (LUQPI) framework that minimizes quantum computer usage while maintaining provable advantages
- Theoretical proof of exponential quantum-classical separations for suitable concept classes using only quantum feature extraction during training
- Numerical demonstration of performance gains in many-body quantum systems using quantum-generated features with classical SVM+ algorithms
View Full Abstract
Quantum machine learning (QML) is often listed as a promising candidate for useful applications of quantum computers, in part due to numerous proofs of possible quantum advantages. A central question is how small a role quantum computers can play while still enabling provable learning advantages over classical methods. We study an especially restricted setting in which a quantum computer is used only as a feature extractor: it acts independently on individual data points, without access to labels or global dataset information, is available only to augment the training set, and is not available at deployment. Training and deployment are therefore carried out by fully classical learners on a dataset augmented with quantum-generated features. We formalize this model by adapting the classical framework of Learning Under Privileged Information (LUPI) to the quantum case, which we call Learning Under Quantum Privileged Information (LUQPI). Within this framework, we show that even such minimally involved quantum feature extraction, available only during training, can yield exponential quantum-classical separations for suitable concept classes and data distributions under reasonable computational assumptions. We further situate LUQPI within a taxonomy of related quantum and classical learning settings and show how standard classical machinery, most notably the SVM+ algorithm, can exploit quantum-augmented data. Finally, we present numerical experiments in a physically motivated many-body setting, where privileged quantum features are expectation values of observables on ground states, and observe consistent performance gains for LUQPI-style models over strong classical baselines.
Hierarchy of discriminative power and complexity in learning quantum ensembles
This paper develops new mathematical tools for measuring distances between collections of quantum states, creating a hierarchy of metrics that trade off between how well they can distinguish different quantum ensembles and how efficiently they can be estimated from experimental data. The authors show these tools can be applied to training quantum machine learning models.
Key Contributions
- Introduction of MMD-k hierarchy for quantum ensemble distance metrics with proven trade-offs between discriminative power and statistical efficiency
- Theoretical analysis showing quantum Wasserstein distance achieves full discriminative power more efficiently than MMD-k metrics
- Application to quantum machine learning through quantum denoising diffusion probabilistic models
View Full Abstract
Distance metrics are central to machine learning, yet distances between ensembles of quantum states remain poorly understood due to fundamental quantum measurement constraints. We introduce a hierarchy of integral probability metrics, termed MMD-$k$, which generalize the maximum mean discrepancy to quantum ensembles and exhibit a strict trade-off between discriminative power and statistical efficiency as the moment order $k$ increases. For pure-state ensembles of size $N$, estimating MMD-$k$ using experimentally feasible SWAP-test-based estimators requires $\Theta(N^{2-2/k})$ samples for constant $k$, and $\Theta(N^3)$ samples to achieve full discriminative power at $k = N$. In contrast, the quantum Wasserstein distance attains full discriminative power with $\Theta(N^2 \log N)$ samples. These results provide principled guidance for the design of loss functions in quantum machine learning, which we illustrate in the training of quantum denoising diffusion probabilistic models.
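To see the simplest member of such a family in action, the sketch below computes a (biased) MMD between two pure-state ensembles using the fidelity kernel $|\langle\psi|\phi\rangle|^2$, which is what a SWAP test estimates; how this sits inside the paper's MMD-$k$ hierarchy is defined there, and the ensembles here are random stand-ins.

```python
# MMD between two ensembles of pure states using the fidelity kernel
# k(psi, phi) = |<psi|phi>|^2, which is exactly what a SWAP test estimates.
# Illustrative only; the paper defines a whole hierarchy (MMD-k) of such metrics.
import numpy as np

rng = np.random.default_rng(7)

def random_pure_states(n, dim):
    z = rng.normal(size=(n, dim)) + 1j * rng.normal(size=(n, dim))
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def fidelity_kernel(A, B):
    return np.abs(A.conj() @ B.T) ** 2          # |<psi_i|phi_j>|^2

def mmd_squared(A, B):
    kaa, kbb, kab = fidelity_kernel(A, A), fidelity_kernel(B, B), fidelity_kernel(A, B)
    return kaa.mean() + kbb.mean() - 2.0 * kab.mean()   # biased V-statistic estimator

dim = 8
ensemble_a = random_pure_states(100, dim)
ensemble_b = random_pure_states(100, dim)
print("MMD^2 (A vs B):", mmd_squared(ensemble_a, ensemble_b))
print("MMD^2 (A vs A):", mmd_squared(ensemble_a, ensemble_a))   # exactly 0
```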
Fabrication effects on Niobium oxidation and surface contamination in Niobium-metal bilayers using X-ray photoelectron spectroscopy
This paper studies how to protect niobium superconducting quantum devices from harmful surface oxidation by testing 17 different metal capping layers using X-ray spectroscopy. The researchers identify which protective coatings best survive manufacturing processes and maintain the quality needed for quantum resonators and qubits.
Key Contributions
- Systematic evaluation of 17 capping layers for niobium oxidation protection using XPS characterization
- Assessment of fabrication process effects on surface contamination and demonstration of improved microwave resonator performance
View Full Abstract
Superconducting resonators and qubits are limited by dielectric losses from surface oxides. Surface oxides are mitigated through various strategies such as the addition of a metal capping layer, surface passivation, and acid processing. In this study, we demonstrate the use of X-ray photoelectron spectroscopy (XPS) as a rapid characterization tool to study the effectiveness of capping layers for niobium in further device fabrication. We non-destructively evaluate 17 capping layers to characterize their ability to prevent oxygen diffusion, and the effects of standard fabrication processes -- annealing, resist stripping, and acid cleaning. We downselect for resilient capping layers and test their microwave resonator performance.
Entropy production versus memory effects in two-level open quantum systems
This paper compares different definitions of entropy production rate in open quantum systems and analyzes their relationship to memory effects. Using a qubit coupled to a bosonic mode as an example, the authors show that while entropy production definitions agree at weak coupling, they diverge significantly at strong coupling, and establish connections between entropy production signs and quantum memory effects.
Key Contributions
- Systematic comparison of multiple entropy production rate definitions in open quantum systems
- Discovery that two entropy production definitions coincide exactly even at strong coupling
- Establishment of correspondence between entropy production signs and P-divisibility memory effects
View Full Abstract
We compare several definitions of entropy production rate introduced in the literature from a large variety of situations and motivations, and then analyze their relations with memory effects. Considering a relevant experimental example of a qubit interacting with a single bosonic mode playing the role of a finite bath, we show that all definitions of entropy production coincide at weak coupling. In the strong coupling regime, significant discrepancies emerge between the different entropy production rates, although some similarities in the overall behaviour remain. However, surprisingly, two of these definitions -- one based on local quantities of the system and the other on non-local quantities -- coincide exactly, even in the case of strong coupling. Finally, a high degree of correspondence is observed when memory effects characterized by P-divisibility are compared with the sign of all entropy production rates in the case of weak coupling. Such correspondence degrades at strong coupling, leading us to extend the concept of entropy production to the dynamical map. We show a perfect equivalence between the sign of this enlarged concept of entropy production and P-divisibility, both numerically and analytically, in the case of phase-covariant master equations.
Bound-state-free Förster resonant shielding of strongly dipolar ultracold molecules
This paper proposes a new method to prevent ultracold molecules from colliding and being lost by using combined electric fields to eliminate bound states while maintaining controllable long-range interactions. The technique could enable creation of large, stable samples of strongly interacting molecular gases for quantum applications.
Key Contributions
- Development of bound-state-free Förster resonant shielding method using combined ac/dc electric fields
- Demonstration of elastic-to-loss rate ratios greater than 10^6 for ultracold NaCs molecules
- Elimination of photon-changing collisions that limit other molecular shielding approaches
View Full Abstract
We propose a method to suppress collisional loss in strongly dipolar, rotationally excited ultracold molecules using a combination of static (dc) and microwave (ac) electric fields. By tuning two excited pair molecular rotational states into a Förster resonance with a dc field, simultaneously driving excited rotational transitions with an ac field removes all long-range bound states, allowing near complete suppression of all two- and three-body collisional loss channels. While permitting tunable dipolar and anti-dipolar interactions, this bound-state-free ac/dc scheme is not subject to photon-changing collisions that are the primary source of two-body loss in shielding with two microwave fields, used to achieve the first molecular Bose-Einstein condensate [Bigagli et al., Nature 631, 289 (2024)]. Using NaCs as a representative example for strongly dipolar molecules, close-coupling calculations are performed to show that bound-state-free shielding can achieve ratios of elastic-to-loss rates $\gtrsim 10^{6}$ at 100 nK, with currently accessible ac and dc field generation technologies. This work opens new opportunities for realizing large, long-lived samples of strongly interacting degenerate molecular gases with tunable long-range interactions.
A scalable quantum-enhanced greedy algorithm for maximum independent set problems
This paper develops a hybrid quantum-classical algorithm that combines QAOA with classical greedy methods to solve maximum independent set problems on graphs. The approach uses pre-computed quantum parameters to avoid expensive optimization while maintaining good performance, and was demonstrated on both small quantum hardware and larger problems via simulation.
Key Contributions
- Novel hybrid quantum-classical algorithm combining QAOA with greedy heuristics for maximum independent set problems
- Demonstration of scalable approach using pre-computed parameters that avoids instance-specific training
- Experimental validation on 20-qubit superconducting device and tensor network simulations showing performance advantages over classical methods
View Full Abstract
We investigate a hybrid quantum-classical algorithm for solving the Maximum Independent Set (MIS) problem on regular graphs, combining the Quantum Approximate Optimization Algorithm (QAOA) with a minimal-degree classical greedy algorithm. The method leverages pre-computed QAOA angles, derived from depth-$p$ QAOA circuits on regular trees, to compute local expectation values and inform sequential greedy decisions that progressively build an independent set. This hybrid approach maintains shallow quantum circuits and avoids instance-specific parameter training, making it well-suited for implementation on current quantum hardware: we have implemented the algorithm on a 20-qubit IQM superconducting device to find independent sets in graphs with thousands of nodes. We perform tensor network simulations to evaluate the performance of the algorithm beyond the reach of current quantum hardware and compare to established classical heuristics. Our results show that even at low depth ($p=4$), the quantum-enhanced greedy method significantly outperforms purely classical greedy baselines as well as more sophisticated approximation algorithms. The modular structure of the algorithm and relatively low quantum resource requirements make it a compelling candidate for scalable, hybrid optimization in the NISQ era and beyond.
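The classical skeleton of the method is an ordinary greedy loop; the quantum part only changes the score used to pick the next vertex. The sketch below shows that skeleton with the minimal-degree score standing in for the QAOA-derived local expectation values (which require the pre-computed angles described in the abstract).

```python
# Greedy maximum-independent-set skeleton with a pluggable vertex score.
# Here the score is the classical minimal-degree heuristic; in the paper's hybrid
# algorithm it would be replaced by QAOA local expectation values computed with
# pre-optimized angles.
import networkx as nx

def greedy_mis(graph, score):
    g = graph.copy()
    independent_set = set()
    while g.number_of_nodes() > 0:
        v = min(g.nodes, key=lambda u: score(g, u))      # pick best-scoring vertex
        independent_set.add(v)
        g.remove_nodes_from(list(g.neighbors(v)) + [v])  # drop v and its neighbours
    return independent_set

def minimal_degree_score(g, u):
    return g.degree(u)

graph = nx.random_regular_graph(d=3, n=30, seed=0)
mis = greedy_mis(graph, minimal_degree_score)
print("independent set size:", len(mis))
assert all(not graph.has_edge(a, b) for a in mis for b in mis if a != b)
```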
Rapid high-temperature initialisation and readout of spins in silicon with 10 THz photons
This paper demonstrates a new method to initialize and read out quantum spins in silicon using ultrafast 10 THz laser pulses instead of slower microwave techniques, achieving a more than 1000-fold speed improvement and enabling operation at temperatures above 3 K.
Key Contributions
- Demonstrated optical pumping technique for rapid spin initialization in silicon using circularly polarized THz photons
- Achieved simultaneous fast state preparation and readout at temperatures above 3K using 9 ps laser pulses
- Calculated potential for 99% spin initialization within 250 ps for boron in strained silicon at 3K
View Full Abstract
Each cycle of a quantum computation requires a quantum state initialisation. For semiconductor-based quantum platforms, initialisation is typically performed via slow microwave processes and usually requires cooling to temperatures where only the lowest quantum level is occupied. In silicon, boron atoms are the most common impurities. They bind holes in orbitals including an effective spin-3/2 ground state as well as excited states analogous to the Rydberg series for hydrogen. Here we show that initialisation temperature demands may be relaxed and speeds increased over a thousand-fold by importing, from atomic physics, the procedure of optical pumping via excited orbital states to preferentially occupy a target ground state spin. Spin relaxation within the orbital ground state of unstrained silicon is too fast to measure for conventional pulsed microwave technology, except at temperatures below 2 K, implying a need not only for fast state preparation but also fast state readout. Circularly polarised ~10 THz photon pulses from a free electron laser meet both needs at temperatures above 3 K: a 9 ps pulse enhances the population of one spin eigenstate for the "1s"-like ground state orbital, and the second interrogates this imbalance in spin population. Using parameters given by our data, we calculate that it should be possible to initialise 99% of spins for boron in strained silicon within 250 ps at 3 K. The speedup of both state preparation and measurement gained for THz rather than microwave photons should be explored for the many other solid state quantum systems hosting THz excitations potentially useful as intermediate states.
Defect Relative Entropy
This paper introduces defect relative entropy as a new measure to distinguish between different types of defects in quantum field theories, particularly focusing on conformal and topological defects in conformal field theories. The authors derive universal formulas and discover that certain topological defects have zero defect relative entropy, leading to the concept of defect relative sectors.
Key Contributions
- Introduction of defect relative entropy as a distinguishability measure for defects in quantum field theories
- Derivation of universal formulas for defect relative entropy in conformal field theories that reduce to Kullback-Leibler divergence
- Discovery of zero defect relative entropy between certain topological defects and introduction of defect relative sectors
View Full Abstract
We introduce the concept of \textit{defect relative entropy} as a measure of distinguishability within the space of defects. We compute the defect relative entropy for conformal/topological defects, deriving a universal formula in conformal field theories (CFTs) on a circle. This formula reduces to the Kullback-Leibler divergence. Furthermore, we provide a detailed expression of the defect relative entropy for diagonal CFTs with specific topological defect choices, utilizing the theory's modular $\mathcal{S}$ matrix. We also present a general formula for the \textit{defect sandwiched Rényi relative entropy} and the \textit{defect fidelity}. Through explicit calculations in specific models, including the Ising model, the tricritical Ising model, and the $\widehat{su}(2)_{k}$ WZW model, we have made an intriguing finding: zero defect relative entropy between reduced density matrices associated with certain topological defects. Notably, we introduce the concept of the \textit{defect relative sector}, representing the set of topological defects with zero defect relative entropy.
Quotient geometry of tensor ring decomposition
This paper develops the mathematical framework for understanding the geometric structure of tensor ring decomposition, a method for efficiently representing high-dimensional data. The authors establish quotient geometry by handling gauge invariance issues and validate their theoretical results through numerical tensor completion experiments.
Key Contributions
- Established quotient geometry framework for tensor ring decomposition with full-rank conditions
- Extended results to uniform tensor ring decomposition where all core tensors are identical
View Full Abstract
Differential geometries derived from tensor decompositions have been extensively studied and provided the foundations for a variety of efficient numerical methods. Despite the practical success of the tensor ring (TR) decomposition, its intrinsic geometry remains less understood, primarily due to the underlying ring structure and the resulting nontrivial gauge invariance. We establish the quotient geometry of TR decomposition by imposing full-rank conditions on all unfolding matrices of the core tensors and capturing the gauge invariance. Additionally, the results can be extended to the uniform TR decomposition, where all core tensors are identical. Numerical experiments validate the developed geometries via tensor ring completion tasks.
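As background, a tensor ring decomposition writes an order-$d$ tensor as a trace over a cyclic product of third-order cores, $T[i_1,\dots,i_d] = \mathrm{Tr}\,(G_1[i_1]\,G_2[i_2]\cdots G_d[i_d])$, and the gauge invariance the paper must handle comes from inserting $M, M^{-1}$ between neighbouring cores. The minimal sketch below reconstructs a small tensor from random cores and checks that invariance numerically.

```python
# Reconstruct a full tensor from tensor-ring (TR) cores G_k of shape (r_k, n_k, r_{k+1}),
# with r_{d+1} = r_1:  T[i1,...,id] = trace(G1[:, i1, :] @ ... @ Gd[:, id, :]).
import numpy as np
from itertools import product

def tr_to_full(cores):
    shape = tuple(core.shape[1] for core in cores)
    full = np.empty(shape)
    for idx in product(*[range(n) for n in shape]):
        mat = cores[0][:, idx[0], :]
        for core, i in zip(cores[1:], idx[1:]):
            mat = mat @ core[:, i, :]
        full[idx] = np.trace(mat)
    return full

# Random TR cores for a 4 x 5 x 6 tensor with ranks (2, 3, 2).
rng = np.random.default_rng(3)
ranks, dims = (2, 3, 2), (4, 5, 6)
cores = [rng.normal(size=(ranks[k], dims[k], ranks[(k + 1) % 3])) for k in range(3)]

T = tr_to_full(cores)
print(T.shape)   # (4, 5, 6)

# Gauge invariance: inserting M, M^{-1} between neighbouring cores leaves T unchanged.
M = rng.normal(size=(ranks[1], ranks[1]))
gauged = [cores[0] @ M, np.einsum("ab,bnc->anc", np.linalg.inv(M), cores[1]), cores[2]]
print(np.allclose(T, tr_to_full(gauged)))   # True
```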
Entanglement-Assisted Bosonic MAC: Achievable Rates and Covert Communication
This paper analyzes quantum communication systems where multiple senders use entanglement to communicate simultaneously over bosonic channels (like optical fiber), focusing on both maximizing data rates and enabling covert (undetectable) communication. The research shows that entanglement assistance allows for better performance in both regular and covert communication scenarios.
Key Contributions
- Derived closed-form achievable rate region for entanglement-assisted bosonic multiple access channels using phase-shift keying modulation
- Demonstrated that entanglement assistance enables superior covert communication throughput scaling of O(√n log n) compared to classical square-root law
View Full Abstract
We consider the problem of covert communication over the entanglement-assisted (EA) bosonic multiple access channel (MAC). We derive a closed-form achievable rate region for the general EA bosonic MAC using high-order phase-shift keying (PSK) modulation. Specifically, we demonstrate that in the low-photon regime the capacity region collapses into a rectangle, asymptotically matching the point-to-point capacity as multi-user interference vanishes. We also characterize an achievable covert throughput region, showing that entanglement assistance enables an aggregate throughput scaling of $O(\sqrt{n} \log n)$ covert bits with the block length $n$ for both senders, surpassing the square-root law as in the point-to-point case. Our analysis reveals that the joint covertness constraint imposes a linear trade-off between the senders' throughputs.
Schroedinger's principle eliminates the EPR-locality paradox
This paper introduces a principle from Schrödinger's work to resolve the EPR paradox within the Copenhagen interpretation of quantum mechanics, arguing that the paradox doesn't actually exist when proper wave function collapse is assumed. The authors use entangled two-spin systems as their primary example to demonstrate this resolution.
Key Contributions
- Introduction of Schrödinger's principle to eliminate the EPR-locality paradox
- Demonstration that the paradox is well-posed in simple entangled spin systems within Copenhagen interpretation
View Full Abstract
We introduce a principle, implicitly contained in Schroedinger's paper (Schr35), which allows a proof of the non-existence of the EPR-locality paradox in the Copenhagen interpretation of quantum mechanics. The paradox is shown to be well-posed already in the simplest example of an entangled state of two spins one-half, independently of the (well-taken) objections by Araki and Yanase that the measurement of spin is not a local measurement. We assume that any measurement results in the collapse of the wave-packet.
A geometric criterion for optimal measurements in multiparameter quantum metrology
This paper develops a geometric framework for determining when quantum sensors can achieve optimal precision when measuring multiple parameters simultaneously. The authors establish mathematical conditions that tell researchers whether the fundamental quantum limits of precision can be reached with practical measurement schemes.
Key Contributions
- Establishes equivalence between quantum Cramér-Rao bound saturation and simultaneous hollowization of traceless operators
- Provides geometric characterization and direct criterion for constructing optimal measurement schemes (POVMs)
- Identifies conditions for partial commutativity and proves counter-intuitive limitations of informationally-complete POVMs
View Full Abstract
Determining when the multiparameter quantum Cramér--Rao bound (QCRB) is saturable with experimentally relevant single-copy measurements is a central open problem in quantum metrology. Here we establish an equivalence between QCRB saturation and the simultaneous hollowization of a set of traceless operators associated with the estimation model, i.e., the existence of complete (generally nonorthogonal) bases in which all corresponding diagonal matrix elements vanish. This formulation yields a geometric characterization: optimal rank-one measurement vectors are confined to a subspace orthogonal to a state-determined Hermitian span. This provides a direct criterion to construct optimal Positive Operator-Valued Measures (POVMs). We then identify conditions under which the partial commutativity condition proposed in [Phys. Rev. A 100, 032104 (2019)] becomes necessary and sufficient for the saturation of the QCRB, demonstrate that this condition is not always sufficient, and prove the counter-intuitive uselessness of informationally-complete POVMs.
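For context, the multiparameter quantum Cramér-Rao bound referenced above states that, for $\nu$ repetitions and any unbiased estimator, the covariance of the estimates is bounded by the inverse quantum Fisher information matrix,

```latex
\mathrm{Cov}(\hat{\boldsymbol{\theta}}) \;\succeq\; \frac{1}{\nu}\,F_Q(\boldsymbol{\theta})^{-1},
```

in the positive-semidefinite sense; the paper's question is when this matrix bound can actually be attained with single-copy POVMs.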
Quantum LEGO Learning: A Modular Design Principle for Hybrid Artificial Intelligence
This paper introduces Quantum LEGO Learning, a modular framework for hybrid quantum-classical machine learning that combines pre-trained classical neural networks with variational quantum circuits as separate, reusable components. The approach aims to improve efficiency and transferability in quantum machine learning by treating quantum and classical parts as interchangeable blocks.
Key Contributions
- Modular framework for hybrid quantum-classical learning with architecture-agnostic design
- Block-wise generalization theory that decomposes learning error and characterizes quantum representational advantages
- Empirical validation showing stable optimization and noise robustness in quantum dot classification
View Full Abstract
Hybrid quantum-classical learning models increasingly integrate neural networks with variational quantum circuits (VQCs) to exploit complementary inductive biases. However, many existing approaches rely on tightly coupled architectures or task-specific encoders, limiting conceptual clarity, generality, and transferability across learning settings. In this work, we introduce Quantum LEGO Learning, a modular and architecture-agnostic learning framework that treats classical and quantum components as reusable, composable learning blocks with well-defined roles. Within this framework, a pre-trained classical neural network serves as a frozen feature block, while a VQC acts as a trainable adaptive module that operates on structured representations rather than raw inputs. This separation enables efficient learning under constrained quantum resources and provides a principled abstraction for analyzing hybrid models. We develop a block-wise generalization theory that decomposes learning error into approximation and estimation components, explicitly characterizing how the complexity and training status of each block influence overall performance. Our analysis generalizes prior tensor-network-specific results and identifies conditions under which quantum modules provide representational advantages over comparably sized classical heads. Empirically, we validate the framework through systematic block-swap experiments across frozen feature extractors and both quantum and classical adaptive heads. Experiments on quantum dot classification demonstrate stable optimization, reduced sensitivity to qubit count, and robustness to realistic noise.
Quantum Random Features: A Spectral Framework for Quantum Machine Learning
This paper introduces Quantum Random Features (QRF) and Quantum Dynamical Random Features (QDRF), which are lightweight quantum machine learning models that can generate high-dimensional feature maps without expensive optimization. The models achieve competitive performance on Fashion-MNIST classification while requiring fewer resources than traditional deep quantum circuits.
Key Contributions
- Introduction of QRF and QDRF as scalable quantum machine learning models without variational optimization
- Demonstration that quantum random features can achieve O(log(N_f)) preprocessing cost for N_f-dimensional feature maps
- Experimental validation showing up to 89.3% accuracy on Fashion-MNIST matching classical baselines
View Full Abstract
Quantum machine learning (QML) models often require deep, parameterized circuits to capture complex frequency components, limiting their scalability and near-term implementation. We introduce \textit{Quantum Random Features} (QRF) and \textit{Quantum Dynamical Random Features} (QDRF), lightweight quantum reservoir models inspired by classical random Fourier features (RFF) that generate high-dimensional spectral representations without variational optimization. Using $Z$-rotation encoding combined with random permutations or Hamiltonian dynamics, these models achieve $N_f$-dimensional feature maps at preprocessing cost $O(\log(N_f))$. Spectral analysis shows that QRF and QDRF reproduce the behavior of RFF, while simulations on Fashion-MNIST reach up to 89.3\% accuracy, matching or surpassing classical baselines with scalable qubit requirements. By linking spectral theory with experimentally feasible quantum dynamics, this work provides a compact and hardware-compatible route to scalable quantum learning.
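For orientation, the sketch below shows the classical random-Fourier-features construction that QRF/QDRF are stated to emulate spectrally; the Gaussian kernel, feature count, and helper name are illustrative assumptions, not the paper's quantum encoding.

```python
import numpy as np

# Classical random Fourier features (RFF) sketch; QRF/QDRF reproduce this kind
# of spectral behavior with Z-rotation encodings plus random permutations or
# Hamiltonian dynamics, which are not implemented here.
rng = np.random.default_rng(0)

def rff_features(X, n_features=256, gamma=1.0):
    """Map X (n_samples, d) to random Fourier features approximating the
    Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(4, 8))
Phi = rff_features(X)
print(Phi.shape)            # (4, 256)
print((Phi @ Phi.T)[0, 1])  # approximates the kernel value k(x_0, x_1)
```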
Non-invertible translation from Lieb-Schultz-Mattis anomaly
This paper investigates quantum many-body systems with Lieb-Schultz-Mattis anomalies, showing that translation operators become non-invertible and fuse with internal symmetry defects when the full internal symmetry is gauged. The work extends previous one-dimensional results to higher dimensions and connects them to mixed anomalies and higher-group structures.
Key Contributions
- Extension of one-dimensional non-invertible translation results to two and three spatial dimensions
- Demonstration that translation operators become non-invertible and fuse with internal symmetry defects after gauging
- Connection of the phenomena to mixed anomalies and higher-group structures through topological field theory
View Full Abstract
Symmetry provides powerful non-perturbative constraints in quantum many-body systems. A prominent example is the Lieb-Schultz-Mattis (LSM) anomaly -- a mixed 't Hooft anomaly between internal and translational symmetries that forbids a trivial symmetric gapped phase. In this work, we investigate lattice translation operators in systems with an LSM anomaly. We construct explicit lattice models in two and three spatial dimensions and show that, after gauging the full internal symmetry, translation becomes non-invertible and fuses into defects of the internal symmetry. The result is supported by anomaly inflow from the viewpoint of topological field theory. Our work extends earlier one-dimensional observations to a unified higher-dimensional framework and clarifies their origin in mixed anomalies and higher-group structures, highlighting a coherent interplay between internal and crystalline symmetries.
Holographic Entanglement Propagation Through Wormholes
This paper investigates how quantum entanglement and energy are transferred between two identical quantum field theories through holographic wormholes in the AdS/CFT framework. The authors show that local operator insertions enable signal transmission beyond event horizons and interpret this as a form of quantum teleportation that enhances rather than suppresses mutual information.
Key Contributions
- Demonstration of entanglement transfer through holographic wormholes via explicit CFT calculations
- Identification of an anti-scrambling phenomenon where mutual information is enhanced rather than suppressed by local operator excitations
View Full Abstract
We study how energy and quantum entanglement are transferred when two identical CFTs are entangled locally. This is probed by considering a local operator insertion in one of the CFTs. When the CFTs have holographic duals via the AdS/CFT correspondence, the transfer happens through an AdS wormhole that allows signal propagation even beyond the horizon from one AdS boundary to the other; we demonstrate this in explicit CFT calculations. We argue that this transmission is possible because the insertion of a local operator is not a unitary process but a regularized version of a projective measurement, and that it can be interpreted as quantum teleportation. We also find that this leads to a phenomenon opposite to scrambling, where mutual information, instead of being suppressed, gets enhanced by the insertion of a local operator excitation.
Non-secular polariton leakage and dark-state protection in hybrid plasmonic cavities
This paper studies how plasmonic cavities lose energy through radiation and absorption, developing a new theoretical framework that accounts for interference effects between different decay pathways. The work shows that when the environment cannot distinguish between different polariton states, significant deviations from standard decay behavior occur, including the stabilization of dark states.
Key Contributions
- Development of non-secular master equation for plasmonic cavity dynamics that includes interference between decay pathways
- Prediction of dark-state stabilization and bath-induced coherences when environment cannot resolve polariton splitting
View Full Abstract
A major issue in exploiting plasmonic cavities as key components in nanotechnology is the effect of radiative and absorption losses on their electrodynamic behavior. Treating them as open systems, we derive a time-local, completely positive master equation that retains non-secular interference between decay pathways and reduces to the standard secular description when the environment resolves polariton splitting. When it does not, the theory predicts order-one deviations from secular leakage dynamics, including bath-induced coherences and stabilization of dark polaritons, and provides a simple design criterion based on the ratio of polariton splitting to reservoir linewidth. A time-resolved leakage measurement, such as transmission, reflectivity, or photoluminescence, can be used to observe these effects.
Entanglement of quantum systems via a classical mediator in hybrid van Hove theory
This paper investigates whether quantum systems can become entangled through interaction with a classical mediator, using hybrid van Hove theory to show that two quantum spins can indeed become entangled via a classical harmonic oscillator. The work challenges existing no-go theorems and suggests that entanglement studies cannot definitively rule out theories with classical gravity.
Key Contributions
- Demonstrates entanglement generation through classical mediators in hybrid van Hove theory
- Shows that existing no-go theorems for classical-mediated entanglement don't universally apply
- Provides theoretical framework suggesting quantum entanglement studies cannot rule out classical gravity theories
View Full Abstract
It is a matter of ongoing discussion whether quantum states can become entangled while only interacting via a classical mediator. This lively debate is deeply interwoven with the question of whether entanglement studies can prove the quantum nature of gravity. However, the answer to this fundamental question depends crucially on which hybrid quantum-classical theory is used. In this letter, we demonstrate that entanglement by a classical mediator is possible within the framework of hybrid van Hove theory, showing that existing no-go theorems on that matter do not universally apply to hybrid theories in general. After briefly recapitulating the key features of the hybrid van Hove theory, we show this using the example of two quantum spins coupled by a classical harmonic oscillator. By deriving the spin density matrix for this scenario and comparing it to its equivalent for a pure quantum system, we show that entanglement between the two spins is generated in both cases. Finally, this is illustrated by presenting the purity and concurrence of the spin-spin system as decisive measures of entanglement. Our results further imply that quantum entanglement studies cannot rule out consistent quantum theories featuring classical gravity.
Strassen's support functionals coincide with the quantum functionals
This paper solves a 30-year-old problem in tensor complexity theory by proving that Strassen's support functionals are equivalent to quantum functionals, which are universal spectral points defined through entropy optimization on entanglement polytopes. The result is achieved using a general minimax formula for convex optimization that also applies to other tensor parameters.
Key Contributions
- Proves equivalence between Strassen's support functionals and quantum functionals, solving a long-standing open problem from 1991
- Develops a general minimax formula for convex optimization on entanglement polytopes with applications to tensor parameters like asymptotic slice rank
View Full Abstract
Strassen's asymptotic spectrum offers a framework for analyzing the complexity of tensors. It has found applications in diverse areas, from computer science to additive combinatorics and quantum information. A long-standing open problem, dating back to 1991, asks whether Strassen's support functionals are universal spectral points, that is, points in the asymptotic spectrum of tensors. In this paper, we answer this question in the affirmative by proving that the support functionals coincide with the quantum functionals - universal spectral points that are defined via entropy optimization on entanglement polytopes. We obtain this result as a special case of a general minimax formula for convex optimization on entanglement polytopes (and other moment polytopes) that has further applications to other tensor parameters, including the asymptotic slice rank. Our proof is based on a recent Fenchel-type duality theorem on Hadamard manifolds due to Hirai.
Quantum Otto cycle in the Anderson impurity model
This paper studies a quantum heat engine based on the Otto cycle using the Anderson impurity model, investigating how electron interactions and coupling to thermal reservoirs affect the engine's performance. The researchers use advanced numerical methods to analyze different operating regimes and find that Coulomb interactions can enhance the engine's efficiency.
Key Contributions
- Development of thermodynamic analysis framework for quantum Otto cycles in strongly correlated systems
- Demonstration that Coulomb interactions can enhance quantum heat engine efficiency
View Full Abstract
We study the thermodynamic performance of a periodic quantum Otto cycle operating on the single-impurity Anderson model. Using a decomposition of the time-evolution generator based on the principle of minimal dissipation, combined with the numerically exact hierarchical equations of motion (HEOM) method, we analyze the operating regimes of the quantum thermal machine and investigate the effects of Coulomb interactions, strong system-reservoir coupling, and energy level alignments. Our results show that the Coulomb interaction can change the operating regimes and may lead to an enhancement of the efficiency.
Cooperative Emission from Quantum Emitters in Hexagonal Boron Nitride Layers
This paper demonstrates cooperative emission from quantum emitters in hexagonal boron nitride at room temperature, showing that groups of closely spaced quantum emitters can emit light collectively with enhanced brightness and faster decay rates. The researchers achieved this without needing special cooling or optical cavities, making it a practical platform for quantum light sources.
Key Contributions
- First demonstration of cooperative emission from quantum emitters in hBN at room temperature
- Observation of superlinear intensity enhancement and accelerated radiative decay in sub-wavelength emitter ensembles
- Establishment of hBN as a scalable solid-state platform for collective quantum optics without cryogenic cooling
View Full Abstract
Collective light emission from many-body quantum systems is a cornerstone of quantum optics, yet its implementation in solid-state platforms operating under ambient conditions remains highly challenging. Large-bandgap van der Waals materials such as hexagonal boron nitride (hBN) host stable room-temperature single-photon emitters with narrow linewidths across a broad spectral range. However, cooperative radiative effects in this system have not been previously explored. Here we demonstrate collective emission from quantum-emitter ensembles in hBN layers when the emitters are nearly indistinguishable and positioned within a sub-wavelength proximity. Using confocal microscopy and a Hanbury Brown-Twiss (HBT) configuration, we identify both isolated emitters and ensembles activated by localized electron-beam irradiation. Time-resolved photoluminescence measurements reveal a superlinear intensity enhancement and a pronounced acceleration of the radiative decay in tightly confined ensembles, with lifetimes approaching the temporal resolution of our experimental system (about 500 ps), compared to approximately 1.85 ns for single emitters or large, spatially extended ensembles. Complementary second-order photon-correlation measurements exhibit sub-Poissonian antidip consistent with emission from a few indistinguishable emitters. The simultaneous observation of lifetime shortening and enhanced emission provides direct evidence of cooperative emission at room temperature, achieved without optical cavities or cryogenic cooling. These results establish optically active defect ensembles in hBN as a scalable solid-state platform for engineered collective quantum optics in two-dimensional materials, opening avenues toward ultrabright superradiant light sources and nonclassical photonic states for quantum technologies.
Quantum Simulation with Fluxonium Qutrit Arrays
This paper explores using fluxonium superconducting circuits as three-level quantum systems (qutrits) instead of traditional two-level qubits, demonstrating how arrays of these qutrits can simulate exotic quantum matter with tunable interactions and hopping processes across four distinct operational regimes.
Key Contributions
- Demonstrates four operational regimes for fluxonium qutrit arrays with different interaction characteristics
- Proposes fluxonium qutrits as a platform for quantum simulation beyond standard Bose-Hubbard models
- Analyzes rich ground-state phase diagrams and practical experimental protocols for probing different regimes
View Full Abstract
Fluxonium superconducting circuits were originally proposed to realize highly coherent qubits. In this work, we explore how these circuits can be used to implement and harness qutrits, by tuning their energy levels and matrix elements via an external flux bias. In particular, we investigate the distinctive features of arrays of fluxonium qutrits, and their potential for the quantum simulation of exotic quantum matter. We identify four different operational regimes, classified according to the plasmon-like versus fluxon-like nature of the qutrit excitations. Highly tunable on-site interactions are complemented by correlated single-particle hopping, pair hopping and non-local interactions, which naturally emerge and have different weights in the four regimes. Dispersive corrections and decoherence are also analyzed. We investigate the rich ground-state phase diagram of qutrit arrays and propose practical dynamical experiments to probe the different regimes. Altogether, fluxonium qutrit arrays emerge as a versatile and experimentally accessible platform to explore strongly correlated bosonic matter beyond the Bose-Hubbard paradigm, and with a potential toward simulating lattice gauge theories and non-Abelian topological states.
RF-free driving of nuclear spins with color centers in silicon carbide
This paper demonstrates a method to control nuclear spins in silicon carbide color centers using only microwave pulses, eliminating the need for additional radio frequency fields. The technique uses a tilted magnetic field to enable hyperfine-enhanced effects that allow simultaneous control of both electron and nuclear spins, achieving 89% fidelity.
Key Contributions
- Demonstration of RF-free nuclear spin control using only microwave fields
- Achievement of 89% two-qubit tomography fidelity with simplified control scheme
- Development of hyperfine-enhanced control method using precisely tilted magnetic fields
View Full Abstract
Color centers that enable nuclear-spin control without RF fields offer a powerful route towards simplified and scalable quantum devices. Such capabilities are especially valuable for quantum sensing and computing platforms that already find applications in biology, materials science, and geophysics. A key challenge is the coherent manipulation of nearby nuclear spins, which serve as quantum memories and auxiliary qubits but conventionally require additional high-power RF fields that increase the experimental complexity and overall power consumption. Finding systems where both electron and nuclear spins can be controlled using a single MW source is therefore highly desirable. Here, using a modified divacancy center in silicon carbide, we show that coherent control of a coupled nuclear spin is possible without any RF fields. Instead, MW pulses driving the electron spin also manipulate the nuclear spin through hyperfine-enhanced effects, activated by a precisely tilted external magnetic field. We demonstrate high-fidelity nuclear-spin control, achieving 89% two-qubit tomography fidelity and nearly T1-limited nuclear coherence times. This approach offers a simplified and scalable route for future quantum applications.
Optimized adiabatic-impulse protocol preserving Kibble-Zurek scaling with attenuated anti-Kibble-Zurek behavior
This paper develops an optimized adiabatic-impulse protocol that significantly reduces the time needed to drive quantum systems through critical points while preserving important scaling laws and reducing defect formation. The authors apply their method to the transverse Ising model and show how it performs in the presence of noise that can cause anti-Kibble-Zurek behavior.
Key Contributions
- Development of optimized adiabatic-impulse protocol that achieves faster evolution times while preserving Kibble-Zurek scaling
- Demonstration of attenuated anti-Kibble-Zurek behavior and altered universal power-law scaling in the presence of noise
- Generalization to incorporate nonlinear Kibble-Zurek scaling with application to transverse Ising chain
View Full Abstract
We propose an optimized adiabatic-impulse (OAI) protocol that achieves much shorter evolution time while preserving the Kibble-Zurek scaling. Near the critical regime, the control field is linearly ramped across the quantum critical point at a rate characterized by a quench time $\tau_Q$. Away from the critical regime, the evolution is designed to follow the threshold of adiabatic breakdown, which we characterize by an adiabatic coefficient $\zeta \propto \tau_Q^{\alpha}$. As a consequence, the total evolution time exhibits a sublinear power-law dependence on $\tau_Q$, and the conventional linear quench protocol is recovered in the limit $\alpha \rightarrow \infty$. We apply the OAI protocol to the transverse Ising chain and numerically determine the minimum value of $\zeta$. We further investigate the nonequilibrium dynamics in the presence of a noisy field that can induce anti-Kibble-Zurek behavior, leading to more defects for slower ramps. Within the OAI protocol, the optimal quench time that minimizes defects obeys an altered universal power-law scaling with the noise strength. Finally, we generalize the OAI protocol to incorporate nonlinear Kibble-Zurek scaling.
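As textbook background for the scaling the OAI protocol is designed to preserve (not the paper's protocol itself), the sketch below recovers the Kibble-Zurek exponent for a linear quench of the transverse-field Ising chain from the standard Landau-Zener mode-excitation estimate; the constant in the exponent is an assumption up to conventions.

```python
import numpy as np

# Background sketch: small-k modes of a linearly quenched transverse-field
# Ising chain are excited with probability p_k ~ exp(-c * tauQ * k^2), with c
# an O(1) constant (c = 2*pi in common conventions with J = hbar = 1).
# Integrating over modes gives the Kibble-Zurek defect density n ~ tauQ^(-1/2).
k = np.linspace(-np.pi, np.pi, 20001)
dk = k[1] - k[0]

def defect_density(tauQ, c=2 * np.pi):
    return np.sum(np.exp(-c * tauQ * k**2)) * dk / (2 * np.pi)

tauQs = np.array([10.0, 40.0, 160.0, 640.0])
n = np.array([defect_density(t) for t in tauQs])
slope = np.polyfit(np.log(tauQs), np.log(n), 1)[0]
print("fitted exponent:", round(slope, 3), "(KZ prediction: -0.5)")
```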
Quantum steering probes energy transfer in quantum batteries
This paper investigates how EPR steering (a quantum correlation phenomenon) can be used to monitor and characterize energy transfer processes in quantum batteries during charging cycles. The researchers found that steering serves as both a resource that enhances energy storage and a diagnostic tool for assessing battery performance across different charging scenarios.
Key Contributions
- Established EPR steering as a novel indicator for monitoring quantum battery energy variations
- Demonstrated that steering acts as both a resource for enhancing energy storage and a witness to battery population balance
View Full Abstract
This study investigates the role of EPR steering in characterizing the energy dynamics of quantum batteries (QBs) within a charging system that features shared reservoirs. After optimizing parameter configurations to achieve high-energy systems, we observe, across a variety of charging scenarios in low-dissipation regimes, that steering serves as a vital resource: it is initially stored until the system reaches energy equilibrium, and then subsequently utilized to sustain the enhancement of energy storage. Furthermore, steering acts as a witness to battery population balance and a consumable that enhances extractable work. Additionally, we discuss the contribution of the steering potential to energy upon high-dissipation charging in detail. These findings establish a novel indicator for monitoring QB energy variations, which will be beneficial for achieving high-performance quantum batteries.
A general framework for interactions between electron beams and quantum optical systems
This paper develops a theoretical framework for describing how free-electron beams interact with quantum systems in electromagnetic environments. The framework shows that these environments can amplify weak electron-electron interactions to enable new applications in quantum control, imaging, and spectroscopy at the nanoscale.
Key Contributions
- General theoretical framework for free-electron beam interactions with quantized bound systems in electromagnetic environments
- Demonstration that electromagnetic environments can amplify weak electron-electron coupling to reach new interaction regimes
- Protocols for coherent qubit control and nondestructive readout using electron beam quantum statistics
View Full Abstract
We provide a theoretical framework to describe the dynamics of a free-electron beam interacting with quantized bound systems in arbitrary electromagnetic environments. This expands the quantum optics toolbox to incorporate free-electron beams for applications in highly tunable quantum control, imaging, and spectroscopy at the nanoscale. The framework recovers previously studied results and shows that electromagnetic environments can amplify the intrinsically weak coupling between a free-electron and a bound electron to reach previously inaccessible interaction regimes. We leverage this enhanced coupling for experimentally feasible protocols in coherent qubit control and towards the nondestructive readout and projective control of the electron beam's quantum-number statistics. Our framework is broadly applicable to microwave-frequency qubits, optical nanophotonics, cavity quantum electrodynamics, and emerging platforms at the interface of electron microscopy and quantum information.
Robust Floquet Topological Phases and Anomalous $\pi$-Modes in Quasiperiodic Quantum Walks
This paper studies quantum walks with quasiperiodic driving patterns and discovers that topological edge states can exist both at zero quasienergy and at the quasienergy zone boundary, remaining robust despite the fractal nature of the bulk spectrum. The work demonstrates that Floquet topological protection persists even in quasiperiodic systems.
Key Contributions
- Discovery of anomalous π-mode topological edge states at the quasienergy zone boundary
- Demonstration that Floquet topological protection remains intact in quasiperiodic systems
View Full Abstract
We uncover the global topological phase diagram of one-dimensional discrete-time quantum walks driven by Fibonacci-modulated coin parameters. Utilizing the mean chiral displacement (MCD) as a dynamical probe, we identify robust topological phases defined by a strictly quantized winding number $\nu=-1$ and exponentially localized edge states. Crucially, we discover that these topological edge modes emerge not only at zero energy but also at the quasienergy zone boundary $E=\pi$, exhibiting identical localization robustness despite the fractal nature of the bulk spectrum. These results demonstrate that Floquet topological protection remains intact amidst quasiperiodic disorder, offering a concrete route to observing exotic non-equilibrium phases in photonic experiments.
Belief Propagation with Quantum Messages for Symmetric Q-ary Pure-State Channels
This paper develops a quantum message passing algorithm for decoding classical information sent over quantum channels with non-binary alphabets. The work extends existing binary quantum belief propagation methods to handle more general communication scenarios where the quantum channel outputs have specific mathematical symmetries.
Key Contributions
- Generalization of belief propagation with quantum messages from binary to q-ary alphabets for symmetric pure-state channels
- Development of closed-form recursions for tracking quantum message combining using Gram matrix eigenvalues
- Creation of density evolution framework enabling threshold analysis for LDPC codes and polar code construction on these channels
View Full Abstract
Belief propagation with quantum messages (BPQM) provides a low-complexity alternative to collective measurements for communication over classical--quantum channels. Prior BPQM constructions and density-evolution (DE) analyses have focused on binary alphabets. Here, we generalize BPQM to symmetric q-ary pure-state channels (PSCs) whose output Gram matrix is circulant. For this class, we show that bit-node and check-node combining can be tracked efficiently via closed-form recursions on the Gram-matrix eigenvalues, independent of the particular physical realization of the output states. These recursions yield explicit BPQM unitaries and analytic bounds on the fidelities of the combined channels in terms of the input-channel fidelities. This provides a DE framework for symmetric q-ary PSCs that allows one to estimate BPQM decoding thresholds for LDPC codes and to construct polar codes on these channels.
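A small illustration of the circulant-Gram-matrix ingredient: for a circulant matrix, the eigenvalues that the closed-form recursions are stated to track can be read off from the DFT of its first row. The overlap pattern below is hypothetical and not from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): eigenvalues of a
# circulant Gram matrix from the DFT of its first row.
def circulant_eigenvalues(first_row):
    """Eigenvalues of the circulant matrix defined by `first_row`."""
    return np.fft.fft(first_row)

# Hypothetical example: Gram matrix of q = 4 pure states with overlap pattern
# G[j, k] = gamma^{min(|j-k|, q-|j-k|)} for an assumed real overlap gamma.
q, gamma = 4, 0.6
first_row = gamma ** np.minimum(np.arange(q), q - np.arange(q))
eigvals = circulant_eigenvalues(first_row).real  # real for a symmetric circulant
print(np.sort(eigvals))
```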
QCL-IDS: Quantum Continual Learning for Intrusion Detection with Fidelity-Anchored Stability and Generative Replay
This paper proposes QCL-IDS, a quantum machine learning framework for cybersecurity intrusion detection that can continuously learn new attack patterns while remembering old ones, using quantum algorithms to balance learning new threats with retaining knowledge of previous attacks without storing sensitive network data.
Key Contributions
- Q-FISH quantum Fisher anchoring method for preventing catastrophic forgetting in quantum neural networks
- Privacy-preserved quantum generative replay system for synthesizing training data without storing sensitive network flows
View Full Abstract
Continual intrusion detection must absorb newly emerging attack stages while retaining legacy detection capability under strict operational constraints, including bounded compute and qubit budgets and privacy rules that preclude long-term storage of raw telemetry. We propose QCL-IDS, a quantum-centric continual-learning framework that co-designs stability and privacy-governed rehearsal for NISQ-era pipelines. Its core component, Q-FISH (Quantum Fisher Anchors), enforces retention using a compact anchor coreset through (i) sensitivity-weighted parameter constraints and (ii) a fidelity-based functional anchoring term that directly limits decision drift on representative historical traffic. To regain plasticity without retaining sensitive flows, QCL-IDS further introduces privacy-preserved quantum generative replay (QGR) via frozen, task-conditioned generator snapshots that synthesize bounded rehearsal samples. Across a three-stage attack stream on UNSW-NB15 and CICIDS2017, QCL-IDS consistently attains the best retention-adaptation trade-off: the gradient-anchor configuration achieves mean Attack-F1 = 0.941 with forgetting = 0.005 on UNSW-NB15 and mean Attack-F1 = 0.944 with forgetting = 0.004 on CICIDS2017, versus 0.800/0.138 and 0.803/0.128 for sequential fine-tuning, respectively.
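As a rough classical analogy for the sensitivity-weighted parameter constraint in Q-FISH, the sketch below shows an EWC-style Fisher-weighted quadratic anchor. The function name and all numbers are hypothetical; the fidelity-based functional anchoring term and the quantum generative replay component are not reproduced here.

```python
import numpy as np

# Classical analogy only: a Fisher-weighted quadratic penalty that discourages
# drift of parameters that were important for earlier tasks.
def anchored_loss(task_loss, params, anchor_params, fisher_diag, lam=1.0):
    """task_loss: scalar loss on the new task; the penalty grows when
    high-Fisher (sensitive) parameters move away from their anchors."""
    penalty = np.sum(fisher_diag * (params - anchor_params) ** 2)
    return task_loss + 0.5 * lam * penalty

params = np.array([0.8, -0.2, 1.1])
anchor = np.array([1.0, 0.0, 1.0])
fisher = np.array([5.0, 0.1, 2.0])       # hypothetical sensitivities
print(anchored_loss(task_loss=0.3, params=params,
                    anchor_params=anchor, fisher_diag=fisher))
```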
Dispersive Microwave Sensing for Quantum Computing with Floating Electrons
This dissertation develops microwave-based techniques to read out quantum states of floating electrons used as qubits, specifically electrons trapped on liquid helium and solid neon surfaces at very low temperatures. The work also includes development of low-noise microwave sources needed for precise qubit measurements.
Key Contributions
- Development of resonator-based readout techniques for floating electron qubits on liquid helium and solid neon
- Creation of cryogenic microwave sources for low-noise qubit state measurement
View Full Abstract
In this dissertation, resonator-based readout techniques were developed for floating electrons as qubits on cryogenic substrates, using two platforms: electrons on liquid helium and electrons on solid neon. In addition, a cryogenic microwave source was developed to enable low-noise measurement for qubit readout.
A Deterministic Framework for Neural Network Quantum States in Quantum Chemistry
This paper develops a new deterministic method for optimizing neural network quantum states used in quantum chemistry calculations, eliminating the randomness and computational inefficiencies of traditional Monte Carlo approaches. The method uses neural networks to represent quantum states and applies deterministic optimization with perturbation theory corrections to accurately simulate molecular systems like water, nitrogen, and chromium dimers.
Key Contributions
- Deterministic optimization framework that eliminates Monte Carlo sampling noise in neural network quantum states
- Hybrid CPU-GPU implementation with sub-linear scaling for computational efficiency in large Hilbert spaces
View Full Abstract
Stochastic optimization of Neural Network Quantum States (NQS) in discrete Fock spaces is limited by sampling variance and slow mixing. We present a deterministic framework that optimizes a neural backflow ansatz within dynamically adaptive configuration subspaces, corrected by second-order perturbation theory. This approach eliminates Monte Carlo noise and, through a hybrid CPU-GPU implementation, exhibits sub-linear scaling with respect to subspace size. Benchmarks on bond dissociation in H2O and N2, and the strongly correlated chromium dimer Cr2, validate the method's accuracy and stability in large Hilbert spaces.
A Quantum-Memory-Free Quantum Secure Direct Communication Protocol Based on Privacy Amplification of Coded Sequences
This paper develops a new quantum communication protocol that allows secure direct transmission of messages without requiring quantum memory storage. The protocol uses mathematical techniques called privacy amplification and universal hashing to extract security from coded sequences, offering an alternative to traditional quantum key distribution methods.
Key Contributions
- A quantum-memory-free quantum secure direct communication protocol using universal hashing without wiretap coding
- Privacy amplification theorems for extracting secrecy from coded classical sequences against quantum side-information
View Full Abstract
We develop an information-theoretic analysis of Quantum-Memory-Free (QMF) Quantum Secure Direct Communication (QSDC) under collective attacks as an alternative to the conventional Quantum Key Distribution (QKD) protocol with one-time pads. Our main contributions are: 1) a QMF-QSDC protocol that only relies on universal hashing of coded sequences without wiretap coding; 2) a set of privacy amplification theorems for extracting secrecy from coded classical sequences against quantum side-information. These tools open the way to the design of robust QMF-QSDC protocols.
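To illustrate the universal-hashing ingredient, here is a toy privacy-amplification step using a random binary Toeplitz hash, a standard 2-universal family for this purpose; the block lengths and the `toeplitz_hash` helper are assumptions for illustration, not taken from the protocol.

```python
import numpy as np

# Toy privacy amplification with a random binary Toeplitz matrix (standard
# background; the paper's exact hashing construction and security parameters
# are not specified here).
rng = np.random.default_rng(0)

def toeplitz_hash(bits, out_len, seed_bits):
    """Compress `bits` (length n) to `out_len` bits using the Toeplitz matrix
    defined by `seed_bits` (length n + out_len - 1), arithmetic mod 2."""
    n = len(bits)
    T = np.empty((out_len, n), dtype=np.uint8)
    for i in range(out_len):
        for j in range(n):
            T[i, j] = seed_bits[i - j + n - 1]   # constant along diagonals
    return (T @ bits) % 2

raw = rng.integers(0, 2, size=64, dtype=np.uint8)          # reconciled bits
seed = rng.integers(0, 2, size=64 + 16 - 1, dtype=np.uint8)  # public seed
key = toeplitz_hash(raw, 16, seed)
print(key)
```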
Localization and scattering of a photon in quasiperiodic qubit arrays
This paper studies how single photons behave when traveling through waveguides coupled to qubit arrays with quasiperiodic (ordered but non-repeating) spacing. The researchers find that, unlike fully random systems where all states are trapped, quasiperiodic systems allow some photons to pass through while trapping others, creating controllable transmission and reflection properties.
Key Contributions
- Analytical proof that only a fraction (3-√5)/2 of states become localized in quasiperiodic systems, unlike fully disordered systems where all states localize
- Discovery of mobility edges in transmission spectra that can control photon transmission and reflection by tuning quasiperiodic strength
View Full Abstract
We study the localization and scattering of a single photon in a waveguide coupled to qubit arrays with quasiperiodic spacings. As the quasiperiodic strength increases, localized subradiant states with extremely long lifetime appear around the resonant frequency and form a continuum band. In stark contrast to the fully disordered waveguide QED where all states are localized, we analytically find that the fraction of localized states is up to $(3-\sqrt{5})/2$ when the modulation frequency is $(1+\sqrt{5})/2$. The localized and delocalized states can be related to excitations in flat and curved inverse energy bands under the approximation of large-period modulation. When the quasiperiodic strength is weak, an extended subradiant state can support the transmission of a photon. However, as the quasiperiodic strength increases, localized subradiant states can completely block the transmission of a single photon in resonance with the subradiant states, and enhance the overall reflection. At a fixed quasiperiodic strength, we also find a mobility edge in the transmission spectrum, below and above which the transmission is turned on or off as the system size increases. Our work gives new insights into localization in non-Hermitian systems.
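A rough numerical sketch of this kind of setting is given below, using the standard single-excitation non-Hermitian Hamiltonian of waveguide QED rather than the paper's specific model; the quasiperiodic position modulation, its strength, and the value of k0*d are illustrative assumptions.

```python
import numpy as np

# Sketch under standard waveguide-QED assumptions (not the paper's model):
# single-excitation effective non-Hermitian Hamiltonian H_jk = -i(Gamma/2)
# exp(i*k0*|x_j - x_k|) for N qubits with quasiperiodically modulated positions.
# Subradiant (long-lived) modes show up as eigenvalues with small |Im E|.
N, Gamma, k0d = 60, 1.0, 2 * np.pi * 0.25   # k0*d chosen arbitrarily
beta = (1 + np.sqrt(5)) / 2                  # golden-ratio modulation frequency
W = 0.3                                      # assumed modulation strength

n = np.arange(N)
x = n + W * np.cos(2 * np.pi * beta * n)     # positions in units of the spacing d
H = -1j * (Gamma / 2) * np.exp(1j * k0d * np.abs(x[:, None] - x[None, :]))

rates = -2 * np.linalg.eigvals(H).imag       # collective decay rates (>= 0)
print("fraction of strongly subradiant modes (rate < Gamma/100):",
      np.mean(rates < Gamma / 100))
# Localization and subradiance are distinct diagnostics; the analytic localized
# fraction from the abstract is printed only for reference.
print("analytic localized fraction (3 - sqrt(5))/2 =", (3 - np.sqrt(5)) / 2)
```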
Sampling methods to describe superradiance in large ensembles of quantum emitters
This paper develops numerical methods to study superradiance - a quantum effect where many atoms emit light together more efficiently than individually - in large groups of quantum emitters. The researchers created computational tools to calculate photon statistics that would otherwise be impossible to compute due to the exponential complexity of large quantum systems.
Key Contributions
- Development of two approximate sampling methods with upper and lower bounds for calculating g^(2)(t,0) in large emitter ensembles
- Introduction of offset corrections that significantly improve prediction accuracy across different system sizes
View Full Abstract
Superradiance is a quantum phenomenon in which coherence between emitters results in enhanced and directional radiative emission. Many quantum optical phenomena can be characterized by the two-time quantum correlation function $g^{(2)}(t,\tau)$, which describes the photon statistics of emitted radiation. However, the critical task of determining $g^{(2)}(t,\tau)$ becomes intractable for large emitter ensembles due to the exponential scaling of the Hilbert space dimension with the number of emitters. Here, we analyse and benchmark two approximate numerical sampling methods applicable to emitter arrays embedded within electromagnetic environments, which generally provide upper and lower bounds for $g^{(2)}(t,0)$. We also introduce corrections to these methods (termed offset corrections) that significantly improve the quality of the predictions. The optimal choice of method depends on the total number of emitters, such that taken together, the two approaches provide accurate descriptions across a broad range of important regimes. This work therefore provides new theoretical tools for studying the well-known yet complex phenomenon of superradiance in large ensembles of quantum emitters.
3D imaging of the biphoton spatiotemporal wave packet
This paper develops a new experimental method called '3D imaging of photonic wave packets' to comprehensively measure and visualize the complex spatiotemporal structure of quantum light fields, specifically biphotons created through spontaneous parametric down-conversion. The technique reveals previously unobserved correlations between different properties of quantum light, including spatial, spectral, and temporal characteristics.
Key Contributions
- Development of self-referenced, high-efficiency, all-optical method for characterizing quantum light spatiotemporal structure
- First experimental observation of comprehensive spatial-spatial, spectral-spectral, and spatiotemporal correlations in biphoton wave packets
- Advancement of measurement techniques for quantum light fields that could enable new applications in quantum information processing
View Full Abstract
Photons are among the most important carriers of quantum information owing to their rich degrees of freedom (DoFs), including various spatiotemporal structures. The ability to characterize these DoFs, as well as the hidden correlations among them, directly determines whether they can be exploited for quantum tasks. While various methods have been developed for measuring the spatiotemporal structure of classical light fields, owing to the technical challenges posed by weak photon flux, there have so far been no reports of observing such structures in their quantum counterparts, except for a few studies limited to correlations within individual DoFs. Here, we propose and experimentally demonstrate a self-referenced, high-efficiency, and all-optical method, termed 3D imaging of photonic wave packets, for comprehensive characterization of the spatiotemporal structure of a quantum light field, i.e., the biphoton spatiotemporal wave packet. Benefiting from this developed method, we successfully observe the spatial-spatial, spectral-spectral, and spatiotemporal correlations of biphotons generated via spontaneous parametric down-conversion, revealing rich local and nonlocal spatiotemporal structure in quantum light fields. This method will further advance the understanding of the dynamics in nonlinear quantum optics and expand the potential of photons for applications in quantum communication and quantum computing.
Reflecting boundary induced modulation of tripartite coherence harvesting
This paper studies how three quantum detectors can extract quantum coherence and entanglement from vacuum fields near a reflecting boundary. The research finds that boundaries degrade coherence extraction but can enhance entanglement harvesting, with coherence being more robust and accessible than entanglement across different spatial configurations.
Key Contributions
- Demonstrated hierarchical distinction between quantum coherence and entanglement as operational resources in structured vacuum fields
- Showed that reflecting boundaries can simultaneously degrade coherence harvesting while enhancing entanglement extraction
View Full Abstract
We study the extraction of quantum coherence by three static Unruh-DeWitt (UDW) detectors that interact locally with a massless scalar vacuum field in the vicinity of an infinite perfectly reflecting boundary. Depending on the setup, the detectors are positioned either parallel or orthogonal to the boundary, with their energy gaps chosen to satisfy the hierarchy $\Omega_C \geq \Omega_B \geq \Omega_A$. Our analysis reveals that decreasing the detector-boundary separation leads to a monotonic degradation of quantum coherence, whereas the same boundary effect can simultaneously preserve and even amplify the harvested quantum entanglement. Moreover, when the detectors possess distinct energy gaps, coherence extraction is further inhibited; strikingly, such non-identical configurations substantially enhance the efficiency of entanglement harvesting and markedly extend the range of detector separations over which non-negligible entanglement can be generated. Nevertheless, the harvesting of nonlocal quantum coherence is achievable over a significantly broader range of detector separations than that of quantum entanglement. Despite exhibiting similar overall behavior, orthogonal detector configurations outperform parallel ones in coherence harvesting, highlighting the quantitative influence of detector geometry. Overall, our study reveals a hierarchical distinction between quantum coherence and entanglement as operational resources in structured vacuum fields: quantum coherence is not only more readily accessible across space but also more robust than entanglement, whereas entanglement exhibits richer features and can be selectively activated and enhanced through boundary effects and detector non-uniformity.
Community detection in network using Szegedy quantum walk
This paper develops a new algorithm for finding communities (groups of well-connected nodes) in networks by using Szegedy quantum walks instead of classical random walks. The authors test their quantum-based community detection method on various network types including social networks like the Karate club graph and dolphin social networks.
Key Contributions
- Development of a community detection algorithm based on Szegedy quantum walks
- Application and testing of the quantum walk approach on various real-world network datasets including social networks
View Full Abstract
In a network, the vertices with similar characteristics construct communities. The vertices in a community are well-connected. Detecting the communities in a network is a challenging and important problem in the theory of complex networks. One approach to solve this problem uses the classical random walks on the graphs. In quantum computing, quantum walks are the quantum mechanical counterparts of classical random walks. In this article, we employ a variant of Szegedy's quantum walk to develop a procedure for discovering the communities in networks. The limiting probability distribution of quantum walks assists us in determining the inclusion of a vertex in a community. We apply our procedure of community detection on a number of graphs and social networks, such as the relaxed caveman graph, $l$-partition graph, Karate club graph, dolphin's social network, etc.
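A minimal runnable sketch of a Szegedy walk on the Karate club graph is given below; the paper uses a variant of Szegedy's walk and its own community-assignment rule based on the limiting distribution, neither of which is reproduced here, so this only illustrates the walk construction and a time-averaged vertex distribution.

```python
import numpy as np
import networkx as nx

# Build one step of a Szegedy quantum walk from a column-stochastic transition
# matrix and accumulate a time-averaged vertex probability distribution.
G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
n = A.shape[0]
P = A / A.sum(axis=0, keepdims=True)        # column-stochastic transition matrix

# |psi_j> = |j> (x) sum_k sqrt(P[k, j]) |k>, stacked as columns of Pi (n^2 x n)
Pi = np.zeros((n * n, n))
for j in range(n):
    Pi[:, j] = np.kron(np.eye(n)[:, j], np.sqrt(P[:, j]))

S = np.zeros((n * n, n * n))                # swap of the two registers
for j in range(n):
    for k in range(n):
        S[k * n + j, j * n + k] = 1.0

R = 2 * Pi @ Pi.T - np.eye(n * n)           # reflection about span{|psi_j>}
W = S @ R                                   # one Szegedy walk step (unitary)

state = Pi[:, 0]                            # start from vertex 0's superposition
avg = np.zeros(n)
T = 200
for _ in range(T):
    state = W @ state
    probs = np.abs(state) ** 2
    avg += probs.reshape(n, n).sum(axis=1)  # marginal over the second register
avg /= T
print(np.round(avg, 3))                     # time-averaged vertex distribution
```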
On the quantum nature of strong gravity
This paper analyzes thought experiments involving quantum effects in gravitational fields, showing that quantum fluctuations prevent superluminal signaling when using gravitational waves as detectors. The authors conclude that consistency between general relativity and quantum mechanics requires quantization of gravitational radiation, even from strong gravity sources like rotating black holes.
Key Contributions
- Reformulated gedankenexperiment using gravitational waves as detectors for Newtonian tidal fields
- Demonstrated that quantum fluctuations in gravitational waves prevent superluminal signaling and maintain consistency with quantum mechanics
View Full Abstract
Belenchia et al. [Phys. Rev. D 98, 126009 (2018)] have analyzed a gedankenexperiment where two observers, Alice and Bob, attempt to communicate via superluminal signals using a superposition of massive particles dressed by Newtonian fields and a test particle as field detector. Quantum fluctuations in the particle motion and in the field prevent signaling or violations of quantum mechanics in this setup. We reformulate this thought experiment by considering gravitational waves emitted by an extended quadrupolar object as a detector for Newtonian tidal fields. We find that quantum fluctuations in the gravitational waves prevent signaling. In the Newtonian limit, rotating black holes behave as extended quadrupolar objects, as a consequence of the strong equivalence principle. It follows that consistency of the Newtonian limit of general relativity with quantum mechanics requires the quantization of gravitational radiation, even when the waves originate in strong gravity sources.
Quantum teleportation in expanding FRW universe
This paper studies how quantum teleportation works in an expanding universe, examining how cosmic expansion and spacetime curvature affect the ability to transfer quantum information between distant observers. The researchers use field theory and cosmological models to show that the expansion of the universe can degrade quantum teleportation efficiency compared to flat spacetime.
Key Contributions
- Analysis of quantum teleportation fidelity in expanding FRW spacetime using Bogoliubov transformations
- Demonstration that cosmological expansion and spacetime curvature affect quantum information transfer efficiency
View Full Abstract
We investigate the process of quantum teleportation in an expanding universe modeled by Friedmann-Robertson-Walker spacetime, focusing on two cosmologically relevant scenarios: a power-law expansion and the de Sitter universe. Adopting a field-theoretical approach, we analyze the quantum correlations between two comoving observers who share an entangled mode of a scalar field. Using the Bogoliubov transformation, we compute the teleportation fidelity and examine its dependence on the expansion rate, initial entanglement, and the mode frequency. Our findings indicate that spacetime curvature and the underlying cosmological background significantly affect the efficiency of quantum teleportation, particularly through mode mixing and vacuum structure. We also compare our results with the flat Minkowski case to highlight the role of cosmic expansion in degrading or preserving quantum information.
Symplectic Optimization on Gaussian States
This paper introduces a new computational method for finding the ground states of quantum systems described by Gaussian states, using a mathematical framework that automatically satisfies physical constraints. The approach makes optimization easier and allows for efficient reuse of solutions when studying similar quantum systems.
Key Contributions
- Novel symplectic optimization framework that enforces physical constraints exactly through unit-triangular factorizations
- Unconstrained variational formulation enabling efficient warm-starting and solution reuse across related Hamiltonians
View Full Abstract
Computing Gaussian ground states via variational optimization is challenging because the covariance matrices must satisfy the uncertainty principle, rendering constrained or Riemannian optimization costly, delicate, and thus difficult to scale, particularly in large and inhomogeneous systems. We introduce a symplectic optimization framework that addresses this challenge by parameterizing covariance matrices directly as positive-definite symplectic matrices using unit-triangular factorizations. This approach enforces all physical constraints exactly, yielding a globally unconstrained variational formulation of the bosonic ground-state problem. The unconstrained structure also naturally supports solution reuse across nearby Hamiltonians: warm-starting from previously optimized covariance matrices substantially reduces the number of optimization steps required for convergence in families of related configurations, as encountered in crystal lattices, molecular systems, and fluids. We demonstrate the method on weakly dipole-coupled lattices, recovering ground-state energies, covariance matrices, and spectral gaps accurately. The framework further provides a foundation for large-scale approximate treatments of weakly non-quadratic interactions and offers potential scaling advantages through tensor-network enhancements.
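To illustrate the constraint that is enforced by construction, the sketch below builds a positive-definite symplectic matrix from a block unit-triangular factorization (an assumed, illustrative parameterization that may differ from the paper's) and numerically checks both the symplectic condition and the uncertainty relation.

```python
import numpy as np

# Sketch of the constraint the framework enforces by construction: a pure-state
# Gaussian covariance matrix built from a positive-definite symplectic matrix
# automatically satisfies sigma + (i/2) Omega >= 0 (hbar = 1). The specific
# block factorization below is an illustrative choice.
n = 3                                        # number of bosonic modes (assumed)
rng = np.random.default_rng(1)

Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

C = rng.normal(size=(n, n)); C = (C + C.T) / 2     # symmetric block
d = np.exp(rng.normal(size=n))                     # positive diagonal (squeezing)
L = np.block([[np.eye(n), np.zeros((n, n))], [C, np.eye(n)]])  # block unit-triangular
D = np.diag(np.concatenate([d, 1 / d]))

M = L @ D @ L.T                          # positive-definite symplectic matrix
sigma = 0.5 * M                          # pure-state Gaussian covariance

print("symplectic:", np.allclose(M.T @ Omega @ M, Omega))
print("uncertainty ok:",
      np.all(np.linalg.eigvalsh(sigma + 0.5j * Omega) > -1e-10))
```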
Semiclassical effective description of a quantum particle on a sphere with non-central potential
This paper develops a semiclassical framework for studying quantum particles moving on curved surfaces, specifically spheres, by incorporating quantum fluctuation effects into classical trajectory descriptions. The researchers show that quantum corrections significantly alter particle dynamics, causing measurable changes in motion patterns and trajectory distributions compared to purely classical predictions.
Key Contributions
- Development of momentous quantum mechanics formalism for curved surface dynamics incorporating quantum back-reaction effects
- Demonstration that quantum corrections induce 8-12% phase shifts in free particle motion and 40% faster timescales for non-central potential dynamics on spheres
View Full Abstract
We develop a semiclassical framework for studying quantum particles constrained to curved surfaces using the momentous quantum mechanics formalism, which extends classical phase-space to include quantum fluctuation variables (moments). In a spherical geometry, we derive quantum-corrected Hamiltonians and trajectories that incorporate quantum back-reaction effects absent in classical descriptions. For the free particle, quantum fluctuations induce measurable phase shifts in azimuthal precession of approximately 8-12%, with uncertainty growth rates proportional to initial moment correlations. When a non-central Makarov potential is introduced, quantum corrections dramatically amplify its asymmetry. For strong coupling ($\gamma = -1.9$), the quantum-corrected force drives trajectories preferentially toward the southern hemisphere on timescales 40% shorter than classical predictions, with trajectory densities exhibiting up to 3-fold enhancement in the preferred region. Throughout evolution, the solutions rigorously satisfy Heisenberg uncertainty relations, validating the truncation scheme. These results demonstrate that quantum effects fundamentally alter semiclassical dynamics in curved constrained systems, with direct implications for charge transport in carbon nanostructures, exciton dynamics in curved quantum wells, and reaction pathways in cyclic molecules.
Neural Quantum States in Mixed Precision
This paper investigates using mixed-precision arithmetic (combining different floating-point precisions) in neural network-based Variational Monte Carlo simulations of quantum many-body systems. The authors show that significant portions of these quantum simulations can use lower precision arithmetic without losing accuracy, making the computations more efficient and scalable.
Key Contributions
- Derived analytical bounds on errors introduced by reduced precision in Metropolis-Hastings MCMC sampling
- Demonstrated that quantum state sampling in VMC can use half-precision arithmetic without accuracy loss
- Provided theoretical framework for assessing mixed-precision arithmetic in ML approaches using MCMC sampling
View Full Abstract
Scientific computing has long relied on double precision (64-bit floating point) arithmetic to guarantee accuracy in simulations of real-world phenomena. However, the growing availability of hardware accelerators such as Graphics Processing Units (GPUs) has made low-precision formats attractive due to their superior performance, reduced memory footprint, and improved energy efficiency. In this work, we investigate the role of mixed-precision arithmetic in neural-network based Variational Monte Carlo (VMC), a widely used method for solving computationally otherwise intractable quantum many-body systems. We first derive general analytical bounds on the error introduced by reduced precision on Metropolis-Hastings MCMC, and then empirically validate these bounds on the use-case of VMC. We demonstrate that significant portions of the algorithm, in particular, sampling the quantum state, can be executed in half precision without loss of accuracy. More broadly, this work provides a theoretical framework to assess the applicability of mixed-precision arithmetic in machine-learning approaches that rely on MCMC sampling. In the context of VMC, we additionally demonstrate the practical effectiveness of mixed-precision strategies, enabling more scalable and energy-efficient simulations of quantum many-body systems.
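A toy sketch of the precision question studied here: a Metropolis-Hastings chain on a 1D Gaussian target with the log-density evaluated in float16 versus float64. This is illustrative only; the paper's analytical bounds and the actual VMC setting are not reproduced.

```python
import numpy as np

# Compare Metropolis-Hastings sampling of a standard normal target when the
# log-density ratio in the accept step is evaluated in reduced precision.
def mh_chain(n_steps, dtype, seed=0):
    rng = np.random.default_rng(seed)
    x = dtype(0.0)
    samples = []
    for _ in range(n_steps):
        prop = dtype(x + rng.normal(scale=1.0))
        # log pi(x) = -x^2 / 2, evaluated in the requested precision
        log_ratio = dtype(-0.5) * (prop * prop - x * x)
        if np.log(rng.random()) < float(log_ratio):
            x = prop
        samples.append(float(x))
    return np.array(samples)

for dtype in (np.float64, np.float16):
    s = mh_chain(20000, dtype)
    print(dtype.__name__, "mean:", round(s.mean(), 3), "var:", round(s.var(), 3))
```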
Millisecond spin coherence of electrons in semiconducting perovskites revealed by spin mode locking
This paper demonstrates exceptionally long electron spin coherence times (up to 1 millisecond) in perovskite semiconductor crystals using a technique called spin mode locking with periodic laser pulses. The researchers measured both transverse spin coherence (T2) and longitudinal spin relaxation (T1) times in the millisecond range, suggesting perovskites could be promising materials for quantum devices with optical control.
Key Contributions
- Demonstration of millisecond-scale electron spin coherence times in bulk perovskite semiconductors
- Development of spin mode locking technique to measure long spin coherence in inhomogeneous ensembles
- Identification of perovskites as promising materials for optically-controlled quantum devices
View Full Abstract
Long spin coherence times of carriers are essential for implementing quantum technologies using semiconductor devices for which, however, a possible obstacle is spin relaxation. For the spin dynamics, decisive features are the band structure, crystal symmetry, and quantum confinement. Perovskite semiconductors have recently come into the focus of studies of their spin states, motivated by efficient optical access and potentially long-lived coherence. Here, we report an electron spin coherence time $T_2$ of the order of 1 ms, measured for a bulk FA$_{0.95}$Cs$_{0.05}$PbI$_3$ lead halide perovskite crystal. Using periodic laser pulses, we synchronize the electron spin Larmor precession about an external magnetic field in an inhomogeneous ensemble, the effect known as spin mode locking. It appears as a decay of the optically created ensemble spin polarization within the dephasing time $T_2^*$ of up to 20 ns and its revival during the spin coherence time $T_2$ reaching the millisecond range. This exceptionally long spin coherence time in a bulk crystal is complemented by millisecond-long longitudinal spin relaxation times $T_1$ for electrons and holes, measured by optically-detected magnetic resonance. These long-lasting spin dynamics highlight perovskites as a promising platform for quantum devices with all-optical control.
A Zero-Range Model for the Efimov Effect in the Born-Oppenheimer Approximation
This paper analyzes a theoretical quantum system of three particles (two identical bosons and one lighter particle) with zero-range interactions, demonstrating that under the Born-Oppenheimer approximation, the system exhibits the Efimov effect - an infinite series of bound states that follow a universal geometric scaling law.
Key Contributions
- Generalization of previous Efimov effect results to a specific three-particle system under Born-Oppenheimer approximation
- Demonstration that the universal geometric scaling law holds for this zero-range interaction model
View Full Abstract
In this note we discuss the Efimov effect emerging in a three-particle quantum system with zero-range interactions. In particular, we consider two non-interacting identical bosons plus a different lighter particle such that the interaction between a boson and the light particle is resonant. We also assume the validity of the Born-Oppenheimer approximation. Under these conditions, we show that the three-particle system exhibits infinitely many negative eigenvalues which accumulate at zero and satisfy the universal geometrical law characterising the Efimov effect. The result we find is a generalisation of previous results recently obtained in [13, 24].
Spectrum-generating algebra and intertwiners of the resonant Pais-Uhlenbeck oscillator
This paper studies a quantum mechanical system called the Pais-Uhlenbeck oscillator at a special resonant frequency where conventional quantum treatment breaks down. The authors show that different but classically equivalent ways of describing the system lead to completely different quantum theories, demonstrating fundamental ambiguities in how classical systems should be quantized.
Key Contributions
- Discovery of hidden su(2) Lie algebra structure that emerges only at resonance in the Pais-Uhlenbeck oscillator
- Demonstration that classically equivalent Hamiltonians can yield inequivalent quantum theories, highlighting fundamental quantization ambiguities
View Full Abstract
We study the quantum Pais-Uhlenbeck oscillator at the resonant (equal-frequency) point, where the dynamics becomes non-diagonalisable and the conventional Fock-space construction collapses. At the classical level, the degenerate system admits more than one Hamiltonian formulation generating the same equations of motion, leading to a nontrivial quantisation ambiguity. Working first in the ghostly two-dimensional Hamiltonian formulation, we construct differential intertwiners that generate a spectrum-generating algebra acting on the generalised eigenspaces of the Hamiltonian. This algebra organises the generalised eigenvectors into finite Jordan chains and closes into a hidden $su(2)$ Lie algebra that exists only at resonance. We then show that quantising a classically equivalent Hamiltonian yields a radically different quantum theory, with a fully diagonalisable spectrum and genuine degeneracies. Our results demonstrate that the resonant Pais-Uhlenbeck oscillator provides a concrete example in which classically equivalent Hamiltonians define inequivalent quantum theories.
Entangled photon pair excitation and time-frequency filtered multidimensional photon correlation spectroscopy as a probe for dissipative exciton kinetics
This paper proposes using entangled photon pairs to selectively excite molecular aggregates and monitor their dynamics through advanced photon correlation spectroscopy. The technique allows researchers to probe specific energy pathways in complex molecular systems by filtering photon emissions in both time and frequency domains.
Key Contributions
- Development of entangled photon-enhanced narrowband excitation protocol for two-exciton states
- Introduction of time-frequency-filtered multidimensional photon correlation spectroscopy for pathway-selective monitoring
View Full Abstract
In molecular aggregates, multiple delocalized exciton states interact with phonons, making the state-resolved spectroscopic monitoring of dynamics challenging. We propose a protocol that combines photon-entanglement-enhanced narrowband excitation of two-exciton states with time-frequency-filtered two-photon coincidence counting. It can alleviate bottlenecks associated with probing exciton dynamics spread across multiple spectral and temporal windows. We demonstrate that non-classical correlations of entangled photon pairs can be used to prepare narrowband two-exciton population distributions, circumventing transport in mediating states. The distributions thus created can be monitored using time-frequency-filtered photon coincidence counting, and the pathways contributing to photon emission events can be classified by tuning filtering parameters. Numerical simulations for a light-harvesting aggregate highlight the ability of this protocol to achieve selectivity by suppressing or amplifying specific pathways. Combining entangled photonic sources and multidimensional photon counting allows for promising applications in spectroscopy and sensing.
Co-Designed Adaptive Quantum State Preparation Protocols
This paper develops Co-ADAPT-VQE, an improved quantum algorithm that creates more efficient quantum circuits by considering hardware limitations like connectivity constraints and gate errors when preparing quantum states. The method dramatically reduces the number of two-qubit gates needed, achieving up to 97% reduction in CNOT gates for certain systems.
Key Contributions
- Development of Co-ADAPT-VQE algorithm that incorporates hardware constraints into quantum circuit design
- Demonstration of up to 97% reduction in CNOT gate count for quantum state preparation on NISQ devices
View Full Abstract
We propose a co-designed variant of ADAPT-VQE (Co-ADAPT-VQE) where the quantum hardware is taken into account in the construction of the ansatz. This framework can be readily used to optimize state preparation circuits for any device, addressing shortcomings such as limited connectivity, short coherence times, and variable gate errors. We exemplify the impact of Co-ADAPT-VQE by creating state preparation circuits for devices with linear nearest-neighbor (LNN) connectivity. We show a reduction of the CNOT count of the final circuits by up to 97% for 12-14 qubit systems, with the impact being greater for larger and more strongly correlated systems. Surprisingly, the circuits created by Co-ADAPT-VQE provide an over 70% CNOT count reduction with respect to the original ADAPT-VQE in all-to-all connectivity, despite being restricted to LNN qubit interactions.
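As a rough illustration of the co-design idea (a minimal sketch of one way hardware constraints can enter ansatz construction, not the authors' algorithm or code), the snippet below filters a two-qubit operator pool down to pairs that are adjacent on a linear nearest-neighbor device, so that every operator an adaptive loop could pick is natively implementable without SWAPs:

```python
# Toy pool restriction for an ADAPT-style ansatz on LNN hardware.
# Pool entries are just qubit pairs standing in for pair-excitation operators;
# everything here is illustrative, not the paper's implementation.
n_qubits = 6
lnn_edges = {(i, i + 1) for i in range(n_qubits - 1)}

full_pool = [(i, j) for i in range(n_qubits) for j in range(i + 1, n_qubits)]
hardware_pool = [pair for pair in full_pool if pair in lnn_edges]

print(len(full_pool), "operators in the unrestricted pool")     # 15
print(len(hardware_pool), "operators after the LNN filter")     # 5
```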
Quantum Squeezing Enhanced Photothermal Microscopy
This paper introduces a new microscopy technique that uses quantum-squeezed light to enhance the sensitivity of photothermal imaging, achieving better than classical limits for detecting molecular absorption in biological samples and nanomaterials.
Key Contributions
- Developed squeezing-enhanced photothermal (SEPT) microscopy achieving 3.5 dB noise suppression beyond standard quantum limit
- Demonstrated 2.5-fold increase in imaging throughput with quantum-enhanced sensitivity for label-free biological imaging
- Showed detection of previously unobservable subcellular structures like cytochrome c using twin-beam quantum correlations
View Full Abstract
Label-free optical microscopy through absorption or scattering spectroscopy provides fundamental insights across biology and materials science, yet its sensitivity remains fundamentally limited by photon shot noise. While recent demonstrations of quantum nonlinear microscopy show sub-shot-noise sensitivity, they are intrinsically limited by the availability of high peak-power squeezed light sources. Here, we introduce squeezing-enhanced photothermal (SEPT) microscopy, a quantum imaging technique that leverages twin-beam quantum correlations to detect absorption-induced signals with unprecedented sensitivity. SEPT achieves 3.5 dB noise suppression beyond the standard quantum limit, enabling a 2.5-fold increase in imaging throughput or a 31% reduction in pump power, while providing unmatched versatility through the intrinsic compatibility between continuous-wave squeezing and photothermal modulation. We showcase SEPT applications by providing high-precision characterization of nanoparticles and revealing subcellular structures, such as cytochrome c, that remain undetectable under shot-noise-limited imaging. By combining label-free contrast, quantum-enhanced sensitivity, and compatibility with existing microscopy platforms, SEPT establishes a new paradigm for molecular absorption imaging with far-reaching implications in cellular biology, nanoscience, and materials characterization.
Rydberg Receivers for Space Applications
This paper reviews Rydberg-atom sensors that convert radio and microwave signals into optical signals for potential space-based applications. The authors evaluate five different sensor architectures and assess their suitability for space missions involving radiometry, radar, and terahertz sensing.
Key Contributions
- Comprehensive comparison of five Rydberg-atom sensor architectures for space applications
- Assessment of technical limitations and development roadmap for space-qualified quantum sensors
- Identification of promising applications in radiometry, radar, and terahertz sensing for space missions
View Full Abstract
Rydberg-atom sensors convert radiofrequency, microwave and terahertz fields into optical signals with SI-traceable calibration, high sensitivity, and broad tunability. This review assesses their potential for space applications by comparing five general architectures (Autler-Townes, AC-Stark, superheterodyne, radiofrequency-to-optical conversion, and fluorescence) against space application needs. We identify promising roles in radiometry, radar, terahertz sensing, and in-orbit calibration, and outline key limitations, including shot noise, sparse terahertz transitions, and currently large Size, Weight, Power and Cost. A staged roadmap highlights which uncertainties should be resolved first and how research organisations, industry and space agencies could take the lead for the different aspects.
Foundations of Quantum Optics for Quantum Information: Crash Course on Nonclassical States and Quantum Correlations
This paper presents educational lecture notes that introduce the fundamental concepts of quantum optics, focusing on nonclassical states of light and quantum correlations. It covers theoretical foundations from field quantization to squeezed states and Gaussian entanglement, complemented by practical computational examples using Python tools.
Key Contributions
- Unified educational framework connecting quantum optics foundations to quantum information applications
- Integration of theoretical concepts with practical computational tools using Strawberry Fields library
View Full Abstract
Nonclassical states of light and their correlations lie at the heart of quantum optics, serving as fundamental resources that underpin both the exploration of quantum phenomena and the realisation of quantum information protocols. These lecture notes provide an accessible yet rigorous introduction to the foundations of quantum optics, emphasising their relevance to quantum information science and technology. Starting from the quantisation of the electromagnetic field and the bosonic formalism of Fock space, the notes develop a unified framework for describing and analysing quantum states of light. Key families of states -- thermal, coherent, and squeezed -- are introduced as paradigmatic examples illustrating the transition from classical to nonclassical behaviour. The concepts of convexity, classicality, and quasiprobability representations are presented as complementary tools for characterising quantumness and defining operational notions such as P-nonclassicality. The discussion extends naturally to Gaussian states, composite systems, and continuous-variable entanglement, highlighting how nonclassicality serves as a resource for generating and quantifying quantum correlations. Theoretical developments are complemented by computational and experimental perspectives, including simulations of optical states using the Python library Strawberry Fields and the analysis of simulated data. Together, these notes aim to bridge the foundational concepts of quantum optics and modern quantum information, offering both conceptual insight and practical tools for students and researchers entering the field.
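For readers who want to try the computational side, here is a minimal example in the spirit of the notes (assuming Strawberry Fields is installed and using its Gaussian backend as documented; the squeezing parameter is an arbitrary choice): it prepares a single-mode squeezed vacuum and inspects its quadrature covariance.

```python
# Minimal squeezed-vacuum sketch with Strawberry Fields (Gaussian backend).
import strawberryfields as sf
from strawberryfields import ops

r = 0.5                              # squeezing parameter (arbitrary choice)
prog = sf.Program(1)
with prog.context as q:
    ops.Sgate(r) | q[0]              # single-mode squeezing

eng = sf.Engine("gaussian")
state = eng.run(prog).state

cov = state.cov()                    # 2x2 quadrature covariance matrix
print("Var(x):", cov[0, 0])          # squeezed below the vacuum value
print("Var(p):", cov[1, 1])          # correspondingly anti-squeezed
```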
Enhanced quantum parameter estimation based on the Hardy paradox
This paper explores how quantum statistical paradoxes like the Hardy paradox can be used to enhance quantum parameter estimation by developing a post-selected quantum metrology scheme. The researchers identify connections between anomalous weak values characteristic of the Hardy paradox and improved sensitivity in phase measurements.
Key Contributions
- Established connection between Hardy paradox and enhanced quantum parameter estimation
- Identified role of anomalous weak values in post-selected quantum metrology enhancement
View Full Abstract
Statistical paradoxes such as the Hardy paradox and the enhancement of phase estimation via post-selection both draw upon the same non-classical features of quantum statistics described by non-positive quasi-probabilities. In this paper, we introduce a post-selected quantum metrology scenario where the initial state, the dynamics associated with the phase shift, and the post-selection are all inspired by the Hardy paradox. Specifically, we identify an anomalous weak value that is characteristic of both the Hardy paradox and the potential enhancement of sensitivity by the post-selection. We find that the efficiency of the enhancement is reduced when the expectation value associated with the anomalous weak value is different from the inverse of this value. We conclude that the relation between enhanced phase estimation and the Hardy paradox requires a detailed understanding of the relation between weak values and expectation values.
A Hybrid Jump-Diffusion Model for Coherent Optical Control of Quantum Emitters in hBN
This paper develops a theoretical model to understand how quantum light sources in hexagonal boron nitride (hBN) lose their coherence as temperature increases, combining two types of noise effects to match experimental observations and predict optimal operating conditions.
Key Contributions
- Development of hybrid jump-diffusion model combining Ornstein-Uhlenbeck fluctuations with discrete frequency jumps to describe spectral dynamics in hBN quantum emitters
- Quantitative prediction of critical temperature crossover at 25.91K where coherent optical control becomes overdamped
View Full Abstract
Hexagonal boron nitride (hBN) has emerged as a promising two-dimensional host for stable single-photon emission owing to its wide bandgap, high photostability, and compatibility with nanophotonic integration. We present a simulation-based study of temperature-dependent spectral dynamics and optical coherence in a mechanically decoupled quantum emitter in hBN. Employing a hybrid stochastic framework that combines Ornstein--Uhlenbeck detuning fluctuations with temperature-dependent, Gaussian-distributed discrete frequency jumps, motivated by experimentally observed spectral diffusion and blinking, we reproduce the measured evolution of inhomogeneous linewidth broadening and the progressive degradation of photon coherence across the relevant cryogenic range (5-30 K). The model captures phonon-related spectral diffusion with a cubic temperature dependence and the onset of jump-like spectral instabilities at higher temperatures. By calibrating the hybrid diffusion and jump parameters to the experimentally measured full width at half maximum (FWHM) of the emission line and analyzing the second-order correlation function $g^{(2)}(τ)$ under resonant driving, we establish a unified phenomenological description that links stochastic detuning dynamics to the decay of optical coherence in a resonantly driven emitter. Analysis of $g^{(2)}(τ)$ under resonant driving reveals an additional dephasing rate $γ_{\mathrm{sd+j}}$ that rises monotonically with temperature and drive strength, leading to a predicted critical crossover to overdamped dynamics at $T_{\mathrm{crit}} \approx 25.91$~K. This hybrid framework provides a quantitative connection between accessible spectroscopic observables and the dominant noise mechanisms limiting coherent optical control in mechanically decoupled quantum emitters, exemplified in hBN and generalizable to similar emitters in other materials.
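To make the noise model concrete, here is a toy trajectory of the hybrid detuning process described above (an Ornstein-Uhlenbeck component plus Poisson-timed, Gaussian-sized frequency jumps); all parameter values are placeholders, not the calibrated ones from the paper.

```python
# Toy hybrid jump-diffusion detuning trajectory (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 10.0                    # time step and total time (arbitrary units)
kappa, sigma = 1.0, 0.5               # OU relaxation rate and noise strength
jump_rate, jump_sigma = 0.3, 2.0      # jump rate and jump-size spread

n_steps = int(T / dt)
delta = np.zeros(n_steps)             # spectral detuning trajectory
for i in range(1, n_steps):
    drift = -kappa * delta[i - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
    jump = rng.normal(0.0, jump_sigma) if rng.random() < jump_rate * dt else 0.0
    delta[i] = delta[i - 1] + drift + jump

print("detuning standard deviation:", round(float(delta.std()), 3))
```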
Time complexity of a monitored quantum search with resetting
This paper analyzes a quantum search algorithm that combines continuous monitoring of the target with periodic resetting of the quantum state, creating a non-unitary dynamics that could potentially improve search performance for finite-size databases compared to standard approaches.
Key Contributions
- Mathematical analysis of time complexity for monitored quantum search with resetting showing potential improvements over Grover's algorithm for finite databases
- Optimization framework for search parameters including hopping amplitude, detection rate, and resetting rate in non-Markovian quantum dynamics
View Full Abstract
Searching a database is a central task in computer science and is paradigmatic of transport and optimization problems in physics. For an unstructured search, Grover's algorithm predicts a quadratic speedup, with the search time $τ(N)=Θ(\sqrt{N})$ and $N$ the database size. Numerical studies suggest that the time complexity can change in the presence of feedback, injecting information during the search. Here, we determine the time complexity of the quantum analog of a randomized algorithm, which implements feedback in a simple form. The search is a continuous-time quantum walk on a complete graph, where the target is continuously monitored by a detector. Additionally, the quantum state is reset if the detector does not click within a specified time interval. This yields a non-unitary, non-Markovian dynamics. We optimize the search time as a function of the hopping amplitude, detection rate, and resetting rate, and identify the conditions under which the time complexity could outperform Grover's scaling. The overall search time does not violate Grover's optimality bound when the time budget of the physical implementation of the measurement is included. For databases of finite size, monitoring can ensure rapid convergence, providing a promising avenue for fault-tolerant quantum searches.
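A heavily simplified numerical sketch of the setup (an assumed model and assumed parameters, not necessarily the paper's exact protocol): the no-click evolution under continuous monitoring of the target at rate $\Gamma$ is modeled with a non-Hermitian term, and sharp resetting at interval $r$ enters through the standard restart formula $\langle T_r\rangle = \int_0^r S(t)\,dt \,/\, (1 - S(r))$, where $S(t)$ is the no-detection survival probability.

```python
# Monitored quantum walk on the complete graph K_N with sharp resetting (toy model).
import numpy as np
from scipy.linalg import expm

N, gamma, Gamma = 32, 1.0 / 32, 1.0          # size, hopping, detection rate (assumed)
A = np.ones((N, N)) - np.eye(N)              # complete-graph adjacency matrix
w = np.zeros(N); w[0] = 1.0                  # target site
H_eff = -gamma * A - 0.5j * Gamma * np.outer(w, w)   # no-click effective Hamiltonian

psi0 = np.ones(N, dtype=complex) / np.sqrt(N)        # uniform initial state
dt, r = 0.05, 20.0                            # time step and resetting interval
steps = int(r / dt)
U = expm(-1j * H_eff * dt)

S = np.empty(steps + 1)                       # survival (no-click) probability
psi = psi0.copy()
for k in range(steps + 1):
    S[k] = np.linalg.norm(psi) ** 2
    psi = U @ psi

integral = dt * (0.5 * S[0] + S[1:-1].sum() + 0.5 * S[-1])   # trapezoid rule
mean_T = integral / (1.0 - S[-1])
print("mean detection time with resetting at r =", r, ":", round(mean_T, 2))
```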
Detector's response to coherent Rindler and Minkowski photons
This paper studies how quantum detectors respond differently to coherent photons depending on whether the detector is stationary (static detector with Rindler photons) or accelerating (Rindler detector with Minkowski photons). The authors find that transition probabilities differ between these scenarios in both 1+1 and 3+1 spacetime dimensions, with some interesting convergence behavior in the classical limit for 1+1 dimensions.
Key Contributions
- Demonstrates that detector transition probabilities depend on the reference frame (static vs accelerating) when interacting with coherent photons
- Shows dimensional dependence of detector response, with 1+1D and 3+1D exhibiting different behaviors in the classical limit
View Full Abstract
We observe that the transition probability of a static two-level quantum detector interacting with a coherent Rindler photon differs from that of a Rindler detector interacting with a coherent Minkowski photon. The situation does not change in the detector's response in the classical limit of the photon state. We investigate this in $(1+1)$- and $(3+1)$-spacetime dimensions. Interestingly, the transition probabilities of the ``classical'' detector in the classical limit of the photon state in $(1+1)$ dimensions appear to be identical for these two scenarios when the frequencies of the photon mode and the detector are taken to be the same. However, the detector's transition probabilities we obtain in $(3+1)$ dimensions, calculated under the large-acceleration condition, do not show such a signature. The implications of these observations are discussed as well.
Gravitational wave detection via photon-graviton scattering and quantum interference
This paper proposes a new quantum method for detecting gravitational waves by treating the detection as photon-graviton scattering events. The approach uses quantum interference effects in entangled photon pairs, where gravitational waves cause phase shifts that disrupt the interference and can be measured through photon coincidence rates.
Key Contributions
- Development of quantum field-theoretic framework for gravitational wave detection via photon-graviton scattering
- Proposal of Hong-Ou-Mandel interference-based detection scheme using frequency-entangled photons
View Full Abstract
We present a fully quantum field-theoretic framework for gravitational wave (GW) detection in which the interaction is described as photon-graviton scattering. In this picture, the GW acts as a coherent background that induces inelastic energy exchanges with the electromagnetic field - analogous to the Stokes and anti-Stokes shifts in Raman spectroscopy. We propose a detection scheme sensitive to this microscopic mechanism based on Hong-Ou-Mandel interference. We show that the scattering-induced phase shifts render frequency-entangled photon pairs distinguishable, spoiling their destructive quantum interference. GW signal is thus encoded in the modulation of photon coincidence rates rather than classical field intensity, offering a complementary quantum probe of the gravitational universe that recovers the standard classical response in the macroscopic limit.
A Unified Symmetry Classification of Many-Body Localized Phases
This paper develops a systematic classification scheme for many-body localized (MBL) quantum phases based on their symmetry properties, extending the well-known Altland-Zirnbauer classification from single-particle systems to interacting many-body systems. The authors identify which types of symmetries are compatible with stable MBL phases and construct a complete classification table.
Key Contributions
- Development of a unified symmetry classification framework for many-body localized phases based on local integrals of motion
- Systematic identification of stable, fragile, and unstable MBL symmetry classes with specific lattice realizations
View Full Abstract
Anderson localization admits a complete symmetry classification given by the Altland-Zirnbauer (AZ) tenfold scheme, whereas an analogous framework for interacting many-body localization (MBL) has remained elusive. Here we develop a symmetry-based classification of static MBL phases formulated at the level of local integrals of motion (LIOMs). We show that a symmetry is compatible with stable MBL if and only if its action can be consistently represented within a quasi-local LIOM algebra, without enforcing extensive degeneracies or nonlocal operator mixing. This criterion sharply distinguishes symmetry classes: onsite Abelian symmetries are compatible with stable MBL and can host distinct symmetry-protected topological MBL phases, whereas continuous non-Abelian symmetries generically preclude stable MBL. By systematically combining AZ symmetries with additional onsite symmetries, we construct a complete classification table of MBL phases, identify stable, fragile, and unstable classes, and provide representative lattice realizations. Our results establish a unified and physically transparent framework for understanding symmetry constraints on MBL.
Will we ever quantize the center of mass of macroscopic systems? A case for a Heisenberg cut in quantum mechanics
This paper argues that there should be a fundamental boundary (Heisenberg cut) between quantum and classical physics, specifically proposing that objects heavier than the Planck mass cannot have their center of mass described by quantum mechanics. The authors challenge the assumption that quantum mechanics applies universally to all scales, suggesting that macroscopic objects like rocks cannot be treated as quantum particles with creation and annihilation operators.
Key Contributions
- Proposes a fundamental mass threshold (Planck mass) as a boundary between quantum and classical descriptions of center-of-mass motion
- Argues against universal applicability of quantum field theory formalism for macroscopic systems while allowing for macroscopic quantum phenomena in laboratory settings
View Full Abstract
The concept of quantum particles derives from quantum field theory. Accepting that quantum mechanics is valid all the way implies that not only composite particles (such as protons and neutrons) would be derived from a field theory, but also the center of mass of bodies as heavy as rocks. Despite the fabulous success of quantum mechanics, it is unreasonable to assume the existence of annihilation and creation operators for rocks, and so on. Fortunately, there are strong reasons to doubt that wave mechanics can describe the center of mass of systems at or above the Planck scale, thereby jeopardizing the construction of the corresponding Fock space. As a result, systems with masses exceeding the Planck mass would have their center of mass described through classical mechanics, regardless of being able to harbor macroscopic quantum phenomena as observed in the laboratory. Here, we briefly revisit (i) the arguments for the need for a Heisenberg cut delimiting the boundary between the quantum and classical realms and (ii) the kind of new physics expected at (the uncharted region of) the Heisenberg cut.
Three-body scattering area of identical bosons in two dimensions
This paper studies the quantum mechanics of three identical bosons colliding in two dimensions, deriving mathematical expressions for their wave function and introducing a new parameter called the 'three-body scattering area' that characterizes these interactions. The work has applications to understanding ultracold atomic gases and many-body quantum systems.
Key Contributions
- Introduction of the three-body scattering area parameter D for finite two-body scattering length
- Derivation of asymptotic expansions for three-boson wave functions in different regimes
- Connection between the scattering area and recombination rates in ultracold atomic gases
View Full Abstract
We study the wave function $φ^{(3)}$ of three identical bosons scattering at zero energy, zero total momentum, and zero orbital angular momentum in two dimensions, interacting via short-range potentials with a finite two-body scattering length $a$. We derive asymptotic expansions of $φ^{(3)}$ in two regimes: the 111-expansion, where all three pairwise distances are large, and the 21-expansion, where one particle is far from the other two. In the 111-expansion, the leading term grows as $\ln^3(B/a)$ at large hyperradius $B=\sqrt{(s_1^2+s_2^2+s_3^2)/2}$. At order $B^{-2}\ln^{-3}(B/a)$, we identify a three-body parameter $D$ with dimension of length squared, which we term the three-body scattering area. This quantity should be contrasted with the three-body scattering area previously studied for infinite or vanishing two-body scattering length. If the two-body interaction is attractive and supports bound states, $D$ acquires a negative imaginary part, and we derive its relation to the probability amplitudes for the production of two-body bound states in three-body collisions. Under weak modifications of the interaction potentials, we derive the corresponding shift of $D$ in terms of $φ^{(3)}$ and the changes of the two-body and three-body potentials. We also study the effects of $D$ and $φ^{(3)}$ on three-body and many-body physics, including the three-body ground-state energy in a large periodic volume, the many-body energy and the three-body correlation function of the dilute two-dimensional Bose gas, and the three-body recombination rates of two-dimensional ultracold atomic Bose gases.
Revisiting the Interpretations of Quantum Mechanics: From FAPP Solutions to Contextual Ontologies
This paper compares different interpretations of quantum mechanics, distinguishing between practical 'FAPP' solutions and true ontological explanations for quantum measurement. The authors propose that the Contexts-Systems-Modalities (CSM) framework provides a complete ontological interpretation that naturally incorporates measurement devices and irreversibility.
Key Contributions
- Distinguishes FAPP solutions from ontological solutions to the quantum measurement problem
- Proposes CSM framework as a complete non-FAPP ontological interpretation of quantum mechanics
View Full Abstract
This note presents a concise and non-polemical comparison of several major interpretations of quantum mechanics, with a particular emphasis on the distinction between FAPP-solutions ("For All Practical Purposes") versus ontological solutions to the measurement problem. Building on this distinction, we argue that the Contexts-Systems-Modalities (CSM) framework, supplemented by the operator-algebraic description of macroscopic contexts, provides a conceptually complete, non-FAPP ontology that naturally incorporates irreversibility and the physical structure of measurement devices. This approach differs significantly from other ontological interpretations such as Bohmian mechanics, spontaneous collapse, or many-worlds, and highlights the major role of contextual quantization in shaping quantum theory.
Multiple mobility rings in non-Hermitian Su-Schrieffer-Heeger chain with quasiperiodic potentials
This paper investigates quantum phase transitions in a modified Su-Schrieffer-Heeger chain with non-Hermitian properties and quasiperiodic potentials, discovering the formation of multiple 'mobility rings' that represent boundaries between localized and extended quantum states. The research explores how varying hopping strengths and mosaic modulation patterns affect the localization-delocalization behavior of quantum states in these complex systems.
Key Contributions
- Discovery of multiple mobility rings in non-Hermitian SSH chains with quasiperiodic potentials
- Demonstration that quantum phase transitions can be controlled by adjusting intracellular or intercellular hopping strengths
- Investigation of how mosaic modulation period affects the emergence of multiple mobility rings
View Full Abstract
The localization properties of a non-Hermitian Su-Schrieffer-Heeger (SSH) chain with a quasi-periodic on-site potential are investigated. In contrast to preceding investigations, the quantum phase transition between localized and extended states is achieved by adjusting the strength of the intracellular or intercellular hopping. The energy spectra and eigenstate distributions of the system's Hamiltonian near the phase-transition boundary exhibit different behaviors when the Hermitian, non-Hermitian, and mosaic-modulated versions of the quasi-periodic potential are considered. The existence of a mobility ring in the non-Hermitian SSH chain is revealed by studying the critical behaviors near the boundary. More interestingly, multiple mobility rings emerge when the period number of the mosaic modulation is increased. The result is helpful for investigating the localization-delocalization transition in SSH-type systems under the combined action of non-Hermiticity and quasi-periodicity.
Critical Charge and Current Fluctuations across a Voltage-Driven Phase Transition
This paper studies how an interacting quantum dot connected to metal leads undergoes phase transitions when voltage is applied, finding that charge fluctuations behave like equilibrium systems with an effective temperature while current fluctuations show genuinely non-equilibrium behavior including negative effective temperatures.
Key Contributions
- Mapping the non-equilibrium phase diagram of voltage-driven quantum dot systems using Random Phase Approximation
- Discovering that charge fluctuations can be described by an effective temperature while current fluctuations exhibit negative effective temperatures in the ordered phase
View Full Abstract
We investigate bias-driven non-equilibrium quantum phase transitions in a paradigmatic quantum-transport setup: an interacting quantum dot coupled to non-interacting metallic leads. Using the Random Phase Approximation, which is exact in the limit of a large number of dot levels, we map out the zero-temperature non-equilibrium phase diagram as a function of interaction strength and applied bias. We focus our analysis on the behavior of the charge susceptibility and the current noise in the vicinity of the transition. Remarkably, despite the intrinsically non-equilibrium nature of the steady state, critical charge fluctuations admit an effective-temperature description, $T_{\text{eff}}(T,V)$, that collapses the steady-state behavior onto its equilibrium form. In sharp contrast, current fluctuations exhibit genuinely non-equilibrium features: the fluctuation-dissipation ratio becomes negative in the ordered phase, corresponding to a negative effective temperature for the current degrees of freedom. These results establish current noise as a sensitive probe of critical fluctuations at non-equilibrium quantum phase transitions and open new directions for exploring voltage-driven critical phenomena in quantum transport systems.
Echo Cross Resonance gate error budgeting on a superconducting quantum processor
This paper develops error budgeting procedures for two-qubit gates on a 32-qubit superconducting quantum processor, identifying sources of error and applying pulse-shaping and compensating gates to achieve a 3.7x reduction in average error rates. The techniques improve gate fidelity across the device with minimal hardware overhead, particularly benefiting previously under-performing qubit pairs.
Key Contributions
- Development of systematic error budgeting procedure for Echo Cross Resonance gates on superconducting quantum processors
- Demonstration of 3.7x average error rate reduction using pulse-shaping and compensating gates with no additional hardware overhead
- Significant improvement in device uniformity by reducing the low-performing tail of gate qualities across a 32-qubit system
View Full Abstract
High fidelity quantum operations are key to enabling fault-tolerant quantum computation. Superconducting quantum processors have demonstrated high-fidelity operations, but on larger devices there is commonly a broad distribution of qualities, with the low-performing tail affecting near-term performance and applications. Here we present an error budgeting procedure for the native two-qubit operation on a 32-qubit superconducting-qubit-based quantum computer, the OQC Toshiko gen-1 system. We estimate the prevalence of different forms of error such as coherent error and control qubit leakage, then apply error suppression strategies based on the most significant sources of error, making use of pulse-shaping and additional compensating gates. These techniques require no additional hardware overhead and little additional calibration, making them suitable for routine adoption. An average reduction of 3.7x in error rate for two qubit operations is shown across a chain of 16 qubits, with the median error rate improving from 4.6$\%$ to 1.2$\%$ as measured by interleaved randomized benchmarking. The largest improvements are seen on previously under-performing qubit pairs, demonstrating the importance of practical error suppression in reducing the low-performing tail of gate qualities and achieving consistently good performance across a device.
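For reference, the interleaved randomized benchmarking numbers quoted above are error rates per gate; the standard (textbook) estimator that converts RB decay parameters into a gate error is sketched below with placeholder decay values, not data from the paper.

```python
# Standard interleaved-RB gate-error estimate: r = (d - 1)/d * (1 - p_int/p_ref),
# with d = 2**n the system dimension (d = 4 for a two-qubit gate).
d = 4
p_ref, p_int = 0.95, 0.93            # reference and interleaved decays (placeholders)
r_gate = (d - 1) / d * (1 - p_int / p_ref)
print(f"estimated two-qubit gate error: {r_gate:.4f}")
```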
Violation of the Leggett-Garg inequality in photon-graviton conversion
This paper theoretically analyzes how converting photons to gravitons in a magnetic field creates quantum superposition states that violate the Leggett-Garg inequality through temporal correlations. The research proposes this violation as a method to experimentally test whether gravity exhibits quantum properties.
Key Contributions
- Theoretical demonstration that photon-graviton conversion systems violate the Leggett-Garg inequality
- Novel proposed method for experimentally testing the quantum nature of gravity using temporal quantum correlations
View Full Abstract
The Leggett-Garg inequality (LGI) is a temporal analogue of Bell's inequality and provides a quantitative test of the nonclassicality of a system through its violation. We analytically investigate the violation of the LGI in the context of photon-graviton conversion in a magnetic field background, motivated by its potential applications to testing the nonclassicality of gravity. When gravitational perturbations are quantized as gravitons, the conversion of an initial single photon state gives rise to a superposition of photon and graviton states. We show that the temporal correlations obtained from successive projective measurements on the photon-graviton system violate the LGI. Observation of such a violation would provide a novel avenue for probing the quantum nature of gravity.
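As background for readers unfamiliar with the LGI, the textbook single-qubit example (not the photon-graviton system of the paper) already shows a violation: for a qubit precessing at frequency $\omega$ and projectively measured at three equally spaced times, the correlator combination $K_3 = C_{12} + C_{23} - C_{13} = 2\cos\omega\tau - \cos 2\omega\tau$ exceeds the macrorealist bound of 1, reaching 1.5 at $\omega\tau = \pi/3$.

```python
# Textbook Leggett-Garg check for a precessing qubit (not the paper's system).
import numpy as np

omega = 1.0
tau = np.linspace(0.01, np.pi, 500)
K3 = 2 * np.cos(omega * tau) - np.cos(2 * omega * tau)
print("max K3:", round(float(K3.max()), 3), "at omega*tau =",
      round(float(tau[K3.argmax()]), 3))   # -> 1.5 near pi/3, violating K3 <= 1
```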
Network Nonlocality Sharing in Generalized Star Network from Bipartite Bell Inequalities
This paper studies how quantum nonlocality can be shared across complex star-shaped quantum networks, where multiple parties perform measurements on entangled quantum states. The research develops mathematical frameworks to analyze when multiple parties in the network can simultaneously demonstrate quantum nonlocal correlations that violate classical limits.
Key Contributions
- Developed analytical framework for studying network nonlocality sharing in generalized star networks with arbitrary configurations
- Derived analytical expressions for quantum correlators in networks with weak measurements and multiple measurement settings
- Extended network nonlocality analysis beyond CHSH inequalities to broader classes of bipartite Bell inequalities including Vértesi inequalities
View Full Abstract
This work investigates network nonlocality sharing for a broad class of bipartite Bell inequalities in a generalized star network with an $(n,m,k)$ configuration, comprising $n$ independent branches, $m$ sequential Alices per branch, and $k$ measurement settings per party. On each branch, the intermediate Alices implement optimal weak measurements, whereas the final Alice and the central Bob perform sharp projective measurements. Network nonlocality sharing is witnessed when the quantum values of the network correlations associated with relevant parties simultaneously violate a star-network Bell inequality generated from the given class of bipartite Bell inequalities. We streamline the calculation of the quantum values of the network correlations and derive an analytical expression for the bipartite quantum correlator, valid for arbitrary measurement settings and weak-measurement strengths. The network nonlocality sharing for Vértesi inequalities has been studied within the framework, and simultaneous violations are found in $(2,2,6)$ and $(2,2,465)$ cases, with the latter exhibiting greater robustness. Our approach suggests a practical route to studying network nonlocality sharing by utilizing diverse bipartite Bell inequalities beyond the commonly used CHSH-type constructions.
Scalable Multi-QPU Circuit Design for Dicke State Preparation: Optimizing Communication Complexity and Local Circuit Costs
This paper presents a method for preparing Dicke states (quantum states with a fixed number of excitations) across multiple quantum processing units (QPUs), achieving efficient communication between QPUs while maintaining reasonable circuit complexity. The authors provide both an optimal construction and theoretical lower bounds on the communication requirements.
Key Contributions
- First distributed quantum circuit for Dicke state preparation with simultaneous logarithmic communication complexity and polynomial circuit size/depth
- Established lower bounds on communication complexity using CP-rank analysis, proving optimality for the two-QPU case
View Full Abstract
Preparing large-qubit Dicke states is of broad interest in quantum computing and quantum metrology. However, the number of qubits available on a single quantum processing unit (QPU) is limited -- motivating the distributed preparation of such states across multiple QPUs as a practical approach to scalability. In this article, we investigate the distributed preparation of $n$-qubit $k$-excitation Dicke states $D(n,k)$ across a general number $p$ of QPUs, presenting a distributed quantum circuit (each QPU hosting approximately $\lceil n/p \rceil$ qubits) that prepares the state with communication complexity $O(p \log k)$, circuit size $O(nk)$, and circuit depth $O\left(p^2 k + \log k \log (n/k)\right)$. To the best of our knowledge, this is the first construction to simultaneously achieve logarithmic communication complexity and polynomial circuit size and depth. We also establish a lower bound on the communication complexity of $p$-QPU distributed state preparation for a general target state. This lower bound is formulated in terms of the canonical polyadic rank (CP-rank) of a tensor associated with the target state. For the special case $p = 2$, we explicitly compute the CP-rank corresponding to the Dicke state $D(n,k)$ and derive a lower bound of $\lceil\log (k + 1)\rceil$, which shows that the communication complexity of our construction matches this fundamental limit.
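To make the target state concrete, a dense-vector construction of the Dicke state $D(n,k)$ (a quick illustrative helper, unrelated to the distributed circuit itself) is simply the equal superposition of all $n$-bit strings of Hamming weight $k$.

```python
# Dense Dicke-state vector |D(n, k)>: equal superposition of weight-k bitstrings.
import numpy as np
from itertools import combinations
from math import comb

def dicke_state(n: int, k: int) -> np.ndarray:
    vec = np.zeros(2 ** n)
    for ones in combinations(range(n), k):          # choose which qubits are |1>
        vec[sum(1 << q for q in ones)] = 1.0
    return vec / np.sqrt(comb(n, k))

psi = dicke_state(4, 2)
print(np.count_nonzero(psi), "nonzero amplitudes; norm =",
      round(float(np.linalg.norm(psi)), 6))
```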
Miniatures on Open Quantum Systems
This paper provides a comprehensive mathematical framework for understanding open quantum systems - quantum systems that interact with their environment - using advanced operator algebra techniques. It covers both equilibrium and non-equilibrium quantum phenomena, with particular focus on how quantum systems behave when coupled to thermal reservoirs and how entropy is produced in such systems.
Key Contributions
- Unified mathematical treatment of open quantum systems using C*- and W*-algebraic formulations
- Systematic analysis of non-equilibrium steady states and quantum entropy production
- Integration of modular theory with linear response theory for quantum statistical mechanics
- Comprehensive framework connecting equilibrium and non-equilibrium quantum phenomena
View Full Abstract
We present a unified and concise exposition of key topics in the mathematical theory of open quantum systems, developed within the framework of operator algebras. The manuscript consolidates and extends a series of invited articles originally prepared for the Modern Encyclopedia of Mathematical Physics, combining foundational material with modern perspectives on non-equilibrium quantum statistical mechanics. After introducing the C*- and W*-algebraic formulation of quantum mechanics, the paper reviews quantum dynamical systems, KMS states, and Tomita-Takesaki modular theory, as well as CCR and CAR algebras for bosonic and fermionic systems. Particular emphasis is placed on infinite systems, non-equilibrium steady states, entropy production, and linear response theory. The later sections develop a systematic treatment of small systems coupled to reservoirs and open lattice quantum spin systems, culminating in a detailed discussion of competing notions of quantum entropy production. The presentation highlights structural insights, conceptual clarity, and connections between equilibrium and non-equilibrium phenomena, providing a self-contained reference for researchers and graduate students in mathematical physics.
Electromagnetically Induced Transparency Spectra of Ladder Four-Level System with Quantum Frequency Mixing
This paper studies how quantum frequency mixing affects electromagnetically induced transparency in a four-level atomic system, discovering a new type of spectral splitting and demonstrating how two different quantum interference effects can be controlled and measured simultaneously for improved AC field sensing.
Key Contributions
- Discovery of secondary splitting of Autler-Townes splitting in four-level systems with quantum frequency mixing
- Development of a method to simultaneously control and readout two distinct quantum interference effects for enhanced AC field sensing
View Full Abstract
In this paper, we generalize the quantum frequency-mixing technique to a ladder-type four-level system and study its effect on electromagnetically induced transparency spectra. We find a secondary splitting of the Autler-Townes splitting in the probe-field transmission spectra, which can be understood via the effective Hamiltonian derived with multi-mode Floquet theory. The frequency-mixing scheme developed here enables continuous tunability of the resonant frequency between the upper levels, which facilitates broadband sensing of AC fields. Furthermore, by introducing an additional periodic driving, we realize an effective model in which two distinct quantum interference effects coexist: interference among Floquet channels and loop interference arising from closed coherent pathways. Both interference effects can be read out from the transmission spectra independently. The change in the distance between the double splitting peaks reflects the interference of the Floquet channels, while their asymmetric linewidth broadening is linked to the total effective phase of the loop. This not only provides a complementary readout for extracting the phase of the AC field, but also establishes a new paradigm for coherent control in multi-level quantum systems.
Fingerprints of classical memory in quantum hysteresis
This paper develops a theoretical framework for understanding quantum systems where the Hamiltonian depends on past values of control parameters through memory kernels, distinguishing between classical memory effects in control systems and genuine quantum non-Markovian dynamics. The authors demonstrate how this leads to hysteresis effects in driven qubits and provide mathematical tools for analyzing such memory-dependent quantum evolution.
Key Contributions
- Development of a mathematical framework for quantum systems with classical memory kernels in the Hamiltonian
- Demonstration of hysteresis effects in driven qubits due to memory in control parameters
- Derivation of unitarity conditions and time-local descriptions for exponential memory kernels
View Full Abstract
We present a simple framework for classical and quantum ``memory'' in which the Hamiltonian at time $t$ depends on past values of a control Hamiltonian through a causal kernel. This structure naturally describes finite-bandwidth or filtered control channels and provides a clean way to distinguish between memory in the control and genuine non-Markovian dynamics of the state. We focus on models where $H(t)=H_0+\int_{-\infty}^{t}K(t-s)\,H_1(s)\,ds$, and illustrate the framework on single-qubit examples such as $H(t)=σ_z+Φ(t)σ_x$ with $Φ(t)=\int_{-\infty}^{t}K(t-s)\,u(s)\,ds$. We derive basic properties of such dynamics, discuss conditions for unitarity, give an equivalent time-local description for exponential kernels, and show explicitly how hysteresis arises in the response of a driven qubit.
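A minimal numerical sketch of the exponential-kernel case (all parameters and the drive shape are assumptions for illustration, not values from the paper): for a normalized exponential kernel $K(t) = e^{-t/\tau}/\tau$ the filtered control obeys the time-local equation $\tau\,\dotΦ = u(t) - Φ(t)$, and the lag between $u$ and $Φ$ is what produces hysteresis in the qubit response.

```python
# Driven qubit with an exponentially filtered control field (illustrative only).
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

tau, dt, T = 2.0, 0.001, 40.0
u = lambda t: 0.5 * np.sin(0.3 * t)       # slow control drive (assumed form)

psi = np.array([1.0, 0.0], dtype=complex) # start in |0>
phi, sx_exp = 0.0, []
for t in np.arange(0.0, T, dt):
    phi += dt * (u(t) - phi) / tau        # time-local form of the exponential kernel
    H = sz + phi * sx
    psi = psi - 1j * dt * (H @ psi)       # crude Euler step of the Schrodinger equation
    psi /= np.linalg.norm(psi)
    sx_exp.append(float(np.real(psi.conj() @ (sx @ psi))))

# Plotting sx_exp against u(t) traces out a hysteresis-like loop due to the memory.
print("range of <sigma_x>:", round(min(sx_exp), 3), "to", round(max(sx_exp), 3))
```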
A Quantum Photonic Approach to Graph Coloring
This paper uses Gaussian Boson Sampling, a quantum photonic computing approach, to solve graph coloring problems by reformulating them as finding independent sets. The authors test their quantum-enhanced method against classical algorithms on random graphs and real-world smart-charging scenarios.
Key Contributions
- Reformulation of graph coloring as independent set problem suitable for GBS
- Demonstration of competitive quantum-enhanced heuristic for combinatorial optimization
View Full Abstract
Gaussian Boson Sampling (GBS) is a quantum computational model that leverages linear optics to solve sampling problems believed to be classically intractable. Recent experimental breakthroughs have demonstrated quantum advantage using GBS, motivating its application to real-world combinatorial optimization problems. In this work, we reformulate the graph coloring problem as an integer programming problem using the independent set formulation. This enables the use of GBS to identify cliques in the complement graph, which correspond to independent sets in the original graph. Our method is benchmarked against classical heuristics and exact algorithms on two sets of instances: Erdős-Rényi random graphs and graphs derived from a smart-charging use case. The results demonstrate that GBS can provide competitive solutions, highlighting its potential as a quantum-enhanced heuristic for graph-based optimization.
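The reformulation itself can be checked classically (the GBS sampling step is not shown here, and this is not the authors' code): independent sets of a graph $G$ are exactly the cliques of its complement, so covering the vertices with independent sets, one color per set, yields a proper coloring. A quick networkx sketch on an assumed toy instance:

```python
# Graph coloring via independent sets = cliques of the complement (classical sketch).
import networkx as nx

G = nx.erdos_renyi_graph(n=12, p=0.4, seed=1)   # assumed toy instance
Gc = nx.complement(G)

# Maximal cliques of the complement are maximal independent sets of G.
independent_sets = sorted(nx.find_cliques(Gc), key=len, reverse=True)

coloring, color = {}, 0
for ind_set in independent_sets:
    uncolored = [v for v in ind_set if v not in coloring]
    if uncolored:                                # give this independent set one color
        for v in uncolored:
            coloring[v] = color
        color += 1

print("colors used:", color)
```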
Fast state transfer via loop weights
This paper demonstrates how to achieve fast, high-fidelity quantum state transfer in spin chains by strategically placing loop weights at specific nodes (second and second-to-last positions). The authors provide exact parameter values and prove the transfer can be accomplished in almost-linear time with quantified performance metrics.
Key Contributions
- Proof that almost-linear-time high-fidelity state transfer is possible using strategically placed loop weights
- Specific parameter values and quantitative analysis of transfer time and strength through eigenvector analysis
View Full Abstract
We prove that almost-linear-time high-fidelity state transfer is achievable in a quantum spin chain using loop weights at the second and second-to-last nodes. We provide specific parameter values, and using a careful analysis of the eigenvectors we make precise quantitative estimates of the transfer time and strength.
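A quick single-excitation illustration of the setup (the coupling and loop-weight values below are arbitrary placeholders, not the specific parameters given in the paper): the chain is represented by its hopping matrix with diagonal loop weights at the second and second-to-last sites, and the transfer fidelity in the single-excitation picture is $|\langle N|e^{-iHt}|1\rangle|$.

```python
# Single-excitation state transfer on a chain with loop weights (placeholder values).
import numpy as np
from scipy.linalg import expm

N, w = 20, 3.0                            # chain length and loop weight (assumed)
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = 1.0       # uniform nearest-neighbor hopping
H[1, 1] = H[N - 2, N - 2] = w             # loop weights at 2nd and 2nd-to-last sites

e_first, e_last = np.eye(N)[0], np.eye(N)[-1]
times = np.linspace(0.0, 60.0, 600)
fidelity = [abs(e_last @ expm(-1j * H * t) @ e_first) for t in times]
print("peak transfer amplitude:", round(max(fidelity), 3))
```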
A general interpretation of nonlinear connected time crystals: quantum self-sustaining combined with quantum synchronization
This paper demonstrates how to create quantum time crystals - systems that oscillate in time while maintaining quantum coherence - by combining nonlinear self-sustaining dynamics with quantum synchronization between components. The authors show that phase correlations between synchronized quantum oscillators can overcome the dephasing that normally destroys time-periodic behavior in quantum systems.
Key Contributions
- Identified dephasing suppression through intercomponent phase correlations as key mechanism for quantum time crystals
- Demonstrated continuous time crystals in synchronized van der Pol oscillator arrays with both semiclassical and full quantum analysis
- Provided framework to classify uncorrelated time crystals as trivial and reduce identification to two-body correlations
View Full Abstract
Although classical nonlinear dynamics suggests that sufficiently strong nonlinearity can sustain oscillations, quantization of such a model typically yields a time-independent steady state that respects time-translation symmetry and thus precludes time-crystal behavior. We identify dephasing as the primary mechanism enforcing this symmetry, which can be suppressed by intercomponent phase correlations. Consequently, a sufficient condition for realizing a continuous time crystal is a nonlinear quantum self-sustaining system exhibiting quantum synchronization among its constituents. As a concrete example, we demonstrate spontaneous oscillations in a synchronized array of van der Pol oscillators, corroborated by both semiclassical dynamics and the quantum Liouville spectrum. These results reduce the identification of time crystals in many-body systems to the evaluation of only two-body correlations and provide a framework for classifying uncorrelated time crystals as trivial.
Contextuality as an Information-Theoretic Obstruction to Classical Probability
This paper reframes quantum contextuality as an information-theoretic constraint rather than just a quantum anomaly, showing that classical models reproducing contextual statistics must pay an unavoidable information cost. The work demonstrates that quantum probability naturally handles contextual operations without requiring explicit contextual encoding, providing new theoretical insight into the classical-quantum distinction.
Key Contributions
- Reframes contextuality as an information-theoretic obstruction requiring additional information cost in classical models
- Demonstrates that quantum probability provides a canonical framework for contextual operations without explicit contextual encoding
View Full Abstract
Contextuality is a central feature distinguishing quantum from classical probability theories, yet its operational meaning remains subject to interpretation. We reconsider contextuality from an information-theoretic perspective, focusing on operational models constrained to maintain a single internal state with fixed semantics across multiple contexts. Under this constraint, we show that contextual statistics certify an unavoidable obstruction to classical probabilistic descriptions. Specifically, any classical model that reproduces such statistics must either embed contextual dependence into the internal state or introduce additional external labels carrying nonzero information. This result identifies contextuality as a witness of irreducible information cost in classical representations, rather than as a purely nonclassical anomaly. From this viewpoint, quantum probability emerges as a canonical framework that accommodates contextual operations without requiring explicit contextual encoding.
Complex nonlinear sigma model
This paper studies nonlinear sigma models with complex-valued couplings as a theoretical framework for understanding critical phenomena in open quantum many-body systems. The researchers use renormalization group analysis to show that these complexified models exhibit unique fixed points and phase transitions not found in conventional models with real couplings.
Key Contributions
- Development of complexified nonlinear sigma models as framework for nonunitary field theory
- Demonstration of novel fixed points with complex scaling dimensions in tenfold symmetric spaces
- Mapping of global phase diagrams in complex-coupling plane identifying continuous and discontinuous transitions
View Full Abstract
Motivated by the recent interest in the criticality of open quantum many-body systems, we study nonlinear sigma models with complexified couplings as a general framework for nonunitary field theory. Applying the perturbative renormalization-group analysis to the tenfold symmetric spaces, we demonstrate that fixed points with complex scaling dimensions and critical exponents arise generically, without counterparts in conventional nonlinear sigma models with real couplings. We further clarify the global phase diagrams in the complex-coupling plane and identify both continuous and discontinuous phase transitions. Our work elucidates universal aspects of critical phenomena in complexified field theory.
Universal thermodynamic implementation of a process with a variable work cost
This paper develops a universal protocol for implementing quantum processes with optimal, variable work costs by leveraging thermodynamic frameworks and thermal operations. The protocol can implement multiple copies of quantum channels while achieving optimal work efficiency for each individual input state, though it introduces some decoherence that reveals the work consumption.
Key Contributions
- Development of a universal thermodynamic protocol for implementing quantum channels with variable, optimal work costs
- Extension of conditional erasure protocols to achieve variable work consumption while maintaining process fidelity
View Full Abstract
The minimum amount of thermodynamic work required in order to implement a quantum computation or a quantum state transformation can be quantified using frameworks based on the resource theory of thermodynamics, deeply rooted in the works of Landauer and Bennett. For instance, the work we need to invest in order to implement $n$ independent and identically distributed (i.i.d.) copies of a quantum channel is quantified by the thermodynamic capacity of the channel when we require the implementation's accuracy to be guaranteed in diamond norm over the $n$-system input. Recent work showed that work extraction can be implemented universally, meaning the same implementation works for a large class of input states, while achieving a variable work cost that is optimal for each individual i.i.d. input state. Here, we revisit some techniques leading to derivation of the thermodynamic capacity, and leverage them to construct a thermodynamic implementation of $n$ i.i.d. copies of any time-covariant quantum channel, up to some process decoherence that is necessary because the implementation reveals the amount of consumed work. The protocol uses so-called thermal operations and achieves the optimal per-input work cost for any i.i.d. input state; it relies on the conditional erasure protocol in our earlier work, adjusted to yield variable work. We discuss the effect of the work-cost decoherence. While it can significantly corrupt the correlations between the output state and any reference system, we show that for any time-covariant i.i.d. input state, the state on the output system faithfully reproduces that of the desired process to be implemented. As an immediate consequence of our results, we recover recent results for optimal work extraction from i.i.d. states up to the error scaling and implementation specifics, and propose an optimal preparation protocol for time-covariant i.i.d. states.
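As a numerical touchstone for the Landauer-Bennett framework invoked above (a standard background fact, not a result of the paper), the minimum work to erase one bit at temperature $T$ is $k_B T \ln 2$:

```python
# Landauer bound: minimum work per erased bit at temperature T.
import math

k_B = 1.380649e-23        # Boltzmann constant in J/K (exact SI value)
T = 300.0                 # assumed room temperature in K
print(k_B * T * math.log(2), "J per bit")   # ~2.87e-21 J
```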
Engineering the non-Hermitian SSH model with skin effects in Rydberg atom arrays
This paper proposes a method to create and study non-Hermitian topological systems using arrays of Rydberg atoms, where engineered dissipation creates exotic 'skin effects' where quantum states localize at boundaries. The researchers demonstrate how to build a quantum simulator that can explore unusual topological phenomena that don't occur in conventional quantum systems.
Key Contributions
- Practical implementation scheme for non-Hermitian SSH model using Rydberg atom arrays with three-atom unit cells
- Demonstration of robust non-Hermitian skin effects through adiabatic elimination of auxiliary atoms
- Analysis of parameter fluctuation tolerance showing robust topological features under experimental conditions
View Full Abstract
We propose and systematically analyze a practical scheme for implementing a one-dimensional non-Hermitian Su-Schrieffer-Heeger model using individually addressable Rydberg atom arrays. Our setup consists of an atomic chain with three-atom unit cells, in which a synthetic gauge field is generated by applying multi-color laser fields. By engineering fast dissipative channels for one auxiliary atom in each unit cell, the adiabatic elimination effectively gives rise to a non-Hermitian skin effect. We examine how fluctuations in the experimental parameters influence both the skin effect and the topological invariant under open and periodic boundary conditions in real space and find that both features remain highly robust. This work establishes a versatile, controllable, and programmable open-system quantum simulator with neutral atoms, providing a clear route for exploring rich non-Hermitian topological phenomena.
Krypton-sputtered tantalum films for scalable high-performance quantum devices
This paper demonstrates a new method for creating high-quality tantalum superconducting films for quantum devices using krypton sputtering at lower temperatures (200-350°C), making the process compatible with standard semiconductor manufacturing. The researchers show these films can produce state-of-the-art microwave resonators and transmon qubits with quality factors up to 14 million.
Key Contributions
- Development of low-temperature krypton sputtering process for BCC tantalum films compatible with semiconductor fabrication
- Demonstration of transmon qubits with quality factors up to 14 million using the new fabrication process
View Full Abstract
Superconducting qubits based on tantalum (Ta) thin films have demonstrated the highest-performing microwave resonators and qubits. This makes Ta an attractive material for superconducting quantum computing applications, but, so far, direct deposition has largely relied on high substrate temperatures exceeding 400 °C to achieve the body-centered cubic phase, BCC (α-Ta). This leads to compatibility issues for scalable fabrication leveraging standard semiconductor fabrication lines. Here, we show that changing the sputter gas from argon (Ar) to krypton (Kr) promotes BCC Ta synthesis on silicon (Si) at temperatures as low as 200 °C, providing a wide process window compatible with back-end-of-the-line fabrication standards. Furthermore, we find these films to have substantially higher electronic conductivity, consistent with clean-limit superconductivity. We validated the microwave performance through coplanar waveguide resonator measurements, finding that films deposited at 250 °C and 350 °C exhibit a tight performance distribution at the state of the art. Higher temperature-grown films exhibit higher losses, in correlation with the degree of Ta/Si intermixing revealed by cross-sectional transmission electron microscopy. Finally, with these films, we demonstrate transmon qubits with a relatively compact, 20 µm capacitor gap, achieving a median quality factor up to 14 million.
Spectral Transitions and Singular Continuous Spectrum in A New Family of Quasi-periodic Quantum Walks
This paper introduces a new class of one-dimensional quantum walks with quasi-periodic dynamics that generalizes existing models and demonstrates novel spectral properties. The research provides the first example of a solvable quasi-periodic quantum walk that exhibits purely singular continuous spectrum in certain parameter regions.
Key Contributions
- Introduction of a new family of quasi-periodic quantum walks based on extended CMV matrices
- First demonstration of stable singular continuous spectrum in solvable quasi-periodic quantum walks
- Rigorous spectral analysis showing richer phase diagram than previous models
View Full Abstract
This paper introduces and rigorously analyzes a new class of one-dimensional discrete-time quantum walks whose dynamics are governed by a parametrized family of extended CMV matrices. The model generalizes the unitary almost Mathieu operator (UAMO) and exhibits a richer spectral phase diagram, closely resembling the extended Harper's model. It provides the first example of a solvable quasi-periodic quantum walk that exhibits a stable region of purely singular continuous spectrum.
Local Distinguishability of Multipartite Orthogonal Quantum States: Generalized and Simplified
This paper proves that any two orthogonal multipartite quantum states can be perfectly distinguished using one-way local operations and classical communication (LOCC), extending previous results to infinite dimensions with a simpler proof and providing an efficient algorithm for the two-party case.
Key Contributions
- Extended the Walgate-Short-Hardy-Vedral result on local distinguishability of orthogonal quantum states to infinite dimensions with simplified proof
- Developed an O(d_A^2 d_B^2)-time algorithm for constructing perfect one-way LOCC protocols for bipartite states
- Established equivalence between local distinguishability results and one-shot environment-assisted classical capacity of quantum channels being at least 1 bit
View Full Abstract
In a seminal work [PRL85.4972], Walgate, Short, Hardy, and Vedral prove in finite dimensions that for every pair of pure multipartite orthogonal quantum states, there exists a one-way local operations and classical communication (LOCC) protocol that perfectly distinguishes the pair. We extend this result to infinite dimensions with a simpler proof. For states on $\mathbb{C}^{d_A \times d_A} \otimes \mathbb{C}^{d_B \times d_B}$, we strengthen this existence result by constructing an $O(d_A^2 d_B^2)$-time algorithm that specifies such a perfect one-way LOCC protocol. Finally, we establish the equivalence between Walgate et al.'s result and the fact that the one-shot environment-assisted classical capacity of every quantum channel is at least 1 bit per channel use, thereby clarifying the literature on these notions. At the core of all of these results is the fact that every operator with vanishing trace admits a basis where its diagonal entries are all zero.
Ensemble-Based Quantum Signal Processing for Error Mitigation
This paper introduces a new approach to reduce noise in quantum computers by using multiple noisy quantum circuits and averaging their results to suppress errors, specifically targeting random phase errors that accumulate during quantum algorithm execution. The method is applied to Quantum Signal Processing algorithms without requiring additional circuit depth or extra qubits.
Key Contributions
- Novel ensemble-based error mitigation framework for Quantum Signal Processing that doesn't increase circuit depth or qubit requirements
- Robust QSP algorithms for polynomial function implementation and observable estimation with applications to Hamiltonian simulation and quantum linear systems
View Full Abstract
Despite rapid advances in quantum hardware, noise remains a central obstacle to deploying quantum algorithms on near-term devices. In particular, random coherent errors that accumulate during circuit execution constitute a dominant and fundamentally challenging noise source. We introduce a noise-resilient framework for Quantum Signal Processing (QSP) that mitigates such coherent errors without increasing circuit depth or ancillary qubit requirements. Our approach uses ensembles of noisy QSP circuits combined with measurement-level averaging to suppress random phase errors in Z rotations. Building on this framework, we develop robust QSP algorithms for implementing polynomial functions of Hermitian matrices and for estimating observables, with applications to Hamiltonian simulation, quantum linear systems, and ground-state preparation. We analyze the trade-off between approximation error and hardware noise, which is essential for practical implementation under the stringent depth and coherence constraints of current quantum hardware. Our results establish a practical pathway for integrating error mitigation seamlessly into algorithmic design, advancing the development of robust quantum computing, and enabling the discovery of scientific applications with near- and mid-term quantum devices.
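A minimal toy simulation of the underlying mechanism, not the paper's QSP construction: for a fixed sequence of single-qubit rotations, independent random Z-phase errors bias any single run at first order, while averaging expectation values over an ensemble of noisy realizations cancels that first-order contribution. The phase sequence and noise level below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(axis, angle):
    # Single-qubit rotation exp(-i * angle/2 * axis).
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * axis

def run_circuit(phases, sigma):
    """Alternate Rx(pi/4) with Rz(phi + random error); return <Z> of the final state."""
    psi = np.array([1, 0], dtype=complex)
    for phi in phases:
        psi = rot(X, np.pi / 4) @ psi
        psi = rot(Z, phi + sigma * rng.normal()) @ psi
    return float(np.real(psi.conj() @ Z @ psi))

phases = np.linspace(0.1, 1.5, 8)       # a fixed, made-up phase sequence
ideal = run_circuit(phases, sigma=0.0)

samples = np.array([run_circuit(phases, sigma=0.1) for _ in range(4000)])
print(f"ideal <Z>                    : {ideal:+.4f}")
print(f"ensemble-averaged <Z>        : {samples.mean():+.4f}")
print(f"bias of the ensemble average : {abs(samples.mean() - ideal):.4f}")
print(f"typical single-run deviation : {samples.std():.4f}")
# Individual runs are biased at first order in the random phases, whereas the
# ensemble average corresponds to a dephased channel whose bias is second order,
# so it typically sits much closer to the ideal value.
```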
Comment on "Determining angle of arrival of radio-frequency fields using subwavelength, amplitude-only measurements of standing waves in a Rydberg atom sensor"
This paper critiques a previous study on using Rydberg atoms to determine radio frequency field directions, specifically addressing how excluding certain RF transitions between quantum states affects the predicted optical spectra when the system is probed using electromagnetically induced transparency (EIT).
Key Contributions
- Identifies theoretical gaps in modeling RF-transitions between Rydberg substates in quantum sensing schemes
- Provides corrections to spectral predictions for EIT-based Rydberg atom RF sensors
View Full Abstract
We discuss the consequence of excluding allowed RF transitions between substates of a field-dressed Rydberg manifold when predicting the spectrum that will be observed if the dressed system is probed in an optical EIT scheme.
Superfluidity in the spin-1/2 XY model with power-law interactions
This paper studies a quantum spin model with long-range interactions that can be implemented in trapped-ion quantum simulators, focusing on how these interactions enhance superfluidity compared to short-range models. The researchers develop new quantum Monte Carlo simulation methods to measure superfluid properties and identify phase transitions in this system.
Key Contributions
- Development of stochastic series expansion quantum Monte Carlo methods for power-law interacting spin systems
- Demonstration of enhanced superfluidity in long-range interacting 1D XY models
- Novel quantum Monte Carlo probe for identifying critical points in power-law interacting systems
View Full Abstract
In trapped-ion quantum simulators, effective spin-1/2 XY interactions can be engineered via laser-induced coupling between internal atomic states and collective phonon modes. In the simplest one-dimensional ($1d$) traps, these interactions decay as a power-law with distance $1/r^α$, with a tunable exponent $α$. For small $α$, the resulting long-range $1d$ XY model exhibits continuous symmetry breaking, in marked contrast to its nearest neighbor counterpart. In this paper, we examine this model near the phase transition at $α_c$ through the lens of the spin stiffness, or superfluid density. We develop a stochastic series expansion (SSE) quantum Monte Carlo (QMC) simulation and a generalized winding number estimator to measure the superfluid density in the presence of power-law interactions, which we test against exact diagonalization for small lattice sizes. Our results show how conventional superfluidity in the $1d$ XY model is enhanced in the long-range interacting regime. This is observed as a diverging superfluid density as $α\rightarrow 0$ in the thermodynamic limit, which we show is consistent with linear spin-wave theory. Finally, we define a normalized superfluid density estimator that clearly distinguishes the short, medium, and long-range interacting regimes, providing a novel QMC probe of the critical value $α_c$.
Quantum Channels on Graphs: a Resonant Tunneling Perspective
This paper develops a theoretical framework for analyzing quantum information transport through networks by treating connected scattering sites as quantum channels. The researchers discover that internal reflections in these networks can create 'resonant concatenation' effects that suppress noise and even enable quantum communication through channels that individually cannot transmit information.
Key Contributions
- Development of quantum-information-theoretic framework for scattering on graphs using Redheffer star product
- Discovery of resonant concatenation phenomenon that can suppress noise and produce super-activation of quantum capacity
View Full Abstract
Quantum transport on structured networks is strongly influenced by interference effects, which can dramatically modify how information propagates through a system. We develop a quantum-information-theoretic framework for scattering on graphs in which a full network of connected scattering sites is treated as a quantum channel linking designated input and output ports. Using the Redheffer star product to construct global scattering matrices from local ones, we identify resonant concatenation, a nonlinear composition rule generated by internal back-reflections. In contrast to ordinary channel concatenation, resonant concatenation can suppress noise and even produce super-activation of the quantum capacity, yielding positive capacity in configurations where each constituent channel individually has zero capacity. We illustrate these effects through models exhibiting resonant-tunneling-enhanced transport. Our approach provides a general methodology for analyzing coherent information flow in quantum graphs, with relevance for quantum communication, control, and simulation in structured environments.
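The Redheffer star product mentioned in the abstract has a concrete closed form for two-port scattering matrices. The sketch below is a textbook Fabry-Perot-style example, not the paper's graph construction: it composes two weakly transmitting mirrors through a propagation phase and shows that the composite transmission approaches unity on resonance, the interference effect behind resonant concatenation.

```python
# Redheffer star product for 2-port scattering matrices, block convention
# S = (S11, S12, S21, S22) with S21 the left-to-right transmission.
import numpy as np

def star(A, B):
    """Redheffer star product of two scattering matrices (scalar ports here)."""
    A11, A12, A21, A22 = A
    B11, B12, B21, B22 = B
    inv1 = 1.0 / (1.0 - B11 * A22)   # use matrix inverses for multi-mode ports
    inv2 = 1.0 / (1.0 - A22 * B11)
    return (A11 + A12 * inv1 * B11 * A21,
            A12 * inv1 * B12,
            B21 * inv2 * A21,
            B22 + B21 * inv2 * A22 * B12)

r, t = np.sqrt(0.96), np.sqrt(0.04)   # each mirror transmits only 4% of the power
mirror = (r, t, t, -r)                # lossless, symmetric beam-splitter-like mirror

transmissions = []
for phi in np.linspace(0, 2 * np.pi, 2001):
    prop = (0.0, np.exp(1j * phi), np.exp(1j * phi), 0.0)   # free-propagation phase
    total = star(star(mirror, prop), mirror)
    transmissions.append(abs(total[2]) ** 2)

print(f"single-mirror power transmission : {t**2:.3f}")
print(f"cavity transmission, min / max   : {min(transmissions):.3f} / {max(transmissions):.3f}")
# The maximum approaches 1 at the cavity resonance even though each element
# individually transmits only 4% of the incident power.
```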
Correlated dynamics of three-particle bound states induced by emergent impurities in Bose-Hubbard model
This paper studies how three particles form bound states in the Bose-Hubbard model, discovering two types of bound states (dimer-monomer and bound edge states) and analyzing their collective motion properties including quantum walks and Bloch oscillations.
Key Contributions
- Identification of two distinct types of three-particle bound states in Bose-Hubbard model
- Characterization of collective dynamics including spread velocities in quantum walks and modified oscillation periods in Bloch oscillations
View Full Abstract
Bound states, known as particles tied together and moving as a whole, are profound correlated effects induced by particle-particle interactions. While dimer-monomer bound states are manifested as a single particle attached to a dimer bound pair, the quantum walks and Bloch oscillations of dimer-monomer bound states remain unclear. Here, we revisit three-particle bound states in the Bose-Hubbard model and find that interaction-induced impurities adjacent to the bound pair and the boundaries give rise to two kinds of bound states: one is the dimer-monomer bound state and the other is the bound edge state. In quantum walks, the spread velocity of the dimer-monomer bound state is determined by the maximal group velocity of its energy band, which is much smaller than that in the single-particle case. In Bloch oscillations, the period of dimer-monomer bound states is one third of that in the single-particle case. Our work provides new insights into the collective dynamics of three-particle bound states.
A Cyclic Layerwise QAOA Training
This paper proposes Orbit-QAOA, an improved training method for the quantum approximate optimization algorithm that cyclically updates parameters layer-by-layer and selectively freezes stable parameters. The method reduces training computational overhead by up to 81.8% while maintaining equivalent performance to standard multi-angle QAOA for solving combinatorial optimization problems.
Key Contributions
- Development of Orbit-QAOA algorithm that cyclically revisits and selectively updates QAOA layers based on gradient tracking
- Demonstration that layer-wise parameter optimization provides optimal granularity for efficient MA-QAOA training with up to 81.8% reduction in training steps
View Full Abstract
The quantum approximate optimization algorithm (QAOA) is a hybrid quantum-classical algorithm for solving combinatorial optimization problems. Multi-angle QAOA (MA-QAOA), which assigns independent parameters to each Hamiltonian operator term, achieves superior approximation performance even with fewer layers than standard QAOA. Unfortunately, this increased expressibility can raise the classical computational cost due to a greater number of parameters. The recently proposed Layerwise MA-QAOA (LMA-QAOA) reduces this overhead by training one layer at a time, but it may fail to reach a precise solution because previously trained parameters remain fixed. This work addresses two questions for efficient MA-QAOA training: (i) What is the optimal granularity for parameter updates per epoch, and (ii) How can we get precise final cost function results while only partially updating the parameters per epoch? Although reducing the number of parameters updated per epoch can lower the classical computation overhead, too fine or too coarse a granularity of Hamiltonian updates can degrade MA-QAOA training efficiency. We find that optimizing one complete layer per epoch is an efficient granularity. Moreover, selectively retraining each layer by tracking gradient variations can achieve a final cost function equivalent to the standard MA-QAOA while lowering the parameter update overhead. Based on these insights, we propose Orbit-QAOA, which cyclically revisits layers and selectively freezes stabilized parameters. Across diverse graph benchmarks, Orbit-QAOA reduces training steps by up to 81.8%, reduces the approximation-ratio error by up to 72x compared to the enhanced LMA-QAOA with a unified stopping condition, and achieves equivalent approximation performance compared to the standard MA-QAOA.
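A schematic sketch of the training loop described above, cyclic layer-wise updates with selective freezing of layers whose gradients have stabilized, applied to a stand-in smooth cost function rather than an actual MA-QAOA objective. The layer sizes, learning rate, and freezing tolerance are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, params_per_layer = 4, 6
theta = [rng.uniform(-np.pi, np.pi, params_per_layer) for _ in range(n_layers)]

def cost(layers):
    # Stand-in non-convex objective with its minimum at theta = 0 (not a QAOA cost).
    flat = np.concatenate(layers)
    return float(np.sum(1 - np.cos(flat)) + 0.1 * np.sum(flat ** 2))

def layer_gradient(layers, k, eps=1e-5):
    # Central finite-difference gradient with respect to layer k only.
    grad = np.zeros(params_per_layer)
    for i in range(params_per_layer):
        for sign in (+1, -1):
            shifted = [p.copy() for p in layers]
            shifted[k][i] += sign * eps
            grad[i] += sign * cost(shifted)
    return grad / (2 * eps)

frozen = [False] * n_layers
prev_norm = [np.inf] * n_layers
lr, freeze_tol, n_epochs = 0.2, 1e-2, 200

for epoch in range(n_epochs):
    k = epoch % n_layers                # cyclically revisit one layer per epoch
    if frozen[k]:
        continue
    g = layer_gradient(theta, k)
    theta[k] -= lr * g                  # update only this layer's parameters
    gnorm = float(np.linalg.norm(g))
    # Freeze the layer once its gradient is small and no longer changing.
    if gnorm < 0.1 and abs(gnorm - prev_norm[k]) < freeze_tol:
        frozen[k] = True
    prev_norm[k] = gnorm

print(f"final cost: {cost(theta):.4f}   frozen layers: {frozen}")
```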
Foundry-Enabled Patterning of Diamond Quantum Microchiplets for Scalable Quantum Photonics
This paper develops a scalable manufacturing method for diamond quantum devices by using commercial semiconductor foundries to create silicon masks that are then transferred to diamond substrates, enabling mass production of quantum photonic components with improved uniformity and yield.
Key Contributions
- Development of foundry-compatible manufacturing process for diamond quantum photonics using microtransfer printing
- Demonstration of scalable production of diamond quantum microchiplets with improved optical performance and uniformity
View Full Abstract
Quantum technologies promise secure communication networks and powerful new forms of information processing, but building these systems at scale remains a major challenge. Diamond is an especially attractive material for quantum devices because it can host atomic-scale defects that emit single photons and store quantum information with exceptional stability. However, fabricating the optical structures needed to control light in diamond typically relies on slow, bespoke processes that are difficult to scale. In this work, we introduce a manufacturing approach that brings diamond quantum photonics closer to industrial production. Instead of sequentially defining each device by lithography written directly on diamond, we fabricate high-precision silicon masks using commercial semiconductor foundries and transfer them onto diamond via microtransfer printing. These masks define large arrays of nanoscale optical structures, shifting the most demanding pattern-definition steps away from the diamond substrate, improving uniformity, yield, and throughput. Using this method, we demonstrate hundreds of diamond "quantum microchiplets" with improved optical performance and controlled interaction with quantum emitters. The chiplet format allows defective devices to be replaced and enables integration with existing photonic and electronic circuits. Our results show that high-quality diamond quantum devices can be produced using scalable, foundry-compatible techniques. This approach provides a practical pathway toward large-scale quantum photonic systems and hybrid quantum-classical technologies built on established semiconductor manufacturing infrastructure.
Alternating ZX Circuit Extraction for Hardware-Adaptive Compilation
This paper presents a new method for converting ZX diagrams (a graph-based representation of quantum computations) into executable quantum circuits by alternating between generating multiple extraction options and evaluating them against hardware constraints. The approach creates a feedback loop that optimizes circuit compilation for specific quantum hardware architectures.
Key Contributions
- Novel alternating extraction scheme that integrates ZX diagram conversion with hardware-adaptive routing
- Modular framework supporting different extraction algorithms, routing strategies, and target hardware platforms
View Full Abstract
We present a novel quantum circuit extraction scheme that tightly integrates graph-like ZX diagrams with hardware-adaptive routing. The method utilizes the degrees of freedom during the conversion from a ZX diagram to a quantum circuit (extraction). It alternates between generating multiple extraction options and evaluating them based on hardware constraints, allowing the routing algorithm to inform and guide the extraction process. This feedback loop extends existing graph-like ZX extraction and supports modular integration of different extraction algorithms, routing strategies, and target hardware, making it a versatile building block during compilation. To perform numerical evaluations, a reference instance of the scheme is implemented with SWAP-based routing for neutral atom hardware and evaluated using various benchmark collections on small- to mid-scale circuits. The reference code is available as open-source, allowing fast integration of other extraction and/or routing tools to stimulate further research and foster improvements of the proposed scheme.
Real-Time Iteration Scheme for Dynamical Mean-Field Theory: A Framework for Near-Term Quantum Simulation
This paper develops a new computational method for simulating strongly correlated quantum materials that works in real-time rather than imaginary time, making it more compatible with near-term quantum computers. The approach successfully captures complex electronic behavior like metal-to-insulator transitions while being efficient enough to run on quantum hardware with limited capabilities.
Key Contributions
- Real-time domain DMFT iteration scheme compatible with quantum hardware
- Demonstration of metal-to-insulator transition simulation using minimal quantum resources
- Framework enabling quantum simulation of strongly correlated materials on near-term devices
View Full Abstract
We present a time-domain iteration scheme for solving the Dynamical Mean-Field Theory (DMFT) self-consistent equations using retarded Green's functions in real time. Unlike conventional DMFT approaches that operate in imaginary time or frequency space, our scheme operates directly with real-time quantities. This makes it particularly suitable for near-term quantum computing hardware with limited Hilbert spaces, where real-time propagation can be efficiently implemented via Trotterization or variational quantum algorithms. We map the effective impurity problem to a finite one-dimensional chain with a small number of bath sites, solved via exact diagonalization as a proof-of-concept. The hybridization function is iteratively updated through time-domain fitting until self-consistency. We demonstrate stable convergence across a wide range of interaction strengths for the half-filled Hubbard model on a Bethe lattice, successfully capturing the metal-to-insulator transition. Despite using limited time resolution and a minimal bath discretization, the spectral functions clearly exhibit the emergence of Hubbard bands and the suppression of spectral weight at the Fermi level as interaction strength increases. This overcomes major limitations of two-site DMFT approximations by delivering detailed spectral features while preserving efficiency and compatibility with quantum computing platforms through real-time dynamics.
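To make the "real-time quantities" concrete, here is a minimal numpy example (not the paper's DMFT loop) that computes the real-time retarded Green's function of a non-interacting impurity coupled to a few bath sites and Fourier-transforms it, with broadening, into a spectral function. The bath energies and hybridization are arbitrary illustrative values.

```python
import numpy as np

# Single-particle Hamiltonian: impurity (site 0) hybridized with 4 bath sites.
eps_bath, v = [-1.0, -0.5, 0.5, 1.0], 0.4
n = 1 + len(eps_bath)
H = np.zeros((n, n))
for k, e in enumerate(eps_bath, start=1):
    H[k, k] = e
    H[0, k] = H[k, 0] = v

# For a quadratic Hamiltonian, <{c_0(t), c_0^dagger}> = [exp(-i H t)]_{00},
# so G^R_00(t) = -i * theta(t) * [exp(-i H t)]_{00}.
evals, U = np.linalg.eigh(H)
t_grid = np.linspace(0, 200, 4001)
w00 = np.abs(U[0, :]) ** 2
gr_t = -1j * (w00[None, :] * np.exp(-1j * np.outer(t_grid, evals))).sum(axis=1)

# Fourier transform with damping eta to obtain a broadened spectral function.
eta, dt = 0.05, t_grid[1] - t_grid[0]
w_grid = np.linspace(-2.5, 2.5, 501)
gr_w = (np.exp(1j * np.outer(w_grid, t_grid)) * np.exp(-eta * t_grid)) @ gr_t * dt
spectral = -gr_w.imag / np.pi

print(f"integrated spectral weight (should be close to 1): {np.trapz(spectral, w_grid):.2f}")
```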
Distinguishing synthetic unravelings on quantum computers
This paper demonstrates how different measurement schemes on quantum systems can produce distinguishable quantum trajectories even when their average behavior is identical. The researchers implemented synthetic versions of these 'unravelings' on IBM quantum computers using superconducting qubits and showed that nonlinear statistical measures can distinguish between different measurement approaches.
Key Contributions
- Demonstrated synthetic unravelings on digital quantum hardware using IBM superconducting qubits
- Showed that trajectory variance and von Neumann entropy can experimentally distinguish different measurement unravelings with identical unconditional dynamics
View Full Abstract
Distinct monitoring or intervention schemes can produce different conditioned stochastic quantum trajectories while sharing the same unconditional (ensemble-averaged) dynamics. This is the essence of unravelings of a given Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation: any trajectory-ensemble average of a function that is linear in the conditional state is completely determined by the unconditional density matrix, whereas applying a nonlinear function before averaging can yield unraveling-dependent results beyond the average evolution. A paradigmatic example is resonance fluorescence, where direct photodetection (jump/Poisson) and homodyne or heterodyne detection (diffusive/Wiener) define inequivalent unravelings of the same GKSL dynamics. In earlier work, we showed that nonlinear trajectory averages can distinguish such unravelings, but observing the effect in that optical setting requires demanding experimental precision. Here we translate the same idea to a digital setting by introducing synthetic unravelings implemented as quantum circuits acting on one and two qubits. We design two unravelings - a projective measurement unraveling and a random-unitary "kick" unraveling - that share the same ensemble-averaged evolution while yielding different nonlinear conditional-state statistics. We implement the protocols on superconducting-qubit hardware provided by IBM Quantum to access trajectory-level information. We show that the variance across trajectories and the ensemble-averaged von Neumann entropy distinguish the unravelings in both theory and experiment, while the unconditional state and the ensemble-averaged expectation values that are linear in the state remain identical. Our results provide an accessible demonstration that quantum trajectories encode information about measurement backaction beyond what is fixed by the unconditional dynamics.
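The core idea, identical unconditional dynamics but unraveling-dependent nonlinear trajectory statistics, can be reproduced in a few lines of numpy. The toy below (not the IBM hardware protocol) compares a random Z-kick unraveling applied with probability p against a projective Z-measurement unraveling applied with probability 2p: both realize the same average dephasing channel, so the mean of <X> agrees, while the variance of <X> across trajectories differs.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.2
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # initial |+> state
Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def exp_x(psi):
    return float(np.real(psi.conj() @ X @ psi))

def kick_trajectory():
    # Random-unitary unraveling: apply Z with probability p.
    psi = plus.copy()
    if rng.random() < p:
        psi = Z @ psi
    return exp_x(psi)

def measure_trajectory():
    # Projective unraveling: measure Z with probability 2p (same average channel).
    psi = plus.copy()
    if rng.random() < 2 * p:
        probs = np.abs(psi) ** 2
        outcome = rng.choice(2, p=probs)
        psi = np.zeros(2, dtype=complex)
        psi[outcome] = 1.0
    return exp_x(psi)

n_traj = 20000
kick = np.array([kick_trajectory() for _ in range(n_traj)])
meas = np.array([measure_trajectory() for _ in range(n_traj)])

print(f"mean <X>  kick: {kick.mean():+.3f}  measure: {meas.mean():+.3f}  (both ~ {1 - 2*p:+.3f})")
print(f"var  <X>  kick: {kick.var():.3f}  measure: {meas.var():.3f}  (unraveling-dependent)")
```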
Symmetric and Antisymmetric Quantum States from Graph Structure and Orientation
This paper establishes a mathematical relationship between graph structures and quantum particle exchange symmetries, proving that complete graphs generate symmetric quantum states and introducing directed graphs that can create antisymmetric states. It provides a unified framework for describing both bosonic and fermionic quantum states using graph theory.
Key Contributions
- Proved that graph states are fully symmetric under particle permutations if and only if the underlying graph is complete
- Introduced a generalized construction using directed graphs with non-commutative gates that generates antisymmetric multipartite states
View Full Abstract
Graph states provide a powerful framework for describing multipartite entanglement in quantum information science. In their standard formulation, graph states are generated by controlled-$Z$ interactions and naturally encode symmetric exchange properties. Here we establish a precise correspondence between graph topology and exchange symmetry by proving that a graph state is fully symmetric under particle permutations if and only if the underlying graph is complete. We then introduce a generalized graph-based construction using a non-commutative two-qudit gate, denoted $GR$, which requires directed edges and an explicit vertex ordering. We show that, for an odd number of qudits, complete directed graphs endowed with appropriate orientations generate fully antisymmetric multipartite states. Together, these results provide a unified graph-theoretic description of bosonic and fermionic exchange symmetry based on graph completeness and edge orientation.
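The qubit special case of the first result is easy to check numerically. The sketch below builds CZ graph states for the complete graph K_3 and for a path graph and tests invariance under all qubit permutations; the paper's qudit and directed-graph constructions are not reproduced here.

```python
import numpy as np
from itertools import permutations, combinations

def graph_state(n, edges):
    """Amplitudes of the CZ graph state on n qubits for the given edge set."""
    amps = np.ones(2 ** n) / np.sqrt(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        for (i, j) in edges:
            if bits[i] and bits[j]:
                amps[idx] *= -1          # CZ phase on each edge
    return amps

def is_permutation_symmetric(psi, n):
    for perm in permutations(range(n)):
        permuted = np.empty_like(psi)
        for idx in range(2 ** n):
            bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
            new_idx = sum(bits[perm[q]] << (n - 1 - q) for q in range(n))
            permuted[new_idx] = psi[idx]
        if not np.allclose(permuted, psi):
            return False
    return True

n = 3
complete = list(combinations(range(n), 2))   # triangle K_3
path = [(0, 1), (1, 2)]                      # path graph, not complete

print("K_3 graph state symmetric :", is_permutation_symmetric(graph_state(n, complete), n))
print("path graph state symmetric:", is_permutation_symmetric(graph_state(n, path), n))
```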
Enhanced quantum state discrimination under general measurements with entanglement and nonorthogonality restrictions
This paper investigates how quantum state discrimination can achieve error rates below the theoretical Helstrom limit by using measurement strategies that go beyond standard positive operator-valued measurements. The authors show that such enhanced discrimination can be achieved even without initial entanglement between the system and auxiliary qubits.
Key Contributions
- Demonstrates that quantum state discrimination error can be reduced below the Helstrom bound using non-positive operator-valued measurements
- Shows that initial entanglement between system and auxiliary is not necessary for achieving sub-Helstrom discrimination, as product states can also enable enhanced measurements
View Full Abstract
The minimum error probability for distinguishing between two quantum states is bounded by the Helstrom limit, derived under the assumption that measurement strategies are restricted to positive operator-valued measurements. We explore scenarios in which the error probability for discriminating two quantum states can be reduced below the Helstrom bound under some constrained access to resources, indicating the use of measurement operations that go beyond the standard positive operator-valued measurements framework. We refer to such measurements as non-positive operator-valued measurements. While existing literature often associates these measurements with initial entanglement between the system and an auxiliary, followed by joint projective measurement and discarding the auxiliary, we demonstrate that initial entanglement between system and auxiliary is not necessary for the emergence of such measurements in the context of state discrimination. Interestingly, even initial product states can give rise to effective non-positive measurements on the subsystem, and achieve sub-Helstrom discrimination error when discriminating quantum states of the subsystem.
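For reference, the baseline the paper aims to beat is the Helstrom bound, P_err = (1 - ||p_0 rho_0 - p_1 rho_1||_1)/2. The short numpy check below evaluates it for two nonorthogonal qubit states and compares against the closed-form pure-state expression; the states and priors are arbitrary illustrative choices.

```python
import numpy as np

def dm(psi):
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

rho0 = dm([1, 0])                               # |0>
rho1 = dm([np.cos(0.3), np.sin(0.3)])           # a nonorthogonal pure state
p0 = p1 = 0.5

delta = p0 * rho0 - p1 * rho1
trace_norm = np.sum(np.abs(np.linalg.eigvalsh(delta)))   # sum of |eigenvalues|
p_err_helstrom = 0.5 * (1 - trace_norm)

overlap = abs(np.vdot([1, 0], [np.cos(0.3), np.sin(0.3)])) ** 2
p_err_formula = 0.5 * (1 - np.sqrt(1 - overlap))         # equal-prior pure-state formula

print(f"Helstrom error via trace norm : {p_err_helstrom:.4f}")
print(f"closed-form pure-state value  : {p_err_formula:.4f}")
```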
Simple broadband signal detection at the fundamental limit
This paper develops a theoretical framework for optimal broadband detection of weak signals with unknown frequencies, connecting quantum limits to geometric bounds. The authors propose an analog quantum sensing protocol using multi-resonant systems and demonstrate near-optimal performance scaling through simulations.
Key Contributions
- Establishes geometric connection between Grover-like integration-time bounds and quantum Fisher information limits for broadband signal detection
- Develops all-analog multi-resonant sensing protocol using randomized Su-Schrieffer-Heeger Hamiltonian with GHZ probe states
View Full Abstract
Broadband detection of a weak oscillatory field with unknown carrier frequency underlies magnetometry, axion searches and gravitational-wave sensing. We show that the Grover-like integration-time lower bound for this task is a geometric corollary of an upper bound on the integrated quantum Fisher information, a metrological constraint. We further give an all-analog multi-resonant protocol based on a randomized Su-Schrieffer-Heeger control Hamiltonian and an m-register GHZ probe and verify near-optimal scaling through simulation.
Resolving Gauge Ambiguities of the Berry Connection in Non-Hermitian Systems
This paper resolves mathematical ambiguities that arise when calculating Berry phases in non-Hermitian quantum systems by introducing a new covariant-derivative formalism that provides a unique, real-valued Berry connection. The work establishes a rigorous geometric framework for studying topological properties in quantum systems with non-Hermitian Hamiltonians.
Key Contributions
- Resolution of gauge ambiguities in Berry connections for non-Hermitian systems through covariant-derivative formalism
- Establishment of unique real-valued Berry connection that transforms consistently under gauge changes
- Unified geometric framework for Berry phases and topological invariants in non-Hermitian quantum systems
View Full Abstract
Non-Hermitian systems display spectral and topological phenomena absent in Hermitian physics; yet, their geometric characterization can be hindered by an intrinsic ambiguity rooted in the eigenspace of non-Hermitian Hamiltonians, which becomes especially pronounced in the pure quantum regime. Because left and right eigenvectors are not related by conjugation, their norms are not fixed, giving rise to a biorthogonal ${\rm GL}(N,{\mathbb C})$ gauge freedom. Consequently, the standard Berry connection admits four inequivalent definitions depending on how left and right eigenvectors are paired, giving rise to distinct Berry phases and generally complex-valued holonomies. Here we show that these ambiguities and the emergence of complex phases are fully resolved by introducing a covariant-derivative formalism built from the metric tensor of the Hilbert space of the underlying non-Hermitian Hamiltonian. The resulting unique Berry connection remains real-valued under an arbitrary ${\rm GL}(N,{\mathbb C})$ frame change, and transforms as an affine gauge potential, while reducing to the conventional Berry (or Wilczek-Zee) connection in the Hermitian limit. This establishes an unambiguous and gauge-consistent geometric framework for Berry phases, non-Abelian holonomies, and topological invariants in quantum systems described by non-Hermitian Hamiltonians.
Inter-branch message transfer on superconducting quantum processors: a multi-architecture benchmark
This paper benchmarks quantum message transfer protocols across different IBM superconducting quantum processors, testing how well these devices can transfer information between quantum circuit branches using up to 32 qubits. The researchers compare performance across multiple IBM architectures and analyze how different message types and circuit complexities affect success rates and noise characteristics.
Key Contributions
- Comprehensive benchmarking of inter-branch message transfer across multiple IBM quantum processor architectures
- Analysis of how message complexity and circuit depth affect quantum processor performance with circuits up to 32 qubits
- Open-source release of benchmarking data and analysis tools for quantum processor characterization
View Full Abstract
We treat inter-branch message transfer in a Wigner's-friend circuit as a practical benchmark for near-term superconducting quantum processors. Implementing Violaris' unitary message-transfer primitive, we compare performance across IBM Eagle, Nighthawk, and Heron (r2/r3) processors for message sizes up to $n=32$, without error mitigation. We study three message families -- sparse (one-hot), half-weight, and dense -- and measure conditional string success $p_{\mathrm{all}}=\Pr(P=μ\mid R=0)$, memory erasure after uncomputation, and correlation diagnostics (branch contrast and bitwise mutual information). The sparse family compiles to essentially constant two-qubit depth, yielding a depth-controlled probe of device noise: at $n=32$ we observe $p_{\mathrm{all}}$ spanning $\approx0.07$ to $\approx0.68$ across backends. In contrast, half and dense messages incur rapidly growing routing overhead, and transpiler-seed variability becomes a practical limitation near the coherence frontier. We further report an amplitude sweep (no-amplification test) and a divergence ``cousins'' sweep that quantifies degradation with branch-conditioned complexity. All data and figure-generation scripts are released.
A two-mode model for black hole evaporation and information flow
This paper develops a simplified two-oscillator model to study black hole evaporation by coupling a geometric degree of freedom with a Hawking radiation mode. The study shows how energy and information flow between these modes through numerical simulations, demonstrating periodic entropy growth and out-of-phase quantum exchange.
Key Contributions
- Development of analytical two-oscillator model for black hole evaporation with coupled harmonic oscillators
- Demonstration of periodic entanglement generation and out-of-phase quantum exchange in minimal framework
View Full Abstract
We develop and analyze a two-oscillator model for black hole evaporation in which an effective geometric degree of freedom and a representative Hawking radiation mode are described by coupled harmonic oscillators with opposite signs in their free Hamiltonians. The normal-mode structure is obtained analytically and the corresponding modal amplitudes determine the pattern of energy exchange between the two sectors. To bridge the discrete and semiclassical pictures, we introduce smooth envelope functions that provide a continuous effective description along the geometric variable. Numerical simulations in a truncated Fock space show that the two oscillators exchange quanta in an approximately out-of-phase manner, consistent with an effective conservation of $\langle n_x\rangle - \langle n_y\rangle$. The reduced entropy $S_x(t)$ exhibits periodic growth, indicating entanglement generation. These results demonstrate that even a minimal two-mode framework can capture key qualitative features of energy transfer and information flow during evaporation.
Quantum Light Detection with Enhanced Photonic Neural Network
This paper presents a hybrid quantum-classical system that combines photonic neural networks with quantum reservoir computing to create more accurate and robust quantum light sensors. The approach overcomes limitations of weak optical nonlinearities by integrating quantum processing with classical machine learning to improve detection of quantum states.
Key Contributions
- Hybrid quantum-classical detection protocol combining quantum reservoirs with analogue neural networks
- Demonstrated improved quantum state classification and tomography with small nonlinearity-to-loss ratios
- Practical approach for chip-scale photonic quantum sensors with reduced material requirements
View Full Abstract
Advances in quantum technologies are accelerating the demand for optical quantum state sensors that combine high precision, versatility, and scalability within a unified hardware platform. Quantum reservoir computing offers a powerful route toward this goal by exploiting the nonlinear dynamics of quantum systems to process and interpret quantum information efficiently. Photonic neural networks are particularly well suited for such implementations, owing to their intrinsic sensitivity to photon-encoded quantum information. However, the practical realisation of photonic quantum reservoirs remains constrained by the inherently weak optical nonlinearities of available materials and the technological challenges of fabricating densely coupled quantum networks. To address these limitations, we introduce a hybrid quantum-classical detection protocol that integrates the advantages of quantum reservoirs with the adaptive learning capabilities of analogue neural networks. This synergistic architecture substantially enhances information-extraction accuracy and robustness, enabling low-cost performance improvements of quantum light sensors. Based on the proposed approach, we achieved significant improvements in quantum state classification, tomography, and feature regression, even for reservoirs with a relatively small nonlinearity-to-losses ratio $U/γ\approx 0.02$ in a network of only five nodes. By reducing reliance on material nonlinearity and reservoir size, the proposed approach facilitates the practical deployment of high-fidelity photonic quantum sensors on existing integrated platforms, paving the way toward chip-scale quantum processors and photonic sensing technologies.
Optimally Driven Dressed Qubits
This paper introduces a new control protocol for dressed qubits that eliminates unwanted counter-rotating terms without requiring the rotating-wave approximation, leading to improvements in gate speed, fidelity, and coherence properties. The method uses only a single coupling axis and provides a general framework for optimizing dressed-qubit performance across multiple metrics.
Key Contributions
- Novel dressed-qubit control protocol that optimally removes counter-rotating terms without rotating-wave approximation
- Demonstrated improvements in single-qubit gate speed, two-qubit gate fidelity, spectroscopic range, and coherence preservation
- General parameterization and Floquet-based coherence-time expression for the control protocol
View Full Abstract
The applicability and performance of qubits dressed by classical fields are limited because their control protocols give rise to an undesired counter-rotating term (CRT). This in turn forces operation in a regime where a (dressed) rotating-wave approximation (RWA) is valid, thereby restricting key aspects of their operation. Here, using only a single coupling axis in the laboratory frame, we introduce a dressed-qubit control protocol that optimally removes the CRT, eliminating the need for the RWA and delivering substantial improvements in multiple performance metrics, including single-qubit gate speed, two-qubit gate fidelity, spectroscopic range, clock stability, and coherence preservation. In addition, we provide a general parameterization together with a Floquet-based coherence-time expression, which elucidates the protocol's working principles and lowers the barrier to adoption. Collectively, these advances position our scheme as the state-of-the-art strategy for qubit control, paving the way for a wider class of quantum technologies to be realized using dressed-qubit architectures.
Robust topological quantum state transfer with long-range interactions in Rydberg arrays
This paper develops methods for transferring quantum states between the ends of atomic chains using Rydberg atoms with long-range interactions. The authors show that these long-range interactions create topologically protected pathways that enable more efficient and robust quantum state transfer compared to systems with only nearest-neighbor interactions.
Key Contributions
- Development of theoretical framework for topological quantum state transfer in long-range interacting Rydberg arrays
- Demonstration that long-range couplings enhance energy gaps and improve transfer efficiency compared to nearest-neighbor models
- Proof of robustness against positional disorder due to topological protection
View Full Abstract
We develop a theoretical framework for fast, robust and high-fidelity topological quantum state transfer in one-dimensional systems with long-range couplings, motivated by chains of Rydberg atoms with dipole-dipole interactions. Such long-range interactions naturally give rise to extended Su-Schrieffer-Heeger and Rice-Mele models supporting topologically protected edge states. We show that these edge states enable high-fidelity edge-to-edge excitation transfer using both time-independent protocols, based on coherent edge state dynamics, and time-dependent protocols, based on adiabatic modulation of system parameters. Long-range couplings play a central role by enhancing the relevant energy gaps, leading to a substantial improvement in transfer efficiency compared to nearest neighbour models. The resulting transfer is robust against positional disorder, reflecting its topological origin and highlighting the potential of long-range interacting platforms for reliable quantum state transfer.
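A minimal single-excitation illustration of topological edge-to-edge transfer, here for a plain nearest-neighbor dimerized (SSH-type) chain rather than the dipolar long-range model the paper analyzes: an excitation launched on the left edge site tunnels to the right edge on a timescale set by the edge-state splitting. The chain length and couplings are illustrative.

```python
import numpy as np

N_cells, v, w = 6, 0.4, 1.0             # weak intracell bond v < w: topological phase
n = 2 * N_cells
H = np.zeros((n, n))
for i in range(n - 1):
    H[i, i + 1] = H[i + 1, i] = v if i % 2 == 0 else w   # alternating v, w couplings

evals, U = np.linalg.eigh(H)
psi0 = np.zeros(n)
psi0[0] = 1.0                           # excitation launched on the left edge site
c = U.T @ psi0                          # expansion in the eigenbasis

times = np.linspace(0, 1200, 3000)
pop_right = [abs((U[-1, :] * np.exp(-1j * evals * t)) @ c) ** 2 for t in times]
k = int(np.argmax(pop_right))

print(f"edge-state energies ~ ±{np.min(np.abs(evals)):.1e}, bulk gap ~ {w - v:.1f}")
print(f"max right-edge population {pop_right[k]:.3f} at t ~ {times[k]:.0f} (units of 1/w)")
```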
Approximate Decoherence, Recoherence and Records in Isolated Quantum Systems
This paper studies how quantum decoherence affects the formation of detectable records in isolated quantum systems, finding that the number of reliable records can be much smaller than possible events when decoherence is imperfect. The authors use random matrix models to reveal recoherence effects and provide insights into the Many Worlds Interpretation and Born's rule emergence.
Key Contributions
- Established asymptotic limits on reliable record formation under approximate decoherence conditions
- Revealed recoherence structures in quantum histories using numerically exact random matrix models
View Full Abstract
Using the framework of decoherent histories, we study which past events leave detectable records in isolated quantum systems under the realistic assumption that decoherence is approximate and not perfect. In the first part we establish -- asymptotically for a large class of (pseudo-)random histories -- that the number of reliable records can be much smaller than the number of possible events, depending on the degree of decoherence. In the second part we reveal a clear decoherence structure for long histories based on a numerically exact solution of a random matrix model that, as we argue, captures generic aspects of decoherence. We observe recoherence between histories with a small Hamming distance, for localized histories admitting a high purity Petz recovery state, and for maverick histories that are statistical outliers with respect to Born's rule. From the perspective of the Many Worlds Interpretation, the first part -- which views the self-location problem as a coherent version of quantum state discrimination -- reveals a "branch selection problem", and the second part sheds light on the emergence of Born's rule and the theory confirmation problem.
Orthogonally Constrained CASSCF Framework: Newton-Raphson Orbital Optimization and Nuclear Gradients
This paper develops an improved computational chemistry method called orthogonally constrained CASSCF that produces better molecular orbital calculations for multiple electronic states simultaneously. The authors add mathematical optimization techniques and the ability to optimize molecular geometries, showing improvements over existing methods when tested on simple molecules like LiH and water.
Key Contributions
- Development of Newton-Raphson orbital optimization scheme for OC-CASSCF with analytical gradient and Hessian expressions
- Implementation of analytical nuclear gradients enabling geometry optimization within the OC-CASSCF framework
View Full Abstract
In a recent work, we introduced the foundations of an orthogonally constrained complete active space self-consistent field (OC-CASSCF) framework that produces state-specific molecular orbitals for mutually orthogonal multiconfigurational electronic states. In the present study, we extend this approach by incorporating a Newton-Raphson orbital-optimization scheme, for which we derive analytical expressions of the orbital gradient and Hessian. Furthermore, we outline a practical route toward the evaluation of analytical nuclear gradients, enabling geometry optimizations within the OC-CASSCF formalism. Benchmark calculations on the three lowest singlet states of LiH and H$_2$O molecules demonstrate a systematic improvement as compared to conventional state-averaged CASSCF, even when using modestly sized active spaces.
Transversal gates of the ((3,3,2)) qutrit code and local symmetries of the absolutely maximally entangled state of four qutrits
This paper establishes a mathematical connection between absolutely maximally entangled (AME) states of four qutrits and quantum error correcting codes, proving they have equivalent symmetry structures. The authors use advanced algebraic techniques to find generators for both the transversal gates of the qutrit code and the local symmetries of the AME state.
Key Contributions
- Proved bijection between local unitary orbits of AME states and quantum error correcting codes for even n
- Found generators for transversal gates of qutrit codes and local symmetries of AME states using Vinberg theory
View Full Abstract
We provide a proof that there exists a bijection between local unitary (LU) orbits of absolutely maximally entangled (AME) states in $(\mathbb{C}^D)^{\otimes n}$ where $n$ is even, also known as perfect tensors, and LU orbits of $((n-1,D,n/2))_D$ quantum error correcting codes. Thus, by a result of Rather et al. (2023), the AME state of 4 qutrits and the pure $((3,3,2))_3$ qutrit code $\mathcal{C}$ are both unique up to the action of the LU group. We further explore the connection between the 4-qutrit AME state and the code $\mathcal{C}$ by showing that the group of transversal gates of $\mathcal{C}$ and the group of local symmetries of the AME state are closely related. Taking advantage of results from Vinberg's theory of graded Lie algebras, we find generators of both of these groups.
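The object at the center of the paper, the 4-qutrit AME state (perfect tensor), can be checked directly from its standard construction |AME> ~ sum_{i,j} |i, j, i+j, i+2j> with arithmetic mod 3: every two-party marginal is maximally mixed. The sketch verifies this; it does not touch the paper's results on LU orbits, transversal gates, or Vinberg theory.

```python
import numpy as np
from itertools import combinations

d, n = 3, 4
psi = np.zeros([d] * n, dtype=complex)
for i in range(d):
    for j in range(d):
        psi[i, j, (i + j) % d, (i + 2 * j) % d] = 1.0
psi /= np.linalg.norm(psi)

for pair in combinations(range(n), 2):
    keep = list(pair)
    trace_out = [k for k in range(n) if k not in keep]
    amp = np.transpose(psi, keep + trace_out).reshape(d ** 2, d ** 2)
    rho = amp @ amp.conj().T            # reduced density matrix of the kept pair
    maximally_mixed = np.allclose(rho, np.eye(d ** 2) / d ** 2)
    print(f"subsystem {pair}: maximally mixed marginal -> {maximally_mixed}")
```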
Broadcasting quantum nonlinearity in hybrid systems
This paper proposes a method to give linear quantum oscillators nonlinear capabilities by using light to mediate interactions with naturally nonlinear systems, effectively 'broadcasting' nonlinearity from one system to another. The authors suggest using optically levitated mechanical oscillators as the source of nonlinearity and demonstrate how this could enable universal quantum processing with linear systems.
Key Contributions
- Proposes a light-mediated method to broadcast nonlinear operations from inherently nonlinear systems to linear oscillators
- Demonstrates theoretical framework for achieving universal quantum processing with linear systems through externally introduced nonlinearity
- Provides specific implementation using optically levitated mechanical oscillators as nonlinearity sources
View Full Abstract
Linear oscillators contribute to most branches of contemporary quantum science. They have already successfully served as quantum sensors and memories, found applications in quantum communication, and hold promise for cluster-state-based quantum computing. To master universal quantum processing with linear oscillators, an unconditional nonlinear operation is required. We propose such an operation using light-mediated interaction with another system that possesses a nonlinearity equivalent to more than a quadratic potential. Such a potential grants access to a nonlinear operation that can be broadcast to the target linear system. The nonlinear character of the operation can be verified by observing adequate negative values of the target system's Wigner function and the squeezing of the variance of a certain nonlinear combination of the quadratures below the thresholds attainable by Gaussian states. We explicitly evaluate an optically levitated mechanical oscillator as a flexible source of nonlinearity for a proof-of-principle demonstration of the nonlinearity broadcasting to linear systems, for example, mechanical oscillators or macroscopic atomic spin ensembles.
Flux-tunable transmon incorporating a van der Waals superconductor via an Al/AlO$_x$/4Hb-TaS$_2$ Josephson junction
This paper demonstrates a new type of superconducting qubit (transmon) that incorporates a van der Waals superconductor (4Hb-TaS2) in its Josephson junction, replacing the standard aluminum-only design. The researchers successfully fabricated and characterized this hybrid device, showing it behaves as expected for a transmon qubit but with shorter coherence times and some unusual junction properties.
Key Contributions
- First demonstration of a flux-tunable transmon using van der Waals superconductor 4Hb-TaS2 in the Josephson junction
- Development of a robust fabrication process for integrating vdW superconductors with conventional aluminum-based superconducting circuits
- Characterization showing sub-microsecond T1 times and identification of discrepancies between expected and observed Josephson energies in hybrid junctions
View Full Abstract
Incorporating van der Waals (vdW) superconductors into Josephson elements extends circuit-QED beyond conventional Al/AlO$_x$/Al tunnel junctions and enables microwave probes of unconventional condensates and subgap excitations. In this work, we realize a flux-tunable transmon whose nonlinear inductive element is an Al/AlO$_x$/4Hb-TaS$_2$ Josephson junction. The tunnel barrier is formed by sequential deposition and full in-situ oxidation of ultrathin Al layers on an exfoliated 4Hb-TaS$_2$ flake, followed by deposition of a top Al electrode, yielding a robust, repeatable hybrid junction process compatible with standard transmon fabrication. Embedding the device in a three-dimensional copper cavity, we observe a SQUID-like flux-dependent spectrum that is quantitatively reproduced by a standard dressed transmon--cavity Hamiltonian, from which we extract parameters in the transmon regime. Across measured devices we obtain sub-microsecond energy relaxation ($T_1$ from $0.08$ to $0.69~μ$s), while Ramsey measurements indicate dephasing faster than our $16$ ns time resolution. We also find a pronounced discrepancy between the Josephson energy inferred from spectroscopy and that expected from the Ambegaokar--Baratoff relation using room-temperature junction resistances, pointing to nontrivial junction physics in the hybrid Al/AlO$_x$/4Hb-TaS$_2$ system. Although we do not resolve material-specific subgap modes in the present geometry, this work establishes a practical route to integrating 4Hb-TaS$_2$ into coherent quantum circuits and provides a baseline for future edge-sensitive designs aimed at enhancing coupling to boundary and subgap degrees of freedom in vdW superconductors.
Analytical solution of the Schrödinger equation with $1/r^3$ and attractive $1/r^2$ potentials: Universal three-body parameter of mixed-dimensional Efimov states
This paper provides analytical solutions to the Schrödinger equation with specific long-range potentials (1/r³ and 1/r²) and uses these solutions to study Efimov states in mixed-dimensional three-body systems involving polar molecules and light atoms. The work establishes universal parameters for these quantum few-body systems and validates the theoretical predictions with numerical calculations.
Key Contributions
- Analytical solutions for Schrödinger equation with 1/r³ and attractive 1/r² potentials using quantum defect theory
- Determination of universal three-body parameters for mixed-dimensional Efimov states with dipolar interactions
View Full Abstract
We study the Schrödinger equation with $1/r^3$ and attractive $1/r^2$ potentials. Using the quantum defect theory, we obtain analytical solutions for both repulsive and attractive $1/r^3$ interactions. The obtained discrete-scale-invariant energies and wave functions, validated by excellent agreement with numerical results, provide a natural framework for describing the universality of Efimov states in mixed dimension. Specifically, we consider a three-body system consisting of two heavy particles with large dipole moments confined to a quasi-one-dimensional geometry and resonantly interacting with an unconfined light particle. With the Born-Oppenheimer approximation, this system is effectively reduced to the Schrödinger equation with $1/r^3$ and $1/r^2$ potentials, and manifests the Efimov effect. Our analytical solution suggests that, for repulsive dipole interactions, the three-body parameter of the mixed-dimensional Efimov states is universally set by the dipolar length scale, whereas for attractive interactions it explicitly depends on the short-range phase. We also investigate the effects of finite transverse confinement and find that our analytical results are useful for describing the Efimov states composed of two polar molecules and a light atom.
Quantum Zeno-like Paradox for Position Measurements: A Particle Precisely Found in Space is Nowhere to be Found in Hilbert Space
This paper demonstrates that after a quantum particle's position is measured with perfect precision, the probability of detecting it in any fixed quantum state vanishes, suggesting that perfectly precise measurements produce states that lie outside the standard Hilbert-space framework of quantum mechanics.
Key Contributions
- Proves that perfect position measurements lead to quantum states unrepresentable by density matrices in Hilbert space
- Identifies a novel quantum Zeno-like paradox where measurement precision fundamentally alters quantum state structure
View Full Abstract
On a quantum particle in the unit interval $[0,1]$, perform a position measurement with inaccuracy $1/n$ and then a quantum measurement of the projection $|φ\rangle\langleφ|$ with some arbitrary but fixed normalized $φ$. Call the outcomes $X \in[0,1]$ and $Y \in\{0,1\}$. We show that in the limit $n\to\infty$ corresponding to perfect precision for $X$, the probability of $Y=1$ tends to 0 for every $φ$. Since there is no density matrix, pure or mixed, which upon measurement of any $|φ\rangle\langleφ|$ yields outcome 1 with probability 0, our result suggests that a novel type of quantum state beyond Hilbert space is necessary to describe a quantum particle after a perfect position measurement.
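A back-of-the-envelope numerical illustration of the statement, with the inaccuracy-1/n measurement modeled crudely as projecting the wavefunction onto a window of width 1/n and renormalizing: for a fixed smooth test state phi, Pr(Y = 1) shrinks roughly like 1/n. The initial state and test state below are arbitrary choices.

```python
import numpy as np

M = 20000                                # spatial grid points on [0, 1]
x = (np.arange(M) + 0.5) / M
dx = 1.0 / M

psi = np.sqrt(2) * np.sin(np.pi * x)     # initial wavefunction
phi = np.sqrt(2) * np.sin(2 * np.pi * x) * np.exp(1j * 3 * x)   # fixed test state
phi /= np.sqrt(np.sum(np.abs(phi) ** 2) * dx)

for n_bins in [10, 100, 1000]:
    edges = np.linspace(0, 1, n_bins + 1)
    prob_y1 = 0.0
    for k in range(n_bins):
        window = (x >= edges[k]) & (x < edges[k + 1])
        chunk = psi * window
        p_x = np.sum(np.abs(chunk) ** 2) * dx        # probability of this X outcome
        if p_x > 0:
            post = chunk / np.sqrt(p_x)              # normalized post-measurement state
            prob_y1 += p_x * abs(np.sum(phi.conj() * post) * dx) ** 2
    print(f"accuracy 1/{n_bins}:  Pr(Y=1) = {prob_y1:.5f}")
```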
Mikado strategy for the detection of atoms in images of microtrap arrays
This paper presents an improved computational method called the 'Mikado strategy' for detecting individual atoms in high-resolution microscopy images of microtrap arrays. The technique alternates between estimation and detection steps to better identify which traps contain atoms, particularly when the optical resolution makes individual sites difficult to distinguish.
Key Contributions
- Introduction of the Mikado strategy algorithm that alternates estimation and detection steps without requiring explicit posterior probability models
- Improved detection accuracy for poorly resolved optical sites in microtrap arrays compared to previous methods
View Full Abstract
Building on top of our recent work [arXiv:2502.08511], we introduce a new strategy to solve the problem of detecting atoms in high-resolution images of microtrap arrays. By alternating estimation and detection steps, we get rid of the need for an explicit model to compute the posterior occupancy probability of each site given its a priori optimal estimate. As direct benefits, we show an improved detection accuracy compared to our previous work when the sites are not optically well resolved, and we expect a greater robustness against real experimental conditions.
Nanomechanical sensor resolving impulsive forces below its zero-point fluctuations
This paper demonstrates a nanomechanical sensor based on an optically levitated nanoparticle that can resolve impulsive forces whose imparted momentum lies below the particle's zero-point fluctuations, by using reversible quantum squeezing to coherently amplify the weak signal.
Key Contributions
- Demonstration of force sensing below zero-point quantum fluctuations using squeezed states
- Achievement of 0.6 dB sub-quantum-limited sensitivity in a nanomechanical sensor
View Full Abstract
The sensitivity of a mechanical transducer is ultimately limited by its inherent quantum fluctuations. Here, we use an optically levitated nanoparticle to measure impulsive forces smaller than the particle's zero-point momentum uncertainty. Our approach relies on reversibly squeezing the levitated particle's center-of-mass motion to coherently amplify the perturbation. We demonstrate resolving single impulsive-force kicks as small as 6.9 keV/c, a value 0.6 dB below the sensor's zero-point value.
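A Gaussian phase-space toy model of the amplification strategy (illustrative numbers, not the experiment's parameters): squeeze the motional state, let a small momentum kick act, then undo the squeezing; the kick's mean displacement is amplified by e^r while the variances return to their zero-point values, so kicks below the zero-point spread become resolvable.

```python
import numpy as np

zp_var = 0.5                               # zero-point variance of x and p (hbar = m = omega = 1)
mean = np.array([0.0, 0.0])                # (x, p)
cov = np.diag([zp_var, zp_var])

def apply_symplectic(S, mean, cov):
    return S @ mean, S @ cov @ S.T

r = 1.5                                    # squeezing parameter
kick = 0.1 * np.sqrt(zp_var)               # momentum kick well below the zero-point spread

S_sq = np.diag([np.exp(r), np.exp(-r)])    # squeeze p, stretch x
S_un = np.diag([np.exp(-r), np.exp(r)])    # inverse squeeze

mean, cov = apply_symplectic(S_sq, mean, cov)
mean = mean + np.array([0.0, kick])        # impulsive force: displacement in p
mean, cov = apply_symplectic(S_un, mean, cov)

snr_direct = kick / np.sqrt(zp_var)
snr_protocol = mean[1] / np.sqrt(cov[1, 1])
print(f"kick / zero-point spread, no protocol      : {snr_direct:.3f}")
print(f"kick / spread after squeeze-kick-unsqueeze : {snr_protocol:.3f} (gain e^r = {np.exp(r):.2f})")
```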
Remote magnon-phonon entanglement in the waveguide-magnomechanics
This paper proposes a method to create quantum entanglement between magnetic excitations (magnons) and mechanical vibrations (phonons) over long distances using a waveguide system. The approach can generate various types of entanglement between multiple components and shows that certain dissipative interactions can produce stronger long-distance entanglement than traditional coherent couplings.
Key Contributions
- Development of protocol for remote magnon-phonon entanglement generation in hybrid waveguide-magnomechanical systems
- Demonstration that dissipative magnon-magnon interactions can generate stronger long-distance entanglement than coherent couplings
- Support for multiple entanglement configurations including genuine multimode and four-mode entanglement
View Full Abstract
Generating long-distance quantum entanglement is crucial for advancing quantum information processing. In this work, we propose a protocol for generating remote magnon-phonon entanglement in a hybrid waveguide-magnomechanical system, where multiple spatially separated magnon modes couple to a common waveguide while interacting with their respective phonon modes. By applying tailored pulsed drives and engineering the magnon-phonon interactions, our scheme enables the creation of diverse long-distance and dynamically stable entanglement. Beyond basic magnon-phonon two-mode entanglement, it supports genuine multimode entanglement between a single phonon and multiple magnons, bipartite entanglement between a single magnon and multiple phonons, as well as genuine four-mode entanglement involving two magnons and two phonons. Moreover, we show that dissipative magnon-magnon interactions mediated by traveling photons can generate substantially stronger long-distance entanglement than coherent couplings. Our work provides an experimentally feasible scheme for the remote generation of magnon-phonon entanglement.
Experimental High-Accuracy and Broadband Quantum Frequency Sensing via Geodesic Control
This paper demonstrates a new quantum frequency sensing technique using nitrogen-vacancy centers in diamond that can accurately measure oscillating signals across a very wide frequency range (megahertz to gigahertz) while avoiding errors from unwanted harmonic frequencies. The method achieves extremely precise frequency measurements down to millihertz resolution even in noisy conditions.
Key Contributions
- Experimental demonstration of geodesic control for broadband quantum frequency sensing with suppression of harmonic-induced systematic errors
- Achievement of millihertz-level frequency resolution in noisy environments using synchronized readout techniques
View Full Abstract
Accurate frequency estimation of oscillating signals over a broad bandwidth is a central task in quantum sensing, yet it is often compromised by spurious responses to higher-order harmonics in realistic multi-frequency environments. Here we experimentally demonstrate a high-accuracy and broadband quantum frequency sensing protocol based on geodesic control, implemented using the electron spin of a single nitrogen-vacancy center in diamond. By engineering an intrinsically single-frequency response, geodesic control enables bias-free frequency estimation with strong suppression of harmonic-induced systematic errors across a wide spectral range spanning from the megahertz to the gigahertz regime. Furthermore, by incorporating synchronized readout, we achieve millihertz-level frequency resolution under noisy signal conditions. Our results provide systematic experimental benchmarking of geodesic control for quantum frequency sensing and establish it as a practical approach for high-accuracy metrology in realistic environments.
Continuous-mode analysis of improved two-way CV-QKD
This paper analyzes an improved two-way quantum key distribution protocol that uses continuous variables, focusing on how real-world device imperfections affect system performance when optical fields operate in continuous-mode rather than ideal single-mode conditions. The researchers develop a security analysis framework that accounts for these practical limitations and finite-size effects.
Key Contributions
- Development of security analysis framework for continuous-mode CV-QKD with temporal modes characterization
- Integration of finite-size effects and adaptive normalization with calibrated shot-noise unit for practical implementations
View Full Abstract
Continuous-variable quantum key distribution (CV-QKD) enables information-theoretically secure key generation between legitimate parties. To further enhance system performance, an improved two-way CV-QKD protocol has been proposed, which is accessible in practice and exhibits increased robustness against excess noise. However, in practical implementations, device nonidealities inevitably drive the optical field from the single-mode regime into the continuous-mode regime. In this work, we introduce temporal modes to characterize the evolution of optical fields in the improved two-way protocol and establish a security analysis framework for the continuous-mode scenario based on adaptive normalization with a calibrated shot-noise unit. In addition, finite-size effects are taken into account in the analysis. Our results demonstrate that the improved two-way protocol retains a performance advantage over its one-way counterpart. The analysis provides useful guidance for the practical implementation and performance optimization of improved two-way CV-QKD systems.
Beyond Photon Shot Noise: Chemical Limits in Spectrophotometric Precision
This paper studies fundamental limits on how precisely we can measure chemical concentrations using light-based spectroscopy. The researchers show that chemical reactions themselves create noise that can limit measurement precision beyond the usual quantum shot noise from photons.
Key Contributions
- Development of Photon-resolved Floquet theory for analyzing higher-order measurement statistics in spectrophotometry
- Identification of three distinct sensitivity regimes: photon-shot-noise limited, chemically limited, and intermediate
- Demonstration that phase measurements provide superior sensitivity compared to intensity measurements in chemical spectroscopy
View Full Abstract
In this work, we investigate precision limitations in spectrophotometry (i.e., spectroscopic concentration measurements) imposed by chemical processes of molecules. Using the recently developed Photon-resolved Floquet theory, which generalizes Maxwell-Bloch theory for higher-order measurement statistics, we analyze a molecular model system subject to chemical reactions whose electronic and optical properties depend on the chemical state. Analysis of sensitivity bounds reveals: (i) Phase measurements are more sensitive than intensity measurements; (ii) Sensitivity exhibits three regimes: photon-shot-noise limited, chemically limited, and intermediate; (iii) Sensitivity shows a turnover as a function of reaction rate due to the interplay between coherent electronic dynamics and incoherent chemical dynamics. Our findings demonstrate that chemical properties must be considered to estimate ultimate precision limits in optical spectrophotometry.
Graphene Josephson Junctions for Engineering Motional Quanta
This paper proposes a hybrid quantum device that combines graphene's mechanical vibrations with superconducting circuits through Josephson junctions, enabling strong coupling between motion and electrical states. The device can generate non-classical mechanical states and enhance quantum sensing capabilities through controllable quantum interactions.
Key Contributions
- Novel hybrid quantum device architecture combining graphene mechanical motion with superconducting circuits
- Demonstration of parametric processes for generating non-classical mechanical states
- Enhanced quantum sensing capabilities through quantum control of motional degrees of freedom
View Full Abstract
We propose a hybrid quantum device based on the graphene Josephson junctions, where the vibrational degrees of freedom of a graphene membrane couple to the superconducting circuits. The flexural mode-controlled tunneling of the Cooper pairs introduces a strong and tunable coupling even at the zero-point fluctuations level. By employing this interaction, we show that a parametric process can be efficiently implemented. We then investigate foundational and technological applications of our hybrid device empowered by nonlinear interactions, with fast generation of non-classical mechanical states, and critically enhanced quantum sensing under suitable quantum control. Our work provides the possibility of employing the graphene motional degree of freedom for quantum information processing in circuit quantum nanomechanical structures.
Broadband Heterodyne Microwave Detection using Rydberg Atoms with High Sensitivity
This paper develops a Rydberg atom-based sensor for detecting microwave electric fields with exceptional sensitivity (sub-μV/cm/√Hz) and broad bandwidth (up to 3 GHz). The researchers use quantum spectroscopic methods to simultaneously measure both microwave frequency and field strength, achieving a 90 dB dynamic range for precision electric field measurements.
Key Contributions
- Development of dual-tone heterodyne detection method using Rydberg atoms achieving sub-μV/cm/√Hz sensitivity
- Demonstration of broadband microwave sensing up to 3 GHz with 90 dB dynamic range
- Systematic characterization of optimal operating conditions balancing power broadening and sensitivity
View Full Abstract
We present a Rydberg atom-based microwave electric field sensor that achieves extended dynamic range and enhanced sensitivity across a broad bandwidth. By characterizing the Autler-Townes (AT) splitting induced by a single-tone microwave field, we demonstrate a spectroscopic method that simultaneously extracts both the microwave frequency and electric field strength directly from the splitting pattern. We implement dual-tone heterodyne detection, achieving a minimum detectable field strength on the order of μV/cm and a sensitivity in the sub-μV/cm/√Hz regime, while extending the operational bandwidth up to 3 GHz. Through systematic characterization of frequency and power dependencies, we identify optimal operating conditions to minimize power broadening in the resonant AT regime and maximize sensitivity in the far-off-resonance AC Stark regime. The resulting platform combines high sensitivity, broad bandwidth, and a dynamic range of approximately 90 dB, establishing Rydberg atoms as practical sensors for precision electric field metrology.
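For orientation, the textbook Autler-Townes electrometry relation underlying resonant Rydberg sensing is E = ħ·Ω_AT/d. The sketch below evaluates it with placeholder numbers; the dipole moment and splitting are assumptions, not values from the paper, and the sub-μV/cm sensitivities quoted above come from heterodyne beat-note readout rather than from directly resolving a splitting:

```python
# Textbook AT-splitting electrometry, E = hbar * Omega_AT / d, with hypothetical
# placeholder numbers (not taken from the paper).
import math

hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C
a0 = 5.29177210903e-11   # m

d = 1000 * e * a0                # assumed Rydberg microwave transition dipole (~1000 e*a0)
omega_AT = 2 * math.pi * 10e6    # assumed 10 MHz Autler-Townes splitting

E_V_per_m = hbar * omega_AT / d
print(f"E ~ {E_V_per_m:.2f} V/m = {E_V_per_m * 1e4:.0f} uV/cm")
# Directly resolving a splitting bottoms out at far larger fields than the
# sub-uV/cm heterodyne sensitivity reported above.
```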
Universal Operational Privacy in Distributed Quantum Sensing
This paper develops a new privacy framework for distributed quantum sensing networks where multiple parties measure different parameters using shared quantum resources while keeping their individual measurements private from potentially untrusted servers. The researchers demonstrate both theoretical privacy conditions and experimental validation using fewer photons than parameters being measured.
Key Contributions
- Introduction of universal operational privacy framework based on classical Fisher information matrix for distributed quantum sensing
- Experimental demonstration of privacy-preserving quantum sensing protocol achieving Heisenberg-limited precision with fewer photons than estimated parameters
View Full Abstract
Privacy is a fundamental requirement in distributed quantum sensing networks, where multiple clients estimate spatially distributed parameters using shared quantum resources while interacting with potentially untrusted servers. Despite its importance, existing privacy conditions rely on idealized quantum bounds and do not fully capture the operational constraints imposed by realistic measurements. Here, we introduce a universal operational privacy framework for distributed quantum sensing, formulated in terms of the experimentally accessible classical Fisher information matrix and applicable to arbitrary protocols characterized by singular information structures. The proposed condition provides a protocol-independent criterion ensuring that no information about individual parameters is accessible to untrusted parties. We further experimentally demonstrate that a distributed quantum sensing protocol employing fewer photons than the number of estimated parameters simultaneously satisfies the universal privacy condition and achieves Heisenberg-limited precision. Our results establish universal operational constraints governing privacy in distributed quantum sensing networks and provide a foundation for practical, privacy-preserving quantum sensing beyond full-rank regimes.
Analytical construction of $(n, n-1)$ quantum random access codes saturating the conjectured bound
This paper develops an analytical method to construct optimal quantum random access codes that encode n bits of classical information into n-1 qubits, achieving the theoretically conjectured maximum success probability. The authors provide explicit formulas and efficient quantum circuit implementations for these codes.
Key Contributions
- Analytical construction method for (n,n-1)-QRACs achieving optimal success probability
- Systematic algorithm to decompose optimal POVM into implementable quantum gates
- Circuit implementation with O(n) depth under linear connectivity constraints
- Analysis of information-theoretic gaps in high-dimensional limit
View Full Abstract
Quantum Random Access Codes (QRACs) embody the fundamental trade-off between the compressibility of information into limited quantum resources and the accessibility of that information, serving as a cornerstone of quantum communication and computation. In particular, the $(n, n-1)$-QRACs, which encode $n$ bits of classical information into $n-1$ qubits, provide an ideal theoretical model for verifying quantum advantage in high-dimensional spaces; however, the analytical derivation of optimal codes for general $n$ has remained an open problem. In this paper, we establish an analytical construction method for $(n, n-1)$-QRACs by using an explicit operator formalism. We prove that this construction strictly achieves the numerically conjectured upper bound of the average success probability, $\mathcal{P} = 1/2 + \sqrt{(n-1)/n}/2$, for all $n$. Furthermore, we present a systematic algorithm to decompose the derived optimal POVM into standard quantum gates. Since the resulting decoding circuit consists solely of interactions between adjacent qubits, it can be implemented with a circuit depth of $O(n)$ even under linear connectivity constraints. Additionally, we analyze the high-dimensional limit and demonstrate that while the non-commutativity of measurements is suppressed, an information-theoretic gap of $O(\log n)$ from the Holevo bound inevitably arises for symmetric encoding. This study not only provides a scalable implementation method for high-dimensional quantum information processing but also offers new insights into the mathematical structure at the quantum-classical boundary.
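A quick numerical check of the closed-form success probability quoted above (this only evaluates the formula; the construction and POVM decomposition are the paper's contribution):

```python
# Evaluate the conjectured optimal success probability that the paper's
# analytical (n, n-1)-QRAC construction is proved to attain:
#   P(n) = 1/2 + sqrt((n-1)/n) / 2
# For n = 2 this reproduces the familiar (2,1)-QRAC value 1/2 + 1/(2*sqrt(2)).
import math

def qrac_success(n: int) -> float:
    return 0.5 + 0.5 * math.sqrt((n - 1) / n)

for n in (2, 3, 4, 8, 16, 64):
    print(f"n = {n:3d}: P = {qrac_success(n):.6f}")
# P approaches 1 as n grows.
```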
Quantum simulation of the nonlinear Schrödinger equation via measurement-induced potential reconstruction
This paper proposes a hybrid quantum-classical algorithm to simulate the nonlinear Schrödinger equation on quantum computers by combining the split-step Fourier method with measurement-based potential reconstruction and quantum phase kickback techniques. The authors validate their approach by simulating wave packet evolution, solitons, and fluid flow dynamics, showing good agreement with classical solutions.
Key Contributions
- Novel hybrid quantum-classical framework for simulating nonlinear partial differential equations
- Measurement-based reconstruction technique for implementing nonlinear potentials on quantum circuits
- Demonstration of quantum simulation accuracy for complex wave dynamics including solitons and fluid flows
View Full Abstract
The nonlinear Schrödinger equation (NLSE) is a fundamental model that describes diverse complex phenomena in nature. However, simulating the NLSE on a quantum computer is inherently challenging due to the presence of the nonlinear term. We propose a hybrid quantum-classical framework for simulating the NLSE based on the split-step Fourier method. During the linear propagation step, we apply the kinetic evolution operator to generate an intermediate quantum state. Subsequently, the Hadamard test is employed to measure the Fourier components of low-wavenumber modes, enabling the efficient reconstruction of nonlinear potentials. The phase transformation corresponding to the reconstructed potential is then implemented via a quantum circuit using the phase kickback technique. To validate the efficacy of the proposed algorithm, we numerically simulate the evolution of a Gaussian wave packet, a soliton wave, and the wake flow past a cylinder. The simulation results demonstrate excellent agreement with the corresponding classical solutions. This work provides a concrete basis for analyzing accuracy-cost trade-offs in quantum-classical simulations of nonlinear dispersive wave dynamics.
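For reference, here is a purely classical split-step Fourier integrator for the 1D NLSE, i.e. the backbone that the hybrid algorithm mirrors (kinetic step in Fourier space, nonlinear phase in position space); the quantum version replaces these steps with circuits, Hadamard-test readout, and phase kickback, which this sketch does not attempt:

```python
# Classical split-step Fourier reference for i dpsi/dt = -(1/2) psi_xx + g|psi|^2 psi.
import numpy as np

N, L, g, dt, steps = 256, 40.0, -1.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

psi = 1.0 / np.cosh(x)                              # bright soliton for g = -1

kinetic_half = np.exp(-0.5j * (k ** 2) * dt / 2)    # half kinetic step in Fourier space
for _ in range(steps):
    psi = np.fft.ifft(kinetic_half * np.fft.fft(psi))
    psi = psi * np.exp(-1j * g * np.abs(psi) ** 2 * dt)   # nonlinear phase step
    psi = np.fft.ifft(kinetic_half * np.fft.fft(psi))

# The soliton profile and norm should be preserved (norm ~ 2 for sech(x)).
print("norm after evolution:", round(float(np.sum(np.abs(psi) ** 2) * (L / N)), 6))
```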
The strong converse exponent of composable randomness extraction against quantum side information
This paper finds a tight mathematical bound for how efficiently quantum random numbers can be extracted when an adversary has quantum side information. The work establishes the first precise operational interpretation of club-sandwiched conditional entropy in quantum settings, providing theoretical foundations for quantum cryptographic protocols.
Key Contributions
- Tight characterization of strong converse exponent for quantum randomness extraction using composable error criteria
- First precise operational interpretation of club-sandwiched conditional entropy in quantum settings
View Full Abstract
We find a tight characterization of the strong converse exponent for randomness extraction against quantum side information. In contrast to previous tight bounds, we employ a composable error criterion given by the fidelity (or purified distance) to a uniform distribution in product with the marginal state. The characterization is in terms of a club-sandwiched conditional entropy recently introduced by Rubboli, Goodarzi and Tomamichel and used by Li, Li and Yu to establish the strong converse exponent for the case of classical side information. This provides the first precise operational interpretation of this family of conditional entropies in the quantum setting.
The complexity of semidefinite programs for testing $k$-block-positivity
This paper analyzes the computational complexity of algorithms that test whether quantum states have a mathematical property called k-block-positivity. The authors derive explicit formulas for how difficult these tests are by connecting the problem to symmetry reduction techniques and representation theory of unitary groups.
Key Contributions
- Explicit complexity formula for k-block-positivity testing algorithms using symmetry reduction
- Mathematical explanation for why semidefinite program hierarchy collapses in the k=d case
View Full Abstract
We extend \cite{chen2025srkbp} by analyzing the complexity of the $k$-block-positivity testing algorithm. In this paper, we investigate a symmetry reduction scheme based on rectangular shaped Young diagrams. Connecting the complexity to the dimensions of irreducible representations of $\mathrm{U}(d)$, we derive an explicit formula for the complexity, which also clarifies why the semidefinite program hierarchy collapses in the $k=d$ case.
Evolution of quantum geometric tensor of 1D periodic systems after a quench
This paper studies how the quantum geometric tensor evolves in one-dimensional periodic systems after a sudden change in the Hamiltonian. The researchers show that different components of this tensor reveal information about position variance, energy variance, and quench-induced effects, demonstrating its potential as a comprehensive tool for probing non-equilibrium quantum phenomena.
Key Contributions
- Theoretical framework for post-quench evolution of quantum geometric tensor in 1D periodic systems
- Demonstration that QGT components encode physical observables like position variance, energy variance, and geometric properties
- Numerical validation using Su-Schrieffer-Heeger model showing QGT as comprehensive probe for nonequilibrium phenomena
View Full Abstract
We investigate the post-quench dynamics of the quantum geometric tensor (QGT) of 1D periodic systems with a suddenly changed Hamiltonian. The diagonal component with respect to the crystal momentum gives a metric corresponding to the variance of the time-evolved position, and its coefficient of the quadratic term in time is the group-velocity variance, signaling ballistic wavepacket dispersion. The other diagonal QGT component with respect to time reveals the energy variance. The off-diagonal QGT component features a real part as a covariance and an imaginary part representing a quench-induced curvature. Using the Su-Schrieffer-Heeger (SSH) model as an example, our numerical results of different quenches confirm that the post-quench QGT is governed by physical quantities and local geometric objects from the initial state and post-quench bands, such as the Berry connection, group velocities, and energy variance. Furthermore, the connections between the QGT and physical observables suggest the QGT as a comprehensive probe for nonequilibrium phenomena.
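For orientation, the standard definition of the QGT for a family of normalized states $|u(\lambda)\rangle$, and its split into quantum metric and Berry curvature, is given below; the paper's post-quench object additionally treats time as one of the parameters alongside the crystal momentum:

```latex
% Standard QGT for a parametrized family |u(lambda)>; the paper extends the
% parameter set to (crystal momentum, time) after the quench.
Q_{\mu\nu}(\lambda)
  = \big\langle \partial_{\mu} u \,\big|\, \big(\mathbb{1} - |u\rangle\langle u|\big) \,\big|\, \partial_{\nu} u \big\rangle
  = g_{\mu\nu}(\lambda) - \tfrac{i}{2}\, F_{\mu\nu}(\lambda),
```

with $g_{\mu\nu} = \operatorname{Re} Q_{\mu\nu}$ the quantum metric and $F_{\mu\nu} = -2\operatorname{Im} Q_{\mu\nu}$ the Berry curvature.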
How Entanglement Reshapes the Geometry of Quantum Differential Privacy
This paper investigates how quantum entanglement affects privacy guarantees in quantum information systems, discovering that entanglement can actually enhance privacy protection through a phase-transition phenomenon. The researchers find that above a certain entanglement threshold, higher entanglement levels improve privacy guarantees and can even make non-private quantum mechanisms become private.
Key Contributions
- Discovery of a sharp phase-transition phenomenon where entanglement entropy affects quantum differential privacy in a nonlinear way
- Demonstration that entanglement serves as a privacy-enhancing resource that can improve or enable privacy guarantees in quantum protocols
View Full Abstract
Quantum differential privacy provides a rigorous framework for quantifying privacy guarantees in quantum information processing. While classical correlations are typically regarded as adversarial to privacy, the role of their quantum analogue, entanglement, is not well understood. In this work, we investigate how quantum entanglement fundamentally shapes quantum local differential privacy (QLDP). We consider a bipartite quantum system whose input state has a prescribed level of entanglement, characterized by a lower bound on the entanglement entropy. Each subsystem is then processed by a local quantum mechanism and measured using local operations only, ensuring that no additional entanglement is generated during the process. Our main result reveals a sharp phase-transition phenomenon in the relation between entanglement and QLDP: below a mechanism-dependent entropy threshold, the optimal privacy leakage level mirrors that of unentangled inputs; beyond this threshold, the privacy leakage level decreases with the entropy, which strictly improves privacy guarantees and can even turn some non-private mechanisms into private ones. The phase-transition phenomenon gives rise to a nonlinear dependence of the privacy leakage level on the entanglement entropy, even though the underlying quantum mechanisms and measurements are linear. We show that the transition is governed by the intrinsic non-convex geometry of the set of entanglement-constrained quantum states, which we parametrize as a smooth manifold and analyze via Riemannian optimization. Our findings demonstrate that entanglement serves as a genuine privacy-enhancing resource, offering a geometric and operational foundation for designing robust privacy-preserving quantum protocols.
Introduction to Quantum Entanglement Geometry
This paper presents a geometric approach to understanding quantum entanglement in many-body systems, using advanced mathematical tools like Azumaya algebras and Severi-Brauer schemes to characterize when entanglement structures can exist globally across parameter spaces. The authors show that geometric obstructions to global entanglement decomposition appear as Brauer classes, and demonstrate how geometric holonomy can produce entangling quantum gates.
Key Contributions
- Development of geometric framework for understanding entanglement using Azumaya algebras and Severi-Brauer schemes
- Characterization of global obstructions to entanglement decomposition through Brauer classes
- Demonstration that geometric holonomy can generate entangling quantum gates
View Full Abstract
This article is an expository account aimed at viewing entanglement in finite-dimensional quantum many-body systems as a phenomenon of global geometry. While the mathematics of general quantum states has been studied extensively, this article focuses specifically on their entanglement. When a quantum system varies over a classical parameter space, each fiber may look like the same Hilbert space, yet there may be no global identification because of twisting in the gluing data. Describing this situation by an Azumaya algebra, one always obtains the family of pure-state spaces as a Severi-Brauer scheme. The main focus is to characterize the condition under which the subsystem decomposition required to define entanglement exists globally and compatibly, by a reduction to the stabilizer subgroup of the Segre variety, and to explain that the obstruction appears in the Brauer class. As a consequence, quantum states yield a natural filtration dictated by entanglement on the Severi-Brauer scheme. Using a spin system on a torus as an example, we show concretely that the holonomy of the gluing can produce an entangling quantum gate, and can appear as an obstruction class distinct from the usual Berry numbers or Chern numbers. For instance, even for quantum systems that have traditionally been regarded as having no topological band structure, the entanglement of their eigenstates can be related to global geometric universal quantities, reflecting the background geometry.
Inverse-Squeezing Kennedy Receiver for Near-Helstrom Discrimination of Displaced-Squeezed BPSK
This paper develops an improved quantum receiver called the Inverse-squeezing Kennedy receiver that can better distinguish between binary phase-shift keyed signals made from squeezed quantum states of light. The receiver converts these squeezed signals into easier-to-detect coherent states and achieves near-optimal performance, getting close to fundamental quantum limits for signal discrimination.
Key Contributions
- Development of the Inverse-squeezing Kennedy receiver architecture that approaches the Helstrom bound for discriminating displaced-squeezed BPSK signals
- Demonstration that squeezing resources at the transmitter can be converted to displacement gain at the receiver, achieving sub-1% error rates in low-photon regimes
- Analysis of robustness against non-ideal conditions including dark counts and squeezing parameter mismatch
View Full Abstract
To address the discrimination problem of binary phase-shift keyed displaced squeezed vacuum states (S-BPSK), this paper proposes an Inverse-squeezing Kennedy (IS-Kennedy) receiver. This architecture incorporates an inverse-squeezing operator following the displacement operation of a conventional Kennedy receiver, mapping the S-BPSK signals onto equivalent large-amplitude coherent states. Furthermore, it employs a photon-number-resolving (PNR) detector to perform maximum a posteriori (MAP) decision-making. Theoretical analysis demonstrates that, under ideal conditions, the IS-Kennedy receiver effectively translates the transmitter's squeezing resources into a displacement gain at the receiver. Consequently, its error probability approaches the Helstrom bound across the entire energy spectrum, remaining within a constant factor of 3 dB. In the low-photon-number regime ($N \approx 0.6$), the proposed scheme surpasses the coherent-state limit, achieving an error rate below 1\%. Furthermore, this paper provides an in-depth analysis of system performance under non-ideal conditions, revealing the robustness of PNR detection against background dark counts and a characteristic ``parity photon-number step'' saturation effect arising from squeezing parameter mismatch.
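For context, the standard coherent-state BPSK baselines (not the paper's displaced-squeezed analysis) are easy to evaluate; since the IS-Kennedy receiver maps the squeezed signals onto larger-amplitude coherent states, these are the formulas its performance is ultimately compared against:

```python
# Coherent-state BPSK baselines for |+alpha> vs |-alpha>, equal priors:
#   overlap   |<alpha|-alpha>|^2 = exp(-4 N),  N = |alpha|^2
#   Helstrom  P_H = (1 - sqrt(1 - exp(-4 N))) / 2
#   Kennedy   P_K = exp(-4 N) / 2   (exact nulling of one hypothesis, then photon counting)
import math

def helstrom(N: float) -> float:
    return 0.5 * (1.0 - math.sqrt(1.0 - math.exp(-4.0 * N)))

def kennedy(N: float) -> float:
    return 0.5 * math.exp(-4.0 * N)

for N in (0.2, 0.6, 1.0, 2.0):
    print(f"N = {N:.1f}: Helstrom = {helstrom(N):.3e}, Kennedy = {kennedy(N):.3e}")
```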
C2NP: A Benchmark for Learning Scale-Dependent Geometric Invariances in 3D Materials Generation
This paper introduces C2NP, a benchmark for testing whether AI models that generate materials can properly handle the transition from infinite crystal structures to finite nanoparticles. The researchers found that current state-of-the-art generative models fail to capture the geometric and physical principles needed for this scale transition, suggesting they memorize templates rather than learning true physical relationships.
Key Contributions
- Created C2NP benchmark with 170,000+ nanoparticle configurations for evaluating scale-dependent generalization in materials generation
- Demonstrated that state-of-the-art generative models fail at geometric generalization across crystal-to-nanoparticle transitions despite low training losses
View Full Abstract
Generative models for materials have achieved strong performance on periodic bulk crystals, yet their ability to generalize across scale transitions to finite nanostructures remains largely untested. We introduce Crystal-to-Nanoparticle (C2NP), a systematic benchmark for evaluating generative models when moving between infinite crystalline unit cells and finite nanoparticles, where surface effects and size-dependent distortions dominate. C2NP defines two complementary tasks: (i) generating nanoparticles of specified radii from periodic unit cells, testing whether models capture surface truncation and geometric constraints; and (ii) recovering bulk lattice parameters and space-group symmetry from finite particle configurations, assessing whether models can infer underlying crystallographic order despite surface perturbations. Using diverse materials as a structurally consistent testbed, we construct over 170,000 nanoparticle configurations by carving particles from supercells derived from DFT-relaxed crystal unit cells, and introduce size-based splits that separate interpolation from extrapolation regimes. Experiments with state-of-the-art approaches, including diffusion, flow-matching, and variational models, show that even when losses are low, models often fail geometrically under distribution shift, yielding large lattice-recovery errors and near-zero joint accuracy on structure and symmetry. Overall, our results suggest that current methods rely on template memorization rather than scalable physical generalization. C2NP offers a controlled, reproducible framework for diagnosing these failures, with immediate applications to nanoparticle catalyst design, nanostructured hydrides for hydrogen storage, and materials discovery. Dataset and code are available at https://github.com/KurbanIntelligenceLab/C2NP.
The cost of quantum algorithms for biochemistry: A case study in metaphosphate hydrolysis
This paper evaluates the quantum computing resources needed to simulate ATP/metaphosphate hydrolysis, a crucial biological reaction, comparing three quantum algorithms (VQE, quantum Krylov, and quantum phase estimation) to determine which approaches are most feasible for current and near-future quantum devices.
Key Contributions
- Comprehensive resource estimation for quantum simulation of biochemically important ATP/metaphosphate hydrolysis reaction
- Comparative analysis of three major quantum algorithms (VQE, quantum Krylov, QPE) for molecular ground state problems
- Open-source benchmark dataset of biomolecular Hamiltonians for future quantum algorithm development
View Full Abstract
We evaluate the quantum resource requirements for ATP/metaphosphate hydrolysis, one of the most important reactions in all of biology with implications for metabolism, cellular signaling, and cancer therapeutics. In particular, we consider three algorithms for solving the ground state energy estimation problem: the variational quantum eigensolver, quantum Krylov, and quantum phase estimation. By utilizing exact classical simulation, numerical estimation, and analytical bounds, we provide a current and future outlook for using quantum computers to solve impactful biochemical and biological problems. Our results show that variational methods, while being the most heuristic, still require substantially fewer overall resources on quantum hardware, and could feasibly address such problems on current or near-future devices. We include our complete dataset of biomolecular Hamiltonians and code as benchmarks to improve upon with future techniques.
Frequency- and time-resolved second order quantum coherence function of IDTBT single-molecule fluorescence
This paper develops a new quantum light spectroscopy technique that can measure quantum coherence properties of single molecules by analyzing their fluorescence at different frequencies and times. The researchers demonstrate this technique on IDTBT polymer molecules and observe signatures that suggest quantum coherence effects in the molecule's excited states.
Key Contributions
- Development of single-molecule fluorescence g(2)(τ) quantum light spectroscopy (SMFg2-QLS) technique
- First experimental demonstration of frequency- and time-resolved quantum coherence measurements in single IDTBT molecules
View Full Abstract
The frequency- and time-resolved second order quantum coherence function (g(2)(τ)) of single-molecule fluorescence has recently been proposed as a powerful new quantum light spectroscopy that can reveal intrinsic quantum coherence in excitation energy transfer in molecular systems ranging from simple dimers to photosynthetic complexes. Yet, no experiments have been reported to date. Here, we have developed a single-molecule fluorescence g(2)(τ) quantum light spectroscopy (SMFg2-QLS) that can simultaneously measure the fluorescence intensity, lifetime, spectra, and g(2)(τ) with frequency resolution, for a single molecule in a controlled environment at both room temperature and cryogenic temperature. As a proof of principle, we have studied single molecules of IDTBT (indacenodithiophene-co-benzothiadiazole), a semiconducting donor-acceptor conjugated copolymer with a chain-like structure that shows a high carrier mobility and annihilation-limited long-range exciton transport. We have observed different g(2)(τ=0) values with different bands or bandwidths of the single molecule IDTBT fluorescence. The general features are consistent with theoretical predictions and suggest non-trivial excited state quantum dynamics, possibly showing quantum coherence, although further analysis and confirmation will require additional theoretical calculations that take into account the complexity and inhomogeneity of individual IDTBT single molecular chains. Our results demonstrate the feasibility and promise of frequency- and time-resolved SMFg2-QLS to provide new insights into molecular quantum dynamics and to reveal signatures of intrinsic quantum coherence in photosynthetic light harvesting that are independent of the nature of the light excitation.
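For reference, the familiar frequency-unresolved second-order coherence is defined below; the paper's SMFg2-QLS additionally resolves the two detection events in frequency via spectral filtering, which is what yields the band-dependent $g^{(2)}(0)$ values mentioned above:

```latex
% Standard (frequency-unresolved) definition; the paper's measurement adds
% spectral filtering of each detection arm before correlating.
g^{(2)}(\tau) =
  \frac{\langle a^{\dagger}(t)\, a^{\dagger}(t+\tau)\, a(t+\tau)\, a(t) \rangle}
       {\langle a^{\dagger}(t)\, a(t) \rangle\,\langle a^{\dagger}(t+\tau)\, a(t+\tau) \rangle},
```

where $g^{(2)}(0) < 0.5$ is the usual signature that the emission is dominated by a single quantum emitter.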
Qubit-qudit entanglement transfer in defect centers with high-spin nuclei
This paper proposes a method for transferring quantum entanglement from electron spins to nuclear spins in defect centers, enabling the creation of long-lived quantum memory systems using high-dimensional quantum states (qudits). The scheme uses naturally occurring hyperfine interactions to repeatedly transfer entanglement without requiring active control of the nuclear spins.
Key Contributions
- Development of entanglement transfer scheme from electron qubits to nuclear qudits using Ising hyperfine interactions
- Demonstration that maximal entanglement can be generated deterministically when qudit dimension is a power of two
- Application to germanium vacancy centers in diamond as a practical quantum memory system
View Full Abstract
We propose a scheme for accumulating entanglement between long-lived qudits provided by central nuclear spins of defect centers. Assuming a generic setting, the electron spin of each node acts as the communication qubit and may be entangled with other nodes, e.g., through a spin-photon interface. The generally available Ising component of the hyperfine interaction is shown to facilitate repeated entanglement transfer onto memory qudits of arbitrary dimension $d\le 2I+1$ with $I$ the nuclear spin quantum number. When $d$ is set to an integer power of two, maximal entanglement can be generated deterministically and without intermittent driving of nuclear spins. The scheme is applicable to several candidate systems, including the $^{73}$Ge germanium vacancy in diamond.
Hamiltonian Decoded Quantum Interferometry for General Pauli Hamiltonians
This paper develops new quantum algorithms for preparing special quantum states related to general Pauli Hamiltonians using a technique called Hamiltonian Decoded Quantum Interferometry (HDQI). The work extends previous methods to work with a much broader class of quantum systems and shows these algorithms can approximate important thermal states (Gibbs states) used in quantum optimization.
Key Contributions
- Extension of HDQI beyond stabilizer Hamiltonians to general Pauli Hamiltonians
- Development of efficient quantum algorithms for preparing polynomial matrix function states
- Demonstration of robustness to decoding imperfections
- New method for Gibbs state preparation and Hamiltonian optimization
View Full Abstract
In this work, we study the Hamiltonian Decoded Quantum Interferometry (HDQI) for the general Hamiltonians $H=\sum_ic_iP_i$ on an $n$-qubit system, where the coefficients $c_i\in \mathbb{R}$ and $P_i$ are Pauli operators. We show that, given access to an appropriate decoding oracle, there exist efficient quantum algorithms for preparing the state $\rho_{\mathcal P}(H) = \frac{\mathcal P^2(H)}{\text{Tr}[\mathcal P^2(H)]}$, where $\mathcal P(H)$ denotes the matrix function induced by a univariate polynomial $\mathcal P(x)$. Such states can be used to approximate the Gibbs states of $H$ for suitable choices of polynomials. We further demonstrate that the proposed algorithms are robust to imperfections in the decoding procedure. Our results substantially extend the scope of HDQI beyond stabilizer-like Hamiltonians, providing a method for Gibbs-state preparation and Hamiltonian optimization in a broad class of physically and computationally relevant quantum systems.
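As a small classical sanity check of the target object (not of the quantum algorithm itself, whose point is preparing such states efficiently given a decoding oracle), the state $\rho_{\mathcal P}(H)$ can be built by exact diagonalization for a tiny example Hamiltonian and polynomial, both chosen here for illustration:

```python
# Build rho_P(H) = P(H)^2 / Tr[P(H)^2] for a toy two-qubit Pauli Hamiltonian
# and verify it is a valid density matrix.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

H = 0.7 * np.kron(Z, Z) + 0.3 * np.kron(X, I2)   # example H = sum_i c_i P_i

evals, V = np.linalg.eigh(H)
poly_vals = evals ** 2 - 0.5                      # example polynomial P(x) = x^2 - 1/2
PH = V @ np.diag(poly_vals) @ V.T                 # matrix function P(H)
rho = PH @ PH / np.trace(PH @ PH)

print("trace:", round(float(np.trace(rho)), 6),
      " min eigenvalue:", round(float(np.linalg.eigvalsh(rho).min()), 6))
```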
Practical block encodings of matrix polynomials that can also be trivially controlled
This paper presents more efficient quantum circuits for implementing matrix polynomial transformations using block encodings, reducing the circuit depth overhead from scaling with both polynomial degree and system size to scaling only linearly with polynomial degree. The work uses the FOQCS-LCU framework to make these operations practical for current quantum hardware.
Key Contributions
- Reduced circuit depth overhead for matrix polynomial block encodings from scaling with both degree and system size to linear scaling in degree only
- Demonstrated efficient controllability of FOQCS-LCU circuits for applications like Hadamard tests
- Provided explicit practical circuits with detailed gate counts for spin model implementations
View Full Abstract
Quantum circuits naturally implement unitary operations on input quantum states. However, non-unitary operations can also be implemented through block encodings, where additional ancilla qubits are introduced and later measured. While block encoding has a number of well-established theoretical applications, its practical implementation has been prohibitively expensive for current quantum hardware. In this paper, we present practical and explicit block encoding circuits implementing matrix polynomial transformations of a target matrix. With standard approaches, block-encoding a degree-$d$ matrix polynomial requires a circuit depth scaling as $d$ times the depth for block-encoding the original matrix alone. By leveraging the recently introduced Fast One-Qubit Controlled Select LCU (FOQCS-LCU) framework, we show that the additional circuit-depth overhead required for encoding matrix polynomials can be reduced to scale linearly in $d$ with no dependence on system size or the cost of block encoding the original matrix. Moreover, we demonstrate that the FOQCS-LCU circuits and their associated matrix polynomial transformations can be controlled with negligible overhead, enabling efficient applications such as Hadamard tests. Finally, we provide explicit circuits for representative spin models, together with detailed non-asymptotic gate counts and circuit depths.
Efficient Trotter-Suzuki Schemes for Long-time Quantum Dynamics
This paper develops improved mathematical methods for simulating quantum systems over long time periods by creating more efficient high-order Trotter-Suzuki decomposition schemes. The authors optimize these schemes to reduce computational errors that accumulate during long-time quantum simulations, demonstrating better performance than traditional methods on test systems like the Heisenberg model.
Key Contributions
- Framework for constructing optimized high-order Trotter-Suzuki schemes through direct parameter optimization
- Two novel 4th and 6th order decomposition schemes with significantly improved efficiency over traditional constructions
- Demonstration that schemes with more uniform coefficients have better long-time error accumulation properties
View Full Abstract
Accurately simulating long-time dynamics of many-body systems is a challenge in both classical and quantum computing due to the accumulation of Trotter errors. While low-order Trotter-Suzuki decompositions are straightforward to implement, their rapidly growing error limits access to long-time observables. We present a framework for constructing efficient high-order Trotter-Suzuki schemes by identifying their structure and directly optimizing their parameters over a high-dimensional space. This method enables the discovery of new schemes with significantly improved efficiency compared to traditional constructions, such as those by Suzuki and Yoshida. Based on the theoretical efficiency and practical performance, we recommend two novel highly efficient schemes at $4^{\textrm{th}}$ and $6^{\textrm{th}}$ order. We also demonstrate the effectiveness of these decompositions on the Heisenberg model and the quantum harmonic oscillator, and find that for a fixed final time they perform better across the full range of computational cost. Even when using large time steps, they surpass established low-order schemes like the Leapfrog. Finally, we investigate the in-practice performance of different Trotter schemes and find that decompositions with more uniform coefficients tend to feature improved error accumulation over long times. We have incorporated this observation into our choice of recommended schemes.
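To illustrate how Trotter error scales with decomposition order, here is a toy two-qubit comparison of the first-order splitting against the symmetric second-order (Strang/Leapfrog) splitting; the paper's optimized 4th- and 6th-order schemes are not reproduced here, since their coefficients are not listed in the abstract:

```python
# Toy Trotter-error comparison on H = A + B with non-commuting parts.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

A = np.kron(X, X)                             # coupling term
B = np.kron(Z, Z) + np.kron(Z, np.eye(2))     # does not commute with A
H = A + B

def trotter_error(dt: float, order: int) -> float:
    exact = expm(-1j * H * dt)
    if order == 1:
        approx = expm(-1j * A * dt) @ expm(-1j * B * dt)
    else:  # symmetric 2nd-order (Strang / Leapfrog) splitting
        approx = expm(-1j * A * dt / 2) @ expm(-1j * B * dt) @ expm(-1j * A * dt / 2)
    return np.linalg.norm(exact - approx, 2)  # spectral-norm error per step

for dt in (0.2, 0.1, 0.05):
    print(f"dt = {dt:4.2f}: 1st-order err = {trotter_error(dt, 1):.2e}, "
          f"2nd-order err = {trotter_error(dt, 2):.2e}")
# Per step, the 1st-order error shrinks ~4x and the 2nd-order error ~8x
# when dt is halved, reflecting their O(dt^2) and O(dt^3) local errors.
```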
Coherent control of photon pairs via quantum interference between second- and third-order quantum nonlinear processes
This paper demonstrates quantum interference between different order nonlinear optical processes (second-order spontaneous parametric down-conversion and third-order spontaneous four-wave mixing) to create photon pairs. The interference allows coherent control over both the generation rate and spectral properties of the two-photon quantum states.
Key Contributions
- Demonstrated genuine quantum interference between second- and third-order nonlinear processes in integrated photonics
- Achieved coherent control over biphoton generation rate and spectral structure through phase-dependent modulation
- Established a new method for shaping biphoton wavefunctions and quantum correlations
View Full Abstract
Genuine quantum interference between independent nonlinear processes of different order provides a route to coherent control that cannot be reduced to a classical field interference. Here we present an all-optical analogue of coherent carrier injection by exploiting interference between second- and third-order quantum nonlinear processes in an integrated photonic platform. Photon pairs generated via spontaneous parametric down-conversion and spontaneous four-wave mixing coherently contribute to the same final two-photon state, resulting in a phase-dependent modulation of both the generation rate and the spectral structure of the emitted biphoton state. We illustrate the features of such interference and how it can be used to shape biphoton wavefunctions and their quantum correlations. These results identify interference between nonlinear processes of different order as a distinct form of coherent quantum control within quantum nonlinear optics.
On the Stochastic-Quantum Correspondence
This paper presents a theoretical framework that attempts to derive all of quantum mechanics from a single axiom stating that physical systems evolve according to stochastic laws. The work builds on Barandes' 2023 stochastic-quantum correspondence, using bra-ket notation to address foundational issues like the measurement problem and classical limits.
Key Contributions
- Derivation of six quantum mechanical axioms from a single stochastic axiom
- Proposed solution to the measurement problem through environmental degrees of freedom
- Analysis of discrete vs continuous bases suggesting fundamental discreteness of space
View Full Abstract
This paper aims to first explain, somewhat more clearly, the Stochastic-Quantum correspondence put forward by Barandes in 2023. Specifically, the quantum-mechanical bra-ket notation is used, illuminating some of the previous results. With this, we prove the six axioms of textbook quantum mechanics from a single axiom: every physical system evolves according to a, generally indivisible, stochastic law. Afterwards, we generalise the treatment to continuous bases, which showcases a problem with them, indicating that space (and other physical variables) may be discrete in nature. Some concrete examples are also given, including the generalisation to classical and quantum fields. Then, we treat some practical issues of this new stochastic approach, regarding the solving of problems in physics, which turns out to still be most tractable in the traditional way. Finally, we explain the classical limit, where a system of many particles is found to behave classically according to Newton's second law. Along with that, we present a way of solving the measurement problem, characterising what is an environment and a measuring device and explaining how the wavefunction collapse comes about. Specifically, it is found that what distinguishes an environment is its number of degrees of freedom, while a measuring device is a low-entropy type of environment.
Nontrivial bounds on extractable energy in quantum energy teleportation for gapped manybody systems with a unique ground state
This paper proves theoretical limits on quantum energy teleportation, showing that the amount of energy that can be extracted through local measurements and operations on distant regions of quantum many-body systems decreases exponentially with distance. The work provides rigorous mathematical bounds for systems with energy gaps and unique ground states.
Key Contributions
- Established universal exponentially decaying upper bound on extractable energy in quantum energy teleportation protocols
- Provided nonperturbative mathematical framework connecting spectral gaps to energy extraction limits in gapped many-body systems
View Full Abstract
We establish a universal, exponentially decaying upper bound on the average energy that can be extracted in quantum energy teleportation (QET) protocols executed on finite-range gapped lattice systems possessing a unique ground state. Under mild regularity assumptions on the Hamiltonian and uniform operator-norm bounds on the local measurement operators, there exist positive constants $C$ and $\mu$ (determined by the spectral gap, interaction range and local operator norms) such that for any local measurement performed in a region $A$ and any outcome-dependent local unitaries implemented in a disjoint region $B$ separated by distance $d=\operatorname{dist}(A,B)$ one has $|E_A-E_B|\le C\,e^{-\mu d}.$ The bound is nonperturbative, explicit up to model-dependent constants, and follows from the variational characterization of the ground state combined with exponential clustering implied by the spectral gap.
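Read operationally, the stated bound can be inverted directly (nothing beyond a rearrangement of the inequality above): for any target extraction $\varepsilon > 0$,

```latex
C\,e^{-\mu d} < \varepsilon
\quad\Longleftrightarrow\quad
d > \frac{1}{\mu}\,\ln\frac{C}{\varepsilon},
```

so beyond a separation that grows only logarithmically in $1/\varepsilon$, and shrinks as the gap-controlled decay rate $\mu$ grows, no QET protocol of this class can extract more than $\varepsilon$.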
Analyzing Images of Blood Cells with Quantum Machine Learning Methods: Equilibrium Propagation and Variational Quantum Circuits to Detect Acute Myeloid Leukemia
This paper tests quantum machine learning algorithms for detecting acute myeloid leukemia from blood cell images, comparing Equilibrium Propagation and Variational Quantum Circuits against classical methods. The quantum approaches achieve reasonable accuracy (83-86%) with much less training data than classical neural networks, demonstrating potential for medical applications in the near-term quantum computing era.
Key Contributions
- Demonstrated competitive performance of quantum machine learning on real medical imaging data with limited training samples
- Established reproducible baselines for NISQ-era quantum machine learning applications in healthcare
View Full Abstract
This paper presents a feasibility study demonstrating that quantum machine learning (QML) algorithms achieve competitive performance on real-world medical imaging despite operating under severe constraints. We evaluate Equilibrium Propagation (EP), an energy-based learning method that does not use backpropagation (incompatible with quantum systems due to state-collapsing measurements) and Variational Quantum Circuits (VQCs) for automated detection of Acute Myeloid Leukemia (AML) from blood cell microscopy images using binary classification (2 classes: AML vs. Healthy). Key Result: Using limited subsets (50-250 samples per class) of the AML-Cytomorphology dataset (18,365 expert-annotated images), quantum methods achieve performance only 12-15% below classical CNNs despite reduced image resolution (64x64 pixels), engineered features (20D), and classical simulation via Qiskit. EP reaches 86.4% accuracy (only 12% below CNN) without backpropagation, while the 4-qubit VQC attains 83.0% accuracy with consistent data efficiency: VQC maintains stable 83% performance with only 50 samples per class, whereas CNN requires 250 samples (5x more data) to reach 98%. These results establish reproducible baselines for QML in healthcare, validating NISQ-era feasibility.
Error-mitigation aware benchmarking strategy for quantum optimization problems
This paper develops a benchmarking framework for assessing when noisy quantum computers can achieve quantum advantage in optimization problems, specifically accounting for finite measurement shots and quantum error mitigation overhead. The authors demonstrate their approach using the Fermi-Hubbard model to identify conditions where quantum devices might outperform classical methods despite hardware limitations.
Key Contributions
- Development of a benchmarking framework that incorporates finite-shot statistics and quantum error mitigation resource overhead for optimization problems
- Demonstration of conditions where probabilistic error cancellation provides operational advantage in quantum optimization tasks
View Full Abstract
Assessing whether a noisy quantum device can potentially exhibit quantum advantage is essential for selecting practical quantum utility tasks that are not efficiently verifiable by classical means. For optimization, a prominent candidate for quantum advantage, entropy benchmarking provides insights based concomitantly on the specifics of the application and its implementation, as well as hardware noise. However, such an approach still does not account for finite-shot effects or for quantum error mitigation (QEM), a key near-term error suppression strategy that reduces estimation bias at the cost of increased sampling overhead. We address this limitation by developing a benchmarking framework that explicitly incorporates finite-shot statistics and the resource overhead induced by QEM. Our framework quantifies quantum advantage through the confidence that an estimated energy lies within an interval defined by the best-known classical upper and lower bounds. Using a proof-of-principle numerical study of the two-dimensional Fermi-Hubbard model at size $8\times8$, we demonstrate that the framework effectively identifies noise and shot-budget regimes in which the probabilistic error cancellation (PEC), a representative QEM method, is operationally advantageous, and potential quantum advantage is not hindered by finite-shot effects. Overall, our approach equips end-users with a framework based on lightweight numerics for assessing potential practical quantum advantage in optimization on near-future quantum hardware, in light of the allocated shot budget.
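The following is a toy reading of the confidence criterion described in the abstract, assuming Gaussian finite-shot fluctuations whose standard error is inflated by the usual PEC sampling-overhead factor γ; the noise model and all numbers are assumptions for illustration, not the paper's framework:

```python
# Toy model: confidence that an energy estimate lies between the best classical
# lower/upper bounds, with Gaussian shot noise inflated by the PEC overhead gamma.
import math

def confidence(E_hat, lower, upper, sigma_single_shot, gamma, shots):
    sigma = gamma * sigma_single_shot / math.sqrt(shots)   # PEC-inflated standard error
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return Phi((upper - E_hat) / sigma) - Phi((lower - E_hat) / sigma)

# Hypothetical numbers only:
print(confidence(E_hat=-11.8, lower=-12.0, upper=-11.5,
                 sigma_single_shot=4.0, gamma=3.0, shots=10_000))
```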
Quantum Rotation Diversity in Displaced Squeezed Binary Phase-Shift Keying
This paper proposes a quantum rotation diversity scheme to improve the reliability of optical quantum communication systems that transmit binary information using displaced squeezed quantum states of light through turbulent atmospheric channels. The method uses passive rotation to couple consecutive time slots, achieving better error performance and demonstrating 'super-diversity' behavior under certain conditions.
Key Contributions
- Development of quantum rotation diversity scheme for displaced squeezed binary phase-shift keying with analytical performance expressions
- Demonstration of super-diversity behavior achieving effective diversity order of four when both displacement and squeezing scale with total photon number
View Full Abstract
We propose a quantum rotation diversity (QRD) scheme for optical quantum communication using binary phase-shift-keying displaced squeezed states and homodyne detection over Gamma-Gamma turbulence channels. Consecutive temporal modes are coupled by a passive orthogonal rotation that redistributes the displacement amplitude between slots, yielding a diversity order of two under independent fading and joint maximum-likelihood detection. Analytical expressions for the symbol-error rate performance, along with asymptotic results for the diversity and coding gains, are derived. The optimal rotation angle and energy allocation between displacement and squeezing are obtained in closed form. Furthermore, we show that when both the displacement amplitude and the squeezing strength scale with the total photon number, an effective diversity order of four is achieved. Numerical results validate the analysis and demonstrate the super-diversity behaviour of the proposed QRD scheme.
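One way to picture the passive rotation (a sketch of the idea; the paper's exact mode and detection conventions may differ): the displacement carrying one bit is spread across two consecutive temporal modes,

```latex
\begin{pmatrix} \alpha_1' \\ \alpha_2' \end{pmatrix}
= \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
  \begin{pmatrix} \pm\alpha \\ 0 \end{pmatrix}
= \pm\alpha \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix},
```

so the two slots experience independent turbulence realizations, and joint maximum-likelihood detection over the pair fails only when both fade simultaneously, which is the origin of the diversity order of two.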
Universality of Many-body Projected Ensemble for Learning Quantum Data Distribution
This paper proves that a quantum machine learning framework called Many-body Projected Ensemble (MPE) can universally approximate any quantum data distribution, addressing a fundamental theoretical question about the expressivity of quantum learning models. The authors demonstrate this with rigorous mathematical guarantees and propose practical improvements for training such models.
Key Contributions
- Proved universality theorem for MPE framework showing it can approximate any pure state distribution within 1-Wasserstein distance error
- Proposed Incremental MPE variant with layer-wise training to improve practical trainability of quantum machine learning models
View Full Abstract
Generating quantum data by learning the underlying quantum distribution poses challenges in both theoretical and practical scenarios, yet it is a critical task for understanding quantum systems. A fundamental question in quantum machine learning (QML) is the universality of approximation: whether a parameterized QML model can approximate any quantum distribution. We address this question by proving a universality theorem for the Many-body Projected Ensemble (MPE) framework, a method for quantum state design that uses a single many-body wave function to prepare random states. This demonstrates that MPE can approximate any distribution of pure states within a 1-Wasserstein distance error. This theorem provides a rigorous guarantee of universal expressivity, addressing key theoretical gaps in QML. For practicality, we propose an Incremental MPE variant with layer-wise training to improve the trainability. Numerical experiments on clustered quantum states and quantum chemistry datasets validate MPE's efficacy in learning complex quantum data distributions.
Sufficient conditions for additivity of the zero-error classical capacity of quantum channels
This paper studies quantum channels and their ability to transmit classical information with zero error probability. The authors provide mathematical conditions for when the zero-error capacity of multiple quantum channels used together equals the sum of their individual capacities, connecting this to properties of noncommutative graphs.
Key Contributions
- Provides sufficient conditions for multiplicativity of independence numbers in noncommutative graphs
- Establishes conditions for additivity of zero-error classical capacity in quantum channels with explicit examples
View Full Abstract
The one-shot zero-error classical capacity of a quantum channel is the amount of classical information that can be transmitted with zero probability of error in a single use. This capacity equals the logarithm of the independence number of the noncommutative graph induced by the channel, so the additivity of the one-shot zero-error classical capacity is equivalent to the multiplicativity of the independence number of the noncommutative graph. The independence number is not multiplicative in general, and it is not well understood when multiplicativity holds. In this work, we present sufficient conditions for multiplicativity of the independence number and give explicit examples of quantum channels that satisfy them. Furthermore, we consider a block form of noncommutative graphs and provide conditions under which the independence number is multiplicative.
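For reference, the correspondence the abstract invokes can be written compactly; the symbols $\alpha$ (independence number) and $S_\Phi$ (noncommutative graph of the channel $\Phi$) are this sketch's notation.

```latex
\[
  C_0^{(1)}(\Phi) \;=\; \log \alpha\!\left(S_\Phi\right),
\qquad
  C_0^{(1)}(\Phi_1 \otimes \Phi_2) = C_0^{(1)}(\Phi_1) + C_0^{(1)}(\Phi_2)
  \;\Longleftrightarrow\;
  \alpha\!\left(S_{\Phi_1} \otimes S_{\Phi_2}\right)
  = \alpha\!\left(S_{\Phi_1}\right)\alpha\!\left(S_{\Phi_2}\right).
\]
```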
Certifying optimal device-independent quantum randomness in quantum networks
This paper develops new multipartite Bell inequalities that can more efficiently certify quantum randomness in quantum networks without needing to characterize the devices used. The approach is particularly effective when Bell inequality violations are not maximal and can be extended to networks with many parties.
Key Contributions
- Development of a family of multipartite Bell inequalities based on GHZ state stabilizer groups that enables optimal quantum randomness certification
- Demonstration of improved efficiency over existing methods (Mermin-type, MABK, Parity-CHSH inequalities) for device-independent randomness certification in non-maximal scenarios
View Full Abstract
Bell nonlocality provides a device-independent (DI) way to certify quantum randomness: true random numbers can be extracted from the observed correlations without detailed characterization of the devices used for quantum state preparation and measurement. However, the efficiency of current strategies for DI randomness certification is still heavily constrained at non-maximal Bell values, especially for multiple parties. Here, we present a family of multipartite Bell inequalities, inspired by the stabilizer group of the GHZ (Greenberger-Horne-Zeilinger) state, that certifies optimal quantum randomness and self-tests GHZ states. Because the stabilizer group of the GHZ state has a simple representation, this family of Bell inequalities has a simple structure and extends easily to more parties. Compared with Mermin-type inequalities, it certifies quantum randomness more efficiently when non-maximal Bell values are achieved. We also present a general analytical upper bound on the Holevo quantity, which outperforms the MABK (Mermin-Ardehali-Belinskii-Klyshko), Parity-CHSH (Clauser-Horne-Shimony-Holt), and Holz inequalities at $N=3$, a case of particular interest for experimental research on DI quantum cryptography in quantum networks.
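A minimal numerical illustration of the stabilizer idea, using the best-known special case (the three-qubit Mermin operator assembled from GHZ stabilizer products) rather than the paper's full family:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Three-qubit GHZ state and its stabilizer generators XXX, ZZI, IZZ.
ghz = np.zeros(8, complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

for ops, name in [((X, X, X), "XXX"), ((Z, Z, I2), "ZZI"), ((I2, Z, Z), "IZZ")]:
    print(name, "expectation:", np.real(ghz.conj() @ kron(*ops) @ ghz))

# Mermin operator assembled from products of the stabilizer generators (up to signs):
# M = XXX - XYY - YXY - YYX, quantum value 4 on GHZ versus local-realistic bound 2.
M = kron(X, X, X) - kron(X, Y, Y) - kron(Y, X, Y) - kron(Y, Y, X)
print("Mermin value on GHZ:", np.real(ghz.conj() @ M @ ghz))
```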
Quantum Key Distribution with a Negatively Charged Quantum Dot Single-Photon Source
This paper develops improved single-photon sources using quantum dots for quantum key distribution (QKD), showing that adiabatic rapid passage excitation reduces unwanted multi-photon emissions and enhances security performance compared to conventional methods.
Key Contributions
- Demonstration that adiabatic rapid passage excitation significantly suppresses multiphoton emission in quantum dot single-photon sources
- Comparative analysis showing quantum dot sources outperform Poisson-distributed sources at short-to-intermediate distances for both BB84 and twin-field QKD protocols
View Full Abstract
Various quantum key distribution protocols require bright single-photon sources with a very low probability of multiphoton emission. In this work, we investigate single-photon generation from a negatively charged quantum dot embedded in an elliptical pillar microcavity, driven using either resonant excitation or adiabatic rapid passage (ARP). Our results show that ARP excitation significantly suppresses multiphoton emission probability and improves photon indistinguishability compared to resonant excitation. We further evaluate the secure key rate of both BB84 and twin-field quantum key distribution (TF-QKD) using quantum-dot single-photon sources and compare their performance with that of Poisson-distributed photon sources (PDS) such as weak coherent pulses and down-conversion sources. The analysis reveals that adiabatic excitation offers a modest but consistent enhancement in secure key rate relative to resonant excitation. Moreover, quantum-dot single-photon sources outperform PDS sources over short and intermediate distances; however, at longer distances, PDS sources eventually surpass quantum-dot sources in both infinite decoy-state BB84 and TF-QKD.
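As a rough illustration of why multiphoton emission matters, here is a GLLP-style toy key-rate bound in which multiphoton pulses are simply tagged as insecure. All parameter values (detector efficiency, dark counts, emission probabilities) are assumptions for illustration, and no decoy states are modeled; the paper's infinite-decoy BB84 and TF-QKD analyses are more refined.

```python
import numpy as np

def h2(x):
    """Binary entropy, safe at the endpoints."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def bb84_rate(distance_km, p1, p_multi, *, loss_db_per_km=0.2, eta_det=0.2,
              p_dark=1e-6, e_opt=0.01, f_ec=1.16):
    """GLLP-style lower bound on the BB84 key rate per pulse.

    p1, p_multi: single- and multi-photon emission probabilities per pulse
    (tiny p_multi for a good quantum dot; roughly mu^2/2 for weak coherent pulses)."""
    eta = eta_det * 10 ** (-loss_db_per_km * distance_km / 10)
    q_tot = p1 * eta + p_multi + p_dark          # worst case: every multi-photon pulse is detected
    e_tot = (e_opt * p1 * eta + 0.5 * p_dark) / q_tot
    q1_lb = q_tot - p_multi                      # GLLP: tag multi-photon pulses as insecure
    e1 = min(e_tot * q_tot / max(q1_lb, 1e-15), 0.5)
    # 0.5 prefactor accounts for basis sifting.
    return max(0.5 * (q1_lb * (1 - h2(e1)) - q_tot * f_ec * h2(e_tot)), 0.0)

for d in (0, 50, 100, 150):
    r_qd = bb84_rate(d, p1=0.5, p_multi=1e-4)                      # bright, low-g2 QD-like source
    r_wcp = bb84_rate(d, p1=0.1 * np.exp(-0.1), p_multi=0.005)     # weak coherent pulse, mu = 0.1
    print(f"{d:3d} km   QD-like: {r_qd:.2e}   WCP-like (no decoy): {r_wcp:.2e}")
```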
Simultaneous determination of multiple low-lying energy levels on a superconducting quantum processor
This paper demonstrates experimental implementation of the ancilla-entangled variational quantum eigensolver (AEVQE) algorithm on a superconducting quantum processor to simultaneously determine multiple low-lying energy states of molecules and magnetic systems. The researchers successfully computed potential energy curves for H₂ molecules and investigated phase transitions in magnetic models, showing the practical feasibility of this quantum algorithm on cloud-accessible quantum platforms.
Key Contributions
- Experimental demonstration of AEVQE algorithm on superconducting quantum hardware
- Simultaneous computation of multiple energy eigenvalues for H₂ molecule and transverse-field Ising models
- Performance comparison between ancilla-enhanced and ancilla-free VQE approaches on real quantum hardware
View Full Abstract
Determining the ground and low-lying excited states is critical in numerous scenarios. Recent work has proposed the ancilla-entangled variational quantum eigensolver (AEVQE) that utilizes entanglement between ancilla and physical qubits to simultaneously target multiple low-lying energy levels. In this work, we report the experimental implementation of the AEVQE on a superconducting quantum cloud platform, demonstrating the full procedure of solving the low-lying energy levels of the H$_2$ molecule and the transverse-field Ising models (TFIMs). We obtain the potential energy curves of H$_2$ and show an indication of the ferromagnetic to paramagnetic phase transition in the TFIMs from the average absolute magnetization. Moreover, we investigate multiple factors that affect the algorithmic performance and provide a comparison with ancilla-free VQE algorithms. Our work demonstrates the experimental feasibility of the AEVQE algorithm and offers guidance for the VQE approach in solving realistic problems on publicly accessible quantum platforms.
Imperfect blockade in Rydberg superatoms
This paper develops a theoretical model to accurately describe how large ensembles of Rydberg atoms (called 'superatoms') behave when used as quantum network components. The researchers created a scalable mathematical framework that can predict the performance of these systems for quantum communication applications.
Key Contributions
- Development of a scalable theoretical model for imperfect Rydberg blockade in large disordered atomic ensembles
- Validation of the model against experimental data and numerical simulations for predicting gate fidelities and photon emission efficiencies
View Full Abstract
Ensembles of atoms interacting via their Rydberg levels, known as "superatoms" for their ability to encode qubits and to emit single photons, attract increasing attention as building blocks for quantum network nodes. Assessing their performance requires an accurate, physically informative and numerically scalable description of interactions in a large and disordered ensemble. We derive such a description from first principles and successfully test it against brute-force numerics and experimental data. This model proves essential to make quantitative predictions about gate fidelities or photon emission efficiencies, and to guide experiments towards large-scale superatom-based systems.
Qubit-parity interference despite unknown interaction phases
This paper demonstrates a new type of quantum interference using a trapped calcium ion that works even when laser phases cannot be precisely controlled. The researchers create cat-like quantum states and show interference patterns that are naturally robust to unknown but stable driving phases, achieving visibilities of 20% and 40% that approach the theoretical limit.
Key Contributions
- Demonstration of qubit-parity interference that is inherently robust to unknown but stable laser phases
- Development of a scalable coherence witness method for high-dimensional quantum states without full state tomography
View Full Abstract
Quantum interference between interacting systems is fundamental to basic science and quantum technology, but it typically requires precise control of the interaction phases of lasers or microwave generators. Can interference be observed when those interaction phases are stable but unknown, a situation usually prohibitive for complex states without active control? Here, we answer this question by experimentally preparing a Schrödinger-cat-like state of an internal qubit and a motional oscillator of a trapped $^{40}$Ca$^{+}$ ion and demonstrating its robustness to such uncontrolled phases. By applying alternating red and blue sideband pulses, we enforce a strict qubit-parity correlation and an interference that is inherently insensitive to stable but unknown phases of the driving laser. For this qubit-parity interference, we use a minimal two-pulse interferometric sequence to demonstrate characteristic visibilities of $20\%$ and $40\%$, which approach the theoretical visibility limit, providing a scalable coherence witness for high-dimensional states without full state tomography.
Physics-Informed Hybrid Quantum-Classical Dispatching for Large-Scale Renewable Power Systems: A Noise-Resilient Framework
This paper proposes a hybrid quantum-classical algorithm for optimizing electrical power grid operations with high renewable energy integration. The approach embeds physical power grid constraints directly into quantum circuit design to solve complex scheduling problems more efficiently than classical methods.
Key Contributions
- Physics-informed quantum algorithm design that embeds power flow equations into quantum Hamiltonians
- Noise-adaptive regularization mechanism for NISQ device performance
- Numerical evidence on IEEE 39-bus and 118-bus systems of improved economic efficiency over classical SDDP, positioned as a step toward practical quantum advantage in power system optimization
View Full Abstract
The integration of high-penetration renewable energy introduces significant stochasticity and non-convexity into power system dispatching, challenging the computational limits of classical optimization. While Variational Quantum Algorithms (VQAs) on Noisy Intermediate-Scale Quantum (NISQ) devices offer a promising path for combinatorial acceleration, existing approaches typically treat the power grid as a "black box", suffering from poor scalability (barren plateaus) and frequent violations of physical constraints. Bridging these gaps, this paper proposes a Physics-Informed Hybrid Quantum-Classical Dispatching (PI-HQCD) framework. We construct a topology-aware Hamiltonian that explicitly embeds linearized power flow equations, storage dynamics, and multi-timescale coupling directly into the quantum substrate, significantly reducing the search space dimensionality. We further derive a noise-adaptive regularization mechanism that theoretically bounds the effective Lipschitz constant of the objective function, guaranteeing convergence stability under realistic quantum measurement noise. Numerical experiments on the IEEE 39-bus benchmark and a 118-bus regional grid demonstrate that PI-HQCD achieves superior economic efficiency and higher renewable utilization compared to stochastic dual dynamic programming (SDDP). Theoretical analysis confirms that this topology-aware design leads to an O(1/N) gradient variance scaling, effectively mitigating barren plateaus and ensuring scalability for larger networks. This work establishes a rigorous paradigm for embedding engineering physics into quantum computing, paving the way for practical quantum advantage in next-generation grid operations.
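The general idea of embedding dispatch physics into the cost Hamiltonian can be illustrated with a toy penalty formulation. This is a generic sketch, not the paper's topology-aware Hamiltonian with power flow, storage dynamics, or noise-adaptive regularization; all unit data and weights below are assumptions.

```python
import itertools
import numpy as np

# Toy single-period commitment/dispatch: binary on/off decisions for three thermal
# generators plus a renewable block. Capacities in MW, costs in arbitrary $/MW units.
units = {"gas1": (60, 30.0), "gas2": (40, 35.0), "coal": (80, 25.0), "wind": (50, 0.0)}
names = list(units)
cap = np.array([units[n][0] for n in names], float)
cost = np.array([units[n][1] for n in names], float)
demand = 130.0
lam = 10.0                      # penalty weight for the power-balance constraint

def energy(x):
    """Generation cost plus a quadratic penalty enforcing sum(cap_i x_i) ~ demand."""
    x = np.asarray(x, float)
    return cost @ (cap * x) / 100 + lam * ((cap @ x - demand) / 100) ** 2

# Brute force the 2^4 configurations; a VQA would instead minimize <H> over a
# parameterized circuit whose Ising/QUBO Hamiltonian encodes this same objective.
best = min(itertools.product([0, 1], repeat=len(names)), key=energy)
print("optimal commitment:", dict(zip(names, best)), " dispatched MW:", float(cap @ best))
```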
Measurement induced faster symmetry restoration in quantum trajectories
This paper studies how continuous quantum measurements can accelerate the restoration of broken symmetries in quantum systems. The researchers show that measurement-induced back-action can be used as a resource to help quantum systems return to symmetric states faster than they would through unitary evolution alone.
Key Contributions
- Demonstrates that continuous measurement back-action can accelerate U(1) symmetry restoration in quantum trajectories
- Shows universality of faster restoration for states with superpositions of distant charge sectors under global monitoring
- Proves that local monitoring can further accelerate symmetry restoration for certain slowly-relaxing states
View Full Abstract
Continuous measurement of quantum systems provides a standard route to quantum trajectories through the successive acquisition of information which further results in measurement back-action. In this work, we harness this back-action as a resource for global $U(1)$ symmetry restoration where continuous measurement is combined with a $U(1)$-preserving unitary evolution. Starting from a $U(1)$ symmetry-broken initial state, we simulate quantum trajectories generated by continuous measurements of both global and local observables. We show that under global monitoring, states containing superpositions of distant charge sectors restore symmetry faster than those involving nearby sectors. We establish the universality of this behavior across different measurement protocols. Finally, we demonstrate that local monitoring can further accelerate symmetry restoration for certain states that relax slowly under global monitoring.
Hamiltonian formulation of the $1+1$-dimensional $φ^4$ theory in a momentum-space Daubechies wavelet basis
This paper develops a new computational approach using Daubechies wavelets in momentum space to study quantum field theory, specifically applying it to calculate energy spectra and phase transitions in scalar field theories. The method provides a systematic way to handle both short and long-distance physics in quantum field calculations.
Key Contributions
- Development of wavelet-based Hamiltonian formulation for quantum field theory calculations
- Demonstration of convergent results for critical coupling in phi^4 theory phase transitions
- Introduction of momentum-space Daubechies wavelets for natural UV/IR truncation in field theory
View Full Abstract
We apply the wavelet formalism of quantum field theory to investigate nonperturbative dynamics within the Hamiltonian framework. In particular, we employ Daubechies wavelets in momentum space, whose basis functions are labeled by resolution and translation indices, providing a natural nonperturbative truncation of both the infrared and ultraviolet regimes of quantum field theories. As an application, we compute the energy spectra of a free scalar field theory and the interacting $1+1$-dimensional $φ^4$ theory. This approach successfully reproduces the well-known strong-coupling phase transition in the $m^2 > 0$ regime. We find that the extracted critical coupling systematically converges toward its established value as the momentum resolution is increased, demonstrating the effectiveness of the wavelet-based Hamiltonian formulation for nonperturbative field-theoretic calculations.
A Theory of Single-Antenna Atomic Beamforming
This paper develops a theoretical framework for Rydberg atomic receivers (RAREs) that can perform directional radio wave detection using quantum properties of highly excited atoms. The authors show how spatial variations in atomic quantum states within vapor cells can create beamforming effects, and propose a segmented vapor cell architecture to improve signal reception without increasing total cell length.
Key Contributions
- Theoretical analysis showing that vapor cell length creates directional beam patterns in Rydberg atomic receivers with beamwidth inversely proportional to cell length
- Novel segmented vapor cell architecture that increases effective interaction area without increasing total propagation loss, enabling higher beamforming gain
View Full Abstract
Leveraging the quantum advantages of highly excited atoms, Rydberg atomic receivers (RAREs) represent a paradigm shift in radio wave detection, offering high sensitivity and broadband reception. However, existing studies largely model RAREs as isotropic point receivers and overlook the spatial variations of atomic quantum states within vapor cells, thus inaccurately characterizing their reception patterns. To address this issue, we present a theoretical analysis of the aforementioned spatial responses of a standard local-oscillator (LO)-dressed RARE. Our results reveal that increasing the vapor-cell length produces a receive beam aligned with the LO field, with a beamwidth inversely proportional to the cell length. This finding enables atomic beamforming to enhance the received signal-to-noise ratio using only a single-antenna RARE. Furthermore, we derive the achievable beamforming gain by characterizing and balancing the fundamental tradeoff between the effects of increasing the vapor cell length and the exponential power decay of the laser propagating through the cell. To overcome the limitation imposed by exponential decay, we propose a novel RARE architecture termed segmental vapor cell. This architecture consists of vapor-cell segments separated by clear-air gaps, allowing the total cell length (and hence propagation loss) to remain fixed while the effective cell length increases. As a result, this segmented design expands the effective atom-field interaction area without increasing the total vapor cell length, yielding a narrower beamwidth and thus higher beamforming gain as compared with a traditional continuous vapor cell.
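A toy aperture model reproduces the qualitative claims: treating the cell as a line of coherently contributing atoms weighted by laser attenuation, the half-power beamwidth shrinks as the aperture grows, and splitting the same amount of vapor into segments separated by clear-air gaps narrows the central lobe without adding propagation loss. The wavelength, absorption coefficient, and geometry below are illustrative assumptions, and grating lobes introduced by the gaps are ignored.

```python
import numpy as np

def beam_pattern(theta, segments, kappa=2.0, wavelength=0.03, theta0=0.0):
    """Normalized response of atoms along z, summed coherently over vapor-cell
    segments [(z_start, z_end), ...], with laser power decaying only inside vapor."""
    k = 2 * np.pi / wavelength
    resp = np.zeros_like(theta, dtype=complex)
    absorber = 0.0                                   # vapor length already traversed
    for z0, z1 in segments:
        z = np.linspace(z0, z1, 400)
        dz = z[1] - z[0]
        weight = np.exp(-kappa * (absorber + (z - z0)))          # laser attenuation in vapor
        phase = np.exp(1j * k * np.outer(np.sin(theta) - np.sin(theta0), z))
        resp += (phase * weight).sum(axis=1) * dz
        absorber += z1 - z0
    return np.abs(resp) / np.abs(resp).max()

theta = np.linspace(-0.3, 0.3, 2001)
c = len(theta) // 2
continuous = [(0.0, 0.10)]                           # one 10 cm cell
segmented = [(0.0, 0.05), (0.10, 0.15)]              # same 10 cm of vapor with a clear-air gap
for label, segs in [("continuous", continuous), ("segmented", segmented)]:
    p = beam_pattern(theta, segs)
    right = next(i for i in range(c, len(theta)) if p[i] < 0.5)   # central-lobe half-power edges
    left = next(i for i in range(c, -1, -1) if p[i] < 0.5)
    print(f"{label:11s} half-power beamwidth ~ {theta[right] - theta[left]:.4f} rad")
```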
Emergent Cooperation in Quantum Multi-Agent Reinforcement Learning Using Communication
This paper explores how quantum agents using quantum Q-learning can learn to cooperate through various communication protocols in game theory scenarios like the Prisoner's Dilemma. The researchers test different communication methods (MATE, MEDIATE, Gifting, and RIAL) to see whether these mechanisms can foster emergent cooperation in quantum multi-agent systems.
Key Contributions
- Extension of classical multi-agent reinforcement learning communication protocols to quantum Q-learning agents
- Demonstration that communication mechanisms can foster emergent cooperation in quantum multi-agent systems across multiple game theory scenarios
View Full Abstract
Emergent cooperation in classical Multi-Agent Reinforcement Learning has gained significant attention, particularly in the context of Sequential Social Dilemmas (SSDs). While classical reinforcement learning approaches have demonstrated capability for emergent cooperation, research on extending these methods to Quantum Multi-Agent Reinforcement Learning remains limited, particularly through communication. In this paper, we apply communication approaches to quantum Q-Learning agents: the Mutual Acknowledgment Token Exchange (MATE) protocol, its extension Mutually Endorsed Distributed Incentive Acknowledgment Token Exchange (MEDIATE), the peer rewarding mechanism Gifting, and Reinforced Inter-Agent Learning (RIAL). We evaluate these approaches in three SSDs: the Iterated Prisoner's Dilemma, Iterated Stag Hunt, and Iterated Game of Chicken. Our experimental results show that approaches using MATE with temporal-difference measure (MATE\textsubscript{TD}), AutoMATE, MEDIATE-I, and MEDIATE-S achieved high cooperation levels across all dilemmas, demonstrating that communication is a viable mechanism for fostering emergent cooperation in Quantum Multi-Agent Reinforcement Learning.
Atom-light hybrid interferometer for atomic sensing with quantum memory
This paper develops a new quantum sensor that combines light and atomic quantum memories to detect magnetic fields. The key innovation is a technique that allows the light and atomic components to be measured at different times, eliminating the need for complex optical delay systems while maintaining high sensitivity.
Key Contributions
- Novel protocol using heterodyne detection with local oscillator to eliminate need for optical delay lines in atom-light interferometers
- Demonstration that sensitivity scales favorably with quantum memory lifetime, enabling longer coherence times for better sensing
View Full Abstract
Quantum memories feature a reversible conversion of optical fields into long-lived atomic spin waves, and are therefore ideal for operating as sensitive atomic sensors. However, up to now, atom-light interferometers have lacked an efficient approach to exploit their ultimate atomic sensing performance, since an extra optical delay line is required to compensate for the memory time. Here, we report a new protocol that records the photocurrent via heterodyne mixing with a stable local oscillator. The obtained complex quadrature amplitude, which carries information imprinted on its phase by an external magnetic field, is successfully recovered from the interference patterns between the light and the atomic spin wave, without the stringent requirement of having them overlap in time. Our results reveal that the sensitivity scales favorably with the lifetime of the quantum memory. Our work may have important applications in building distributed quantum networks through quantum memory-assisted atom-light interferometers.
Bohr's complementarity principle tested on a real quantum computer via interferometer experiments
This paper tests Bohr's complementarity principle using interferometer experiments on real quantum computers with one and two qubit circuits. The researchers use quantum state tomography to reconstruct density matrices and directly verify complementarity relations through experimental measurements.
Key Contributions
- Experimental verification of Bohr's complementarity principle on real quantum hardware
- Implementation of interferometric experiments using quantum state tomography for direct measurement of complementarity relations
View Full Abstract
Bohr's Complementarity Principle is a core concept of quantum mechanics. In this article, an updated complementarity relation for the wave and particle aspects of a quantum system is presented and discussed. Two interferometric experiments are implemented in one- and two-qubit circuits and executed on real hardware. The final-state density matrices are reconstructed using quantum state tomography and the complementarity relation is tested via direct computation. Results of the executions are presented both graphically and with a mean-squared-error analysis for better comprehension.
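For orientation, the textbook predictability-visibility relation P² + V² ≤ 1 (Greenberger-Yasin) is easy to check numerically; the paper's updated relation may differ in form, so treat the quantities below as an illustrative baseline.

```python
import numpy as np

def predictability_and_visibility(rho):
    """Which-path predictability P and fringe visibility V for a qubit 'interferometer'
    state rho, with the two paths identified with |0> and |1>."""
    P = abs(rho[0, 0] - rho[1, 1])          # a-priori path predictability
    V = 2 * abs(rho[0, 1])                  # interference-fringe visibility
    return P, V

# Pure superposition after an unbalanced beam splitter: cos(a)|0> + sin(a)|1>.
for a in (0.0, np.pi / 8, np.pi / 4):
    psi = np.array([np.cos(a), np.sin(a)])
    rho = np.outer(psi, psi.conj())
    P, V = predictability_and_visibility(rho)
    print(f"a = {a:.3f}:  P^2 + V^2 = {P**2 + V**2:.3f}  (= 1 for pure states)")

# Partial decoherence (e.g. entanglement with a which-path marker) pushes P^2 + V^2 below 1.
rho_mixed = np.array([[0.5, 0.2], [0.2, 0.5]])
P, V = predictability_and_visibility(rho_mixed)
print(f"mixed state: P^2 + V^2 = {P**2 + V**2:.3f}  (<= 1)")
```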
An Adaptive Purification Controller for Quantum Networks: Dynamic Protocol Selection and Multipartite Distillation
This paper presents an Adaptive Purification Controller (APC) that automatically optimizes entanglement purification protocols in quantum networks by dynamically selecting the best protocol based on changing network conditions like photon loss and noise. The system maximizes the rate of high-quality entangled pairs delivered across quantum network links.
Key Contributions
- Dynamic protocol selection system that adapts purification strategies to changing network conditions in real-time
- Extension to multipartite entanglement generation and demonstration of computational feasibility with millisecond decision latencies
View Full Abstract
Efficient entanglement distribution is the cornerstone of the Quantum Internet. However, physical link parameters such as photon loss, memory coherence time, and gate error rates fluctuate dynamically, rendering static purification strategies suboptimal. In this paper, we propose an Adaptive Purification Controller (APC) that autonomously optimizes the entanglement distillation sequence to maximize the "goodput," the rate of delivered pairs meeting a strict fidelity threshold. By treating protocol selection as a resource allocation problem, the APC dynamically switches between purification depths and protocol families (e.g., BBPSSW vs. DEJMPS) to navigate the trade-off between generation rate and state quality. Using a dynamic programming planner with Pareto pruning, simulation results demonstrate that our approach eliminates the "fidelity cliffs" inherent in static protocols and prevents resource wastage in high-noise regimes. Furthermore, we extend the controller to heterogeneous scenarios, demonstrating robustness for both multipartite GHZ state generation and continuous variable systems using effective noiseless linear amplification models. We benchmark its computational overhead, confirming real-time feasibility with decision latencies in the millisecond range per link.
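A compact sketch of the planning idea, for illustration only: a Werner-state BBPSSW recurrence supplies a (fidelity, success probability) pair per round, the candidate depths are Pareto-pruned on (rate, fidelity), and the schedule with the best goodput above the fidelity threshold is chosen. The real APC additionally switches protocol families (BBPSSW vs. DEJMPS) and adapts to live link parameters; the raw rates and target fidelity below are assumptions.

```python
def purify_step(F):
    """One BBPSSW-type round on Werner states of fidelity F: returns
    (output fidelity, success probability). Re-twirling to Werner form
    between rounds is assumed; DEJMPS would track full Bell-diagonal data."""
    p = F**2 + 2 * F * (1 - F) / 3 + 5 * ((1 - F) / 3) ** 2
    return (F**2 + ((1 - F) / 3) ** 2) / p, p

def plan(F_raw, rate_raw, F_target, max_depth=4):
    """Enumerate purification depths, Pareto-prune (rate, fidelity) pairs, and pick
    the schedule with the best goodput (delivered rate above the fidelity threshold)."""
    candidates = [(rate_raw, F_raw, 0)]
    F, rate = F_raw, rate_raw
    for depth in range(1, max_depth + 1):
        F, p = purify_step(F)
        rate = rate * p / 2          # each round consumes two pairs and succeeds with prob p
        candidates.append((rate, F, depth))
    # Pareto pruning: discard schedules dominated in both rate and fidelity.
    pareto = [c for c in candidates
              if not any(o[0] >= c[0] and o[1] >= c[1] and o != c for o in candidates)]
    feasible = [c for c in pareto if c[1] >= F_target]
    return max(feasible, default=None)

for F_raw in (0.75, 0.85, 0.95):
    best = plan(F_raw, rate_raw=1000.0, F_target=0.9)
    print(f"raw F = {F_raw}:",
          "no schedule reaches the target" if best is None
          else f"goodput {best[0]:.1f} pairs/s at F = {best[1]:.3f} (depth {best[2]})")
```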
Scaling of multicopy constructive interference of Gaussian states
This paper analyzes how multiple non-identical Gaussian quantum states interfere constructively when scaled up in bosonic systems, introducing metrics to predict performance and stability of large-scale quantum interference schemes. The work provides theoretical scaling laws and introduces a gain-to-instability ratio to handle realistic imperfections in multiplexed quantum resources.
Key Contributions
- Introduction of gain-to-instability ratio to quantify scaling laws in presence of resource instabilities
- Theoretical analysis and prediction of scaling laws for constructive interference of multiplexed nonclassical Gaussian states
View Full Abstract
Quantum technology advances crucially depend on the scaling up of essential quantum resources. Ideal multiplexing of these resources offers greater gains in applications; however, the scaling of nonidentical, fragile, and fluctuating resources is known neither theoretically nor experimentally. For bosonic systems, multimode interference is an essential tool already widely exploited to develop quantum technology. Here, we analyze, predict and compare essential scaling laws for constructive interference of multiplexed nonclassical Gaussian states carrying information by displacement with weakly fluctuating squeezing in different multimode interference architectures. The signal-to-noise ratio quantifies the increase in displacement relative to the noise. We introduce the gain-to-instability ratio to numerically estimate the effect of unexplored resource instabilities in a large-scale interference scheme. Using the gain-to-instability ratio to quantify the scaling laws opens the way to extensive theoretical investigation of other bosonic resources and to feasible follow-up experimental verification necessary for further development of these platforms.
Quantum Hyperuniformity and Quantum Weight
This paper extends the concept of hyperuniformity from classical to quantum systems by studying charge-density fluctuations in many-body electron ground states. The authors show that quantum hyperuniformity can classify different quantum phases and provide a measure of energy gaps through a quantity called quantum weight, demonstrating this framework using the Aubry-Andre model.
Key Contributions
- Extension of hyperuniformity concept to quantum fluctuations in electron systems
- Introduction of quantum weight as a quantitative measure of energy gap size
- Demonstration that quantum hyperuniformity classes can distinguish between gapped, gapless, and localized phases
View Full Abstract
Extending hyperuniformity from classical to quantum fluctuations in electron systems yields a framework that identifies quantum phase transitions and reveals underlying gap structures through the quantum weight. We study long-wavelength fluctuations of many-body ground states through the charge-density structure factor by incorporating intrinsic quantum fluctuations into hyperuniformity. Although charge fluctuations at zero temperature are generally suppressed by particle-number conservation, their long-wavelength scaling reveals distinct universal behaviors that define quantum hyperuniformity classes. Using the Aubry-Andre model as an example, we find that gapped, gapless, and localized-critical-extended phases are sharply distinguished by the quantum hyperuniformity classes. Notably, at the critical point, multifractal wave functions generate anomalous scaling behavior. We further show that, in quantum-hyperuniform gapped phases, the quantum weight provides a quantitative measure of the gap size through a universal power-law scaling. Along with classical hyperuniformity, quantum hyperuniformity serves as a direct fingerprint of quantum criticality and a practical probe of quantum phase transitions in aperiodic electron systems.
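The central quantity, the connected charge-density structure factor of a free-fermion ground state, can be computed directly for the Aubry-Andre model via Wick's theorem. The sketch below only illustrates that computation (system size, filling, and phase offset are assumptions), not the paper's classification of hyperuniformity classes or the quantum weight.

```python
import numpy as np

def structure_factor(L=233, t=1.0, lam=0.5, phi=0.0):
    """Connected charge-density structure factor S(q) of the half-filled
    Aubry-Andre ground state (free fermions, Wick's theorem)."""
    beta = (np.sqrt(5) - 1) / 2                 # irrational modulation frequency
    n = np.arange(L)
    H = np.diag(2 * lam * np.cos(2 * np.pi * beta * n + phi))
    H += -t * (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1))
    eps, U = np.linalg.eigh(H)
    occ = U[:, : L // 2]                        # fill the lowest L//2 orbitals
    C = occ @ occ.conj().T                      # correlation matrix <c_r^dag c_r'>
    # Connected density-density correlations for free fermions:
    # <n_r n_r'> - <n_r><n_r'> = delta_{rr'} <n_r> - |C_{rr'}|^2
    D = np.diag(np.real(np.diag(C))) - np.abs(C) ** 2
    q = 2 * np.pi * np.arange(1, L // 2) / L
    phases = np.exp(1j * np.outer(q, n))
    S = np.real(np.einsum("qr,rs,qs->q", phases, D, phases.conj())) / L
    return q, S

for lam in (0.5, 1.5):                          # extended (lam < t) vs localized (lam > t) phase
    q, S = structure_factor(lam=lam)
    # The small-q behavior of S(q) is the kind of scaling the paper classifies.
    print(f"lambda = {lam}:  S(q) at the three smallest q:", np.round(S[:3], 5))
```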
Scalable Repeater Architecture for Long-Range Quantum Energy Teleportation in Gapped Systems
This paper develops a quantum repeater architecture to enable Quantum Energy Teleportation (QET) over long distances by overcoming the exponential decay of correlations in gapped quantum systems. The authors show that their hierarchical approach changes the resource scaling from exponential to polynomial, making long-range energy teleportation physically feasible.
Key Contributions
- Rigorous demonstration that monolithic QET approaches face exponentially diverging costs and vanishing success probabilities
- Development of hierarchical quantum repeater architecture that achieves polynomial rather than exponential resource scaling for long-range quantum energy teleportation
View Full Abstract
Quantum Energy Teleportation (QET) constitutes a paradigm-shifting protocol that permits the activation of local vacuum energy through the consumption of pre-existing entanglement and classical communication. Nevertheless, the implementation of QET is severely impeded by the fundamental locality of gapped many-body systems, where the exponential clustering of ground-state correlations restricts energy extraction to microscopic scales. In this work, we address this scalability crisis within the framework of the one-dimensional anisotropic XY model. We initially provide a rigorous characterization of a monolithic measurement-induced strategy, demonstrating that while bulk projective measurements can theoretically induce long-range couplings, the approach is rendered physically untenable by exponentially diverging thermodynamic costs and vanishing success probabilities. To circumvent this impasse, we propose and analyze a hierarchical quantum repeater architecture adapted for energy teleportation. By orchestrating heralded entanglement generation, iterative entanglement purification, and nested entanglement swapping, our protocol effectively counteracts the fidelity degradation inherent in noisy quantum channels. We establish that this architecture fundamentally alters the operational resource scaling from exponential to polynomial. This proves, for the first time, the physical permissibility and computational tractability of activating vacuum energy at arbitrary distances. The significance lies not in net energy gain, but in establishing long-range QET as a viable protocol for remote quantum control and resource distribution.
Resource-Efficient Noise Spectroscopy for Generic Quantum Dephasing Environments
This paper develops a new method using weak measurements and Ramsey interferometry to efficiently characterize noise in quantum environments that cause qubit decoherence. The technique can measure the full noise spectrum more resource-efficiently than existing methods, requiring O(N) time instead of O(N²) for the same accuracy.
Key Contributions
- Resource-efficient noise spectroscopy method using repetitive weak measurements that scales as O(N) instead of O(N²)
- Direct sampling of noise correlation functions through Ramsey interferometry measurements without frequency range limitations from coherence time
View Full Abstract
We present a resource-efficient method based on repetitive weak measurements to directly measure the noise spectrum of a generic quantum environment that causes qubit phase decoherence. The weak measurement is induced by a Ramsey interferometry measurement (RIM) on the qubit and periodically applied during the free evolution of the environment. We prove that the measurement correlation of such repetitive RIMs approximately corresponds to a direct sampling of the noise correlation function, thus enabling direct noise spectroscopy of the environment. Compared to dynamical-decoupling-based noise spectroscopy, this method can efficiently measure the full noise spectrum with the detected frequency range not limited by the qubit coherence time. This method is also more resource-efficient than correlation spectroscopy: for the same detection accuracy with $N$ sampling times, it requires total detection time $O(N)$, whereas correlation spectroscopy requires $O(N^2)$. We numerically demonstrate this method for both bosonic and spin baths.
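A classical-noise toy conveys the core mechanism: weak Ramsey outcomes taken in sequence have a correlation that tracks the environment's autocorrelation, whose Fourier transform gives the noise spectrum. The Ornstein-Uhlenbeck bath and all parameters below are stand-in assumptions; the paper treats generic quantum environments and derives the correspondence rigorously.

```python
import numpy as np

rng = np.random.default_rng(2)

def ou_noise(n_steps, dt, tau_c=5.0, sigma=0.3):
    """Classical Ornstein-Uhlenbeck stand-in for the bath: <B(t)B(0)> = sigma^2 exp(-|t|/tau_c)."""
    b = np.zeros(n_steps)
    decay = np.exp(-dt / tau_c)
    kick = sigma * np.sqrt(1 - decay**2)
    for i in range(1, n_steps):
        b[i] = b[i - 1] * decay + kick * rng.normal()
    return b

dt, n_steps, tau = 1.0, 200_000, 1.0
B = ou_noise(n_steps, dt)

# Repetitive weak Ramsey measurements: each short interrogation imprints a small phase
# phi_j ~ B(t_j) * tau, and the +/-1 outcome m_j satisfies <m_j | phi_j> = sin(phi_j).
phi = B * tau
m = np.where(rng.random(n_steps) < 0.5 * (1 + np.sin(phi)), 1, -1)

# The outcome correlation tracks the phase (hence noise) autocorrelation up to a known factor,
# so its Fourier transform yields the noise spectrum without dynamical decoupling.
print(" lag   outcome corr   phi-phi corr")
for k in (1, 5, 10, 20):
    corr_m = np.mean(m[:-k] * m[k:])
    corr_phi = np.mean(phi[:-k] * phi[k:])
    print(f"{k:4d}   {corr_m: .5f}      {corr_phi: .5f}")
```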
A particle on a ring or: how I learned to stop worrying and love $θ$-vacua
This paper examines a recent claim that the strong CP problem in quantum chromodynamics doesn't exist by analyzing a simpler quantum mechanical system of a particle on a ring. The authors demonstrate that consistent results require summing over all topological sectors before taking limits, thereby defending the conventional understanding that the strong CP problem does exist in QCD.
Key Contributions
- Provides a simple quantum mechanical model to test order of limits in topological sectors
- Refutes claims that the strong CP problem doesn't exist by showing proper mathematical treatment requires conventional approach
View Full Abstract
Recently, Ai, Cruz, Garbrecht, and Tamarit (ACGT)~\cite{Ai:2020ptm, Ai:2024vfa, Ai:2024cnp, Ai:2025quf} claimed that there is no strong CP problem by adopting a new order of limits in the volume and topological sector. We critically examine this proposal by focusing on simple one-dimensional quantum mechanics on a ring. We demonstrate that consistent results are obtained only when one sums over all topological sectors \textit{before} taking the large $T$ limit. This observation justifies the conventional path integral formulation of gauge theories and implies that the strong CP problem does exist in QCD.
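The ring model makes the order-of-limits point explicit; a textbook summary in this sketch's notation ($I$ is the moment of inertia, $Q$ the winding number, $m$ the momentum quantum number), where the $\theta$-dependence of the spectrum survives only if all $Q$ are summed before taking $T \to \infty$:

```latex
\[
  Z(\theta) \;=\; \sum_{Q \in \mathbb{Z}} e^{i\theta Q}\, Z_Q ,
\qquad
  Z_Q \;=\; \int_{x(T)-x(0)=2\pi Q} \mathcal{D}x\; e^{-\int_0^T \frac{I\dot{x}^2}{2}\, d\tau},
\]
\[
  Z(\theta) \;=\; \sum_{m \in \mathbb{Z}} e^{-T E_m(\theta)},
\qquad
  E_m(\theta) \;=\; \frac{1}{2I}\left(m - \frac{\theta}{2\pi}\right)^{2}.
\]
```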