Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
This page updates automatically as new papers are published and covers one week of arXiv postings (Sunday through Thursday). An archive of previous weeks is available at the bottom of the page.
Critical non-equilibrium phases from noisy topological memories
This paper studies quantum error correction in surface codes under noisy conditions, discovering a critical phase where quantum information partially survives but can only be recovered using global (not local) decoding methods. The researchers map this problem to a statistical physics model of loops to understand when and how quantum information can be preserved despite noise.
Key Contributions
- Discovery of extended non-equilibrium critical phase in surface codes with sub-exponential decay of conditional mutual information
- Introduction of punctured coherent information diagnostic to determine limits of quasi-local quantum error correction
Full Abstract
We demonstrate the existence of an extended non-equilibrium critical phase, characterized by sub-exponential decay of conditional mutual information (CMI), in the surface code subject to heralded random Pauli measurement channels. By mapping the resulting mixed state to the ensemble of completely packed loops on a square lattice, we relate the extended phase to the Goldstone phase of the loop model. In particular, CMI is controlled by the characteristic length scale of loops, and we use analytic results of the latter to establish polylogarithmic decay of CMI in the critical phase. We find that the critical phase retains partial logical information that can be recovered by a global decoder, but not by any quasi-local decoder. To demonstrate this, we introduce a diagnostic called punctured coherent information which provides a necessary condition for quasi-local decoding.
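The conditional mutual information used as the diagnostic here is the standard entropic combination $I(A{:}C|B) = S(AB) + S(BC) - S(B) - S(ABC)$. As a minimal, self-contained illustration of that quantity (not the paper's loop-model calculation), the sketch below evaluates it for a three-qubit GHZ state, where the value of 1 bit reflects the conditional correlations a decoder must exploit:

```python
import numpy as np

def reduced_density(psi, keep, n):
    """Partial trace of |psi><psi| over all qubits not in `keep`."""
    psi = psi.reshape([2] * n)
    # move the kept qubit axes to the front, flatten the rest
    perm = list(keep) + [q for q in range(n) if q not in keep]
    psi = np.transpose(psi, perm).reshape(2 ** len(keep), -1)
    return psi @ psi.conj().T

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

n = 3
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# I(A:C|B) = S(AB) + S(BC) - S(B) - S(ABC), with qubits A=0, B=1, C=2
cmi = (entropy(reduced_density(ghz, [0, 1], n))
       + entropy(reduced_density(ghz, [1, 2], n))
       - entropy(reduced_density(ghz, [1], n))
       - entropy(reduced_density(ghz, [0, 1, 2], n)))
print(round(cmi, 6))  # → 1.0
```

For critical mixed states the interesting question is how this quantity decays with the size of the separating region $B$; the paper establishes polylogarithmic decay in the critical phase.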
Elevator Codes: Concatenation for resource-efficient quantum memory under biased noise
This paper introduces 'elevator codes,' a new quantum error correction scheme that uses a two-layer approach to dramatically reduce the number of qubits needed for quantum memory when noise is biased (one type of error is much more common than others). The method combines simple repetition codes for the common errors with high-rate codes for rare errors, achieving over 50% reduction in qubit overhead compared to existing approaches.
Key Contributions
- Introduction of elevator codes using concatenated classical codes with repetition phase-flip inner codes and high-rate bit-flip outer codes
- Demonstration of over 50% reduction in qubit overhead compared to rectangular surface codes and XZZX codes under biased noise conditions
Full Abstract
Biased-noise qubits, in which one type of error (e.g. $X$- and $Y$-type errors) is significantly suppressed relative to the other (e.g. $Z$-type errors), can significantly reduce the overhead of quantum error correction. Codes such as the rectangular surface code or XZZX code substantially reduce the qubit overhead under biased noise, but they still face challenges. The rectangular surface code suffers from a relatively low threshold, while the XZZX code requires twice as many physical qubits to maintain the same code distance as the surface code. In this work, we introduce a 2D local code construction that outperforms these codes for noise biases $\eta \ge 7\times10^{4}$, reducing the qubit overhead by over 50% at $p_Z=10^{-3}$ and $\eta = 2 \times 10^6$ to achieve a logical error rate of $10^{-12}$. Our construction relies on the concatenation of two classical codes. The inner codes are repetition phase-flip codes while the outer codes are high-rate bit-flip codes enabled by their implementation at the logical level, which circumvents device connectivity constraints. These results indicate that under sufficiently biased noise, it is advantageous to address phase-flip and bit-flip errors at different layers of the coding scheme. The inner code should prioritize a high threshold for phase-flip errors, while the bit-flip outer code should optimize for encoding rate efficiency. In the strong biased-noise regime, high-rate outer codes keep the overhead for correcting residual bit-flip errors comparable to that of the repetition code itself, meaningfully lower than that required by earlier approaches.
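The inner repetition code's role can be seen from the textbook formula for majority-vote decoding under independent flips: the logical failure rate is the probability that more than half the bits flip. A small sketch of that scaling (illustrative only, not the paper's circuit-level analysis; `p_z` matches the abstract's example rate):

```python
from math import comb

def repetition_logical_error(n, p):
    """Probability that majority-vote decoding of an n-bit repetition
    code fails, given independent flip probability p per bit."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p_z = 1e-3  # dominant phase-flip rate, as in the abstract's example
for n in (3, 5, 7):
    # logical error rate drops roughly as p**((n+1)/2)
    print(n, repetition_logical_error(n, p_z))
```

Because the dominant phase-flip errors are suppressed this quickly, the residual bit-flip errors can be handled by a high-rate outer code, which is where the overhead saving comes from.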
Quantum Maxwell Erasure Decoder for qLDPC codes
This paper presents a new quantum error correction decoder called the quantum Maxwell erasure decoder for quantum low-density parity-check (qLDPC) codes. The decoder uses a 'bounded guessing' approach that can be tuned between fast linear-time decoding and optimal maximum-likelihood performance by adjusting a 'guessing budget' parameter.
Key Contributions
- Introduction of quantum Maxwell erasure decoder with tunable complexity-performance tradeoff via guessing budget
- Theoretical guarantees on asymptotic performance with demonstration on bivariate bicycle and quantum Tanner codes
Full Abstract
We introduce a quantum Maxwell erasure decoder for CSS quantum low-density parity-check (qLDPC) codes that extends peeling with bounded guessing. Guesses are tracked symbolically and can be eliminated by restrictive checks, giving a tunable tradeoff between complexity and performance via a guessing budget: an unconstrained budget recovers Maximum-Likelihood (ML) performance, while a constant budget yields linear-time decoding and approximates ML. We provide theoretical guarantees on asymptotic performance and demonstrate strong performance on bivariate bicycle and quantum Tanner codes.
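Classically, peeling on the erasure channel repeatedly finds a parity check with exactly one erased bit and solves for it; decoding stalls on a "stopping set", which is where the paper's bounded guessing comes in. A minimal classical sketch of the peeling core (illustrative, using a [7,4] Hamming code rather than a qLDPC code):

```python
import numpy as np

def peel_erasures(H, word, erased):
    """Classical peeling: repeatedly find a check with exactly one
    erased bit and fix that bit to satisfy the parity constraint.
    Returns the corrected word, or None if peeling gets stuck."""
    word, erased = word.copy(), set(erased)
    while erased:
        for row in H:
            unknown = [j for j in np.flatnonzero(row) if j in erased]
            if len(unknown) == 1:
                j = unknown[0]
                known = [k for k in np.flatnonzero(row) if k != j]
                word[j] = int(sum(word[k] for k in known) % 2)
                erased.discard(j)
                break
        else:
            return None  # stopping set: guessing (or ML) would be needed
    return word

# [7,4] Hamming parity checks; erase bits 0 and 3 of the zero codeword
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
cw = np.zeros(7, dtype=int)
print(peel_erasures(H, cw, {0, 3}))  # recovers the all-zero codeword
```

The quantum Maxwell decoder extends this loop by tracking a bounded number of symbolic guesses when no degree-one check exists, which is what tunes it between linear time and ML performance.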
Symmetry-based Perspectives on Hamiltonian Quantum Search Algorithms and Schrödinger's Dynamics between Orthogonal States
This paper analyzes why Grover's quantum search algorithm fails when searching between orthogonal quantum states, proving that constant Hamiltonians confined to a two-dimensional subspace cannot deviate from time-optimal evolution between orthogonal states, a rigidity rooted in an inherent symmetry. The authors demonstrate that sub-optimal evolution between orthogonal states requires either time-dependent Hamiltonians or evolution in higher-dimensional spaces.
Key Contributions
- Theoretical proof that constant Hamiltonians cannot deviate from time-optimal evolution between orthogonal states in 2D subspaces
- Identification of symmetry as the fundamental cause of failure in analog quantum search with orthogonal states
Full Abstract
It is known that the continuous-time variant of Grover's search algorithm is characterized by quantum search frameworks that are governed by stationary Hamiltonians, which result in search trajectories confined to the two-dimensional subspace of the complete Hilbert space formed by the source and target states. Specifically, the search approach is ineffective when the source and target states are orthogonal. In this paper, we employ normalization, orthogonality, and energy limitations to demonstrate that it is unfeasible to breach time-optimality between orthogonal states with constant Hamiltonians when the evolution is limited to the two-dimensional space spanned by the initial and final states. Deviations from time-optimality for unitary evolutions between orthogonal states can only occur with time-dependent Hamiltonian evolutions or, alternatively, with constant Hamiltonian evolutions in higher-dimensional subspaces of the entire Hilbert space. Ultimately, we employ our quantitative analysis to provide meaningful insights regarding the relationship between time-optimal evolutions and analog quantum search methods. We determine that the challenge of transitioning between orthogonal states with a constant Hamiltonian in a sub-optimal time is closely linked to the shortcomings of analog quantum search when the source and target states are orthogonal and not interconnected by the search Hamiltonian. In both scenarios, the fundamental cause of the failure lies in the existence of an inherent symmetry within the system.
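The failure for orthogonal states is easy to reproduce numerically. With the standard analog-search Hamiltonian $H = |s\rangle\langle s| + |t\rangle\langle t|$, an initial state orthogonal to the target is an eigenstate of $H$, so the transition probability to the target vanishes for all times. A quick check (illustrative, with randomly chosen orthogonal vectors):

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
s = rng.normal(size=d)
s /= np.linalg.norm(s)
t = rng.normal(size=d)
t -= (s @ t) * s          # force <s|t> = 0
t /= np.linalg.norm(t)

H = np.outer(s, s) + np.outer(t, t)   # analog-search Hamiltonian
w, V = np.linalg.eigh(H)

for time in (0.5, 1.0, np.pi, 10.0):
    U = V @ np.diag(np.exp(-1j * w * time)) @ V.conj().T
    # |s> is an eigenstate of H, so the overlap with |t> never grows
    print(time, abs(t @ U @ s) ** 2)
```

With a nonzero overlap $\langle s|t\rangle$ the same Hamiltonian rotates $|s\rangle$ into $|t\rangle$; it is exactly the orthogonal, disconnected case where the symmetry the paper identifies pins the dynamics.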
Erasure conversion for singlet-triplet spin qubits enables high-performance shuttling-based quantum error correction
This paper develops a fault-tolerant quantum error correction framework using singlet-triplet spin qubits in semiconductor quantum dots, demonstrating how these qubits can function as erasure qubits with hardware-efficient leakage detection. The approach doubles the error correction threshold and significantly reduces logical error rates when combined with the XZZX surface code.
Key Contributions
- Hardware-efficient leakage-detection protocol for singlet-triplet qubits that projects leaked qubits back to computational subspace without measurement feedback
- Demonstration of twofold increase in error correction threshold and orders-of-magnitude reduction in logical error rates using XZZX surface code with leakage-aware decoding
Full Abstract
Fast, high-fidelity shuttling of spin qubits has been demonstrated in semiconductor quantum dot devices. Several architectures based on shuttling have been proposed; it has been suggested that singlet-triplet (dual-spin) qubits could be optimal for the highest shuttling fidelities. Here we present a fault-tolerant framework for quantum error correction based on such dual-spin qubits, establishing them as a natural realisation of erasure qubits within semiconductor architectures. We introduce a hardware-efficient leakage-detection protocol that automatically projects leaked qubits back onto the computational subspace, without the need for measurement feedback or increased classical control overheads. When combined with the XZZX surface code and leakage-aware decoding, we demonstrate a twofold increase in the error correction threshold and achieve orders-of-magnitude reductions in logical error rates. This establishes the singlet-triplet encoding as a practical route toward high-fidelity shuttling and erasure-based, fault-tolerant quantum computation in semiconductor devices.
Minimal-Energy Optimal Control of Tunable Two-Qubit Gates in Superconducting Platforms Using Continuous Dynamical Decoupling
This paper presents a method for creating high-fidelity quantum gates in superconducting quantum computers by combining continuous dynamical decoupling (to suppress noise) with variational optimization techniques to minimize energy while maximizing gate performance. The authors demonstrate their approach on key two-qubit gates like controlled-Z and controlled-X gates, achieving near-perfect fidelity under realistic experimental conditions.
Key Contributions
- Unified scheme combining continuous dynamical decoupling with variational optimal control for high-fidelity superconducting quantum gates
- Demonstration of near-unit fidelity for CZ, CX, and generic entangling gates with low-energy control fields and noise resilience
Full Abstract
We present a unified scheme for generating high-fidelity entangling gates in superconducting platforms by continuous dynamical decoupling (CDD) combined with variational minimal-energy optimal control. During the CDD stage, we suppress residual couplings, calibration drift, and quasistatic noise, resulting in a stable effective Hamiltonian that preserves the designed ZZ interaction provided by tunable couplers. In this stable $\mathrm{SU}(4)$ manifold, we calculate smooth low-energy single-qubit control functions using a variational geodesic optimization process that directly minimizes gate infidelity. We illustrate the methodology by applying it to CZ, CX, and generic entangling gates, achieving virtually unit fidelity and robustness under restricted single-qubit action, with experimentally realistic control fields. These results establish CDD-enhanced variational geometric optimal control as a practical and noise-resilient scheme for designing superconducting entangling gates.
Experimental Realization of Rabi-Driven Reset for Fast Cooling of a High-Q Cavity
This paper demonstrates a new method called Rabi-Driven Reset (RDR) for quickly cooling quantum memory devices to their ground state, achieving reset times over 100 times faster than natural decay. The technique uses engineered interactions between a superconducting qubit and cavity modes to create an efficient cooling pathway without requiring measurements.
Key Contributions
- Developed a measurement-free, hardware-efficient method for fast reset of high-Q bosonic memories that is over 100x faster than intrinsic decay
- Demonstrated engineered coupling that scales with qubit-mode dispersive interaction rather than weak intermode cross-Kerr, enabling fast cooling in weakly coupled architectures
Full Abstract
High-Q bosonic memories are central to hardware-efficient quantum error correction, but their isolation makes fast, high-fidelity reset a persistent bottleneck. Existing approaches either rely on weak intermode cross-Kerr conversion or on measurement-based sequences with substantial latency. Here we demonstrate a hardware-efficient Rabi-Driven Reset (RDR) that implements continuous, measurement-free cooling of a superconducting cavity mode. A strong resonant Rabi drive on a transmon, together with sideband drives on the memory and readout modes detuned by the Rabi frequency, converts the dispersive interaction into an effective Jaynes-Cummings coupling between the qubit dressed states and each mode. This realizes a tunable dissipation channel from the memory to the cold readout bath. Crucially, the engineered coupling scales with the qubit-mode dispersive interaction and the drive amplitude, rather than with the intermode cross-Kerr, enabling fast cooling even in very weakly coupled architectures that deliberately suppress direct mode-mode coupling. We demonstrate RDR of a single photon with a decay time of $1.2\,\mu s$, more than two orders of magnitude faster than the intrinsic lifetime. Furthermore, we reset about 30 thermal photons in about $80\,\mu s$ to a steady-state average photon number of $\bar{n} = 0.045 \pm 0.025$.
Noise-Resilient Quantum Evolution in Open Systems through Error-Correcting Frameworks
This paper studies how quantum error correction codes protect quantum information in realistic noisy environments by modeling quantum systems coupled to thermal baths. The researchers compare different error correction codes (five-qubit, Steane, and toric codes) and find that the five-qubit code performs best across various temperature and noise conditions.
Key Contributions
- Developed a quantitative framework for evaluating quantum error correction codes under realistic thermal noise environments using microscopic system-bath models
- Demonstrated that the five-qubit code consistently outperforms Steane and toric codes in open-system settings across different temperature regimes
- Identified critical evolution times for entangled states where quantum error correction transitions from harmful to beneficial for state preservation
Full Abstract
We analyze quantum state preservation in open quantum systems using quantum error-correcting (QEC) codes that are explicitly embedded into microscopic system-bath models. Instead of abstract quantum channels, we consider multi-qubit registers coupled to bosonic thermal environments, derive a second-order master equation for the reduced dynamics, and use it to benchmark the five-qubit, Steane, and toric codes under local and collective noise. We compute state fidelities for logical qubits as functions of coupling strength, bath temperature, and the number of correction cycles. In the low-temperature regime, we find that repeated error-correction with the five-qubit code strongly suppresses decoherence and relaxation, while in the high-temperature regime, thermal excitations dominate the dynamics and reduce the benefit of all codes, though the five-qubit code still outperforms the Steane and toric codes. For two-qubit Werner states, we identify a critical evolution time before which QEC does not improve fidelity, and this time increases as entanglement grows. After this critical time, QEC does improve fidelity. Comparative analysis further reveals that the five-qubit code (the smallest perfect code) offers consistently higher fidelities than topological and concatenated architectures in these open-system settings. These findings establish a quantitative framework for evaluating QEC under realistic noise environments and provide guidance for developing noise-resilient quantum architectures in near-term quantum technologies.
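As a baseline for what such benchmarks measure, consider a single unprotected qubit under a zero-temperature amplitude-damping channel, a much simpler stand-in for the bosonic thermal baths studied in the paper: its fidelity with the initial $|+\rangle$ state degrades as soon as the damping strength $\gamma$ is nonzero, and repeated QEC cycles must beat this uncorrected decay to be worthwhile. A minimal sketch (illustrative only, not the paper's second-order master equation):

```python
import numpy as np

def amplitude_damping(rho, gamma):
    """One application of the zero-temperature amplitude-damping
    channel, via its two Kraus operators."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return K0 @ rho @ K0.T + K1 @ rho @ K1.T

plus = np.array([1, 1]) / np.sqrt(2)
rho = np.outer(plus, plus)

gamma = 0.1
fid = float(plus @ amplitude_damping(rho, gamma) @ plus)
print(fid)  # 0.5 * (1 + sqrt(1 - gamma)) ≈ 0.9743
```

The closed-form value $F = \tfrac{1}{2}(1 + \sqrt{1-\gamma})$ shows the coherence term decaying as $\sqrt{1-\gamma}$; at finite bath temperature excitation processes would add further decay, which is the regime where the paper finds all codes lose effectiveness.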
Generation of Large Coherent-State Superpositions in Free-Space Optical Pulses
This paper demonstrates the experimental generation of large-amplitude squeezed coherent-state superpositions (cat states) in optical pulses, achieving a record amplitude of 2.47 through controlled mixing of specific quantum states and detection techniques. These non-Gaussian quantum states are essential building blocks for continuous-variable quantum information processing.
Key Contributions
- Achievement of record-breaking amplitude (α=2.47) for squeezed cat states in free-space optical pulses
- Demonstration of protocol using controlled Fock state mixing and homodyne detection heralding
- Significant advancement toward scalable optical GKP states for fault-tolerant photonic quantum computing
Full Abstract
The generation of non-Gaussian quantum states is a key requirement for universal continuous-variable quantum information processing. We report the experimental generation of large-amplitude squeezed coherent-state superpositions (squeezed cat states) on free-space optical pulses, reaching an amplitude of $\alpha = 2.47$, which, to our knowledge, exceeds all previously reported values. Our protocol relies on the controlled mixing of the Fock states $|1\rangle$ and $|2\rangle$ through a tunable beam splitter, followed by heralding via homodyne detection. The resulting state displays three well-resolved negative regions in its Wigner function and achieves a fidelity of $0.53$ with the target state $\propto \hat{S}(z)(|\alpha\rangle - |{-\alpha}\rangle)$, with $\alpha = 2.47$ and squeezing parameter $z = 0.56$. These results constitute a significant milestone for temporal breeding protocols and for the iterative generation of optical GKP states, opening new perspectives for scalable and fault-tolerant photonic quantum architectures.
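In the Fock basis, the unsqueezed target $\propto |\alpha\rangle - |{-\alpha}\rangle$ has support only on odd photon numbers, which is the origin of its non-Gaussian interference structure. A small sketch of those coefficients (the paper's actual target additionally applies the squeezing operator $\hat{S}(z)$, omitted here):

```python
import numpy as np
from math import factorial

def odd_cat(alpha, cutoff=40):
    """Fock-basis coefficients of the normalized odd cat state
    N(|alpha> - |-alpha>), truncated at `cutoff` photons."""
    n = np.arange(cutoff)
    # coherent-state amplitudes <n|alpha> = e^{-|a|^2/2} a^n / sqrt(n!)
    coh = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(
        [float(factorial(k)) for k in n])
    cat = coh - (-1) ** n * coh   # cancels every even Fock component
    return cat / np.linalg.norm(cat)

c = odd_cat(2.47)
print(np.max(np.abs(c[::2])))   # even components vanish exactly: 0.0
print(np.linalg.norm(c))        # normalized: 1.0
```

This odd/even parity structure is what homodyne heralding selects for, and it is also why cat states serve as a resource for breeding GKP states.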
Geometry- and Topology-Informed Quantum Computing: From States to Real-Time Control with FPGA Prototypes
This paper presents a comprehensive framework for quantum computing that bridges theoretical quantum mechanics with practical hardware implementation, covering everything from quantum state geometry to real-time error correction on FPGA platforms. It emphasizes the geometry and topology of quantum states while providing concrete hardware-aware approaches to quantum control systems.
Key Contributions
- Geometry-first approach to quantum computing connecting theoretical foundations to hardware implementation
- Real-time topological error correction with FPGA-based decoders and microarchitectural constraints
- Complete quantum control pipeline from quantum Fisher information geometry to low-latency streaming systems
- Integration of Shor's algorithm implementation with practical hardware considerations
Full Abstract
This book gives a geometry-first, hardware-aware route through quantum-information workflows, with one goal: connect states, circuits, and measurement to deterministic classical pipelines that make hybrid quantum systems run. Part 1 develops the backbone (essential linear algebra, the Bloch-sphere viewpoint, differential-geometric intuition, and quantum Fisher information geometry) so evolution can be read as motion on curved spaces and measurement as statistics. Part 2 reframes circuits as dataflow graphs: measurement outcomes are parsed, aggregated, and reduced to small linear-algebra updates that schedule the next pulses, highlighting why low-latency, low-jitter streaming matters. Part 3 treats multi-qubit structure and entanglement as geometry and computation, including teleportation, superdense coding, entanglement detection, and Shor's algorithm via quantum phase estimation. Part 4 focuses on topological error correction and real-time decoding (Track A): stabilizer codes, surface-code decoding as "topology -> graph -> algorithm", and Union-Find decoders down to microarchitectural/RTL constraints, with verification, fault injection, and host/control-stack integration under product metrics (bounded latency, p99 tails, fail-closed policies, observability). Optional Track C covers quantum cryptography and streaming post-processing (BB84/E91, QBER/abort rules, privacy amplification, and zero-knowledge/post-quantum themes), emphasizing FSMs, counters, and hash pipelines. Appendices provide visualization-driven iCEstick labs (switch-to-bit conditioning, fixed-point phase arithmetic, FSM sequencing, minimal control ISAs), bridging principles to implementable systems.
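Track A's Union-Find decoders are built around the classic disjoint-set data structure, whose near-constant amortized operations are what make bounded-latency, real-time decoding plausible in hardware. A minimal software sketch of that core structure (illustrative; not the book's RTL implementation):

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by size --
    the classical core of the Union-Find surface-code decoder."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # merge smaller cluster into larger
        self.size[ra] += self.size[rb]

# merging syndrome clusters: {0,1,2} and {4,5} end up as separate clusters
uf = UnionFind(6)
for a, b in [(0, 1), (1, 2), (4, 5)]:
    uf.union(a, b)
print(uf.find(0) == uf.find(2), uf.find(0) == uf.find(4))  # True False
```

In the decoder, clusters of syndrome defects grow and merge via exactly these `union` calls, and `find` answers whether two defects already belong to one correctable cluster.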
Sparse quantum state preparation with improved Toffoli cost
This paper develops a more efficient method for preparing sparse quantum states (states with only a small number of non-zero components) by improving the circuit implementation of the isometry mapping step, reducing the required number of Toffoli gates by roughly a factor of log(s)/2 compared to previous methods.
Key Contributions
- Efficient algorithm for implementing isometry mappings in sparse state preparation with ~2s Toffoli gate cost
- Log(s)/2 improvement factor over state-of-the-art methods for sparse quantum state preparation
- Optimization strategies for joint dense-state preparation and isometry steps, particularly for real-coefficient states
Full Abstract
The preparation of quantum states is one of the most fundamental tasks in quantum computing, and a key primitive in many quantum algorithms. Of particular interest to areas such as quantum simulation and linear-system solvers are sparse quantum states, which contain only a small number $s$ of non-zero computational basis states compared to a generic state. In this work, we present an approach that prepares $s$-sparse states on $n$ qubits, reducing the number of Toffoli gates required compared to prior art. We work in the established framework of first preparing a dense state on a $\lceil{\log(s)}\rceil$-qubit sub-register, and then mapping this state to the target state via an isometry, with the latter step dominating the cost of the full algorithm. The speed-up is achieved by designing an efficient algorithm for finding and implementing the isometry. The worst-case Toffoli cost of our isometry circuit, which may be viewed as a batched version of an approach by Malvetti et al., is essentially $2s$ for sufficiently large values of $n$, yielding roughly a $\log(s)/2$ improvement factor over the state-of-the-art. In numerical benchmarks on randomly chosen states, the cost is closer to $s$. With the improved isometry circuit, we examine the dense-state preparation step and present ways to optimize the joint cost of both steps, particularly in the case of target states with purely real coefficients, by outsourcing some sub-tasks from the dense-state preparation to the isometry.
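The claimed improvement factor follows directly from the stated scalings: if the prior isometry cost is of order $s\log(s)$ Toffolis and the new batched circuit costs essentially $2s$, the ratio is $\log(s)/2$. A back-of-envelope check (assumed scaling only, not the paper's exact gate counts):

```python
import numpy as np

# Assumed scaling: prior isometry cost ~ s * log2(s) Toffolis,
# new batched circuit ~ 2s, so the improvement factor is log2(s) / 2.
for s in (2 ** 6, 2 ** 10, 2 ** 16):
    prior, new = s * np.log2(s), 2 * s
    print(s, prior / new)   # 3.0, 5.0, 8.0
```

The factor therefore grows slowly with sparsity, and the numerical benchmarks in the paper (cost closer to $s$ than $2s$ on random states) would push it somewhat higher in practice.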
Network-Based Quantum Computing: an efficient design framework for many-small-node distributed fault-tolerant quantum computing
This paper proposes a network-based quantum computing framework for distributed fault-tolerant quantum computation using many small-scale nodes that can each hold only a few logical qubits. The approach moves computational data continuously throughout the network and demonstrates improved efficiency compared to traditional circuit-based and measurement-based quantum computing methods.
Key Contributions
- Novel network-based quantum computing framework for distributed fault-tolerant quantum computation
- Demonstration of improved execution times and node efficiency compared to existing approaches
- Architecture design methodology for exploiting redundancy in many small fault-tolerant nodes
Full Abstract
In fault-tolerant quantum computing, a large number of physical qubits are required to construct a single logical qubit, and a single quantum node may be able to hold only a small number of logical qubits. In such a case, the idea of distributed fault-tolerant quantum computing (DFTQC) is important to demonstrate large-scale quantum computation using small-scale nodes. However, the design of distributed systems on small-scale nodes, where each node can store only one or a few logical qubits for computation, has not been explored well yet. In this paper, we propose network-based quantum computation (NBQC) to efficiently realize distributed fault-tolerant quantum computation using many small-scale nodes. A key idea of NBQC is to let computational data continuously move throughout the network while maintaining the connectivity to other nodes. We numerically show that, for practical benchmark tasks, our method achieves shorter execution times than circuit-based strategies and more node-efficient constructions than measurement-based quantum computing. Also, if we are allowed to specialize the network to the structure of quantum programs, such as peak access frequencies, the number of nodes can be significantly reduced. Thus, our methods provide a foundation in designing DFTQC architecture exploiting the redundancy of many small fault-tolerant nodes.
Many-Body Effects in Dark-State Laser Cooling
This paper develops a theoretical framework for understanding laser cooling of trapped ions, specifically how cooling efficiency changes when multiple ions are present. The research provides guidelines for optimizing the cooling process needed to prepare ions in their quantum ground state, which is essential for high-fidelity quantum operations.
Key Contributions
- Unified many-body theory explaining ion-number-dependent cooling behavior
- Analytic results for both weak and strong coupling regimes with experimental optimization guidelines
- Identification of collective dynamics that enhance cooling rates in large ion crystals
Full Abstract
We develop a unified many-body theory of two-photon dark-state laser cooling, the workhorse for preparing trapped ions close to their motional quantum ground state. For ions with a $\Lambda$ level structure, driven by Raman lasers, we identify an ion-number-dependent crossover between weak and strong coupling where both the cooling rate and final temperature are simultaneously optimized. We obtain simple analytic results in both extremes: In the weak coupling limit, we show a Lorentzian spin-absorption spectrum determines the cooling rate and final occupation of the motional state, which are both independent of the number of ions. We also highlight the benefit of including an additional spin-dependent force in this case. In the strong coupling regime, our theory reveals the role of collective dynamics arising from phonon exchange between dark and bright states, allowing us to explain the enhancement of the cooling rate with increasing ion number. Our analytic results agree closely with exact numerical simulations and provide experimentally accessible guidelines for optimizing cooling in large ion crystals, a key step toward scalable, high-fidelity trapped-ion quantum technologies.
Bidirectional Decoding for Concatenated Quantum Hamming Codes
This paper develops a new 'bidirectional' decoding algorithm for concatenated quantum Hamming codes that uses information from higher-level error syndromes to improve lower-level error correction decisions. The method significantly improves error correction thresholds and maintains better distance scaling compared to conventional local decoding approaches.
Key Contributions
- Introduction of bidirectional decoding strategy that leverages higher-level syndrome information to improve lower-level error correction
- Demonstration of improved error threshold from 1.56% to 4.35% for concatenated quantum Hamming codes
- Preservation of full 3^L code-distance scaling across multiple concatenation levels
Full Abstract
High-rate concatenated quantum codes offer a promising pathway toward fault-tolerant quantum computation, yet designing efficient decoders that fully exploit their error-correction capability remains a significant challenge. In this work, we introduce a hard-decision decoder for concatenated quantum Hamming codes with time complexity polynomial in the block length. This decoder overcomes the limitations of conventional local decoding by leveraging higher-level syndrome information to revise lower-level recovery decisions -- a strategy we refer to as bidirectional decoding. For the concatenated $[[15,7,3]]$ quantum Hamming code under independent bit-flip noise, the bidirectional decoder improves the threshold from approximately $1.56\%$ to $4.35\%$ compared with standard local decoding. Moreover, the decoder empirically preserves the full $3^{L}$ code-distance scaling for at least three levels of concatenation, resulting in substantially faster logical-error suppression than the $2^{L+1}$ scaling offered by local decoders. Our results can enhance the competitiveness of concatenated-code architectures for low-overhead fault-tolerant quantum computation.
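The practical gap between the two distance scalings grows quickly with concatenation level. A quick tabulation (illustrative arithmetic only; a distance-$d$ code corrects $\lfloor (d-1)/2 \rfloor$ errors):

```python
# Effective code distance after L levels of concatenation:
# bidirectional decoding preserves the full 3**L scaling, while a
# local decoder achieves only about 2**(L+1) (per the abstract).
for L in (1, 2, 3):
    d_bi, d_local = 3 ** L, 2 ** (L + 1)
    print(L, d_bi, d_local, (d_bi - 1) // 2, (d_local - 1) // 2)
```

Already at three levels the bidirectional decoder's distance (27, correcting 13 errors) nearly doubles the local decoder's effective distance (16, correcting 7), which is what drives the faster logical-error suppression.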
Obfuscation of Arbitrary Quantum Circuits
This paper presents the first quantum obfuscation scheme that can hide the internal structure of arbitrary quantum circuits while preserving their functionality, extending beyond previous work that only handled specific types of quantum operations. The authors introduce a novel cryptographic primitive called subspace-preserving strong pseudorandom unitaries (spsPRU) to achieve this general obfuscation.
Key Contributions
- First quantum ideal obfuscation scheme for arbitrary quantum circuits supporting quantum inputs and outputs
- Introduction of subspace-preserving strong pseudorandom unitary (spsPRU) primitive
- Extension of obfuscation beyond unitaries to general completely positive trace-preserving maps
Full Abstract
Program obfuscation aims to conceal a program's internal structure while preserving its functionality. A central open problem is whether an obfuscation scheme for arbitrary quantum circuits exists. Despite several efforts having been made toward this goal, prior works have succeeded only in obfuscating quantum circuits that implement either pseudo-deterministic functions or unitary transformations. Although unitary transformations already include a broad class of quantum computation, many important quantum tasks, such as state preparation and quantum error-correction, go beyond unitaries and fall within general completely positive trace-preserving maps. In this work, we construct the first quantum ideal obfuscation scheme for arbitrary quantum circuits that support quantum inputs and outputs in the classical oracle model assuming post-quantum one-way functions, thereby resolving an open problem posed in Bartusek et al. (STOC 2023), Bartusek, Brakerski, and Vaikuntanathan (STOC 2024), and Huang and Tang (FOCS 2025). At the core of our construction lies a novel primitive that we introduce, called the subspace-preserving strong pseudorandom unitary (spsPRU). An spsPRU is a family of efficient unitaries that fix every vector in a given linear subspace $S$, while acting as a Haar random unitary on the orthogonal complement $S^\perp$ under both forward and inverse oracle queries. Furthermore, by instantiating the classical oracle model with the ideal obfuscation scheme for classical circuits proposed by Jain et al. (CRYPTO 2023) and later enhanced by Bartusek et al. (arxiv:2510.05316), our obfuscation scheme can also be realized in the quantumly accessible pseudorandom oracle model.
Breaking the Orthogonality Barrier in Quantum LDPC Codes
This paper develops improved quantum low-density parity-check (LDPC) codes that overcome traditional trade-offs between orthogonality constraints and code performance. The authors construct quantum error-correcting codes with better structural properties (large girth, regular degree distributions) while maintaining the orthogonality requirements unique to quantum codes, demonstrating a specific code that achieves very low error rates under realistic noise conditions.
Key Contributions
- Breaking the conventional trade-off between orthogonality, regularity, girth, and minimum distance in quantum LDPC codes through controlled permutation matrices
- Demonstrating a concrete girth-8, (3,12)-regular quantum LDPC code with 9216 physical qubits encoding 4612 logical qubits that achieves frame error rates as low as 10^-8
View Full Abstract
Classical low-density parity-check (LDPC) codes are a widely deployed and well-established technology, forming the backbone of modern communication and storage systems. It is well known that, in this classical setting, increasing the girth of the Tanner graph while maintaining regular degree distributions leads simultaneously to good belief-propagation (BP) decoding performance and large minimum distance. In the quantum setting, however, this principle does not directly apply because quantum LDPC codes must satisfy additional orthogonality constraints between their parity-check matrices. When one enforces both orthogonality and regularity in a straightforward manner, the girth is typically reduced and the minimum distance becomes structurally upper bounded. In this work, we overcome this limitation by using permutation matrices with controlled commutativity and by restricting the orthogonality constraints to only the necessary parts of the construction, while preserving regular check-matrix structures. This design breaks the conventional trade-off between orthogonality, regularity, girth, and minimum distance, allowing us to construct quantum LDPC codes with large girth and without the usual distance upper bounds. As a concrete demonstration, we construct a girth-8, (3,12)-regular $[[9216,4612, \leq 48]]$ quantum LDPC code and show that, under BP decoding combined with a low-complexity post-processing algorithm, it achieves a frame error rate as low as $10^{-8}$ on the depolarizing channel with error probability $4 \%$.
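The role of commuting permutation blocks can be seen in the standard two-block CSS layout: with $H_X = [A\,|\,B]$ and $H_Z = [B^T\,|\,A^T]$, orthogonality reduces to $AB + BA = 0 \pmod 2$, which holds automatically when $A$ and $B$ are built from powers of the same cyclic shift. A toy check (shift sizes arbitrary, not the paper's construction):

```python
import numpy as np

def shift(n, s):
    """n x n circulant permutation matrix: cyclic shift by s columns."""
    return np.roll(np.eye(n, dtype=int), s, axis=1)

# Two blocks built from powers of the same cyclic shift, so A and B commute.
n = 8
A = (shift(n, 1) + shift(n, 3)) % 2
B = (shift(n, 2) + shift(n, 5)) % 2

H_X = np.hstack([A, B])          # X-type checks
H_Z = np.hstack([B.T, A.T])      # Z-type checks

# CSS orthogonality holds automatically: H_X H_Z^T = AB + BA = 2AB = 0 (mod 2).
assert not ((H_X @ H_Z.T) % 2).any()
```

The paper's contribution is precisely about doing this with *controlled* commutativity so that girth and distance are not sacrificed; the snippet only shows why commutation makes the orthogonality constraint free.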
Optimal logical Bell measurements on stabilizer codes with linear optics
This paper develops optimal methods for performing Bell measurements on logical qubits encoded in photonic quantum error-correcting codes using linear optics. The authors prove that any logical Bell measurement can be mapped to a single physical Bell measurement and demonstrate schemes that achieve theoretical upper bounds for success probability across multiple stabilizer codes.
Key Contributions
- Proved that any logical Bell measurement on stabilizer codes maps to a single physical Bell measurement on any qubit pair
- Established general upper bounds on success probability for logical Bell measurements with linear optics
- Developed optimal schemes achieving theoretical bounds for multiple quantum error-correcting codes including surface codes and Steane codes
View Full Abstract
Bell measurements (BMs) are ubiquitous in quantum information and technology. They are basic elements for quantum communication, computation, and error correction. In particular, when performed on logical qubits encoded in physical photonic qubits, they allow for a read-out of stabilizer syndrome information to enhance loss tolerance in qubit-state transmission and fusion. However, even in an ideal setting without photon loss, BMs cannot be done perfectly based on the simplest experimental toolbox of linear optics. Here we demonstrate that any logical BM on stabilizer codes can always be mapped onto a single physical BM performed on any qubit pair from the two codes. As a necessary condition for the success of a logical BM, this provides a general upper bound on its success probability, especially ruling out the possibility that the stabilizer information obtainable from only partially succeeding, physical linear-optics BMs could be combined into the full logical stabilizer information. We formulate sufficient criteria to find schemes for which a single successful BM on the physical level will always allow one to obtain the full logical information by suitably adapting the subsequent physical measurements. Our approach based on stabilizer group theory is generally applicable to any stabilizer code, which we demonstrate for quantum parity, five-qubit, standard and rotated planar surface, tree, and seven-qubit Steane codes. Our schemes attain the general upper bound for all these codes, while this bound had previously only been reached for the quantum parity code.
Single-Period Floquet Control of Bosonic Codes with Quantum Lattice Gates
This paper introduces a new method for controlling bosonic quantum codes using single-period Floquet protocols, eliminating the need for slow multi-thousand period driving sequences. The approach enables efficient preparation of bosonic codes and implementation of logical gates using quantum lattice gates that exploit Josephson junction nonlinearity.
Key Contributions
- Development of single-period Floquet method for direct unitary synthesis in bosonic systems
- Demonstration of high-fidelity bosonic code preparation and logical gate implementation using quantum lattice gates
View Full Abstract
Bosonic codes constitute a promising route to fault-tolerant quantum computing. Existing Floquet protocols enable analytical construction of bosonic codes but typically rely on slow adiabatic ramps with thousands of driving periods. In this work, we circumvent this bottleneck by introducing an analytical and deterministic Floquet method that directly synthesizes arbitrary unitaries within a single period. The phase-space unitary ensembles generated by our approach reproduce the Haar-random statistics, enabling practical pseudorandom unitaries in continuous-variable systems. We prepare various prototypical bosonic codes from vacuum and implement single-qubit logical gates with high fidelities using quantum lattice gates. By harnessing the full intrinsic nonlinearity of Josephson junctions, quantum lattice gates decompose quantum circuits into primitive operations for efficient continuous-variable quantum computing.
Quantum CSS LDPC Codes based on Dyadic Matrices for Belief Propagation-based Decoding
This paper develops a new method for constructing quantum error-correcting codes called quantum CSS LDPC codes using dyadic matrices. The codes are designed to work with a specific type of decoder that can better handle problematic short cycles in the code structure by concentrating them at single points.
Key Contributions
- Algebraic construction method for quantum LDPC codes using dyadic matrices
- CSS framework extension with compatibility conditions for CAMEL-ensemble quaternary belief propagation decoder
View Full Abstract
Quantum low-density parity-check (QLDPC) codes provide a practical balance between error-correction capability and implementation complexity in quantum error correction (QEC). In this paper, we propose an algebraic construction based on dyadic matrices for designing both classical and quantum LDPC codes. The method first generates classical binary quasi-dyadic LDPC codes whose Tanner graphs have girth 6. It is then extended to the Calderbank-Shor-Steane (CSS) framework, where the two component parity-check matrices are built to satisfy the compatibility condition required by the recently introduced CAMEL-ensemble quaternary belief propagation decoder. This compatibility condition ensures that all unavoidable cycles of length 4 are assembled in a single variable node, allowing the mitigation of their detrimental effects by decimating that variable node.
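Dyadic matrices have convenient algebraic structure for such constructions: the dyadic permutation $D_i$, with a 1 at $(j,k)$ iff $j \oplus k = i$, satisfies $D_i D_j = D_{i \oplus j}$, so all $D_i$ commute, are symmetric, and are self-inverse. A quick numerical check of these properties (our illustration, not the paper's code design):

```python
import numpy as np

def dyadic(n, i):
    """n x n dyadic permutation: entry (j, k) = 1 iff j XOR k = i (n a power of 2)."""
    M = np.zeros((n, n), dtype=int)
    for j in range(n):
        M[j, j ^ i] = 1
    return M

n = 8
D = [dyadic(n, i) for i in range(n)]

# Dyadic permutations form an abelian group: D_i D_j = D_{i XOR j};
# every D_i is symmetric and self-inverse.
assert (D[3] @ D[5] == D[3 ^ 5]).all()
assert (D[6] == D[6].T).all()
assert (D[6] @ D[6] == np.eye(n, dtype=int)).all()
```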
Asymptotically good CSS codes that realize the logical transversal Clifford group fault-tolerantly
This paper develops new methods for constructing CSS quantum error-correcting codes that can perform fault-tolerant logical operations using transversal gates, specifically focusing on implementing the Clifford group of quantum operations. The work provides both theoretical frameworks and demonstrates that these codes can achieve good scaling properties while maintaining fault tolerance.
Key Contributions
- Framework for constructing CSS codes supporting fault-tolerant logical transversal Z-rotations
- Demonstration of asymptotically good CSS codes realizing the logical transversal Clifford group
- Analysis of CSS-T codes including necessary conditions and revised characterizations for logical T gate implementation
View Full Abstract
This paper introduces a framework for constructing Calderbank-Shor-Steane (CSS) codes that support fault-tolerant logical transversal $Z$-rotations. Using this framework, we obtain asymptotically good CSS codes that fault-tolerantly realize the logical transversal Clifford group. Furthermore, investigating CSS-T codes, we: (a) demonstrate asymptotically good CSS-T codes wherein the transversal $T$ realizes the logical transversal $S^{\dagger}$; (b) show that the condition $C_2 \ast C_1 \subseteq C_1^{\perp}$ is necessary but not sufficient for CSS-T codes; and (c) revise the characterizations of CSS-T codes wherein the transversal $T$ implements the logical identity and the logical transversal $T$, respectively.
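The CSS-T condition $C_2 \ast C_1 \subseteq C_1^{\perp}$ involves the Schur (component-wise) product of codewords, and is easy to check by brute force on small codes. A minimal checker, demonstrated on a toy case where the condition holds (the [4,1] repetition code; our example, not one of the paper's codes):

```python
import numpy as np
from itertools import product

def codewords(G):
    """All codewords of the binary code generated by the rows of G."""
    k = G.shape[0]
    return {tuple((np.array(m) @ G) % 2) for m in product([0, 1], repeat=k)}

def schur_condition(G2, G1):
    """Check C2 * C1 <= C1^perp (the necessary CSS-T condition)."""
    C1, C2 = codewords(G1), codewords(G2)
    for a in C2:
        for b in C1:
            s = np.array(a) * np.array(b)             # Schur product
            if any((s @ np.array(c)) % 2 for c in C1):  # must lie in C1^perp
                return False
    return True

# Toy example: C1 = C2 = [4,1] repetition code. C2 * C1 = {0000, 1111},
# and 1111 is orthogonal to every codeword of C1 (even length).
G = np.array([[1, 1, 1, 1]])
assert schur_condition(G, G)
```

The paper shows this condition is necessary but not sufficient; the checker above only tests the necessary part.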
Symmetry-Adapted State Preparation for Quantum Chemistry on Fault-Tolerant Quantum Computers
This paper develops efficient methods to prepare quantum states with proper symmetries for quantum chemistry calculations on fault-tolerant quantum computers. The approach uses symmetry projectors that significantly improve the success rate of quantum phase estimation while requiring orders of magnitude fewer resources than the main computation.
Key Contributions
- Development of resource-efficient symmetry projectors using linear combination of unitaries and generalized quantum signal processing for fault-tolerant quantum chemistry
- Demonstration that symmetry filtering reduces overall computational cost by 3-4 orders of magnitude compared to unfiltered approaches while substantially increasing quantum phase estimation success probability
View Full Abstract
We present systematic and resource-efficient constructions of continuous symmetry projectors, particularly $U(1)$ particle number and $SU(2)$ total spin, tailored for fault-tolerant quantum computations. Our approach employs a linear combination of unitaries (LCU) as well as generalized quantum signal processing (GQSP and GQSVT) to implement projectors. These projectors can then be coherently applied as state filters prior to quantum phase estimation (QPE). We analyze their asymptotic gate complexities for explicit circuit realizations. For the particle number and $S_z$ symmetries, GQSP offers favorable resource usage features owing to its low ancilla qubit requirements and robustness to finite precision rotation gate synthesis. For the total spin projection, the structured decomposition of $\hat{P}_{S,M_S}$ reduces the projector T gate count. Numerical simulations show that symmetry filtering substantially increases the QPE success probability, leading to a lower overall cost compared to that of unfiltered approaches across representative molecular systems. Resource estimates further indicate that the cost of symmetry filtering is $3$ to $4$ orders of magnitude lower than that of the subsequent phase estimation step. This advantage is especially relevant in large, strongly correlated systems, such as FeMoco, a standard strongly correlated open-shell benchmark. For FeMoco, the QPE cost is estimated at ${\sim}10^{10}$ T gates, while our symmetry projector requires only ${\sim}10^{6}$--$10^{7}$ T gates. These results establish continuous-symmetry projectors as practical and scalable tools for state preparation in quantum chemistry and provide a pathway toward realizing more efficient fault-tolerant quantum simulations.
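The idea behind an LCU-based $U(1)$ projector is that an equal-weight sum of phase rotations $e^{2\pi i k (\hat{N} - N)/M}$ over $k = 0, \dots, M-1$ averages to a Kronecker delta on the particle number. A small classical sketch on a 3-qubit register (our illustration of the principle, not the paper's circuit):

```python
import numpy as np

n = 3                                  # a 3-qubit register
dim = 2 ** n
# Occupation-number operator: eigenvalue = Hamming weight of the basis state.
N_diag = np.array([bin(b).count("1") for b in range(dim)])

def number_projector(N_target):
    """U(1) projector built as an equal-weight LCU of n+1 phase rotations."""
    M = n + 1
    diag = sum(np.exp(2j * np.pi * k * (N_diag - N_target) / M)
               for k in range(M)) / M
    return np.diag(diag)

P1 = number_projector(1)
state = np.ones(dim) / np.sqrt(dim)    # uniform superposition over all weights
filtered = P1 @ state

# Only the Hamming-weight-1 basis states survive the filter.
support = [b for b in range(dim) if abs(filtered[b]) > 1e-9]
assert support == [1, 2, 4]
assert np.allclose(P1 @ P1, P1)        # a genuine projector
```

On a quantum computer the same sum is realized coherently with an ancilla register (LCU) or via GQSP; the classical average above only demonstrates why the construction projects.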
Toolchain for shuttling trapped-ion qubits in segmented traps
This paper presents a computational toolchain for optimizing the movement of trapped-ion qubits between different zones in segmented radiofrequency traps without causing unwanted vibrations. The framework helps design voltage waveforms that enable fast, reliable qubit transport in complex trap geometries for scalable quantum computing.
Key Contributions
- Numerical toolchain for generating optimized voltage waveforms for ion shuttling in segmented traps
- Framework supporting arbitrary trap geometries including junctions and multi-zone layouts with experimental constraints
- Validation methodology comparing predicted and measured secular frequencies with performance optimization for complex architectures
View Full Abstract
Scalable trapped-ion quantum computing requires fast and reliable transport of ions through complex, segmented radiofrequency trap architectures without inducing excessive motional excitation. We present a numerical toolchain for the systematic generation of time-dependent electrode voltages enabling fast, low-excitation ion shuttling in segmented radiofrequency traps. Based on a model of the trap electrode geometry, the framework combines an electrostatic field solver, efficient unconstrained optimization, waveform postprocessing, and dynamical simulations of ion motion to compute voltage waveforms that realize prescribed transport trajectories while respecting experimental constraints such as voltage limits and bandwidth. The toolchain supports arbitrary trap geometries, including junctions and multi-zone layouts, and allows for the flexible incorporation of optimization objectives. We provide a detailed assessment of the accuracy of the framework by investigating its numerical stability and by comparing measured and predicted secular frequencies. The framework is optimized for numerical performance, enabling rapid numerical prototyping of trap architectures of increasing complexity. As application examples, we apply the framework to the transport of a potential well along a linear, uniformly segmented trap, and we compute a solution for shuttling a potential well around the corner of an X-type trap junction. The presented approach provides an extensible and highly efficient numerical foundation for designing and validating transport protocols in current and next-generation trapped-ion processors.
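The core numerical step — solving for electrode voltages that realize a prescribed potential well, then sweeping the well to build a transport waveform — can be sketched in one dimension. Everything here (Gaussian electrode potentials, grid, trap frequency) is a stand-in for the toolchain's field-solver output and constrained optimizer:

```python
import numpy as np

# Toy 1-D trap: each electrode contributes a fixed spatial potential (Gaussians
# stand in for a real electrostatic field solver), scaled by its voltage.
x = np.linspace(-1.0, 1.0, 200)
centers = np.linspace(-1.0, 1.0, 9)                  # 9 segmented electrodes
Phi = np.exp(-((x[:, None] - centers[None, :]) / 0.25) ** 2)   # (200, 9)

def voltages_for_well(x0, omega=40.0):
    """Least-squares electrode voltages creating a harmonic well at x0."""
    target = 0.5 * omega * (x - x0) ** 2
    v, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return v

# A transport waveform: sweep the well minimum along a prescribed trajectory.
trajectory = np.linspace(-0.5, 0.5, 50)
V = np.array([voltages_for_well(x0) for x0 in trajectory])   # (50, 9)

# The realized potential's minimum tracks the prescribed trajectory.
mins = [x[np.argmin(Phi @ v)] for v in V]
assert np.allclose(mins, trajectory, atol=0.1)
```

The real toolchain additionally enforces voltage and bandwidth limits, handles 3-D geometries with junctions, and simulates the resulting ion dynamics; none of that is modeled here.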
A dataflow programming framework for linear optical distributed quantum computing
This paper introduces a graphical programming framework that combines linear optics, quantum computing theory, and dataflow programming to design and verify distributed quantum computing systems that use photons to connect quantum processors. The framework enables formal analysis and optimization of networked quantum architectures with classical control systems.
Key Contributions
- Development of a unified graphical framework integrating linear optics, ZX-calculus, and dataflow programming for distributed quantum computing
- Classification of entangling photonic fusion measurements with novel error correction flow structures
- Correctness proofs for repeat-until-success protocols enabling arbitrary fusions
- Construction of universal quantum computing architectures using practical optical components with deterministic operation guarantees
View Full Abstract
Photonic systems offer a promising platform for interconnecting quantum processors and enabling scalable, networked architectures. Designing and verifying such architectures requires a unified formalism that integrates linear algebraic reasoning with probabilistic and control-flow structures. In this work, we introduce a graphical framework for distributed quantum computing that brings together linear optics, the ZX-calculus, and dataflow programming. Our language supports the formal analysis and optimization of distributed protocols involving both qubits and photonic modes, with explicit interfaces for classical control and feedforward, all expressed within a synchronous dataflow model with discrete-time dynamics. Within this setting, we classify entangling photonic fusion measurements, show how their induced Pauli errors can be corrected via a novel flow structure for fusion networks, and establish correctness proofs for new repeat-until-success protocols enabling arbitrary fusions. Layer by layer, we construct qubit architectures incorporating practical optical components such as beam splitters, switches, and photon sources, with graphical proofs that they are deterministic and support universal quantum computation. Together, these results establish a foundation for verifiable compilation and automated optimization in networked quantum computing.
Extending Qubit Coherence Time via Hybrid Dynamical Decoupling
This paper presents a hybrid approach combining dynamical decoupling pulses with bath spin polarization to significantly extend qubit coherence times by 2-3 orders of magnitude. The technique is demonstrated using the central spin model applicable to GaAs quantum dots and similar quantum systems.
Key Contributions
- Development of hybrid dynamical decoupling technique combining pulsed DD with bath spin polarization
- Demonstration of 2-3 orders of magnitude improvement in qubit coherence time
- Application to practical quantum systems including GaAs/AlGaAs and silicon-based platforms
View Full Abstract
Dynamical decoupling (DD) and bath engineering are two parallel techniques employed to mitigate qubit decoherence resulting from their unavoidable coupling to the environment. Here, we present a hybrid DD approach that integrates pulsed DD with bath spin polarization to enhance qubit coherence within the central spin model. This model, which can be realized using GaAs semiconductor quantum dots or analogous quantum simulators, demonstrates a significant extension of the central spin's coherence time, by approximately 2 to 3 orders of magnitude compared with the free-induction decay time, with the dominant contribution coming from DD and a moderate improvement from spin-bath polarization. This study, which integrates uniaxial dynamical decoupling and auxiliary bath-spin engineering, paves the way for prolonging coherence times in various practical quantum systems, including GaAs/AlGaAs, silicon, and Si/SiGe, and holds substantial promise for applications in quantum information processing.
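The basic mechanism — pulsed DD refocusing slowly fluctuating bath noise — is easy to reproduce numerically. A minimal dephasing simulation with Ornstein-Uhlenbeck noise and CPMG π pulses (our toy model; the noise parameters are arbitrary and the paper's central-spin bath and polarization step are not modeled):

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence(n_pulses, T=2.0, steps=2000, tau_c=0.2, sigma=5.0, runs=300):
    """Qubit coherence |<exp(i*phi)>| after time T of Ornstein-Uhlenbeck
    dephasing noise, with n_pulses equally spaced pi pulses (CPMG);
    n_pulses = 0 is free induction decay."""
    dt = T / steps
    t = (np.arange(steps) + 0.5) * dt
    if n_pulses:
        pulse_times = (np.arange(n_pulses) + 0.5) * T / n_pulses
        sign = (-1.0) ** np.searchsorted(pulse_times, t)  # toggling frame
    else:
        sign = np.ones(steps)
    decay = np.exp(-dt / tau_c)
    kick = sigma * np.sqrt(1 - decay ** 2)
    w = rng.normal(0.0, sigma, size=runs)        # stationary OU initial draw
    phase = np.zeros(runs)
    for i in range(steps):
        phase += sign[i] * w * dt
        w = w * decay + kick * rng.normal(size=runs)
    return abs(np.mean(np.exp(1j * phase)))

# Dense pulses refocus the slow bath: coherence rises dramatically.
c_free, c_dd = coherence(0), coherence(32)
assert c_dd > c_free + 0.2
```

Bath polarization, the paper's second ingredient, would enter this picture by shrinking `sigma`; the hybrid scheme combines both effects.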
Learning Better Error Correction Codes with Hybrid Quantum-Assisted Machine Learning
This paper develops a hybrid classical-quantum machine learning approach to automatically discover better quantum error correction codes. The method combines reinforcement learning with quantum device testing to find stabilizer codes optimized for specific hardware errors and photon loss.
Key Contributions
- Hybrid classical-quantum reinforcement learning algorithm for error correction code discovery
- Device-specific error correction code optimization using commercial quantum hardware
View Full Abstract
Quantum error correction is one of the fundamental building blocks of digital quantum computation. The Quantum Lego formalism has introduced a systematic way of constructing new stabilizer codes out of basic lego-like building blocks, which in previous work we have used to generate improved error correcting codes via an automated reinforcement learning process. Here, we take this a step further and show the use of a hybrid classical-quantum algorithm. We combine classical reinforcement learning with calls to two commercial quantum devices to search for a stabilizer code to correct errors specific to the device, as well as an induced photon loss error.
Mechanical Resonator-based Quantum Computing
This paper demonstrates a new quantum computing architecture that uses mechanical resonators (like acoustic wave devices) controlled by superconducting qubits to perform quantum computations. The researchers show they can implement universal quantum gates and run quantum algorithms like the quantum Fourier transform using this hybrid mechanical-superconducting system.
Key Contributions
- Demonstration of universal quantum gate set using mechanical resonators controlled by superconducting qubits
- Implementation of quantum Fourier transform and quantum period finding algorithms on mechanical modes
- New hybrid architecture combining mechanical systems with superconducting circuits for quantum computing
View Full Abstract
Hybrid quantum systems combine the unique advantages of different physical platforms with the goal of realizing more powerful and practical quantum information processing devices. Mechanical systems, such as bulk acoustic wave resonators, feature a large number of highly coherent harmonic modes in a compact footprint, which complements the strong nonlinearities and fast operation times of superconducting quantum circuits. Here, we demonstrate an architecture for mechanical resonator-based quantum computing, in which a superconducting qubit is used to perform quantum gates on a collection of mechanical modes. We show the implementation of a universal gate set, composed of single-qubit gates and controlled arbitrary-phase gates, and showcase their use in the quantum Fourier transform and quantum period finding algorithms. These results pave the way toward using mechanical systems to build crucial components for future quantum technologies, such as quantum random-access memories.
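The quantum Fourier transform demonstrated on the mechanical modes uses exactly the gate set shown above (single-qubit gates plus controlled arbitrary-phase gates). As a sanity check, the textbook QFT circuit built from those primitives reproduces the DFT matrix — this is our own numerical reconstruction, not the experiment's control code:

```python
import numpy as np

n = 3
N = 2 ** n

def on_qubit(gate, q):
    """Single-qubit gate on qubit q (qubit 0 = most significant bit)."""
    M = np.array([[1.0]])
    for i in range(n):
        M = np.kron(M, gate if i == q else np.eye(2))
    return M

def cphase(a, b, k):
    """Controlled phase e^{2 pi i / 2^k} between qubits a and b (diagonal)."""
    d = np.ones(N, dtype=complex)
    for x in range(N):
        if (x >> (n - 1 - a)) & 1 and (x >> (n - 1 - b)) & 1:
            d[x] = np.exp(2j * np.pi / 2 ** k)
    return np.diag(d)

def swap(a, b):
    """SWAP of qubits a and b as a permutation matrix."""
    M = np.zeros((N, N))
    for x in range(N):
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
        bits[a], bits[b] = bits[b], bits[a]
        M[sum(bit << (n - 1 - i) for i, bit in enumerate(bits)), x] = 1
    return M

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Textbook QFT circuit: H plus controlled rotations per qubit, then bit reversal.
U = np.eye(N, dtype=complex)
for j in range(n):
    U = on_qubit(H1, j) @ U
    for k in range(j + 1, n):
        U = cphase(k, j, k - j + 1) @ U
for q in range(n // 2):
    U = swap(q, n - 1 - q) @ U

# It reproduces the DFT matrix exactly.
F = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
assert np.allclose(U, F)
```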
Hardware-Economic Manipulation of Dual-Type ${}^{171}$Yb$^+$ Qubits
This paper demonstrates a cost-effective method to control two different types of qubits in ytterbium ions using just one laser instead of multiple lasers. The researchers show they can perform quantum operations on both qubit types and create entanglement between them, which could make quantum computers simpler and cheaper to build.
Key Contributions
- Hardware-economic control of dual-type qubits using single 355 nm mode-locked pulsed laser
- Demonstration of direct entangling gate between two different qubit types in Yb-171 ions
- Simplification of trapped-ion quantum computer manipulation at both hardware and software levels
View Full Abstract
The dual-type qubit scheme is an emerging method to suppress crosstalk errors in scalable trapped-ion quantum computation and quantum networks. Here we report a hardware-economic way to control dual-type $^{171}\mathrm{Yb}^+$ qubits using a single $355\,$nm mode-locked pulsed laser. Utilizing its broad frequency comb structure, we drive the Raman transitions of both qubit types encoded in the $S_{1/2}$ and the $F_{7/2}$ hyperfine levels, and probe their carrier transitions and the motional sidebands. We further demonstrate a direct entangling gate between the two qubit types. Our work can simplify the manipulation of the $^{171}\mathrm{Yb}^+$ qubits both at the hardware and the software level.
Fault-tolerant modular quantum computing with surface codes using single-shot emission-based hardware
This paper develops improved methods for connecting quantum computing modules in a network by generating high-quality entangled states using light-based emission protocols, eliminating the need for slow memory operations and achieving better error thresholds for fault-tolerant quantum computing.
Key Contributions
- Single-shot emission-based protocol for generating GHZ states without Bell-pair fusion
- Elimination of memory-based two-qubit gates in modular quantum computing
- Improved fault-tolerance thresholds from ~0.16% to 0.19-0.24% for surface codes
View Full Abstract
Fault-tolerant modular quantum computing requires stabilizer measurements across the modules in a quantum network. For this, entangled states of high quality and rate must be distributed. Currently, two main types of entanglement distribution protocols exist, namely emission-based and scattering-based, each with its own advantages and drawbacks. On the one hand, scattering-based protocols with cavities or waveguides are fast but demand stringent hardware such as high-efficiency integrated circulators or strong waveguide coupling. On the other hand, emission-based platforms are experimentally feasible but so far rely on Bell-pair fusion with extensive use of slow two-qubit memory gates, limiting thresholds to $\approx 0.16\%$. Here, we consider a fully distributed surface code using emission-based entanglement schemes that generate GHZ states in a single shot, i.e., without the need for Bell-pair fusions. We show that our optical setup produces Bell pairs, W states, and GHZ states, enabling both memory-based and optical protocols for distilling high-fidelity GHZ states with significantly improved success rates. Furthermore, we introduce protocols that completely eliminate the need for memory-based two-qubit gates, achieving thresholds of $\approx 0.19\%$ with modest hardware enhancements, increasing to above $\approx 0.24\%$ with photon-number-resolving detectors. These results show the feasibility of emission-based architectures for scalable fault-tolerant operation.
Quantum Error Correction and Detection for Quantum Machine Learning
This paper examines how to integrate quantum error correction and detection methods into quantum machine learning systems given current hardware limitations. The authors propose partial error correction approaches to reduce resource overhead and demonstrate quantum error detection methods for near-term QML applications.
Key Contributions
- Quantification of resource demands for fully error-corrected quantum machine learning
- Proposal of partial quantum error correction approach to reduce overhead while enabling error correction
- Demonstration and evaluation of quantum error detection methods for QML performance
View Full Abstract
At the intersection of quantum computing and machine learning, quantum machine learning (QML) is poised to revolutionize artificial intelligence. However, the vulnerability of the current generation of quantum computers to noise and computational error poses a significant barrier to this vision. Whilst quantum error correction (QEC) offers a promising solution for almost any type of hardware noise, its application requires millions of qubits to encode even a simple logical algorithm, rendering it impractical in the near term. In this chapter, we examine strategies for integrating QEC and quantum error detection (QED) into QML under realistic resource constraints. We first quantify the resource demands of fully error-corrected QML and propose a partial QEC approach that reduces overhead while enabling error correction. We then demonstrate the application of a simple QED method, evaluating its impact on QML performance and highlighting challenges we have yet to overcome before we achieve fully fault-tolerant QML.
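The chapter's error-detection idea — flag that an error occurred and discard the run, rather than locate and fix it — can be illustrated with a detection code. A toy statevector check using the [[4,2,2]] stabilizers $XXXX$ and $ZZZZ$ (our example of QED in general, not necessarily the chapter's specific method):

```python
import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    return reduce(np.kron, ops)

# [[4,2,2]] stabilizers: they detect (but cannot locate) any single-qubit error.
XXXX = kron(X, X, X, X)
ZZZZ = kron(Z, Z, Z, Z)

# One logical basis state: |00>_L = (|0000> + |1111>)/sqrt(2)
psi = np.zeros(16)
psi[0] = psi[15] = 1 / np.sqrt(2)

assert np.isclose(psi @ XXXX @ psi, 1)   # +1 syndrome: no error flagged
assert np.isclose(psi @ ZZZZ @ psi, 1)

err = kron(I, X, I, I) @ psi             # bit flip on the second qubit
assert np.isclose(err @ ZZZZ @ err, -1)  # -1 syndrome: the run is discarded
```

Post-selecting on the +1 syndromes is what trades circuit shots for reduced effective error rate, which is the resource trade-off the chapter evaluates for QML workloads.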
Composable Verification in the Circuit-Model via Magic-Blindness
This paper develops new verification protocols that allow users to securely check whether their quantum computations were performed correctly, even when the quantum computer might be faulty or malicious. The approach works directly with circuit-based quantum computers using magic state injection, offering better efficiency and security guarantees than previous methods.
Key Contributions
- Introduction of magic-blindness concept for circuit-based quantum verification
- Development of noise-robust and composable verification protocols for Clifford + MSI circuits
- Reduction of quantum communication costs by requiring transmission only at magic state injection locations
- Bridge between MBQC and circuit-based verification protocols with equivalent security guarantees
View Full Abstract
As quantum computing machines move towards the utility regime, it is essential that users are able to verify their delegated quantum computations with security guarantees that are (i) robust to noise, (ii) composable with other secure protocols, and (iii) exponentially stronger as the number of resources dedicated to security increases. Previous works that achieve these guarantees and provide modularity necessary to optimization of protocols to real-world hardware are most often expressed in the Measurement-Based Quantum Computation (MBQC) model. This leaves architectures based on the circuit model -- in particular those using the Magic State Injection (MSI) -- with fewer options to verify their computations or with the need to compile their circuits in MBQC leading to overheads. This paper introduces a family of noise robust, composable and efficient verification protocols for Clifford + MSI circuits that are secure against arbitrary malicious behavior. This family contains the verification protocol of Broadbent (ToC, 2018), extends its security guarantees while also bridging the modularity gap between MBQC and circuit-based protocols, and reducing quantum communication costs. As a result, it opens the prospect of rapid implementation for near-term quantum devices. Our technique is based on a refined notion of blindness, called magic-blindness, which hides only the injected magic states -- the sole source of non-Clifford computational power. This enables verification by randomly interleaving computation rounds with classically simulable, magic-free test rounds, leading to a trap-based framework for verification. As a result, circuit-based quantum verification attains the same level of security and robustness previously known only in MBQC. It also optimizes the quantum communication cost as transmitted qubits are required only at the locations of state injection.
Efficient Quantum Circuits for the Hilbert Transform
This paper presents a novel quantum algorithm for computing the Hilbert transform, which is useful for analyzing non-stationary signals and detecting anomalies. The quantum implementation achieves exponential speedup over classical algorithms, requiring only polylogarithmic circuit size and logarithmic depth for signals of length N.
Key Contributions
- First efficient quantum circuit implementation of the discrete Hilbert transform with exponential speedup
- Generalization to d-dimensional Hilbert transforms with O(d log N) depth complexity
View Full Abstract
The quantum Fourier transform and quantum wavelet transform have been cornerstones of quantum information processing. However, for non-stationary signals and anomaly detection, the Hilbert transform can be a more powerful tool, yet no prior work has provided efficient quantum implementations for the discrete Hilbert transform. This letter presents a novel construction for a quantum Hilbert transform in polylogarithmic size and logarithmic depth for a signal of length $N$, exponentially fewer operations than classical algorithms for the same mapping. We generalize this algorithm to create any $d$-dimensional Hilbert transform in depth $O(d\log N)$. Simulations demonstrate effectiveness for tasks such as power systems control and image processing, with exact agreement with classical results.
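The mapping the quantum circuit implements is the classical discrete Hilbert transform: in the Fourier domain, positive-frequency bins are multiplied by $-i$ and negative-frequency bins by $+i$. A classical reference implementation (our sketch of the target transform, not the paper's quantum circuit, which realizes this spectral filter between QFTs at polylogarithmic cost):

```python
import numpy as np

def hilbert_transform(f):
    """Discrete Hilbert transform via the FFT: multiply positive-frequency
    bins by -i and negative-frequency bins by +i; DC and Nyquist are zeroed."""
    N = len(f)
    m = np.zeros(N, dtype=complex)
    m[1:N // 2] = -1j
    m[N // 2 + 1:] = 1j
    return np.fft.ifft(m * np.fft.fft(f)).real

# The Hilbert transform shifts each sinusoid by 90 degrees: cos -> sin.
N = 64
n = np.arange(N)
f = np.cos(2 * np.pi * 3 * n / N)
assert np.allclose(hilbert_transform(f), np.sin(2 * np.pi * 3 * n / N))
```

Classically this costs $O(N \log N)$; the paper's contribution is a quantum circuit of polylogarithmic size and logarithmic depth for the same mapping on amplitude-encoded signals.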
Charging a quantum battery from the Bloch sphere
This paper analyzes a quantum battery model consisting of two qubits (charger and battery) and derives analytical expressions for energy storage, ergotropy, and capacity based on the charger's initial quantum state position on the Bloch sphere. The work identifies how quantum coherences and population inversions contribute to battery performance and finds optimal charging parameters.
Key Contributions
- Generalized analytical expressions for quantum battery performance based on arbitrary Bloch sphere initial states
- Identification of the role of quantum coherences and population inversions in ergotropy generation and battery capacity
View Full Abstract
We reconsider the quantum energetics and quantum thermodynamics of the charging process of a simple, two-component quantum battery model made up of a charger qubit and a single-cell battery qubit. We allow for the initial quantum state of the charger to lie anywhere on the surface of the Bloch sphere, and find the generalized analytical expressions describing the stored energy, ergotropy and capacity of the battery, all of which depend upon the initial Bloch sphere polar angle in a manner evocative of the quantum area theorem. The origin of the ergotropy produced, as well as the genesis of the battery capacity, can be readily traced back to the quantum coherences and population inversions generated (and the balance between these two mechanisms is contingent upon the starting Bloch polar angle). Importantly, the ergotropic charging power and its associated optimal charging time display notable deviations from standard results which disregard thermodynamic considerations. Our theoretical groundwork may be useful for guiding forthcoming experiments in quantum energy science based upon coupled two-level systems.
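Ergotropy — the central quantity here — is the maximal work unitarily extractable from a state, i.e. its mean energy minus that of the corresponding passive state. A compact numerical definition with two sanity checks on a qubit (our generic illustration; the paper's charger-battery dynamics are not modeled):

```python
import numpy as np

def ergotropy(rho, H):
    """Maximal work unitarily extractable from state rho under Hamiltonian H."""
    e_H = np.linalg.eigvalsh(H)                 # energies, ascending
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]  # populations, descending
    passive_energy = np.sum(p * e_H)            # energy of the passive state
    return np.real(np.trace(rho @ H)) - passive_energy

# Qubit battery with H = diag(0, 1): ground energy 0, excited energy 1.
H = np.diag([0.0, 1.0])

# Pure state at Bloch polar angle theta: cos(theta/2)|0> + sin(theta/2)|1>.
# A pure state's ergotropy equals its mean energy above the ground state.
theta = np.pi / 3
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
assert np.isclose(ergotropy(np.outer(psi, psi), H), np.sin(theta / 2) ** 2)

# A maximally mixed (fully dephased) qubit is passive: zero ergotropy.
assert np.isclose(ergotropy(np.eye(2) / 2, H), 0.0)
```

The second check is why coherences matter: dephasing the charged state to its populations can destroy all extractable work, which is the coherence-versus-population-inversion balance the paper traces over the Bloch sphere.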
Widefield NV Magnetic Field Reconstruction for Probing the Meissner Effect and Critical Current Density under Pressure
This paper uses nitrogen vacancy (NV) centers in diamond as quantum sensors to map magnetic fields around a superconducting material under high pressure (4 GPa), allowing researchers to observe the Meissner effect and measure the critical current density with micrometer resolution. The work demonstrates the first quantitative magnetic field reconstruction of superconducting behavior under pressure using optical quantum sensing methods.
Key Contributions
- First widefield quantitative reconstruction of the Meissner effect under high pressure using NV center magnetometry
- Development of optical method to measure critical current density in superconductors using quantum sensors
- Advanced data analysis techniques for handling degeneracy in NV center magnetic field measurements
View Full Abstract
The spatial distribution of a magnetic field can be determined with micrometer resolution using widefield nitrogen vacancy (NV) center magnetic imaging. Nevertheless, reconstructing the magnetic field from the raw data can be challenging due to the degeneracy of the four possible NV axes and the tremendous amount of data. While a qualitative approach is sufficient for most analyses, a quantitative analysis offers deeper insight into the physical system. Here, we apply NV widefield magnetic imaging to a HgBa$_{2}$Ca$_{2}$Cu$_{3}$O$_{8+\delta}$ (Hg-1223) superconducting microcrystal at a pressure of 4 GPa. We fit the results with solutions from the Hamiltonian describing the NV center ground state and take into account the relative intensities of the resonances to determine the local magnetic field magnitude and angle. Thus, we reconstruct the temperature-dependent expulsion of the magnetic field due to the Meissner effect around the superconductor. By comparing the resulting parameters to Brandt's model, which describes the magnetic behavior of a type-II superconductor, we extract the critical current density $j_c$. Overall, this work showcases the first widefield quantitative reconstruction of the Meissner effect under pressure and an optical method to study critical current density. In doing so, it provides new insights into the application of NV magnetometry to superconductivity research at high pressures.
Chemically decisive benchmarks on the path to quantum utility
This paper develops a set of chemically meaningful benchmark problems to test quantum algorithms for electronic structure calculations, focusing on strongly correlated systems like transition metals and actinides. The authors test their ADAPT-GCIM quantum algorithm on these benchmarks and make the problem Hamiltonians publicly available for reproducible quantum computing research.
Key Contributions
- Introduction of a curated hierarchy of chemically decisive benchmark systems for quantum algorithms
- Development and testing of ADAPT-GCIM quantum algorithm with automated active-space selection
- Public release of Hamiltonians for systematic benchmarking of quantum chemistry algorithms
View Full Abstract
Progress towards quantum utility in chemistry requires not only algorithmic advances, but also the identification of chemically meaningful problems whose electronic structure fundamentally challenges classical methods. Here, we introduce a curated hierarchy of chemically decisive benchmark systems designed to probe distinct regimes of electronic correlation relevant to molecular, bioinorganic, and heavy-element chemistry. Moving beyond minimal toy models, our benchmark set spans multireference bond breaking (N$_2$), high-spin transition-metal chemistry (FeS), biologically relevant iron-sulfur clusters ([2Fe-2S]), and actinide-actinide bonding (U$_2$), which exhibits extreme sensitivity to active-space choice, relativistic treatment, and correlation hierarchy even within advanced multireference frameworks. As a concrete realization, we benchmark a recently developed automated and adaptive quantum algorithm based on generator-coordinate-inspired subspace expansion, ADAPT-GCIM, using a black-box workflow that integrates entropy-based active-space selection via the ActiveSpaceFinder tool. Across this chemically diverse problem set, ADAPT-GCIM achieves high accuracy in challenging correlation regimes. Equally importantly, these benchmarks expose general failure modes and design constraints, independent of any specific algorithm, highlighting the necessity of problem-aware and correlation-specific strategies for treating strongly correlated chemistry on quantum computers. To support systematic benchmarking and reproducible comparisons, the Hamiltonians for all systems studied are made openly available.
Parent Hamiltonians for stabilizer quantum many-body scars
This paper develops a systematic method for constructing quantum many-body Hamiltonians that have stabilizer states as special eigenstates called quantum many-body scars, which violate the typical thermalization behavior expected in quantum systems. The authors demonstrate their approach by creating parent Hamiltonians for various important quantum states including cluster states and toric code states.
Key Contributions
- General construction method for embedding stabilizer states as quantum many-body scars in local Hamiltonians
- Unified framework that reproduces known results and enables construction of new examples like cluster state and toric code QMBS
- Introduction of the antipodal toric code state as a new example of volume-law entangled quantum many-body scars
View Full Abstract
Quantum many-body scars (QMBS) have attracted considerable interest due to their role in weak ergodicity breaking in many-body systems. We present a general construction that embeds stabilizer states as QMBS of local Hamiltonians. The method relies on a notion of factorizability of Pauli strings on a lattice, which is used to convert stabilizer elements into local, few-body operators that annihilate the stabilizer state. This enables the systematic construction of parent Hamiltonians with zero-energy stabilizer QMBS typically near the middle of the spectrum. The method reproduces several known results in a unified framework, including recent examples of volume-law entangled QMBS, such as the "rainbow" QMBS and the entangled antipodal Bell pair state. We also apply the framework to construct examples of stabilizer QMBS with a more complex entanglement structure, such as the cluster state, the toric code state, and a volume-law entangled state we dub the antipodal toric code (ATC) state. Exact diagonalization confirms our results and reveals the stabilizer states as exact eigenstates of their parent Hamiltonians.
Towards Tensor Network Models for Low-Latency Jet Tagging on FPGAs
This paper develops tensor network models (Matrix Product States and Tree Tensor Networks) for identifying jets in high-energy physics experiments, focusing on ultra-fast implementation on FPGA hardware for real-time particle detection systems. The researchers show these models can achieve competitive performance with deep learning while meeting the strict sub-microsecond timing requirements of particle accelerator trigger systems.
Key Contributions
- Development of tensor network models for real-time jet tagging in particle physics
- Demonstration of sub-microsecond latency FPGA implementations with competitive classification performance
- Investigation of post-training quantization techniques for hardware-efficient tensor network deployment
View Full Abstract
We present a systematic study of Tensor Network (TN) models, Matrix Product States (MPS) and Tree Tensor Networks (TTN), for real-time jet tagging in high-energy physics, with a focus on low-latency deployment on Field Programmable Gate Arrays (FPGAs). Motivated by the strict requirements of the HL-LHC Level-1 trigger system, we explore TNs as compact and interpretable alternatives to deep neural networks. Using low-level jet constituent features, our models achieve competitive performance compared to state-of-the-art deep learning classifiers. We investigate post-training quantization to enable hardware-efficient implementations without degrading classification performance or latency. The best-performing models are synthesized to estimate FPGA resource usage, latency, and memory occupancy, demonstrating sub-microsecond latency and supporting the feasibility of online deployment in real-time trigger systems. Overall, this study highlights the potential of TN-based models for fast and resource-efficient inference in low-latency environments.
Exponential gain in clock precision using quantum correlated ticks
This paper proposes a new approach to quantum timekeeping that uses quantum correlations between consecutive clock ticks to achieve exponentially improved precision at ultra-short timescales. The researchers demonstrate theoretically and through simulations that the Pauli exclusion principle can create autonomous self-correction in coupled quantum systems, offering a fundamentally different paradigm for precision timing.
Key Contributions
- Theoretical proof that quantum correlations between clock ticks can provide exponential precision advantage
- Full solution of coupled quantum systems model showing emergent Pauli exclusion effects for timekeeping
- Demonstration through realistic simulations that precision gains remain stable under imperfections
View Full Abstract
Creating precise timing devices at ultra-short time scales is not just an important technological challenge, but confronts us with foundational questions about timekeeping's ultimate precision limits. Research on clocks has either focused on long-term stability using an oscillator stabilized by a level transition, limiting precision at short timescales, or on making individual stochastic ticks as precise as possible. Here, we prove the viability of a conceptually different avenue: the autonomous self-correction of consecutive ticks by quantum correlations. This provides a new paradigm that integrates the advantages and insights from quantum transport theory to operate clocks at ultra-short timescales. We fully solve a model of coupled quantum systems and show how the emergent Pauli exclusion principle correlates the clock at the quantum level, yielding an exponential advantage in precision. We furthermore demonstrate through simulations with realistic imperfections that this remarkable gain in precision remains stable, providing a roadmap for implementation with contemporary quantum technologies.
Scalable Spin Squeezing in Power-Law Interacting XXZ Models with Disorder
This paper studies how spin squeezing - a quantum phenomenon useful for enhanced precision measurements - behaves in systems with power-law interactions when some particles are missing or disordered. The researchers show that spin squeezing can still scale effectively with system size up to a certain disorder threshold, and provide guidelines for maintaining this advantage in real quantum devices.
Key Contributions
- Demonstrates existence of scalable spin squeezing in power-law interacting systems with disorder up to a critical threshold
- Provides phase diagram mapping disorder tolerance for scalable squeezing across different quantum platforms
- Identifies controlled defect creation as a practical strategy for achieving scalable squeezing in solid-state quantum systems
View Full Abstract
While spin squeezing has been traditionally considered in all-to-all interacting models, recent works have shown that spin squeezing can occur in systems with power-law interactions, leading to direct testing in Rydberg atoms, trapped ions, ultracold atoms and nitrogen vacancy (NV) centers in diamond. For the latter, Wu et al., Nature 646 (2025) demonstrated that spin squeezing is heavily affected by positional disorder, reducing any capacity for a practical squeezing advantage, which requires scalability with the system size. In this Letter we explore the robustness of spin squeezing in two-dimensional lattices with a fraction of unoccupied lattice sites. Using semi-classical modeling, we demonstrate the existence of scalable squeezing in power-law interacting XXZ models up to a disorder threshold, above which squeezing is not scalable. We produce a phase diagram for scalable squeezing, and explain its absence in the aforementioned NV experiment. Our work illustrates the maximum disorder allowed for realizing scalable spin squeezing in a host of quantum simulators, highlights a regime with substantial tolerance to disorder, and identifies controlled defect creation as a promising route for scalable squeezing in solid-state systems.
Madelung hydrodynamics of spin-orbit coupling: action principles, currents, and correlations
This paper develops a theoretical framework using quantum hydrodynamics to analyze how spin-orbit coupling affects electron motion, identifying new quantum forces and correlation mechanisms that contribute to spin transport phenomena like the spin Hall effect.
Key Contributions
- Development of Madelung hydrodynamics framework for spin-orbit coupling systems
- Identification of previously overlooked SOC-induced orbital forces from current operators
- Theoretical analysis of quantum torques and spin transport mechanisms in spin Hall effect
View Full Abstract
We exploit the variational and Hamiltonian structures of quantum hydrodynamics with spin to unfold the correlation and torque mechanisms accompanying spin-orbit coupling (SOC) in electronic motion. Using Hamilton's action principle for the Pauli equation, we isolate SOC-induced quantum forces that act on the orbital Madelung-Bohm trajectories and complement the usual force terms known to appear in quantum hydrodynamics with spin. While the latter spin-hydrodynamic forces relate to the quantum geometric tensor (QGT), SOC-induced orbital forces originate from a particular current operator that contributes prominently to the spin current and whose contribution was overlooked in the past. The distinction between different force terms reveals two fundamentally different mechanisms generating quantum spin-orbit correlations. Leveraging the Hamiltonian structure of the hydrodynamic system, we also elucidate spin transport features such as the current shift in the spin Hall effect and the correlation-induced quantum torques. Finally, we illustrate the framework via the Madelung-Rashba equations for planar SOC configurations and propose a particle-based scheme for numerical implementation.
Constant-Depth Unitary Preparation of Dicke States
This paper presents the first unitary, constant-depth quantum circuits for preparing Dicke states (symmetric quantum superposition states) by using global interactions instead of standard gate-by-gate operations. The work overcomes previous logarithmic-depth limitations and could enable faster quantum state preparation for metrology and communication applications.
Key Contributions
- First unitary constant-depth protocols for exact Dicke state preparation using global interactions
- Demonstration that quantum circuit complexity depends critically on hardware connectivity and available operations
- Novel quantum circuits in QAC^0 and QAC_f^0 complexity classes that bypass logarithmic-depth barriers
View Full Abstract
Dicke states serve as a critical resource in quantum metrology, communication, and computation. However, unitary preparation of Dicke states is limited to logarithmic depth in standard circuit models and existing constant-depth protocols require measurement and feed-forward. In this work, we present the first unitary, constant-depth protocols for exact Dicke state preparation. We overcome the logarithmic-depth barrier by moving beyond the standard circuit model and leveraging global interactions (native to architectures such as neutral atoms and trapped ions). Specifically, utilizing unbounded CZ gates (i.e. within the QAC$^0$ circuit class), we offer circuits for exact computation of constant-weight Dicke states, using polynomial ancillae, and approximation of weight-1 Dicke states (i.e. $W$ states), using only constant ancillae. Granted additional access to the quantum FAN-OUT operation (i.e. upgrading to the QAC$_f^0$ circuit class), we also achieve exact preparation of arbitrary-weight Dicke states, with polynomial ancillae. These protocols distinguish the constant-depth capabilities of quantum architectures based on connectivity and offer a novel path toward resolving a long-standing quantum complexity conjecture.
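For concreteness, the Dicke state $|D^n_k\rangle$ being prepared is the uniform superposition of all $n$-qubit computational basis states of Hamming weight $k$ (weight 1 gives the $W$ state). A small classical construction of the target state vector, illustrative only and unrelated to the paper's circuits:

```python
import numpy as np
from itertools import combinations

def dicke_state(n, k):
    """Dicke state |D^n_k>: equal-amplitude superposition of all
    n-qubit basis states with exactly k ones."""
    vec = np.zeros(2 ** n)
    for ones in combinations(range(n), k):
        # qubit 0 is the leftmost (most significant) bit
        idx = sum(1 << (n - 1 - q) for q in ones)
        vec[idx] = 1.0
    return vec / np.linalg.norm(vec)

w3 = dicke_state(3, 1)
# the 3-qubit W state: amplitude 1/sqrt(3) on |001>, |010>, |100>
```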
Mitigating nonlinear transduction noise in high-cooperativity cavity optomechanics
This paper demonstrates a technique to improve precision measurements in cavity optomechanical systems by removing thermal intermodulation noise that occurs when mechanical vibrations nonlinearly mix with optical signals. The researchers achieved nearly 10 dB improvement in signal-to-noise ratio using a nonlinear transform that cancels all orders of this noise source.
Key Contributions
- Development of nonlinear transform technique that removes all orders of thermal intermodulation noise in optomechanical systems
- Nearly 10 dB improvement in mechanical signal-to-noise ratio in high-cooperativity room-temperature cavity optomechanical measurements
View Full Abstract
Coupling mechanical motion to an optical resonator enables displacement measurements approaching the standard quantum limit (SQL). However, increasing the optomechanical coupling strength will inevitably lead to probing of the nonlinear response of the optical resonator. Thermal intermodulation noise (TIN) arising from the nonlinear mixing of thermomechanical motion can further increase the imprecision well above the SQL and has hitherto been canceled up to second order of nonlinearity via operation at the "magic detuning". In this work, we record the output of a membrane-in-the-middle microcavity system operating at room temperature and achieving high cooperativity, $C>n_\text{th}$, and apply a nonlinear transform that removes all orders of TIN, improving the mechanical signal-to-noise ratio by nearly 10 dB. Our results can be applied to experiments affected by third-order TIN, which we expect to be the dominating intrinsic source of noise in high-cooperativity room-temperature cavity optomechanical systems.
Optimal lower bound for quantum channel tomography in away-from-boundary regime
This paper establishes optimal lower bounds for the number of queries needed to characterize quantum channels through tomography when the channels are away from a specific boundary condition. The work proves that the query complexity scales as Ω(rd₁d₂/ε²) and shows this matches existing upper bounds, fully resolving the complexity for equal dimension channels with Kraus rank ≥ 2.
Key Contributions
- Proves optimal lower bound Ω(rd₁d₂/ε²) for quantum channel tomography in away-from-boundary regime
- Fully resolves query complexity for equal dimension channels with Kraus rank ≥ 2, showing fundamental difference from unitary case
View Full Abstract
Consider quantum channels with input dimension $d_1$, output dimension $d_2$ and Kraus rank at most $r$. Any such channel must satisfy the constraint $rd_2\geq d_1$, and the parameter regime $rd_2=d_1$ is called the boundary regime. In this paper, we show an optimal query lower bound $\Omega(rd_1d_2/\varepsilon^2)$ for quantum channel tomography to within diamond norm error $\varepsilon$ in the away-from-boundary regime $rd_2\geq 2d_1$, matching the existing upper bound $O(rd_1d_2/\varepsilon^2)$. In particular, this lower bound fully settles the query complexity for the commonly studied case of equal input and output dimensions $d_1=d_2=d$ with $r\geq 2$, in sharp contrast to the unitary case $r=1$ where Heisenberg scaling $\Theta(d^2/\varepsilon)$ is achievable.
Breaking the Storage-Bandwidth Tradeoff in Distributed Storage with Quantum Entanglement
This paper shows how quantum entanglement and quantum communication can improve distributed storage systems by allowing both storage requirements and repair bandwidth to be minimized simultaneously, breaking a fundamental tradeoff that exists in classical systems. The work demonstrates that when nodes share quantum entanglement and communicate over quantum channels during system repairs, the storage-bandwidth limitations of classical distributed storage can be overcome.
Key Contributions
- Characterization of the fundamental storage-bandwidth tradeoff in quantum-enhanced distributed storage systems
- Demonstration that quantum entanglement among nodes can break the classical storage-bandwidth tradeoff, enabling simultaneous minimization of both parameters when d ≥ 2k-2
View Full Abstract
This work investigates the use of quantum resources in distributed storage systems. Consider an $(n,k,d)$ distributed storage system in which a file is stored across $n$ nodes such that any $k$ nodes suffice to reconstruct the file. When a node fails, any $d$ helper nodes transmit information to a newcomer to rebuild the system. In contrast to the classical repair, where helper nodes transmit classical bits, we allow them to send classical information over quantum channels to the newcomer. The newcomer then generates its storage by performing appropriate measurements on the received quantum states. In this setting, we fully characterize the fundamental tradeoff between storage and repair bandwidth (total communication cost). Compared to classical systems, the optimal storage-bandwidth tradeoff can be significantly improved when quantum entanglement is shared only among the surviving nodes, particularly at the minimum-storage regenerating point. Remarkably, we show that when $d \geq 2k-2$, there exists an operating point at which both storage and repair bandwidth are simultaneously minimized. This phenomenon breaks the tradeoff in the classical setting and reveals a fundamentally new regime enabled by quantum communication.
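For contrast, the classical tradeoff being broken is the regenerating-code cut-set bound, whose two extreme points are the minimum-storage (MSR) and minimum-bandwidth (MBR) regenerating points. The formulas below are the standard classical ones from the regenerating-codes literature (Dimakis et al.), not results of this paper:

```python
def classical_tradeoff_points(M, k, d):
    """Extreme points of the classical (n,k,d) regenerating-code
    storage-bandwidth tradeoff for a file of size M: per-node storage
    alpha and total repair bandwidth gamma = d*beta cannot both be
    minimal classically."""
    # Minimum-storage regenerating (MSR) point: alpha = M/k is minimal,
    # but repair bandwidth exceeds it.
    alpha_msr = M / k
    gamma_msr = M * d / (k * (d - k + 1))
    # Minimum-bandwidth regenerating (MBR) point: gamma is minimal,
    # but each node stores alpha = gamma > M/k.
    gamma_mbr = 2 * M * d / (k * (2 * d - k + 1))
    alpha_mbr = gamma_mbr
    return (alpha_msr, gamma_msr), (alpha_mbr, gamma_mbr)

msr, mbr = classical_tradeoff_points(M=4, k=2, d=3)
# MSR: (alpha, gamma) = (2.0, 3.0); MBR: (2.4, 2.4)
```

The gap between the two points at each extreme is exactly what the paper's quantum-entanglement-assisted repair closes when $d \geq 2k-2$.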
Efficiency, Curvature, and Complexity of Quantum Evolutions for Qubits in Nonstationary Magnetic Fields
This paper studies how efficiently quantum two-level systems (qubits) evolve when subjected to time-varying magnetic fields, analyzing the curvature and complexity of their paths through quantum state space. The researchers derive exact analytical expressions for evolution curvature and show that efficient quantum evolutions generally have lower complexity, though longer curved paths can sometimes be less complex than shorter ones.
Key Contributions
- Exact analytical expression for curvature of quantum evolution in two-level systems under time-dependent magnetic fields
- Analysis of relationship between geodesic efficiency, speed efficiency, and complexity of quantum evolutions
- Demonstration that efficient quantum evolutions generally exhibit lower complexity than inefficient ones
View Full Abstract
In optimal quantum-mechanical evolutions, motion can take place along paths of minimal length within an optimal time frame. Alternatively, optimal evolutions may occur along established paths without any waste of energy resources and achieving 100% speed efficiency. Unfortunately, realistic physical scenarios often lead to less-than-ideal evolutions that demonstrate suboptimal efficiency, nonzero curvature, and a high level of complexity. In this paper, we provide an exact analytical expression for the curvature of a quantum evolution pertaining to a two-level quantum system subjected to various time-dependent magnetic fields. Specifically, we examine the dynamics produced by a two-parameter nonstationary Hermitian Hamiltonian with unit speed efficiency. To enhance our understanding of the physical implications of the curvature coefficient, we analyze the curvature behavior in relation to geodesic efficiency, speed efficiency, and the complexity of the quantum evolution (as described by the ratio of the difference between accessible and accessed Bloch-sphere volumes for the evolution from initial to final state to the accessible volume for the given quantum evolution). Our findings indicate that, generally, efficient quantum evolutions exhibit lower complexity compared to inefficient ones. However, we also note that complexity transcends mere length. In fact, longer paths that are sufficiently curved can demonstrate a complexity that is less than that of shorter paths with a lower curvature coefficient.
Geometric Aspects of Entanglement Generating Hamiltonian Evolutions
This paper studies how quantum systems evolve from separable to maximally entangled two-qubit states, analyzing the geometric properties of these evolution paths and comparing time-optimal versus suboptimal trajectories in terms of their efficiency and entanglement characteristics.
Key Contributions
- Characterization of entanglement-generating Hamiltonian evolutions using geometric metrics like geodesic efficiency and curvature
- Discovery that time-optimal evolution trajectories have high geodesic efficiency with zero curvature but lower average path entanglement than suboptimal evolutions
- Analysis showing different behavior patterns for orthogonal versus nonorthogonal state evolutions in terms of nonlocality generation
View Full Abstract
We examine the pertinent geometric characteristics of entanglement that arise from stationary Hamiltonian evolutions transitioning from separable to maximally entangled two-qubit quantum states. From a geometric perspective, each evolution is characterized by means of geodesic efficiency, speed efficiency, and curvature coefficient. Conversely, from the standpoint of entanglement, these evolutions are quantified using various metrics, such as concurrence, entanglement power, and entangling capability. Overall, our findings indicate that time-optimal evolution trajectories are marked by high geodesic efficiency, with no energy resource wastage, no curvature (i.e., zero bending), and an average path entanglement that is less than that observed in time-suboptimal evolutions. Additionally, when analyzing separable-to-maximally entangled evolutions between nonorthogonal states, time-optimal evolutions demonstrate a greater short-time degree of nonlocality compared to time-suboptimal evolutions between the same initial and final states. Interestingly, the reverse is generally true for separable-to-maximally entangled evolutions involving orthogonal states. Our investigation suggests that this phenomenon arises because suboptimal trajectories between orthogonal states are characterized by longer path lengths with smaller curvature, which are traversed with a higher energy resource wastage compared to suboptimal trajectories between nonorthogonal states. Consequently, a higher initial degree of nonlocality in the unitary time propagators appears to be essential for achieving the maximally entangled state from a separable state. Furthermore, when assessing optimal and suboptimal evolutions...
Counterdiabatic driving for random-gap Landau-Zener transitions
This paper develops a control method for driving ensembles of two-level quantum systems through avoided crossings, where each system has a different energy gap. The authors design a single control field that minimizes transition probabilities across the entire ensemble on average, rather than optimizing for individual systems.
Key Contributions
- Development of ensemble-averaged control protocols for random-gap Landau-Zener systems
- Analytical treatment of special cases including systems with Dirac delta function gaps
- Demonstration of trade-offs between instantaneous adiabaticity and final transition probability
View Full Abstract
The Landau-Zener (LZ) model describes a two-level quantum system that undergoes an avoided crossing. In the adiabatic limit, the transition probability vanishes. An auxiliary control field $H_\text{CD}$ can be reverse-engineered so that the full Hamiltonian $H_0 + H_\text{CD}$ reproduces adiabaticity for all parameter values. Our aim is to construct a single control field $H_1$ that drives an ensemble of LZ-type Hamiltonians with a distribution of energy gaps. $H_1$ works best statistically, minimizing the average transition probability. We restrict our attention to a special class of $H_1$ controls, motivated by $H_\text{CD}$. We find a systematic trade-off between instantaneous adiabaticity and the final transition probability. Certain limiting cases with a linear sweep can be treated analytically, one of them being the LZ system with a Dirac $\delta(t)$ function. Comprehensive and systematic numerical simulations support and extend the analytic results.
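The uncontrolled baseline for such an ensemble is the textbook LZ formula for a linear sweep. A sketch of the ensemble-averaged transition probability that a shared control field would aim to suppress; this is the bare, uncontrolled sweep only, with no $H_1$ control, and the discrete gap ensemble is a hypothetical illustration:

```python
import numpy as np

def lz_transition_probability(gap, sweep_rate, hbar=1.0):
    """Textbook Landau-Zener diabatic transition probability
    P = exp(-2*pi*Delta^2 / (hbar*v)) for H(t) = (v*t/2) s_z + Delta s_x."""
    return np.exp(-2 * np.pi * gap ** 2 / (hbar * sweep_rate))

def ensemble_avg_probability(gaps, weights, sweep_rate):
    """Average transition probability over an ensemble of gaps --
    the statistical figure of merit for a single shared control."""
    p = lz_transition_probability(np.asarray(gaps, dtype=float), sweep_rate)
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), p))

p_small = lz_transition_probability(0.5, 1.0)   # smaller gap: more diabatic
p_large = lz_transition_probability(1.0, 1.0)   # larger gap: more adiabatic
p_avg = ensemble_avg_probability([0.5, 1.0], [1, 1], 1.0)
```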
Quantifying the properties of evolutionary quantum states of the XXZ spin model using quantum computing
This paper studies the entanglement properties and evolution speed of two-spin quantum systems using the XXZ spin model, comparing analytical calculations with quantum computing simulations. The researchers derive mathematical relationships showing how entanglement distance and evolution speed depend on the model's coupling constants and initial state parameters.
Key Contributions
- Analytical derivation of entanglement distance dependence on XXZ model parameters and initial states
- Investigation of evolution speed in two-spin systems with explicit parameter dependencies
- Validation of theoretical predictions using quantum computing simulations
View Full Abstract
The entanglement distance of evolutionary quantum states of a two-spin system described by the XXZ model has been studied, both analytically and using quantum computing. An analytical dependence of the entanglement distance on the model coupling constants and the parameters of the initial states has been obtained. The speed of evolution of the two-spin system has likewise been investigated, yielding an explicit dependence of the evolution speed on the coupling constants and the initial-state parameters. The results of quantum computations are in good agreement with the theoretical predictions.
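A minimal numpy sketch of the kind of two-spin XXZ evolution studied, using concurrence as a stand-in entanglement measure (the paper uses entanglement distance; the Hamiltonian convention and parameters here are illustrative assumptions):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def xxz_hamiltonian(J, Delta):
    """Two-spin XXZ model H = J(XX + YY) + Delta ZZ (one convention)."""
    return J * (np.kron(X, X) + np.kron(Y, Y)) + Delta * np.kron(Z, Z)

def evolve(H, psi0, t):
    """psi(t) = exp(-i H t) psi0 via spectral decomposition of H."""
    E, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

def concurrence(psi):
    """Concurrence C = |<psi| (Y x Y) |psi*>| of a pure two-qubit state."""
    return abs(np.vdot(psi, np.kron(Y, Y) @ psi.conj()))

psi0 = np.array([0, 1, 0, 0], dtype=complex)          # separable |01>
psi_t = evolve(xxz_hamiltonian(1.0, 0.5), psi0, 0.3)
# for this initial state the concurrence grows as |sin(4*J*t)|
```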
Quantum solver for single-impurity Anderson models with particle-hole symmetry
This paper develops a quantum-classical hybrid algorithm using variational quantum eigensolvers (VQE) to solve Anderson impurity models, which are computationally bottlenecks in studying strongly correlated materials. The authors demonstrate how to reconstruct Green's functions from quantum computations and benchmark their approach against classical methods for materials science applications.
Key Contributions
- Development of a VQE-based quantum solver for Anderson impurity models with unified ansatz framework
- Demonstration of Green's function reconstruction from quantum computations with benchmarking against classical methods
- Performance evaluation of different optimization routines and quantum-computed moment corrections under realistic noisy conditions
View Full Abstract
Quantum embedding methods, such as dynamical mean-field theory (DMFT), provide a powerful framework for investigating strongly correlated materials. A central computational bottleneck in DMFT is in solving the Anderson impurity model (AIM), whose exact solution is classically intractable for large bath sizes. In this work, we develop and benchmark a quantum-classical hybrid solver tailored for DMFT applications, using the variational quantum eigensolver (VQE) to prepare the ground state of the AIM with shallow quantum circuits. The solver uses a unified ansatz framework to prepare the particle and hole excitations of the ground-state from parameter-shifted circuits, enabling the reconstruction of the impurity Green's function through a continued-fraction expansion. We evaluate the performance of this approach across a few bath sizes and interaction strengths under noisy, shot-limited conditions. We compare three optimization routines (COBYLA, Adam, and L-BFGS-B) in terms of convergence and fidelity, assess the benefits of estimating a quantum-computed moment (QCM) correction to the variational energies, and benchmark the approach by comparing the reconstructed density of states (DOS) against that obtained using a classical pipeline. Our results demonstrate the feasibility of Green's function reconstruction on near-term devices and establish practical benchmarks for quantum impurity solvers embedded within self-consistent DMFT loops.
Searching for Quantum Effects in the Brain: A Bell-Type Test for Nonclassical Latent Representations in Autoencoders
This paper proposes a new method to test whether the brain uses quantum effects by examining neural network representations rather than microscopic brain dynamics. The researchers develop a Bell-type test applied to autoencoder neural networks to detect quantum-like information processing signatures in latent space representations.
Key Contributions
- Novel Bell-type consistency test for detecting nonclassical information processing in neural representations
- Model-agnostic approach that shifts quantum biology testing from microscopic to information-theoretic level
View Full Abstract
Whether neural information processing is entirely classical or involves quantum-mechanical elements remains an open question. Here we propose a model-agnostic, information-theoretic test of nonclassicality that bypasses microscopic assumptions and instead probes the structure of neural representations themselves. Using autoencoders as a transparent model system, we introduce a Bell-type consistency test in latent space, and ask whether decoding statistics obtained under multiple readout contexts can be jointly explained by a single positive latent-variable distribution. By shifting the search for quantum-like signatures in neural systems from microscopic dynamics to experimentally testable constraints on information processing, this work opens a new route for probing the fundamental physics of neural computation.
Electro-optic frequency comb Doppler thermometry
This paper demonstrates a new type of thermometer that uses laser frequency combs to measure the temperature of rubidium vapor by analyzing Doppler broadening of atomic spectral lines. The technique reduces measurement errors and could enable more accurate temperature sensing for industrial applications.
Key Contributions
- Demonstration of electro-optic frequency comb Doppler thermometry with ~1K accuracy
- Mitigation of transit-induced optical pumping distortion that affects conventional Doppler thermometry
- Experimental comparison showing EOFC spectroscopy can use higher optical power without systematic errors
View Full Abstract
We demonstrate a Doppler thermometer based on direct optical frequency comb spectroscopy of an $^{85}$Rb vapor with a chirped electro-optic frequency comb (EOFC). The direct EOFC Doppler thermometer is accurate to within its approximately 1 K statistical uncertainty. We experimentally compare direct EOFC spectroscopy with conventional Doppler spectroscopy using a single-frequency, step-scanned laser probe. Our results show that direct EOFC spectroscopy mitigates transit-induced optical pumping distortion of the atomic lineshape, which is the dominant systematic temperature shift in alkali atom Doppler thermometry. Optical Bloch equation simulations of conventional and direct EOFC Doppler spectroscopy confirm that EOFC spectroscopy can use higher optical power to reduce statistical noise without optical pumping distortion. Our results indicate that EOFC Doppler thermometry is a promising approach to realizing a primary thermometer with size and measurement rate sufficient for applications including pharmaceutical manufacturing and nuclear waste monitoring.
Deterministic and scalable generation of large Fock states
This paper presents a scalable, deterministic method for creating large Fock states (quantum states with a well-defined photon number, here up to ~100) with high fidelity. A hybrid optimization approach combining genetic algorithms with the Adam optimizer is used to design the required multi-pulse control sequences.
Key Contributions
- Scalable protocol for generating large Fock states with >0.9 fidelity up to ~100 photons
- Hybrid Genetic-Adam optimization framework for multi-pulse control sequence design
- Demonstration of high-fidelity non-classical state generation using native experimental operations
View Full Abstract
The scalable and deterministic preparation of large Fock-number states represents a long-standing frontier in quantum science, with direct implications for quantum metrology, communication, and simulation. Despite significant progress in small-scale implementations, extending such state generation to large excitation numbers while maintaining high fidelity remains a formidable challenge. Here, we present a scalable protocol for generating large Fock states with fidelities exceeding 0.9 up to photon numbers on the order of 100, achieved using only native control operations and, when desired, further enhanced by an optional post-selection step. Our method employs a hybrid Genetic-Adam optimization framework that combines the global search efficiency of genetic algorithms with the adaptive convergence of Adam to optimize multi-pulse control sequences comprising Jaynes-Cummings interactions and displacement operations, both of which are native to leading experimental platforms. The resulting control protocols achieve high fidelities with shallow circuit depths and strong robustness against parameter variations. These results establish an efficient and scalable pathway toward high-fidelity non-classical state generation for precision metrology and fault-tolerant quantum technologies.
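The two-stage optimizer the abstract names — a genetic algorithm for global search followed by Adam for local refinement — can be illustrated on a toy cost function. The objective, population sizes, and learning rates below are all hypothetical; the paper's actual cost is the infidelity of a Jaynes-Cummings pulse sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(x):
    """Toy non-convex objective standing in for a state-infidelity cost.
    Global minimum 0 at x = 0, with many local minima."""
    return float(np.sum(x**2 - np.cos(3 * x) + 1))

def grad(x, h=1e-5):
    # finite-difference gradient (a parameter-shift rule would play this role on hardware)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (loss(x + e) - loss(x - e)) / (2 * h)
    return g

def genetic_search(dim=4, pop=40, gens=30):
    """Global stage: elitism plus Gaussian mutation."""
    P = rng.uniform(-3, 3, size=(pop, dim))
    for _ in range(gens):
        fit = np.array([loss(x) for x in P])
        elite = P[np.argsort(fit)[: pop // 4]]                 # keep best quarter
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = children + rng.normal(0, 0.3, children.shape)  # mutate
        P = np.vstack([elite, children])
    return P[np.argmin([loss(x) for x in P])]

def adam_refine(x, steps=300, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Local stage: Adam polishes the GA's best candidate (best iterate kept)."""
    m = np.zeros_like(x); v = np.zeros_like(x)
    best_x, best_f = x.copy(), loss(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g**2
        x = x - lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
        f = loss(x)
        if f < best_f:
            best_x, best_f = x.copy(), f
    return best_x

x0 = genetic_search()
x_star = adam_refine(x0)
print(loss(x0), loss(x_star))   # the Adam stage never worsens the GA result
```

The division of labor mirrors the abstract's rationale: the GA supplies a good basin of attraction, and Adam's adaptive steps converge quickly within it.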
A Mirror-Descent Algorithm for Computing the Petz-Rényi Capacity of Classical-Quantum Channels
This paper develops a new computational algorithm based on mirror descent to calculate the Petz-Rényi capacity of classical-quantum channels, which measures how much information can be transmitted through quantum communication channels. The authors prove that their algorithm converges to the optimal solution with guaranteed convergence rates.
Key Contributions
- Extension of the Blahut-Arimoto algorithm to quantum channels using mirror descent
- Proof of global sublinear convergence and local linear convergence rates for the proposed algorithm
View Full Abstract
We study the computation of the $α$-Rényi capacity of a classical-quantum (c-q) channel for $α\in(0,1)$. We propose an exponentiated-gradient (mirror descent) iteration that generalizes the Blahut-Arimoto algorithm. Our analysis establishes relative smoothness with respect to the entropy geometry, guaranteeing a global sublinear convergence of the objective values. Furthermore, under a natural tangent-space nondegeneracy condition (and a mild spectral lower bound in one regime), we prove local linear (geometric) convergence in Kullback-Leibler divergence on a truncated probability simplex, with an explicit contraction factor once the local curvature constants are bounded.
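The classical ancestor of the proposed iteration is the Blahut-Arimoto algorithm, whose update is exactly an exponentiated-gradient (mirror descent) step on the input distribution. A minimal sketch for a purely classical channel — the binary symmetric channel, whose capacity 1 − h(ε) is known in closed form — shows the structure; the quantum algorithm replaces the Kullback-Leibler terms with Petz-Rényi quantities:

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (bits) of a classical channel W[x, y] = P(y|x) via
    Blahut-Arimoto, i.e. repeated multiplicative (mirror-descent) updates."""
    nx = W.shape[0]
    p = np.full(nx, 1.0 / nx)
    for _ in range(iters):
        q = p @ W                                                  # induced output distribution
        d = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)   # KL(W(.|x) || q)
        c = 2.0 ** d
        p = p * c / (p @ c)                                        # exponentiated-gradient step
    q = p @ W
    d = np.where(W > 0, W * np.log2(W / q), 0.0).sum(axis=1)
    return float(p @ d), p

eps = 0.1                                          # BSC flip probability
W = np.array([[1 - eps, eps], [eps, 1 - eps]])
C, p_opt = blahut_arimoto(W)
h2 = -(eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps))
print(C, 1 - h2)                                   # both ≈ 0.531 bits
```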
Optimized readout strategies for neutral atom quantum processors
This paper develops theoretical frameworks and optimization strategies for reading out quantum information from neutral atom quantum processors, focusing on maximizing the rate at which quantum circuits can be executed while maintaining high measurement accuracy. The authors demonstrate that their optimized readout methods can achieve quantum circuit iteration rates of nearly 200 Hz, providing practical guidance for building scalable quantum computing systems.
Key Contributions
- Development of theoretical framework quantifying trade-offs between readout fidelity and atomic retention in neutral atom systems
- Introduction of the quantum circuit iteration rate (qCIR) metric and demonstration of optimized readout strategies achieving ~200 Hz execution rates
View Full Abstract
Neutral atom quantum processors have emerged as a promising platform for scalable quantum information processing, offering high-fidelity operations and exceptional qubit scalability. A key challenge in realizing practical applications is efficiently extracting readout outcomes while maintaining high system throughput, i.e., the rate of quantum task executions. In this work, we develop a theoretical framework to quantify the trade-off between readout fidelity and atomic retention. Moreover, we introduce a metric of quantum circuit iteration rate (qCIR) and employ normalized quantum Fisher information to characterize system overall performance. Further, by carefully balancing fidelity and retention, we demonstrate a readout strategy for optimizing information acquisition efficiency. Considering the experimentally feasible parameters for 87Rb atoms, we demonstrate that qCIRs of 197.2Hz and 154.5Hz are achievable using single photon detectors and cameras, respectively. These results provide practical guidance for constructing scalable and high-throughput neutral atom quantum processors for applications in sensing, simulation, and near-term algorithm implementation.
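The fidelity-retention trade-off the abstract quantifies can be caricatured with a toy model: longer readout collects more photons (better discrimination) but scatters away more atoms, and each lost atom costs reload time before the next cycle. Every rate and timescale below is hypothetical, chosen only to show that an iteration-rate figure of merit peaks at an intermediate readout duration — not the paper's model:

```python
import numpy as np

t = np.linspace(0.1e-3, 20e-3, 2000)       # readout duration (s)
fidelity = 1 - np.exp(-t / 2e-3)            # photon collection saturates (toy)
retention = np.exp(-t / 50e-3)              # scattering-induced atom loss (toy)
t_reload = 30e-3                            # time to reload a lost atom (s, toy)
t_circuit = 2e-3                            # gates + state preparation (s, toy)

# Average cycle time includes reloading lost atoms; a crude qCIR-like figure
# of merit weights the iteration rate by fidelity and retention.
cycle = t_circuit + t + (1 - retention) * t_reload
qcir = fidelity * retention / cycle
best = np.argmax(qcir)
print(t[best] * 1e3, qcir[best])            # optimum at an intermediate readout time
```

Too-short readout wastes cycles on bad measurements; too-long readout wastes them on reloading — the optimum sits strictly between the extremes.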
H-EFT-VA: An Effective-Field-Theory Variational Ansatz with Provable Barren Plateau Avoidance
This paper introduces a new quantum algorithm architecture called H-EFT-VA that solves the barren plateau problem in variational quantum algorithms by using physics-inspired hierarchical constraints while maintaining the ability to represent complex quantum states. The approach shows dramatic improvements in finding ground states of quantum systems compared to standard methods.
Key Contributions
- Theoretical proof that hierarchical UV-cutoff initialization prevents barren plateaus while maintaining volume-law entanglement
- Demonstration of a 109x improvement in energy convergence and a 10.7x increase in ground-state fidelity over standard variational ansätze
View Full Abstract
Variational Quantum Algorithms (VQAs) are critically threatened by the Barren Plateau (BP) phenomenon. In this work, we introduce the H-EFT Variational Ansatz (H-EFT-VA), an architecture inspired by Effective Field Theory (EFT). By enforcing a hierarchical "UV-cutoff" on initialization, we theoretically restrict the circuit's state exploration, preventing the formation of approximate unitary 2-designs. We provide a rigorous proof that this localization guarantees an inverse-polynomial lower bound on the gradient variance: $Var[\partial θ] \in Ω(1/poly(N))$. Crucially, unlike approaches that avoid BPs by limiting entanglement, we demonstrate that H-EFT-VA maintains volume-law entanglement and near-Haar purity, ensuring sufficient expressibility for complex quantum states. Extensive benchmarking across 16 experiments -- including Transverse Field Ising and Heisenberg XXZ models -- confirms a 109x improvement in energy convergence and a 10.7x increase in ground-state fidelity over standard Hardware-Efficient Ansatze (HEA), with a statistical significance of $p < 10^{-88}$.
Quantum Theory and Unusual Dielectric Functions of Graphene
This paper derives the spatially nonlocal dielectric functions of graphene using thermal quantum field theory, focusing on unusual properties like a double pole in the transverse dielectric function at zero frequency. The authors discuss how these properties might help resolve discrepancies between experimental and theoretical predictions in the Casimir effect.
Key Contributions
- Derivation of nonlocal dielectric functions for graphene using quantum field theory
- Identification of double pole structure in transverse dielectric function
- Discussion of implications for Casimir effect theory-experiment discrepancies
View Full Abstract
We address the spatially nonlocal dielectric functions of graphene at any frequency derived starting from the first principles of thermal quantum field theory using the formalism of the polarization tensor. After a brief review of this formalism, the longitudinal and transverse dielectric functions are considered at any relationship between the frequency and the wave vector. The analytic properties of their real and imaginary parts are investigated at low and high frequencies. Emphasis is given to the double pole at zero frequency which arises in the transverse dielectric function. The role of this unusual property for solving the problem of disagreement between experiment and theory in the Casimir effect is discussed. We guess that a more complete dielectric response of ordinary metals should also be spatially nonlocal and its transverse part may possess the double pole in the region of evanescent waves.
Analysis and Experimental Demonstration of Amplitude Amplification for Combinatorial Optimization
This paper extends Grover's quantum search algorithm to solve combinatorial optimization problems by developing Quantum Amplitude Amplification techniques that can find optimal solutions with high probability. The researchers demonstrate their approach through simulations of problem sizes up to 40 qubits and validate it experimentally on both IBM superconducting and IonQ trapped-ion quantum computers.
Key Contributions
- Extension of Grover's algorithm 2D representation to oracles encoding cost functions like QUBO
- Derivation of exact formulas for optimal oracle parameters in linear cost function cases
- Experimental validation on both superconducting and trapped ion quantum hardware platforms
View Full Abstract
Quantum Amplitude Amplification (QAA), the generalization of Grover's algorithm, is capable of yielding optimal solutions to combinatorial optimization problems with high probabilities. In this work we extend the conventional 2-dimensional representation of Grover's (orthogonal collective states) to oracles which encode cost functions such as QUBO, and show that linear cost functions are a special case whereby an exact formula exists for determining optimal oracle parameter settings. Using simulations of problem sizes up to 40 qubits we demonstrate QAA's algorithmic performance across all possible solutions, with an emphasis on the closeness in Grover-like performance for solutions near the global optimum. We conclude with experimental demonstrations of generalized QAA on both IBMQ (superconducting) and IonQ (trapped ion) qubits, showing that the observed probabilities of each basis state match our equations as a function of varying the free parameters in the oracle and diffusion operators.
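Grover-style amplitude amplification and the sin²((2k+1)θ) success-probability formula it obeys are easy to verify with a statevector simulation. The marked set and problem size below are arbitrary; the paper's oracles additionally encode QUBO cost values rather than a simple membership test:

```python
import numpy as np

def grover_success(n, marked, k):
    """State-vector simulation of k Grover iterations on n qubits.
    Returns the probability of measuring a marked basis state."""
    N = 2 ** n
    psi = np.full(N, 1 / np.sqrt(N))          # uniform superposition
    oracle = np.ones(N); oracle[marked] = -1.0
    for _ in range(k):
        psi = oracle * psi                     # phase oracle flips marked amplitudes
        psi = 2 * psi.mean() - psi             # diffusion: reflect about the mean
    return float(np.sum(psi[marked] ** 2))

n, marked = 6, [5, 17, 42]                    # N = 64, M = 3 marked states
theta = np.arcsin(np.sqrt(len(marked) / 2**n))
k_opt = int(np.round((np.pi / (2 * theta) - 1) / 2))
p_sim = grover_success(n, marked, k_opt)
p_formula = np.sin((2 * k_opt + 1) * theta) ** 2
print(k_opt, p_sim, p_formula)                # simulation matches sin^2((2k+1)θ)
```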
Nonlinear quantum Kibble-Zurek ramps in open systems at finite temperature
This paper studies quantum systems driven through phase transitions using simultaneous temperature and parameter ramps, showing how these protocols can reveal quantum critical behavior at finite temperature. The work demonstrates that nonlinear ramping protocols allow experimental access to quantum critical exponents that are normally only observable at zero temperature.
Key Contributions
- Development of nonlinear quantum Kibble-Zurek protocols that simultaneously ramp temperature and control parameters
- Demonstration that finite-temperature protocols can probe zero-temperature quantum critical universality classes
- Identification of optimal ramping conditions that suppress subleading corrections to scaling laws
View Full Abstract
We analyze quantum systems under a broad class of protocols in which the temperature and a Hamiltonian control parameter are ramped simultaneously and, in general, in a nonlinear fashion toward a quantum critical point. Using an open-system version of a Kitaev quantum wire as an example, we show that, unlike finite-temperature protocols at fixed temperature, these protocols allow us to probe, in an out-of-equilibrium situation and at finite temperature, the universality class (characterized by the critical exponents $ν$ and $z$) of an equilibrium quantum phase transition at zero temperature. Key to this is the identification of ramps in which both coherent and incoherent parts of the open-system dynamics affect the excitation density in a non-negligible way. We also identify the specific ramps for which subleading corrections to the asymptotic scaling laws are suppressed, which serves as a guide to dynamically probing quantum critical exponents in experimentally realistic finite-temperature situations.
Localization Landscape in Non-Hermitian and Floquet quantum systems
This paper extends the Filoche-Mayboroda localization landscape theory to predict where quantum particles localize in non-Hermitian, time-driven (Floquet), and topological systems, without computing the actual quantum states. The method provides a unified geometric approach to understanding localization phenomena across different types of quantum matter.
Key Contributions
- Generalization of localization landscape theory to non-Hermitian and Floquet quantum systems using H†H operator
- Unified predictor for localization that captures spectral instabilities, skin effects, and topological zero modes without computing eigenstates
View Full Abstract
We propose a generalization of the Filoche--Mayboroda localization landscape that extends the theory well beyond the static, elliptic and Hermitian settings while preserving its geometric interpretability. Using the positive operator $H^\dagger H$, we obtain a landscape that predicts localization across non-Hermitian, Floquet, and topological systems without computing eigenstates. Singular-value collapse reveals spectral instabilities and skin effects, the Sambe formulation captures coherent destruction of tunneling, and topological zero modes emerge directly from the landscape. Applications to Hatano--Nelson chains, driven two-level systems, and driven Aubry--André--Harper models confirm quantitative accuracy, establishing a unified predictor for localization in equilibrium and driven quantum matter.
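For intuition, the original Hermitian Filoche-Mayboroda landscape — which the paper generalizes via H†H — is obtained for a 1D Anderson model by solving Hu = 1. The landscape rigorously controls every eigenstate through |ψ(x)| ≤ λ u(x) max|ψ|, which the sketch below checks numerically (disorder strength and system size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
V = rng.uniform(0, 8, size=n)            # random on-site disorder, V >= 0

# Discrete 1D Anderson Hamiltonian H = -Laplacian + V (an M-matrix,
# so H^{-1} is entrywise nonnegative and the landscape bound is exact)
H = np.diag(2 + V) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

u = np.linalg.solve(H, np.ones(n))       # landscape: H u = 1

lams, psis = np.linalg.eigh(H)
# Filoche-Mayboroda control: |psi(x)| <= lambda * u(x) * max|psi|
for lam, psi in zip(lams, psis.T):
    bound = lam * u * np.abs(psi).max()
    assert np.all(np.abs(psi) <= bound + 1e-8)
print("landscape bound holds for all", n, "eigenstates")
```

Low-lying states localize in the valleys of the effective potential 1/u, which is exactly the geometric information the landscape provides without any diagonalization.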
The SpinPulse library for transpilation and noise-accurate simulation of spin qubit quantum computers
This paper introduces SpinPulse, an open-source Python package that simulates spin qubit-based quantum computers at the pulse level, including realistic noise modeling. The software enables transpilation of quantum circuits to native gate sets and supports hardware development through accurate simulation of experimental conditions.
Key Contributions
- Development of SpinPulse open-source simulation package for spin qubit quantum computers
- Implementation of non-Markovian noise modeling for realistic pulse-level simulations
- Integration of transpilation, pulse compilation, and large-scale tensor network simulations
View Full Abstract
We introduce SpinPulse, an open-source python package for simulating spin qubit-based quantum computers at the pulse-level. SpinPulse models the specific physics of spin qubits, particularly through the inclusion of classical non-Markovian noise. This enables realistic simulations of native gates and quantum circuits, in order to support hardware development. In SpinPulse, a quantum circuit is first transpiled into the native gate set of our model and then converted to a pulse sequence. This pulse sequence is subsequently integrated numerically in the presence of a simulated noisy experimental environment. We showcase workflows including transpilation, pulse-level compilation, hardware benchmarking, quantum error mitigation, and large-scale simulations via integration with the tensor-network library quimb. We expect SpinPulse to be a valuable open-source tool for the quantum computing community, fostering efforts to devise high-fidelity quantum circuits and improved strategies for quantum error mitigation and correction.
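As a generic illustration of why classical non-Markovian noise matters at the pulse level (this is not SpinPulse's API), the simplest such model is a quasi-static detuning frozen within each shot: it produces Gaussian rather than exponential decay of Ramsey fringes:

```python
import numpy as np

rng = np.random.default_rng(3)

def ramsey_signal(t, sigma, shots=20000):
    """Ramsey fringe contrast under quasi-static Gaussian detuning noise:
    each shot draws one frozen detuning delta and accumulates phase delta*t."""
    delta = rng.normal(0, sigma, size=shots)
    return np.mean(np.cos(delta * t))

sigma = 2 * np.pi * 0.1                       # detuning spread (arbitrary units)
ts = np.linspace(0, 5, 50)
signal = np.array([ramsey_signal(t, sigma) for t in ts])
expected = np.exp(-0.5 * (sigma * ts) ** 2)   # Gaussian free-induction decay
print(np.max(np.abs(signal - expected)))      # Monte Carlo matches exp(-sigma^2 t^2/2)
```

A Markovian (white-noise) model would instead give exponential decay, which is why pulse-level simulators for spin qubits need correlated noise traces rather than Lindblad rates alone.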
Reduction of thermodynamic uncertainty by a virtual qubit
This paper studies quantum thermal machines that use coherent coupling between two energy levels (a 'virtual qubit') to violate classical thermodynamic uncertainty relations. The researchers show that quantum coherence reduces current fluctuations below classical limits, potentially enabling more efficient quantum heat engines and refrigerators.
Key Contributions
- Demonstrates that quantum coherence in virtual qubits can reduce thermodynamic uncertainty below classical bounds
- Provides exact decomposition of thermodynamic uncertainty into classical and coherent contributions with optimization conditions
View Full Abstract
The thermodynamic uncertainty relation (TUR) imposes a fundamental constraint between current fluctuations and entropy production, providing a refined formulation of the second law for micro- and nanoscale systems. Quantum violations of the classical TUR reveal genuinely quantum thermodynamic effects, which are essential for improving performance and enabling optimization in quantum technologies. In this work, we analyze the TUR in a class of paradigmatic quantum thermal-machine models whose operation is enabled by coherent coupling between two energy levels forming a virtual qubit. Steady-state coherences are confined to this virtual-qubit subspace, while in the absence of coherent coupling the system satisfies detailed balance with the thermal reservoirs and supports no steady-state heat currents. We show that the steady-state currents and entropy production can be fully reproduced by an effective classical Markov process, whereas current fluctuations acquire an additional purely quantum correction originating from coherence. As a result, the thermodynamic uncertainty naturally decomposes into a classical (diagonal) contribution and a coherent contribution. The latter becomes negative under resonant conditions and reaches its minimum at the coupling strength that maximizes steady-state coherence. We further identify the optimization conditions and the criteria for surpassing the classical TUR bound in the vicinity of the reversible limit.
Unifying Quantum and Classical Dynamics
This paper claims to show that quantum and classical dynamics are mathematically equivalent by reformulating the Heisenberg equations of motion to look identical to Newton's equations, with quantum operators replacing classical observables. The work argues for exact equivalence rather than the typical emergence of classical physics from quantum mechanics in limiting cases.
Key Contributions
- Mathematical reformulation showing equivalence between Heisenberg equations and Newton's equations
- Theoretical framework claiming exact correspondence between quantum and classical dynamics without invoking limiting conditions
View Full Abstract
Classical and quantum physics represent two distinct theories; however, quantum physics is regarded as the more fundamental of the two. It is posited that classical mechanics should arise from quantum mechanics under certain limiting conditions. Nevertheless, this remains a challenging objective. In this work, we explore the potential for unifying the dynamics of classical and quantum physics. This discussion does not suggest that classical behavior emerges from quantum mechanics; rather, it demonstrates the exact equivalence between the dynamics of quantum observables and their classical counterparts. It is shown that the Heisenberg equations of motion can be cast in a form that is identical to Newton's equations of motion, with $\hbar$ being absent from the formulation. This implies that both quantum and classical dynamics are governed by the same equations, with the Heisenberg operators substituting the classical observables.
Cloud parameter estimation for interacting BEC after time-of-flight
This paper studies how interactions between condensed and thermal atoms in Bose-Einstein condensates affect the expansion patterns observed in time-of-flight experiments. The researchers use simulations to show that ignoring these interactions leads to systematic errors in measuring important system parameters like temperature and condensed fraction.
Key Contributions
- Systematic analysis of interaction effects on BEC expansion profiles in time-of-flight measurements
- Development of improved fitting methods that account for condensate-thermal atom interactions, reducing parameter extraction errors
View Full Abstract
Experiments on Bose-Einstein condensates at finite temperature typically extract the system parameters, such as temperature, atom number, and condensed fraction from time-of-flight images taken after a free expansion time. This paper systematically examines the effect of repulsive interactions between the condensed and thermal atoms in partially condensed clouds on the expansion profile of the thermal cloud. An analytical expression for the expansion can be obtained only if the interactions between the Bose-Einstein condensate and thermal atoms are neglected, resulting in a Bose-enhanced distribution for the thermal component. Here, the deformation of the cloud due to interactions and the effects on estimated parameters are investigated by simulating the expansion using a ballistic approximation. By fitting the simulated expansion profiles with a Bose-enhanced distribution, the errors of using such a fit are estimated, and the results are explained phenomenologically. The simulation was also used as a fitting function for experimental data, showing better agreement of the extracted condensed fraction with the semi-ideal model than results from a Bose-enhanced fit.
Tight bounds on recurrence time in closed quantum systems
This paper establishes rigorous upper bounds on how long it takes for an isolated quantum system to return close to its initial state (quantum recurrence), showing that the recurrence time scales with the Hilbert-space dimension and depends on the system's energy variance and on how quickly the state initially escapes its starting neighborhood.
Key Contributions
- Established rigorous upper bounds on quantum recurrence time in terms of Hilbert space dimension, neighborhood size, and escape time
- Provided theoretical framework connecting recurrence to inverse quantum speed limit problems and Hamiltonian variance
View Full Abstract
The evolution of an isolated quantum system inevitably exhibits recurrence: the state returns to the vicinity of its initial condition after finite time. Despite its fundamental nature, a rigorous quantitative understanding of recurrence has been lacking. We establish upper bounds on the recurrence time, $t_{\mathrm{rec}} \lesssim t_{\mathrm{exit}}(ε)(1/ε)^d$, where $d$ is the Hilbert-space dimension, $ε$ the neighborhood size, and $t_{\mathrm{exit}}(ε)$ the escape time from this neighborhood. For pure states evolving under a Hamiltonian $H$, estimating $t_{\mathrm{exit}}$ is equivalent to an inverse quantum speed limit problem: finding upper bounds on the time a time-evolved state $ψ_t$ needs to depart from the $ε$-vicinity of the initial state $ψ_0$. We provide a partial solution, showing that under mild assumptions $t_{\mathrm{exit}}(ε) \approx ε/\sqrt{ Δ(H^2)}$, with $Δ(H^2)$ the Hamiltonian variance in $ψ_0$. We show that our upper bound on $t_{\mathrm{rec}}$ is generically saturated for random Hamiltonians. Finally, we analyze the impact of coherence of the initial state in the eigenbasis of $H$ on recurrence behavior.
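The short-time estimate t_exit(ε) ≈ ε/√Δ(H²) from the abstract is easy to check numerically for a small random Hamiltonian. The distance convention below — √(1−F), with F the fidelity to the initial state — is one choice of "ε-vicinity"; the paper's precise metric may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
# Random GUE-like Hamiltonian and random initial state
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2
psi0 = rng.normal(size=d) + 1j * rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)

w, V = np.linalg.eigh(H)
c = V.conj().T @ psi0                    # amplitudes in the energy eigenbasis

mean_E = float(np.sum(np.abs(c)**2 * w))
var_E = float(np.sum(np.abs(c)**2 * w**2) - mean_E**2)   # Delta(H^2) in psi0

def dist(t):
    """sqrt(1 - F) distance of the evolved state from psi0."""
    overlap = np.sum(np.abs(c)**2 * np.exp(-1j * w * t))
    return np.sqrt(max(0.0, 1 - np.abs(overlap)**2))

eps = 0.1
t_pred = eps / np.sqrt(var_E)            # short-time estimate of the exit time
ts = np.linspace(0, 5 * t_pred, 5000)
t_exit = next(t for t in ts if dist(t) >= eps)
print(t_exit, t_pred)                     # agree to leading order in eps
```

The agreement follows from the short-time expansion 1 − F(t) ≈ Δ(H²) t², which is exactly the mechanism behind the abstract's inverse quantum speed limit.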
Bounding many-body properties under partial information and finite measurement statistics
This paper develops scalable computational methods to calculate bounds on properties of many-body quantum systems using incomplete experimental measurements with finite statistics. The approach uses moment-matrix relaxations and semidefinite programming to make the calculations tractable for larger quantum systems while accounting for real-world measurement noise.
Key Contributions
- Development of scalable moment-matrix relaxation methods for bounding many-body quantum system properties
- Integration of experimental constraints like shot noise and partial information into semidefinite programming frameworks
- Adaptation of the approach to systems with specific knowledge like ground states, symmetries, or steady states
View Full Abstract
Calculating bounds of properties of many-body quantum systems is of paramount importance, since they guide our understanding of emergent quantum phenomena and complement the insights obtained from estimation methods. Recent semidefinite programming approaches enable probabilistic bounds from finite-shot measurements of easily accessible, yet informationally incomplete, observables. Here we render these methods scalable in the number of qubits by instead utilizing moment-matrix relaxations. After introducing the general formalism, we show how the approach can be adapted with specific knowledge of the system, such as it being the ground state of a given Hamiltonian, possessing specific symmetries or being the steady state of a given Lindbladian. Our approach defines a scalable real-world certification scheme leveraging semidefinite programming relaxations and experimental estimations which, unavoidably, contain shot noise.
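The flavor of such certification schemes can be seen already for a single qubit: given only a measured value of ⟨X⟩, positivity of the density matrix is the sole constraint bounding the unmeasured ⟨Z⟩. The brute-force scan below (a toy stand-in for the paper's semidefinite programs) recovers the Bloch-sphere bound √(1 − ⟨X⟩²):

```python
import numpy as np

# Single-qubit toy of "bounding properties under partial information":
# given only <X> = 0.6, find the largest <Z> compatible with a valid state.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

x_meas = 0.6
best = -np.inf
for z in np.linspace(-1, 1, 2001):
    rho = (I2 + x_meas * X + z * Z) / 2          # Bloch-vector parametrization
    if np.linalg.eigvalsh(rho)[0] >= -1e-12:      # positivity is the only constraint
        best = max(best, z)
print(best, np.sqrt(1 - x_meas**2))              # both ≈ 0.8
```

For many qubits the scan becomes a semidefinite program over a moment matrix, which is what makes the approach scalable; shot noise on ⟨X⟩ simply widens the feasible set and hence the bound.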
A Collection of Pinsker-type Inequalities for Quantum Divergences
This paper develops mathematical inequalities that provide bounds on various quantum divergences (measures of how different two quantum states are) in terms of trace distance. It extends the classical Pinsker inequality to multiple types of quantum divergences including f-divergences, Rényi divergences, and their smoothed versions.
Key Contributions
- Extension of Pinsker's inequality to various quantum divergences including f-divergences and Rényi divergences
- Development of adaptation strategy for applying these bounds to smoothed divergences
View Full Abstract
Pinsker's inequality sets a lower bound on the Umegaki divergence of two quantum states in terms of their trace distance. In this work, we formulate corresponding estimates for a variety of quantum and classical divergences including $f$-divergences like Hellinger and $χ^2$-divergences as well as Rényi divergences and special cases thereof like the Umegaki divergence, collision divergence, max divergence. We further provide a strategy on how to adapt these bounds to smoothed divergences.
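The baseline inequality the paper generalizes — D(ρ‖σ) ≥ 2T(ρ,σ)² in nats, with T the trace distance — can be spot-checked numerically on random density matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    """Random full-rank density matrix (Ginibre construction)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def umegaki(rho, sigma):
    """Umegaki relative entropy Tr rho (log rho - log sigma), in nats."""
    def logm(A):
        w, V = np.linalg.eigh(A)
        return V @ np.diag(np.log(w)) @ V.conj().T
    return float(np.trace(rho @ (logm(rho) - logm(sigma))).real)

def trace_distance(rho, sigma):
    w = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.abs(w).sum())

for _ in range(100):
    rho, sigma = random_state(4), random_state(4)
    D, T = umegaki(rho, sigma), trace_distance(rho, sigma)
    assert D >= 2 * T ** 2 - 1e-12   # quantum Pinsker: D(rho||sigma) >= 2 T^2
print("Pinsker bound held on all samples")
```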
Learning Hamiltonians in the Heisenberg limit with static single-qubit fields
This paper presents a new protocol for learning quantum Hamiltonians that achieves optimal precision scaling using only simple, static single-qubit control fields, avoiding the need for complex multi-qubit operations or time-varying control strengths that current methods require.
Key Contributions
- Development of Heisenberg-limited Hamiltonian learning protocol using only static single-qubit fields
- Proof that the method is robust against state preparation and measurement errors
- Information-theoretic proof that non-vanishing static field strength is necessary for Heisenberg scaling
View Full Abstract
Learning the Hamiltonian governing a quantum system is a central task in quantum metrology, sensing, and device characterization. Existing Heisenberg-limited Hamiltonian learning protocols either require multi-qubit operations that are prone to noise, or single-qubit operations whose frequency or strength increases with the desired precision. These two requirements limit the applicability of Hamiltonian learning on near-term quantum platforms. We present a protocol that learns a quantum Hamiltonian with the optimal Heisenberg-limited scaling using only single-qubit control in the form of static fields with strengths that are independent of the target precision. Our protocol is robust against the state preparation and measurement (SPAM) error. By overcoming these limitations, our protocol provides new tools for device characterization and quantum sensing. We demonstrate that our method achieves the Heisenberg-limited scaling through rigorous mathematical proof and numerical experiments. We also prove an information-theoretic lower bound showing that a non-vanishing static field strength is necessary for achieving the Heisenberg limit unless one employs an extensive number of discrete control operations.
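The Heisenberg-limited scaling at stake — estimation error shrinking as 1/T with the coherent evolution time T at fixed shot budget — can be seen in a plain Ramsey-style frequency estimate. This is a generic scaling demonstration, not the paper's protocol (which adds static-field control and SPAM robustness):

```python
import numpy as np

rng = np.random.default_rng(4)

def estimate_omega(omega, T, shots=10000):
    """One Ramsey-style estimate of omega from binary outcomes taken after
    a fixed evolution time T, with outcome probability p = sin^2(omega*T/2)."""
    p = np.sin(omega * T / 2) ** 2
    p_hat = rng.binomial(shots, p) / shots
    return 2 * np.arcsin(np.sqrt(p_hat)) / T

omega, trials = 0.2, 400
stds = {}
for T in (1.0, 4.0):
    est = np.array([estimate_omega(omega, T) for _ in range(trials)])
    stds[T] = est.std()
print(stds)   # std(omega_hat) ≈ 1/(T*sqrt(shots)): quadrupling T quarters the error
```

Error propagation gives std(ω̂) = 1/(T√shots) independent of ω (away from phase wrapping), so doubling the interrogation time buys what quadrupling the shot count would.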
Realistic prospects for testing a relativistic local quantum measurement inequality
This paper investigates the theoretical limits of quantum detectors by deriving and testing an inequality that describes the fundamental trade-off between avoiding false detections (dark counts) and successfully detecting real quantum excitations. The researchers model realistic photodetection scenarios and provide numerical results showing how suppressing unwanted background clicks necessarily reduces the detector's ability to register true signals.
Key Contributions
- Derives an explicit bound for relativistic local quantum measurement inequality applicable to arbitrary coherent states
- Provides numerical modeling of realistic photodetection scenarios with finite-size detectors and time windows
View Full Abstract
We investigate the experimental prospects for testing a relativistic local quantum measurement inequality that quantifies the trade-off between vacuum insensitivity and responsiveness to excitations for finite-size detectors. Building on the Reeh–Schlieder approximation for coherent states, we derive an explicit and practically applicable bound for arbitrary coherent states. To connect with realistic photodetection scenarios, we model the detection region as a square prism operating over a finite time window and consider a normally incident single-mode coherent state. Numerical results exhibit the expected qualitative behavior: suppressing dark counts necessarily tightens the achievable click probability.
Principles of Optics in the Fock Space: Scalable Manipulation of Giant Quantum States
This paper introduces 'Fock-space optics', a framework that treats photon number as a synthetic dimension to control quantum states with many photons. The researchers experimentally demonstrated optical-like phenomena (refraction, lensing, interference) in quantum systems with up to 180 photons using superconducting microwave resonators.
Key Contributions
- Established conceptual framework mapping classical wave optics to quantum Fock space manipulation
- Demonstrated scalable control of large quantum states with up to 180 photons using superconducting resonators
View Full Abstract
The manipulation of distinct degrees of freedom of photons plays a critical role in both classical and quantum information processing. While the principles of wave optics provide elegant and scalable control over classical light in spatial and temporal domains, engineering quantum states in Fock space has been largely restricted to few-photon regimes, hindered by the computational and experimental challenges of large Hilbert spaces. Here, we introduce "Fock-space optics", establishing a conceptual framework of wave propagation in the quantum domain by treating photon number as a synthetic dimension. Using a superconducting microwave resonator, we experimentally demonstrate Fock-space analogues of optical propagation, refraction, lensing, dispersion, and interference with up to 180 photons. These results establish a fundamental correspondence between Schrödinger evolution in a single bosonic mode and classical paraxial wave propagation. By mapping intuitive optical concepts onto high-dimensional quantum state engineering, our work opens a path toward scalable control of large-scale quantum systems with thousands of photons and advanced bosonic information processing.
Addition to the dynamic Stark shift of the coherent population trapping resonance
This paper analyzes how laser light shifts the frequency of coherent population trapping resonances in atoms, discovering an additional shift beyond the conventional dynamic Stark shift that occurs when two laser beams interact with atomic energy levels. The researchers derive mathematical expressions for this extra shift and show it could be useful for controlling precision in atomic clocks and other quantum devices.
Key Contributions
- Discovery and analytical description of an additional light shift in coherent population trapping beyond conventional dynamic Stark shift
- Demonstration that this additional shift shows nonlinear intensity dependence under strong coupling, offering new control mechanisms for precision atomic devices
View Full Abstract
This paper presents a theoretical study of the light-induced shift of the coherent population trapping resonance. An analytical model is proposed that describes the interaction of two radiation components with an atomic system using a $\Lambda$ scheme and takes into account an additional excited-state level. Both weak and strong coupling regimes with off-resonant transitions are considered. It is shown that, in addition to the conventional dynamic Stark shift, an extra shift arises due to the distortion of the resonance line shape when bichromatic laser radiation interacts with off-resonant atomic transitions. An analytical expression for this additional shift is derived in the weak-coupling limit, and its significant impact on the resonance shape and sensitivity to the intensities of the laser field components is demonstrated. It is found that under strong coupling conditions, the additional shift can deviate substantially from a linear dependence on light intensity, suggesting new opportunities for controlling light shifts in precision atomic devices such as quantum frequency standards.
Complex scalar relativistic field as a probability amplitude
This paper proposes a new relativistic quantum field theory for neutral complex scalar fields, treating the field as a probability amplitude. The authors derive conservation laws, show the existence of two types of particle excitations with different dispersion relations, and discuss the transition to second quantization.
Key Contributions
- Formulation of relativistic equation for complex scalar field as probability amplitude
- Derivation of continuity equation and conservation laws
- Identification of two types of particle excitations with different dispersion laws
- Treatment of second quantization for the proposed field theory
View Full Abstract
A relativistic equation for a neutral complex field as a probability amplitude is proposed. The continuity equation for the probability density is obtained. It is shown that there are two types of excitations of this field, which describe particles with positive energy and different dispersion laws. Based on the Lagrangian formalism, conservation laws are obtained. The transition to second quantization is considered.
Exponential improvement in benchmarking multiphoton interference
This paper develops a new method for testing how indistinguishable multiple photons are, a property crucial for photonic quantum technologies. The researchers achieve an exponential improvement in efficiency by using quantum Fourier transform interferometry, making the testing process much faster and more scalable than previous methods.
Key Contributions
- Developed theorems connecting photon distinguishability to quantum Fourier transform interferometer suppression laws
- Created a protocol achieving constant sample complexity for prime photon numbers and sub-polynomial scaling otherwise, representing exponential improvement over existing methods
- Experimentally validated the approach on Quandela's photonic quantum processor showing clear runtime and precision advantages
View Full Abstract
Several photonic quantum technologies rely on the ability to generate multiple indistinguishable photons. Benchmarking the level of indistinguishability of these photons is essential for scalability. The Hong-Ou-Mandel dip provides a benchmark for the indistinguishability between two photons, and extending this test to the multi-photon setting has so far resulted in a protocol that computes the genuine n-photon indistinguishability (GI). However, this protocol has a sample complexity that increases exponentially with the number of input photons for an estimation of GI up to a given additive error. To address this problem, we introduce new theorems that strengthen our understanding of the relationship between distinguishability and the suppression laws of the quantum Fourier transform interferometer (QFT). Building on this, we propose a protocol using the QFT for benchmarking GI that achieves constant sample complexity for the estimation of GI up to a given additive error for prime photon numbers, and sub-polynomial scaling otherwise, representing an exponential improvement over the state of the art. We prove the optimality of our protocol in many relevant scenarios and validate our approach experimentally on Quandela's reconfigurable photonic quantum processor, where we observe a clear advantage in runtime and precision over the state of the art. We therefore establish the first scalable method for computing multi-photon indistinguishability, which applies naturally to current and near-term photonic quantum hardware.
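The suppression laws the protocol builds on can be illustrated directly: when fully indistinguishable single photons enter every mode of an n-mode QFT interferometer, output patterns whose mode indices do not sum to 0 mod n have vanishing amplitude (each amplitude is a matrix permanent), while distinguishable photons do populate them. A small sketch for n = 3 with illustrative patterns; bosonic normalization factors for repeated output modes are omitted, since they cannot turn a zero amplitude nonzero:

```python
import cmath
from functools import reduce
from itertools import permutations
from operator import mul

def permanent(m):
    # brute force over permutations; fine for the 3x3 cases below
    n = len(m)
    return sum(reduce(mul, (m[r][p[r]] for r in range(n)), 1)
               for p in permutations(range(n)))

n = 3
w = cmath.exp(2j * cmath.pi / n)   # QFT matrix entries U[j][k] = w**(j*k)/sqrt(n)

def amplitude(outputs):
    # one photon per input mode; the output-pattern amplitude is the
    # permanent of the submatrix of U picked out by the output modes
    m = [[w ** (j * k) / n ** 0.5 for k in outputs] for j in range(n)]
    return permanent(m)

a_suppressed = amplitude([0, 1, 1])   # indices sum to 2 != 0 (mod 3): forbidden
a_allowed    = amplitude([0, 1, 2])   # indices sum to 0 (mod 3): allowed
print(abs(a_suppressed), abs(a_allowed))
```

Counting events in the forbidden patterns therefore gives a direct handle on distinguishability, which is the intuition behind the constant-sample-complexity benchmark.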
Optimal control of a dissipative micromaser quantum battery in the ultrastrong coupling regime
This paper studies quantum batteries based on micromaser systems operating in the ultrastrong coupling regime, where a cavity interacts with a stream of qubits to store energy. The researchers develop optimal control strategies to maximize energy storage while managing the negative effects of dissipation and decoherence.
Key Contributions
- Demonstration that ultrastrong coupling improves quantum battery charging speed but requires dissipation control to prevent unbounded energy growth
- Development of optimal control protocols for maximizing stored ergotropy and stabilizing quantum batteries against dissipative losses
View Full Abstract
We investigate the open system dynamics of a micromaser quantum battery operating in the ultrastrong coupling (USC) regime under environmental dissipation. The battery consists of a single-mode electromagnetic cavity sequentially interacting, via the Rabi Hamiltonian, with a stream of qubits acting as chargers. Dissipative effects arise from the weak coupling of the qubit-cavity system to a thermal bath. Non-negligible in the USC regime, the counter-rotating terms substantially improve the charging speed, but also lead, in the absence of dissipation, to unbounded energy growth and highly mixed cavity states. Dissipation during each qubit-cavity interaction mitigates these detrimental effects, yielding steady states of finite energy and ergotropy. Optimal control of qubit preparation and interaction times enhances the battery's performance by: (i) maximizing the stored ergotropy through an optimized charging protocol; (ii) stabilizing the stored ergotropy against dissipative losses through an optimized measurement-based passive-feedback strategy. Overall, our numerical results demonstrate that the interplay of ultrastrong light-matter coupling, controlled dissipation, and optimized control strategies enables micromaser quantum batteries to achieve both enhanced charging performance and long-term stability under realistic conditions.
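Ergotropy — the maximum work extractable from a state by a unitary — is the figure of merit throughout the abstract. A minimal qubit-level sketch of the standard recipe (compare the state's energy against that of its passive counterpart); this is not the micromaser model, and the basis, energies, and states are illustrative:

```python
import math

def qubit_ergotropy(rho, energies):
    # rho: 2x2 Hermitian density matrix [[p00, c], [c*, p11]] with trace 1;
    # energies: (e0, e1) with e0 <= e1, Hamiltonian diagonal in this basis
    p00, p11, c = rho[0][0].real, rho[1][1].real, rho[0][1]
    mean_energy = p00 * energies[0] + p11 * energies[1]
    # eigenvalues of rho from trace and determinant
    tr, det = p00 + p11, p00 * p11 - abs(c) ** 2
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam_hi, lam_lo = (tr + disc) / 2, (tr - disc) / 2
    # passive state: the larger population sits on the lower level
    passive_energy = lam_hi * energies[0] + lam_lo * energies[1]
    return mean_energy - passive_energy

print(qubit_ergotropy([[0, 0], [0, 1]], (0.0, 1.0)))      # excited state: 1.0
print(qubit_ergotropy([[0.5, 0], [0, 0.5]], (0.0, 1.0)))  # maximally mixed: 0.0
```

The maximally mixed case shows why "highly mixed cavity states" are bad for a battery: energy can be stored while none of it is unitarily extractable.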
Adversarial Hypothesis Testing for Quantum Channels
This paper studies adversarial hypothesis testing for quantum channels, where one party (Alice) tries to make it harder for another party (Bob) to distinguish between different quantum communication channels by choosing inputs that minimize Bob's ability to tell them apart. The research reveals surprising differences in how quantum-quantum channels behave compared to classical-quantum channels in this adversarial setting.
Key Contributions
- Characterization of Stein exponents for adversarial hypothesis testing in four different settings of quantum channel discrimination
- Discovery that for quantum-quantum channels, Bob's advantage from knowing inputs disappears with general inputs, while for classical-quantum channels this advantage persists
View Full Abstract
This paper presents a systematic study of adversarial hypothesis testing for both quantum-quantum (QQ) and classical-quantum (CQ) channels. Unlike conventional channel discrimination, we consider a framework where the sender, Alice, selects the channel input adversarially to minimize Bob's distinguishability. We analyze this problem across four settings based on whether Alice employs i.i.d. or general inputs and whether the receiver, Bob, is informed of the specific input choice (allowing his measurement to depend on the input). We characterize the Stein exponents for each setting and reveal a striking distinction in behavior: for QQ channels with i.i.d. inputs, Bob's knowledge of the input significantly enhances distinguishability, yet this advantage vanishes when general inputs are permitted. In contrast, for CQ channels, Bob being informed provides a consistent advantage over the corresponding entanglement-breaking channels for both i.i.d. and general inputs. These results demonstrate a unique phenomenon in adversarial hypothesis testing where the CQ channel does not merely behave as a special case of the QQ channel.
Random matrix theory universality of current operators in spin-$S$ Heisenberg chains
This paper investigates quantum chaotic systems, specifically Heisenberg spin chains, to test whether their observable properties follow random matrix theory predictions. The researchers numerically study spin current operators in these systems and find evidence supporting the conjecture that quantum chaotic systems exhibit universal statistical behaviors predicted by random matrix theory.
Key Contributions
- Numerical verification of random matrix theory universality in quantum chaotic Heisenberg spin chains
- Application of quantum-typicality-based methods combined with symmetry exploitation to study spin current operators
View Full Abstract
Quantum chaotic systems exhibit certain universal statistical properties that closely resemble predictions from random matrix theory (RMT). With respect to observables, it has recently been conjectured that, when truncated to a sufficiently narrow energy window, their statistical properties can be described by a unitarily invariant ensemble, and testable criteria have been introduced, which are based on the scaling behavior of free cumulants. In this paper, we investigate the conjecture numerically in translationally invariant Heisenberg spin chains with spin quantum number $S =\frac{1}{2},1,\frac{3}{2}$. Combining a quantum-typicality-based numerical method with the exploitation of the system's symmetries, we study the spin current operator and find clear evidence of consistency with the proposed criteria in chaotic cases. Our findings further support the conjecture of the existence of RMT universality as manifest in the observable properties in quantum chaotic systems.
Quantitative approach for the Dicke-Ising chain with an effective self-consistent matter Hamiltonian
This paper studies the Dicke-Ising chain, a quantum many-body system of spins coupled to photons, by developing a method that maps the full problem onto an effective matter-only Hamiltonian in the thermodynamic limit. The authors use advanced numerical techniques to precisely map out the quantum phase diagram, including superradiant and magnetically ordered phases.
Key Contributions
- Development of effective self-consistent matter Hamiltonian framework for Dicke-Ising chain that eliminates need for photon-spin correlations
- High-precision determination of quantum phase diagram using NLCE+DMRG methods, including refinement of multicritical point location to 10^-4 accuracy
View Full Abstract
In the thermodynamic limit, the Dicke-Ising chain maps exactly onto an effective self-consistent matter Hamiltonian with the photon field acting solely as a self-consistent effective field. As a consequence, no quantum correlations between photons and spins are needed to understand the quantum phase diagram. This enables us to determine the quantum phase diagram in the thermodynamic limit using numerical linked-cluster expansions combined with density matrix renormalization group calculations (NLCE+DMRG) to solve the resulting self-consistent matter Hamiltonian. This includes magnetically ordered phases with significantly improved accuracy compared to previous estimates. For ferromagnetic Ising couplings, we refine the location of the multicritical point governing the change in the order of the superradiant phase transition, reaching a relative accuracy of $10^{-4}$. For antiferromagnetic Ising couplings, we confirm the existence of the narrow antiferromagnetic superradiant phase in the thermodynamic limit. The effective matter Hamiltonian framework identifies the antiferromagnetic superradiant phase as the many-body ground state of an antiferromagnetic transverse-field Ising model with longitudinal field. This phase emerges through continuous Dicke-type polariton condensation from the antiferromagnetic normal phase, followed by a first-order transition to the paramagnetic superradiant phase. Thus, NLCE+DMRG provides a precise determination of the Dicke-Ising phase diagram in one dimension by solving the self-consistent effective matter Hamiltonian.
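The core idea — the photon field entering only as a self-consistent effective field on the matter — can be caricatured with a single spin and no Ising coupling. The parameters and the plain fixed-point iteration below are illustrative, not the paper's NLCE+DMRG machinery:

```python
import math

def self_consistent_alpha(g, omega=1.0, delta=1.0, steps=200):
    # replace the cavity operator by a classical amplitude alpha, solve the
    # spin in the resulting transverse field, and feed <sigma_x> back
    alpha = 0.5                  # seed away from the trivial fixed point
    for _ in range(steps):
        lam = 2 * g * alpha      # effective transverse field on the spin
        sx = -lam / math.sqrt(delta ** 2 + lam ** 2)   # ground-state <sigma_x>
        alpha = -(g / omega) * sx                      # field sourced by the spin
    return alpha

print(self_consistent_alpha(g=0.5))   # below threshold: alpha -> 0 (normal phase)
print(self_consistent_alpha(g=1.0))   # above threshold: alpha > 0 (superradiant)
```

Even this toy reproduces the structure of a superradiant transition: the trivial fixed point destabilizes once the coupling exceeds a threshold, and a nonzero photon amplitude emerges self-consistently.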
Coherence Limits in Interference-Based cos(2$\varphi$) Qubits
This paper analyzes the coherence properties of a specific type of superconducting qubit called cos(2φ) qubits, which are protected from certain types of noise. The researchers find that despite this protection, there's still a fundamental trade-off between different noise sources that limits the qubit's coherence time to only a few microseconds.
Key Contributions
- Unified theoretical description of various cos(2φ) qubit implementations under a single Hamiltonian framework
- Identification of fundamental coherence limits and trade-offs between charge and flux noise in parity-protected qubits
View Full Abstract
We investigate the coherence properties of parity-protected $\cos(2\varphi)$ qubits based on interferences between two Josephson elements in a superconducting loop. We show that qubit implementations of a $\cos(2\varphi)$ potential using a single loop, such as those employing semiconducting junctions, rhombus circuits, flowermon and KITE structures, can be described by the same Hamiltonian as two multi-harmonic Josephson junctions in a SQUID geometry. We find that, despite the parity protection arising from the suppression of single Cooper pair tunneling, there exists a fundamental trade-off between charge and flux noise dephasing channels. Using numerical simulations, we examine how relaxation and dephasing rates depend on external flux and circuit parameters, and we identify the best compromise for maximum coherence. With currently existing circuit parameters, the qubit lifetime $T_1$ can exceed milliseconds while the dephasing time $T_\varphi$ remains limited to only a few microseconds due to either flux or charge noise. Our findings establish practical limits on the coherence of this class of qubits and raise questions about the long-term potential of this approach.
Topology-Aware Block Coordinate Descent for Qubit Frequency Calibration of Superconducting Quantum Processors
This paper develops a more efficient method for calibrating qubit frequencies in superconducting quantum processors by reformulating the widely-used Snake optimizer as Block Coordinate Descent and using topology-aware ordering to minimize calibration time while maintaining accuracy.
Key Contributions
- Establishes mathematical equivalence between Snake optimizer and Block Coordinate Descent for qubit frequency calibration
- Introduces topology-aware block ordering using Traveling Salesman Problem formulation to minimize calibration runtime
- Demonstrates linear complexity scaling with qubit count while maintaining calibration quality
View Full Abstract
Pre-execution calibration is a major bottleneck for operating superconducting quantum processors, and qubit frequency allocation is especially challenging due to crosstalk-coupled objectives. We establish that the widely-used Snake optimizer is mathematically equivalent to Block Coordinate Descent (BCD), providing a rigorous theoretical foundation for this calibration strategy. Building on this formalization, we present a topology-aware block ordering obtained by casting order selection as a Sequence-Dependent Traveling Salesman Problem (SD-TSP) and solving it efficiently with a nearest-neighbor heuristic. The SD-TSP cost reflects how a given block choice expands the reduced-circuit footprint required to evaluate the block-local objective, enabling orders that minimize per-epoch evaluation time. Under local crosstalk/bounded-degree assumptions, the method achieves linear complexity in qubit count per epoch, while retaining calibration quality. We formalize the calibration objective, clarify when reduced experiments are equivalent or approximate to the full objective, and analyze convergence of the resulting inexact BCD with noisy measurements. Simulations on multi-qubit models show that the proposed BCD-NNA ordering attains the same optimization accuracy at markedly lower runtime than graph-based heuristics (BFS, DFS) and random orders, and is robust to measurement noise and tolerant to moderate non-local crosstalk. These results provide a scalable, implementation-ready workflow for frequency calibration on NISQ-era processors.
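The nearest-neighbor heuristic over a sequence-dependent cost can be sketched on a toy coupling graph. Here plain hop distance stands in for the paper's reduced-circuit footprint cost, and the grid, block labels, and cost model are all illustrative:

```python
from collections import deque

def hop_distance(graph, a, b):
    # BFS distance, a stand-in for the sequence-dependent footprint cost
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def nn_order(graph, start):
    # greedy nearest-neighbor tour over calibration blocks
    order, remaining = [start], set(graph) - {start}
    while remaining:
        nxt = min(remaining, key=lambda b: hop_distance(graph, order[-1], b))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# toy 2x3 grid of blocks:  0-1-2
#                          |#|#|
#                          3-4-5
grid = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
        3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
route = nn_order(grid, 0)
print(route)
```

Keeping consecutive blocks close on the device is what bounds the growth of the reduced-circuit footprint per step, which is the source of the claimed linear per-epoch complexity.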
On the average-case complexity of learning states from the circular and Gaussian ensembles
This paper proves that learning quantum states sampled from certain mathematical ensembles (circular and Gaussian) is computationally hard on average, using statistical query methods. The work extends previous complexity results to new classes of quantum states and develops novel mathematical techniques for analyzing random quantum circuits.
Key Contributions
- Established average-case hardness of learning Born distributions from circular and Gaussian ensembles in the statistical query model
- Developed unconventional integration techniques over compact groups that exactly evaluate total variation distances for Haar random circuits
View Full Abstract
Studying the complexity of states sampled from various ensembles is a central component of quantum information theory. In this work we establish the average-case hardness of learning, in the statistical query model, the Born distributions of states sampled uniformly from the circular and (fermionic) Gaussian ensembles. These ensembles of states are induced variously by the uniform measures on the compact symmetric spaces of type AI, AII, and DIII. This finding complements analogous recent results for states sampled from the classical compact groups. On the technical side, we employ a somewhat unconventional approach to integrating over the compact groups which may be of some independent interest. For example, our approach allows us to exactly evaluate the total variation distances between the output distributions of Haar random unitary and orthogonal circuits and the constant distribution, which were previously known only approximately.
Autonomous Quantum Simulation through Large Language Model Agents
This paper demonstrates that large language model agents can autonomously perform tensor network simulations of quantum many-body systems with 90% success rates. The researchers show that AI agents can learn specialized quantum simulation techniques through in-context learning and multi-agent architectures, potentially democratizing access to advanced quantum computational methods.
Key Contributions
- Demonstration of LLM agents autonomously performing tensor network quantum simulations with 90% success rate
- Development of multi-agent architecture with in-context learning for specialized quantum computational domains
- Systematic benchmarking across quantum phase transitions, open quantum systems, and photochemical reactions using multiple state-of-the-art LLMs
View Full Abstract
We demonstrate that large language model (LLM) agents can autonomously perform tensor network simulations of quantum many-body systems, achieving approximately 90% success rate across representative benchmark tasks. Tensor network methods are powerful tools for quantum simulation, but their effective use requires expertise typically acquired through years of graduate training. By combining in-context learning with curated documentation and multi-agent decomposition, we create autonomous AI agents that can be trained in specialized computational domains within minutes. We benchmark three configurations (baseline, single-agent with in-context learning, and multi-agent with in-context learning) on problems spanning quantum phase transitions, open quantum system dynamics, and photochemical reactions. Systematic evaluation using DeepSeek-V3.2, Gemini 2.5 Pro, and Claude Opus 4.5 demonstrates that both in-context learning and multi-agent architecture are essential. Analysis of failure modes reveals characteristic patterns across models, with the multi-agent configuration substantially reducing implementation errors and hallucinations compared to simpler architectures.
Exponential Analysis for Entanglement Distillation
This paper develops a theoretical framework for analyzing the reliability and error rates in entanglement distillation protocols, extending beyond traditional approaches to consider scenarios where the initial quantum state is not fully known. The authors establish mathematical bounds and optimal protocols for extracting useful entanglement from noisy quantum states.
Key Contributions
- Characterization of the reliability function for entanglement distillation using regularized quantum Hoeffding divergence
- Extension of entanglement distillation theory to black-box settings with unknown initial states
- Establishment of finite blocklength results connecting to composite hypothesis testing
- Construction of concrete optimal distillation protocols with full prior knowledge
View Full Abstract
Historically, the focus in entanglement distillation has predominantly been on the distillable entanglement, and the framework assumes complete knowledge of the initial state. In this paper, we study the reliability function of entanglement distillation, which specifies the optimal exponent of the decay of the distillation error when the distillation rate is below the distillable entanglement. Furthermore, to capture greater operational significance, we extend the framework from the standard setting of known states to a black-box setting, where distillation is performed from a set of possible states. We establish an exact finite blocklength result connecting to composite correlated hypothesis testing without any redundant correction terms. Based on this, the reliability function of entanglement distillation is characterized by the regularized quantum Hoeffding divergence. In the special case of a pure initial state, our result reduces to the error exponent for entanglement concentration derived by Hayashi et al. in 2003. Given full prior knowledge of the state, we construct a concrete optimal distillation protocol. Additionally, we analyze the strong converse exponent of entanglement distillation. While all the above results assume the free operations to be non-entangling, we also investigate other free operation classes, including PPT-preserving, dually non-entangling, and dually PPT-preserving operations.
Computing Statistical Properties of Velocity Fields on Current Quantum Hardware
This paper develops quantum algorithms for analyzing velocity fields in computational fluid dynamics, demonstrating methods to extract statistical properties like moments and structure functions from quantum circuits without full state reconstruction. The researchers test their approach on IBM's quantum hardware using 4 qubits to represent 16 spatial points, analyzing sine wave signals and Burgers' equation solutions.
Key Contributions
- Novel quantum readout methods for computational fluid dynamics that avoid full quantum state tomography
- Demonstration of statistical analysis of velocity fields on current NISQ hardware with error mitigation
View Full Abstract
Quantum algorithms are gaining attention in Computational Fluid Dynamics (CFD) for their favorable scaling, as encoding physical fields into quantum probability amplitudes enables representation of $2^n$ spatial points with only $n$ qubits. A key challenge in Quantum CFD is the efficient readout of simulation results, a topic that has received limited attention in the literature. This work presents methods to extract statistical properties of spatial velocity fields, such as central moments and structure functions, directly from parameterized ansatz circuits, avoiding full quantum state tomography. As a proof of concept, we implement our approach for 1D velocity fields, encoding 16 spatial points with 4 qubits, and analyze both a sine wave signal and four snapshots from Burgers' equation evolution. Using Qedma's error mitigation software QESEM, we demonstrate that such computations achieve high accuracy on current quantum devices, specifically IBMQ's Heron2 system ibm_fez.
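The statistics being read out — central moments and structure functions of the encoded field — are easy to state classically. The sketch below emulates the 4-qubit, 16-point sine-wave example directly on the sampled field values; no quantum circuit is involved, and the definitions are the standard ones, not the paper's circuit-level estimators:

```python
import math

n_qubits = 4
N = 2 ** n_qubits                # 16 grid points encoded by 4 qubits
u = [math.sin(2 * math.pi * k / N) for k in range(N)]   # sine-wave snapshot

mean = sum(u) / N

def central_moment(p):
    return sum((x - mean) ** p for x in u) / N

def structure_function(p, r):
    # S_p(r) = < |u(x + r) - u(x)|**p > on a periodic grid
    return sum(abs(u[(k + r) % N] - u[k]) ** p for k in range(N)) / N

cm2 = central_moment(2)          # variance of the field: 1/2 for a unit sine
sf2 = structure_function(2, 1)   # second-order structure function at lag 1
print(cm2, sf2)
```

The quantum-readout problem is to estimate exactly these quantities from measurements on the amplitude-encoded state without reconstructing all $2^n$ amplitudes.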
Quantum graphs of homomorphisms
This paper introduces a mathematical framework called quantum graphs that extends classical graph theory using noncommutative geometry. The authors construct a category of quantum graphs with homomorphism structures and prove connections between quantum graph properties and quantum strategy games, while also establishing links to quantum channel theory.
Key Contributions
- Introduction of the qGph category of quantum graphs with closed symmetric monoidal structure
- Proof that quantum graph homomorphisms correspond to winning quantum strategies in graph homomorphism games
- Demonstration that finite reflexive quantum graphs are confusability quantum graphs of quantum channels
View Full Abstract
We introduce a category $\mathsf{qGph}$ of quantum graphs, whose definition is motivated entirely from noncommutative geometry. For all quantum graphs $G$ and $H$ in $\mathsf{qGph}$, we then construct a quantum graph $[G,H]$ of homomorphisms from $G$ to $H$, making $\mathsf{qGph}$ a closed symmetric monoidal category. We prove that for all finite graphs $G$ and $H$, the quantum graph $[G,H]$ is nonempty iff the $(G,H)$-homomorphism game has a winning quantum strategy, directly generalizing the classical case. The finite quantum graphs in $\mathsf{qGph}$ are tracial, real, and self-adjoint, and the morphisms between them are CP morphisms that are adjoint to a unital $*$-homomorphism. We show that Weaver's two notions of a CP morphism coincide in this context. We also show that every finite reflexive quantum graph is the confusability quantum graph of a quantum channel, answering a question of Daws.
A perturbative non-Markovian treatment to low-temperature spin decoherence
This paper develops a theoretical framework to predict how electronic spins in molecules lose their quantum coherence at low temperatures due to interactions with nuclear spins. The researchers create a mathematical model that connects fundamental molecular properties to decoherence behavior and validate it against experimental data for molecular qubit candidates.
Key Contributions
- Development of a non-Markovian time-convolutionless master equation for electronic spin-nuclear spin bath interactions
- Framework connecting ab initio electronic structure parameters directly to decoherence dynamics
- Computationally efficient method for predicting low-temperature decoherence in molecular spin systems
View Full Abstract
Molecular spins are promising candidates for quantum information science, leveraging coherent electronic spin states for quantum sensing and computation. However, the practical application of these systems is hindered by electronic spin decoherence, driven by interactions with nuclear spins in the molecule and the surrounding environment at low temperatures. Predicting dephasing dynamics remains a formidable challenge due to the complexity of the spin bath. In this work, we develop a non-Markovian time-convolutionless master equation to treat an electronic spin coupled to a nuclear-spin bath. By relating ab initio electronic structure parameters directly to the decoherence dynamics, we provide a framework that accounts for pure dephasing in the low-temperature limit. We apply this method to a series of molecular qubit candidates and demonstrate good agreement with experimental relaxation trends. This approach offers a computationally efficient path for the prediction of low-temperature decoherence trends in molecular spin systems.
Light-induced Magnetization by Quantum Geometry
This paper proposes a new mechanism for creating magnetization in materials using light, based on quantum geometric properties of electronic systems. The researchers develop theoretical formalism to describe how light can induce magnetic effects through quantum geometry, and suggest this could provide a way to experimentally observe quantum-geometric quantities.
Key Contributions
- Development of semiclassical framework linking quantum geometry to light-induced magnetization
- Establishment of general formalism for second-order magneto-optical responses using quantum metric properties
View Full Abstract
We propose a mechanism for the inverse Faraday and the inverse Cotton--Mouton effects arising from quantum geometry, characterized by the quantum metric quadrupole and the weighted quantum metric. Within a semiclassical framework based on the Boltzmann transport theory, we establish a general formalism describing light-induced magnetization in electronic systems as a second-order response to the electric field of light. Using continuum and tight-binding models, we discuss the symmetry constraints on these effects and estimate the magnitudes of the resulting magnetizations. Our results highlight a direct manifestation of quantum-geometric quantities in nonlinear magneto-optical responses and suggest a viable pathway for experimental detection.
Characterization of Silicon-Membrane TES Microcalorimeters for Large-Format X-ray Spectrometers with Integrated Microwave SQUID Readout
This paper describes the development and testing of silicon-based transition-edge sensor (TES) detectors for X-ray spectroscopy, designed to create a 10,000-pixel spectrometer for studying fragile chemical intermediates in catalysis research. The researchers demonstrate that silicon membranes can effectively replace silicon nitride membranes while enabling better integration with readout electronics.
Key Contributions
- Development of silicon-membrane TES microcalorimeters compatible with monolithic SQUID integration
- Demonstration of efficient focal plane area usage through bendable chip architecture for large-format X-ray spectrometers
View Full Abstract
We present the electro-thermal characterization of transition-edge sensor (TES) detectors suspended on Si membranes fabricated using a silicon-on-insulator (SOI) wafer. The use of an all-silicon fabrication platform, in contrast to the more commonly used silicon nitride membranes, is compatible with monolithic fabrication of integrated TES and SQUID circuits. The all-silicon architecture additionally allows efficient use of focal plane area; the readout circuitry may be positioned out of the focal plane by bending a thinned portion of the chip. Compatibility with integrated fabrication and efficient use of focal plane area provide a path to an efficient soft X-ray spectrometer. This work is motivated by our goal to develop a 10,000-pixel TES spectrometer to overcome critical measurement limitations in catalysis research. The characterization of fragile, carbon-based intermediates via techniques like Resonant Inelastic X-ray Scattering (RIXS) is often precluded by the slow, high-flux nature of existing technologies. The new instrument will allow for fast RIXS measurements to be made without causing sample damage. We verify the detector models and measure the energy resolution using a pulsed optical laser, demonstrating the viability of this approach for the final instrument to be deployed at the National Synchrotron Light Source II (NSLS-II).
Spectral Distribution of Exceptional Points in Lattices with Localized Loss
This paper studies exceptional points (special mathematical singularities) in arrays of optical waveguides with energy loss, finding that arrays with even versus odd numbers of waveguides behave fundamentally differently. The research provides design principles for optical structures that can either exploit or avoid these singular behaviors.
Key Contributions
- Discovery of parity-dependent exceptional point behavior in finite waveguide arrays
- Analytical framework for predicting exceptional point emergence in non-Hermitian optical lattices
View Full Abstract
We explore the existence and stability of exceptional points (EPs) in finite waveguide arrays subject to single-site dissipation. We show that the EP landscape is dictated by a geometry-dependent parity effect, leading to strictly distinct spectral behaviors for arrays with even versus odd numbers of waveguides. Through analytical derivation and numerical analysis, we define the conditions under which these singularities emerge and evolve. Our findings clarify the mechanisms of symmetry breaking in finite non-Hermitian lattices, offering new guidelines for the design of robust optical structures that exploit or avoid exceptional points.
Dissipative State Engineering of Complex Entanglement with Markovian Dynamics
This paper presents a method to create highly entangled quantum states (specifically cluster states) by using engineered dissipation in spin systems. The researchers show how to design a quantum system that naturally evolves toward these complex entangled states as its steady state, rather than trying to actively prepare them.
Key Contributions
- Development of dissipative state engineering method for generating cluster states using Markovian dynamics
- Demonstration that cluster state fidelity and spectral gap are relatively insensitive to system size once dissipation dominates
- Explicit construction of Liouvillian superoperator framework for analyzing steady-state entanglement generation
View Full Abstract
Highly multipartite entangled states play an important role in various quantum computing tasks. We investigate the dissipative generation of a complex entanglement structure, as in a cluster state, through engineered Markovian dynamics in spin systems coupled via Ising interactions. Using the Lindblad master equation, we design a projection-based dissipative channel that drives the system toward a unique pure steady state corresponding to the desired cluster state. This is done by removing the contribution of the orthogonal states. By explicitly constructing the Liouvillian superoperator in the full $2^N$-dimensional Hilbert space, we compute the steady-state density matrix, the Liouvillian spectral gap, an entanglement witness, and the fidelity with respect to the ideal cluster state. The results demonstrate that the cluster state emerges as the steady state when the engineered Liouvillian dissipation dominates over the local Ising interaction between spins. Moreover, we find that the fidelity and the Liouvillian spectral gap are relatively insensitive to the system size once saturation dissipation, which scales linearly with the qubit number, has been achieved. This analysis illustrates a physically realizable path towards steady-state entanglement generation in spin systems using engineered dissipation.
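The explicit Liouvillian construction the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration on a single decaying qubit rather than the paper's cluster-state channel; it shows how the superoperator is built by vectorizing the Lindblad equation, and how the steady state and spectral gap fall out of its spectrum:

```python
import numpy as np

def liouvillian(H, cops):
    """Column-stacked superoperator L such that d vec(rho)/dt = L @ vec(rho)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for c in cops:
        n = c.conj().T @ c
        L += np.kron(c.conj(), c) - 0.5 * (np.kron(I, n) + np.kron(n.T, I))
    return L

gamma, delta = 1.0, 0.5
H = 0.5 * delta * np.diag([1.0, -1.0])                 # qubit splitting
sm = np.sqrt(gamma) * np.array([[0, 0], [1, 0]], dtype=complex)  # decay |e> -> |g>
L = liouvillian(H, [sm])

evals, evecs = np.linalg.eig(L)
k = np.argmin(np.abs(evals))                  # eigenvalue ~ 0 -> steady state
rho = evecs[:, k].reshape(2, 2, order='F')    # undo column stacking
rho = rho / np.trace(rho)
gap = sorted(np.abs(evals.real))[1]           # Liouvillian spectral gap
print(np.round(rho.real, 6), gap)             # steady state diag(0, 1), gap gamma/2
```

For amplitude decay the steady state is the ground state and the gap is gamma/2, which the spectrum reproduces.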
Genuine multipartite Rains entanglement
This paper introduces a new mathematical measure called genuine multipartite Rains entanglement (GMRE) to quantify how entangled multiple quantum particles are with each other. The measure can be computed efficiently and provides bounds on how much useful entanglement can be extracted from quantum systems involving many particles.
Key Contributions
- Introduction of genuine multipartite Rains entanglement as a computable entanglement measure using semi-definite programming
- Proof that GMRE bounds one-shot GHZ-distillable entanglement and serves as a multipartite entanglement monotone
- Generalization incorporating quantum Renyi relative entropies
View Full Abstract
We introduce the genuine multipartite Rains entanglement (GMRE) as a measure of genuine multipartite entanglement that can be computed using semi-definite programming. Similar to the Rains relative entropy (its bipartite counterpart), the GMRE is monotone under selective quantum operations that completely preserve the positivity of the partial transpose, implying that it is a multipartite entanglement monotone. As a consequence, we show that the GMRE bounds both the one-shot standard and probabilistic approximate GHZ-distillable entanglement from above. We also develop a generalization of this quantity that incorporates other entropies, including quantum Renyi relative entropies.
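The PPT operations underlying Rains-type quantities revolve around the partial transpose. As a small two-qubit illustration (the basic object only, not the multipartite GMRE itself, which requires a semi-definite program), a Bell state acquires a negative eigenvalue under partial transposition:

```python
import numpy as np

def partial_transpose(rho, d=2):
    """Partial transpose on the second subsystem of a d*d x d*d bipartite state."""
    r = rho.reshape(d, d, d, d)              # rho[(i,a),(j,b)] -> r[i,a,j,b]
    return r.transpose(0, 3, 2, 1).reshape(d * d, d * d)

phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)             # (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)
ev = np.linalg.eigvalsh(partial_transpose(rho))
negativity = sum(abs(e) for e in ev if e < 0)
print(np.round(ev, 6), negativity)           # eigenvalues include -1/2; negativity 1/2
```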
High-Resolution Spectroscopy of $^{173}$Yb$^{+}$ Ions
This paper demonstrates precise laser cooling and spectroscopy of a single trapped ytterbium-173 ion, measuring previously unobserved electronic transitions and determining the nuclear magnetic octupole moment with unprecedented accuracy. The researchers achieved high-resolution measurements of hyperfine structure and isotope shifts that could enable more sophisticated quantum computing architectures.
Key Contributions
- First observation and coherent excitation of the 2S1/2 → 2D3/2 transition at 436 nm in 173Yb+
- Determination of nuclear magnetic octupole moment with uncertainty reduced by more than 2 orders of magnitude
- High-precision isotope shift measurement between 171Yb+ and 173Yb+ with 1.4 Hz uncertainty
- Resolution of hyperfine structure of the 2D3/2 state with relative uncertainty below 10^-8
View Full Abstract
Compared to other stable isotopes of $\rm{Yb}^+$, $^{173}\rm{Yb}^+$ has a richer hyperfine structure, which leads to more favorable clock transitions, spectroscopic techniques for probing new physics, and more sophisticated quantum computing architectures. However, to date, its electronic spectrum remains poorly characterized. Here, we report on efficient laser cooling, state preparation, and detection of a single trapped $^{173}\rm{Yb}^+$ ion. The previously unobserved $^2\!S_{1/2} \rightarrow {}^2\!D_{3/2}$ electric quadrupole transition at 436 nm is coherently excited, and the isotope shift between $^{171}\rm{Yb}^+$ and $^{173}\rm{Yb}^+$ on this transition is determined with an uncertainty of 1.4 Hz. Using microwave spectroscopy, we resolve the hyperfine structure (HFS) of the ${}^2\!D_{3/2}$ state with a relative uncertainty below $10^{-8}$. From the HFS measurement data, we infer for ${}^{173}$Yb a nuclear magnetic octupole moment $\Omega = -0.062(8)\,({\rm b} \times \mu_N)$ with uncertainty reduced by more than 2 orders of magnitude compared to previous studies. The data also allow us to determine hyperfine anomalies for the ${}^2\!S_{1/2}$ and ${}^2\!D_{3/2}$ states.
Lattice fermion simulation of spontaneous time-reversal symmetry breaking in a helical Luttinger liquid
This paper extends a recently developed computational method, the 'tangent fermion' discretization, to simulate helical Luttinger liquids on a one-dimensional lattice, studying how these quantum systems can spontaneously break time-reversal symmetry when specific parameters are tuned. The researchers use tensor network calculations to confirm theoretical predictions about the phase transition in these systems.
Key Contributions
- Extension of tangent fermion method to discretize helical Luttinger liquid Hamiltonians while preserving time-reversal symmetry
- Numerical confirmation using tensor networks of spontaneous time-reversal symmetry breaking at critical Luttinger parameter values
View Full Abstract
We extend a recently developed "tangent fermion" method to discretize the Hamiltonian of a helical Luttinger liquid on a one-dimensional lattice, including two-particle backscattering processes that may open a gap in the spectrum. The fermion-doubling obstruction of the sine dispersion is avoided by working with a tangent dispersion, preserving the time-reversal symmetry of the Hamiltonian. The numerical results from a tensor network calculation on a finite lattice confirm the expectation from infinite-system analytics, that a gapped phase with spontaneously broken time-reversal symmetry emerges when the Fermi level is tuned to the Dirac point and the Luttinger parameter crosses a critical value.
Is it possible to determine unambiguously the Berry phase solely from quantum oscillations?
This paper examines the challenges in accurately determining the Berry phase (a geometric quantum phase) from quantum oscillation measurements in materials. The authors show that factors like the spin factor and Zeeman effects create ambiguities that can lead to incorrect conclusions about topological properties of materials.
Key Contributions
- Identifies ambiguities in Berry phase determination from Shubnikov-de Haas oscillations due to unknown g-factor and spin effects
- Demonstrates how neglecting the spin factor can lead to incorrect interpretations in topological materials with strong spin-orbit coupling
View Full Abstract
The Berry phase, a fundamental geometric phase in quantum systems, has become a crucial tool for probing the topological properties of materials. Quantum oscillations, such as Shubnikov-de Haas (SdH) oscillations, are widely used to extract this phase, but its unambiguous determination remains challenging. This work highlights the inherent ambiguities in interpreting the oscillation phase solely from SdH data, primarily due to the influence of the spin factor $R_S$, which depends on the Landé $g$-factor and effective mass. While the Lifshitz-Kosevich (LK) theory provides a framework for analyzing oscillations, the unknown g-factor introduces significant uncertainty. For instance, a zero oscillation phase could arise either from a nontrivial Berry phase or a negative $R_S$. We demonstrate that neglecting $R_S$ in modern studies, especially for topological materials with strong spin-orbit coupling, can lead to doubtful conclusions. Through theoretical analysis and numerical examples, we show how the interplay between the Berry phase and Zeeman effect complicates phase determination. Additionally, we discuss another underappreciated mechanism: the magnetic field dependence of the Fermi level. Our discussion underscores the need for complementary experimental techniques to resolve these ambiguities and calls for further research to refine the interpretation of quantum oscillations in topological systems.
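The ambiguity can be made concrete numerically. In the sketch below (placeholder values g = 2.2, m*/m_e = 1, and an arbitrary frequency F, not taken from the paper), a negative spin factor multiplying the Lifshitz-Kosevich cosine is pointwise identical to a positive amplitude carrying an extra pi "Berry" phase:

```python
import numpy as np

def R_s(g, m_ratio):
    """Spin (Zeeman) reduction factor in the Lifshitz-Kosevich amplitude."""
    return np.cos(0.5 * np.pi * g * m_ratio)

F = 500.0                               # oscillation frequency vs 1/B (placeholder)
invB = np.linspace(0.02, 0.1, 2000)     # 1/B grid
# Case 1: trivial (zero) oscillation phase, but a negative spin factor
trivial = R_s(2.2, 1.0) * np.cos(2 * np.pi * F * invB)
# Case 2: positive amplitude with an extra pi phase, as a "pi Berry phase" would give
berry = abs(R_s(2.2, 1.0)) * np.cos(2 * np.pi * F * invB + np.pi)
print(np.allclose(trivial, berry))      # the two signals are pointwise identical
```

Since cos(0.5*pi*2.2) < 0, the two parameter sets produce exactly the same SdH trace, which is the ambiguity the authors highlight.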
Quantum properties of heavy-fermion pairs at a lepton collider with polarised beams
This paper analyzes quantum properties like entanglement and spin correlations in heavy particle pairs (top quarks, tau leptons) produced at future particle colliders with polarized electron/positron beams. The researchers show how beam polarization can enhance sensitivity to detect new physics beyond the Standard Model through quantum measurements.
Key Contributions
- Derived analytical expressions for spin-density matrices of heavy fermion pairs under various beam polarization configurations
- Demonstrated that quantum observables like entanglement and Bell inequality violations can provide enhanced sensitivity to new physics in particle collider experiments
View Full Abstract
We investigate the quantum properties of heavy-fermion pairs, such as $t\bar t$ or $\tau^+\tau^-$, produced in lepton-lepton collisions with polarised beams. Focusing on spin correlations, entanglement, Bell-inequality violation, and quantum-information--theoretic measures such as purity and magic, we analyse how beam polarisation shapes the structure of the spin-density matrix. We derive analytic expressions for a wide range of helicity configurations, including both Standard Model contributions and generic new-physics effects parametrised by scalar, vector, and tensor four-fermion operators within an effective field theory framework. We show that beam polarisation unlocks a substantially richer set of spin configurations and significantly enhances sensitivity to non-standard interactions. As a phenomenological application, we study $t\bar t$ production at a future linear collider and demonstrate that quantum observables provide a comprehensive and complementary probe of top-quark interactions and stronger constraints on the scale of new physics.
Non-invertible circuit complexity from fusion operations
This paper extends quantum circuit complexity theory to include non-invertible quantum gates that can transition between different quantum superselection sectors, combining continuous optimization within sectors with discrete jumps between sectors using fusion operations from topological quantum field theory.
Key Contributions
- Extension of Nielsen's geometric approach to circuit complexity to include non-invertible gates
- Formulation of sector-changing optimization as a weighted shortest-path problem on fusion graphs
- Integration of topological defect fusion operations into quantum circuit complexity theory
View Full Abstract
Modern understanding of symmetry in quantum field theory includes both invertible and non-invertible operations. Motivated by this, we extend Nielsen's geometric approach to quantum circuit complexity to incorporate non-invertible gates. These arise naturally from fusion of topological defects and allow transitions between superselection sectors. We realise fusion operations as completely positive, trace-preserving quantum channels. Including such gates makes the sector-changing optimisation problem discrete: it reduces to a weighted shortest-path problem on the fusion graph. Circuit complexity therefore combines continuous geometry within sectors with discrete sector jumps. We illustrate the framework in rational conformal field theories and briefly comment on an AdS$_3$ interpretation in which fusion-induced transitions correspond to geometry-changing boundary operations. A companion paper provides full derivations and extended examples.
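The sector-changing optimisation the authors describe reduces to a weighted shortest-path search on the fusion graph. A generic sketch of that reduction (the sector labels '1', 'eta', 'N' and the edge costs below are hypothetical placeholders, not taken from the paper):

```python
import heapq

def dijkstra(graph, src, dst):
    """Weighted shortest path; graph[u] = [(v, w), ...] with w >= 0."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Hypothetical fusion graph: nodes are superselection sectors, edge weights are
# the complexity costs assigned to the corresponding defect-fusion channels.
fusion_graph = {
    '1':   [('eta', 1.0), ('N', 2.5)],
    'eta': [('1', 1.0), ('N', 2.5)],
    'N':   [('1', 2.5), ('eta', 2.5)],
}
cost, path = dijkstra(fusion_graph, '1', 'N')
print(cost, path)   # -> 2.5 ['1', 'N']
```

The continuous Nielsen geometry then lives inside each node, while the discrete jumps are the graph edges.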
Reservoir-Engineered Refrigeration of a Superconducting Cavity with Double-Quantum-Dot Spin Qubits
This paper develops a theory for using double-quantum-dot spin qubits as engineered reservoirs to cool superconducting microwave cavities to millikelvin temperatures. The researchers show how to control the cooling process through gate voltages and identify optimal conditions for achieving temperatures below both the bath and quantum dot setpoint temperatures.
Key Contributions
- Developed analytically tractable theory for reservoir-engineered cavity refrigeration using double-quantum-dot spin qubits
- Demonstrated targeted millikelvin cooling of superconducting cavities below bath temperature through engineered reservoir control
- Identified cooling bounds, refrigeration valleys, and constraints from memory effects in realistic solid-state implementations
View Full Abstract
We present an analytically tractable theory of reservoir-engineered refrigeration of a superconducting microwave cavity and map it onto a realistic solid-state implementation based on gate-defined double-quantum-dot (DQD) spin qubits. Treating the DQD not as a spectroscopic element but as a tunable engineered reservoir, we show how gate control of populations, coherences, linewidths, and detuning defines an effective photon birth-death process with predictable detailed balance. This framework yields closed-form expressions for the cavity steady state, identifies cooling bounds and detuning-dependent refrigeration valleys, and clarifies when refrigeration can drive the cavity below both the bath temperature and the DQD setpoint. By distinguishing refreshed (collision-like) and persistent reservoir regimes, we show how memory effects, saturation, and dark-state formation constrain cooling in realistic devices, while collective bright-mode coupling in a two-dot configuration can enhance refrigeration subject to mismatch and dephasing, as confirmed by numerical Lindblad simulations demonstrating targeted millikelvin cavity cooling relevant for cryogenic circuit-QED architectures.
Toward Spectral Engineering of Squeezed Light in High-Gain PDC
This paper investigates how to engineer the spectral properties of squeezed light generated through parametric down-conversion by controlling the gain and dispersion characteristics of waveguides. The researchers demonstrate that different waveguide configurations exhibit distinct behaviors in spectral purity as gain increases, providing a pathway to optimize squeezed-light sources for specific quantum applications.
Key Contributions
- Demonstration of gain-dependent spectral evolution in both unapodized and apodized dispersion-engineered waveguides
- Development of Schmidt-mode analysis combined with group-velocity interpretation to explain dispersion-dependent behavior
- Establishment of design principles for jointly exploiting dispersion engineering and parametric gain to tailor squeezed-light spectral properties
View Full Abstract
We investigated the spectral properties of squeezed light generated via parametric down-conversion in the high-gain regime, considering both unapodized and apodized dispersion-engineered waveguides. The gain-dependent evolution of these states is examined starting from the low-gain regime, which includes both highly correlated and nearly uncorrelated cases. For the unapodized configuration, we observe a monotonic increase in spectral purity with gain, whereas the apodized configuration exhibits a nonmonotonic dependence, initially decreasing and then recovering at higher gain. By combining Schmidt-mode analysis with a group-velocity-based interpretation, we explain why different dispersion conditions exhibit distinct gain-dependent behavior, specifically that rapid purification occurs when the pump group velocity lies between those of the signal and idler. Our study shows that the evolution of spectral purity is governed primarily by the underlying dispersion of the waveguide. These results demonstrate that dispersion engineering and parametric gain can be jointly exploited to tailor the spectral-mode structure of squeezed-light sources, enabling their optimization for a broad range of quantum applications.
Stationary perturbation theory without sums over intermediate states: Supersymmetric Expansion Algorithm
This paper presents a new mathematical method called the supersymmetric expansion algorithm for calculating quantum mechanical perturbations. Instead of using traditional sums over intermediate quantum states, the method uses integrals weighted by probability densities, potentially making certain quantum mechanical calculations more efficient.
Key Contributions
- Development of supersymmetric expansion algorithm as alternative to Rayleigh-Schrödinger perturbation theory
- Elimination of sums over intermediate states in favor of integral-based calculations
View Full Abstract
In this work we show that results of Rayleigh-Schrödinger perturbation theory can be easily obtained using the recently proposed supersymmetric expansion algorithm. Our formalism avoids the sums over intermediate states and yields corrections to the energy and eigenstates directly in terms of integrals weighted by the probability densities for the edge states of the involved supersymmetric Hamiltonians.
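To see the flavour of a sum-free correction: already at first order, the standard Rayleigh-Schrödinger energy shift is an integral weighted by a probability density. The sketch below (a harmonic oscillator with a quartic perturbation, chosen purely for illustration, not the supersymmetric algorithm itself) evaluates it numerically:

```python
import numpy as np

lam = 0.1
x = np.linspace(-8.0, 8.0, 20001)
dx = x[1] - x[0]
psi0 = np.pi ** -0.25 * np.exp(-x**2 / 2)   # HO ground state, hbar = m = omega = 1
# First-order correction <psi0| V |psi0> for V = lam * x^4, written as a
# probability-density-weighted integral with no sum over intermediate states:
E1 = np.sum(psi0**2 * lam * x**4) * dx
print(E1)   # exact analytic value is 3*lam/4 = 0.075
```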
A measurement-based protocol for the generation of delocalised quantum states of a mechanical system
This paper proposes a method to create non-classical, spatially spread-out quantum states of mechanical oscillators by using optical measurements in cavity optomechanics systems. The researchers develop protocols that use light detection to herald the creation of these exotic mechanical quantum states, which could be useful for ultra-precise sensors.
Key Contributions
- Development of measurement-based protocol for generating non-Gaussian mechanical quantum states using photodetection
- Comparison of blue-detuned pulsed and continuous-wave schemes with analysis of heralding rates and temperature robustness
View Full Abstract
Non-Gaussian mechanical states are a key resource for quantum-enhanced sensing and tests of macroscopic quantum physics. We propose a measurement-based protocol to herald delocalized, nonclassical states of a mechanical oscillator in cavity optomechanics by conditioning on Geiger photodetection of the optical output. We analyse under which conditions Stokes-induced optomechanical entanglement gives rise to mechanical Wigner function negativity upon detection. We develop and compare a blue-detuned pulsed scheme and a continuous-wave steady-state scheme employing temporal-mode filtering, and we quantify heralding rates and robustness to finite temperature under realistic detection efficiencies.
Overcoming the No-Go Theorem Yields a Rich Dissipative Phase Diagram in the Open Quantum Rabi Model
This paper studies how adding a specific mathematical term (A²) to models of light-matter interaction creates richer phase behaviors in open quantum systems, overcoming previous theoretical limitations and enabling new types of quantum phase transitions with potential experimental applications.
Key Contributions
- Demonstrates that anisotropy provides a mechanism to overcome no-go theorems in dissipative quantum systems
- Shows that the A² term creates a richer phase diagram with normal, superradiant, and bistable phases intersecting at tricritical points
- Identifies that A² term fundamentally alters photon-number fluctuation scaling near critical-line intersections
View Full Abstract
The open quantum Rabi model is studied in this work, with the explicit $\mathbf{A}^{2}$ term incorporated as required by the Thomas-Reiche-Kuhn sum rule. It is shown that anisotropy provides a generic and robust mechanism for overcoming the no-go theorem in dissipative quantum systems, thereby establishing a genuine platform for observing dissipative phase transitions. The inclusion of the $\mathbf{A}^{2}$ term yields a significantly richer and asymmetric steady-state phase diagram, consisting of normal, superradiant, and bistable phases that intersect at tricritical points, while isolated bistable phases also emerge and the number of tricritical points is reduced. Notably, it is near the intersection of the two critical-line branches enclosing the superradiant phases, rather than at the tricritical points, that the $\mathbf{A}^{2}$ term fundamentally alters the scaling of photon-number fluctuations. Given the inherent role of the $\mathbf{A}^{2}$ term in light-matter interactions, our findings open a realistic route toward the experimental investigation and dynamical control of nonequilibrium critical phenomena in practical open quantum platforms.
Efficient State Preparation for Quantum Machine Learning
This paper presents a method for efficiently encoding classical data into quantum states for machine learning applications using Matrix Product State representations. The authors show that their low-depth approximate encoding maintains classification accuracy while providing increased robustness against adversarial attacks, demonstrated on MNIST and Fashion-MNIST datasets.
Key Contributions
- Development of Matrix Product State-based quantum circuit construction for efficient classical data encoding
- Demonstration of adversarially robust variational quantum classifiers with maintained accuracy using low-depth approximate encodings
View Full Abstract
One of the key considerations in the development of Quantum Machine Learning (QML) protocols is the encoding of classical data onto a quantum device. In this chapter we introduce the Matrix Product State representation of quantum systems and show how it may be used to construct circuits which encode a desired state. Putting this in the context of QML, we show how this process may be modified to give a low-depth approximate encoding and, crucially, that this encoding does not hinder classification accuracy and indeed exhibits increased robustness against classical adversarial attacks. This is illustrated by demonstrations of adversarially robust variational quantum classifiers for the MNIST and FMNIST datasets, as well as a small-scale experimental demonstration on a superconducting quantum device.
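The MPS-to-circuit pipeline starts from a sequential-SVD decomposition of the state vector, with the bond dimension chi trading depth against accuracy. A minimal sketch of that decomposition (the standard MPS construction, not the authors' specific circuit compiler):

```python
import numpy as np

def to_mps(psi, n, chi):
    """Left-to-right SVD sweep turning a 2**n state vector into an MPS,
    truncating each bond to at most chi singular values."""
    tensors, m = [], psi.reshape(1, -1)
    for _ in range(n - 1):
        m = m.reshape(m.shape[0] * 2, -1)
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        k = min(chi, len(s))
        tensors.append(u[:, :k].reshape(-1, 2, k))
        m = s[:k, None] * vh[:k]
    tensors.append(m.reshape(-1, 2, 1))
    return tensors

def from_mps(tensors):
    m = tensors[0]
    for t in tensors[1:]:
        m = np.tensordot(m, t, axes=([-1], [0]))
    return m.reshape(-1)

rng = np.random.default_rng(0)
n = 6
psi = rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

exact = from_mps(to_mps(psi, n, chi=8))      # full bond dimension: lossless
approx = from_mps(to_mps(psi, n, chi=4))     # truncated: shallower circuit
fidelity = abs(psi @ approx) / np.linalg.norm(approx)
print(fidelity)                              # < 1 for a generic (volume-law) state
```

The chapter's point is that for structured data the fidelity loss at low chi is small, while the resulting low-depth encoding is more robust to adversarial perturbations.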
Herzberg-Teller coupling in coherent multidimensional spectroscopy: analytical response functions for multilevel systems
This paper develops analytical mathematical expressions to describe how molecules interact with light in advanced spectroscopy experiments, specifically accounting for non-standard coupling effects (Herzberg-Teller coupling) that create shifted spectral patterns. The work provides a general theoretical framework for interpreting complex molecular spectroscopy data.
Key Contributions
- Analytical expressions for multidimensional nonlinear response functions with Herzberg-Teller coupling
- General framework applicable to arbitrary number of electronic states and response function orders
- Theoretical explanation for spectral replica formation in non-Condon systems
View Full Abstract
Coherent multidimensional spectroscopy enables detailed investigations of vibronic effects in molecular and solid-state systems. We present explicit analytical expressions for multidimensional nonlinear response functions in the presence of Herzberg-Teller (non-Condon) coupling, within the displaced harmonic oscillator model. The formulation applies to electronic systems with an arbitrary number N of electronic states and to response functions of arbitrary order M in the light-matter interaction. We show that Herzberg-Teller coupling introduces additional oscillatory factors in the time-domain response functions, leading, upon Fourier transformation, to replicas of the Franck-Condon multidimensional spectra shifted by integer multiples of the vibrational frequencies. The present results provide a general analytical framework for the interpretation of non-Condon effects in coherent multidimensional spectroscopies.
Eigenstate Thermalization and Spectral Imprints of the Hamiltonian in Local Observables
This paper studies how isolated quantum systems reach thermal equilibrium by analyzing the transition from ordered to chaotic behavior in a spin chain model. The researchers show that local measurements can reveal signatures of the underlying quantum chaos, even when the system is only partially chaotic.
Key Contributions
- Established direct correspondence between Hamiltonian spectral correlations and local observables as signature of ergodicity breaking
- Introduced submatrix-based framework for analyzing local observables in energy eigenbasis that captures both short-range and long-range spectral features
View Full Abstract
The Eigenstate Thermalization Hypothesis explains thermalization in isolated quantum systems through the statistical properties of observables in the energy eigenbasis. We investigate the crossover from integrability to chaos in the spin-$1/2$ XXZ chain, establishing a direct correspondence between the spectral correlations of the Hamiltonian and local observables expressed in the energy eigenbasis as a signature of ergodicity breaking. By introducing a local perturbation that drives the system from integrability to chaos, we track the standard ETH indicators and the eigenstate entanglement entropy. We introduce a submatrix-based framework for analyzing local observables in the energy eigenbasis. By extracting real-symmetric blocks along the diagonal of the local observables represented in eigenbasis, we show that these submatrices exhibit both the short-range and long-range spectral features of the Hamiltonian. Remarkably, this correspondence persists even in a partially ergodic regime, indicating that the emergence of chaos is already encoded locally within the observables' matrix structure and that small blocks are sufficient to capture the underlying spectral correlations.
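A standard diagnostic for the integrability-to-chaos crossover discussed above is the mean level-spacing ratio, which separates Wigner-Dyson from Poisson statistics without unfolding the spectrum. A generic sketch on synthetic spectra (illustrating the diagnostic only, not the paper's XXZ data or submatrix framework):

```python
import numpy as np

def mean_r(E):
    """Mean consecutive-spacing ratio <r>; no spectral unfolding required."""
    s = np.diff(np.sort(E))
    return (np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])).mean()

rng = np.random.default_rng(1)
A = rng.normal(size=(1000, 1000))
r_goe = mean_r(np.linalg.eigvalsh((A + A.T) / np.sqrt(2)))  # chaotic: GOE
r_poi = mean_r(np.sort(rng.uniform(size=1000)))             # integrable-like: Poisson
print(r_goe, r_poi)   # near the known values ~0.53 (GOE) and ~0.39 (Poisson)
```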
A game-theoretic probability approach to loopholes in CHSH experiments
This paper develops a new game-theoretic approach to analyze Bell test experiments that closes both locality and freedom-of-choice loopholes simultaneously. The authors prove that nature cannot maintain both proper CHSH correlations and independence between measurement settings and hidden variables, providing a robust framework for interpreting quantum nonlocality without assuming underlying probability spaces.
Key Contributions
- Novel game-theoretic probability framework for analyzing CHSH experiments without assuming underlying probability spaces
- Proof that Nature cannot simultaneously satisfy CHSH correlations and measurement independence using capital processes
- Operational strategy for closing both locality and freedom-of-choice loopholes in Bell tests
View Full Abstract
We study the CHSH inequality from an informational, timing-sensitive viewpoint using game-theoretic probability, which avoids assuming an underlying probability space. The locality loophole and the measurement-dependence (``freedom-of-choice'') loophole are reformulated as structural constraints in a sequential hidden-variable game between Scientists and Nature. We construct a loopholes-closed game with capital processes that test (i) convergence of empirical conditional frequencies to the CHSH correlations and (ii) the absence of systematic correlations between measurement settings and Nature's hidden-variable assignments, and prove that Nature cannot satisfy both simultaneously: at least one capital process must diverge. This yields an operational winning strategy for Scientists and a game-theoretic probabilistic interpretation of experimentally observed CHSH violations.
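For reference, the CHSH correlations that the capital processes test against are straightforward to compute quantum mechanically. The sketch below evaluates S at the Tsirelson-bound settings for a Bell state (a textbook calculation, not the game-theoretic construction itself):

```python
import numpy as np

def corr(a, b, rho):
    """E(a, b) for spin measurements at angles a, b in the x-z plane."""
    def m(t):  # cos(t)*sigma_z + sin(t)*sigma_x
        return np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])
    return np.trace(rho @ np.kron(m(a), m(b))).real

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = corr(a0, b0, rho) + corr(a0, b1, rho) + corr(a1, b0, rho) - corr(a1, b1, rho)
print(S)   # -> 2*sqrt(2) ~ 2.828, exceeding the classical CHSH bound of 2
```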
A Posteriori Certification Framework for Generalized Quantum Arimoto-Blahut Algorithms
This paper develops an improved algorithm for quantum optimization problems that can verify its own correctness after running, rather than requiring strict conditions upfront. The researchers apply this to efficiently compute quantum relative entropy of channels, which measures how distinguishable quantum processes are from each other.
Key Contributions
- Introduction of a posteriori certification framework for quantum Arimoto-Blahut algorithms with weaker convergence conditions
- Development of certified iterative scheme for computing quantum relative entropy of channels with improved scalability over SDP approaches
View Full Abstract
The generalized quantum Arimoto-Blahut (QAB) algorithm is a powerful derivative-free iterative method in quantum information theory. A key obstacle to its broader use is that existing convergence guarantees typically rely on analytical conditions that are either overly restrictive or difficult to verify for concrete problems. We address this issue by introducing an a posteriori certification viewpoint: instead of requiring fully a priori verifiable assumptions, we provide convergence and error guarantees that can be validated directly from the iterates produced by the algorithm. Specifically, we prove a generalized global convergence theorem showing that, under convexity and a substantially weaker numerically verifiable condition, the QAB iteration converges to the global minimizer. This theorem yields a practical certification procedure: by checking explicit inequalities along the computed trajectory, one can certify global optimality and bound the suboptimality of the obtained value. As an application, we develop a certified iterative scheme for computing the quantum relative entropy of channels, a fundamental measure of distinguishability in quantum dynamics. This quantity is notoriously challenging to evaluate numerically: gradient-based methods are impeded by the complexity of matrix functions such as square roots and logarithms, while recent semidefinite programming approaches can become computationally and memory intensive at high precision. Our method avoids these bottlenecks by combining the QAB iteration with a posteriori certification, yielding an efficient and scalable algorithm. Numerical experiments demonstrate rapid convergence and improved scalability and adaptivity compared with SDP-based approaches.
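For readers unfamiliar with the iteration being generalized, here is a minimal classical Blahut-Arimoto loop, the commutative ancestor of the QAB method (the binary symmetric channel is an illustrative choice, not an example from the paper):

```python
import numpy as np

def blahut_arimoto(P, iters=300):
    """Capacity (bits) of a discrete channel P[x, y] = p(y|x) via Blahut-Arimoto."""
    n = P.shape[0]
    p = np.full(n, 1.0 / n)                      # input distribution, start uniform
    for _ in range(iters):
        q = p @ P                                # induced output distribution
        D = np.sum(P * np.log2(P / q), axis=1)   # per-input KL divergence to q
        p = p * 2.0 ** D                         # multiplicative (derivative-free) update
        p /= p.sum()
    q = p @ P
    return float(np.sum(p * np.sum(P * np.log2(P / q), axis=1)))

eps = 0.1
bsc = np.array([[1 - eps, eps], [eps, 1 - eps]])
C = blahut_arimoto(bsc)   # = 1 - H2(0.1), the binary symmetric channel capacity
```

The a posteriori idea in the paper amounts to checking verifiable inequalities along the iterates of the quantum analogue of this loop, rather than assuming convergence conditions up front.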
Relaxation Process During Complex Time Evolution In Two-Dimensional Integrable and Chaotic CFTs
This paper studies how quantum states evolve under complex time (combining real and imaginary time evolution) in two-dimensional conformal field theories, finding that the relaxation behavior depends on whether the spatial dimension is compact or non-compact. The authors connect these results to black hole physics through holographic correspondence.
Key Contributions
- Demonstrated that complex time evolution in spatially-compact 2d CFTs drives subsystems to primary states with matching conformal dimensions
- Established connection between non-unitary quantum evolution and black hole relaxation processes through holographic duality
View Full Abstract
We investigate the complex time evolution of a vacuum state with the insertion of a local primary operator in two-dimensional conformal field theories (2d CFTs). This complex time evolution can be considered as a composite process constructed from Lorentzian time evolution and a Euclidean evolution induced by a post-selected measurement. Our main finding is that in the spatially-compact system, this complex time evolution drives the state of the subsystems to those of the primary state with the same conformal dimensions as the inserted operator. In contrast to the compact system, the subsystems of the spatially non-compact system evolve to states that depend on the non-unitary process during a certain time regime. In holographic systems with a compact spatial direction, this process induced by a heavy local operator can correspond to the relaxation from a black hole with an inhomogeneous horizon to one with a uniform horizon, while in systems with a non-compact spatial direction, it can correspond to relaxation to a black hole with a horizon that depends on the non-unitary process.
Geometric Hybrid Poincaré Sphere with Variable Poles
This paper introduces a geometric hybrid Poincaré sphere framework that provides unified control over both spin and orbital angular momentum of photons independently, enabling creation of complex structured light fields with controllable polarization and intensity patterns.
Key Contributions
- Development of geometric hybrid Poincaré sphere framework for independent control of photon SAM and OAM
- Demonstration of systematic state-space description for coherent geometrical control of structured light fields
View Full Abstract
We propose a geometric hybrid Poincaré sphere (GHPS) as a unified geometrical framework for describing structured photon states with independently controllable spin angular momentum (SAM) and orbital angular momentum (OAM). Unlike the conventional higher-order Poincaré sphere, in which the SAM and OAM are intrinsically coupled through fixed basis states, the GHPS is constructed by defining its poles as direct products of arbitrary orthogonal bases on the Poincaré sphere (PS) and orbital Poincaré sphere (OPS) and by superposing these pole states. Using numerical simulations, we analyze representative GHPS states and show that the GHPS spherical coordinates govern the amplitude ratio and relative phase between the pole bases. This framework enables spatially inhomogeneous polarization distributions and intensity patterns, including nonseparable structures in which polarization and intensity are intrinsically intertwined, and provides a systematic state-space description for the coherent geometrical control of advanced structured light fields.
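The superposition structure described above can be sketched directly: pole states are tensor products of a polarization basis vector and an OAM basis vector, weighted by the GHPS spherical coordinates. A minimal sketch (the specific circular-polarization and l = ±1 bases below are illustrative choices, not the paper's):

```python
import numpy as np

# Polarization (PS) basis: right/left circular in the (H, V) representation
R = np.array([1, -1j]) / np.sqrt(2)
L = np.array([1,  1j]) / np.sqrt(2)

# Orbital (OPS) basis: OAM eigenstates l = +1, -1 as abstract unit vectors
oam_p, oam_m = np.array([1, 0]), np.array([0, 1])

# Poles of the hybrid sphere: direct products of one PS and one OPS basis state
north = np.kron(R, oam_p)
south = np.kron(L, oam_m)

def ghps_state(theta, phi):
    """Point (theta, phi) on the hybrid sphere: theta sets the amplitude ratio
    between the pole states, phi their relative phase."""
    return np.cos(theta / 2) * north + np.exp(1j * phi) * np.sin(theta / 2) * south

psi = ghps_state(np.pi / 3, np.pi / 4)
```

Because the poles are built from arbitrary orthogonal bases on the two spheres, SAM and OAM content can be chosen independently, which is the point of the construction.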
Scale Invariance Breaking and Discrete Phase Invariance in Few-Body Problems
This paper studies how continuous scale invariance in quantum mechanics can break down to discrete phase invariance, examining this phenomenon in several few-body quantum systems including problems with inverse-square potentials and magnetic flux configurations.
Key Contributions
- Identification of discrete phase invariance as a new type of symmetry breaking from continuous scale invariance
- Analysis of S-matrix pole structures showing circular distribution on Riemann sheets as manifestation of discrete phase invariance
- Extension of discrete phase invariance concepts to few-body problems including Aharonov-Bohm and multi-particle systems
View Full Abstract
Scale invariance in quantum mechanics can be broken in several ways. A well-known example is the breakdown of continuous scale invariance to discrete scale invariance, whose typical realization is the Efimov effect of three-body problems. Here we discuss yet another discrete symmetry to which continuous scale invariance can be broken: discrete phase invariance. We first revisit the one-body problem on the half line in the presence of an inverse-square potential, the simplest example of nontrivial scale-invariant quantum mechanics, and show that continuous scale invariance can be broken to discrete phase invariance in a small window of the coupling constant. We also show that discrete phase invariance manifests itself as circularly distributed simple poles on Riemann sheets of the S-matrix. We then present three examples of few-body problems that exhibit discrete phase invariance. These examples are the one-body Aharonov-Bohm problem, a two-body problem of nonidentical particles in two dimensions, and a three-body problem of nonidentical particles in one dimension, all of which contain a codimension-two "magnetic" flux in configuration spaces.
Ascertaining higher-order quantum correlations in high energy physics
This paper investigates higher-order quantum correlations in entangled hyperon-antihyperon particle systems that can be produced in high-energy physics experiments. The researchers develop new mathematical inequalities to detect these higher-order quantum correlations and show they can be experimentally observed in particle accelerator experiments like BESIII and Belle II.
Key Contributions
- Development of new Clauser-Horne inequalities for statistical cumulants and central moments to detect higher-order quantum correlations
- Demonstration that higher-order quantum correlations exist in hyperon-antihyperon systems and can be experimentally verified in high-energy physics experiments
View Full Abstract
Nonlocality is a peculiar feature of quantum theory and stands as an important quantum resource in applications. Yet only its linear aspect, viz. the first order in moments, has been explored through various inequalities. Noticing the vast higher-order regime left unexplored, in this study we investigate the higher-order quantum correlations in the entangled hyperon-antihyperon system, which may be generated massively in charmonium decays. A new type of Clauser-Horne inequality for statistical cumulants and central moments is formulated. We find that a significant violation of the third-order constraint, indicating the existence of higher-order correlations, exists in the hyperon-antihyperon system and can be observed in high energy physics experiments such as BESIII and Belle II. Notably, the violation manifests more strongly in higher-energy systems of the $\Lambda\bar{\Lambda}$ pair against the kinematic contamination of timelike events.
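The statistical objects involved are ordinary sample cumulants of measurement outcomes; at third order the cumulant coincides with the third central moment, as in this minimal sketch (the data are arbitrary placeholders, not hyperon measurement records):

```python
import numpy as np

def third_cumulant(x):
    """Third cumulant k3 of a sample; at order 3 it equals the central moment E[(X-mu)^3]."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 3)

outcomes = np.array([0.0, 0.0, 1.0])   # placeholder measurement record
k3 = third_cumulant(outcomes)          # = 2/27 for this sample
```

A symmetric outcome distribution has vanishing third cumulant, so a nonzero value already signals the kind of higher-order structure the paper's inequalities constrain.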
Transient fields in oblique scattering from an infinite planar dielectric interface -- a qubit lattice simulation
This paper uses a qubit lattice algorithm to simulate how electromagnetic pulses scatter when they hit a dielectric interface at an angle. The researchers found that while reflected pulses keep their Gaussian shape, transmitted pulses develop additional wave patterns that depend on the incident pulse width.
Key Contributions
- Development of qubit lattice algorithm simulation for electromagnetic scattering at dielectric interfaces
- Discovery that transmitted pulses exhibit Huygens-like wavefronts whose strength depends on incident pulse width
View Full Abstract
An initial value algorithm is utilized to examine the time dependent evolution of the electromagnetic fields arising from oblique scattering of bounded pulses from an infinite planar dielectric interface. Since the qubit lattice algorithm (QLA) is almost fully unitary, one finds excellent conservation of electromagnetic energy. Various Gaussian envelope pulses are considered in regimes where the incident angle is below that needed for total internal reflection. While the reflected pulse retains its overall Gaussian shape, the transmitted pulse exhibits a combination of a Gaussian envelope along with Huygens-like emitted wave fronts from the collision point of the initial pulse with the infinite dielectric interface. The strength of these Huygens wavefronts depends on the width of the incident pulse.
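Below the total-internal-reflection angle, the steady-state part of this scattering is fixed by Snell's law and the Fresnel coefficients, which is the baseline the transient simulation departs from. A minimal s-polarization check (the refractive indices are illustrative):

```python
import numpy as np

n1, n2 = 1.0, 1.5                  # illustrative indices; incidence into the denser medium
theta_i = np.deg2rad(30.0)
theta_t = np.arcsin(n1 * np.sin(theta_i) / n2)   # Snell's law

# Fresnel amplitude coefficients, s polarization
denom = n1 * np.cos(theta_i) + n2 * np.cos(theta_t)
r = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / denom
t = 2 * n1 * np.cos(theta_i) / denom

# Power reflectance and transmittance must sum to 1 (energy conservation)
R = r ** 2
T = (n2 * np.cos(theta_t)) / (n1 * np.cos(theta_i)) * t ** 2
```

The QLA's near-unitarity is the time-domain counterpart of the R + T = 1 check here.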
Quantum Latin squares of order $6m$ with all possible cardinalities
This paper studies quantum Latin squares, which are arrays of quantum state vectors that form orthonormal bases in rows and columns, and proves that for orders that are multiples of 6, nearly all possible cardinalities (numbers of distinct vectors) can be achieved. The work extends combinatorial mathematics into the quantum domain by analyzing the structure and properties of these quantum generalizations of classical Latin squares.
Key Contributions
- Proves existence of quantum Latin squares of order 6m with almost all possible cardinalities between 6m and 36m²
- Develops construction methods using sub-QLS(6) structures to achieve the cardinality results
View Full Abstract
A quantum Latin square of order $n$ (denoted as QLS$(n)$) is an $n\times n$ array whose entries are unit column vectors from the $n$-dimensional Hilbert space $\mathcal{H}_n$, such that each row and column forms an orthonormal basis. Two unit vectors $|u\rangle, |v\rangle\in \mathcal{H}_n$ are regarded as identical if there exists a real number $\theta$ such that $|u\rangle=e^{i\theta}|v\rangle$; otherwise, they are considered distinct. The cardinality $c$ of a QLS$(n)$ is the number of distinct vectors in the array. In this note, we use sub-QLS$(6)$ to prove that for any integer $m\geq 2$ and any $c\in [6m,36m^2]\setminus \{6m+1\}$, there is a QLS$(6m)$ with cardinality $c$.
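A classical Latin square embeds as a QLS with computational-basis vectors, which realizes the minimal cardinality c = n. A quick orthonormality check (order 3 for brevity; the paper works at order 6m):

```python
import numpy as np

# Classical Latin square of order 3: symbol Lsq[i][j] in each cell
Lsq = np.array([[0, 1, 2],
                [1, 2, 0],
                [2, 0, 1]])
n = 3
eye = np.eye(n)

# Embed: cell (i, j) holds the computational basis vector |Lsq[i][j]>
Q = eye[Lsq]          # shape (n, n, n); Q[i, j] is the unit vector in cell (i, j)

def is_orthonormal(vectors):
    G = vectors @ vectors.conj().T   # Gram matrix of the n cell vectors
    return np.allclose(G, np.eye(len(vectors)))

rows_ok = all(is_orthonormal(Q[i]) for i in range(n))
cols_ok = all(is_orthonormal(Q[:, j]) for j in range(n))
cardinality = len({tuple(v) for v in Q.reshape(-1, n)})   # distinct vectors: n
```

The paper's constructions replace basis vectors inside 6x6 sub-blocks with genuinely quantum superpositions to sweep the cardinality up toward 36m².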
Distributed Exact Quantum Amplitude Amplification Algorithm for Arbitrary Quantum States
This paper presents a distributed quantum amplitude amplification algorithm that splits quantum computations across multiple nodes to overcome hardware limitations in current quantum devices. The algorithm can amplify specific quantum states while using fewer quantum gates and achieving lower circuit depth compared to existing methods.
Key Contributions
- Development of DEQAAA algorithm that supports distributed quantum amplitude amplification across 2 to n nodes
- Demonstrated over 97% reduction in quantum gate count and circuit depth compared to existing QAAA and EQAAA algorithms
- Verification of exact amplitude amplification for arbitrary quantum states with multiple targets using quantum simulation
View Full Abstract
In the noisy intermediate-scale quantum (NISQ) era, distributed quantum computation has garnered considerable interest, as it overcomes the physical limitations of single-device architectures and enables scalable quantum information processing. In this study, we focus on the challenge of achieving exact amplitude amplification for quantum states with arbitrary amplitude distributions and subsequently propose a Distributed Exact Quantum Amplitude Amplification Algorithm (DEQAAA). Specifically, (1) it supports partitioning across any number of nodes $t$ within the range $2 \leq t \leq n$; (2) the maximum qubit count required for any single node is expressed as $\max \left(n_0,n_1,\dots,n_{t-1} \right) $, where $n_j$ represents the number of qubits at the $j$-th node, with $\sum_{j=0}^{t-1} n_j =n$; (3) it can realize exact amplitude amplification for multiple targets of a quantum state with arbitrary amplitude distributions; (4) we verify the effectiveness of DEQAAA by resolving a specific exact amplitude amplification task involving two targets (8 and 14 in decimal) via MindSpore Quantum, a quantum simulation software, with tests conducted on 4-qubit, 6-qubit, 8-qubit and 10-qubit systems. Notably, through the decomposition of $C^{n-1}PS$ gates, DEQAAA demonstrates remarkable advantages in both quantum gate count and circuit depth as the qubit number scales, thereby boosting its noise resilience. In the 10-qubit scenario, for instance, it achieves a reduction of over $97\%$ in both indicators compared to QAAA and EQAAA, underscoring its outstanding resource-saving performance.
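The building block being distributed here is amplitude amplification. A minimal statevector sketch of the standard (non-exact) Grover form on 3 qubits (the marked index is an arbitrary choice; the exact variants in the paper additionally tune the reflection phases to reach unit success probability):

```python
import numpy as np

n_states, marked = 8, 5              # 3 qubits, one marked basis state (arbitrary choice)
psi = np.full(n_states, 1 / np.sqrt(n_states))   # uniform superposition

for _ in range(2):                   # floor(pi/4 * sqrt(8)) = 2 iterations
    psi[marked] *= -1                # oracle: flip the sign of the marked amplitude
    psi = 2 * psi.mean() - psi       # diffusion: inversion about the mean

p_success = psi[marked] ** 2         # = 121/128 ≈ 0.945; exact variants reach 1
```

Distributing this across nodes means splitting the oracle and diffusion over subregisters, which is where the paper's decomposition of the multi-controlled gates pays off in gate count and depth.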
A saturation-absorption rubidium magnetometer with multilevel optical Bloch-equation modeling for intermediate-to-high fields
This paper presents SASHMAG, a rubidium-based atomic magnetometer that can precisely measure strong magnetic fields (above 0.2 Tesla) using saturated absorption spectroscopy. The researchers developed sophisticated theoretical models based on optical Bloch equations to interpret the complex spectra and demonstrate magnetic field measurements with high precision, laying groundwork for machine learning-enhanced magnetometry applications.
Key Contributions
- Development of SASHMAG magnetometer for high-field precision measurements using Rb-87 atoms
- Comprehensive multilevel optical Bloch-equation modeling in the hyperfine Paschen-Back regime
- Physics-constrained optimization routine for magnetic field estimation with ±0.0017 T precision
- Foundation for ML-enhanced autonomous magnetometry applications
View Full Abstract
We present SASHMAG (Saturated Absorption Spectroscopy High-field MAGnetometer), an atomic sensor designed for precision magnetic-field measurements in the intermediate-to-high field regime ($>0.2\,\text{T}$) using Rubidium-87 ($^{87}$Rb). The sensor operates in the hyperfine Paschen-Back regime, where the hyperfine and Zeeman interactions decouple, and utilizes a counter-propagating pump-probe configuration in Faraday geometry to resolve isolated, Doppler-free Zeeman transitions. To interpret the resulting spectra in this strongly field-dependent regime, we developed a comprehensive multilevel optical Bloch-equation model solved explicitly in the uncoupled $\ket{m_I, m_J}$ basis, capturing state mixing and nonlinear saturation dynamics. This model reproduces measured spectra at sub-Doppler resolution and is consistent with analytical expectations for power broadening and thermal Doppler scaling. Magnetic field estimation is performed using a physics-constrained optimization routine that infers the magnetic field by minimizing the residual between experimentally extracted line centers and calculated transition frequencies from the field-dependent Hamiltonian. We demonstrate magnetic field retrieval from $0.2\,\text{T}$ to $0.4\,\text{T}$ with a precision of $\pm 0.0017\,\text{T}$. Furthermore, the validated simulation establishes a foundation for generating synthetic training datasets, paving the way for autonomous, Machine Learning-enhanced magnetometry in applications ranging from MRI to fusion reactors.
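In the hyperfine Paschen-Back regime the ground-state energies are, to leading order, E(m_I, m_J) ≈ A m_I m_J + g_J μ_B B m_J. A rough numeric sketch for the 87Rb ground state (constants are textbook values to a few digits; the nuclear Zeeman term and excited-state structure are neglected, so this is an illustration, not the paper's Hamiltonian):

```python
MU_B_OVER_H = 13.996e9   # Bohr magneton over Planck constant, Hz/T
A_HFS = 3.417e9          # 87Rb 5S1/2 hyperfine constant A/h, Hz
G_J = 2.002              # electronic g-factor of the ground state

def level_hz(m_I, m_J, B):
    """Leading-order Paschen-Back energy of |m_I, m_J> in Hz (nuclear Zeeman neglected)."""
    return A_HFS * m_I * m_J + G_J * MU_B_OVER_H * B * m_J

B = 0.3                  # tesla, inside the paper's 0.2-0.4 T retrieval range
# Electron-spin-flip splitting at fixed m_I = 3/2: 1.5*A + g_J*mu_B*B
f_split = level_hz(1.5, +0.5, B) - level_hz(1.5, -0.5, B)
```

Inverting field-dependent frequencies of this kind against measured line centers is the essence of the paper's physics-constrained field estimation.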
Learning Volterra Kernels for Non-Markovian Open Quantum Systems
This paper develops a machine learning approach to identify and model the dynamics of open quantum systems that interact with their environment in complex, memory-dependent ways. The method uses mathematical tools from the Nakajima-Zwanzig formalism to convert the quantum dynamics into a learnable format using Volterra integral equations.
Key Contributions
- Development of data-driven framework for learning non-Markovian quantum dynamics
- Mathematical formulation using Volterra integro-differential equations with operator-valued memory kernels
- Use of Padé approximants for correlation function approximation in constrained optimization
View Full Abstract
We develop a data-driven framework for identifying non-Markovian dynamical equations of motion for open quantum systems. Starting from the Nakajima--Zwanzig formalism, we vectorize the reduced density matrix into a four-dimensional state vector and cast the dynamics as a Volterra integro-differential equation with an operator-valued memory kernel. The learning task is then formulated as a constrained optimization problem over the admissible operator space, where correlation functions are approximated by rational functions using Padé approximants. We establish well-posedness of the learning problem.
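The Volterra structure is straightforward to discretize. A scalar toy version (the kernel and state are placeholders, not the learned operator-valued objects) whose memory integral uses the trapezoid rule, checked against the equivalent closed-form ODE solution:

```python
import numpy as np

# Toy Volterra integro-differential equation: x'(t) = -∫_0^t e^{-(t-s)} x(s) ds
# Differentiating once gives x'' + x' + x = 0 with x(0)=1, x'(0)=0, so it is checkable.
dt, n = 1e-3, 1001
t = np.arange(n) * dt
x = np.zeros(n)
x[0] = 1.0

for k in range(n - 1):
    if k == 0:
        mem = 0.0
    else:
        g = -np.exp(-(t[k] - t[:k + 1])) * x[:k + 1]          # kernel times history
        mem = dt * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)     # trapezoid rule
    x[k + 1] = x[k] + dt * mem                                # forward Euler step

# Exact solution at t=1: e^{-1/2} (cos(sqrt(3)/2) + sin(sqrt(3)/2)/sqrt(3)) ≈ 0.6597
```

In the paper's setting the scalar kernel is replaced by a learned 4x4 operator-valued kernel acting on the vectorized density matrix, but the numerical skeleton is the same.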
Displacement-Squeeze receiver for BPSK displaced squeezed vacuum states surpassing the coherent-states Helstrom bound under imperfect conditions
This paper proposes a displacement-squeeze receiver for discriminating between binary phase-shift keyed signals using squeezed vacuum states, achieving better error rates than coherent state methods. The receiver uses displacement and squeezing operations followed by photon counting to improve signal discrimination in quantum communication systems.
Key Contributions
- Novel displacement-squeeze receiver design that surpasses coherent-state Helstrom bounds for BPSK discrimination
- Comprehensive analysis of receiver performance under realistic imperfections including detector inefficiency, dark counts, and thermal noise
View Full Abstract
We propose a displacement-squeeze receiver (DSR) for discriminating BPSK displaced squeezed vacuum states (S-BPSK). The receiver applies a displacement followed by a squeezing operation with the squeezing axis rotated by $\frac{\pi}{2}$, and performs photon-number-resolving detection with a MAP threshold decision. This processing effectively increases the distinguishability of the input states by elongating their distance in phase space and reducing their population overlap in the Fock basis. We show that for all signal energies $N$, $P_\text{err}^\text{DSR} \in \left[P_\text{HB}^\text{DSS}, 2P_\text{HB}^\text{DSS}\right]$, under equal priors and ideal conditions. In the low-energy regime, DSR beats the S-BPSK SQL at $N \approx 0.3$ and drops below the coherent-state BPSK (C-BPSK) Helstrom bound at $N \approx 0.4$, reaching $P_\text{err}^\text{DSR} < 1\%$ near $N \approx 0.6$. Finally, we quantify performance under non-unit efficiency and dark counts, phase diffusion, and receiver thermal noise, with MAP threshold adaptation providing robustness across these nonidealities.
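The coherent-state benchmark the receiver is compared against is the two-state Helstrom bound. A minimal sketch for equiprobable C-BPSK states with mean photon number N (a standard textbook formula, not code from the paper):

```python
import numpy as np

def helstrom_bpsk_coherent(N):
    """Minimum error probability for equiprobable coherent states |alpha>, |-alpha>
    with mean photon number N = |alpha|^2; overlap |<alpha|-alpha>|^2 = exp(-4N)."""
    return 0.5 * (1 - np.sqrt(1 - np.exp(-4 * N)))

p_04 = helstrom_bpsk_coherent(0.4)   # ≈ 0.053, the benchmark the DSR crosses near N ≈ 0.4
```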
Mechanistic principles of exciton-polariton relaxation
This paper investigates how exciton-polaritons (light-matter hybrid particles) relax from excited states, revealing a two-step process involving phonon interactions and discovering that material thickness suppresses certain scattering processes due to spatial delocalization effects.
Key Contributions
- Identified two-step mechanism for upper-to-lower polariton relaxation via phonon interactions
- Discovered that finite material thickness suppresses Fröhlich scattering due to phonon fluctuation synchronization
- Derived analytical expressions relating material thickness to relaxation rate constants
View Full Abstract
Exciton-polaritons are light-matter hybrid quasi-particles that have emerged as a flexible platform for developing quantum technologies and engineering material properties. However, the fundamental mechanistic principles that govern their dynamics and relaxation remain elusive. In this work, we provide the microscopic mechanistic understanding of the exciton-polariton relaxation process that follows from an excitation in the upper polariton. Using both mixed quantum-classical simulations and analytical analysis, we reveal that phonon-induced upper-to-lower polariton relaxation proceeds via two steps: the first step is a vertical inter-band transition from the upper to the lower polariton, which is followed by a second step that is a phonon-induced Fröhlich scattering within the lower polariton. We find that in materials of finite thickness (which include filled cavities), phonon-induced polaritonic intraband Fröhlich scattering is significantly suppressed. We show that the microscopic origin of this suppression is phonon-fluctuations synchronization (or self-averaging) due to the polaritonic spatial delocalization in the quantization direction. Finally, we show that the same phonon fluctuation-synchronization effect plays a central role across polaritonic relaxation pathways, and we derive simple analytical expressions that relate a material's finite thickness to the corresponding relaxation rate constants.
Casimir effect with dielectric matter in salted water and implications at the cell scale
This paper studies the Casimir effect (quantum electromagnetic forces between objects) in salted water, finding that universal electromagnetic fluctuations create longer-range forces than previously expected. The authors argue these quantum forces could be significant at biological scales, particularly for structures like actin fibers in cells.
Key Contributions
- Identification of universal electromagnetic fluctuation contributions to Casimir forces in salted water
- Demonstration that these quantum forces have longer range than previously thought and may be relevant at cellular scales
View Full Abstract
The Casimir interaction in salted water contains a universal contribution of electromagnetic fluctuations that gives it a longer range than previously thought. The universal contribution dominates non-universal ones at the distances relevant for actin fibers inside the cell. We discuss universal and non-universal contributions with a model mimicking biological matter. We also show that the universal Casimir effect should have important implications at the cell scale.
Impact of control signal phase noise on qubit fidelity
This paper studies how phase noise in control signals degrades qubit fidelity during quantum operations. The researchers use numerical simulations with Qiskit-Dynamics to generate realistic phase noise and measure its impact on qubit state evolution, providing insights into how different spectral components of phase noise affect quantum control quality.
Key Contributions
- Development of simulation methodology to assess phase noise impact on qubit fidelity using Qiskit-Dynamics
- Analysis of how different spectral components of phase noise contribute to quantum control degradation
View Full Abstract
As qubit decoherence times are increased and readout technologies are improved, nonidealities in the drive signals, such as phase noise, will represent a growing limitation to the fidelity achievable at the end of complex control pulse sequences. Here we study the impact on fidelity of phase noise affecting reference oscillators with the help of numerical simulations, which allow us to directly take into account the interaction between the phase fluctuations in the control signals and the evolution of the qubit state. Our method is based on the generation of phase noise realizations consistent with a given power spectral density, that are then applied to the pulse carrier in simulations, with Qiskit-Dynamics, of the qubit temporal evolution. By comparing the final state obtained at the end of a noisy pulse sequence with that in the ideal case and averaging over multiple noise realizations, we estimate the resulting degradation in fidelity, and exploiting an approximate analytical representation of a carrier affected by phase fluctuations, we discuss the contributions of the different spectral components of phase noise.
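Generating phase-noise realizations consistent with a target PSD can be done by summing Fourier components with deterministic amplitudes sqrt(2 S(f) df) and random phases (the PSD shape, sample rate, and record length below are illustrative placeholders, not the paper's parameters). With this construction, Parseval's theorem makes each realization's variance match the integrated PSD exactly:

```python
import numpy as np

fs, N = 1.0e6, 1024                 # sample rate (Hz) and record length: illustrative
df = fs / N
f = np.arange(1, N // 2) * df       # positive FFT-grid frequencies (DC, Nyquist excluded)
S = 1.0e4 / f ** 2                  # target one-sided phase-noise PSD, rad^2/Hz (placeholder)

rng = np.random.default_rng(42)
theta = rng.uniform(0, 2 * np.pi, f.size)   # random spectral phases
t = np.arange(N) / fs

# Sum of cosines: amplitude sqrt(2 S df) carries the PSD, theta carries the randomness
phi = (np.sqrt(2 * S * df)[:, None]
       * np.cos(2 * np.pi * f[:, None] * t[None, :] + theta[:, None])).sum(axis=0)

var_target = np.sum(S * df)         # integrated PSD = the realization's variance
```

A realization like `phi` would then modulate the pulse carrier before the simulated qubit evolution.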
Quantifying the Relationship Between Strain and Bandgap in Thin Ga$_2$Se$_2$
This paper studies how mechanical strain affects the electronic bandgap in thin layers of gallium selenide (Ga2Se2), using experimental measurements and computer simulations to create a predictive model. The researchers demonstrate they can controllably tune the material's optical properties by applying different types of strain.
Key Contributions
- Development of strain gauge factors relating uniaxial and biaxial strain to bandgap shifts in Ga2Se2
- Framework for deterministic creation of tailored bandgap profiles through controlled strain engineering
View Full Abstract
We present a rigorous analysis that combines theory, simulation, and experimental measurements to quantify the relationship between strain and bandgap in two dimensional gallium selenide (Ga$_2$Se$_2$). Experimentally, we transfer thin Ga$_2$Se$_2$ flakes onto patterned substrates to deterministically induce multiaxial localized strain. We quantify the local strain using a combination of atomic force microscopy (AFM) measurements and COMSOL Multiphysics simulation. We then experimentally map the strain-induced bandgap shifts using high-resolution hyperspectral PL imaging to generate a robust and statistically significant dataset. We systematically fit this data to extract gauge factors that relate the bandgap shift to the local uniaxial and biaxial strain. We then compute the uniaxial and biaxial strain gauge factors via density functional theory (DFT) and find excellent agreement with the experimentally-determined values. Finally, we show that a simple model that computes bandgap shifts from the local uniaxial and biaxial strain predicts the observed multiaxial bandgap shift with less than 10\% error. The combined results provide a framework for deterministic realization of tailored bandgap profiles induced by controlled strain applied to Ga$_2$Se$_2$, with implications for the future realization of localized quantum emitters for quantum photonic applications.
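The gauge-factor extraction step reduces to a linear fit of the form ΔE = g_u ε_u + g_b ε_b. A synthetic-data sketch (the gauge-factor values and strain ranges are made up for illustration, not the paper's measured results):

```python
import numpy as np

g_uni, g_bi = -40.0e-3, -120.0e-3    # hypothetical gauge factors, eV per % strain
rng = np.random.default_rng(0)

eps_uni = rng.uniform(0, 1.0, 200)   # percent uniaxial strain at sampled map points
eps_bi = rng.uniform(0, 0.5, 200)    # percent biaxial strain
shift = g_uni * eps_uni + g_bi * eps_bi   # noiseless synthetic bandgap shifts, eV

# Least-squares recovery of the two gauge factors from (strain, shift) pairs
A = np.column_stack([eps_uni, eps_bi])
coef, *_ = np.linalg.lstsq(A, shift, rcond=None)
```

With real hyperspectral PL data the shifts carry noise, so the fit returns gauge factors with uncertainties rather than the exact inputs recovered here.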
Collective inhibition of light scattering from atoms into an optical cavity at a magic frequency
This paper reports the discovery of specific 'magic frequencies' where rubidium atoms stop scattering light into an optical cavity due to quantum interference effects. The researchers found two frequencies where different types of light scattering are suppressed, with one involving collective atom-photon coupling and another involving single-atom interference.
Key Contributions
- Discovery of magic frequency at -185 MHz where both Rayleigh and Raman scattering are suppressed due to collective quantum interference in strongly coupled atom-cavity system
- Identification of second magic frequency at -506 MHz where only Raman scattering is suppressed due to single-atom quantum interference
View Full Abstract
We report on the observation of a new magic frequency within the hyperfine structure of the D2 line of ${}^{87}$Rb atoms at which the scattering of light into a high-finesse cavity is suppressed by an interplay between quantum interference and the strong collective coupling of atoms to the cavity. Scattering from a cloud of laser-driven cold atoms into the cavity was measured in a polarization sensitive way. We have found that both the Rayleigh and Raman scattering processes into the near-resonant cavity modes are extinguished at 185 MHz below the F=2$\leftrightarrow$F'=3 transition frequency. This coincidence together with the shape of the observed spectral dip imply that the effect relies on a quantum interference in the polariton excitations of the strongly coupled combined atom-photon system. We have also demonstrated the existence of a magic frequency around -506 MHz, where only the Raman scattering is suppressed due to a quantum interference effect at the single-atom level.
Demonstration Of A Quantum Magnetometer Chip Based On Proprietary And Scalable 4H-Silicon Carbide Technology
This paper demonstrates a quantum magnetometer chip built using silicon carbide technology that can be manufactured at industrial scale. The device uses color centers in silicon carbide to detect magnetic fields with much better sensitivity than existing confocal microscopy methods.
Key Contributions
- Development of industrially scalable 4H-SiC quantum magnetometer chip with wafer-scale fabrication
- Integration of V2 silicon vacancy color centers into planar waveguides for improved optical efficiency
- Demonstration of 2-3 orders of magnitude better sensitivity compared to confocal methods
- Successful implementation of coherent control sequences (Rabi, Ramsey, Hahn-echo) on large ensembles of color centers
View Full Abstract
This work presents an industrially scalable, power-efficient and high-performance quantum magnetometer chip based on proprietary 4H-silicon carbide (SiC) technology, leveraging wafer-scale fabrication techniques to optimize V2 silicon vacancy color centers for highly reproducible, industry-grade fabrication with precise control of depth and density. The integration of these color center ensembles into a planar silicon carbide waveguide enables efficient excitation of a large ensemble and simplifies fluorescence extraction compared to standard confocal methods. We report continuous-wave (CW) optically detected magnetic resonance measurements, complemented by Rabi, Ramsey, and Hahn-echo sequences, which demonstrate coherent capabilities of the large embedded ensemble of V2 centers. Based on the data, our device exhibits sensor shot-noise limited sensitivities 2-3 orders of magnitude lower compared to more complex confocal techniques. Collectively, these advancements simplify the quantum sensor architecture, enhance sensitivity, and streamline optical excitation and collection, thereby paving the way for the development of next-generation SiC-quantum sensing technologies.
Plutarch: Toward Scalable Operational Parallelism on Racetrack-Shaped Trapped-Ion Processors
This paper presents Plutarch, a system to improve the execution efficiency of quantum programs on racetrack-shaped trapped-ion quantum processors by optimizing how quantum operations are scheduled and executed across different zones of the processor.
Key Contributions
- Development of Plutarch system with three optimization strategies for racetrack trapped-ion processors
- Discovery that increasing zones can counterintuitively degrade runtime performance due to ion circulation overhead
- Implementation of unitary decomposition, gate prioritization, and shortcut strategies to maximize zone utilization
View Full Abstract
A recent advancement in quantum computing shows a quantum advantage of certified randomness on the racetrack processor. This work investigates the execution efficiency of this architecture for general-purpose programs. We first explore the impact of increasing zones on runtime efficiency. Counterintuitively, our evaluations using variational programs reveal that expanding zones may degrade runtime performance under the existing scheduling policy. This degradation may be attributed to the increase in track length, which increases ion circulation overhead, offsetting the benefits of enhanced parallelism. To mitigate this, the proposed Plutarch exploits 3 strategies: (i) unitary decomposition and translation to maximize zone utilization, (ii) prioritizing the execution of nearby gates over ion circulation, and (iii) implementing shortcuts to provide the alternative path.
Emergent chiral Higgs mode in $π$-flux frustrated lattices
This paper investigates interacting bosons on a special two-dimensional lattice structure, mapping out different quantum phases and discovering a novel chiral Higgs mode that emerges at phase transitions. The work provides theoretical insights into strongly correlated quantum matter with broken time-reversal symmetry that could be realized in neutral-atom quantum simulators.
Key Contributions
- Discovery of chiral Higgs mode in dimerized BBH lattices with time-reversal symmetry breaking
- Complete phase diagram mapping including vortex superfluid, vortex Mott insulator, and featureless Mott insulator phases
- Theoretical framework using slave-boson description to analyze excitation spectrum across quantum phase transitions
View Full Abstract
Neutral-atom quantum simulators provide a powerful platform for realizing strongly correlated phases, enabling access to dynamical signatures of quasiparticles and symmetry breaking processes. Motivated by recent observations of quantum phases in flux-frustrated ladders with non-vanishing ground state currents, we investigate interacting bosons on the dimerized BBH lattice in two dimensions, originally introduced in the context of higher-order topology. After mapping out the phase diagram, which includes vortex superfluid (V-SF), vortex Mott insulator (V-MI), and featureless Mott insulator (MI) phases, we focus on the integer filling case. There, the MI/V-SF transition simultaneously breaks the $\mathbb Z_2^{T}$ and U(1) symmetries, where $\mathbb Z_2^{T}$ corresponds to time-reversal symmetry (TRS). Using a slave-boson description, we resolve the excitation spectrum across the transition and uncover a chiral Higgs mode whose mass softens at criticality, providing a dynamical hallmark of emergent chirality that we numerically probe via quench dynamics. Our results establish an experimentally realistic setting for probing unconventional TRS-broken phases and quasiparticles with intrinsic chirality in strongly interacting quantum matter.
Exploring Bell Nonlocality with Extremal Non-Signaling Boxes
This paper studies extremal non-signaling boxes, which are theoretical correlations that violate Bell inequalities more strongly than quantum mechanics allows. The researchers systematically generate these boxes for various scenarios and use them to explore fundamental questions about quantum nonlocality and communication limits.
Key Contributions
- Complete characterization of extremal non-signaling boxes for unexplored Bell scenarios
- Demonstration that two copies of any ENS box violate exclusivity and Specker's principles
- Minimal decomposition of magic square correlation in terms of ENS boxes
- Identification of minimal scenario where limited communication cannot simulate ENS boxes
View Full Abstract
Extremal non-signaling (ENS) boxes are correlations that correspond to vertices of the non-signaling polytope of a Bell scenario. Neither quantum theory nor any theory for ideal measurements allows for ENS boxes. That is, according to quantum theory, ENS boxes are nonphysical. Still, ENS boxes are crucial for addressing a number of problems in Bell nonlocality. Here, we obtain ENS boxes in arbitrary bipartite Bell scenarios and present the complete list of ENS boxes for several unexplored scenarios. Equipped with the boxes, we revisit several foundational questions. We find that already two copies of any ENS box violate the exclusivity (or local orthogonality) and Specker's principles. We provide the minimal decomposition of the magic square correlation - the simplest known perfect correlation in nature - in terms of ENS boxes. We identify the minimal scenario in which a dit of communication (with d < 6) is insufficient to simulate ENS boxes. Our results show that the ENS boxes approach leads to new results and opens new avenues for research.
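The canonical ENS box is the Popescu-Rohrlich (PR) box, the vertex of the CHSH non-signaling polytope. The short sketch below (an illustration, not code from the paper) verifies that the PR box respects no-signaling yet reaches the algebraic maximum CHSH value of 4, beyond Tsirelson's quantum bound of $2\sqrt{2} \approx 2.83$:

```python
import math

# The PR box: P(a,b|x,y) = 1/2 if a XOR b = x AND y, else 0.
def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# Two-outcome correlator E(x,y) = sum_{a,b} (-1)^(a XOR b) P(a,b|x,y).
def E(x, y):
    return sum((-1) ** (a ^ b) * pr_box(a, b, x, y)
               for a in (0, 1) for b in (0, 1))

# CHSH expression S = E(0,0) + E(0,1) + E(1,0) - E(1,1).
S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# Non-signaling check: Alice's marginal must not depend on Bob's setting y.
def alice_marginal(a, x, y):
    return sum(pr_box(a, b, x, y) for b in (0, 1))

assert all(abs(alice_marginal(a, x, 0) - alice_marginal(a, x, 1)) < 1e-12
           for a in (0, 1) for x in (0, 1))

print(S, 2 * math.sqrt(2))   # 4.0 vs Tsirelson's quantum bound ~2.828
```

The gap between 4 and $2\sqrt{2}$ is exactly why ENS boxes are nonphysical yet useful probes of principles such as local orthogonality.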
Simultaneous nondestructive measurement of many polar molecules using Rydberg atoms
This paper presents a method to nondestructively measure the internal states of polar molecule qubits using Rydberg atoms, enabling simultaneous measurement of many molecules without destroying their quantum states. The technique uses microwave dressing to control interactions and minimize crosstalk between measurements.
Key Contributions
- Development of nondestructive measurement scheme for polar molecule qubits using Rydberg atoms
- Microwave dressing technique to enable simultaneous measurements while minimizing Rydberg-Rydberg interactions
- Experimental demonstrations with bialkali molecules (NaCs and RbCs) using Cs Rydberg atoms
View Full Abstract
Tweezer arrays of polar molecules present new opportunities for quantum science and quantum information. However, a major challenge, especially in bialkali molecule platforms, is the fact that current measurement schemes for the internal states are destructive. In this work, we present a method to use Rydberg atoms to nondestructively measure the internal state of a molecular qubit. We achieve this via microwave dressing of both molecules and Rydberg atoms, allowing us to tune the interactions so that there are minimal Rydberg-Rydberg interactions and many measurements can take place simultaneously. We consider two experimentally-motivated examples of detecting $^{23}$Na$^{133}$Cs and $^{87}$Rb$^{133}$Cs with $^{133}$Cs atoms. Finally, we discuss several strategies for mitigating various sources of crosstalk.
Observation of Unidirectional s-p Orbital Topological Edge States in Driven Photonic Lattices
This paper demonstrates the creation of topological edge states using both s and p orbitals in a photonic lattice system with periodic driving. The researchers show that light can travel unidirectionally around corners by combining synthetic magnetic flux with time-periodic modulation, opening new possibilities for robust optical transport using orbital degrees of freedom.
Key Contributions
- First demonstration of higher-orbital Floquet topological insulators using s-p orbital couplings
- Realization of unidirectional topological edge states through combined periodic driving and synthetic magnetic flux
- Introduction of orbital degree of freedom for exploring topological phenomena in photonic systems
View Full Abstract
Time-periodic modulation of a static system is a powerful method for realizing robust unidirectional topological states. So far, all such realizations have been based on interactions among $s$ orbitals, without incorporating inter-orbital couplings. Here, we demonstrate higher-orbital Floquet topological insulators by introducing periodically modulated couplings between the optical $s$ and $p$ orbitals in a square lattice. The staggered phase of the $s$-$p$ couplings gives rise to a synthetic uniform $π$ magnetic flux per plaquette of the lattice, and periodic driving of the couplings opens a topological bandgap, characterized by the Floquet winding number. We image topological edge modes of $s$-$p$ orbitals traveling unidirectionally around a corner. Here, the topological phases are realized by a combined effect of the periodic driving and synthetic magnetic flux. Consequently, when the synthetic flux is turned off, the system becomes trivial over a range of driving parameters. Our results open a promising pathway for exploring topological phenomena by introducing the orbital degree of freedom.
The Quantum Complexity of String Breaking in the Schwinger Model
This paper uses quantum complexity measures and Matrix Product States to study how flux tubes break apart into particles in the 1+1D Schwinger model, a simplified quantum field theory. The researchers analyze quantum entanglement and 'magic' (a measure of non-stabilizerness, i.e. how far a state is from being efficiently classically simulable) to understand the confinement and fragmentation process in strongly-interacting quantum systems.
Key Contributions
- Demonstrated presence of nonlocal quantum correlations in string breaking dynamics
- Showed entanglement and magic provide complementary insights into confinement phenomena beyond traditional observables
View Full Abstract
String breaking, the process by which flux tubes fragment into hadronic states, is a hallmark of confinement in strongly-interacting quantum field theories. We examine a suite of quantum complexity measures using Matrix Product States to dissect the string breaking process in the 1+1D Schwinger model. We demonstrate the presence of nonlocal quantum correlations along the string that may affect fragmentation dynamics, and show that entanglement and magic offer complementary perspectives on string formation and breaking beyond conventional observables.
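One common way to quantify the "magic" mentioned in the abstract is the stabilizer 2-Rényi entropy. The sketch below (an illustration under standard conventions, not the paper's MPS computation) evaluates it by brute force over Pauli strings, giving zero for a stabilizer state and $\log_2(4/3) \approx 0.415$ for the single-qubit T state:

```python
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

def stabilizer_renyi_2(psi):
    """Stabilizer 2-Renyi entropy M2 = -log2(sum_P <psi|P|psi>^4 / 2^n);
    zero exactly when psi is a stabilizer state."""
    n = int(np.log2(len(psi)))
    total = sum(np.real(psi.conj() @ reduce(np.kron, combo) @ psi) ** 4
                for combo in product(PAULIS, repeat=n))
    return -np.log2(total / 2 ** n)

zero = np.array([1, 0], dtype=complex)                        # stabilizer state
t_state = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)  # canonical magic state
print(stabilizer_renyi_2(zero))     # ~0.0
print(stabilizer_renyi_2(t_state))  # log2(4/3) ~ 0.415
```

The brute-force sum over $4^n$ Pauli strings is only feasible for a few qubits; the paper's MPS machinery exists precisely to push such diagnostics to larger systems.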
Enhancing classical simulation with noisy quantum devices
This paper introduces a hybrid quantum-classical approach called NDE-CS that uses noisy quantum devices to enhance classical simulation of quantum circuits. Instead of trying to eliminate noise, the method learns from noisy quantum executions to improve classical Monte Carlo simulation methods, achieving orders-of-magnitude reductions in computational cost.
Key Contributions
- Development of NDE-CS protocol that uses noisy quantum hardware to enhance classical simulation
- Demonstration of orders-of-magnitude reduction in sampling cost compared to purely classical Monte Carlo methods
- Framework that treats quantum noise as a computational asset rather than obstacle
View Full Abstract
As quantum devices continue to improve in scale and precision, a central challenge is how to effectively utilize noisy hardware for meaningful computation. Most existing approaches aim to recover noiseless circuit outputs from noisy ones through error mitigation or correction. Here, we show that noisy quantum devices can be directly leveraged as computational resources to enhance the classical simulation of quantum circuits. We introduce the Noisy-device-enhanced Classical Simulation (NDE-CS) protocol, which improves stabilizer-based classical Monte Carlo simulation methods by incorporating data obtained from noisy quantum hardware. Specifically, NDE-CS uses noisy executions of a target circuit together with noisy Clifford circuits to learn how the target circuit can be expressed in terms of Clifford circuits under realistic noise. The same learned relation can then be reused in the noiseless Clifford limit, enabling accurate estimation of ideal expectation values with substantially reduced sampling cost. Numerical simulations on Trotterized Ising circuits demonstrate that NDE-CS achieves orders-of-magnitude reductions in sampling cost compared to the underlying purely classical Monte Carlo approaches from which it is derived, while maintaining the same accuracy. We also compare NDE-CS with Sparse Pauli Dynamics (SPD), a powerful classical framework capable of simulating quantum circuits at previously inaccessible scales, and provide an example where the cost of SPD scales exponentially with system size, while NDE-CS scales much more favorably. These results establish NDE-CS as a scalable hybrid simulation approach for quantum circuits, where noise can be harnessed as a computational asset.
Local Magnetometry from Measurement-Induced Dissipation
This paper develops a quantum sensing technique that uses a primary qubit coupled to a magnetic lattice to detect local magnetic order through measurement-induced dissipation. The method can identify antiferromagnetic and altermagnetic phases that have no net magnetization, providing lattice-scale resolution of magnetic textures.
Key Contributions
- Development of measurement-induced dissipation as a resource for quantum magnetometry
- Analytical derivation showing steady-state encoding of locally weighted exchange fields
- Demonstration of lattice-scale resolution for detecting antiferromagnetic and altermagnetic order
View Full Abstract
Magnetic phases are commonly identified through macroscopic magnetization, yet many ordered states, including antiferromagnets and altermagnets, possess a vanishing net moment despite distinct local spin structure. We show that such an order can be accessed through the measurement-induced steady state of a single primary qubit locally coupled to a spin lattice. Using a controlled primary-ancillary qubit protocol, we derive analytically that the steady state \emph{encodes} a locally weighted exchange field in a signed observable that is linear in the weak-coupling regime. Numerical simulations demonstrate lattice-scale resolution of antiferromagnetic and altermagnetic textures and robustness against short-correlated noise. Our results establish measurement-induced dissipation as a resource for detecting magnetic order through microscopic structure rather than through global moments.
A Novel Approach to Explainable AI with Quantized Active Ingredients in Decision Making
This paper compares Quantum Boltzmann Machines (QBMs) and Classical Boltzmann Machines (CBMs) for explainable AI, testing them on image classification tasks. The authors claim QBMs achieved better accuracy and provided clearer explanations of which features drove their decisions.
Key Contributions
- Comparative study of QBMs vs CBMs for explainable AI
- Application of quantum-classical hybrid circuits to improve model interpretability
View Full Abstract
Artificial Intelligence (AI) systems have shown good success at classifying. However, the lack of explainability is a true and significant challenge, especially in high-stakes domains, such as health and finance, where understanding is paramount. We propose a new solution to this challenge: an explainable AI framework based on our comparative study with Quantum Boltzmann Machines (QBMs) and Classical Boltzmann Machines (CBMs). We leverage principles of quantum computing within classical machine learning to provide substantive transparency around decision-making. The design involves training both models on a binarised and dimensionally reduced MNIST dataset, where Principal Component Analysis (PCA) is applied for preprocessing. For interpretability, we employ gradient-based saliency maps in QBMs and SHAP (SHapley Additive exPlanations) in CBMs to evaluate feature attributions. QBMs deploy hybrid quantum-classical circuits with strongly entangling layers, allowing for richer latent representations, whereas CBMs serve as a classical baseline that utilises contrastive divergence. Along the way, we found that QBMs outperformed CBMs on classification accuracy (83.5% vs. 54%) and had more concentrated distributions in feature attributions as quantified by entropy (1.27 vs. 1.39). In other words, QBMs not only produced better predictive performance than CBMs, but they also provided clearer identification of "active ingredient" or the most important features behind model predictions. To conclude, our results illustrate that quantum-classical hybrid models can display improvements in both accuracy and interpretability, which leads us toward more trustworthy and explainable AI systems.
Rational degree is polynomially related to degree
This paper proves a mathematical relationship between two measures of complexity for Boolean functions: the standard degree and rational degree. The authors show that the degree is at most proportional to the fourth power of the rational degree, resolving a long-standing open problem in computational complexity theory.
Key Contributions
- Proves the polynomial relationship deg(f) ≤ 2 rdeg(f)^4 for Boolean functions
- Resolves an open problem stated by Nisan and Szegedy in 1994 and attributed to Fortnow
View Full Abstract
We prove that $\mathrm{deg}(f) \leq 2 \, \mathrm{rdeg}(f)^4$ for every Boolean function $f$, where $\mathrm{deg}(f)$ is the degree of $f$ and $\mathrm{rdeg}(f)$ is the rational degree of $f$. This resolves the second of the three open problems stated by Nisan and Szegedy, and attributed to Fortnow, in 1994.
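For intuition about $\mathrm{deg}(f)$: every Boolean function has a unique multilinear polynomial representation over the reals, and $\mathrm{deg}(f)$ is that polynomial's degree. The sketch below (illustrative only; computing $\mathrm{rdeg}(f)$, the minimum degree of a ratio of polynomials agreeing with $f$, is much harder and is not attempted) recovers the coefficients by Möbius inversion over subsets:

```python
from itertools import chain, combinations

def degree(f, n):
    """Degree of the unique multilinear real polynomial that agrees with
    f: {0,1}^n -> {0,1}; coefficients recovered by Moebius inversion."""
    deg = 0
    for S in chain.from_iterable(combinations(range(n), r) for r in range(n + 1)):
        # Coefficient of the monomial prod_{i in S} x_i:
        # c_S = sum over T subset of S of (-1)^(|S|-|T|) f(indicator of T).
        coeff = sum((-1) ** (len(S) - len(T))
                    * f([1 if i in T else 0 for i in range(n)])
                    for T in chain.from_iterable(combinations(S, r)
                                                 for r in range(len(S) + 1)))
        if coeff != 0:
            deg = max(deg, len(S))
    return deg

OR3 = lambda x: int(any(x))            # degree 3
XOR3 = lambda x: x[0] ^ x[1] ^ x[2]    # x+y+z-2(xy+xz+yz)+4xyz, degree 3
print(degree(OR3, 3), degree(XOR3, 3))  # 3 3
```

The exhaustive subset sum costs $O(3^n)$ evaluations, fine for small $n$ but purely pedagogical.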
Kernel Learning for Regression via Quantum Annealing Based Spectral Sampling
This paper proposes using quantum annealing devices to learn adaptive kernels for regression tasks by sampling from quantum distributions to approximate kernel functions via random Fourier features. The method integrates quantum annealing into a machine learning pipeline where quantum samples help determine the spectral properties of kernels used in regression algorithms.
Key Contributions
- Novel framework integrating quantum annealing into kernel learning for regression
- Use of restricted Boltzmann machines with quantum sampling to model spectral distributions for adaptive kernels
View Full Abstract
While quantum annealing (QA) has been developed for combinatorial optimization, practical QA devices operate at finite temperature and under noise, and their outputs can be regarded as stochastic samples close to a Gibbs-Boltzmann distribution. In this study, we propose a QA-in-the-loop kernel learning framework that integrates QA not merely as a substitute for Markov-chain Monte Carlo sampling but as a component that directly determines the learned kernel for regression. Based on Bochner's theorem, a shift-invariant kernel is represented as an expectation over a spectral distribution, and random Fourier features (RFF) approximate the kernel by sampling frequencies. We model the spectral distribution with a (multi-layer) restricted Boltzmann machine (RBM), generate discrete RBM samples using QA, and map them to continuous frequencies via a Gaussian-Bernoulli transformation. Using the resulting RFF, we construct a data-adaptive kernel and perform Nadaraya-Watson (NW) regression. Because the RFF approximation based on $\cos(\bm{\omega}^{\top}\Delta\bm{x})$ can yield small negative values and cancellation across neighbors, the Nadaraya-Watson denominator $\sum_j k_{ij}$ may become close to zero. We therefore employ nonnegative squared-kernel weights $w_{ij}=k(\bm{x}_i,\bm{x}_j)^2$, which also enhances the contrast of kernel weights. The kernel parameters are trained by minimizing the leave-one-out NW mean squared error, and we additionally evaluate local linear regression with the same squared-kernel weights at inference. Experiments on multiple benchmark regression datasets demonstrate a decrease in training loss, accompanied by structural changes in the kernel matrix, and show that the learned kernel tends to improve $R^2$ and RMSE over the baseline Gaussian-kernel NW. Increasing the number of random features at inference further enhances accuracy.
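Bochner's theorem underlies the RFF step: a shift-invariant kernel is the Fourier transform of a spectral density, so sampling frequencies from that density yields a feature map whose inner products approximate the kernel. The sketch below is a hedged stand-in, not the paper's pipeline: frequencies are drawn from a plain Gaussian (the Gaussian kernel's spectral density) rather than from a QA-sampled RBM, and it combines RFF with the squared-kernel Nadaraya-Watson weights described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 1, 2000          # input dimension, number of random features

# RFF for the Gaussian kernel exp(-||x - y||^2 / 2): by Bochner's theorem
# its spectral density is N(0, I), so frequencies are sampled from it.
# (The paper instead samples an RBM on a quantum annealer.)
W = rng.normal(size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def nw_predict(X_train, y_train, X_test):
    """Nadaraya-Watson regression with squared-kernel weights
    w_ij = k(x_i, x_j)^2, keeping all weights nonnegative."""
    K = features(X_test) @ features(X_train).T   # approximate kernel matrix
    W2 = K ** 2
    return (W2 @ y_train) / W2.sum(axis=1)

X = rng.uniform(-3.0, 3.0, size=(200, d))
y = np.sin(X[:, 0])
pred = nw_predict(X, y, np.array([[0.0], [1.5]]))
print(pred)   # smoothed estimates of sin at x = 0 and x = 1.5
```

Squaring the kernel both removes the sign problem of the cosine features and sharpens the effective bandwidth, at the cost of extra smoothing bias.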
Superadditivity of Krylov Complexity for Tensor Products
This paper studies Krylov complexity, a measure of operator growth in quantum systems, and proves that when quantum systems are combined as tensor products, the total complexity is at least the sum of the individual complexities. The authors develop a geometric framework using Krylov graphs to explain this superadditivity and show how it arises from the multidimensional nature of combined quantum systems.
Key Contributions
- Proof of superadditivity inequality for Krylov complexity under tensor products
- Introduction of Krylov graph representation to visualize operator growth in composite quantum systems
- Geometric explanation of excess complexity through diffusion on higher-dimensional lattices
View Full Abstract
We study Krylov complexity for quantum systems whose Hamiltonians factorise as tensor products. We prove that complexity is superadditive under tensor products, $C_{12}\ge C_1+C_2$, and identify a positive operator that quantifies the resulting excess complexity. The underlying mechanism is made transparent by introducing a Krylov graph representation in which tensor products generate a higher-dimensional lattice whose diagonal shells encode operator growth and binomial path multiplicities. In the continuum limit, Krylov dynamics reduces to diffusion on this graph, with superadditivity arising from geometric broadening across shells. Explicit examples illustrate how deviations from synchronous evolution generate bounded, oscillatory excess complexity.
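A minimal dense-matrix sketch of the Krylov construction (my own illustration for a single small system; it does not reproduce the paper's tensor-product inequality): build an orthonormal Krylov basis from nested commutators of $H$ with $O$, then read off $C(t)$ as the mean position of the Heisenberg-evolved operator along that chain:

```python
import numpy as np

def inner(A, B):
    """Normalized Hilbert-Schmidt inner product <A, B> = tr(A^dag B) / dim."""
    return np.trace(A.conj().T @ B) / A.shape[0]

def krylov_basis(H, O, tol=1e-10):
    """Orthonormal basis of the Krylov space built from nested commutators
    [H, .] acting on O (full Gram-Schmidt for numerical stability)."""
    basis = [O / np.sqrt(inner(O, O).real)]
    while True:
        A = H @ basis[-1] - basis[-1] @ H
        for Q in basis:
            A = A - inner(Q, A) * Q
        norm = np.sqrt(inner(A, A).real)
        if norm < tol:
            return basis
        basis.append(A / norm)

def krylov_complexity(H, O, t):
    """C(t) = sum_n n |<O_n, O(t)>|^2: mean position of the Heisenberg
    operator O(t) = e^{iHt} O e^{-iHt} along the Krylov chain."""
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    Ot = U.conj().T @ O @ U / np.sqrt(inner(O, O).real)
    return sum(n * abs(inner(Q, Ot)) ** 2
               for n, Q in enumerate(krylov_basis(H, O)))

X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
H = X + 0.3 * Z
print(krylov_complexity(H, Z, 0.0))   # 0.0: operator has not spread yet
print(krylov_complexity(H, Z, 0.6))   # > 0: growth along the Krylov chain
```

Testing the superadditivity claim itself would require repeating this on a composite system with $H = H_1\otimes I + I\otimes H_2$ and $O = O_1\otimes O_2$, which the paper analyzes via its Krylov graph picture.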
Fragility of Optimal Measurements due to Noise in Probe States for Quantum Sensing
This paper studies how noise affects the performance of quantum sensors, specifically examining how measurement strategies that work optimally in ideal conditions become fragile and lose precision when noise is introduced. The authors develop a framework using Fisher information discontinuities to design more robust quantum sensing protocols.
Key Contributions
- Demonstrates that discontinuities in classical Fisher information quantify fragility of quantum sensing protocols under noise
- Provides framework using Jensen's inequality to understand and design more robust POVMs for quantum metrology
View Full Abstract
For a given quantum state used in sensing, the quantum Cramér-Rao bound (QCRB) sets a fundamental limit on the precision achievable by an unbiased estimator of an unknown parameter, determined by the inverse of the quantum Fisher information (QFI). The QFI serves as an upper bound on the classical Fisher information (CFI), representing the maximum extractable information about the unknown parameter from measurements on a physical system. Thus, a central goal in quantum parameter estimation is to find a measurement, described by a POVM, that saturates the QFI (achieves maximum CFI), and thereby achieves the QCRB. In the idealization that one uses pure states and unitary encodings for sensing, discontinuities can appear in the CFI but not the QFI. In this article, we demonstrate that these discontinuities are important features, quantifying how much Fisher information is lost in the presence of noise. We refer to this as the Fisher information "fragility". We present a simple framework for understanding how discontinuities increase fragility through Jensen's inequality, and demonstrate how one can use this framework to design more robust POVMs for quantum advantage in metrology.
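The QFI/CFI relationship can be made concrete with single-qubit phase estimation (a standard textbook example, not taken from the paper): probe $|+\rangle$, encoding $e^{-i\theta Z/2}$, QFI $=1$. An $X$-basis measurement saturates the QFI, while a $Z$-basis measurement extracts nothing:

```python
import numpy as np

# Probe |+>, unitary encoding U(theta) = exp(-i theta Z / 2); the QFI equals 1.
def outcome_probs(theta, basis):
    """Outcome probabilities of a projective measurement (rows of `basis`)
    on the encoded state U(theta)|+>."""
    psi = np.array([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)]) / np.sqrt(2)
    return np.abs(basis.conj() @ psi) ** 2

def cfi(theta, basis, eps=1e-6):
    """Classical Fisher information F = sum_k p_k'(theta)^2 / p_k(theta),
    with the derivative taken by central finite differences."""
    dp = (outcome_probs(theta + eps, basis)
          - outcome_probs(theta - eps, basis)) / (2 * eps)
    p = outcome_probs(theta, basis)
    return float(sum(d * d / q for d, q in zip(dp, p) if q > 1e-12))

X_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # saturates the QFI here
Z_basis = np.eye(2)                                   # commutes with the encoding

theta = 0.7
print(cfi(theta, X_basis))   # ~1.0 = QFI
print(cfi(theta, Z_basis))   # ~0.0, no information about theta
```

In the idealized pure-state setting the $X$ measurement is optimal for every $\theta$; the paper's point is that such optimal POVMs can lose information discontinuously once noise enters the probe state.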
On equivalent methods for functional determinants
This paper analyzes different mathematical methods for computing functional determinants of differential operators used in quantum field theory calculations. The authors prove that two specific approaches - the Gel'fand-Yaglom theorem and Green's function method - are mathematically equivalent for one-dimensional operators and provide better ways to handle problematic cases with zero or negative eigenvalues.
Key Contributions
- Demonstrates mathematical equivalence between Gel'fand-Yaglom theorem and Green's function method for computing functional determinants
- Provides improved prescription for handling vanishing and negative eigenvalues in functional determinant calculations
View Full Abstract
Computing functional determinants of differential operators is central to any field-theoretical calculation relying on a saddle-point expansion. A variety of approaches is available for the computation that avoid having to know the eigenspectrum of the operator, and in particular the Gel'fand-Yaglom theorem and the Green's function method. In this note, we show how both approaches can be constructed using a contour integral argument and conclude that these are completely equivalent for computing ratios of determinants of one-dimensional operators. Furthermore, we comment on the presence of vanishing as well as negative eigenvalues and show how the Green's function method provides a natural prescription for handling them.
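A standard worked example of the Gel'fand-Yaglom theorem (illustrative, not from the paper): for Dirichlet boundary conditions on $[0,L]$, the ratio $\det(-\partial_x^2+m^2)/\det(-\partial_x^2)$ equals $\psi_m(L)/\psi_0(L)$, where each $\psi$ solves the homogeneous initial-value problem $\psi''=m^2\psi$, $\psi(0)=0$, $\psi'(0)=1$. The exact answer is $\sinh(mL)/(mL)$, which the sketch checks against both the IVP solution and a truncated eigenvalue product:

```python
import math

def psi_at_L(msq, L, steps=2000):
    """Solve psi'' = msq * psi with psi(0)=0, psi'(0)=1 by RK4; return psi(L)."""
    h = L / steps
    y, v = 0.0, 1.0
    f = lambda y, v: (v, msq * y)
    for _ in range(steps):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + 0.5 * h * k1y, v + 0.5 * h * k1v)
        k3y, k3v = f(y + 0.5 * h * k2y, v + 0.5 * h * k2v)
        k4y, k4v = f(y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

m, L = 1.0, 2.0
# Gel'fand-Yaglom: the determinant ratio is psi_m(L) / psi_0(L),
# with no eigenvalue ever computed.
ratio = psi_at_L(m * m, L) / psi_at_L(0.0, L)

# Cross-checks: closed form sinh(mL)/(mL), and the truncated eigenvalue
# product prod_n (n^2 pi^2/L^2 + m^2) / (n^2 pi^2/L^2).
closed_form = math.sinh(m * L) / (m * L)
product = 1.0
for n in range(1, 20000):
    product *= 1.0 + (m * L / (math.pi * n)) ** 2

print(ratio, closed_form, product)   # all ~1.8134
```

The point of the theorem is visible here: the IVP gives the full spectral product from a single boundary-value solve, which is why it generalizes so well to field-theory saddle points.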
Ultrafast quantum optics with attosecond control
This paper develops a quantum light field squeezer that can generate and control squeezed light on extremely fast timescales (femtoseconds and attoseconds). The researchers demonstrate precise control over quantum light properties at the level of individual light wave cycles, enabling new applications in quantum optics and strong-field physics.
Key Contributions
- Development of quantum light field squeezer enabling attosecond control of broadband squeezed light
- Demonstration of time-dependent squeezing distribution across individual half-cycles of electric field
- Achievement of sub-femtosecond control over quantum light-induced tunneling current noise
View Full Abstract
Modern quantum optics largely remains quasi-stationary, far from intrinsic optical field timescales. Ultrafast quantum optics seeks to generate, shape, and measure quantum states of light on femtosecond and attosecond timescales. Here we introduce a quantum light field squeezer (QLFS) that enables the generation and attosecond control of ultrafast broadband squeezed light. Using degenerate four-wave mixing in a quasi-collinear focusing geometry, our approach overcomes conventional broadband phase-matching limits, producing intensity- and phase-squeezed states directly from few-cycle laser pulses. Our ultrafast quantum optical metrology reveals a time-dependent squeezing distribution across individual half-cycles of the electric field. Incorporating this time-dependent squeezing into strong-field simulations shows that the temporal redistribution of quantum uncertainty reshapes the high-harmonic emission. Moreover, by tuning the relative pulse delay and phase-matching angle, we achieve attosecond precision in controlling the squeezing characteristics by visualizing inferred effective Wigner representations of the quantum light field. Beyond characterization, we demonstrate that the quantum light-induced tunneling-current noise is sensitive to the nonclassical intensity-noise statistics of the driving squeezed light, with sub-femtosecond control. Together, these results extend the generation, control, and effective phase-space representation of squeezed light into the ultrafast and attosecond regime, opening new avenues for quantum optics in strong-field and solid-state systems.
From Classical to Quantum Reinforcement Learning and Its Applications in Quantum Control: A Beginner's Tutorial
This paper is a tutorial that teaches undergraduate students how to apply reinforcement learning techniques, progressing from classical methods to quantum applications with a specific focus on quantum control systems. It emphasizes practical implementation skills through hands-on coding examples to bridge the gap between theory and real-world application.
Key Contributions
- Educational tutorial bridging classical and quantum reinforcement learning for undergraduates
- Practical coding framework for implementing RL in quantum control applications
View Full Abstract
This tutorial is designed to make reinforcement learning (RL) more accessible to undergraduate students by offering clear, example-driven explanations. It focuses on bridging the gap between RL theory and practical coding applications, addressing common challenges that students face when transitioning from conceptual understanding to implementation. Through hands-on examples and approachable explanations, the tutorial aims to equip students with the foundational skills needed to confidently apply RL techniques in real-world scenarios.
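As a flavor of the hands-on coding examples such a tutorial builds toward, here is a minimal tabular Q-learning loop on a toy chain environment (my own sketch under assumed conventions, not code from the tutorial):

```python
import random

# Toy environment: a 1-D chain of 5 states; start in state 0, reward 1 for
# reaching state 4; action 0 moves left, action 1 moves right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(0)
for _ in range(500):                                   # training episodes
    s, done = 0, False
    while not done:
        if rng.random() < EPS:                         # epsilon-greedy exploration
            a = rng.randrange(2)
        else:                                          # greedy, ties broken toward "right"
            a = max((1, 0), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        target = r + GAMMA * max(Q[s2]) * (not done)   # Q-learning bootstrap
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

policy = [max((1, 0), key=lambda act: Q[s][act]) for s in range(N_STATES)]
print(policy)   # greedy actions at states 0..4; non-terminal states learn "right" (1)
```

The same update rule carries over to quantum control once the "state" is a discretized description of the quantum system and the "actions" are control pulses, which is the bridge the tutorial aims to teach.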
Parameterized families of 2+1d $G$-cluster states
This paper constructs and analyzes a theoretical quantum many-body system called a G-cluster Hamiltonian in 2+1 dimensions, focusing on its symmetry properties and topological characteristics. The authors study families of quantum states with special symmetries and demonstrate nontrivial topological properties through interface modes.
Key Contributions
- Construction of G-cluster Hamiltonian with G×2Rep(G) symmetry including non-invertible symmetries
- Development of parameterized families of short-range-entangled states using symmetry interpolation methods
- Analysis of topological interface modes and their symmetry charges to demonstrate nontrivial topological properties
View Full Abstract
We construct a $G$-cluster Hamiltonian in 2+1 dimensions and analyze its properties. This model exhibits a $G\times2\mathrm{Rep}(G)$ symmetry, where the $2\mathrm{Rep}(G)$ sector realizes a non-invertible symmetry obtained by condensing appropriate algebra objects in $\mathrm{Rep}(G)$. Using the symmetry interpolation method, we construct $S^1$- and $S^2$-parameterized families of short-range-entangled (SRE) states by interpolating either an invertible $0$-form or $1$-form symmetry contained in $G\times2\mathrm{Rep}(G)$. Applying an adiabatic evolution argument to this family, we analyze the pumped interface mode generated by this adiabatic process. We then explicitly construct the symmetry operator acting on the interface and show that the interface mode carries a nontrivial charge under this symmetry, thereby demonstrating the nontriviality of the parameterized family.
Generalized cluster states in 2+1d: non-invertible symmetries, interfaces, and parameterized families
This paper develops 2D lattice models with non-invertible symmetries by gauging subgroups of finite group symmetries, creating 'generalized cluster states' that extend traditional cluster state quantum computation models. The authors study interfaces between different phases and demonstrate topological charge pumping phenomena using tensor network methods.
Key Contributions
- Construction of 2+1D lattice models with non-invertible symmetries described by fusion 2-categories
- Analysis of interface symmetries using strip 2-algebra and proof of mandatory interface degeneracy between different SPT phases
- Development of generalized Thouless pump for topological charge pumping in parameterized cluster state families
View Full Abstract
We construct 2+1-dimensional lattice models of symmetry-protected topological (SPT) phases with non-invertible symmetries and investigate their properties using tensor networks. These models, which we refer to as generalized cluster models, are constructed by gauging a subgroup symmetry $H \subset G$ in models with a finite group 0-form symmetry $G$. By construction, these models have a non-invertible symmetry described by the group-theoretical fusion 2-category $\mathcal{C}(G; H)$. After identifying the tensor network representations of the symmetry operators, we study the symmetry acting on the interface between two generalized cluster states. In particular, we will see that the symmetry at the interface is described by a multifusion category known as the strip 2-algebra. By studying possible interface modes allowed by this symmetry, we show that the interface between generalized cluster states in different SPT phases must be degenerate. This result generalizes the ordinary bulk-boundary correspondence. Furthermore, we construct parameterized families of generalized cluster states and study the topological charge pumping phenomena, known as the generalized Thouless pump. We exemplify our construction with several concrete cases, and compare them with known phases, such as SPT phases with $2\mathrm{Rep}((\mathbb{Z}_{2}^{[1]}\times\mathbb{Z}_{2}^{[1]})\rtimes\mathbb{Z}_{2}^{[0]})$ symmetry.
Open quantum spin chains with non-reciprocity: a theoretical approach based on the time-dependent generalized Gibbs ensemble
This paper develops a theoretical framework using time-dependent generalized Gibbs ensembles to study open quantum spin chains with non-reciprocal dissipation, deriving equations that govern the system's time evolution and relating magnetization density to current dynamics. The approach extends beyond previous non-interacting fermion analyses to handle more complex interacting quantum many-body systems.
Key Contributions
- Development of time-dependent generalized Gibbs ensemble framework for non-reciprocal open quantum spin chains
- Derivation of closed differential equations governing rapidity distribution evolution in weakly dissipative regimes
- Theoretical approach extending beyond non-interacting fermion models to describe interacting quantum many-body dynamics
View Full Abstract
We study an open quantum spin chain with non-reciprocal dissipation using a theoretical approach known as time-dependent generalized Gibbs ensemble. In the regime of weak dissipation the system is fully characterized by its rapidity distribution and we derive a closed set of coupled differential equations governing their time evolution. We check the accuracy of this theory by benchmarking the results against numerical simulations. Using this framework we are able to compute both the magnetization density and current dynamics, identifying some relations between the two. The problem of the anomalous power-law exponents identified in a previous work is discussed. Our work constitutes a theoretical approach that is able to describe the physics of non-reciprocal open quantum spin chains beyond analyses based on non-interacting fermions.
Sample Complexity of Composite Quantum Hypothesis Testing
This paper studies how to distinguish between two sets of possible quantum states when you don't know exactly which states you're dealing with, focusing on how many copies of the quantum state you need to make this determination accurately. The researchers provide mathematical bounds on the minimum number of samples required and extend their analysis to privacy-preserving scenarios.
Key Contributions
- Derived tight bounds on sample complexity for composite quantum hypothesis testing that match up to universal constants
- Extended the analysis to differentially private quantum hypothesis testing, establishing sample complexity for privacy-preserving scenarios
View Full Abstract
This paper investigates symmetric composite binary quantum hypothesis testing (QHT), where the goal is to determine which of two uncertainty sets contains an unknown quantum state. While asymptotic error exponents for this problem are well-studied, the finite-sample regime remains poorly understood. We bridge this gap by characterizing the sample complexity -- the minimum number of state copies required to achieve a target error level. Specifically, we derive lower bounds that generalize the sample complexity of simple QHT and introduce new upper bounds for various uncertainty sets, including those of both finite and infinite cardinalities. Notably, our upper and lower bounds match up to universal constants, providing a tight characterization of the sample complexity. Finally, we extend our analysis to the differentially private setting, establishing the sample complexity for privacy-preserving composite QHT.
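To make the sample-complexity notion concrete, here is a minimal sketch for the *simple* (non-composite) special case the paper's lower bounds generalize: for two known pure states with single-copy fidelity $F = |\langle\psi|\phi\rangle|^2$, the optimal (Helstrom) error over $n$ copies is $(1 - \sqrt{1 - F^n})/2$, so the copies needed to reach error $\delta$ grow like $\log(1/\delta)/\log(1/F)$. The states and target errors below are illustrative choices, not from the paper.

```python
import numpy as np

def helstrom_error(psi, phi, n):
    """Optimal error probability for distinguishing n copies of two
    pure states under equal priors (Helstrom bound)."""
    F = abs(np.vdot(psi, phi)) ** 2     # single-copy fidelity |<psi|phi>|^2
    return 0.5 * (1.0 - np.sqrt(1.0 - F ** n))

def sample_complexity(psi, phi, delta):
    """Smallest n achieving error <= delta (brute-force search)."""
    n = 1
    while helstrom_error(psi, phi, n) > delta:
        n += 1
    return n

# Two nearly parallel qubit states are hard to tell apart:
theta = 0.1
psi = np.array([1.0, 0.0])
phi = np.array([np.cos(theta), np.sin(theta)])

for delta in (0.25, 0.1, 0.01):
    print(f"delta = {delta}: need n = {sample_complexity(psi, phi, delta)} copies")
```

The error decays exponentially in $n$, which is why the sample complexity depends only logarithmically on the target error level but diverges as the two hypotheses approach each other.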
Entanglement-swapping measurements for deterministic entanglement distribution
This paper develops improved methods for entanglement swapping in quantum networks that eliminate the randomness typically present in such protocols. The researchers identify specific measurement strategies that guarantee successful entanglement distribution between distant nodes without requiring post-selection of favorable outcomes.
Key Contributions
- Complete characterization of deterministic entanglement-swapping measurements that eliminate probabilistic outcomes
- Dimension-dependent classification showing unique optimal measurements for d=2,3, infinite solutions for d=4, and 72 classes for d=5
- Proof that optimal measurements based on complex Hadamard matrices are order-independent in quantum networks for d=2,3
View Full Abstract
Entanglement swapping is a key primitive for distributing entanglement across nodes in quantum networks. In standard protocols, the outcome of the intermediate measurement determines the resulting state, making the process inherently probabilistic and requiring postselection. In this work, we fully characterize those measurements under which entanglement swapping becomes deterministic: for arbitrary pure inputs, every measurement outcome produces local-unitarily equivalent states. We also show that an optimal measurement, maximizing a concurrence-type entanglement measure, is built from complex Hadamard matrices. For this optimal protocol, we provide a complete, dimension-dependent classification of deterministic entanglement-swapping measurements: unique in dimensions $d=2,3$, infinite for $d=4$, and comprising $72$ inequivalent classes for $d=5$. We further consider a general network with multiple swapping nodes and show that, for $d=2,3$ the resulting end-to-end state is independent of the order in which the repeaters perform the optimal measurements. Our results establish optimal entanglement-swapping schemes that are post-selection free, in the sense that they distribute entanglement across generic quantum network architectures without unfavorable measurement outcomes.
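The $d=2$ case of the deterministic property characterized above can be checked numerically: measuring the middle qubits of two Bell pairs in the Bell basis (built from the $2\times 2$ Hadamard, a complex Hadamard matrix) leaves the outer qubits maximally entangled for *every* outcome, with all outcomes local-unitarily equivalent. This is a standard-textbook sketch of that special case, not the paper's general $d$-dimensional construction.

```python
import numpy as np

# |Phi+> = (|00> + |11>)/sqrt(2)
bell = np.zeros((2, 2)); bell[0, 0] = bell[1, 1] = 1 / np.sqrt(2)

# Two Bell pairs on qubits (0,1) and (2,3): psi[i0, i1, i2, i3]
psi = np.einsum('ab,cd->abcd', bell, bell)

# Bell basis on the middle qubits (1,2)
bell_basis = []
for z in (0, 1):
    for x in (0, 1):
        b = np.zeros((2, 2), dtype=complex)
        for i in (0, 1):
            b[i, (i + x) % 2] = (-1) ** (z * i) / np.sqrt(2)
        bell_basis.append(b)

probs, entropies = [], []
for k, b in enumerate(bell_basis):
    # project middle qubits onto outcome k, leaving a state on qubits (0,3)
    out = np.einsum('bc,abcd->ad', b.conj(), psi)
    p = np.linalg.norm(out) ** 2
    out = out / np.sqrt(p)
    # entanglement entropy of the post-measurement state across 0|3
    s = np.linalg.svd(out, compute_uv=False)
    entropy = -sum(l**2 * np.log2(l**2) for l in s if l > 1e-12)
    probs.append(p); entropies.append(entropy)
    print(f"outcome {k}: p = {p:.3f}, entanglement = {entropy:.3f} bits")
```

All four outcomes occur with probability 1/4 and carry one full ebit, so no outcome needs to be post-selected away; the paper's contribution is classifying which measurements retain this property in higher dimensions.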
Phase-sensitive superposition of quantum states
This paper develops new mathematical tools to measure and quantify quantum superposition by considering the phases of quantum state amplitudes. The authors introduce 'phase-sensitive superposition' measures and demonstrate their applications in analyzing quantum algorithms like Grover search, revealing fundamental trade-offs between superposition and algorithmic performance.
Key Contributions
- Introduction of phase-sensitive superposition quantifiers that account for amplitude phases
- Establishment of conservation relations and complementarity between superposition and algorithmic success probability
- Explicit connection between phase-sensitive superposition and l²-norm coherence measures
- Analysis of superposition dynamics in Grover's quantum search algorithm
View Full Abstract
Although the principle of superposition lies at the heart of quantum mechanics and is the root of almost all quantum phenomena such as coherence and entanglement, its quantification, except for that related to the resource theory of coherence and interference, remains relatively less studied. In this work, we address quantification of superposition from an information-theoretic perspective. We introduce a family of quantifiers of superposition, the phase-sensitive superposition, by taking into account the phases of amplitudes in the superposition of fixed basis states (e.g., computational basis states). We establish a conservation relation for the phase-sensitive superposition, which is a kind of complementary relation and is reminiscent of wave-particle duality. We evaluate explicitly the second moment of phase-sensitive superposition and show that it is intrinsically related to the $l^2$-norm coherence. We characterize the dephasing channel induced by the maximally superposed states. We investigate the minimum and maximum superpositions, reveal their basic properties, and illustrate them through various examples. We further explore the dynamics of superposition in the Grover search algorithm, and demonstrate a complementary relation between superposition and success probability of the search algorithm. These results and quantifiers offer tools for analyzing structural features and implications of quantum superposition.
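The $l^2$-norm coherence that the paper's second moment connects to is easy to compute: it is the sum of squared magnitudes of the off-diagonal density-matrix entries. The sketch below illustrates that quantity only (not the paper's phase-sensitive quantifiers), and also shows why phase sensitivity is a genuine addition: changing the amplitudes' phases leaves $C_{l^2}$ unchanged.

```python
import numpy as np

def l2_coherence(rho):
    """l2-norm coherence: sum of |rho_ij|^2 over off-diagonal entries."""
    off = rho - np.diag(np.diag(rho))
    return np.sum(np.abs(off) ** 2).real

def pure_state_rho(amplitudes):
    v = np.asarray(amplitudes, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

d = 4
# maximally superposed state (all amplitudes equal): C_l2 = 1 - 1/d
rho_max = pure_state_rho(np.ones(d))
print(l2_coherence(rho_max))

# random phases change the amplitudes but not |rho_ij|, so C_l2 is phase-blind
phases = np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, d))
rho_phased = pure_state_rho(phases)
print(l2_coherence(rho_phased))
```

Both states give $C_{l^2} = 1 - 1/d = 0.75$ even though their phase patterns differ, which is exactly the kind of structure a phase-sensitive superposition measure is designed to resolve.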
Quantum Computing -- Strategic Recommendations for the Industry
This whitepaper evaluates the current state and near-term prospects of quantum computing applications in industrial optimization and machine learning. It assesses different quantum hardware technologies and provides a traffic-light framework to determine where quantum approaches show promise versus where classical methods remain superior.
Key Contributions
- Standardized traffic-light evaluation framework for assessing quantum computing applications in industry
- Comprehensive assessment of quantum optimization and machine learning use cases with consistent evaluation criteria
- Analysis of hardware roadmaps and scaling trajectories for superconducting and ion-trap quantum technologies
View Full Abstract
This whitepaper surveys the current landscape and short- to mid-term prospects for quantum-enabled optimization and machine learning use cases in industrial settings. Grounded in the QCHALLenge program, it synthesizes hardware trajectories from different quantum architectures and providers, and assesses their maturity and potential for real-world use cases under a standardized traffic-light evaluation framework. We provide a concise summary of relevant hardware roadmaps, distinguishing superconducting and ion-trap technologies, their current states, modalities, and projected scaling trajectories. The core of the presented work is the use case evaluations in the domains of optimization problems and machine learning applications. For the conducted experiments, we apply a consistent set of evaluation criteria (model formulation, scalability, solution quality, runtime, and transferability) which are assessed in a shared system of three categories, ranging from optimistic (solutions produced by quantum computers are competitive with classical methods and/or a clear path to a quantum advantage is shown) to pessimistic (significant hurdles prevent practical application of quantum solutions now and potentially in the future). The resulting verdicts illuminate where quantum approaches currently offer promise, where hybrid classical-quantum strategies are most viable, and where classical methods are expected to remain superior.
MultiQ: Multi-Programming Neutral Atom Quantum Architectures
This paper introduces MultiQ, a system that allows multiple quantum circuits to run simultaneously on neutral atom quantum computers by partitioning the qubit array, improving throughput by 3.8x to 12.3x while largely maintaining fidelity. The approach addresses a utilization mismatch: large circuits suffer significant fidelity drops, while small circuits leave most of the hardware idle.
Key Contributions
- First multi-programming system for neutral atom QPUs enabling concurrent execution of multiple quantum circuits
- Virtual zone layout compilation strategy and controller system that increases throughput 3.8x-12.3x while maintaining fidelity
- Cross-layer system architecture with compiler, controller, and checker components for optimizing spatio-temporal hardware utilization
View Full Abstract
Neutral atom Quantum Processing Units (QPUs) are emerging as a popular quantum computing technology due to their large qubit counts and flexible connectivity. However, performance challenges arise as large circuits experience significant fidelity drops, while small circuits underutilize hardware and face initialization latency issues. To tackle these problems, we propose $\textit{multi-programming on neutral atom QPUs}$, allowing the co-execution of multiple circuits by logically partitioning the qubit array. This approach increases resource utilization and mitigates initialization latency while maintaining result fidelity. Currently, state-of-the-art compilers for neutral atom architectures do not support multi-programming. To fill this gap, we introduce MultiQ, the first system designed for this purpose. MultiQ addresses three main challenges: (i) it compiles circuits into a $\textit{virtual zone layout}$ to optimize spatio-temporal hardware utilization; (ii) it parallelizes the execution of co-located circuits, allowing single hardware instructions to operate on different circuits; and (iii) it includes an algorithm to verify the functional independence of the bundled circuits. MultiQ functions as a cross-layer system comprising a compiler, controller, and checker. Our compiler generates $\textit{virtual zone layouts}$ to enhance performance, while the controller efficiently maps these layouts onto the hardware and resolves any conflicts. The checker ensures the correct bundling of circuits. Experimental results show a throughput increase from 3.8$\times$ to 12.3$\times$ when multi-programming 4 to 14 circuits, with fidelity largely maintained, ranging from a 1.3% improvement for four circuits to only a 3.5% loss for fourteen circuits. Overall, MultiQ facilitates concurrent execution of multiple quantum circuits, boosting throughput and hardware utilization.
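The simplest invariant a checker like MultiQ's must enforce is that bundled circuits touch disjoint sets of physical qubits. The toy function below illustrates that kind of check; the circuit names and qubit assignments are hypothetical, and the real MultiQ checker verifies functional independence at the instruction level, which is stronger than this set-disjointness sketch.

```python
from itertools import combinations

def disjoint_zones(circuits):
    """Toy independence check: every pair of bundled circuits must use
    disjoint physical-qubit sets. `circuits` maps circuit name -> qubit set."""
    for (a, qa), (b, qb) in combinations(circuits.items(), 2):
        overlap = qa & qb
        if overlap:
            return False, f"{a} and {b} share qubits {sorted(overlap)}"
    return True, "ok"

# hypothetical bundle of three small circuits placed in separate zones
bundle = {"qft_4q": {0, 1, 2, 3}, "ghz_3q": {8, 9, 10}, "vqe_2q": {16, 17}}
print(disjoint_zones(bundle))

bad = {"a": {0, 1}, "b": {1, 2}}
print(disjoint_zones(bad))
```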
Rigorous phase-error-estimation security framework for QKD with correlated sources
This paper develops a mathematical framework to improve the security analysis of quantum key distribution (QKD) systems when the photon sources have imperfections that create correlations between consecutive pulses. The work bridges the gap between theoretical security proofs and real-world QKD implementations by accounting for practical modulator limitations.
Key Contributions
- Development of rigorous phase-error-estimation framework for correlated quantum sources
- Extension of existing security proofs to handle practical QKD modulator imperfections
View Full Abstract
Practical QKD modulators introduce correlations between consecutively emitted pulses due to bandwidth limitations, violating key assumptions underlying many security proof techniques. Here, we address this problem by introducing a simple yet powerful mathematical framework to directly extend phase-error-estimation-based security proofs for imperfect but uncorrelated sources to also incorporate encoding correlations. Our framework overcomes important limitations of previous approaches in terms of generality and rigor, significantly narrowing the gap between theoretical security guarantees and real-world QKD implementations.
On-chip semi-device-independent quantum random number generator exploiting contextuality
This paper demonstrates an on-chip quantum random number generator that uses quantum contextuality (a fundamental quantum property) rather than entanglement to certify true randomness. The system integrates two silicon photonic chips to create a practical device that can generate cryptographically secure random numbers with mathematical guarantees of their quality.
Key Contributions
- First semi-device-independent QRNG based on contextuality violation implemented on integrated silicon photonics
- Novel approach using qutrit states and KCBS inequality testing for randomness certification without requiring entanglement
- Demonstration of practical quantum random number generation with certified min-entropy and integration-compatible architecture
View Full Abstract
We present a semi-device-independent quantum random number generator (QRNG) based on the violation of a contextuality inequality, implemented by the integration of two silicon photonic chips. Our system combines a heralded single-photon source with a reconfigurable interferometric mesh to implement qutrit state preparation, transformations, and measurements suitable for testing a KCBS contextuality inequality. This architecture enables the generation of random numbers from the intrinsic randomness of single-photon interference in a complex optical network, while simultaneously allowing a quantitative certification of their security without requiring entanglement. We observe a contextuality violation exceeding the classical bound by more than 10σ, unambiguously confirming non-classical behavior. From this violation, we certify a conditional min-entropy per experimental round of Hmin = 0.077 ± 0.002, derived via a tailored semidefinite-programming-based security analysis. Each measurement outcome therefore contains at least 0.077 ± 0.002 bits of extractable genuine randomness, corresponding to an asymptotic generation rate of 21.7 ± 0.5 bits/s. These results establish a viable route towards general-purpose, untrusted quantum random number generators compatible with practical integrated photonic quantum networks.
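The KCBS inequality underlying this certification can be reproduced in a few lines: five qutrit projectors, each orthogonal to its cyclic neighbors, give a sum of expectation values bounded by 2 classically but reaching $\sqrt{5} \approx 2.236$ quantum mechanically. The sketch below uses the standard textbook construction, not the experiment's photonic implementation.

```python
import numpy as np

# Standard KCBS construction: five unit vectors in R^3, adjacent ones
# orthogonal (exclusive outcomes), measured on the state |psi> = (1,0,0).
c = np.cos(np.pi / 5)
cos_t = np.sqrt(c / (1 + c))     # overlap of each vector with |psi>
sin_t = np.sqrt(1 / (1 + c))
vecs = [np.array([cos_t,
                  sin_t * np.cos(4 * np.pi * k / 5),
                  sin_t * np.sin(4 * np.pi * k / 5)]) for k in range(5)]

psi = np.array([1.0, 0.0, 0.0])
kcbs = sum(np.dot(psi, v) ** 2 for v in vecs)   # sum of projector expectations

print(f"KCBS value = {kcbs:.4f}  (classical bound 2, quantum max sqrt(5) ~ 2.2361)")
```

Any observed value above 2 (here, by more than 10σ in the experiment) certifies non-classicality, and the size of the gap feeds the semidefinite-programming bound on the certifiable min-entropy.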
A Methodological Analysis of Empirical Studies in Quantum Software Testing
This paper analyzes how researchers conduct and report empirical studies in quantum software testing by systematically reviewing 59 studies. The authors identify inconsistencies in methodologies and provide recommendations to improve the design and reporting of future quantum software testing research.
Key Contributions
- Systematic methodological analysis of 59 empirical studies in quantum software testing
- Identification of limitations and inconsistencies in current QST research practices
- Recommendations for improving design, execution, and reporting of future empirical studies in quantum software testing
View Full Abstract
In quantum software engineering (QSE), quantum software testing (QST) has attracted increasing attention as quantum software systems grow in scale and complexity. Since QST evaluates quantum programs through execution under designed test inputs, empirical studies are widely used to assess the effectiveness of testing approaches. However, the design and reporting of empirical studies in QST remain highly diverse, and a shared methodological understanding has yet to emerge, making it difficult to interpret results and compare findings across studies. This paper presents a methodological analysis of empirical studies in QST through a systematic examination of 59 primary studies identified from a literature pool of size 384. We organize our analysis around ten research questions that cover key methodological dimensions of QST empirical studies, including objects under test, baseline comparison, testing setup, experimental configuration, and tool and artifact support. Through cross-study analysis along these dimensions, we characterize current empirical practices in QST, identify recurring limitations and inconsistencies, and highlight open methodological challenges. Based on our findings, we derive insights and recommendations to inform the design, execution, and reporting of future empirical studies in QST.
Verification of continuous variable entanglement with undetected photons
This paper demonstrates a new method to verify quantum entanglement between photon pairs without needing to detect both photons, using single photon interference measurements. The technique works even when one photon cannot be detected due to experimental limitations and successfully violates key quantum entanglement criteria.
Key Contributions
- Novel entanglement verification method that doesn't require coincidence detection of both photons
- Experimental demonstration of EPR and MGVT criterion violations using single photon interference
- Method works with experimental losses and non-degenerate sources where suitable detectors may not exist
View Full Abstract
We verify transverse spatial entanglement of photon-pairs generated in spontaneous parametric down conversion using a nonlinear interferometric technique without relying on any coincidence detection. We experimentally demonstrate the violation of the Einstein-Podolsky-Rosen criterion and of the Mancini-Giovannetti-Vitali-Tombesi criterion using single photon interference of one of the photons of the pairs. We also provide a comprehensive theoretical analysis. The experimental results that we have obtained show good agreement with the theoretical values. Our method performs well under experimental losses and can be applied to highly non-degenerate sources, where no suitable detectors exist for one of the photons in the quantum state. It could also be extended to discrete degrees of freedom to certify high-dimensional (OAM) entanglement.
Quantum fluctuations of vacuum versus photon-pairs concerning Spontaneous-Parametric Down-Conversion and Four-Wave-Mixing
This paper analyzes the transition point between two quantum optical regimes in spontaneous parametric down-conversion and four-wave mixing processes, determining when vacuum fluctuations dominate versus when generated photon pairs dominate the nonlinear interaction. The authors calculate specific threshold values that define this transition, providing quantitative guidance for quantum optics experiments.
Key Contributions
- Calculated specific threshold value (0.369) for photon-pairs flux per frequency unit at the transition between vacuum-seeded and photon-dominated regimes
- Determined the electric field ratio (1.718) between generated photons and vacuum fluctuations at the transition point
View Full Abstract
The limit between the two regimes of spontaneous-parametric down-conversion (SPDC) or four-wave-mixing (FWM) regarding the pump intensity has been theoretically investigated using a semi-classical model and analytical calculations. A unitless quantity has been defined, corresponding to the photon-pairs flux per frequency unit: it has been found equal to 0.369 at this limit. The ratio between the magnitudes of the electric fields of the generated photons and of vacuum has also been calculated, equal to 1.718, and the pump intensity has been plotted as a function of the interaction length for different values of the second-order electric susceptibility in the case of SPDC and of the third-order electric susceptibility for FWM. These quantitative results confirm that below the limit, the nonlinear process can be truly considered as spontaneous, i.e. mainly seeded by the quantum fluctuations of vacuum, while the generated photons mainly govern the pump photon splitting above the limit, which corresponds more to an optical parametric amplification / difference frequency generation regime. Knowing quantitatively the limit between the two regimes thanks to the present calculations will be a useful guide for further quantum calculations and measurements from either side of the limit in order to catch the full quantum picture of SPDC and FWM from low to high pump intensities.
Eigenstate thermalization in thermal first-order phase transitions
This paper shows that the eigenstate thermalization hypothesis (ETH), which explains how isolated quantum systems reach thermal equilibrium, breaks down near first-order phase transitions. The researchers demonstrate that quantum eigenstates can have different thermal properties at the same energy, creating distinct classes of states and eigenstate phase transitions.
Key Contributions
- Demonstrates breakdown of eigenstate thermalization hypothesis near thermal first-order phase transitions
- Identifies coexistence of distinct eigenstate classes and Schrodinger-cat-like superposition states
- Proposes experimental detection methods via non-equilibrium dynamics
View Full Abstract
The eigenstate thermalization hypothesis (ETH) posits how isolated quantum many-body systems thermalize, assuming that individual eigenstates at the same energy density have identical expectation values of local observables in the limit of large systems. While the ETH apparently holds across a wide range of interacting quantum systems, in this work we show that it requires generalization in the presence of thermal first-order phase transitions. We introduce a class of all-to-all spin models, featuring first-order thermal phase transitions that stem from two distinct mean-field solutions (two ``branches'') that exchange dominance in the many-body density of states as the energy is varied. We argue that for energies in the vicinity of the thermal phase transition, eigenstate expectation values do not need to converge to the same thermal value. The system has a regime with coexistence of two classes of eigenstates corresponding to the two branches with distinct expectation values at the same energy density, and another regime with Schrodinger-cat-like eigenstates that are inter-branch superpositions; these two regimes are separated by an eigenstate phase transition. We support our results by semiclassical calculations and an exact diagonalization study of a microscopic spin model, and argue that the structure of eigenstates in the vicinity of thermal first-order phase transitions can be experimentally probed via non-equilibrium dynamics.
Critical quantum states and hierarchical spectral statistics in a Cantor potential
This paper studies how fractal (self-similar) patterns in a quantum potential create unusual quantum states that are neither fully localized nor extended, but exist in a critical state between the two. The researchers show that the fractal geometry directly influences the quantum energy levels and wave functions, creating hierarchical patterns in the spectrum.
Key Contributions
- Demonstration of hierarchical spectral statistics in fractal quantum systems with bimodal level-spacing distribution
- Establishment of direct connection between fractal geometry and quantum criticality through multifractal analysis of eigenstates
- Discovery of anomalous power-law scaling in integrated density of states with exponent matching Hausdorff dimension of underlying Cantor set
View Full Abstract
We study the spectral statistics and wave-function properties of a one-dimensional quantum system subject to a Cantor-type fractal potential. By analyzing the nearest-neighbor level spacings, inverse participation ratio (IPR), and the scaling behavior of the integrated density of states (IDS), we demonstrate how the self-similar geometry of the potential is imprinted on the quantum spectrum. The energy-resolved level spacings form a hierarchical, filamentary structure, in sharp contrast to those of periodic and random systems. The normalized level-spacing distribution exhibits a bimodal structure, reflecting the deterministic recurrence of spectral gaps. A multifractal analysis of eigenstates reveals critical behavior: the generalized fractal dimensions $D_q$ lie strictly between the limits of extended and localized states, exhibiting a distinct $q$-dependence. Consistently, the IPR indicates the coexistence of quasi-extended and localized features, characteristic of critical wave functions. The IDS shows anomalous power-law scaling at low energies, with an exponent close to the Hausdorff dimension of the underlying Cantor set, indicating that the geometric fractality governs the spectral dimensionality. At higher energies, this scaling crosses over to the semiclassical Weyl law. Our results establish a direct connection between deterministic fractal geometry, hierarchical spectral statistics, and quantum criticality.
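The basic numerical setup described above (Cantor potential, finite-difference Hamiltonian, IPR) is easy to reproduce in miniature. The grid resolution, recursion level, and barrier height below are arbitrary illustrative choices, not the paper's parameters; the IPR interpolates between $1/N$ for a fully extended state and $\sim 1$ for a fully localized one.

```python
import numpy as np

def cantor_mask(level, pts_per_cell=1):
    """Indicator function of the level-`level` approximation to the Cantor
    set: at each step, keep the outer thirds and empty the middle third."""
    mask = np.array([1.0])
    for _ in range(level):
        mask = np.concatenate([mask, np.zeros_like(mask), mask])
    return np.repeat(mask, pts_per_cell)

level, v0 = 4, 50.0
V = v0 * cantor_mask(level, pts_per_cell=4)   # barriers on the Cantor set
N = len(V)                                    # 3**4 * 4 = 324 grid points

# finite-difference Hamiltonian H = -d^2/dx^2 + V (units: hbar = 2m = dx = 1)
H = np.diag(2.0 + V) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
E, states = np.linalg.eigh(H)

# inverse participation ratio of each eigenstate
ipr = np.sum(np.abs(states) ** 4, axis=0)
print(f"{N} sites; IPR range over spectrum: {ipr.min():.4f} .. {ipr.max():.4f}")
```

Even this toy version shows a broad spread of IPR values across the spectrum; the paper's multifractal analysis makes the "critical, neither extended nor localized" character of such states quantitative.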
A Preparation Nonstationarity Loophole in Superconducting-Qubit Bell Tests
This paper identifies a new loophole in Bell tests performed on superconducting quantum processors, where slow temporal drift in the quantum state preparation process can mimic Bell inequality violations without true quantum nonlocality. The authors develop methods to detect this preparation nonstationarity and show it affects the interpretation of quantum advantage demonstrations on current hardware.
Key Contributions
- Identification of preparation nonstationarity loophole in Bell tests on superconducting quantum processors
- Development of ensemble-divergence framework and operational witness for detecting preparation drift
- Experimental demonstration on IBM quantum processors showing drift effects persist after readout error mitigation
- Modified Bell bound accounting for preparation nonstationarity effects
View Full Abstract
Bell or Clauser-Horne-Shimony-Holt (CHSH) tests on superconducting quantum processors are commonly interpreted under the assumption that repeated circuit executions sample a single, stationary preparation ensemble. Here we show that this assumption can be violated on contemporary hardware, with direct implications for the interpretation of observed Bell violations. We introduce an ensemble-divergence framework in which slow temporal drift of the preparation process induces context-dependent effective ensembles, even when measurement independence and locality are preserved. This leads to a relaxed Bell bound $|S| \le 2 + 6\delta_{\mathrm{ens}}$, where $\delta_{\mathrm{ens}}$ quantifies preparation nonstationarity. Because $\delta_{\mathrm{ens}}$ is not directly observable, we develop an operational witness $\delta_{\mathrm{op}}$ based on bin-resolved outcome statistics for fixed measurement channels. Using Pauli-axis measurements on IBM superconducting processors, we observe statistically significant operational drift that persists after full two-qubit readout mitigation, ruling out measurement artifacts. In contrast, drift extracted from CHSH-optimal measurements is eliminated by mitigation, demonstrating that such settings are unsuitable for diagnosing preparation nonstationarity. We further show that the observed Bell violations imply only modest ensemble divergences, comparable in scale to those required in Hall-type measurement-dependence models, but arising here solely from preparation drift combined with experimental scheduling. Our results identify a preparation-dependent loophole relevant to Bell tests on noisy intermediate-scale quantum devices and highlight the necessity of drift-aware protocols for reliable quantum certification.
Efficient and broadband quantum frequency comb generation in a monolithic AlGaAs-on-insulator microresonator
This paper demonstrates a chip-scale quantum light source that generates multiple pairs of entangled photons at different wavelengths using an AlGaAs-on-insulator microresonator. The device produces eleven distinct wavelength pairs with high efficiency and strong entanglement, making it suitable for integrated quantum photonic systems.
Key Contributions
- Demonstration of efficient broadband quantum frequency comb generation with 11 wavelength pairs across 35.2 nm bandwidth
- Achievement of high spectral brightness (2.64 GHz mW⁻²nm⁻¹) and strong energy-time entanglement (93.1% average visibility)
- Development of optimized AlGaAs-on-insulator platform with high nonlinearity (~550 m⁻¹W⁻¹) for integrated quantum photonics
View Full Abstract
The exploration of photonic systems for quantum information processing has generated widespread interest in multiple cutting-edge research fields. Photonic frequency encoding stands out as an especially viable approach, given its natural alignment with established optical communication technologies, including fiber networks and wavelength-division multiplexing systems. Substantial reductions in hardware resources and improvements in quantum performance can be expected by utilizing multiple frequency modes. The integration of nonlinear photonics with microresonators provides a compelling way for generating frequency-correlated photon pairs across discrete spectral modes. Here, by leveraging the high material nonlinearity and low nonlinear loss, we demonstrate an efficient chip-scale multi-wavelength quantum light source based on AlGaAs-on-insulator, featuring a free spectral range of approximately 200 GHz at telecom wavelengths. The optimized submicron waveguide geometry provides both high effective nonlinearity (~550 m$^{-1}$W$^{-1}$) and broad generation bandwidth, producing eleven distinct wavelength pairs across a 35.2 nm bandwidth with an average spectral brightness of 2.64 GHz mW$^{-2}$nm$^{-1}$. The generation of energy-time entanglement for each pair of frequency modes is verified through Franson interferometry, yielding an average net visibility of 93.1%. With its exceptional optical gain and lasing capabilities, the AlGaAs-on-insulator platform developed here shows outstanding potential for realizing fully integrated, ready-to-deploy quantum photonic systems on chip.
Reference-frame-independent Quantum secure direct communication
This paper develops a new quantum secure direct communication protocol that works even when the communicating parties don't have perfectly aligned reference frames, making it more practical for mobile communications. The protocol maintains security while allowing misalignment in two spatial directions and demonstrates significantly extended communication distances compared to single-photon methods.
Key Contributions
- Development of reference-frame-independent quantum secure direct communication protocol that tolerates misalignment angles
- Introduction of β-independent security parameter and rederivation of security bounds for robustness against reference frame fluctuations
- Demonstration that the maximum transmission distance reaches approximately 155.9% of that of single-photon-based QSDC protocols
View Full Abstract
Current quantum secure direct communication (QSDC) protocols guarantee communication security by estimating the error rates of photons in the X and Z bases. This takes reference-frame calibration between the communicating parties as a necessary prerequisite. However, in mobile communication scenarios, achieving continuous and accurate reference-frame calibration poses significant challenges. To address this issue, this paper proposes a reference-frame-independent (RFI) QSDC protocol. This protocol only requires calibration accuracy in one direction of the reference frame, while allowing a misalignment angle $β$ in the other two directions. To improve the protocol's robustness against reference-frame fluctuations, we introduce a $β$-independent parameter $C$ into the security analysis framework and rederive the protocol's security bounds. Additionally, we construct a system model and optimize the pulse intensity of the signal states, enabling the protocol to achieve optimal performance at each level of channel attenuation. At an attenuation of 10 dB (corresponding to a communication distance of 25 km), the secrecy message capacities for $β = 0^\circ$ and $45^\circ$ are $8.765 \times 10^{-6}$ bit/pulse and $4.150 \times 10^{-6}$ bit/pulse, respectively. Compared with single-photon-based QSDC, the communication distance of the proposed protocol is significantly extended: for $β = 0^\circ$ and $45^\circ$, the maximum transmission distances of the RFI QSDC protocol are 27.875 km and 26.750 km, about 155.9% and 149.7% of those of the single-photon-based QSDC protocol.
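As a reader-side sanity check on the quoted figures (our arithmetic, not part of the paper), the two stated percentages should imply the same single-photon baseline distance:

```python
# RFI-QSDC maximum distances quoted in the abstract, divided by the
# stated ratios to the single-photon-based QSDC baseline
baseline_beta0 = 27.875 / 1.559   # from "155.9% of" at beta = 0 deg
baseline_beta45 = 26.750 / 1.497  # from "149.7% of" at beta = 45 deg
print(baseline_beta0, baseline_beta45)  # both ~17.9 km
```

The two ratios recover the same baseline to within tens of metres, so the quoted percentages are internally consistent.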
Quantum State Discrimination Enhanced by FPGA-Based AI Engine Technology
This paper presents a real-time quantum state discrimination system that uses FPGA-based AI accelerators and neural networks to more accurately and efficiently determine the state of quantum bits in superconducting quantum processors. The system enables faster in-situ measurements and supports mid-circuit measurement experiments for multiple qubits.
Key Contributions
- Development of real-time FPGA-based AI system for quantum state discrimination
- Implementation of multi-layer neural networks on AMD Xilinx VCK190 platform for qubit readout
- Enhancement of mid-circuit measurement capabilities for multiple qubits
View Full Abstract
Identifying the state of a quantum bit (qubit), known as quantum state discrimination, is a crucial operation in quantum computing. However, it has been the most error-prone and time-consuming operation on superconducting quantum processors. Due to stringent timing constraints and algorithmic complexity, most qubit state discrimination methods are executed offline. In this work, we present an enhanced real-time quantum state discrimination system leveraging FPGA-based AI Engine technology. A multi-layer neural network has been developed and implemented on the AMD Xilinx VCK190 FPGA platform, enabling accurate in-situ state discrimination and supporting mid-circuit measurement experiments for multiple qubits. Our approach leverages recent advancements in architecture research and design, utilizing specialized AI/ML accelerators to optimize quantum experiments and reduce the use of FPGA resources.
Vacuum-dressed superconductivity in NbN observed in a high-$Q$ terahertz cavity
This paper demonstrates that placing a superconducting material (niobium nitride) inside a high-quality terahertz cavity can modify its superconducting properties through quantum vacuum fluctuations alone, without any external driving fields. The researchers observed a 13% reduction in superfluid density and 2% reduction in the superconducting gap, showing that cavity environments can engineer material ground states.
Key Contributions
- First experimental demonstration of cavity vacuum fluctuations modifying superconducting properties without external driving
- Development of a platform for engineering material ground states through vacuum-matter coupling in high-Q terahertz cavities
View Full Abstract
Emerging theoretical frameworks suggest that physical properties of matter can be altered within an optical cavity by harnessing quantum vacuum electromagnetic fluctuations, even in the total absence of external driving fields. Among the most intriguing predictions is the potential to noninvasively manipulate superconductivity. Here, we experimentally observe modified superconductivity in niobium nitride (NbN) thin films within high-quality-factor ($Q$) terahertz cavities. Using terahertz time-domain spectroscopy, we characterize the NbN response both in free space and within a high-$Q$ photonic-crystal cavity. Our analysis reveals significant cavity-induced modifications to the optical conductivity. A theoretical model indicates that these changes originate from a substantial ($\sim13\,\%$) reduction in the superfluid density and a minor ($\sim2\,\%$) reduction in the superconducting gap, driven by cavity vacuum fluctuations. These results demonstrate a platform for engineering ground states via vacuum-matter coupling, opening frontiers in cavity materials science.
Dissipative ground-state preparation of a quantum spin chain on a trapped-ion quantum computer
This paper demonstrates a method for preparing quantum ground states using dissipative protocols on a trapped-ion quantum computer with up to 19 qubits. The researchers show that despite hardware noise, their protocol robustly converges to low-energy states and can be improved using error mitigation techniques.
Key Contributions
- Extended Kraus representation of dissipation channel beyond Lindblad dynamics regime
- Demonstrated robust dissipative ground-state preparation on 19-qubit trapped-ion quantum computer
- Showed protocol resilience to hardware noise with circuits containing over 4000 entangling gates
- Applied zero-noise extrapolation to improve energy expectation values to match noiseless simulations
View Full Abstract
We demonstrate a dissipative protocol for ground-state preparation of a quantum spin chain on a trapped-ion quantum computer. As a first step, we derive a Kraus representation of a dissipation channel for the protocol recently proposed by Ding et al. [Phys. Rev. Res. 6, 033147 (2024)] that still holds for arbitrary temporal discretization steps, extending the analysis beyond the Lindblad dynamics regime. The protocol guarantees that the fidelity with the ground state monotonically increases (or remains unchanged) under repeated applications of the channel to an arbitrary initial state, provided that the ground state is the unique steady state of the dissipation channel. Using this framework, we implement dissipative ground-state preparation of a transverse-field Ising chain for up to 19 spins on the trapped-ion quantum computer Reimei provided by Quantinuum. Despite the presence of hardware noise, the dynamics consistently converges to a low-energy state far away from the maximally mixed state even when the corresponding quantum circuits contain as many as 4110 entangling gates, demonstrating the intrinsic robustness of the protocol. By applying zero-noise extrapolation, the resulting energy expectation values are systematically improved to agree with noiseless simulations within statistical uncertainties.
Cost scaling of MPS and TTNS simulations for 2D and 3D systems with area-law entanglement
This paper analyzes the computational efficiency of two different tensor network simulation methods (MPS vs TTNS) for studying quantum many-body systems in 2D and 3D, finding that traditional MPS simulations are surprisingly more efficient than TTNS for large systems despite theoretical advantages.
Key Contributions
- Comparative scaling analysis of MPS vs TTNS computational costs for 2D and 3D systems
- Demonstration that MPS simulations outperform TTNS for large systems with area-law entanglement, despite the shorter graph distances of TTNS
View Full Abstract
Tensor network states are an indispensable tool for the simulation of strongly correlated quantum many-body systems. In recent years, tree tensor network states (TTNS) have been successfully used for two-dimensional systems and to benchmark quantum simulation approaches for condensed matter, nuclear, and particle physics. In comparison to the more traditional approach based on matrix product states (MPS), the graph distance of physical degrees of freedom can be drastically reduced in TTNS. Surprisingly, it turns out that, for large systems in $D>1$ spatial dimensions, MPS simulations of low-energy states are nevertheless more efficient than TTNS simulations. With a focus on $D=2$ and 3, the scaling of computational costs for different boundary conditions is determined under the assumption that the system obeys an entanglement (log-)area law, implying that bond dimensions scale exponentially in the surface area of the associated subsystems.
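The cost comparison rests on a standard area-law estimate, sketched here as a back-of-envelope paraphrase (not the paper's precise bounds): for a snake-path MPS on an $L \times L$ lattice, a bipartition cut crosses a boundary of length $\sim L$, so

```latex
% Area law on an L x L lattice: entanglement entropy across a cut
S(L) \sim c\,L
\quad\Longrightarrow\quad
\chi \sim e^{cL},
% so a DMRG-style sweep over the N = L^2 sites costs roughly
\mathrm{cost} \;\sim\; N\,\chi^{3} \;\sim\; L^{2}\, e^{3cL}.
```

The exponential growth of the bond dimension $\chi$ dominates any polynomial effect of graph distance, which is one way to read the finding that the shorter graph distances of TTNS need not translate into lower cost.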
Quantum observers can communicate across multiverse branches
This paper proposes a theoretical thought experiment where observers in different branches of a quantum multiverse could communicate with each other through a Wigner's friend scenario, challenging the conventional belief that such inter-branch communication is impossible. The proposed mechanism requires quantum control over observers and results in memory loss of the sent messages to maintain quantum unitarity.
Key Contributions
- Demonstrates theoretical possibility of inter-branch communication in Everettian multiverse interpretation
- Proposes method to test Everettian quantum theory against single-world interpretations using knowledge-creation paradoxes
View Full Abstract
It is commonly thought that observers in distinct branches of an Everettian multiverse cannot communicate without violating the linearity of quantum theory. Here we show a counterexample, demonstrating that inter-branch communication is in fact possible, entirely within standard quantum theory. We do this by considering a Wigner's-friend scenario, where an observer (Wigner) can have quantum control over another observer (the friend). We present a thought experiment where the friend in superposition can receive a message written by a distinct copy of themselves in the multiverse, with the aid of Wigner. To maintain the unitarity of quantum theory, the observers must have no memory of the message that they sent. Our thought experiment challenges conventional wisdom regarding the ultimate limits of what is possible in an Everettian multiverse. It has a surprising potential application which involves using knowledge-creation paradoxes for testing Everettian quantum theory against single-world theories.
Multi-level charge fluctuations in a Si/SiGe double quantum dot device
This paper studies unwanted electrical charge fluctuations in silicon-based quantum dot devices that can cause errors in quantum bits. The researchers develop sophisticated analysis methods to characterize these noise sources and understand how they depend on device operating conditions, which could help improve quantum device performance.
Key Contributions
- Development of algorithmic methods for characterizing multi-level charge fluctuations in quantum dot devices using factorial hidden Markov models
- Quantitative analysis of charge noise sensitivity to gate voltages and operating conditions, providing lever arm estimates that could aid in spatial localization of noise sources
View Full Abstract
Discrete charge fluctuations, routinely observed in semiconductor quantum dot devices, may contribute significantly to device drift and errors resulting from qubit miscalibration. Understanding the nature and origins of these discrete charge fluctuations may provide insights into material improvements or means of mitigating charge noise in semiconductor quantum dot devices. In this work, we measure multi-level charge fluctuations present in a Si/SiGe double quantum dot device over a range of device operating voltages and temperatures. To characterize the parameter-dependent dynamics of the underlying fluctuating degrees of freedom, we perform a detailed analysis of the measured noise time series. We perform algorithmically assisted drift detection and change point detection to detrend the data and remove a slow fluctuator component, as a preprocessing step. We perform model comparison on the post-processed time series between different $n$-level fluctuator ($n$LF) factorial hidden Markov models (FHMMs), finding that although at most sweep values the model of an independent pair of 2LFs would be preferred, in a particular region of voltage space the 4LF model outperforms the other models, indicating a conditional rate dependence between the two fluctuators. By tracking fluctuator transition rates, biases, and weights over a range of different device configurations, we estimate gate voltage and conductivity sensitivity. In particular, we fit a phenomenological, detailed balance model to the extracted independent 2LFs rate data, yielding lever arm estimates in the range of $-2 μ$eV/mV to $4 μ$eV/mV between the two 2LFs and nearby gate electrodes. We expect that these characterization results may aid in subsequent spatial triangulation of the charge fluctuators.
Learning parameter curves in feedback-based quantum optimization algorithms
This paper develops a machine learning approach to predict parameter sequences for feedback-based quantum algorithms without needing expensive quantum measurements. The researchers train a model to map optimization problems directly to quantum algorithm parameters, potentially reducing the computational overhead of quantum optimization algorithms.
Key Contributions
- Development of teacher-student model to predict FQA parameter curves without quantum measurements
- Demonstration that ML-predicted parameters perform similarly to measurement-based FQA and outperform linear quantum annealing
View Full Abstract
Feedback-based quantum algorithms (FQAs) operate by iteratively growing a quantum circuit to optimize a given task. At each step, feedback from qubit measurements is used to inform the next quantum circuit update. In practice, the sampling cost associated with these measurements can be significant. Here, we ask whether FQA parameter sequences can be predicted using classical machine learning, obviating the need for qubit measurements altogether. To this end, we train a teacher-student model to map a MaxCut problem instance to an associated FQA parameter curve in a single classical inference step. Numerical experiments show that this model can accurately predict FQA parameter curves across a range of problem sizes, including problem sizes not seen during model training. To evaluate performance, we compare the predicted parameter curves in simulation against FQA reference curves and linear quantum annealing schedules. We observe similar results to the former and performance improvements over the latter. These results suggest that machine learning can offer a heuristic, practical path to reducing sampling costs and resource overheads in quantum algorithms.
Monte Carlo to Las Vegas for Recursively Composed Functions
This paper studies the computational complexity of recursively composed Boolean functions, proving that for such functions, bounded-error randomized algorithms can be converted to zero-error algorithms. The work extends these results to quantum algorithms, showing that bounded-error quantum algorithms on recursively composed functions can be converted to quantum algorithms that find certificates with high probability.
Key Contributions
- Proves that composition limits of query complexity measures converge under reasonable assumptions
- Shows that bounded-error quantum algorithms on recursively composed functions can be converted to certificate-finding quantum algorithms
- Establishes the equality R_0*(f) = max{R*(f), C*(f)} for composition limits of randomized query complexity
View Full Abstract
For a (possibly partial) Boolean function $f\colon\{0,1\}^n\to\{0,1\}$ as well as a query complexity measure $M$ which maps Boolean functions to real numbers, define the composition limit of $M$ on $f$ by $M^*(f)=\lim_{k\to\infty} M(f^k)^{1/k}$. We study the composition limits of general measures in query complexity. We show this limit converges under reasonable assumptions about the measure. We then give a surprising result regarding the composition limit of randomized query complexity: we show $R_0^*(f)=\max\{R^*(f),C^*(f)\}$. Among other things, this implies that any bounded-error randomized algorithm for recursive 3-majority can be turned into a zero-error randomized algorithm for the same task. Our result extends also to quantum algorithms: on recursively composed functions, a bounded-error quantum algorithm can be converted into a quantum algorithm that finds a certificate with high probability. Along the way, we prove various combinatorial properties of measures and composition limits.
Quantum Energetic Advantage before Computational Advantage in Boson Sampling
This paper analyzes the energy efficiency of photonic quantum computers solving the Boson Sampling problem, comparing quantum versus classical energy consumption. The authors demonstrate that quantum systems can achieve lower energy costs per sample than classical computers even before achieving computational speed advantages.
Key Contributions
- Development of Metric-Noise-Resource methodology connecting experimental parameters to energetic resources in Boson Sampling
- Demonstration of quantum energetic advantage that emerges before computational advantage
- Proposal of experimentally feasible photonic architecture with complete noise budget for near-term quantum energetic advantage
View Full Abstract
Understanding the energetic efficiency of quantum computers is essential for assessing their scalability and for determining whether quantum technologies can outperform classical computation beyond runtime alone. In this work, we analyze the energy required to solve the Boson Sampling problem, a paradigmatic task for quantum advantage, using a realistic photonic quantum computing architecture. Using the Metric-Noise-Resource methodology, we establish a quantitative connection between experimental control parameters, dominant noise processes, and energetic resources through a performance metric tailored to Boson Sampling. We estimate the energy cost per sample and identify operating regimes that optimize energetic efficiency. By comparing the energy consumption of quantum and state-of-the-art classical implementations, we demonstrate the existence of a quantum energetic advantage -- defined as a lower energy cost per sample compared to the best-known classical implementation -- that emerges before the onset of computational advantage, even in regimes where classical algorithms remain faster. Finally, we propose an experimentally feasible Boson Sampling architecture, including a complete noise and loss budget, that enables a near-term observation of quantum energetic advantage.
On measurement-dependent variance in quantum neural networks
This paper investigates how measuring observables on only a subset of qubits (rather than all qubits) in quantum neural networks leads to higher variance in prediction accuracy. The authors show this effect is fundamentally linked to the number of distinct eigenvalues in the measured observable after applying variational quantum circuits.
Key Contributions
- Demonstrates that restricted measurement support increases prediction variance in quantum machine learning
- Identifies the connection between observable eigenvalue structure and variance in variational quantum circuits
View Full Abstract
Variational quantum circuits have become a widely used tool for performing quantum machine learning (QML) tasks on labeled quantum states. In some specific tasks or for specific variational ansätze, one may perform measurements on a restricted part of the overall input state. This is the case for, e.g., quantum convolutional neural networks (QCNNs), where after each layer of the circuit a subset of qubits of the processed state is measured or traced out, and at the end of the network one typically measures a local observable. In this work, we demonstrate that measuring observables with restricted support results in larger label prediction variance in regression QML tasks. We show that the reason for this is, essentially, the number of distinct eigenvalues of the observable one measures after the application of a variational circuit.
Interferometric discrepancy between the Schrödinger and Klein-Gordon wave equations due to their dissimilar phase velocities
This paper analyzes fundamental differences between Schrödinger and Klein-Gordon wave equations by examining interferometric effects when beamsplitters move faster than the phase velocity of particles. It demonstrates that the Schrödinger equation predicts interference patterns that are impossible under electromagnetic waves or the non-relativistic Klein-Gordon equation due to phase velocity constraints.
Key Contributions
- Identifies fundamental interferometric differences between Schrödinger and Klein-Gordon wave equations
- Analyzes phase velocity effects in quantum beamsplitter systems with relativistic considerations
View Full Abstract
The Schrödinger equation predicts interference when a beamsplitter's trajectory includes a segment where its speed exceeds the phase velocity of a free non-zero rest mass particle that is in a momentum eigenstate. Such interference is neither possible for electromagnetic waves nor for eigenstates of momentum in the non-relativistic limit of the Klein-Gordon equation since the speed of the beamsplitter cannot exceed the phase velocity of the wave. The dual behavior of reflection and transmission in this case is discussed for dielectric and diffracting beamsplitters.
Computing quantum magic of state vectors
This paper develops efficient algorithms to compute 'quantum magic' (non-stabilizerness) - a measure of how far quantum states are from classical-like stabilizer states. The researchers created faster computational methods and software tools that significantly reduce the time needed to calculate these important quantum complexity measures.
Key Contributions
- Efficient algorithms using the fast Hadamard transform that reduce computational complexity from $O(d^{3N})$ to $O(N d^{2N})$
- Open-source Julia package HadaMAG.jl with GPU acceleration for computing stabilizer Rényi entropy and mana
View Full Abstract
Non-stabilizerness, also known as "magic," quantifies how far a quantum state departs from the stabilizer set. It is a central resource behind quantum advantage and a useful probe of the complexity of many-body quantum states. Yet standard magic quantifiers, such as the stabilizer Rényi entropy (SRE) for qubits and the mana for qutrits, are costly to evaluate numerically, with the computational complexity growing rapidly with the number $N$ of qudits. Here we introduce efficient, numerically exact algorithms that exploit the fast Hadamard transform to compute the SRE for qubits ($d=2$) and the mana for qutrits ($d=3$) for pure states given as state vectors. Our methods reduce the runtime to $O(N d^{2N})$, an exponential improvement over the naive $O(d^{3N})$ scaling, while exposing substantial parallelism and enabling GPU acceleration. We further show how to combine the fast Hadamard transform with Monte Carlo sampling to estimate the SRE of state vectors, and we extend the approach to compute the mana of mixed states. All algorithms are implemented in the open-source Julia package HadaMAG.jl, which provides a high-performance, GPU-enabled toolbox for computing SRE and mana. The package, together with the methods developed in this work, offers a practical route to large-scale numerical studies of magic in quantum many-body systems.
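The fast-Hadamard algorithms themselves live in HadaMAG.jl; as a point of reference, the naive $O(d^{3N})$ baseline the paper improves on can be written in a few lines for small qubit counts. A brute-force sketch (the function name `sre` is ours), useful for cross-checking a faster implementation:

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def sre(psi, alpha=2):
    """Stabilizer Renyi entropy M_alpha of a pure state vector,
    by naive enumeration over all 4^N Pauli strings (small N only)."""
    n = int(np.log2(len(psi)))
    d = 2 ** n
    xi = []
    for labels in product("IXYZ", repeat=n):
        P = np.array([[1.0 + 0j]])
        for l in labels:
            P = np.kron(P, PAULIS[l])
        exp_val = np.vdot(psi, P @ psi).real
        xi.append(exp_val ** 2 / d)   # Xi_P forms a probability distribution
    xi = np.array(xi)
    return np.log(np.sum(xi ** alpha)) / (1 - alpha) - np.log(d)

# A stabilizer state has zero magic; the single-qubit T state has M_2 = log(4/3)
zero = np.array([1, 0], dtype=complex)
t_state = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
print(sre(zero))     # ~0
print(sre(t_state))  # ~log(4/3) ≈ 0.2877
```

Replacing the inner Pauli loop with a fast Walsh-Hadamard transform is what yields the paper's exponential speedup.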
Measurement-based acceleration of optical computations
This paper proposes using collective oscillations in coupled optical resonators to perform matrix-vector multiplication for analog computation. The coupling constants form the matrix while initial mode occupancies form the input vector, with detection time decreasing as vector dimension increases.
Key Contributions
- Demonstration that collective oscillations in coupled resonators can implement matrix-vector multiplication
- Analysis showing detection time decreases with increasing input vector dimension
View Full Abstract
Analog coprocessors are being intensively developed with the aim of reducing the energy cost of neural-network computations. In this work we focus on the possibility of using the detection of collective oscillations in optical systems for computational purposes. We show that in a system of coupled resonators, collective oscillations can be used to implement matrix-vector multiplication. The matrix is formed by the coupling constants between the resonators, and the input vector is formed by the initial occupancies of the involved modes. The frequency of the collective oscillations grows with the number of involved modes, similarly to Rabi oscillations. The time needed for their detection (i.e., averaging) decreases as the input vector dimension increases. We discuss the limitations imposed on parallel computation in the system by the restriction of the allowed optical frequency band.
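The mapping can be illustrated with a toy linear-dynamics model (our construction, not the paper's detection scheme): for coupled-mode equations $da/dt = -iKa$, the short-time deviation of the mode amplitudes from their initial values encodes the product of the coupling matrix with the input vector.

```python
import numpy as np

# Toy model: coupling matrix K = the "stored" matrix,
# initial mode occupancies a0 = the input vector.
rng = np.random.default_rng(0)
n = 4
K = rng.normal(size=(n, n))
K = (K + K.T) / 2                 # symmetric coupling matrix
a0 = rng.normal(size=n)

w, V = np.linalg.eigh(K)          # exact evolution via eigendecomposition

def evolve(t):
    """a(t) = exp(-i K t) @ a0 for the coupled-mode equations da/dt = -i K a."""
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ a0))

dt = 1e-6
product = (a0 - evolve(dt)) / (1j * dt)        # first-order readout of K @ a0
print(np.max(np.abs(product.real - K @ a0)))   # small discretization error
```

In the actual proposal the readout is spectroscopic (detecting collective-oscillation frequencies) rather than this explicit finite-difference step, but the underlying linear map is the same.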
Disorder enhanced transport as a general feature of long-range hopping models
This paper studies how disorder (randomness) affects quantum transport in systems where particles can hop long distances between sites. Surprisingly, they find that in long-range hopping systems, increasing disorder can actually enhance transport rather than suppress it, which is the opposite of typical expectations.
Key Contributions
- Demonstrates that disorder-enhanced transport occurs generally in long-range hopping systems with hopping decaying as $1/r^α$, in both the strong (α<1) and weak (1≤α≤3) long-range regimes
- Identifies and characterizes the disorder thresholds that define the start and end of the disorder-enhanced transport regime
View Full Abstract
We analyze the interplay of disorder and long-range hopping in a paradigmatic one-dimensional model of quantum transport. While the current is typically expected to decrease as the disorder strength increases due to localization effects, in systems with infinite-range hopping it was shown in Chavez et al., Phys. Rev. Lett. 126, 153201 (2021), that the current can increase with disorder in the Disorder-Enhanced-Transport (DET) regime. Here, by analyzing models with variable hopping range decaying as $1/r^α$ with the distance $r$ between the sites, we show that the DET regime is a general feature of long-range hopping systems and occurs not only in the strong long-range limit $α<1$ but also for weak long-range $1 \le α \le 3$. Specifically, we show that, after an initial decrease, the current grows with the disorder strength until it reaches a local maximum. Both disorder thresholds, at which the DET regime starts and ends, are determined. Our results open a path to understanding the effect of disorder on transport in many realistic systems where long-range hopping is present.
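As a minimal sketch of the model class studied (our construction; the paper's open-system current calculation is not reproduced here), the single-particle Hamiltonian combines power-law hopping with uniform on-site disorder:

```python
import numpy as np

def hopping_hamiltonian(n_sites, alpha, W, seed=0):
    """1D chain with hopping decaying as 1/r**alpha and on-site
    disorder drawn uniformly from [-W/2, W/2]."""
    rng = np.random.default_rng(seed)
    i, j = np.indices((n_sites, n_sites))
    r = np.abs(i - j)
    H = np.where(r > 0, 1.0 / np.maximum(r, 1) ** alpha, 0.0)
    H = H + np.diag(rng.uniform(-W / 2, W / 2, size=n_sites))
    return H

# Strong long-range regime (alpha < 1) with moderate disorder strength W
H = hopping_hamiltonian(50, alpha=0.5, W=2.0)
eigvals, eigvecs = np.linalg.eigh(H)
ipr = np.sum(np.abs(eigvecs) ** 4, axis=0)  # inverse participation ratio
print(ipr.mean())  # localization proxy: ~1/n_sites (extended) up to 1 (localized)
```

Sweeping `W` and tracking a transport observable (the paper uses the steady-state current of an open chain) is what reveals the non-monotonic DET behavior.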
Phase transition, phase separation and mode softening of a two-component Bose-Einstein condensate in an optical cavity
This paper studies how a two-component Bose-Einstein condensate behaves in an optical cavity when driven by laser light, finding that the system undergoes phase transitions that create stripe patterns and transitions from superfluid to supersolid states. The research reveals how different atomic components separate and form distinct patterns depending on their optical properties.
Key Contributions
- Demonstration that red-detuned components dominate superradiant phase transitions in two-component BECs
- Discovery of spontaneous phase separation creating alternating stripe patterns and distinct Bragg gratings
- Identification of roton-type mode softening indicating superfluid-to-lattice supersolid transition
View Full Abstract
We investigate the superradiant phase transition in a two-component Bose-Einstein condensate with distinct atomic detunings, confined in an optical cavity and driven by a transverse pump laser. By combining perturbation theory and numerical simulations, we demonstrate that the phase transition is dominated by the red-detuned component, resulting in a phase diagram completely different from that of a single-component case under blue-detuned condition. The system exhibits spontaneous phase separation between the two components, manifested as alternating stripe patterns in the normal phase and distinct Bragg gratings in the superradiant phase. Furthermore, the Bogoliubov excitation spectrum reveals roton-type mode softening, indicating that the phase transition also corresponds to the superfluid-to-lattice supersolid transition. Our findings provide insights into the interplay between atomic detunings and collective quantum many-body phenomena, offering potential applications in quantum simulation and optical switching technologies.
TrackHHL: The 1-Bit Quantum Filter for particle trajectory reconstruction
This paper develops a quantum algorithm called the 1-Bit Quantum Filter for reconstructing particle trajectories in high-energy physics experiments, specifically targeting the computational challenges expected at the High-Luminosity Large Hadron Collider. The approach adapts the HHL algorithm to achieve better efficiency by reformulating the tracking problem as binary filtering rather than matrix inversion.
Key Contributions
- Development of a domain-specific quantum algorithm with O(√N log N) gate complexity for particle tracking
- Demonstration of quantum advantage for high-energy physics computational problems using NISQ-era hardware constraints
View Full Abstract
The transition to the High-Luminosity Large Hadron Collider (HL-LHC) presents a computational challenge where particle reconstruction complexity may outpace classical computing resources. While quantum computing offers potential speedups, standard algorithms like Harrow-Hassidim-Lloyd (HHL) require prohibitive circuit depths for near-term hardware. Here, we introduce the 1-Bit Quantum Filter, a domain-specific adaptation of HHL that reformulates tracking from matrix inversion to binary ground-state filtering. By replacing high-precision phase estimation with a single-ancilla spectral threshold and exploiting the Hamiltonian's sparsity, we achieve an asymptotic gate complexity of $O(\sqrt{N} \log N)$, given Hamiltonian dimension $N$. We validate this approach by simulating LHCb Vertex Locator events with a toy model, and benchmark performance using the noise models of Quantinuum H2 trapped-ion and IBM Heron superconducting processors. This work establishes a resource-efficient track reconstruction method capable of solving realistic event topologies on noise-free simulators and smaller tracking scenarios within the current constraints of the Noisy Intermediate Scale Quantum (NISQ) era.
Explicit complex time integrators for stiff problems
This paper develops new numerical methods for solving differential equations that use complex-valued time steps instead of real ones, showing these methods have better stability properties for certain problems like the Schrödinger equation and stiff mathematical systems.
Key Contributions
- Development of complex time step integrators with expanded stability regions
- Demonstration of optimal performance for Schrödinger equation integration
- Extension to real-valued stiff systems via Projective Integration coupling
View Full Abstract
Most numerical methods for time integration use real-valued time steps. Complex time steps, however, can provide an additional degree of freedom, as we can select the magnitude of the time step in both the real and imaginary directions. We show that specific paths in the complex time plane lead to expanded stability regions, providing clear computational advantages for complex-valued systems. In particular, we highlight the Schrödinger equation, for which complex time integrators can be uniquely optimal. Furthermore, we demonstrate that these benefits extend to certain classes of real-valued stiff systems by coupling complex time steps with the Projective Integration method.
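The central stability idea can be checked in a few lines. This is a hedged toy sketch, not the paper's integrators: it applies forward Euler to the single test mode y' = iωy (a Schrödinger-like oscillation), where a purely real step is always unstable but a step tilted into the complex plane can bring the amplification factor back inside the unit circle. The step sizes and tilt are arbitrary illustrative choices.

```python
# Illustration only (not the paper's method): stability of forward Euler on
# the test mode y' = i*omega*y. The per-step amplification factor is
# R(h) = 1 + i*omega*h; stability requires |R| <= 1.
omega = 10.0
h_real = 0.01                  # purely real time step
h_complex = 0.01 * (1 + 0.1j)  # same magnitude, tilted into the complex plane

R_real = 1 + 1j * omega * h_real
R_complex = 1 + 1j * omega * h_complex

print(abs(R_real))     # > 1: real-step Euler is unstable for any real h here
print(abs(R_complex))  # < 1: a complex step restores stability
```

The price of the tilt is artificial damping of the mode, which is why the choice of path through the complex time plane (the subject of the paper) matters.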
Assembly to Quantum Compiler
This paper presents a compiler that translates ARM assembly language instructions into quantum computing operations, demonstrated through implementing the Fibonacci sequence and Grover's algorithm. The goal is to help classical programmers transition more easily to quantum programming by providing familiar instruction sets.
Key Contributions
- Development of assembly-to-quantum compiler mapping ARM instructions to quantum operations
- Open-source implementation bridging classical and quantum programming paradigms
- Demonstration through Fibonacci sequence computation and Grover's algorithm implementation
View Full Abstract
This research presents a novel approach in quantum computing by transforming ARM assembly instructions for use in quantum algorithms. The core achievement is the development of a method to directly map the ARM assembly language, a staple in classical computing, to quantum computing paradigms. The practical application of this methodology is demonstrated through the computation of the Fibonacci sequence. This example serves to validate the approach and underscores its potential in simplifying quantum algorithms. Grover's Algorithm was realized through the use of quantum-specific instructions. These transformations were developed as part of an open-source assembly-to-quantum compiler (github.com/arhaverly/AssemblyToQuantumCompiler). This effort introduces a novel approach to utilizing classical instruction sets in quantum computing and offers insight into potential future developments in the field. The AssemblyToQuantumCompiler streamlines quantum programming and enables computer scientists to transition more easily from classical to quantum computer programming.
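To show the general shape an assembly-to-quantum translation takes, here is a deliberately toy sketch. The mnemonic-to-gate table and the OpenQASM-like output format are hypothetical illustrations and are not the mapping used by the AssemblyToQuantumCompiler repository.

```python
# Toy sketch of instruction translation. TOY_GATES is a hypothetical
# mnemonic -> gate table, NOT the repository's actual mapping.
TOY_GATES = {"MVN": "x", "NOP": "id"}  # e.g. bitwise-NOT -> Pauli-X (toy choice)

def toy_compile(lines):
    """Translate toy ARM-like lines into OpenQASM-like gate statements,
    treating register rN as qubit q[N]."""
    qasm = []
    for line in lines:
        op, *ops = line.split()
        reg = int(ops[0].lstrip("r").rstrip(",")) if ops else 0
        qasm.append(f"{TOY_GATES[op]} q[{reg}];")
    return qasm

print(toy_compile(["MVN r0", "NOP"]))
```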
Hong-Ou-Mandel two-photon x-ray states
This paper demonstrates Hong-Ou-Mandel interference using high-energy x-ray photons from a synchrotron source in a Mach-Zehnder interferometer setup. The researchers successfully created two-photon quantum states in the x-ray regime, opening new possibilities for quantum optics experiments at much higher photon energies than typically used.
Key Contributions
- First demonstration of Hong-Ou-Mandel interference with x-ray photons
- Extension of quantum optics techniques to high-energy photon regime
- Development of x-ray quantum optics as a new experimental domain
View Full Abstract
We have observed Hong-Ou-Mandel interference of high-brightness synchrotron x-rays with a Mach-Zehnder interferometer, yielding two-photon states of potential interest for x-ray quantum optics.
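The interference effect itself is a textbook two-photon calculation (generic quantum optics, not specific to the x-ray experiment): for indistinguishable photons entering the two ports of a beamsplitter, the coincidence amplitude is the permanent of the 2x2 unitary, which vanishes for a balanced splitter.

```python
import numpy as np

# Textbook HOM check: two indistinguishable photons at a 50:50 beamsplitter.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # balanced beamsplitter unitary

perm = U[0, 0] * U[1, 1] + U[0, 1] * U[1, 0]   # permanent of a 2x2 matrix
p_coincidence = abs(perm) ** 2
print(p_coincidence)   # 0.0: the photons bunch and coincidences vanish
```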
Non-Markovian Corrections to Tegmark's Decoherence Bound in Biological Media
This paper challenges Tegmark's famous bound on how quickly quantum coherence is lost in biological systems by showing that when environmental memory effects are included, quantum coherence can persist longer than previously thought. The authors derive new mathematical expressions for decoherence that reduce to Tegmark's result only in the special case of memoryless environments, suggesting quantum effects might survive in structured biological media.
Key Contributions
- Derived non-Markovian corrections to Tegmark's decoherence bound showing universal quadratic short-time behavior
- Demonstrated that decoherence time scales as square root of bath correlation time for Ornstein-Uhlenbeck environments
View Full Abstract
Tegmark's widely cited bound on decoherence times in biological systems is derived under the assumption of a delta-correlated, memoryless environment. In this work we show that any finite environmental memory universally induces quadratic short-time decoherence, invalidating the exponential decay law at early times. For an Ornstein-Uhlenbeck environment we derive a closed non-Markovian expression for the coherence dynamics and obtain a decoherence time that scales as the square root of the bath correlation time. In the singular limit of vanishing bath memory our result reduces exactly to Tegmark's bound. Numerical simulations based on an exact pseudomode mapping confirm the predicted scaling. These findings demonstrate that Tegmark's result applies only in the Markovian limit and does not rule out mesoscopic quantum coherence in structured biological media.
A unified framework for Bell inequalities from continuous-variable contextuality
This paper develops a unified mathematical framework for studying Bell inequalities and quantum non-locality that works with both discrete and continuous quantum variables, as well as hybrid systems combining both types. The framework can find optimal Bell inequalities for any measurement scenario and identifies new quantum states that exhibit Bell non-locality through standard detection methods.
Key Contributions
- Unified framework for Bell inequalities across discrete, continuous, and hybrid variable systems
- Discovery of first continuous-variable non-locality example that cannot be mapped to CHSH Bell inequality
- Identification of new hybrid entangled states enabling near-term Bell inequality violations
View Full Abstract
Although the original EPR paradox was formulated in terms of position and momentum, most studies of these phenomena have focused on measurement scenarios with only a discrete number of possible measurement outcomes. Here, we present a framework for studying non-locality that is agnostic to the dimension of the physical systems involved, allowing us to probe purely continuous-variable, discrete-variable, or hybrid non-locality. Our approach allows us to find the optimal Bell inequality for any given measurement scenario and quantifies the amount of non-locality that is present in measurement statistics. This formalism unifies the existing literature on continuous-variable non-locality and allows us to identify new states in which Bell non-locality can be probed through homodyne detection. Notably, we find the first example of continuous-variable non-locality that cannot be mapped to a CHSH Bell inequality. Moreover, we provide several examples of simple hybrid DV-CV entangled states that could lead to near-term violation of Bell inequalities.
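For context on the CHSH inequality the abstract refers to, a standard textbook check (not part of the paper's continuous-variable framework) shows singlet-state correlations E(a,b) = -cos(a-b) reaching the Tsirelson bound at the optimal measurement angles.

```python
import numpy as np

# Standard CHSH illustration: singlet correlations at optimal angles reach
# |S| = 2*sqrt(2), exceeding the local-hidden-variable bound of 2.
E = lambda a, b: -np.cos(a - b)            # singlet correlation function
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # ~2.828, i.e. 2*sqrt(2)
```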
Quantum information and statistical complexity of hydrogen-like ions in Dunkl-Schrödinger system
This paper derives analytical solutions for hydrogen-like atoms using a modified Schrödinger equation that includes Dunkl reflection operators, then calculates various information-theoretic complexity measures like Shannon entropy and Rényi entropy for these quantum systems.
Key Contributions
- Analytical solutions of Dunkl-Schrödinger equation for Coulomb potential
- First-time calculation of multiple complexity measures (LMC, SRC, GRC, RCR) for hydrogen-like ions in Dunkl framework
View Full Abstract
In this work, we present analytical solutions of the Schrödinger equation for the Coulomb potential in the presence of a Dunkl reflection operator. Expressions are offered for eigenvalues, eigenfunctions and radial densities for the H-isoelectronic series (Z=1-3). The degeneracy in energy in the absence and presence of the reflection is discussed. The standard deviation, Shannon entropy and Rényi entropy in position space have been derived for arbitrary quantum states. Then several important complexity measures, namely the López-Ruiz-Mancini-Calbet (LMC), Shape-Rényi complexity (SRC), Generalized Rényi complexity (GRC) and Rényi complexity ratio (RCR), are considered in the analytical framework. Representative results are given for three one-electron atomic ions in tabular and graphical format. Changes in these measures with respect to parity and the Dunkl parameter are given in detail. Most of these results are offered here for the first time.
Bright Source of High-Dimensional Temporal Entanglement
This paper develops a bright source of high-dimensional time-bin entangled photons for quantum key distribution, optimized for stability and performance in noisy environments. The researchers create a new method to verify the entanglement and demonstrate a noise-resilient QKD protocol that can achieve high key rates with dimensions greater than two.
Key Contributions
- Development of a bright, stable source for high-dimensional time-bin entangled photons
- Novel entanglement certification method using nested Franson interferometry
- Noise-resilient QKD protocol with flexible parameter optimization for high-dimensional systems
View Full Abstract
High-dimensional entanglement is considered to hold great potential for quantum key distribution (QKD) in high-loss and -noise scenarios. To harness its robustness, we construct a source for high-dimensional time-bin entangled photons optimized for high brightness, low complexity, and long-term stability. We certify the generated high-dimensional entanglement with a new witness employing nested Franson interferometry. Finally, we obtain key rates using a novel, noise-resilient QKD protocol. Our flexible evaluation method, centered around discretizations of the time stream, enables the same dataset to be processed while varying parameters such as state dimensionality and time bin length, allowing optimization of performance under given environmental conditions. Our results indicate regions within the accessible parameter space where high key rates per time are achievable for dimensionalities larger than two.
On the Lifshitz formula of dispersion interaction
This paper analyzes the Lifshitz formula for calculating van der Waals dispersion forces between bodies, critiquing the original derivation and comparing different mathematical approaches. The authors use the Van Kampen method to calculate specific dispersion forces and show how the force density behaves at very small distances (less than 1 nm) and for thin plates.
Key Contributions
- Critical analysis showing inconsistencies in Lifshitz's original derivation of the dispersion force formula
- Demonstration that dispersion force density changes from inverse fourth-power distance dependence to distance-independent behavior at sub-nanometer scales
View Full Abstract
The Lifshitz formula and the methods of its derivation in the literature are considered. It is shown that in Lifshitz's original work this formula is given without a consistent derivation; moreover, the approach proposed in that work does not allow one to obtain it. The most general derivations of this formula are the method proposed by Levin and Rytov, Schwinger's variational method, and the method proposed by Van Kampen and co-authors. The Levin-Rytov approach is applicable in principle to bodies of arbitrary shape, provided the diffraction loss fields for electric and magnetic dipoles are determined, while the Van Kampen approach is applicable to any plane-layered structure and is quite simple: it suffices to write down the dispersion equations of the plasmon-polariton structure. The specific dispersion force for a number of structures is calculated using the Van Kampen method. It is shown that at small gaps the force (pressure) density departs from the inverse fourth-power dependence on distance and practically ceases to depend on it at distances below 1 nm. For thin identical plates this density is proportional to the square of their thickness at such distances, but the dependence quickly saturates, and at thicknesses of the order of 10 nm the density practically ceases to depend on thickness.
From coherent to fermionized microwave photons in a superconducting transmission line
This paper proposes using superconducting transmission lines to create strongly interacting microwave photons that behave like fermions. The researchers show that by carefully designing the transmission line parameters, they can convert regular coherent light into a special quantum state called a Tonks-Girardeau gas where photons act like impenetrable particles.
Key Contributions
- Demonstration that superconducting transmission lines can create strongly interacting photon fluids
- Method for adiabatic conversion of coherent fields into fermionized photon states using tapered transmission line parameters
View Full Abstract
We investigate superconducting transmission lines as a novel platform for realizing a quantum fluid of microwave photons in a propagating geometry. We predict that the strong photon-photon interactions provided by the intrinsic nonlinearity of Josephson junctions are sufficient to enter a regime of strongly interacting photons for realistic parameters. A suitable tapering of the transmission line parameters allows for the adiabatic conversion of an incident coherent field into a Tonks-Girardeau gas of fermionized photons close to its ground state. Signatures of the strong correlations are anticipated in the correlation properties of the transmitted light.
A directly observable, Zeeman-insensitive nuclear spin coherence in solution
This paper demonstrates a clock-like nuclear spin transition in a molecular liquid that is immune to magnetic field fluctuations and maintains quantum coherence for 25 seconds at ultralow magnetic fields. The researchers discovered an avoided crossing between specific spin states that creates a frequency minimum insensitive to field perturbations, similar to atomic clock transitions.
Key Contributions
- Discovery of a clock-like nuclear spin avoided crossing in molecular liquids with first-order immunity to magnetic field perturbations
- Demonstration of exceptionally long-lived quantum coherences (25 seconds) in solution at ultralow magnetic fields
View Full Abstract
Clock transitions are well known in atomic and solid-state systems, but are largely unexplored in molecular liquids. Here we demonstrate a clock-like, nuclear-spin avoided crossing in [1--$^{13}$C]-fumarate that supports long-lived and directly observable coherences at ultralow magnetic field: a three-spin transition $|S_0\alpha\rangle \leftrightarrow |T_{+1}\beta\rangle$ near 400 nT exhibits a shallow crossing with a frequency minimum of 2 Hz. The transition is first-order immune to magnetic field perturbations and displays a lifetime of 25 s, around three times the longest single-spin $T_2^*$. Sensitivity to effective pseudo-fields is also demonstrated, including the internal dipolar field of the sample.
Quasi-optimal quantum Markov chain spectral gap estimation
This paper develops a quantum algorithm for estimating the spectral gap of Markov chains that achieves quasi-optimal performance, providing nearly quadratic speedup over classical methods. The algorithm uses quantum singular value transformation and could potentially accelerate Markov chain Monte Carlo sampling methods.
Key Contributions
- Quasi-optimal quantum algorithm for Markov chain spectral gap estimation with nearly quadratic classical advantage
- Development of block-encoding methods for Markov chain transition matrices using quantum singular value transformation
- Explicit block-encoding techniques for two algebraically-defined classes of Markov chains
View Full Abstract
This paper proposes a quantum algorithm for Markov chain spectral gap estimation that is quasi-optimal (i.e., optimal up to a polylogarithmic factor) in the number of vertices for all parameters, and additionally quasi-optimal in the reciprocal of the spectral gap itself, if the permitted relative error is above some critical value. In particular, these results constitute an almost quadratic advantage over the best-possible classical algorithm. Our algorithm also improves on the quantum state of the art, and we contend that this is not just theoretically interesting but also potentially practically impactful in real-world applications: knowing a Markov chain's spectral gap can speed up sampling in Markov chain Monte Carlo. Our approach uses the quantum singular value transformation, and as a result we also develop some theory around block-encoding Markov chain transition matrices, which is potentially of independent interest. In particular, we introduce explicit block-encoding methods for the transition matrices of two algebraically-defined classes of Markov chains.
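As a classical point of reference for the quantity being estimated, the spectral gap of a small reversible chain can be computed directly from its transition matrix. This sketch uses a lazy random walk on a 4-cycle and is purely illustrative; it is not the paper's quantum algorithm.

```python
import numpy as np

# Spectral gap of a small Markov chain: gap = 1 - |second-largest eigenvalue|.
# Chain: lazy random walk on a 4-cycle (stay w.p. 1/2, step left/right w.p. 1/4).
n = 4
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] += 0.25
    P[i, (i - 1) % n] += 0.25

eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]   # |eigenvalues|, descending
gap = 1.0 - eigvals[1]
print(round(gap, 6))   # 0.5 for this chain
```

The gap controls mixing: the chain equilibrates on a timescale of order 1/gap, which is why a fast gap estimator is useful for Markov chain Monte Carlo.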
Anisotropic anomalous Hall effect in distorted kagome GdTi3Bi4
This paper studies GdTi3Bi4, a magnetic material with a unique crystal structure, and discovers that it shows anomalous Hall effect (unusual electrical behavior) only when a magnetic field is applied in certain directions. The researchers explain this directional behavior through quantum mechanical calculations involving spin-orbit coupling and Berry curvature.
Key Contributions
- Discovery of highly anisotropic anomalous Hall effect in kagome magnet GdTi3Bi4 with complete directional selectivity
- Theoretical explanation of the mechanism through Berry curvature redistribution controlled by magnetization direction and orbital mixing
View Full Abstract
Topological kagome magnets offer a rich landscape for exploring the intricate interplay of quantum interactions among geometry, topology, spin, and correlation. GdTi3Bi4 crystallizes in layered Ti-based kagome nets intertwined with zigzag Gd chains along the a axis and orders antiferromagnetically below 15 K. Here, we present the temperature- and field-dependent electrical transport of GdTi3Bi4 in different directions. The material exhibits an anomalous Hall conductivity (AHC) of 410 S cm-1 at 2 K for B parallel to c, which is completely absent for B parallel to a, despite the similar magnetization observed in both orientations. This behavior is quite contradictory, as the anomalous Hall effect (AHE) typically scales with the magnetization. Through first-principles calculations, it is demonstrated that in the presence of time-reversal symmetry breaking by the Gd 4f sublattice and spin-orbit coupling, the magnetization direction controls the orbital mixing in the Ti t2g bands, relocating Berry curvature hot spots and producing the observed orientation-selective AHC. The results establish GdTi3Bi4 as a platform for investigating new avenues of the AHE, such as the directional AHE, and thus shed new light on the intricate coupling between magnetic and electronic structures, paving the way for exploring novel quantum phenomena.
Excitation spectrum of a bright solitary wave in a Bose-Einstein condensate and its connection with the Higgs and the Goldstone modes
This paper studies Bose-Einstein condensates in toroidal traps with attractive interactions, analyzing how localized matter wave 'blobs' form and examining their excitation spectrum. The researchers identify quantum excitations analogous to Goldstone and Higgs modes from particle physics, providing insight into spontaneous symmetry breaking in quantum many-body systems.
Key Contributions
- Analytical and numerical characterization of excitation spectra in bright solitons in BEC systems
- Identification and analysis of Goldstone and Higgs-like modes in quantum many-body systems with spontaneous symmetry breaking
View Full Abstract
We consider the problem of Bose-Einstein condensed atoms confined in a (quasi) one-dimensional toroidal potential. We focus on the case of an effective attractive interaction between the atoms. The formation of a localized blob (i.e., a ``bright'' solitary wave) for sufficiently strong interactions provides an example of spontaneous symmetry breaking. We evaluate analytically and numerically the excitation spectrum for both cases of a homogeneous and of a localized density distribution. We identify in the excitation spectrum the emergence of modes analogous to the Goldstone and the Higgs modes, evaluating various relevant observables and gaining insight into these two fundamental modes of excitation.
Coupling a discrete state to a quasi-continuum: A model quantum mechanical system that interpolates between Rabi oscillations and decay-revival dynamics
This paper develops a theoretical quantum mechanical model where a single discrete state couples to a ladder of equally-spaced states through a Lorentzian profile, creating a unified framework that can reproduce various well-known quantum optical systems like Rabi oscillations and decay-revival dynamics in different parameter limits.
Key Contributions
- Unified theoretical model that interpolates between multiple quantum optical systems in different limits
- Semi-analytical solution method for the eigenvalue problem using transcendental equations
- Demonstration of rich dynamical behaviors including exponential decay, revivals, Rabi oscillations, and damped oscillations
View Full Abstract
We formulate a quantum mechanical system consisting of a single discrete state coupled to an infinite ladder of equally-spaced states, the coupling between the two being given by a Lorentzian profile. Various limits of this system correspond to well-known models from quantum optics, namely, the narrow resonance limit gives the Rabi system, the wide resonance limit gives the Bixon-Jortner system, the wide resonance, true continuum limit gives the Wigner-Weisskopf system, and the fixed resonance, true continuum limit gives a system that is typically studied by methods developed by Fano. We give a semi-analytical solution of the eigenvalue problem by reducing it to a transcendental equation, and demonstrate the aforementioned limiting behaviors. We then study the dynamics of the initial discrete state numerically, and show that it gives a wide range of behaviors in various limiting cases as predicted by our asymptotic theory including exponential decay, revivals, Rabi oscillations, and damped oscillations. The ability of this system to interpolate between such a rich set of behaviors and existing model systems, and the accessibility of a semi-analytical solution, make it a useful model system in quantum optics and related fields.
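The narrow-resonance (Rabi) limit mentioned in the abstract can be checked with a minimal two-level simulation: a discrete state coupled resonantly to a single state undergoes full population transfer after a quarter Rabi period. The coupling strength and units below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

# Two-level Rabi limit: discrete state resonantly coupled to one ladder state.
g = 1.0                                # coupling strength (arbitrary units)
H = np.array([[0.0, g], [g, 0.0]])     # resonant two-level Hamiltonian

t = np.pi / (2 * g)                    # quarter Rabi period
evals, evecs = np.linalg.eigh(H)       # diagonalize to build exp(-i H t)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi = U @ np.array([1.0, 0.0])         # start in the discrete state
print(round(abs(psi[1]) ** 2, 6))      # 1.0: population fully transferred
```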
Thermodynamic Recycling in Quantum Computing: Demonstration Using the Harrow-Hassidim-Lloyd Algorithm and Information Erasure
This paper proposes a method to reuse 'failed' quantum states from quantum algorithms as thermodynamic resources, demonstrating that information can be erased with less heat dissipation than the fundamental Landauer limit. The researchers implemented this approach using the HHL algorithm on IBM quantum hardware and achieved below-Landauer-limit erasure despite hardware noise.
Key Contributions
- Framework for recycling failure branches in quantum algorithms as thermodynamic resources
- Demonstration of information erasure below the Landauer limit using quantum computing
- Experimental implementation on IBM superconducting quantum processor showing practical applicability
View Full Abstract
Branch selection, including postselection, is a standard method for implementing nonunitary transformations in quantum algorithms. Conventionally, states associated with unsuccessful branches are discarded and treated as useless. Here we propose a generic framework that reuses these failure branches as thermodynamic resources. The central element is an athermal bath that is naturally generated during the reset of a failure branch. By coupling this bath to a target system prior to relaxation, useful thermodynamic tasks can be performed, enabling performance beyond conventional thermodynamic limits. As an application, we analyze information erasure and derive the resulting gain analytically. We further demonstrate the framework by implementing the Harrow-Hassidim-Lloyd algorithm on IBM's superconducting quantum processor. Despite substantial noise and errors in current hardware, our method achieves erasure with heat dissipation below the Landauer limit. These results establish a practical connection between quantum computing and quantum thermodynamics and suggest a route toward reducing thermodynamic costs in future large-scale quantum computers.
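For scale, the Landauer limit the experiment is compared against is straightforward to evaluate: the minimum heat dissipated per erased bit is k_B T ln 2. The temperature below is an illustrative choice, not a value from the paper.

```python
import math

# Landauer's limit: minimum heat per erased bit, E = k_B * T * ln(2).
k_B = 1.380649e-23      # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0               # room temperature in K (illustrative)

E_landauer = k_B * T * math.log(2)
print(f"{E_landauer:.3e} J")   # ~2.87e-21 J per bit at room temperature
```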
Einstein's Worries and Actual Physics: Beyond Pilot Waves
This paper critiques standard quantum mechanics and pilot-wave theory, proposing alternative interpretations including stochastic mechanics and category-theoretic semantics to resolve fundamental issues like the measurement problem and nonlocality without invoking mysterious quantum dynamics.
Key Contributions
- Proposes stochastic mechanics as alternative to Bohmian mechanics where wavefunction emerges from diffusion processes
- Develops category-theoretic framework reinterpreting measurement and EPR correlations as contextual truth rather than nonlocal dynamics
View Full Abstract
Tim Maudlin has argued that the standard formulation of quantum mechanics fails to provide a clear ontology and dynamics and that the de Broglie--Bohm pilot-wave theory offers a better completion of the formalism, more in line with Einstein's concerns. I suggest that while Bohmian mechanics improves on textbook quantum theory, it does not go far enough. In particular, it relies on the ``quantum equilibrium hypothesis'' and accepts explicit nonlocality as fundamental. A deeper completion is available in stochastic mechanics, where the wavefunction and the Born rule emerge from an underlying diffusion process, and in a contextual, category-theoretic semantics in which measurement and EPR--Bell correlations are reinterpreted as features of contextual truth rather than of mysterious dynamics. In this framework, the measurement problem and ``spooky action-at-a-distance'' are dissolved rather than solved. Finally, a dynamics based on Rosen's ``classical Schrödinger equation'' provides a continuous passage between quantum and classical regimes, eliminating any sharp Heisenberg cut.
Quantum model for black holes and clocks
This theoretical paper develops a quantum model where two entangled subsystems simulate black hole physics, with one subsystem behaving like a particle near a black hole's event horizon and the other producing Hawking radiation. The authors connect this to quantum clock mechanisms, suggesting black holes can function as perfect timekeepers.
Key Contributions
- Establishes quantum model connecting black hole physics to entangled quantum systems
- Demonstrates how Schwarzschild black holes can function as perfect quantum clocks through Page-Wootters mechanism
View Full Abstract
We consider a stationary quantum system consisting of two non-interacting yet entangled subsystems, $Ξ$ and $Γ$. We identify a quantum theory characterizing $Ξ$ such that, in the quantum-to-classical crossover of the composite system, $Γ$ behaves as a test particle within the gravitational field of a Schwarzschild Black Hole (SBH) near its event horizon. We then show that this same quantum theory naturally provides a representation of $Ξ$ in terms of bosonic modes, whose features match those of the Hawking radiation; this facilitates the establishment of precise relations between the phenomenological parameters of the SBH and the microscopic details of the quantum model for $Ξ$. Finally, we recognize that the conditions used to characterize $Γ$ and $Ξ$ coincide with those required by the Page and Wootters mechanism for identifying an evolving system and an associated clock. This leads us to discuss how the quantum model for $Ξ$ endows the SBH with all the characteristics of a "perfect" clock.
Scalable Certification of Entanglement in Quantum Networks
This paper introduces a new method called sub-symmetric witnesses (SSWs) to efficiently verify genuine multipartite entanglement in quantum networks. The approach overcomes limitations of existing methods by being scalable to large networks and requiring only local measurements, with the optimization formulated as a computationally efficient linear program.
Key Contributions
- Development of sub-symmetric witnesses (SSWs) for scalable entanglement certification in quantum networks
- Connection between SSWs and graph theory cut space enabling practical detection criteria
- Formulation of optimal detection as linear program instead of semidefinite program for computational efficiency
- Experimental implementation requiring only local measurements with resources independent of network size
View Full Abstract
Quantum networks form the backbone of long-distance quantum information processing. Genuine multipartite entanglement (GME) serves as a key indicator of network performance and overall state quality. However, the widely used methods for certifying GME suffer from a major drawback: they either detect only a limited range of states or are applicable only to systems with a small number of parties. To overcome these limitations, we propose a family of sub-symmetric witnesses (SSWs), which are tractable both theoretically and experimentally. Analytically, we establish a connection between SSWs and the cut space of graph theory, enabling several powerful detection criteria tailored to practical quantum networks. Numerically, we show that the optimal detection can be formulated as a linear program, offering a significant efficiency advantage over the semidefinite programs commonly employed in quantum certification. Experimentally, SSWs can be evaluated via local measurements, with resource requirements independent of the local dimension in general, and even independent of the overall network size in many practical networks.
Impact of Boundary Conditions on the Double-Kicked Quantum Rotor
This paper studies how different boundary conditions (open vs periodic vs infinite) affect the behavior of a quantum rotor system that can exhibit topological phases. The researchers find that boundary conditions significantly impact the system's measurable properties, but topological signatures remain detectable through edge states.
Key Contributions
- Demonstrated that Mean Chiral Displacement measurements are sensitive to boundary conditions in topological quantum systems
- Showed that bulk-edge correspondence persists under open boundary conditions, providing reliable signatures of topological phase transitions
View Full Abstract
We study the on-resonance Spin-1/2 Double Kicked Rotor, a periodically driven quantum system that hosts topological phases. Motivated by experimental constraints, we analyze the effects of open and periodic boundary conditions in contrast to the idealized case of infinite momentum space. As a bulk probe for topological invariants, we focus on the Mean Chiral Displacement (MCD) and show that it exhibits a pronounced sensitivity to boundary conditions, which can be traced to the dynamics in momentum space. Under open boundaries, states that would otherwise extend freely become localized at the edges of the finite momentum space, forming quasienergy edge states. While the bulk response measured by the MCD is strongly affected once the evolving wave packet reaches the boundaries, the persistence of these edge states still reflects the bulk-edge correspondence and provides reliable signatures of topological transitions.
Reply to Comment on "Properties and dynamics of generalized squeezed states"
This paper responds to criticism of their previous work on generalized squeezed quantum states, defending their findings that higher-order squeezing exhibits oscillatory dynamics rather than monotonic behavior. The authors acknowledge numerical simulation issues related to even-odd parity dependence but maintain that their oscillatory results are physically valid when proper context is provided.
Key Contributions
- Clarification of even-odd parity dependence in generalized squeezing simulations
- Defense of oscillatory dynamics in higher-order squeezed states against claims of monotonic behavior
View Full Abstract
In our paper [1], our numerical simulations showed that, unlike displacement and conventional squeezing, higher-order squeezing exhibits oscillatory dynamics. Subsequently, Gordillo and Puebla pointed out that simulation results depend on whether the state space in the simulations is even or odd [2]. Using additional derivations, they argued that the oscillatory dynamics is unphysical and that the photon number must increase monotonically as a function of the squeezing parameter r. We agree with the observation of an even-odd parity dependence in the simulations. We independently noticed the same feature in our simulations after the publication of Ref. [1]. This observation led us to perform a more detailed investigation of the numerical simulation and mathematical aspects of the generalized squeezing problem. Our new findings were reported in Ref. [3]. Further analysis was reported in Ref. [4]. Our conclusion is that the generalized squeezing operator is physically not well defined but can be made well defined when combined with additional information about the physical system under study. We demonstrated this point in the case where we include an additional nonlinear interaction term in the Hamiltonian. We disagree with the claim that the photon number must be a monotonically increasing function of r. This claim contradicts the mathematically rigorous results of Ref. [4]. Furthermore, we show that the oscillatory behaviour persists in two closely related, well-behaved models.
Quantum-Compatible Dictionary Learning via Doubly Sparse Models
This paper develops a quantum-compatible approach to dictionary learning (a machine learning technique for finding sparse data representations) by introducing doubly sparse dictionary learning that works within quantum computing constraints. The authors present a hybrid quantum-classical algorithm using randomized Kaczmarz iterations with quantum inner products, focusing on practical implementation rather than theoretical speedups.
Key Contributions
- Identification of structural mismatches between classical dictionary learning and quantum computing constraints
- Development of doubly sparse dictionary learning model that avoids quantum implementation bottlenecks
- Hybrid quantum-classical algorithm with Qiskit-compatible implementation for near-term quantum devices
View Full Abstract
Dictionary learning (DL) is a core tool in signal processing and machine learning for discovering sparse representations of data. In contrast with classical successes, there is currently no practical quantum dictionary learning algorithm. We argue that this absence stems from structural mismatches between classical DL formulations and the operational constraints of quantum computing. We identify the fundamental bottlenecks that prevent efficient quantum realization of classical DL and show how a structurally restricted model, doubly sparse dictionary learning (DSDL), naturally avoids these problems. We present a simple, hybrid quantum-classical algorithm based on projection-based randomized Kaczmarz iterations with Qiskit-compatible quantum inner products. We outline practical considerations and share an open-source implementation at https://github.com/AngshulMajumdar/quantum-dsdl-kaczmarz. The goal is not to claim exponential speedups, but to realign dictionary learning with the realities of near-term quantum devices.
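The core update in the hybrid algorithm is the randomized Kaczmarz row projection, in which the inner product is the step the authors propose to delegate to a quantum subroutine. A minimal classical sketch of that iteration (all names and parameters here are illustrative, not taken from the paper's repository):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent system A x = b by randomized row projections.

    The residual b_i - <a_i, x> is the inner-product step that a
    quantum subroutine would estimate in the hybrid scheme; here it
    is computed classically.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum("ij,ij->i", A, A)
    probs = row_norms / row_norms.sum()   # sample rows with prob ~ ||a_i||^2
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        residual = b[i] - A[i] @ x        # quantum-amenable inner product
        x += (residual / row_norms[i]) * A[i]
    return x

# small consistent system: the iterates converge to the exact solution
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
x = randomized_kaczmarz(A, b)
```

Each update projects the current iterate onto a single hyperplane of the system, so only one row and one inner product are touched per step, which is what makes the scheme compatible with shot-based quantum estimation.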
Direct temperature readout in nonequilibrium quantum thermometry
This paper develops a method to directly measure temperature in quantum systems that are not in thermal equilibrium, using a thermodynamic inference approach that assigns a reference temperature and corrects for nonequilibrium effects. The researchers demonstrate their technique works with qubit-based thermometers and find that quantum coherence can improve measurement precision.
Key Contributions
- Development of direct temperature readout scheme for nonequilibrium quantum systems using maximum entropy principle
- Introduction of corrected dynamical temperature concept with positive semi-definite error bounds for improved accuracy
View Full Abstract
Quantum thermometry aims to measure temperature in nanoscale quantum systems, paralleling classical thermometry. However, temperature is not a quantum observable, and most theoretical studies have therefore concentrated on analyzing fundamental precision limits set by the quantum Fisher information through the quantum Cramér-Rao bound. In contrast, whether a direct temperature readout can be achieved in quantum thermometry remains largely unexplored, particularly under the nonequilibrium conditions prevalent in real-world applications. To address this, we develop a direct temperature readout scheme based on a thermodynamic inference strategy. The scheme integrates two conceptual developments: (i) By applying the maximum entropy principle with the thermometer's mean energy as a constraint, we assign a reference temperature to the nonequilibrium thermometer. We demonstrate that this reference temperature outperforms a commonly used effective temperature defined through equilibrium analogy. (ii) We obtain positive semi-definite error functions that lower-bound the deviation of the reference temperature from the true temperature (in analogy to the quantum Cramér-Rao bound for the mean squared error) and vanish upon thermalization with the sample. Combining the reference temperature with these error functions, we introduce a notion of corrected dynamical temperature which furnishes a postprocessed temperature readout under nonequilibrium conditions. We validate the corrected dynamical temperature in a qubit-based thermometer under a range of nonequilibrium initial states, confirming its capability to estimate the true temperature. Importantly, we find that increasing quantum coherence can enhance the precision of this readout.
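For a two-level thermometer, the maximum-entropy step in (i) amounts to inverting the qubit's thermal occupation at its measured mean energy. A toy numerical sketch (the two-level model, gap omega, and k_B = 1 units are assumptions for illustration; the paper's corrected dynamical temperature additionally applies the error-function postprocessing):

```python
import numpy as np

def reference_temperature(mean_energy, omega=1.0):
    """Max-entropy (Gibbs) temperature matching a qubit's mean energy.

    Inverts <E> = omega * p_e with p_e = 1 / (exp(omega/T) + 1),
    taking the ground-state energy as zero and k_B = 1.
    """
    p_e = mean_energy / omega                  # excited-state population
    return omega / np.log((1.0 - p_e) / p_e)

# sanity check: a thermalized qubit at T = 0.5 returns T_ref = 0.5
T_true = 0.5
p_e = 1.0 / (np.exp(1.0 / T_true) + 1.0)
T_ref = reference_temperature(p_e * 1.0)
```

For a genuinely nonequilibrium state the same inversion still assigns a reference temperature from the mean energy alone, which is exactly why the error functions and postprocessing are needed on top of it.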
Chiroptical effect induced by gravitational waves
This paper proposes a new theoretical effect where gravitational waves can flip the handedness (chirality) of photons by exchanging angular momentum, creating a gravitational analog of chiroptical effects. The authors derive the physics governing this interaction and suggest it could provide new ways to study gravitational waves and test theories of gravity.
Key Contributions
- First theoretical proposal of gravitational analog of chiroptical effect
- Derivation of selection rules for photon-gravitational wave angular momentum exchange
- Novel observational method for probing gravitational wave chiral structure
View Full Abstract
We propose the gravitational analog of the chiroptical effect for the first time, demonstrating that gravitational waves (GWs) can induce a reversal of photon chirality through the exchange of angular momentum, namely the spin-2-gravitation chiroptical effect. By analyzing the interaction between photon spin angular momentum (SAM) and GWs, we derive the selection rules governing this exchange, which are strictly dictated by the spin-1 and spin-2 nature of the electromagnetic and gravitational fields, respectively. We find that the gravitational chiroptical effect reflects the local nature of SAM, which prevents the accumulation of gravitational perturbations over spatial phase windings, and offers a theoretically rigorous tool to probe the chiral structure of GWs. This mechanism provides a novel observational pathway to constrain modified gravity theories, measure the asymmetric properties of compact binaries, and explore parity-violating physics in the early universe.
Strong coupling of virtual negative states in the Kapitza-Dirac effect
This paper investigates how negative energy states in relativistic quantum theory contribute to electron diffraction in the two-photon Kapitza-Dirac effect, showing that these states can dominantly influence the diffraction amplitude even at low field strengths. The authors use both perturbative analytical methods and numerical simulations to demonstrate this coupling between virtual negative states and the quantum dynamics of electrons in standing wave light fields.
Key Contributions
- Demonstrated that negative energy states can dominantly contribute to diffraction amplitudes in the two-photon Kapitza-Dirac effect
- Showed agreement between perturbative analytical solutions and numerical simulations for relativistic electron dynamics in standing wave fields
- Established connection between negative state coupling in single-photon processes and virtual electron-positron pair interactions in quantum field theory
View Full Abstract
Negative states are an intrinsic property of relativistic quantum theory and related to anti-particles in the context of the Dirac sea concept. We show that negative states can dominantly contribute to the diffraction amplitude in the quantum dynamics of the two-photon Kapitza-Dirac effect. We draw our conclusion by investigating solutions from time-dependent perturbation theory, where the perturbative solutions agree with numeric solutions of the relativistic quantum system as well as with the numeric and analytic solutions of the relativistic equations of motion of a classical point-like electron in an external standing-wave light field. While our numeric solutions assume a strong laser field, the analytic solutions indicate that negative state coupling remains dominant for arbitrarily low field amplitudes, where in the single-photon case (Compton scattering) negative state coupling can be mathematically associated with the interaction of a virtual electron-positron pair in the context of a quantized theory in old-fashioned perturbation theory.
Nonadiabatic theory for subcycle ionic dynamics in multielectron tunneling ionization
This paper develops a theoretical framework for understanding how intense laser fields cause multiple electrons to tunnel out of molecules simultaneously, creating coherent quantum states in the resulting ions. The work derives improved mathematical models for this process and demonstrates the theory by applying it to nitrogen and carbon dioxide molecules.
Key Contributions
- Established theoretical equivalence between wave function and density matrix approaches for subcycle ionic dynamics
- Derived accurate subcycle nonadiabatic ionization rate to improve quantitative predictions
- Demonstrated laser-induced ionic coherence in N2 and CO2 molecules with applications to lasing and chemical reaction control
View Full Abstract
Multielectron tunneling ionization creates ionic coherence crucial for lasing and driving electron motion in molecules. While tunneling is well understood as a single active electron process, less emphasis has been placed on theoretical descriptions of bound electrons during tunneling. This work systematically investigates multielectron tunneling ionization based on the strong field approximation, establishing a theoretical foundation and demonstrating the equivalence of wave function and density matrix approaches for subcycle ionic dynamics. An accurate subcycle nonadiabatic ionization rate is also derived and incorporated into the theory to improve its quantitative accuracy. Applying the theory to N2 and CO2, this work showcases how an intense laser field can induce ionic coherence in molecules as observed in previous experiments. These findings encourage future investigations into multielectron tunneling ionization and its applications in lasing and in controlling chemical reactions.
Stochastic phase-space simulation of multimode cat states via the positive-P representation
This paper develops a computational method using positive-P phase-space representation to simulate the behavior of multimode Schrödinger cat states in networks of coupled quantum resonators. The method enables simulation of much larger quantum systems (up to 21 sites) than traditional approaches, though with some computational limitations when measuring certain quantum properties.
Key Contributions
- Development of scalable positive-P phase-space simulation method for multimode cat states in large quantum systems
- Demonstration of transient dynamics simulation for networks up to N=21 sites, significantly larger than direct master equation methods
View Full Abstract
We present a comprehensive study of the transient dynamics of multimode Schrödinger cat states in dissipatively coupled resonator arrays using the positive-P phase-space method. By employing the positive-P representation, we derive the exact stochastic differential equations governing the system's dynamics, enabling the simulation of system sizes significantly larger than those accessible via direct master equation simulation. We demonstrate the utility of this method by simulating transient dynamics for networks up to N=21 sites. Furthermore, we critically examine the method's usefulness and limitations, specifically highlighting the computational instability encountered when estimating the state parity in the systems. Our results provide a pathway for scalable simulations of non-Gaussian states in large open quantum systems.
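The practical appeal of such phase-space methods is that the master equation is replaced by stochastic trajectories that are cheap to generate and average. A generic Euler-Maruyama sketch for a single damped mode with toy Gaussian noise (a stand-in SDE chosen so the exact mean is known analytically, not the paper's positive-P equations for the coupled-resonator network):

```python
import numpy as np

def simulate(alpha0=2.0, kappa=1.0, D=0.1, T=2.0, dt=1e-3,
             n_traj=20000, seed=1):
    """Euler-Maruyama average of d(alpha) = -kappa*alpha dt + sqrt(D) dW."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    alpha = np.full(n_traj, alpha0, dtype=float)
    for _ in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(dt), n_traj)   # Wiener increments
        alpha += -kappa * alpha * dt + np.sqrt(D) * noise
    return alpha.mean()

# the ensemble mean obeys d<alpha>/dt = -kappa <alpha>,
# so <alpha>(T) = alpha0 * exp(-kappa * T)
mean_est = simulate()
```

Observables linear in the phase-space variables average stably like this; as the abstract notes, quantities such as parity are where sampling in the positive-P representation becomes unstable.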
Subspace Selected Variational Quantum Configuration Interaction with a Partial Walsh Series
This paper proposes a new variational quantum eigensolver (VQE) algorithm that uses Walsh operators and subspace selection to find ground-state energies of quantum systems, particularly for electronic structure problems in molecules. The method aims to avoid expensive classical matrix calculations by using quantum circuits to represent configuration interaction wavefunctions.
Key Contributions
- Novel VQE ansatz using diagonal Walsh operators for configuration interaction wavefunctions
- Subspace selection method that bypasses classical matrix diagonalizations for large-scale quantum chemistry applications
View Full Abstract
Estimating the ground-state energy of a quantum system is one of the most promising applications for quantum algorithms. Here we propose a variational quantum eigensolver (VQE) Ansatz for finding ground state configuration interaction (CI) wavefunctions. We map CI for fermions to a quantum circuit using a subspace superposition, then apply diagonal Walsh operators to encode the wavefunction. The algorithm can be used to solve both full CI and selected CI wavefunctions, resulting in exact and near-exact solutions for electronic ground states. Both the subspace selection and wavefunction Ansatz can be applied to any Hamiltonian that can be written in a qubit basis. The algorithm bypasses costly classical matrix diagonalizations, which is advantageous for large-scale applications. We demonstrate results for several molecules using quantum simulators and hardware.
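As in any VQE, the quantum circuit only supplies energy estimates; a classical outer loop minimizes over the Ansatz parameters. A single-qubit statevector toy of that loop (the Ry Ansatz and Hamiltonian coefficients are illustrative; the paper's Walsh-series CI Ansatz is far richer):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * Z + 0.3 * X              # exact ground energy: -sqrt(0.34)

def energy(theta):
    # statevector of Ry(theta)|0>, standing in for a hardware measurement
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# classical outer loop: a brute-force parameter scan instead of a
# gradient-based optimizer, to keep the sketch dependency-free
thetas = np.linspace(0.0, 2 * np.pi, 20001)
e_min = min(energy(t) for t in thetas)
```

The variational principle guarantees every sampled energy lies at or above the true ground energy, so the minimum over parameters converges to it from above.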
Irreversibility of decorrelating processes: an experimental assessment in cavity QED
This paper experimentally studies entropy production and irreversibility in quantum processes by examining how different methods of erasing correlations between an atom and cavity affect thermodynamic quantities. The researchers develop improved data analysis techniques to avoid mathematical divergences when measuring entropy production in quantum systems.
Key Contributions
- Experimental measurement of entropy production in quantum decorrelation processes
- Development of improved density matrix estimation methods that avoid spurious divergences in entropy calculations
View Full Abstract
Entropy production quantifies the amount of irreversibility of a physical process, leading to fundamental bounds for thermodynamic quantities. Particularly in the quantum realm, considerable research has been carried out in the last decades extending entropy production to nonequilibrium processes. We experimentally investigate the entropy production of forward-backward cycles containing different decorrelating processes realized to erase different types of correlations between two interacting systems, from obliterating solely quantum coherence to completely decorrelating local states. We apply these processes to the entanglement of a two-level atom, realized with a circular Rydberg atom, and a light field of a high-quality microwave cavity. The entropy production is computed from the full quantum-state tomography of the system performed at different stages of the interaction-decorrelation sequence. Due to the quantum nature of the atom-cavity system, we find that the standard maximum-likelihood estimation method for the density matrix leads to spurious divergences of the entropy production. We propose and implement an alternative estimator that remedies such divergences. Our work experimentally assesses the irreversibility of non-thermal processes and highlights the care that must be taken in handling experimental data to estimate the entropy production.
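The divergence mechanism is generic: the quantum relative entropy S(rho||sigma) is infinite whenever sigma (for example, a rank-deficient maximum-likelihood estimate) has a zero eigenvalue on the support of rho. A minimal sketch, with mixing in a little of the maximally mixed state as one generic remedy (an illustration of the mechanism, not the paper's specific estimator):

```python
import numpy as np

def rel_entropy(rho, sigma, tol=1e-12):
    """S(rho||sigma) = Tr rho (log rho - log sigma); inf if ill-defined."""
    w_r, v_r = np.linalg.eigh(rho)
    w_s, v_s = np.linalg.eigh(sigma)
    # divergence check: sigma's kernel overlaps rho's support
    for val, vec in zip(w_s, v_s.T):
        if val < tol and (vec.conj() @ rho @ vec).real > tol:
            return np.inf
    log_r = v_r @ np.diag(np.log(np.clip(w_r, tol, None))) @ v_r.conj().T
    log_s = v_s @ np.diag(np.log(np.clip(w_s, tol, None))) @ v_s.conj().T
    return float(np.trace(rho @ (log_r - log_s)).real)

rho = np.array([[0.5, 0.0], [0.0, 0.5]])         # full-rank true state
sigma_mle = np.array([[1.0, 0.0], [0.0, 0.0]])   # rank-deficient estimate
lam = 1e-3
sigma_reg = (1 - lam) * sigma_mle + lam * np.eye(2) / 2  # full-rank remedy

S_div = rel_entropy(rho, sigma_mle)   # diverges
S_reg = rel_entropy(rho, sigma_reg)   # finite
```

Any regularization of this kind trades the divergence for a small bias, which is why the choice of estimator matters when entropy production is reconstructed from tomography data.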
Graphene-assisted resonant transmission and enhanced Goos-Hänchen shift in a frustrated total internal reflection configuration
This paper investigates how graphene can enhance optical transmission and control light beam shifts in a frustrated total internal reflection setup by exciting surface plasmons in the terahertz frequency range. The researchers show that graphene's unique properties enable better transmission with lower losses and controllable beam shifts compared to quantum wells.
Key Contributions
- Demonstration of graphene-enhanced resonant transmission with lower losses compared to quantum wells
- Control of Goos-Hänchen shifts through adjustment of graphene's chemical potential and electron relaxation time
View Full Abstract
Graphene-assisted resonant transmission and enhanced Goos-Hänchen shift are investigated in a two-prism frustrated-total-internal-reflection configuration. Due to the excitation of surface plasmons induced by graphene in the low terahertz frequency range, resonant transmission and anomalous Goos-Hänchen shifts arise in such an optical tunneling configuration. Compared with a quantum well, a graphene sheet with its unique optical properties can enhance the resonant transmission with relatively low loss, and modulate the large negative and positive Goos-Hänchen shifts by adjusting the chemical potential or electron relaxation time. These intriguing phenomena may lead to potential applications in graphene-based electro-optic devices.
Quantum state engineering of spin-orbit coupled ultracold atoms in a Morse potential
This paper develops methods to precisely control both the internal spin states and position of ultracold atoms in Bose-Einstein condensates using engineered laser fields and synthetic magnetic fields. The researchers demonstrate robust protocols for manipulating these quantum states, which could enable applications in precision measurement and quantum information processing.
Key Contributions
- Development of invariant-based inverse engineering protocols for simultaneous control of internal and motional states in spin-orbit coupled BECs
- Demonstration of robust state control methods that work for both interacting and non-interacting condensates with tolerance to experimental noise and errors
View Full Abstract
Achieving full control of a Bose-Einstein condensate can have valuable applications in metrology, quantum information processing, and quantum condensed matter physics. We propose protocols to simultaneously control the internal (related to its pseudospin-1/2) and motional (position-related) states of a spin-orbit-coupled Bose-Einstein condensate confined in a Morse potential. In the presence of synthetic spin-orbit coupling, the state transition of a noninteracting condensate can be implemented by Raman coupling and detuning terms designed by invariant-based inverse engineering. The state transfer may also be driven by tuning the direction of the spin-orbit-coupling field and modulating the magnitude of the effective synthetic magnetic field. The results can be generalized for interacting condensates by changing the time-dependent detuning to compensate for the interaction. We find that a two-level algorithm for the inverse engineering remains numerically accurate even if the entire set of possible states is considered. The proposed approach is robust against the laser-field noise and systematic device-dependent errors.
Counter-diabatic driving for fast spin control in a two-electron double quantum dot
This paper develops faster methods for controlling electron spins in quantum dots using counter-diabatic driving techniques, which allow rapid manipulation of quantum states while avoiding decoherence effects. The researchers design time-dependent electric fields to achieve fast adiabatic spin control and demonstrate robustness against noise.
Key Contributions
- Development of counter-diabatic driving protocol for fast spin manipulation in double quantum dots
- Simplification using Lie algebra transformation to enable single Cartesian electric field control
- Analysis of energy-time trade-offs and noise robustness for practical implementation
View Full Abstract
The techniques of shortcuts to adiabaticity have been proposed to accelerate "slow" adiabatic processes in various quantum systems, with applications in quantum information processing. In this paper, we study counter-diabatic driving for fast adiabatic spin manipulation in a two-electron double quantum dot by designing time-dependent electric fields in the presence of spin-orbit coupling. To simplify implementation and find an alternative shortcut, we further transform the Hamiltonian using a Lie-algebraic transformation, which allows one to use a single Cartesian component of the electric field. In addition, the relation between energy and time is quantified to show the lower bound on the operation time for a given maximum electric-field amplitude. Finally, the fidelity is discussed with respect to noise and systematic errors, demonstrating that the decoherence induced by a stochastic environment can be avoided in sped-up adiabatic control.
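The mechanism can be illustrated in the textbook two-level case, where for H0 = (Delta/2) sigma_z + (Omega/2) sigma_x the counter-diabatic correction is (theta_dot/2) sigma_y with theta = atan2(Omega, Delta). A numerical sketch checking that a fast sweep with the correction still follows the instantaneous ground state (all parameters are illustrative; the paper's double-dot Hamiltonian and electric-field controls are more involved):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega, Delta0, T, dt = 1.0, 10.0, 0.5, 1e-4    # fast (diabatic) sweep

def H0(t):
    delta = Delta0 * (2 * t / T - 1)           # linear sweep -Delta0 -> +Delta0
    return 0.5 * (delta * sz + Omega * sx)

def H_cd(t):
    delta = Delta0 * (2 * t / T - 1)
    theta_dot = -Omega * (2 * Delta0 / T) / (delta**2 + Omega**2)
    return 0.5 * theta_dot * sy                # cancels nonadiabatic coupling

def ground(M):
    w, v = np.linalg.eigh(M)                   # eigenvalues sorted ascending
    return v[:, 0]

psi = ground(H0(0.0))
for n in range(int(T / dt)):
    t_mid = (n + 0.5) * dt
    w, v = np.linalg.eigh(H0(t_mid) + H_cd(t_mid))
    psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))  # exact step

fidelity = abs(ground(H0(T)).conj() @ psi) ** 2
```

Without the sigma_y term the same sweep is strongly diabatic; the correction exactly cancels the nonadiabatic coupling for a two-level system, so the final fidelity deviates from one only through time discretization.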