Quantum Physics Paper Analysis
This page provides AI-powered analysis of new quantum physics papers published on arXiv (quant-ph). Each paper is automatically summarized and assessed for relevance across four key areas:
- CRQC/Y2Q Impact – Direct relevance to cryptographically relevant quantum computing and the quantum threat timeline
- Quantum Computing – Hardware advances, algorithms, error correction, and fault tolerance
- Quantum Sensing – Metrology, magnetometry, and precision measurement advances
- Quantum Networking – QKD, quantum repeaters, and entanglement distribution
Papers flagged as CRQC/Y2Q relevant are highlighted and sorted to the top, making it easy to identify research that could impact cryptographic security timelines. Use the filters to focus on specific categories or search for topics of interest.
Updated automatically as new papers are published. The page shows one week of arXiv publishing (Sunday to Thursday); an archive of previous weeks is at the bottom.
Copy-cup Gates in Tensor Products of Group Algebra Codes
This paper develops quantum error-correcting codes with built-in constant-depth quantum gates (CZ and CCZ) by analyzing when classical group algebra codes can support specific mathematical structures called copy-cup gates. The researchers connect this problem to graph theory and provide concrete conditions for constructing these enhanced quantum codes.
Key Contributions
- Established conditions for classical group algebra codes to support copy-cup gates that enable constant-depth CZ and CCZ quantum gates
- Connected the copy-cup gate problem to perfect matching in graph theory
- Fully characterized conditions for 2- and 3-copy-cup gates in weight 4 group algebra codes
- Demonstrated that bivariate bicycle codes lack pre-orientation for copy-cup gates
View Full Abstract
We determine conditions on classical group algebra codes so that they have pre-orientation for cup products and copy-cup gates. This defines quantum codes that have constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates constructed via tensor products of classical group algebra codes, including hypergraph and balanced products. We show that determining the conditions relies on solving the perfect matching problem in graph theory. Conditions are fully determined for the 2- and 3-copy-cup gates, for group algebra codes up to weight 4, including for codes with odd check weight. These include the bivariate bicycle codes, which we show do not have the pre-orientation for either type of copy-cup gate. We show that abelian weight 4 group algebra codes satisfying the non-associative 3-copy-cup gate necessarily have a code distance of 2, whereas codes that satisfy conditions for the symmetric 3-copy-cup gate can have higher distances, and in fact also satisfy conditions for the 2-copy-cup gate. Finally we find examples of quantum codes from the product of abelian group algebra codes that have inter-code constant-depth $\operatorname{CZ}$ and $\operatorname{CCZ}$ gates.
Hyperbolic and Semi-Hyperbolic Floquet Codes for Photonic Quantum Computing
This paper develops new quantum error correcting codes called hyperbolic and semi-hyperbolic Floquet codes that are specifically designed for photonic quantum computing systems. The codes use only simple weight-2 measurements and are tested under various noise models, showing improved performance compared to surface codes for photon-mediated quantum computing applications.
Key Contributions
- Construction of new hyperbolic Floquet codes from {10,3} and {12,3} tessellations using the LINS algorithm
- Demonstration that the {8,3} codes achieve better fault-tolerant performance than surface codes in photonic quantum computing, with a 2.2x larger fault-tolerant area while encoding 10 logical qubits
View Full Abstract
Tailoring error correcting codes to the structure of the physical noise can reduce the overhead of fault-tolerant quantum computation. Hyperbolic Floquet codes use only weight-2 measurements and can be implemented directly on hardware with native pair measurements. We construct hyperbolic and semi-hyperbolic Floquet codes from $\{8,3\}$, $\{10,3\}$, and $\{12,3\}$ tessellations via the Wythoff kaleidoscopic construction with the Low-Index Normal Subgroups (LINS) algorithm. The $\{10,3\}$ and $\{12,3\}$ families are new to hyperbolic Floquet codes. We evaluate these codes under four noise models: phenomenological, ancilla Entangling Measurement (EM3), Single-step Depolarizing EM3 (SDEM3), and erasure. Under phenomenological noise, specific-logical threshold crossings occur near $p_e \approx 0.3$--$0.5\%$ for $\{8,3\}$ ($k=6$--$56$) and $0.15$--$0.2\%$ for $\{10,3\}$ ($k=12$--$146$). EM3 ancilla noise yields a threshold of ${\sim}1.5\%$ for all three families. SDEM3 is a depolarizing noise model motivated by Majorana tetron architectures; fine-grained codes achieve thresholds of ${\sim}1.0$--$1.2\%$ for all three families. The erasure model captures detected photon loss on spin-optical links; fine-grained codes achieve erasure thresholds of ${\sim}8.5$--$9\%$ for $\{8,3\}$, ${\sim}7$--$8\%$ for $\{10,3\}$, and ${\sim}6.5$--$8\%$ for $\{12,3\}$. Photon loss is the dominant error source in photon-mediated quantum computing. Under the full three-parameter SPOQC-2 noise model, the $\{8,3\}$ codes achieve a 2D fault-tolerant area $2.2\times$ that of the surface code compiled to pair measurements while encoding $k = 10$ logical qubits. In a companion paper, we evaluate the same code families in a distributed setting.
Spin-Cat Qubit with Biased Noise in an Optical Tweezer Array
This paper demonstrates the implementation of spin-cat qubits using ytterbium-173 atoms with nuclear spin 5/2 in optical tweezers, showing how these qubits exhibit biased noise that favors dephasing errors over bit-flip errors. The researchers achieved single-qubit gate operations and characterized the noise properties, demonstrating the feasibility of using these qubits for bias-tailored quantum error correction codes.
Key Contributions
- Demonstration of single-qubit controls for spin-cat qubits in ytterbium-173 with nuclear spin I=5/2
- Characterization of biased noise in spin-cat qubits showing preference for dephasing errors over bit-flip errors
- Achievement of covariant SU(2) rotations and benchmarking of gate fidelities for bias-tailored quantum error correction
View Full Abstract
Bias-tailored quantum error correcting codes (QECCs) offer a higher error threshold than standard QECCs and have the potential to achieve lower logical errors with less space overhead. The spin-cat qubit, encoded in a large nuclear spin-$F$ system, is a promising candidate for bias-tailored QECCs. Yet its feasibility is hindered by the difficulty of performing fast covariant SU(2) rotation with arbitrary rotation angles for nuclear spins and by a lack of noise characterization for gate operations in neutral atom platforms. Here we demonstrate single-qubit controls of ${}^{173}\mathrm{Yb}$ spin-cat qubits with nuclear spin $I=5/2$ in an optical tweezer array. We implement a covariant SU(2) rotation and non-linear rotations by optical beams and achieve an averaged single-Clifford gate fidelity of $0.961_{-5}^{+5}$. The measurement of the coherence time and spin relaxation time shows that the idling error becomes increasingly biased toward dephasing errors as the magnitude of the encoded sublevel $|m_F|$ increases. Furthermore, we benchmark the noise bias of rank-preserving gates on spin-cat qubits, demonstrating a finite bias of $18_{-11}^{+132}$, in contrast to the case of the two-level system in ${}^{171}\mathrm{Yb}$, which shows no bias within the experimental uncertainty. Our work demonstrates the feasibility of spin-cat qubits for realizing bias-tailored QECCs, paving the way for achieving hardware-efficient quantum error correction.
A matching decoder for bivariate bicycle codes
This paper develops a new decoding algorithm for bivariate bicycle quantum error-correcting codes using minimum-weight perfect matching, introducing a 'cylinder trick' method that leverages code symmetries to efficiently find error corrections.
Key Contributions
- Development of matching-based decoder for bivariate bicycle codes using the 'cylinder trick' method
- Demonstration of improved decoder performance through augmentation with belief propagation and over-matching strategies
View Full Abstract
The discovery of new quantum error-correcting codes that encode several logical qubits into relatively few physical qubits motivates the development of efficient and accurate methods of decoding these systems. Here, we adopt the minimum-weight perfect matching algorithm, a subroutine invaluable to decoding topological codes, to decode bivariate bicycle codes. Using the equivalence of bivariate bicycle codes to copies of the toric code, we propose a method we call the 'cylinder trick' to rapidly find a correction using matching on code symmetries. We benchmark our decoder on the gross code family, cyclic hypergraph-product codes, generalized toric codes, and recently proposed directional codes, demonstrating the general applicability of our protocol. For a subset of these codes, we find that our decoder can be significantly improved by augmenting matching with strategies including belief propagation and 'over-matching', thus achieving performance competitive with state-of-the-art approaches.
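The matching idea is easiest to see on the simplest stabilizer code, the 1D repetition code, where minimum-weight decoding (the optimization that MWPM solves efficiently by pairing syndrome defects in a graph) reduces to a small search. The sketch below is our own toy illustration, not the paper's decoder:

```python
from itertools import product

# Toy minimum-weight decoder for the length-7 bit-flip repetition code.
# MWPM computes the same optimum efficiently via graph matching on larger
# codes; here the code is small enough to search exhaustively.
n = 7

def syndrome(err):
    # Parity checks between neighbouring bits (the Z-type stabilizers).
    return tuple(err[i] ^ err[i + 1] for i in range(n - 1))

def decode(syn):
    # Return the lowest-weight error pattern consistent with the syndrome.
    best = None
    for err in product([0, 1], repeat=n):
        if syndrome(err) == syn and (best is None or sum(err) < sum(best)):
            best = err
    return best

true_err = (0, 1, 1, 0, 0, 0, 1)               # weight-3 error, < distance/2
assert decode(syndrome(true_err)) == true_err  # recovered exactly
```

Any error of weight below half the distance is the unique minimum-weight pattern for its syndrome, which is why the brute-force search (and MWPM on larger codes) recovers it.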
The Road to Useful Quantum Computers
This paper provides a comprehensive overview of the current state of quantum computing development, examining the gap between existing quantum computer capabilities and the goal of achieving 'quantum utility' where quantum computers solve practically important problems. The authors analyze the key scientific and engineering challenges that must be overcome to build useful quantum computers.
Key Contributions
- Comprehensive assessment of current quantum computing capabilities versus requirements for quantum utility
- Identification and analysis of key scientific and engineering challenges blocking progress toward useful quantum computers
- Framework for tracking progress from current prototypes to quantum utility applications
View Full Abstract
Building a useful quantum computer is a grand science and engineering challenge, currently pursued intensely by teams around the world. In the 1980s, Richard Feynman and Yuri Manin observed independently that computers based on quantum mechanics might enable better simulations of quantum phenomena. Their vision remained an intellectual curiosity until Peter Shor published his famous quantum algorithm for integer factoring, and shortly thereafter a proof that errors in quantum computations can be corrected. Since then, quantum computing R&D has progressed rapidly, from small-scale experiments in university physics laboratories to well-funded industrial efforts and prototypes. Hype notwithstanding, quantum computers have yet to solve scientifically or practically important problems -- a target often called quantum utility. In this article, we describe the capabilities of contemporary quantum computers, compare them to the requirements of quantum utility, and illustrate how to track progress from today to utility. We highlight key science and engineering challenges on the road to quantum utility, touching on relevant aspects of our own research.
Computing with many encoded logical qubits beyond break-even
This paper demonstrates quantum error correction codes that encode many logical qubits and outperform their unencoded counterparts, using up to 94 logical qubits on a 98-qubit trapped-ion quantum computer. The researchers achieved 'beyond break-even' performance, where error correction improves rather than degrades computation quality.
Key Contributions
- First demonstration of beyond break-even performance with high-rate quantum error correction codes using up to 94 logical qubits
- Implementation of fault-tolerant operations including state preparation, measurement, and quantum simulation on the 98-qubit Quantinuum Helios processor
- Development of new gadgets for encoded operations in the iceberg QED and two-level concatenated iceberg QEC codes
View Full Abstract
High-rate quantum error correcting (QEC) codes encode many logical qubits in a given number of physical qubits, making them promising candidates for quantum computation. Implementing high-rate codes at a scale that both frustrates classical computing and improves performance by encoding requires both high fidelity gates and long-range qubit connectivity -- both of which are offered by trapped-ion quantum computers. Here, we demonstrate computations that outperform their unencoded counterparts in the high-rate $[[ k+2,\, k,\, 2 ]]$ iceberg quantum error detecting (QED) and $[[ (k_2 + 2)(k_1 + 2),\, k_2k_1,\, 4 ]]$ two-level concatenated iceberg QEC codes, using the 98-qubit Quantinuum Helios trapped-ion quantum processor. Utilizing new gadgets for encoded operations, we realize this "beyond break-even" performance with reasonable postselection rates across a range of fault-tolerant (FT) and partially-fault-tolerant (pFT) component and application benchmarks with between $48$ and $94$ logical qubits. These benchmarks include FT state preparation and measurement, QEC cycle benchmarking, logical gate benchmarking, GHZ state preparation, and a pFT quantum simulation of the three-dimensional $XY$ model of quantum magnetism. Additionally, we illustrate that postselection rates can be suppressed by increasing the code distance via concatenation. Our results represent state-of-the-art logical component and state fidelities and provide evidence that high-rate QED/QEC codes are viable on contemporary quantum computers for near-term beyond-classical-scale computation.
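The role of the "+2" in the iceberg parameters can be checked directly. As commonly presented for the $[[n, n-2, 2]]$ family, the stabilizer group is generated by the all-X and all-Z operators on $n = k + 2$ qubits, and in the binary symplectic representation these commute exactly when $n$ is even. A quick check of ours (not the authors' code):

```python
# A Pauli operator is represented by a pair of bit vectors (x, z); two
# Paulis commute iff x1.z2 + x2.z1 = 0 (mod 2). For the iceberg code's
# generators X^n and Z^n this holds iff n is even, forcing k = n - 2 even
# (the paper's demonstrations use k between 48 and 94).
def commute(x1, z1, x2, z2):
    dot = sum(a * b for a, b in zip(x1, z2)) + sum(a * b for a, b in zip(x2, z1))
    return dot % 2 == 0

for k in range(1, 7):
    n = k + 2
    all_x = ([1] * n, [0] * n)  # X on every qubit
    all_z = ([0] * n, [1] * n)  # Z on every qubit
    print(f"k={k}, n={n}: stabilizers commute = {commute(*all_x, *all_z)}")
    # True exactly when n is even
```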
Controlled jump in the Clifford hierarchy
This paper develops a systematic method for reaching higher levels of the Clifford hierarchy by coherently controlling Clifford operations, establishing precise rules for how far a controlled gate advances up the hierarchy. The authors prove resource bounds showing that accessing very high hierarchy levels requires exponentially many qubits, and demonstrate applications to preparing catalyst states for fractional phase gates.
Key Contributions
- Proof of controlled-jump rule showing controlled Clifford gates CU reach hierarchy level m+2 where m is the Pauli periodicity of U
- Tight upper bound on Pauli periodicity showing exponential qubit requirements for high hierarchy levels
- Construction of explicit Clifford families achieving asymptotically optimal hierarchy jumps
- Protocol for preparing logical catalyst states enabling fractional Z gates via phase kickback
View Full Abstract
We develop a simple and systematic route to higher levels of the qubit Clifford hierarchy by coherently controlling Clifford operations. Our approach is based on Pauli periodicity, defined for a Clifford unitary $U$ as the smallest integer $m\ge 1$ such that $U^{2^{m}}$ is a Pauli operator up to phase. We prove a sharp controlled-jump rule showing that the controlled gate $CU$ lies strictly in level $m+2$ of the hierarchy, and equivalently that $CU$ lies in level $k$ if $U^{2^{k-2}}$ is Pauli while no smaller positive power of $U$ is Pauli. We further quantify the resources required to realize large level jumps in the Clifford hierarchy by proving an essentially tight upper bound on Pauli periodicity as a function of the number of qubits, which implies that accessing high hierarchy levels through controlled Cliffords requires a number of target qubits that grows exponentially with the desired level. We complement this limitation with explicit infinite families of Pauli-periodic Cliffords whose controlled versions achieve asymptotically optimal jumps. As an application, we propose a protocol for preparing logical catalyst states that enable logical $Z^{1/2^k}$ phase gates via phase kickback from a single jumped Clifford.
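The controlled-jump rule can be sanity-checked on a single qubit: for the phase gate $S$, $S^2 = Z$ is Pauli, so the Pauli periodicity is $m = 1$ and the rule places $CS$ in level $m + 2 = 3$, matching the standard fact that $CS$ is a third-level gate. A small numerical check of ours, using only the rule as stated in the abstract:

```python
import numpy as np

# Compute the Pauli periodicity of the Clifford gate S = diag(1, i):
# the smallest m >= 1 such that S^(2^m) is a Pauli operator up to phase.
S = np.diag([1, 1j])
paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),       # X
          np.array([[0, -1j], [1j, 0]]),    # Y
          np.diag([1, -1])]                 # Z

def equals_up_to_phase(A, B):
    """True if A = e^{i phi} B for some global phase phi."""
    idx = np.unravel_index(np.argmax(np.abs(B)), B.shape)
    phase = A[idx] / B[idx]
    return np.allclose(A, phase * B)

m = 1
while not any(equals_up_to_phase(np.linalg.matrix_power(S, 2**m), P)
              for P in paulis):
    m += 1
print("Pauli periodicity of S:", m)          # 1, since S^2 = Z
print("CS lies in hierarchy level:", m + 2)  # 3, per the controlled-jump rule
```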
Beyond Single-Shot Fidelity: Chernoff-Based Throughput Optimization in Superconducting Qubit Readout
This paper develops a new approach to optimizing qubit readout in superconducting quantum computers by minimizing the total time needed to certify quantum states, rather than maximizing single-shot measurement accuracy. The researchers show that integration times longer than the single-shot-fidelity optimum can reduce overall certification time by 9-11%.
Key Contributions
- Formulated information-theoretic framework treating qubit readout as a stochastic communication channel with Chernoff information analysis
- Demonstrated that throughput-optimal integration times are longer than fidelity-optimal times, achieving 9-11% speedup in state certification
View Full Abstract
Single-shot fidelity is the standard benchmark for superconducting qubit readout, but it does not directly minimize the total wall-clock time required to certify a quantum state. We formulate an information-theoretic description of dispersive readout that treats the measurement record as a stochastic communication channel and compute the classical Chernoff information governing the multi-shot error exponent using a trajectory model that incorporates T1 relaxation with full cavity memory. We find a consistent separation between the integration time that maximizes single-shot fidelity and the time that minimizes total certification time. For representative transmon parameters and hardware overheads, the throughput-optimal integration window is longer than the fidelity-optimal one, yielding certification speedups of approximately 9-11%, with the gain saturating near 1.13x in the high-readout-power and high-overhead regime. Comparing the extracted classical information to the Gaussian Chernoff limit defines an information-extraction efficiency metric and shows that typical dispersive schemes are limited to about 45% capture at short integration times by detection efficiency, decreasing to approximately 12% at the throughput-optimal integration time of approximately 1.22 μs due to T1-induced trajectory smearing. This formulation connects readout calibration directly to the operational objective of minimizing certification time in high-throughput superconducting processors.
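The trade-off the paper studies can be illustrated with a deliberately crude toy model of ours (all parameters hypothetical): a Gaussian readout record whose separation improves as the square root of the integration time, a T1-style relaxation penalty that worsens with it, and a fixed per-shot overhead. Certification time is shots times (integration + overhead), with the shot count set by the Chernoff exponent of the per-shot error:

```python
import math

# Toy model, not the paper's trajectory model. Units are microseconds.
D, T1, T_OH, DELTA = 2.0, 50.0, 1.0, 1e-9  # separation rate, T1, overhead, target error

def per_shot_error(tau):
    # Gaussian discrimination error improves with sqrt(tau); relaxation hurts.
    gauss = 0.5 * math.erfc(D * math.sqrt(tau) / (2 * math.sqrt(2)))
    relax = 0.5 * (1 - math.exp(-tau / T1))
    return min(gauss + relax, 0.499)

def certification_time(tau):
    eps = per_shot_error(tau)
    # Chernoff exponent of a binary symmetric channel with crossover eps:
    # the multi-shot error decays as exp(-shots * chernoff).
    chernoff = -math.log(2 * math.sqrt(eps * (1 - eps)))
    shots = math.log(1 / DELTA) / chernoff
    return shots * (tau + T_OH)

taus = [0.1 * i for i in range(1, 101)]
tau_opt = min(taus, key=certification_time)
print(f"throughput-optimal integration time ~ {tau_opt:.1f} us")
```

The minimizer of total time need not coincide with the minimizer of per-shot error, which is the qualitative point of the paper; the toy numbers here carry no quantitative meaning.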
Analysis of the action of conventional trapped-ion entangling gates in qudit space
This paper analyzes how conventional quantum gates designed for qubits (2-level systems) behave when applied to qudits (multi-level quantum systems) in trapped-ion quantum computers. The researchers study unwanted phase accumulations that occur in higher-dimensional systems and propose methods to compensate for these phases to make qudit-based quantum processors more practical.
Key Contributions
- Theoretical analysis of phase accumulation in Mølmer-Sørensen and Light-shift gates when operating on qudits
- Methods to actively compensate for unwanted phases and enhance gate robustness in multi-level quantum systems
View Full Abstract
Qudits, or multi-level quantum information carriers, present a promising path for scaling quantum computers. However, their use introduces increased complexity in quantum logic, necessitating careful control of relative phases between different qudit levels. In trapped-ion systems, entangling operations accumulate phases on specific levels that are no longer global, unlike in qubit architectures. Furthermore, the structure of multi-level gates becomes increasingly intricate with higher-dimensional Hilbert spaces. This work explores the theory of these additional entangling and non-entangling phases, accumulated in Mølmer--Sørensen and Light-shift gates. We propose methods to actively compensate for these phases, enhance gate robustness against parameter fluctuations, and simplify native gates for more efficient circuit decomposition. Our results pave the way toward the practical and scalable implementation of qudit-based quantum processors.
Tuning Wave-Particle Duality of Quantum Light by Generalized Photon Subtraction
This paper demonstrates a technique called generalized photon subtraction to create quantum light states that can be tuned between wave-like and particle-like properties. The researchers show this method can efficiently generate special quantum states needed for fault-tolerant optical quantum computing, particularly addressing bottlenecks in creating GKP qubits.
Key Contributions
- Experimental demonstration of tunable wave-particle duality control using generalized photon subtraction
- High-rate generation of intermediate quantum states optimized for fault-tolerant quantum computing thresholds
- Pathway to efficient GKP qubit generation addressing bottlenecks in optical quantum computing
View Full Abstract
Wave--particle duality is a hallmark of quantum mechanics. For bosonic systems, there exists a continuum of intermediate states bridging wave-like Schrödinger cat states and particle-like Fock states. Such states have recently been recognized as valuable resources for enhancing fault-tolerant quantum computation (FTQC) with propagating light. Here we experimentally demonstrate tunable generation of these intermediate states by employing generalized photon subtraction (GPS). By detecting up to three photons from squeezed-light sources with a photon-number-resolving detector, we continuously control the balance between wave- and particle-like features. This approach allows us to construct a spectral family of quantum states with high generation rates, optimized according to the required fault-tolerance threshold. Our results establish GPS as a versatile toolbox for tailoring non-Gaussian resources, opening a pathway to efficient Gottesman--Kitaev--Preskill (GKP) qubit generation and addressing a central bottleneck in optical quantum computing.
Optimized ancillary drive for fast Rydberg entangling gates
This paper develops a method to speed up two-qubit quantum gates in neutral atom systems by using an optimized ancillary laser drive that enhances the coupling between ground and Rydberg states. The technique reduces gate execution time by over 30% while maintaining high fidelity above 99.54% and reducing laser power requirements.
Key Contributions
- Development of optimized ancillary drive technique to enhance two-photon Rabi frequency in Rydberg atom systems
- Demonstration of >30% reduction in CZ gate execution time while maintaining >99.54% fidelity with reduced laser power requirements
View Full Abstract
Reaching fast and robust two-qubit gates with low infidelities has been an outstanding challenge for the long-term goal of useful quantum computers. Typically, optimizing the pulse shapes can minimize the gate infidelity and improve its robustness to certain types of errors; yet it remains incapable of speeding up the gate execution time, which is fundamentally restricted by the attainable Rabi frequency in a realistic setup. In this work, we develop a fast implementation of two-qubit CZ gates using an optimized ancillary drive to enhance the two-photon Rabi frequency between the ground and Rydberg states. This ancillary drive can work in an error-robustness framework without increasing the original gate infidelity in the absence of the drive. Considering the experimentally feasible parameters for $^{87}$Rb atoms, we demonstrate that the execution time required for such CZ gates can be shortened by more than 30$\%$ as compared to standard two-photon protocols, raising the gate fidelity above 0.9954 while taking into account all relevant error sources. Our results reduce the high-power laser requirement and unlock the potential toward fast, high-fidelity quantum operations for large-scale quantum computation with neutral atoms.
Correcting coherent quantum errors by going with the flow
This paper shows that coherent quantum errors (correlated unitary over-rotations across qubits) can be effectively managed in quantum error correction by using 'passive' correction strategies that track errors virtually rather than physically correcting them immediately. The authors demonstrate that this approach prevents coherent errors from compounding over multiple correction cycles, achieving performance comparable to simpler uncorrelated error models.
Key Contributions
- Demonstrates that passive error correction with virtual Pauli frame updates prevents coherent errors from compounding in quantum error correction
- Shows through theory and simulation that correlated Hamiltonian noise can achieve similar performance to uncorrelated Pauli noise when using proper correction strategies
View Full Abstract
The performance of a given quantum error correction (QEC) code depends upon the noise model that is assumed. Independent Pauli noise, applied after each quantum operation, is a simplistic noise model that is easy to simulate and understand in the context of stabilizer codes. Although such a noise model is artificial, it is equivalent to independent, random, unbiased qubit rotations. What about spatially or temporally correlated qubit rotations? Such a noise model is applicable to global operations (e.g., NMR or ESR), common control sources (e.g., lasers), or slow drift (e.g., charge or magnetic noise) in various qubit technologies. In the worst case, such errors can combine constructively and result in a post-correction failure rate that increases with the number of error correction cycles. However, we show that this worst case does not generally arise unless taking active corrective actions while performing QEC. That is, by employing virtual Pauli frame updates ("passive" error correction) rather than physical corrections ("active" error correction), coherent errors do not compound appreciably. Starting in a random Pauli frame is also advantageous. In fact, through perturbation theory arguments and supporting numerical simulations, we show that the logical qubit performance beyond distance 3 for correlated single-qubit Hamiltonian noise models (i.e., global errant qubit rotations), when employing these "lazy" strategies, essentially matches the performance of Pauli noise model with the same process fidelity (fidelity after one application). In a more general circuit model of noise, correlations may add constructively within syndrome extraction rounds but Pauli frame randomization from passive error correction mitigates this effect across multiple rounds.
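The "passive" strategy's core bookkeeping is a classical Pauli frame: rather than applying a physical correction pulse (which a miscalibrated control line could turn into new coherent error), the decoder records the pending Pauli and reinterprets later measurement outcomes through it. A minimal sketch of ours, not the authors' code:

```python
# Classical Pauli frame tracker: corrections live in software, never as gates.
class PauliFrame:
    def __init__(self, n_qubits):
        self.x = [0] * n_qubits  # pending X corrections (flip Z-basis outcomes)
        self.z = [0] * n_qubits  # pending Z corrections (flip X-basis outcomes)

    def record(self, pauli, qubit):
        # Fold a decoder-issued correction into the frame (Y = X and Z).
        if pauli in ("X", "Y"):
            self.x[qubit] ^= 1
        if pauli in ("Z", "Y"):
            self.z[qubit] ^= 1

    def interpret_z_measurement(self, qubit, raw_outcome):
        # A pending X flips the meaning of a Z-basis measurement result.
        return raw_outcome ^ self.x[qubit]

frame = PauliFrame(3)
frame.record("X", 0)  # decoder: qubit 0 needs an X correction
frame.record("X", 0)  # a second X cancels the first entirely in software
frame.record("Z", 1)  # Z corrections do not affect Z-basis readout
print(frame.interpret_z_measurement(0, 1))  # 1: the two X's cancelled
print(frame.interpret_z_measurement(1, 1))  # 1: unaffected by pending Z
```

Because frame updates are exact classical XORs, repeated corrections compose perfectly, which is one intuition for why the paper finds coherent errors do not compound under passive correction.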
Entanglement-Induced Resilience of Quantum Dynamics
This paper demonstrates that quantum entanglement naturally protects quantum systems from errors and noise without requiring additional error correction schemes. The researchers show that as entanglement grows in quantum many-body systems, it automatically confines and suppresses the impact of local perturbations and errors.
Key Contributions
- Theoretical proof that entanglement entropy growth provides intrinsic protection against quantum errors
- Demonstration of a passive error suppression mechanism that requires no additional qubits or control overhead
- Quantitative correlation between entanglement entropy and degree of error protection in quantum dynamics
View Full Abstract
Quantum many-body devices suffer from imperfections that destabilize dynamics and limit scalability. We show that the dynamical growth of entanglement can intrinsically protect generic quantum dynamics against coherent and perturbative noise. Through rigorous theoretical analysis of general quantum dynamics and numerical simulations of spin chains and fermionic lattices, we prove that entanglement-entropy growth confines the influence of local Hamiltonian perturbations, thereby suppressing dynamical errors. The degree of protection correlates quantitatively with the entanglement entropy of subsystems on which the perturbations act, and applies broadly to both analog quantum simulators and real-time control protocols. This entanglement-induced resilience is conceptually distinct from quantum error correction or dynamical decoupling: it passively leverages native many-body correlations without additional qubits, measurements, or control overhead. Our results reveal a generic mechanism linking entanglement growth to dynamical stability and provide practical guidelines for designing noise-resilient quantum devices.
Error correction with brickwork Clifford circuits
This paper proves that random 1D Clifford brickwork circuits can form good quantum error correction codes with logarithmic depth, providing both approximate and exact error correction bounds. The research uses statistical mechanics techniques to analyze these random quantum circuits and establishes mathematical limits on the circuit depth needed for effective error correction.
Key Contributions
- Proof that random 1D Clifford brickwork circuits form good approximate quantum error correction codes in logarithmic depth
- Matching upper and lower bounds for the circuit depth required for exact error correction in random 1D Clifford brickwork circuits
View Full Abstract
We prove that random 1D Clifford brickwork circuits form (in expectation) good approximate quantum error correction codes in logarithmic depth. Our proof makes use of the statistical mechanics techniques for random circuits developed by Dalzell et al. [PRX Quantum 3, 010333], adapted extensively to our own purpose. We also consider exact error correction, where we give matching upper and lower bounds for the required depth in which random 1D Clifford brickwork circuits become error correcting.
Toward speedup without quantum coherent access
This paper proposes a hybrid classical-quantum algorithm that combines classical preprocessing of matrix data with quantum circuits to solve various computational problems. The approach aims to achieve quantum speedups for tasks like linear equation solving and data fitting while avoiding the strong input assumptions that limit many existing quantum algorithms.
Key Contributions
- Development of hybrid classical-quantum algorithm with logarithmic complexity in input dimension
- Demonstration of exponential speedups for certain matrices compared to existing methods
- End-to-end quantum data fitting application with practical prediction capabilities
- Block encoding technique that avoids strong input assumptions of previous quantum algorithms
View Full Abstract
Along with the development of quantum technology, finding useful applications of quantum computers has been a central pursuit. Although various quantum algorithms have been developed, many of them require strong input assumptions that are demanding on hardware. In particular, recent advances on dequantization have revealed that the quantum advantage is often a mere artifact of strong input assumptions. In this work, we propose a variant of these algorithms, leveraging both classical and quantum resources. Provided the classical knowledge (the entries) of the matrix/vector of interest, a classical procedure is used to pre-process this information. Then they are fed into a quantum circuit which is shown to be a block encoding of the matrix of interest. From this block-encoding, we show how to use it to tackle a wide range of problems, including principal component analysis, linear equation solving, Hamiltonian simulation, preparing ground state, and data fitting. We also analyze our protocol, showing that both the classical and quantum procedure can achieve logarithmic complexity in the input dimension, thus implying its potential for near term realization. We then discuss several implications and corollaries of our result. First, our results suggest there are certain matrices/Hamiltonians where our method can provide exponential improvement compared to the existing ones with respect to the sparsity. Regarding dense linear systems, our method achieves exponential speed-up with respect to the inverse of error tolerance, compared to the best previously known quantum algorithm for dense systems. Last, and most importantly, regarding quantum data fitting, we show how the output of our quantum algorithms can be leveraged to predict unseen data. Thus, it provides an end-to-end application, which has been an open aspect of the previous quantum data fitting algorithm.
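The central primitive here, a block encoding, is a unitary whose top-left block equals the target matrix. A minimal worked example of ours (not the paper's construction): for a Hermitian $A$ with $\|A\| \le 1$, the unitary $U = \begin{pmatrix} A & B \\ B & -A \end{pmatrix}$ with $B = \sqrt{I - A^2}$ applies $A$ to $|\psi\rangle$ in the branch where the ancilla stays $|0\rangle$:

```python
import numpy as np

# Embed a Hermitian matrix A (spectral norm < 1) as the top-left block of a
# unitary. B = sqrt(I - A^2) is built from A's eigendecomposition, so A and B
# commute and the 2x2 block structure is exactly unitary.
A = 0.5 * np.array([[1.0, 0.5],
                    [0.5, -1.0]])  # example Hermitian target, ||A|| < 1
w, v = np.linalg.eigh(A)
B = v @ np.diag(np.sqrt(1 - w**2)) @ v.T
U = np.block([[A, B],
              [B, -A]])

assert np.allclose(U @ U.T, np.eye(4))  # U is unitary (real symmetric case)
assert np.allclose(U[:2, :2], A)        # top-left block encodes A exactly
```

The paper's contribution is producing such an encoding from classically pre-processed matrix entries; this sketch only illustrates what the object is.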
Qudit stabiliser codes for $\mathbb{Z}_N$ lattice gauge theories with matter
This paper shows how lattice gauge theories with matter can be reformulated as quantum error correcting codes using qudits (quantum systems with N levels instead of just 2). The authors demonstrate that quantum error correction can reveal hidden mathematical relationships between different physical theories and show how to perform fault-tolerant quantum computations using these qudit codes.
Key Contributions
- Extended quantum error correction from qubits to qudits for lattice gauge theories with matter
- Demonstrated logical duality between different bosonic models through error correction mapping
- Showed implementation of universal fault-tolerant gates via state injection between compatible qudit codes
View Full Abstract
In this work we extend the connection between Quantum Error Correction (QEC) and Lattice Gauge Theories (LGTs) by showing that a $\mathbb{Z}_N$ gauge theory with prime dimension $N$ coupled to dynamical matter can be expressed as a qudit stabilizer code. Using the stabilizer formalism we show how to formulate an exact mapping of the encoded $\mathbb{Z}_N$ gauge theory onto two different bosonic models, uncovering a logical duality generated by error correction itself. From this perspective, quantum error correction provides a unifying language to expose dual descriptions of lattice gauge theories. In addition, we generalize earlier $\mathbb{Z}_2$ constructions on qubits to $\mathbb{Z}_N$ on $N$-level qudits and demonstrate how universal fault-tolerant gates can be implemented via state injection between compatible qudit codes.
Distilling Magic States in the Bicycle Architecture
This paper develops improved magic state distillation factories using Bivariate Bicycle (BB) codes that can operate within a single code block, achieving better space-time efficiency and lower error rates compared to conventional surface code approaches that require multiple code blocks and lattice surgery.
Key Contributions
- Development of magic state distillation factories on Bivariate Bicycle codes that execute within single code blocks
- Joint optimization framework for logical qubit mapping, gate scheduling, measurement nativization, and protocol compression via qubit recycling
- Demonstration of improved space-time volume and lower target error rates compared to leading surface code distillation factories
View Full Abstract
Magic state distillation (MSD) is considered to be one of the promising methods for supplying the non-Clifford resources required to achieve universal fault tolerance. Conventional MSD protocols implemented in surface codes often require multiple code blocks and lattice surgery rounds, resulting in substantial qubit overhead, especially at low target error rates. In this work, we present practical magic state distillation factories on Bivariate Bicycle (BB) codes that execute Pauli-measurement-based Clifford circuits inside a single BB code block. We formulate distillation circuit design as a joint optimization of logical qubit mapping, gate scheduling, measurement nativization, and protocol compression via qubit recycling. Based on detailed resource analysis and simulations, our BB factories have space-time volume comparable to that of leading distillation factories while delivering lower target error at a smaller qubit footprint, and are particularly compelling as second-round distillers following magic state cultivations.
A Unified Error Correction Code for Universal Quantum Computing with Identical Particles
This paper proposes a new quantum error correction approach for quantum computers built with identical particle qubits, showing that these systems interact differently with environmental noise than conventional qubits. The authors develop a unified framework where error correction can be implemented directly at the physical qubit level using non-unitary reversal operations.
Key Contributions
- Identification of fundamental differences between identical particle qubit-bath interactions and conventional qubit-bath interactions
- Development of a unified error correction framework using non-unitary reversal operations for fault-tolerant quantum computing
- Demonstration that dynamical decoupling and decoherence-free subspace structures remain effective in this new framework
View Full Abstract
We present a universal fault-tolerant quantum computing architecture based on identical particle qubits (IPQs), where we find that the first-order IPQ-bath interaction fundamentally differs from the conventional first-order qubit-bath interaction. This key distinction necessitates a redesign of existing strategies to fight decoherence. We propose that the simplest quantum error correction code can be realized directly within the physical qubit, provided that conventional correction and restoration are generalized beyond unitary operations to employ physically implementable reversal operations -- naturally placing logical and physical qubits on equal footing. We further demonstrate that dynamical decoupling (DD) remains effective within this unified framework, and that a decoherence-free subspace (DFS)-like structure emerges. Unlike previous approximate treatments, our analytically solvable IPQ-bath model enables rigorous testing of these strategies, with numerical simulations validating their effectiveness.
Generalized $\mathbb{Z}_p$ toric codes as qudit low-density parity-check codes
This paper develops improved quantum error correction codes by generalizing the Kitaev toric code to work with higher-dimensional quantum systems (qudits) and systematically searches for codes with better performance parameters, finding examples that achieve optimal trade-offs between code distance and information storage capacity.
Key Contributions
- Development of generalized Z_p toric codes for qudits with enhanced stabilizer structures
- Systematic search identifying optimal qudit LDPC codes with improved k*d²/n ratios
- Efficient computational method using Laurent polynomials and Gröbner basis to calculate logical dimensions
View Full Abstract
We study two-dimensional translation-invariant CSS stabilizer codes over prime-dimensional qudits on the square lattice under twisted boundary conditions, generalizing the Kitaev $\mathbb{Z}_p$ toric code by augmenting each stabilizer with two additional qudits. Using the Laurent-polynomial formalism, we adapt the Gröbner basis to compute the logical dimension $k$ efficiently, without explicitly constructing large parity-check matrices. We then perform a systematic search over various stabilizer realizations and lattice geometries for $p\in\{3,5,7,11\}$, identifying qudit low-density parity-check codes with the optimal finite-size performance. Representative examples include $[[242,10,22]]_3$ and $[[120,6,20]]_{11}$, both achieving $k d^{2}/n=20$. Across the searched regime, the best observed $k d^{2}$ at fixed $n$ increases with $p$, with an empirical relation $k d^{2} = 0.0541 \, n^{2}\ln p + 3.84 \, n$, compatible with a Bravyi--Poulin--Terhal-type tradeoff when the interaction range grows with system size.
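As a quick sanity check on the reported figures of merit (purely arithmetic, not the paper's search method), both representative codes indeed achieve $k d^{2}/n = 20$:

```python
# Figure of merit k*d^2/n for the two representative codes from the abstract,
# stored as label -> (n, k, d)
codes = {
    "[[242,10,22]]_3": (242, 10, 22),
    "[[120,6,20]]_11": (120, 6, 20),
}
for label, (n, k, d) in codes.items():
    merit = k * d**2 / n
    print(f"{label}: k*d^2/n = {merit}")  # both print 20.0
```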
Experimental characterization of coherent and non-Markovian errors using tangent space decomposition
This paper develops and experimentally validates a new method for diagnosing different types of quantum errors in single-qubit gates using tangent-space decomposition. The technique can distinguish between coherent errors, Markovian noise, and non-Markovian noise from a single measurement, and was tested on a trapped ion quantum computing platform.
Key Contributions
- Novel tangent-space decomposition method for quantum error characterization that distinguishes coherent, Markovian, and non-Markovian errors
- Experimental validation on trapped ion platform showing practical application for quantum control system diagnostics
View Full Abstract
Accurate characterization of coherent and non-Markovian errors remains a central challenge in quantum information processing, as conventional benchmarking techniques typically rely on Markovian and time-independent noise assumptions. In practice, however, quantum devices exhibit both systematic coherent miscalibrations and temporally correlated fluctuations, which complicate error diagnosis and mitigation. Here, we apply a technique based on tangent-space decomposition to characterize such errors in single-qubit quantum gates implemented on a trapped-ion platform. Small imperfections in a quantum operation are treated as perturbations of the target quantum map, represented as tangent vectors in the space of quantum channels. This formulation enables a natural decomposition of the deviation into three components corresponding to coherent, Markovian, and non-Markovian processes. The relative weights of these components provide a quantitative measure of the contribution from each type of error mechanism, directly from a single tomographic snapshot. We experimentally validate this method on single-qubit gates implemented on a trapped $^{40}$Ca$^+$ ion, where control is achieved through laser-driven optical transitions. By analyzing experimentally reconstructed process matrices, expressed in the Pauli transfer matrix and Choi representations, we identify and quantify non-Markovian effects arising from controlled injection of slow fluctuations in the experimental environment. We also characterize deterministic coherent miscalibrations using the same technique. This approach provides a physically transparent and experimentally accessible tool for diagnosing complex error sources in quantum control systems.
CQM: Cyclic Qubit Mappings
This paper proposes Cyclic Qubit Mappings (CQM), a technique that dynamically moves logical qubits around quantum hardware during compilation to average out spatial and temporal error variations in quantum computers using surface codes and lattice surgery operations.
Key Contributions
- Dynamic remapping technique to mitigate hardware heterogeneity in quantum computers
- Method to achieve average logical error rates by moving qubits spatially using lattice surgery
View Full Abstract
Quantum computers show promise to solve select problems otherwise intractable on classical computers. However, noisy intermediate-scale quantum (NISQ) era devices are currently prone to various sources of error. Quantum error correction (QEC) shows promise as a path towards fault-tolerant quantum computing. Surface codes, in particular, have become ubiquitous throughout the literature for their efficacy as a quantum error-correcting code, and can execute quantum circuits via lattice surgery operations. Lattice surgery also allows logical qubits to maneuver around the architecture, provided there is space for it. Hardware used for near-term demonstrations exhibits both spatially and temporally varying error rates, which carry over into the logical qubits. By maneuvering logical qubits around the topology, an average logical error rate (LER) can be enforced. We propose cyclic qubit mappings (CQM), a dynamic remapping technique implemented during compilation to mitigate hardware heterogeneity by expanding and contracting logical qubits. In addition to LER averaging, CQM shows initial promise given its minimal execution-time overhead and effective resource utilization.
Electrical post-fabrication tuning of aluminum Josephson junctions at room temperature
This paper demonstrates a method to electrically tune aluminum Josephson junctions at room temperature using voltage pulses, allowing researchers to adjust qubit frequencies after fabrication. The technique can increase junction resistance by up to 270% while maintaining high qubit quality, providing a solution to frequency crowding problems in quantum processors.
Key Contributions
- Demonstrated controllable post-fabrication tuning of superconducting qubit frequencies while maintaining quality factors above 1 million
- Established practical protocols and limits for electrical tuning of Josephson junctions with up to 270% resistance increase
- Provided solution to frequency crowding in quantum processors through room-temperature junction modification
View Full Abstract
Josephson junctions are a key element of superconducting quantum technology, serving as the core building blocks of superconducting qubits. We present an experimental study on room-temperature electrical tuning of aluminum junctions, showing that voltage pulses can controllably increase their resistance and adjust the Josephson energy while maintaining qubit quality factors above 1 million. We find that the rate of resistance increase scales exponentially with pulse amplitude during manipulation, after which the spontaneous resistance increase scales proportionally to the amount of manipulation. We show that this spontaneous increase halts at cryogenic temperatures, and resumes again at room temperature. Using our stepwise protocol, we achieve up to a 270% increase in junction resistance, corresponding to a reduction of nearly 2 GHz of the qubit transition frequency. These results establish the achievable range, relaxation behavior, and practical limits of electrical tuning, enabling post-fabrication mitigation of frequency crowding in quantum processors.
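The link between junction resistance and qubit frequency can be sketched from the standard transmon relations: $f_{01} \approx \sqrt{8 E_J E_C}/h$ with $E_J \propto 1/R_N$ (Ambegaokar-Baratoff), so $f_{01}$ scales roughly as $1/\sqrt{R_N}$. The snippet below is a back-of-envelope illustration with an assumed 4.5 GHz baseline frequency, not the paper's device parameters:

```python
import math

def transmon_freq_scaling(f0_ghz, r_increase_pct):
    """Approximate qubit frequency after a junction resistance increase.

    In the transmon limit f01 ~ sqrt(8*EJ*EC)/h with EJ proportional to
    1/R_N (Ambegaokar-Baratoff), so f01 scales roughly as 1/sqrt(R_N).
    The small -EC/h anharmonicity correction is neglected here.
    """
    scale = 1.0 / math.sqrt(1.0 + r_increase_pct / 100.0)
    return f0_ghz * scale

# Hypothetical 4.5 GHz transmon after the 270% resistance increase
f_new = transmon_freq_scaling(4.5, 270)
print(f"new frequency ~ {f_new:.2f} GHz, shift ~ {4.5 - f_new:.2f} GHz")
```

For these assumed numbers the shift comes out on the order of 2 GHz, consistent in magnitude with the reduction reported in the abstract.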
Differentiable Maximum Likelihood Noise Estimation for Quantum Error Correction
This paper develops a new method called differentiable Maximum Likelihood Estimation (dMLE) to better estimate noise in quantum computers, which is crucial for quantum error correction. The approach uses gradient descent to optimize noise parameters and demonstrates significant improvements in reducing logical error rates compared to existing methods when tested on Google's quantum processor.
Key Contributions
- Development of differentiable Maximum Likelihood Estimation framework for quantum noise estimation
- Demonstration of up to 30.6% reduction in logical error rates for repetition codes and 8.1% for surface codes
- Integration of exact Planar solver and novel Tensor Network architecture for tractable likelihood evaluation
View Full Abstract
Accurate noise estimation is essential for fault-tolerant quantum computing, as decoding performance depends critically on the fidelity of the circuit-level noise parameters. In this work, we introduce a differentiable Maximum Likelihood Estimation (dMLE) framework that enables exact, efficient, and fully differentiable computation of syndrome log-likelihoods, allowing circuit-level noise parameters to be optimized directly via gradient descent. Leveraging the exact Planar solver for repetition codes and a novel, simplified Tensor Network (TN) architecture combined with optimized contraction path finding for surface codes, our method achieves tractable and fully differentiable likelihood evaluation even for distance 5 surface codes with up to 25 rounds. Our method recovers the underlying error probabilities with near-exact precision in simulations and reduces logical error rates by up to 30.6(3)% for repetition codes and 8.1(2)% for surface codes on experimental data from Google's processor compared to previous state-of-the-art methods: correlation analysis and Reinforcement Learning (RL) methods. Our approach yields provably optimal, decoder-independent error priors by directly maximizing the syndrome likelihood, offering a powerful noise estimation and control tool for unlocking the full potential of current and future error-corrected quantum processors.
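The core mechanic — optimizing noise parameters by gradient ascent on an exactly computed log-likelihood — can be illustrated with a drastically simplified toy: estimating a single bit-flip probability from observed counts. The paper's dMLE operates on full circuit-level syndrome likelihoods via planar and tensor-network solvers; the sketch below only conveys the differentiable-MLE idea.

```python
import math

def mle_flip_rate(k, n, lr=0.5, steps=2000, p0=0.5):
    """Gradient ascent on the Bernoulli log-likelihood
    L(p) = k*log(p) + (n-k)*log(1-p); the maximizer is p = k/n."""
    # Optimize an unconstrained logit theta with p = sigmoid(theta),
    # which keeps p inside (0, 1) automatically.
    theta = math.log(p0 / (1 - p0))
    for _ in range(steps):
        p = 1 / (1 + math.exp(-theta))
        grad = k - n * p          # dL/dtheta via the sigmoid chain rule
        theta += lr * grad / n    # normalized ascent step
    return 1 / (1 + math.exp(-theta))

p_hat = mle_flip_rate(k=37, n=1000)
print(p_hat)  # converges to 37/1000 = 0.037
```

In the paper the same gradient signal comes from differentiating a tensor-network contraction over many circuit-level parameters at once rather than a closed-form Bernoulli likelihood.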
Calderbank-Shor-Steane codes on group-valued qudits
This paper introduces a new class of quantum error-correcting codes called group-CSS codes that work on qudits (quantum systems with more than two levels) based on any finite group. The codes generalize existing CSS codes and quantum double models, providing new theoretical frameworks for quantum error correction with non-Abelian groups.
Key Contributions
- Introduction of CSS-like codes on group-valued qudits for arbitrary finite groups
- Proof that certain group-CSS codes reduce to CW quantum double models
- Construction of intrinsically non-Abelian code families with asymptotically optimal rate and distances
- Generalization of quantum double models with defects using ghost vertices
View Full Abstract
Calderbank-Shor-Steane (CSS) codes are a versatile quantum error-correcting family built out of commuting $X$- and $Z$-type checks. We introduce CSS-like codes on $G$-valued qudits for any finite group $G$ that reduce to qubit CSS codes for $G = \mathbb{Z}_2$ yet generalize the Kitaev quantum double model for general groups. The $X$-checks of our group-CSS codes correspond to left and/or right multiplication by group elements, while $Z$-checks project onto solutions to group word equations. We describe quantum-double models on oriented two-dimensional CW complexes (which need not cellulate a manifold) and prove that, when $G$ is non-Abelian and simple, every $G$-covariant group-CSS code with suitably upper-bounded $Z$-check weight and lower-bounded $Z$-distance reduces to a CW quantum double. We describe the codespace and logical operators of CW quantum doubles via the same intuition used to obtain logical structure of surface codes. We obtain distance bounds for codes on non-Abelian simple groups from the graph underlying the CW complex, and construct intrinsically non-Abelian code families with asymptotically optimal rate and distances. Adding "ghost vertices" to the CW complex generalizes quantum double models with defects and rough boundary conditions whose logical structure can be understood without reference to non-Abelian anyons or defects. Several non-invertible symmetry-protected topological states, both with ordinary and higher-form symmetries, are the unique codewords of simply-connected CW quantum doubles with a single ghost vertex.
High-Temporal-Resolution Measurements of the Impacts of Ionizing Radiation on Superconducting Qubits
This paper studies how ionizing radiation affects superconducting qubits by using high-resolution timing measurements to track qubit performance after radiation events. The researchers found that qubits recover from radiation-induced disruption in about 13 microseconds and that previously suspected correlations between certain qubit errors and radiation may not actually exist.
Key Contributions
- Demonstrated that two-level system scrambling events are not correlated with ionizing radiation events detectable by MKIDs
- Characterized fast recovery dynamics of superconducting qubits after radiation events with 13±1 μs time constant
- Quantified quasiparticle density response to radiation with 240/μm³/MeV peak density per deposited energy
View Full Abstract
We measure the effect of ionizing radiation on superconducting qubits with a timing resolution of 1 $μs$ using microwave kinetic inductance detectors (MKIDs) fabricated on the same substrate. We observe no correlation between two-level system (TLS) scrambling events and ionizing radiation events detected with the MKIDs, suggesting TLS scrambling events may not arise from ionizing radiation and instead the previously reported apparent correlation may be due to events without sufficient energy to trigger our MKIDs. We characterize the fast-time system recovery of transmons following a radiation event, where we observe the recovery of the enhanced qubit relaxation and excitation to be well-described by an exponential recovery to the baseline quasiparticle density, with a characteristic time of $13\pm1\ μ$s, and a peak quasiparticle density at the junction per deposited energy of $240/μm^3/MeV$. The fast recovery is consistent with literature reported values for Nb-based devices with direct injection of 2$Δ_{\text{Al}}$ phonons, demonstrating the recovery is strongly dependent on the proximity of niobium to the junction.
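The recovery model described in the abstract — an exponential return to the baseline quasiparticle density with a 13 μs time constant — can be sketched and fitted as follows (synthetic data with illustrative amplitudes, not the experimental trace):

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t_us, n_base, n_peak, tau_us):
    """Exponential recovery of quasiparticle density toward baseline."""
    return n_base + (n_peak - n_base) * np.exp(-t_us / tau_us)

# Synthetic trace using the abstract's reported time constant (13 us);
# the baseline and peak amplitudes here are illustrative placeholders
t = np.linspace(0, 100, 200)
data = recovery(t, n_base=1.0, n_peak=240.0, tau_us=13.0)

# Recover the parameters from the trace by nonlinear least squares
popt, _ = curve_fit(recovery, t, data, p0=[0.5, 100.0, 5.0])
print(popt)  # ~ [1.0, 240.0, 13.0]
```

On real data the same fit would be applied to the measured relaxation/excitation rates versus time after each radiation event.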
Continuous variable quantum key distribution channel emulator for the SPOQC mission
This paper describes the development of an optical channel emulator that simulates the dynamic conditions of satellite-to-ground communication links to test quantum key distribution systems. The emulator replicates atmospheric turbulence and orbital dynamics to evaluate the performance of quantum communication payloads for the SPOQC satellite mission launching in 2026.
Key Contributions
- Development of a novel optical channel emulator for satellite-to-ground quantum communications
- Accurate simulation of atmospheric turbulence and satellite trajectory effects for continuous variable quantum key distribution testing
View Full Abstract
In a free space optical (FSO) communication link from satellite to ground, the losses in the channel will be dynamic. Thus, the characterization of the FSO channel is of great importance and this can be emulated in the lab to evaluate the realistic performance of a satellite payload. In this work, we introduce a novel optical channel emulator capable of replicating these dynamics, especially for Low Earth Orbit based CubeSats. We demonstrate its ability to accurately emulate a satellite-to-ground optical communications channel under various atmospheric turbulence strengths, satellite trajectories, and optical ground station parameters at a given optical wavelength of interest. Our satellite channel emulator was designed to test and benchmark the performance of the continuous variable quantum key distribution payload for the Satellite Platform for Optical Quantum Communications mission - an in-orbit demonstrator for the UK's Quantum Communication Hub, to be launched in early 2026.
Machine learning of quantum data using optimal similarity measurements
This paper demonstrates a quantum machine learning protocol that uses bosonic quantum interference to efficiently measure similarity between quantum states (overlap measurements) without needing to fully characterize individual quantum data instances. The researchers implemented their approach on an integrated photonic processor and showed it can classify and learn from quantum data with optimal sample complexity.
Key Contributions
- Sample-optimal, hardware-efficient protocol for quantum state overlap estimation using bosonic interference
- Experimental demonstration of quantum data classification and online learning on integrated photonic processor
- Information-theoretically optimal sample complexity independent of system dimension
View Full Abstract
Quantum machine learning seeks a computational advantage in data processing by evaluating functions of quantum states, such as their similarity, that can be classically intractable to compute. For quantum advantage to be possible, however, it is essential to bypass costly characterisation of individual data instances in favour of efficient, direct similarity evaluation. Here we demonstrate a sample-optimal, hardware-efficient protocol for estimating quantum similarity -- the state overlap -- using bosonic quantum interference. The sample complexity of this approach is independent of the system dimension and is information-theoretically optimal up to a constant factor. Experimentally, we implement the scheme on \emph{Prakash-1}, a quantum computing platform based on a fully programmable integrated photonic processor. By preparing and interfering qudit states on the chip to directly extract their overlap, we demonstrate classification and online learning of quantum data with high accuracy in realistic noisy experiments. Our results establish joint overlap measurements as a scalable pathway to efficient quantum data analysis and a practical building block for network-integrated quantum machine learning.
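Direct overlap estimation of the kind described here can be illustrated with the textbook swap test, whose ancilla statistics depend only on $|\langle\psi|\phi\rangle|^2$ and not on the dimension — a simplified classical simulation, not the paper's bosonic-interference scheme:

```python
import numpy as np

def swap_test_prob(psi, phi):
    """Probability of measuring the ancilla in |0> in a swap test:
    P(0) = (1 + |<psi|phi>|^2) / 2."""
    return (1 + abs(np.vdot(psi, phi)) ** 2) / 2

def estimate_overlap(psi, phi, shots, rng):
    """Estimate |<psi|phi>|^2 from `shots` single-bit swap-test outcomes.
    The shot count for a fixed precision does not grow with dimension."""
    zeros = rng.binomial(shots, swap_test_prob(psi, phi))
    return 2 * zeros / shots - 1

rng = np.random.default_rng(7)
d = 16  # qudit dimension (illustrative)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
phi /= np.linalg.norm(phi)

est = estimate_overlap(psi, phi, shots=20000, rng=rng)
print(abs(np.vdot(psi, phi)) ** 2, est)  # estimate tracks the true overlap
```

The dimension-independence of the shot budget is the "sample-optimal" property the abstract highlights; the photonic implementation extracts the same quantity via Hong-Ou-Mandel-type bosonic interference rather than an explicit ancilla.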
Trajectory of Probabilities, Probability on Trajectories, and the Stochastic-Quantum Correspondence
This paper clarifies the conceptual distinction between two ways of describing probabilistic evolution: how probabilities change over time versus assigning probabilities to entire possible histories. The authors develop a mathematical framework connecting these descriptions and apply it to resolve confusion in stochastic-quantum correspondence literature.
Key Contributions
- Formal distinction between trajectory of probabilities and probability on trajectories with precise implementation framework
- Proof that implementations are non-unique and every probability dynamics admits Markovian implementation
- Clarification of fallacies in linearity arguments and proper interpretation of transition matrices in probability dynamics
View Full Abstract
The probabilistic description of the time evolution of a physical system can take two conceptually distinct forms: a trajectory of probabilities, which specifies how probabilities evolve over time, and a probability on trajectories, which assigns probabilities to possible histories. A lack of a clear distinction between these two probabilistic descriptions has given rise to a number of conceptual difficulties, particularly in recent analyses of stochastic-quantum correspondence. This paper provides a systematic account of their relationship. We define probability dynamics and stochastic process families together with a precise notion of implementation that connects the two descriptions. We show that implementations are generically non-unique, that every probability dynamics admits a Markovian implementation, and characterize when non-Markovian implementations are possible. We expose fallacies in common arguments for the linearity of probability dynamics based on the law of total probability and clarify the proper interpretation of ``transition matrices'' by distinguishing dynamics-level maps from the conditional probability matrices of implementing processes. We further introduce decomposability as the appropriate general notion of stepwise evolution for (possibly nonlinear) probability dynamics, relate it to divisibility in the linear case -- showing that the two can come apart -- and disentangle both notions from Markovianity and time-homogeneity. Finally, we connect these results to what we call statistical dynamics, in which linearity is indeed physically motivated, and contrast the framework with quantum mechanics.
Ground state and persistent oscillations in the quantum East model
This paper studies the quantum East model, a theoretical quantum spin system, finding that its ground state can be described by simple product states and identifying special boundary-driven oscillations that persist even in large systems. The work reveals how boundary effects can create long-lived quantum dynamics distinct from other known mechanisms.
Key Contributions
- Analytical characterization of ground state as spin-coherent product state in the s→-∞ limit
- Discovery of persistent coherent oscillations driven by boundary physics with size-independent energy gap
View Full Abstract
For the 1D quantum East model with open boundaries, we show that in the limit $s \to -\infty$, the ground state is accurately captured by a simple spin-coherent product state. We further identify a low-entanglement excited eigenstate that differs from the ground state only by a $π$-rotation of the boundary spin, remaining well approximated by a spin-coherent state. For a range of $-\infty<s<0$, the edge-coherent product state overlaps with two eigenstates separated by a size-independent energy gap, leading to persistent coherent oscillations of both global and local observables in the thermodynamic limit. These oscillations originate from boundary physics and are distinct from quantum many-body scars or hypercube-like Fock-space mechanisms.
Integrability breaking in semiclassical strings in Koopman-Krylov space
This paper develops a new mathematical framework called Koopman-Krylov to analyze chaos and non-integrable dynamics in semiclassical string theory solutions. The authors extend quantum Krylov methods to classical systems to characterize how small perturbations break integrability in string dynamics.
Key Contributions
- Extension of quantum Krylov methods to classical dynamical systems through Koopman-Krylov framework
- Development of diagnostic tools for characterizing integrability breaking in semiclassical string solutions
View Full Abstract
While very powerful, integrability in semiclassical string solutions is known to be a rare property. Motivated by the need to understand and characterise the large landscape of non-integrable string dynamics, we extend Krylov methods for probing chaos to classical systems. We introduce a Koopman-Krylov framework, formulated in the Koopman-von Neumann description of classical mechanics and implemented via a generator extended dynamic mode decomposition (gEDMD) approximation of the Koopman generator acting on observables. Using this framework, we study how integrability-breaking deformations of integrable string dynamics induce characteristic redistributions of spectral weight, leading to observable-dependent delocalisation and spreading in Krylov space. We illustrate the Koopman-Krylov diagnostics across three classes of non-integrable semiclassical string solutions.
From QED$_3$ to Self-Dual Multicriticality in the Fradkin-Shenker Model
This paper studies a theoretical quantum many-body system called the Fradkin-Shenker model, which describes quantum phase transitions in 2+1 dimensional lattice systems. The authors propose a continuum field theory description and establish dualities between different quantum field theories that could describe quantum spin liquids and magnetic phase transitions.
Key Contributions
- Proposed a continuum QED3 description for the staggered Fradkin-Shenker model with emergent U(1) symmetries
- Established a duality between Higgs-Yukawa-QED3 and the easy-plane CP1 model for describing quantum phase transitions
- Computed scaling dimensions using large-Nf expansion and showed agreement with emergent selection rules
View Full Abstract
We consider the Fradkin-Shenker ${\mathbb Z}_2$ gauge-Higgs lattice model in 2+1 dimensions, i.e. the toric code deformed by an in-plane magnetic field. Its phase diagram contains a multicritical CFT with gapless, mutually non-local electric and magnetic particles, exchanged by a ${\mathbb Z}_2^{\mathsf{D}}$ self-duality symmetry. We introduce a staggered generalization of the model in which these particles carry global $U(1)_e$ and $U(1)_m$ charges, respectively, and we propose a continuum QFT description in terms of QED$_3$ with $N_f = 2$ Dirac fermion flavors and a charge-two Higgs field with Yukawa couplings. The conjectured phase diagram harbors a multicritical CFT with $(O(2)_e \times O(2)_m)\rtimes\mathbb{Z}_2^\mathsf{D}$ symmetry, some of which is emergent in the QFT description. We compute the scaling dimensions of some operators using a large-$N_f$ expansion and find agreement with the emergent selection rules. The staggered model admits a deformation to the original Fradkin-Shenker model, which maps to unit-charge monopole operators in Higgs-Yukawa-QED$_3$ that break the $U(1)_e \times U(1)_m$ symmetry. We show explicitly that this deformation reproduces all features of the Fradkin-Shenker phase diagram. Finally, we propose a multicritical duality between Higgs-Yukawa-QED$_3$ and the easy-plane $\mathbb{ CP}^1$ model (i.e. two-flavor scalar QED$_3$ with a suitable potential), which describes spin-1/2 anti-ferromagnets on a square lattice. This duality implies a first-order line of Néel-VBS transitions ending in a deconfined quantum multicritical point, described by the same $O(2)_e \times O(2)_m$ symmetric CFT that arises in the staggered Fradkin-Shenker model, which separates it from a gapped ${\mathbb Z}_2$ spin liquid phase.
Butterfly Echo Protocol for Axis-Agnostic Heisenberg-Limited Metrology
This paper proposes a new quantum sensing protocol that uses chaotic quantum dynamics to create probe states for measuring rotations without knowing the rotation axis in advance. The method achieves optimal Heisenberg-limited sensitivity and can be implemented with simpler experimental setups compared to previous approaches.
Key Contributions
- Development of axis-agnostic rotation sensing protocol achieving Heisenberg scaling
- Use of easily prepared random symmetric probe states generated by chaotic circuits instead of hard-to-prepare anticoherent states
- Analytical and numerical analysis of dephasing effects with practical implementation in lanthanide atoms
View Full Abstract
The extreme sensitivity of chaotic systems to external perturbations makes them natural candidates for sensing applications. We propose a single-shot echo-based protocol for estimating small rotations about an unknown axis that leverages random symmetric probe states prepared via chaotic dynamics. In contrast to previous protocols for this axis-agnostic rotation sensing problem that depend on difficult-to-prepare anticoherent states, the random probe states used in our protocol can be prepared via constant-depth chaotic circuits composed of random one-axis twisting pulses. We demonstrate analytically that our protocol achieves Heisenberg scaling relative to an arbitrary rotation axis that need not be a priori known. We also investigate the effects of collective and single-particle dephasing in our protocol using analytical and numerical tools. While the requirements on dephasing rates to maintain Heisenberg sensitivity are strict, they are achievable in near-term experiments, for instance, for magnetometric rotosensing with high-spin lanthanide atoms such as dysprosium-164.
Analogue many-body gravitating quantum systems with a network of dipolar Bose-Einstein condensates
This paper proposes using networks of dipolar Bose-Einstein condensates to simulate quantum gravitational effects, extending from simple two-level systems to more complex many-body systems. The work demonstrates how these atomic ensemble platforms can explore gravitationally-induced quantum entanglement and decoherence at experimentally accessible scales.
Key Contributions
- Generalization of quantum-gravity interface studies from qubits to interacting N-level qudits using atomic ensembles
- Demonstration that dipolar Bose-Einstein condensate networks can simulate gravitational quantum effects
- Development of metrological witnesses for detecting gravitationally-induced entanglement and decoherence
- Extension to sensor networks that broaden entanglement detection capabilities
View Full Abstract
Operational probes of the interface between quantum mechanics and general relativity in the Newtonian regime -- via mass-energy equivalence in clocks or spatial superpositions in interferometers -- share a common description in terms of an effective qubit-qubit Ising coupling. Here we generalize both paradigms to interacting $(N+1)$-level effective qudits made of atomic ensembles with particle number, $N$. The many-body enhancement boosts the signal-to-noise and increases the effective interaction rate, facilitating the observation of gravitationally induced entanglement and decoherence, certified by metrological witnesses based on local and collective measurements. Furthermore, we show that quantum effects induced by gravitational interaction can be simulated by trapped bimodal Bose-Einstein condensates with long-range (e.g. dipolar) coupling, providing a programmable analogue platform to explore gravitating quantum dynamics at accessible time and energy scales. Finally, extending the protocol to a sensor network broadens the entanglement-detection window.
Efficient evaluation of fundamental sensitivity limits and full counting statistics for continuously monitored Gaussian quantum systems
This paper develops efficient mathematical methods for analyzing continuously monitored quantum systems that maintain Gaussian properties, focusing on bosonic linear systems like optical parametric oscillators. The work provides tools to calculate fundamental limits for parameter estimation and measurement statistics without computationally expensive full quantum state simulations.
Key Contributions
- Developed efficient differential equation formulation for Gaussian quantum systems under continuous monitoring that avoids Hilbert space truncation
- Provided analytical framework for computing quantum Fisher information and fundamental sensitivity limits in continuously monitored bosonic systems
- Demonstrated application to optical parametric oscillator for frequency estimation and thermodynamic uncertainty relations
View Full Abstract
Generalized master equations (GMEs) -- time-local but generally neither trace-preserving nor Hermiticity-preserving -- are convenient tools to compute properties of the environment of an open or continuously monitored quantum system. A two-sided master equation yields the fidelity and quantum Fisher information (QFI) of environment states, thereby setting fundamental limits for hypothesis testing and parameter estimation under continuous monitoring. For unmonitored noise or inefficient detection, the QFI of the detectable part of the environment may be obtained from a recently derived GME acting on multiple system replicas. Tilted master equations provide the full counting statistics of quantum jumps and diffusive measurements, enabling, e.g., studies of quantum thermodynamics beyond average values. Here we focus on bosonic linear systems, governed by a quadratic Hamiltonian and linear jump operators, whose dynamics preserves Gaussianity. For Gaussian initial states, we recast a generic GME as a compact set of ordinary differential equations for the covariance matrix (a Riccati-type equation), first moments, and normalization. These equations can be integrated efficiently without Hilbert-space truncation, and admit analytical results in simple settings. We also provide specialized forms for fidelity/QFI and full counting statistics. We illustrate the formalism with a continuously monitored optical parametric oscillator, using it to determine sensitivity limits for frequency estimation and to benchmark Hasegawa's thermodynamic uncertainty relations.
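The covariance-matrix formulation described above can be illustrated in miniature. The sketch below (an illustration under assumed parameters, not the paper's code) Euler-integrates the unmonitored, linear (Lyapunov) limit of such a covariance equation for a single damped bosonic mode and recovers the vacuum steady state; the monitored case adds a quadratic Riccati term but admits the same fixed-step integration strategy.

```python
import numpy as np

def integrate_lyapunov(A, D, sigma0, dt=1e-3, T=20.0):
    """Euler integration of d(sigma)/dt = A sigma + sigma A^T + D,
    the unmonitored limit of the Riccati-type covariance equation."""
    sigma = sigma0.copy()
    for _ in range(int(T / dt)):
        sigma = sigma + dt * (A @ sigma + sigma @ A.T + D)
    return sigma

kappa = 1.0                       # cavity decay rate (assumed value)
A = -0.5 * kappa * np.eye(2)      # drift matrix of a damped mode
D = 0.5 * kappa * np.eye(2)       # vacuum diffusion (vacuum covariance = I/2)
sigma0 = 5.0 * np.eye(2)          # start from a hot thermal state

sigma_ss = integrate_lyapunov(A, D, sigma0)
print(np.round(sigma_ss, 3))      # relaxes to the vacuum covariance I/2
```

Because the state stays Gaussian, this 2x2 integration replaces a simulation in a truncated Hilbert space, which is the efficiency gain the abstract refers to.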
Quantum Confocal Microscopy in Fock Space with a 19 dB Metrological Gain
This paper demonstrates a new quantum sensing technique called quantum confocal microscopy that uses specially prepared quantum states of light to achieve measurement precision far beyond classical limits. The researchers achieved a record 19 dB improvement in measurement sensitivity by creating focused quantum probe states and efficiently extracting the measurement information.
Key Contributions
- Introduced quantum confocal microscopy framework with Fock-space lenses for deterministic quantum probe preparation
- Achieved record 19.06 dB metrological gain beyond standard quantum limit with scalable quantum circuit implementation
- Demonstrated near-Heisenberg scaling (N^-0.416) displacement sensitivity with probe states up to 500 photons
View Full Abstract
Quantum metrology promises measurement precision beyond classical limits by exploiting large-scale quantum states, yet realizing this advantage faces two fundamental challenges: the deterministic preparation of non-trivial quantum probes and the efficient extraction of metrological information in high-dimensional Hilbert spaces. Here, we introduce quantum confocal microscopy in Fock space that simultaneously resolves both challenges. Drawing a direct analogy between classical wave optics and quantum state evolution in a bosonic mode, we construct a confocal system with two Fock-space lenses. The first lens deterministically focuses a coherent state into a quantum probe with a tightly concentrated photon-number distribution, while the second lens maps the metrological information back to the vacuum state for efficient readout. Using a superconducting circuit QED platform, we prepare focused probe states with mean photon numbers up to ${N} = 500$, achieving a 21.5$\pm$1.1 dB compression of the photon-number uncertainty relative to a coherent state, with a scalable quantum circuit of $\mathcal{O}(1)$ operational depth. We demonstrate a displacement sensitivity scaling as $N^{-0.416}$, approaching the Heisenberg scaling ($N^{-0.5}$), and achieve a record metrological gain of 19.06$\pm$0.13 dB beyond the standard quantum limit. This work establishes quantum confocal microscopy as a scalable and practical framework for quantum-enhanced precision measurement, readily extendable to other bosonic platforms and high-dimensional quantum many-body systems.
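For readers unfamiliar with the dB figures quoted above, a metrological gain is a variance ratio on a logarithmic scale. A minimal conversion, assuming the standard 10 log10 convention (an illustration, not the paper's analysis code):

```python
import math

def metrological_gain_db(var_sql, var_probe):
    """Gain in dB of a probe over the standard quantum limit, taken here
    as 10 log10 of the variance ratio (assumed convention)."""
    return 10.0 * math.log10(var_sql / var_probe)

# A 19.06 dB gain corresponds to roughly 80x variance reduction:
print(round(10 ** (19.06 / 10), 1))
print(round(metrological_gain_db(80.5, 1.0), 2))
```

On this convention the quoted 19.06 dB gain means the measurement variance is about 80 times smaller than the standard-quantum-limit variance at the same photon number.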
Gaussian mode coupling of spectrally broadband photons from bulk spontaneous parametric down-conversion: A spatial-spectral mode analysis of fiber coupling
This paper analyzes how different spatial and spectral properties of photons from spontaneous parametric down-conversion (SPDC) sources affect key performance metrics like collection efficiency and spectral purity. The researchers develop a theoretical framework using Laguerre-Gauss mode decomposition to understand trade-offs between these metrics and validate their predictions experimentally.
Key Contributions
- Development of a spectral-spatial mode analysis framework using Laguerre-Gauss decomposition to understand trade-offs in SPDC source performance metrics
- Quantitative comparison of different phase-matching configurations including standard periodically poled and aperiodically poled Gaussian phase matching for type-II SPDC
- Experimental validation of theoretical predictions through spatial and spectral projection measurements
View Full Abstract
Photon sources based on spontaneous parametric down-conversion (SPDC) are central to experimental quantum optics and quantum technologies. Their performance is commonly quantified by three metrics: pair-collection probability, heralding efficiency, and spectral purity. In bulk-crystal SPDC, these metrics are known to be mutually constrained, yet the physical origin of the resulting trade-offs is often obscured. We show that these trade-offs originate from the frequency-dependent population of discrete spatial modes in the SPDC emission. By performing a Laguerre-Gauss mode decomposition at each frequency component, we show how spectral-spatial non-separability impacts collection probability, heralding efficiency, and purity. We apply this framework to two widely used quasi-phase-matching configurations: collinear degenerate type-0 and type-II SPDC in periodically poled bulk crystals, and quantify how different phase-matching functions shape the spectral-spatial mode structure. In particular, for type-II SPDC we compare standard periodically poled and aperiodically poled Gaussian phase matching. We experimentally validate some of our theoretical results using spatial- and spectral-projection measurements. This spectral-spatial mode analysis provides a quantitative and predictive framework for understanding and engineering bulk-crystal photon sources, enabling systematic multi-parameter optimization beyond qualitative design guidelines.
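The spectral purity discussed above is commonly computed from a Schmidt (singular value) decomposition of a discretized joint spectral amplitude. A toy sketch with assumed Gaussian envelopes (illustrative only, not the paper's phase-matching functions):

```python
import numpy as np

def spectral_purity(jsa):
    """Heralded-state spectral purity from a discretized joint spectral
    amplitude, via its Schmidt (singular value) decomposition."""
    s = np.linalg.svd(jsa, compute_uv=False)
    lam = s**2 / np.sum(s**2)        # normalized Schmidt coefficients
    return float(np.sum(lam**2))     # purity = inverse Schmidt number

# Toy joint spectral amplitudes on a signal/idler frequency grid:
w = np.linspace(-3, 3, 201)
ws, wi = np.meshgrid(w, w, indexing="ij")
p_sep = spectral_purity(np.exp(-(ws**2 + wi**2) / 2))            # separable
p_ent = spectral_purity(np.exp(-(ws + wi)**2 / 2
                                - (ws - wi)**2 / 8))             # correlated
print(round(p_sep, 3), round(p_ent, 3))
```

The separable (factorizable) amplitude gives purity near 1, while the frequency-correlated one does not; engineering the phase-matching function toward the factorizable case is the purity-optimization task the abstract analyzes.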
Polarization-selective quantum cooperative response in dual-species atom arrays
This paper develops dual-species atom arrays that can selectively control and modulate different polarizations of light by using atoms with different polarizabilities to break symmetry. The researchers demonstrate how these arrays can function as polarization-selective quantum optical devices by engineering cooperative optical responses between the different atomic species.
Key Contributions
- Development of dual-species subwavelength atom arrays with polarization-dependent subradiant modes
- Demonstration of scalable polarization-selective quantum light modulator using engineered atomic arrays
View Full Abstract
Atom arrays have emerged as a powerful platform for quantum light-matter interfaces, yet single-species arrays are constrained by in-plane symmetry, restricting polarization control. Here we investigate the cooperative optical response of dual-species subwavelength atom arrays, in which the intrinsic polarizability difference breaks the in-plane symmetry. By engineering the lattice spacing and detunings, the arrays exhibit polarization-dependent subradiant modes, enabling complete reflection of a specific polarization component. Leveraging this mechanism, we assemble array units as functional pixels and demonstrate a scalable polarization-selective quantum light modulator. Our work establishes a dynamically reconfigurable atomic-photonic platform for versatile subwavelength quantum optical elements.
Quantum diffusion for a quantum particle with a correlated Gaussian noise
This paper studies how a quantum particle moves and spreads out when influenced by correlated Gaussian noise, deriving mathematical formulas to describe the particle's momentum and position over time. The work provides analytical solutions for understanding quantum diffusion processes in noisy environments.
Key Contributions
- Analytical solution of joint probability density function for quantum particle with correlated Gaussian noise
- Explicit expressions for mean square momentum and mean square displacement in quantum diffusion
View Full Abstract
We investigate the diffusive behavior of a quantum particle driven by a correlated Gaussian noise. We derive the analytical solution of the joint probability density function and obtain explicit expressions for the mean square momentum and the mean square displacement.
Connecting Quantum Contextuality and Nonlocality
This paper provides a unified theoretical framework connecting quantum contextuality and nonlocality using mathematical tools like sheaf theory and graph theory. It bridges abstract theoretical understanding with experimental implementations, particularly in photonic systems, showing how these quantum phenomena serve as resources for quantum technologies.
Key Contributions
- Unified sheaf-theoretic and graph-theoretic framework connecting contextuality and nonlocality
- Bridge between abstract theory and experimental realizations in photonic systems
- Theory-independent characterization of quantum statistical correlations
View Full Abstract
Quantum theory departs from classical physics in its treatment of correlations, most prominently through the phenomena of contextuality and nonlocality. Once regarded primarily as foundational curiosities, these effects are now understood as key operational resources for quantum computation, communication, and simulation. Although traditionally investigated in distinct settings, recent theoretical and experimental advances have revealed deep conceptual, mathematical, and operational connections between them. This review presents a unified perspective on these developments based on sheaf-theoretic and graph-theoretic frameworks, which provide theory-independent characterizations of statistical correlations. These approaches clarify the structural relationship between contextuality and nonlocality, facilitate the formulation of experimentally testable inequalities, and guide implementations in realistic physical platforms, with particular emphasis on photonic systems. By bridging abstract theoretical structures and concrete experimental realizations, this review sheds light on the nonclassical foundations of quantum correlations and their emerging role in quantum technologies.
Scaling and Luescher Term in a non-Abelian (2+1)d SU$(2)$ Quantum Link Model
This paper studies a 2+1 dimensional quantum link model using tensor network methods to investigate string-like behavior between quarks. The researchers find evidence for confinement and rough string behavior, but the fitted coefficient of the Luescher term deviates unexpectedly from its predicted universal value.
Key Contributions
- Demonstration of tensor network methods for simulating non-Abelian gauge theories in 2+1 dimensions
- Evidence for rough string behavior with logarithmic width scaling but anomalous Luescher term coefficient
View Full Abstract
We investigate a non-Abelian SU$(2)$ quantum link model in 2+1 dimensions on a hexagonal lattice using tensor network methods. We determine the static quark potential for a wide range of bare coupling values and find that the theory is confining. We also probe the existence of a Luescher term and find a clear signal; however, the value of the dimensionless constant $\gamma$ strongly deviates from the expected universal value $-\pi/24$ for almost all values of the coupling $g^2$ we investigated. The width of the strings scales logarithmically with the string length, again for all $g^2$ values, providing evidence for a rough string, with no indication of a roughening transition.
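Extracting a Luescher coefficient amounts to fitting the static potential $V(r) = V_0 + \sigma r + \gamma/r$ and comparing the fitted $\gamma$ with the universal value $-\pi/24$. A minimal linear least-squares fit on synthetic data (illustrative only, not the paper's measurements):

```python
import numpy as np

# Toy static potential with the universal bosonic-string value
# gamma = -pi/24 baked in (synthetic data, assumed parameters).
r = np.arange(2.0, 12.0)
V0_true, sigma_true, gamma_true = 0.5, 0.2, -np.pi / 24
V = V0_true + sigma_true * r + gamma_true / r

# The model is linear in the parameters in the basis {1, r, 1/r}.
X = np.column_stack([np.ones_like(r), r, 1.0 / r])
V0_fit, sigma_fit, gamma_fit = np.linalg.lstsq(X, V, rcond=None)[0]
print(round(gamma_fit, 4), round(-np.pi / 24, 4))
```

On real lattice data the interesting question, and the paper's anomaly, is whether the fitted $\gamma$ actually lands near $-\pi/24 \approx -0.1309$.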
Dequantization Barriers for Guided Stoquastic Hamiltonians
This paper constructs a mathematical proof showing that certain quantum ground-state preparation problems cannot be efficiently solved by any classical algorithm, even with optimal starting conditions, while remaining solvable on quantum computers. The work strengthens previous results by ruling out all classical approaches rather than just specific algorithms.
Key Contributions
- Proves fundamental limitations of classical algorithms for stoquastic ground-state preparation problems
- Extends previous dequantization barriers from specific algorithms to all possible classical approaches
- Constructs explicit examples using spectral expander graphs with attached self-similar trees
View Full Abstract
We construct a probability distribution, induced by the Perron--Frobenius eigenvector of an exponentially large graph, which cannot be efficiently sampled by any classical algorithm, even when provided with the best-possible warm-start distribution. In the quantum setting, this problem can be viewed as preparing the ground state of a stoquastic Hamiltonian given a guiding state as input, and is known to be efficiently solvable on a quantum computer. Our result suggests that no efficient classical algorithm can solve a broad class of stoquastic ground-state problems. Our graph is constructed from a class of high-degree, high-girth spectral expanders to which self-similar trees are attached. This builds on and extends prior work of Gilyén, Hastings, and Vazirani [Quantum 2021, STOC 2021], which ruled out dequantization for a specific stoquastic adiabatic path algorithm. We strengthen their result by ruling out any classical algorithm for guided ground-state preparation.
Excited-state quantum phase transitions and chaos in a three-level Lipkin model
This paper studies quantum phase transitions in excited states of a three-level atomic system, developing new methods to analyze how quantum chaos affects these transitions. The researchers combine traditional quantum phase transition diagnostics with chaos-sensitive measures to create a framework for understanding complex quantum systems.
Key Contributions
- Development of a framework combining chaos-sensitive measures with excited-state quantum phase transition diagnostics
- Characterization of spectral structures in three-level Lipkin-Meshkov-Glick model using Poincaré sections and Peres lattices
View Full Abstract
Excited-state quantum phase transitions (ESQPTs) have been extensively studied in two-level models, but their characterization remains challenging in systems displaying mixed regular and chaotic dynamics. In this work, we investigate ESQPTs within the three-level Lipkin-Meshkov-Glick model, where an enlarged Hilbert space and multiple separatrices give rise to rich spectral structures strongly influenced by chaos. To investigate the different dynamical regions, we have calculated Poincaré sections and Peres lattices. In addition, by combining chaos-sensitive measures with standard ESQPT diagnostics, we provide a static analysis of ESQPT signatures in this model and establish a robust framework for future studies of its dynamical behavior. The degree of chaos and the Kullback-Leibler divergence are found to be very effective chaos-sensitive measures, which are complementary to ESQPT diagnostics such as the mean field limit and the participation ratio. Hence we provide a standard framework to work with ESQPTs in chaotic three-level systems.
A Maxwell Fish-Eye Lens in a Bose-Einstein Condensate
This paper demonstrates the creation of an optical lens analog using sound waves (phonons) in a Bose-Einstein condensate, where the researchers engineered a spatially varying speed of sound to mimic the focusing properties of a Maxwell fish-eye lens. The work shows how ultracold atomic systems can be used to simulate complex optical phenomena and control wave propagation.
Key Contributions
- Experimental realization of Maxwell fish-eye lens analog using phonons in BEC
- Demonstration of engineered refractive index control in ultracold atomic systems
- Framework for simulating wave propagation on effective spherical geometries
View Full Abstract
We experimentally realize an analogue of the optical Maxwell fish-eye lens (MFEL) using phononic excitations in a Bose-Einstein condensate (BEC). A MFEL is characterized by a radially symmetric, spatially varying refractive index with the remarkable property that rays emitted from any point within the lens are perfectly focused at their image points. While the implementation of such gradient-index lenses is challenging in conventional optical systems, BECs offer a highly tunable platform in which the spatially varying speed of sound of collective excitations -- phonons, the acoustic-wave analogues of photons -- can be engineered and their dynamics observed in real time. Time-resolved measurements of phonon wavefronts reveal focusing behavior that shows good agreement with analytical theory and numerical simulations. This work provides both a geometric and physical framework for engineering effective refractive indices using ultracold atoms, and simulating wave propagation on effective spherical geometries.
Thermodynamic uncertainty relation under continuous measurement and feedback with quantum-classical-transfer entropy
This paper derives a new thermodynamic uncertainty relation that applies when quantum systems are continuously measured and controlled with feedback. The work shows that information gained through measurement can improve the precision of quantum currents beyond traditional limits, and that feedback control can enhance precision while reducing entropy production.
Key Contributions
- Derived thermodynamic uncertainty relation incorporating quantum-classical-transfer entropy for continuously measured quantum systems
- Demonstrated that information gain from measurement can enhance current precision beyond conventional TUR bounds
- Showed that feedback control can simultaneously improve precision while suppressing entropy production
View Full Abstract
We derive a thermodynamic uncertainty relation (TUR) under quantum continuous measurement and feedback control. By incorporating the quantum-classical-transfer entropy, which quantifies the information gained by continuous measurement, we show that the precision of currents is constrained by information-thermodynamic costs such as the entropy production and information gain. Our result shows that information gain has the potential to enhance the precision of currents beyond the bounds set by the conventional TUR. We illustrate the bound with a driven two-level system under continuous measurement and feedback, demonstrating that feedback achieves higher precision of currents while suppressing the entropy production.
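The conventional TUR that the paper's bound generalizes can be checked in a toy classical setting. The sketch below evaluates both sides of the bound, Var(J)/<J>^2 >= 2/Sigma with k_B = 1, for a biased Poisson jump current (an illustration of the baseline bound, not the paper's quantum measurement-and-feedback model):

```python
import math

def tur_ratio(k_plus, k_minus, t=1.0):
    """Both sides of the conventional TUR for a biased jump current with
    forward/backward rates k_plus, k_minus. Returns (lhs, rhs) with
    lhs = Var(J)/<J>^2 and rhs = 2/Sigma; lhs >= rhs for all rates."""
    mean = (k_plus - k_minus) * t            # mean current J
    var = (k_plus + k_minus) * t             # Poissonian variance
    sigma = (k_plus - k_minus) * math.log(k_plus / k_minus) * t  # entropy prod.
    return var / mean**2, 2.0 / sigma

lhs, rhs = tur_ratio(3.0, 1.0)
print(lhs >= rhs)   # True: the conventional bound holds
```

The paper's point is that under continuous measurement and feedback the right-hand side acquires an information (transfer-entropy) term, so precision can exceed what this bare entropy-production bound allows.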
Coupling-energy driven pumping through quantum dots: the role of coherences
This paper studies how electrons can be pumped through quantum dots by modulating coupling energies rather than traditional methods, examining two setups that rely on coherent quantum effects and off-resonant tunneling processes.
Key Contributions
- Exact solutions for electron pumping through quantum dots with arbitrary tunnel coupling strengths
- Identification of optimal parameter regimes for maximizing pumping current and energy efficiency in coupling-energy driven systems
View Full Abstract
We study the impact of off-resonant tunneling and coherences on the electron pumping through quantum dots. Thereby, we focus on two electron-pump setups where lowest-order tunneling processes are suppressed and the pump is exclusively driven by modulations of the coupling energy. The first setup is driven by switching on and off the couplings between the quantum dot and the leads, while the second setup employs measurements of the dot occupation. We derive exact solutions for arbitrarily strong tunnel couplings in the absence of Coulomb interaction, identify parameter regimes with optimal pumping currents or optimal energy efficiency, and discuss similarities between both pumping mechanisms.
Q-Tag: Watermarking Quantum Circuit Generative Models
This paper introduces the first watermarking framework for quantum circuit generative models (QCGMs), which embeds ownership information directly into AI models that automatically generate quantum circuits. The method protects intellectual property by ensuring generated circuits can be traced back to their original model while maintaining circuit functionality and resisting attacks.
Key Contributions
- First watermarking framework specifically designed for quantum circuit generative models
- Symmetric sampling strategy that aligns watermark encoding with model's Gaussian prior
- Synchronization mechanism for robust watermark detection against adversarial attacks
View Full Abstract
Quantum cloud platforms have become the most widely adopted and mainstream approach for accessing quantum computing resources, due to the scarcity and operational complexity of quantum hardware. In this service-oriented paradigm, quantum circuits, which constitute high-value intellectual property, are exposed to risks of unauthorized access, reuse, and misuse. Digital watermarking has been explored as a promising mechanism for protecting quantum circuits by embedding ownership information for tracing and verification. However, driven by recent advances in generative artificial intelligence, the paradigm of quantum circuit design is shifting from individually and manually constructed circuits to automated synthesis based on quantum circuit generative models (QCGMs). In such generative settings, protecting only individual output circuits is insufficient, and existing post hoc, circuit-centric watermarking methods are not designed to integrate with the generative process, often failing to simultaneously ensure stealthiness, functional correctness, and robustness at scale. These limitations highlight the need for a new watermarking paradigm that is natively integrated with quantum circuit generative models. In this work, we present the first watermarking framework for QCGMs, which embeds ownership signals into the generation process while preserving circuit fidelity. We introduce a symmetric sampling strategy that aligns watermark encoding with the model's Gaussian prior, and a synchronization mechanism that counteracts adversarial watermark attacks through latent drift correction. Empirical results confirm that our method achieves high-fidelity circuit generation and robust watermark detection across a range of perturbations, paving the way for scalable, secure copyright protection in AI-powered quantum design.
Geometric control of maximal entanglement via bound states in the continuum
This paper demonstrates how to create maximally entangled quantum states using bound states in the continuum (BICs) in a system of two giant atoms coupled to a waveguide. The researchers show that the geometric design of the system - specifically the connection lengths and spacing - can be engineered to produce robust Bell-like entangled states.
Key Contributions
- Demonstration that geometric parameters can control maximal entanglement in giant-atom waveguide systems
- Analytical connection between system geometry and Bell state generation via bound states in the continuum
- Analysis of dynamical stability and robustness hierarchy for maximally entangled BICs
View Full Abstract
Bound states in the continuum (BICs) convert dissipative open systems into effectively closed quantum subspaces through destructive interference. We show that two identical giant atoms coupled to a one-dimensional waveguide support BICs that coincide with maximally entangled atomic states. Most importantly, entanglement is predominantly determined by the geometric design: the ratio of intra-atomic connection lengths fixes the concurrence, while the propagation phase between atoms selects a family of Bell-like states. We further analyze the dynamical stability of these maximally entangled BICs under exact time evolution, revealing a clear hierarchy of robustness against parameter perturbations. Our results establish an analytical bridge between symmetry, geometry, entanglement, and BICs in giant-atom waveguide platforms.
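The concurrence that the geometry fixes can be evaluated with the standard two-qubit pure-state formula. A small sketch, where theta is a hypothetical stand-in for the angle set by the connection-length ratio and phi for the propagation phase (not the paper's explicit parametrization):

```python
import numpy as np

def concurrence_pure(psi):
    """Concurrence of a two-qubit pure state psi = (a, b, c, d) in the
    {|00>, |01>, |10>, |11>} basis: C = 2|a*d - b*c|."""
    a, b, c, d = psi
    return 2.0 * abs(a * d - b * c)

# Single-excitation atomic state cos(theta)|01> + e^{i phi} sin(theta)|10>:
theta, phi = np.pi / 4, np.pi / 2
psi = np.array([0, np.cos(theta), np.exp(1j * phi) * np.sin(theta), 0])
print(round(concurrence_pure(psi), 3))   # 1.0 at theta = pi/4 (Bell-like state)
```

For such single-excitation states the concurrence is sin(2 theta), so a balanced geometry (theta = pi/4) yields a maximally entangled state while phi only selects which Bell-like state is produced, mirroring the roles the abstract assigns to the two geometric parameters.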
Cryptographic Fragility of Standard Quantum Repeater Protocols
This paper identifies security vulnerabilities in standard quantum repeater protocols used for quantum Internet infrastructure, showing that adversaries can exploit the BBPSSW distillation protocol to corrupt entanglement while evading detection. The authors propose a new cryptographic network stack with trapdoor verification to address these vulnerabilities.
Key Contributions
- Demonstrated that BBPSSW distillation protocol can be exploited to purify error syndromes rather than entanglement in adversarial environments
- Proposed a Cryptographic Network Stack with trapdoor verification protocol using private randomness to secure quantum repeater networks
View Full Abstract
The security of the proposed quantum Internet relies on repeater protocols designed under the assumption of stochastic, characterizable noise. We demonstrate that in adversarial environments this assumption induces performance vulnerabilities for computationally bounded repeater nodes. We show that the standard BBPSSW distillation protocol recursively purifies error syndromes rather than entanglement. This leads to a state of low fidelity despite diagnostic metrics indicating perfect convergence. Moreover, we show that the verifier cannot check the adversarial influence via the maximum likelihood estimation algorithm since it is blind to computationally bounded observers. To address these vulnerabilities, we propose a Cryptographic Network Stack centered on a trapdoor verification protocol. The protocol exploits private randomness to restore operational stability without requiring channel characterization.
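In the honest-noise setting, the BBPSSW protocol targeted by the attack reduces to the textbook fidelity recursion for Werner states. A sketch of that baseline recursion (the adversarial behavior described above is precisely a departure from this convergence, with diagnostics still reporting it):

```python
def bbpssw_step(F):
    """One round of BBPSSW distillation on Werner states of fidelity F
    (standard textbook recursion; honest i.i.d. noise, no adversary)."""
    num = F**2 + ((1 - F) / 3) ** 2
    den = F**2 + 2 * F * (1 - F) / 3 + 5 * ((1 - F) / 3) ** 2
    return num / den

F = 0.75
for _ in range(6):
    F = bbpssw_step(F)
print(round(F, 4))   # fidelity climbs toward 1 for any F > 1/2
```

The paper's observation is that an adversary can craft correlated errors so that the measured convergence diagnostics follow this curve while the actual delivered fidelity does not.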
Equal-spin and opposite-spin density-density correlations in the BCS-BEC crossover: Gauge Symmetry, Pauli Exclusion Principle, Wick's Theorem and Experiments
This paper develops a theoretical framework for understanding how particles with different spin states interact and correlate with each other in ultracold Fermi gases, particularly focusing on the transition from weakly-bound Cooper pairs (BCS) to strongly-bound molecules (BEC). The work provides theoretical predictions that match experimental observations of lithium-6 atoms in two dimensions.
Key Contributions
- Development of general theory for spin-dependent density correlations valid across temperature, interaction strength, and dimensionality
- Identification that two-particle irreducible contributions are essential for explaining experimentally observed minimum in opposite-spin correlations
View Full Abstract
We develop a general theory of spin-dependent density-density correlations that is valid for any temperature, interaction, dimensionality, and mass or population status of Fermi gases with two internal states. We use gauge invariance and the Pauli principle to establish constraints on the spin-dependent density-density correlations that are consistent with the fluctuation-dissipation and Wick's theorems. As an example, we study the spin-dependent density-density correlations from the BCS to the Bose regime in two dimensions at zero temperature, inspired by experiments in $^6$Li. We show that two-particle irreducible contributions, involving collective excitations, many-particle scattering, and vertex corrections, are essential to describe experiments. In particular they turn out to be responsible for the emergence of an experimentally observed minimum in the opposite-spin density-density correlations.
A quantum feasibility preserving modeling for the min cut problem
This paper develops a variational quantum algorithm for solving the minimum cut problem in weighted graphs by using a specialized XY mixer that ensures all quantum states correspond to feasible solutions. The approach avoids penalty terms and includes a metaheuristic strategy to handle larger problem instances by decomposing them into smaller subproblems.
Key Contributions
- Development of a feasibility-preserving quantum algorithm using XY mixer dynamics that restricts evolution to valid cut configurations
- Introduction of an iterative metaheuristic strategy for scaling quantum optimization to larger problem instances
- Demonstration of systematic control over initial probability distributions for quantum warm start techniques
View Full Abstract
We study the minimum cut problem in weighted undirected graphs using variational quantum algorithms in which only feasible cut configurations are explored. Although minimum cut admits efficient classical solutions, it is a fundamental component of more complex network optimization problems such as multicut and network interdiction. Our objective is to examine quantum models in which feasibility is preserved by the mixer dynamics, without introducing penalty terms in the cost Hamiltonian. We employ a ring-structured XY mixer that restricts the quantum evolution to the subspace of valid cut configurations, ensuring that all sampled states correspond to feasible solutions. To address scalability limitations, we propose an iterative metaheuristic strategy that decomposes large instances into smaller subproblems solved sequentially using the same quantum model. The results obtained using the mixer indicate that the initial probability distribution can be systematically controlled, thereby enabling the development of warm-start techniques within variational quantum algorithms.
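The general mechanism behind feasibility-preserving mixers can be checked numerically: an XY mixer on a ring commutes with the total excitation number, so the dynamics never leaves a fixed-Hamming-weight subspace. A minimal sketch of that property (generic, not the paper's specific min-cut encoding):

```python
import numpy as np
from functools import reduce

# Generic property behind feasibility-preserving mixers: a ring XY mixer
# commutes with the total excitation number, so evolution stays inside a
# fixed-Hamming-weight subspace.  (Illustration only -- the paper's exact
# min-cut encoding is not reproduced here.)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    mats = [single if k == site else I2 for k in range(n)]
    return reduce(np.kron, mats)

n = 4
H_xy = sum(op(X, i, n) @ op(X, (i + 1) % n, n) +
           op(Y, i, n) @ op(Y, (i + 1) % n, n) for i in range(n))
N_op = sum((np.eye(2**n) - op(Z, i, n)) / 2 for i in range(n))  # excitation number

comm = H_xy @ N_op - N_op @ H_xy
print(np.max(np.abs(comm)))  # ~0: Hamming weight is conserved by the mixer
```

Because the commutator vanishes, any initial state with a fixed number of 1s evolves entirely within that sector, which is what removes the need for penalty terms.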
Coherence squeezing in optical interference
This paper introduces the concept of 'coherence squeezing' in quantum optics, where researchers develop mathematical operators to characterize and reduce uncertainty in the coherence properties of light passing through double slits. They show how this new type of squeezing can improve the precision of interference measurements beyond what's possible with classical light.
Key Contributions
- Introduction of coherence squeezing as a new fundamental degree of freedom alongside phase, amplitude, and polarization squeezing
- Development of Hermitian operators and uncertainty relations that characterize optical coherence at interference slits
- Demonstration of enhanced precision in interferometric measurements through coherence uncertainty reduction
View Full Abstract
We introduce the concept of optical coherence squeezing in double-slit interference. We construct Hermitian operators that characterize the coherence at the slits, leading to coherence uncertainty relations and a corresponding squeezing condition. We also analyze states exhibiting such squeezing and show its manifestations in the uncertainty of the magnitudes and positions of the intensity fringes. Our work identifies coherence as a fundamental degree of freedom for squeezing, complementing phase, amplitude, and polarization, which could benefit quantum-enhanced interferometry.
Information and coherence as resources for work extraction from unknown quantum state and providing quantum advantages
This paper investigates how much work can be extracted from quantum systems when only partial information is available about their state, rather than complete knowledge. The researchers show that quantum coherence in measurement procedures enables greater work extraction than classical measurements alone, establishing coherence as the key quantum resource for this advantage.
Key Contributions
- Introduced observational ergotropy as a measure of work extraction under partial information constraints
- Demonstrated that quantum coherence in measurement projectors enables work extraction beyond classical limits
- Showed that fine-grained measurements allow greater work extraction than coarse-grained measurements
View Full Abstract
The amount of extractable work from a physical system is fundamentally connected to the information available about its state, as illustrated by Maxwell's demon and the Gibbs paradox. In standard thermodynamic protocols involving system--bath interactions, the maximum work is given by the free-energy difference between the initial state and the corresponding Gibbs state at the bath temperature. This motivates a natural question: does information also limit work extraction in closed quantum systems that do not involve a heat bath and where work is obtained through unitary operations generated by a time-dependent Hamiltonian? While ergotropy quantifies the maximum work extractable via unitary operations, it assumes complete knowledge of the quantum state, typically requiring full state tomography. In realistic scenarios, however, only partial information is accessible. In this case, the relevant figure of merit is observational ergotropy, which depends explicitly on the measurement used to probe the system. We show that observational ergotropy decreases under classical post-processing of measurement outcomes, implying that fine-grained measurements allow greater work extraction than coarse-grained ones. Moreover, maximizing observational ergotropy over all possible measurements recovers standard ergotropy, which decomposes into incoherent (classical) and coherent (quantum) contributions. Our results demonstrate that coherence in the measurement projectors constitutes the key resource, enabling work extraction beyond the incoherent limit and establishing coherence as the origin of quantum advantage in observational ergotropy extraction.
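The standard ergotropy formula referenced above (energy of the state minus energy of its passive state) is easy to evaluate directly; a small sketch showing that a coherent qubit state yields strictly more extractable work than its dephased counterpart, the incoherent-vs-coherent split discussed in the abstract (the example state is illustrative, not from the paper):

```python
import numpy as np

# Standard ergotropy: E(rho) = tr(rho H) - tr(rho_passive H), where the
# passive state pairs descending eigenvalues of rho with ascending energies.

def ergotropy(rho, H):
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]      # populations, descending
    eps = np.sort(np.linalg.eigvalsh(H))            # energies, ascending
    return np.real(np.trace(rho @ H)) - np.sum(r * eps)

H = np.diag([0.0, 1.0])                             # qubit Hamiltonian
rho_coh = np.array([[0.5, 0.4], [0.4, 0.5]])        # state with coherence
rho_inc = np.diag(np.diag(rho_coh))                 # its dephased version

print(ergotropy(rho_coh, H), ergotropy(rho_inc, H))  # coherence adds extractable work
```

Here the dephased state is already passive (zero ergotropy), so all of the extractable work in `rho_coh` comes from its off-diagonal coherence.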
Metastable confinement in Rydberg lattice gauge theories
This paper studies confinement and string breaking phenomena in quantum gauge theories using Rydberg atom arrays as quantum simulators. The researchers demonstrate how competition between string tension and particle interactions leads to metastable confinement dynamics and controllable string breaking through resonant energy matching.
Key Contributions
- Demonstration of metastable confinement dynamics in U(1) lattice gauge theory using Rydberg atoms
- Discovery of resonant string breaking mechanism through controlled energy matching
- Extension of the mechanism to both static and Floquet-driven quantum systems
View Full Abstract
Confinement and string breaking are two fundamental phenomena in gauge theories. Signatures of both are currently pursued in quantum-simulator experiments, opening a new angle on strongly interacting dynamics of gauge fields out of equilibrium, complementary to traditional particle-physics settings. In this work, we report the emergence of metastable confinement dynamics in a U(1) lattice gauge theory, originating from the competition between string tension and four-Fermi coupling - a competition that naturally arises in Rydberg atom arrays. We show that the initial string state can be resonantly melted through controlled energy matching, a phenomenon we identify as resonant string breaking. We demonstrate this mechanism for both static and Floquet-driven systems, where periodic modulation generates a spectrum of tunable sideband resonances. Our work provides new insights into the mechanisms of confinement and string breaking driven by long-range interactions and time-dependent fields, which are available in current quantum simulators on a variety of platforms.
Experimental demonstration of the absence of noise-induced barren plateaus using information content landscape analysis
This paper experimentally studies barren plateaus in variational quantum algorithms on IBM quantum hardware, specifically investigating whether noise causes optimization gradients to vanish. The researchers demonstrate that under amplitude damping noise (characterized by T1 coherence times), noise-induced barren plateaus do not occur, contrary to some theoretical predictions.
Key Contributions
- Experimental demonstration on IBM quantum hardware that noise-induced barren plateaus do not occur under T1-dominated amplitude damping noise
- Development and application of Information Content Landscape Analysis (ICLA) to efficiently estimate gradient norms for large variational quantum circuits
- Evidence that conventional benchmarking metrics based on average device characteristics are insufficient for predicting variational algorithm performance
View Full Abstract
Variational quantum algorithms are a very promising tool for near-term quantum computing. However, despite their flexibility and wide applicability, their performance is fundamentally limited by Barren Plateaus (BP), where gradients vanish and optimization becomes intractable. Noise-Induced Barren Plateaus (NIBP) are particularly interesting, as they are predicted to arise due to noise accumulation independent of circuit structure. We experimentally study NIBP on IBM quantum hardware and demonstrate their absence under non-unital amplitude damping characterized by the qubit's $T_1$ coherence times. We use Information Content Landscape Analysis (ICLA) to efficiently estimate gradient norms for circuits ranging from 8 to 102 qubits, with hundreds of parameters and circuit runtimes of hundreds of microseconds. Classical simulations of the 8-qubit case under noiseless, depolarizing, amplitude damping, and dephasing noise models serve as a baseline comparison. We thoroughly analyze the experimental results considering calibration data, shot-noise, and circuit structure. We robustly observe that the gradient magnitude saturates beyond a characteristic circuit runtime, in contrast with the exponential decay expected from NIBP. Using recent theoretical results, we corroborate that under $T_1$-dominated noise NIBP do not occur and extract an effective $T_1^\text{eff}$ that is significantly shorter than suggested by standard calibration data. Our results experimentally confirm recent predictions on the absence of NIBP under non-unital noise. These findings also indicate that conventional benchmarking metrics based on average values for device characteristics may be insufficient to predict variational algorithm performance, and that full distributions need to be considered.
A robust method to reach the motional quantum regime of (anti-)protons in cryogenic multi-Penning traps
This paper develops a new cooling method for charged particles in Penning traps that sweeps trapping frequencies to overcome anharmonicity issues, enabling particles like protons and antiprotons to reach quantum motion states for precision spectroscopy. The technique uses sympathetic laser cooling between spatially separated particles to achieve quantum-limited motion control.
Key Contributions
- Development of frequency-swept sympathetic cooling scheme that overcomes anharmonicity limitations
- Demonstration of pathway to reach quantum regime of motion for protons/antiprotons in Penning traps for CPT invariance tests
View Full Abstract
Sympathetic laser cooling is a key concept in precision spectroscopy and quantum state control of charged particles. Significant challenges arise in the metrologically relevant case where the effective interaction between the particles is weak and the particle to be cooled exhibits significant initial motional energy. Here we specifically address the most generally applicable case where the laser-cooled ion and the particle of interest are confined to two spatially separate potential wells with equal motional frequency for resonant enhancement of the cooling dynamics. We analyze the latter through numerical simulations and find that anharmonicities of the potential wells can prevent maintaining the resonance condition throughout the cooling process and thus inhibit a significant reduction in motional energy. We propose a cooling scheme that sweeps the trapping frequency of the potential wells. We show that this scheme enables efficient cooling from cryogenic temperatures all the way to the quantum regime of motion. As a specific application scenario, we analyze the sympathetic cooling of (anti-)protons into the quantum regime of motion for quantum-logic-spectroscopy-based tests of CPT invariance at the quantum limit in Penning traps. Nevertheless, our results and cooling strategies are generally applicable to other laser-inaccessible ion species.
Control of Multipartite Entanglement through Anisotropy against Thermal Noise
This paper studies how to protect quantum entanglement between multiple particles from environmental noise by adjusting the magnetic properties of a spin chain system. The researchers show that by tuning the anisotropy parameter in an XXZ spin chain, they can make certain entangled states more robust against thermal disruption.
Key Contributions
- Analytical demonstration that anisotropy tuning can enhance multipartite entanglement robustness against thermal noise
- Introduction of interaction-induced spectral control as a mechanism for stabilizing multipartite entanglement in quantum systems
View Full Abstract
Preserving multipartite entanglement in open many-body quantum systems is fundamentally limited by unavoidable environmental noise. We study the open-system dynamics of multipartite entanglement in an anisotropic XXZ spin chain interacting with a thermal spin bath, focusing on two states with distinct types of multipartite entanglement: the generalized GHZ and the generalized W state. Using a master-equation approach combined with the Bethe ansatz technique, we show analytically that robustness of multipartite entanglement at low temperatures can be enhanced by suitably tuning the anisotropy of the system. Our results highlight interaction-induced spectral control as a mechanism for stabilizing multipartite entanglement in quantum computing platforms.
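For reference, the anisotropic XXZ chain referred to above has the standard form, with the anisotropy $\Delta$ multiplying the $z$-$z$ coupling (a schematic form; the paper's sign and normalization conventions may differ):

```latex
H_{\mathrm{XXZ}} = J \sum_{i} \left( S^{x}_{i} S^{x}_{i+1} + S^{y}_{i} S^{y}_{i+1} + \Delta\, S^{z}_{i} S^{z}_{i+1} \right)
```

Tuning $\Delta$ reshapes the many-body spectrum, which is the "interaction-induced spectral control" mechanism the summary highlights.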
Experimental investigation of the effect of dispersion on squeezing generation in a synchronously pumped optical parametric oscillator
This paper experimentally studies how dispersion affects the generation of squeezed light in a synchronously pumped optical parametric oscillator. The researchers found that squeezing levels remain unchanged despite varying dispersion conditions, leading them to propose a new model treating dispersion as spectral filtering in the quantum interaction.
Key Contributions
- Experimental demonstration that intracavity dispersion does not significantly affect squeezing generation in SPOPOs, contrary to theoretical predictions
- Development of a new modeling framework treating dispersion as effective spectral filtering in the interaction Hamiltonian
View Full Abstract
An experimental investigation of intracavity dispersion effects in a synchronously pumped optical parametric oscillator (SPOPO) is presented. A flexible setup combining spectral and phase shaping of both pump and local oscillator fields with frequency-resolved balanced homodyne detection is employed to examine how intracavity dispersion influences squeezing. Different cavity configurations with varying finesse and dispersion conditions are studied, and the squeezing is measured as a function of pump power and local oscillator bandwidth. Contrary to expectations based on existing theoretical models, the measured squeezing levels remain essentially unchanged as dispersion varies. To account for these observations, a modeling approach is introduced in which intracavity dispersion is described as an effective spectral filtering occurring at the stage of SPOPO supermode generation. Within this framework, the filtering is incorporated directly into the interaction Hamiltonian of the nonlinear process. This perspective establishes a consistent experimental benchmark for the study of dispersion in SPOPOs and underscores the importance of spectral filtering in the interpretation of multimode squeezing experiments.
Full Single-Quantum Control of Particles in Penning Traps for Symmetry Tests at the Quantum Limit
This paper presents a quantum logic technique for ultra-precise measurements of antimatter particles (antiprotons) by coupling them to beryllium ions in specialized magnetic traps, aiming to test fundamental physics symmetries like CPT invariance. The researchers describe a new experimental setup that could push precision measurements of particle properties to the quantum limit.
Key Contributions
- Development of quantum logic inspired cooling and detection techniques for antiproton g-factor measurements
- Design of cryogenic multi-Penning-trap stack system for quantum-level control of antimatter particles
View Full Abstract
The BASE collaboration aims to measure antimatter systems with the highest precision in order to perform a rigorous test of CPT symmetry and search for physics beyond the Standard Model. As part of the BASE collaboration, we pursue the development of quantum logic inspired cooling and detection techniques for g-factor measurements of (anti-)protons. Implementing these methods requires full quantum-level control of individual antimatter particles confined in cryogenic Penning traps. By mapping the (anti-)proton's internal state onto a co-trapped 9Be+ "logic" ion via free Coulomb coupling in a double-well potential, we can accelerate measurement cycles and push g-factor precision measurements on (anti-)protons toward the quantum limit. Here, we present an overview of the proposed method and the current status of the project, with special emphasis on the new cryogenic multi-Penning-trap stack and the proton detection system.
No Absolute Hierarchy of Quantum Complementarity
This paper shows that the traditional hierarchy of quantum complementarity (which observables are more incompatible than others) is not absolute but depends on how quantum resources are configured. The researchers prove that two sets of observables can have reversed incompatibility ordering depending on whether quantum probes are arranged as identical copies or as parallel-antiparallel pairs.
Key Contributions
- Proves No-Comparison Theorem showing no global ordering of incompatible observables is preserved across all finite-copy configurations
- Demonstrates that quantum incompatibility depends on global configuration of quantum probes, not just the observables themselves
View Full Abstract
Bohr's principle of complementarity, prohibiting simultaneous access to certain physical properties within a single experimental arrangement, is considered to be a defining feature of quantum mechanics. It is commonly viewed as inducing an intrinsic hierarchy among incompatible observables: some sets of quantum properties are fundamentally more incompatible than others, as quantified by the maximal sharpness permitting their joint measurement. We show that this hierarchy ceases to be absolute in the multi-copy regime. Analyzing qubit spin observables, we prove a No-Comparison Theorem establishing that no global ordering of incompatible observable sets is preserved across all finite-copy configurations. In particular, two sets of observables can exhibit reversed complementarity ordering depending solely on whether the available resources are arranged as identical copies or as parallel-antiparallel pairs. Thus, the degree of quantum incompatibility is not an intrinsic property of observables alone but depends on the global configuration of the prepared quantum probes. Our results uncover a configuration-dependent structure of complementarity, reveal a subtle role of entanglement in shaping the structure of measurement limitations, and call for a reassessment of quantum information protocols under finite resources.
Generating entangled polaritonic condensates by pumping with entangled pairs of photons
This paper investigates how to create entangled quantum states in two spatially separated polariton condensates by pumping them with entangled photon pairs, and analyzes how long such entanglement can survive despite noise and losses in the system.
Key Contributions
- Demonstration that entangled polariton condensates can be maintained despite decoherence from excitonic reservoirs and photon losses
- Quantitative estimates for required entangled photon flux to achieve steady-state entanglement and characterization of entanglement lifetime
View Full Abstract
We investigate the steady state of two single-mode uniform spatially separated polaritonic condensates exposed to resonant pumping with entangled pairs of photons. We demonstrate the in-principle possibility of driving the system to an entangled state despite its exposure to noise arising from the excitonic reservoir and photon leakage through the microcavity mirrors. Estimates are provided for the flux of entangled particles required to drive the system into a steady state that violates the partial-transpose criterion for entanglement. Furthermore, we trace the evolution of the system after a sudden disappearance of the entangled pumping. Our analysis provides estimates for the entanglement lifetime in a system of two exciton-polariton condensates.
Optimization-based Unfolding in High-Energy Physics
This paper develops a method to solve the 'unfolding' problem in high-energy physics (reconstructing true particle distributions from detector measurements) using optimization techniques that can run on both classical computers and quantum annealers. The researchers created a software package called QUnfold that implements their approach using D-Wave's quantum annealing hardware and showed it performs competitively with existing methods.
Key Contributions
- Reformulation of the physics unfolding problem as a QUBO optimization suitable for quantum annealing
- Development of QUnfold software package integrating classical and quantum-hybrid solvers for practical HEP applications
View Full Abstract
In High-Energy Physics, unfolding is the process of reconstructing true distributions of physical observables from detector-distorted measurements. Starting from its reformulation as a regularized quadratic optimization, we develop a framework to tackle this problem using both classical and quantum-compatible methods. In particular, we derive a Quadratic Unconstrained Binary Optimization (QUBO) representation of the unfolding objective, allowing direct implementation on quantum annealing and hybrid quantum-classical solvers. The proposed approach is implemented in QUnfold, an open-source Python package integrating classical mixed-integer solvers and D-Wave's hybrid quantum solver. We benchmark the method against widely used unfolding techniques in RooUnfold, including response matrix inversion, iterative Bayesian unfolding, and singular value decomposition unfolding, using synthetic datasets with controlled distortion effects. Our results demonstrate that the optimization-based approach achieves competitive reconstruction accuracy across multiple distributions while naturally accommodating regularization within the objective function. This work establishes a unified optimization perspective on unfolding and provides a practical pathway for exploring quantum-enhanced methods in experimental HEP data analysis.
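The QUBO reformulation described above can be sketched in a few lines: binary-encode each truth bin, expand the least-squares objective over the bits, and minimize. A minimal toy instance, with a brute-force loop standing in for the annealer (QUnfold's actual encoding and regularization may differ):

```python
import itertools
import numpy as np

# Sketch of unfolding as a QUBO: binary-encode each truth bin, expand
# ||R x - d||^2 over the bits, and minimize.  Brute force stands in for the
# quantum annealer; QUnfold's exact encoding/regularization may differ.

R = np.array([[0.9, 0.1],
              [0.1, 0.9]])            # toy detector response matrix
x_true = np.array([3, 5])
d = R @ x_true                        # "measured" spectrum (noiseless here)

nbits = 3                             # each bin counts 0..7
nbins = len(x_true)
W = np.zeros((nbins, nbins * nbits))  # x = W b, binary expansion of each bin
for i in range(nbins):
    for k in range(nbits):
        W[i, i * nbits + k] = 2**k

A = R @ W
Q = A.T @ A - 2 * np.diag(A.T @ d)    # b^T Q b == ||A b - d||^2 - ||d||^2 for binary b
best = min(itertools.product([0, 1], repeat=nbins * nbits),
           key=lambda b: np.array(b) @ Q @ np.array(b))
x_hat = W @ np.array(best)
print(x_hat)                          # recovers x_true on this noiseless toy
```

On real data the objective also carries the regularization term the abstract mentions; here the noiseless toy makes the unregularized minimum unique.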
Effective Repulsive Action of Gravitational Quantum Superpositions Under Postselection
This paper proposes that when a mass is prepared in a quantum superposition of different spatial positions, it can create an apparent repulsive gravitational effect on a probe mass through postselection and negative weak values. The work suggests this demonstrates quantum superposition of gravitational forces and spacetime itself.
Key Contributions
- Theoretical framework for quantum superposition of gravitational forces leading to repulsive effects
- Proposed experimental implementation using spin-bearing nanocrystals to test quantum gravity effects
View Full Abstract
A classic feature of gravity is that it is an attractive force. If a source mass is prepared in a localized (classical-like) state, it will cause another probe mass to move towards it. Here we consider the situation in which a source mass is prepared in a quantum superposition of distinct spatial states while a probe mass interacts with it. Conditional on the detection of the source mass in a specific state, the probe mass will be found to move away from the source mass (repulsion). This signifies the quantum superposition of gravitational forces acting on the probe mass and thereby the fact that spacetime can exist in quantum superpositions. The technique used is the repulsive effect arising from an anomalous negative weak value. A potential experimental implementation with spin-bearing nanocrystals is outlined.
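The "anomalous negative weak value" invoked above has the standard Aharonov-Albert-Vaidman form. Schematically, with preselected state $|\psi\rangle$ and postselected state $|\phi\rangle$ (the paper's detailed gravitational calculation is not reproduced here):

```latex
A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle},
\qquad |\langle \phi | \psi \rangle| \ll 1 \;\Rightarrow\; A_w \ \text{can lie outside the spectrum of } \hat{A}.
```

When $A_w$ turns negative for an operator whose eigenvalues are all positive, the conditioned momentum kick on the probe reverses sign, which is the effective repulsion described in the abstract.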
Quantum squeezing in an all-resonant periodically poled lithium niobate microresonator
This paper demonstrates the generation of squeezed light on a chip using a lithium niobate microresonator, achieving record-high squeezing levels for integrated quantum optical platforms. The researchers created a device that produces quantum-enhanced light states with reduced noise, which can improve the sensitivity of optical measurements and sensors.
Key Contributions
- Achieved highest squeezing ratio (-7.52 dB on-chip) among integrated χ(2) cavity platforms
- Demonstrated first quasi-phase matched, fully resonant χ(2) cavity squeezer on chip
- Established scalable route to power-efficient integrated squeezed-light sources
View Full Abstract
Quantum noise limits the sensitivity of optical measurements, but squeezed states of light enable quantum-enhanced metrology, sensing, and information processing. Most on-chip squeezed-light sources rely on Kerr ($χ^{(3)}$) nonlinearities and remain limited by pump-power and excess-loss constraints. Quadratic ($χ^{(2)}$) platforms instead provide stronger parametric interactions, lower pump power requirements, and greater spectral engineering flexibility. Here, we demonstrate strong, broadband squeezed-light generation on a thin-film lithium niobate (TFLN) photonic chip using a dual-resonant optical parametric amplifier implemented in a single periodically poled LN (PPLN) microresonator. Near-full-depth domain inversion is achieved simultaneously with highly over-coupled resonances, exhibiting escape efficiencies exceeding 90% and intrinsic quality factors above 2.5 million in a 0.6 mm$^2$ X-cut TF-PPLN resonator, enabling efficient squeezing at 1587 nm when pumped at 793.5 nm. Operating in the continuous-wave regime, we directly measure -0.81 dB of squeezing below the shot-noise limit with a pump power of 27 mW, together with +4.29 dB of anti-squeezing. From these measurements, we infer an on-chip squeezing level of -7.52 dB $\pm$ 0.22 dB (95% confidence interval: [-7.96,-7.10] dB), and an on-chip anti-squeezing level of +9.62 dB $\pm$ 0.25 dB. We demonstrate single-mode squeezing at degeneracy with a squeezed-light spectrum exceeding 10.3 THz. This work reports the highest squeezing ratio among integrated $χ^{(2)}$ cavity platforms and the first quasi-phase matched, fully resonant $χ^{(2)}$ cavity squeezer on chip, establishing a scalable route to fully integrated power-efficient squeezed-light sources for quantum-enhanced sensing and metrology.
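The gap between the measured (-0.81 dB) and inferred on-chip (-7.52 dB) squeezing levels follows the standard single-loss-channel model. A sketch, where the efficiency value is back-solved from the abstract's two numbers purely for illustration (the paper's actual loss budget is not quoted):

```python
import math

# Standard one-parameter loss model for squeezing: measured quadrature
# variance V_meas = eta * V_chip + (1 - eta), with eta the total detection
# efficiency.  eta is back-solved here from the abstract's numbers
# (-7.52 dB on-chip vs -0.81 dB measured) purely for illustration.

def db_to_var(db):  return 10 ** (db / 10)
def var_to_db(v):   return 10 * math.log10(v)

V_chip = db_to_var(-7.52)
V_meas = db_to_var(-0.81)
eta = (1 - V_meas) / (1 - V_chip)      # efficiency consistent with both numbers
print(round(eta, 3))                    # ~0.207
print(round(var_to_db(eta * V_chip + (1 - eta)), 2))  # round trip: ~-0.81 dB
```

The model also shows why deep on-chip squeezing is hard to observe directly: the vacuum admixed by loss dominates the measured variance long before the squeezed quadrature does.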
Ideal random quantum circuits pass the LXEB test
This paper proves that ideal (noiseless) random quantum circuits can reliably pass the linear cross-entropy benchmark test, which is used to demonstrate quantum computational advantage. The authors show different success probabilities depending on circuit depth and establish theoretical connections to statistical distributions of quantum circuit outputs.
Key Contributions
- Theoretical proof that noiseless random quantum circuits pass LXEB test with high probability for different circuit depths
- Established concentration properties of random circuit collision probabilities and connections to Porter-Thomas distribution
- Advanced the mathematical framework using higher moments and high-degree approximate designs for analyzing random quantum circuits
View Full Abstract
We show that noiseless random quantum circuits pass the linear cross-entropy benchmark (LXEB) test with high probability. If the circuits are linear depth, and thus form unitary 4-designs, the LXEB test is passed with probability $1-O(1/\sqrt{k})$, where $k$ is the number of independently drawn samples from the output distribution of the random circuit. If the circuits are of depth $\tilde O(n^2)$, and thus form unitary $n$-designs, the LXEB test is passed with probability $1-O(e^{-k \log(n)/n})$. In proving our results, we show strong concentration of the random circuit collision probability at linear depth and establish that the tails of the distribution of random circuit output probabilities start to resemble Porter-Thomas at near-quadratic depths. Our analysis employs higher moments and high-degree approximate designs.
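The LXEB statistic itself, $F = D\,\langle p(x)\rangle - 1$ over the drawn samples, is easy to illustrate numerically. A sketch using a Haar-random state (complex Gaussian amplitudes) as a proxy for a deep random circuit's Porter-Thomas output, an assumption consistent with the depths discussed above:

```python
import numpy as np

# LXEB statistic F = D * mean_sample(p(x)) - 1 on a Porter-Thomas-like
# distribution (Haar-random state as a proxy for a deep random circuit).
# Samples from the true distribution give F ~ 1; uniform "spoofed"
# samples give F ~ 0.

rng = np.random.default_rng(0)
n = 10
D = 2**n
amp = rng.normal(size=D) + 1j * rng.normal(size=D)
p = np.abs(amp)**2
p /= p.sum()                                  # Porter-Thomas-like probabilities

k = 5000                                      # number of drawn samples
ideal   = rng.choice(D, size=k, p=p)          # sampling the true distribution
spoofed = rng.integers(0, D, size=k)          # uniform guessing

F_ideal   = D * p[ideal].mean() - 1
F_spoofed = D * p[spoofed].mean() - 1
print(round(F_ideal, 2), round(F_spoofed, 2))
```

The $O(1/\sqrt{k})$ concentration proved in the paper is visible here as the shrinking spread of `F_ideal` around 1 as `k` grows.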
Quantum-Optically Resolving the Number of Colloidal Quantum Dots in a Subwavelength Volume
This paper develops a quantum optical method to precisely count the number of individual quantum dots (1-10) confined in tiny capsules by analyzing how they collectively emit light. The technique uses time-domain measurements and photon correlation analysis to non-invasively determine exactly how many quantum emitters are present in a subwavelength volume.
Key Contributions
- Development of time-domain quantum optical methodology for precise counting of artificial atoms using second-order photon correlation
- Demonstration of Dicke superradiance framework for colloidal quantum dots enabling non-invasive numbering from 1-10 emitters
View Full Abstract
The number resolution of solid-state artificial atoms is of fundamental interest for the study of quantum few-body systems, yet remains experimentally challenging. Quantum optical experiments offer a non-invasive approach which links up macroscopic measurements with the quantity of quantum emitters. In this work, we propose a time-domain quantum optical methodology for the strict numbering of colloidal CdSe/CdS/ZnS quantum dots (QDs) confined in subwavelength-size polystyrene capsules. The non-polarized, homogeneously broadened emission of colloidal QDs in the subwavelength volume satisfies the description of Dicke's superradiance of identical quantum emitters. An analytic relation describes the numerical dependence of the second-order photon correlation on the number and the collective lifetime of emitters, yielding an experimental counting range of colloidal QDs from one to ten. This work provides a robust pathway for the non-invasive numbering of artificial atoms and the investigation of collective light-matter interactions at the nanoscale.
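As a simplified baseline for the counting idea above: for N *independent* single-photon emitters the second-order correlation obeys the classic relation g²(0) = 1 - 1/N, which is invertible for small N. The paper's analytic relation additionally involves the collective (Dicke) lifetime, so this sketch shows only the independent-emitter limit:

```python
# Baseline counting relation for N independent single-photon emitters:
# g2(0) = 1 - 1/N, invertible as N = 1 / (1 - g2).  The paper's analytic
# relation also folds in the collective (superradiant) lifetime; this is
# only the simplified independent-emitter limit.

def g2_of_n(n):   return 1 - 1 / n
def n_of_g2(g2):  return round(1 / (1 - g2))

for n in range(1, 11):
    assert n_of_g2(g2_of_n(n)) == n      # 1..10 emitters remain distinguishable
print([round(g2_of_n(n), 3) for n in (1, 2, 5, 10)])
```

The shrinking spacing between consecutive g² values (0.9 vs 0.889 for 10 vs 9 emitters) is why counting beyond roughly ten emitters becomes experimentally hard.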
Thermodynamic Uncertainty Relation with Quantum Feedback
This paper derives a fundamental limit on precision in quantum systems under feedback control by extending the thermodynamic uncertainty relation to include quantum mutual information. The authors show how feedback control can suppress fluctuations at a thermodynamic cost and demonstrate improved precision in a quantum clock model.
Key Contributions
- Derived finite-time thermodynamic uncertainty relation for quantum systems with feedback control incorporating quantum mutual information
- Demonstrated enhanced precision in quantum clock model through feedback control using single thermal reservoir
View Full Abstract
Fluctuations are intrinsic to microscopic systems and impose fundamental limits on nonequilibrium precision, as captured by the thermodynamic uncertainty relation (TUR), which links current fluctuations to entropy production. While feedback control is expected to further suppress fluctuations, its role within the TUR framework has remained unclear, particularly in quantum systems where control is inherently information-driven. In this Letter, we consider open quantum systems weakly coupled to a thermal environment, in which quantum jumps are continuously monitored, and Markovian feedback is applied. Using quantum mutual information to quantify the information contribution induced by feedback, we derive a finite-time TUR for arbitrary time-integrated currents in terms of entropy production and mutual information. Our results uncover how feedback control suppresses fluctuations together with thermodynamic cost and establishes a fundamental precision bound imposed by information-based control. As an application, we analyze a quantum clock model and demonstrate that the clock precision can be enhanced by feedback control in the presence of a single thermal reservoir.
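Schematically, the classical TUR and its feedback-modified form discussed above read as follows, with $\Sigma$ the entropy production and $I$ the mutual information acquired by the monitoring (a schematic form in natural units; the paper's finite-time bound may differ in prefactors):

```latex
\frac{\operatorname{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2}{\Sigma}
\qquad\longrightarrow\qquad
\frac{\operatorname{Var}(J)}{\langle J \rangle^{2}} \;\ge\; \frac{2}{\Sigma + I}
```

The information term loosens the bound, which is how feedback can suppress current fluctuations below the feedback-free limit at the thermodynamic cost of acquiring $I$.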
Finite key analysis of experimentally realized practical COW-QKD protocol
This paper presents an experimental implementation of the Coherent One-Way Quantum Key Distribution (COW-QKD) protocol under realistic conditions and develops a framework for analyzing security with finite key lengths. The researchers achieved stable secure key rates of 1.2-1.6 kbps and demonstrated that COW-QKD can work securely over medium-range distances up to 156-171 km of optical fiber.
Key Contributions
- Experimental implementation of COW-QKD protocol under realistic conditions with stable key rates
- Development of clean framework for finite key analysis of COW-QKD protocol
- Demonstration of secure operation over medium-range distances up to 156-171 km
View Full Abstract
An experimental implementation of the Coherent One-Way Quantum Key Distribution (COW-QKD) protocol is reported under realistic conditions, and a clean, easy-to-use framework for performing finite key analysis of the COW-QKD protocol is provided by extending a set of existing results. This framework is used to perform finite key rate analysis of the COW-QKD protocol with the actual parameters used in the experimental realization reported here. The system is kept running for several hours with different experimental parameters, and stable secure key rates between 1.2 and 1.6 kbps are observed. In addition, the QBER, phase error rate, and secure key rate are obtained under finite key analysis, and it is shown that COW-QKD is secure for medium-range transmissions (up to ~156 (171) km of optical fiber with 0.2 dB loss per km if the detector efficiency is 0.1 (0.2) and the other parameters are the same as those used in this experiment).
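A back-of-envelope check of the quoted distance pair, assuming only fiber loss and detector efficiency matter (a deliberate simplification; the paper's result comes from the full finite-key analysis):

```python
def transmittance(length_km, loss_db_per_km=0.2, det_eff=0.1):
    """Overall detection probability per signal photon: fiber
    transmission times detector efficiency (ignores dark counts and
    finite-key effects, which the paper's analysis includes)."""
    fiber = 10.0 ** (-loss_db_per_km * length_km / 10.0)
    return fiber * det_eff

# The ~15 km gap between the two quoted limits is ~3 dB of extra
# fiber loss, i.e. a factor of ~2 -- which is what doubling the
# detector efficiency from 0.1 to 0.2 buys back.
t_low = transmittance(156, det_eff=0.1)
t_high = transmittance(171, det_eff=0.2)
```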
Vibration induced transparency and absorption with two ion ensembles in a linear trap
This paper studies two groups of trapped ions and how they respond to laser light, discovering that vibrations can either make the ions transparent to light or cause them to absorb it, depending on how the laser frequency is tuned relative to the ions' vibrational modes.
Key Contributions
- Discovery of vibration-induced transparency in ion ensembles when laser is tuned to red sideband
- Demonstration of absorption-to-transparency conversion when laser is tuned to blue sideband
View Full Abstract
We study the spectra of collective low-lying excitations of two atomic ion ensembles confined in a linear trap and addressed by lasers. When the left ensemble is driven by an external optical field, its response spectrum to the incident light shows a vibration-induced transparency phenomenon when the laser addressing the ions is detuned to the first red sideband. When the detuning is set to the first blue sideband, the response spectrum shows a conversion from an absorption peak to a transparency window. Furthermore, we investigate the fluctuation spectra of the collective excitation modes of the ion ensembles and observe similar phenomena.
SYK thermal expectations are classically easy at any temperature
This paper presents a classical algorithm that can efficiently compute thermal expectations in quantum many-body systems like the Sachdev-Ye-Kitaev (SYK) model, showing that these calculations offer no quantum advantage even when the thermal states are highly entangled. The results suggest that classical methods can handle these computations at all temperatures above a phase transition, challenging assumptions about when quantum computers provide computational advantages.
Key Contributions
- Development of a quasi-polynomial classical algorithm for thermal expectations with cost n^O(log n/ε)
- Demonstration that SYK model thermal calculations remain classically tractable even in highly entangled regimes previously thought to require quantum advantage
View Full Abstract
Estimating thermal expectations of local observables is a natural target for quantum advantage. We give a simple classical algorithm that approximates thermal expectations, and we show it has quasi-polynomial cost $n^{O(\log n/\epsilon)}$ for all temperatures above a phase transition in the free energy. For many natural models, this coincides with the entire fast-mixing, quantumly easy phase. Our results apply to the Sachdev-Ye-Kitaev (SYK) model at any constant temperature -- including when the thermal state is highly entangled and satisfies polynomial quantum circuit lower bounds, a sign problem, and nontrivial instance-to-instance fluctuations. Our analysis of the SYK model relies on the replica trick to control the complex zeros of the partition function.
Loss-insensitive quantum noise reduction in a Raman amplifier with coherent feedback
This paper demonstrates a method to reduce quantum noise in Raman amplifiers by using coherent feedback of correlated Stokes fields and atomic spin waves. The technique achieves up to 6 dB noise reduction that is independent of feedback loss at high gain, making it potentially useful for quantum precision measurements.
Key Contributions
- Demonstration of loss-insensitive quantum noise reduction in Raman amplifiers using coherent feedback
- Achievement of 6 dB maximum noise reduction independent of feedback loss at high gain
- Phase-sensitive amplification properties that expand applications in quantum precision measurement
View Full Abstract
A quantum amplifier usually adds extra noise inevitably through coupling to internal degrees of freedom while amplifying the signal. The introduction of quantum correlations can effectively suppress this extra noise. In this work, we utilize the established quantum correlation between the Stokes field and atomic spin waves in the Raman amplification process to feedback a portion of the Stokes field into the amplifier. This leads to a reduction in quantum noise that is independent of the feedback loss at high gain. A maximum of 6 dB noise reduction is observed. The single-path feedback amplifier is found to be sensitive to the feedback phase, a property that expands its potential for applications in quantum precision measurement, and the general concept can be extended to integrated optics and fiber optic systems.
Optimizing Doppler laser cooling protocols for quantum sensing with 3D ion crystals in a Penning trap
This paper develops new computational methods to simulate laser cooling of very large 3D ion crystals (up to 100,000 ions) in Penning traps, and uses these simulations to optimize cooling protocols for quantum sensing applications.
Key Contributions
- Development of efficient numerical framework for simulating laser cooling of up to 10^5 ions in 3D crystals
- Discovery of new cooling pathways using axial components in E×B modes
- Demonstration of enhanced perpendicular kinetic energy cooling below 1 mK in prolate ion crystals
View Full Abstract
Large, 3D trapped ion crystals offer improved sensitivity in quantum sensing protocols, and are expected to be implemented as platforms in near-future experiments. However, numerical techniques used to study the laser cooling of such crystals are inefficient as the number of ions, $N$, in the crystal increases. Here we develop a powerful numerical framework to simulate laser cooling of up to $10^5$ ions stored in a Penning trap. We apply this framework to characterize and optimize the cooling of ellipsoidal 3D crystals. We document new pathways to enhanced cooling based on the addition of an axial component to the potential energy-dominated $\boldsymbol{E}\times\boldsymbol{B}$ modes. Furthermore, we observe greatly enhanced cooling of the perpendicular kinetic energy to below 1 mK in prolate ion crystals, enabling a simplified cooling beam setup for such crystals. We propose specific values of trap and laser beam parameters which lead to optimal cooling in a variety of examples. This work illustrates the feasibility of preparing large 3D crystals for high-sensitivity quantum science protocols, motivating their use in future experiments.
Gravitational decoherence and recoherence of a composite particle: the interplay between gravitons and a classical Newtonian potential
This paper studies how gravitational effects cause quantum systems to lose their quantum properties (decoherence) by analyzing the combined effects of gravitons and classical gravity on particles with internal structure. The researchers found that while classical gravity can sometimes restore quantum coherence, gravitons inevitably cause decoherence over long timescales, even for very small particles.
Key Contributions
- Extended analysis of gravitational decoherence by including both graviton effects and classical Newtonian potential interactions
- Demonstrated that classical Newtonian potential can lead to recoherence in systems without dynamical internal degrees of freedom
View Full Abstract
The fact that gravitational environments cannot be shielded (since gravity is universal) makes them of great theoretical interest for decoherence mechanisms and for the quantum-to-classical transition. While past results seemed to indicate that graviton-induced decoherence of spatial superpositions happens only for macroscopic systems, recently it was shown that this mechanism can be enhanced through the system's own dynamical internal structure. In this work, we extend this analysis by including the interaction with a classical Newtonian potential. We show that, although the graviton bath alone dominates the mechanism for short times compared to a timescale established by the size of the quantum spatial superposition, the interplay between the gravitons and the internal degrees of freedom of the system renders decoherence inevitable in the long-time limit, even for microscopic masses. We also show that this mechanism is slightly slowed down by the interplay with the classical Newtonian potential, which, for systems without dynamical internal degrees of freedom, can even lead to recoherence, at least in principle.
Quantified convergence of general homodyne measurements with applications to continuous variable quantum computing
This paper develops mathematical bounds to quantify how well broadband pulsed homodyne measurements converge to ideal quadrature measurements as the local oscillator amplitude increases. The authors apply these bounds to evaluate the performance of continuous variable quantum computing applications including quantum teleportation and error correction with GKP codes.
Key Contributions
- Derived rigorous fidelity bounds for convergence of broadband pulsed homodyne measurements to ideal quadrature measurements
- Applied theoretical bounds to practical continuous variable quantum computing protocols including GKP error correction and quantum teleportation
View Full Abstract
In arXiv:2503.00188 we introduced broadband pulsed (BBP) homodyne measurements as a generalization of standard pulsed homodyne quadrature measurements. BBP can take advantage of detectors such as calorimeters that have the potential for high efficiency over a broad spectral range. BBP homodyne retains the advantages of standard pulsed homodyne, enabling measurement of arbitrary quadratures in the limit of large-amplitude local oscillators (LO). Here we quantify the convergence of standard and BBP homodyne quadrature measurements to ideal measurements of the quadrature of interest. We obtain lower bounds on the fidelity of the post-measurement classical-quantum state of outcomes and unmeasured modes, and on the fidelity of the states obtained after applying operations conditional on measurement outcomes. The bounds depend on the LO amplitude and the moments of number operators. We demonstrate the practical relevance of these bounds by evaluating them for standard pulsed homodyne used for estimating values of the characteristic function of the Wigner distribution and expectations of moments, for quantum teleportation, and for continuous variable error correction with GKP codes.
Stabilization of Rydberg Dissipative Time Crystals Using a Scanning Fabry Perot Interferometer Transfer Lock
This paper demonstrates a low-cost laser stabilization technique using a scanning Fabry Perot interferometer to improve the stability of Rydberg atom experiments that study dissipative time crystals. The method significantly reduces laser frequency drift and improves measurement precision in multi-laser atomic physics setups.
Key Contributions
- Development of compact, low-cost laser stabilization using scanning Fabry Perot interferometer transfer locking
- Demonstration of improved stability for Rydberg atom experiments studying dissipative time crystals with order-of-magnitude improvements in Allan deviation
View Full Abstract
Stabilization of laser frequencies is critical for sensitive Rydberg measurements, including in applications such as dissipative time crystal (DTC) dynamics, yet conventional approaches often require complex or costly hardware. We demonstrate a compact, low-cost stabilization method using a scanning Fabry-Perot interferometer (SFPI) to transfer-lock a 960 nm coupler laser to an 852 nm probe. The lock suppresses the coupler's multi-MHz free-running drift and improves the Allan deviation by up to an order of magnitude, reaching <75 kHz at 66 s. Applied to DTC oscillations driven on a Rb two-photon D2 scheme, the second-harmonic 480 nm light (generated from the locked 960 nm laser) reduces the DTC frequency drift from >20 kHz to a few kHz and lowers the instability by more than an order of magnitude, with a minimum Allan deviation of 0.2 kHz at <10 s. These results establish SFPI-based transfer locking as a practical and accurate approach for scalable multi-laser Rydberg experiments requiring long-term stability in a compact, low-cost system.
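The Allan deviation figures quoted above are the standard two-sample deviation of a frequency record. A minimal sketch using the non-overlapping estimator on synthetic data (illustrative only, not the authors' analysis code):

```python
import numpy as np

def allan_deviation(freqs):
    """Non-overlapping Allan deviation of a frequency record at the
    basic sampling interval: the square root of half the mean squared
    difference of successive samples."""
    dy = np.diff(np.asarray(freqs, dtype=float))
    return float(np.sqrt(0.5 * np.mean(dy ** 2)))

# White frequency noise of 1 kHz RMS yields an Allan deviation of
# roughly 1 kHz; a perfectly stable record yields exactly zero.
rng = np.random.default_rng(0)
noisy = allan_deviation(1e3 * rng.standard_normal(100_000))
```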
Revisiting the Role of State Texture in Gate Identification and Fixed-Point Resource Theories
This paper analyzes protocols for identifying quantum gates (specifically CNOT gates) in quantum circuits using randomized input states, connecting this to quantum resource theories. The authors develop a general framework for 'fixed-point resource theories' and demonstrate improved gate identification methods that work across different laboratory measurement bases.
Key Contributions
- Improved fidelity-based gate identification protocol for distinguishing CNOT gates from single-qubit gates
- Introduction of fixed-point resource theories framework extending state texture and other quantum resource theories
- Demonstration of monotonicity properties for resource measures under free operations
View Full Abstract
A protocol for identifying controlled-NOT (CNOT) gates versus single-qubit-only gates in universal quantum circuits using randomized input states was recently shown to be intimately connected to the quantum resource of state texture. Here we revisit this gate identification protocol and demonstrate that a more general fidelity-based formulation succeeds for nearly all laboratory bases. We then examine a broader family of quantum resource theories, where a distinct resource theory can be defined for each choice of reference pure state, establishing core resource-theoretic requirements without the computational shortcut offered by the "grand sum" employed in the original formulation of state texture. By extending from single "resourceless" states to convex sets via a convex-roof construction, we recover single-qubit measures of known resource theories such as imaginarity and coherence. Finally, we introduce a family of "fixed-point resource theories" that includes fixed-point instances of the theories of state texture, genuine coherence, purity, and athermality. For these fixed-point resource theories we show that, under free operations, the fidelity-based lower bound is weakly monotonic, while specific violations of strong monotonicity are found for the convex-roof logarithmic measure.
Controlled symmetry breaking of the Fermi surface in ultracold polar molecules
Researchers demonstrate controlled deformation of the Fermi surface in ultracold polar molecules using dipole-dipole interactions, achieving tunable symmetry breaking through microwave shielding techniques. This work establishes a new platform for studying strongly correlated quantum matter and exploring pathways to topological superfluidity.
Key Contributions
- First observation of interaction-induced Fermi surface deformation in ultracold polar molecules
- Development of double microwave shielding technique that suppresses inelastic losses while preserving dipolar interactions
- Demonstration of continuous tuning between different interaction symmetries (axial U(1) to biaxial C2)
- Achievement of Fermi surface deformations up to 7% despite operating at much lower densities than previous atomic systems
View Full Abstract
Long-range anisotropic dipole-dipole interactions between ultracold polar molecules are predicted to drive exotic quantum phases, yet direct many-body signatures of these interactions in degenerate Fermi gases have remained elusive. Here, we report the observation of an interaction-induced controlled deformation of the Fermi surface, providing a clear many-body signature in a deeply degenerate Fermi gas of $^{23}\text{Na}^{40}\text{K}$ molecules. Using double microwave (MW) shielding, we prepare $8 \times 10^3$ molecules at $0.23(1)$ times the Fermi temperature, achieving a three-fold suppression of inelastic losses compared to single MW shielding while preserving strong elastic dipolar scattering. We observe Fermi surface deformations of up to $7\,\%$, more than two times larger than those observed in magnetic atoms, despite operating at two orders of magnitude lower densities. Crucially, we demonstrate continuous tuning of the interaction potential from axial U(1) to biaxial C$_{2}$ symmetry, directly imprinting this geometry onto the Fermi surface. We find excellent agreement between our experimental results and parameter-free Hartree-Fock theory. These results establish MW-shielded polar molecules as a highly tunable platform for exploring strongly correlated dipolar Fermi matter and offer a promising path towards topological superfluidity.
Quantum jumps in open cavity optomechanics and Liouvillian versus Hamiltonian exceptional points
This paper studies exceptional points in cavity optomechanical systems, which are special conditions where quantum states merge in non-Hermitian systems. The authors distinguish between two types of exceptional points based on whether quantum jumps are included in the dynamics, and show how thermal effects influence these points differently.
Key Contributions
- Distinguished between Liouvillian and Hamiltonian exceptional points in optomechanical systems based on quantum jump dynamics
- Developed unified spectral framework using thermofield formalism to describe hybrid exceptional points
- Demonstrated robustness of Hamiltonian exceptional points under weak quantum jump perturbations
View Full Abstract
Exceptional points, where two or more eigenstates of a non-Hermitian system coalesce, are now of interest across many fields of physics, from the perspective of open-system dynamics, sensing, nonreciprocal transport, and topological phase transitions. In this work, we investigate exceptional points in cavity optomechanics, a platform of interest to diverse communities working on gravitational-wave detection, macroscopic quantum mechanics, quantum transduction, etc. Specifically, we clarify the role of quantum jumps in making a clear distinction between Liouvillian and Hamiltonian exceptional points in optomechanical systems. While the Liouvillian exceptional point arises from the unconditional Lindblad dynamics and is independent of the phonon-bath temperature, the Hamiltonian exceptional point emerges from the conditional no-jump evolution and acquires a thermal shift due to an enhanced conditional damping. Employing the thermofield formalism, we derive a unified spectral framework that interpolates between these regimes via an analytical hybrid-Liouvillian description. Remarkably, in the weak-quantum-jump regime, the exceptional point is perturbed only at the second order, highlighting the robustness of the Hamiltonian exceptional point under small hybrid perturbations. Our work reveals a continuous family of hybrid exceptional points, clarifies the operational and physical differences between the conditional and unconditional dissipative dynamics in optomechanical systems, and provides a probe for thermal baths.
Hybrid Consensus with Quantum Sybil Resistance
This paper proposes a new blockchain consensus protocol that uses quantum position verification instead of traditional proof-of-work to prevent Sybil attacks, leveraging the unclonable nature of quantum states as a scarce resource while maintaining energy efficiency and fast confirmation times.
Key Contributions
- Novel consensus protocol combining classical hybrid consensus with quantum position verification for Sybil resistance
- Energy-efficient alternative to proof-of-work that exploits quantum uncloneability as unconditionally scarce resource
View Full Abstract
Sybil resistance is a key requirement of decentralized consensus protocols. It is achieved by introducing a scarce resource (such as computational power, monetary stake, disk space, etc.), which prevents participants from costlessly creating multiple fake identities and hijacking the protocol. Quantum states are generically uncloneable, which suggests that they may serve naturally as an unconditionally scarce resource. In particular, uncloneability underlies quantum position-based cryptography, which is unachievable classically. We design a consensus protocol that combines classical hybrid consensus protocols with quantum position verification as the Sybil resistance mechanism, providing security in the standard model, and achieving improved energy efficiency compared to hybrid protocols based on Proof-of-Work. Our protocol inherits the benefits of other hybrid protocols, namely the faster confirmation times compared to pure Proof-of-Work protocols, and resilience against the compounding wealth issue that plagues protocols based on Proof-of-Stake Sybil resistance. We additionally propose a spam prevention mechanism for our protocol in the Random Oracle model.
Energy efficient optical tracking for space quantum communication
This paper develops an energy-efficient optical tracking system for satellite-based quantum communication that reduces power consumption by treating tracking as a weak-signal estimation problem. The approach uses ground-based closed-loop systems with advanced filtering to maintain stable satellite tracking at much lower beacon power levels while preserving quantum key distribution performance.
Key Contributions
- Energy-efficient optical tracking system that reduces beacon power requirements for satellite quantum communication
- Demonstration of stable tracking at 34 mW transmitted power over -60 dB satellite-to-ground channel with negligible impact on QKD performance
View Full Abstract
Power consumption is a critical constraint for CubeSat-based quantum communication, where tracking systems often dominate the onboard power budget. We demonstrate an energy-efficient approach that enables reliable satellite tracking at substantially reduced beacon power by treating tracking as a weak-signal estimation task. Using a closed-loop system with fine steering mirrors and higher-order Kalman filters on the ground, we maintain stable tracking at a transmitted power equivalent to 34 mW over a -60 dB satellite-to-ground optical channel. Our results show that the resulting penalties on QKD bit error rates and signal-to-noise ratios are negligible, allowing for more efficient power allocation to quantum payloads in CubeSat missions.
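A tracking loop of this kind estimates the beacon position from noisy centroid measurements. A minimal constant-velocity Kalman filter sketch for a single axis, with assumed process and measurement noise levels `q` and `r` (the paper uses higher-order filters inside a closed loop with fine steering mirrors; none of these parameters come from the paper):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-4, r=1.0):
    """Minimal constant-velocity Kalman filter for one tracking axis.
    State is [position, velocity]; only position is observed."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # observation model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])              # process noise
    R = np.array([[r]])                              # measurement noise
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                                    # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                          # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + (K @ (np.array([z]) - H @ x)).ravel()  # update
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

On a beacon drifting at constant rate, the filter averages down the measurement noise while following the drift, which is what lets the loop tolerate weak beacon signals.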
Time in gravitational subregions and in closed universes
This paper studies how to define gauge-invariant local observables in quantum gravity subregions using JT gravity as a model, showing how physical clocks can be constructed from spacetime geometry itself. The authors demonstrate that entropy in such systems comes from bulk contributions rather than just boundary terms, and extend their analysis to closed universe cosmological models.
Key Contributions
- Method for defining gauge-invariant observables in quantum gravity subregions using crossed product construction
- Demonstration that York time evolution leads to bulk entropy contributions rather than boundary-only formulas
- Extension of gravitational constraint techniques to closed Big-Bang universe models
View Full Abstract
What are gauge-invariant local observables in a subregion in quantum gravity? How does one even define such a subregion non-perturbatively? We study these questions in JT gravity. One can define a subregion by specifying the value of the dilaton at the boundary of the region. We study conformal matter correlators in such a subregion. There is a gravitational constraint associated with York time evolution within the causal diamond of the subregion. This constraint can be leveraged to construct gauge-invariant observables in quantum gravity, using a crossed product construction. The extrinsic curvature of Cauchy slices acts as the physical clock. This is a simple example of how gauge-invariant observables can be obtained by dressing to features of a spacetime (or other fields), without the need for introducing an external observer. The entropy associated with this algebra of observables is not an area, or any boundary term. We show that gravitational constraints only give boundary formulas for entropy when gauging isometric diffeomorphisms. York time flow is merely a conformal isometry, not an actual isometry, and thus leads to bulk contributions to entropy. We repeat our construction for Milne-type closed Big-Bang universes, which may be of independent interest.
Exponential speedup in measurement property learning with post-measurement states
This paper studies the problem of learning properties of quantum measurement operators, showing that conventional quantum resources (entanglement, auxiliary qubits, adaptivity) fail to provide efficient solutions, while access to post-measurement quantum states enables exponentially faster learning. The work identifies a fundamental new resource for quantum measurement characterization.
Key Contributions
- Identified exponential separation between classical outcome access and post-measurement state access in measurement learning
- Demonstrated that conventional quantum learning resources are ineffective for certain measurement property tasks
- Established post-measurement states as a qualitatively new resource for quantum certification protocols
View Full Abstract
Learning properties of quantum states and channels is known to benefit from resources such as entangled operations, auxiliary qubits, and adaptivity, whereas the resource structure of measurement learning, namely, learning properties of quantum measurement operators, remains poorly understood. In this work, we identify a measurement learning task for which access limited to classical measurement outcomes leads to an exponential lower bound on the query complexity, established via a distinguishing task between a genuine quantum projective measurement and a purely classical random number generator. Remarkably, this hardness persists even when arbitrary entangled operations, auxiliary systems, and fully adaptive strategies are allowed, indicating that conventional resources for state and channel learning are ineffective in this task. In contrast, when access to the post-measurement quantum state is available, the same task can be solved with constant query complexity using a simple measuring-twice protocol, without requiring resources that are useful for state and channel learning. Our results reveal post-measurement states as a qualitatively new and decisive resource for measurement learning, suggesting potential implications for the design of practical quantum certification protocols.
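The measuring-twice idea can be illustrated with a toy model: a projective measurement is repeatable, so measuring the post-measurement state again reproduces the first outcome, whereas a classical random number generator redraws every time. A hedged sketch (a caricature of the distinguishing task, not the paper's protocol in full):

```python
import random

def agreement_rate(is_projective, shots):
    """Toy measuring-twice protocol: measure, then measure the
    post-measurement state again. A projective measurement is
    repeatable, so the two outcomes always agree; a classical RNG
    masquerading as a measurement redraws each time."""
    agree = 0
    for _ in range(shots):
        first = random.randint(0, 1)
        second = first if is_projective else random.randint(0, 1)
        agree += (first == second)
    return agree / shots
```

A constant number of repetitions separates the two devices: agreement is exactly 1 for the projective measurement and about 1/2 for the RNG, mirroring the constant query complexity claimed in the abstract.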
Trade-offs in Gauss's law error correction for lattice gauge theory quantum simulations
This paper analyzes trade-offs in using Gauss's law-based quantum error correction (GLQEC) for simulating lattice gauge theories on quantum computers. The researchers find that while GLQEC can reduce qubit overhead and achieve lower error rates initially, it requires periodic boundary conditions and exhibits faster decoherence over multiple error correction rounds compared to standard quantum error correction codes.
Key Contributions
- Proved that Gauss's law-based quantum error correction requires periodic electric fields, constraining lattice QED simulation designs
- Identified a mixing speed threshold (p_th=0.277) above which GLQEC performs worse than no error correction
- Demonstrated fundamental trade-offs between symmetry-based and universal quantum error correction for lattice gauge theory simulations
View Full Abstract
Gauss's law-based quantum error correction (GLQEC) offers a promising approach to reducing qubit overhead in lattice gauge theory simulations by leveraging built-in symmetries. For applications of GLQEC to 1+1D lattice quantum electrodynamics (QED), we identify two significant trade-offs. First, we prove via dimension-counting arguments that GLQEC requires periodic electric fields, thereby constraining the design space for lattice QED simulations. Second, we numerically compare GLQEC with a universal quantum error correction (UQEC) code, specifically the $d=3$ bitflip repetition code, and find that while GLQEC can achieve lower logical error rates in single-round error correction, it exhibits faster decoherence to the steady-state mixed ensemble under multiple rounds. The mixing speed penalty is manifest in observables of interest for both memory experiments and Hamiltonian evolution. We identify a mixing speed threshold, $p_{th}=0.277(2)$, above which using GLQEC exhibits even faster decoherence than without error correction. Our results highlight fundamental limitations of symmetry-based error correction schemes and inform corresponding constraints on formulations of lattice gauge theories compatible with error-robust quantum simulation techniques.
Loss Mechanisms in High-coherence Multimode Mechanical Resonators Coupled to Superconducting Circuits
This paper studies acoustic wave devices that can maintain quantum properties for very long times (up to 1 millisecond) when coupled to superconducting quantum circuits. The researchers identify what causes energy loss in these hybrid quantum systems and demonstrate record-breaking performance that could enable new quantum technologies.
Key Contributions
- Achieved phonon lifetimes up to 400 microseconds and coherence times approaching one millisecond in quantum mechanical resonators
- Identified defect density in piezoelectric materials as a limiting factor for quantum coherence in circuit quantum acoustodynamics systems
- Demonstrated large quantum coherence cooperativity of 1.1×10^5 in hybrid superconducting qubit-mechanical resonator systems
View Full Abstract
Circuit quantum acoustodynamics (cQAD) devices have a wide range of applications in quantum science, all of which depend crucially on the quantum coherence of the mechanical subsystem. In this context, high-overtone bulk acoustic-wave resonators (HBARs) are particularly promising, since they have shown very high quality factors with negligible dephasing. However, the introduction of piezoelectric films, which are necessary for coupling to a superconducting circuit, can lead to additional loss channels, such as surface scattering and two-level systems (TLS). Here, we study the acoustic dissipation of HBAR resonators in cQAD systems and find that the defect density of the piezoelectric material and its interface with the bulk are limiting factors for the coherence. We measure acoustic modes with phonon lifetimes up to 400 $\mu$s and lifetime-limited coherence times approaching one millisecond in the quantum regime. When coupled to a superconducting qubit, this leads to a hybrid system with a large quantum coherence cooperativity of $C_{T_2}=1.1\times10^5$. These results represent a new milestone for the performance of cQAD devices and offer concrete paths forward for further improvements.
Lowering the temperature of two-dimensional fermionic tensor networks with cluster expansions
This paper develops new computational methods for studying quantum systems at finite temperatures by extending cluster expansion techniques to two-dimensional fermionic systems. The researchers use tensor network representations to simulate thermal states and apply their method to identify phase transitions in a spinless fermion model.
Key Contributions
- Extension of cluster expansion methods to two-dimensional fermionic tensor networks
- Development of improved truncation schemes for PEPO (projected entangled-pair operator) multiplication
- Demonstration of finite-temperature phase boundary detection in attractive fermion systems
View Full Abstract
Representing the time-evolution operator as a tensor network constitutes a key ingredient in several algorithms for studying quantum lattice systems at finite temperature or in a non-equilibrium setting. For a Hamiltonian composed of strictly short-ranged interactions, the Suzuki-Trotter decomposition is the main technique for obtaining such a representation. In [B. Vanhecke, L. Vanderstraeten and F. Verstraete, Physical Review A, L020402 (2021)], an alternative strategy, the cluster expansion, was introduced. This approach naturally preserves internal and lattice symmetries and can more easily be extended to higher-order representations or longer-ranged interactions. We extend the cluster expansion to two-dimensional fermionic systems, and employ it to construct projected entangled-pair operator (PEPO) approximations of Gibbs states. We also discuss and benchmark different truncation schemes for multiplying layers of PEPOs together. Applying the resulting framework to a two-dimensional spinless fermion model with attractive interactions, we resolve a clear phase boundary at finite temperature.
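The Suzuki-Trotter baseline that the cluster expansion improves on can be seen on a toy example: for non-commuting terms, $e^{-\beta(A+B)} \neq e^{-\beta A}e^{-\beta B}$, and splitting the imaginary-time evolution into $n$ steps shrinks the error. The sketch below demonstrates this with 2x2 Pauli matrices and a hand-rolled Taylor-series exponential; it illustrates only the generic Trotter error, not the paper's fermionic PEPO construction.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def scale(M, c):
    return [[c * M[i][j] for j in range(2)] for i in range(2)]

def expm(M, terms=40):
    # Taylor-series matrix exponential; converges fast for small 2x2 inputs.
    R = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        P = matmul(P, M)
        fact *= k
        R = [[R[i][j] + P[i][j] / fact for j in range(2)] for i in range(2)]
    return R

X = [[0.0, 1.0], [1.0, 0.0]]   # Pauli X
Z = [[1.0, 0.0], [0.0, -1.0]]  # Pauli Z; note [X, Z] != 0
H = [[X[i][j] + Z[i][j] for j in range(2)] for i in range(2)]

beta = 0.5
exact = expm(scale(H, -beta))  # exact Gibbs-like operator exp(-beta H)

def trotter(n):
    # n first-order Trotter steps: [exp(-beta X / n) exp(-beta Z / n)]^n
    step = matmul(expm(scale(X, -beta / n)), expm(scale(Z, -beta / n)))
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = matmul(R, step)
    return R

def err(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

print("1 step :", err(trotter(1), exact))
print("4 steps:", err(trotter(4), exact))
```

More steps means smaller error but a deeper tensor-network stack; the cluster expansion sidesteps this trade-off by building higher-order representations directly.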
Self-stabilized high-dimensional quantum key distribution on a metropolitan free-space link
This paper demonstrates a quantum key distribution system that operates over a hybrid metropolitan network combining free-space and fiber-optic transmission, achieving secure communication rates up to 95 kbit/s while running continuously for 48 hours without external reference signals.
Key Contributions
- Demonstrated high-dimensional quantum key distribution over hybrid free-space and fiber metropolitan link
- Achieved fully self-referenced operation for 48 hours without auxiliary optical reference channels
- Implemented adaptive encoding dimensionality optimization for realistic channel conditions
View Full Abstract
Quantum communication technologies capable of operating reliably across heterogeneous optical channels are essential for scalable metropolitan quantum networks. Here we demonstrate high-dimensional time-bin-encoded quantum key distribution over a hybrid metropolitan link comprising 1.7 km free-space transmission and 685 m of optical fiber. Operating at a clock rate of 500 MHz in the C-band, we implement both 2- and 4-dimensional protocols, and obtain estimated secure finite-key rates of (95 ± 28) kbit/s for 4D at (25.0 ± 2.0) dB loss and (59 ± 27) kbit/s for 2D at (23.5 ± 2.3) dB loss. Crucially, we achieve continuous operation over 48 h in a fully self-referenced architecture: initial synchronization, interferometric phase stabilization, and long-term drift compensation are performed exclusively using the detected quantum signals, without auxiliary optical reference channels. Our results thus establish a practical and versatile platform for hybrid free-space-to-fiber quantum communication and show that the encoding dimensionality can be adapted to the optimal operating regime of realistic metropolitan channels, providing a pathway toward efficient, autonomous and deployable quantum network nodes.
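For intuition on why error and loss figures matter so much in QKD, a minimal sketch of the textbook asymptotic BB84 secret-key fraction $r = 1 - 2h(Q)$, where $h$ is the binary entropy and $Q$ the quantum bit error rate. This is the simplest standard formula, not the finite-key, high-dimensional analysis used in the paper.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bb84_key_fraction(qber):
    """Asymptotic BB84 secret-key fraction r = 1 - 2 h(Q)."""
    return 1.0 - 2.0 * h2(qber)

# Key fraction collapses to zero near the ~11% QBER threshold:
for q in (0.01, 0.05, 0.10, 0.12):
    print(f"QBER {q:.0%}: r = {bb84_key_fraction(q):+.4f}")
```

Higher-dimensional encodings, as used in this work, tolerate larger error rates per detection event, which is one motivation for adapting the dimensionality to the channel.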
On the emergence of quantum mechanics from stochastic processes
This paper develops a mathematical framework connecting classical stochastic processes to quantum mechanics, showing how quantum dynamics can emerge from underlying random processes. The authors establish conditions under which this correspondence works and explain quantum phase information as a memory effect from the stochastic description.
Key Contributions
- Generalized stochastic-quantum correspondence with explicit dictionary between stochastic kernels and CPTP maps
- Identification of Chapman-Kolmogorov divisibility as key constraint for Lindblad master equation form
- Explanation of quantum phase emergence as compressed memory effect from phase-blind stochastic processes
View Full Abstract
The stochastic-quantum correspondence reinterprets quantum dynamics as arising from an underlying stochastic process on a configuration space. We generalize the correspondence by lifting an arbitrary stochastic kernel $\Gamma$ in finite dimension to a map $\varphi$ on $B(\mathcal H)$, formulating the associated lift-compatibility relation, and giving an explicit dictionary between $\Gamma$ and CPTP (Kraus) maps. We isolate Chapman-Kolmogorov divisibility of the lifted family as the decisive additional constraint: when a CK-consistent CPTP family exists, the lift admits a Lindblad master equation form. In this picture, off-diagonal (phase) degrees of freedom act as a compressed carrier of history dependence not fixed by transition kernels alone; conversely, the apparent emergence of quantum phase information from a phase-blind stochastic description is explained as a memory effect. Finally, we state and prove a divisibility criterion for the underlying stochastic kernels, expressed as a condition involving divisibility of the lifted map together with a diagonality requirement on the density operator.
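One standard way to lift a stochastic kernel to a quantum channel, sketched below, uses Kraus operators $K_{ij} = \sqrt{\Gamma_{ij}}\,|i\rangle\langle j|$: completeness then follows because each column of $\Gamma$ sums to one, and the induced channel evolves populations by $\Gamma$ while destroying coherences. This is the textbook dephasing-type lift, shown only to make the "dictionary between $\Gamma$ and CPTP maps" concrete; the paper's construction is more general.

```python
# Column-stochastic kernel: Gamma[i][j] = P(j -> i); each column sums to 1.
Gamma = [[0.9, 0.3],
         [0.1, 0.7]]

def channel(rho):
    """Apply the Kraus lift K_ij = sqrt(Gamma[i][j]) |i><j| to a 2x2 rho.

    Each K_ij rho K_ij^dagger contributes Gamma[i][j] * rho[j][j] |i><i|,
    so diagonals evolve by Gamma and off-diagonals are erased.
    """
    out = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            out[i][i] += Gamma[i][j] * rho[j][j]
    return out

# Completeness: sum_ij K_ij^dagger K_ij = diag(column sums) = identity.
col_sums = [sum(Gamma[i][j] for i in range(2)) for j in range(2)]
print("column sums:", col_sums)

rho = [[0.5, 0.5],
       [0.5, 0.5]]          # |+><+|, maximal coherence
print("after channel:", channel(rho))
```

The fact that this particular lift is phase-blind is exactly why the paper treats off-diagonal degrees of freedom as extra, memory-carrying data beyond the kernel.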
Learning Quantum Data Distribution via Chaotic Quantum Diffusion Model
This paper proposes a new quantum generative model that uses chaotic Hamiltonian evolution instead of complex quantum circuits to learn and generate quantum data distributions. The approach is designed to be more practical for analog quantum hardware while maintaining comparable performance to existing quantum diffusion models.
Key Contributions
- Introduction of chaotic quantum diffusion model as hardware-efficient alternative to circuit-based QuDDPMs
- Demonstration of quantum generative modeling using time-independent Hamiltonian evolution with reduced implementation overhead
View Full Abstract
Generative models for quantum data pose significant challenges but hold immense potential in fields such as chemoinformatics and quantum physics. Quantum denoising diffusion probabilistic models (QuDDPMs) enable efficient learning of quantum data distributions by progressively scrambling and denoising quantum states; however, existing implementations typically rely on circuit-based random unitary dynamics that can be costly to realize and sensitive to control imperfections, particularly on analog quantum hardware. We propose the chaotic quantum diffusion model, a framework that generates projected ensembles via chaotic Hamiltonian time evolution, providing a flexible and hardware-compatible diffusion mechanism. Requiring only global, time-independent control, our approach substantially reduces implementation overhead across diverse analog quantum platforms while achieving accuracy comparable to QuDDPMs. This method improves trainability and robustness, broadening the applicability of quantum generative modeling.
Quantum tomography for non-iid sources
This paper proves that quantum state and process tomography can achieve optimal sample complexity even when quantum devices don't produce identical states over time due to noise, drift, or other realistic conditions. The authors show that projected least-squares tomography maintains the same theoretical performance bounds as ideal scenarios, just reconstructing a time-averaged quantum state or process instead.
Key Contributions
- Proved that projected least-squares tomography remains statistically optimal under adaptive and non-iid quantum state preparation
- Established sample complexity bounds matching optimal iid scaling for realistic experimental conditions with O(dr²/ε²) for rank-r states and O(d⁶/ε²) for process tomography
View Full Abstract
Quantum state and process tomography are typically analyzed under the assumption that devices emit independent and identically distributed (i.i.d.) states or channels. In realistic experiments, however, noise, drift, feedback, or adversarial behavior violate this assumption. We show that projected least-squares tomography remains statistically optimal even under fully adaptive state and channel preparation. Specifically, we prove that the sample complexity for reconstructing the time-averaged state or channel matches the optimal i.i.d. scaling for non-adaptive, single-copy measurements. For rank-$r$ states, the sample complexity is $\mathcal{O}(d r^2/\epsilon^2)$ to achieve accuracy $\epsilon$ in trace distance, while for process tomography it is $\mathcal{O}(d^6/\epsilon^2)$ to achieve accuracy $\epsilon$ in diamond distance. Thus, dropping the i.i.d. assumption does not increase the fundamental sample complexity of quantum tomography, but only changes the interpretation of the reconstructed object.
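A quick way to internalize these scalings is to compare how the two bounds grow with dimension. The sketch below sets the hidden constants to 1 (they are not specified by the scaling), so only ratios are meaningful.

```python
def state_tomography_samples(d, r, eps, c=1.0):
    """O(d r^2 / eps^2) scaling for rank-r state tomography (trace distance).
    The constant c is arbitrary; only the scaling is meaningful."""
    return c * d * r**2 / eps**2

def process_tomography_samples(d, eps, c=1.0):
    """O(d^6 / eps^2) scaling for process tomography (diamond distance)."""
    return c * d**6 / eps**2

# Doubling the dimension at fixed rank doubles the state-tomography cost...
r_state = state_tomography_samples(8, 2, 0.01) / state_tomography_samples(4, 2, 0.01)
# ...but multiplies the process-tomography cost by 2^6 = 64.
r_proc = process_tomography_samples(8, 0.01) / process_tomography_samples(4, 0.01)
print(r_state, r_proc)
```

The paper's result is that these same scalings survive when the i.i.d. assumption is dropped; only the target of reconstruction changes to the time average.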
Quantum criticality in open quantum systems from the purification perspective
This paper develops a theoretical framework for classifying quantum phases in open quantum systems that interact with their environment, using a mathematical technique called purification to systematically categorize eight different types of mixed-state phases. The researchers create a three-dimensional phase diagram that reveals how these phases transition between each other and identify unique critical behaviors that only occur in open quantum systems.
Key Contributions
- Development of a purification-based framework for systematic classification of mixed-state quantum phases in open systems
- Construction of a three-dimensional phase diagram with eight fixed-point Hamiltonians characterized by topological indices
- Identification of unique critical behavior connecting distinct symmetry-breaking patterns specific to mixed-state systems
- Large-scale tensor network simulations revealing complex phase structures including pyramid-shaped symmetry-breaking regions
View Full Abstract
Open quantum systems host mixed-state phases that go beyond the symmetry-protected topological and spontaneous symmetry-breaking paradigms established for closed, pure-state systems. Developing a unified and physically transparent classification of such phases remains a central challenge. In this work, we introduce a purification-based framework that systematically characterizes all mixed-state phases in one-dimensional systems with $\mathbb{Z}_2^\sigma \times \mathbb{Z}_2^\tau$ symmetry. By introducing an ancillary $\kappa$ chain and employing decorated domain-wall constructions, we derive eight purified fixed-point Hamiltonians labeled by topological indices $(\mu_{\sigma\tau},\mu_{\tau\kappa},\mu_{\kappa\sigma}) \in \{\pm1\}^3$. Tracing out the ancilla recovers the full structure of mixed-state phases, including symmetric, strong-to-weak spontaneous symmetry breaking, average symmetry-protected topological phases, and their nontrivial combinations. Interpolations between the eight fixed points naturally define a three-dimensional phase diagram with a cube geometry. The edges correspond to elementary transitions associated with single topological indices, while the faces host intermediate phases arising from competing domain-wall decorations. Along the edges, we identify a class of critical behavior that connects distinct strong-to-weak symmetry-breaking patterns associated with distinct strong subgroups, highlighting a mechanism unique to mixed-state settings. Large-scale tensor-network simulations reveal a rich phase structure, including pyramid-shaped symmetry-breaking regions and a fully symmetry-broken phase at the cube center. Overall, our purification approach provides a geometrically transparent and physically complete classification of mixed-state phases, unified with a single $\mathbb{Z}_2^\sigma \times \mathbb{Z}_2^\tau \times \mathbb{Z}_2^\kappa$ model.
Noise-adaptive hybrid quantum convolutional neural networks based on depth-stratified feature extraction
This paper develops a hybrid quantum-classical neural network that combines quantum convolutional neural networks with classical processing to improve performance under noise. The key innovation is using measurements from discarded qubits during pooling operations as classical features, making the system more robust to quantum noise and scalable.
Key Contributions
- Noise-adaptive hybrid QCNN architecture using depth-stratified feature extraction
- Demonstration of improved classification performance and convergence stability under realistic quantum noise conditions
- Scalable approach that maintains performance advantages as circuit size increases
View Full Abstract
Hierarchical quantum classifiers, such as quantum convolutional neural networks (QCNNs), represent recent progress toward designing effective and feasible architectures for quantum classification. However, their performance on near-term quantum hardware remains highly sensitive to noise accumulation across circuit depth, calling for strategies beyond circuit-architecture design alone. We propose a noise-adaptive hybrid QCNN that improves classification under noise by exploiting depth-stratified intermediate measurements. Instead of discarding qubits removed during pooling operations, we measure them and use the resulting outcomes as classical features that are jointly processed by a classical neural network. This hybrid hierarchical design enables noise-adaptive inference by integrating quantum intermediate measurements with classical post-processing. Systematic experiments across multiple circuit sizes and noise settings, including hardware-calibrated noise models derived from IBM Quantum backend data, demonstrate more stable convergence, reduced loss variability, and consistently higher classification accuracy compared with standard QCNNs. Moreover, we observe that this performance advantage significantly amplifies as the circuit size increases, confirming that the hybrid architecture mitigates the scaling limitations of standard architectures. Notably, the multi-basis measurement variant attains performance close to the noiseless limit even under realistic noise. While demonstrated for QCNNs, the proposed depth-stratified feature extraction applies more broadly to hierarchical quantum classifiers that progressively discard qubits.
Prodiabatic Elimination: Higher Order Elimination of Fast Variables with Quantum Noise
This paper introduces 'prodiabatic elimination,' an improved mathematical technique for analyzing quantum systems where some components change much faster than others. The method provides better accuracy than standard approaches while maintaining computational simplicity, demonstrated through examples of light-matter interactions in cavity systems.
Key Contributions
- Development of prodiabatic elimination technique that systematically extends adiabatic elimination with higher-order corrections
- Demonstration that the method consistently includes quantum noise contributions while maintaining computational efficiency
- Validation through two key examples: driven dissipative Jaynes-Cummings model and three-level STIRAP system
View Full Abstract
We introduce prodiabatic elimination, a powerful approximation technique that systematically extends the adiabatic elimination of fast degrees of freedom in light-matter coupled systems. Through a controlled expansion of operators, prodiabatic elimination incorporates higher-order corrections and consistently includes noise contributions, leading to significantly improved performance compared to standard adiabatic elimination. Importantly, it retains the simplicity and computational efficiency of adiabatic elimination, making it convenient for practical applications. We demonstrate the approach on two setups: a driven dissipative Jaynes-Cummings model and a three-level system in a two-mode cavity that performs stimulated Raman adiabatic passage (STIRAP). These examples establish prodiabatic elimination as a robust and broadly applicable tool for analyzing open quantum systems.
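For reference, the textbook lowest-order adiabatic elimination that the prodiabatic method extends, written for a $\Lambda$ system; this is the standard result, not the paper's higher-order expansion.

```latex
% Lambda system: ground states |1>, |2> coupled to excited state |e>
% with strengths g_1, g_2, both detuned by Delta. For |Delta| >> g_1, g_2
% the excited-state amplitude adiabatically follows the ground states,
%   c_e \approx -(g_1 c_1 + g_2 c_2)/\Delta ,
% and substituting back gives the effective two-level Hamiltonian
H_{\mathrm{eff}} = -\frac{1}{\Delta}
\begin{pmatrix}
  g_1^2   & g_1 g_2 \\
  g_1 g_2 & g_2^2
\end{pmatrix},
% i.e. Stark shifts g_i^2/\Delta and a Raman coupling g_1 g_2/\Delta.
% Prodiabatic elimination adds controlled higher-order corrections and
% quantum-noise contributions on top of this zeroth-order picture.
```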
Quantum Error Mitigation Simulates General Non-Hermitian Dynamics
This paper presents a method to simulate non-Hermitian quantum dynamics on near-term quantum computers without requiring ancilla qubits or continuous monitoring. The approach uses quantum error mitigation techniques to cancel unwanted quantum jump contributions and enable the study of exotic quantum phenomena that don't preserve probability.
Key Contributions
- Hardware-friendly protocol for simulating non-Hermitian dynamics without ancillas or controlled time evolution
- Novel application of quantum error mitigation to enable non-unitary quantum simulations on near-term devices
View Full Abstract
While non-Hermitian Hamiltonians enable exotic dynamical phenomena, implementing their nonunitary time evolution on near-term quantum devices remains challenging. We propose a hardware-friendly protocol that simulates non-Hermitian dynamics without continuous monitoring. Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) evolution via classical Gaussian white-noise averaging and to subsequently cancel the quantum-jump contribution at the level of the measured observable using stochastic quantum error mitigation (QEM). The scheme requires no ancillas or controlled time-evolution, while the mitigation layer uses only single-qubit operations. We validate the method through numerical simulations of a model with asymmetric hopping, interaction, and disorder. Our work provides a programmable and ancilla-free framework investigating exotic dynamics that are not completely-positive and trace-preserving using QEM.
Deep squeezing or cooling the fluctuations of a parametric resonator using feedback
This paper analyzes methods to achieve deep squeezing or cooling of quantum fluctuations in a parametric resonator using feedback control with lock-in amplifiers. The researchers use multiple theoretical approaches to study how feedback affects the resonator dynamics and demonstrate that very strong noise reduction can be achieved near certain bifurcation points.
Key Contributions
- Development of multiple theoretical methods (averaging, harmonic balance, Floquet theory) to analyze parametric resonators with feedback
- Demonstration that very strong squeezing and cooling of quantum fluctuations can be achieved near Hopf and saddle-node bifurcations
View Full Abstract
Here we analyze ways to achieve deep subthreshold parametric squeezing or cooling of a single degree-of-freedom parametric resonator enhanced by a lock-in amplifier feedback loop. Due to the feedback, the dynamics of the parametric resonator becomes more complex and a Hopf bifurcation at the instability threshold can occur. Initially, we calculate the phase-dependent gain of parametric amplification with feedback of an added ac signal. We obtain the amplification gain approximately using two independent approaches, the averaging method and the harmonic balance method, and more exactly using Floquet theory and Green's-function methods. The Hopf bifurcation is predicted by the harmonic balance method and by Floquet theory, but not by the averaging method. In our analysis of fluctuations, we Fourier analyze the response of the parametric resonator with feedback to added white noise. In addition to the noise spectral density, we calculate the squeezing of fluctuations in this resonator with feedback. Very strong squeezing or cooling can occur: deamplification and cooling occur near the Hopf bifurcation, whereas squeezing occurs near a saddle-node bifurcation.
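The basic squeezing mechanism can be sketched with the idealized textbook model of a degenerate parametric amplifier: one quadrature is amplified by $G$, the conjugate quadrature deamplified by $1/G$, so its fluctuation variance drops by $1/G^2$. This Monte Carlo sketch uses classical Gaussian noise as a stand-in for the fluctuations; it illustrates quadrature squeezing generically, not the paper's feedback-enhanced scheme.

```python
import random

random.seed(0)

def degenerate_parametric_amp(x, p, G):
    """Ideal phase-sensitive amplifier: x-quadrature amplified by G,
    p-quadrature deamplified by 1/G (textbook model, no feedback)."""
    return G * x, p / G

G = 5.0
xs, ps = [], []
for _ in range(20000):
    x_in = random.gauss(0.0, 1.0)   # unit-variance input fluctuations
    p_in = random.gauss(0.0, 1.0)
    x_out, p_out = degenerate_parametric_amp(x_in, p_in, G)
    xs.append(x_out)
    ps.append(p_out)

def var(v):
    return sum(t * t for t in v) / len(v)

print("amplified quadrature variance:", var(xs))  # near G^2 = 25
print("squeezed  quadrature variance:", var(ps))  # near 1/G^2 = 0.04
```

The paper's contribution is to show how feedback reshapes this picture, with the strongest deamplification and cooling pinned near the bifurcation points.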
Generating large-scale Greenberger-Horne-Zeilinger-like states in lattice spin systems
This paper proposes a new method to create large-scale entangled quantum states (GHZ-like states) in lattice spin systems using global operations that can be applied universally across different types of quantum systems. The approach uses Floquet engineering to generate these highly entangled states more efficiently than previous methods.
Key Contributions
- Universal and scalable scheme for generating large-scale GHZ-like states using only global operations
- Floquet engineering approach applicable to systems with arbitrary interaction ranges
View Full Abstract
The Greenberger-Horne-Zeilinger (GHZ) state is a prototypical maximally entangled state pursued in both fundamental research and emerging quantum technologies. Preparing large-scale GHZ states in lattice spin systems is particularly appealing for quantum advantage, but conventional schemes face great challenges in scalability. Here we propose a universal and scalable scheme to generate large-scale GHZ-like states, which share similar entanglement and metrological properties with standard GHZ states, in lattice spin systems through global Floquet engineering. Our scheme requires only global operations and offers clear advantages at large particle numbers. It is applicable to systems with arbitrary interaction ranges, offering a practical pathway for large-scale implementation of many-body entangled states in various systems.
Imperfect Graphs from Unitary Matrices -- I
This paper introduces a graph-theoretic framework called 'Topological Structure of Superpositions' (TSS) that maps quantum unitary matrices to directed graphs, where quantum states become vertices and non-zero amplitude transitions become edges. The authors discard phase and amplitude information to focus purely on connectivity patterns of quantum operators like Hadamard and Pauli gates.
Key Contributions
- Introduction of Topological Structure of Superpositions (TSS) framework for mapping unitary matrices to directed graphs
- Graph-theoretic analysis method that isolates connectivity and reachability properties of quantum operators
View Full Abstract
Matrix representations of quantum operators are computationally complete but often obscure the structural topology of information flow within a quantum circuit \cite{nielsen2000}. In this paper, we introduce a generalized graph-theoretic framework for analyzing quantum operators by mapping unitary matrices to directed graphs; we term these structures \emph{Imperfect Graphs} or, more formally, \emph{Topological Structure of Superpositions} (TSS), as a tool to devise better quantum algorithms. In this framework, we represent computational basis states as vertices. A directed edge exists between two vertices if and only if there is a non-zero amplitude transition between them, effectively mapping the support of the unitary operator. We deliberately discard probability amplitudes and phase information to isolate the connectivity and reachability properties of the operator. We demonstrate how TSS intuitively helps describe gates such as the Hadamard and Pauli-(X,Y,Z) gates \cite{nielsen2000}. This framework provides a novel perspective for viewing quantum circuits as discrete dynamical systems \cite{childs2009,aharonov2001}.
Keywords: Quantum Algorithms, Unitary Matrix Approach, Topological Structure of Superpositions (TSS), Graph Theory
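The support-graph construction described in the abstract is easy to make concrete: keep an edge wherever a matrix entry is non-zero and discard everything else. A minimal sketch, using the convention that column $j$ maps basis state $|j\rangle$ into the states indexing its non-zero rows (the paper may choose a different edge orientation):

```python
def support_digraph(U, tol=1e-12):
    """Directed edges (j, i) wherever amplitude U[i][j] is non-zero,
    i.e. basis state j transitions into basis state i.
    Phases and magnitudes are deliberately discarded."""
    n = len(U)
    return {(j, i) for i in range(n) for j in range(n) if abs(U[i][j]) > tol}

s = 2 ** -0.5
H = [[s, s], [s, -s]]    # Hadamard
X = [[0, 1], [1, 0]]     # Pauli-X
Z = [[1, 0], [0, -1]]    # Pauli-Z

print("H:", sorted(support_digraph(H)))  # every state reaches every state
print("X:", sorted(support_digraph(X)))  # pure relabelling: 0 <-> 1
print("Z:", sorted(support_digraph(Z)))  # diagonal: only self-loops
```

The three outputs show exactly what the framework isolates: Hadamard has full connectivity, X is a permutation, and Z (whose phase action is invisible at this level) looks like the identity.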
On some mathematical problems for open quantum systems with varying particle number
This paper provides a rigorous mathematical derivation of the effective Hamiltonian H - μN for open quantum systems where particle number can change, proving its uniqueness and establishing mathematical foundations for the grand canonical ensemble used in statistical physics.
Key Contributions
- Rigorous derivation of the effective Hamiltonian H - μN for open quantum systems with varying particle number
- Mathematical proof that the Hilbert space for varying particle number systems is isomorphic to Fock space
- Rigorous justification of the surface-to-volume ratio approximation used in statistical mechanics
View Full Abstract
We derive the effective Hamiltonian $H - \mu N$ for open quantum systems with varying particle number from first principles within the framework of non-relativistic quantum statistical mechanics. We prove that under physically motivated assumptions regarding the size of the system and the range of the interaction, this form of the Hamiltonian is unique up to a constant. Our argument relies firstly on establishing a rigorous version of the surface-to-volume ratio approximation, which is routinely used in an empirical form in statistical mechanics, and secondly on showing that the Hilbert space for systems with varying particle number must be isomorphic to Fock space. Together, these findings provide a rigorous mathematical justification for the standard grand canonical formalism employed in statistical physics.
Landscape-Similarity-Guided Optimization in QAOA
This paper introduces Doubly Optimized QAOA (DO-QAOA), an improved version of the Quantum Approximate Optimization Algorithm that reduces computational runtime and quantum measurement requirements by exploiting similarities in optimization landscapes. The authors show that when variables are frozen in QAOA problems, the resulting reduced instances have similar landscape structures that can be grouped into a small number of effective classes, avoiding exponential scaling issues.
Key Contributions
- Introduction of DO-QAOA algorithm that reduces runtime and measurement overhead while maintaining performance
- Discovery of landscape-similarity universality in reduced QAOA instances and development of landscape-overlap order parameter to quantify correlations
- Demonstration that exponentially many reduced instances can be collapsed into O(1) effective landscape classes
View Full Abstract
Across diverse synthetic and real-world interaction graphs, the variational landscapes of reduced Quantum Approximate Optimization Algorithm (QAOA) instances obtained via variable freezing exhibit a robust universality. Leveraging this structure, we introduce Doubly Optimized QAOA (DO-QAOA), which lowers runtime and quantum measurement overhead while maintaining a competitive approximation ratio gap (ARG). Adapting the replica-overlap framework of spin-glass physics, we define a landscape-overlap order parameter $q$ to quantify geometric correlations between energy landscapes, revealing a sharp landscape-similarity transition as graph connectivity is tuned. Notwithstanding this transition, the dominant convex features of nearly all conditioned sub-instances remain aligned across both phases. Exploiting this persistence, DO-QAOA collapses the nominal $2^m$ reduced instances generated by freezing $m$ qubits into $K = O(1)$ effective landscape classes, eliminating the exponential proliferation in $m$. By leveraging landscape structure, DO-QAOA provides a scalable route to hybrid quantum-classical optimization under realistic hardware constraints, with potential applicability across variational quantum algorithms.
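The variable-freezing step behind DO-QAOA can be illustrated classically: freezing $m$ spins in an Ising/MaxCut instance induces local fields on the free spins, and two frozen assignments with identical induced fields define the same reduced landscape. The toy example below (a ring graph, chosen only for illustration) shows $2^m$ assignments collapsing into far fewer classes.

```python
from itertools import product

# Toy MaxCut/Ising instance: a ring of 8 nodes with unit couplings.
edges = [(k, (k + 1) % 8) for k in range(8)]
frozen = [4, 5, 6, 7]   # spins to freeze
free = [0, 1, 2, 3]

def reduced_instance(assignment):
    """Induced local fields on the free spins after freezing.

    Two frozen assignments whose induced fields agree produce the same
    reduced cost function, hence the same variational landscape.
    """
    spin = dict(zip(frozen, assignment))
    fields = []
    for q in free:
        h = sum(spin[v] for u, v in edges if u == q and v in spin)
        h += sum(spin[u] for u, v in edges if v == q and u in spin)
        fields.append(h)
    return tuple(fields)

classes = {reduced_instance(a) for a in product((-1, +1), repeat=len(frozen))}
print(f"{2**len(frozen)} frozen assignments -> {len(classes)} landscape classes")
```

Here only the two frozen spins on the boundary of the free region matter, so 16 assignments yield just 4 distinct reduced instances; DO-QAOA exploits the same kind of collapse, grouping exponentially many sub-instances into $K = O(1)$ effective classes.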
Revealing entanglement through local features of phase-space distributions
This paper develops new methods to detect quantum entanglement in continuous-variable systems by analyzing phase-space distributions at specific points rather than requiring full distribution measurements. The researchers create a hierarchy of entanglement detection criteria and demonstrate a practical measurement scheme using only passive optical elements and coherent states.
Key Contributions
- Formulation of infinite hierarchy of continuous-variable separability criteria based on phase-space quasiprobability distributions
- Development of practical measurement scheme using passive linear transformations and coherent ancillas that doesn't require full phase-space reconstruction
- Demonstration of effective detection method for non-Gaussian entanglement in relevant quantum state families
View Full Abstract
We formulate an infinite hierarchy of continuous-variable separability criteria in terms of quasiprobability distributions and their derivatives evaluated at individual points in phase space. Our approach is equivalent to the Peres--Horodecki criterion and sheds light on how distillable entanglement manifests in the phase-space picture. We demonstrate that already the lowest-order variant constitutes a powerful method for detecting the elusive non-Gaussian entanglement of relevant state families. Further, we devise a simple measurement scheme that relies solely on passive linear transformations and coherent ancillas. By strategically probing specific phase-space regions, our method offers clear advantages over existing techniques that rely on access to the full phase-space distributions.
Secret Key Rate Limits in Coexisting Classical-Quantum Optical Links
This paper studies how classical optical signals interfere with quantum key distribution (QKD) signals when both are transmitted through the same fiber-optic cable, and develops mathematical models to optimize the frequency placement of quantum channels to maximize secure key generation rates.
Key Contributions
- Derived closed-form expressions for calculating interference power from classical signals on quantum channels in fiber optics
- Demonstrated that placing QKD channels in upper E-band/lower S-band frequencies achieves higher secret key rates than traditional O-band placement
View Full Abstract
Classical-quantum coexistence enables cost-effective transmission of data and quantum signals over the same fiber-optic channel. Nevertheless, weak quantum-key distribution (QKD) signals are susceptible to non-linear interference generated from the classical traffic, primarily spontaneous Raman scattering (SpRS) and four-wave-mixing (FWM), as well as to unfiltered noise. In QKD protocols, increased channel loss and excess noise both reduce the secret key rates (SKRs), as illustrated in this work for the two-state BB84 and Gaussian-modulated coherent-states (GMCS) protocols. In this study, we derive closed-form expressions for evaluating the accumulated interference power from coexisting classical signals in a quantum frequency channel. Our model enables effective design of classical-quantum systems in single-mode fibers (SMFs), capturing the evolution of interference arising from the relevant physical phenomena. We utilize the model to examine frequency allocation in multiband transmission systems, demonstrating that, contrary to common practice of allocating QKD channels in the O-band, increased SKR is achieved by placing quantum channels in the upper E-/lower S-band across the relevant scenarios.
Entanglement recovery by reversing the effect of noise in quantum repeater
This paper proposes a method to recover quantum entanglement that has been degraded by noise in quantum repeater networks. The approach uses a probabilistic reversing operation to undo the effects of amplitude damping and photon loss, demonstrating substantial entanglement recovery even under strong noise conditions.
Key Contributions
- Development of heralded entanglement recovery protocol for quantum repeaters
- Analysis of recovery effectiveness in two-way and one-way repeater architectures
- Demonstration of entanglement recovery in parameter regimes with entanglement sudden death
View Full Abstract
We propose a method to directly recover the degree of entanglement distributed by entanglement swapping in the presence of noise. Our approach introduces a reversing operation that probabilistically undoes the effect of amplitude damping or photon loss on a single entangled pair, enabling heralded recovery of entanglement. We demonstrate that entanglement can be substantially recovered even under strong noise, including parameter regimes where the distributed entanglement would otherwise vanish due to entanglement sudden death. We analyze the effectiveness of the protocol in two representative repeater models, i.e., two-way and one-way architectures and identify the optimal reversing strategy. Due to its heralded and single-copy nature, our protocol is readily compatible with other entanglement recovery techniques such as entanglement purification and distillation. Our work provides a practical and experimentally feasible way toward robust entanglement distribution in current and near-term quantum repeater architectures.
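The single-qubit core of such a reversal is the well-known "weak-measurement reversal" of amplitude damping: the no-jump Kraus operator $K_0 = \mathrm{diag}(1, \sqrt{1-\gamma})$ can be undone, probabilistically, by applying $R = \mathrm{diag}(\sqrt{1-\gamma}, 1)$, since $RK_0 \propto I$. A minimal pure-state sketch of this standard building block (not the paper's full repeater protocol):

```python
import math

def damp_nojump(a, b, gamma):
    """No-jump branch of amplitude damping: K0 = diag(1, sqrt(1-gamma)).
    Returns unnormalised amplitudes and the branch probability."""
    a2, b2 = a, math.sqrt(1.0 - gamma) * b
    return a2, b2, a2**2 + b2**2

def reverse(a, b, gamma):
    """Heralded reversal R = diag(sqrt(1-gamma), 1); succeeds with prob p."""
    a2, b2 = math.sqrt(1.0 - gamma) * a, b
    p = a2**2 + b2**2
    n = math.sqrt(p)
    return a2 / n, b2 / n, p

a0, b0 = 0.6, 0.8      # initial qubit amplitudes, |a0|^2 + |b0|^2 = 1
gamma = 0.5            # strong damping

ad, bd, p_nojump = damp_nojump(a0, b0, gamma)
n = math.sqrt(p_nojump)
ar, br, p_success = reverse(ad / n, bd / n, gamma)

fidelity = (a0 * ar + b0 * br) ** 2
print("recovered-state fidelity:", fidelity)   # restored up to rounding
print("reversal success probability:", p_success)
```

The trade-off visible here, perfect recovery on the heralded branch at the cost of a sub-unit success probability, is exactly what makes such protocols compatible with purification and distillation.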
Performance Comparison of QAOA Mixers for Ternary Portfolio Optimization
This paper applies the Quantum Approximate Optimization Algorithm (QAOA) to portfolio optimization problems in finance, comparing different mixer operators for ternary asset allocation decisions (hold, not hold, short sell). The researchers test various QAOA implementations on real German stock market data and evaluate performance under noise conditions.
Key Contributions
- Extension of QAOA to ternary portfolio optimization problems with three asset states
- Comprehensive comparison of different mixer operators including XY Mixers under noisy conditions
- Demonstration of QAOA performance on real financial data from DAX 30 stock index
View Full Abstract
The Quantum Approximate Optimization Algorithm (QAOA) is a quantum algorithm proposed for Noisy Intermediate-Scale Quantum (NISQ) devices and is regarded as a promising approach to combinatorial optimization problems, with potential applications in the financial sector. In this study, we apply QAOA to the portfolio optimization problem, which is one of the central challenges in financial engineering. A portfolio consists of a combination of multiple assets, and the portfolio optimization problem aims to determine the optimal asset allocation by balancing expected return and risk. In the context of quantum optimization, portfolio optimization is often formulated using discrete variables. Unlike conventional binary formulations, we consider a ternary portfolio optimization problem that accounts for three states (holding, not holding, and short selling) and compare its performance using different mixer operators. Specifically, we implement QAOA with the standard mixer and several XY Mixers (XY Ring, XY Parity Ring, XY Full, and QAMPA), and conduct simulations using real data based on the German stock index (DAX 30) for portfolios consisting of 5 and 8 assets. Furthermore, we introduce noise based on a depolarizing channel to investigate the behavior of the algorithm in realistic environments. The results show that while XY Mixers exhibit superiority in noiseless settings, their advantage degrades in noisy environments, and the optimal choice of mixer depends on both the QAOA depth and the noise strength.
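The ternary search space QAOA explores here can be made concrete with a brute-force classical baseline over a mean-variance objective. The instance numbers below are toy values, not the paper's DAX data, and `objective` is a generic sketch of the usual risk-minus-return formulation:

```python
from itertools import product

def objective(w, mu, sigma, gamma=0.1):
    """Mean-variance objective gamma * w^T Sigma w - mu . w (lower is better)."""
    n = len(w)
    risk = sum(w[i] * sigma[i][j] * w[j] for i in range(n) for j in range(n))
    ret = sum(mu[i] * w[i] for i in range(n))
    return gamma * risk - ret

# Toy 3-asset instance (illustrative returns and covariances only).
mu = [0.05, 0.02, -0.01]
sigma = [[0.10, 0.02, 0.00],
         [0.02, 0.08, 0.01],
         [0.00, 0.01, 0.20]]

# One ternary decision per asset: -1 = short sell, 0 = not held, +1 = held.
candidates = list(product((-1, 0, 1), repeat=3))
best = min(candidates, key=lambda w: objective(w, mu, sigma))
assert best == (1, 1, 0)   # shorting the volatile negative-return asset loses
```

QAOA with an XY mixer would search the same $3^n$ landscape via a qutrit-to-qubit encoding; the point of the exhaustive loop is just to show what "optimal allocation" means for this objective.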
Passive Environment-Assisted Quantum Communication
This paper investigates how to improve quantum information transmission through noisy channels by using specially prepared quantum states at an auxiliary input port. The researchers show that while ideal states can achieve perfect transmission, more practical states like squeezed cat states can still significantly enhance communication fidelity through optimized encoding and decoding strategies.
Key Contributions
- Demonstrates passive environment assistance can enhance quantum communication through lossy channels below 50% transmissivity
- Develops practical encoding/decoding strategies using experimentally accessible non-Gaussian states like Fock, cat, and squeezed cat states
View Full Abstract
As quantum information systems mature, efficient and coherent transfer of quantum information through noisy channels becomes increasingly important. We examine how passive environment-assisted quantum communication enhances direct quantum information transfer efficiency. A bosonic pure-loss channel, modeled as transmission through a beam splitter with a vacuum input state at the dark port, has zero quantum capacity when transmissivity is below 50%. Quantum communication through the channel can be enhanced by passive environment assistance, achieved via the selection of an appropriate input state for the ancilla port. Although ideal Gottesman-Kitaev-Preskill (GKP) states enable perfect quantum information transmission at arbitrarily small transmissivity, they are challenging to realize experimentally. We therefore explore more experimentally accessible non-Gaussian ancilla states, such as Fock, cat, and squeezed cat states, and numerically determine the optimal encoding and decoding strategies. We also construct analytical schemes that yield high-fidelity transmission and good information rates.
Efficient time-series prediction on NISQ devices via time-delayed quantum extreme learning machine
This paper develops a time-delayed quantum extreme learning machine (TD-QELM) that can predict time-series data on current noisy quantum computers by using shallow quantum circuits that encode multiple past data points simultaneously. The method outperforms existing quantum reservoir computing approaches on IBM's 127-qubit processor while being more robust to quantum noise.
Key Contributions
- Development of TD-QELM algorithm with shallow circuit depth independent of sequence length
- Demonstration of superior performance over conventional quantum reservoir computing on NISQ hardware
- Practical framework for time-series prediction that mitigates noise accumulation on current quantum devices
View Full Abstract
We propose a time-delayed quantum extreme learning machine (TD-QELM) for efficient time-series prediction on noisy intermediate-scale quantum (NISQ) devices. By encoding multiple past inputs simultaneously, TD-QELM achieves a shallow circuit depth independent of sequence length, thereby mitigating noise accumulation and reducing computational complexity. Experiments using the NARMA benchmark on both noiseless simulations and IBM's 127-qubit processor demonstrate that TD-QELM consistently outperforms conventional quantum reservoir computing in prediction accuracy and noise robustness. These results highlight TD-QELM as a practical and scalable framework for time-series learning on current NISQ hardware.
Momentum Diffusion, Decoherence and Drag Force on a Magnetic Nanoparticle
This paper studies how magnetic nanoparticles in quantum superposition lose their quantum coherence due to thermal electromagnetic field fluctuations, deriving decoherence rates and drag forces using the fluctuation-dissipation theorem in the long-wavelength limit.
Key Contributions
- Complete derivation of decoherence rates for magnetic nanoparticles in thermal electromagnetic environments
- Extension to two-particle systems and comparison with dielectric nanoparticle properties
- Analysis of drag forces arising from electromagnetic field fluctuations
View Full Abstract
In this paper, we will provide a complete derivation of the decoherence rate for a magnetic nanoparticle in quantum superposition in the presence of the fluctuating electromagnetic field in a thermal background by using the fluctuation-dissipation theorem in the long-wavelength limit. The long-wavelength limit assumes that the superposition size is much smaller than the wavelength of the electromagnetic field fluctuations. We will extend this computation to two diamagnetic nanoparticles kept in quantum superposition adjacent to each other. We will also show how the drag force on a single nanoparticle arises from external electromagnetic-field fluctuations, and compare our results with those for the nanoparticle's dielectric properties.
Universal Sample Complexity Bounds in Quantum Learning Theory via Fisher Information matrix
This paper establishes theoretical bounds on how many quantum measurements are needed to learn the parameters of quantum systems, showing that these bounds are fundamentally determined by the Fisher information matrix. The authors apply their framework to specific problems like learning Pauli channels and estimating quantum expectation values, providing a unified theoretical foundation for quantum learning theory.
Key Contributions
- Established universal sample complexity bounds for quantum parameter estimation governed by the inverse Fisher information matrix
- Created a systematic task-independent framework for quantum learning theory under maximum-likelihood estimation
View Full Abstract
In this work, we show that the sample complexity (equivalently, the number of measurements) required in quantum learning theory within a general parametric framework is fundamentally governed by the inverse Fisher information matrix. More specifically, we derive upper and lower bounds on the number of samples required to estimate the parameters of a quantum system within a prescribed small additive error and with high success probability under maximum likelihood estimation. The upper bound is governed by the supremum of the largest diagonal entry of the inverse Fisher information matrix, while the lower bound is characterized by any diagonal element evaluated at arbitrary parameter values. We then apply the general bounds to Pauli channel learning and to the estimation of Pauli expectation values in the asymptotic small-error regime, and recover the previously established sample complexity through considerably streamlined derivations. Furthermore, we identify the structural origin of exponential sample complexity in Pauli channel learning without entanglement and in Pauli expectation value estimation without quantum memory. We then extend the analysis to an error criterion based on the Euclidean distance between the true parameter values and their estimators. We derive the corresponding upper and lower bounds on the sample complexity, which are likewise characterized by the inverse Fisher information matrix. As an application, we consider Pauli expectation estimation with entangled probes. Finally, we highlight two fundamental contributions to quantum learning theory. First, we establish a systematic framework that determines the task-independent sample complexity under maximum-likelihood estimation. Second, we show that, in the small-error regime, learning sample complexity is governed by the inverse Fisher information matrix.
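The Fisher-information flavor of these bounds can be checked on the simplest case: estimating a single Pauli expectation value $\theta$ from ±1 outcomes, where one shot carries Fisher information $1/(1-\theta^2)$. This is a standard textbook Cramer-Rao example under assumed toy parameters, not the paper's general derivation:

```python
import random
from statistics import mean

random.seed(1)

theta = 0.6                  # true Pauli expectation value to be learned
p = (1 + theta) / 2          # probability of a +1 outcome

# Fisher information of a single +/-1 shot about theta: I = 1/(1 - theta^2).
fisher = 1.0 / (1.0 - theta ** 2)

N, trials = 2000, 400
ests = []
for _ in range(trials):
    shots = [1 if random.random() < p else -1 for _ in range(N)]
    ests.append(mean(shots))             # the sample mean is the MLE here
m = mean(ests)
emp_var = mean((e - m) ** 2 for e in ests)

# MLE asymptotics: Var(theta_hat) ~ 1/(N * I) = (1 - theta^2)/N, so reaching
# additive error eps needs on the order of (1 - theta^2)/eps^2 shots.
assert abs(emp_var - 1.0 / (N * fisher)) < 0.3 / (N * fisher)
```

The paper's bounds generalize exactly this picture: the diagonal of the inverse Fisher information matrix replaces the scalar $1/I$ in the multi-parameter setting.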
Coupling nitrogen vacancy centers in silicon carbide to nanophotonic resonators
This paper demonstrates how to improve the performance of nitrogen vacancy centers in silicon carbide by integrating them with nanophotonic structures like micro-pillars and micro-disks. The structures enhance light collection efficiency and improve magnetic field sensing capabilities, making these quantum defects more practical for quantum technologies.
Key Contributions
- Demonstrated 4-fold increase in photon collection from NV centers using micro-pillar resonators
- Achieved 24% improvement in magnetic field sensitivity through enhanced spin readout
- Developed broadband coupling approach using micro-disk resonators spanning 1150-1250 nm for NV emission lines
View Full Abstract
Silicon carbide (SiC) is a promising platform for scalable quantum technologies owing to its well-established, wafer-scale industrial processing. SiC also hosts a variety of optically active color centres including the nitrogen vacancy defect that has a spin-triplet ground state. However, strong phonon coupling in the infrared range limits photon extraction from these defects. Here, we use nanophotonic structures, specifically micro-pillar and micro-disk resonators, to enhance optical collection and spin-readout. The micro-pillar geometry yields a 4-fold increase in photon collection, accompanied by a 2.4-fold reduction in spectral noise in optically detected magnetic resonance measurements. Consequently, the magnetic field sensitivity is improved by 24%. The large mode volume of the micro-disk supports resonances spanning 1150-1250 nm, enabling broadband coupling to nitrogen vacancy emission lines. Our results demonstrate that fabrication of scalable photonic structures efficiently improves performance of silicon carbide color centers for integrated quantum light generation and sensing.
Passive Synchronization of Nonlocal Franson Interferometry for Fiber-Based Quantum Networks Using Co-propagating Classical Clock Signals
This paper demonstrates a method for synchronizing quantum networks by sending classical timing signals alongside quantum-entangled photons through the same fiber optic cable. By using different wavelength bands and achieving picosecond-precision timing, they enable high-quality quantum interference over 50 km distances without needing separate timing infrastructure.
Key Contributions
- Demonstrated passive synchronization technique using co-propagating classical clock signals with quantum photons in fiber networks
- Achieved 88.35% visibility nonlocal quantum interference over 50 km using cross-band allocation to suppress noise
- Provided scalable synchronization solution for metropolitan-scale quantum networks without external timing infrastructure
View Full Abstract
We demonstrate a robust, high-visibility nonlocal Franson interferometry for fiber-based quantum networks by co-propagating a classical Radio-over-Fiber clock signal with energy-time entangled photon pairs in the same fiber. Utilizing cross-band allocation (O-band for classical, L-band for quantum signals), the spontaneous Raman scattering noise photons are effectively suppressed. At the same time, their environmental delay fluctuations remain highly correlated for common-mode noise cancellation, achieving passive synchronization with picosecond precision. Over 50 km of single-mode fiber, this co-propagation enables nonlocal quantum interference with a visibility of (88.35 ± 3.62)%, without relying on external dedicated timing infrastructure. This work provides a practical, scalable synchronization solution for metropolitan-scale entanglement-based quantum networks.
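The reported visibility uses the standard two-photon interference definition, which is easy to sanity-check; the coincidence counts below are made-up illustrative numbers:

```python
def visibility(c_max, c_min):
    """Two-photon interference visibility from coincidence-count extrema."""
    return (c_max - c_min) / (c_max + c_min)

# Toy coincidence extrema giving ~88% visibility (illustrative numbers only).
v = visibility(c_max=1600, c_min=100)
assert abs(v - 15 / 17) < 1e-12

# A Franson visibility above 1/sqrt(2) ~ 70.7% is the usual benchmark for
# Bell-inequality-violating energy-time entanglement (standard assumptions).
assert v > 0.7071
```

The 88.35% figure therefore sits comfortably above the nonlocality benchmark even over 50 km, which is the point of the synchronization scheme.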
Nonlinearity-Inhomogeneity Competition in Discrete-Time Quantum Walks
This paper studies how nonlinear effects compete with random variations in discrete-time quantum walks on one-dimensional lattices. The researchers analyze how spatial and temporal randomness affects the ability of nonlinearity to trap quantum particles, finding that different types of disorder lead to distinct regimes of localization and spreading.
Key Contributions
- Identification of distinct quantum walking regimes through competition between nonlinearity and inhomogeneities
- Demonstration that spatial inhomogeneities weaken nonlinear self-trapping while temporal inhomogeneities enhance delocalization
- Comprehensive characterization using parameter diagrams showing how disorder modifies dynamical regimes
View Full Abstract
We investigate the interplay between nonlinearity and inhomogeneities in discrete-time quantum walks on one-dimensional lattices. Nonlinear effects are introduced through a Kerr-like, intensity-dependent local phase, while spatial and temporal inhomogeneities are implemented via random variations of the quantum gate operations. By analyzing typical quantities, such as the return probability and the participation function, we identify distinct quantum walking regimes as the nonlinear parameter $\chi$ and the quantum gate parameter $\theta$ are varied. Spatial inhomogeneities weaken nonlinear self-trapping and constrict the region of robust localization. In this process, partially localized regimes emerge, characterized by the coexistence of a confined core and dispersive wave-packet components. In contrast, temporal inhomogeneities act as time-dependent perturbations that continuously disrupt the phase coherence required for self-trapped excitations, thereby enhancing dispersive emission and promoting delocalization. By using $\chi$ versus $\theta$ diagrams, we display a comprehensive characterization of how inhomogeneities modify the stability and extent of prevailing dynamical regimes, elucidating the competition between nonlinearity and inhomogeneities in discrete-time quantum walks.
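A minimal nonlinear DTQW of the kind described, with a rotation coin and a Kerr-like intensity-dependent phase, fits in a few lines. This sketch assumes one common coin and nonlinearity convention; the paper's exact gates and inhomogeneity model are not reproduced:

```python
import cmath
import math

def step(psi, theta, chi):
    """One DTQW step: coin C(theta), shift, then Kerr phase exp(i*chi*|psi(x)|^2).
    State is a dict {(site, coin): amplitude}; every stage is norm-preserving."""
    ct, st = math.cos(theta), math.sin(theta)
    new = {}
    for (x, c), a in psi.items():
        up, dn = (ct * a, st * a) if c == 0 else (st * a, -ct * a)
        new[(x - 1, 0)] = new.get((x - 1, 0), 0) + up   # up component moves left
        new[(x + 1, 1)] = new.get((x + 1, 1), 0) + dn   # down component moves right
    dens = {}
    for (x, c), a in new.items():
        dens[x] = dens.get(x, 0) + abs(a) ** 2          # local intensity |psi(x)|^2
    return {k: a * cmath.exp(1j * chi * dens[k[0]]) for k, a in new.items()}

psi = {(0, 0): 1 / math.sqrt(2), (0, 1): 1j / math.sqrt(2)}  # localized start
for _ in range(30):
    psi = step(psi, theta=math.pi / 4, chi=2.5)

norm = sum(abs(a) ** 2 for a in psi.values())
r0 = sum(abs(a) ** 2 for (x, c), a in psi.items() if x == 0)  # return probability
assert abs(norm - 1) < 1e-9
assert 0 <= r0 <= 1
```

The return probability `r0` is exactly the diagnostic the abstract uses to separate self-trapped from dispersive regimes as $\chi$ and $\theta$ vary; the inhomogeneities would enter as random site- or time-dependent `theta`.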
On fully entangled fraction of arbitrary $d\otimes d$ quantum states
This paper develops analytical methods to calculate the fully entangled fraction of quantum states in arbitrary d×d dimensional bipartite systems using Bloch representation. The authors derive upper bounds and provide exact calculations for specific classes of quantum states.
Key Contributions
- Analytical upper bounds on fully entangled fraction for arbitrary d×d bipartite quantum systems
- Exact analytical derivations of fully entangled fractions for specific classes of quantum states using Bloch representation
View Full Abstract
We study the fully entangled fraction of quantum states based on the Bloch representation of density matrices. Analytical upper bounds on the fully entangled fraction are obtained for arbitrary $d\otimes d$ bipartite systems. The fully entangled fractions for classes of $d\otimes d$ quantum states are analytically derived. Detailed examples are given to illustrate the advantages of our results.
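The quantity being bounded, the fully entangled fraction $F(\rho)=\max_U \langle\phi^+|(I\otimes U)\rho(I\otimes U)^\dagger|\phi^+\rangle$, can be probed numerically for an isotropic state, where the maximum is known to sit at $U=I$. This is a standard example, not one of the paper's state classes:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
phi = np.eye(d).reshape(d * d) / np.sqrt(d)   # |phi+> = sum_i |ii> / sqrt(d)
# Isotropic state with visibility p = 0.6.
rho = 0.6 * np.outer(phi, phi) + 0.4 * np.eye(d * d) / d ** 2

def fidelity_with_U(rho, U):
    """<phi+| (I x U) rho (I x U)^dag |phi+>, the quantity maximized in the FEF."""
    IU = np.kron(np.eye(d), U)
    v = IU.conj().T @ phi
    return np.real(v.conj() @ rho @ v)

fef = fidelity_with_U(rho, np.eye(d))
# For isotropic states the optimum is U = I, giving F = p + (1-p)/d^2.
assert abs(fef - (0.6 + 0.4 / 9)) < 1e-12

# Random unitaries (QR of a complex Gaussian matrix) never beat the identity.
for _ in range(50):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    assert fidelity_with_U(rho, Q) <= fef + 1e-9
```

The paper's Bloch-representation bounds play the role of the analytic certificate that such a numerical search can only approximate for general $d\otimes d$ states.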
Unsupervised Discovery of Intermediate Phase Order in the Frustrated $J_1$-$J_2$ Heisenberg Model via Prometheus Framework
This paper uses machine learning (specifically a variational autoencoder called Prometheus) to study a frustrated quantum spin system and identify different magnetic phases, including a mysterious intermediate phase whose nature has been debated. The researchers apply unsupervised learning to analyze quantum many-body ground states to discover order parameters without prior assumptions.
Key Contributions
- Application of validated machine learning framework to identify quantum phases in frustrated spin systems
- Systematic exploration of the debated intermediate phase in the J1-J2 Heisenberg model using unsupervised order parameter discovery
View Full Abstract
The spin-$1/2$ $J_1$-$J_2$ Heisenberg model on the square lattice exhibits a debated intermediate phase between Néel antiferromagnetic and stripe ordered regimes, with competing theories proposing plaquette valence bond, nematic, and quantum spin liquid ground states. We apply the Prometheus variational autoencoder framework -- previously validated on classical (2D, 3D Ising) and quantum (disordered transverse field Ising) phase transitions -- to systematically explore the $J_1$-$J_2$ phase diagram via unsupervised analysis of exact diagonalization ground states for a $4 \times 4$ lattice. Through dense parameter scans of $J_2/J_1 \in [0.3, 0.7]$ with step size 0.01 and comprehensive latent space analysis, we investigate the nature of the intermediate regime using unsupervised order parameter discovery and critical point detection via multiple independent methods. This work demonstrates the application of rigorously validated machine learning methods to open questions in frustrated quantum magnetism, where traditional order parameter identification is challenged by competing interactions and limited accessible system sizes.
Topological phase dynamics described by overtone-synthesized classical and quantum Adler equations
This paper extends the classical Adler equation for phase synchronization to include complex overtone coupling and analyzes both classical and quantum versions. The authors find that while the classical system exhibits topological features like quantized winding numbers, the quantum version surprisingly breaks this quantization due to superposition effects.
Key Contributions
- Extension of Adler equation to include overtone-synthesized coupling with topological phase dynamics
- Discovery that quantum superposition breaks classical winding-number quantization in the quantum regime
View Full Abstract
The Adler equation is a well-known one-dimensional model describing phase locking and synchronization. Motivated by recent experiments using optomechanical oscillators, we extend the model to include overtone-synthesized sinusoidal coupling with adiabatic temporal modulation. This extension gives rise to unique topological features such as winding-number quantization, discontinuous phase-slip transitions, and hysteretic and non-reciprocal phase dynamics. We further extend the analysis to the quantum regime, where we find a counterintuitive result: the breakdown of winding-number quantization. This arises from the superposition of different winding-number states in a closed-space Thouless pump. Moreover, hysteretic dynamics, once eliminated in the quantum adiabatic approximation, is recovered in non-adiabatic calculations, as the superposition of two Floquet states with different PT eigenvalues becomes the quantum counterpart of the phase trajectory.
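The classical starting point, phase locking in the Adler equation $\dot\varphi = \Delta - K\sin\varphi$, is easy to reproduce numerically, with a hypothetical overtone term `k2` included as a parameter. This is a sketch of the textbook model only, not the paper's overtone-synthesized extension:

```python
import math

def adler_winding(delta, k1, k2=0.0, t_end=200.0, dt=1e-3):
    """Euler-integrate dphi/dt = delta - k1*sin(phi) - k2*sin(2*phi) and return
    the average winding rate (net phase slips per unit time over 2*pi)."""
    phi = 0.0
    for _ in range(int(t_end / dt)):
        phi += dt * (delta - k1 * math.sin(phi) - k2 * math.sin(2 * phi))
    return phi / (2 * math.pi * t_end)

# |delta| < k1: phase-locked (synchronized), winding rate ~ 0.
w_locked = adler_winding(delta=0.5, k1=1.0)
assert abs(w_locked) < 0.01

# |delta| > k1: running phase, mean winding rate -> sqrt(delta^2 - k1^2)/(2*pi).
w_running = adler_winding(delta=2.0, k1=1.0)
assert abs(w_running - math.sqrt(3.0) / (2 * math.pi)) < 0.02
```

The sharp locked/running distinction in this integer winding count is the classical quantization that, per the abstract, survives overtone coupling but breaks down once winding-number states superpose quantum mechanically.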
Efimov Effect in Ultracold Microwave-Shielded Polar Molecules
This paper studies three ultracold polar molecules interacting through microwave-shielded dipolar forces, predicting observable Efimov physics - a quantum mechanical phenomenon where three particles can form bound states even when pairs cannot. The research demonstrates universal scaling behavior in these molecular trimers and proposes methods to create and detect them.
Key Contributions
- Prediction of Efimov physics in shielded dipolar molecules with characteristic universal scaling
- Demonstration that microwave shielding enables universality in both two-body and three-body molecular interactions
- Proposal of sudden approximation method to create and detect molecular trimers from trap states
View Full Abstract
A quantum-mechanical description is presented for the three-body physics of shielded dipolar molecules, including a prediction of observable Efimov physics. Despite the anisotropic and long-range nature of the interaction, shielding enables a regime in which universality emerges already at the two-body level and extends to the three-body sector, where Efimov physics emerges. On the negative side of the scattering-length resonance, computed trimer binding energies display the characteristic scaling expected for Efimov resonances. Finally, the sudden approximation can be used to create trimer bound states from positive-energy trap states, offering a route to create or detect these molecular trimers. Moreover, the three-body parameter expressed in dipolar units is found to be universal.
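The "characteristic scaling" refers to Efimov's discrete scale invariance. For the textbook case of three identical bosons the universal factor is $e^{\pi/s_0}\approx 22.7$; note the value of $s_0$ for shielded dipolar molecules may differ, so the numbers below are the identical-boson reference point only:

```python
import math

# Discrete scale invariance for three identical bosons: consecutive Efimov
# trimers are related by lambda = exp(pi / s0), with s0 ~= 1.00624.
s0 = 1.00624
lam = math.exp(math.pi / s0)
assert abs(lam - 22.694) < 0.01       # length / scattering-length scaling
assert abs(lam ** 2 - 515.0) < 0.5    # binding-energy scaling factor

# The resulting geometric tower of binding energies E_n = E_0 / lambda^(2n):
energies = [1.0 / lam ** (2 * n) for n in range(4)]
ratios = [energies[n] / energies[n + 1] for n in range(3)]
assert all(abs(r - lam ** 2) < 1e-9 for r in ratios)
```

Observing successive trimer energies falling on such a geometric ladder is what "characteristic scaling expected for Efimov resonances" means operationally.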
Markovian Embeddings of Non-Markovian Open System Dynamics
This paper develops a theoretical framework for simulating non-Markovian quantum systems by embedding them into larger Markovian spaces, unifying existing methods like HEOM and Lindblad-pseudomode approaches. The work provides both theoretical foundations and practical numerical tools for efficiently simulating quantum systems with memory effects.
Key Contributions
- Unified theoretical framework connecting different Markovian embedding approaches for non-Markovian quantum dynamics
- Development of numerically stable and efficient simulation methods for open quantum systems with memory effects
View Full Abstract
Embedding non-Markovian open quantum dynamics into an enlarged Markovian space offers a powerful route to nonperturbative simulations, where the dynamics of the extended space can be governed by multiple distinct Markovian equations. We show that these distinct embeddings arise from different unravelings of Gaussian bath self-energies, generating a family of deterministic, time-local equations for the extended system. Using the Brownian-oscillator spectral density as an illustrative example, we clarify the relationships among existing approaches, including the Hierarchical Equations of Motion (HEOM) and the Lindblad-pseudomode formalism, and demonstrate how this framework enables numerically stable and efficient simulations. This work provides both a transparent theoretical foundation for embedding techniques and a flexible platform for developing new methods to simulate non-Markovian quantum dynamics.
Phonon decoherence produced by two-level tunneling states
This paper develops a theoretical framework to understand how two-level tunneling states in surface defects cause decoherence in quantum phonon modes within crystalline resonators. The researchers derive quantum master equations to calculate phonon lifetimes and find that coherence is maximized at low temperatures despite increased mechanical losses.
Key Contributions
- Derivation of quantum master equation for phonon-TLS coupling systems
- Theoretical framework for calculating phonon decoherence from tunneling states
- Design principles for reducing phonon-TLS coupling through strain node positioning
View Full Abstract
Phonon modes within pristine crystalline resonators now routinely reach the quantum ground state. Such systems are attractive for quantum information science applications, as advanced fabrication and processing can enable relatively long quantum coherence times, and precision control can be realized through optical, electrical, or qubit coupling. In many state-of-the-art systems, the phonon lifetime is limited by disorder. In particular, native oxides or damaged "dead layers" at surfaces can host two-level tunneling states (TLS) that lead to a particularly problematic form of dissipation that increases at lower temperatures. As mechanical losses are driven down in systems such as micro-fabricated bulk acoustic wave resonators, tunneling states are expected to emerge as the dominant mechanism for phonon decoherence. A quantitative description of these mesoscopic systems therefore requires a framework that captures interactions between a selected phonon mode and a large ensemble of TLS. Here, we derive a quantum master equation for this coupled system, permitting the phonon decoherence produced by two-level tunneling states to be calculated. As an example, we estimate the lifetime of a variety of quantum states within quartz micro-resonators hosting a thin surface layer of tunneling states. We find that the phonon coherence time is maximized at low temperatures, in spite of increased mechanical dissipation, and that phonon-TLS coupling can be reduced for modes with strain nodes at the surfaces.
Natural Qubit Algebra: clarification of the Clifford boundary and new non-embeddability theorem
This paper introduces Natural Qubit Algebra (NQA), a mathematical framework for representing quantum operations on qubits using real matrices and tensor products. The authors use this algebra to analyze quantum phenomena like Bell inequality violations and provide compact descriptions of quantum algorithms including Grover's search algorithm.
Key Contributions
- Development of Natural Qubit Algebra framework for compact representation of qubit operations
- Real Clifford normal form for two-qubit operators and identification with Clifford algebra Cl(2,2;R)
- Algebraic reformulation of Bell-CHSH scenario as spectral non-embeddability theorem
- Compact tensor-block representations of Bernstein-Vazirani and Grover algorithms
View Full Abstract
We introduce Natural Qubit Algebra (NQA), a compact real operator calculus for qubit systems based on a $2\times2$ block alphabet $\{I,X,Z,W\}\subset\mathrm{Mat}(2,\mathbb{R})$ and tensor-word representations. The resulting multiplication law induces a canonical $(\mathbb{Z}_2)^{2m}$-grading with a bicharacter that controls commutation signs, placing the framework naturally within the theory of color-graded and Clifford-type algebras. Within this language, we provide: (i) an explicit real Clifford normal form for two-qubit operators via the identification $\mathrm{Mat}(4,\mathbb{R})\cong\mathrm{Cl}(2,2;\mathbb{R})$; (ii) a purely algebraic reformulation of the Bell-CHSH scenario, where the quantum violation is expressed as a spectral non-embeddability of a noncommutative spinor algebra into any commutative Kolmogorov algebra; and (iii) compact factored representations of the Bernstein-Vazirani and Grover phase oracles, showing that both Clifford and non-Clifford examples can admit similarly structured symbolic descriptions. We clarify that Grover's iterate remains outside the Clifford group due to its continuous spectral rotation, consistent with the Gottesman-Knill theorem, while retaining a compact tensor-block form in NQA. The framework isolates spectral, algebraic, and syntactic aspects of operator structure, providing a graded operator language compatible with standard quantum mechanics.
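The real block alphabet's multiplication rules can be verified directly, taking $W = XZ$ (the real stand-in for $iY$); this is an assumption, since the abstract lists $W$ without defining it:

```python
# Real 2x2 block alphabet; we take W = XZ = [[0,-1],[1,0]] (an assumption:
# the abstract lists {I, X, Z, W} but does not spell W out).
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
W = [[0, -1], [1, 0]]

def mul(A, B):
    """2x2 real matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neg(A):
    return [[-v for v in row] for row in A]

# Multiplication table: X^2 = Z^2 = I, W^2 = -I, XZ = W, ZX = -W.
assert mul(X, X) == I and mul(Z, Z) == I
assert mul(W, W) == neg(I)
assert mul(X, Z) == W and mul(Z, X) == neg(W)

# The grading shows up as commutation signs: X, Z, W pairwise anticommute.
assert mul(X, W) == neg(mul(W, X))
assert mul(Z, W) == neg(mul(W, Z))
```

These sign rules are the single-qubit shadow of the bicharacter-controlled commutation the abstract describes; tensor words of these blocks inherit them factor by factor.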
Assessing quantum coherence in quantum annealers
This paper investigates whether D-Wave quantum annealers exhibit genuine quantum coherence by proposing a new diagnostic test based on many-body coherent oscillations (MBCO). The researchers find that while D-Wave devices show some quantum-like scaling behavior, they lack the expected oscillatory signatures that would confirm true quantum coherence.
Key Contributions
- Proposes many-body coherent oscillations (MBCO) as a new diagnostic tool for identifying system-wide quantum coherence in analog quantum simulators
- Demonstrates that D-Wave quantum annealers lack expected oscillatory behavior despite showing Kibble-Zurek scaling, suggesting limited quantum coherence in current devices
View Full Abstract
Demonstrating genuine many-body quantum coherence in large-scale quantum processors remains a central challenge for near-term quantum technologies. Recent experiments on D-Wave quantum annealers have investigated quenches of Ising chains and observed defect densities that show Kibble-Zurek scaling, consistent with coherent quantum dynamics. However, identical scaling can arise from classical or thermal processes. Here we propose the use of many-body coherent oscillations (MBCO) as a diagnostic for the identification of system-wide coherence in analog quantum simulators. Solving the time-dependent Schrödinger equation, we show that quenches of a staggered one-dimensional Ising chain across a quantum critical point produce oscillatory signatures in defect observables. We implement this model on the D-Wave Advantage quantum annealer. Using fast-anneal protocols, we find that, although defect densities follow Kibble-Zurek scaling, the expected oscillatory behavior is absent. We demonstrate that static disorder associated with individual qubits is not likely responsible for the absence of MBCO. Modest modifications to annealing schedules can dramatically enhance oscillation visibility. This work gives a general roadmap for the search for quantum coherence in noisy, large-scale quantum platforms.
The Inverse Born Rule Fallacy: On the Informational Limits of Phase-Locked Amplitude Encoding
This paper critiques common amplitude encoding methods in quantum machine learning, arguing that simple square-root mappings of classical probability distributions fail to capture quantum computational advantages because they ignore phase information. The authors propose an alternative approach called Dynamical Hamiltonian Encoding that preserves the non-commutative structure needed for genuine quantum speedups.
Key Contributions
- Rigorous proof that standard amplitude encoding methods lose quantum computational advantages by discarding phase information
- Introduction of Dynamical Hamiltonian Encoding as an alternative that preserves non-commutative quantum structure
View Full Abstract
In Quantum Machine Learning (QML) and Quantum Finance, amplitude encoding is often motivated by its logarithmic storage capacity (arXiv:1307.0411). This paradigm typically relies on the mapping $\psi = \sqrt{P}$, treating the quantum state as a derivative of a classical probability distribution $P$. By restricting the data manifold to the positive real orthant $\mathcal{S}^+$, the accessible Hilbert space is effectively abelianized, rendering the representation "phase-deaf". We rigorously establish that while $P$ is a projection of $|\psi|^2$, the simple square-root mapping fails to recover the non-commutative structure necessary for genuine quantum advantage in classification tasks. Furthermore, we clarify why applying basis changes (like Hadamard gates) to these states fails to replicate the computational power of active phase-kickback mechanisms. Finally, we advocate for Dynamical Hamiltonian Encoding (based on QIFT), where data generates non-commutative evolution rather than serving as a static, phase-locked vector.
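The "phase-deaf" point is easy to demonstrate: two states with identical Born probabilities, hence the same $\sqrt{P}$ encoding, become perfectly distinguishable after a Hadamard. A minimal two-amplitude sketch, not the paper's formal argument:

```python
from math import sqrt

def hadamard(state):
    """Single-qubit basis change H."""
    a, b = state
    s = 1 / sqrt(2)
    return (s * (a + b), s * (a - b))

def probs(state):
    """Born-rule probabilities |amplitude|^2."""
    return tuple(abs(x) ** 2 for x in state)

P = (0.5, 0.5)                         # classical distribution to encode

psi_sqrt = (sqrt(P[0]), sqrt(P[1]))    # the "inverse Born" mapping psi = sqrt(P)
psi_phase = (sqrt(P[0]), -sqrt(P[1]))  # same Born probabilities, opposite phase

# A computational-basis measurement cannot tell the two states apart...
for p, q in zip(probs(psi_sqrt), probs(psi_phase)):
    assert abs(p - q) < 1e-12

# ...but a Hadamard separates them perfectly: the relative phase carries
# information that the sqrt(P) encoding can never represent.
p_plus = probs(hadamard(psi_sqrt))
p_minus = probs(hadamard(psi_phase))
assert abs(p_plus[0] - 1.0) < 1e-12 and p_plus[1] < 1e-12
assert p_minus[0] < 1e-12 and abs(p_minus[1] - 1.0) < 1e-12
```

Since $\sqrt{P}$ states live on the positive orthant, only the first of these two states is ever reachable, which is the abelianization the authors object to.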
Using near-flat-band electrons for read-out of molecular spin qubit entangled states
This paper proposes a new method to electrically read out the quantum states of molecular spin qubits by measuring conductance differences between entangled singlet and triplet states. The researchers theoretically demonstrate that driving electrons through materials with flat electronic bands can distinguish between these quantum states, offering an alternative to slower magnetic resonance readout methods.
Key Contributions
- Theoretical demonstration of electrical readout for molecular spin qubits using conductance measurements
- Discovery that flat-band electrons enhance the contrast between entangled singlet and triplet state readouts
View Full Abstract
While molecular spin qubits (MSQs) are a promising platform for quantum computing, read-out has been largely limited to electron paramagnetic resonance which is often slow and requires a global system drive. Moreover, because one prerequisite for the Elzerman and Pauli spin blockade readout mechanisms typical of semiconductor spin qubits is tunneling of electrons between sites, these read-out modalities are unavailable in MSQs. Here, we theoretically demonstrate electrical read-out of entangled MSQs via driven many-electron spin unpolarized currents. In particular, using a time-dependent density matrix renormalization group approach we simulate a maximally entangled MSQ pair between two electronic leads. Driving itinerant electrons between the two leads, we find that the conductance is greater when the MSQs are in the entangled singlet state as compared to the entangled triplet state. This contrast in conductance is enhanced when the electronic density of states at the Fermi energy is large and for narrow bandwidth. Our results are readily applicable to molecules supramolecularly functionalizing semiconductors with relatively flat bands such as single-wall carbon nanotubes under a magnetic field.
Coherent Quantum Evaluation of Collider Amplitudes for Effective Field Theory Constraints
This paper presents a hybrid quantum-classical method for computing particle collision amplitudes in electron-positron scattering using quantum circuits. The authors encode particle kinematics into quantum states and use quantum hardware to calculate scattering amplitudes that can be compared with experimental collider data to test physics theories.
Key Contributions
- Hybrid quantum-classical framework for computing helicity amplitudes in particle physics
- Demonstration of quantum circuit-based calculation of scattering cross sections with direct comparison to experimental collider data
View Full Abstract
Precision measurements at electron-positron colliders provide stringent tests of the Standard Model and powerful probes of possible higher-dimensional interactions. We present a hybrid quantum-classical framework for computing leading-order helicity amplitudes for $e^+e^-\to \ell^+\ell^-$ scattering on gate-based quantum hardware and using the resulting cross sections to constrain both Standard Model couplings and effective field theory operators. In our approach, external kinematics are encoded into single-qubit Weyl spinors, and full helicity amplitudes are reconstructed by coherently combining diagrammatic contributions within a single quantum circuit. Classical post-processing yields physical amplitudes and differential cross sections that can be directly compared with collider data. As a proof of concept, we compute unpolarised angular distributions and perform binned likelihood fits to precision electron-positron measurements. The extracted bounds are statistically consistent with Standard Model expectations, demonstrating that quantum-assisted amplitude evaluation can interface directly with phenomenological analyses and experimental data. This work establishes a concrete pathway toward applying quantum computing to precision collider physics and effective field theory studies.
Optical repumping and atom number balancing in a two-color MOT
This paper demonstrates a novel technique for trapping strontium-88 atoms using two laser colors simultaneously: a blue magneto-optical trap (MOT) combined with a green MOT configuration that acts as both a repumping mechanism and an additional cooling system. The researchers show this dual-color approach can trap 10 times more atoms than conventional single-color methods and allows precise control over atom numbers.
Key Contributions
- Development of a two-color MOT system that increases atom trapping efficiency by 10x compared to single-color repumping
- Demonstration of controllable atom number balancing between different trap states through experimental parameters
View Full Abstract
We study a novel repumping transition for $^{88}$Sr atoms trapped in a 'blue' magneto-optical trap. We show that, while the repumping efficiency is about three orders of magnitude smaller than for traditional schemes, it is sufficient for recycling all atoms, provided the repumping laser beams are arranged to form a 'green' magneto-optical trap (MOT) helping to cool and confine the atoms and preventing their loss. Our main findings are: (i) that the green MOT configuration is able to trap 10 times more atoms in the blue MOT than using the green transition merely as a repump, and (ii) that the atom numbers in the two-color MOT can be balanced through experimental control parameters. The interest of this scheme lies in its capability of reaching low temperature and its suitability for continuous atomic beam generation.
Topological Floquet Green's function zeros
This paper studies quantum systems that are periodically driven (Floquet systems) and analyzes special mathematical objects called Green's function zeros that can indicate topological properties. The authors focus on interacting quantum spin chains and propose how to implement and measure these systems using digital quantum simulators.
Key Contributions
- Introduction of Floquet Green's-function-based topological invariants for symmetry class BDI
- Analytical calculation of edge and bulk Green's functions for interacting Kitaev-like Floquet chains
- Demonstration that Floquet Green's functions can have zeros even without interactions
- Proposal for digital quantum emulator implementation with specific circuit design and observable measurements
View Full Abstract
Motivated by recent advances in digital quantum emulation using noisy intermediate-scale quantum (NISQ) devices and an increased interest in topological Green's function zeros in condensed matter systems, we here study Green's function zeros in topological Floquet systems. We concentrate on interacting Kitaev-like Floquet chains (or equivalently transverse field Ising circuits) and introduce Floquet Green's-function-based topological invariants for the corresponding symmetry class BDI. In the vicinity of special points in the free fermion phase diagram and using tailor-made interactions which lead to the Floquet version of symmetric mass generation, we analytically calculate both edge and bulk Green's functions. Just as in the case of continuum time evolution, topological bands of Green's function zeros may also contribute to the topological invariant. However, contrary to the case of continuum time evolution, Floquet Green's functions can have zeros even in the absence of interactions. Finally, we also discuss an implementation of this Floquet system in a digital quantum emulator: We present a circuit which encodes the interaction under consideration and pinpoint the observables carrying information about the topological Green's function boundary zeros.
Exact quantum transport in non-Markovian open Gaussian systems
This paper develops an exact theoretical framework for calculating heat, energy, and particle transport in quantum systems connected to multiple reservoirs, working beyond the usual weak-coupling approximations. The method reveals new physics including transient negative heat conductance that depends on how the system is initially prepared.
Key Contributions
- Exact framework for quantum transport in strongly-coupled non-Markovian systems using effective master equations
- Discovery of transient negative heat conductance regime dependent on initial system preparation
View Full Abstract
We build an exact framework to evaluate heat, energy, and particle transport between Gaussian reservoirs mediated by a quadratic quantum system. By combining full counting statistics with newly developed non-Markovian master equation approaches, we introduce an effective master equation whose solution can be used to generate arbitrary moments of the heat statistics for any number of reservoirs. This theory applies equally to fermionic and bosonic systems, holds at arbitrarily strong coupling, and resolves out-of-equilibrium transient dynamics determined by the system's initial state. In the steady-state, weak-coupling limit, we recover results analogous to those of the well-known Landauer-Büttiker formalism. We conclude our discussion by demonstrating an application of the method to a prototypical fermionic system. Our results uncover a regime of transient negative heat conductance contingent upon the initial system preparation, providing a clear signature of non-trivial out-of-equilibrium dynamics.
Reducing the Gate Count with Efficient Trotter-Suzuki Schemes
This paper develops improved Trotter-Suzuki decomposition schemes to reduce the number of quantum gates needed for simulating time evolution of quantum systems, particularly focusing on lattice field theories and demonstrating the approach on the Heisenberg model.
Key Contributions
- Development of optimized higher-order Trotter-Suzuki schemes that reduce gate count
- Demonstration of improved efficiency for quantum simulation of the Heisenberg model
- Creation of an optimization framework for finding efficient Trotterization schemes
View Full Abstract
Hamiltonian formulations of lattice field theories provide access to real-time dynamics, but their simulation is difficult to implement efficiently. Trotter-Suzuki decompositions are at the center of time evolution computation, either on quantum hardware or classically, for instance with the use of tensor networks. While low-order Trotterizations remain the standard choice due to their simplicity, higher-order schemes offer the potential for improved efficiency. In this work we outline a short guide to Trotter-Suzuki schemes and their implementations in general. To help with this, we highlight new efficient schemes found by our optimization framework, and demonstrate their performance on the Heisenberg model.
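The efficiency gap between Trotter orders can be illustrated with a minimal NumPy sketch (a textbook comparison on a one-qubit toy Hamiltonian; the paper's optimized schemes go beyond these standard decompositions):

```python
import numpy as np

# Non-commuting terms of a toy Hamiltonian H = X + Z.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_exp(P, theta):
    """exp(-i*theta*P) for a Pauli matrix P (uses P @ P = I)."""
    return np.cos(theta) * I2 - 1j * np.sin(theta) * P

t, n = 1.0, 10   # total evolution time, number of Trotter steps
dt = t / n

# Exact evolution: (X+Z)/sqrt(2) also squares to the identity.
S = (X + Z) / np.sqrt(2)
exact = np.cos(np.sqrt(2) * t) * I2 - 1j * np.sin(np.sqrt(2) * t) * S

# First-order Trotter: (e^{-iX dt} e^{-iZ dt})^n, error O(dt).
U1 = np.linalg.matrix_power(pauli_exp(X, dt) @ pauli_exp(Z, dt), n)

# Second-order (symmetric) Suzuki:
# (e^{-iX dt/2} e^{-iZ dt} e^{-iX dt/2})^n, error O(dt^2).
U2 = np.linalg.matrix_power(
    pauli_exp(X, dt / 2) @ pauli_exp(Z, dt) @ pauli_exp(X, dt / 2), n)

err1 = np.linalg.norm(U1 - exact, 2)
err2 = np.linalg.norm(U2 - exact, 2)
print(err1, err2)  # second order is markedly more accurate
```

The second-order scheme costs one extra exponential per step but shrinks the error quadratically in the step size, which is the basic trade-off the paper's optimization framework exploits at higher orders.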
Effect of symmetry breaking on altermagnetism in CrSb and Formation of fragmented nodal curves
This paper studies altermagnetism in CrSb, a special type of antiferromagnetic material, investigating how breaking crystal symmetries through defects or strain creates fragmented nodal curves in momentum space. The research shows these symmetry modifications can produce anomalous Hall conductivities, potentially useful for quantum device applications.
Key Contributions
- Discovery of fragmented nodal curves formation when six-fold rotational symmetry is reduced to two-fold in altermagnetic materials
- Demonstration that symmetry-broken altermagnets can exhibit anomalous Hall conductivities with flexible Néel vector orientations
View Full Abstract
Phenomena concerning altermagnets have opened up a window for unconventional analysis of the momentum space spin polarization (MSSP) of antiferromagnetic materials. Taking the example of one of the widely investigated altermagnets, CrSb, we explore the underlying mechanisms leading to the formation or breaking of altermagnetism. With the aid of DFT calculation and symmetry analysis, we study the behavior of MSSP in the altermagnetic bands of pristine CrSb, along with a few model structures designed from the pristine one by hypothetical vacancy engineering and interstitial doping. We show that the six-fold rotational symmetry of the pristine CrSb can be reduced to a two-fold rotational symmetry via vacancy and doping engineering. We discover the formation of fragmented nodal curves (FNCs) across the Brillouin zone in an altermagnetic material when the symmetry is restricted to two-fold rotation. Unlike the typical nodal planes and axes, the location of the FNCs in the momentum space is found to be band-specific. The formation of FNCs is further validated by introducing uniaxial strain to CrSb and by examining the band structure of RbMnPO$_4$, as they both exhibit a two-fold rotational symmetry responsible for altermagnetism. We observe that, unlike the pristine case, these FNCs have the potential to manifest anomalous Hall conductivities (AHC), while the Néel vector orients along both in-plane and out-of-plane directions. This flexibility of the AHC will pave the way for the application of altermagnets in future quantum devices.
Quantum Approximate Optimization for Decoding of Low-Density Parity-Check Codes
This paper proposes using the Quantum Approximate Optimization Algorithm (QAOA) to decode Low-Density Parity-Check (LDPC) codes, which are used for error correction in classical communications. The quantum approach aims to solve the decoding optimization problem more effectively than traditional Belief Propagation methods, especially for short codes and high-noise scenarios.
Key Contributions
- Novel application of QAOA to LDPC code decoding by formulating appropriate cost functions
- Demonstration that quantum optimization can outperform classical Belief Propagation decoding in certain scenarios
View Full Abstract
Decoding Low-Density Parity-Check (LDPC) codes is a fundamental problem in coding theory, and Belief Propagation (BP) is one of the most popular methods for LDPC code decoding. However, BP may encounter convergence issues and suboptimal performance, especially for short-length codes and in high-noise channels. The Quantum Approximate Optimization Algorithm (QAOA) is a type of Variational Quantum Algorithm (VQA) designed to solve combinatorial optimization problems by minimizing a problem-specific cost function. In this paper, we present a QAOA-based decoding framework for LDPC codes by formulating a decoding cost function that incorporates both parity-check constraints and soft channel reliability information. The resulting optimization problem is solved using QAOA to search for low-energy configurations corresponding to valid codewords. We test the proposed method through extensive numerical experiments and compare its performance with BP decoding. The experimental results demonstrate that the QAOA-based decoder achieves a higher probability of correctly recovering the transmitted codeword than BP across multiple experimental settings.
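The kind of cost function described — parity-check penalties plus soft channel information — can be sketched classically (the check matrix, LLR values, and penalty weight below are illustrative, and brute force stands in for the QAOA search over low-energy configurations):

```python
import numpy as np
from itertools import product

# Tiny example: a 4-bit code with two parity checks (rows of H).
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])

# Soft channel information: log-likelihood ratios (LLRs); a positive
# value favors bit = 0. Values here are invented for illustration.
llr = np.array([2.0, -1.5, 0.5, 1.0])
lam = 5.0  # penalty weight enforcing the parity checks

def cost(bits):
    """Decoding cost: channel term plus penalty for violated checks.
    QAOA would encode this as a diagonal cost Hamiltonian."""
    bits = np.asarray(bits)
    channel = np.sum(llr * bits)          # prefers the received soft values
    violations = np.sum((H @ bits) % 2)   # number of unsatisfied checks
    return channel + lam * violations

# Exhaustive minimization over all 4-bit strings stands in for the
# quantum search; the minimizer is a valid codeword.
best = min(product([0, 1], repeat=4), key=cost)
print(best)  # (0, 1, 1, 0)
```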
On Hydrodynamic Formulations of Quantum Mechanics and the Problem of Sparse Ontology
This paper examines hydrodynamic interpretations of quantum mechanics, particularly the many interacting worlds (MIW) framework where quantum behavior emerges from collections of discrete particle-like worlds. The authors identify a fundamental problem called 'sparse ontology' where discrete models fail as quantum systems decohere and branch, concluding that successful hydrodynamic quantum theories likely require continuous rather than discrete foundations.
Key Contributions
- Identification of the 'sparse ontology' problem in discrete hydrodynamic quantum models
- Analysis showing discrete MIW frameworks break down during decoherence processes
- Argument that hydrodynamic quantum mechanics requires essentially continuous ontology
View Full Abstract
Hydrodynamic reformulations of the Schrödinger equation suggest an interpretation of quantum mechanics in terms of a fluid flowing on configuration space. In the discrete hydrodynamic view, this fluid is not fundamental but emerges from many underlying microscopic fluid components whose collective behavior reproduces quantum phenomena. The most developed realization of this idea is the discrete many interacting worlds (MIW) framework, in which discrete particle-like worlds interact via inter-world forces and quantum probabilities are grounded in direct world counting. But there is also an older, continuous version of MIW. After reviewing the hydrodynamic and MIW formalisms, and emphasizing some of their interpretational advantages over the Everettian Many Worlds and Bohmian approaches, we argue that all discrete hydrodynamic models face a generic structural difficulty, which we call the problem of sparse ontology. Because wavefunctions typically branch under decoherence, the discrete components of the fluid are repeatedly partitioned into sub-ensembles, thereby thinning their density in configuration space and driving the dynamics away from the quantum regime once the components become sufficiently sparse. We conclude that successful hydrodynamic completions of quantum mechanics plausibly require an essentially continuous ontology.
Quantum feedback algorithms for DNA assembly using FALQON variants
This paper applies quantum algorithms called FALQON variants to solve DNA sequence assembly problems by converting them into optimization problems that quantum computers can solve. The researchers test three different versions of the algorithm on COVID-19 and human DNA data, finding that the enhanced versions work better and require fewer quantum circuit operations.
Key Contributions
- Development and comparison of three FALQON algorithm variants (standard, second-order, and time-rescaled) for DNA assembly
- Demonstration of improved convergence and success probabilities with reduced circuit depths for combinatorial optimization on near-term quantum hardware
View Full Abstract
Reconstructing DNA sequences without a reference, known as de novo assembly, is a complex computational task involving the alignment of overlapping fragments. To address this problem, a usual strategy is to map the assembly to a Quadratic Unconstrained Binary Optimization (QUBO) formulation, which can be solved by different quantum algorithms. In this work, we focus on three versions of the Feedback-based Algorithm, a protocol that eliminates classical optimization loops via measurement feedback. We analyze long-read DNA fragments from SARS-CoV-2 and human mitochondrial DNA using standard FALQON, second-order FALQON (SO-FALQON), and time-rescaled FALQON (TR-FALQON). Numerical results show that both variants improve convergence to the ground state and increase success probabilities at reduced circuit depths. These findings indicate that enhanced feedback-driven dynamics are effective for solving combinatorial problems on near-term quantum hardware.
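The QUBO-style formulation can be illustrated with a toy sketch (the fragments and scoring are invented; exhaustive search over orderings stands in for the FALQON feedback dynamics that would minimize the corresponding cost Hamiltonian):

```python
import numpy as np
from itertools import permutations

# Toy DNA fragments to be ordered by maximal suffix-prefix overlap.
frags = ["ATGGC", "GGCTA", "CTAAC"]

def overlap(a, b):
    """Length of the longest suffix of a matching a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

# Pairwise overlap matrix; in the QUBO, binary variables x[i,p]
# (fragment i at position p) are rewarded for consecutive overlaps.
n = len(frags)
W = np.array([[overlap(a, b) for b in frags] for a in frags])

# Classical brute force over orderings stands in for the quantum search.
best = max(permutations(range(n)),
           key=lambda p: sum(W[p[i], p[i + 1]] for i in range(n - 1)))
print([frags[i] for i in best])  # ['ATGGC', 'GGCTA', 'CTAAC']
```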
Quantum Coherence of Top Quark Pairs Produced at LHC
This paper analyzes quantum coherence effects in top quark-antiquark pairs produced at the Large Hadron Collider by comparing theoretical predictions with experimental data from CMS. The researchers use a spin-density framework to study quantum interference patterns and find good agreement in most kinematic regions, with some deviations that may indicate new physics effects.
Key Contributions
- Reinterpretation of LHC spin-correlation data within a quantum coherence framework
- Demonstration that quantum coherence can serve as a probe for Standard Model validation and new physics detection in top quark physics
View Full Abstract
We study quantum coherence in top-antitop production at the LHC by comparing Standard Model predictions with CMS data across different kinematic regimes. Theory and experiment are statistically consistent in the near-threshold and boosted central regions, confirming that the spin-density framework captures the dominant helicity-interference structure. The intermediate-mass window shows a noticeable deviation, indicating enhanced sensitivity to radiative QCD effects. This work reinterprets measured spin-correlation data within a quantum-coherence framework, thereby introducing coherence as a complementary and experimentally grounded probe of the Standard Model spin structure and a potentially sensitive diagnostic of new-physics effects in the top-quark sector.
Asynchronous Multi-photon Interference for Quantum Networks
This paper develops and experimentally validates a theoretical framework for multi-photon quantum interference using continuous-wave light sources instead of pulsed sources, showing how to optimize detection timing windows to maximize interference visibility while maintaining practical photon rates for quantum communication networks.
Key Contributions
- Development of quantitative theoretical framework relating timing parameters, interference visibility, and multi-photon rates for CW sources
- Experimental validation using four-photon Hong-Ou-Mandel interference measurements
- Demonstration that CW operation can achieve comparable performance to pulsed sources while relaxing synchronization requirements
View Full Abstract
Advanced quantum communication protocols require high-visibility quantum interference between photons generated at distant nodes, which places stringent demands on optical synchronization. Conventionally, synchronization of optical wave packets relies on pulsed sources and precise optical path stabilization. An alternative approach employs continuous-wave (CW) photon-pair sources, where temporal indistinguishability is enforced by post-selecting detection events within a coincidence window $τ_w$ shorter than the photon coherence time $T_c$. Despite its conceptual simplicity, the quantitative relation between relevant time scales, achievable interference visibility, and usable multi-photon rates has remained unclear. Here, we develop in detail and experimentally validate a theoretical framework that quantitatively describes time-resolved multi-photon interference in the CW regime. We explicitly incorporate detector timing jitter, photon coherence time, and temporal post-selection. The model is verified using four-photon Hong-Ou-Mandel interference measurements. Based on this validated framework, we determine the coincidence window that maximizes usable four-photon rates for a target visibility. Finally, we compare CW and pulsed SPDC sources under equivalent indistinguishability constraints and show that CW operation can achieve comparable rates while relaxing optical synchronization requirements.
Restriction-Based Certificate of Bipartite Schmidt Rank in Hypergraph States
This paper develops new mathematical methods to analyze and certify quantum entanglement in hypergraph states by computing Schmidt rank across bipartitions. The authors introduce a restriction-based approach that can provide lower bounds on entanglement when traditional methods fail for complex quantum states.
Key Contributions
- Development of restriction-based method for computing Schmidt rank in hypergraph states using residual-free bilinear cores
- Introduction of combinatorial sufficient conditions (disjoint bridge matching) that guarantee existence of large full-rank cores for CCZ-type bridge patterns
View Full Abstract
We investigate bipartite entanglement in qubit hypergraph states across an arbitrary fixed bipartition. Using the real equally weighted (REW) representation, the Schmidt rank across the cut can be computed as the real rank of a phase-cleaned cross-cut sign matrix. Whereas graph states admit an exact cut-rank rule, because the cross-cut phase is purely bilinear, hypergraph states typically contain higher-degree cross-cut interactions, for which the cut-rank rule fails. Our approach certifies entanglement by fixing a single computational-basis assignment on a subset of qubits, thereby selecting a submatrix on an active slice. When this restriction removes all higher-degree cross-cut residues, the remaining cross-cut phase becomes bilinear up to cut-local terms. We call the resulting submatrices residual-free bilinear cores and show that they yield an exponential Schmidt-rank lower bound in terms of the $\mathbb{F}_2$-rank of an exposed core matrix. We further give a combinatorial sufficient condition, phrased as a disjoint bridge matching, that guarantees the existence of large full-rank cores for broad families of CCZ-type bridge patterns, and we present a search-and-verify procedure that constructs and certifies such cores directly from the hyperedge description.
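The $\mathbb{F}_2$-rank underlying the exponential lower bound is computed by standard Gaussian elimination over GF(2); a minimal sketch (the example core matrix is invented, not taken from the paper):

```python
import numpy as np

def rank_f2(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    A = np.array(M, dtype=np.uint8) % 2
    rank = 0
    n_rows, n_cols = A.shape
    for col in range(n_cols):
        # Find a pivot row with a 1 in this column.
        pivot = next((r for r in range(rank, n_rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]   # move pivot row into place
        for r in range(n_rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]               # XOR = addition over GF(2)
        rank += 1
    return rank

# A core matrix of F2-rank r certifies Schmidt rank >= 2**r across the cut.
core = [[1, 1, 0],
        [0, 1, 1],
        [1, 0, 1]]   # third row is the GF(2) sum of the first two
r = rank_f2(core)
print(r, 2 ** r)  # 2 4
```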
Telemetry-Based Server Selection in the Quantum Internet via Cross-Layer Runtime Estimation
This paper develops a method called T_max for selecting the best quantum server in a quantum internet network by combining telemetry data from multiple system layers to minimize job execution time. The authors test their approach using simulations of quantum computing workloads distributed across heterogeneous server pools.
Key Contributions
- Development of T_max lightweight runtime scoring system for quantum server selection
- Comprehensive evaluation using NetSquid simulations with modified VQE workloads across different network scenarios
- Derivation of operating maps and sensitivity analysis for quantum internet deployment planning
View Full Abstract
The Quantum Internet will allow clients to delegate quantum workloads to remote servers over heterogeneous networks, but choosing the server that minimizes end-to-end execution time is difficult because server processing, feedforward classical communication, and entanglement distribution can overlap in protocol-dependent ways and shift the runtime bottleneck. We propose $T_{\max}$, a lightweight runtime score that sums coarse telemetry from multiple layers to obtain a conservative ranking for online server selection without calibrating weights for each deployment. Using NetSquid discrete-event simulations of a modified parameter-blind VQE (PB-VQE) workload, we evaluate $T_{\max}$ on pools of 10,000 heterogeneous candidates (selecting among up to 100 per decision) across crossover and bottleneck-dominated regimes, including temporal jitter scenarios and jobs with multiple shots. $T_{\max}$ achieves single-digit mean regret normalized by the oracle (below 10%) in both regimes and remains in the single-digit range under classical communication latency jitter for multi-shot jobs, while performance degrades for single-shot jobs under severe jitter. To connect performance to deployment planning, we derive an operating map relating distance and entanglement-rate requirements to protocol-level counts, quantify how simple multiuser contention shifts the crossover, and use Sobol global sensitivity analysis to identify regime-dependent bottlenecks. These findings suggest that simple cross-layer telemetry can enable practical server selection while providing actionable provisioning guidance for emerging Quantum Internet services.
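The idea of a conservative sum-of-layers score can be sketched as follows (the telemetry fields, values, and server names are hypothetical, not the paper's actual model): summing the per-layer times ignores protocol-dependent overlap, so the score upper-bounds the true runtime and yields a conservative ranking.

```python
# Hypothetical per-server telemetry (seconds); field names are
# illustrative, not taken from the paper.
servers = [
    {"name": "A", "proc_s": 0.8, "comm_s": 0.3, "ent_s": 1.5},
    {"name": "B", "proc_s": 1.2, "comm_s": 0.1, "ent_s": 0.4},
    {"name": "C", "proc_s": 0.5, "comm_s": 0.9, "ent_s": 1.1},
]

def t_max_score(s):
    """Conservative runtime estimate: sum coarse per-layer times
    (processing, classical communication, entanglement distribution)
    rather than modeling their protocol-dependent overlap."""
    return s["proc_s"] + s["comm_s"] + s["ent_s"]

# Online selection: pick the candidate with the smallest score.
best = min(servers, key=t_max_score)
print(best["name"])  # "B"
```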
Internal dynamics and guided motion in general relativistic quantum interferometry
This paper develops a new theoretical framework for understanding how quantum particles' internal properties interact with their motion in gravitational fields, using advanced quantum field theory in curved spacetime rather than simpler approximations. The work unifies previous results and predicts new quantum effects including Berry phases that arise from gravitational coupling.
Key Contributions
- Development of generally covariant semiclassical framework for quantum-gravity coupling beyond linearized approximations
- Prediction of new effects including internal energy influence on field amplitudes and gravitationally-induced Berry phases
View Full Abstract
The coupling between internal degrees of freedom of quantum systems and their overall motion in an external gravitational field plays a central role in multiple extensions of Einstein's equivalence principle to quantum physics. While previous models of such effects were predominantly restricted to linearized gravity and often required the motion of quantum particles to follow prescribed world-lines, this letter shows how such phenomena can be understood using generally covariant semiclassical approximations in the framework of quantum field theory in curved space-times. This method provides a unification and generalization of previously established results, but also predicts new effects such as an influence of internal energies on field amplitudes, as well as correction terms to the internal Schrödinger equation that give rise to Berry phases.
Characterization-free classification and identification of the environment between two quantum players
This paper develops a protocol for two quantum players to determine the causal structure of quantum channels between them without prior knowledge of their devices or environment. The method uses statistical analysis of input-output data to classify different causal ordering strategies, with experimental demonstration on an optical platform.
Key Contributions
- Development of characterization-free protocol for classifying causal structure in quantum channels
- Proof that minimal random channels with two-outcome POVMs retain full protocol performance
- Experimental demonstration of reliable strategy distinction on optical platform
View Full Abstract
Classifying the causal structure of quantum channels is essential for verifying quantum networks and certifying quantum resources. We introduce a characterization-free protocol enabling two isolated players, Alice and Bob, to classify and identify the definite-order strategy adopted by an unknown environment mediating their channels. Without assuming knowledge of their devices or the environment, the players infer the causal order solely from input-output statistics by testing Markovian conditions that we prove are necessary and sufficient for each strategy class. Remarkably, we prove that even with a minimal random channel consisting of two-outcome POVMs and two-state preparations, the protocol retains full performance with probability one. We experimentally demonstrate the protocol on an optical platform, reliably distinguishing between several strategies. Our results provide a strong and robust tool for causal inference in quantum networks.
Quantum-limited detection of arrival time and carrier frequency of time-dependent signals
This paper derives and experimentally verifies fundamental quantum limits on simultaneously measuring the arrival time and frequency of light pulses, proposing an optimal detection scheme that reaches these theoretical limits. The work shows that finite detection windows create a quantum rotor problem rather than the standard harmonic oscillator case, requiring new uncertainty relations beyond the typical Heisenberg principle.
Key Contributions
- Derivation of quantum uncertainty bounds for joint time-frequency measurements using quantum rotor model
- Experimental demonstration of optimal detection scheme that saturates fundamental quantum limits
- Framework for Wigner function reconstruction beyond harmonic oscillator systems
View Full Abstract
Precise measurements of both the arrival time and carrier frequency of light pulses are essential for time-frequency-encoded quantum technologies. Quantum mechanics, however, imposes fundamental limits on the simultaneous determination of these quantities. In this work, we derive and experimentally verify the quantum uncertainty bounds governing joint time-frequency measurements. We show that when detection is restricted to finite time windows, the problem is naturally described by a quantum rotor, rendering the commonly used Heisenberg uncertainty relation inapplicable. We further propose an optimal detection scheme that saturates these fundamental limits. By sampling the Q-function, we demonstrate the reconstruction of the Wigner function beyond the harmonic oscillator. Using an experimental implementation based on a quantum pulse gate, we confirm that the proposed scheme approaches the ultimate quantum limit for simultaneous time-frequency measurements. These results provide a new framework for joint time-frequency detection with direct implications for precision measurements and quantum information processing.
Adversarial Information Gain in Non-ideal Quantum Measurements
This paper studies how adversaries can extract information from quantum measurement devices by analyzing the noise characteristics of non-ideal quantum instruments. The authors derive conditions for when an adversary can simultaneously perform measurements to gain information about either the same basis or a different basis than what the legitimate observer is measuring.
Key Contributions
- Derived necessary and sufficient conditions for compatibility between observer's non-ideal quantum instruments and adversarial measurements
- Quantified maximum information extraction by adversaries in terms of noise parameters for both same-basis and mutually unbiased basis scenarios
- Provided device implementation framework from adversary's perspective for same-basis information extraction
View Full Abstract
Performing a quantum measurement yields two different results: a classical outcome drawn from a probability distribution, according to Born's rule, and a quantum outcome corresponding to the post-measurement state. Quantum devices that provide both outcomes can be described through quantum instruments. In a realistic scenario, one can expect that the observer's obtained classical and quantum outcomes are non-ideal: this can be due to experimental limitations, but could also be explained by adversarial interference, that is, a second party that disturbs the device through a concealed measurement to obtain information. The second scenario can be interpreted through quantum compatibility, as it implies that both the observer's instrument and the adversary's measurement can be performed simultaneously. In this work, we show how the noise of the observer's device relates to the amount of information that the adversary can obtain. We study scenarios in which the adversary aims to acquire information on the same basis as the observer's measurement, or on a mutually unbiased basis with respect to the observer's basis. In both cases, we derive necessary and sufficient conditions for the compatibility of a single qubit non-ideal quantum instrument and a noisy meter, from which we obtain the maximum amount of information that the adversary can extract in terms of the noise parameters of the observer's instrument. Finally, we provide the device implementation from the adversary's point of view for the same basis scenario.
Experimental Asynchronous Measurement-Device-Independent Quantum Cryptographic Conferencing
This paper demonstrates an improved quantum key distribution protocol that allows multiple users to share identical secure keys simultaneously. The new asynchronous approach significantly improves the key generation rate compared to previous methods, making it more practical for large-scale quantum communication networks.
Key Contributions
- First experimental implementation of asynchronous measurement-device-independent quantum cryptographic conferencing
- Achieved key rate scaling of O(η) independent of user number, compared to O(η^N) in previous MDI QCC protocols
- Demonstrated practical techniques including FFT-based frequency estimation and phase drift compensation without global phase tracking
View Full Abstract
The quantum cryptographic conferencing (QCC) protocol, which distributes identical secure keys to user groups, is a crucial component of the quantum network. Previous experimental works have implemented the measurement-device-independent (MDI) QCC, whose key rate in an $N$-user network scales down as $R\sim O(η^N)$. Building on the MDI QCC protocol, the asynchronous MDI (AMDI) QCC protocol theoretically integrates the mode pairing scheme into QCC, significantly boosting the key rate to $R\sim O(η)$, which is independent of the number of users and thus demonstrates greater application potential. In this work, we experimentally implement a three-user AMDI QCC network without global phase tracking by adopting fast Fourier transform-based frequency difference estimation and a phase drift compensation technique. Finally, we achieve a key rate of about $4.470\times10^{-9}$ bits per pulse under a maximum overall loss of about 59.6 dB. This work provides a scalable solution for the development of large-scale quantum communication networks in the future.
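The scaling advantage quoted above can be made concrete with a quick numerical sketch. The exponents $O(η^N)$ versus $O(η)$ come from the abstract; the unit prefactors below are placeholders, not the protocols' actual rate formulas.

```python
# Illustrative key-rate scaling vs. channel transmittance eta.
# Prefactors are placeholders; only the scaling exponents are taken
# from the abstract.

def mdi_qcc_rate(eta, n_users):
    """Conventional MDI QCC: rate falls as eta**N for N users."""
    return eta ** n_users

def amdi_qcc_rate(eta):
    """Asynchronous MDI QCC: rate falls only linearly in eta."""
    return eta

loss_db = 59.6                    # overall loss quoted in the abstract
eta = 10 ** (-loss_db / 10)       # transmittance from loss in dB

advantage = amdi_qcc_rate(eta) / mdi_qcc_rate(eta, n_users=3)
print(f"eta = {eta:.3e}, AMDI scaling advantage for 3 users ~ {advantage:.1e}x")
```

At the quoted 59.6 dB loss, the $η$-versus-$η^3$ gap spans many orders of magnitude, which is why the asynchronous scheme remains practical at losses where the conventional protocol's rate vanishes.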
Enhancing low-temperature quantum thermometry and magnetometry via quadratic interactions in optomechanical-like systems
This paper demonstrates how quadratic coupling interactions in optomechanical-like systems can dramatically improve quantum sensors for measuring temperature and magnetic fields at low temperatures. The researchers show that these interactions create quantum squeezing and non-Gaussian states that provide orders-of-magnitude better sensitivity compared to standard radiation-pressure based sensors.
Key Contributions
- Demonstration that quadratic coupling in optomechanical systems generates intrinsic squeezing and non-Gaussian states that overcome vacuum fluctuation limits
- Orders-of-magnitude enhancement in quantum thermometry and magnetometry sensitivity in low-temperature regimes compared to standard radiation-pressure coupling
- Analysis of multiparameter estimation showing statistical correlations prevent simultaneous optimal estimation of temperature and magnetic field
View Full Abstract
Standard optomechanical sensors operating in the low-temperature regime often face fundamental precision limits imposed by vacuum fluctuations. Here, we demonstrate that moving beyond conventional radiation-pressure interactions and exploiting quadratic coupling can surpass these limits, generating intrinsic squeezing and non-Gaussian features in the probe state. We study quantum thermometry and magnetometry in a coupled two-resonator system, focusing on the estimation of a thermal bath temperature and an external magnetic field. The resonators are assumed to be in thermal equilibrium with a common bath, while a weak magnetic field acts on one of the resonators. We perform measurements on a single resonator, which serves as the probe for estimating both parameters. We compute the quantum Fisher information of the probe for two different interaction models between the resonators. Our results show that the counter-rotating terms in the quadratic interaction naturally induce squeezing at intermediate coupling and strong non-Gaussian correlations as the coupling increases further. These effects yield orders-of-magnitude enhancement in sensitivity in the low-temperature and weak-field regimes compared to standard radiation-pressure couplings. Finally, we investigate multiparameter estimation and find that, although the optimal measurements remain compatible, statistical correlations between parameters prevent the simultaneous estimation of temperature and magnetic field from attaining single-parameter precision.
$σ$-VQE: Excited-state preparation of quantum many-body scars with shallow circuits
This paper presents σ-VQE, a modified variational quantum eigensolver algorithm designed to find and prepare quantum many-body scar states (special low-entanglement eigenstates) using shallow quantum circuits. The method uses an energy-selective cost function that penalizes energy variance around a target energy, making it suitable for noisy intermediate-scale quantum devices.
Key Contributions
- Novel σ-VQE algorithm that targets mid-spectrum eigenstates and quantum many-body scar states using shallow circuits
- Energy-selective objective function with variance penalization that exploits limited expressibility of shallow circuits
- Unbiased estimation scheme for nonlinear cost function compatible with qubit-wise commuting grouping
- Experimental validation on IBM quantum hardware and benchmarking across multiple model families
View Full Abstract
We present and benchmark a type of variational quantum eigensolver (VQE), which we denote the $σ$-VQE. It is designed to target mid-spectrum eigenstates and prepare quantum many-body scar states. The approach leverages the fact that noisy intermediate-scale quantum devices are limited in their ability to generate generic highly-entangled states. This modified VQE pairs a low-depth circuit with an energy-selective objective that explicitly penalizes energy variance around a chosen target energy. The cost function exploits the limited expressibility of the shallow circuit as atypical low-entanglement eigenstates such as scar states are preferentially selected. We validate this mechanism across two complementary families of models that contain many-body scar states: the Shiraishi-Mori embedding approach, and the matrix-product state parent Hamiltonian construction. We define an unbiased estimation scheme for the nonlinear cost function that is compatible with qubit-wise commuting grouping and bitstring reuse. A proof-of-principle demonstration using a small-system instance was carried out on IBM Fez (Heron r2 QPU). These results motivate its use both as a practical "scar detector" and as a state-preparation primitive for initializing nonthermal eigenstate-supported dynamics.
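The core mechanism described above, an objective that penalizes energy variance around a chosen target energy, can be sketched generically. The paper's actual cost function, ansatz, and estimation scheme may differ; this takes $C(ψ) = \langle ψ|(H - E_t)^2|ψ\rangle$ as a minimal stand-in on a toy Hamiltonian.

```python
import numpy as np

# Minimal sketch of an energy-selective objective: penalize the variance
# of H around a target energy E_t. A toy random Hamiltonian stands in
# for the scarred models benchmarked in the paper.

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4))
H = (H + H.T) / 2                  # toy 2-qubit Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)
E_t = evals[2]                     # target a mid-spectrum eigenvalue

def cost(psi, E_t):
    """<psi|(H - E_t I)^2|psi>: zero iff psi is an eigenstate at energy E_t."""
    M = H - E_t * np.eye(4)
    return float(np.real(psi.conj() @ (M @ (M @ psi))))

print(cost(evecs[:, 2], E_t))      # ~ 0 for the exact eigenstate
psi_rand = rng.normal(size=4)
psi_rand /= np.linalg.norm(psi_rand)
print(cost(psi_rand, E_t))         # strictly positive for a generic state
```

Minimizing this cost over a shallow-circuit ansatz selects low-entanglement states near $E_t$, which is the "scar detector" idea: the limited expressibility of the circuit biases the search toward atypical eigenstates.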
Efficient two-color Floquet control of the RKKY interaction in altermagnets
This paper demonstrates how two-color laser driving can efficiently control magnetic interactions between impurity spins in altermagnetic materials, enabling better isolation of individual spins for quantum applications. The technique uses interference between laser frequencies to suppress unwanted spin-spin interactions while requiring weaker fields than single-color approaches.
Key Contributions
- Development of two-color Floquet control technique for suppressing RKKY interactions using weaker laser fields
- Discovery of altermagnet-specific effects including emergent in-plane Zeeman fields and tunable Dzyaloshinskii-Moriya interactions
- Demonstration of near-complete on-off switching of magnetic interactions through Fermi surface modulation
View Full Abstract
Magnetic impurities in real materials can mask the intrinsic spin-dependent properties of hosts. They interact indirectly through the Ruderman-Kittel-Kasuya-Yosida (RKKY) mechanism, which limits the use of isolated impurity spins in applications such as qubits and spintronics. Suppressing the RKKY interaction would therefore enable access to the host's unperturbed behavior while simultaneously isolating impurity spins for functional use. Although single-color laser driving can suppress the RKKY interaction, it typically requires strong fields that may be impractical or destabilizing. To overcome these limitations, we show that two-color laser driving provides efficient and tunable control over all components of the RKKY interaction using two weak laser fields. Focusing on two-dimensional Rashba altermagnets, we show that interference between one- and two-photon processes produces altermagnet-specific Floquet corrections. These include additional AC Stark shifts, magnetizations, spin-orbit renormalization, and emergent in-plane Zeeman fields that are absent under single-color driving and in non-altermagnetic systems. Notably, two-color driving induces a finite $z$-component of the Dzyaloshinskii-Moriya (DM) interaction, stabilizing in-plane chiral magnetism and related textures in Rashba altermagnets. These effects enable tunable, near-complete on-off switching of the Heisenberg, Ising, and DM interactions through a Lifshitz-like modulation of the Fermi surface. We also show that the tuning process is highly sensitive to the chirality of both beams. We further map phase diagrams for ferromagnetic and antiferromagnetic impurity alignment with clockwise and counterclockwise canting as functions of Rashba coupling and altermagnetic order. Finally, we discuss candidate material platforms and experimental feasibility.
Simulating Microwave-Controlled Spin Imaging with Free-Space Electrons
This paper develops a theoretical framework for combining electron microscopy with microwave-controlled spin resonance to enable atomic-scale imaging of individual electron spins. The technique uses free-space electrons as probes to detect magnetic phase shifts from spin systems, potentially enabling new capabilities in quantum spin research and nanoscale materials characterization.
Key Contributions
- Theoretical framework for spin resonance spectroscopy in transmission electron microscopy
- Demonstration that phase shifts from individual electron spins are detectable using free-space electron probes
- Optimization of imaging conditions using Classical Fisher Information to maximize signal-to-noise ratio
View Full Abstract
Coherent spin resonance techniques, such as nuclear and electron spin resonance spectroscopy, have revolutionized non-invasive imaging by providing spectrally resolved information about spin dynamics. Motivated by the recent emergence of electron microscopy methods capable of sensing microwave-excitations, we establish a theoretical framework for Spin Resonance Spectroscopy (SRS) in transmission electron microscopy (TEM). This technique combines microwave pump fields with focused electron probe beams to enable state-selective spin imaging at the atomic scale. Using scattering theory, we model the interaction between free-space electrons and electron spin systems, capturing both elastic and inelastic processes. The strongest effect of the spin system on the free electron is a magnetic phase shift. Our simulations demonstrate that phase shifts from individual electron spins are detectable in both image mode and diffraction mode. In principle, differential measurements under microwave control allow the extraction of local resonance frequencies that are influenced by the surrounding spin environment. By evaluating the Classical Fisher Information (CFI), we identify imaging conditions that maximize the signal-to-noise ratio (SNR), showing how defocus and beam width affect the measurement sensitivity. These findings establish a foundation for integrating SRS with high-resolution TEM, bridging spin spectroscopy and atomic-scale imaging, and enabling new capabilities in quantum spin research and nanoscale materials characterization.
A mathematical model for the Einstein-Podolsky-Rosen argument
This paper presents a rigorous mathematical model of the Einstein-Podolsky-Rosen (EPR) paradox using two entangled particles on a line and a spin at a fixed point. The authors prove that when one particle interacts with and flips the spin, the distant non-interacting particle acquires definite momentum, demonstrating quantum correlations.
Key Contributions
- Rigorous mathematical proof of EPR correlations in a specific physical model
- Demonstration of instantaneous momentum acquisition in distant particle upon spin flip interaction
View Full Abstract
We study a nonrelativistic system made of two quantum particles constrained to move on a line and a spin located at a fixed point of the line. Initially the two particles are in a maximally entangled state and the spin is down. The first particle interacts with the spin while the second particle is free, i.e., it interacts neither with the first particle nor with the spin. We rigorously prove that there is a correlation between the state of the spin and the state of the second particle. More precisely, we show that, in a suitable scaling limit, if the first particle flips the spin, then the second particle possesses a definite momentum in the direction opposite to the spin.
Mach-Zehnder interferometer for in-situ characterization of atom traps
This paper presents a new method using Mach-Zehnder interferometry to precisely measure and characterize the trapping potentials that hold cold atoms in place. The technique can determine key parameters like trap frequency and anharmonicity, which is important for quantum simulators and sensors that rely on accurately controlled atom traps.
Key Contributions
- Novel Mach-Zehnder interferometer technique for in-situ trap characterization
- Method to accurately determine trap frequency and anharmonicity bounds in optical dipole traps
View Full Abstract
Manipulating cold atoms in traps is a key tool for numerous realizations of quantum simulators and quantum sensors. These applications require accurate modeling and characterization of the underlying trapping potentials. We introduce a technique based on the Mach-Zehnder interferometer for in-situ characterization of weakly anharmonic potentials. By simulating the interferometer in an optical dipole trap, we can accurately determine its trap frequency and upper bounds on the anharmonicity magnitudes.
Quantum discord of mixed states under noisy channels in the curved spacetime
This paper studies how quantum discord (a measure of quantum correlations) in two-qubit systems behaves when subjected to various types of noise in the curved spacetime around a black hole. The researchers find that quantum discord degrades as Hawking acceleration increases but never completely disappears, and exhibits symmetric behavior under certain noise channels.
Key Contributions
- Analysis of quantum discord behavior in curved spacetime under different noise channels
- Demonstration that quantum discord survives black hole effects without sudden death
- Discovery of symmetric discord behavior under bit flip and phase flip channels
View Full Abstract
We focus our attention on two-qubit mixed states as initial states, and apply the geometric measure of quantum discord to investigate quantum discord properties in the background of a Schwarzschild black hole under phase damping, phase flip and bit flip channels, respectively. Several analytical complementary relationships based on quantum discords for bipartite subsystems are proposed. For the three channel noises, the behaviors of the discords are similar: the accessible discords always degrade as the Hawking acceleration rises, but sudden death never occurs, while the inaccessible discords increase monotonically from zero. Interestingly, in the case of the bit flip and phase flip channels, the discords behave symmetrically as the decay probability rises.
Quantum coherence of mixed states under noisy channels in noninertial frames
This paper studies how quantum coherence of three-particle mixed states behaves under different types of noise (phase damping, phase flip, bit flip) in the curved spacetime near a black hole. The researchers develop mathematical relationships to describe coherence properties and find that coherence degrades differently depending on the type of noise channel and the strength of the gravitational field.
Key Contributions
- Development of analytic complementary relationships for coherence concurrence in tripartite quantum systems
- Analysis of quantum coherence behavior under different noise channels in curved spacetime backgrounds
- Discovery that coherence concurrence of X-shaped mixed states equals l1-norm of coherence
View Full Abstract
We focus our attention on tripartite mixed states as initial states, and apply coherence concurrence to investigate quantum coherence properties in the background of a Schwarzschild black hole under phase damping, phase flip and bit flip channels, respectively. Several analytic complementary relationships based on coherence concurrence for tripartite subsystems are proposed. In the case of the bit flip channel, the behavior of the coherence concurrence is similar to that under the phase damping channel: the accessible coherence concurrence always degrades as the Hawking acceleration rises, but sudden death never occurs, while the inaccessible coherence increases monotonically from zero. Interestingly, under the phase flip channel the coherence concurrence first decreases and then increases as the decay probability rises. Unlike the case of tripartite pure states, the coherence concurrence of a mixed state with X shape is equal to the $l_1$-norm of coherence.
Suppressed correlation-spreading in a one-dimensional Bose-Hubbard model with strong interactions
This paper studies how quantum correlations spread in a one-dimensional chain of interacting bosons, finding that strong interactions dramatically slow down the propagation of correlations through doublon-holon dynamics. The researchers show this system exhibits non-ergodic behavior and can be mapped to an effective spin model to predict correlation velocities.
Key Contributions
- Demonstration of suppressed correlation spreading in strongly interacting Bose-Hubbard systems through doublon-holon exchange dynamics
- Mapping of the Bose-Hubbard model to an antiferromagnetic transverse-field Ising model that accurately predicts correlation propagation velocities
View Full Abstract
We investigate signatures of non-ergodic behavior in the real-time evolution of a one-dimensional Bose-Hubbard model, where the initial state is a doubly occupied density-wave state. We show that the occupation dynamics at strong interactions is dominated by doublon-holon exchange, which leads to a domain wall excitation and propagation. The latter manifests as a negated staggered pattern in the density-density correlations, while the single-particle and pair correlation functions show highly localized correlations that decay rapidly away from the nearest neighbor. We show that the time scale of the domain-wall excitations depends on the inverse of the interaction strength and therefore dictates the slow relaxation dynamics. In the presence of a parabolic trap, the occupation dynamics at the edges become frozen and further suppress the propagation of correlations. This suppression happens even for trap strengths weaker than the tunneling rate. We also show that the model can be mapped to an antiferromagnetic transverse-field Ising model in the limit of strong interactions and that the correlation-propagation velocity in the original model is well captured by the group velocity of the spin-wave excitation in the effective spin model.
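Extracting a correlation-propagation velocity from a spin-wave dispersion, as done for the effective spin model above, can be illustrated with the textbook transverse-field Ising chain. Note the dispersion below, $ε(k) = 2J\sqrt{1 + g^2 - 2g\cos k}$, is the standard one for that model, not the effective couplings derived in the paper.

```python
import numpy as np

# Illustrative: maximal group velocity of spin-wave excitations for the
# textbook transverse-field Ising chain (standard dispersion, not the
# paper's effective couplings). The maximal group velocity bounds how
# fast correlations spread.

J, g = 1.0, 0.5                                   # coupling and field (toy values)
k = np.linspace(0.0, np.pi, 20001)
eps = 2 * J * np.sqrt(1 + g**2 - 2 * g * np.cos(k))
v_group = np.gradient(eps, k)                     # d eps / dk
print(f"max group velocity ~ {v_group.max():.3f}")
```

For this dispersion the maximum works out analytically to $2J\min(g, 1)$, so the numerical value above should sit near $2Jg = 1.0$ for the toy parameters chosen.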
Generative Deep Learning for the Two-Dimensional Quantum Rotor Model
This paper uses generative adversarial networks (GANs) and deep learning to study the ground-state properties and phase transitions of the two-dimensional quantum rotor model. The researchers developed specialized GAN architectures that can efficiently generate quantum ground-state samples and identify critical points in phase transitions, demonstrating how machine learning can accelerate quantum many-body simulations.
Key Contributions
- Development of conditional GANs with transposed convolutions for quantum ground-state generation
- Introduction of dynamically adaptive weighting factors in deep convolutional GANs for quantum phase transition analysis
- Demonstration of efficient critical point identification using latent variable analysis of quantum states
View Full Abstract
The advancement of diverse generative deep learning models and their variants has furnished substantial insights for investigating quantum many-body problems. In this work, we design two models based on the foundational architecture of generative adversarial networks (GANs) to investigate the ground-state properties and phase transition characteristics of the two-dimensional quantum rotor model (QRM). Within a semi-supervised learning framework, we incorporate multiple layers of transposed convolutions in the generator, enabling the conditional GAN to more efficiently extract low-dimensional encoded information. Analysis of one-dimensional latent variables associated with ground-state samples for different system sizes allows us to pinpoint the location of the critical point. In addition, we introduce dynamically adaptive weighting factors related to the distributional characteristics into the loss function of the deep convolutional GAN, and utilize upsampling techniques to enlarge the generated sample sizes. Comparisons of the optimization processes for mean magnetization and potential energy density across different magnetization regimes of QRM demonstrate that our model can efficiently generate valid ground-state samples, significantly reducing computational time. Our results highlight the promising potential of generative deep learning in quantum phase transition research, especially in critical point identification and the auxiliary generation of simulation data for quantum many-body models.
A note on entanglement detection via the generalized realignment moments
This paper develops new mathematical methods for detecting quantum entanglement in experimental settings by introducing generalized realignment moments with additional parameters. The authors show these criteria are more flexible and effective than existing methods for determining whether quantum states are entangled or separable.
Key Contributions
- Development of two new separability criteria based on generalized realignment moments
- Introduction of additional parameters that make entanglement detection more flexible and stronger than existing methods
View Full Abstract
The experimental detection of quantum entanglement is of great importance in quantum information processing. We present two separability criteria based on the generalized realignment moments. By incorporating additional parameters, these criteria prove to be more flexible and stronger than some existing ones. Detailed examples are given to demonstrate their effectiveness and feasibility for entanglement detection.
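The plain realignment (CCNR) criterion that such moment-based criteria build on is easy to sketch: for any separable state, the trace norm of the realigned density matrix satisfies $\|R(ρ)\|_1 \le 1$, so a value above 1 certifies entanglement. The paper's generalized moment criteria add extra parameters; the sketch below shows only the base test.

```python
import numpy as np

# Base realignment (CCNR) test: ||R(rho)||_1 > 1 certifies entanglement.
# The generalized moment criteria of the paper refine this idea.

def realign(rho, d):
    """Realignment map on C^d x C^d: R_{(i,j),(k,l)} = rho_{(i,k),(j,l)}."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

def ccnr_norm(rho, d):
    """Trace norm of the realigned matrix (sum of singular values)."""
    return np.linalg.svd(realign(rho, d), compute_uv=False).sum()

bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
rho_bell = np.outer(bell, bell)
rho_prod = np.diag([1.0, 0, 0, 0])        # product state |00><00|

print(ccnr_norm(rho_bell, 2))             # 2.0 > 1  -> entangled
print(ccnr_norm(rho_prod, 2))             # 1.0      -> no detection
```

For the maximally entangled two-qubit state the realigned matrix is $\tfrac{1}{2}I_4$, giving trace norm 2, while a pure product state saturates the separable bound at exactly 1.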
Tune-out wavelength for the thulium atom near 576 nm
This paper reports the theoretical prediction and experimental measurement of a tune-out wavelength for thulium atoms at approximately 576 nm, where the atomic polarizability becomes zero. The researchers demonstrated that Bose-Einstein condensation can be achieved in thulium atoms near this wavelength using optical dipole traps.
Key Contributions
- Precise measurement of tune-out wavelength for thulium at 575.646 nm
- Demonstration of Bose-Einstein condensation in thulium across wavelength range covering tune-out point
- Method to separate scalar and tensor polarizability components using trap frequency and RF spectroscopy
View Full Abstract
We report the theoretical prediction and measurement of a tune-out wavelength for the ground state of the thulium atom in a linearly polarized optical dipole trap with a wavelength of approximately 576 nm. The measurements were conducted using a combination of trap frequency and RF loss spectroscopy, making it possible to separate the scalar and tensor parts of the total polarizability without measurements in the range of negative total polarizability. The calculated tune-out wavelength is consistent with the measured value of $575.646_{-0.014}^{+0.016}$ nm in air. The existence of the zero in the polarizability for the Tm ground state was confirmed by the trap loss experiment, which also made it possible to refine the tune-out wavelength to $575.646_{-0.004}^{+0.004}$ nm. Despite the presence of an imaginary part of the polarizability at some wavelengths, it was experimentally demonstrated that, with a proper choice of the dipole trap polarization, it was possible to achieve Bose-Einstein condensation of thulium atoms in the range from 575.348 to 575.689 nm, covering the tune-out wavelength.
Spatial Entanglement Sudden Death in Spin Chains at All Temperatures
This paper proves that quantum spin chains at any finite temperature have a fundamental limit to how far quantum entanglement can extend spatially. The researchers show that if you remove a sufficiently large section from the middle of a spin chain, the remaining left and right parts will be completely separable (not entangled).
Key Contributions
- Proves finite entanglement length exists for all local Hamiltonians on spin chains at finite temperature
- Establishes fundamental limits on spatial extent of quantum entanglement in thermal states
View Full Abstract
We prove a finite entanglement length for the Gibbs state of any local Hamiltonian on a spin chain at any finite temperature: After removing an interval of size at least equal to the entanglement length, the remaining left and right half-chains are in a separable state.
Task Concurrency and Compatibility in Measurement-Based Quantum Networks
This paper introduces a new design framework for quantum networks that considers how multiple quantum tasks can share entanglement resources simultaneously, rather than optimizing for individual tasks. The authors define 'compatibility' as a metric to determine when concurrent tasks can use the same pre-shared entanglement, showing 40-55% improvements in network efficiency.
Key Contributions
- Introduction of compatibility as a design-level metric for quantum networks to handle concurrent tasks
- Framework for optimizing shared entanglement resources across multiple simultaneous quantum tasks
- Demonstration of 40-55% improvement in network task capacity through compatibility-based design
View Full Abstract
Measurement-Based Quantum Networks (MBQNs) rely on multipartite pre-shared entanglement resources to satisfy entanglement requests. Traditional designs optimize these resources for individual tasks, neglecting that multiple tasks may arrive concurrently and compete for the same entanglement. We introduce compatibility as a design-level metric, capturing whether concurrent tasks can be satisfied by the same entanglement resources. We define a worst-case notion of compatibility where nodes are prevented from coordinating after task arrival and illustrate why tasks may be incompatible. Furthermore, we explore compatibility extensions that account for stochastic arrivals and the capability to supplement the pre-shared entanglement with additional entanglement on-demand, and show that incompatibility differs structurally dependent on the set of concurrent tasks. We argue that compatibility should be used for resource state design, building the foundation for determining which task pairs the network should support with pre-shared entanglement and which require execution-time coordination. Numerical simulations demonstrate this potential, with $(G,1)$-compatibility achieving a 40%-55% gain in simultaneously supported tasks relative to the single-task baseline. By incorporating compatibility as a fundamental design objective, quantum networks can move beyond single-task optimization towards scalable, robust architectures that effectively balance proactive entanglement distribution and supplemental reactive coordination.
First- and Second-Order Digital Quantum Simulation of Three-Level Jaynes-Cummings Dynamics on Superconducting Quantum Processors
This paper demonstrates digital quantum simulation of a three-level atom interacting with light using IBM's superconducting quantum computers. The researchers encode a three-level atomic system using multiple qubits and compare different Trotter decomposition methods to simulate the quantum dynamics digitally rather than with analog approaches.
Key Contributions
- Digital quantum simulation of three-level Jaynes-Cummings dynamics on NISQ devices
- Comparative analysis of first- and second-order Trotter decomposition methods for quantum simulation
- Demonstration of qutrit encoding using two qubits for multi-level atomic system simulation
View Full Abstract
This work presents a digital quantum simulation of a three-level atomic system interacting with a single-mode electromagnetic field based on the Jaynes-Cummings model, implemented on IBM Quantum superconducting processors. A qutrit is encoded using two physical qubits to represent the atomic states, while an additional qubit encodes the truncated field mode, enabling the realization of effective $Λ$-type atomic dynamics. The continuous-time light-matter interaction is implemented in a digital form by discretizing the evolution using Suzuki-Trotter decomposition. In contrast to an analog realization, the digital simulation replaces the continuous evolution with a sequence of quantum gates whose parameters are explicitly controlled. Phase evolution arising from the interaction Hamiltonian is digitally encoded using calibrated $R_Z$ gates, whose rotation angles are fixed by the physically relevant coupling scale and the chosen Trotter time step. State preparation is achieved using Hadamard and parametrized rotation gates, while the interaction dynamics are implemented through controlled operations. A comparative analysis between first- and second-order Trotter implementations reveals a trade-off between digital accuracy and hardware-induced noise. Overall, the results demonstrate that calibrated gate operations and noise-aware circuit design enable reliable digital simulation of multi-level light-matter interactions on noisy intermediate-scale quantum platforms.
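The first- versus second-order Suzuki-Trotter trade-off discussed above can be demonstrated on a toy noncommuting pair. Here $H = X + Z$ (Pauli matrices) stands in for the paper's Jaynes-Cummings Hamiltonian; only the error-order behavior of the product formulas is being illustrated.

```python
import numpy as np

# First- vs second-order Suzuki-Trotter on a toy noncommuting pair
# H = A + B with A = X, B = Z (not the paper's Hamiltonian).

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def U(H, t):
    """exp(-i H t) for Hermitian H via its eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def trotter1(A, B, t, n):
    """First order: (e^{-iA dt} e^{-iB dt})^n, per-step error O(dt^2)."""
    dt = t / n
    return np.linalg.matrix_power(U(A, dt) @ U(B, dt), n)

def trotter2(A, B, t, n):
    """Second order (symmetric): per-step error O(dt^3)."""
    dt = t / n
    return np.linalg.matrix_power(U(A, dt / 2) @ U(B, dt) @ U(A, dt / 2), n)

t, n = 1.0, 10
exact = U(X + Z, t)
err1 = np.linalg.norm(trotter1(X, Z, t, n) - exact)
err2 = np.linalg.norm(trotter2(X, Z, t, n) - exact)
print(err1, err2)   # the symmetric formula is markedly more accurate
```

The trade-off the paper reports on hardware follows directly: the symmetric formula halves the Trotter error order but roughly doubles the gate count per step, so noise can erase the accuracy gain on a real device.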
Non-Clifford symmetry protected topological higher-order cluster states in multi-qubit measurement-based quantum computation
This paper develops new types of cluster states for measurement-based quantum computation by using non-Clifford gates and higher-order controlled gates, creating states with enhanced entanglement properties and multiple qubits at boundaries that can serve as inputs and outputs.
Key Contributions
- Systematic construction of non-Clifford cluster states using controlled phase-shift and higher-order controlled gates
- Demonstration that C^N Z gates create (2N+1)-body entangled states with N free edge qubits for computation
- Analysis of symmetry protection and degeneracy properties in these higher-order cluster states
View Full Abstract
A cluster state is a strongly entangled state, which is a source of measurement-based quantum computation. It is generated by applying controlled-Z (CZ) gates to the state $\left\vert ++\cdots +\right\rangle$. It is protected by the $\mathbb{Z}_{2}^{\text{even}}\times \mathbb{Z}_{2}^{\text{odd}}$ symmetry. By applying general quantum gates to the state $\left\vert ++\cdots +\right\rangle$, we systematically obtain a general short-range entangled cluster state. If we use a non-Clifford gate such as the controlled phase-shift gate, we obtain a non-Clifford cluster state. Furthermore, if we use the controlled-controlled Z (CCZ) gate instead of the CZ gate, we obtain non-Clifford cluster states with five-body entanglement. We generalize it to the C$^{N}$Z gate, where $(2N+1)$-body entangled states are generated. The $\mathbb{Z}_{2}^{\text{even}}\times \mathbb{Z}_{2}^{\text{odd}}$ symmetry is non-Clifford for $N\geq 3$. We demonstrate that there emerge $2^{2N}$-fold degenerate ground states for an open chain, indicating the emergence of $N$ free spins at each edge. They can be used as an $N$-qubit input and an $N$-qubit output in measurement-based quantum computation. We also study the non-invertible symmetry, the Kennedy-Tasaki transformation and the string-order parameter in addition to the $\mathbb{Z}_{2}^{\text{even}}\times \mathbb{Z}_{2}^{\text{odd}}$ symmetry in these models.
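For readers unfamiliar with the construction, a minimal numpy sketch (ordinary Clifford CZ case only, not the paper's non-Clifford generalization) builds the three-qubit linear cluster state from $\left\vert +++\right\rangle$ and checks one of its defining stabilizers, $K_2 = Z_1 X_2 Z_3$:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def kron(*ops):
    """Tensor product of a list of vectors/matrices."""
    out = np.array([1.0])
    for op in ops:
        out = np.kron(out, op)
    return out

def cz(n, i, j):
    """CZ between qubits i, j of an n-qubit register (diagonal gate)."""
    d = 2 ** n
    diag = np.ones(d)
    for b in range(d):
        bits = format(b, f"0{n}b")  # qubit 0 = leftmost bit
        if bits[i] == "1" and bits[j] == "1":
            diag[b] = -1.0
    return np.diag(diag)

n = 3
state = kron(plus, plus, plus)                 # |+++>
state = cz(n, 0, 1) @ cz(n, 1, 2) @ state      # linear-chain cluster state

# Middle-qubit stabilizer K_2 = Z_1 X_2 Z_3 should have eigenvalue +1
K2 = kron(Z, X, Z)
val = state.conj() @ K2 @ state
```

Replacing the CZ gates here with CCZ (or C$^N$Z) gates is what produces the non-Clifford higher-order cluster states studied in the paper.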
Quantum circuit design from a retraction-based Riemannian optimization framework
This paper develops new mathematical optimization methods for designing quantum circuits that can find ground states of quantum systems. The authors create a second-order optimization algorithm called Riemannian Random Subspace Newton (RRSN) that converges faster than existing first-order methods used in variational quantum algorithms.
Key Contributions
- Development of retraction-based Riemannian optimization framework for quantum circuit design
- Introduction of RRSN method - first scalable second-order Riemannian algorithm for quantum circuit optimization
- Derivation of explicit Riemannian Hessian expressions implementable on quantum hardware via parameter-shift rules
View Full Abstract
Designing quantum circuits for ground state preparation is a fundamental task in quantum information science. However, standard Variational Quantum Algorithms (VQAs) are often constrained by limited ansatz expressivity and difficult optimization landscapes. To address these issues, we adopt a geometric perspective, formulating the problem as the minimization of an energy cost function directly over the unitary group. We establish a retraction-based Riemannian optimization framework for this setting, ensuring that all algorithmic procedures are implementable on quantum hardware. Within this framework, we unify existing randomized gradient approaches under a Riemannian Random Subspace Gradient Projection (RRSGP) method. While recent geometric approaches have predominantly focused on such first-order gradient descent techniques, efficient second-order methods remain unexplored. To bridge this gap, we derive explicit expressions for the Riemannian Hessian and show that it can be estimated directly on quantum hardware via parameter-shift rules. Building on this, we propose the Riemannian Random Subspace Newton (RRSN) method, a scalable second-order algorithm that constructs a Newton system from measurement data. Numerical simulations indicate that RRSN achieves quadratic convergence, yielding high-precision ground states in significantly fewer iterations compared to both existing first-order approaches and standard VQA baselines. Ultimately, this work provides a systematic foundation for applying a broader class of efficient Riemannian algorithms to quantum circuit design.
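The parameter-shift rule mentioned above can be illustrated on a one-parameter toy circuit (a sketch of the general rule, not the paper's RRSN method or its Hessian estimator): for $E(\theta) = \langle 0\vert R_Y(\theta)^\dagger Z R_Y(\theta)\vert 0\rangle = \cos\theta$, the shifted-difference estimate reproduces the gradient exactly.

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def ry(theta):
    """Single-qubit R_Y(theta) rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def energy(theta):
    """E(theta) = <0| R_Y(theta)^dag Z R_Y(theta) |0> = cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

def param_shift_grad(theta):
    # Parameter-shift rule: dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
    return 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))

theta = 0.3
g = param_shift_grad(theta)  # exact gradient of cos(theta) is -sin(theta)
```

Because each shifted energy is itself a circuit expectation value, both gradients and (via nested shifts) Hessian entries can be estimated from measurement data alone, which is the property the paper exploits.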
Kondo breakdown as an entanglement transition driven by continuous measurement
This paper studies how the Kondo effect (where an impurity spin becomes entangled with surrounding electrons) breaks down when subjected to continuous measurement via a magnetic field. The authors use advanced theoretical methods to map this breakdown as a phase transition between an entangled state and a disentangled state.
Key Contributions
- Novel theoretical framework connecting Kondo breakdown to measurement-driven entanglement transitions
- Non-perturbative renormalization group analysis revealing phase diagram between screened and unscreened phases
- Identification of critical regime and non-Fermi liquid behavior at the entanglement transition
View Full Abstract
We study the breakdown of Kondo screening by a local magnetic field from the perspective of a measurement-driven entanglement transition in a monitored quantum system. Here, the Kondo coupling leads to the growth in entanglement of an impurity spin with its fermionic environment, while the local field plays the role of a continuous observer. Using a non-perturbative Unitary Renormalization Group (URG) approach, we derive coupled renormalization-group flow equations for the Kondo exchange and the local field, and obtain a field-dependent RG phase diagram. The RG flows separate a low-energy Kondo-screened phase, where the impurity is absorbed into the Fermi sea and forms an entangled singlet with the conduction bath, from a polarized local-moment phase in which screening is frustrated and impurity-bath entanglement is suppressed. We identify the fixed-point Hamiltonians governing the two phases and the critical regime, and relate the transition to the emergence of a novel non-Fermi liquid. Various impurity signatures such as the spectral function and thermalisation of impurity observables are used to characterise this entanglement transition. These results offer insight into the interplay of decoherence and measurement in governing the dynamics of a prototypical quantum system.
Effect of atom-oscillator interaction on the aging transition in coupled oscillators
This paper studies how adding a two-level atom to coupled oscillator systems can modify aging transitions, where some oscillators become inactive. The researchers show that atom-oscillator interactions can reduce the threshold for these transitions in both classical and quantum regimes.
Key Contributions
- Demonstrates that atom-oscillator coherent coupling reduces the inactive-to-total oscillator ratio needed for aging transitions
- Provides analytical framework for understanding how atomic decay rate and coupling strength jointly control transition points in both classical and quantum oscillator systems
View Full Abstract
Oscillators are often employed as a model of radiation fields, which may couple to an atom and play an important role for creating and manipulating nonclassical states in quantum metrology, quantum simulation, and quantum information. Aging transitions in coupled oscillators have been studied extensively in both the classical and quantum contexts. It is well known that the onset of aging transitions can be modulated by the dissipative coupling between oscillators. In this study, we propose an alternative way to modulate the aging transition through coherent couplings between a two-level atom and the oscillators. Our findings reveal that, compared to atom-free systems in both classical and quantum regimes, the atom-oscillator coherent interaction reduces the inactive-to-total oscillator ratio required for aging transitions. Analytical results of the transition for both the classical oscillators and quantum oscillators suggest that the decay rate of the atom and the atom-oscillator coupling strength jointly change the aging transition point. The physics behind the observation is also elucidated in this article. Our research introduces a readily implementable strategy for manipulating aging transitions in more intricate systems, thereby advancing the control and understanding of these critical transitions in quantum technologies.
Assessing the Practical Feasibility of the Clader-Jacobs-Sprouse Quantum Algorithm for Calculating Radar Cross Sections
This paper evaluates the practical feasibility of a quantum algorithm developed by Clader, Jacobs, and Sprouse that can solve electromagnetic scattering problems exponentially faster than classical methods. The research specifically examines whether this quantum algorithm could be practically implemented for modeling radar cross sections of targets.
Key Contributions
- Assessment of practical feasibility for an existing quantum algorithm in electromagnetic scattering
- Analysis of quantum computational approaches for radar cross section modeling
View Full Abstract
In 2013, Clader, Jacobs, and Sprouse developed a quantum computing algorithm that solves electromagnetic scattering problems exponentially faster than the best known classical algorithm for that problem. We examine this quantum algorithm's potential practical feasibility for modeling a target's radar cross section. Doing so could be important for modeling and predicting radar behavior against emerging targets.
Aging of coupled qubits
This paper studies how networks of connected qubits transition from active oscillatory behavior to inactive states as more qubits become non-functional, finding a sharp threshold where excited state populations suddenly drop. The research extends classical 'aging transition' concepts to quantum systems and identifies key differences from coupled oscillator behavior.
Key Contributions
- Discovery of sharp threshold behavior in qubit aging transitions that differs from classical oscillator systems
- Theoretical framework for understanding collective behavior in networks of coupled qubits with dissipation
View Full Abstract
The aging transition refers to the shift from an oscillatory state to a globally ceased state due to some forms of deterioration in classical physics. Similar behavior has also been observed in quantum oscillators. Although it has received extensive attention in coupled oscillator systems, it has not yet been studied in coupled qubits. In this manuscript, we explore the aging transition in a network of coupled qubits. Our model describes numerous qubits driven by a laser, with both dissipative and coherent qubit-qubit couplings. The ratio of inactive qubits to total qubits and the population in the excited state of the qubits are employed to characterize the aging transition. We find a transition where the population in the excited states suddenly drops when the ratio exceeds a threshold. This behavior is intriguing and contrasts with coupled oscillators, where no sudden drop is observed. Additionally, we demonstrate how the couplings and driving laser influence the threshold. The underlying physics of the sudden drop is elucidated. The region where the aging transition occurs is determined based on stability analysis theory.
Hilbert Space Black Hole Analog: Unidirectional Transport without Driving
This paper demonstrates how interacting bosons in optical lattices can create unidirectional quantum transport that mimics black hole event horizons, where particles can only move in one direction without any external driving forces. The phenomenon relies purely on many-body quantum interactions that create an effective one-way boundary in the system's mathematical description space.
Key Contributions
- Discovery of unidirectional quantum transport mechanism without external driving or dissipation
- Demonstration of Hilbert space analog to black hole event horizons using interacting bosons
- Establishment of many-body interactions as fundamental route to directional transport for atomtronic circuits
View Full Abstract
Black holes permit matter to cross their event horizon in only one direction. We show that interacting bosons in optical lattices with asymmetric barrier exhibit an analogous phenomenon, creating unidirectional quantum transport without external driving or dissipation. This directionality emerges purely from many-body interactions, which cause asymmetric projection of the initial state onto transport-enabled or transport-forbidden sectors. The resulting dynamics create an effective one-way boundary in Hilbert space, forming a quantum analog of a black-hole event horizon. Our results establish interactions as a fundamentally new route to directional transport, enabling coherent rectification in atomtronic circuits by the use of intrinsic properties of the system only.
Fundamentals of Quantum Machine Learning and Robustness
This paper introduces the fundamentals of quantum machine learning (QML), explaining how quantum computing principles like superposition and entanglement can be applied to machine learning problems. It particularly focuses on adversarial robustness - how well QML models can resist malicious inputs designed to cause them to fail.
Key Contributions
- Establishes foundational concepts bridging quantum computing and machine learning communities
- Introduces adversarial robustness analysis for quantum machine learning models
- Connects quantum mechanical principles to learning algorithms and their vulnerabilities
View Full Abstract
Quantum machine learning (QML) sits at the intersection of quantum computing and classical machine learning, offering the prospect of new computational paradigms and advantages for processing complex data. This chapter introduces the fundamentals of QML for readers from both communities, establishing a shared conceptual foundation. We connect the worst-case, adversarial perspective from theoretical computer science with the physical principles of quantum systems, highlighting how superposition, entanglement, and measurement collapse influence learning and robustness. Special attention is given to adversarial robustness, understood as the ability of QML models to resist inputs designed to cause failure. We motivate the study of QML in adversarial settings, outlining distinctions between classical and quantum data and computations when the adversary is a core element. This chapter serves as a starting point to adversarial and robust quantum machine learning in subsequent chapters.
Is a covariant virtual tachyon viable?
This paper investigates whether virtual tachyons (faster-than-light particles) can be consistently described in quantum field theory using the fakeon framework. The authors identify fundamental problems that prevent formulating a covariant quantum field theory of virtual tachyons, including issues with Lorentz invariance and propagator support.
Key Contributions
- Identification of two fatal obstructions to virtual tachyon field theory: non-invariant commutation relations under Lorentz boosts and disjoint propagator support
- Quantitative limits on coupling strength for virtual tachyon interactions with Standard Model fields
View Full Abstract
Sidney Coleman has noted that superluminal particles or observers would be able to go back in time and have no definite trajectory according to subluminal observers, while not violating Lorentz invariance [1]. Recently, Dragan and Ekert have developed similar ideas significantly further, which led to the formulation of a "quantum principle of relativity" that intimately links the two theories [2]. However, field theory descriptions of an on-shell tachyon, described by a scalar field $φ$ with a negative mass-squared parameter, lead to violation of basic principles of relativity or quantum mechanics. In this work, we investigate whether purely virtual tachyons can be consistent within the fakeon framework, the only known viable formulation of purely virtual particles. We identify two fatal obstructions. First, Lorentz boosts mix creation and annihilation operators, rendering the canonical commutation relations non-invariant despite formal invariance of the vacuum. Second, the real part of the tachyon Feynman propagator and the Wheeler propagator have disjoint support, preventing application of both the fakeon prescription and the Wheeler-Feynman absorber mechanism. Interactions with stable Standard Model fields further violate Lorentz invariance and the equivalence principle, and we provide a quantitative limit on the coupling strength in such a scenario. Our analysis excludes the possibility of formulating a covariant quantum field theory of interacting virtual tachyons.
Toward a CMOS-integrated quantum diamond biosensor based on NV centers
This paper describes the development of a compact quantum biosensor that combines diamond nitrogen-vacancy (NV) centers with CMOS electronics to detect magnetic fields from biological samples. The system aims to replace bulky optical setups with integrated chip-based detection for practical biological sensing applications.
Key Contributions
- Integration of NV center quantum sensors with 40 nm CMOS SPAD array technology
- Demonstration of 90 nT/√Hz magnetic field sensitivity for biological sensing applications
- System-level design for compact quantum diamond biosensors replacing optics-heavy microscopes
View Full Abstract
We report progress toward a CMOS-integrated quantum diamond biosensing platform that combines nitrogen-vacancy (NV) centers in diamond with a custom 40 nm CMOS Single-Photon Avalanche Diode (SPAD) array. The system integrates on-chip active quenching and digital readout with external FPGA-based photon counting, compact microwave delivery, and practical optical excitation and collection schemes to support widefield optically detected magnetic resonance (ODMR). System-level design considerations spanning fluorescence collection efficiency, detector count-rate capability, and microwave homogeneity are analyzed with biological compatibility and scalability in mind. Using superparamagnetic iron oxide nanoparticle (SPION)-labeled HEK293T cells as a representative use case, simple dipole-field estimates indicate that sub-$μ$T sensitivity is required to resolve ODMR shifts within typical ensemble linewidths. Based on the proposed architecture and efficiency analysis, a magnetic field sensitivity of approximately 90 nT/$\sqrt{\mathrm{Hz}}$ per pixel is estimated. These results outline a practical path from optics-heavy quantum diamond microscopes toward compact, CMOS-integrated NV-based biosensors for quantitative magnetic imaging in complex biological environments.
Revealing Pseudo-Fermionization and Chiral Binding of One-Dimensional Anyons using Adiabatic State Preparation
This paper demonstrates experimental control of one-dimensional anyons using ultracold atoms in optical lattices, revealing exotic quantum behaviors like pseudo-fermionization and chiral bound state formation. The researchers use advanced experimental techniques including quasiperiodic drives and adiabatic state preparation to prepare and study two-body ground states of these unusual quantum particles.
Key Contributions
- First experimental demonstration of controlled 1D anyon behavior using ultracold atoms
- Discovery of pseudo-fermionization and chiral bound states in 1D anyons
- Development of techniques linking lattice and continuum anyon models
View Full Abstract
Fractional statistics give rise to quantum behaviors that differ fundamentally from those of bosons and fermions. While two-dimensional anyons play a major role in strongly correlated systems and topological quantum computing, the nature of their one-dimensional (1D) counterparts remains the subject of intense debate, with renewed interest fueled by recent experimental progress. Theoretically, 1D anyons are predicted to host exotic many-body phases and quantum phase transitions, yet experimental signatures have remained elusive. Using ultracold atoms in an optical lattice, we prepare two-body ground states of the 1D anyon-Hubbard model by combining Hamiltonian engineering via quasiperiodic drives and adiabatic state manipulation. We uncover the effects of statistical interactions that lead to pseudo-fermionization and to the formation of chiral bound states when particles remain close together. Our results establish a link between lattice and continuum realizations of anyon models, and mark important steps towards the precise control of 1D anyons in both equilibrium and out-of-equilibrium settings.
Zero-point energy of a trapped ultracold Fermi gas at unitarity: squeezing the Heisenberg uncertainty principle and suppressing the Pauli principle to produce a superfluid state
This paper investigates the lowest energy state of trapped ultracold Fermi gases at unitarity, analyzing how quantum uncertainty and Pauli exclusion principles combine to create superfluid behavior. The authors use a microscopic normal-mode approach rather than traditional Cooper pair theory to explain the quantum mechanical origins of superfluidity.
Key Contributions
- Development of normal-mode microscopic dynamics approach to describe superfluidity
- Analysis of how Heisenberg uncertainty and Pauli exclusion principles determine superfluid ground state properties
View Full Abstract
The zero-point energy of a trapped ultracold Fermi gas at unitarity is investigated in relation to the combined effects of the Heisenberg uncertainty principle and the Pauli principle. This lowest allowed quantum state is a superfluid state which has been studied extensively both experimentally and theoretically. The method used for the current investigation is based on a recent series of papers that proposed microscopic dynamics based on normal modes to describe superfluidity instead of real-space Cooper pairs. This approach yielded excellent agreement with experimental data for multiple properties and allowed the microscopic behavior underlying these results as well as the basis of universal behavior to be analyzed in detail using the group theoretic basis of this general N-body approach. This microscopic picture is now used to elucidate the roles played by the uncertainty principle and the Pauli principle in determining the energy and character of the lowest allowed quantum state including the squeezed character of this superfluid state and the suppression of the Pauli principle.
Quantum simulation in the Heisenberg picture via vectorization
This paper develops a method for simulating quantum systems using the Heisenberg picture (where operators evolve in time) on quantum computers by mapping operators to quantum states through vectorization. The approach enables new protocols for measuring quantum properties like correlators and entanglement entropies, with practical implementations demonstrated for 2D systems on both digital and analog quantum simulators.
Key Contributions
- General framework for Heisenberg picture quantum simulation using vectorization mapping
- New protocols for operator sampling, OTOCs computation, and operator entanglement entropy measurement
- Practical implementation schemes for 2D problems on digital and analog quantum simulators
View Full Abstract
We present a general framework for simulating quantum systems in the Heisenberg picture on quantum hardware. Based on the vectorization map, our framework fully exploits the mapping between operators and quantum states, allowing any task defined on Heisenberg operators to be mapped to standard Schrödinger-picture tasks that are naturally accessible via quantum computers and simulators. This yields new or improved protocols for tasks such as operator sampling, the computation of OTOCs/superoperator expectation values and their higher order moments, two-point correlators, and operator stabilizer and entanglement entropies. Our approach is also amenable to implementation, as it inherits the structure and resource requirements of the (forward and time-reversed) Schrödinger-picture quantum simulation problem. We demonstrate this by proposing implementations of our framework for a 2D problem on digital and analog quantum simulators, taking into account device connectivity constraints.
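The vectorization map the framework rests on is the standard linear-algebra identity $\operatorname{vec}(AXB) = (B^{T} \otimes A)\operatorname{vec}(X)$ (column-stacking convention). A minimal numpy check of this identity, and of its consequence that Heisenberg evolution $O(t) = U^\dagger O U$ becomes a Schrödinger-like evolution of the "state" $\operatorname{vec}(O)$, is sketched below; this is an illustration of the mapping itself, not of the paper's measurement protocols.

```python
import numpy as np

rng = np.random.default_rng(0)

def vec(M):
    """Column-stacking vectorization (Fortran order)."""
    return M.flatten(order="F")

d = 4
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Core identity: vec(A X B) = (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
err = np.linalg.norm(lhs - rhs)
```

With $A = U^\dagger$ and $B = U$ this gives $\operatorname{vec}(U^\dagger O U) = (U^{T} \otimes U^\dagger)\operatorname{vec}(O)$, i.e. a forward-and-time-reversed pair of Schrödinger evolutions acting on the operator-as-state, which is why the framework inherits the resource requirements of ordinary quantum simulation.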
Quantum Information Approach to Bosonization of Supersymmetric Yang-Mills Fields
This paper develops a method to convert supersymmetric quantum mechanical systems into bosonic form using quantum information techniques. The authors construct representations using qubit operators that could enable solving supersymmetric problems on quantum computers.
Key Contributions
- Development of bosonization technique for supersymmetric systems using quantum information methods
- Construction of qubit-based representations suitable for hybrid quantum computer implementations
- First demonstration of representation induction across both fermionic and bosonic sectors
View Full Abstract
We consider bosonization of supersymmetry in the context of Wess-Zumino quantum mechanics. Our motivation for this investigation is the flexibility the bosonic fock space affords as any classical probability distribution can be realized on it making it a versatile framework to work with for quantum processes. We proceed by constructing a minimal bosonization of a system with one bosonic and two fermionic degrees of freedom. We iterate this process to construct a tower of SUSY systems that is akin to unfolded Adinkras. We then identify an osp(2|2) symmetry of the system constructed. To build an irreducible representation of the system we induce representations across the sectors, a first to our knowledge, as the previous work have focused on induction only within the bosonic sector. First, we start with a fermionic representation using Clifford algebras and then induce a representation to gl(2|2) and restrict it to osp(2|2). In the second method, we induce a representation from that of the bosonic sector. In both cases, our representations are in terms of qubit operators that provide a way to solve SUSY problems using quantum information based approaches. Depending upon the direction of induction the representations are suitable for implementation on a hybrid qubit and fermionic or bosonic quantum computers.
Energy gap of quantum spin glasses: a projection quantum Monte Carlo study
This paper studies how the energy gap in quantum spin glass models scales with system size, which determines the performance limits of quantum annealing for solving optimization problems. The researchers find that 2D models have unfavorable scaling that gets worse with size, while fully-connected models show more promising scaling behavior for quantum optimization.
Key Contributions
- Development of unbiased energy-gap estimator for quantum Monte Carlo simulations
- Demonstration that 2D Edwards-Anderson model has unfavorable super-algebraic gap scaling while Sherrington-Kirkpatrick model shows more promising N^(-1/3) scaling
View Full Abstract
The performance of quantum annealing for combinatorial optimization is fundamentally limited by the minimum energy gap $Δ$ encountered at quantum phase transitions. We investigate the scaling of $Δ$ with system size $N$ for two paradigmatic quantum spin-glass models: the two-dimensional Edwards-Anderson (2D-EA) and the all-to-all Sherrington-Kirkpatrick (SK) models. Utilizing a newly proposed unbiased energy-gap estimator for continuous-time projection quantum Monte Carlo simulations, complemented by high-performance sparse eigenvalue solvers, we characterize the gap distributions across disorder realizations. It is found that, in the 2D-EA case, the inverse-gap distribution develops a fat tail with infinite variance as $N$ increases. This indicates that the unfavorable super-algebraic scaling of $Δ$, recently reported for binary couplings [Nature 631, 749 (2024)], persists for the Gaussian disorder considered here, pointing to a universal feature of 2D spin glasses. Conversely, the SK model retains a finite-variance distribution, with the disorder-averaged gap following a rather slow power law, close to $Δ\propto N^{-1/3}$. This finding provides a promising outlook for the potential efficiency of quantum annealers for optimization problems with dense connectivity.
The quantum superluminality in the tunnel-ionization process of H-like atoms
This paper investigates quantum tunneling in hydrogen-like atoms with large nuclear charges, demonstrating that under extreme conditions the tunneling process can occur faster than the speed of light. The researchers use theoretical models validated against experimental attoclock measurements to show this superluminal quantum tunneling effect.
Key Contributions
- Demonstration of superluminal quantum tunneling in H-like atoms with large nuclear charge
- Validation of tunnel-ionization time-delay model against attoclock measurements
View Full Abstract
The quantum tunneling time remains the subject of heated debate, and one of its most curious features is faster-than-light or superluminal tunneling. Our tunnel-ionization model of the time-delay, presented in previous work, shows good agreement with the attoclock measurement in the adiabatic and nonadiabatic field calibrations, which also enables the determination of the barrier time-delay. In the present work, we show that the tunnel-ionization for H-like atoms with large nuclear charge can be superluminal (quantum superluminality), which in principle can be investigated experimentally using the attoclock scheme. We discuss the quantum superluminality in detail for the different regimes of the tunnel-ionization. Our result shows that quantum tunneling faster-than-light is indeed possible, albeit only under somewhat extreme conditions.
Nonlinear quantum optomechanics in a Fano-mirror microcavity system
This paper studies a quantum optomechanical system using a Fano-mirror design that can simultaneously achieve strong single-photon coupling and precise frequency control. The system can generate exotic quantum states like photon blockade and mechanical cat states under realistic experimental conditions.
Key Contributions
- Development of Fano-mirror optomechanical architecture enabling simultaneous single-photon strong-coupling and sideband-resolved regimes
- Demonstration of quantum state engineering capabilities including photon blockade and mechanical cat state generation under experimentally realistic parameters
View Full Abstract
We study a Fano-mirror optomechanical system in the quantum nonlinear regime. In this system, two strongly lossy optical modes hybridize through both coherent and dissipative couplings to form an effective optical mode with a drastically reduced linewidth. This linewidth reduction enables the system to access the single-photon strong-coupling and sideband-resolved regimes simultaneously. We formulate the system dynamics using an effective master-equation approach and benchmark it against quantum Langevin and dressed-state master-equation descriptions. With experimentally realistic parameters, we predict clear quantum signatures, including photon blockade and the generation of mechanical cat states. Our work establishes the Fano-mirror architecture as a promising platform for harnessing single-photon optomechanical nonlinearities for quantum state engineering under achievable experimental conditions.
Entanglement formation in two-dimensional materials within microcavity
This paper studies how quantum entanglement forms between two layered materials placed inside a microcavity, showing that electromagnetic coupling and spin-orbit interactions cause the system to rapidly transition from separate states to entangled quantum correlations. The researchers demonstrate that cavity geometry and material properties strongly influence the degree of entanglement achieved.
Key Contributions
- Theoretical framework for entanglement generation in cavity-embedded 2D materials
- Demonstration of rapid entanglement formation through electromagnetic coupling and spin-orbit interactions
- Analysis of geometric and material parameters affecting entanglement degree
View Full Abstract
In this work, the entanglement generation between two hexagonal-lattice layers embedded in a microcavity is studied, accounting for both electromagnetic coupling and intrinsic spin-orbit interaction (SOI). Utilizing a short-time dynamical approach, we perform a perturbative Taylor expansion of the reduced density matrix to characterize the bipartite quantum correlations between the hexagonal layers. We demonstrate that the system undergoes a rapid transition from a localized product state in the conduction bands at t = 0 to a coherent superposition of valence and conduction band states. Our results indicate that the degree of entanglement is highly sensitive to the interlayer photon propagator, which contains the geometric ratios of the layer positions and the cavity height, and to the specific Fermi energy and SOI signatures of the respective layers. We show the emergence of spacelike-separated quantum correlations in the ultra-short evolution regime, suggesting that heterostructures in cavities may be suitable platforms for experiments aimed at a deeper understanding of spacelike-separated quantum effects.
Spectroscopy of the Dirac oscillator perturbed by a surface delta potential
This paper studies how the energy levels of a Dirac oscillator (a relativistic quantum system) change when disturbed by a sharp surface delta potential. The researchers use mathematical Green function methods to calculate exact expressions for how all quantum states are affected by this perturbation.
Key Contributions
- Development of Green function method for calculating level shifts in perturbed Dirac oscillators
- Closed-form expressions for spectral shifts across all partial waves and parities
View Full Abstract
We study theoretically the level shift of the Dirac oscillator perturbed by any sharply peaked potential approaching a surface delta potential. A Green function method is used to obtain closed expressions for all partial waves and parities.
Quantum correlation and coherence in a mononuclear nickel-based molecular magnet
This paper studies quantum properties like entanglement and coherence in a nickel-based molecular magnet system at different temperatures and magnetic fields. The researchers find that while entanglement disappears quickly as temperature increases, other quantum correlations persist even at room temperature, suggesting these materials could be useful for quantum technologies.
Key Contributions
- Demonstration that quantum correlations beyond entanglement persist at room temperature in molecular magnets
- Characterization of thermal robustness of different quantum resources in nickel-based spin systems
View Full Abstract
We investigate the behaviors of thermal entanglement, quantum correlation beyond entanglement, namely measurement-induced nonlocality (MIN), and coherence in a nickel radical molecular magnet (Et$_3$NH)[Ni(hfac)$_2$L], whose spin-spin interactions are well described by the Heisenberg model. Using experimentally estimated coupling parameters, we compute the thermal state of the system and analyze the dependence of quantum resources on temperature and magnetic field. The results indicate that the quantum resources of the nickel-radical molecular magnet persist even at room temperature. We show that while negativity (the entanglement measure) rapidly vanishes with increasing temperature and magnetic field, measurement-induced nonlocality and quantum coherence remain comparatively more stable and persist in regions where entanglement is absent. These results highlight the significance of nonclassical correlations beyond entanglement in thermally activated spin systems and suggest that such molecular magnets could serve as viable platforms for quantum information processing in realistic conditions.
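To make the negativity claim concrete, the sketch below computes the negativity of the thermal state of a two-spin-1/2 Heisenberg dimer, the simplest model of the paper's setting. The couplings and the analysis are illustrative only, not the experimentally estimated parameters of the actual molecular magnet; for this toy model the negativity vanishes above roughly $T \approx J/\ln 3$, mirroring the abstract's observation that entanglement dies out with temperature.

```python
import numpy as np

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def heisenberg_thermal_negativity(J, B, T):
    """Negativity of the thermal state of a two-spin Heisenberg dimer."""
    H = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)) \
        + B * (np.kron(sz, I2) + np.kron(I2, sz))
    w, v = np.linalg.eigh(H)
    rho = v @ np.diag(np.exp(-w / T)) @ v.conj().T   # unnormalized Gibbs state
    rho /= np.trace(rho)
    # Partial transpose on the second spin: swap its row/column indices
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(sum(abs(e) for e in np.linalg.eigvalsh(pt) if e < 0))
```

At low temperature the antiferromagnetic dimer is essentially in its singlet ground state, giving the maximal two-qubit negativity of 0.5; at high temperature the negativity is exactly zero even though other correlation measures (such as MIN) need not vanish.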
A Quantum Internet Protocol Suite Beyond Layering
This paper proposes a new protocol architecture for quantum internet that replaces traditional layered networking approaches with dynamic composition. The system uses distributed orchestration and in-band control to handle quantum entanglement's unique properties of being non-local and stateful.
Key Contributions
- Dynamic composition architecture replacing static layering for quantum internet protocols
- In-band control mechanism using meta-headers and stamps for distributed quantum network coordination
View Full Abstract
Layering, the protocol organization principle underpinning the classical Internet, is ill-suited to the Quantum Internet, built around entanglement, which is non-local and stateful. This paper proposes a quantum-native organizational principle based on dynamic composition, which replaces static layering with a distributed orchestration fabric driven by the node's local state and in-band control. Each node runs a Dynamic Kernel that i) constructs a local PoA of candidate steps to advance a service intent, and ii) executes the PoA by composing atomic micro-protocols into context-aware procedures (the meta-protocols). Quantum packets carry an in-band control-field (the meta-header) containing the service intent and an append-only list of action-commit records, termed stamps. Successive nodes exploit this minimal, authoritative history to construct their local PoAs. As quantum packets progress, these local commits collectively induce a network-wide, directed acyclic graph that certifies end-to-end service fulfillment, without requiring global synchronization. In contrast to classical encapsulation, the proposed suite enforces order by certification: dependency-aware local scheduling decides what may run at a certain node, stamps certify what did run and constrain subsequent planning. By embedding procedural control within the quantum packet, the design ensures coherence and consistency between entanglement-state evolution and control-flow, preventing divergence between resource state and protocol logic, while remaining MP-agnostic and implementation-decoupled. The resulting suite is modular, adaptable to entanglement dynamics, and scalable. It operates correctly with or without optional control-plane hints. Indeed, when present, hints can steer QoS policies, without changing semantics. We argue that dynamic composition is the organizing principle required for a truly quantum-native Internet.
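The meta-header/stamp idea can be caricatured as a small data structure: a packet carries its service intent plus an append-only list of action-commit records. Everything below (class names, field names, actions) is a hypothetical illustration of the concept, not the paper's actual specification:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Stamp:
    """Append-only action-commit record left by a node (names hypothetical)."""
    node: str
    action: str

@dataclass
class MetaHeader:
    """In-band control field carried with a quantum packet (hypothetical layout)."""
    intent: str                                 # the end-to-end service intent
    stamps: list = field(default_factory=list)  # append-only history

    def commit(self, node: str, action: str) -> None:
        # Nodes may only append; earlier history is never rewritten,
        # so the stamp list certifies what actually ran along the path.
        self.stamps.append(Stamp(node, action))

hdr = MetaHeader(intent="end-to-end-entanglement")
hdr.commit("A", "generate-pair")
hdr.commit("B", "entanglement-swap")
print([s.action for s in hdr.stamps])  # ['generate-pair', 'entanglement-swap']
```

Each downstream node would read this minimal history to build its own local plan, which is the "order by certification" idea the abstract contrasts with classical encapsulation.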
GAP Measures and Wave Function Collapse
This paper studies GAP measures (also called Scrooge measures), which are probability distributions on quantum wave functions derived from density matrices. The authors prove that these measures are preserved under wave function collapse, meaning if you start with a GAP-distributed wave function and it collapses due to measurement or spontaneous collapse, the resulting wave function is also GAP-distributed.
Key Contributions
- Proof that GAP measures are preserved under wave function collapse for both measurement-induced and spontaneous collapse
- Mathematical characterization of conditional distributions after collapse in terms of GAP measures
View Full Abstract
GAP measures (also known as Scrooge measures) are a natural class of probability distributions on the unit sphere of a Hilbert space that come up in quantum statistical mechanics; for each density matrix $ρ$ there is a unique measure GAP$_ρ$. We describe and prove a property of these measures that was not recognized so far: If a wave function $Ψ$ is GAP$_ρ$ distributed and a collapse occurs, then the collapsed wave function $Ψ'$ is again GAP distributed (relative to the appropriate $ρ'$). This fact applies to collapses due to a quantum measurement carried out by an observer, as well as to spontaneous collapse theories such as CSL or GRW. More precisely, it is the conditional distribution of $Ψ'$, given the measurement outcome (respectively, the noise in CSL or the collapse history in GRW), that is GAP$_{ρ'}$.
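As orientation, the GAP construction can be stated compactly; the following is the standard "Gaussian adjusted projected" definition from the quantum statistical mechanics literature, with notation chosen here rather than taken from the paper:

```latex
\[
  GA_\rho(d\psi) = \|\psi\|^2\, G_\rho(d\psi), \qquad
  \mathrm{GAP}_\rho = \text{distribution of } \Lambda/\|\Lambda\|
  \ \text{with}\ \Lambda \sim GA_\rho,
\]
```

where $G_\rho$ is the mean-zero Gaussian measure on Hilbert space with covariance $\rho$. The paper's result is that this family of measures is closed under collapse: conditioning on the measurement outcome (or collapse history) maps $\mathrm{GAP}_\rho$ to $\mathrm{GAP}_{\rho'}$.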
Two-component relativistic quantum wave equation for scalar bosons
This paper derives a two-component wave equation for scalar bosons that is analogous to the Dirac equation but first-order in time, unlike the standard Klein-Gordon equation. The formulation provides a different mathematical representation for relativistic scalar particles that reduces to the Schrödinger equation in the non-relativistic limit.
Key Contributions
- Derivation of a first-order-in-time wave equation for scalar bosons analogous to the Dirac equation
- Mathematical framework showing two-component representation for scalar bosons similar to four-component fermion representation
View Full Abstract
We show that, in the relativistic regime, scalar bosons satisfy a quantum wave equation which is quite analogous to the Dirac equation. In contrast with the Klein-Gordon equation, it is first order in the time derivative. It reduces in a regular way to the standard Schrödinger equation in the non-relativistic limit. There are two components for the wave function in this representation of the scalar boson, in a way completely analogous to the four components for the spin $1/2$ fermion in the Dirac equation.
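For orientation, the classic example of a two-component, first-order-in-time formulation for spin-0 particles is the Feshbach-Villars representation of the Klein-Gordon equation (shown here for context; the paper's construction may differ in detail):

```latex
\[
  i\hbar\,\partial_t \Psi = H_{\mathrm{FV}}\,\Psi, \qquad
  H_{\mathrm{FV}} = (\tau_3 + i\tau_2)\,\frac{\hat{p}^{\,2}}{2m} + \tau_3\, m c^2, \qquad
  \Psi = \begin{pmatrix} \varphi \\ \chi \end{pmatrix},
\]
```

where $\tau_2$, $\tau_3$ are Pauli matrices acting in the two-component space; in the non-relativistic limit $\varphi$ obeys the Schrödinger equation while $\chi$ becomes small.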
Heat flow through the quantum heat valve coupled to ohmic baths via a master equation approach
This paper develops a theoretical model using master equations to study heat flow through quantum heat valves connected to thermal baths. The work addresses previous modeling issues where resonators were double-counted in both the bath spectral density and the open system, providing better agreement with experimental results.
Key Contributions
- Developed master equation approach that avoids resonator double-counting problem in quantum heat valve modeling
- Demonstrated improved theoretical agreement with experimental quantum heat valve results using ohmic spectral density
View Full Abstract
We provide a theoretical model for the non-equilibrium steady state heat flow through a quantum heat valve. The model is based on a master equation approach, where the partial secular approximation has been carefully performed in order to obtain accurate results. Our study assumes an ohmic spectral density for the two thermal baths of the model. This is in contrast with previous treatments of the quantum heat valve, where the baths have been assumed as being structured with a peaked spectral density near the resonance frequency of the resonator. These studies have also taken the resonator to be a part of the open quantum system of interest, which results in double counting of the resonator, as the latter appears both in the spectral density of the bath and as a part of the open system. Although this model accounts for the observations in a satisfactory way, it raises issues regarding its physical interpretation. Our method solves this conceptual problem. We apply it to describe an experiment on a quantum heat valve, showing that it successfully captures the experimental results and improves upon the previous theoretical model, which suffered from the resonator double-counting issue. Our findings confirm that the careful application of the master equation approach, in particular when it comes to the secular approximation, is a useful tool for explaining realistic experimental setups.
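The ohmic assumption means a bath spectral density linear in frequency at low frequencies, in contrast with a density peaked at the resonator frequency. A generic form is (the exponential cutoff $\omega_c$ is a common convention, not a detail specified by the paper):

```latex
\[
  J(\omega) = \eta\,\omega\, e^{-\omega/\omega_c},
\]
```

with $\eta$ the dimensionless coupling strength. With this choice the resonator is kept out of the bath description, avoiding the double counting the authors criticize in earlier structured-bath treatments.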
Rapid state-resolved single-atom imaging of alkaline-earth fermions
This paper demonstrates a new technique for rapidly detecting and distinguishing between multiple quantum states of single fermionic strontium atoms using their nuclear spin properties. The method can simultaneously measure up to four different quantum states within 100 microseconds with high fidelity, enabling better control and readout of these atoms for quantum applications.
Key Contributions
- Development of rapid state-resolved single-atom imaging technique for alkaline-earth fermions
- Demonstration of simultaneous detection of four quantum states with high fidelity (0.936-0.997)
- Enabling qudit-based quantum computing beyond traditional qubits using nuclear spin manifolds
View Full Abstract
Local Hilbert spaces with large dimension are of key interest for quantum information with applications in quantum computing and memories, quantum simulations and metrology. Thanks to its weak coupling to external perturbations, the large ground-state nuclear spin manifold of fermionic alkaline-earth atoms is an exciting resource to explore for quantum information. Simultaneous single-atom and state-resolved detection, however, remains an outstanding challenge, limiting the development of novel quantum computing and simulation schemes beyond qubits. Here, we report on a new imaging technique enabling the simultaneous detection of up to four quantum states encoded in the nuclear spin manifold of a single fermionic strontium atom within 100 microseconds, with state-resolved detection fidelities ranging from 0.936 to 0.997. This technique is further used to track the highly coherent nuclear spin dynamics after a quench, highlighting the potential of this system for quantum information. These results offer fascinating perspectives for quantum science with multi-electron atoms ranging from qudit-based quantum computing to quantum simulations of the SU(N) Fermi-Hubbard model.
Separation of the Kibble-Zurek Mechanism from Quantum Criticality
This paper challenges the traditional Kibble-Zurek mechanism by showing that the predicted universal scaling of topological defects when sweeping through quantum critical points does not always hold. The researchers demonstrate that defect density can deviate from standard predictions and identify specific conditions under which universal scaling actually emerges.
Key Contributions
- Demonstrates that Kibble-Zurek scaling can fail even when crossing quantum critical points
- Identifies dynamical conditions under which universal defect scaling emerges in quasi-one-dimensional Fermi systems
View Full Abstract
When a system is swept through a quantum critical point (QCP), the Kibble-Zurek mechanism predicts that the average number of topological defects follows a universal power-law scaling with the ramp time scale. This scaling behavior is determined by the equilibrium critical exponents of the underlying phase transition. We show that the correspondence between Kibble-Zurek scaling and quantum criticality does not hold generally. In particular, the defect density can exhibit a suppression faster than the Kibble-Zurek prediction even when the quench crosses a critical point, while conventional Kibble-Zurek scaling may persist for quenches through a non-critical point. Our results, based on models representative of a broad class of quasi-one-dimensional Fermi systems, identify the dynamical conditions under which universal defect scaling emerges and clarify the relation between defect generation and equilibrium criticality.
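For context, the standard Kibble-Zurek prediction against which the paper's deviations are measured: for a linear ramp of duration $\tau_Q$ through a QCP with correlation-length exponent $\nu$ and dynamical exponent $z$ in $d$ spatial dimensions, the defect density scales as

```latex
\[
  n_{\mathrm{def}} \;\sim\; \tau_Q^{-\,d\nu/(1+z\nu)}.
\]
```

The paper's point is that the observed exponent need not match this equilibrium-exponent prediction: suppression can be faster than this law even when the ramp crosses a critical point, and the law can survive for ramps through non-critical points.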
High-resolution spectroscopy of $^{162}$Dy Rydberg levels
This paper presents the first high-resolution spectroscopy measurements of dysprosium Rydberg states, characterizing over 700 highly excited electronic states and precisely determining the atom's ionization energy. The work establishes the foundational spectroscopic data needed to use dysprosium atoms in quantum technologies that exploit Rydberg state properties.
Key Contributions
- First high-resolution spectroscopy of dysprosium Rydberg states with measurement of over 700 states
- Order-of-magnitude improvement in precision of dysprosium ionization potential measurement
- Multichannel quantum defect theory analysis enabling future quantum applications with dysprosium
View Full Abstract
Highly excited Rydberg states of lanthanides are a promising, yet largely unexplored, playground for quantum studies. Here, we report on the first high-resolution spectroscopy of $^{162}$Dy obtained by two-color trap depletion spectroscopy in a magneto-optical trap. The absolute excitation frequency of over 700 states with effective principal quantum number $n$ between 21 and 130 is measured with an accuracy of 20 MHz. Most states are assigned to the 8 different series converging to the first $4f^{10}(^5I_8)6s(^2S_{1/2})$, $J = 17/2$ ionization potential. This energy is measured at $E_{\mathrm{IP}} = 47901.8265 \pm 0.0008$ cm$^{-1}$, improving the precision of the literature value by over an order of magnitude. A multichannel quantum defect theory approach is used to benchmark and refine the assignments and to characterize six observed perturbing states belonging to higher ionization limits. These results pave the way for using dysprosium in Rydberg-based quantum architectures, leveraging the unique properties arising from its complex electronic structure. They also represent a compelling benchmark for ab-initio calculations of open-shell atomic systems.
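The series assignments rest on the Rydberg-Ritz relation; as a reminder (notation ours), each series follows

```latex
\[
  E_n = E_{\mathrm{IP}} - \frac{R_{\mathrm{Dy}}}{(n-\delta)^2},
\]
```

with $R_{\mathrm{Dy}}$ the mass-corrected Rydberg constant and $\delta$ the quantum defect of the series. Multichannel quantum defect theory generalizes this by letting interacting channels mix, which is how the six perturbing states belonging to higher ionization limits are characterized.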
Floquet product mode and eigenphase order
This paper studies the robustness of special quantum states called Floquet product modes in a quantum Ising model, showing that composite edge modes made from Majorana particles are more stable against perturbations than individual Majorana modes. The researchers analyze this robustness by examining the spectral properties of quantum states in finite chains.
Key Contributions
- Demonstration that Floquet product modes are more robust than individual Majorana edge modes against integrability-breaking perturbations
- Analysis of eigenphase order and spectral statistics of Floquet eigenstate quadruplets to explain mode robustness
View Full Abstract
We study the robustness of the Floquet quantum Ising model against integrability-breaking perturbations, focusing on the phase hosting both Majorana zero and $π$ modes. A recent work [Phys. Rev. B 110, 075117 (2024)] observed that the Floquet product mode, a composite edge mode constructed from both Majorana operators, is considerably more robust than the individual Majorana edge modes. We analyze these strong modes from the point of view of the eigenphase order present in finite chains with open boundary conditions. As a result of the Majorana modes, all Floquet eigenstates come in quadruplets in the integrable limit. We show that the robustness of the various modes as well as the behavior of the boundary spin correlation functions can be understood in terms of the spectral statistics of these quadruplets in the presence of integrability-breaking perturbations.
Unlocking photodetection for quantum sensing with Bayesian likelihood-free methods and deep learning
This paper develops fast data analysis methods using Bayesian likelihood-free techniques and deep learning to interpret complex photon detection patterns in quantum sensors. The methods enable real-time parameter estimation from non-classical light statistics, which is essential for operating quantum sensors at their fundamental quantum limits.
Key Contributions
- Comparison of Bayesian likelihood-free methods with deep learning for quantum parameter estimation
- Real-time inference capability for complex multiclick correlations in non-classical photodetection
- Application to driven nonlinear optomechanical devices with non-classical light emission
View Full Abstract
To operate quantum sensors at their quantum limit in real time, it is crucial to identify efficient data inference tools for rapid parameter estimation. In photodetection, the key challenge is the fast interpretation of click-patterns that exhibit non-classical statistics -- the very features responsible for the quantum enhancement of precision. We achieve this goal by comparing Bayesian likelihood-free methods with ones based on deep learning (DL). While the former are more conceptually intuitive, the latter, once trained, provide significantly faster estimates with comparable precision and yield similar predictions of the associated errors, challenging a common misconception that DL lacks such capabilities. We first verify both approaches for an analytically tractable, yet multiparameter, scenario of a two-level system emitting uncorrelated photons. Our main result, however, is the application to a driven nonlinear optomechanical device emitting non-classical light with complex multiclick correlations; in this case, our methods are essential for fast inference and, hence, unlock the possibility of distinguishing different photon statistics in real time. Our results pave the way for dynamical control of quantum sensors that leverage non-classical effects in photodetection.
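As an illustration of the likelihood-free idea in its simplest form (approximate Bayesian computation by rejection), the toy sketch below estimates a mean click rate from simulated Poisson click counts. The detector model, prior, and tolerance are invented for illustration and are far simpler than the paper's correlated multiclick statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clicks(mu, n_frames, rng):
    """Toy detector model: Poisson-distributed click counts per frame."""
    return rng.poisson(mu, size=n_frames)

# "Observed" click record generated from a hidden mean photon number
mu_true = 1.7
observed = simulate_clicks(mu_true, 2000, rng)
obs_stat = observed.mean()   # summary statistic: mean clicks per frame

# ABC rejection: draw from the prior, simulate, and keep parameters
# whose simulated summary statistic lands close to the observed one
accepted = []
for _ in range(20000):
    mu = rng.uniform(0.0, 5.0)                 # flat prior on the mean
    sim = simulate_clicks(mu, 2000, rng)
    if abs(sim.mean() - obs_stat) < 0.05:      # rejection tolerance
        accepted.append(mu)

posterior_mean = float(np.mean(accepted))
```

The accepted samples approximate the posterior without ever evaluating a likelihood, which is exactly the property that matters when click statistics are too complex for a closed-form model; the deep-learning route the paper compares against amortizes this cost by training a network to map click patterns directly to parameter estimates.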
Multiphoton Hong-Ou-Mandel Interference Enables Superresolution of Bright Thermal Sources
This paper presents a quantum optical imaging technique that uses multiphoton interference with single photons at a beamsplitter to achieve superresolution imaging of thermal sources beyond the diffraction limit. The method combines Hong-Ou-Mandel interferometry with multiphoton coincidence detection to enable precise imaging of bright nanoscopic sources in biological and chemical systems.
Key Contributions
- Development of a quantum superresolution imaging scheme using multiphoton Hong-Ou-Mandel interference that surpasses the diffraction limit
- Demonstration of enhanced precision scaling that matches ultimate quantum limits for thermal source imaging, particularly for sources emitting ~1 photon per frame
- Achievement of robust imaging performance with coarse pixel resolution using transverse momenta detection in Fourier space
View Full Abstract
We present a quantum optical scheme for imaging transversely displaced thermal sources of arbitrary intensities by employing multiphoton interference with a reference single-photon Fock state at a beamsplitter. Obtaining an analytical form for transverse momenta-resolved $L$-photon probabilities in either output, we show via Fisher information analysis that separation estimators built using interference sampling of multiphoton events exhibit significantly enhanced precision vis-à-vis existing imaging schemes over a wide range of separations and brightness. Even-photon-number coincidences exhibit constant precision in the sub-Rayleigh regime, demonstrating quantum superresolution of our scheme beyond the diffraction limit. For sources emitting on average $N_s\sim1$ photon per frame (such as in IR emission of thermal sources), precision bounds for our scheme scale linearly in $N_s$, exemplifying an enhanced precision of estimators in relation to weak sources $N_s\ll1$, and matching the ultimate quantum scaling. Finally, transverse momenta resolution in the Fourier plane produces finite imaging precisions for intermediate and large source separations using coarse pixel sizes of order $δy\sim100\,μ\mathrm{m}$ for exemplary image spot sizes $σ_x \sim 0.1\, μ\mathrm{m}$, in contrast with existing schemes of diffraction-limited direct imaging and superresolved inversion interferometric imaging that are severely degraded by coarse pixel sizes and have limited use. Combining the relatively straightforward sensing operation of Hong-Ou-Mandel interferometers with multiphoton coincidence detection of arbitrarily bright thermal sources and inner variable resolution of transverse photonic momenta, our scheme offers a robust alternative to non-invasive single-particle tracking and imaging of bright sources in nanoscopic chemical and biological systems.
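The mechanism underlying the scheme is the two-photon Hong-Ou-Mandel effect: for a balanced beamsplitter with $t = 1/\sqrt{2}$, $r = i/\sqrt{2}$, the coincidence amplitude for two indistinguishable photons cancels,

```latex
\[
  A_{\text{coinc}} = t^2 + r^2
  = \frac{1}{2} + \left(\frac{i}{\sqrt{2}}\right)^{2} = 0,
\]
```

so both photons bunch into the same output port. The paper generalizes this interference to $L$-photon events between an arbitrarily bright thermal input and a single-photon reference, which is where the superresolution sensitivity comes from.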
Improving Generalization and Trainability of Quantum Eigensolvers via Graph Neural Encoding
This paper develops a machine learning approach that combines graph neural networks with classical neural networks to generate better starting parameters for variational quantum eigensolvers (VQEs), which are quantum algorithms used to find ground states of quantum systems. The method improves VQE performance by encoding the structure of quantum problems and generalizing across different instances without requiring retraining.
Key Contributions
- Novel end-to-end representation learning framework combining graph autoencoders with classical neural networks for VQE parameter generation
- Demonstrated improved generalization across Hamiltonian instances without instance-specific retraining
- Mitigation of barren plateau problem with significantly reduced gradient variance decay
- Accelerated convergence for quantum subspace-based eigensolvers
View Full Abstract
Determining the ground state of a many-body Hamiltonian is a central problem across physics, chemistry, and combinatorial optimization, yet it is often classically intractable due to the exponential growth of Hilbert space with system size. Even on fault-tolerant quantum computers, quantum algorithms with convergence guarantees -- such as quantum phase estimation and quantum subspace methods -- require an initial state with sufficiently large overlap with the true ground state to be effective. Variational quantum eigensolvers (VQEs) are natural candidates for preparing such states; however, standard VQEs typically exhibit poor generalization, requiring retraining for each Hamiltonian instance, and often suffer from barren plateaus, where gradients can vanish exponentially with circuit depth and system size. To address these limitations, we propose an end-to-end representation learning framework that combines a graph autoencoder with a classical neural network to generate VQE parameters that generalize across Hamiltonian instances. By encoding interaction topology and coupling structure, the proposed model produces high-overlap initial states without instance-specific optimization. Through extensive numerical experiments on families of one- and two-local Hamiltonians, we demonstrate improved generalization and trainability, manifested as reduced test error and a significantly milder decay of gradient variance. We further show that our method substantially accelerates convergence in quantum subspace-based eigensolvers, highlighting its practical impact for downstream quantum algorithms.
Krylov Distribution and Universal Convergence of Quantum Fisher Information
This paper develops a mathematical framework for efficiently computing quantum Fisher information (QFI) using Krylov subspace methods, which is important for quantum metrology and precision measurement. The authors identify universal convergence patterns and provide both theoretical insights and practical computational tools for high-dimensional quantum systems.
Key Contributions
- Development of spectral-resolvent framework for QFI computation using Krylov subspace methods
- Identification of two universal convergence regimes (exponential and algebraic decay) based on spectral properties
- Connection between quantum metrology, spectral geometry, and Krylov dynamics with practical computational tools
View Full Abstract
We develop a spectral-resolvent framework for computing the quantum Fisher information (QFI) using Krylov subspace methods, extending the notion of the Krylov distribution. By expressing the QFI as a resolvent moment of the superoperator $\mathcal{K}_ρ$ associated with a density matrix, the Krylov distribution quantifies how the QFI weight is distributed across Krylov levels in operator space and provides a natural measure for controlling the truncation error in Krylov approximations. Leveraging orthogonal polynomial theory, we identify two universal convergence regimes: exponential decay when the Liouville-space spectrum is gapped away from zero, and algebraic decay governed by hard-edge (Bessel) universality when small eigenvalues accumulate near zero. This framework establishes a direct connection between quantum metrology, spectral geometry, and Krylov dynamics, offering both conceptual insight and practical tools for efficient QFI computation in high-dimensional and many-body systems.
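For reference, the standard spectral form of the QFI, which the paper recasts as a resolvent moment of $\mathcal{K}_\rho$ (writing $\rho_\theta = \sum_k \lambda_k |k\rangle\langle k|$):

```latex
\[
  F_Q(\rho_\theta)
  = 2 \sum_{\lambda_k+\lambda_l>0}
    \frac{\left|\langle k|\partial_\theta\rho_\theta|l\rangle\right|^2}{\lambda_k+\lambda_l}
  = \big\langle \partial_\theta\rho_\theta,\,
    \mathcal{K}_\rho^{-1}\,\partial_\theta\rho_\theta \big\rangle_{\mathrm{HS}},
  \qquad \mathcal{K}_\rho(X) = \tfrac{1}{2}(\rho X + X \rho).
\]
```

The second expression makes the Krylov connection natural: $\mathcal{K}_\rho^{-1}$ applied to $\partial_\theta\rho_\theta$ can be approximated in the Krylov subspace generated by repeated application of $\mathcal{K}_\rho$, and the Krylov distribution tracks how much QFI weight each level carries.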
Quantum Resource Theory of Lasers
This paper analyzes laser coherence properties using quantum resource theory, showing how imperfections in laser light limit quantum coherence and affect quantum technology applications like qubit initialization. The researchers provide both theoretical framework and experimental validation for benchmarking coherent light sources in quantum protocols.
Key Contributions
- Established quantum resource theory framework for analyzing laser coherence properties and their limitations due to spontaneous emission
- Demonstrated direct connection between laser field quantum coherence and maximum achievable purity in qubit superposition state initialization
View Full Abstract
Lasers serve as the fundamental workhorses of photonic quantum technologies, with perfectly coherent light fields being essential for many protocols that generate nonclassical light, implement coherent control schemes, and initialize qubits. However, no laser is absolutely ideal and the implications of deviations from perfect coherence in quantum technological tasks remain unclear. In this study, we theoretically and experimentally explore the quantum coherence properties of lasers from a resource theory perspective, establishing a significant connection between photonics, quantum optics, and quantum information science. We demonstrate that the maximum achievable quantum coherence for laser light is constrained by spontaneous emission and the purity of the dephased laser field state. As a critical example application in quantum information protocols, we show that the quantum coherence of a laser field with a given mean photon number directly governs the maximum purity attainable when initializing a qubit in a superposition state through resonant driving. Our findings are highly relevant for bridging applied physics and engineering with integrated photonic quantum technologies and resource theories, paving the way for reliable benchmarking of various coherent light sources for applications in photonics and quantum protocols.
Symmetry and Exact Solutions of General Spin-Boson Models
This paper analyzes the mathematical symmetry structure of spin-boson models, which describe quantum systems interacting with their environment, and derives exact solutions for their energy spectra. The authors demonstrate their general theoretical approach with numerical calculations for a specific two-mode example.
Key Contributions
- Identification and exploitation of symmetry structure in general spin-boson Hamiltonians
- Exact analytical solutions for spin-boson model spectra
- Numerical demonstration of the solution method for two-mode systems
View Full Abstract
Spin-boson models are the canonical benchmark for quantum dissipation. We show the symmetry structure of general spin-boson Hamiltonians and obtain their spectra explicitly by exploiting the symmetry. As an illustration of the general case, we numerically demonstrate the exact solution for the two-mode case.
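The simplest instance of such a symmetry is the $\mathbb{Z}_2$ parity of the single-mode spin-boson (quantum Rabi) model. The sketch below verifies numerically that the parity operator, spin flip combined with boson-number parity, commutes with a truncated Rabi Hamiltonian; parameters are illustrative, and the paper's general multimode setting is not reproduced here:

```python
import numpy as np

N = 12                                          # boson Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # truncated annihilation operator
num = a.T @ a                                   # number operator
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

omega, Delta, g = 1.0, 0.8, 0.3                 # illustrative parameters
H = (omega * np.kron(np.eye(2), num)
     + 0.5 * Delta * np.kron(sz, np.eye(N))
     + g * np.kron(sx, a + a.T))                # Rabi coupling sigma_x (a + a^dag)

# Z2 parity: spin flip combined with boson-number parity (-1)^n
parity = np.kron(sz, np.diag((-1.0) ** np.arange(N)))
comm = H @ parity - parity @ H
print(np.max(np.abs(comm)))                     # ~0: H and parity commute
```

Block-diagonalizing in the two parity sectors already halves the problem; the paper's point is that general spin-boson Hamiltonians carry enough such structure to obtain the spectrum explicitly.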
Two-parameter families of MPO integrals of motion in Heisenberg spin chains
This paper extends recent work on quantum spin chain integrability by discovering two-parameter families of matrix product operators (MPOs) that commute with Heisenberg spin chain Hamiltonians. The authors develop a symbolic algebra method to find these integrals of motion for XXX, XXZ, and XYZ models, expanding beyond the previously known one-parameter families.
Key Contributions
- Discovery of two-parameter families of MPO integrals of motion for Heisenberg spin chains
- Development of symbolic algebra approach for finding commuting operators with spin chain Hamiltonians
View Full Abstract
Recently, Fendley et al. (2025) [arXiv:2511.04674] revealed a new way to demonstrate the integrability of the XYZ Heisenberg model by constructing a one-parameter family of integrals of motion in matrix product operator (MPO) form. In this short note, I report on the discovery of two-parameter families of MPOs that commute with the Heisenberg spin chain Hamiltonian in the XXX, XXZ, and XYZ cases. I describe a symbolic algebra approach for finding such integrals of motion and speculate about possible applications.
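As a minimal numerical analogue of checking that an operator commutes with a Heisenberg Hamiltonian, the sketch below verifies the simplest conserved charge, total magnetization, for a short XXX chain; the paper's two-parameter MPO families are of course far richer than this toy check:

```python
import numpy as np
from functools import reduce

N = 4                                            # chain length
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i of the N-site chain."""
    return reduce(np.kron, [op if j == i else I2 for j in range(N)])

# XXX Heisenberg chain with open boundaries
H = sum(site_op(s, i) @ site_op(s, i + 1)
        for i in range(N - 1) for s in (sx, sy, sz))

# Simplest conserved charge: total magnetization S^z_tot
Sz_tot = sum(site_op(sz, i) for i in range(N))
print(np.max(np.abs(H @ Sz_tot - Sz_tot @ H)))   # ~0: [H, S^z_tot] = 0
```

A symbolic-algebra search of the kind the note describes generalizes this idea: one parametrizes a candidate MPO and solves the linear conditions imposed by requiring the commutator with $H$ to vanish identically.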
Direct access to the initial polarization of ${}^{13}C$ nuclei by measuring coherence evolution of a nitrogen-vacancy center spin qubit
This paper presents a new method to measure the initial polarization of carbon-13 nuclei in diamond by monitoring how an NV center spin qubit's coherence evolves over time. The technique allows researchers to indirectly determine nuclear spin polarization without needing direct access to the nuclear environment, using only measurements of the NV center qubit.
Key Contributions
- Development of an indirect method to measure nuclear polarization using NV center coherence evolution
- Demonstration that the technique works with minimal experimental requirements and doesn't depend strongly on applied magnetic fields
- Validation through simulations with up to fifteen randomly placed nuclear spins
View Full Abstract
We introduce a method for the measurement of a lower bound on the initial polarization of spinful nuclei in a diamond by following the coherence evolution of an NV center spin qubit after a simple scheme is applied to the qubit to facilitate the transfer of information from the environment into the qubit state. Current polarization measurement techniques are challenging to implement due to the need for direct access to the environment. In our method, information is obtained by measuring the difference in the evolution of the qubit coherence resulting from a preparation phase in which the environment evolution is conditional on the qubit pointer state. We find that the method does not depend strongly on the applied magnetic field, but rather on the number of spinful nuclei that lead to decoherence, and gives a reasonable estimate if the environment is polarized. The key advantage of this approach is its simplicity and minimal experimental requirements, allowing the inference of initial nuclear polarizations without direct access to the environment. We demonstrate the efficacy of this method using a simulated environment of up to fifteen randomly placed nuclear spins.
Reversible Information Transformation via Quantum Reservoir Computing: Conditions, Protocol, and Noise Resilience
This paper develops a method to reverse quantum reservoir computing transformations, showing how to reconstruct input data from quantum system outputs using a four-equation protocol. The researchers demonstrate machine-precision reconstruction under ideal conditions but find that realistic noise limits practical performance, requiring error mitigation for deployment.
Key Contributions
- Development of four-equation encode-decode protocol for reversible quantum reservoir computing
- Demonstration of machine-precision input reconstruction using XYZ Hamiltonian reservoir with rank condition criterion
- Comprehensive noise analysis showing shot noise dominance and asymmetric resource allocation benefits
View Full Abstract
Quantum reservoir computing (QRC) exploits fixed quantum dynamics and a trainable linear readout to process temporal data, yet reversing the transformation -- reconstructing the input from the reservoir output -- has been considered intractable owing to the recursive nonlinearity of sequential quantum state evolution. Here we propose a four-equation encode-decode protocol with cross-key pairing and constructively show that quantum reservoir and key combinations satisfying all four equations exist. Using a full XYZ Hamiltonian reservoir with 10 data qubits, we expand the feature dimension to 76 without increasing qubit count and achieve machine-precision reconstruction (mean-squared error $\mathrm{MSE} \sim 10^{-17}$) for data lengths up to 30 under ideal conditions; the rank condition $\mathrm{dim}(V) \geq N_c$ is identified as a necessary criterion. A comprehensive noise analysis across seven conditions and four baseline methods reveals a clear hierarchy: shot noise dominates, depolarizing noise adds a moderate factor, and asymmetric resource allocation -- 10 shots for encoding, $10^5$ for decoding -- yields approximately two orders of magnitude MSE improvement by exploiting the asymmetric noise roles of the encryption and decryption feature matrices. Under realistic noise the MSE degrades to $10^{-3}$-$10^{-1}$, indicating that error mitigation is needed before practical deployment, but our results establish the feasibility of bidirectional reversible information transformation within QRC.
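The decode step hinges on inverting a linear readout, which is only possible when the stated rank condition holds. A toy linear stand-in (the names V, u, f are illustrative; this is not the paper's four-equation protocol) shows the reconstruction via normal equations:

```python
# Toy stand-in for the decode step: a linear readout f = V u can be inverted
# only when rank(V) >= number of inputs N_c (the paper's rank condition
# dim(V) >= N_c). All names here are illustrative.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Feature map V: 3 features of 2 inputs, rank 2, so the rank condition holds.
V = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
u = [0.7, -0.3]            # "input" to reconstruct
f = matvec(V, u)           # observed features

# Least squares: u_hat = (V^T V)^{-1} V^T f, with the 2x2 inverse in closed form.
VtV = [[sum(V[k][i] * V[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
Vtf = [sum(V[k][i] * f[k] for k in range(3)) for i in range(2)]
det = VtV[0][0] * VtV[1][1] - VtV[0][1] * VtV[1][0]
u_hat = [(VtV[1][1] * Vtf[0] - VtV[0][1] * Vtf[1]) / det,
         (VtV[0][0] * Vtf[1] - VtV[1][0] * Vtf[0]) / det]

mse = sum((a - b) ** 2 for a, b in zip(u, u_hat)) / len(u)
print(mse)  # machine precision when the rank condition holds
```

When rank(V) falls below the number of inputs, det (or its higher-dimensional analogue) degenerates and the reconstruction is no longer unique, mirroring why the paper treats dim(V) ≥ N_c as a necessary criterion.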
Magnon squeezing in the quantum regime
This paper demonstrates the first experimental observation of quantum-level magnon squeezing in a macroscopic yttrium iron garnet sphere. The researchers used a hybrid system coupling magnons to superconducting qubits via microwave cavities to create squeezed quantum states of collective spin excitations, achieving approximately 1 dB of squeezing below the vacuum level.
Key Contributions
- First experimental demonstration of quantum-level magnon squeezing in macroscopic spin systems
- Development of magnon-superconducting qubit hybrid platform for quantum nonlinear magnonics
- Implementation of Wigner tomography for characterizing squeezed magnon states
View Full Abstract
Squeezed states, crucial for quantum metrology and emerging quantum technologies, have been demonstrated in various platforms, but quantum squeezing of magnons in macroscopic spin systems remains elusive. Here we report the experimental observation of quantum-level magnon squeezing in a millimeter-scale yttrium iron garnet (YIG) sphere. By engineering a strong dispersive magnon-superconducting qubit coupling via a microwave cavity, we implement a significant self-Kerr nonlinearity to generate squeezed magnon states with their mean magnon number less than one. Harnessing a magnon-assisted Raman process, we perform Wigner tomography, revealing quadrature variances of $\sim\!0.8$ ($\sim\!1.0$~dB squeezing) relative to the vacuum. These results lay the groundwork for quantum nonlinear magnonics and promise potential applications in quantum metrology.
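As a quick consistency check on the reported numbers, the quadrature variance and the squeezing level quoted in the abstract follow the standard decibel conversion:

```python
import math

# Squeezing in dB from a quadrature-variance ratio relative to vacuum:
# dB = -10 * log10(var / var_vacuum). The ~0.8 variance quoted in the
# abstract corresponds to roughly 1 dB below the vacuum level.
def squeezing_db(variance_ratio):
    return -10.0 * math.log10(variance_ratio)

db = squeezing_db(0.8)
print(round(db, 2))  # ~0.97 dB
```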
Curiosity Over Hype: Modeling Motivation Language to Understand Early Outcomes in a Selective Quantum Track
This paper analyzes motivation language in Spanish admission responses to predict student engagement and performance in a quantum computing education program in Peru. The researchers used text analysis methods to identify patterns suggesting that curiosity-driven applicants performed better than those focused on technology careers.
Key Contributions
- Application of natural language processing to predict success in quantum education programs
- Comparison of LDA topic modeling with embedding-based clustering for analyzing motivation in STEM education
View Full Abstract
We study whether latent motivation signals in short Spanish admission responses predict engagement and performance in an early quantum computing pathway run by QuantumHub Peru. We analyze N=241 applicants' open responses and link them to outcomes from two selective modules: Module 1 (secondary; mathematics and computing foundations; n=23) and Module 2 (secondary + early undergraduate; quantum fundamentals; n=36, including M1 continuers). To ensure baseline comparability, the M2 university entrance exam matched the difficulty of the M1 final. Final grades followed the program's official cohort-specific weightings (attendance/assignments/exam), which we retain to preserve ecological validity. Methodologically, we model text with Latent Dirichlet Allocation (LDA, k=8) and, for robustness, with sentence embeddings from a small multilingual language model, EmbeddingGemma-300M, projected via UMAP and clustered with HDBSCAN. This combination leverages the transparency of bag-of-words topics and the semantic richness of small language model embeddings. Descriptively, curiosity/learning topics show higher grades and attendance than technology/career-oriented topics; inferential tests are underpowered (e.g., linear R2 ~ 0.03; logistic pseudo-R2 ~ 0.04) so effect-size estimates should be viewed as preliminary rather than confirmatory. Embedding-based clustering yields seven clusters with 11.2% noise and modest agreement with LDA (ARI=0.068; NMI=0.163). Results suggest that brief motivation responses encode promising signals that could support early mentoring in rigorous STEM pipelines, while highlighting the need for larger, pre-registered studies.
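The agreement between the LDA topics and the embedding-based clusters is quantified with the adjusted Rand index (ARI = 0.068 above). A minimal pure-Python implementation of ARI (ours, not the authors' analysis code) makes the metric concrete:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two flat clusterings of the same items."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))          # contingency table
    sum_comb = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    total = comb(n, 2)
    expected = sum_a * sum_b / total
    max_index = (sum_a + sum_b) / 2
    return (sum_comb - expected) / (max_index - expected)

# Identical clusterings give ARI = 1; unrelated ones land near (or below) 0.
a = [0, 0, 1, 1, 2, 2]
print(adjusted_rand_index(a, a))                 # 1.0
print(adjusted_rand_index(a, [0, 1, 0, 1, 0, 1]))
```

Values like 0.068, as reported in the abstract, therefore indicate only modest agreement between the two clusterings, barely above chance.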
Spectral Phase Encoding for Quantum Kernel Methods
This paper introduces Spectral Phase Encoding (SPE), a new method for quantum machine learning that combines discrete Fourier transforms with quantum kernel methods. The researchers test how well different quantum machine learning approaches handle noisy data and find that their DFT-based method is more robust to data corruption than alternatives.
Key Contributions
- Introduction of Spectral Phase Encoding (SPE) combining DFT preprocessing with diagonal phase-only quantum embeddings
- Comprehensive robustness analysis showing DFT-based quantum kernels have better noise resilience than PCA and random projection variants
View Full Abstract
Quantum kernel methods are promising for near-term quantum machine learning, yet their behavior under data corruption remains insufficiently understood. We analyze how quantum feature constructions degrade under controlled additive noise. We introduce Spectral Phase Encoding (SPE), a hybrid construction combining a discrete Fourier transform (DFT) front-end with a diagonal phase-only embedding aligned with the geometry of diagonal quantum maps. Within a unified framework, we compare QK-DFT against alternative quantum variants (QK-PCA, QK-RP) and classical SVM baselines under identical clean-data hyperparameter selection, quantifying robustness via dataset fixed-effects regression with wild cluster bootstrap inference across heterogeneous real-world datasets. Across the quantum family, DFT-based preprocessing yields the smallest degradation rate as noise increases, with statistically supported slope differences relative to PCA and RP. Compared to classical baselines, QK-DFT shows degradation comparable to linear SVM and more stable than RBF SVM under matched tuning. Hardware experiments confirm that SPE remains executable and numerically stable for overlap estimation. These results indicate that robustness in quantum kernels depends critically on structure-aligned preprocessing and its interaction with diagonal embeddings, supporting a robustness-first perspective for NISQ-era quantum machine learning.
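A minimal classical sketch of the SPE idea, assuming a naive DFT front-end and a phase-overlap kernel of the form described in the abstract (the implementation details here are ours, not the authors'):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real vector."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def phase_features(x):
    """Phases of the DFT coefficients: the 'spectral phase' front-end."""
    return [cmath.phase(c) for c in dft(x)]

def phase_kernel(x, y):
    """Overlap of diagonal phase embeddings:
    k(x, y) = |mean_j exp(i * (theta_j(x) - theta_j(y)))|^2."""
    tx, ty = phase_features(x), phase_features(y)
    overlap = sum(cmath.exp(1j * (a - b)) for a, b in zip(tx, ty)) / len(tx)
    return abs(overlap) ** 2

x = [0.2, -0.5, 0.9, 0.1]
y = [0.3, 0.4, -0.8, 0.6]
print(phase_kernel(x, x))  # 1.0: identical phases align perfectly
print(phase_kernel(x, y))  # between 0 and 1
```

On hardware, the same overlap would be estimated from measurement statistics of a diagonal phase embedding rather than computed classically; this sketch only shows the kernel's functional form.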
Characterization and active cancellation of power-line-induced motional-mode frequency noise in a trapped-ion system
This paper investigates how 60-Hz power-line noise affects the frequency stability of trapped ion quantum bits and develops an active cancellation system to suppress this noise. The researchers demonstrate that their noise cancellation technique extends the coherence time of the ion's motional modes from 10 ms to 35 ms, improving the quality of quantum operations.
Key Contributions
- Systematic characterization of power-line noise effects on trapped-ion motional mode frequencies using spin-echo Ramsey spectroscopy
- Development and implementation of active noise cancellation system that extends coherence time from 10 ms to 35 ms
- Practical framework for suppressing periodic noise in trapped-ion quantum computing platforms
View Full Abstract
The stability of motional-mode frequency is essential for realizing high-fidelity quantum gates in trapped-ion quantum computing. While broadband Gaussian noise has been extensively studied and mitigated using pulse shaping techniques, the impact of coherent periodic noise has remained largely unexplored. Here we report a systematic investigation of 60-Hz power-line noise and its effect on the secular frequencies of a single ${}^{171}\mathrm{Yb}^{+}$ ion. Using spin-echo Ramsey spectroscopy, we characterize the amplitude and phase of the resulting secular-frequency modulation and validate this characterization via passive phase correction of the Ramsey sequence. Building on this, we implement active cancellation by injecting a compensation tone into the set-point of a PI controller that stabilizes the trap RF drive amplitude. A phasor-fitting procedure optimizes the amplitude and phase of the compensation signal, enabling near-complete suppression of the 60-Hz component. With active cancellation engaged, the coherence time of a radial motional mode is extended from approximately 10 ms to 35 ms, consistent with the limit set by motional heating. Our results provide both a clear characterization of periodic motional-mode noise and a practical framework for its suppression in trapped-ion quantum computing platforms.
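The phasor-fitting step amounts to a two-parameter least-squares fit of a 60-Hz sinusoid. A self-contained sketch on synthetic data (the real signal comes from spin-echo Ramsey measurements; these numbers are demo values) illustrates how the compensation tone's amplitude and phase would be extracted:

```python
import math, random

# Least-squares "phasor fit" of a 60-Hz modulation
# delta_f(t) = a*sin(w t) + b*cos(w t), the linear form behind fitting the
# amplitude and phase of a compensation tone. Toy data, seeded for determinism.
f_line = 60.0
w = 2 * math.pi * f_line
true_amp, true_phase = 1.3, 0.8                       # arbitrary demo values

random.seed(1)
ts = [i / 3000.0 for i in range(300)]                 # 0.1 s of samples
ys = [true_amp * math.sin(w * t + true_phase) + random.gauss(0, 0.05) for t in ts]

# Normal equations for the two-parameter linear model (closed-form 2x2 solve).
s_ss = sum(math.sin(w * t) ** 2 for t in ts)
s_cc = sum(math.cos(w * t) ** 2 for t in ts)
s_sc = sum(math.sin(w * t) * math.cos(w * t) for t in ts)
r_s = sum(y * math.sin(w * t) for t, y in zip(ts, ys))
r_c = sum(y * math.cos(w * t) for t, y in zip(ts, ys))
det = s_ss * s_cc - s_sc ** 2
a = (s_cc * r_s - s_sc * r_c) / det
b = (s_ss * r_c - s_sc * r_s) / det

fit_amp = math.hypot(a, b)                            # recovered phasor amplitude
fit_phase = math.atan2(b, a)                          # recovered phasor phase
print(fit_amp, fit_phase)
```

Injecting a tone with this amplitude and the opposite phase is the essence of the active-cancellation scheme described above.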
From Quantum Chaos to a Reversed Quantum Disentangled Liquid in a Disorder-Free Spin Ladder
This paper studies a disorder-free quantum spin ladder system and discovers a new phase called 'reversed quantum disentangled liquid' where the system avoids thermal equilibrium through interaction-driven localization rather than disorder. The researchers identify how varying interaction strength creates different dynamical regimes from integrable to chaotic to localized behavior.
Key Contributions
- Discovery of reversed quantum disentangled liquid as a new disorder-free route to many-body localization
- Identification of reentrant dynamical phase transitions from integrable to chaotic to localized regimes in spin ladders
- Demonstration of emergent local integrals of motion providing microscopic understanding of quasi-MBL dynamics
View Full Abstract
The mechanisms by which isolated interacting quantum systems evade thermalization extend beyond disorder-induced many-body localization, encompassing a growing class of interaction-driven phenomena. We investigate a spin-1/2 ladder with asymmetric XY leg couplings and tunable Ising interactions on the rungs, and identify the microscopic origin of many-body localization (MBL) in this setting. Through a suite of diagnostics, including entanglement dynamics, fidelity susceptibility, adiabatic gauge potential norms, level-spacing statistics, and entropy of eigenstates, we uncover a reentrant progression of dynamical regimes as the rung coupling Jz is varied: integrable behavior at Jz=0, quantum chaos at intermediate Jz, and a robust nonthermal regime at strong coupling. In the latter regime, we demonstrate the emergence of a reversed quantum disentangled liquid (reversed-QDL), where the light species thermalizes while the heavy species remains localized. The strong-coupling limit further yields emergent local integrals of motion anchored in a fixed-point structure, providing a microscopic origin of the observed quasi-MBL dynamics. These results establish reversed-QDL as a distinct, disorder-free route to nonergodicity and broaden the classification of dynamical phases in quantum matter.
A Relation Between the Chrestenson Operator, Weyl Operator Basis, and Kronecker-Pauli Operator Basis
This paper establishes a new mathematical relationship between three different operator bases used in quantum theory for prime-dimensional Hilbert spaces: the Chrestenson operator, Weyl operator basis, and Kronecker-Pauli operator basis. The authors provide concrete examples for 3-dimensional and 5-dimensional systems to illustrate their theoretical findings.
Key Contributions
- Establishes new algebraic relation connecting Chrestenson, Weyl, and Kronecker-Pauli operator bases
- Provides explicit examples for d=3 and d=5 dimensional quantum systems
View Full Abstract
Within the framework of quantum theory, we review the Chrestenson operator, the Weyl operator basis, and the Kronecker-Pauli operator basis in $d$-dimensional Hilbert spaces using Dirac notation, where $d$ is a prime integer strictly greater than 2. We establish a new algebraic relation connecting these operators and present the cases $d=3$ and $d=5$ as illustrative examples.
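The Weyl operators underlying these bases satisfy a standard commutation relation that is easy to verify numerically. A short sketch for d = 3 (our illustration, not the paper's derivation):

```python
import cmath

# Weyl (generalized Pauli) operators in prime dimension d:
# X|j> = |j+1 mod d>,  Z|j> = w^j |j>  with w = exp(2*pi*i/d).
# They obey the commutation relation  Z X = w X Z.
d = 3
w = cmath.exp(2j * cmath.pi / d)

X = [[1 if i == (j + 1) % d else 0 for j in range(d)] for i in range(d)]
Z = [[w ** i if i == j else 0 for j in range(d)] for i in range(d)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

ZX = matmul(Z, X)
wXZ = [[w * e for e in row] for row in matmul(X, Z)]
err = max(abs(ZX[i][j] - wXZ[i][j]) for i in range(d) for j in range(d))
print(err)  # ~0 up to floating-point rounding
```

The d² products of powers of X and Z form the Weyl operator basis the abstract refers to; the case d = 5 works identically with d changed.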
Deterministic Ground State Preparation via Power-Cosine Filtering of Time Evolution Operators
This paper presents a new quantum algorithm for preparing ground states of quantum many-body systems using a Power-Cosine quantum signal processing filter with a single ancillary qubit. The method uses mid-circuit measurement and reset to achieve efficient ground state preparation with reduced hardware requirements compared to existing approaches.
Key Contributions
- Novel Power-Cosine quantum signal processing filter for deterministic ground state preparation
- Single-ancilla framework with mid-circuit measurement and reset that reduces spatial overhead
- Analytical proof of exponential excited state suppression with O(Δ^-2 log(1/ε)) circuit depth scaling
- Demonstrated exponential advantage over Trotterized Adiabatic State Preparation at equivalent circuit depths
View Full Abstract
The deterministic preparation of quantum many-body ground states is essential for advanced quantum simulation, yet optimal algorithms often require prohibitive hardware resources. Here, we propose a highly efficient, non-variational protocol for ground state preparation using a Power-Cosine quantum signal processing (QSP) filter. By eschewing complex block-encoding techniques, our method directly utilizes coherent time-evolution operators controlled by a single ancillary qubit. The integration of mid-circuit measurement and reset (MCMR) drastically minimizes spatial overhead, translating iterative non-unitary filtering into deep temporal coherence. We analytically demonstrate that this approach achieves exponential suppression of excited states with a circuit depth scaling of $\mathcal{O}(\Delta^{-2}\log(1/\epsilon))$, prioritizing implementational simplicity over optimal asymptotic complexity. Numerical simulations on the 1D Heisenberg XYZ model validate the theoretical soundness and shot-noise resilience of our method. Furthermore, an advantage analysis reveals that our protocol exponentially outperforms standard Trotterized Adiabatic State Preparation (TASP) at equivalent circuit depths. This single-ancilla framework provides a highly practical and deterministic pathway for many-body ground state preparation on Early Fault-Tolerant (EFT) quantum architectures.
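The exponential-suppression claim can be illustrated with a two-level toy model: each cosine-filter round multiplies an eigenstate's amplitude by cos(Et/2), so excited-state weight decays geometrically in the number of rounds. A sketch (ours, not the authors' circuit):

```python
import math

# Two-level toy: ground state at energy 0, excited state detuned by delta.
# One round of a cosine filter multiplies an eigenstate's amplitude by
# cos(E * t / 2); after k rounds the excited amplitude carries the factor
# cos(delta * t / 2)^k, decaying exponentially in k, while cos(0) = 1
# leaves the ground state untouched.
delta, t = 1.0, 1.0
k = 40

amp_ground = 1.0 / math.sqrt(2)
amp_excited = 1.0 / math.sqrt(2)
for _ in range(k):
    amp_ground *= math.cos(0.0 * t / 2)        # cos(0) = 1: preserved
    amp_excited *= math.cos(delta * t / 2)     # |cos| < 1: suppressed

# Renormalize, as the protocol's measurement-and-reset effectively does.
norm = math.hypot(amp_ground, amp_excited)
p_excited = (amp_excited / norm) ** 2
print(p_excited)  # exponentially small residual excited-state population
```

The depth scaling quoted in the abstract arises because the per-round suppression factor approaches 1 as the gap Δ shrinks, so more rounds (and thus longer evolution) are needed for the same target error ε.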
Quantum Hamiltonian Learning using Time-Resolved Measurement Data and its Application to Gene Regulatory Network Inference
This paper develops a method to learn quantum Hamiltonians from time-resolved measurement data and applies it to infer gene regulatory networks by modeling gene interactions as quantum-like systems. The authors use quantum formalism to represent biological gene expression dynamics and test their approach on cancer genomics data.
Key Contributions
- Development of quantum Hamiltonian learning framework with finite-sample recovery guarantees
- Introduction of quantum-inspired modeling for gene regulatory network inference with application to cancer research
View Full Abstract
We present a new Hamiltonian-learning framework based on time-resolved measurement data from a fixed local IC-POVM and its application to inferring gene regulatory networks. We introduce the quantum Hamiltonian-based gene-expression model (QHGM), in which gene interactions are encoded as a parameterized Hamiltonian that governs gene expression evolution over pseudotime. We derive finite-sample recovery guarantees and establish upper bounds on the number of time and measurement samples required for accurate parameter estimation with high probability, scaling polynomially with system size. To recover the QHGM parameters, we develop a scalable variational learning algorithm based on empirical risk minimization. Our method recovers network structure efficiently on synthetic benchmarks and reveals novel, biologically plausible regulatory connections in Glioblastoma single-cell RNA sequencing data, highlighting its potential in cancer research. This framework opens new directions for applying quantum-like modeling to biological systems beyond the limits of classical inference.
Temporal magnon-qubit Mach-Zehnder interferometer
This paper proposes a quantum interferometer that uses microwave qubits and magnons (spin wave quasiparticles) to study single magnon decoherence. The system creates controllable entanglement between qubits and magnonic states using pulsed magnetic fields, enabling detailed investigation of quantum decoherence mechanisms at the single particle level.
Key Contributions
- Novel temporal Mach-Zehnder interferometer design using magnon-qubit entanglement
- Method to independently characterize different magnon decoherence channels
- Advancement toward single magnon quantum applications
View Full Abstract
A temporal magnon-qubit Mach-Zehnder (MZ) interferometer is proposed. The interferometer is based on controllable entanglement of a microwave qubit and a magnonic state, achieved by application of a pulsed magnetic field playing the role of a magnon-qubit temporal "beam splitter". Analogous to a typical MZ interferometer, the generated interference pattern of the final qubit population carries information about the magnon dynamics. One important application of the proposed scheme is the study of single-magnon decoherence. Interestingly, this scheme allows one to independently determine the rates of two possible decoherence channels. This may help enable single-magnon state applications and answer fundamental questions of quasiparticle decoherence at the single-quantum level.
Subsystem Statistics and Conditional Self-Similarity of Random Quantum States
This paper analytically derives statistical distributions for subsystems of random quantum states, discovering that they follow a universal Beta distribution law. The authors prove that random quantum states exhibit conditional self-similarity, meaning subsystem statistics can perfectly reconstruct full-system behavior, which provides new benchmarking methods for quantum random circuit sampling.
Key Contributions
- Analytical derivation of universal Beta distribution law for random quantum state subsystems
- Proof of exact conditional self-similarity in random quantum states enabling full-system reconstruction from subsystem data
- New framework for validating random circuit sampling through subsystem cross-entropy benchmarking
View Full Abstract
We analytically derive the bit-string probability distributions of subsystems of random pure states and depolarized random states using the Dirichlet distribution. We identify the exact Beta distribution as the universal statistical law of random quantum states, providing a unified finite-size description of full-system, subsystem, and conditional statistics. In the presence of depolarizing noise, these distributions are scaled and shifted by the noise strength, producing a noise-induced gap in their support. Remarkably, we prove that random states exhibit exact conditional self-similarity: the distribution of subsystem bit-string probabilities conditioned on specific outcomes of the complementary subsystem is identical to that of the full system. This hidden scale invariance enables the exact restoration of the full-system statistics from the marginalized Beta distribution via post-selection, and persists under depolarizing noise. Our results uncover a fundamental symmetry of Hilbert space and provide a scalable, rigorous framework for validating random circuit sampling via subsystem or conditional cross-entropy benchmarking.
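The Porter-Thomas/Beta statistics of random pure states are easy to reproduce by direct sampling. A pure-Python check of the stated mean and variance (D = 16 here is an arbitrary demo size, not a value from the paper):

```python
import random

# Sample Haar-random pure states of dimension D via normalized complex
# Gaussians; each outcome probability then follows Beta(1, D-1)
# (Porter-Thomas), with mean 1/D and variance (D-1) / (D^2 * (D+1)).
random.seed(0)
D, n_states = 16, 2000
probs = []
for _ in range(n_states):
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(D)]
    norm = sum(abs(a) ** 2 for a in amps)
    probs.extend(abs(a) ** 2 / norm for a in amps)

mean = sum(probs) / len(probs)
var = sum((p - mean) ** 2 for p in probs) / len(probs)
var_beta = (D - 1) / (D ** 2 * (D + 1))
print(mean, var, var_beta)  # sample mean is 1/D; sample variance tracks Beta
```

The conditional self-similarity result goes further than this marginal statement: it says the same Beta law reappears for subsystem probabilities conditioned on outcomes of the complementary subsystem.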
Optimized Phase Masks for Absorption of Ultra-Broadband Pulses by Narrowband Atomic Ensembles
This paper uses genetic algorithms and spatial light modulators to optimize phase masks that enhance two-photon absorption in atomic ensembles, achieving up to 26x enhancement for photons from different pulses. The work focuses on improving ultra-broadband pulse absorption by narrowband atomic systems, with applications to quantum memory storage.
Key Contributions
- Demonstration of 26x enhancement factor for two-photon absorption using photons from different pulses
- Theoretical analysis showing up to 3x enhancement factors for two-photon absorption at large optical depths using optimized phase masks
View Full Abstract
By combining a genetic algorithm and a spatial light modulator, we theoretically analyse how to improve a two-photon cascade absorption in atomic ensembles, inspecting the impact of various configurations and parameters in the optimized phase mask. At low atomic densities, we compare the cases of sequential transitions with the two photons coming from the same pulse or from two different pulses. For the former, we predict an enhancement by a factor of $9.5$, similar to what was previously reported in the literature [Phys. Rev. Lett. {\bf 86}, 47 (2002)]. For the latter, on the other hand, we obtain an enhancement factor of $26$ times. This absorption of two photons by different pulses is of particular interest for the storage of ultra-broadband single photons by atomic ensembles, in which case the second photon would come from a control pulse. We investigate this process as a function of the atomic density, demonstrating enhancements by factors up to 3 for the two-photon absorption after propagating through large optical depths. However, for the experimental conditions considered in the previous work by Carvalho et al. [Phys. Rev. A {\bf 101}, 053426 (2020)], in terms of control power and optical depths, we show that this enhancement in two-photon absorption would still result in just a modest increase of the absorption of a weak probe pulse.
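The genetic-algorithm optimization loop can be sketched on a toy fitness function whose optimum is a phase-conjugate mask. This illustrates only the search strategy, not the paper's two-photon physics; all names and parameters are illustrative:

```python
import random, cmath

# Toy genetic-algorithm phase-mask search: maximize the coherent sum
# |sum_j exp(i*(phi_j + psi_j))|^2 over masks phi, where psi is a fixed
# "medium" phase profile; the optimum aligns phi_j = -psi_j (mod 2*pi).
random.seed(7)
n = 8
psi = [random.uniform(0, 2 * cmath.pi) for _ in range(n)]

def fitness(phi):
    return abs(sum(cmath.exp(1j * (p + q)) for p, q in zip(phi, psi))) ** 2

best = [random.uniform(0, 2 * cmath.pi) for _ in range(n)]  # random initial mask
initial = fitness(best)
for _ in range(400):                     # mutate; keep the better mask (elitism)
    child = [p + random.gauss(0, 0.3) for p in best]
    if fitness(child) > fitness(best):
        best = child
print(initial, fitness(best))  # fitness climbs toward the maximum n**2
```

In the paper's setting the fitness would instead be the simulated two-photon absorption for the phase mask programmed onto the spatial light modulator, but the mutate-and-select loop has the same shape.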
Superresolution technique beyond the diffraction limit under a structured beam via different optical nanostructures
This paper develops superresolution optical techniques using structured light beams and solid immersion lenses to achieve extremely sharp focal spots (27 nm FWHM) that overcome the diffraction limit. The researchers use advanced beam shapes like Laguerre-Gaussian and Hermite-Gauss beams with elliptical nanostructures to enable high-resolution scanning applications.
Key Contributions
- Achievement of 27 nm FWHM focal spots using structured beams and elliptical solid immersion lenses
- Demonstration of superresolution technique with tolerance to fabrication errors and beam size variations
View Full Abstract
To overcome the diffraction limit and achieve superresolution, solid immersion lenses (SILs) are key optical elements for data storage and nanophotonics applications. Recent demonstrations have shown how different nanostructures (such as elliptical SILs) are used in diverse fields to increase resolution in the presence of a structured Gaussian beam. By applying twisted beams such as orbital angular momentum beams (Laguerre-Gaussian) and spatial higher-order Gaussian beams (Hermite-Gauss), we can attain a sharp (FWHM = 27 nm) near-field focal spot pattern, which is considerably better than the conventional macroscopic SIL. By numerical simulations, tolerance has been confirmed under slight variations in beam size and geometrical modifications, making the model compatible with fabrication errors. This narrow-bandwidth intensity distribution can be utilized for scanning samples at higher resolution, especially in the field of quantum technology.