Quantum Computing Paradigms: Adiabatic QC (AQC)
(For other quantum computing paradigms and architectures, see Taxonomy of Quantum Computing: Paradigms & Architectures)
What It Is
Adiabatic Quantum Computing (AQC) is a universal paradigm of quantum computing based on the adiabatic theorem of quantum mechanics. It generalizes the idea of quantum annealing beyond just optimization. In AQC, one encodes the solution of an arbitrary computation in the ground state of some problem Hamiltonian $H_{\text{problem}}$. Instead of applying discrete gates, one evolves the quantum state continuously under a time-dependent Hamiltonian $H(t)$ from an initial easy state to the final state that encodes the answer. If the evolution is slow enough (adiabatic), the system stays in the instantaneous ground state throughout, thus ending in the ground state of $H_{\text{problem}}$, which yields the solution.
Mathematically, the setup is similar to QA: one prepares the system in the ground state of a simple Hamiltonian $H(0) = H_{\text{initial}}$ (e.g. a strong transverse field whose ground state is $|+\cdots+\rangle$). Then $H(t)$ is varied smoothly to $H(T) = H_{\text{problem}}$ over total time $T$. The adiabatic theorem guarantees that if $T$ is large compared to $\frac{1}{g_{\min}^2}$ (where $g_{\min}$ is the minimum energy gap between ground state and first excited state during the evolution), the system will end in the ground state of $H_{\text{problem}}$ with high fidelity. In essence, computation is achieved by slow deformation of the Hamiltonian rather than sequences of gates.
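To make the interpolation concrete, here is a minimal numerical sketch (the two-qubit instance, the couplings, and the variable names are illustrative assumptions, not a standard benchmark). It builds $H(s) = (1-s)H_{\text{initial}} + s\,H_{\text{problem}}$ for a toy Ising cost and checks that the ground state of $H(0)$ is the uniform superposition $|++\rangle$ while the lowest-energy states of $H(1)$ encode the answer:

```python
# Illustrative sketch only: a 2-qubit toy instance of H(s) = (1-s)*H_initial + s*H_problem.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

# H_initial = -(sigma_x (x) I + I (x) sigma_x): strong transverse field, ground state |++>
H_initial = -(np.kron(sx, I2) + np.kron(I2, sx))
# H_problem = sigma_z (x) sigma_z: antiferromagnetic cost, ground states |01> and |10>
H_problem = np.kron(sz, sz)

def H(s):
    """Interpolated Hamiltonian along the adiabatic path, s in [0, 1]."""
    return (1.0 - s) * H_initial + s * H_problem

# Ground state at s = 0 is the uniform superposition (amplitude +-0.5 on every basis state)
vals0, vecs0 = np.linalg.eigh(H(0.0))
print("ground state of H(0):", np.round(vecs0[:, 0], 3))

# Ground space at s = 1 encodes the answer (energy -1, spanned by |01> and |10>)
vals1, _ = np.linalg.eigh(H(1.0))
print("ground energy of H(1):", vals1[0])
```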
Importantly, any quantum algorithm (in the circuit model) can be translated into an adiabatic process. This was proven in a landmark result by Aharonov et al. (2004). They described an efficient mapping whereby an arbitrary quantum circuit of $L$ gates is converted into a certain $H_{\text{problem}}$ whose ground state encodes the circuit’s output, and the adiabatic evolution steers the computer into that ground state in polynomial time. This implies that the adiabatic model and the standard circuit model are polynomially equivalent in computational power. In other words, AQC is a universal model of quantum computing, not limited to optimization problems. (Quantum annealing can be viewed as a subset or special case of AQC, usually with a restriction to particular forms of $H(t)$ that make it more practical but possibly less powerful.)
Mathematical Formulation: The general AQC algorithm for a problem works as follows:
- Choose a problem Hamiltonian $H_P$ such that its ground state encodes the answer to your computational problem (this might be constructed via a clever encoding – e.g. for an NP-complete problem, $H_P$ penalizes constraint violations so that the lowest-energy state corresponds to satisfying all constraints).
- Prepare the system in the ground state of a simple Hamiltonian $H_B$ (the “beginning” Hamiltonian). Commonly, $H_B = -\sum_i \sigma^x_i$ as in QA, or some Hamiltonian with an easy ground state like all spins aligned in $x$-direction.
- Evolve the Hamiltonian as $H(t) = (1-s(t)) H_B + s(t) H_P$ with $s(0)=0$ and $s(T)=1$. The function $s(t)$ is ramped slowly over total time $T$.
- If $T$ is large enough, by time $T$ the system will be in (or very near) the ground state of $H_P$. Measuring the state in the computational basis then yields the solution. (A minimal numerical sketch of these steps follows this list.)
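The following sketch runs these four steps end to end on a toy three-qubit Ising instance (the specific couplings, the linear schedule, and the step count are illustrative assumptions). It integrates the Schrödinger equation with a piecewise-constant approximation of $H(t) = (1-s)H_B + sH_P$, $s = t/T$, and reports how the probability of ending in the ground state of $H_P$ grows with the total time $T$:

```python
# Illustrative end-to-end adiabatic run on a toy 3-qubit Ising instance.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(op, i, n):
    """Embed a single-qubit operator on qubit i of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == i else I2)
    return out

n = 3
# Step 1: problem Hamiltonian H_P (a diagonal Ising cost; its unique ground state |010>
# plays the role of the "answer" for this toy instance).
H_P = (op_on(sz, 0, n) @ op_on(sz, 1, n)
       + op_on(sz, 1, n) @ op_on(sz, 2, n)
       - 0.5 * op_on(sz, 0, n))
# Step 2: beginning Hamiltonian H_B; its ground state is the uniform superposition |+++>.
H_B = -sum(op_on(sx, i, n) for i in range(n))
psi0 = np.ones(2**n, dtype=complex) / np.sqrt(2**n)

def anneal(T, steps=400):
    """Steps 3-4: evolve |+++> under H(t) = (1-s)H_B + sH_P with s = t/T."""
    state = psi0.copy()
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps                    # midpoint value of the schedule
        H = (1 - s) * H_B + s * H_P
        state = expm(-1j * H * dt) @ state       # piecewise-constant propagator
    return state

# Success probability = overlap with the true ground state of H_P.
ground = np.linalg.eigh(H_P)[1][:, 0]
for T in (1.0, 10.0, 100.0):
    p = abs(ground.conj() @ anneal(T)) ** 2
    print(f"T = {T:6.1f}   P(ground state of H_P) = {p:.3f}")
```

Running it, the success probability should approach 1 as $T$ grows, illustrating the adiabatic condition in miniature.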
The runtime $T$ required depends on the minimum spectral gap $g_{\min}$ between the ground state and first excited state of $H(t)$ over the evolution $t \in [0,T]$. If this gap is small (especially if it closes exponentially with problem size, as it can for some hard problems), then $T$ must be exponentially large to maintain adiabaticity, which negates the quantum advantage. Thus, the key challenge for AQC is to design Hamiltonian paths that avoid tiny gaps. Certain problems or algorithmic approaches (e.g. alternate paths or catalyst Hamiltonians) are studied to keep gaps reasonably large.
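For small instances, the gap can be examined directly. The sketch below (self-contained, with a toy two-qubit problem chosen purely for illustration) diagonalizes $H(s)$ on a grid of $s$ values and reports the minimum gap $g_{\min}$ and where along the schedule it occurs; the $1/g_{\min}^2$ heuristic then gives a rough sense of the required runtime:

```python
# Illustrative gap scan: diagonalize H(s) on a grid and locate the minimum spectral gap.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

# Toy instance: transverse-field start, 2-qubit Ising problem with a small field term.
H_B = -(np.kron(sx, I2) + np.kron(I2, sx))
H_P = np.kron(sz, sz) - 0.5 * np.kron(sz, I2)

def min_gap(H_B, H_P, num_points=201):
    """Return (s_star, g_min): where along the schedule the gap is smallest, and its size."""
    s_star, g_min = 0.0, np.inf
    for s in np.linspace(0.0, 1.0, num_points):
        evals = np.linalg.eigvalsh((1.0 - s) * H_B + s * H_P)
        if evals[1] - evals[0] < g_min:
            s_star, g_min = s, evals[1] - evals[0]
    return s_star, g_min

s_star, g_min = min_gap(H_B, H_P)
print(f"g_min ~ {g_min:.3f} at s ~ {s_star:.2f}; required T scales roughly like 1/g_min^2")
```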
Key Academic Papers
The field of AQC was kick-started by Edward Farhi and colleagues, who in 2000 proposed the first adiabatic quantum algorithm for the NP-complete 3-SAT problem. Their work showed how an adiabatic approach could find a satisfying assignment by evolving through the solution space, which generated significant interest.
Subsequently, Dorit Aharonov et al. (2004/2008) proved the equivalence of AQC to the circuit model, a seminal theoretical result. This proof essentially showed that any quantum circuit (and hence any quantum Turing machine computation) can be efficiently simulated by an adiabatic process, solidifying AQC as a true alternative paradigm.
Reviews like Albash & Lidar (2018) provide an in-depth account, covering theoretical developments (variants of the adiabatic theorem, complexity theory of Hamiltonians) and algorithmic accomplishments and limitations. They also discuss the special case of stoquastic AQC (where the Hamiltonian has only non-positive off-diagonal matrix elements in the computational basis), which is the regime that “most AQC work to date” has focused on, and the potential obstacles it faces (stoquastic Hamiltonians might be less powerful because they can be easier to simulate classically, posing an “obstruction” to quantum speedup).
On the practical side, while there is no full-scale AQC computer yet beyond the quantum annealers, there have been small experiments demonstrating adiabatic evolutions on few-qubit systems (e.g. superconducting qubits adiabatically transferring states).
Comparison to Other Paradigms
Adiabatic quantum computing versus gate-based computing is analogous to analog versus digital approaches. In gate-based QC, we apply a discrete sequence of unitary operations (quantum gates) to transform the state and implement algorithms, much like a digital computer. In AQC, we instead steer the Hamiltonian of the whole system continuously – computation is done by the system’s natural time evolution. Both models ultimately leverage the same resources (superposition, entanglement, interference) and, as noted, have been proven computationally equivalent in power (fab.cba.mit.edu). However, they differ in practicalities:
- AQC tends to use continuous control (timing and analog coupling strengths), whereas the gate model uses discrete gate operations implemented through calibrated pulses.
- Error correction and fault tolerance in AQC require different strategies; standard quantum error-correcting codes cannot be applied in the usual way because there are no discrete gates to interleave them with (though there are ideas like energy-gap protection and slowly correcting errors on the fly).
- Designing algorithms in AQC often means encoding the problem into a ground-state condition and hoping the adiabatic evolution finds it. This can be more intuitive for certain optimization or satisfiability problems, but less straightforward for, say, arithmetic or factoring (though it can be done by constructing a Hamiltonian whose ground state encodes the result of the arithmetic; a toy sketch of such an encoding follows this list).
- AQC can leverage some unique tricks, like quantum tunneling and interference in the Hamiltonian’s landscape, in ways that might be complicated to reproduce with discrete gates. Conversely, some algorithms discovered in the gate model (like the quantum Fourier transform) don’t have an obvious adiabatic counterpart except by indirect encoding.
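As a toy illustration of the arithmetic point above, the sketch below (illustrative assumptions throughout: the bit widths, the target number $N = 15$, and the squared-penalty form) builds a diagonal “factoring” Hamiltonian whose energy on a basis state $|p, q\rangle$ is $(N - pq)^2$, so its zero-energy ground states are exactly the factorizations of $N$:

```python
# Toy encoding of arithmetic into a ground state: H is diagonal, E(p, q) = (N - p*q)^2.
import numpy as np

N = 15           # number to factor (illustrative)
BITS = 3         # bits per register, so p, q range over 0..7

def factoring_hamiltonian(N, bits):
    """Diagonal of a problem Hamiltonian over two `bits`-wide registers p and q."""
    dim = 1 << (2 * bits)
    diag = np.empty(dim)
    for idx in range(dim):
        p = idx >> bits                  # high bits encode p
        q = idx & ((1 << bits) - 1)      # low bits encode q
        diag[idx] = (N - p * q) ** 2
    return diag

diag = factoring_hamiltonian(N, BITS)
for idx in np.flatnonzero(diag == diag.min()):
    print(f"p = {idx >> BITS}, q = {idx & ((1 << BITS) - 1)}, energy = {diag[idx]:.0f}")
# An adiabatic run would start from a transverse-field ground state over these 2*BITS qubits
# and, if slow enough, end in one of the zero-energy states, i.e. a factorization of N.
```

In a genuine adiabatic encoding this squared cost would be expanded into local Ising terms, typically with ancilla qubits; the diagonal form here is only meant to show the “answer as ground state” idea.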
Advantages
- Universality with Conceptual Simplicity: AQC is conceptually simple in that to “program” the computer, one mainly needs to design the right Hamiltonian $H_P$ for the problem. The physics does the processing by naturally evolving to the ground state. This can be more straightforward than figuring out a sequence of hundreds of logic gates for the same problem. In some sense, AQC lets you set up an analog computation and then just wait for the answer. The fact it’s a universal model means it can run any algorithm a circuit model can, given enough control over $H(t)$.
- Deep Connections to Complexity Theory: The adiabatic model has shed light on interesting complexity classes. Certain restricted versions of AQC relate to classical complexity (e.g. stoquastic Hamiltonians relate to the complexity class StoqMA). Moreover, thinking in terms of Hamiltonians has led to new algorithms and insights. For example, Farhi et al.’s original adiabatic algorithm for unstructured search is related to Grover’s algorithm, and their AQC approach to the NP-complete 3-SAT problem inspired quantum optimization techniques. AQC thus provides a different lens for designing algorithms, especially for optimization and constraint satisfaction.
- Robustness to Some Errors: Adiabatic processes have a form of built-in error suppression. If a disturbance is slow or gentle enough, the system can in principle stay in the ground state. Also, if the system starts to get excited, there’s a chance it can relax back to the ground state if the evolution is paused or slowed. In an ideal closed-system scenario, as long as the evolution remains adiabatic, small perturbations won’t cause logical errors (they might just slightly change the required schedule). This is sometimes described as resilience to certain control errors – e.g. smooth timing errors might not cause transitions out of the ground state.
- Relation to Classical Heuristics: Adiabatic algorithms have analogies to classical heuristic algorithms like simulated annealing. This means one can often leverage the huge literature on classical optimization to inform quantum adiabatic algorithms. For instance, one can schedule the anneal in different ways (slow at points of small gap, faster when the gap is large), akin to temperature schedules in simulated annealing; a small sketch of such a gap-adapted schedule follows this list. There are also quantum-classical hybrid algorithms like the quantum approximate optimization algorithm (QAOA), which can be seen as a digitized version of adiabatic evolution, blending the two paradigms.
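As a sketch of the scheduling idea mentioned above (a gap-adapted or “local adiabatic” schedule in the spirit of Roland and Cerf; the gap profile and constants below are made-up assumptions), the code turns a gap profile $g(s)$ into a schedule that crawls where the gap is small and speeds up where it is large by setting $ds/dt \propto g(s)^2$:

```python
# Gap-adapted ("local adiabatic") schedule sketch: ds/dt proportional to g(s)^2,
# so the runtime concentrates where the gap g(s) is smallest. The gap profile is made up.
import numpy as np

def gap_profile(s):
    """Illustrative gap profile with a soft minimum near s = 0.6."""
    return 0.1 + 2.0 * (s - 0.6) ** 2

def local_schedule(gap, num_points=1001):
    """Return arrays (t, s) with ds/dt ~ gap(s)^2 along the interpolation."""
    s = np.linspace(0.0, 1.0, num_points)
    dt_ds = 1.0 / gap(s) ** 2              # time spent per unit of s
    t = np.concatenate([[0.0],
                        np.cumsum(0.5 * (dt_ds[1:] + dt_ds[:-1]) * np.diff(s))])
    return t, s

t, s = local_schedule(gap_profile)
print(f"total (unnormalized) anneal time T = {t[-1]:.1f}")
# Fraction of the runtime spent in the narrow-gap region 0.5 < s < 0.7:
mask = (s > 0.5) & (s < 0.7)
print(f"time share in 0.5 < s < 0.7: {np.diff(t)[mask[1:]].sum() / t[-1]:.2%}")
```

With this profile, most of the runtime is spent near the gap minimum around $s \approx 0.6$, which is the qualitative behavior one wants from an annealing schedule.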
Disadvantages
- Gap Challenges and Slowdowns: The requirement to remain adiabatic (i.e. to evolve on a timescale longer than roughly the inverse square of the minimum gap) is a double-edged sword. For many interesting problems, especially worst-case instances, the minimum gap can shrink exponentially with problem size. This means the run time needed for success can blow up exponentially, wiping out any quantum speedup. For example, it is known that an adiabatic algorithm for unstructured search (Grover’s problem) takes $\mathcal{O}(\sqrt{N})$ time with an optimized schedule, matching Grover’s gate-based performance but not beating it. For more complex problems like NP-complete ones, we don’t know whether adiabatic paths can be found that avoid tiny gaps. It is possible that AQC ends up needing exponential time for those too (as classical algorithms do), in which case it wouldn’t solve NP-complete problems efficiently. In practice, finding and maintaining a sizable gap is extremely challenging as problems scale.
- Decoherence and Excitations: AQC typically requires the system to stay in the ground state of a time-dependent Hamiltonian. This assumes an isolated quantum system. In reality, any coupling to the environment (decoherence) can cause transitions or excitations. If a stray photon or thermal excitation kicks the system into an excited state at some point, the whole premise fails unless there is a way to recover. Because the computation might be long (requiring slow evolution), the system is exposed to noise for a longer duration than a fast gate-model algorithm might be. Error correction for AQC is an active research area – techniques like energy-gap protection, dynamical decoupling during the anneal, or encoding logical qubits into multiple physical qubits (at the cost of even smaller gaps) have been explored. But generally, managing decoherence in AQC is hard. In essence, AQC inherits the weaknesses of analog computation: just as analog classical computers are hard to make as precise as error-corrected digital ones, analog quantum computing has intrinsic stability issues.
- Control Requirements: To realize a universal AQC, one must be able to implement an arbitrary problem Hamiltonian over potentially many qubits with precision. This is experimentally daunting. QA devices implement a relatively restricted form (basically an Ising model with two-body interactions). A truly universal AQC would need higher-order interactions or many ancillary qubits to encode problems, or some way to effectively simulate them. Engineering tunable multi-qubit interactions or complex Hamiltonians is much more difficult than engineering a base set of logic gates. In a sense, the hardware for AQC needs to directly encode the problem structure, whereas gate-based hardware just needs a fixed set of gates (like 1- and 2-qubit gates) and flexibility comes from how you sequence them. Scalability of AQC hardware is thus challenging — adding more qubits and connections increases analog complexity and calibration difficulty significantly.
- Lack of Circuit Benchmarking Tools: The circuit model benefits from a rich toolbox of algorithms, error correction codes, compilation techniques, etc., developed over decades. AQC is a more “physics-first” model, and its benchmarking often requires simulating quantum systems or using approximation methods. It’s not as straightforward to verify small AQC devices against theory, because classical simulation of many-body Schrödinger evolution can be hard (whereas small circuits can be simulated easily to verify outputs). This has made it sometimes difficult to debug and tune AQC experiments or to apply them to tasks beyond optimization.
Cybersecurity Implications
A fully functional, large-scale adiabatic quantum computer (which is universal) would have the same cryptographic implications as gate-model quantum computers. It could run Shor’s algorithm, Grover’s algorithm, and any other quantum algorithm via appropriate Hamiltonian encodings. In fact, the proof of equivalence by Aharonov et al. explicitly shows how to encode a quantum circuit (which could be Shor’s algorithm) into an adiabatic process. Thus, if someone built a scalable AQC, all the standard public-key schemes (RSA, ECC, finite-field Diffie-Hellman) would be at risk just as with a circuit-based QC. However, building such an AQC is as hard as building a gate-based QC – perhaps harder, due to the analog control issues.
In the nearer term, one might worry less about AQC breaking cryptography and more about it solving certain hard optimization problems that could be security-relevant. For instance, some cryptographic schemes rely on the hardness of optimization problems (though usually those are NP-hard and believed hard even for quantum computers). If AQC found faster heuristics for things like the subset sum problem or certain lattice optimization problems, it could impact cryptanalysis. But at this point, there is no evidence AQC can solve NP-hard problems efficiently; it suffers from small gaps and likely needs exponential time on worst-case instances, similar to classical algorithms. Another consideration: adiabatic quantum optimizers could be aimed at optimization problems that arise in cryptanalysis (e.g. searching for differential trails or solving SAT formulations of key recovery). If moderately sized instances beyond classical SAT solvers’ reach could be solved faster by AQC, that might aid some attack strategies. So far, though, this remains theoretical.
From a defense standpoint, one could use AQC as a tool for cybersecurity as well – for example, optimizing network configurations, scheduling scans, or allocating limited security resources. The Department of Energy and others have looked at quantum optimization (QA/AQC) for grid security and logistics problems. These are indirect implications, but worth noting. In summary, AQC’s primary security implication is tied to its universality: if realized, it threatens current cryptography just like any full quantum computer. For now, its impact is more on optimization tasks than on breaking encryption.
Who’s Pursuing
Many of the efforts in AQC overlap with quantum annealing efforts. D-Wave’s machines can be (and have been) used as experimental testbeds for AQC algorithms by programming time-dependent Hamiltonians. Researchers like Daniel Lidar have even implemented error suppression and small codes on D-Wave to test AQC with correction. Outside of D-Wave, academic groups continue to study AQC algorithms and theory. For example, Andrew Lucas (University of Colorado) and others have written on mapping various problems to Ising Hamiltonians (useful for AQC). Matthias Troyer (Microsoft) has contributed to understanding the limitations of stoquastic annealing. On the corporate side, Google later hired Farhi and some colleagues, likely to work on quantum algorithms including AQC/QAOA; Farhi’s presence indicated interest in the adiabatic approach, though Google’s hardware is gate-based. IARPA (U.S. intelligence R&D) ran a program called Quantum Enhanced Optimization (QEO) a few years back, aimed at understanding whether quantum annealing/AQC could outperform classical methods for optimization; several national labs and universities participated. While that program concluded without demonstrating a clear quantum advantage, it spurred further work on quantum optimization approaches such as QAOA.
In short, AQC is pursued by a mix of theorists and those with access to QA hardware, even if no dedicated “AQC-only” hardware exists yet (besides QA devices). Some startups (e.g. Quantum Computing Inc., not to be confused with IonQ or similar) have also expressed interest in adiabatic algorithms on available hardware. The field remains an active theoretical area, closely tied to quantum optimization and quantum algorithm design, awaiting future hardware that can fully exploit it.