
The Many Faces of Decoherence

Quantum computers hold enormous promise, but they face a stubborn adversary: decoherence. This is the process by which a qubit’s fragile quantum state (its superposition or entanglement) leaks into the environment and effectively “forgets” the information it was carrying. For today’s leading quantum hardware modalities – superconducting circuits, trapped-ion qubits, neutral atoms in optical traps, photonic qubits, and semiconductor spin qubits in silicon – decoherence is the central obstacle to scalability and practical use. Understanding the sources of decoherence in each platform is crucial for scientists, engineers, and policy makers charting the future of quantum technology. Let’s see why “keeping qubits quiet” is such a complex challenge.

Material Noise: Flux and Charge Fluctuations in Solid-State Qubits

Solid-state quantum bits, such as superconducting loops and semiconductor quantum dots, are fabricated from materials – and imperfections in those materials are a major cause of decoherence. Two key culprits are flux noise (random magnetic flux changes through a circuit) and charge noise (random electric charge fluctuations). These noise sources have a 1/f frequency spectrum (stronger at low frequencies) and slowly wander in time, acting like an ever-shifting bias on the qubit’s energy levels. Think of it like a guitar string that won’t stay tuned because the instrument’s wood is subtly warping – the qubit’s resonance frequency keeps drifting due to microscopic changes in its environment.
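To make the picture concrete, here is a minimal NumPy sketch (all scales are toy values, not measurements from any device) that synthesizes noise with an approximate 1/f spectrum and shows how the resulting slow frequency wander kills Ramsey coherence:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_over_f_noise(n_samples, dt, rms):
    """Shape white Gaussian noise in the frequency domain so its power
    spectrum falls off approximately as 1/f, then scale to a target rms."""
    freqs = np.fft.rfftfreq(n_samples, dt)
    spec = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spec[1:] /= np.sqrt(freqs[1:])     # amplitude ~ f^(-1/2)  =>  power ~ 1/f
    spec[0] = 0.0                      # drop the DC component
    noise = np.fft.irfft(spec, n=n_samples)
    return noise * (rms / noise.std())

dt, n, n_traj = 1e-8, 4096, 400        # 10 ns steps, ~41 us window (toy scales)
phases = np.empty((n_traj, n))
for k in range(n_traj):
    df = one_over_f_noise(n, dt, rms=2e4)        # frequency jitter, rms 20 kHz
    phases[k] = 2 * np.pi * np.cumsum(df) * dt   # accumulated Ramsey phase

coherence = np.abs(np.exp(1j * phases).mean(axis=0))   # |<exp(i*phi)>| vs time
below = np.nonzero(coherence < 0.5)[0]
when = f"~{dt * below[0] * 1e6:.1f} us" if below.size else "beyond this window"
print(f"coherence drops below 0.5 after {when}")
```

Because 1/f noise is dominated by its slowest components, most of the damage comes from quasi-static frequency offsets, which is exactly why echo sequences (which refocus slow drifts) help so much against it.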

Flux Noise (Magnetic Fluctuations)

Flux noise primarily affects superconducting qubits – particularly those that rely on magnetic flux in loops (e.g. flux qubits and tunable transmons with SQUID loops). Physically, flux noise is thought to originate from tiny fluctuating magnetic moments on the surfaces of the superconducting circuit. These could be spins of electrons or defects randomly flipping orientation, introducing jitter in the magnetic field threading the qubit’s superconducting loop. In essence, the qubit feels a randomly changing magnetic flux, which changes the qubit’s transition frequency and causes dephasing (loss of a stable phase relation in superposition). Over the past decades, a consensus emerged that “unpaired surface spins” are responsible, though their exact identity is still being investigated. Recent experiments have probed this by applying small external magnetic fields to align or cluster these surface spins, confirming their role in 1/f flux noise.

Flux noise sets a hard limit on coherence times T₂ in many superconducting devices. For example, a 2025 study of advanced fluxonium qubits found that 1/f flux noise “dominates the qubit decoherence” when the qubit is biased at certain points (like the half-integer flux frustration point). More disorder in the superconductor correlated with worse flux noise, indicating that cleaner materials with fewer magnetic defects can improve coherence. In practice, engineers often operate superconducting qubits at a “sweet spot” – a bias point where the qubit frequency is first-order insensitive to flux changes – to mitigate the effect of flux noise. At these sweet spots, small fluctuations cause minimal frequency shift, much like parking a pendulum at the bottom of its swing where it’s most stable. If the qubit must be tuned away from the sweet spot (for example, to couple qubits or implement gates), its dephasing rate rapidly increases as flux noise kicks in. To combat this, researchers explore techniques like dynamical decoupling (echo sequences that refocus low-frequency noise) and new qubit designs that inherently have reduced sensitivity to flux. One cutting-edge approach created an “inductively shunted” transmon design that spreads out flux-induced energy level shifts, achieving a 5× reduction in flux dispersion (flattening the curve at the sweet spot).
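The sweet-spot logic can be checked with a toy calculation. The sketch below assumes the standard square-root-of-cosine flux dependence of a symmetric tunable transmon and an illustrative flux-noise amplitude of about 2 µΦ₀/√Hz; it compares the flux sensitivity, and the resulting order-of-magnitude dephasing rate, at and away from the sweet spot:

```python
import numpy as np

PHI0 = 1.0                  # flux quantum, normalized units
f_max = 6.0e9               # maximum qubit frequency in Hz (illustrative)

def f_q(phi):
    """Symmetric-SQUID transmon: f(Phi) = f_max * sqrt(|cos(pi*Phi/Phi0)|)."""
    return f_max * np.sqrt(np.abs(np.cos(np.pi * phi / PHI0)))

def dfdphi(phi, d=1e-6):
    return (f_q(phi + d) - f_q(phi - d)) / (2 * d)   # numerical derivative

sqrt_A = 2e-6 * PHI0        # assumed 1/f flux-noise amplitude, ~2 uPhi0/sqrt(Hz)
for phi in (0.0, 0.1, 0.25):
    sens = abs(dfdphi(phi))                  # flux sensitivity, Hz per Phi0
    gamma = 2 * np.pi * sens * sqrt_A        # order-of-magnitude dephasing rate
    print(f"Phi = {phi:4.2f} Phi0: |df/dPhi| = {sens:10.3e} Hz/Phi0, "
          f"Gamma_phi ~ {gamma:8.2e} rad/s")
```

At Φ = 0 the first derivative vanishes and the estimated first-order dephasing rate drops to zero; only a small bias away, the same noise amplitude already implies dephasing on the tens-of-microseconds scale.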

It’s worth noting that flux noise isn’t exclusive to superconductors. Any qubit with a magnetic degree of freedom can, in principle, be disturbed by fluctuating magnetic fields. For instance, spin qubits (electrons or nuclei in semiconductors) can suffer dephasing from environmental magnetic noise. In practice, spin qubits in isotopically enriched silicon have extremely stable magnetic environments (silicon can be purified of nuclear spins, eliminating a major magnetic noise source). Trapped-ion and neutral atom qubits often use pairs of internal states that are insensitive to external magnetic fields to first order (so-called “clock states”), specifically to avoid magnetic flux noise. In those systems, flux noise is usually negligible compared to other error sources. Thus, flux noise is a dominant decoherence mechanism for superconducting circuits, and engineers there are essentially doing materials science – trying to eliminate or tame those pesky surface spins causing the magnetic hiccups.

Charge Noise (Electric Fluctuations)

Another pervasive noise source is charge noise, which manifests as random jumps or drift in electric charges in and around a device. This could be due to electrons hopping in defects, stray ions on surfaces, or cosmic-ray-induced charge bursts. Charge noise was notorious in the earliest superconducting qubits (like the Cooper-pair box), which were so sensitive to single-electron charges that their coherence times were only nanoseconds. The now-standard transmon qubit was invented to solve this: by shunting the Josephson junction with a large capacitor, the transmon made its quantum levels largely insensitive to extra electrons, effectively “immunizing” it against charge noise at the cost of making the levels more closely spaced. This extended coherence from ~1 ns to many microseconds. However, even transmons aren’t completely off the hook – they still experience slower drift from fluctuating charged two-level defects (discussed below) and occasional abrupt jumps if a “charged trap” in the substrate captures or releases an electron.
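The transmon's charge-noise protection can be reproduced numerically. This sketch (in units where E_C = 1) diagonalizes the standard Cooper-pair-box Hamiltonian in the charge basis and compares the qubit frequency at the two extremes of gate charge, n_g = 0 and n_g = 0.5; their difference, the charge dispersion, shrinks roughly exponentially as E_J/E_C grows:

```python
import numpy as np

def cpb_f01(ej_over_ec, ng, n_cut=15):
    """Lowest transition of the Cooper-pair box, in units of Ec:
    H = 4*(n - ng)^2 - (Ej/2Ec)*(|n><n+1| + h.c.) in the charge basis."""
    n = np.arange(-n_cut, n_cut + 1)
    H = (np.diag(4.0 * (n - ng) ** 2)
         - 0.5 * ej_over_ec * (np.eye(n.size, k=1) + np.eye(n.size, k=-1)))
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]

for r in (1, 10, 50):        # Ej/Ec: charge-qubit regime -> transmon regime
    disp = abs(cpb_f01(r, ng=0.0) - cpb_f01(r, ng=0.5))
    print(f"Ej/Ec = {r:3d}: charge dispersion of f01 = {disp:.2e} (units of Ec)")
```

The exponential flattening of the levels with respect to gate charge is exactly the "immunization" described above; the price, also visible in this model, is reduced anharmonicity as the level spacings bunch together.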

In semiconductor spin qubits (quantum dots in silicon or III-V materials), charge noise is a leading decoherence mechanism. These qubits are essentially single electrons confined in electrostatic traps, so if the trap shape or electric field shifts by even a tiny amount, the electron’s energy levels shift. Imagine a single marble in a bowl – if the bowl’s shape quivers, the marble’s vibration frequency changes. Charge noise in semiconductors often comes from defects at the oxide interface or impurities that randomly trap charge, creating a fluctuating electric field. This leads to qubit dephasing and also to two-qubit gate errors (since exchange interactions between electrons depend sensitively on the electrostatic configuration). Recent research confirms that even in isotopically purified silicon (where magnetic noise from nuclear spins is minimized), “omnipresent” charge noise remains a dominant limiter. In silicon spin qubits, the effect is exacerbated when a strong magnetic field gradient is applied – a common technique to enable individual qubit control via magnetic resonance. The field gradient means any motion of the electron (caused by charge noise) converts into magnetic noise on the spin, an effect known as transduced noise. This transduced charge noise can cause rapid dephasing, limiting coherent operation to perhaps milliseconds or less if unmitigated.
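A back-of-the-envelope sketch shows how the transduction works; the gradient and displacement below are assumptions chosen purely to illustrate the scale:

```python
import numpy as np

GAMMA_E = 28.0e9            # electron gyromagnetic ratio, Hz per tesla

# Illustrative numbers (assumptions, not from a specific device):
dB_dx   = 0.3e-3 / 1e-9     # micromagnet gradient: 0.3 mT per nm, in T/m
sigma_x = 1e-12             # rms electron displacement from charge noise, 1 pm

sigma_B = dB_dx * sigma_x                        # magnetic noise seen by spin
sigma_f = GAMMA_E * sigma_B                      # rms frequency jitter, Hz
t2_star = 1.0 / (np.sqrt(2) * np.pi * sigma_f)   # quasistatic Gaussian dephasing

print(f"rms frequency jitter: {sigma_f/1e3:.1f} kHz")
print(f"quasistatic T2* estimate: {t2_star*1e6:.1f} us")
```

Even a picometer of charge-noise-driven motion, multiplied by a strong gradient, produces kilohertz-scale frequency jitter and a T₂* of only tens of microseconds in this toy estimate.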

To fight charge noise, spin qubit researchers employ a variety of strategies. One is materials optimization – for instance, using very pure silicon and refined fabrication to reduce the density of charge traps. Another is operating in sweet spots: some spin qubit designs have bias points where the qubit frequency doesn’t change to first order with electric field (similar in spirit to superconducting sweet spots). Additionally, advanced error mitigation techniques have been demonstrated. A 2025 study showed that by using real-time feedback and adaptive control, one can stabilize a silicon qubit against slow charge fluctuations, doubling the coherence time and achieving single-qubit gate fidelities above 99.6% despite strong field gradients. In effect, they actively measure and cancel out the charge-induced frequency drift, proving high fidelity even in the presence of this noise. Such techniques, along with encoding schemes that make qubits inherently insensitive to uniform charge shifts, will be crucial for scaling solid-state qubit platforms.
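The flavor of such feedback can be captured in a toy simulation (a minimal sketch, not the published protocol): the qubit frequency performs a slow random walk, and periodic noisy Ramsey-style probes update the estimate used to retune the drive:

```python
import numpy as np

rng = np.random.default_rng(1)

n_steps, dt = 20000, 1e-6                        # 20 ms of evolution, 1 us steps
drift = np.cumsum(rng.normal(0, 50.0, n_steps))  # slow frequency drift in Hz

def rms_phase(feedback):
    est, acc = 0.0, 0.0
    phases = np.empty(n_steps)
    for i in range(n_steps):
        if feedback and i % 500 == 0:              # probe every 0.5 ms
            est = drift[i] + rng.normal(0, 20.0)   # noisy detuning estimate
        acc += 2 * np.pi * (drift[i] - est) * dt   # phase error vs the drive
        phases[i] = acc
    return np.std(phases)

# with feedback=False, est stays 0.0 and the drive is never retuned
for label, flag in (("free-running", False), ("with feedback", True)):
    print(f"{label:13s}: rms accumulated phase error = {rms_phase(flag):8.2f} rad")
```

The feedback loop cannot remove the fast noise between probes, but it keeps the slow drift from accumulating, which is the regime where charge noise does most of its damage.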

It’s notable that trapped ions, neutral atoms, and photonic qubits are basically immune to charge noise – ions and atoms are suspended in vacuum (no solid-state defects around them), and photons are uncharged. This highlights a key trade-off in quantum architectures: solid-state qubits (superconductors, spins, etc.) gain scalability and integration by piggybacking on microfabrication technology, but they inherit the defects and disorder of solid materials. Atomic and photonic qubits exist in pristine electromagnetic form, largely avoiding material noise; however, they face other challenges (like laser stability and vacuum requirements, discussed later). Each approach must contend with decoherence, but the “favorite” noise sources differ.

Two-Level Defects in Materials (TLS): Tiny Flaws, Big Problem

Many solid-state qubits suffer energy loss and dephasing due to microscopic two-level system (TLS) defects in the materials. These TLS are essentially little quantum systems of their own – for example, a dangling bond that can flip orientation, or a tunneling atom in an amorphous solid. They can absorb energy from the qubit or cause fluctuating fields. If flux and charge noise (discussed above) are like a drifting background hum, TLS defects are more like discrete resonant “bird chirps” that can directly interfere with the qubit.

In superconducting qubits, TLS defects often reside in the amorphous oxide of the Josephson junction or in dielectric interfaces of the circuit. When a qubit’s frequency happens to align with a TLS’s energy splitting, the two can hybridize – the qubit can transfer its excitation to the TLS, leading to sudden energy loss (a T₁ relaxation event) or simply an avoided crossing in its spectrum. TLS defects have been called a “dominant source” of noise causing both energy relaxation and decoherence in superconductors. Crucially, TLS are not stable – their frequencies can drift over hours or days (for instance, due to mechanical stress or temperature changes). This causes “spectral diffusion,” where a qubit’s sweet spot might unexpectedly move because the ensemble of TLS in the device changed. One striking implication: the optimal bias or frequency for a qubit can vary in time and between qubits because of TLS, posing a challenge for multi-qubit calibration. In other words, those tiny defects make the qubit environment nonstationary – an obstacle for long-term stability and scalability.
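The qubit-TLS hybridization is easy to visualize with a two-level toy model. Diagonalizing a 2×2 Hamiltonian for the single-excitation manifold (with an assumed 0.5 MHz coupling) shows the levels repelling as the qubit is tuned through the TLS; on resonance the splitting bottoms out at 2g, where the qubit and TLS freely swap the excitation:

```python
import numpy as np

g = 0.5e6          # assumed qubit-TLS coupling, 0.5 MHz
f_tls = 5.0e9      # TLS splitting, 5 GHz (illustrative)

for d in np.linspace(-10e6, 10e6, 5):       # sweep the qubit across the TLS
    # Two coupled states (qubit excited / TLS excited), single excitation:
    H = np.array([[f_tls + d, g],
                  [g, f_tls]])
    e = np.linalg.eigvalsh(H)
    print(f"detuning {d/1e6:+6.1f} MHz -> splitting {(e[1]-e[0])/1e6:6.2f} MHz")
```

Far from resonance the splitting is just the detuning, but within a few coupling strengths of the TLS the qubit hybridizes with it, which is why a TLS drifting onto the qubit frequency can abruptly degrade T₁.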

Recent experiments are revealing more about TLS behavior. For example, researchers find evidence for “strong” and “weak” TLS, distinguished by how they couple to strain in the device. Strongly strain-coupled TLS appear to produce dynamic disorder: their collective effect on a qubit’s noise changes after thermal cycling or even over time at low temperature. This dynamic nature is bad news for qubit stability, as noted above. Another 2024 study linked mechanical vibrations from the cryogenic cooling system to TLS-induced error bursts: vibrations from a pulse-tube cooler were found to shake the chip just enough to perturb TLS ensembles, causing correlated qubit errors in a system with very long T₁. Essentially, sound waves in the chip excited certain TLS or drove nonequilibrium effects, cutting the qubit lifetime in half intermittently. This surprising pathway – from a refrigerator’s vibration to qubit decoherence – underscores how sensitive superconducting devices have become as coherence times have improved into the millisecond range.

Mitigating TLS defects is challenging because they are intrinsic to the materials. However, progress is being made. Fabrication processes can be tuned to reduce TLS density (for instance, using crystalline AlOₓ barriers or annealing surfaces to eliminate dangling bonds). Some groups incorporate geometries that minimize participation of amorphous dielectrics (e.g. making junction areas smaller or using trenching so the electric field avoids surface oxides). There’s also a strategy of “TLS soaking”: saturating TLS with a strong microwave tone so they stay in an excited state and don’t interact with the qubit. This can sometimes improve T₁ if a particular TLS was limiting it. Another approach is materials engineering – a 2025 study looked at disordered superconducting films (for superinductors) and found that while disorder can change dielectric loss modestly, the flux noise from surface spins was more strongly correlated with material disorder. They hint that not all decoherence comes from surface TLS; some is from within the superconductor (perhaps quasiparticle-related, see next section). The takeaway is that TLS remain a critical topic: these atomic-scale imperfections can “spoil the performance of an entire circuit” if one sits on a qubit’s frequency. Devising qubits that are less affected by individual defect resonances – or finding ways to eliminate TLS – is an active area of research needed to push coherence times further.

Interestingly, other qubit modalities have analogs of TLS defects. In silicon spin qubits, one could consider a rogue charge trap that switches state as a kind of two-level defect causing telegraph noise. In NV-center qubits (a spin in diamond, which is a solid-state defect by design), nearby paramagnetic impurities can act like TLS that flip and cause magnetic noise. The density of such defects and their activity similarly limit coherence. However, compared to superconductors, atomic qubits (ions/neutral atoms) and photonic qubits don’t really have TLS in the same sense – since there are no solid interfaces, there’s no place for such two-level defects to live. This again highlights why atomic qubits can have extraordinarily long intrinsic coherence (minutes or hours in trapped ions for certain states): the atoms are floating in vacuum, largely untouched by defects. Their decoherence will come from other causes, which we’ll now discuss.

High-Energy Hits: Quasiparticles and Cosmic Rays

Beyond the subtle hiss of 1/f noise and TLS, there are more dramatic decoherence sources – the quantum equivalent of a lightning strike. Cosmic rays and background radiation can deposit bursts of energy into qubit systems, creating quasiparticles and correlated errors that momentarily wreak havoc. It’s a reminder that quantum computers, like classical electronics, must contend with the wider environment, including natural radiation (albeit with different failure modes).

In superconducting devices, a major concern is quasiparticle poisoning. A quasiparticle in a superconductor is essentially a broken Cooper pair – a lone electron that can move around and dissipate energy. In a perfect superconducting circuit at 0 K, there would be no unpaired electrons, but real devices at ~10 mK do have a small steady population of quasiparticles from residual heating or radiation. More worrisome, a high-energy event (like a cosmic ray muon or an environmental gamma ray) can generate a shower of phonons in the chip that breaks many Cooper pairs at once, flooding the device with quasiparticles. When this happens, qubits may suddenly lose energy or change frequency until the quasiparticles recombine or diffuse away (which can take milliseconds). It’s like an impromptu “snowstorm” of electrons coursing through what should be a calm superconducting sea.

Experiments have shown that cosmic rays and background gammas induce correlated errors in superconducting qubits. In one study, researchers observed multiple qubits flipping simultaneously and identified the culprit as cosmic ray impacts causing bursts of charge and phonons. The energy from a single particle can spread across a chip: first it causes a localized spike (a couple of nearby qubits see charges jump), then phonons propagate through the substrate, spawning quasiparticles over hundreds of microns, which can flip other qubits or shorten their T₁. These are precisely the correlated errors that quantum error correction schemes fear, because they can overwhelm certain codes that assume errors are independent. A cosmic ray doesn’t single out just one qubit; it’s a splash that can hit many.
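A crude Monte Carlo conveys why such events frighten error-correction designers. Assuming (purely for illustration) an 8×8 qubit grid at 300 µm pitch and a 600 µm phonon-spread radius around each impact, a single hit corrupts many qubits simultaneously:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: phonons from an impact spread out to an assumed radius, and
# every qubit inside that radius suffers an error together.
pitch, radius = 300.0, 600.0                       # micrometers (assumptions)
xy = pitch * np.array([(x, y) for x in range(8) for y in range(8)], float)

multiplicities = []
for _ in range(1000):                              # 1000 simulated impacts
    hit = rng.uniform(0.0, 7 * pitch, size=2)      # random impact location
    d = np.linalg.norm(xy - hit, axis=1)
    multiplicities.append(int((d < radius).sum()))

m = np.array(multiplicities)
print(f"mean qubits errored per impact: {m.mean():.1f} (max {m.max()})")
print(f"impacts corrupting >= 4 qubits at once: {(m >= 4).mean():.0%}")
```

In this toy geometry essentially every impact corrupts a cluster of qubits, violating the independent-error assumption that most surface-code analyses start from.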

Mitigating these high-energy events is tough – we can’t stop cosmic rays, but we can shield and design differently. Researchers have suggested adding heavy shielding (lead, concrete) around cryostats to absorb some of the background radiation. Another idea is to use error correction codes that can handle burst errors or erasures. In fact, a 2024 physics result with Majorana qubits (topological qubits) is relevant here: it was shown that quasiparticle poisoning errors are not automatically suppressed by topology and still need to be dealt with, possibly by converting them into erasure errors that are easier for codes to correct. In other words, even in exotic qubit schemes like Majorana zero modes, a stray quasiparticle (which could be generated by radiation) will break the encoded quantum information unless actively managed.

Today, some superconducting quantum computer designs include normal-metal quasiparticle traps – segments of metal that intentionally invite quasiparticles to enter and get stuck (and recombine), siphoning them away from the sensitive Josephson junctions. These can reduce steady-state quasiparticle density. For the big cosmic ray events, research groups including IBM and others have contemplated building quantum computers underground or in old mines to reduce cosmic flux. It sounds extreme, but the “cosmic-ray threat to quantum computing” has been compared to the challenges faced in classical computing soft errors – except potentially worse, since quantum bits are more delicate. One can envision a future where data centers for quantum hardware include heavy radiation shielding as a standard practice.

Not all platforms see cosmic rays the same way. Trapped ions and neutral atoms, being individual particles in a vacuum, won’t experience a phonon shower, but a direct hit from a high-energy particle could still knock an ion out of its trap or cause a spontaneous emission. Fortunately, such events are extremely rare on the timescales of most experiments (but over years of operation, they’re a consideration for error rates). Semiconductor spin qubits could suffer if an ionizing particle creates charge traps in the material – similar to how radiation causes soft errors in CMOS. In fact, the space industry’s experience radiation-hardening classical chips might come full circle to aid quantum chip designers. Meanwhile, photonic qubits, especially those traveling through fiber or bulk optics, can be affected by cosmic rays in detectors (causing false clicks) more so than in the flying photons themselves. Superconducting tech, with its large-area chips and sensitive superconductors, currently stands out as most vulnerable to these high-energy disturbances, which is why it’s the focus of several studies on error rates beyond just the 1-qubit level.

Environmental Disturbances: Vacuum Collisions and Vibrations

So far we’ve focused on solid-state qubit noise and radiation, but in systems using trapped atoms and ions, environmental disturbances like background gas collisions and vibrations are the primary concern (in addition to control noise, which we address next). These qubits are well isolated from solid materials, but that means they rely on ultra-high vacuum and stable trapping fields. Any disturbance to the trapping potential or the atoms’ isolation can lead to decoherence or loss.

Background Gas Collisions (Vacuum Quality)

Trapped-ion and neutral-atom quantum computers operate in vacuum chambers at pressures around $$10^{-11}$$–$$10^{-10}$$ torr or better. This is to ensure that the qubits (ions or atoms) rarely, if ever, encounter a stray gas molecule from the air. If an atom/ion is hit by a random molecule (an event analogous to a billiard ball collision on the atomic scale), several bad things can happen: the qubit’s momentum gets a kick, spoiling its motional state; its internal state can be perturbed; or the atom can be knocked out of the trap entirely (leading to a loss error). Even a single collision can decohere an ion qubit by heating its motion or causing a state change. In a trapped-ion quantum computer, one collision can sometimes force a complete re-calibration or re-loading of the ions, causing minutes of downtime in an experiment – clearly unacceptable if it happens frequently.

Fortunately, at ultra-high vacuum, collisions are infrequent. In a well-maintained ion trap at $$\sim 10^{-11}$$ torr, an ion might go many hours between collisions. Optical atomic clocks (which are basically single trapped ions or atoms) have measured collision rates to contribute negligible frequency shifts over days. However, as systems scale up with more ions and larger vacuum volumes, outgassing and vacuum maintenance become nontrivial. There is also a trade-off: ion traps with hundreds of ions or transport mechanisms might require more apparatus inside the chamber (micromachined traps, surfaces, etc.) that can outgas or increase collision probability. Recent trapped-ion hardware demonstrations emphasize that vacuum-related ion loss is a key limiting factor for continuous operation – as more ions are added or as they are shuttled around, the chance one gets hit eventually approaches certainty over long runs. Mitigation involves using better vacuum pumps, cryogenic trapping (cryopumps can effectively freeze out residual gas), and quickly replacing lost ions by having “reservoir” atoms available. Neutral atom arrays have similar issues: the atoms can be knocked out of their optical tweezers by collisions, limiting the retention time of a large array. Even if the qubit’s internal state is untouched, losing the atom is a loss of information.
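The kinetic-theory estimate behind these numbers is straightforward. The sketch below assumes room-temperature H₂ as the dominant residual gas and an effective collision cross section of 10⁻¹⁹ m² (a rough, species-dependent figure); with those assumptions, a collision every few hours at 10⁻¹¹ torr falls out directly:

```python
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K

def collision_rate(p_torr, T=300.0, m_gas=2 * 1.66e-27, sigma=1e-19):
    """Mean background-gas collision rate for a trapped particle.
    sigma (m^2) is an assumed effective cross section; the real value
    depends on species and collision physics (e.g. Langevin capture)."""
    p_pa = p_torr * 133.322
    n = p_pa / (K_B * T)                              # gas density, m^-3
    v_mean = np.sqrt(8 * K_B * T / (np.pi * m_gas))   # mean H2 speed, m/s
    return n * sigma * v_mean                         # collisions per second

for p in (1e-10, 1e-11):
    r = collision_rate(p)
    print(f"P = {p:.0e} torr -> {r:.2e} collisions/s "
          f"(one every {1/r/3600:.1f} hours)")
```

The rate scales linearly with pressure, which is why every order of magnitude of vacuum improvement buys a proportional increase in uninterrupted operating time.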

Interestingly, background gas collisions can also cause subtle dephasing even without loss. In trapped-ion optical clocks, analyses have been done on how occasional elastic collisions cause tiny phase shifts in the clock transition – a small systematic decoherence effect that must be accounted for at the highest precision. This is more of a metrology concern than a quantum computing one, but it underscores that any interaction with the environment, even rare, can introduce phase noise.

The solution is straightforward in principle: push vacuum technology to the limits and possibly incorporate active vacuum sensing with cold atoms themselves (using the trapped atoms as in-situ vacuum gauges so you know if pressure rises). In practice, the vacuum requirement is an engineering overhead for atomic qubits that solid-state approaches don’t have. Policy makers evaluating quantum tech for deployability will note that ion/atom systems need complex vacuum chambers and pumps, which is one reason they currently fill rooms, whereas superconducting and silicon chips, in theory, could be more easily packaged (albeit in a dilution refrigerator). There’s ongoing research into miniaturizing vacuum setups – for example, using microscale ion pumps or developing ultra-high vacuum wafer bonding to seal neutral atom cells – to make this more tractable.

Vibrations and Acoustic Noise

Another environmental issue is mechanical vibrations. We saw in the previous section how vibrations can even affect superconducting qubits via microphonics (shaking TLS or qubit chips). In trapped-ion setups, vibrations of the optical tables or trap electrodes can directly modulate the trapping fields. If the trap position jitters by even nanometers, the ions feel a fluctuating electric field or a varying laser phase (if the laser beam is fixed in space). Laboratories take great care with optical table damping, acoustic shielding, and isolation to ensure that building vibrations or noises (even loud sounds) don’t couple into the experiment. A mechanical disturbance can map to qubit phase error since the reference frame of the qubit moves. For neutral atoms in optical lattices or tweezer arrays, if a mirror delivering a trapping beam vibrates, the whole lattice might shake, causing dephasing between atoms (imagine the difference if one atom’s trap moves closer versus farther relative to a laser phase node – that’s a phase error). Thus, vibrational stability is an often unsung requirement for atomic quantum computers.

Mitigation is largely through engineering: passive and active vibration isolation, as well as designing the experiments to be somewhat robust (e.g. using retro-reflection in optical setups so that common-mode vibrations cancel out). Another interesting approach is using feed-forward: some ion trap setups have accelerometers on the system and actively cancel detected vibrations from the control signals in real time. This is analogous to noise-cancelling headphones, but for mechanical noise affecting qubits.

Overall, trapped-ion and neutral atom modalities are blessed with extraordinarily low intrinsic decoherence (an ion’s quantum superposition of internal states can last minutes to hours in isolation), but extrinsic environmental factors like vacuum and vibrations tend to limit the effective coherence in practice. In comparison, a solid-state qubit in a chip doesn’t worry about air molecules (it’s in solid matter and a sealed package), and vibrations usually have negligible effect on an electron’s spin or a superconducting loop (except via mechanisms like TLS). So in solid-state systems, it’s the materials that are noisy; in atomic systems, the surrounding apparatus (vacuum system, lasers, traps) introduces the noise. Each platform transfers complexity to a different part of the setup.

Laser and Control Noise: The Noisy Conductor Problem

Quantum hardware doesn’t operate in isolation – we have to control qubits with electromagnetic pulses (microwave currents, laser beams, etc.) and measure them. The devices we use to do that – lasers, microwave generators, and control electronics – bring their own imperfections. If flux noise and cosmic rays are like “nature” throwing curveballs, control noise is a man-made challenge: keeping our control signals exquisitely stable and free of unwanted cross-talk. Here we highlight two major aspects: phase noise in lasers/microwaves and cross-talk between control channels.

Laser Phase Noise and Frequency Instability

High-fidelity quantum gates require extremely stable and coherent control fields. In trapped-ion and neutral-atom quantum computers, lasers are used to drive qubit transitions and entangling gates. Ideally, a laser is like a perfectly stable metronome ticking away with a fixed frequency and phase. In reality, lasers have finite linewidths (their frequency jitters slightly) and technical phase noise (due to vibrations of mirrors, electronic noise in locking circuits, etc.). If the laser phase drifts during a gate, it’s equivalent to an error in the qubit’s rotation angle or a phase slip in a Ramsey experiment.

Imagine trying to choreograph a synchronized dance (the qubits) to music, but the record player occasionally warps the music’s tempo – the dancers will fall out of step. Similarly, qubits lose synchronization with each other or with the intended control if the laser’s phase wanders. This is especially critical for multi-qubit gates like the Mølmer–Sørensen (MS) gate in trapped ions, which use the interference of two laser beams to entangle ions via a shared motional mode. Any fast phase noise on those beams directly translates to infidelity in the entangling operation.

Research in 2023 analyzed this effect in detail. It was found that high-frequency laser phase noise (noise faster than the laser’s intrinsic linewidth, e.g. caused by servo bumps or a noisy oscillator driving an electro-optic modulator) can noticeably reduce gate fidelities. The impact can be quantified by the noise power spectral density at frequencies to which the qubit is sensitive during the gate. In trapped ions, this typically means noise around the qubit transition frequency or the motional sideband frequency. The study provided a simple metric: essentially, the integrated noise in a certain band limits the fidelity of all operations driven by that oscillator. This analysis doesn’t just apply to ions – any qubit manipulated by an external field (superconducting qubits driven by microwave pulses, spin qubits driven by ESR pulses) could be affected by phase noise in the source. Superconducting systems generally use microwave sources with extremely low phase noise (often derived from crystal oscillators and multiplied up) and short gate times (~tens of nanoseconds), so the fractional phase drift during a gate is small. By contrast, trapped-ion gates last longer (tens of microseconds) and often rely on optical phase stability, which is harder to maintain – so laser noise has been a more pressing issue there.
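The band-limited nature of this metric can be illustrated with a toy phase-noise spectrum, a white floor plus a Gaussian servo bump (shapes and levels are assumed, not measured); integrating the PSD over the band a gate is sensitive to gives a small-error infidelity estimate of roughly half the accumulated phase variance:

```python
import numpy as np

def phase_psd(f, floor=1e-12, bump=1e-9, f0=300e3, w=50e3):
    """One-sided laser phase-noise PSD in rad^2/Hz: a white floor plus a
    Gaussian 'servo bump' near the lock bandwidth (assumed shape/levels)."""
    return floor + bump * np.exp(-0.5 * ((f - f0) / w) ** 2)

df = 5.0                                   # integration resolution, Hz
f = np.arange(1e3, 1e6, df)
bands = {"band far from the bump (1-10 kHz)": (1e3, 10e3),
         "band straddling the bump (250-350 kHz)": (250e3, 350e3)}
for label, (lo, hi) in bands.items():
    sel = (f >= lo) & (f <= hi)
    var = phase_psd(f[sel]).sum() * df     # integrated phase variance, rad^2
    print(f"{label}: <dphi^2> = {var:.2e} rad^2 -> ~infidelity {var/2:.2e}")
```

The same laser can thus be excellent for one gate and marginal for another, depending on whether the gate's sensitive band overlaps the servo bump; this is why characterizing the full noise spectrum, not just the linewidth, matters.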

Mitigations for laser phase noise include using ultra-narrow linewidth lasers (e.g. stabilized to high-finesse reference cavities, with Hz-level linewidths) for optical qubits. For Raman gates (which use two laser frequencies whose difference is the qubit frequency), any noise common to both beams cancels, so one uses noise-canceling techniques like deriving both frequencies from a single source or actively stabilizing the relative phase. Phase-locking lasers together and to microwave references is standard. Even so, technical noise can creep in from things like fiber vibrations (optical fibers carrying light to the experiment can stretch and shrink, adding phase noise – often solved by active fiber noise cancellation). There’s also a push toward microwave-driven gates for trapped ions in microfabricated traps, which use microwave circuits (more stable phase) plus magnetic field gradients to entangle ions, avoiding lasers altogether for certain operations.

An example of progress: a 2022 demonstration achieved two-qubit gate fidelities >99.9% in a two-ion system using carefully optimized pulses and, presumably, very low-noise lasers. This suggests that with enough care, laser noise can be tamed to be below other error sources. Nonetheless, as systems scale, ensuring all lasers in a multi-qubit setup remain phase-aligned and low-noise simultaneously is a significant engineering challenge.

For semiconductor spin qubits, the analogue is microwave control signal noise. These qubits are driven by on-chip transmission lines delivering AC magnetic fields. Noise in the AWG (arbitrary waveform generator) or local oscillator could cause over-/under-rotation or phase slips in spin rotations. Typically, these are less of a limiting factor than charge noise or dephasing, but in the quest for >99.9% fidelities, even control electronics noise must be scrutinized. Some groups use multi-tone drives and advanced calibration (e.g. DRAG pulses, which shape pulses to cancel leakage and phase errors) to make the operations less sensitive to certain noise. Also, closed-loop feedback can adjust for slow drifts in control amplitude or phase, keeping the calibrations tight.

In summary, phase noise in control fields is like the background static in the communication with qubits. Atomic qubits demand almost metrological-grade lasers (borrowing techniques from atomic clock research) to avoid this static. Superconducting and spin qubits demand extremely clean microwave sources (borrowing techniques from RF engineering and radar). The common theme is using precision instrumentation to keep the quantum “song” in tune.

Crosstalk and Unintended Couplings

In multi-qubit processors, one qubit’s controls can inadvertently affect another qubit – this is crosstalk. It’s akin to hearing a neighbor’s radio through the wall: in a quantum chip, a pulse intended for qubit A might cause a slight rotation or frequency shift on qubit B if not properly isolated. Crosstalk can stem from control fields spilling over (e.g. a laser beam hitting an adjacent ion, or a microwave control line inducing currents in a neighboring qubit’s circuit), or from intrinsic coupling between qubits that isn’t fully turned off.

Crosstalk doesn’t always manifest as “decoherence” in the sense of randomness from the environment – it can be a consistent control error (systematic). But from the viewpoint of any single qubit, the unwanted influence of others acts like an unpredictable perturbation (especially when many operations happen in parallel), which can be treated as an effective noise source if not calibrated. Moreover, strong crosstalk can lead to correlated errors (if one pulse affects two qubits at once in an uncontrolled way).

In superconducting qubit arrays, a common crosstalk issue is unwanted ZZ coupling – even when two qubits are nominally idle, there can be a residual interaction causing their phases to entangle slightly. This means the frequency of one qubit depends on the state of its neighbor. If not corrected, this can induce phase errors, especially during parallel operations. Manufacturers like IBM address this by specific calibration to cancel static ZZ coupling (using additional detuning or echo pulses). Another form is microwave crosstalk: the control lines for qubits can capacitively or inductively couple to other qubits. It’s been noted that “microwave crosstalk in superconducting circuits significantly impacts system performance by inducing unwanted quantum state transitions and gate errors”. For instance, a fast control pulse on qubit 1 might have a spectral bleed that partly drives qubit 2, causing a small rotation on qubit 2 that wasn’t intended. As the number of qubits grows, managing such interference becomes like playing 3D chess with microwave signals.
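The scale of the problem is easy to work out. With an assumed residual ZZ strength of 50 kHz (real devices vary widely), a qubit idling for a single microsecond next to an excited neighbor already picks up a noticeable conditional phase:

```python
import numpy as np

zeta = 50e3         # assumed residual ZZ strength, Hz (design-dependent)
t_idle = 1e-6       # 1 us of idling while a neighbor sits in |1>

phase_error = 2 * np.pi * zeta * t_idle   # conditional phase picked up
print(f"conditional phase after idling: {phase_error:.3f} rad "
      f"({np.degrees(phase_error):.1f} degrees)")
```

A few tenths of a radian per microsecond is far above fault-tolerance thresholds, which is why static ZZ must be either engineered away or actively cancelled.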

Mitigation of crosstalk in superconducting systems is a concerted effort: engineers characterize a full crosstalk matrix (how much each control line affects each qubit) and then pre-compensate pulses or adjust amplitudes to cancel out the leakage. Techniques like dynamical decoupling can also help – essentially applying refocusing pulses so that any unintended rotations average out. A recent approach used multi-qubit gate calibrations that explicitly include crosstalk parameters, thus “teaching” the controller the cross-couplings so it can drive compensating counter-pulses in real time. In principle, with precise calibration, crosstalk can be turned from an adversary into just another known coupling that the control software accounts for.
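In its simplest linear form, pre-compensation is a matrix inversion. The sketch below uses a small, hypothetical crosstalk matrix C, where C[i, j] is the amplitude qubit i receives from control line j, and solves for line amplitudes that deliver drive to one qubit only:

```python
import numpy as np

# Hypothetical measured crosstalk matrix (diagonal = intended coupling,
# off-diagonal = leakage between control lines and neighboring qubits):
C = np.array([[1.00, 0.06, 0.01],
              [0.05, 1.00, 0.07],
              [0.01, 0.04, 1.00]])

target = np.array([0.0, 1.0, 0.0])            # drive only qubit 1

naive = C @ target                            # raw pulse: neighbors see leakage
compensated = np.linalg.solve(C, target)      # pre-distorted line amplitudes
achieved = C @ compensated

print("naive drive reaching qubits:       ", np.round(naive, 4))
print("pre-compensated line amplitudes:   ", np.round(compensated, 4))
print("drive reaching qubits after comp.: ", np.round(achieved, 4))
```

Real calibrations are more involved (crosstalk can be frequency- and state-dependent), but the linear picture captures the core idea: measure the coupling matrix once, then invert it in the control software.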

In trapped-ion systems, crosstalk might mean a laser pulse hitting more ions than intended. When ions are spaced only a few micrometers apart in a chain, tightly focusing a laser on one ion’s location is challenging. Often a Gaussian beam will have some spillover onto neighbors. If those neighbors are not in the same state (or are spectator qubits), they will pick up a bit of the laser action. This can be mitigated by careful optical engineering (e.g. using multi-tone beams to null out light on neighbors, or individual addressing beams with shaping). Another ion-trap crosstalk issue is when performing entangling gates on multiple pairs of ions simultaneously: because they share the same trap, doing two gates at once can cause unintended collective motion. Research is ongoing on how to scale ion traps such that parallel gates in separate zones don’t interfere via the trap structure – the modular ion trap approach, with separate trapping zones or even separate traps connected by photonic links, is one way to avoid cross-talk between distant groups of ions.

For neutral atoms, crosstalk issues arise with Rydberg interactions: if you try to entangle atom A and B by exciting them to Rydberg states, a nearby atom C might also feel a van der Waals interaction even if not intended. Additionally, the addressing lasers for atoms in an array might not be perfectly distinct. Optical crosstalk can be reduced by using advanced beam shaping or by spacing atoms farther apart (at the cost of weaker interaction strength). Some designs use a two-photon excitation for Rydbergs where one laser is global and one is local, so that only the target atom, at the intersection of a global and a focused beam, gets fully excited. Still, ensuring that an off-target atom doesn’t accidentally get a partial excitation (which would decohere it) is nontrivial.

In semiconductor spin qubits, crosstalk can happen if the control electrodes for one quantum dot affect another (for example, tuning one gate shifts the potential of a neighbor dot – since they’re all in a chip, some capacitance is shared). Careful calibration and device design (with screening layers) can mitigate this. On the microwave side, if multiple spin qubits operate at similar frequencies, a pulse intended for one could also drive another (spectral crowding issue). Engineers then use slightly different resonance frequencies for each qubit (like having each spin in a slightly different magnetic field or with a different g-factor) to frequency-separate the control.

Cross-talk is often less visible in single- or few-qubit demos, but as processors integrate dozens of qubits operating simultaneously, it becomes a limiting factor for fidelity. A recent case in point: a 127-qubit superconducting chip required complex “simultaneous gate” calibrations; without them, cross-couplings would reduce fidelity when many gates run in parallel. Similarly, an ion trap with 20+ ions must contend with off-target light and motional mode cross-coupling. The community has developed quantum benchmarking tools to detect cross-talk errors specifically, treating them as a separate error model beyond simple single-qubit depolarization. By identifying where cross-talk is worst, hardware designers can reroute wiring or add shielding to reduce it. In the quest for fault tolerance, uncorrected cross-talk is dangerous because it tends to create correlated errors (affecting multiple qubits together) which are harder for quantum error correction to handle.

Spontaneous Emission and Photon Loss: Decay of Excited Qubits

Some quantum modalities use qubit states that are inherently unstable even when left unperturbed – an excited state can spontaneously emit a photon and decay to a lower state. Also, any photonic qubit traveling through a medium can be lost or absorbed. These processes are forms of decoherence because information is either irreversibly leaked into a random photon or the qubit is outright lost from the register.

In trapped-ion and neutral-atom qubits, one usually chooses very stable states for the qubit basis (ground-state hyperfine levels or long-lived electronic states) so that spontaneous emission is extremely slow – on the order of millions of years for a microwave hyperfine qubit, effectively zero on experiment timescales. However, some schemes (especially in neutral atoms) involve Rydberg excited states during two-qubit gates. Rydberg states (highly excited atomic states) have finite lifetimes, typically microseconds to a few hundred microseconds, limited by spontaneous emission and blackbody radiation-induced transitions. When an atom is promoted to a Rydberg state to mediate entanglement, it can spontaneously decay back down, which collapses the qubit’s state (and often kicks out a random photon that can entangle with the atom – an uncontrolled event). This puts a fundamental limit on Rydberg gate fidelity and speed: gates must be done quickly compared to the Rydberg lifetime, and error rates on the order of a percent can arise from this decay channel alone in current experiments. Mitigation involves using higher principal quantum number states (which can have longer lifetimes) or clever gate protocols that keep the excitation time short. Additionally, blackbody radiation in the room-temperature environment can sometimes kick Rydberg atoms down or up; operating the array at lower temperature (like 77 K or so) can reduce that background excitation.
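A rough error budget for this channel (round, assumed numbers: a ~100 µs Rydberg lifetime, a ~0.5 µs gate, and about half the gate spent in the Rydberg state) shows why decay alone contributes errors at the 10⁻³ level:

```python
import numpy as np

tau_ryd  = 100e-6     # assumed Rydberg-state lifetime (~100 us)
t_gate   = 0.5e-6     # assumed entangling-gate duration (~0.5 us)
frac_ryd = 0.5        # rough fraction of the gate spent in the Rydberg state

# Probability that spontaneous decay interrupts the gate:
p_decay = 1.0 - np.exp(-frac_ryd * t_gate / tau_ryd)
print(f"decay-induced error per gate: {p_decay:.2e}")
```

The ratio of gate time to excited-state lifetime sets a floor on the error rate, which is why faster gates and longer-lived Rydberg states both pay off directly in fidelity.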

Another example: some ion qubits use an optical excited state as |1⟩, e.g. a metastable D level in Ca⁺ or Sr⁺. Those have long but not infinite lifetimes (e.g. a second or so). If information is held in that state too long, it will eventually decay spontaneously, emitting a photon (which could even be detected – that’s how some quantum logic experiments intentionally measure qubits). During normal gate sequences this is usually negligible, but it’s a consideration for memory qubits or if running slow sequences. Quantum memories in atoms that rely on excited states (like certain DLCZ protocols in quantum repeaters) have to contend with this – the stored excitation will eventually leak out as a photon, limiting storage time.

In photonic qubit systems, the analog of spontaneous emission is simply photon loss. A photon carrying quantum information might be absorbed or scattered as it propagates through a fiber, waveguide, or optical component. The loss doesn’t create a bit-flip or phase error – it removes the qubit entirely (an erasure error). In a quantum circuit model, a lost photon is like the erasure of that qubit from the register. Loss is the dominant noise in photonic quantum computing. For example, the impressive “Jiuzhang” photonic quantum supremacy experiments in 2020–2023 transmitted dozens of photons through a complex interferometer, but many more photons were lost than detected, meaning a lot of the quantum data was dropping out. For fault-tolerant photonic computing (like fusion-based or measurement-based schemes), dealing with loss requires redundancy – typically, generating more photons than needed and having mechanisms to detect and compensate for losses.

One advantage is that loss errors are often detectable (an absent click in a detector, for instance). This can be leveraged: losing a photon is an error, but if you know it was lost (erasure), the error correction can be easier than for an unknown Pauli error. Some topological codes in photonic schemes assume a certain fraction of erasures can be corrected if identified.

For bosonic qubits like the cat qubits in superconducting cavities (which are a kind of photonic qubit, except the photon is stored in a microwave resonator), photon loss is also the main decoherence channel. A cat qubit is a superposition of two coherent states of a resonator (approximately |α⟩ + |–α⟩). A single photon loss from the resonator causes a discrete jump in the logical state – in the standard convention, where the two coherent states |±α⟩ serve as the logical basis, each photon loss translates predominantly into a phase-flip error, while a bit-flip would require the resonator to tunnel between the two well-separated coherent states. The remarkable consequence is a strongly biased error profile: as the average photon number |α|² increases, the bit-flip rate becomes exponentially suppressed, while the phase-flip rate grows only linearly with photon number. Recent work in 2025 achieved cat qubit coherence times of over 0.1 seconds for the logical bit-flip error (T_bit-flip ~ 0.1 s), using only a mean of 4 photons in the resonator. This is extraordinarily long in the superconducting context, far exceeding the ~0.2 ms physical T₁ of the resonator. It indicates an exponential suppression of bit-flips. However, the phase-flip times in that device were far shorter, on the order of the resonator’s energy decay time. So cat qubits trade one form of decoherence for another: they get an exponentially lower X error rate at the cost of a manageable Z error rate. This kind of noise bias can be useful – one can design error correction schemes that specifically tackle the dominant error (phase flips) and treat bit-flips as rare events.
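The scaling can be sketched numerically. The constants below are illustrative (the ~0.2 ms resonator T₁ quoted above and the textbook exp(−2|α|²) bit-flip suppression for stabilized cats); the point is the trend, a linearly growing phase-flip rate against an exponentially collapsing bit-flip rate:

```python
import numpy as np

kappa = 1.0 / 0.2e-3        # single-photon loss rate for a 0.2 ms resonator T1

for nbar in (2, 4, 8):      # mean photon number |alpha|^2
    gamma_phase = kappa * nbar                      # phase flips grow ~linearly
    gamma_bit = kappa * nbar * np.exp(-2 * nbar)    # bit flips drop exponentially
    print(f"nbar = {nbar}: T_phase ~ {1/gamma_phase*1e6:7.1f} us, "
          f"T_bit ~ {1/gamma_bit:.2e} s")
```

With these toy constants, a mean photon number of 4 already lands near the 0.1 s bit-flip scale quoted above, while the phase-flip time shrinks as photons are added; the bias grows with every extra photon.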

For more conventional qubits, mitigation of spontaneous emission is straightforward conceptually: don’t use short-lived states as your qubit states. That’s why quantum computing with atomic transitions usually sticks to ground or metastable states. If an excited state is used transiently (like in a gate or readout), make it fast. In photonics, mitigation of loss is about improving component quality (high transmission optics, low-loss waveguides) and clever architectures (like redundancy or building heralded gates that only proceed when photons haven’t been lost up to a point). For example, quantum repeaters introduce intermediate stations to catch and correct for photon loss in communications by entanglement swapping. On-chip photonic schemes might use entangled cluster states that have some robustness to missing photons.

It’s interesting to compare: in superconductors, energy decay (T₁ processes) is often due to TLS and quasiparticles as discussed, whereas in atoms, T₁ is basically infinite (except if you intentionally use a decaying state). In photons, “T₁” is the loss probability per meter or per component. Each modality has to fight either sudden death of the qubit or gradual phase drift or both. An approach like cat qubits blends the worlds: it uses bosonic modes that can have engineered decay (two-photon processes) to stabilize against single-photon loss. Majorana qubits similarly aim for a situation where certain errors (like local perturbations) don’t cause logical flips, only higher-order processes do – effectively creating a gap between fast benign errors and rare lethal errors.

Emerging Modalities: Topological and Exotic Qubits

Before concluding, it’s worth briefly looking at some emerging or speculative qubit modalities – such as topological qubits (Majorana zero modes) and bosonic qubits (cat codes, GKP states) – to see how decoherence plays out differently, and yet how familiar themes reappear.

Topological qubits, like those based on Majorana modes in nanowires or 2D materials, promise inherent protection from local noise. The idea is that the qubit is stored non-locally (for instance, in the joint parity of two Majorana bound states separated in space), so a local perturbation can’t flip the qubit state. This should make them immune to many noise sources that plague other qubits. However, real-world attempts at Majorana qubits find that they are still susceptible to the environment in specific ways. The biggest issue is quasiparticle poisoning – if a stray quasiparticle (e.g. an electron from the environment) hops into the topological region, it changes the parity and thereby flips the qubit randomly. This is essentially a T₁ error for a Majorana qubit. And as one paper bluntly stated, these quasiparticle poisoning errors “are not suppressed by the underlying topological properties” – meaning topology doesn’t save you if an extra electron barges in; the system then has an extra fermion and the encoded information is disturbed. Therefore, practical Majorana qubits will require either ultra-clean environments to reduce stray quasiparticles or active error correction of those events (e.g. by turning them into erasure errors that can be detected).

Other decoherence factors for Majoranas include finite size effects – if the Majorana pair is not perfectly isolated, they have a tiny energy splitting between their joint even/odd parity states, effectively giving the qubit a slight bias that can cause phase rotation or dephasing over time. This is like a clock ticking when it should be stopped. The splitting decays exponentially with the separation of the Majoranas; to get it negligibly small, one might need fairly long wires or very carefully engineered symmetry. Any material disorder tends to couple Majoranas and spoil the topological degeneracy. Thus, the main message for topological qubits is that while they offer a potential reduction in sensitivity to certain noise (like uniformly distributed local perturbations), they introduce new sensitivities (like to any error that breaks the topological protection, e.g. quasiparticles, braiding imperfections, or simply the fact that no system is fully topological in a finite-size device). They also have to operate in superconductors, so all the earlier discussion of cosmic rays and materials still applies behind the scenes – a Majorana device must still contend with dielectric loss, flux noise on any flux-tunable elements, etc., though the logical information might be somewhat insulated from those if true topological behavior is achieved.

Bosonic qubits (cat qubits, binomial codes, GKP lattice states) are another frontier. We discussed cat qubits and their bias. A general observation is that bosonic codes often convert what would be random errors on a single photon into more structured errors on the encoded logical qubit. For instance, a single photon loss in a cat code causes a bit-flip (structured error) instead of a random trajectory in state space. The Gottesman-Kitaev-Preskill (GKP) code, which encodes a qubit in the phase space of an oscillator as a grid of points, turns small shifts (due to, say, a low-amplitude noise force or a small phase drift) into correctable shifts modulo a lattice spacing. The main decoherence for GKP is again photon loss and thermal noise – too large a shift (from a big energy decay or a big thermal kick) can move the state out of the correctable range. But within a certain error budget, GKP states can correct continuous errors by digitizing them. These bosonic approaches thus tackle decoherence not by eliminating the source, but by encoding the qubit cleverly so that the most likely errors are either caught or have less impact. One could say they exploit the structure of certain noise (e.g. predominance of photon loss over other errors) to extend effective coherence.
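A tiny Monte Carlo illustrates the digitization. For an ideal square-lattice GKP qubit, logical shifts are multiples of √π; rounding a random Gaussian kick to the nearest multiple corrects it unless the kick exceeds √π/2, in which case the decoder lands on the wrong lattice site and applies a logical flip:

```python
import numpy as np

rng = np.random.default_rng(3)
L = np.sqrt(np.pi)   # logical shifts are multiples of sqrt(pi);
                     # stabilizer shifts are multiples of 2*sqrt(pi)

for sigma in (0.1, 0.3, 0.6):            # rms quadrature kick per round
    u = rng.normal(0, sigma, 200_000)    # random continuous shift errors
    k = np.round(u / L)                  # multiples of sqrt(pi) left after the
    p_logical = np.mean(k % 2 == 1)      # decoder rounds; odd k = logical flip
    print(f"sigma = {sigma}: logical error probability = {p_logical:.4f}")
```

Small kicks are corrected essentially perfectly, while the logical error probability turns on sharply once the noise is comparable to half the lattice spacing; this threshold-like behavior is the payoff of digitizing continuous errors.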

Finally, NV centers and other defect qubits (like silicon vacancies in SiC, etc.) – while not a major focus here, they bring their own decoherence challenges like interactions with a bath of surrounding spins (e.g. a dense nuclear spin environment) and photoionization cycles. Techniques like dynamic decoupling have pushed NV spin coherence into the second-to-minute regime by refocusing environmental spins. So even within solid-state, the spectrum ranges from highly coherent isolated spins in diamond to more fragile quantum dots in GaAs. Policy makers might note that each platform’s coherence is improving year by year as researchers identify the dominant noise and engineer around it. In 2023–2025 alone, we’ve seen multiple “coherence record” papers – from 0.4 ms T₁ in transmons, to >0.1 s cat qubit bit-flip times, to ~1 minute NV spin memories, to 10+ second trapped ion hyperfine coherence under dynamical decoupling. These numbers are orders of magnitude apart, but each is relevant in context (some are logical coherence times with encoding, some are physical).

Conclusion and Outlook

Decoherence is the fundamental challenge that shapes every aspect of quantum computer design. Each physical qubit modality must be understood as not just a two-level system in isolation, but as an entity immersed in a world of noise – whether that noise comes from microscopic defects, fluctuating fields, cosmic particles, or control imperfections. We’ve seen that:

  • Superconducting qubits grapple with material-based noise (surface spins causing flux noise, stray charges causing drift, and TLS defects causing energy loss) as well as quasiparticle bursts from radiation. They benefit from fast operation and integration, but must be shielded and error-corrected meticulously to overcome these issues. Innovations like improved materials, protected qubit designs, and bosonic encoding (cat qubits) are extending their coherence.
  • Trapped-ion qubits enjoy pristine isolation – negligible material noise and very long intrinsic coherence – yet face limits from laser stability, motional mode heating, and the ever-present need for ultra-high vacuum. Their decoherence is dominated by technical noise (laser phase errors, electromagnetic interference) and occasional environmental upsets (background gas collisions). Steady improvements in laser tech, vacuum engineering, and error-resistant gate protocols continue to push ion performance to new heights, as evidenced by >99.9% gate fidelities in recent demos.
  • Neutral atom qubits (optical tweezer arrays) are a newer platform marrying some advantages of ions (many identical atomic qubits) with potential for parallelism. They see decoherence from laser noise and intensity fluctuations (affecting trap depths and Rydberg excitations), atom loss via collisions, and finite Rydberg state lifetimes. Techniques like atomic clock transitions for qubits (to cancel magnetic noise) and echo sequences during Rydberg gates help mitigate these issues. As laser and optical control systems improve, neutral atom arrays are rapidly climbing in two-qubit gate fidelity. Their scalability (hundreds of atoms trapped) makes tackling decoherence crucial for achieving reliable entanglement across large 2D arrays.
  • Photonic qubits trade off the problem of maintaining quantum information over time for the problem of transmitting it across space. A photon won’t decohere in the sense of losing phase coherence in vacuum, but practical implementations must contend with loss and mode mismatch. The key noise sources are scattering loss, absorption, and detector noise. The path to scale photonic quantum computing lies in minimizing loss (through better fabrication of waveguides and using telecommunication wavelengths in fiber) and designing error-tolerant schemes (like fusion-based computation that accepts a certain loss rate). Interestingly, photonic systems are relatively immune to some errors that plague matter qubits – e.g. they aren’t affected by magnetic or electric field fluctuations once en route – which is why they are ideal carriers of quantum information over long distances (quantum communication). The challenge is that processing quantum information with photons requires generating large entangled resource states to counter the probabilistic nature of measurements and loss.
  • Semiconductor spin qubits (in silicon or SiGe) leverage the semiconductor industry’s prowess to aim for dense, scalable qubit chips. They have shown great strides: isotopic purification has largely solved the nuclear spin decoherence issue, and recent devices have achieved two-qubit gates with ~99% fidelity on a 300 mm wafer process. Decoherence for them now comes mainly from charge noise (affecting exchange interactions and spin splittings) and to some extent residual magnetic noise (from any remaining nuclear spins or magnetic field drift). Advanced cryogenic electronics may in the future help stabilize and feed back on qubit frequencies in real-time, similar to what was demonstrated in 2025 with adaptive control reducing noise by 10×. The long-term vision is a fault-tolerant spin-qubit quantum processor integrated with classical control circuits – essentially a quantum CMOS. Achieving that will require both reducing ambient decoherence and layering on quantum error correction to continually repair the qubits faster than noise can damage them.

In all cases, a combination of materials science, engineering, and smart error correction is pushing decoherence thresholds upward. Each modality has sweet spots: e.g., superconductors excel in fast gates but need error correction sooner due to ~100 µs coherence, whereas ions have super coherence (seconds) but slower gates and more overhead per operation – these differences will shape which technology suits which application (a short-depth algorithm might run on a smaller superconducting device today, whereas a memory-intensive communication task might favor ions or atoms).

From a policy and planning perspective, it’s important to recognize that no quantum technology is yet “perfectly stable” – all require an infrastructure to mitigate noise.
