
Silicon’s Hidden Advantage: How Biased Noise Could Slash the Cost of Quantum Error Correction

(Updated in March 2026)

Every resource estimate for a cryptanalytically relevant quantum computer (CRQC) rests on a set of assumptions about noise. How often do errors occur? What types of errors dominate? How effectively can error correction suppress them? These assumptions are usually treated as abstract parameters — a physical error rate of 10⁻³, a surface code threshold of ~1%, a certain number of physical qubits per logical qubit.

What is rarely discussed — and almost never quantified in threat models — is that the type of noise varies dramatically between qubit platforms. And the type of noise determines, to a degree that the security community has not yet absorbed, how many physical qubits you actually need.

Recent experiments on silicon donor qubits have revealed a noise profile that is not just different from other platforms — it is structurally advantageous for error correction. The noise is strongly biased: phase-flip errors dominate overwhelmingly, while bit-flip errors are essentially absent. This is not a one-off observation. It has been independently confirmed in both the SZIQA team’s error-detection and logical-operations experiments, and it arises from fundamental physics — the exceptionally long T₁ lifetimes of nuclear spins in silicon — rather than from device-specific engineering.

The theoretical literature tells us what this means: under biased noise, the fault-tolerance threshold can be pushed from ~1% to over 5%, the code distance required for a given logical error rate drops, and the physical qubit overhead of error correction shrinks accordingly. If these predictions hold at scale, a silicon-based CRQC could require meaningfully fewer physical qubits than any estimate based on standard symmetric noise assumptions suggests.

This is not a small correction. It could be a factor-of-several reduction. And it changes the math on quantum threat timelines.

What “Biased Noise” Actually Means

In the standard model of quantum noise — the depolarizing channel — the three types of Pauli errors (bit-flip X, phase-flip Z, and the combined Y) occur with equal probability. If your physical error rate is 1%, then X, Z, and Y errors each occur with a probability of roughly 0.33%. Quantum error-correcting codes like the surface code are designed to handle all three error types symmetrically, and the fault-tolerance threshold (~1% for the standard surface code) is calculated under this assumption.

But real qubits do not always fail symmetrically. In many physical systems, one type of error is far more likely than the others. This asymmetry is called noise bias, and it is quantified by the bias ratio η = p_Z / p_X, where p_Z is the phase-flip error rate and p_X is the bit-flip error rate. A bias of η = 1 means symmetric noise. A bias of η = 100 means phase-flip errors are 100 times more common than bit-flip errors. A bias of η = ∞ (pure dephasing) means bit-flip errors never occur.
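
To make the bookkeeping concrete, here is a minimal Python sketch of how a total physical error rate p splits across the three Pauli errors for a given bias. It assumes Y errors are as rare as X errors (p_Y = p_X), which is the conventional grouping; the values of p and η used here are illustrative, not measurements.

```python
def pauli_error_rates(p, eta):
    """Split a total error rate p into (p_X, p_Y, p_Z) for a bias eta = p_Z / p_X.

    Assumes p_Y = p_X, so p = p_X + p_Y + p_Z = p_Z * (1 + 2 / eta).
    """
    p_z = p / (1 + 2 / eta)
    p_x = p_y = p_z / eta
    return p_x, p_y, p_z

# Symmetric (depolarizing) noise: each Pauli error gets p / 3.
print(pauli_error_rates(p=0.01, eta=1))    # (0.00333..., 0.00333..., 0.00333...)
# Strong bias: eta = 100 puts almost all of p into phase flips.
print(pauli_error_rates(p=0.01, eta=100))  # (~9.8e-05, ~9.8e-05, ~9.8e-03)
```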

Different qubit platforms have very different bias ratios. Superconducting transmon qubits typically have bias ratios near 1 — their noise is approximately symmetric. Cat qubits (stabilised bosonic modes) can have very high bias ratios, which is one of their primary selling points. And silicon donor nuclear spins, as the SZIQA experiments have now demonstrated, exhibit extreme bias: the Z-basis observables (sensitive to bit-flip errors) showed no measurable decay over the entire experimental timescale, while the X-basis observables (sensitive to phase-flip errors) decayed with coherence times of ~140–255 μs.

The physics behind this asymmetry is straightforward. A nuclear spin in silicon sits in a magnetic field. For the spin to flip (a bit-flip error), it must exchange energy with its environment — typically through spin-lattice relaxation mediated by phonons or spin-orbit coupling. But nuclear spins in silicon couple extremely weakly to these mechanisms. The T₁ relaxation time for ³¹P nuclear spins in ²⁸Si has been measured at many minutes to hours in some configurations — effectively infinite on the timescale of any quantum circuit. The nuclear spin simply does not spontaneously flip.

Phase-flip errors, by contrast, arise from fluctuations in the local magnetic field that shift the spin’s precession frequency. These fluctuations come primarily from the shared electron (whose hyperfine coupling creates an effective magnetic field that changes when the electron’s state fluctuates) and from any residual ²⁹Si nuclear spins in the lattice. These mechanisms are real but finite, giving the nuclear spin a T₂ coherence time of hundreds of microseconds to milliseconds — long by quantum computing standards, but finite.
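
A back-of-envelope estimate shows how extreme the resulting bias is. The sketch below uses the first-order approximation that a per-operation error probability scales as t/T; the inputs are illustrative assumptions (T₂ = 200 μs from the range quoted above, T₁ = 30 minutes as a conservative reading of the “minutes to hours” measurements, and a 1 μs operation time).

```python
T1 = 30 * 60      # spin-lattice relaxation time in seconds (illustrative)
T2 = 200e-6       # dephasing time in seconds (illustrative)
t_op = 1e-6       # duration of one operation, 1 us

# First-order error probabilities per operation: p ~ t / T.
p_x = t_op / T1   # bit-flip probability
p_z = t_op / T2   # phase-flip probability
eta = p_z / p_x   # bias ratio, which reduces to T1 / T2

print(f"p_X ~ {p_x:.1e}, p_Z ~ {p_z:.1e}, eta ~ {eta:.1e}")
# p_X ~ 5.6e-10, p_Z ~ 5.0e-03, eta ~ 9.0e+06
```

Even with the conservative choice of T₁, the estimated bias sits orders of magnitude beyond the η = 100 regime that the theory papers treat as experimentally realistic.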

The result: extremely high noise bias, arising from fundamental physics rather than engineering optimisation. This is not something that needs to be designed in or maintained by active stabilisation. It is an intrinsic property of the hardware.

What Theory Predicts: Thresholds Above 5%

A sequence of theoretical papers, primarily from the University of Sydney group (Tuckett, Bartlett, Flammia, and Brown), has systematically analysed how biased noise affects the performance of quantum error-correcting codes.

The results are dramatic.

In 2018, Tuckett, Bartlett, and Flammia showed that a simple modification of the surface code — orienting the code so that its structure aligns with the dominant error type — can achieve what they called an “ultrahigh error threshold” under biased noise. For pure dephasing (η = ∞), the threshold reaches ~43.7%, approaching the theoretical maximum of 50% — meaning the code can tolerate almost any amount of phase noise alone.

In 2020, the same group (now including Brown) extended this to the fault-tolerant regime, where syndrome measurements themselves are unreliable. They obtained fault-tolerant thresholds exceeding 6% in the limit of strong dephasing, and approximately 5% for a bias ratio of η = 100 — an experimentally realistic regime. Compare this to the standard surface code threshold of approximately 1% under symmetric noise.

In 2021, Bonilla Ataides, Tuckett, Bartlett, Flammia, and Brown introduced the XZZX surface code — a variant that achieves even higher thresholds under biased noise by using a different stabiliser structure that naturally separates the two error types: every check measures the same four-body X-Z-Z-X operator, in place of the all-X and all-Z checks of the standard surface code. The XZZX code surpassed all previously known thresholds for the surface code under biased noise and was shown to be compatible with practical decoders.

The practical consequence of a 5× higher threshold is not merely that error correction “works better.” It cascades through every level of the resource estimate. A higher threshold means you need a smaller code distance to achieve the same logical error rate. A smaller code distance means fewer physical qubits per logical qubit. Fewer physical qubits per logical qubit means fewer total physical qubits for the entire computation. And fewer total qubits means a smaller, cheaper, and faster quantum computer.
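
The cascade can be made concrete with the standard heuristic for the surface-code logical error rate, p_L ≈ A·(p/p_th)^((d+1)/2), together with the usual ~2d² physical qubits per logical qubit. In the sketch below, the prefactor A = 0.1 and the target logical error rate of 10⁻¹² are illustrative assumptions; treat it as a scaling illustration, not a calibrated resource estimate.

```python
def required_distance(p, p_th, p_target, A=0.1):
    """Smallest odd code distance d with A * (p / p_th) ** ((d + 1) / 2) <= p_target."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

p, p_target = 1e-3, 1e-12      # physical error rate, target logical error rate

for p_th in (0.01, 0.05):      # symmetric-noise vs biased-noise threshold
    d = required_distance(p, p_th, p_target)
    qubits = 2 * d * d         # ~2d^2 data + measurement qubits per logical qubit
    print(f"threshold {p_th:.0%}: d = {d}, ~{qubits} physical qubits per logical qubit")
```

Under these assumptions, raising the threshold from 1% to 5% cuts the required distance from 21 to 13 and the per-logical-qubit overhead from roughly 880 to roughly 340 physical qubits: a ~2.6× saving before any architecture-specific gains.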

What This Means for CRQC Resource Estimates

Every major CRQC resource estimate published to date — from Gidney’s RSA-2048 analysis to the Pinnacle Architecture to Google’s cryptocurrency assessment — assumes surface code error correction with a physical error rate in the range of 10⁻³ to 10⁻⁴ and a threshold of approximately 1%. These estimates do not model platform-specific noise bias.

For silicon donor qubits with strongly biased noise, this means the estimates are likely conservative — they overstate the physical qubit requirements. How much they overstate depends on the exact bias ratio and the error correction architecture used, but the scaling is significant.

Consider a concrete example. The Pinnacle Architecture showed that RSA-2048 can be factored with fewer than 100,000 physical qubits using qLDPC codes at a physical error rate of 10⁻³ and a code cycle time of 1 μs — parameters that match silicon’s fast-clock characteristics. This estimate uses standard (symmetric) noise assumptions. Under biased noise, the code distance required for equivalent logical error rates would be lower, and the physical qubit overhead would shrink correspondingly.
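
As a directional illustration of that shrinkage: if the per-logical-qubit overhead scales with d², as it does for the surface code (qLDPC overheads scale differently, so this is a rough proxy rather than a recomputed estimate), the distances from the earlier sketch rescale the baseline as follows.

```python
baseline = 100_000              # Pinnacle-style physical-qubit estimate
d_symmetric, d_biased = 21, 13  # distances from the threshold sketch above

# Overhead treated as proportional to d^2, so the total scales the same way.
rescaled = baseline * (d_biased / d_symmetric) ** 2
print(f"~{rescaled:,.0f} physical qubits")   # ~38,322 physical qubits
```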

The exact magnitude of the reduction depends on whether the bias is preserved through the error correction cycle — a technical requirement that depends on the gate set and code structure. Research on bias-preserving gates (Puri et al., 2020) and bias-compatible codes (XZZX surface code, Bonilla Ataides et al., 2021) suggests that it is possible, at least in principle, to maintain the bias advantage through fault-tolerant computation. Whether silicon’s specific gate implementations preserve bias in practice is an open experimental question — but one that, given the physics, has a favourable outlook.

For threat modelling purposes, the implication is this: a silicon-based CRQC might need 2× to 5× fewer physical qubits than estimates based on symmetric noise predict. This does not make a silicon CRQC imminent — the gap between 11 qubits and 100,000 remains enormous. But it does mean that the “finish line” for silicon may be closer than standard resource estimates suggest.

Which Platforms Benefit?

Silicon donor qubits are not the only platform with biased noise. Kerr-cat qubits (a superconducting bosonic encoding) are specifically designed to exhibit extreme noise bias, and much of the theoretical work on biased-noise codes was motivated by this architecture. Some trapped-ion implementations also show moderate bias.

But silicon’s bias has a distinctive advantage: it is intrinsic rather than engineered. Cat qubits require active stabilisation (continuous microwave drives) to maintain their bias, and the bias-preserving condition must be carefully maintained through every gate operation. Silicon nuclear spins are biased by default — the T₁ ≫ T₂ asymmetry is a consequence of the atom sitting in a crystal lattice, not of any active control scheme. This makes silicon’s bias inherently more robust and easier to maintain at scale.

The combination of intrinsic bias with CMOS-compatible fabrication is unique to silicon. No other platform offers both a naturally favourable noise profile and a credible path to industrial-scale manufacturing. This is what makes the biased noise finding more than an academic curiosity — it is a potential structural advantage for silicon in the CRQC resource competition.

What Remains Unproven

Several important questions must be answered before the biased noise advantage can be incorporated into quantitative threat models with confidence.

First, does the bias persist at scale? The SZIQA experiments measured bias in a single five-donor cluster. As systems scale to arrays of clusters with inter-cluster coupling, new noise channels — charge noise from control electronics, cross-talk between clusters, measurement back-action — could introduce additional bit-flip errors that reduce the effective bias ratio. Whether the bias holds at η > 100 across a large-scale processor is an open question.

Second, can bias be preserved through all gate operations? Standard two-qubit gates can convert phase errors into bit-flip errors, destroying the bias. Bias-preserving gate sets exist in theory, but their practical implementation in silicon donor systems has not been demonstrated. The SZIQA team’s CCCCZ-type gates, mediated by hyperfine coupling, may have natural bias-preserving properties — but this has not been formally characterised.

Third, what codes and decoders are optimal for silicon’s specific noise profile? The XZZX surface code and tailored decoders represent the state of the art for biased-noise error correction, but they have been studied primarily in the context of cat qubits. Adapting these codes to silicon’s particular noise model — which includes cross-talk-induced correlated errors as well as biased single-qubit errors — requires further theoretical and experimental work.

These are answerable questions, not fundamental barriers. But they must be answered before anyone can responsibly put a revised qubit count into a CRQC resource estimate.

The Bottom Line for Security Leaders

For CISOs and risk managers calibrating PQC migration timelines, the biased noise finding adds a specific, quantifiable uncertainty to existing CRQC resource estimates — and the uncertainty is in the wrong direction.

Standard estimates assume symmetric noise and produce physical qubit counts in the range of 100,000 to several million, depending on the architecture and error correction code. If silicon’s biased noise allows a 2× to 5× reduction in overhead, the low end of that range moves from 100,000 to perhaps 20,000–50,000 physical qubits. That is still far beyond current silicon hardware (11 qubits), but it is significantly closer to the near-term hardware roadmaps of silicon quantum computing companies than the standard estimates suggest.

More fundamentally, the biased noise finding illustrates a general principle that the security community tends to underappreciate: CRQC resource estimates are not fixed numbers. They are functions of hardware parameters — error rates, noise profiles, gate speeds, connectivity — that vary across platforms and improve over time. Any threat model that treats “number of qubits needed” as a single, platform-independent constant is missing a dimension of the analysis.

Silicon’s biased noise is one specific example of how platform-specific physics can shift the resource landscape. There will be others — advances in qLDPC codes, improvements in magic state distillation, new architectural optimisations — that further compress the resource requirements. The responsible posture is not to wait for these parameters to stabilise, but to begin migration now with the understanding that the target is moving.

The noise is not symmetric. The estimates should not assume it is.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.