
Fidelity in Quantum Computing

Introduction

According to a recent MIT article, IBM aims to build a 100,000-qubit quantum computer within a decade. Google is aiming even higher, aspiring to release a million-qubit computer by the end of the decade. We are witnessing a continuous push toward larger quantum processors with ever-increasing numbers of qubits; IBM is expected to release a 1,000-qubit processor sometime this year.

On the surface, more qubits suggest more computational power, since qubits (quantum bits) can leverage quantum phenomena like superposition (being in a combination of 0 and 1 simultaneously) and entanglement (strong correlations between qubits) to tackle complex problems in ways classical bits cannot. Indeed, the more qubits, the greater the potential for large-scale computing power. This has fueled a “qubit race” in the media, drawing parallels to a race for quantum supremacy – the point where quantum computers solve problems beyond classical reach.

However, focusing solely on qubit count is a red herring. Today’s quantum hardware remains noisy and error-prone, meaning that simply adding qubits without improving their reliability offers diminishing returns. Quantum states are famously fragile, easily disrupted by environmental heat, electromagnetic noise, and other disturbances. As a result, a handful of high-quality (high-fidelity) qubits can be far more valuable than a plethora of unstable ones.

IBM recognized this by introducing the “Quantum Volume” metric, which factors in qubit count and error rates – acknowledging that a few fault-tolerant qubits are more valuable than a larger number of noisy, error-prone ones. In practical terms, a quantum processor boasting 1,000 qubits with mediocre fidelity might struggle to outperform a smaller device with fewer but high-fidelity qubits. This brings us to a critical concept in quantum computing: fidelity.

The Fidelity Imperative

Fidelity in quantum computing measures the accuracy of quantum operations – essentially how closely real quantum processes match the ideal, error-free processes. High fidelity means quantum gates and measurements are functioning correctly and producing reliable results, whereas low fidelity indicates more frequent errors that can corrupt computations.

Because quantum algorithms often require many sequential operations, even small error probabilities per operation compound rapidly. For instance, a 99% fidelity (1% error rate) per gate may sound acceptable, but over 100 gate operations the chance of zero errors drops to roughly 37%. In contrast, at 99.9% fidelity (0.1% error each), about 90% of 100-gate sequences can run error-free. This stark difference illustrates why every decimal point of fidelity matters. Overall circuit fidelity decays exponentially with depth, so without very high per-gate fidelity, a quantum computer’s output becomes essentially random noise long before it harnesses its theoretical qubit advantage.
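To make the compounding concrete, here is a minimal Python sketch (an illustration, not tied to any particular hardware) that reproduces the percentages above by treating each gate as an independent success-or-failure event:

```python
# Probability that an N-gate circuit runs with zero errors, assuming each
# gate fails independently with probability (1 - fidelity).

def zero_error_probability(gate_fidelity: float, num_gates: int) -> float:
    return gate_fidelity ** num_gates

for fidelity in (0.99, 0.999, 0.9999):
    p = zero_error_probability(fidelity, 100)
    print(f"fidelity={fidelity:.4%} per gate -> "
          f"{p:.1%} chance of an error-free 100-gate circuit")

# fidelity=99.0000% per gate -> 36.6% chance of an error-free 100-gate circuit
# fidelity=99.9000% per gate -> 90.5% chance of an error-free 100-gate circuit
# fidelity=99.9900% per gate -> 99.0% chance of an error-free 100-gate circuit
```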

Today’s quantum processors, often termed Noisy Intermediate-Scale Quantum (NISQ) devices, operate in this delicate regime. NISQ-era machines have up to a few hundred qubits but lack error-correction and are highly prone to noise and decoherence. In fact, the NISQ “intermediate scale” is defined not just by qubit count but by limited gate fidelity as well. In these systems, achieving even 99% fidelity for two-qubit logic gates is a milestone, and state-of-the-art platforms like superconducting circuits or trapped ions are continually being calibrated to inch towards 99.9% or better per operation. Researchers have demonstrated single-qubit gates with errors as low as 1 in a million (99.9999% fidelity), and in 2021, the first two-qubit gates exceeding 99.9% fidelity were achieved on trapped-ion systems – the highest two-qubit accuracy of any platform at the time.

These achievements, while impressive, also underscore how challenging it is to maintain such accuracy as devices scale up. A quantum processor with high fidelity operations ensures that results of computations are trustworthy and reproducible. Conversely, a machine with thousands of qubits at, say, 99% fidelity might find that errors overwhelm the computation long before all those qubits can be put to use.

Notably, fidelity is as important as qubit count in determining a quantum computer’s practical power. IBM’s emphasis on Quantum Volume reflects this: adding qubits without improving their coherence and gate fidelity can reduce the effective computational capability.

An illustrative example comes from comparing hardware modalities. Superconducting qubit processors (used by IBM, Google, etc.) achieved rapid growth in qubit numbers (into the hundreds by 2023), but each qubit’s operational fidelity may hover around 99% for two-qubit gates. In contrast, trapped-ion systems (offered by Quantinuum, IonQ and academic labs) have far fewer qubits (often <50), yet each qubit can be manipulated with extremely high fidelity (99.9% or above).

In practice, a smaller high-fidelity trapped-ion machine can sometimes outperform a larger superconducting one on certain algorithms. This was seen in direct comparisons where an ion-based 5-qubit computer executed algorithms more accurately than a 5-qubit superconducting device, largely thanks to its lower error rates and better qubit connectivity. As scale grows, the gap may widen: one analysis noted that in a 100-qubit superconducting chip arranged in a 10×10 grid, linking qubits at opposite ends could require dozens of sequential operations; even with 99.9% gate fidelity, those chains of operations accrue ~6% error probability, and a larger chip would demand even higher fidelities to avoid a blow-up of errors. The takeaway is clear – quantity of qubits means little without quality of qubits.
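The grid example above can be reproduced with a quick back-of-envelope calculation. The sketch below assumes (purely for illustration, not as a statement about any specific chip) that a corner-to-corner route on a 10×10 grid takes about 18 nearest-neighbour SWAPs and that each SWAP costs three two-qubit gates:

```python
# Rough error estimate for routing across a 10x10 nearest-neighbour grid.
# Assumed for illustration: corner-to-corner routing needs ~18 SWAPs
# (Manhattan distance), and each SWAP decomposes into 3 two-qubit gates.

manhattan_distance = 9 + 9                 # corner-to-corner hops on a 10x10 grid
gates_per_swap = 3                         # standard SWAP = 3 CNOTs
two_qubit_gates = manhattan_distance * gates_per_swap   # 54 gates

gate_fidelity = 0.999
error_probability = 1 - gate_fidelity ** two_qubit_gates
print(f"{two_qubit_gates} gates at {gate_fidelity:.1%} fidelity -> "
      f"~{error_probability:.1%} chance of at least one error")
# ~5.3% -- the same ballpark as the ~6% figure quoted above
```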

The Challenge of Quantum Error Correction

If high fidelity is the goal, one might ask: why not use error correction to boost fidelity the way classical computers do (by redundancy and correction codes)? The answer is that quantum error correction (QEC) is possible in theory – but it’s enormously demanding in practice. Here are several key challenges that make QEC far more complex than classical error correction:

Fragile Quantum States (Superposition and Entanglement)

Quantum computing’s power comes from superposition and entanglement, but these properties are a double-edged sword for error correction. In a classical computer, we could simply copy bits and cross-check them (the principle behind repetition codes and other classical redundancy schemes). In a quantum computer, we cannot directly copy unknown qubit states due to the no-cloning theorem. Measuring a qubit to see if an error occurred will generally collapse its superposition state or break entanglement, destroying the quantum information we’re trying to protect. QEC schemes avoid this by using clever circuit constructions: they encode one logical qubit across many physical qubits in an entangled state, and perform indirect multi-qubit measurements (called syndrome measurements) that reveal error syndromes without revealing the actual data. Even so, performing these syndrome measurements is delicate – they must extract just the information about the error (e.g. “a phase flip occurred on qubit 5”) and not anything about the superposed values of the qubits. The need to preserve fragile quantum states while checking them for errors is a fundamental hurdle that has no classical analog.
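A toy example helps make syndrome measurement concrete. The sketch below simulates the textbook three-qubit bit-flip code with NumPy state vectors; it is far simpler than the codes used on real hardware, but it shows how parity checks locate an error without ever reading out the encoded amplitudes:

```python
import numpy as np

# Three-qubit bit-flip code, simulated with plain state vectors.
# A textbook toy code, not what real devices run; it only illustrates how
# parity (syndrome) checks locate an error without reading out the data.

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Encode an arbitrary single-qubit state a|0> + b|1> as a|000> + b|111>.
a, b = 0.6, 0.8                       # any normalized amplitudes
logical = np.zeros(8, dtype=complex)
logical[0b000] = a
logical[0b111] = b

# Introduce a bit-flip error on qubit 1 (the middle qubit).
corrupted = kron(I, X, I) @ logical

# Stabilizer (parity) checks Z0Z1 and Z1Z2: their eigenvalues form the
# syndrome, and the encoded amplitudes a and b never appear in the result.
Z0Z1 = kron(Z, Z, I)
Z1Z2 = kron(I, Z, Z)
syndrome = (np.vdot(corrupted, Z0Z1 @ corrupted).real,
            np.vdot(corrupted, Z1Z2 @ corrupted).real)
print("syndrome (Z0Z1, Z1Z2):", np.round(syndrome))   # (-1, -1) -> qubit 1 flipped

# Apply the correction indicated by the syndrome and verify recovery.
recovered = kron(I, X, I) @ corrupted
print("recovered == original:", np.allclose(recovered, logical))   # True
```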

Decoherence and Environmental Noise

Decoherence is the process by which qubits lose their quantum behavior by interacting with their environment. It’s the reason quantum hardware must operate in extreme conditions (e.g. superconducting qubits at millikelvin temperatures, trapped ions in ultra-high vacuum, etc.). Despite these precautions, current qubits maintain coherence only briefly – superconducting qubits often for only milliseconds, and even more isolated ion-based qubits for a few seconds at best. This sets a stringent time limit: error correction must detect and correct errors faster than the environment causes new ones. Every operation, and even the act of QEC itself, risks introducing noise. The irony is that adding error-correction circuits means doing more quantum gates and measurements – which themselves can generate errors or hasten decoherence. Thus, quantum error correction is a race against the clock: the procedures have to be fast and fault-tolerant enough that they improve overall fidelity instead of making things worse. Today’s NISQ devices are so noisy that implementing full error-correcting codes on them often hurts more than it helps, unless the base error rates are below certain thresholds (often on the order of 10⁻³ or 10⁻⁴, depending on the code).

More Complex Error Types

In classical bits, an error is straightforward – a bit flip from 0 to 1 or vice versa. Qubits, by contrast, can experience bit-flip errors and phase-flip errors, which have no classical equivalent. A phase flip means a qubit’s phase (the relative quantum phase between |0⟩ and |1⟩ components of its state) is inverted, which can wreak havoc on quantum algorithms even if the bit value (0 or 1) appears unchanged. Even more troublesome, a single physical qubit can suffer a combined error (bit and phase flip together, corresponding to a Pauli-Y error). Effective QEC codes must correct all types of quantum errors without directly looking at the qubit’s state. In practice, this means QEC codes use multiple physical qubits to detect different error syndromes. For example, the Shor code, one of the earliest QEC codes, uses 9 physical qubits to protect a single logical qubit: it can detect and correct any single bit flip and phase flip by spreading the information across the qubits and using entangling checks. Generally, quantum codes treat errors as operators from a basis (like the Pauli X, Y, Z matrices), and syndrome measurements project the system into a state that tells us which error operator occurred without revealing the actual data. The upshot is that quantum errors are multi-dimensional and require more sophisticated bookkeeping than classical errors.
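The short NumPy check below illustrates these error types directly: Z (a phase flip) leaves measurement probabilities in the 0/1 basis untouched yet still changes the state, and Y is, up to a global phase, a bit flip and a phase flip combined:

```python
import numpy as np

# Pauli errors on a single qubit: X = bit flip, Z = phase flip, Y = both.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)     # |+> = (|0> + |1>)/sqrt(2)

# A phase flip leaves measurement probabilities in the 0/1 basis unchanged...
print(np.abs(Z @ plus) ** 2)         # [0.5 0.5] -- looks untouched
# ...but it maps |+> to |->, which an interference-based algorithm will notice.
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
print(np.allclose(Z @ plus, minus))  # True

# Y is a bit flip and a phase flip combined (up to a global phase i):
print(np.allclose(Y, 1j * X @ Z))    # True
```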

Overhead: Qubits, Qubits, and More Qubits

Perhaps the most daunting challenge is the sheer overhead QEC requires. To make a single logical qubit (an error-corrected, protected qubit), one must use many physical qubits. How many? That depends on the QEC code and the physical error rates, but in many leading approaches the number is astronomical by today’s standards. A popular QEC scheme, the surface code, needs on the order of ~1,000 physical qubits per logical qubit under realistic error rates. In other words, if you wanted 100 error-corrected qubits to run an algorithm, you might need ~100,000 physical qubits if using surface codes. This figure assumes each physical qubit is reasonably good (error <1%). If error rates are higher, you need even more redundancy.
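A rough way to see where figures like “~1,000 physical qubits per logical qubit” come from is the commonly quoted surface-code scaling law, sketched below. The prefactor, threshold, and target logical error rate are illustrative assumptions, not measured values for any device:

```python
# Back-of-envelope surface-code overhead. The scaling law
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2)
# and the count of roughly 2 * d**2 physical qubits per logical qubit are
# commonly used approximations; A = 0.1 and p_threshold = 1e-2 are assumptions
# chosen for illustration.

A, p_threshold = 0.1, 1e-2
p_physical = 1e-3            # per-gate error rate (99.9% fidelity)
target = 1e-15               # logical error rate wanted for deep algorithms

for d in range(3, 51, 2):    # surface-code distance d is odd
    p_logical = A * (p_physical / p_threshold) ** ((d + 1) / 2)
    if p_logical <= target * 1.01:          # small tolerance for float rounding
        break

print(f"code distance d = {d} -> ~{2 * d * d} physical qubits per logical qubit")
# Comes out around d = 27 and ~1,500 physical qubits: the same order of
# magnitude as the ~1,000-qubit figure cited above.
```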

There are more efficient codes (for example, quantum LDPC codes that IBM and others are researching aim to use perhaps 10× fewer qubits than surface codes), but even these still require dozens if not hundreds of physical qubits per logical qubit. The overhead extends beyond qubit count: more qubits means more interactions and more complexity in the system, which in turn can introduce new error modes. As a consequence, scaling up to a fault-tolerant quantum computer – one with enough logical qubits to do something classically intractable – likely means we need millions of physical qubits given current technology and codes.

This is a long-term engineering challenge, given that even the largest experimental devices have just over 1,000 physical qubits (with no effective error correction yet). It’s one reason experts caution that truly transformative quantum computing applications (like breaking encryption via Shor’s algorithm) will require not just thousands of qubits, but thousands of high-fidelity, error-corrected qubits, which puts them a decade or more out of reach.

The No-Cloning Theorem

As hinted above, the no-cloning theorem is a fundamental obstacle that shapes how QEC must be done. This theorem states that one cannot make a perfect copy of an arbitrary unknown quantum state. In classical error correction, if you want to protect a bit, you might simply copy it to multiple memory locations (e.g. triple modular redundancy) and if one copy is corrupted, the majority vote of the rest gives the correct value.
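For contrast, here is what that classical strategy looks like in a few lines of Python; it is precisely this copy-and-vote approach that the no-cloning theorem rules out for arbitrary quantum states:

```python
from collections import Counter

# Classical triple modular redundancy: copy the bit, then majority-vote.

def majority_vote(copies):
    return Counter(copies).most_common(1)[0][0]

original_bit = 1
copies = [original_bit] * 3       # copying is trivial for classical bits
copies[0] ^= 1                    # one copy gets corrupted
print(majority_vote(copies))      # 1 -- the error is outvoted
```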

Quantum data can’t be safeguarded by naive copying. Instead, QEC cleverly spreads information across entangled qubits. For example, rather than copying a qubit’s state, a code will entangle multiple qubits such that the logical information is delocalized – no single qubit “has” the full information. This way, if any one qubit suffers an error, the information can be recovered from the joint state of the others. Syndrome measurements then pinpoint which qubit erred without revealing the logical qubit’s value. The no-cloning theorem thus doesn’t make QEC impossible, but it forces QEC to use redundancy in an indirect way, via entanglement and syndrome extraction. The practical impact is again an increase in resource overhead and circuit complexity for any error correction scheme.

Other NISQ-Era Limitations

Beyond the theoretical challenges, today’s hardware limitations present very real constraints. NISQ devices not only have limited qubit counts and short coherence times, but also issues like connectivity (which qubits can directly interact with which) and gate speeds. Many QEC codes assume any qubit can interact with certain others to perform parity checks; in hardware, qubits might be arranged in a 2D grid or a line where interactions are local. This means implementing QEC might require extra “swap” operations to bring qubits together, adding more error opportunities.

Moreover, operations take time – if gates are too slow relative to decoherence, error correction can’t keep up. Some platforms like trapped ions have very high fidelity but relatively slow gate speeds (microseconds to hundreds of microseconds per gate), which could bottleneck an error correction cycle.

On the other hand, superconducting qubits operate faster (tens of nanoseconds per gate) but generally with lower fidelity per gate, meaning they suffer more errors in the same number of operations. Researchers are actively experimenting with techniques to extend coherence (e.g. better materials, dynamical decoupling) and increase connectivity (e.g. shuttling ions, coupling distant superconducting qubits via tunable buses) to mitigate these issues. Still, as of 2023, no quantum computer has yet demonstrated a fully error-corrected logical qubit that outlives the physical qubits supporting it – though partial demonstrations (catching one error at a time) have been made.
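A rough way to compare the two regimes is to ask how many gates fit inside one coherence window. The numbers below are illustrative assumptions in line with the typical figures mentioned above, not specifications of any particular machine:

```python
# How many gates fit inside one coherence time? Rough, illustrative numbers
# only (assumed, not specs of any specific device).

platforms = {
    # name: (typical coherence time in seconds, typical two-qubit gate time)
    "superconducting": (1e-3, 50e-9),    # ~1 ms coherence, ~50 ns gates
    "trapped ion":     (2.0,  100e-6),   # ~2 s coherence, ~100 us gates
}

for name, (t_coherence, t_gate) in platforms.items():
    print(f"{name:>15}: ~{t_coherence / t_gate:,.0f} gates per coherence window")
# Comparable gate budgets come out of both regimes, which is why neither
# fast-but-noisy nor slow-but-clean is automatically "better".
```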


All these limitations reinforce a central point: a quantum processor with superb fidelity and error mitigation can accomplish more with 100 qubits than a noisy device with 1000 qubits where errors run rampant. It’s why many experts urge focusing on improving qubit quality (fidelity, coherence, connectivity) at least as much as qubit quantity.

Error Mitigation: Early Utility Without Full Fault Tolerance

Given the formidable challenge of full error correction, an interim approach has gained traction: quantum error mitigation. Error mitigation doesn’t eliminate errors completely (as error correction aims to do), but instead finds ways to reduce or counteract errors after the fact, through clever circuit design and post-processing. In the NISQ era, error mitigation techniques – like extrapolating results to zero-noise, probabilistic error cancellation, and subspace verification – have enabled some quantum computations to give useful results even with noisy hardware.
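To give a flavour of how one such technique works, the sketch below mimics zero-noise extrapolation on synthetic data: the “measurement” is assumed to decay exponentially as noise is deliberately amplified, and the result is extrapolated back to the zero-noise limit. The decay model and all numbers are made up for illustration, not taken from a real device:

```python
import numpy as np

# Minimal sketch of zero-noise extrapolation (ZNE) on synthetic data.
# We pretend the measured expectation value decays exponentially with a
# noise-scaling factor (a common modelling assumption).

ideal_value = 0.85                       # the noiseless answer we pretend not to know
def noisy_measurement(noise_scale, decay_rate=0.3):
    return ideal_value * np.exp(-decay_rate * noise_scale)

# Run the "circuit" at deliberately amplified noise levels (e.g. via gate folding).
scales = np.array([1.0, 1.5, 2.0, 3.0])
values = noisy_measurement(scales)

# Fit log(value) vs scale and extrapolate back to scale = 0 (zero noise).
slope, intercept = np.polyfit(scales, np.log(values), 1)
zne_estimate = np.exp(intercept)

print(f"raw value at scale 1:   {values[0]:.3f}")
print(f"extrapolated (scale 0): {zne_estimate:.3f}   (true ideal: {ideal_value})")
```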

In fact, a breakthrough study by IBM researchers published in Nature in June 2023 provided evidence of quantum computing’s utility before achieving full fault tolerance. Using a 127-qubit superconducting processor (IBM’s Eagle chip), the team executed certain quantum circuits of a complexity that brute-force classical simulation could not handle. By carefully calibrating the device, improving coherence, and applying error mitigation strategies, they were able to measure accurate expectation values for these circuits, even though the raw circuit fidelity was low. In other words, they could extract meaningful results from an inherently noisy quantum computation by running the circuit multiple times and post-processing the outcomes to cancel out noise effects. The authors argued that this represents a concrete useful computation in the pre-fault-tolerant era – a hint that quantum computers need not be perfect to do something impactful, so long as we can understand and curb their errors. This result challenges the prevailing notion that we must fully conquer quantum error correction before quantum computers become useful. Instead, we are seeing a path where incremental improvements in fidelity combined with error mitigation may solve specific problems more efficiently than classical methods, even if error rates are still significant.

It’s important to temper expectations: these early demonstrations of quantum utility are problem-specific. In IBM’s case, the advantage was shown for calculating certain properties (expectation values) of random quantum circuits, which, while a valuable proof-of-concept, is not a broad application. It doesn’t mean quantum computers have broadly surpassed classical ones – far from it. What it signifies is a “targeted quantum advantage”: for carefully chosen tasks that align well with quantum hardware strengths (and where classical algorithms struggle), a noisy quantum computer with high fidelity operations and error mitigation can outperform classical simulation.

As hardware improves, we expect more of these niche cases to emerge, eventually broadening into more general quantum advantages. Crucially, all of these advances lean heavily on fidelity – the experiments require extremely careful control of errors. The 127-qubit IBM experiment, for example, was enabled by “advances in the coherence and calibration” of the processor and the ability to characterize and manipulate noise across the chip. In essence, they squeezed as much fidelity as possible out of the device and then used classical computation to compensate for the remaining noise. This hybrid approach will likely continue until true error-corrected quantum computers are realized.

Conclusion

In summary, while qubit count is the headline metric that often captures imaginations, fidelity and error management are equally (if not more) vital for quantum computing’s success. A quantum computer’s practical power is a balancing act between the quantity of qubits and the quality of its operations. We are learning that simply scaling up qubit numbers without improving fidelity offers diminishing returns – a noisy 1000-qubit machine might be no better (or even worse) than a 100-qubit machine with superior coherence and control. High fidelity enables deeper circuits and more complex algorithms to run before noise derails them, and it is a prerequisite for effective error correction in the future. Conversely, low fidelity qubits will require impractical levels of error correction overhead, or will limit computations to shallow, toy problems.

The road to large-scale, general-purpose quantum computing will therefore require major innovations in boosting fidelity – through materials science, engineering, and error correction techniques – alongside adding more qubits. This is reflected in industry roadmaps: for example, IBM’s goal of a 100,000-qubit machine by 2033 explicitly acknowledges that such a machine must be built on a foundation of fault-tolerance and high fidelity, not just raw qubit count. The coming years will likely bring steady improvements in gate fidelities, qubit coherence times, and smarter error mitigation, which in turn will unlock new milestones of quantum advantage.

Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.