
The Rise of Logical Qubits: How Quantum Computers Fight Errors

Quantum computing promises to tackle problems that stump today’s supercomputers, but there’s a catch: quantum bits (qubits) are notoriously error-prone. Even state-of-the-art qubits have error rates on the order of 1 in 1,000 per operation, which is far too high for running the billions of quantum logic gates required by useful algorithms. Enter the logical qubit – a concept that could be the key to making quantum computing scalable and reliable. In 2023, Google’s Quantum AI team demonstrated a milestone: by grouping 49 physical qubits into one logical qubit using an error-correcting surface code, they achieved a lower logical error rate than a smaller version of the same code. It was the first time adding more qubits actually reduced the error, a crucial step toward fault-tolerant quantum computing. Let’s explore what logical qubits are, why we measure quantum progress in these terms, and how they’re implemented.

What Is a Logical Qubit (and How Is It Different from a Physical Qubit)?

A physical qubit is the actual hardware realization of a quantum bit – for example, a tiny superconducting circuit, a trapped ion, or an electron spin. It’s the quantum analog of a transistor bit in your computer, except it can exist in superposition of 0 and 1. Physical qubits are fragile: stray vibrations, electromagnetic noise, or material defects can induce errors (flipping a 0 to 1 or randomizing the phase of a superposition). A logical qubit, by contrast, is an abstract, error-protected qubit that is encoded across many physical qubits. The idea is similar to how we use error-correcting codes in classical data storage: by spreading information across multiple bits, we can detect and correct errors. In the quantum case, we entangle several physical qubits so that the information is not localized in any single one. If one physical qubit in the group suffers a glitch, the collective encoding ensures the error can be identified and the underlying logical state can be restored. Essentially, a logical qubit “delocalizes” the quantum information across a team of physical qubits, so no single bad apple can spoil the barrel.

Crucially, a logical qubit behaves (from the user’s perspective) like a single, high-quality qubit – often idealized as an “error-free” qubit in theory. Researchers sometimes describe a logical qubit as the perfect qubit that algorithms assume. In practice, logical qubits aren’t truly perfect, but their error rates can be pushed far below those of the physical qubits that comprise them – in principle, suppressed exponentially as the code grows. For example, Google’s 49-qubit logical qubit in 2023 was still far from flawless, but its error rate per cycle was slightly lower than that of a smaller, 17-qubit (distance-3) version. The power of a logical qubit is that by increasing the number of physical qubits in the code, you can drive the logical error rate arbitrarily low (provided the physical error rate is below a certain threshold). This is the principle of quantum error correction (QEC): trading many imperfect qubits for a single superior qubit.

To illustrate the difference, imagine trying to keep a sensitive piece of information safe. A single physical qubit is like a message written in disappearing ink – it might be lost at any moment. A logical qubit is like writing the message in a coded form across several notebooks and having a team continually check and correct any smudges. The message as a whole (the logical qubit’s state) can survive even if some letters get smudged (physical qubit errors), as long as not too many errors happen at once. The price paid is complexity: instead of one qubit to represent a piece of information, you might need dozens or even thousands.

Why Do Algorithms Speak in Terms of Logical Qubits?

When researchers discuss running a quantum algorithm like Shor’s factoring algorithm or simulating a chemical molecule, they usually talk about how many logical qubits are required. This is because algorithms are typically designed assuming error-corrected qubits that can perform long calculations without failing. A “logical qubit” in this context basically means a qubit reliable enough to last through the entire algorithm. If Shor’s algorithm needs, say, 1000 qubits entangled and manipulating data across billions of operations, those 1000 qubits all have to be logical qubits – otherwise the computation would crash midway due to errors. In fact, estimates for cracking RSA encryption via Shor’s algorithm indicate you’d need on the order of thousands of logical qubits with extremely low error rates (on the order of $$10^{-12}$$ per operation). One analysis suggests factoring a 2048-bit number might require roughly 5000 logical qubits with error rates around a trillionth – far lower than any bare physical qubit can achieve.
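
To see where targets like $$10^{-12}$$ come from, here is a minimal back-of-envelope sketch in Python. The numbers are purely illustrative (not taken from any specific resource estimate): if a run consists of some total number of logical operations and we want the whole computation to succeed with high probability, a simple union bound sets the per-operation error budget.

```python
def required_logical_error_rate(total_logical_ops: float, target_success: float = 0.9) -> float:
    """Crude union-bound budget: each logical operation may fail with probability
    at most (1 - target_success) / total_logical_ops for the run to likely succeed."""
    return (1.0 - target_success) / total_logical_ops

# Hypothetical Shor-scale workload of ~10^11 logical operations in total (illustrative).
print(f"{required_logical_error_rate(1e11):.0e}")  # ~1e-12 per logical operation
```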

Expressing requirements in logical qubits gives a hardware-agnostic measure of algorithm complexity. It says: “If I had N perfect qubits, I could run this algorithm.” How we get those perfect (or near-perfect) qubits is left to the engineering of QEC. It’s similar to saying a classical computation needs X bits of precision and Y operations – one assumes error-corrected memory and computes with an abstraction of perfect bits, because real DRAM or CPU bit-flips are handled by error-correcting codes or redundancy under the hood.

In today’s quantum hardware, we don’t yet have any fully error-corrected logical qubits that can run indefinitely. So, when someone mentions current quantum processors have 50 or 100 qubits and can’t run Shor’s algorithm, the missing ingredient is that those are 50 physical qubits with error rates around 0.1–1%, not 50 logical qubits with error rates near zero. To run a useful algorithm, we first need to upgrade those physical qubits into logical qubits via error correction. That upgrade comes at a steep cost in quantity: each logical qubit will consume many physical ones. For instance, Google’s prototype logical qubit used 49 physical qubits to encode 1 logical qubit. If each logical qubit needed hundreds of physical qubits, you can see why factoring a large number (thousands of logical qubits required) might demand millions of physical qubits in practice. This is why leading quantum roadmaps aim to scale to huge numbers of qubits – not because the algorithms directly need, say, a million separate values, but because error-correcting that many logical qubits might require a million or more physical qubits in total.

In short, algorithmic requirements are phrased in logical qubits because that’s the level at which quantum algorithms can run reliably. It sets a target for hardware builders: produce enough stable logical qubits to meet the algorithm’s needs. And it sets a challenge: how efficiently can we turn physical qubits into logical ones?

Building a Logical Qubit: Quantum Error Correction in Action

So, how do we actually create a logical qubit out of noisy physical qubits? The answer is quantum error correction (QEC) codes. These are schemes that encode one qubit of quantum information into a larger entangled state of many qubits in such a way that certain errors can be detected and fixed. There are various families of QEC codes, each with pros and cons. The basic ingredients of any QEC code are: (1) Redundancy – multiple physical qubits per logical qubit, (2) Syndrome measurements – extra operations that periodically check for signs of errors without directly measuring the logical information, and (3) Decoding and correction – a method (often algorithmic or hardware-based) to interpret those syndrome signals and decide how to correct the system if an error is detected.
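
To make these three ingredients concrete, here is a deliberately simplified sketch: a classical simulation of the 3-qubit bit-flip repetition code, with pairwise parity checks as the syndrome and a lookup-table decoder. (Real QEC must also handle phase errors and must measure the parities without collapsing the encoded state, which this toy ignores.)

```python
import random

# Toy model of a 3-qubit bit-flip repetition code (classical simulation).
# Logical 0 -> [0,0,0], logical 1 -> [1,1,1].  Only bit-flip errors are modeled.

def encode(logical_bit):
    return [logical_bit] * 3                      # (1) redundancy

def measure_syndrome(qubits):
    # (2) syndrome: parities of neighboring pairs reveal where an error sits
    # without revealing the logical value itself.
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def correct(qubits, syndrome):
    # (3) decode: each syndrome pattern points at the single most likely flipped qubit.
    flip_for = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    if syndrome in flip_for:
        qubits[flip_for[syndrome]] ^= 1
    return qubits

block = encode(1)
block[random.randrange(3)] ^= 1                   # inject one random bit-flip
block = correct(block, measure_syndrome(block))
print(block)                                      # back to [1, 1, 1]
```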

Let’s explore the dominant QEC approach – the surface code – and compare it with other approaches like color codes, bosonic codes (e.g. cat and GKP encodings), and concatenated codes. Each method implements logical qubits but in different physical ways, with different trade-offs in overhead and performance.

Topological Codes: The Surface Code and Color Codes

The surface code has emerged as the leading QEC code for most quantum computing platforms today. It’s a type of topological code that arranges qubits on a 2D grid (like a checkerboard) and only uses local interactions between neighboring qubits to check for errors. In a surface code, a logical qubit is encoded in the joint state of a d×d patch of physical qubits (d is called the code distance). For example, a distance-3 surface code might use a $$3×3$$ grid of data qubits (plus ancilla qubits for measurement), and a distance-5 code uses a $$5×5$$ grid. Generally, a distance-d code can correct up to $$\lfloor (d-1)/2 \rfloor$$ errors. The beauty of the surface code is that it has a relatively high error threshold – around 1% error per gate in the physical qubits – below which increasing the code size will exponentially suppress logical errors. That means if your hardware qubits have, say, a 0.1% error rate, you can affordably reach extremely low logical error rates by using a large enough patch.
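
The “exponential suppression” below threshold is often summarized with a rule-of-thumb scaling (a heuristic model rather than an exact law): if the physical error rate $$p$$ is below the threshold $$p_{th}$$, a distance-d code has a logical error rate of roughly

$$p_L \approx A \left( \frac{p}{p_{th}} \right)^{(d+1)/2}$$

where $$A$$ is a constant of order one. Each increase of the distance by two multiplies the logical error rate by another factor of $$p/p_{th}$$, which is why operating well below threshold pays off so dramatically.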

Each logical qubit in a surface code requires on the order of $$d^2$$ physical qubits (so, dozens for small d, or thousands for larger d). For instance, Google’s distance-5 logical qubit spanned 49 physical qubits (25 data qubits and 24 measure qubits in a $$5×5$$ layout). The code works by continuously measuring “stabilizers” – specific multi-qubit parity checks – that don’t disturb the encoded information but reveal where errors have occurred. By tracking these syndrome measurements over time, a classical decoder can infer which physical qubits likely erred and suggest corrections. In Google’s experiment, they ran the surface code in cycles and observed that the larger distance-5 code had a lower logical error per cycle than the smaller distance-3 code, indicating the code was indeed correcting more errors than it introduced. This was a crucial validation that QEC can scale: “increasing the size of the code decreases the error rate of the logical qubit”, as the team reported.
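
The qubit counts quoted here follow a simple pattern for the standard (rotated) surface-code layout used in the experiments above: $$d^2$$ data qubits plus $$d^2-1$$ measure qubits. A two-line sketch makes the arithmetic explicit:

```python
# Physical qubits in a (rotated) surface-code patch of distance d:
# d*d data qubits plus d*d - 1 measure qubits.
def surface_code_qubits(d: int) -> int:
    return d * d + (d * d - 1)

print(surface_code_qubits(3), surface_code_qubits(5))  # 17 49 - matching the 17- and 49-qubit codes above
```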

Surface codes are so popular in quantum hardware roadmaps because they only require nearest-neighbor coupling on a 2D array and have that relatively forgiving error threshold. Both Google and IBM have built quantum chips explicitly designed for surface-code implementation. IBM’s latest processors use a “heavy-hexagon” layout – a variation of a hexagonal lattice tailored for the surface code – to minimize crosstalk and meet the local connectivity needs of the code. The surface code is also flexible: by creating different shaped patches or connecting them via lattice surgery (merging and splitting patches), one can perform logical operations like CNOT gates or teleport logical qubits around the processor. It’s essentially a LEGO brick approach to quantum computing: once you can reliably create one logical qubit on a patch, you can imagine tiling the plane with many such patches to get multiple logical qubits, and then link them up as needed for computation.

Closely related are color codes, another class of 2D topological codes. A color code also arranges qubits on a lattice (often triangular tilings) and has check operators that involve multiple qubits (often 6 or 8 per check, compared with 4 for the surface code). The key difference is that color codes have a richer structure allowing more transversal gates – meaning you can apply a single-qubit operation to every physical qubit and realize a useful logical gate. For instance, 2D color codes implement the Clifford gates transversally, and certain 3D color codes even allow a transversal implementation of the $$T$$ gate (a crucial non-Clifford operation that surface codes must implement indirectly, for example via magic state distillation). However, color codes usually have a lower error threshold than surface codes (due to those higher-weight checks). This means they can be more sensitive to noise – you need slightly better physical qubit fidelity to make them work as well. To date, surface codes have seen more experimental focus, but research continues into making color codes practical. In 2024, a team using a neutral-atom quantum processor demonstrated the preparation of logical qubits using both surface codes and color codes, even achieving “break-even” fidelity with a small-distance color code (meaning the logical qubit survived as long as the best physical qubit). This suggests that, with hardware improvements, color codes could one day rival surface codes on an even footing.

Concatenated Codes: Building Layer upon Layer

Before topological codes took center stage, many quantum architects imagined building logical qubits using concatenated codes. The idea of concatenation is straightforward: you take a small quantum error-correcting code (say one that uses 7 physical qubits to encode 1 logical qubit, like the Steane $$[[7,1,3]]$$ code which can correct one error), and then you treat each of those “logical” qubits as input to another layer of coding. In effect, you encode a qubit within a code within another code. By doing this recursively, each level of concatenation drives the error rate down at the cost of an exponential blow-up in qubit count. For example, if one round of the 7-qubit code yields a logical error rate of $$p \sim 10^{-3}$$, concatenating it one more time (49 physical per logical) might push error to $$p^2 \sim 10^{-6}$$, and so on.
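
A rough sketch of that scaling (idealized – it ignores the constant factors that real fault-tolerant circuits introduce): each additional level of a 7-qubit code roughly squares the logical error rate while multiplying the physical-qubit count per logical qubit by 7.

```python
# Idealized concatenation scaling for a 7-qubit (Steane-like) base code:
# each extra level roughly squares the logical error rate (constant factors ignored)
# and multiplies the number of physical qubits per logical qubit by 7.
def concatenate(p_one_level: float, levels: int):
    p_logical = p_one_level ** (2 ** (levels - 1))  # error after `levels` rounds of encoding
    n_physical = 7 ** levels                        # physical qubits per logical qubit
    return p_logical, n_physical

for levels in (1, 2, 3):
    p, n = concatenate(1e-3, levels)
    print(f"{levels} level(s): {n} physical qubits, logical error ~ {p:.0e}")
# -> 7 qubits at ~1e-3, 49 at ~1e-6, 343 at ~1e-12
```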

Concatenated codes (like Shor’s original 9-qubit code or Steane’s code) have relatively low thresholds in practice – often on the order of $$10^{-4}$$–$$10^{-3}$$ for physical error rates, depending on the code. This is because their syndrome-extraction circuits touch many qubits, which introduces many opportunities for error. They also often assume any qubit can interact with any other as needed (to perform the encoding gates or check circuits), which can be challenging on hardware with limited connectivity. Ion trap systems, which have the luxury of essentially all-to-all connectivity via shuttling or collective modes, were early demonstrators of concatenated code techniques. In fact, the first experiments to show real-time quantum error correction used a 7-qubit code on trapped ions: in 2021, researchers at Honeywell (now Quantinuum) encoded a logical qubit into 7 ions and added 3 ancilla ions to repeatedly detect and correct errors on the fly. They successfully ran multiple rounds of a fault-tolerant Steane code, catching both bit-flip and phase-flip errors as they occurred, with a classical controller applying corrections in real time. This was the first demonstration that you can detect and fix quantum errors mid-computation – a big validation for concatenated code QEC. However, at the time, the logical qubit’s error rate was still higher than that of the best physical qubit in the system, meaning they hadn’t yet reached the “break-even” point. The road to true fault tolerance would require either better physical qubit fidelities or more levels of concatenation (which quickly becomes impractical in ion traps with limited qubit numbers).

Concatenated codes are not off the table – they are, in fact, a complement to other approaches. Some proposals involve using a small surface code as the base and concatenating it with another layer, or vice versa. The flexibility of concatenation is that you can tailor each layer to different error sources. But as a standalone approach, concatenation faces tough challenges: the overhead in qubit count can become astronomical for deep concatenation, and the requirement for complex gate connectivity can be a hardware nightmare. That’s a major reason why the field pivoted to surface codes, which need only one layer (not multiple concatenations) if the physical error rate is below threshold.

Bosonic Codes: Cats and GKP – An Alternate Path

Not all qubits are two-level systems like spins or transmons. In fact, another approach to quantum error correction is to encode a qubit’s information into a higher-dimensional quantum system, such as a quantum harmonic oscillator. These are called bosonic codes because they often use bosonic modes (like photons in a cavity). Two of the most prominent bosonic codes are the cat code and the Gottesman-Kitaev-Preskill (GKP) code.

  • Cat codes: Named after Schrödinger’s cat (the famous thought experiment), cat codes encode a logical qubit into superpositions of coherent states in a microwave cavity – essentially “alive” + “dead” superpositions of an electromagnetic field. The simplest cat code uses two coherent states |α⟩ and |–α⟩ as the basis for logical 0 and 1. These states are well separated in phase space, so environmental noise only rarely flips one into the other (a bit-flip); the dominant decoherence channel, photon loss, instead shows up as phase flips. Cat codes therefore have biased noise: one type of error (bit-flips) is strongly suppressed while the other type (phase flips) becomes relatively more likely – a rough scaling sketch follows this list. This bias can actually be a feature – if most errors are of one kind, one can apply simpler error correction tailored to that. Researchers have demonstrated cat-code qubits with lifetimes longer than those of the best constituent components, effectively showing error suppression. A landmark 2016 experiment at Yale encoded a qubit in a superconducting cavity cat state and extended the lifetime by about 3x compared to an uncoded qubit. More recently, in 2020, stabilized “Kerr-cat” qubits were demonstrated, keeping the cat states alive and suppressing bit-flips autonomously.
  • GKP code: The GKP code (proposed by Daniel Gottesman, Alexei Kitaev, and John Preskill) encodes a qubit into a grid of points in the continuous phase space of an oscillator. In practice, the logical |0⟩ and |1⟩ are periodic, comb-like superpositions of quadrature states, offset from each other by half a grid spacing. The GKP code is theoretically able to correct small shifts (analog errors like minor displacements of the oscillator’s state) up to a certain size. The predominant error in many bosonic systems is photon loss (which causes a random downward jump in photon number). Remarkably, GKP states are resilient against this noise too – to a good approximation it shows up as small, correctable shifts in the encoded quantum information. Creating GKP states is hard, but in 2019 a team at ETH Zurich managed to encode a qubit into a vibrational mode of a trapped ion, realizing a GKP state. In 2020, researchers demonstrated QEC of a GKP-encoded qubit in a microwave cavity, showing for the first time that errors could be actively corrected in a bosonic mode. By 2024, an experiment achieved autonomous quantum error correction of a GKP qubit in a superconducting circuit: the system continuously corrected small shifts in the oscillator using a reservoir-engineering technique, and notably increased the logical qubit’s lifetime beyond the physical limit. In their words, they reached the point where “more errors are corrected than generated”, a clear break-even for that bosonic logical qubit.
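
Here is a rough sketch of why the cat-code noise bias mentioned above appears (a heuristic scaling, not a model of any particular device): the two coherent states overlap by a factor that shrinks exponentially with the mean photon number $$\bar{n} = |\alpha|^2$$, so bit-flips become exponentially rare as the cat gets “bigger”, while the phase-flip rate from photon loss only grows linearly with $$\bar{n}$$.

```python
import math

# Heuristic cat-qubit noise-bias scaling (illustrative, not calibrated to any device):
# bit-flip rate   ~ exp(-2 * nbar)   (overlap of the coherent states |a> and |-a>)
# phase-flip rate ~ kappa * nbar     (photon loss at rate kappa; each loss flips the phase)
kappa = 1e-3  # hypothetical single-photon loss rate, in arbitrary units

for nbar in (1, 2, 4, 8):
    bit_flip = math.exp(-2 * nbar)
    phase_flip = kappa * nbar
    print(f"nbar={nbar}: bit-flip ~ {bit_flip:.1e}, phase-flip ~ {phase_flip:.1e}")
```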

Bosonic codes are attractive because one bosonic mode (like a single microwave cavity) can effectively behave as many physical qubits’ worth of storage. You can correct errors in a high-dimensional space without needing dozens of two-level qubits – potentially a more hardware-efficient route to a logical qubit. In fact, a 2025 study showed a concatenated bosonic code approach: they encoded cat qubits in five microwave resonators (with ancillary transmon qubits to stabilize them) and then applied a small five-qubit repetition code on top. The result was a distance-5 “bosonic logical qubit” whose error rate was slightly lower than a distance-3 version, indicating successful scaling of error correction in the bosonic realm. The logical error per cycle dropped to about 1.65% for the larger code, versus 1.75% for the smaller code. While those error rates are still high in an absolute sense, the trend suggested that with further improvements, bosonic + stabilizer hybrid codes could achieve the low error rates needed – and do so with potentially fewer physical elements than an all-transmon surface code.

The trade-offs for bosonic codes include: the need for high-quality oscillators (very long-lived cavities or modes), and the challenge of performing two-qubit gates between logical bosonic qubits (often one must convert them into an intermediary like a transmon or photon to interact, which can introduce error). Additionally, decoding analog syndromes (in GKP, syndrome info is continuous-valued) can be computationally intensive. Nonetheless, bosonic logical qubits are a promising parallel track, especially for platforms that naturally have bosonic modes (like photonic systems or circuit QED systems with cavities).

Trade-offs Among Approaches: Qubit Overhead, Error Thresholds, and Hardware Compatibility

Each approach to building logical qubits comes with its own trade-offs:

Physical Qubit Overhead: How many physical qubits (or modes) per logical qubit? Surface codes typically require the largest overhead at a given target error rate, because they use a 2D array (order $$d^2$$ qubits for distance d). For instance, achieving a logical error rate of $$10^{−12}$$ (suitable for deep algorithms) might require a surface code of distance ~25–30, meaning on the order of 1000 physical qubits for one logical qubit. Concatenated codes might achieve the same with a few levels of encoding (each logical uses, say, 7 physical, then 7 of those logicals = 49 physical, then maybe one more round = 343 physical), potentially somewhat lower overhead for moderate targets. Bosonic codes can, in theory, achieve big gains – one high-quality oscillator with some ancillas might replace dozens of physical qubits. For example, a single GKP qubit in a cavity combined with a small surface code could drastically cut down total qubits needed for a given logical error rate, as each GKP qubit has an effective error rate much lower than a bare physical qubit. However, bosonic approaches usually still require a few auxiliary qubits and hardware for stabilization.
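
To show where figures like “distance ~25–30” and “on the order of 1000 physical qubits” come from, here is a minimal sketch using the rule-of-thumb suppression formula from earlier, with illustrative constants ($$A \approx 0.1$$, threshold $$p_{th} \approx 1\%$$, physical error rate 0.1%) that are not tied to any particular device:

```python
import math

def required_distance(p_phys: float, p_target: float, p_th: float = 1e-2, A: float = 0.1) -> int:
    """Smallest odd distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target.
    Rule-of-thumb surface-code scaling; all constants here are illustrative."""
    ratio = p_phys / p_th
    assert ratio < 1, "physical error rate must be below threshold"
    halves = math.ceil(math.log(p_target / A, ratio) - 1e-9)  # need (d+1)/2 >= this
    return 2 * halves - 1

d = required_distance(p_phys=1e-3, p_target=1e-12)
print(d, 2 * d * d - 1)  # code distance and total physical qubits per logical qubit
```

With these particular constants the model lands around distance 21 and roughly 900 physical qubits per logical qubit – the same ballpark as the figures above; the exact numbers shift with the assumed prefactor and error rates.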

Error Threshold and Tolerance: Surface codes shine with a high threshold (~1%). If your physical error rates sit just below that threshold (say 0.5% error per gate), a surface code will still work – you just need a large enough patch. Color codes have somewhat lower thresholds (maybe ~0.3–0.5% in practice), so they demand cleaner hardware to start with. Concatenated codes have practical thresholds typically quoted around 0.01–0.1%, depending on the code and fault-tolerance protocol – some specific codes like the Bacon-Shor code (a type of subsystem code) can tolerate biased noise well, but generally concatenation was considered viable only if gates are already quite accurate (each level multiplies the errors introduced by its circuits). Bosonic codes don’t have a strict “threshold” in the same way; rather their efficacy is often a function of oscillator quality. For cat codes, one speaks of a bias ratio (bit-flip vs phase-flip rates) – if you can make bit-flips exponentially rare (by increasing cat size) while keeping phase flips in check via engineering, you effectively create a biased-noise qubit that can be corrected with a high threshold using tailored codes. GKP codes have an effective threshold when concatenated with a stabilizer code, which can be quite high – some analyses show that a surface code built from GKP-encoded qubits could tolerate error rates of several percent per GKP qubit, meaning the analog encoding gives a cushion.

Syndrome Measurement Complexity: Measuring error syndromes (parities, photon number mod 2, etc.) can range from simple to elaborate. Surface codes require many simultaneous parity checks of neighboring qubits, which in superconducting circuits means a lot of microwave pulses and readouts every microsecond. This is a challenge but one that’s been met in small demos (Google and IBM have run cycles of many such checks). Concatenated codes often require multi-qubit gates or many-step circuits to extract syndromes, which can increase latency and errors during the measurement process itself. Bosonic codes sometimes allow autonomous syndrome extraction – e.g. a two-photon dissipation process that continuously corrects bit-flips in a cat qubit without needing measurements. That can reduce the active intervention needed, but designing such analog feedback is non-trivial.

Decoder Complexity and Latency: Once you have syndrome data, you need to compute correction instructions. The surface code famously can use a classical algorithm called minimum-weight perfect matching to quickly infer likely error chains causing the syndrome. Companies like Google have invested in fast hardware decoders to do this in real-time as the code scales up. Color codes may require more complex decoding (due to structure like “unavoidable hook errors” in circuits). Concatenated codes have simpler decoding in principle – often just decoding each level in succession – but a deep concatenation could incur some latency. In a trapped-ion experiment, a classical controller was able to decode and correct within the coherence time of the qubits for a 7-qubit code, but as codes grow larger that becomes harder without specialized hardware.
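
For a feel of the decoding problem, here is a deliberately tiny, brute-force sketch for a 1D repetition code: find the lowest-weight set of bit-flips consistent with the observed syndrome. Real surface-code decoders solve essentially the same optimization, but efficiently, using minimum-weight perfect matching instead of brute force.

```python
from itertools import combinations

# Toy decoder for a 1D repetition code (bit-flip errors only).
# A syndrome "defect" appears between every pair of neighboring bits that disagree.
# The decoder searches for the smallest set of flips that clears all defects -
# a scaled-down, brute-force cousin of minimum-weight matching.

def syndrome(bits):
    return [i for i in range(len(bits) - 1) if bits[i] != bits[i + 1]]

def decode(bits):
    n = len(bits)
    best_flips, best_cost = set(), float("inf")
    for r in range(n + 1):                        # try corrections of increasing weight
        for flips in combinations(range(n), r):
            trial = bits[:]
            for i in flips:
                trial[i] ^= 1
            if not syndrome(trial) and len(flips) < best_cost:
                best_flips, best_cost = set(flips), len(flips)
    return [b ^ (i in best_flips) for i, b in enumerate(bits)]

noisy = [0, 1, 0, 0, 0]          # logical 0 with a single bit-flip at position 1
print(decode(noisy))             # [0, 0, 0, 0, 0]
```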

Gate Implementation and Overhead: Forming a logical qubit is one thing; using it for computation is another. Some codes make certain gates easy and others hard. For example, in a surface code, a logical CNOT between two logical qubits can be done via a process called lattice surgery with relatively little overhead – just by merging and splitting patches. But a logical T (pi/8) gate is not easy – it typically requires a technique called magic state distillation, which itself consumes many logical qubits and operations. In contrast, a color code can do a T gate more directly (a potential advantage for future scaling). Bosonic codes often permit certain gates naturally (e.g. displacing a cat state in phase space corresponds to X or Z rotations on the logical qubit). However, entangling two bosonic logical qubits might require an intermediary entangling gate (like using a transmon to couple two cavities). The hardware has to support whichever two-logical-qubit gates you need for the intended algorithms.

Hardware Compatibility: Different codes may fit better on different hardware. Surface codes map beautifully to 2D grids of superconducting qubits (which is exactly what companies like IBM, Google, and Rigetti are building). In fact, most near-term quantum hardware roadmaps center on the surface code because of this natural fit and high threshold. Color codes might find a niche in architectures with triangular or hexagonal connectivity (some ion trap schemes or two-qubit gate topologies could emulate this). Concatenated codes could be well-suited for ion traps or photonic cluster-state computers where entangling arbitrary pairs is easier. Bosonic codes are obviously geared toward platforms that have bosonic modes: superconducting cavities, optical modes, or even collective motion modes in ions. For example, the GKP code has been explored in trapped-ion motion and circuit QED; cat codes in superconducting microwave cavities. Each hardware platform often explores a hybrid: superconducting qubit systems might use a surface code on qubits and embed those qubits in cavities for stability – a combo of bosonic and surface coding.

Why the Surface Code Dominates Quantum Computing Roadmaps

Given all these approaches, the surface code has become the star of most roadmaps for building large-scale quantum computers in the near to medium term. Why? In a nutshell, because it’s the most straightforward path known to a fault-tolerant quantum computer with tolerances that match current hardware. It strikes a balance between feasibility and efficiency:

Locality and Simplicity: The surface code needs only local nearest-neighbor interactions on a 2D plane. This meshes perfectly with chips that can be laid out in a grid. There’s no need for long-range interactions (unlike some concatenated schemes) or exotic quantum states (unlike bosonic codes which need high-quality cavities). It’s conceptually simple: just measure many X and Z parity checks in a checkerboard pattern, round after round. This simplicity is golden when trying to scale up, because every added complexity (like needing a transmon AND a cavity AND a coupler, etc.) is a potential point of failure or loss in yield.

High Error Threshold: With a threshold around 1%, the surface code can start working even if your physical qubits are not perfect. Many quantum hardware platforms are approaching that threshold in gate fidelity. Superconducting qubits, for instance, have 1-qubit gate errors ~0.1% and 2-qubit gate errors in the 0.5–1% range on the best devices. Ion traps have even lower gate error rates (0.01–1% depending on gate type). This means we are (or will soon be) in the regime where a surface code of modest size can beat the physical qubits. Indeed, Google’s 2023 result showed the trend: with physical error rates around 0.2–0.4% per operation, a distance-5 surface code achieved a slightly lower logical error rate than a distance-3 code on the same hardware – evidence that the device was operating just below threshold, where scaling up starts to help. This gives confidence that as qubit quality inches better and code distances grow, logical error rates will plummet exponentially.

Experimental Momentum: Dozens of groups have built expertise with the surface code. There are optimized decoders, years of simulations, and increasing intuition among engineers for how to implement it. The first experiments correcting even a single error used surface-code-like constructs (distance-3 codes) with superconducting circuits. Tech giants and startups alike have poured resources into surface code prototypes. This momentum means improved calibration techniques, better error mitigation for the syndrome extractions, and a growing library of tricks to handle problems (for example, dealing with qubit leakage – when a qubit leaves the computational basis – has been studied in the context of surface codes and solutions are being developed).

Modularity for Scaling: The surface code enables a modular vision: one can imagine a quantum computer as an array of tiles, each tile being a logical qubit or a block of them. Companies like Google have roadmaps where the next milestone after a single logical qubit is to create a “long-lived logical qubit” (essentially increasing the distance to make it very stable), then a “tileable module” that can perform a logical two-qubit gate, and then scale out to many tiles. Each module might be, say, a 100×100 physical qubit block that encodes a handful of logical qubits which can interact with adjacent blocks. This is appealing for engineering: you can focus on replicating a uniform structure.

Community and Standards: Having a “default” error-correcting code like the surface code means the industry can develop standard practices – much like classical computing had standard error-correcting codes for memory, etc. We already see that language in use: companies announcing they aim for X physical qubits with surface-code error correction to achieve Y logical qubits. The surface code is often taught as the canonical example in grad textbooks and courses, so new quantum engineers are familiar with it. All this makes it the safe bet for roadmaps.

None of this is to say the surface code is the only way. In fact, some near-term devices might try simpler QEC first (like a repetition code for bias mitigation, or small codes for specific error correction) as stepping stones. But so far, the surface code has delivered on key promises – notably, the break-even error correction milestone in a real system – and thus it instills confidence that it can handle the bigger challenges ahead.

State of the Art in 2025: How Many Logical Qubits Do We Have?

As of 2025, quantum computing labs around the world have made significant strides in creating and operating logical qubits, though we’re still in the single- to few-qubit era of logical quantum processors. Here’s a snapshot of the progress:

Single Logical Qubit “Break-Even” Demonstrations: The year 2023 was pivotal. Google Quantum AI demonstrated a logical qubit using the surface code (distance 5, 49 physical qubits) that had a lower error rate than any of the component physical qubits or smaller codes. This was the first instance of quantum error correction actually improving quantum memory in a scalable way. Around the same time (2022–2024), other groups showed similar break-even points with different approaches: for example, the GKP bosonic code experiment in 2024 achieved longer-lived logical qubits than physical ones in a superconducting cavity system. The cat code concatenated with a small stabilizer code also achieved a slight improvement in 2025 as mentioned. In trapped ions, Quantinuum reported that with improvements, their logical qubit (Steane code) is approaching the break-even where it starts winning over the physical errors (though as of their latest reports it was not decisively below physical error rates yet). Each of these “one logical qubit” experiments is like the Kitty Hawk moment for quantum error correction – the flights are short and wobbly, but they prove the concept can lift off.

Multiple Logical Qubits and Entanglement: The next step is to show that logical qubits can interact and form logical entanglement – the basis of doing actual computations. In late 2024, IBM researchers in Zurich demonstrated entanglement between two logical qubits on a superconducting device. They did this by running a surface code on part of their chip and a Bacon-Shor code on another part simultaneously, then performing a transversal CNOT (an operation that goes across corresponding physical qubits of the two codes) and lattice surgery to entangle the logical qubits. The experiment used 133 physical qubits of an IBM system to maintain two distance-3 logical qubits (one of each code type) and verified a Bell state between them with a fidelity around 94% after error correction and postselection. This was an exciting proof that even with today’s noisy devices, one can carry out a logical two-qubit gate and generate entanglement at the logical layer. Also in 2024, a team led by researchers at Harvard and QuEra (using neutral Rydberg atom arrays) reported a programmable logical quantum processor with up to 40 logical qubits (using color codes) and even entangled as many as 48 logical qubits in a special error-detecting code configuration. They leveraged the ability to reconfigure atomic qubits and perform mid-circuit measurements to implement various small codes across a whopping 280 physical qubits. While some of those logical qubits were of very small distance (some were just error-detecting, not fully correcting), it represents the largest scale logical qubit array demonstrated to date – essentially a small quantum computer operating on logical qubits. They showed improvements in algorithm fidelity due to the encodings, compared to using the bare physical qubits, underscoring that even near-term logical qubits can boost performance for certain tasks.

Diverse Hardware Implementations: Logical qubits have now been realized in multiple hardware platforms. Superconducting circuits (transmon qubits) have shown surface code logical qubits (Google, IBM) and bosonic logical qubits (cat and GKP at Yale/Google/Nord Quantique). Trapped-ion systems have demonstrated small QEC codes (repetition codes, Steane code) with multiple rounds of correction, and even real-time feedback. Neutral atoms (Rydberg arrays) as mentioned have implemented color codes and surface codes in creative ways. We should also mention spin qubits in silicon: while they lag in qubit count, there has been steady progress in two-qubit gates and small error-detecting schemes; a full logical qubit in silicon is still on the horizon, but the community is eyeing the surface code for when 2D arrays of spin qubits become available. Photonic qubits (e.g. optical cluster-state quantum computers) inherently use an error-correction approach (fusion-based or topological photonic codes) – companies like PsiQuantum are pursuing photonic architectures where the entire computation is built on quantum error-correcting codes from the get-go. As of 2025, photonic platforms haven’t demonstrated a logical qubit experimentally, but theoretical work suggests pathways to do so via clever entanglement of many light modes.

In terms of numbers: we have on the order of 1–2 working logical qubits in superconducting and ion platforms (with error rates slightly below physical). With cutting-edge neutral atom experiments pushing to dozens of logical qubits at low distance for specific demonstrations, we’re seeing the first hints of “logical qubit processors.” However, these logical qubits are still extremely error-prone compared to what’s needed for real algorithms. Think of it as the ENIAC era of logical qubits – we can count them, we can even do some toy calculations with them, but they’re far from the optimized, large-scale logical qubits we ultimately need.

Challenges Ahead: Scaling Up Logical Qubits

While the progress is impressive, major technical challenges remain on the road to large-scale logical qubit systems:

Scaling Physical Qubit Count: The most obvious challenge is the sheer number of physical qubits required. Today’s largest quantum chips have on the order of a few hundred qubits. To run even a modest algorithm with full error correction, we likely need thousands or millions of qubits. Engineering systems of that size – with sub-millisecond latency in control and readout, and with cryogenic or vacuum operation – is a daunting task. Efforts are underway (chip packaging, modular architectures, etc.) but this is a moonshot engineering problem akin to building the first supercomputers. As one quantum hardware director quipped, “we aim to build a machine with about a million quantum bits” for useful applications. The jump from 100 qubits to 1,000,000 qubits will require innovations at every level, from nanofabrication and wiring to software and error decoding.

Improving Physical Qubit Quality: Although error correction can reduce error rates, having better starting material (physical qubits) makes everything easier. There is a virtuous cycle here: the lower the physical error per gate, the smaller the code can be to achieve a given logical reliability. Many groups are simultaneously trying to push physical fidelities (through materials research, better fabrication, novel qubit designs like fluxonium or dopant spins) while also developing QEC. If, say, physical two-qubit gate errors drop from 0.5% to 0.05%, the overhead for a logical qubit might drop by an order of magnitude (the sketch after this paragraph illustrates that sensitivity). There’s also the issue of error correlations – QEC works best under the assumption of independent errors. Real devices can have bursts of correlated errors (e.g. a cosmic ray hitting a chip causing multiple qubits to glitch). Reducing and managing such events (perhaps with shielding or fast resets) is critical, because a correlated error affecting many qubits at once can defeat a code if not accounted for.
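
A quick sketch of that sensitivity, re-using the rule-of-thumb scaling $$p_L \approx A\,(p/p_{th})^{(d+1)/2}$$ with the same illustrative constants as before (nothing here is tied to a real device):

```python
import math

def physical_qubits_needed(p_phys: float, p_target: float = 1e-12,
                           p_th: float = 1e-2, A: float = 0.1):
    """Illustrative surface-code overhead from the rule-of-thumb scaling."""
    halves = math.ceil(math.log(p_target / A) / math.log(p_phys / p_th) - 1e-9)
    d = 2 * halves - 1              # smallest odd distance that suffices
    return d, 2 * d * d - 1         # rotated-surface-code patch size

for p in (5e-3, 5e-4):              # 0.5% vs 0.05% physical error per gate
    d, n = physical_qubits_needed(p)
    print(f"p_phys = {p:.2%}: distance ~{d}, ~{n} physical qubits per logical qubit")
```

With these made-up but plausible constants, the cleaner qubits shrink the per-logical-qubit overhead by roughly a factor of twenty – consistent with the order-of-magnitude intuition above.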

Decoding Technology: As logical qubits scale, the job of the classical co-processor that decodes errors becomes heavier. We need decoders that can handle maybe 1000 syndrome bits every microsecond, and output corrections in real-time. Researchers are exploring FPGA and ASIC implementations of decoders, as well as improved algorithms (including machine learning approaches to decoding). The challenge is to keep the decoding fast enough that it doesn’t become a bottleneck. In some cases, feedforward decisions (like whether to perform a logical gate depending on error syndromes) will be needed in the midst of algorithm execution, so ultra-low-latency decoding is a must for truly fault-tolerant gates.

Manufacturing and Yield: Building a chip with a thousand qubits is one thing; getting all thousand to perform well simultaneously is another. Yield of good qubits may be less than 100%, meaning larger arrays might have some “dead” qubits. Topological codes like surface code can sometimes work around defects (there’s research on patching holes or using ancilla rerouting if a check qubit is bad), but too many dead qubits and the code fails. Thus, scaling up will also stress the consistency and uniformity of fabrication in solid-state qubits. In ion traps or atoms, increasing numbers means more lasers or more trap zones – each adding a source of noise or calibration difficulty.

Complex Logical Operations: Thus far, most experiments have focused on preserving quantum information (memory) and perhaps simple entanglement at the logical level. Doing an actual quantum algorithm will involve sequences of logical gates, including non-Clifford gates like T, and possibly mid-circuit measurements and feedback between logical qubits. Executing a multi-step logical circuit while keeping errors at bay is a next big hurdle. For instance, teleportation of a logical qubit was recently demonstrated as a means of shuttling information between parts of a quantum processor. Such techniques will be essential in architectures where moving physical qubits is limited (like distributed or modular quantum computers). Each new logical operation (be it teleportation, state injection, logical reset) comes with its own failure modes that must be characterized and suppressed.

Cross-Technology Integration: It’s possible that scaling will require hybrid approaches – for example, using both superconducting qubits for fast operations and bosonic cavities for memory, or coupling ion trap modules with photonic links for communication. Integrating different technologies adds complexity: you have to manage errors at the interface (e.g. converting an ion’s state to a photon and back). Ensuring the whole system remains fault-tolerant across these conversions is non-trivial. But without this, building one monolithic device with everything might be impractical (think of the wiring for a million superconducting qubits – there’s not enough space in a dilution refrigerator for that many control lines; instead one imagines a modular approach with local qubit modules connected by quantum interconnects).

NISQ vs. Fault-Tolerant Gap: We currently live in the NISQ (Noisy Intermediate-Scale Quantum) era, where algorithms are run on physical qubits with clever error mitigation but not full correction. As we introduce logical qubits, initially they will be few and expensive – perhaps we can make 1 or 2 high-quality logical qubits, but an algorithm might need 100. There will be an awkward period where using logical qubits for a task might not yet be advantageous because we can’t encode the whole algorithm’s worth. Researchers will have to find ways to leverage partial error correction – maybe encoding only the most critical qubits logically, or using logical qubits as repeaters for long circuits while others run uncorrected. Bridging this gap between small-scale QEC and full-scale fault tolerance will be as much a software challenge (quantum compilation and clever algorithm design) as a hardware one.

Despite these challenges, the trajectory is clear. The field has moved from theorizing about logical qubits and quantum error correction in the 1990s, to baby-step demonstrations in the 2010s, to now – in the mid-2020s – showing early practical logical qubits that actually beat physical hardware performance. Each year brings new “firsts,” like the first logical qubit memory beyond break-even, the first logical qubit entanglement, the first logical circuit. It’s a thrilling time reminiscent of the dawn of classical computing, with researchers improvising and improving each component relentlessly.

Conclusion

Logical qubits are the linchpin for delivering on the promise of quantum computing. They are the qubits as we wish we had them – long-lived and trustworthy – brought to life by the ingenuity of quantum error correction. By encoding information across many imperfect qubits, scientists have shown they can create a single superior qubit, and the more qubits you throw at it, the better it gets. This concept transforms how we talk about quantum computing: it shifts the focus from raw qubit count to usable qubit count. Ten physical qubits are just ten fragile quantum objects, but ten logical qubits (each perhaps made from dozens of physical ones) could someday form a small quantum computer capable of non-trivial computations.

As of 2025, logical qubits have graduated from theory to laboratory reality, albeit a very challenging one. The consensus is that surface codes will pave the way in the near term, forming the backbone of quantum computing roadmaps. Yet alternative approaches like color codes and bosonic encodings provide hope for more efficient or flexible implementations in the future. If current trends continue, the late 2020s may see the first handfuls of logical qubits that can perform simple algorithms with accuracy impossible on any bare physical device.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven professional services firm dedicated to helping organizations unlock the transformative power of quantum technologies. Alongside leading its specialized service, Secure Quantum (SecureQuantum.com)—focused on quantum resilience and post-quantum cryptography—I also invest in cutting-edge quantum ventures through Quantum.Partners. Currently, I’m completing a PhD in Quantum Computing and authoring an upcoming book “Practical Quantum Resistance” (QuantumResistance.com) while regularly sharing news and insights on quantum computing and quantum security at PostQuantum.com. I’m primarily a cybersecurity and tech risk expert with more than three decades of experience, particularly in critical infrastructure cyber protection. That focus drew me into quantum computing in the early 2000s, and I’ve been captivated by its opportunities and risks ever since. So my experience in quantum tech stretches back decades, having previously founded Boston Photonics and PQ Defense where I engaged in quantum-related R&D well before the field’s mainstream emergence. Today, with quantum computing finally on the horizon, I’ve returned to a 100% focus on quantum technology and its associated risks—drawing on my quantum and AI background, decades of cybersecurity expertise, and experience overseeing major technology transformations—all to help organizations and nations safeguard themselves against quantum threats and capitalize on quantum-driven opportunities.