
Capability 1.4: Qubit Connectivity & Routing Efficiency

This piece is part of an eight‑article series mapping the capabilities needed to reach a cryptanalytically relevant quantum computer (CRQC). For definitions, interdependencies, and the Q‑Day roadmap, begin with the overview: The Path to CRQC – A Capability‑Driven Method for Predicting Q‑Day.

(Updated in Sep 2025)

(Note: This is a living document. I update it as credible results, vendor roadmaps, or standards shift. Figures and timelines may lag new announcements; no warranties are given; always validate key assumptions against primary sources and your own risk posture.)

Introduction

Qubit connectivity refers to which qubits can interact directly (perform two-qubit gates) with each other. This is often visualized as a connectivity graph: each node is a qubit, and an edge between two nodes means those qubits can be coupled for a two-qubit gate. Some hardware has a dense graph (even complete or all-to-all connectivity), meaning any qubit can directly entangle with any other. Others have a sparse graph, e.g. a 2D grid where each qubit only connects to its nearest neighbors.

Routing efficiency then refers to how we perform operations between qubits that aren’t directly connected – essentially, the strategies to move quantum information across the device as needed. If qubits that must interact are not neighbors on the graph, one must route the information through intermediate hops. This typically involves inserting SWAP gates (which exchange the states of two qubits) to shuttle quantum states along a path until the desired qubits become adjacent, or using other techniques like shuttling physical qubits (moving ions or atoms), entanglement swapping/teleportation (using intermediary entangled pairs to transmit quantum states), or dynamic couplers that can connect distant qubits on demand.

In short, connectivity determines the “wiring” of the quantum processor’s internal network, while routing is how we utilize that wiring to get qubits to talk to each other.
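
To make the routing idea concrete, here is a minimal Python sketch (not tied to any particular device or SDK; the toy graph, qubit labels, and the one-SWAP-per-hop rule are simplifying assumptions) that finds the shortest path between two qubits on a sparse coupling graph and counts the SWAPs a compiler would have to insert:

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """Breadth-first search over an undirected coupling graph.
    coupling: dict mapping each qubit to the set of qubits it can gate with directly."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        q = queue.popleft()
        if q == dst:
            path = [q]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for nb in coupling[q]:
            if nb not in prev:
                prev[nb] = q
                queue.append(nb)
    raise ValueError("qubits are not connected")

def routing_cost(coupling, src, dst):
    """SWAPs needed to make src and dst adjacent by walking src's state along the path."""
    hops = len(shortest_path(coupling, src, dst)) - 1
    swaps = max(hops - 1, 0)                          # one SWAP per intermediate hop
    return {"swaps": swaps, "extra_cnots": 3 * swaps} # a SWAP is typically 3 CNOTs

# Toy linear chain of five qubits: 0-1-2-3-4 (each qubit only talks to its neighbours)
line = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(routing_cost(line, 0, 4))    # -> {'swaps': 3, 'extra_cnots': 9}
```

Even on this five-qubit toy chain, interacting the two end qubits costs three SWAPs, roughly nine extra two-qubit operations before the intended gate even runs.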

Why does connectivity matter?

Just as a city with poor transportation links suffers traffic jams and long travel times, a quantum computer with limited qubit connectivity will suffer slow, communication-heavy computations. When qubits need many intermediate SWAPs or relay steps to interact, each extra step takes time and introduces additional error risk. High connectivity – ideally allowing any two qubits to interact with few or no hops – is thus essential for scaling up quantum algorithms without blowing the error budget or runtime.

This is especially critical for the kind of massive algorithms required to break cryptography. A cryptographically relevant quantum computer (CRQC) aiming to run Shor’s factoring algorithm on RSA-2048 needs thousands of logical qubits all working in concert over billions of operations. If those qubits are only arranged in a nearest-neighbor grid, the overhead of shuffling entanglement across the chip could slow the computation to a crawl or introduce so many errors that the algorithm fails.

In essence, connectivity turns a collection of many qubits into a single cohesive quantum computer. Without sufficient connectivity and efficient routing, a large device would behave more like isolated regions that can’t fully cooperate on one big problem – more akin to several smaller quantum computers running in parallel rather than one powerful machine attacking a unified task.

From a fault-tolerance perspective, connectivity is also a key enabler for quantum error correction (QEC) and other supporting operations. Leading QEC codes have specific interaction patterns (for example, a surface code logical qubit requires each data qubit to frequently interact with its neighboring measure qubits). If the hardware’s native connectivity doesn’t match the code, additional SWAPs or other workarounds are needed to execute error-correcting cycles. A platform with flexible connectivity can implement these QEC operations more directly and in parallel, improving the error correction cycle time.

Similarly, distributing special resource states (like magic states for non-Clifford gates) across the processor, or performing coordinated syndrome measurements for error detection, all become easier when any qubit can quickly connect to any other.

In summary, strong qubit connectivity and fast routing are mission-critical for achieving a CRQC: they ensure that having, say, a million physical qubits genuinely translates into the ability to execute one giant algorithm rather than a tangle of bottlenecks. Without this capability, even a quantum computer with enough qubits in principle could fail to break RSA within a feasible time, as it would spend too many cycles just “communicating” internally rather than doing useful work.

CRQC Requirements

What exactly is needed from connectivity for a cryptographically relevant quantum computer? Based on current benchmarks, we would require the ability to entangle arbitrary pairs of logical qubits (on the order of 1,000+ logical qubits, which corresponds to perhaps $10^6$ physical qubits) within only a few quantum error-correction cycles (each cycle possibly on the order of microseconds).

In practical terms, this means the “communication latency” between any two logical qubits must be extremely low – comparable to the time of just a couple native two-qubit operations. If a logical CNOT between distant parts of the processor takes, say, 100 times longer than a local gate, it would severely hurt the throughput. Thus, routing overhead per operation must be negligible relative to the base clock speed of the device.
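
As a rough illustration of why routing overhead must stay negligible, the following back-of-the-envelope sketch shows how a fixed logical workload stretches in wall-clock time as the per-gate routing penalty grows. Every number here is an assumption chosen only for illustration, not a measured value from any system:

```python
# Back-of-the-envelope only; every number below is an assumption chosen for illustration.
qec_cycle_s     = 1e-6     # assumed QEC cycle time (~1 microsecond)
logical_depth   = 1e9      # assumed serial logical-gate depth for a Shor-scale workload
cycles_per_gate = 3        # assumed QEC cycles per logical gate when routing is "free"

for routing_penalty in (1, 10, 100):   # extra slowdown factor from SWAP chains / slow links
    seconds = logical_depth * cycles_per_gate * routing_penalty * qec_cycle_s
    print(f"routing penalty x{routing_penalty:>3}: ~{seconds / 3600:7.1f} hours")
```

With these placeholder numbers, a 100x routing penalty turns a sub-hour run into several days, which is exactly the regime where decoherence, drift, and operational stability start to dominate.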

Moreover, if the architecture is modular (comprised of multiple chips or modules connected via links), those inter-module connections must be high-fidelity (ideally 99% fidelity or better per entangling operation) and very low latency.

In other words, using a link to connect qubits in different modules should not be much slower or less reliable than doing a gate on the same chip. Meeting these requirements is necessary to execute something like Shor’s algorithm within the limited error-corrected operation budget and runtime available (often estimated on the order of days or weeks for factoring 2048-bit RSA).

High connectivity and routing efficiency ensure the quantum computer can keep all of its qubits busy doing computation rather than idling or waiting in line to interact, thereby maximizing the quantum operations throughput (QOT).

Qubit Connectivity Across Quantum Hardware Modalities

Quantum hardware platforms differ dramatically in their native connectivity and how they can route information. Below we survey how the major modalities approach this challenge, and the techniques used to enhance connectivity or perform routing:

Superconducting Qubits: 2D Grids and Swap Networks

Most superconducting qubit processors (like those from IBM, Google, Rigetti, etc.) are built on a chip with qubits laid out in a planar array, and qubits interact via microwave or tunable couplers with only their near neighbors. A common topology is a square or heavy-hexagonal lattice where each qubit has degree 2 or 3 connectivity. For example, IBM’s 127-qubit and 433-qubit chips use a heavy-hex lattice (each qubit connects to 2 or 3 others in a hexagonal tiling), which simplifies wiring and limits crosstalk.

The upside of this approach is that it’s relatively straightforward to engineer on chip and can scale to a few hundred qubits per die.

The downside is that any algorithmic interactions between qubits that are not neighbors must be broken down into nearest-neighbor steps. In a superconducting device, a SWAP gate (typically implemented with three CNOT gates) is the primary method of routing information; one can think of it as physically exchanging two qubits’ quantum states if they need to “trade places” in the lattice.

For a long-range interaction, one might perform a sequence of SWAPs moving a qubit’s state along a path (like passing a baton along a relay) until the two target qubits become adjacent and can perform the desired gate. Each such hop not only costs gate time but also accumulates error (since each SWAP is multiple physical operations). This can quickly balloon the circuit depth.

For instance, implementing a quantum Fourier transform on a linear or grid architecture requires swapping qubits many times to bring far-apart qubits together, massively increasing the total gate count compared to an all-to-all architecture – a known challenge for running Shor’s algorithm on limited connectivity. There is a concern that the SWAP overhead for certain circuits could negate speedups, making a nearest-neighbor superconducting device require orders of magnitude more operations to run the same algorithm as an all-to-all device (thus much higher error risk).

In short, limited connectivity in superconducting chips directly inflates execution time and error accumulation.
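
The gate-count blow-up described above can be sketched with a naive counting argument. The snippet below is an illustrative model, not a compiler: it assumes every pair of qubits must interact once (a QFT-like pattern) and charges 3*(distance - 1) extra CNOTs per long-range pair for a one-way SWAP chain, ignoring smarter swap-network tricks. It compares a nearest-neighbor line against an all-to-all device:

```python
# Naive gate-count model: every pair of n qubits interacts once (a QFT-like pattern).
# A nearest-neighbour line pays 3*(distance - 1) extra CNOTs per pair for a SWAP chain;
# an all-to-all device pays nothing extra. Real compilers do better; this is an upper bound.
def two_qubit_gate_counts(n):
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    all_to_all = len(pairs)                              # one native gate per pair
    line = sum(1 + 3 * (j - i - 1) for i, j in pairs)    # gate + SWAP-chain overhead
    return all_to_all, line

for n in (8, 32, 128):
    a, l = two_qubit_gate_counts(n)
    print(f"n={n:>3}: all-to-all {a:>7} gates, line {l:>9} gates, overhead x{l / a:.0f}")
```

Under this naive model the overhead factor grows roughly linearly with the number of qubits, which is why compilers work hard on swap networks and why architects work hard on better connectivity.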

To mitigate this, superconducting architectures have pursued a few strategies. One is the use of tunable couplers between qubits. Instead of having static always-on connections (which can induce unwanted crosstalk), a tunable coupler (often an extra small superconducting element between two qubits) can be “turned on” to mediate an interaction and turned off to isolate the qubits at other times. IBM’s recent processors (e.g. the 133-qubit Heron and newer Heron R2) use tunable couplers for all nearest-neighbor links, which allows them to activate only the intended two-qubit gates at a given time and otherwise decouple qubits to reduce interference. This not only improves fidelity but also effectively means any given pair of connected qubits can be linked when needed and left alone when not, enabling more flexibility in scheduling gates in parallel across the chip. Google’s Sycamore and other designs similarly employ tunable couplers or tunable qubit frequencies to control coupling. The result is higher fidelity entanglement and the ability to run many two-qubit gates simultaneously without collisions. However, tunable couplers alone do not create new graph connections – they still largely work on a fixed nearest-neighbor layout.

Another approach is to extend connectivity beyond nearest neighbors by architectural innovation. IBM has explicitly laid out a modular scaling roadmap: rather than putting, say, 1000 qubits on one monolithic chip (which is difficult due to fabrication and wiring constraints), they plan to connect multiple smaller chips into one larger processor via high-speed links. IBM is introducing multi-chip modules like the 408-qubit “Crossbill” and 462-qubit “Flamingo” processors, which use chip-to-chip couplers to join separate dies into one continuous lattice. The heavy-hex topology is continued across the boundary of chips by these “quantum buses” or couplers, so a qubit at the edge of one chip can directly interact with a qubit at the edge of another chip as if they were neighbors. The goal is that the user doesn’t even realize there are multiple chips – it behaves like a larger contiguous device.

By 2026-2027, IBM plans to demonstrate entangling not just two but multiple chips: their “Kookaburra” and “Cockatoo” architectures will use so-called L-couplers to link quantum chips like nodes in a network, effectively forming a modular quantum computer with thousands of physical qubits. This is akin to extending the roads between separate neighborhoods (chips) so that qubits on different chips can become neighbors during computation.

Importantly, IBM is also augmenting on-chip connectivity: the 2025 “Loon” processor will test C-couplers that connect qubits over longer distances within the same chip (not just adjacent ones). These longer-range couplers can skip a few intermediate qubits to directly link more distant parts of the chip. The motivation is partly to support new QEC codes (like quantum LDPC codes) that require non-local parity checks – by adding a few long edges in the connectivity graph, one can implement these codes with far fewer SWAPs. It’s a clear example of co-design: choosing a better error-correcting code and modifying hardware connectivity (via C-couplers) to implement that code efficiently.

Beyond couplers, another method to effectively increase connectivity in superconducting platforms is using quantum teleportation or entanglement swapping through intermediate “communication qubits” or photons. In theory, one could create an entangled pair connecting two distant qubits (for example, using a microwave photon or an optical photon interface) and then perform a teleportation protocol to transfer a quantum state or perform a remote gate. This is not standard in today’s superconducting systems, but research is underway on microwave-to-optical transducers to link superconducting qubits in different cryostats or on different chips via optical fiber – essentially building a small quantum network between chips. A 2025 experiment by ETH Zurich, for instance, demonstrated entanglement of superconducting qubits in separate cryogenic modules using microwave photons and a superconducting link (albeit at modest fidelity). Meanwhile, companies like IBM are exploring quantum communication links as noted (Flamingo’s built-in communication link is likely optical or microwave-based).

These are early steps toward telecom-connected superconducting modules, where a photonic interconnect could give any qubit on one module a hook into another module without a direct wire. If achieved with high fidelity, this effectively gives a form of all-to-all connectivity across a distributed system – every qubit could reach every other qubit via a one- or two-hop entanglement path.

Trapped-Ion Qubits: All-to-All Connectivity and Ion Shuttling

In trapped-ion quantum computers, qubit connectivity is in many ways simpler: in a single ion trap, every qubit (ion) can interact with any other via shared motional modes. Ions confined in electromagnetic traps have collective vibrational modes that lasers can use as a bus: by tuning lasers to those modes, one can entangle arbitrary pairs of ions without needing them to be adjacent in physical space. Thus, even if ions are arranged in a line, the interaction graph is fully connected. For example, IonQ’s systems have demonstrated 11 to 20+ ions where any pair can be directly entangled with a Mølmer-Sørensen gate in one step, no swap gates required. This all-to-all connectivity means that quantum algorithms do not incur the huge SWAP overhead that a sparse architecture would; any qubit can “reach” any other qubit as needed.

In fact, having all-to-all coupling has been cited as a major advantage in solving problems with many long-range interactions (like certain optimization problems) because it yields faster time-to-solution and higher result fidelity – essentially, you can implement the desired interactions directly instead of decomposing them into sequences. As an example, IonQ recently solved a 36-qubit all-to-all connected spin-glass optimization problem on their trapped-ion system, something that would be extremely challenging on a limited-connectivity device, highlighting that full connectivity can tackle dense interactions optimally. Prof. Enrique Solano from Kipu Quantum (partnering with IonQ) put it succinctly: “Connectivity between qubits impacts efficiency and accuracy. Having all-to-all connectivity means faster time to solution, with higher quality results, and is a unique characteristic of trapped-ion systems.” This reflects how critical this capability is: fewer routing steps translate to fewer errors and a greater chance of algorithmic success.

However, the story doesn’t end there – all-to-all within one trap still has scaling limits. The number of ions in a single trap is limited by mode crowding and gate speed: as you add more ions, the motional spectrum gets denser and gates between arbitrary ions tend to slow down and become less precise (because you have to spectrally isolate modes or use complex pulse shaping). Current high-fidelity two-qubit gates in ion traps take on the order of tens to hundreds of microseconds. If you have 50 or 100 ions all interacting globally, gates may need to be slower or more finely tuned, which increases the time per operation. Longer chains can also suffer more from phonon mode decoherence and difficulty in calibration. Thus, while 20-50 ions with all-to-all gates have been demonstrated (IonQ has reported up to 29 or more, Quantinuum H1 up to 32 ions, etc.), going to thousands of ions in one trap is not practical. This is where routing and modularity come in for ion-based systems as well.

Trapped-ion architectures pursue a strategy known as QCCD (quantum charge-coupled device), which envisions a large quantum processor as a network of many ion traps or trap zones that can shuttle ions between them. Instead of one long chain of 1000 ions, you might have dozens of smaller zones (each with, say, 10-50 ions) that are interconnected. Ion shuttling is a form of physical routing: ions can be physically transported through junctions on a micro-fabricated trap chip, split from one chain into another, or swapped in position, all using electric fields. Quantinuum (formerly Honeywell) has pioneered this approach – their devices have a “racetrack” design where ions can be rearranged and shuttled around to different interaction regions. In principle, this preserves the all-to-all logical connectivity because any ion can be moved next to any other ion given enough shuttling operations. They achieve this with remarkably low error; mid-circuit shuttling and recooling operations have been integrated without losing coherence on Quantinuum’s H-series devices. As their website notes, “moving and regrouping qubits into arbitrary pairs…with near-perfect fidelity…enables maximum flexibility in algorithm design,” effectively realizing all-to-all connectivity in a flexible way. The benefit is that one can keep gate zones small (maintaining high fidelity and reasonable speed) but still connect any two qubits by moving them to the same zone when needed. The trade-off is latency – shuttling an ion might take a few hundred microseconds, during which other operations might be paused for that ion. Still, this can be parallelized to some degree, and if done within the QEC cycle time, it may be acceptable.
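
A toy timing model makes the shuttling trade-off tangible. The numbers below are assumptions picked only to match the rough scales quoted above (a few hundred microseconds per transport step, tens of microseconds per gate), not measured figures for any specific device:

```python
# Toy timing model for one QCCD-style step: transport ions, then run gates in a zone.
# All values are assumptions matching only the rough scales quoted in the text.
shuttle_time_s    = 300e-6   # assumed time to split/shuttle/merge ions for one regrouping
gate_time_s       = 50e-6    # assumed two-qubit gate time inside a gate zone
gates_per_regroup = 4        # assumed gates executed per regrouping step

useful_s = gates_per_regroup * gate_time_s
total_s  = useful_s + shuttle_time_s
print(f"fraction of time spent gating: {useful_s / total_s:.0%}")   # ~40% with these values
```

With these assumptions, over half of the wall-clock time goes to transport; parallelizing shuttling across zones, or hiding it behind other useful work, is what keeps the duty cycle acceptable.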

Beyond a single multi-zone chip, trapped ion researchers are also aggressively developing photonic interconnects between separate ion trap modules. The leading idea is to entangle ions in different traps by having them each emit a photon and interfering those photons to herald entanglement (a technique long used in ion networking experiments). Once two remote ions are entangled, quantum teleportation can be used to perform gates between qubits in different traps – effectively extending all-to-all connectivity across modules. A significant milestone was achieved by Oxford University in 2025: they demonstrated distributed quantum computing by linking two independent ion trap modules with a photonic network and performing a two-qubit gate between ions in separate traps. They used quantum teleportation to enact a logical CZ gate between an ion in Module A and an ion in Module B, with 86% fidelity. They even ran a simple algorithm (Grover’s search) that required multiple inter-module operations, as a proof that a computation can be spread over physically separated processors. This experiment, published in Nature, is arguably the first realization of a modular quantum computer: two traps, about two meters apart, connected by optical fiber, behaving as one system. While 86% gate fidelity is far below the >99% we have for local gates, it’s a start. Ongoing work aims to boost this by using better photon collection, cavity enhancements, and improved stabilization of the link. Companies like Quantinuum are also pursuing photonic connected modules: their design philosophy involves smaller high-fidelity traps networked optically. IonQ’s roadmap similarly indicates an intention to use optical switching networks to connect multiple 64 or 256-qubit ion trap devices in the coming years.

The current status for ion trap connectivity: within a single module, we have essentially all-to-all connectivity at high fidelity. Ion traps hold world records for two-qubit gate fidelities (recently 99.9% in certain systems) and demonstrate that eliminating SWAP overhead leads to fewer steps and thus fewer errors for complex circuits. As Quantinuum highlights, not needing SWAP gates means you can execute algorithms in far fewer operations, directly translating to higher success rates. This strength has allowed even current ion machines (with only tens of qubits) to solve problems that would be inefficient on superconducting chips of similar size, due to connectivity advantages. For scaling up: small-scale modular entanglement has been shown (2 modules, 2-3 qubits each). The next steps are increasing the rate and fidelity of these links, and connecting more modules. Within a few years we might see 3-4 trap modules networked (each perhaps with 50+ ions) which if successful, yields a ~200-qubit system with essentially all-to-all connectivity across it. The routing in ion systems can involve a combination of strategies: qubit teleportation for remote links and physical shuttling for moving qubits locally. Both have been individually demonstrated; the challenge is integrating them with QEC and at scale. Notably, ions have relatively slow cycle times (microseconds per gate), so one must ensure that adding communication steps (which might take similar order of time) doesn’t slow the overall algorithm beyond the error correction threshold. We’ll revisit that concern under challenges.

Neutral Atoms: Reconfigurable Arrays and On-Demand Connectivity

Neutral atom quantum computers (offered by companies like QuEra, Pasqal, ColdQuanta/Infleqtion, Atom Computing, etc.) present yet another connectivity paradigm. These systems trap neutral atoms (often using optical tweezers) and typically use Rydberg interactions or cavity-mediated interactions to entangle atoms. In a basic 2D neutral atom array, connectivity can be “local” – for example, Rydberg blockade typically affects only atoms within some micron-scale radius, meaning each atom can directly interact with those within that radius (a few nearest neighbors in a dense array).

However, what makes neutral atom platforms unique is their dynamic reconfigurability. Using optical tweezers, one can physically move qubits or rearrange the array between operations. Atoms are not fixed on a chip; they can be transported by moving the laser traps or by swapping which trap holds which atom. This means the interaction graph is not static – you can effectively make any two atoms become neighbors by moving them close together when you need them to interact, then move them apart. As a result, neutral atom computers can achieve a form of on-demand connectivity that approaches all-to-all across the array (though not all interactions happen simultaneously).

A common procedure is to “sort” atoms such that the ones that need to interact in the next step are brought adjacent to each other, perform parallel gate operations on those pairs, then rearrange for the next set of interactions. This flexibility has been demonstrated in experiments – for instance, QuEra’s 256-atom quantum simulator can rearrange atoms into different graph connectivities to simulate various problems, and Pasqal has shown dynamic reconfiguration of a 100-atom array to implement different circuit patterns. In essence, the ability to reposition qubits gives neutral-atom systems high effective connectivity. One can imagine it like a fully flexible network topology: today your qubits are arranged in a line, the next moment you can form them into clusters or pairs as needed, much like re-wiring a circuit on the fly.
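
The “sort, gate, re-sort” procedure can be caricatured in a few lines. The sketch below is a deliberately naive planner for a 1-D tweezer array; the site indices, atom names, and the choice to always move the second partner are assumptions for illustration, and collision avoidance, which real systems must handle, is ignored:

```python
# Deliberately naive rearrangement planner for a 1-D tweezer array (assumed geometry).
# It moves the second partner of each pair into the site next to the first partner
# and tallies the tweezer moves; collision avoidance is ignored for brevity.
def plan_moves(positions, pairs):
    """positions: dict atom -> site index; pairs: list of (a, b) that must interact next."""
    moves = []
    for a, b in pairs:
        target = positions[a] + 1          # park b directly beside a (naive choice)
        if positions[b] != target:
            moves.append((b, positions[b], target))
            positions[b] = target
    return moves

positions = {"q0": 0, "q1": 5, "q2": 10, "q3": 15}
print(plan_moves(positions, [("q0", "q2"), ("q1", "q3")]))
# -> [('q2', 10, 1), ('q3', 15, 6)]
```

The point is simply that connectivity becomes a scheduling problem: the hardware pays for moves, not for distance-dependent SWAP chains.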

Neutral atoms also have the advantage of relatively long coherence times (seconds are possible in atomic ground states) and they operate at or near room temperature, which simplifies certain scaling aspects (no complex cryogenics for the qubits themselves, though lasers and control systems are complex in other ways). The combination of reconfigurability and scalability is a selling point – these systems have already reached 100-300 qubits in experimental settings. For quantum computing (beyond analog simulation), one near-term goal is to use this connectivity for more efficient error correction. For example, researchers point out that the freedom to rearrange qubits means one could implement certain QEC codes or multiqubit gates that are very hard on a rigid chip. QuEra has noted that “high connectivity… allows for efficient execution of algorithms and new types of error correction”, enabling interactions that would otherwise require many hops. The ability to move any qubit next to any other qubit on demand effectively eliminates swap gate overhead at the logical level, just like all-to-all connectivity does.

There are a few approaches within neutral atoms to achieve entanglement: one is using Rydberg blockade (exciting atoms to high-energy states so that two nearby atoms can undergo a controlled interaction if within a certain distance). This yields fast gates (tens of nanoseconds potentially) but only for atoms within a certain neighborhood (perhaps a few micrometers apart). Thus, moving atoms is used to ensure the right pairs are within that range when needed. Another approach is to use photon-mediated gates – for example, putting atoms in optical cavities or using photonic links between them. This could, in the future, allow entangling even distant atoms without physically moving them, by routing photons between different trap sites (there are proposals for fiber-linked neutral atom modules much like ion modules). Some startups (like Atom Computing) also explore long-range dipole-dipole interactions with highly magnetic atom species that might give more connectivity without moving atoms.

Currently, routing in neutral-atom systems is achieved by fast electronic control of the tweezers to reshuffle the array. Impressively, experiments have shown the ability to rearrange hundreds of atoms with high success, effectively “sorting” them into a defect-free register or a desired geometry in a fraction of a second. There is a trade-off between speed and fidelity: moving atoms too fast or too often can heat them or cause loss, and it might be slow relative to gate times if not optimized. A recent 3,000-atom demonstration by QuEra (in a special continuous loading system) achieved a reloading and rearrangement rate of tens of thousands of atoms per second, though limiting how often atoms are rearranged can still improve overall computational throughput. This hints that for very large systems, one might not want to excessively rearrange at every step, but even occasional reconfiguration provides huge flexibility.

In summary, neutral-atom platforms have an inherent ability to change their connectivity graph dynamically, something fixed-chip technologies lack. They start with a moderately connected geometry (atoms in a 2D array with local Rydberg links) but can route qubits by moving them – a bit like having sliding tracks connecting towns, where you can reposition the tracks as needed. Current research is pushing both the number of qubits and the reliability of rearrangement and multi-qubit operations. While still early for digital quantum computing (two-qubit gate fidelities are improving, recently crossing ~97-98% in some Rydberg systems), the promise is that once fidelities hit the needed threshold, the high connectivity will pay dividends. For a future fault-tolerant neutral atom quantum computer, one could envision dividing qubits into patches for a surface code or other code, and then dynamically reconfiguring patches or using mobile ancilla atoms to connect logical qubits when needed. Neutral atoms might also pair naturally with photonic interconnects for modularity at an even larger scale. The big challenge ahead is improving gate fidelity and crosstalk while performing these movements – ensuring that moving one atom doesn’t inadvertently disturb others, etc. But as an architecture, neutral atoms demonstrate that physical movement of qubits is a viable routing method at least at small scales, and it gives a level of connectivity flexibility unrivaled by static circuits.

Photonic Qubits and Hybrid Approaches: Connectivity via Flying Qubits

Lastly, it’s worth discussing photonic qubit systems and hybrid approaches, since they handle connectivity very differently. Photonic quantum computers (like those pursued by PsiQuantum or used in many academic experiments for boson sampling, etc.) don’t have “wires” in the traditional sense – instead, they use photons propagating through waveguides or optical fibers. Connectivity in photonic circuits is essentially free-form: photons can be routed on-chip with beam splitters or off-chip with fibers, and any two photons can interfere to entangle, given the right optical circuitry.

In measurement-based photonic quantum computing, one typically generates a large entangled cluster state (a huge graph state of photons) and then measurements on that cluster drive the computation. The cluster state’s graph connectivity essentially determines which qubits (photons) are entangled with which others. A 3D cluster state, for instance, can enable universal quantum computing and effectively provides a high-connectivity resource for logical qubits. PsiQuantum’s vision involves creating a massively entangled cluster of millions of photonic qubits in a silicon photonics chip, which would give a kind of all-to-all entanglement resource for performing gates between logical qubits by consuming parts of the cluster.

The challenge here is not so much “routing” in space (since light can be sent anywhere on the chip via waveguides), but rather dealing with losses and synchronizing large numbers of photons. If loss were zero and sources perfect, photonics could naturally give you a very richly connected quantum processor (optical beams can cross without interference, networks of beam splitters can connect many modes). In reality, achieving deterministic entanglement with photons at scale is hard, but progress is being made on efficient light sources and detectors.

Photons also play the key role in quantum networking, which connects separate quantum processors. We already saw this in the trapped-ion context. More generally, any system that can emit or absorb photons (ions, neutral atoms, NV centers, even superconducting qubits with microwave-to-optical converters) can use photonic channels to link qubits that are meters or kilometers apart. This is crucial for any modular scaling – photons are basically the only viable information carriers to reliably connect quantum modules at distance (electrical or material connections are too loss-prone or bulky beyond a point). So the concept of a distributed quantum computer relies on photonic links to generate remote entanglement. We can regard a multi-module system connected by photonic entanglement as one larger computer with a certain graph topology defined by its inter-module connections. For instance, if every module can entangle with every other module via some optical switchboard, the entire network of modules has all-to-all connectivity at the module level. Within each module (which could be, say, a small quantum processor of 50 qubits of some physical type), you might have local connectivity as previously discussed; between modules, photons provide the bridge. The Oxford ion experiment can be seen as two nodes connected by a single photonic link – a simplest network topology (a direct channel between the two). Future systems may have more complex photonic interconnect topologies, like a star network or a hierarchical network (e.g. clusters of ions each in a module, modules connected pairwise or via a central optical switch). Quantum teleportation is the key protocol that leverages these photonic links: once an entangled pair is established between module A and module B, one can teleport a qubit state or even a two-qubit gate between modules by consuming that entanglement and sending a couple of classical bits. This effectively routes a quantum operation through the photonic network. Notably, IBM’s plan for 2027’s “Cockatoo” entails entangling two multi-qubit modules via optical L-couplers, which sounds like creating an EPR pair between the modules and using it for operations.

It’s important to highlight that photonic links are generally much slower and lower-fidelity (today) than on-chip gates. Fiber or free-space entanglement attempts often succeed only with some probability and may take microseconds to milliseconds of waiting for a success signal. For instance, the heralded entanglement in the Oxford demo took on the order of 1-2 ms for a successful generation (though they used a memory to hold qubits until entanglement was achieved). That’s orders of magnitude slower than local gates (which are microseconds). One of the holy grails is to dramatically speed up and multiplex these interconnects so that many entangled pairs can be created in parallel, boosting the effective bandwidth of communication between modules. PsiQuantum is implicitly betting on photonic technology to eventually deliver that kind of parallel entanglement generation on-chip at the needed scale.
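
Multiplexing helps because heralded entanglement is a repeat-until-success process. A quick calculation shows how parallel attempts shrink the expected wait per entangled pair; the success probability, attempt time, and channel counts below are illustrative assumptions, not measured link parameters:

```python
# Heralded entanglement is repeat-until-success; multiplexing runs many attempts per round.
# The success probability, attempt time, and channel counts are illustrative assumptions.
def expected_heralding_time(p_success, attempt_time_s, channels):
    p_round = 1 - (1 - p_success) ** channels    # P(at least one success per round)
    return attempt_time_s / p_round              # mean of the geometric distribution

for channels in (1, 10, 100):
    t = expected_heralding_time(p_success=0.01, attempt_time_s=10e-6, channels=channels)
    print(f"{channels:>3} parallel channels: ~{t * 1e6:8.1f} us per entangled pair")
```

With a 1% per-attempt success rate, a single channel averages about a millisecond per pair, while a hundred multiplexed channels bring that down toward tens of microseconds, much closer to local gate timescales.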

In summary, photonic and hybrid approaches treat connectivity as something that can be accomplished via traveling qubits (flying photons) rather than moving matter qubits. They enable the concept of modular quantum computing – building a large system out of smaller pieces connected by a quantum network. The advantage is you don’t need to hard-wire every qubit pair or have a monolithic device; the disadvantage is added complexity, latency, and currently lower fidelity. Still, even for non-photonic platforms, photons are the go-to solution for long-distance links (e.g. linking superconducting quantum computers in separate cryostats or across a lab via an optical fiber). Hybrid systems may use transducers to convert microwave qubit excitations to optical photons that can be sent through fiber to another module, where another transducer converts it back – there are experiments and early prototypes of this, though fidelity is not yet high. We can foresee a future large-scale CRQC being a network of quantum modules (each module might be 1000 physical qubits) connected by photonic links that effectively give a connectivity graph spanning the whole machine. Achieving system-wide connectivity that is fast and transparent to the programmer is a major goal. Today, a few pioneering experiments (like the Nature paper from Oxford, and distributed runs by IBM with classical coordination) have shown the concept. But scaling that to many modules with error-corrected qubits is a substantial challenge.

Key Challenges in Scaling Connectivity & Routing

While each platform has its own twists, they all face common technical and system-level challenges in achieving the kind of connectivity and routing performance a CRQC demands. Below are some of the core challenges and considerations:

1. Routing Overhead and Circuit Depth: Perhaps the most direct issue is that limited connectivity increases the circuit depth (number of sequential operations) needed for a given algorithm. Every extra SWAP gate or teleportation step inserted to route qubits is additional overhead that bloats the logical circuit. This matters because of finite error rates – a longer circuit has more opportunities for error to creep in. For instance, on a 2D grid, a logical CNOT between distant qubits might require, say, 10 SWAP gates (30 physical two-qubit operations) – that’s 30 chances for error instead of 1 if they could interact directly (a rough numerical sketch of this arithmetic appears right after this list). Even with error correction, these extra operations consume part of the logical operation budget (LOB). A CRQC might only tolerate on the order of $10^{12}$ total gate operations; if poor connectivity forces you to triple your gate count to implement the algorithm, you effectively cut the size of the problem you can handle (or require more physical qubits to maintain the same logical fidelity). In extreme cases, as alluded to earlier, the runtime could become so long that decoherence or even mundane issues like device stability over days become limiting. Weak connectivity thus risks turning a potentially week-long computation into one that would take years, defeating the purpose of building the large computer. The solution is either to improve the connectivity or to find compilation methods that minimize the swaps (and indeed there’s active research on better qubit routing algorithms in compilers and on using intermediate measurements to teleport data instead of swapping, etc.). But fundamentally, physical connectivity is the hard constraint. As Quantinuum emphasizes on their trapped-ion systems: having all-to-all connectivity eliminates swap gates, meaning fewer steps and thus fewer errors – directly improving algorithm success rates. This is exactly why connectivity is considered a lever for both QOT and LOB: better connectivity means you can do more operations in parallel and you need fewer total operations to implement the algorithm. In CRQC terms, it’s vital that routing overhead per operation be negligible compared to the base operation, as the CRQC benchmark states. If every logical gate had a big time penalty due to routing, the effective logical gate rate drops and you might not finish the computation in time.

2. Latency and Cycle Time: Closely related to depth is the latency introduced by routing. In an error-corrected quantum computer, we operate in discrete cycles (especially for QEC codes like surface or LDPC codes). Let’s say each QEC cycle is, for example, 1 microsecond on a superconducting chip (just as an illustrative number). If a certain communication (like moving a state across the chip) takes 5 microseconds via a chain of SWAPs, that means it spans 5 cycles. If it’s a necessary operation in an algorithm, it either forces other qubits to idle or forces the pipeline to stall for those cycles. This can be a huge bottleneck for quantum operation throughput (QOT) – the number of logical operations you can execute per second. Ideally, we want the quantum computer to be limited only by the native gate speed and QEC overhead, not by extra waiting for data movement. Latency becomes an even bigger issue in modular systems: e.g., an optical link between modules might have a few microseconds of delay simply due to photon travel and detection, plus perhaps needing to wait for a successful entanglement shot. If not carefully handled, this could force the whole system to use a longer cycle time (because cross-module gates are slower). The challenge is to integrate communications without stretching the fundamental clock cycle of the quantum computer. The CRQC requirement of entangling arbitrary logical qubits “within a few QEC cycles” speaks to this – we can’t afford a situation where performing a needed interaction takes so long that error correction struggles to keep up. Some strategies to mitigate latency include parallelizing link usage (so while one entanglement link is being established, other operations continue elsewhere), using pipelining (performing teleportation in the background and buffering results), or engineering faster interconnects (e.g., improving photon collection or using multiplexing to get entangled links on demand). Still, latency is fundamentally constrained by physics (speed of light, etc.) and engineering (switching speed, network protocol). Ensuring that communication between any parts of the machine can occur without slowing the overall algorithm clock is a major scaling challenge. For example, IBM will need to show that connecting three Flamingo chips into a 1386-qubit system doesn’t force a slower cycle – their design likely tries to ensure the chip-to-chip coupler operates as fast as on-chip gates. If not, one module could become the slowest link that gates the speed of the entire computation.

3. Fidelity and Error Propagation: Increasing connectivity – especially through new couplers or long-range links – can introduce new error modes. Every hardware element we add (be it a tunable coupler, a photonic link, a microwave resonator bus, etc.) is another object that can fail or inject noise. For instance, tunable couplers can suffer from leakage or calibration errors that cause residual coupling or crosstalk. Long-range couplers (like IBM’s “C-coupler”) might be physically longer and more prone to loss or phase errors. Photonic links currently have fidelities in the 80-90% range for a single entangled pair, which is far below what’s needed for fault-tolerant operation (>99.9% ideally). Inter-module operations must reach the fidelity of local operations, or else those links become “weak points” that could negate error-correction gains. The CRQC spec calls for inter-module link fidelity ≥99%, and even that is likely a bare minimum – in practice, we may want 99.9% or better by the time we incorporate them frequently. Also, having more connectivity means more possible correlated errors: e.g., if a coupler connects distant parts of the chip, a failure in that coupler could entangle errors across those parts (correlation which QEC assumes are rare). In a strictly local lattice (like surface code), errors tend to be local; in a highly connected graph, an error on a central bus could create non-local correlated errors. This is why there’s a trade-off: some QEC codes (like surface code) prefer a 2D local structure partially because it naturally limits error correlations. If we move to a richer connectivity, we must ensure that our error correction and decoding strategies can handle any new error modes. This is an interdependency between connectivity and below-threshold error rates – we can’t consider connectivity in isolation from physical qubit fidelity. For example, if turning on a long-range coupler introduces extra noise to intermediate qubits (through crosstalk or mode leakage), that could raise their error rates and potentially violate the below-threshold condition needed for QEC. Thus, every connectivity improvement must be paired with engineering to maintain or improve fidelities. The good news is that in some cases, better connectivity can reduce overall error exposure (by cutting circuit depth, as noted in point 1). But that only pays off if the new operations themselves aren’t too error-prone. This is why demonstrations like IBM’s upcoming L- and C-couplers or Oxford’s entangling link are being closely watched – the question is, can they achieve these without significant error penalty? If an inter-chip entangling operation is at 99% fidelity and used sparingly, maybe QEC can tolerate it. If it’s 90%, it’s a problem. We may see techniques like entanglement purification (trading multiple noisy link attempts for one higher-fidelity connection) being used, though that again impacts throughput.

4. Control Complexity and Scalability: As connectivity grows, so does the complexity of controlling the system. A simple 2D grid is relatively easy to coordinate – you activate certain nearest-neighbor gates in a pattern, and compilers have well-defined constraints. If you now have a system where any qubit can interact with any other, how do you orchestrate that without conflict? You might have thousands of tunable couplers – ensuring none of them inadvertently couples when it shouldn’t is non-trivial. The classical control hardware (microwave lines, laser beams, etc.) needs to be scaled up or switched rapidly to address many possible pairs. In ion traps, if you can interact arbitrary pairs, you still must carefully tune lasers to not cause unwanted entanglement with others (crosstalk grows as connectivity grows). For shuttling-based systems, moving many ions simultaneously in complex patterns requires advanced electrode control and synchronization. Essentially, the routing scheduler becomes very complex: it has to decide, at each cycle, which qubits will be moved or entangled, ensure traffic management (no two ions going through the same junction at once unless intended, no two photons hitting the same detector, etc.), and handle contingencies like a failed entanglement attempt by rerouting or reattempting without derailing the whole computation. This is analogous to managing a large communication network or multi-core parallel processor – except with quantum states that cannot be copied or buffered easily. Timing is particularly critical. For example, if a photonic link succeeds a bit later than expected, will the receiving module wait or proceed and then try to catch up? How do we synchronize distributed operations to picosecond precision when separated by meters of fiber? These are questions of systems engineering that go beyond individual gate physics. As the number of “moving parts” increases (be it literally moving ions or figuratively many couplers and channels), the points of potential failure multiply. The control system will likely need to incorporate real-time feedback – e.g., signals that an entanglement succeeded or an ion arrived at location – to schedule next steps (this is often called feed-forward, which dynamic circuits allow). That adds another layer: a classical co-processor network that keeps up with the quantum machine’s cycle. IBM and others have started implementing such real-time control networks (IBM’s dynamic circuits and classical parallelization frameworks, for instance). The decoder for QEC is part of this control loop too, and if qubits are spread across modules, decoding might involve gathering data from all modules quickly. There’s concern that if decoding can’t keep up due to the added complexity of the connectivity (imagine a decoder trying to track an error that moved across modules via a teleported gate), then the benefit of connectivity could be lost. So, the challenge is to engineer the whole control stack – from microwaves/lasers and FPGA controllers up to software compilers – to handle a much more connected and parallel system than we’ve ever run. This is doable with enough effort (classical networks and parallel computing offer some inspiration), but it’s a non-trivial jump from today’s relatively small systems.

5. Topology Design and Interdependency with Error Correction: There is a deep interplay between the chosen qubit connectivity topology and the efficiency of error correction and computation. A challenge for researchers is to figure out what logical connectivity is actually needed to optimize overall performance. For instance, the surface code needs only local connections, but that comes at the cost of requiring a lot of extra qubits (overhead) to implement non-local logical gates like T gates (via lattice surgery or “wormhole” constructions). If one had a richer connectivity (say a few long-range links per logical qubit), one could implement certain operations more directly or even use a different error-correcting code with lower overhead. There’s evidence that quantum LDPC codes, which have higher rate (fewer physical qubits per logical qubit) and potentially higher thresholds, require a more complex connectivity graph (often something like an expander graph or moderate-degree random graph). Implementing those connections in hardware is challenging, but if done, you could greatly reduce qubit overhead for error correction. This is a classic trade-off: flexibility vs. simplicity. The Path to CRQC overview mentions that topology choices (e.g. sparse planar vs. richer graphs) trade flexibility for fidelity and influence overall code efficiency. A sparse planar code (surface code) is easier on fidelity (only short-range interactions, fewer correlated errors) but less flexible (needs many qubits for magic state distillation, etc.). A richer graph might allow magic states or logical gates to be moved around more freely (imagine having a “teleportation channel” connecting distant logical qubits to inject a T state directly where needed, rather than moving it step by step). But that richer graph might introduce new error modes and require careful tuning. Finding the right balance – perhaps a hierarchical connectivity where local interactions handle most QEC and occasional long-range “express links” handle inter-module connections or logical gate teleportation – could be key. In any case, this is a challenge that is as much about architecture design as about hardware: we have to co-optimize the error correction protocol, the physical connectivity, and even the algorithm scheduling so that we use the available connectivity in the most effective way.
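
As promised under challenge 1, here is the SWAP-chain arithmetic in numerical form. The sketch assumes independent gate errors and a fixed physical two-qubit error rate (the value is an assumption for illustration) and simply multiplies out the survival probability of one long-range interaction:

```python
# Challenge 1 in numbers (illustrative only): error accumulation along a SWAP chain.
# Assumes each physical two-qubit gate fails independently with probability eps.
def chain_success(eps, swaps):
    ops = 3 * swaps + 1        # 3 CNOTs per SWAP, plus the intended gate at the end
    return (1 - eps) ** ops

eps = 1e-3                     # assumed physical two-qubit error rate
for swaps in (0, 10, 100):
    print(f"{swaps:>3} SWAPs -> {3 * swaps + 1:>4} two-qubit ops, "
          f"success ~{chain_success(eps, swaps):.3f}")
```

Even at a respectable 0.1% physical error rate, a 100-SWAP route drops the success probability of a single long-range interaction to roughly 74%, which is exactly the kind of overhead that error correction then has to absorb, eating into the logical operation budget.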

To summarize the above challenges: scaling up connectivity is not just a matter of adding more wires or links; it requires preserving high fidelity, avoiding excessive delay, and managing vastly increased complexity in control and error correction. Each approach comes with unique issues (e.g. ion shuttling might face limits in speed and complexity, photonic links face loss and probabilistic behavior, multi-chip couplers face alignment and calibration challenges, etc.), but all share the need to integrate with QEC. The gap to CRQC on this capability, as noted in the benchmark, is to go from “ad-hoc local routing and few-link demos” to “transparent, system-wide logical connectivity” across maybe $10^5$-$10^6$ physical qubits, without stretching the cycle time or exceeding the error budget. That means eliminating long SWAP chains (or making their effect negligible through error mitigation or reversible circuits) and incorporating high-fidelity inter-module connections with tight latency control. It’s a high bar, and achieving it will likely require multiple innovations working in unison: e.g., novel QEC protocols that allow occasional long-range teleportations, network architectures that schedule those teleportations efficiently, and hardware advances for fast, reliable couplers and links.

Outlook and Readiness Level

Given the state of the art, Qubit Connectivity & Routing Efficiency is currently at a low-to-mid technology readiness level (TRL) – roughly estimated around TRL 3-4 as of 2025. This means we have demonstrated critical phenomena in the lab (like entangling two chips, teleporting gates between modules, dynamically reconfiguring qubits, etc.), but we have not yet integrated these into a full, high-performing system. No existing quantum computer has connectivity at the scale a CRQC needs – today’s devices either have limited connectivity and suffer SWAP overhead, or they have all-to-all connectivity at small scale (ions), or only very preliminary links between systems.

However, the progress is steady and multifaceted. Research and engineering efforts from major players are directly targeting this capability:

  • IBM is executing on a clear roadmap to incrementally connect chips and extend connectivity range on chip (C-couplers in 2025, multi-chip modules by 2026-27, etc.). If successful, within a few years IBM could have a few thousand physical qubits connected into one instrument, which will be a crucial test of whether their coupling links can preserve fidelity and throughput. They’re also exploring alternative error-correcting codes (LDPC) that will leverage those longer links.
  • IonQ and Quantinuum (Trapped Ions) are pushing the boundaries of module linking and have already exploited all-to-all connectivity in algorithms. We expect to see perhaps a 2-4 module ion trap network demonstrated (with improved photonic link fidelity) within the next couple of years. Also, ion platforms will likely show small logical qubits and attempt to perform logic between them using their connectivity advantages – a chance to illustrate that magic states can be distributed or logical qubits entangled without huge delay, thanks to teleportation.
  • Neutral atom companies are scaling qubit numbers and gate fidelity; as they enter the range of 200-300 qubit devices with improving error rates, they might attempt rudimentary error-corrected operations or cross-array entanglement. Their dynamic connectivity could be a selling point if they, for instance, implement a small QEC code where atoms are rearranged to perform multi-qubit parity checks efficiently (something that might be cumbersome on a fixed qubit chip).
  • Photonic interconnects are being pursued by academic groups and startups, focusing on improving the entanglement rates. For example, efforts to use multiplexed photons (multiple frequencies or time bins at once) aim to boost the probability of getting an entanglement success in a given time. If one can get, say, thousands of entangled pairs per second between modules at 99% fidelity, that could be workable for modular QEC. PsiQuantum’s approach with photonic chips essentially puts the problem of connectivity into one of generating a giant entangled state – if they succeed in making, say, a million-photon 3D cluster, they inherently solve the connectivity problem by brute force entanglement (though the burden shifts to source and detector tech).

Direct impact on CRQC

It’s hard to overstate how directly connectivity affects the ultimate goal of breaking RSA. Even if we have enough logical qubits, if they can’t talk to each other quickly and reliably, the computation either won’t finish in time or will drown in error. This capability has a high direct impact on CRQC feasibility because it dictates whether the quantum computer can operate as a coherent whole. A machine with poor connectivity might theoretically be universal, but practically it would be so slow or error-prone that it couldn’t crack real-world cryptographic challenges in any reasonable timeframe. Conversely, a machine with excellent connectivity and routing can fully leverage every qubit and achieve massive parallelism. This is why we tie connectivity to Quantum Operation Throughput (QOT): better connectivity means more two-qubit operations can be executed per second (since you’re not serializing them through narrow channels) and more of the chip can be active simultaneously. It also ties to Logical Operation Budget (LOB) secondarily: better routing means fewer extra operations (SWAPs) are needed, so the effective depth of an algorithm in logical operations is closer to the algorithm’s ideal depth, not blown up by communication overhead.

Interdependencies

Looking beyond this capability itself, progress in connectivity will unlock or accelerate progress in other capabilities. For example, high connectivity could allow more efficient magic state distribution – if you can teleport magic states to where they’re needed, you don’t have to physically move them through many hops. That could reduce the number of magic states needed or allow distillation factories to be placed in separate modules feeding the main processor. It also relates to decoder performance – a well-connected system might allow more parallel syndrome extraction or partitioning the code into smaller surfaces connected by bridges, which could simplify decoding. But it could also complicate decoding if errors spread non-locally, so decoders might need to evolve to handle that. And of course, connectivity depends on QEC and below-threshold fidelities – we need the physical qubits to be good enough that adding more connections doesn’t tip us over the threshold. If, say, activating a long link temporarily increases error rates in qubits, we have to ensure those are still below threshold. Additionally, choices in syndrome extraction, such as how to lay out ancilla qubits for measuring error syndromes, are impacted by connectivity (e.g., having parallel readout lines or the ability to move syndrome data quickly to a processing unit).

Conclusion

In conclusion, achieving efficient, high-fidelity qubit connectivity at scale is one of the grand engineering challenges on the road to a CRQC. It is about making many qubits function as one, bridging physical distance without incurring a heavy penalty. Over the next few years, we will likely see increasing demonstrations of “qubits without borders” – multi-chip processors acting as one, small quantum networks performing computations, and clever uses of teleportation and reconfiguration to execute circuits with less overhead. The Technology Readiness Level will climb as prototypes move from laboratory curiosities (entangling 2 traps, etc.) to incorporated subsystems in 100+ qubit devices. By TRL 5-6, we’d expect to see a few hundred physical qubits networked with most of these connectivity issues ironed out in a testbed. To reach full CRQC, ultimately thousands of logical qubits will need virtually seamless connectivity, possibly via a modular, layered network of physical qubits. It’s a bit like designing the internet for qubits – ensuring information can route efficiently anywhere it needs to go. Thanks to ongoing innovations by IBM, Google, Quantinuum, IonQ, QuEra, PsiQuantum, and many academic labs, the “roads and highways” for quantum information are steadily being built and widened. If quantum error correction provides the local infrastructure (keeping qubits stable), qubit connectivity provides the global infrastructure that ties everything together. Both are absolutely necessary: without a robust network of entanglement linking the whole machine, a future million-qubit quantum computer might never get the chance to fully flex its muscle on problems like breaking RSA. Conversely, with high connectivity and routing efficiency, every qubit can contribute at every step, keeping the giant machine humming along like a well-synchronized orchestra – and bringing us closer to the day a quantum computer can truly outperform classical machines on cryptography and beyond.

Technology Readiness Assessment

As of late 2025, Capability 1.4 (Qubit Connectivity & Routing) is in early demonstration phases (TRL ~3-4) – small-scale connectivity enhancements have been shown (e.g. limited multi-chip coupling, two-module entanglement, dynamic atomic rearrangement), but not yet at the scale or reliability needed for CRQC. The direct impact on CRQC is high, and improvements in this area will likely be one of the primary levers for increasing quantum operations throughput (QOT) going forward. Observers should watch for upcoming milestones such as multi-chip quantum processors operating as one unit, modular QEC experiments (e.g. an error-corrected logical qubit distributed across two modules), and increased all-to-all connectivity demonstrations (like larger ion networks or atom arrays executing error-corrected circuits). Each of these will mark a step closer to the level of connectivity efficiency required for a cryptographically relevant quantum computer. In the grand roadmap, solving qubit connectivity and routing at scale will transform quantum computers from isolated small clusters of qubits into the fully unified, parallel-processing engines we need for tackling RSA-2048 and other formidable computational tasks.

Acknowledgments

This capability area intertwines with many others in the quest for CRQC. It builds upon advances in hardware fidelity and QEC techniques, and in turn it empowers higher-level operations (magic state distillation, decoding, etc.).

By tracking both hardware innovations (like IBM’s couplers or photonic links in ion traps) and architectural breakthroughs (like new compilation methods or QEC codes that leverage connectivity), we gain a window into how and when a large-scale, fully programmable quantum computer might become reality. Connectivity is the glue that will hold together the first machines capable of factoring large numbers – and as such, it remains a critical focus for researchers and engineers bridging the gap between today’s prototypes and tomorrow’s cryptography-shattering devices.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.