Quantum Networks

Entanglement Distribution Techniques in Quantum Networks

Introduction to Entanglement Distribution

Quantum entanglement is a unique resource that enables new forms of communication and computation impossible with classical means. Distributing entanglement between distant locations is essential for applications such as quantum key distribution (QKD), quantum teleportation, and connecting quantum computers for distributed quantum computing​. In QKD, for example, shared entangled pairs can be used to generate encryption keys with security guaranteed by quantum physics. In quantum computing, entanglement between remote qubits allows quantum information to be transmitted via teleportation, effectively “networking” quantum processors. Thus, a quantum network must be able to deliver entangled qubits between nodes on demand, analogous to how classical networks deliver bits.

Bell states (also known as EPR pairs) are the fundamental two-qubit entangled states that serve as the building blocks for distributed entanglement​. These four maximally entangled two-qubit states form an orthonormal basis. In Dirac notation, they are typically given by:

  • $|\Phi^+\rangle = \frac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)$
  • $|\Phi^-\rangle = \frac{1}{\sqrt{2}}\big(|00\rangle - |11\rangle\big)$
  • $|\Psi^+\rangle = \frac{1}{\sqrt{2}}\big(|01\rangle + |10\rangle\big)$
  • $|\Psi^-\rangle = \frac{1}{\sqrt{2}}\big(|01\rangle - |10\rangle\big)$

Each Bell state represents a pair of qubits (say held by Alice and Bob) that are perfectly correlated (or anti-correlated) in a specific basis. If Alice and Bob share a Bell state, a measurement on Alice’s qubit instantly collapses Bob’s qubit to a state consistent with the Bell state’s correlations. This nonlocal correlation is the essence of entanglement. Entangled photon pairs are a common physical instantiation of Bell states, often generated via spontaneous parametric down-conversion (SPDC) or similar nonlinear optical processes​. These photon pairs, when sent to two distant parties, can distribute entanglement to serve as the “links” in a quantum network. Entangled photon pair sources and Bell states underpin essentially all entanglement distribution techniques.
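As a quick numerical check, the following sketch builds the four Bell states as vectors and verifies the two properties claimed above: they form an orthonormal basis, and each is maximally entangled (every single-qubit reduced state is fully mixed).

```python
import numpy as np

# The four Bell states in the computational basis |00>, |01>, |10>, |11>.
s = 1 / np.sqrt(2)
bell = {
    "phi+": s * np.array([1, 0, 0, 1]),
    "phi-": s * np.array([1, 0, 0, -1]),
    "psi+": s * np.array([0, 1, 1, 0]),
    "psi-": s * np.array([0, 1, -1, 0]),
}

# Orthonormality: <b_i|b_j> = delta_ij.
names = list(bell)
for i in names:
    for j in names:
        ip = np.dot(bell[i], bell[j])
        assert np.isclose(ip, 1.0 if i == j else 0.0)

# Maximal entanglement: the Schmidt coefficients of each state (singular
# values of the 2x2 reshaped amplitude matrix) are equal, so each reduced
# single-qubit state is the fully mixed state I/2.
for v in bell.values():
    svals = np.linalg.svd(v.reshape(2, 2), compute_uv=False)
    assert np.allclose(svals**2, [0.5, 0.5])
print("all four Bell states are orthonormal and maximally entangled")
```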

Direct Transmission

The most straightforward way to distribute entanglement is through direct transmission of entangled particles over a physical channel. In this approach, an entangled pair of qubits is created at a source, and the two particles are sent to two distant nodes (e.g. Alice and Bob) through optical fiber or free-space links. This method has been used in proof-of-principle quantum communication experiments, but it faces significant distance limitations due to loss and decoherence in the channel.

Over optical fiber, photon loss is the main limiting factor. Optical fibers attenuate photons exponentially with distance through absorption and scattering (typical telecom fiber loses about 0.2 dB/km). Because quantum states cannot be amplified (by the no-cloning theorem), transmission efficiency drops rapidly with distance, and entanglement distribution over standard fiber was historically limited to roughly 100 km. Beyond this scale, the probability that both photons of a pair survive to their destinations becomes vanishingly small. Only very recently, using advanced stabilization and ultralow-loss fibers, have researchers extended fiber-based entanglement distribution to 248 km, more than doubling the prior ~100 km record. This 248-km demonstration employed ultra-stable polarization-entangled photons and specialized fibers, showing that direct fiber links can reach hundreds of kilometers under ideal conditions. However, the entanglement distribution rate at such extreme distances is exceedingly low and impractical for a functional network. In general, ~50–100 km is a reasonable practical limit for direct entanglement distribution in standard optical fiber; beyond that, losses make the entangled link unreliable.
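The exponential attenuation can be made concrete with a short calculation. The sketch below assumes a pair source midway between the two nodes, so each photon traverses half the span; the 0.2 dB/km figure is the typical telecom value quoted above.

```python
def fiber_transmittance(distance_km, loss_db_per_km=0.2):
    """Probability that a single photon survives a fiber of the given length."""
    return 10 ** (-loss_db_per_km * distance_km / 10)

def pair_survival(distance_km, loss_db_per_km=0.2):
    """With a source at the midpoint, each photon travels distance/2, so the
    pair-survival probability equals the single-photon transmittance of the
    full span."""
    half = fiber_transmittance(distance_km / 2, loss_db_per_km)
    return half * half

for d in (50, 100, 248):
    print(f"{d:4d} km: pair survival ~ {pair_survival(d):.2e}")
```

At 100 km the pair survives with probability ~1e-2; at 248 km it drops to ~1e-5, which is why the distribution rate at such distances is so low.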

Free-space optical channels provide an alternative for direct transmission, especially for satellite links and line-of-sight communications. In free space (and especially in vacuum), photons can travel without the continuous absorption that occurs in fiber. Ground-to-ground free-space experiments (for instance between telescopes) have distributed entangled photons across distances like 144 km (Canary Islands experiment) under clear atmospheric conditions, albeit with losses from beam divergence and atmospheric turbulence. The most dramatic success in free-space entanglement distribution has been achieved with satellites: notably China’s Quantum Experiments at Space Scale (QUESS) mission, also known as the “Micius” satellite, demonstrated entanglement distribution to stations 1,200 km apart. In 2017, the Micius satellite beamed pairs of entangled photons to two ground observatories separated by ~1,200 km, and those photon pairs remained entangled, violating Bell’s inequality under strict locality conditions​​. This was accomplished via two downlink paths (satellite to each ground station) with a total travel distance up to 2,400 km, mostly through the vacuum of space​. Because most of the photon’s path was in space (virtually no absorption), the link efficiency was orders of magnitude higher than what would be obtained sending photons through thousands of kilometers of fiber​. The Chinese team reported a clear Bell-state correlation between the distant sites, conclusively distributing entanglement over 1,200 km​. This satellite-based approach shows that direct entanglement distribution can be extended to continental or even global scale by relaying quantum signals through space, bypassing the heavy losses of terrestrial channels​.

One challenge with satellite or free-space links is the requirement of line-of-sight and susceptibility to environmental conditions. Micius, for example, can only operate at night (to avoid sunlight) and requires precise telescopic tracking of the satellite​​. Nevertheless, satellite quantum communication has enabled intercontinental QKD—Micius was used to facilitate a secure video call between China and Austria by sharing entanglement and encryption keys over a combined distance of 7,600 km, using the satellite as a trusted relay​​.

Because of the distance limitations of direct transmission, real-world quantum communication networks to date have often used an intermediate approach involving trusted nodes (also called trusted repeaters). In a trusted-node network, the total distance is broken into shorter links of perhaps tens of kilometers. At each intermediate node, the quantum signal is measured and converted to a classical message (for example, extracting a secret key via QKD), and then a new quantum signal is prepared for the next segment. The intermediate node must be trusted not to compromise the secret, since it briefly possesses the key information. While this means the end-to-end link is not quantum-secure against an untrusted relay, it allows extending quantum communication using existing technology. A prominent example is the Beijing–Shanghai network in China: a 2,000-km fiber QKD link connecting Beijing, Jinan, Hefei, and Shanghai, which went live in 2017. This network uses 32 trusted relay nodes spaced along the route. Each fiber segment (between trusted nodes) is under 100 km to manage loss, and keys are generated segment by segment and passed along. By chaining many links, the network achieves secure communication over a distance unattainable by a single quantum channel (with the caveat that security relies on node trust). In 2021, Chinese researchers integrated this ground fiber network with satellite links (including Micius), creating an even larger integrated quantum network spanning ~4,600 km, the longest to date. This used a trusted-relay structure combining more than 700 fiber links and two satellite-to-ground links.
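The key-relay idea behind trusted nodes can be sketched in a few lines. This is a simplified illustration (a hypothetical 4-node chain, with random bytes standing in for QKD-generated link keys) of hop-by-hop one-time-pad re-encryption; note that each relay momentarily sees the end-to-end key in the clear, which is exactly the trust assumption discussed above.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length byte strings (one-time pad)."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical chain A - N1 - N2 - B. Each adjacent pair shares a QKD link key.
link_keys = [secrets.token_bytes(16) for _ in range(3)]  # A-N1, N1-N2, N2-B

# A chooses the end-to-end key and sends it down the chain, one-time-padded
# with each link key in turn; every relay decrypts and re-encrypts.
end_to_end = secrets.token_bytes(16)
ciphertext = xor(end_to_end, link_keys[0])                     # A  -> N1
ciphertext = xor(xor(ciphertext, link_keys[0]), link_keys[1])  # N1 -> N2
ciphertext = xor(xor(ciphertext, link_keys[1]), link_keys[2])  # N2 -> B
assert xor(ciphertext, link_keys[2]) == end_to_end             # B recovers it
```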

In summary, direct distribution of entanglement is conceptually simple and has been demonstrated over distances from tens of kilometers (fiber) up to thousands of kilometers (space). Fiber links are limited by attenuation (~100 km scale without intervention), while free-space and satellites offer a way to reach global distances. Presently, long-distance quantum networks often rely on trusted intermediate nodes to manage loss, at the cost of requiring trust in those nodes. Overcoming these distance barriers without trusting intermediaries is a primary goal of quantum network research, which motivates entanglement swapping and quantum repeaters.

Entanglement Swapping Across Nodes

To extend entanglement beyond the direct transmission range without trusting intermediate devices, quantum networks employ entanglement swapping. Entanglement swapping is the process by which two particles that have never interacted become entangled, by means of intermediate entangled pairs and a joint measurement at a relay node. This is the fundamental mechanism behind quantum repeaters that allow multi-hop quantum communication.

The basic idea can be illustrated with three nodes in line: Alice, an intermediate node (say, Bob), and Charlie. Suppose Alice and Bob share one entangled pair of qubits, and Bob and Charlie share another entangled pair. Bob holds one qubit from each pair. Initially, Alice’s qubit (A) is entangled with Bob’s qubit (B), and Bob’s other qubit (C) is entangled with Charlie’s qubit (D). There is no entanglement directly between A and D. Now, if Bob performs a Bell state measurement (BSM) on his two qubits (B and C simultaneously), something remarkable happens: qubits A and D – held by Alice and Charlie – become entangled, even though they never interacted​. Bob’s BSM projects the B–C pair onto one of the four Bell states, and conditioned on that outcome, the A–D pair collapses into a corresponding entangled Bell state. In effect, the original entanglement has been swapped out: the entanglement between A–B and C–D is destroyed by the measurement, but in its wake Alice and Charlie share a fresh entangled link​.

Mathematically, entanglement swapping can be seen from the following identity. If the initial state of the four qubits A, B, C, D is two Bell pairs $|\Phi^+\rangle_{AB}$ and $|\Phi^+\rangle_{CD}$ (each of the form $(|00\rangle+|11\rangle)/\sqrt{2}$), we can rewrite the joint state in the Bell basis of the B–C system:

$|\Phi^+\rangle_{AB} \otimes |\Phi^+\rangle_{CD} ~=~ \frac{1}{2}\Big(|\Phi^+\rangle_{BC}\otimes|\Phi^+\rangle_{AD} + |\Phi^-\rangle_{BC}\otimes|\Phi^-\rangle_{AD} + |\Psi^+\rangle_{BC}\otimes|\Psi^+\rangle_{AD} + |\Psi^-\rangle_{BC}\otimes|\Psi^-\rangle_{AD}\Big)\,.$

This expansion shows that the four-qubit state can be expressed as an equal superposition of terms, each of which is a product of a Bell state of qubits B and C and a Bell state of qubits A and D​. If Bob performs a Bell state measurement on B and C, the measurement projects B–C onto one of the four Bell states (each occurring with equal probability $1/4$ in this case). Correspondingly, qubits A and D are projected onto the same Bell state (up to known local Pauli corrections depending on which outcome occurred). Thus, after the measurement, Alice and Charlie share an entangled Bell pair. Entanglement has been successfully swapped from the (A–B, C–D) pairs to the (A–D) pair. Importantly, Alice and Charlie did not have any direct interaction; Bob’s joint measurement and classical communication of the result are what allow the entanglement to be established at a distance.
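The Bell-basis expansion above can be verified numerically. The sketch below projects qubits B and C of $|\Phi^+\rangle_{AB}\otimes|\Phi^+\rangle_{CD}$ onto each of the four Bell states and checks that every outcome occurs with probability 1/4 and leaves A and D in the matching Bell state.

```python
import numpy as np

s = 1 / np.sqrt(2)
phi_p = s * np.array([1, 0, 0, 1])
phi_m = s * np.array([1, 0, 0, -1])
psi_p = s * np.array([0, 1, 1, 0])
psi_m = s * np.array([0, 1, -1, 0])
bells = [phi_p, phi_m, psi_p, psi_m]

# Four-qubit state |Phi+>_AB (x) |Phi+>_CD, with tensor indices ordered A,B,C,D.
state = np.kron(phi_p, phi_p).reshape(2, 2, 2, 2)

for bc in bells:
    # Project qubits B and C onto this Bell state: contract indices b and c
    # with the (real, so trivially conjugated) Bell vector, leaving A and D.
    proj = np.einsum("abcd,bc->ad", state, bc.reshape(2, 2))
    norm = np.linalg.norm(proj)
    ad = proj.reshape(4) / norm
    # Each BSM outcome occurs with probability 1/4, and A-D collapses onto
    # the *same* Bell state as the B-C measurement result.
    assert np.isclose(norm**2, 0.25)
    assert np.isclose(abs(np.dot(ad, bc)), 1.0)
print("each BSM outcome (prob 1/4) leaves A and D in the matching Bell state")
```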

Bell state measurements are at the heart of entanglement swapping. A BSM is a joint measurement that projects two qubits onto the Bell basis. In practice, performing a complete Bell measurement on two photonic qubits can be challenging (linear optics can distinguish only 2 of the 4 Bell states deterministically), but it is a well-studied operation in quantum optics and a critical component of protocols like teleportation and swapping​. In the context of a network, the BSM result may need to be sent as classical information to the end nodes to let them know which entangled state they share (or to apply corrections). However, for many applications like QKD, any Bell state is equally useful as long as it is known to the parties.

By chaining multiple entanglement swapping operations, entanglement can be extended over many hops: this is the principle of the quantum repeater. For instance, to entangle two end nodes across a continent, one could establish entanglement for many short links (neighboring nodes) and then perform successive BSMs at intermediate nodes to swap entanglement step by step until the end nodes are entangled. Each intermediate node does a BSM on the two qubits it holds from adjacent links, linking the segments into one long entangled pair. In theory, with ideal operations and enough swaps, there is no limit to the distance over which entanglement can be distributed.

A pioneering demonstration of entanglement swapping in a network was recently achieved in a three-node quantum network at Delft. In this experiment, three diamond-based qubit nodes (labeled Alice, Bob, Charlie) were connected in a line. First, Alice and Bob established an entangled link, and Bob stored his qubit of that pair in a quantum memory at his node​. Next, Bob and Charlie established an entangled link, with Bob’s communication qubit now entangled with Charlie. Bob then performed a joint Bell state measurement on the quantum memory qubit (holding Alice’s entanglement) and the communication qubit (holding Charlie’s entanglement)​. This BSM collapsed the state and resulted in Alice’s qubit and Charlie’s qubit becoming entangled, completing the entanglement swap​. Not only did this create end-to-end entanglement between non-neighboring nodes, but the Delft team also signaled a successful swap and even extended the experiment to create multipartite entanglement among all three nodes (by entangling a third qubit at Bob with the others)​. Quantum entanglement was thus distributed across a small network, an important proof-of-concept of how larger quantum internets could be built.

Entanglement swapping allows distant parties to share entanglement without any direct physical link between them at the time of entanglement creation​. Each link in the chain only needs to span a distance feasible for direct transmission (or covered by a trusted node), and the swapping operations extend the reach. Notably, the intermediate nodes (like Bob) do not learn anything about the quantum bits’ values through the BSM – they only facilitate the entanglement. This means that if the intermediate nodes are not trusted, the end-to-end entanglement can still be pure and secure (unlike the trusted node scenario where the node has full information of the key). Therefore, entanglement swapping is essential for untrusted-node quantum networks, enabling true end-to-end quantum correlations​. As quantum network pioneer Jian-Wei Pan noted, the Delft three-node network’s key achievement was realizing entanglement swapping between two links, “essential in extending entanglement distances via quantum repeaters.”​

One important consideration is that entanglement swapping (and any real operation) is not perfect in practice. If the initial entangled pairs are mixed or have limited fidelity, swapping tends to propagate and often worsen errors, and each BSM has a finite success probability and can introduce additional loss or decoherence. In fact, if two entangled links each have fidelity $F<1$ (relative to a perfect Bell state), the swapped entanglement between the end nodes will in general have a fidelity lower than $F$. For example, in a simple model where the two links have fidelities $w$ and $w'$, the fidelity of the swapped pair is $g_s(w,w') = w\,w' + (1-w)(1-w')$, which satisfies $g_s(w,w') \le \min(w,w')$ whenever $w, w' \ge 1/2$. In essence, entanglement swapping by itself does not create perfect entanglement out of imperfect links; it can actually degrade the quality of entanglement unless supplemented by additional techniques. This leads to the need for entanglement purification, discussed next.
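Iterating the $g_s$ formula shows how fidelity decays over a chain of swaps. This is the simplified model quoted above; real links add further memory and gate errors on top.

```python
def swap_fidelity(w, wp):
    """Fidelity after swapping two links of fidelity w and wp, in the
    simplified model g_s(w, w') = w*w' + (1-w)(1-w') from the text."""
    return w * wp + (1 - w) * (1 - wp)

# Chain n identical links of fidelity 0.95 by successive swaps.
f = 0.95
for n in range(1, 5):
    if n > 1:
        f = swap_fidelity(f, 0.95)
    print(f"{n} link(s): end-to-end fidelity {f:.3f}")
```

Even with quite good 0.95-fidelity links, four hops already drop the end-to-end fidelity below 0.83, which is why purification between swaps is needed.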

Entanglement Purification and Error Rates

Real-world entangled pairs are never perfectly pure Bell states; they suffer from noise, decoherence, and operational errors. As entangled qubits propagate through fiber or are stored in quantum memories, their state fidelity to an ideal Bell state drops. Moreover, performing multiple entanglement swaps in sequence compounds the imperfections of each link. If nothing is done, the end-to-end entanglement distributed over long distances would have low fidelity (i.e., a high error rate), making it less useful or even unusable for quantum communication protocols. Entanglement purification, also known as entanglement distillation, is a method to combat this problem by trading quantity for quality.

Entanglement purification is a protocol where two or more noisy entangled pairs are consumed to produce one entangled pair of higher fidelity (with some probability of success)​. The basic idea is analogous to classical error correction or sifting: if you have several imperfect entangled pairs, you perform measurements on them in a coordinated way to “filter out” the instances of higher-quality entanglement at the expense of discarding the lower-quality ones. After purification, the surviving entangled pair has improved fidelity (closer to a Bell state) at the cost that fewer entangled pairs remain. This is a crucial component for quantum repeaters, since it allows one to boost the fidelity of entanglement on each segment before swapping them, thereby ensuring that after many swaps the end-to-end entanglement is still high quality​.

A simple purification protocol involves two identical entangled pairs shared between two parties (say, Alice and Bob). Assume each pair is in some mixed state that has fidelity $F$ (the probability of being in the desired Bell state $|\Phi^+\rangle$, for example) and $F < 1$ due to noise. Alice and Bob can perform the following steps: they take one qubit from each pair locally and perform a certain joint operation (for instance, both apply a CNOT gate between the two qubits they possess, using one pair’s qubit as control and the other as target), then measure one of the two pairs’ qubits in a basis and compare results over a classical channel. Depending on the measurement outcomes (e.g. whether they match or not), they decide to keep the other pair or discard it. If the protocol is designed correctly, the kept pair, conditioned on a certain outcome, has higher fidelity than the originals. This is achieved essentially by filtering out cases where errors occurred.

For a concrete example, consider a recurrence protocol in the style of the BBPSSW scheme of Bennett et al. (closely related to the "quantum privacy amplification" protocol of Deutsch et al.): it uses two pairs at fidelity $F$ and keeps one pair if the measurement outcomes have a certain parity. The success probability of obtaining an output pair is less than 1 (some fraction of the attempts are discarded). In an idealized model in which each pair is a mixture of the desired Bell state and a single orthogonal error state, the success probability $p_d$ and resulting fidelity $F'$ after one round of purification are:

  • $p_d(F) = F^2 + (1 - F)^2$. (This is the probability that the two pairs "match" and purification succeeds, essentially the probability that either both pairs were in the desired Bell state or both in the same orthogonal error state.)
  • $F' = \dfrac{F^2}{F^2 + (1 - F)^2}$. (This is the fidelity of the remaining pair if the round succeeds.)

As long as the initial fidelity $F > 0.5$ (needed because otherwise the pair contains more noise than entanglement), one can show $F' > F$, meaning the pair's quality improves. For example, if $F=0.6$ (60% fidelity), then $p_d = 0.36 + 0.16 = 0.52$ (a 52% chance of success), and if successful the new fidelity is $F' = 0.36/0.52 \approx 0.692$ (~69%). One round of purification thus increases fidelity from 0.6 to ~0.69, at the cost of consuming two pairs to get one and with a 48% chance of having to try again (if the round fails and yields no pair). Repeating purification on subsequently distilled pairs raises the fidelity further: taking two of these $F' \approx 0.69$ pairs and purifying again gives a pair of fidelity ~0.83 (with success probability ~0.57), and a third round pushes the fidelity above 0.96. In general, successive rounds (or using more pairs at once in advanced protocols) can approach arbitrarily high fidelity, at the cost of an exponentially growing number of initial pairs per final pair: in this nested scheme, four pairs of fidelity 0.6 distill (on success) into one pair at ~0.83, and eight pairs into one pair at ~0.96.
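The recursion above is easy to simulate. The sketch below iterates the $(p_d, F')$ map starting from $F = 0.6$ and also tracks the expected number of raw pairs consumed per distilled pair (each round consumes two input pairs and succeeds with probability $p_d$).

```python
def purify(F):
    """One round of the purification map from the text: two pairs at
    fidelity F yield, on success, one pair at fidelity F'.
    Returns (success_probability, F')."""
    p = F**2 + (1 - F) ** 2
    return p, F**2 / p

F, pairs_consumed = 0.60, 1.0
for round_no in range(1, 4):
    p, F = purify(F)
    pairs_consumed = 2 * pairs_consumed / p  # expected raw pairs per output
    print(f"round {round_no}: p_success={p:.2f}, F={F:.3f}, "
          f"~{pairs_consumed:.0f} raw pairs expected per distilled pair")
```

Starting from 0.6, the fidelity climbs to ~0.69, ~0.83, then ~0.96, while the expected raw-pair cost grows to roughly 4, 13, and 37 pairs per distilled pair, illustrating the quantity-for-quality trade-off.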

The trade-offs of entanglement purification are clear: it sacrifices efficiency for fidelity. Many entangled pairs (which may be costly to produce) are consumed to obtain a single high-quality pair. This reduces the raw entanglement distribution rate. There is also a need for classical communication and timing: after local operations, measurement outcomes must be communicated between the parties to decide whether the remaining pair is kept. This introduces a delay, at minimum on the order of the one-way or round-trip light travel time between the parties (depending on the protocol). If the parties are far apart (hundreds of km), this could be milliseconds of delay. Moreover, perhaps the biggest requirement is that during this process, the qubits must remain coherent – which means each party needs to hold onto the qubits (in quantum memory) while waiting for the classical signals. Quantum memory thus becomes an essential component for purification: each node must temporarily store entangled qubits without losing their quantum state fidelity while the protocol is executed. The stored states must survive long enough for the classical communication and any additional operations. This is technologically challenging, as quantum memories (e.g. atomic ensembles, quantum dots, NV centers, etc.) have finite coherence times.

Another consideration is that purification itself can fail (if the measurement outcomes indicate the pair should be discarded). If it fails, one has wasted those entangled pairs and must generate new ones to try again. This necessitates a supply of entangled pairs – effectively sufficient bandwidth on the lower-level links to create entanglement faster than it is being distilled away. Purification shifts the burden to producing a greater quantity of entanglement so that enough high-quality pairs can be distilled for use​. This typically means sources with high pair generation rates and perhaps multiplexing (sending many entangled pairs in parallel) are needed to maintain a reasonable key rate or data rate in a quantum network that employs purification.

In summary, entanglement purification is a critical tool to improve entanglement fidelity in the face of noise and decoherence. It uses multiple noisy pairs to obtain fewer, less-noisy pairs, at the cost of additional resources (pairs, memory, and communication). In a quantum repeater chain, purification might be performed on each elementary link until a target fidelity is reached, and only then would entanglement swapping be performed to extend to the next segment. Without purification, the errors from each link would accumulate over many swaps, resulting in poor end-to-end entanglement​. With purification, a high-fidelity end-to-end entangled pair becomes possible, at the expense of a low probability of success per attempt – which is mitigated by repeating the attempts many times. The development of efficient purification protocols and hardware (high-rate sources, long-lived memories) is thus a cornerstone of building a functional large-scale quantum internet.

Current Real-World State of Entanglement Distribution

Quantum networks are in their early stages, and fully operational long-distance entanglement distribution is still a developing technology. However, several notable real-world implementations and experiments have demonstrated the principles discussed above:

  • Metropolitan and Inter-City Fiber Networks (Trusted Nodes): As mentioned, the longest operational quantum communication network is the Beijing-Shanghai Trunk Line in China, spanning ~2,000 km and connecting multiple cities​. This network relies on 32 trusted intermediate nodes​ and distributes secret keys via entanglement-based QKD and other QKD protocols. While the entanglement is not maintained end-to-end (each link’s keys are combined classically at the nodes), this network has proven the ability to deploy quantum technology at scale over existing fiber infrastructure. Similar (though smaller) trusted-node networks exist elsewhere, such as the EU-funded SECOQC network testbed in Vienna (using a ring of trusted-node QKD links), and a Tokyo QKD network​. These have demonstrated integration with real-world telecom systems and provided important lessons on routing, network management, and security in quantum key distribution. Importantly, these networks highlight the need for quantum repeaters – because without them, one must rely on trust or break the distance into short spans, which is not ideal for ultimate security.
  • Satellite Quantum Communications: The Chinese QUESS/Micius satellite example is the centerpiece of space-based entanglement distribution so far. Micius not only distributed entangled photon pairs over 1,200 km​, but also enabled quantum teleportation of a photon’s state from a ground station up to the satellite (an uplink of ~500 km)​, and entanglement-based QKD between continents​. Following its success, plans are underway to launch a network of quantum satellites by 2030 to form a truly global quantum network from space​. Other agencies (ESA, NASA) are also exploring quantum communications payloads. Meanwhile, ground teams have tested entanglement and QKD over high-altitude drones and between ground observatories (dozens to 100+ km), anticipating a heterogeneous network of ground and space links. The current state-of-the-art is that satellites can serve as trusted nodes in space or as entanglement sources, greatly extending reach. For instance, in 2021, the Chinese integrated network combined fiber and two satellite downlinks to achieve QKD between nodes ~4,600 km apart using trust at the satellite and ground stations​. This hybrid approach may persist until true quantum repeaters are deployed: satellites connect distant regions, within each region fiber networks link cities, and within cities short-distance quantum links (or even wireless) connect end-users.
  • Entanglement Swapping and Quantum Repeater Prototypes: The Delft three-node network mentioned earlier stands as the first multi-node entanglement-based quantum network demonstration. Here, entanglement was distributed and swapped over two fiber links (~1.3 km between Alice and Bob, and similarly between Bob and Charlie, with Bob as an intermediate node). The experiment used NV-center qubits in diamond as network nodes, with intrinsic quantum memories (nuclear spins) to store entanglement. They demonstrated not only swapping to get entanglement between Alice and Charlie, but also entangled all three (a Greenberger–Horne–Zeilinger state)​. This experiment, reported in 2021, is a milestone towards quantum repeaters: it showed that we can generate entanglement on separate links, store it, perform a BSM, and succeed in end-to-end entanglement – all essential repeater operations. However, the entanglement rate was very low (minutes per successful entanglement), and the fidelities, while high enough to violate Bell inequalities, were modest. As Pan observed, the fidelities of entanglement, gate operations, and memory storage in this setup will need improvement for scaling up – these together determine how many swaps can be done before the state is too noisy.
  • Quantum Memory and Repeater Node Experiments: Apart from Delft’s work, there have been other advances demonstrating pieces of the quantum repeater puzzle. In 2021, two experiments published back-to-back in Nature (one by a group in Barcelona, another by a group in Hefei, China) showcased heralded entanglement with solid-state quantum memories at telecom wavelength. The Barcelona team used multiplexed quantum memories (rare-earth ion doped crystals) to store entangled photonic states. They demonstrated entanglement between two nodes ~50 m apart, heralded by a successful Bell-state measurement in the middle, using photons in the low-loss telecom band and with the ability to store and multiplex 62 temporal modes in their memory​. This implies the memory could hold 62 different time-bin entangled modes at once, greatly increasing the effective entanglement rate. The Chinese group similarly used absorptive quantum memories and demonstrated high-speed entanglement distribution with multiplexing. While these were two-node setups (essentially one elementary link of a would-be repeater chain), they achieved technical feats important for repeaters: entanglement at telecom wavelengths (for low loss in fiber), high bandwidth and multi-mode storage (for boosting rates), and high fidelity entanglement after storage.
  • Teleportation over Metropolitan Distance: In the US, a Caltech-Fermilab collaboration achieved quantum teleportation (which consumes entanglement) of qubits over a 44 km fiber loop network in 2020​. They teleported time-bin qubits with a fidelity >90%, showing that even over a lossy 44 km fiber, entanglement could be distributed and consumed for teleportation using advanced superconducting nanowire detectors and off-the-shelf fiber. This is another sign that quantum entanglement distribution is becoming feasible outside the lab at greater scales.

In summary, operational quantum networks today are either short (a few tens of km with direct entanglement) or rely on trusted nodes for long distances. The longest-distance demonstrations of true entanglement distribution (without measurement) come from satellites (1200 km)​ and specialized fiber experiments (up to 248 km)​, but these are not yet routine communications networks. Quantum repeaters – the holy grail for long-distance, end-to-end entanglement distribution – are in the prototype stage. The Delft experiment connected three quantum processor nodes for the first time​, and other groups have shown entanglement with quantum memory over tens of kilometers​. These breakthroughs are establishing the ingredients for a future quantum internet. At present, one cannot yet call any large-scale network an entanglement-based quantum internet with untrusted nodes; rather we have a patchwork of shorter quantum links enhanced by either trusted relays or one-off entanglement swaps. The challenge going forward is to go from these isolated experiments to a scalable network where many nodes can be entangled through a chain of quantum repeaters, all while maintaining quantum security.

Future Outlook and Challenges

Building a scalable, high-fidelity quantum internet will require overcoming several major challenges and advancing entanglement distribution techniques significantly. Researchers worldwide are actively working on these fronts, and a rough roadmap for the future of entanglement distribution can be outlined as follows:

  • Improving Quantum Repeaters: The next big milestone is to create repeater nodes that are practical and can be deployed. This involves integrating quantum memories, entangled photon sources, and Bell-state measurement devices into robust systems. Key metrics to improve are:
    • Storage time: Memories must hold entangled states long enough for the protocols to run. Current memories (like NV centers or atomic ensembles) might hold states for milliseconds to seconds at best, which may limit distances (since classical signaling for swapping or purification could take milliseconds over hundreds of km). Future memories need longer coherence or the use of quantum error correction to preserve states.
    • Multiplexing and Rate: Since many entangled pairs may be needed (for purification or to overcome probabilistic entanglement generation), repeater nodes should produce and handle many entangled modes in parallel. Temporal, frequency, or spatial multiplexing can dramatically boost rates. As demonstrated by the 2021 experiments, multiplexed memories (dozens of modes) are being achieved​. High repetition-rate sources of entangled photons (e.g. pulsed lasers pumping nonlinear crystals or quantum dot sources) will also increase entanglement distribution throughput.
    • Wavelength conversion: Many good quantum memory technologies absorb/emit photons at visible or near-infrared wavelengths that are not optimal for long-distance fiber travel. Developing quantum frequency converters to translate entangled photons to the low-loss telecom band (1310 nm or 1550 nm) and back, without destroying entanglement, is an important task. Some repeater demonstrations already incorporate memories that natively work at telecom wavelengths, which is ideal.
    • Fidelity and Error Correction: As networks scale up, even purification may not be enough if noise accumulates. There is research into quantum error-corrected repeaters (sometimes called third-generation repeaters) where qubits are encoded in error-correcting codes so that even if some photons are lost or some operations fail, the entanglement can be restored by automated quantum error correction. This is extremely demanding in terms of physical qubits, but in the long term could allow near-lossless, high-fidelity entanglement distribution over arbitrary distances by treating the channel like an error-corrected logical qubit channel.
    • Integration and Automation: A future quantum internet will need many repeaters, potentially every 50–100 km or so in fiber. This demands that the technology be integrated (not bulky lab setups) and automated for long-term operation. Already, efforts like the Quantum Internet Alliance in Europe are focusing on engineering prototype repeaters and developing control software to manage entanglement generation, swapping, purification, and routing across a network. Classical network control algorithms will be needed to decide how to allocate quantum resources, when to attempt entanglement, which links to use, etc., to meet the end-user demands for entanglement​.
  • Scaling to More Nodes and Users: So far, most experiments involve a handful of nodes. But a real network will have many users who want to establish entanglement on demand (just as many users exchange data on the internet). This raises issues of routing, congestion, and resource management in quantum networks. Unlike classical networks, you can’t copy qubits or buffer them arbitrarily long in a router. Entanglement is also a perishable resource – if not used (or stored properly), it decoheres. Researchers are exploring protocols for quantum routing where intermediate nodes can perform swaps in different order or choose which pairs to entangle next to create an end-to-end connection. There is also the notion of an entanglement switch or router that can connect different network segments dynamically. These higher-level networking challenges will become more important as the hardware (swapping/purification) improves. Early studies on small networks and theoretical work on quantum network algorithms are starting to appear, anticipating the needs of a quantum internet once dozens of nodes are present.
  • Interfacing with Existing Infrastructure: During the transition, quantum networks will likely overlay on classical networks. This means co-existing with classical signals in the same fibers and leveraging existing fiber cables, satellites, and data centers to house equipment. There will be practical engineering steps: co-location of quantum and classical equipment without cross-talk (quantum signals are single photons and very faint, so isolation from classical laser noise is needed), telecom component compatibility (using standard fiber, wavelength channels, etc., possibly dedicated fiber for quantum in a bundle), and dealing with environmental factors (fiber temperature changes, mechanical vibrations affecting phase, etc.). Some experiments have shown entanglement distribution in the presence of classical traffic by using distinct wavelengths and filtering​. The goal will be to have quantum channels operating alongside classical internet traffic – perhaps via dark fiber, or new infrastructure if needed – in a cost-effective way. Standardization of quantum network interfaces (e.g. what wavelength for quantum signals, what protocols for handshaking between nodes, etc.) is an emerging topic.
  • Security and Trust: A future quantum internet promises unconditional security based on physics, but only if the nodes between users are all quantum and untrusted. Achieving that means eliminating the need for trusted intermediate nodes. This will require full end-to-end entangled connections via repeaters. Until that is achieved, hybrid networks will mix trusted and quantum nodes. One challenge is developing methods to verify entanglement shared with a distant party – essentially, the users need to be sure the entangled state is genuine and not tampered. Techniques like checking for Bell inequality violation or using entanglement in QKD protocols (like E91 protocol) can serve this purpose. Another aspect is privacy: in a quantum network, users might perform distributed quantum computations or exchange sensitive data via teleportation. Protocols ensuring that even the quantum operations remain private (perhaps through quantum encryption or blind quantum computing) might be needed. These concerns are analogous to how we designed classical networks with authentication and encryption – except now some of those tasks can be offloaded to physics (QKD provides encryption keys, etc.).
  • Applications Driving Requirements: The development of entanglement distribution will also be guided by the applications envisioned. QKD is the most near-term application, needing moderate entanglement rates and high fidelity (to generate secure keys reasonably fast). But future uses include quantum teleportation of qubit states (to send quantum data between quantum computers), which will demand higher entanglement throughput because each qubit teleported consumes one entangled pair. Another exciting application is distributed quantum computing or entangled sensor networks. For distributed computing, one might perform a task on two remote quantum processors by entangling them and executing gates via teleportation. This requires extremely high-fidelity entanglement (because quantum gates are error-prone, and distributing an entangled state that serves as a gate between processors will need to be nearly perfect or error-corrected). For sensor networks, multiple nodes sharing entanglement can vastly improve the precision of measurements (such as clock synchronization or baseline telescopes); this typically requires multipartite entangled states (like GHZ states) distributed to many parties. Generating and distributing multipartite entanglement adds another layer of complexity (and usually can be built from many bipartite entanglements plus local operations). Each application (QKD, teleportation, sensing) has different requirements on entanglement quality, rate, and type (bipartite vs multipartite), and the network will have to accommodate them.
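Two of the repeater metrics listed above can be made concrete with back-of-the-envelope numbers (all figures below are illustrative assumptions, not measured values). Multiplexing converts a small per-attempt success probability p into a per-round probability of 1 - (1-p)^N across N parallel modes, and the storage-time requirement follows from the classical heralding latency over fiber:

```python
def success_prob(p_single, n_modes):
    """Probability that at least one of n_modes parallel attempts succeeds."""
    return 1 - (1 - p_single) ** n_modes

C_FIBER_KM_PER_S = 2.0e5   # light in fiber travels at roughly 2/3 c

def herald_latency_s(km):
    """One-way classical signaling time over `km` of fiber."""
    return km / C_FIBER_KM_PER_S

# Illustrative: a 1% per-attempt link, multiplexed 1x / 10x / 100x
for n in (1, 10, 100):
    print(f"{n:3d} modes -> per-round success {success_prob(0.01, n):.3f}")

# A swap over a 200 km span needs the memory to survive at least the
# heralding signal, i.e. on the order of milliseconds.
print(f"200 km heralding latency: {herald_latency_s(200) * 1e3:.1f} ms")
```

This is why multiplexing depth and memory coherence time must improve together: a hundredfold-multiplexed source is wasted if the memory decoheres before the heralding signal arrives.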

The future outlook is very promising. Governments and large organizations have put significant investments into realizing a quantum internet. The timeline often cited is that in the next 5–10 years we may see regional quantum networks with a few repeater hops, and in the 10–20 year timeframe, perhaps a global network that integrates satellites, fiber, and exotic technologies to connect quantum devices worldwide. Intermediate milestones on this road (as proposed by researchers like Wehner, Kimble, and others​) include developing a trusted-node quantum network (Stage 1) – which we have now in early forms – then a quantum repeater network that can correct for losses but maybe not errors (Stage 2), and later a fault-tolerant quantum network (Stage 3) that is fully error-corrected and can support quantum computing tasks over the network.

In the near term, we can expect incremental advances: more nodes being added in test networks (for example, going from 3 nodes to 4 or 5 in the Delft network, or connecting two such networks via a swap), increasing distance between repeater nodes (perhaps entangling memories 50–100 km apart, which may soon be attempted), and better rates via multiplexing. Each of these will likely be accompanied by academic publications marking the progress. Concurrently, satellite-based entanglement distribution will expand – Micius’s successors or similar satellites will attempt higher-rate entanglement distribution and even try quantum repeaters in space (where perhaps you can have long-lived quantum memory in orbit).

By combining space and ground networks, one can envision a truly global web of entanglement. For example, satellites distribute entanglement to ground stations in different cities, those cities have local quantum networks linking various users (banks, data centers, etc.), and those local networks might connect to quantum computers or sensors in the area. A user in New York could share an entangled pair with another in London by a chain: New York’s node entangled to a US satellite, satellite entangled to a London ground station, then within London network delivered to the user. Achieving this seamlessly will require all the pieces discussed: repeaters on the ground, reliable satellite links, and standard protocols so that an entangled pair delivered across such a heterogeneous network is trustworthy and useful.
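The fidelity cost of such a chain can be estimated with a toy model. Assuming each link delivers a Werner state (a Bell state mixed with white noise) and that the swap operations themselves are perfect, the Werner parameter simply multiplies across links, so end-to-end fidelity decays geometrically with hop count; this is why purification or error correction becomes unavoidable at scale:

```python
def werner_param(F):
    """Werner parameter p from Bell-state fidelity F, via F = (3p + 1)/4."""
    return (4 * F - 1) / 3

def chained_fidelity(F_link, n_links):
    """End-to-end fidelity after ideal entanglement swaps over n identical
    Werner-state links (noise in the swaps themselves is ignored)."""
    p_end = werner_param(F_link) ** n_links
    return (3 * p_end + 1) / 4

# Illustrative: 95%-fidelity links, chains of growing length
for n in (1, 2, 4, 8):
    print(f"{n} links -> end-to-end fidelity {chained_fidelity(0.95, n):.3f}")
```

Even with quite good 95% links, an eight-hop chain in this model lands well below the fidelities useful for QKD or distributed computing, motivating purification at intermediate nodes.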

Several open research questions remain on the path forward:

  • How to greatly improve entanglement distribution efficiency? This encompasses sources, detectors, memories, and multiplexing. For instance, can we create entangled photon pairs on demand (not probabilistically) to avoid wasting time? Can we achieve near-unity coupling of photons in and out of quantum memories? Can adaptive protocols make the most of every distributed pair (using it immediately or feeding it into purification if below target fidelity)?
  • How to ensure scalability and interoperability? As networks grow, we must ensure that different technologies can work together. A network might have nodes based on atoms, others based on superconducting qubits, others using photonic qubits. Interfaces between these (usually via photons as the common medium) need to be developed so that an entangled link can be created between any two types of quantum systems. Additionally, networking multiple links reliably means dealing with complex event scheduling – if one link succeeds before another, the network may have to buffer entanglement until the others catch up. Developing a quantum network stack (analogous to the classical TCP/IP stack), with a physical layer for entanglement generation, a link layer for purification and swapping, and an application layer for using the entanglement, is a topic of active research.
  • Integration with classical networks: The quantum internet will not replace the classical internet but complement it. How to integrate the two is an open question – for example, managing the classical communication that always accompanies quantum protocols (for synchronization, announcing measurement results, etc.) alongside the quantum channels. Latency issues become interesting: quantum protocols often require a classical round trip (for BSM outcomes or basis sifting in QKD), so the network must accommodate slightly longer latencies than typical classical high-speed data. Intelligent routing might send quantum signals along lower-loss paths even if they are not the shortest geographically.
  • Robustness: Quantum signals are delicate. The network must be robust to node failures or channel disruptions. If a repeater node goes down, can the network reroute entanglement through an alternate path (if available)? This might require redundancy. Also, can entanglement distribution protocols be made self-healing – e.g., if an attempt fails, automatically try a new strategy without higher-level software needing to intervene? These operational considerations will be crucial for a real-world deployment where maintenance and uptime are important.
  • Towards a full quantum internet: Ultimately, researchers envision a quantum internet where any two users can, at the push of a button, get entangled qubits delivered to them, enabling secure communication, distributed quantum computing, or precision sensing applications​. Achieving this will likely involve a combination of technologies: ground repeaters for regional networking and satellites for long-haul links, as well as advancements in quantum error correction to maintain entanglement over thousands of kilometers. It may also require quantum network coding or other novel techniques to manage entanglement distribution in complex topologies (analogous to how classical networks use packet switching and routing algorithms to efficiently use the network).
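The purification trade-off raised in these questions ("use the pair immediately or purify it first") can be sketched in a simplified model. Assuming noise of a single error type, one recurrence round maps two pairs of fidelity F to one pair of fidelity F² / (F² + (1-F)²), succeeding with probability F² + (1-F)²; an adaptive protocol can then compute how many rounds a below-target pair needs:

```python
def purify_round(F):
    """One recurrence round in a simplified single-error-type model:
    two pairs of fidelity F yield one higher-fidelity pair on success."""
    p_succ = F ** 2 + (1 - F) ** 2
    return F ** 2 / p_succ, p_succ

def rounds_to_target(F, target):
    """Rounds of purification needed to lift F (must exceed 0.5) to target."""
    n = 0
    while F < target:
        F, _ = purify_round(F)
        n += 1
    return n, F

n, F = rounds_to_target(0.70, 0.99)
print(f"{n} rounds lift a 0.70 pair to fidelity {F:.4f}")
# Each round consumes pairs geometrically, so "purify or use as-is" is a
# genuine resource decision, not a free fidelity knob.
```

Real protocols (BBPSSW, DEJMPS) track full Bell-diagonal states rather than a single fidelity number, but the qualitative lesson is the same: fidelity gains are paid for in consumed pairs and in classical round trips.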

In conclusion, entanglement distribution techniques have progressed from laboratory experiments over meters to early global-scale tests using satellites. Direct transmission of entangled photons is limited by loss but has been pushed to record distances, while entanglement swapping has been validated in small networks, marking the birth of quantum repeater technology. Entanglement purification provides a pathway to manage errors at the cost of more resources, and real systems are beginning to incorporate purification and memory as those components improve. The current state of the art includes operational QKD networks with trusted nodes and a few pioneering quantum repeater demonstrations; the next steps will involve untrusted quantum repeaters that extend entanglement over hundreds of kilometers, eventually stitching together a true quantum internet. Each piece – sources, detectors, memories, quantum logic for swapping/purifying, and network control – is an area of active research and rapid progress. The coming decade will likely see the first repeater-assisted quantum networks that outperform what is possible with trusted nodes alone, and these will pave the way for a scalable quantum internet. The quest to distribute entanglement efficiently and over long distances is challenging, but the steady advances and numerous breakthroughs of recent years (from 1200 km satellite links to multi-node swapping networks) suggest that entanglement distribution techniques will continue to improve, bringing the vision of a worldwide quantum network closer to reality.

Marin Ivezic

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven professional services firm dedicated to helping organizations unlock the transformative power of quantum technologies. Alongside leading its specialized service, Secure Quantum (SecureQuantum.com)—focused on quantum resilience and post-quantum cryptography—I also invest in cutting-edge quantum ventures through Quantum.Partners. Currently, I’m completing a PhD in Quantum Computing and authoring an upcoming book “Practical Quantum Resistance” (QuantumResistance.com) while regularly sharing news and insights on quantum computing and quantum security at PostQuantum.com. I’m primarily a cybersecurity and tech risk expert with more than three decades of experience, particularly in critical infrastructure cyber protection. That focus drew me into quantum computing in the early 2000s, and I’ve been captivated by its opportunities and risks ever since. So my experience in quantum tech stretches back decades, having previously founded Boston Photonics and PQ Defense where I engaged in quantum-related R&D well before the field’s mainstream emergence. Today, with quantum computing finally on the horizon, I’ve returned to a 100% focus on quantum technology and its associated risks—drawing on my quantum and AI background, decades of cybersecurity expertise, and experience overseeing major technology transformations—all to help organizations and nations safeguard themselves against quantum threats and capitalize on quantum-driven opportunities.