China Just Pushed Device-Independent QKD (DI-QKD) to 100 Kilometres
A team led by Pan Jian-Wei has extended the “gold standard” of quantum-secure communication from tabletop experiments to metropolitan fibre – and published a quantum repeater blueprint alongside it. The West’s strongest objection to QKD just got harder to defend.
8 Feb 2026 – A team at the University of Science and Technology of China (USTC), led by Pan Jian-Wei, has reported the first realization of Device-Independent Quantum Key Distribution (DI-QKD) over a 100 km fibre link. Unlike standard QKD, this protocol guarantees security even with untrusted hardware, by observing the violation of a Bell inequality (CHSH). The experiment used single rubidium atoms trapped in optical tweezers and employed quantum frequency conversion (QFC) to down-convert signals to the telecom band. While the 100 km result demonstrated the physical feasibility of the protocol (asymptotic security), a fully extractable finite-size secure key – usable for immediate encryption – was generated over an 11 km distance.
To appreciate why this matters, consider where DI-QKD stood just three and a half years ago (I covered all three experiments in my DI-QKD explainer). In July 2022, three independent groups published DI-QKD demonstrations almost simultaneously. The Oxford experiment (Nadlinger et al.) extracted 95,628 secure key bits – but over a distance of 2 metres, with both trapped strontium ions sitting in the same laboratory. The Munich experiment (Zhang et al.) used rubidium-87 atoms separated across a university campus by 700 metres of fibre, but could only demonstrate positive asymptotic key rates – never enough statistics for finite-key extraction in its 75-hour runtime. A USTC photonic experiment showed feasibility to 220 metres but lacked the random basis switching required for a complete DI-QKD protocol.
As Antonio Acín, one of the theoretical architects of device-independent cryptography, observed at the time from ICFO Barcelona: in practice, we do not usually need full cryptographic schemes to secure a transaction between two people who are only two metres apart – taking a couple of steps would be more than enough.
The February 2026 result changes that equation. The distance progression from Oxford's 2 metres in 2022 to USTC's 11 km of finite-key security is an improvement of more than three orders of magnitude, with entangling speeds also orders of magnitude higher than previous two-photon schemes. And the 100-kilometre asymptotic demonstration – while not yet yielding extractable finite-size key at that range – shows that the physics does not break down at metropolitan and intercity scales.
For readers who want the technical foundations of why DI-QKD matters and how it differs from conventional QKD, my comprehensive DI-QKD explainer covers the Bell inequality framework, the trust hierarchy, and the security proof architecture in detail.
What they built
The USTC experiment builds on the same rubidium-87 single-atom platform used in the 2022 Munich demonstration, but with three critical engineering innovations that collectively enabled the distance leap.
First, single-photon interference for entanglement heralding. The Munich experiment used a two-photon heralding scheme, where success probability scales as η² – the square of the photon transmission through the fibre. This quadratic dependence is punishing: double the fibre length and you don’t halve your rate, you quarter it. The USTC team switched to a single-photon interference scheme, where the success probability scales as η – a linear dependence that fundamentally changes the rate-distance relationship. At 100 kilometres, this is the difference between a protocol that produces occasional events and one that produces essentially nothing.
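To make the scaling concrete, here's a minimal back-of-envelope sketch in Python, assuming the ~0.2 dB/km telecom-band loss discussed in the next paragraph:

```python
def eta(length_km: float, loss_db_per_km: float = 0.2) -> float:
    """Fibre transmission: eta = 10^(-alpha * L / 10)."""
    return 10 ** (-loss_db_per_km * length_km / 10)

for L in (10, 50, 100):
    e = eta(L)
    # Two-photon heralding succeeds with probability ~ eta^2;
    # single-photon interference heralding scales as ~ eta.
    print(f"{L:>3} km: single-photon ~ {e:.4f}, two-photon ~ {e ** 2:.6f}")

# At 100 km (20 dB of loss): eta = 1e-2 versus eta^2 = 1e-4 --
# a hundredfold rate advantage for the single-photon scheme,
# and the gap widens exponentially with distance.
```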
Second, quantum frequency conversion to telecom wavelengths. Rubidium-87 atoms naturally emit photons at 780 nanometres – a wavelength where standard telecom fibre attenuates the signal at roughly 3.5 dB per kilometre, meaning half your photons are gone after barely 1 kilometre. The USTC team implemented quantum frequency conversion to shift these photons down to 1,315 nm, in the telecom O-band, where fibre loss drops to approximately 0.2 dB per kilometre. That single change extends the practical reach of each photon by more than an order of magnitude.
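Plugging the two attenuation figures into the standard transmission formula makes the win explicit – the distance at which half the photons survive (3 dB of loss) stretches from under a kilometre to 15 kilometres:

$$\eta = 10^{-\alpha L/10}, \qquad L_{3\,\mathrm{dB}} = \frac{3\ \mathrm{dB}}{\alpha} \approx \frac{3}{3.5} \approx 0.86\ \mathrm{km}\ (780\ \mathrm{nm}) \quad \text{vs} \quad \frac{3}{0.2} = 15\ \mathrm{km}\ (1{,}315\ \mathrm{nm})$$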
Third, a Rydberg-based photon emission scheme. Conventional atom-photon entanglement generation suffers from photon recoil – the atom gets kicked when it emits a photon, which degrades entanglement fidelity. The USTC team used a Rydberg state excitation pathway that suppresses this recoil noise, improving the quality of the entangled state and therefore the strength of the Bell inequality violation at distance.
Together, these three innovations produced a system that maintained Bell inequality violation (CHSH parameter S ≈ 2.589, well above the classical bound of 2) at all tested distances: 11, 20, 50, 70, and 100 kilometres. At 11 kilometres, the quantum bit error rate was approximately 3%, yielding a finite-key rate of 0.112 bits per entanglement event – enough to extract provably secure key material against the most general quantum attacks. At 100 kilometres, the error rate climbed above 7% and the key rate was positive only in the asymptotic analysis, meaning a practical key extraction at that distance would require either longer runtimes or higher entanglement generation rates than the experiment achieved.
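For reference, the CHSH parameter S combines four correlation measurements, two settings per side; any local (classical) model obeys $S \le 2$, while quantum mechanics permits at most $2\sqrt{2}$:

$$S = \left|E(a,b) + E(a,b') + E(a',b) - E(a',b')\right| \le 2 \ \text{(local models)}, \qquad S \le 2\sqrt{2} \approx 2.83 \ \text{(quantum)}$$

The reported S ≈ 2.589 sits comfortably inside the quantum regime – far enough above 2 for the finite-key analysis to tolerate statistical fluctuations at 11 kilometres.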
The experiment ran for approximately 624 hours – 26 days of continuous operation – generating entangled pairs at roughly 0.53 per second. The absolute key generation rate at 11 kilometres was approximately 0.06 bits per second.
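As a rough sanity check on these figures, the well-known asymptotic DI-QKD rate bound against collective attacks (Acín et al., 2007) can be evaluated from the reported S and error rate. The sketch below is illustrative only – the paper's 0.112 bits-per-event figure is lower than this bound because it accounts for finite statistics and fully general (coherent) attacks:

```python
import math

def h(p: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

S, Q = 2.589, 0.03   # reported CHSH value and quantum bit error rate at 11 km

# Acin et al. (2007) asymptotic key-rate bound against collective attacks:
#   r >= 1 - h(Q) - h((1 + sqrt((S/2)^2 - 1)) / 2)
r = 1 - h(Q) - h((1 + math.sqrt((S / 2) ** 2 - 1)) / 2)
print(f"asymptotic bound: {r:.2f} bits per entanglement event")  # ~0.37

# Absolute throughput implied by the reported numbers:
print(f"absolute rate: {0.53 * 0.112:.3f} bits/s")  # ~0.06, matching the paper
```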
Under the hood
To recap, the breakthrough relied on three critical innovations:
- Single-Atom Memories: Utilizing rubidium-87 atoms to store quantum states, enabling the ‘memory’ function required for future quantum repeaters.
- Quantum Frequency Conversion: Down-converting the atom’s natural 780 nm emission to the 1.3 µm telecom band to minimize signal loss in the optical fibre.
- Single-Photon Interference (SPI): An advanced heralding scheme that detects entanglement via a single photon arrival, significantly boosting the success rate compared to traditional two-photon methods.
The caveats (and why they matter less than you might think)
Independent experts were quick to flag two significant limitations. Steve Rolston of the University of Maryland called the data rates “abysmally small – producing less than one bit of secure key every 10 seconds.” He also noted that the fibre was coiled in a laboratory, not deployed between geographically separated locations. The Phys.org coverage noted explicitly that all nodes were in the same lab, so the locality loophole – one of the two canonical loopholes in Bell tests – was not closed.
Antonio Acín, rating the work “excellent” and “a major achievement,” still observed candidly that Alice and Bob being in the same laboratory, connected by a coiled fibre of a given length, simulates the situation of being tens of kilometres apart but is not exactly the same in practice.
These are fair points. The coiled-fibre approach accurately simulates the optical loss of a deployed link but does not replicate the environmental noise, vibration, temperature fluctuations, and synchronisation challenges of real-world fibre. And the locality loophole – which requires space-like separation between Alice’s and Bob’s measurement events – simply cannot be closed with both parties in the same laboratory; closing it demands genuinely separated sites.
But here’s why these caveats, while technically valid, don’t diminish the result’s significance as much as they might initially suggest. The purpose of DI-QKD is to certify security through Bell inequality violation regardless of what the devices are doing internally. The critical experimental question was whether entanglement quality and Bell violation could survive 100 kilometres of fibre loss and noise – and it can. Deploying the same system between geographically separated labs would add engineering complexity but does not change the underlying physics. The distance problem was the fundamental one. The deployment problem is now an engineering task.
Carlos Sabín of the Autonomous University of Madrid made a related observation: the key error rate ranging from 3% at 11 kilometres to more than 7% at 100 kilometres shows the system is still far from completely error-free key distribution. True – but conventional QKD also operates with non-zero error rates. The question is whether the error rate stays below the threshold where secure key can be extracted, and at 11 kilometres, it clearly does.
The companion paper: quantum repeaters from the same lab, the same week
What most coverage missed, or mentioned only in passing, is that the DI-QKD paper in Science was published alongside a companion paper in Nature from the same USTC group: “Long-lived remote ion–ion entanglement for scalable quantum repeaters.”
This dual publication reveals a convergent research strategy. Quantum repeaters are the technology that would eventually allow DI-QKD – or any entanglement-based protocol – to operate over arbitrary distances without the exponential photon-loss penalty that currently throttles rates. The repeater paper demonstrated that quantum entanglement can persist in trapped-ion memories significantly longer than the time required to establish inter-segment connections – the fundamental requirement for chaining repeater nodes together.
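A rough, illustrative calculation shows why memory lifetime is the binding constraint here. The segment length and heralding probability below are assumptions chosen for illustration, not figures from the Nature paper:

```python
# Heralded entanglement over a repeater segment: each attempt must wait
# for the herald signal to travel back through the fibre.
C_FIBRE_KM_PER_S = 2.0e5   # speed of light in fibre, roughly 2/3 c
segment_km = 50            # hypothetical repeater segment length
p_success = 1e-3           # hypothetical heralding success probability

attempt_s = segment_km / C_FIBRE_KM_PER_S   # time per attempt (herald latency)
expected_wait_s = attempt_s / p_success     # mean time until one segment succeeds

print(f"per attempt: {attempt_s * 1e3:.2f} ms, expected wait: {expected_wait_s:.2f} s")
# -> 0.25 ms per attempt, ~0.25 s expected wait: a memory must hold its
# half of the entanglement for hundreds of milliseconds while the
# neighbouring segment keeps trying -- hence "long-lived" is the headline.
```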
If the DI-QKD result answers the question “can device-independent security survive metropolitan distances?”, the repeater result begins answering “how do we eventually push it further?”
How this fits into China’s quantum communications trajectory
The USTC DI-QKD result does not exist in isolation. It sits within a systematically expanding quantum communications programme that PostQuantum.com has tracked across multiple milestones – and the trajectory is striking for its coherence.
The same Pan Jian-Wei team that led this experiment also led the Micius satellite programme, which demonstrated satellite-to-ground QKD at intercontinental distances starting in 2016 and enabled the first quantum-encrypted intercontinental video call between Beijing and Vienna. They built the Beijing-Shanghai 2,032-kilometre fibre backbone with trusted relay nodes that now forms the spine of a national quantum communication network exceeding 12,000 kilometres across 17 provinces and 80 cities. They achieved twin-field QKD over 1,002 kilometres of standard fibre – a protocol that breaks the fundamental rate-distance limit without repeaters. And in March 2025, their Jinan-1 microsatellite achieved 12,900-kilometre intercontinental QKD between Beijing and Stellenbosch, South Africa – a satellite 10 times lighter and 45 times cheaper than Micius, with key generation rates 100 to 1,000 times faster.
Now add DI-QKD at metropolitan scale and a quantum repeater building block, both published in the same week.
This is not a scattered set of one-off experiments. It is a coordinated programme advancing across every vector simultaneously: satellite QKD for intercontinental reach, fibre QKD for distance records, microsatellite miniaturisation for commercial deployment, device-independent QKD for the strongest possible security guarantees, and quantum repeaters for scaling all of the above. China Telecom became the controlling shareholder of QuantumCTek – China’s leading QKD manufacturer – in January 2025. Four more microsatellites are planned for commercial launch in 2026. The 15th Five-Year Plan (2026–2030) explicitly lists quantum technology as a new driver of economic growth. MERICS estimated total Chinese investment in quantum technology at approximately $15 billion as of early 2026.
My take
I’ve spent considerable time mapping the quantum security landscape, for my clients and here on my blog. I’ve written extensively about why I think outright dismissal of QKD may be shortsighted, and I feel like this is my “I told you so!” moment.
I think this result matters more than its modest 0.06-bit-per-second key rate might suggest, because it directly targets the strongest technical argument Western security agencies use against QKD.
When the NSA says it “does not support the usage of QKD” for national security systems, when the UK’s NCSC says it “will not support QKD for government or military applications,” when ANSSI, BSI, and the Dutch NLNCSA jointly declare QKD “not yet sufficiently mature from a security perspective” – they are not primarily making arguments about key rates or fibre distances. Those are engineering parameters that everyone expects will improve. Their core technical objection is about device trust: the argument that the actual security provided by a QKD system is not the theoretical unconditional security from the laws of physics but rather the more limited security that can be achieved by hardware and engineering designs. The NSA phrases this almost exactly, and the NCSC echoes it: claims of unconditional security can never apply to actual implementations.
This is a legitimate and well-founded criticism of conventional QKD. The history of “quantum hacking” – from the detector-blinding attacks that compromised commercial QKD systems without trace, to the catalogue of source-side and calibration exploits documented in my DI-QKD technical explainer – validates the concern completely. Device-dependent QKD is only as secure as the accuracy of its device model, and no device model is perfect.
But DI-QKD was designed specifically to eliminate this vulnerability. Its security proof does not assume anything about the devices’ internal workings. It treats the quantum hardware as a black box – potentially manufactured by the adversary – and certifies security purely through the observed violation of a Bell inequality. The devices can literally be designed and built by Eve. As long as the statistics pass the Bell test, the proof holds.
The USTC result demonstrates that this is no longer a theoretical construction at cryptographically irrelevant distances. Device-independent security has been achieved at 11 kilometres with full finite-key composable security against the most general quantum attacks. The distance at which you can do DI-QKD is now comparable to the distance at which you’d actually want to do it – connecting government buildings across a city, linking data centres, securing financial institution communications.
Does this mean the NSA and NCSC should reverse their positions tomorrow? No. The key rate is still many orders of magnitude below what’s practical. The experiment was conducted with coiled fibre in a single lab. The locality loophole remains open. And DI-QKD still requires classical authentication – which means you need PQC anyway, a point the NSA makes repeatedly and correctly.
But the trajectory should give Western policymakers pause.
The gap is widening, not narrowing
I wrote in my analysis of why countries differ on QKD that China’s progress shows that many of the supposed “impossibilities” around QKD can be overcome with enough investment. That thesis is looking stronger by the month.
Consider what China has that the West does not: a 12,000-kilometre operational quantum communication network serving government, financial, and critical infrastructure users across 80 cities. A satellite constellation programme that has moved from the 630-kilogram Micius to miniaturised commercial-grade microsatellites generating keys 100–1,000 times faster. A $15 billion investment base. And now, the first metropolitan-scale DI-QKD demonstration – paired with the first quantum repeater building block – from the same integrated research programme.
The Western response remains fundamentally split. The NSA and NCSC are firmly PQC-first, PQC-only for most purposes. The EU hedges – funding the EuroQCI initiative across 26 member states, with Eagle-1 (ESA’s first QKD satellite) scheduled for 2026 launch and the €10.6 billion IRIS² satellite programme targeting 170 LEO satellites by 2027. France’s ANSSI and Germany’s BSI co-sign papers calling QKD immature while simultaneously participating in German and European QKD research projects.
I understand the logic of the PQC-first position. NIST’s post-quantum cryptography standards (ML-KEM, ML-DSA, SLH-DSA, and now HQC as a backup) are deployable today on existing infrastructure. They don’t require specialised hardware. They scale. They solve the near-term “harvest now, decrypt later” threat. Every organisation should be migrating to PQC. That is not in dispute.
But I keep returning to a question that the pure PQC position doesn’t adequately address: what happens if lattice-based cryptography turns out to have a structural vulnerability we haven’t found yet? NIST’s own selection of HQC – a code-based algorithm, not lattice-based – as a backup KEM in March 2025 was explicitly motivated by this concern. The SIKE break in 2022, where a NIST PQC finalist was broken by a classical algorithm in roughly an hour, demonstrated that mathematical assumptions we consider safe can fail catastrophically and without warning.
QKD, and eventually DI-QKD, offers insurance against exactly that scenario. Its security rests on the laws of physics, not on unproven mathematical hardness assumptions. In a defence-in-depth architecture, PQC handles authentication and broad-scale key management while QKD (or DI-QKD) provides an independent, physics-backed key distribution layer for the most sensitive links. Break one, and the other still stands. This is the “belt and suspenders” approach I’ve advocated before, and I think the USTC result strengthens the case for it.
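To make the belt-and-suspenders idea concrete, here is a minimal sketch of one common hybrid-key construction: derive the session key from both a QKD-delivered key block and a PQC KEM shared secret via HKDF, so that an attacker must break both layers to recover it. The inputs here are random stand-ins, and a production design would follow an applicable standard rather than this sketch:

```python
import hashlib, hmac, os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869) with SHA-256."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF-Expand (RFC 5869) with SHA-256."""
    okm, block = b"", b""
    for i in range((length + 31) // 32):
        block = hmac.new(prk, block + info + bytes([i + 1]), hashlib.sha256).digest()
        okm += block
    return okm[:length]

# Stand-ins: a key block delivered over the QKD layer and a shared
# secret from a PQC KEM such as ML-KEM (both random placeholders here).
qkd_key = os.urandom(32)
pqc_secret = os.urandom(32)

# Concatenate-then-extract: the derived key stays secret as long as
# EITHER input does -- break one layer and the other still stands.
session_key = hkdf_expand(hkdf_extract(b"hybrid-kdf-demo", qkd_key + pqc_secret),
                          b"session key", 32)
print(session_key.hex())
```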
The rate problem is real – but probably temporary
The single most valid criticism of DI-QKD today is the key rate. At 0.06 bits per second, it’s seven to eight orders of magnitude below commercial QKD systems (Toshiba demonstrated 13.7 Mbps at 10 kilometres; the first US commercial carrier deployment achieved 1.5 Mbps at 21.8 kilometres in December 2025). That’s an enormous gulf.
But I’d note two things. First, twin-field QKD at its record distance of 1,002 kilometres achieves only 0.0034 bits per second – comparable to DI-QKD’s rate. At extreme distances with extreme security guarantees, even conventional QKD faces severe rate penalties. The gap between DI-QKD and practical conventional QKD is large at metropolitan distances but narrows dramatically as you push the boundaries of either approach.
Second, the converging technology trends suggest the rate limitation is an engineering challenge, not a physics barrier. Hollow-core fibre – demonstrated by Petrovich et al. in Nature Photonics 2025 with a record 0.091 dB/km attenuation across a broad bandwidth that includes rubidium’s native 780 nm emission wavelength – could eventually eliminate the need for quantum frequency conversion entirely, simplifying the system and reducing loss. Microsoft is deploying 15,000 kilometres of hollow-core fibre across its Azure network. Multiplexed atom arrays could generate multiple entangled pairs simultaneously, multiplying rates proportionally. And the quantum repeater building block published alongside this DI-QKD paper points toward a future where the exponential photon-loss barrier is broken by design rather than brute-forced.
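For scale, using the same transmission formula as earlier: over 100 km, the demonstrated 0.091 dB/km attenuation would deliver roughly twelve times the photon throughput of conventional telecom fibre – and at a wavelength the rubidium atoms emit natively:

$$\eta_{\text{hollow-core}} = 10^{-0.091 \times 100/10} \approx 0.12 \qquad \text{vs} \qquad \eta_{\text{telecom}} = 10^{-0.2 \times 100/10} = 0.01$$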
The progression from Delft’s one-pair-per-hour loophole-free Bell test in 2015 to Oxford’s 63-pairs-per-second DI-QKD in 2022 to USTC’s 0.53-pairs-per-second across 100 kilometres in 2026 may look slow in absolute terms. But the experimental trajectory – closing loopholes, extending distance, improving rates, adding finite-key security – has been remarkably consistent. Each generation addresses the previous generation’s primary limitation.
The realization of DI-QKD over 100 km is a “Sputnik moment” for the quantum internet. It proves that the stringent requirements of Bell tests – high efficiency, low noise, and memory coherence – can be met over 100 kilometres of telecom fibre, even if not yet over deployed infrastructure. While currently rate-limited, the underlying technologies (quantum frequency conversion and single-atom memories) are the exact building blocks required for quantum repeaters. As these technologies mature, we may see the emergence of “switched quantum networks” in which DI-QKD provides the ultimate layer of security for the world’s most sensitive data.
What this means for the quantum security roadmap
I think the USTC DI-QKD result and its companion quantum repeater paper mark a milestone that deserves a specific place in how we think about the long-term quantum security architecture.
For organisations doing PQC migration: nothing changes in the near term. PQC remains the urgent priority. Every system using RSA, ECDSA, or classical Diffie-Hellman needs to transition. NIST’s 2030/2035 deprecation timeline is real and appropriate.
For organisations building quantum networks: DI-QKD at 11 kilometres with finite-key security means that the highest-assurance form of QKD is now plausible at operationally relevant distances. It’s not deployable today – the rates are too low, the hardware is lab-grade, the cost is prohibitive. But it’s no longer a physics experiment with no path to deployment. It’s an engineering programme with a visible, if distant, destination.
For policymakers and strategists: China’s simultaneous advances across satellite QKD, fibre QKD distance records, microsatellite commercialisation, device-independent QKD, and quantum repeaters represent a coherent strategic programme that no other nation – and no coalition of nations – currently matches. The EuroQCI initiative is the closest Western analogue, and it is years behind in both scale and integration. The US has no comparable national QKD infrastructure programme.
Whether this matters depends on how you weight the scenarios. If PQC proves robust for decades, the absence of QKD infrastructure will be a non-issue. If lattice cryptography reveals a structural weakness, or if quantum computers advance faster than expected, or if the “harvest now, decrypt later” threat proves larger than estimated – then the countries with operational quantum-secured infrastructure will have a significant and hard-to-close advantage.
Quantum Upside & Quantum Risk – Handled
My company – Applied Quantum – helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.