
Device-Independent QKD (DI-QKD)

The most paranoid protocol in quantum cryptography doesn’t trust its own hardware. Here’s why that matters – and how close it is to working.


Why DI-QKD exists

Modern quantum key distribution (QKD) has always carried a slightly uncomfortable subtext: the math may be information-theoretic, but the box on the rack is engineered. And engineered systems fail in messy, non-theoretical ways.

That gap – between “provably secure on paper” and “secure in a live network with real detectors, lasers, firmware, calibration routines, and supply chains” – is exactly the space that device-independent QKD (DI‑QKD) is designed to collapse.

If you’ve followed my previous writing on next-generation QKD protocols, you’ve already seen DI‑QKD mentioned as the far edge of the roadmap: the “ultimate” version of QKD that doesn’t quietly assume your hardware behaves as modelled. That article framed it succinctly: DI‑QKD aims to deliver security even if the devices are untrusted or outright malicious, by leaning on observed quantum statistics rather than detailed device characterisation.

The academic community says the same thing, but with less diplomacy. A comprehensive review in npj Quantum Information calls DI‑QKD the “gold standard” of quantum key distribution, precisely because it relaxes the need to physically model devices – and therefore rules out broad categories of “quantum hacking” that exploit implementation loopholes rather than breaking the underlying cryptography.

So DI‑QKD exists for a very cybersecurity-flavoured reason: the strongest attacks against real QKD deployments are often not “Eve intercepts photons and is detected,” but “Eve exploits the detector,” “Eve exploits calibration,” “Eve exploits hidden degrees of freedom,” or “Eve exploits the vendor.” The literature is full of concrete examples where detector behaviour can be manipulated in ways that, under certain conditions, fundamentally compromise the security assumptions that the device model is built on.

DI‑QKD’s pitch is radical and clean: stop trusting the internal description of the devices; trust only what can be certified from the input–output statistics.

And it is a fascinating topic. So let’s dig into it.

The device trust problem: how “unhackable” QKD keeps getting hacked

Detector blinding: the attack that rewrote the narrative

The story of DI‑QKD really begins with an embarrassment. In 2010, a team led by Lars Lydersen at the Norwegian University of Science and Technology published a paper in Nature Photonics demonstrating that they could completely compromise commercial QKD systems – products marketed as offering physics-guaranteed security – without leaving any detectable trace.

Their technique was brutally elegant: by shining continuous bright light at the single-photon detectors inside Bob’s device, they “blinded” the avalanche photodiodes, forcing them out of their quantum-sensitive Geiger mode and into a classical linear regime where they behaved like ordinary power meters. In this state, the detectors only clicked when they received a bright enough pulse – and Eve could send carefully timed bright pulses that made the detectors click exactly when she wanted, allowing her to reconstruct the full secret key.

The quantum bit error rate – the standard alarm bell in QKD protocols – remained undisturbed. From Alice and Bob’s perspective, nothing looked wrong. The protocol completed normally. The key was generated. And Eve had a perfect copy.

This was not an isolated curiosity. The same team later demonstrated “thermal blinding” – a variant that uses even gentler optical power, making detection of the attack hardware practically impossible. The point was not to embarrass any particular vendor. The point was structural: the security proof of BB84 (and similar protocols) assumes specific physical behaviour of the detectors. If the detectors deviate from that assumed behaviour, the proof no longer applies – and the system can be quietly compromised.

A taxonomy of quantum hacking

The Lydersen attack opened the floodgates on what the community now calls “quantum hacking” research. The taxonomy of known attacks is instructive for understanding why DI‑QKD was needed:

Detector-side attacks exploit the measurement apparatus. Time-shift attacks take advantage of timing-dependent variations in detector efficiency to bias which detector fires, giving Eve partial information about the key bit. Detector efficiency mismatch attacks exploit the fact that real single-photon detectors rarely have perfectly identical response characteristics. After-gate attacks inject light precisely timed to arrive just outside the detector’s gating window, exploiting non-ideal temporal profiles. Each of these targets the same weakness: detectors don’t behave like the perfect mathematical objects in the security proof.

Source-side attacks target the preparation apparatus. Trojan horse attacks inject probe light into Alice’s device and analyse the back-reflections to determine her encoding choices – essentially reading the source’s internal state optically. Phase-remapping attacks exploit the fact that phase modulators in real systems don’t instantaneously switch between states, creating transitional encoding values that leak information. Photon-number splitting attacks (against weak coherent pulse sources) exploit the non-trivial multi-photon probability in laser-based sources to extract key information from the extra photons that Alice inadvertently sends.

Calibration and side-channel attacks exploit the operational processes surrounding QKD. Calibration attacks manipulate the setup routines that QKD systems perform before key generation, poisoning the reference frame that Alice and Bob use to interpret their data. Electromagnetic side-channel attacks extract information from the electronic signals that drive optical modulators. Acoustic side-channel attacks have even been theorised for certain device configurations.

A comprehensive review of practical QKD attacks published on arXiv catalogues these and other implementation vulnerabilities, concluding that practical security of QKD is an ongoing cat-and-mouse game between attack researchers and system designers. Countermeasures exist for many specific attacks – watchdog detectors, decoy-state protocols, optical isolation – but each countermeasure addresses a specific vulnerability rather than the structural problem.

The structural problem: models versus reality

Every one of these attacks exploits the same fundamental issue: the gap between the mathematical model of the device (which the security proof assumes) and the physical reality of the device (which Eve can exploit).

This is not unique to QKD – classical cryptographic implementations face analogous issues (timing attacks, power analysis, fault injection). But QKD’s marketing as “unconditionally secure” or “guaranteed by the laws of physics” makes the gap particularly sharp. The security is unconditional only if the devices match their mathematical descriptions. And no real device ever does, perfectly.

As the UK’s National Cyber Security Centre (NCSC) puts it with characteristic British directness in their quantum networking technologies white paper: the claims of “unconditional security” for QKD “can never apply to actual implementations.” And as France’s ANSSI notes in its technical position paper: the gap between theoretical security proofs and practical implementations remains a core concern.

This is where DI‑QKD enters: not as another patch for another attack, but as a fundamentally different approach to the relationship between security proofs and hardware.

The spectrum of device trust in QKD

Before diving into how DI‑QKD works, it helps to understand the trust hierarchy it sits atop. The QKD community has developed a graduated set of approaches to the device-trust problem, each removing a category of assumptions at the cost of more demanding experimental requirements.

Device-dependent QKD (DD-QKD): the baseline

Standard protocols like BB84, B92, and their derivatives are “device-dependent” in the sense that their security proofs rely on a detailed model of both the source and measurement devices. The proof specifies what quantum states the source prepares, what measurements the detectors perform, what Hilbert space dimension the system operates in, how detector dark counts behave, and so on. Security is then proven under the assumption that the real system stays within tolerances of that model.

This is fine until “unmodelled” turns into “exploitable.” Which, as we’ve seen, it does – repeatedly.

Countermeasures exist: decoy-state protocols address photon-number splitting attacks; measurement device monitoring addresses some detector exploits; source characterisation protocols bound source imperfections. But each countermeasure adds complexity, and the residual attack surface remains device-model dependent.

Measurement-Device-Independent QKD (MDI-QKD): removing the detector problem

MDI‑QKD, proposed by Hoi-Kwong Lo, Marcos Curty, and Bing Qi in 2012, offered a significant conceptual advance. In MDI‑QKD, Alice and Bob each prepare quantum states and send them to an untrusted third party (Charlie) who performs a Bell-state measurement. If Charlie is compromised – even if Charlie is Eve herself – the protocol’s security proof still holds. The measurement device is, by design, completely untrusted.

This directly addresses the detector-blinding class of attacks, which represent the most demonstrated and dangerous category of quantum hacking. MDI‑QKD was the field’s first practical answer to the question: “What if the detector is adversarial?”

But MDI‑QKD still requires that Alice and Bob trust their source devices – the lasers, modulators, and encoding apparatus. Source characterisation is difficult and source-side attacks (Trojan horse, phase-remapping) are well-documented. MDI‑QKD removes the measurement side-channel surface but leaves the source side-channel surface intact.

As my previous analysis of next-generation QKD protocols notes, MDI‑QKD is already being incorporated into commercial systems and has been demonstrated at metropolitan distances. It’s a pragmatic, near-term upgrade to the device-trust posture of deployed QKD. But it is not the endgame.

One-sided device-independent QKD (1sDI-QKD): an intermediate step

One-sided device-independent QKD occupies a middle ground. Here, one party’s device (say, Alice’s) is fully characterised and trusted, while the other party’s device (Bob’s) is treated as a black box. Security relies on the violation of a quantum steering inequality – a weaker form of nonlocality that sits between entanglement and full Bell nonlocality.

The key advantage of 1sDI-QKD is that the detection efficiency threshold drops dramatically compared to full DI‑QKD: as low as approximately 50% on the untrusted side, compared to the approximately 82.8% required for loophole-free Bell tests in the standard CHSH setting. This makes 1sDI-QKD experimentally more accessible, though it provides weaker guarantees – it does not protect against an adversarial source on the trusted side.

Fully device-independent QKD: the apex

DI‑QKD sits at the top of this hierarchy. Both the source and the measurement devices are treated as black boxes. No assumptions are made about the internal workings of any quantum device. The security proof depends entirely on observed input-output statistics – specifically, on the violation of a Bell inequality.


In summary, the trust hierarchy is:

  • DD-QKD – trust source, trust detectors, trust dimensional assumptions
  • MDI-QKD – trust source, but detectors can be adversarial
  • 1sDI-QKD – trust one party’s device, the other can be adversarial
  • DI-QKD – trust no quantum device at all

Each step removes a class of assumptions. Each step makes the protocol harder to implement. And each step makes the security guarantee stronger. DI‑QKD is often described as the endgame: the protocol whose security proof survives even if every quantum device was manufactured by the adversary.

Bell nonlocality as a cryptographic resource

DI‑QKD is where physics foundations stop being philosophical and start being operational security engineering.

Bell’s theorem: the 60-year-old result that powers modern cryptography

The story begins with John Stewart Bell, who showed in 1964 that any theory satisfying a particular notion of locality – roughly, the idea that distant events cannot influence each other faster than light – cannot reproduce all of quantum mechanics’ statistical predictions. This result, known as Bell’s theorem, was originally a contribution to the philosophical foundations of quantum mechanics. It took decades for the cryptographic implications to become clear.

The experimentally testable form most security professionals encounter first is the CHSH inequality – named after John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who formulated it in 1969. In a CHSH-style Bell test, two separated parties (Alice and Bob) each choose from two measurement settings and obtain binary outcomes. From the correlations between their outcomes, they compute a score:

S = E(0,0) + E(0,1) + E(1,0) − E(1,1)

where E(x,y) is the expected value of the product of their outcomes for measurement setting pair (x,y).

Bell’s theorem, as expressed through the CHSH inequality, states that for any local hidden-variable model – any theory in which measurement outcomes are predetermined by shared classical information – the CHSH score satisfies |S| ≤ 2. This is a hard bound. No amount of clever pre-arrangement, no amount of shared classical randomness, can push S above 2 if the underlying physics is local and classical.

Quantum mechanics violates this bound. Two parties sharing a maximally entangled state (such as a Bell pair of photons or trapped atoms) and performing optimal measurements can achieve S = 2√2 ≈ 2.828, the Tsirelson bound. The excess above 2 is the “Bell violation” – and it is this violation that DI‑QKD converts into a security guarantee.
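To make the numbers concrete, here is a minimal sketch (my own illustration, not code from any work cited here) that computes the CHSH score for an ideal Bell pair using the textbook-optimal measurement angles; the state, observables, and angles are standard assumptions for this calculation.

```python
import numpy as np

def observable(theta):
    """±1-valued spin observable in the x-z plane at angle theta."""
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    return np.cos(theta) * sz + np.sin(theta) * sx

# Maximally entangled state |Phi+> = (|00> + |11>) / sqrt(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def E(theta_a, theta_b):
    """Correlator E(a,b) = <A(theta_a) (x) B(theta_b)> for the shared Bell pair."""
    op = np.kron(observable(theta_a), observable(theta_b))
    return float(phi_plus @ op @ phi_plus)

# Textbook CHSH-optimal settings: Alice {0, pi/2}, Bob {pi/4, -pi/4}
a = [0.0, np.pi / 2]
b = [np.pi / 4, -np.pi / 4]

S = E(a[0], b[0]) + E(a[0], b[1]) + E(a[1], b[0]) - E(a[1], b[1])
print(f"S = {S:.4f}   (classical bound: 2, Tsirelson bound: {2 * np.sqrt(2):.4f})")
```

Noise, loss, and misalignment all pull S back down toward 2, which is exactly what the protocol’s parameter-estimation step, described below, watches for.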

From Bell violation to key security: the logical chain

The connection between Bell violation and cryptographic security is built on three linked insights, each rigorously formalised in the DI literature:

Insight 1: Bell violation certifies genuine quantum correlations. If Alice and Bob observe S significantly above 2, their outcomes could not have been produced by any classical strategy – no pre-programmed list of answers, no shared random number generator, no hidden communication (assuming the locality condition is enforced). This means the devices must be exploiting genuinely quantum resources, specifically entanglement.

Insight 2: Quantum correlations imply intrinsic randomness. Unlike classical correlations, which can always in principle be predicted by someone with sufficient information, quantum measurement outcomes on entangled systems are fundamentally unpredictable – even to someone who prepared the quantum state. The degree of unpredictability is directly tied to the degree of Bell violation.

Insight 3: Randomness bounds Eve’s information. If Alice’s measurement outcomes contain certifiable randomness, then any eavesdropper – including one who manufactured the devices – has bounded information about those outcomes. The stronger the Bell violation, the less Eve can know. This is enforced by the monogamy of entanglement: the more strongly Alice and Bob’s systems are correlated with each other, the less strongly either can be correlated with any third party (Eve).

Therefore, the observed CHSH score can be translated into a quantitative bound on Eve’s information – technically, a lower bound on the conditional von Neumann entropy of Alice’s outcomes given Eve’s quantum side information. From that entropy bound, standard classical post-processing techniques (error correction and privacy amplification) can distill a shared secret key of known length, with a rigorous guarantee on its secrecy.

Ekert’s E91: the protocol that planted the seed

Artur Ekert’s 1991 protocol, known as E91, was the first to propose using Bell-type correlations as a cryptographic mechanism. In E91, Alice and Bob share entangled photon pairs and perform measurements chosen from three possible basis settings each. Some measurement combinations are used for key generation (where their outcomes are maximally correlated), while others are used to compute a Bell parameter. A Bell violation confirms that the shared state is genuinely entangled and that no eavesdropper has disrupted the quantum correlations.

E91 was visionary, but it was not device-independent as originally formulated. The standard E91 security analysis still assumes known quantum states and measurement operators. The protocol creates the conceptual bridge between Bell tests and cryptography, but it does not complete the DI program because it relies on device characterisation for its security proof.

The DI leap happens when you drop the assumption that you know what’s inside the measurement boxes.

From E91 to DI: Acín and the 2007 framework

One influential early DI framing appears in the 2007 work by Antonio Acín and collaborators, who showed that for any observed CHSH value S > 2, one can derive a lower bound on the key rate of a QKD protocol secure against collective attacks – without any assumptions about the devices’ internal workings. This was the first rigorous connection between the degree of Bell violation and the amount of extractable secret key in a fully device-independent setting.

The same period saw the development of no-signalling-based approaches, which push DI security even further: asking how much secret key can be extracted if you assume only that faster-than-light signalling is impossible, making no reference to quantum mechanics at all. This yields weaker rates but demonstrates that the DI paradigm is robust even beyond quantum theory.

The DI-QKD protocol pipeline: how it actually works

A DI‑QKD protocol, stripped to its essentials, follows a pipeline that is conceptually simple even if the security proof behind it is not:

Step 1: Entanglement distribution. A source (which may be untrusted) distributes entangled quantum systems to Alice and Bob. These could be photon pairs, atom-photon entangled states, or any other bipartite quantum system. Crucially, the protocol does not need to know what quantum state is being distributed – only that something is being shared.

Step 2: Random measurement choice. In each round, Alice and Bob independently choose measurement settings from a small set (typically two settings each, for a CHSH-type test). The choice must be made using local randomness, independent of the devices’ history. This “free choice” or “measurement independence” assumption is essential – if Eve can predict or influence the settings, she can fake a Bell violation.

Step 3: Outcome recording. Each device produces a classical output for each measurement input. Alice and Bob record these input-output pairs across many rounds.

Step 4: Parameter estimation. On a randomly selected subset of rounds, Alice and Bob publicly compare their settings and outcomes to estimate the CHSH parameter S and the quantum bit error rate (QBER). If S is insufficiently above 2, or if the QBER is too high, the protocol aborts — the devices are either not producing good enough entanglement or something has gone wrong (or Eve is active). This is the point where the Bell test functions as a security certificate: passing it certifies that the devices’ behaviour is consistent with genuine quantum correlations of known quality.

Step 5: Key sifting. On the remaining rounds (where both parties used the “key generation” measurement settings), Alice and Bob’s outcomes form raw key material. Because entangled measurements can produce correlated (but not necessarily identical) outcomes, some disagreements exist.

Step 6: Error correction. Alice and Bob use an authenticated classical channel to reconcile their raw keys, correcting the disagreements. This leaks some information to Eve (bounded by the QBER), which must be accounted for in the final key rate calculation.

Step 7: Privacy amplification. Using universal hash functions, Alice and Bob compress their reconciled key to a shorter final key, removing any information that Eve might have gained. The amount of compression is determined by the entropy bound derived from the Bell violation in Step 4.
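As an illustration of what Step 7 looks like mechanically, here is a minimal sketch of privacy amplification with a random Toeplitz matrix, one standard two-universal hash family. The key length, leakage figure, and security margin below are assumptions chosen for the example, not values from any protocol in this article.

```python
import numpy as np

rng = np.random.default_rng(7)

def toeplitz_hash(raw: np.ndarray, m: int, seed: np.ndarray) -> np.ndarray:
    """Compress an n-bit raw key to m bits with a Toeplitz matrix defined by the seed.

    The seed has m + n - 1 bits; entry T[i, j] = seed[i - j + n - 1], so the matrix is
    constant along each diagonal (Toeplitz), and T @ raw is taken over GF(2).
    """
    n = raw.size
    assert seed.size == m + n - 1
    i = np.arange(m)[:, None]
    j = np.arange(n)[None, :]
    T = seed[i - j + n - 1]
    return (T @ raw) % 2

raw_key = rng.integers(0, 2, 4096)      # reconciled key shared by Alice and Bob
leakage = 1200                          # assumed bits of Eve's information to remove
margin = 100                            # extra shortening for the security parameter
m = raw_key.size - leakage - margin

seed = rng.integers(0, 2, m + raw_key.size - 1)   # public random seed, sent over the authenticated channel
final_key = toeplitz_hash(raw_key, m, seed)
print(f"{raw_key.size} raw bits -> {final_key.size} final key bits")
```

In a real run the amount of compression comes from the entropy bound certified in Step 4, not from a hand-picked leakage figure.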

The output is a shared secret key of calculable length, with a rigorous upper bound on any adversary’s information about that key – derived entirely from the observed statistics, with no reference to the internal workings of any quantum device.

That pipeline is why DI‑QKD is often described accurately as turning a Bell experiment into a cryptographic protocol.
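To show how Steps 2 to 5 fit together, here is a self-contained toy simulation (mine, not the experiments’ code): it models honest but noisy devices with a visibility parameter, estimates S and the QBER from the recorded rounds, and applies an abort rule. The visibility value and the abort thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
v = 0.95                          # assumed visibility of the shared entangled state
n = 200_000                       # number of protocol rounds

# Alice: two CHSH settings. Bob: two CHSH settings plus a key setting aligned with Alice's first.
a_angles = np.array([0.0, np.pi / 2])
b_angles = np.array([np.pi / 4, -np.pi / 4, 0.0])

x = rng.integers(0, 2, n)         # Step 2: local random setting choices
y = rng.integers(0, 3, n)         # y = 2 is Bob's key-generation setting

# Step 3: simulate outcomes whose correlator is E(x,y) = v * cos(theta_a - theta_b):
# the outcomes agree with probability (1 + E)/2 and each marginal stays uniform.
E_xy = v * np.cos(a_angles[x] - b_angles[y])
alice = rng.choice([-1, 1], n)
bob = np.where(rng.random(n) < (1 + E_xy) / 2, alice, -alice)

# Step 4: parameter estimation, CHSH on the test settings and QBER on the key setting.
def E_hat(xx, yy):
    sel = (x == xx) & (y == yy)
    return float(np.mean(alice[sel] * bob[sel]))

S_hat = E_hat(0, 0) + E_hat(0, 1) + E_hat(1, 0) - E_hat(1, 1)
key_rounds = (x == 0) & (y == 2)
qber = float(np.mean(alice[key_rounds] != bob[key_rounds]))

S_MIN, QBER_MAX = 2.4, 0.05       # illustrative abort thresholds
if S_hat <= S_MIN or qber >= QBER_MAX:
    print(f"ABORT: S = {S_hat:.3f}, QBER = {qber:.2%}")
else:
    raw_key = (alice[key_rounds] > 0).astype(np.uint8)   # Step 5: sifted raw key
    print(f"S = {S_hat:.3f}, QBER = {qber:.2%}, sifted raw key bits: {raw_key.size}")
```

With the ideal angles, this toy produces S close to 2√2·v and a QBER close to (1 − v)/2, the same kind of trade-off between violation strength and error rate that the real experiments report.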

What DI-QKD actually assumes

DI‑QKD is sometimes marketed (especially outside the research community) as “security without assumptions.” That’s not correct. It is better described as security with a very different set of assumptions – and with a much smaller “device modelling” surface area.

Here are the assumptions that DI‑QKD explicitly keeps, because without them the problem becomes ill-posed for any protocol:

1. Quantum mechanics is correct

DI‑QKD assumes Born’s rule – that quantum measurement statistics follow the predictions of quantum mechanics. This is the deepest assumption, and the one that connects Bell violations to entropy bounds. If quantum mechanics were wrong in a way that allowed Bell violations without genuine entanglement, DI‑QKD’s security guarantees would not hold. Given that quantum mechanics has survived over a century of increasingly precise experimental tests, this assumption is considered extremely safe – but it is an assumption nonetheless.

2. Authenticated classical communication is still required

QKD – device-independent or not – does not magically authenticate identities. If you don’t authenticate the classical channel, a man-in-the-middle can run two separate QKD sessions and relay messages. Even the most QKD-sceptical government guidance repeats this point: QKD generates keying material but does not itself authenticate the source.

This is architectural. In practice, information-theoretically secure authentication can be built from universal hashing (Wegman-Carter style), but that still requires an initial shared secret to bootstrap trust. For practical deployments, this means either pre-shared symmetric keys or, more realistically, post-quantum public-key authentication mechanisms (PQC). The NSA explicitly calls out authentication as one of QKD’s fundamental limitations, and it applies equally to DI‑QKD.
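For readers who want to see what “Wegman-Carter style” means mechanically, here is a toy sketch of the idea: hash the message with a strongly universal family, then mask the tag with one-time pre-shared key bits. The prime, the key handling, and the message sizes are illustrative assumptions, not a production construction.

```python
import secrets

P = (1 << 127) - 1                        # a Mersenne prime; short messages are hashed mod P

def wc_tag(message: bytes, a: int, b: int, otp: int) -> int:
    """Tag = ((a*m + b) mod P) XOR otp, with a, b, otp taken from pre-shared key bits."""
    m = int.from_bytes(message, "big") % P
    return ((a * m + b) % P) ^ otp

# Pre-shared secret material (bootstrapped out of band, or carried over from a previous session).
a, b, otp = (secrets.randbelow(P) for _ in range(3))

msg = b"Bob's basis choices and CHSH estimate"
tag = wc_tag(msg, a, b, otp)

print("verify genuine message:", wc_tag(msg, a, b, otp) == tag)               # True
print("verify forged message: ", wc_tag(b"forged message", a, b, otp) == tag) # False (w.h.p.)
```

The point of the sketch is the dependency, not the construction: every tag consumes pre-shared secret bits, which is exactly why authentication has to be bootstrapped by something other than the QKD session it protects.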

3. Secure labs and bounded leakage are non-negotiable

DI‑QKD treats devices as black boxes mathematically, but it still assumes the boxes cannot freely exfiltrate secrets through “unauthorised classical information” – covert channels, side-channel emissions, or supply-chain-implanted communication paths.

The 2022 Nature DI‑QKD experiments state this bluntly as an operational requirement: the users must control when devices communicate, and devices must not send unauthorised classical information to an eavesdropper.

This is where DI‑QKD feels very familiar to cybersecurity professionals: you can have perfect cryptography and still lose to covert channels and supply-chain implant behaviour if the physical environment doesn’t constrain them. DI‑QKD’s answer to “what if the device phones home?” is the same as any SCIF’s answer: physical security and electromagnetic containment.

4. Measurement independence (free choice)

Alice and Bob must be able to generate measurement settings that are genuinely random and uncorrelated with the quantum state produced by the source or any information available to Eve. This “free choice” assumption is sometimes called “measurement independence,” and it is essential: if Eve can predict or correlate with Alice and Bob’s setting choices, she can orchestrate a strategy that fakes a Bell violation using purely classical means.

In practice, this means using a trusted random number generator for basis selection – ideally one based on a different physical process than the quantum devices being tested. Some protocols relax this to “partial” measurement independence, tolerating a bounded degree of correlation between settings and hidden variables, but full DI‑QKD in its standard form requires full independence.

5. Loophole-free Bell violation is not optional

DI‑QKD security relies on the claim “these statistics certify nonlocal behaviour.” But Bell tests historically had “loopholes” – ways a classical strategy can fake a violation if the experiment’s implementation is imperfect.

Two famous loopholes dominate the discussion:

The detection loophole: If too many events are lost or selectively discarded, a local hidden-variable model can mimic a Bell violation. The “fair sampling” assumption – that detected events are representative of all emitted events – fails when detection efficiency is low. For the standard CHSH inequality, the critical detection efficiency threshold is approximately 82.8%: below this, a local model can produce the observed statistics even if the true state is classical. With background noise, the threshold can shift, but the fundamental point remains – DI‑QKD demands high detection efficiency.

The locality loophole: If Alice’s and Bob’s measurement devices can communicate during a measurement round (or if the settings aren’t chosen quickly and independently enough), classical coordination can fake Bell-violating statistics. Closing this loophole requires space-like separation of the measurement events – meaning the time between setting choice and outcome registration must be shorter than the light travel time between the two labs.

The engineering takeaway: DI‑QKD is unusually sensitive to loss and device imperfections because you are not just trying to send qubits – you are trying to certify nonlocal correlations under adversarial interpretation of every imperfection.

6. Classical post-processing is faithful

The error correction, privacy amplification, and key distillation steps that follow the quantum phase must be correctly implemented. A bug in the privacy amplification software, or a compromised classical computer, would compromise the final key regardless of how perfect the quantum protocol was. DI‑QKD inherits this assumption from all key distribution protocols, classical or quantum.

What DI-QKD does NOT assume

Given that list, here is what DI‑QKD explicitly does not assume:

  • The source prepares any particular quantum state
  • The detectors perform any particular measurement
  • The devices operate in any particular Hilbert space dimension
  • The devices behave identically from round to round
  • The devices are manufactured by a trustworthy party
  • The devices’ internal components are unmodified or uninspected

The devices can literally be designed and built by the adversary. As long as the observed statistics pass the Bell test with sufficient margin, the security proof holds. This is a qualitative leap beyond every other QKD paradigm.

The theoretical machinery: from Bell scores to security proofs

The challenge of proving security against general attacks

Converting observed Bell violations into a rigorous security guarantee turns out to be extremely hard. The difficulty comes from the generality of the adversary model.

In device-independent security, Eve is allowed to perform coherent (general) attacks: she can prepare an arbitrary joint quantum state across all rounds of the protocol, entangle it with her own quantum memory, and defer her measurement until after Alice and Bob have completed all classical post-processing. She can correlate her strategy across rounds. She can adapt to any information leaked during error correction. The devices themselves might have hidden quantum memories that build up correlations over time.

Proving security against this adversary – without making any assumptions about the devices — requires showing that the entropy of Alice’s raw key, conditioned on everything Eve could know, is high enough to extract a key after error correction and privacy amplification.

Early proofs: collective attacks and i.i.d. assumptions

The earliest DI security proofs simplified the problem by restricting to “collective attacks,” where Eve is assumed to attack each round independently with the same strategy, and her information is bounded on a per-round basis. The 2007 Acín et al. framework proved security against collective attacks, showing that for a CHSH value S, the per-round conditional entropy satisfies a bound that is positive (allowing key extraction) whenever S > 2. The key rate was given by:

r ≥ 1 − h(Q) − χ(A:E)

where h(Q) is the binary entropy of the QBER, and χ(A:E) is Eve’s Holevo information, bounded as a function of S.
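As a rough numerical illustration, here is a small sketch of that bound. The explicit expression for Eve’s Holevo quantity, χ(A:E) ≤ h((1 + √((S/2)² − 1))/2), is my reading of the 2007 collective-attack analysis; the (S, QBER) pairs are loosely modelled on figures quoted elsewhere in this article, and the outputs should be read as illustrative rather than as any experiment’s exact accounting.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def di_key_rate(S, Q):
    """Asymptotic collective-attack lower bound r >= 1 - h(Q) - chi(A:E) for CHSH value S."""
    if S <= 2.0:
        return 0.0                                   # no Bell violation, no DI key
    chi = h((1 + math.sqrt((S / 2) ** 2 - 1)) / 2)   # bound on Eve's Holevo information
    return max(0.0, 1 - h(Q) - chi)

# (S, QBER) pairs loosely modelled on figures quoted in this article; the last pair
# shows that a Bell violation alone does not guarantee a positive key rate.
for S, Q in [(2.828, 0.000), (2.677, 0.0144), (2.578, 0.078), (2.300, 0.050)]:
    print(f"S = {S:.3f}, QBER = {Q:5.2%}  ->  r >= {di_key_rate(S, Q):.3f} bits/round")
```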

These early proofs were foundational, but they relied on the i.i.d. (independent and identically distributed) assumption – that each round of the protocol is statistically independent and identically distributed. This is unrealistic for untrusted devices, which might change their behaviour over time, build up internal correlations, or adapt to the protocol’s progress.

The Entropy Accumulation Theorem: the breakthrough

The breakthrough that made DI‑QKD theoretically viable against fully general attacks is the Entropy Accumulation Theorem (EAT), developed by Frédéric Dupuis, Omar Fawzi, and Renato Renner, and applied to device-independent cryptography (together with Rotem Arnon-Friedman and Thomas Vidick) in a 2018 paper in Nature Communications.

The EAT addresses a fundamental question: in a sequential protocol with n rounds, can you lower-bound the total smooth min-entropy of Alice’s outputs (conditioned on Eve’s side information) by summing up per-round entropy contributions, even if the rounds are not independent?

The answer is yes – under a specific structural condition called the “Markov chain” condition, which requires that each round’s devices produce outputs that, conditioned on the protocol’s history, are conditionally independent of future rounds’ quantum side information. This condition is automatically satisfied in protocols where the devices produce classical outputs in each round and the adversary cannot signal backwards in time.

The EAT’s power is that it converts a per-round entropy bound (which can be computed from the Bell score using the collective-attack analysis) into a global entropy bound against the most general adversary, with only a modest penalty term that scales as √n. This penalty vanishes as the number of rounds grows, making the asymptotic key rate identical to the collective-attack rate.
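The practical consequence of that penalty is easiest to see numerically. The toy below is entirely my own, with an invented penalty constant; it only shows the shape of the effect: the per-round finite-size rate approaches the asymptotic rate as roughly 1/√n.

```python
import math

r_asymptotic = 0.58     # assumed asymptotic key rate (bits per round)
c = 50.0                # assumed, protocol-dependent finite-size penalty constant

for n in (10**4, 10**6, 10**8, 10**10):
    r_finite = max(0.0, r_asymptotic - c / math.sqrt(n))
    print(f"n = {n:>14,} rounds  ->  r_finite ~ {r_finite:.3f} bits/round")
```

This is also why slow entanglement sources hurt twice: they limit the raw rate and they limit how many rounds you can accumulate, which in turn limits how close you get to the asymptotic rate.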

The 2018 EAT paper explicitly positions itself as theoretical groundwork for experimentally realistic device-independent cryptography, pointing to the then-recent progress in loophole-free Bell tests as the experimental foundation on which this theory could be built.

A generalised version of the EAT, published by Tony Metger and Renato Renner in Communications in Mathematical Physics in 2024, further extended the framework by relaxing technical conditions, improving the finite-size correction terms, and broadening applicability to a wider class of protocols including those with quantum side information.


In summary, the current state of DI‑QKD theory is this: DI‑QKD moved the hard part of QKD security from “characterise the devices” to “prove that observed correlations enforce entropy against the most general attack.” The EAT and its generalisations solved the second problem. The remaining challenge is experimental: generating the high-quality Bell violations that the theory requires.

Loophole-free Bell tests: the experimental precondition

DI‑QKD is, operationally, “loophole-free Bell tests plus post-processing.” Every improvement in Bell testing is upstream of DI‑QKD feasibility. So the experimental history of loophole-free Bell tests is directly relevant.

Closing the loopholes separately

For decades after Bell’s 1964 paper, experimental Bell tests suffered from one loophole or another. Alain Aspect’s pioneering 1982 experiments with entangled photons provided strong evidence against local hidden variables but left the locality loophole open (the setting choices were not fully random and space-like separated). Subsequent photonic experiments improved the locality closure but suffered from the detection loophole: photodetector efficiencies were well below the ~82.8% threshold, requiring the “fair sampling” assumption.

Conversely, experiments with trapped ions achieved detection efficiencies above 99% but could not close the locality loophole because the ions were in the same trap, micrometres apart.

2015: The year the loopholes closed

The key inflection point came in 2015, when three independent groups achieved loophole-free Bell violations:

The Delft experiment (Hensen et al., Nature, 2015) used nitrogen-vacancy (NV) centres in diamond, separated by 1.28 km across the TU Delft campus. Two electron spins were entangled using an “event-ready” scheme: each NV centre emits a photon entangled with its electron spin, the two photons travel to a central station where a joint measurement “heralds” successful entanglement. This heralding eliminates the detection loophole (only heralded events are used, so there is no post-selection bias), while the 1.28 km separation and fast random setting choice close the locality loophole.

The result: a CHSH value of S = 2.42 ± 0.20, violating the classical bound with p < 0.039. The rate was only about one entangled pair per hour – far too slow for cryptography – but the conceptual significance was enormous. For the first time, a Bell inequality was violated with all major loopholes simultaneously closed.

The NIST and Vienna experiments (both also 2015) achieved loophole-free violations using entangled photon pairs with high-efficiency superconducting nanowire detectors, closing the detection loophole on the photonic side while using space-like separated measurement stations to address locality. These demonstrated that photonic platforms could also achieve loophole-free violations, pointing toward telecom-compatible implementations.

Together, the three 2015 experiments established that loophole-free Bell violation is an engineering target, not a metaphysical ideal. This shifted the question from “can it be done?” to “can it be done well enough and fast enough for cryptography?”

The state of DI-QKD experiments: 2022 and beyond

Three experiments, one week, in 2022

The transition from “loophole-free Bell test” to “device-independent key distribution” happened in a single remarkable week in July 2022, when three independent groups published DI‑QKD demonstrations almost simultaneously. Each used a different experimental platform and offered distinct insights into the challenges and possibilities.

The Oxford experiment (Nadlinger et al., Nature, 2022) used two trapped strontium-88 ions in a single laboratory, connected by a 2-metre optical fibre. The ions were entangled using photonic links, and the team implemented the complete DI‑QKD protocol pipeline: Bell parameter estimation, key sifting, error correction, and privacy amplification. From approximately 1.5 million Bell test rounds collected over roughly eight hours, they extracted 95,628 bits of device-independent secure key.

The numbers tell a story of extraordinary entanglement quality: CHSH parameter S = 2.677 (very close to the Tsirelson bound of 2.828), quantum bit error rate of only 1.44%, and a protocol efficiency high enough to generate key at a meaningful rate. The limitation was distance – both ions were in the same lab, separated by only 2 metres, so the locality loophole was not closed. But as a proof that the full DI‑QKD pipeline works end-to-end – from entanglement generation through Bell testing to final key extraction with finite-size security against general attacks – it was definitive.

The Munich experiment (Zhang et al., Nature, 2022) used two trapped rubidium-87 atoms in separate buildings at Ludwig-Maximilians-Universität München, connected by 700 metres of optical fibre spanning a straight-line distance of roughly 400 metres. This was the first DI‑QKD system deployed between genuinely distant users – not in the same room, not in the same building, but across a university campus.

Both the detection and locality loopholes were simultaneously closed: the event-ready entanglement scheme handled detection, and the 400-metre separation with fast random basis choice (using quantum random number generators) handled locality. The team observed S = 2.578 ± 0.075 and a quantum bit error rate of 7.8%. The asymptotic key rate was positive – about 0.07 bits per entanglement event — demonstrating that DI key generation is feasible in a real-distance setting. However, the entanglement generation rate was only about 1 event per 80 seconds – too slow to accumulate enough statistics for finite-key security within the experiment’s 75-hour runtime. So the Munich result demonstrated positive asymptotic rates but not finite-key extraction.

The USTC photonic experiment (Liu et al., Physical Review Letters, 2022) took a fundamentally different approach, using polarisation-entangled photon pairs at telecom wavelength (1560 nm) measured by superconducting nanowire single-photon detectors with heralded detection efficiency of approximately 87.5%. This was significant because it demonstrated a purely photonic path to DI‑QKD – potentially compatible with existing telecom fibre infrastructure. The team showed that positive key rates were feasible for fibre lengths up to 220 metres. However, the experiment lacked fully random basis switching (a requirement for complete DI‑QKD), making it a proof-of-principle rather than a full protocol demonstration.

What the 2022 experiments revealed collectively

Together, these three results mapped the landscape of DI‑QKD:

Trapped ions/atoms are currently the leading platform for high-fidelity entanglement – the Bell violations they produce are close to the Tsirelson bound, and their detection efficiency is near-unity. The event-ready heralding scheme naturally closes the detection loophole. The limitation is rate: generating entanglement over distance via photonic links is slow, because it requires photon emission, transmission, and coincidence detection.

Photonic systems offer compatibility with existing telecom infrastructure and potentially much higher rates. But achieving the detection efficiency required for loophole-free Bell violation remains challenging (superconducting nanowire detectors need cryogenic cooling, and overall system efficiency including coupling losses must exceed the ~82.8% CHSH threshold).

The rate-distance-security triangle became clear: you can currently have high security (strong Bell violation), or long distance, or high rate – but not all three simultaneously. DI‑QKD’s experimental challenge is optimising this triangle.

Why distance remains hard: the physics of loss

DI‑QKD is unusually unforgiving about loss. In conventional QKD, channel loss primarily reduces throughput – you get fewer key bits per second, but the ones you get are still secure (the security proof handles loss by accounting for it in the error rate analysis). In DI‑QKD, loss doesn’t just lower throughput; it threatens the ability to certify nonlocality at all.

Here’s why. When photons are lost in the channel, the corresponding Bell test rounds produce no outcome. If you simply discard those rounds – reasonable in conventional QKD – you open the detection loophole: a local hidden-variable model can exploit selective detection to fake Bell-violating statistics. The detection loophole threshold of approximately 82.8% for CHSH means that the overall system efficiency (source, channel, coupling, detection) must exceed this value. At typical telecom fibre attenuation of ~0.2 dB/km, you lose half your photons in about 15 km. That’s why the 2022 DI‑QKD demonstrations were limited to laboratory or campus scales.
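A quick back-of-the-envelope calculation (my own numbers, ignoring source, coupling, and detector inefficiencies, all of which only make things worse) shows how quickly fibre loss eats into the CHSH detection budget.

```python
import math

ALPHA_DB_PER_KM = 0.2                   # assumed standard telecom fibre attenuation
ETA_CRIT = 2 / (1 + math.sqrt(2))       # ~0.828, CHSH detection-loophole threshold

def transmission(km):
    """Fraction of photons surviving km of fibre."""
    return 10 ** (-ALPHA_DB_PER_KM * km / 10)

for km in (1, 2, 5, 15, 50):
    eta = transmission(km)
    status = "above" if eta > ETA_CRIT else "BELOW"
    print(f"{km:>3} km: transmission = {eta:6.1%}  ({status} the ~82.8% threshold)")

d_max = -10 * math.log10(ETA_CRIT) / ALPHA_DB_PER_KM
print(f"The channel alone crosses the threshold at roughly {d_max:.1f} km.")
```

The channel on its own falls below the threshold after only a few kilometres, which is why the event-ready heralding discussed next is so central to every long-distance DI proposal.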

Event-ready (heralded) entanglement schemes partially address this by conditioning on successful heralding events – only rounds where entanglement was confirmed are used for the Bell test, eliminating the post-selection bias that creates the detection loophole. But heralding itself depends on photon transmission, so the heralding rate drops exponentially with distance, throttling the overall key generation rate.

The npj Quantum Information review of DI‑QKD advances stresses that DI‑QKD “relies on the loophole-free violation of a Bell inequality,” requiring high-quality entanglement distribution and near-perfect quantum measurements – conditions that remain demanding with current technology even as proof-of-principle demonstrations accumulate.

Several architectural strategies are being pursued to address the loss problem:

Single-photon interference for entanglement heralding. In a two-photon heralding scheme, the success probability scales as η² (where η is the photon transmission through the channel). In a single-photon scheme, it scales as η – a linear rather than quadratic dependence that fundamentally changes the rate-distance relationship. This approach is being actively explored by leading groups; the sketch after this list illustrates how large the difference becomes over realistic distances.

Quantum frequency conversion. Atomic systems (rubidium, strontium, ytterbium) emit photons at wavelengths poorly matched to telecom fibre – typically 780 nm or 493 nm, where fibre losses are 3-5 dB/km. Quantum frequency converters can shift these photons to the telecom C-band near 1550 nm (~0.2 dB/km), dramatically extending the range.

Quantum repeaters. The ultimate solution to the distance problem. By dividing a long link into shorter segments with quantum memories and entanglement swapping, repeaters could maintain the high-fidelity entanglement that DI‑QKD demands while dramatically reducing the per-segment loss. In 2024, three groups – at Harvard (silicon-vacancy centres in diamond, 35 km deployed fibre), USTC (atomic quantum memories in a metropolitan network), and TU Delft (NV centres, approximately 25 km deployed fibre) – demonstrated memory-based entanglement distribution at metropolitan scales, establishing the building blocks for repeater chains. But practical multi-hop quantum repeaters remain years away from deployment.

Improved Bell inequalities. Theoretical work has produced families of generalised CHSH inequalities and randomness-generating Bell inequalities that yield higher key rates for the same level of noise, or tolerate more noise for positive key generation. This improves the parameter space in which DI‑QKD is feasible.
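To quantify the single-photon versus two-photon heralding difference mentioned above, here is a tiny illustration assuming 0.2 dB/km fibre loss and no other inefficiencies; the distances and the loss model are assumptions for the sake of the comparison.

```python
import math

ALPHA = 0.2                      # dB/km, assumed fibre attenuation

def eta(km):
    """Channel transmission over km of fibre."""
    return 10 ** (-ALPHA * km / 10)

print(f"{'km':>5} {'eta (single-photon)':>20} {'eta^2 (two-photon)':>20} {'advantage':>10}")
for km in (10, 50, 100, 200):
    e = eta(km)
    print(f"{km:>5} {e:>20.2e} {e**2:>20.2e} {e / e**2:>9.0f}x")
```

At 100 km the linear scheme succeeds roughly a hundred times more often per attempt, and the advantage keeps growing exponentially with distance.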

Geopolitics, deployment reality, and where DI-QKD fits

DI‑QKD sits at the intersection of three strategic narratives: the long-term future of quantum networks (“quantum internet” thinking), the near-term reality of standards-driven post-quantum cryptography (PQC), and deep national differences in appetite for deploying specialised quantum infrastructure versus migrating software cryptography.

China’s QKD-forward posture and the infrastructure mindset

As my analysis of why countries differ on QKD documents in detail, China has been unusually aggressive about treating QKD as strategic infrastructure.

The milestones tell a consistent story of national-scale commitment. The 2016 launch of the Micius satellite (see my analysis of record-breaking quantum transmissions) demonstrated satellite-to-ground QKD at intercontinental distances. The Beijing-Shanghai fibre backbone spans 2,032 km with 32 trusted relay nodes. A 2021 Nature paper describes an integrated space-to-ground quantum communication network combining over 700 fibre QKD links, two satellite-to-ground links, and a trusted-relay ground network covering more than 2,000 km – extending end-to-end connectivity to distances up to 4,600 km.

By early 2025, the China Quantum Communication Network encompassed over 12,000 km of fibre and covered more than 80 cities across 17 provinces, with an npj Quantum Information paper describing its transition from technology verification to carrier-grade operation. The Jinan-1 microsatellite, launched in 2022, achieved real-time QKD with compact mobile ground stations and demonstrated quantum-secured communication over 12,900 km between China and South Africa – a result I covered in my reporting on microsatellite QKD records.

On distance records, China’s push for fibre-based QKD has been relentless. Twin-field QKD – a protocol that breaks the fundamental rate-distance limit of standard QKD without quantum repeaters – has been demonstrated over 1,000+ km of standard fibre by USTC teams. I tracked these advances in my coverage of the world-record QKD fibre distances.

But there’s a pragmatic truth that tends to get lost in popular “unhackable encryption” narratives: that same 2021 Nature paper emphasises that quantum repeaters could enable global end-to-end quantum links in principle, but were not yet deployable – hence the reliance on trusted relays in large-scale networks today. Every relay node is a point of vulnerability, a “trust anchor” in a system that claims not to require trust.

If you look at DI‑QKD through this lens, it becomes less of a niche “ultimate protocol” and more like a missing capability needed to make quantum networks less dependent on trusted intermediate nodes. DI‑QKD doesn’t eliminate the need for repeaters, but it aligns with the end-to-end trust ideal that repeaters are supposed to enable. A mature quantum network with DI‑QKD-compatible repeaters would not require trust in any intermediate node or any device along the path – only in the physics itself and the security of the endpoints.

The West’s split personality: build some QKD, but prioritise PQC

The “West” is not monolithic, but several influential security agencies have published unusually direct scepticism about QKD as a broad mitigation strategy – often for reasons that have nothing to do with quantum mechanics and everything to do with deployment constraints, authentication, and operational risk.

The NSA states that it “does not recommend the usage of QKD and quantum cryptography for protecting National Security Systems unless and until these limitations are overcome,” explicitly calling out (among others) the lack of built-in source authentication, specialised infrastructure requirements, trusted relay risks, implementation-dependence, and denial-of-service sensitivity.

ANSSI (France’s cybersecurity agency) argues in its technical position paper that QKD may have niche defence-in-depth uses, but that state-of-the-art classical cryptography, including PQC, is “by far the preferred way” for long-term protection in modern communication systems, emphasising deployment constraints and practical security.

The NCSC (UK) makes a similarly strong argument in its quantum networking technologies white paper: QKD does not provide authentication, PQC is recommended as the primary mitigation, and QKD should not be relied on as a substantial security mechanism in isolation. It goes further by stating it will not support QKD for government or military applications.

Germany’s BSI and the Netherlands’ NLNCSA co-signed a joint position paper in early 2024 with ANSSI, declaring QKD “not yet sufficiently mature from a security perspective” for most applications.

These are not anti-quantum opinions. They are security engineering opinions: complexity, specialised hardware, operational validation difficulty, and trust-infrastructure issues matter.

At the same time, parts of the West are investing in quantum communications infrastructure – just with a different framing. The European Commission’s EuroQCI initiative describes an EU-wide quantum communication infrastructure with terrestrial fibre and a satellite segment, aiming for operational QKD services as part of a broader security strategy. The Eagle-1 satellite – ESA’s first in-orbit QKD demonstrator – is scheduled for launch in late 2025. The EU’s Quantum Europe Strategy, adopted in July 2025, commits to making Europe “a quantum industrial powerhouse” by 2030.

In the United States, a National Quantum Initiative Advisory Committee report frames QKD as one of several quantum networking application areas discussed by experts, while noting that NSA does not approve QKD for national security systems – essentially: “research continues, but wide deployment is not endorsed as a security baseline.”

And the biggest “West is different” anchor remains software migration: NIST positions PQC as the main scalable mitigation against future quantum computers, having released core PQC standards (ML-KEM, ML-DSA, SLH-DSA) in August 2024 with a deprecation timeline for quantum-vulnerable algorithms through 2035.

Where DI-QKD fits in that strategic puzzle

DI‑QKD complicates the simplistic “QKD versus PQC” discourse in a genuinely interesting way.

On one hand, DI‑QKD directly targets a core critique of practical QKD: implementation security. If you can genuinely run DI‑QKD at useful rates and distances, you have a physics-backed method to reduce dependence on detailed device characterisations – precisely the fragility that practical attacks exploit and that Western agencies cite as a fundamental limitation. The NSA’s argument that “the actual security provided by a QKD system is not the theoretical unconditional security from the laws of physics but rather the more limited security that can be achieved by hardware and engineering designs” becomes substantially less applicable when the security proof is explicitly independent of hardware design.

On the other hand, DI‑QKD does not make the big operational objections disappear:

  • You still need authentication (and therefore PQC, symmetric pre-shared keys, or both)
  • You still need secure endpoints and control of leakage
  • You still face denial-of-service sensitivity (Eve can always cut the fibre)
  • You still need to justify specialised infrastructure costs versus software-based upgrades
  • The key rates are currently orders of magnitude too low for practical use

So the realistic assessment is something like this: DI‑QKD is not a replacement for PQC. It’s closer to a future high-assurance ingredient for quantum networking architectures – especially those that want to push beyond trusted-node networks – while PQC remains the near-term, broad-spectrum cryptographic migration path because it’s deployable on existing platforms and integrates into existing protocols.

The most productive framing is not “QKD versus PQC” but defence in depth. PQC provides the practical, scalable, software-deployable baseline – essential for protecting the vast majority of communications. Conventional QKD, already maturing through MDI and twin-field variants, adds a physics-based security layer for the most sensitive links. And DI‑QKD represents the long-term aspiration: a protocol whose security does not depend on trusting anything except the laws of physics and the walls of your laboratory.

Hybrid architectures – where PQC handles authentication and QKD handles key distribution, each compensating for the other’s weaknesses – are already emerging commercially. Toshiba announced a QKD system integrating PQC authentication (ML-KEM) in early 2025. DI‑QKD would eventually make even the QKD component’s hardware characterisation irrelevant to the security argument.

The stepping-stone protocols: trust reduction in practice

The gap between today’s commercial QKD and full DI‑QKD has motivated a family of intermediate protocols that reduce trust assumptions incrementally while remaining experimentally feasible. These are not just academic curiosities – they represent the practical migration path that deployed QKD systems will follow as the underlying technology matures.

MDI-QKD is the most mature stepping stone. It eliminates all detector side-channel attacks while retaining source characterisation requirements. It is commercially deployed and has been demonstrated at metropolitan distances. Twin-field QKD, a variant of MDI, breaks the fundamental rate-distance limit without quantum repeaters and has achieved distances exceeding 1,000 km.

One-sided device-independent QKD (1sDI-QKD) requires only one party’s device to be untrusted, based on quantum steering inequalities rather than full Bell nonlocality. The detection efficiency threshold drops to as low as approximately 50%, compared to approximately 82.8% for full DI‑QKD. This dramatically expands the set of experimental platforms that can realise the protocol.

Semi-device-independent QKD makes minimal assumptions — such as bounding the dimension of the communicated quantum systems — without requiring full device characterisation. This approach trades some security strength for significantly relaxed experimental requirements.

Each protocol sits at a different point on the trust-reduction hierarchy:

DD-QKD → MDI-QKD → 1sDI-QKD → Full DI-QKD

And each step removes a class of assumptions at the cost of more demanding experiments. For organisations deploying QKD today, the trajectory suggests that security guarantees will strengthen progressively as the underlying technology matures, with each generation of hardware supporting protocols that assume less about the devices.

Looking forward: when does DI-QKD become practical?

The honest answer is: not yet, and not soon – but probably sooner than most people think.

The key rate challenge remains formidable. The 2022 Munich experiment produced approximately one entangled pair per 80 seconds. Conventional QKD systems now operate at megabit-per-second key rates over metropolitan distances. DI‑QKD’s rates are currently six to nine orders of magnitude lower than practical alternatives.
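The arithmetic behind that gap is straightforward; the sketch below uses the Munich figures quoted above and an assumed 1 Mbit/s figure for a good conventional metropolitan system.

```python
import math

di_rate = 0.07 / 80      # Munich 2022: ~0.07 bits per event, ~1 event per 80 s
conventional = 1e6       # assumed ~1 Mbit/s for a conventional metropolitan QKD link

print(f"DI-QKD (2022):     ~{di_rate:.1e} bit/s")
print(f"Conventional QKD:  ~{conventional:.1e} bit/s")
print(f"Gap:               ~10^{math.log10(conventional / di_rate):.0f}")
```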

But several convergent technology trends could dramatically change this picture over the next 5-10 years:

Multiplexed atom-photon interfaces. Parallelised atom arrays, demonstrated with ytterbium-171 in recent Nature Physics results, could generate multiple entangled pairs simultaneously, multiplying the rate by the number of atoms.

Improved photon collection efficiency. Cavity-enhanced atom-photon coupling could boost the raw photon emission rate and collection efficiency, directly improving the entanglement generation rate.

Better quantum frequency conversion. More efficient and lower-noise frequency converters would improve the signal-to-noise ratio of entanglement heralding over long fibre links.

Tighter finite-key bounds. Improved theoretical analyses, including the generalised EAT and new randomness certification techniques, reduce the number of protocol rounds required for finite-key security, lowering the barrier to practical key extraction.

Quantum repeaters. Memory-based entanglement distribution demonstrated in 2024 at metropolitan scales (35 km at Harvard, 25 km at Delft) establishes the building blocks. Multi-segment repeater chains with entanglement swapping would extend DI‑QKD to arbitrary distances while maintaining high entanglement fidelity.

Hollow-core fibres. Recent demonstrations of fibre attenuation below 0.1 dB/km (versus ~0.2 dB/km for standard telecom fibre) would effectively double the reach of each link – a significant advantage for a protocol where every fraction of a dB matters.

The most plausible timeline for the first operationally useful DI‑QKD deployment – meaning metropolitan-scale, with positive finite-key rates sufficient for practical key generation – extends into the late 2020s or early 2030s, contingent on quantum repeater maturation and rate improvements of several orders of magnitude. Wider deployment, competitive with conventional QKD, likely extends into the mid-2030s.

What the road to device independence reveals about trust

DI‑QKD is not yet a deployable technology. It is a theorem, backed by extraordinary experimental demonstrations, that redefines what cryptographic trust can mean.

Every security system embeds assumptions. Classical cryptography assumes computational hardness. Post-quantum cryptography assumes specific mathematical problems remain intractable against quantum algorithms. Conventional QKD assumes its hardware matches its mathematical model. DI‑QKD pushes the assumption boundary to its logical minimum: physics is real, your lab has walls, and you can flip a fair coin.

That minimal trust foundation comes at an enormous practical cost today. But the experimental trajectory from Delft’s one-pair-per-hour Bell test in 2015, through the three 2022 demonstrations that proved the full protocol works, to the ongoing efforts at extending range and improving rates, represents one of the fastest capability expansions in experimental quantum physics.

For cybersecurity professionals tracking the quantum threat landscape, DI‑QKD is worth understanding for three reasons. First, it provides the theoretical upper bound on what quantum cryptography can achieve – and knowing the ceiling helps you evaluate every product and claim below it. Second, it addresses the specific implementation-security criticisms that Western agencies use to dismiss QKD — and those criticisms are well-founded for device-dependent protocols. Third, as quantum networks mature and the “trusted relay” problem demands solutions, DI‑QKD’s device-agnostic security model may be the only one that scales trustworthily to global infrastructure.

Quantum Upside & Quantum Risk - Handled

My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto‑inventory, crypto‑agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof‑of‑value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.


Marin

I am the Founder of Applied Quantum (AppliedQuantum.com), a research-driven consulting firm empowering organizations to seize quantum opportunities and proactively defend against quantum threats. A former quantum entrepreneur, I’ve previously served as a Fortune Global 500 CISO, CTO, Big 4 partner, and leader at Accenture and IBM. Throughout my career, I’ve specialized in managing emerging tech risks, building and leading innovation labs focused on quantum security, AI security, and cyber-kinetic risks for global corporations, governments, and defense agencies. I regularly share insights on quantum technologies and emerging-tech cybersecurity at PostQuantum.com.