Entanglement-Based QKD Protocols: E91 and BBM92

Introduction to Entanglement-Based QKD
Quantum Key Distribution (QKD) is a method for two distant parties (traditionally Alice and Bob) to generate a shared secret key by exchanging quantum signals over an insecure channel. Its security is guaranteed by fundamental quantum mechanics: any eavesdropper (Eve) attempting to intercept or measure the quantum states will disturb them in a detectable way. In a typical QKD scheme like BB84 (Bennett-Brassard 1984), Alice prepares single photons in one of several possible polarization states (encoding 0/1 bits) and sends them to Bob, who measures each in a randomly chosen basis. After many photon transmissions, Alice and Bob publicly compare which bases they used and keep only those events where their bases matched, yielding correlated binary outcomes. Any attempt by Eve to glean information (by measuring photons in transit) introduces errors, alerting Alice and Bob to her presence. This prepare-and-measure approach (exemplified by BB84 and its variant B92) relies on encoding key bits into prepared quantum states and detecting disturbances via error rates.
Entanglement-based QKD offers an alternative paradigm that exploits quantum entanglement as a resource for secure key generation. Instead of one party sending prepared states to the other, a source produces entangled photon pairs and distributes them such that Alice and Bob each receive one particle of each pair. Because the entangled photons have correlated (indeed quantum-correlated) properties, measurements performed by Alice and Bob are strongly linked. Notably, the outcomes are intrinsically random for each party, yet perfectly correlated (or anti-correlated) with each other – a phenomenon with no classical analog. If an eavesdropper intercepts or measures one of the entangled photons, it breaks the entanglement, scrambling the correlations in a way that can be detected. This means entanglement-based schemes have an in-built alarm: a loss of the expected quantum correlations signals the presence of Eve.
There are important differences between prepare-and-measure QKD and entanglement-based QKD. In prepare-and-measure protocols like BB84, security often requires trust in the source (e.g. that Alice’s source emits true single photons, or else decoy-state methods are needed to thwart photon-number-splitting attacks). In entanglement-based protocols, the source of photon pairs can even be untrusted – it could be a third party or even under Eve’s control – yet Alice and Bob can still establish security by testing the entanglement. As long as the observed correlations violate a Bell inequality (indicating genuine quantum entanglement), the source cannot have predetermined the outcomes or leaked information to Eve. This provides stronger security guarantees: a malicious source or man-in-the-middle cannot fool Alice and Bob with fake (classically correlated) photon pairs, since only true entanglement will produce the correct high correlations needed to generate a key. In fact, the Ekert 1991 (E91) protocol was the first to propose using entangled pairs and Bell-inequality checks for QKD, showing that security can rest on an observed Bell-inequality violation rather than on trust in the source or the measurement devices – the idea that later grew into device independence. While early implementations of entanglement-based QKD were less practical than BB84, they laid the groundwork for advanced security models like device-independent QKD (DI-QKD), wherein the devices need not be trusted as long as a Bell test is passed. Entanglement-based schemes thus combine the usual QKD security (detection of eavesdropping via error rates) with the potential for stronger, physics-backed assurances (no need to trust the source or measurement devices, thanks to entanglement verification).
In summary, entanglement-based QKD uses pairs of quantum-entangled particles shared between Alice and Bob to generate keys. It differs from prepare-and-measure QKD by not requiring one party to send explicit state preparations; instead, the key bits arise from correlated measurements of a shared entangled state. This approach inherently flags eavesdropping because any adversarial interaction collapses the entanglement and degrades the correlations. Although currently entanglement-based QKD systems are less mature and tend to have lower key rates than prepare-and-measure systems, they offer unique security advantages and are essential for advanced quantum networks and device-independent cryptography.
Mathematical Foundations
Quantum Entanglement and its Role in QKD
Quantum entanglement is a phenomenon where two or more particles share a joint quantum state such that their properties are strongly correlated beyond what is possible classically. A canonical example is a pair of photons in the singlet polarization state:
$|\Psi^{-}\rangle = \frac{1}{\sqrt{2}}\Big(|H\rangle_A |V\rangle_B \;-\; |V\rangle_A |H\rangle_B\Big)$
In this entangled state, if Alice measures her photon’s polarization and finds it horizontal, Bob’s photon will be vertical, and vice versa – they always get opposite results in the same basis, despite each outcome being random. The state cannot be factored into independent states for Alice and Bob; it embodies perfect anti-correlation with maximal randomness. This property is central to QKD: Alice and Bob can obtain correlated random bits by measuring their entangled particles. If an eavesdropper interacts with one of the particles, the entanglement is disrupted, collapsing the joint state into a mixture that no longer shows these strong correlations. In other words, the presence of entanglement means any attempt at eavesdropping introduces anomalies that honest parties can detect.
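To make this concrete, the short Python sketch below (an illustrative toy model written for this article, not part of any QKD library) builds the singlet state as a vector and computes the joint outcome probabilities for polarizers at chosen angles. It confirms that equal-angle measurements are perfectly anti-correlated while each party’s individual outcome remains a 50/50 coin flip.

```python
import numpy as np

# Two-qubit singlet |Psi-> = (|HV> - |VH>)/sqrt(2); basis order |HH>,|HV>,|VH>,|VV>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def analyzer(theta):
    """Projectors for a linear polarizer at angle theta (radians):
    outcome 0 = transmitted along theta, outcome 1 = orthogonal."""
    v = np.array([np.cos(theta), np.sin(theta)])
    w = np.array([-np.sin(theta), np.cos(theta)])
    return np.outer(v, v), np.outer(w, w)

def joint_probs(alpha, beta):
    """P(outcome_A, outcome_B) when Alice measures at alpha and Bob at beta."""
    PA, PB = analyzer(alpha), analyzer(beta)
    return {(i, j): float(psi @ np.kron(PA[i], PB[j]) @ psi)
            for i in (0, 1) for j in (0, 1)}

p = joint_probs(0.0, 0.0)                            # both measure in the same basis
print("P(same outcome):", p[(0, 0)] + p[(1, 1)])     # 0.0 -> perfect anti-correlation
print("Alice P(outcome 0):", p[(0, 0)] + p[(0, 1)])  # 0.5 -> locally random
```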
Bell Inequalities and the CHSH Test
The security of entanglement-based QKD can be certified by tests of Bell’s inequality. In local realistic theories (i.e. any classical model with hidden variables), the correlations between distant particles satisfy certain bounds. The most famous is the CHSH inequality (formulated by Clauser, Horne, Shimony, and Holt) which involves two observers (Alice and Bob) each choosing one of two possible measurement settings (call them $a$ or $a'$ for Alice, and $b$ or $b'$ for Bob). One defines a correlation coefficient $E(a,b)$ = probability(Alice and Bob get same outcome) $-$ probability(they get different outcomes), considering outcomes as binary ±1 values. The CHSH combination is:
$S \;=\; E(a,b) + E(a,b') + E(a',b) - E(a',b')$
Local realism enforces the bound $|S| \le 2$. Quantum mechanics, however, allows violations: for a maximally entangled pair of qubits measured at appropriate relative angles, one can obtain $|S| = 2\sqrt{2} \approx 2.83$, which exceeds the bound of 2. This violation of the CHSH inequality is a witness of entanglement – it proves that the measurement outcomes were not predetermined by any local hidden variables and that the two particles share non-classical correlations. In an entanglement-based QKD protocol like E91, Alice and Bob will perform such a Bell test on a subset of their particle pairs. A significant violation confirms the presence of high-quality entanglement and thus indicates the absence of eavesdropping or any malicious tampering with the source. Conversely, if Eve has partially intercepted or replaced the pairs with classical correlations, the Bell inequality will not be violated, alerting the users that the channel is insecure. The CHSH test is therefore a powerful security check: it quantitatively bounds the information an eavesdropper could have gained, because any significant information leakage to Eve would reduce the entanglement and push the correlations back into the classical regime (no Bell violation).
Quantum Correlations – Mathematical Formulation
For entangled photon pairs, the joint measurement outcomes exhibit strong correlations that can be predicted by quantum theory. For example, consider polarization measurements at angles $\alpha$ (by Alice) and $\beta$ (by Bob) relative to some reference axis. A well-known result for the singlet state $|\Psi^-\rangle$ is that the correlation function is $E(\alpha,\beta) = -\cos[2(\alpha-\beta)]$. This means if Alice and Bob set their polarizers at the same angle ($\alpha=\beta$), $E = -\cos(0) = -1$, indicating perfect anti-correlation (always opposite outcomes, as expected for the singlet). If their polarizer angles differ, the correlation weakens according to the cosine of twice the angle difference. These sinusoidal correlations are what allow a Bell inequality violation. In the CHSH scenario, one chooses a specific set of angles that maximizes the violation: for instance, Alice might choose between $\alpha = 45^\circ$ and $\alpha' = 0^\circ$ while Bob chooses between $\beta = 22.5^\circ$ and $\beta' = 67.5^\circ$. For these settings quantum mechanics predicts $|S| = 2\sqrt{2} \approx 2.83$, violating the classical $|S| \le 2$ limit. The mathematical condition $|S|>2$ is used in E91 as a security criterion: it implies the observed correlations cannot be reproduced if Eve had intercepted or measured the particles beforehand. In practice, the amount by which the Bell inequality is violated can even be used to estimate the secure key rate – a higher violation (closer to the quantum maximum) means less information could have leaked to Eve, allowing for more secret key bits to be distilled.
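Under these assumptions (the singlet correlation function quoted above and the analyzer angles just listed, which are one illustrative choice), the CHSH value can be checked with a few lines of arithmetic; a minimal sketch:

```python
import numpy as np

def E(alpha_deg, beta_deg):
    """Singlet polarization correlation: E = -cos[2(alpha - beta)]."""
    return -np.cos(2 * np.radians(alpha_deg - beta_deg))

# Settings chosen to maximize |S| for S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
a, a_p = 45.0, 0.0       # Alice's two analyzer angles (degrees)
b, b_p = 22.5, 67.5      # Bob's two analyzer angles (degrees)

S = E(a, b) + E(a, b_p) + E(a_p, b) - E(a_p, b_p)
print(abs(S))            # ~2.828, i.e. 2*sqrt(2) > 2
```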
In summary, the core mathematical foundation of entanglement-based QKD lies in the description of entangled states (often Bell states) and the use of Bell inequality tests (like CHSH) as security monitors. The entangled state provides correlated random variables (bits) for Alice and Bob, and the Bell test provides a quantitative check that these correlations are genuinely quantum and un-compromised. This combination underpins the security of protocols like E91 and BBM92, distinguishing them from classical key exchange protocols and even from prepare-and-measure QKD by its additional layer of quantum correlation verification.
Detailed Breakdown of Entanglement-Based QKD Protocols
E91 Protocol (Ekert 1991)
The E91 protocol, proposed by Artur Ekert in 1991, is a landmark entanglement-based QKD scheme that uses violations of Bell’s inequality to ensure security. The protocol can be outlined in steps:
- Entangled Pair Generation: A source (which can be a third-party or one of the communicating users) prepares a large number of photon pairs in a maximally entangled state. In Ekert’s original description this is the spin-singlet (polarization singlet) state $|\Psi^-\rangle = \frac{1}{\sqrt{2}}(|0\rangle_A|1\rangle_B - |1\rangle_A|0\rangle_B)$. One photon from each pair is sent to Alice and the other to Bob, through presumably insecure quantum channels. If these photons remain undisturbed, Alice and Bob share an entangled state upon arrival.
- Random Measurements: Alice and Bob each have a set of measurement bases (orientations of their polarizers or Stern-Gerlach apparatus) to choose from. In E91, they use three possible bases each (for example, three analyzer angle settings). Ekert’s original proposal, phrased for spin-1/2 particles, had Alice choose among the angles $\{0^\circ, 45^\circ, 90^\circ\}$ and Bob among $\{45^\circ, 90^\circ, 135^\circ\}$; the photon-polarization analogue uses half these angles ($\{0^\circ, 22.5^\circ, 45^\circ\}$ for Alice and $\{22.5^\circ, 45^\circ, 67.5^\circ\}$ for Bob). For each entangled pair, Alice and Bob independently and randomly select one of their three bases and measure the incoming particle. Each measurement yields a binary outcome (e.g., “0” for one polarization or spin state and “1” for the orthogonal state).
- Public Basis Disclosure: After all photons have been measured, Alice and Bob communicate over a public but authenticated classical channel to compare which basis they used for each measurement (they do not reveal the measurement outcomes). They discard any pair of results where their chosen measurement settings were incompatible or not intended for key generation. In Ekert’s scheme, only a subset of the basis choices are used to generate key bits, while the other choices are reserved for testing security. For instance, if both happened to choose the basis corresponding to 45° vs 45° or 90° vs 90° (which are two instances of them measuring along the same orientation), those results might be kept for key material (because entangled singlet photons measured in the same basis give perfectly anti-correlated bits). The other cases (the majority of combinations where the bases differ by certain angles) are used to check the Bell inequality.
- Key Generation from Correlated Outcomes: From the subset of measurements where Alice and Bob used compatible bases designated for key generation, they obtain a string of highly correlated (in fact, anti-correlated) bits. For example, using the singlet state, if Alice got outcome “0” in a given basis, Bob, measuring in the identical basis, will get “1” with high probability (and vice versa). By convention, one party can flip their bit to create matching bits. These bits – after flipping Bob’s bits or mapping 0/1 appropriately – form the raw key. In the E91 scheme described in literature, two specific basis combinations (the pairs of matching orientations for Alice and Bob) yield these key bits, and roughly two-ninths of the entangled pairs result in key bits if three bases are used (since only 2 of the 9 possible basis combinations produce key, as in Ekert’s original scheme). In practice, to get a key of length $N$, they would need to send a larger number of pairs (the example in some descriptions is about using 9N/2 pairs to end up with $N$ key bits).
- Security Verification via Bell Test: This is the distinctive feature of E91. Alice and Bob take the measurement results from the pairs where they used the mismatched bases (the combinations not used for key) and use them to evaluate a Bell inequality – specifically the CHSH form of Bell’s inequality. They compute the correlation values for the appropriate combinations of measurement settings and then calculate the Bell parameter $S$ (as described in Section 2). If the inequality $|S| \le 2$ is violated (for example, they find $|S| \approx 2.5$ or greater, ideally approaching $2\sqrt{2}\approx 2.828$), this is evidence that the particles were indeed entangled and not tampered with. Ekert suggested that a sufficient violation of Bell’s inequality guarantees the secrecy of the shared correlations – because any eavesdropping would have reduced $S$ below the quantum limit. For instance, with an intact singlet state they expect $|S| = 2\sqrt{2}$, but if Eve had intercepted a photon and collapsed the pair into a separable state, $|S|$ might drop to $\sqrt{2} \approx 1.414$ (as in an example of a product state given in Ekert’s paper). In the protocol, if the measured Bell violation falls below a predetermined threshold, Alice and Bob abort the protocol, as it indicates a potential eavesdropper or device failure. Only if the test strongly indicates entanglement do they proceed.
- Post-Processing: Assuming the Bell test is passed, Alice and Bob now have a set of correlated bits (the raw key) that may still contain some errors (due to noise or detector imperfections) and potentially some information known to Eve (limited by the amount of Bell violation measured). They perform classical post-processing: first error correction to reconcile any mismatches in their raw key, and then privacy amplification to reduce Eve’s information to negligible levels. The result is a shared, identical secret key. (While Ekert’s original protocol description focused on the Bell test aspect, in practice these classical steps are standard in all QKD protocols to distill an ultra-secure final key.)
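The following Monte Carlo sketch ties these steps together for an ideal, noiseless singlet source, using the photon-polarization analogue of the three analyzer settings mentioned earlier. It is a toy model of the measurement, sifting, and CHSH-test steps only (no error correction or privacy amplification), and all parameters are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Photon-polarization analogue of the E91 analyzer settings (degrees)
ALICE_ANGLES = [0.0, 22.5, 45.0]
BOB_ANGLES = [22.5, 45.0, 67.5]
N = 200_000

def measure_singlet(alpha, beta, n):
    """Sample n +/-1 outcome pairs for the singlet: P(same) = sin^2(alpha - beta)."""
    a = rng.choice([+1, -1], size=n)
    same = rng.random(n) < np.sin(np.radians(alpha - beta)) ** 2
    return a, np.where(same, a, -a)

# 1. Random, independent basis choices for each pair
ai = rng.integers(0, 3, N)
bi = rng.integers(0, 3, N)

# 2. Measure every pair (grouped by setting pair for vectorization)
a_out = np.empty(N)
b_out = np.empty(N)
for i, alpha in enumerate(ALICE_ANGLES):
    for j, beta in enumerate(BOB_ANGLES):
        sel = (ai == i) & (bi == j)
        a_out[sel], b_out[sel] = measure_singlet(alpha, beta, sel.sum())

# 3. Sifting: identical angles -> raw key (anti-correlated); CHSH subset -> test
key = ((ai == 1) & (bi == 0)) | ((ai == 2) & (bi == 1))   # 22.5/22.5 and 45/45
print("key fraction (~2/9):", key.mean())
print("anti-correlation on key rounds:", np.mean(a_out[key] != b_out[key]))

def E(i, j):
    sel = (ai == i) & (bi == j)
    return np.mean(a_out[sel] * b_out[sel])

S = E(2, 0) + E(2, 2) + E(0, 0) - E(0, 2)   # a=45, a'=0, b=22.5, b'=67.5
print("CHSH |S| (~2.83):", abs(S))
```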
It’s worth noting that E91 in its original form was more of a theoretical blueprint than a ready-to-use protocol. It demonstrated the principle that entanglement and Bell’s inequality could ensure security. However, at the time, no complete security proof was provided regarding how much key could be extracted if the Bell violation was partial. This led to later refinements, such as the BBM92 protocol, which simplified the approach. Nevertheless, the E91 protocol is profound because it introduced the idea of device independence: if the entangled outcomes violate a Bell inequality, the key is secure even if the devices or source are untrusted. In essence, E91 showed that “quantum correlations can be used to distribute cryptographic keys with checks against eavesdropping built directly into the laws of physics” – a concept that has influenced modern quantum cryptography research significantly.
BBM92 Protocol (Bennett, Brassard, Mermin 1992)
Shortly after E91, Bennett, Brassard, and Mermin proposed the BBM92 protocol (published in 1992 as “Quantum Cryptography Without Bell’s Theorem”). BBM92 is essentially a practical adaptation of BB84 into an entanglement-based setting. It forgoes the explicit Bell inequality test and instead uses the more straightforward error rate checking method from BB84, while still relying on entangled photon pairs for distribution. The protocol works as follows:
- Entangled Photon Source: One party (say Alice) or a centralized source generates a large number of entangled photon pairs in a known Bell state. A common choice is the $|\Phi^+\rangle$ state, defined as $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|0\rangle_A|0\rangle_B + |1\rangle_A|1\rangle_B)$. In polarization terms, this can be a pair of photons whose polarizations are always identical (both horizontal or both vertical) but random – so if measured in the same basis, Alice and Bob always get the same bit. Alice keeps one photon from each pair and sends the other photon to Bob through the quantum channel. (The source could also be a separate node sending photons to both, which is equivalent as long as the source is honest. In many descriptions, Alice is treated as the source for convenience.)
- Random Basis Measurement: Just like BB84, Alice and Bob independently choose between two complementary measurement bases for each photon they receive. Typically these are the $Z$ basis (rectilinear, e.g. horizontal/vertical polarization) and the $X$ basis (diagonal, e.g. $45^\circ/135^\circ$ polarization). We can denote these bases as $B1$ and $B2$ for simplicity: $B1=\{|e_0\rangle, |e_1\rangle\}$ might be the horizontal/vertical basis, and $B2=\{|f_0\rangle, |f_1\rangle\}$ the diagonal basis at 45°. Importantly, the chosen entangled state $|\Phi^+\rangle$ has the property that it is symmetric in both bases: one can also express $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|f_0\rangle_A|f_0\rangle_B + |f_1\rangle_A|f_1\rangle_B)$. This means if both Alice and Bob measure in the diagonal $B2$ basis, they will also always get identical outcomes (either both get result “0” corresponding to $|f_0\rangle$, or both “1” corresponding to $|f_1\rangle$). For each entangled pair, Alice and Bob randomly choose either basis $B1$ or $B2$ to measure their photon. They keep a record of which basis was used and the outcome (bit) obtained.
- Classical Communication and Sifting: After all measurements, Alice and Bob publicly announce their sequence of chosen bases (but not the measurement results). They go through the list and identify the instances where they happened to choose the same basis. Any pair where one used $B1$ and the other used $B2$ (mismatched basis) is discarded entirely. Only the events where both chose $B1$ or both chose $B2$ are kept, since those are the cases where their measurement outcomes are expected to be perfectly correlated. If we started with say $N$ entangled pairs, about half on average will be discarded in this sifting step, leaving roughly $N/2$ pairs where bases aligned.
- Raw Key Creation: For each of the remaining pairs (where Alice and Bob measured in the same basis), they now share a correlated bit. In fact, given the state $|\Phi^+\rangle$, their bits will be exactly the same (e.g. both 0 or both 1) because of the entanglement’s correlation in those bases. Alice and Bob agree on a mapping of quantum outcomes to bit values: for example, they might agree that the state $|e_0\rangle$ (horizontal) or $|f_0\rangle$ (45°) corresponds to bit 0, and $|e_1\rangle$ (vertical) or $|f_1\rangle$ (135°) corresponds to bit 1. Using this convention, when they measured in the same basis, they will have identical bit values. Thus, they can construct a raw key by taking those identical outcomes. At this point, ideally, if there were no disturbances, Alice’s raw key string should match Bob’s raw key string (this is sometimes called the “sifted key”).
- Eavesdropping Check (Error Rate Analysis): To ensure the key is secure, Alice and Bob need to estimate how much an eavesdropper might have interfered. Instead of using a Bell inequality, BBM92 follows the BB84 strategy: they sacrifice a subset of the sifted key bits to test for errors. Specifically, Alice and Bob can randomly select a sample of the correlated bits they have and publicly compare them. If Eve has been attempting to intercept photons, her interventions will have introduced discrepancies. For example, suppose Eve intercepts the photon on its way to Bob and measures it in a random basis. This action collapses the entangled state so that by the time Bob’s photon (or a replacement photon from Eve) arrives, it is no longer entangled with Alice’s photon. If Eve measured in the wrong basis relative to Alice and Bob’s choice, she will send Bob a photon that yields a random result relative to Alice’s outcome. Consequently, whenever Alice and Bob happened to use the same basis, if Eve chose a different basis (which happens 50% of the time for her, if she guesses at random), Alice’s and Bob’s results will disagree 50% of those times. In other words, Eve’s presence would cause a certain measurable error rate in the sifted key. By checking a random subset of bits, Alice and Bob can estimate the quantum bit error rate (QBER). For an entangled QKD system with no eavesdropping and low noise, the QBER should be very low (just stemming from detector noise or dark counts). But an active Eve would induce a significant error percentage. For example, the analysis shows that an intercept-resend attack by Eve would cause about 25% of the bits to be wrong (because half the time Eve chooses a wrong basis and in those cases half of those results are erroneous). Alice and Bob compare their sample—if they find, say, 0% or a very small fraction of disagreements, they can be confident the line is secure; if the error rate is above a certain threshold, they abort the protocol and throw away the key. In practice, they might also use error correction on the remaining bits after sampling, but a high error rate cannot be corrected without tipping off that something is wrong.
- Post-Processing: Assuming the error rate was acceptable (below the security threshold), the remaining sifted key (excluding the bits revealed for testing) is then processed with error correction (to iron out any minor mismatches) and privacy amplification (to eliminate any partial information Eve could have gained, for example if she caused some noise). The end result is a secure shared key between Alice and Bob.
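To illustrate the sifting and error-check logic just described, here is a simplified Monte Carlo sketch of BBM92 with an intercept-resend attacker. The source and channel are idealized (no noise or loss), and the attack fraction is an adjustable assumption, so the numbers only reproduce the textbook ~50% sift ratio and ~25% QBER under a full attack.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
attack_fraction = 1.0                      # fraction of pairs Eve intercepts (assumption)

alice_basis = rng.integers(0, 2, N)        # 0 = Z (H/V), 1 = X (diagonal)
bob_basis = rng.integers(0, 2, N)

# Ideal |Phi+> source: when bases match, Alice and Bob get identical bits.
# (Mismatched-basis rounds are discarded in sifting, so their values never matter.)
alice_bit = rng.integers(0, 2, N)
bob_bit = alice_bit.copy()

# Intercept-resend: Eve measures Bob's photon in a random basis, collapsing the pair.
attacked = rng.random(N) < attack_fraction
eve_basis = rng.integers(0, 2, N)
eve_bit = rng.integers(0, 2, N)
# On attacked pairs, a measurement in a basis different from Eve's is re-randomized;
# a measurement in Eve's basis simply reproduces her outcome.
alice_bit = np.where(attacked & (alice_basis != eve_basis), rng.integers(0, 2, N),
                     np.where(attacked, eve_bit, alice_bit))
bob_bit = np.where(attacked & (bob_basis != eve_basis), rng.integers(0, 2, N),
                   np.where(attacked, eve_bit, bob_bit))

sifted = alice_basis == bob_basis          # keep only matching-basis rounds
qber = np.mean(alice_bit[sifted] != bob_bit[sifted])
print("sifted fraction:", sifted.mean())   # ~0.5
print("QBER:", qber)                       # ~0.25 for a full intercept-resend attack
```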
The key differences between BBM92 and E91 are in the security validation method and the number of bases used. BBM92 doesn’t explicitly test a Bell inequality; instead, it leverages the fact that entanglement-based QKD can be treated similarly to BB84. If Alice and Bob observe a low QBER, it serves as an entanglement witness in a practical sense – it implies their particles were mostly in the expected entangled state, even if they didn’t do a CHSH calculation. In fact, Bennett et al. showed that one can “transfer the security proofs of BB84 to entangled-state protocols”. BBM92 uses only two measurement bases (just like BB84’s two bases), rather than three as in E91, making it simpler to implement. By removing the Bell test and using the sifting and error check approach, BBM92 was more amenable to analysis with the existing tools of QKD security at the time. It was understood as the natural entangled analogue of BB84.
Another difference is that E91 often considered a third-party source of entanglement and the notion of device independence, whereas BBM92 can be implemented with one of the legitimate parties creating the entangled pairs and does not inherently provide device independence (it assumes the usual trusted-device scenario, aside from the quantum channel). In practice, many experiments that are called “E91” implementations actually follow the BBM92 procedure (entangled pairs + two bases + QBER check) rather than literally performing a Bell inequality check. This is because it’s easier to generate a key efficiently that way. Nevertheless, the spirit of both protocols is similar: use entangled photons to generate a correlated key and verify security by detecting disturbances in quantum correlations.
Photon Pair Generation and Coincidence Detection: In both E91 and BBM92, a practical concern is how the entangled photon pairs are generated and distributed. Typically, entangled pairs are produced via spontaneous parametric down-conversion (SPDC) in nonlinear crystals or entangled photon sources using quantum dots, etc. These sources emit photon pairs at random times. Alice and Bob’s devices must identify which detections correspond to the same pair. This is done by timing synchronization and coincidence detection. Both parties timestamp their photon detection events, and only those detections that occur within a given coincidence time window (indicating they originated from the same emission) are considered a valid pair. Any pair where one photon is detected without a coincident partner at the other side is discarded. This adds a layer of complexity: the need for tight synchronization and for detectors and time-taggers with low timing jitter. The overall rate of valid entangled pairs (coincidence rate) can be much lower than the raw pair generation rate, especially over long distances with loss.
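A minimal sketch of such coincidence matching is shown below; the emission statistics, loss fractions, fiber delay, jitter, and 2 ns window are all invented illustrative numbers, and the greedy matcher ignores clock offset drift that real systems must track.

```python
import numpy as np

def match_coincidences(t_alice, t_bob, window_ns):
    """Greedily pair up sorted detection timestamps (ns) that fall within a
    coincidence window; unmatched detections are discarded."""
    pairs, j = [], 0
    for i, ta in enumerate(t_alice):
        while j < len(t_bob) and t_bob[j] < ta - window_ns:
            j += 1                                   # Bob's click is too early: skip it
        if j < len(t_bob) and abs(t_bob[j] - ta) <= window_ns:
            pairs.append((i, j))
            j += 1                                   # consume Bob's click
    return pairs

rng = np.random.default_rng(0)
emit = np.cumsum(rng.exponential(1000.0, 2000))      # random pair-emission times (ns)
keep_a = rng.random(emit.size) < 0.8                 # loss on Alice's arm
keep_b = rng.random(emit.size) < 0.6                 # heavier loss on Bob's arm
t_a = emit[keep_a] + rng.normal(0.0, 0.3, keep_a.sum())          # detector jitter
t_b = emit[keep_b] + 50.0 + rng.normal(0.0, 0.3, keep_b.sum())   # fiber delay + jitter

pairs = match_coincidences(t_a, t_b - 50.0, window_ns=2.0)       # compensate known delay
print(f"{len(pairs)} coincidences from {emit.size} emitted pairs")
```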
Error Analysis: Aside from eavesdropping, entanglement-based protocols must contend with noise and imperfections. Channel loss, detector dark counts, and multiple-pair emissions in SPDC (where occasionally two pairs are produced nearly simultaneously) can all contribute to an observed QBER even without an eavesdropper. For instance, multi-pair events can lead to accidental coincidences that aren’t true entangled pairs, or stray photons that cause errors. The security analysis of BBM92 treats these errors similarly to BB84 – as long as the QBER is below a certain threshold (around 11% for single-photon BB84 with one-way post-processing, and a similar bound for entangled schemes), privacy amplification can still yield a secure key. If QBER is too high, no secure key can be extracted (it may indicate too much noise or hacking). In practice, entanglement-based QKD implementations have to carefully manage these error sources: using high-quality optics, low-noise single-photon detectors (often cooled avalanche photodiodes or superconducting nanowire detectors), and sometimes narrow timing windows to reduce accidental coincidences. The protocols assume any error, whatever its origin, is a potential leak to Eve, so they will sacrifice key bits accordingly to distill only the provably secure portion.
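As a rough quantitative illustration, the widely quoted asymptotic bound for BB84-style one-way post-processing gives a secret-key fraction of $r = 1 - 2h_2(Q)$, which reaches zero near $Q \approx 11\%$. The snippet below simply evaluates this formula; it ignores finite-size effects and any protocol-specific corrections.

```python
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p <= 0 or p >= 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def key_fraction(qber):
    """Asymptotic secret-key fraction for BB84/BBM92 with one-way
    post-processing (Shor-Preskill-style bound): r = 1 - 2*h2(Q).
    Returns 0 when no secure key can be distilled."""
    return max(0.0, 1 - 2 * h2(qber))

for q in (0.01, 0.05, 0.08, 0.11, 0.15):
    print(f"QBER {q:.0%}: secure fraction {key_fraction(q):.3f}")
```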
To summarize, BBM92 streamlines entanglement-based QKD by using two bases and error rate checks akin to BB84, rather than performing a Bell test. It highlights how an entangled pair source can simply replace the single-photon source of BB84: Alice and Bob’s measurements play the role of state preparation and measurement in BB84. The protocol’s security is typically analyzed in the same framework as BB84 (with entanglement in the background ensuring the correlations). This made entanglement-based QKD feasible to implement with the technology available in the 1990s and beyond, and most experimental entanglement-based QKD realizations follow the BBM92 scheme.
Security Comparison
Eavesdropping Resistance
Both prepare-and-measure (e.g. BB84) and entanglement-based (E91/BBM92) QKD are designed to resist eavesdropping by exploiting the no-cloning theorem and measurement disturbances. In an entanglement-based protocol, the presence of entanglement adds an extra layer of security: an eavesdropper cannot even siphon off part of the quantum signal without detectable disturbance. If Eve tries to intercept an entangled photon en route to Bob and measure it, she collapses the entangled state to a separable state. Bob effectively receives a photon that is no longer entangled with Alice’s, reducing the correlations between Alice and Bob. This will either show up as a high QBER (in BBM92) or a failure to violate Bell’s inequality (in E91). In fact, any man-in-the-middle attack where Eve inserts herself to intercept and resend signals is fundamentally limited: she cannot replicate the entangled correlations that Alice and Bob expect without actually distributing entangled particles herself. If Eve tries to act as the source of entangled pairs, unless she distributes genuine entanglement to Alice and Bob, the Bell test (if used) will fail. And if she does distribute genuine entangled pairs, she gains no information about the outcomes (because measuring or entangling herself to the pairs would spoil the entanglement). Thus, entanglement-based QKD inherently forces an eavesdropper into a dilemma: either remain passive (and get nothing), or attack and be revealed.
In terms of a quantitative security comparison: in BB84, an ideal intercept-resend attack yields QBER = 25%, which Alice and Bob would easily notice and abort. In entanglement-based QKD, the analogous full intercept attack similarly produces a high error rate or nullifies the Bell violation. Realistic attacks might be more subtle, but modern security proofs for both BB84 and entanglement-based protocols assure that as long as the observed error rate is below a certain threshold, any information Eve might have gained can be removed by privacy amplification. Entanglement-based protocols have been proven secure under very general conditions (even if the source is untrusted), largely thanks to the fact that one can imagine that instead of Alice sending states, a source distributes entanglement and then conceptually one can map it to an equivalent prepare-and-measure picture for the security proof.
Security Model Differences (E91 vs BBM92 vs BB84)
E91 was pioneering in introducing the concept of using entanglement and Bell inequality as a proof of security. However, originally it lacked a rigorous security proof and was seen as somewhat theoretical. BB84, on the other hand, had simpler analysis available (later fully rigorous proofs came). BBM92 bridged this gap by essentially putting E91 in BB84 terms. The key differences are:
Trusted Devices Assumption
BB84 and BBM92 assume that Alice’s and Bob’s equipment is properly characterized and not under Eve’s control (these are sometimes called “device-dependent” protocols). E91 pointed toward device-independent security: if Bell’s inequality is violated by a sufficient amount, one doesn’t even need to trust the inner workings of the devices. In fact, later research showed that if Alice and Bob conduct a loophole-free Bell test (no additional assumptions), they can achieve DI-QKD – secure key distribution even with black-box devices. In practice, DI-QKD is extremely challenging (requiring very high detection efficiency and low noise to close loopholes), but it’s a long-term advantage of entanglement-based schemes.
So, E91’s security model can be considered stronger (not needing trust in devices) but at the cost of more stringent technical requirements. BBM92’s model is similar to BB84’s: it assumes devices are trusted or at least that any device flaws are accounted for in the QBER. It does not inherently protect against a malicious device the way a Bell test could.
Source Independence
In BB84, typically Alice is the source of quantum states. If a third-party source is used (say a telecom operator sends states to Alice and Bob), one has to trust that source or use additional methods to detect leakage. In entanglement-based QKD (BBM92/E91), the source could be completely untrusted. Even if Eve provided the entangled pairs, she cannot know the outcomes of Alice’s and Bob’s measurements unless she somehow intercepted or influenced them, which the Bell test or QBER would reveal. This is a significant difference: entanglement-based QKD does not require a trusted source, since entanglement-based correlations (and their verification) ensure security.
This property makes entanglement-based protocols attractive for network scenarios where a service provider distributes entangled particles to clients – the clients can verify entanglement and need not trust the provider.
Bell Test vs QBER
E91 explicitly incorporates a Bell inequality check as the security criterion, whereas BB84 and BBM92 implicitly rely on error rates. From a theoretical standpoint, Bell violation is a stricter condition than low QBER. You can have a situation with low QBER but no Bell violation if, for example, the correlations are merely classical (say, a cunning Eve shares classically correlated bits with Alice and Bob that produce few errors yet do not violate a Bell inequality). However, in practice it is difficult for Eve to systematically achieve a low QBER without true quantum correlations and without being detected in some other way (she would essentially have to control the source and send identical bits to Alice and Bob – but then she’d still have to evade whatever channel monitoring exists, and in BBM92 she wouldn’t know their measurement bases beforehand).
Security proofs show that even without checking Bell’s inequality, BBM92 is secure as long as error rates are low and certain privacy amplification steps are taken. That said, if one wants ultimate device-independent security, one would incorporate the Bell test (as in DI-QKD implementations).
Vulnerabilities
All real QKD systems (BB84 or entanglement-based) can be vulnerable to implementation loopholes – for instance, detector blinding attacks, timing side channels, etc., that are outside the idealized theory. BB84 systems have suffered several such attacks in experiments (Eve doesn’t break the quantum principles, but hacks the hardware). Entanglement-based systems share many of these vulnerabilities if not properly addressed.
One notable attack type on entanglement QKD could be if Eve somehow replaces the entangled source with a fake source emitting tailored classical correlations. However, if Alice and Bob rely on Bell test, this would be caught. If they don’t perform a Bell test (as in standard BBM92 implementations), then the system reduces to the BB84-like security – which is still unconditional in theory, but the source being external could introduce new angles for attack (like an entanglement source that sends additional information tags or multiphoton signals – analogous to trojan-horse attacks).
In summary, BB84 and BBM92 share many implementation security assumptions, while E91 (Bell-test-based) can alleviate some of those by not trusting even the devices.
Side-Channel Countermeasures
Many of the countermeasures developed in QKD (e.g., decoy states to handle multiphoton emissions, detector blinding countermeasures, timing randomization, etc.) apply to both prepare-and-measure and entanglement-based setups. For instance, an SPDC source can produce multiple photon pairs; one might worry Eve could capture one of the pair and let another go through. But because each pair is entangled only with itself, capturing one entire pair doesn’t give Eve info about another pair’s outcomes – it just reduces the count rate. If the source emits two pairs at once and Eve somehow siphons off one photon from each, the remaining two photons aren’t entangled with each other, so that attempt fails to produce consistent correlations and would mess up the QBER. Still, controlling multiphoton probability (by using faint laser pulses in BB84 or low pump power in SPDC for entanglement) is important.
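To give a sense of scale for the multiphoton issue, the sketch below estimates the probability of emitting two or more pairs per pump pulse, assuming Poissonian pair-number statistics with mean pair number $\mu$ (a common simplifying approximation for multimode SPDC); the $\mu$ values are illustrative.

```python
import numpy as np

def multi_pair_prob(mu):
    """Probability that an SPDC pump pulse yields two or more pairs,
    assuming Poissonian pair statistics with mean pair number mu."""
    return 1 - np.exp(-mu) * (1 + mu)

for mu in (0.01, 0.05, 0.1, 0.5):
    p2 = multi_pair_prob(mu)
    print(f"mean pairs/pulse {mu}: P(>=2 pairs) = {p2:.2e}, "
          f"rough fraction of emitted pairs affected ~ {p2 / mu:.1%}")
```

This is why entangled sources are usually pumped at low mean pair number: the multi-pair probability falls roughly quadratically with $\mu$, at the cost of a lower raw pair rate.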
The bottom line: in an entanglement-based system, any attempt by Eve to steal a photon or otherwise entangle herself with the photons will show up as a deviation from the expected quantum statistics (either extra errors or reduced Bell parameter). This inherent feature is a strong security plus.
MDI-QKD vs DI-QKD
Two advanced paradigms often discussed in the QKD security context are Measurement-Device-Independent QKD (MDI-QKD) and Device-Independent QKD (DI-QKD). Both are relevant to entanglement-based concepts:
MDI-QKD
This is a protocol innovation that removes all vulnerabilities associated with the measurement devices (typically the single-photon detectors) – a component often targeted by hacking attacks. In MDI-QKD, Alice and Bob do not send photons to each other; instead, they prepare quantum states (which can be entangled or not) and send them to a third-party “Charlie” who performs a joint measurement (like a Bell state measurement) on the incoming signals. Charlie announces the success of this joint measurement, which heralds the creation of an effective entanglement between Alice and Bob’s systems. Crucially, Charlie’s measurement result does not reveal the key – it’s akin to him telling Alice and Bob the parity of their bits without knowing the bits (like saying “your bits are same or different”). As a result, even if Charlie (or the detectors) is malicious, he learns nothing about the key bits; at best, a corrupt Charlie could refuse to perform measurements properly, causing the protocol to fail (denial of service) but not compromising security.
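A toy calculation makes this intuition concrete: if Alice’s and Bob’s raw bits are independent and uniformly random (as they are before sifting), then the announced parity carries no information about either bit individually. The snippet below is only a sanity check of that statement, not a model of the optical Bell-state measurement itself.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

a = rng.integers(0, 2, N)        # Alice's raw bits
b = rng.integers(0, 2, N)        # Bob's raw bits (independent of Alice's)
parity = a ^ b                   # what Charlie effectively announces

# Best guess of Alice's bit given only the announced parity:
print(a[parity == 0].mean())     # ~0.5 -> parity 0 reveals nothing about a
print(a[parity == 1].mean())     # ~0.5 -> parity 1 reveals nothing about a
```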
MDI-QKD is called “measurement-device-independent” because Alice and Bob don’t have to trust the measurement devices at the node – those can be fully controlled by Eve and it won’t give her the key. This approach effectively uses entanglement swapping: even though Alice and Bob may send, say, weak coherent pulses, Charlie’s Bell state measurement entangles their photons.
MDI-QKD has been experimentally demonstrated, and while it suffers from low key rates over long distances, it has shown immunity to all detector side-channel attacks. In summary, MDI-QKD leverages the idea of a central Bell measurement (an entanglement concept) to remove the need for trusted detectors.
DI-QKD
Device-Independent QKD goes a step further – it removes the need to trust even the state preparation. It is essentially the realization of the original E91 dream: if Alice and Bob can observe a strong Bell inequality violation using their devices, they can trust that a secure key can be distilled, regardless of what’s inside the boxes.
DI-QKD effectively treats the devices as black boxes and relies only on observed statistics. This is the highest level of security, but it’s very demanding to implement. It requires a loophole-free Bell test with high efficiency and low noise to get enough violation to extract a key. Only recently have there been experimental demonstrations inching toward DI-QKD (for example, using entangled nitrogen-vacancy centers or high-quality photon setups to violate Bell’s inequality while generating raw key data). DI-QKD is intimately tied to entanglement – you cannot have device independence without entanglement, because you need that Bell violation.
As noted earlier, E91 is considered a precursor to DI-QKD. In the future, if DI-QKD becomes practical, it will offer the ultimate form of quantum security: even if the QKD devices are built by an untrusted manufacturer (or even by your adversary), as long as they produce outcomes that violate Bell’s inequality by the expected amount, the generated key is secure.
Summary
In terms of security comparisons: Standard BB84/BBM92 assume trusted devices (device-dependent). MDI-QKD relaxes trust in detectors (a major security hole in practice) by using an entanglement-based measurement strategy. DI-QKD relaxes trust in everything by leveraging entanglement-based tests. Thus, entanglement is the enabling resource for the strongest security regimes. The trade-off is complexity and key rate: DI-QKD and MDI-QKD are typically slower and more experimentally complex than standard QKD. Nonetheless, they are active research areas precisely because they tackle real-world attack vectors that simpler QKD might be vulnerable to.
To conclude this comparison: E91 (with Bell test) offers conceptually the highest security (device-independent in principle), BBM92 offers practical security equivalent to BB84 but with an entangled source (often considered part of “second-generation” QKD), and BB84 is the original prepare-and-measure scheme that is simpler but requires certain trust assumptions (like a trusted source or decoy state implementation). All are information-theoretically secure against eavesdropping under their respective assumptions, but entanglement-based protocols open the door to new security validations and network models (e.g. untrusted nodes, device independence) that are not possible with a simple prepare-and-measure approach.
Practical Implementations
Implementing entanglement-based QKD in the real world presents technical challenges, but significant progress has been made in both laboratory and field settings. Here we overview various implementation aspects:
Experimental Realizations
The first proof-of-principle entanglement-based QKD experiments were performed in the 1990s and early 2000s. For example, teams demonstrated E91/BBM92 protocols over optical fiber and free-space optical links, showing that entangled photon pairs could indeed generate secret keys under real conditions. A notable early experiment used entangled photons over short distances in the lab to implement Ekert’s protocol. Over time, distance records were steadily pushed. In fiber optics, entangled photon pairs have been distributed over dozens of kilometers. By 2008, entanglement-based QKD was demonstrated over 100 km of fiber using advanced detectors. More recently, experiments have entangled photons across over 200 km of fiber (in one case 248 km in a field experiment), although at that extreme distance the key rates are extremely low. The fundamental limitation in fiber is loss: optical fibers have attenuation (on the order of 0.2 dB/km in telecom wavelength fiber), so entangled photons face exponential loss with distance. For instance, 100 km of typical fiber might attenuate signals by ~20 dB (i.e., only 1% of the photons make it through). With entangled photons typically generated by SPDC sources, which themselves have low pair production efficiency, getting a usable coincident detection rate at 100+ km is a feat requiring ultralow-noise detectors. One experiment reported a secure key rate of about 110 bits per second over a 10 km fiber – using wavelength multiplexing to boost rates – which was a record for entanglement QKD. But at longer distances, rates drop off; at ~50 km, key rates might be on the order of a few bits per second or less with current technology.
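A back-of-the-envelope estimate shows why rates fall so quickly with distance. In the sketch below the 10 MHz pair rate, 0.2 dB/km attenuation, 80% detector efficiency, and a mid-point source are all assumed, illustrative parameters; real systems also lose events to sifting, dead time, and accidental-coincidence rejection.

```python
import numpy as np

def fiber_transmittance(length_km, atten_db_per_km=0.2):
    """Fractional transmission of a fiber span with the quoted attenuation."""
    return 10 ** (-atten_db_per_km * length_km / 10)

def coincidence_rate(pair_rate, len_a_km, len_b_km, det_eff=0.8):
    """Very rough detected-pair (coincidence) rate: source pair rate times
    channel transmission and detector efficiency on BOTH arms.
    Ignores dark counts, duty cycle, and sifting losses."""
    return (pair_rate
            * fiber_transmittance(len_a_km) * det_eff
            * fiber_transmittance(len_b_km) * det_eff)

# Example: 10 MHz pair source located midway between Alice and Bob
for total_km in (20, 100, 200):
    r = coincidence_rate(1e7, total_km / 2, total_km / 2)
    print(f"{total_km:>3} km total: ~{r:,.0f} coincidences/s before sifting")
```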
In free-space implementations, entangled photons have an advantage: photons traveling through air or vacuum don’t suffer the same exponential absorption as in fiber (though they do face spreading loss and atmospheric turbulence). A milestone achievement was the Chinese Micius satellite (launched in 2016) which demonstrated entanglement-based QKD between two ground stations ~1200 km apart, via the satellite acting as an entangled photon source in space. The satellite beamed one photon of each entangled pair to one ground station and the other photon to a second ground station, creating a secret key between those distant locations. They performed a Bell test between the ground stations to verify entanglement and thus ensure security without needing to trust the satellite. This was the first QKD link spanning more than 1,000 km without a trusted relay, relying purely on entanglement and the laws of physics for security. The photon losses were enormous (on the order of 65–70 dB of channel loss) due to the long distance and diffraction, meaning very few photon pairs were detected in coincidence. Nonetheless, the experiment succeeded in distributing entanglement and generating a secret key, proving feasibility for satellite-based entanglement QKD. Other free-space experiments include quantum key distribution between telescopes (for example, a 144 km free-space link was done between Canary Islands observatories in 2007 using entangled photons). These free-space experiments often must contend with background light (necessitating nighttime operation for single-photon detection), atmospheric turbulence (beam wandering), and strict pointing and tracking to keep telescopes aligned.
Fiber vs. Free-Space/Satellite
Fiber-based entanglement QKD is suitable for relatively shorter distances – think metropolitan or inter-city links up to maybe 100 km without intermediate nodes. It has the advantage of being weather-independent and leveraging existing fiber networks. Free-space and satellite QKD can cover much larger distances (hundreds to thousands of kilometers) and connect remote sites, but require clear line-of-sight and are subject to weather or time-of-day constraints (though there is research into daytime QKD with better filtering and/or using wavelengths that suffer less background noise). In a network scenario, one can also combine both: use satellites for long haul (continent-scale) links, and fibers on the ground for last-mile distribution.
Technological Challenges
- Entangled Photon Source: The quality of the entangled source is crucial. Most systems use SPDC in nonlinear crystals, where a pump laser generates pairs of photons. SPDC is probabilistic: one can only increase pair rate by increasing pump power, but that also raises the chance of double-pair emissions (which can increase QBER or complicate security). New sources like entangled photon generation from quantum dots or atomic ensembles are being explored for brighter, more deterministic pair generation. Additionally, sources need to emit photons in a suitable wavelength (e.g. around 1550 nm for fiber transmission, because that’s where fiber loss is lowest). There have been demonstrations of entangled photon sources directly at telecom wavelengths, or using frequency conversion to convert entangled photons from a convenient generation wavelength down to telecom wavelength.
- Synchronization and Timing: Because Alice and Bob’s photons are generated at the same source (or by the same event), they need to coordinate timing to identify which detection at Alice corresponds to which detection at Bob. This typically requires sub-nanosecond time synchronization. Often, a separate classical synchronization signal or clock distribution is used. Alternatively, one can use frame-based methods where they encode a timestamp on each detection. Either way, achieving picosecond-level timing resolution and low jitter in detectors is important to reduce accidental pairings. Jitter in detector or time-tagging can allow unrelated photons to appear as coincident, which introduces error.
- Polarization/Phase Stability: Entanglement can be in polarization, time-bin, or other degrees of freedom. In fiber, polarization can drift due to temperature and stress, so maintaining polarization entanglement over long fibers might require compensation. Some experiments prefer energy-time or time-bin entanglement, which is more robust in fiber (two-photon interference Franson-type setups) but then key extraction often uses two mutually unbiased bases realized by interferometers, which have to be stabilized in phase. Free-space is generally kind to polarization (it’s stable through vacuum/atmosphere, neglecting birefringence in air which is minimal), but pointing is a challenge.
- Detection: Single-photon detectors are a critical component. In entanglement QKD, often both parties have detectors (one for each photon). High detection efficiency directly translates to higher key rates, since lost photons reduce coincidences. Superconducting nanowire single photon detectors (SNSPDs) have efficiencies above 80% at telecom wavelengths and very low noise, which have enabled many recent record experiments (though they require cryogenic cooling). More common avalanche photodiodes (APDs) have decent efficiency (~20% at 1550 nm for InGaAs APDs) and can operate with thermoelectric cooling, but have higher dark count rates and afterpulsing issues. The detector technology choice impacts the distance and rate achievable.
- Quantum Repeaters (future solution): Because both fiber loss and the probabilistic nature of entanglement generation severely limit distances and rates, a major ongoing effort is the development of quantum repeaters. A quantum repeater would allow entanglement to be extended over long distances by dividing the channel into segments, entangling each segment, and then doing entanglement swapping (and using quantum memory to temporarily store entangled states). This would effectively create longer-range entanglement without direct transmission over the full distance. Quantum repeaters are extremely challenging (they require reliable quantum memory and teleportation operations), but if realized, they could connect entanglement-based QKD links to form a true long-distance quantum network without trusted intermediary nodes. Presently, because repeaters are not yet available, long-distance QKD networks often resort to trusted nodes (intermediate stations that receive key from one link and re-transmit on another, essentially performing key relay). Trusted nodes, however, are security weak points (they must be physically secure). Entanglement-based QKD with repeaters would eliminate the need for trust in intermediate nodes, since the entanglement (and thus the key) would be end-to-end quantum-secured.
Despite the challenges, there have been notable implementations. To highlight a few:
- A Geneva group experiment (2002) used entangled photons over 67 km of optical fiber (with cooled detectors to reduce noise) to generate keys.
- The SECOQC project in Europe (mid-2000s) built a network where one link was entanglement-based (others were BB84).
- The Chinese Quantum Science Satellite (Micius, mentioned above) performed entanglement distribution over ~1,200 km in 2017 and later full entanglement-based QKD between distant Chinese ground stations; in a separate trusted-relay mode it also enabled keys to be exchanged between China and Austria.
- Multiple groups have demonstrated entanglement-based QKD in urban fiber networks for distances like 20–50 km, often as part of testbed quantum networks. For example, NTT in Japan reported entanglement QKD over 50 km deployed fibers.
- More recently, researchers entangled photons across a 248 km fiber link (between two distant labs) – a distance record for fiber. Although key generation was not practical at that extreme, it served as an entanglement distribution test.
Satellite and Free-Space Implementations: Beyond Micius, there are proposals and ongoing work for more satellites. The European Space Agency and others have plans for quantum communication satellites that will use entanglement (or trusted node QKD) to connect distant nodes. Free-space QKD is also being tested for airborne platforms (drones, aircraft) as relays for entangled photons.
Synchronization and Clock Recovery: One interesting practical solution is to send a bright laser as a guide or use one photon of the pair as a signal and the other as idler – but in entanglement QKD both photons are used for key. So usually a separate synchronization channel is used. In the Micius satellite experiments, they used precise timing signals and filtering to ensure coincidences were identified under high loss.
In summary, practical implementations of entanglement-based QKD have reached city scale and even 1,000+ km scale (via satellite), albeit with low key rates compared to prepare-and-measure QKD. Fiber-based entanglement QKD works up to a few hundred kilometers with trusted nodes bridging longer spans. Free-space entanglement QKD has been demonstrated to >1000 km. The main challenges are generating and detecting entangled photons efficiently, overcoming loss, and maintaining synchronization. As technology improves (especially single-photon detectors and possibly quantum repeaters), entanglement-based QKD is expected to become more practical and might form the backbone of future quantum networks.
Industry and Commercial Interest
The field of QKD has transitioned from purely academic research to early commercial products in the last two decades. Most commercial QKD systems to date have been based on prepare-and-measure protocols (BB84 with attenuated lasers, etc.) because they are simpler and can achieve higher key rates over common distances. However, there is growing interest in entanglement-based QKD from both companies and government initiatives, as these systems promise advantages for security and network scaling in the long term.
Several companies and startups are actively exploring or offering entanglement-based QKD solutions:
- Qubitekk (USA): Qubitekk is a company that explicitly uses entangled photon sources for QKD. Their systems are designed for industrial control system security (for example, securing electrical power grid communications). Qubitekk has demonstrated entanglement-based QKD links around 20 km with key rates on the order of 100 kbps, which is impressively high for entangled photons (achieved through efficient sources and detectors). They have participated in U.S. Department of Energy projects (the Quantum Grid Initiative) to secure power infrastructure using quantum keys. One trial in Chattanooga, TN integrated a QKD system with entangled photons (BBM92 protocol) into a utility network.
- Toshiba (Japan/UK): Toshiba is a major player in QKD technology and has mainly focused on BB84-based fiber QKD (they have record long-distance demo for decoy-state BB84). However, Toshiba’s research labs have also investigated entanglement-based schemes and sources, particularly for future networked scenarios. In their product line, entanglement-based is not yet commercial, but they are involved in trials and standardization that consider entanglement for next-generation QKD networks.
- ID Quantique (Switzerland): ID Quantique was one of the first companies to sell QKD systems (they started with a BB84 system over fiber). They also primarily stick to prepare-and-measure in products. But they have interest in all QKD tech, including entanglement, and have provided devices (like detectors) for entanglement experiments. As of now, IDQ’s commercial offerings remain BB84/decoy-type, but as an industry leader, they keep an eye on entanglement developments especially for device-independent QKD (they have some research collaborations in that direction).
- QuantumCTek (China): This is a Chinese QKD company that has been involved in the big QKD network in China (which spans 2000+ km using mostly trusted nodes and decoy BB84). They, along with government projects, no doubt are looking at entanglement-based QKD for integrating with satellites and future networks. In 2021, they launched some products and also participated in satellite QKD experiments. Their main commercial push is currently decoy-state QKD, but again, they will likely incorporate entanglement tech as it matures.
- Other Startups & Labs: There are startups focusing on quantum networks (e.g., Aliro Quantum in the US, QuintessenceLabs in Australia albeit they focus on continuous-variable QKD, etc.) which highlight entanglement. Also, academic spinoffs and national labs are building pilot quantum networks with entanglement capabilities. For instance, the EU Quantum Flagship program has projects for entanglement distribution across fiber networks (like the UK Quantum Network, German projects like Q.network, etc., some of which explicitly target entanglement-based QKD and multi-party scenarios).
The current deployments of entanglement QKD are mostly in testbeds and demonstrations, rather than wide commercial roll-out. This is due to the complexity and cost. Entanglement-based QKD devices involve delicate photon sources and often more detectors (both sides need detectors, whereas in a typical BB84 system only Bob might have a detector unit). Key rates are generally lower for entanglement QKD at comparable distances, which can limit practical use for high-bandwidth encryption needs. Additionally, components like ultra-low-noise detectors or entangled photon sources are expensive and not as turn-key as standard laser and modulator setups.
However, commercial feasibility is improving as technology advances:
- The cost of SNSPDs (which dramatically improve entanglement QKD performance) is coming down slowly as they become more widely used.
- Integrated photonic sources of entanglement on chips are being developed, which could eventually allow entangled photon pair generation in a compact, stable form (e.g., using silicon photonics or other waveguide platforms to generate entangled photons via micro-ring resonators).
- Companies in the telecommunications sector (like BT in Britain, etc.) have interest in quantum-secured links and are participating in trials that include different QKD types.
Quantum Networks and Future Outlook: The long-term vision driving interest in entanglement-based QKD is the quantum internet – a network where entanglement can be distributed on demand between any nodes, enabling not just QKD but also protocols like quantum teleportation, distributed quantum computing, and quantum sensor networks. Entanglement-based QKD is a foundational application for such a network. Governments are funding large initiatives for quantum communication infrastructure (EU’s EuroQCI, US Quantum Network initiatives, China’s quantum network expansions). These often include developing entanglement distribution tech, satellite QKD, and eventually quantum repeaters.
In industry, certain sectors find the highest value in QKD (financial, government, critical infrastructure). As they adopt QKD for ultra-security, they may start with proven BB84 systems now, but over time, to support networked topologies or untrusted relay scenarios, they might upgrade to entanglement-based systems. For example, a bank network that currently uses trusted nodes might later deploy entangled sources to remove the need to trust intermediate points.
Another future aspect is multi-party QKD – entanglement allows scenarios like quantum conference key agreement (where multiple users share a single group key via a GHZ entangled state distribution). This is something not easily done with sequential BB84 links. Some projects (like Q-Fiber in Germany) explicitly aim to demonstrate four-user entanglement-based key sharing. Such capabilities interest enterprises that might want to secure a multi-node communication (for example, secure video conferences with quantum keys shared among all participants).
Commercial readiness: It is generally recognized that entanglement-based QKD (sometimes called “second generation QKD”) is a bit behind the first generation (BB84-type) in technology readiness. A Fraunhofer Institute report in 2023 noted that entanglement-based QKD “is not yet as mature as prepare & measure QKD and only achieves low key rates at present, but could be advantageous for complex communication networks.” This captures the state of affairs: current systems are often slower and more complex, but they open doors for networks and scenarios that single-link QKD cannot support gracefully.
In the near future, we can expect:
- More field trials of entanglement QKD (especially in conjunction with emerging quantum networks in metropolitan areas).
- Possibly early adopter use-cases where the extra security or topology flexibility is worth the complexity (for example, securing backbone connections between data centers via entanglement QKD if they want to avoid trusting intermediate nodes).
- Integration with classical telecom infrastructure: efforts are on to multiplex entangled photons alongside classical data in the same fiber (using different wavelength channels), which would be crucial for real-world deployment so that a quantum channel doesn’t require a dedicated dark fiber.
- Standardization: groups like ETSI and the ITU are considering QKD standards; entanglement-based protocols will be part of those discussions, especially for interoperability of future devices and certification (ensuring an entangled source meets certain specs, etc.).
In the long term (5-10+ years), if quantum repeaters become viable, entanglement-based QKD will likely become the dominant paradigm, because only entanglement distribution allows for hopping across multiple nodes without losing security at each hop. This would enable continental or global quantum secure communications without trusting intermediate nodes (the ultimate vision of secure quantum networks). Even without full-blown repeaters, the combination of satellites and fiber links can cover global distances – a network of quantum satellites could connect multiple ground networks of entanglement QKD, delivering keys worldwide. Companies and government agencies are definitely interested in that prospect for diplomatic, military, or financial communications security on a global scale.
To conclude, while prepare-and-measure QKD currently leads the market due to simplicity and higher key rates, entanglement-based QKD protocols like E91 and BBM92 are at the heart of next-generation quantum communications. Ongoing improvements in photonic technology are steadily closing the gap in performance. The additional security guarantees (e.g., tolerance of untrusted devices) and network capabilities (multi-user, untrusted relay) provided by entanglement make it a very attractive approach for future large-scale quantum-secure networks. We are already seeing a convergence of research and industry efforts toward making entanglement-based QKD practical, with field demonstrations paving the way for eventual commercial adoption. The vision of a world-wide web of entangled quantum devices – securely providing keys and enabling other quantum protocols – is driving interest and investment in entanglement-based QKD today. With continuing progress, E91/BBM92-based systems or their device-independent descendants may well become the gold standard for ultra-secure key distribution in the years to come.