Post-Quantum Cryptography (PQC) and Network Connectivity: Challenges and Impacts

[Updated in July 2025]
Post-Quantum Cryptography (PQC) promises future-proof security against quantum adversaries, but its adoption comes with practical challenges in real-world networks. Many PQC algorithms have significantly larger keys, ciphertexts, and signatures than current cryptography, which can strain network links and devices.
PQC Algorithms and Networking Overhead
Bigger Keys and Messages
A defining characteristic of many PQC algorithms is their increased data size. For example, NIST's chosen lattice-based schemes have much larger public keys and ciphertexts than traditional RSA or ECC. The CRYSTALS-Kyber key encapsulation mechanism (KEM) uses public keys and ciphertexts of roughly 0.8-1.2 KB for the commonly used parameter sets, and post-quantum digital signatures like CRYSTALS-Dilithium are ~2-3 KB, compared to a few tens of bytes for classical ECDH key shares or a few hundred bytes for RSA signatures. While these sizes are manageable in web applications, they pose challenges in constrained environments.
Each PQC handshake in protocols like TLS or IKE transmits more data – potentially multiple kilobytes – which can increase packet count or cause IP fragmentation.
Additionally, some alternative PQC schemes are even larger: code-based KEMs such as Classic McEliece have public keys of hundreds of kilobytes, and the hash-based SPHINCS+ signature scheme produces signatures of roughly 8-17 KB at the 128-bit security level – orders of magnitude above typical TLS certificate components.
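To make the overhead concrete, the following back-of-the-envelope sketch (Python, using approximate published sizes for common parameter sets; exact numbers vary by implementation and protocol framing, and certificates are excluded) tallies the raw cryptographic material exchanged in a classical versus a post-quantum TLS-style handshake:

```python
# Back-of-the-envelope comparison of the raw cryptographic material in a
# classical vs. post-quantum TLS-style handshake. Sizes in bytes are
# approximate published figures for common parameter sets (assumptions for
# illustration only; certificates and framing overhead are excluded).

classical = {
    "X25519 key share": 32,
    "ECDSA P-256 signature (CertificateVerify)": 72,
}

post_quantum = {
    "Kyber-768 (ML-KEM-768) public key": 1184,   # client -> server
    "Kyber-768 (ML-KEM-768) ciphertext": 1088,   # server -> client
    "Dilithium2 (ML-DSA-44) signature": 2420,    # CertificateVerify
}

print("classical handshake material   :", sum(classical.values()), "bytes")
print("post-quantum handshake material:", sum(post_quantum.values()), "bytes")
# Roughly 100 bytes vs. ~4.7 KB of cryptographic material alone - several
# extra packets per handshake before certificate chains are even counted.
```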
Performance Impact
Transmitting larger cryptographic blobs can slow down connection setup. Each extra kilobyte in a handshake adds latency and overhead. Prior measurements have indicated that an additional 1 KB in a TLS handshake (due to a PQC key) can increase response time by about 1.5%. That may sound minor, but it compounds across multiple handshake messages and can be noticeable for latency-sensitive applications (voice, video, real-time control).
If a post-quantum handshake spans several IP packets, it also increases the chance that packet loss or reordering affects the handshake, requiring retransmissions. Under ideal network conditions, studies show the total page-load or transaction-time impact of PQC is modest – often below 15%, and frequently under 5%, for typical payload sizes on stable, high-bandwidth links.
However, unstable or low-quality networks amplify the effect: in one experiment simulating low bandwidth, adding a PQC KEM and signature slowed the TLS 1.3 handshake by ~32%, and in lossy networks the slowdown was even greater. Essentially, PQC algorithms introduce “data-heavy” handshakes that are more sensitive to network speed and reliability than classical handshakes.
Computational Load
Besides larger size, some PQC algorithms demand more CPU or memory, particularly on constrained devices. Most lattice-based schemes are optimized for speed and can be as fast as (or even faster than) RSA/ECDSA on server-grade hardware. But memory footprint and processing can be problematic on microcontrollers or older hardware. For example, baseband units, IoT gateways, and customer-premises devices with limited CPU or RAM may struggle with the math (e.g. polynomial multiplications) and the buffering of bigger keys.
High-throughput routers might need hardware accelerators (FPGA/ASIC) to handle lattice ops at line rate. While computation is not a “link” issue per se, it becomes a connectivity issue if crypto processing causes handshake timeouts or if devices simply cannot complete the negotiation in a timely manner.
Impacts on Different Network Types
PQC's viability can vary greatly with network category and quality. Let's examine several scenarios, from robust networks to extreme edge cases, highlighting where PQC may cause communication issues or even fail altogether.
Broadband and High-Bandwidth Networks (Ethernet/Wi-Fi)
On wired Ethernet, fiber, or modern Wi-Fi networks, which offer high bandwidth and low latency, PQC handshakes generally have minimal impact on communication quality. A few extra kilobytes of handshake data can be transmitted in a few milliseconds on a broadband link.
Studies confirm that on stable, high-capacity networks, the time-to-last-byte increase due to PQC is often under 5% for typical web payloads. For instance, Cloudflare's early deployment of hybrid post-quantum TLS reported that most connections showed no perceptible slowdown, as these networks easily absorb the larger key exchange. There is little risk to basic connectivity on such links; packets aren't likely to be dropped purely for being larger (Ethernet MTUs handle ~1500 B or more, and TCP will segment the handshake data across multiple packets as needed). As long as endpoints and middleboxes support the new algorithms, communications proceed normally.
We typically do not expect outright failures on broadband networks due to PQC – only a slight performance hit in handshake latency or CPU usage.
One caveat is ensuring that networking equipment and software are updated for PQC support. Even in an enterprise LAN or data center, an outdated firewall, proxy, or IDS appliance might choke on unusual handshake parameters. Cases have been observed where middleboxes got “confused” by larger post-quantum keys or cipher suite identifiers, resulting in connection failures. Thus, even in otherwise robust networks, protocol compatibility needs testing. But with proper updates (many vendors are adding PQC support to TLS stacks, VPNs, etc.), standard Ethernet/Wi-Fi networks can handle PQC traffic without issue.
In summary, high-bandwidth networks face minimal risk of communication breakdown with PQC, though admins should verify that all devices on the path accept the bigger handshakes and certificates.
Cellular Networks (4G/5G Mobile)
Cellular networks present a more nuanced picture. Modern 4G LTE and 5G networks have decent bandwidth (tens or hundreds of Mbps) and latencies in the tens of milliseconds, so in theory they can carry PQC handshakes comfortably. Indeed, early trials are promising: for example, Japan’s SoftBank pilot of a hybrid post-quantum VPN over live 4G/5G showed only marginal added latency, demonstrating that careful integration can secure traffic without degrading performance.
Likewise, a recent 5G study found that using NIST’s chosen lattice KEM (Kyber) with Dilithium signatures offered the best efficiency for latency-sensitive apps, essentially keeping handshake delays comparable to classical cryptography. In other words, a smartphone or 5G user equipment can complete a PQC TLS handshake in on the order of a few tens of milliseconds when signal quality is good.
However, cellular links can be variable and constrained in certain respects:
Higher Latency & Loss
Cellular networks, especially at cell edges or under load, may have higher jitter, occasional packet loss, and greater round-trip times than wired links. PQC’s larger handshake can exacerbate these conditions. If a handshake packet (like a big Certificate or key message) is lost, the retransmission on a high-latency link adds significant delay.
Research has shown that under unstable or lossy conditions, post-quantum handshakes suffer proportionally more delay than classical ones – in one experiment, a lossy 5G channel made PQC handshakes 6-8× slower than in a no-loss scenario. This means a VPN or TLS setup over a weak cellular signal might time out or take noticeably longer if using heavy PQC algorithms.
Fragmentation and MTU Issues
Cellular networks (and the Internet paths they traverse) sometimes impose stricter MTU limits or may drop fragmented packets. A big concern is for protocols that run over UDP (such as VPN/IKE or QUIC): if a PQC handshake message exceeds the typical UDP packet size (~1200-1500 bytes), IP fragmentation occurs. Many mobile carriers employ NAT and firewall policies that discard fragmented UDP traffic for security, which can outright break a connection attempt. This is a real issue observed with post-quantum VPN trials. For example, in IPsec/IKEv2, the PQC public keys were so large they could not fit in the initial message without fragmentation. To address this, the IETF introduced extensions: IKEv2 message fragmentation and the IKE_INTERMEDIATE exchange (RFC 7383 and RFC 9242, respectively), which allow splitting the post-quantum key material across multiple smaller messages. If a VPN appliance doesn't implement these, a 5G phone using PQC IKE might simply never complete the handshake because the fragments get dropped en route.
Thus, without protocol-level fragmentation support, PQC could “not work” over some cellular paths. The mitigation is to ensure VPN servers/clients use the new standards to avoid IP-layer fragmentation (e.g., enable IKEv2 fragmentation and multiple key exchanges).
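As a rough illustration of the fragmentation arithmetic, the sketch below (Python; the MTU, header, and IKE-overhead figures are illustrative assumptions rather than exact IKEv2 framing) estimates how many protocol-level fragments a given key-exchange payload would need on a conservative mobile path:

```python
import math

# Estimate whether a key-exchange payload fits in a single UDP datagram on a
# conservative mobile path, and how many protocol-level fragments it would
# otherwise need. MTU, header, and IKE-overhead figures are illustrative
# assumptions, not exact IKEv2 framing.

PATH_MTU = 1400          # conservative end-to-end MTU often assumed on mobile paths
IP_UDP_HEADERS = 28      # IPv4 (20) + UDP (8)
IKE_OVERHEAD = 200       # rough allowance for IKE header, SA, nonce, other payloads

payloads = {
    "ECDH P-256 key share": 64,
    "Kyber-768 public key": 1184,
    "Classic McEliece public key": 261_120,   # extreme case, shown for scale
}

room = PATH_MTU - IP_UDP_HEADERS - IKE_OVERHEAD
for name, size in payloads.items():
    fragments = math.ceil(size / room)
    verdict = "fits in one datagram" if fragments == 1 else f"needs ~{fragments} fragments"
    print(f"{name:30s} {size:>8d} B -> {verdict}")
```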
Latency-Sensitive Applications
5G networks promise ultra-reliable low-latency communication (URLLC) for applications like AR/VR, industrial control, etc., targeting end-to-end latencies under 20 ms. In such cases, even the slight added delay of a PQC handshake could be problematic.
A measured scenario found that using a bulky hash-based signature (SPHINCS+) in a 5G TLS handshake drove latency to ~140 ms, more than 3× a lattice-based alternative, which would miss the latency budget for URLLC apps. This indicates that while normal consumer traffic (web browsing, video) will be fine with PQC, certain real-time services over cellular might need special consideration (e.g. performing handshakes in advance, using faster algorithms, or sticking to classical crypto if quantum threat is not imminent for that session).
In summary, cellular networks can generally handle PQC, but edge cases exist. VPN over cellular must be configured to avoid fragmentation issues, and network operators should test how PQC affects call setup times or control-plane exchanges.
Encouragingly, telcos and vendors are actively working on this: 3GPP is studying PQC for future releases, and telecom equipment providers are adding support in base stations and core network software. Early experiments (SK Telecom, SoftBank, etc.) suggest that with optimized algorithms and hybrid modes, PQC can be added to mobile networks with negligible impact on user experience. The highest risk is in misconfiguration or legacy support – e.g., older 3G networks or poorly configured carrier NATs that can’t deal with large handshake packets.
Low-Power IoT and Constrained Networks (LPWAN)
The most challenging environment for PQC is the realm of Low-Power Wide Area Networks (LPWANs) and other highly constrained wireless links used for IoT. Technologies like LoRaWAN, Sigfox, NB-IoT, LTE-M, Zigbee/802.15.4, and BLE prioritize minimal power usage and have very limited payload sizes and duty cycles. These networks were never designed with multi-kilobyte handshakes in mind, and indeed researchers are warning that current PQC algorithms are “unusable in many IoT systems using constrained radio networks” due to message size alone.
Consider LoRaWAN as an illustrative case:
- In LoRaWAN, a single frame payload is extremely small (often 51 bytes in EU bands, and only 11 bytes in US bands for application data after headers). Furthermore, LoRaWAN devices typically operate under a duty cycle of 1% or less, meaning a node can only transmit for, e.g., 36 seconds per hour in total. A post-quantum handshake requiring, say, a 1.5 KB key exchange plus a 2 KB certificate would need to be fragmented into dozens of LoRaWAN frames. With mandatory delays between frames, completing such a handshake could literally take hours and burn an excessive amount of the device's energy budget (see the rough calculation below). The completion time would be "completely unacceptable", if it succeeds at all. Any packet loss would require retransmissions, further compounding delay due to back-off and duty cycle limits. Essentially, standard PQC handshakes simply cannot operate within LoRaWAN's constraints under normal conditions.
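A rough calculation of the LoRaWAN case (Python; the handshake size, per-frame payload, and time-on-air figures are illustrative assumptions for a slow, EU868-style data rate) shows why completion times stretch into hours:

```python
import math

# Rough estimate of pushing a ~3.5 KB post-quantum handshake payload over
# LoRaWAN under a 1% duty cycle. Payload-per-frame and time-on-air figures
# are illustrative assumptions for a slow (SF12, EU868-style) data rate.

handshake_bytes = 3500        # e.g. ~1.5 KB key exchange + ~2 KB certificate
payload_per_frame = 51        # max application payload at the slowest EU868 rates
airtime_per_frame_s = 2.0     # rough time-on-air for a full frame at SF12
duty_cycle = 0.01             # 1% duty cycle -> wait ~99x the airtime between frames

frames = math.ceil(handshake_bytes / payload_per_frame)
wait_between_frames_s = airtime_per_frame_s * (1 / duty_cycle - 1)
total_s = frames * airtime_per_frame_s + (frames - 1) * wait_between_frames_s

print(f"frames needed: {frames}")
print(f"total time   : {total_s / 3600:.1f} hours (assuming zero packet loss)")
```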
Other LPWANs face similar issues:
- 6TiSCH (IEEE 802.15.4 mesh) networks have a maximum frame size of 127 bytes, but in a multi-hop scenario the net payload per packet may be ~45 bytes. A large handshake would flood the mesh with many fragments, potentially causing network formation to stall.
- Sigfox has extremely low data rates and tiny uplink messages (12 bytes payload) – sending kilobytes over it is impractical.
- NB-IoT (narrowband IoT) can carry more data than LoRa/Sigfox, and it doesn’t use fixed small frames; however, it is still low-bandwidth (tens of kbps) and optimized for infrequent, small transmissions. Larger messages significantly increase transmission time and energy usage on NB-IoT devices. A multi-kilobyte handshake might keep the radio on much longer, draining battery. NB-IoT also typically has higher latency scheduling, so extra round trips hurt responsiveness.
In summary, today’s PQC algorithms would severely impact or even preclude communication in many LPWAN/IoT contexts:
- The risk to communication is highest here – devices might not complete the security handshake or might violate duty cycle/regulatory limits trying, effectively knocking themselves offline.
- Even if eventually successful, the latency and power cost would negate the purpose of a low-power network (imagine a sensor taking minutes to establish a secure session before it can send a 20-byte reading).
Edge IoT Cases – Mitigate or Defer
For these scenarios, experts suggest a few approaches. One is to delay adoption of PQC on such links until absolutely necessary – continuing to use lightweight ECC (which fits in tens of bytes) for the coming years, given that cryptographically relevant quantum computers (CRQCs) capable of mounting effective attacks may be at least a decade away. The rationale is that we should "allow ECC until the risk of CRQCs is imminent" while pushing cryptographers to design smaller-footprint PQC suitable for IoT.
Another approach is to use non-interactive or one-way schemes if available, to cut down message exchanges. (Classical Diffie-Hellman had a non-interactive property in some IoT uses – a device could publish a static public key. Most PQC KEMs don't support a true static one-message mode without some trust assumptions, and post-quantum non-interactive key exchange (NIKE) schemes are still an open research topic.)
In practical terms, if PQC absolutely must be used in IoT, some mitigations include:
- Pre-distribution of keys: For example, provision each device with a post-quantum symmetric key or a PQC keypair at manufacturing time, so that frequent heavy handshakes are unnecessary. This doesn’t solve long-term post-quantum security (keys embedded could be vulnerable if not updated), but it reduces on-air overhead.
- Gateway offloading: Use an edge gateway or base station as a crypto terminator. The IoT device communicates with the gateway via existing lightweight encryption (perhaps within a local trust zone), and the gateway, which has more power and a better link to the cloud, handles the PQC handshake upstream. This introduces trust in the gateway but can bridge ultra-constrained devices to PQC-enabled networks.
- Fragmentation and profile tuning: If an IoT protocol can be updated, define a profile for PQC that uses tailored fragmentation, compression, or smaller parameter sets. For instance, LoRaWAN could in theory fragment a large message over many packets (LoRaWAN has a packet fragmentation specification for firmware updates). Testing would be needed to see if any PQC handshake can be made to fit within realistic wait times – likely only feasible for one-time provisioning, not frequent rekeys.
- Alternate cryptography: Explore whether alternate PQC candidates (not yet standardized) with smaller artifacts could be used for IoT. Some algorithms that were NIST candidates (SIKE, etc.) had small ciphertexts but were broken; still, the IoT need may spur new designs. Also, stateful hash-based signatures (XMSS, etc.) have small signatures if you only need one per device, but managing their one-time key usage is complex for IoT.
In essence, LPWAN and extreme IoT networks are the “edge cases” most at risk of PQC adoption impacts. Network engineers should be aware that a blanket mandate to “turn on PQC everywhere” could unintentionally cut off low-power sensors. Careful planning, specialized protocols, or delaying PQC for those links will be necessary to avoid communication breakdown.
Satellite and High-Latency Links
Long-distance satellite links (e.g. GEO satellites, deep-space communication) and other high-latency networks deserve a brief mention. These links often have round-trip latencies from 100 ms up to several seconds, and limited bandwidth. A larger handshake doesn't fundamentally break anything here, but it does extend the connection setup time. For instance, if a TLS handshake normally takes 2 RTTs (roughly a second over a GEO satellite, where each RTT is on the order of 500-600 ms), adding a few kilobytes might add an extra RTT if fragmentation occurs, pushing setup to 3 RTTs and approaching two seconds.
If packets need to be retransmitted (satellite links can have non-trivial loss), the impact of resending a large PQC message is higher – each lost handshake packet might cost half a second or more in delay. Thus, satellite comms could see noticeable increases in handshake time, though once the connection is up, the relative overhead on bulk data transfer is minor.
One specific risk is if any satellite modems or links impose a very small MTU or use datagrams that can’t handle larger sizes. If so, the same fragmentation issue arises. But generally, as long as the protocol (e.g. TCP) handles the splitting, PQC will still “work” over satellite, just a bit slower. The main lesson for high-latency networks is similar to lossy networks: test the timeouts and ensure the handshake parameters (like TCP SYN retries, etc.) are tuned so that a slower PQC handshake isn’t prematurely aborted by the application. For critical systems, increasing initial handshake timeout thresholds might be prudent when PQC is enabled.
Legacy and Heterogeneous Network Elements
A final category of “connectivity” dependency lies not with the link layer, but with legacy network elements and middleboxes scattered in the path. Corporate networks and ISPs often have many devices (firewalls, load balancers, NAT routers, intrusion detection systems, old VPN concentrators, etc.) that inspect or transform traffic. Many of these expect certain sizes and patterns for handshakes. Introducing PQC can expose hidden bugs or limits:
- A firewall might have a fixed buffer or rule expecting TLS ClientHello messages below a certain size, and drop ones that are larger once PQC ciphersuites are included.
- Middleboxes performing deep packet inspection might not recognize the new cipher identifiers and could terminate the connection attempt.
- Older VPN servers might not support the new algorithms at all, or might mis-handle certificates that contain post-quantum public keys (which can be several KB in X.509 encoding). As an example, some standard X.509 fields and certificate management protocols needed extensions to accommodate the larger PQC keys and hybrid certificates. If not updated, systems could reject these certificates as malformed.
Real incidents have been documented: Google and others found that some proxies would “break when faced with unexpectedly large keys or novel cryptographic parameters,” causing traffic to fail. Cloudflare similarly warned that “a lot of buggy code” in middleboxes might make a post-quantum TLS connection fail for reasons like a middlebox being confused by larger keys. This is not so much a bandwidth problem as a protocol compliance problem, but it directly affects connectivity – the connection can’t be established until the offending box is fixed or bypassed.
The mitigation here is extensive compatibility testing in heterogeneous environments. Enterprise network engineers should test PQC-enabled connections across their infrastructure (including through VPNs, proxies, etc.) and work with vendors to patch any non-compliant gear. In some cases, enabling crypto-agility features can help – for example, TLS 1.3 is more adaptable and simpler than TLS 1.2, so using TLS 1.3 (which most middleboxes by now handle) with PQC is better than trying to retrofit PQC into old TLS 1.2 sessions. Likewise, enabling only the most widely supported PQC algorithms (e.g. Kyber and Dilithium which are becoming standards) and avoiding uncommon or very large algorithms can reduce surprises.
In summary, the network ecosystem must be prepared. Even if the raw link can carry PQC traffic, one non-upgraded component in the path can block the connection. Thus, the dependency of PQC on connectivity extends to requiring a “clean” path that is quantum-safe aware.
Testing PQC Over Networks
Robust testing is crucial to identify and address the issues discussed. Both cryptography experts and network engineers will need to collaborate on testing plans that cover a variety of scenarios:
Lab and Field Trials: Emulate the network conditions of interest and measure PQC performance. For example, use a network simulator or emulator (like a 5G testbed or a Wi-Fi lab setup) combined with PQC-enabled software (OpenSSL with liboqs, hybrid TLS libraries, PQC-capable VPN software, etc.). Measure handshake latency, success rate, retransmissions, CPU load, and throughput for different algorithms. In a 5G emulator study, researchers did exactly this – using a 5G core (Open5GS) and a UE simulator with TLS libraries supporting PQC, they gathered metrics under varying client loads and radio conditions. Similar tests can be run for Wi-Fi vs Ethernet, 4G vs 5G, etc., to see how PQC handshakes behave; a minimal handshake-timing sketch appears at the end of this section.
Loss and Latency Injection: Use tools to introduce packet loss, latency, and jitter to mimic poor networks (e.g. a cellular edge or a satellite hop). Observe how many handshakes fail or how long they take with and without PQC. As noted, studies show a greater impact on handshake time in volatile networks. By quantifying this, one can decide if timeouts need adjusting or if alternate algorithms are required for such conditions.
IoT Device Testing: For constrained devices, prototype what’s feasible. If you have an IoT node and gateway, attempt a scaled-down PQC exchange – maybe using the smallest parameter sets (e.g. a smaller ring-LWE variant or a one-time signature). Measure energy consumed and time taken. For example, an IoT-focused experiment might try using Dilithium in a LoRaWAN join procedure and count how many fragments are needed and if any drop out. Even if it fails, that data is valuable to guide standard bodies on what sizes are utterly impractical. (The Ericsson team paper already provides calculations and arguments that current PQC sizes are way beyond LPWAN limits.)
Compatibility Testing: Set up test connections through various network devices – different models of VPN concentrators, routers, firewalls – to ensure they permit PQC handshakes. Cloudflare’s public beta in 2022 was an example of this at Internet scale: by enabling hybrid PQC for real websites, they collected reports of handshake failures caused by middleboxes and worked on fixes. Enterprises can do their own smaller-scale version by enabling PQC in a staging environment and seeing if any internal tools break. It’s important to include older and unusual network paths in these tests (e.g. a remote user on a 3G network VPN’ing into the office) to catch edge problems.
Test Metrics: Key metrics to monitor include:
- Handshake completion rate (how often do PQC handshakes succeed vs classical under identical conditions).
- Handshake duration (time to establish secure session, possibly broken down by network wait vs computation time).
- Data overhead (bytes sent in handshake, number of packets or fragments).
- CPU and memory usage on endpoints (to identify if, say, a router’s CPU spikes handling a PQC handshake).
- Application-level impact (does the application using the connection experience timeouts or user-visible delays).
- Power consumption (for battery-powered devices, measure if PQC significantly drains battery during connection setup).
By comparing these metrics with and without PQC, one can pinpoint where the biggest gaps or pain points are.
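As a starting point for the handshake-duration metric, the following minimal sketch times TLS handshakes using only Python's standard ssl module. The HOST and PORT values are placeholders, and comparing classical against post-quantum or hybrid key exchange additionally requires endpoints whose TLS stack has the relevant PQC groups enabled, which is environment-specific and not shown here:

```python
import socket
import ssl
import statistics
import time

# Minimal handshake-timing sketch using only the standard library.
# HOST and PORT are placeholders; to compare classical vs. post-quantum or
# hybrid key exchange, point this at endpoints whose TLS stack has the
# relevant PQC groups enabled (environment-specific, not shown here).

HOST, PORT, SAMPLES = "pqc-test.example.internal", 443, 20

def handshake_seconds(host: str, port: int) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw:
        start = time.perf_counter()
        with ctx.wrap_socket(raw, server_hostname=host):
            # The TLS handshake completes inside wrap_socket(), so the
            # elapsed time here isolates handshake cost from TCP setup.
            return time.perf_counter() - start

samples = sorted(handshake_seconds(HOST, PORT) for _ in range(SAMPLES))
print(f"median handshake : {statistics.median(samples) * 1000:.1f} ms")
print(f"slowest handshake: {samples[-1] * 1000:.1f} ms")
```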
Mitigation Strategies and Solutions
After identifying potential problems, there are several strategies to ensure a smooth rollout of PQC without compromising connectivity or performance:
Algorithm Selection
Not all PQC algorithms are equal. For instance, lattice-based schemes like Kyber (for key exchange) and Dilithium or Falcon (for signatures) hit a sweet spot of strong security with relatively moderate sizes. By contrast, SIKE offered very small keys but has been broken, code-based KEMs such as Classic McEliece remain unbroken but carry public keys of hundreds of kilobytes, and hash-based signatures like SPHINCS+ are extremely large (making them unsuitable for time-sensitive scenarios).
Within the choices available, prefer the algorithms that minimize data and computational load for a given security level. If you don’t need the highest security level for a use case, using a lower parameter set (e.g. Kyber-512 instead of Kyber-768) can cut sizes further.
In summary, choose the most efficient PQC algorithm that meets your security requirement, especially for constrained links.
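To illustrate how the signature choice alone drives certificate-chain size, the sketch below compares approximate per-certificate bytes (one public key plus one signature) for several schemes; all figures are approximate published sizes used purely for illustration:

```python
# Approximate bytes added to a certificate chain by the signature scheme
# alone (one public key plus one signature per certificate). Figures are
# approximate published sizes, used only to illustrate why algorithm choice
# matters on constrained links.

schemes = {
    "ECDSA P-256":            {"public_key": 64,   "signature": 72},
    "Falcon-512":             {"public_key": 897,  "signature": 666},
    "Dilithium2 (ML-DSA-44)": {"public_key": 1312, "signature": 2420},
    "SPHINCS+-128s":          {"public_key": 32,   "signature": 7856},
}

CHAIN_DEPTH = 2  # e.g. leaf + intermediate certificate

for name, s in schemes.items():
    per_cert = s["public_key"] + s["signature"]
    print(f"{name:24s} ~{per_cert:>5d} B per cert, "
          f"~{per_cert * CHAIN_DEPTH:>6d} B for a {CHAIN_DEPTH}-cert chain")
```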
Using Hybrid Modes Wisely
During the transition, hybrid cryptography (combining classical and PQC) is common for safety. But hybrid means even more data (dual key exchanges, dual certs). On a constrained link, consider whether hybrid is necessary or if a pure PQC mode can suffice for that application (with the understanding of some risk if PQC is later found weak). Some IoT deployments might opt for pure ECC now and plan a jump to pure PQC later, skipping hybrid due to overhead.
Alternatively, if using hybrid, see if one component can be kept minimal – e.g. use a lightweight classical algorithm (Curve25519) combined with a PQC KEM, rather than RSA+PQC which would balloon sizes. Standards like IKEv2’s multi-key exchange allow flexibility to negotiate hybrids; use those that make sense and turn off ones that add too much overhead for your link.
Protocol & Implementation Tuning
Ensure that protocols are configured to handle large handshakes:
- Enable IKEv2 fragmentation and intermediate exchanges for IPsec VPNs.
- In TLS, if using DTLS (UDP-based TLS for, say, VoIP or IoT), tune the maximum handshake message size to avoid IP fragmentation – DTLS can fragment handshake messages at the handshake layer, and certificate chain sizes can also be limited.
- Update any hardcoded buffer sizes in applications (some apps had fixed limits that a PQC certificate could exceed).
- Increase initial timeout thresholds for handshakes on high-latency links so they don't abort too quickly. Similarly, allow for a slight increase in retransmission counts for the first flight of packets.
- Use session resumption or connection reuse to avoid frequent full handshakes. For example, a client could do one heavy PQC handshake to establish a TLS session, then resume that session for subsequent connections (TLS 1.3 tickets or 0-RTT data can help here); a minimal resumption sketch follows this list.
- Use caching of PQC credentials when possible. If a device verifies a server's post-quantum certificate once, cache that result so it doesn't need to download a large certificate chain again next time.
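A minimal sketch of the session-resumption idea, using Python's standard ssl module with placeholder host/port values. Note that TLS 1.3 delivers session tickets after the handshake, so exact resumption behavior depends on the TLS stack and server configuration:

```python
import socket
import ssl

# Minimal sketch of TLS session reuse with Python's standard ssl module:
# pay the full (potentially PQC-heavy) handshake once, then resume.
# HOST/PORT are placeholders; the server must support session tickets, and
# with TLS 1.3 the ticket arrives after the handshake, so behavior varies
# by TLS stack and server configuration.

HOST, PORT = "pqc-test.example.internal", 443
ctx = ssl.create_default_context()

# First connection: full handshake; keep the resulting session object.
with socket.create_connection((HOST, PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        saved_session = tls.session

# Later connection: offer the saved session so the server can resume it and
# skip the heavyweight key-exchange/certificate exchange when accepted.
with socket.create_connection((HOST, PORT)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST, session=saved_session) as tls:
        print("session resumed:", tls.session_reused)
```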
Infrastructure Upgrade
Work with vendors to upgrade firmware and software across the network:
- Firewalls and proxies should be made crypto-agile – able to let new cipher suites through and handle larger packets.
- VPN and SD-WAN devices need support for the new standards (IKE RFCs, hybrid modes). For instance, ensure that any IPsec equipment supports RFC 9370 (multiple key exchanges in IKEv2) and related extensions such as RFC 8784 (post-quantum preshared keys) so that your site-to-site tunnels won't break when PQC is turned on.
- IoT gateways and base stations should be evaluated – some may need memory upgrades or new crypto chips to perform PQC within acceptable time. If not, plan to replace or augment them (some vendors are creating quantum-safe VPN software that can be retrofitted to existing routers as an interim solution).
- Increase link MTUs if possible on local networks to accommodate bigger packets (e.g. use jumbo frames internally), which can reduce fragmentation likelihood.
Crypto-Agility and Fallback
Incorporate crypto-agility in design. This means having the ability to disable or swap out an algorithm quickly if it proves problematic. For example, if a particular PQC algorithm is causing handshake failures on a certain network segment, you should be able to configure clients/servers to try a different algorithm or revert to classical for that segment while the issue is resolved. Ensuring that systems support multiple PQC options (and combinations) gives flexibility.
As a safety net, monitor performance and have a rollback plan when first enabling PQC on critical services.
Lower-Layer Forward Error Correction
An unconventional but possible mitigation for lossy low-power links is to add error correction or redundancy for important handshake packets. If a PQC handshake absolutely must be done over an unreliable link, using FEC at the application layer to send redundant pieces might help avoid costly retransmissions. This is more of a research idea – essentially trade a bit more upfront data for resilience.
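As a toy illustration of that idea (assuming equal-length fragments; a real deployment would use a proper erasure/FEC code), the sketch below adds one XOR parity fragment so that a single lost handshake fragment can be rebuilt without a retransmission:

```python
from functools import reduce

# Toy XOR-parity FEC: send N handshake fragments plus one parity fragment so
# that any single lost fragment can be rebuilt without a retransmission
# round trip. Fragments are assumed to be padded to equal length; a real
# deployment would use a proper erasure code.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def parity(fragments: list) -> bytes:
    return reduce(xor_bytes, fragments)

# Sender side: three equal-length fragments plus one parity fragment.
fragments = [b"fragment-0", b"fragment-1", b"fragment-2"]
fec = parity(fragments)

# Receiver side: fragment 1 was lost; XOR-ing the parity with the surviving
# fragments reconstructs it immediately.
survivors = [fragments[0], fragments[2]]
print(parity(survivors + [fec]))  # -> b'fragment-1'
```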
Staged Deployment & Monitoring
Deploy PQC gradually and monitor. Start with non-critical links or a subset of users. Use deep monitoring to catch if timeouts or failures spike. Cloudflare’s approach of enabling PQC by default but observing fallback rates is instructive – they could detect if many connections fell back to classical, indicating a problem in some network path. Telcos similarly are advised to do “extensive testing, given the mission-critical nature of telecom services,” possibly running PQC in parallel (dual-stack) before switching fully.
Community and Standardization Efforts
Finally, engage with the broader community. The issues of PQC in constrained networks are known, and bodies like the IETF, 3GPP, and GSMA are actively discussing them. By participating in these discussions or at least following the guidance, network professionals can adopt recommended best practices. For example, the UK’s NCSC guidance explicitly calls out avoiding IP-layer fragmentation in IKEv2 as a requirement for quantum-safe VPNs, prompting use of IKE fragmentation mechanisms in implementations. Following such guidance will prevent many connectivity pitfalls.
Conclusion
PQC brings new dependencies between cryptography and network connectivity. Unlike the relatively small and efficient crypto of the past, post-quantum algorithms force us to consider link capacity, latency, and device limitations as first-class concerns in security design.
Some network environments – particularly low-power and low-bandwidth links – will face significant challenges in a post-quantum migration, potentially impacting communication reliability. Other environments, like typical broadband and even 5G, will see smaller performance hits but still require careful integration to avoid edge-case failures (like those due to fragmentation or unprepared middleware).
The good news is that with proactive planning and testing, these challenges are surmountable. By understanding the limitations (e.g. why a PQC VPN handshake might falter on a weak cellular link, or why a sensor network can't just "turn on" quantum-safe TLS), we can devise strategies to mitigate issues – whether through smarter protocol design, algorithm choices, or interim measures for constrained cases. Both network engineers and cryptographers will need to collaborate: engineers to ensure the pipes and devices can handle the new crypto, and cryptographers to refine algorithms, where possible, for real-world constraints (smaller keys, one-round protocols, etc.).