Pick One Layer: How to Choose the Post-Quantum Migration That Protects the Most
Introduction
Most organizations treat PQC migration as an all-or-nothing infrastructure overhaul. Inventory everything, plan everything, migrate everything, validate everything. The scope is paralyzing, and the result is predictable: most teams are still in the planning phase while their encrypted traffic accumulates on adversary storage arrays.
Recent research from Cisco gives us one of the clearest formal treatments of a principle many practitioners already suspected: in a nested encryption stack, one quantum-safe confidentiality layer can be enough to protect the payload, even if every other layer remains vulnerable to a cryptographically relevant quantum computer (CRQC). An adversary who breaks those outer layers still hits a wall at the quantum-safe layer. The data stays encrypted. Metadata protection, however, depends on where the quantum-safe layer sits: only the outermost layer’s status determines what connection metadata is exposed.
This finding transforms the migration question. Instead of “how do we migrate everything,” the first question becomes “which single layer gives us the most protection for the least effort?” That is an answerable question, and the answer differs by architecture. The goal of this article is to walk through six common enterprise scenarios and show how to identify the best first move in each one. The Cisco paper’s own case studies cover consumer and prosumer stacks (iMessage, HTTPS over Wi-Fi, WireGuard VPN); the six enterprise scenarios below are my application of the same composition rule to the architectures that PostQuantum.com’s readers actually operate.
Two important caveats before we proceed. First, one layer handles confidentiality only. Authentication follows the opposite rule: every layer with public-key authentication must migrate for quantum-safe authentication. I will address the authentication challenge separately in each scenario. Second, “pick one layer” is a starting strategy, not an end state. Complete PQC migration remains the destination. But getting one layer migrated today is categorically better than having zero layers migrated while you plan a comprehensive program. The PQC Migration Framework is designed for exactly this kind of phased approach.
The Decision Framework
Before examining specific architectures, it helps to understand what factors determine which layer is the best first candidate.
Centralization of control matters most. A layer where one configuration change protects traffic for thousands of users delivers more value per unit of effort than a layer that requires changes on every endpoint. A reverse proxy, a VPN concentrator, or a service mesh control plane can apply PQC to all traffic flowing through it.
Scope of protection determines what you actually defend. Migrating the outermost layer (the network tunnel or Wi-Fi encryption) protects both payload and metadata, including connection patterns, destination addresses, and traffic volumes. Migrating an inner layer (application-layer encryption or TLS) protects the payload but leaves metadata exposed. Both are valid, depending on your threat model.
Operational feasibility is the constraint that separates theory from practice. Some layers are controlled by your team and can be updated in weeks. Others depend on vendor software, protocol standards, hardware refresh cycles, or ecosystem coordination that could take years. The best first layer is often the one you can actually change.
Persistence of existing protection affects urgency. If a layer already uses purely symmetric cryptography (pre-shared keys, AES-256 with no public-key key exchange), that layer has no Shor-class vulnerability and can usually be deferred relative to public-key layers. A caution: the Cisco paper classifies AES-128 as Q-Unsafe because Grover’s algorithm reduces it to 64-bit effective security, at the threshold of classical feasibility. Symmetric deferral assumes adequate key lengths (AES-256-class encryption and modern MACs/KDFs). Layers using RSA, ECDHE, ECDSA, or other public-key operations are the immediate priority.
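The arithmetic behind that caution is simple enough to sketch. Grover’s algorithm gives a quadratic speedup against symmetric keys, roughly halving the effective security in bits, which is why AES-128 lands at 64 bits while AES-256 stays comfortably above the feasibility threshold (the 128-bit cutoff below is the paper’s classification line, applied here illustratively):

```python
# Grover's quadratic speedup: a k-bit symmetric key retains roughly
# k/2 bits of effective security against a quantum adversary.
def grover_effective_bits(key_bits: int) -> int:
    return key_bits // 2

# Classify per the Cisco paper's line: >= 128 effective bits is Q-Safe.
for alg, bits in [("AES-128", 128), ("AES-256", 256)]:
    eff = grover_effective_bits(bits)
    verdict = "Q-Safe" if eff >= 128 else "Q-Unsafe"
    print(f"{alg}: {eff}-bit effective security -> {verdict}")
```

This is why “symmetric-only layer” is not automatically “deferrable layer”: the key length has to survive the halving.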
With these factors in mind, let’s walk through six enterprise architectures.
Scenario 1: Corporate Web Traffic Through a SASE/Proxy Architecture
The stack: User device → TLS 1.3 to SASE edge → SASE tunnel (IPsec or proprietary) to SASE POP → TLS 1.3 to destination web server.
Active cryptographic layers: TLS on the client side, the SASE overlay tunnel, and TLS on the server side. All three typically use ECDHE (often X25519) for key exchange, making all three Q-Unsafe.
The best first layer: The SASE overlay tunnel or client-to-edge TLS.
Most SASE architectures terminate and re-originate TLS at the edge. The overlay tunnel between SASE edge and SASE POP is the segment most directly under IT control. If your SASE provider supports hybrid PQC for the overlay tunnel, enabling it protects all corporate traffic on the WAN segment through a vendor configuration rather than an internal engineering project. Cloudflare has already deployed hybrid PQC key exchange (ML-KEM) across its network and is working toward full post-quantum security including signatures by 2029, with others following.
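Worth being concrete about what “hybrid” buys here. In hybrid key exchange, a classical shared secret and a post-quantum shared secret both feed one key derivation, so the session key stays confidential unless an adversary breaks both components. A minimal sketch of the combination pattern (simplified; real suites such as X25519MLKEM768 run the result through TLS’s full HKDF schedule, and the names here are illustrative):

```python
import hashlib
import os

# Simplified model of hybrid key exchange: the session secret depends on
# BOTH shared secrets, so breaking the classical half alone (via Shor)
# does not recover the key.
def hybrid_secret(classical_ss: bytes, pq_ss: bytes) -> bytes:
    return hashlib.sha256(classical_ss + pq_ss).digest()

ecdhe_ss = os.urandom(32)   # e.g. X25519 output (quantum-recoverable)
mlkem_ss = os.urandom(32)   # e.g. ML-KEM-768 output (quantum-safe)
key = hybrid_secret(ecdhe_ss, mlkem_ss)

# An adversary who recovers only the classical half derives a different key:
wrong = hybrid_secret(ecdhe_ss, os.urandom(32))
print(key != wrong)
```

The same construction also preserves classical security: if the ML-KEM component were ever weakened, the classical half still contributes its full strength.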
The protection this buys depends on which segment gets migrated. For the SASE-to-origin TLS leg (the re-originated connection to the destination web server), PQC protection requires both the SASE platform and the destination server to support hybrid PQC key exchange. The SASE provider cannot unilaterally make that leg quantum-safe if the destination does not support PQC. For destinations that do support PQC TLS, this segment becomes quantum-safe for confidentiality. For destinations that do not, it remains classical.
The client-to-edge segment is more promising for unilateral action: if the SASE agent and edge both support PQC (which the organization controls), that leg can be migrated regardless of what happens on the internet-facing side.
What it does not protect: whichever segments remain on classical TLS still expose metadata to an HNDL adversary. If the outermost layer (user device to SASE edge) uses classical TLS, an adversary capturing traffic on the local network (Wi-Fi, campus LAN) can recover connection metadata: the fact that you are connecting to the SASE provider, connection timing, and traffic volumes. For most enterprise threat models, the WAN segment is the higher-risk capture point, but organizations with nation-state adversary concerns should evaluate each segment separately.
Authentication gap: The TLS certificates on both sides of the SASE edge remain ECDSA or RSA. A quantum adversary could forge server certificates and mount an active man-in-the-middle attack. This requires a CRQC operating in real time (not an HNDL scenario), which makes it a later-phase risk, but it must be addressed before CRQC arrival. Certificate infrastructure migration is the harder, longer project. Start with a cryptographic inventory of your certificate chain now so you understand the scope.
Scenario 2: Cloud-Native Microservices with a Service Mesh
The stack: External client → TLS to API gateway/load balancer → mTLS between services via service mesh (Istio, Linkerd, Consul Connect) → database connections (TLS or application-level encryption).
Active cryptographic layers: Ingress TLS, east-west mTLS (often dozens or hundreds of service-to-service connections), and database encryption.
The best first layer: The service mesh control plane.
Service meshes manage mTLS for all east-west traffic through a centralized control plane (Istio’s istiod, Linkerd’s control plane). The mesh’s sidecar proxies (typically Envoy) handle certificate rotation and cipher suite negotiation. Updating the mesh configuration to use hybrid PQC TLS for all service-to-service communication can protect hundreds of internal connections through a single policy change.
This is a high-leverage move because microservices architectures generate enormous volumes of internal API traffic. Financial calculations, patient records, customer data, proprietary algorithms, all flowing between services with mTLS that currently uses ECDHE key exchange. An HNDL adversary who captures east-west traffic (through a compromised node, a container escape, or a misconfigured network policy) can store it all for quantum decryption. Migrating the mesh to PQC closes that exposure for every service simultaneously.
What it does not protect: ingress traffic from external clients depends on the API gateway’s TLS configuration, which is a separate migration. Database connections are typically separate from the mesh. Both should follow as next steps.
The authentication problem is acute here. Service meshes issue short-lived certificates (often rotating every 24 hours) using ECDSA or RSA. Every mTLS connection is an independent authentication point that a quantum adversary could target. The good news: because certificate issuance is centralized in the mesh CA, the mesh is a high-leverage place to migrate authentication. But ML-DSA (standardized from CRYSTALS-Dilithium) migration is not only a CA setting: the mesh proxies (every Envoy sidecar), certificate profiles, TLS validation stack, identity control plane, and policy layer must all support PQC certificates before this becomes a safe production change. The centralization advantage is real, but the data plane must keep up with the control plane.
Scenario 3: Remote Access VPN
The stack: User device → VPN client → encrypted tunnel (IPsec IKEv2 or WireGuard) to VPN concentrator → corporate network → application servers (TLS or unencrypted).
Active cryptographic layers: VPN tunnel (IKEv2 with ECDHE or WireGuard with Curve25519), and optionally TLS to internal applications.
The best first layer: The VPN tunnel itself.
The Cisco research showed that a standard WireGuard VPN provides zero quantum protection because Curve25519 key exchange is Shor-vulnerable. The same applies to IKEv2 with ECDHE. An adversary capturing encrypted VPN traffic today can decrypt the VPN-protected session once a CRQC can break the classical key exchange, unless an inner application layer provides its own quantum-safe confidentiality. Without that inner protection, browsing destinations, application data, authentication tokens, and internal network topology are all exposed.
Migrating the VPN to PQC transforms it from a quantum liability into a quantum shield. Every application, every protocol, every byte of traffic flowing through the tunnel gains confidentiality protection from a single infrastructure change at the VPN concentrator.
For WireGuard specifically, there is an intermediate option: enabling PSK (pre-shared key) mode. WireGuard’s Noise protocol can mix a 256-bit pre-shared key into the handshake, making the derived symmetric keys independent of Curve25519. Even if the EC discrete log is solved, the PSK contribution keeps the keys secret. This makes the tunnel Q-Safe for confidentiality without waiting for a full protocol replacement. The operational cost is distributing and managing PSKs out of band, which is non-trivial at scale but achievable for a defined set of VPN concentrators.
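The mechanism can be sketched in a few lines. This is a simplified model of Noise-style key mixing, not WireGuard’s actual key schedule, but it shows why the PSK contribution matters: the derived session key depends on both the Curve25519 output and the PSK, so recovering the DH secret alone is not enough.

```python
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

# Simplified model of Noise-style mixing (not WireGuard's exact schedule):
# the session key depends on BOTH the DH output and the PSK.
def session_key(dh_output: bytes, psk: bytes) -> bytes:
    chaining = hkdf_extract(b"noise-chaining-key", dh_output)
    return hkdf_extract(chaining, psk)

dh = os.urandom(32)    # Curve25519 shared secret (Shor-recoverable)
psk = os.urandom(32)   # out-of-band pre-shared key (quantum-safe)
k1 = session_key(dh, psk)

# An adversary who recovers dh but guesses the PSK derives a different key:
k2 = session_key(dh, os.urandom(32))
print(k1 != k2)
```

Operationally, the PSK is set per peer (WireGuard’s `PresharedKey` option), which is why distribution and rotation become the real cost at scale.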
For IPsec IKEv2, RFC 9370 provides a multiple-key-exchange framework, and ML-KEM-for-IKEv2 profiles are progressing through IETF drafts. Several vendors (Cisco, Palo Alto, Fortinet) have implemented or announced hybrid PQC key exchange support. Check your vendor’s current PQC roadmap; this may already be available as a configuration option in your existing deployment.
Authentication gap: Even with PSK mode enabling Q-Safe confidentiality for WireGuard, peer authentication still relies on Curve25519 static keys, which remain Q-Unsafe. A quantum adversary could impersonate the VPN server if they can break the authentication in real time (an active attack), though they could not decrypt past sessions protected by PSK-derived keys. IKEv2 uses RSA or ECDSA certificates for peer authentication, equally Q-Unsafe. Complete VPN quantum safety requires migrating both key exchange (confidentiality) and peer authentication (integrity). Certificate-based VPN authentication shares the same CA infrastructure migration challenge as TLS.
Scenario 4: OT/SCADA Environments
The stack: HMI/SCADA server → industrial protocol (Modbus/TCP, DNP3, OPC UA) → PLC/RTU. Typically wrapped in an IPsec VPN or a dedicated OT network segmented from IT.
Active cryptographic layers: Often just IPsec on the network boundary. Many industrial protocols run unencrypted or with fixed symmetric authentication (DNP3 Secure Authentication uses HMAC-SHA-256). Some deployments have no cryptographic layers at all.
The best first layer: The OT network boundary IPsec gateway.
OT environments present a different category of challenge. Industrial protocols like Modbus/TCP were designed without cryptography (though the Modbus Organization now publishes a security extension using TLS). DNP3 Secure Authentication added HMAC-SHA-256, which is symmetric and classified as Q-Safe in the Cisco paper’s taxonomy (128-bit effective security post-Grover, well above feasibility thresholds). OPC UA defines security profiles with fixed algorithm combinations that cannot be renegotiated without specification revision.
In most brownfield OT environments, adding PQC at the industrial protocol layer is not a practical first move. Some protocols have security extensions, but PQC support is not generally available in deployed PLC/RTU firmware, and certification, interoperability, and lifecycle constraints make protocol-layer migration a long standards-and-refresh problem. As I have written in the context of crypto-agility architecture, any protocol with fixed algorithm identifiers is structurally incompatible with agility until the protocol specification itself is revised.
What you can do is migrate the network boundary. If OT traffic crosses an IPsec tunnel between sites (which is common in distributed utility, manufacturing, and pipeline environments), migrating that tunnel to hybrid PQC key exchange protects all encapsulated industrial protocol traffic. The PLCs do not need to change. The SCADA software does not need to change. The IPsec gateway handles the quantum protection at the perimeter.
This is precisely the single-layer confidentiality principle in action. One Q-Safe layer at the boundary protects everything inside it, including protocol traffic that could never be upgraded directly.
Authentication gap: OT environments face the worst authentication migration timeline in any sector. Industrial device certificates (if they exist at all) are often embedded in firmware that cannot be updated. Many OT authentication mechanisms use symmetric keys that are not Shor-vulnerable, which provides some protection but not complete authentication integrity. The CRQC Quantum Capability Framework timeline suggests that organizations have years, not months, before active authentication attacks become feasible, but the OT remediation timeline is also measured in years (or decades for embedded devices). Start the inventory now.
Scenario 5: Email with Transport and End-to-End Encryption
The stack: Email client → S/MIME or PGP encryption (application layer) → SMTP with STARTTLS between mail servers → recipient mail server → recipient client.
Active cryptographic layers: Application-layer end-to-end encryption (S/MIME or PGP) and transport-layer TLS between mail servers.
The best first layer: Application-layer encryption (S/MIME or PGP).
Email is the classic HNDL target. Diplomatic communications, M&A discussions, legal privileged correspondence, medical records. Encrypted email captured in transit today could be decrypted in a decade with a CRQC. The exposure window is as long as the data’s sensitivity lifetime.
If your organization uses S/MIME, the CA/B Forum has already added ML-DSA and ML-KEM algorithm specifications to the S/MIME Baseline Requirements (via ballot SMC013), initially to enable experimentation with PQC certificates. The harder constraint is not the abstract permission to encode those algorithms; it is real-world CA issuance, root-program policy, client support, CMS/S/MIME interoperability, and enterprise PKI readiness. Organizations running their own PKI for S/MIME can issue hybrid PQC certificates (combining ML-DSA with ECDSA) for internal use now, ensuring backward compatibility while adding quantum protection. Note that publicly trusted TLS server certificates remain a different story: the Server Certificate Baseline Requirements currently permit only RSA and ECDSA.
For OpenPGP users, PQC support is moving through the standards and early-implementation pipeline (the OpenPGP PQC draft specifies composite ML-KEM+ECDH encryption and ML-DSA+EdDSA signatures, with Sequoia providing pre-release support), but it is not yet a mainstream interoperable deployment option.
The interim strategy: even without PQC at the email application layer, migrating SMTP TLS between mail servers to hybrid PQC key exchange protects email in transit on the server-to-server segment. This does not protect against HNDL adversaries who capture mail at rest on either server, but it does protect the transport channel. It is a partial measure, and for high-sensitivity communications, it is insufficient. But it is achievable now for organizations that control their mail infrastructure.
Authentication gap: S/MIME signing certificates use RSA or ECDSA. A quantum adversary could forge signed emails, attributing fabricated content to legitimate senders. For legal, financial, and regulatory communications where email signatures carry evidentiary weight, this is a material risk that organizations should factor into their authentication migration planning.
Scenario 6: API-Driven Financial Services Architecture
The stack: Mobile app or partner system → TLS to API gateway → API gateway → mTLS to backend microservices → encrypted database connections → HSM for signing and key management.
Active cryptographic layers: External TLS, internal mTLS, database TLS, HSM-based signing.
The best first layer: The API gateway’s external TLS termination.
Financial services process some of the highest-value data subject to HNDL risk: transaction records, account credentials, customer PII, trading algorithms, regulatory filings. API gateways are natural choke points. Every external API call flows through them, and most organizations operate a small number of gateway instances (or use a managed service like AWS API Gateway, Azure API Management, or Kong).
Migrating the API gateway to hybrid PQC TLS is the necessary first step, but achieving quantum-safe confidentiality on inbound connections requires both sides of the handshake to support PQC. Every mobile app, partner integration SDK, and client library must be updated to send a PQC key share in their TLS ClientHello. Until clients are updated, those connections remain classical. For organizations controlling their own mobile apps, this is a coordinated app release and gateway upgrade. For partner integrations, it requires ecosystem coordination. AWS has begun deploying ML-KEM-based hybrid key establishment across key service endpoints, including KMS, S3, and CloudFront, and API Gateway offers TLS policies that can leverage post-quantum cryptography.
For organizations running their own API gateways (Kong, Nginx, Envoy-based), the upgrade path depends on the underlying TLS library. OpenSSL 3.5+ provides native support for standardized PQC algorithms (ML-KEM, ML-DSA, SLH-DSA) and hybrid schemes; older OpenSSL 3.x deployments can use the OQS provider for testing and transition; BoringSSL-based stacks can use ML-KEM where Google’s implementation is available. The engineering effort is real but bounded, and it is concentrated in a small number of well-understood systems.
What about the HSM? Hardware security modules present a harder problem. HSMs used for transaction signing, key wrapping, and certificate issuance may not support PQC algorithms yet. HSM vendor PQC roadmaps vary widely, and hardware refresh cycles are long. HSMs are a critical authentication dependency that cannot be addressed by single-layer confidentiality migration. Plan for this as a separate workstream with its own timeline, and pressure your HSM vendor for a concrete PQC delivery date.
The Authentication Tax
Across all six scenarios, a pattern repeats: one layer handles confidentiality; authentication resists shortcuts. The composition rule is unforgiving. Every public-key authentication point is an independent attack surface.
But the urgency profile differs. HNDL is a passive attack happening today. An adversary captures your encrypted traffic now, stores it, and decrypts it when a CRQC arrives. Authentication attacks require a CRQC operating in real time. The adversary must forge credentials during the connection, not after the fact. This means confidentiality migration is more urgent for most organizations, because the damage from HNDL is being accumulated right now.
Authentication migration, however, has a longer lead time. Certificate infrastructure, PKI hierarchies, HSM firmware, protocol updates, vendor dependencies. The crypto-agility architecture article covers why these dependencies compound, and why starting the authentication inventory now (even if deployment is years away) avoids a compressed migration window later.
For practitioners weighing these trade-offs, the Trust Now, Forge Later analysis explains why authentication migration, while less urgent in the HNDL sense, carries higher consequences when it becomes exploitable. A forged signing key can cause active harm; a decrypted historical message is a confidentiality breach. Both matter, but they demand different response timelines. And some authentication artifacts have long-lived consequences that blur the urgency distinction: code signing certificates, firmware signing keys, root CA hierarchies, and document signatures all create trust relationships that persist for years or decades after the signing event. “Less urgent than HNDL” should not be read as “safe to postpone.”
Putting It Together: A Decision Tree
For security architects planning their first PQC migration, the process reduces to four steps:
Step 1: Map your protocol stack for each major data flow. What cryptographic layers exist between the data source and its destination? Which layers use public-key key exchange (Shor-vulnerable) and which use symmetric operations (Grover-reduced at worst)? The CBOM (Cryptographic Bill of Materials) approach provides a structured methodology for this inventory.
Step 2: Identify the most centralized migration point. Which layer can you update once and protect the most traffic? Look for proxies, gateways, VPN concentrators, service mesh control planes. Avoid layers that require per-device or per-endpoint changes for your first move.
Step 3: Check vendor PQC readiness. Can your SASE provider, cloud platform, VPN vendor, or service mesh enable hybrid PQC today? Some already can. If your most centralized layer depends on a vendor that is not PQC-ready, move to the next-best candidate rather than waiting.
Step 4: Deploy hybrid PQC and move on to the next layer. For internet-facing TLS and many transition scenarios, hybrid key exchange (classical plus post-quantum running in parallel) is the prevailing deployment pattern: it preserves classical compatibility while adding quantum protection. Hybrid adds implementation complexity and may be temporary, so evaluate it per protocol and vendor support rather than treating it as a universal default. Once the first layer is live, begin planning the authentication migration and the next confidentiality layer.
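The four steps above can be sketched as a toy triage: classify each layer, then pick the Q-Unsafe layer that protects the most traffic per change. The layer names, fields, and scope scores below are illustrative placeholders, not a real inventory schema (a production version would start from a CBOM).

```python
# Public-key primitives broken by Shor's algorithm (illustrative subset).
SHOR_VULNERABLE = {"RSA", "ECDHE", "ECDSA", "X25519"}

def classify(key_exchange: str, sym_bits: int) -> str:
    """Step 1: classify a layer. Public-key exchange is Shor-vulnerable;
    symmetric-only layers survive if key length beats Grover's halving."""
    if key_exchange in SHOR_VULNERABLE:
        return "Q-Unsafe"
    return "Q-Safe" if sym_bits >= 256 else "Q-Unsafe"

layers = [
    # (name, key exchange, symmetric key bits, endpoints one change protects)
    ("per-device TLS",    "ECDHE", 256, 1),
    ("VPN concentrator",  "ECDHE", 256, 5000),
    ("legacy PSK tunnel", "PSK",   256, 200),
]

# Steps 2-3: among Q-Unsafe layers, prefer the most centralized point
# (vendor readiness would filter this list further in practice).
candidates = [(name, scope) for name, kx, bits, scope in layers
              if classify(kx, bits) == "Q-Unsafe"]
best = max(candidates, key=lambda c: c[1])
print(best)  # ('VPN concentrator', 5000)
```

The symmetric PSK tunnel drops out of the candidate list entirely, which mirrors the deferral logic from the decision framework: spend the first migration on a Shor-vulnerable layer with maximum reach.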
This is the practical application of the PQC Migration Framework’s phased approach. The framework was designed for exactly this kind of iterative, priority-driven migration. Organizations that wait for a perfect, comprehensive plan will find themselves perpetually planning while their exposure grows.
The Deadlines Are Not Waiting
The reason to act now is not a prediction about when a CRQC will arrive. As I have argued repeatedly, the deadlines are already set by forces that do not depend on quantum hardware timelines. NIST’s draft IR 8547 sets out an expected transition path: quantum-vulnerable public-key algorithms at the 112-bit security level are deprecated after 2030, and all quantum-vulnerable public-key algorithms, including the stronger classical parameter sets, are disallowed after 2035. NSA’s CNSA 2.0 requires new National Security System acquisitions to be CNSA 2.0-compliant from 2027 (with software/firmware migration timelines beginning even earlier), and mandates CNSA 2.0 algorithms across most system types by the end of 2031. Google has committed to completing its migration by 2029. Regulators, insurers, and clients are building quantum readiness into their procurement requirements and risk assessments.
Every day of delay is another day of encrypted traffic accumulating in adversary storage. One PQC layer stops that accumulation for payload confidentiality. Identify your best first layer, deploy it, and then continue the broader migration from a position of meaningful protection rather than total exposure.
Quantum Upside & Quantum Risk - Handled
My company - Applied Quantum - helps governments, enterprises, and investors prepare for both the upside and the risk of quantum technologies. We deliver concise board and investor briefings; demystify quantum computing, sensing, and communications; craft national and corporate strategies to capture advantage; and turn plans into delivery. We help you mitigate the quantum risk by executing crypto-inventory, crypto-agility implementation, PQC migration, and broader defenses against the quantum threat. We run vendor due diligence, proof-of-value pilots, standards and policy alignment, workforce training, and procurement support, then oversee implementation across your organization. Contact me if you want help.